Aquatic Insect Identifications

Identification of macroinvertebrate samples from Sublette County, WY is progressing at a faster pace with the addition of Esmeralda to our team.  She is a meticulous sorter with a sorting efficiency of 97–100%!  This is impressive to me because some of the other companies I have worked for have their sorters strive for 90%.  The idea is that if sorters remove 90% or more of the specimens on the first pass, the sample passes the Quality Assurance standards of most bioassessment programs, BUT if they exceed that mark by “too much,” they are spending too much time on a sample.  Since laboratory work is usually conducted at a fixed price (per sample, regardless of how long it takes), one way to increase the profit margin is to ensure that employees spend as little time on each sample as possible. I wonder, though, if it is truly more cost effective to have the sorters aim a little lower.

For example, suppose a sorting technician speeds through a sample, knowingly missing a few specimens while aiming for 90% efficiency, but actually sorts only 70% of the insects. The sample would fail the QA/QC check and need to be re-sorted.  If the rechecked sample only reaches 88%, the entire processed portion needs to be sorted… again.  Personally, I don’t think this approach would work well in my lab.  I think it is more cost effective to take 20% longer and aim 10% higher (aim for 100%) than it is to retrieve the sample from storage, re-sort it, and amend the data later, even if you only have to do that for a small portion of the samples.  But then, we are a small-capacity laboratory, and our infrastructure feels better suited to minimizing re-sorts.  I think it is a fairly valid assumption that a sample re-sorted to 98% efficiency is just as good as a sample sorted to 98% efficiency the first time, so it is really about how labs handle logistics, not about data quality. So, Esmeralda, keep up the highly efficient sorting; it is a good fit here!
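The trade-off can be sketched with a back-of-envelope calculation. All the numbers below are hypothetical illustrations (a baseline sort time, the 20% premium for aiming high, a failure rate for fast sorting, and the overhead of pulling a sample from storage), not measurements from any lab:

```python
# Back-of-envelope comparison of two sorting strategies.
# All constants are hypothetical illustrations, not lab measurements.

BASE_SORT_HOURS = 2.0        # time to sort one sample while aiming for ~90%
CAREFUL_PREMIUM = 0.20       # aiming for 100% takes ~20% longer
RESORT_OVERHEAD_HOURS = 1.0  # retrieve from storage, set up, amend data
FAIL_RATE_FAST = 0.15        # fraction of "fast" samples that fail QA/QC

def expected_hours_fast(n_samples: int) -> float:
    """Fast sorting: cheaper per sample, but failed samples must be re-sorted."""
    resorts = n_samples * FAIL_RATE_FAST
    return n_samples * BASE_SORT_HOURS + resorts * (BASE_SORT_HOURS + RESORT_OVERHEAD_HOURS)

def expected_hours_careful(n_samples: int) -> float:
    """Careful sorting: ~20% longer per sample, but essentially no re-sorts."""
    return n_samples * BASE_SORT_HOURS * (1 + CAREFUL_PREMIUM)

print(f"fast:    {expected_hours_fast(100):.0f} h")
print(f"careful: {expected_hours_careful(100):.0f} h")
```

With these made-up inputs the careful strategy comes out slightly ahead (240 h vs. 245 h over 100 samples), and that is before counting storage retrieval logistics or the hassle of amending data; the point is only that a modest failure rate can eat the time saved by sorting fast.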

I just realized that some readers may not know about the two standard types of Quality Assurance measures applied to benthic macroinvertebrate sample processing: sorting efficiency and subsampling consistency.  We just discussed sorting efficiency (above). It is the number of specimens the original sorter found relative to the actual number in the sample. To calculate this number, one person sorts the sample and removes all the specimens they find. Later, another investigator examines the sample and removes all the specimens that remain.  If the first person found 90 critters, and the second found 10, the first sorter’s efficiency would be 90%.
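That calculation is simple enough to write down directly (the function name here is just mine, not from any standard):

```python
def sorting_efficiency(first_pass: int, recheck: int) -> float:
    """Percent of specimens removed on the first sort.

    first_pass: specimens the original sorter removed
    recheck:    additional specimens the second investigator found
    """
    total = first_pass + recheck
    return 100.0 * first_pass / total

# The example from the text: 90 found on the first sort, 10 on the recheck.
print(sorting_efficiency(90, 10))  # 90.0
```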

Sorting efficiency is a measure of the completeness of the sorting effort by the laboratory’s staff and may indicate the need for corrective action, whereas “subsampling consistency” describes an inherent characteristic of the sample’s composition: its clumpiness. Most bioassessment samples are not completely sorted; they are usually subsampled. So, if 25% of a sample was sorted to reach the SOP’s target number of organisms (100, 200, 300, 500, or 1,000), then another equal portion of the sample (25%) would be analyzed in the laboratory. Both the taxonomic composition and the total number of organisms are compared. Ideally, the composition of the two portions taken from the same sample would be very similar. However, in some instances specimens remain clumped together, and one subsample is quite different from another portion of the same sample. There is really nothing that can be done about this within the confines of the study design.  If you add the two subsamples together, the new sample represents twice as much effort as the other samples in the study and would violate several assumptions in the analysis. If you keep them separate, they violate other assumptions. Thus the number serves as a warning sign about the amount of variation within a sample.

Subsampling consistency involves as much work as processing a new sample, so it costs the same as an additional sample. As a result, most clients do not elect to perform this analysis on their benthic samples.  If a state agency routinely sends out 300 samples in a year, they would need to pay for 30 additional samples (~$9,000) to have subsampling consistency checks on 10% of their samples.  I can understand their desire to spend those funds collecting additional samples rather than describing an uncontrollable aspect of sample composition. The flip side is that if they assume the samples are 100% uniform and representative, some poor decisions can be made.
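The budget arithmetic above is easy to reproduce. The ~$300 per-sample price is my inference from the figures in the text ($9,000 for 30 extra samples), not a quoted rate:

```python
# QA budget arithmetic for subsampling-consistency checks.
# The per-sample price is inferred from the text's example, not a quoted rate.

def consistency_check_cost(n_samples: int, check_rate: float,
                           price_per_sample: float) -> tuple[int, float]:
    """Number of subsampling-consistency checks and their total cost."""
    n_checks = round(n_samples * check_rate)
    return n_checks, n_checks * price_per_sample

# The agency example from the text: 300 samples/year, checks on 10% of them.
checks, cost = consistency_check_cost(300, 0.10, 300.0)
print(checks, cost)  # 30 9000.0
```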

More on the effects of subsampling consistency later. Meanwhile, here is a thought question: Why do you think sorting efficiency matters?

