Laboratory Productivity and the Rate of Manual Peripheral Blood Smear Review: A College of American Pathologists Q-Probes Study of 95 141 Complete Blood Count Determinations Performed in 263 Institutions
Novis, David A
Context.-Automated laboratory hematology analyzers are capable of performing differential counts on peripheral blood smears with greater precision, and with more accurate detection of distributional and morphologic abnormalities, than manual examination of blood smears. Manual determinations of blood morphology and leukocyte differential counts are time-consuming and expensive, and may not always be necessary. The frequency with which hematology laboratory workers perform manual screens despite the availability of labor-saving features of automated analyzers is unknown.
Objective.-To determine the normative rates with which manual peripheral blood smears were performed in clinical laboratories, to examine laboratory practices associated with higher or lower manual review rates, and to measure the effects of manual smear review on the efficiency of generating complete blood count (CBC) determinations.
Design.-From each of 3 traditional shifts per day, participants were asked to select serially 10 automated CBC specimens and to indicate whether manual scans and/or reviews with complete differential counts were performed on blood smears prepared from those specimens. Sampling continued until a total of 60 peripheral smears were reviewed manually. For each specimen on which a manual review was performed, participants indicated the patient's age, hemoglobin value, white blood cell count, platelet count, and the primary reason why the manual review was performed. Participants also submitted data concerning their institutions' demographic profiles and their laboratories' staffing, work volume, and practices regarding CBC determinations. The rates of manual reviews and estimations of efficiency in performing CBC determinations were obtained from the data.
Setting.-A total of 263 hospitals and independent laboratories, predominantly located in the United States, participating in the College of American Pathologists Q-Probes Program.
Results.-There were 95 141 CBC determinations examined in this study; participants reviewed 15 423 (16.2%) peripheral blood smears manually. In the median institution (50th percentile), manual reviews of peripheral smears were performed on 26.7% of specimens. Manual differential count review rates were inversely associated with the magnitude of platelet counts that were required by laboratory policy to trigger smear reviews and with the efficiency of generating CBC reports. Lower manual differential count review rates were associated with laboratory policies that allowed manual reviews solely on the basis of abnormal automated red cell parameters and that precluded performing repeat manual reviews within designated time intervals. The manual scan rate increased with increasing numbers of hospital beds. In more than one third (35.7%) of the peripheral smears reviewed manually, participants claimed to have learned additional information beyond what was available on automated hematology analyzer printouts alone.
Conclusion.-By adopting certain laboratory practices, it may be possible to reduce the rates of manual reviews of peripheral blood smears and increase the efficiency of generating CBC results.
(Arch Pathol Lab Med. 2006;130:596-601)
Automated laboratory hematology analyzers are capable of performing leukocyte differential counts and erythrocyte morphologic evaluations with greater precision and accuracy than those available by manual screening of blood smears.1,2 Manual determinations of blood morphology and differential leukocyte counts are time-consuming and expensive.3-7 Performing differential counts manually when automated determinations yield identical results may undermine efficiency and lower productivity in a medical laboratory. The frequency with which hematology laboratory workers perform manual reviews of peripheral blood smears despite the availability of labor-saving features of automated analyzers is unknown.
Since 1989, the College of American Pathologists Q-Probes program has conducted multi-institutional studies that have determined a broad range of performance benchmarks in anatomic pathology and laboratory medicine.8-10 Participants in these studies, representing the entire spectrum of practice styles worldwide, have been able to compare their performances with those of their peers. Previous Q-Probes studies have examined efficient utilization of many laboratory services.11-25 In this Q-Probes study, we assessed the normative rates with which manual peripheral blood smears were performed in clinical laboratories, and examined hospital and laboratory practices associated with lower or higher rates.
MATERIALS AND METHODS
Definitions of Terms
Automated complete blood count (CBC): Any CBC performed on an automated analyzer (with or without a white blood cell [WBC] differential count), whether or not a peripheral smear is prepared or reviewed.
Autoverification: A process in which the automatically measured CBC results (with or without a WBC differential count) are accepted, validated, and released without any form of human review.
Delta criteria: A specified interval change expressed in either units of measure or a percent of change that triggers or suppresses reflexive action.
Flag: Any automated instrument failure, distribution, or morphologic parameter that can be set in such a manner that a test result value falling outside the range of those parameters will generate an operator message drawing attention to that “flagged” value and giving the instrument operator an opportunity to review the result before verifying it.
Full-time equivalent: A unit of manpower equal to 40 hours per week or 2080 hours per year of employment. For the purpose of this study, calculations of total full-time equivalents included all hours worked by staff on all work shifts to perform CBC and manual peripheral smear reviews.
Manual differential count: An examination of a peripheral smear in which a technologist reviews the peripheral smear microscopically and performs a formal leukocyte differential count.
Manual peripheral smear review (also referred to as manual review): A manual scan of a peripheral smear and/or a manual differential count.
Manual scan of peripheral smear: A cursory examination of a peripheral blood smear performed microscopically for a narrow purpose (such as verifying a platelet count), but for which detailed evaluation of morphologic findings and differential count are not performed.
Complete blood count efficiency (also known as productivity): The yearly number of CBC determinations performed (including those determinations in which manual smear reviews were performed) in each institution divided by the number of full-time equivalents required to produce them.
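The productivity metric defined above reduces to a simple ratio of annual CBC volume to full-time equivalents. A minimal sketch of the calculation (the figures in the usage example are illustrative only, not taken from the study):

```python
def cbc_efficiency(annual_cbc_count: int, total_hours_worked: float) -> float:
    """CBC determinations produced per full-time equivalent (FTE).

    One FTE = 2080 hours/year, per the study's definition; hours worked
    include all shifts spent performing CBC determinations and manual
    peripheral smear reviews.
    """
    HOURS_PER_FTE = 2080
    ftes = total_hours_worked / HOURS_PER_FTE
    return annual_cbc_count / ftes

# Illustrative values (not study data): a laboratory producing
# 120,000 CBCs/year with 10,400 staff-hours (5 FTEs) devoted to them.
print(cbc_efficiency(120_000, 10_400))  # 24000.0 CBC determinations per FTE
```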
Laboratory personnel in institutions enrolled in the College of American Pathologists Q-Probes program for the third quarter of 2004 participated in this study. The study was conducted and the data were handled in a manner similar to that previously described.8 On their enrollments in the Q-Probes program, participants from each institution submitted certain demographic information including their institutions’ geographic locations, community classifications (urban, suburban, rural), teaching status, residency program status, number of occupied beds, and both hospital and laboratory accreditation status.
Participants were asked to select serially 10 automated CBC specimens from each of 3 traditional shifts and to indicate whether manual scans and/or reviews with complete differential counts were performed on blood smears prepared from those specimens. Sampling continued until a total of 60 peripheral smears were reviewed manually. Participants were asked to include tests that may have been ordered singly or as part of panels that included other laboratory tests. Participants submitting fewer than 25 blood smear reviews were not included in the analysis.
The CBC specimens received from all hospital locations (ie, emergency departments, inpatient units, outpatient drawing stations) were included. For each specimen on which a manual review was performed, participants indicated the patient’s age, hemoglobin value, WBC count, platelet count, and the primary reason for which the manual review was performed. The total manual peripheral blood review rate, the manual scan review rate, and the manual differential count rate were determined for each participating institution.
We attempted to approximate the added value gained in performing manual reviews of peripheral blood smears on specimens for which cellular morphologic and differential information was already available from automated hematology analyzers. We asked participants to indicate whether they believed they had gained new information about patients not otherwise generated by the automated instruments. We did not inquire as to the nature or the gravity of this information. This study was not intended to compare the clinical efficacy of automated and manual blood examinations or to associate blood examinations with patient outcomes.
To approximate the magnitude of efficiency gained in producing CBC results when the components of peripheral smear reviews were removed from the process, levels of productivity for performing CBCs were measured and then compared with the manual smear review rates in each participating institution. We tested whether there was a linear relationship between the 2 variables, and considered P ≤ .05 to be statistically significant.
We evaluated the effects of various practice characteristics on the rates of manual peripheral smear reviews. By completing detailed questionnaires, participants indicated whether
* autoverification of CBC results were used routinely in their laboratories;
* automated hematology analyzers included features that allowed technicians to set discrete parameters (eg, flags) that alerted them to perform manual reviews of peripheral smears;
* laboratory policies allowed clinicians to order manual WBC differential counts regardless of whether automated counts were available;
* manual WBC differential counts that were ordered by clinicians were performed even when differential counts were reported as normal (ie, not flagged on automated hematology analyzers);
* manual reviews were performed only during certain shifts or days of the week;
* performance of manual reviews was based on specific patient demographic information (eg, age);
* laboratory policies contained provisions that precluded performing repeat manual reviews within designated time intervals;
* laboratory policies contained provisions that precluded performing repeat manual differential counts when designated delta criteria were not met.
If a participant failed to answer a question, for any of the mentioned practice characteristics, that participant's data were excluded from the database for that question only. We used Wilcoxon and Kruskal-Wallis tests to assess differences in manual review rates among the various practice and demographic variables of the participants. We considered P ≤ .05 to be statistically significant. Data from institutions in which fewer than 25 cases were submitted were not included in this analysis.
RESULTS

Laboratory personnel representing 263 institutions in 45 states (254), Canada (6), Australia (1), Saudi Arabia (1), and South Korea (1) submitted data for this study. Most (76.0%) of the hospitals were inspected by the Joint Commission on Accreditation of Healthcare Organizations and most (82.0%) of the laboratories were inspected by the College of American Pathologists. Table 1 shows other demographic characteristics of institutions represented by study participants. About three fourths (74.5%) of these institutions contained 300 or fewer occupied beds.
Table 2 shows the selected laboratory practices of institutions represented in this study. For instance, in most (91.8%) institutions, hematology analyzers were set to indicate specific values triggering manual reviews.
Table 3 shows the percentile distribution of manual blood smear reviews performed at 263 institutions in which this study was conducted. For instance, at one extreme, in 10% of participating institutions (10th percentile or lower), manual scans were performed on 0.8% or fewer blood specimens submitted for CBC determinations. At the other extreme, in 10% of participating institutions (90th percentile or higher) manual scans were performed on 23.6% or more blood specimens submitted for CBC determinations. In the median institution, of all specimens submitted for CBC determinations, manual scans were performed on 9.1%, manual differentials were performed on 14.7% and either one or both manual procedures were performed on 26.7% of the specimens.
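A percentile summary of institutional review rates, of the kind shown in Table 3 (10th, 50th, and 90th percentiles), can be computed as follows (the rates below are synthetic, for illustration only):

```python
import statistics

# Synthetic institutional manual-scan rates (%); not the study's data.
scan_rates = [0.5, 0.8, 2.1, 4.0, 6.3, 9.1, 11.4, 15.0, 23.6, 30.2]

# quantiles() with n=10 returns the 9 cut points at the
# 10th, 20th, ..., 90th percentiles of the distribution.
deciles = statistics.quantiles(scan_rates, n=10)
p10, median, p90 = deciles[0], deciles[4], deciles[8]

print(p10, median, p90)  # low-extreme, median, and high-extreme institutions
```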
Table 4 shows the median manual scan and leukocyte differential rates for 5 groups according to number of occupied hospital beds that were represented by participants in this study. Manual scan rates tended to increase with increasing number of occupied hospital beds (P = .03). There was no association between number of occupied hospital beds and manual leukocyte differential count rates.
Participants submitted data collected from 95 141 automated CBC specimens. Manual reviews were performed on 15 423 (16.2%) peripheral blood smears, including 6147 (6.5%) scans and 9276 (9.7%) differential counts. Table 5 lists the reasons why manual reviews of smears were performed. For instance, 4902 (36.7%) smears were reviewed manually because WBC values fell outside the ranges of acceptance criteria preset on automated hematology analyzers. Similarly, 487 smears (3.7%) were reviewed at the request of clinicians who desired smear reviews regardless of automated analyzer findings. Participants indicated that some new information was learned from reviewing 5471 (35.7%), and no new information was learned from reviewing 9875 (64.3%), blood smears.
Table 6 displays the distribution of parameters for WBC, platelet, and red blood cell threshold triggers that prompt manual reviews of peripheral blood smears in laboratories represented by participants of this study.
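The threshold triggers summarized in Table 6 amount to range checks on each automated parameter. A minimal sketch of such flagging logic (the cutoff values below are hypothetical placeholders; actual thresholds varied widely among participating laboratories):

```python
# Hypothetical trigger thresholds for illustration only; the study found
# a wide range of cutoffs in use across laboratories (Table 6).
THRESHOLDS = {
    "wbc":      (1.5, 20.0),   # x10^9/L
    "platelet": (100, 600),    # x10^9/L
    "hgb":      (7.0, 18.0),   # g/dL
}

def needs_manual_review(results: dict) -> bool:
    """Flag a CBC for manual smear review if any measured parameter
    falls outside its preset acceptance range."""
    for param, (low, high) in THRESHOLDS.items():
        value = results.get(param)
        if value is not None and not (low <= value <= high):
            return True
    return False

print(needs_manual_review({"wbc": 35.0, "platelet": 250, "hgb": 13.5}))  # True
print(needs_manual_review({"wbc": 8.2, "platelet": 250, "hgb": 13.5}))   # False
```

Raising a cutoff (eg, the platelet count that triggers review) shrinks the flagged fraction, which is consistent with the study's finding that higher platelet thresholds were associated with lower manual differential count rates.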
Table 7 shows the relationship between laboratory productivity and manual leukocyte differential count rates. Lower manual differential count review rates were associated with higher productivity ratios.
Table 8 shows laboratory practices significantly associated with lower rates of manual leukocyte differential counts. For instance, the median institutional rate with which manual differential counts were performed on peripheral smears was 13.4% when laboratory policies required that manual blood reviews be performed even if autoanalyzers indicated that only red cell parameters were abnormal, compared with a median institutional rate of 17.0% when no such policies existed. These and other laboratory practices assessed were not associated with manual scan rates (data not shown).
COMMENT

The College of American Pathologists Q-Probes studies are designed to benchmark performance in anatomic pathology and laboratory medicine, and to evaluate the effects of laboratory and institutional practices on those benchmarks.8 Unlike most published investigations that describe the experiences within individual institutions, many of which are academic and/or presumably have homogeneous practice environments, Q-Probes studies reflect the daily experiences of a large, heterogeneous group of hospitals and private laboratories that serve diverse community populations and that vary widely in their laboratory practices (Tables 1 and 2). By having the normative rates of selected parameters of quality available to them, participants in Q-Probes studies are able to derive benchmarks of quality that they believe are appropriate to apply in their own communities.
This Q-Probes study was designed to examine the efficiency with which certain laboratory services were provided. Specifically, we determined the normative rates with which peripheral blood smears were examined manually in laboratories using hematology analyzers capable of providing similar or equivalent information automatically. The review rates were meant to be indicators of efficiency of generating work, not measures of the quality of the work produced or their contributions to the quality of patient care. Therefore, we did not correlate smear review rates with patient outcomes, nor did we attempt to determine whether lower or higher review rates indicated better or worse laboratory performance. We did not correlate review rates with various hematologic parameters that may have been available on different models of analyzers.
The rates of manual review varied considerably among participants (Table 3). Of the 263 laboratories represented in this study, participants in the median institution performed manual reviews of peripheral blood smears on about one fourth (26.7%) of all specimens submitted for CBC determinations. In the 10% of institutions in which the lowest percentage of peripheral blood smears were reviewed manually, participants examined less than 10% (9.9% or less) of specimens submitted for CBC testing. Among the 10% of institutions in which the largest percentage of smears were reviewed manually, participants examined at least half (50.0% or more) of the specimens submitted for CBC testing.
The rates of manual scan of peripheral blood smears increased as the number of occupied beds of the hospitals represented in this study increased (Table 4). Perhaps patients who are admitted to larger hospitals have illnesses of greater severity, which may generate greater numbers of abnormal CBC results, and hence require additional manual examination.
More than 80% (80.7%) of all manual smear reviews in this study were triggered by hematology analyzer flags indicating hematologic values exceeding preset instrument limits of normality (Table 5). These preset threshold limits varied widely among participants (Table 6). We did not inquire as to other threshold limits that participants may have used to trigger manual reviews. We did not determine the clinical validity of these thresholds, nor did we attempt to compare them with recommended threshold triggers published elsewhere.26,27
Participants indicated that more than one third (35.7%) of the manually reviewed smears provided clinicians with new information (Table 5). We did not inquire as to the content or importance of this information, nor did we attempt to establish the validity of these impressions. Such inquiries would have required expanding this study beyond the limits of what a single Q-Probes study is capable of accomplishing. Nevertheless, we believed that asking this question even in vague terms provided a general sense of the relative merits of supplementing hematology autoanalyzer results with those of manual slide reviews. We did not ask the users of those test results whether they thought they also learned new information from manual slide reviews. We believe that documenting whether the users of CBC results agree with the providers of those results that supplemental manual slide reviews offer them additional helpful information is a crucial first step in deciding what laboratory services should and should not be eliminated in their institutions.
We examined the relationship between performing manual examinations on peripheral blood smears and the efficiency of generating CBC results. Laboratory productivity was related inversely to manual differential count review rates: the higher the manual differential count rate, the lower the productivity (Table 7). Because number of occupied hospital beds, presumably each with proportionately increased or decreased laboratory testing volume, was not associated with the rate of manual leukocyte differential counting, we believe that this productivity was not associated with laboratory testing workloads. We did not measure the effects of all laboratory practices on the efficiency of performing CBCs. Although it is possible that certain laboratory practices about which we did not inquire may have augmented or undermined efficiency in some participating laboratories, we reasoned that it would be unlikely that those practices would overshadow the effects of processes as intensively labor-consuming as performing manual slide examinations.
Three practice variables about which we inquired were associated significantly with lower rates of manual differential count review (Table 8). The rates with which peripheral smears were reviewed manually for the purpose of performing leukocyte differential counts decreased as the upper thresholds of automated platelet counts triggering smear reviews increased (P = .003). These rates were lower in hospitals in which laboratory policies allowed smear reviews to be performed if instrument flags were raised when related only to red blood cell (but not other hematologic) abnormalities (P = .002), and only after intervals following the reporting of previous manual reviews had elapsed (P = .004), compared with hospitals in which there were no such policies. The rates with which peripheral smears were reviewed microscopically for purposes other than performing differential counts were not associated significantly with any of the practice variables about which we inquired.
These findings suggest that increasing the efficiency of generating CBC results may be accomplished both by setting instrument threshold triggers and developing internal laboratory policies that limit the number of manual peripheral blood reviews performed in laboratories. Because only 3.7% of smear reviews were performed at the requests of physicians (Table 5), establishing policies that restrict clinicians’ abilities to order manual reviews regardless of automated analyzer results may not be the most efficient method by which to reduce workloads in hematology laboratories.
The authors would like to thank Kimberly M. O’Donnell for her editorial contributions.
1. Pierre RV. Peripheral blood film review: the demise of the eyecount leukocyte differential. Clin Lab Med. 2002;22:279-297.
2. Buttarello M, Gadotti M, Lorenz C, et al. Evaluation of four automated hematology analyzers: a comparative study of differential counts (imprecision and inaccuracy). Am J Clin Pathol. 1992;97:345-352.
3. Groner W, Simson E. Practical Guide to Modern Hematology Analyzers. New York, NY: John Wiley & Sons; 1995:188-197.
4. National Committee for Clinical Laboratory Standards. Reference Leukocyte Differential Count (Proportional) and Evaluation of Instrument Methods. NCCLS document H20-A. Villanova, Pa: National Committee for Clinical Laboratory Standards; 1992.
5. Davis CM. Auto-verification of the peripheral blood count. Lab Med. 1994; 25:528.
6. Peterson P, Blomberg DJ, Rabinovitch A, Cornbleet PJ. Physician review of the peripheral blood smear: when and why: an opinion. Lab Hematol. 2001;7:175-179.
7. Lantis KL, Harris RJ, Davis G, Renner N, Finn WG. Elimination of instrument-driven reflex manual differential leukocyte counts: optimization of manual blood smear review criteria in a high-volume automated hematology laboratory. Am J Clin Pathol. 2003;119:656-662.
8. Howanitz PJ. Quality assurance measurements in departments of pathology and laboratory medicine. Arch Pathol Lab Med. 1990;114:1131-1135.
9. Lawson NS, Howanitz PJ. The College of American Pathologists, 1946-1996: quality assurance service. Arch Pathol Lab Med. 1997;121:1000-1008.
10. Schifman RB, Howanitz PJ, Zarbo RJ. Q-Probes: a College of American Pathologists benchmarking program for quality management in pathology and laboratory medicine. Adv Pathol Lab Med. 1996;9:83-120.
11. Howanitz PJ, Steindel SJ. Digoxin therapeutic drug monitoring practices: a College of American Pathologists Q-Probes study of 666 institutions and 18 679 toxic levels. Arch Pathol Lab Med. 1993;117:684-690.
12. Valenstein PN, Howanitz PJ. Ordering accuracy: a College of American Pathologists Q-Probes study of 577 institutions. Arch Pathol Lab Med. 1995;119:117-122.
13. Jones BA, Meier FA, Howanitz PJ. Complete blood count specimen acceptability: a College of American Pathologists Q-Probes study of 703 laboratories. Arch Pathol Lab Med. 1995;119:203-208.
14. Valenstein P, Pfaller M, Yungbluth M. The use and abuse of stool microbiology: a College of American Pathologists Q-Probes study of 601 institutions. Arch Pathol Lab Med. 1996;120:206-211.
15. Nakhleh RE, Zarbo RJ. Surgical pathology specimen identification and accessioning: a College of American Pathologists Q-Probes study of 1,004,115 cases from 417 institutions. Arch Pathol Lab Med. 1996;120:227-233.
16. Valenstein P, Schifman RB. Duplicate laboratory orders: a College of American Pathologists Q-Probes study of thyrotropin requests in 502 institutions. Arch Pathol Lab Med. 1996;120:917-921.
17. Schifman RB, Bachner P, Howanitz PJ. Blood culture quality improvement: a College of American Pathologists Q-Probes study involving 909 institutions and 289,572 blood culture sets. Arch Pathol Lab Med. 1996;120:999-1002.
18. Jones BA, Calam RR, Howanitz PJ. Chemistry specimen acceptability: a College of American Pathologists Q-Probes study of 453 laboratories. Arch Pathol Lab Med. 1997;121:19-26.
19. Valenstein P, Meier F. Urine culture contamination: a College of American Pathologists Q-Probes study of contaminated urine cultures in 906 institutions. Arch Pathol Lab Med. 1998;122:123-129.
20. Schifman RB, Strand CL, Meier FA, Howanitz PJ. Blood culture contamination: a College of American Pathologists Q-Probes study involving 640 institutions and 497,134 specimens from adult patients. Arch Pathol Lab Med. 1998;122:216-221.
21. Valenstein PN, Meier F. Outpatient order accuracy: a College of American Pathologists Q-Probes study of requisition order entry accuracy in 660 institutions. Arch Pathol Lab Med. 1999;123:1145-1150.
22. Novis DA, Dale JC, Schifman RB, Ruby SG, Walsh MK. Solitary blood cultures: a College of American Pathologists Q-Probes study of 132,778 blood culture sets in 333 small hospitals. Arch Pathol Lab Med. 2001;125:1285-1289.
23. Novis DA, Renner S, Friedberg R, Walsh M, Saladino A. Quality indicators of blood utilization: three College of American Pathologists Q-Probes studies of 12,288,404 red blood cell units in 1,639 hospitals. Arch Pathol Lab Med. 2002; 126:150-156.
24. Dale JC, Ruby SG. Specimen collection volumes for laboratory tests: a College of American Pathologists study of 140 laboratories. Arch Pathol Lab Med. 2003;127:162-168.
25. Friedberg RC, Jones BA, Walsh M. Type and screen completion for scheduled surgical procedures: a College of American Pathologists Q-Probes study of 8941 type and screen tests in 108 institutions. Arch Pathol Lab Med. 2003;127: 533-540.
26. International Consensus Group for Hematology Review, International Society for Laboratory Hematology. Suggested criteria for action following automated CBC and WBC differential analysis. Available at: http://www.islh.org/2004/ Committees/ConsensusGroup/CGICGHReview.htm. Accessed April 3, 2006.
27. Smith N, Rosenfeld D, Watman R. Hematology autovalidation system. Lab Hematol. 1999;5:52-55.
David A. Novis, MD; Molly Walsh, PhD; David Wilkinson, MD; Mary St. Louis, MT; Jonathon Ben-Ezra, MD
Accepted for publication December 7, 2005.
From the Department of Pathology, Wentworth Douglass Hospital, Dover, NH (Dr Novis); College of American Pathologists, Northfield, Ill (Dr Walsh); Department of Pathology, School of Medicine, Medical College of Virginia (Drs Wilkinson and Ben-Ezra) and the Hematology Laboratory (Ms St. Louis and Dr Ben-Ezra), Virginia Commonwealth University, Richmond. Dr Novis is now a trustee of Wentworth Douglass Hospital, Dover, NH, and a self-employed health care consultant.
The authors have no relevant financial interest in the products or companies described in this article.
Reprints: David A. Novis, MD, 18 Toon Ln, Lee, NH 03824 (e-mail: email@example.com).
Copyright College of American Pathologists May 2006