Laboratory Quality Control: What It Is and How It Works

Quality control in a laboratory is the practice of testing known reference materials alongside real patient samples to verify that results are accurate and consistent. It catches errors before they reach a patient’s report. Every clinical lab runs quality control checks daily, and in some cases every eight hours, to confirm that instruments, reagents, and processes are performing within acceptable limits.

How Quality Control Fits Into the Testing Process

Laboratory testing happens in three phases: pre-analytical (everything before analysis, like collecting and transporting a sample), analytical (the actual measurement), and post-analytical (interpreting and reporting results). Quality control primarily monitors the analytical phase, the moment when instruments measure what’s in a blood, urine, or tissue sample. The goal is straightforward: if a known control material with a predetermined value runs through the system and comes back with the expected result, the lab can trust that patient samples processed alongside it are also reliable.

But errors in the pre-analytical phase can undermine even the best analytical QC. Labs maintain detailed protocols for specimen handling that cover patient preparation (fasting overnight for at least 12 hours, avoiding exercise before collection), tourniquet duration, blood draw posture, collection tube order, transport temperature, centrifuge settings, and storage conditions. The timing of blood collection matters too, since many substances in the body fluctuate throughout the day. Sample identification guidelines, anticoagulant selection, and rules for handling samples that are hemolyzed (damaged red blood cells), lipemic (high fat), or icteric (high bilirubin) are all spelled out in the lab’s quality manual.

Internal Quality Control vs. External Quality Assessment

Labs use two complementary systems to monitor performance. Internal quality control (IQC) happens inside the lab every day. Technicians run commercially prepared control materials with known concentrations at the same time they run patient specimens. If the control result falls outside its expected range, the lab holds all patient results from that run until the problem is resolved. IQC exists to prevent the lab from releasing erroneous information about a patient’s health.

External quality assessment (EQA), sometimes called proficiency testing, works differently. An outside organization sends identical test samples to many laboratories, collects their results, and compares them. This reveals how a lab performs relative to its peers and uncovers biases that internal controls alone might miss. EQA programs serve an educational purpose: they don’t just flag problems, they help labs understand the diagnostic meaning of their results and advise clinicians on test interpretation. Interestingly, a lab can sometimes pass one type of external program while failing another for the same test, because mandatory proficiency testing and educational EQA schemes may use different scoring criteria.

What Happens During a Typical QC Check

For quantitative tests (those that produce a number, like blood glucose or cholesterol), labs run at least two control materials at different concentrations each day that patient samples are tested. One control sits near a normal value, the other near an abnormal value. For qualitative tests (those that produce a positive or negative result, like a rapid strep test), the lab runs both a positive and a negative control. Blood gas analyzers, which measure oxygen and carbon dioxide levels, require a control sample every eight hours of testing. Manual cell counts in hematology also follow an eight-hour cycle.
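A daily check like the one above can be sketched in a few lines of Python. The control names, means, standard deviations, and the ±2 SD acceptance limit used here are illustrative assumptions, not values from any real kit insert:

```python
# Minimal sketch of a daily QC check for a quantitative assay.
# All numbers are hypothetical (glucose-like values in mg/dL).

def control_in_range(value, mean, sd, limit=2.0):
    """Accept a control result if it falls within ±limit SDs of its mean."""
    return abs(value - mean) <= limit * sd

# Two control levels: one near a normal value, one near an abnormal value
controls = {
    "level_1_normal":   {"mean": 95.0,  "sd": 3.0, "measured": 97.2},
    "level_2_abnormal": {"mean": 240.0, "sd": 7.0, "measured": 255.5},
}

for name, c in controls.items():
    ok = control_in_range(c["measured"], c["mean"], c["sd"])
    print(f"{name}: {'accept' if ok else 'hold patient results'}")
```

In this hypothetical run, the normal-level control passes while the abnormal-level control falls outside ±2 SD, so patient results would be held pending troubleshooting.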

These frequencies come from federal CLIA regulations, which set the minimum standards for any lab testing human specimens in the United States. Some labs choose to run controls more frequently than the minimum, especially for high-volume or high-risk tests. Staining materials used in microbiology and pathology are checked each day of use, with Gram stains verified weekly.

Levey-Jennings Charts and Tracking Trends

Raw QC numbers alone don’t tell the full story. Labs plot each day’s control results on a Levey-Jennings chart, a simple graph with the control value on the vertical axis and the date on the horizontal axis. A horizontal line marks the established mean for that control material, and additional lines mark one, two, and three standard deviations above and below the mean.

In a well-functioning system, control values scatter randomly around the mean, mostly within one standard deviation. When values start drifting in one direction over several days (a trend) or suddenly jump to a new level and stay there (a shift), that signals a systematic problem even if individual points haven’t crossed the rejection limit yet. A single control value that lands within two standard deviations can still raise concern if it’s part of a pattern. The chart makes these patterns visible in a way that a spreadsheet of numbers cannot.
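Trend and shift detection on Levey-Jennings data can be sketched programmatically. The six-point window used below is a common rule of thumb, not a fixed standard:

```python
# Sketch of simple trend and shift detection on daily QC values.
# A six-point window is a common convention; labs may use other lengths.

def detect_trend(values, n=6):
    """Flag n consecutive points moving steadily in one direction."""
    for i in range(len(values) - n + 1):
        window = values[i:i + n]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            return True
    return False

def detect_shift(values, mean, n=6):
    """Flag n consecutive points all on the same side of the mean."""
    for i in range(len(values) - n + 1):
        window = values[i:i + n]
        if all(v > mean for v in window) or all(v < mean for v in window):
            return True
    return False

daily_qc = [100.1, 99.7, 100.4, 101.0, 101.6, 102.1, 102.8, 103.3]
print(detect_trend(daily_qc))         # steady rise across six points
print(detect_shift(daily_qc, 100.0))  # six points above the mean
```

Note that every point in this example would still pass a ±2 SD check individually; only the pattern reveals the developing problem.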

Westgard Rules for Accepting or Rejecting Results

Levey-Jennings charts are interpreted using a set of decision rules developed by James Westgard. Each rule describes a specific pattern in the control data and what it means. The most commonly used rules include:

  • 1-3s rule: A single control value falls more than three standard deviations from the mean. This is the primary rejection rule. The run is stopped and patient results are not reported.
  • 2-2s rule: Two consecutive control values both exceed two standard deviations in the same direction, suggesting systematic error like a calibration drift or reagent problem.
  • R-4s rule: The difference between two control levels within the same run exceeds four standard deviations, pointing to random error.
  • 4-1s rule: Four consecutive control values all fall on the same side of the mean and exceed one standard deviation. This catches smaller, creeping shifts before they become large enough to trigger the 1-3s rule.
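
The four rules above can be sketched as functions over z-scores (each control value minus the established mean, divided by the standard deviation). This is a simplified single-level view; real implementations also track rules across runs and across control levels:

```python
# Sketch of the four Westgard rules described above, applied to z-scores.

def rule_1_3s(z_values):
    """Any single point beyond ±3 SD."""
    return any(abs(z) > 3 for z in z_values)

def rule_2_2s(z_values):
    """Two consecutive points beyond ±2 SD on the same side of the mean."""
    return any((a > 2 and b > 2) or (a < -2 and b < -2)
               for a, b in zip(z_values, z_values[1:]))

def rule_r_4s(z_within_run):
    """Range between control levels in one run exceeds 4 SD."""
    return max(z_within_run) - min(z_within_run) > 4

def rule_4_1s(z_values):
    """Four consecutive points on the same side, all beyond ±1 SD."""
    for i in range(len(z_values) - 3):
        w = z_values[i:i + 4]
        if all(z > 1 for z in w) or all(z < -1 for z in w):
            return True
    return False

history = [0.5, -0.8, 1.2, 1.4, 1.3, 1.5]
print(rule_4_1s(history))  # last four points all exceed +1 SD
```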

Labs don’t apply all these rules to every test. The selection depends on how well a test method performs, measured by a metric called sigma. A method performing at six sigma or above is so precise that the lab only needs to run one control level once daily and watch for the 1-3s rule alone. At four to six sigma, two control levels with multiple rules are needed. At three to four sigma, the standard is two control levels run twice daily with the full set of multi-rules. Three sigma is considered the absolute minimum acceptable performance. Below that threshold, the lab must conduct a root cause analysis before using the method for patient testing.
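The sigma-based selection can be sketched as follows. The sigma metric is conventionally computed as allowable total error minus bias, divided by imprecision, with all three expressed in percent; the numbers below are illustrative:

```python
# Sigma metric and QC-design mapping sketch. The formula is the conventional
# one; the TEa, bias, and CV figures are hypothetical.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (allowable total error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def qc_design(sigma):
    """Map a sigma value to the QC stringency tiers described above."""
    if sigma >= 6:
        return "1 control level daily, 1-3s rule only"
    if sigma >= 4:
        return "2 control levels, multiple rules"
    if sigma >= 3:
        return "2 levels twice daily, full multi-rules"
    return "below minimum: root cause analysis before patient testing"

sigma = sigma_metric(10.0, 1.5, 2.0)
print(sigma)             # → 4.25
print(qc_design(sigma))  # → 2 control levels, multiple rules
```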

How Labs Measure Precision

The key metric for evaluating how consistently a test performs is the coefficient of variation (CV): the standard deviation of repeated control measurements divided by their mean, usually expressed as a percentage. A lower CV means tighter clustering of results, which means better precision. Labs compare their CV against benchmarks tied to each test’s allowable total error, the maximum amount of inaccuracy that’s clinically acceptable for a given measurement. A widely used guideline recommends that a lab’s long-term imprecision stay below one-third of the allowable total error.
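A minimal sketch of the CV calculation, using hypothetical repeat measurements and an assumed allowable total error:

```python
# CV calculation sketch. The repeat values and the allowable total error
# are illustrative, not from any published performance specification.
from statistics import mean, stdev

def coefficient_of_variation(values):
    """CV as a percentage: SD of repeated measurements over their mean."""
    return stdev(values) / mean(values) * 100

repeats = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0]  # e.g. potassium controls, mmol/L
cv = coefficient_of_variation(repeats)

tea_pct = 5.8  # assumed allowable total error for this test
print(f"CV = {cv:.2f}%, one-third TEa target = {tea_pct / 3:.2f}%")
```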

Another approach sets the target based on natural biological variation. The idea is that a lab’s analytical imprecision should be small enough that it doesn’t meaningfully add to the normal fluctuations that occur in a person’s body from day to day. There are three tiers: optimum (analytical variation is no more than one-quarter of a person’s natural variation), desirable (one-half), and minimum (three-quarters).
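The three tiers can be expressed as a small classifier. The within-subject biological variation and analytical CV values below are illustrative:

```python
# Biological-variation targets sketch: classify analytical imprecision
# against within-subject biological variation (cv_i). The fractions match
# the optimum / desirable / minimum tiers described above.

def precision_tier(analytical_cv, cv_i):
    if analytical_cv <= 0.25 * cv_i:
        return "optimum"
    if analytical_cv <= 0.50 * cv_i:
        return "desirable"
    if analytical_cv <= 0.75 * cv_i:
        return "minimum"
    return "unacceptable"

# Hypothetical: within-subject variation of 6%, analytical CV of 2.5%
print(precision_tier(2.5, 6.0))  # → desirable
```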

What Happens When QC Fails

When a control result falls outside acceptable limits, no patient results from that run are released. The first step is checking the basics: Is the control material expired? Was it stored and prepared correctly? Was the right control used for the right test? These simple issues account for a surprising number of QC failures.

If the basics check out, the lab characterizes the error. The pattern in the control data points toward either random error (unpredictable, inconsistent deviations) or systematic error (a consistent shift in one direction). Random error leads technicians to investigate pipetting technique, air bubbles, temperature fluctuations, or timing inconsistencies. Systematic error points toward reagent lot changes, calibration drift, or deteriorating control materials.

When the failure looks like a random occurrence, repeating the QC is a reasonable first step. If the repeat falls within two standard deviations, the lab treats the original failure as an isolated event and reports patient results. If any repeated control exceeds two standard deviations, the entire run is rejected and patient samples are held for retesting.
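That repeat-QC decision can be sketched as a simple rule over the z-scores of the repeated controls:

```python
# Sketch of the repeat-QC decision for a suspected random error,
# following the ±2 SD acceptance rule described above.

def repeat_qc_decision(repeat_z_scores):
    """repeat_z_scores: z-scores of the repeated control result(s)."""
    if all(abs(z) <= 2 for z in repeat_z_scores):
        return "isolated event: report patient results"
    return "reject run: hold and retest patient samples"

print(repeat_qc_decision([1.4]))       # repeat back in range
print(repeat_qc_decision([1.1, 2.6]))  # one repeat still out
```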

When systematic error is identified and corrected, the instrument is recalibrated and controls are retested. If results come back in range, every patient sample believed to have been affected by the error is retested before reporting. If results remain out of range after recalibration, the lab sequesters all results and initiates a formal root cause analysis, a structured investigation examining five categories: personnel, equipment, materials, method, and environment. Patient results stay on hold until the problem is definitively resolved and confirmed by acceptable QC.