The modern laboratory relies heavily on the pipette, an instrument foundational to experimental science. Every result, from sequencing a genome to diagnosing a disease, begins with the accurate and precise transfer of liquid volumes. Even small errors in volume delivery can lead to dramatically skewed data, compromising the validity of research and clinical outcomes. Understanding the mechanics of measurement and the factors that influence performance is necessary to ensure reliable results.
Defining Accuracy and Precision
Accuracy and precision are distinct concepts that together define the overall quality of a pipette’s performance. Accuracy describes how close a measured volume is to the true target volume, which relates to systematic error or bias. If a pipette is consistently set to dispense 100 microliters but actually dispenses 95 microliters, the measurement is inaccurate because of a consistent, repeatable deviation from the target.
Precision refers to the degree of agreement among a series of repeated measurements, regardless of whether they are close to the target volume. A precise pipette will dispense nearly the same volume every time, even if that volume is systematically incorrect.
To illustrate, consider a dartboard analogy where the bull’s-eye represents the true target volume. Darts tightly clustered far from the bull’s-eye are highly precise but inaccurate. Conversely, darts scattered widely but centered around the bull’s-eye are accurate but imprecise. The goal is to achieve both minimal systematic and minimal random error. High precision with low accuracy often points to a calibration problem, while low precision suggests a random source of error, such as inconsistent technique.
Quantifying Pipette Performance
The standard method for assessing a pipette’s performance is the gravimetric method, which uses an analytical balance to weigh dispensed volumes of distilled water. Since the density of water at a given temperature is known, the mass of the dispensed liquid can be converted directly into volume using a Z-factor, a correction factor that accounts for water density and air buoyancy at the measured temperature and pressure. This process is typically performed across the pipette’s volume range, often at 100%, 50%, and 10% of the maximum setting, with multiple repetitions at each volume.
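The mass-to-volume conversion described above can be sketched in a few lines. This is a minimal illustration, not a calibration procedure: the Z-factor constant below (approximately 1.0029 µL/mg near 20 °C and 101.3 kPa) and the replicate masses are assumed example values; a real check would look the Z-factor up from published tables for the measured temperature and barometric pressure.

```python
# Gravimetric conversion sketch: weighed mass of distilled water -> volume.
# The Z-factor here is an assumed example value for ~20 degrees C and
# 101.3 kPa; real calibrations read it from tables keyed to the measured
# temperature and pressure.
Z_FACTOR_UL_PER_MG = 1.0029

def mass_to_volume_ul(mass_mg: float, z_factor: float = Z_FACTOR_UL_PER_MG) -> float:
    """Convert a weighed mass of distilled water (mg) into volume (uL)."""
    return mass_mg * z_factor

# Ten hypothetical replicate weighings at a 100 uL setting:
masses_mg = [99.2, 99.5, 99.1, 99.4, 99.3, 99.6, 99.2, 99.4, 99.3, 99.5]
volumes_ul = [mass_to_volume_ul(m) for m in masses_mg]
```

Because the Z-factor is close to 1, a 100 mg weighing converts to just over 100 µL; the correction matters most when tolerances are tight.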
The collected data is used to calculate both accuracy and precision. Accuracy is determined by calculating the mean volume of all measurements and comparing it to the set volume, revealing systematic bias. Precision is quantified by calculating the Standard Deviation (SD) of the measurements. A more practical measure is the Coefficient of Variation (CV), which expresses the standard deviation as a percentage of the mean volume, providing a normalized measure of reproducibility.
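These calculations can be summarized in a short function. The replicate values in the usage example are hypothetical; the statistics themselves (mean, sample standard deviation, systematic error, and CV) follow the definitions given above.

```python
import statistics

def assess_pipette(volumes_ul, set_volume_ul):
    """Compute accuracy and precision metrics from replicate volumes.

    Returns (mean volume, systematic error %, coefficient of variation %).
    """
    mean_v = statistics.mean(volumes_ul)
    sd = statistics.stdev(volumes_ul)  # sample standard deviation
    # Accuracy: systematic deviation of the mean from the set volume.
    systematic_error_pct = 100.0 * (mean_v - set_volume_ul) / set_volume_ul
    # Precision: SD normalized to the mean, expressed as a percentage.
    cv_pct = 100.0 * sd / mean_v
    return mean_v, systematic_error_pct, cv_pct

# Hypothetical replicates at a 100 uL setting:
mean_v, err_pct, cv_pct = assess_pipette(
    [99.1, 99.4, 99.2, 99.5, 99.3, 99.4, 99.2, 99.5, 99.3, 99.1], 100.0)
```

A negative systematic error indicates under-delivery relative to the set volume; a small CV indicates good reproducibility even when the systematic error is large.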
Environmental and Mechanical Sources of Error
Pipette performance can be significantly affected by factors outside of the operator’s immediate control. Ambient and liquid temperatures are major contributors to inaccuracy, particularly in air-displacement pipettes. A temperature difference causes thermal expansion or contraction of the air cushion, leading to an incorrect aspirated volume. For example, aspirating a refrigerated liquid with a room-temperature pipette cools the trapped air, causing it to contract and draw additional liquid into the tip, resulting in an over-delivery relative to the set volume.
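The magnitude of this temperature effect can be estimated with a back-of-envelope calculation from Charles’s law (gas volume proportional to absolute temperature). The numbers below are assumptions for illustration, and the sketch assumes the air cushion fully equilibrates to the liquid temperature, which overstates the effect in practice.

```python
def air_cushion_shift_ul(cushion_ul: float, t_room_c: float, t_liquid_c: float) -> float:
    """Estimate the change in aspirated volume (uL) when the air cushion
    cools or warms from room temperature toward the liquid temperature.

    Uses Charles's law (V proportional to absolute T). A positive result
    means the cushion contracts and extra liquid is drawn into the tip.
    Assumes full thermal equilibration, an upper-bound simplification.
    """
    t_room_k = t_room_c + 273.15
    t_liquid_k = t_liquid_c + 273.15
    return cushion_ul * (1.0 - t_liquid_k / t_room_k)

# Assumed example: a 1000 uL air cushion, 22 C room, 4 C refrigerated sample.
shift_ul = air_cushion_shift_ul(1000.0, 22.0, 4.0)
```

Even a few degrees of mismatch produces a shift of several microliters per milliliter of air cushion, which is why standards recommend equilibrating samples and pipettes to ambient temperature before calibration.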
Environmental conditions such as air pressure and humidity also influence the density of the air within the pipette barrel, which is why calibration standards like ISO 8655 require a Z-factor correction based on these variables. Mechanical faults within the instrument itself are another common source of error. Issues like worn piston seals or damaged O-rings lead to air leaks and inconsistent pressure. These failures compromise the airtight seal necessary for accurate aspiration, resulting in unpredictable volume discrepancies. The use of ill-fitting or non-manufacturer-specified tips can also break the airtight seal, contributing to both inaccuracy and imprecision.
User Technique and Consistent Results
The operator’s technique is a primary factor in minimizing random error and achieving consistent, precise results. A fundamental practice is pre-wetting the tip, which involves aspirating and expelling the liquid at least three times before the sample is taken for delivery. This saturates the headspace inside the tip with the liquid’s vapor, reducing evaporation loss during subsequent transfers.
Consistent aspiration and dispensing speed are also necessary. Drawing liquid too quickly can cause air bubbles or splashing, while slow, smooth action ensures a uniform hydrodynamic flow.
The proper immersion depth of the tip is important; for small volumes, a depth of 2 to 3 millimeters is generally recommended. Immersing the tip too deeply can cause liquid to cling to the outside, leading to an over-delivery. Too shallow an immersion risks aspirating air, which results in under-delivery.
Maintaining a consistent vertical angle during aspiration, typically no more than a 20-degree tilt, prevents changes in hydrostatic pressure from altering the aspirated volume. Finally, plunger control must follow a steady rhythm: depressing the plunger to the first stop, releasing it smoothly to aspirate, and then pressing through to the second stop for the final blow-out ensures the same volume is delivered every time.

