Calibrating an instrument means comparing its readings against a known, trusted reference standard and then documenting how far off those readings are. If a thermometer consistently reads 1.2 degrees too high, calibration reveals that gap. In many cases, the instrument is then adjusted so its readings align with the standard, but calibration itself is the comparison and documentation step, not the adjustment.
What Calibration Actually Involves
The process has two parts. First, a technician measures something with both the instrument being tested and a reference standard whose accuracy is already well established. This creates a map between what the instrument displays and what the true value actually is. Second, that relationship is used either to adjust the instrument so its future readings are correct or to apply a known correction factor to the readings it produces.
Think of it like checking a bathroom scale. You place a 10-pound weight on it, and it reads 10.3 pounds. You place a 20-pound weight on it, and it reads 20.5 pounds. Now you know exactly how the scale’s readings relate to reality. That’s calibration. Adjusting the scale so it reads 10.0 and 20.0 is a separate step, technically called adjustment, though most people use “calibration” to mean both.
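The bathroom-scale example can be sketched in code: two reference points define a straight-line map from what the scale displays back to the true value. This is a minimal illustration of the idea, not a real calibration procedure, which would use more points and account for uncertainty.

```python
# The two reference checks from the example above.
reference = [10.0, 20.0]   # known true weights (lb)
reading = [10.3, 20.5]     # what the scale displayed (lb)

# Fit a straight-line map from displayed reading to true value:
# true = slope * reading + offset. With two points this is exact.
slope = (reference[1] - reference[0]) / (reading[1] - reading[0])
offset = reference[0] - slope * reading[0]

def corrected(displayed):
    """Apply the calibration map to a raw scale reading."""
    return slope * displayed + offset

print(round(corrected(10.3), 6))  # -> 10.0
print(round(corrected(20.5), 6))  # -> 20.0
```

Applying the correction in software like this is one option; the alternative, adjusting the scale's mechanism so it displays the true value directly, is the separate "adjustment" step described above.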
Every calibration produces a certificate or report that records the readings, the reference standards used, and a number called measurement uncertainty. That uncertainty is an honest statement of how close the calibration itself can get to the truth. A plasma glucose meter whose calibration point reads 5.4 mmol/L might carry an uncertainty of ±0.1 mmol/L, meaning the true value is believed to fall between 5.3 and 5.5 mmol/L with 95% confidence. No measurement is perfect, and uncertainty quantifies how imperfect it is.
Why the Reference Standard Matters
A calibration is only as good as the standard you compare against. That standard needs to be more accurate than the instrument being tested, and it needs to be traceable, meaning its own accuracy has been verified through an unbroken chain of comparisons that leads all the way back to the international definitions of measurement units (the SI system). NIST describes this as “a complete, explicitly described, and documented series of calibrations that successively link a measurement result to the values and uncertainties of each intermediate reference standard.”
In practice, this chain might look like: your lab thermometer is compared against a reference thermometer in your facility, which was compared against a higher-quality standard at a regional calibration lab, which was compared against a national standard at an institution like NIST in the United States or a similar body elsewhere. At each link, the uncertainty is measured and documented. By the time you reach your lab thermometer, you can state exactly how its readings relate to the fundamental definition of temperature.
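One consequence of that chain is that uncertainty accumulates at every link. For independent error sources, the standard convention (described in the GUM, the Guide to the Expression of Uncertainty in Measurement) is to combine standard uncertainties in quadrature, i.e. as a root sum of squares. The numbers below are invented purely for illustration:

```python
import math

# Hypothetical standard uncertainties (in degrees C) at each link of a
# traceability chain, from the national standard down to the lab
# thermometer. Illustrative values, not from any real certificate.
chain_uncertainties = [0.002, 0.01, 0.05, 0.1]

# Independent uncertainties combine in quadrature (root sum of squares),
# so the final link dominates when it is much larger than the others.
combined = math.sqrt(sum(u**2 for u in chain_uncertainties))
print(f"combined standard uncertainty: +/-{combined:.3f} degrees C")
```

Note how the total is only slightly worse than the largest single contribution; this is why each link in the chain must use a standard noticeably more accurate than the instrument below it.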
How Often Instruments Need Calibration
There is no single answer. NIST identifies several factors that determine how often recalibration is needed: the accuracy your work demands, any regulations or contracts that specify intervals, how stable the particular instrument is over time, and the environmental conditions it operates in. A pressure gauge in a climate-controlled lab drifts differently than one bolted to an outdoor pipeline in a desert.
The best approach, according to NIST, is to track an instrument’s performance over time using control charts. You record the “as submitted” readings each time the instrument comes in for calibration (before any adjustments) alongside the corrected readings afterward. Over months and years, this data reveals how quickly the instrument drifts. If a scale drifts out of acceptable range in eight months, calibrating it every six months gives you a safety margin. If it barely moves in two years, annual calibration may be more than enough.
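The interval logic can be sketched with hypothetical numbers: given "as submitted" errors from past calibrations, estimate the drift rate, find when drift would exceed tolerance, and recalibrate before that point. The dates, error values, and 25% safety margin below are illustrative assumptions (the margin mirrors the six-months-for-eight example above), not a prescribed NIST procedure:

```python
from datetime import date

# Hypothetical "as submitted" errors recorded at each calibration,
# i.e. how far off the instrument was before any adjustment.
history = [
    (date(2022, 1, 10), 0.02),
    (date(2022, 7, 12), 0.11),
    (date(2023, 1, 9), 0.19),
    (date(2023, 7, 11), 0.30),
]
tolerance = 0.25  # the largest error this application can accept

# Crude drift rate from the first and last points. A control chart
# would use every point and flag non-linear behavior.
days = (history[-1][0] - history[0][0]).days
rate = (history[-1][1] - history[0][1]) / days  # error units per day

# Days until a freshly adjusted instrument drifts to the tolerance,
# shortened by a 25% safety margin.
suggested_interval = 0.75 * (tolerance / rate)
print(f"recalibrate about every {suggested_interval:.0f} days")
```

The instrument in this made-up history drifts past a 0.25 tolerance in roughly 16 months, so recalibrating about annually would keep it inside limits with margin to spare.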
Some industries set mandatory intervals by regulation. Medical devices, aerospace components, and pharmaceutical equipment typically follow strict schedules. Outside of regulated fields, the interval is a judgment call based on risk and data.
What Happens Without Calibration
All instruments drift over time. Components age, springs fatigue, electronics shift with temperature changes, and sensors degrade. Without calibration, an instrument could be giving you confidently wrong numbers, and you would have no way to know.
The consequences scale with the stakes. In manufacturing, uncalibrated equipment leads to production delays, defective products, costly rework, and recalls. A single mis-calibrated device on a production line can compromise an entire batch of product. In healthcare, the risks include incorrect drug doses, misdiagnoses, and failed emergency interventions. In construction and aerospace, measurement errors have contributed to structural failures and vehicle recalls due to faulty sensors. Beyond safety, noncompliance with calibration requirements can result in fines, legal liability, and loss of accreditation.
Calibration vs. Verification vs. Adjustment
These three terms get mixed up constantly. Calibration is the comparison: you find out how far off the instrument is and document it. Verification is a pass/fail check: you determine whether the instrument’s readings fall within acceptable limits for a specific use. Adjustment is the physical correction: you change the instrument’s settings so its readings match the reference standard.
A calibration report might show that a pressure gauge reads 2 psi high at the 100 psi mark. If your application tolerates ±5 psi, verification tells you the gauge passes. If your application requires ±1 psi, it fails, and you need an adjustment. The calibration data is the foundation for both decisions.
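The pass/fail logic of verification is simple enough to state directly as code. This sketch reuses the gauge numbers from the example above:

```python
# From the calibration report: the gauge reads 2 psi high at 100 psi.
calibration_error = 2.0  # psi

def verify(error, tolerance):
    """Pass/fail check: is the documented calibration error within
    the tolerance this particular application allows?"""
    return abs(error) <= tolerance

print(verify(calibration_error, 5.0))  # +/-5 psi application -> True
print(verify(calibration_error, 1.0))  # +/-1 psi application -> False
```

The same calibration data yields a pass for one application and a fail for another, which is the point of the distinction: verification is a judgment about fitness for a specific use, not a property of the instrument itself.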
What a Calibration Certificate Tells You
When an instrument comes back from a calibration lab, it should arrive with documentation that includes:
- the specific readings taken at various points across the instrument's range
- the reference standards used (with their own traceability information)
- the measurement uncertainty at each point
- the environmental conditions during calibration (temperature, humidity)
- the date
Labs that perform calibrations are often accredited to ISO/IEC 17025:2017, an international standard that governs the competence of testing and calibration laboratories. Accreditation to this standard means the lab's processes, staff qualifications, and equipment have been independently audited.
The uncertainty values on the certificate are particularly important. If your work requires measurements accurate to ±0.5 degrees, and the calibration uncertainty alone is ±0.4 degrees, there is almost no room left for the instrument’s own variability. You need a calibration with tighter uncertainty, which usually means a higher-grade reference standard.
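The budget behind that statement can be sketched two ways: a worst-case budget where errors simply add, and the more common statistical budget where independent uncertainties add in quadrature. Either way, the numbers from the text leave little margin:

```python
import math

# Numbers from the text: the work requires +/-0.5 degrees overall and
# the calibration alone carries +/-0.4 degrees of uncertainty.
required = 0.5
calibration_u = 0.4

# Worst-case budgeting: errors add linearly.
linear_headroom = required - calibration_u

# Statistical budgeting for independent sources: add in quadrature,
# so the instrument's allowance is what remains under the total.
quadrature_headroom = math.sqrt(required**2 - calibration_u**2)

print(f"worst-case headroom:  +/-{linear_headroom:.1f} degrees")
print(f"quadrature headroom:  +/-{quadrature_headroom:.2f} degrees")
```

Under the worst-case view only ±0.1 degrees remains for the instrument itself, which is why a tighter calibration (and therefore a higher-grade reference standard) would be needed in practice.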
Common Instruments That Require Calibration
- Scales and balances: Used in pharmacies, labs, and manufacturing. Even high-quality balances drift with temperature changes and mechanical wear.
- Thermometers and temperature probes: Critical in food safety, healthcare, and chemical processing where a few degrees can matter.
- Pressure gauges: Found in HVAC systems, industrial plants, and medical gas delivery. Mechanical gauges are especially prone to drift from vibration.
- Electrical meters (multimeters, oscilloscopes): Used in electronics manufacturing and maintenance. Component aging shifts readings gradually.
- Pipettes: Laboratory pipettes that dispense precise liquid volumes need regular calibration, since worn seals and tips change delivery accuracy.
The principle is the same regardless of the instrument. You compare it against something more accurate, document the difference, and use that information to either correct the instrument or account for its error in your work.

