Why Can a Measurement Never Be Exactly Correct?

No measurement can ever be exactly correct because every measuring process introduces some degree of uncertainty, from the physical limits of instruments to the unavoidable interaction between the measurer and the thing being measured. This isn’t a flaw in technique or technology. It’s a fundamental property of measurement itself. Even the most advanced laboratories in the world report their results as a value plus or minus some uncertainty, never as a single perfect number.

The “True Value” Is a Theoretical Idea

At the heart of this question is a concept that scientists call the “true value” of a quantity. The true value is what you’d get if you could measure something with absolutely zero interference, zero limitation, and infinite precision. But the international framework for measurement science treats the true value as a purely theoretical concept, one that is “assumed to exist, but is unknowable even in principle.” You can never compare your measurement to the true value because nobody has access to it. The gap between your reading and that unknowable true value is, by definition, also unknowable.
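In symbols, the standard definition of error makes the problem explicit (this is a conventional relation, not a quotation from any standards document):

```latex
% Error is the gap between what you read and the true value:
E \;=\; x_{\text{measured}} \;-\; x_{\text{true}}
% Because x_true is unknowable, E is unknowable too.
```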

This is why modern measurement science has shifted its focus away from “error” (which implies you know what the right answer was) and toward “uncertainty,” which honestly describes how confident you can be in what you measured. A measurement might in fact be very close to the true value and still carry a large uncertainty, or it might appear precise while being systematically off. You simply can’t know which for certain.

Every Instrument Has a Resolution Limit

Every measuring tool, whether it’s a kitchen scale or a particle detector, has a smallest change it can detect. This is called its resolution. A ruler marked in millimeters can’t reliably tell you anything about a fraction of a millimeter. A digital thermometer that reads to one decimal place can’t distinguish between 98.61°F and 98.64°F. It displays 98.6°F either way.
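As a toy illustration, here’s a sketch in Python of a display that rounds to one decimal place (the temperature readings are hypothetical):

```python
def display_reading(true_value: float, decimals: int = 1) -> float:
    """Simulate a digital display that rounds to a fixed number of decimals."""
    return round(true_value, decimals)

print(display_reading(98.61))  # 98.6
print(display_reading(98.64))  # 98.6 -- two different temperatures, one reading
```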

Digital instruments face an additional, built-in limitation called quantization. When a sensor converts a continuous physical signal (like temperature or voltage) into a digital number, it maps a whole range of possible values onto a single output. Imagine a staircase: every analog value that falls on the same “step” gets reported as the same number. The difference between the actual value and the reported step is quantization error, and it’s always present in digital readings. This error is small in high-quality instruments, but it never reaches zero.
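Here’s a minimal sketch of that staircase, assuming an idealized converter with a fixed step size (the step value is illustrative):

```python
def quantize(signal: float, step: float) -> float:
    """Map a continuous input value onto the nearest quantization step."""
    return round(signal / step) * step

STEP = 0.25  # illustrative step size for an idealized converter
for x in [1.00, 1.07, 1.12, 1.20]:
    q = quantize(x, STEP)
    print(f"input={x:.2f}  output={q:.2f}  quantization error={x - q:+.2f}")
# Every input landing on the same step reports the same output,
# and the error magnitude never exceeds step/2.
```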

Analog instruments have their own version of the problem. A pointer sitting between two marks on a dial forces you to estimate. If you view that pointer from even slightly off-center, parallax error shifts the apparent reading. Instruments calibrated in one position (say, horizontal) read differently in another because of mechanical imbalance and friction in their moving parts.

Measuring Changes What You Measure

One of the deeper reasons measurement can’t be perfect is that the act of measuring always disturbs the thing being measured, at least slightly. To learn anything about a system, you have to interact with it. You have to bounce light off it, run current through it, or press a probe against it. That interaction transfers energy, and the system you’re now observing is no longer in the same state it was before you looked.

NASA’s description of this principle puts it clearly: the observation is never one of the system “at rest,” but of the system perturbed. To gather information about any physical system, at least one tiny packet of energy or momentum must cross the boundary between you and it. The system absorbs or reflects that packet and changes as a result. A good measurement keeps that disturbance small relative to the quantity being measured, but it can never eliminate it entirely.

This effect is negligible when you’re measuring a bridge or a building. But at very small scales, like individual atoms or photons, the disturbance from measurement becomes comparable to the thing you’re trying to measure. At that point, the uncertainty isn’t something you can engineer away. It’s baked into the physics.
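For reference, physicists state that quantum-scale floor formally as the Heisenberg uncertainty relation, written here in standard notation (ħ is the reduced Planck constant):

```latex
% The product of position uncertainty and momentum uncertainty
% can never fall below a fixed quantum of action:
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```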

Random Noise Sets a Floor

Even in a perfectly designed instrument sitting in a perfectly controlled room, there’s a source of interference you can’t eliminate: thermal noise. Every conductor above absolute zero contains electrons in constant random motion, jostling around due to their own thermal energy. This thermal agitation generates tiny, random electrical signals that mix in with whatever the instrument is trying to detect.

The power of thermal noise is directly proportional to absolute temperature. Since no real instrument operates at absolute zero, there is always some baseline level of random electrical fluctuation. This sets a lower bound on how faint a signal can be and still be distinguished from noise. No matter how sensitive your detector, thermal noise means there’s always a point where the signal you’re trying to measure disappears into a background hum of random thermal motion.
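To see the scale of this floor, here’s a sketch using the standard Johnson–Nyquist formula for thermal noise voltage, with illustrative values for a room-temperature resistor:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (an exact value in the 2019 SI)

def thermal_noise_vrms(temp_k: float, resistance_ohm: float, bandwidth_hz: float) -> float:
    """RMS thermal noise voltage across a resistor: sqrt(4 * k_B * T * R * bandwidth)."""
    return math.sqrt(4 * K_B * temp_k * resistance_ohm * bandwidth_hz)

# A 10 kilo-ohm resistor at room temperature, measured over a 10 kHz bandwidth:
v = thermal_noise_vrms(temp_k=295.0, resistance_ohm=10_000.0, bandwidth_hz=10_000.0)
print(f"{v * 1e6:.2f} microvolts RMS")  # about 1.3 microvolts of unavoidable noise
```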

On top of thermal noise, random errors arise from countless small, unpredictable sources: air currents, vibrations from a nearby road, tiny fluctuations in electrical supply. Each one is insignificant on its own, but together they cause repeated measurements of the same thing to scatter slightly around a central value rather than landing on the same number every time.
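A quick simulation shows the effect. This sketch assumes purely Gaussian random error around a hypothetical true value:

```python
import random

random.seed(1)
TRUE_VALUE = 100.0   # hypothetical quantity being measured
NOISE_SIGMA = 0.5    # spread of the random error

readings = [TRUE_VALUE + random.gauss(0.0, NOISE_SIGMA) for _ in range(1000)]
mean = sum(readings) / len(readings)
print(f"one reading:   {readings[0]:.3f}")  # scattered around 100
print(f"mean of 1000:  {mean:.3f}")         # much closer to 100: random error averages out
```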

Systematic Errors Hide in Plain Sight

Random errors at least average out over many measurements. Systematic errors don’t. These are consistent biases built into the measurement process that push every reading in the same direction. A scale that’s slightly miscalibrated will read too high every single time. A converter that samples its input more slowly than the signal demands will misrepresent part of that signal on every reading, distorting the result the same way each time.
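Contrast that with a systematic offset. In this sketch, a hypothetical miscalibrated scale adds the same bias to every reading, and averaging never removes it:

```python
import random

random.seed(2)
TRUE_MASS = 50.0  # hypothetical object, in grams
BIAS = 0.8        # miscalibration: every reading comes out 0.8 g too high

readings = [TRUE_MASS + BIAS + random.gauss(0.0, 0.3) for _ in range(10_000)]
mean = sum(readings) / len(readings)
print(f"mean of 10,000 readings: {mean:.3f} g")  # converges near 50.8, not 50.0
```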

Systematic errors are especially tricky because they’re invisible from within your own data. If every measurement you take is shifted by the same amount, your results will look beautifully consistent while all being wrong in the same way. Detecting systematic error requires comparing your instrument against a more reliable reference, and that reference has its own uncertainties.

Even Our Best Standards Aren’t Perfect

For over a century, the kilogram was defined by a single physical object: a platinum-iridium cylinder stored in a vault near Paris. Every mass measurement in the world was ultimately traceable to that one artifact. The problem was that the artifact itself was changing. Comparisons over decades showed tiny but measurable drift, likely from surface contamination or cleaning. The “exact” kilogram was, itself, inexact.

This is why, in 2019, the International System of Units (SI) was redefined so that all seven base units are anchored to fixed numerical values of universal physical constants rather than to physical objects. The speed of light, for instance, is defined as exactly 299,792,458 meters per second. The Planck constant, the elementary charge, and four other constants were similarly locked to precise numerical values. These definitions are as close to permanent and universal as science can get.

But here’s the catch: while the constants themselves are now defined as exact, any real-world attempt to realize those definitions through actual laboratory equipment reintroduces uncertainty. The definition of the second is perfect in theory, tied to a specific transition frequency of the cesium-133 atom (exactly 9,192,631,770 cycles per second). Building a cesium clock that reproduces that frequency, however, involves lasers, electronics, shielding, and temperature control, each contributing its own small uncertainty. The standard is exact. The measurement against that standard never is.

How Scientists Handle Uncertainty

Because perfect measurement is impossible, science has developed rigorous methods for quantifying just how imperfect each measurement is. Rather than pretending a result is a single clean number, scientists report it alongside a combined standard uncertainty that accounts for every identified source of variability. This includes the resolution of the instrument, the scatter in repeated readings, thermal effects, calibration limits, and any other factor that could shift the result.

These individual uncertainties are combined mathematically, weighted by how sensitive the final result is to each one. If your measurement depends on five different input quantities, each with its own uncertainty, the final uncertainty reflects all of them together. Two sources of error that tend to move together (correlated errors) are handled differently from those that vary independently.
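Here’s a sketch of that combination for a simple case, using the standard first-order propagation rule for independent inputs (the resistance measurement and all values are hypothetical):

```python
import math

# Measuring resistance as R = V / I, each input with its own standard uncertainty.
V, u_V = 12.00, 0.05    # volts
I, u_I = 0.250, 0.002   # amperes

R = V / I

# First-order propagation for independent inputs:
# u_R^2 = (dR/dV)^2 * u_V^2 + (dR/dI)^2 * u_I^2
dR_dV = 1 / I
dR_dI = -V / I**2
u_R = math.sqrt((dR_dV * u_V) ** 2 + (dR_dI * u_I) ** 2)

print(f"R = {R:.2f} ohms +/- {u_R:.2f} ohms")  # the value and its combined uncertainty
```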

The result is an honest statement: “We measured this quantity to be X, and we’re confident the value lies within a certain range around X.” That range can be made smaller with better instruments, more measurements, and tighter environmental control. It can never be made zero. Every measurement is an approximation, and the best measurements are simply the ones that come with the smallest, most carefully characterized uncertainty.