What Is Precision in Physics and How Is It Measured?

Precision in physics describes how closely repeated measurements of the same quantity agree with each other. If you measure the length of a table five times and get 1.52 m, 1.53 m, 1.52 m, 1.52 m, and 1.53 m, those results are highly precise because they cluster tightly together. The International Vocabulary of Metrology formalizes this as “the closeness of agreement among indications or measured values obtained by replicate measurements under specified conditions.” Understanding precision is fundamental to every branch of physics, because no measurement is perfect, and knowing how consistent your results are tells you how much you can trust them.

How Precision Differs From Accuracy

Precision and accuracy sound interchangeable, but they describe two different things. Accuracy is how close a measurement lands to the true or accepted value. Precision is how close your measurements land to each other, regardless of whether they’re near the true value.

The classic way to visualize this is a dartboard (or a target with scattered dots, as physics textbooks often show). Imagine four combinations:

  • High accuracy, high precision: All your darts cluster tightly around the bullseye.
  • High accuracy, low precision: Your darts are scattered across the board, but their average position is the bullseye.
  • High precision, low accuracy: Your darts cluster tightly together, but off to one side of the bullseye.
  • Low accuracy, low precision: Your darts are scattered and nowhere near the center.

That third case, high precision but low accuracy, is particularly important to recognize. It means your measurements are very repeatable, yet consistently wrong. This usually signals a systematic error: something in your setup that biases every reading in the same direction, like a scale that’s always 0.5 grams heavy. Precision alone can’t catch that kind of problem. You need a known reference value or a different measurement method to detect it.
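
To see how this plays out numerically, here is a minimal Python sketch (with made-up numbers) of a scale that is precise but biased, like the one described above. The readings cluster within about a hundredth of a gram, yet all of them sit roughly 0.5 g above the reference value; only the comparison against that known reference exposes the systematic error.

    import random

    random.seed(1)
    reference = 4.50   # grams: the known true mass (hypothetical)
    bias = 0.50        # systematic error: shifts every reading the same way
    noise = 0.01       # random error: small, so readings cluster tightly

    readings = [reference + bias + random.gauss(0, noise) for _ in range(10)]
    mean = sum(readings) / len(readings)
    spread = (sum((r - mean) ** 2 for r in readings) / (len(readings) - 1)) ** 0.5

    print(f"spread of readings:    {spread:.3f} g")           # small: high precision
    print(f"offset from reference: {mean - reference:.3f} g")  # ~0.5 g: low accuracy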

What Limits Precision

Random errors are the main enemy of precision. These are unpredictable fluctuations that change slightly from one measurement to the next. Electronic noise in a circuit, tiny air currents affecting a balance, vibrations in a building, or irregular temperature shifts during an experiment can all introduce random variation. Because these disturbances are unpredictable, they cause your measured values to scatter around some central point rather than landing on exactly the same number every time.

The more random error present, the wider the spread of your results, and the lower your precision. Unlike systematic errors (which shift all readings in one direction and hurt accuracy), random errors pull individual readings in both directions. They’re the reason you never get the exact same number twice when measuring something carefully.
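
A short sketch makes that two-direction behavior concrete. With purely random error (hypothetical numbers below), individual readings land on both sides of the underlying value, so the errors partially cancel in the average; a systematic error would push every reading the same way and would never cancel.

    import random

    random.seed(2)
    underlying = 1.525  # metres: the quantity being measured (hypothetical)
    readings = [underlying + random.gauss(0, 0.005) for _ in range(8)]
    errors = [r - underlying for r in readings]

    print([f"{e:+.4f}" for e in errors])                    # a mix of + and - signs
    print(f"mean error: {sum(errors) / len(errors):+.4f}")  # much smaller than the spread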

How Precision Is Quantified

Physicists use standard deviation as the primary tool for putting a number on precision. To calculate it, you first find the average (mean) of your measurements. Then you look at how far each individual measurement falls from that average, square those distances, add them up, divide by one less than the number of measurements, and take the square root. A small standard deviation means your values are tightly clustered, which signals high precision. A large one means they’re spread out.
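
That recipe translates directly into a few lines of Python. This sketch applies it to the five table-length readings from the opening example; the resulting standard deviation of about 0.005 m confirms how tightly they cluster.

    import statistics

    readings = [1.52, 1.53, 1.52, 1.52, 1.53]  # metres, from the opening example

    mean = sum(readings) / len(readings)  # 1.524 m
    # Square each deviation from the mean, sum, divide by n - 1, take the root.
    s = (sum((r - mean) ** 2 for r in readings) / (len(readings) - 1)) ** 0.5

    print(f"mean = {mean:.4f} m, standard deviation = {s:.4f} m")
    print(f"statistics.stdev gives the same answer: {statistics.stdev(readings):.4f} m")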

There’s also a related quantity called the standard error, or standard deviation of the mean, which tells you something slightly different. While the standard deviation describes how far a single measurement is likely to fall from the average, the standard error describes how far your calculated average is likely to fall from the underlying mean you would converge on with infinitely many measurements (barring systematic error). You get it by dividing the standard deviation by the square root of the number of measurements. This is why taking more measurements improves your estimate: if you quadruple the number of readings, you cut the standard error in half.

When reporting results, physicists typically write a measurement as a value plus or minus an uncertainty, like 9.81 ± 0.02 m/s². That ± range communicates precision directly, telling anyone reading the result how tightly the measurements clustered.
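
Putting the last two ideas together, here is a minimal sketch (with hypothetical readings of g) that computes the standard error of the mean and reports the result in the value ± uncertainty form:

    import statistics

    readings = [9.79, 9.83, 9.80, 9.82, 9.81, 9.82, 9.79, 9.81]  # m/s², hypothetical

    mean = statistics.mean(readings)
    s = statistics.stdev(readings)  # spread of individual readings
    se = s / len(readings) ** 0.5   # standard error of the mean

    print(f"g = {mean:.2f} ± {se:.2f} m/s²")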

Significant Figures and Precision

The number of significant figures in a reported measurement is a quick shorthand for its precision. If you write that a rod is 2.34 m long, those three significant figures imply your measurement is precise to the hundredths place. Writing 2.3 m instead communicates less precision, and writing 2.340 m communicates more (the trailing zero says you’re confident in that last digit).

This is why rounding matters in physics. If your ruler can only measure to the nearest millimeter, reporting a result to the nearest tenth of a millimeter overstates your precision. The number of digits you write down should honestly reflect how precise your measurement actually was. In calculations, the rule of thumb for multiplication and division is that your final answer should carry no more significant figures than the least precise measurement that went into it (for addition and subtraction, keep no more decimal places than the least precise input).
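
Python has no built-in function for significant-figure rounding, but a small hypothetical helper shows the idea (note that trailing zeros, as in 2.340, survive only in string formatting, not in a float):

    from math import floor, log10

    def round_sig(value: float, sig_figs: int) -> float:
        """Round a value to a given number of significant figures (hypothetical helper)."""
        if value == 0:
            return 0.0
        exponent = floor(log10(abs(value)))
        return round(value, sig_figs - 1 - exponent)

    # A millimetre-resolution ruler justifies three significant figures here:
    print(round_sig(2.3417, 3))  # 2.34  -> honest for that ruler
    print(round_sig(2.3417, 4))  # 2.342 -> overstates the precision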

Instrument Resolution Is Not the Same Thing

A common source of confusion is the difference between an instrument’s resolution and the precision of your measurements. Resolution is the smallest change an instrument can detect. A digital scale that reads to 0.01 grams has a resolution of 0.01 grams. But that doesn’t automatically mean your measurements are precise to 0.01 grams. If environmental vibrations, temperature fluctuations, or other random factors cause the scale to give you readings of 4.52 g, 4.48 g, 4.55 g, and 4.50 g for the same object, your precision (the spread in those values) is much worse than the instrument’s resolution suggests.

Resolution sets a lower bound on your uncertainty: you can never be more precise than the smallest increment your instrument can detect. But the actual precision you achieve in practice depends on all the sources of random error in the entire measurement process, not just the instrument itself.
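
The four scale readings quoted above make the distinction easy to check. A quick sketch computing their spread shows an achieved precision of roughly 0.03 g, about three times worse than the 0.01 g resolution:

    import statistics

    readings = [4.52, 4.48, 4.55, 4.50]  # grams, from the example above
    resolution = 0.01                    # grams: smallest increment the scale shows

    spread = statistics.stdev(readings)
    print(f"resolution:         {resolution:.2f} g")
    print(f"achieved precision: {spread:.3f} g")  # ~0.030 g, ~3x the resolution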

Why Precision Matters in Practice

Pushing precision higher has driven some of the biggest advances in physics. The values of fundamental physical constants, like the fine-structure constant that governs how light interacts with matter, are known to extraordinary precision. The 2018 CODATA adjustment of physical constants, for example, reduced the uncertainty of the fine-structure constant, and it improved the precision of particle masses expressed in SI units by nearly two orders of magnitude because the revised SI system now defines the Planck constant, elementary charge, Boltzmann constant, and Avogadro constant as exact numbers. When those anchor points are exact, many other constants that depend on them become more precise automatically.

Not every constant has benefited equally, though. The gravitational constant, G, remains one of the least precisely known fundamental constants in physics. Different high-quality experiments continue to produce values that don’t fully agree with each other, an inconsistency that has persisted for decades. This is a case where the precision of individual experiments may be high, but the results across experiments don’t converge, pointing to unidentified systematic errors somewhere in the measurement chain.

How To Improve Precision

The most straightforward way to improve precision is to take more measurements and average them. Because the standard error shrinks with the square root of the number of measurements, going from 10 readings to 100 readings makes your average about three times more precise. This approach has diminishing returns, though. Getting another factor-of-three improvement would require jumping to 900 readings.
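
A quick simulation (with a hypothetical noise level) shows the square-root scaling at work: each step from 10 to 100 to 900 readings buys roughly a factor-of-three improvement in the standard error.

    import random
    import statistics

    random.seed(3)

    def standard_error(n: int) -> float:
        # Simulate n readings of g with purely random noise (hypothetical level).
        readings = [random.gauss(9.81, 0.05) for _ in range(n)]
        return statistics.stdev(readings) / n ** 0.5

    for n in (10, 100, 900):
        print(f"N = {n:3d}: standard error ≈ {standard_error(n):.4f} m/s²")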

Beyond simply repeating measurements, you can improve precision by controlling the environment (reducing temperature changes, vibrations, air currents), using instruments with finer resolution, and standardizing your measurement procedure so it’s done the same way every time. In advanced physics experiments, researchers go to extreme lengths: cooling detectors to near absolute zero to reduce thermal noise, isolating equipment from seismic vibrations, or shielding electronics from electromagnetic interference. Each of these targets a specific source of random error that would otherwise widen the spread of results.