What Is Error Propagation and How Does It Work?

Error propagation is a set of mathematical rules for figuring out how uncertainties in your measurements carry through into your final calculated result. Every measurement you take has some degree of imprecision, whether from the ruler, the scale, or the thermometer. When you plug those measurements into a formula, those small uncertainties combine and grow. Error propagation tells you by how much.

The concept shows up constantly in physics, chemistry, and engineering labs, but it applies anywhere you calculate a result from imperfect inputs. Understanding it lets you report not just an answer, but how confident you should be in that answer.

The Core Idea

Suppose you measure two things, each with a small uncertainty, and then use those measurements in a calculation. The uncertainty in your final answer depends on two factors: how big the individual uncertainties are, and how your formula combines the measurements. Some formulas amplify errors. Others dampen them. Error propagation gives you a systematic way to trace that path.

The standard approach uses a mathematical technique called linearization. You approximate how your result changes when each input wiggles by a tiny amount. This works well as long as the errors are small relative to the measurements themselves, the errors in different variables aren’t correlated with each other, and the uncertainties follow a roughly bell-shaped (Gaussian) distribution. In most lab settings, all three of these assumptions hold comfortably.
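This linearization can be carried out numerically, with no hand-derived derivatives. The sketch below bumps each input by a tiny step to estimate the partial derivatives, then combines the input uncertainties in quadrature, under the same small-error, independent, roughly-Gaussian assumptions. The pendulum numbers (length 1.000 ± 0.005 m and period 2.00 ± 0.01 s, fed into g = 4π²L/T²) are hypothetical values chosen for illustration.

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """First-order (linearized) uncertainty propagation.

    Numerically estimates each partial derivative of f, then combines
    the input uncertainties in quadrature. Assumes small, independent,
    roughly Gaussian errors.
    """
    center = f(*values)
    var = 0.0
    for i, (v, s) in enumerate(zip(values, sigmas)):
        bumped = list(values)
        step = h * max(abs(v), 1.0)
        bumped[i] = v + step
        deriv = (f(*bumped) - center) / step   # forward-difference ∂f/∂x_i
        var += (deriv * s) ** 2
    return center, math.sqrt(var)

# Hypothetical pendulum: g = 4*pi^2*L / T^2
g_est, sigma_g = propagate(
    lambda L, T: 4 * math.pi**2 * L / T**2,
    (1.000, 2.00),    # L in meters, T in seconds
    (0.005, 0.01),    # their absolute uncertainties
)
# g_est ≈ 9.87, sigma_g ≈ 0.11
```

The same function works for any formula you can write as Python code, which makes it a convenient cross-check on the hand rules that follow.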

Rules for Common Operations

You don’t need calculus every time you propagate an error. For the most common operations, the rules simplify into patterns you can memorize.

Addition and Subtraction

If your result is z = x + y or z = x − y, the absolute uncertainty in z is the sum of the absolute uncertainties in x and y; in the quadrature version (introduced below for multiplication), you instead take the square root of the sum of their squares. Notice that subtraction doesn’t cancel the errors. It adds them, just like addition does. This is why subtracting two nearly equal numbers can produce a result with enormous relative uncertainty, even when each individual measurement is quite precise.
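The subtraction blow-up is easy to see numerically. In this sketch (with hypothetical length readings), two measurements each good to 0.5 differ by only 1.0, and the difference ends up with roughly 70% relative uncertainty:

```python
import math

def add_sub_uncertainty(sx, sy):
    """Absolute uncertainty of z = x + y or z = x - y.

    Returns (simple sum rule, quadrature rule for independent errors).
    """
    return sx + sy, math.sqrt(sx**2 + sy**2)

# Subtracting two nearly equal lengths: 100.0 ± 0.5 and 99.0 ± 0.5
simple, quad = add_sub_uncertainty(0.5, 0.5)
z = 100.0 - 99.0          # the difference is 1.0
rel = quad / z            # ≈ 0.71, i.e. about 71% relative uncertainty
```

Each input had a relative uncertainty of only 0.5%, yet the difference is barely better than a guess.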

Multiplication and Division

When you multiply or divide, you work with relative (or percentage) uncertainties instead. If z = x × y or z = x / y, you add the relative uncertainties of x and y to get the relative uncertainty of z. In the more precise “quadrature” version used in most lab courses, you square each relative uncertainty, add them, and take the square root. Written out, that looks like: the relative uncertainty in z equals the square root of (relative uncertainty in x)² + (relative uncertainty in y)². This quadrature version avoids overestimating the error by accounting for the statistical reality that not all errors will push the result in the same direction at once.
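A minimal helper for the quadrature rule, applied here to a hypothetical speed calculation (a distance of 100.0 ± 0.5 m covered in 9.80 ± 0.05 s):

```python
import math

def rel_uncertainty_ratio(x, sx, y, sy):
    """Relative uncertainty of z = x * y or z = x / y (quadrature rule)."""
    return math.sqrt((sx / x) ** 2 + (sy / y) ** 2)

speed = 100.0 / 9.80                              # ≈ 10.2 m/s
rel = rel_uncertainty_ratio(100.0, 0.5, 9.80, 0.05)
sigma_speed = speed * rel                         # ≈ 0.07 m/s
```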

Powers

When a variable is raised to a power, the exponent multiplies the relative uncertainty. If z = x², the relative uncertainty in z is twice the relative uncertainty in x. For a general formula z = x^p × y^q, the relative uncertainty becomes p times the relative uncertainty of x plus q times the relative uncertainty of y (or the quadrature equivalent). This means that squaring a value doubles its relative error, cubing it triples the error, and taking a square root cuts the error in half.
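The power rule in code, as a small sketch of the quadrature form:

```python
import math

def rel_uncertainty_power(rel_x, p, rel_y=0.0, q=0.0):
    """Relative uncertainty of z = x**p * y**q, quadrature form:
    sqrt((p * rel_x)**2 + (q * rel_y)**2)."""
    return math.sqrt((p * rel_x) ** 2 + (q * rel_y) ** 2)

rel_uncertainty_power(0.01, 2)             # squaring doubles it: ≈ 0.02
rel_uncertainty_power(0.01, 0.5)           # square root halves it: ≈ 0.005
rel_uncertainty_power(0.01, 1, 0.02, -1)   # division is the case p=1, q=-1
```

Note that division falls out as the special case p = 1, q = −1, which recovers the multiplication/division rule above.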

A Practical Example

Imagine you’re calculating the density of a metal block. You measure its mass as 150.0 ± 0.5 grams and its volume as 20.0 ± 0.3 cubic centimeters. Density equals mass divided by volume, so you use the multiplication/division rule. The relative uncertainty in mass is 0.5/150.0 = 0.33%. The relative uncertainty in volume is 0.3/20.0 = 1.5%. Using the quadrature method, the combined relative uncertainty is the square root of (0.0033)² + (0.015)², which comes out to about 1.54%.

Your calculated density is 7.50 g/cm³, and 1.54% of that is roughly 0.12 g/cm³. So you’d report the density as 7.50 ± 0.12 g/cm³. Notice how the volume measurement, with its larger relative uncertainty, dominated the final error. This kind of insight is one of the practical benefits of error propagation: it tells you which measurement to improve if you need a more precise result.
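The density calculation above can be checked in a few lines; the numbers come straight from the worked example:

```python
import math

mass, sigma_m = 150.0, 0.5      # grams
volume, sigma_v = 20.0, 0.3     # cubic centimeters

density = mass / volume         # 7.50 g/cm^3
rel = math.sqrt((sigma_m / mass) ** 2 + (sigma_v / volume) ** 2)
sigma_rho = density * rel

print(f"{density:.2f} ± {sigma_rho:.2f} g/cm³")   # prints 7.50 ± 0.12 g/cm³
```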

When the Simple Rules Break Down

The standard formulas rely on the assumption that you can approximate your function as a straight line over the small range of your uncertainty. For most lab work, this linear approximation is perfectly adequate. But it can fail in specific situations.

Highly nonlinear functions can cause trouble. If your calculation involves exponentials, logarithms, or sharp curves, the linear approximation may underestimate or overestimate the true uncertainty. Research at Delft University of Technology demonstrated that the standard Taylor-based method can introduce a visible offset in the estimated mean when higher-order terms become significant, something the simple formulas completely miss.

Non-Gaussian inputs also pose a problem. If your measurement errors follow a skewed or otherwise non-bell-shaped distribution, the standard formulas may not capture the true shape of the uncertainty in your result. And when your input errors are correlated (meaning one tends to be high when the other is), you need additional correction terms that the basic rules don’t include.

Monte Carlo Simulation as an Alternative

When the standard formulas aren’t reliable, Monte Carlo simulation offers a powerful alternative. Instead of deriving a formula, you let a computer generate millions of random input values, each drawn from the known uncertainty distribution of that measurement. You run your calculation on every set of random inputs and then look at the spread of the results.

This approach has several advantages. It handles nonlinear functions naturally, works with any distribution shape, and gives you the full probability distribution of your result rather than just a single uncertainty number. That full distribution can be valuable for hypothesis testing, setting confidence levels, or identifying asymmetric uncertainties where the error bar is larger in one direction than the other. Results from Monte Carlo simulations generally stabilize after about a million samples, which takes modern computers only seconds.

The tradeoff is that Monte Carlo methods require computation and programming rather than pencil-and-paper math, making them overkill for straightforward lab calculations where the standard rules work fine.
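For the density example, the Monte Carlo approach can be sketched with the standard library alone, with no derivatives required; the sample count here is kept modest so the run takes a fraction of a second:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def mc_density(n=100_000):
    """Monte Carlo propagation for the density example: draw mass and
    volume from Gaussians matching their stated uncertainties, compute
    density for each draw, and summarize the spread of the results."""
    samples = [
        random.gauss(150.0, 0.5) / random.gauss(20.0, 0.3)
        for _ in range(n)
    ]
    return statistics.mean(samples), statistics.stdev(samples)

mean_rho, sigma_rho = mc_density()
# Should land near the analytic result, 7.50 ± 0.12 g/cm³
```

Because the full list of samples is available, you could also take percentiles of it to read off asymmetric error bars, something the linearized formulas cannot give you.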

Formal Standards for Reporting Uncertainty

In professional metrology (the science of measurement), error propagation follows an internationally recognized framework called the Guide to the Expression of Uncertainty in Measurement, or GUM, maintained by the Joint Committee for Guides in Metrology. The GUM standardizes how uncertainty should be evaluated and reported, and it explicitly codifies the law of propagation of uncertainty as its central method.

Under this framework, uncertainty can be expressed in several ways: as a standard uncertainty (analogous to one standard deviation), as an expanded uncertainty with a coverage factor (similar to a confidence interval), or as a full probability distribution. If you work in a field where measurements need to meet regulatory or quality standards, the GUM framework is the reference point for how to handle and communicate your uncertainties.

Why It Matters Beyond the Lab

Error propagation isn’t just a homework exercise. Any time you build a conclusion from imprecise data, you’re dealing with propagated uncertainty, whether you calculate it or not. Engineers designing a bridge need to know how tolerances in material strength and load estimates combine to affect safety margins. Climate scientists combining satellite data, ocean temperature readings, and atmospheric models need to track how uncertainty flows through each step. Financial analysts estimating a company’s value from uncertain revenue projections and discount rates face the same underlying math.

The core lesson of error propagation is that uncertainty doesn’t disappear when you do math with it. It transforms, and often grows. Knowing by how much is the difference between a result you can trust and one that only looks precise.