Absolute error is the difference between a measured (or estimated) value and the true value of a quantity. It tells you how far off a measurement is in concrete, real-world units. If you measure a table as 1.52 meters long and its true length is 1.55 meters, your absolute error is 0.03 meters. The formula is simple: take the measured value, subtract the true value, and ignore the sign.
The Formula
Absolute error is calculated as:
Absolute Error = |Measured Value − True Value|
The vertical bars mean you take the absolute value, so the result is always positive. It doesn’t matter whether your measurement was too high or too low. A reading of 102 grams when the true value is 100 grams gives the same absolute error (2 grams) as a reading of 98 grams.
One important property: absolute error always carries the same units as the measurement itself. If you’re measuring length in meters, the error is in meters. If you’re measuring mass in grams, the error is in grams. A length measurement of 0.428 m ± 0.002 m has an absolute error of 0.002 meters.
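The calculation is simple enough to sketch in a couple of lines. A minimal Python version, using the example values from above:

```python
def absolute_error(measured, true_value):
    """Unsigned gap between a measurement and the true value, in the measurement's own units."""
    return abs(measured - true_value)

# Overshooting and undershooting by the same amount give the same error:
print(absolute_error(102, 100))  # 2 (grams)
print(absolute_error(98, 100))   # 2 (grams)
```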
Absolute Error vs. Relative Error
Absolute error tells you the size of the gap in raw units, but it doesn’t tell you how significant that gap is. Being off by 1 centimeter when measuring a football field is trivial. Being off by 1 centimeter when measuring a microchip is catastrophic. That’s where relative error comes in.
Relative error expresses the absolute error as a fraction (or percentage) of the true value. If you weigh an object at 3.28 g on a balance accurate to ±0.1 g, your absolute error is 0.1 g and your relative error is about 3 percent, which gives you a better sense of how trustworthy the measurement actually is.
Use absolute error when you need to know the actual size of the discrepancy in real units. Use relative error when you want to compare the quality of measurements across different scales or different experiments.
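The football-field-versus-microchip contrast can be made concrete. This sketch (with illustrative numbers: a 100 m field and a 1 cm chip, each measured 1 cm off) shows how the same absolute error produces wildly different relative errors:

```python
def relative_error(measured, true_value):
    """Absolute error expressed as a fraction of the true value."""
    return abs(measured - true_value) / abs(true_value)

# Same 1 cm (0.01 m) absolute error, very different significance:
field = relative_error(100.01, 100.00)  # 100 m field measured 1 cm long
chip = relative_error(0.02, 0.01)       # 1 cm chip measured as 2 cm
print(f"field: {field:.4%}")  # field: 0.0100%
print(f"chip: {chip:.0%}")    # chip: 100%
```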
How Absolute Error Shows Up in Practice
In lab and field work, absolute error appears every time you report a measurement with a “±” value. That ± number is the absolute uncertainty, and it defines the range within which the true value likely falls.
Consider measuring the length of a dog by marking where its nose and tail align against a wall. You measure the tail position as 1.53 ± 0.05 m and the nose position as 0.76 ± 0.02 m. The dog’s length is 0.77 m, but the absolute errors from both measurements add together, giving a combined uncertainty of ±0.07 m. You’d report the length as 0.77 ± 0.07 m.
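The dog-length arithmetic follows a simple rule: when you subtract two measurements, their absolute uncertainties add. A small sketch of that rule, using the numbers above:

```python
def subtract_with_error(a, a_err, b, b_err):
    """Difference of two measurements; absolute uncertainties add on subtraction."""
    return a - b, a_err + b_err

# Tail position minus nose position gives the dog's length:
length, uncertainty = subtract_with_error(1.53, 0.05, 0.76, 0.02)
print(f"{length:.2f} ± {uncertainty:.2f} m")  # 0.77 ± 0.07 m
```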
Errors also propagate through calculations. If you measure an angle as 47.3 ± 0.5 degrees and need the sine of that angle, you can estimate the error by calculating sine at the upper and lower bounds (47.8° and 46.8°). The sine values range from 0.729 to 0.741, so you’d report your result as 0.735 ± 0.006. The absolute error in your final answer reflects the uncertainty you started with, carried through the math.
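The upper-and-lower-bounds method described here is easy to automate. A sketch that evaluates any function at the edges of the input's uncertainty range and takes half the spread as the output error:

```python
import math

def propagate_through(f, x, x_err):
    """Estimate output uncertainty by evaluating f at the bounds of x."""
    lo, hi = f(x - x_err), f(x + x_err)
    return f(x), abs(hi - lo) / 2

sin_of_degrees = lambda deg: math.sin(math.radians(deg))
value, err = propagate_through(sin_of_degrees, 47.3, 0.5)
print(f"{value:.3f} ± {err:.3f}")  # 0.735 ± 0.006
```

This bounds-based approach works well when the function is roughly linear over the uncertainty range, which is usually true for small errors.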
Accuracy, Precision, and What Error Measures
Absolute error specifically quantifies accuracy, which is how close a measurement comes to the true value. This is different from precision, which describes how repeatable a measurement is. You can be very precise (getting 5.12, 5.13, and 5.11 on three tries) but inaccurate if the true value is actually 5.40. Absolute error captures that gap between what you got and what you should have gotten, not how consistent your results were.
Reporting Error Correctly
When you report absolute error, match your decimal places to the precision of your instrument. If a balance is accurate to ±0.1 mg, every mass measurement from that balance carries ±0.1 mg of uncertainty, no matter what the reading says. Decimal places also imply uncertainty on their own: a reported value of 3.98 g implies an absolute uncertainty of about ±0.005 g (or 0.01 g total range), because rounding to the hundredths place means the true value could be anywhere from 3.975 to 3.985 g.
The general rule: round your error to one or two significant figures, then round your measurement to match the same decimal place. Reporting a length as 12.3456 ± 0.1 m is misleading because those extra decimal places suggest false precision beyond what your error allows.
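That rounding rule can be sketched as a small helper. The function name and structure here are illustrative, not a standard API; it rounds the error to one significant figure, then rounds the value to the same decimal place:

```python
import math

def report(value, error, sig_figs=1):
    """Round error to sig_figs significant figures, then round the value to match."""
    # Decimal place of the error's leading significant figure
    exponent = math.floor(math.log10(abs(error)))
    decimals = max(0, sig_figs - 1 - exponent)
    return f"{round(value, decimals):.{decimals}f} ± {round(error, decimals):.{decimals}f}"

print(report(12.3456, 0.1))  # 12.3 ± 0.1 — the extra decimals were false precision
```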
Mean Absolute Error in Data Science
The concept of absolute error scales up when you’re evaluating predictions across an entire dataset. Mean absolute error (MAE) is one of the most common metrics for judging how well a forecasting or machine learning model performs. It works by calculating the absolute error for every prediction, then averaging them all together.
MAE has a few properties that make it popular. The result is in the same units as whatever you’re predicting, so it’s immediately interpretable. If your model predicts house prices and your MAE is $12,000, that means the model is off by $12,000 on average. Changes in MAE are also linear, meaning every additional dollar of error increases the score by the same amount. This contrasts with metrics like root mean squared error, which penalizes large errors disproportionately because it squares them before averaging.
An MAE of zero means a perfect fit. In practice, you compare MAE values across different models to see which one produces predictions closest to reality. Because it treats all errors equally regardless of direction or magnitude, MAE gives a straightforward, intuitive picture of prediction quality.
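MAE is just the per-prediction absolute error averaged over the dataset. A minimal sketch, with made-up house-price data for illustration:

```python
def mean_absolute_error(y_true, y_pred):
    """Average of the absolute errors across all predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical house prices, in dollars:
actual    = [250_000, 310_000, 195_000]
predicted = [262_000, 298_000, 205_000]
print(mean_absolute_error(actual, predicted))  # about 11,333: off by ~$11k on average
```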