A rounding error is the small difference between a calculated or stored number and its true mathematical value, caused by the need to fit numbers into a limited amount of space. Every calculator, spreadsheet, and computer program works with a finite number of digits, so numbers that can’t be represented exactly get rounded to the closest value the system can handle. That tiny gap between the real number and the stored number is the rounding error.
These errors are usually invisible in everyday math. But in computing, finance, and engineering, they can pile up across thousands or millions of calculations and produce results that are significantly wrong.
Why Computers Can’t Store Every Number
Computers store numbers in binary (base 2) rather than the decimal (base 10) system humans use. This creates a fundamental problem: some numbers that look perfectly simple in decimal become infinitely repeating strings in binary. The classic example is 0.1. In decimal, it’s just one-tenth. In binary, it becomes the infinitely repeating expansion 0.000110011001100…, which never resolves cleanly, much like how 1/3 becomes 0.333… in decimal. Since the computer only has a fixed number of digits to work with, it chops off that infinite tail and stores the closest approximation it can.
This is why you’ll occasionally see bizarre results in programming, like adding 0.1 and 0.2 and getting 0.30000000000000004 instead of 0.3. Neither 0.1 nor 0.2 is stored exactly, and their small errors combine when you add them.
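You can reproduce this in a few lines. The sketch below uses Python, whose floats are 64-bit binary values; comparing with a tolerance via math.isclose is one common workaround, not the only one:

```python
import math

# Neither 0.1 nor 0.2 is stored exactly, so their sum misses 0.3
# by one unit in the last place.
total = 0.1 + 0.2
print(total)           # 0.30000000000000004
print(total == 0.3)    # False

# Comparing with a tolerance sidesteps the representation error.
print(math.isclose(total, 0.3))  # True
```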
A standard 64-bit floating-point number (the IEEE 754 “double” format most software uses) carries roughly 15 to 16 significant decimal digits. The relative rounding error at this precision is characterized by machine epsilon, about 2.2 × 10⁻¹⁶: the gap between 1 and the next number the format can represent. That’s vanishingly small for a single calculation, but it’s not zero, and it doesn’t stay small forever.
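Python exposes this value directly, so the limit is easy to see in action:

```python
import sys

# Machine epsilon for a 64-bit float: the gap between 1.0 and the
# next representable number above it.
eps = sys.float_info.epsilon
print(eps)  # 2.220446049250313e-16

# Anything much smaller than eps simply vanishes when added to 1.0.
print(1.0 + eps / 4 == 1.0)  # True
```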
How Small Errors Become Big Problems
Rounding errors don’t just sit quietly in one calculation. They propagate through every operation that follows, and the way they grow depends on what kind of math you’re doing.
When you add or subtract numbers, the absolute errors add together (in the worst case). If you’re combining ten numbers that each carry a tiny error, the final result can carry roughly ten times the error of any single number. This is straightforward, and for small datasets it’s manageable.
Multiplication and division work differently. Instead of absolute errors adding up, it’s the fractional (or percentage) errors that accumulate. Multiply three numbers together, and the percentage error in the result is roughly the sum of the three individual percentage errors. This means a long chain of multiplications can quietly amplify errors in a way that’s harder to predict.
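A quick numeric sketch shows both behaviors side by side; the 1% errors here are invented purely for illustration:

```python
# Three measurements, each carrying an invented +1% relative error.
true_vals = [10.0, 20.0, 30.0]
measured = [v * 1.01 for v in true_vals]

# Addition: absolute errors add. Each value is off by 1% of itself,
# so the sum is off by the sum of those absolute errors (about 0.6).
sum_err = sum(measured) - sum(true_vals)
print(sum_err)

# Multiplication: relative errors add. 1% + 1% + 1% gives about 3%.
prod_err = (measured[0] * measured[1] * measured[2]) / (10.0 * 20.0 * 30.0) - 1
print(f"{prod_err:.4%}")  # roughly 3.03%
```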
A particularly dangerous scenario involves subtracting two numbers that are very close in value. If both numbers carry a small rounding error but their difference is tiny, that error can suddenly represent a huge fraction of the result. This is sometimes called “catastrophic cancellation,” and it’s a well-known trap in scientific computing. Similarly, when adding a very large number and a very small number, the small number may be lost entirely because the system doesn’t have enough digits to represent both the large value and the fine detail of the small one.
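Both traps are easy to trigger deliberately. The sketch below uses a textbook example of our choosing, sqrt(x+1) − sqrt(x) for large x, not anything from the text above:

```python
import math

# Catastrophic cancellation: sqrt(x + 1) - sqrt(x) for large x
# subtracts two nearly equal values, wiping out most of the digits.
x = 1e12
naive = math.sqrt(x + 1) - math.sqrt(x)

# Algebraically identical form with no subtraction:
# sqrt(x+1) - sqrt(x) = 1 / (sqrt(x+1) + sqrt(x))
stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))
print(naive, stable)  # the naive version is visibly off

# Absorption: a small number added to a huge one disappears entirely.
print(1e16 + 1.0 == 1e16)  # True
```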
Rounding Error vs. Truncation Error
These two terms sound similar but refer to different things. Rounding error comes from representing numbers with limited precision. Truncation error comes from using a simplified version of a mathematical formula instead of the exact one.
For example, many calculations in physics and engineering rely on infinite series, equations that technically require adding up an infinite number of terms to get the perfect answer. In practice, a computer stops after a certain number of terms and calls it close enough. The difference between that approximation and the true answer is the truncation error. It has nothing to do with how numbers are stored and everything to do with stopping a calculation early.
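As a sketch, approximating eˣ from its Taylor series and stopping after n terms makes the truncation error explicit (math.exp stands in for the “true” answer):

```python
import math

def exp_series(x, n_terms):
    """Sum the first n_terms of the Taylor series for e**x."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)  # next term: x**(k+1) / (k+1)!
    return total

# Stopping earlier leaves a larger truncation error.
for n in (3, 6, 12):
    approx = exp_series(1.0, n)
    print(n, approx, abs(approx - math.exp(1.0)))
```

The printed error shrinks as more terms are kept, which is exactly the trade-off the text describes: the formula is exact only in the infinite limit.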
Both types of error can exist in the same calculation. A physics simulation might carry truncation error from its simplified equations and rounding error from the way its numbers are stored, with each compounding the other across millions of time steps.
The Vancouver Stock Exchange Disaster
One of the most cited examples of rounding error happened at the Vancouver Stock Exchange in 1982. The exchange launched a new stock index, starting at 1,000.000. Every time a trade occurred, the index was recalculated and updated. But the programmers made a critical choice: instead of rounding the index value to the nearest representable number, they truncated it, simply chopping off the extra decimal places.
Truncating a positive number always rounds it down. With thousands of trades happening every day, each update shaved off a tiny sliver of value. Twenty-two months later, the index had fallen to around 520, nearly half its starting value, even though the stocks it tracked hadn’t actually lost that much. When the calculation was redone with proper rounding, the corrected index read 1,098.892. The difference between 520 and 1,098 was entirely artificial, the accumulated result of thousands of tiny downward nudges from truncation.
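The mechanism is easy to simulate. The sketch below invents a stream of tiny, zero-mean index moves (nothing here reflects the exchange’s real trade data) and applies both update rules to the same moves:

```python
import random

random.seed(42)

def truncate3(value):
    """Chop to three decimal places without rounding -- always a
    downward adjustment for positive values."""
    return int(value * 1000) / 1000

index_truncated = 1000.0
index_rounded = 1000.0
for _ in range(10_000):
    move = random.uniform(-0.0005, 0.0005)  # tiny, averages to zero
    index_truncated = truncate3(index_truncated + move)
    index_rounded = round(index_rounded + move, 3)

# Same moves, very different outcomes: truncation drifts steadily down,
# while rounding stays near the starting value.
print(index_truncated, index_rounded)
```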
Different Ways to Round
The rounding you learned in school, where you round 5 up, is called arithmetic rounding (or “round half up”). It’s intuitive, but it has a flaw: it introduces a systematic upward bias. Because 5 is exactly halfway between two values, always pushing it upward means that over many calculations, your results drift slightly higher than they should be.
To see how this plays out, consider the ten midpoint values 0.5, 1.5, 2.5, …, 9.5, each rounded to the nearest whole number using the “always round 5 up” rule. The average of the original numbers is exactly 5. After rounding up, the average becomes 5.5. That half-point bias might seem negligible, but spread it across millions of financial transactions or data points and it distorts the total meaningfully.
The alternative used in most modern computing is called “round half to even,” often known as banker’s rounding. When a number falls exactly on the midpoint, this method rounds to whichever neighbor is even. So 2.5 rounds to 2, but 3.5 rounds to 4. It sounds arbitrary, but the effect is that roughly half the midpoint values get rounded up and half get rounded down. Using the same 0.5-to-9.5 sequence, banker’s rounding produces an average of exactly 5, preserving the true mean with no directional bias. This method is the default rounding mode in IEEE 754, the technical standard that governs floating-point arithmetic on virtually every modern computer.
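A sketch comparing the two rules on that same sequence; conveniently, Python’s built-in round() already implements half-to-even for floats:

```python
import math

# The ten midpoint values 0.5, 1.5, ..., 9.5. All are exact in
# binary, so the comparison isn't polluted by representation error.
vals = [k + 0.5 for k in range(10)]

# "Round half up": valid here only because every value is a midpoint.
half_up = [math.floor(v) + 1 for v in vals]

# Banker's rounding: Python's round() is round-half-to-even.
bankers = [round(v) for v in vals]

print(sum(vals) / len(vals))      # 5.0 -- the true mean
print(sum(half_up) / len(vals))   # 5.5 -- systematic upward bias
print(sum(bankers) / len(vals))   # 5.0 -- the bias cancels out
```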
Rounding in Finance and Accounting
Financial systems face rounding errors constantly because currency has a smallest unit (a cent, a penny) but calculations often produce values with many more decimal places. Interest calculations, tax computations, and currency conversions all generate fractional cents that need to go somewhere.
Under Generally Accepted Accounting Principles (GAAP), financial statements are allowed to contain minor discrepancies from rounding, typically to the nearest dollar or cent. Loan agreements often specify exactly which rounding method to use. A common contractual requirement is to carry financial ratios to one extra decimal place beyond what’s required, then round using symmetric arithmetic rounding. This prevents disputes over tiny differences while keeping results consistent.
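Python’s decimal module can express this kind of contractual rule directly. The sketch below uses invented numbers; the specific ratio and decimal places are illustrative, not from any particular agreement:

```python
from decimal import Decimal, ROUND_HALF_UP

# An invented financial ratio, computed in exact decimal arithmetic.
ratio = Decimal("1234.56") / Decimal("987.65")

# Carry one extra decimal place, then apply symmetric arithmetic
# ("half up") rounding -- the contractual pattern described above.
one_extra = ratio.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)
final = one_extra.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)
print(one_extra, final)  # 1.2500 1.250
```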
Large organizations that consolidate hundreds of subsidiary financial statements can see rounding discrepancies of thousands of dollars, not because any single number is wrong, but because each subsidiary rounds independently and the small differences accumulate when everything is added together. Accounting software typically includes reconciliation tools specifically to track and explain these rounding gaps.
Reducing Rounding Error in Practice
You can’t eliminate rounding error entirely, but you can manage it. The most common strategies are straightforward.
- Use higher precision when it matters. Most programming languages offer extended-precision number types that store more digits. Financial software often uses fixed-point decimal formats that represent cents exactly, avoiding the binary conversion problem altogether.
- Round late, not early. Keeping full precision through intermediate steps and only rounding the final result prevents errors from compounding across a chain of operations.
- Sort before summing. When adding a long list of numbers that vary widely in size, adding the smallest values first and the largest last reduces the chance of small numbers being swallowed by large ones.
- Use banker’s rounding. For any process that rounds large volumes of data, round-half-to-even prevents the systematic bias that arithmetic rounding introduces.
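A few of these strategies in miniature. Here math.fsum, Python’s compensated-summation helper, stands in for the careful-summation advice above; it is one technique among several:

```python
import math
from decimal import Decimal

# Fixed-point decimal: cents are represented exactly, so the
# 0.1 + 0.2 problem from binary floats disappears.
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True

# Naive left-to-right summation: ten 1.0s are swallowed whole
# by the huge first value.
values = [1e16] + [1.0] * 10
naive = 0.0
for v in values:
    naive += v
print(naive == 1e16)  # True -- the small values vanished

# math.fsum tracks the lost low-order bits and recovers the
# correctly rounded sum.
print(math.fsum(values) == 1e16 + 10)  # True
```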
The core principle behind all of these approaches is the same: treat every stored number as an approximation carrying a small, invisible error, and structure your calculations so those errors stay small and don’t drift in one direction.

