What Is Forecast Accuracy and How Is It Measured?

Forecast accuracy measures how close a prediction comes to what actually happens. In business, it’s most commonly used to evaluate demand forecasts: you predicted you’d sell 1,000 units, you actually sold 900, and forecast accuracy tells you how far off you were. The most widely used formula expresses this as a percentage: calculate the average percentage error across all your forecasts, then subtract that from 100%. A result of 85% means your forecasts were off by an average of 15%.

How Forecast Accuracy Is Calculated

The standard approach starts with a metric called Mean Absolute Percentage Error, or MAPE. For each forecast period, you take the difference between the actual value and the forecast value, divide by the actual value, and express it as a percentage. You then average those percentages across all periods. Forecast accuracy is simply 100% minus that average error. If your MAPE is 12%, your forecast accuracy is 88%.

The “absolute” part matters. Without it, over-forecasts and under-forecasts would cancel each other out, making a wildly inaccurate forecast look perfect on paper. By ignoring the direction of each error and treating them all as positive numbers, MAPE captures the true magnitude of your misses.
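As a minimal sketch (plain Python, with illustrative names), the calculation might look like this:

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent.

    Takes the absolute value of each error so over- and
    under-forecasts can't cancel each other out.
    """
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

actuals = [100, 120, 80]
forecasts = [110, 108, 88]
error = mape(actuals, forecasts)   # 10.0 -> each period was off by 10%
accuracy = 100 - error             # 90.0 -> forecast accuracy of 90%
```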

Other Ways to Measure Forecast Error

MAPE is popular because percentages are easy to interpret and compare across products. But it has a well-known weakness: when actual demand is very low or close to zero, even a small miss produces a huge percentage error that distorts the average. If actual demand was 2 units and you forecast 5, that’s a 150% error for a difference of just 3 units.

Several alternative metrics address this and other limitations:

  • Weighted Absolute Percentage Error (WAPE) weights each error by the size of the actual value instead of treating every data point equally. This makes it far more reliable for businesses with low-demand products or off-peak periods, because a miss on a slow-moving item won’t blow up the overall number.
  • Mean Absolute Error (MAE) simply averages the raw differences between forecast and actual values, with no percentage conversion. It’s useful when you want to know error in the same units you’re forecasting, like “we were off by an average of 50 units.”
  • Mean Squared Error (MSE) squares each error before averaging. Squaring penalizes large errors much more heavily than small ones, which is helpful when big misses are disproportionately costly.
  • Root Mean Squared Error (RMSE) takes the square root of MSE, bringing the result back into the original units while still preserving that extra penalty for large errors.
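To make the trade-offs concrete, here is a sketch of all four metrics (function names are illustrative), applied to the low-demand case described earlier, where MAPE explodes but WAPE stays sensible:

```python
from math import sqrt

def mape(actuals, forecasts):
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def wape(actuals, forecasts):
    # Dividing by total actual demand implicitly weights each
    # error by the size of its actual value.
    return 100 * sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals)

def mae(actuals, forecasts):
    # Average error in the original units (e.g. "off by 1.5 units").
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def mse(actuals, forecasts):
    # Squaring penalizes large misses much more than small ones.
    return sum((a - f) ** 2 for a, f in zip(actuals, forecasts)) / len(actuals)

def rmse(actuals, forecasts):
    # Square root brings MSE back into the original units.
    return sqrt(mse(actuals, forecasts))

# One slow mover (actual 2, forecast 5) next to one steady seller.
actuals, forecasts = [2, 100], [5, 100]
mape(actuals, forecasts)   # 75.0  -> the 3-unit miss dominates the average
wape(actuals, forecasts)   # ~2.94 -> weighted by volume, the miss is tiny
```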

No single metric works best in every situation. The right choice depends on whether you care more about percentage performance, raw unit differences, or the cost of occasional large misses.

Accuracy vs. Bias

Forecast accuracy and forecast bias answer two different questions. Accuracy tells you how far off your forecasts are. Bias tells you whether they consistently lean in one direction.

A forecast with low accuracy could be erratic in both directions, overshooting one month and undershooting the next. A biased forecast, on the other hand, systematically over-forecasts or under-forecasts. You can spot bias by looking at the errors without taking absolute values: if the sum is consistently positive, you’re over-forecasting. If it’s consistently negative, you’re under-forecasting. A forecast can be reasonably accurate on average yet still carry a persistent bias that creates real problems, like gradually building up excess inventory over time.
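A quick sketch of this signed-error check (the sign convention assumed here is forecast minus actual, so a positive sum means over-forecasting):

```python
def bias(actuals, forecasts):
    """Sum of signed errors (forecast - actual).

    Positive -> consistently over-forecasting.
    Negative -> consistently under-forecasting.
    """
    return sum(f - a for a, f in zip(actuals, forecasts))

# Every month lands about 10 units high: accuracy looks decent,
# but the direction of the miss never changes.
actuals   = [100, 105, 95, 100]
forecasts = [110, 115, 105, 110]
bias(actuals, forecasts)   # 40 -> persistent over-forecasting
```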

One tool for catching bias early is the tracking signal, which divides the running sum of forecast errors by the mean absolute deviation. Values near zero indicate normal variation. Once the signal drifts beyond roughly ±4 in either direction, it suggests a systematic bias that needs correction; in practice, cutoffs anywhere from 3.75 to 4.5 are used, depending on how sensitive the alarm needs to be.
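A minimal sketch of the tracking signal, again using the forecast-minus-actual sign convention:

```python
def tracking_signal(actuals, forecasts):
    """Running sum of signed errors divided by the mean absolute deviation.

    Sign convention: forecast - actual, so positive = over-forecasting.
    Assumes at least one nonzero error (MAD of zero would mean a
    perfect forecast, and no signal is needed).
    """
    errors = [f - a for a, f in zip(actuals, forecasts)]
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad

# Five periods of consistent 10-unit over-forecasting.
actuals   = [100, 100, 100, 100, 100]
forecasts = [110, 110, 110, 110, 110]
tracking_signal(actuals, forecasts)   # 5.0 -> well past the alarm threshold
```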

What Makes Forecasts More or Less Accurate

The single biggest factor affecting achievable accuracy is demand volatility. Statisticians measure this with the coefficient of variation: the standard deviation of demand divided by the average. Products with a high coefficient of variation, meaning demand swings widely relative to its average, are inherently harder to forecast. A staple grocery item with steady weekly sales might be forecastable to within a few percentage points. A seasonal specialty product with unpredictable spikes could have a MAPE of 40% or more, and that might be the best anyone can do.
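The coefficient of variation is straightforward to compute; a sketch using Python's standard library (the demand series here are made up for illustration):

```python
from statistics import mean, pstdev

def coefficient_of_variation(demand):
    """Standard deviation of demand divided by its average."""
    return pstdev(demand) / mean(demand)

steady = [100, 102, 98, 101, 99]   # staple item with stable weekly sales
spiky  = [10, 200, 5, 150, 20]     # seasonal item with unpredictable spikes

coefficient_of_variation(steady)   # ~0.01 -> forecastable to within a few points
coefficient_of_variation(spiky)    # ~1.06 -> inherently hard to forecast
```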

Forecast horizon also matters. Predictions further into the future are less accurate in absolute terms. An eight-quarter forecast will have a larger error than a two-quarter forecast for the same variable. This is intuitive: more can change over longer periods. Short-term forecasts benefit from recent trends and partial indicators that simply aren’t available when looking further ahead.

Combining multiple forecasting methods tends to improve accuracy. Research using large-scale competition data has found that blended forecasts perform more consistently than any single method, producing less bias and lower error variance. This is because different models capture different patterns in the data, and their individual mistakes partially cancel out when averaged together.
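The cancellation effect is easy to demonstrate with a naive equal-weight average of two hypothetical models whose errors lean in opposite directions (all numbers here are illustrative):

```python
def mape(actuals, forecasts):
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals = [100, 110, 120]
model_a = [90, 100, 130]    # tends to miss low early, high late
model_b = [115, 125, 115]   # the opposite pattern

# Simplest possible blend: average the two forecasts period by period.
blend = [(a + b) / 2 for a, b in zip(model_a, model_b)]

mape(actuals, model_a)   # ~9.1%
mape(actuals, model_b)   # ~10.9%
mape(actuals, blend)     # ~2.3% -> opposite errors largely cancel
```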

Why Forecast Accuracy Matters for Inventory

Better forecast accuracy directly affects how much safety stock a business needs to carry. Safety stock exists to buffer against forecast errors. When forecasts improve, the buffer can shrink, reducing the cost of holding inventory. Research into the relationship between forecasting methods and inventory performance has confirmed that more accurate approaches require less safety stock to maintain the same service level.
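A common textbook formulation of this link (not from the source; names are illustrative) multiplies a service-level z-score by the standard deviation of forecast error and the square root of lead time, so halving the error spread halves the buffer:

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, error_std, lead_time_periods):
    """Textbook safety-stock sketch: z * sigma_error * sqrt(lead time)."""
    z = NormalDist().inv_cdf(service_level)  # e.g. ~1.64 for a 95% service level
    return z * error_std * sqrt(lead_time_periods)

# Same 95% service level, same 4-period lead time, half the forecast error:
safety_stock(0.95, 20, 4)   # ~65.8 units
safety_stock(0.95, 10, 4)   # ~32.9 units
```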

That said, the relationship between accuracy and inventory cost isn’t always straightforward. A large study using retail competition data found that forecast accuracy matters most when holding costs are similar to or larger than the cost of lost sales. In situations where running out of stock is far more expensive than carrying extra inventory, the most accurate forecast isn’t always the most profitable one to act on. A slightly less accurate method that avoids stockouts might produce better financial outcomes.

Product characteristics also play a role. Items with intermittent demand, where sales happen sporadically with many zero-demand periods in between, require specialized forecasting approaches. Standard accuracy metrics can be misleading for these products, and the link between improved MAPE and improved inventory performance is weaker. For high-volume, steady-demand products, the payoff from better accuracy is much more direct.

Setting Realistic Accuracy Targets

There’s no universal benchmark for “good” forecast accuracy. A MAPE of 20% might be excellent for a volatile product category and mediocre for a stable one. The most useful approach is to evaluate accuracy relative to your own baseline and to the volatility of what you’re forecasting.

Start by measuring accuracy at the most granular level that matters for your decisions, typically by product and location. Aggregate forecasts almost always look more accurate than detailed ones, because over-forecasts and under-forecasts at the item level partially offset each other when you roll up to a category or region. If your inventory decisions happen at the item level, that’s where accuracy needs to be measured. Track both accuracy and bias over time, using the tracking signal as an early warning system, so you catch systematic drift before it compounds into costly inventory imbalances.
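The aggregation effect is easy to see in a two-item sketch: one item over-forecast, one under-forecast, and a category roll-up that looks flawless.

```python
def mape(actuals, forecasts):
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# Two items in one category: one over-forecast by 40, one under by 40.
item_actuals   = [100, 100]
item_forecasts = [140, 60]

mape(item_actuals, item_forecasts)                 # 40.0 -> item-level accuracy is only 60%
mape([sum(item_actuals)], [sum(item_forecasts)])   # 0.0  -> the category total looks perfect
```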