A MAPE score below 10% is generally considered excellent, while 10% to 20% is good, and anything above 50% suggests your forecasting model needs serious improvement. But “good” depends heavily on what you’re forecasting, because some industries are inherently harder to predict than others.
MAPE, or mean absolute percentage error, tells you how far off your predictions are from actual values, expressed as a percentage. A MAPE of 15% means your forecasts are off by an average of 15%. Lower is better, and zero would mean perfect prediction.
General MAPE Benchmarks
As a rough guide across industries and use cases:
- Below 10%: Excellent. Typically achievable only in highly predictable, stable environments.
- 10% to 20%: Good. Realistic for many business forecasting scenarios.
- 20% to 50%: Reasonable, depending on your industry and how volatile your data is.
- Above 50%: Poor in most contexts. Your model is missing something significant.
These thresholds are useful starting points, but comparing your MAPE to industry-specific benchmarks gives you a much clearer picture of whether your model is performing well.
What “Good” Looks Like by Industry
Some products and markets are simply harder to forecast. A MAPE that would be disappointing in one industry might be perfectly acceptable in another.
In consumer packaged goods (CPG), where demand patterns tend to be relatively stable, a MAPE of 15% to 25% is generally acceptable. Pharmaceuticals with steady demand can achieve even tighter accuracy, typically landing between 10% and 20%. Manufacturing forecasts usually fall in the 20% to 40% range, reflecting more variability in orders and production cycles.
Apparel and fashion retail often exceed 30% MAPE, and that’s considered normal. Short product lifecycles, seasonal swings, and trend-driven buying make demand inherently unpredictable. If you’re forecasting in this space, chasing a sub-10% MAPE is unrealistic for most product lines.
How MAPE Is Calculated
MAPE works by comparing each forecast to its actual value, calculating the percentage each prediction was off, then averaging those percentages across all data points. Specifically, for each observation you take the absolute value of the difference between the actual value and the forecast, divided by the actual value. Sum those ratios across all observations, divide by the number of data points, and multiply by 100 to get a percentage.
For example, if the actual value was 100 and you predicted 90, that single observation has a 10% error. If another actual value was 50 and you predicted 60, that’s a 20% error. The MAPE across those two points would be 15%.
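The calculation above can be sketched in a few lines of Python; the function name and sample values are illustrative, using the two-observation example from the text:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, returned as a percentage."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actual, forecast)]
    return 100 * sum(errors) / len(errors)

# Worked example from the text: a 10% error and a 20% error average to 15%.
print(round(mape([100, 50], [90, 60]), 2))  # → 15.0
```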
When MAPE Can Be Misleading
MAPE has some quirks that can distort your read on model performance. Understanding these helps you avoid over-relying on a single number.
The biggest issue is that MAPE breaks when actual values are zero or close to zero. Because the formula divides by the actual value, a zero in the denominator produces an infinite or undefined result. Even very small actual values can inflate MAPE dramatically. If your data includes periods of zero demand (common in spare parts, seasonal products, or new product launches), MAPE becomes unreliable.
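A quick sketch shows how a single near-zero actual can dominate the average; the series values here are invented for illustration:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, returned as a percentage."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actual, forecast)]
    return 100 * sum(errors) / len(errors)

# Four well-forecast periods (errors of roughly 2%) plus one near-zero actual.
actual   = [100, 110, 95, 105, 1]
forecast = [ 98, 108, 97, 103, 10]

# The |1 - 10| / 1 = 900% term swamps the other four observations.
print(round(mape(actual, forecast), 1))  # → 181.6
```

With an actual of exactly 0 in the data, the same computation raises a ZeroDivisionError, which is the "undefined result" the text describes.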
MAPE also penalizes over-forecasting more harshly than under-forecasting by the same absolute amount. If the actual value is 10 and you predict 20, that’s a 100% error. But if the actual is 20 and you predict 10, that’s only a 50% error, even though both forecasts are off by the same 10 units. This asymmetry means a model that consistently under-predicts can look better on MAPE than one that over-predicts by the same margin.
MAPE scores can also exceed 100%, which sometimes surprises people. This simply means forecasts are off by more than the actual values themselves. If the actual value is 1 and you predict 3, that single observation has a percentage error of 200%. Scores above 100% are a clear signal that the model is performing poorly, but they’re mathematically valid.
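Both quirks are easy to verify directly; this sketch reuses the numbers from the two paragraphs above:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, returned as a percentage."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actual, forecast)]
    return 100 * sum(errors) / len(errors)

# Asymmetry: both forecasts miss by 10 units, but the scores differ.
print(mape([10], [20]))  # over-forecast:  100.0
print(mape([20], [10]))  # under-forecast:  50.0

# Scores above 100% are mathematically valid: actual 1, forecast 3.
print(mape([1], [3]))    # → 200.0
```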
Alternatives Worth Considering
MAPE’s popularity comes from being easy to interpret and scale-independent, meaning you can compare accuracy across products or datasets with different magnitudes. A 15% MAPE means the same thing whether you’re forecasting units in the dozens or the millions.
When MAPE doesn’t fit your data, a few alternatives are worth knowing. Mean absolute error (MAE) measures the average size of errors in the same units as your data, which avoids the division-by-zero problem entirely. It’s often considered a more robust measure of accuracy, though it’s harder to compare across different scales. Symmetric MAPE (sMAPE) addresses the asymmetry problem by averaging the actual and forecast values in the denominator, which moderates the penalty gap between over- and under-predictions (though sMAPE has quirks of its own).
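Under one common definition of sMAPE (per-observation errors averaged, with (|actual| + |forecast|)/2 as the denominator; definitions vary across sources), both alternatives can be sketched as:

```python
def mae(actual, forecast):
    """Mean absolute error, in the same units as the data."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def smape(actual, forecast):
    """Symmetric MAPE: the denominator averages actual and forecast."""
    terms = [abs(a - f) / ((abs(a) + abs(f)) / 2)
             for a, f in zip(actual, forecast)]
    return 100 * sum(terms) / len(terms)

# The over/under pair that MAPE scores as 100% vs 50% now scores equally.
print(round(smape([10], [20]), 2))  # → 66.67
print(round(smape([20], [10]), 2))  # → 66.67

# MAE reports the same pair in plain units: both misses are 10 units.
print(mae([10, 20], [20, 10]))      # → 10.0
```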
For intermittent or lumpy demand data (where many periods have zero sales), specialized metrics exist that handle sparse data more gracefully than MAPE ever can. If your data has frequent zeros, switching away from MAPE isn’t just preferable, it’s necessary for meaningful results.
Improving a High MAPE Score
If your MAPE is higher than the benchmarks for your industry, a few common culprits are worth investigating. Outliers and anomalies in your data can inflate MAPE significantly, especially if actual values are small. Cleaning your data or segmenting your forecasts (separating high-volume products from slow movers, for instance) often brings MAPE down quickly.
Seasonality that isn’t captured in your model is another frequent cause. If demand spikes predictably around holidays or specific months, your model needs to account for that pattern. Similarly, promotional activity, new product launches, or supply disruptions can all introduce forecast error that a standard model won’t anticipate without those signals as inputs.
Rather than obsessing over a single MAPE number across your entire portfolio, consider tracking MAPE at the product or category level. Your overall MAPE might be 25%, but that could be masking excellent accuracy on your top sellers and terrible accuracy on your long tail of slow-moving items. Breaking it down reveals where your forecasting actually needs work.
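Segment-level tracking like this is straightforward to sketch; the category names and numbers below are hypothetical:

```python
from collections import defaultdict

def mape(actual, forecast):
    """Mean absolute percentage error, returned as a percentage."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actual, forecast)]
    return 100 * sum(errors) / len(errors)

# Hypothetical records: (category, actual, forecast).
records = [
    ("top_sellers", 1000, 950),
    ("top_sellers", 1200, 1180),
    ("long_tail",   10,   18),
    ("long_tail",   5,    2),
]

# Group actuals and forecasts by category, then score each group.
by_category = defaultdict(lambda: ([], []))
for category, a, f in records:
    by_category[category][0].append(a)
    by_category[category][1].append(f)

for category, (a, f) in by_category.items():
    print(category, round(mape(a, f), 1))
# top_sellers → 3.3, long_tail → 70.0: an aggregate MAPE would hide the gap.
```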

