Mean absolute error (MAE) is calculated by finding the absolute difference between each predicted value and its actual value, then averaging those differences. If your model predicts five house prices and is off by $10K, $5K, $8K, $3K, and $4K respectively, your MAE is $6,000. It’s one of the most intuitive ways to measure how accurate a prediction model is.
The Formula
MAE = (1/n) × Σ|actual − predicted|
In plain terms: for each data point, subtract the predicted value from the actual value, drop any negative signs (that’s the absolute value part), add all those differences together, and divide by the total number of data points. The result tells you, on average, how far off your predictions are.
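The steps above translate directly into a few lines of plain Python. This is a minimal sketch; the house prices are made-up values chosen to reproduce the $10K, $5K, $8K, $3K, and $4K misses from the opening example:

```python
def mae(actual, predicted):
    """Mean absolute error: average of |actual - predicted| over all points."""
    errors = [abs(a - p) for a, p in zip(actual, predicted)]
    return sum(errors) / len(errors)

# Hypothetical house prices (in $K) with errors of 10, 5, 8, 3, and 4
actual = [300, 250, 400, 180, 220]
predicted = [290, 255, 392, 183, 216]
print(mae(actual, predicted))  # 6.0
```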
Step-by-Step Example
Say you’re predicting daily temperatures and have five observations:
- Day 1: Actual 72°F, Predicted 70°F → |72 − 70| = 2
- Day 2: Actual 65°F, Predicted 68°F → |65 − 68| = 3
- Day 3: Actual 80°F, Predicted 79°F → |80 − 79| = 1
- Day 4: Actual 74°F, Predicted 70°F → |74 − 70| = 4
- Day 5: Actual 69°F, Predicted 71°F → |69 − 71| = 2
Sum the absolute errors: 2 + 3 + 1 + 4 + 2 = 12. Divide by the number of observations: 12 / 5 = 2.4. Your MAE is 2.4°F, meaning your model is off by about 2.4 degrees on average.
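The same arithmetic can be checked in code. This sketch reuses the five temperature observations from the table:

```python
actual = [72, 65, 80, 74, 69]      # observed temperatures (°F)
predicted = [70, 68, 79, 70, 71]   # model predictions (°F)

# Absolute error for each day
abs_errors = [abs(a - p) for a, p in zip(actual, predicted)]
print(abs_errors)  # [2, 3, 1, 4, 2]

# Sum the errors and divide by the number of observations
mae = sum(abs_errors) / len(abs_errors)
print(mae)  # 2.4
```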
How to Interpret the Result
MAE uses the same units as whatever you’re measuring. If you’re predicting revenue in dollars, your MAE is in dollars. If you’re predicting weight in kilograms, your MAE is in kilograms. This makes it easy to explain to anyone: “Our model is off by an average of X units.”
A lower MAE is always better, with zero meaning perfect predictions. But whether a given MAE is “good” depends entirely on context. An MAE of 5 is excellent if you’re predicting house prices in thousands of dollars, but terrible if you’re predicting someone’s age. You need to judge MAE relative to the range and scale of your data.
One important limitation: because MAE is tied to the scale of your data, you can’t use it to compare models that predict different things. Comparing the MAE of a temperature model to the MAE of a revenue model is meaningless.
MAE vs. Mean Squared Error
The other common accuracy metric is mean squared error (MSE), which squares each error instead of taking its absolute value. That single difference has a big practical consequence: squaring makes large errors count much more heavily. An error of 10 contributes 100 to MSE but only 10 to MAE.
This means MSE is sensitive to outliers. If your data has a few extreme misses, MSE will balloon, pulling your model’s optimization toward fixing those outliers specifically. MAE treats all errors equally regardless of size, which makes it more robust when your data is noisy or contains values you don’t want dominating the metric.
Neither is universally better. Use MAE when you want a straightforward, outlier-resistant measure of average error. Use MSE (or its square root, RMSE) when large errors are genuinely more costly than small ones and you want your model to prioritize avoiding them.
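The outlier sensitivity described above is easy to demonstrate. In this sketch (the error values are made up), replacing one small miss with a single extreme one multiplies MAE by 5 but inflates MSE by roughly 74x:

```python
import numpy as np

errors_clean = np.array([2.0, 3.0, 1.0, 4.0, 2.0])
errors_outlier = np.array([2.0, 3.0, 1.0, 4.0, 50.0])  # one extreme miss

def mae(errors):
    return np.mean(np.abs(errors))

def mse(errors):
    return np.mean(errors ** 2)

print(mae(errors_clean), mae(errors_outlier))  # 2.4 -> 12.0
print(mse(errors_clean), mse(errors_outlier))  # 6.8 -> 506.0
```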
Calculating MAE in Python
The fastest way is with scikit-learn’s built-in function:
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(y_true, y_pred)
The function takes two arrays: your actual values (y_true) and your predicted values (y_pred). It returns a single float. You can also pass a sample_weight parameter if certain data points matter more than others in your use case.
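Here is a short sketch of both the basic call and the sample_weight option, reusing the temperature example from earlier (the weights are illustrative, not a recommendation):

```python
from sklearn.metrics import mean_absolute_error

y_true = [72, 65, 80, 74, 69]
y_pred = [70, 68, 79, 70, 71]

print(mean_absolute_error(y_true, y_pred))  # 2.4

# Count the last two days three times as heavily as the others
weights = [1, 1, 1, 3, 3]
print(mean_absolute_error(y_true, y_pred, sample_weight=weights))  # ~2.67
```

With weights, the errors are averaged as a weighted mean: (2 + 3 + 1 + 3·4 + 3·2) / 9 ≈ 2.67, pulling the metric toward the days you weighted more heavily.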
If you’d rather not import a library, the calculation is simple enough to do manually with base Python or NumPy:
import numpy as np
mae = np.mean(np.abs(np.array(actual) - np.array(predicted)))
Both approaches give identical results. The scikit-learn version is convenient when you’re already using that library for model building, since it handles edge cases like multi-output regression automatically.
Calculating MAE in Excel
Put your actual values in column A and predicted values in column B. In column C, calculate the absolute error for each row with =ABS(A2-B2) and drag that formula down. Then use =AVERAGE(C2:C100) (adjusting the range to fit your data) to get the MAE. No plugins or special tools needed.
When MAE Is the Right Choice
MAE works well as a loss function or evaluation metric when you want to minimize average prediction error without giving extra weight to occasional large misses. It’s commonly used in weather forecasting, demand planning, and any scenario where overshooting by 20 units is exactly twice as bad as overshooting by 10 units, not four times as bad (which is how MSE would treat it).
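A toy check of the per-point penalties makes the contrast concrete: under MAE the cost grows linearly with the error, while under MSE it grows quadratically.

```python
# Per-point contribution of an error to each metric (before averaging)
for err in [10, 20]:
    print(err, "-> MAE contribution:", abs(err), "| MSE contribution:", err ** 2)

# Doubling the error doubles the MAE penalty but quadruples the MSE penalty
print(abs(20) / abs(10))    # 2.0
print(20 ** 2 / 10 ** 2)    # 4.0
```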
MAE is also easier to explain to non-technical stakeholders. Saying “our forecast is off by an average of 3 units” communicates clearly. Saying “our mean squared error is 15.7” requires additional translation, since MSE is in squared units and doesn’t map intuitively to real-world quantities.