There is no single number that qualifies as a “good” standard error in regression. A standard error of 5 could be excellent or terrible depending on what you’re measuring, the scale of your data, and what you’re trying to do with the model. What matters is the standard error’s size relative to the coefficient it belongs to or, for the overall model error, relative to the range of values you’re predicting.
Two Types of Standard Error in Regression
When people ask about standard errors in regression, they’re usually asking about one of two things, and the answer differs for each.
The standard error of the regression (sometimes called the standard error of the estimate) measures how far your data points typically fall from the regression line. It’s in the same units as your outcome variable. If you’re predicting home prices and your standard error of the estimate is $15,000, that means your predictions are off by roughly $15,000 on average. About two-thirds of your data points will fall within one standard error above or below the regression line, and about 95% will fall within two standard errors.
The standard error of a coefficient tells you how precisely you’ve estimated a specific predictor’s effect. If your coefficient for “square footage” is 150 with a standard error of 30, that means your best guess is that each additional square foot adds $150 to the price, but the true value could plausibly be somewhat higher or lower.
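To make the two quantities concrete, here is a minimal sketch in plain NumPy that fits a simple one-predictor regression to synthetic price-versus-square-footage data (the data, seed, and numbers are invented for illustration) and computes both the standard error of the regression and the standard error of the slope:

```python
import numpy as np

# Hypothetical data: home price (dollars) vs. square footage.
rng = np.random.default_rng(0)
sqft = rng.uniform(800, 3000, size=50)
price = 50_000 + 150 * sqft + rng.normal(0, 15_000, size=50)

# Fit y = b0 + b1*x by ordinary least squares.
n = len(sqft)
x_bar, y_bar = sqft.mean(), price.mean()
sxx = np.sum((sqft - x_bar) ** 2)
b1 = np.sum((sqft - x_bar) * (price - y_bar)) / sxx
b0 = y_bar - b1 * x_bar
residuals = price - (b0 + b1 * sqft)

# Standard error of the regression: typical distance of points from
# the line, in the same units as the outcome (dollars here).
se_regression = np.sqrt(np.sum(residuals ** 2) / (n - 2))

# Standard error of the slope coefficient: precision of b1.
se_b1 = se_regression / np.sqrt(sxx)

print(f"slope = {b1:.1f}, SE(regression) = {se_regression:,.0f}, SE(slope) = {se_b1:.2f}")
```

Note that the coefficient standard error is built directly from the regression standard error, which previews a point made later: all the other standard errors in the model are proportional to it.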
Judging the Standard Error of a Coefficient
For individual coefficients, the most practical test is straightforward: divide the coefficient by its standard error. This gives you a t-statistic. If the absolute value is roughly 2 or greater, the coefficient is statistically significant at the 95% confidence level, meaning it’s unlikely to be zero. The exact threshold depends on your sample size: with smaller samples, you need a slightly larger ratio. With 17 degrees of freedom, for example, the threshold is about 2.11; with hundreds of observations, 1.96 is sufficient.
A coefficient that is less than twice its standard error is too imprecise to confidently distinguish from zero. That doesn’t necessarily mean the variable has no effect. It means your data can’t nail it down. This is an important distinction: a large standard error relative to the coefficient signals uncertainty, not proof of no relationship.
But significance alone isn’t the full picture. The American Statistical Association has emphasized that p-values and statistical significance don’t measure the size or importance of an effect. A coefficient can be statistically significant but practically meaningless, or insignificant but still worth investigating with more data. Look at the confidence interval (roughly the coefficient plus or minus two standard errors) and ask whether the plausible range of values matters in your context.
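Using the hypothetical square-footage numbers from above, the two checks (the t-statistic and the rough two-standard-error confidence interval) are a couple of lines of arithmetic:

```python
# Hypothetical coefficient estimate and its standard error.
coef = 150.0   # estimated dollars per additional square foot
se = 30.0      # its standard error

t_stat = coef / se                               # 5.0: well above 2
ci_low, ci_high = coef - 2 * se, coef + 2 * se   # roughly 90 to 210

print(f"t = {t_stat:.1f}, approx. 95% CI: ({ci_low:.0f}, {ci_high:.0f})")
```

The interval of roughly $90 to $210 per square foot is where the practical judgment happens: if both ends of that range would lead to the same decision, the estimate is precise enough; if not, you need more data regardless of the significance test.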
Judging the Overall Model Error
For the standard error of the regression as a whole, “good” depends entirely on what accuracy you need. Compare it to the mean or range of your dependent variable. A standard error of 10 when your outcome variable averages 1,000 means your predictions are off by about 1%. That’s excellent for most purposes. A standard error of 10 when your outcome averages 50 means you’re off by 20%, which is probably not useful.
A common shortcut is to express the standard error as a percentage of the mean (essentially a coefficient of variation for your model’s errors). Single-digit percentages suggest strong predictive accuracy. Double-digit percentages suggest the model is missing important information.
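The shortcut is a one-line calculation; the sketch below applies it to the two illustrative cases from the previous paragraph:

```python
def error_as_pct_of_mean(se_regression, outcome_mean):
    """Express the model's standard error as a percentage of the outcome mean,
    a rough coefficient of variation for the model's errors."""
    return 100 * se_regression / outcome_mean

strong = error_as_pct_of_mean(10, 1000)  # SE of 10, outcome averaging 1,000
weak = error_as_pct_of_mean(10, 50)      # same SE, outcome averaging 50

print(f"{strong:.0f}% vs {weak:.0f}%")
```

One caveat worth keeping in mind: this ratio is only meaningful when the outcome variable is positive and its mean is well away from zero.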
Many people instinctively look at R-squared instead, but the standard error of the regression is often more informative. R-squared tells you the proportion of variance explained, which is abstract. The standard error tells you the typical size of your prediction errors in real units: dollars, kilograms, degrees, whatever you’re measuring. As one Duke University statistics resource puts it, the standard error of the regression is the real “bottom line” because it measures unexplained variation in practical terms. All other standard errors in your model, for coefficients and for predictions, are directly proportional to it.
What Drives Standard Errors Higher
Understanding what inflates standard errors helps you diagnose whether a “bad” standard error is fixable.
- Small sample size. All standard errors shrink as you add more data. Specifically, they’re inversely proportional to the square root of sample size. Quadrupling your sample cuts standard errors roughly in half.
- Noisy data. If the outcome variable has high natural variability that your predictors can’t explain, the model’s standard error will be large. Adding better predictors is the fix here, not just more data.
- Narrow range of predictor values. If your independent variable doesn’t vary much in your sample, the model has little information to work with and the coefficient’s standard error will be large. Collecting data across a wider range of conditions helps.
- Multicollinearity. When two or more predictors are highly correlated with each other, their individual standard errors inflate dramatically. This is measured by the variance inflation factor (VIF). A VIF above 5 to 10 signals a problem. The coefficient estimates become unstable: wide confidence intervals, low t-statistics, and results that flip between significant and insignificant with small changes to the data.
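The VIF for predictor j is 1 / (1 − R²_j), where R²_j comes from regressing predictor j on all the other predictors. A sketch in plain NumPy, using made-up data with two nearly duplicate predictors, shows how sharply collinearity inflates the figure while an unrelated predictor stays near 1:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor for column j of predictor matrix X:
    1 / (1 - R^2) from regressing column j on the remaining columns."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])  # add an intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return 1 / (1 - r2)

# Synthetic predictors: x2 is almost a copy of x1, x3 is independent.
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])

print([round(vif(X, j), 1) for j in range(3)])
```

With the near-duplicate pair, the first two VIFs land far above the 5-to-10 warning zone, while the independent predictor's VIF stays close to 1.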
Practical Benchmarks to Apply
Since there’s no universal cutoff, here are the comparisons that actually help you evaluate your standard errors.
For coefficient standard errors, compute the t-statistic (coefficient divided by standard error). Values above 2 indicate the estimate is precise enough to be statistically distinguishable from zero. But also look at the confidence interval. If the interval is so wide that it includes both trivially small and very large effects, the estimate isn’t precise enough to be useful even if it clears the significance bar.
For the model’s standard error, compare it to the standard deviation of your outcome variable. The standard error of the regression will almost always be smaller than the raw standard deviation of the outcome, because the regression line gets closer to the data points than a flat line at the mean would. How much smaller tells you how much your predictors are helping. If the standard error is only slightly smaller than the standard deviation, your model isn’t adding much explanatory power.
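This comparison is easy to run on any fitted model; here is a small sketch on synthetic data where the predictor explains most of the outcome, so the model's standard error comes in well below the raw standard deviation:

```python
import numpy as np

# Made-up data: the predictor accounts for most of the variation in y.
rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)

# Baseline: a flat line at the mean, i.e. the outcome's standard deviation.
sd_y = y.std(ddof=1)

# Simple OLS fit and its standard error.
b1 = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
se_reg = np.sqrt(resid @ resid / (len(y) - 2))

print(f"SD of outcome: {sd_y:.2f}, SE of regression: {se_reg:.2f}")
```

If the two numbers were nearly equal instead, the predictor would be adding almost nothing over simply guessing the mean.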
For prediction purposes, think about whether the error band is acceptable for your decision. If you’re predicting next quarter’s revenue and your model’s standard error implies a 95% prediction interval of plus or minus $2 million, the question isn’t whether that’s statistically “good.” It’s whether a $4 million range of uncertainty is narrow enough for your business decision. Context always wins over arbitrary thresholds.
When Standard Errors Look Good but Aren’t
Artificially small standard errors can be misleading. Overfitting, where a model captures noise in the training data rather than real patterns, produces small residuals and small standard errors that won’t hold up on new data. If your model has nearly as many predictors as data points, treat small standard errors with skepticism.
Multicollinearity creates the opposite illusion: it can make a genuinely important variable look insignificant by inflating its standard error. If a predictor you expect to matter shows a large standard error, check the VIF before concluding the variable isn’t useful. Because standard errors scale with the square root of the VIF, a VIF above 10 means the standard error is at least 3.2 times as large as it would be without the collinearity problem.
Heteroscedasticity, where the spread of residuals changes across different values of your predictors, also distorts standard errors. Your coefficient estimates may still be unbiased, but the standard errors (and therefore your confidence intervals and significance tests) can be wrong in either direction. Plotting residuals against predicted values is a quick diagnostic. If the spread fans out or narrows, the reported standard errors aren’t trustworthy without correction.
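Alongside the residual plot, a quick numeric version of the same diagnostic is to compare the residual spread in the lower and upper halves of the fitted values; a ratio far from 1 suggests fanning. The sketch below uses synthetic data where the noise grows with the predictor by construction:

```python
import numpy as np

# Made-up heteroscedastic data: noise scale grows with x.
rng = np.random.default_rng(3)
x = rng.uniform(1, 10, size=500)
y = 3 * x + rng.normal(scale=x, size=500)

# Simple OLS fit.
b1 = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
b0 = y.mean() - b1 * x.mean()
fitted = b0 + b1 * x
resid = y - fitted

# Compare residual spread below vs. above the median fitted value.
lower = resid[fitted <= np.median(fitted)]
upper = resid[fitted > np.median(fitted)]
ratio = upper.std(ddof=1) / lower.std(ddof=1)

print(f"residual spread ratio (upper/lower half): {ratio:.2f}")
```

A ratio well above 1 here confirms what the fan-shaped residual plot would show: the reported standard errors shouldn't be trusted without a correction such as heteroscedasticity-robust standard errors.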

