What Is the SE Coefficient in Regression?

The SE coefficient (standard error of the coefficient) is a measure of how precisely a regression model has estimated the effect of each variable. You’ll typically see it in the “Std. Error” column of regression output in software like R, SPSS, Excel, or Stata, right next to each coefficient estimate. A smaller SE means the estimate is more precise; a larger SE means there’s more uncertainty about the true value of that coefficient.

What the SE Coefficient Tells You

When you run a regression, the model produces coefficient estimates that describe the relationship between your independent variables and the outcome. But those estimates come from a sample, not the entire population. If you collected a different sample and ran the same regression, you’d get slightly different coefficients each time. The SE coefficient captures that variability: it tells you how much a given coefficient would bounce around across repeated samples.

Think of it as a margin of error for each coefficient. A small SE relative to the coefficient itself suggests the estimate is stable and reliable. A large SE suggests the model can’t pin down the true effect with much confidence. For example, if a slope coefficient is 4.5 with an SE of 0.8, you have a fairly precise estimate. If that same coefficient of 4.5 had an SE of 4.0, the true value could plausibly be close to zero, meaning the variable might have no real effect at all.
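
You can see this sampling variability directly with a small simulation. The sketch below (plain Python, with a made-up population slope, intercept, and noise level) draws many samples from the same population, refits the regression each time, and compares the spread of the resulting slopes to the theoretical standard error:

```python
import math
import random

random.seed(0)

# Fixed predictor values; the assumed "population" relationship is
# y = 2 + 1.5 x + noise, with noise standard deviation 3.0.
x = [float(i) for i in range(1, 21)]
true_slope = 1.5

def fit_slope(xs, ys):
    # Ordinary least-squares slope for a simple regression
    n = len(xs)
    xb = sum(xs) / n
    yb = sum(ys) / n
    sxx = sum((xi - xb) ** 2 for xi in xs)
    return sum((xi - xb) * (yi - yb) for xi, yi in zip(xs, ys)) / sxx

# Draw many samples and refit: the spread of the fitted slopes across
# samples is exactly what the SE of the coefficient estimates.
slopes = []
for _ in range(2000):
    y = [2.0 + true_slope * xi + random.gauss(0, 3.0) for xi in x]
    slopes.append(fit_slope(x, y))

mean_slope = sum(slopes) / len(slopes)
sd_slope = math.sqrt(sum((b - mean_slope) ** 2 for b in slopes)
                     / (len(slopes) - 1))

# Theoretical SE of the slope: noise SD over sqrt(sum of squared x-deviations)
xb = sum(x) / len(x)
sxx = sum((xi - xb) ** 2 for xi in x)
theoretical_se = 3.0 / math.sqrt(sxx)
```

The empirical standard deviation of the simulated slopes lands very close to the theoretical SE, which is exactly the quantity the "Std. Error" column estimates from a single sample.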

How It’s Used in Hypothesis Testing

The SE coefficient is the denominator in one of the most common calculations in statistics: the t-statistic. To test whether a coefficient is meaningfully different from zero, you divide the coefficient estimate by its standard error. The result is a t-value, which then maps to a p-value that tells you how likely you would be to see an estimate at least that far from zero if the true value were actually zero.

For instance, if a coefficient is 3.2 and its SE is 1.0, the t-statistic is 3.2. A larger t-value (farther from zero) means a smaller p-value, which strengthens the evidence that the variable genuinely matters. The degrees of freedom for this test are n minus k minus 1, where n is your sample size and k is the number of predictors. This is a two-tailed test because you’re checking whether the coefficient is different from zero in either direction, positive or negative.
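
Using the numbers above, the chain from coefficient to t-statistic to (approximate) p-value can be sketched in a few lines. The sample size and predictor count here are made up for illustration, and the normal distribution is used as a large-sample stand-in for the exact t-distribution:

```python
from statistics import NormalDist

coef = 3.2  # coefficient estimate from the example in the text
se = 1.0    # its standard error
t_stat = coef / se  # 3.2

# Degrees of freedom: n - k - 1 (hypothetical n and k)
n = 50
k = 3
df = n - k - 1  # 46

# Two-tailed p-value, large-sample normal approximation to the t-distribution
p_approx = 2 * (1 - NormalDist().cdf(abs(t_stat)))  # roughly 0.0014
```

With 46 degrees of freedom the exact t-based p-value is slightly larger than this normal approximation, but the conclusion is the same: an estimate this many SEs away from zero is strong evidence of a real effect.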

In regression output tables, the coefficient, standard error, t-value, and p-value typically appear side by side in columns, making this chain of logic easy to follow once you know what each piece does.

Building Confidence Intervals

The SE coefficient also lets you construct a confidence interval around each estimate. For a 95% confidence interval, you multiply the SE by a critical value (approximately 1.96 in large samples; small samples call for the slightly larger t critical value at your degrees of freedom) and then add and subtract that amount from the coefficient. The result is a range that, with 95% confidence, contains the true population value of the coefficient.

If your regression estimates a slope of 2.5 with an SE of 0.6, the 95% confidence interval runs from about 1.32 to 3.68 (2.5 plus or minus 1.96 × 0.6). Because that interval doesn’t include zero, you’d conclude the effect is statistically significant at the 5% level. If the interval did cross zero, you couldn’t rule out the possibility that the variable has no effect.
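
The arithmetic is short enough to sketch directly, using the numbers from this example and the large-sample 1.96 critical value:

```python
coef = 2.5
se = 0.6
z = 1.96  # large-sample 95% critical value

margin = z * se        # 1.176
lower = coef - margin  # 1.324
upper = coef + margin  # 3.676

# The interval excludes zero, so the effect is significant at the 5% level
excludes_zero = lower > 0 or upper < 0
```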

What Makes the SE Larger or Smaller

Three main factors drive the size of the SE coefficient:

  • Sample size. Larger samples produce smaller standard errors. More data means more information, which means more precise estimates. This is the most straightforward way to shrink SEs.
  • Residual variance. If your model’s predictions are far from the actual data points (large residuals), the SE will be larger. A model that fits the data well, with tightly clustered residuals, produces smaller SEs.
  • Spread of the independent variable. The formula for the SE of a slope coefficient has the spread of the predictor variable in the denominator. When your predictor values cover a wide range, the SE shrinks. When the predictor values are bunched together, the SE grows because the model has less information about how changes in that variable relate to the outcome.

The formula for the slope’s SE makes this explicit: it equals the residual standard deviation divided by the square root of the sum of squared deviations of the predictor from its mean. So anything that increases the numerator (noisier data) or decreases the denominator (less variation in the predictor) inflates the standard error.
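
That formula is easy to verify by hand. The sketch below fits a simple regression to a small made-up dataset and computes the slope's SE exactly as described: residual standard deviation divided by the square root of the predictor's sum of squared deviations:

```python
import math

# Hypothetical toy data (assumed for illustration, not from the text)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# Sum of squared deviations of the predictor, and the cross-products
sxx = sum((xi - x_bar) ** 2 for xi in x)
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))

slope = sxy / sxx
intercept = y_bar - slope * x_bar

# Residual standard deviation (n - 2 df: slope and intercept are estimated)
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s = math.sqrt(sum(e ** 2 for e in residuals) / (n - 2))

# SE of the slope: residual SD over the root of the predictor's spread
se_slope = s / math.sqrt(sxx)  # roughly 0.0398
```

Doubling the spread of x (larger sxx) or tightening the residuals (smaller s) would both shrink se_slope, matching the three factors listed above.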

Reading the SE Column in Software Output

In a typical regression table from SPSS, R, Stata, or Excel, you’ll see a row for each predictor and a row for the intercept (constant). Each row includes the estimated coefficient and its standard error. The SE is then used internally to generate the t-value and p-value columns. Many outputs also include the lower and upper bounds of a 95% confidence interval, which are derived directly from the SE.

When reviewing output, don’t evaluate the SE in isolation. An SE of 5.0 might be tiny if the coefficient is 200, or enormous if the coefficient is 3. What matters is the ratio between the two, which is exactly what the t-statistic captures. A common rule of thumb: if the absolute value of the coefficient is at least twice its SE, the result is roughly significant at the 5% level (though the exact threshold depends on your sample size and number of predictors).
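
The rule of thumb is trivial to encode; the helper name below is made up:

```python
def roughly_significant(coef, se, ratio=2.0):
    """Rule of thumb: |coefficient| at least twice its SE suggests
    significance at roughly the 5% level (exact cutoff depends on df)."""
    return abs(coef) >= ratio * abs(se)

# The same SE of 5.0 reads very differently depending on coefficient size
big = roughly_significant(200, 5.0)   # True: SE is tiny relative to 200
small = roughly_significant(3, 5.0)   # False: SE dwarfs the coefficient
```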

When Standard SEs Can Be Misleading

The standard SE calculations assume that the variability of your residuals is constant across all values of the predictor, a property called homoskedasticity. When that assumption is violated, meaning the spread of residuals fans out or compresses at different points, the classical SE estimates become biased. Your coefficients might still be unbiased, but the SEs (and therefore the t-values and p-values) can be wrong.

In these situations, researchers use robust standard errors, which adjust for uneven residual variance. If you notice that your robust and classical SEs differ substantially, it’s a signal that something in your model may need attention. The discrepancy could reflect a violation of the constant-variance assumption, or it could point to a deeper issue like omitted variables that would bias the coefficient estimates themselves. Robust SEs are a useful diagnostic tool, not just a fix.
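
For a simple regression, both the classical and the HC0 ("White") robust SE of the slope can be computed in a few lines. The data here are made up, and in practice you would use your statistics package's robust option rather than rolling your own; this sketch just shows where the two formulas diverge:

```python
import math

def classical_and_robust_se(x, y):
    """Slope SE two ways for a simple regression: classical (assumes
    constant residual variance) and HC0 robust (does not)."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    sxx = sum((xi - x_bar) ** 2 for xi in x)
    slope = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sxx
    intercept = y_bar - slope * x_bar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]

    # Classical: one pooled residual variance for all observations
    s2 = sum(e ** 2 for e in resid) / (n - 2)
    classical = math.sqrt(s2 / sxx)

    # HC0 robust: each squared residual keeps its own weight,
    # scaled by that observation's squared x-deviation
    robust = math.sqrt(sum((e ** 2) * (xi - x_bar) ** 2
                           for e, xi in zip(resid, x)) / sxx ** 2)
    return classical, robust

# Toy data (assumed for illustration)
cl, rb = classical_and_robust_se([1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                                 [2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
```

When the residual variance really is constant, the two numbers tend to agree; a large gap between them is the warning sign described above.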

SE Coefficient vs. Standard Error of the Estimate

It’s easy to confuse the SE of a coefficient with the standard error of the estimate (sometimes called the residual standard error). These are related but different. The standard error of the estimate measures how far the model’s predictions are from the actual data points overall. It’s a single number summarizing the model’s accuracy. The SE of a coefficient, by contrast, is specific to each predictor and describes the precision of that predictor’s estimated effect. The standard error of the estimate feeds into the calculation of every coefficient’s SE, but they answer different questions: one is about model fit, the other is about how well you’ve pinned down a specific relationship.
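
A tiny sketch of that relationship for a simple regression, with made-up numbers:

```python
import math

# Standard error of the estimate: one number for overall prediction error
resid_se = 1.25   # hypothetical residual standard error

# Spread of one particular predictor (sum of squared deviations from mean)
sxx = 400.0       # hypothetical

# That predictor's coefficient SE is derived from the residual SE, but
# scaled by the predictor's own spread: a per-variable precision measure,
# not an overall-fit measure
se_slope = resid_se / math.sqrt(sxx)  # 0.0625
```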