How to Read a Regression Table and Interpret Results

A regression table is a grid of numbers that tells you how strongly each predictor variable is related to an outcome, whether those relationships are statistically meaningful, and how well the overall model explains what you’re studying. Once you know what each column represents, the table becomes surprisingly straightforward to read. The key columns you’ll encounter are the coefficient (B), standard error (SE), t-value, p-value, and sometimes a standardized coefficient (Beta).

The Coefficient Column: The Core of the Table

The coefficient column, usually labeled “B” or “Coeff,” is where you should look first. Each row represents a different predictor variable, and the coefficient tells you: for every one-unit increase in this predictor, the outcome changes by this amount, assuming all other variables in the model stay the same.

For example, if you’re predicting science test scores and the coefficient for math scores is 0.389, that means each additional point on the math test predicts a 0.389-point increase in the science score. The direction matters too. A positive coefficient means the outcome goes up as the predictor increases. A negative coefficient means the outcome goes down.

How you read a coefficient depends on whether the predictor is a number or a category. For a continuous predictor like age, income, or test scores, it’s a straightforward per-unit change. For a categorical predictor like gender or treatment group (coded as 0 or 1), the coefficient represents the difference in the average outcome between the two groups. If a model predicting muscle mass has a coefficient of -3.2 for a “female” variable (coded as 1 for female, 0 for male), that means the average muscle mass for females is 3.2 units lower than for males of the same age.
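To make the arithmetic concrete, here is a minimal Python sketch. The -3.2 "female" coefficient comes from the muscle mass example above; the intercept and age coefficient are made-up numbers for illustration only.

```python
# A minimal sketch of reading coefficients as per-unit changes.
# The -3.2 "female" coefficient is from the example in the text; the
# intercept (70.0) and age coefficient (-0.1) are made-up numbers.
def predict_muscle_mass(age, female, intercept=70.0, b_age=-0.1, b_female=-3.2):
    """Prediction = intercept + sum of (coefficient * predictor value)."""
    return intercept + b_age * age + b_female * female

male_40 = predict_muscle_mass(age=40, female=0)    # "female" coded 0 for male
female_40 = predict_muscle_mass(age=40, female=1)  # "female" coded 1 for female

# At the same age, the two groups differ by exactly the categorical coefficient.
assert abs((female_40 - male_40) - (-3.2)) < 1e-9
```

Holding age fixed and flipping the 0/1 variable changes the prediction by exactly the categorical coefficient, which is why it reads as a between-group difference.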

The Intercept Row

The first row is typically labeled “Constant” or “Intercept.” This is the predicted value of the outcome when every predictor in the model equals zero. Sometimes that’s useful, and sometimes it’s nonsensical. If your predictors include something like mid-upper arm circumference, setting that to zero describes a person who doesn’t exist. In one published example, the intercept produced a negative BMI value, which is physically impossible. So treat the intercept as a mathematical anchor for the model rather than a meaningful finding, unless zero is a realistic value for all your predictors.

Standard Error, t-Value, and P-Value

These three columns work together to answer one question: is this coefficient reliably different from zero, or could it just be noise in the data?

The standard error (SE) measures how precisely the model has estimated the coefficient. A small standard error relative to the coefficient means the estimate is stable. A large one means there’s a lot of uncertainty. Think of it as the margin of error around the coefficient.

The t-value is simply the coefficient divided by its standard error. A larger t-value (in absolute terms) means the coefficient is large relative to its uncertainty, which makes it more likely to reflect a real effect rather than random chance.

The p-value tells you the probability of seeing a coefficient at least this large (in absolute value) if the predictor actually had no relationship to the outcome. The standard threshold is 0.05. If the p-value is below 0.05, the coefficient is considered statistically significant, meaning it’s unlikely to be zero. Many tables use star notation as a shorthand: one asterisk (*) typically means p < 0.05, two (**) means p < 0.01, and three (***) means p < 0.001. Always check the footnote at the bottom of the table, because different publications use slightly different conventions.
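These three columns can be sketched in a few lines of Python. The 0.389 coefficient is from the earlier example, but the 0.045 standard error is a made-up number, and the p-value here uses a normal approximation to the t distribution, which is reasonable only for large samples.

```python
import math

def t_value(coef, se):
    # t-value: the coefficient measured in units of its own uncertainty
    return coef / se

def two_sided_p(t):
    # Normal approximation to the two-sided p-value (fine for large samples):
    # P(|Z| >= |t|) for a standard normal Z
    return math.erfc(abs(t) / math.sqrt(2))

def stars(p):
    # The common star convention; always check a table's own footnote
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""

t = t_value(0.389, 0.045)  # coefficient from the text, made-up standard error
p = two_sided_p(t)
```

With a t-value near 8.6, the p-value is vanishingly small and the row would earn three stars under the convention above.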

A non-significant p-value doesn’t prove the predictor has no effect. It means the data didn’t provide strong enough evidence to rule out zero. This distinction matters when you’re interpreting results.

Confidence Intervals

Some tables include two extra columns showing the lower and upper bounds of a confidence interval, usually at the 95% level. This interval is built from the coefficient and its standard error: the coefficient plus or minus roughly two standard errors (the exact multiplier depends on sample size). A 95% confidence interval means that if you repeated the study many times, about 95% of the intervals constructed this way would contain the true value of the coefficient.

The practical shortcut: if a 95% confidence interval for a coefficient does not contain zero, the coefficient is significant at the 0.05 level. If the interval crosses zero, the effect could plausibly be positive, negative, or nonexistent. Confidence intervals also give you something the p-value alone doesn’t: a sense of the range of plausible effect sizes. A coefficient of 5.0 with a confidence interval of [4.2, 5.8] tells a very different story than one with an interval of [0.3, 9.7], even if both are statistically significant.
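The shortcut is easy to express in code. This sketch uses the large-sample multiplier of 1.96; with small samples the multiplier comes from the t distribution instead. The coefficient and standard errors reproduce the two made-up intervals from the paragraph above.

```python
def conf_int_95(coef, se, z=1.96):
    # Coefficient plus or minus roughly two standard errors
    return (coef - z * se, coef + z * se)

def contains_zero(interval):
    lo, hi = interval
    return lo <= 0.0 <= hi

tight = conf_int_95(5.0, 0.4)  # roughly [4.2, 5.8]
wide = conf_int_95(5.0, 2.4)   # roughly [0.3, 9.7]

# Both intervals exclude zero, so both coefficients are significant at 0.05,
# yet the plausible effect sizes they describe are very different.
assert not contains_zero(tight) and not contains_zero(wide)
```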

Standardized Coefficients (Beta)

Some tables include a column labeled “Beta” or “Std. Coeff.” alongside the regular coefficient column. These are standardized coefficients, calculated by converting both the predictor and the outcome into standard deviation units before running the regression. The formula multiplies the unstandardized coefficient by the standard deviation of the predictor, then divides by the standard deviation of the outcome.

The purpose is comparison. If your model includes income (measured in dollars) and years of education, their unstandardized coefficients are on completely different scales and can’t be directly compared. Standardized coefficients put everything on the same scale: a one-standard-deviation change in the predictor is associated with a certain fraction of a standard-deviation change in the outcome. A Beta of 0.45 for one predictor versus 0.12 for another tells you the first has a substantially stronger relationship with the outcome.
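The conversion is a one-liner. The standard deviations below are made-up numbers chosen only to show how two coefficients on wildly different scales land on a common one.

```python
def standardized_beta(b, sd_x, sd_y):
    # Beta = unstandardized coefficient * SD(predictor) / SD(outcome)
    return b * sd_x / sd_y

# Made-up example: income in dollars vs. years of education as predictors
beta_income = standardized_beta(0.0008, sd_x=15000, sd_y=40)  # -> 0.30
beta_educ = standardized_beta(1.6, sd_x=3.0, sd_y=40)         # -> 0.12
```

The raw coefficients (0.0008 vs. 1.6) are incomparable, but the Betas (0.30 vs. 0.12) put both predictors in standard-deviation units.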

That said, standardized coefficients have limitations. Their values depend on the variability in your specific sample, so they can shift across studies with different populations. They work best when you genuinely want to compare the relative strength of predictors within a single model. For communicating the real-world size of an effect (“each additional year of education is associated with $2,400 more in annual income”), the unstandardized coefficient is more informative.

R-Squared and Adjusted R-Squared

Below or above the main table, you’ll usually find model-level statistics. The most common is R-squared, which tells you the proportion of variation in the outcome that your predictors collectively explain. An R-squared of 0.62 means the model accounts for 62% of the variability in the outcome. The remaining 38% is unexplained by the predictors you included.

R-squared has a known weakness: it never decreases when you add another predictor, even if that predictor is useless. Adjusted R-squared corrects for this by penalizing the addition of predictors that don’t meaningfully improve the model. The correction grows larger when the sample size is small or the number of predictors is large. If you’re comparing two models with different numbers of predictors, adjusted R-squared is the better metric. When adjusted R-squared barely changes (or drops) after adding new variables, those variables likely aren’t contributing real explanatory power and the simpler model is preferable.

The F-Statistic

The F-statistic and its associated p-value test whether the model as a whole is doing something useful. Specifically, it asks: do all of the predictors, taken together, explain the outcome better than a model with no predictors at all? This is different from what the individual t-tests do. A predictor might not be individually significant, yet the collection of predictors together could still be significant (and vice versa). If the F-statistic’s p-value is below 0.05, you can conclude that at least some of your predictors are meaningfully related to the outcome.
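When a table reports R-squared but not F, the overall F-statistic can be recovered from it with the standard identity below; the numbers in the example are made up.

```python
def f_statistic(r2, n, k):
    # F = (explained variation per predictor) /
    #     (unexplained variation per residual degree of freedom)
    return (r2 / k) / ((1 - r2) / (n - k - 1))

# Made-up model: R-squared of 0.62 with 5 predictors and 100 observations
f = f_statistic(0.62, n=100, k=5)  # well above 1, so the model explains
                                   # far more than chance would
```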

Putting It All Together

When you sit down with a regression table, work through it in this order. First, check the F-statistic and R-squared in the model summary. These tell you whether the model is worth interpreting at all and how much of the outcome it explains. If the F-test isn’t significant, the individual coefficients are hard to trust.

Next, scan the p-value column to identify which predictors are statistically significant. Then look at the coefficients for those significant predictors to understand the direction and size of each relationship. For continuous predictors, read it as “each one-unit increase in X is associated with a [coefficient] change in the outcome.” For categorical predictors, read it as “the [coded group] has, on average, a [coefficient] higher or lower outcome than the reference group.” If you want to compare which predictor matters most, look at the standardized Beta column.
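As a capstone, this pure-Python sketch shows where the numbers in a one-predictor table come from, using a small made-up dataset. The formulas are the standard ones for simple linear regression.

```python
import math

# Made-up data: x is the predictor, y the outcome (true slope is about 2)
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

b = sxy / sxx                  # coefficient: change in y per one unit of x
a = my - b * mx                # intercept: predicted y when x = 0
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
se_b = math.sqrt(sum(r ** 2 for r in resid) / (n - 2) / sxx)  # standard error
t = b / se_b                   # t-value: coefficient / standard error
ss_tot = sum((yi - my) ** 2 for yi in y)
r2 = 1 - sum(r ** 2 for r in resid) / ss_tot  # R-squared
```

Here the coefficient lands near 2, the t-value is far above the usual significance cutoffs, and R-squared is close to 1, so every column of the table would point the same way: a strong, precisely estimated relationship.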

One crucial caveat: regression coefficients describe associations, not causes. A coefficient tells you that two things move together after controlling for the other variables in the model, but it doesn’t tell you that one causes the other. Correlation, measured by “r,” tells you how tightly data points cluster around a line. The regression coefficient tells you the slope of that line. They answer related but different questions, and neither one proves causation on its own.