A regression coefficient tells you how much your outcome variable changes when a predictor variable increases by one unit. That single idea is the foundation, but the exact interpretation shifts depending on the type of regression, whether your variables are transformed, and whether your predictors are continuous or categorical. Here’s how to read coefficients correctly in each situation.
Linear Regression: The One-Unit Change
In a simple linear regression with one predictor, the coefficient (often called the slope or “b”) represents the amount of change in the outcome for every one-unit increase in the predictor. If you’re predicting salary based on years of experience and the coefficient is 3,200, that means each additional year of experience is associated with $3,200 more in salary.
The intercept (sometimes labeled the “constant”) is the predicted value of the outcome when the predictor equals zero. In the salary example, the intercept would be the predicted salary for someone with zero years of experience. Sometimes that number makes practical sense, sometimes it doesn’t. A model predicting adult height from age might produce a nonsensical intercept because no adults have an age of zero.
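The slope-and-intercept reading can be sketched with a small least-squares fit. The data below are made up for illustration, built around roughly the $3,200-per-year relationship described above:

```python
import numpy as np

# Hypothetical data: years of experience vs. salary (all numbers invented).
years = np.array([1, 2, 3, 5, 8, 10], dtype=float)
salary = 50_000 + 3_200 * years + np.array([500, -300, 200, -400, 100, -100])

# Fit salary = intercept + slope * years by ordinary least squares.
X = np.column_stack([np.ones_like(years), years])
intercept, slope = np.linalg.lstsq(X, salary, rcond=None)[0]

print(f"intercept (predicted salary at 0 years): {intercept:,.0f}")
print(f"slope (salary change per extra year):    {slope:,.0f}")
```

The fitted slope lands near 3,200: each additional year of experience adds about that much to the predicted salary, and the intercept is the prediction at zero years.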
Multiple Regression: Holding Other Variables Constant
When your model has more than one predictor, interpretation adds a crucial phrase: “holding all other variables constant.” If the coefficient for years of experience is 2,800 in a model that also includes education level and industry, that means each additional year of experience is associated with $2,800 more in salary among people with the same education level working in the same industry.
This “all else equal” logic is powerful but comes with a catch. In some models, holding everything else constant is logically impossible. If your model includes both age and age-squared (a common way to capture curved relationships), you can’t change age by one unit without also changing age-squared. In those cases, you interpret the combined effect of related terms together rather than reading each coefficient in isolation.
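To see how related terms are read together, here is a small sketch with hypothetical coefficients for age and age-squared. The change in the prediction for one more year of age depends on the current age:

```python
# Hypothetical coefficients for a model with age and age-squared.
b_age = 2.0       # coefficient on age
b_age_sq = -0.02  # coefficient on age squared

def change_for_one_more_year(age):
    # The prediction includes b_age*age + b_age_sq*age**2, so moving from
    # age to age+1 changes the prediction by the combined amount:
    return b_age * ((age + 1) - age) + b_age_sq * ((age + 1) ** 2 - age ** 2)

print(change_for_one_more_year(20))  # positive: the outcome is still rising
print(change_for_one_more_year(60))  # negative: past the curve's peak
```

Neither coefficient alone answers “what happens when age goes up by one year”; the combined expression does.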
The coefficient in a multiple regression model will often differ from what you’d get in a simple regression with just that one predictor. That’s expected. The multiple regression coefficient reflects the unique contribution of that variable after accounting for the others, which removes the influence of shared relationships between predictors.
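A small simulation with invented numbers illustrates the shift. Here experience and education are correlated, so the simple regression coefficient on experience absorbs some of education's effect, while the multiple regression coefficient recovers its unique contribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: education and experience are correlated,
# and both independently raise salary.
education = rng.normal(16, 2, n)
experience = 0.5 * education + rng.normal(10, 3, n)
salary = 20_000 + 2_800 * experience + 3_000 * education + rng.normal(0, 5_000, n)

def ols(X, y):
    # Ordinary least squares with an intercept column prepended.
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

simple_coef = ols(experience, salary)[1]
multiple_coef = ols(np.column_stack([experience, education]), salary)[1]

print(f"simple regression coefficient on experience:   {simple_coef:,.0f}")
print(f"multiple regression coefficient on experience: {multiple_coef:,.0f}")
```

The simple coefficient comes out noticeably above the true 2,800 because experience is partly standing in for education; the multiple regression coefficient sits near 2,800.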
Categorical Predictors and Reference Groups
When a predictor is categorical (like region, treatment group, or job title), the model converts it into a set of binary indicators, sometimes called dummy variables. One category is designated as the reference group, and every other category gets its own coefficient representing the difference from that reference.
Say you’re predicting test scores and your model includes a variable for school type with three levels: public, private, and charter. If public is the reference group and the coefficient for private is 12, that means students in private schools scored 12 points higher on average than students in public schools, after controlling for other variables in the model. The coefficient for charter might be 5, meaning charter school students scored 5 points higher than public school students. The reference group itself has no coefficient because it’s baked into the intercept.
Changing which group serves as the reference changes the coefficients but not the model’s predictions. It simply reframes the comparisons. If you switched the reference to private schools, the coefficient for public would become -12 and the coefficient for charter would become -7.
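Dummy coding can be sketched directly. The scores below are hypothetical, chosen so the group means reproduce the 12-point and 5-point differences described above:

```python
import numpy as np

# Hypothetical test scores for three school types.
school = np.array(["public"] * 4 + ["private"] * 4 + ["charter"] * 4)
score = np.array([70, 72, 68, 70, 82, 84, 80, 82, 75, 77, 73, 75], dtype=float)

# One binary indicator per non-reference category (public is the reference).
d_private = (school == "private").astype(float)
d_charter = (school == "charter").astype(float)

X = np.column_stack([np.ones(len(score)), d_private, d_charter])
intercept, b_private, b_charter = np.linalg.lstsq(X, score, rcond=None)[0]

print(intercept)  # mean score in the reference (public) group
print(b_private)  # private minus public
print(b_charter)  # charter minus public
```

The intercept lands at the public-school mean, and each dummy coefficient is that group's difference from it.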
Logistic Regression: Odds Ratios, Not Averages
Logistic regression predicts the probability of a binary outcome (yes/no, survived/died, clicked/didn’t click), and its coefficients work differently. The raw coefficient represents the change in the log-odds of the outcome for a one-unit increase in the predictor. Log-odds aren’t intuitive for most people, so the standard practice is to exponentiate the coefficient to get an odds ratio.
If the coefficient for a predictor is 0.47, the odds ratio is e raised to the power of 0.47, which equals about 1.60. That means a one-unit increase in the predictor multiplies the odds of the outcome by 1.60, or increases them by 60%. An odds ratio above 1 means the predictor increases the odds. Below 1 means it decreases them. An odds ratio of exactly 1 means no association.
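The conversion from a raw logistic coefficient to an odds ratio is a one-liner:

```python
import math

# A logistic regression coefficient of 0.47 on the log-odds scale
# becomes an odds ratio when exponentiated.
coef = 0.47
odds_ratio = math.exp(coef)
print(round(odds_ratio, 2))  # about 1.60
```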
One common mistake is treating odds ratios as if they describe probability directly. An odds ratio of 2.0 does not mean the event is twice as likely. It means the odds are doubled, and odds and probability are different quantities. The distinction matters most when the outcome is common (occurring in more than about 10% of cases).
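A quick sketch makes the distinction concrete: doubling the odds barely differs from doubling the probability when the outcome is rare, but diverges sharply when it is common.

```python
def prob_after_doubling_odds(p):
    # Convert probability to odds, double the odds, convert back.
    odds = p / (1 - p)
    new_odds = 2.0 * odds
    return new_odds / (1 + new_odds)

print(prob_after_doubling_odds(0.01))  # rare outcome: close to 0.02
print(prob_after_doubling_odds(0.40))  # common outcome: well below 0.80
```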
Log-Transformed Variables
Researchers often apply logarithmic transformations to variables that are skewed or that have multiplicative relationships (income, population, prices). The interpretation of coefficients changes depending on which variables are transformed.
When only the outcome is log-transformed, you exponentiate the coefficient to get a multiplicative factor. A coefficient of 0.05 means each one-unit increase in the predictor multiplies the outcome by e^0.05, roughly a 5% increase.
When only the predictor is log-transformed, you divide the coefficient by 100. The result tells you how much the outcome changes in its original units for every 1% increase in the predictor. If the coefficient is 15, a 1% increase in the predictor is associated with a 0.15-unit increase in the outcome.
When both the outcome and the predictor are log-transformed, the coefficient is an elasticity: a 1% increase in the predictor is associated with roughly a coefficient-percent change in the outcome. A coefficient of 0.20 means a 1% increase in the predictor corresponds to about a 0.20% increase in the outcome. This setup is common in economics, where researchers want to express relationships in percentage terms on both sides.
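The three cases can be summarized numerically, using the hypothetical coefficients from the examples above:

```python
import math

# 1) log(outcome) ~ predictor: exponentiate for a multiplicative factor.
b = 0.05
print(math.exp(b))  # ≈ 1.051, about a 5% increase per one-unit change

# 2) outcome ~ log(predictor): divide by 100 for the effect of a 1% change.
b = 15.0
print(b / 100)  # 0.15-unit change in the outcome per 1% increase

# 3) log(outcome) ~ log(predictor): the coefficient is an elasticity.
b = 0.20
print(b)  # ≈ 0.20% change in the outcome per 1% change in the predictor
```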
Standardized vs. Unstandardized Coefficients
The coefficients discussed so far are unstandardized, meaning they’re expressed in the original units of the variables. A coefficient of 3,200 in a salary model literally means 3,200 dollars. This makes individual coefficients easy to interpret, but hard to compare across predictors measured in different units. Is years of experience “more important” than education level? The raw coefficients can’t tell you because they’re on different scales.
Standardized coefficients (sometimes called beta weights) attempt to solve this by converting everything to standard deviations. A standardized coefficient of 0.45 means that a one-standard-deviation increase in the predictor is associated with a 0.45-standard-deviation change in the outcome. Because all predictors are now on the same unitless scale, you can compare their relative magnitudes within a model.
This comparison has limits, though. The standardized coefficient depends on the variability of the predictor in your specific sample. If your dataset happens to have very little variation in one predictor, its standardized coefficient will look smaller even if the underlying relationship is strong. Two studies with different samples can produce different standardized coefficients for the same real-world relationship, simply because the spread of the data differs. For reporting practical, real-world effects, unstandardized coefficients are generally more informative. Use standardized coefficients as a rough ranking tool within a single model, not as a universal measure of importance.
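The conversion between the two forms uses sample standard deviations, which is exactly why standardized coefficients are sample-dependent. The numbers below are hypothetical:

```python
# Standardized coefficient from an unstandardized one:
# beta_std = b * (SD of predictor) / (SD of outcome).
b_unstd = 3_200.0  # dollars of salary per year of experience
sd_x = 4.5         # sample SD of years of experience
sd_y = 18_000.0    # sample SD of salary

beta_std = b_unstd * sd_x / sd_y
print(round(beta_std, 2))  # SD change in salary per SD change in experience
```

Shrink sd_x (a sample with little variation in experience) and beta_std shrinks with it, even though b_unstd, the real-world dollars-per-year relationship, is unchanged.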
Checking Whether a Coefficient Matters
A coefficient’s size alone doesn’t tell you whether the relationship it describes is real or just noise in your data. That’s where statistical significance comes in. Each coefficient in a regression output comes with a p-value testing whether the true coefficient could plausibly be zero (meaning no relationship). If the p-value is below your chosen threshold, typically 0.05, you reject the idea that the coefficient is zero and call it statistically significant.
Confidence intervals give you the same information in a more useful form. A 95% confidence interval provides a range of plausible values for the coefficient. If that range includes zero, the coefficient is not statistically significant at the 0.05 level. If the range excludes zero, it is. But confidence intervals also tell you something the p-value alone doesn’t: the precision of your estimate and the range of effect sizes consistent with your data. A coefficient of 5.0 with a confidence interval of 4.2 to 5.8 tells a very different story than one with a confidence interval of 0.3 to 9.7, even if both are significant.
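Given a coefficient and its standard error, the large-sample 95% interval is straightforward (using the normal approximation of ±1.96 standard errors; the numbers are hypothetical):

```python
coef = 5.0  # estimated coefficient
se = 0.4    # its standard error

# 95% confidence interval under the normal approximation.
lower, upper = coef - 1.96 * se, coef + 1.96 * se
print((round(lower, 2), round(upper, 2)))  # ≈ (4.22, 5.78)

# Zero outside the interval means significant at the 0.05 level.
print(lower > 0 or upper < 0)
```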
Statistical significance also doesn’t mean practical significance. A coefficient can be tiny and still significant if your sample is large enough, or large and nonsignificant if your sample is small. Always look at the actual size of the coefficient and ask whether that magnitude would matter in the real world, not just whether the p-value clears a threshold.

