A main effect in ANOVA is the overall impact of one independent variable on your outcome, averaged across all levels of the other independent variables in the study. In a two-way ANOVA with two independent variables, you get three results: a main effect for each variable and an interaction effect between them. Understanding what main effects tell you, and when they can be misleading, is one of the most important parts of reading ANOVA output correctly.
How a Main Effect Works
Imagine a study testing whether a drug improves depression scores, and whether cognitive behavioral therapy (CBT) also improves them. You have two independent variables: drug (medication vs. placebo) and therapy (CBT vs. waitlist). A two-way ANOVA will produce three separate statistical tests from this design.
The main effect of drug asks: across all patients, regardless of whether they got CBT or sat on a waitlist, did the medication group score differently than the placebo group? The main effect of therapy asks the same kind of question in reverse: regardless of which pill patients took, did the CBT group score differently than the waitlist group? The third result, the interaction effect, asks whether the combination of drug and therapy produced something beyond what either did alone.
The key phrase is “regardless of.” A main effect collapses across the other variable and looks only at the overall difference between levels of the variable in question. In the study above, the main effect of drug compared average scores for all medication patients (whether they also got CBT or not) against average scores for all placebo patients.
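The collapsing step can be made concrete with a small sketch. The cell means below are hypothetical numbers invented for illustration (assuming a balanced design with equal cell sizes, where averaging cell means gives the correct marginal means):

```python
# Hypothetical cell means (depression scores) for a balanced 2x2 design:
# first key element = drug level, second = therapy level.
cell_means = {
    ("medication", "cbt"): 8.0,
    ("medication", "waitlist"): 14.0,
    ("placebo", "cbt"): 16.0,
    ("placebo", "waitlist"): 20.0,
}

def marginal_mean(factor_index: int, level: str) -> float:
    """Average the cell means that share `level`, collapsing across the
    other factor. Assumes equal cell sizes, so a simple average of cell
    means equals the marginal mean."""
    vals = [m for key, m in cell_means.items() if key[factor_index] == level]
    return sum(vals) / len(vals)

# Main effect of drug: compare marginal means collapsed over therapy.
print(marginal_mean(0, "medication"))  # 11.0
print(marginal_mean(0, "placebo"))     # 18.0
```

The main effect of drug compares 11.0 against 18.0, regardless of which therapy condition produced each score.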
What Main Effects Are Not
A common misconception is that a main effect is the same thing as running a simple comparison, like a t-test, between two groups. It isn’t. In a factorial ANOVA, each main effect is tested within a model that also contains the other factor and the interaction: the error term comes from the full model, and in unbalanced designs the sums of squares for each factor are adjusted for the other terms. So the main effect of drug is not simply a standalone “medication group mean vs. placebo group mean” test. It’s that comparison evaluated within a model that also accounts for therapy and for how drug and therapy might work together.
This distinction matters most when an interaction is present. If the drug works well for patients who also get CBT but barely works for patients on the waitlist, the overall main effect of drug might still come back significant, but that single number would hide an important pattern. The drug’s benefit depends on therapy status, and just reporting the main effect would be misleading.
Reading the F-Statistic
Each main effect is tested with an F-ratio. The F value is calculated by dividing the variance explained by that factor (called mean square for the factor) by the leftover, unexplained variance in the data (called mean square error). A larger F value means the factor explains more variability relative to random noise.
The F value comes with two degrees-of-freedom values. The first (numerator) is the number of levels of your independent variable minus one; if your variable has three groups, the numerator degrees of freedom is 2. The second (denominator) reflects the size of your sample and the number of terms in your model. The software compares your F value to an F distribution with those degrees of freedom and returns a p-value: the probability of obtaining an F at least that large if the factor had no real effect.
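The arithmetic behind the F-ratio is simple enough to sketch directly. The sums of squares below are hypothetical numbers chosen for illustration, not from the depression study:

```python
def f_ratio(ss_factor: float, df_factor: int,
            ss_error: float, df_error: int) -> float:
    """F = MS_factor / MS_error, where each mean square is SS / df."""
    ms_factor = ss_factor / df_factor
    ms_error = ss_error / df_error
    return ms_factor / ms_error

# Hypothetical values: a factor with 3 levels (numerator df = 3 - 1 = 2)
# in a one-way design with N = 30 (denominator df = 30 - 3 = 27).
f = f_ratio(ss_factor=120.0, df_factor=2, ss_error=270.0, df_error=27)
print(f)  # 6.0
```

Here the factor's mean square (60.0) is six times the error mean square (10.0), so the factor explains six times as much variability per degree of freedom as random noise does.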
In the depression study, the main effect of drug produced F(1, 92) = 361.55, p < .001, and the main effect of therapy produced F(1, 92) = 65.59, p < .001. Both were statistically significant, meaning each variable, averaged across the levels of the other, was associated with a difference in depression scores.
When Interactions Complicate the Picture
A main effect and an interaction can both be significant at the same time. When that happens, the interaction usually takes priority in interpretation because it tells you that the effect of one variable changes depending on the level of the other. Reporting just the main effect in that situation would oversimplify what’s actually going on.
Consider a pain study where a drug is tested in both men and women. If the drug lowers pain in women but not in men, you might still find a significant main effect of drug (because averaging across sexes, the drug group has somewhat lower pain overall). But that main effect is misleading on its own. The interaction tells the real story: the drug’s effect depends on sex.
It’s also possible to find an interaction with no main effects at all. This means neither variable, by itself, shifted the outcome in one consistent direction. But the combination of specific levels did something notable. Picture two lines on a graph that cross each other: neither variable has a consistent advantage, but the pattern flips depending on the other variable’s level.
Spotting Main Effects on a Graph
Bar graphs and line graphs make main effects easy to see. In a bar graph from a 2×2 design, a main effect of one variable shows up when the bars for one level of that variable are consistently higher (or lower) than the bars for the other level, across all conditions. If blue bars represent cell phone use and red bars represent no cell phone use, and the red bars are taller on average, there’s a main effect of cell phone use on driving performance.
In a line graph where one independent variable is on the x-axis and separate lines represent levels of the other variable, parallel lines suggest main effects without an interaction. If the lines are not parallel, that’s a visual signal of an interaction. Lines that spread apart indicate the effect of one variable grows stronger at certain levels of the other. Lines that cross over each other indicate the direction of the effect actually reverses.
Effect Size for Main Effects
Statistical significance tells you whether a main effect likely exists, but it doesn’t tell you how large it is. For that, researchers report an effect size measure, most commonly partial eta-squared. This value represents the proportion of variance in the outcome that is explained by that particular factor, after accounting for other factors in the model.
General benchmarks for partial eta-squared are 0.01 for a small effect, 0.06 for a medium effect, and 0.14 for a large effect. These are rough guidelines, and what counts as meaningful varies by field. A small effect size in a clinical trial might still matter if the outcome is life or death, while a large effect size in a lab task might have little practical relevance.
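Partial eta-squared is computed directly from sums of squares, which makes it easy to sketch. The values below are hypothetical, used only to show the arithmetic:

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta-squared: the effect's sum of squares as a proportion
    of itself plus the error sum of squares."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares for one factor in a factorial model.
eta = partial_eta_squared(ss_effect=30.0, ss_error=170.0)
print(round(eta, 3))  # 0.15 -> large by the usual benchmarks
```

Note the denominator excludes variance attributed to other factors, which is what makes the measure "partial": each factor is judged only against unexplained error.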
Reporting a Main Effect
In APA format, main effects are reported with the F value, both degrees of freedom in parentheses, and the p-value. For example: “A main effect of testing time was found, F(2, 99) = 12.24, p < .001.” A nonsignificant result is reported the same way, with its exact p-value; current APA style recommends exact p-values over the older “n.s.” label: “The main effect of year in college was not significant, F(3, 98) = 2.33, p = .08.” You should also report the means and standard deviations for each group so readers can see the direction and size of the differences.
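If you report many effects, a small formatting helper keeps the style consistent. This is an illustrative sketch (the function name and rounding choices are my own, not a standard API); it follows the APA convention of exact p-values to two or three decimals, with very small values written as p < .001:

```python
def apa_f_report(effect: str, df1: int, df2: int, f: float, p: float) -> str:
    """Format an ANOVA effect in APA style: F(df1, df2) = X.XX, p = .XXX,
    using 'p < .001' for very small p-values and no leading zero on p."""
    p_text = "p < .001" if p < 0.001 else f"p = {p:.3f}".replace("0.", ".")
    return f"{effect}, F({df1}, {df2}) = {f:.2f}, {p_text}"

print(apa_f_report("A main effect of testing time was found",
                   2, 99, 12.24, 0.0004))
# A main effect of testing time was found, F(2, 99) = 12.24, p < .001
```

Dropping the leading zero on p follows the APA rule for statistics that cannot exceed 1.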
Assumptions That Must Be Met
For a main effect test to be valid, the data need to satisfy several assumptions. The outcome variable should be continuous and approximately normally distributed within each group. The variance of the outcome should be roughly equal across groups, a condition called homogeneity of variance. Observations need to be independent of each other, meaning one participant’s score shouldn’t influence another’s. If you’re measuring the same participants multiple times, standard ANOVA is not appropriate because it violates the independence assumption; repeated-measures ANOVA, which adds a sphericity assumption (consistent variability across time points), is needed instead.
Violations of these assumptions can inflate or deflate your F values, making main effects appear significant when they aren’t, or hiding real effects. Most statistical software includes tests for these assumptions, such as Levene’s test for equal variances, and corrections you can apply when assumptions are violated.
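Levene's test itself is just a one-way ANOVA run on absolute deviations from each group's center, which can be sketched in a few lines. This is the mean-centered version of the test with made-up data; statistical packages typically default to the more robust median-centered (Brown–Forsythe) variant:

```python
def levene_w(groups):
    """Levene's W statistic (mean-centered version): a one-way ANOVA F
    computed on absolute deviations from each group's own mean. Large W
    suggests the groups have unequal variances."""
    z = [[abs(x - sum(g) / len(g)) for x in g] for g in groups]
    n = sum(len(g) for g in z)
    k = len(z)
    grand = sum(x for g in z for x in g) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in z)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in z for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two hypothetical groups with visibly different spreads.
tight = [10.0, 10.1, 9.9, 10.0, 10.2]
wide = [6.0, 14.0, 2.0, 18.0, 10.0]
print(levene_w([tight, wide]) > 1)  # large W flags unequal variances
```

The W statistic is then compared to an F distribution with (k − 1, n − k) degrees of freedom, just like an ordinary main-effect test.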
Scaling Beyond Two Variables
Main effects aren’t limited to two-way designs. In a three-way ANOVA with variables for sex, drug, and therapy type, you would get three main effects (one for each variable), three two-way interactions, and one three-way interaction. The logic stays the same: each main effect tells you the overall influence of one variable, collapsed across all levels of the other two. As designs grow more complex, the number of interactions multiplies, and interpreting main effects in isolation becomes increasingly risky without first checking whether interactions are present.
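The way effects multiply can be enumerated directly: a full factorial model tests one term for every non-empty subset of the factors, so k factors yield 2^k − 1 effects. A short sketch using the factor names from the example above:

```python
from itertools import combinations

def anova_effects(factors):
    """List every effect a full factorial ANOVA tests: all non-empty
    subsets of the factors (main effects are the singletons)."""
    return [combo for r in range(1, len(factors) + 1)
            for combo in combinations(factors, r)]

effects = anova_effects(["sex", "drug", "therapy"])
print(len(effects))  # 7: three main effects, three two-way, one three-way
for e in effects:
    print(" x ".join(e))
```

With four factors the count jumps to 15 terms, which is one concrete reason interpreting main effects in isolation gets riskier as designs grow.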

