A main effect is the independent influence of one variable on an outcome, averaged across all levels of the other variables in an experiment. If you’re running a study with two or more factors (like a drug and a therapy), the main effect of the drug tells you whether that drug made a difference on its own, regardless of which therapy patients received. It’s one of the most fundamental concepts in factorial research design, and understanding it unlocks how scientists pull apart the individual contributions of multiple variables at once.
How a Main Effect Works
Most experiments in psychology, medicine, and the social sciences test more than one variable at a time. A factorial design is the formal name for this setup. In a simple 2×2 factorial design, you have two independent variables, each with two levels, and you’re measuring some outcome. That gives you three things to examine: the main effect of variable A, the main effect of variable B, and the interaction between A and B.
The key idea is that a main effect looks at one variable while collapsing across the other. Imagine a study testing whether a medication improves depression scores, and separately whether cognitive behavioral therapy (CBT) improves depression scores. The main effect of the medication asks: did patients on the medication do better than patients on a placebo, regardless of whether they also received CBT or were on a waitlist? The main effect of therapy asks the mirror question: did CBT patients do better than waitlist patients, regardless of which drug they were taking?
If the medication group had lower depression scores than the placebo group across both therapy conditions, that’s a main effect of medication. If CBT patients improved more than waitlist patients across both drug conditions, that’s a main effect of therapy. Each effect is evaluated independently of the other factor.
Marginal Means: The Math Behind It
The way researchers actually calculate a main effect is through marginal means. These are the averages you get when you collapse across one variable and look at the rows or columns of a data table. Picture a table where the rows represent one variable (say, intervention vs. no intervention) and the columns represent another (say, English department vs. Psychology department). Each cell in the table holds the average outcome for that specific combination. The marginal means sit on the edges of the table, showing the overall average for each level of each variable.
In one educational study, the marginal mean for students who received a growth mindset intervention was 2.08, while students who did not receive the intervention averaged 1.76. That difference in marginal means is the main effect of the intervention. Similarly, English students averaged 2.49 and Psychology students averaged 1.34. That’s the main effect of department. Notice that these marginal means don’t tell you anything about how the two variables combine. They only tell you about each variable on its own.
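The collapsing step is easy to see in code. The sketch below computes marginal means from a 2×2 table of cell means; the cell values are illustrative stand-ins rather than the study's raw data, and averaging cells this way assumes an equal number of students per cell.

```python
# Marginal means from a 2x2 table of cell means.
# Cell values are hypothetical; equal cell sizes are assumed.
cell_means = {
    ("intervention", "English"):    2.75,
    ("intervention", "Psychology"): 1.50,
    ("control",      "English"):    2.25,
    ("control",      "Psychology"): 1.00,
}
rows = ["intervention", "control"]
cols = ["English", "Psychology"]

# Collapse across departments to get the marginal mean for each row...
row_marginals = {r: sum(cell_means[(r, c)] for c in cols) / len(cols) for r in rows}
# ...and collapse across rows to get the marginal mean for each column.
col_marginals = {c: sum(cell_means[(r, c)] for r in rows) / len(rows) for c in cols}

# The main effect of the intervention is the difference between row marginals.
intervention_effect = row_marginals["intervention"] - row_marginals["control"]
print(row_marginals)        # {'intervention': 2.125, 'control': 1.625}
print(col_marginals)        # {'English': 2.5, 'Psychology': 1.25}
print(intervention_effect)  # 0.5
```

Notice that the code never compares individual cells to each other: each marginal mean throws away the information about how the two variables combine, which is exactly why main effects say nothing about interactions.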
Main Effects vs. Interaction Effects
The distinction between a main effect and an interaction effect trips up a lot of people, but it’s straightforward once you see it. A main effect says: this variable matters, period. An interaction effect says: how much this variable matters depends on the level of the other variable.
Back to the depression study. If the medication outperformed the placebo and CBT outperformed the waitlist, those are two main effects. But suppose the medication worked dramatically better when combined with CBT than when patients were on the waitlist. That's an interaction: the benefit of the drug depended on which therapy patients received. Importantly, a main effect is an average across the levels of the other factor. A significant main effect of the medication means the drug outperformed the placebo on average across both therapy conditions; when a strong interaction is present, that average can mask real differences between conditions, so main effects should always be interpreted alongside the interaction.
You can have main effects without an interaction, an interaction without main effects, or all three at once. They’re separate questions answered by separate statistical tests.
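The separation between the two questions can be made concrete. In the sketch below (with hypothetical cell means, where a lower score means less depression), the main effect of the drug is a difference between marginal means, while the interaction is a "difference of differences": does the drug's benefit change depending on the therapy condition?

```python
# Hypothetical cell means for the depression example (lower = less depressed).
means = {
    ("drug", "cbt"):     4.0,
    ("drug", "wait"):    8.0,
    ("placebo", "cbt"):  9.0,
    ("placebo", "wait"): 10.0,
}

# Main effect of the drug: drug marginal mean minus placebo marginal mean.
# Negative here means the drug group scored lower (better) overall.
drug_effect = ((means[("drug", "cbt")] + means[("drug", "wait")]) / 2
               - (means[("placebo", "cbt")] + means[("placebo", "wait")]) / 2)

# Interaction: the drug's benefit under CBT vs. its benefit on the waitlist.
benefit_with_cbt  = means[("placebo", "cbt")]  - means[("drug", "cbt")]   # 5.0
benefit_with_wait = means[("placebo", "wait")] - means[("drug", "wait")]  # 2.0
interaction = benefit_with_cbt - benefit_with_wait                        # 3.0

print(drug_effect)   # -3.5  -> a main effect of the drug
print(interaction)   #  3.0  -> nonzero: the drug helps more alongside CBT
```

With these numbers, both answers are "yes": the drug helps on average (main effect), and it helps much more when paired with CBT (interaction).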
How to Spot a Main Effect on a Graph
Factorial results are often displayed as line graphs or bar charts, and there are simple visual shortcuts for identifying main effects. When one line sits consistently above or below the other across the entire graph, that gap signals a main effect of the variable the separate lines represent. The larger the vertical gap between the average heights of the two lines, the larger the main effect.
For the variable plotted on the horizontal axis, a main effect shows up as a difference between the averages of the data points at each position along that axis. If the average of the two data points on the left side of the graph differs noticeably from the average on the right side, there's likely a main effect of the x-axis variable. When the lines are parallel, there's no interaction; when they cross or diverge sharply, an interaction is present, but that's a separate question from whether main effects exist.
Testing Whether a Main Effect Is Significant
Observing a difference in marginal means isn’t enough. Researchers need to confirm that the difference is unlikely to have occurred by chance. In a factorial design, this is done using an analysis of variance (ANOVA), which produces an F-statistic for each main effect and for the interaction. The F-statistic compares the variance explained by the factor to the variance left unexplained (error). A larger F-value means the factor explains more of the outcome relative to random noise.
Each F-statistic comes with a p-value. The conventional threshold is p < 0.05, meaning there's less than a 5% probability of seeing a difference at least this large if the variable truly had no effect. That threshold is a convention dating back to the statistician R.A. Fisher, not a law of nature. Some fields use stricter cutoffs like 0.01 or 0.001 for stronger evidence. In an agricultural study comparing crop varieties and planting densities, both main effects produced p-values below 0.001, indicating strong evidence that variety and density each independently affected yield.
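For a balanced two-way design, the F-statistics can be computed by hand from sums of squares, which makes the "explained variance vs. error" comparison concrete. The scores below are hypothetical, the design is assumed balanced (equal scores per cell), and a real analysis would go one step further and convert each F into a p-value using the F distribution (e.g. via a statistics package).

```python
# F-statistics for a balanced 2x2 ANOVA, computed by hand from hypothetical scores.
cells = {
    ("drug", "cbt"):     [4, 5, 6],
    ("drug", "wait"):    [5, 6, 7],
    ("placebo", "cbt"):  [7, 8, 9],
    ("placebo", "wait"): [8, 9, 10],
}
a_levels = ["drug", "placebo"]
b_levels = ["cbt", "wait"]
n = 3  # scores per cell (balanced design)

mean = lambda xs: sum(xs) / len(xs)
cell_mean = {k: mean(v) for k, v in cells.items()}
grand = mean([y for v in cells.values() for y in v])
a_marg = {a: mean([cell_mean[(a, b)] for b in b_levels]) for a in a_levels}
b_marg = {b: mean([cell_mean[(a, b)] for a in a_levels]) for b in b_levels}

# Sums of squares: variance explained by each factor vs. leftover error.
ss_a = n * len(b_levels) * sum((m - grand) ** 2 for m in a_marg.values())
ss_b = n * len(a_levels) * sum((m - grand) ** 2 for m in b_marg.values())
ss_cells = n * sum((m - grand) ** 2 for m in cell_mean.values())
ss_ab = ss_cells - ss_a - ss_b  # interaction
ss_error = sum((y - cell_mean[k]) ** 2 for k, v in cells.items() for y in v)

df_error = len(cells) * (n - 1)  # 2 * 2 * (3 - 1) = 8
ms_error = ss_error / df_error   # each main effect has df = 1 here, so F = SS / MS_error
f_drug, f_therapy = ss_a / ms_error, ss_b / ms_error
print(f_drug, f_therapy)  # 27.0 3.0
```

A large F (here, 27 for the drug factor) means the factor explains far more variance than random noise would; an F near 1 (or below) means it explains little beyond noise.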
Measuring How Large a Main Effect Is
Statistical significance tells you whether an effect exists, but not how big it is. That’s where effect size comes in. The most common effect size measure for main effects in ANOVA is partial eta squared, which ranges from 0 to 1. It represents the proportion of variance in the outcome that’s explained by one independent variable after removing the influence of the other variables and their interactions. A value of 0 means the variable explains nothing. A value of 1 means it explains everything.
In practice, values around 0.01 are considered small, 0.06 medium, and 0.14 or above large. Reporting effect size alongside the p-value gives a much fuller picture. A main effect can be statistically significant but tiny in practical terms, especially with large sample sizes.
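Partial eta squared has a simple formula: the effect's sum of squares divided by the effect's sum of squares plus the error sum of squares. A minimal sketch, using hypothetical SS values of the kind an ANOVA would output:

```python
# Partial eta squared: the effect's variance relative to itself plus error.
# The SS values passed in below are hypothetical ANOVA outputs.
def partial_eta_squared(ss_effect, ss_error):
    return ss_effect / (ss_effect + ss_error)

print(round(partial_eta_squared(27.0, 8.0), 3))  # 0.771 -> a large effect
print(round(partial_eta_squared(0.5, 80.0), 3))  # 0.006 -> a negligible one
```

The second call illustrates the caution in the text: an effect that tiny could still reach p < 0.05 in a large enough sample, which is why the p-value alone doesn't tell the whole story.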
How Main Effects Are Reported in Research
If you’re reading a journal article, main effects typically appear in a standard format. Under APA style, you’ll see something like: F(1, 56) = 12.34, p = .001, partial η² = .18. The F is the test statistic, the numbers in parentheses are degrees of freedom (related to the number of groups and participants), the p-value indicates significance, and partial eta squared indicates effect size. Exact p-values are reported to two or three decimal places, except when they fall below .001, in which case the paper simply states p < .001.
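These formatting conventions are mechanical enough to script. The helper below (a hypothetical function, not part of any APA library) assembles the reporting string shown above, including the "p < .001" rule and APA's practice of dropping the leading zero from statistics that cannot exceed 1, such as p and partial eta squared.

```python
# Hypothetical helper that formats a main effect in APA style.
def report_main_effect(df1, df2, f, p, peta2):
    strip0 = lambda s: s[1:] if s.startswith("0.") else s  # ".001", not "0.001"
    p_part = "p < .001" if p < 0.001 else "p = " + strip0(f"{p:.3f}")
    return f"F({df1}, {df2}) = {f:.2f}, {p_part}, partial η² = {strip0(f'{peta2:.2f}')}"

print(report_main_effect(1, 56, 12.34, 0.001, 0.18))
# F(1, 56) = 12.34, p = .001, partial η² = .18
```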
When results are discussed in running text rather than in parentheses, APA guidelines recommend spelling out the name of the statistic (writing "the means were" rather than "the Ms were"). The means and standard deviations for each condition are typically reported to two decimal places alongside the inferential test, giving readers enough information to evaluate both the direction and the size of the main effect.

