What Is Eta Squared in ANOVA? Effect Size Explained

Eta squared (η²) is a measure of effect size used in ANOVA that tells you what proportion of the total variance in your data is explained by a particular independent variable. While a p-value tells you whether group differences are statistically significant, eta squared tells you how much of the outcome's variability those differences account for, on a scale from 0 to 1. A value of 0 means the independent variable explains none of the variance, and a value of 1 means it explains all of it.

How Eta Squared Is Calculated

The formula is straightforward: divide the sum of squares between groups by the total sum of squares.

η² = SS_between / SS_total

The sum of squares between groups captures how much the group means differ from the overall mean. The total sum of squares captures all the variability in your dependent variable. So the ratio tells you what fraction of the total variability is accounted for by group membership. If you run a one-way ANOVA comparing test scores across three teaching methods, and η² = 0.12, that means 12% of the variation in test scores is associated with which teaching method students received.
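As a concrete sketch of this calculation, the ratio can be computed directly in plain Python. The scores below are made-up values for three hypothetical teaching-method groups, and `eta_squared` is just a name chosen here:

```python
def eta_squared(groups):
    """eta^2 = SS_between / SS_total for a one-way design."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)

    # SS_between: weighted squared deviations of group means from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

    # SS_total: squared deviations of every individual score from the grand mean
    ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
    return ss_between / ss_total

# Three made-up teaching-method groups
method_a = [72, 75, 78, 80, 74]
method_b = [68, 70, 73, 71, 69]
method_c = [80, 83, 79, 85, 82]
print(round(eta_squared([method_a, method_b, method_c]), 3))
```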

Interpreting Small, Medium, and Large Effects

Cohen’s widely used benchmarks for eta squared are:

  • 0.01: small effect
  • 0.06: medium effect
  • 0.14: large effect

These are general guidelines, not rigid cutoffs. A “small” effect in one field might be practically meaningful in another. In educational research, for example, even modest effect sizes can represent real differences in student outcomes when scaled across thousands of learners. The benchmarks are most useful as a rough frame of reference when you have no domain-specific standards to compare against.
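If you want those benchmarks as a quick lookup, a trivial helper might look like this (the function name and the "negligible" label for values under 0.01 are arbitrary choices, not part of Cohen's scheme):

```python
def cohen_label(eta_sq):
    """Map an eta-squared value onto Cohen's rough benchmark labels."""
    if eta_sq >= 0.14:
        return "large"
    if eta_sq >= 0.06:
        return "medium"
    if eta_sq >= 0.01:
        return "small"
    return "negligible"  # below even the "small" benchmark
```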

How It Relates to R-Squared

If you’re familiar with regression, eta squared is conceptually similar to R². Both describe the proportion of variance in the dependent variable that’s explained by the independent variable(s). In a simple one-way ANOVA with a single factor, eta squared and R² from the equivalent regression model will give you the same value. This makes eta squared intuitive for anyone already comfortable reading R² as “percent of variance explained.”
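The equivalence is easy to check numerically. In this sketch (made-up data), R² is computed by treating each group's mean as the regression's fitted value, which is exactly what a dummy-coded one-way regression produces:

```python
groups = [[4, 5, 6], [7, 8, 9], [2, 3, 4]]  # arbitrary illustration data
scores = [x for g in groups for x in g]
grand_mean = sum(scores) / len(scores)

# Eta squared: SS_between / SS_total
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_total = sum((x - grand_mean) ** 2 for x in scores)
eta_sq = ss_between / ss_total

# R^2: 1 - SS_residual / SS_total, with the group mean as the fitted value
ss_resid = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
r_sq = 1 - ss_resid / ss_total

assert abs(eta_sq - r_sq) < 1e-9  # same value, as the text states
```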

Eta Squared vs. Partial Eta Squared

In a one-way ANOVA with a single independent variable, eta squared and partial eta squared produce identical results. The distinction matters once you move to factorial designs with two or more independent variables.

Standard eta squared always divides by the total sum of squares, which includes variance from every factor and interaction in the model. Partial eta squared instead divides a factor's sum of squares by that same sum of squares plus the error sum of squares, stripping the variance attributed to the other factors and interactions out of the denominator. This means partial eta squared for a given factor will always be equal to or larger than its eta squared in a multi-factor design.
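With hypothetical sums of squares from a two-factor design (the numbers below are purely illustrative), the two denominators look like this:

```python
# Made-up sums of squares from a two-factor ANOVA
ss_a = 40.0        # factor A
ss_b = 25.0        # factor B
ss_ab = 10.0       # A x B interaction
ss_error = 125.0
ss_total = ss_a + ss_b + ss_ab + ss_error  # 200.0

# Standard eta squared: denominator includes B and A x B variance
eta_sq_a = ss_a / ss_total                   # 40 / 200 = 0.20

# Partial eta squared: denominator is only A's variance plus error
partial_eta_sq_a = ss_a / (ss_a + ss_error)  # 40 / 165 ~ 0.24

assert partial_eta_sq_a >= eta_sq_a  # always holds in a multi-factor design
```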

Partial eta squared has become the dominant measure in published research. It’s the default output in most statistical software, including SPSS. If you’re reading a paper that reports effect sizes from a factorial ANOVA, you’re almost certainly looking at partial eta squared, even if the authors aren’t explicit about it.

The Bias Problem

Eta squared has a known positive bias, meaning it tends to overestimate the true effect size in the population. Sampling error guarantees that observed group means differ at least a little even when the population means are identical, so the sum of squares between groups is greater than zero in every sample, and that extra variance inflates the ratio. The result is that your calculated η² will, on average, be slightly larger than the real population value.

This bias is most pronounced with small sample sizes and shrinks as your sample grows. The sampling distribution of eta squared is also right-skewed, which means occasional samples can produce noticeably inflated values. If you’re working with small groups (say, 10 to 20 participants per condition), the overestimation can be substantial enough to matter.
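A small simulation makes the bias visible (standard-library Python only; the seed, group count, and sample sizes are arbitrary choices). All groups are drawn from the same distribution, so the true population effect is exactly zero, yet the average sample η² comes out well above zero:

```python
import random

random.seed(1)

def eta_squared(groups):
    scores = [x for g in groups for x in g]
    gm = sum(scores) / len(scores)
    ss_b = sum(len(g) * (sum(g) / len(g) - gm) ** 2 for g in groups)
    ss_t = sum((x - gm) ** 2 for x in scores)
    return ss_b / ss_t

# Null model: three groups of n = 10, all from the SAME normal distribution,
# so the true population eta squared is 0.
n_sims, n_per_group = 2000, 10
estimates = []
for _ in range(n_sims):
    groups = [[random.gauss(0, 1) for _ in range(n_per_group)] for _ in range(3)]
    estimates.append(eta_squared(groups))

mean_eta = sum(estimates) / n_sims
print(round(mean_eta, 3))  # clearly above 0 despite a true effect of 0
```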

Omega Squared as an Alternative

Omega squared (ω²) was developed specifically to correct for the upward bias in eta squared. It applies a correction factor that adjusts for sample size, producing a less biased estimate of the population effect size. Despite being widely recommended in statistics textbooks, omega squared is reported far less often than eta squared in practice. Part of the reason is that many software packages don’t calculate it automatically, and clear guidance on computing it across different ANOVA designs has historically been lacking.
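One common formula for omega squared in a one-way design subtracts the error variance that leaks into the between-groups sum of squares. A sketch with illustrative numbers (`omega_squared` is just a name chosen here):

```python
def omega_squared(ss_between, ss_total, df_between, df_error):
    """One-way formula:
    omega^2 = (SS_between - df_between * MS_error) / (SS_total + MS_error)."""
    # In a one-way design, SS_error = SS_total - SS_between
    ms_error = (ss_total - ss_between) / df_error
    return (ss_between - df_between * ms_error) / (ss_total + ms_error)

# Illustrative numbers: k = 3 groups, N = 30, so df_between = 2, df_error = 27.
# Eta squared here would be 30 / 100 = 0.30; omega squared comes out lower.
print(round(omega_squared(30.0, 100.0, 2, 27), 3))
```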

If you’re deciding which to report, omega squared is the more conservative and technically more accurate choice, especially with smaller samples. For large samples, the two measures converge and the practical difference becomes negligible.

Reporting Eta Squared

Under APA style, Greek letters like η are written in standard (non-italic) type, unlike Latin statistical symbols such as F or p, which are italicized. A typical write-up might look like: F(2, 87) = 5.41, p = .006, η² = .11. This tells the reader that 11% of the variance in the outcome was associated with group membership, giving context that the p-value alone cannot provide.
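If you assemble such lines programmatically, a purely illustrative formatter might look like this (the function and its name are not part of any standard library, and plain text cannot show the italics APA requires for F and p):

```python
def apa_anova_line(df_between, df_error, f_value, p_value, eta_sq):
    """Format a one-way ANOVA result as a plain-text APA-style line."""
    def strip_zero(x, digits):
        # APA drops the leading zero for statistics that cannot exceed 1
        return f"{x:.{digits}f}".lstrip("0")
    return (f"F({df_between}, {df_error}) = {f_value:.2f}, "
            f"p = {strip_zero(p_value, 3)}, η² = {strip_zero(eta_sq, 2)}")

print(apa_anova_line(2, 87, 5.41, 0.006, 0.11))
```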

Including an effect size measure alongside your significance test is now expected in most social science, education, and health research journals. A statistically significant result with a tiny effect size means something very different from one with a large effect size, and eta squared makes that distinction visible at a glance.