Yes, effect sizes can absolutely be negative. A negative sign simply tells you the direction of the difference or relationship, not that something went wrong with the calculation. Many of the most common effect size measures, including Cohen’s d, Hedges’ g, and Pearson’s r, routinely produce negative values depending on which group scored higher or how two variables relate to each other.
What the Negative Sign Means
Effect size measures how large a difference or relationship is between groups or variables. For measures that compare two group averages, the formula is straightforward: subtract one group’s mean from the other, then divide by a measure of variability. If the group you subtract from has the higher score, the result is negative. If it has the lower score, the result is positive. The sign tells you direction. The number tells you magnitude.
Take Cohen’s d as the most familiar example. It’s calculated as the mean of Group 1 minus the mean of Group 2, divided by the pooled standard deviation. If Group 1 averages 70 on a test and Group 2 averages 80, the numerator is negative, so d is negative. Flip the subtraction order and you get the same number with a positive sign. The size of the effect hasn’t changed at all. Glass’s delta and Hedges’ g work the same way, since both are variations on this basic structure of comparing two means.
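The subtraction-order behavior is easy to see in a few lines of code. This is a minimal sketch of the pooled-standard-deviation formula described above; the group names and scores are hypothetical, chosen to mirror the 70-versus-80 example:

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: (mean1 - mean2) / pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical test scores: Group 1 averages 70, Group 2 averages 80,
# so subtracting Group 2's mean from Group 1's yields a negative d.
g1 = [65, 70, 75, 70, 70]
g2 = [75, 80, 85, 80, 80]

d = cohens_d(g1, g2)          # negative: Group 1 scored lower
d_flipped = cohens_d(g2, g1)  # same magnitude, positive sign
```

Swapping the argument order flips only the sign; the absolute value, and therefore the size of the effect, is identical either way.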
Why Group Order Matters So Much
The single biggest reason an effect size comes out negative is simply which group a researcher labels “Group 1” versus “Group 2.” There’s no universal rule about this. Some researchers subtract the control group from the treatment group. Others do the reverse. This means a negative Cohen’s d in one paper could represent the exact same finding as a positive Cohen’s d in another paper, just with the groups entered in different order.
This is why you should always check which group was subtracted from which before interpreting the sign. A negative effect size in a drug trial might mean the treatment group had lower blood pressure than the control group, which would actually be a good outcome. Or it might mean the treatment group performed worse on some measure, depending entirely on how the researcher set up the comparison. The sign is meaningless without knowing the subtraction order.
Correlation as an Effect Size
Pearson’s correlation coefficient (r) is itself an effect size, and it ranges from -1.0 to +1.0. A value of +1.0 indicates a perfect positive relationship: as one variable goes up, the other goes up in lockstep. A value of -1.0 indicates a perfect negative relationship: as one variable goes up, the other goes down in lockstep. A value of 0 means no relationship at all.
Here, unlike with Cohen’s d, the negative sign carries genuine meaning that doesn’t depend on an arbitrary labeling choice. A correlation of -0.6 between exercise frequency and resting heart rate tells you something specific: people who exercise more tend to have lower resting heart rates. The direction is baked into the data, not determined by which variable you happened to list first. The strength of the relationship is the same whether r is -0.6 or +0.6, but the direction is fundamentally different.
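A short sketch makes the inverse relationship concrete. The exercise and heart-rate numbers below are hypothetical, constructed so that more exercise goes with a lower resting heart rate:

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of the standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: weekly exercise sessions vs. resting heart rate.
exercise = [0, 1, 2, 3, 4, 5]
heart_rate = [78, 74, 72, 68, 66, 62]

r = pearson_r(exercise, heart_rate)  # negative: an inverse relationship
```

Note that listing the variables in the opposite order gives exactly the same r; the negative sign comes from the data itself, not from an arbitrary labeling choice.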
When Negative Doesn’t Mean “Worse”
In medical and clinical research, a negative effect size often represents a desirable outcome. If a study measures symptom severity, a negative Cohen’s d for the treatment group means fewer or milder symptoms compared to the control group. If the outcome variable is pain scores, hospital readmission rates, or days to recovery, a negative difference for the treatment group is exactly what you’d hope to see.
This is one of the most common sources of confusion when reading research. People instinctively associate “negative” with “bad,” but in effect size reporting, the sign only indicates which direction the difference goes. Whether that direction is good or bad depends entirely on what’s being measured. A negative effect size for a depression treatment means depression scores went down, which is the whole point.
Odds Ratios: A Special Case
Not every effect size measure can go negative. Odds ratios, commonly used in medical research comparing event rates between groups, range from 0 to infinity. An odds ratio of 1.0 means no difference between groups. Values above 1.0 mean higher odds of the outcome in the exposed group. Values below 1.0 mean lower odds. The odds ratio can never be negative because it’s a ratio, not a difference.
However, the log odds ratio (which researchers use in logistic regression and meta-analysis) can be negative. Taking the natural log of an odds ratio below 1.0 produces a negative number. So while you’ll never see an odds ratio of -0.5, you might see a log odds ratio of -0.5, which corresponds to an odds ratio of about 0.61. If you’re reading a study that reports effect sizes from logistic regression, pay attention to whether they’re reporting the odds ratio itself or its logarithm.
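The conversion between the two scales is a one-liner in either direction. A quick sketch with illustrative values:

```python
import math

# An odds ratio below 1.0 maps to a negative log odds ratio, and vice versa.
log_or = math.log(0.61)   # negative, since the odds ratio is below 1.0
back = math.exp(-0.5)     # ~0.61: the odds ratio corresponding to a log OR of -0.5

# An odds ratio of exactly 1.0 (no difference) maps to a log odds ratio of 0.
no_effect = math.log(1.0)
```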
Adjusted R-Squared Can Go Negative Too
R-squared, the measure of how much variation in an outcome a statistical model explains, normally ranges from 0 to 1. But its adjusted version, which penalizes models for including too many variables, can technically dip below zero. This happens when a model explains almost none of the variation in the outcome while simultaneously including many predictors. The penalty for those useless extra variables pushes the adjusted R-squared into negative territory. A negative adjusted R-squared is essentially the statistic telling you the model is worse than useless: you’d predict the outcome more accurately by just using the overall average.
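This behavior follows directly from the standard adjustment formula, adjusted R² = 1 - (1 - R²)(n - 1)/(n - p - 1), where n is the number of observations and p the number of predictors. A minimal sketch with illustrative numbers:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared: 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# A near-useless model (R^2 = 0.02) with 10 predictors and only 30 observations:
weak_model = adjusted_r2(0.02, 30, 10)   # negative

# A model with genuine explanatory power and few predictors stays positive:
decent_model = adjusted_r2(0.50, 100, 2)
```

When R² is close to zero and p is large relative to n, the penalty term outweighs the tiny amount of explained variation, and the adjusted value drops below zero.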
How to Interpret Negative Values in Practice
When you encounter a negative effect size in a research paper or meta-analysis, ask three questions. First, what type of effect size is it? Measures based on mean differences (Cohen’s d, Hedges’ g, Glass’s delta) can go negative based on subtraction order. Correlations go negative when the relationship is inverse. Odds ratios cannot go negative, but log odds ratios can.
Second, which group or variable was treated as the reference? For mean-difference measures, the sign flips depending on whether the formula subtracts control from treatment or treatment from control. Without this context, the sign alone is uninterpretable.
Third, what was being measured? A negative effect size on a measure of disease severity means something very different from a negative effect size on a measure of quality of life. The direction only has practical meaning once you know whether higher scores on the outcome are good or bad.
The magnitude of the effect (its absolute value) follows the same benchmarks regardless of sign. For Cohen’s d, 0.2 is typically considered small, 0.5 medium, and 0.8 large. A d of -0.8 is just as large an effect as a d of +0.8. An effect size of 0 represents complete overlap between the two groups (each group’s mean sits at the 50th percentile of the other), and moving away from 0 in either direction, positive or negative, represents increasingly meaningful separation between the groups.
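The percentile framing can be sketched with the standard normal CDF. Assuming both groups are normally distributed with equal variance, the CDF evaluated at d gives the percentile of one group’s mean within the other group’s distribution (this quantity is known as Cohen’s U3):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Percentile of Group 1's mean within Group 2's distribution,
# assuming normality and equal variances:
normal_cdf(0.0)   # 0.5  -> d = 0 means complete overlap
normal_cdf(0.8)   # ~0.79 -> a large effect in the positive direction
normal_cdf(-0.8)  # ~0.21 -> the same magnitude of separation, opposite direction
```

The values for +0.8 and -0.8 are mirror images around the 50th percentile, which is the percentile-scale version of the point above: equal magnitude, opposite direction.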

