How to Report Cohen’s d: Format and Interpretation

Cohen’s d is reported as an italicized lowercase d followed by an equals sign and the value, typically to two decimal places: d = 0.55. Most style guides expect you to pair it with a confidence interval and enough context for the reader to judge both the statistical and practical significance of your finding. Here’s how to do that correctly.

The Basic Format

In APA style (7th edition), the letter d is always italicized. Report the value to two decimal places. APA omits the leading zero before the decimal point only for statistics that cannot exceed 1.00, such as p values and correlations; because Cohen’s d has no upper bound, you keep the leading zero: d = 0.45, not d = .45.
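The formatting rule is easy to automate if you are generating results text from a script; a minimal Python sketch (the value 0.4527 is just an illustration):

```python
# Format an effect size APA-style: two decimal places, leading zero kept.
d = 0.4527
formatted = f"d = {d:.2f}"
print(formatted)  # prints "d = 0.45"
```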

A minimal in-text report looks like this:

The treatment group scored significantly higher than the control group, t(58) = 2.87, p = .005, d = 0.75.

A more complete version includes the 95% confidence interval:

The treatment group scored significantly higher than the control group, t(58) = 2.87, p = .005, d = 0.75, 95% CI [0.22, 1.28].

The confidence interval tells your reader the range of plausible values for the true effect size. A wide interval signals less precision, usually because the sample was small. Including it is increasingly expected in journals that follow APA guidelines, and it gives readers far more information than the point estimate alone.
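If your software does not supply the interval, a common approximation uses the large-sample standard error of d (the Hedges–Olkin variance formula) with a normal critical value. This is a sketch, not an exact method; exact intervals use the noncentral t distribution, and the group sizes below are hypothetical:

```python
import math

def d_confint(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d via the Hedges-Olkin
    large-sample standard error and a normal critical value."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

lo, hi = d_confint(0.75, 30, 30)
print(f"95% CI [{lo:.2f}, {hi:.2f}]")  # prints "95% CI [0.23, 1.27]"
```

With two groups of 30, the approximate interval comes out close to the exact one in the example above; the smaller the sample, the more the approximation and the exact interval diverge.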

Where to Find It in Your Software

If you’re using SPSS version 27 or later, Cohen’s d is built into the independent samples t-test output. Navigate to Analyze, then Compare Means, then Independent Samples T Test. If you’re on SPSS 26 or earlier, it won’t appear automatically, and you’ll need to calculate it manually or use a spreadsheet calculator.
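The manual calculation needs only the group means, standard deviations, and sizes. A minimal Python sketch with hypothetical summary statistics (the pooled standard deviation weights each group's variance by its degrees of freedom):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d using the pooled (df-weighted) standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical groups: treatment M = 5.5 (SD 1.2), control M = 4.9 (SD 1.1)
print(round(cohens_d(5.5, 1.2, 30, 4.9, 1.1, 30), 2))  # prints 0.52
```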

JASP produces Cohen’s d and its confidence interval when you check the “effect size” option in the t-test menu. Jamovi works similarly, displaying the value in the results table once you request it. In R, the effectsize package from the easystats collection computes both Cohen’s d and Hedges’ g directly.

Interpreting the Number

Jacob Cohen proposed three benchmarks for standardized mean differences: 0.20 is a small effect, 0.50 is medium, and 0.80 is large. Cohen described a medium effect as “visible to the naked eye of a careful observer,” a small effect as noticeably smaller but not trivial, and a large effect as the same distance above medium as small is below it.
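One common way to bin these benchmarks in code is shown below; the thresholds are Cohen's, but the binning itself (and the "negligible" label below 0.20) is a convention, and the labels should be treated as the rough reference they are:

```python
def cohen_label(d):
    """Map |d| onto Cohen's conventional benchmark labels.
    The "negligible" label for |d| < 0.20 is an added convention."""
    d = abs(d)
    if d < 0.20:
        return "negligible"
    if d < 0.50:
        return "small"
    if d < 0.80:
        return "medium"
    return "large"

print(cohen_label(0.55))  # prints "medium"
```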

These benchmarks are useful as a rough reference, but Cohen himself warned against applying them mechanically. He developed them to reflect typical effect sizes across the behavioral sciences as a whole, not within any specific discipline. In clinical psychology, where measures are often imprecise and the phenomena are subtle, a d of 0.30 might represent a meaningfully large finding. In experimental physiology, where variables are potent and experimental control is tight, that same 0.30 could be unremarkable. The best comparison is always to other effects in your specific field, not to a generic table.

When you report your result, you can reference Cohen’s labels if appropriate, but adding field-specific context is more informative. For example: “The intervention produced a small-to-medium effect (d = 0.35), comparable to other brief cognitive training programs in this population.”

Cohen’s d vs. Hedges’ g

Cohen’s d slightly overestimates the true effect size when samples are small. Hedges’ g applies a correction factor that removes this bias. For samples larger than about 20 per group, the two statistics are nearly identical and either is fine to report. Below that threshold, Hedges’ g is the better choice.
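The correction is simple enough to apply by hand using the common approximation J ≈ 1 − 3/(4·df − 1), where df = n1 + n2 − 2. A minimal Python sketch with hypothetical group sizes:

```python
def hedges_g(d, n1, n2):
    """Apply the small-sample bias correction to Cohen's d,
    using the approximation J = 1 - 3 / (4*df - 1)."""
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))

print(round(hedges_g(0.75, 10, 10), 2))  # prints 0.72 (small groups)
print(round(hedges_g(0.75, 30, 30), 2))  # prints 0.74 (nearly identical)
```

The two outputs illustrate the threshold mentioned above: with 10 per group the correction is visible; with 30 per group, d and g barely differ.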

Report Hedges’ g in the same format: g = 0.62, 95% CI [0.15, 1.09]. If a reviewer or instructor asks for Cohen’s d and your sample is very small, note which statistic you used and why. The formatting is identical either way.

Which Standard Deviation to Use

The standard version of Cohen’s d uses the pooled standard deviation: the square root of the weighted average of the two groups’ squared standard deviations, with each variance weighted by its degrees of freedom (n − 1). With equal group sizes this reduces to a simple average. This is the default in most software and the version readers expect when they see d reported without further qualification.

An alternative, Glass’s delta (Δ), divides the mean difference by only the control group’s standard deviation. This is useful when the treatment itself is expected to change variability, not just the mean. If the intervention compresses or inflates scores in the treatment group, the control group’s standard deviation better represents the “natural” spread. If you use Glass’s delta, label it explicitly so readers know which formula produced the number.
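The choice of denominator can matter a great deal when variability differs between groups. A minimal Python sketch with hypothetical data in which the treatment doubles the spread:

```python
import math

def pooled_sd(s1, n1, s2, n2):
    """Pooled (df-weighted) standard deviation of two groups."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                     / (n1 + n2 - 2))

# Hypothetical groups: treatment SD is double the control SD
m_t, s_t, n_t = 12.0, 4.0, 25
m_c, s_c, n_c = 10.0, 2.0, 25

d_pooled = (m_t - m_c) / pooled_sd(s_t, n_t, s_c, n_c)
delta = (m_t - m_c) / s_c  # Glass's delta: control-group SD only

print(round(d_pooled, 2), round(delta, 2))  # prints 0.63 1.0
```

Here the same two-point mean difference yields d = 0.63 with the pooled denominator but Δ = 1.00 with the control-group denominator, which is exactly why an unlabeled number is ambiguous.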

Putting It All Together

A well-reported effect size in a results section combines the test statistic, the p value, the effect size, its confidence interval, and a brief interpretive note. Here’s a full example:

Participants in the sleep extension condition recalled significantly more words (M = 14.2, SD = 3.1) than those in the control condition (M = 11.8, SD = 2.9), t(46) = 2.74, p = .009, d = 0.79, 95% CI [0.19, 1.38], indicating a large effect by conventional benchmarks.

A few things to notice in that example. The group means and standard deviations come first, giving the reader the raw numbers. The effect size appears after the significance test, not as a replacement for it. The confidence interval follows immediately. And the interpretive label (“large effect”) is qualified with “by conventional benchmarks,” which signals awareness that context matters.

If you’re reporting multiple comparisons in a table rather than in text, create a column labeled d (italicized) and a separate column for the 95% CI. This keeps the results scannable and avoids cluttering your prose with dozens of inline statistics.