Reporting a t-test result requires a specific string of statistics: the means and standard deviations for each group, the t-value with degrees of freedom in parentheses, the p-value, and ideally an effect size. A properly formatted result looks like this: t(44) = 1.23, p = .09. But getting the notation right is only half the job. The sentence around those numbers matters just as much, because it tells your reader what the result actually means.
The Core Formula
Every t-test report includes the same basic components in the same order:
- Means (M) and standard deviations (SD) for each group or time point
- The t-value, written as a lowercase italic t
- Degrees of freedom, placed in parentheses right after t
- The p-value, written as a lowercase italic p
- Effect size, typically Cohen’s d
These pieces snap together into a standard string at the end of a sentence: t(44) = 1.23, p = .09, d = 0.35. The sentence before that string describes the finding in plain language, telling the reader what was compared, which group scored higher, and whether the difference was statistically significant.
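Assembling the string is mechanical enough to automate. Here is a minimal Python sketch; the helper names apa_p and apa_t_string are my own, not any standard API:

```python
# Hypothetical helpers for building an APA-style t-test string.

def apa_p(p):
    """Format a p-value: no leading zero, 'p < .001' for very small values."""
    if p < 0.001:
        return "p < .001"
    s = f"{p:.3f}".rstrip("0")      # "0.090" -> "0.09"
    return "p = " + s.lstrip("0")   # "0.09"  -> ".09" (drop leading zero)

def apa_t_string(t, df, p, d=None):
    """Build the full string, e.g. 't(44) = 1.23, p = .09, d = 0.35'."""
    parts = [f"t({df}) = {t:.2f}", apa_p(p)]
    if d is not None:
        parts.append(f"d = {d:.2f}")
    return ", ".join(parts)

print(apa_t_string(1.23, 44, 0.09, 0.35))  # t(44) = 1.23, p = .09, d = 0.35
```

The effect size is optional here only so the same helper covers write-ups where d is reported in a table instead of the text.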
How To Report an Independent Samples T-Test
An independent samples t-test compares two separate groups. The write-up follows a predictable structure: state the finding, provide the descriptive statistics for each group, then close with the test statistics. Here’s a complete example:
Participants in the experimental group consumed significantly fewer drinks (M = 0.67, SD = 1.15) than those in the control group (M = 8.00, SD = 2.00), t(4) = -5.51, p = .005.
Notice that the means and standard deviations are embedded directly in the sentence, placed in parentheses right after each group is named. This lets the reader see the numbers in context without breaking the flow. If you’re comparing many groups or reporting multiple t-tests, you can move the descriptive statistics into a table instead and keep only the t-value and p-value in the text.
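For readers who want to see where the numbers come from, here is a sketch of the pooled-variance (Student's) independent samples t-test in plain Python, run on made-up raw scores chosen to roughly match the drinks example above:

```python
# Pooled-variance independent samples t-test; the raw scores are
# hypothetical, chosen to approximate the example in the text.
from statistics import mean, stdev

def independent_t(group1, group2):
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    s1, s2 = stdev(group1), stdev(group2)   # sample SDs (n - 1 denominator)
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
    se = (pooled_var * (1 / n1 + 1 / n2)) ** 0.5
    return (m1 - m2) / se, df

experimental = [0, 0, 2]   # M = 0.67, SD = 1.15
control = [6, 8, 10]       # M = 8.00, SD = 2.00
t, df = independent_t(experimental, control)
print(f"t({df}) = {t:.2f}")  # t(4) = -5.50
# The two-tailed p-value then comes from a t table or, for example,
# scipy.stats.t.sf(abs(t), df) * 2.
```

The negative t simply reflects which group was subtracted from which; the sentence, not the sign, tells the reader the direction.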
For a non-significant result, the structure is identical. You just change the wording to reflect that the groups did not differ significantly:
Men (M = 4.05, SD = 0.50) and women (M = 4.11, SD = 0.55) did not differ significantly on levels of extraversion, t(198) = 0.81, p = .42.
How To Report a Paired Samples T-Test
A paired samples t-test compares two measurements from the same participants, such as before and after an intervention, or responses to two different conditions. The format is nearly identical to an independent samples t-test, but your sentence should make it clear that the same people were measured twice.
Here’s a well-structured example: A paired samples t-test was conducted to compare feelings of disgust attributed to oneself versus to the victim. Participants attributed more feelings of disgust to the victim (M = 5.83, SD = 1.21) than to themselves (M = 5.67, SD = 1.24), t(179) = 3.10, p = .002, Cohen’s d = 0.23.
The key difference is context. With paired data, your opening sentence should specify what the two conditions were and why the comparison makes sense. The reader needs to understand the within-subjects design to interpret the numbers correctly.
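Computationally, a paired samples t-test is just a one-sample test on the difference scores. A minimal sketch, with hypothetical before/after ratings (not the disgust study's actual data):

```python
# Paired samples t-test on difference scores; the before/after
# ratings below are made up for illustration.
from statistics import mean, stdev

def paired_t(before, after):
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / n ** 0.5)
    return t, n - 1   # df = number of pairs minus one

before = [5, 6, 4, 7, 5, 6]
after = [6, 7, 5, 7, 6, 8]
t, df = paired_t(before, after)
print(f"t({df}) = {t:.2f}")  # t(5) = 3.87
```

Note the degrees of freedom: six pairs give df = 5, even though twelve measurements were taken.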
Including Effect Size
A p-value tells you whether a difference is statistically significant. It does not tell you whether that difference is meaningful. That’s what effect size is for, and many journals and instructors now require it.
The most common effect size for t-tests is Cohen’s d, which expresses the difference between two means in standard deviation units. The general benchmarks:
- 0.2 = small effect
- 0.5 = moderate effect
- 0.8 = large effect
In the paired samples example above, the Cohen’s d of 0.23 tells you that even though the result was statistically significant (p = .002), the actual size of the difference was small. This is useful information that the p-value alone can’t provide. Add it at the end of the statistics string, after the p-value: t(179) = 3.10, p = .002, d = 0.23.
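Cohen's d is straightforward to compute from summary statistics. A sketch assuming equal group sizes, so the pooled SD reduces to the root mean square of the two SDs (the numbers are hypothetical):

```python
# Cohen's d from summary statistics, assuming equal group sizes.
def cohens_d(m1, s1, m2, s2):
    pooled_sd = ((s1**2 + s2**2) / 2) ** 0.5
    return (m1 - m2) / pooled_sd

d = cohens_d(3.7, 0.4, 3.2, 0.3)
print(f"d = {d:.2f}")  # d = 1.41
```

For paired data, one common convention is instead the mean of the difference scores divided by their standard deviation, which equals t divided by the square root of the number of pairs.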
Rounding and Decimal Places
APA style has specific conventions for how many decimal places to use. Getting these right signals that you know what you’re doing:
- Means and standard deviations: one decimal place (though two is acceptable when your scale of measurement warrants it)
- T-values and other inferential statistics: two decimal places
- P-values: two or three decimal places. When p is less than .001, write p < .001 rather than reporting the exact value
One important detail: p-values, correlations, and proportions never have a zero before the decimal point, because they can never exceed 1.0. Write p = .03, not p = 0.03. Means and t-values, which can exceed 1.0, do get the leading zero when they fall below 1.
Formatting the Symbols
Statistical symbols are italicized in APA style. This applies to t, p, M, SD, d, and N. If you’re writing in a word processor, italicize each one individually. In the statistics string, use spaces around the equals sign and a comma between each component: t(44) = 1.23, p = .09, d = 0.35.
The degrees of freedom go inside parentheses immediately after t with no space. For a standard independent samples t-test, degrees of freedom equal the total number of participants minus two. For a paired samples t-test, it’s the number of pairs minus one. If you used Welch’s correction (which adjusts for unequal variances between groups), your degrees of freedom will likely be a decimal number. Report it rounded to two decimal places.
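The decimal degrees of freedom come from the Welch–Satterthwaite formula. A sketch with made-up data showing both the Welch t statistic and its non-integer df:

```python
# Welch's t-test with Welch-Satterthwaite degrees of freedom;
# the two groups below are hypothetical.
from statistics import mean, variance

def welch_t(group1, group2):
    n1, n2 = len(group1), len(group2)
    v1, v2 = variance(group1), variance(group2)   # sample variances
    se_sq = v1 / n1 + v2 / n2
    t = (mean(group1) - mean(group2)) / se_sq ** 0.5
    df = se_sq**2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

g1 = [4, 5, 6, 5, 4, 6]
g2 = [7, 9, 11, 8, 12]
t, df = welch_t(g1, g2)
print(f"t({df:.2f}) = {t:.2f}")  # t(5.24) = -4.41
```

The Welch df is always somewhere between the smaller group's n minus one and the pooled df, shrinking toward the former as the variances diverge.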
Reporting Non-Significant Results
Non-significant results deserve the same level of detail as significant ones. Report the means, standard deviations, t-value, degrees of freedom, and exact p-value. What changes is your language. Say the groups “did not differ significantly” rather than claiming they were “the same” or that the intervention “had no effect.” A non-significant result means you didn’t find evidence of a difference. It doesn’t prove one doesn’t exist.
Avoid softening language like “marginally significant” or “approaching significance” for p-values that fall just above .05. These phrases introduce interpretive bias. If p = .08, report it as .08 and let the reader evaluate the result in context. You can note that the study may have been underpowered if your sample was small, but frame this as a limitation rather than an excuse to reinterpret the finding.
Also report the exact p-value rather than writing “n.s.” or “p > .05.” Exact values give your reader more information to work with, especially in meta-analyses or replication efforts.
Putting It All Together
Here’s a checklist for a complete t-test write-up. Your sentence should:
- State the comparison in plain language (what groups or conditions, what outcome)
- Report direction by saying which group scored higher or lower
- Include descriptive statistics (M and SD) for each group, embedded in the sentence or in a table
- Provide the test statistic as t(df) = value
- Give the exact p-value (or p < .001 for very small values)
- Add an effect size such as Cohen’s d
A polished example combining everything: Women reported significantly higher levels of happiness (M = 3.7, SD = 0.4) than men (M = 3.2, SD = 0.3), t(58) = 5.44, p < .001, d = 1.41. That single sentence gives your reader the comparison, the direction, the descriptive statistics, the test result, and the practical significance of the finding.

