How to Report P Values: Formatting Rules and APA Style

Reporting a p-value correctly comes down to three things: formatting it according to your style guide, providing the exact value rather than a threshold, and pairing it with enough context for the reader to interpret it. The specific rules vary between APA, AMA, and other styles, but the core principles are consistent across scientific publishing.

Basic Formatting Rules

Every style guide agrees on a few fundamentals. Report exact p-values to two or three decimal places. Drop the leading zero before the decimal, since a p-value can never exceed 1. And for any value smaller than .001, write “p < .001” rather than stringing out a long decimal.
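These three fundamentals are mechanical enough to automate. Here is a minimal Python sketch (the function name `format_p` is my own, for illustration) that applies the exact-value rule, drops the leading zero, and enforces the .001 floor:

```python
def format_p(p: float, decimals: int = 3) -> str:
    """Format a p-value: exact to 2-3 decimals, no leading zero,
    and a '< .001' floor for very small values."""
    if not 0 <= p <= 1:
        raise ValueError("p-values must lie between 0 and 1")
    if p < 0.001:
        return "p < .001"
    # Round, then strip the leading zero (e.g. '0.006' -> '.006')
    text = f"{p:.{decimals}f}".lstrip("0")
    return f"p = {text}"

print(format_p(0.006))   # p = .006
print(format_p(0.0004))  # p < .001
```

The `decimals` parameter lets you switch between two and three places to match a journal's preference.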

Beyond those basics, formatting depends on which style guide you’re following.

APA Style (7th Edition)

APA uses a lowercase, italicized p. No leading zero. Two or three decimal places for exact values.

  • Standard result: p = .03 or p = .006
  • Very small value: p < .001
  • In tables: report exact values (e.g., p = .015), unless the value falls below .001

APA does not require you to define common statistical symbols like p, t, F, or df. Your readers are expected to recognize them.

AMA Style

AMA uses a capital, italicized P with thin spaces around the operator.

  • Standard result: P = .01
  • Threshold comparison: P < .05; very large value: P > .99
  • Very small value: P < .001

The thin space between the letter, the sign, and the number is a specific AMA requirement. In most word processors, you can insert a thin space as a special character. If your journal’s submission system strips it out, a regular space is the fallback.
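When generating text programmatically, the thin space is the Unicode character U+2009. A short illustrative sketch (the function name `format_p_ama` is my own):

```python
THIN_SPACE = "\u2009"  # Unicode THIN SPACE

def format_p_ama(p: float) -> str:
    """Format a p-value in AMA style: capital P, thin spaces
    around the operator, no leading zero."""
    if p < 0.001:
        return f"P{THIN_SPACE}<{THIN_SPACE}.001"
    value = f"{p:.2f}".lstrip("0")
    return f"P{THIN_SPACE}={THIN_SPACE}{value}"

print(format_p_ama(0.04))  # P = .04, with thin spaces around "="
```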

Other Major Journals

The New England Journal of Medicine, JAMA, and the Annals of Internal Medicine all require that p-values smaller than .001 be reported as “p < .001.” The SAMPL guidelines, widely used for biomedical statistical reporting, set the same floor. The one exception: studies of genetic associations, where extremely small p-values (like 5 × 10⁻⁸) carry specific meaning and are reported in full.

Always Report the Exact Value

One of the most common mistakes in scientific writing is replacing the actual p-value with a simple declaration like “p < .05” or “not significant.” This strips away useful information. A result with p = .049 and one with p = .003 both clear the .05 threshold, but they tell very different stories about how compatible the data are with the null hypothesis.

The same logic applies to results above .05. Writing “p = .08” gives the reader far more to work with than “NS” or “not significant.” A p-value of .08 in a small, underpowered study means something different from p = .74 in a large one. Readers and reviewers need the actual number to judge for themselves.

The current consensus in biomedical publishing is clear: report p-values as continuous quantities. Don’t reduce them to a binary of significant versus not significant.

What to Report Alongside the P-Value

A p-value alone tells you how surprising your data would be if there were truly no effect. It does not tell you how large the effect is, or how precisely you’ve estimated it. That’s why most journals and style guides now expect you to report p-values alongside effect sizes and confidence intervals.

An effect size (like a mean difference, odds ratio, or correlation coefficient) answers the practical question: how big is this? A 95% confidence interval gives the range of plausible values for that effect. Together, these three numbers let the reader assess whether a result is statistically notable, practically meaningful, and precisely measured, or whether it’s a noisy estimate that happens to cross an arbitrary line.

In practice, a well-reported result looks something like this: “Participants in the intervention group scored 4.2 points higher on average (95% CI: 1.1 to 7.3, p = .008).” The reader gets the direction, the size, the uncertainty, and the statistical evidence in one sentence.
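Assembling a sentence like that from the three numbers is straightforward to script. A sketch under assumed names (`report_result` and its parameters are my own, not from any style guide):

```python
def report_result(diff: float, ci_low: float, ci_high: float, p: float) -> str:
    """Build a results phrase with effect size, 95% CI, and p-value."""
    # Apply the usual formatting rules: floor at .001, no leading zero
    p_text = "p < .001" if p < 0.001 else f"p = {p:.3f}".replace("0.", ".", 1)
    return (f"scored {diff:.1f} points higher on average "
            f"(95% CI: {ci_low:.1f} to {ci_high:.1f}, {p_text})")

print(report_result(4.2, 1.1, 7.3, 0.008))
# scored 4.2 points higher on average (95% CI: 1.1 to 7.3, p = .008)
```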

P-Values in Tables and Figures

You have two main options for presenting p-values in tables, and most style guides ask you to pick one and use it consistently throughout your paper.

The first option is listing exact p-values in their own column. This is the more informative approach and works well when your table has room for it. Each row gets its own value: .042, .310, <.001, and so on.

The second option is the asterisk system, where you mark significant values with symbols and define them in a footnote beneath the table. The traditional convention is one asterisk for p < .05, two for p < .01, and three for p < .001. APA style places these definitions in a “probability note” that appears below any general or specific notes for the table. This approach saves space but sacrifices precision, since it groups all values within a range together.
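The traditional thresholds map to symbols in a simple cascade. A quick sketch (the function name `significance_stars` is my own):

```python
def significance_stars(p: float) -> str:
    """Map a p-value to the traditional asterisk convention:
    * p < .05, ** p < .01, *** p < .001."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""

# The table footnote should define the symbols, e.g.:
# *p < .05. **p < .01. ***p < .001.
print(significance_stars(0.03))  # *
```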

If your journal allows it, exact values in a dedicated column are almost always the better choice. They give readers more information and align with the broader push away from binary significance cutoffs.

Language to Avoid

Certain phrases have become so ingrained in academic writing that they feel natural, but they misrepresent what p-values actually mean. Here are the ones to watch for:

  • “Highly significant” or “marginally significant”: These imply that p-values measure the strength or importance of an effect. They don’t. A smaller p-value means the data are less compatible with the null hypothesis, not that the effect is larger or more meaningful.
  • “Trending toward significance”: This phrase is typically used for p-values between .05 and .10. It suggests the result is almost real and just needs a little more data, which is not what the number means.
  • “The results proved” or “failed to prove”: P-values don’t prove or disprove anything. They quantify how surprising the observed data would be under a specific assumption.
  • “Non-significant, therefore no effect”: A p-value above .05 does not mean there is no effect. It means the study didn’t detect one at that threshold, which could reflect a truly absent effect or simply insufficient statistical power.

The cleanest approach is to describe your results in plain terms, state the direction and size of the effect, and let the p-value and confidence interval speak for themselves. Instead of “the difference was statistically significant,” try “the intervention group improved by 4.2 points (p = .008).” The number does the work that the label “significant” tries to do, and it does it more honestly.

Quick Reference for Formatting

  • APA: lowercase italic p, no leading zero, two or three decimals (p = .03, p = .006, p < .001)
  • AMA: uppercase italic P, thin spaces around the operator, no leading zero (P = .04, P < .001)
  • Floor value: never report beyond three decimal places; use < .001 for anything smaller (genetic association studies are the exception)
  • Context: always pair with an effect size and confidence interval
  • Exact values: always preferred over threshold statements like “p < .05”