The statistical analysis section belongs in your methods, typically as the final subsection, and it needs to do three things clearly: name the tests you used, explain why you chose them, and state your threshold for significance. Getting this right signals to reviewers and readers that your findings rest on solid ground. Here’s how to build each part.
Where Statistical Analysis Fits in Your Paper
Your statistical analysis write-up lives in the methods section, but the numbers it produces belong in the results. This distinction trips up many writers. In the methods, you describe your analytical plan: which tests you selected, how you determined your sample size, how you handled missing data, and what software you used. In the results, you report the output of those tests: the test statistics, p-values, effect sizes, and confidence intervals. Think of methods as the recipe and results as the dish.
Start With Your Significance Threshold
State up front the alpha level you used to determine statistical significance. In most fields, this is 0.05, meaning you accepted a 5% risk of declaring an effect when none exists (a false positive). A straightforward sentence works: “Statistical significance was set at p < .05 for all analyses.” If you used a stricter threshold (like 0.01) or adjusted for multiple comparisons, explain why. Reviewers look for this early because it frames everything that follows.
Match Each Variable to Its Test
The core of your statistical analysis section pairs each type of data with the test you used to analyze it. Don’t just list the tests you ran. Be specific about which variables went through which analysis. A strong example from a clinical trial: “Categorical variables were analyzed with the chi-square or Fisher exact test. Continuous variables were presented as mean ± SD or median (interquartile range) based on distribution and analyzed with the t test or Mann-Whitney test.” That sentence tells the reader exactly what happened and why the presentation varied.
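The decision logic in that example can be sketched in code. This is a minimal illustration with invented data, not a recommended analysis pipeline: the contingency table and group samples are made up, and real studies should pre-specify these choices.

```python
# Sketch of matching variable type to test, mirroring the
# clinical-trial example. All data here are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Categorical outcome: 2x2 contingency table -> chi-square,
# falling back to the Fisher exact test when expected counts are small.
table = np.array([[30, 10],
                  [22, 18]])
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
if (expected < 5).any():
    _, p_cat = stats.fisher_exact(table)

# Continuous outcome: check the distribution, then choose
# the t test (roughly normal) or Mann-Whitney (otherwise).
group_a = rng.normal(50, 10, 60)
group_b = rng.normal(55, 10, 60)
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))
if normal:
    stat, p_cont = stats.ttest_ind(group_a, group_b)
else:
    stat, p_cont = stats.mannwhitneyu(group_a, group_b)
```

The same branching (normal vs. non-normal, large vs. small expected counts) is exactly what the quoted methods sentence communicates in prose.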
If you ran subgroup analyses, interaction tests, or adjusted for confounding variables, describe those separately. The STROBE guidelines for observational studies specifically require authors to describe all statistical methods used to control for confounding, any methods for examining subgroups and interactions, and any sensitivity analyses. Even if your paper isn’t an observational study, these are useful benchmarks for thoroughness.
Report Your Sample Size Justification
Reviewers expect to see a power analysis explaining how you arrived at your sample size. This tells the reader your study was large enough to detect a meaningful difference if one existed. A good power analysis reports the expected effect size, the alpha level, and the statistical power (typically 0.80, meaning an 80% chance of detecting a true effect). For example, one mechanical ventilation study calculated that each treatment group needed 136 subjects based on previous institutional data, with an alpha of 0.05 and power of 0.80.
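A calculation like that can be reproduced in a few lines. This sketch uses statsmodels with an assumed effect size (Cohen’s d = 0.5) purely for illustration; your own expected effect should come from pilot data or prior literature, as in the study above.

```python
# Sample-size calculation for a two-group t test.
# effect_size=0.5 is an assumed value, not from any real study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # expected standardized difference (Cohen's d)
    alpha=0.05,               # significance threshold
    power=0.80,               # 80% chance of detecting a true effect
    alternative='two-sided',
)
print(round(n_per_group))  # subjects needed in each group
```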
If your sample size was determined by practical constraints rather than a formal power calculation, say so. Silence on this point looks like an oversight, not a deliberate choice.
Explain How You Handled Missing Data
Nearly every dataset has gaps, and how you dealt with them affects your results. State clearly whether you excluded incomplete cases, used a statistical method to estimate missing values, or applied another approach. If participants dropped out of a study, note how many and why, since the reasons for withdrawal can influence how readers interpret your findings.
One common but generally discouraged approach is replacing missing values with the average of existing data. This adds no new information, artificially inflates your sample size, and underestimates the true variability in your data. If you used a more sophisticated method, like building a prediction model from your existing variables to estimate the missing values, describe that process briefly. The goal is transparency: a reader should be able to assess whether your approach to missing data could have biased your results in either direction.
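The variance-shrinking effect of mean imputation is easy to demonstrate. The snippet below uses invented data and is only meant to show why the approach is discouraged, not to model any real dataset.

```python
# Toy demonstration: mean imputation underestimates variability
# because every filled-in value sits exactly at the mean.
import numpy as np

rng = np.random.default_rng(0)
complete = rng.normal(100, 15, 200)   # the "true" data
observed = complete.copy()
observed[:50] = np.nan                # a quarter of the values go missing

mean_imputed = np.where(np.isnan(observed),
                        np.nanmean(observed),
                        observed)

print(f"true SD:            {complete.std(ddof=1):.1f}")
print(f"SD after imputation: {mean_imputed.std(ddof=1):.1f}")  # smaller
```

The imputed dataset reports a noticeably smaller standard deviation than the complete one, which in turn makes test statistics look more precise than they really are.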
Document Assumption Testing
Every statistical test rests on assumptions about your data. A t test assumes your outcome variable follows a roughly bell-shaped distribution and that the groups you’re comparing have similar variability. Regression models carry their own set of requirements. Reporting that you checked these assumptions, and that your data met them, strengthens the validity of your results.
You can handle this briefly in the methods (“Normality was assessed using the Shapiro-Wilk test”) and note the outcome in the results. If an assumption was violated, explain what you did about it. Maybe you switched to a non-parametric alternative, or you transformed the data. Either way, the reader needs to know.
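One way such checks look in practice is sketched below, again with invented data: Shapiro-Wilk for normality, Levene’s test for equal variances, and Welch’s t test as the fallback when the variance assumption fails. This is an assumed workflow, not a universal prescription.

```python
# Assumption checks before a two-group comparison (invented data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10, 1, 40)
b = rng.normal(12, 4, 40)   # deliberately unequal spread

normal_ok = all(stats.shapiro(g).pvalue > 0.05 for g in (a, b))
equal_var = stats.levene(a, b).pvalue > 0.05

if normal_ok:
    # equal_var=False gives Welch's t test when variances differ
    stat, p = stats.ttest_ind(a, b, equal_var=equal_var)
else:
    # non-parametric alternative when normality fails
    stat, p = stats.mannwhitneyu(a, b)
```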
Formatting P-Values and Effect Sizes
When you move to the results section, formatting matters. APA style provides clear rules that many journals follow, even outside psychology. Report exact p-values to two or three decimal places (p = .03, p = .006). For very small values, write p < .001 rather than listing a string of zeros. Do not place a zero before the decimal point when the value can never exceed 1, which applies to p-values, correlations, and proportions. So it’s p = .04, not p = 0.04.
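Those rules are mechanical enough to encode. This is a hypothetical helper (the function name and cutoffs are my own, reflecting the conventions described above), handy when generating results tables programmatically.

```python
# Hypothetical APA-style p-value formatter: exact values to two or
# three decimals, "p < .001" for tiny values, no leading zero.
def format_p(p: float) -> str:
    if p < 0.001:
        return "p < .001"
    digits = 3 if p < 0.01 else 2
    return f"p = {p:.{digits}f}".replace("0.", ".", 1)

print(format_p(0.034))    # p = .03
print(format_p(0.006))    # p = .006
print(format_p(0.00004))  # p < .001
```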
P-values alone tell you whether a result is statistically significant, but not whether it’s meaningful. Publishing guidelines increasingly require effect sizes and confidence intervals alongside p-values. An effect size quantifies how large the difference or relationship actually is, while a 95% confidence interval gives the range within which the true value likely falls. Reporting all three together (the p-value, the effect size, and the confidence interval) allows readers to draw more precise and reliable conclusions than any single measure provides. For binary outcomes like yes/no results, the CONSORT guidelines recommend presenting both absolute and relative effect sizes.
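For a two-group comparison, the full trio can be computed directly. This sketch uses invented data and the standard pooled-SD formulation of Cohen’s d; it assumes the equal-variance t test is appropriate for the example.

```python
# Reporting effect size and a 95% CI alongside the p-value
# for a two-group comparison (invented data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(52, 10, 80)
b = rng.normal(48, 10, 80)

t, p = stats.ttest_ind(a, b)

# Cohen's d: mean difference scaled by the pooled standard deviation
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                     (len(b) - 1) * b.var(ddof=1)) /
                    (len(a) + len(b) - 2))
d = (a.mean() - b.mean()) / pooled_sd

# 95% confidence interval for the raw mean difference
diff = a.mean() - b.mean()
se = pooled_sd * np.sqrt(1 / len(a) + 1 / len(b))
t_crit = stats.t.ppf(0.975, df=len(a) + len(b) - 2)
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"d = {d:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), p = {p:.3f}")
```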
One detail that researchers sometimes skip: report your test statistics even when results are not significant. A non-significant finding is still a finding, and omitting the numbers makes it impossible for future researchers to include your work in meta-analyses.
Name Your Software and Version
Always specify the software package, its version number, and (when relevant) any specific libraries or add-on packages you used. This is a basic reproducibility requirement. A simple sentence at the end of your statistical analysis subsection works: “All analyses were performed using R version 4.3.1 (R Foundation for Statistical Computing, Vienna, Austria) with the lme4 package for mixed-effects models.” If your field or journal has a preferred citation format for software, follow it. Some software developers provide a specific reference paper you can cite, while others assign a DOI through archival services.
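If your analysis runs in a scripting environment, you can capture those version numbers from the session itself rather than typing them by hand. The packages printed here are just examples; substitute whatever your analysis actually used.

```python
# Capture the versions you'll report in the methods section.
import sys
import numpy
import scipy

print(f"Python {sys.version.split()[0]}, "
      f"NumPy {numpy.__version__}, SciPy {scipy.__version__}")
```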
Cite Your Rationale When Tests Are Uncommon
Standard tests like t tests, chi-square tests, and basic regression rarely need citations. But if you used a less common analytical approach, cite the reference that supports your choice. One study evaluating airway reactivity, for instance, cited a methodological paper to justify its use of partition analysis. This small step prevents reviewers from questioning an unfamiliar method and saves you a revision round.
A Practical Checklist
Before submitting, verify that your statistical analysis section covers these elements:
- Alpha level: The pre-specified threshold for significance
- Statistical tests: Each test named and linked to the variables it analyzed
- Sample size justification: Power analysis or explanation of how the sample was determined
- Missing data: How gaps were handled and how many cases were affected
- Assumption checks: Confirmation that test assumptions were evaluated
- Software: Name, version, and relevant packages
- Effect sizes and confidence intervals: Planned alongside p-values in the results
- Subgroup or adjusted analyses: Any secondary analyses described separately from primary ones
If your study is a clinical trial, cross-reference the CONSORT checklist, a 25-item reporting standard that covers statistical methods, participant flow, and outcome estimation. For observational research, the STROBE statement serves a similar function and specifically requires you to describe methods for handling confounders, missing data, loss to follow-up, and sensitivity analyses. Using the appropriate checklist for your study design catches gaps that are easy to miss on your own.

