Presenting research findings means organizing your data so readers or audience members can follow your reasoning, trust your evidence, and understand what it means. The format changes depending on whether you’re writing a journal article, standing in front of a conference room, or pinning a poster to a board, but the core principles stay the same: lead with what you found, support it visually, and keep interpretation separate from raw results.
The Standard Structure for Written Findings
Most research papers follow what’s known as the IMRAD structure: introduction, methods, results, and discussion. Your findings live in the results section, and the single most important discipline here is reporting what you observed without explaining why. The results section presents data. The discussion section interprets it. Mixing the two is one of the most common mistakes in academic writing.
In practice, this means your results section should state outcomes, report measurements, and describe patterns in the data. If your experiment showed a 12% improvement in response time, that fact belongs in the results. Your theory about why it happened belongs in the discussion. The discussion is where you explain how your findings answer (or fail to answer) your research question, address limitations in your study design, and explore how the results might apply beyond your specific sample.
Organize your results around your research questions or hypotheses, not around the chronological order you ran your experiments. Use subheadings to walk the reader through each major finding. Each subsection should open with a clear statement of the result, then provide the supporting data. This gives readers a logical path through your evidence rather than forcing them to piece it together on their own.
Reporting Statistics Clearly
When your findings are quantitative, the way you report numbers matters as much as the numbers themselves. APA style, which most social and behavioral sciences follow, requires exact p-values reported to two or three decimal places. So you’d write p = .03 or p = .006, not just “p < .05.” The one exception: when a p-value is smaller than .001, you simply write p < .001.
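These formatting rules are mechanical enough to sketch in a few lines of code. The snippet below is a minimal illustration, not an official APA tool; the function name `format_p` and the rounding choices are my own, and edge cases (such as p = 1.0) are ignored.

```python
def format_p(p):
    """Format a p-value in APA style: exact value, no leading zero,
    two or three decimal places, and "p < .001" for very small values.
    Illustrative sketch only."""
    if p < 0.001:
        return "p < .001"
    s = f"{p:.3f}"
    if s.endswith("0"):            # .030 -> .03: drop the uninformative trailing zero
        s = f"{p:.2f}"
    return "p = " + s.lstrip("0")  # APA omits the leading zero for values capped at 1

print(format_p(0.03))    # p = .03
print(format_p(0.006))   # p = .006
print(format_p(0.0004))  # p < .001
```

Note that the leading zero is dropped (p = .03, not p = 0.03) because p-values cannot exceed 1, a convention APA applies to all such statistics.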
But p-values alone tell an incomplete story. They indicate how surprising your result would be if there were no real effect, but they say nothing about how large or meaningful the effect actually is. This is why many journals now require effect sizes alongside every p-value. An effect size tells the reader whether the difference you found is trivially small or large enough to matter in practice. A study might produce a statistically significant p-value with a tiny effect size, meaning the result is real but practically irrelevant.
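For two-group comparisons, the most familiar effect size is Cohen's d: the difference in means divided by the pooled standard deviation. Here is a minimal sketch for two independent samples (the function name is my own):

```python
import math
import statistics

def cohens_d(group1, group2):
    """Cohen's d for two independent samples: difference in means
    divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd
```

By the common rule of thumb, d near 0.2 is small, 0.5 medium, and 0.8 large, though those cutoffs are conventions, not laws, and what counts as meaningful depends on your field.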
There’s a growing movement away from the binary language of “statistically significant” and “not significant.” That pass/fail framing, anchored to the .05 threshold, has led to entire studies being shelved because their p-values landed at .06 or .08. The better approach is to report exact values and let readers evaluate the evidence on a continuum. A p-value of .072 should be reported as p = .072, not collapsed into the blanket statement p > .05. Pair that with effect sizes, confidence intervals, or other measures of practical importance, and your reader gets a far more honest picture of what you found.
Using Tables and Figures Effectively
Most readers look at your figures and tables before they read a single paragraph. That’s not laziness; it’s how people process dense information. Your visuals should be able to stand on their own. A reader glancing at a table should be able to form an opinion about the results without needing the main text to explain what they’re seeing.
The choice between a table and a figure depends on what you’re showing. Tables work best when readers need exact values or when you’re comparing many variables across groups. Figures work best when you want to show trends, distributions, or relationships. A line chart showing how pain scores dropped over six weeks communicates trajectory instantly. The same data in a table would require the reader to mentally plot the numbers.
A few design principles make your visuals more effective regardless of format. Use vector-based graphics when possible, since they can be zoomed or printed without losing quality. Choose color palettes that are accessible to colorblind readers. Red-green combinations are the most common problem, but low-contrast pairings of any kind reduce readability. Tools like ColorBrewer (colorbrewer2.org) help you pick palettes that encode data clearly, and WebAIM’s contrast checker lets you verify that your foreground and background colors meet the recommended 4.5-to-1 contrast ratio. Never use color as the only way to distinguish data. Add labels, patterns, or shapes so the information survives black-and-white printing.
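The 4.5-to-1 figure comes from the WCAG definition of contrast ratio, which can be computed directly from RGB values: each color is converted to a relative luminance, and the ratio of the lighter to the darker (each offset by 0.05) gives the score. A sketch of that calculation follows; the helper names are my own.

```python
def _linear(channel):
    """Convert one 0-255 sRGB channel to linear light (WCAG formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (r, g, b) color, 0.0 (black) to 1.0 (white)."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio between two colors; always between 1 and 21."""
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0, the maximum
```

A ratio of 4.5 or higher meets the recommendation above for normal-size text; mid-gray text on a white background, for example, falls below 4.5 and fails.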
Presenting Qualitative Findings
If your research is qualitative, you won’t have p-values or bar charts. Your findings are themes, and presenting them well requires a different architecture. The goal is to walk your reader through each theme, show how it emerged from the data, and substantiate it with direct quotes from participants.
Structure your findings section around your themes, using each one as a subheading. Under each theme, describe the categories that support it. For example, if your theme is “barriers to patient engagement,” your supporting categories might include things like communication gaps, scheduling difficulties, and trust concerns. This layered approach lets the reader see how individual observations built up into the broader pattern you’re reporting.
Quotes are the evidence in qualitative work, but choosing them strategically makes the difference between a convincing section and a collection of transcripts. Select quotes that most compellingly illustrate the theme. Use similar quotes to reinforce a strong pattern, and include divergent quotes when the theme involves disagreement among participants. One of the most common mistakes in thematic analysis is presenting each theme without enough depth, essentially naming a theme and moving on. That misses the opportunity to show your reader how the theme was constructed and why the data supports it.
Slides for Oral Presentations
Presenting findings in a talk follows different rules than presenting them on a page. The core constraint is cognitive: your audience can either read your slide or listen to you, but not both at the same time. Reading and listening both use verbal processing, so full sentences on a slide force a choice that degrades both channels.
Plan for roughly one slide per minute of speaking time. A 20-minute presentation should have about 20 slides. On each slide, use words sparingly as guideposts, not as scripts. Research on cognitive load suggests the brain processes about six visual elements comfortably. Beyond that, the effort required to make sense of a slide increases dramatically. So a single chart with a clear title and a few labeled data points will land far better than a slide crammed with three figures, a table, and a block of text.
When you show a data figure during a talk, narrate it. Tell the audience what the axes represent, point to the key comparison, and state the takeaway out loud. Don’t assume they’ll parse it on their own in the few seconds it’s on screen.
Poster Presentations
Conference posters sit somewhere between a paper and a talk. They need to be visually scannable from a few feet away while still containing enough detail for someone standing close to evaluate your methods and results. The most common print size is 48 inches wide by 36 inches tall (a 4:3 ratio). Digital posters are often 16:9 widescreen format, typically around 56 by 31.5 inches. Always check your conference’s size and orientation requirements before you start designing.
Your poster should follow the same logical flow as a paper (introduction, methods, results, discussion) but in compressed form. The results section is the visual centerpiece. Place your strongest figure or table in the most prominent position, usually the center or upper right of the poster. Use large, readable fonts and limit text to short paragraphs or bullet points. A poster that requires someone to lean in and squint has already failed at its primary job.
Transparency With All Results
One of the most damaging habits in research is selective reporting: highlighting findings that support your hypothesis while downplaying or omitting those that don’t. This creates publication bias, where the published literature skews toward positive results and gives a distorted picture of reality. Entire studies end up in filing cabinets because they didn’t produce the “right” p-value.
Present all of your tested hypotheses, whether the results were what you expected or not. Report exact values for every analysis. A finding that didn’t reach conventional thresholds still contributes to the field’s understanding of a question, especially when paired with effect sizes that help readers judge practical importance. Statistical significance is not the same as practical significance. A tiny, meaningless difference can be “significant” with a large enough sample, and a meaningful difference can fail to reach significance with a small one. Your job is to report the data honestly and let the discussion section contextualize what it means.
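The sample-size point can be made concrete with a back-of-the-envelope calculation. The sketch below runs a two-sample z-test on invented summary statistics (all numbers are made up for illustration, and the equal-variance, equal-n setup is a simplification): a difference of 0.3 on a scale with standard deviation 5 yields a p-value far below .001 with 10,000 participants per group, yet Cohen's d is a trivial 0.06.

```python
import math

def z_test_p(mean1, mean2, sd, n):
    """Two-tailed p-value for a two-sample z-test, assuming equal
    standard deviations and equal group sizes (a simplification)."""
    z = (mean1 - mean2) / math.sqrt(2 * sd**2 / n)
    return math.erfc(abs(z) / math.sqrt(2))

mean1, mean2, sd, n = 100.3, 100.0, 5.0, 10_000   # invented numbers
p = z_test_p(mean1, mean2, sd, n)
d = (mean1 - mean2) / sd                          # Cohen's d with a shared sd

print(f"p = {p:.6f}, d = {d:.2f}")
```

The result is “significant” by any threshold, but the effect size makes clear that the difference is of little practical consequence, which is exactly why both numbers belong in your report.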

