Which Two Actions Are Good Research Practices?

The two actions most widely recognized as good research practices are pre-registering your study and sharing your data. These two steps address the biggest vulnerabilities in science: bias in how studies are conducted and the inability of others to verify results. While many habits contribute to strong research, pre-registration and data sharing form the foundation of trustworthy, reproducible work.

Pre-Registering Your Study

Pre-registration means creating a time-stamped, publicly accessible record of your research plan before you begin collecting or analyzing data. This record typically includes your research question, hypotheses, methods, and the specific analyses you intend to run. By locking in these details ahead of time, you prevent yourself (consciously or not) from shifting your goals after seeing the results.

This matters because one of the most common ways research goes wrong is through subtle, after-the-fact adjustments. A researcher might test dozens of outcomes, then report only the ones that turned out to be statistically significant. Or they might reframe their hypothesis to match unexpected findings, a practice known as HARKing (Hypothesizing After Results are Known). Pre-registration makes these moves visible. Anyone can compare what was planned against what was actually done and reported, which keeps researchers accountable and makes the published findings far more credible.

Study registration is now required or strongly encouraged by most major medical journals and funding agencies. Clinical trial registries, for example, have been standard in medicine for years, and the practice is expanding rapidly into psychology, education, and other social sciences.

Sharing Your Data and Methods

The second core practice is making your raw data, analysis code, and research materials available to others. When other researchers can access the same dataset and run the same analysis, they can verify whether the reported findings actually hold up. This is the basic test of reproducibility, and it’s impossible without open data.

Data sharing also multiplies the value of research. Other investigators can use shared datasets to ask new questions, run sensitivity checks with different statistical approaches, or combine data from multiple studies to draw stronger conclusions. A single well-documented dataset can fuel dozens of secondary analyses that the original team never planned.

Effective data sharing goes beyond uploading a spreadsheet. It requires clear documentation: metadata that explains what each variable means, how the data were collected, what units are used, and any processing steps that were applied. The NIH recommends that when no formal metadata standard exists for a project, researchers create a “readme” file that walks others through the dataset so they can interpret it correctly. Without this context, raw numbers are often useless to anyone outside the original research team.

Why These Two Practices Matter Together

Pre-registration and data sharing reinforce each other. Pre-registration tells the world what you planned to do. Data sharing lets the world check what you actually did. Together, they close the loop on accountability. A pre-registered study with shared data is extremely difficult to manipulate, because every step from hypothesis to final result is transparent and verifiable.

These practices also serve as a natural defense against the three forms of research misconduct defined by the Office of Research Integrity: fabrication (making up data), falsification (manipulating data or methods so results are misrepresented), and plagiarism (using someone else’s work without credit). When your plan is public and your data are open, fabricating or falsifying results becomes far riskier. And because shared datasets carry clear attribution, the chain of intellectual credit is easier to trace.

Other Practices That Support Good Research

Pre-registration and data sharing are the two most frequently cited actions, but they sit within a broader ecosystem of responsible research habits. Thorough record-keeping is one of the most important. A well-maintained lab notebook, whether physical or electronic, should document not just successful experiments but also failed ones, unexpected observations, and even "bad" data points. NIH guidelines emphasize that all data go into the notebook, and mistakes should be corrected with a visible strikethrough rather than deleted. This creates an honest, traceable record of the research process.

Ethical treatment of research participants is another pillar. The informed consent process requires three things: giving potential participants enough information to make a real decision, making sure they understand what they’ve been told, and ensuring their participation is genuinely voluntary. These protections are rooted in the principle that individuals should be treated as autonomous agents capable of deciding what happens to them.

Honest authorship practices also matter. The two minimum requirements for listing someone as an author on a paper are making a substantial contribution to the work and being accountable for the published results. Listing someone who didn’t meaningfully contribute (gift authorship) or leaving off someone who did (ghost authorship) are both violations of research integrity norms.

Financial conflict of interest disclosure rounds out the picture. Researchers receiving NIH funding, for example, must disclose any significant financial interests related to their work, including foreign financial interests above $5,000 from entities like universities or governments outside the U.S. These disclosures allow institutions and journals to assess whether financial incentives might be influencing the research.

How to Apply These Practices

If you’re conducting research at any level, you can start with straightforward steps. For pre-registration, platforms like the Open Science Framework allow you to create a time-stamped record of your study plan for free. You describe your hypotheses, your sample, your methods, and your planned analyses, then lock it in before data collection begins. This doesn’t prevent you from running exploratory analyses later. It simply requires you to label them as exploratory rather than presenting them as if they were planned all along.

For data sharing, the key is documentation. Organize your data in a standard format, write clear variable labels, and include a readme file or data dictionary that another researcher could follow without contacting you. Then deposit it in a public repository appropriate to your field. Many journals and funders now require this as a condition of publication or grant acceptance.
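To make the data-dictionary idea concrete, here is a minimal sketch in Python. The variable names and codings are hypothetical placeholders, not part of any real study; the point is simply the shape of the file: one row per variable, with a description, units, and coding that another researcher could interpret without contacting you.

```python
import csv
import io

# Hypothetical study variables -- replace with your own dataset's columns.
DATA_DICTIONARY = [
    # (variable, description, units, coding / allowed values)
    ("participant_id", "Anonymized participant identifier", "n/a", "P001-P999"),
    ("age", "Age at enrollment", "years", "18-65"),
    ("condition", "Experimental condition", "n/a", "0=control, 1=treatment"),
    ("score_pre", "Outcome score before intervention", "points", "0-100"),
    ("score_post", "Outcome score after intervention", "points", "0-100"),
]

def write_data_dictionary(rows, fileobj):
    """Write the data dictionary as a CSV a secondary analyst can read."""
    writer = csv.writer(fileobj)
    writer.writerow(["variable", "description", "units", "coding"])
    writer.writerows(rows)

# Normally you would write to a file deposited alongside the dataset;
# a StringIO buffer is used here just to show the resulting text.
buf = io.StringIO()
write_data_dictionary(DATA_DICTIONARY, buf)
print(buf.getvalue())
```

A plain-text readme serves the same purpose in unstructured form; the advantage of a tabular dictionary like this is that repositories and analysis scripts can parse it directly.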

Both practices take extra time upfront but save the broader scientific community enormous effort downstream. They reduce wasted resources on findings that can’t be replicated, and they build the kind of trust in research that benefits everyone, from scientists to the public relying on their work.