Moral and ethical concerns have reshaped science at nearly every level, from what experiments are permitted to how results get published. The influence runs deep: ethical failures in the 20th century led directly to the consent requirements, oversight boards, and transparency rules that govern research today. Far from being separate from the scientific process, ethics now determines which questions scientists can pursue, how they pursue them, and what they must share when they’re done.
Nazi Experiments and the Birth of Research Ethics
Modern research ethics trace back to one of history’s darkest chapters. After World War II, the trials of Nazi doctors who performed horrific experiments on concentration camp prisoners produced the Nuremberg Code in 1947, a set of ten principles that became the foundation for ethical research worldwide. The very first principle declared that the voluntary consent of a human subject is “absolutely essential,” meaning the person must have full legal capacity to consent, must be free from force, fraud, or coercion, and must understand the nature, duration, purpose, and risks of the experiment before agreeing to participate. Principle nine established the right to withdraw from an experiment at any time.
The Nuremberg Code placed responsibility squarely on the individual researcher. Each person who initiates, directs, or participates in an experiment bears a personal duty to ensure the quality of consent, a responsibility that “may not be delegated to another with impunity.” This was revolutionary: for the first time, an international standard said that scientific curiosity could never override a person’s right to choose.
Ironically, the regulatory systems that followed departed from Nuremberg in important ways. When the United States adopted its first federal research regulations in 1974 (revised in 1991 and again in 2017), it shifted focus from individual scientists to the medical institutions that sponsor research. It also relied heavily on two mechanisms the Nuremberg Code never mentioned: review boards and written consent forms. The ethical impulse remained the same, but the machinery became procedural rather than personal.
The Belmont Report’s Three Pillars
In 1979, following revelations about the Tuskegee syphilis study, in which Black men were deliberately left untreated for decades, the U.S. National Commission for the Protection of Human Subjects published the Belmont Report. It defined three core ethical principles that still govern human research: respect for persons, beneficence, and justice.
Respect for persons means treating individuals as autonomous decision-makers and protecting those who lack full autonomy, such as children or people with cognitive impairments. Beneficence requires researchers to maximize potential benefits while minimizing harm. Justice demands that the burdens and rewards of research be distributed fairly, so that vulnerable populations aren’t exploited for science that primarily benefits the privileged. Each principle maps to a practical requirement: informed consent, careful assessment of risks versus benefits, and equitable selection of research subjects.
How Oversight Boards Changed What Gets Studied
Today, virtually no research involving human participants at a U.S. institution can proceed without approval from an Institutional Review Board (IRB). These committees evaluate proposed studies before they begin, checking whether risks have been minimized, whether the expected benefits justify those risks, whether participant selection is fair, and whether the informed consent process is adequate. They also assess whether provisions exist to monitor data for participant safety and to protect privacy.
This system means ethics shapes science before a single data point is collected. A study with a brilliant hypothesis but an exploitative design will never receive approval. When an IRB discovers serious or continuing noncompliance with its requirements, it reports the violation to federal regulators, and institutions can lose the authority to conduct federally funded research entirely. The practical effect is that ethical review acts as a gatekeeper, filtering out not just harmful research but also poorly designed studies that would waste participants’ time and trust.
Animal Research and the Three Rs
Ethical pressure hasn’t only changed how scientists work with people. Concern for animal welfare produced the “Three Rs” framework: Replacement, Reduction, and Refinement. Replacement asks whether an animal model can be swapped for a non-sentient alternative. Reduction pushes researchers to use the fewest animals possible while still generating robust results. Refinement requires minimizing pain, distress, and other adverse effects throughout every stage of an animal’s life in captivity, not just during the experiment itself.
These principles became law across Europe through EU Directive 2010/63/EU, which regulates the use of animals in scientific research and requires all member states to transpose it into national legislation. The result is that a researcher in Berlin, Paris, or Rome must justify why an animal model is necessary and demonstrate that no viable alternative exists before a project is approved. What began as a framework first articulated by the scientists William Russell and Rex Burch in the 1950s now shapes the daily practice of biomedical science across an entire continent.
Gene Editing and Ethical Red Lines
Sometimes ethics catches up to science only after a line has been crossed. In 2018, Chinese biophysicist He Jiankui announced that he had edited the genes of human embryos using CRISPR technology, resulting in the birth of twin girls with modified DNA. The international response was swift and overwhelmingly negative. Scientific and ethical bodies around the world issued condemnations. A 2015 consensus statement from the International Summit on Human Gene Editing had already warned that it would be “irresponsible to proceed with any clinical use of germline editing” until safety and efficacy concerns were resolved and broad societal consensus existed. He Jiankui had ignored both conditions.
In December 2019, a Chinese court convicted He Jiankui and two associates of illegally practicing medicine. More broadly, the case triggered regulatory change: China introduced new legislation covering biosecurity, genetic technology, and biomedicine, with “high-risk” technologies like germline genome editing now requiring national-level review. The incident demonstrated that ethical norms don’t just constrain science passively. When violated, they generate active legal and regulatory responses that tighten the boundaries further.
Conflicts of Interest and Funding Bias
Ethics has also transformed how the scientific community handles money. Research consistently shows that funding sources influence outcomes. One landmark analysis found that 96% of authors who supported a particular class of heart medication had financial ties to the manufacturers, compared to 63% of authors with neutral or critical findings. Another study found that 95% of industry-funded articles on cancer treatments reported positive results, versus 62% of articles without industry funding. Papers in top medical journals were 67% more likely to favor a product when at least one author had disclosed a financial interest.
Having a financial conflict doesn’t automatically mean the research is biased, but it’s a recognized risk factor for bias. That recognition led to mandatory disclosure policies across nearly all major scientific journals. Researchers must now declare funding sources, consulting fees, stock ownership, and other financial relationships. Reviewers, editors, and readers use this information to evaluate whether financial interests may have shaped study design or interpretation. Transparency doesn’t eliminate bias, but it gives everyone the tools to spot it.
Transparency and Open Data
The ethical push for honesty in science extends beyond financial disclosure. Reproducibility, the ability of other scientists to repeat an experiment and get the same results, is considered a cornerstone of trustworthy research. Best practices now require complete sharing of both data and computer code. Platforms like Code Ocean and Zenodo allow researchers to archive the exact version of their analytical tools in a form suitable for publication, so anyone can verify the work.
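In practice, making an analysis verifiable means pinning down every source of variation: the random seed, the software environment, and the exact code that produced each number. A minimal sketch of the idea in Python (the script and its details are illustrative, not a prescribed standard):

```python
# Illustrative reproducibility sketch: fix the randomness and record the
# software environment alongside an analysis result, so another researcher
# can rerun the code and check that they obtain the same output.
import json
import platform
import random
import sys

SEED = 42  # fixed seed makes the "analysis" deterministic


def run_analysis(seed: int) -> float:
    """Toy stand-in for a real analysis: the mean of seeded random draws."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    return sum(samples) / len(samples)


def environment_record() -> dict:
    """Capture the details a replicator would need to match."""
    return {
        "python_version": sys.version.split()[0],
        "platform": platform.platform(),
        "seed": SEED,
    }


result = run_analysis(SEED)
record = {"environment": environment_record(), "mean": result}

# Archiving this JSON next to the code and data (e.g. on Zenodo) gives
# reviewers a machine-checkable claim rather than a vague description.
print(json.dumps(record, indent=2))

# Rerunning with the same seed reproduces the result exactly.
assert run_analysis(SEED) == result
```

The same logic scales up: real projects typically freeze dependency versions in a lockfile and archive a snapshot of the whole repository, but the principle is identical to this sketch.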
The 2024 revision of the Declaration of Helsinki, adopted by the World Medical Association at its General Assembly in Helsinki, Finland, strengthened these commitments. The updated declaration calls for improved transparency in clinical trials, increased protection for vulnerable populations, and stronger commitments to fairness and equity in research. Clinical trial registration, which requires researchers to publicly declare a study’s design and goals before collecting data, is now a near-universal expectation. This prevents the selective reporting of only favorable results, a practice that once distorted entire fields of medicine.
Artificial Intelligence as the Next Frontier
As AI becomes embedded in scientific research, ethical questions are evolving again. AI systems used in healthcare and research raise concerns about algorithmic bias: if training data reflects existing inequalities, the AI’s outputs will too. There are growing calls for AI developers to transparently explain how their products work, demonstrate that models were produced through legal means, and show that the technology is safe with clear accountability for harm. Researchers who build AI tools face increasing pressure to publish their models in peer-reviewed literature rather than keeping them proprietary.
These demands echo the same principles that emerged from Nuremberg and the Belmont Report: transparency, accountability, and protection from harm. The technology changes, but the ethical logic remains remarkably consistent. Science advances fastest when the public trusts it, and that trust depends on visible, enforceable ethical standards that put human welfare ahead of convenience or profit.

