What Are the 7 Principles of Ethics in Research?

The seven principles of ethics in research are: social and clinical value, scientific validity, fair subject selection, favorable risk-benefit ratio, independent review, informed consent, and respect for potential and enrolled subjects. This framework, developed by researchers at the NIH Clinical Center, provides a step-by-step system for evaluating whether a study involving human participants is ethically justified. Each principle builds on the others, and all seven must be satisfied for a study to be considered ethical.

These seven principles expand on an older, foundational document called the Belmont Report, published in 1979. The Belmont Report established three broad ethical pillars: respect for persons, beneficence (maximizing benefits while minimizing harm), and justice (distributing the burdens and benefits of research fairly). The seven-principle framework translates those broad ideals into more specific, actionable requirements that researchers and review boards can apply to real studies.

Social and Clinical Value

A study is only worth conducting if its results can improve health or advance medical knowledge in a meaningful way. This principle exists because research exposes participants, at a minimum, to inconvenience and, in many cases, to genuine risk. If a study has no realistic chance of producing useful information, those risks can’t be justified, regardless of whether participants agree to them.

The amount of value a study needs depends on what it asks of participants. A low-risk survey can be justified with modest expected benefits to knowledge. A study that involves experimental drugs or invasive procedures needs to offer proportionally greater potential to improve health. Research that exposes people to high risks and has no social value is considered unethical even when every participant gives fully informed consent.

Scientific Validity

A study must be designed well enough to actually produce reliable answers. Poor methodology wastes resources and, more importantly, exposes participants to risk for nothing. If the sample size is too small to detect a real effect, the measurements are unreliable, or the statistical plan is flawed, the study fails this principle before it even begins.

Scientific validity also means the research question has to be answerable with the proposed methods. A study designed in a way that can’t produce clear results is ethically problematic, not just scientifically weak, because it puts participants through an experience that cannot deliver the knowledge that justified the study in the first place.

Fair Subject Selection

Who participates in a study, and who doesn’t, should be driven by the scientific question rather than by convenience or vulnerability. Historically, some populations bore a disproportionate share of research burdens. Prisoners, institutionalized individuals, and marginalized communities were overrepresented in risky studies while being excluded from the benefits of the resulting treatments.

Fair subject selection involves four distinct considerations that can sometimes pull in different directions. Researchers need to think about fair inclusion (enrolling the people the results will apply to), fair burden sharing (not repeatedly targeting the same vulnerable groups), fair opportunity (giving people access to potentially beneficial experimental treatments), and fair distribution of third-party risks (protecting bystanders and communities). Designing inclusion and exclusion criteria ethically means balancing all four of these concerns rather than optimizing for just one.

Favorable Risk-Benefit Ratio

Before a study moves forward, the potential benefits to participants and society must outweigh the risks to participants. This assessment follows a structured logic. First, researchers identify all risks and take steps to minimize them. Then they identify all potential benefits and look for ways to enhance them. Only after that minimization work is done can they evaluate the balance.

When a study intervention directly benefits participants (for example, testing a new treatment for their existing condition), the personal benefit may justify the personal risk. But many studies involve interventions that offer no direct benefit to the participant, like drawing extra blood for research purposes or testing a drug on healthy volunteers. In these cases, researchers assess “net risks,” meaning the risks left over after subtracting any direct benefits. Those net risks must be kept to a level that’s justified by the study’s value to society. If the cumulative net risks across all study procedures are excessive relative to what the study stands to contribute, the study shouldn’t proceed.
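The net-risk logic described above can be sketched as a toy calculation. To be clear, this is an illustration of the reasoning only: the numeric scores, the `social_value` threshold, and the procedure names are all invented for the example, and real IRB review is a qualitative judgment, not arithmetic.

```python
# Illustrative sketch of the net-risk logic: subtract any direct benefit to
# the participant from each procedure's risk, then ask whether the remaining
# (net) risks are justified by the study's value to society. All numbers and
# thresholds here are hypothetical, not regulatory standards.

def net_risk(risk: float, direct_benefit: float) -> float:
    """Risk remaining after subtracting any direct benefit to the participant."""
    return max(risk - direct_benefit, 0.0)

def study_may_proceed(procedures: list[dict], social_value: float) -> bool:
    """Cumulative net risks across all procedures must not exceed what the
    study's expected contribution to society can justify."""
    total_net_risk = sum(
        net_risk(p["risk"], p["direct_benefit"]) for p in procedures
    )
    return total_net_risk <= social_value

# Hypothetical study: a treatment arm that directly benefits participants,
# plus an extra research blood draw that benefits only the study.
procedures = [
    {"name": "experimental treatment", "risk": 3.0, "direct_benefit": 2.5},
    {"name": "research blood draw",    "risk": 0.5, "direct_benefit": 0.0},
]

print(study_may_proceed(procedures, social_value=2.0))
```

Note how the treatment’s risk is largely offset by its direct benefit, while the blood draw contributes pure net risk; it is the sum of those leftovers, not the gross risk, that gets weighed against social value.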

Independent Review

Researchers have an inherent conflict of interest. They want their studies to happen. Independent review exists to check that bias by putting an impartial group between the researcher’s enthusiasm and the participant’s welfare. In the United States, this role is filled by Institutional Review Boards (IRBs), committees that evaluate whether a proposed study meets ethical standards before any participant is enrolled.

The concept dates back to 1965, when the director of the NIH proposed that all research involving human subjects be evaluated by an impartial panel of peers. Today, IRB members are required to disclose and recuse themselves from reviewing studies where they have personal or financial conflicts. The board checks that consent documents are clear, that risks are minimized, that participant selection is fair, and that the overall design is scientifically and ethically sound. No single researcher, no matter how well-intentioned, can serve as both advocate for the study and protector of the participants.

Informed Consent

Participants must understand what they’re agreeing to and must agree freely. The informed consent process has three core features: disclosure of all information a person needs to make an informed decision, efforts to ensure that information is actually understood, and protection of voluntariness so the decision to participate is genuinely free.

In practice, this means consent documents need to be written in language the participant population can understand, not in dense legal or medical terminology. Participants must be told that joining is voluntary, that refusing carries no penalty, and that they can withdraw at any time without losing access to benefits they’re otherwise entitled to. Consent isn’t a one-time signature on a form. It’s an ongoing process where participants retain the right to change their mind as the study progresses and new information emerges.

For populations that cannot consent for themselves, such as young children or adults with significant cognitive impairment, the process involves legally authorized representatives. But even in those cases, the principle requires maximizing the individual’s involvement in the decision to whatever degree is possible.

Respect for Potential and Enrolled Subjects

Ethical obligations don’t end once someone signs a consent form. This final principle covers everything that happens during and after the study. Researchers must protect participants’ privacy and confidentiality, monitor their well-being throughout the study, and follow through on every promise made during enrollment, particularly around how personal data will be used and stored.

Respect also means keeping participants informed. Qualitative interview research with study participants has found that about half considered access to their own results, including personal test data and overall study findings, to be a core part of feeling respected. Participants also valued having opt-in and opt-out options for receiving specific types of results, appreciating the ability to change their mind about what information they wanted even after enrolling. Prompt follow-ups, timely communication about changes to risks or benefits, and even notification about the possibility of early study termination all fall under this principle.

At its simplest, this principle means treating participants as partners in the research process rather than as data sources. Their autonomy, privacy, and dignity remain priorities from the first interaction through the final follow-up.

How These Principles Become Law

In the United States, these ethical principles are codified in federal regulations known as the Common Rule (45 CFR 46, Subpart A). The Common Rule applies to all federally funded research involving human subjects and was most recently revised in 2018. It establishes the legal requirements for IRB review, informed consent, and protections for vulnerable populations including pregnant women, prisoners, and children, each addressed in separate regulatory subparts.

Internationally, the Declaration of Helsinki serves a similar function. Published by the World Medical Association, it sets ethical standards for medical research worldwide and was most recently revised in 2024. While it doesn’t carry the force of law the way the Common Rule does in the U.S., it shapes research ethics policies in countries around the globe and is referenced by most major medical journals as a condition of publication.