Clinical studies are research projects involving human volunteers, designed to test whether a medical treatment, drug, device, or behavioral approach is safe and effective. They are the core mechanism through which nearly every medical advance reaches patients. The National Institutes of Health formally defines a clinical trial as a study in which people are prospectively assigned to one or more interventions to evaluate the effects of those interventions on health-related outcomes. Without this process, no new prescription drug can reach the market in the United States.
Interventional vs. Observational Studies
Clinical studies fall into two broad categories, and the distinction matters because it determines what kind of conclusions researchers can draw.
Interventional studies (also called experimental studies) are what most people picture when they hear “clinical trial.” Researchers actively do something to participants: give them a new drug, test a medical device, or assign them to a specific diet or exercise program. Because the researcher controls the conditions, these studies are the strongest tool for proving that a treatment actually causes an improvement rather than just being associated with one.
Observational studies take the opposite approach. Researchers watch what happens naturally without intervening. They might track thousands of people over decades to see who develops heart disease, or compare the medical histories of people with and without a particular condition. These studies are valuable for spotting patterns and risk factors, but they cannot prove that one thing directly causes another. A researcher might find that coffee drinkers have lower rates of a certain disease, for example, without being able to confirm that coffee is the reason.
The Four Phases of a Clinical Trial
Before a new drug reaches your pharmacy shelf, it passes through a structured sequence of testing phases. Each one answers a different question, involves more people, and carries higher stakes.
Phase 1 is the first time a treatment is tested in humans. These trials use a small group of volunteers, often just a few dozen, and focus almost entirely on safety. Researchers are looking for side effects and figuring out appropriate dosing rather than measuring whether the drug works.
Phase 2 expands the pool to a larger group, typically several hundred people who have the condition the drug is meant to treat. The goal shifts toward effectiveness: does this treatment actually improve the condition? Researchers continue monitoring safety but begin collecting data on how well the drug performs.
Phase 3 involves hundreds to thousands of participants and is the pivotal stage. These large-scale trials generate the evidence regulators need to decide whether to approve a drug. They compare the new treatment against existing standard treatments or placebos, and they run long enough to detect side effects that might not show up quickly.
Phase 4 happens after a drug has already been approved and is on the market. These studies monitor long-term safety and effectiveness in the general population, where patients are more diverse and conditions less controlled than in earlier phases. Sometimes Phase 4 studies reveal rare side effects that lead to new warnings or, occasionally, a drug being pulled from the market.
The odds of making it through all four phases are low. Research covering drug development programs from 2000 to 2015 found that only about 13.8% of drugs entering Phase 1 eventually win approval. For cancer drugs specifically, the success rate has historically been even lower, dipping to 1.7% in 2012 before climbing to around 8.3% by 2015.
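Those long odds compound across phases: the overall chance of approval is the product of each phase-to-phase transition probability. A small sketch makes the arithmetic concrete; the per-phase rates below are hypothetical round numbers chosen for illustration, not figures from the study cited above.

```python
# Hypothetical per-phase transition probabilities (illustrative only,
# not the actual rates from the 2000-2015 research cited above).
phase_success = {
    "Phase 1 -> Phase 2": 0.60,
    "Phase 2 -> Phase 3": 0.35,
    "Phase 3 -> Approval": 0.60,
}

# Multiply the transition probabilities together: a drug must clear
# every hurdle, so the cumulative odds shrink at each step.
overall = 1.0
for step, p in phase_success.items():
    overall *= p
    print(f"{step}: {p:.0%} (cumulative: {overall:.1%})")

print(f"Overall probability of approval: {overall:.1%}")
```

With these made-up rates, 0.60 × 0.35 × 0.60 leaves only about a one-in-eight chance of approval, which is the same order of magnitude as the historical figures above.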
How Randomization and Blinding Work
The gold standard for interventional studies is the randomized, double-blind, placebo-controlled trial. Each element of that phrase serves a specific purpose in eliminating bias.
Randomization means participants are assigned to the treatment group or the control group by chance, not by a doctor’s choice. This prevents a situation where healthier patients end up in one group and sicker patients in another, which would skew results. It also balances out factors researchers might not even know about, like genetics or lifestyle habits, so that any difference between groups can be attributed to the treatment itself.
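As a minimal sketch of how simple randomization works, the snippet below shuffles a participant list by chance and splits it between two arms. The participant IDs and group labels are made up for illustration; real trials typically use more elaborate schemes, such as block or stratified randomization, to guarantee balance.

```python
import random

def randomize(participants, seed=None):
    """Assign participants to treatment or control by chance, not by
    choice: shuffle the list, then split it in half."""
    rng = random.Random(seed)  # seeded here only so the demo is repeatable
    ids = list(participants)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

# Twenty hypothetical participant IDs, split 10 and 10 at random.
groups = randomize([f"P{i:03d}" for i in range(1, 21)], seed=42)
print("Treatment:", groups["treatment"])
print("Control:  ", groups["control"])
```

Because the shuffle is driven purely by chance, neither the researcher nor the participant influences which arm anyone lands in, which is exactly the property the paragraph above describes.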
A placebo is an inactive substitute, like a sugar pill, given to the control group. Placebos exist because the simple act of receiving treatment can make people feel better. Without a placebo group, researchers can’t tell whether improvements come from the drug or from the psychological relief of being treated.
Blinding (sometimes called masking) means keeping people in the dark about who is getting the real treatment. In a single-blind study, only the participant doesn’t know. In a double-blind study, neither the participant nor the researcher administering treatment knows. In a triple-blind study, even the team analyzing the data is unaware. Blinding prevents expectations from influencing either the patient’s reported symptoms or the researcher’s interpretation of results.
Who Can Participate
Every clinical study has a specific list of inclusion and exclusion criteria that determine who qualifies. Inclusion criteria describe the target population: a study on a lung disease might require participants to be at least 40 years old, have a confirmed diagnosis for at least one year, and be a current or former smoker. Exclusion criteria filter out people whose other health conditions could complicate the results or put them at greater risk, such as those with sleep apnea or other chronic respiratory diseases.
These criteria exist to protect participants and to make results interpretable. If a study included people with widely different conditions, it would be difficult to know whether the treatment worked for the specific disease being studied. That said, overly narrow criteria have historically been a problem. Studies conducted mostly on younger white men, for instance, produce results that may not apply to women, older adults, or people of other racial backgrounds.
U.S. law now addresses this directly. The Food and Drug Omnibus Reform Act, passed in December 2022, requires sponsors of Phase 3 and other pivotal studies to submit a Diversity Action Plan to the FDA. These plans must include enrollment goals broken down by race, ethnicity, sex, and age, along with a rationale for those goals and an explanation of how the sponsor intends to meet them. The FDA published draft guidance on these requirements in June 2024.
Ethics and Participant Protections
Before any clinical study can begin, it must be reviewed and approved by an Institutional Review Board, an independent committee that evaluates whether the study is ethically designed and whether participants are adequately protected. The IRB reviews not just the study protocol but all materials participants will see, including recruitment ads and consent forms.
Informed consent is the cornerstone of participant protection. FDA regulations require that consent documents cover eight specific elements: a description of what the study involves, the risks and discomforts, potential benefits, alternative treatments available, how confidentiality will be maintained, what compensation or medical treatment is available if something goes wrong, whom to contact with questions, and a clear statement that participation is voluntary. You can leave a clinical trial at any time for any reason.
What Happens to the Results
Clinical trial results don’t just sit in a researcher’s filing cabinet. Federal law requires that certain trials be registered on ClinicalTrials.gov and that results be submitted after the study ends. The 2016 Final Rule under the FDA Amendments Act spelled out exactly which trials are covered, what information must be submitted, and what counts as compliance; in general, results are due within one year of a study’s primary completion date. NIH-funded trials carry similar expectations regardless of whether they fall under the FDA’s requirements.
Publishing results in a medical journal adds another layer of accountability. The International Committee of Medical Journal Editors requires that a trial be registered in a public database before enrollment begins as a condition of publication. This prevents researchers from running a study, disliking the results, and quietly burying them.
These transparency rules exist because selective reporting has been a real problem. When only positive results get published, doctors and patients get an inflated picture of how well a treatment works. Public registration and mandatory reporting help close that gap.
The Cost of Bringing a Drug to Market
Clinical trials are expensive, and the costs have become a major factor in drug pricing debates. A study published in JAMA Network Open analyzing U.S. drug development from 2000 to 2018 found that the average cost of developing a single new drug was about $172.7 million in 2018 dollars. That figure covers direct research costs only.
When you factor in the cost of all the drugs that fail during development (since companies fund many candidates for every one that succeeds), the average rises to $515.8 million. Add in the cost of capital, meaning the money tied up for years that could have been invested elsewhere, and the total climbs to roughly $879 million per approved drug. Costs vary dramatically by specialty: anti-infective drugs averaged around $379 million in total capitalized costs, while pain and anesthesia drugs averaged over $1.75 billion.
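The way those three kinds of figures relate can be shown with a back-of-envelope calculation. The numbers below are hypothetical and the formula is a deliberate simplification of what such studies actually do: dividing by the success rate spreads the cost of failed candidates across each approval, and compounding at the cost of capital accounts for money tied up during the development years.

```python
# Illustrative model with hypothetical inputs (not the cited study's
# methodology or figures).
direct_cost = 100e6      # direct R&D spend on one candidate (hypothetical)
success_rate = 0.30      # fraction of candidates approved (hypothetical)
cost_of_capital = 0.105  # annual opportunity cost of money (hypothetical)
years = 8                # assumed development timeline (hypothetical)

# Failure-adjusted: each approval must also pay for the failures.
failure_adjusted = direct_cost / success_rate

# Capitalized: money spent early could have earned returns elsewhere,
# so compound it forward over the development period.
capitalized = failure_adjusted * (1 + cost_of_capital) ** years

print(f"Direct cost:          ${direct_cost / 1e6:,.0f}M")
print(f"Failure-adjusted:     ${failure_adjusted / 1e6:,.0f}M")
print(f"With cost of capital: ${capitalized / 1e6:,.0f}M")
```

Even with these invented inputs, the pattern matches the one in the paragraph above: each accounting layer roughly doubles or triples the headline cost per approved drug.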

