Efficacy is how well something works under ideal, controlled conditions. In medicine and health, it specifically refers to whether a treatment, vaccine, or intervention produces its intended effect when everything goes according to plan: the right patients, the right doses, careful monitoring, and strict protocols. It’s distinct from a related term, effectiveness, which describes how well that same intervention works in the messier reality of everyday life.
Understanding the difference matters because efficacy numbers are what you see in headlines about new drugs and vaccines, and they don’t always translate directly to what you’d experience in a typical doctor’s office.
Efficacy vs. Effectiveness
Efficacy and effectiveness answer two different questions. Efficacy asks: does this treatment work when everything is optimized? Effectiveness asks: does it work in the real world?
An efficacy trial enrolls a carefully selected, often homogeneous group of patients. Researchers use strict criteria to decide who qualifies, controlling for age, sex, disease severity, and other variables. The treatment is delivered by trained specialists following a precise protocol. This setup maximizes the chance of detecting whether the treatment actually does something, but it also creates conditions that don’t reflect what happens when millions of different people use it.
Effectiveness studies flip that approach. They enroll broader, more diverse populations with fewer exclusion criteria, deliver the treatment in routine clinical settings, and account for the reality that patients skip doses, switch doctors, or have other health conditions complicating the picture. Factors like patient preferences, the quality of the patient-doctor relationship, and whether someone lives in an urban or rural area can all influence outcomes in ways that controlled trials deliberately eliminate.
This is why a drug with 90% efficacy in a trial might perform somewhat differently in practice. The efficacy number isn’t wrong. It just reflects a best-case scenario.
How Efficacy Is Calculated
The most intuitive example comes from vaccines. Vaccine efficacy is calculated by comparing the rate of disease in vaccinated people against the rate in unvaccinated people, then expressing the reduction as a percentage of the unvaccinated rate. If 100 out of 10,000 unvaccinated people get sick and only 5 out of 10,000 vaccinated people get sick, the vaccine reduced the disease rate by 95%.
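To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The case counts and group sizes are the illustrative numbers from the example above, not data from any real trial.

```python
def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_unvaccinated, n_unvaccinated):
    """Efficacy = 1 - (attack rate among vaccinated / attack rate among unvaccinated)."""
    rate_vaccinated = cases_vaccinated / n_vaccinated
    rate_unvaccinated = cases_unvaccinated / n_unvaccinated
    return 1 - rate_vaccinated / rate_unvaccinated

# Illustrative numbers from the example above: 5 of 10,000 vaccinated people
# got sick versus 100 of 10,000 unvaccinated people.
efficacy = vaccine_efficacy(5, 10_000, 100, 10_000)
print(f"Vaccine efficacy: {efficacy:.0%}")  # Vaccine efficacy: 95%
```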
For drugs and other treatments, efficacy is measured similarly by comparing outcomes in a treatment group against a control group (usually receiving a placebo or standard care). But how those outcomes get reported can change the impression dramatically.
Relative risk reduction describes how much the treatment shrank the risk compared to the control group. If 20% of untreated patients had a bad outcome and 12% of treated patients did, the relative risk reduction is 40%. That sounds impressive. Absolute risk reduction, on the other hand, reports the simple difference: 20% minus 12% equals 8%. That means out of every 100 people treated, 8 were spared a bad outcome. Both numbers describe the same result, but they feel very different. When you see efficacy figures in the news, checking whether they’re relative or absolute gives you a much clearer picture of how meaningful the benefit really is.
One useful concept here is the “number needed to treat.” In the example above, you’d need to treat about 13 people for one person to benefit (100 divided by 8). The lower that number, the more impactful the treatment.
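A short sketch, using the same hypothetical 20% versus 12% figures from above, shows how one trial result yields all three numbers: relative risk reduction, absolute risk reduction, and number needed to treat.

```python
import math

def risk_summary(control_event_rate, treatment_event_rate):
    """Relative risk reduction, absolute risk reduction, and number needed
    to treat, from two event rates expressed as fractions (e.g. 0.20 = 20%)."""
    arr = control_event_rate - treatment_event_rate  # absolute risk reduction
    rrr = arr / control_event_rate                   # relative risk reduction
    nnt = math.ceil(1 / arr)                         # number needed to treat (rounded up)
    return rrr, arr, nnt

# Hypothetical figures from the text: 20% of untreated patients had a bad
# outcome versus 12% of treated patients.
rrr, arr, nnt = risk_summary(0.20, 0.12)
print(f"Relative risk reduction: {rrr:.0%}")  # 40%
print(f"Absolute risk reduction: {arr:.0%}")  # 8%
print(f"Number needed to treat:  {nnt}")      # 13
```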
How Clinical Trials Measure It
Efficacy gets tested across multiple phases of clinical trials, with the bar rising at each stage. Phase I trials come first and focus mainly on safety and dosing in a small group of volunteers. Phase II trials enroll roughly 100 to 300 people and focus on determining whether the treatment actually works while continuing to monitor safety. Phase III trials scale up to 1,000 to 3,000 participants to confirm that efficacy, compare the treatment against existing options, and collect enough data for regulatory approval.
Within these trials, different methods of analyzing the data can produce different efficacy estimates. The most common approach, called intention-to-treat analysis, includes every patient who was randomly assigned to a group, regardless of whether they actually followed the treatment plan. This gives a conservative, real-world-leaning estimate because it accounts for people who dropped out or didn’t take their medication as directed.
A per-protocol analysis, by contrast, only looks at patients who followed the study rules completely. This tends to show a larger treatment effect because it strips out the noise of non-adherence. Both numbers are informative. The intention-to-treat result tells you what to expect when you assign a treatment to a population. The per-protocol result tells you what’s possible when patients stick with it perfectly. In one major heart rhythm trial (CABANA), the intention-to-treat analysis showed no statistically significant benefit of one procedure over drug therapy, but the per-protocol analysis showed a meaningfully larger effect, highlighting how adherence alone can shift the apparent efficacy of a treatment.
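As a rough illustration of how the two analyses can diverge, here is a toy sketch with made-up per-patient records (not data from CABANA or any real trial). The intention-to-treat estimate uses everyone as randomized, while the per-protocol estimate keeps only patients who adhered.

```python
# Toy, made-up records: (assigned group, adhered to protocol, had bad outcome)
patients = [
    ("treatment", True,  False), ("treatment", True,  False),
    ("treatment", False, True),  ("treatment", True,  False),
    ("control",   True,  True),  ("control",   True,  False),
    ("control",   True,  True),  ("control",   False, False),
]

def event_rate(records, group):
    """Fraction of patients in the given group who had the bad outcome."""
    group_records = [r for r in records if r[0] == group]
    return sum(r[2] for r in group_records) / len(group_records)

# Intention-to-treat: analyze everyone as randomized, adherent or not.
itt = {g: event_rate(patients, g) for g in ("treatment", "control")}

# Per-protocol: keep only patients who actually followed the protocol.
adherent = [r for r in patients if r[1]]
pp = {g: event_rate(adherent, g) for g in ("treatment", "control")}

print("Intention-to-treat event rates:", itt)
print("Per-protocol event rates:      ", pp)
```

In this toy data the per-protocol comparison looks better for the treatment than the intention-to-treat one, purely because the non-adherent patients are dropped, which is the pattern described above.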
What Gets Measured as an “Outcome”
Efficacy is only as meaningful as the outcome it’s measuring, and not all trials measure the thing you’d most care about. Sometimes researchers use surrogate endpoints: lab values or biomarkers that stand in for the actual health outcome. Blood pressure reduction, for example, is a validated surrogate for stroke risk. Decades of trials have confirmed that lowering systolic blood pressure reliably reduces the chance of stroke, so a blood pressure trial doesn’t need to wait years for strokes to occur. It can measure blood pressure changes over a shorter period in a smaller group.
This approach makes trials faster and more practical, but surrogate endpoints aren’t always reliable proxies. A drug might improve a lab number without actually making patients live longer or feel better. When reading about a treatment’s efficacy, it’s worth noting whether the trial measured something you’d directly care about (survival, symptom relief, fewer hospitalizations) or a surrogate marker that may or may not translate to those outcomes.
How the FDA Uses Efficacy for Approval
For a new drug or biologic to reach the market in the United States, the FDA requires “substantial evidence of effectiveness.” For decades, this was interpreted as needing at least two adequate, well-controlled clinical trials showing the treatment works. In 1997, Congress introduced more flexibility: a single trial plus confirmatory evidence could also meet the standard.
Beyond the number of trials, the FDA considers the strength of the evidence holistically. Trial design, the endpoints chosen, statistical methods, and the size of the observed effect all factor in. A treatment for a life-threatening disease with no existing options might be approved on less extensive evidence than one entering a crowded market for a mild condition. The standard isn’t a single fixed threshold but a judgment call that weighs certainty of benefit against the urgency of the medical need.
Why the Distinction Matters to You
When you see a headline saying a new vaccine is “95% effective” or a drug “cuts risk by half,” those numbers almost always come from efficacy trials conducted under controlled conditions. They represent the ceiling of what the treatment can do. Real-world performance typically comes in somewhat lower, not because the treatment failed, but because life introduces variability that clinical trials are designed to eliminate.
Knowing this doesn’t mean you should distrust efficacy data. It means you can interpret it more accurately. A treatment with high efficacy that also shows strong effectiveness in follow-up studies is one you can feel genuinely confident about. And when relative risk reductions sound dramatic, converting them to absolute numbers gives you a grounded sense of how much difference the treatment is likely to make for someone like you.