Attrition rate in research is the percentage of participants who drop out of a study before it ends. If a clinical trial enrolls 500 people and only 400 complete it, the attrition rate is 20%. This number matters because losing participants can skew results, weaken conclusions, and introduce bias that undermines the entire study.
How Attrition Rate Is Calculated
The formula is straightforward: divide the number of participants who left the study by the number who originally enrolled, then multiply by 100. A trial that starts with 1,000 participants and loses 150 has a 15% attrition rate. Researchers sometimes calculate attrition at multiple time points throughout a study rather than only at the end, since participants can also have missing data at specific checkpoints without formally withdrawing.
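The formula can be written as a short function. This is a minimal sketch; the function name and signature are illustrative, not from any standard library:

```python
def attrition_rate(enrolled: int, completed: int) -> float:
    """Percentage of enrolled participants who did not complete the study."""
    if enrolled <= 0:
        raise ValueError("enrolled must be a positive count")
    dropped = enrolled - completed
    return 100.0 * dropped / enrolled

# The two examples from the text:
print(attrition_rate(500, 400))    # 20.0
print(attrition_rate(1000, 850))   # 15.0
```

The same function can be applied at each checkpoint, with `completed` meaning "provided data at this time point," to track attrition over the course of a study.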
The terms “attrition,” “dropout,” and “loss to follow-up” are often used interchangeably in practice. Technically, loss to follow-up refers to participants researchers simply can’t reach anymore, while dropout can include people who actively withdraw. But most published research treats them as the same problem: incomplete data from people who were supposed to be in the study.
Why Participants Leave Studies
An audit of clinical studies at a tertiary referral center found that nearly 88% of dropouts were due to loss to follow-up, not active withdrawal. The two most common reasons were practical: participants changed their phone numbers and couldn’t be contacted, or they moved to a different area. Only about 1% withdrew consent, and another 1% left because of adverse events from treatment.
Other common reasons across research more broadly include:
- Burden of participation: too many clinic visits, lengthy questionnaires, or invasive procedures
- Side effects: participants in drug trials may stop taking a medication that makes them feel worse
- Perceived lack of benefit: participants who don’t feel improvement may lose motivation
- Life changes: job transitions, family obligations, or health problems unrelated to the study
- Protocol nonadherence: participants who miss too many appointments or fail to follow study rules may be removed by investigators
When Attrition Becomes a Problem
Not all attrition is equally damaging. The key distinction is between random and nonrandom (differential) attrition. If participants drop out for reasons unrelated to the study, like moving for a new job, the remaining group still roughly represents the original population. The results lose statistical power because of the smaller sample, but they aren’t systematically distorted.
Differential attrition is far more dangerous. This happens when the people who leave are meaningfully different from those who stay. Imagine a trial testing a new antidepressant where participants experiencing severe side effects drop out at higher rates than those in the placebo group. The treatment group now contains only people who tolerated the drug well, making it look more effective and safer than it actually is. This kind of systematic dropout can completely invalidate a study’s conclusions.
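The antidepressant scenario above can be made concrete with a small simulation. Everything here is hypothetical: the effect size, side-effect rate, and dropout probabilities are invented purely to illustrate how completers-only analysis inflates an apparent treatment effect:

```python
import random

random.seed(42)

N = 10_000               # participants per arm (large, to keep noise small)
TRUE_EFFECT = 2.0        # assumed average benefit of treatment, in outcome points
SIDE_EFFECT_PENALTY = 3.0

def simulate_arm(treated: bool):
    """Return (all outcomes, completer outcomes) for one arm."""
    all_out, completers = [], []
    for _ in range(N):
        outcome = random.gauss(10.0, 2.0)
        dropout_p = 0.10                     # background, roughly random attrition
        if treated:
            outcome += TRUE_EFFECT
            if random.random() < 0.30:       # assumed side-effect rate
                outcome -= SIDE_EFFECT_PENALTY
                dropout_p = 0.80             # side effects drive differential dropout
        all_out.append(outcome)
        if random.random() >= dropout_p:
            completers.append(outcome)
    return all_out, completers

mean = lambda xs: sum(xs) / len(xs)
t_all, t_done = simulate_arm(True)
p_all, p_done = simulate_arm(False)

print(f"effect with no attrition: {mean(t_all) - mean(p_all):.2f}")
print(f"effect, completers only:  {mean(t_done) - mean(p_done):.2f}")
```

Because side-effect-prone participants mostly vanish from the treatment arm, the completers-only comparison overstates the benefit relative to the full-data comparison, which is exactly the distortion differential attrition causes.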
A widely cited rule of thumb holds that attrition up to 20% is generally acceptable and unlikely to introduce major bias. Between 20% and 40%, significant bias becomes a real concern, particularly when the attrition is nonrandom. Above 40%, most researchers and reviewers consider the results unreliable regardless of what statistical corrections are applied.
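These thresholds are only a rough triage, not a substitute for checking whether dropout was random or differential, but they are easy to encode. The function below is a hypothetical helper, not an established standard:

```python
def attrition_assessment(rate_pct: float) -> str:
    """Rule-of-thumb reading of an attrition percentage. Real judgment
    also depends on whether dropout is random or differential."""
    if rate_pct <= 20:
        return "generally acceptable"
    if rate_pct <= 40:
        return "significant bias is a real concern"
    return "results widely considered unreliable"

print(attrition_assessment(15))   # generally acceptable
```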
How Attrition Introduces Bias
When participants disappear from a study, the group being analyzed no longer matches the group that was originally randomized. In a randomized controlled trial, the whole point of randomization is to create two equivalent groups so any difference in outcomes can be attributed to the treatment. Attrition breaks that equivalence. If more people drop out of one group than the other, or if the dropouts share characteristics (older, sicker, lower income), the comparison between groups becomes unreliable.
This also prevents researchers from conducting a true intention-to-treat analysis, which is the gold standard for clinical trials. Intention-to-treat means analyzing every person based on the group they were originally assigned to, regardless of whether they completed the study. When those people are simply gone, there’s no data to analyze, and the study must rely on workarounds that involve assumptions about what might have happened.
How Researchers Handle Missing Data
When attrition does occur, researchers use statistical methods to account for the gaps. The most common modern approaches fall into a few categories.
Multiple imputation generates several plausible estimates for each missing data point based on patterns in the available data. This produces multiple complete versions of the dataset, each slightly different. Researchers analyze all of them separately, then combine the results into a single estimate. The strength of this approach is that it accounts for the uncertainty of guessing what the missing values might have been, rather than pretending to know for sure.
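The impute-analyze-pool cycle can be sketched in a few lines. This is a deliberately simplified toy: real multiple-imputation software conditions each draw on other variables in the dataset, whereas here each missing value is drawn from a normal distribution fitted to the observed values. The pooling step, combining within- and between-imputation variance, follows the standard approach known as Rubin's rules:

```python
import random
import statistics

random.seed(0)

# Toy outcome data: None marks a participant lost to attrition.
data = [7.1, 6.8, None, 7.4, None, 6.5, 7.9, None, 7.2, 6.9]

observed = [x for x in data if x is not None]
mu, sd = statistics.mean(observed), statistics.stdev(observed)

M = 20  # number of imputed datasets
estimates, variances = [], []
for _ in range(M):
    # Fill each gap with a random draw (a crude stand-in for a real
    # imputation model), giving one "complete" version of the dataset.
    completed = [x if x is not None else random.gauss(mu, sd) for x in data]
    estimates.append(statistics.mean(completed))
    variances.append(statistics.variance(completed) / len(completed))

# Pool: total variance combines within- and between-imputation variance,
# so the final uncertainty reflects the guessing involved in imputation.
pooled_est = statistics.mean(estimates)
within = statistics.mean(variances)
between = statistics.variance(estimates)
total_var = within + (1 + 1 / M) * between

print(f"pooled mean: {pooled_est:.2f}, total variance: {total_var:.4f}")
```

Note that `total_var` is always larger than the average within-dataset variance: the between-imputation term is precisely where the method admits it does not know the true missing values.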
Full information maximum likelihood takes a different path. Instead of filling in missing values, it estimates the study’s results directly using all the information contained in the incomplete dataset. It extracts as much signal as possible from whatever data exists. This method is especially common in studies using complex statistical models.
Both approaches are considered far more reliable than older methods like simply deleting incomplete cases or carrying forward the last available measurement. Those older techniques tend to understate uncertainty and can amplify the very bias that attrition creates.
Regulatory Expectations for Reporting
Regulatory agencies take attrition seriously when evaluating clinical trials submitted for drug approval. The European Medicines Agency states that the goal of every clinical trial should be complete data capture from all patients, including those who stop treatment. There is no fixed rule for the maximum number of missing values that regulators will accept, but trials are expected to thoroughly document and justify any gaps.
Specifically, trial reports should include the number and timing of dropouts, the reasons for each, and graphical summaries showing whether dropout patterns differ between treatment groups. The pattern of missing data matters as much as the statistical method used to handle it, because it helps regulators judge the likely direction of any bias. If more people dropped out of the treatment arm for side-effect-related reasons, for instance, that tells a different story than equal dropout across both arms for logistical reasons.
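A first step toward this kind of reporting is simply tabulating dropout counts and reasons per arm. The records below are hypothetical, and the field layout is an assumption for illustration:

```python
from collections import Counter

# Hypothetical participant records: (arm, dropout reason, or None if completed).
records = [
    ("treatment", "adverse event"), ("treatment", None),
    ("treatment", "lost to follow-up"), ("treatment", "adverse event"),
    ("placebo", None), ("placebo", "lost to follow-up"),
    ("placebo", None), ("placebo", "withdrew consent"),
]

summary = {}
for arm in ("treatment", "placebo"):
    arm_reasons = [r for a, r in records if a == arm]
    dropped = [r for r in arm_reasons if r is not None]
    summary[arm] = {
        "attrition_pct": 100 * len(dropped) / len(arm_reasons),
        "reasons": Counter(dropped),
    }
    print(arm, summary[arm])
```

A table like this makes asymmetries visible at a glance, such as adverse events appearing only in the treatment arm, which is the kind of pattern regulators look for when judging the likely direction of bias.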
Strategies That Reduce Attrition
Researchers have identified six broad categories of retention strategies. Communication approaches include personalized emails, reminder letters signed by different members of the study team, and varied delivery methods to keep participants engaged. Case management assigns dedicated trial assistants to individual participants, maintaining a personal connection and troubleshooting barriers to continued participation.
Incentives, whether cash payments, vouchers, or small gifts, can improve retention, particularly for studies requiring frequent visits. Behavioral strategies like workshops that help participants set goals or understand the importance of the research can strengthen commitment. Methodological choices also play a role: blinded trials, where participants don’t know whether they’re receiving treatment or placebo, tend to have lower attrition than open trials because participants are less likely to leave based on perceived group assignment.
Perhaps the simplest and most effective strategy is minimizing participant burden. Shorter questionnaires, fewer clinic visits, flexible scheduling, and remote data collection (through phone calls or apps) all reduce the friction that drives people away. Studies designed with the participant’s experience in mind consistently retain more people than those optimized purely for data collection.

