Attrition in research is the loss of participants before a study ends. It happens when people drop out, become unreachable, or stop completing follow-up assessments, and it’s one of the most common problems in studies that track people over time. Even a well-designed study can produce misleading results if too many participants leave, especially if the people who leave differ in important ways from those who stay.
How Attrition Works
In a longitudinal study or clinical trial, researchers need the same group of people to show up at multiple time points. Attrition refers specifically to what’s called a “monotone missing pattern”: once a participant drops out, they don’t come back. This distinguishes it from intermittent missingness, where someone skips one visit but returns for the next. Both create gaps in the data, but attrition is more damaging because it permanently shrinks the study sample.
The most common reason participants disappear is simply losing contact. A five-year audit of clinical studies at a tertiary referral center found that nearly 88% of all dropouts were classified as “lost to follow-up,” most often because participants changed phone numbers or moved to a different area. Only about 5% dropped out because they did not follow the study protocol, and roughly 1% withdrew consent or left because of side effects.
Why Attrition Threatens Study Results
Attrition does three things to a study’s integrity. First, it reduces statistical power: with fewer participants, the study is less able to detect real effects. Second, it disrupts the composition of groups. In a randomized controlled trial, the whole point of randomization is to create balanced groups, and if participants leave unevenly, that balance breaks down. Third, and most importantly, it can introduce bias that skews the findings.
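The power cost of a shrinking sample can be sketched with a simple calculation. This is a minimal illustration using a normal approximation for a two-sided two-sample z-test; the effect size and sample sizes are hypothetical, not taken from any study discussed here.

```python
import math

def two_sample_power(n_per_group: float, effect_size: float) -> float:
    """Approximate power of a two-sided two-sample z-test for a
    standardized mean difference (normal approximation, alpha = 0.05)."""
    z_crit = 1.96  # two-sided critical value at alpha = 0.05
    ncp = effect_size * math.sqrt(n_per_group / 2)  # non-centrality, equal arms
    # Power ~ P(Z > z_crit - ncp); the tiny opposite-tail term is ignored
    return 0.5 * (1 + math.erf((ncp - z_crit) / math.sqrt(2)))

# Hypothetical trial powered at ~80%: effect size 0.4, 100 per group
print(round(two_sample_power(100, 0.4), 2))  # 0.81
# After 25% attrition, only 75 per group remain
print(round(two_sample_power(75, 0.4), 2))   # 0.69
```

Losing a quarter of the sample drops power from about 80% to under 70%, meaning the study is noticeably more likely to miss a real effect.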
This bias shows up clearly in demographic patterns. One study following adolescents after traumatic brain injury found that participants who completed the study had higher caregiver education and higher family income than those who dropped out. Ethnicity and the specific treatment group weren’t associated with dropout, but income was. The result: the study’s findings were biased toward higher-income families and didn’t fully represent the outcomes of families from lower socioeconomic backgrounds. This pattern is common across health research.
Random vs. Differential Attrition
Not all attrition is equally problematic. Researchers distinguish between two types based on whether the dropout pattern is related to the study itself.
- Random (non-differential) attrition happens when participants leave for reasons unrelated to the study or their condition. Someone moves, gets a new job, or simply loses interest. This still reduces sample size and weakens statistical power, but it doesn’t systematically distort the results in one direction.
- Differential attrition happens when dropout rates differ between study groups. If people in the treatment group quit more often because of side effects while the control group stays, the remaining treatment group is no longer comparable. This is considered a serious threat to internal validity because it can make a treatment look more or less effective than it actually is.
The same logic applies at finer levels. Attrition can be individual-level (specific people dropping out) or cluster-level (entire classrooms, clinics, or sites disappearing from the study). Each type distorts the data differently and requires different statistical handling.
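The distortion caused by differential attrition can be illustrated with a toy simulation. This is a hypothetical sketch, not data from any real trial: the “treatment” has no true effect at all, but because the sickest treated participants drop out more often, the completers in the treatment arm look healthier than the controls.

```python
import random

random.seed(1)

def completer_mean(n: int, dropout_if_sick: float) -> float:
    """Mean outcome among completers in one arm (hypothetical data).
    Outcome is symptom severity; the 'treatment' has NO true effect.
    Participants with severity above 1 drop out with the given probability."""
    completers = []
    while len(completers) < n:
        severity = random.gauss(0, 1)
        if severity > 1 and random.random() < dropout_if_sick:
            continue  # lost to follow-up, excluded from the analysis
        completers.append(severity)
    return sum(completers) / len(completers)

# Control arm: no outcome-related dropout. Treatment arm: the sickest
# participants quit 70% of the time (e.g. due to side effects).
control = completer_mean(5000, dropout_if_sick=0.0)
treated = completer_mean(5000, dropout_if_sick=0.7)
print(round(treated - control, 2))  # negative: the inert treatment looks helpful
```

The completers-only comparison shows a clear “benefit” for a treatment that does nothing, which is exactly the internal-validity threat described above.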
How Much Attrition Is Too Much
There’s no universal cutoff, but many researchers and review bodies treat 20% as a rough threshold. Studies with overall attrition below 20% are generally considered lower risk for bias, while those above it raise concerns. Differential attrition of more than about 10 to 15 percentage points between groups is a red flag in clinical trials. These are guidelines, not hard rules. A study with 25% attrition where dropouts look statistically identical to completers is less concerning than one with 15% attrition concentrated among the sickest participants.
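These rules of thumb are easy to express as a quick screening check. The function below is a sketch based on the thresholds described above (hypothetical function name and example counts), not a standard tool from any review body.

```python
def attrition_flags(enrolled: dict, completed: dict) -> list:
    """Flag attrition concerns using rough rules of thumb: overall
    attrition over 20%, or a gap of more than 15 percentage points
    between groups. These are guidelines, not hard rules."""
    rates = {g: 1 - completed[g] / enrolled[g] for g in enrolled}
    overall = 1 - sum(completed.values()) / sum(enrolled.values())
    flags = []
    if overall > 0.20:
        flags.append(f"overall attrition of {overall:.0%}")
    gap = max(rates.values()) - min(rates.values())
    if gap > 0.15:
        flags.append(f"differential attrition of {gap:.0%} between groups")
    return flags

# Hypothetical trial: overall attrition is only 19%, but the losses
# are concentrated in one arm (30% vs 8%)
print(attrition_flags({"treatment": 100, "control": 100},
                      {"treatment": 70, "control": 92}))
```

Note that this example passes the overall threshold yet still gets flagged: where the dropouts are concentrated matters as much as how many there are.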
How Researchers Report and Handle It
The CONSORT guidelines, the standard reporting framework for randomized controlled trials, require researchers to account for every participant. Specifically, they must report the number of participants randomly assigned to each group, the number who received the intended treatment, the number analyzed for the primary outcome, and all losses and exclusions after randomization along with the reasons. A flow diagram is strongly recommended to make this transparent at a glance.
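The participant accounting CONSORT asks for amounts to a consistency check: every randomized participant must end up in exactly one bucket. The sketch below uses hypothetical counts and simplifies by assuming loss to follow-up is the only post-randomization exclusion; a real report would itemize reasons and render this as a flow diagram.

```python
# Hypothetical counts for a two-arm trial
flow = {
    "randomized":         {"treatment": 120, "control": 120},
    "received_treatment": {"treatment": 115, "control": 118},
    "lost_to_follow_up":  {"treatment": 18,  "control": 6},
    "analyzed":           {"treatment": 97,  "control": 112},
}

for arm in ("treatment", "control"):
    unaccounted = (flow["received_treatment"][arm]
                   - flow["lost_to_follow_up"][arm]
                   - flow["analyzed"][arm])
    # Every participant who received treatment must be either analyzed
    # or reported as lost, with a reason
    assert unaccounted == 0, f"{arm}: {unaccounted} unaccounted participants"
    print(arm, "attrition:", flow["lost_to_follow_up"][arm] / flow["randomized"][arm])
```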
When it comes to analyzing the data despite dropouts, two main approaches exist. Intention-to-treat analysis includes every participant who was randomized, regardless of whether they completed the study or followed the protocol. This preserves the benefits of randomization and reflects real-world conditions where not everyone sticks with a treatment. Per-protocol analysis only includes participants who completed the study as planned, which gives a cleaner picture of what happens when a treatment is followed correctly but risks the biases that come with attrition. Both approaches are valid, but they answer different questions.
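The difference between the two analyses comes down to who is included in the denominator. The sketch below uses six hypothetical participant records to show how the same data yield different effect estimates when dropouts (who tend to have worse outcomes) are excluded.

```python
# Hypothetical per-participant records: assigned arm, whether the
# participant completed per protocol, and an outcome score
# (higher = better; for dropouts, the last observed value)
participants = [
    {"arm": "treatment", "completed": True,  "outcome": 8},
    {"arm": "treatment", "completed": True,  "outcome": 7},
    {"arm": "treatment", "completed": False, "outcome": 3},  # dropped out early
    {"arm": "control",   "completed": True,  "outcome": 5},
    {"arm": "control",   "completed": True,  "outcome": 4},
    {"arm": "control",   "completed": False, "outcome": 4},
]

def effect(records) -> float:
    """Difference in mean outcome, treatment minus control."""
    def mean(arm):
        vals = [r["outcome"] for r in records if r["arm"] == arm]
        return sum(vals) / len(vals)
    return mean("treatment") - mean("control")

itt = effect(participants)                                 # everyone randomized
pp = effect([r for r in participants if r["completed"]])   # completers only
print(round(itt, 2), round(pp, 2))  # prints 1.67 3.0
```

Dropping the one poorly-faring treatment dropout nearly doubles the apparent effect, which is why per-protocol results need to be read alongside the attrition pattern.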
Strategies That Reduce Dropout
Researchers have tested a range of retention strategies, with varying success. Communication methods matter: the type of reminders, who signs them, and how they’re delivered (email, phone, mail) all influence whether participants stay engaged. Shorter questionnaires tend to reduce dropout compared to longer ones, which makes intuitive sense. Monetary incentives, vouchers, and small gifts have also been shown to help.
Some of the more effective approaches are structural. Assigning a dedicated trial assistant to manage follow-up for each participant, sometimes called case management, keeps people from slipping through the cracks. Behavioral interventions like workshops that help participants set goals around study participation can improve retention. Even the study design itself plays a role: blinded trials, where participants don’t know which group they’re in, tend to have lower attrition than open-label trials where everyone knows what treatment they’re getting.
Collecting multiple forms of contact information at enrollment, including backup phone numbers and addresses of close relatives, addresses the most common cause of attrition: simply losing touch with participants who move or change their number.