What Is Attrition in Psychology: Causes and Effects

Attrition in psychology refers to the loss of study participants over time. It’s one of the most common problems researchers face, and it can seriously compromise the quality of a study’s findings. In clinical trials for mental health treatments, roughly 30% of participants drop out before the study ends, though rates can swing anywhere from under 10% to over 60% depending on the study design and population.

How Attrition Works

Every psychology study starts with a set number of participants. As the study progresses, some of those participants stop showing up. They move away, lose interest, find the procedures too burdensome, or simply can’t be reached. This gradual loss is attrition, and the longer a study runs, the worse it tends to get. Longitudinal studies that follow people over months or years are especially vulnerable.

The problem isn’t just that you end up with fewer people to analyze. The people who drop out are often systematically different from the people who stay. Research has consistently found that participants who are male, single, lower-income, less educated, from minority ethnic groups, or dealing with depression or substance use are more likely to leave a study early. That means the remaining sample may no longer represent the population the study was trying to learn about.

Why Attrition Threatens Study Results

Attrition creates two distinct problems: it reduces statistical power (making it harder to detect real effects because there are fewer data points), and it introduces bias. The bias issue is the more dangerous one. If the people who drop out share characteristics that are relevant to whatever the study is measuring, the results become skewed in ways that aren’t always obvious.
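The power loss is easy to quantify. As a rough sketch (the group sizes and effect size here are assumed for illustration, not taken from any study), a normal-approximation power formula shows how a 30% dropout shrinks the chance of detecting a medium-sized effect:

```python
from statistics import NormalDist
import math

def approx_power(n_per_group: int, effect_size: float, alpha: float = 0.05) -> float:
    """Normal-approximation power for a two-sided, two-sample comparison
    of means with equal group sizes (Cohen's d as the effect size)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return NormalDist().cdf(noncentrality - z_crit)

# Hypothetical scenario: medium effect (d = 0.5), 64 participants per group.
print(round(approx_power(64, 0.5), 2))  # ~0.81 before any dropout
# After 30% attrition, about 45 per group remain:
print(round(approx_power(45, 0.5), 2))  # ~0.66 — a real effect is now much easier to miss
```

This only captures the power problem; no amount of extra sample size fixes the bias problem if dropout is systematic.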

Consider a study testing a new therapy for anxiety. If participants with the most severe anxiety are the ones who quit, the therapy might look more effective than it actually is, because only people with milder symptoms stuck around to be measured. The study would report positive results that don’t reflect what would happen in the real world, where the full range of severity exists. This is a threat to external validity, meaning the findings can’t be generalized beyond the specific group that completed the study. It also threatens internal validity, meaning you can no longer be confident the therapy caused the improvement rather than the selective loss of the most anxious participants.

Differential vs. Non-Differential Attrition

Not all attrition is equally problematic. When participants drop out at similar rates across all groups in a study, that’s non-differential attrition. It reduces your sample size and may limit your statistical power, but it doesn’t necessarily distort the comparison between groups.

Differential attrition is a bigger concern. This happens when dropout rates differ between the treatment and control groups. A systematic review of health behavior trials found that average attrition was 18% in intervention groups and 17% in control groups, a small gap in that case. But in many studies the gap is much wider. If 40% of participants leave the treatment group but only 15% leave the control group, any comparison between those groups becomes unreliable. The two groups may no longer be equivalent in the ways they were at the start, which undermines the entire logic of a controlled experiment.
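The arithmetic behind that 40%-vs-15% example makes the imbalance concrete (the starting group size of 100 is assumed for illustration):

```python
# Illustrative arithmetic for differential attrition.
treatment_start = 100   # assumed starting size per group
control_start = 100

treatment_dropout = 0.40  # dropout rates from the example above
control_dropout = 0.15

treatment_remaining = round(treatment_start * (1 - treatment_dropout))
control_remaining = round(control_start * (1 - control_dropout))

print(treatment_remaining)  # 60 of 100 treatment participants remain
print(control_remaining)    # 85 of 100 control participants remain
```

The final comparison is between 60 people and 85 people, and, more importantly, the 60 who stayed in the treatment group may differ systematically from the 85 who stayed in the control group.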

Typical Dropout Rates

Attrition rates vary widely by field and study type, but some benchmarks help put the numbers in perspective. A meta-analysis of trials involving young people at high risk for psychosis found a pooled attrition rate of about 30%, with individual studies ranging from 6% to 57%. Follow-up assessments after the main intervention had even higher dropout, around 34%.

Other mental health research shows similar patterns. Cognitive behavioral therapy trials across various disorders average about 26% attrition. Studies of outpatient mental health care for children and adolescents report a mean dropout of 28%. Trials for generalized anxiety disorder tend to fare better, with a pooled rate around 17%. Web-based interventions for adolescent depression show the widest spread, with attrition ranging from 0% to 61%. Many researchers now recommend building a 30% attrition rate into their planning from the start, recruiting extra participants to compensate for the expected loss.
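Building an expected attrition rate into recruitment is a one-line calculation. This sketch uses the 30% planning figure mentioned above; the target sample size of 120 is an assumed example:

```python
import math

def recruits_needed(target_n: int, expected_attrition: float) -> int:
    """Number of participants to recruit so that, after the expected
    dropout rate, roughly target_n remain for analysis."""
    return math.ceil(target_n / (1 - expected_attrition))

# To end with 120 analyzable participants at 30% expected attrition:
print(recruits_needed(120, 0.30))  # 172
```

Note the division: recruiting 30% *more* than the target (156) would not be enough, because the dropout rate applies to the larger recruited sample.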

What Causes Participants to Drop Out

Some reasons are straightforward and logistical. Participants relocate, change jobs, or simply can’t make it to the study site. Transportation barriers, inconvenient scheduling, and the sheer time commitment of repeated assessments all play a role. Studies that require participants to travel to a specific location or undergo lengthy or uncomfortable procedures lose people faster.

Other reasons are more closely tied to the research itself. Studies on sensitive topics like trauma, substance use, or sexual behavior tend to see higher attrition because the process of being studied can feel intrusive or distressing. Poor rapport between researchers and participants matters too. If participants don’t feel valued or don’t understand why the study matters, they’re less likely to keep coming back. Compensation also makes a difference: studies that offer meaningful incentives retain participants better than those that don’t, though the type of incentive (cash, gift cards, small gifts) and its timing can affect how well it works.

How Researchers Handle Missing Data

The gold standard for dealing with attrition in clinical trials is called intention-to-treat analysis. The idea is simple in principle: you analyze every participant based on the group they were originally assigned to, whether or not they completed the study. If someone was assigned to the therapy group but dropped out after two sessions, their data still counts toward the therapy group’s results.

In practice, this is tricky because you often don’t have final outcome data for people who left. Researchers use statistical techniques to estimate what those missing results might have looked like, based on the data they do have. This approach isn’t perfect, but it prevents a common and misleading shortcut: only analyzing data from people who finished the study, which can make a treatment look better than it is.

Reporting standards now require researchers to be transparent about attrition. The CONSORT framework, widely used for clinical trials, calls for a flow diagram showing exactly how many participants were excluded, how many were assigned to each group, and how many were lost at each stage along with the reasons. This lets readers judge for themselves whether the dropout pattern might have affected the conclusions.

Strategies That Reduce Attrition

Researchers have tested a range of approaches to keep participants engaged. Communication strategies matter: personalized emails, letters signed by the lead researcher rather than a generic study coordinator, and varied contact methods all help. Some studies have found that the simple act of switching from standard mail to priority or tracked delivery increases response rates for follow-up assessments.

Monetary incentives are the most studied retention tool. Offering payment for each completed visit, rather than a lump sum at the end, tends to work better because participants have an immediate reason to show up. Gift cards, vouchers, and small non-monetary gifts have also shown positive effects, though cash generally outperforms alternatives. Beyond incentives, making the study experience as convenient and respectful as possible (keeping visits short, offering flexible scheduling, and maintaining genuine relationships with participants) reduces the friction that leads people to quietly disappear.