What Is a Waitlist Control Group and How Does It Work?

A waitlist control group is a group of study participants who are assigned to receive no treatment during the active phase of a trial but are guaranteed to receive the intervention once that phase ends. It functions as a comparison group, letting researchers measure whether the treatment works better than simply waiting, while ensuring no one misses out on a potentially helpful therapy. This design is especially common in psychology, rehabilitation, and behavioral health research.

How a Waitlist Control Works

In a typical study, participants are randomly assigned to either the treatment group or the waitlist group. The treatment group begins the intervention immediately. The waitlist group continues with their normal routine and receives no active intervention during that period, but they complete the same assessments (questionnaires, symptom measures, cognitive tests) at the same time points as the treatment group.
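The assignment step above can be sketched in a few lines of code. This is a minimal illustration, not a real trial protocol: the participant IDs, group sizes, and fixed seed are all hypothetical, and real trials typically use more sophisticated allocation schemes (e.g., stratified or blocked randomization).

```python
# A minimal sketch of random assignment to treatment vs. waitlist arms.
# Participant IDs, group sizes, and the seed are hypothetical.
import random

random.seed(42)  # fixed seed so the allocation is reproducible

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 enrolled participants
random.shuffle(participants)

half = len(participants) // 2
treatment_group = sorted(participants[:half])  # begins the intervention now
waitlist_group = sorted(participants[half:])   # waits, then crosses over

print("Treatment:", treatment_group)
print("Waitlist: ", waitlist_group)
```

Both arms then complete the same assessments on the same schedule; the only difference is when treatment begins.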

Once the comparison period ends, the waitlist group crosses over and receives the same treatment. If an intervention is designed to last five weeks, for example, the waitlist period is usually set to match: five weeks of waiting, followed by five weeks of treatment. Researchers try to keep the waiting period as short as possible. Restricting access to a potentially beneficial treatment longer than necessary raises ethical concerns and can threaten participants’ well-being.

The key comparison happens during that initial window, before the crossover. Researchers look at how much the treatment group improved relative to the waitlist group, which serves as a benchmark for what happens without any intervention at all.
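In practice, that comparison is often expressed as the difference in pre-to-post change between the two groups, sometimes standardized as an effect size such as Cohen's d. The sketch below shows the arithmetic with entirely made-up symptom scores (lower = better); the numbers are illustrative only.

```python
# Illustrative sketch of the core waitlist comparison: how much more did
# the treated group change than the waitlist group during the initial
# (pre-crossover) window? All scores below are hypothetical.
from statistics import mean, stdev

# Hypothetical symptom scores (lower = better) at baseline and week 5.
treatment_pre  = [24, 22, 26, 23, 25, 21, 27, 24]
treatment_post = [14, 19, 17, 12, 21, 15, 18, 16]
waitlist_pre   = [25, 23, 24, 22, 26, 24, 23, 25]
waitlist_post  = [24, 20, 25, 23, 24, 26, 21, 24]

treat_change = [post - pre for pre, post in zip(treatment_pre, treatment_post)]
wait_change  = [post - pre for pre, post in zip(waitlist_pre, waitlist_post)]

# The treatment effect estimate: how much more the treated group changed.
diff = mean(treat_change) - mean(wait_change)

# A standardized effect size (Cohen's d, using the pooled SD of change scores).
n1, n2 = len(treat_change), len(wait_change)
pooled_sd = (((n1 - 1) * stdev(treat_change) ** 2 +
              (n2 - 1) * stdev(wait_change) ** 2) / (n1 + n2 - 2)) ** 0.5
cohens_d = diff / pooled_sd
print(f"Mean difference in change: {diff:.2f} points; Cohen's d = {cohens_d:.2f}")
```

A negative difference here favors the treatment, since lower scores mean fewer symptoms.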

Why Researchers Use This Design

The central appeal is ethical. In a standard placebo-controlled trial, some participants never receive the real treatment. That’s acceptable when testing a new drug with unknown risks, but it becomes harder to justify in fields like psychotherapy or rehabilitation, where withholding a likely beneficial treatment from people seeking help feels problematic. A waitlist design solves this by ensuring every participant eventually gets treated.

This also helps with recruitment. People are more willing to join a study when they know they’ll receive the intervention regardless of which group they land in. For researchers working with small patient populations, like adults with Parkinson’s disease and co-occurring depression, that recruitment advantage can make the difference between a feasible study and an impossible one.

The design is most common in psychology, psychiatry, and rehabilitation research. Cognitive behavioral therapy trials, mindfulness programs, group therapy interventions, and digital health tools frequently use waitlist controls. It’s generally considered sufficient for early-stage (pilot) studies aiming to show that an intervention has any effect compared to doing nothing.

The Problem With Inflated Results

Waitlist controls have a well-documented tendency to make treatments look more effective than they actually are. The inflation can be dramatic. In one meta-analysis of exposure and response prevention therapy for OCD, effect sizes were almost six times larger when the therapy was compared to a waitlist group than when it was compared to other control conditions. Whether the trial used a waitlist or another type of control accounted for roughly one-third of the variance in effect sizes across studies.

This happens for two reasons working in opposite directions. First, the treatment group may improve partly because of non-specific factors: the attention from a therapist, the structure of regular appointments, the simple expectation that something helpful is happening. A waitlist group doesn’t control for any of those factors, so their effects get bundled into the apparent treatment benefit.

Second, and more surprising, the waitlist group may actually get worse or stagnate in ways that wouldn’t happen outside of a study. Research on depression found that participants in waitlist groups improved less than expected, as if being told to wait made them passive. The implicit message of a waitlist is “hold off for now,” and people tend to comply. One smoking cessation trial found that some waitlisted participants who had negative feelings about waiting decided to delay their quit attempt. In an alcohol study, participants who were most ready to take action actually drank more after being placed on a waitlist, consuming roughly six extra drinks per week compared to their counterparts who received the intervention immediately.

So the gap between the treatment group and the waitlist group widens from both sides: treatment participants improve partly due to placebo-like effects, and waitlist participants may deteriorate or freeze in place. The result is an inflated estimate of how well the treatment works.
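A small simulation makes the two-sided inflation concrete. All parameters here are invented for illustration: a fixed "specific" treatment effect, a "non-specific" gain from attention and expectancy, and a slight drag on waitlisted participants. The same specific effect then looks much larger against a waitlist than against an active control.

```python
# Illustrative simulation (hypothetical parameters): the same specific
# treatment effect yields a much larger effect size against a waitlist
# than against an active control, because non-specific gains and waitlist
# stagnation both widen the gap.
import random
from statistics import mean, stdev

random.seed(0)

def cohens_d(a, b):
    """Standardized difference between two groups' improvement scores."""
    pooled = (((len(a) - 1) * stdev(a) ** 2 + (len(b) - 1) * stdev(b) ** 2)
              / (len(a) + len(b) - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

def simulate(n=200, specific=2.0, nonspecific=3.0, waitlist_drag=-1.0):
    """Simulated improvement (in arbitrary symptom points) over the trial."""
    noise = lambda: random.gauss(0, 4)
    treated  = [specific + nonspecific + noise() for _ in range(n)]
    active   = [nonspecific + noise() for _ in range(n)]  # gets attention/expectancy
    waitlist = [waitlist_drag + noise() for _ in range(n)]  # may stall or worsen
    return cohens_d(treated, waitlist), cohens_d(treated, active)

d_waitlist, d_active = simulate()
print(f"d vs waitlist: {d_waitlist:.2f}, d vs active control: {d_active:.2f}")
```

Under these made-up parameters the waitlist comparison produces an effect size several times larger, even though the specific ingredient of the treatment is identical in both comparisons.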

Waitlist vs. Active Control Groups

The alternative to a waitlist (or “passive”) control is an active control group. In this design, the comparison group participates in some kind of alternative activity: a different type of therapy, an educational program, or a structured activity that isn’t expected to target the specific outcome being measured. Active controls account for the effects of time, attention, and expectation, isolating the specific ingredient the researchers are testing.

Interestingly, a large analysis aggregating data from over 1,500 cognitive training studies found no meaningful difference in effect sizes between studies using active controls and those using passive controls like waitlists. Bayesian statistical analysis strongly supported the conclusion that control group type did not influence results for objective cognitive measures. This suggests the inflation problem may be more pronounced in certain fields (like psychotherapy for mood disorders) than others.

In practice, the choice between control types depends on the research question. A waitlist control answers: “Is this treatment better than doing nothing?” An active control answers: “Is this treatment better than a comparable alternative?” The second question is harder to answer but more useful for real-world decision-making.

Limitations Beyond Bias

Waitlist designs create a structural problem for long-term data. Once the waitlist group crosses over and receives treatment, there’s no untreated comparison group left. Researchers can track whether the treatment group maintained its gains over months or years, but they can’t compare those outcomes to a group that never received the intervention. This makes it difficult to study how durable treatment effects are relative to natural recovery.

Generalizability is another concern. Being placed on a waitlist inside a research study is not the same as being on a real-world clinical waitlist, and it’s certainly not the same as never seeking treatment at all. Participants in a study have already taken the step of enrolling, completed intake assessments, and may have heightened expectations. How they behave during the waiting period (whether they seek other help, change their habits, or simply stall) may not reflect what would happen in a comparable population outside the study. This is particularly true for online studies, where participants actively sought out the intervention and may feel more disappointment about being asked to wait.

Finally, the design is inappropriate for serious or acute medical conditions. If a condition requires immediate care, randomly assigning someone to a waiting period is ethically unacceptable. Waitlist controls work best when the condition is chronic, stable, and non-life-threatening, and when a delay of several weeks poses no meaningful risk.