A quasi-experiment is a study that tests the effect of something (a treatment, program, or event) on an outcome, just like a true experiment, but without randomly assigning people to groups. That single difference, the lack of random assignment, is what puts the “quasi” (meaning “resembling”) in front of “experiment.” Researchers still manipulate or identify an independent variable and measure its effect, but they work with groups that already exist or that form naturally rather than flipping a coin to decide who gets the treatment and who doesn’t.
How It Differs From a True Experiment
In a true experiment, participants are randomly assigned to either a treatment group or a control group. Random assignment is powerful because it tends to spread out all the individual differences (age, personality, health, motivation) evenly across groups. That way, if the treatment group scores differently on the outcome measure, you can be fairly confident the treatment itself caused the difference.
A quasi-experiment skips that step. The researcher either selects pre-existing groups (say, two different classrooms, or people who already smoke versus people who don’t) or lets some practical circumstance determine who receives the intervention. Everything else can look identical to a true experiment: there may be a control group, measurements before and after the treatment, and careful data collection. But because participants weren’t randomly sorted, there’s always the possibility that some hidden difference between the groups, not the treatment, is responsible for the results. That hidden difference is called a confounding variable.
Why Researchers Use This Design
Sometimes random assignment is impossible, impractical, or unethical. You can’t randomly assign people to experience childhood trauma, live through a natural disaster, or attend a particular school. You can’t ethically withhold a promising intervention from patients who need it just to create a clean control group. In those situations, a quasi-experiment is often the best available option.
Quasi-experiments are also less expensive and less time-consuming than full randomized controlled trials. They can be used to evaluate policy changes that have already happened, even if the researcher had no role in implementing them. A school district rolls out a new anti-bullying program in some schools but not others? That’s a ready-made quasi-experiment waiting to be analyzed. Quasi-experiments are pragmatic: they evaluate interventions as they actually play out in the real world, carried out by regular staff under normal conditions, rather than under the tightly controlled circumstances of a lab. That real-world quality often gives them stronger external validity, meaning the results are more likely to generalize to other settings and populations.
Common Quasi-Experimental Designs
Three designs come up most often:
- Pretest-posttest with a nonequivalent control group. This is the most widely used quasi-experimental design. The researcher picks a group to receive the treatment and a similar group to serve as the control. Both groups are measured before the treatment (pretest) and after it (posttest). If the two groups start out with similar scores on the pretest and similar demographic characteristics, any gap that appears on the posttest is more plausibly linked to the treatment. “Similar” is the key word here: because assignment isn’t random, the researcher has to check that the groups match up on relevant variables like age, background, or severity of a condition.
- One-group pretest-posttest. There is no control group at all. A single group is measured, given the treatment, and measured again. This is the simplest design but the weakest, because any change could be due to the treatment, the passage of time, outside events, or even the act of taking the pretest itself.
- Posttest-only with a nonequivalent control group. Two groups are compared after one receives the treatment, but neither group is measured beforehand. Without a pretest, it’s harder to know whether the groups were equivalent to begin with, which makes it more difficult to attribute differences to the treatment.
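The comparison logic behind the pretest-posttest design with a nonequivalent control group can be sketched numerically. Below is a minimal illustration with invented scores: each group's change from pretest to posttest is computed, and the control group's change is subtracted out as an estimate of what would have happened anyway.

```python
# Hypothetical pretest/posttest scores for a treatment group and a
# nonequivalent control group (all numbers invented for illustration).
treatment = {"pretest": [52, 48, 50, 55, 49], "posttest": [61, 58, 60, 66, 59]}
control = {"pretest": [51, 47, 52, 54, 50], "posttest": [53, 49, 54, 56, 52]}

def mean(xs):
    return sum(xs) / len(xs)

# Change within each group from pretest to posttest.
treatment_change = mean(treatment["posttest"]) - mean(treatment["pretest"])
control_change = mean(control["posttest"]) - mean(control["pretest"])

# The control group's change estimates what would have happened without
# the treatment (maturation, history, testing effects, and so on).
# Subtracting it isolates the part of the change plausibly due to the
# treatment -- a "difference in differences".
estimated_effect = treatment_change - control_change
print(f"Treatment group change: {treatment_change:+.1f}")
print(f"Control group change:   {control_change:+.1f}")
print(f"Estimated effect:       {estimated_effect:+.1f}")
```

Note that this subtraction is only persuasive if the groups really are comparable at baseline, which is exactly why the pretest check matters.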
A fourth design worth knowing is the interrupted time-series design. Here, a researcher collects many measurements over time, introduces an intervention at a specific point, and then continues measuring. If the trend line shifts noticeably right after the intervention, that’s evidence of an effect. This design is commonly used to study the impact of new laws, public health campaigns, or organizational policy changes.
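The interrupted time-series logic can be sketched the same way. The toy example below uses invented monthly counts and a simple before/after comparison of means; a real analysis would also model the pre-existing trend and account for autocorrelation in the series.

```python
# Hypothetical monthly incident counts before and after an intervention
# introduced at month 12 (all numbers invented for illustration).
before = [30, 31, 29, 32, 30, 28, 31, 30, 29, 32, 30, 31]
after = [24, 23, 25, 22, 24, 23, 25, 24, 22, 23, 24, 23]

def mean(xs):
    return sum(xs) / len(xs)

# A noticeable level shift right after the intervention is evidence of
# an effect; the long, stable pre-intervention series shows the outcome
# was not already trending downward before the change.
level_shift = mean(after) - mean(before)
print(f"Mean before: {mean(before):.2f}, mean after: {mean(after):.2f}")
print(f"Level shift at the intervention: {level_shift:+.2f}")
```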
Threats to Internal Validity
Internal validity is the degree to which you can confidently say the treatment (and not something else) caused the observed change. Quasi-experiments face several well-known threats, originally outlined by psychologist Donald Campbell in 1957. Seven stand out:
- Selection bias. The groups being compared may differ in important ways from the start. This is the most fundamental problem in any nonrandomized design, because pre-existing differences can masquerade as treatment effects.
- History. Events outside the study that occur between the pretest and posttest can influence the outcome. If a national news story about mental health breaks during a study on a therapy program, it could affect participants’ responses regardless of the treatment.
- Maturation. People naturally change over time. They get older, more fatigued, more experienced, or simply more bored. Those natural shifts can look like treatment effects, especially in longer studies or studies with children.
- Testing effects. Taking a pretest can itself change how people perform on the posttest, simply because they’ve seen similar questions before.
- Instrument decay. If the measurement tool changes over time (observers become less attentive, a survey is revised midway), differences between pretest and posttest may reflect the measurement shift rather than a real change.
- Statistical regression. People who score at extreme highs or lows on a first measurement tend to score closer to average on a second measurement, purely due to chance. This “regression to the mean” can be mistaken for improvement or decline.
- Mortality (attrition). If participants drop out of one group more than the other, the remaining groups may no longer be comparable.
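Regression to the mean is easy to demonstrate with a small simulation. In the sketch below (standard library only, with a fixed seed for reproducibility), each observed score is a stable "true" score plus random noise. People selected for extreme first scores drift back toward the average on retest even though nothing about them has changed.

```python
import random
from statistics import mean

random.seed(42)  # reproducible illustration

# Each person has a stable true score; each observed score adds noise.
true_scores = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

# Select people who scored in the extreme high tail on the first test.
cutoff = 125
selected = [(a, b) for a, b in zip(test1, test2) if a >= cutoff]

first = mean(a for a, _ in selected)
second = mean(b for _, b in selected)

# With no intervention at all, the extreme group's retest mean falls
# back toward the population mean of 100 -- pure regression to the mean.
print(f"Selected group, test 1 mean: {first:.1f}")
print(f"Same group,     test 2 mean: {second:.1f}")
```

In a one-group pretest-posttest study that recruits only extreme scorers, this artifact alone can masquerade as a treatment effect.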
Nonrandomized designs also tend to overestimate how large a treatment effect is, precisely because these biases are harder to control.
How Researchers Strengthen the Design
Knowing these threats, researchers use several strategies to make quasi-experiments more convincing. Adding a well-matched control group is the most important step: it provides a baseline for comparison that accounts for at least some outside influences. Taking multiple measurements over time, rather than just one pretest and one posttest, helps reveal whether changes were already trending before the treatment began. Some researchers use a crossover approach, where the control group eventually receives the treatment too, which lets each group serve as its own comparison.
At the analysis stage, researchers carefully compare the demographics and baseline scores of each group. If the treatment and control groups start with statistically similar pretest scores and share key characteristics, the case for attributing posttest differences to the intervention grows stronger. More advanced statistical techniques can also adjust for known differences between groups, though they can never fully eliminate the risk of unmeasured confounders.
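A baseline comparability check can be as simple as a standardized comparison of pretest means. The sketch below computes Welch's two-sample t statistic by hand on hypothetical data; a real analysis would report a p-value and examine several covariates, not just one score.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical baseline (pretest) scores for two pre-existing groups.
treatment_pre = [48, 52, 50, 47, 53, 49, 51, 50]
control_pre = [49, 51, 50, 48, 52, 50, 49, 51]

def welch_t(a, b):
    """Welch's two-sample t statistic: a standardized difference in
    means that does not assume equal variances."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(treatment_pre, control_pre)

# A t statistic near zero suggests the groups start out with similar
# pretest scores, which strengthens (but never proves) the claim that
# posttest differences reflect the treatment.
print(f"Baseline t statistic: {t:.2f}")
```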
Where You See Quasi-Experiments in Psychology
Quasi-experiments show up wherever random assignment is off the table. Developmental psychologists use them to compare children of different ages, since you obviously can’t randomly assign a child to be five or fifteen. Clinical psychologists studying the effects of trauma compare people who have experienced a traumatic event with those who haven’t, without being able to randomly assign trauma. Educational psychologists evaluate new teaching methods by comparing schools or classrooms that adopted a program with those that didn’t.
They’re also the go-to design for studying the psychological impact of policy changes. When a state raises the minimum age for purchasing alcohol, or a city bans certain advertising, researchers can compare outcomes before and after the change, or between regions with and without the policy. These “natural experiments” are a subset of quasi-experiments in which some external event or decision, rather than the researcher, determines who is exposed to the variable of interest.
Strengths and Limitations at a Glance
The core strength of a quasi-experiment is practicality. It lets researchers study questions that would be impossible or unethical to investigate with random assignment, and it does so in real-world settings with real-world populations, including groups often excluded from tightly controlled trials. Results from quasi-experiments tend to reflect what actually happens when an intervention is rolled out under normal conditions, which is exactly the kind of evidence policymakers and practitioners need.
The core limitation is the inability to firmly establish cause and effect. Without randomization, there is always room for the argument that some other variable, not the treatment, produced the result. This doesn’t make quasi-experiments useless. It means their findings are strongest when multiple studies, using different groups and different designs, point in the same direction. A single quasi-experiment is a useful piece of evidence. A pattern of consistent results across several of them starts to look a lot like causation.