Which Experiments Involve Operant Conditioning?

If you’re looking at a list of experiments and trying to identify which one uses operant conditioning, the key is simple: operant conditioning is any experiment where consequences (rewards or punishments) are used to change a voluntary behavior. The classic answer is B.F. Skinner’s experiments with rats pressing levers and pigeons pecking keys inside what became known as the Skinner Box. If your list includes Thorndike’s puzzle box with cats, that also counts. If it includes Pavlov’s dogs salivating to a bell or Watson’s Little Albert experiment, those are classical conditioning.

The Core Rule for Identifying Operant Conditioning

Operant conditioning changes behavior through what happens after the behavior. The animal or person does something voluntarily, and then a consequence follows that makes the behavior more or less likely to happen again. Classical conditioning works the other way around: it changes behavior by pairing stimuli that come before the response, and the response itself is usually involuntary, like salivating or flinching.

So when you’re scanning a list of experiments, ask yourself: is the subject doing something on purpose, and is the experimenter manipulating what happens next? If yes, it’s operant conditioning. Is the subject reacting automatically to a paired stimulus? That’s classical conditioning.

Experiments That Use Operant Conditioning

The most frequently tested example is Skinner’s operant conditioning chamber, often called the Skinner Box. In these experiments, a rat was placed inside a small chamber containing a lever. When the rat pressed the lever, it received a food pellet. Over time, the rat pressed the lever more frequently because the behavior was followed by a reward. Skinner also ran the same setup with pigeons, which pecked an illuminated disk to receive food. He published this foundational work in 1938 in The Behavior of Organisms, and it became the basis for decades of behavioral research.

Skinner chose lever-pressing and key-pecking deliberately because these are simple, repeatable actions. He measured the rate of responding as an indicator of how strongly the animal had learned the behavior. From these experiments, he developed reinforcement schedules (fixed-ratio, variable-ratio, fixed-interval, variable-interval) that describe how the timing and frequency of rewards affect behavior patterns.
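The logic of these schedules can be sketched as simple reward rules. This is a toy illustration, not Skinner’s apparatus: the function names, parameter values, and the 100-press example are all invented here for clarity.

```python
import random

random.seed(0)  # make the sketch reproducible

def fixed_ratio(press_count, n=5):
    """FR-n: every n-th response is rewarded."""
    return press_count % n == 0

def variable_ratio(mean_n=5):
    """VR-n: each response has a 1-in-n chance of reward, averaging one per n."""
    return random.random() < 1 / mean_n

def fixed_interval(elapsed_seconds, interval=30.0):
    """FI: the first response after a fixed interval is rewarded."""
    return elapsed_seconds >= interval

def variable_interval(elapsed_seconds, mean_interval=30.0):
    """VI: like FI, but the required wait varies around a mean."""
    return elapsed_seconds >= random.expovariate(1 / mean_interval)

# On a fixed-ratio 5 schedule, 100 lever presses earn exactly 20 pellets:
pellets = sum(fixed_ratio(p) for p in range(1, 101))
print(pellets)  # 20
```

Note the contrast the schedules create: ratio schedules tie reward to the number of responses, while interval schedules tie it to elapsed time, which is why they produce different response patterns.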

Thorndike’s puzzle box experiments, conducted even earlier, in 1898, are also operant conditioning. Thorndike placed cats inside wooden boxes that could be opened by pressing a latch or pulling a string. When a cat performed the correct action, it escaped the box and received food. With repeated trials, the cats escaped faster and faster. Thorndike called this the Law of Effect: behaviors followed by a satisfying outcome are strengthened, while behaviors followed by discomfort are weakened. This principle directly laid the groundwork for Skinner’s later work.

A classroom study published in the Journal of Applied Behavior Analysis demonstrated operant conditioning in humans. Researchers found that when a teacher used praise for appropriate behavior and disapproval for talking out of turn or turning around in seats, those disruptive behaviors dropped substantially in a class of 25 high school students. A control class of 26 students taught by the same teacher showed no change. This is operant conditioning because the consequences (praise and disapproval) were applied after the students’ voluntary behaviors.

Experiments That Are Not Operant Conditioning

Pavlov’s dogs are the textbook example of classical conditioning. Pavlov rang a bell before giving dogs food. After repeated pairings, the dogs began salivating at the sound of the bell alone. The salivation was an involuntary reflex, and the bell was a stimulus that preceded the behavior rather than a consequence that followed it.

The Little Albert experiment, conducted by John B. Watson and Rosalie Rayner in 1920, is also classical conditioning. They paired a loud, startling noise with the presence of a white rat. Eventually, the infant “Little Albert” showed a fear response to the rat alone. The fear was an automatic emotional reaction, not a voluntary behavior being shaped by consequences.

How to Tell Them Apart on a Test

Look for these signals in the experiment description:

  • Operant conditioning: The subject performs a voluntary action (pressing, pecking, studying, behaving a certain way), and something is added or removed afterward to change the frequency of that action.
  • Classical conditioning: Two stimuli are paired together repeatedly until the subject begins responding to the new stimulus the same way it responded to the original one. The response is typically automatic or reflexive.

If the experiment mentions reinforcement, punishment, rewards, or consequences following a behavior, it’s operant. If it mentions pairing stimuli, conditioned responses, or involuntary reactions like fear or salivation, it’s classical.

The Four Types of Consequences in Operant Experiments

Understanding these categories helps you recognize operant conditioning even when the experiment isn’t a famous one. Operant conditioning uses four tools, and any experiment applying one of them qualifies.

  • Positive reinforcement: Adding something pleasant after a behavior to increase it. A rat gets a food pellet for pressing a lever.
  • Negative reinforcement: Removing something unpleasant after a behavior to increase it. A loud noise stops when a rat presses a lever.
  • Positive punishment: Adding something unpleasant after a behavior to decrease it. A dog receives a spray of water for jumping on the counter.
  • Negative punishment: Removing something pleasant after a behavior to decrease it. A child loses screen time for breaking a rule.
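The four categories reduce to a two-by-two grid: was a stimulus added or removed, and did the behavior increase or decrease? A minimal sketch of that grid (the function name is invented here as a mnemonic):

```python
def consequence_type(stimulus_added: bool, behavior_increases: bool) -> str:
    """Classify an operant consequence on the add/remove x increase/decrease grid."""
    kind = "reinforcement" if behavior_increases else "punishment"
    sign = "positive" if stimulus_added else "negative"
    return f"{sign} {kind}"

print(consequence_type(True, True))    # positive reinforcement (food pellet for lever press)
print(consequence_type(False, True))   # negative reinforcement (noise stops on lever press)
print(consequence_type(True, False))   # positive punishment (water spray for jumping up)
print(consequence_type(False, False))  # negative punishment (screen time removed)
```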

All four change behavior by manipulating consequences. That’s the defining feature. Whether the experiment involves animals in a lab chamber, students in a classroom, or a child learning to tie their shoes through encouragement, if consequences are shaping voluntary behavior, you’re looking at operant conditioning.

Shaping: A Special Operant Technique

Some exam questions describe an experiment where a complex behavior is built gradually, step by step. This technique is called shaping, and it falls squarely under operant conditioning. The experimenter reinforces behaviors that get progressively closer to the target. To train a dog to sit, you might first reward it for standing still, then only reward it when it begins to lower its body, and finally only when it sits completely. Each step raises the bar slightly. Shaping is how researchers trained pigeons to play ping-pong and how therapists help people build new habits, from exercise routines to social skills. If you see an experiment describing this incremental reward process, it’s operant conditioning.
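The incremental criterion at the heart of shaping can be mimicked in a toy simulation: reinforce any attempt that meets the current bar, then raise the bar. Every name and number below is invented for illustration; higher numbers stand in for postures closer to a full sit.

```python
def shape(attempts, target=10, step=2):
    """Reinforce attempts that meet the current criterion, raising it each time."""
    criterion = 0
    reinforced = []
    for attempt in attempts:
        if attempt >= criterion:                      # close enough to the current bar
            reinforced.append(attempt)                # this approximation gets rewarded
            criterion = min(target, criterion + step) # raise the bar slightly
    return reinforced, criterion

# A dog's posture over trials (10 = fully sitting):
history, bar = shape([1, 0, 3, 2, 5, 7, 6, 8, 10])
print(history)  # [1, 3, 5, 7, 8, 10] — only ever-closer approximations were rewarded
print(bar)      # 10 — the criterion has reached the full target behavior
```

Attempts that fall below the current criterion (the 0, 2, and 6 in the example) go unrewarded, which mirrors how a trainer stops rewarding earlier, cruder approximations once the learner can do better.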