The law of effect is a principle in psychology stating that behaviors followed by satisfying outcomes are more likely to be repeated, while behaviors followed by unpleasant outcomes are less likely to be repeated. First formally stated by psychologist Edward Thorndike in 1905, on the basis of animal experiments he had begun in the late 1890s, it became one of the most influential ideas in the history of behavioral science and laid the groundwork for virtually everything we now understand about reinforcement and learning.
The Core Idea
Thorndike’s original formulation is straightforward: when an animal or person makes several different responses in the same situation, the responses followed by satisfaction become more strongly connected to that situation. When the situation comes up again, those satisfying responses are more likely to happen again. Responses followed by discomfort, on the other hand, become more weakly connected to the situation and are less likely to recur. The greater the satisfaction or discomfort, the stronger or weaker the bond becomes.
What makes this idea powerful is the mechanism it proposes. Thorndike wasn’t saying that animals consciously think “that worked, I’ll do it again.” Instead, he argued that a satisfying outcome automatically strengthens the connection between a situation and a response. The outcome itself isn’t stored as part of that connection. The behavior simply becomes more tightly wired to the circumstances that triggered it. This was a radical departure from earlier theories that assumed learning required conscious reasoning or insight.
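The mechanism can be made concrete with a tiny simulation. The response names, strengths, and the fixed increment below are illustrative assumptions, not Thorndike's own formalism; the point is only that the bond between situation and response changes, while the outcome itself is never stored:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def choose_response(strengths):
    """Pick a response with probability proportional to its connection strength."""
    total = sum(strengths.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for response, strength in strengths.items():
        cumulative += strength
        if r <= cumulative:
            return response
    return response  # floating-point edge case: fall back to the last response

def apply_law_of_effect(strengths, response, satisfying, delta=1.0):
    """Strengthen or weaken the situation-response bond based on the outcome.
    Only the bond changes; the outcome itself is not recorded anywhere."""
    if satisfying:
        strengths[response] += delta
    else:
        strengths[response] = max(0.1, strengths[response] - delta)

# One situation, three candidate responses, all initially equal.
strengths = {"pull_loop": 1.0, "claw_walls": 1.0, "bite_bars": 1.0}
for _ in range(50):
    response = choose_response(strengths)
    apply_law_of_effect(strengths, response, satisfying=(response == "pull_loop"))

# After repeated trials, the satisfying response dominates the others.
```

Nothing in this loop reasons about outcomes; selection is blind, and satisfaction mechanically tilts future selection, which is the "automatic strengthening" Thorndike had in mind.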
Thorndike’s Puzzle Box Experiments
The law of effect didn’t come from abstract theorizing. It came from watching cats try to escape wooden boxes. Thorndike built a series of “puzzle boxes,” each requiring a specific action to open the door: pulling a wire loop, pressing a lever, or performing a sequence of movements. A hungry cat was placed inside, with food visible outside. Then Thorndike timed how long it took the cat to escape on each successive attempt.
The results were remarkably consistent. Cat 12, placed in a box that required pulling a wire loop, started at around 160 seconds to escape and dropped to just 6 seconds over 24 trials. The decline was rapid and fairly steady. This pattern was typical for boxes requiring a single response. Cat 4, placed in a box requiring three distinct responses in sequence, showed much slower and more erratic progress across roughly 117 trials over seven days.
What Thorndike did not see was a sudden “aha” moment where a cat figured out the mechanism. Instead, escape times declined gradually, with some variability, as the successful response grew stronger through repeated satisfaction (getting out and eating) while unsuccessful responses (clawing at the walls, biting the bars) faded. This gradual stamping-in of successful behavior, rather than a flash of understanding, was exactly what the law of effect predicted.
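The same kind of toy model reproduces that gradual decline. Here "escape time" is approximated by the number of responses tried before the successful one occurs, a stand-in (my assumption, not Thorndike's measure) for his stopwatch timings:

```python
import random

random.seed(7)  # fixed seed for reproducibility

def run_trial(strengths, correct="pull_loop", increment=1.0):
    """Emit responses (weighted by strength) until the correct one occurs.
    Returns the number of attempts; the successful response is strengthened."""
    attempts = 0
    while True:
        attempts += 1
        total = sum(strengths.values())
        r = random.uniform(0, total)
        cumulative = 0.0
        for response, strength in strengths.items():
            cumulative += strength
            if r <= cumulative:
                chosen = response
                break
        if chosen == correct:
            strengths[correct] += increment  # satisfaction stamps the bond in
            return attempts

strengths = {"pull_loop": 1.0, "claw_walls": 1.0, "bite_bars": 1.0, "push_door": 1.0}
attempts_per_trial = [run_trial(strengths) for _ in range(24)]
# Early trials wander through many attempts; later trials approach a single
# attempt, mirroring the gradual (not sudden) improvement Thorndike observed.
```

There is no trial at which the model "gets it"; the curve flattens only because the correct bond keeps accumulating strength, which is exactly the stamping-in account.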
The Law of Exercise
Thorndike originally paired the law of effect with a companion principle called the law of exercise. The law of exercise held that responses practiced more frequently in a given situation become better learned, essentially a “practice makes perfect” idea. Repetition alone, in this view, could strengthen the bond between a stimulus and a response.
Over time, Thorndike revised his own thinking. He came to believe that repetition without a satisfying consequence didn’t do much on its own. You can repeat something endlessly, but if the outcome is neutral or unpleasant, the behavior won’t stick. The law of effect turned out to be the more fundamental principle, and it’s the one that shaped the field going forward.
How It Shaped Modern Psychology
The law of effect is essentially the ancestor of operant conditioning, the framework developed by B.F. Skinner in the mid-20th century. Skinner replaced Thorndike’s language of “satisfaction” and “discomfort” with more precisely defined terms like reinforcement and punishment, but the underlying logic is the same: consequences shape future behavior. Every time you give a dog a treat for sitting on command, use a rewards program to motivate employees, or praise a child for finishing homework, you’re applying the law of effect.
In education, the principle shows up whenever teachers design activities so that correct responses lead to positive feedback. Gamified learning apps operate on the same foundation: get the answer right, hear a pleasant chime, earn points, and you’re more likely to keep practicing. Workplace management leans on it too. Performance bonuses, public recognition, and promotion systems all function by attaching satisfying outcomes to desired behaviors. Conversely, penalties and corrective feedback aim to weaken the connection between a situation and an unwanted response.
Even habit formation in daily life follows this pattern. The reason you keep checking your phone is that past checks have been intermittently rewarded with interesting notifications. The reason you avoid a restaurant where you once got food poisoning is that the unpleasant outcome weakened your connection between hunger and that particular place.
Where the Law Falls Short
Thorndike’s original version had a notable asymmetry problem. He initially claimed that satisfaction and discomfort were mirror images, equally powerful in strengthening or weakening behavior. Later research, including some of Thorndike’s own follow-up work, showed this isn’t quite right. Punishment (discomfort) often suppresses a behavior temporarily without truly erasing the underlying connection. A punished behavior can come roaring back once the threat of punishment is removed. Reward, by contrast, produces more durable learning.
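One way to picture the asymmetry is a model in which punishment adds a temporary suppression term instead of erasing the learned strength. The variables and numbers here are assumptions for illustration only:

```python
def effective_strength(learned, suppression):
    """The tendency to respond: the learned bond minus any active suppression."""
    return max(0.0, learned - suppression)

# A well-learned response with a strong underlying bond.
learned = 10.0
suppression = 0.0

# Punishment suppresses the behavior while the threat is present...
suppression = 9.0
while_punished = effective_strength(learned, suppression)

# ...but the underlying connection survives untouched, so removing the
# threat lets the behavior come roaring back at full strength.
suppression = 0.0
after_punishment = effective_strength(learned, suppression)

# Reward works differently in this picture: it changes `learned` itself,
# which is why reward-based learning is the more durable of the two.
```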
The law also treats the learner as essentially passive, a creature whose behavior is mechanically stamped in or stamped out by consequences. It doesn’t account for expectation, curiosity, or insight. Humans (and even some animals) can learn by watching others, by imagining outcomes, or by suddenly restructuring their understanding of a problem. These cognitive dimensions of learning were largely invisible in Thorndike’s framework, which is why later psychologists moved beyond strict stimulus-response models.
There’s also the question of timing. The law of effect works best when consequences are immediate. The longer the gap between a behavior and its outcome, the weaker the connection. This is why it’s hard to learn from consequences that are days, weeks, or years away, like the health effects of diet or exercise, even though the law of effect would predict that those outcomes should shape behavior.
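The timing problem is often sketched as the strengthening power of a consequence decaying with delay. The exponential form and the time constant below are illustrative assumptions, not a claim about the true shape of the decay:

```python
import math

def strengthening(reward, delay_seconds, tau=5.0):
    """How much a consequence strengthens the bond, discounted by its delay.
    tau is an arbitrary illustrative time constant, in seconds."""
    return reward * math.exp(-delay_seconds / tau)

immediate = strengthening(1.0, delay_seconds=0)        # full effect
short_gap = strengthening(1.0, delay_seconds=5)        # noticeably weaker
long_gap = strengthening(1.0, delay_seconds=60 * 60)   # effectively zero
```

Under any curve of this general shape, a consequence an hour away contributes almost nothing to the bond, which is why outcomes that arrive days or years later barely shape behavior on their own.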
Why It Still Matters
Despite its limitations, the law of effect remains one of the most reliable principles in behavioral science. It accurately predicts behavior across species, from pigeons pecking keys to people scrolling social media. Therapists use it as the basis of behavior modification programs. Animal trainers rely on it entirely. App designers, whether they know Thorndike’s name or not, build products around the principle that rewarded actions get repeated.
The simplicity of the idea is part of its staying power. Behaviors that produce good outcomes persist. Behaviors that produce bad outcomes fade. Over a century after Thorndike watched cats fumble their way out of wooden boxes, that core insight remains one of the most practically useful things psychology has ever produced.

