What Is Conditioning in Psychology? Types and Examples

Conditioning is a type of learning that links a trigger or stimulus to a behavioral response. It’s one of the most fundamental concepts in psychology, explaining how humans and animals develop new behaviors, habits, emotional reactions, and preferences. There are two main forms: classical conditioning, where you learn to associate two stimuli with each other, and operant conditioning, where you learn from the consequences of your actions.

Classical Conditioning: Learning Through Association

Classical conditioning happens when your brain links something neutral to something that already triggers a natural response. The most famous example comes from Ivan Pavlov, a Russian physiologist who stumbled onto the concept in the 1890s while studying digestion in dogs. Pavlov noticed that his dogs began salivating not just when food arrived, but when they heard the footsteps of the lab assistant who brought it. The dogs’ brains had linked the sound of footsteps to the arrival of food, creating a learned response where none existed before.

This process works through four components. The unconditioned stimulus is the thing that naturally triggers a reaction: food makes a dog salivate, a loud noise makes a baby cry. The unconditioned response is that automatic reaction. Then there’s a neutral stimulus, something that initially produces no reaction at all, like a bell or a particular image. When the neutral stimulus is repeatedly paired with the unconditioned stimulus, the brain begins treating them as connected. At that point, the neutral stimulus becomes a conditioned stimulus, and the learned reaction it now triggers is the conditioned response.
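The acquisition process above can be sketched numerically. The snippet below uses a Rescorla-Wagner-style update, a standard textbook formalization of classical conditioning that is not described in the article itself; the learning rate and asymptote values are illustrative, not fitted to any experiment.

```python
# Rescorla-Wagner-style acquisition: the associative strength v of the
# conditioned stimulus climbs toward the maximum (lam) on every trial
# where it is paired with the unconditioned stimulus.
def acquisition(trials, alpha=0.3, lam=1.0):
    v = 0.0                      # associative strength starts at zero
    history = []
    for _ in range(trials):
        v += alpha * (lam - v)   # learn in proportion to the prediction error
        history.append(round(v, 3))
    return history

print(acquisition(5))  # strength rises quickly at first, then levels off
```

The gap between the current strength and the maximum shrinks on every pairing, which reproduces the familiar learning curve: fast early gains, diminishing returns later.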

A striking demonstration came from John Watson’s 1920 experiment at Johns Hopkins University. Watson and his graduate student Rosalie Rayner exposed a 9-month-old baby, referred to as “Little Albert,” to a white rat, which the baby happily played with. Then they began making a loud, startling noise behind the baby’s head each time the rat appeared. After several rounds, Albert became fearful of the rat even without the noise. The fear generalized to other furry objects that had once been sources of curiosity. The experiment showed that emotional responses, not just physical reflexes like salivation, could be conditioned.

Operant Conditioning: Learning From Consequences

Operant conditioning works differently. Instead of pairing two stimuli before a behavior, it changes behavior by altering what happens afterward. If the consequence of an action is favorable, you’re more likely to repeat it. If it’s unpleasant, you’re less likely to do it again. This framework was developed primarily by B.F. Skinner, building on earlier work by Edward Thorndike.

Operant conditioning uses four tools, and the terminology trips people up because “positive” and “negative” don’t mean “good” and “bad.” Positive means adding something, negative means removing something. Reinforcement increases a behavior, punishment decreases it. That gives you four combinations:

  • Positive reinforcement: Adding something desirable to encourage a behavior. A dog gets a treat for sitting on command.
  • Negative reinforcement: Removing something unpleasant to encourage a behavior. Your car’s seatbelt alarm stops buzzing once you buckle up, making you more likely to buckle up quickly next time.
  • Positive punishment: Adding something unpleasant to discourage a behavior. A child touches a hot stove and feels pain, reducing the chance they’ll touch it again.
  • Negative punishment: Removing something desirable to discourage a behavior. A teenager loses phone privileges after breaking curfew.
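Since the four combinations reduce to two yes/no questions (was something added or removed, and did the behavior increase or decrease), they can be encoded in a tiny helper function; the function name is purely illustrative.

```python
def operant_quadrant(stimulus_added: bool, behavior_increases: bool) -> str:
    """Classify an operant-conditioning outcome from two questions:
    was a stimulus added or removed, and did the behavior go up or down?"""
    kind = "positive" if stimulus_added else "negative"
    effect = "reinforcement" if behavior_increases else "punishment"
    return f"{kind} {effect}"

# The seatbelt alarm stops (something removed) and buckling up increases:
print(operant_quadrant(stimulus_added=False, behavior_increases=True))
# → negative reinforcement
```

Working through each example in the list this way is a quick check that "positive" and "negative" are being read as add/remove rather than good/bad.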

Why Timing and Frequency Matter

How often a behavior gets reinforced shapes how strong and persistent it becomes. Psychologists have identified four schedules of reinforcement, and each produces different patterns of behavior.

A fixed-ratio schedule delivers reinforcement after a set number of responses. A factory worker who earns a bonus for every 50 units produced is on a fixed-ratio schedule. This tends to produce high rates of behavior because the reward depends entirely on effort. A variable-ratio schedule also ties reinforcement to the number of responses, but the exact number changes unpredictably. Slot machines work this way: you know a payout is coming eventually, but not when, which keeps you pulling the lever. Variable-ratio schedules produce the most persistent behavior and are the hardest to break.

Fixed-interval schedules deliver reinforcement after a consistent amount of time. A biweekly paycheck is a fixed-interval reward. People on this schedule tend to slow down right after reinforcement and ramp up as the next one approaches. Variable-interval schedules reinforce after unpredictable time periods. If your boss drops by at random times to check your work, you’ll maintain a steadier pace because you can never predict the next check-in.

Partial reinforcement, where a behavior is reinforced only some of the time, actually creates stronger habits than reinforcing every single instance. This is why behaviors learned on variable schedules are so resistant to extinction.
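The contrast between the two ratio schedules can be sketched as reward rules applied to a stream of responses. The ratio of 50 echoes the factory-worker example above; everything else here (function names, the range of the random intervals, the seed) is illustrative.

```python
import random

def fixed_ratio(n, ratio=50):
    """Reward every `ratio`-th response: fully predictable."""
    return [(i % ratio == 0) for i in range(1, n + 1)]

def variable_ratio(n, mean_ratio=50, seed=0):
    """Reward after an unpredictable number of responses that averages
    `mean_ratio` — the slot-machine schedule."""
    rng = random.Random(seed)
    rewards = []
    next_payout = rng.randint(1, 2 * mean_ratio - 1)
    for i in range(1, n + 1):
        if i == next_payout:
            rewards.append(True)
            next_payout = i + rng.randint(1, 2 * mean_ratio - 1)
        else:
            rewards.append(False)
    return rewards

# Over 1,000 responses both schedules pay out a similar number of times,
# but only the fixed schedule lets you predict exactly when.
print(sum(fixed_ratio(1000)), sum(variable_ratio(1000)))
```

The payoff rates are comparable; what differs is predictability, and that unpredictability is what makes variable-ratio behavior so persistent.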

Extinction and Spontaneous Recovery

When reinforcement or the paired stimulus stops, conditioned behaviors gradually weaken. This process is called extinction. If Pavlov’s dogs heard the bell repeatedly without food ever arriving, they would eventually stop salivating at the sound. If a child’s tantrums no longer result in getting candy at the store, the tantrums will decline over time.

But extinction doesn’t erase the original learning. It layers new learning on top of it. The brain essentially learns that the old association no longer holds, but the original memory remains underneath. This is why spontaneous recovery happens: a conditioned response that seemed fully extinguished can resurface after time passes. A person who overcame a fear of dogs through gradual exposure might find the fear flickers back months later when encountering an unfamiliar dog. The reemergence is typically weaker than the original response, and it fades again quickly if the stimulus continues without reinforcement.

One explanation for why this happens is that extinction is the second-learned association, making it more fragile and context-dependent than the original learning. The brain treats the first experience as the default and the correction as the exception.
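The "second-learned association" idea can be turned into a toy model: keep the original excitatory association intact, let extinction build a separate inhibitory association on top of it, and let that fragile inhibition partially fade with time. All rates here are illustrative, and this is a sketch of the layering idea from the text, not a validated model of spontaneous recovery.

```python
# Toy model of extinction as new learning layered on old learning.
# The original association v is preserved; extinction builds a separate
# inhibitory association i; the observed response is v - i.
def extinction_with_recovery(acq_trials=10, ext_trials=10, alpha=0.3, decay=0.5):
    v = 0.0
    for _ in range(acq_trials):        # acquisition: stimulus paired with outcome
        v += alpha * (1.0 - v)
    i = 0.0
    for _ in range(ext_trials):        # extinction: stimulus alone builds inhibition
        i += alpha * (v - i)
    after_extinction = v - i           # response looks almost fully extinguished
    i *= decay                         # time passes; the fragile inhibition fades
    after_delay = v - i                # part of the response resurfaces,
    return after_extinction, after_delay  # though weaker than the original v

ext_resp, recovered = extinction_with_recovery()
print(ext_resp < recovered)  # the response reappears after the delay
```

Because only the inhibitory layer decays, the model reproduces the pattern described above: near-zero responding right after extinction, a weaker-than-original resurgence later.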

What Happens in the Brain

At a biological level, conditioning involves changes in how brain cells communicate with each other. When two neurons fire together repeatedly, the connection between them strengthens, a process called long-term potentiation. This selective strengthening of connections that are activated together is essentially the cellular version of associative learning. It occurs across multiple brain regions, including areas involved in memory, emotion, and movement coordination. Fear conditioning, for instance, relies heavily on the brain’s threat-detection circuitry, which is why conditioned fear responses can be so rapid and automatic.
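The "fire together, wire together" rule behind long-term potentiation can be sketched as a minimal Hebbian update, where a connection weight grows only on trials when both units are active. The activity patterns and learning rate are illustrative, and this is a caricature of the cellular process, not a neuron model.

```python
# Minimal Hebbian sketch: the weight between two units strengthens
# only on trials where both are active at the same time.
def hebbian_weight(pre_activity, post_activity, rate=0.1):
    w = 0.0
    for pre, post in zip(pre_activity, post_activity):
        w += rate * pre * post   # nonzero only when both units fire
    return w

paired   = hebbian_weight([1, 1, 1, 1], [1, 1, 1, 1])  # always co-active
unpaired = hebbian_weight([1, 0, 1, 0], [0, 1, 0, 1])  # never co-active
print(paired, unpaired)  # the paired connection strengthens; the unpaired one stays at zero
```

Co-activation is exactly what repeated stimulus pairing produces, which is why this selective strengthening is described as the cellular version of associative learning.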

Conditioning in Everyday Life

Conditioning isn’t confined to labs and textbooks. It shapes behavior constantly. Advertising relies heavily on classical conditioning: Nike’s “Just Do It” campaigns repeatedly pair the brand with images of people pushing through challenges to achieve greatness. The goal is to condition positive emotional associations with the brand so that consumers feel motivated or empowered when they see the logo. Research suggests this kind of evaluative conditioning can increase brand awareness, brand positivity, and even sales, though changing attitudes toward brands people already have strong feelings about is harder.

Your phone habits are a textbook case of variable-ratio reinforcement. Checking social media is rewarded unpredictably, sometimes with interesting content, sometimes with nothing, which is precisely the schedule that produces compulsive, persistent checking. The notification sound itself becomes a conditioned stimulus that triggers a little jolt of anticipation, a classically conditioned response.

Modern therapy also traces its roots to conditioning principles. Exposure therapy, one of the most effective treatments for phobias, panic disorder, and agoraphobia, is essentially a structured extinction procedure. By repeatedly facing a feared stimulus in a safe environment without the expected negative outcome, the conditioned fear response weakens. Cognitive-behavioral therapy grew directly out of applying conditioning principles to clinical problems starting in the 1950s, and it remains one of the most widely practiced and researched forms of psychotherapy today.

The Key Difference Between the Two Types

The simplest way to keep classical and operant conditioning straight: classical conditioning is about what happens before a behavior. You learn to anticipate something based on signals in your environment. Operant conditioning is about what happens after a behavior. You learn to repeat or avoid actions based on their outcomes. Both are constantly at work, often simultaneously. A child who gets a cookie (operant reinforcement) from a jar shaped like a bear may also develop warm feelings toward bears (classical conditioning). Understanding both helps explain not just how habits form, but why they can be so difficult to break.