Behavioral shaping is a technique for teaching a new behavior by reinforcing small, progressive steps toward it rather than waiting for the complete behavior to appear on its own. It’s one of the core tools in operant conditioning, the branch of psychology built on the idea that behaviors are learned through their consequences. If you reward closer and closer versions of what you ultimately want, the learner gradually arrives at the target behavior.
The technique works with children, adults, and animals alike. It’s used in classrooms, therapy settings, animal training, and everyday parenting, often without people realizing they’re applying a formal psychological principle.
How Successive Approximations Work
The central mechanism of shaping is something psychologists call successive approximations. You start by reinforcing a behavior that only loosely resembles what you’re aiming for. Once that behavior becomes consistent, you raise the bar. Now you only reinforce a version that’s a step closer to the goal. You keep tightening the criteria until the learner is performing the full target behavior.
A classic example from autism therapy illustrates this clearly. If a therapist is teaching a child with limited verbal language to say “mommy,” the first reinforced response might just be the sound “mmm.” Once the child reliably produces that, the therapist stops reinforcing “mmm” and only reinforces “ma.” Then “mama.” Then “mommy.” At each stage, the previous approximation is no longer enough to earn reinforcement, which pushes the behavior forward.
This process depends on something called differential reinforcement: selectively reinforcing the responses you want while withholding reinforcement for ones that no longer meet the current standard. The combination of reward and extinction at each step is what drives the behavior to evolve. Without it, the learner would have no reason to move beyond the earliest approximation.
Breaking Down the Process
Shaping typically follows a predictable sequence, whether it’s being used formally in a therapy program or informally by a dog trainer.
- Define the target behavior. Be specific about what the final result looks like. “Raising a hand before speaking” is a shapeable target. “Being more polite” is not.
- Identify a starting point. Find a behavior the learner already does that bears some resemblance to the goal. This is your first approximation.
- Map out the steps. Break the distance between the starting behavior and the target into manageable increments. Each step should be close enough to the previous one that the learner can realistically get there.
- Choose a reinforcer. Pick something meaningful to the learner: praise, a treat, a token, access to a preferred activity.
- Reinforce and raise the bar. Reward the current approximation consistently, then shift your criteria to the next step once the learner is performing it reliably.
- Track progress. Collect data on how the behavior is changing so you can tell whether your steps are too big, too small, or on track.
The program continues until the learner demonstrates the full target behavior. If progress stalls, it usually means the jump between two steps is too large and needs an intermediate approximation inserted.
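The loop described above — reinforce the current approximation, then raise the criterion once it's reliable — can be sketched as a small simulation. Everything here is a hypothetical illustration, not a clinical protocol: the learner model (`respond`), the mastery threshold, and the step list are all invented for the example.

```python
import random

def shape(steps, respond, mastery=5, max_trials=500):
    """Simulate shaping via successive approximations.

    steps:   ordered approximations, e.g. ["mmm", "ma", "mama", "mommy"]
    respond: hypothetical learner model; given the current criterion index,
             returns the index of the behavior the learner actually emits
    mastery: consecutive reinforced trials required before raising the bar
    """
    criterion, streak = 0, 0
    for trial in range(1, max_trials + 1):
        emitted = respond(criterion)
        # Differential reinforcement: only responses that meet the current
        # criterion earn reinforcement; earlier approximations are now on
        # extinction, which is what pushes the behavior forward.
        reinforced = emitted >= criterion
        streak = streak + 1 if reinforced else 0
        if streak >= mastery:
            if criterion == len(steps) - 1:
                return trial                      # full target behavior mastered
            criterion, streak = criterion + 1, 0  # raise the bar to the next step
    return None  # never finished: the jumps between steps may be too large

# A noisy learner that usually performs at the current criterion but
# sometimes slips back to the previous approximation.
rng = random.Random(0)
noisy = lambda c: c if rng.random() < 0.8 else max(c - 1, 0)
trials = shape(["mmm", "ma", "mama", "mommy"], respond=noisy)
```

Note that a perfectly reliable learner still needs at least 20 trials here (five reinforced trials at each of four criteria), which mirrors the article's point: shaping trades speed for a near-continuous stream of success.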
Where Shaping Is Used
Shaping has its deepest roots in applied behavior analysis (ABA), where it’s used extensively with individuals on the autism spectrum. The Association for Science in Autism Treatment describes shaping procedures as well-established and widely researched, effective for increasing a variety of skills including communication, social interaction, and daily living tasks. Because many of these skills don’t exist in a learner’s repertoire at all, shaping provides a way to build them from scratch rather than relying on instructions or imitation alone.
Outside clinical settings, shaping is everywhere. Animal trainers use it to teach complex tricks: a dolphin doesn’t learn a backflip in one session, but through dozens of reinforced approximations. Teachers use it when they praise a struggling student for partial answers on the way to complete ones. Parents use it when they celebrate a toddler’s first wobbly steps before expecting a full walk across the room. Sports coaches shape athletic skills by reinforcing progressively better form.
The principle also shows up in self-directed behavior change. If your goal is to run five miles but you currently don’t exercise, starting with a ten-minute walk and gradually increasing distance and intensity is shaping applied to yourself. You’re reinforcing each approximation (with satisfaction, a logged achievement, or a reward) on the way to the final target.
How Shaping Differs From Chaining
People often confuse shaping with another behavioral technique called chaining, but they solve different problems. Shaping refines a single behavior by reinforcing gradually better versions of it. Chaining links a sequence of separate behaviors into a complete routine.
Think of it this way: teaching a child to say the word “water” is shaping. You reinforce “w,” then “waa,” then “water,” each time improving one behavior. Teaching a child to wash their hands is chaining. You’re connecting a series of distinct steps (turn on faucet, wet hands, apply soap, scrub, rinse, dry) into a routine where completing each step cues the next one.
The key distinction is that shaping produces a novel behavior through progressive refinement, while chaining combines behaviors that are already learned (or can be taught individually) into a functional sequence. Both techniques use reinforcement, but they structure it differently. In shaping, reinforcement follows increasingly precise versions of one response. In chaining, each step in the sequence acts as both a signal to perform the next step and a reward for completing the previous one.
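The structural difference is easy to see in code. In contrast to a shaping loop, which tightens the criterion on one response, a chain just executes discrete steps in order, with each completion cueing the next. The function, step names, and `perform` callback below are illustrative assumptions, not a standard procedure:

```python
def run_chain(steps, perform):
    """Execute a behavior chain: completing each step serves as the cue for
    the next step and the reinforcer for the one just finished. `perform`
    is a hypothetical callable that attempts one step and reports success."""
    for i, step in enumerate(steps):
        if not perform(step):
            return steps[i:]  # chain broke here; retraining targets this link
    return []                 # the whole routine ran to completion

handwashing = ["turn on faucet", "wet hands", "apply soap",
               "scrub", "rinse", "dry"]
remaining = run_chain(handwashing, perform=lambda step: True)
```

Unlike shaping, nothing about any individual step is refined here; the steps stay intact, and only their sequencing is trained.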
Why Shaping Works
The power of shaping comes from the fact that it never asks the learner to do something they can’t currently do. Each step is only slightly more demanding than the last, which keeps frustration low and success rates high. The learner experiences a near-continuous stream of reinforcement, which maintains motivation throughout the process.
This stands in contrast to approaches that wait for the complete target behavior and then reinforce it. For complex or entirely new behaviors, that approach can mean waiting indefinitely. A child who has never spoken won’t spontaneously say “mommy” just because the adults around them would praise it. Shaping solves this by meeting the learner where they are and building from there.
B.F. Skinner, who developed much of the theoretical framework for operant conditioning beginning in the 1930s, demonstrated the power of reinforcement contingencies through his pigeon experiments. In one well-known 1948 study, pigeons developed distinctive repetitive behaviors (turning in circles, head-tossing, side-to-side hopping) simply because those actions happened to occur just before food was delivered on a fixed schedule. Skinner described these as “superstitious” behaviors, arguing that the accidental pairing of action and food reinforced the responses. While that study focused on accidental reinforcement rather than deliberate shaping, it illustrated the same underlying principle: behaviors that are followed by positive consequences become more likely to recur.
Deliberate shaping harnesses this principle with precision, directing it toward a specific goal. By controlling which approximations earn reinforcement and which don’t, you can guide behavior toward outcomes that would never emerge by chance alone.