A fixed ratio schedule is a rule in operant conditioning where a reward is delivered only after a specific, unchanging number of responses. If the ratio is set at five, the reward comes after every fifth response, no exceptions. A car salesperson who must sell five cars to earn each bonus is working under a fixed ratio schedule. It is one of the four main reinforcement schedules that psychologists use to describe how the timing and frequency of rewards shape behavior.
How a Fixed Ratio Schedule Works
The core idea is simple: the number of responses required for each reward stays constant. Psychologists abbreviate these schedules as “FR” followed by the number. An FR-1 schedule rewards every single response (this is also called continuous reinforcement). An FR-10 schedule requires ten responses before a reward appears. The organism, whether it’s a rat pressing a lever or a factory worker assembling products, must complete the set number of actions to receive anything.
What makes fixed ratio schedules distinct from other reinforcement schedules is that the reward depends entirely on output, not on time. Interval schedules reward the first response after a certain amount of time has passed. Ratio schedules reward a certain amount of work. And because the number is fixed rather than random, the organism quickly learns exactly how much effort each reward costs.
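The counting rule behind an FR-N schedule is simple enough to sketch in a few lines of code. This is an illustrative model only (the class and method names are invented for this example, not standard terminology):

```python
class FixedRatioSchedule:
    """Minimal model of an FR-N schedule: a reward is delivered
    after every N responses, with no timing component at all."""

    def __init__(self, ratio: int):
        self.ratio = ratio   # responses required per reward (the "N" in FR-N)
        self.count = 0       # responses made since the last reward

    def respond(self) -> bool:
        """Register one response; return True if it earns the reward."""
        self.count += 1
        if self.count == self.ratio:
            self.count = 0   # the counter resets after each reward
            return True
        return False

# An FR-5 schedule rewards exactly every fifth response.
fr5 = FixedRatioSchedule(5)
rewards = [fr5.respond() for _ in range(10)]
# Only responses 5 and 10 are rewarded; all others earn nothing.
```

Note that an FR-1 schedule falls out of the same rule: with `ratio=1`, every single response is rewarded, which is continuous reinforcement.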
The “Break and Run” Pattern
Fixed ratio schedules produce one of the most recognizable patterns in behavioral psychology. After receiving a reward, the organism pauses before starting to work toward the next one. Then, once it resumes, it responds at a high, steady rate until the next reward arrives. Researchers call this the “break and run” pattern: a break (the pause), followed by a run (a burst of rapid, consistent responding that continues until the reward is earned).
The pause after each reward is called the post-reinforcement pause, and it’s a defining feature of fixed ratio behavior. Several factors seem to drive it. One explanation is straightforward: consuming the reward itself temporarily reduces motivation. If the reward is food, eating it creates a brief feeling of fullness that makes the next round of effort less appealing for a moment. Another explanation involves a kind of push and pull in the brain. The just-received reward has a mildly inhibiting effect, while cues associated with the upcoming reward have an exciting, motivating effect. The pause lasts until the pull toward the next reward overcomes the satisfied feeling from the last one.
This pause tends to grow longer as the ratio gets larger. An FR-5 schedule produces shorter pauses than an FR-50 schedule. When the required effort feels manageable, the organism gets back to work quickly. When the required effort is high, the pause stretches out because the next reward feels far away.
What Happens in the Brain
Dopamine, the brain chemical most closely tied to motivation and reward-seeking, plays a central role in fixed ratio performance. Research on animals with reduced dopamine activity in the brain’s reward center shows two clear effects: they take longer to start working after receiving a reward (a longer post-reinforcement pause), and they actually press faster once they do start. This suggests dopamine doesn’t simply control how fast you work. It controls how quickly you re-engage with effort after a reward. The “wanting” to begin again depends heavily on dopamine, while the mechanical act of responding, once started, can persist even when dopamine is low.
This finding helps explain why motivation can feel so uneven under fixed ratio conditions. The hardest part isn’t doing the work. It’s starting the next round.
Fixed Ratio vs. Variable Ratio
The most common comparison is between fixed and variable ratio schedules. In a variable ratio schedule, the reward still depends on the number of responses, but that number changes unpredictably. A slot machine is the classic example: it pays out based on pulls, but you never know which pull will hit.
Variable ratio schedules produce fast, steady responding without the post-reinforcement pause. Because the organism can’t predict when the next reward will arrive, there’s no logical point to take a break. Fixed ratio schedules, by contrast, always produce that pause because the organism knows exactly where it stands in the count.
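The contrast between the two schedules comes down to whether the response requirement is a constant or a random draw. A rough Python sketch (the function name and the uniform distribution are illustrative choices, not taken from any particular study):

```python
import random

def variable_ratio_requirements(mean_ratio: int, n_rewards: int, seed: int = 0):
    """Sketch of a VR schedule: each reward requires a random number of
    responses whose long-run average is mean_ratio."""
    rng = random.Random(seed)
    # Draw each requirement uniformly from 1 to (2 * mean_ratio - 1),
    # so the expected requirement per reward equals mean_ratio.
    return [rng.randint(1, 2 * mean_ratio - 1) for _ in range(n_rewards)]

# FR-5: the cost of every reward is known in advance.
fixed = [5] * 6
# VR-5: the average cost is 5, but any single reward might take
# anywhere from 1 to 9 responses -- the organism cannot predict which.
variable = variable_ratio_requirements(5, 6)
```

Under the fixed list, the organism always knows where it stands in the count, which is exactly what makes the post-reinforcement pause possible; under the variable list, the next response might always be the winning one.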
One area where the two schedules are more similar than many textbooks suggest is in how quickly behavior disappears once rewards stop entirely. Recent experimental evidence found no significant difference in extinction rates between fixed and variable ratio groups. Both groups decreased their responding at similar rates when rewards were removed. The popular claim that variable ratio schedules always produce more persistent behavior may be less clear-cut than traditionally taught.
Everyday Examples
Fixed ratio schedules appear throughout daily life, often in workplaces. Piece-rate pay is the clearest example: a garment worker paid for every ten shirts sewn is on an FR-10 schedule. Sales commissions structured around a set number of deals work the same way. Loyalty punch cards (“buy nine coffees, get the tenth free”) are consumer-facing fixed ratio schedules.
In education, token economies often use fixed ratio logic. A teacher might give a student a sticker for every three math problems completed. The student learns that a predictable amount of effort produces a predictable reward, which can be effective for building work habits. The trade-off is that the post-reinforcement pause shows up here too. Students may take a break right after earning their reward before re-engaging with the next set of problems.
Video games use fixed ratio mechanics when they grant a power-up or item after a set number of actions, like defeating exactly five enemies. Players quickly learn the ratio and push through the required actions efficiently, but they often pause briefly after collecting the reward before diving back in.
Strengths and Limitations
The biggest advantage of a fixed ratio schedule is that it generates high response rates. Because the reward is directly tied to output, there's a strong incentive to work quickly. The faster you complete the required responses, the faster you earn the reward. This makes fixed ratio schedules effective in situations where the goal is to maximize output or task completion.
The main limitation is the built-in pause. In a workplace context, that pause translates to predictable dips in productivity right after each payout or milestone. Managers sometimes try to combat this by increasing the ratio (requiring more output per reward), but pushing the ratio too high creates a phenomenon called ratio strain, where the effort required feels so large relative to the reward that the organism slows down dramatically or stops responding altogether. Finding the right ratio is a balancing act: high enough to be efficient, low enough to keep motivation intact.
Like other partial reinforcement schedules, FR schedules also produce behavior that's more resistant to extinction than continuous reinforcement. If you've been rewarded for every single response and rewards suddenly stop, you notice immediately. If you've been working through sets of ten or twenty responses per reward, the absence of a reward after one set doesn't feel as jarring, and you're more likely to keep going for a while before giving up.

