A variable interval (VI) schedule is a pattern of reinforcement in which a reward becomes available after an unpredictable amount of time has passed. Unlike a fixed schedule, where the timing is always the same, a variable interval schedule varies the wait time around an average. A VI-30 schedule, for example, means a reward becomes available every 30 seconds on average, but any individual interval might be 10 seconds, 45 seconds, or anywhere in between. This unpredictability is what makes the schedule powerful: it produces steady, consistent behavior because there’s no way to predict exactly when the next reward is coming.
How a Variable Interval Schedule Works
The key mechanic is simple. A timer runs in the background, and once a set (but unpredictable) period elapses, the next correct response earns a reward. Before that timer runs out, responding doesn’t produce anything. After it does, the very next response gets reinforced. Then a new, different interval begins.
What matters here is that the person or animal can’t game the system by responding faster. Unlike ratio schedules, where more responses mean faster rewards, interval schedules tie reinforcement to the passage of time. You just need to check in often enough to catch the reward once it’s available. This creates a moderate, steady rate of responding rather than the rapid bursts you’d see when every response counts toward a goal.
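The mechanic described above can be sketched as a small simulation. This is an illustrative sketch, not an implementation from any particular source; the class name, method names, and the uniform interval draw are all invented for the example.

```python
import random

class VariableIntervalSchedule:
    """Illustrative sketch of a VI schedule.

    A hidden timer is drawn around a mean interval; the first response
    after the timer elapses earns a reward, then a new interval begins.
    """

    def __init__(self, mean_interval, rng=None):
        self.mean_interval = mean_interval
        self.rng = rng or random.Random()
        self._arm_timer(start_time=0.0)

    def _arm_timer(self, start_time):
        # Draw the next interval uniformly around the mean, so a VI-30
        # schedule might yield 15 seconds one time and 45 the next.
        self.available_at = start_time + self.rng.uniform(
            0.5 * self.mean_interval, 1.5 * self.mean_interval)

    def respond(self, t):
        """Return True only for the first response after the hidden
        timer has elapsed; responses before that earn nothing."""
        if t >= self.available_at:
            self._arm_timer(start_time=t)
            return True
        return False
```

Because rewards are gated by the hidden timer rather than by response count, a responder who checks in every second and one who checks in every five seconds end up with nearly the same number of rewards over a long session; all the extra responses of the fast responder go unreinforced.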
Why It Produces Steady Behavior
On a fixed interval schedule, where the reward always arrives after the same amount of time, a predictable pattern emerges. Behavior slows down right after a reward and gradually picks up as the next reward window approaches. Psychologists call this a “scalloped” response pattern. You can picture a student who studies hard the night before a weekly quiz, then barely opens a textbook the day after.
Variable interval schedules eliminate that pattern. Because there’s no way to predict when the next opportunity will appear, the most effective strategy is to keep responding at a consistent pace. There’s no rational reason to pause or to speed up. Research consistently shows that interval schedules produce slower response rates than ratio schedules (where every Nth response is rewarded), partly because longer pauses between responses actually increase the probability that the next response will be reinforced. In other words, the schedule itself quietly rewards a more measured pace.
Everyday Examples
Variable interval schedules are surprisingly common in daily life, even though most people never think of them in those terms.
- Checking email or social media. Messages arrive at unpredictable times. You never know whether refreshing your inbox will reveal something new. Because the “reward” (a new message) appears on an irregular schedule, you tend to check in at a fairly steady rate throughout the day.
- Pop quizzes. A teacher who gives unannounced quizzes is using a variable interval schedule. One week you might get two quizzes, then go two full weeks without one. Because you can’t predict when the next quiz will land, the most effective strategy is to stay consistently prepared.
- A boss checking your work. If your supervisor drops by your desk at random times during the day, you tend to maintain a steady work pace. You can’t predict the next visit, so there’s no safe window to slack off.
- Fishing. A fish bites at unpredictable intervals. You can’t speed up the process by casting more frantically, but you do need your line in the water when the opportunity comes. This keeps anglers casting at a steady, patient rate.
The common thread in all these examples is unpredictable timing paired with a reward that requires you to “check” or respond at the right moment.
Variable Interval vs. Other Schedules
Reinforcement schedules fall into four basic types based on two dimensions: whether the requirement is time-based (interval) or response-based (ratio), and whether the requirement is predictable (fixed) or unpredictable (variable).
- Fixed interval (FI): Reward available after the same time period every time. Produces the scalloped pattern of slow-then-fast responding.
- Variable interval (VI): Reward available after unpredictable time periods. Produces a moderate, steady rate of responding.
- Fixed ratio (FR): Reward delivered after a set number of responses (like a loyalty punch card). Produces fast bursts of behavior with pauses after each reward.
- Variable ratio (VR): Reward delivered after an unpredictable number of responses (like a slot machine). Produces the highest and most consistent response rates of all four schedules.
The critical distinction between VI and VR schedules is what the reward depends on. On a variable ratio, responding faster genuinely earns rewards faster. On a variable interval, speed doesn’t help. You just need to respond at least once after each invisible timer runs out. That’s why variable ratio schedules drive higher response rates while variable interval schedules drive moderate but remarkably stable ones.
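The difference can be made concrete by simulating both schedules side by side. The parameters here are illustrative assumptions: a VI schedule with a hidden timer averaging 30 seconds, and a VR schedule requiring around 30 responses per reward, each varied around its mean.

```python
import random

def simulate(schedule, rate, horizon=10000.0, seed=1):
    """Rewards earned over `horizon` seconds on a toy 'VI' or 'VR'
    schedule by a responder making `rate` responses per second."""
    rng = random.Random(seed)
    dt = 1.0 / rate
    t, rewards = 0.0, 0
    if schedule == "VI":
        # Time-based: reward the first response after a hidden timer.
        mean = 30.0
        available_at = rng.uniform(0.5 * mean, 1.5 * mean)
        while t < horizon:
            t += dt
            if t >= available_at:
                rewards += 1
                available_at = t + rng.uniform(0.5 * mean, 1.5 * mean)
    else:
        # "VR": response-based: reward every Nth response, N varying
        # around 30.
        needed = rng.randint(15, 45)
        count = 0
        while t < horizon:
            t += dt
            count += 1
            if count >= needed:
                rewards += 1
                count = 0
                needed = rng.randint(15, 45)
    return rewards
```

Doubling the response rate roughly doubles the payoff on the variable ratio schedule but barely changes it on the variable interval schedule, which is exactly why VR drives speed while VI drives steadiness.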
Resistance to Extinction
One of the most practically important features of variable interval schedules is how persistent the behavior becomes once it’s learned. “Extinction” is what happens when reinforcement stops entirely: the behavior gradually fades. Behavior trained on any variable schedule tends to resist extinction much longer than behavior trained on a fixed schedule.
The logic is intuitive. If you’ve always been rewarded on a predictable schedule and the reward suddenly stops, the change is obvious and immediate. But if rewards have always been unpredictable, a stretch without reinforcement feels normal. You’ve experienced dry spells before, and rewards always eventually came. So you keep going. Research published in the Journal of the Experimental Analysis of Behavior found that when variable interval schedules are trained in separate blocks, leaner schedules (ones with less frequent reinforcement) actually produce greater resistance to extinction than richer ones, a phenomenon that parallels the well-known partial reinforcement extinction effect: the less predictable and frequent the reward, the harder the habit is to break.
How Therapists and Trainers Use It
In applied behavior analysis, therapists use variable interval schedules to maintain behaviors over the long term. For instance, research with fourth-grade students found that delivering social reinforcement (praise, acknowledgment) on a variable interval schedule effectively sustained academic engagement in the classroom. The students stayed on task because positive attention could arrive at any moment, but they weren’t dependent on constant praise to keep working.
Animal trainers use the same principle. During initial training, rewards come frequently and predictably to establish a new behavior. Once the behavior is solid, shifting to a variable interval schedule makes it durable. The animal continues performing reliably because it has learned that rewards come, just not on a clock. This transition from frequent reinforcement to a leaner variable schedule is one of the most effective strategies for building habits that last without requiring continuous rewards.
The same logic applies to self-management. If you’re trying to build a habit like exercising or studying, arranging occasional, unpredictable rewards for the behavior (rather than rewarding yourself every single time) can make the habit stickier and less dependent on external motivation.