The inductive hypothesis is an assumption you make during a proof by mathematical induction: you assume that a statement is true for some particular case, then use that assumption to prove the statement holds for the next case. It’s the engine that makes induction work, connecting one case to the next like dominoes falling in sequence.
Mathematical induction is a technique for proving that a statement holds for every natural number (or every element in a recursively defined structure). The inductive hypothesis is one piece of that technique, but it’s the piece students find most confusing, because it feels like you’re assuming what you’re trying to prove. You’re not. Here’s how it fits together.
How Induction Works in Three Parts
A proof by induction has three components. First, the base case: you directly verify the statement is true for a starting value, usually n = 0 or n = 1. Second, the inductive hypothesis: you assume the statement is true when n = k, for an arbitrary k at or above the starting value. Third, the inductive step: using that assumption, you prove the statement is also true when n = k + 1.
The logic works like this. The base case confirms the statement for n = 1. The inductive step says “if it’s true for any k, it’s true for k + 1.” So since it’s true for 1, it must be true for 2. Since it’s true for 2, it must be true for 3. And so on, forever. The inductive hypothesis is the “if” part of that chain. You temporarily assume the statement holds at step k so you can use it as a building block to reach step k + 1.
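The domino chain can be illustrated (though never replaced) by a small numerical check. The sketch below is ours, not part of any standard library: a real proof justifies the jump from k to k + 1 algebraically, whereas this function simply walks the chain and confirms each case.

```python
def verify_by_induction(prop, base_n, max_n):
    """Mechanically walk the induction pattern up to max_n.

    This is an illustration, not a proof: it checks finitely many
    cases, while a proof's inductive step covers every k at once.
    """
    assert prop(base_n), "base case failed"
    for k in range(base_n, max_n):
        # In a real proof, this jump is licensed by the inductive
        # step; here we just confirm P(k + 1) holds numerically.
        assert prop(k + 1), f"failed at n = {k + 1}"
    return True

# Example property P(n): 2**n > n, anchored at n = 1.
verify_by_induction(lambda n: 2 ** n > n, 1, 100)
```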
Why Assuming P(k) Isn’t Circular
The most common stumbling point is this: “Aren’t we assuming the thing we’re trying to prove?” No. You’re not assuming the statement is true for all numbers. You’re proving a conditional: if P(k) is true, then P(k + 1) is true. That conditional can be valid regardless of whether P(k) actually is true, the same way “if it’s raining, the ground is wet” is a valid statement even on a sunny day. The base case is what anchors the chain to reality by confirming P(1) is actually true.
This is also why skipping the base case is dangerous. You can sometimes carry out a perfectly valid inductive step, proving that P(k) implies P(k + 1), while the statement itself is false for every number. One classic classroom example involves beans: suppose you start with 8 beans, double your count each midnight, and eat 5 each morning. You can prove that if your bean count ends in a 5 at noon on day n, it will end in a 5 at noon on day n + 1 (doubling a number ending in 5 gives one ending in 0, then subtracting 5 gives one ending in 5 again). The inductive step is perfectly sound. But you started with 8 beans, not a number ending in 5, so the base case fails and the conclusion never gets off the ground.
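The bean scenario is easy to simulate. The snippet below (our own sketch of the scenario) confirms both halves of the argument: the step preserves "ends in 5," yet starting from 8 no count ever ends in 5.

```python
def beans_next_day(count):
    """One day of beans: double at midnight, eat 5 in the morning."""
    return count * 2 - 5

# The inductive step is sound: a count ending in 5 stays ending in 5.
for count in [5, 15, 25, 105]:
    assert beans_next_day(count) % 10 == 5

# But the base case fails: starting from 8, no day's count ends in 5.
count = 8
for day in range(10):
    assert count % 10 != 5
    count = beans_next_day(count)
```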
A Concrete Example
Say you want to prove that the sum of the first n positive integers equals n(n + 1)/2. Here’s how the inductive hypothesis shows up in practice.
Base case: When n = 1, the sum is just 1, and 1(1 + 1)/2 = 1. It checks out.
Inductive hypothesis: Assume the formula holds for n = k. That is, assume 1 + 2 + 3 + … + k = k(k + 1)/2.
Inductive step: Now show it holds for n = k + 1. The sum 1 + 2 + … + k + (k + 1) can be rewritten as (1 + 2 + … + k) + (k + 1). By the inductive hypothesis, the chunk in parentheses equals k(k + 1)/2. Substituting that in and simplifying gives k(k + 1)/2 + (k + 1) = (k + 1)(k + 2)/2, which is exactly the formula with n = k + 1 plugged in.
The substitution in the middle of that algebra is the key moment. That’s where the inductive hypothesis actually does its job: it lets you replace a complicated sum with a tidy formula so you can manipulate the expression and arrive at your goal.
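Both the formula and the inductive step's algebra can be spot-checked numerically. This is a sanity check under our chosen naming (`formula` is not from the text), not a substitute for the proof:

```python
def formula(n):
    """Closed form for 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# Check the formula directly for many n.
for n in range(1, 1000):
    assert sum(range(1, n + 1)) == formula(n)

# The inductive step's algebra: if the sum up to k equals
# formula(k), then adding (k + 1) yields formula(k + 1).
for k in range(1, 1000):
    assert formula(k) + (k + 1) == formula(k + 1)
```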
Weak Induction vs. Strong Induction
The version described above is sometimes called “weak” induction. In weak induction, the hypothesis assumes the statement holds only for n = k, a single value. Strong induction (also called complete induction) makes a broader assumption: you assume the statement holds for every integer from the base case up through k, not just k itself.
Cornell’s CS department puts it simply. In weak induction, you’re standing on one step and using it to reach the next. In strong induction, you’re standing on all the steps you’ve already climbed.
Strong induction is useful when proving P(k + 1) requires you to look back more than one step. For example, proving properties of the Fibonacci sequence often requires knowing both P(k) and P(k − 1), since each Fibonacci number depends on the two before it. In those cases, the strong inductive hypothesis, “assume the statement holds for all integers from the base case through k,” gives you everything you need.
Weak and strong induction are logically equivalent. Anything provable with one can be proved with the other. The difference is purely practical: sometimes one version makes the algebra or logic much cleaner.
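The Fibonacci case can be made concrete with a standard textbook bound, F(n) < 2^n (our choice of example; the text only says Fibonacci proofs need to look back two steps). Notice how each step of the check consumes both of the two previous values, mirroring the strong hypothesis:

```python
def fib_bound_holds(max_n):
    """Check F(n) < 2**n for n up to max_n, stepping like strong induction."""
    a, b = 1, 1                      # F(1), F(2): two base cases
    if not (a < 2 ** 1 and b < 2 ** 2):
        return False
    for n in range(3, max_n + 1):
        # The step needs both previous values -- the strong hypothesis:
        # F(n) = F(n-1) + F(n-2) < 2**(n-1) + 2**(n-2) < 2**n.
        a, b = b, a + b              # b is now F(n)
        if not b < 2 ** n:
            return False
    return True

assert fib_bound_holds(200)
```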
Structural Induction in Computer Science
Induction isn’t limited to numbers. In computer science, structural induction applies the same idea to recursively defined data structures like lists and trees. The inductive hypothesis takes a slightly different form here.
For lists, the base case proves a property holds for the empty list. The inductive hypothesis assumes the property holds for some list of a given length, and the inductive step proves it still holds when you add one more element to the front. For trees, the base case covers individual leaf nodes, and the inductive hypothesis assumes the property holds for each of a node's subtrees so you can prove it holds when those subtrees are combined under a new root node.
The underlying logic is identical to numerical induction. You prove the simplest structure works, assume an arbitrary structure works, then show that building a slightly bigger structure preserves the property.
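Here is a small sketch of structural induction on trees, using a classic property we've chosen for illustration: in a full binary tree (every node is a leaf or has exactly two children), the number of leaves is one more than the number of internal nodes. The recursive calls play the role of the inductive hypothesis.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tree:
    """A full binary tree: a node is a leaf or has exactly two subtrees."""
    left: Optional["Tree"] = None
    right: Optional["Tree"] = None

def leaves(t):
    if t.left is None:      # base case: a leaf has one leaf (itself)
        return 1
    # Structural inductive step: the recursive calls are the
    # hypothesis -- assume they count each subtree correctly.
    return leaves(t.left) + leaves(t.right)

def internal(t):
    if t.left is None:      # base case: a leaf has no internal nodes
        return 0
    return 1 + internal(t.left) + internal(t.right)

# Spot-check the property "leaves == internal + 1" on a few shapes.
leaf = Tree()
t = Tree(Tree(leaf, leaf), leaf)
assert leaves(t) == internal(t) + 1
```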
Proving Algorithms Correct
The inductive hypothesis plays a direct role in verifying that algorithms do what they’re supposed to do. For recursive algorithms, the connection is natural: a recursive function calls itself on a smaller input, and the inductive hypothesis is the assumption that the function works correctly on that smaller input. You then prove that the function handles the current input correctly given that assumption.
For example, proving that an insertion sort algorithm correctly sorts a list works like this. The base case: inserting into an empty list trivially produces a sorted list. The inductive hypothesis: assume the algorithm correctly sorts any list of length n. The inductive step: show that when the algorithm processes a list of length n + 1, it sorts the first n elements correctly (by the hypothesis) and then inserts the last element into its proper position.
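A recursive insertion sort makes that proof structure visible in code. This sketch is one possible implementation (the helper name `insert` is ours): the recursive call is exactly the inductive hypothesis from the proof.

```python
def insert(x, sorted_list):
    """Insert x into an already sorted list, keeping it sorted."""
    for i, y in enumerate(sorted_list):
        if x <= y:
            return sorted_list[:i] + [x] + sorted_list[i:]
    return sorted_list + [x]

def insertion_sort(xs):
    # Base case: the empty list is trivially sorted.
    if not xs:
        return []
    # Inductive hypothesis: the recursive call correctly sorts
    # the first n elements of a list of length n + 1.
    # Inductive step: insert the last element into its proper place.
    return insert(xs[-1], insertion_sort(xs[:-1]))

assert insertion_sort([3, 1, 2]) == [1, 2, 3]
```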
For algorithms that loop rather than recurse, the same idea appears in a slightly different form called a loop invariant. A loop invariant is a property that stays true before and after every pass through the loop. Proving it works involves four steps: define the invariant, show it holds before the loop starts (initialization, analogous to the base case), show that if it holds before one iteration it still holds after (maintenance, which is the inductive step), and show the loop eventually stops (termination). The maintenance step is where the inductive hypothesis lives: you assume the invariant is true at the start of iteration k and prove it remains true at the start of iteration k + 1.
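The four-step pattern can be sketched with a running-sum loop. The `assert` inside the loop is only an illustration of the maintenance step, checking the invariant numerically on each pass rather than proving it for all inputs:

```python
def total(xs):
    running = 0
    # Invariant: before iteration i, running == sum of xs[:i].
    # Initialization: before the loop, running = 0 = sum of xs[:0].
    for i, x in enumerate(xs):
        assert running == sum(xs[:i])   # maintenance, checked per pass
        running += x
    # Termination: the loop ends after len(xs) passes, and the
    # invariant at that point gives running == sum of all of xs.
    return running

assert total([4, 7, 1]) == 12
```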
Common Mistakes With the Inductive Hypothesis
The most frequent error is forgetting the base case entirely, which, as the bean example showed, can produce a valid-looking proof of a completely false statement. A second common mistake is assuming too much: accidentally assuming P(k + 1) is true (the thing you’re trying to prove) instead of just P(k). That actually is circular reasoning, and it makes the proof invalid.
A subtler mistake is not using the hypothesis at all. If your proof of P(k + 1) never references the assumption that P(k) is true, something has gone wrong. Either you’ve proven the statement directly (which means you didn’t need induction in the first place) or you’ve skipped a step. The whole point of the inductive hypothesis is to serve as a bridge, and a bridge you never cross isn’t doing anything.