A recurrence relation is a mathematical equation that defines each term in a sequence using one or more previous terms. Instead of giving you a direct formula for the 50th number in a sequence, it tells you how to calculate any number from the ones that came before it. The Fibonacci sequence is the most famous example: each number equals the sum of the two before it (0, 1, 1, 2, 3, 5, 8, 13…). That simple rule, combined with two starting values, generates the entire infinite sequence.
The Two Parts Every Recurrence Relation Needs
A recurrence relation has two essential components: the recursive rule and the initial conditions. The recursive rule is the equation itself, describing how to compute the next term. The initial conditions are the starting values that anchor the sequence so it can actually begin producing numbers.
Consider the Fibonacci sequence. The recursive rule is F(n) = F(n-1) + F(n-2), meaning “add the two previous terms.” But that rule alone isn’t enough. To find F(2), you need F(1) and F(0), so you must be told those values up front: F(0) = 0 and F(1) = 1. Without initial conditions, there’s no way to start computing, and the sequence is undefined. A first-order recurrence (one that only looks back one step) needs one initial condition. A second-order recurrence like Fibonacci (looking back two steps) needs two.
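A minimal sketch of how the rule and the initial conditions work together, computing the terms iteratively rather than by literal recursion:

```python
def fib(n):
    # Initial conditions anchor the sequence: F(0) = 0, F(1) = 1.
    if n < 2:
        return n
    # Recursive rule: F(n) = F(n-1) + F(n-2), applied step by step.
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```

Without the `if n < 2` branch supplying the two initial conditions, the loop would have nothing to start from, which mirrors the point above: the rule alone doesn't define the sequence.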
Types of Recurrence Relations
Recurrence relations are classified along a few dimensions, and knowing the type tells you which solving techniques will work.
Linear vs. non-linear. A recurrence is linear if each previous term appears only to the first power and isn’t multiplied by another term in the sequence. The Fibonacci relation F(n) = F(n-1) + F(n-2) is linear. Something like x(n) = x(n-1) · x(n-2) is non-linear, and generally much harder to solve.
Homogeneous vs. non-homogeneous. A linear recurrence is homogeneous if the current term depends only on previous terms with no extra function tacked on. For example, a(n) = 3·a(n-1) + 2·a(n-2) is homogeneous. If you add a function of n that doesn’t involve any sequence terms, like a(n) = 3·a(n-1) + 2·a(n-2) + 5n, it becomes non-homogeneous. That extra piece (the 5n) is what makes solving it more involved.
Order. The order of a recurrence is how far back it reaches. If each term depends only on the immediately previous term, it’s first-order. The Fibonacci sequence is second-order because it looks back two steps. Higher-order recurrences exist but are less common in introductory courses.
Solving a Recurrence Relation
Solving a recurrence relation means finding a closed-form formula that lets you calculate any term directly, without computing all the terms before it. If you have a recurrence that defines the millionth term in terms of the 999,999th, you’d rather have a formula that gives you the millionth term in one shot.
Iteration (Repeated Substitution)
The most intuitive method is to keep substituting the recurrence into itself until a pattern emerges. Take T(n) = 4·T(n/2) + n. You can expand it step by step:
- T(n) = 4·T(n/2) + n
- T(n) = 4·(4·T(n/4) + n/2) + n = 16·T(n/4) + 2n + n
- T(n) = 64·T(n/8) + 4n + 2n + n
At each step, the pattern becomes clearer. You keep expanding until you reach the base case (typically T(1), where no more recursion happens). Then you express the whole thing as a summation and simplify. This method is straightforward but can involve heavy algebra for complex recurrences.
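The expansion can be checked numerically. Assuming a base case of T(1) = 1 (not stated above) and n restricted to powers of two, summing the expansion gives 4^k + n·(2^k − 1) = 2n² − n for n = 2^k:

```python
def T(n):
    # Assumed base case: T(1) = 1; n must be a power of two.
    if n == 1:
        return 1
    return 4 * T(n // 2) + n   # the recursive rule T(n) = 4*T(n/2) + n

def T_closed(n):
    # The summation from the expansion simplifies to 2n^2 - n for n = 2^k.
    return 2 * n * n - n

for n in [1, 2, 4, 8, 16, 64]:
    assert T(n) == T_closed(n)
```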
The Characteristic Equation
For linear homogeneous recurrences with constant coefficients, there’s a more systematic approach. You assume the solution has the form r^n (where r is an unknown constant), substitute it into the recurrence, and divide through to get a polynomial equation in r. The roots of that polynomial give you the building blocks of the solution.
For a second-order recurrence like x(n) = a·x(n-1) + b·x(n-2), this produces a quadratic equation. If it has two distinct roots r₁ and r₂, the general solution is x(n) = C₁·r₁^n + C₂·r₂^n, where C₁ and C₂ are constants determined by plugging in the initial conditions. This is exactly how the closed-form formula for the Fibonacci sequence is derived.
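A sketch of that derivation for Fibonacci: the characteristic equation of F(n) = F(n-1) + F(n-2) is r² = r + 1, whose roots are (1 ± √5)/2, and the initial conditions F(0) = 0, F(1) = 1 fix the constants to ±1/√5 (this is Binet's formula):

```python
import math

def fib_closed_form(n):
    # Roots of the characteristic equation r^2 = r + 1:
    sqrt5 = math.sqrt(5)
    r1 = (1 + sqrt5) / 2   # the golden ratio
    r2 = (1 - sqrt5) / 2
    # Plugging F(0) = 0 and F(1) = 1 into C1*r1^n + C2*r2^n gives
    # C1 = 1/sqrt(5), C2 = -1/sqrt(5). Rounding absorbs float error.
    return round((r1 ** n - r2 ** n) / sqrt5)
```

The formula jumps straight to any term: no loop over earlier values is needed.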
Why Recurrence Relations Matter in Computer Science
Recurrence relations are the standard tool for analyzing how long recursive algorithms take to run. Any time an algorithm solves a problem by breaking it into smaller subproblems, calling itself on each one, and then combining the results, its running time naturally follows a recurrence.
For example, merge sort splits a list in half, sorts each half recursively, and then merges the results. Its running time satisfies T(n) = 2·T(n/2) + n: two recursive calls on half-sized inputs, plus n units of work to merge. Solving this recurrence shows that T(n) grows proportionally to n·log(n), which is why merge sort is described as an “n log n” algorithm.
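The structure of that recurrence is visible in a straightforward implementation, where the two recursive calls and the linear merge step map directly onto the terms of T(n) = 2·T(n/2) + n:

```python
def merge_sort(xs):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    # Two recursive calls on half-sized inputs: the 2*T(n/2) term.
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Linear merge of the two sorted halves: the "+ n" term.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```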
The Master Theorem provides a shortcut for recurrences of the form T(n) = a·T(n/b) + f(n), where a is the number of recursive calls, n/b is the size of each subproblem, and f(n) is the work done outside the recursion. It compares the growth rate of f(n) against a critical threshold. If the recursive calls dominate, the solution is determined by the recursion depth. If the outside work dominates, that determines the answer. If they’re balanced, you multiply by a logarithmic factor. This covers the vast majority of divide-and-conquer algorithms you’d encounter in practice.
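The three-way comparison can be sketched in code. This is the simplified polynomial form of the theorem, assuming the outside work is exactly f(n) = n^d; the function name and output strings are illustrative, not a standard API:

```python
import math

def master_theorem(a, b, d):
    # T(n) = a*T(n/b) + n^d: compare d against the critical
    # exponent log_b(a) set by the recursion tree.
    crit = math.log(a, b)
    if abs(d - crit) < 1e-9:
        return f"Theta(n^{d} log n)"   # balanced: extra log factor
    if d < crit:
        return f"Theta(n^{crit:g})"    # recursive calls dominate
    return f"Theta(n^{d})"             # outside work dominates
```

For merge sort (a = 2, b = 2, d = 1), the two sides balance and the result is the familiar n log n; for binary search (a = 1, b = 2, d = 0), the same balanced case yields plain log n.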
Real-World Examples Beyond Algorithms
Recurrence relations show up anywhere a process unfolds in discrete steps, with each step depending on the previous one.
Compound interest. If your bank account earns 11% annual interest, the balance each year is 1.11 times the previous year’s balance: a(n) = 1.11·a(n-1). This is a first-order linear recurrence. With an initial deposit as your starting value, you can solve it to get the familiar compound interest formula.
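A quick sketch of both forms side by side, using the 11% rate from above and an illustrative principal; the two functions agree because the closed form is the solved recurrence:

```python
def balance_recurrence(principal, years, rate=0.11):
    # a(n) = 1.11 * a(n-1), stepped forward one year at a time.
    a = principal
    for _ in range(years):
        a *= 1 + rate
    return a

def balance_closed_form(principal, years, rate=0.11):
    # Solving the recurrence gives a(n) = principal * 1.11^n.
    return principal * (1 + rate) ** years
```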
Population growth. In biology, discrete-generation models track population sizes using recurrence relations. The simplest version is N(t+1) = a·N(t), where a is the average number of offspring per individual. This produces exponential growth, which isn’t realistic long-term. A more refined version is the discrete logistic growth model: N(t+1) = b·(1 – N(t)/M)·N(t), where M is the environment’s carrying capacity. As the population approaches M, the growth rate slows. This non-linear recurrence can produce surprisingly complex behavior, including oscillations and chaos, depending on the value of b.
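The logistic model has no neat closed form, but it is trivial to run forward step by step (the "numerical computation" approach discussed below). A minimal sketch, with illustrative parameter values:

```python
def logistic(n0, b, M, steps):
    # Discrete logistic model: N(t+1) = b * (1 - N(t)/M) * N(t).
    history = [n0]
    for _ in range(steps):
        n = history[-1]
        history.append(b * (1 - n / M) * n)
    return history
```

For modest b the population settles at the fixed point M·(1 − 1/b); as in the classic logistic map, period-doubling oscillations appear past b = 3 and chaotic behavior near b ≈ 3.57.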
Counting problems. Many combinatorics problems are naturally recursive. The number of ways to tile a 2×n rectangle with dominoes, the number of ways to climb n stairs taking one or two steps at a time, and the number of binary strings of length n with no consecutive ones all satisfy simple recurrence relations. In each case, you break the problem into cases based on what happens at the last step, and each case reduces to a smaller version of the same problem.
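The stair-climbing count illustrates the case analysis: the last move was either a single step (leaving a smaller instance of size n−1) or a double step (leaving size n−2), so ways(n) = ways(n−1) + ways(n−2):

```python
def climb_stairs(n):
    # ways(n) = ways(n-1) + ways(n-2), split on the final move.
    # Base cases: one way to stand still, one way to climb a single stair.
    if n <= 1:
        return 1
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b
```

The result is a Fibonacci-shifted sequence: for 4 stairs there are 5 ways (1111, 112, 121, 211, 22).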
Recurrence Relations vs. Closed-Form Formulas
A recurrence relation defines a sequence implicitly: it tells you the relationship between terms. A closed-form formula defines it explicitly: plug in n, get the answer. Both describe the same sequence, but they serve different purposes.
Recurrence relations are often easier to write down because they mirror the natural structure of a problem. When you see that each year’s bank balance is last year’s balance plus interest, that’s a recurrence. Converting it to a closed-form formula (balance = principal × 1.11^n) requires solving the recurrence, but the result lets you jump straight to any year without calculating all the intermediate ones.
Not every recurrence relation has a neat closed-form solution. Non-linear recurrences, in particular, can be extremely difficult or impossible to solve in closed form. In those cases, you might rely on numerical computation (just running the recurrence forward step by step) or on approximation techniques that describe the solution’s long-term behavior without giving an exact formula.