What Is Recursive in Math? Definition and Examples

In math, recursive means defining something in terms of itself. A recursive formula calculates each new value by referring back to one or more previous values, rather than computing the answer from scratch. You’ll encounter recursion most often in sequences, where each term depends on the term before it, but the concept shows up across many areas of mathematics, from factorials to fractals.

The Two Parts of Every Recursive Definition

A recursive definition always needs two pieces to work. The first is a base case: at least one starting value that’s stated outright, not calculated from anything else. The second is a recursive step: a rule that builds new values from values you already have. Without the base case, you’d never have a starting point. Without the recursive step, you’d never generate anything beyond that starting point.

Think of it like a ladder. The base case is the first rung, placed at a known height. The recursive step tells you how to place each rung above the last one. Together, they let you climb as high as you need.

Recursive vs. Explicit Formulas

The clearest way to understand recursion is to compare it with the alternative. An explicit formula lets you jump straight to any term in a sequence using its position number. A recursive formula requires you to know the previous term before you can find the next one.

Take the sequence 10, 13, 16, 19, 22. The recursive formula says: start at 10, then add 3 each time. Written out, that’s a(n) = a(n−1) + 3, with a(1) = 10. To find the 5th term, you’d need the 4th term first, which means you need the 3rd, and so on back to the start.

The explicit formula for the same sequence is a(n) = 3n + 7. Plug in n = 5 and you get 22 immediately, no chain of previous calculations required. Both formulas produce the same sequence. The recursive version describes how the pattern grows step by step. The explicit version gives you a shortcut to any position.
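The contrast translates directly into code. Here is a minimal Python sketch of both formulas for this sequence (the function names are mine, chosen for illustration):

```python
def a_recursive(n):
    """nth term via the recurrence a(n) = a(n-1) + 3, with a(1) = 10."""
    if n == 1:                       # base case: stated outright
        return 10
    return a_recursive(n - 1) + 3    # recursive step: needs the previous term

def a_explicit(n):
    """Same sequence via the closed form a(n) = 3n + 7."""
    return 3 * n + 7

print([a_recursive(n) for n in range(1, 6)])  # [10, 13, 16, 19, 22]
print(a_explicit(5))                          # 22, no chain of calls needed
```

Notice that `a_recursive(5)` quietly triggers calls for terms 4, 3, 2, and 1, while `a_explicit(5)` is a single arithmetic step.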

Sometimes converting between the two is straightforward. Other times, especially with more complex recursions, finding an explicit formula is difficult or impossible, and the recursive definition is the most natural way to describe the pattern.

Arithmetic and Geometric Sequences

The two most common sequence types in algebra both have clean recursive formulas. An arithmetic sequence adds the same number each time. Its recursive formula is a(n) = a(n−1) + d, where d is the common difference. If your sequence is 5, 8, 11, 14, then d = 3 and you start with a(1) = 5.

A geometric sequence multiplies by the same number each time. Its recursive formula is a(n) = r · a(n−1), where r is the common ratio. For the sequence 168, 84, 42, 21, each term is half the previous one, so r = 1/2 and a(1) = 168. The explicit version of that same sequence would be a(n) = 168 / 2^(n−1), which lets you skip ahead to any term directly.
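Both recursive formulas fit naturally into generator form, since each term only needs the one before it. A Python sketch (the names `arithmetic` and `geometric` are illustrative):

```python
from itertools import islice

def arithmetic(a1, d):
    """Yield terms of a(n) = a(n-1) + d, starting from a(1) = a1."""
    term = a1
    while True:
        yield term
        term += d        # recursive step: add the common difference

def geometric(a1, r):
    """Yield terms of a(n) = r * a(n-1), starting from a(1) = a1."""
    term = a1
    while True:
        yield term
        term *= r        # recursive step: multiply by the common ratio

print(list(islice(arithmetic(5, 3), 4)))     # [5, 8, 11, 14]
print(list(islice(geometric(168, 0.5), 4)))  # [168, 84.0, 42.0, 21.0]
```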

The Fibonacci Sequence

The Fibonacci sequence is probably the most famous recursive definition in all of mathematics. It starts with two base cases instead of one: F(0) = 0 and F(1) = 1. The recursive step says each new term is the sum of the two terms before it: F(n) = F(n−1) + F(n−2).

That simple rule produces the sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, and onward. To get 8, for example, you add 3 and 5. To get 13, you add 5 and 8. Each value is built entirely from what came before. An explicit formula for Fibonacci numbers does exist (Binet's formula, built from powers of the golden ratio), but it is far less intuitive than the recursive version, which is one reason mathematicians tend to present the sequence recursively.
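The definition translates almost word for word into code. A direct Python sketch (note that this naive version recomputes the same subproblems many times, so it illustrates the definition rather than an efficient algorithm):

```python
def fib(n):
    """F(n) from the recursive definition: F(0) = 0, F(1) = 1,
    F(n) = F(n-1) + F(n-2)."""
    if n < 2:                        # two base cases: F(0) and F(1)
        return n
    return fib(n - 1) + fib(n - 2)   # recursive step: sum the two before

print([fib(n) for n in range(13)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```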

Factorials

The factorial of a number n (written n!) means multiplying all whole numbers from 1 up to n. So 5! = 5 × 4 × 3 × 2 × 1 = 120. You can define this recursively with a base case of 0! = 1, and a recursive step of n! = n × (n−1)! for any n ≥ 1. The base case 0! = 1 may look arbitrary, but it's exactly the value that makes the recursive step come out right: 1! = 1 × 0! = 1.

To compute 4! recursively, you’d work through the chain: 4! = 4 × 3!, and 3! = 3 × 2!, and 2! = 2 × 1!, and 1! = 1 × 0!, and 0! = 1. Then you multiply back up: 1, then 1, then 2, then 6, then 24. The answer is 24. This chain of “ask the previous value first” is the hallmark of recursive thinking.
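That "ask the previous value first, then multiply back up" chain is exactly what a recursive function does on the call stack. A minimal Python sketch:

```python
def factorial(n):
    """n! from the recursive definition: 0! = 1, n! = n * (n-1)!."""
    if n == 0:                       # base case
        return 1
    return n * factorial(n - 1)      # recursive step: ask for (n-1)! first

print(factorial(4))   # 24
print(factorial(5))   # 120
```

Calling `factorial(4)` descends to `factorial(0)`, hits the base case, then multiplies back up through 1, 1, 2, 6, 24, mirroring the chain worked out above.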

Pascal’s Triangle

Pascal’s triangle is built entirely on a recursive rule. Each number equals the sum of the two numbers directly above it. In formal notation, the entry in row r at position k equals the entry at (r−1, k) plus the entry at (r−1, k−1). The edges of the triangle are all 1s, serving as the base cases.

This means you can generate the entire triangle row by row without any multiplication or division. Row 4, for instance (1, 4, 6, 4, 1), comes from adding adjacent pairs in row 3 (1, 3, 3, 1), counting the top row as row 0. The same recursive identity, expressed in terms of binomial coefficients, is foundational to combinatorics and probability.
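Row-by-row generation can itself be written recursively, since each row needs only the row above it. A Python sketch (rows counted from 0 at the apex; the function name is illustrative):

```python
def pascal_row(r):
    """Row r of Pascal's triangle, built from the recursive rule:
    each interior entry is the sum of the two entries above it."""
    if r == 0:                       # base case: the single 1 at the apex
        return [1]
    prev = pascal_row(r - 1)         # recursive step: need the row above
    # edges are always 1; interior entries add adjacent pairs from prev
    return [1] + [prev[k - 1] + prev[k] for k in range(1, r)] + [1]

print(pascal_row(3))  # [1, 3, 3, 1]
print(pascal_row(4))  # [1, 4, 6, 4, 1]
```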

Recursion in Fractals

Recursion also appears in geometry through fractals, shapes that repeat their own structure at smaller and smaller scales. The Sierpinski triangle is a classic example. You start with an equilateral triangle, find the midpoints of all three sides, and connect them to form a smaller triangle in the center. Remove that center triangle. Now repeat the same process on each of the three remaining triangles, then on each of those smaller triangles, and so on.

The base case here is a single triangle. The recursive step says: subdivide and repeat. After just a few rounds, the shape develops an intricate, self-similar pattern where every small section looks like a miniature copy of the whole. This is recursion applied to shapes rather than numbers, but the logic is identical: define a simple starting point, then apply the same rule over and over.
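The subdivide-and-repeat logic can even be sketched in plain text. This Python snippet builds a character-art Sierpinski triangle by stacking one copy above two side-by-side copies, a discrete analogue of the midpoint construction rather than a faithful geometric rendering:

```python
def sierpinski(order):
    """A text Sierpinski triangle of the given recursion depth,
    returned as a list of rows of characters."""
    if order == 0:                   # base case: a single point
        return ['*']
    prev = sierpinski(order - 1)     # recursive step: the smaller copy
    width = len(prev[-1])
    top = [' ' * ((width + 1) // 2) + row for row in prev]    # one centered copy
    bottom = [row.ljust(width) + ' ' + row for row in prev]   # two copies side by side
    return top + bottom

print('\n'.join(sierpinski(3)))
```

Each extra level of `order` re-applies the same rule to the whole shape, so the output is self-similar: every half-size block is a copy of the smaller triangle.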

Finding the Limit of a Recursive Sequence

Some recursive sequences settle toward a single value as you compute more and more terms. When that happens, there’s a useful trick for finding that limit. If the sequence converges to some value A, then eventually both a(n) and a(n+1) are essentially equal to A. You can replace both with A in the recursive formula and solve the resulting equation.

For example, consider the sequence defined by a(n+1) = √(3 · a(n)), starting at a(0) = 2. If you assume the sequence converges to A, you set A = √(3A), square both sides to get A² = 3A, and solve: A = 0 or A = 3. Since the sequence starts at 2 and its terms stay above 2, the limit can’t be 0. The sequence converges to 3. This fixed-point method works whenever you already know (or can verify) that convergence occurs. The solutions you get are candidates for the limit, and the starting value helps you determine which candidate is correct.
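You can also watch the convergence numerically. A short Python sketch of the iteration (the function name is mine):

```python
import math

def iterate(a0, steps):
    """Run a(n+1) = sqrt(3 * a(n)) for the given number of steps,
    returning every term so the approach to the limit is visible."""
    a = a0
    terms = [a]
    for _ in range(steps):
        a = math.sqrt(3 * a)
        terms.append(a)
    return terms

print(iterate(2.0, 10)[-1])  # about 2.9988, closing in on the limit A = 3
```

Each step roughly halves the distance to 3, consistent with the fixed-point equation A = √(3A) solved above.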

Why Recursion Matters

Recursive definitions are powerful because they let you describe complex patterns with very simple rules. You don’t need to figure out a clever closed-form equation. You just need a starting point and a rule for getting from one step to the next. This makes recursion a natural way to model processes that build on themselves: population growth where each generation depends on the last, compound interest where each year’s balance depends on the previous year’s, or algorithms in computer science that solve big problems by breaking them into smaller copies of themselves.
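As one concrete instance, compound interest is a recursion: each year's balance is defined in terms of the previous year's. A minimal Python sketch (the principal and rate are made-up illustrative figures, not from any particular account):

```python
def balance(year, principal=1000.0, rate=0.05):
    """Balance after `year` years of annual compounding, defined recursively:
    base case balance(0) = principal; each later balance is the previous
    year's balance grown by the interest rate."""
    if year == 0:                            # base case: the initial deposit
        return principal
    return balance(year - 1) * (1 + rate)    # recursive step

print(round(balance(10), 2))  # matches the closed form 1000 * 1.05**10
```

Here, as with sequences, an explicit shortcut exists (principal × (1 + rate)^year), but the recursive version is the one that directly mirrors how the process actually unfolds year by year.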

Mathematical induction, the standard technique for proving statements about all whole numbers, is closely related. To prove a property holds for every member of a recursively defined set, you show it holds for the base case and then show the recursive step preserves it. If the property is true before the step, it remains true after. This mirrors the structure of the recursive definition itself: anchor at the base, then carry forward.