How to Find the Limit of a Sequence: Key Methods

Finding the limit of a sequence means determining the single value that the sequence’s terms get closer and closer to as you go further out. There’s a formal definition behind this idea, but in practice you’ll rely on a toolkit of algebraic techniques, standard results, and theorems. The right approach depends on what kind of sequence you’re working with.

What “Limit” Actually Means

A sequence converges to a limit L if, no matter how tiny a distance you pick (called epsilon), you can always find a point in the sequence beyond which every single term stays within that distance of L. In notation: for any ε > 0, there exists a positive integer M such that for all n > M, the distance |a(n) − L| is less than ε.

The key insight is that early terms don’t matter. A sequence could behave wildly for its first million terms. What determines convergence is the long-term behavior. And if you shrink your acceptable distance (make epsilon smaller), you may need to go further out in the sequence (make M larger) before all remaining terms fall inside that window. If you can always find such an M no matter how small epsilon gets, the sequence converges to L.
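The ε–M pattern can be made concrete with a brute-force search. This is a numerical sketch, not a proof: `find_M` is a hypothetical helper that scans a finite range and reports the last index whose term escapes the ε-window.

```python
def find_M(a, L, eps, n_max=10**6):
    # Scan up to n_max and record the last index whose term falls outside
    # the epsilon window around L. Beyond that index, every checked term
    # satisfies |a(n) - L| < eps.
    last_violation = 0
    for n in range(1, n_max + 1):
        if abs(a(n) - L) >= eps:
            last_violation = n
    return last_violation

# For a(n) = 1/n converging to L = 0: shrinking eps forces a larger M.
print(find_M(lambda n: 1 / n, 0, 0.1))    # 10
print(find_M(lambda n: 1 / n, 0, 0.001))  # 1000
```

A finite scan can never certify convergence, but it shows the trade-off in the definition: smaller ε, larger M.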

Rational Sequences: The Fastest Shortcut

If your sequence looks like one polynomial divided by another, such as (3n² + 5n) / (7n² – 2), you can find the limit almost instantly by comparing the degrees of the top and bottom.

  • Bottom degree is larger: the limit is 0. Example: n / n² converges to 0.
  • Top degree is larger: the sequence diverges to +∞ or -∞, depending on the signs of the leading coefficients. Example: n² / n grows without bound.
  • Degrees are equal: the limit is the ratio of the leading coefficients. For (3n² + 5n) / (7n² – 2), the limit is 3/7.

This shortcut works because the highest-power terms dominate as n grows. Everything else becomes negligible. When the degrees match, dividing both numerator and denominator by the highest power of n causes all the lower-order terms to vanish, leaving only the leading coefficients.
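The degree comparison fits in a few lines of code. A minimal sketch, assuming coefficients are listed highest power first with no leading zeros; `rational_sequence_limit` is an illustrative name, not a library function.

```python
def rational_sequence_limit(num, den):
    # Coefficient lists, highest power first: 3n^2 + 5n -> [3, 5, 0].
    if len(num) < len(den):      # bottom degree larger: limit is 0
        return 0.0
    if len(num) > len(den):      # top degree larger: diverges
        return float('inf') if num[0] / den[0] > 0 else float('-inf')
    return num[0] / den[0]       # equal degrees: ratio of leading coefficients

# (3n^2 + 5n) / (7n^2 - 2) -> 3/7
print(rational_sequence_limit([3, 5, 0], [7, 0, -2]))  # 0.42857...
# n / n^2 -> 0
print(rational_sequence_limit([1, 0], [1, 0, 0]))      # 0.0
```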

Limit Laws for Combining Sequences

When you already know the limits of two sequences, you can combine them using arithmetic rules. If sequence a(n) converges to a and sequence b(n) converges to b, then:

  • Sum: the limit of a(n) + b(n) is a + b
  • Difference: the limit of a(n) – b(n) is a – b
  • Product: the limit of a(n) · b(n) is a · b
  • Scalar multiple: the limit of c · a(n) is c · a
  • Quotient: the limit of a(n) / b(n) is a / b, provided b ≠ 0 (and no term b(n) is zero)

These rules let you break complicated sequences into simpler pieces, find each piece’s limit, then reassemble. For instance, you can evaluate the limit of (1/n + 3)(2 – 5/n) by finding the limits of 1/n, 3, 2, and 5/n separately, then multiplying the results: (0 + 3)(2 – 0) = 6.
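The worked example translates directly into code: assign each piece its known limit, recombine with the sum, difference, and product rules, and sanity-check against one large-n term of the actual sequence.

```python
# Known limits of the pieces (standard results):
lim_1_over_n = 0.0   # 1/n -> 0
lim_5_over_n = 0.0   # 5/n -> 0 by the scalar multiple rule

# Recombine with the sum/difference and product rules:
limit = (lim_1_over_n + 3) * (2 - lim_5_over_n)
print(limit)  # 6.0

# Numerical sanity check at a large index:
n = 10**6
print((1 / n + 3) * (2 - 5 / n))  # ≈ 6.0
```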

Standard Limits Worth Memorizing

A few results come up so often that memorizing them saves significant time:

  • 1/n → 0 as n → ∞. This is the most fundamental convergent sequence and the building block for many others.
  • n^(1/n) → 1 as n → ∞. The nth root of n approaches 1, even though n itself is growing.
  • (1 + 1/n)^n → e as n → ∞. This is one of the oldest definitions of the constant e (approximately 2.718).
  • r^n → 0 when |r| < 1. Any geometric sequence with a common ratio strictly between -1 and 1 converges to 0. When |r| > 1 or r = -1, the geometric sequence diverges; r = 1 gives the constant sequence 1, 1, 1, …, which converges to 1.

You can extend the geometric result immediately: if you see something like 5 · (2/3)^n, the limit is 0 because |2/3| < 1 and the scalar multiple rule applies.
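Each of these standard limits is easy to spot-check by evaluating a term at a large index (a numerical illustration, not a proof):

```python
import math

n = 10**6  # a large index; each term should sit near its limit

print(1 / n)              # ≈ 0
print(n ** (1 / n))       # ≈ 1
print((1 + 1 / n) ** n)   # ≈ e ≈ 2.71828
print((2 / 3) ** 50)      # ≈ 0: geometric decay since |2/3| < 1
print(5 * (2 / 3) ** 50)  # still ≈ 0 by the scalar multiple rule
```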

The Squeeze Theorem

When you can’t evaluate a sequence’s limit directly, try trapping it between two simpler sequences that converge to the same value. This is the Squeeze Theorem (also called the Sandwich Theorem). You need three things: a lower-bound sequence a(n), your target sequence x(n), and an upper-bound sequence b(n), where a(n) ≤ x(n) ≤ b(n) for all n. If both a(n) and b(n) converge to the same limit L, then x(n) also converges to L.

A classic example: find the limit of sin(n)/n. You know that -1 ≤ sin(n) ≤ 1 for all n, so -1/n ≤ sin(n)/n ≤ 1/n. Both -1/n and 1/n converge to 0, so sin(n)/n is squeezed to 0 as well. The power of this technique is that you never need to evaluate sin(n) directly, which would be impossible since it never settles down on its own.
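The squeeze can be watched numerically: at every sampled index, the term stays inside the shrinking window [-1/n, 1/n]. A quick illustration of the inequality, not a proof:

```python
import math

# -1/n <= sin(n)/n <= 1/n, and both bounds shrink to 0.
for n in (10, 1000, 100000):
    lower, target, upper = -1 / n, math.sin(n) / n, 1 / n
    assert lower <= target <= upper
    print(f"n={n}: {lower:.6f} <= {target:.6f} <= {upper:.6f}")
```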

Using Calculus on Sequences

If your sequence a(n) can be written as f(n) for some continuous function f(x), then the limit of the sequence equals the limit of f(x) as x → ∞ (when it exists). This connection is useful because it opens up calculus tools, especially L’Hôpital’s Rule.

L’Hôpital’s Rule applies when a limit produces an indeterminate form like 0/0 or ∞/∞. In those cases, you can differentiate the numerator and denominator separately (with respect to the continuous variable x), then take the limit of the new fraction. For example, to find the limit of ln(n)/n, note that both ln(x) and x go to infinity. Differentiating gives (1/x)/1 = 1/x, which goes to 0. So ln(n)/n → 0.

This technique also handles tricky forms like 0^0, 1^∞, or ∞^0 by taking the natural log first, evaluating the limit, then exponentiating back. For instance, to find the limit of n^(1/n), rewrite it as e^(ln(n)/n). Since ln(n)/n → 0 (as just shown), the sequence converges to e^0 = 1, confirming the standard result above.
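Both results check out numerically. The sketch below evaluates ln(n)/n at growing indices, then uses the e^(ln(n)/n) rewrite for n^(1/n):

```python
import math

# ln(n)/n is an inf/inf form; L'Hopital predicts a limit of 0.
for n in (10, 10**3, 10**6):
    print(n, math.log(n) / n)  # shrinks toward 0

# n^(1/n) = e^(ln(n)/n), so the limit is e^0 = 1.
n = 10**6
print(math.exp(math.log(n) / n))  # ≈ 1
```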

Recursive Sequences

Some sequences are defined by a formula that references previous terms, such as a(n+1) = (1/4)a(n) + 3/4 with some starting value a(1). You can’t plug in a formula for the nth term directly, so you use a different strategy.

First, assume the sequence converges to some limit L. If it does, then as n gets large, both a(n) and a(n+1) approach L. Replace both with L in the recursive formula and solve for L. In the example above, that gives L = (1/4)L + 3/4, which solves to L = 1. The solutions to this equation are called fixed points of the recursion.

This method only finds candidates for the limit. You still need to verify that the sequence actually converges, which often requires showing that it’s monotone (always increasing or always decreasing) and bounded. Once convergence is confirmed, the limit must be one of the fixed points.
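For this particular recursion, the fixed-point prediction is easy to confirm by iterating. A minimal sketch (`iterate` is a hypothetical helper): because the map has slope 1/4, it is a contraction, and any starting value gets pulled toward L = 1.

```python
def iterate(a1, steps=50):
    # Apply a(n+1) = (1/4) a(n) + 3/4 repeatedly from the start value a1.
    a = a1
    for _ in range(steps):
        a = 0.25 * a + 0.75
    return a

# The iteration approaches the fixed point L = 1 from any start:
print(iterate(0.0))    # ≈ 1.0
print(iterate(100.0))  # ≈ 1.0
```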

The Monotone Convergence Theorem

This theorem provides a powerful guarantee: if a sequence is increasing and bounded above, it converges. Similarly, if a sequence is decreasing and bounded below, it converges. You don’t even need to know what the limit is to prove it exists.

To use it, you verify two properties. First, show monotonicity: prove that a(n+1) ≥ a(n) for all n (increasing) or a(n+1) ≤ a(n) for all n (decreasing). You can often do this by looking at the difference a(n+1) – a(n) or the ratio a(n+1)/a(n). Second, show boundedness: find a concrete number that the sequence never exceeds (for increasing sequences) or never drops below (for decreasing sequences).

This theorem is particularly useful for recursive sequences. You might use it to prove convergence, then apply the fixed-point method from the previous section to find the actual limit value.
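Here is what the two checks look like for a(n+1) = (1/4)a(n) + 3/4 started at a(1) = 0. A finite spot-check like this illustrates, but does not prove, monotonicity and boundedness:

```python
# Generate the first terms of a(n+1) = (1/4) a(n) + 3/4, with a(1) = 0.
terms = [0.0]
for _ in range(30):
    terms.append(0.25 * terms[-1] + 0.75)

# Monotone: each term is at least the previous one.
assert all(terms[i + 1] >= terms[i] for i in range(len(terms) - 1))
# Bounded above: no term exceeds 1.
assert all(t <= 1.0 for t in terms)
print(terms[-1])  # ≈ 1.0, the limit the theorem guarantees exists
```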

When a Sequence Diverges

Not every sequence has a limit. Divergence comes in distinct flavors, and recognizing them helps you avoid wasting time searching for a limit that doesn’t exist.

A sequence can diverge to +∞, meaning its terms eventually exceed any number you pick, no matter how large. The sequence n² is a straightforward example. It can also diverge to -∞ in the same way but heading downward.

The trickier case is oscillation. The sequence (-1)^n alternates between -1 and 1 forever, never settling near any single value. It’s bounded (it stays between -1 and 1) but it doesn’t converge. Other oscillating sequences can be unbounded, like (-n)^n, which swings between increasingly large positive and negative values. Any divergent sequence that doesn’t head to +∞ or -∞ is classified as oscillatory.

A quick divergence test for sequences: if the terms never settle toward any single fixed value, the sequence diverges by definition. And if a sequence is unbounded in both directions, no limit exists, finite or infinite.
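All three flavors of divergence are visible in a few sampled terms (a quick illustration):

```python
# n^2 runs off to +infinity; (-1)^n oscillates but stays bounded in [-1, 1];
# (-n)^n oscillates with swings that grow without bound.
for n in (1, 2, 3, 4, 5):
    print(n, n**2, (-1)**n, (-n)**n)
```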