What Is the Integral Test for Series Convergence?

The integral test is a method in calculus for determining whether an infinite series converges or diverges. It works by comparing the series to an improper integral: if the integral produces a finite value, the series converges, and if the integral blows up to infinity, the series diverges. It’s one of the most practical tools in a Calculus II course because it connects two big ideas, summation and integration, and it handles series that other tests struggle with.

How the Integral Test Works

The core idea is straightforward. Suppose you have an infinite series whose terms come from some function, meaning each term a_n equals f(n). If that function f(x) is continuous, positive, and decreasing on the interval from 1 to infinity, then the improper integral of f(x) from 1 to infinity and the infinite series either both converge or both diverge. There’s no middle ground where one converges and the other doesn’t.

All three conditions must hold before you can use the test:

  • Continuous: The function has no gaps, jumps, or undefined points on the interval.
  • Positive: Every term of the series is greater than zero.
  • Decreasing: The function’s values get smaller as x increases, so each successive term is no larger than the one before it.

If any of these conditions fails, the integral test doesn’t apply, and you’ll need a different convergence test. A common mistake is applying it to a function that oscillates or increases over part of the interval. The decreasing condition doesn’t need to kick in right at n = 1, though. If the function eventually becomes decreasing after some starting point, you can still use the test from that point onward, since a finite number of terms at the beginning don’t affect whether the series converges.
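As a quick sanity check, the conditions and the test's conclusion can be illustrated numerically. This is a minimal sketch (not part of the test itself) using f(x) = 1/x^2, whose improper integral can be evaluated by hand:

```python
import math

# Sketch with f(x) = 1/x^2: continuous, positive, and decreasing on
# [1, infinity), so the integral test applies.
def f(x):
    return 1.0 / x**2

# The improper integral has a closed form:
# integral of 1/x^2 from 1 to infinity = [-1/x] from 1 to infinity = 1.
integral_value = 1.0

# Partial sums of the series stay bounded, consistent with convergence.
partial = sum(f(n) for n in range(1, 100_001))

print(f"integral = {integral_value}")
print(f"partial sum of 100000 terms = {partial:.6f}")

# The series actually converges to pi^2/6 (about 1.644934); the test
# guarantees only convergence, with the sum trapped between the
# integral and f(1) + the integral.
assert integral_value <= partial <= f(1) + integral_value
```

Note that the sum (about 1.645) and the integral (exactly 1) are different numbers, a point revisited at the end of this article.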

Why It Works: The Area Interpretation

The logic behind the test comes from comparing areas. Picture the curve y = f(x) drawn on a graph, and imagine building rectangles of width 1 at each integer. If you use the value at the right endpoint of each interval as the rectangle’s height, the rectangles sit below the curve (because the function is decreasing). That means the sum of those rectangles underestimates the area under the curve. If you instead use the left endpoints, the rectangles poke above the curve, and the sum overestimates the area.

This sandwiching is the key. The actual series sum is trapped between two versions of the integral: because the function is decreasing, the sum f(2) + f(3) + f(4) + … is no larger than the integral of f(x) from 1 to infinity, while the sum f(1) + f(2) + f(3) + … is at least that large. So if the integral converges to a finite area, the series can’t escape to infinity. And if the integral diverges, the series can’t somehow stay finite. They’re locked together.
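The sandwich can be checked numerically for a concrete decreasing function. A minimal sketch with f(x) = 1/x^2, using its exact antiderivative -1/x for the integrals:

```python
# Sandwich check with f(x) = 1/x^2, whose antiderivative is known
# exactly: integral of 1/x^2 from a to b = 1/a - 1/b.
def f(n):
    return 1.0 / n**2

def integral(a, b):
    return 1.0 / a - 1.0 / b

N = 1000
s = sum(f(n) for n in range(1, N + 1))

# Right-endpoint rectangles sit under the curve, left-endpoint ones above:
#   integral from 1 to N+1  <=  f(1) + ... + f(N)  <=  f(1) + integral from 1 to N
lower = integral(1, N + 1)
upper = f(1) + integral(1, N)
assert lower <= s <= upper
print(f"{lower:.6f} <= {s:.6f} <= {upper:.6f}")
```

Both bounds stay finite as N grows, which is exactly why the sum cannot escape to infinity when the integral converges.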

The P-Series: A Classic Application

The most important result that comes from the integral test is the p-series rule. A p-series has the form 1/n^p, where p is a constant: think of series like 1 + 1/4 + 1/9 + 1/16 + … (that’s p = 2) or 1 + 1/2 + 1/3 + 1/4 + … (that’s p = 1, the harmonic series).

The rule is clean: a p-series converges if and only if p is greater than 1. Here’s how the integral test proves it. Take f(x) = 1/x^p, which is continuous, positive, and decreasing for x ≥ 1 when p > 0. Evaluate the improper integral from 1 to infinity. When p > 1, the antiderivative is -1/((p-1)x^(p-1)), and as x goes to infinity, that fraction shrinks to zero. You’re left with a finite value of 1/(p-1), so the integral converges, and the series converges with it.

When p = 1, you get the integral of 1/x, which is ln(x). That grows without bound, so the harmonic series diverges. When p is less than 1, the terms 1/n^p are at least as large as the harmonic terms 1/n, so divergence follows by comparison to the harmonic series (the integral confirms it too, since the antiderivative x^(1-p)/(1-p) grows without bound). This single result is surprisingly useful because p-series show up constantly as benchmarks for testing other series.
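A quick numerical illustration of the p-series rule, comparing p = 2 with the harmonic series. This is a sketch, not a proof; the printed values only suggest the limiting behavior:

```python
import math

# Partial sums for p = 2 stabilize, while the harmonic series (p = 1)
# keeps growing like ln(N), mirroring the integrals 1/(p-1) and ln(x).
def partial_sum(p, N):
    return sum(1.0 / n**p for n in range(1, N + 1))

for N in (10**3, 10**4, 10**5):
    s2 = partial_sum(2, N)
    s1 = partial_sum(1, N)
    print(f"N={N:>6}  p=2 sum={s2:.6f}  p=1 sum={s1:.4f}  ln(N)={math.log(N):.4f}")

# The p = 2 sums converge toward pi^2/6, while the harmonic sums track
# ln(N): their gap approaches the Euler-Mascheroni constant, about 0.5772.
```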

Series Involving Logarithms

The integral test really earns its keep with series that involve logarithms, where comparison tests are awkward and the ratio test is inconclusive. A typical example is the series with terms 1/(n · ln(n)). The function f(x) = 1/(x · ln(x)) is continuous, positive, and decreasing for x ≥ 2, so the integral test applies.

To evaluate the integral, you use the substitution u = ln(x), which turns it into the integral of 1/u. That’s another logarithm, ln(ln(x)), which grows to infinity. So the integral diverges, and the series diverges too. This is a good example of a series whose terms shrink to zero (passing the basic divergence test) but still diverges, similar in spirit to the harmonic series but harder to spot without the integral test.

Variations like 1/(n · (ln(n))^2) work differently. The same substitution gives you an integral of 1/u^2, which converges. So adding that extra power on the logarithm is enough to tip the series into convergence. These logarithmic series are where students most commonly need the integral test, because few other tools handle them cleanly.
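The contrast between the two logarithmic series can also be seen numerically. A sketch comparing their partial sums (the slow ln(ln(n)) growth of the first makes the divergence easy to miss in small samples):

```python
import math

# Both series have terms that shrink to zero, but only the second converges.
def sum_log(N):          # terms 1/(n * ln(n)), starting at n = 2
    return sum(1.0 / (n * math.log(n)) for n in range(2, N + 1))

def sum_log_sq(N):       # terms 1/(n * ln(n)^2), starting at n = 2
    return sum(1.0 / (n * math.log(n) ** 2) for n in range(2, N + 1))

for N in (10**3, 10**5):
    print(f"N={N:>6}  1/(n ln n) sum={sum_log(N):.4f}  "
          f"1/(n ln^2 n) sum={sum_log_sq(N):.4f}")

# The first sum creeps upward like ln(ln(N)), unboundedly but very slowly.
# The second stays below its integral-test bound: the first term plus the
# tail integral, 1/(2 * ln(2)**2) + 1/ln(2), roughly 2.48.
```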

Estimating the Remainder

Beyond just answering “converge or diverge,” the integral test gives you a way to estimate how close a partial sum is to the true value of a convergent series. If you add up the first n terms, the error (the remainder r_n, meaning everything you left out) is bounded by two integrals:

The integral of f(x) from n+1 to infinity gives a lower bound on the error, and the integral of f(x) from n to infinity gives an upper bound. In other words, the leftover sum is at least as big as the first integral and no bigger than the second.

This is valuable in practice. Say you’ve summed 100 terms of a convergent series and want to know how accurate your approximation is. You compute the integral from 100 to infinity, and that tells you the maximum possible error. You can also work backward: decide how small you want the error to be, then solve for how many terms you need to add up. This makes the integral test not just a yes-or-no convergence tool but a quantitative estimation method.
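For the series with terms 1/n^2, the exact sum pi^2/6 happens to be known, so the remainder bounds can be checked directly. A minimal sketch:

```python
import math

# Remainder bounds for the series with terms 1/n^2.
# For f(x) = 1/x^2, the tail integral from m to infinity equals 1/m.
N = 100
partial = sum(1.0 / n**2 for n in range(1, N + 1))
true_remainder = math.pi**2 / 6 - partial

lower = 1.0 / (N + 1)   # integral of f from N+1 to infinity
upper = 1.0 / N         # integral of f from N to infinity
assert lower <= true_remainder <= upper
print(f"after {N} terms: {lower:.6f} <= remainder {true_remainder:.6f} <= {upper:.6f}")

# Working backward: to guarantee error below 1e-4 you would need the
# upper bound 1/N < 1e-4, i.e. more than 10,000 terms.
```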

When to Use the Integral Test

The integral test is the right choice when the series terms come from a function you can actually integrate, and when other tests don’t give you an answer. The ratio test, for instance, is often inconclusive for series with polynomial or logarithmic terms. Comparison tests require you to already know a similar series that converges or diverges, which isn’t always obvious.

The main limitation is practical: you need to be able to evaluate the improper integral. If the antiderivative is something you can’t find (or doesn’t exist in a nice closed form), the test isn’t helpful even though it theoretically applies. You also can’t use it when the function fails to be eventually positive and decreasing, such as series with terms that alternate in sign or oscillate.

One important detail that trips up students: the integral test tells you whether a series converges, but it does not tell you what the series converges to. The value of the improper integral is not equal to the sum of the series. They’re related through the rectangle-and-curve comparison, but they’re different numbers. If you need the actual sum, you’ll need other techniques or the remainder estimation bounds described above to pin it down.