In calculus, monotonic describes a function that moves in only one direction, either always going up or always going down, across an interval. A function that only rises is called monotonically increasing. A function that only falls is called monotonically decreasing. The word comes from the Greek “monos” (single) and “tonos” (tone), and it captures the idea of consistent, one-directional behavior with no reversals.
Monotonically Increasing vs. Decreasing
A function is monotonically increasing if, whenever you pick two inputs where the first is less than the second, the output of the first is less than or equal to the output of the second. In plain terms: as you move right along the x-axis, the function never drops. It can rise, or it can stay flat for a stretch, but it never goes down.
A function is monotonically decreasing if the reverse holds: moving right along the x-axis, the function never rises. It can fall or stay flat, but it never goes up. A function that is either entirely increasing or entirely decreasing on an interval is called monotonic on that interval.
Strict vs. Non-Strict Monotonicity
This distinction trips up a lot of students, so it’s worth spelling out. A strictly increasing function has no flat segments at all. Every time x gets bigger, f(x) gets bigger too, never just equal. Compare the sequence 1, 2, 3, 4, 5 (strictly increasing) with 1, 2, 2, 3, 4 (non-decreasing but not strictly increasing, because of that repeated 2).
A non-decreasing function allows for flat stretches where the output stays constant over some interval. It still qualifies as monotonically increasing under the broader definition because it never reverses direction. The same logic applies on the decreasing side: strictly decreasing means the function always drops, while non-increasing allows flat portions.
A constant function, like f(x) = 5, is technically both non-decreasing and non-increasing, since its output never goes up or down. But it is not strictly increasing or strictly decreasing. Be aware that terminology varies across textbooks. Some older texts use “increasing” to mean what modern texts call “strictly increasing,” so always check the definitions your course is using.
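The distinctions above are easy to check computationally for a finite sequence of values. Here is a minimal Python sketch using the example sequences from this section (the function names are just illustrative):

```python
def strictly_increasing(seq):
    # Every step must go strictly up: no repeats allowed.
    return all(a < b for a, b in zip(seq, seq[1:]))

def non_decreasing(seq):
    # Flat stretches are fine; the sequence just never drops.
    return all(a <= b for a, b in zip(seq, seq[1:]))

def non_increasing(seq):
    # Mirror image: flat stretches are fine, but it never rises.
    return all(a >= b for a, b in zip(seq, seq[1:]))

assert strictly_increasing([1, 2, 3, 4, 5])
assert non_decreasing([1, 2, 2, 3, 4])
assert not strictly_increasing([1, 2, 2, 3, 4])  # the repeated 2 breaks strictness

# A constant sequence is both non-decreasing and non-increasing,
# but neither strictly increasing nor strictly decreasing.
assert non_decreasing([5, 5, 5]) and non_increasing([5, 5, 5])
assert not strictly_increasing([5, 5, 5])
```

Note how swapping < for <= is exactly the difference between the strict and non-strict definitions.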
How Derivatives Reveal Monotonicity
The derivative is the primary tool for determining where a function is monotonic. The connection is intuitive: the derivative tells you the slope of the function at each point. If the slope is positive, the function is rising. If the slope is negative, the function is falling. This is formalized as the Increasing/Decreasing Test:
- If f'(x) > 0 for all x in an interval, then f is increasing on that interval.
- If f'(x) < 0 for all x in an interval, then f is decreasing on that interval.
- If f'(x) = 0 for all x in an interval, then f is constant on that interval.
This test requires the function to be continuous on the closed interval and differentiable on its interior. It won’t apply at points where the function has a sharp corner or a discontinuity, but it covers the vast majority of functions you’ll encounter in a standard calculus course.
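To make the test concrete, here is a small numeric sketch with f(x) = eˣ, whose derivative eˣ is positive everywhere, so the test says f is increasing on the whole real line. (This is an illustrative spot-check at sample points, not a proof.)

```python
import math

def f(x):
    return math.exp(x)

def f_prime(x):
    # The derivative of exp(x) is exp(x), which is positive for every x.
    return math.exp(x)

samples = [-10, -1, 0, 1, 10]

# The derivative is positive at every sample point...
assert all(f_prime(x) > 0 for x in samples)

# ...and, consistent with the Increasing/Decreasing Test, the outputs
# rise as the inputs rise.
values = [f(x) for x in samples]
assert all(a < b for a, b in zip(values, values[1:]))
```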
Finding Intervals of Monotonicity
In practice, you’ll often be asked to find the specific intervals where a function is increasing or decreasing. Here’s the process:
First, find the derivative f'(x). Then identify the critical points, which are x-values where f'(x) = 0 or where f'(x) is undefined. These critical points are the only places where the function can switch from increasing to decreasing or vice versa, so they divide the number line into intervals you need to test individually.
For each interval between consecutive critical points, pick any test value and plug it into f'(x). If the result is positive, the function is increasing on that entire interval. If it’s negative, the function is decreasing. For example, if a function has critical points at x = -2 and x = 2, you’d test the sign of f'(x) on the three intervals: (-∞, -2), (-2, 2), and (2, ∞). The sign of the derivative on each interval tells you the function’s monotonic behavior there.
This process also sets up the First Derivative Test for finding local maxima and minima. If f'(x) switches from positive to negative at a critical point, the function goes from rising to falling, so that point is a local maximum. If f'(x) switches from negative to positive, it’s a local minimum. Monotonicity and optimization are deeply connected in this way.
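The whole procedure can be sketched in a few lines of Python. Here f(x) = x³ − 12x is a hypothetical example chosen so that its critical points fall at x = −2 and x = 2, matching the intervals described above:

```python
def f_prime(x):
    # For f(x) = x**3 - 12*x, the derivative is 3x^2 - 12,
    # which is zero at x = -2 and x = 2 (the critical points).
    return 3 * x**2 - 12

# One test value inside each interval: (-inf, -2), (-2, 2), (2, inf)
test_values = [-3, 0, 3]
signs = ["increasing" if f_prime(x) > 0 else "decreasing" for x in test_values]
print(signs)  # ['increasing', 'decreasing', 'increasing']

# First Derivative Test: a switch from increasing to decreasing at a
# critical point is a local maximum; the reverse switch is a local minimum.
classification = {}
for cp, left, right in zip([-2, 2], signs, signs[1:]):
    if (left, right) == ("increasing", "decreasing"):
        classification[cp] = "local maximum"
    elif (left, right) == ("decreasing", "increasing"):
        classification[cp] = "local minimum"
print(classification)  # {-2: 'local maximum', 2: 'local minimum'}
```

The sign pattern +, −, + across the three intervals immediately yields a local maximum at x = −2 and a local minimum at x = 2.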
The Monotone Convergence Theorem
Monotonicity also shows up when working with sequences and series, not just functions. The Monotone Convergence Theorem states that every bounded monotone sequence converges. In other words, if a sequence only increases (or only decreases) and it has a ceiling it can never exceed (or a floor it can never go below), then it must settle toward a specific limit.
This matters because not all bounded sequences converge. A sequence like 1, -1, 1, -1 is bounded between -1 and 1 but bounces back and forth forever. Adding monotonicity removes that possibility. If the sequence can only move in one direction and it’s boxed in, it has nowhere to go but toward a finite value. This theorem is a workhorse in proofs involving limits of sequences and in establishing convergence of certain series.
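A classic illustration is the recursively defined sequence a₁ = 0, aₙ₊₁ = √(2 + aₙ). Each term is larger than the last, yet no term ever exceeds 2, so the Monotone Convergence Theorem guarantees the sequence converges (its limit is 2). A quick numeric check:

```python
import math

# a_1 = 0, a_(n+1) = sqrt(2 + a_n): monotone increasing, bounded above by 2.
terms = [0.0]
for _ in range(20):
    terms.append(math.sqrt(2 + terms[-1]))

assert all(a < b for a, b in zip(terms, terms[1:]))  # only moves up
assert all(t <= 2 for t in terms)                    # never exceeds the ceiling
assert abs(terms[-1] - 2) < 1e-10                    # settling toward the limit 2
```

Boxed in below 2 and only able to climb, the sequence has nowhere to go but toward its limit, exactly as the theorem predicts.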
Why Monotonicity Matters Beyond Definitions
Knowing that a function is monotonic on an interval tells you several useful things at once. A strictly monotonic function is automatically one-to-one on that interval, meaning it passes the horizontal line test and has an inverse. This is why calculus courses often connect monotonicity to the existence of inverse functions.
Monotonicity also simplifies solving equations and inequalities. If you know a function only increases, then the equation f(x) = c has at most one solution. You don’t need to worry about the function looping back and crossing that value again. In optimization problems, if a function is monotonically increasing on an entire interval, it has no local extrema in the interior of that interval, and its maximum and minimum occur at the endpoints.
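The at-most-one-solution property is what makes bisection search reliable on a monotonic function. As a sketch, take the hypothetical strictly increasing function f(x) = x³ + x and solve f(x) = 10 (the solution is x = 2, since 8 + 2 = 10):

```python
def f(x):
    # Strictly increasing on all of R, so f(x) = c has at most one solution.
    return x**3 + x

def solve_monotone(f, c, lo, hi, tol=1e-12):
    """Bisection: valid because an increasing f crosses the value c at most once."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < c:
            lo = mid   # solution lies to the right of mid
        else:
            hi = mid   # solution lies at or to the left of mid
    return (lo + hi) / 2

root = solve_monotone(f, 10, 0, 5)
assert abs(root - 2) < 1e-9  # f(2) = 8 + 2 = 10
```

Monotonicity is doing the real work here: it lets each comparison discard half the interval with certainty, because the function cannot loop back and cross the target value a second time.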

