The concavity of the function tells you everything. If the function is concave up on the interval where you’re approximating, the tangent line sits below the curve, so your linear approximation is an underestimate. If the function is concave down, the tangent line sits above the curve, making your approximation an overestimate.
That’s the core rule, and it comes directly from the geometry of tangent lines. The rest is knowing how to check concavity and applying the idea to actual problems.
Why Concavity Controls the Answer
A linear approximation uses the tangent line at a known point to estimate the function’s value at a nearby point. The tangent line captures the function’s value and slope at that point, but it can’t capture how the function curves. That curving is exactly what concavity describes.
When a function is concave up (curving upward, like the bottom of a bowl), the curve bends away from the tangent line in the upward direction. The tangent line stays below the function near the point of tangency. So the tangent line’s value at your target point will be less than the function’s actual value: an underestimate.
When a function is concave down (curving downward, like the top of a hill), the opposite happens. The curve bends below the tangent line, so the tangent line sits above the function. Your approximation comes out higher than the true value: an overestimate.
How to Check: The Second Derivative
Concavity is determined by the second derivative, f''(x). Here’s the process:
- Step 1: Identify the function f(x) and the base point a where you’re building the tangent line.
- Step 2: Compute the second derivative f''(x).
- Step 3: Check the sign of f''(x) on the interval between a and the point you’re estimating.
If f''(x) > 0 throughout that interval, the function is concave up and your approximation is an underestimate. If f''(x) < 0 throughout the interval, the function is concave down and your approximation is an overestimate.
The key detail: you need the second derivative to keep the same sign across the entire interval from your base point to your target point. If it changes sign (meaning there’s an inflection point in that interval), the rule doesn’t apply cleanly, and the approximation could be an overestimate on one side and an underestimate on the other.
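The check above can be sketched numerically. This is a minimal illustration, not a rigorous test: it estimates f'' by central differences at sample points between the base point and the target, and the function and helper names (`second_derivative`, `classify_approximation`) are my own choices.

```python
def second_derivative(f, t, h=1e-4):
    """Central-difference estimate of f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

def classify_approximation(f, a, x, samples=50):
    """Classify the tangent-line approximation of f at base point a,
    evaluated at x, by sampling the sign of f'' between a and x."""
    lo, hi = min(a, x), max(a, x)
    signs = set()
    for i in range(samples + 1):
        t = lo + (hi - lo) * i / samples
        d2 = second_derivative(f, t)
        if d2 > 0:
            signs.add('+')
        elif d2 < 0:
            signs.add('-')
    if signs == {'+'}:
        return 'underestimate'   # concave up: tangent line below the curve
    if signs == {'-'}:
        return 'overestimate'    # concave down: tangent line above the curve
    return 'inconclusive'        # sign change: inflection point in the interval

print(classify_approximation(lambda t: t ** 0.5, 25, 26))  # overestimate
```

Note that the `'inconclusive'` branch is exactly the inflection-point caveat: when the sampled signs disagree, the rule doesn’t apply cleanly.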
Example: Approximating a Square Root
Suppose you want to approximate √26 using a linear approximation based at a = 25, where f(x) = √x.
The first derivative is f'(x) = 1/(2√x). The second derivative is f''(x) = -1/(4x^(3/2)). Since x is positive in the interval (25, 26), the second derivative is negative everywhere on that interval. The function is concave down.
That means the tangent line at x = 25 lies above the curve of √x on this interval. Your linear approximation of √26 will be an overestimate. You’d get a value slightly larger than the true answer of roughly 5.0990.
This makes intuitive sense if you picture the square root function. It rises steeply at first and then gradually flattens out. A straight line projecting forward from any point on that curve will overshoot because the curve is always bending downward.
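Here is the square-root example carried out numerically, as a quick sketch: build the tangent line at a = 25 and compare its value at 26 against the true square root. The variable names are illustrative.

```python
import math

# Linear approximation of sqrt(26) based at a = 25:
# L(x) = f(a) + f'(a) * (x - a), with f(x) = sqrt(x), f'(x) = 1/(2*sqrt(x)).
a = 25
f_a = math.sqrt(a)                  # 5.0
fprime_a = 1 / (2 * math.sqrt(a))   # 0.1

approx = f_a + fprime_a * (26 - a)  # tangent-line value at x = 26
actual = math.sqrt(26)

# approx = 5.1, actual ≈ 5.09902, so approx > actual: an overestimate,
# exactly as the concave-down rule predicts.
print(approx, actual, approx > actual)
```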
Example: Approximating sin(x)
The linearization of sin(x) at a = 0 is simply L(x) = x, since sin(0) = 0 and cos(0) = 1. So for small values of x, you approximate sin(x) ≈ x.
The second derivative of sin(x) is -sin(x). For x values just above 0 (say, on the interval from 0 to 1), -sin(x) is negative, so sin(x) is concave down there. The linear approximation L(x) = x overestimates sin(x) for small positive values. You can verify: sin(0.1) ≈ 0.0998, which is slightly less than 0.1.
This example also illustrates the inflection point case. At x = 0 itself, f''(0) = -sin(0) = 0, which is an inflection point. For small negative values of x, -sin(x) becomes positive, so the function is concave up on that side. The same linearization L(x) = x becomes an underestimate for negative x values near zero. When an inflection point sits at your base point, the approximation switches from overestimate to underestimate depending on which side of a you’re evaluating.
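The side-switching behavior around the inflection point is easy to confirm directly. A small check, with the loop values chosen purely for illustration:

```python
import math

# The linearization of sin(x) at a = 0 is L(x) = x.
# Concave down for small x > 0 -> overestimate;
# concave up for small x < 0 -> underestimate.
for x in (0.1, -0.1):
    approx, actual = x, math.sin(x)
    side = "overestimate" if approx > actual else "underestimate"
    print(f"x = {x:+}: L(x) = {approx:+}, sin(x) = {actual:+.6f} -> {side}")
```

Running this shows L(0.1) = 0.1 above sin(0.1) ≈ 0.099833, and L(-0.1) = -0.1 below sin(-0.1) ≈ -0.099833, matching the rule on each side of the inflection point.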
How Far Off Can the Estimate Be?
The second derivative doesn’t just tell you the direction of the error. It also helps you bound how large the error is. The error in a linear approximation satisfies this inequality:
|Error| ≤ (M/2)(x – a)²
Here, M is the largest value of |f''(t)| on the interval between a and x, and (x – a) is the distance from your base point to your target point. Two things jump out from this formula. First, the error grows with the square of the distance from a, so approximations get worse fast as you move away from the base point. Second, a larger second derivative (more curvature) means more error, which makes sense: a more curved function departs from its tangent line more quickly.
For the √26 example, |f''(t)| = 1/(4t^(3/2)). On the interval (25, 26), the maximum occurs at t = 25, giving M = 1/(4 · 125) = 1/500. The distance (x – a) = 1, so the error is at most (1/500)(1/2)(1)² = 0.001. The true error turns out to be about 0.00098, just inside that bound.
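The bound calculation above can be checked in a few lines. A minimal sketch; the variable names are my own:

```python
import math

# Error bound for the sqrt(26) approximation based at a = 25:
# |error| <= (M/2) * (x - a)^2, where M = max |f''(t)| on [25, 26].
# f''(t) = -1/(4 t^(3/2)) is largest in magnitude at t = 25.
M = 1 / (4 * 25 * math.sqrt(25))       # 1/500 = 0.002
bound = (M / 2) * (26 - 25) ** 2       # 0.001

# Actual error of the tangent-line estimate L(26) = 5 + 0.1 * 1 = 5.1:
actual_error = abs(5.1 - math.sqrt(26))
print(bound, actual_error)  # the actual error (~0.00098) sits just inside the bound
```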
Quick Reference Summary
- f''(x) > 0 (concave up): tangent line below the curve, linear approximation is an underestimate
- f''(x) < 0 (concave down): tangent line above the curve, linear approximation is an overestimate
- f''(x) = 0 (inflection point): the approximation may switch from over to under depending on which side of the base point you’re evaluating
If you can remember just one visual, think of concave up as a bowl: the tangent line touches the bottom of the bowl and everything else curves above it. Concave down is an upside-down bowl: the tangent line touches the top and the curve falls away beneath it.

