An equilibrium solution is stable if nearby solutions stay close to it over time, and unstable if nearby solutions drift away. The core test depends on the type of system you’re working with: for a single equation, you check the sign of the derivative at the equilibrium; for a system of equations, you check the eigenvalues of a matrix called the Jacobian. Both approaches answer the same fundamental question: if you nudge the system slightly away from equilibrium, does it return or does it escape?
The One-Dimensional Case: Check the Derivative
For a single autonomous differential equation of the form dx/dt = f(x), an equilibrium solution exists wherever f(x) = 0. To test stability, evaluate f'(x) at that equilibrium point. If f'(x) is negative, the equilibrium is stable. If f'(x) is positive, it’s unstable. If f'(x) equals zero, the test is inconclusive and you need other methods.
The logic is straightforward. A negative derivative means f(x) is decreasing as it crosses zero. That creates a pattern where values slightly above the equilibrium produce a negative dx/dt (pushing the solution back down), and values slightly below produce a positive dx/dt (pushing the solution back up). The equilibrium acts like a valley that solutions fall into. A positive derivative reverses the pattern, making the equilibrium a hilltop that solutions roll away from.
You can also see this on a phase line. Plot f(x) and look at where it crosses the x-axis. If f(x) goes from positive to negative as x increases through the equilibrium, arrows on the phase line point inward, confirming stability. If f(x) goes from negative to positive, arrows point outward, confirming instability.
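The derivative test is easy to automate with SymPy. As a minimal sketch, the equation dx/dt = x(1 − x) below is an illustrative logistic-style example, not taken from any particular application:

```python
import sympy as sp

def classify_1d(f, x):
    """Classify each equilibrium of dx/dt = f(x) by the sign of f'(x*)."""
    fprime = sp.diff(f, x)
    results = {}
    for x_star in sp.solve(sp.Eq(f, 0), x):
        slope = fprime.subs(x, x_star)
        if slope < 0:
            results[x_star] = "stable"
        elif slope > 0:
            results[x_star] = "unstable"
        else:
            results[x_star] = "inconclusive"  # f'(x*) = 0: need other methods
    return results

x = sp.symbols('x', real=True)
# dx/dt = x(1 - x): equilibria at 0 and 1, f'(x) = 1 - 2x
print(classify_1d(x * (1 - x), x))  # {0: 'unstable', 1: 'stable'}
```

Here f'(0) = 1 > 0 marks the hilltop and f'(1) = −1 < 0 marks the valley, matching the phase-line picture.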
Systems of Equations: The Eigenvalue Test
For a system of two or more differential equations, stability depends on the eigenvalues of the Jacobian matrix evaluated at the equilibrium point. The Jacobian is a matrix of all first-order partial derivatives of your system’s functions. Once you compute it and plug in the equilibrium coordinates, you solve for its eigenvalues. The real parts of those eigenvalues tell you everything.
The classification breaks down cleanly:
- All eigenvalues have negative real parts: the equilibrium is stable (called a sink). Solutions approach it over time.
- Any eigenvalue has a positive real part: the equilibrium is unstable. Even one positive real part is enough to destabilize the system.
- No eigenvalue has a positive real part, but at least one has zero real part: the test is inconclusive for the nonlinear system. You need additional tools.
Having all negative real parts is both necessary and sufficient for asymptotic stability of the linearized system. This is the single most important rule in stability analysis for systems of ODEs.
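As a sketch of the recipe, here is the test applied to a damped-pendulum-style system, dθ/dt = ω, dω/dt = −sin θ − 0.5ω, where the damping coefficient 0.5 is an arbitrary illustrative choice. The Jacobian at the equilibrium (0, 0) is written out by hand:

```python
import numpy as np

# Jacobian of dθ/dt = ω, dω/dt = -sin(θ) - 0.5ω at (θ, ω) = (0, 0):
# ∂(dθ/dt)/∂θ = 0,   ∂(dθ/dt)/∂ω = 1
# ∂(dω/dt)/∂θ = -cos(0) = -1,   ∂(dω/dt)/∂ω = -0.5
J = np.array([[0.0, 1.0],
              [-1.0, -0.5]])

eigvals = np.linalg.eigvals(J)
if np.all(eigvals.real < 0):
    verdict = "stable (sink)"
elif np.any(eigvals.real > 0):
    verdict = "unstable"
else:
    verdict = "inconclusive (eigenvalue on the imaginary axis)"

print(eigvals, "->", verdict)  # both real parts are -0.25, so: stable (sink)
```

The eigenvalues here are complex with real part −0.25, so trajectories oscillate while decaying toward the equilibrium.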
What the Eigenvalues Look Like Geometrically
Eigenvalues don’t just tell you stable or unstable. They also reveal how solutions behave near the equilibrium, which you can see in a phase portrait.
When eigenvalues are real and both negative, the equilibrium is a stable node (or nodal sink). Trajectories approach the equilibrium without spiraling; solutions starting on an eigenvector direction follow straight paths, and the rest curve in tangent to the eigenvector of the less negative (slower) eigenvalue. When both are real and positive, it’s an unstable node (a source), with trajectories streaming outward. When one eigenvalue is positive and the other negative, the equilibrium is a saddle point. Solutions approach along one direction but veer away along another, making it unstable overall.
When eigenvalues are complex, meaning they have the form a + bi, the imaginary part introduces oscillation. If the real part (a) is negative, you get a spiral sink where trajectories wind inward toward the equilibrium. If the real part is positive, trajectories spiral outward in an unstable spiral. If the real part is exactly zero, trajectories form closed loops around the equilibrium, creating a center. Centers are considered marginally stable in the linear system, but this classification can change when nonlinear terms are included.
Repeated eigenvalues follow the same sign rule. Two identical negative eigenvalues produce a stable node; two identical positive ones produce an unstable source. The shape of the trajectories depends on whether you have one or two independent eigenvectors, but the stability conclusion doesn’t change.
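The cases above can be collected into a small classifier. This is a sketch for 2×2 Jacobians only, with an arbitrary tolerance parameter deciding when a real part counts as zero:

```python
import numpy as np

def classify_2d(J, tol=1e-12):
    """Classify the equilibrium of a 2D linear system from its eigenvalues."""
    lam = np.linalg.eigvals(np.asarray(J, dtype=float))
    re, im = lam.real, lam.imag
    if np.max(np.abs(im)) > tol:          # complex pair: oscillation
        if np.max(re) < -tol:
            return "spiral sink"
        if np.min(re) > tol:
            return "spiral source"
        return "center (marginal in the linear system)"
    if np.max(re) < -tol:                 # both real and negative
        return "stable node"
    if np.min(re) > tol:                  # both real and positive
        return "unstable node (source)"
    if re[0] * re[1] < -tol:              # opposite signs
        return "saddle (unstable)"
    return "degenerate / inconclusive"

print(classify_2d([[0, 1], [-1, -0.5]]))  # spiral sink
print(classify_2d([[1, 0], [0, -1]]))     # saddle (unstable)
```

Repeated eigenvalues fall through to the node branches, consistent with the sign rule above.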
Why Linearization Works (and When It Doesn’t)
The reason you can analyze a nonlinear system using eigenvalues of a matrix is linearization. Near an equilibrium point, you approximate the nonlinear system with a linear one using a Taylor expansion. You define small deviation variables that measure how far the state is from equilibrium, expand the equations, and drop all terms beyond first order. What remains is a linear system governed by the Jacobian matrix.
The Hartman-Grobman theorem guarantees that this approximation faithfully captures the qualitative behavior of the nonlinear system near the equilibrium, as long as the equilibrium is hyperbolic. A hyperbolic equilibrium is one where no eigenvalue has a real part of exactly zero. When all eigenvalues sit firmly in the left or right half of the complex plane, linearization gives you the correct stability answer for the original nonlinear system.
Linearization fails when any eigenvalue lands exactly on the imaginary axis (real part of zero). In that case, the higher-order nonlinear terms you dropped can push the system toward stability or instability, and the linear approximation can’t predict which. The classic example is a center in the linearized system that turns out to be a spiral sink or spiral source once nonlinear terms are included.
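That classic example can be checked numerically. The system dx/dt = −y + μx(x² + y²), dy/dt = x + μy(x² + y²) is the standard textbook construction: its linearization at the origin has eigenvalues ±i (a center), yet in polar coordinates dr/dt = μr³, so the sign of the nonlinear coefficient μ decides the outcome. The crude forward-Euler integrator and the specific step sizes below are illustrative choices, not a recommendation:

```python
import math

def radius_after(mu, t_end=1.5, dt=1e-3, x0=0.5, y0=0.0):
    """Integrate dx/dt = -y + mu*x*(x^2+y^2), dy/dt = x + mu*y*(x^2+y^2)
    with forward Euler and return the final distance from the origin."""
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        r2 = x * x + y * y
        dx = -y + mu * x * r2
        dy = x + mu * y * r2
        x, y = x + dt * dx, y + dt * dy
    return math.hypot(x, y)

print(radius_after(-1.0))  # shrinks below 0.5: the origin is a spiral sink
print(radius_after(+1.0))  # grows past 0.5: the origin is a spiral source
```

Both systems have the same linearization, but the trajectories spiral inward for μ < 0 and outward for μ > 0, exactly the ambiguity linearization cannot resolve.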
Handling the Inconclusive Cases
When linearization fails because of zero-real-part eigenvalues, two main approaches can resolve the question.
Lyapunov Functions
A Lyapunov function is an energy-like function V(x) that you construct for the system. If you can find a function that is zero at the equilibrium, positive everywhere nearby, and whose time derivative along trajectories of the system is negative (or at least non-positive), the equilibrium is stable. If the time derivative is strictly negative, the equilibrium is asymptotically stable, meaning solutions don’t just stay close but actually converge to it. This approach works directly on the nonlinear system without any linearization, so it handles cases where eigenvalues sit on the imaginary axis.
The challenge is that no general recipe exists for finding Lyapunov functions. For mechanical systems, total energy often works. For other systems, finding the right function can require creativity or systematic trial and error.
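While finding a Lyapunov function is hard, verifying a candidate is mechanical, and SymPy can do it. The system below is an invented example whose linearization at the origin has eigenvalues ±i, so the eigenvalue test says nothing; the candidate V = x² + y² settles it:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Example system: dx/dt = -y - x^3, dy/dt = x - y^3.
# Its Jacobian at the origin has eigenvalues ±i, so linearization is inconclusive.
fx = -y - x**3
fy = x - y**3

V = x**2 + y**2  # candidate Lyapunov function: zero at the origin, positive nearby

# Time derivative of V along trajectories: dV/dt = ∇V · (fx, fy)
Vdot = sp.simplify(sp.diff(V, x) * fx + sp.diff(V, y) * fy)
print(Vdot)  # -2*x**4 - 2*y**4: strictly negative away from the origin,
             # so the origin is asymptotically stable
```

The cross terms ±2xy cancel exactly, leaving only the strictly negative quartic terms, so the nonlinear system is asymptotically stable even though its linearization is a center.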
Center Manifold Reduction
When a system has some eigenvalues with zero real parts and others with negative real parts, center manifold theory lets you reduce the problem. The idea is that solutions near the equilibrium quickly collapse onto a lower-dimensional surface (the center manifold) and then evolve slowly on that surface. The stability of the full system matches the stability of the reduced system on the center manifold. This converts a difficult high-dimensional problem into a simpler lower-dimensional one that you can analyze with other tools.
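A minimal sketch of the reduction, using a textbook-style example system dx/dt = xy, dy/dt = −y − x² (one zero eigenvalue and one eigenvalue −1 at the origin). The center manifold is approximated as y = h(x) ≈ −x², and the invariance condition is checked to leading order:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# System: dx/dt = x*y, dy/dt = -y - x^2. Eigenvalues at the origin: 0 and -1.
# Try the center-manifold approximation y = h(x) = -x^2 and check the
# invariance condition h'(x) * (x*h(x)) = -h(x) - x^2 up to higher order.
h = -x**2
residual = sp.expand(sp.diff(h, x) * (x * h) - (-h - x**2))
print(residual)  # 2*x**4: the mismatch is higher order, so h is valid to O(x^2)

# Reduced dynamics on the manifold: dx/dt = x*h(x)
print(sp.expand(x * h))  # -x**3: the origin of the full system is stable
```

The two-dimensional question collapses to the one-dimensional equation dx/dt = −x³, which is asymptotically stable even though its derivative test is inconclusive at zero.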
Local vs. Global Stability
Everything discussed so far is local stability: it tells you what happens to solutions that start close to the equilibrium. A locally stable equilibrium might only attract solutions from a small neighborhood while repelling solutions that start farther away.
Global asymptotic stability means the equilibrium attracts every solution, regardless of initial conditions. The eigenvalue test only confirms local stability. To establish global stability, you typically need a Lyapunov function that is positive definite and has a strictly negative derivative across the entire state space, not just near the equilibrium. The Lyapunov function must also grow without bound as you move farther from the equilibrium (a property called radially unbounded), ensuring that no solutions can escape to infinity.
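A sketch of such a global check with SymPy, for an invented system where V = x² + y² happens to work everywhere: it is positive definite, radially unbounded, and its derivative is strictly negative away from the origin over the whole plane:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Example system: dx/dt = -x + y, dy/dt = -x - y^3
fx = -x + y
fy = -x - y**3

V = x**2 + y**2  # positive definite and radially unbounded
Vdot = sp.expand(sp.diff(V, x) * fx + sp.diff(V, y) * fy)
print(Vdot)  # -2*x**2 - 2*y**4: negative everywhere except the origin,
             # so the origin is globally asymptotically stable
```

Because the derivative estimate holds on the entire state space rather than just a neighborhood, the conclusion is global rather than local.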
A globally asymptotically stable equilibrium is automatically locally stable, but the reverse is not true. In practice, many physical systems have multiple equilibria, and global stability of any single one is impossible. Local stability analysis at each equilibrium, combined with phase portraits showing the basins of attraction, gives the complete picture.
A Shortcut: The Routh-Hurwitz Criterion
For larger systems where computing eigenvalues by hand is tedious, the Routh-Hurwitz criterion offers a shortcut. Instead of solving for eigenvalues directly, you work with the coefficients of the characteristic polynomial of the Jacobian matrix. You arrange those coefficients into a table (the Routh array), fill in the remaining rows with simple cross-multiplication formulas, and count sign changes down the first column. The number of sign changes equals the number of eigenvalues with positive real parts. If there are no sign changes, all eigenvalues have negative real parts and the equilibrium is stable.
This is especially useful for 3×3 or 4×4 systems where factoring the characteristic polynomial is impractical. You never need to find the actual eigenvalues. You just need to know whether any of them have positive real parts, and the sign-change count answers that directly.
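A sketch of the first-column computation, assuming the regular case where no zero ever appears in the first column (the degenerate cases need special handling that this version omits):

```python
def rhp_count(coeffs):
    """Count roots with positive real parts of
    a0*s^n + a1*s^(n-1) + ... + an (a0 != 0) by building the Routh
    array and counting sign changes down its first column.
    Assumes no zero ever appears in the first column."""
    n = len(coeffs) - 1
    row_prev = [float(c) for c in coeffs[0::2]]   # a0, a2, a4, ...
    row_cur = [float(c) for c in coeffs[1::2]]    # a1, a3, a5, ...
    width = len(row_prev)
    row_cur += [0.0] * (width - len(row_cur))     # pad with zeros
    first_col = [row_prev[0], row_cur[0]]
    for _ in range(n - 1):
        # Each new entry is a 2x2 cross-multiplication over the two rows above
        new_row = [
            (row_cur[0] * row_prev[j + 1] - row_prev[0] * row_cur[j + 1]) / row_cur[0]
            for j in range(width - 1)
        ] + [0.0]
        row_prev, row_cur = row_cur, new_row
        first_col.append(row_cur[0])
    signs = [v > 0 for v in first_col]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

print(rhp_count([1, 2, 3, 4]))   # 0: all eigenvalues in the left half-plane
print(rhp_count([1, 4, 1, -6]))  # 1: one eigenvalue with positive real part
```

The second example is s³ + 4s² + s − 6 = (s − 1)(s + 2)(s + 3), and the single sign change in the first column flags the one right-half-plane root without the polynomial ever being factored.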

