A steady state vector is a probability vector that remains unchanged when multiplied by a transition matrix. If you have a system that moves between different states over time (like weather patterns shifting between sunny and rainy days), the steady state vector tells you the long-run proportions the system settles into, regardless of where it started. Mathematically, it satisfies the equation x = Px, where P is the transition matrix and x is the vector you’re solving for.
The Core Idea
Steady state vectors come from Markov chains, which are mathematical models for systems that hop between a fixed set of states with known probabilities. A transition matrix captures all those probabilities: each entry tells you the chance of moving from one state to another in a single step.
When you multiply the transition matrix by a probability vector (a vector whose entries are non-negative and sum to 1), you get a new probability vector representing the distribution after one step. The steady state vector is special because multiplying it by the transition matrix gives back the exact same vector. Nothing changes. The system has reached equilibrium.
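This one-step update is easy to see numerically. A minimal NumPy sketch, using a hypothetical two-state weather matrix (the numbers are illustrative, not from the text):

```python
import numpy as np

# Hypothetical row-stochastic transition matrix: rows are "today",
# columns are "tomorrow", and each row sums to 1.
P = np.array([[0.9, 0.1],   # sunny -> sunny, sunny -> rainy
              [0.5, 0.5]])  # rainy -> sunny, rainy -> rainy

v = np.array([0.2, 0.8])    # current distribution over (sunny, rainy)
print(v @ P)                # distribution after one step; still sums to 1

# The steady state for this P is (5/6, 1/6): multiplying changes nothing.
pi = np.array([5/6, 1/6])
print(pi @ P)               # the exact same vector comes back
```

Any probability vector stays a probability vector after one step; only the steady state comes back completely unchanged.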
This is why it’s also called a stationary distribution or invariant distribution. Once the system reaches this distribution, it stays there forever.
Connection to Eigenvalues
The equation x = Px is really just an eigenvalue equation with eigenvalue 1. In other words, the steady state vector is an eigenvector of the transition matrix corresponding to the eigenvalue λ = 1. Every stochastic matrix (a matrix whose columns or rows sum to 1) is guaranteed to have 1 as an eigenvalue, so there’s always at least one candidate for a steady state vector.
The key insight from MIT’s treatment of Markov eigenvalues: since each row of a transition matrix sums to 1, the vector of all ones is automatically a right eigenvector with eigenvalue 1. The steady state vector is the corresponding left eigenvector, normalized so its entries sum to 1. That normalization is what makes it a proper probability distribution rather than just any eigenvector.
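Both facts are easy to verify numerically. In the sketch below the 2x2 matrix is again a hypothetical example; a left eigenvector of P is found as a right eigenvector of its transpose:

```python
import numpy as np

# Hypothetical row-stochastic matrix (each row sums to 1)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The all-ones vector is automatically a right eigenvector with eigenvalue 1
print(P @ np.ones(2))  # [1. 1.]

# The steady state is the left eigenvector for eigenvalue 1, i.e. a right
# eigenvector of P.T, normalized so its entries sum to 1.
vals, vecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(vals - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(vecs[:, idx])
pi = pi / pi.sum()                    # normalize to a probability vector
print(pi)                             # steady state for this P
```

The normalization step also fixes the arbitrary sign and scale that `eig` returns, turning a generic eigenvector into a proper distribution.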
How to Calculate It
Finding a steady state vector involves two simultaneous requirements: the vector must satisfy the matrix equation, and its entries must sum to 1. Here’s the process step by step.
Start with the equation x = Px, which you can rewrite as (P - I)x = 0, where I is the identity matrix. This gives you a system of linear equations. Because the matrix (P - I) is always singular (its determinant is zero), the system has infinitely many solutions. The constraint that all entries sum to 1 narrows it down to exactly one solution.
As a concrete example, consider a queueing system with four states and this transition matrix:
- From state 1: 1/2 chance of staying, 1/2 chance of moving to state 2
- From state 2: 1/6 to state 1, 1/2 staying, 1/3 to state 3
- From state 3: 1/3 to state 2, 1/2 staying, 1/6 to state 4
- From state 4: 1/2 to state 3, 1/2 staying
Setting up v = vP (here v is a row vector, since this matrix is row-stochastic; it is the same equation as x = Px written for the transposed convention) and solving the resulting equations alongside the constraint a + b + c + d = 1 gives the steady state vector (0.125, 0.375, 0.375, 0.125). This means the system spends 12.5% of its time in states 1 and 4, and 37.5% of its time in states 2 and 3 over the long run.
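One way to reproduce this result is to stack the normalization constraint onto the singular system and solve with least squares (a numerical sketch, not the only method):

```python
import numpy as np

# Transition matrix from the four-state queueing example (rows sum to 1)
P = np.array([[1/2, 1/2, 0,   0  ],
              [1/6, 1/2, 1/3, 0  ],
              [0,   1/3, 1/2, 1/6],
              [0,   0,   1/2, 1/2]])

# v(P - I) = 0 alone is singular, so append the equation sum(v) = 1
# as an extra row and solve the stacked system by least squares.
A = np.vstack([(P - np.eye(4)).T, np.ones(4)])
b = np.array([0, 0, 0, 0, 1])
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print(v)  # [0.125 0.375 0.375 0.125]
```

Because the steady state satisfies every equation exactly, least squares recovers it to machine precision here.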
When Does a Unique Steady State Exist?
Not every Markov chain has a single steady state vector. The conditions that guarantee existence and uniqueness come from a result called the Perron-Frobenius theorem. A transition matrix is called “regular” if, for some number of steps k, there’s a positive probability of getting from any state to any other state in exactly k steps. Equivalently, some power of the matrix has all strictly positive entries.
When a matrix is regular, three things are guaranteed:
- A unique steady state vector exists
- All its entries are strictly positive (every state gets visited)
- The system converges to this distribution no matter where it starts
If the chain isn’t regular, you can run into problems. A chain that’s “reducible” (some states can’t reach others) might have multiple steady state vectors. A chain that’s “periodic” (it cycles through states in a fixed pattern) might never settle down to a single distribution, even though a steady state vector technically exists.
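A regularity check following the definition above: raise the matrix to successive powers until every entry is strictly positive (the cutoff of 100 powers is an arbitrary practical choice):

```python
import numpy as np

def is_regular(P, max_power=100):
    """Return True if some power of P up to max_power has all
    strictly positive entries (the definition of a regular matrix)."""
    Q = P.copy()
    for _ in range(max_power):
        if np.all(Q > 0):
            return True
        Q = Q @ P
    return False

# The four-state queueing matrix from earlier is regular...
P = np.array([[1/2, 1/2, 0,   0  ],
              [1/6, 1/2, 1/3, 0  ],
              [0,   1/3, 1/2, 1/6],
              [0,   0,   1/2, 1/2]])
print(is_regular(P))  # True

# ...but a pure two-state cycle is periodic: its powers alternate
# between the cycle and the identity, so no power is all-positive.
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(is_regular(C))  # False
```

The cycle matrix C still has a steady state vector (0.5, 0.5), but a chain started in one state never converges to it; the distribution just oscillates.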
How Fast Does Convergence Happen?
The speed at which a Markov chain approaches its steady state depends on the second-largest eigenvalue (in absolute value) of the transition matrix. The largest eigenvalue is always 1 (that’s the steady state itself). If you call the next largest eigenvalue λ*, the distance between the current distribution and the steady state shrinks roughly by a factor of |λ*| at each step.
When |λ*| is close to 0, convergence is fast: the system forgets its initial state quickly. When |λ*| is close to 1, convergence is slow: the system takes many steps to settle down. For a matrix that can be diagonalized, the gap between the current distribution and the steady state after k steps is bounded by a term proportional to |λ*|^k, which decays exponentially. This “spectral gap” (the difference 1 − |λ*|) is one of the most important quantities in applied probability because it tells you how many steps you need before the steady state prediction becomes reliable.
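The geometric decay is visible in a small experiment; the two-state matrix here is a hypothetical example whose second eigenvalue is 0.4:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([5/6, 1/6])  # steady state of this particular P

# Eigenvalue magnitudes sorted in descending order: the largest is 1
vals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lam2 = vals[1]
print(lam2)  # ~0.4 for this matrix

# The L1 distance to the steady state shrinks by a factor of ~0.4 per step
v = np.array([0.0, 1.0])  # start with all mass in the second state
for k in range(1, 4):
    v = v @ P
    print(k, np.abs(v - pi).sum())
```

Each printed distance is 0.4 times the previous one, matching the |λ*|^k bound exactly in this small diagonalizable case.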
PageRank: The Most Famous Application
Google’s original PageRank algorithm is essentially a steady state vector calculation. The web is modeled as a giant Markov chain where each webpage is a state and each hyperlink is a transition. A random surfer clicking links at random would eventually settle into a steady state distribution across all pages. The steady state probability of each page becomes its “importance score,” which is then used to rank search results.
The Google matrix is constructed to be regular (using a small probability of jumping to any random page), which guarantees a unique positive steady state vector exists via the Perron-Frobenius theorem. The entries of that vector, after normalization, give each page its PageRank score. Pages that many other important pages link to end up with higher steady state probabilities, which is why the algorithm captures an intuitive notion of importance.
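A toy sketch of the idea (the three-page link structure is made up; d = 0.85 is the damping factor commonly cited for PageRank):

```python
import numpy as np

# Hypothetical 3-page web: page 0 links to 1 and 2,
# page 1 links to 2, page 2 links to 0.
links = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
H = links / links.sum(axis=1, keepdims=True)  # row-stochastic link matrix

# Damping: with probability d follow a link, otherwise jump to a random
# page. Every entry of G is positive, so G is regular and a unique
# positive steady state exists by Perron-Frobenius.
d = 0.85
n = H.shape[0]
G = d * H + (1 - d) / n * np.ones((n, n))

# Power iteration: repeatedly stepping the distribution converges
# to the steady state, whose entries are the PageRank scores.
r = np.ones(n) / n
for _ in range(100):
    r = r @ G
print(r)  # scores sum to 1; page 2, with the most in-links, ranks highest
```

Real implementations exploit the sparsity of the link matrix rather than forming G densely, but the fixed point being computed is the same steady state vector.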
Other Practical Uses
Steady state vectors show up anywhere a system transitions between discrete states over time. In genetics and evolutionary biology, populations moving between different genetic configurations can be modeled as Markov chains, and the steady state predicts the long-run genetic equilibrium. Optimization algorithms inspired by evolution use steady state analysis to prove that the population converges to the globally optimal solution under certain conditions.
In economics, steady state vectors predict the long-run distribution of market share when customers switch between brands with fixed probabilities. In queueing theory, they predict how busy a server will be on average. In physics, they describe thermal equilibrium in certain particle systems. The underlying math is identical in every case: set up the transition matrix, find the eigenvector for eigenvalue 1, normalize it so the entries sum to 1, and you have your long-run prediction.
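That closing recipe can be bundled into a small helper function; the brand-switching probabilities below are hypothetical:

```python
import numpy as np

def steady_state(P):
    """Steady state of a row-stochastic matrix P: the left eigenvector
    for eigenvalue 1, normalized so its entries sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# Hypothetical brand-switching model: each month 20% of brand A's
# customers move to B, and 30% of B's customers move to A.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
print(steady_state(P))  # long-run market share: [0.6 0.4]
```

The long-run 60/40 split depends only on the switching probabilities, not on which brand starts with more customers, which is exactly the point of the steady state.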

