The steady state approximation is a technique in chemical kinetics that simplifies complex, multi-step reactions by assuming that certain short-lived intermediates are consumed just as fast as they’re produced. This means the concentration of those intermediates stays essentially constant throughout most of the reaction, even though reactants are being used up and products are building up. The core assumption translates to a simple mathematical statement: the rate of change of the intermediate’s concentration equals zero. That single assumption lets you solve for the intermediate’s concentration algebraically and plug it into a rate law you can actually use.
Why Intermediates Matter
Most real chemical reactions don’t happen in a single step. Instead, molecules go through one or more intermediate species on the way from reactants to products. These intermediates are typically reactive and unstable, which means they don’t accumulate to high concentrations. They form, exist briefly, and then get converted into something else.
The problem is that these intermediates make the math complicated. If you write out the full set of rate equations for every species in a multi-step mechanism, you get a system of differential equations that can be difficult or impossible to solve exactly. The steady state approximation cuts through this by letting you replace those differential equations with simple algebraic ones.
The Core Math
For any intermediate (call it “Int”), the approximation sets its rate of change to zero:
d[Int]/dt = 0
This doesn’t mean the intermediate isn’t being made or destroyed. It means the rate at which it’s being produced equals the rate at which it’s being consumed, so its concentration holds roughly constant. You write out all the reactions that create the intermediate and all the reactions that remove it, set those rates equal, and solve for the intermediate’s concentration in terms of the reactants and rate constants you already know.
Once you have that expression, you substitute it into the rate equation for the overall product formation. The result is a rate law written entirely in terms of measurable quantities: reactant concentrations and rate constants. No unmeasurable intermediate concentrations remain.
A Classic Example: Two-Step Reaction
Consider a simple mechanism where reactant A forms intermediate I in a reversible first step (with forward rate constant k₁ and reverse rate constant k₋₁), and then I irreversibly converts to product P (with rate constant k₂):
Step 1: A ⇌ I (forward k₁, reverse k₋₁)
Step 2: I → P (rate constant k₂)
The intermediate I is being formed at rate k₁[A] and consumed by two processes: it can revert back to A at rate k₋₁[I], or it can move forward to product at rate k₂[I]. Setting production equal to consumption gives:
k₁[A] = k₋₁[I] + k₂[I]
Solving for [I]:
[I] = k₁[A] / (k₋₁ + k₂)
The rate of product formation is k₂[I], so substituting gives a clean rate law: rate = k₁k₂[A] / (k₋₁ + k₂). Everything in that expression is either a rate constant or a reactant concentration, both of which you can measure.
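This substitution is easy to check numerically. The sketch below (plain Python with forward-Euler integration and illustrative rate constants, k₁ = 1 and k₋₁ = k₂ = 50, chosen so the intermediate is consumed much faster than it forms) integrates the full two-step mechanism and compares [A](t) against the exponential decay that the steady-state rate law predicts.

```python
import math

# Illustrative rate constants: consumption of I (k_m1 + k2 = 100)
# is much faster than its formation (k1 = 1), so the SSA should hold.
k1, k_m1, k2 = 1.0, 50.0, 50.0

# Full mechanism A <=> I -> P, integrated with forward Euler.
A, I, P = 1.0, 0.0, 0.0
dt, t_end = 5e-4, 2.0
for _ in range(int(t_end / dt)):
    dA = -k1 * A + k_m1 * I
    dI = k1 * A - (k_m1 + k2) * I
    dP = k2 * I
    A, I, P = A + dA * dt, I + dI * dt, P + dP * dt

# SSA prediction: d[A]/dt = -k_eff [A] with k_eff = k1 k2 / (k_m1 + k2),
# so [A](t) = [A]0 * exp(-k_eff * t).
k_eff = k1 * k2 / (k_m1 + k2)
A_ssa = math.exp(-k_eff * t_end)

# The intermediate should track its quasi-steady value k1[A]/(k_m1 + k2).
I_ssa = k1 * A / (k_m1 + k2)
print(f"numerical [A] = {A:.4f}, SSA [A] = {A_ssa:.4f}")
print(f"numerical [I] = {I:.5f}, SSA [I] = {I_ssa:.5f}")
```

With these constants the two curves agree to better than a percent; making k₋₁ + k₂ comparable to k₁ instead would make the disagreement obvious.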
How It Connects to Enzyme Kinetics
The most famous application of the steady state approximation is the derivation of the Michaelis-Menten equation, which describes how enzymes process substrates. In this model, an enzyme (E) binds a substrate (S) to form a complex (ES), and that complex either falls apart back into E and S, or proceeds to release product (P) and free enzyme.
The enzyme-substrate complex ES is the intermediate. Applying the steady state approximation means setting the rate of ES formation equal to the rate of ES breakdown:
Rate of ES formation = k₁[E][S]
Rate of ES breakdown = k₋₁[ES] + k₂[ES]
There’s one extra trick: because the total enzyme concentration is fixed, the free enzyme concentration equals the total enzyme minus whatever is tied up in the complex: [E] = [E_total] – [ES]. Substituting that in and solving for [ES] leads directly to the Michaelis-Menten equation, where the Michaelis constant Kₘ equals (k₋₁ + k₂) / k₁. This constant captures how tightly the enzyme holds onto its substrate relative to how quickly it processes it.
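As a sketch of the end result (plain Python, with hypothetical rate constants chosen only for illustration), the steady-state expression for [ES] produces the familiar hyperbolic rate law, where the rate hits exactly half its maximum when [S] = Kₘ:

```python
def michaelis_menten_rate(S, E_total, k1, k_m1, k2):
    """Rate v = k2[ES], with [ES] from the steady-state approximation.

    Setting k1[E][S] = (k_m1 + k2)[ES] and using [E] = E_total - [ES]
    gives [ES] = E_total * S / (Km + S), with Km = (k_m1 + k2) / k1.
    """
    Km = (k_m1 + k2) / k1
    ES = E_total * S / (Km + S)
    return k2 * ES

# Hypothetical constants, for illustration only.
k1, k_m1, k2, E_total = 1e6, 500.0, 100.0, 1e-6
Km = (k_m1 + k2) / k1      # here 6e-4
v_max = k2 * E_total       # limiting rate as [S] grows large

print(michaelis_menten_rate(Km, E_total, k1, k_m1, k2))        # half of v_max
print(michaelis_menten_rate(100 * Km, E_total, k1, k_m1, k2))  # close to v_max
```

The saturation behavior falls out of the algebra: at low [S] the rate grows roughly linearly, while at high [S] nearly all enzyme is tied up as ES and the rate plateaus at k₂[E_total].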
The approximation holds well in enzyme kinetics when the total enzyme concentration is much lower than the sum of the substrate concentration and Kₘ. In practical terms, this is almost always the case in biological systems, where enzymes are present in tiny amounts compared to their substrates.
When the Approximation Is Valid
The steady state approximation works when the intermediate is highly reactive, meaning it gets consumed much faster than it builds up. In terms of rate constants, this generally requires that the steps removing the intermediate (both the reverse of step 1 and the forward step 2) are fast relative to the step that creates it. When k₋₁ + k₂ is much larger than k₁, the intermediate never has a chance to accumulate, and the approximation is excellent.
There’s also a brief initial period at the very start of a reaction, sometimes called the induction or pre-steady-state period, during which the intermediate’s concentration is still climbing toward its steady value. During this transient phase, the approximation doesn’t apply. In most practical situations, this period is extremely short compared to the overall reaction time, so it has negligible effect on the rate law you derive. But in specialized experiments designed to probe the very first moments of enzyme catalysis (pre-steady-state kinetics), scientists specifically measure what happens before steady state kicks in.
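The transient can be made visible with a small numerical sketch (plain Python, using the same illustrative two-step constants as before): shortly after t = 0 the intermediate is still well below its steady value, but after several multiples of 1/(k₋₁ + k₂) it tracks the steady-state expression closely.

```python
# Two-step mechanism A <=> I -> P with illustrative constants; the
# relaxation time for I is roughly 1 / (k_m1 + k2) = 0.01 time units.
k1, k_m1, k2 = 1.0, 50.0, 50.0
tau = 1.0 / (k_m1 + k2)

A, I = 1.0, 0.0
dt = 1e-5
samples = {}
t = 0.0
for step in range(int(0.2 / dt)):
    dA = -k1 * A + k_m1 * I
    dI = k1 * A - (k_m1 + k2) * I
    A, I = A + dA * dt, I + dI * dt
    t += dt
    # Record [I] relative to its steady-state prediction k1[A]/(k_m1+k2),
    # once mid-transient and once long after the induction period.
    if step in (int(0.5 * tau / dt), int(15 * tau / dt)):
        samples[round(t, 4)] = I / (k1 * A / (k_m1 + k2))

for t_s, ratio in samples.items():
    print(f"t = {t_s}: [I] / [I]_ss = {ratio:.3f}")
```

Mid-transient the ratio is well under 1; by fifteen relaxation times it sits within a fraction of a percent of the steady-state value, which is why the induction period is usually safe to ignore.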
How It Differs From the Equilibrium Approximation
Students often confuse the steady state approximation with the equilibrium approximation, but they’re built on different assumptions. The equilibrium approximation assumes the first step of a reaction reaches full equilibrium before the second step proceeds. This requires that the reverse of step 1 is much faster than step 2 (k₋₁ >> k₂), so molecules are mostly shuttling back and forth between reactant and intermediate, with only a slow leak toward product.
The steady state approximation is more general. It doesn’t require the first step to be at equilibrium. It only requires that the intermediate’s concentration stays roughly constant, which can happen even when k₂ is comparable to k₋₁. In fact, if you take the steady state result for the two-step mechanism and impose the additional condition that k₋₁ is much larger than k₂, you recover the equilibrium approximation as a special case. The equilibrium approach is a subset of the steady state approach, not an alternative to it.
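This limiting behavior is easy to verify with a couple of lines of Python (arbitrary rate constants, chosen only to make the comparison visible):

```python
def k_eff_ssa(k1, k_m1, k2):
    """Effective rate constant from the steady-state approximation."""
    return k1 * k2 / (k_m1 + k2)

def k_eff_eq(k1, k_m1, k2):
    """Effective rate constant from the equilibrium approximation:
    K_eq = k1 / k_m1 for step 1, multiplied by k2 for the slow second step."""
    return (k1 / k_m1) * k2

# When k_m1 >> k2 the two approximations agree...
print(k_eff_ssa(1.0, 1000.0, 1.0), k_eff_eq(1.0, 1000.0, 1.0))

# ...but when k2 is comparable to k_m1 they diverge: here the
# equilibrium form overestimates the rate by a factor of two.
print(k_eff_ssa(1.0, 1.0, 1.0), k_eff_eq(1.0, 1.0, 1.0))
```

The divergence in the second case reflects the equilibrium approximation ignoring the k₂ term in the denominator, i.e. pretending the intermediate is not also being drained toward product.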
Geometrically, researchers have described these two approximations as different surfaces in the mathematical space of all possible concentrations. The true behavior of the system passes close to both surfaces, but the steady state surface is generally a better shadow of reality when the intermediate is short-lived, while the equilibrium surface works when the first step is genuinely fast and reversible.
Applications in Atmospheric Chemistry
Beyond the classroom, the steady state approximation is a workhorse in atmospheric chemistry, particularly for understanding ozone. In the stratosphere, oxygen atoms are a reactive intermediate produced when ultraviolet light breaks apart oxygen molecules (O₂) and ozone (O₃). These free oxygen atoms are extremely reactive and short-lived, making them ideal candidates for the approximation.
By applying the steady state assumption to oxygen atoms, atmospheric scientists can express the oxygen atom concentration in terms of measurable quantities: the concentrations of O₂ and O₃, the rates at which sunlight breaks them apart, and the rate constants for the recombination reactions. This approach is essential for modeling how the ozone layer responds to pollutants, because directly measuring the concentration of free oxygen atoms in the stratosphere is impractical. The steady state expression replaces that unmeasurable quantity with things you can actually track from satellites and ground stations.
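A minimal sketch of this bookkeeping (plain Python, with order-of-magnitude values that are illustrative rather than measured) for the classic Chapman reactions: O atoms are produced by photolysis of O₂ (two atoms each) and O₃ (one atom each), and removed by three-body recombination with O₂ and by reaction with O₃.

```python
# Chapman-type O-atom budget with illustrative (not measured) values,
# in molecules/cm^3 and per-second units; M is the total air density.
J_O2 = 3e-12    # photolysis rate of O2 (assumed value)
J_O3 = 5e-4     # photolysis rate of O3 (assumed value)
k_rec = 6e-34   # O + O2 + M -> O3 (cm^6/s, assumed)
k_loss = 8e-15  # O + O3 -> 2 O2 (cm^3/s, assumed)
O2, O3, M = 4e17, 5e12, 2e18

# Steady state for O: production by photolysis = loss by reaction.
production = 2 * J_O2 * O2 + J_O3 * O3
loss_coeff = k_rec * O2 * M + k_loss * O3
O_ss = production / loss_coeff

# At [O] = O_ss the net rate of change of O is (numerically) zero.
residual = production - loss_coeff * O_ss
print(f"[O]_ss ~ {O_ss:.2e} per cm^3, residual d[O]/dt = {residual:.1e}")
```

The point of the exercise is the last line: once [O] is pinned at its steady-state value, everything on the right-hand side is a quantity that can be measured or modeled, even though [O] itself cannot be observed directly.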
Practical Limitations
The approximation breaks down in a few predictable situations. If the intermediate is relatively stable and accumulates to significant concentrations, the assumption that its rate of change is zero becomes inaccurate. This can happen when the step consuming the intermediate is slow compared to the step producing it. You’ll get a rate law that looks clean but gives predictions that don’t match experimental data.
It also struggles with very fast reactions where the pre-steady-state period occupies a meaningful fraction of the total reaction time, or in systems where concentrations change abruptly (such as pulse experiments in laser chemistry). In spatially heterogeneous environments, where reactants aren’t evenly mixed, the approximation can introduce additional errors because local concentrations may deviate significantly from the average values used in the math.
Despite these edge cases, the steady state approximation remains one of the most practical tools in chemical kinetics. It turns otherwise unsolvable systems of equations into manageable algebra, producing rate laws that match experimental observations across enzyme biochemistry, atmospheric science, combustion chemistry, and polymer reactions.

