Solving an ordinary differential equation (ODE) starts with identifying what type you’re dealing with, then applying the right technique. The method you use depends on two things: the order of the equation (whether it involves a first derivative, second derivative, or higher) and whether the equation is linear or nonlinear. Once you classify the equation, the solving process becomes a structured set of steps.
Classifying Your ODE
Before picking a method, you need to know three things about your equation.
- Order: look at the highest derivative present. If the highest derivative is dy/dx, it’s first order; if it’s d²y/dx², it’s second order.
- Linearity: an ODE is linear if the unknown function y and all its derivatives appear only to the first power, are never multiplied together, and aren’t inside other functions like sin(y) or y². If any of those things happen, the equation is nonlinear.
- Homogeneity (for linear equations): if every term involves y or one of its derivatives (nothing is a standalone function of x), the equation is homogeneous. If there’s an extra term that depends only on x, it’s nonhomogeneous.
These distinctions matter because each combination has its own solving technique. A first-order separable equation takes two minutes of integration. A second-order nonhomogeneous equation with constant coefficients requires several steps but follows a clear recipe. A nonlinear equation might not have a closed-form solution at all.
First-Order Separable Equations
The simplest ODEs to solve are separable first-order equations. An equation is separable if you can rearrange it so all the y terms (including dy) are on one side and all the x terms (including dx) are on the other. The general form looks like N(y) dy = M(x) dx.
Once separated, you integrate both sides independently: ∫N(y) dy = ∫M(x) dx. Each integration produces its own constant, but you can combine them into a single constant C on one side. This gives you an implicit solution. If you’re lucky, you can solve algebraically for y to get an explicit solution y(x).
For example, if you have y⁻² dy = 6x dx, integrating both sides gives −y⁻¹ = 3x² + C. If you have an initial condition like y(0) = 1, plug in x = 0 and y = 1 to find C, then solve for y. The constant of integration is what turns a general solution (a family of curves) into a particular solution (one specific curve).
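Computer algebra systems can carry out the same separate-and-integrate routine symbolically. Here is a minimal sketch using SymPy (assuming it is available) on the equation above, rewritten as dy/dx = 6xy²:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# y**-2 dy = 6x dx is equivalent to the first-order ODE y' = 6*x*y**2
ode = sp.Eq(y(x).diff(x), 6 * x * y(x) ** 2)

# General solution: SymPy carries the arbitrary constant as C1
general = sp.dsolve(ode)
print(general)

# Apply the initial condition y(0) = 1 to get the particular solution
particular = sp.dsolve(ode, ics={y(0): 1})
print(particular)
```

Passing `ics` pins down the constant of integration automatically, mirroring the hand calculation of plugging in x = 0 and y = 1.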
First-Order Linear Equations
A first-order linear ODE has the standard form dy/dx + P(x)y = Q(x). The technique here is the integrating factor. You compute a function μ(x) = e^(∫P(x) dx), then multiply the entire equation by it. This transforms the left side into the derivative of the product μ(x)·y, which you can integrate directly.
The steps are always the same: put the equation in standard form, calculate the integrating factor, multiply through, recognize the left side as a product rule derivative, integrate both sides, and solve for y. The method works for any first-order linear ODE regardless of what P(x) and Q(x) look like, as long as you can evaluate the integrals.
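The recipe can be followed step by step symbolically. This SymPy sketch works an assumed example, y′ + 2y = x (so P(x) = 2 and Q(x) = x), rather than a specific equation from the text:

```python
import sympy as sp

x, C = sp.symbols("x C")

# Standard form y' + P(x)*y = Q(x) with P(x) = 2, Q(x) = x
P = sp.Integer(2)
Q = x

# Step 1: integrating factor mu = exp(integral of P dx)
mu = sp.exp(sp.integrate(P, x))          # e**(2x)

# Step 2: multiplying through makes the left side (mu*y)', so
# mu*y = integral of mu*Q dx + C
rhs = sp.integrate(mu * Q, x) + C

# Step 3: solve for y
sol = sp.expand(rhs / mu)
print(sol)   # C*exp(-2*x) + x/2 - 1/4, up to term ordering
```

Substituting `sol` back into y′ + 2y confirms it reproduces Q(x) = x for any value of the constant C.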
Second-Order Linear Equations With Constant Coefficients
These are the workhorses of applied math and physics. The general form is ay″ + by′ + cy = 0 for the homogeneous case, where a, b, and c are constants. The key insight is that exponential functions y = e^(rx) are natural candidates for solutions. Substituting this guess into the equation produces the characteristic equation: ar² + br + c = 0.
This is just a quadratic, and the nature of its roots determines the form of the general solution.
- Two distinct real roots r₁ and r₂: The solution is y(x) = Ae^(r₁x) + Be^(r₂x).
- One repeated root r: The solution is y(x) = Ae^(rx) + Bxe^(rx). The extra factor of x in the second term accounts for the missing second independent solution.
- Complex roots r ± si: The solution is y(x) = e^(rx)(C cos(sx) + D sin(sx)). This produces oscillating behavior modulated by exponential growth or decay, depending on the sign of r.
The constants A, B, C, or D are determined by two initial conditions, typically the value of y and y′ at some starting point.
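The three-way root classification translates directly into a small helper. The function name and the example equations below are illustrative, not from the text:

```python
import math

def general_solution_form(a, b, c):
    """Classify the roots of a*r**2 + b*r + c = 0 and describe y(x)."""
    disc = b ** 2 - 4 * a * c
    if disc > 0:
        # Two distinct real roots
        r1 = (-b + math.sqrt(disc)) / (2 * a)
        r2 = (-b - math.sqrt(disc)) / (2 * a)
        return f"y = A*exp({r1}*x) + B*exp({r2}*x)"
    elif disc == 0:
        # One repeated root: the extra factor of x supplies independence
        r = -b / (2 * a)
        return f"y = (A + B*x)*exp({r}*x)"
    else:
        # Complex roots r ± si: oscillation inside an exponential envelope
        re = -b / (2 * a)
        im = math.sqrt(-disc) / (2 * a)
        return f"y = exp({re}*x)*(C*cos({im}*x) + D*sin({im}*x))"

# y'' - 3y' + 2y = 0 has characteristic roots 2 and 1
print(general_solution_form(1, -3, 2))
# y'' + 4y = 0 has roots ±2i: pure oscillation
print(general_solution_form(1, 0, 4))
```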
Solving Nonhomogeneous Equations
When the equation has a nonzero right-hand side, like ay″ + by′ + cy = g(t), you need two pieces: the general solution to the corresponding homogeneous equation (set g(t) = 0), plus any one particular solution to the full equation. The complete solution is the sum of both.
Undetermined Coefficients
This method works when g(t) is an exponential, polynomial, sine, cosine, or a combination of these. You guess a solution form that mirrors g(t), plug it in, and solve for the unknown coefficients. The guesses follow a simple pattern:
- If g(t) = ae^(βt): guess Ae^(βt).
- If g(t) involves cos(βt) or sin(βt): guess A cos(βt) + B sin(βt). You always include both sine and cosine even if only one appears in g(t).
- If g(t) is a polynomial of degree n: guess a full polynomial of degree n with all coefficients unknown.
One catch: if your guess already appears in the homogeneous solution, multiply it by t (or t² if needed) to ensure the particular solution is independent. After substituting the guess into the equation, match coefficients on both sides to pin down the unknowns.
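The coefficient-matching step can be automated. This sketch applies the method to an assumed example, y″ − 3y′ + 2y = e^(5t); since the homogeneous solutions involve e^t and e^(2t), the guess Ae^(5t) is independent of them and needs no extra factor of t:

```python
import sympy as sp

t, A = sp.symbols("t A")

# Guess mirroring g(t) = exp(5t)
guess = A * sp.exp(5 * t)

# Substitute the guess into y'' - 3y' + 2y = exp(5t)
residual = guess.diff(t, 2) - 3 * guess.diff(t) + 2 * guess - sp.exp(5 * t)

# Match coefficients: the residual must vanish identically
coeff = sp.solve(sp.Eq(residual, 0), A)
print(coeff)   # [1/12]
```

By hand, the substitution gives (25A − 15A + 2A)e^(5t) = e^(5t), so 12A = 1 and A = 1/12, matching the computed result.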
Variation of Parameters
This method handles any g(t), not just the nice forms above. First put the equation in standard form y″ + p(t)y′ + q(t)y = g(t) by dividing through by the leading coefficient a. If y₁ and y₂ are the two independent solutions to the homogeneous equation, the particular solution is:
Yₚ(t) = −y₁ ∫[y₂ g(t) / W] dt + y₂ ∫[y₁ g(t) / W] dt
Here W is the Wronskian, defined as W = y₁y₂′ − y₂y₁′. The Wronskian measures whether your two homogeneous solutions are truly independent (it must be nonzero). Variation of parameters is more general than undetermined coefficients but often involves messier integrals.
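The formula can be exercised symbolically. This sketch applies it to an assumed example, y″ + y = 1/cos(t), whose forcing term is outside the reach of undetermined coefficients; the homogeneous solutions are cos(t) and sin(t):

```python
import sympy as sp

t = sp.symbols("t")

# Homogeneous solutions of y'' + y = 0, and the forcing term
y1, y2 = sp.cos(t), sp.sin(t)
g = 1 / sp.cos(t)

# Wronskian W = y1*y2' - y2*y1' (here it simplifies to 1)
W = sp.simplify(y1 * y2.diff(t) - y2 * y1.diff(t))

# Particular solution by variation of parameters
Yp = -y1 * sp.integrate(y2 * g / W, t) + y2 * sp.integrate(y1 * g / W, t)
Yp = sp.simplify(Yp)
print(Yp)

# Check: Yp'' + Yp reproduces the forcing term
assert sp.simplify(Yp.diff(t, 2) + Yp - g) == 0
```

Note that any antiderivative works in the two integrals: changing the constant of integration only adds a multiple of a homogeneous solution, which the particular solution is allowed to absorb.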
Using Laplace Transforms
The Laplace transform converts a differential equation in time into an algebraic equation in a new variable s. This is especially useful for equations with initial conditions or discontinuous forcing functions. The key properties that make this work are the derivative rules:
- First derivative: The transform of f′(t) becomes sF(s) − f(0).
- Second derivative: The transform of f″(t) becomes s²F(s) − sf(0) − f′(0).
The procedure is: take the Laplace transform of every term in the equation, substitute initial conditions, solve the resulting algebraic equation for F(s), then use inverse Laplace transform tables (or partial fractions) to convert back to y(t). Derivatives become multiplication by s, which is why differential equations become algebraic ones. The initial conditions get baked in automatically, so you never need to solve for constants separately.
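The whole pipeline can be walked through with SymPy. This sketch solves the assumed example y″ + y = 0 with y(0) = 0 and y′(0) = 1 (whose exact solution is sin(t)) by transforming, solving algebraically, and inverting:

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
F = sp.symbols("F")

# Transform y'' + y = 0 using the second-derivative rule:
# s**2*F - s*y(0) - y'(0) + F = 0, with y(0) = 0 and y'(0) = 1
y0, yp0 = 0, 1
eq = sp.Eq(s ** 2 * F - s * y0 - yp0 + F, 0)

# Solve the algebraic equation for F(s)
Fs = sp.solve(eq, F)[0]
print(Fs)   # 1/(s**2 + 1)

# Invert the transform to recover y(t)
y = sp.inverse_laplace_transform(Fs, s, t)
print(y)   # sin(t)
```

Notice that the initial conditions entered the algebra at the transform step, so no constants remained to solve for at the end.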
When No Exact Solution Exists
Many ODEs, especially nonlinear ones, have no closed-form solution. In these cases, numerical methods approximate the solution by stepping forward in small increments of size h.
Euler’s method is the simplest approach. At each step, it uses the current slope to project forward: y(t + h) = y(t) + h·f(t, y). It’s easy to implement but only accurate to order h, meaning its local error at each step is proportional to h². Halving the step size only halves the global error, so you need very small steps for reasonable accuracy.
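The update rule is a few lines of code. This sketch applies it to the assumed test problem y′ = y with y(0) = 1, whose exact value at t = 1 is e ≈ 2.71828:

```python
def euler_step(f, t, y, h):
    """One forward-Euler step: project along the current slope."""
    return y + h * f(t, y)

f = lambda t, y: y          # the test equation y' = y
n = 1000
h = 1.0 / n                 # step size over the interval [0, 1]
t, y = 0.0, 1.0

for _ in range(n):
    y = euler_step(f, t, y, h)
    t += h

print(y)   # about 2.71692: even 1000 steps leaves a visible error
```

The first-order error is easy to see here: with a thousand steps the result still misses e in the fourth decimal place.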
The fourth-order Runge-Kutta method (RK4) evaluates the slope at four points within each step and takes a weighted average. This dramatically improves accuracy: RK4 is accurate to order h⁴, so halving the step size reduces the global error by a factor of 16. For most practical problems, RK4 provides an excellent balance of accuracy and computational cost.
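For comparison, here is the classical RK4 step on the same assumed test problem y′ = y, y(0) = 1, using only ten steps:

```python
def rk4_step(f, t, y, h):
    """One classical RK4 step: weighted average of four slope samples."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, y: y
t, y, h = 0.0, 1.0, 0.1     # just 10 steps across [0, 1]

for _ in range(10):
    y = rk4_step(f, t, y, h)
    t += h

print(y)   # within about 2e-6 of e, despite 100x fewer steps than Euler
```

Ten RK4 steps beat a thousand Euler steps on this problem, which is why RK4-family methods are the usual default.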
Some equations are “stiff,” meaning they contain dynamics on vastly different timescales. A chemical reaction that takes hours to complete but has intermediate steps lasting milliseconds is a classic example. Explicit methods like Euler or RK4 require absurdly tiny step sizes to remain stable on stiff problems. Implicit methods, which solve an equation at each step rather than just evaluating a formula, remain stable with much larger step sizes. The implicit (backward) Euler method is the simplest example, and more sophisticated implicit solvers handle stiff systems efficiently.
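The stability gap is easy to demonstrate on the assumed scalar test problem y′ = −1000y, y(0) = 1, whose true solution decays rapidly to zero. With step size h = 0.01, forward Euler multiplies y by (1 + hλ) = −9 each step, while backward Euler (solved exactly here, since the problem is linear) divides by (1 − hλ) = 11:

```python
lam, h, steps = -1000.0, 0.01, 10

y_fwd = y_bwd = 1.0
for _ in range(steps):
    y_fwd = y_fwd + h * lam * y_fwd    # explicit: factor of -9 per step
    y_bwd = y_bwd / (1 - h * lam)      # implicit: factor of 1/11 per step

print(y_fwd)   # about 3.5e9: the explicit iterate has exploded
print(y_bwd)   # about 3.9e-11: decays toward 0 like the true solution
```

For a nonlinear problem the implicit update cannot be isolated like this and must be solved numerically at each step, which is the extra cost implicit methods pay for their stability.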
Solving ODEs With Software
In practice, most ODEs are solved computationally. MATLAB provides a family of ODE solvers with a consistent syntax: [T, Y] = solver(odefun, tspan, y0), where “solver” is the method name, “odefun” is a function defining your equation, “tspan” is the time interval, and “y0” is the initial condition vector.
MATLAB’s ode45, based on a Runge-Kutta method, is the default choice for nonstiff problems. For stiff equations, ode15s or ode23s are better options. Python’s SciPy library offers equivalent tools through scipy.integrate.solve_ivp, with similar syntax and method selection.
Higher-order equations must be rewritten as systems of first-order equations before feeding them to a solver. A second-order equation y″ = f(t, y, y′) becomes two first-order equations by introducing a new variable: let v = y′, so y′ = v and v′ = f(t, y, v). The solver then tracks both y and v simultaneously.
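Putting the two ideas together, this sketch feeds the second-order equation y″ = −y (with y(0) = 0, y′(0) = 1, so the exact solution is sin(t)) to SciPy's solve_ivp as a first-order system:

```python
import numpy as np
from scipy.integrate import solve_ivp

def odefun(t, state):
    """y'' = -y rewritten as the system y' = v, v' = -y."""
    y, v = state
    return [v, -y]

# Integrate over one full period; y0 packs both initial conditions
sol = solve_ivp(odefun, (0, 2 * np.pi), [0.0, 1.0],
                rtol=1e-8, atol=1e-10)

# sol.y[0] is y(t), sol.y[1] is y'(t); after a full period both return
# to their starting values
print(sol.y[0, -1])   # close to 0, since sin(2*pi) = 0
```

The tolerances `rtol` and `atol` control the adaptive step-size selection; tightening them trades computation time for accuracy.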
A Real-World Example: The RLC Circuit
A series electrical circuit containing a resistor (R), inductor (L), and capacitor (C) driven by a voltage source E(t) produces the second-order ODE:
LQ″ + RQ′ + (1/C)Q = E(t)
Here Q is the charge on the capacitor. The inductor contributes the Q″ term (it resists changes in current), the resistor contributes Q′ (it dissipates energy proportional to current), and the capacitor contributes Q (it stores charge). This equation has the exact same form as a mass-spring-damper system in mechanics, which is why the same mathematical techniques apply to both electrical and mechanical systems.
With constant R, L, and C, this is a second-order linear ODE with constant coefficients. You’d find the characteristic equation Lr² + Rr + 1/C = 0, determine whether the roots are real or complex (which tells you whether the circuit oscillates or just decays), then add a particular solution if E(t) is nonzero. The math is identical to the techniques described above, just with physical meaning attached to each term.
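The same circuit can be simulated numerically. The component values below are assumed for illustration and chosen so the characteristic roots are complex (r² + 0.4r + 1 = 0 has discriminant 0.16 − 4 < 0), giving an underdamped ring-down:

```python
from scipy.integrate import solve_ivp

# Assumed illustrative values, not from any particular circuit
L, R, C = 1.0, 0.4, 1.0
E = lambda t: 0.0          # unforced: watch the stored charge ring down

def rlc(t, state):
    """L*Q'' + R*Q' + Q/C = E(t) as a first-order system in (Q, I)."""
    Q, I = state           # I = Q' is the current
    return [I, (E(t) - R * I - Q / C) / L]

# Start with a charged capacitor (Q = 1) and no current
sol = solve_ivp(rlc, (0, 30), [1.0, 0.0], rtol=1e-6, atol=1e-9)

# Complex roots -0.2 ± 0.98i: oscillation inside a decaying envelope,
# so by t = 30 the charge has shrunk by roughly exp(-6)
print(sol.y[0, -1])
```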
Does Your Equation Actually Have a Solution?
Not every ODE has a solution, and when solutions exist, they aren’t always unique. For a first-order initial value problem y′ = F(t, y) with y(t₀) = y₀, a unique solution is guaranteed near t₀ when two conditions hold: F must be continuous near the point (t₀, y₀), and F must satisfy a Lipschitz condition in y, meaning the rate at which F changes with respect to y is bounded. Informally, the second condition says F doesn’t change too wildly as y varies. When both conditions are met, there’s exactly one solution passing through the initial point, at least over some interval around t₀. When they fail, solutions might not exist, or multiple solutions might pass through the same starting point.
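A classic example of uniqueness failing is y′ = 3y^(2/3) with y(0) = 0: F is continuous, but its rate of change with respect to y blows up at y = 0, so the Lipschitz condition fails there. This SymPy sketch (restricting to t ≥ 0) verifies that two different solutions pass through the same initial point:

```python
import sympy as sp

t = sp.symbols("t", positive=True)

# F(y) = 3*y**(2/3): continuous at y = 0, but not Lipschitz there
F = lambda y: 3 * y ** sp.Rational(2, 3)

# Both of these satisfy y' = F(y) and start at y(0) = 0:
y1 = sp.Integer(0)     # the trivial solution
y2 = t ** 3            # a second, genuinely different solution

assert sp.simplify(y1.diff(t) - F(y1)) == 0
assert sp.simplify(y2.diff(t) - F(y2)) == 0
```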

