Solving a differential equation means finding a function whose derivatives satisfy a given relationship. The method you use depends entirely on what type of equation you’re dealing with, so the first step is always classification. From there, a handful of core techniques cover the vast majority of equations you’ll encounter in a math or engineering course.
Classify the Equation First
Before picking a method, you need to identify two things about your equation: its order and whether it’s linear.
The order is simply the highest derivative in the equation. If the highest derivative is dy/dx, it’s first-order. If it’s d²y/dx², it’s second-order. The linearity question is about how the unknown function and its derivatives appear. An equation is linear if the unknown function and all its derivatives show up in a “plain” way: they aren’t multiplied together, squared, or tucked inside functions like sine, cosine, or exponentials. If any of those things happen, the equation is nonlinear.
This classification tells you which solving technique to reach for. Here’s a practical map:
- First-order, separable: Separate variables and integrate both sides.
- First-order, linear (not separable): Use an integrating factor.
- Second-order, linear, constant coefficients: Use the characteristic equation.
- Non-homogeneous (any order): Solve the homogeneous part first, then find a particular solution.
- Equations that resist all of the above: Use Laplace transforms or numerical methods.
Separable First-Order Equations
A separable equation is one where you can get all the y terms on one side and all the x terms on the other. In standard form, it looks like N(y) dy = M(x) dx. This is the simplest class of differential equation, and it’s the one most courses start with.
The procedure has three steps. First, rearrange the equation so that everything involving y (including dy) is on one side and everything involving x (including dx) is on the other. Second, integrate both sides independently. Third, solve for y if possible.
For example, if you have dy/dx = 6x·y², you’d divide both sides by y² and multiply both sides by dx to get y⁻² dy = 6x dx. Integrating the left side gives -1/y; integrating the right gives 3x². So the implicit solution is -1/y = 3x² + C, where C is your constant of integration. You can then solve for y explicitly: y = -1/(3x² + C).
A few practical notes. You only need one constant of integration, not two. Technically both integrals produce a constant, but the two combine into a single C. Place it on the x side, since you’ll eventually be solving for y anyway. If you have an initial condition like y(0) = 1, plug it in to find the specific value of C. It’s usually easier to do this while the solution is still in implicit form, before you’ve rearranged for y.
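The worked example above can be checked numerically. This is a minimal sketch, assuming the hypothetical initial condition y(0) = -1, which forces C = 1 in the implicit solution:

```python
import math

# Check of the separable example dy/dx = 6x*y^2.
# With the (hypothetical) initial condition y(0) = -1, the implicit
# solution -1/y = 3x^2 + C gives C = 1, so y(x) = -1/(3x^2 + 1).

def y(x):
    return -1.0 / (3 * x**2 + 1)

def rhs(x):
    # Right-hand side of the ODE evaluated along the solution.
    return 6 * x * y(x) ** 2

# Compare a central-difference estimate of dy/dx against 6*x*y^2.
h = 1e-6
for x in [0.0, 0.5, 1.0, 2.0]:
    numeric = (y(x + h) - y(x - h)) / (2 * h)
    assert math.isclose(numeric, rhs(x), rel_tol=1e-5, abs_tol=1e-9)
```

This kind of finite-difference sanity check works for any explicit solution you derive by hand, regardless of the method used to find it.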
Linear First-Order Equations
When a first-order equation isn’t separable but is linear, the integrating factor method works. The equation must be in the standard form dy/dt + p(t)·y = g(t). If it isn’t already in that form, divide or rearrange until it is.
The key idea is to multiply the entire equation by a special function, called the integrating factor, that turns the left side into the derivative of a product. The integrating factor is:
μ(t) = e^(∫p(t) dt)
Once you’ve calculated μ(t), multiply every term in the equation by it. The left side collapses into d/dt[μ(t)·y], which you can integrate directly. The general solution comes out as:
y(t) = (1/μ(t)) · [∫μ(t)·g(t) dt + C]
The hardest part is usually computing the integrals. The method itself is completely mechanical: put the equation in standard form, compute μ, multiply through, integrate, and solve for y. If you’re given an initial condition, use it to pin down C at the end.
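As a sketch of the mechanical steps, take the hypothetical example dy/dt + 2y = 4, so p(t) = 2 and g(t) = 4. The integrating factor is μ(t) = e^(2t), and the general-solution formula yields y(t) = 2 + C·e^(-2t); the assumed initial condition y(0) = 0 pins down C = -2:

```python
import math

# Integrating-factor recipe on the (hypothetical) example
# dy/dt + 2y = 4, i.e. p(t) = 2 and g(t) = 4.
# mu(t) = e^(int 2 dt) = e^(2t), and the general-solution formula gives
# y(t) = e^(-2t) * (int e^(2t)*4 dt + C) = 2 + C*e^(-2t).
# The initial condition y(0) = 0 pins down C = -2.

def y(t):
    return 2.0 - 2.0 * math.exp(-2.0 * t)

# Check that y' + 2y = 4 at a few sample points,
# using a central-difference estimate of y'.
h = 1e-6
for t in [0.0, 0.3, 1.0, 2.5]:
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    assert math.isclose(dydt + 2 * y(t), 4.0, rel_tol=1e-6)
```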
Second-Order Equations With Constant Coefficients
The most common second-order equation in textbooks looks like ay'' + by' + cy = 0, where a, b, and c are constants. This is a homogeneous linear equation, and you solve it by converting it into an algebra problem.
Assume the solution has the form y = e^(rt). Substituting this into the equation turns it into ar² + br + c = 0, a plain quadratic called the characteristic equation. Solve it with the quadratic formula, and the nature of the roots tells you the shape of the solution:
- Two distinct real roots (r₁ and r₂): The general solution is y = C₁e^(r₁t) + C₂e^(r₂t).
- One repeated real root (r): The general solution is y = C₁e^(rt) + C₂·t·e^(rt). The extra factor of t is needed because a repeated root would otherwise give you only one independent solution.
- Complex roots (α ± βi): The general solution is y = e^(αt)[C₁cos(βt) + C₂sin(βt)]. Complex roots always come in conjugate pairs, so you’ll always get this combined sine-cosine form.
The two constants C₁ and C₂ require two conditions to pin down, which is why second-order problems typically come with both an initial value y(0) and an initial derivative y'(0).
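Here is the distinct-real-roots case worked through in code, on the hypothetical example y'' + 3y' + 2y = 0 with assumed conditions y(0) = 0 and y'(0) = 1:

```python
import math

# Characteristic-equation sketch for the (hypothetical) example
# y'' + 3y' + 2y = 0: a = 1, b = 3, c = 2.
a, b, c = 1.0, 3.0, 2.0

# Quadratic formula on a*r^2 + b*r + c = 0.
disc = b**2 - 4 * a * c
r1 = (-b + math.sqrt(disc)) / (2 * a)   # -1
r2 = (-b - math.sqrt(disc)) / (2 * a)   # -2

# Distinct real roots, so y = C1*e^(r1*t) + C2*e^(r2*t).
# Fitting y(0) = 0 and y'(0) = 1 gives C1 = 1, C2 = -1.
C1, C2 = 1.0, -1.0

def y(t):
    return C1 * math.exp(r1 * t) + C2 * math.exp(r2 * t)

# Verify the ODE with finite-difference first and second derivatives.
h = 1e-4
for t in [0.0, 0.5, 1.5]:
    d1 = (y(t + h) - y(t - h)) / (2 * h)
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    assert abs(a * d2 + b * d1 + c * y(t)) < 1e-5
```

A negative discriminant would instead produce a conjugate pair α ± βi, and the same code structure would switch to the e^(αt)·(cos, sin) form.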
Non-Homogeneous Equations
When the right side of the equation isn’t zero, as in ay'' + by' + cy = g(t), you need two pieces: the general solution to the homogeneous version (set g(t) = 0 and use the characteristic equation) and one particular solution to the full equation. The complete solution is the sum of these two.
The method of undetermined coefficients gives you the particular solution for many common forms of g(t). The idea is to guess the form of the particular solution based on what g(t) looks like, plug it in, and solve for the unknown coefficients. The standard guesses are:
- g(t) is an exponential (e^(βt)): Guess y_p = Ae^(βt).
- g(t) is a sine or cosine: Guess y_p = Acos(βt) + Bsin(βt). You always need both sine and cosine in the guess, even if g(t) only contains one of them.
- g(t) is a polynomial of degree n: Guess a full polynomial of degree n with unknown coefficients.
For products of these basic types, combine the guesses. If g(t) = t²·e^(3t), for instance, you’d guess a second-degree polynomial multiplied by e^(3t). For sums, guess for each piece separately and combine like terms. One important wrinkle: if your guess happens to be a solution to the homogeneous equation, multiply it by t (or t² for a double overlap) to get a valid guess.
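A quick sketch of the exponential case, using the hypothetical equation y'' + y = e^(2t). The guess y_p = A·e^(2t) plugs in to give (2² + 1)A = 1, so A = 1/5:

```python
import math

# Undetermined-coefficients sketch on the (hypothetical) example
# y'' + y = e^(2t). g(t) is an exponential, so guess y_p = A*e^(2t).
# Substituting: (2^2)*A*e^(2t) + A*e^(2t) = e^(2t)  =>  5A = 1  =>  A = 1/5.
A = 1.0 / 5.0

def y_p(t):
    return A * math.exp(2.0 * t)

# Check the particular solution against the full equation,
# using a central-difference estimate of the second derivative.
h = 1e-4
for t in [0.0, 0.7, 1.2]:
    d2 = (y_p(t + h) - 2 * y_p(t) + y_p(t - h)) / h**2
    assert math.isclose(d2 + y_p(t), math.exp(2.0 * t), rel_tol=1e-4)
```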
Laplace Transforms
Laplace transforms take a completely different approach. Instead of solving the differential equation directly, you transform the entire equation into an algebraic equation, solve the algebra, and then convert back.
The process works in four steps. Take the Laplace transform of every term in the equation, using derivative properties to handle y' and y'' terms. Substitute your initial conditions into the transformed equation. Solve for Y(s), the transform of your unknown function. Finally, use a table of Laplace transforms (or partial fraction decomposition) to convert Y(s) back into y(t).
This method is especially useful for equations with discontinuous forcing functions, like a sudden on/off input, where undetermined coefficients would be awkward. It’s also the standard approach in engineering courses for analyzing circuits and control systems, because initial conditions get baked into the process automatically rather than being applied at the end.
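The four steps can be traced on a small hypothetical IVP, y' + y = 1 with y(0) = 0. Transforming gives sY(s) - y(0) + Y(s) = 1/s, so Y(s) = 1/(s(s+1)); partial fractions split this into 1/s - 1/(s+1), which a standard table inverts to y(t) = 1 - e^(-t):

```python
import math

# Laplace sketch on the (hypothetical) IVP y' + y = 1, y(0) = 0.
# Transforming term by term: s*Y(s) - y(0) + Y(s) = 1/s,
# so Y(s) = 1 / (s*(s + 1)).
# Partial fractions: Y(s) = 1/s - 1/(s + 1),
# and the table inverts this to y(t) = 1 - e^(-t).

def Y(s):
    return 1.0 / (s * (s + 1.0))

# Sanity-check the partial-fraction decomposition at sample s values.
for s in [0.5, 1.0, 3.0, 10.0]:
    assert math.isclose(Y(s), 1.0 / s - 1.0 / (s + 1.0))

def y(t):
    return 1.0 - math.exp(-t)

# Check that y satisfies y' + y = 1 and the initial condition.
h = 1e-6
for t in [0.0, 0.5, 2.0]:
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    assert math.isclose(dydt + y(t), 1.0, rel_tol=1e-6)
assert y(0.0) == 0.0
```

Note how the initial condition y(0) = 0 entered the algebra at the transform step rather than at the end, which is exactly the property that makes the method convenient in engineering practice.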
Numerical Methods for Equations You Can’t Solve by Hand
Many real-world differential equations have no closed-form solution. In those cases, you approximate the solution numerically, generating a sequence of points rather than a formula.
Euler’s Method
The simplest numerical approach is Euler’s method. Given dy/dt = f(t, y) with an initial condition y(t₀) = y₀, you step forward in small increments of size h using the formula:
y_(n+1) = y_n + h · f(t_n, y_n)
At each step, you calculate the slope at your current point and follow it in a straight line to the next point. It’s intuitive and easy to implement, but the approximation drifts from the true solution as errors accumulate. Smaller step sizes improve accuracy at the cost of more computation.
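The update rule takes only a few lines of Python. The test problem dy/dt = y with y(0) = 1 is a hypothetical example chosen because its exact solution, e^t, makes the error easy to see:

```python
import math

# Minimal Euler's method for dy/dt = f(t, y), stepping from t0 to t_end.
def euler(f, t0, y0, t_end, h):
    n = round((t_end - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # follow the slope at the current point
        t = t + h
    return y

# Hypothetical test problem: dy/dt = y, y(0) = 1, exact solution e^t.
# With h = 0.001 the Euler answer at t = 1 lands near e, but not on it.
approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 0.001)
assert abs(approx - math.e) < 0.01
```

Halving h roughly halves the final error, which is the signature of a first-order method.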
Runge-Kutta (RK4)
The fourth-order Runge-Kutta method is far more accurate for the same step size. Instead of sampling the slope at just one point per step, it samples at four points: once at the start, twice at the midpoint (using two different estimates), and once at the end. These four slope estimates, called k₁ through k₄, are combined into a weighted average:
y_(n+1) = y_n + (k₁ + 2k₂ + 2k₃ + k₄)/6 · h
The midpoint slopes get double weight. This averaging cancels out much of the error that plagues Euler’s method, making RK4 the default workhorse for numerical ODE solving. It’s what most software uses under the hood.
Solving Differential Equations in Code
If you need to solve a differential equation computationally, Python’s SciPy library provides solve_ivp (solve initial value problem) in its integration module. You define a function that returns the derivative, specify a time span, pass in your initial conditions, and the solver handles the rest. It returns an object containing arrays of time values and corresponding solution values.
Higher-order equations need to be rewritten as a system of first-order equations before feeding them to the solver. For a second-order equation, you introduce a new variable for y’ and write two first-order equations instead. A third-order equation becomes a system of three, and so on. This reduction works for any order, which means solve_ivp can handle essentially any ODE problem you throw at it.
You can pass a t_eval argument to specify exactly which time points you want the solution at, which is useful for plotting or comparing against data. For stiff equations, where some components of the solution change much faster than others, switching to an implicit solver method within the same function can dramatically improve performance.
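Putting the last two paragraphs together, here is a minimal sketch using solve_ivp on a hypothetical second-order problem, y'' + y = 0 with y(0) = 1 and y'(0) = 0 (exact solution cos t), rewritten as a two-equation system:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical example: y'' + y = 0, y(0) = 1, y'(0) = 0,
# exact solution y(t) = cos(t).
# Rewrite as a first-order system with u[0] = y and u[1] = y'.
def rhs(t, u):
    return [u[1], -u[0]]   # [y', y'']

t_eval = np.linspace(0.0, 2 * np.pi, 50)
sol = solve_ivp(rhs, (0.0, 2 * np.pi), [1.0, 0.0],
                t_eval=t_eval, rtol=1e-6, atol=1e-9)

# sol.t holds the time points, sol.y[0] the y values, sol.y[1] the y' values.
assert np.allclose(sol.y[0], np.cos(sol.t), atol=1e-4)
```

For a stiff problem you would pass method="Radau" or method="BDF" to the same call; everything else stays unchanged.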
When Does a Solution Even Exist?
Not every differential equation has a solution, and when solutions exist, they aren’t always unique. The Picard-Lindelöf theorem gives the standard conditions that guarantee both existence and uniqueness for an initial value problem y' = F(t, y), y(t₀) = y₀. If F is continuous near your starting point and satisfies a Lipschitz condition in y (roughly meaning F doesn’t change infinitely fast as y varies), then a unique solution exists in some interval around t₀.
In practice, this means most “well-behaved” equations you encounter in coursework have unique solutions. Problems arise when F has a discontinuity or blows up near your initial point. The classic example is y' = y^(2/3), y(0) = 0, which has two solutions: y = 0 and y = (t/3)³. The right-hand side fails the Lipschitz condition at y = 0, so uniqueness breaks down. If you ever get unexpected behavior from a numerical solver, a violated existence or uniqueness condition is worth checking.
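You can watch this failure mode numerically. Starting exactly at y = 0, a simple Euler solver (a sketch, reusing the classic example from the theorem discussion) computes a zero slope at every step and only ever finds the trivial solution, even though y = (t/3)³ also satisfies the problem:

```python
# Numerical illustration of the uniqueness failure in y' = y^(2/3), y(0) = 0.
# A basic Euler loop, as a self-contained sketch.
def euler(f, t0, y0, t_end, h):
    n = round((t_end - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

f = lambda t, y: abs(y) ** (2.0 / 3.0)

# From y = 0 the slope is always 0, so the solver returns y = 0 forever...
assert euler(f, 0.0, 0.0, 3.0, 0.01) == 0.0

# ...even though y(t) = (t/3)^3 is also a valid solution, equal to 1 at t = 3.
assert (3.0 / 3.0) ** 3 == 1.0
```

Which solution a solver tracks here is an artifact of floating-point details, not mathematics, which is exactly why a violated uniqueness condition is worth checking when results look strange.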

