Solving partial differential equations (PDEs) requires matching the right method to the type of equation you’re dealing with. There is no single technique that works for all PDEs. Instead, you classify the equation first, then choose from analytical methods like separation of variables and the method of characteristics, or numerical approaches like finite differences when a closed-form solution isn’t possible. Here’s how each piece fits together.
Classify the Equation First
Before attempting any solution, you need to know what type of PDE you’re working with. For second-order linear PDEs in two variables, classification depends on the coefficients of the highest-order terms. Writing the equation as a·u_xx + 2b·u_xy + c·u_yy plus lower-order terms equal to zero, the discriminant b² − ac tells you everything:
- Hyperbolic (b² − ac > 0): Wave-like behavior. The classic example is the wave equation. Solutions propagate information along characteristic curves.
- Parabolic (b² − ac = 0): Diffusion-like behavior. The heat equation is the standard example. Solutions smooth out over time.
- Elliptic (b² − ac < 0): Steady-state behavior. Laplace’s equation falls here. Solutions describe equilibrium configurations with no time evolution.
This classification matters because each type has fundamentally different solution behavior and requires different techniques. A method that works beautifully for elliptic equations can fail completely on hyperbolic ones. The discriminant can also vary across the domain. For instance, the equation u_xx + x·u_yy = 0 is elliptic only where x > 0, since b² − ac = −x is negative there.
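The classification rule is mechanical enough to express in a few lines. A minimal Python sketch (the function name and the convention of writing the mixed-term coefficient as 2b are my own choices):

```python
def classify(a, b, c):
    """Classify the second-order linear PDE
    a*u_xx + 2b*u_xy + c*u_yy + (lower-order terms) = 0
    by the sign of the discriminant b^2 - a*c."""
    disc = b * b - a * c
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# Wave equation u_tt - u_xx = 0: a = 1, b = 0, c = -1
print(classify(1, 0, -1))    # hyperbolic
# Heat equation u_t = u_xx: u_xx is the only second-order term (c = 0)
print(classify(1, 0, 0))     # parabolic
# Laplace's equation u_xx + u_yy = 0
print(classify(1, 0, 1))     # elliptic
# u_xx + x*u_yy = 0 changes type with x
print(classify(1, 0, 1.0))   # at x = 1: elliptic
print(classify(1, 0, -1.0))  # at x = -1: hyperbolic
```

The last two calls show the mixed-type behavior mentioned above: the same equation is elliptic for x > 0 and hyperbolic for x < 0.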
Separation of Variables
Separation of variables is typically the first analytical technique taught and the most widely applicable for linear PDEs on simple domains. The core idea is to assume the solution can be written as a product of functions, each depending on only one variable. For a function u(x, t), you guess that u(x, t) = X(x)·T(t).
The steps follow a consistent pattern. You substitute the product form into the PDE, then rearrange so that each side of the equation depends on only one variable. Since one side is a function of x alone and the other of t alone, both must equal the same constant (the separation constant). This breaks the PDE into two ordinary differential equations, which are generally much easier to solve. Applying the boundary conditions then picks out the allowed values of the separation constant (the eigenvalues), and you solve each ODE for those values and combine the results. For most physical problems, you end up with an infinite series of solutions that you sum together, using the initial conditions and Fourier series to pin down the coefficients.
This method works cleanly when the domain has a regular geometry (rectangles, circles, spheres) and the boundary conditions align with a standard coordinate system. For the heat equation on a rod, for example, separation of variables produces a Fourier sine or cosine series whose terms decay exponentially in time, capturing exactly how heat diffuses and dissipates.
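The heat-equation example can be made concrete. The sketch below (function name and parameter choices are illustrative) builds the Fourier sine series numerically for u_t = α·u_xx on [0, L] with zero Dirichlet ends, then checks it in the one case where the exact answer is a single term:

```python
import numpy as np

def heat_series(f, x, t, L=1.0, alpha=1.0, n_terms=50):
    """Separation-of-variables solution of u_t = alpha*u_xx on [0, L]
    with u(0, t) = u(L, t) = 0 and u(x, 0) = f(x).  Each product term is
    sin(n*pi*x/L) * exp(-alpha*(n*pi/L)^2 * t); the coefficients b_n come
    from the Fourier sine series of the initial condition."""
    s = np.linspace(0.0, L, 2001)            # quadrature grid for b_n
    u = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        k = n * np.pi / L
        g = f(s) * np.sin(k * s)
        b_n = (2.0 / L) * np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(s))  # trapezoid rule
        u += b_n * np.sin(k * x) * np.exp(-alpha * k**2 * t)
    return u

# For f(x) = sin(pi*x) the sine series has exactly one term, so the
# series must reproduce the exact solution sin(pi*x) * exp(-pi^2 * t).
x = np.linspace(0.0, 1.0, 101)
u_num = heat_series(lambda s: np.sin(np.pi * s), x, t=0.1)
u_exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * 0.1)
print(np.max(np.abs(u_num - u_exact)))   # tiny: series matches exact solution
```

Note how the exponential factor exp(−α(nπ/L)²t) kills the high-frequency terms fastest, which is exactly the smoothing behavior of diffusion described above.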
Method of Characteristics
For first-order PDEs, the method of characteristics is the primary analytical tool. It works by converting the PDE into a system of ordinary differential equations along special curves called characteristics.
Consider a first-order linear PDE of the form a(x, y)·u_x + b(x, y)·u_y = c(x, y). The idea is to find curves in the x-y plane along which the PDE reduces to an ODE. You parameterize these curves by a variable s and solve the system dx/ds = a, dy/ds = b, dz/ds = c (where z represents the solution u along the curve). Each characteristic curve carries information from the initial data into the domain, and the full solution surface is built by stitching together all these curves.
The method extends to more complex equations. For quasilinear PDEs, where the coefficients depend on the solution itself, the characteristic equations become dx/ds = a(x, y, z), dy/ds = b(x, y, z), dz/ds = c(x, y, z). The approach is conceptually the same, but the curves now depend on the solution, which can cause characteristics to cross and produce shock waves. This is exactly what happens in fluid dynamics when smooth waves steepen into discontinuities.
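The linear recipe can be sketched in a few lines, assuming the simplest possible setup: constant coefficients a = 1, b = 2, c = 0, with the initial data given on the line y = 0 (the Gaussian profile and all names are illustrative):

```python
import numpy as np

# Characteristic system for a*u_x + b*u_y = c with constant a, b, c.
# Along dx/ds = a, dy/ds = b, the PDE collapses to dz/ds = c,
# where z(s) = u(x(s), y(s)).
a, b, c = 1.0, 2.0, 0.0
g = lambda x0: np.exp(-x0**2)          # initial data u(x, 0) = g(x)

def trace(x0, s_end, n_steps=200):
    """Euler-integrate one characteristic starting on the initial
    curve y = 0 at (x0, 0), carrying the value z = g(x0)."""
    x, y, z = x0, 0.0, g(x0)
    ds = s_end / n_steps
    for _ in range(n_steps):
        x += a * ds
        y += b * ds
        z += c * ds
    return x, y, z

# The exact solution is u(x, y) = g(x - (a/b)*y) = g(x - y/2):
# data is carried unchanged along the straight characteristics.
x, y, z = trace(x0=0.3, s_end=1.0)
print(abs(z - g(x - y / 2)))   # ~0: the traced value matches the transported data
```

With c = 0 the value z is constant along each curve; a nonzero c would make the solution grow or decay as it is carried along.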
Boundary Conditions Shape the Solution
A PDE alone doesn’t have a unique solution. You need boundary conditions (and initial conditions for time-dependent problems) to pin down which solution applies to your specific situation. The three standard types each carry a distinct physical meaning:
- Dirichlet conditions fix the value of the solution on the boundary. In heat transfer, this means specifying the temperature at the surface of an object.
- Neumann conditions fix the derivative of the solution normal to the boundary. Physically, this specifies the flux. A zero Neumann condition for the heat equation describes a perfectly insulated boundary where no heat escapes.
- Robin conditions combine both, setting a relationship between the solution and its normal derivative at the boundary. In heat transfer, this models a surface where some heat is absorbed or radiated at the boundary, with the rate depending on the surface temperature.
Choosing the wrong type of boundary condition, or specifying too many or too few, leads either to no solution or infinitely many. The classification of the PDE determines how many boundary and initial conditions you need for a well-posed problem.
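The different condition types show up concretely once a problem is discretized (using the finite difference idea covered later in this piece). A small sketch, with a 1D steady heat problem chosen so the exact answer is known: u'' = −1 on [0, 1], a Dirichlet condition at the left end and an insulated (zero Neumann) condition at the right:

```python
import numpy as np

# Steady 1D heat conduction with a uniform source: u'' = -1 on [0, 1],
# Dirichlet u(0) = 0 (fixed temperature) and Neumann u'(1) = 0
# (insulated end).  Exact solution: u = x - x^2/2.
N = 20                                      # number of grid intervals
h = 1.0 / N
A = np.zeros((N + 1, N + 1))
rhs = np.full(N + 1, -1.0)

A[0, 0] = 1.0; rhs[0] = 0.0                 # Dirichlet row: u_0 = 0
for i in range(1, N):                       # interior 3-point stencil
    A[i, i - 1], A[i, i], A[i, i + 1] = 1 / h**2, -2 / h**2, 1 / h**2
# Neumann via a ghost node: (u_{N+1} - u_{N-1}) / (2h) = 0 implies
# u_{N+1} = u_{N-1}, so the stencil at i = N becomes (2u_{N-1} - 2u_N)/h^2.
A[N, N - 1], A[N, N] = 2 / h**2, -2 / h**2

u = np.linalg.solve(A, rhs)
x = np.linspace(0.0, 1.0, N + 1)
print(np.max(np.abs(u - (x - x**2 / 2))))   # ~machine precision here
```

Swapping either boundary row for a different condition type changes the solution entirely, which is the point: the PDE fixes the family of solutions, and the boundary conditions select one member.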
When Exact Solutions Don’t Exist
Most real-world PDEs can’t be solved analytically. Irregular domains, nonlinear terms, or variable coefficients quickly push problems beyond the reach of separation of variables and similar techniques. This is where numerical methods take over.
The finite difference method is the most straightforward numerical approach. You lay a grid over your domain and replace each derivative with an algebraic approximation using nearby grid values. The three basic building blocks are the forward difference (u at the next grid point minus u at the current point, divided by the spacing), the backward difference (current minus previous), and the central difference (next minus previous, divided by twice the spacing). Central differences are more accurate, with errors proportional to the grid spacing squared rather than just the spacing, but they aren’t always stable.
For second derivatives, you use the standard three-point stencil: the value at the point to the left, minus twice the center value, plus the value to the right, all divided by the spacing squared. Once you replace all derivatives with these approximations, your PDE becomes a large system of algebraic equations that you solve on a computer.
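These formulas are easy to verify numerically. A quick check on f(x) = sin(x), whose derivatives are known exactly (the evaluation point and step size are arbitrary choices):

```python
import numpy as np

# Difference quotients for f(x) = sin(x) at x0 = 1 with step h.
f, x0, h = np.sin, 1.0, 1e-3
fwd = (f(x0 + h) - f(x0)) / h                     # forward:  O(h) error
bwd = (f(x0) - f(x0 - h)) / h                     # backward: O(h) error
ctr = (f(x0 + h) - f(x0 - h)) / (2 * h)           # central:  O(h^2) error
d2  = (f(x0 - h) - 2 * f(x0) + f(x0 + h)) / h**2  # 3-point second derivative

print(abs(fwd - np.cos(x0)))   # ~4e-4 (error proportional to h)
print(abs(ctr - np.cos(x0)))   # ~1e-7 (error proportional to h^2)
print(abs(d2 + np.sin(x0)))    # ~7e-8 (second derivative of sin is -sin)
```

Halving h roughly halves the forward-difference error but quarters the central-difference error, which is what the order claims in the previous paragraph mean in practice.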
Stability and the CFL Condition
Not every combination of grid spacing and time step produces a reliable answer. The Courant-Friedrichs-Lewy (CFL) condition is the fundamental stability requirement: the numerical scheme must be able to “keep up” with the speed at which information travels in the actual equation. For the advection equation with wave speed a, this means |a·Δt/Δx| ≤ 1, where Δt is the time step and Δx is the spatial grid spacing. Violating this condition causes the numerical solution to blow up with wild oscillations that have nothing to do with the real physics.
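The blow-up is easy to reproduce. A sketch using the first-order upwind scheme for the advection equation on a periodic grid (grid size and step counts are arbitrary choices):

```python
import numpy as np

def upwind_advect(courant, n_steps=100, n=50):
    """First-order upwind scheme for u_t + a*u_x = 0 (a > 0) on a
    periodic grid.  courant = a*dt/dx; the scheme is stable for
    courant <= 1 and blows up beyond that."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.sin(2 * np.pi * x)
    for _ in range(n_steps):
        u = u - courant * (u - np.roll(u, 1))   # upwind update, periodic wrap
    return np.max(np.abs(u))

print(upwind_advect(0.5))   # stays bounded (in fact damped): CFL satisfied
print(upwind_advect(1.5))   # enormous: CFL violated, solution blew up
```

At a Courant number of 0.5 each update is a convex combination of neighboring values, so the maximum can never grow; at 1.5 the highest-frequency grid mode is amplified every step, and even round-off noise explodes within a hundred steps.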
Beyond finite differences, the finite element method handles complex geometries by breaking the domain into small triangles or tetrahedra, and spectral methods achieve extremely high accuracy for smooth problems by representing solutions as sums of basis functions like sines and cosines. Each method trades off between ease of implementation, accuracy, and the types of problems it handles well.
Existence and Uniqueness of Solutions
Before investing effort in solving a PDE, it helps to know whether a solution actually exists and whether it’s the only one. The Cauchy-Kovalevskaya theorem provides the classical guarantee: if the equation, the initial data, and the surface on which that data is prescribed are all real analytic (meaning they can be expressed as convergent power series), then a unique analytic solution exists in some neighborhood of the initial surface.
This theorem is powerful as a theoretical foundation, but its requirement of analyticity is restrictive. Many physically meaningful problems involve non-analytic data (a sharp initial temperature profile, for instance), and the theorem says nothing about those cases. For specific equation types, stronger and more practical results exist. Elliptic equations with reasonable boundary data have well-behaved solutions thanks to maximum principles. Hyperbolic equations have finite-speed propagation results that guarantee solutions exist as long as characteristics don’t collide. The key takeaway is that well-posedness, meaning a unique solution that depends continuously on the data, is something you verify for your specific problem class rather than assuming.
Neural Network Approaches for Complex Problems
Physics-informed neural networks (PINNs) represent a newer approach that embeds the PDE directly into a neural network’s loss function. Instead of discretizing the domain, you train a neural network so that its output approximately satisfies the equation at a collection of sample points. The network’s parameters are adjusted until the PDE residual, boundary conditions, and any available data are all satisfied as closely as possible.
PINNs are mesh-free, scale naturally to high-dimensional problems, and can handle inverse problems where some parameters of the equation are unknown. However, they have real limitations. Because the physics is enforced only through an averaged loss function, PINNs don’t strictly obey conservation laws the way traditional numerical methods do. They can also be difficult to train, because the competing loss terms (PDE residual, boundary conditions, observed data) can pull the optimizer in conflicting directions. Recent work on automatic network structure discovery, tested on Laplace, Burgers, and Poisson equations, has improved both accuracy and training efficiency, but for problems requiring strong physical consistency, traditional numerical methods remain more reliable.
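To make the core move concrete without a deep-learning framework, here is a deliberately stripped-down caricature, not a faithful PINN: the "network" is a frozen random tanh feature expansion, so driving the residual of u' + u = 0 (with u(0) = 1, exact solution e^(−x)) toward zero at collocation points reduces to linear least squares rather than gradient-based training. All names, weights, and parameter choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_col = 30, 50
w = rng.uniform(-3, 3, n_feat)          # frozen "hidden layer" weights
b = rng.uniform(-3, 3, n_feat)          # frozen "hidden layer" biases

phi  = lambda x: np.tanh(np.outer(x, w) + b)               # features
dphi = lambda x: w * (1 - np.tanh(np.outer(x, w) + b)**2)  # their x-derivatives

# Residual rows enforce u'(x) + u(x) = 0 at the collocation points;
# one weighted boundary row enforces u(0) = 1, mirroring how a PINN
# combines a PDE-residual loss with a boundary loss.
x_col = np.linspace(0.0, 1.0, n_col)
A = np.vstack([dphi(x_col) + phi(x_col), 10.0 * phi(np.array([0.0]))])
rhs = np.concatenate([np.zeros(n_col), [10.0]])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

x_test = np.linspace(0.0, 1.0, 200)
u = phi(x_test) @ coef
print(np.max(np.abs(u - np.exp(-x_test))))   # small: residual fitting recovers e^(-x)
```

A real PINN replaces the fixed features with a trainable deep network, replaces the analytic derivatives with automatic differentiation, and minimizes the combined loss with a stochastic optimizer, which is precisely where the training difficulties described above come from.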
Choosing the Right Approach
Your choice of method depends on the equation, the domain, and what you need from the solution. Linear PDEs on simple geometries with standard boundary conditions are good candidates for separation of variables or transform methods (Fourier, Laplace). First-order PDEs, including nonlinear ones, often yield to the method of characteristics. For everything else, and that covers most engineering and science applications, you’ll use a numerical method.
Finite differences are easiest to implement and a natural starting point for regular domains. Finite elements handle complex geometries and are the standard in structural mechanics and many engineering fields. Spectral methods give the best accuracy per grid point for smooth problems. PINNs are worth considering when the problem is high-dimensional or when you’re solving an inverse problem with incomplete data. In practice, many working scientists and engineers use established software packages (FEniCS, COMSOL, MATLAB’s PDE toolbox) that implement these methods, so understanding the underlying ideas helps you choose the right tool and interpret the results correctly.

