What Is a Linear System? Definition and Examples

A linear system is any system where the output is directly proportional to the input. More precisely, it’s a system that obeys two rules: if you double the input, the output doubles, and if you combine two inputs, the output is the combination of their individual outputs. These two properties, called homogeneity and additivity, define linearity across every field that uses the concept, from algebra to electrical engineering to signal processing.

The Two Rules That Define Linearity

Every linear system must pass two tests. The first is homogeneity, sometimes called the scaling rule. If you feed an input into the system and get a certain output, then feeding in twice the input must give you exactly twice the output. Triple the input, triple the output. This proportional relationship holds for any scaling factor.

The second test is additivity. Suppose you send input A into the system and record the output. Then you send input B and record that output separately. If the system is linear, sending both A and B together produces an output that equals the sum of the two individual outputs. Nothing extra appears, and nothing cancels unexpectedly.

Together, these two rules form the principle of superposition. A system that satisfies superposition is linear. A system that violates either rule is nonlinear. This single principle is the dividing line.
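The two tests can be spot-checked numerically for any candidate system. As a minimal sketch (the function names here are illustrative, not from any library), the following compares a linear map with a nonlinear one:

```python
def check_superposition(f, a, b, scale=2.0, tol=1e-9):
    """Spot-check homogeneity and additivity at sample inputs a and b."""
    homogeneous = abs(f(scale * a) - scale * f(a)) < tol
    additive = abs(f(a + b) - (f(a) + f(b))) < tol
    return homogeneous and additive

linear = lambda x: 3.0 * x          # output proportional to input
nonlinear = lambda x: x ** 2 + 1.0  # squaring violates both rules

print(check_superposition(linear, 1.5, -2.0))     # True
print(check_superposition(nonlinear, 1.5, -2.0))  # False
```

A spot-check at a few sample inputs can expose a nonlinear system but cannot prove linearity; the definition requires the two rules to hold for every input and every scaling factor.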

Linear Systems in Algebra

The most common place people first encounter linear systems is in algebra, where a “system of linear equations” means two or more equations whose variables appear only to the first power, with no products between variables. A simple example:

  • 8x − y = 4
  • 5x + 4y = 1

This system can be written compactly as a matrix equation, Ax = b, where A is a grid of the coefficients, x is a column of unknowns, and b is a column of the values on the right side. Solving the system means finding values of x that satisfy all equations simultaneously. The equation Ax = b has a solution if and only if b can be built from a combination of the columns of A. This matrix form isn’t just shorthand; it’s the foundation for computational methods that solve systems with thousands of variables in fields like economics, physics simulations, and machine learning.
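As a sketch of the matrix form in practice, here is how the pair 8x − y = 4 and 5x + 4y = 1 can be solved with NumPy (`np.linalg.solve` handles square systems like this one):

```python
import numpy as np

# Coefficient matrix A and right-hand side b for
#   8x -  y = 4
#   5x + 4y = 1
A = np.array([[8.0, -1.0],
              [5.0,  4.0]])
b = np.array([4.0, 1.0])

solution = np.linalg.solve(A, b)     # [x, y]
print(solution)                      # x = 17/37, y = -12/37

# Verify: combining the columns of A with these weights rebuilds b.
print(np.allclose(A @ solution, b))  # True
```

The same call scales to systems with thousands of unknowns, which is what makes the compact Ax = b form more than notational convenience.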

Everyday Physical Examples

Linear systems show up constantly in physics and engineering, often in forms you’ve already encountered. A spring is the classic example. Hooke’s Law states that the force a spring exerts equals its stiffness constant multiplied by how far it’s been stretched or compressed: F = −kx. Stretch it twice as far, and the restoring force doubles. That’s a linear relationship. Almost any object that can be slightly deformed behaves this way for small displacements, which is why linear models work so well as approximations of real-world behavior.

Electrical circuits offer another clean example. Ohm’s Law says the voltage across a resistor equals the current times the resistance: V = IR. Double the current, double the voltage. For an ideal resistor, this proportionality holds across its entire operating range, making the resistor a linear circuit element. This is why introductory circuit analysis relies heavily on linear system tools.
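Both laws pass the two linearity tests directly. A minimal sketch (the stiffness and resistance values are arbitrary illustrations):

```python
import math

k = 40.0   # spring stiffness in N/m (arbitrary illustrative value)
R = 220.0  # resistance in ohms (arbitrary illustrative value)

spring_force = lambda x: -k * x   # Hooke's Law: F = -kx
voltage = lambda i: R * i         # Ohm's Law: V = IR

# Homogeneity: stretching the spring twice as far doubles the restoring force.
homogeneous = math.isclose(spring_force(2 * 0.05), 2 * spring_force(0.05))

# Additivity: the voltage from a combined current equals the sum of the
# voltages each current would produce alone.
additive = math.isclose(voltage(0.01 + 0.02), voltage(0.01) + voltage(0.02))

print(homogeneous, additive)  # True True
```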

Time-Invariant Linear Systems

In engineering, the most widely studied category is the linear time-invariant (LTI) system. “Time-invariant” means the system’s behavior doesn’t change depending on when you apply the input. A system that’s both linear and time-invariant has a remarkable property: if you feed in a sine wave at a particular frequency, the output is a sine wave at that same frequency, just potentially shifted in timing and scaled in size. The system never creates new frequencies.

This property is the reason electrical engineering focuses so heavily on sine waves and frequency analysis. Any complex signal, whether it’s audio, radio, or sensor data, can be broken into a sum of sine waves at different frequencies. Because an LTI system processes each frequency independently, you can analyze the system’s effect on each frequency separately and then combine the results. The function that describes how the system scales and shifts each frequency is called the frequency response, and it completely characterizes what the system does.
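The sine-in, sine-out property can be demonstrated numerically. In this sketch, the LTI system is a hypothetical 3-tap moving-average filter; after a brief start-up transient, the output matches a sine at the input frequency, scaled and shifted exactly as the frequency response predicts:

```python
import numpy as np

# A hypothetical LTI system: a 3-tap moving-average filter.
h = np.array([1/3, 1/3, 1/3])

n = np.arange(400)
w = 0.2                         # input frequency (radians per sample)
x = np.sin(w * n)               # pure sine input
y = np.convolve(x, h)[:len(n)]  # system output

# Frequency response at w: H = sum_k h[k] * exp(-1j*w*k).
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))

# Past the transient, the output is the SAME sine, scaled by |H| and
# shifted by the phase of H -- no new frequencies appear.
y_expected = np.abs(H) * np.sin(w * n + np.angle(H))
print(np.max(np.abs(y[2:] - y_expected[2:])))  # effectively zero
```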

Impulse Response and Convolution

For a linear, time-invariant system, there’s an even more powerful shortcut. You only need to measure one thing to fully characterize the system: how it responds to a single, instantaneous spike of input (called an impulse). This measurement is called the impulse response.

Once you know the impulse response, you can predict the system’s output for any input whatsoever using a mathematical operation called convolution. Convolution works by breaking the input signal into a sequence of tiny impulses, calculating the system’s response to each one (using the known impulse response, shifted and scaled appropriately), and adding all those responses together. The additivity and scaling properties of linearity are what make this summation valid. This technique is fundamental in audio processing, image filtering, communications, and control systems.
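The procedure above can be sketched end to end: probe a black-box system once with an impulse, then predict its response to any other input by convolution alone. The black-box system here is a hypothetical 3-tap filter standing in for something unknown:

```python
import numpy as np

# A hypothetical "black-box" LTI system (secretly a 3-tap FIR filter).
def black_box(x):
    taps = np.array([0.5, 0.3, 0.2])
    return np.convolve(x, taps)[:len(x)]

N = 64
impulse = np.zeros(N)
impulse[0] = 1.0
h = black_box(impulse)  # the measured impulse response

# Predict the response to an arbitrary input using only h.
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
y_predicted = np.convolve(x, h)[:N]  # shifted, scaled, summed copies of h
y_actual = black_box(x)

print(np.allclose(y_predicted, y_actual))  # True
```

The prediction matches because convolution is exactly the shifted-and-scaled summation that superposition licenses.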

Stability in Linear Systems

One of the most important practical questions about any linear system is whether it’s stable, meaning that small inputs produce outputs that stay bounded rather than growing without limit. For a linear system described by a matrix, stability depends on the system’s eigenvalues, which are characteristic numbers derived from the matrix that govern how the system evolves over time.

For continuous systems (those evolving in smooth time), the system is asymptotically stable if every eigenvalue has a negative real part. This means all disturbances decay toward zero over time. If any eigenvalue has a positive real part, the system is unstable, and disturbances grow. For discrete systems (those updating in steps, like a digital controller), the dividing line is the unit circle: all eigenvalues must have a magnitude less than 1 for stability. These clean, checkable criteria are one of the major advantages of working with linear models.
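Both criteria reduce to a one-line eigenvalue check. A sketch with two small illustrative matrices (the entries are arbitrary examples, not from any particular system):

```python
import numpy as np

# Continuous-time system dx/dt = A x: stable if every eigenvalue
# has a negative real part.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
eig_A = np.linalg.eigvals(A)
stable_continuous = bool(np.all(eig_A.real < 0))

# Discrete-time system x[k+1] = B x[k]: stable if every eigenvalue
# lies strictly inside the unit circle.
B = np.array([[0.5, 0.1],
              [0.0, 0.8]])
eig_B = np.linalg.eigvals(B)
stable_discrete = bool(np.all(np.abs(eig_B) < 1))

print(stable_continuous, stable_discrete)  # True True
```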

Why Linearity Matters (and Where It Breaks Down)

The appeal of linear systems comes down to predictability. Superposition means you can understand a complex situation by breaking it into simple pieces, analyzing each one, and adding the results. Stability can be checked by computing eigenvalues. The entire output can be predicted from a single impulse response measurement. These tools are clean, fast, and well understood.

Real-world systems, however, are rarely perfectly linear. A spring stretched too far stops obeying Hooke’s Law. An amplifier driven too hard distorts the signal. Fluid dynamics, weather systems, and biological processes are deeply nonlinear. When engineers model these systems as linear, they’re making an approximation that works well within a limited operating range. Studies comparing linear and nonlinear control approaches for autonomous vehicles, for instance, show that linearized models accumulate errors over time, with tracking errors roughly three times larger than those from nonlinear models under the same conditions.

This tradeoff, simplicity versus accuracy, is why linear systems theory remains central to engineering education. It provides the baseline tools and intuition. When those tools aren’t enough, nonlinear methods build on the same foundations but handle the added complexity directly.