What Is an Eigenvalue? Definition and Real-World Uses

An eigenvalue is a number that describes how much a vector gets stretched, shrunk, or flipped when a matrix transforms it. In the equation Ax = λx, the eigenvalue is λ. Most vectors change direction when you multiply them by a matrix, but a few special vectors (called eigenvectors) keep pointing the same way. The eigenvalue tells you what happens to those special vectors: a value of 2 means the vector doubles in length, 0.5 means it shrinks by half, and -1 means it flips to point the opposite direction.
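The defining equation is easy to check numerically. Here is a minimal NumPy sketch (the matrix is made up for illustration) that finds the eigenpairs of a 2×2 matrix and verifies Ax = λx for one of them:

```python
import numpy as np

# A hypothetical 2x2 matrix chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # the lambdas of A

# Verify A x = lambda x for the first eigenpair.
lam = eigenvalues[0]
x = eigenvectors[:, 0]
print(np.allclose(A @ x, lam * x))  # True
```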

The Core Idea in Plain Terms

Think of a matrix as a machine that takes in a vector and spits out a new one. Usually the output vector points in a completely different direction than the input. But for certain vectors, the matrix only scales them, stretching or compressing them along the same line. These are eigenvectors, and the scaling factor is the eigenvalue.

An eigenvalue greater than 1 means the eigenvector gets longer (dilation). An eigenvalue between 0 and 1 means it gets shorter (contraction). A negative eigenvalue means the vector reverses direction, with the absolute value still setting the stretch factor: -1 flips the vector at the same length, while -2 flips it and doubles it. And an eigenvalue of zero means the matrix collapses that vector to nothing.

In two dimensions, you can picture this as lines through the origin. Each line represents a direction where the matrix just stretches or compresses without rotating. The eigenvalue tells you how much stretching or compressing happens along that line.
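A diagonal matrix makes this picture concrete, because its eigenvector lines are simply the coordinate axes and its diagonal entries are the eigenvalues. A small sketch (matrices invented for illustration):

```python
import numpy as np

# Diagonal matrix: the axes are the eigenvector lines,
# and the diagonal entries are the eigenvalues.
A = np.diag([2.0, 0.5])

v_stretch = np.array([1.0, 0.0])   # along the lambda = 2 direction
v_shrink = np.array([0.0, 1.0])    # along the lambda = 0.5 direction

print(A @ v_stretch)  # doubled in length (dilation)
print(A @ v_shrink)   # halved in length (contraction)

# Eigenvalue -1 flips a vector along its own line.
B = np.diag([-1.0, 1.0])
print(B @ v_stretch)  # same line, opposite direction
```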

How Eigenvalues Are Calculated

Finding eigenvalues comes down to solving one equation: det(A − λI) = 0. Here, A is your matrix, λ is the unknown eigenvalue, and I is the identity matrix (ones on the diagonal, zeros everywhere else). This equation is called the characteristic equation.

The logic behind it: if Ax = λx, you can rearrange to (A − λI)x = 0. For this to have a nonzero solution, the matrix (A − λI) must be singular, meaning its determinant equals zero. Expanding that determinant gives you a polynomial in λ, and the roots of that polynomial are the eigenvalues. A 2×2 matrix gives a quadratic, a 3×3 gives a cubic, and so on.
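For a 2×2 matrix the characteristic polynomial has a tidy closed form, λ² − trace(A)·λ + det(A) = 0, so you can solve it as a quadratic and compare against a library routine. A quick sketch with an example matrix chosen to give integer roots:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# Characteristic polynomial of a 2x2 matrix:
# lambda^2 - trace(A) * lambda + det(A) = 0
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
roots = np.sort(np.roots(coeffs))

print(roots)                          # eigenvalues from the quadratic
print(np.sort(np.linalg.eig(A)[0]))  # same values from the library
```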

For a 2×2 matrix, this is straightforward algebra. For anything much larger, solving by hand becomes impractical. Software packages use iterative numerical methods instead, most commonly the QR algorithm, which repeatedly factors and recombines the matrix until the eigenvalues emerge. The computational cost scales steeply with matrix size, so real-world implementations first reduce the matrix to a nearly triangular shape (Hessenberg form) that's much cheaper to iterate on.
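The bare QR iteration fits in a few lines, though production libraries add shifts and the Hessenberg reduction for speed and robustness. A stripped-down sketch, with a made-up test matrix whose eigenvalues are 5 and 2:

```python
import numpy as np

def qr_eigenvalues(A, iterations=200):
    """Bare-bones QR iteration: no shifts, no Hessenberg reduction.
    Each step produces a matrix similar to A (same eigenvalues);
    the diagonal converges to the eigenvalues."""
    Ak = A.astype(float).copy()
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q  # recombine in reverse order
    return np.sort(np.diag(Ak))

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(qr_eigenvalues(A))  # approaches [2., 5.]
```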

Two Useful Shortcuts

There are two elegant relationships that connect eigenvalues to properties you can read directly off a matrix. The sum of all eigenvalues equals the trace of the matrix (the sum of its diagonal entries). And the product of all eigenvalues equals the determinant. These won’t give you individual eigenvalues for large matrices, but they’re useful for checking your work or quickly deducing a missing eigenvalue when you already know the others.
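Both identities are easy to confirm numerically, even for a random matrix with complex eigenvalues (the imaginary parts cancel in the sum and product). A quick check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # random matrix, just for the check
eigs = np.linalg.eig(A)[0]

# Sum of eigenvalues equals the trace...
print(np.isclose(eigs.sum().real, np.trace(A)))           # True
# ...and their product equals the determinant.
print(np.isclose(np.prod(eigs).real, np.linalg.det(A)))   # True
```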

Symmetric Matrices Are a Special Case

Symmetric matrices, where the entry in row i, column j always matches the entry in row j, column i, have particularly clean eigenvalue behavior. Their eigenvalues are always real numbers (never complex), and their eigenvectors can always be chosen perpendicular to each other. This makes symmetric matrices much easier to work with, which is fortunate because they show up constantly in applications. Covariance matrices in statistics, for instance, are always symmetric.
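Both properties are visible in a small example. NumPy even provides a routine specialized for symmetric input, np.linalg.eigh, which guarantees real eigenvalues and an orthonormal set of eigenvectors:

```python
import numpy as np

# A symmetric matrix: S equals its own transpose.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, V = np.linalg.eigh(S)  # eigh assumes symmetric input

print(np.all(np.isreal(eigenvalues)))   # True: real eigenvalues
print(np.allclose(V.T @ V, np.eye(3)))  # True: perpendicular (orthonormal) eigenvectors
```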

A symmetric matrix with all positive eigenvalues is called positive definite. This comes up in optimization, where a positive definite matrix guarantees you’re at a minimum rather than a maximum or saddle point. A quick way to check: if all the pivots during elimination are positive, all the eigenvalues are positive too.
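In code, the pivot-based check corresponds to attempting a Cholesky factorization, which succeeds exactly when a symmetric matrix is positive definite. A sketch with a made-up matrix:

```python
import numpy as np

S = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Direct check: all eigenvalues positive.
print(np.all(np.linalg.eigvalsh(S) > 0))  # True -> positive definite

# Pivot-style check: Cholesky succeeds exactly when all pivots
# (hence all eigenvalues) of a symmetric matrix are positive.
try:
    np.linalg.cholesky(S)
    print("positive definite")
except np.linalg.LinAlgError:
    print("not positive definite")
```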

Eigenvalues in Data Science

Principal component analysis (PCA), one of the most widely used techniques for reducing the complexity of large datasets, runs entirely on eigenvalues. PCA works by computing the eigenvectors and eigenvalues of a dataset’s covariance matrix. Each eigenvector represents a direction of variation in the data, and its corresponding eigenvalue tells you how much of the total variance that direction captures.

If the first eigenvalue accounts for 85% of the total variance, the corresponding component alone captures most of the meaningful structure in the data. You can then drop components with small eigenvalues, compressing a dataset with hundreds of variables down to a handful of components without losing much information. The proportion of variance explained by any set of components is simply the sum of their eigenvalues divided by the sum of all eigenvalues.
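The whole pipeline, covariance matrix, eigen-decomposition, variance explained, takes only a few lines. A sketch on synthetic data deliberately built so one direction dominates:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic dataset: 200 samples, 3 variables, with most of the
# variation along the first variable (scales are made up).
X = rng.standard_normal((200, 3)) * np.array([5.0, 1.0, 0.2])

cov = np.cov(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# eigh returns ascending order; reverse so the largest variance comes first.
eigenvalues = eigenvalues[::-1]
explained = eigenvalues / eigenvalues.sum()
print(explained)  # fraction of total variance per component
```

Here the first component captures the bulk of the variance, so the last two could be dropped with little loss.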

Eigenvalues in Search Rankings

Google’s original PageRank algorithm, the system that made web search useful, was fundamentally an eigenvalue problem. The web was modeled as a matrix where each entry described the probability of following a link from one page to another. The importance score of every page was encoded in a single eigenvector corresponding to the eigenvalue 1.

Rather than solving for this eigenvector directly (which would be impossibly expensive for billions of web pages), the algorithm used a technique called the power method: start with an initial guess, multiply by the matrix repeatedly, and the result converges to the dominant eigenvector. A few dozen iterations were enough to get a good approximation, making it computationally feasible at web scale.
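The power method is short enough to show in full. This sketch uses a tiny invented web of three pages, encoded as a column-stochastic link matrix (entry [i, j] is the probability of following a link from page j to page i); the real PageRank matrix also adds a damping factor, omitted here for simplicity:

```python
import numpy as np

# Hypothetical 3-page web as a column-stochastic link matrix.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

rank = np.full(3, 1 / 3)   # start from a uniform guess
for _ in range(50):
    rank = L @ rank        # repeated multiplication by the matrix
    rank /= rank.sum()     # keep it a probability vector

print(rank)  # the dominant eigenvector, with eigenvalue 1
```

For this toy web the scores settle at [0.4, 0.2, 0.4]: the two pages with more incoming weight rank highest.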

Eigenvalues in Physics and Engineering

In quantum mechanics, eigenvalues have direct physical meaning. The energy levels of a particle are the eigenvalues of a mathematical operator called the Hamiltonian. Solving the Schrödinger equation means finding these eigenvalues, and only specific energy values are allowed. This is the origin of the word “quantized”: the eigenvalues are discrete, and each one corresponds to a specific quantum state.
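The textbook particle-in-a-box problem can be sketched numerically: discretize the Hamiltonian on a grid and its eigenvalues approximate the allowed energy levels, which grow as n². A finite-difference sketch (units chosen so physical constants drop out):

```python
import numpy as np

# Finite-difference particle in a box (infinite square well),
# in units where hbar^2 / (2m) = 1 and the box has length 1.
N = 400
h = 1.0 / (N + 1)

# The Hamiltonian is -d^2/dx^2, discretized as a tridiagonal matrix.
H = (np.diag(np.full(N, 2.0))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

energies = np.linalg.eigvalsh(H)[:3]  # three lowest energy levels
print(energies / energies[0])  # close to 1, 4, 9: the quantized n^2 pattern
```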

In structural engineering, eigenvalues determine the natural frequencies at which a bridge, building, or aircraft wing vibrates. Every structure has a set of natural vibration modes, each with its own frequency given by an eigenvalue of the system’s stiffness and mass matrices. If an external force (wind, traffic, earthquakes) hits one of these natural frequencies, resonance occurs and vibrations amplify dangerously. Engineers use eigenvalue analysis to identify these critical frequencies during design and adjust the structure to avoid them.
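For a discrete model, the natural frequencies come from the generalized eigenvalue problem Kx = ω²Mx. A sketch for a hypothetical two-mass, two-spring system (stiffness and mass values invented for illustration), reducing it to a standard eigenvalue problem via M⁻¹K:

```python
import numpy as np

# Hypothetical stiffness (K) and mass (M) matrices for a
# two-degree-of-freedom system; values chosen for illustration.
K = np.array([[ 300.0, -100.0],
              [-100.0,  100.0]])
M = np.diag([2.0, 1.0])

# Natural frequencies satisfy K x = omega^2 M x;
# equivalently, omega^2 are the eigenvalues of M^{-1} K.
omega_squared = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)
frequencies = np.sqrt(omega_squared)  # natural frequencies, rad/s

print(frequencies)  # one frequency per vibration mode
```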

Why the Name

The word “eigenvalue” comes from the German word “eigen,” meaning “own” or “characteristic.” An eigenvalue is the matrix’s own value, the scaling factor intrinsic to how that matrix transforms space. You’ll sometimes see eigenvalues referred to as “characteristic values” or “proper values” in older textbooks, but “eigenvalue” is now the standard term across mathematics, physics, and engineering.