What Is an Eigenspace: Definition and Examples

An eigenspace is the set of all vectors that get scaled (stretched, shrunk, or flipped) by the same factor when a matrix or linear transformation acts on them, plus the zero vector. If you’ve already encountered eigenvalues and eigenvectors, the eigenspace is simply what you get when you collect every eigenvector sharing the same eigenvalue and bundle them together into a subspace. It’s one of the central ideas in linear algebra, and it shows up in everything from data science to quantum mechanics.

Eigenvalues and Eigenvectors First

To make sense of an eigenspace, you need the two concepts it’s built from. When you multiply a matrix A by most vectors, the result points in a completely new direction. But certain special vectors only get scaled: the output is just a number times the original vector. Those special vectors are eigenvectors, and the scaling factor is the eigenvalue.

Written out, the relationship is Av = λv, where A is the matrix, v is the eigenvector, and λ (lambda) is the eigenvalue. A 2×2 matrix might stretch one family of vectors by a factor of 5 and another family by a factor of −1. Each of those families, collected together, forms an eigenspace.
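The relation Av = λv is easy to check numerically. A minimal sketch with NumPy (assuming it is available), using a 2×2 matrix for which v = (1, 2) happens to be an eigenvector with λ = 5:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])
v = np.array([1.0, 2.0])   # an eigenvector of A for lambda = 5

# A acts on v as pure scaling: Av = 5v
print(A @ v)   # [ 5. 10.]
print(5 * v)   # [ 5. 10.]
```

Most other vectors, when multiplied by A, change direction; this one only changes length.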

The Formal Definition

For a given eigenvalue λ of a matrix A, the eigenspace is the collection of every vector v satisfying (A − λI)v = 0, where I is the identity matrix. In linear algebra language, that collection is the null space of the matrix (A − λI). The zero vector is always included by convention, even though it isn’t considered an eigenvector itself, because including it is what makes the eigenspace a proper subspace: it’s closed under addition and scalar multiplication.

That closure property matters. If two vectors both get scaled by the same factor λ when you apply A, then any combination of those two vectors also gets scaled by λ. So the eigenspace isn’t just a scattered set of arrows; it’s a flat, structured region of space (a line, a plane, or something higher-dimensional) passing through the origin.
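That closure property can be verified directly. A short sketch (the matrix here is a hypothetical example chosen so that the eigenvalue 2 has a two-dimensional eigenspace):

```python
import numpy as np

# Hypothetical matrix whose eigenvalue 2 has a two-dimensional eigenspace
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])

u = np.array([1.0, 0.0, -1.0])   # eigenvector for lambda = 2
w = np.array([0.0, 1.0,  0.0])   # independent eigenvector for lambda = 2

# Any linear combination of u and w is again scaled by 2
combo = 3 * u + 2 * w
print(np.allclose(A @ combo, 2 * combo))   # True
```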

How to Calculate an Eigenspace

The process has two stages: find the eigenvalues, then find each eigenspace.

To find eigenvalues, you solve the characteristic equation det(A − λI) = 0. Each solution λ is an eigenvalue. To find the eigenspace for a particular eigenvalue, you plug that λ back in and solve the system (A − λI)v = 0 using row reduction.

A concrete example makes this clearer. Take the matrix A with rows [1, 2] and [4, 3]. One of its eigenvalues is λ = 5. You form the matrix (A − 5I), which works out to rows [−4, 2] and [4, −2]. Row-reducing that gives [1, −1/2] in the first row and zeros in the second. That tells you the first component of any eigenvector equals half the second component, so every vector of the form t·(1/2, 1) for a constant t satisfies the system. The eigenspace for λ = 5 is the span of the vector (1/2, 1), which geometrically is a line through the origin.
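The same calculation can be cross-checked with NumPy's eigensolver, which returns the eigenvalues along with one basis eigenvector for each (a sketch, assuming NumPy is available):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                     # 5 and -1 (order may vary)

# Take the column of eigvecs paired with eigenvalue 5 and rescale it
# so its second component is 1, matching the basis vector (1/2, 1)
i = np.argmax(eigvals)             # index of lambda = 5
v = eigvecs[:, i] / eigvecs[1, i]
print(v)                           # [0.5 1. ]
```

The solver returns a unit-length eigenvector, but any nonzero rescaling of it is an equally valid basis for the same one-dimensional eigenspace.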

Geometric and Algebraic Multiplicity

The dimension of an eigenspace is called the geometric multiplicity of that eigenvalue. It tells you how many independent directions get scaled by that factor. A separate concept, algebraic multiplicity, counts how many times the eigenvalue appears as a root of the characteristic equation. The geometric multiplicity is always at least 1 (an eigenspace must contain a nonzero vector) and can never exceed the algebraic multiplicity.
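The gap between the two multiplicities shows up clearly in a 2×2 Jordan block, a standard example where the eigenvalue 2 is a double root of the characteristic equation but only supports one independent eigenvector (a sketch assuming NumPy):

```python
import numpy as np

# A classic defective example: a 2x2 Jordan block with eigenvalue 2
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

eigvals = np.linalg.eigvals(A)
print(eigvals)                 # [2. 2.] -> algebraic multiplicity 2

# Geometric multiplicity = dimension of null(A - 2I) = n - rank(A - 2I)
n = A.shape[0]
rank = np.linalg.matrix_rank(A - 2 * np.eye(n))
print(n - rank)                # 1 -> only one independent eigenvector
```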

When these two multiplicities match for every eigenvalue, the matrix is diagonalizable, meaning you can rewrite it in a simpler diagonal form using a basis made entirely of eigenvectors. When the geometric multiplicity falls short of the algebraic multiplicity for even one eigenvalue, the matrix is called defective and cannot be diagonalized. This distinction matters in applications because diagonalizable matrices are far easier to work with: raising them to powers, computing exponentials, and analyzing long-term behavior all become straightforward.
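A quick numerical check of diagonalization, reusing the 2×2 matrix from the worked example above (a sketch; the columns of P are eigenvectors, and D carries the eigenvalues on its diagonal):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])
eigvals, P = np.linalg.eig(A)     # columns of P are eigenvectors
D = np.diag(eigvals)

# Diagonalization: A = P D P^{-1}
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True

# Powers become trivial: A^10 = P D^10 P^{-1}
A10 = P @ np.diag(eigvals**10) @ np.linalg.inv(P)
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))   # True
```

Raising D to a power only means raising each diagonal entry to that power, which is the practical payoff of having a full basis of eigenvectors.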

What an Eigenspace Looks Like Geometrically

Think of a linear transformation as warping space: stretching it in some directions, compressing it in others, maybe flipping it. The eigenspaces are the directions that survive the transformation without rotating. They might get longer or shorter, or reverse direction, but they stay on the same line (or plane, or hyperplane) they started on.

A 3×3 matrix could have a one-dimensional eigenspace (a line of vectors all scaled by the same factor), a two-dimensional eigenspace (an entire plane of vectors scaled by the same factor), or even a three-dimensional eigenspace if the matrix is just a uniform scaling in every direction. The richer the eigenspaces, the simpler the transformation’s behavior.
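A reflection is a convenient concrete case: reflecting 3D space across the xy-plane leaves every vector in that plane fixed (a two-dimensional eigenspace for λ = 1) and flips the perpendicular axis (a one-dimensional eigenspace for λ = −1). A sketch:

```python
import numpy as np

# Reflection across the xy-plane: flips z, leaves x and y alone
R = np.diag([1.0, 1.0, -1.0])

# Eigenvalue 1: the whole xy-plane survives unchanged
geo_plus = 3 - np.linalg.matrix_rank(R - np.eye(3))
print(geo_plus)    # 2 -> a plane of eigenvectors

# Eigenvalue -1: the z-axis is reversed
geo_minus = 3 - np.linalg.matrix_rank(R + np.eye(3))
print(geo_minus)   # 1 -> a line of eigenvectors
```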

Why Eigenspaces Matter in Practice

Eigenspaces aren’t just an abstract classroom exercise. They underpin some of the most widely used techniques in data science, engineering, and applied mathematics.

In principal component analysis (PCA), you compute the eigenvalues and eigenvectors of a dataset’s covariance matrix. The eigenvectors point in the directions of greatest variance in the data, and the eigenvalues tell you how much variance each direction captures. Ranking eigenvectors by their eigenvalues lets you keep only the most informative directions and discard the rest, compressing a dataset with dozens or hundreds of variables down to a handful of principal components that retain most of the original information. Each principal component is an eigenvector, and the subspace they span is an eigenspace of the covariance matrix.
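The PCA recipe above can be sketched in a few lines of NumPy on synthetic data (the data-generating matrix here is an arbitrary illustrative choice, not a standard dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-variable data with most variance along one direction
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0],
                                          [1.0, 0.5]])

Xc = X - X.mean(axis=0)                 # center the data
C = np.cov(Xc, rowvar=False)            # covariance matrix

eigvals, eigvecs = np.linalg.eigh(C)    # eigh: C is symmetric
order = np.argsort(eigvals)[::-1]       # rank directions by variance captured
components = eigvecs[:, order]          # principal components, most informative first

# Project onto the top principal component (the dominant eigenvector)
scores = Xc @ components[:, 0]
print(eigvals[order])                   # variance captured per direction
```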

Web search ranking uses the same idea at a massive scale. Google’s original PageRank algorithm modeled the internet as a giant matrix of link relationships and computed its dominant eigenvector: the eigenvector with the largest eigenvalue. The entries of that vector became the importance scores for every webpage. The eigenspace associated with that top eigenvalue contained the ranking information for the entire web.
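The dominant eigenvector can be found by power iteration: repeatedly apply the matrix and renormalize until the vector stops changing. A toy sketch on a hypothetical three-page link graph (a bare-bones illustration, not Google's actual algorithm, which also adds a damping factor):

```python
import numpy as np

# Tiny hypothetical link graph: M[i, j] = probability of moving from page j
# to page i. Page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

# Power iteration: each step pulls the vector toward the dominant eigenvector
# (eigenvalue 1 here, since every column of M sums to 1)
r = np.ones(3) / 3
for _ in range(100):
    r = M @ r
    r = r / r.sum()

print(r)   # [0.4 0.2 0.4] -> importance scores for the three pages
```

Pages 0 and 2 end up tied for most important: each sits on the cycle that keeps receiving the random surfer, while page 1 gets only half of page 0's outgoing weight.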

In physics and engineering, eigenspaces describe the natural modes of vibrating structures, the stable energy states of quantum systems, and the long-term behavior of dynamical systems. Wherever a system can be described by a matrix acting repeatedly on a state, eigenspaces identify the directions that behave in the simplest, most predictable way.

Eigenspace vs. Eigenvector

A common point of confusion: an eigenvector is a single nonzero vector; an eigenspace is the entire subspace generated by all eigenvectors sharing the same eigenvalue. If a matrix has two independent eigenvectors both associated with λ = 3, neither one alone is the eigenspace. The eigenspace is the plane (or line, or higher-dimensional flat) spanned by all of them together. Any nonzero vector in that subspace is an eigenvector for λ = 3, and there are infinitely many such vectors. The eigenspace packages them into one coherent geometric object.