What Is a Null Space? Definition and How to Find It

A null space is the set of all vectors that, when multiplied by a given matrix, produce the zero vector. If you have a matrix A and you’re solving the equation Ax = 0, every solution x belongs to the null space of A. The zero vector itself is always a solution (since A times zero is always zero), but the interesting question is whether other, non-zero solutions exist.

This concept shows up early in linear algebra and connects to some of the most important ideas in the subject: whether a system of equations has unique solutions, whether a matrix is invertible, and how much “freedom” exists in a system.

The Core Idea Behind the Null Space

Think of a matrix as a machine that takes in a vector, transforms it, and spits out a new vector. The null space captures every input that gets collapsed down to zero by that transformation. If the only input that produces zero is the zero vector itself, the matrix loses no information during the transformation. If other vectors also get mapped to zero, the matrix is “destroying” some information, and those destroyed inputs make up the null space.

For example, take a matrix A and test whether the vector (1, 2, -1) is in its null space. You multiply A by that vector. If the result is the zero vector, the vector is in the null space. If the result is anything else, it isn’t. That’s the entire test.
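That membership test is a one-liner in code. Here is a minimal sketch using NumPy, with a made-up matrix A chosen so that the vector (1, 2, -1) from the example above happens to land in its null space:

```python
import numpy as np

# Hypothetical example matrix; any matrix works, this one is chosen
# so that (1, 2, -1) is in its null space.
A = np.array([[1, 2, 5],
              [2, 4, 10]])

def in_null_space(A, x, tol=1e-10):
    """Test null-space membership: is A @ x (numerically) the zero vector?"""
    return np.allclose(A @ x, 0, atol=tol)

print(in_null_space(A, np.array([1, 2, -1])))  # True: 1 + 4 - 5 = 0 and 2 + 8 - 10 = 0
print(in_null_space(A, np.array([1, 0, 0])))   # False: A @ x = (1, 2)
```

The tolerance parameter matters in floating-point arithmetic, where "exactly zero" is too strict a test.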

Why It Qualifies as a Subspace

The null space isn’t just a random collection of vectors. It has a very specific mathematical structure: it’s a subspace. That means it satisfies three properties. First, it always contains the zero vector (non-emptiness). Second, if you add any two vectors in the null space together, the result is also in the null space. Third, if you multiply any vector in the null space by a scalar, the result stays in the null space.

These properties matter because they guarantee the null space has a clean, predictable shape. It’s not some scattered set of points. It’s a flat surface passing through the origin: a line, a plane, or a higher-dimensional equivalent, depending on the matrix.
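The three subspace properties can be checked numerically. This sketch uses a hypothetical one-row matrix whose null space is the plane x + y + z = 0, and verifies closure for two sample vectors in that plane:

```python
import numpy as np

# Hypothetical matrix: its null space is the plane x + y + z = 0.
A = np.array([[1.0, 1.0, 1.0]])

u = np.array([1.0, -1.0, 0.0])   # in the null space: 1 - 1 + 0 = 0
v = np.array([0.0, 1.0, -1.0])   # in the null space: 0 + 1 - 1 = 0

print(np.allclose(A @ np.zeros(3), 0))  # True: the zero vector is always included
print(np.allclose(A @ (u + v), 0))      # True: closed under addition
print(np.allclose(A @ (3.5 * u), 0))    # True: closed under scalar multiplication
```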

How to Calculate It

Finding the null space comes down to solving the equation Ax = 0 using a systematic process. You set up the equation, then apply Gaussian elimination to reduce the matrix to its reduced row echelon form. The key insight is that elimination doesn’t change the null space. If a vector satisfies Ax = 0, it also satisfies Rx = 0, where R is the reduced form.

Once you have the reduced form, you identify two types of columns: pivot columns and free columns. The pivot columns correspond to variables that are locked in by the equations. The free columns correspond to variables you can set to whatever you want. For each free variable, you create one “special solution” by setting that free variable to 1 and all other free variables to 0, then solving for the pivot variables.

For instance, suppose you reduce a matrix and find that x₂ is a free variable. You set x₂ = 1, solve the remaining equations, and get a specific vector like (-2, 1, 0). That vector is a special solution. If you had two free variables, you’d repeat the process for the second one, getting a second special solution. Every vector in the null space can be written as a combination of these special solutions.
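SymPy can carry out this whole procedure symbolically. The sketch below uses a hypothetical matrix constructed so that x₂ is the only free variable and the special solution is (-2, 1, 0), matching the example above; `rref` returns the reduced form along with the pivot column indices, and `nullspace` returns the special solutions directly:

```python
from sympy import Matrix

# Hypothetical matrix chosen so that x2 is the only free variable.
A = Matrix([[1, 2, 0],
            [2, 4, 1]])

R, pivot_cols = A.rref()   # reduced row echelon form and pivot column indices
print(R)                   # Matrix([[1, 2, 0], [0, 0, 1]])
print(pivot_cols)          # (0, 2): x1 and x3 are pivot variables; x2 is free

# Setting the free variable x2 = 1 and solving for the pivot variables
# yields the special solution (-2, 1, 0):
print(A.nullspace())       # [Matrix([[-2], [1], [0]])]
```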

Basis and Dimension of the Null Space

The special solutions you find through elimination form a basis for the null space. A basis is the smallest set of vectors that can generate every vector in the space through linear combinations. These special solutions are linearly independent (none of them can be built from the others) and they span the entire null space, which is exactly what a basis requires.

The number of vectors in this basis is called the nullity. If you have three free variables, you get three special solutions, and the nullity is 3. This number tells you the “dimension” of the null space, or loosely, how many degrees of freedom exist in the solution set of Ax = 0.
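Counting the special solutions gives the nullity directly. A minimal sketch, using a hypothetical 3×4 matrix whose third row is the sum of the first two (so it adds no new constraint, leaving two free variables):

```python
from sympy import Matrix

# Hypothetical 3x4 matrix; the third row is row1 + row2, so rank is 2
# and two free variables remain.
A = Matrix([[1, 0, 2, -1],
            [0, 1, 1,  3],
            [1, 1, 3,  2]])

basis = A.nullspace()   # one special solution per free variable
print(len(basis))       # 2: the nullity of A
```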

The Rank-Nullity Theorem

One of the cleanest results in linear algebra ties the null space directly to the rest of the matrix’s structure. For any matrix with n columns:

rank + nullity = n

The rank counts the number of pivot columns (the independent “directions” the matrix can produce as output), and the nullity counts the free columns (the dimension of the null space). Together, they always add up to the total number of columns. If a 5-column matrix has rank 3, its nullity is 2, meaning the null space is a two-dimensional plane in five-dimensional space.
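The theorem is easy to verify numerically. This sketch builds a hypothetical 5-column matrix with rank 3, mirroring the example above, and checks the count (`scipy.linalg.null_space` returns an orthonormal basis for the null space as the columns of a matrix):

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical 5-column matrix with three independent rows, so rank 3.
A = np.array([[1, 0, 0, 2, 1],
              [0, 1, 0, 3, 0],
              [0, 0, 1, 1, 4]], dtype=float)

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]       # number of null-space basis vectors

print(rank, nullity, rank + nullity)   # 3 2 5: rank + nullity = number of columns
```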

What the Null Space Tells You About Invertibility

The null space gives you a direct test for whether a square matrix is invertible. An invertible matrix has a null space containing only the zero vector, with a nullity of zero. This means the equation Ax = 0 has no solutions other than x = 0, which in turn means the matrix doesn’t collapse any non-zero input to zero. No information is lost, and every transformation can be reversed.

If the null space contains anything beyond the zero vector, the matrix is not invertible. There’s no way to “undo” the transformation because multiple different inputs produce the same output. The larger the null space, the more information the matrix destroys.
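For a square n×n matrix, nullity zero is equivalent to full rank (by the rank-nullity theorem, nullity = n - rank), so the invertibility test can be phrased as a rank check. A minimal sketch with two hypothetical 2×2 examples:

```python
import numpy as np

def invertible_via_null_space(A, tol=1e-10):
    """A square matrix is invertible iff its nullity is zero,
    i.e. its rank equals its number of columns."""
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[0]

# Hypothetical examples:
print(invertible_via_null_space(np.array([[2.0, 0.0],
                                          [0.0, 3.0]])))  # True: only x = 0 maps to zero
print(invertible_via_null_space(np.array([[1.0, 2.0],
                                          [2.0, 4.0]])))  # False: (-2, 1) maps to zero
```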

Where Null Spaces Show Up in Practice

Null spaces aren’t just a textbook exercise. In structural engineering, the null space of a stiffness matrix reveals what are called zero energy modes: ways a structure can move without any internal forces resisting it. For a floating structure like an oil platform or a ship hull, the null space captures its rigid body motions (translation and rotation through space) that don’t deform the structure. Engineers need to compute these null spaces to correctly analyze forces and stability.

In data science, null spaces help identify redundancies. If you’re working with a dataset where some measurements are linear combinations of others, the null space of the data matrix tells you exactly which combinations carry no new information. This connects to dimensionality reduction, where you strip away redundant variables to simplify a problem without losing meaningful structure.

More broadly, any time you’re solving a system of linear equations and the solution isn’t unique, the null space describes the full family of solutions. You find one particular solution, then add any vector from the null space to get another valid solution. The null space is the “wiggle room” in the system.
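That “wiggle room” is easy to demonstrate. This sketch uses a hypothetical underdetermined system (one equation, three unknowns): starting from one particular solution, adding any multiple of a null-space vector produces another valid solution:

```python
import numpy as np

# Hypothetical underdetermined system: x1 + x2 + x3 = 6.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([6.0])

x_particular = np.array([6.0, 0.0, 0.0])   # one solution: 6 + 0 + 0 = 6
n = np.array([1.0, -2.0, 1.0])             # in the null space: 1 - 2 + 1 = 0

# Shifting the particular solution by a null-space vector
# gives another valid solution:
x_other = x_particular + 4.0 * n
print(np.allclose(A @ x_other, b))         # True: still solves Ax = b
```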