What Is the Null Space of a Matrix in Linear Algebra

The null space of a matrix is the set of all vectors that, when multiplied by that matrix, produce the zero vector. If you have a matrix A with n columns, the null space consists of every vector x in n-dimensional space where Ax = 0. In set-builder notation: Nul A = {x ∈ Rⁿ : Ax = 0}. It’s one of the most fundamental concepts in linear algebra because it tells you exactly how much “freedom” a system of equations has.
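The defining condition Ax = 0 is easy to check numerically. Here is a minimal sketch with NumPy, using a small made-up matrix for illustration:

```python
import numpy as np

# A 2x3 matrix: its null space is every x in R^3 with A @ x = 0.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # second row is twice the first

# x = (1, 1, -1) satisfies both equations: 1 + 2 - 3 = 0 and 2 + 4 - 6 = 0,
# so x is in the null space of A.
x = np.array([1.0, 1.0, -1.0])
print(A @ x)  # -> [0. 0.]

# The zero vector is always in the null space.
print(A @ np.zeros(3))  # -> [0. 0.]
```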

What the Null Space Actually Represents

Think of a matrix as a machine that transforms input vectors into output vectors. The null space is the collection of all inputs that get crushed to zero by that transformation. If a matrix maps three-dimensional space into two-dimensional space, for instance, there’s an entire line (or more) of vectors that all collapse onto the origin. That line is the null space.

This directly connects to solving systems of linear equations. When you write Ax = 0 (called a homogeneous system), you’re asking: which combinations of the unknowns satisfy all the equations simultaneously with zeros on the right side? The null space is the complete answer to that question. It always includes at least the zero vector itself (plugging in all zeros trivially works), but the interesting cases are when it contains nonzero vectors too.

The null space is guaranteed to be a subspace, meaning it’s closed under addition and scalar multiplication. If two vectors both get sent to zero by the matrix, any combination of them does too. This is what allows you to describe the entire null space compactly using just a few basis vectors.
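The closure property is easy to see in action. In this sketch (with an illustrative rank-1 matrix), two vectors in the null space are combined and the result still maps to zero:

```python
import numpy as np

A = np.array([[1.0, 1.0, -2.0],
              [2.0, 2.0, -4.0]])   # rank 1, so the null space is a plane in R^3

# Two vectors in the null space: (2, 0, 1) and (0, 2, 1).
x = np.array([2.0, 0.0, 1.0])
y = np.array([0.0, 2.0, 1.0])
assert np.allclose(A @ x, 0) and np.allclose(A @ y, 0)

# Closure: any linear combination of null-space vectors is also sent to zero.
z = 3.0 * x - 5.0 * y
print(A @ z)  # -> [0. 0.]
```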

Nullity and the Rank-Nullity Theorem

The dimension of the null space has its own name: nullity. A matrix with a two-dimensional null space has nullity 2, meaning you need exactly two independent vectors to describe every possible solution to Ax = 0.

The rank-nullity theorem ties this together with a clean formula. For any m × n matrix A:

rank(A) + nullity(A) = n

The rank counts how many columns carry independent information (the number of pivot columns after row reduction). The nullity counts the remaining columns, the “free” ones. Together they always add up to the total number of columns. So if you have a 5 × 7 matrix with rank 4, the nullity is 3, and the null space is a three-dimensional subspace of R⁷. This works because rank equals the number of leading entries in row echelon form, and nullity equals the number of columns without leading entries.
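The theorem can be verified directly in code. This sketch uses NumPy's matrix_rank together with SciPy's null_space on an illustrative 3 × 5 matrix (the values are made up; the third row is the sum of the first two, so the matrix is rank-deficient):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 0.0, 1.0, 3.0],
              [0.0, 1.0, 1.0, 0.0, 1.0],
              [1.0, 3.0, 1.0, 1.0, 4.0]])  # row 3 = row 1 + row 2

n = A.shape[1]                      # number of columns
rank = np.linalg.matrix_rank(A)     # number of independent columns
nullity = null_space(A).shape[1]    # number of null-space basis vectors

print(rank, nullity, n)             # rank + nullity equals n
assert rank + nullity == n
```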

How To Find a Basis for the Null Space

The standard method uses row reduction. Here’s the process with a concrete example.

Suppose you have a 3 × 4 matrix. First, row reduce it to reduced row echelon form. Identify the pivot columns (columns with leading 1s) and the free columns (everything else). Each free column corresponds to a free variable, one you can set to any value.

To build the basis vectors, take turns setting each free variable to 1 while setting all other free variables to 0, then solve for the pivot variables using the simplified equations. Each round gives you one basis vector. If you have two free variables, you get two basis vectors, and together they span the entire null space. Any vector in the null space can be written as a combination of these basis vectors.

For example, if row reduction of a 3 × 4 matrix leaves columns 1 and 3 as pivots and columns 2 and 4 as free, you’d first set x₂ = 1 and x₄ = 0, solve for x₁ and x₃, and record that vector. Then set x₂ = 0 and x₄ = 1, solve again, and record the second vector. Those two vectors form a basis for the null space.
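SymPy automates exactly this free-variable procedure, and its basis vectors have the telltale 1s and 0s in the free positions. Here is a sketch with an illustrative 3 × 4 matrix chosen (already in reduced row echelon form, for clarity) so that columns 1 and 3 are pivots and columns 2 and 4 are free:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0,  3],
            [0, 0, 1, -1],
            [0, 0, 0,  0]])

rref, pivots = A.rref()
print(pivots)          # (0, 2): columns 1 and 3 (zero-indexed) are pivots

# nullspace() performs the set-one-free-variable-to-1-at-a-time procedure.
basis = A.nullspace()
for v in basis:
    print(v.T)                          # each vector has a 1 in one free slot
    assert A * v == Matrix([0, 0, 0])   # and is genuinely in the null space
```

Two free variables produce exactly two basis vectors, matching the nullity.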

What the Null Space Tells You About a Matrix

The size of the null space reveals whether a matrix is invertible. For a square matrix, the following properties are all equivalent:

  • Trivial null space: the only solution to Ax = 0 is the zero vector
  • Invertibility: the matrix has an inverse
  • Full rank: the matrix row-reduces to the identity matrix
  • Unique solutions: the system Ax = b has exactly one solution for every b
  • Independent columns: no column can be written as a combination of the others

If any one of these is true, they’re all true. If any one fails, they all fail. So a square matrix with a nontrivial null space (containing anything besides the zero vector) is singular, meaning it cannot be inverted and the system Ax = b either has no solution or infinitely many solutions depending on b.
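The equivalence can be checked numerically: an invertible matrix has full rank and an empty null-space basis, while a singular one does not. A small sketch with two illustrative 2 × 2 matrices:

```python
import numpy as np
from scipy.linalg import null_space

# Invertible: full rank, trivial null space.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 2
assert null_space(A).shape[1] == 0     # no null-space basis vectors at all

# Singular: the second column is twice the first, so the null space is a line.
B = np.array([[1.0, 2.0],
              [3.0, 6.0]])
assert np.linalg.matrix_rank(B) == 1
N = null_space(B)
print(N.shape[1])                      # nullity 1: one basis vector
assert np.allclose(B @ N, 0)
```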

For non-square matrices, the null space still tells you about redundancy. A wide matrix (more columns than rows) always has a nontrivial null space because there are more unknowns than equations, guaranteeing free variables.

Null Space in Engineering and Applied Math

The null space shows up whenever you need to understand what a system “doesn’t see.” In structural engineering, the null space of a stiffness matrix corresponds to zero-energy modes, the ways a structure can move without any internal resistance. For a bridge that isn’t bolted down, rigid body motions (translation and rotation as a whole unit) live in the null space because they don’t deform the structure.

This matters for simulation software. Floating structures in static and dynamic analysis produce rank-deficient matrices, and engineers need to compute the null space to detect whether their model is sufficiently constrained. If the null space is larger than expected, it signals mechanisms or modeling errors that would otherwise cause the simulation to fail or produce nonsense. Domain decomposition methods used to break large finite element problems into smaller pieces also rely on null space computations for their substructures.

Computing the Null Space in Software

For small matrices in a homework setting, row reduction by hand works fine. For larger matrices in practice, numerical software uses a different approach: singular value decomposition (SVD). Row reduction is numerically unreliable in floating-point arithmetic: rounding errors make it hard to tell whether a small pivot is genuinely zero, so the computed rank, and with it the null space, can come out wrong for large matrices.

In Python, SciPy provides scipy.linalg.null_space(A), which constructs an orthonormal basis for the null space using SVD; MATLAB's null(A) does the same. Both identify the singular vectors corresponding to singular values that are effectively zero (below a tolerance threshold), and those vectors span the null space. The result is a set of orthogonal unit vectors, which is often more useful computationally than the basis you’d get from row reduction.
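A short sketch of the SciPy call, reusing a rank-1 matrix whose null space is easy to reason about by hand:

```python
import numpy as np
from scipy.linalg import null_space

# Rank-1 matrix: every row is a multiple of (1, 2, 3).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

N = null_space(A)      # columns of N form an orthonormal null-space basis
print(N.shape)         # (3, 2): nullity 2, matching rank-nullity (3 - 1)

# The basis vectors are in the null space, unit length, and orthogonal.
assert np.allclose(A @ N, 0)
assert np.allclose(N.T @ N, np.eye(N.shape[1]))
```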

The distinction matters: row reduction gives you a basis that’s easy to interpret (with 1s and 0s in the free variable positions), while SVD gives you an orthonormal basis that’s numerically stable and better suited for further computation. For understanding the concept, use row reduction. For real-world calculations with large matrices, use SVD-based tools.