A subspace is a smaller space that lives inside a larger one and follows the same rules. In mathematics, specifically linear algebra, a subspace is a subset of a vector space that is itself a vector space. Think of it as a “space within a space” that behaves consistently: you can add things together or scale them up, and you never leave the subspace.
If you’ve encountered this term in a math class, you’re most likely dealing with the linear algebra definition. Here’s what it means, how to recognize one, and why it matters.
The Two Rules a Subspace Must Follow
A subspace needs to satisfy two conditions. Say you have a vector space V (the big space) and a non-empty subset W inside it. W is a subspace of V if:
- Closed under addition: If you pick any two elements in W and add them together, the result is still in W.
- Closed under scalar multiplication: If you pick any element in W and multiply it by any number, the result is still in W.
These two rules automatically guarantee a third property: the zero vector is always in a subspace. That’s because you can take any element in W and multiply it by zero, which gives you the zero vector, and by the scalar multiplication rule, that result has to stay in W.
“Closed” here just means “you can’t escape.” No matter how you combine elements using addition or scaling, you stay inside the subspace. If even one combination kicks you out, it’s not a subspace.
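You can check both rules concretely. Here's a minimal sketch in Python using NumPy, where the membership test `in_W` (the line y = 2x) and the sample vectors are illustrative choices, not part of any standard API:

```python
import numpy as np

# Hypothetical membership test for W = the line y = 2x in 2D
def in_W(v):
    return np.isclose(v[1], 2 * v[0])

u, w = np.array([1.0, 2.0]), np.array([-3.0, -6.0])  # both lie on y = 2x
assert in_W(u) and in_W(w)

assert in_W(u + w)      # closed under addition
assert in_W(4.5 * u)    # closed under scalar multiplication
assert in_W(0 * u)      # the zero vector comes along for free
```

The last assertion is exactly the third property described above: scaling any element by zero lands you on the zero vector, which closure forces to be in W.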
What Subspaces Look Like Geometrically
The easiest way to build intuition is to picture subspaces in two and three dimensions.
In 2D space, the only subspaces are: the origin by itself (a single point at zero), any line that passes through the origin, and the entire 2D plane. That’s it. A line that doesn’t pass through the origin is not a subspace, because it doesn’t contain the zero vector.
In 3D space, the subspaces are: the origin, any line through the origin, any plane through the origin, and all of 3D space itself. Again, a plane floating off to the side that doesn’t pass through the origin fails the test.
This “must pass through the origin” requirement trips up a lot of students. A line like y = x + 1 in 2D looks perfectly well-behaved, but it doesn’t contain the zero point (0, 0), so it can’t be a subspace.
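The failure is easy to demonstrate numerically. This sketch (the membership test is an illustrative helper) confirms that the line y = x + 1 misses the origin, and that closure breaks too:

```python
import numpy as np

def on_line(v):
    """Illustrative test for points satisfying y = x + 1."""
    return np.isclose(v[1], v[0] + 1)

# The zero vector is not on the line, so it cannot be a subspace
assert not on_line(np.array([0.0, 0.0]))

# Closure also fails: doubling a point on the line leaves the line
p = np.array([1.0, 2.0])
assert on_line(p)
assert not on_line(2 * p)  # (2, 4), but 2 + 1 = 3, not 4
```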
Quick Examples of What Fails
Seeing what doesn’t qualify often helps more than seeing what does.
Consider all points (x, y) in 2D where both x and y are zero or positive: the first quadrant. This is not a subspace because it fails closure under scalar multiplication. If you take the point (1, 2) and multiply by -1, you get (-1, -2), which is outside the first quadrant. You’ve “escaped.”
Now consider all points (x, y) where the absolute value of x equals the absolute value of y. Geometrically, this is the union of the two lines y = x and y = -x, so it includes points like (3, -3) and (-2, 2). It’s not a subspace because it fails closure under addition. Some sums happen to stay in the set: (3, -3) + (-2, 2) = (1, -1), and |1| = |-1|. But (3, 3) + (1, -1) = (4, 2), and |4| does not equal |2|. One bad combination is enough to disqualify it.
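Both failures can be verified in a few lines. The membership tests below are illustrative stand-ins for the two sets just described:

```python
import numpy as np

def in_first_quadrant(v):
    """First quadrant: x >= 0 and y >= 0."""
    return v[0] >= 0 and v[1] >= 0

def abs_equal(v):
    """The set where |x| == |y|."""
    return np.isclose(abs(v[0]), abs(v[1]))

# First quadrant: scaling by -1 escapes the set
p = np.array([1.0, 2.0])
assert in_first_quadrant(p)
assert not in_first_quadrant(-1 * p)

# |x| = |y|: picking one vector from each of the two lines breaks closure
a, b = np.array([3.0, 3.0]), np.array([1.0, -1.0])
assert abs_equal(a) and abs_equal(b)
assert not abs_equal(a + b)  # (4, 2): |4| != |2|
```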
Basis and Dimension
Every subspace (except the one containing only the zero vector) has a basis: a minimal set of vectors that can generate every element in the subspace through addition and scaling. A basis must span the entire subspace, meaning every vector in it can be written as some combination of the basis vectors. It must also be linearly independent, meaning none of the basis vectors can be built from the others.
The dimension of a subspace is simply the number of vectors in its basis. A line through the origin in 3D space is a one-dimensional subspace (one basis vector defines the direction). A plane through the origin in 3D is two-dimensional (two basis vectors define the flat surface). The full 3D space has dimension 3, and the zero-only subspace is defined to have dimension 0.
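If you stack candidate basis vectors as the columns of a matrix, the dimension of their span equals the matrix’s rank. A quick sketch with NumPy, where the specific vectors are arbitrary examples:

```python
import numpy as np

# Two independent vectors spanning a plane through the origin in 3D
plane = np.column_stack([[1, 0, 1], [0, 1, 1]])
assert np.linalg.matrix_rank(plane) == 2  # two-dimensional subspace

# A dependent pair collapses to a line: the second column is 2x the first
line = np.column_stack([[1, 2, 3], [2, 4, 6]])
assert np.linalg.matrix_rank(line) == 1   # one-dimensional subspace
```

The second example shows why linear independence matters: listing two vectors doesn’t give you a two-dimensional span if one is redundant.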
Subspaces That Come From Matrices
If you’re taking a linear algebra course, you’ll encounter four subspaces tied to every matrix. The two most common are the column space and the null space.
The column space of a matrix tells you which outputs are possible. If you’re trying to solve the equation Ax = b, the column space tells you which values of b actually have a solution. It’s built from combinations of the matrix’s columns, and it forms a subspace of the output space.
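One practical way to test whether b lies in the column space is to check whether appending b to A raises the rank: if it doesn’t, b is already a combination of the columns, so Ax = b has a solution. A sketch with an illustrative rank-1 matrix and hypothetical helper name:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # rank 1: the second column is 2x the first

def is_solvable(A, b):
    """b is in the column space iff appending it does not raise the rank."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

assert is_solvable(A, np.array([3.0, 6.0]))      # a multiple of column 1
assert not is_solvable(A, np.array([1.0, 0.0]))  # outside the column space
```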
The null space tells you which inputs produce zero output. It’s the collection of all vectors x where Ax = 0. This also forms a subspace, because if two vectors both produce zero when multiplied by A, their sum will too, and so will any scaled version of either one. The null space reveals redundancy in the system: it shows the “hidden” directions where the matrix has no effect.
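One common way to compute a null-space basis is via the singular value decomposition: the right singular vectors belonging to (near-)zero singular values span the null space. A sketch using an illustrative rank-1 matrix and an arbitrary tolerance:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # rank 1, so its null space is a line

# Rows of Vt for (near-)zero singular values span the null space
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))  # tolerance is an arbitrary choice
null_basis = Vt[rank:]         # here: a single basis vector

x = null_basis[0]
assert np.allclose(A @ x, 0)           # Ax = 0
assert np.allclose(A @ (2.5 * x), 0)   # scaled versions stay in the null space
```

The two assertions mirror the closure argument above: sums and scalar multiples of null-space vectors are still sent to zero by A.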
The dimension of the column space is called the rank of the matrix, and it tells you how many truly independent columns the matrix has. The rank is one of the most important single numbers associated with any matrix.
Why Subspaces Matter
Subspaces aren’t just an abstract concept for homework problems. They’re the structural backbone of linear algebra. Whenever you solve a system of equations, perform a least-squares fit, compress data, or analyze signals, you’re working with subspaces whether you realize it or not. Finding the right subspace often means finding the solution, or at least the best approximation of one.
In more advanced settings, the concept extends beyond linear algebra into topology and differential geometry, where subspaces (often called submanifolds) describe lower-dimensional surfaces sitting inside higher-dimensional spaces. The core idea remains the same: a well-behaved smaller space living inside a bigger one, following the same structural rules as its parent.