Two matrices are row equivalent if you can turn one into the other using a sequence of elementary row operations. This is one of the most fundamental ideas in linear algebra because row-equivalent matrices represent systems of equations that have exactly the same solutions. When you perform row operations on a matrix, you’re reshaping it into a simpler form without changing the underlying relationships between the variables.
The Three Row Operations
Row equivalence is built on three specific moves you can apply to any matrix, called elementary row operations:
- Row swap: Exchange any two rows with each other.
- Scalar multiplication: Multiply every entry in a row by a nonzero constant.
- Row addition: Add a multiple of one row to another row.
If you can get from matrix A to matrix B using any combination of these three operations, applied any number of times, the two matrices are row equivalent. You can apply the operations in any order, and there’s no limit on how many you use — different orders produce different intermediate matrices, but every one of them is row equivalent to the original. You could swap two rows, then multiply a row by 3, then add twice the first row to the third row, and the result is still row equivalent to the matrix you started with.
One important restriction: when you multiply a row by a constant, that constant cannot be zero. Multiplying a row by zero would destroy information, collapsing an entire equation into 0 = 0 and making the transformation irreversible.
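The three operations — and the fact that each one is reversible — can be sketched directly in NumPy (assuming it is available). The matrix values here are arbitrary, chosen only for illustration:

```python
import numpy as np

# Start from a small 3x3 matrix and apply each elementary row operation.
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])

B = A.copy()
B[[0, 1]] = B[[1, 0]]        # row swap: exchange rows 0 and 1
B[2] = 3 * B[2]              # scalar multiplication: scale row 2 by 3 (nonzero)
B[1] = B[1] + 2 * B[0]       # row addition: add 2 * row 0 to row 1

# Each operation is reversible, which is why no information is lost:
C = B.copy()
C[1] = C[1] - 2 * C[0]       # undo the row addition
C[2] = C[2] / 3              # undo the scaling
C[[0, 1]] = C[[1, 0]]        # undo the swap
assert np.allclose(C, A)     # back to the original matrix
```

Reversibility is exactly what the nonzero-constant restriction protects: scaling by 3 can be undone by scaling by 1/3, but scaling by zero cannot be undone at all.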
Why the Solution Set Doesn’t Change
The reason row equivalence matters is that row-equivalent matrices describe the same set of solutions. If you write a system of linear equations as an augmented matrix and then row reduce it, every intermediate matrix along the way is row equivalent to the original. That means you haven’t gained or lost any solutions at any step.
This is the entire basis of Gaussian elimination. You start with a messy system of equations, apply row operations to simplify the matrix into a form where the answers are easier to read off, and trust that the solutions you find apply to the original system. Without this guarantee, the whole method would fall apart.
Row Echelon and Reduced Row Echelon Form
The practical goal of row operations is usually to transform a matrix into one of two standard forms. Row echelon form has a staircase pattern where each leading entry sits to the right of the one above it, and all entries below each leading entry are zero. Reduced row echelon form goes further: every leading entry is 1, and it’s the only nonzero value in its entire column.
Here’s the key fact: every matrix is row equivalent to exactly one matrix in reduced row echelon form. You might take different paths to get there (swapping rows in a different order, for instance), but the final reduced form is always the same. This uniqueness is what makes reduced row echelon form so useful as a canonical representation. If two matrices have the same reduced row echelon form, they are row equivalent to each other.
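This uniqueness is easy to check by machine. The sketch below uses SymPy’s exact `Matrix.rref()` (assuming SymPy is available) on two matrices built from each other by elementary operations; the values are arbitrary examples:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6]])
# B is built from A by hand: swap the two rows, scale the new top
# row by 3, then add the top row to the bottom row -- three
# elementary operations, so B is row equivalent to A.
B = Matrix([[12, 15, 18],
            [13, 17, 21]])

rA, _ = A.rref()   # rref() returns (reduced matrix, pivot columns)
rB, _ = B.rref()
assert rA == rB == Matrix([[1, 0, -1],
                           [0, 1, 2]])   # same canonical form
```

Even though A and B look nothing alike, both reduce to the same unique reduced row echelon form, which certifies that they are row equivalent.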
Row Equivalence and Row Space
Row-equivalent matrices share more than just solution sets. They also have the same row space, which is the set of all linear combinations of their row vectors. If A and B are row equivalent, the row space of A equals the row space of B. A direct consequence: all row echelon forms of a given matrix have the same number of nonzero rows. That number is the rank of the matrix, and row operations preserve it.
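A quick numerical check of rank invariance, assuming NumPy is available (the matrix is an arbitrary rank-2 example):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # a multiple of row 0, so the rank is 2
              [0., 1., 1.]])

B = A.copy()
B[[0, 2]] = B[[2, 0]]          # row swap
B[1] = B[1] - 2 * B[2]         # row addition (row 2 now holds the old row 0)
B[0] = 5 * B[0]                # nonzero scaling

# Row operations never change the rank.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B) == 2
```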
What Row Operations Do to the Determinant
For square matrices, row equivalence does not necessarily preserve the determinant, but the changes are predictable. Swapping two rows flips the sign of the determinant (multiplies it by -1). Multiplying a row by a constant c multiplies the determinant by c. Adding a multiple of one row to another leaves the determinant completely unchanged.
This means two row-equivalent matrices can have different determinants. However, if one has a determinant of zero, the other does too, since none of the row operations can turn a zero determinant into a nonzero one or vice versa. So row equivalence preserves whether a matrix is invertible, even though it may change the determinant’s exact value.
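All three determinant rules can be verified on a small example (assuming NumPy; the 2×2 matrix is arbitrary, with det(A) = 2·3 − 1·4 = 2):

```python
import numpy as np

A = np.array([[2., 1.],
              [4., 3.]])                              # det(A) = 2

swap = A[[1, 0]]                                      # swap rows 0 and 1
scaled = A.copy(); scaled[0] = 5 * scaled[0]          # scale row 0 by 5
added = A.copy(); added[1] = added[1] - 2 * added[0]  # row 1 -= 2 * row 0

assert np.isclose(np.linalg.det(A), 2.0)
assert np.isclose(np.linalg.det(swap), -2.0)     # swap flips the sign
assert np.isclose(np.linalg.det(scaled), 10.0)   # scaling by c multiplies det by c
assert np.isclose(np.linalg.det(added), 2.0)     # row addition leaves det unchanged
```

Note that none of the three operations can move the determinant to or from zero, which is why invertibility survives even though the exact value may not.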
Invertible Matrices and the Identity
One of the cleanest results in linear algebra ties row equivalence to invertibility: a square matrix is invertible if and only if it is row equivalent to the identity matrix. The identity matrix is the one with 1s on the diagonal and 0s everywhere else.
This gives you a practical method for finding a matrix’s inverse. You set up the original matrix side by side with the identity matrix, written as [A | I]. Then you row reduce the left side. If A is invertible, the left side will reduce to I, and whatever the right side has become is the inverse. If A isn’t invertible, you’ll hit a row of zeros on the left side before reaching the identity, and the inverse doesn’t exist.
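The [A | I] method can be sketched in a few lines of NumPy. This is an illustration, not a production routine — it adds partial pivoting (choosing the largest available pivot) for numerical stability, and the helper name and test matrix are my own:

```python
import numpy as np

def inverse_via_row_reduction(A):
    """Row reduce [A | I]; if the left block becomes I, the right block is A^-1."""
    n = len(A)
    aug = np.hstack([np.array(A, dtype=float), np.eye(n)])  # build [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))  # largest entry at/below diagonal
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular: no pivot in column %d" % col)
        aug[[col, pivot]] = aug[[pivot, col]]            # row swap
        aug[col] /= aug[col, col]                        # scale pivot row: leading 1
        for r in range(n):
            if r != col:
                aug[r] -= aug[r, col] * aug[col]         # zero out rest of the column
    return aug[:, n:]                                    # right block is the inverse

inv = inverse_via_row_reduction([[2., 1.],
                                 [1., 1.]])
# For [[2,1],[1,1]] the determinant is 1, and the inverse is [[1,-1],[-1,2]].
```

A singular input raises an error instead of reaching the identity, mirroring the row of zeros described above.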
How Gaussian Elimination Works Step by Step
Gaussian elimination is the standard algorithm for producing a row-equivalent matrix in echelon form. It works in two stages. In the first stage (sometimes called forward elimination), you move column by column from left to right. For each column, you look for a nonzero entry in that column, in the current row or any row below it. If that entry isn’t in the current row, you swap rows to bring it up. Then you use row addition to zero out every entry below that pivot position. You repeat this for each row, working downward.
The second stage (back substitution, or the Jordan part of Gauss-Jordan elimination) moves upward. You use row addition to zero out entries above each pivot, and you scale each pivot row so the leading entry becomes 1. After this stage, the matrix is in reduced row echelon form, and the solutions can be read directly.
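The two stages can be sketched as a short NumPy function. The structure mirrors the description above — a downward forward-elimination pass, then an upward pass that scales pivots and clears the entries above them — but the helper name and example matrix are my own, and partial pivoting is added as a common refinement:

```python
import numpy as np

def rref(M):
    """Two-stage Gauss-Jordan sketch: forward elimination, then the upward pass."""
    A = np.array(M, dtype=float)
    rows, cols = A.shape
    pivots = []
    r = 0
    # Stage 1: forward elimination, column by column, left to right.
    for c in range(cols):
        if r == rows:
            break
        pivot = r + np.argmax(np.abs(A[r:, c]))     # best candidate at/below row r
        if np.isclose(A[pivot, c], 0.0):
            continue                                # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]               # swap it into the current row
        for i in range(r + 1, rows):
            A[i] -= (A[i, c] / A[r, c]) * A[r]      # zero out entries below the pivot
        pivots.append((r, c))
        r += 1
    # Stage 2: scale each pivot to 1 and zero out the entries above it.
    for r, c in reversed(pivots):
        A[r] /= A[r, c]
        for i in range(r):
            A[i] -= A[i, c] * A[r]
    return A

R = rref([[1., 2., -1., 3.],
          [2., 4., 0., 2.],
          [1., 2., 1., -1.]])
# R is [[1, 2, 0, 1], [0, 0, 1, -2], [0, 0, 0, 0]]: column 1 has no pivot,
# so the corresponding variable is free, and the zero row signals a
# redundant equation.
```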
Every matrix you produce along the way is row equivalent to the one you started with. The entire chain of transformations, from the original matrix to the final reduced form, consists of row-equivalent matrices sharing the same solution set, the same row space, and the same rank.