To multiply a matrix by a vector, you take the dot product of each row of the matrix with the vector. Each dot product produces one number in the resulting vector. The only requirement is that the number of columns in the matrix matches the number of entries in the vector.
The Dimension Rule
Before you can multiply, check that the dimensions are compatible. If your matrix has n columns, your vector must have exactly n entries. An m×n matrix multiplied by a vector with n entries produces a new vector with m entries. Think of it this way: the inner dimensions must match. Write the sizes side by side (m×n) · (n×1), and the two n’s need to agree. The outer dimensions, m and 1, tell you the size of the result.
If your matrix is 3×2, the vector needs exactly 2 entries, and you’ll get a vector with 3 entries out. A 4×5 matrix needs a 5-entry vector and gives back a 4-entry vector. If the column count and vector length don’t match, the multiplication simply isn’t defined.
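As a quick sanity check, NumPy will confirm the output size follows the m×n rule (the array contents here are arbitrary placeholders):

```python
import numpy as np

A = np.ones((3, 2))   # a 3x2 matrix: 3 rows, 2 columns
x = np.ones(2)        # a vector with 2 entries, matching A's column count

result = A @ x
print(result.shape)   # (3,) -- one entry per row of A
```

The inner dimensions (2 and 2) agree, so the product exists, and the outer dimension 3 fixes the length of the result.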
Step-by-Step: The Row Method
The most common approach is to work row by row through the matrix. For each row, multiply the corresponding entries of that row and the vector together, then add up all those products. That sum becomes one entry in your result.
Here’s a concrete example. Suppose you have a 2×3 matrix and a vector with 3 entries:
A = [1, −1, 0; 0, −3, 1] and x = (2, 1, 0)
For the first row [1, −1, 0], multiply entry by entry with the vector (2, 1, 0): that’s (1)(2) + (−1)(1) + (0)(0) = 2 − 1 + 0 = 1. So the first entry of the result is 1.
For the second row [0, −3, 1], do the same: (0)(2) + (−3)(1) + (1)(0) = 0 − 3 + 0 = −3. The second entry is −3.
The result is the vector (1, −3). That’s it. Each row of the matrix produced one number in the output through a dot product with the vector.
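The row method translates directly into a few lines of plain Python. The function name here is just illustrative:

```python
def matvec_rows(A, x):
    """Multiply matrix A by vector x, one row dot product at a time."""
    result = []
    for row in A:
        # pair up entries of this row with entries of x, multiply, and sum
        total = sum(a * b for a, b in zip(row, x))
        result.append(total)
    return result

A = [[1, -1, 0], [0, -3, 1]]
x = [2, 1, 0]
print(matvec_rows(A, x))  # [1, -3]
```

Each pass through the loop reproduces one of the dot products worked out above.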
The Column Method: A Different Perspective
There’s a second way to think about the same operation that becomes important later in linear algebra. Instead of going row by row, you can treat the multiplication as a weighted combination of the matrix’s columns, where the vector entries are the weights.
Split the matrix into its individual columns. Multiply each column by the corresponding entry in the vector, then add all the scaled columns together. Using the same example: the first column is (1, 0), the second is (−1, −3), and the third is (0, 1). The vector entries 2, 1, and 0 are the weights. So the result is 2·(1, 0) + 1·(−1, −3) + 0·(0, 1) = (2, 0) + (−1, −3) + (0, 0) = (1, −3). Same answer, different path.
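The column method can be coded just as directly; instead of looping over rows, the loop below walks the columns and accumulates each one scaled by its weight (again, the function name is just illustrative):

```python
def matvec_cols(A, x):
    """Multiply A by x as a weighted sum of A's columns."""
    m = len(A)           # number of rows, which is the length of the result
    result = [0] * m
    for j, weight in enumerate(x):
        # add weight * (column j of A) into the running total
        for i in range(m):
            result[i] += weight * A[i][j]
    return result

A = [[1, -1, 0], [0, -3, 1]]
x = [2, 1, 0]
print(matvec_cols(A, x))  # [1, -3]
```

Both functions do the same arithmetic in a different order, which is exactly the point: the row and column views are two readings of one operation.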
This column perspective is worth learning because it reveals what matrix-vector multiplication really does: it combines the columns of the matrix in specific proportions. That idea shows up constantly in topics like solving systems of equations, understanding vector spaces, and working with transformations.
What It Means Geometrically
When you multiply a matrix by a vector, you’re transforming that vector. The matrix acts as a function that takes in a vector and outputs a new one, potentially with a different direction and length. In two dimensions, matrices can encode rotations, reflections, scalings, and shears.
For example, a 2×2 rotation matrix built from sine and cosine values will spin any vector you feed it by a specific angle around the origin. A diagonal matrix stretches or compresses the vector along the coordinate axes. A shear matrix like [1, 1; 0, 1] slides one component based on the other, tilting shapes sideways. Every 2D or 3D transformation you see in computer graphics, from spinning a game character to projecting a 3D scene onto your screen, boils down to multiplying vectors by matrices.
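To see the rotation case concretely, here is a sketch that builds the standard 2×2 rotation matrix for a 90-degree counterclockwise turn and applies it to a vector pointing along the x-axis:

```python
import numpy as np

theta = np.pi / 2  # 90 degrees counterclockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])  # unit vector along the x-axis
print(R @ v)              # approximately [0, 1]: rotated onto the y-axis
```

Up to floating-point rounding in the sine and cosine values, the output lands on the y-axis, as a quarter turn should.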
Key Algebraic Properties
Matrix-vector multiplication follows two rules that make it predictable to work with:
- It distributes over addition. A(x + y) = Ax + Ay. If you add two vectors first and then multiply, you get the same result as multiplying each one separately and adding the outputs.
- Scalars can move freely. A(cx) = c(Ax). If you scale a vector by some number before multiplying, you get the same result as multiplying first and scaling after.
These two properties together mean that matrix-vector multiplication is a linear operation. That’s actually the defining feature of linear transformations, and it’s why matrices are the central tool of linear algebra.
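Both properties are easy to spot-check numerically. Using the matrix from the earlier example and some arbitrarily chosen values for y and c:

```python
import numpy as np

A = np.array([[1, -1, 0], [0, -3, 1]])
x = np.array([2, 1, 0])
y = np.array([4, 0, -2])  # arbitrary second vector for the check
c = 5                     # arbitrary scalar

# Distributivity over addition: A(x + y) == Ax + Ay
print(np.array_equal(A @ (x + y), A @ x + A @ y))  # True

# Scalars move freely: A(cx) == c(Ax)
print(np.array_equal(A @ (c * x), c * (A @ x)))    # True
```

A spot check like this is not a proof, but with integer entries there is no rounding to worry about, and both identities hold exactly.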
One property that does not hold: you can’t reverse the order. Matrix-vector multiplication is not commutative. The expression Ax is defined when the dimensions line up, but “xA” is undefined unless you reinterpret x as a row vector, and even then it’s a different computation with a different result. Order matters every time.
Common Mistakes to Avoid
The most frequent error is mismatched dimensions. If your matrix is 3×4 and your vector has 3 entries instead of 4, the multiplication is undefined. Always check: columns of the matrix must equal entries in the vector. A quick way to catch this is to write the dimensions side by side. If the two inner numbers don’t match, stop and recheck your setup.
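NumPy enforces this rule for you: handing `@` a 3×4 matrix and a 3-entry vector raises an error rather than returning a wrong answer.

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12]])  # 3x4: four columns
x = np.array([1, 0, 1])          # only three entries -- mismatch

try:
    A @ x
except ValueError as e:
    print("undefined:", e)       # NumPy rejects the multiplication
```

Running this prints a message about mismatched dimensions, which is exactly the check described above made automatic.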
Another common mistake is multiplying in the wrong direction within a row. Each entry in a given row pairs with the entry in the same position of the vector. The first element of row 2 multiplies the first element of the vector, not the second. Staying systematic, moving left to right through each row while matching positions in the vector, prevents scrambled results.
Finally, watch out for sign errors. Matrices with negative entries are common, and losing a minus sign in one dot product will throw off your entire result. It helps to write out each product individually before summing, especially while you’re building the skill.
Doing It in Code
In Python, the NumPy library handles matrix-vector multiplication with minimal syntax. The @ operator is the cleanest way:
import numpy as np
A = np.array([[1, -1, 0], [0, -3, 1]])
x = np.array([2, 1, 0])
result = A @ x
This returns the array [1, -3]. You can also use np.dot(A, x), which does the same thing when A is a 2D array and x is 1D. For multiplying two 2D matrices (not a matrix and a vector), NumPy’s documentation recommends @ or np.matmul over np.dot, since np.dot behaves differently depending on the number of dimensions of each input.
In MATLAB or Octave, the syntax is simply A * x, where x is a column vector. In R, use A %*% x. Every major scientific computing environment has this operation built in because it’s one of the most fundamental computations in applied math.

