What Is a Skew-Symmetric Matrix? Definition & Properties

A skew-symmetric matrix is a square matrix that equals the negative of its own transpose. In mathematical notation, a matrix A is skew-symmetric when A = −Aᵀ. This means every entry above the main diagonal has a matching entry below the diagonal with the opposite sign, and every diagonal entry is zero. You’ll also see these called “antisymmetric matrices” in some textbooks.

The Defining Rule

The transpose of a matrix flips it across its main diagonal, swapping rows and columns. For a skew-symmetric matrix, that flipped version is the exact negative of the original. Written in terms of individual entries: the entry in row i, column j satisfies aᵢⱼ = −aⱼᵢ for every pair of positions.

This rule has an immediate consequence for the diagonal. When i = j, the rule says aᵢᵢ = −aᵢᵢ, which is only true if that entry equals zero. So every skew-symmetric matrix has zeros running down its main diagonal, no exceptions.

Here’s a simple 3×3 example:

[ 0, 3, −2 ]
[−3, 0, 5 ]
[ 2, −5, 0 ]

Notice how the 3 in the top row is mirrored by −3 below the diagonal, the −2 mirrors 2, and the 5 mirrors −5. This “opposite reflection” pattern is the visual signature of skew-symmetry.
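The defining check is a one-liner in code. Here is a quick verification of the example above using NumPy, confirming both the sign-flip mirroring and the zero diagonal:

```python
import numpy as np

# The 3x3 example from above.
A = np.array([
    [ 0,  3, -2],
    [-3,  0,  5],
    [ 2, -5,  0],
])

# A matrix is skew-symmetric exactly when A equals the negative of its transpose.
print(np.array_equal(A, -A.T))   # True
print(np.diagonal(A))            # [0 0 0] -- the diagonal must vanish
```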

Why the Diagonal Must Be Zero

This is worth understanding intuitively, not just as a formula. Each diagonal entry sits at position (i, i), which is its own mirror across the diagonal. When you transpose the matrix, that entry stays in place. But the skew-symmetric rule demands it become its own negative. The only number equal to its own negative is zero. This is why a matrix with any nonzero diagonal entry can never be skew-symmetric.

Eigenvalues Are Purely Imaginary

Eigenvalues describe the “scaling factors” a matrix applies along certain special directions. For a real skew-symmetric matrix, these eigenvalues are always either zero or purely imaginary numbers (multiples of i, the square root of −1). They never have a nonzero real part.

The reasoning goes like this: if A is real skew-symmetric with eigenvalue λ and unit eigenvector x, take the conjugate transpose of x*Ax = λ to get x*Aᵀx = λ̄; since Aᵀ = −A, this says λ̄ = −λ. A number equal to the negative of its own conjugate has zero real part, so λ is purely imaginary (possibly zero). In a 3×3 case, this typically yields one eigenvalue of zero and a conjugate pair ±iω, where ω is the length of the vector formed from the matrix’s three independent entries.
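You can see this numerically. For the running 3×3 example, the entries off the diagonal are 3, −2, and 5, so the nonzero eigenvalues should be ±iω with ω = √(3² + 2² + 5²) = √38:

```python
import numpy as np

A = np.array([
    [ 0.,  3., -2.],
    [-3.,  0.,  5.],
    [ 2., -5.,  0.],
])

eigvals = np.linalg.eigvals(A)

# Real parts are zero up to floating-point noise: one eigenvalue is 0,
# the other two are the conjugate pair +/- i*sqrt(38).
print(np.allclose(eigvals.real, 0))                      # True
print(np.allclose(sorted(np.abs(eigvals.imag)),
                  [0, np.sqrt(38), np.sqrt(38)]))        # True
```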

Determinant and Odd Dimensions

Skew-symmetric matrices have a striking property tied to their size. If the matrix has an odd number of rows and columns (3×3, 5×5, 7×7, and so on), its determinant is always zero: det(A) = det(Aᵀ) = det(−A) = (−1)ⁿ det(A), and when n is odd this forces det(A) = −det(A). This result, known as Jacobi’s theorem, means every odd-dimensional skew-symmetric matrix is singular, so it has no inverse. For even-dimensional skew-symmetric matrices, the determinant can be nonzero and is always the perfect square of a polynomial in the matrix entries, called the Pfaffian.
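A quick numerical check of both claims, using randomly generated skew-symmetric matrices built as B − Bᵀ (the helper name random_skew is ours, not a library function):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_skew(n):
    """Return a random n x n skew-symmetric matrix: B - B.T flips sign under transpose."""
    B = rng.standard_normal((n, n))
    return B - B.T

# Odd dimension: determinant is zero (up to floating-point noise).
print(abs(np.linalg.det(random_skew(5))) < 1e-10)   # True

# Even dimension: determinant is the square of the Pfaffian, hence nonnegative.
print(np.linalg.det(random_skew(4)) >= 0)           # True
```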

Connection to the Cross Product

One of the most concrete uses of skew-symmetric matrices is representing the cross product of two vectors. If you have a 3D vector a = (a₁, a₂, a₃), you can build a skew-symmetric matrix from its components:

[ 0, −a₃, a₂ ]
[ a₃, 0, −a₁ ]
[−a₂, a₁, 0 ]

Multiplying this matrix by another vector b gives the same result as a × b, the cross product. This rewriting turns a vector operation into a matrix multiplication, which is far more convenient for computation and for deriving formulas in physics and engineering. The three independent entries of the matrix (since the rest are determined by the sign-flip rule) correspond to the three components of the original vector, sometimes called its “axial vector.”
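This correspondence is easy to verify directly. Below, hat() is our name for the vector-to-matrix construction described above; the result of multiplying by b matches NumPy’s built-in cross product:

```python
import numpy as np

def hat(a):
    """Map a 3D vector to its skew-symmetric ("hat") matrix."""
    a1, a2, a3 = a
    return np.array([
        [  0, -a3,  a2],
        [ a3,   0, -a1],
        [-a2,  a1,   0],
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Matrix-vector product reproduces the cross product.
print(np.allclose(hat(a) @ b, np.cross(a, b)))   # True
```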

Role in Rotation and Angular Velocity

Skew-symmetric matrices appear naturally whenever something rotates. In robotics and mechanics, angular velocity is a 3D vector describing how fast and around which axis an object spins. That vector can be converted into a 3×3 skew-symmetric matrix using the same bracket construction described above. Once in matrix form, you can write the rate of change of a rotation matrix R as the product of that skew-symmetric matrix times R.

The set of all 3×3 skew-symmetric matrices forms a mathematical structure called so(3), which is the Lie algebra associated with SO(3), the group of all rotation matrices. In practical terms, SO(3) describes every possible orientation an object can have, while so(3) describes every possible way it can be spinning. The skew-symmetric matrix is the bridge between these two ideas: it encodes infinitesimal rotations and converts angular velocity vectors into a form compatible with matrix equations of motion.
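The bridge from so(3) to SO(3) is the matrix exponential, which for 3×3 skew-symmetric matrices has the closed form known as Rodrigues’ formula. Here is a minimal sketch (hat and exp_so3 are our own helper names) showing that exponentiating a skew-symmetric matrix yields a genuine rotation matrix:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3D vector."""
    w1, w2, w3 = w
    return np.array([
        [  0, -w3,  w2],
        [ w3,   0, -w1],
        [-w2,  w1,   0],
    ])

def exp_so3(w):
    """Rodrigues' formula: the matrix exponential of hat(w)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

w = np.array([0.0, 0.0, np.pi / 2])   # 90-degree rotation about the z-axis
R = exp_so3(w)

print(np.allclose(R @ R.T, np.eye(3)))       # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))     # True: determinant +1, so R is in SO(3)
```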

This connection extends beyond rotation. In classical mechanics, any quantity that describes an infinitesimal transformation of a physical system (a tiny rotation, a small deformation) tends to be represented by a skew-symmetric matrix or tensor. The antisymmetric structure ensures the transformation preserves certain physical properties, like the length of a vector during pure rotation.

Symmetric vs. Skew-Symmetric

It helps to see these as opposites. A symmetric matrix satisfies A = Aᵀ, meaning entries mirror identically across the diagonal. A skew-symmetric matrix satisfies A = −Aᵀ, meaning entries mirror with a sign flip. Real symmetric matrices have all real eigenvalues; skew-symmetric matrices have purely imaginary (or zero) eigenvalues. Symmetric matrices can have any values on the diagonal. Skew-symmetric matrices always have zeros.

Any square matrix can be split into a symmetric part and a skew-symmetric part. For a matrix B, the symmetric component is ½(B + Bᵀ) and the skew-symmetric component is ½(B − Bᵀ). This decomposition is unique, and it’s useful in fields ranging from continuum mechanics (where it separates stretching from rotation in a deformation) to optimization algorithms.
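The decomposition is two lines of code. For any square B, the formulas above produce a symmetric part and a skew-symmetric part that sum back to B:

```python
import numpy as np

B = np.array([
    [1., 2., 3.],
    [4., 5., 6.],
    [7., 8., 10.],
])

S = 0.5 * (B + B.T)   # symmetric part
W = 0.5 * (B - B.T)   # skew-symmetric part

print(np.allclose(S, S.T))      # True: S is symmetric
print(np.allclose(W, -W.T))     # True: W is skew-symmetric
print(np.allclose(S + W, B))    # True: together they reconstruct B
```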