What Is a Tensor in Physics? More Than a Matrix

A tensor is a mathematical object that represents a physical quantity, characterized by a magnitude together with some number of associated directions. The number of directions it encodes is called its rank. You already know two special cases: a scalar (like temperature) is a rank-0 tensor with magnitude but no direction, and a vector (like velocity) is a rank-1 tensor with magnitude and one direction. Tensors extend this idea to quantities that need two, three, or more directions to fully describe them.

Rank: How Tensors Are Classified

The rank of a tensor tells you how many directional components it carries. In three-dimensional space, a rank-0 tensor (scalar) is described by a single number. A rank-1 tensor (vector) needs 3 numbers. A rank-2 tensor needs 9 numbers and can be written as a 3×3 grid, or matrix. A rank-3 tensor requires 27 numbers, and so on. The general pattern: in n-dimensional space, a tensor of rank r requires n^r numbers to describe it.
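This n^r scaling is easy to see with an array library. A minimal sketch using NumPy (the values are arbitrary placeholders):

```python
import numpy as np

# In 3-dimensional space, a rank-r tensor has 3**r components.
scalar = np.array(5.0)         # rank 0: 3**0 = 1 number
vector = np.zeros(3)           # rank 1: 3**1 = 3 numbers
rank2 = np.zeros((3, 3))       # rank 2: 3**2 = 9 numbers
rank3 = np.zeros((3, 3, 3))    # rank 3: 3**3 = 27 numbers

for t in (scalar, vector, rank2, rank3):
    print(t.ndim, t.size)      # the rank (ndim) and the component count
```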

This scaling is what makes tensors so powerful and so necessary. Some physical situations simply cannot be captured by a single number or a single arrow. When force applied in one direction produces a response in a different direction, you need a rank-2 tensor to map every possible input direction to its corresponding output.

What Makes a Tensor More Than a Grid of Numbers

A tensor is not just any collection of numbers arranged in a table. What distinguishes a tensor from a plain array is how it behaves when you change coordinate systems. If you rotate your axes, relabel your directions, or switch to a curved coordinate system, a true tensor transforms its components in a specific, predictable way that preserves the underlying physical reality it describes.

Think of it this way: the wind doesn’t change because you turn your head. A vector representing wind velocity will have different x and y components depending on which way your axes point, but the physical wind is the same. The transformation rules guarantee that. Tensors of every rank obey analogous rules. There are two flavors of these rules, called covariant and contravariant, depending on whether the components transform in the same direction as the coordinate change or in the opposite direction. Many tensors have a mix of both types of indices.
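A quick numerical sketch of the wind analogy, with a made-up velocity vector: rotating the coordinate axes changes the components but leaves the magnitude of the physical quantity untouched.

```python
import numpy as np

# A hypothetical wind velocity, expressed in one set of axes.
wind = np.array([3.0, 4.0])

# Rotate the axes by 30 degrees. The components transform via the
# rotation matrix, but the physical wind is unchanged.
theta = np.radians(30)
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
wind_new = R @ wind

print(wind_new)                                        # different components...
print(np.linalg.norm(wind), np.linalg.norm(wind_new))  # ...same magnitude (5.0)
```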

This transformation property is the formal definition physicists care about most. If a quantity doesn’t transform correctly under a coordinate change, it isn’t a tensor, no matter how many numbers it contains.

The Stress Tensor: A Concrete Example

One of the most intuitive rank-2 tensors is the stress tensor, which describes internal forces inside a material. Imagine slicing through a solid object along some plane. The material on one side of that cut pushes and pulls on the material on the other side. That force depends on two directions: the orientation of your imaginary cut and the direction the force acts.

In three dimensions, the stress tensor has 9 components arranged in a 3×3 grid. Three of those components sit along the diagonal and represent normal stresses, forces that push straight into or pull straight out of a surface. The other six are shear stresses, forces that act sideways along a surface. Given any surface orientation (described by a direction perpendicular to it), the stress tensor maps it to the exact force vector acting on that surface. A single vector could never capture this, because the force changes depending on which way the surface faces.
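This mapping from surface orientation to force is Cauchy's relation, t = σn. A small NumPy sketch with invented stress values (arbitrary units):

```python
import numpy as np

# An illustrative symmetric stress tensor. Diagonal entries are
# normal stresses; off-diagonal entries are shear stresses.
sigma = np.array([[10.0, 2.0, 0.0],
                  [ 2.0, 5.0, 1.0],
                  [ 0.0, 1.0, 3.0]])

# Unit normal of an imaginary cut through the material.
n = np.array([1.0, 0.0, 0.0])

# Cauchy's relation: the traction (force per area) on that surface.
t = sigma @ n
print(t)   # the force is not parallel to the surface normal
```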

The Inertia Tensor: Why Spinning Gets Complicated

In introductory physics, the moment of inertia is a single number that tells you how hard an object is to spin. That works fine for symmetric objects rotating around a fixed axis. But for a general three-dimensional object, inertia is actually a rank-2 tensor, a 3×3 matrix that connects angular velocity to angular momentum.

The key insight is that for most objects, angular momentum doesn’t point in the same direction as the angular velocity. Spin a lopsided object around one axis and it will wobble, because the angular momentum vector tilts away from the spin axis. The inertia tensor captures this: multiply it by the angular velocity vector and you get the angular momentum vector, which generally points in a different direction. Only along certain special axes (called principal axes) do the two vectors line up, and the tensor reduces to something simpler.
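A NumPy sketch of this tilt, using an invented inertia tensor (the numbers are illustrative, not a real object):

```python
import numpy as np

# A made-up inertia tensor for a lopsided object.
I = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.5, 0.0],
              [0.0, 0.0, 3.0]])

omega = np.array([1.0, 0.0, 0.0])  # spin about the x axis
L = I @ omega                      # angular momentum: L = I omega
print(L)                           # tilted away from the spin axis

# The principal axes are the eigenvectors of I; spun about one of
# these, L and omega line up and the description simplifies.
moments, axes = np.linalg.eigh(I)
```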

The Electromagnetic Field Tensor

In classical physics, electric and magnetic fields are treated as separate vector fields. But special relativity reveals that they’re really two aspects of a single object: the electromagnetic field tensor, a rank-2 tensor in four-dimensional spacetime.

This tensor is antisymmetric, meaning it flips sign when you swap its two indices. That antisymmetry reduces its independent components from 16 down to just 6. Three of those components correspond to the electric field and three to the magnetic field. When you change reference frames (say, by moving at high speed relative to a charged particle), the electric and magnetic components mix into each other according to the tensor’s transformation rules. What looks like a purely electric field to a stationary observer can appear partly magnetic to a moving one. The field tensor makes this mixing automatic and elegant.
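A sketch of how the field tensor is assembled from E and B, in units with c = 1. The exact sign placement depends on metric and unit conventions, which vary by textbook, and the field values here are arbitrary:

```python
import numpy as np

E = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 2.0])

# One common convention for F^{mu nu} in units with c = 1.
F = np.array([
    [ 0.0,  -E[0], -E[1], -E[2]],
    [ E[0],  0.0,  -B[2],  B[1]],
    [ E[1],  B[2],  0.0,  -B[0]],
    [ E[2], -B[1],  B[0],  0.0 ]])

assert np.allclose(F, -F.T)  # antisymmetric: swapping indices flips the sign
# 16 entries, but antisymmetry forces the diagonal to zero and ties each
# lower entry to an upper one, leaving 4*3//2 = 6 independent components:
# three from E and three from B.
```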

Tensors in General Relativity

General relativity is where tensors become absolutely essential. Gravity, in Einstein’s theory, isn’t a force. It’s the curvature of spacetime, and curvature is described by tensors.

The most fundamental is the metric tensor, a rank-2 tensor that encodes the geometry of spacetime at every point. It tells you how to measure distances and time intervals between nearby events. In flat spacetime (no gravity), the metric tensor is simple and constant. Near a massive object, its components change from point to point, and those changes are what we experience as gravitational effects. The metric tensor has 10 independent components in four-dimensional spacetime, and solving Einstein’s field equations means finding those 10 functions.
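In flat spacetime, for instance, the metric reduces to the constant Minkowski form, whose line element (in the common −+++ sign convention; the overall signs vary by textbook) is:

```latex
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
```

Near a massive object, each of these constant coefficients becomes a function of position, and those functions are what Einstein's equations determine.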

The Riemann curvature tensor goes further. It’s a rank-4 tensor that captures exactly how spacetime is curved at each point. Physically, it describes tidal forces: the way nearby objects accelerate toward or away from each other in a gravitational field. If you drop two particles side by side near Earth, they’ll slowly drift together because their paths converge toward the center. The Riemann tensor quantifies that convergence. In flat spacetime, all of its components are zero. In curved spacetime, some are nonzero, and those nonzero components tell you the shape and strength of the curvature.
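The drifting-together of nearby particles is governed by the geodesic deviation equation, where the Riemann tensor appears directly (sign and index conventions vary between texts); here \(\xi^\mu\) is the separation between the particles and \(u^\nu\) their four-velocity:

```latex
\frac{D^2 \xi^\mu}{d\tau^2} = -\,R^\mu{}_{\nu\rho\sigma}\, u^\nu \xi^\rho u^\sigma
```

When every component of the Riemann tensor vanishes, the right-hand side is zero and parallel paths stay parallel: flat spacetime.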

Why Vectors Aren’t Enough

Tensors appear throughout physics whenever a quantity depends on more than one direction, or whenever a physical law needs to relate two different types of directed quantities. Material properties are a clear example. In an isotropic material (one that behaves the same in every direction), electrical conductivity can be described by a single number: apply an electric field in any direction and the current flows proportionally in that same direction. But many real materials are anisotropic. A crystal might conduct electricity easily along one axis and poorly along another. In that case, conductivity becomes a rank-2 tensor. Apply a field in one direction and the current may flow partly sideways.
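Ohm's law in tensor form, J_i = σ_ij E_j, captures that sideways flow. A NumPy sketch with invented conductivity values (arbitrary units, not real material data):

```python
import numpy as np

# Illustrative conductivity tensor for an anisotropic crystal:
# easy conduction along x, with some x-y coupling.
sigma = np.array([[5.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, 0.5]])

E = np.array([0.0, 1.0, 0.0])  # field applied purely along y
J = sigma @ E                  # Ohm's law in tensor form: J_i = sigma_ij E_j
print(J)                       # the current has an x component too
```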

The same applies to thermal conductivity, optical permittivity, magnetic permeability, and mechanical stiffness. Piezoelectric materials, which generate electric charge when squeezed, require a rank-3 tensor to describe how stress in various directions produces polarization in various directions. The elastic properties of a general crystal require a rank-4 tensor with 21 independent components.
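The piezoelectric relation P_i = d_ijk σ_jk contracts a rank-3 coefficient tensor with a rank-2 stress to produce a rank-1 polarization. A toy NumPy sketch; the coefficients below are invented for illustration, not real material data:

```python
import numpy as np

# A made-up rank-3 piezoelectric tensor d_ijk (27 components).
d = np.zeros((3, 3, 3))
d[2, 0, 0] = d[2, 1, 1] = 1e-12  # in-plane stress produces vertical polarization

stress = np.eye(3) * 1e6         # a uniform normal stress, rank 2

# Contract two indices of d against the stress: P_i = d_ijk sigma_jk.
P = np.einsum('ijk,jk->i', d, stress)
print(P)                         # polarization appears only along z
```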

Reading Tensor Notation

Physics texts write tensors using index notation, where subscripts and superscripts label the components. A vector might be written as v^i, where the index i runs from 1 to 3 (or 0 to 3 in relativity). A rank-2 tensor might appear as T^ij or T_ij, with two indices.

The Einstein summation convention eliminates the need to write summation signs everywhere. The rule is simple: if the same index letter appears twice in a single term, once up and once down, you sum over all its values. So a_i b^i means a_1 b^1 + a_2 b^2 + a_3 b^3. An index that appears only once (called a free index) can take any value and must match on both sides of an equation. An index that appears twice (called a bound or dummy index) gets summed over and disappears from the final result. If the same index ever appears three or more times in a single term, something has gone wrong.
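NumPy's `einsum` function uses a flattened version of this notation (it makes no distinction between upper and lower indices, which is harmless in Cartesian coordinates), so it is a handy way to experiment. A minimal sketch:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# The repeated index i is summed over: a_i b^i is a single number.
s = np.einsum('i,i->', a, b)
print(s)  # 32.0, the same as 1*4 + 2*5 + 3*6

# In T^ij v_j, the index j is summed away and i remains free,
# so the result is a vector.
T = np.arange(9.0).reshape(3, 3)
v = np.einsum('ij,j->i', T, b)
print(v)  # [ 17.  62. 107.]
```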

Upper indices denote contravariant components (which transform opposite to the coordinate change), and lower indices denote covariant components (which transform in the same direction). This distinction matters whenever you’re working in curved spaces or non-Cartesian coordinates. In flat space with Cartesian coordinates, the two types behave identically, which is why introductory courses can ignore the difference.