What Is Measurement in Physics? Definition & Units

Measurement in physics is the process of comparing a physical quantity to a standard reference unit and expressing the result as a number. Every measurement you encounter, from the speed of a car to the temperature of a room, follows this basic principle: take something unknown, compare it to something agreed upon, and record the ratio. Without standardized measurement, physics would have no way to test predictions, reproduce experiments, or describe the natural world with any reliability.

The Seven Base Units of Physics

All physical measurements trace back to just seven fundamental quantities, known as the SI base units (SI stands for the International System of Units, the modern metric system). These are the second (time), the meter (length), the kilogram (mass), the ampere (electric current), the kelvin (temperature), the mole (amount of substance), and the candela (luminous intensity). Every other unit in physics is built from combinations of these seven.

On May 20, 2019, four of these units were redefined so that none of them depend on a physical object anymore. The kilogram, kelvin, ampere, and mole are now tied to unchanging constants of nature: the Planck constant, the Boltzmann constant, the elementary charge, and the Avogadro constant, respectively. The second, meter, and candela were already defined through constants, such as the cesium transition frequency and the speed of light. This matters because a physical artifact (like the old platinum-iridium kilogram stored in a vault in France) can degrade over time. A constant of nature cannot.

Derived Units: Building on the Basics

Most quantities you encounter in physics aren’t base units at all. They’re derived units, created by combining base units through multiplication or division. Velocity, for example, is length divided by time, giving you meters per second. Acceleration adds another division by time: meters per second squared. Force (the newton) combines mass, length, and time. Energy (the joule) is a newton applied over a meter. Power (the watt) is a joule delivered per second.

Here are some common derived quantities:

  • Velocity: meters per second (m/s)
  • Force: newtons (kg·m/s²)
  • Pressure: pascals (N/m²)
  • Energy: joules (N·m)
  • Power: watts (J/s)
  • Density: kilograms per cubic meter (kg/m³)

The key insight is that you never need to invent a completely new unit. Any measurable quantity in physics can be expressed as some combination of the seven base units. This keeps the entire system internally consistent.
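
To make that concrete, here is a minimal sketch in Python (not any standard library, just illustrative dictionaries of base-unit exponents) showing how the derived units listed above reduce to combinations of the seven base units:

    from collections import Counter

    def combine(*units):
        """Multiply units together by adding their base-unit exponents."""
        total = Counter()
        for u in units:
            total.update(u)
        return {k: v for k, v in total.items() if v != 0}

    def inverse(unit):
        """Divide by a unit by negating its exponents."""
        return {k: -v for k, v in unit.items()}

    meter, second, kilogram = {"m": 1}, {"s": 1}, {"kg": 1}

    velocity     = combine(meter, inverse(second))        # m/s
    acceleration = combine(velocity, inverse(second))     # m/s^2
    newton       = combine(kilogram, acceleration)        # kg*m/s^2
    joule        = combine(newton, meter)                 # kg*m^2/s^2
    watt         = combine(joule, inverse(second))        # kg*m^2/s^3

    print(newton)   # {'kg': 1, 'm': 1, 's': -2}
    print(watt)     # {'kg': 1, 'm': 2, 's': -3}

Running it prints the base-unit exponents of the newton and the watt, confirming that both are nothing more than mass, length, and time combined.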

Accuracy vs. Precision

These two words sound interchangeable, but they describe different things. Accuracy is how close your measurement lands to the true value. Precision is how close repeated measurements land to each other.

A useful way to picture this: imagine a GPS unit trying to locate a restaurant. If it gives you five readings that are all spread far apart but their average falls right on the restaurant, that’s high accuracy but low precision. If those five readings cluster tightly together but all point to a spot two blocks away from the restaurant, that’s high precision but low accuracy. The ideal, of course, is both: measurements that cluster tightly around the correct value.

This distinction shapes how physicists evaluate their data. A precise but inaccurate instrument can often be fixed through recalibration. An imprecise instrument needs a more fundamental redesign or better environmental controls.
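
A rough numerical illustration of the distinction, using made-up GPS-style readings: the bias of the average captures accuracy, while the scatter among repeated readings captures precision.

    from statistics import mean, stdev

    true_value = 100.0   # the restaurant's actual position along one axis (made up)

    precise_but_inaccurate = [108.1, 108.3, 107.9, 108.2, 108.0]
    accurate_but_imprecise = [91.0, 112.0, 95.0, 104.0, 98.0]

    for label, readings in [("precise but inaccurate", precise_but_inaccurate),
                            ("accurate but imprecise", accurate_but_imprecise)]:
        bias   = mean(readings) - true_value   # distance of the average from the truth
        spread = stdev(readings)               # scatter among the repeated readings
        print(f"{label}: bias = {bias:+.1f}, spread = {spread:.1f}")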

Sources of Error

No measurement is perfect. The errors that creep in fall into two broad categories, and understanding the difference helps you know what to trust.

Random errors come from unpredictable fluctuations. Electronic noise in a circuit, a gust of wind changing heat loss from a sensor, tiny vibrations in a lab, all of these introduce scatter into your readings. You can’t eliminate random errors entirely, but you can reduce their impact by taking many measurements and averaging them. Random errors limit precision.
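
A quick simulation (assuming a noise level of 0.01 meters on a 1.52-meter length) shows how averaging shrinks the scatter roughly in proportion to one over the square root of the number of readings:

    import random
    from statistics import mean, stdev

    random.seed(1)
    true_length = 1.520    # meters (assumed)
    noise = 0.010          # standard deviation of a single reading, meters (assumed)

    def scatter_of_average(n_readings, trials=2000):
        """Scatter of the average of n_readings, estimated over many simulated trials."""
        averages = [mean(random.gauss(true_length, noise) for _ in range(n_readings))
                    for _ in range(trials)]
        return stdev(averages)

    for n in (1, 4, 16, 64):
        print(f"N = {n:2d}: scatter of the average ≈ {scatter_of_average(n):.4f} m")
        # expected roughly noise / sqrt(N): 0.0100, 0.0050, 0.0025, 0.0013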

Systematic errors are more insidious. They push every measurement in the same direction, so averaging doesn’t help. A thermometer that makes poor thermal contact with the substance it’s measuring will consistently read too low. A scale that isn’t zeroed properly will add the same offset to every reading. These errors limit accuracy, and they’re notoriously difficult to detect because your data can look beautifully consistent while being consistently wrong. Systematic errors fall into two main types: offset errors (the instrument doesn’t read zero when it should) and scale factor errors (the instrument over- or under-reports every change by the same proportion).
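
The sketch below (with invented numbers) shows how measuring two known reference standards lets you solve for both an offset and a scale-factor error and correct them out of later readings:

    # Two reference standards are enough to solve for an offset error and a
    # scale-factor error at the same time. All numbers here are hypothetical.
    ref_low, ref_high = 0.0, 100.0        # true values of two calibration standards
    read_low, read_high = 2.0, 104.0      # what the miscalibrated instrument reports

    scale  = (read_high - read_low) / (ref_high - ref_low)   # 1.02: reads 2% too high per unit
    offset = read_low - scale * ref_low                       # +2.0: zero error

    def corrected(reading):
        """Map a raw instrument reading back onto the true scale."""
        return (reading - offset) / scale

    print(corrected(53.0))   # raw 53.0 -> true 50.0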

Uncertainty and How It’s Expressed

Every measurement in physics comes with an uncertainty, a range that reflects how confident you are in the result. If you measure a table as 1.52 meters long with an uncertainty of 0.01 meters, you’re saying the true length likely falls between 1.51 and 1.53 meters.

Uncertainty can be stated in absolute terms (plus or minus 0.01 meters) or as a relative percentage. Relative uncertainty is calculated by dividing the absolute uncertainty by the measured value and multiplying by 100. So 0.01 divided by 1.52 gives roughly 0.66%. Relative uncertainty is especially useful when comparing the quality of measurements across very different scales. A 1-centimeter uncertainty matters a lot when measuring a coin but is irrelevant when measuring a football field.
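
The table example works out like this in code:

    measured = 1.52               # meters
    absolute_uncertainty = 0.01   # meters

    relative_uncertainty = absolute_uncertainty / measured * 100   # percent

    print(f"{measured} m ± {absolute_uncertainty} m")
    print(f"relative uncertainty ≈ {relative_uncertainty:.2f}%")   # ≈ 0.66%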

Significant Figures

Significant figures are how physics communicates the precision of a measurement through the number itself. When you write 92.00 meters instead of 92 meters, those trailing zeros after the decimal tell the reader you measured to the nearest hundredth of a meter, not just the nearest meter. The core rules are straightforward: all nonzero digits count, zeros between nonzero digits count, leading zeros (like the zeros in 0.0054) don't count, and trailing zeros after a decimal point do count. A number in scientific notation like 5.02 × 10⁴ has three significant figures.

This system prevents you from claiming false precision. If you measure a wall with a tape measure accurate to the nearest centimeter and then multiply that measurement by another value, your final answer shouldn’t suddenly report precision to the nearest micrometer. The result can only be as precise as the least precise input.
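
Here is a minimal sketch of that rule. Note that the number of significant figures in each input is supplied by hand, since a bare floating-point value doesn't carry that information:

    def round_to_sig_figs(value, sig_figs):
        """Format a value to a given number of significant figures."""
        return float(f"{value:.{sig_figs}g}")

    wall_length = 4.72          # meters, 3 significant figures (tape reads to the cm)
    feet_per_meter = 3.2808399  # conversion factor, known to many more figures

    raw = wall_length * feet_per_meter     # 15.4855..., falsely precise
    print(round_to_sig_figs(raw, 3))       # 15.5 -- no more precise than the input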

Traceability: Connecting Every Measurement to a Standard

For a measurement to be meaningful beyond your own lab, it has to connect back to an internationally recognized standard through what’s called a traceability chain. This is a documented, unbroken series of calibrations linking your instrument’s reading all the way up to a national or international realization of the SI units.

In practice, this works like a relay. The National Institute of Standards and Technology (NIST), for example, weighs a 1-kilogram mass on a Kibble balance (a device that realizes the kilogram by balancing mechanical power against electrical power) in a near-perfect vacuum. That mass is then compared to a second reference mass, and that second mass is used to calibrate other instruments, which calibrate still others, forming a chain that eventually reaches the scale at your grocery store or the pressure gauge in a hospital. Every link in the chain is documented, and each calibration step contributes a known amount of uncertainty. Internationally, this system is coordinated through mutual recognition agreements so that a kilogram measured in Japan means the same thing as a kilogram measured in Germany.
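
The numbers below are purely illustrative, but they show the usual bookkeeping: each link in the chain contributes its own standard uncertainty, and independent contributions are commonly combined in quadrature (root-sum-of-squares):

    import math

    # standard uncertainty contributed at each calibration step, in milligrams (made up)
    chain = {
        "national standard (Kibble balance)": 0.01,
        "laboratory reference mass":          0.05,
        "working calibration weight":         0.5,
        "commercial scale":                   5.0,
    }

    combined = math.sqrt(sum(u ** 2 for u in chain.values()))
    for link, u in chain.items():
        print(f"{link}: ±{u} mg")
    print(f"combined standard uncertainty ≈ ±{combined:.2f} mg")   # ≈ ±5.03 mg

Notice that the largest link dominates: the exquisite uncertainty of the national standard barely registers next to the grocery-store scale at the end of the chain.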

Measurement at the Quantum Scale

At the scale of atoms and subatomic particles, measurement works fundamentally differently. In classical physics, you can (in principle) measure something without disturbing it. In quantum mechanics, the act of measuring a particle’s property actually changes the system. A particle can exist in a spread-out state of possibilities until a measurement forces it into a single definite outcome. This isn’t a limitation of our instruments. It’s a feature of how nature works at that scale.

The Heisenberg uncertainty principle puts a hard mathematical floor on this. It states that the more precisely you know a particle’s position, the less precisely you can know its momentum, and vice versa. The product of the two uncertainties can never be smaller than a fixed constant (Planck’s constant divided by 4π). A similar tradeoff exists between energy and time. This isn’t about clumsy instruments bumping into particles. It reflects a genuine limit built into the fabric of physics, one that no technological advance can overcome.
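
Written out symbolically, in standard notation, the two tradeoffs read:

    \Delta x \,\Delta p \;\ge\; \frac{h}{4\pi} = \frac{\hbar}{2},
    \qquad
    \Delta E \,\Delta t \;\ge\; \frac{\hbar}{2}

where Δx, Δp, ΔE, and Δt are the respective uncertainties, h is the Planck constant, and ħ = h/2π.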

Even the question of what counts as a “measurement” at the quantum level remains debated. Some physicists argue that a conscious observer plays a special role. Others maintain that any physical interaction that produces a lasting record is sufficient to collapse a quantum system into a definite state, whether or not anyone looks at the result. The production of a material record alone appears to be enough to destroy the interference patterns that characterize quantum behavior.