What Is Beer’s Law? Definition, Equation, and Uses

Beer’s Law, also known as the Beer-Lambert Law, is a formula that describes how light is absorbed by a substance in solution. It states that the amount of light absorbed is directly proportional to the concentration of the substance and the distance the light travels through it. This simple relationship is the foundation of spectrophotometry, a technique used in chemistry labs, medical devices, and biological research to measure how much of something is dissolved in a liquid.

The Equation and What Each Variable Means

Beer’s Law is expressed as:

A = εlc

  • A is absorbance, a unitless number representing how much light the solution absorbs.
  • ε (epsilon) is the molar absorptivity coefficient, measured in M⁻¹cm⁻¹. This is a constant specific to each substance at a given wavelength of light. Think of it as how “good” a particular chemical is at absorbing light.
  • l is the path length, measured in centimeters. This is the distance the light travels through the solution, determined by the width of the container (called a cuvette) holding the sample.
  • c is the concentration of the substance in the solution, measured in molarity (M).

Because ε and l are typically constants in any given experiment, the equation simplifies to a straight line: absorbance equals some constant times concentration. If you plot absorbance on the y-axis and concentration on the x-axis, you get a straight line through the origin, just like y = mx. That linear relationship is what makes the law so useful. Measure the absorbance of an unknown sample, find where it falls on your line, and you can read its concentration directly.
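That rearrangement is a one-liner in code. Here’s a minimal Python sketch; the absorbance and ε values are invented for illustration, not real measurements.

```python
def concentration_from_absorbance(absorbance, epsilon, path_length_cm=1.0):
    """Rearrange Beer's Law, A = εlc, to solve for concentration: c = A / (ε·l)."""
    return absorbance / (epsilon * path_length_cm)

# Invented example: a dye with ε = 5400 M⁻¹cm⁻¹ measured in a standard 1 cm cuvette
c = concentration_from_absorbance(0.42, epsilon=5400)
print(f"{c:.2e} M")  # 7.78e-05 M
```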

Absorbance and Transmittance

A spectrophotometer doesn’t measure absorbance directly. It measures how much light passes through a sample, called transmittance: the ratio of the light intensity exiting the sample to the intensity that entered it. Transmittance ranges from 0% (the solution blocks all the light) to 100% (none is absorbed). Absorbance is then calculated from transmittance using a logarithmic conversion: A = -log₁₀(T), where T is transmittance expressed as a decimal.

This means absorbance and transmittance move in opposite directions. A highly concentrated, deeply colored solution transmits very little light and has a high absorbance value. A dilute, nearly clear solution transmits most of the light and has an absorbance close to zero.
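The conversion itself is simple enough to sketch in a few lines of Python (the transmittance values here are just illustrative):

```python
import math

def absorbance_from_transmittance(t):
    """A = -log10(T), with T as a decimal between 0 and 1."""
    return -math.log10(t)

def transmittance_from_absorbance(a):
    """The inverse: T = 10^(-A)."""
    return 10 ** (-a)

print(round(absorbance_from_transmittance(0.90), 3))  # nearly clear sample: 0.046
print(round(absorbance_from_transmittance(0.10), 3))  # dark sample: 1.0
```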

How a Spectrophotometer Works

The instrument that puts Beer’s Law into practice has four basic parts. A light source produces a continuous spectrum of radiation. A monochromator, which uses a diffraction grating (older instruments used a prism), isolates a single wavelength of light. This is important because Beer’s Law only holds for one wavelength at a time. The selected light then passes through the sample holder, where the cuvette containing your solution sits. Finally, a detector, usually a phototube or photomultiplier tube, measures the intensity of the light that made it through.

To get meaningful results, you first run a “blank,” a cuvette containing just the solvent without the substance you’re measuring. This tells the instrument what 100% transmittance looks like. Every subsequent measurement is compared against that baseline.
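In terms of raw detector readings, the blank defines the intensity that counts as 100% transmittance, and every sample reading is divided by it before taking the logarithm. A minimal sketch, with invented intensities:

```python
import math

def absorbance(sample_intensity, blank_intensity):
    """Blank-corrected absorbance: T = I_sample / I_blank, then A = -log10(T)."""
    return -math.log10(sample_intensity / blank_intensity)

# Invented detector readings, in arbitrary units
i_blank = 1000.0   # solvent alone — defines 100% transmittance
i_sample = 250.0   # light that made it through the actual sample
print(round(absorbance(i_sample, i_blank), 3))  # 0.602
```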

Where Beer’s Law Is Used

The most common application is determining the concentration of a substance in solution. In a teaching lab, this might mean figuring out how much dye is in a sample of colored water. In a research or clinical lab, the stakes are higher.

DNA and protein quantification rely heavily on Beer’s Law. Nucleic acids absorb ultraviolet light strongly at 260 nm, while proteins absorb at 280 nm. A device called a NanoDrop spectrophotometer can measure the absorbance of just one microliter of sample at these wavelengths to calculate DNA concentration. The ratio of absorbance at 260 nm versus 280 nm also reveals how pure the sample is, since protein contamination shifts the ratio.
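The arithmetic behind those readings follows a widely used rule of thumb: for double-stranded DNA, an A260 of 1.0 over a 1 cm equivalent path corresponds to roughly 50 ng/µL, and a 260/280 ratio near 1.8 suggests a clean prep. Here’s a Python sketch using those conventions, with invented readings:

```python
def dsdna_ng_per_ul(a260, path_length_cm=1.0):
    """Rule of thumb: A260 of 1.0 over a 1 cm path ≈ 50 ng/µL of double-stranded DNA."""
    return (a260 / path_length_cm) * 50.0

def purity_ratio(a260, a280):
    """260/280 ratio; pure DNA typically reads near 1.8, lower suggests protein."""
    return a260 / a280

a260, a280 = 0.75, 0.40                    # invented readings
print(dsdna_ng_per_ul(a260))               # 37.5 ng/µL
print(round(purity_ratio(a260, a280), 2))  # 1.88 — consistent with a clean prep
```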

Pulse oximeters, the clip-on devices that measure blood oxygen levels, also rely on a version of Beer’s Law. They shine red and infrared light through your fingertip. Oxygenated hemoglobin and deoxygenated hemoglobin absorb these two wavelengths differently. By comparing how much of each color is absorbed as blood pulses through the tissue, the device calculates your oxygen saturation. It’s a practical application of the same principle: light absorption reveals concentration.
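Underneath, this is just two Beer’s Law equations in two unknowns, one per wavelength. Real oximeters rely on empirical calibration of the pulsing signal rather than textbook coefficients, so the numbers in this Python sketch are made-up placeholders that only capture the qualitative pattern:

```python
import numpy as np

# Two Beer's Law equations, one per wavelength:
#   A_red = e_red_HbO2 * c_HbO2 + e_red_Hb * c_Hb
#   A_ir  = e_ir_HbO2  * c_HbO2 + e_ir_Hb  * c_Hb
# Coefficients are invented placeholders (path length folded in), chosen only so
# deoxygenated Hb absorbs more red and oxygenated Hb absorbs more infrared.
E = np.array([[0.3, 1.0],    # red wavelength:      (HbO2, Hb)
              [0.6, 0.4]])   # infrared wavelength: (HbO2, Hb)
A = np.array([0.33, 0.612])  # hypothetical measured absorbances (red, infrared)

c_hbo2, c_hb = np.linalg.solve(E, A)   # solve the 2x2 system for both concentrations
spo2 = c_hbo2 / (c_hbo2 + c_hb)        # saturation = oxygenated fraction
print(f"Estimated saturation: {spo2:.0%}")  # 97%
```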

When Beer’s Law Breaks Down

Beer’s Law assumes ideal conditions, and real-world samples don’t always cooperate. The linear relationship between absorbance and concentration starts to fail under three main conditions.

First, at high concentrations, molecules in solution are close enough together that they interact with each other electrostatically. These interactions change how individual molecules absorb light, bending the straight line into a curve. For most practical purposes, absorbance readings between about 0.1 and 1.0 are considered reliable. Above that range, you’re better off diluting the sample and measuring again.

Second, the law assumes the light hitting the sample is a single wavelength. If the monochromator does a poor job of isolating one wavelength and lets through a range of colors, the absorbance reading becomes an average across those wavelengths rather than a precise measurement at one. This introduces error, especially for substances whose absorptivity changes sharply with wavelength.

Third, if the sample scatters light rather than absorbing it (think of a cloudy or turbid solution), some light never reaches the detector for reasons that have nothing to do with the substance’s concentration. The instrument interprets scattered light as absorbed light, inflating the absorbance reading.

Building a Calibration Curve

In practice, you rarely plug numbers into the Beer’s Law equation and solve for concentration directly. Instead, you build a calibration curve. You prepare a series of solutions with known concentrations of the substance you’re interested in, called standards. You measure the absorbance of each one. Then you plot those points and fit a best-fit line through them.

Once you have that line, any unknown sample can be measured and its absorbance matched to a concentration on the curve. This approach has a built-in advantage: it accounts for the specific conditions of your instrument, your cuvettes, and your wavelength choice without requiring you to look up the molar absorptivity coefficient. As long as the relationship is linear across your range of standards, the curve does the work for you.
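Here’s what that workflow looks like as a short Python sketch using NumPy; the standard concentrations and absorbances are invented:

```python
import numpy as np

# Standards: known concentrations (mM, invented) and their measured absorbances
conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
absb = np.array([0.00, 0.11, 0.22, 0.34, 0.44])

# Fit the best-fit line A = m*c + b through the standards
m, b = np.polyfit(conc, absb, deg=1)

# For an unknown, measure its absorbance and invert the line: c = (A - b) / m
a_unknown = 0.28
c_unknown = (a_unknown - b) / m
print(f"slope = {m:.3f} per mM, unknown ≈ {c_unknown:.2f} mM")
```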

If the points start curving at higher concentrations, that’s a sign you’ve moved outside the range where Beer’s Law holds. The fix is straightforward: dilute those samples, measure again, and multiply the result by your dilution factor.
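The bookkeeping at that last step is a single multiplication. A quick illustration with invented numbers:

```python
def corrected_concentration(measured_conc, dilution_factor):
    """Scale a diluted sample's result back up: a 1:10 dilution means a factor of 10."""
    return measured_conc * dilution_factor

# A sample read off-scale, so it was diluted 1:10 and re-measured; the calibration
# curve mapped the new reading to 0.29 mM (invented numbers). Undo the dilution:
print(f"{corrected_concentration(0.29, 10):.2f} mM")  # 2.90 mM
```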