How Is Color Measured? Instruments, LAB & Delta E

Color is measured by capturing how an object reflects or emits light across the visible spectrum, then converting that data into standardized numbers. The most widely used system, called CIELAB, assigns every color three values: one for lightness, one for its position on the green-to-red scale, and one for its position on the blue-to-yellow scale. These numbers give scientists, manufacturers, and designers a universal language for color that doesn’t depend on anyone’s personal perception.

Why Color Needs Numbers

Your eyes contain three types of cone cells, each sensitive to a different range of wavelengths: short (blue), medium (green), and long (red). Every color you see is your brain’s interpretation of how strongly each cone type is stimulated. The problem is that interpretation varies from person to person, and it shifts dramatically under different lighting. Two paint samples might look identical under the warm bulbs in a hardware store and completely different under the fluorescent lights in your kitchen.

This phenomenon is called metamerism: two surfaces with genuinely different spectral properties can appear to match under one light source but clash under another. Color measurement exists to cut through that subjectivity, giving every color a fixed numerical identity that holds regardless of who’s looking or what bulb is overhead.

How Instruments Capture Color

Two main instruments do the heavy lifting: colorimeters and spectrophotometers. They work on the same basic principle (shining light on or through a sample and measuring what comes back) but differ in precision and flexibility.

A colorimeter measures light at a small number of fixed wavelengths, typically produced by internal LEDs. It’s simpler, cheaper, and perfectly adequate for tasks like checking whether a batch of paint matches a reference sample. A spectrophotometer scans across a wide range of wavelengths, producing a full spectral curve that shows exactly how much light the sample reflects or absorbs at each point. That detailed curve is essential when you need to predict how a color will behave under different lighting conditions or when you’re formulating a new pigment recipe from scratch.

Both devices ultimately produce the same type of output: a set of numbers that locates a color in a standardized color space. For a given sample, they may report slightly different absolute values, but both yield valid, consistent results when comparing samples against each other.
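Turning a spectrophotometer's spectral curve into color coordinates is, at its core, a weighted sum across wavelengths: reflectance times illuminant power times the CIE color-matching functions, normalized so a perfect white reflector scores Y = 100. The sketch below shows the arithmetic; the five-point data tables are made-up stand-ins, not the real CIE tables, which span roughly 380 to 780 nm at fine intervals.

```python
# Sketch: reflectance spectrum -> XYZ tristimulus values by weighted sum.
# The five-point tables below are illustrative stand-ins for the real
# CIE 1931 color-matching functions and illuminant power data.

def spectrum_to_xyz(reflectance, illuminant, xbar, ybar, zbar):
    # Normalize so a perfect reflector (all 1.0) yields Y = 100
    n = sum(i * y for i, y in zip(illuminant, ybar))
    X = 100 * sum(r * i * x for r, i, x in zip(reflectance, illuminant, xbar)) / n
    Y = 100 * sum(r * i * y for r, i, y in zip(reflectance, illuminant, ybar)) / n
    Z = 100 * sum(r * i * z for r, i, z in zip(reflectance, illuminant, zbar)) / n
    return X, Y, Z

# Illustrative data: a perfect white reflector under a flat illuminant
reflectance = [1.0, 1.0, 1.0, 1.0, 1.0]
illuminant  = [1.0, 1.0, 1.0, 1.0, 1.0]
xbar = [0.2, 0.4, 0.6, 0.4, 0.2]
ybar = [0.1, 0.5, 1.0, 0.5, 0.1]
zbar = [0.5, 0.3, 0.1, 0.0, 0.0]
X, Y, Z = spectrum_to_xyz(reflectance, illuminant, xbar, ybar, zbar)
```

A colorimeter skips the curve entirely and samples only a handful of bands, which is why it cannot predict behavior under a different illuminant the way a full spectral measurement can.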

The CIELAB Color Space

The international standard for describing measured color is the CIELAB system (sometimes written CIE L*a*b*), maintained by the International Commission on Illumination. It maps every visible color onto three axes:

  • L* (lightness): Runs from 0 (pure black) to 100 (pure white).
  • a* (green to red): Negative values are greener, positive values are redder.
  • b* (blue to yellow): Negative values are bluer, positive values are yellower.

This system was designed to be “perceptually uniform,” meaning a given numerical step corresponds to roughly the same visual difference anywhere in the space. It’s considered one of the most accurate models for evaluating color in industries from dairy processing to automotive paint, because the three numbers map intuitively to qualities people actually perceive: how light, how red or green, and how yellow or blue.

An older system called CIE XYZ also describes color using three coordinates, but those values don’t correspond as neatly to what the human eye perceives. CIELAB was built on top of XYZ specifically to make the numbers more practical.
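The relationship between the two spaces is a short nonlinear transform. The sketch below converts XYZ to L*a*b* using the standard CIE formula; the function name is my own, and the hard-coded white point is the common D65 / 2° observer choice.

```python
# Sketch: CIE XYZ -> CIELAB conversion (standard CIE formula).
# White point defaults to the usual D65 / 2-degree observer values.

def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    def f(t):
        # Cube root above a small threshold; linear segment below it,
        # which keeps the transform well-behaved near black
        delta = 6 / 29
        return t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29

    fx, fy, fz = (f(v / n) for v, n in zip((X, Y, Z), white))
    L = 116 * fy - 16        # lightness, 0..100
    a = 500 * (fx - fy)      # green (-) to red (+)
    b = 200 * (fy - fz)      # blue (-) to yellow (+)
    return L, a, b
```

Feeding the white point back through the transform returns L* = 100 with a* and b* at zero: pure white, no color cast, which is exactly the anchoring CIELAB was designed around.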

Measuring Color Difference With Delta E

Once you have L*, a*, and b* values for two colors, you can calculate how far apart they are. That distance is called Delta E (ΔE), and it’s the backbone of color quality control worldwide. The current standard formula, CIEDE2000, includes corrections that account for quirks in human perception, particularly around neutral tones, blues, and shifts in hue.

The thresholds are well established:

  • ΔE ≤ 1.0: Not perceptible by most people. This is widely cited as the “just noticeable difference.”
  • ΔE 1.0 to 2.0: A minor difference visible only to a trained eye.
  • ΔE 2.0 to 3.5: A noticeable difference, often the outer limit for commercial acceptability.
  • ΔE 3.5 to 5.0: A clear difference, obvious even to an untrained observer.
  • ΔE > 5.0: The two colors are considered fundamentally different.

When a manufacturer sets a color tolerance for a product, they’re typically specifying a maximum ΔE. A luxury car brand might require ΔE under 1.0 between body panels. A food packaging company might accept up to 2.0 or 3.0.
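In its original 1976 form, ΔE*ab is simply the straight-line distance between two points in Lab space. CIEDE2000 layers perceptual weighting terms on top of that idea but runs to dozens of lines, so the sketch below shows only the 1976 formula.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance between two
    (L*, a*, b*) triples. CIEDE2000 refines this with perceptual
    corrections but starts from the same geometric idea."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Example: two mid-tone grays, 3 L* steps and 4 a* steps apart
print(delta_e_76((50.0, 0.0, 0.0), (53.0, 4.0, 0.0)))  # -> 5.0
```

A difference of 5.0, as in the example, falls in the "fundamentally different colors" band of the thresholds above.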

Why Lighting and Surface Matter

A color measurement is only meaningful if you control what light hits the sample. The international standard for simulating daylight is called D65, which represents average midday light in Western Europe. Instruments use this or other defined illuminants so that measurements taken in a lab in Tokyo can be directly compared with measurements taken in a factory in Ohio.

Surface texture introduces another variable. Imagine two tiles painted the exact same blue, but one has a matte finish and the other is glossy. They contain the same pigment, yet they look different because the glossy surface reflects light in a focused, mirror-like way (specular reflection) while the matte surface scatters it. Spectrophotometers handle this with two measurement modes. One mode (called SCI) captures all reflected light, both specular and scattered, giving you the “true” color of the material regardless of its surface finish. The other mode (called SCE) excludes the specular component, capturing how the surface actually appears to a viewer. Recipe formulators typically use SCI to match pigments. Quality control inspectors typically use SCE to confirm that the finished product looks right.

Visual Comparison Systems

Not every color measurement requires an electronic instrument. The Munsell Color System, developed over a century ago, uses physical color chips organized by three properties: hue (the basic color family), value (lightness or darkness on a scale where low numbers are dark), and chroma (how vivid or dull the color is, where low numbers are grayish). You hold a sample next to the chips and find the closest match.

Soil scientists still use Munsell books in the field every day. A soil’s Munsell notation reveals practical information at a glance. A low value (2 or 3) with a page labeled 10YR means dark, yellowish soil rich in organic matter. A low chroma (2 or less) signals gray, waterlogged conditions. Higher chroma means better drainage and more oxidized minerals. The system works because the chips are physically standardized, so two soil scientists in different countries reading “10YR 3/2” are describing the same color.
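Because the notation is so regular (hue, then value/chroma), it is easy to split programmatically. The small parser below is an illustrative sketch of my own, not a standard tool, and handles only the common chromatic form like “10YR 3/2”.

```python
import re

def parse_munsell(notation):
    """Split a chromatic Munsell notation such as '10YR 3/2' into
    (hue, value, chroma). Illustrative parser; it does not cover
    neutral notations like 'N 4/'."""
    m = re.fullmatch(
        r"(\d+(?:\.\d+)?)([A-Z]{1,2})\s+(\d+(?:\.\d+)?)/(\d+(?:\.\d+)?)",
        notation.strip(),
    )
    if not m:
        raise ValueError(f"not a recognized Munsell notation: {notation!r}")
    hue = m.group(1) + m.group(2)           # e.g. '10YR'
    return hue, float(m.group(3)), float(m.group(4))
```

Parsing “10YR 3/2” yields hue 10YR, value 3, chroma 2: per the field rules above, a dark soil with low chroma, suggesting organic matter and possibly poor drainage.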

Color Measurement in Practice

The food industry is one of the heaviest users of color measurement. Consumers judge freshness, ripeness, and quality by color before they ever taste anything, so processors measure it obsessively. CIELAB is the dominant system, applied to everything from cheese and olive oil to canned tomatoes. Professional spectrophotometers from companies like Konica Minolta have long been the gold standard, but lower-cost sensors and even flatbed scanners paired with free image analysis software are increasingly used for fast, less precise checks on production lines.

Display calibration is another common application. If you work with photos or video, the colors on your screen need to match reality. A hardware colorimeter (a small sensor you attach to your monitor) measures the actual colors your screen produces, compares them to known standards, and generates a correction profile. Before calibrating, you let the monitor warm up for about 30 minutes and ensure consistent ambient lighting. The software then asks you to set target values for white point, brightness, and gamma before running an automated test that maps your display’s color output against reference values. The result is a custom profile that corrects for your specific screen’s inaccuracies.

In manufacturing, from textiles to plastics to cosmetics, the workflow is similar: measure a reference sample, set a Delta E tolerance, then measure every production batch against that reference. A ΔE under 1.0 means the batch is visually indistinguishable from the standard. Above 3.5, and most customers would notice the mismatch.
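Once the Lab values are in hand, that workflow reduces to a few lines of code. The sketch below uses the 1976 ΔE formula with made-up reference and batch measurements; the tolerance of 2.0 is an illustrative pick from the commercial range, not a universal rule.

```python
import math

def delta_e(lab1, lab2):
    """CIE 1976 Delta E: Euclidean distance in Lab space."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Illustrative reference measurement and production batches
reference = (62.5, 18.0, 30.2)
batches = {
    "batch-A": (62.8, 18.3, 30.0),   # tiny drift from the standard
    "batch-B": (60.0, 20.0, 33.0),   # clearly off
}

TOLERANCE = 2.0  # illustrative commercial tolerance

report = {bid: (round(delta_e(reference, lab), 2),
                delta_e(reference, lab) <= TOLERANCE)
          for bid, lab in batches.items()}
```

Batch A passes with a ΔE well under 1.0; batch B lands above 4.0 and would be rejected, matching the thresholds discussed earlier.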