What Is Chrominance: Color Information in Video

Chrominance is the color information in a video or image signal, separated from brightness. Every pixel you see on a screen is built from two layers of data: luminance (how bright it is) and chrominance (what color it is). This split exists because of how human vision works and because it allows engineers to compress video far more efficiently than sending full color data for every pixel.

How Chrominance Relates to Luminance

A black-and-white photograph contains only luminance, the light and dark values that define shapes, edges, and detail. Chrominance adds color on top of that grayscale foundation. Together, the two recreate a full-color image. Think of luminance as the sketch and chrominance as the paint.

In technical terms, chrominance describes both hue (the actual color, like red or blue) and saturation (how vivid that color is). When chrominance drops to zero, you’re left with pure gray at whatever brightness level the luminance dictates. The standard way to encode chrominance is through “color difference” signals. Instead of storing red, green, and blue values independently, systems store one brightness channel and two chrominance channels that represent how far the color deviates from neutral gray. These are often labeled Cb (blue difference) and Cr (red difference) in digital video, or U and V in older analog formats.
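The color-difference idea is easy to sketch in code. Here is a minimal illustration (not any standard's exact encoding; it borrows the classic BT.601 luma weights and leaves the differences unscaled):

```python
def to_luma_and_diffs(r, g, b):
    """Split an RGB pixel (values 0.0-1.0) into luminance plus two
    color-difference signals, using the BT.601 luma weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance
    cb = b - y                             # blue difference
    cr = r - y                             # red difference
    return y, cb, cr

# A neutral gray pixel (R = G = B) yields zero chrominance: both
# differences vanish, leaving only the brightness value.
y, cb, cr = to_luma_and_diffs(0.5, 0.5, 0.5)
print(y, cb, cr)  # cb and cr come out (numerically) zero
```

Note how gray requires no color information at all, which is exactly why a color-difference encoding wastes no bandwidth on a black-and-white picture.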

Why Brightness and Color Are Split Apart

Your eyes are far more sensitive to changes in brightness than to changes in color. The retina has three types of color-detecting cone cells (responsive to long, medium, and short wavelengths of light), plus a luminance-processing system that combines signals from those cones. Research on human contrast sensitivity shows that your visual system detects fine spatial detail and fast motion primarily through luminance, while color perception is tuned for slower, broader changes. The neural pathways for luminance and color even travel through different parts of the brain: luminance signals feed heavily into the dorsal pathway (which tracks motion and spatial relationships), while color information becomes more important in the ventral pathway (which handles object recognition).

In practical terms, this means you can blur or reduce the color information in a video and most viewers won’t notice, as long as the brightness detail stays sharp. Engineers have exploited this quirk of human vision since the earliest days of color television.

The Analog Television Origin

When color TV arrived in the 1950s, it had to remain compatible with existing black-and-white sets. The solution was elegant: keep the original luminance signal intact and add chrominance on top of it using a separate subcarrier frequency. In the NTSC system used in North America, this color subcarrier was set at 3.579545 MHz, chosen so it would be exactly 455/2 times the horizontal line rate. A black-and-white TV simply ignored the subcarrier. A color TV decoded it to extract hue and saturation, then combined those with the luminance signal to produce full color.
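Those NTSC figures check out with quick arithmetic: the color line rate is defined as 4.5 MHz / 286 (about 15,734.27 Hz), and multiplying it by 455/2 lands on the subcarrier frequency:

```python
line_rate = 4_500_000 / 286        # NTSC color line rate, ~15,734.27 Hz
subcarrier = line_rate * 455 / 2   # subcarrier sits at 227.5x the line rate

print(f"{subcarrier:,.2f} Hz")     # ~3,579,545.45 Hz, i.e. ~3.579545 MHz
```

The odd half-integer multiple (227.5) was deliberate: it interleaves the chrominance spectrum between the luminance harmonics, minimizing visible interference on black-and-white sets.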

The PAL system, used across much of Europe, placed its color subcarrier at 4.43 MHz and added a phase-alternation technique to reduce color errors. S-Video cables later improved on both systems by carrying luminance and chrominance on physically separate wires, preventing the interference that happened when the two were crammed into one composite signal.

Chrominance in Digital Video

Modern digital video uses the same luminance/chrominance split, just in a more precise mathematical form. The most common color model is YCbCr, where Y is the luminance channel, Cb represents the blue-difference chrominance, and Cr represents the red-difference chrominance. Cameras capture red, green, and blue light; the signal is then converted into YCbCr for processing, compression, and transmission.
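As a concrete sketch, here is the full-range 8-bit conversion using the BT.601 coefficients familiar from JPEG (Rec. 709 HD video uses slightly different weights and a narrower code range, but the structure is identical):

```python
def clamp8(v):
    """Round and clip a value into the 8-bit range 0-255."""
    return max(0, min(255, round(v)))

def rgb_to_ycbcr(r, g, b):
    """Full-range 8-bit RGB -> YCbCr (BT.601/JPEG coefficients).
    Chroma channels are centered on 128, so 128 means 'no color'."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return clamp8(y), clamp8(cb), clamp8(cr)

print(rgb_to_ycbcr(128, 128, 128))  # mid gray -> (128, 128, 128)
print(rgb_to_ycbcr(255, 0, 0))      # pure red -> (76, 85, 255): Cr pegged high
```

Centering the chroma channels on 128 is the digital equivalent of the analog color-difference signals: a value of 128 in both Cb and Cr decodes to neutral gray.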

The color range available for chrominance depends on which broadcast standard is in use. The HD television standard known as Rec. 709, established in 1990, covers about 35.9% of the colors the human eye can see (measured against the CIE 1931 color space). The newer Rec. 2020 standard, introduced in 2012 for 4K and 8K content, covers roughly 75.8%, producing noticeably richer and more vibrant colors. Rec. 2020 also supports HDR (high dynamic range), which expands both the brightness range and the color precision of each frame.

How Chroma Subsampling Saves Data

Because your eyes tolerate reduced color detail, nearly all consumer video formats use a technique called chroma subsampling. This reduces the resolution of the chrominance channels while keeping luminance at full resolution. The savings are described with a three-number ratio (written J:a:b) based on a conceptual block of pixels four wide and two rows tall: the first number is the luma reference (almost always 4), the second is the number of chroma samples in the top row, and the third is the number of chroma samples in the bottom row, where 0 means the top row's chroma samples are reused.

  • 4:4:4 means no subsampling at all. Every pixel gets its own full color value. This is used in high-end production and graphics work.
  • 4:2:2 means each horizontal row has only 2 chroma samples for every 4 luminance samples, cutting chrominance data roughly in half. Common in professional broadcast and editing.
  • 4:2:0 means each row has 2 chroma samples per 4 luma samples, but chroma is only recorded on every other row. This is the standard for DVDs, Blu-rays, streaming video, and most content you watch at home.
  • 4:1:1 means only 1 chroma sample per 4 luma samples on every row, used in some older DV camera formats.
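The data savings fall straight out of those ratios. A quick sketch, assuming uncompressed 8-bit samples at 1920×1080:

```python
width, height = 1920, 1080
luma = width * height  # one luma sample per pixel in every scheme

# average chroma samples per pixel (Cb + Cr combined) for each scheme
chroma_per_pixel = {
    "4:4:4": 2.0,   # full-resolution Cb and Cr
    "4:2:2": 1.0,   # half horizontal chroma resolution
    "4:2:0": 0.5,   # half horizontal and half vertical
    "4:1:1": 0.5,   # quarter horizontal resolution
}

for scheme, cpp in chroma_per_pixel.items():
    total = luma * (1 + cpp)  # bytes per frame at 8 bits per sample
    print(f"{scheme}: {total / 1e6:.2f} MB per frame")
```

Note that 4:2:0 and 4:1:1 carry the same total amount of chroma data; they just distribute the loss differently (vertically versus horizontally).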

The jump from 4:4:4 to 4:2:0 discards three-quarters of the chroma samples, halving the total raw data, yet the image looks nearly identical to most viewers because the luminance channel, which carries all the sharp edge detail, remains untouched. This is why chroma subsampling became the backbone of video compression. It doesn’t require heavy processing power to decode, and it delivers large file-size reductions with minimal visible quality loss.
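Generating a 4:2:0 chroma plane can be as simple as averaging each 2×2 block of the full-resolution plane (real encoders use better filters and standard-defined sample positions; this shows only the idea):

```python
def subsample_420(plane):
    """Average each 2x2 block of a full-resolution chroma plane,
    given as a list of rows. Assumes even width and height."""
    out = []
    for y in range(0, len(plane), 2):
        row = []
        for x in range(0, len(plane[0]), 2):
            block_sum = (plane[y][x] + plane[y][x + 1] +
                         plane[y + 1][x] + plane[y + 1][x + 1])
            row.append(block_sum // 4)
        out.append(row)
    return out

cb = [[100, 104, 200, 200],
      [ 96, 100, 200, 200]]
print(subsample_420(cb))  # [[100, 200]] -- eight samples reduced to two
```

The averaging is why hard color edges (like a green-screen boundary) suffer most: the subsampled value smears the two sides of the edge together.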

Where Chrominance Matters in Practice

For everyday viewing, you’ll rarely think about chrominance. Streaming services, TV broadcasts, and Blu-ray discs all use 4:2:0 subsampling, and the results look great for finished content. Where chrominance quality becomes critical is in post-production. If you’re color grading footage, pulling a green screen key, or compositing visual effects, reduced chrominance creates problems. Edges around keyed subjects look blocky or jagged because there isn’t enough color data to make clean separations. That’s why professional cameras and production workflows use 4:2:2 or 4:4:4 recording.

Color accuracy also depends on chrominance encoding when you’re calibrating a monitor or TV. Displays that support wider color gamuts (like Rec. 2020) can reproduce chrominance values that simply don’t exist in the older Rec. 709 space. If your source material was mastered in a narrow gamut, a wider-gamut display won’t invent new colors. The chrominance data in the file defines the ceiling of what you can see.