Gamma correction is a nonlinear adjustment applied to pixel values so that images look correct on screens and make efficient use of digital storage. It exists because neither your eyes nor displays respond to light in a straight, proportional way. Without it, photos and video would look either too dark or would waste most of their data on bright tones you can barely distinguish, leaving shadows a muddy, banded mess.
Why Light Needs a Curve
A digital camera sensor captures light linearly. If twice as many photons hit the sensor, the recorded value doubles. Your eyes don’t work this way. Human vision is far more sensitive to small changes in dark tones than in bright ones. If you doubled the light in an already bright room, you’d barely notice, but doubling a dim candle’s output would be obvious. This nonlinear sensitivity roughly follows a power law, meaning perceived brightness relates to actual light intensity raised to an exponent.
Gamma correction bridges the gap between the linear world of sensors and the nonlinear world of human perception. It compresses the tonal range so that more digital values are assigned to the darker shades your eyes care most about, and fewer values are spent on bright highlights where you can’t tell the difference anyway. The result is that a standard 8-bit image (256 levels) looks smooth from shadows to highlights, instead of showing visible banding in dark areas.
The Power Law Formula
At its core, gamma correction is a power law transform. The corrected output equals the input value raised to an exponent called gamma:
output = input^γ
When gamma is less than 1 (like 0.45), the curve lifts dark values upward, brightening them. This is called gamma compression or encoding. When gamma is greater than 1 (like 2.2), the curve pushes values downward, darkening the midtones. This is gamma expansion or decoding. In practice, the formula includes a small linear segment near black, because a pure power curve has infinite slope at zero and would amplify noise in the deepest shadows; the power law still describes the vast majority of the tonal range.
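In code, the two halves of this relationship are one line each. A minimal Python sketch of the pure power law (ignoring the linear segment near black; the function names are illustrative):

```python
def gamma_encode(linear, gamma=1 / 2.2):
    """Compress linear light (0.0-1.0) toward brighter values (gamma < 1)."""
    return linear ** gamma

def gamma_decode(encoded, gamma=2.2):
    """Expand an encoded value back down to linear light (gamma > 1)."""
    return encoded ** gamma

# An 18% middle gray encodes to roughly 0.46, so nearly half the
# coding range ends up devoted to tones at or below middle gray.
midgray_encoded = gamma_encode(0.18)
```

Decoding what was encoded returns the original value, since (x^a)^b = x^(ab) and the exponents are reciprocals.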
Modern standards like Rec. 709 (used for HD video) define a piecewise function: linear below a threshold of about 0.018 luminance, then a power curve with an effective gamma of roughly 0.45 for the rest. The sRGB color space used on most computer monitors follows a very similar curve.
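Those piecewise definitions translate directly into code. A sketch of both encoding curves, with constants taken from the Rec. 709 and sRGB specifications:

```python
def rec709_encode(linear):
    """Rec. 709 opto-electronic transfer function (input 0.0-1.0)."""
    if linear < 0.018:
        return 4.5 * linear                     # linear segment near black
    return 1.099 * linear ** 0.45 - 0.099       # power segment for the rest

def srgb_encode(linear):
    """sRGB transfer function -- a very similar shape, different constants."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055
```

The offsets and scale factors are chosen so the two segments meet with matching value and slope at the threshold, which is why no seam is visible in the image.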
How the Imaging Chain Fits Together
Gamma correction isn’t a single step. It’s split across the entire pipeline from capture to display. A camera (or RAW processing software) encodes the image with a compressive gamma of about 0.45. This brightens the dark tones for storage and transmission. The display then decodes the image by applying the reciprocal gamma of about 2.2, which darkens those values back down. Multiply the two exponents together (0.45 × 2.2 ≈ 1.0) and you get a system that reproduces the original scene’s luminance relationships on screen.
Every stage in this chain has a defined gamma for its inputs and outputs. JPEG and most standard image formats store gamma-encoded data. RAW files from digital cameras, by contrast, use linear gamma (effectively 1.0), which is why RAW files look dark and flat until processing software applies a gamma curve during conversion to JPEG or TIFF. That encoding step redistributes the camera’s native tonal levels into perceptually uniform ones, making the most efficient use of a given bit depth.
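The bit-depth argument can be made concrete. This sketch counts how many 8-bit output codes fall at or below 18% middle gray under linear versus gamma-encoded storage (the 18% reference point and helper name are illustrative):

```python
def codes_below_midgray(encode):
    """How many 8-bit codes cover linear values from 0 up to 18% gray."""
    return round(encode(0.18) * 255) - round(encode(0.0) * 255) + 1

linear_span = codes_below_midgray(lambda x: x)              # store linearly
gamma_span = codes_below_midgray(lambda x: x ** (1 / 2.2))  # gamma-encode first

# Linear storage spends only ~47 of 256 codes on everything below middle
# gray; gamma encoding spends ~118, more than doubling shadow resolution.
```

This is why an 8-bit gamma-encoded JPEG can look smooth while 8-bit linear data would band visibly in the shadows.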
When any link in this chain applies gamma incorrectly, or applies it twice, the result is an image that looks washed out or crushed into darkness. A correctly calibrated system ensures encoding and decoding balance each other so that the final image on screen matches the original scene.
The CRT Origin Story
Gamma correction has its roots in old cathode ray tube televisions. Engineers discovered early on that CRTs do not produce light proportional to their input voltage. Instead, the light output is proportional to voltage raised to a power of about 2.2 to 2.5, a quirk caused by electrostatic effects in the electron gun. This meant a signal intended to be 50% brightness would appear much darker on screen.
The fix was to pre-correct the signal at the camera end, applying the inverse curve (gamma of roughly 0.45) before transmission. The CRT’s natural physics would then undo the correction, and the viewer saw the intended brightness. This happy accident also turned out to match human perception well: the compression into darker tones aligned neatly with how our eyes allocate sensitivity. So even though CRTs are now obsolete, the gamma system they inspired persists.
Modern Displays and Gamma Simulation
LCD and OLED screens don’t share the CRT’s natural voltage-to-light physics. Their inherent response curves are completely different. However, because the entire video and image ecosystem was built around the CRT gamma model, modern flat panels electronically simulate a CRT-like gamma curve. Your monitor’s internal processing maps the incoming signal through a lookup table that mimics the 2.2 power response, ensuring images encoded for the old standard still display correctly.
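Such a lookup table is easy to picture. A simplified sketch of the mapping a display might apply, assuming a pure 2.2 power response (real panels use factory-calibrated tables, not this exact formula):

```python
# 256-entry 1D LUT: 8-bit encoded input code -> 8-bit linear drive level,
# emulating a 2.2 power response the way a flat panel's processing might.
LUT = [round(((code / 255) ** 2.2) * 255) for code in range(256)]

# A mid-signal input of 128 drives the panel at only ~22% of full output
# (0.502 ** 2.2 ≈ 0.22), reproducing the CRT-style darkening.
half_signal_drive = LUT[128]
```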
Most consumer monitors target a gamma of 2.2, which is the standard for sRGB content and general computing. Professional broadcast monitors often use a gamma of 2.4 (defined in the ITU-R BT.1886 standard), which produces slightly deeper shadows suited to the dim viewing environments of color grading suites. The Rec. 2020 standard for ultra-high-definition television uses the same transfer function as Rec. 709, just specified with higher precision for 12-bit systems.
HDR Changes the Rules
High dynamic range content breaks away from the traditional power-law gamma entirely. Standard gamma was designed for displays topping out around 100 candelas per square meter. HDR displays can hit 1,000 to 10,000 candelas per square meter, and stretching a simple power curve across that range would require 15 bits per channel to avoid banding. That’s wildly inefficient.
Instead, HDR uses new transfer functions built specifically for wide luminance ranges. The Perceptual Quantizer (PQ), published as SMPTE ST 2084, models human perception across the full range up to 10,000 cd/m² and replaces the gamma curve used in standard dynamic range content. Hybrid Log-Gamma (HLG) takes a different approach, combining a logarithmic curve for highlights with a conventional curve for shadows, making it backward compatible with standard displays. PQ is not backward compatible with the traditional gamma curve, which is why HDR content requires displays and software that understand these newer transfer functions.
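The PQ curve itself is compact. A sketch of the ST 2084 inverse EOTF (absolute luminance to signal), using the constants published in the standard:

```python
# SMPTE ST 2084 (PQ) constants, from the fractions given in the standard
M1 = 2610 / 16384        # ≈ 0.1593
M2 = 2523 / 4096 * 128   # ≈ 78.84
C1 = 3424 / 4096         # ≈ 0.8359
C2 = 2413 / 4096 * 32    # ≈ 18.85
C3 = 2392 / 4096 * 32    # ≈ 18.69

def pq_encode(nits):
    """Map absolute luminance (cd/m², up to 10,000) to a PQ signal in 0-1."""
    y = min(max(nits / 10000.0, 0.0), 1.0)  # normalize to the 10,000-nit ceiling
    p = y ** M1
    return ((C1 + C2 * p) / (1 + C3 * p)) ** M2

# A 100-nit white (roughly SDR peak) lands near the middle of the PQ
# range, leaving the upper half of the code space for HDR highlights.
sdr_white = pq_encode(100)
```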
Practical Impact on Photos and Video
If you edit photos or video, gamma correction affects your work at several points. When you shoot in RAW, your camera stores linear data. The image looks dark and low-contrast on screen because your monitor expects gamma-encoded input. Your editing software applies a gamma curve (along with other adjustments) when it renders the preview, and bakes that curve in when you export to JPEG or a similar format.
Misunderstanding gamma leads to common editing mistakes. Adjusting brightness on a linear file as though it were already gamma-encoded will produce unnatural results. Working in a “linear light” workflow (sometimes used in visual effects compositing) means temporarily removing the gamma curve, performing calculations on the true light values, and re-encoding afterward. This matters for operations like blurring, blending, or adding light effects, which are physically accurate only in linear space.
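The difference is easy to demonstrate with a 50/50 blend, approximating the display curve as a pure 2.2 power law (a simplification; real pipelines use the full piecewise transfer functions):

```python
GAMMA = 2.2  # assumed display curve for this sketch

def blend_encoded(a, b):
    """Naive 50/50 blend performed directly on gamma-encoded values."""
    return (a + b) / 2

def blend_linear(a, b):
    """Physically plausible blend: decode to light, average, re-encode."""
    lin = (a ** GAMMA + b ** GAMMA) / 2
    return lin ** (1 / GAMMA)

# Blending encoded black (0.0) with encoded white (1.0):
naive = blend_encoded(0.0, 1.0)   # 0.5  -- too dark
correct = blend_linear(0.0, 1.0)  # ~0.73 -- matches mixing equal light
```

The naive blend darkens edges and blurs, which is why compositing packages offer a linearized working space for these operations.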
For everyday use, most of this happens invisibly. Your camera, operating system, and display handle the encoding and decoding automatically. But when images look too dark, too bright, or show unexpected banding in shadows, a gamma mismatch somewhere in the chain is often the cause. Checking that your monitor is calibrated to the expected gamma (usually 2.2 for sRGB content) and that your software’s color management settings match your output format will resolve most of these issues.

