What Is HDR Tone Mapping and How Does It Work?

HDR tone mapping is the process of translating a video signal with a wide brightness range into something your specific screen can actually display. HDR content can be mastered for peak brightness levels of 1,000 to 10,000 nits, but most TVs top out well below that. Tone mapping is what bridges that gap, compressing the brightest and darkest parts of the image so you still see detail in highlights and shadows instead of blown-out whites or crushed blacks.

Why Tone Mapping Is Necessary

Standard dynamic range (SDR) content, the kind we watched for decades, was mastered at around 100 nits peak brightness. HDR changed that dramatically. HDR10, the baseline open standard, targets displays capable of 1,000 nits. HDR10+ content is mastered for up to 4,000 nits. Dolby Vision supports a theoretical ceiling of 10,000 nits, though most Dolby Vision movies and shows are mastered between 1,000 and 4,000 nits.

Now consider what your screen can actually produce. A mid-range TV might peak at 600 nits. A high-end OLED might hit 1,200. Even the latest QD-OLED panels with 4,000 nits of peak brightness can only sustain around 400 to 500 nits across a full screen. Real HDR content includes scenes where a specific overhead light should measure around 1,950 nits, a bright building at night should sit at 700, and pool reflections should land near 300. If your display peaks at 600 nits, there’s simply no room for those highlights to separate from each other. Everything bright gets squeezed into one flat, washed-out zone. Tone mapping prevents that by intelligently redistributing brightness values so the relationships between bright and dark areas are preserved, even on a less capable screen.

How the Compression Actually Works

At its core, tone mapping applies a mathematical curve that takes the full brightness range of the source content and maps it onto the smaller range your display supports. The lower and middle portions of the image typically pass through with minimal change, since most displays handle those brightness levels without issue. The real work happens at the top of the range, where the brightest highlights need to be compressed into whatever headroom your screen has left.
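As a rough illustration, here is a minimal Python sketch of such a curve. The knee position, the 4,000-nit source assumption, and the simple linear compression above the knee are all illustrative choices, not any manufacturer's actual algorithm.

```python
def tone_map(nits, display_peak=600.0, source_max=4000.0, knee_ratio=0.75):
    """Map a source brightness value (in nits) onto the display's range.

    Values below the knee pass through unchanged; values above it are
    compressed (linearly, for simplicity) so the source maximum lands
    exactly at the display's peak. All constants are illustrative.
    """
    knee = display_peak * knee_ratio        # e.g. 450 nits on a 600-nit panel
    if nits <= knee:
        return float(nits)                  # shadows and mid-tones untouched
    headroom = display_peak - knee          # brightness range left for highlights
    return knee + headroom * (nits - knee) / (source_max - knee)

for source in (100, 300, 450, 700, 1950, 4000):
    print(f"{source:>5} nits in the source -> {tone_map(source):6.1f} nits on screen")
```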

There are two broad approaches to handling those highlights. A hard clip simply cuts off everything above the display’s maximum brightness, treating a 2,000-nit explosion and a 1,500-nit sky reflection as the same flat white. You lose texture, detail, and the sense of depth that HDR is supposed to deliver. A soft roll-off, by contrast, gradually curves the brightest values downward so they still differ from each other, even if the absolute brightness is reduced. This preserves the sense of shape and gradation in bright areas like clouds, flames, or metallic reflections.
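The difference shows up clearly in numbers. This sketch compares the two strategies for a hypothetical 600-nit panel; the Reinhard-style roll-off is just one common formula, not what any particular TV uses.

```python
DISPLAY_PEAK = 600.0    # hypothetical panel limit, in nits
KNEE = 450.0            # point where compression starts (illustrative)

def hard_clip(nits):
    # Everything above the panel limit becomes the same flat white.
    return min(nits, DISPLAY_PEAK)

def soft_rolloff(nits):
    # Below the knee, pass through; above it, compress asymptotically
    # toward the panel limit so brighter inputs remain brighter outputs.
    if nits <= KNEE:
        return nits
    excess = nits - KNEE
    headroom = DISPLAY_PEAK - KNEE
    return KNEE + headroom * excess / (excess + headroom)   # Reinhard-style

for label, nits in [("1,500-nit sky reflection", 1500), ("2,000-nit explosion", 2000)]:
    print(f"{label}: hard clip -> {hard_clip(nits):.0f} nits, "
          f"soft roll-off -> {soft_rolloff(nits):.0f} nits")
```

Both highlights collapse to the same 600 nits under the hard clip, while the roll-off keeps them distinct, which is the property that preserves shape and texture in bright areas.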

Color is the trickier part. Most tone mapping algorithms focus on compressing luminance (brightness) without paying much attention to what happens to color in the process. When you push brightness values down, colors can become oversaturated because the ratio between color channels shifts. Some systems try to correct this by desaturating the image after the fact, but that can strip away vibrancy and cause hue shifts. Getting both brightness and color right simultaneously is one of the harder problems in display engineering, and it’s a big reason why some TVs look noticeably better than others when playing the same HDR content.
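A simplified sketch of the trade-off: applying the curve to each channel separately changes the ratios between channels (shifting hue and saturation), while applying it to luminance and scaling the channels together preserves the ratios but can leave one channel brighter than the panel can reproduce. The Rec. 2020 luminance weights are standard; everything else here is illustrative.

```python
def rolloff(nits, peak=600.0, knee=450.0):
    # Same kind of highlight roll-off as above (illustrative constants).
    if nits <= knee:
        return nits
    excess, headroom = nits - knee, peak - knee
    return knee + headroom * excess / (excess + headroom)

def luminance(rgb):
    r, g, b = rgb
    return 0.2627 * r + 0.6780 * g + 0.0593 * b      # Rec. 2020 luma weights

pixel = (2000.0, 900.0, 200.0)        # a bright orange highlight, nits per channel

# Option A: run the curve on each channel independently.
per_channel = tuple(rolloff(c) for c in pixel)

# Option B: run the curve on luminance only, then scale every channel by the
# same factor so the ratios between channels (the color) are preserved.
scale = rolloff(luminance(pixel)) / luminance(pixel)
lum_scaled = tuple(c * scale for c in pixel)

def channel_ratios(rgb):
    r, g, b = rgb
    return round(r / b, 2), round(g / b, 2)

print("original ratios   :", channel_ratios(pixel))        # (10.0, 4.5)
print("per-channel       :", channel_ratios(per_channel))  # ratios shift -> color changes
print("luminance-scaled  :", channel_ratios(lum_scaled))   # ratios kept, but red may still exceed the panel
```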

Static vs. Dynamic Tone Mapping

The simplest form is static tone mapping, which is what basic HDR10 uses. The content includes a single set of metadata describing the overall mastering characteristics of the entire movie or episode, including the brightness of the single brightest pixel anywhere in the content (MaxCLL) and the average brightness of its brightest frame (MaxFALL). Your TV reads those two numbers, picks one tone-mapping curve, and applies it to everything from start to finish.
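In rough code terms, the static path looks something like this. The two fields mirror HDR10's MaxCLL and MaxFALL values, but how any given TV turns them into a curve is proprietary, so the curve-building logic below is only a guess at the general shape.

```python
from dataclasses import dataclass

@dataclass
class StaticMetadata:
    max_cll: float    # brightest single pixel anywhere in the title (nits)
    max_fall: float   # average brightness of the brightest frame (nits)

def pick_static_curve(meta: StaticMetadata, display_peak: float):
    """Choose ONE curve for the entire title from the static metadata.

    If nothing in the title exceeds the panel's peak, no compression is
    needed; otherwise compress from a fixed knee so that max_cll lands
    at the panel's peak. (max_fall is carried but unused in this
    simplified sketch; real processing would weigh it too.)
    """
    if meta.max_cll <= display_peak:
        return lambda nits: nits                       # passthrough
    knee = 0.75 * display_peak
    slope = (display_peak - knee) / (meta.max_cll - knee)
    return lambda nits: nits if nits <= knee else knee + (nits - knee) * slope

# One curve, applied to every frame from the studio logo to the credits.
curve = pick_static_curve(StaticMetadata(max_cll=4000, max_fall=400), display_peak=600)
print(curve(120), round(curve(1000), 1), round(curve(4000), 1))
```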

The problem is obvious: a movie has dark dialogue scenes and bright outdoor sequences. A single curve that works well for a sunlit desert will crush detail in a candlelit room, and vice versa. Static tone mapping is always a compromise, optimized for the movie’s extremes rather than any individual scene.

Dynamic tone mapping solves this by adjusting the curve scene by scene, and sometimes frame by frame. It comes in two flavors. The first is content-side dynamic metadata, which is what Dolby Vision and HDR10+ provide. The content creator embeds brightness information for individual scenes or frames directly into the video stream, giving your display specific guidance on how to handle each moment. When your TV receives Dolby Vision or HDR10+ content, it often switches to a dedicated processing pipeline where the content’s own metadata drives the mapping decisions.
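A hedged sketch of the content-side idea: each scene carries its own brightness summary, and the display builds a curve from that rather than from one title-wide number. The structure and field names below are hypothetical, not the actual Dolby Vision or HDR10+ metadata format.

```python
# Hypothetical per-scene metadata: each entry says how bright that scene gets.
scene_metadata = [
    {"scene": "candlelit dinner", "scene_max_nits": 180},
    {"scene": "desert chase",     "scene_max_nits": 3200},
    {"scene": "night skyline",    "scene_max_nits": 900},
]

DISPLAY_PEAK = 600.0

for meta in scene_metadata:
    scene_max = meta["scene_max_nits"]
    if scene_max <= DISPLAY_PEAK:
        # The whole scene fits the panel: no compression, no lost detail.
        note = "pass through untouched"
    else:
        # Compress only as much as this particular scene requires.
        note = f"compress so {scene_max} nits lands at {DISPLAY_PEAK:.0f}"
    print(f"{meta['scene']:>16}: {note}")
```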

The second flavor is display-side dynamic tone mapping, sometimes called DTM. This is processing your TV does on its own, analyzing the incoming image in real time and adjusting the curve regardless of whether the content includes dynamic metadata. Many modern TVs apply their own DTM on top of basic HDR10 content to compensate for the limitations of static metadata. The quality of this processing varies significantly between TV brands and models, which is one reason HDR picture quality can look so different across displays playing the same stream.
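Display-side DTM has to infer the same information by measuring the picture itself. Here is a minimal sketch of the idea, assuming a simple percentile measurement of each frame; real implementations use far more sophisticated scene analysis.

```python
import numpy as np

DISPLAY_PEAK = 600.0

def analyze_and_map(frame_nits: np.ndarray) -> np.ndarray:
    """Tone map a single frame with no metadata at all.

    Measure how bright this frame actually is (the 99.9th percentile,
    so a few stray pixels don't dominate), then build a curve sized to
    that measurement rather than to the whole title.
    """
    frame_peak = float(np.percentile(frame_nits, 99.9))
    if frame_peak <= DISPLAY_PEAK:
        return frame_nits.copy()                     # dark scene: leave it alone
    knee = 0.75 * DISPLAY_PEAK
    slope = (DISPLAY_PEAK - knee) / (frame_peak - knee)
    compressed = knee + (frame_nits - knee) * slope
    mapped = np.where(frame_nits <= knee, frame_nits, compressed)
    return np.minimum(mapped, DISPLAY_PEAK)          # clip the outliers above the percentile

# A dim frame passes through untouched; a bright one gets its own curve.
dim_frame = np.full((1080, 1920), 80.0)
bright_frame = np.random.uniform(0.0, 2500.0, size=(1080, 1920))
print(analyze_and_map(dim_frame).max(), analyze_and_map(bright_frame).max())
```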

How This Applies to Gaming

Games present a unique challenge because the image is generated in real time rather than pre-mastered. A filmmaker can preview exactly how every frame looks and embed metadata accordingly. A game engine has no idea what brightness levels any given moment will produce until you’re playing it.

This led to the creation of HGiG, the HDR Gaming Interest Group, a consortium of TV manufacturers, console makers, and game studios working to standardize how HDR is handled in games. The core idea is to create a shared understanding between your TV, your console, and the game itself. You calibrate your console to your TV’s actual capabilities once, and every HGiG-compatible game uses that calibration to tailor its HDR output. Without this, you’d need to tweak brightness and HDR sliders for every individual game, and many players end up with washed-out highlights or invisible shadow detail because the game and the TV are both trying to tone map independently, working against each other.
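In spirit, the handshake looks something like the sketch below: the console stores the calibrated limits once, each HGiG-aware game maps its output into exactly that range, and the TV agrees not to re-map on top. The names and the simple clamp are hypothetical, for illustration only.

```python
# Values the player sets once in the console's HDR calibration screen, by
# adjusting test patterns until detail just disappears (hypothetical names).
calibration = {
    "max_luminance_nits": 800.0,    # brightest level the TV can actually resolve
    "min_luminance_nits": 0.005,    # darkest level still distinguishable from black
}

def game_tone_map(scene_nits: float) -> float:
    """Each HGiG-aware game maps its rendered output into the calibrated
    range itself, so the TV doesn't need to tone map a second time."""
    lo = calibration["min_luminance_nits"]
    hi = calibration["max_luminance_nits"]
    return max(lo, min(scene_nits, hi))   # simplistic clamp; a real game uses a proper curve

# With HGiG, the game outputs nothing the TV can't show, and the TV displays it
# as-is. Without it, the TV would re-compress values the game already placed in range.
print(game_tone_map(0.001), game_tone_map(650.0), game_tone_map(4000.0))
```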

HGiG remains a set of guidelines rather than an official specification, and adoption is still growing. But when it works properly, it prevents the double-mapping problem where both the game and the TV apply their own tone mapping curves, resulting in a muddy or flat-looking image.

What Makes Good Tone Mapping

The difference between good and bad tone mapping is visible in everyday viewing. On a well-implemented display, a sunset scene will show distinct gradations from deep orange to pale yellow to white, with clouds retaining their texture even at the brightest points. On a poorly implemented one, the same scene collapses into bands of flat color with a featureless white blob where the sun should be.

A display with 1,200 to 2,500 nits of peak brightness has enough headroom to preserve most of the gradations in typical HDR content, maintaining detail in both the brightest and darkest parts of the image simultaneously. Below that, tone mapping has to work harder, and the quality of the algorithm matters more. A 600-nit TV with excellent tone mapping can look better in practice than an 800-nit TV with aggressive clipping.

If your TV has tone mapping settings you can adjust, the most meaningful option is usually choosing between a mode that prioritizes peak brightness (punchy highlights but potential clipping) and one that prioritizes detail preservation (softer highlights but more visible texture and gradation in bright areas). Neither is universally better. Bright, well-lit rooms favor punchier mapping, while dark home theater setups tend to reward the detail-preserving approach, where subtle differences in highlight brightness become much more visible.