Dynamic tone mapping is a process where your TV or display adjusts brightness, contrast, and color on a scene-by-scene or frame-by-frame basis while you watch HDR content. Instead of applying one fixed brightness setting to an entire movie or show, it continuously analyzes what’s on screen and optimizes the image in real time. The result is better detail in both very bright highlights and dark shadows throughout a piece of content.
Why Standard HDR Needs Help
To understand dynamic tone mapping, you need to know the problem it solves. The most common HDR format, HDR10, uses static metadata: a single set of brightness instructions embedded into the content during production that stays the same from the first frame to the last. Those instructions describe the content as a whole, chiefly its overall peak brightness and average light level, along with the characteristics of the display it was mastered on.
The issue is that a movie contains wildly different scenes. A sunlit desert and a dimly lit interior have completely different brightness needs, but static metadata forces your TV to use the same tone mapping curve for both. Very bright or very dark scenes may not be fully optimized because the TV is working from one compromise setting. A bright explosion might lose detail in the highlights, or a nighttime scene might look washed out, because the TV has no way of knowing what’s coming next.
How Dynamic Tone Mapping Works
Dynamic tone mapping solves this by analyzing the brightness distribution of each frame (or scene) individually, then adjusting the tone mapping curve to match. When a dark, moody scene plays, the algorithm shifts the curve to preserve shadow detail and deep blacks. When the scene cuts to a bright exterior, it recalculates to maintain highlight detail without clipping the whites. This isn't simply making the image brighter or darker; it's a precise rebalancing that preserves detail at both ends of the brightness spectrum simultaneously.
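The curve-rebuilding idea can be sketched in a few lines of Python. This is a deliberately simplified illustration, not any vendor's actual algorithm: it uses an extended Reinhard rolloff curve, and the nit values are invented for the example.

```python
def tone_map(nits, ref_peak, display_peak):
    """Extended Reinhard curve: near-linear in the shadows, with a
    smooth rolloff so ref_peak lands exactly at display_peak.
    If ref_peak <= display_peak, no compression is needed and the
    curve reduces to the identity."""
    l = nits / display_peak
    w = max(ref_peak, display_peak) / display_peak
    return display_peak * l * (1 + l / (w * w)) / (1 + l)

DISPLAY_PEAK = 600   # hypothetical TV peak, in nits
MASTER_PEAK = 4000   # peak the film was graded for

dark_frame = [5, 40, 120]         # nighttime scene
bright_frame = [300, 1500, 3800]  # sunlit exterior

for frame in (dark_frame, bright_frame):
    # Static: one curve for the whole film, keyed to the mastered peak
    static = [round(tone_map(n, MASTER_PEAK, DISPLAY_PEAK), 1) for n in frame]
    # Dynamic: rebuild the curve from this frame's own peak brightness
    dynamic = [round(tone_map(n, max(frame), DISPLAY_PEAK), 1) for n in frame]
    print(f"in={frame}  static={static}  dynamic={dynamic}")
```

With the static curve, the dark frame's 120-nit highlight gets pulled down toward 100 nits because the curve is keyed to a 4,000-nit peak that never appears in the scene; the per-frame curve leaves the dark scene untouched and still fits the bright frame's 3,800-nit peak into the display's 600-nit range.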
There are two ways this happens. Some HDR formats bake dynamic metadata directly into the content file, giving your TV frame-by-frame instructions from the content creator. Other implementations rely on your TV’s own processor to analyze the incoming HDR10 signal and generate its own tone mapping adjustments in real time, even when the content only has static metadata.
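As a rough mental model, the difference between the two metadata styles looks something like this. The field names and numbers are illustrative only, not the actual HDR10 or Dolby Vision/HDR10+ bitstream structures:

```python
# Static HDR10-style metadata: one record covering the entire film
static_metadata = {
    "max_content_light_level": 4000,        # brightest pixel anywhere in the film
    "max_frame_average_light_level": 400,   # brightest frame, on average
}

# Dynamic metadata: per-scene brightness statistics that let the TV
# build a fresh tone mapping curve for each scene (heavily simplified)
dynamic_metadata = [
    {"scene": 1, "peak_nits": 120,  "avg_nits": 8},    # night interior
    {"scene": 2, "peak_nits": 3800, "avg_nits": 350},  # sunlit desert
]
```

A TV working from the static record must pick one compromise curve for both scenes; a TV reading the per-scene records can key each curve to the brightness that scene actually contains.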
HDR Formats That Use Dynamic Metadata
Two major HDR formats include dynamic metadata as part of the content itself: Dolby Vision and HDR10+.
- Dolby Vision adjusts HDR settings on a scene-by-scene or frame-by-frame basis using metadata created during the mastering process. It supports up to 12-bit color depth and a theoretical peak brightness of 10,000 nits, giving it the most headroom of any current format. It’s widely supported on Netflix and Apple TV+.
- HDR10+ works similarly, using dynamic metadata to optimize tone mapping per scene or frame. It’s limited to 10-bit color like standard HDR10 but adds the critical per-scene adjustment layer. Amazon Prime Video is its strongest streaming supporter.
The practical difference between these two and standard HDR10 is most visible in content with high contrast. A scene where a character stands in a dark room with bright light streaming through a window will look noticeably better with dynamic metadata, because the format can tell your TV exactly how to handle both the shadows and the highlights in that specific moment.
TV-Side Dynamic Tone Mapping
Even when you’re watching plain HDR10 content with only static metadata, many modern TVs can apply their own dynamic tone mapping. The TV’s processor analyzes the incoming video frame by frame and generates its own adjustments. Most major TV brands include some version of this feature, often buried in picture settings under names like “Dynamic Tone Mapping” or similar proprietary labels.
This TV-side processing can genuinely improve the HDR10 viewing experience, especially for content that was mastered for displays much brighter than yours. If a movie was graded for a 4,000-nit professional monitor and your TV peaks at 800 nits, the TV needs to compress that brightness range somehow. Dynamic tone mapping does this more intelligently than a single static curve, preserving more detail scene to scene.
The Gaming Difference: DTM vs. HGiG
Gaming adds a wrinkle. Modern consoles like the PS5 and Xbox Series X can do their own tone mapping, which creates a conflict when your TV also applies dynamic tone mapping on top. The result is double tone mapping, where the image gets processed twice and can look overblown.
This is where HGiG comes in. HGiG (HDR Gaming Interest Group) is a setting that tells your TV to display the console’s output without adding any additional tone mapping. The console handles everything, using the TV’s actual peak brightness as its reference point. On an OLED that peaks around 800 nits, for example, you’d set the in-game brightness slider to 800 nits and let the console map the content accordingly.
The visual difference is real. With DTM enabled in gaming, the TV brightens the entire image, including dark areas, which can make shadows look washed out and compress the gap between highlights and dark tones. HGiG keeps darks truly dark and only brightens the parts of the image that are supposed to be bright, like light sources, skies, and reflections. Some gamers find HGiG can look a bit dull or lifeless by comparison, though, because DTM produces a more vibrant, punchy image that many people actually prefer for casual play.
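The contrast between the two behaviors can be caricatured in code. Both functions below are illustrative curves, not any TV's real processing, and the `lift` factor is an invented knob:

```python
def hgig_output(nits, display_peak):
    """HGiG: the TV adds no tone mapping of its own. The console has
    already targeted display_peak, so the TV simply clips anything
    the panel physically cannot show."""
    return min(nits, display_peak)

def dtm_output(nits, display_peak, lift=1.5):
    """Caricature of TV-side DTM in game mode: lifts the whole image
    (shadows included) and rolls highlights off below the panel's
    peak -- the 'washed out darks' effect described above."""
    boosted = nits * lift
    return display_peak * boosted / (boosted + display_peak)

# On a hypothetical 800-nit panel:
print(hgig_output(10, 800), dtm_output(10, 800))      # shadows: DTM lifts them
print(hgig_output(1000, 800), dtm_output(1000, 800))  # highlights: HGiG clips, DTM compresses
```

Note the compressed range under the DTM caricature: shadows come up while bright highlights come down, which is the punchier but less accurate presentation described above.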
The practical approach many gamers use: set your console’s system HDR calibration to HGiG, then switch to DTM for specific games that don’t support HGiG properly or that clip bright highlights incorrectly. Some older titles like Horizon Zero Dawn and God of War are known examples where DTM can actually produce better results because those games weren’t designed with HGiG in mind.
Does It Preserve the Creator’s Intent?
This is the core tension with any TV-side dynamic tone mapping. When a colorist spends weeks grading a film, they make deliberate choices about how bright or dark each scene should look. A dim, low-contrast scene might be intentionally moody. TV-side DTM may “correct” that mood by brightening midtones and boosting contrast in ways the creator never intended.
Formats like Dolby Vision sidestep this problem because the dynamic metadata comes from the creator themselves. The instructions your TV follows were part of the mastering process. TV-side processing, on the other hand, is the display making its own best guess. For most viewers watching on a couch, that guess looks great. For viewers who prioritize accuracy, it can feel like the TV is editorializing.
Content producers working in both standard and HDR formats face a related challenge. Preserving artistic intent across different display types requires precise round-trip conversion between formats, and static approaches using fixed look-up tables can feel arbitrary and limit creative options. Dynamic approaches offer more flexibility but add complexity to the production pipeline.
When to Use It
For movies and TV shows in HDR10 (no dynamic metadata), turning on your TV’s dynamic tone mapping typically improves the picture. You’ll see more detail in bright skies and dark interiors compared to static processing. If your content is already in Dolby Vision or HDR10+, the dynamic metadata is doing this job for you, and your TV should follow those instructions automatically.
For gaming, start with HGiG if your console supports it. Calibrate the in-game HDR settings to your display’s actual peak brightness. If a particular game looks off or clips highlights badly, try switching to DTM for that title. There’s no single correct answer for every game, because HDR implementation varies wildly across titles.

