How Do Blind People Watch TV: Audio Description & More

Blind and visually impaired people watch TV primarily through audio description, a narration track that fills in what’s happening on screen between lines of dialogue. A narrator describes facial expressions, scene changes, physical actions, and on-screen text so viewers can follow the full story without seeing the picture. Combined with voice-controlled remotes, accessible streaming apps, and the natural sound design already built into most shows, audio description makes TV far more accessible than many sighted people realize.

How Audio Description Works

Audio description is a separate narration track layered on top of a program’s regular audio. During natural pauses in dialogue, a narrator briefly describes what’s happening visually: a character’s body language, a shift in setting, an important object on screen, or text like a location title. A description might sound like “She smiles as she opens the envelope” or “The camera pans across a crowded parking lot at night.” The goal is to blend seamlessly into the existing soundtrack without talking over the actors or interrupting the flow of the scene.

The timing is precise. Describers work within the gaps that already exist in a program’s dialogue, choosing only the most essential visual details. Not every gesture or background element gets described. Instead, the narrator prioritizes what a viewer needs to follow the plot, understand character emotions, or catch visual humor. For fast-paced scenes packed with visual action, fitting everything into those gaps can be a tight squeeze, and the quality of the description varies significantly between productions.

Where to Find Audio Description

On traditional broadcast TV, audio description is delivered through a secondary audio channel, sometimes called SAP (Secondary Audio Programming). This is the same channel often used for alternate language tracks. You can turn it on through your TV’s audio settings or your cable box menu, and the audio switches to a mix that includes the descriptive narration alongside the regular soundtrack.

FCC rules require ABC, CBS, Fox, and NBC affiliates in the top 120 TV markets to provide 87.5 hours of audio-described programming per calendar quarter, roughly 7 hours per week. Of those hours, 50 must be prime time or children’s programming. Cable, satellite, and phone-based TV systems with 50,000 or more subscribers face the same requirement for the five most-watched non-broadcast networks. That means a baseline of described content is guaranteed on major channels, though it still covers only a fraction of total programming.
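For readers who want to check the math behind those figures, here is a quick back-of-the-envelope calculation (the week count is an approximation, not part of the rule text):

```python
# FCC audio description minimum, per the figures above
hours_per_quarter = 87.5
weeks_per_quarter = 13          # a calendar quarter is roughly 13 weeks

weekly = hours_per_quarter / weeks_per_quarter
print(f"{weekly:.1f} hours per week")   # 6.7 -- i.e. roughly 7

# At least 50 of those 87.5 hours must be prime time or children's programming
prime_or_kids_share = 50 / hours_per_quarter
print(f"{prime_or_kids_share:.0%} of the mandated hours")   # 57%
```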

Streaming services have expanded access considerably. Netflix, Hulu, Amazon Prime Video, Disney+, Apple TV+, and others offer audio description on a growing portion of their libraries. On Disney+, for example, you select the “Audio and Subtitles” menu while watching, then choose “English, Audio Description” to activate it. The process is similar on other platforms, typically requiring just a couple of clicks in the playback settings. One persistent frustration: most services don’t let you filter or sort their entire catalog by audio description availability, so finding described content can involve trial and error.

Navigating the TV Itself

Finding a show and pressing play presents its own challenge when menus are designed around visual interfaces. Several layers of technology help with this. Many modern streaming devices, including Amazon Fire TV and Apple TV, have built-in screen readers that announce menu items, show titles, and navigation options aloud. Fire TV devices offer a feature called VoiceView that reads the screen as you move through it with the remote.

Voice control has become one of the most practical tools. Saying “Play Stranger Things on Netflix” or “Turn on audio description” to a smart remote or smart speaker skips the visual menu entirely. Amazon Alexa, Google Assistant, and Apple’s Siri can all launch content, adjust volume, pause playback, and switch inputs by voice. For many blind viewers, this has replaced the need to memorize button sequences on a remote.

Physical modifications also help. Some viewers use tactile stickers or raised dots on remote control buttons to distinguish them by touch. The bump on the “5” button of a numeric keypad, a convention borrowed from telephones, serves as a reference point. More advanced solutions include 3D-printed or laser-cut overlays that sit on top of touch screens, giving physical edges and landmarks to otherwise flat interfaces. These can be customized for specific devices and apps.

Following a Show Without Description

Audio description isn’t available on every program, and many blind viewers are skilled at following TV without it. Dialogue carries the bulk of any story, and most shows convey far more through conversation than through silent visual action. Character voices become the primary identifier. Regular viewers learn to distinguish characters by vocal tone, speech patterns, and accents rather than by appearance.

Sound design does a surprising amount of heavy lifting. Footsteps signal movement. A door closing tells you someone left the room. Background noise, whether it’s traffic, birdsong, or the hum of a hospital, establishes the setting. Musical scores cue emotional shifts: tension, romance, danger, comedy. A skilled sound editor, without intending to, makes a show more accessible simply by doing their job well. Action-heavy scenes with minimal dialogue and sparse sound design are the hardest to follow. A quiet, dramatic stare between two characters, the kind of moment that might be visually powerful, can register as dead air without description.

Watching with sighted friends or family fills in the remaining gaps. A quick whispered explanation during a confusing scene is one of the oldest and most common forms of accessibility, and many blind viewers describe it as a normal part of how they’ve always experienced TV.

Smartphone Apps That Add Description

A newer category of tools uses smartphone microphones to synchronize audio description with whatever is playing on screen. The phone listens to the program’s soundtrack, identifies where you are in the content, and plays the matching description track through your earbuds. This means you can add description to a movie playing in a theater, on a friend’s TV, or on a device that doesn’t natively support it.
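The matching step can be sketched in a few lines of Python. This is a rough illustration of the alignment idea, not any particular app’s algorithm: real services match compact audio fingerprints (such as spectrogram peak hashes) that survive room noise and compression, whereas this toy version cross-correlates raw waveforms, and the sample rate and signal lengths are deliberately small.

```python
import numpy as np

def find_playback_offset(reference, mic_clip, sample_rate):
    """Return the offset (in seconds) at which a short microphone
    capture best matches a longer reference soundtrack."""
    # Normalize both signals so volume differences don't skew the match
    ref = (reference - reference.mean()) / (reference.std() + 1e-9)
    clip = (mic_clip - mic_clip.mean()) / (mic_clip.std() + 1e-9)

    # Slide the clip across the reference; the best-scoring alignment
    # tells us where the phone "is" in the program
    scores = np.correlate(ref, clip, mode="valid")
    return int(np.argmax(scores)) / sample_rate

# Toy demo at a deliberately low sample rate to keep it fast
rate = 1_000
rng = np.random.default_rng(42)
soundtrack = rng.standard_normal(10 * rate)   # 10 s of "soundtrack"
heard = soundtrack[3 * rate : 5 * rate]       # the phone hears seconds 3-5

offset = find_playback_offset(soundtrack, heard, rate)
print(f"Start the description track at {offset:.1f} s")   # 3.0 s
```

Once the offset is known, the app simply starts its stored description track at that position and keeps re-checking periodically to stay in sync.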

These apps have limitations. They generally work with movies and pre-recorded content that has a consistent audio fingerprint. Broadcast television, with its commercial interruptions, throws off the synchronization. And the description track has to exist in the app’s database in the first place. Still, for movies in particular, these tools have opened up access in settings where audio description was previously unavailable.

What’s Still Frustrating

Despite real progress, gaps remain. Live television, including news, sports, and award shows, rarely has real-time audio description. Sports coverage benefits from play-by-play commentary that functions similarly, but news broadcasts often rely on on-screen graphics, text crawls, and silent video footage that go completely undescribed. Reality TV and unscripted programming are less consistently described than scripted shows.

The inability to browse by description availability on most streaming platforms means blind users often discover a show lacks description only after they’ve already started watching. And while 87.5 hours per quarter is mandated on major networks, that represents a small slice of the hundreds of hours those networks air. Many beloved older shows and films have never been described at all.

For viewers who grew up without any of these tools, the current landscape represents an enormous leap. For those navigating it daily, the inconsistency is the main source of frustration: not the absence of technology, but the unevenness of its application.