Deaf and hard-of-hearing viewers watch television primarily through closed captions, which display spoken dialogue and sound effects as on-screen text. But captions are just the starting point. A combination of regulatory requirements, streaming platform features, sign language interpretation, and newer assistive technologies gives deaf viewers multiple ways to access TV content.
Closed Captions: The Foundation
In the United States, the FCC requires that all new English- and Spanish-language programming, both analog and digital, carry closed captions. At least 75% of older, pre-existing programming must also be captioned. Only 13 narrow categories of content are exempt, along with cases where a broadcaster can prove captioning would be an undue financial burden. In practice, this means nearly everything on broadcast and cable TV has captions available.
Modern digital televisions use a captioning standard called CTA-708, which replaced the older analog-era CEA-608 ("Line 21") format. The upgrade matters because 708 captions let you customize how text appears on screen. You can change the font size, color, style, and background opacity to suit your preferences. This is especially useful for viewers who also have low vision or color blindness. The newer format also supports a wider range of languages and special characters, making captions available in non-Western scripts.
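To make the customization concrete, here is a minimal sketch of the kinds of pen and window attributes CTA-708 exposes. The field names and the preset below are invented for illustration; they don't come from any real captioning library.

```python
# Illustrative sketch only: these fields mirror the kinds of pen and
# window attributes CTA-708 exposes, but the names are invented for
# this example and don't come from a real captioning library.
from dataclasses import dataclass

@dataclass
class CaptionStyle:
    font_size: str = "standard"        # 708 defines small / standard / large
    font_style: str = "monospaced_serif"
    text_color: str = "white"
    background_color: str = "black"
    background_opacity: str = "solid"  # solid, translucent, or transparent

# A high-contrast preset a viewer with low vision might choose:
high_contrast = CaptionStyle(
    font_size="large",
    text_color="yellow",
    background_opacity="solid",
)
print(high_contrast)
```

A TV or streaming app stores something equivalent to this preset and applies it to every caption it renders, which is why one settings change carries across all channels and shows.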
To turn captions on, you typically navigate to your TV’s accessibility or caption settings; many remotes also have a dedicated CC button. Once enabled, the text appears in real time, usually along the bottom of the screen.
CC vs. SDH: What’s the Difference?
If you use streaming services, you’ve probably noticed two different caption options: CC (closed captions) and SDH (subtitles for the deaf and hard of hearing). Both serve the same audience and include the same kinds of content: dialogue, sound effects like [door slams], music descriptions, and speaker identification. The difference is mostly technical. CC is a legacy broadcast format with stricter character limits and different display rules. SDH files are styled and formatted like regular subtitles but carry all the extra information deaf viewers need. On streaming platforms, SDH is the more common format, and it generally looks cleaner on screen.
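To see the difference in the data itself, here is what an SDH cue might look like next to a plain subtitle cue for the same moment. The WebVTT-style timestamps, speaker name, and dialogue are invented for illustration.

```python
# Invented example cues in WebVTT-style syntax, held as Python strings.
# The plain subtitle assumes you can hear the slam and recognize the
# voice; the SDH cue spells both out.
plain_subtitle = """\
00:01:04.000 --> 00:01:06.500
I told you not to come back here.
"""

sdh_cue = """\
00:01:04.000 --> 00:01:06.500
[door slams]
MARCUS: I told you not to come back here.
"""

print(sdh_cue)
```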
How Accurate Are Captions?
Caption quality varies dramatically depending on whether humans or machines produce the captions. The industry’s accepted accuracy threshold is 98%, and human captioners working on pre-recorded content consistently hit that mark. In one large study, human-generated captions on television news scored 99.4% accuracy, sports segments reached 99.6%, and talk shows came in at 99.2%. That translates to roughly three errors per minute of content.
Automated captions have historically been far less reliable. Studies from 2018 to 2021 found AI-generated captions averaging around 96.3% accuracy, which sounds close to 98% but actually means roughly 22 errors per minute compared to three. That gap is noticeable, especially during fast dialogue or when speakers have accents. Sports broadcasts tend to fare worst, with automated accuracy dropping as low as 95.4%.
The gap is closing, though. Newer AI captioning engines tested in 2023 achieved accuracy rates between 99.1% and 99.9%, rivaling human performance. Live captioning remains the trickiest category, since there’s no time to review before the text hits the screen. The FCC added quality standards for captions in 2014, covering accuracy, timing, placement, and completeness.
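Where do percentages like these come from? Accuracy is typically reported as one minus the word error rate (WER), which counts substitutions, deletions, and insertions against a reference transcript. Here is a minimal sketch of that calculation; the sample sentences are made up.

```python
# Minimal word-error-rate (WER) sketch. Reported accuracy figures like
# "99.4%" are typically 1 - WER, where WER counts substitutions,
# deletions, and insertions against the reference transcript.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

reference = "the mayor announced the new budget on tuesday"
hypothesis = "the mayor announced a new budget on tuesday"
print(f"accuracy: {(1 - wer(reference, hypothesis)):.1%}")  # 87.5%
```

One wrong word out of eight already drops this toy example to 87.5%, which is why the difference between 96.3% and 99.4% is much larger than it sounds.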
Streaming Platform Features
Streaming services have made TV more accessible than traditional broadcast in several ways. Most platforms let you turn captions on globally so they appear automatically for every show. You can also adjust caption appearance, choosing larger text, high-contrast colors, or different fonts. These customization options mirror the CTA-708 standard, but they tend to live in a streaming app’s settings menu rather than buried in a TV’s system menus, which makes them easier to find.
Many services also offer multiple caption language tracks for the same content, so a deaf viewer who is more comfortable reading in Spanish or French can select that option regardless of the show’s spoken language. Some platforms distinguish between “full” subtitles (dialogue only, meant for hearing viewers watching foreign-language content) and SDH tracks that include sound descriptions. If you’re deaf or hard of hearing, always look for the SDH or CC option rather than standard subtitles, since the standard track won’t tell you about off-screen sounds, music cues, or which character is speaking.
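Under the hood, streaming manifests tag SDH tracks so a player can tell them apart from plain subtitles. In HLS, for example, SDH renditions carry accessibility CHARACTERISTICS strings. The sketch below shows how a player might prefer such a track; the track list itself is made up.

```python
# Sketch of how a player might pick an SDH track from a streaming
# manifest. In HLS, SDH subtitle renditions are tagged with the
# accessibility characteristics below; the track data is invented.
SDH_TRAITS = {
    "public.accessibility.transcribes-spoken-dialog",
    "public.accessibility.describes-music-and-sound",
}

tracks = [
    {"lang": "en", "name": "English", "characteristics": set()},
    {"lang": "en", "name": "English (SDH)", "characteristics": SDH_TRAITS},
    {"lang": "es", "name": "Español (SDH)", "characteristics": SDH_TRAITS},
]

def pick_sdh(tracks, lang):
    # Prefer a track that carries both accessibility traits.
    for t in tracks:
        if t["lang"] == lang and SDH_TRAITS <= t["characteristics"]:
            return t
    return None

print(pick_sdh(tracks, "es")["name"])  # Español (SDH)
```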
Sign Language Interpretation on Screen
For viewers whose primary language is American Sign Language, English captions are effectively text in a second language: ASL has its own grammar and syntax that differ significantly from written English. Sign language interpreters appear on some broadcasts, particularly during emergency announcements and government press conferences, displayed in a small window in the corner of the screen.
The FCC encourages broadcasters and video providers to keep sign language interpreters visible on screen at all times during emergency information. Outside of emergencies, though, ASL interpretation on mainstream television remains uncommon. Some public broadcasting and news programs offer it, and a handful of dedicated channels or online streams provide ASL-interpreted content.
Researchers have been developing AI-powered signing avatars, 3D animated figures that could theoretically translate any broadcast into sign language in real time. So far, the deaf community has found these avatars difficult to understand. The movements look unnatural, facial expressions are limited, and there are noticeable delays between the spoken content and the signed version. More recent approaches use deep learning to generate realistic signing videos based on actual human reference footage, which deaf viewers tend to prefer over cartoonish avatars.
Hearing Aids and TV Streamers
For people who are hard of hearing rather than profoundly deaf, direct audio streaming can make a huge difference. Devices like TV streamers plug into a television’s audio output and transmit sound wirelessly to Bluetooth-enabled hearing aids. The hearing aids essentially function as personalized wireless headphones, delivering the TV audio directly into your ears at whatever volume and settings your audiologist has programmed. You can watch at a comfortable level without blasting the volume for everyone else in the room.
These streamers use a low-energy wireless protocol that minimizes audio delay, so the sound stays synced with what’s happening on screen. Setup is typically plug-and-play: connect the streamer to your TV, and compatible hearing aids pair automatically when you’re in range. Many hard-of-hearing viewers use this technology alongside captions for the best experience.
Haptic and Wearable Technology
A newer category of assistive technology translates sound into physical sensation. Haptic devices use vibration motors embedded in wearable vests, wristbands, or seat cushions to let deaf viewers feel the audio dimension of what they’re watching. A sudden explosion produces a strong vibration, background music creates a softer rhythmic pattern, and dialogue might pulse differently depending on the speaker.
Some movie theaters already use vibrating seats as part of “4D” experiences, and researchers are working on bringing similar technology into homes. Wearable haptic systems with vibration motors embedded directly in clothing can synchronize tactile feedback with on-screen action. One research team built a system specifically for live sports that translates the on-field movement of the ball in soccer matches into vibration patterns, helping viewers follow the action through touch. These devices are still largely experimental or niche products rather than mainstream consumer electronics, but they represent a fundamentally different approach: instead of converting audio to text, they convert it to touch.
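The core signal-processing idea is simple even if real products are far more sophisticated: measure how loud the audio is from moment to moment and drive a motor proportionally. This toy sketch, with invented audio data, shows the basic mapping.

```python
# Toy sketch of the audio-to-touch idea: split audio into frames,
# measure loudness (RMS), and map it to a vibration-motor duty cycle.
# Real haptic systems do far more (frequency bands, event detection);
# this shows only the basic mapping, and all the audio is fake.
import math

def frame_rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def to_motor_intensity(rms, max_rms=1.0):
    # Clamp to 0..255, a typical PWM range for a small vibration motor.
    return min(255, int(255 * rms / max_rms))

# Fake audio: quiet dialogue followed by an explosion-like burst.
quiet = [0.05 * math.sin(i / 3) for i in range(480)]
loud = [0.9 * math.sin(i / 2) for i in range(480)]

for label, frame in [("dialogue", quiet), ("explosion", loud)]:
    print(label, to_motor_intensity(frame_rms(frame)))
```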
Augmented Reality Caption Glasses
Some deaf viewers prefer not to have captions displayed on the TV screen itself, whether because other household members find them distracting or because they want captions positioned differently. AR caption glasses overlay text directly in the wearer’s field of vision. Products like XRAI Glass use speech recognition to generate real-time captions that appear on the lens of lightweight glasses, so only the person wearing them sees the text. This approach works not just for TV but for any audio environment, making it a multipurpose tool for deaf and hard-of-hearing users.
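XRAI’s own pipeline is proprietary, but the basic listen-transcribe-display cycle behind any caption glasses can be sketched with the open-source SpeechRecognition package. Everything here, including the five-second phrase chunks and the print stand-in for the lens display, is an illustrative choice rather than how any shipping product works.

```python
# Generic live-captioning loop using the open-source SpeechRecognition
# package (pip install SpeechRecognition; the Microphone class also
# needs PyAudio). This is NOT how XRAI Glass works internally; it just
# illustrates the listen -> transcribe -> display cycle.
import speech_recognition as sr

recognizer = sr.Recognizer()

def caption_loop(show_on_lens=print):
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        while True:
            # Grab up to five seconds of speech, then transcribe it.
            audio = recognizer.listen(source, phrase_time_limit=5)
            try:
                text = recognizer.recognize_google(audio)
                show_on_lens(text)  # a real device renders this on the lens
            except sr.UnknownValueError:
                pass  # nothing intelligible in this chunk

if __name__ == "__main__":
    caption_loop()
```

As with automated TV captions, the glasses inherit whatever error rate and delay the underlying speech recognizer has, so accuracy and latency remain the limiting factors.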