What Do Deaf People Use to Hear: Devices & Tech

Deaf and hard-of-hearing people use a range of technologies to perceive sound, from devices that amplify it to implants that bypass damaged parts of the ear entirely. The right option depends on the type and severity of hearing loss. Some people use one device, others combine several, and many also rely on visual and vibrotactile tools that convert sound into light, text, or physical sensation.

Hearing Aids

Hearing aids are the most common starting point. They work by picking up sound through a microphone, processing it through a digital chip, and delivering an amplified version into the ear canal. Modern digital hearing aids convert sound waves into digital signals, allowing the microchip inside to do more than just make things louder. It can reduce background noise, suppress feedback whistling, and store multiple program settings for different environments.
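For readers curious what that digital processing looks like, here is a minimal Python sketch of the idea. It is a single-band toy, not any manufacturer's actual algorithm: very quiet samples are gated out as crude noise reduction, the rest are amplified by a fixed gain, and an output limiter caps loud peaks. Real hearing aids apply separate gains across many frequency bands, fitted to the wearer's audiogram.

```python
import numpy as np

def process(samples, gain_db=20.0, gate_threshold=0.01):
    """Toy single-band hearing-aid pipeline:
    1. a noise gate zeroes very quiet samples (crude noise reduction),
    2. a fixed gain amplifies the rest,
    3. clipping to [-1, 1] stands in for the output limiter that
       protects the wearer from sudden loud sounds.
    """
    x = np.asarray(samples, dtype=float)
    gated = np.where(np.abs(x) < gate_threshold, 0.0, x)  # suppress low-level noise
    amplified = gated * 10 ** (gain_db / 20)              # convert dB gain to linear
    return np.clip(amplified, -1.0, 1.0)                  # output limiter

# A quiet hiss (0.005) is gated out; a speech-level sample (0.05) is boosted 10x;
# a loud peak (0.5) would exceed the range and gets limited.
out = process([0.005, 0.05, 0.5])
```

Multi-program memory, in this picture, is just a stored set of alternative gain and threshold values the user can switch between.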

The main styles vary by size and placement. Behind-the-ear (BTE) models sit in a small case behind the ear and connect to an earmold through tubing. Receiver-in-canal (RIC) models are a smaller version of the same idea, with a thin wire running into the ear. In-the-ear (ITE) aids fill the outer bowl of the ear, while completely-in-the-canal (CIC) models are tiny enough to sit almost invisibly inside the ear canal. Smaller generally means less powerful, so the choice often reflects the degree of hearing loss.

Since October 2022, the FDA has allowed over-the-counter hearing aids for adults 18 and older with perceived mild to moderate hearing loss. These can be purchased without a prescription or hearing exam, and they let users adjust settings themselves through built-in controls or a smartphone app. For more significant hearing loss, prescription hearing aids fitted by an audiologist remain the standard.

Cochlear Implants

When hearing loss is severe enough that amplifying sound no longer helps, cochlear implants take a fundamentally different approach. Instead of making sound louder, they skip the damaged part of the inner ear and send electrical signals directly to the hearing nerve.

Here’s how it works: a sound processor worn behind the ear picks up audio from the environment and converts it into coded signals. Those signals travel to a receiver surgically placed under the skin behind the ear, which passes tiny electrical currents through a set of electrodes threaded into the cochlea, the snail-shaped structure of the inner ear. The electrodes stimulate the hearing nerve, and the brain interprets those signals as sound. For most people with inner-ear hearing loss, the hearing nerve itself still works fine. It’s the hair cells inside the cochlea that are damaged and can no longer do their job of converting vibrations into nerve signals.
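The coding step can be sketched in a few lines of Python. This toy is loosely in the spirit of real "channel-based" strategies but greatly simplified; the band boundaries and normalization are illustrative, not clinical. The processor splits a short audio frame into frequency bands and maps each band's energy to a stimulation level for one electrode, mirroring how the cochlea itself maps frequency to position.

```python
import numpy as np

def electrode_levels(frame, n_electrodes=8):
    """Toy cochlear-implant coding strategy: split one short audio
    frame into frequency bands, take each band's energy, and map it
    to a stimulation level for one electrode. Low-frequency bands
    drive electrodes deep in the cochlea; high-frequency bands drive
    the ones near its base."""
    spectrum = np.abs(np.fft.rfft(frame))
    # One contiguous band of FFT bins per electrode.
    bands = np.array_split(spectrum, n_electrodes)
    energy = np.array([b.sum() for b in bands])
    # Normalise to [0, 1]; real devices map to per-electrode
    # current ranges fitted individually for each patient.
    peak = energy.max()
    return energy / peak if peak > 0 else energy

# A pure low-frequency tone concentrates energy on the first electrode.
t = np.arange(512) / 16000
levels = electrode_levels(np.sin(2 * np.pi * 300 * t))
```

The key point the sketch captures is that the implant never reconstructs the sound wave itself; it sends a coarse, band-by-band summary, which is part of why recipients describe the result as electronic at first.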

The results are significant but not uniform. A large review of over 100 studies found that average word perception improved from about 8% before surgery to 54% afterward. Sentence understanding in quiet environments reached an average of 74%. Among adults who lost hearing after learning to speak, 82% showed meaningful improvement in speech perception. For those who were deaf before acquiring language, the success rate was lower, around 53%, but still substantial. Cochlear implants don’t restore normal hearing. Many recipients describe the sound as electronic or robotic at first, with the brain gradually adapting over weeks and months.

Bone-Anchored Hearing Systems

Some types of hearing loss happen not in the inner ear but in the outer or middle ear, where sound waves physically can't pass through as they should. Chronic ear infections, structural malformations, or conditions like cholesteatoma can block the normal air-conduction pathway. For these situations, bone-anchored hearing systems offer an alternative route.

A bone-anchored device is surgically attached to the skull bone behind the ear. It picks up sound and converts it into vibrations that travel through the bone directly to the cochlea, completely bypassing the outer and middle ear. This is also used for single-sided deafness, where one ear has normal hearing and the other has little to none. The device on the deaf side picks up sound and routes it through the skull to the working ear.

Auditory Brainstem Implants

For a small number of people, neither hearing aids nor cochlear implants are an option. If the hearing nerve itself is missing, damaged, or nonfunctional, there’s no pathway for signals to reach the brain from the inner ear. Auditory brainstem implants (ABIs) address this by placing an electrode paddle directly on the brainstem’s hearing center, called the cochlear nucleus complex. The implant bypasses both the inner ear and the auditory nerve entirely, stimulating the brain’s sound-processing area with electrical signals. ABIs are most commonly used in people with neurofibromatosis type 2 (a condition that causes tumors on the hearing nerves) or in children born without a developed auditory nerve.

Assistive Listening Devices

Even with a hearing aid or implant, certain environments make it hard to catch every word. Background noise in a restaurant, distance from a speaker in a lecture hall, or poor acoustics in a theater can all degrade the signal. Assistive listening devices bridge that gap by delivering sound more directly.

FM systems are widely used in classrooms. A teacher wears a small clip-on microphone, and the audio is transmitted wirelessly to a receiver connected to the student's hearing device. The same technology works for adults in meetings, lectures, or houses of worship. Hearing loop systems, found in some theaters, museums, airports, and even taxi cabs, use an electromagnetic signal picked up by a telecoil, a small coil of copper wire built into many hearing aids. When you're in a looped room and switch your hearing aid to the telecoil setting, the speaker's voice streams directly into your device without background noise.

Visual and Vibrotactile Alerts

Sound isn’t just about conversation. It’s also how most people know someone is at the door, the smoke alarm is going off, or severe weather is approaching. Deaf and hard-of-hearing people replace these audio cues with visual and physical ones.

Strobe-light units flash brightly when a smoke alarm triggers. Bed shakers slip under a pillow or mattress and vibrate to wake someone during a nighttime alarm. Combination systems that pair strobes with bed shakers are considered the most reliable setup. Smart video doorbells send phone notifications with a video preview. Whole-home alert receivers can flash lights in every room when the phone rings, the doorbell sounds, or a security system activates. Smartphones themselves offer LED flash alerts, customizable vibration patterns, and live captioning for calls.

For emergencies, Wireless Emergency Alerts push notifications directly to phones with vibration and flash patterns, and NOAA Weather Radios with visual or vibrating alerts provide backup when internet or power goes down.

Live Captioning and Speech-to-Text

Real-time captioning has improved dramatically. Automatic speech recognition systems in 2025 achieve word error rates as low as 3 to 7% on clear audio with a single speaker, a major leap from the 15 to 20% error rates that were common just five years ago. That puts AI captioning close to the accuracy of trained human stenographers, who consistently hit error rates below 2%.
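Those figures come from a standard calculation: word error rate is the word-level edit distance between the machine transcript and a reference transcript (substitutions plus insertions plus deletions), divided by the number of words in the reference. A minimal Python implementation:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance
    (substitutions + insertions + deletions) divided by the
    number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)

# One substitution ("quick" -> "quack") in a five-word reference: WER = 0.2, i.e. 20%.
wer = word_error_rate("the quick brown fox jumps", "the quack brown fox jumps")
```

A 5% word error rate, then, means roughly one word in twenty is wrong, which is why even small percentage improvements matter for following a live conversation.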

Many captioning providers now use a hybrid model where AI generates the initial text and a human editor corrects errors in real time. About 58% of enterprise captioning services use this approach, achieving error rates below 2.5%. Live captions are built into video calls, streaming platforms, and smartphone accessibility settings, making them one of the most widely available tools for following spoken conversation.

Haptic Devices That Turn Sound Into Touch

An emerging category of technology translates sound into vibrations felt on the skin. Vibrotactile devices worn on the wrist or forearm convert audio frequencies into patterns of physical sensation. Research at Georgetown University and George Washington University found that the way sound is translated matters enormously. When speech was converted into smooth, fluid vibrations that mimicked the natural rhythm of spoken language, the brain’s auditory speech system activated after training, meaning participants’ brains processed the vibrations similarly to how they would process heard speech. When the same words were broken into choppy, distinct pulses (more like Morse code), that effect disappeared. These devices are still relatively new, but they offer a potential way for profoundly deaf individuals to perceive speech through touch rather than sound.
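The contrast the researchers tested can be illustrated in a few lines of Python. This is a toy, not the study's actual signal chain: the smooth version drives vibration intensity with the audio's amplitude envelope, preserving speech's natural rhythm, while the choppy version hard-thresholds that same envelope into Morse-code-like on/off pulses.

```python
import numpy as np

def smooth_vibration(audio, window=64):
    """Smooth encoding: a moving-average amplitude envelope of the
    audio, so vibration intensity rises and falls fluidly with the
    rhythm of the speech signal."""
    return np.convolve(np.abs(audio), np.ones(window) / window, mode="same")

def choppy_vibration(audio, window=64, threshold=0.1):
    """Choppy encoding: the same envelope collapsed into distinct
    on/off pulses, discarding the continuous rhythm."""
    env = smooth_vibration(audio, window)
    return np.where(env > threshold, 1.0, 0.0)
```

The difference is visible in the outputs: the smooth encoding is a continuous range of intensities, while the choppy one carries only two levels, which is the distinction that made the brain's speech system engage in one condition and not the other.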