Cochlear implants have been around for nearly 70 years, depending on where you draw the line. The first direct electrical stimulation of the auditory nerve happened in 1957, the first wearable device came in 1972, and the FDA approved the first commercial implant in 1984. That timeline matters because it shows how long it took for the technology to move from a lab experiment to something that could genuinely restore hearing.
The 1957 Experiment That Started It All
On February 25, 1957, two French researchers, André Djourno and Charles Eyriès, placed an electrode in contact with a segment of a patient's auditory nerve. The device was tiny: 2.5 centimeters long and 3.5 millimeters in diameter. An insulated stainless steel wire touched the nerve, a bare wire embedded in a temple muscle completed the circuit, and a small implanted coil picked up signals by induction from an external source.
The patient could detect sounds, though nothing close to normal hearing. What mattered was the proof of concept. Djourno and Eyriès concluded that electrical stimulation of the inner ear “would without doubt allow the construction of a possible mechanism for electrical hearing.” They were right, but it would take decades to get there.
The First Wearable Device: 1972
William House, an otologist in Los Angeles, spent years developing a practical cochlear implant. In 1972, he and engineer Jack Urban created the first wearable signal processor that patients could take home, paired with a single-channel platinum electrode and an induction-coil link. This became the prototype for everything that followed.
Single-channel implants used one electrode to deliver electrical signals to the auditory nerve. Patients could detect environmental sounds and the rhythm of speech, which helped with lip reading, but understanding conversation through sound alone was extremely limited. Still, the device was a genuine medical product that patients could wear outside a laboratory, a leap that separated House’s work from the earlier experiments.
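To see why one channel conveys rhythm but not speech understanding, it helps to look at what a single electrode can carry. The toy sketch below is a simplified model, not House's actual circuit, and the function name and cutoff frequency are illustrative: it rectifies the audio and low-pass filters it, leaving only the slow amplitude envelope, which tracks the on/off rhythm of syllables while discarding the spectral detail that distinguishes one speech sound from another.

```python
import numpy as np

def single_channel_envelope(signal, fs, cutoff_hz=50.0):
    """Toy model of single-channel processing: full-wave rectify the
    audio, then low-pass filter to keep only the slow amplitude
    envelope. The envelope carries rhythm and loudness, but all
    spectral detail (which distinguishes vowels and consonants) is
    lost -- hence support for lip reading, not open-set speech."""
    rectified = np.abs(signal)
    # Simple moving-average low-pass: window spans one cutoff period.
    window = max(1, int(fs / cutoff_hz))
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

# Example: a 300 Hz tone pulsed on and off at a syllable-like 4 Hz.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
audio = np.sin(2 * np.pi * 300 * t) * (np.sin(2 * np.pi * 4 * t) > 0)
env = single_channel_envelope(audio, fs)
print(env.max(), env.min())  # envelope tracks the 4 Hz on/off rhythm
```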
Multi-Channel Implants Changed Everything
The real breakthrough came when researchers figured out how to stimulate different parts of the inner ear independently. Graeme Clark, an Australian surgeon, led the development of the multi-channel cochlear implant through the late 1970s. His team discovered three things that made the technology viable. First, coded signals could be transmitted through intact skin using radio waves, eliminating the infection risk of wires passing through the scalp. Second, electrical currents could be targeted to specific groups of nerve cells with the right electrode placement. Third, electrodes inside the cochlea could be positioned opposite the cells responsible for transmitting the mid-to-low frequencies most important for understanding speech.
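That third point rests on the cochlea's tonotopic layout: each position along the basilar membrane responds best to a particular frequency, low frequencies at the apex and high frequencies at the base. The Greenwood function is a standard model of this map; the sketch below uses its published human parameters, with electrode positions that are illustrative rather than taken from any actual device.

```python
import numpy as np

# Greenwood function for the human cochlea: maps a position along the
# basilar membrane (x = 0 at the apex, 1 at the base) to the frequency
# that most strongly excites the nerve cells there. Standard human
# parameters from Greenwood (1990).
A, a, k = 165.4, 2.1, 0.88

def characteristic_frequency_hz(x):
    """Characteristic frequency at fractional distance x from the apex."""
    return A * (10 ** (a * x) - k)

# Illustrative positions (fractions of cochlear length, not real device
# specifications): an array inserted partway into the cochlea sits
# opposite the mid-to-low frequency region.
for x in (0.2, 0.4, 0.6, 0.8):
    print(f"x = {x:.1f} -> ~{characteristic_frequency_hz(x):,.0f} Hz")
# x = 0.4 and 0.6 land near 1 kHz and 3 kHz, squarely in the band
# that carries most speech information.
```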
Early multi-channel designs tried to mimic the way the auditory system naturally processes sound, stimulating many electrodes at once. That didn’t work well because electrical fields from neighboring electrodes overlapped, making it hard to control what the patient actually heard. Clark’s team shifted strategy, focusing instead on identifying the most critical components of speech and maximizing how clearly those specific signals reached the nerve. This approach, prioritizing clarity over completeness, is still the foundation of modern implant design.
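Descendants of that strategy survive in today's "n-of-m" coding schemes, such as SPEAK and ACE, which in each analysis frame stimulate only the handful of frequency bands carrying the most energy. The sketch below shows the selection step in simplified form; the equal-width band layout, frame size, and function name are illustrative, not any manufacturer's implementation.

```python
import numpy as np

def select_spectral_peaks(frame, n_select=8, n_bands=22):
    """One analysis frame of an n-of-m strategy sketch: estimate the
    energy in n_bands frequency bands, then keep only the n_select
    strongest. Only the electrodes for those bands are stimulated,
    so the clearest speech cues win out over a complete but blurry
    picture of the whole spectrum."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Split FFT bins into contiguous bands (real devices use
    # logarithmically spaced filters; equal-width bands keep the
    # sketch simple).
    bands = np.array_split(spectrum, n_bands)
    energies = np.array([b.sum() for b in bands])
    # Indices of the n_select highest-energy bands -> active electrodes.
    active = np.argsort(energies)[-n_select:]
    return sorted(active.tolist()), energies

# Example: a frame dominated by two tones should light up the bands
# containing them.
fs = 16000
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
active, _ = select_spectral_peaks(frame)
print("stimulated electrodes (band indices):", active)
```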
FDA Approval and Mainstream Adoption
The FDA approved the first cochlear implant for commercial use in 1984. It was the House/3M single-channel device, cleared for adults with profound deafness. That approval was a turning point: it meant insurance companies could begin covering the procedure, hospitals could offer it as a standard treatment, and the broader medical community started taking the technology seriously.
Multi-channel devices followed quickly and eventually replaced single-channel models entirely. Approval expanded to include children in the 1990s, which was controversial at the time but dramatically changed outcomes for kids born with severe hearing loss. Children implanted early enough, typically before age two or three, could develop spoken language skills that were difficult or impossible to achieve with hearing aids alone.
How Candidacy Has Expanded
For most of cochlear implant history, you needed to have profound hearing loss in both ears to qualify. The technology was a last resort. That’s changed significantly. Current guidelines call for providers to evaluate each ear individually rather than looking at a patient’s overall hearing ability. This means people with single-sided deafness or asymmetric hearing loss (where one ear is substantially worse than the other) can now be candidates for implantation in the weaker ear.
This shift reflects how much the technology has improved. Modern implants can optimize hearing in a single ear, which is useful even when the other ear still works reasonably well. The result is that far more people qualify today than would have a decade ago, and the threshold for “severe enough” continues to move.
From Experiment to Standard of Care
The arc from 1957 to today covers roughly 68 years. For the first 15 of those years, cochlear implants existed only in research settings. The wearable single-channel era lasted about a decade. Multi-channel devices dominated from the mid-1980s onward, and the technology has been refined continuously since then, with better speech processing, smaller external components, and compatibility with wireless audio streaming. More than a million people worldwide now use cochlear implants, a number that would have seemed impossible when Djourno placed that first electrode on a nerve in a Paris operating room.