Can We Create New Senses for Humans? Science Says Yes

Yes, we can create new senses for humans, and several working technologies already do exactly that. The core insight behind this field is surprisingly simple: the brain doesn’t “see” or “hear” the world directly. It receives electrochemical signals from sensory organs and figures out what to do with them. That means if you feed the brain new types of information through the skin, tongue, or ears, it will eventually learn to interpret that data as a genuine perception.

Why the Brain Can Accept New Inputs

Your brain is essentially a general-purpose computing device. It takes in signals from sensory organs, looks for patterns, and builds a model of the world. It doesn’t particularly care where those signals come from. As long as the data reflects something real about the environment, the brain works to decode it. This principle, demonstrated over decades of neuroscience research, is what makes artificial senses possible.

The earliest proof came in 1969, when researchers converted a camera’s video feed into patterns of movement on a grid of small poking tips pressed against the backs of blind volunteers. Over days of training, the subjects learned to distinguish horizontal from vertical lines, identify simple objects, and even recognize faces, all through touch on their skin. That experiment opened the door to an entire field called sensory substitution: routing one type of sensory information through a completely different channel.

Devices That Exist Today

Modern sensory devices have moved well beyond lab prototypes. A wristband developed by neuroscientist David Eagleman’s team captures sound from the environment, breaks it into frequency components (mimicking what the inner ear does), and maps those frequencies onto patterns of vibration across the skin. Deaf users wear it and, with practice, learn to differentiate specific sounds: a dog barking, a doorbell, someone calling their name. After six months, one user reported that he no longer felt a buzz that he then had to consciously interpret. Instead, he said, “I perceive the sound in my head.” The vibrations had become a seamless sense.
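
To make that mapping concrete, here is a minimal Python sketch of the general idea, not the actual product’s firmware; the motor count and band-pooling scheme are assumptions:

```python
import numpy as np

N_MOTORS = 4  # assumed motor count; real devices vary

def sound_to_motor_intensities(frame: np.ndarray) -> np.ndarray:
    """Map one mono audio frame to per-motor vibration intensities in [0, 1]."""
    # Window the frame and take its magnitude spectrum
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Pool the spectrum into one band per motor, crudely mimicking the cochlea
    bands = np.array_split(spectrum, N_MOTORS)
    energy = np.array([band.mean() for band in bands])
    # Log-compress so both quiet and loud sounds yield usable patterns
    energy = np.log1p(energy)
    return energy / (energy.max() + 1e-9)
```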

A tongue-based device called BrainPort takes a different approach. It uses a small grid of electrodes placed on the tongue that deliver tiny electrical pulses, described by users as feeling like Pop Rocks candy. A camera feed is translated so that bright pixels create strong stimulation at corresponding points on the tongue, gray pixels create medium stimulation, and dark areas produce nothing. Users learn to “see” shapes and navigate spaces through their tongue. The same device has been adapted for balance: when the head tilts forward, stimulation shifts toward the tip of the tongue, and when it tilts back, the signal moves rearward. This gives people with damaged vestibular systems a new way to stay upright.
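
The pixel-to-pulse translation can be sketched in a few lines, assuming a square electrode grid and a simple block-averaging scheme; the real BrainPort signal chain is more sophisticated:

```python
import numpy as np

GRID = 20  # assumed electrode count per side (a 20 x 20 array on the tongue)

def frame_to_electrodes(gray: np.ndarray) -> np.ndarray:
    """Map a grayscale camera frame (H x W, 0-255) to electrode levels in [0, 1]."""
    h, w = gray.shape
    # Crop so the image divides evenly, then block-average down to the grid
    gray = gray[: h - h % GRID, : w - w % GRID]
    cells = gray.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
    # Bright pixels -> strong pulses, gray -> medium, dark -> nothing
    return cells / 255.0
```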

For converting vision into sound, a system called the vOICe turns a 64-by-64 pixel image into an audio “soundscape.” Vertical position maps to pitch (higher objects sound higher), horizontal position maps to time (the image scans left to right), and brightness maps to volume. Blind users who train extensively with this system can learn to navigate rooms and identify objects by listening to these soundscapes.
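
A toy version of that mapping, with an assumed frequency range and scan duration (the real vOICe uses its own parameters), might look like this:

```python
import numpy as np

def image_to_soundscape(img: np.ndarray, sr: int = 22_050,
                        scan_seconds: float = 1.0) -> np.ndarray:
    """img: 64 x 64 grayscale, values in [0, 1]. Returns mono audio samples."""
    n_rows, n_cols = img.shape
    samples_per_col = int(sr * scan_seconds / n_cols)
    t = np.arange(samples_per_col) / sr
    # Row 0 is the top of the image, so it gets the highest pitch
    freqs = np.geomspace(5000.0, 500.0, n_rows)
    tones = np.sin(2 * np.pi * np.outer(freqs, t))  # one sine per image row
    columns = [img[:, c] @ tones for c in range(n_cols)]  # brightness sets volume
    return np.concatenate(columns) / n_rows               # normalize amplitude
```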

Senses That Don’t Exist in Nature

Sensory substitution replaces a lost sense. But the more radical question is whether you can give humans entirely new perceptions, ones human biology has never supported. The answer, again, is yes.

A vibrating belt ringed with motors can give you a sense of magnetic north. The motor pointing north buzzes continuously. As you turn, the buzzing shifts around your waist. Wearers report that after enough time, they develop an intuitive, always-on awareness of compass direction, something birds and sea turtles have but humans never did.
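
The control logic is simple enough to sketch; the motor count and layout here are assumptions:

```python
N_MOTORS = 8  # motors evenly spaced around the waist; motor 0 faces forward

def motor_for_north(heading_deg: float) -> int:
    """heading_deg: compass heading the wearer faces, clockwise from north.
    Returns the index of the belt motor currently pointing north."""
    # Facing east (90 deg), north sits 270 deg clockwise from your front
    relative = (360.0 - heading_deg) % 360.0
    return round(relative / (360.0 / N_MOTORS)) % N_MOTORS
```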

Researchers have also connected lidar sensors (the same depth-sensing technology used in self-driving cars) to a vibrating vest. The vest buzzes to tell the wearer how close they are to walls, chairs, or people, and which direction to move to reach a target. This is spatial awareness far beyond what human senses normally provide.
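
A hedged sketch of the core mapping, with an assumed maximum range and motor layout, shows how little logic the concept requires:

```python
N_MOTORS = 8  # assumed ring of motors around the vest; 0 = straight ahead

def distance_to_intensity(distance_m: float, max_range_m: float = 4.0) -> float:
    """Nearer obstacles buzz harder; beyond max range, silence."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m  # linear ramp toward full strength

def motor_for_bearing(bearing_deg: float) -> int:
    """Pick the motor closest to the obstacle's bearing (clockwise from front)."""
    return round((bearing_deg % 360.0) / (360.0 / N_MOTORS)) % N_MOTORS
```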

Electromagnetic field detection is another frontier. A purpose-built device can decompose the frequency spectrum of alternating-current signals in the environment and present the intensity of different bands through different vibrating motors. This would let you “feel” the invisible electromagnetic fields radiating from power lines, appliances, and electronics.
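
One plausible way to measure a single band cheaply, for example the 50 or 60 Hz hum of mains power, is the Goertzel algorithm. This sketch assumes a generic sampled coil signal rather than any particular device:

```python
import math

def goertzel_power(samples: list[float], sample_rate: float,
                   target_hz: float) -> float:
    """Return the signal power at target_hz (e.g., 60.0 for mains hum)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)        # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2
```

A device like the one described would run several of these filters in parallel, one per vibrating motor.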

Perhaps most striking, researchers have mapped infrared and ultraviolet light intensity onto vibration patterns, letting users pick up on information in light ranges that are completely invisible to the human eye, without any gene editing or retinal surgery.

Infrared Vision Through Contact Lenses

A 2025 breakthrough took a different approach entirely. Neuroscientists and materials scientists created contact lenses embedded with nanoparticles that absorb near-infrared light (in the 800 to 1,600 nanometer range, just beyond what human eyes detect) and convert it into visible wavelengths. Unlike night-vision goggles, these lenses require no power source. In human trials, participants wearing the lenses accurately detected flashing infrared signals and could perceive the direction infrared light was coming from.

The team also developed a wearable glasses-based system using the same nanoparticle technology for higher-resolution infrared perception. An earlier version involved injecting these nanoparticles directly into mouse retinas, which also worked, but the contact lens approach is far less invasive.

Implants That Create Magnetic Sense

The biohacking community has taken a more direct route. Small magnets implanted under the skin of the fingertip vibrate in the presence of electromagnetic fields or ferromagnetic objects. The fingertip’s nerve endings, most sensitive to vibrations in the 200 to 300 Hz range, pick up these tiny movements and relay them to the brain. People with the implants have described the result as “an indescribable new sense.” Researchers have tested these implants for amplitude detection, frequency discrimination, and even ultrasonic range sensing, connecting the implant to external devices that translate sonar data into magnetic pulses the finger can feel.
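
One way such a translation could work can be sketched under stated assumptions: the carrier frequency targets the fingertip’s sensitive band mentioned above, and the distance-to-pulse-rate mapping is purely hypothetical:

```python
def sonar_to_pulse(distance_m: float) -> tuple[float, float]:
    """Return (carrier_hz, pulses_per_second) for a coil driving the magnet."""
    carrier_hz = 250.0  # inside the fingertip's 200-300 Hz sensitive band
    # Closer obstacles -> faster pulse trains, like a car's parking sensor
    pulses_per_second = max(1.0, 10.0 - 2.0 * distance_m)
    return carrier_hz, pulses_per_second
```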

How Long It Takes to Learn a New Sense

The brain doesn’t instantly know what to do with unfamiliar input. Training matters. In one study, sighted participants using a vision-to-sound device reached about 63% accuracy at identifying objects after just 75 minutes of self-guided practice. That’s well above chance but far from fluent. Researchers noted that longer training would likely push accuracy much higher. The deaf wristband user who reported perceiving sound “in his head” had been using the device for six months. The trajectory seems consistent across technologies: basic discrimination comes within hours, functional use develops over weeks, and something approaching genuine perceptual fluency takes months of regular use.

There are bandwidth limits. The skin can perceive vibrations between roughly 5 and 400 Hz, and vibration points need to be spaced at least 8 centimeters apart to avoid interference. That means skin-based devices will never match the resolution of biological sight or hearing. But they don’t need to. They just need to carry enough information for the brain to extract useful patterns.
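
A back-of-the-envelope calculation, assuming rough body dimensions, shows what those limits imply for channel count:

```python
usable_back_cm2 = 60 * 40      # assumed usable skin: ~60 cm x 40 cm of back
site_area_cm2 = 8 * 8          # one motor per 8 cm x 8 cm patch (from above)
n_sites = usable_back_cm2 // site_area_cm2
print(n_sites)                 # roughly 37 independent vibration points
```

A few dozen independent channels is tiny next to the eye’s millions of photoreceptors, yet it is evidently enough for the pattern extraction the brain performs.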

Does It Actually Feel Like a New Sense?

This is one of the most debated questions in the field. When a blind person uses a vision-to-sound device for years and begins automatically “seeing” colors when they hear certain tones, is that genuine visual experience or just very fast cognitive interpretation? Paul Bach-y-Rita, the pioneer of sensory substitution, argued that if someone without functioning eyes can perceive detailed spatial information, correctly locate it, and respond like a sighted person, calling it “vision” is justified.

But the subjective experience remains hard to pin down. At least one long-term user of a sound-based vision device reported that visual percepts, including colors, began appearing automatically in response to the auditory stimuli. That level of automaticity suggests the brain isn’t consciously translating anymore. It’s perceiving. Whether that perception is truly “visual,” truly “auditory,” or something entirely new that doesn’t fit either category is a question neuroscience can’t yet fully answer. What’s clear is that the brain treats the input as real sensory data and acts on it accordingly.

Direct Brain Stimulation

All the technologies above work by routing information through existing sensory channels: skin, tongue, ears. The next frontier involves skipping those channels entirely and writing data directly into the brain. Neuralink’s brain-machine interface, for instance, was designed with the capability to electrically stimulate neurons on every one of its thousands of channels, not just read from them. The stated goal includes providing artificial senses of touch and body position to people controlling robotic limbs. No demonstrations of sensory input have been published yet, but the hardware architecture is built for it. Direct cortical stimulation could theoretically deliver sensory experiences with far higher bandwidth than the skin or tongue can support, though the technology remains in its earliest clinical stages.

The bottom line is that creating new human senses isn’t a theoretical possibility. It’s happening now, across a spectrum from contact lenses and wristbands to subdermal implants. The brain’s willingness to absorb and interpret novel information streams turns out to be remarkably broad. The limiting factor isn’t neuroscience. It’s engineering: building devices that are comfortable, high-resolution, and practical enough for everyday life.