The uncanny valley is real, but not in the simple, universal way most people assume. The original idea, proposed by Japanese robotics professor Masahiro Mori in 1970, suggests that as a robot or digital face becomes more humanlike, our comfort with it rises steadily until it hits a tipping point. Past that point, subtle imperfections trigger a sharp dip into revulsion, and comfort climbs back up only for faces that are truly indistinguishable from real ones. Decades of research have since tested this idea, and the picture is more nuanced than a single valley on a graph.
What the Evidence Actually Shows
A major review of empirical studies published in Frontiers in Psychology found that the data does not consistently support the uncanny valley as Mori originally described it. The straightforward version of the hypothesis, that there’s a reliable, predictable dip in comfort at a specific level of human likeness, didn’t hold up across experiments. The effect appears under some conditions but not others, varies from person to person, and seems driven by different factors than a simple likeness scale would predict.
What the evidence does support is something called the perceptual mismatch hypothesis. The discomfort isn’t triggered by how humanlike something looks overall. It’s triggered when human and non-human features are mixed in conflicting ways: a face with realistic skin but dead eyes, or a body that looks human but moves mechanically. When your brain detects features that don’t belong together, that’s when the eeriness kicks in. A cartoon character with exaggerated proportions doesn’t bother you because everything is consistently non-human. A hyper-realistic digital face with slightly wrong lip movement does, because the signals conflict.
What Happens in the Brain
Brain imaging studies help explain why mismatched features feel so unsettling. In one study, researchers showed participants videos of a human, a clearly mechanical robot, and an android (a robot designed to look human). The android triggered significantly stronger activity in the anterior intraparietal sulcus, a brain region involved in processing actions and predicting how things should move based on how they look. Responses to the human and the robot were similar to each other, but the android, which looked human yet moved unnaturally, generated what neuroscientists describe as a prediction error. Your brain expects something that looks human to move like a human. When it doesn’t, the mismatch registers as a kind of alarm.
Activity was also observed in areas near the amygdala, a region linked to threat detection and emotional responses. This fits with two psychological theories about why the effect exists at all. The pathogen avoidance hypothesis suggests the response evolved to help us steer clear of sick individuals whose appearance seems “off.” The mortality salience hypothesis takes a darker angle: almost-human faces may remind us of corpses, triggering a deep, reflexive discomfort with death. Neither theory has been definitively proven, but both align with the brain data showing that the uncanny valley, when it occurs, activates threat-related processing.
Not Everyone Experiences It
The uncanny valley isn’t universal. A study of 255 participants across three age groups found that younger adults (18 to 39) and middle-aged adults (40 to 59) showed the classic dip in comfort with near-human robots, but older adults (60 to 87) did not. Older participants actually preferred more humanlike robots regardless of how realistic they looked, showing no valley at all. This has practical implications for designing assistive robots for elderly care, a growing field worldwide.
People on the autism spectrum also tend to experience the effect differently. A study comparing 32 individuals with autism to 47 neurotypical participants found that the neurotypical group showed the uncanny valley effect clearly, while those with autism showed a much less distinct response. The researchers traced this to differences in how the two groups process faces. Individuals with autism tended to focus on individual facial features (local information) rather than the overall impression of the face (global information). Since the uncanny valley seems to depend on your brain rapidly categorizing something as “almost but not quite human,” processing faces feature by feature rather than holistically may prevent that unsettling mismatch from registering.
AI Faces May Be Crossing the Valley
For years, computer-generated faces reliably triggered uncanny responses. That’s changing fast. A 2025 study tested AI-generated face images created with current tools and found that participants could not reliably distinguish them from real photographs. This held true even for images of celebrity faces, where participants had real-world familiarity to draw on. Providing additional real photos for comparison during the task offered only limited improvement, and prior familiarity with the celebrity produced only modest gains in detection.
This suggests that the latest generation of AI-generated faces may have effectively crossed the uncanny valley for still images. The key threshold, according to the researchers, is achieving anatomical realism combined with photographic quality. Once a synthetic face clears that bar, most people accept it as genuine. The uncanny valley still applies to faces and figures that fall short of that bar, but the zone of discomfort is narrowing as the technology improves.
How Designers Work Around It
The entertainment and robotics industries have developed practical strategies for avoiding the valley. The core principle is consistency: if you’re making something humanlike, every element needs to match. Realistic skin textures need to pair with realistic proportions. Movements need to match the level of realism in the appearance. When behaviors, appearance, and abilities conflict with each other, that’s where discomfort creeps in.
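The consistency principle can be made concrete with a toy sketch. The model below is purely illustrative, not drawn from any study cited here: it rates each design element on a hypothetical 0-to-1 realism scale and treats the spread between the highest- and lowest-rated features as a rough "mismatch score." Under the perceptual mismatch hypothesis, discomfort tracks that spread, not overall realism, so a fully stylized design and a fully photorealistic design both score low while a mixed design scores high.

```python
# Toy illustration of the consistency principle (hypothetical scale, not from the research).
# Each feature gets a realism rating: 0.0 = cartoonish, 1.0 = photorealistic.

def mismatch_score(features: dict[str, float]) -> float:
    """Spread between the most and least realistic features of a design."""
    ratings = list(features.values())
    return max(ratings) - min(ratings)

stylized = {"skin": 0.20, "eyes": 0.25, "motion": 0.20}   # Pixar-style: low realism, consistent
photoreal = {"skin": 0.95, "eyes": 0.90, "motion": 0.95}  # fully realistic, consistent
mixed = {"skin": 0.95, "eyes": 0.30, "motion": 0.40}      # realistic skin, dead eyes: valley risk

for name, design in [("stylized", stylized), ("photoreal", photoreal), ("mixed", mixed)]:
    print(f"{name}: mismatch = {mismatch_score(design):.2f}")
```

On this toy scale, both consistent designs score 0.05 while the mixed design scores 0.65, mirroring the idea that the valley lives in the conflict between cues rather than in realism itself.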
One common approach is to avoid near-human realism entirely. Pixar characters, for instance, are stylized enough that your brain never tries to categorize them as real people. Their proportions are exaggerated, their skin is slightly cartoonish, and their movements are fluid but clearly animated. This sidesteps the valley completely. The same logic applies to robot design: a robot that looks like a friendly machine rather than a near-human face tends to be better received. The uncanny valley is most dangerous in the narrow band where something is realistic enough to trigger human expectations but not realistic enough to satisfy them.
It’s Real, but It’s Conditional
Interestingly, the effect isn’t limited to humans. A study published in PNAS found that macaque monkeys showed the same pattern when viewing synthetic monkey faces. The monkeys looked longer at real faces and at clearly unrealistic synthetic faces, but avoided looking at realistic synthetic faces. This suggests the uncanny valley may have biological roots that predate human culture, possibly tied to the same threat-detection systems that help social animals identify sick or abnormal group members.
So is the uncanny valley real? Yes, but it’s not a fixed law of perception. It’s a conditional response that depends on perceptual mismatch, varies by age and neurotype, has identifiable brain signatures, and appears to exist in other primates. The original smooth valley on Mori’s graph was more of an intuition than a precise model. What actually triggers the response is more specific: the collision of human and non-human cues in the same face or body, processed by a brain that expects consistency and reacts with discomfort when it doesn’t get it.