What Is the ELIZA Effect? How AI Fools Your Brain

The ELIZA effect is the tendency for people to unconsciously attribute human-like understanding, feelings, or intelligence to computer programs, even when those programs have no capacity for any of it. The term comes from ELIZA, a simple chatbot built in the 1960s that mimicked a psychotherapist by rephrasing users’ own statements as questions. Despite being a basic pattern-matching program, ELIZA convinced some users they were having a genuine therapeutic conversation. That same psychological pull is now far more powerful in the age of ChatGPT and voice-based AI assistants.

Where the Name Comes From

ELIZA was created in 1966 by MIT computer scientist Joseph Weizenbaum. The program worked by scanning a user’s typed input for keywords, then plugging those words into pre-written templates. If you typed “I’m feeling sad about my mother,” ELIZA might respond, “Tell me more about your mother.” There was no comprehension behind it, just string manipulation.
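To make concrete how little machinery this takes, here is a minimal sketch of an ELIZA-style responder in Python. The keywords, templates, and pronoun swaps are invented for illustration, not Weizenbaum’s original script, but the mechanism is the same: match a keyword pattern, reflect the user’s own words back, and paste them into a canned template.

```python
import random
import re

# Illustrative rules in the spirit of ELIZA. These patterns and templates
# are invented for demonstration, not Weizenbaum's original script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

RULES = [
    (re.compile(r"\bi'?m (?:feeling )?(.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     ["Tell me more about your {0}.", "What is your relationship with your {0} like?"]),
]
FALLBACKS = ["Please go on.", "I see. Can you elaborate?", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my' -> 'your')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    """Build a reply from the user's own words. No comprehension is involved:
    the first matching pattern wins, and its captured text is slotted into
    a pre-written template."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    return random.choice(FALLBACKS)

print(respond("I'm feeling sad about my mother"))
# e.g. "Why do you say you are sad about your mother?"
```

Everything ELIZA “said” came from rules of this shape. The program never needed to know what a mother is; it only needed to find the word and echo it back.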

What startled Weizenbaum was how quickly people opened up to the program. His own secretary reportedly asked him to leave the room so she could chat with ELIZA privately. Students who fully understood the program was a simple script still felt it “understood” them during conversation. Weizenbaum found this so troubling that he spent much of his later career warning about the dangers of mistaking computation for cognition.

Why Your Brain Falls for It

Anthropomorphism, the instinct to project human qualities onto nonhuman things, is deeply wired. People name their cars, scold their printers, and feel guilty throwing away a stuffed animal. This isn’t a flaw in reasoning so much as a default mode: humans evolved to detect intentions and emotions in other agents, and the threshold for triggering that detection is remarkably low. A pair of dots and a curved line looks like a face. A program that says “I understand” feels like it understands.

Crucially, this perception lives in the user’s mind, not in the technology. Research in cognitive science suggests that anthropomorphism is strongest during interaction. Reading about a chatbot is one thing; actually talking to one activates social instincts that are hard to override with logic alone. The more a system’s responses resemble human conversation in tone, timing, and apparent empathy, the more powerfully the effect takes hold.

The ELIZA Effect With Modern AI

Today’s large language models are orders of magnitude more convincing than Weizenbaum’s script. They generate fluent, context-aware text. They appear to remember earlier parts of a conversation, though within a session that “memory” typically amounts to the application re-sending the accumulated transcript with each turn, as the sketch below illustrates. They can adopt personalities, express what sounds like emotion, and mirror a user’s communication style. This makes the ELIZA effect not just possible but nearly inevitable for many users.
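Here is a sketch of that loop. The `generate_reply` function is a hypothetical stand-in for a real model call, and the message format and `MAX_CONTEXT_MESSAGES` limit are assumptions for illustration, not any specific vendor’s API.

```python
from typing import Dict, List

MAX_CONTEXT_MESSAGES = 50  # illustrative stand-in for a real token budget

def generate_reply(transcript: List[Dict[str, str]]) -> str:
    """Hypothetical placeholder for an LLM call. A real application would
    send the whole transcript to a model and return its completion."""
    return f"(model reply given {len(transcript)} messages of context)"

def chat_turn(history: List[Dict[str, str]], user_text: str) -> str:
    """One turn of a chat loop. The bot 'remembers' only because the full
    history is re-sent every time; nothing persists inside the model."""
    history.append({"role": "user", "content": user_text})
    # Once the transcript outgrows the context budget, the oldest turns are
    # dropped, producing the kind of partial "reset" described below.
    if len(history) > MAX_CONTEXT_MESSAGES:
        del history[: len(history) - MAX_CONTEXT_MESSAGES]
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Dict[str, str]] = []
print(chat_turn(history, "Do you remember what I said yesterday?"))
```

When old turns fall out of the context window, whatever persona had accumulated in them is simply gone. That mechanism matters in the second case below.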

In 2022, Blake Lemoine, a Google engineer with backgrounds in both cognitive science and machine learning, publicly claimed that Google’s LaMDA chatbot was sentient. He wasn’t a casual user unfamiliar with how the technology worked. He had years of experience in the field and still became convinced the model had feelings and desires. Google placed him on leave and eventually fired him, but the episode illustrated how compelling the illusion can be, even for people who should theoretically be immune to it.

The New York Times profiled a woman who developed a romantic attachment to ChatGPT, spending hours each day in conversation with her AI “boyfriend.” Whenever she hit the model’s memory limit and it partially reset, the personality she’d bonded with effectively vanished and had to be rebuilt. She was forming an emotional relationship with a system that could not, by design, form one back.

OpenAI itself has flagged the risk. In the system card for GPT-4o, which can generate realistic humanlike speech, the company noted that “generation of content through a humanlike, high-fidelity voice may exacerbate these issues, leading to increasingly miscalibrated trust.” During internal testing, OpenAI observed users adopting language that suggested they were forming connections with the model.

How It Differs From the Uncanny Valley

The ELIZA effect and the uncanny valley are sometimes confused, but they describe opposite reactions. The ELIZA effect is about feeling too comfortable with a machine, projecting warmth and understanding onto it. The uncanny valley is about feeling repulsed when a robot or animation looks almost, but not quite, human.

Research on consumer robots has shown how these two phenomena interact. When people anthropomorphize a robot, they perceive it as warmer, more personable. Up to a point, this is positive. But once the robot becomes too humanlike in appearance or behavior, that perceived warmth flips into discomfort. The same quality that makes something feel relatable at a moderate level triggers eeriness at a high level. The ELIZA effect pulls you in; the uncanny valley pushes you away. Modern text-based AI tends to stay on the ELIZA effect side of this divide because you never see a face that looks slightly wrong.

Why It Matters Now

The ELIZA effect becomes a real problem when it leads to miscalibrated trust. If you believe an AI genuinely cares about your wellbeing, you’re more likely to take its advice without scrutiny, share sensitive personal information, or rely on it as an emotional support system. None of these behaviors is dangerous in every case, but each carries risks the user may not recognize in the moment.

People experiencing loneliness, grief, or mental health challenges are particularly vulnerable. A system that responds with apparent empathy 24 hours a day, never gets tired of you, and never judges can feel like the perfect companion. But it has no model of your actual wellbeing. It generates the next most plausible string of text based on patterns in its training data. The warmth is real to you and completely absent on the other side of the screen.
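To see what “generating the next most plausible string” means in its crudest form, consider a toy bigram model. Real language models are incomparably larger and trained very differently, and the corpus here is invented, but the sketch shares the shape of the objective: continue the text, with no representation of the person typing.

```python
import random
from collections import defaultdict

# Toy training data, invented for illustration.
corpus = ("i hear you . that sounds really hard . "
          "i am here for you . that sounds difficult .").split()

# Record which word tends to follow which: the crudest form of
# "patterns in the training data".
next_words = defaultdict(list)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_words[current_word].append(following_word)

def continue_text(seed: str, length: int = 8) -> str:
    """Extend the seed by repeatedly sampling a statistically plausible
    next word. Nothing here models the reader's actual state."""
    word, output = seed, [seed]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(continue_text("that"))
# e.g. "that sounds really hard . i am here for"
```

The empathetic-sounding output (“that sounds really hard”) falls out of the statistics of the training text, not out of any assessment of how you are doing.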

There’s also a more subtle risk for everyday users: the gradual erosion of skepticism. When an AI assistant explains something in a confident, articulate, friendly tone, the ELIZA effect makes it feel authoritative. The same information delivered in a robotic monotone would prompt more critical evaluation. As AI voices and conversational styles become more natural, the gap between how trustworthy these systems sound and how trustworthy they actually are will widen.

Recognizing It in Yourself

The most useful thing to know about the ELIZA effect is that awareness of it only partially protects you. Knowing that a chatbot doesn’t have feelings won’t fully stop your brain from responding as if it does, especially during extended conversations. The social instincts that drive anthropomorphism operate below conscious reasoning.

A few patterns can signal the effect is at work. If you find yourself thanking an AI, apologizing to it, worrying about offending it, or feeling that it “gets” you in a way other people don’t, those are the ELIZA effect in action. None of these reactions make you gullible. They make you human. But noticing them is the first step toward keeping your expectations calibrated to what the technology actually is: a very sophisticated text predictor that has no inner life, no memory of you between sessions (unless engineered to simulate one), and no stake in whether its advice helps or harms you.