Simulation theory is the idea that our entire reality, including every person, planet, and physical law, could be a computer simulation created by a far more advanced civilization. It’s not just science fiction. The concept has roots in centuries-old philosophy and was formalized in 2003 by philosopher Nick Bostrom at Oxford University, who argued that at least one of three unsettling possibilities must be true about our existence.
The Core Argument
Bostrom’s original paper laid out a trilemma. At least one of these three statements, he argued, is almost certainly true:
- Extinction: Nearly all civilizations go extinct before they develop the technology to run realistic simulations of consciousness.
- Disinterest: Civilizations that do reach that technological level have virtually no interest in running such simulations.
- We’re in one: The probability that you are living inside a simulation right now is very close to 100%.
The logic works like this. If advanced civilizations can simulate conscious beings and choose to do so, they’d likely run enormous numbers of these simulations. The simulated people would vastly outnumber “real” people in the original universe. Statistically, any conscious being picked at random would almost certainly be simulated rather than biological. Unless something stops civilizations from reaching that point or wanting to run simulations, the math favors the idea that we’re already inside one.
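The statistical step can be sketched with a few lines of arithmetic. The observer counts below are illustrative assumptions of mine, not figures from Bostrom's paper; the point is only how lopsided the ratio becomes.

```python
# Toy illustration of Bostrom's statistical argument: if simulated observers
# vastly outnumber biological ones, a randomly chosen observer is almost
# certainly simulated. All numbers here are made up for illustration.

def fraction_simulated(real_observers: float,
                       sims_per_civilization: float,
                       observers_per_sim: float) -> float:
    """Fraction of all observers who are simulated."""
    simulated = sims_per_civilization * observers_per_sim
    return simulated / (simulated + real_observers)

# Suppose one "base" civilization of 10 billion people runs a million
# ancestor simulations, each containing 10 billion conscious beings.
p = fraction_simulated(real_observers=1e10,
                       sims_per_civilization=1e6,
                       observers_per_sim=1e10)
print(f"P(randomly chosen observer is simulated) = {p:.8f}")
```

Even with far stingier assumptions, the fraction stays close to 1 as long as simulations outnumber base civilizations, which is why the argument hinges on the first two horns of the trilemma rather than on the exact numbers.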
Philosophical Roots
The idea that reality might be an illusion didn’t start with computers. Plato’s allegory of the cave, written around 380 BCE, imagined prisoners who mistake shadows on a wall for the whole of reality. René Descartes pushed the idea further in the 1600s with his “evil demon” thought experiment: how do you know an all-powerful deceiver isn’t manipulating your senses, producing experiences of a world around you when none of it is real? Descartes concluded that the only thing you can be certain of is your own mind, your own consciousness.
Simulation theory updates this ancient skepticism with modern technology. Instead of a demon or shadows, the deceiver is a computer program. But as philosopher David Chalmers has pointed out, the implications are surprisingly similar. Whether we’re in a simulation, a dream, or being tricked by Descartes’ demon, there’s still a world around us with consistent rules. Tables and chairs still exist in some form, even if their underlying nature isn’t what we assume.
Why Some Physicists Take It Seriously
Certain features of the physical universe look, to some eyes, suspiciously like the architecture of a computer program. The most frequently cited example involves the smallest possible scales of space. Scientists at Caltech and elsewhere theorize that space itself may not be perfectly smooth but instead made of incredibly small discrete units, somewhat like pixels on a screen. These hypothetical “spacetime pixels” would exist at the Planck length, roughly 10⁻³⁵ meters. To put that in perspective: if you enlarged a spacetime pixel to the size of a grain of sand, atoms would be as large as galaxies.
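The grain-of-sand analogy holds up as rough arithmetic. The sizes below are order-of-magnitude assumptions I have chosen for the check, not values from any measurement:

```python
# Checking the scale analogy: enlarge a Planck-length "pixel" to the size of
# a grain of sand, then apply the same magnification to an atom.
# All sizes are rough orders of magnitude (assumed for illustration).

PLANCK_LENGTH = 1.6e-35   # meters
SAND_GRAIN    = 1e-3      # meters, a coarse grain
ATOM          = 1e-10     # meters, typical atomic diameter
GALAXY        = 1e21      # meters, roughly the Milky Way's diameter

scale = SAND_GRAIN / PLANCK_LENGTH   # ~6e31 magnification
scaled_atom = ATOM * scale           # atom after the same magnification
print(f"magnification: {scale:.1e}")
print(f"scaled atom:   {scaled_atom:.1e} m (galaxy ~ {GALAXY:.0e} m)")
```

The magnified atom comes out around 10²¹ to 10²² meters, which is indeed galactic scale.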
A pixelated universe doesn’t prove simulation theory, but it does mean that reality has a resolution limit, just as any digital system would. Physicists are actively trying to detect evidence of this graininess at more practical scales, around 10⁻¹⁸ meters, using precision instruments designed to pick up signatures of quantum gravity.
In 2014, physicists including Zohreh Davoudi proposed a more direct test. If the universe runs on a computational grid, the way our own physics simulations do, that grid structure should slightly distort the behavior of the highest-energy cosmic rays. Specifically, the distribution of these cosmic rays would show a subtle breaking of rotational symmetry, revealing the shape of the underlying lattice. The strongest constraint so far puts the minimum “resolution” of any such grid at energies above 10¹¹ GeV, meaning if a lattice exists, it’s extraordinarily fine-grained.
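As a back-of-envelope sketch (my own estimate, not a calculation taken from the paper), the relation a ≈ ħc/E converts that energy bound into an implied upper limit on the lattice spacing:

```python
# Rough order-of-magnitude conversion: a cosmic-ray energy cutoff E implies
# a maximum lattice spacing a via a ~ hbar*c / E. This is an illustrative
# estimate, not the paper's actual derivation.

HBAR_C = 1.9733e-16       # GeV * m (hbar times c in convenient units)
PLANCK_LENGTH = 1.6e-35   # m

E_cutoff_GeV = 1e11       # highest-energy cosmic rays, ~10^20 eV
a = HBAR_C / E_cutoff_GeV
print(f"implied maximum lattice spacing: {a:.1e} m")
print(f"that is ~{a / PLANCK_LENGTH:.0e} Planck lengths")
```

The spacing comes out around 10⁻²⁷ meters: far finer than anything directly probed in a lab, yet still about eight orders of magnitude coarser than the Planck length, so the constraint leaves plenty of room for a Planck-scale lattice.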
Who Believes It and Why
Elon Musk brought the idea into mainstream conversation in 2016 when he stated at a tech conference that there’s “a billion to one chance we’re living in base reality.” His reasoning followed Bostrom’s logic: video games went from simple two-dimensional graphics to photorealistic virtual worlds in about 40 years. If that trajectory continues, the argument goes, future civilizations will eventually create simulations indistinguishable from reality. And if that’s possible, it’s probably already happened, possibly billions of times.
Not everyone finds this convincing. The argument assumes that consciousness can be generated computationally, something neuroscience hasn’t established. It also assumes that technological progress continues indefinitely without hitting hard limits, which physics suggests may not be the case.
The Computational Problem
One of the strongest objections to simulation theory is purely practical: simulating a universe may be physically impossible, no matter how advanced a civilization becomes. There are hard limits built into the laws of physics themselves.
Bremermann’s limit, proposed in 1962, sets a ceiling on how fast any physical system can process information. For a one-kilogram computer, the maximum processing rate is roughly 10⁵⁰ bits per second. That sounds enormous, but Hans-Joachim Bremermann himself noted that even a processor with the mass of the entire Earth couldn’t enumerate all possible move sequences in a game of chess. A more fundamental ceiling, derived from the speed of light, gravity, and quantum mechanics together, caps any processor regardless of its mass at about 10⁴³ bits per second.
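The chess claim survives a quick order-of-magnitude check. The constants below are standard rough figures I am assuming for the estimate:

```python
# Back-of-envelope check: even an Earth-mass computer running at
# Bremermann's limit for the entire age of the universe falls short of
# enumerating the chess game tree. All figures are rough estimates.

BREMERMANN = 1.36e50      # bits per second per kilogram
EARTH_MASS = 5.97e24      # kg
UNIVERSE_AGE = 4.35e17    # seconds, ~13.8 billion years
SHANNON_NUMBER = 1e120    # rough lower bound on chess game-tree size

ops_total = BREMERMANN * EARTH_MASS * UNIVERSE_AGE
print(f"total operations available: {ops_total:.1e}")
print(f"shortfall vs chess tree:    {SHANNON_NUMBER / ops_total:.1e}x")
```

Roughly 10⁹² operations are available in total, short of the chess game tree by more than 25 orders of magnitude.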
Then there’s the challenge of simulating conscious minds. The Human Brain Project estimated that simulating a single complete human brain would require a supercomputer capable of exaflop speeds (a billion billion calculations per second) with memory to match. For context, one of the most powerful supercomputers ever built, Sequoia, could simulate 530 billion neurons and 137 trillion synapses, and that represented less than 1% of the human brain’s information-processing capacity. Simulating billions of conscious beings simultaneously, along with the physical universe they inhabit, would require computing power many orders of magnitude beyond anything we can currently conceive.
Quantum systems pose an even deeper problem. Simulating interacting quantum particles on a classical computer scales exponentially with the number of particles. In practice, this means that methods capable of producing exact answers become impossibly slow as systems grow, while methods that run at manageable speeds can only produce approximations. A civilization trying to simulate quantum physics at the scale of an entire universe would face what physicists call the “sign problem,” where the computational noise grows so fast it drowns out any meaningful signal.
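The exponential scaling is easy to make concrete. A system of n two-level particles (qubits or spins) requires 2ⁿ complex amplitudes to describe exactly; the sketch below assumes 16 bytes per amplitude, the size of a double-precision complex number:

```python
# Why exact classical simulation of quantum systems blows up: a system of
# n two-level particles needs 2**n complex amplitudes. At 16 bytes per
# complex number, memory alone becomes absurd almost immediately.

def statevector_bytes(n_particles: int) -> int:
    """Memory needed to store the exact quantum state of n two-level systems."""
    return (2 ** n_particles) * 16  # 16 bytes per complex128 amplitude

for n in (10, 50, 300):
    print(f"n = {n:3d}: {statevector_bytes(n):.2e} bytes")
```

Ten particles fit in kilobytes, fifty already demand petabytes, and by around 300 particles the byte count exceeds the estimated number of atoms in the observable universe (roughly 10⁸⁰), all long before reaching the particle count of even a single grain of sand.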
Hypothesis, Not Theory
In scientific language, “theory” implies a well-tested framework supported by evidence, like the theory of relativity or germ theory. Simulation theory is more accurately a hypothesis: a logical proposition that hasn’t been proven or disproven. Recently, Santa Fe Institute professor David Wolpert introduced the first mathematically precise framework for defining what it would mean for one universe to simulate another, moving the discussion from philosophical intuition toward something that can be analyzed rigorously. His work opens questions that Bostrom’s original argument didn’t address, like whether universes could simulate each other in closed loops.
For now, simulation theory sits in a unique space. It’s not fringe pseudoscience, since it’s grounded in real logic and taken seriously by credentialed physicists and philosophers. But it’s also not established science. It remains a thought experiment that forces us to confront deep questions about the nature of consciousness, the limits of computation, and what “real” actually means.

