Local realism is a pair of assumptions about how the physical world works: that objects have definite properties whether or not anyone measures them, and that nothing can influence something else faster than the speed of light. These two ideas feel obvious in everyday life, yet experiments over the past several decades have shown that nature violates at least one of them. That violation sits at the heart of quantum mechanics and has reshaped how physicists understand reality itself.
The Two Assumptions
Local realism combines two distinct principles. Understanding each one separately makes it easier to see why their combination matters.
Realism is the idea that physical objects have definite properties that exist independently of observation. A coin in your pocket is heads or tails before you look at it. A particle has a specific position, momentum, or spin before any instrument measures it. In physics terms, the results of measurements are caused by physical variables that already exist, whether those variables are directly observable or hidden from view.
Locality is the idea that an object can only be directly influenced by its immediate surroundings. If you flip a switch in New York, it cannot instantly cause a light to turn on in Tokyo without some signal traveling between them. More precisely, no influence or information can travel faster than the speed of light. If you cause a change at one location, the effects of that change can only appear within the region that light could have reached since the change occurred.
Put together, local realism says the universe is made of stuff that has definite properties at all times, and those properties can only be changed by nearby interactions that respect the cosmic speed limit. This was the default worldview of classical physics, and it’s the framework most people intuitively carry around.
Einstein and the Case for Local Realism
The most famous defense of local realism came in 1935, when Albert Einstein, Boris Podolsky, and Nathan Rosen published what’s now called the EPR paper. Quantum mechanics at the time was simply silent about what was true in the absence of observation. There were rules for predicting what you’d find if you measured a particle, but no account of what the particle was doing before you looked. That there could be laws for what one finds upon looking, but none for how things are independently of looking, struck Einstein as a sign that the theory was deeply incomplete.
The EPR paper described a thought experiment involving two particles that interact and then fly apart to distant locations. Because of how they interacted, the particles become “entangled,” meaning their properties are linked. Measuring the position of one particle instantly tells you the position of the other, no matter how far apart they are. The same goes for momentum. Einstein argued that since you can predict a distant particle’s property without disturbing it, that property must have existed all along. It must be an element of physical reality.
The conclusion was that quantum mechanics, which doesn’t assign those definite values before measurement, must be an incomplete description of nature. Einstein believed there had to be deeper, “hidden” variables carrying the real information, variables that quantum theory simply hadn’t discovered yet. Under this view, the apparent weirdness of entanglement was just a gap in our knowledge, like two sealed envelopes containing opposite-colored cards. Opening one envelope and finding a red card instantly tells you the other is blue, but nothing spooky happened. The outcome was determined when the envelopes were sealed.
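The envelope picture is a genuine local hidden-variable model, and it can be made concrete in a few lines. The sketch below (an illustration, not anything from the EPR paper itself) shows why perfect anticorrelation alone proves nothing spooky: a shared value fixed at the source reproduces it exactly.

```python
import random

random.seed(0)

def seal_pair():
    """A shared hidden variable fixes both outcomes when the pair is created."""
    first = random.choice(["red", "blue"])
    second = "blue" if first == "red" else "red"
    return first, second

pairs = [seal_pair() for _ in range(10_000)]

# Each individual result looks random...
fraction_red = sum(1 for first, _ in pairs if first == "red") / len(pairs)
print(fraction_red)  # ≈ 0.5

# ...yet opening one envelope always reveals the other, with no signal sent.
print(all(a != b for a, b in pairs))  # True
```

This is exactly the kind of explanation Einstein had in mind; Bell's contribution, described next, was to show that no model of this type can reproduce all of quantum mechanics' predictions.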
Bell’s Theorem Changes Everything
For nearly three decades, Einstein’s argument seemed like a philosophical preference rather than a testable claim. That changed in 1964 when physicist John Bell proved a remarkable theorem. He showed that if local realism is true, meaning hidden variables exist and are governed only by local interactions, then measurements on entangled particles must obey a specific mathematical constraint now called Bell’s inequality.
The logic works like this. If each particle carries a hidden instruction set that determines its behavior at every possible measurement angle, then the statistical correlations between measurements on two distant particles have a ceiling. They can only be so strongly correlated. Bell’s inequality defines that ceiling. Quantum mechanics, on the other hand, predicts correlations that exceed it.
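Bell's constraint is easiest to state in its CHSH form: for any local hidden-variable model, a particular combination S of four correlations satisfies |S| ≤ 2, while quantum mechanics predicts values up to 2√2 ≈ 2.83. A minimal sketch of the quantum side, using the standard textbook singlet-state correlation E(a, b) = −cos(a − b) and the usual maximizing angles (both are standard choices, assumed here rather than taken from this article):

```python
import math

def E(a, b):
    """Correlation quantum mechanics predicts for a singlet pair
    measured at analyzer angles a and b (radians)."""
    return -math.cos(a - b)

# Measurement settings that maximize the quantum prediction.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ≈ 2.828; any local hidden-variable model caps S at 2
```

Each of the four correlations contributes √2/2, so the total lands at 2√2, comfortably above the local-realist ceiling of 2.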
This was no longer philosophy. It was a concrete, testable prediction. If experiments found correlations below Bell’s ceiling, local realism survived. If they found correlations above it, at least one of the two assumptions (realism or locality) had to be wrong.
Experiments That Broke the Ceiling
Starting in the 1970s, physicists began testing Bell’s inequality with real entangled particles, typically pairs of photons. John Clauser ran early versions of these experiments, and in the early 1980s Alain Aspect performed a landmark series of tests that showed quantum correlations clearly exceeding Bell’s limit. Anton Zeilinger later extended this work in increasingly sophisticated ways. All three shared the 2022 Nobel Prize in Physics “for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science.”
Each generation of experiments closed potential escape routes known as loopholes. Skeptics pointed out that if detectors were inefficient, or if the two measurement stations could somehow communicate during the experiment, the results might mimic a Bell violation without actually disproving local realism. By 2015, several teams had run what are called loophole-free Bell tests, closing the most significant loopholes simultaneously. One experiment using entangled photons, published in Physical Review Letters, found a result so far beyond what local realism allows that the probability of it occurring by chance under local realism was less than 1 in 10 to the 30th power, an 11.5 standard deviation effect. For context, physicists typically consider 5 standard deviations conclusive.
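The two quoted numbers are consistent with each other. Assuming a Gaussian test statistic (a simplification of the experimenters' actual analysis), the one-sided tail probability of an 11.5 standard deviation result can be checked with the complementary error function:

```python
import math

# One-sided Gaussian tail probability at z standard deviations:
# p = 0.5 * erfc(z / sqrt(2))
z = 11.5
p = 0.5 * math.erfc(z / math.sqrt(2))
print(p)  # ≈ 7e-31, below 1 in 10^30

# The 5-sigma "conclusive" convention, for comparison:
print(0.5 * math.erfc(5 / math.sqrt(2)))  # ≈ 3e-7
```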
What Has to Give
If local realism fails, either locality is wrong, realism is wrong, or both are. Physicists and philosophers have proposed several ways to make sense of this, and there’s no universal agreement on which assumption to abandon.
The Copenhagen interpretation, developed by Niels Bohr and Werner Heisenberg in the 1920s, effectively drops realism. In this view, physical systems simply do not have definite properties until they are measured. The particle isn’t secretly heads or tails before you look. The act of measurement brings the outcome into existence. This is the interpretation most commonly taught in physics courses, though it leaves many people unsatisfied because it doesn’t explain what a measurement is or why observing a system should change it.
Other physicists prefer to keep realism but accept nonlocality. In this picture, particles do have definite properties at all times, but influences between entangled particles can act instantaneously across any distance. This preserves the intuition that things are real whether or not you look, at the cost of accepting that the universe contains connections that don’t respect the speed of light. Importantly, this kind of nonlocality still can’t be used to send messages faster than light, so it doesn’t violate relativity in the way you might fear.
The statistical (or ensemble) interpretation takes a more minimalist route. It treats the mathematics of quantum mechanics as a tool for predicting the behavior of large groups of particles, not individual ones. The probability distribution is taken at face value: you can predict what a population of particles will do on average, but the theory simply doesn’t speak about individual particles. This sidesteps the question of what’s “really” happening to any single particle.
None of these interpretations has been experimentally distinguished from the others. They all predict the same measurement outcomes. The choice between them remains, for now, partly a matter of which assumption a physicist finds most expendable.
Why It Matters Beyond Physics Class
The violation of local realism isn’t just a philosophical curiosity. It has practical consequences, particularly in quantum cryptography. Quantum key distribution protocols use entangled particles to generate encryption keys between two parties. The security of these systems rests on the fact that entangled particles violate Bell’s inequality. If an eavesdropper tries to intercept or mimic the entangled particles using any local realist strategy (hidden copies, pre-arranged values), the correlations they produce will fall below Bell’s limit, and the eavesdropping becomes detectable.
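A simplified simulation of that detection logic, in the spirit of Ekert-style protocols (real protocols add basis sifting and error correction, omitted here; the angles and the eavesdropper's sign-based strategy are illustrative assumptions): the parties estimate the CHSH value S from their measurement records, and any source that works by distributing pre-arranged values can only reach S ≈ 2, while a genuine entangled source reaches about 2.83.

```python
import math
import random

random.seed(7)
ANGLES_A = (0.0, math.pi / 2)
ANGLES_B = (math.pi / 4, 3 * math.pi / 4)

def measure_pair(a, b, quantum=True):
    """Return (+/-1, +/-1) outcomes for one particle pair at settings a, b."""
    if quantum:
        # Reproduce the singlet statistics E(a, b) = -cos(a - b). Note that
        # this bookkeeping peeks at both settings at once: exactly what a
        # local-realist eavesdropper cannot do.
        out_a = random.choice((-1, 1))
        p_equal = math.sin((a - b) / 2) ** 2
        out_b = out_a if random.random() < p_equal else -out_a
    else:
        # Eavesdropper's source: outcomes fixed in advance by a shared
        # hidden variable, independent of which settings get chosen.
        lam = random.uniform(0, 2 * math.pi)
        out_a = 1 if math.cos(a - lam) >= 0 else -1
        out_b = -1 if math.cos(b - lam) >= 0 else 1
    return out_a, out_b

def chsh(quantum, trials=40_000):
    """Estimate the CHSH value S from simulated measurement records."""
    E = {}
    for a in ANGLES_A:
        for b in ANGLES_B:
            E[a, b] = sum(math.prod(measure_pair(a, b, quantum))
                          for _ in range(trials)) / trials
    a1, a2 = ANGLES_A
    b1, b2 = ANGLES_B
    return abs(E[a1, b1] - E[a1, b2] + E[a2, b1] + E[a2, b2])

S_honest = chsh(quantum=True)
S_faked = chsh(quantum=False)
print(S_honest)  # ≈ 2.83: genuine entanglement, channel passes the test
print(S_faked)   # ≈ 2, up to sampling noise: the fake is detectable
```

The gap between the two values is the security margin: the closer an experiment's measured S is to 2√2, the less room any local-realist strategy has to hide in.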
Researchers at UC San Diego have noted that quantifying exactly how much “freedom of choice” an attacker would need to fake quantum entanglement helps determine how easy it would be to break certain quantum encryption schemes. The stronger the Bell violation in an experiment, the harder it is for any classical strategy to replicate it, and the more secure the communication channel becomes.
Local realism, in short, is the intuitive picture of reality that most of us carry by default. Particles have properties, and nothing acts at a distance without a signal. Quantum mechanics forced physicists to confront the failure of that picture, and the experiments confirming that failure are among the most rigorously tested results in all of physics.

