Scientific realism is the view that our best scientific theories describe the world as it actually is, including parts of it we cannot directly observe. When physicists talk about electrons or biologists describe DNA, scientific realists hold that these entities genuinely exist and that science gives us real knowledge about them, not just useful fictions for making predictions. The position has three core commitments: the world exists independently of our minds, scientific statements can be literally true or false, and our best theories constitute genuine knowledge of that world.
The Three Pillars of Scientific Realism
Scientific realism rests on three distinct commitments that work together. The first is ontological: the world science investigates exists independently of human minds. Atoms don’t pop into existence because physicists theorize about them. They’re already there, doing what they do, whether or not anyone is looking.
The second commitment is semantic, meaning it’s about how we interpret scientific language. Realists take scientific claims at face value. When a theory says “electrons orbit the nucleus,” that statement is either true or false in a straightforward sense. It’s not a metaphor, a convenient shorthand, or merely a tool for organizing observations. This applies equally to things we can see (like bacteria under a microscope) and things we can’t (like quarks or gravitational waves).
The third commitment is epistemological: science actually delivers knowledge. Our mature, well-tested theories don’t just save the appearances or happen to predict outcomes correctly by accident. They tell us something true, or at least approximately true, about the structure and contents of reality.
The Strongest Argument for Realism
The most influential case for scientific realism is known as the “no miracles” argument, articulated by philosopher Hilary Putnam in 1975. The reasoning is intuitive: if our scientific theories weren’t at least approximately true, their extraordinary success at predicting new phenomena would be a miracle. The fact that general relativity predicted gravitational lensing decades before we photographed it, or that the standard model predicted the Higgs boson before it was detected, would be inexplicable coincidences if these theories didn’t latch onto something real about the world.
The argument works in two steps. First, a theory’s repeated success gives us reason to believe it correctly captures what we can observe. Second, the best explanation for why it captures observations so well is that it’s also getting the unobservable parts right. If a map consistently leads you to the right destinations, the simplest explanation is that the map is accurate, not that you’ve been repeatedly lucky.
The Challenge From Abandoned Theories
The most powerful objection to scientific realism comes from the history of science itself. Known as the pessimistic meta-induction, it points out that science is littered with theories that were empirically successful in their day but later turned out to be fundamentally wrong. If those past theories were false despite their success, why should we believe our current theories are true?
The list of once-successful, now-abandoned theories is long and sobering. Caloric theory explained heat transfer impressively but posited a substance (caloric fluid) that doesn’t exist. Phlogiston theory accounted for combustion and was widely accepted for decades before Lavoisier’s oxygen theory replaced it. Newtonian mechanics made astonishingly accurate predictions for over two centuries before being superseded by Einstein’s relativity. Fresnel’s theory of light worked beautifully but depended on a luminiferous ether that turned out to be nonexistent. Bohr’s early model of the atom successfully predicted spectral lines of ionized helium yet was built on a picture of electron orbits that quantum mechanics would reject.
The underlying logic is simple: history shows a clear pattern of successful theories being overturned. Assuming our current moment in science is not uniquely privileged, today’s best theories will likely be overturned too. So we shouldn’t believe they’re true, no matter how well they work right now.
Underdetermination: When Evidence Isn’t Enough
A second challenge to realism is the problem of underdetermination: the available evidence can often be explained equally well by more than one theory. If multiple, mutually incompatible theories all fit the data, how can we claim any one of them is “the truth”?
A simple example makes this concrete. If you know someone spent exactly $10 on apples at $1 each and oranges at $2 each, you know they didn't buy six oranges (six oranges alone would cost $12), but you can't tell whether they bought one orange and eight apples, two oranges and six apples, or any other combination that totals $10. The data underdetermines the answer. Similarly, if children who play violent video games are more aggressive on the playground, that correlation fits at least three different theories: the games cause aggression, aggressive kids seek out violent games, or some third factor (like being bullied) drives both behaviors. The same evidence supports all three explanations.
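The fruit case can be checked by brute force. The short script below (an illustrative sketch, not part of the original essay) enumerates every purchase consistent with the evidence, assuming at least one of each fruit was bought:

```python
# Evidence: apples cost $1, oranges cost $2, and exactly $10 was spent.
# Assumption (from the example): at least one of each fruit was bought.
solutions = [
    (apples, oranges)
    for oranges in range(1, 11)
    for apples in range(1, 11)
    if apples * 1 + oranges * 2 == 10
]

for apples, oranges in solutions:
    print(f"{apples} apples + {oranges} oranges = $10")

# Several mutually incompatible "theories" fit the same data point,
# so the evidence alone cannot single one out.
print(len(solutions), "combinations fit the evidence")
```

Running it lists four incompatible combinations (8+1, 6+2, 4+3, 2+4), making the underdetermination concrete: the observation "spent $10" is satisfied equally well by each.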
In the history of science, this plays out dramatically. When Newton's mechanics failed to predict the orbit of Uranus correctly, scientists saved the theory by hypothesizing an unseen eighth planet, and they turned out to be right (Neptune was discovered in 1846). But the same strategy failed when applied to the anomalous precession of Mercury's perihelion. Scientists postulated a hidden planet called Vulcan between Mercury and the Sun, which never materialized. The anomaly wasn't resolved until Einstein's general relativity replaced Newtonian gravity entirely. The lesson for antirealists: when the data fits multiple stories, declaring one story "true" is premature.
Constructive Empiricism: The Main Alternative
The most influential antirealist position is constructive empiricism, developed by philosopher Bas van Fraassen in his 1980 book The Scientific Image. Where a scientific realist says science aims to give us a literally true story of the world, the constructive empiricist says science aims only to give us theories that are empirically adequate, meaning they correctly describe what is observable.
The difference is subtle but significant. A constructive empiricist can happily use quantum mechanics, trust its predictions, and work with its equations. What they refuse to do is commit to believing that the unobservable entities in the theory (wavefunctions, superposition states) literally exist as described. Accepting a theory, on this view, means believing it gets the observable part right. It does not require believing it gets the unobservable part right. You trust the map for the roads you can drive on; you remain agnostic about the terrain it marks beyond the horizon.
Structural Realism: A Middle Ground
One of the most interesting responses to the pessimistic meta-induction is structural realism, which tries to split the difference between full-blooded realism and antirealism. The idea, developed by philosopher John Worrall, is that we should believe in the mathematical structure of our best theories without committing to their descriptions of what specific objects or substances exist in the world.
The case of light in 19th-century physics illustrates this well. Fresnel's theory described light as vibrations in a solid elastic ether. Maxwell's later theory described it as oscillations in an electromagnetic field. The ether vanished entirely between the two frameworks. But the mathematical equations describing how light behaves, that is, its structure, carried over from Fresnel to Maxwell almost intact. What changed was the story about what light is made of. What stayed the same was the structural description of how it behaves.
Even the notorious phlogiston-to-oxygen transition shows a version of this pattern. Phlogiston theory and Lavoisier’s oxygen theory disagreed completely about what combustion is (the release of phlogiston versus the combination with oxygen). Yet both theories treated combustion, respiration, and the calcination of metals as the same kind of process, and both recognized that this process was the reverse of what happens during ore smelting. Tables of chemical affinity formulated by phlogiston theorists could be reinterpreted in Lavoisier’s framework. The central entity was fictional; the relational structure was preserved.
Structural realism claims this is the general pattern. When scientific revolutions occur, the furniture of the world gets redescribed, sometimes radically. But the structural and mathematical relationships between phenomena survive. If that’s what we commit to believing, we can explain science’s predictive success without being embarrassed every time a theoretical entity gets discarded.
Why the Debate Matters
Scientific realism might sound like a purely academic question, but it shapes how we think about science in everyday life. When a climate model projects warming over the next century, a realist stance says the model is tracking real physical processes and its projections reflect how the world actually works. An antirealist might say the model is empirically adequate for past observations but remain cautious about whether its internal mechanisms (specific feedback loops, representations of cloud dynamics) correspond to reality.
The debate also affects how we interpret theoretical entities that sit at the frontier of observability. Are dark matter and dark energy real substances filling the cosmos, or are they placeholders in equations that happen to fit current data? Your answer depends, at least in part, on whether you think successful theories generally get the unobservable parts right or merely the observable predictions.
Most working scientists operate as practical realists, treating the entities in their theories as real without pausing to justify the philosophy behind that assumption. But the philosophical question remains genuinely open. The history of science offers real reasons for humility, while the predictive power of modern science offers real reasons for confidence. Contemporary work in the philosophy of science increasingly occupies a middle path, asking not whether our theories are simply “true” or “false” but which specific parts of them are likely to survive the next revolution.