Can Science Be Wrong? Yes, and That’s the Point

Yes, science can be wrong, and being wrong is actually built into how science works. Unlike systems of belief that claim permanent truth, science operates on a principle that every claim must be open to being disproven. When a scientific idea turns out to be incorrect, the process of identifying and correcting that error is science functioning exactly as designed. That said, there’s a big difference between an individual study being wrong and an entire field of knowledge collapsing overnight.

Being Wrong Is the Point

In the 1930s, the philosopher Karl Popper argued that what separates real science from everything else is one key feature: falsifiability. A scientific claim must make predictions that can be tested and potentially shown to be incorrect. If no experiment could ever disprove an idea, Popper considered it outside the realm of science entirely. He pointed to Freudian psychology as an example: it offered broad explanations of human behavior but made no specific predictions that an experiment could contradict.

This means science doesn’t claim to produce permanent, unchangeable truths. It produces the best explanations available given current evidence. Ideas that survive repeated testing over decades are probably robust, but “probably robust” is the strongest language the system allows. That’s not a weakness. It’s what makes science self-correcting in a way that dogma never can be.

How Often Individual Studies Are Wrong

More often than most people realize. When researchers attempted to replicate 100 published studies in psychology, only about 36% produced results similar to the originals. The effect sizes in those successful replications were, on average, half as large as initially reported. This pattern, sometimes called the replication crisis, extends well beyond psychology into biomedicine and other fields.
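The shrinkage of effect sizes in replication has a well-understood statistical driver: journals tend to publish only results that cross a significance threshold, so published estimates are inflated, while replications are unfiltered new measurements. The toy simulation below illustrates the pattern; every number in it (the true effect, the sample size, the filter) is a hypothetical assumption for illustration, not a figure from the replication project.

```python
import random
import statistics

# Toy simulation (hypothetical numbers): many labs run small studies of a
# modest true effect. Journals tend to publish only "significant" results,
# so published effect sizes are inflated; faithful replications, which are
# not filtered this way, regress back toward the true effect.

random.seed(42)

TRUE_EFFECT = 0.2    # assumed true standardized effect size
N = 30               # participants per group in each small study
SE = (2 / N) ** 0.5  # approx. standard error of a two-group mean difference

def run_study():
    """Return one study's observed effect: the truth plus sampling noise."""
    return random.gauss(TRUE_EFFECT, SE)

# "Published" studies: keep only those whose observed effect clears a rough
# significance threshold of about 1.96 standard errors above zero.
published = [e for e in (run_study() for _ in range(10_000)) if e > 1.96 * SE]

# Replications of those findings are fresh, unfiltered draws from the truth.
replications = [run_study() for _ in published]

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published effect:   {statistics.mean(published):.2f}")
print(f"mean replication effect: {statistics.mean(replications):.2f}")
```

In this toy setup, the average published effect comes out several times larger than the true effect, while the replications cluster near the truth: the same selection mechanism, sometimes called the winner's curse, that helps explain why real replications report smaller effects than the originals.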

Peer review, the process where other scientists evaluate a study before publication, catches less than you’d hope. In studies testing how well reviewers spot errors, they failed to identify two-thirds of major mistakes. On average, reviewers caught only two or three out of eight or nine deliberately inserted errors. About 40% of reviewers missed cases where authors had extended their conclusions well beyond what their data actually showed. Peer review is a useful filter, but it’s far from foolproof.

The scale of corrections is growing. More than 10,000 research papers were retracted in 2023 alone, a new record. Many of those retractions involved fraud or serious methodological problems that slipped past initial review. Each retraction represents the system catching an error, sometimes years after publication.

How Medical Advice Changes

If you’ve ever felt whiplash from shifting health recommendations, you’re not imagining it. A review of medical guidelines issued during the COVID-19 pandemic found that when expert recommendations were later tested in randomized trials, over one in three turned out to be wrong. That 35% reversal rate was consistent with earlier research on medical reversals across other areas of medicine. These weren’t fringe opinions. They were official guideline recommendations from the National Institutes of Health.

Some reversals reshape everyday health advice. For decades, dietary guidelines warned Americans to strictly limit cholesterol intake, treating eggs and shellfish almost like health hazards. That stance eroded over time, and the 2015 Dietary Guidelines dropped the long-standing 300-milligram daily cap entirely. The science had shifted: dietary cholesterol turned out to have far less impact on blood cholesterol than originally believed.

Daily aspirin tells a similar story. Millions of healthy adults over 60 took low-dose aspirin for years to prevent heart attacks, often on their doctor’s advice. In 2022, the U.S. Preventive Services Task Force reversed course and recommended against starting aspirin for primary heart disease prevention in adults 60 and older. Better evidence showed the bleeding risks outweighed the cardiac benefits for people who hadn’t already had a heart attack or stroke.

When Entire Worldviews Shift

Sometimes science isn’t just slightly off. Sometimes the entire framework turns out to be fundamentally incomplete. Historian of science Thomas Kuhn described this process as a paradigm shift. In his model, science operates in long stretches of “normal science,” where researchers solve problems within an accepted framework. Over time, puzzles accumulate that the framework can’t explain. These anomalies build until confidence in the old model cracks, triggering a crisis and eventually a revolution where a new paradigm replaces the old one.

The most famous example took centuries to play out. For over a thousand years, educated people accepted that the Earth sat at the center of the universe with everything orbiting around it. When Galileo pointed his telescope at Jupiter in 1610, he saw moons orbiting that planet, not Earth. He also observed the full cycle of phases of Venus, which the Earth-centered Ptolemaic model could not explain. These observations didn't just tweak the old model. They dismantled it.

The shift from Newtonian physics to Einstein’s relativity followed a similar pattern. Newton’s laws worked beautifully for everyday situations and still do. You can use them to build bridges, launch rockets, and predict eclipses. But at extreme speeds, near massive objects, or across cosmological distances, Newton’s predictions break down. Einstein’s general relativity revealed that space, time, matter, and force are all interconnected in ways Newton’s framework couldn’t capture. Kuhn noted that this wasn’t a simple upgrade: the entire conceptual web of space, time, matter, and force had to be “shifted and laid down again on nature whole.”

Wrong vs. Incomplete

This distinction matters. When people ask “can science be wrong,” they often picture a binary: either science got it right or it didn’t. Reality is more nuanced. Newton’s physics wasn’t wrong in the way that, say, the belief that the Earth is flat is wrong. It was incomplete. Newton’s equations remain accurate for the vast majority of practical applications. They represent a specific case within Einstein’s broader framework, valid when speeds are slow relative to light and gravitational fields aren’t extreme. Mathematically, Einstein’s equations reduce to Newton’s under those everyday conditions.
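One standard textbook illustration makes the "reduces to Newton's" claim concrete. Einstein's expression for kinetic energy, expanded for speeds small compared with the speed of light, recovers Newton's familiar formula:

```latex
E_k = (\gamma - 1)\,m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

Expanding the Lorentz factor for $v \ll c$,

```latex
\gamma = 1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \cdots
\quad\Longrightarrow\quad
E_k \approx \frac{1}{2} m v^2
```

The correction terms are scaled by powers of $v^2/c^2$, which is why Newton's formula works flawlessly for bridges and rockets but fails for particles near light speed.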

This pattern repeats across science. Old theories often survive as useful approximations within specific boundaries, while new theories expand the range of what can be explained. The wrongness, when it exists, is usually about scope and precision rather than being completely off base.

Individual studies, though, can be flatly wrong. A single paper might have flawed data, a biased sample, statistical errors, or outright fraud. This is why experienced scientists rarely change their views based on one study, no matter how dramatic. Confidence builds through replication: different teams, in different labs, using different methods, arriving at the same conclusion. The more times an idea survives that gauntlet, the more trustworthy it becomes.

Why the System Still Works

Given all of this, a reasonable question is why anyone should trust science at all. The answer is that no other system for understanding the physical world has a built-in error correction mechanism this aggressive. Religious texts don’t retract chapters. Political ideologies don’t run controlled experiments on their claims. Science does both, constantly.

The 10,000 retractions in 2023 aren’t evidence that science is broken. They’re evidence that science is auditing itself at an unprecedented scale. The aspirin reversal isn’t a failure. It’s what happens when a system values updated evidence over tradition. The replication crisis in psychology, painful as it has been for the field, led to sweeping reforms in how studies are designed, registered, and reported.

Science can absolutely be wrong. Any individual study, guideline, or even paradigm might eventually be overturned. What makes science different from other ways of knowing isn’t that it’s always right. It’s that when it’s wrong, it has the tools and the cultural expectation to find out and change course.