Bias in science is a systematic error that pushes study results away from the truth. It’s not the same as random error or honest mistakes. Bias pulls findings in one direction, making a treatment look more effective than it is, inflating the importance of a discovery, or hiding results that don’t fit expectations. It can creep in at every stage of research, from choosing who to study to deciding which results get published.
How Bias Enters the Research Process
A useful way to think about scientific bias is as a “cycle” with four entry points: how the research question is framed, how the study is designed, how it’s carried out, and whether the full results see the light of day. Bias at any one of these stages can distort the final picture, and multiple biases can stack on top of each other within a single study.
Some biases are deliberate, but most aren’t. Researchers often introduce bias without realizing it, guided by unconscious assumptions about what their results should look like. That’s what makes bias so persistent and so hard to eliminate entirely.
Selection Bias: Who Gets Studied
Selection bias happens when the people (or animals, or samples) in a study don’t represent the broader population the research claims to describe. During the early stages of COVID-19 testing, for example, only people with strong symptoms were likely to get tested. If you estimated disease prevalence from those tests alone, you’d get a wildly inflated number, in some scenarios as high as 100%, because the sample excluded everyone who was asymptomatic or mildly ill.
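To make the distortion concrete, here’s a minimal simulation of symptomatic-only testing. Every number in it (prevalence, symptom rates, sample sizes) is invented purely for illustration:

```python
# Toy illustration of selection bias: estimating disease prevalence
# from a sample that over-represents symptomatic people.
import random

random.seed(0)

TRUE_PREVALENCE = 0.05          # 5% of the population is infected
P_SYMPTOMS_IF_INFECTED = 0.6    # infected people often show symptoms
P_SYMPTOMS_IF_HEALTHY = 0.02    # healthy people rarely do

population = []
for _ in range(100_000):
    infected = random.random() < TRUE_PREVALENCE
    p_symptoms = P_SYMPTOMS_IF_INFECTED if infected else P_SYMPTOMS_IF_HEALTHY
    symptomatic = random.random() < p_symptoms
    population.append((infected, symptomatic))

# Biased sample: only symptomatic people get tested.
tested = [infected for infected, symptomatic in population if symptomatic]
biased_estimate = sum(tested) / len(tested)

# Unbiased sample: test people chosen at random.
random_sample = random.sample(population, 5_000)
unbiased_estimate = sum(infected for infected, _ in random_sample) / len(random_sample)

print(f"true prevalence:   {TRUE_PREVALENCE:.1%}")
print(f"biased estimate:   {biased_estimate:.1%}")    # far too high
print(f"unbiased estimate: {unbiased_estimate:.1%}")  # close to truth
```

In this toy setup the symptomatic-only estimate comes out around 60%, more than ten times the true prevalence, purely because of who got sampled.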
A larger version of this problem affects entire fields. Decades of psychology research have drawn conclusions about “human behavior” based almost entirely on participants from Western, educated, industrialized, rich, and democratic (WEIRD) societies. These populations turn out to be psychological outliers in measurable ways, which means findings presented as universal truths about the human mind may only describe a thin slice of our species. The literature remains overwhelmingly skewed toward these groups, and there’s still no standard method for measuring how large the psychological differences are between societies.
Confirmation Bias: Seeing What You Expect
Confirmation bias is the tendency to notice and trust evidence that supports what you already believe while dismissing evidence that contradicts it. It’s one of the most studied cognitive biases in science, and it operates at every level, from how experiments are designed to how data gets interpreted.
A striking historical example involves the 1919 solar eclipse expedition that aimed to test Einstein’s general theory of relativity. Later analysis of the historical record revealed that the lead astronomer, Arthur Eddington, and his colleagues had to decide which photographic plates to keep and which to throw out before any clear conclusion could be drawn. Eighteen plates from a second observation team in Brazil were discarded without strong justification. As historians have noted, there was nothing inevitable about the observations themselves until the scientific community had finished deciding which data counted. The results confirmed Einstein’s predictions, but the path to that conclusion was shaped by human judgment in ways the initial reports didn’t acknowledge.
More broadly, confirmation bias means scientists tend to design studies that confirm their hypotheses rather than test ideas that might disprove them. This runs directly counter to the idealized scientific method, where researchers are supposed to actively seek disconfirming evidence.
P-Hacking: Massaging the Numbers
One of the most common forms of statistical bias is known as p-hacking: collecting, selecting, or analyzing data in different ways until a nonsignificant result crosses the threshold into statistical significance. The magic number in most fields is a p-value below 0.05, meaning that if there were truly no effect, a result at least this extreme would be expected less than 5% of the time. P-hacking games that threshold.
The specific techniques are varied. Researchers might check results partway through an experiment and decide whether to keep collecting data. They might measure many different outcomes and only report the ones that turned out significant. They might drop outliers, split or combine treatment groups, or add and remove variables after looking at the data, all in search of a publishable result. When this happens across many studies, the telltale sign is a suspicious cluster of p-values sitting just below 0.05.
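A short simulation shows how quickly multiple outcomes inflate false positives. The sketch below uses numpy and scipy, with arbitrarily chosen parameters; the data is pure noise, so every “significant” result is spurious:

```python
# Sketch of one p-hacking mechanism: measure many outcomes, report only
# the first "significant" one. There is no real effect anywhere, so
# every hit is a false positive. Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_STUDIES = 1_000
N_OUTCOMES = 20      # outcomes measured per study
N_PER_GROUP = 30

false_positive_studies = 0
for _ in range(N_STUDIES):
    for _ in range(N_OUTCOMES):
        treatment = rng.normal(0, 1, N_PER_GROUP)   # no real effect
        control = rng.normal(0, 1, N_PER_GROUP)
        _, p = stats.ttest_ind(treatment, control)
        if p < 0.05:
            false_positive_studies += 1
            break   # "found" a significant outcome; stop and publish

print(f"Studies reporting p < 0.05: {false_positive_studies / N_STUDIES:.0%}")
# With 20 independent outcomes and no true effect, roughly
# 1 - 0.95**20, or about 64%, of pure-noise studies can claim a hit.
```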
Publication Bias: The File Drawer Problem
Studies with exciting, statistically significant results are far more likely to get published than studies that find nothing. The studies showing no effect don’t disappear because they’re bad science. They simply get filed away and forgotten. This is called the file drawer problem, and it warps the scientific record. When researchers later combine published studies into large-scale reviews (meta-analyses) to estimate how well a treatment works or how strong an effect is, the missing null results mean the combined estimate will be larger than the true effect. The published literature, in other words, systematically overstates what science has actually found.
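The inflation is straightforward to reproduce in simulation. The following sketch (numpy and scipy again, parameters invented for illustration) runs many small studies of a genuinely modest effect and “publishes” only the significant ones:

```python
# Rough sketch of the file drawer problem: simulate many underpowered
# studies of a small true effect, keep only the significant ones, and
# compare the published average to the truth. Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
TRUE_EFFECT = 0.2    # small true standardized mean difference
N_PER_GROUP = 40
N_STUDIES = 2_000

published, all_effects = [], []
for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1, N_PER_GROUP)
    control = rng.normal(0, 1, N_PER_GROUP)
    effect = treatment.mean() - control.mean()
    _, p = stats.ttest_ind(treatment, control)
    all_effects.append(effect)
    if p < 0.05:                 # only significant results get published
        published.append(effect)

print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean of ALL studies:       {np.mean(all_effects):.2f}")
print(f"mean of PUBLISHED studies: {np.mean(published):.2f}")  # inflated
```

Because only the studies that happened to overshoot cross the significance threshold, the published average can land at two or three times the true effect.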
Funding Bias: Who Pays Matters
Industry-sponsored research consistently produces results that favor the sponsor’s product. A large analysis covering nearly 3,000 studies found that industry-funded drug trials were about 30 times more likely to report favorable efficacy results than studies funded by governments or nonprofits. In nutrition research, industry-sponsored studies on dairy intake and cardiovascular disease showed a notably larger beneficial effect than non-industry studies, which found almost no effect at all. Tobacco industry-funded reviews were nearly 90 times more likely to conclude that secondhand smoke wasn’t harmful.
The mechanisms aren’t always obvious. Internal pharmaceutical documents show that scientific publication is treated as part of marketing strategy. When researchers at the University of California, San Francisco interviewed lead investigators on 200 industry-funded drug trials, all of which included statements saying the sponsor had no role in study design or conduct, 92% of investigators said the sponsor was actually involved in designing the study, 73% said the sponsor helped analyze the data, and 87% said the sponsor was involved in reporting findings. Only a third of authors said they had the final say on what appeared in the publication.
Observer Bias and the Placebo Effect
Observer bias is any systematic difference between what’s actually happening and what gets recorded, caused by the person doing the measuring. If a researcher knows which patients received the real treatment, they might unconsciously rate those patients as improving more. Patients themselves introduce bias too: people who know they’re receiving a treatment often report feeling better regardless of whether the treatment works.
Double-blind studies, where neither the researchers nor the participants know who’s getting the real treatment, are the primary defense against both problems. Keeping everyone in the dark prevents researchers from treating or rating the groups differently, and it ensures the placebo effect operates equally in both arms, so it cancels out of the comparison rather than inflating the treatment’s apparent benefit.
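A toy model illustrates the measurement half of this. In the sketch below the drug does nothing at all, and the +0.3 “expectation bump” applied by the unblinded rater is an invented number chosen only to make the mechanism visible:

```python
# Toy model of observer bias: an unblinded rater unconsciously inflates
# scores for patients they know got the real drug.
import numpy as np

rng = np.random.default_rng(3)
N = 2_000
TRUE_DRUG_EFFECT = 0.0            # the drug does nothing

treatment_outcome = rng.normal(TRUE_DRUG_EFFECT, 1, N)
control_outcome = rng.normal(0, 1, N)

# Unblinded rater: knows group membership, nudges treatment scores up.
rated_unblinded = treatment_outcome + 0.3
# Blinded rater: cannot favor either group.
rated_blinded = treatment_outcome

print("apparent effect, unblinded:",
      round(rated_unblinded.mean() - control_outcome.mean(), 2))  # ~0.3
print("apparent effect, blinded:  ",
      round(rated_blinded.mean() - control_outcome.mean(), 2))    # ~0.0
```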
Bias in Peer Review
Even the gatekeeping process that decides what gets published is vulnerable. Peer review bias is any departure from impartiality in evaluating a submitted paper: reviewers may be influenced by the prestige of the authors’ institution, the authors’ gender, or whether the findings align with the reviewer’s own work. Several models attempt to counteract this. Double-blind review hides author identities from reviewers and is common in imaging, nursing, and humanities journals. Triple-blind review goes further, hiding author identities from editors as well and assigning each manuscript a deidentification code. Open peer review, where reviewer names are published alongside their comments, introduces accountability that has a small but measurable positive effect on the quality of published reports.
How Science Fights Back
The scientific community has built several structural defenses against bias. The most significant is pre-registration of clinical trials. The Declaration of Helsinki now states that every clinical trial must be registered in a publicly accessible database before the first participant is enrolled. This means researchers commit to their methods and planned analyses in advance, making it much harder to cherry-pick outcomes or bury unfavorable results after the fact. The WHO maintains a network of approved registries, and major medical journals require registration as a condition of publication.
Randomization, where participants are assigned to treatment or control groups by chance, prevents selection bias from skewing who ends up in each group. Blinding prevents observer bias. Pre-registration prevents selective reporting. Replication, where independent teams repeat a study to see if they get the same result, catches findings that were flukes or artifacts of a biased design. No single safeguard eliminates bias entirely, but layered together, these tools make it progressively harder for systematic errors to survive undetected.
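As a rough sketch of how randomization and blinding fit together mechanically, here is one way a two-arm allocation might look in code. The function names and structure are hypothetical, not any standard trial-software API:

```python
# Minimal sketch of randomization plus blinding for a two-arm trial.
# Structure is illustrative only.
import secrets

def randomize(participant_ids):
    """Assign each participant to 'treatment' or 'control' by coin flip;
    return (blinded_labels, sealed_key)."""
    sealed_key = {}       # held by an independent statistician
    blinded = {}          # what investigators and participants see
    for pid in participant_ids:
        arm = "treatment" if secrets.randbelow(2) else "control"
        code = secrets.token_hex(4)      # opaque kit code, e.g. '9f2a...'
        sealed_key[code] = arm
        blinded[pid] = code
    return blinded, sealed_key

blinded, key = randomize([f"P{i:03d}" for i in range(8)])
print(blinded)    # investigators see only opaque codes
# `key` stays sealed until the trial ends, so no one treating or rating
# patients knows who got which arm.
```

Real trials typically use blocked or stratified randomization to keep the groups balanced in size and composition; the plain coin flip shown here can drift out of balance in small samples.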
Bias in science isn’t a sign that science is broken. It’s a predictable consequence of research being conducted by humans with expectations, incentives, and cognitive blind spots. The defining strength of the scientific process is that it treats bias as a known enemy and builds specific countermeasures into its methods, even if those countermeasures are imperfect and unevenly applied.