Is Science Objective or Subjective?

Science aims to be objective, but it is practiced by humans, funded by institutions, and shaped by cultural priorities, which means subjective influences are woven into every stage of the process. The honest answer is that science is neither purely objective nor purely subjective. It is a system of methods designed to minimize subjectivity, and those methods work remarkably well, but they never eliminate the human element entirely.

What Objectivity Means in Science

When people call science “objective,” they usually mean two things: that scientific claims are based on evidence anyone can examine, and that the methods used to gather that evidence don’t depend on any single person’s feelings or beliefs. A chemistry experiment run in Tokyo should produce the same results as one run in Berlin. The laws of gravity don’t change based on who measures them.

The philosopher Karl Popper formalized this idea through the concept of falsifiability. For a theory to count as scientific, it must make predictions that could, in principle, be proven wrong. Einstein’s theory of relativity predicted specific, measurable effects, such as the bending of starlight by the sun, that, if absent, would have refuted the theory. This is what separates science from frameworks like psychoanalysis, which Popper argued could explain any possible human behavior and therefore couldn’t truly be tested. The ability to be wrong is, paradoxically, what makes a scientific claim trustworthy.

Popper also insisted that no scientific statement is beyond questioning. Even the basic observations used to test theories must themselves be testable by other people. This requirement of “intersubjective” testing, where multiple independent observers can check each other’s work, is one of science’s strongest tools for keeping individual bias in check.

Where Subjectivity Enters the Process

Despite these safeguards, subjective choices shape science at every turn. Researchers choose which questions to ask, which experiments to design, how to interpret ambiguous data, and which results to emphasize. These decisions are influenced by training, culture, career incentives, and personal expectations.

One of the best-documented examples is confirmation bias: the tendency to notice evidence that supports what you already believe and overlook evidence that contradicts it. Scientists are not immune to this. A well-known historical case involves Arthur Eddington’s 1919 expedition to test Einstein’s predictions about how gravity bends light. Eddington already expected a particular result, and scholars have since argued that the noisy, ambiguous data from that expedition was interpreted more favorably than it should have been. The conclusion turned out to be correct, but the path to it was shaped by expectation.

Confirmation bias doesn’t require dishonesty. Scientists may unconsciously design studies that are more likely to confirm their hypotheses than to challenge them. They may choose comparisons or statistical methods that subtly favor the outcome they anticipate, not out of fraud, but because cognitive shortcuts guide decisions they aren’t fully aware of making.
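This mechanism is easy to demonstrate with a toy simulation (not from any study cited here; the numbers and function names are illustrative). Suppose a treatment has no real effect, but the analyst measures ten different outcomes and reports whichever one comes out significant. Even with honest data, "positive" findings appear far more often than the nominal 5% false-positive rate suggests:

```python
import random
import statistics

def one_study(n_outcomes=10, n_per_group=30, rng=None):
    """Simulate a study where the treatment has NO real effect, but the
    analyst checks many outcomes and reports any significant one."""
    rng = rng or random.Random()
    for _ in range(n_outcomes):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]  # "treatment" group
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]  # "control" group
        # Welch-style t statistic; |t| > 2 roughly corresponds to p < 0.05
        t = (statistics.mean(a) - statistics.mean(b)) / (
            (statistics.variance(a) / n_per_group
             + statistics.variance(b) / n_per_group) ** 0.5)
        if abs(t) > 2.0:
            return True  # "found" an effect that isn't there
    return False

rng = random.Random(42)
runs = 1000
hits = sum(one_study(rng=rng) for _ in range(runs))
print(f"Studies reporting a false positive: {hits / runs:.0%}")  # well above 5%
```

With ten independent outcomes, the chance of at least one spurious hit is roughly 1 − 0.95¹⁰ ≈ 40%. No single decision in the simulated analysis is fraudulent; the bias comes entirely from which result gets reported.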

How Funding Shapes What Science Finds

The question of who pays for research introduces another layer of subjectivity. A scoping review of over 650 large-scale medical analyses found that 92% of industry-sponsored studies recommended the drug manufactured by the sponsoring company, and 82% reported statistically significant positive results for that drug. When researchers directly compared industry-funded and independently funded studies on the same topics, the industry-funded versions reported favorable conclusions 100% of the time, compared to 80% for non-industry studies. The effect sizes in industry-funded research also tended to be larger.

This doesn’t necessarily mean the data is fabricated. Subtle decisions about study design, such as which patients to include, which outcomes to measure, and which results to highlight, can all tilt findings in a sponsor’s direction without anyone committing outright fraud. Publicly funded research, by contrast, tends to score higher on reporting standards and appears in higher-impact journals.

Beyond corporate influence, broader social forces steer science in particular directions. The Manhattan Project is a dramatic example of wartime priorities redirecting vast resources toward fundamental physics. More quietly, research funding worldwide tends to concentrate on problems that matter to wealthy nations and powerful institutions, leaving other questions unexplored. Science as a body of knowledge reflects not just what is true about the world, but which truths societies have chosen to investigate.

Peer Review: Objective Gatekeeper or Subjective Filter?

Peer review is often described as science’s quality-control system, and it does catch errors and improve research. But it is far from a neutral process. When multiple reviewers evaluate the same scientific paper or the same peer review, they disagree roughly 28% to 32% of the time. That rate of disagreement is comparable to what you’d find if you asked two co-authors of the same paper to independently rate their own contribution.

Even superficial factors influence peer review. In one controlled experiment, reviewers rated longer reviews as higher quality, even when the additional length added no useful information. The effect was strongest for perceived “coverage,” where longer reviews scored a full 0.83 points higher on a 7-point scale. Authors, meanwhile, rated reviews recommending acceptance an average of 1.4 points higher than reviews recommending rejection, regardless of the review’s actual quality. These biases aren’t malicious. They’re the predictable result of human psychology operating inside a system that depends on human judgment.

The Reproducibility Problem

If science were perfectly objective, any competent researcher should be able to repeat an experiment and get the same result. In practice, this often doesn’t happen. A 2016 Nature survey found that more than 70% of researchers had tried and failed to reproduce other scientists’ experiments, and more than half couldn’t reproduce their own. A more recent survey of hundreds of professors in the U.S. and India painted a similarly sobering picture: only about 34% of American researchers and 15% of Indian researchers who attempted to replicate others’ work reported fully successful results.

These numbers don’t mean the underlying science is wrong. Reproducibility failures can stem from small differences in equipment, materials, or technique that are hard to capture in a published paper. But they do reveal how much unacknowledged variability exists in what’s supposed to be a standardized process, and how far real-world science falls from the ideal of perfectly objective, perfectly repeatable results.

How Science Corrects for Its Own Subjectivity

What makes science unusual isn’t that it’s free from bias. It’s that it has built-in mechanisms specifically designed to counteract bias, even if those mechanisms are imperfect.

Double-blind trials are one of the clearest examples. In these studies, neither the participants nor the researchers know who is receiving the real treatment and who is getting a placebo. This prevents researchers from unconsciously treating the two groups differently, and it prevents participants’ expectations from coloring their reported symptoms. When unblinding happens accidentally before the study ends, it must be documented and reported as a potential source of bias. The entire structure exists because scientists recognized that human subjectivity contaminates results and engineered a workaround.
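The allocation step can be sketched in a few lines. This is a simplified illustration, not a production randomization protocol; the function name and arm codes are invented for the example. Participants are assigned to coded arms, and the key that maps codes to "treatment" or "placebo" is generated separately and kept sealed until the study ends:

```python
import random

def blind_assign(participant_ids, seed=0):
    """Randomize participants to coded arms 'A' and 'B'. The code-to-treatment
    key is created here but held by a third party until unblinding, so neither
    researchers nor participants know who received the real drug."""
    rng = random.Random(seed)
    arms = ["A", "B"]
    rng.shuffle(arms)
    key = {arms[0]: "treatment", arms[1]: "placebo"}  # sealed until study ends
    # Balanced allocation: half of the participants to each coded arm
    slots = (["A", "B"] * ((len(participant_ids) + 1) // 2))[:len(participant_ids)]
    rng.shuffle(slots)
    assignments = dict(zip(participant_ids, slots))
    return assignments, key

assignments, key = blind_assign(["p01", "p02", "p03", "p04"])
print(assignments)  # researchers and participants see only the codes 'A'/'B'
```

The design choice that matters is the separation of `assignments` from `key`: everyone involved in running the trial works only with the codes, which is exactly the structural workaround for human subjectivity described above.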

Statistical methods, pre-registration of hypotheses, independent replication, and open data sharing all serve the same purpose. They don’t eliminate subjectivity, but they create a system where subjective errors tend to get caught and corrected over time. Thomas Kuhn, the philosopher who introduced the concept of paradigm shifts, pointed out that the transition between major scientific frameworks is not a purely rational process. Competing ideas, personal reputations, and even national loyalties can influence which theory wins out. But the long arc of science bends toward self-correction in ways that other human institutions generally don’t.

A Practical Way to Think About It

A useful framework is that scientific methods are designed to be objective, but scientific practice is carried out by subjective beings. Any individual study can be influenced by the researchers’ expectations, the funder’s interests, or the reviewers’ biases. But the collective enterprise of science, where thousands of researchers test, challenge, and build on each other’s work, produces knowledge that is more reliable than any single person’s perspective could be. The objectivity of science lives not in any one experiment, but in the process of ongoing scrutiny and correction that no single researcher controls.