What Is Bad Science and How Do You Spot It?

Bad science is research that reaches unreliable conclusions because of flawed methods, manipulated data, or deliberate fraud. It ranges from honest mistakes amplified by sloppy study design all the way to intentional fabrication of results. Understanding what makes science “bad” helps you evaluate the health claims, product promises, and news headlines you encounter every day.

What Counts as Bad Science

Bad science isn’t a single thing. It’s a spectrum. At one end, you have research that’s simply weak: small sample sizes, no control group, vague hypotheses, or conclusions that stretch far beyond what the data actually showed. At the other end, you have outright fraud, where researchers fabricate data, falsify results, or plagiarize other people’s work. The U.S. government formally defines research misconduct as “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research.” That definition deliberately excludes honest error and genuine scientific disagreements, which are a normal part of how science works.

The distinction matters. A researcher who designs an experiment poorly or misinterprets statistical results has produced low-quality science, but that’s different from one who cherry-picks data to support a predetermined conclusion. Both can mislead the public, but only the latter involves intentional deception. Good science can still involve conflicting data or valid differences in how results are interpreted. Bad science, by contrast, typically involves ignoring or hiding evidence that contradicts the desired outcome.

The Hallmarks of Pseudoscience

Pseudoscience is bad science’s more organized cousin. It mimics the language and appearance of real research while violating its core principles. The philosopher Karl Popper identified the key dividing line: a scientific claim must be falsifiable, meaning it has to be possible, at least in theory, for an observation to prove it wrong. When promoters of a theory refuse to accept any evidence that could disprove it, that theory has left the realm of science.

Several specific red flags mark pseudoscientific claims:

  • Handpicked examples. Supporters point to carefully selected cases rather than representative evidence from the broader population.
  • Unrepeatable experiments. The results rely on experiments that other researchers can’t reproduce.
  • Disregard of refuting information. Observations or experiments that conflict with the theory are ignored or dismissed.
  • Built-in subterfuge. The theory is structured so that any outcome confirms it, and nothing could ever disprove it.

If you notice a claim that has an answer for every objection but never updates based on new evidence, you’re likely looking at pseudoscience rather than science.

P-Hacking and Statistical Manipulation

One of the most common ways legitimate-looking research goes wrong is through the misuse of statistics, particularly something called p-hacking. In many fields, a study’s results are considered “statistically significant” if they cross a specific threshold: a p-value of 0.05 or less. Roughly speaking, that means that if there were no real effect, results at least as extreme as the ones observed would turn up by chance less than 1 time in 20. That threshold was never meant to be a magic number, but it became one because journals prefer to publish positive findings.

P-hacking happens when researchers keep slicing their data in different ways, testing different variables, or tweaking their analysis until they land on a result that clears the 0.05 bar. They might exclude certain participants, combine groups differently, or test dozens of relationships and report only the one that reached significance. The data itself isn’t fabricated, but the process of fishing for a specific result makes the finding unreliable. As one widely cited verse in the statistics community puts it: “P point oh five we publish, else perish.” The pressure to produce significant results creates incentives for exactly this kind of manipulation.
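
The arithmetic behind the danger is simple: run 20 independent tests on pure noise, and the chance that at least one clears the 0.05 bar is 1 - 0.95^20, or about 64%. Here is a minimal sketch of that fishing expedition in Python; the subgroup counts and sample sizes are invented for illustration, and no real dataset is involved.

    # p-hacking in miniature: no real effect exists anywhere, yet there is
    # a ~64% chance that at least one of 20 slices looks "significant".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subgroups = 20      # 20 ways of slicing the same null dataset
    n_per_arm = 30        # participants per arm in each comparison
    hits = 0

    for i in range(n_subgroups):
        treatment = rng.normal(0, 1, n_per_arm)   # pure noise
        control = rng.normal(0, 1, n_per_arm)     # pure noise
        _, p = stats.ttest_ind(treatment, control)
        if p < 0.05:
            hits += 1
            print(f"subgroup {i}: p = {p:.3f}  <- looks publishable")

    print(f"{hits} of {n_subgroups} null comparisons crossed p < 0.05")

Report only the rows flagged as publishable, file the rest away, and a dataset containing nothing produces a paper claiming something.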

The Replication Crisis

If a scientific finding is real, other researchers should be able to repeat the experiment and get similar results. When a massive collaborative project attempted to replicate 100 published psychology studies, only about 36% of the replications produced statistically significant results in the same direction as the originals. The effect sizes in the replications were, on average, about half as large as those initially reported.

This “replication crisis” isn’t limited to psychology. In 2005, researcher John Ioannidis published what became a landmark paper arguing that most published research findings in biomedicine are false. The problem stems from a combination of factors: studies that are too small, analytical methods that amplify noise, and a publishing system that rewards novel, dramatic results over careful, incremental work.
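
The “too small” part has a mechanical consequence that helps explain the shrinking effect sizes. An underpowered study can only reach significance by overestimating the effect, so the significant results that get noticed are inflated almost by construction. Below is a toy simulation of that selection effect, with invented parameters rather than any real field’s numbers.

    # Why replications shrink: keep only "original" studies that reached
    # significance, then rerun each protocol once at the same sample size.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_effect, n = 0.2, 30       # small real effect, underpowered design
    originals, replications = [], []

    while len(originals) < 500:
        a = rng.normal(true_effect, 1, n)
        b = rng.normal(0, 1, n)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:               # only significant originals get noticed
            originals.append(a.mean() - b.mean())
            a2 = rng.normal(true_effect, 1, n)   # honest replication, same n
            b2 = rng.normal(0, 1, n)
            replications.append(a2.mean() - b2.mean())

    print(f"true effect:             {true_effect:+.2f}")
    print(f"mean original effect:    {np.mean(originals):+.2f}")
    print(f"mean replication effect: {np.mean(replications):+.2f}")

The originals cluster far above the true effect because, at this sample size, only overestimates can cross the significance line; the replications, facing no such filter, regress back toward the truth.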

Publication Bias and the File Drawer Effect

Imagine 20 research teams all testing whether a new supplement improves memory. Nineteen find no effect. One, by chance, gets a positive result. That one study gets published. The other 19 sit in file drawers, never seeing the light of day. This is the file drawer effect, and it systematically distorts what the scientific literature looks like from the outside.

When studies with small effects, small samples, or nonsignificant results go unpublished, the body of available evidence skews positive. Anyone reviewing “the research” on that supplement would find a published study showing it works, with no way to know about the 19 that showed it doesn’t. Meta-analyses, which combine multiple studies to estimate a true effect, end up producing estimates that are larger than the real effect because they can only work with published data.
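
The 20-team scenario is easy to put numbers on. The sketch below runs many small trials of a supplement whose true effect is exactly zero, publishes only the chance-positive ones, and compares the full record with the published record; the trial counts and sizes are invented for illustration.

    # File drawer effect: a useless supplement, many small trials,
    # and a literature built only from the chance-positive results.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_studies, n = 2000, 50
    all_effects, published = [], []

    for _ in range(n_studies):
        supplement = rng.normal(0, 1, n)   # true effect: exactly zero
        placebo = rng.normal(0, 1, n)
        diff = supplement.mean() - placebo.mean()
        _, p = stats.ttest_ind(supplement, placebo)
        all_effects.append(diff)
        if p < 0.05 and diff > 0:          # only positive "hits" get published
            published.append(diff)

    print(f"studies run:       {len(all_effects):5d}, mean effect {np.mean(all_effects):+.3f}")
    print(f"studies published: {len(published):5d}, mean effect {np.mean(published):+.3f}")

A meta-analysis that can see only the published rows pools numbers like the second line, not the first, and concludes that the useless supplement works.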

How Industry Funding Shapes Results

Who pays for a study influences what that study finds. Industry-sponsored research in both food and pharmaceuticals is more likely than independently funded research to produce results and conclusions favoring the sponsor’s product. This isn’t always because the data is fabricated. Often, the influence is subtler: it shapes which questions get asked in the first place.

The tobacco industry provides a well-documented example. Research funded through the industry’s Center for Indoor Air Research was significantly more likely to focus on secondhand smoke than independently funded projects (63% vs. 30%). But here’s the telling detail: of those secondhand smoke studies, 67% focused on measuring exposure levels rather than health effects. Only 11% examined whether secondhand smoke actually harmed people. The research wasn’t necessarily producing false results. It was strategically designed to divert attention away from the most damaging questions.

Similarly, food industry-funded trials were significantly less likely to study dietary behaviors than independently funded ones (33% vs. 67%). The funding doesn’t just influence answers. It influences which questions never get asked.

Predatory Journals and Broken Gatekeeping

Peer review is supposed to be science’s quality filter. Before a study gets published in a reputable journal, other experts in the field evaluate whether the methods are sound and the conclusions are justified. Predatory journals bypass this entirely. Since the term was first coined in 2010, the number of publishers engaging in minimal or nonexistent peer review has grown considerably.

These journals charge authors publication fees while providing little to no editorial oversight. At least one major predatory publisher has been found in a U.S. federal court to not engage in peer review at all, to fraudulently misrepresent its impact factor, and to deceive authors about publication costs. The problem isn’t just that bad research gets published in these journals. It’s that these papers then get cited in legitimate systematic reviews and meta-analyses. In one study that examined hundreds of journal titles from a single predatory publisher, only one title was indexed in MEDLINE (the main biomedical database), yet papers from these journals still found their way into Google Scholar and, from there, into the broader scientific conversation.

The Wakefield Case: Bad Science in Action

Few examples illustrate the full anatomy of bad science as clearly as the 1998 study by Andrew Wakefield claiming a link between the MMR vaccine and autism. Published in The Lancet, one of the world’s most prestigious medical journals, the study had a sample size of just 12 children. That alone should have limited its influence, but the claims were dramatic enough to generate global media coverage.

The problems ran deep. Wakefield had failed to disclose that he was being funded by lawyers representing parents in lawsuits against vaccine manufacturers. The study claimed its patient sampling was consecutive (meaning unbiased), when it was actually selective. Wakefield’s team had conducted invasive medical procedures on children without proper ethical clearance. Ten of the 12 co-authors eventually retracted their support for the paper’s interpretation, acknowledging that the data was insufficient to establish any causal link.

The Lancet fully retracted the paper in February 2010. Subsequent investigation revealed that Wakefield and his colleagues had cherry-picked and falsified data to suit their case. The study wasn’t just wrong. It was fraudulent. Yet its effects persist: vaccine hesitancy fueled by this single retracted paper continues to influence public health decisions more than two decades later.

How to Spot Bad Science Yourself

You don’t need a PhD to evaluate scientific claims. A few practical questions can filter out most bad science before it changes how you think or act. Start with the source: where did this information come from? A peer-reviewed journal is more reliable than a press release, which is more reliable than a social media post. But even peer-reviewed doesn’t mean bulletproof, as the Wakefield case shows.

Next, ask whether the claim represents the scientific community’s view or a single outlier study. Science builds knowledge through accumulation, not one dramatic finding. If a headline says “Study shows coffee cures cancer” but thousands of other studies don’t support that, the headline is misleading regardless of whether that single study is sound. Check whether any controversy is being blown out of proportion. Many media stories frame settled science as “debated” by giving equal weight to fringe positions.

Finally, look at the strength of the evidence. A randomized controlled trial with thousands of participants tells you more than a case study of five people. A finding that’s been replicated by independent teams is more trustworthy than one that hasn’t. And any claim where the people making it also profit from it deserves an extra layer of skepticism.