What Is Junk Science? Definition and Warning Signs

Junk science is research or data that looks scientific on the surface but fails to meet basic standards of scientific rigor. It can show up as flawed forensic evidence in a courtroom, industry-funded studies designed to confuse the public, or viral health claims on social media with no real data behind them. The term was popularized by Peter Huber more than 30 years ago, originally in the context of unreliable expert testimony in legal cases, but it has since expanded to describe any misuse or distortion of scientific methods to support a predetermined conclusion.

Understanding junk science matters because it shapes real decisions: court verdicts, government regulations, and personal health choices. The line between solid evidence and junk isn’t always obvious, especially when the junk comes wrapped in the language and appearance of legitimate research.

How Junk Science Differs From Real Science

Legitimate science operates on a simple but powerful principle: claims must be testable, and researchers must actively look for evidence that could prove those claims wrong. This concept, called falsifiability, is the core dividing line. Real science invites scrutiny. Junk science avoids it.

Several specific patterns set junk science apart:

  • Unfalsifiable claims. Junk science often uses “escape hatches” to explain away negative results rather than accepting them as evidence against a theory. If no possible outcome could disprove the claim, it isn’t science.
  • No peer review. Legitimate research is evaluated by independent experts before publication. Junk science bypasses this process, either assessing its own claims in isolation or dismissing peer review as biased.
  • Refusal to evolve. Sound science updates when new evidence contradicts old conclusions. Junk science stagnates, with faulty claims persisting for decades despite contradictory data.
  • Overreliance on anecdotes. Personal testimonials and hunches replace systematic, reproducible data.
  • Extravagant accuracy claims. Advocates frequently assert near-100% accuracy rates that far exceed what any real testing has demonstrated.

Pseudoscience and junk science overlap but aren’t identical. Pseudoscience is typically a belief system that mimics the form of science without following its methods, like astrology or phrenology. Junk science is broader. It includes pseudoscience but also covers legitimate-looking research that is poorly designed, selectively reported, or deliberately manipulated to serve a specific agenda.

The Reproducibility Problem

Even mainstream science struggles with reliability, which makes spotting junk science harder. In a large survey of biomedical researchers, 72% agreed there is a reproducibility crisis in their field. Nearly half of participants said they had tried to replicate another team’s published study and failed. Even more striking, 23% reported failing to replicate their own published work. Only 5% of researchers believed that more than 80% of biomedical research was reproducible.

In psychology, a landmark project attempted to replicate 100 foundational studies from top journals. Only 36% produced statistically significant results on the second attempt, compared to 97% of the originals. These numbers don’t mean all non-reproducible research is junk science, but they show how much published work sits on shakier ground than most people assume. When even well-intentioned studies frequently can’t be replicated, deliberately flawed research can hide in plain sight.

How Industry Has Weaponized Bad Science

The tobacco industry wrote the playbook for using junk science strategically. Internal documents later made public revealed that seven of the world’s largest tobacco companies colluded as early as 1977 in what they called “Operation Berkshire” to promote doubt about the link between smoking and disease. Their goal wasn’t to prove cigarettes were safe. It was to create enough confusion that regulators and the public couldn’t act with confidence.

The strategy was deliberate: enlist credentialed scientists to make the industry’s case so the industry itself stayed out of view. Tobacco companies recruited faculty from prestigious universities and medical schools, funding their work through undisclosed “special project” awards. One Yale professor repeatedly argued that the methods used to assess secondhand smoke risks were flawed, without revealing he was a paid tobacco industry consultant. Philip Morris funneled money to Harvard’s Center for Risk Analysis, whose director helped the company craft messaging about environmental tobacco smoke.

This approach worked because it exploited a genuine feature of science: uncertainty. No single study is ever perfect, and there are always methodological questions to raise. By amplifying those normal uncertainties, tobacco companies delayed public health action for decades. The same template has since been applied to environmental regulation, where proposals to require independent replication of every underlying study before a rule can take effect would, in practice, stall protections that already took years to develop.

Junk Science in the Courtroom

Courts have a specific framework for keeping unreliable science out of trials. The Daubert standard, used in federal courts, requires judges to evaluate expert testimony against five criteria: whether the method has been tested, whether it has undergone peer review, its known error rate, whether standards exist for its application, and whether it is generally accepted within the relevant scientific community.

A December 2023 amendment to the federal rules of evidence made this gatekeeping stricter. Judges must now confirm that it is “more likely than not” that expert testimony reflects reliable methods reliably applied to the facts. The change came because too many courts had been admitting expert testimony without meaningful scrutiny. In one 2025 appeals case, a circuit court reversed a lower court’s decision, noting that the judge had evaluated four separate challenges to expert testimony in a single hearing lasting just over an hour, with less than 30 minutes devoted to two of the experts combined.

Despite these safeguards, junk science remains especially persistent in criminal cases. Chris Fabricant of the Innocence Project has documented what he calls “poor people science,” a disparity where civil litigation (where money is at stake) tends to attract more rigorous scientific evidence, while criminal cases (where freedom is at stake) rely on forensic methods that have never been properly validated. Bite mark analysis, hair microscopy, and certain bloodstain pattern interpretations have all faced challenges for lacking the kind of empirical foundation the Daubert standard demands.

How Junk Science Spreads Today

Social media has become the fastest channel for junk science to reach the public. Platforms are saturated with health hacks and wellness products, from claims that sea moss smoothies cure chronic illness to posts suggesting lemon juice can treat heartworm in dogs or that castor oil compresses break up tumors. These claims spread because they offer simple, appealing solutions, and because most people scrolling through a feed aren’t equipped to evaluate the evidence behind them.

Predatory journals add another layer of legitimacy to bad science. These publications mimic the appearance of respected scientific journals, complete with official-sounding names, fake impact factors, and fabricated editorial boards. They charge authors a fee to publish but skip meaningful peer review. When a journalist named John Bohannon submitted a deliberately flawed paper to 304 open-access journals, more than half accepted it despite errors so obvious that any competent reviewer would have caught them.

Predatory journals often claim to be indexed in major databases like PubMed or Web of Science, even when they aren’t. Some go further, creating counterfeit websites designed to look identical to legitimate journals. For researchers, the risk is wasted publication fees and a tarnished record. For the public, the risk is more serious: a paper published in what looks like a real journal gets cited in a news article or shared on social media, and suddenly a baseless claim carries the weight of “published research.” Most readers have no way to tell the difference between data from a rigorous journal and data from a predatory one.

How to Spot It Yourself

You don’t need a science degree to evaluate claims critically. Start with the source. Is the study published in a journal you can verify through established databases? Has it been covered by other researchers or science journalists, or does it exist in isolation? A single study making a dramatic claim that no one else has replicated is a red flag, not a breakthrough.

Look at the claim itself. If it promises near-perfect results, applies universally with no exceptions, or can’t be tested in a way that could prove it wrong, treat it with skepticism. Pay attention to whether the people promoting the claim have financial ties to the outcome. And be wary of anecdotes standing in for data. One person’s recovery story, no matter how compelling, tells you nothing about whether a treatment works across a population.
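The heuristics above can be sketched as a rough checklist. The flag names, wording, and thresholds below are purely illustrative, not a validated instrument; the point is simply that an unfalsifiable claim is disqualifying on its own, while other red flags accumulate:

```python
# Illustrative red-flag checklist for evaluating a scientific claim.
# All flag names and thresholds are hypothetical, chosen for this sketch.

RED_FLAGS = {
    "unfalsifiable": "No conceivable evidence could prove the claim wrong",
    "no_peer_review": "Not published in a verifiable, peer-reviewed venue",
    "not_replicated": "A dramatic result no independent team has reproduced",
    "near_perfect_accuracy": "Promises ~100% accuracy or universal results",
    "anecdotes_only": "Testimonials stand in for systematic data",
    "undisclosed_funding": "Promoters have financial ties to the outcome",
}

def assess(claim_flags: set[str]) -> str:
    """Count which known red flags apply and return a rough verdict."""
    hits = claim_flags & RED_FLAGS.keys()
    # Unfalsifiability is the core dividing line, so it trumps everything else.
    if "unfalsifiable" in hits:
        return "likely junk science (unfalsifiable claims are disqualifying)"
    if len(hits) >= 3:
        return f"treat with strong skepticism ({len(hits)} red flags)"
    if hits:
        return f"needs closer scrutiny ({len(hits)} red flag(s))"
    return "no obvious red flags; still verify the source"

print(assess({"anecdotes_only", "near_perfect_accuracy", "undisclosed_funding"}))
```

No real checklist reduces to six booleans, of course; the sketch just encodes the priority order argued in this section, with falsifiability first.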

The most useful habit is simply asking: what would it take to prove this wrong? If the answer is “nothing could,” or if the people making the claim seem uninterested in that question, you’re likely looking at junk science.