Why Animal Testing Should Still Be Allowed

Animal testing remains a cornerstone of medical research because living organisms have a level of biological complexity that no current alternative can fully replicate. Every major medical advance of the past century, from antibiotics to organ transplants to cancer therapies, relied on animal models at some stage of development. The practice is controversial, but the case for allowing it rests on concrete scientific needs, regulatory realities, and the limitations of replacement technologies.

Most Major Medical Breakthroughs Required It

The discoveries of vitamins, hormones, antibiotics, safe blood transfusion, insulin, kidney dialysis, and cancer chemotherapy, along with the eradication of smallpox, all depended on animal research. The polio vaccine alone required millions of primates during its development in the 1950s. Penicillin, the polio vaccine, and insulin have collectively saved millions of lives and improved quality of life for countless more. These aren’t abstract claims confined to history books. If you’ve received a vaccination, taken an antibiotic, or benefited from a surgical technique developed in the last hundred years, animal research was almost certainly part of the process that made it safe for you.

More recently, animal models played a direct role in developing targeted cancer therapies. Trastuzumab, a drug used to treat an aggressive form of breast cancer, was tested in mouse models that helped researchers understand how it blocks tumor growth. Follow-up studies in mice showed that combining newer versions of the drug with immune checkpoint therapies significantly improved immune responses against tumors, a finding that opened new treatment strategies for patients who weren’t responding to existing options.

Living Bodies Are Too Complex to Simulate

The strongest scientific argument for animal testing is that no alternative can recreate the full complexity of a living organism. Cells grown in a dish behave differently from the same cell type inside a body. Organs compensate for stress in ways that isolated tissues cannot. Interactions between the circulatory system, immune system, and metabolism all influence how a drug behaves, and these interactions vanish the moment you remove cells from their natural environment.

Researchers writing in the EXCLI Journal identified five specific reasons why replacing animal experiments remains so difficult: it’s hard to account for how the body breaks down foreign substances, hard to capture interactions between different cell types, hard to translate real-world doses into lab concentrations, hard to simulate long-term exposure effects, and hard to predict how changes at the cellular level translate into actual health outcomes. As one example, the liver detoxifies ammonia through an intricate relay between two compartments of the liver lobule. Under stress, a key enzyme reverses its function entirely, switching from producing ammonia to consuming it. Researchers studying liver cells in the lab have found it extremely difficult to reproduce this switch outside the body. Processes that depend on organ architecture and compartmentalization remain far beyond what lab-grown systems can simulate.

Alternatives Are Promising but Not Ready

Organ-on-a-chip technology, which uses tiny devices lined with human cells to mimic organ function, is one of the most exciting developments in the field. Because these chips contain human cells, they may be more relevant to predicting human responses than animal models in some cases. But a 2025 report from the U.S. Government Accountability Office found that organ-on-a-chip systems currently cannot replace animal testing. They can only be used alongside it. The report cited a lack of benchmarks and validation studies, meaning researchers and drug companies don’t yet have a reliable way to measure how accurate these chips are compared to animal models or real clinical outcomes.

Computer modeling has similar limitations. The FDA now reviews computational simulations to help set initial human doses for some drugs, and in select cases these models can justify waiving certain animal studies. But these exceptions apply to narrow circumstances, such as when a drug targets a receptor that only exists in humans and the only available animal model would be a specially engineered mouse. For the vast majority of drugs, computer models and cell-based tests provide useful early data but cannot capture the full-body picture that regulators need before allowing human trials.

Regulators Still Require It (With Some Flexibility)

For decades, FDA rules effectively mandated animal testing before any new drug could enter human trials. That changed in late 2022 when Congress passed the FDA Modernization Act 2.0, which explicitly authorized the use of non-animal alternatives like cell-based assays and computer models to support new drug applications. The law also removed the requirement to use animal studies for certain biological products. This was a significant shift in principle, but in practice, the FDA still requires animal safety data for most drugs. The agency is issuing new guidance and offering case-by-case waivers, but sponsors can only skip animal studies if they provide adequate data from alternative methods, a bar that most current technologies can’t yet meet on their own.

Animal Models Aren’t Perfect Predictors

Honesty about the limits of animal testing actually strengthens the case for doing it thoughtfully rather than abandoning it. A study published in the British Journal of Cancer analyzed how well animal toxicity data predicted side effects in human cancer drug trials. The results were sobering: animal models had a median positive predictive value of 0.65, meaning that when animal studies flagged a toxic effect, it showed up in humans about 65% of the time. The negative predictive value was only 0.50, meaning that when animal studies showed no sign of a toxicity, the drug was genuinely free of it in humans only about half the time, no better than a coin flip. Blood-related side effects were the most reliably predicted.
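To make these two metrics concrete, here is a minimal sketch of how positive and negative predictive values are computed from a confusion matrix. The counts below are invented purely to reproduce the reported values of 0.65 and 0.50; they are not the actual figures from the British Journal of Cancer study.

```python
def predictive_values(tp, fp, tn, fn):
    """Return (PPV, NPV) from confusion-matrix counts.

    tp: animal study flagged a toxicity that appeared in humans
    fp: animal study flagged a toxicity that never appeared
    tn: animal study was clear, and humans were also clear
    fn: animal study was clear, but the toxicity appeared anyway
    """
    ppv = tp / (tp + fp)  # when animals flag a toxicity, how often humans show it
    npv = tn / (tn + fn)  # when animals are clear, how often humans are also clear
    return ppv, npv

# Hypothetical counts chosen to match the reported medians:
# 65 of 100 flagged toxicities appeared in humans -> PPV = 0.65
# 50 of 100 "clear" results were truly clear     -> NPV = 0.50
ppv, npv = predictive_values(tp=65, fp=35, tn=50, fn=50)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.65, NPV = 0.50
```

The asymmetry is the key point: a PPV of 0.65 means a flagged toxicity is a useful warning, while an NPV of 0.50 means a clean animal result offers little reassurance on its own.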

These numbers highlight a real weakness. But they also illustrate why animal testing persists: even an imperfect safety screen catches a meaningful percentage of dangerous effects before any human is exposed. Removing that screen without a validated replacement would mean more surprises in early human trials, where the consequences of unexpected toxicity can be severe.

Ethical Frameworks Govern How Animals Are Used

Modern animal research operates under a set of principles known as the 3Rs: Replacement, Reduction, and Refinement. Replacement means avoiding animal use when a non-animal method can answer the same question. Reduction means using the fewest animals possible while still producing reliable, reproducible results. Refinement means minimizing pain, suffering, distress, or lasting harm to any animal that is used. These principles, first articulated in 1959, now shape regulations and institutional oversight worldwide.

In the United States, research institutions that use animals must maintain an Institutional Animal Care and Use Committee that reviews every proposed experiment before it begins. The NIH’s Office of Laboratory Animal Welfare and the Institute for Laboratory Animal Research provide additional oversight. No researcher can simply decide to use animals without justifying why alternatives won’t work, how many animals are truly needed, and what steps will be taken to minimize suffering. The system isn’t flawless, but it creates structured accountability that has meaningfully reduced animal use over time.

Animals Benefit From the Research Too

Animal testing doesn’t only serve human medicine. Vaccines and surgical techniques developed through animal models are used in veterinary care for pets, livestock, and wildlife. Researchers first tested microsurgery techniques in rhesus monkeys and rabbits in the 1950s and 1960s, successfully using vascular microsurgery to restore blood flow in severed digits. Those techniques now benefit both human and animal patients. Research into organ transplant rejection using rat models has advanced our understanding of immune responses that apply across species. Even xenotransplantation, the transplant of organs from one species to another, has progressed through studies using genetically modified pig hearts in primates, work that could eventually address the chronic shortage of donor organs for humans.

Gene Therapy Development Depends on Animal Models

Some of the most promising frontiers in medicine rely heavily on animal research right now. CRISPR gene editing, which allows scientists to precisely correct disease-causing mutations, was used to prevent muscular dystrophy in mice by editing the defective dystrophin gene in their germline DNA. The corrected mice produced functional dystrophin protein, the molecule that’s missing or broken in patients with Duchenne muscular dystrophy. This kind of proof-of-concept work simply cannot be done in a petri dish, because the question isn’t whether you can fix a gene in isolated cells (you can) but whether that fix translates into a functioning body that develops normally and maintains the correction over a lifetime. That requires a whole organism.

The path from a corrected mouse to a human therapy is long and uncertain, but without the animal model, researchers would have no way to evaluate whether a gene editing approach is safe and effective enough to even consider testing in people. For genetic diseases like cystic fibrosis, sickle cell disease, and muscular dystrophy, animal models remain the bridge between laboratory discovery and clinical reality.