Why Gain-of-Function Research Ethics Remain Unresolved

Gain-of-function research occupies one of the sharpest ethical fault lines in modern science. It involves deliberately enhancing a pathogen’s capabilities, such as making a virus more transmissible or more virulent, to study how future pandemics might emerge. The core ethical tension is straightforward: this work could help prevent a catastrophe, or it could cause one. No single ethical framework has resolved that tension, which is why the debate has persisted for over a decade and recently intensified with new U.S. federal restrictions in 2025.

What Gain-of-Function Research Actually Involves

In the broadest sense, any experiment that alters an organism’s genetic makeup and gives it new traits counts as gain-of-function work. Most of it is routine and uncontroversial. The subset that draws ethical scrutiny is narrower: experiments that give dangerous pathogens properties they don’t naturally have, particularly increased ability to spread between people or increased severity of disease.

Yoshihiro Kawaoka, a prominent virologist at the University of Wisconsin-Madison, has outlined three categories. The first, which he called “gain-of-function research of concern,” involves creating viruses with properties that don’t exist in nature, like engineering an avian flu strain to spread through the air between mammals. The second involves making viruses somewhat more dangerous but still comparable to what already circulates in the wild. The third involves pathogens that look alarming in lab animals but don’t pose a realistic threat to humans. The ethical debate centers almost entirely on that first category.

The Scientific Case for Doing It

Proponents argue that gain-of-function experiments serve several purposes that are difficult or impossible to achieve any other way. The most frequently cited is pandemic preparedness. By identifying which genetic changes could make an animal virus transmissible in humans, researchers aim to build an early warning system. The U.S. Centers for Disease Control and Prevention, for example, developed the Influenza Risk Assessment Tool (IRAT), which ranks the danger posed by circulating flu strains, and the molecular data feeding that tool was generated through gain-of-function studies going back to the 1970s.

There’s also a vaccine development argument. When researchers identified genetic markers in an outbreak strain suggesting it could spread more easily in ferrets (the standard animal model for human flu transmission), that finding tipped the decision to move forward with vaccine development for that strain. Gain-of-function data has also been used to understand how viruses evade the immune system and develop drug resistance, both critical for designing treatments.

A deeper justification is more forward-looking. Scientists still cannot reliably predict what a virus will do based solely on its genetic sequence. Proponents argue that this inability is precisely why gain-of-function studies must continue: to build the library of genotype-to-phenotype connections that would eventually make prediction possible. As more of those links are established, it may become feasible to screen vaccine candidates and keep dangerous viral characteristics out of them.
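
To make that idea concrete, here is a minimal sketch of what marker-based screening could look like, assuming a simple lookup of known mutations. The function and library structure are hypothetical, not an existing tool; the marker names are well-known influenza mutations used purely as examples.

```python
# Minimal sketch: screen a candidate strain's mutation list against a
# library of genotype-to-phenotype risk markers. The lookup table and
# function are hypothetical; the marker names are well-known influenza
# mutations used here only as illustrations.

RISK_MARKERS = {
    "HA:Q226L": "shift toward human-type receptor binding",
    "HA:G228S": "shift toward human-type receptor binding",
    "PB2:E627K": "improved replication in mammalian cells",
}

def flag_risk_markers(observed_mutations: list[str]) -> dict[str, str]:
    """Return the observed mutations that match known risk markers."""
    return {m: RISK_MARKERS[m] for m in observed_mutations if m in RISK_MARKERS}

candidate = ["HA:Q226L", "NA:H274Y"]  # hypothetical candidate strain
print(flag_risk_markers(candidate))   # {'HA:Q226L': 'shift toward human-type receptor binding'}
```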

Two Ethical Frameworks, Neither Fully Satisfying

The ethical debate typically draws on two major philosophical approaches, and neither provides a clean answer.

The first is utilitarian risk-benefit analysis, which asks whether the expected benefits of the research outweigh the expected harms. In theory, this is simple arithmetic: multiply the value of each possible outcome by its probability, sum the results, and compare the totals. In practice, it breaks down for gain-of-function research because no one can confidently estimate the probability of a lab-created pandemic or the full scope of benefits the research might yield over decades. Critics point out that this framework can end up recommending actions with a small chance of catastrophic outcomes, as long as the expected average benefit looks positive on paper. For risks that could affect millions of people, many ethicists find that approach dangerously insufficient.
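
To see why the arithmetic looks tidy but settles nothing, consider a deliberately simplified sketch. Every probability and magnitude below is a hypothetical placeholder, not an empirical estimate; the point is how sensitive the verdict is to inputs no one can measure.

```python
# Naive expected-value comparison with hypothetical numbers.
# Every figure below is an illustrative placeholder, not a real risk estimate.

p_lab_pandemic = 1e-4   # assumed annual probability of a lab-seeded pandemic
harm_pandemic = 1e9     # assumed harm if that happens (arbitrary units)

p_useful_result = 0.5   # assumed chance the research yields preparedness gains
benefit = 2.5e5         # assumed value of those gains (same arbitrary units)

expected_harm = p_lab_pandemic * harm_pandemic  # 100,000
expected_benefit = p_useful_result * benefit    # 125,000

# Under these assumptions the expected-value rule endorses the research
# (125,000 > 100,000), even though the downside branch is a low-probability
# catastrophe. Nudge p_lab_pandemic up by half an order of magnitude and the
# verdict flips, which is the critics' point.
print(expected_benefit > expected_harm)  # True under these assumptions
```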

The second is the precautionary principle, which holds that when an activity raises threats of serious harm, uncertainty about those threats should not prevent protective action. A weak version simply says that uncertainty isn’t a reason to avoid cost-effective safety measures. Stronger versions would block the research unless it can be proven safe. The problem is that the precautionary principle can give contradictory advice here: both conducting the research and not conducting it carry serious, uncertain dangers. Failing to study how a pandemic pathogen might evolve could leave humanity unprepared. But performing the experiments creates the very threat you’re trying to prevent. The principle, applied strictly, argues against both options.

The H5N1 Controversy That Started It All

The ethical debate crystallized in 2011 when Ron Fouchier, a virologist in the Netherlands, announced that his lab had modified H5N1 avian influenza to transmit through the air between ferrets. Kawaoka’s lab soon reported a related set of experiments using reverse genetics to create a similar airborne-capable virus. H5N1 kills roughly 60% of people with confirmed infections but doesn’t spread easily between people. The idea that a lab had bridged that gap was alarming.

The controversy initially focused on biosecurity: whether publishing the specific mutations amounted to providing a recipe for a biological weapon. But a second concern quickly emerged that proved more durable. Critics questioned whether any institutional biosafety measures could adequately contain a pathogen that, by design, combined high lethality with high transmissibility. The experiments had created something that could, if released accidentally, seed a pandemic with no existing population immunity. Whether the scientific insights justified that risk became the central ethical question, and it remains unresolved.

Lab Accidents Are Not Hypothetical

One reason the ethical debate resists easy resolution is that laboratory containment failures are a documented reality, not a theoretical concern. A Chatham House review of the scientific literature identified 309 individuals with laboratory-acquired infections across 94 incident reports involving 51 different pathogens. The review also found 16 separate incidents where pathogens escaped biocontainment facilities entirely.

The historical record includes cases with severe consequences. In 1979, an accidental release of anthrax spores from a Soviet military facility in Sverdlovsk killed at least 64 people. After the SARS epidemic ended in 2003, the virus reappeared three separate times through laboratory accidents in Singapore, Taiwan, and China. In the 2004 Chinese incident, two workers at a national institute studying SARS were infected independently despite not handling the virus themselves; nine people were ultimately infected, and one died. And in 1973, a London laboratory technician contracted smallpox after observing work in a pox laboratory housed in the same building, seeding an outbreak that killed two people who had merely visited a patient in an adjacent hospital bed.

These incidents involved pathogens that were being studied, not ones that had been deliberately enhanced. The ethical implication is significant: if containment fails at a measurable rate with naturally occurring pathogens, the consequences of failure with an engineered pathogen possessing novel transmissibility or virulence could be far worse.
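
The same reasoning can be made numeric: even a small per-laboratory escape probability compounds across facilities and years. The sketch below uses a purely hypothetical rate to show the shape of the effect, not to estimate any real-world risk.

```python
# How a small per-lab annual escape probability compounds over many lab-years.
# The rate, lab count, and horizon are hypothetical placeholders.

p_escape_per_lab_year = 0.002  # assumed chance of one escape per lab per year
labs = 10                      # assumed number of labs doing comparable work
years = 10                     # time horizon in years

# Probability of at least one escape across all lab-years, treating each
# lab-year as an independent trial.
p_at_least_one = 1 - (1 - p_escape_per_lab_year) ** (labs * years)
print(f"{p_at_least_one:.1%}")  # ~18.1% over the full horizon
```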

Oversight and Where It Stands Now

The U.S. government has attempted to manage these risks through regulatory frameworks. The HHS P3CO Framework (P3CO stands for Potential Pandemic Pathogen Care and Oversight) established a multidisciplinary, pre-funding review process for proposed research expected to create or use pathogens with enhanced pandemic potential. The framework requires reviewers to weigh potential scientific and public health benefits against biosafety and biosecurity risks, and to evaluate whether adequate risk mitigation strategies exist.

That framework is now being overhauled. In May 2025, an executive order titled “Improving the Safety and Security of Biological Research” directed federal agencies to revise or replace existing oversight policies. The NIH announced it will no longer accept new grant applications for “dangerous gain-of-function research” submitted after May 7, 2025, and intends to suspend ongoing funding for such work. All NIH-funded researchers have been told to review their current projects, identify any that might qualify, and prepare to halt them.

Internationally, the World Health Organization published its Global Guidance Framework for the Responsible Use of the Life Sciences in 2022, aimed at helping member states mitigate biorisks and govern dual-use research. But international enforcement mechanisms remain limited, and oversight standards vary dramatically between countries, raising concerns that restrictions in one nation simply push the work elsewhere.

The Question of Alternatives

Part of the ethical calculus depends on whether safer methods can achieve the same goals. Critics of gain-of-function research argue that computer modeling, artificial intelligence-driven biological design tools, and other approaches can be equally effective for understanding pathogen evolution without creating dangerous new organisms. They point to strategies like developing universal vaccines that work against broad families of viruses, improving rapid vaccine manufacturing platforms, and creating antiviral drugs with wide-spectrum activity.

Proponents counter that computational models are only as good as the experimental data that feeds them. Without gain-of-function experiments to establish the link between specific genetic changes and real-world viral behavior, the models lack the ground truth needed to make reliable predictions. This disagreement is not purely scientific; it reflects a deeper ethical judgment about how much uncertainty is acceptable when the stakes include the possibility of a pandemic.

Why the Ethics Remain Unresolved

The ethical analysis of gain-of-function research resists a tidy conclusion because it involves genuinely competing values: scientific freedom, public safety, global health equity, and the prevention of both natural and engineered pandemics. The potential benefits are real but diffuse, spread across future preparedness gains that are hard to quantify. The potential harms are also real but probabilistic, concentrated in low-likelihood, high-consequence scenarios that standard risk-benefit tools handle poorly.

What most ethicists agree on is that the decision cannot rest with scientists alone. Research that could affect entire populations carries obligations of transparency and democratic accountability that go beyond what institutional biosafety committees were designed to provide. The 2025 policy shifts in the U.S. reflect a political judgment that the existing oversight structure was insufficient, though whether the new framework will strike a better balance remains to be seen.