Dual use refers to any research, technology, or material that serves a legitimate purpose but could also be repurposed to cause serious harm. The term comes up most often in the life sciences and national security, where the same experiment that advances medicine could, in the wrong hands, help create a biological weapon. Understanding dual use matters because it shapes how governments regulate everything from laboratory research to international equipment sales to artificial intelligence.
The Core Idea Behind Dual Use
A kitchen knife can chop vegetables or hurt someone. Dual use applies that same logic to advanced science and technology, but at far higher stakes. The U.S. government formally defines dual use research of concern (DURC) as life sciences research that can reasonably be anticipated to provide knowledge, products, or technologies that could be directly misapplied to pose a significant threat to public health, agriculture, the environment, or national security.
The concept isn’t limited to biology. Export control regimes use “dual use” to describe industrial chemicals, manufacturing equipment, and software that have peaceful commercial applications but could also contribute to weapons programs. A fermentation tank designed for brewing beer, for instance, could theoretically be repurposed to grow dangerous microorganisms. The dual use label triggers additional scrutiny, licensing requirements, or outright restrictions on who can access these items.
What Makes Research “Of Concern”
Not all life sciences research qualifies as dual use. U.S. federal policy, established in 2012 and expanded in 2014, narrows the scope to experiments involving 15 specific agents and toxins (including anthrax, Ebola, and avian influenza) combined with seven categories of experiments. Those categories capture research that:
- Enhances harm: makes a pathogen more dangerous or lethal
- Defeats vaccines: disrupts immunity or renders immunizations ineffective
- Creates drug resistance: helps a pathogen evade treatments or detection methods
- Increases spread: boosts a pathogen’s stability, transmissibility, or ability to be dispersed
- Expands host range: allows a pathogen to infect species it couldn’t before
- Increases vulnerability: makes a population more susceptible to infection
- Resurrects extinct threats: generates or reconstructs an eradicated pathogen
If a proposed experiment checks both boxes (listed agent, listed category), it triggers a formal review process before the work can proceed.
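The two-condition test can be sketched as a simple predicate. This is purely illustrative: the agent and category names below are examples standing in for the official federal lists, not the lists themselves.

```python
# Illustrative sketch of the DURC two-condition test.
# Agent and category names are placeholders, NOT the official federal lists.

DURC_AGENTS = {
    "Bacillus anthracis",  # anthrax
    "Ebola virus",
    "Highly pathogenic avian influenza virus",
}

DURC_CATEGORIES = {
    "enhances_harm",
    "defeats_vaccines",
    "creates_drug_resistance",
    "increases_spread",
    "expands_host_range",
    "increases_vulnerability",
    "resurrects_extinct_threat",
}

def requires_durc_review(agent: str, effects: set[str]) -> bool:
    """A project triggers formal review only if it involves a listed
    agent AND falls into at least one listed experiment category."""
    return agent in DURC_AGENTS and bool(effects & DURC_CATEGORIES)

# A transmissibility study on a listed agent triggers review...
print(requires_durc_review("Ebola virus", {"increases_spread"}))   # True
# ...but the same experiment on an unlisted agent does not.
print(requires_durc_review("E. coli K-12", {"increases_spread"}))  # False
```

Note that both conditions must hold: a benign experiment on a listed agent, or a worrying experiment on an unlisted agent, falls outside the formal DURC definition.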
The H5N1 Controversy That Changed the Rules
The debate over dual use became very public in late 2011 when two research teams, one led by Ron Fouchier at Erasmus Medical Center in the Netherlands and the other by Yoshihiro Kawaoka at the University of Wisconsin, submitted papers describing how they had modified the H5N1 avian flu virus so it could spread between ferrets through respiratory droplets. Ferrets are a standard model for human flu transmission, so the implication was clear: these mutations could potentially make H5N1 transmissible between people.
H5N1 is extraordinarily lethal in humans but had never naturally evolved the ability to spread easily from person to person, despite decades of human contact with infected poultry. The U.S. National Science Advisory Board for Biosecurity (NSABB) reviewed both papers and initially recommended they be published only in redacted form, stripped of the methodological details someone would need to replicate the work; it later cleared revised versions of both papers for full publication in 2012. Fouchier, Kawaoka, and 37 other influenza researchers then announced a voluntary 60-day pause on related experiments while the scientific community debated how to handle the findings; in practice, the moratorium stretched to roughly a year.
The episode forced a broader reckoning. It demonstrated that a single well-intentioned study could hand someone a roadmap to a pandemic pathogen, and it led directly to stronger federal oversight policies for dual use research.
How Institutions Review Dual Use Research
At U.S. universities and research institutions, the principal investigator running a project is responsible for flagging when their work involves agents on the DURC list. They submit a formal review application to an Institutional Review Entity (IRE), typically a subcommittee of the Institutional Biosafety Committee. The IRE evaluates whether the project meets the federal definition of dual use research of concern.
If it does, the IRE works with the researcher to draft a risk mitigation plan. This might include restricting who has access to certain data, adding biosafety containment measures, or limiting which details get published. The mitigation plan then goes to the relevant federal funding agency for approval before the research moves forward. The entire system relies on researchers self-identifying their work as potentially dual use, which is why training and awareness are critical parts of the framework.
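The review sequence described above can be summarized as an ordered pipeline. The step names below are paraphrases of the process in this section, not official policy terminology:

```python
# Sketch of the institutional DURC review sequence described above.
# Step names are illustrative paraphrases, not official terminology.

REVIEW_STEPS = [
    "PI flags project as involving a DURC-listed agent",
    "IRE evaluates project against the federal DURC definition",
    "IRE and PI draft a risk mitigation plan",
    "Federal funding agency approves the mitigation plan",
    "Research proceeds under the approved plan",
]

def next_step(completed: int) -> str:
    """Return the next pending step, given how many are already done."""
    if completed >= len(REVIEW_STEPS):
        return "review complete"
    return REVIEW_STEPS[completed]

print(next_step(0))  # The pipeline starts with the PI self-identifying.
```

The ordering makes the system's weak point visible: everything downstream depends on step one, the researcher's own self-identification.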
Dual Use in Export Controls
Beyond the laboratory, dual use shapes international trade policy. The Australia Group, a coalition of over 40 countries, maintains shared control lists covering chemical weapons precursors, biological equipment, human and animal pathogens, plant pathogens, and related manufacturing technology and software. If you want to export a high-end centrifuge, certain fermentation systems, or specific chemical compounds to another country, these lists determine whether you need a special license or whether the sale is blocked entirely.
The logic is straightforward: countries want their industries to sell legitimate products while preventing those same products from ending up in weapons programs. The lists are updated regularly as new technologies emerge and threat assessments evolve.
AI as a New Dual Use Frontier
Artificial intelligence is rapidly becoming one of the most significant dual use concerns. AI models designed to fold proteins, discover new drugs, or design novel molecules can also be used to identify compounds with increased toxicity or to engineer dangerous biological agents. Prominent scientists in the field have warned that AI protein design models are vulnerable to misuse for producing dangerous biological agents.
What makes AI particularly tricky is accessibility. Open-source protein and molecular design tools are being integrated with chatbot interfaces that make them usable by people without deep scientific training. A tool built to accelerate drug discovery could, with relatively minor changes in how it’s prompted, help someone design something harmful. This represents a shift from traditional dual use concerns, where misuse typically required significant expertise and physical laboratory access.
DNA Synthesis Screening
One practical line of defense sits at the companies that synthesize custom DNA sequences. Researchers routinely order specific genetic sequences for their experiments, and the U.S. government has established a screening framework for these providers. The system works on two levels: customer screening verifies the identity and legitimacy of the person placing the order, while sequence screening checks whether the requested DNA matches sequences from known dangerous pathogens or toxins.
When either screening raises a flag, the company follows up before filling the order. This acts as a gatekeeping function, making it harder for someone to quietly order the genetic building blocks of a Select Agent without triggering scrutiny. It’s not foolproof, but it adds a meaningful checkpoint between intent and capability.
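The two-level screening described above can be sketched as a gatekeeping function. The customer registry and flagged sequence fragments below are invented placeholders, not a real provider's database or a real sequences-of-concern list:

```python
# Illustrative sketch of two-level DNA synthesis order screening:
# customer screening first, then sequence screening.
# All data here is a made-up placeholder, not real screening content.

VERIFIED_CUSTOMERS = {"university-lab-001", "biotech-042"}

# Placeholder fragments standing in for a sequences-of-concern database.
FLAGGED_MOTIFS = ["ATGCCCGGGAAATTT"]

def screen_order(customer_id: str, sequence: str) -> str:
    """Return a disposition for an order: clear it, or hold for review."""
    # Level 1: verify the identity and legitimacy of the purchaser.
    if customer_id not in VERIFIED_CUSTOMERS:
        return "hold: verify customer identity"
    # Level 2: check the requested DNA against flagged sequences.
    if any(motif in sequence.upper() for motif in FLAGGED_MOTIFS):
        return "hold: sequence matches a flagged pathogen motif"
    return "clear to fill"

print(screen_order("university-lab-001", "GGATCC"))  # clear to fill
print(screen_order("anonymous-buyer", "GGATCC"))     # hold: verify customer identity
```

Either check alone would be easier to defeat; requiring both an unverified identity and an unflagged sequence to slip through is what gives the checkpoint its value.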
The Global Framework
The World Health Organization published a global guidance framework for the responsible use of life sciences that frames dual use governance as a shared responsibility. The document targets policymakers, regulators, scientists, research institutions, funding bodies, publishers, and private sector actors, essentially everyone involved in the research lifecycle. Its core message is that mitigating biorisks requires coordinated action across all these groups, not just rules imposed from the top down.
Different countries handle dual use oversight differently. The U.S. has its federal DURC policies, the EU has its own export control regulations, and many nations lack formal frameworks entirely. This patchwork creates gaps, since a researcher in one country may face rigorous review while a colleague doing identical work elsewhere faces none. The WHO framework aims to close those gaps by providing a common set of values and tools that any country can adopt, though implementation remains uneven.