Autonomous weapons, systems that can select and engage targets without direct human input, pose a layered set of risks that span technical failures, rapid conflict escalation, legal gray zones, cybersecurity vulnerabilities, and the spread of cheap killing technology to groups that previously couldn’t access it. These aren’t hypothetical concerns. Military spending on AI systems is projected to grow from $22.4 billion in 2026 to $101 billion by 2034, and the technology is advancing faster than the international rules meant to govern it.
Misidentification and Civilian Harm
The most immediate risk is that an autonomous weapon hits the wrong target. On a battlefield, the difference between a combatant and a civilian often comes down to context that humans struggle with and machines handle even worse: a person surrendering, someone who was fighting but is now wounded, a farmer carrying a tool that looks like a weapon on a thermal camera. International humanitarian law requires forces to distinguish between fighters and protected people, but several nations have raised formal concerns about whether autonomous systems can actually do this.
The problem is especially sharp in situations that change fast. A combatant can become a protected person in seconds by surrendering or being injured. Recognizing that shift requires what legal experts call “contextual human judgment,” the ability to read intent, body language, and circumstances that don’t reduce neatly to sensor data. Determining whether a civilian is directly participating in hostilities, the legal test that makes them a lawful target, requires value judgments and assessments of intention that machines cannot reliably make.
When these systems get it wrong, the error has a specific name in international discussions: “unintended engagement.” The core concern isn’t that a military would deliberately target civilians. It’s that the operator’s intention doesn’t translate to the outcome the weapon produces. The greater the distance between the moment a human sets the system’s parameters and the moment it fires, the higher the chance of catastrophic mistakes.
Conflict Escalation Beyond Human Speed
Autonomous weapons compress the time between detecting a threat and responding to it. That speed is part of their appeal, but it also removes the pauses in which leaders reconsider, de-escalate, or negotiate. Military analysts have identified three specific ways these systems can drive conflicts to spiral.
First, they create what’s called a moral hazard. When a nation can fight without risking its own soldiers’ lives, decision-makers are emboldened to pursue more aggressive actions they’d otherwise avoid. The political cost of military action drops, making it an easier option to choose.
Second, when an autonomous system makes a mistake, the other side may not be able to tell it was a glitch rather than a deliberate attack. If an AI-controlled drone strikes the wrong target near a border, the affected nation has to decide whether it was an accident or an act of war, and it has to decide quickly. The difficulty of distinguishing errors from deliberate hostile actions is a direct path to unintended escalation.
Third, when both sides deploy autonomous systems, the pace of warfare accelerates beyond what human command structures can manage. Decisions that once took hours compress into seconds. This reduces opportunities for reflection and restraint, making it more likely that conflicts cross critical thresholds before anyone fully understands what’s happening. Some analysts compare the risk to financial “flash crashes,” where automated trading systems interact in ways that produce sudden, massive collapses no one intended.
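The flash-crash comparison can be made concrete with a toy model. The sketch below is purely illustrative and assumes nothing about any real system: two abstract “postures” (alert or response levels) each react to the other’s last move with a slight over-reaction plus occasional noise, and the only thing that changes between runs is how often a decision is taken, standing in for a machine-speed loop versus a human-paced one.

```python
import random

def simulate(reaction_gain, steps_per_decision, threshold=10.0, max_steps=1000, seed=0):
    """Toy escalation model: two automated postures reacting to each other.

    Each side's posture (an abstract alert level) is updated in reaction to
    the other side's last observed posture, scaled by a slight over-reaction
    and nudged by a small random shock (a misreading or glitch).
    'steps_per_decision' stands in for decision tempo: a machine-speed loop
    updates every step, a human-paced loop only occasionally. Returns the
    step at which either posture crosses the critical threshold.
    """
    rng = random.Random(seed)
    a, b = 1.0, 1.0
    for step in range(1, max_steps + 1):
        if step % steps_per_decision == 0:
            shock_a = rng.uniform(-0.1, 0.3)
            shock_b = rng.uniform(-0.1, 0.3)
            a = reaction_gain * b + shock_a   # A reacts to B's last posture
            b = reaction_gain * a + shock_b   # B reacts to A's new posture
        if a >= threshold or b >= threshold:
            return step
    return None

print("machine-speed loop crosses the threshold at step:",
      simulate(reaction_gain=1.15, steps_per_decision=1))
print("human-paced loop crosses the threshold at step:  ",
      simulate(reaction_gain=1.15, steps_per_decision=25))
```

Both runs follow exactly the same sequence of decisions; the human-paced loop simply spreads them over twenty-five times as many time steps. That extra elapsed time is the window in which leaders could notice the spiral and break it. At machine speed, the window collapses.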
The Accountability Gap
When a conventional weapon kills civilians, there’s a chain of responsibility: the soldier who pulled the trigger, the officer who gave the order, the commander who planned the operation. With autonomous weapons, that chain frays. If a system independently selects and strikes a target that turns out to be a school, who is legally responsible? The programmer who wrote the targeting algorithm months earlier? The commander who deployed the system in a general area? The manufacturer?
This “responsibility gap” has no settled legal answer. The autonomy of these weapons widens the separation between the system’s effects and the humans closest to those effects, potentially to the point where no one can be held accountable. Under existing international law, state responsibility for wrongful conduct typically requires a causal link between a person’s actions and the harm. When an algorithm makes the final decision, that link becomes difficult to establish. The result could be a class of weapons where serious violations of the laws of war occur and no one is meaningfully held responsible.
Emergent Behavior and Unpredictability
Complex AI systems sometimes behave in ways their designers never programmed or anticipated. This is known as emergent behavior: actions that arise from the interaction of many subsystems and can’t be predicted from looking at any individual component. As the cybernetics pioneer Norbert Wiener warned in 1960, “machines can and do transcend some of the limitations of their designers, and in doing so they may be both effective and dangerous.”
For autonomous weapons, this creates a paradox. Making systems more adaptive and capable of handling real-world complexity can actually make them less predictable. A system that performs reliably across a wide range of scenarios may do so precisely because it can improvise, but improvisation in a weapons system is a fundamentally different proposition than improvisation in, say, a chess program. This tension between reliability and predictability poses serious challenges for testing and certification. You can verify that a system works correctly in thousands of simulated scenarios, but you can’t guarantee it won’t encounter a situation in the real world that triggers behavior no one foresaw.
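The testing problem has a concrete statistical face. As a rough sketch (an illustration of the arithmetic, not a description of any actual certification process), suppose a system passes n independently sampled test scenarios with zero failures. The strongest claim that supports is an upper confidence bound on the failure rate of about 3/n, the so-called rule of three, and the bound only covers inputs drawn from the same distribution as the tests.

```python
def failure_rate_upper_bound(n_trials, confidence=0.95):
    """Upper confidence bound on failure probability after n clean trials.

    With zero failures in n independently drawn test scenarios, the exact
    one-sided bound is 1 - (1 - confidence)**(1/n); the familiar
    'rule of three' approximates the 95% bound as 3/n. Either way, the
    guarantee applies only to inputs from the tested distribution.
    """
    exact = 1.0 - (1.0 - confidence) ** (1.0 / n_trials)
    rule_of_three = 3.0 / n_trials
    return exact, rule_of_three

for n in (1_000, 10_000, 100_000):
    exact, approx = failure_rate_upper_bound(n)
    print(f"{n:>7} clean trials -> failure rate <= {exact:.6f} (rule of three: {approx:.6f})")
```

Even a hundred thousand clean trials bound the failure rate at only about one engagement in thirty-three thousand, and they say nothing about situations the test distribution never sampled, which is exactly where emergent behavior appears.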
Hacking, Spoofing, and Hijacking
Every autonomous weapon depends on software and communications links, and both can be exploited. By building weapons around digital components, militaries are introducing vulnerabilities that don’t exist with conventional arms. A rifle can’t be hacked. An autonomous drone can.
The attack methods range from subtle to dramatic. “Spoofing” involves simulating a friendly control signal to trick the weapon into following an adversary’s commands. Software bugs in the governing code can be exploited to seize control. The communication tether between an autonomous weapon and its operator, meant as a safety feature, also provides a backdoor that electronic warfare programs can target. Unlike intercepting a human soldier’s communications, which yields intelligence, intercepting an autonomous weapon’s communications could let an adversary hijack the entire system and turn it against its owners.
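Spoofing works when the control link treats any well-formed message as legitimate. The sketch below shows where that vulnerability lives; the message format, field names, and pre-shared key are hypothetical placeholders, not any real military datalink, and an actual system would use managed keys and hardened protocols. The receiver accepts a command only if an HMAC tag over the payload and a monotonically increasing counter verifies, so a forged command (the attacker lacks the key) or a replayed one (a stale counter) is rejected.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"example-key-provisioned-before-launch"  # placeholder; real links use managed keys

def sign_command(payload: dict, counter: int, key: bytes = SHARED_KEY) -> dict:
    """Attach a message counter and an HMAC-SHA256 tag to an outgoing command."""
    body = json.dumps({"payload": payload, "counter": counter}, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "counter": counter, "tag": tag}

def verify_command(message: dict, last_counter: int, key: bytes = SHARED_KEY) -> bool:
    """Accept a command only if its tag verifies and its counter is fresh."""
    body = json.dumps(
        {"payload": message["payload"], "counter": message["counter"]}, sort_keys=True
    ).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        return False                               # forged or corrupted command
    return message["counter"] > last_counter       # stale counter means a replay

genuine = sign_command({"cmd": "return_to_base"}, counter=42)
forged = {"payload": {"cmd": "loiter"}, "counter": 43, "tag": "0" * 64}
print(verify_command(genuine, last_counter=41))  # True
print(verify_command(forged, last_counter=41))   # False: attacker cannot produce a valid tag
print(verify_command(genuine, last_counter=42))  # False: replay of an already-used counter
```

A link without this kind of authentication gives anyone who can transmit on the right frequency the same authority as the legitimate operator.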
These systems also lack the basic self-awareness to recognize when they’re receiving faulty data. A human pilot who gets contradictory instrument readings can use common sense and situational awareness to question the information. An autonomous system has no general frame of reference against which to measure whether its inputs make sense, making it fundamentally more susceptible to deception.
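What such a system can do is run narrow consistency checks, and a small sketch illustrates both the idea and its limits. The field names and threshold here are assumptions chosen for illustration: a reported GPS fix is rejected if it implies the platform moved faster than it physically can since the last trusted fix. A check like this catches gross jumps, but a spoofer who walks the reported position off course gradually never trips it, which is the gap between a fixed rule and a pilot’s situational judgment.

```python
import math

def plausible_fix(last_fix, new_fix, elapsed_s, max_speed_mps=60.0):
    """Reject a position fix implying faster-than-possible movement.

    Positions are (x, y) in metres in a local frame. This is the narrow kind
    of plausibility check an autonomous system can apply to its own inputs:
    it flags physically impossible jumps, but a spoofed position that drifts
    slowly stays under the speed threshold and is accepted.
    """
    dx = new_fix[0] - last_fix[0]
    dy = new_fix[1] - last_fix[1]
    return math.hypot(dx, dy) <= max_speed_mps * elapsed_s

last = (0.0, 0.0)
print(plausible_fix(last, (5_000.0, 0.0), elapsed_s=10.0))  # False: a 500 m/s jump is flagged
print(plausible_fix(last, (500.0, 0.0), elapsed_s=10.0))    # True: a gradual 50 m/s drift passes
```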
Proliferation to Non-State Actors
Autonomous weapons technology is getting cheaper and more accessible. Commercial drones capable of GPS-guided autonomous flight can be purchased on Amazon or eBay for a few hundred dollars. Violent non-state actors have already used these off-the-shelf drones for assassination attempts, bomb drops, kamikaze strikes, and surveillance missions. The convergence of cheaper hardware, autonomous navigation software, and do-it-yourself payload modifications has amplified the asymmetric threat these groups pose.
The concern isn’t just individual drones. Autonomous swarm technology, where dozens or hundreds of small drones coordinate without human operators, is rapidly moving from military research labs into the commercial sector. Once the software to coordinate a swarm exists, it’s difficult to keep it from spreading. Unlike nuclear weapons, which require rare materials and massive infrastructure, the components for autonomous weapons are dual-use commercial products. This makes traditional arms control approaches, built around controlling access to specialized materials, largely ineffective.
Where International Regulation Stands
Efforts to regulate autonomous weapons have been underway at the United Nations since 2014, primarily through a Group of Governmental Experts operating under the Convention on Certain Conventional Weapons. Progress has been slow. The group operates by consensus, meaning any single nation can block agreement. As of 2024, negotiations are focused on formulating “elements of an instrument” to address lethal autonomous weapons, without even agreeing on whether that instrument should be a binding treaty or voluntary guidelines.
UN Secretary-General António Guterres has called on member states to conclude a legally binding instrument prohibiting lethal autonomous weapons by 2026. Several countries, particularly in the Global South, have pushed for outright bans or strict regulations. But major military powers investing heavily in the technology have generally resisted binding restrictions.
The concept of “meaningful human control” has become the central framework in these discussions. The International Committee of the Red Cross has argued this control should apply specifically to the critical functions of selecting and engaging targets. Some states want it extended to every stage, from design through deployment. For human control to qualify as meaningful, the operator must be able to actually influence the system’s behavior, not just press a button that rubber-stamps an algorithm’s decision. Defining and enforcing that standard remains the core unresolved challenge, and the technology continues to outpace the diplomacy meant to constrain it.

