The use of lethal robots in conflict is inevitable. When it happens, it’ll create a significant shift in the ways of warfare. A discussion has already begun (see here and here) on how such capabilities might be developed and applied.
Robots in general are becoming smaller, smarter, cheaper and more ubiquitous. Lethal robots are becoming more deadly and discriminating. The degree of autonomy will be a key driver of a robot’s role in conflict and is likely to evolve in three generations: the semi-autonomous, the restricted-autonomous, and ultimately the fully-autonomous generation.
We’re already a decade into the semi-autonomous generation—using robots to kill people but with humans still in the decision loop. Technology and cost factors mean the semi-autonomous generation has—so far—been dominated by states. Moreover, the targeting of senior-level decision makers has come to be regarded as a legitimate and effective tactic. ‘Targeted killings’ by states with drones, aircraft, missiles or occasionally Special Forces raids have become common. As lethal robots proliferate they’ll increasingly be used for such missions because of their low cost and risk.
The motivation for states to regulate the use of semi-autonomous lethal robots has waned in recent years as more states develop the capability. And, in response to the potential for criticism and retaliation, states may increasingly seek to make the actions of their robots “plausibly deniable.” We can discern aspects of that approach already with US drone strikes in Yemen and Somalia, as well as parallels regarding the use of cyber weapons and clandestine or proxy forces.
Non-state actors seem poised to join the contest by adding a remotely-detonated explosive charge to a commercially-available off-the-shelf drone. Such a device would likely be guided to its desired target through an on-board camera.
The semi-autonomous generation of lethal robots remains focused on distinguishing between combatants and non-combatants and holding individuals accountable for the decision to use lethal force—a constraint adopted by states through the Geneva Conventions. Many non-state forces reject this distinction and as a consequence they are more likely to employ lethal robots without a human in the loop—a shift that will likely first be manifest in the restricted-autonomous generation. That generation will involve the use of autonomous robots to kill people within a designated space or timeframe.
The Global Positioning System will be critical to enabling that capability. Absent a human in the loop, the robot will be programmed to make the “decision” to kill. Distinguishing between combatant and non-combatant will be less critical. Targeting will occur on the basis of humans simply being in a certain place at a certain time. Restricted-autonomous lethal robots require less supervision and appear to provide more operational flexibility than the semi-autonomous generation.
Restricted-autonomous generation lethal robots are particularly appealing to non-state actors or irregular forces seeking to optimise their “terrorising effect” by generating mass casualties. Commercially available drones and other robots adapted by non-state forces to achieve lethal missions will likely appear first; later, copies of state-produced lethal drones may become more prevalent.
Restricted-autonomous generation lethal robots would also have tactical utility for state forces. In the initial operations, states would likely endeavour to provide early warning to non-combatants to move clear of a certain area. But as this generation of lethal robots becomes more capable such constraints are likely to be progressively relaxed as the robots determine targeting priorities based on “signature behaviour.” The restricted-autonomous generation will likely also witness the introduction of counter-robot robots tasked with finding and destroying lethal robots.
The use of restricted-autonomous lethal robots will also encourage enemy combatants to use non-combatants as human shields to deter attacks. While that process likely won’t witness the abandonment of the Geneva Conventions, invariably the West’s current high bar is likely to be called into question. That, in turn, will likely spur the development of technology for robots to discriminate between combatants and non-combatants.
The fully-autonomous generation of lethal robots will witness the use of fully autonomous robots to kill designated combatants. The zenith of that generation can be imagined by reference to the 1984 movie The Terminator, but in reality a fully-autonomous lethal robot is more likely to take a functional rather than humanoid form.
The technology used to discriminate between combatants and non-combatants is difficult to imagine at present and may well be multi-faceted, combining visual, magnetic and explosive vapour sensors with biometric databases and behavioural algorithms. The cost of such systems means states will likely be the first to leverage this generation of lethal robot. Non-state actors are unlikely to be able to afford, or see the benefit in using, fully-autonomous lethal robots. In the near-term, though, those groups are on the cusp of using lethal robots built from commercially available technologies. It’s in this area that states must urgently develop countermeasures.
Marcus Fielding served as a senior officer in the Australian Army with operational experience in Pakistan, Afghanistan, Haiti, East Timor and Iraq. He holds degrees in science, engineering, defence studies, business administration, military arts and science as well as strategic studies. This article was first published in The Strategist, the Australian Strategic Policy Institute blog, and is reprinted with kind permission.