This week, U.S. Navy Secretary Ray Mabus announced that he will streamline the Navy’s efforts to keep up with advances in unmanned technology by appointing a new Deputy Assistant Secretary of the Navy for Unmanned Systems, who will bring “together all the many stakeholders and operators who are currently working on this technology.”
“Additionally, the Navy Staff will add a new office for unmanned in the N-9, the N-Code for Warfare Systems, so that all aspects of unmanned – in all domains – over, on and under the sea and coming from the sea to operate on land – will be coordinated and championed,” the secretary noted.
As Breaking Defense reported today, this may help the U.S. military’s push to acquire genuinely autonomous weapon systems. Breaking Defense’s Sydney Freedberg Jr. muses:
Imagine a swarm of buzzing, scuttling or swimming robots that are smaller but smarter. While a human has to fly the Predator by remote control, these systems would make decisions and coordinate themselves without constant human supervision — perhaps without any contact at all.
Of course, one has to distinguish between remotely controlled systems and autonomous robots, but this distinction may slowly be eroding. Breaking Defense also quotes Brig. Gen. Kevin Killea, head of the Marine Corps Warfighting Laboratory in Quantico, who said that one of the key obstacles remains communication with unmanned autonomous fighting systems:
The key here is how those systems will communicate. “For swarming to work, I don’t think we can be dependent on the RF [radio frequency] spectrum…. We’re seeing more and more every day that that’s vulnerable. We can’t go down the primrose path that we’re on with dependence on RF spectrum.”
For his future fleet of small unmanned autonomous “swarmbots,” Killea sought inspiration from nature, particularly the way termites coordinate without communicating directly: “Without communicating they sense the environment change around them, and they instinctively know which way to go” – termites release scents for other members of the colony to sniff, who can then react instinctively to danger.
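The termite behavior Killea describes is what roboticists call stigmergy: agents coordinate indirectly by modifying a shared environment rather than by exchanging radio messages. A minimal sketch of the idea, with all names and parameters invented here purely for illustration:

```python
import random

# Illustrative stigmergy sketch: agents deposit "scent" on a shared grid
# and steer toward strong scent, with no agent-to-agent messaging at all.
GRID = 20    # hypothetical world size
DECAY = 0.9  # hypothetical evaporation rate per tick

def step(pheromone, agents):
    """Advance one tick: each agent deposits scent where it stands,
    then moves to whichever neighboring cell smells strongest."""
    for i, (x, y) in enumerate(agents):
        pheromone[(x, y)] = pheromone.get((x, y), 0.0) + 1.0
        neighbors = [((x + dx) % GRID, (y + dy) % GRID)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        agents[i] = max(neighbors, key=lambda c: pheromone.get(c, 0.0))
    # Scents evaporate, so stale information loses influence automatically.
    for cell in list(pheromone):
        pheromone[cell] *= DECAY

random.seed(1)
pheromone = {}
agents = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(10)]
for _ in range(50):
    step(pheromone, agents)
```

The relevance to Killea’s point is the communication channel: nothing here uses the RF spectrum; each robot only senses its immediate surroundings, yet trails laid by one agent shape the movement of the others.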
However, Killea jokingly cautioned Freedberg that human beings will still have to be involved in decisions about whether to engage a target: “Don’t get all Terminator/Sarah Connor paranoid on me here. Humans must be kept in the loop, particularly with regard to use of force.”
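In software terms, “keeping humans in the loop” usually means the autonomous system may detect and propose, but the use-of-force decision is gated behind an explicit, auditable human authorization. A minimal sketch of that control pattern, with every class and function name hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class EngagementRequest:
    target_id: str
    rationale: str
    authorized: bool = False  # defaults to "do not engage"

@dataclass
class HumanInTheLoopController:
    log: list = field(default_factory=list)  # audit trail of every step

    def propose(self, target_id: str, rationale: str) -> EngagementRequest:
        """The autonomous system can only *propose* an engagement."""
        req = EngagementRequest(target_id, rationale)
        self.log.append(("proposed", target_id))
        return req

    def authorize(self, req: EngagementRequest, operator: str) -> None:
        """Only a named human operator can flip the authorization bit."""
        req.authorized = True
        self.log.append(("authorized", req.target_id, operator))

    def engage(self, req: EngagementRequest) -> bool:
        """Engagement is refused unless a human authorized it first."""
        if not req.authorized:
            self.log.append(("refused", req.target_id))
            return False
        self.log.append(("engaged", req.target_id))
        return True

ctrl = HumanInTheLoopController()
req = ctrl.propose("T-01", "matched hostile signature")
print(ctrl.engage(req))                      # False: no human sign-off yet
ctrl.authorize(req, operator="operator-1")
print(ctrl.engage(req))                      # True: authorized engagement
```

The design choice worth noting is the default: the request is born unauthorized, so any failure anywhere in the chain degrades to *not* firing, and the log preserves who approved what.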
Killea highlights an important issue: should unmanned robot drones be able to autonomously “make a decision” to kill another human being? A recently published study by the Center for a New American Security (CNAS), entitled “Meaningful Human Control in Weapon Systems,” addresses this matter and summarizes some of the ethical (and simultaneously technical) problems that the deployment of these new weapon systems may encounter:
In discussions on lethal autonomous weapon systems at the United Nations (UN) Convention on Certain Conventional Weapons in May 2014, “meaningful human control” emerged as a major theme. Many who support a ban on autonomous weapon systems have proposed the requirement of meaningful human control as one that ought to apply to all weapons, believing that this is a bar that autonomous weapons are unlikely to meet.
The report further points out that real problems will arise in holding people accountable for wrongful actions by autonomous weapon systems (e.g., striking the wrong target) – an “accountability gap,” as the study calls it. Thus, technological problems may be only one of the obstacles to overcome before autonomous “killer robots” see future use in the U.S. military.