Perhaps no other technology animates the imagination of defense policymakers and analysts as much as artificial intelligence (AI), or more precisely, a subfield of AI called machine learning. The Pentagon is no exception, with the Trump administration having pushed an AI agenda for the military, including through the creation of a Joint Artificial Intelligence Center (JAIC) in 2018. But while the potential military gains from AI technologies are substantial, the way policymakers involved in military AI hard-sell that potential often gives observers pause.
Speaking virtually at a think tank event on November 6, JAIC’s director, Lieutenant General Michael Groen, compared the military risks the United States faces today to those of 1914, when World War I broke out, marking the beginning of industrialized warfare. BreakingDefense quotes Groen as saying that “the Information Age equivalent of… lancers riding into machine guns” is using traditional command and control (C2) systems against an adversary equipped with AI. Groen, a Marine Corps intelligence officer whose tours of duty included Iraq and who became JAIC head on October 1, also pointed to the inefficiency of current processes for integrating intelligence into kinetic action: a persistent lag between the collection and analysis of intelligence and subsequent engagement, even in asymmetric conflicts.
To be sure, the basic thrust of Groen’s assertion is valid and well taken; the problem, however, is that this is not the first time the United States military has sought to ride the information revolution wave. As Eli Berman, Joseph Felter, and Jacob Shapiro argued in their programmatic 2018 book “Small Wars, Big Data: The Information Revolution in Modern Conflict,” big data analysis stands to vastly improve the position of the superior force in an asymmetric conflict – of the kind the United States found itself in in Iraq and Afghanistan – where information (in the form of civilian tips, data about social and economic environments, and other intelligence) is often the determining variable in deciding the outcome of a counterinsurgency action, kinetic as well as non-kinetic.
And indeed, as Berman, Felter, and Shapiro write, the U.S. counterinsurgency campaigns in both Iraq and Afghanistan involved keeping records of “significant activity” involving U.S. forces, “including details such as time and place of insurgent attacks, type of attacks, and select outcomes,” in extremely granular detail, leading to the SIGACT-III database. In Afghanistan, the U.S. Defense Advanced Research Projects Agency (DARPA) funded a project, Nexus 7, that used data from a variety of sources – including commercial satellite imagery – to support International Security Assistance Force decision making, they write.
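To make that granularity concrete, here is a minimal, purely illustrative sketch of what such an incident record might look like in code; the field names are my own assumptions and do not reflect the actual SIGACT-III schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SigactRecord:
    """Illustrative significant-activity record (all fields hypothetical)."""
    timestamp: datetime   # when the incident occurred
    latitude: float       # incident location
    longitude: float
    attack_type: str      # e.g. "IED", "direct fire", "indirect fire"
    outcome: str          # e.g. "casualties", "no casualties"

# Example: one logged incident (fabricated data)
incident = SigactRecord(
    timestamp=datetime(2010, 6, 14, 22, 30),
    latitude=34.52, longitude=69.18,
    attack_type="indirect fire",
    outcome="no casualties",
)
```

The analytical payoff comes from aggregating millions of such structured records, which is precisely what made databases like SIGACT-III useful for the research Berman, Felter, and Shapiro describe.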
But the real focus of Groen’s remarks at the Center for Strategic and International Studies event was conflict between near-peer militaries. Here, the thrust of AI use is – as the BreakingDefense piece also notes – to speed up observe-orient-decide-act (OODA) loops. The father of the concept, the U.S. Air Force’s John Boyd, imagined all warfare as a clash of competing OODA loops, in which the key operational and tactical objective becomes disrupting the adversary’s OODA loop while speeding up one’s own. As I have described elsewhere:
Modern militaries seek this advantage by gathering as much information as possible about the enemy’s forces and disposition, through space-based satellites, electronic and acoustic sensors and, increasingly, unmanned drones. Put simply, the basic idea is to have a rich ‘information web’ in the form of battlefield networks which link war fighters with machines that help identify their targets ahead. A good network — mediated by fast communication channels — shrinks time as it were, by bringing future enemy action closer.
This, of course, is not a new idea, forming as it does the basis of the network-centric “Revolution in Military Affairs” that came into vogue in the run-up to, and after, the first Gulf War in 1991. Where AI can play a major role – assuming the U.S. can resolve the numerous ethical, legal, and political challenges involved – is in the possibility of automating the entire kill chain: beginning with the aggregation and real-time analysis of data from battlefield sensors for target acquisition, to seamlessly relaying this information to weapons systems (“shooters”) that then engage on the basis of that input on their own – taking the human out of the picture in the OODA loop entirely (to use Paul Scharre’s formulation).
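To make the contrast with traditional C2 concrete, here is a deliberately toy sketch of what a fully automated OODA loop means in software terms; every function and data field below is a hypothetical stand-in for vastly more complex sensor-fusion and fire-control systems:

```python
# Toy sketch of a fully automated OODA loop. All logic is a hypothetical
# placeholder; note that no stage asks a human for approval.

def observe(sensor_feeds):
    """Aggregate raw contact reports from every available sensor."""
    return [contact for feed in sensor_feeds for contact in feed]

def orient(contacts):
    """Fuse and classify contacts (here: trivially, by a flag)."""
    return [c for c in contacts if c["hostile"]]

def decide(tracks):
    """Select which track to engage, ranked by an assumed threat score."""
    return sorted(tracks, key=lambda t: t["threat_score"], reverse=True)[:1]

def act(engagements, shooter):
    """Relay engagement orders directly to a weapon system."""
    for target in engagements:
        shooter(target)

# One pass of the loop over fabricated sensor data:
feeds = [[{"id": "T1", "hostile": True, "threat_score": 0.9},
          {"id": "T2", "hostile": False, "threat_score": 0.1}]]
act(decide(orient(observe(feeds))), shooter=lambda t: print("engage", t["id"]))
```

The point of the sketch is the absence of any human checkpoint between observation and engagement; that absence is exactly what the ethical and legal debates are about.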
The Pentagon’s Joint All-Domain Command and Control (JADC2) project seeks to integrate multiple sensor inputs for rapid execution of joint operations (“data fusion”). As Michael Klare wrote in Arms Control Today in April, “Over time, however, the JADC2 project is expected to incorporate AI-enabled decision-support systems, or algorithms intended to narrow down possible responses to enemy moves and advise those commanders on the optimal choice.” But this falls short of complete automation of the OODA loop, with the decision to engage a target ultimately resting with a human commander.
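The distinction matters in software terms: a decision-support system only ranks options and leaves the final call to a person. A minimal, hypothetical sketch of that division of labor (the scoring function here is a deliberate dummy, standing in for models that would weigh risk, rules of engagement, and predicted effects):

```python
# Illustrative decision-support step, as distinct from full automation:
# the algorithm ranks options; a human commander makes the final call.

def rank_responses(responses, score):
    """Order candidate responses by an assumed scoring model, best first."""
    return sorted(responses, key=score, reverse=True)

candidates = ["jam radar", "strike launcher", "reposition battery"]
advice = rank_responses(candidates, score=lambda r: len(r))  # dummy metric
print("Recommended order:", advice)
final_call = advice[0]  # in JADC2's envisioned form, a human decides here
```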
Nevertheless, significant technical challenges remain, which brings me back to Groen’s ambitious assertions. To mention just two: First, deep learning-based automated target recognition (ATR) using synthetic aperture radar (SAR) imagery – a key class of images available to the military because, unlike optical satellite images, it can be obtained independent of weather conditions – very much remains a work in progress. Second, even in the domain of asymmetric warfare and counterinsurgency, letting an AI system integrate multiple streams of enemy intelligence – including intercepts that would need real-time translation – is a challenge; for example, while deep learning-based machine translation that accurately preserves semantics has improved dramatically over the years, it remains at the level of a proof of concept.
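To give a sense of what that “work in progress” looks like, below is a minimal sketch of the deep learning approach to SAR ATR, assuming MSTAR-like single-channel, 128-by-128 image chips and ten target classes; it illustrates the technique, not an operational system:

```python
# Minimal sketch of deep learning-based ATR on SAR imagery, assuming
# MSTAR-like single-channel 128x128 chips and 10 target classes.
import torch
import torch.nn as nn

class SarAtrNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x):
        x = self.features(x)          # (N, 32, 32, 32) for 128x128 input
        return self.classifier(x.flatten(1))

# Forward pass on a fabricated batch of SAR chips:
chips = torch.randn(4, 1, 128, 128)   # stand-in for real SAR magnitude data
logits = SarAtrNet()(chips)
print(logits.shape)                   # torch.Size([4, 10])
```

Classifiers of this kind do well on curated benchmark chips; the unsolved part is robustness to clutter, decoys, and imaging conditions the training set never saw, which is why fielded ATR remains elusive.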
In the end, AI for the military remains deeply alluring, but the Pentagon will have to wait a while before the technology reaches the level needed for significant operational gains, even as it takes the baby steps that will enable it to fully leverage AI. For example, before JADC2 can get serious about AI-enabled C2, it must fix pesky issues around data standardization across all the services.
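A toy example of why standardization is the gating issue: until every service reports tracks in one shared schema (the formats below are invented for illustration), there is nothing consistent for an AI-enabled C2 layer to learn from or act on:

```python
# Illustrative only: the kind of data standardization JADC2 would require.
# Each service reports tracks in its own (hypothetical) format; mapping them
# onto one shared schema is the precondition for any AI layer on top.

def normalize(msg: dict) -> dict:
    """Map service-specific field names onto a common track schema."""
    key_maps = {
        "air":  {"tgt_lat": "lat", "tgt_lon": "lon", "tgt_id": "track_id"},
        "land": {"grid_lat": "lat", "grid_lon": "lon", "uid": "track_id"},
    }
    mapping = key_maps[msg["source"]]
    return {"source": msg["source"],
            **{std: msg[raw] for raw, std in mapping.items()}}

print(normalize({"source": "air", "tgt_lat": 36.1, "tgt_lon": 44.0, "tgt_id": "A7"}))
print(normalize({"source": "land", "grid_lat": 36.2, "grid_lon": 44.1, "uid": "L3"}))
```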