Pentagon Hosts Meeting on Ethical Use of Military AI With Allies and Partners

Asia Defense | Security

This comes in the backdrop of growing interest in global technology cooperation.

Credit: Flickr/Mike MacKenzie

Last week, on September 15 and 16, the Pentagon’s Joint Artificial Intelligence Center (JAIC) held a first-of-its-kind meeting with officials from 13 countries, including but not limited to U.S. allies, on the ethical military uses of artificial intelligence. Breaking Defense quotes Mark Beall, the JAIC’s head of strategy and policy, who called the meeting “historic”: “This group of … countries, to my knowledge, has never been brought together under one banner before.” Earlier this year, the Pentagon adopted a set of ethics guidelines for AI use.

At a time when China and Russia’s pursuit of military AI has raised considerable alarm in Western capitals, Beall noted that the meeting was not about creating a coalition against specific countries. Rather, “we’re really focused on, right now, rallying around [shared] core values like digital liberty and human rights… international humanitarian law,” Beall said. But his past statements suggest that the United States remains interested in developing interoperability with allies around AI technologies. In an April interview, he noted, “[a]t the highest level, JAIC is very much interested in how it is we upgrade our alliances for the digital era.”

The possibility of international collaboration around new technologies has increasingly come to dominate the policy community’s agenda over the past couple of years. A key recommendation of a December 2019 Center for a New American Security (CNAS) report on how the United States could consolidate its leading position in AI was international R&D collaboration. That report also noted that U.S. leadership “in setting global AI norms, standards, and measurement is essential to promote AI ethics, safety, security, and transparency in accordance with U.S. interests.” Inter alia, this will require close collaboration with like-minded partners.

Earlier this year, CNAS also announced a project that would lay down the contours of a technology alliance of like-minded democracies. A State Department statement on today’s virtual meeting of officials from the U.S.-Australia-Japan-India “Quad” notes that digital connectivity, 5G, and cybersecurity were also on its agenda. (The JAIC meeting included participants from Australia and Japan, but not India.)

After the meetings last week, Beall noted that his “personal goal for this forum is to create a framework for data sharing and data aggregation [and collaboration on] very powerful, detailed algorithms.” But to what extent countries, including the U.S., can hold meaningful discussions around sensitive technologies like AI – which will increasingly form part of the military-technological vanguard – without contravening domestic export control laws or sacrificing military advantage remains to be seen.

An added complication lies in the fact that AI is not really a weapon, nor is it a single monolithic technology. Rather, as CNAS’ Elsa Kania wrote in 2018, it is a more nebulous “catch-all concept alluding to a range of techniques with varied applications in enabling new capabilities.” Given this, how countries could converge on concrete discussions around AI applications – without necessarily discussing the specific weapons platforms on which AI can be deployed, such as drones or unmanned underwater vehicles – remains in question. Indeed, at the multilateral level, states have yet to agree even on a common definition of lethal autonomous weapon systems.