Can China and the US Find Common Ground on Military Use of AI?

Entrusting AI with the autonomy and power to deploy nuclear weapons is a scenario to be avoided at all costs. Time to take baby steps toward that goal.


The day when artificial intelligence (AI) makes important military decisions is no longer a distant prospect; it is already here. In the ongoing Israel-Hamas conflict, the Israeli military has used the AI-based targeting system "Lavender" to deadly effect. Lavender generates kill lists and identifies targets, leaving human operators to approve any strikes.

To be sure, it can be argued that there is still human agency within Lavender, but the human operator cannot maintain full control. Is the operator fully cognizant of developments on the ground? Has the operator considered all options and possibilities? Was the decision influenced by automation bias? These questions cast doubt on the efficacy of human control in a weapon system, much less in combat situations. Indeed, the Israeli operators, with a mere 20 seconds to approve strikes, often acted as a "rubber stamp," relying heavily on the AI's identifications with minimal review.

As Lavender demonstrates, issues such as the level of machine autonomy humans are willing to tolerate, the risks involved, and the reliability of the system present real challenges to a world increasingly embracing AI and other emerging technologies. Such cases underscore the imperative of using AI responsibly to prevent catastrophic outcomes and safeguard against potential risks.

The United States and China, currently leading the field of artificial intelligence, have long recognized the potential dangers posed by AI. In response to the risks of an AI arms race in the military domain, the two countries convened to discuss those dangers and expressed a willingness to continue the discussions in the future. However, while both sides acknowledged the risks, the highly anticipated meeting notably failed to broker any agreement excluding AI from nuclear command and control systems or from autonomous weapon systems.

Despite subsequent formal dialogues and Track 2 discussions among experts, the significant divergence in the positions of the two countries shows no signs of narrowing. The persistent lack of trust, concerns over technology transparency, and ambitions to develop and deploy AI-based military systems have only exacerbated the divide. Nevertheless, despite all the impediments to cooperation, there is no denying that entrusting AI with the autonomy and power to deploy nuclear weapons is a scenario to be avoided at all costs.

The necessity and urgency of such an agreement are underscored by a chilling incident from the Cold War standoff between the United States and the Soviet Union. On September 26, 1983, the Soviet Union's Oko early warning system reportedly detected five intercontinental ballistic missiles launched from the United States and heading toward the Soviet Union. Lieutenant Colonel Stanislav Petrov, who was on duty at the time, judged the alarm to be faulty and decided to hold off on reporting it to his superiors. Petrov's judgment may have averted an all-out nuclear war, for had he escalated and reported the incident, the Soviet Union would have launched an immediate retaliatory nuclear strike.

This close shave illustrates the fallibility of machines and underscores the necessity of maintaining human control over strategic infrastructure. Such calls appear ever more salient now, as states are embroiled in an AI arms race and AI is increasingly incorporated into weapons systems.

Despite the urgency and importance of such an agreement, excluding AI from nuclear command and control was never an enticing negotiating point for China in its talks with the United States.

According to the 2022 Nuclear Posture Review, the United States explicitly declared that all critical decisions regarding the use of nuclear weapons must involve human intervention. Moreover, in the same year, the United States, along with the United Kingdom and France, agreed to maintain human control over the launch of nuclear weapons. Domestically, bipartisan lawmakers in the U.S. have introduced legislation to prevent AI-initiated nuclear launches. These open unilateral declarations may have made the Chinese position relatively delicate, as it appears the U.S. might simply be wheedling China into agreeing to what Washington had already committed to.

In addition, with ever-growing U.S. sanctions preventing China from accessing advanced AI chips, Beijing may have adopted a tougher negotiating stance, seeking to extract more concessions from Washington. Both nations recognized the importance of the issue, but their differing positions during the talks likely doomed the negotiations.

Even if the U.S. and China do agree on a general framework for keeping humans in the loop, structural impediments may render any broad agreement untenable. To be sure, the United States and China are two highly advanced AI-enabled economies, but the differences in their domestic AI ecosystems remain stark. AI development in the U.S. is driven primarily by private companies such as Nvidia and OpenAI. This private sector-led model stands in steep contrast to the state-led development of AI in China. With Xi Jinping explicitly stating that China must develop, control, and use AI to secure the country's future in the next technological revolution, Beijing has invested copious sums to direct the trajectory of AI development.

This divergence, in turn, directly shapes how each country regulates AI. Given China's state-centric model of development, its regulation of the AI industry is naturally state-centric as well, spearheaded by the Cyberspace Administration of China. Conversely, the United States has no overarching AI regulation. This divide attests to the complexity of establishing any agreement.

Beyond divergent domestic AI ecosystems, cooperation is further complicated by the fact that the two great powers may differ in their capacity to train, deploy, and incorporate AI into their systems. While technological successes do not always translate directly into military applications, the potential for such advancements is always present. For instance, in congressional testimony, Gregory Allen of the Center for Strategic and International Studies suggested that while the U.S. enjoys abundant training data in military situations, China has limited military AI applications but possesses an advantage in mass societal adoption of AI. With such differing levels of AI development and implementation, it remains difficult to reach an agreement without a common starting point grounded in a sufficient understanding of each other.

Viewed this way, it appears highly implausible that Beijing and Washington will agree to a formal pact on keeping humans in the loop of nuclear command and control.

Against this backdrop, fostering China-U.S. cooperation in managing the risks of AI requires seeking the lowest common denominator. That search can take the form of a micro-level view of how one perceives and interprets "human in the loop." The concept typically treats the "loop" as a single entity. A novel approach would seek the lowest common denominator by looking in detail within the loop. In other words, rather than holding the hard bargaining position of requiring human control over every single process in the loop, one could first envision a scenario where human control is agreed upon or mandated for only certain parts of the loop. Doing so would open up new possibilities for negotiation, making a deal more politically palatable for both China and the United States.

Given that the U.S. and China have placed a premium on discussions at the nexus of AI and nuclear weapons, both parties could examine various nuclear command and control processes in isolation and negotiate the possibility of mandating human involvement in each. Nor would agreements need to be symmetrical, with restrictions on identical processes on both sides. For example, the United States could propose to mandate full human control over its own nuclear targeting process in exchange for China agreeing to mandate human involvement in the final launch decision.

By front-loading the more pressing and agreeable issues in this way, the necessary impetus and momentum could be generated to reinvigorate the stalled talks. With the additional permutations and possibilities opened up by a partial human-in-the-loop approach, negotiators from Beijing and Washington could exercise their creativity and diplomatic skill to reach a settlement in quid pro quo fashion.

Authors

Mathew Jie Sheng Yeo

Mathew Jie Sheng Yeo is a researcher at the Taejae Future Consensus Institute, focusing on the intersection of emerging technology and security. He also serves as the assistant director of the Center for Strategic Studies at the Fletcher School of Law and Diplomacy, Tufts University. Mathew is currently pursuing his Ph.D. at the Fletcher School of Law and Diplomacy.


Hyeyoon Jeong

Hyeyoon Jeong is a researcher at the Taejae Future Consensus Institute, focusing on the intersection of emerging technologies and security. She pays particular attention to the relationship between artificial intelligence and nuclear weapons, examining the opportunities and risks associated with the use of AI in nuclear command and control. Prior to her career as a researcher, Hyeyoon served as an assistant professor in the Department of International Relations at the Air Force Academy.
