Asia Defense

Navigating AI Competition: The Case for Human-Centric Approaches in China-US AI Dialogues


Rather than assessing who’s “winning” the China-U.S. military AI competition, more attention should be redirected to bilateral discussions on stemming that competition.


In his famous 1963 speech, U.S. President John F. Kennedy declared, “Our problems are man-made, therefore they may be solved by man.” Although he was addressing peace and arms control at the time, Kennedy’s remark seems even more relevant now, when the advent of AI is argued to create problems that lie beyond human control.

Following the last summit between Chinese President Xi Jinping and U.S. President Joe Biden, where the joint statement emphasized the need to address the risks of advanced AI systems, officials from both sides convened in Geneva this year to discuss AI risks. The issue of AI safety and risk was once again highlighted and deliberated on during the recent meeting between U.S. National Security Adviser Jake Sullivan and Chinese Foreign Minister Wang Yi.

One of the most pressing concerns involves the growing reliance on military AI, as militaries continually covet advancements in this developing technology. From the United States’ “Project Maven” and “Replicator” to China’s “AI Commander,” and from Israel’s “Lavender” to “Zvook” on the frontlines of Ukraine, military AI is increasingly relied upon and fast becoming ubiquitous. Indeed, General Mark Milley has even predicted that by 2039, a third of the U.S. military would be robotic. Such worrisome trends raise the specter of an accelerating AI arms race if left unrestrained or unregulated.

To be sure, the advantages AI brings to the battlefield are undeniable. AI not only protects soldiers and reduces inefficiencies but also maximizes the chances of victory through decision advantage, enabling rapid and precise decision-making. As Russia’s Vladimir Putin once remarked, “Artificial intelligence is the future… whoever becomes the leader in this sphere will become the ruler of the world.” Supremacy in AI is now synonymous with national resilience. 

Reflecting this, China announced an ambitious plan through its “National New Generation AI Plan” to position itself as a “world-leading” AI powerhouse by 2025. This bold move has understandably sparked concerns in Washington. In response, the U.S. Department of Defense allocated $1.8 billion in its fiscal year 2025 budget solely for AI initiatives, while the Biden administration tightened export controls on advanced AI chips and semiconductors. As the presidential election approaches, either candidate is likely to pursue similar policies. Vice President Kamala Harris would be expected to maintain the Biden-Harris administration’s firm stance on restricting China’s access to AI technology, while former President Donald Trump, echoing similar concerns, would likely prioritize maintaining U.S. technological superiority — a shared recognition of the intensifying China-U.S. rivalry.

Experts increasingly warn that Beijing is rapidly catching up to, or even outpacing, the United States in key areas of military AI. A recent New York Times article highlighted that, despite stringent U.S. export controls, American-made chips are being actively traded on the Chinese underground market and used in military research, demonstrating that the AI competition has outrun the reach of technology restrictions.

However, another report presented a surprising counterpoint to this narrative. Analyzing papers by Chinese defense experts, Sam Bresnick, writing for Foreign Policy, concluded that China, having not engaged in large-scale war for the past 40 years, struggles with the collection, management, and analysis of military data. Additionally, Bresnick points to deficiencies in China’s international competitiveness in intelligence, surveillance, and reconnaissance (ISR), and flaws in its AI system testing and evaluation (T&E) frameworks.

While the report indicates that China’s AI capabilities may not be as advanced as previously thought, this does not necessarily mean the risks of China-U.S. competition are overstated. Undoubtedly, these dangers are real and should be taken seriously. However, assessing overall capabilities based on fragmented data is both erroneous and reckless. AI, by its very nature, is a black box – its processes cannot be fully explained, its outcomes cannot be reliably predicted, and it may ultimately resist complete human control. Therefore, attempting to quantify the gap without a common yardstick would only escalate the AI arms race and exacerbate the AI security dilemma.

Hence, rather than focusing on the perilous task of assessing who is ahead in the China-U.S. military AI competition, more attention should be redirected to reinvigorating bilateral discussions aimed at tempering that competition.

Given that both nations have reaped significant benefits from AI, fear of AI-related risks is unlikely to halt either one’s AI development. Therefore, discussions between China and the United States must move beyond a focus on the potential and opportunities of AI to prioritize addressing immediate concerns and developments. Instead of speculating about and attempting to regulate suppositional scenarios like Artificial General Intelligence (AGI), discussions should be rooted in the current realities of military AI and driven by practical agendas, such as the risks and shared concerns in that regard.

Given that any technology has the potential to be misused or even anthropomorphized, what remains critical is the issue of who controls it and how the technology is wielded. Ergo, the greatest risk we face perhaps lies not in AI itself, but in human action involving AI. This alludes to the need to establish new focal points of discussion that specifically cater to the critical, yet often overlooked, role of humans. From this view, the topic of humans and human control must be at the center for any practical dialogues on military AI. 

This shift, by extension, will also require a reorientation in how one conceptualizes and approaches the dialogues. As improbable as it sounds, discussion topics on AI should adopt a more “humanistic” approach where a human’s role, responsibilities, and responses in military AI should be further deliberated and emphasized before any attempts to decipher AI itself. Adopting a “human-first” approach places the spotlight back on humans’ capability to control and manage military AI, thereby shifting the impetus of the dialogues from one that is predominantly problem recognizing to one that focuses on problem solving.  

In this sense, differing from conventional AI dialogues that focus primarily on AI itself – be it efforts to align Washington and Beijing’s interpretations of AI conceptual terms or to demystify Track I discussions at the Track II level – dialogues should first and foremost delineate and decode the role of humans in military AI. Important but difficult questions – such as: What is the role of humans in the military AI process? How can humans have an enlarged role in military AI? What is the current level of human involvement in military AI? – must be frontloaded to ensure the efficacy of dialogues. By clearing up the fog around what humans can and are willing to do with military AI, dialogues will be in a better position to address what ought to be done collectively.

Take the discussion of a human-in-the-loop agreement as an example. Breakthroughs in brokering this agreement have been few and far between despite the best efforts across all diplomatic tracks. Expanding on a recent idea involving a “partial” human-in-the-loop agreement to reinvigorate talks, dialogues could start by elucidating the extent and level of human involvement within this proposition. For instance, clarifying the range of permissible actions human operators can exercise at each step of the decision-making process, or defining the appropriate degree of human involvement in the loop, could be valuable starting points. These discussion points, which are by no means mere confidence-building mechanisms, could lay the necessary groundwork for more sustained, robust, and fruitful discussions between the United States and China.

In this context, the upcoming Responsible AI in the Military Domain (REAIM) Summit 2024, scheduled for September in Seoul, South Korea, could serve as a pivotal starting point for discussing the role of humans in relation to military AI. At the previous edition, held last year, nations emphasized in the summit’s declaration that humans must remain responsible and make decisions when AI is used in military contexts. Building on that, the focus for REAIM 2024 should shift from merely acknowledging the importance of human roles to discussing more concrete guidelines. It is crucial that these discussions take place first, as only then can we effectively address the potential risks that AI might pose in military settings.

Authors
Guest Author

Mathew Jie Sheng Yeo

Mathew Jie Sheng Yeo is a researcher at the Taejae Future Consensus Institute, focusing on the intersection of emerging technology and security. He also serves as the assistant director of the Center for Strategic Studies at the Fletcher School of Law and Diplomacy, Tufts University. Mathew is currently pursuing his Ph.D. at the Fletcher School.

Guest Author

Hyeyoon Jeong

Hyeyoon Jeong is a researcher at the Taejae Future Consensus Institute, focusing on the intersection of emerging technologies and security. She pays particular attention to the relationship between artificial intelligence and nuclear weapons, examining the opportunities and risks associated with the use of AI in nuclear command and control. Prior to her career as a researcher, Hyeyoon served as an assistant professor in the Department of International Relations at the Air Force Academy.
