The 2024 China-US AI Dialogue Should Start With an Eye on Chem-Bio Weapons

Washington and Beijing should focus their forthcoming AI dialogue on platforms that neither wants to see in the hands of terrorist groups: those that can aid in the construction of biological and chemical weapons.

In January 2024, U.S. National Security Advisor Jake Sullivan and Beijing’s most senior diplomat, Wang Yi, held a low-key meeting in Bangkok. There, tentative plans were reportedly made for a China-U.S. dialogue on the risks posed by the new generation of artificial intelligence platforms, to be held sometime in the northern spring of this year. For many in the health security and arms control communities, this is a welcome development.

Determining the priority issues, however, remains a challenge for policymakers on both sides. U.S. officials have already voiced apprehension over the potential for AI-powered disinformation to corrupt democratic elections; Chinese authorities are no doubt worried about what the technology might mean for maintaining control over their population. Both governments have been exploring the many military applications that could be of use in the China-U.S. competition, from aerodynamics to autonomous targeting.

However, a particular suite of AI platforms presents a risk to both countries should it fall into the wrong hands – namely, tools with the potential to aid in the construction of biological and chemical weapons.

A Shared Interest

Most media reportage on the intersection of “chem-bio” weapons and new AI platforms has focused on large language models (LLMs) and their potential to provide actionable instructions for creating biological agents. An often-cited, now-infamous exercise at the Massachusetts Institute of Technology saw students elicit instructions from an LLM for acquiring and releasing a range of viruses.

However, more recent analyses, including those conducted by the RAND Corporation, have shown that LLMs provide little more help than the internet for unskilled actors seeking to obtain a viable biological weapon. While there remains some disagreement in this domain, it seems unlikely that an untrained individual using a chatbot would have the requisite tacit knowledge needed to make a deployable bioweapon, an undertaking that would require years of experience in wet labs.

A more serious danger is posed by a lesser-known class of AI platforms called biological design tools (BDTs). Since 2023, security researchers at the University of Oxford, the Nuclear Threat Initiative, and elsewhere have been urging the security community to take a long, hard look at these systems and their potential for weaponization. Among the most mature BDTs are protein design tools. These AI systems carry immense promise for human health, given their potential to help design novel molecules for the next generation of life-saving drugs. But they could also unleash a proliferation of technical expertise that could be used for malicious ends.

The potential for protein tools to help design toxin weapons, such as ricin or botulinum toxin, has already been explored. Using BDTs to aid in the construction of more complex pathogens, such as viruses with long, complex genomes, may soon be feasible (although many viruses have been reconstructed and revived in recent years without such tools, albeit by experienced scientists). A range of other AI-powered biological platforms could also be exploited by malicious actors: viral vector design tools, genome assembly tools, toxicity prediction tools, and others. When combined with the increasingly accessible and cost-effective ingredients of synthetic biology, these new systems carry real security implications.

Most researchers believe that AI-driven biological design tools still require a solid base of technical knowledge and practical expertise to be used effectively. However, it is clear that these new platforms are both lowering the informational barriers to bioengineering and “raising the ceiling” of potential harm for those who might misuse them.

The world of chemical weapons is also profoundly impacted by generative AI. In 2022, a private company showed how its AI-powered molecule generator could produce a litany of chemical warfare agents, including the nerve agent VX, as well as a range of as-yet-unseen molecules with offensive potential. In 2023, a team of chemists built an “LLM-powered chemistry engine” that could perform complex operations when given simple text commands, such as “plan and execute the synthesis of an insect repellent.” One can imagine the risk of such a system in the hands of a militant group bent on destruction.

A Shared Opportunity

While there are many points of contention between the United States and China on the future of AI, neither power wants to see chemical and biological weapons spread any further than they already have. Talks on strategic nuclear controls may be difficult, but this is one area in which there are clear, shared interests. Both countries have publicly stated their distaste for such weapons. The United States recently completed the destruction of its historical chemical weapons stockpile. China, which was the victim of biological and chemical warfare during World War II, has allowed hundreds of inspections on its territory by officials from the Organization for the Prohibition of Chemical Weapons (OPCW), and maintains chemical control lists on par with those of the Australia Group.

Domestic regulations aimed at controlling new AI platforms are only in their formative stages in both countries. Most, in the Chinese case, have focused on the implications for “social stability”; in the United States, concern has revolved around threats to job security, information integrity, and democratic processes. And while military applications of AI have been discussed in some policy documents, there has been little focus on the connection with “chem-bio” weapons.

U.S. President Joe Biden’s recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is the first significant step in this direction. Section 4.4 of the Executive Order calls on the Department of Homeland Security to assess how artificial intelligence might enhance chemical, biological, radiological, and nuclear (CBRN) threats. A related component of the order focuses on nucleic acid synthesis technology – the increasingly accessible suite of machines used to make custom DNA and RNA. These “desktop” synthesizers, which can sit on any lab bench, are a key enabling technology in synthetic biology.

Biological agents remain theoretical for as long as they are digital – the key interception point is where they are made physical. For this reason, many of the non-proliferation experts we have interviewed in recent months say targeting regulations at the “digital-physical frontier” of synthetic biology and artificial intelligence is a key priority. 

Cooperation between Washington and Beijing on “chem-bio-AI” would also benefit the leadership of both nations politically. The death and misery wrought by COVID-19 have demonstrated the destructive power of an infectious biological agent, whatever its provenance. Both countries emerged from the pandemic deeply affected, with heavy tolls of death and illness on both sides and a legacy of distrust between the two capitals amid disagreement over who was to blame for the catastrophe. Embarking on concrete steps to control the intersection of chemistry, synthetic biology, and artificial intelligence would signal a fresh start and herald a new commitment to global health security.