The Debate | Opinion

To Prevent an AI Apocalypse, the World Needs to Work With China

China has the desire, foundation, and expertise to work with the international community on mitigating catastrophic risks from advanced AI.


To paraphrase Ernest Hemingway, governance of artificial intelligence (AI) has developed gradually, and then all at once. The second half of 2023 saw dizzying advances in AI governance. These rapid developments have shown that countries all around the world, despite geopolitical and cultural barriers, are coming together to face risks from AI.

At the United Kingdom’s Global AI Safety Summit on November 1, 2023, Chinese Vice Minister of Science and Technology Wu Zhaohui and U.S. Secretary of Commerce Gina Raimondo shared the stage at the opening plenary. At the end of the summit, China and the United States, joined by the European Union and 25 other countries, signed the “Bletchley Declaration” to strengthen cooperation on frontier AI risks.

Before the summit, leading scientists from the U.S., U.K., China, Canada, and elsewhere co-authored a consensus paper on risks from advanced AI systems, and some of the same experts held a dialogue that jointly called for safety research and governance policies to prevent such risks.

Later in November, at the 2023 Asia-Pacific Economic Cooperation (APEC) Leaders’ Meeting, Chinese President Xi Jinping and U.S. President Joe Biden met in person. The two sides agreed to open direct governmental talks on AI, with the Chinese readout referencing increased cooperation and the U.S. readout calling for addressing “risks of advanced AI systems.” A follow-up meeting between U.S. National Security Advisor Jake Sullivan and Chinese Foreign Minister Wang Yi in January 2024 indicated that the first meeting of the dialogue would take place this spring.

The United Nations has also been active: Secretary-General António Guterres created a High-Level Advisory Body on AI, featuring 38 distinguished experts, in October 2023. The body, which has two Chinese members, released an interim report in December 2023 outlining recommendations for AI governance functions that the international community should undertake.

This consensus among scientists around the world, the U.N., and major powers to address AI governance challenges – hard-won in the context of tense great power relations – has created a critical window of opportunity for the world to meaningfully reduce AI risks. In particular, risks from frontier AI, including “highly capable general-purpose AI models” such as foundation models and “narrow AI with dangerous capabilities” such as models for bioengineering, appear to have garnered the most consensus for international action. The potential for such models to be misused to conduct cyberattacks and develop biological weapons, as well as the possibility that more advanced models might escape human control, creates problems that the international community can only tackle if we are united – a prototypical shared threat.

In the next year, an unprecedented level of international dialogue will occur on this issue, with the Global AI Safety Summits in South Korea and France, the U.N.’s Summit of the Future, China-U.S. governmental dialogues on AI, Track 2 dialogues between Chinese and Western institutions, and more. As a leading AI power and critical player in international governance, China must be a part of this discussion. 

However, some in the West have bought into a zero-sum view of AI development, with former U.S. Secretary of Defense Mark Esper perceiving an “existential” race between the U.S. and China on AI. Similarly, the Pentagon’s former Chief Software Officer believes that China is seeking to “dominate the future of the world” through AI.

In our report, “State of AI Safety in China,” the first to comprehensively analyze this landscape, we questioned those simplified narratives. We surveyed Chinese domestic governance, international governance efforts, technical research, expert views, lab self-governance, and public opinion on frontier AI risks. From China’s domestic rules on generative AI and its internationally directed Global AI Governance Initiative to technical research on safety and expert consensus on frontier AI risks, China is much more active on AI safety than many Western commentators suppose.

In other words, we believe that China has the desire, foundation, and expertise to work with the international community on mitigating catastrophic risks from advanced AI. After years of involvement in China’s AI scene and painstaking research, we see this as an important window of opportunity to kindle global cooperation on AI safety that involves China.

There are a number of opportunities for international collaboration – and working together is the only way to ensure safe AI. However, uncertainty remains about which projects are most promising and likely to succeed. We offer several suggestions for how to make 2024 a banner year for international AI governance.

Come to agreement on joint measures to mitigate frontier AI risks

The Bletchley Declaration and other joint multi-stakeholder statements reveal a coalescing consensus on the risks posed by increasingly advanced AI systems. Now is the right time to build upon that foundation, look forward, and consider joint actions to mitigate risks. 

For example, given national and cultural differences in AI capabilities, values, and governance regimes, what AI safety standards should be established internationally, and which can be left to national discretion? What are the benefits and risks of open-sourcing frontier AI models, and how should they be governed? Can developers agree about the circumstances under which frontier AI development should be slowed down or even paused? How should any new international institutions to govern AI be structured? 

Major upcoming global AI governance conventions such as the AI Safety Summits in South Korea and France and the U.N. Summit of the Future, as well as bilateral dialogues including the forthcoming China-U.S. intergovernmental AI dialogue, should aim to build consensus around such joint actions. 

Share ideas for domestic governance mechanisms

Each country and AI lab is vigorously testing ideas internally to refine a unique blend of AI governance policies for its own situation. However, there are more similarities than many think. For instance, the Bipartisan Framework for the U.S. AI Act, put forward by two U.S. senators, proposes a national oversight body, licensing requirements, and safety incident reporting requirements to govern AI systems, similar to provisions in an expert draft of China’s national AI Law.

Exchanging governance practices and sharing lessons learned would help countries assess the pros and cons of policies such as red-teaming, licensing, and third-party auditing for frontier models. Such exchanges could occur between academics, industry, or policymakers, and more ambitiously could resemble the joint mapping exercise on AI governance conducted by the Singaporean and U.S. governments.

Accelerate progress on technical safety research

The collective efforts of the most talented researchers all around the world are likely necessary to develop better solutions to challenging technical AI safety problems. As OpenAI CEO Sam Altman noted at the 2023 Beijing Academy of AI Conference, “Given the difficulties in solving alignment for advanced AI systems, this requires the best minds from around the world.” 

International cooperation to supercharge technical AI safety research could involve new academic exchanges and collaboration on promising lines of inquiry. In addition, agreements between labs or governments to devote at least one-third of their AI R&D funding to safety and governance seem to have garnered support from a number of top scholars, including 24 academic luminaries and attendees at a recent AI safety dialogue between Chinese and Western scientists.

Share the benefits of AI

The proliferation of frontier models poses major dangers, but these cannot be addressed without engaging the Global South. Moreover, global inequality will be exacerbated if the Global South lacks AI solutions to pressing social and environmental challenges. Given China’s positioning as a champion of Global South countries and as a leading AI power, it may have a particularly strong role to play on this issue. Leading AI developers will also be important for sharing AI capabilities – in safe and appropriate ways – by, for instance, partnering with local communities to ensure their languages are represented in new large language models.

Only recently have greater numbers of scholars and practitioners around the world woken up to the deluge of risks that frontier AI development may pose. The last year has strengthened our belief that Western countries and China can set aside geopolitical differences to cooperate on mitigating these dangers. If we fail to take advantage of this breakthrough, the risks to humanity will be severe. We stand at the precipice of a new era; only by bridging divides can we ensure a safe journey for all.

Authors
Guest Author

Jason Zhou

Jason Zhou is a senior research manager at Concordia AI, where he works on promoting international cooperation on AI safety and governance. Jason received a master’s degree from Tsinghua University as a Schwarzman Scholar, where he wrote a thesis on China-U.S. relations.

Guest Author

Kwan Yee Ng

Kwan Yee Ng is a senior program manager at Concordia AI, where she leads projects to promote international cooperation on AI safety and governance. Kwan Yee received a master’s degree from Peking University as a Yenching Scholar. 

Guest Author

Brian Tse

Brian Tse is the founder and CEO of Concordia AI. He is also a policy affiliate at the Centre for the Governance of AI. He co-edited the book "Global Perspective on AI Governance" published by Tongji University Press. He also served on the program committee of AI safety workshops at AAAI, IJCAI, and ICFEM. Brian has been invited to speak at Stanford, Oxford, Tsinghua, and Peking University on global risk and foresight on advanced AI.
