Artificial intelligence (AI) has taken center stage in today’s global technology competition, especially since the commercial launch of OpenAI’s ChatGPT a year ago. Now the race for technological leadership among companies and nations has extended to the sphere of regulation and rule-setting, with national leaders and politicians proclaiming that they do not want to repeat the mistake of being late to regulate the internet and social media.
Within the last few weeks, we have witnessed major announcements from the United States, in the form of a presidential executive order on AI; an advocacy framework from China on AI governance emerging from the tenth anniversary summit for its Belt and Road Initiative; and the AI Safety Summit held in the United Kingdom. The slippery task of regulating AI, especially doing so globally, is gaining momentum, although in many ways countries still hold very divergent views and goals on AI regulatory and development issues.
It appears that a new framework for AI diplomacy is taking shape.
The United States’ AI Executive Order
First, let’s take a look at U.S. President Joe Biden’s executive order on AI, announced on October 30. Washington has long been criticized for its lack of comprehensive legislation to regulate the “big tech” companies on issues ranging from data and privacy protection to the responsibilities of social media platforms. Given the political impasse on Capitol Hill and beyond, this situation is unlikely to change anytime soon. Ironically, however, this “executive-led” modus operandi may allow the United States to take something of a lead in the race to set the direction of rules for the safe and secure deployment of AI in society, as others remain caught in the mire of working out how to regulate something as elusive and constantly evolving as AI.
The European Union (EU), long seen as the gold standard in data, privacy, and technology regulation, with a focus on upholding principles such as human rights and consumer protection, has spent more than two years negotiating among its 27 member states, yet reportedly is still struggling to reach a final agreement on its AI Act. The EU approach is exemplified by its classification of AI systems into risk levels, with each level subject to correspondingly varying requirements and compliance obligations; systems classified as “high risk” are to be tightly controlled by law.
If the EU approach focuses on legislation and regulation, the American way is much more about rule-setting for achieving the same goals of safety, security, and trustworthiness, with an eye on development to maintain or even extend the United States’ technological leadership. Among the eight defined actions in the executive order, only one is about rule-setting – albeit the longest and most substantial section – with the seven other actions concerning development policies, including the federal government’s own application and usage of AI.
The most significant section of the executive order concerns “ensuring the safety and security of AI technology,” in which it calls for rule-setting through guidelines and standards, and also for developers of “potential dual-use foundation models” to report to the federal government information about training activities, ownership of such models, and results from red-team security tests. The success of this section of the executive order will rely mostly on the cooperation of commercial developers of AI models, built upon the “voluntary commitments” received from “top AI companies” after a series of meetings and negotiations between the White House and these companies in the preceding months.
Although this one action out of the eight has received the most attention, the rest of the executive order is mostly about industry development and application strategies for the United States to maintain its lead. The remaining actions cover:
- Promoting innovation and competition: including implementing a pilot program for the National AI Research Resource (NAIRR), enhancing intellectual property (IP) protection and combatting IP theft, advancing AI usage for healthcare and climate change, and calling for the Federal Trade Commission (FTC) to consider exercising its rule-making authority to further ensure AI marketplace competition.
- Supporting workers: further understanding the impact of AI on workers, including on job opportunities, displacement, and wellbeing, in order to develop an AI-ready workforce.
- Advancing equity and civil rights: addressing unlawful discrimination possibly exacerbated by AI, in areas such as the criminal justice system, law enforcement, public social benefits, and in the broader economy, such as hiring, housing and transportation.
- Protecting consumers, patients, passengers, and students: this action calls for the “incorporation of safety, privacy and security standards” in areas affected by AI in the health and human services, transportation, and educational sectors, using a sectoral approach to attempt to protect people from fraud or discrimination, without legislation.
- Protecting privacy: similar to the last action above, this action is not about rule-setting for a privacy regulatory regime, but rather about re-evaluating the use of commercially available information already procured by government agencies, and encouraging the development of privacy-enhancing technologies (PETs).
- Advancing the federal government’s use of AI: setting up AI management guidance within federal government agencies, including hiring more data scientists and designating a Chief AI Officer at each agency.
- Strengthening American leadership abroad: establishing a plan for global engagement on promoting and developing AI standards, and other measures, forming the basis for an American AI diplomacy.
So we should keep in mind what the executive order is not: a regulation, although it is commonly referred to as one. Although it has established the basis for government oversight of the most advanced AI projects, especially those with dual-use implications, it does not follow the EU model with licensing or other strict compliance requirements. It is more a set of industry development policies and directives, potentially forming the foundations for a CHIPS and Science Act 2.0 – where actual future legislation would carry the financial appropriations and other measures to fortify support for research and development or increase visa quotas for foreign talent.
In addition, as a manifestation of U.S. AI soft power, the executive order aims to continue to rely on the United States’ domestic AI governance to influence the world, beginning with the standards and guidelines to be adopted by the U.S. federal government.
China’s Global AI Governance Initiative
It is interesting to note something many may have overlooked: Less than two weeks before the U.S. executive order was announced, China in fact also announced its Global AI Governance Initiative at the Belt and Road Forum in Beijing, where the country celebrated the 10-year anniversary of its Belt and Road Initiative.
Unlike the almost 20,000-word-long U.S. executive order, the Chinese proclamation contained just about 1,500 characters, and stuck to a number of high-level principles, such as upholding a “people-centered approach in developing AI,” adhering to “developing AI for good,” “fairness and non-discrimination,” with “wide participation and consensus-based decision-making,” to “encourage the use of AI technologies to prevent AI risks,” and so on.
But there is some subtle language in the initiative that may be more revealing about China’s true objectives. It reiterates the need to “respect other countries’ national sovereignty and strictly abide by their laws.” It opposes “using AI technologies for the purposes of manipulating public opinions, spreading disinformation, intervening in other countries’ internal affairs… and jeopardizing the sovereignty of other states.” It champions “the representation and voice of developing countries in global AI governance,” while also maintaining that they should “gradually establish and improve relevant laws, regulations and rules.”
Indeed, the Chinese objectives were more plainly on display in a People’s Daily commentary article on October 19, criticizing the G-7 joint declaration in May on AI governance for “drawing the lines based on values system,” hence architecting a “technology small circle” to exclude China’s participation in AI technology standards setting.
It is therefore somewhat ironic to see the Interim Measures for the Management of Generative Artificial Intelligence Services, jointly approved by seven ministries and agencies of the People’s Republic of China in July 2023. Article 4 calls for, first and foremost among a list of principles for those providing generative AI services, “upholding the core socialist values.” Indeed, China’s approach to establishing AI regulations has hardly been “gradual”; it has been quick and decisive, although it does “improve” these laws rather frequently. In general, these laws are broad and vague, often referring to high-level principles and general terms, and leaving huge room for interpretation by the governing authorities.
From the U.K. AI Summit to AI Diplomacy
Given that the race to AI regulation has been led by the United States, China, and the EU, it was somewhat of a surprise that the U.K. government announced in June 2023 that it would host the first global summit on AI safety. Indeed, the United Kingdom has thus far been a laggard in AI regulation, with Prime Minister Rishi Sunak stating that he would not “rush to regulate” AI.
But it was the Biden administration that stole the thunder of the groundbreaking event, attended by leading government, business, and academic figures from around the world. The United States took over the discourse by announcing its presidential executive order only two days before the start of the summit, also giving attending U.S. Vice President Kamala Harris a platform for a “raw show of U.S. power on the emerging technology.”
Progress was made at the summit with the signing of the Bletchley Declaration, agreed by 27 countries – including China and the United States – and the European Union. The communique focuses on tackling the risks of frontier AI by seeking to “identify AI safety risks of shared concern, building a shared scientific and evidence-based understanding,” and “building respective risk-based policies across countries to ensure safety.”
However, it should not be overlooked that the U.S. government, in announcing its AI executive order, also proclaimed its efforts to build an international framework through engagement with 20 countries and the EU, covering most of the attendee countries and signatories of the Bletchley Declaration. In this sense, the United States made sure that it dominated the discourse at the AI Safety Summit, while embracing the participation of China, forming the basis for a future framework for global AI diplomacy.
Indeed, there have been frequent calls to develop an international regulatory framework for AI governance by academics and business leaders, such as the advocacy for a new agency similar to the International Atomic Energy Agency. The AI Safety Summit in the U.K. can be a first step in that direction.
And it was not surprising that the remarks of the leader of China’s delegation – Wu Zhaohui, vice minister of science and technology – at the summit focused on “equal rights” in “accessing advanced AI.” Wu was indirectly protesting the barriers erected by the United States and its allies to China’s AI development, especially the export controls on chips and other leading-edge technologies. But such calls were clearly overshadowed by the fact that countries were at least able to gather to share views on AI risks at a high level, even if the discourse is still dominated by the U.S. and its allies.
In this sense, China’s participation reflects its desire to be at least “in the room,” and its “wait and see” attitude toward this particular push for global AI governance.