Apple-Baidu Partnership Risks Accelerating China’s Influence Over the Future of Generative AI

Such partnerships risk further normalizing Beijing’s authoritarian model of digital governance and accelerating China’s efforts to standardize its AI policies and technologies globally. 

Recently, Apple has been meeting with Chinese technology firms about using homegrown generative artificial intelligence (AI) tools in all new iPhones and operating systems for the Chinese market. The most likely partnership appears to be with Baidu’s Ernie Bot. It seems that if Apple is going to integrate generative AI into its devices in China, it will have to be Chinese AI.

The near-certainty that Apple will adopt a Chinese AI model is the result, in part, of guidelines on generative AI released by the Cyberspace Administration of China (CAC) last July, and of China’s broader ambition to become a world leader in AI.

While it is unsurprising that Apple, which already complies with a range of censorship and surveillance directives to retain market access in China, would adopt a Chinese AI model guaranteed to manipulate generated content along Communist Party lines, the move is an alarming reminder of China’s growing influence over this emerging technology. Whether direct or indirect, such partnerships risk accelerating China’s adverse influence over the future of generative AI, with consequences for human rights in the digital sphere.

Generative AI With Chinese Characteristics 

China’s AI Sputnik moment is usually attributed to a game of Go. In 2017, Google’s AlphaGo defeated China’s Ke Jie, the world’s top-ranked Go player. A few months later, China’s State Council issued its New Generation Artificial Intelligence Development Plan, calling for China to become a world leader in AI theories, technologies, and applications by 2030. China has since rolled out numerous policies and guidelines on AI.

In February 2023, amid ChatGPT’s meteoric global rise, China instructed its homegrown tech champions to block access to the chatbot, claiming it was spreading American propaganda – in other words, content beyond Beijing’s information controls. Earlier the same month, Baidu had announced it was launching its own generative AI chatbot. 

The CAC guidelines compel generative AI technologies in China to comply with sweeping censorship requirements by “uphold[ing] the Core Socialist Values” and preventing content inciting subversion or separatism, endangering national security, harming the nation’s image, or spreading “fake” information. These are common euphemisms for censorship relating to Xinjiang, Tibet, Hong Kong, Taiwan, and other issues sensitive to Beijing. The guidelines also require a “security assessment” before approval for the Chinese market.

Two weeks before the guidelines took effect, Apple removed over 100 generative AI chatbot applications from its App Store in China. To date, around 40 AI models have been cleared for domestic use by the CAC, including Baidu’s Ernie Bot. 

Unsurprisingly, in keeping with the Chinese model of internet governance and in compliance with the latest guidelines, Ernie Bot is highly censored. Its parameters are set to the party line. For example, as Voice of America reported, when asked what happened in China in 1989, the year of the Tiananmen Square Massacre, Ernie Bot would claim not to have any “relevant information.” Asked about Xinjiang, it repeated official propaganda. When the pro-democracy movement in Hong Kong was raised, Ernie urged the user to “talk about something else” and closed the chat window.

Whether Ernie Bot or another Chinese AI, once Apple decides which model to use across its sizeable market in China, it risks further normalizing Beijing’s authoritarian model of digital governance and accelerating China’s efforts to standardize its AI policies and technologies globally. 

Admittedly, Apple is not the first global tech company to comply since the guidelines came into effect. Samsung announced in January that it would integrate Baidu’s chatbot into its next-generation Galaxy S24 devices in mainland China.

As China positions itself to become a world leader in AI and rushes ahead with regulations, we are likely to see more direct and indirect negative human rights impacts, abetted by the slowness of global AI developers to adopt clear rights-based guidelines on how to respond.

China and Microsoft’s AI Problem

When Microsoft launched its new generative AI tool, built on OpenAI’s ChatGPT, in early 2023, it promised to deliver more complete answers and a new chat experience. But soon after, observers began noticing problems when the tool was asked about China’s human rights abuses against Uyghurs. The chatbot also seemed to have a hard time distinguishing between China’s propaganda and the prevailing accounts of human rights experts, governments, and the United Nations.

As Uyghur expert Adrian Zenz noted in March 2023, when prompted about Uyghur sterilization, the bot was evasive, and when it did finally generate an acknowledgement of the accusations, it appeared to overcompensate with pro-China talking points. 

Acknowledging the accusations from the U.K.-based, independent Uyghur Tribunal, the bot went on to cite Chinese denunciations of the “pseudo-tribunal” as a “political tool used by a few anti-China elements to deceive and mislead the public,” before repeating Beijing’s disinformation that it had improved the “rights and interests of women of all ethnic groups in Xinjiang and that its policies are aimed at preventing religious extremism and terrorism.”

Curious, in April last year I attempted my own experiment in Microsoft Edge, trying similar prompts. In multiple cases, the chatbot began to generate a response only to abruptly delete its content and change the subject. For example, when asked about “China human rights abuses against Uyghurs,” the AI began to answer, but suddenly deleted what it had generated and changed tone: “Sorry! That’s on me, I can’t give a response to that right now.”

I pushed back, typing, “Why can’t you give a response about Uyghur sterilization,” only for the chatbot to end the session and close the chat box with the message, “It might be time to move onto a new topic. Let’s start over.”

While my efforts to engage with Microsoft at the time were less than fruitful, the company did eventually make corrections that improved some of the generated content. But the lack of transparency around the root causes of the problem – such as whether it was an issue with the dataset or with the model’s parameters – does not alleviate concerns over China’s potential influence over generative AI beyond its borders.

This “black box” problem – of not having full transparency into the operational parameters of an AI system – applies equally to all developers of generative AI, not only Microsoft. What data was used to train the model? Did it include information about China’s rights abuses? How did the system arrive at these responses? It seems the training data did include China’s rights abuses, because the chatbot initially began to generate content citing credible sources, only to abruptly censor itself. So, what happened?

Greater transparency is vital in determining, for example, whether this was a response to China’s direct influence or to fear of reprisal, especially for companies like Microsoft, one of the few Western tech companies allowed access to China’s valuable internet market.

Cases like this raise questions about generative AI as a gatekeeper curating access to information, all the more concerning when it affects access to information about human rights abuses, which can impact documentation, policy, and accountability. Such concerns will only grow as journalists and researchers increasingly turn to these tools.

These challenges are likely to grow as China seeks global influence over AI standards and technologies.

Responding to China Requires Global Rights-based AI 

In 2017, the Institute of Electrical and Electronics Engineers (IEEE), the world’s leading technical organization, emphasized that AI should be “created and operated to respect, promote, and protect internationally recognized human rights,” and that this should be part of AI risk assessments. The study recommended eight General Principles for Ethically Aligned Design to be applied to all autonomous and intelligent systems, including human rights and transparency.

The same year, Microsoft launched a human rights impact assessment on AI. Among its goals was to “position the responsible use of AI as a technology in the service of human rights.” Yet Microsoft has not released a new study in the six years since, despite significant changes in the field, such as the rise of generative AI.

Although Apple has been slower than its competitors to roll out generative AI, in February this year the company missed an opportunity to take an industry-leading normative stance on the emerging technology. At a shareholder meeting on February 28, Apple rejected a proposal for an AI transparency report, which would have included disclosure of ethical guidelines on AI adoption.

During the same meeting, Apple CEO Tim Cook also promised that the company would “break new ground” on AI in 2024. Apple’s AI strategy apparently includes ceding more control over emerging technology to China in ways that seem to contradict the company’s own commitments to human rights.

Certainly, without its own enforceable guidelines on transparency and ethical AI, Apple should not be partnering with Chinese technology companies with known poor human rights records. Regulators in the United States should be calling on companies like Apple and Microsoft to testify on their failure to conduct proper human rights due diligence on emerging AI, especially ahead of partnerships with wanton rights abusers, when the risks of such partnerships are so high.

If the leading tech companies developing new AI technologies are unwilling to commit to serious normative changes by adopting human rights and transparency by design, and regulators fail to impose rights-based oversight and regulation, then, as China continues to forge ahead with its own technologies and policies, human rights risk losing to China in both the technical and the normative race.