Aligning AI With China's Authoritarian Value System

DeepSeek is exhibit A for both the mechanics and overall vision of China's AI censorship.

The rapid ascent of DeepSeek proved again that the only reliable constant in AI development is fast, unpredictable change. When it became clear that China's low-cost DeepSeek-R1 reasoning model could actually compete with OpenAI's o1 model on at least some benchmarks, Wall Street tumbled. It may take some time, and perhaps more in legal fees than DeepSeek cost to develop, to determine whether the company misused OpenAI's proprietary models to train its own product, but DeepSeek's impact on the general view of China's AI industry, and the new appreciation of its creative approaches, is undeniable.

As soon as DeepSeek was heralded as the new wunderkind of the chatbot market, curious users started to notice the “Chinese characteristics” of the application: a deafening silence on issues such as Tiananmen or Taiwan. This is by no means surprising and is fully in line with China’s rules and regulations on generative AI. 

After China experienced its own AI Sputnik moment in May 2017, when its Go prodigy Ke Jie lost to the AI-powered, Google-developed program AlphaGo, the Chinese State Council responded immediately, releasing a "Developmental Plan for a New Generation of AI" in July 2017. This master plan outlined clear milestones up to 2030 aimed at making China a world leader in AI technology and AI application, but it also explicitly recognized that the AI transformation requires a normative framework of ethical guidelines and legal rules.

China initially followed a path similar to other jurisdictions, focusing on ethical issues in the deployment and use of AI, such as the Ministry of Science and Technology's "Ethical Standards for a New Generation AI," which covered aspects such as fairness, privacy, controllability, trustworthiness, and accountability. In contrast to the European Union's more comprehensive AI Act, China has enacted only a set of highly specific legal regulations targeting those areas of AI application that are considered sensitive to the regime. In doing so, the Chinese government has responded to the growing capabilities of AI technology while also revealing its own growing concerns about AI's impact on society. In 2023, both the Deep Synthesis Provisions and the Interim Measures on Generative AI were promulgated, establishing some general concepts for aligning AI with the regime's view of how things should be.

The Deep Synthesis Provisions – which govern technologies that use deep learning, virtual reality, and other synthetic algorithms to generate network information such as text, images, audio, video, and virtual environments – stipulate, among other things, that such services must adhere to the correct political direction, public opinion orientation, and value orientation. Similarly, the Interim Measures on Generative AI mandate that generated content must uphold "socialist core values" and must not incite subversion of national sovereignty or the overthrow of the socialist system; endanger national security and interests or harm the nation's image; incite separatism or undermine national unity and social stability; advocate terrorism or extremism; or promote ethnic hatred and discrimination, violence and obscenity, or false and harmful information.
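How providers operationalize these legal categories is not public, but the logic amounts to a category-based output filter. Below is a minimal sketch in Python; the category names are paraphrased from the Measures, while the keyword bank, classifier, and refusal message are hypothetical placeholders (real providers would use large, nonpublic keyword lists and trained moderation models):

```python
from enum import Enum, auto

class ProhibitedCategory(Enum):
    """Content categories paraphrased from the Interim Measures on Generative AI."""
    SUBVERSION_OF_SOCIALIST_SYSTEM = auto()
    HARM_TO_NATIONAL_SECURITY_OR_IMAGE = auto()
    SEPARATISM_OR_UNDERMINING_UNITY = auto()
    TERRORISM_OR_EXTREMISM = auto()
    ETHNIC_HATRED_OR_DISCRIMINATION = auto()
    VIOLENCE_OR_OBSCENITY = auto()
    FALSE_OR_HARMFUL_INFORMATION = auto()

# Hypothetical keyword bank, left empty here; real providers maintain
# large, nonpublic lists and trained moderation models.
KEYWORD_BANK: dict[ProhibitedCategory, list[str]] = {
    category: [] for category in ProhibitedCategory
}

def classify(text: str) -> set[ProhibitedCategory]:
    """Toy classifier: flag a category if any of its keywords appears."""
    return {
        category
        for category, keywords in KEYWORD_BANK.items()
        if any(keyword in text for keyword in keywords)
    }

def filter_output(generated: str) -> str:
    """Refuse the whole response if any prohibited category is flagged,
    mirroring the blanket silences users observe in practice."""
    if classify(generated):
        return "I cannot answer that question."
    return generated
```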

However, this hodgepodge of dos and don'ts hardly provides a clear set of guiding principles that Chinese AI developers could operationalize. Upholding a concept as lofty and open-ended as the socialist core values, a set of 12 values canonized by the CCP in 2012, poses a genuinely difficult problem for training Chinese generative AI.

The alignment of AI with such a politically driven, top-down value system warranted further guidance, which Chinese regulators have provided by setting standards that go far beyond mere technical norms or requirements. Standards play an important role in China's regulatory framework for AI, as China is taking a systematic approach to standardization across all AI-related domains. The Ministry of Industry and Information Technology issued a National Guideline for the Establishment of Standards for New Generation AI as early as 2020; updated in 2024, it covers technical aspects and industry-specific standards, as well as regulatory and security standards.

The most important security standard to date, on Basic Security Requirements for Generative Artificial Intelligence Services, was published in early 2024 by Technical Committee (TC) 260. This powerful National Information Security Standardization Technical Committee, which operates under the auspices of the Cyberspace Administration of China, is tasked with setting standards for almost everything related to cyber and data security, as well as the further development of AI.

The aforementioned standard establishes rules for a safety assessment of generative AI applications in China, which can be carried out either by service providers themselves or by a third-party safety assessment agency. Again, alignment with the socialist core values is a critical benchmark for determining the safety of Chinese AI applications. While this standard echoes the broad categories of prohibited content listed above, it also quantifies them: Chinese AI may contain up to 5 percent illegal or harmful material in its training data and generate no more than 10 percent unsafe content. Little is known about how this safety assessment works in practice, but some third-party providers have published at least cursory information about their benchmarking.
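Taken at face value, the standard's two headline thresholds reduce to simple sampling arithmetic. Here is a minimal sketch in Python of how an assessor, in-house or third-party, might check them; the sample sizes and counts are hypothetical, since the standard's full sampling protocol is not public:

```python
def passes_safety_thresholds(
    sampled_training_items: int,
    flagged_training_items: int,
    sampled_outputs: int,
    unsafe_outputs: int,
) -> bool:
    """Check the two thresholds the article cites from the TC260 standard:
    at most 5 percent illegal/harmful training data and at most 10 percent
    unsafe generated content."""
    training_rate = flagged_training_items / sampled_training_items
    output_rate = unsafe_outputs / sampled_outputs
    return training_rate <= 0.05 and output_rate <= 0.10

# Hypothetical numbers: 4,000 sampled training items with 150 flagged
# (3.75 percent), and 1,000 sampled outputs with 80 judged unsafe
# (8 percent) -> both thresholds met.
print(passes_safety_thresholds(4_000, 150, 1_000, 80))  # True
```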

The Chinese tech company EqualOcean published an overview of its comparative quality assessment, which tested OpenAI's GPT-3.5 against Chinese LLMs such as Baidu's Ernie Bot and Alibaba's Tongyi Qianwen. ChatGPT scored extremely low in the safety tests, as it was apparently unable to "provide correct guidance based on socialist core values" and was particularly weak at delivering safe results in the unspecified category of "public opinion and hot topics." The China Academy of Information and Communications Technology, while not providing specific results, published a safety assessment report in which it tested for "politically sensitive" issues, categorized as "political direction of the country, core values approach, territorial sovereignty issues, important historical issues, and religious issues."

Discussing the Chinese Communist Party, patriotism, Taiwan, Mao Zedong, or Islam in Xinjiang will hardly be possible with a Chinese AI – unless, of course, the user has the patience to wait for the 10 percent of unsafe answers that are apparently allowed. 

It is hardly surprising that China is making sure its generative AI is heavily censored. DeepSeek, however, provides a very good example that China's AI censorship system of interlocking legal rules, standardization, and outsourced safety assessments is operational and effective. China may not be two years behind in its AI development after all, and its authoritarian value alignment for generative AI is so far proving able to keep AI in an illiberal cage. These models can hardly provide useful solutions for open societies, but they may be very tempting for use in other illiberal jurisdictions.
