The Chinese Communist Party (CCP) has long sought to influence the media and information space in other countries, and the effort has intensified over the past decade. Much of the activity is overt — diplomats publishing op-eds or state-run news outlets disseminating propaganda. While some covert tactics have also been documented, for many years there was no significant evidence that Chinese actors were engaging in aggressive disinformation campaigns like the one Russia pursued on global social media platforms ahead of the 2016 U.S. elections. That has now changed.
Over the past month alone, a series of exposés has demonstrated that pro-Beijing actors are carrying out a wide range of covert activities in multiple countries and languages. The campaigns aim to spread demonstrable falsehoods, sow societal discord and panic, manipulate perceptions of public opinion, or undermine the democratic process.
Evidence revealed last year indicated that some Chinese-language campaigns had begun on platforms like Twitter as early as April 2017, but the latest round of incidents and investigations points to a more definitive shift in Chinese influence operations. It remains to be seen how foreign governments, technology companies, global internet users, and even the CCP’s own propaganda apparatus will adapt to the challenges presented by this change. Whatever their response, it is clear that a new era of disinformation has dawned.
Sowing Local Divisions on a Global Scale
Since March, coordinated and covert attempts by China-linked actors to manipulate information — particularly regarding COVID-19 — have been detected in countries including the United States, Argentina, Serbia, Italy, and Taiwan, with the relevant content often delivered in local languages.
Moreover, in a departure from Beijing’s more traditional censorship and propaganda campaigns, the narratives being promoted are not necessarily focused on advancing positive views and suppressing negative views of China.
For example, in an analysis of China-related Twitter posts disseminated in Serbia between March 9 and April 9 by automated “bot” accounts, the Digital Forensics Center found that the messages praised China for supplying aid during the coronavirus pandemic (much like a similar effort in Italy). But the posts also amplified criticism of the European Union for supposedly failing to do the same, despite the fact that the bloc actually provided millions of euros in assistance.
Elsewhere, disinformation attempts have tried to sow discord within other countries. In Argentina, a Chinese agent hired a local intermediary to approach editors from at least three news outlets in early April, according to the Falun Dafa Information Center. The broker allegedly offered to pay approximately $300 if they published a prewritten article in Spanish that smeared the Falun Gong spiritual group, which is persecuted in China, including by suggesting local citizens who practice Falun Gong could pose a threat to public health in Argentina. All of the approached outlets reportedly rejected the offer.
In other cases, the manipulated content being shared had no connection to China at all. In a campaign in the United States reported by the New York Times, text and social media messages amplified by China-linked accounts in mid-March carried false warnings about a nationwide lockdown and troop deployments to prevent looting and rioting. The campaign was an apparent attempt to incite public panic and increase distrust in the U.S. government.
The Australian Strategic Policy Institute documented another recent example in which coordinated campaigns by nationalistic Chinese netizens — whose precise links to the Chinese state remain uncertain — attempted to harm Taiwan’s international reputation and its relationship with the United States. A network of 65 Twitter accounts that had previously posted in mainland-style Simplified Chinese abruptly switched to Traditional Chinese characters, thereby impersonating Taiwanese citizens. They then posted messages of apology to the Ethiopian-born director general of the World Health Organization, lending false credence to his allegation that racist slurs had been directed at him from Taiwan. In April, some of the accounts also jumped on an existing Iranian-linked Twitter campaign calling for California’s secession from the United States, trying to give the impression that Taiwanese users supported California’s independence. (This effort was likely undermined by their referring to the island as “Taiwan (CHN).”)
Evolving Tactics and New Platforms
The March campaign in the United States underscored some of the evolving tactics of China-linked disinformation campaigns. While platforms like Facebook and Twitter remain important battlegrounds, recent investigations indicate a shift toward text messages and encrypted messaging applications. Because these channels have more atomized structures, monitoring and countering disinformation on them is more difficult than on Facebook and Twitter.
A recent report by Recorded Future found that, ahead of the January 2020 general elections in Taiwan, Chinese content farms used artificial intelligence “to generate massive volumes of content” that was then spread to Taiwanese users in an attempt to undermine the electoral prospects of incumbent President Tsai Ing-wen and her Democratic Progressive Party. The research group also cited evidence that China-based actors had deployed a tool developed by a Chinese company that enables batch posting and sharing of content across multiple platforms. Analysts believe the tool was deployed in Taiwan “because these technologies can ease the spread of weaponized content at scale, especially on closed messaging platforms such as LINE, where Taiwanese users frequently reshare content.”
But low-tech tactics are also being used. Over the past year, numerous reports have emerged of China-linked actors seeking out Chinese-speaking social media influencers with international followings, offering to purchase their accounts or pay them to post certain information. Other reports indicate that this practice is not limited to Chinese speakers, but also extends to individuals like an English-speaking Canadian YouTuber.
Prospects for Growth and Potential Risks
Much about China-linked overseas disinformation campaigns remains unknown. Indeed, the examples above are likely just the tip of the iceberg. Given the visible efforts by Chinese diplomats and state media to shore up the government’s reputation and downplay its responsibility for repressive measures in Wuhan that contributed to the global coronavirus outbreak, it seems reasonable to assume that covert Twitter bot campaigns have occurred in additional countries, particularly in Europe. There also appears to be some evidence of cross-fertilization among Russian, Iranian, and Chinese disinformation networks, although the degree of actual premeditated coordination is unclear.
The WeChat social media platform, owned by the Chinese company Tencent, is a potentially influential channel for political disinformation and content manipulation, with more than 100 million users outside China. A study published this week by Toronto’s Citizen Lab found systematic surveillance of posts by users registered abroad, with evidence of scanning for politically sensitive terms. The researchers found no evidence of systematic deletions, but the monitoring and collection of such data opens the door to manipulation, including on topics of electoral consequence in democracies.
The Chinese government is not the only actor currently experimenting with Russian-style disinformation campaigns. The latest edition of Freedom House’s Freedom on the Net report found evidence of malign digital electoral interference by various government, nongovernmental, and partisan forces in 26 countries, though most acted within their borders rather than trying to influence other countries. But the CCP presides over one of the world’s most repressive regimes and an economy second only to that of the United States. If it invests heavily in this new approach to international influence, it will pose enormous challenges to democratic governments, technology firms, and internet users.
This sphere of activity also poses a challenge for the CCP itself. Once exposed, disinformation campaigns that spread falsehoods and sow divisions in other societies undermine a key dimension of Beijing’s foreign propaganda narrative, one that it has invested heavily in promoting over the past three decades: that China’s rise is peaceful; that the regime is benign and shuns any interference in other countries; and that political, economic, and media engagement with a CCP-led China is a win-win prospect for all involved. It is difficult to predict whether and how the Party will try to reconcile this contradiction. For the time being, the CCP’s global disinformation campaigns show no sign of abating.
Sarah Cook is a senior research analyst for China, Hong Kong, and Taiwan at Freedom House and director of its China Media Bulletin.