Data and algorithms are the fundamental building blocks of cyberspace, but while data practices are increasingly regulated around the world, the regulation of algorithms remains relatively untouched. The EU General Data Protection Regulation (GDPR), for example, remains the groundbreaking model for data protection regimes in most parts of the world; no comparable framework yet exists for algorithms.
In August, China issued the Draft Internet Information Service Algorithmic Recommendation Management Provisions, signaling an interest in setting the standard in this space.
The draft establishes a framework for the regulation of recommendation algorithms. The provisions apply to “search filters” and “personalized recommendation algorithms” as used in social media feeds (Weibo), content services (Tencent Music, streaming), online stores (e-commerce, app stores), and so on. They also cover “dispatching and decision making” algorithms, such as those used by gig-work platforms (transport and delivery services), and “generative or synthetic-type” algorithms used for content generation in gaming, virtual environments, virtual meetings, and more.
Through this draft, China seeks to address multiple concerns: the spread of mis- and disinformation, lack of user autonomy, the perceived economic harms of price discrimination, online addiction, and the conditions of platformized gig work. The provisions also reflect China-specific anxieties, such as the fear of public disaffection and consequent social mobilization.
Article 13 of the draft includes anti-manipulation provisions, banning the manipulation of topic lists, search rankings, and popular search terms, as well as the creation of fake user accounts to falsify traffic numbers. The same article also targets self-preferencing by platform operators and the evasion of supervision. The article seems ideologically rooted but ignores the reality that, in constantly updating content streams, new content can emerge organically and become influential; control over such phenomena can hardly be coded into hard law. This is concerning, given that the provisions prescribe penalties for infractions, including fines.
Another provision mandates that algorithmic recommendations should not be “addictive.” This could have deleterious effects on companies, since the primary purpose of recommendations is to expose users to additional content that likely matches their tastes or needs. An adverse interpretation of “addiction” could directly impact legitimate businesses such as TikTok and Kuaishou (whose primary user interface is an infinite scroll through algorithmic recommendations), gaming providers that rely on user engagement, and streaming content providers.
Recommendation algorithms make use of a large number of static and dynamic signals, including user profiles based on demographic and behavioral factors, as well as content profiles that capture various attributes of the content to be recommended. Many of these factors are dynamic – for example, social media feed algorithms sort the available content based on user behavior such as clicks, likes, and shares on past content, as well as content attributes such as recency and popularity. A recommendation algorithm performs a multi-objective optimization over a very large number of input data points. Further, there is a continuous feedback loop in which the algorithm is told how well particular recommendations worked. In some cases, recommendation algorithms are personalized to individual users or to user cohorts. Recommendation algorithms are therefore inherently dynamic, and specific outcomes are often neither reproducible nor easy to explain post facto.
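To make this concrete, here is a minimal sketch in Python of such a scorer. Every name, signal, and weight in it is hypothetical, and a production system would optimize far more objectives over far more signals; the point is only to show how dynamic signals and the feedback loop make a given ranking hard to reproduce later.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    topic: str
    created_at: float        # fixed at publication; the recency derived from it is dynamic
    popularity: float = 0.0  # dynamic: updated by the feedback loop below

@dataclass
class UserProfile:
    user_id: str
    # behavioral signal: per-topic affinity, updated on clicks/likes/shares
    topic_affinity: dict = field(default_factory=dict)

def score(user: UserProfile, item: Item, now: float) -> float:
    """Toy multi-objective score: personal relevance + recency + popularity.
    The 0.5/0.3/0.2 weights are placeholders, not anyone's real tuning."""
    relevance = user.topic_affinity.get(item.topic, 0.0)
    recency = 1.0 / (1.0 + (now - item.created_at) / 3600.0)  # decays by the hour
    return 0.5 * relevance + 0.3 * recency + 0.2 * item.popularity

def recommend(user: UserProfile, items: list, now: float, k: int = 3) -> list:
    return sorted(items, key=lambda it: score(user, it, now), reverse=True)[:k]

def record_feedback(user: UserProfile, item: Item, engaged: bool) -> None:
    """Feedback loop: engagement mutates both the user profile and the item,
    so re-running recommend() on the 'same' inputs yields a new ranking."""
    if engaged:
        user.topic_affinity[item.topic] = user.topic_affinity.get(item.topic, 0.0) + 0.1
        item.popularity += 0.05

# The same call, before and after a single engagement event, ranks differently.
now = time.time()
alice = UserProfile("alice")
catalog = [Item("a1", "music", now - 7200),
           Item("a2", "news", now - 600),
           Item("a3", "music", now - 300)]
print([it.item_id for it in recommend(alice, catalog, now)])  # ['a3', 'a2', 'a1']
record_feedback(alice, catalog[1], engaged=True)              # Alice engages with the news item
print([it.item_id for it in recommend(alice, catalog, now)])  # ['a2', 'a3', 'a1']
```

Even this toy version is path dependent: the ranking depends on every engagement event that preceded it, which is precisely the property that frustrates reproducibility at scale.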
These aspects are in conflict with several mandates in the provisions. For example, Article 9 requires algorithmic recommendation services to maintain complete feature databases. Per Article 20, compliance requirements include filing an algorithm self-assessment report and details about the content intended to be publicized. In their quest for transparency, the draft provisions overlook the fundamentally dynamic nature of recommendation services.
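Mechanically, complying with such a mandate would require a per-decision snapshot of every input signal. Building on the sketch above (again, all names are hypothetical), an auditable version would have to log something like the following for every single recommendation served:

```python
import json

audit_log: list = []  # stand-in for an append-only compliance store

def recommend_with_audit(user: UserProfile, items: list, now: float, k: int = 3) -> list:
    """Wraps the earlier recommend(). Because topic_affinity and popularity
    drift continuously, nothing short of a full snapshot of every input at
    decision time can explain a past ranking, which is roughly what a
    'complete feature database' implies in practice."""
    ranked = recommend(user, items, now, k)
    audit_log.append(json.dumps({
        "ts": now,
        "user": user.user_id,
        "topic_affinity": dict(user.topic_affinity),
        "item_signals": [{"id": it.item_id, "topic": it.topic,
                          "created_at": it.created_at, "popularity": it.popularity}
                         for it in items],
        "ranking": [it.item_id for it in ranked],
    }))
    return ranked
```

Multiplied across billions of daily decisions, each made against a moving feature store, this is an enormous burden to impose in the name of transparency.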
User autonomy is a major plank of the draft, intended to remedy discriminatory practices at the algorithmic level. Specific provisions require that users be given greater control over what is recommended to them (control over their user profiles) and over whether algorithmic recommendations are used at all (the option to turn them off). While well-intentioned, this is unlikely to work in practice. Recommendation algorithms perform the key function of curation, which enables users to deal with information overload. While a limited number of users may turn recommendations off or exercise control over their profile, the overwhelming majority are unlikely to use these mechanisms, either because they are happy with algorithmic curation or to avoid the additional effort.
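For illustration, an opt-out of the kind the draft contemplates could be wired in as below, reusing the earlier sketch; the reverse-chronological fallback is our assumption, since the draft does not dictate what a feed without recommendations should look like.

```python
def build_feed(user: UserProfile, items: list, now: float,
               recommendations_enabled: bool = True) -> list:
    """Hypothetical feed assembly honoring a user-level opt-out.
    With recommendations off, the service falls back to a non-personalized
    ordering; reverse chronological is the usual neutral choice."""
    if not recommendations_enabled:
        return sorted(items, key=lambda it: it.created_at, reverse=True)
    return recommend(user, items, now, k=len(items))
```

The switch itself is trivial to build; the draft's real wager is behavioral, and as noted above, few users are likely to flip it.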
On the whole, the draft provisions attempt to address genuine concerns with algorithmic recommendations and the power such algorithms hold over users. In the absence of regulatory oversight, individual users may go down content rabbit holes (leading to problems such as addiction and disengagement) or suffer economic harm (through illegal price discrimination and self-preferencing/improper competition), and society as a whole can be damaged by misinformation and harms to minors (gaming addiction, exposure to harmful content). The inclusion of specific articles that address many of these known issues is welcome. Further, the provisions include a mix of controls over algorithm inputs (user profiles), behavior, and outputs (audit and logging requirements), thus providing regulatory knobs at all the relevant points.
However, the provisions overreach with their emphasis on the promotion of mainstream values and their requirement that algorithmic curation demote content that may upset the economic or social order. The provisions require algorithmic recommendation service providers to “uphold mainstream value orientations,” “vigorously disseminate positive energy,” and “advance the use of algorithms in the direction of good.” These draft provisions are indicative of a paternalistic and authoritarian state, one that places the responsibility for promoting “mainstream values” on entities that use recommendation algorithms in order to prevent any challenge to government control over every aspect of Chinese life.
Headmaster Xi appears intent on disciplining his pupils.