What’s Next for Social Media in Indonesia?

How to make the most of Indonesia’s new Digital Platform law.


In late 2019, the Indonesian government established the nation’s new Digital Platform Law (PP PSTE no. 71/2019, the Government Regulation on the Implementation of Electronic Systems and Transactions). The regulation, which revised its predecessor (PP PSTE no. 82/2012), imposes a number of obligations on electronic system companies, including social media companies. It requires social media companies to control the circulation of information on their platforms and to protect user data. However, merely placing obligations on social media companies will not be enough. The law needs follow-up in a few key areas.

First, the government and social media companies must establish a shared understanding of what counts as “restricted content.” In Article 5, the Digital Platform Law orders social media companies to ensure that their platforms do not contain or facilitate the spread of restricted content. However, social media companies do not always adhere to this requirement, because the two parties understand the term “restricted content” differently.

As mentioned in Article 45 of UU no. 19/2016 (the law that revised the Electronic Information and Transactions Law), the Indonesian government uses the term “restricted content” (“konten terlarang”). The term encompasses gambling, violations of the moral code, blasphemy and/or defamation, extortion and/or threats, hate speech, and false news. This definition carries a heavy political charge for two reasons: first, because the term tends to connote whatever the Indonesian government perceives as a threat, and second, because it hands the government the legal foundation to actually control the perceived threat and order its removal.

This definition is distinct from the one used by social media companies. Companies like Facebook and Twitter do not speak of restricted content but of “harmful content” (“konten berbahaya”). They generally put harmful content into two categories: content that should be removed and content that should only be limited. The first category usually applies to content deemed to violate the companies’ policies, such as graphic content, bullying, violent content, and adult nudity. On this category, social media companies and the Indonesian government are in agreement.

However, the second category is different. Social media companies define this second category of “harmful content” as content that does not violate company policy yet prevents users from receiving authentic information. A major example is false news (disinformation). Facebook argues that disinformation should only be reduced, not removed, because of the blurry line between what counts as disinformation and what counts as opinion. Twitter seconds this, arguing that disinformation should not be taken down since it does not harm the principles of democracy in the sense that it does not suppress voter turnout. While social media companies want only to reduce this type of “harmful content,” the Indonesian government demands its removal.

Given these differences, social media companies are unlikely to comply with the Indonesian government’s orders, since the two sides perceive the risk posed by each type of restricted content differently. To solve this problem, the Indonesian government, social media companies, and social media users need to hold further discussions on defining and responding to restricted content. The agreed definition and response should provide protection for vulnerable communities.

Second, the Indonesian government should encourage social media users to be more proactive. The current reporting mechanism suggests that the government will proactively restrain content found to violate the new law. Although no objection has been raised to this authority, it does not seem a sound model for the future of content governance in Indonesia.

Of course, there are some issues that demand the proactive involvement of the government. Content that promotes terrorism, for instance, requires social media companies and the government to collaborate in exchanging information about terrorist organizations. However, the response should differ in other cases, such as disinformation. As previously argued, opinion and satire are often misinterpreted as disinformation. Despite the Indonesian government’s duty to prevent potential unrest caused by deceptive information, its impartiality toward content that does not serve its agenda is uncertain. Therefore, the main actors in combating disinformation should not be the government but the users themselves, both individuals and communities.

The Indonesian government should start encouraging individuals and communities to take on the duty of constraining the spread of disinformation on social media platforms by actively reporting content that appears to violate platform content policies. Social media companies should pursue the same goal. It is also important for them to provide and promote an in-platform reporting mechanism that is easy to find and understand, and to ensure that content reported by individuals and communities receives a real response, without the companies needing to be pressured by the government.

Janitra Haryanto is a research project officer at the Center for Digital Society (CfDS) UGM. His research mostly aims to advocate for and recommend technology-related policies to the government and other stakeholders.