Artificial Intelligence, Disinformation, and the 2024 Indonesian Elections

With the world at a technological watershed, Indonesia needs to develop mechanisms to guard against the manipulation of AI for political purposes.


The 2024 Indonesian elections will take place at a technological watershed, with artificial intelligence (AI) and machine learning poised to play an influential role. As Indonesia readies itself for this crucial electoral contest, there is both excitement about AI’s possibilities and concern about its potential misuses. This article delves into how AI might reshape the political landscape of Indonesia, focusing on the opportunities it presents and the challenges it might pose. This is especially important in light of the General Elections Commission (KPU)’s recent declaration that the regulation of AI technology is beyond its jurisdiction.

In the domain of individualized voter outreach and engagement, AI has emerged as a revolutionary instrument. Political parties can now meticulously analyze vast troves of voter data. Using AI, they can not only address specific concerns and appeal to diverse demographics, but also micro-target, segmenting a large voter base into distinct interest-driven categories. While digital campaigns amplified by these technologies have created novel channels for parties to establish widespread connections, scholars, policymakers, and observers have raised concerns about the potential impact of AI on democracy globally.

Despite their numerous advantages, AI-powered systems give rise to ethical concerns and introduce fresh risks to human rights and democratic processes on a global scale. Take, for instance, the ongoing conflict between Hamas and Israel; social media is currently inundated with misinformation, including AI-generated content aimed at swaying public sentiment. Governments and observers around the world are increasingly alarmed by the unchecked influence and potential misuse of AI in shaping political narratives and influencing public discourse.

AI and Politics in Indonesia

Similar concerns have been voiced by observers in Indonesia, where the widespread availability of AI tools opens the door for political actors to exploit this technology for coordinated propaganda campaigns aimed at influencing electoral outcomes. There is already clear evidence of Indonesians harnessing AI technology to craft engaging social media content on platforms like TikTok, Facebook, Instagram, and X. These tools are widely expected to be deployed in attempts to sway voters as the 2024 election date draws near.

The capacity of AI tools like Elevenlabs.io, HeyGen, and DeepbrainAI to manipulate videos, emulate voices, or generate false text on a massive scale poses a significant threat to democracy. The proliferation of AI-generated propaganda, meticulously designed to exploit individual biases, wields the power to influence and mold public opinion. Unfortunately, fact-checkers in Indonesia, as elsewhere, often find themselves ill-prepared, lacking the tools to verify or debunk such deceptive material promptly.

Moreover, given the clear evidence that buzzers, or cyber troopers, operated on social media during previous electoral periods, AI tools could help these engagement-hungry propagandists further pollute the information ecosystem during the election period.

While a growing suite of AI-powered tools has been developed to detect AI-generated content, discerning fact from fiction remains a challenge, especially for ordinary social media users. Experts note that straightforward fact-checking isn’t always the solution, as context often plays a critical role: AI-generated content may be factually accurate yet misleading in a specific context, making automated fact-checking tools prone to errors on context-dependent material.

In a recent discussion I had with Ika Ningtyas, the secretary-general of the Alliance of Independent Journalists, she voiced her concerns over the rising wave of AI-based content creation tools in Indonesia. Ika pointed out alarming instances of AI-generated content across platforms like X, TikTok, and Facebook. A striking case she cited was an AI-produced video of former Minister of Health Terawan Agus Putranto purportedly claiming a breakthrough in diabetes treatment. The video gained significant traction on Facebook before being debunked by CekFakta, a local fact-checking organization. For Ika, such cases underscore the potential of AI-generated content to shape, if not skew, public perceptions as Indonesia approaches its election campaign period.

“While certain characteristics of AI-generated content can still be discerned by trained fact-checkers, it’s challenging for the average person to distinguish,” she said. “As a result, people might be more inclined to accept such content at face value. Often, these ‘deepfake’ contents gain momentum faster than they can be debunked by fact-checkers.”

In another example, a video featuring President Joko Widodo singing the hit song “Asmalibrasi” spread like wildfire on X, showcasing the rising trend of using AI to depict Indonesian political figures. And last week, another AI-produced video, depicting presidential candidate Prabowo Subianto speaking fluent Arabic, went viral on TikTok. The video racked up over 1.7 million views in the three days after it was posted on November 7, but was subsequently exposed as a hoax by fact-checkers. It remains on TikTok, and despite being debunked, recent comments indicate that some Indonesian TikTok users still believe it to be genuine. Such rapid proliferation demonstrates the convincing nature of these synthetic creations and their potential influence on public opinion.

A Way Forward

Without effective oversight from entities like the KPU and the General Election Supervisory Agency (Bawaslu), the unchecked proliferation of AI-generated misinformation could spiral out of control, potentially leading to a misinformed and misled electorate. The ease with which digital content moves across platforms is a particularly pressing concern, notably on encrypted services such as WhatsApp, which has historically played a pivotal role in influencing Indonesia’s elections. From static images to realistic deepfake videos, Indonesians are ubiquitously exposed to politically charged content, further intensifying the challenges that arise at the intersection of AI and elections.

Safeguarding free and fair elections necessitates anticipating and appropriately responding to emerging technologies. As AI systems become more sophisticated, policymakers, technology companies, and civil society need to collaborate to ensure these technologies empower rather than manipulate citizens. Although Indonesia launched its National Strategy for Artificial Intelligence in 2020, there’s a pressing need for a more focused policy tailored to the information realm. Such a policy would promote responsible innovation, ensuring that the advantages of AI are harnessed while simultaneously safeguarding essential democratic principles, especially freedom of expression.