Facebook Fails to Detect Hate Speech Against Rohingya, Report Claims


A leading advocacy group claims that the platform’s ability to detect Burmese-language hate speech “remains abysmally poor.”


A new report claims that the social media giant Facebook is still failing to detect hate speech and calls to violence against Myanmar’s Rohingya Muslim minority, years after such content was found to have played a central role in enabling the group’s persecution.

In a report published on March 20, the London-based watchdog Global Witness details how it tested Facebook’s safeguards by submitting eight paid ads to the network for approval, each containing a different example of real-world hate speech copied from a United Nations report.

Facebook subsequently approved all eight ads for publication, leading Global Witness to conclude that “Facebook’s ability to detect Burmese language hate speech remains abysmally poor.” It added, “Facebook and other social media platforms should treat the spread of hate and violence with the utmost urgency.”

Since 2017, military assaults in Myanmar’s Rakhine State have displaced more than 730,000 Rohingya civilians and driven them across the border into Bangladesh. Hundreds of villages have been burned to the ground, families have been separated and killed, and hundreds, possibly thousands, of women and girls have been raped. Yesterday, the United States government officially described Myanmar’s policy toward the Rohingya as genocide.

In March 2018, the U.N. Independent International Fact-Finding Mission on Myanmar reported that social media platforms, of which Facebook was by far the most prominent, had played a “determining role” in the violence against the Rohingya, and had “substantively contributed to the level of acrimony and dissension and conflict” in the country. The New York Times’ own investigation into Facebook’s unwitting role in facilitating the violence similarly found that it was “Myanmar military personnel who turned the social network into a tool for ethnic cleansing.”

To its credit, Facebook has admitted its role in the violence. It responded to the findings of the U.N. Fact-Finding Mission by removing the official pages of military commander-in-chief Senior Gen. Min Aung Hlaing, who would go on to lead the military coup of February 2021, and other senior officials. In late 2018, Facebook published the findings of a report that it commissioned into its role in the ethnic cleansing of the Rohingya, admitting that “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”

Among Facebook’s pledges were to establish a dedicated team to work on the country, to employ more content reviewers who speak the country’s various languages – their number rose from just two in early 2015 to 99 by the end of 2018 – and to improve its use of artificial intelligence to flag examples of hate speech in Burmese.

But the Global Witness report, though based on a small sample, suggests that these safeguards are far from secure. Facebook says that before ads are permitted to appear online, they are reviewed to make sure that they meet its advertising policies, and that during this process it checks each ad’s “images, video, text and targeting information, as well as an ad’s associated landing page.” Nonetheless, all eight of Global Witness’ test ads were approved for publication.

It was perhaps understandable that Facebook’s monitoring ability lagged behind its rapid expansion in Myanmar after 2014, when the country saw one of the sharpest spikes in internet usage in history. But given the detailed information now available about the extent of the company’s unwitting complicity in the persecution of the Rohingya, and the considerable profit it earns from its operations in Myanmar, the imperfection – to put it kindly – of its current systems gives an indication of its true hierarchy of interests.

“In places such as Myanmar where there is clear evidence that Facebook was used to incite real world harms that cost ten thousand people their lives, and hundreds of thousands their homes and livelihoods, and where the Rohingya face an ongoing heightened risk of violence and continued discrimination,” the Global Witness report rightly concluded, “the very minimum the platform should do is ensure it is not being used for future incitement, and provide remedy to victims.”