Facebook takes hate speech seriously… as long as it’s in English

Facebook is failing miserably at moderating content posted in minority languages

Admins feel let down

We interviewed the administrators of these pages about their experience of moderating hate, and what they thought Facebook could do to help them reduce abuse.

They told us Facebook would often reject their reports of hate speech, even when the post clearly breached its Community Standards. In some cases, posts that were originally removed would be reinstated on appeal.

Most page admins said the so-called “flagging” process rarely worked, and they found it disempowering. They wanted Facebook to consult with them more to get a better idea of the types of abuse they see posted and why these constitute hate speech in their cultural context.

Defining hate speech is not the problem

Facebook has long had a problem with the scale and scope of hate speech on its platform in Asia. For example, while it has banned some Hindu extremists, it has left their pages online.

However, during our study, we were pleased to see that Facebook did broaden its definition of hate speech, which now captures a wider range of hateful behavior. It also explicitly recognizes that what happens online can trigger offline violence.

It’s worth noting that in the countries we focused on, “hate speech” is seldom precisely prohibited in law. We found that other regulations, such as cybersecurity or religious tolerance laws, could be used to act against hate speech, but instead tended to be used to suppress political dissent.

We concluded that Facebook’s problem is not in defining hate, but in being unable to identify certain types of hate, such as content posted in minority languages and regional dialects. It also often fails to respond appropriately to user reports of hate content.

Where hate was worst

Media reports have shown Facebook struggles to automatically identify hate posted in minority languages. It has failed to provide training materials to its own moderators in local languages, even though many are from Asia Pacific countries where English is not the first language.

In the Philippines and Indonesia in particular, we found LGBTIQ+ groups are exposed to an unacceptable level of discrimination and intimidation. This includes death threats, targeting of Muslims, and threats of stoning or beheading.

On Indian pages, Facebook filters failed to capture vomiting emojis posted in response to gay wedding photos and rejected some very clear reports of vilification.

In Australia, on the other hand, we found no unmoderated hate speech – only other types of insensitive and inappropriate comments. This could indicate less abuse gets posted, or there is more effective English language moderation from either Facebook or page administrators.

Similarly, in Myanmar LGBTIQ+ groups experienced very little hate speech. But we are aware Facebook is working hard to reduce hate speech on its platform there, in the wake of it being used to persecute the Rohingya Muslim minority.

Also, it’s likely gender diversity isn’t as volatile a subject in Myanmar as it is in India, Indonesia, and the Philippines. In these countries, LGBTIQ+ rights are highly politicized.

Facebook has taken some important steps towards tackling hate speech. However, we’re concerned COVID-19 has forced the platform to become more reliant on machine moderation. This comes at a time when it can only automatically identify hate in around 50 languages – even though thousands are spoken every day across the region.

What we recommend

Our report to Facebook outlines several key recommendations to help improve its approach to combating hate on its platform. Overall, we have urged the company to convene more regularly with persecuted groups in the region, so it can learn more about hate in their local contexts and languages.

This needs to happen alongside a boost to the number of its country policy specialists and in-house moderators with minority language expertise.

Mirroring efforts in Europe, Facebook also needs to develop and publicize its trusted partners’ channel. This provides visible, official hate speech-reporting partner organizations through which people can directly report hate activities to Facebook during crises such as the Christchurch mosque attacks.

More broadly, we would like to see governments and NGOs cooperate to set up an Asian regional hate speech monitoring trial, similar to one organized by the European Union.

Following the EU example, such an initiative could help identify urgent trends in hate speech across the region, strengthen Facebook’s local reporting partnerships, and reduce the overall incidence of hateful content on Facebook.

Article by Fiona R Martin, Associate Professor in Convergent and Online Media, University of Sydney, and Aim Sinpeng, Lecturer in Government and International Relations, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

