Facebook reveals AI weaknesses on hate speech

Preceding the release of Facebook’s latest report on enforcing community standards, Frontier interviewed the company’s vice president for community integrity on what it’s doing to tackle hate speech in Myanmar.

By VICTORIA MILKO and CLARE HAMMOND | FRONTIER

Sitting in traffic in downtown Yangon, scrolling through a Facebook News Feed on a mobile phone, it can be easy to see why Facebook has been accused of not doing enough to respond to violence-inciting messages in Myanmar.

One post, liked by over 1,000 people, shows a political cartoon calling Rohingya Muslims fleas, with no right to live in the country. Another post, shared by over 500 people, displays the personal information of a journalist working in Myanmar, with some commenters calling for them to be arrested or even physically attacked.

In an era where a single Facebook post can disseminate a message further than any megaphone, the social media site has acted as a digital soapbox for individuals and groups across the country. Every share amplifies a message's reach even further, and many argue that Facebook needs to do more to ensure dangerous content is removed before it goes viral.

Addressing challenges

Myanmar’s civil society hasn’t remained silent on the issue. In April, after several years of flagging problems with Facebook’s policy team, Yangon civil society groups sent an open letter to CEO Mr Mark Zuckerberg, criticising the lack of systems in place to combat hate speech, as well as Facebook’s opacity and reluctance to engage with local stakeholders.


In an email response Zuckerberg said that Facebook was addressing the issue by hiring more Myanmar-language speakers and assigning a product team to develop tools that could be used to combat hate speech.

This week, in its most recent attempt to show how community standards are enforced and how the company is working to minimise the impact of standards violations, Facebook released an 83-page report, Understanding the Facebook Community Standards Enforcement Report.

Ahead of the report’s release, Frontier interviewed members of Facebook’s team, including the company’s vice president for community integrity, Mr Guy Rosen.

Facebook has taken several steps to address the issues raised against the company in Myanmar, including meeting with local organisations, verifying media pages and deleting imposter accounts, Rosen said. It’s also developing tools to better support Zawgyi character encoding, he added. (See our primer on the problems with Myanmar fonts.)

Another Facebook initiative is the development and use of artificial intelligence to combat content that violates its community guidelines.

The report shows that, globally, Facebook has been mostly successful in using artificial intelligence to remove posts containing nudity, terrorist propaganda and graphic violence. In 93.6 percent of cases, Facebook took action against content in those three categories before other users reported it.

But there is one area where Facebook systems are failing – and it’s arguably the one most relevant to Myanmar. The report shows that only 38 percent of hate speech posts had action taken against them before other users reported them.

“Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards, so we tend to find and flag less of it, and rely more on user reports, than with some other violation types,” states the report.

Attempts to deploy artificial intelligence to tackle hate speech have not been successful, said Rosen. “Artificial intelligence still isn’t quite there – there’s a lot of things we still need to do because there’s a lot of nuance, a lot of context needed, in order to really determine if something is actually hate speech or not,” he said. “And so it continues to be a combination of the technology and people.”

At its heart, artificial intelligence technology is about systems that learn by example. A problem for Facebook in Myanmar is that it has not developed adequate reporting tools. Until earlier this month, for example, Facebook’s Messenger platform did not provide a reporting function.

Facebook product manager Ms Sara Su said a lot of hateful content in Myanmar is still unreported or misreported. Without adequate context, reviewers “might not be able to understand the real world harm that might result”, she said.

“This tells us that we cannot rely on reports alone to get accurate signals, or make our systems work well to scale.”

A reliance on people, but a lack of Myanmar-speaking staff

There’s consensus that Facebook needs Myanmar-language speakers to “train” its artificial intelligence, and that it doesn’t have anywhere near enough at present.

But the company wouldn’t disclose how many of its staff are fluent in Myanmar, only echoing Zuckerberg’s claim that “dozens” of Myanmar-speaking staff have been hired, and adding that this number would “double” by the year-end.

Frontier was also told that there is no data available on how many – if any – speakers of ethnic minority languages it has on staff. For a company built around the collection of data, this seems rather odd.

But even if Facebook doubles its unspecified number of Myanmar-speaking staff, “Dozens of content reviewers is not going to cut it,” Phandeeyar CEO Mr Jes Petersen said in a recent interview with the New York Times.

Petersen cited the example of Germany, where Facebook has hired about 1,200 moderators in response to strict hate-speech laws. A proportional amount for Myanmar would be around 800 moderators, he said.

Myanmar civil society groups have asked for other numbers: on hate speech reporting, the removal of abuse and fake accounts, the average review and response time, and review targets, but Facebook told Frontier it could not share country-specific data. The groups also asked the company to talk less about inputs and more about performance, progress and positive outcomes.

Until it does, it will be impossible to tell if Facebook is mitigating the spread of hate speech in Myanmar. The consequences of failing to do so could be “really serious – potentially disastrous”, the groups wrote in an email reply to Zuckerberg.

Rosen told Frontier the company would continue to work with various stakeholders in Myanmar to learn more and to understand what else it can do.

“We may not have all the answers,” he said, “but there’s definitely a number of steps we have taken and will continue to take.”

TOP PHOTO: A woman looks at her Facebook wall while she travels on a bus in Yangon. (AFP)

By Victoria Milko

Victoria Milko is a photojournalist from Washington, D.C. She was the Multimedia Editor at Frontier until November 2019 and her work has been featured in publications including The Washington Post, NPR and Al Jazeera.
