Hate speech dropped by almost 50% over past 9 months, claims Facebook over allegations of inability to curb objectionable content

11:13 AM Oct 18, 2021 | Agencies

New Delhi: After allegations of its inability to curb hate speech, Facebook has now claimed that the prevalence of hate speech on the social media platform has dropped by almost 50% in the last three quarters.

The claim came in response to a report in The Wall Street Journal (WSJ) on Sunday, which said that Facebook's content moderators are not consistently successful at removing objectionable content using Artificial Intelligence (AI).

In a reply, Guy Rosen, Vice-President of Integrity at Facebook, said that their technology is having a big impact on reducing how much hate speech people see on Facebook. "According to our latest Community Standards Enforcement report, its prevalence is about 0.05 per cent of content viewed or about five views per every 10,000, down by almost 50 per cent in the last three quarters," he added.

"Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress. This is not true," Rosen said.

The WSJ report claimed that internal documents show that two years ago, Facebook reduced the time that human reviewers focused on hate speech complaints, and made other adjustments that reduced the number of complaints. "That in turn helped create the appearance that Facebook's AI had been more successful in enforcing the company's rules than it actually was," the report said.

Rosen said in a blog post that focusing only on content removals is the wrong way to look at how Facebook fights hate speech. "We need to be confident that something is hate speech before we remove it. If something might be hate speech but we're not confident enough that it meets the bar for removal, our technology may reduce the content's distribution or won't recommend groups, pages or people that regularly post content that is likely to violate our policies," he noted.

Facebook said that when it began reporting metrics on hate speech, only 23.6% of the content it removed was detected proactively by its systems; the majority of what it removed was found by people. "Now that number is more than 97%. But our proactive rate doesn't tell us what we are missing and doesn't account for the sum of our efforts, including what we do to reduce the distribution of problematic content," the Facebook executive said.
