Facebook opens up about efforts to scrub offensive content


Facebook's latest transparency move is showing you how much objectionable content it removes

Facebook said it released the report to start a dialogue about harmful content on the platform and how it enforces its community standards to combat it.

In its quarterly Community Standards Enforcement Report, Facebook revealed that it has cut down on spam, hate speech, violence and adult nudity, and disabled 583 million fake accounts. Most of the actions taken were to remove spam content and the fake accounts used to distribute it.

"We took down 21 million pieces of adult nudity or porn in Q1 2018 - 96 percent of which was found and flagged by our technology before it was reported", the company said. If Facebook tamps down on bad content, as some analysts predict, it is unlikely to lose users or advertising, which accounts for 98% of its annual revenue.

On Tuesday, May 15, Guy Rosen, Facebook's Vice President of Product Management, published a blog post in the company's newsroom.

The company admitted, however, that 3% to 4% of its accounts are fake. Turning to graphic violence, the report said: "In other words, of every 10,000 content views, an estimate of 22 to 27 contained graphic violence".

Facebook also took down 837 million pieces of spam in Q1, nearly all of which were identified and flagged before anyone reported them.

Facebook pulled or slapped warnings on almost 30 million posts containing sexual or violent images, terrorist propaganda or hate speech during the first quarter. Separately, Damian Collins, chair of the UK's Digital, Culture, Media and Sport Committee, said in a statement Tuesday that Facebook had told the committee Zuckerberg "has no plans to travel to the United Kingdom".

Facebook has faced a storm of criticism for what critics have said was a failure to stop the spread of misleading or inflammatory information on its platform ahead of the US presidential election and the Brexit vote to leave the European Union, both in 2016.

With graphic violence, the automated system also did a good job, but with hate speech the company's technology flagged only around 38% of the content it ultimately removed.

Meanwhile, Facebook removed or added warning labels to about 3.5 million pieces of graphically violent content. The company credited better detection, even as it acknowledged that computer programs have trouble understanding the context and tone of language. It attributed the quarter's decline in fake-account removals to the "variability of our detection technology's ability to find and flag" fakes.

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too", Rosen said.

Meanwhile, Facebook said on Monday it has suspended around 200 apps as part of its investigation into whether companies misused personal user data gathered from the social network.
