Facebook admits 4% of accounts were fake

A spokeswoman later said that Facebook blocks "disturbing or sensitive content such as graphic violence" so that users under 18 cannot see it "regardless of whether it is removed from Facebook". The company's post said Facebook found nearly all of that content before anyone had reported it, and that removing fake accounts is key to combating that type of content.

The number of pieces of nude and sexual content that the company took action on during the period was 21 million, the same as during the final quarter of last year. "This is especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards", the company said.

Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimated 16 to 19 late last year.

In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of moderation action was against spam posts and fake accounts: it took action on 837m pieces of spam, and shut down a further 583m fake accounts on the site in the three months.

Facebook has faced a storm of criticism for what critics have said was a failure to stop the spread of misleading or inflammatory information on its platform ahead of the US presidential election and the Brexit vote to leave the European Union, both in 2016. In the report, Facebook said its rate of proactive detection is high for some violations, meaning it finds and flags most of that content before users do.

For a sense of scale, between 22 and 27 of every 10,000 pieces of content contained graphic violence in the first quarter of 2018, up from between 16 and 19 in the previous quarter. The renewed attempt at transparency is a nice start for a company that has come under fire for allowing its social network to host all kinds of offensive content. The company didn't say how many times such content was viewed, but said the number was "extremely low".

It attributed the increase to the enhanced use of photo-detection technology. In total, it took action on 21 million pieces of content in Q1, similar to Q4.

However, Facebook's ability to find hate speech before users reported it was not as good as in other categories, with the company proactively flagging only 38%. The remaining problem content was reported by human users. "Facebook's community standards generally prohibit content like that, even though there may be some content like warzone violence where we put up warning screens", the company said.

The social network says that when action is taken on flagged content, it does not necessarily mean the content has been taken down. It says it found and flagged almost 100% of spam content in both Q1 and Q4.

Facebook also said it disabled 583 million fake accounts in the same period, and estimated that fake accounts still represented 3 to 4 percent of its monthly active users. The company's internal "detection technology" has been effective at taking down spam and fake accounts, removing them by the hundreds of millions during Q1.

In 85.6 percent of the cases, Facebook detected such images before being alerted to them by users, said the report, which was issued the day after the company said "around 200" apps had been suspended on its platform as part of an investigation into misuse of private user data.
