
Saturday 26 May 2018

Facebook removes more than 20 million pieces of adult nudity or pornography in three months

Also “applied warning labels” to 3.5m pieces of violent content

Adrian Weckler

Facebook took down 21 million pieces of adult nudity or pornography in the first three months of this year and “applied warning labels” to 3.5m pieces of violent content during the same period, according to the company.

However, it declined to say how many minors (legal users aged between 13 and 17) saw the offending content.

“We’re not releasing that in this particular report,” said Alex Schultz, the company’s vice president of data analytics.

“We do absolutely look at the exposure of minors versus adults to things like adult nudity and pornography. Facebook’s community standards generally prohibit content like that, even though there may be some content like warzone violence where we put up warning screens.”

A spokeswoman later said that Facebook blocks “disturbing or sensitive content such as graphic violence” so that users under 18 cannot see it “regardless of whether it is removed from Facebook”.

The social networking giant also said that it disabled 583m fake accounts in the first quarter of the year and now estimates that between 3pc and 4pc of all active accounts during the period were fake.

However, it said that most of the 583m fake accounts were disabled “within minutes of registration” and that it prevents “millions of fake accounts” from registering every day.

Facebook gave out its updated figures after revealing that it had suspended 200 apps from its service as part of a post-Cambridge Analytica investigation aimed at stamping out inappropriate use of user data.

But it declined to say which countries see more of the offending content, or which categories of users see it most.

“Yes, there are clear skews in many of these metrics,” said Schultz. “But we’re not releasing this on a country-by-country basis.”

Facebook’s vice president of product management, Guy Rosen, said that the company’s systems are still in development for some of the content checks.

“For serious issues like graphic violence and hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams,” said Rosen.

“We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 per cent of which was identified by our technology before it was reported to Facebook. By comparison, we removed two and a half million pieces of hate speech in Q1 2018, 38 per cent of which was flagged by our technology.”

“We took down 21 million pieces of adult nudity or porn in Q1 2018, 96 per cent of which was found and flagged by our technology before it was reported,” said Rosen. “Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, seven to nine views were of content that violated our adult nudity and pornography standards.”

Facebook also took down 837 million pieces of spam in [the first three months of the year], “nearly 100 per cent of which we found and flagged before anyone reported it,” said Rosen.

“As Mark said at F8 we have a lot of work still to do to prevent abuse,” he said. “It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue. And more generally, as I explained last week, technology needs large amounts of training data to recognise meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported.

“In addition, in many areas — whether it's spam, porn or fake accounts — we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts. It's why we're investing heavily in more people and better technology to make Facebook safer for everyone.”
