Facebook Removed 2.2 Billion Fake Accounts From January To March
(CBSDFW.COM/CNN) -- Facebook took down 2.2 billion fake accounts between January and March, the company announced Thursday.
That number is only slightly less than the 2.38 billion monthly active users Facebook has around the world. For comparison, Facebook disabled 1.2 billion fake accounts in the previous quarter and 694 million between October and December 2017.
In total, more than three billion fake accounts were removed over the six-month period from October 2018 to March 2019.
The new numbers were released Thursday in the company's third Community Standards Enforcement report. Starting next year, Facebook will release the report quarterly rather than twice a year, and will begin including Instagram.
"The health of the discourse is just as important as any financial reporting we do, so we should do it just as frequently," CEO Mark Zuckerberg said on a call with reporters on Thursday about the report. "Understanding the prevalence of harmful content will help companies and governments design better systems for dealing with it. I believe every major internet service should do this."
In another blog post shared Thursday, Facebook VP of Analytics Alex Schultz explained some of the reasons behind the sharp increase in fake accounts. One factor, he said, is "simplistic attacks," which he claims don't represent real harm or even a real risk of harm. These occur when, for example, someone creates a hundred million fake accounts that are taken down almost immediately. Because the accounts are removed so quickly, Schultz said, no one is exposed to them and they are not included in active user counts.
The company estimates that 25 of every 10,000 content views on Facebook, such as watching a video or looking at a photo, were of content that violated its violence and graphic content policies. Between 11 and 14 of every 10,000 content views violated its adult nudity and sexual activity policies.
Facebook also shared for the first time its efforts to crack down on illegal sales of firearms and drugs on its platform.
It said it has increased proactive detection of both. During the first quarter, its systems found and flagged 83.3% of violating drug content and 69.9% of violating firearm content before users reported it, according to the report.
Facebook's policies say users, manufacturers and retailers cannot buy or sell non-medical drugs or marijuana on the platform. The rules also prohibit users from buying, selling, trading or gifting firearms on Facebook, including parts and ammunition.
In the report, the company also shared how many content removals users appealed and how much of that content the social network restored. Users can appeal Facebook's decisions, with the exception of content flagged for extreme safety concerns.
Between January and March, Facebook said it "took action" on 19.4 million pieces of content. The company said 2.1 million pieces of content were appealed. After the appeals, 453,000 pieces of content were restored.
Hate speech has been particularly challenging for Facebook. The company's automated systems have a hard time identifying and removing it, but Facebook says the technology is improving. The percentage of hate speech Facebook said it found proactively, meaning before users reported it, rose to 65.4% in the first quarter, up from 51.5% in the third quarter of 2018.
"What [AI] still can't do well is understand context," Justin Osofsky, Facebook VP of global operations, said on the call. "Context is key when evaluating things like hate speech."
Osofsky also said Facebook will begin a pilot program in which some of its content reviewers will focus on hate speech. The goal is for those reviewers to gain a "deeper understanding" of how hate speech manifests and to make "more accurate calls."
(© Copyright 2019 CBS Broadcasting Inc. All Rights Reserved. The-CNN-Wire™ & © 2019 Cable News Network, Inc., a Time Warner Company contributed to this report. All rights reserved.)