Facebook Removes Over 3 Billion Fake Accounts in Three Months

Facebook removed more than 3 billion fake accounts from October to March, twice as many as the previous six months, the company said Thursday.

Nearly all of them were caught before they had a chance to become “active” users of the social network.

In a new report, Facebook said it saw a “steep increase” in the creation of abusive, fake accounts in the past six months. While most of these fake accounts were blocked “within minutes” of their creation, the company said this increase in “automated attacks” by bad actors meant not only that it caught more of the fake accounts, but also that more of them slipped through the cracks.

As a result, the company estimates that 5% of its 2.4 billion monthly active users are fake accounts. This is up from an estimated 3% to 4% in the previous six-month report.

Facebook will begin releasing this report quarterly starting next year, rather than twice a year, and will start including Instagram.
“The health of the discourse is just as important as any financial reporting we do, so we should do it just as frequently,” CEO Mark Zuckerberg said on a call with reporters on Thursday about the report. “Understanding the prevalence of harmful content will help companies and governments design better systems for dealing with it. I believe every major internet service should do this.”
In another blog post shared Thursday, Facebook VP of Analytics Alex Schultz explained some of the reasons behind the sharp increase in fake accounts. He said one factor is “simplistic attacks,” which he claims don’t represent real harm or even a real risk of harm. This often occurs when someone makes a hundred million fake accounts that are then taken down right away. Schultz said they are removed so fast that nobody is exposed to them and they aren’t included in active user counts.
The company said it estimates 25 of every 10,000 content views, such as watching a video or checking out a photo, on Facebook were of things that violated its violence and graphic content policies. Between 11 and 14 of every 10,000 content views violated its adult nudity and sexual activity policies.
Facebook also shared for the first time its efforts to crack down on illegal sales of firearms and drugs on its platform.
It said it increased its proactive detection of both drugs and firearms. During the first quarter, its systems found and flagged 83.3% of violating drug content and 69.9% of violating firearm content, according to the report. Facebook said this occurred before users reported it.
Facebook’s policies say users, manufacturers and retailers cannot buy or sell non-medical drugs or marijuana on the platform. The rules also don’t allow users to buy, sell, trade or gift firearms on Facebook, including parts or ammunition.
In the report, the company also shared how many content removals users appealed, and how much of it the social network restored. People have the option to appeal Facebook’s decisions, with the exception of content that is flagged for extreme safety concerns.
Between January and March, Facebook said it “took action” on 19.4 million pieces of content. The company said 2.1 million pieces of content were appealed. After the appeals, 453,000 pieces of content were restored.
Hate speech has been particularly challenging for Facebook. The company’s automated systems have a hard time identifying and removing hate speech, but the technology is improving. The percentage of hate speech Facebook said it found proactively — meaning before users reported it — rose to 65.4% in the first quarter, up from 51.5% in the third quarter of 2018.
“What [AI] still can’t do well is understand context,” Justin Osofsky, Facebook VP of global operations, said on the call. “Context is key when evaluating things like hate speech.”
Osofsky also said Facebook will begin a pilot program where some of its content reviewers will focus on hate speech. The goal is for those reviewers to have a “deeper understanding” of how hate speech manifests and make “more accurate calls.”

Written by MT
