In its quarterly Community Standards Enforcement Report, Facebook has revealed how much spam, hate speech, violent content and adult nudity it has cut down on, including the 583 million fake accounts it axed.
The self-assessment came Tuesday in Facebook's first breakdown of how much material it removes in an effort to shield its 2.2 billion users from offensive content and prevent the social network from becoming a terrorist recruitment center.
"Today, as we sit here, 99 percent of the ISIS and al-Qaida content that we take down on Facebook, our AI systems flag before any human sees it", Zuckerberg said at the hearing. The company estimates that around three to four percent of the active Facebook accounts on the site during this time period were still fake, and that between 7 and 9 out of every 10,000 content views on the platform were of content that violated its adult nudity and pornography standards.
In addition to removing these fake accounts, the company acted on 21 million pieces of nudity and sexual activity, 3.5 million posts displaying violent content, 2.5 million examples of hate speech and 1.9 million pieces of terrorist content. But of the hate speech total, only 38 percent was flagged by Facebook before users reported it (an improvement on the 23.6 percent in the prior three months).
Facebook is struggling to block hate speech posts, conceding its detection technology "still doesn't work that well" and that such posts still need to be checked by human moderators.
Facebook's new report, which it plans to update twice a year, comes a month after the company published its internal rules for how reviewers decide what content should be removed. Separately, Damian Collins, the leader of the UK's Digital, Culture, Media and Sport Committee, said in a statement Tuesday that Facebook had told him Zuckerberg "has no plans to travel to the United Kingdom". Facebook said users were more aggressively posting images of violence in places like war-torn Syria. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important".
"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too", he said. The Facebook leader added that the company would notify users if their data were compromised.