Facebook’s new transparency report now includes data on takedowns of ‘bad’ content, including hate speech

Facebook this morning released its latest Transparency report, where the social network shares information on government requests for user
data, noting that these requests had increased globally by around 4 percent compared to the first half of 2017, though U.S. government-initiated requests stayed roughly the same.

In addition, the company added a new report to accompany the usual Transparency report, focused on detailing how and why Facebook takes
action on enforcing its Community Standards, specifically in the areas of graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

In terms of government requests for user data, the global increase led to 82,341 requests in the second half of 2017, up from 78,890 during the first half of the year. U.S. requests stayed roughly the same at 32,742, though 62 percent included a non-disclosure clause that prohibited Facebook from alerting the user; that’s up from 57 percent in the earlier part of the year, and up from 50 percent in the report before that.
This points to the use of such non-disclosure clauses becoming far more common among law enforcement agencies.

The number of pieces of content Facebook restricted based on local laws declined during the second half of the year, going from 28,036 to 14,294. But this is not surprising: the last report had an unusual spike in these sorts of requests due to a school shooting in Mexico, which led to
the government asking for content to be removed. There were also 46 disruptions of Facebook services in 12 countries in the second half of 2017, compared to 52 disruptions in nine countries in the first half. And Facebook and Instagram took down 2,776,665 pieces of content based on 373,934 copyright reports, 222,226 pieces of content based on 61,172 trademark reports and 459,176 pieces of content based on 28,680 counterfeit reports.

However, the more interesting data this time around comes from a new report Facebook is appending to its Transparency report, called the Community Standards Enforcement Report, which focuses on the actions of Facebook’s review team.
This is the first time Facebook has released its numbers related to its enforcement efforts, and follows its recent publication of its
internal guidelines three weeks ago. In 25 pages, Facebook in April explained how it moderates content on its platform, specifically around
areas like graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts. These are areas where Facebook is often criticized when it screws up, like when it took down the newsworthy “Napalm Girl” historical photo because it contained child nudity, before realizing the mistake and restoring it.
It has also been more recently criticized for contributing to violence in Myanmar, as extremists’ hate speech-filled posts incited violence. This is something Facebook also addressed today through an update for Messenger, which now allows users to report conversations that violate community standards.

Today’s Community Standards report details the number of takedowns across the various categories it enforces. Facebook
says that spam and fake account takedowns are the largest category, with 837 million pieces of spam removed in Q1, almost all of it proactively removed before users reported it. Facebook also disabled 583 million fake accounts, the majority within minutes of registration. During this time, around 3-4 percent of the accounts on the site were fake.

The company is likely hoping the scale of these metrics makes it seem like it’s doing a great job, when in reality it didn’t take that many Russian accounts to throw Facebook’s entire operation into disarray, leading to CEO Mark Zuckerberg testifying before a Congress that’s now considering regulations.

In addition, Facebook says it took
down the following in Q1 2018:

- Adult Nudity and Sexual Activity: 21 million pieces of content; 96 percent was found and flagged by technology, not people
- Graphic violence: took down or added warning labels to 3.5 million pieces of content; 86 percent found and flagged by technology
- Hate speech: 2.5 million pieces of content; 38 percent found and flagged by technology

You may notice that one of those areas is
lagging in terms of enforcement and automation. Facebook, in fact, admits that its system for identifying hate speech “still doesn’t work that well,” so it needs to be checked by review teams.

“…we have a lot of work still to do to prevent abuse,” writes Guy Rosen, VP of Product Management, on the Facebook blog. “It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important.”

In other words, A.I.
can be useful at automatically flagging things like nudity and violence, but policing hate speech requires more nuance than the machines can
yet handle.
The problem is that people may be discussing sensitive topics, but they’re doing it to share news, or in a respectful manner, or even describing something that happened to them. It’s not always a threat or hate speech, but a system only parsing words without understanding the full discussion doesn’t know this. To get an A.I. system up to par in this area requires a ton of training data, and Facebook says it doesn’t have that for some of the less widely-used languages.

(This is also a likely response to the Myanmar situation,
where the company belatedly, after six civil society organizations criticized Mr. Zuckerberg in a letter, said it had hired “dozens” of human moderators. Critics say that’s not enough; in Germany, for example, which has strict laws around hate speech, Facebook hired about 1,200 moderators, The NYT said.)

It seems the obvious solution is staffing up moderation teams everywhere, until A.I.
technology can do as good a job as it does on other aspects of content policy enforcement. This costs money, but it’s also clearly critical when people are dying as a result of Facebook’s inability to enforce its own policies.

Facebook claims it’s hiring as a result, but doesn’t share the details of how many, where or when. “…we’re investing heavily in more people and better technology to make Facebook safer for everyone,” wrote Rosen.

But Facebook’s main focus, it seems, is on improving
technology.

“Facebook is investing heavily in more people to review content that is flagged. But as Guy Rosen explained two weeks ago, new technology like machine learning, computer vision and artificial intelligence helps us find more bad content more quickly, far more quickly and at a far greater scale than people ever can,” said Alex Schultz, Vice President of Analytics, in a related post on Facebook’s methodology.

He touts A.I. in particular as a tool that could get content off Facebook before it’s even reported. But A.I. isn’t ready to police all hate speech yet, so Facebook needs a stopgap solution, even if it costs.