Facebook removes 11.6 million child abuse posts

Image caption (Getty Images): For the first time, Facebook includes figures on Instagram
Facebook has released the latest figures in its efforts to remove harmful content from its platforms. They reveal 11.6 million pieces of content related to child nudity and child sexual exploitation were taken down between July and September 2019.

For the first time, Facebook is also releasing figures for Instagram and including numbers for posts related to suicide and self-harm. This follows a public outcry over the death of 14-year-old Molly Russell. The teenager killed herself in 2017 and her father then found large amounts of graphic material about self-harm and suicide on her Instagram account.

In a blog post, Facebook vice-president Guy Rosen said: "We remove content that depicts or encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts tell us might lead others to engage in similar behaviour.

"We place a sensitivity screen over content that doesn't violate our policies but that may be upsetting to some, including things like healed cuts or other non-graphic self-injury imagery in a context of recovery."

The figures, in Facebook's fourth Community Standards Enforcement Report, reveal that between July and September 2019:

- 11.6 million pieces of content related to child nudity and child sexual exploitation were removed from Facebook - and 754,000 from Instagram
- "over 99%" of these were "proactively detected", indicating the firm had still relied on third-party reports for about 100,000 examples
- 2.5 million pieces of content related to suicide and self-harm were removed from Facebook - and 845,000 from Instagram
- 4.4 million pieces of drug-sales content were removed from Facebook - and 1.5 million from Instagram
- 2.3 million pieces of firearm-sales content were removed from Facebook - and 58,600 from Instagram
- 133,300 pieces of terrorist-propaganda content were removed from Instagram

Facebook said it had removed more than 99% of the content associated with al-Qaeda, the Islamic State group and their affiliates. And for other terrorist organisations this proportion was 98.5% for Facebook and 92.2% for Instagram.

Overall, the figures suggest Facebook is removing ever larger quantities of harmful content.
From January to March 2019, for example, it took down 5.8 million pieces of content related to child nudity and sexual exploitation of children.

But future efforts to clamp down on harmful content could be hampered by the social network's self-styled "pivot to privacy", announced by chief executive Mark Zuckerberg in part in response to the Cambridge Analytica scandal.
The end-to-end encryption on Facebook-owned WhatsApp will be extended to Facebook Messenger and Instagram, with Mr Zuckerberg acknowledging there would be a "trade-off" that would benefit child sex abusers and other criminals.

Analysis by Angus Crawford, BBC News correspondent

Facebook, it seems, is still playing catch-up in two very important areas.
Instagram, the junior partner, bought back in 2012 for $1bn (£0.8bn), now seems to be leading policy changes at Facebook around content about self-harm and suicide.

Molly Russell's father, Ian, led calls for Instagram to reform - and, to be fair, it has made a start, banning images, pictures and even cartoons that encourage or promote self-injury - so no more razor blades, bottles of pills or nooses. Already that kind of thing is harder to find on Instagram than it was a year ago.
So what do we now hear from Instagram's parent company, Facebook? For the first time, it is revealing how much of that kind of depressing, graphic, damaging material it has been taking down on both platforms - a little late, some might think.

The second area is child abuse material.
In the third quarter of 2018, Facebook removed 8.7 million pieces of content related to child nudity and child sexual exploitation. In the same period this year, that figure had risen to 11.6 million. Now, either Facebook is doing an even better job of detection than before - or it has lost control of the problem.

After all, it is not as though the company is new to this issue - it has been a member of the Internet Watch Foundation, which actively hunts for this kind of content, for more than a decade. It has also been using sophisticated search-and-takedown software, such as Microsoft PhotoDNA, since 2011. But still the images of child sexual abuse keep coming, in ever greater quantities.
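PhotoDNA itself is proprietary, but the general technique it represents - computing a robust "perceptual" hash of each uploaded image and comparing it against a database of hashes of known abuse imagery - can be sketched. Below is a minimal illustration in Python, using the open-source imagehash library as a stand-in for PhotoDNA; the hash value, threshold and file name are purely hypothetical.

```python
# A minimal sketch of hash-based image matching, in the spirit of
# (but far simpler than) Microsoft PhotoDNA, which is proprietary.
from PIL import Image
import imagehash

# Hypothetical database entry: platforms receive hashes of known
# illegal images, so the images themselves are never redistributed.
KNOWN_HASHES = [imagehash.hex_to_hash("d1c4a0f0e8b1c2d3")]

# Perceptual hashes of near-duplicates (resized, recompressed copies)
# differ in only a few bits, so matching uses a Hamming-distance
# threshold rather than exact equality. The value here is illustrative.
MATCH_THRESHOLD = 8

def is_known_image(path: str) -> bool:
    """Return True if the image at `path` matches a known hash."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two imagehash values yields their Hamming distance.
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(is_known_image("upload.jpg"))
```

The design choice worth noting: matching tolerates resizing and recompression, but it only works when the scanner can see the unencrypted image - which is exactly what end-to-end encryption takes away.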
And what is heading down the track? Mr Zuckerberg's plans to introduce encrypted messaging on the apps - which at a stroke throws an invisibility cloak over all communications.

Facebook may not be able to see what you've sent, but, possibly, nor will PhotoDNA or, crucially, the police.
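To see why encryption has that effect, consider the shape of an end-to-end encrypted exchange: the relaying server handles only ciphertext, so a scanner running on it has nothing meaningful to hash. A minimal sketch, using the PyNaCl library purely for illustration (Facebook's actual protocols differ):

```python
# A toy end-to-end encrypted exchange using PyNaCl. The point: the
# server in the middle never sees anything but ciphertext.
from nacl.public import PrivateKey, Box

sender = PrivateKey.generate()      # key stays on the sender's device
recipient = PrivateKey.generate()   # key stays on the recipient's device

# The sender encrypts to the recipient's public key before upload.
message = b"photo bytes would go here"
ciphertext = Box(sender, recipient.public_key).encrypt(message)

# This is all the relaying server (and any scanner on it) ever sees:
# bytes indistinguishable from random noise, useless to hash-matching.
print(ciphertext.hex()[:32], "...")

# Only the recipient's private key recovers the original content.
assert Box(recipient, sender.public_key).decrypt(ciphertext) == message
```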