NZ massacre video ‘not gruesome enough’ to flag
The deeply disturbing footage from accused Christchurch mosque shooter Brenton Tarrant's GoPro was shared countless times on social media before the major platforms began pulling it down.
Today, a Facebook policy director tried to explain why the site's counter-terrorism algorithms - introduced nearly two years ago - did not pick up the footage.
There was "not enough gore" in the video for the algorithm to catch it, The Daily Beast reports.
The accused gunman livestreamed the March 15 attack at the Al Noor Mosque, which claimed 43 lives.
Brian Fishman, Facebook's policy director for counter-terrorism, reportedly told US Congress during a closed-door briefing that the video was not gruesome enough for its detection algorithms to flag.
According to a committee staffer in the room, members of Congress pushed back against Mr Fishman's defence.
One member said the video was so violent it looked like footage from the video game Call of Duty.
Another, Missouri Democratic representative Emanuel Cleaver, told The Daily Beast that Mr Fishman's answer "triggered something inside me".
"You mean we have all this technology, and we can't pick up gore?" Mr Cleaver said he told Mr Fishman. "How many heads must explode before they pick it up? Facebook didn't create darkness, but darkness does live in Facebook."
The livestream was online for an hour before New Zealand authorities asked the company to take it down. But the massacre video was still uploaded hundreds of thousands of times.
According to The Daily Beast, the video was originally uploaded to 8chan. Because that post did not link directly to Facebook, its spread did not generate the surge of suspicious traffic that might have alerted the platform.
From there, the video was uploaded to Facebook more than 1.5 million times, according to The Washington Post - 1.2 million of those uploads were blocked by the social network's automatic filters, but 300,000 still slipped through the cracks.
Spokespeople for Facebook and for the committee declined to comment because the meeting was behind closed doors, The Daily Beast said.
Facebook detailed the loopholes in its artificial intelligence system in a blog post published just after the shooting.
"AI systems are based on 'training data', which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video," the post reads.
"To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare."