The day before a gunman killed 50 people in New Zealand in a live internet broadcast, Facebook’s chief technology officer appeared in a U.S. magazine article boasting of the company’s sophisticated automated systems.

Facebook’s software had become so good at distinguishing between similar images, Mike Schroepfer told Fortune, that it had been able to correctly identify a picture of a green leafy material as marijuana with 93.77-per-cent accuracy. It was 88.39 per cent sure an almost identical picture was broccoli.

The next day, Facebook was scrambling to explain how those same algorithms had failed to detect a 17-minute live broadcast of a mass murder inside a mosque until users began to complain – a full 12 minutes after the video had ended.

Social-media companies have put their faith in artificial intelligence to solve problems such as hate speech and violence on their platforms. But the massacre in New Zealand has exposed the shortcomings in the automated systems that companies such as Facebook, Google and Twitter say are key to keeping their platforms safe.

Analysts say the attack in Christchurch seemed designed to defeat those systems. Someone claiming to be the shooter posted a link to a live Facebook video feed on 8chan, an anonymous and lightly moderated message board popular with conspiracy theorists and far-right groups. Transcripts of the 8chan discussions show users scrambling to download the video and repost it widely across the internet.

Facebook blocked 1.2 million attempts to upload copies of the video in the first 24 hours after the attack, though another 300,000 copies made it through and were removed afterward. YouTube suspended a function that let users search for the most recently uploaded videos because copies of the attack were going up faster than the site could take them down. Both companies said they faced challenges from users who re-edited the video to make it harder for their automated software to detect.

Social-media companies face several issues when it comes to using automated software to police content, experts say. Artificial intelligence systems are based on feeding computers thousands of pictures, videos and audio files until they are able to recognize patterns. That works well for things such as child pornography, terrorist propaganda and copyright-protected music, where the same material tends to get passed around repeatedly and users don’t try to alter the files too much.
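
To see why exact copies are the easy case, consider the simplest version of the approach: keep a list of cryptographic fingerprints of files that moderators have already flagged and check every new upload against it. The short Python sketch below is a generic illustration of that idea, not any company's actual system; the fingerprint list and file paths are hypothetical.

import hashlib

# Hypothetical set of fingerprints for files that moderators have already flagged.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 fingerprint of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_bad(path: str) -> bool:
    """Exact matching catches identical re-uploads of a known file."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES

Changing a single byte of a file produces a completely different fingerprint, which is why this kind of matching works only when users pass the same material around without altering it.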

However, artificial intelligence is much less effective at identifying live broadcasts of new content, such as the New Zealand shootings. Automation also has a difficult time distinguishing between actual videos of violence and video games or official news reports.

YouTube says it has gotten good at identifying copyright-protected material because studios and publishers provide reference copies of content to compare with pirated versions. But there are no reference libraries for a live mass shooting.
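
Matching against a reference library typically relies on perceptual fingerprints, which stay roughly the same when a file is re-encoded or lightly edited. The sketch below uses a basic "difference hash" for images purely as an illustration of the idea and assumes the Pillow imaging library is installed; the systems the companies actually run use far more elaborate fingerprints for video and audio.

from PIL import Image  # assumes the Pillow library is installed

def dhash(path: str, size: int = 8) -> int:
    """A simple perceptual fingerprint: shrink the image to greyscale,
    compare neighbouring pixels and pack the results into 64 bits.
    Near-duplicate images produce nearly identical fingerprints."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests a near-duplicate."""
    return bin(a ^ b).count("1")

def matches_reference(upload: str, reference: str, threshold: int = 10) -> bool:
    """Flag an upload whose fingerprint is close to a known reference copy."""
    return hamming_distance(dhash(upload), dhash(reference)) <= threshold

Even fingerprints like this have limits: aggressive cropping, mirroring, filters or re-filming a screen can push a copy past the matching threshold, which is consistent with the companies' account of users re-editing the Christchurch video to evade detection.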

“We don’t live in a world where, behind us all the time, big brother is grabbing every bit of information and analyzing it,” said Garth Davies, a Simon Fraser University criminologist who has been involved in developing an algorithm that scours the web to find extremist content. “Much of what we have to look for, we actually have to be looking for it.”

Even after tech companies have identified something as a problem – a picture of marijuana or a terrorist video – their software still isn’t good enough at recognizing all the various iterations of that content, says Hany Farid, an expert in digital forensics at Dartmouth College in New Hampshire who built automated software for detecting child pornography and terrorist propaganda.

In transparency reports from last year, Facebook said its software was able to flag 99.5 per cent of Islamic State and al-Qaeda terrorist propaganda, 96.8 per cent of all violent and graphic content and 51.6 per cent of all hate speech – all before it was reported by users. YouTube said it was able to remove 73 per cent of graphic, extremist or harmful content before anyone had seen it.

But given the sheer amount of material that gets uploaded to those platforms, those rates are still far too low, Dr. Farid warns.

He points to Facebook’s assertion that it could identify marijuana with 93.77-per-cent accuracy. Even a 99.9-per-cent accuracy rate would mean it was missing one out of every 1,000 images.
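
The arithmetic behind that point is straightforward: the expected number of misses is the volume of uploads multiplied by the error rate. The figures in the snippet below are hypothetical and serve only to illustrate the calculation.

def expected_misses(daily_uploads: int, accuracy: float) -> float:
    """Expected number of items missed per day at a given accuracy rate."""
    return daily_uploads * (1.0 - accuracy)

# Hypothetical volume: if a platform saw a billion uploads a day,
# 99.9-per-cent accuracy would still mean roughly a million misses daily,
# and 93.77 per cent would mean tens of millions.
print(expected_misses(1_000_000_000, 0.999))    # ~1,000,000
print(expected_misses(1_000_000_000, 0.9377))   # ~62,300,000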

“So at a relatively mundane task of distinguishing images of broccoli from marijuana – with the highest and most sophisticated AI systems today – it’s a spectacular failure,” he said. “So stop telling me how AI is going to solve the problem. It’s not.”

The problem is only likely to get worse as extremist groups move away from sites such as Facebook and YouTube toward encrypted messaging services such as Facebook’s WhatsApp and Telegram, which make it difficult for companies to identify the exact nature of the content being passed around.

“What we have today is the open internet, where everyone can see and view messages,” said Ahmed Al-Rawi, assistant professor of social media, news and public communication at Simon Fraser University. “But my concern will be what is concealed from us.”

Such fears have prompted renewed calls for regulation. In New Zealand, internet service providers penned an open letter to social-media giants calling on them to support laws such as the one passed in Germany last year threatening online companies with fines of as much as €50-million ($75-million) a day if they don’t remove hate speech and other illegal content within 24 hours.

Internet giants were slow to adopt the free automated software to detect child pornography that Dr. Farid developed with Microsoft a decade ago, in part because they feared it would open them up to liability for everything bad posted on their platforms. But now that social-media companies have started touting advances in automation, he worries that their promise of self-regulation amounts to too little, too late.

“It’s sort of like you built the entire U.S. highway system across the country and you don’t put in dividing lines, you don’t put in guardrails, you don’t put in reflectors, you don’t put in signs,” he said. “Then you realize people are literally dying every day and you’re like: Now what do we do?”
