What Facebook, Twitter and YouTube can do now to stop terrorism and hate online

CNN/Stylemagazine.com Newswire | 3/18/2019, 3:43 p.m.

By Robert L. McKenzie for CNN Business Perspectives

(CNN) -- The attacks on two mosques in Christchurch, New Zealand, bring into sharp relief a troubling pattern of global white nationalist terrorism against Muslims. The New Zealand assailant's 87-page manifesto is replete with white supremacist propaganda that was shared across the internet. Social media sites like Facebook, Twitter and YouTube are now scrambling to stop this hateful rhetoric as it continues to spread across their platforms.

Washington Post reporter Drew Harwell's recent tweet captures the significant role that the internet played in this attack: "The New Zealand massacre was livestreamed on Facebook, announced on 8chan, reposted on YouTube, commentated about on Reddit, and mirrored around the world before the tech companies could even react."

Facebook said it had removed 1.5 million videos of the attack. Twitter said on Friday it was working to remove the video from its platform. YouTube said it removes violent and graphic content as soon as it is made aware of it. 8chan said it was responding to law enforcement.

But there are steps these companies can take now to help mitigate the damage and prevent terrorist attacks like this from happening in the future.

It's clear that the internet — and social media in particular — plays a key role in amplifying conspiracy theories, misinformation and myths about Muslims. But the internet didn't create these things — anti-Muslim sentiment long predates the internet.

In a project at New America, we documented 763 separate anti-Muslim incidents in the United States between 2012 and today. Of those, 178 were hate incidents targeting mosques and Islamic centers, and another 197 were anti-Muslim violent crimes.

It's difficult to know the assailant's precise path to radicalization. How much violent extremist material was he consuming online? Was he part of a community of extreme hate on 8chan, or some other platform? Did YouTube's recommendation algorithms pull him into an echo chamber of hate and vitriol against Muslims?

The truth is that we may never know the answers to these questions. What's more, there are real challenges to curtailing online hate: namely, there are no laws on the books that adequately address domestic terrorism. Labeling those who use social media and other technologies to incite violence as domestic terrorists would give tech companies the legal leverage they need to take down their content or block them altogether. Without legislation, we are left with competing, if not fuzzy, ideas about what constitutes acceptable and unacceptable forms of hate speech.

These challenges are compounded by the sheer amount of content that goes online every day. Every minute, on average, 400 hours of video are uploaded to YouTube; 510,000 comments, 293,000 status updates and 136,000 photos are posted on Facebook; and 350,000 tweets are sent on Twitter.

Notwithstanding these challenges, there are concrete and immediate actions tech companies should take to address anti-Muslim hate online.