The Christchurch shooter and YouTube’s radicalization trap


Changes needed —

Researchers say YouTube’s policies and algorithms are still too opaque.

Cecelia D’Anastasio


YouTube, Facebook, and other social media platforms were instrumental in radicalizing the terrorist who killed 51 worshippers in a March 2019 attack on two New Zealand mosques, according to a new report from the country’s government. Online radicalization experts speaking with WIRED say that, while platforms have cracked down on extremist content since then, the fundamental business models behind top social media sites still play a role in online radicalization.

According to the report, released this week, the terrorist regularly watched extremist content online and donated to organizations like The Daily Stormer, a white supremacist site, and Stefan Molyneux’s far-right Freedomain Radio. He also gave directly to Austrian far-right activist Martin Sellner. “The individual claimed that he was not a frequent commenter on extreme right-wing sites and that YouTube was, for him, a far more significant source of information and inspiration,” the report says.


The terrorist’s interest in far-right YouTubers and edgy forums like 8chan is not a revelation. But until now, the details of his involvement with these online far-right organizations were not public. More than a year later, YouTube and other platforms have taken steps toward accepting responsibility for the white supremacist content that propagates on their websites, including removing popular content creators and hiring thousands more moderators. Yet according to experts, until social media companies lift the lid on their black-box policies and algorithms, white supremacist propaganda will always be a few clicks away.

“The problem goes far deeper than the identification and removal of pieces of problematic content,” said a New Zealand government spokesperson over email. “The same algorithms that keep people tuned to the platform and consuming advertising can also promote harmful content once individuals have shown an interest.”

Entirely unexceptional

The Christchurch attacker’s pathway to radicalization was entirely unexceptional, say three experts speaking with WIRED who had reviewed the government report. He came from a broken home and from a young age was exposed to domestic violence, sickness, and suicide. He had unsupervised access to a computer, where he played online games and, at age 14, discovered the online forum 4chan. The report details how he expressed racist ideas at his school, and he was twice called in to speak with its anti-racism contact officer regarding anti-Semitism. The report describes him as somebody with “limited personal engagement,” which “left considerable scope for influence from extreme right-wing material, which he found on the internet and in books.” Aside from a couple of years working as a personal trainer, he had no consistent employment.

The terrorist’s mother told the Australian Federal Police that her concerns grew in early 2017. “She remembered him talking about how the Western world was coming to an end because Muslim migrants were coming back into Europe and would out-breed Europeans,” the report says. The terrorist’s friends and family provided narratives of his radicalization that are supported by his internet activity: shared links, donations, comments. While he was not a frequent poster on right-wing sites, he spent ample time in the extremist corners of YouTube.

A damning 2018 report by Stanford researcher and PhD candidate Becca Lewis describes the alternative media system on YouTube that fed young viewers far-right propaganda. This network of channels, which ranged from mainstream conservatives and libertarians to overt white nationalists, collaborated with each other, funneling viewers into increasingly extreme content streams. She points to Stefan Molyneux as an example. “He’s been shown time and time again to be an important vector point for people’s radicalization,” she says. “He claimed there were scientific differences between the races and promoted debunked pseudoscience. But because he wasn’t a self-identified or overt neo-Nazi, he became embraced by more mainstream people with more mainstream platforms.” YouTube removed Molyneux’s channel in June of this year.

This “step-ladder of amplification” is in part a byproduct of the business model for YouTube creators, says Lewis. Revenue is directly tied to viewership, and exposure is currency. While these networks of creators played off each other’s fan bases, the drive to gain more viewers also incentivized them to post increasingly inflammatory content. “One of the most disturbing things I found was not only evidence that audiences were getting radicalized, but also data that literally showed creators getting more radical in their content over time,” she says.

Making “significant progress”?

In an email statement, a YouTube spokesperson says that the company has made “significant progress in our work to combat hate speech on YouTube since the tragic attack at Christchurch.” Citing 2019’s strengthened hate speech policy, the spokesperson says that there has been a “5x spike in the number of hate videos removed from YouTube.” YouTube has also altered its recommendation system to “limit the spread of borderline content.”

YouTube says that of the 1.8 million channels terminated for violating its policies last quarter, 54,000 were for hate speech—the most ever. YouTube also removed more than 9,000 channels and 200,000 videos for violating rules against promoting violent extremism. In addition to Molyneux, YouTube’s June bans included David Duke and Richard Spencer. (The Christchurch terrorist donated to the National Policy Institute, which Spencer runs.) For its part, Facebook says it has banned over 250 white supremacist groups from its platforms and strengthened its dangerous individuals and groups policy.

“It’s clear that the core of the business model has an impact on allowing this content to grow and thrive,” says Lewis. “They’ve tweaked their algorithm, they’ve kicked some people off the platform, but they haven’t addressed that underlying issue.”

Online culture does not begin and end with YouTube or anywhere else, by design. Fundamental to the social media business model is cross-platform sharing. “YouTube isn’t just a place where people go for entertainment; they get sucked into these communities. Those allow you to participate via comment, sure, but also by making donations and boosting the content in other places,” says Joan Donovan, research director of Harvard University’s Shorenstein Center on Media, Politics, and Public Policy. According to the New Zealand government’s report, the Christchurch terrorist regularly shared far-right Reddit posts, Wikipedia pages, and YouTube videos, including in an unnamed gaming site chat.

Fitting in

The Christchurch mosque terrorist also followed and posted on several white nationalist Facebook groups, sometimes making threatening comments about immigrants and minorities. According to the report authors who interviewed him, “the individual did not accept that his comments would have been of concern to counter-terrorism agencies. He thought this because of the very large number of similar comments that can be found on the internet.” (At the same time, he did take steps to minimize his digital footprint, including deleting emails and removing his computer’s hard drive.)

Reposting or proselytizing white supremacist content without context or warning, says Donovan, paves a frictionless road for the spread of fringe ideas. “We have to look at how these platforms provide the capacity for broadcast and for scale that, unfortunately, have now started to serve negative ends,” she says.

YouTube’s business incentives inevitably stymie transparency. There aren’t great ways for outside experts to assess or compare techniques for minimizing the spread of extremism across platforms; they often must rely instead on reports the companies put out about their own platforms. Daniel Kelley, associate director of the Anti-Defamation League’s Center for Technology and Society, says that while YouTube reports an increase in extremist-content takedowns, the measure doesn’t speak to how prevalent such content was before or remains now. Researchers outside the company don’t know how the recommendation algorithm worked before, how it changed, how it works now, and what the effect is. And they don’t know how “borderline content” is defined—an important point considering that many argue it continues to be prevalent across YouTube, Facebook, and elsewhere.

Questionable results

“It’s hard to say whether their effort has paid off,” says Kelley. “We don’t have any information on whether it’s really working or not.” The ADL has consulted with YouTube, but Kelley says he hasn’t seen any documents on how it defines extremism or trains content moderators on it.

A real reckoning over the spread of extremist content has pushed big tech to put big money toward finding solutions. Throwing moderation at the problem appears effective: how many banned YouTubers have withered away in obscurity? But moderation doesn’t address the ways in which the foundations of social media as a business—creating influencers, cross-platform sharing, and black-box policies—are also integral to perpetuating hate online.

Many of the YouTube links the Christchurch shooter shared have been removed for breaching YouTube’s moderation policies. The networks of people and ideologies engineered through them and through other social media persist.

This story originally appeared on wired.com.
