
The Thread

Why White Supremacy Online Is a Growing Problem

Illustration: GoodStudio on Shutterstock

On May 14, an 18-year-old white man traveled from hours away to a supermarket in a predominantly Black community in Buffalo, N.Y., and shot 13 people, killing 10. In the aftermath, law enforcement agents found that the shooter had online links to racist, white nationalist, and far-right conspiracy theories and personalities.

Coming just over three years after the Christchurch mosque attacks in New Zealand, the shooting in Buffalo is the latest somber reminder of the growing threat posed by white supremacist disinformation and conspiracy theories spreading online. It puts a spotlight on the urgent need for internet platforms to do more to address such harmful content.

According to data from April 2021, right-wing extremists have been responsible for more than 267 plots or attacks and 91 fatalities since 2015. More than a quarter of these incidents and nearly half of the resulting deaths were caused by white supremacists. These attacks have targeted a broad range of communities, including the Black, Jewish, immigrant, LGBTQ, and Asian communities. Social media platforms can play a role in exposing vulnerable individuals to extremist content, which, in turn, can contribute to their radicalization. Despite growing concerns around the expanding white supremacy movement and its implications for national security and democracy, internet platforms have been slow to take action. The push to get platforms to take a stronger stance against white supremacist and related conspiracy theory content has been an uphill battle.

Vague Content Policies

For many years, internet platforms' work on online extremism focused primarily on Islamic extremist groups such as the Islamic State. But, after years of sustained civil society advocacy, some platforms began making changes to their content policies, which determine what content is permissible on their services, to address white supremacist content.

In June 2019, YouTube updated its content policies to ban white supremacist and neo-Nazi content. In October 2020, it announced it would remove conspiracy theory content calling for real-world violence, including content posted by the far-right QAnon movement. But, the platform stopped short of banning all QAnon-related content, as other platforms have done.

Additionally, while some platforms have taken action against white supremacist content broadly, many failed to recognize early on how white supremacist content intersects with other parallel movements, such as white nationalism and white separatism. Facebook, for example, waited until 2019 to update its content policies to ban content glorifying white nationalism and separatism.

Although platforms have made important strides toward recognizing the harmful nature of white supremacist content online, many of their policies and enforcement practices feature loopholes that allow harmful content and conspiracy theories to spread. Additionally, white supremacist content often touches on issues of political relevance, such as immigration, which makes drawing clear lines for the purposes of moderation difficult.

Lax Enforcement

Another reason disinformation and conspiracy theories continue to proliferate online is that platforms often fail to consistently enforce their content policies. In August 2019, YouTube banned several far-right individuals for promoting conspiracy theories related to "white replacement" and Islamophobia, including one individual who was affiliated with the shooter who carried out two attacks in Christchurch, New Zealand. However, YouTube reinstated many of these channels less than two days later, noting that they did not violate its policies.

Platforms also tend to invest more resources in tackling English-language disinformation, hate speech, and extremism, allowing misleading information in other languages, such as Romanian, to grow. This poses a serious threat to large swathes of the American and global population. White supremacist and conspiracy theory content is multifaceted in nature, encompassing everything from text posts to memes to live-streamed videos. Platforms' ability to moderate such multimedia content also varies.

Although platforms have invested heavily in tackling specific types of mis- and disinformation, such as those related to COVID-19 and U.S. presidential elections, these approaches are typically employed only temporarily. Because of this, many platforms, including some of the largest, are often playing "whack-a-mole" when trying to tackle disinformation and conspiracy theories on a daily basis, especially as repeat spreaders become more tech savvy and adopt new tactics to evade detection.

Social media platforms can play a role in exposing vulnerable individuals to extremist content, which, in turn, can contribute to their radicalization.

For example, on November 5, 2020, Facebook removed the first Stop the Steal group on its services, which had been casting doubt on the legitimacy of the election and calling for its members to engage in violence. By then, the group had over 360,000 members. Over the next few weeks, several similar groups cropped up; researchers at Facebook found that they were among the fastest-growing groups on the service at the time, and the company was unable to keep up. These groups spread numerous conspiracy theories around the outcome of the 2020 presidential election, culminating in the January 6 insurrection at the U.S. Capitol.

Ad-Driven Business Models

Many platforms shy away from banning white supremacist personalities and figures, as many of these accounts drive engagement on the platforms. Today, most internet platforms rely on advertising to generate revenue: the more ads a platform is able to serve to users, the more revenue it generates. This model incentivizes platforms to permit divisive content that drives engagement and retains user attention on their services, and white supremacist content is no different. In other circumstances, platforms are reluctant to ban accounts spreading white supremacist conspiracy theories and content that belong to individuals in positions of political power, perhaps due to fear of retaliatory regulation.

Additionally, although platforms have begun altering their content policies to address white supremacist content, they are lacking when it comes to addressing the role that their automated content curation tools play in amplifying harmful content. Oftentimes, these algorithmic tools are designed to optimize for engagement, in support of the ad-driven business model. Researchers at the Anti-Defamation League have found that, despite changes to its technology, YouTube's platform still algorithmically recommends extremist content to individuals who have engaged with, and are therefore susceptible to, such content. This can generate a "rabbit hole" effect and radicalize vulnerable internet users.

Lack of Transparency

The examples above give us only a small window into the true scope of the problem posed by white supremacy and conspiracy theory content online. Many platforms claim that their efforts to tackle online misinformation, disinformation, and conspiracy theories are working. But, they offer very little transparency around the impact of their efforts.

Some platforms report aggregate figures on the amount of misleading and harmful content they remove in their transparency reports. But often these figures are lumped together with other categories of content, making understanding the true impact of their moderation efforts challenging. Additionally, platforms deploy a range of other mechanisms to curtail the spread of misleading information, including placing warning or contextual information labels on misleading content, downranking content, altering recommendation systems, and demonetizing accounts. But because companies provide very little comprehensive transparency on how these efforts influence the spread of disinformation online, it is difficult to identify where platforms can concretely improve and hold them accountable.

Disinformation and conspiracy theories linked to white supremacy are increasingly proliferating online, posing a major threat to the lives of people of color, marginalized communities, national security, and democratic processes in the United States and around the world. Internet platforms need to invest more in addressing the spread of such harmful content.

More About the Authors

Spandana Singh

Policy Analyst, Open Technology Institute
