Why We Might Be Fighting Hate Speech All Wrong
Another weekend, another white supremacist march. On Saturday, Richard Spencer led a second torch-lit protest in Charlottesville and gave a speech at the now-infamous Robert E. Lee statue. As before, he spoke defiantly. And, naturally, Spencer took to social media, posting videos from the march to his Twitter account.
Saturday's march follows the first one in August, also helmed by Spencer, that ended in Heather Heyer's death. In the aftermath of these sorts of marches, infused with blatant white nationalist language, at least one question comes to mind: How can we clamp down on hate speech?
This spring, I spent hours interviewing leaders of a white nationalist group to find out why they promote hate speech, and why, more specifically, they do so online. The common doctrine, supported by numerous studies and articles, is that hate speech functions as both a recruitment tool and a key way to catapult far-right ideas into mainstream discourse. Yet my conversations pointed to another reason: Hate speech also provides emotional fulfillment. And, if we fail to address that component of hate speech, we're unlikely to face it down in a meaningful way.
White nationalist groups do use social media hate speech to recruit and spread their ideas. The people I talked to, for instance, said that they use their Twitter accounts to comment on current events, share posters containing racist caricatures, put out a call for new members, and, on at least one occasion, celebrate Adolf Hitler's birthday. Moreover, most white nationalist pages often supplement these topics with discussions of incorrect black-on-white crime "statistics," falsified studies "proving" racial IQ differences, and unsupported human genetic theories.
But this isn't the crux of hate speech, online or offline; the chant at Charlottesville wasn't "scientific" or "reason"-based. Instead of statistics, the chant was a surprisingly vulnerable admission of palpable insecurity: "You will not replace us."
It may be difficult for many of us to imagine, but hate speech offers a powerful gift to the self-fashioned "alt-right": forms of emotional gratification. In the current political era, that essentially ensures both a demand for and supply of online hate speech. And it guarantees that people won't stop posting hate speech online just because they have to remake a deleted account. Let's be clear: Racist ideas, of any kind, shouldn't be accepted, entertained, or ignored. The point here is that, as history has shown, shared speech can stir powerful emotions, create stronger in-group bonds, and glorify a movement, whether it's used for good or for hate.
After Heyer's death, you could argue that we saw a new approach to censorship. Never before had tech companies so unilaterally and forcefully taken on responsibility not just for censoring content on their platforms, but also for cutting off resources key to the movement's online existence. It was noteworthy for an industry that, until recently, had stubbornly stuck to arguments of neutrality and anti-regulation. Beyond the social platforms' interventions, action was taken by a wide range of companies: Airbnb canceled accounts used to book rooms in Charlottesville for alt-right members, and GoDaddy and Google removed The Daily Stormer's domain registration. Squarespace, meanwhile, dropped white nationalist sites, and Spotify pulled white-power music from its service. And PayPal, GoFundMe, and other payment services cut off accounts tied to the movement.
To some commentators, this purge was a victory: an eradication of the message and, therefore, the problem. Yet by relying only on standard explanations of hate speech, this approach misses the deeper point: hate speech is a much larger problem than we often think because it's fueled by emotional gratification. In other words, the censorship we've seen so far addresses the "what" but not the "why": for white nationalist groups, emotional gratification is hard to find outside of hate speech communities. But within them, members find acceptance and purpose.
Why is this the case? White supremacy may be in full, inglorious view right now, but hate speech is still generally unwelcome in physical spaces. Charlottesville protesters felt comfortable marching in a crowd, but afterward, many were publicly identified or were fired from their jobs. In contrast, online hate speech groups create for members the exact kind of safe space they so disdain in the outside world: a place where they feel they can be themselves.
None of the people I spoke to were "out" to their families and friends. They even observed some intra-group anonymity, for fear of being doxxed, publicly identified, or otherwise targeted. But within the group, they described a shared worldview, save a few differences of opinion. What's more, they greatly valued that community. According to one, "finding anyone else to talk to about this stuff is a huge psychological help." Building off of a foundation of acceptance, this emotional support offers not only solidarity, but also a chance to make offline friends and connections. In some cases, the people I spoke to expanded this "psychological help" to include financial support for when someone's beliefs got him fired. More important still, hate speech groups give a bitter opinion-minority a sense of validation its members are otherwise missing in their lives.
The Internet, as a medium, only amplifies these emotions. By the late 1990s, researchers had already noted that, although online speech could be viewed by millions, it not only affects readers the way personal communication would, but it is also active where radio and TV consumption is passive. Online hate speech therefore feels personal and empowering because posters make deliberate choices about their own browsing. More recent research has also shown that online validation can be rewarding, creating a feel-good feedback loop. Beyond the psychological impacts, online speech has a unique logistical advantage: constant availability. These platforms offer a place to turn to whenever someone needs a fellow nationalist to talk to, regardless of the geographic distance.
The message seems clear: At least on its own, the censorship we're seeing now doesn't get to the core of hate speech. No matter the political climate, there will likely always be an emotional need for the "why" of hate speech, the validation and support it offers, regardless of how difficult it is to access online groups. As we're seeing, people have already begun to build their own hate-friendly platforms and funding services, in addition to moving to the dark web.
So, looking ahead, what can we take away from the varying axes of hate speech?
For one, my conversations suggest that, although the work may be largely uncomfortable, policymakers and tech companies ought to acknowledge that hate group members are still people, not Twitter bots that censorship can simply conjure away. Some efforts are taking this point to heart. Programs like Life After Hate and the Department of Homeland Security's Countering Violent Extremism initiative, for instance, are crucial in efforts to push back against hate speech because they address the emotional component, work from the community level, and acknowledge the range of U.S. extremist ideology. But, under the Trump administration, Life After Hate has lost its federal grant, and CVE has been refocused on Islamist extremism alone.
To do nothing, as we've now seen many times over, has serious consequences, even beyond the death of Heyer. Dylann Roof wrote in his manifesto that his massacre was inspired by the statistics on the propagandistic hate website of the Council of Conservative Citizens. The Southern Poverty Law Center has found that Stormfront members have collectively murdered some 100 people in hate crimes and mass killings, making the site, in the SPLC's words, "the murder capital of the Internet."
Reinstating funding for support groups like Life After Hate would be a good start, as would including far-right extremism in DHS program efforts. Counter-narratives could also play a weighty role; they could adapt to and learn from alt-right Internet icons, creating similarly consistent, personality- and meme-driven anti-hate accounts, rather than the standard YouTube videos and messaging campaigns. Regardless, in response to hate, what's critical, especially in this political season, is that we try something better, and more visceral, than a cosmetic fix: something that ultimately roots out the "why," and not just the "what," of hate speech.