Joshua Geltzer
ASU Future of War Fellow, 2018
This article appeared in Future Tense, a collaboration among Arizona State University, New America, and Slate.
"No other sentence in the U.S. Code," technology scholar David Post has written, "has been responsible for the creation of more value than" a little-known provision of the Communications Decency Act called Section 230. But in January, President Donald Trump's technology adviser Abigail Slater said that Congress should consider changes to the law. It's not crazy to consider amending the provision, no matter the trillions of dollars resting on it. But the law itself is being mischaracterized and therefore misunderstood, including by the very legislators who'd be responsible for amending it.
Section 230 has a simple, sensible goal: to free internet companies from the responsibilities of traditional publishers. Sites like Facebook and Twitter host comments and commentary that they don't produce, edit, or even screen themselves, and Section 230 of the act ensures that those companies can't be sued for content they host for which they haven't assumed responsibility. The law states: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
But that's not how CDA critics see it. There are two primary camps here. One thinks that Section 230 demands neutrality from tech platforms as a predicate for immunity. The other thinks that the provision completely frees tech companies from responsibility for moderating content on their platforms.
Sen. Ted Cruz, who belongs to the first camp, has said, "The predicate for Section 230 immunity under the CDA is that you're a neutral public forum," meaning that, in Cruz's view, a tech company's efforts to remove alt-right content could void the company's protection under that section. Think of Facebook's recent decision to ban right-wing conspiracy theorist Alex Jones: By Cruz's logic, that move by Facebook could be the type of political lean that costs the company its Section 230 protection.
Others, like technology lawyer Cathy Gellis, argue that "Section 230 is a federal statute that says that people who use the Internet are responsible for how they use it—but only those people are, and not those who provide the services that make it possible for people to use the Internet in the first place." In other words, in her view, the law absolves tech companies of any responsibility to remove offensive content from their platforms. Think here of calls for Facebook to do more to prevent terrorist radicalization and foreign election interference: By Gellis' logic, those are problems for people who use Facebook, not Facebook itself.
They're both wrong. In reality, Section 230 empowers tech companies to experiment with new ways of imposing and enforcing norms on new sites of discourse, such as deleting extremist posts or suspending front accounts generated by foreign powers seeking to interfere in elections. And with that freedom to experiment came a responsibility to do so: to find appropriate ways of setting the boundaries of acceptable discourse in the novel, messy world of online debate and discussion. The real question lawmakers now face isn't whether companies are somehow forfeiting their protections by trying to tackle today's online challenges. It's whether companies are doing enough to deserve the protections that Section 230 bestows.
The Communications Decency Act was a reaction to Stratton Oakmont v. Prodigy, a landmark 1995 New York state court decision extending standards for liability, long imposed on publishers for their content, to new media. The decision found the early internet provider Prodigy susceptible to liability for allegedly defamatory posts on its message boards. Section 230 flipped that result: No longer would companies like Prodigy be held to a publisher's standards for the posts, blogs, photos, videos, and other content that users were uploading to tech platforms with ever-increasing frequency.
Why? Some, like media law professor Frank LoMonte, now suggest the law was intended to absolve tech companies of any responsibility for moderating their platforms, hence LoMonte's claim that, with Section 230, "Congress elected to treat the Prodigies of the world—eventually including Facebook—as no more responsible for the acts of their users than the telephone company." This view suggests that Section 230 was meant to put the full burden on users of Facebook, Twitter, and YouTube to self-police and, in turn, to free those sites of any responsibility for what's uploaded to them. Legal analysts like Adam Candeub and Mark Epstein build on that view and go even further, arguing that companies risk losing protection under Section 230 if they engage in robust moderation of content rather than adhering to strict neutrality: "Online platforms should receive immunity only if they maintain viewpoint neutrality, consistent with traditional legal norms for distributors of information." For tech companies, this narrative has been convenient, offering an excuse when they're accused of moving too slowly and too ineffectively to address harmful content posted by hostile actors ranging from ISIS to the Kremlin to vicious internet trolls.
This view of Section 230, which holds that the law was created to free companies of responsibility for moderating content on their platforms, and moreover that assuming such responsibility might cost them their protection, is now taking root among key lawmakers accusing tech companies of acting too aggressively against far-right extremist content. It's the view espoused by Cruz, and it's getting louder. In November, incoming Missouri Sen. Josh Hawley, a Republican, offered a similar characterization of Section 230, suggesting that Twitter's approach to moderation has jeopardized its immunity: "Twitter is exempt from liability as a 'publisher' because it is allegedly 'a forum for a true diversity of political discourse.' That does not appear to be accurate."
But Section 230 is neither an excuse for failing to moderate content nor a reward for declining to do so. Indeed, its purpose wasn't to absolve tech companies of moderating their platforms, or to forbid them from doing so, but to empower them to moderate.
It's true that Section 230's first sentence eliminates liability for content uploaded to tech platforms. But the second sentence is equally important: It removes liability for tech companies' own efforts to police their platforms. That portion of the law immunizes tech companies for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."
So, even as Section 230 determined that tech companies wouldn't face the same liability as publishers for what the companies didn't remove, the law also protected the companies for what they did remove and what, by contrast, remained. This empowered them to experiment with forms of content moderation that would be most effective for the unprecedented types of content delivery vehicles they represented. That's why Sen. Ron Wyden, a co-author of Section 230, has explained that it was intended as both a "shield" and a "sword" for tech companies, protecting them from liability for vast amounts of content for which they're not assuming responsibility but also empowering them to do what they can to eliminate the worst of that content. Wyden has said that "because content is posted on [internet] platforms so rapidly, there's just no way they can possibly police everything," but he also clarified that Section 230 was intended "to make sure that internet companies could moderate their websites without getting clobbered by lawsuits."
The idea that Section 230 absolves tech companies of any responsibility for what's uploaded to their platforms speaks to a widespread libertarian impulse infused with optimism about new technologies: Let all speech flourish, and the best arguments will overcome the likes of terrorist extremism and white supremacist hate. That narrative seems to take the power to moderate global conversations out of the hands of the few in Silicon Valley and spread it among the many.
But these are no longer the halcyon early days of the internet. For all of the economic benefits and human connections that modern communications platforms have facilitated, they've also brought hate speech, trolling, paranoid conspiracies, terrorist recruitment, and foreign election interference. It's simply too late in the day to maintain a "tweet and let tweet" ethos. The original wisdom of Section 230 (provide tech companies with room to experiment, but expect them to do so responsibly, even aggressively) rings truer now than ever.
And that brings us to today's emerging debate about Section 230. With the steady stream of content policy challenges flowing across tech platforms, the notion that Section 230 provides an excuse for tech companies not to moderate their platforms seems increasingly untenable. And the even bolder idea being propagated by the likes of Cruz and Hawley, that more aggressive moderation will cost the companies their immunity, is simply wrong. What we really need is a debate over whether companies are adequately using both their immunity and their accompanying responsibility to find novel ways to moderate novel technologies. Otherwise, as Wyden has warned the companies, "If you're not willing to use the sword, there are those who may try to take away the shield."