Section 230 of the Communications Decency Act and the Future of Online Speech

August 09, 2019
by
Ellen P. Goodman
Ryan Whittington

Section 230 of the Communications Decency Act protects online intermediaries like social media platforms from being sued for transmitting problematic third-party content. It also lets them remove, label, or hide those messages without being sued for their choices. The law is thus simultaneously a shield from liability (encouraging platforms to transmit problematic content) and a sword (allowing platforms to manage that content as they like). Section 230 has been credited with creating a boisterous and wide-open Internet ecosystem. It has also been blamed for allowing platforms to profit from toxic speech.

Everyone can agree that the Internet is very different from what was imagined in 1996 when Section 230 was penned. Internet firms now have concentrated power and business models that bear little resemblance to those of that era. No one contemplated the velocity, reach, scale, nature, and influence of the speech now flowing over digital infrastructure. It is entirely reasonable to rethink how Internet liability is apportioned. But it is critical that we are clear about how changes to Section 230 might strengthen government control over speech, entrench the control of powerful platforms, or make the Internet even more lawless.

Section 230 has become a flashpoint in the “techlash” against the power of dominant technology firms. Critics of all political stripes want to reform or repeal the law. For example, House Speaker Nancy Pelosi and Representative Adam Schiff have said that Section 230 effectively functions as a grant of power without responsibility. They have suggested that platforms need to perform more moderation to reduce harmful speech. Lawmakers on the right, including Senators Josh Hawley and Ted Cruz, have argued that platforms should maintain neutrality and prove that any moderation is non-discriminatory and unbiased. Other proposals offer narrowly tailored, content-neutral reforms designed to encourage Internet platforms to take greater responsibility for content moderation.

Unfortunately, conversations about changing Section 230 have been marked by confusion about what the law actually does. Too often, they assume that Section 230 demands platform neutrality, when in fact it encourages content moderation. Or they assume that Section 230 protects platforms from hate speech liability, when in fact it is the First Amendment that does that. Section 230 is too critical to the digital economy and to expressive freedoms to gut. But its form of intermediary liability should not be sacrosanct. Reforms should incentivize responsible and transparent platform governance. They should not dictate particular categories of content to be banned from the Internet, nor should they create publisher licensing schemes. Section 230 fostered uninhibited and robust speech platforms. If we now want more inhibition, we must take into account the power of concentrated speech platforms and the opacity of algorithmic content moderation. We must also recognize that if the law makes it too risky to moderate speech, platforms may walk away from the job entirely.

This paper explains how Section 230 arose, what it was meant to achieve, where it failed, and how proposals to fix it fare.

Download the PDF »