The First Amendment Opportunism of Digital Platforms

February 11, 2019

There is a media policy design problem when it comes to digital information platforms (like Facebook and Twitter) in the United States. Media policy has been almost entirely privatized, with information platforms assuming the role of government, without transparency or accountability. They have their cake as beneficiaries of First Amendment protection and eat it too as free-speech enforcers who may (under Section 230 of the 1996 Communications Decency Act) decline, and have (because of their business model) declined, to invest in a better speech environment. This has led to a polluted public sphere in which dominant platforms choke off resources for journalism while flooding social media feeds with polarizing, false, trivializing, and hateful speech.

This brief identifies how First Amendment law and values have shaped a unique regulatory and self-regulatory space for information platforms as compared with other media. First Amendment doctrine has evolved to place ever-greater restrictions on government power over information platforms while these platforms have become powerful speech regulators. Most possible interventions present risks of overreach, but should at least be considered in order to increase platform accountability for the information environment. First Amendment constraints favor structural regulation to foster more competition in information distribution and platform self-regulation. But some kinds of media regulation to foster diverse and under-supplied information can also exist side-by-side with rigorous First Amendment protections. The brief concludes with recommendations for the way forward, including creating more competition through structural regulation, media regulation in cases of bottleneck control, and options to improve self-regulation.

This is principally a U.S. story, but the home environment of the platforms has influenced their operation globally. As a result, there is pushback in new and proposed laws in Europe, South America, and elsewhere seeking to rein in platform power.[1]

The Predicament

In the United States, the result of the legal structure is that platforms have unfettered and unaccountable authority to make policy and, to the extent that there is any government role, it is exercised via threat and suasion in ways that are also unaccountable and unfettered.[2] Because of the concentration of these platforms, policymakers need only convince a couple of CEOs to change their speech policies to do what the First Amendment would not allow these officials to do through law – such as block or deemphasize certain content. For example, when some Republican members of Congress complained of liberal bias on Facebook’s News Feed in 2016, the company dropped its more human-mediated editorial approach to promoting stories and doubled down on algorithmic selection. This is a problematic situation, whether one favors entirely unregulated digital speech fora or accountable media policy.

There are three First Amendment strands to this predicament: the status of platforms as speakers, with special privileges; platform internalization of the First Amendment principle of “neutrality,” along with a resistance to defining and privileging the press; and general trends in jurisprudence to expand First Amendment coverage and protection for the sake of protecting economic interests against regulation.

The Status of Platforms as Speakers

The First Amendment, which states that “Congress shall make no law…abridging the freedom of speech, or of the press,” applies only to the state’s interference with speech. Today, however, it is private entities that are most influential in censoring, encouraging, and funding speech. This is not a new phenomenon, but the degree and kind of influence wielded by only a few companies is new. Dominant actors like Facebook not only control what is on their platform, they also shape the discursive space as a whole by purchasing competitors, incentivizing the production of certain content, dominating advertising markets and the exchange of personal data, and intermediating daily personal interactions among a large portion of the world’s people. These platforms in effect make media policy. The ways in which they use data, shape preferences, and control markets are meaningfully different from how big publishers and broadcasters did so in the past.

Criticism of the platforms in this regard is essentially threefold. First, they allow too much “bad speech.” They are structured to reward extreme, sensational, and polarizing speech by favoring engagement and by microtargeting customized audiences.[3] They are structured to recommend content that exploits people’s psychological vulnerabilities and bypasses rational deliberation. And although their terms of use prohibit violence, incitement, and hate speech, enforcement is inconsistent, with platform operators at times pulling punches in response to political pressure. This was the case, for example, when platforms removed speech from the U.S. conspiracy site InfoWars and speech inciting violence in Myanmar and the Philippines. Second, they do not support “good speech”; they have cannibalized the news business and fail to pay their share of the costs of producing information. Third, their content-management practices are essentially black boxes, making it difficult to know who is being silenced and who is being amplified. In particular, they are not transparent about advertising, data collection and sharing, private censorship, ranking, and microtargeting.


Platforms are not the first powerful speech intermediaries to face criticism, and the private governance of speech is an old story. Whenever private speakers dominate a field of speech, there is tension between First Amendment law, which is essentially laissez-faire, and First Amendment values, which may call for interventions to ensure more participation, more “good” speech, and transparency.

The last really big technological transformation in speech mediation came in the form of broadcasting. In that instance, the law addressed this tension by relaxing First Amendment strictures to allow for regulation in the name of First Amendment values. Specifically, in Red Lion Broadcasting Co. v. FCC (1969), the U.S. Supreme Court adopted the notion of “scarcity”: the physical scarcity of the airwaves, and the exclusive licenses to occupy them, imposed on broadcasters special responsibilities, such as public-interest requirements and concentration limits. Most of these requirements are gone now, as much for political reasons as for First Amendment concerns.

When the Internet came along in the 1990s, it was seen as a corrective to the platform power of broadcasters and cable companies. Early Supreme Court decisions classified Internet speech intermediaries like search engines and bulletin boards as First Amendment speakers (for example, in Reno v. ACLU (1997)), meaning that they enjoyed the same free-speech protections as newspapers or protestors. This was a foundational move, without which the Internet would not have developed as it did. It certainly made it much more likely that the Internet would be U.S.-dominated.

Importantly, not only would the full force of the First Amendment apply to Internet intermediaries, but the government also intervened to boost the power of the emerging platforms with Section 230 of the 1996 Communications Decency Act. Its 26 words provide sweeping immunity from civil liability for intermediary platforms: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, platforms have immunity for circulating speech regardless of its harm. Section 230 bestows on online intermediaries cost and time-to-market advantages over those who are held accountable for informational harms. It is possible that Internet platforms, with their network effects and winner-take-all structure, would have become essentially monopoly providers no matter what, but Section 230 was at the very least an accelerant, and possibly a necessary component, of their dominance.

Looking at the state of the law in the United States today with respect to Internet platforms, there is nothing like the scarcity policy of the early broadcasting days that would relax First Amendment strictures for government attempts to achieve positive First Amendment values. Rules that have been adopted in the European Union, like the right to be forgotten or hate speech liability, would almost certainly be unconstitutional. The same is true for other proposals like a mandate that search engines supply results in a “neutral” manner or that platforms privilege public media in their feeds.

The Fight Online Sex Trafficking Act (FOSTA), passed in 2018, has taken the first bite out of Section 230 by holding platforms criminally liable for the circulation of speech that encourages or facilitates sex work. A constitutional challenge brought by the Electronic Frontier Foundation (representing human rights and sex workers’ rights groups) was dismissed on technical grounds, but there has yet to be a ruling on the constitutional merits of the law.[4]

Internalization of First Amendment Values

At the same time as platforms have benefited as bearers of First Amendment rights, they have acted as First Amendment enforcers in the services they have created. By waving the flag of free speech, as if the First Amendment prevented more aggressive platform governance, they have absolved themselves of responsibility for distributing and amplifying “bad” content. They have adopted a kind of “as if” governance style, acting as if they were governmental entities bound by the First Amendment. As private entities, free from liability under Section 230 and from government regulation under the First Amendment, the platforms are left to regulate themselves, and until recently they have declined to take this seriously.

Policymakers and the public are angry at the platforms for failing in this regard. One of the frustrations is that the platforms have trouble admitting that they do in fact regulate speech through design choices and policies. They insist that they are tech companies, not media companies. They insist that they are essentially neutral with respect to speech, standing in the shoes of government without the associated accountability checks.

One of the features of tech neutrality with respect to speech is that platforms like Facebook and Twitter have for the most part resisted defining what constitutes a “press” worthy of special treatment. The platforms have generally not distinguished between reputable journalistic content and things like conspiracy theories in prioritizing content for distribution. Other platform actions have also hurt the press. For example, Facebook has de-prioritized many news sites in favor of more personally relevant posts from its users’ friends and family, so as to reduce the flow of disinformation and increase consumer satisfaction. However, this also had the effect of reducing the audience for news.

The reluctance to define or privilege the press is in keeping with First Amendment law. Although the First Amendment provides specifically for “freedom of the press” as well as for freedom of speech, courts have never fleshed out the meaning of the press clause or relied on it to develop a jurisprudence of press amplification or protection. One understanding of the clause is that the press’s keystone democratic function should confer on its institutions special protections and advantages, such as reduced liability for content harms, protection from prosecution under a federal shield law, and entitlement to subsidies or special distribution privileges. There are a few examples of federal law carving out special rights for the press. For example, the “news media” enjoy special exemptions from campaign finance laws[5] and fee requirements for requests under the Freedom of Information Act.[6] For the most part, however, Congress and the courts remain reluctant to define what the press is.[7] Doing so is difficult and risks being dangerously under-inclusive or meaninglessly over-inclusive.

The absence of any coherent law on what the press is means that serious journalism has no special constitutional status, notwithstanding all the bromides about the importance of a free press to a free society. It is therefore unsurprising that platforms operating an “as if” governance regime forego making distinctions between fact-checked reporting and other speech. Their unwillingness to engage in content discrimination, as if they were the government, and the resulting amplification of harmful speech are among the sources of private regulatory failure.

“Lochnerization”: Using Free Speech to Advance Corporate Economic Interests

As speakers, platforms are well protected by the First Amendment. But the law is evolving in ways that may immunize companies from regulation even when they are not engaged in expressive communication. As the Internet has been developing, First Amendment doctrine has been changing to make it even harder for government to regulate. This is what some scholars call the “Lochnerization” of the First Amendment (after Lochner v. New York, the famous pre-New Deal case that held labor laws unconstitutional for interfering with freedom of contract).[8]

There are two principal moves. First, courts are expanding the coverage of the First Amendment to make communications that had not really been considered speech subject to constitutional protection (e.g. data about prescribing habits, Securities and Exchange Commission disclosure requirements, warning labels on products, and product descriptions). Second, there has been an increase in the amount of protection for what is covered and courts are more often using heightened levels of scrutiny. In other words, the government must make an ever stronger case to convince courts that what was once seen as economic or market regulation is not offensive to free speech.

These jurisprudential changes have been the result of a cunning campaign by libertarian groups like the Washington Legal Foundation, sometimes with help from the U.S. Chamber of Commerce or the Cato Institute. The most visible victory in this effort was probably the Supreme Court’s Citizens United v. FEC decision in 2010, which struck down campaign finance regulations on the grounds that they abridged the speech rights of corporations. As a result, once a government attempt to achieve a goal unrelated to the suppression of speech, such as fair elections or consumer protection, is deemed a speech regulation, it will likely be struck down.

Lochnerization may not, in the end, be terribly consequential for digital speech platforms, since there is no question that they deal in core First Amendment expression. It may be more important for platform companies like Uber that use data in ways that are not expressive but constitute communication all the same. For example, if a jurisdiction were to require Uber to open up its data to competitors, there would be a question as to whether the company was being compelled to “speak” in violation of the First Amendment. As we look ahead to the convergence of information platforms with other kinds of platforms, the spread of First Amendment protection to previously unprotected or lesser-protected communication may limit the regulatory space.

Recommendations

From a media policy perspective, information platforms have the most favorable of all possible situations in the United States. They are free from liability under Section 230 and from government regulation under the First Amendment. Yet they also operate a free-speech governance regime “as if” they were the government, foregoing distinctions between fact-checked reporting and other speech, while moderating some content without the transparency or accountability that would attach to government interference with private speech.

Thus, the platforms have used a mixture of First Amendment rights and values to produce results at odds with First Amendment values, such as a functioning marketplace of ideas and regulatory intervention to address market failure. Given this state of affairs, there are mutually compatible policy and legal developments that should be considered.

Structural regulations to foster competition

The cleanest approaches avoid any content-based regulation that seeks to change the behavior of platforms with respect to certain classes of information (e.g. reducing hate speech and increasing verified journalism). Instead, they focus on structural regulation like antitrust, interoperability, and data portability, as well as possibly other forms of regulation that might encourage the entry of distributors who compete in part on the integrity of the information environment they provide and that provide publishers with different distribution models.

New Gatekeeping Regulation

A path to content-based regulation that has been available to media regulators is the scarcity rationale developed for broadcasting. In that case, it was the scarcity of radio frequencies that created the regulatory space to impose content-based obligations on broadcasters. The Supreme Court has come very close to striking down that rationale for broadcasters, and it is hard to see the development of any comparable rationale for information platforms.

However, one idea would be to premise a scarcity rationale on big platforms’ gatekeeping control of data. Data power is a form of structural dominance akin to control of the airwaves. So long as it is not possible to compete with the existing platform behemoths, they control information space by virtue of their user base and exploitation of users’ data. While not on point legally, two recent Fourth Amendment cases in the Supreme Court, about GPS surveillance (United States v. Jones (2012)) and cell-phone location data (Carpenter v. United States (2018)), support the notion that “digital is different” because of the granular knowledge about users that digital data enables. Any content-based rule should be premised on findings about this kind of gatekeeping power.

Better Self-Regulation

In lieu of structural regulation or content-based behavioral regulation, the government may have a role in facilitating self-regulation. There has been a significant amount of experimentation by the platforms in recent months as they have tried to respond to pressure to demote certain types of content. But there has been very little concerted collective action on the part of the major tech companies to state shared values, such as the promotion and support of credible journalism. Moreover, the process by which the platforms come to decisions about tweaking their presentation of content, empowering consumers or advancing more privacy-protecting policies is usually opaque and not collaborative. Steps could be taken to open up consultations to various stakeholders and make them more public. The result might be platform codes of conduct and best practices with respect to transparency and the privileging of the press.

Facebook’s proposed oversight board for content removal decisions is an interesting innovation, provided that its tribunals release written decisions with their reasons, so that we can begin to have a public “common law” on Facebook as speech regulator.

Support “good” information

What government cannot do through regulation, it has more flexibility to do through subsidies and tax policy. The objectives could be to fund journalism (especially local) that has been decimated by the loss of ad revenue to the platforms, to incentivize private philanthropic support for journalism, to incentivize more proactive content-moderation strategies, and to invest in media education that reduces demand for disinformation and increases demand (and willingness to pay) for high-quality information. What is needed in this area is a revived commitment to public media, with that term understood in new ways appropriate for the digital ecosystem.

Conclusion

Information markets are special because they traffic in protected First Amendment speech. That means they are especially hard to regulate, even when there are market failures, but also that it is especially important that they function well for the health of a democracy. Since the advent of the Internet and digital platforms, the expectation has been that speech conduits would self-regulate to provide the best kinds of marketplaces. Two decades into the experiment, it seems clear that change is needed to better support “good speech,” to reduce the salience of “bad speech,” and to improve transparency around editorial policies. Self-regulation and structural regulation are part of the solution, but so is media policy regulation focused on information gatekeepers and under-supplied content.

Ellen P. Goodman (@ellgood) is a professor at Rutgers Law School and a senior fellow with the Digital Innovation and Democracy Initiative of the German Marshall Fund of the United States.

[1] See, for example, Poynter Institute, “Guide to Anti-Misinformation Actions Around the World,” January 2019; French Act no. 2018-1202 of 22 December 2018 on the fight against the manipulation of information; Australian Competition and Consumer Commission, Digital Platforms Inquiry, December 2018; Canada House of Commons, “Democracy Under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly,” Report of the Standing Committee on Access to Information, Privacy and Ethics, December 2018; U.K. House of Commons Digital, Culture, Media and Sport Committee, Disinformation and ‘fake news’: Interim Report, July 2018.

[2] Casey Newton, “How Congress missed another chance to hold big tech accountable,” TechCrunch, December 12, 2018; Washington Post, “Transcript of Mark Zuckerberg’s Senate hearing,” April 10, 2018; Washington Post, “Transcript of Zuckerberg’s appearance before House committee,” April 11, 2018.

[3] This is a point that Facebook CEO Mark Zuckerberg himself concedes. Mark Zuckerberg, “A Blueprint for Content Governance and Enforcement,” November 15, 2018.

[4] Woodhull Freedom Foundation et al. v. United States, DDC, 1:18-cv-01552-RJL (Filed 09/24/18).

[5] See 2 U.S.C. § 431(9)(B)(i) (2009) (exempting from mandatory campaign disclosures expenditures for the production of “any news story, commentary, or editorial distributed through the facilities of any broadcasting station, newspaper, magazine, or other periodical publication, unless such facilities are owned or controlled by any political party, political committee, or candidate”).

[6] See, for example, Office of Management and Budget Guidance for Freedom of Information fee exemption (news media representative “refers to any person actively gathering news for an entity that is organized and operated to publish or broadcast news to the public. …‘[N]ews’ means information that is about current events or that would be of current interest to the public.”).

[7] Shield laws in individual states typically define the press by reference to periodical publication for a general audience. See Mary-Rose Papandrea, Citizen Journalism and the Reporter’s Privilege, 91 Minn. L. Rev. 515 (2007) (detailing eligibility criteria under various State reporter’s privilege statutes).

[8] See, for example, Robert Post and Amanda Shanor, Adam Smith’s First Amendment, 128 Harv. L. Rev. Forum 165, 167 (2015) (“the First Amendment has become a powerful engine of constitutional deregulation”); Jedediah Purdy, Neoliberal Constitutionalism: Lochnerism for a New Economy, 77 Law & Contemp. Probs., no. 4, 2014, pp. 195, 195. See also Sorrell v. IMS Health Inc., 131 S. Ct. 2653, 2685 (2011) (Breyer, J. dissenting). Breyer warned that invalidation of state medical confidentiality law on First Amendment grounds “reawakens Lochner’s pre-New Deal threat of substituting judicial for democratic decision making where ordinary economic regulation is at issue.”