By Ben Horton*
A few weeks ago, Professors Jack Goldsmith and Andrew Keane Woods ignited controversy by suggesting in The Atlantic that China was right and America was wrong about internet censorship and surveillance. This seemingly contrarian stance rubbed people the wrong way, especially given reports that China’s online censorship delayed its response to COVID-19 and that Chinese agents have actively disseminated disinformation about the virus—and then attempted to suppress reports revealing their disinformation campaign.
Except the professors’ critics seem to have missed the point of their essay. Goldsmith and Woods said China was right that the internet inevitably would be censored and surveilled, not that China’s methods were normatively appealing.
Even discounting existing state surveillance and censorship on the internet in the United States, private surveillance and censorship are ubiquitous. And, notwithstanding our intuitions, most people want an internet that is subject to ubiquitous censorship—that is, “content moderation.”
Putting aside illegal content (child pornography, snuff films, etc.), most consumers do not want to be inundated with what Sarah Jeong has dubbed “the internet of garbage.” They do not want to be harassed, bullied, threatened, or spammed on the internet. And in the midst of a global pandemic, they want to ensure disinformation is kept to a minimum. They want to limit harmful speech.
Part of our problem is that we still think of speech burdens in a binary, on-off way. But especially online, the question is not whether you can find content; it is how hard the content will be to find and how much it will be amplified.
The question is not if there will be censorship and surveillance;[1] the question is who gets to do it, and how it is done. Right now a relatively small group of private actors not only make the substantive decisions about content on the internet, they also set the processes that drive those decisions and determine how information flows through their networks. They wield enormous power and are almost completely unaccountable to the public.
So, what are our options?
Option 1: Stay the Course
First, the United States could continue to shield tech companies from most tort-based liability for content posted on their platforms via Section 230 of the Communications Decency Act, maintain an expansive view of the First Amendment, and not substantively regulate tech companies.
Supporters of the current system largely admit that ubiquitous content moderation is good, so long as it is private. They hold that a system of private speech regulation provides a market incentive for platforms to reach a Goldilocks zone of content moderation: enough harmful speech is blocked that it is possible to maintain deliberative communication amid the noise, but not so much that deliberative communication is also blocked. Consumers have a choice, and services that fail to moderate will either fail or be consigned to the dark corners of the internet.
But how real is that choice? Alphabet owns the two most popular websites in the world. Facebook (through its eponymous service and Instagram), Twitter, and Reddit collectively dominate U.S. social media. Over the past twenty years, who has rivaled them? MySpace? Snapchat? Yahoo!? Tumblr? Even counting these rivals, American consumers have had two significant options for search engines and four or five for social media. And, at least in part, that lack of choice is due to inaction by antitrust enforcers at the Federal Trade Commission and the Department of Justice when Google bought YouTube and when Facebook acquired Instagram. In a monopolistic environment, consumers can try to campaign for changes to private companies’ policies, but their effectiveness may depend on some of the substantive regulations discussed below.
As Evelyn Douek has argued, these platforms are increasingly cooperative in their moderation decision-making, making consumer choice even more illusory. YouTube’s policies on terrorism-related content are not significantly different from Facebook’s or Twitter’s because they all belong to the same private group that develops those standards. Facebook’s new Oversight Board is probably a step in the right direction, but what happens if it becomes the de facto decision-maker for social media standards generally?
Finally, the market theory is contingent on the assumption that people choose their networks based on the ability of the network to curate information. But the profit incentive of social media companies is to increase our engagement—which might mean pushing harmful content on users, or at least enabling that sort of thing (until they’re caught). The negative effects of this content might be exaggerated, but without greater transparency we just don’t know.
Aside from the harms of disinformation, staying the course has the additional drawback of eliminating the United States from the global conversation about internet governance. As Microsoft President Brad Smith mentioned in a recent interview, in the future, tech companies may simply adapt their products to the regulations of the European Union and other Western democracies that lack stringent First Amendment or Section 230 protections against government involvement in online speech. We already see this to some extent with the NetzDG law in Germany, which, if nothing else, is offering us some useful transparency on content moderation.
Or tech companies themselves might simply decide how public health crises are managed.
Either way, the United States government, for better or worse, will simply not have much of a say in what the internet looks like.
Option 2: Content-Based Regulations
For constitutional reasons, the approach of regulating speech based on its content is closed off to the United States. There is a lively academic debate about the status of lies and hate speech under the First Amendment. But absent a political revolution, it will remain an academic debate. The Supreme Court has said, in an 8-1 opinion, it will not open up new “uncovered” zones of speech. Content-based regulations of harmful speech will continue to be subject to strict scrutiny, and they will continue to be struck down.
In the U.S. context, at least for the foreseeable future, content-based censorship will continue to be ubiquitous and limited to private actors. That does not mean we need to leave the speech-moderating apparatus entirely to the private sector.
Option 3: Torts, Competition, Process, and Friction
Contrary to what cyber-libertarians claim, our options are not limited to “censorship” or no regulation at all. We have other tools at our disposal. The key is to focus on content-neutral regulations, especially those that govern the flow of information, rather than regulations that criminalize certain content.
As a threshold matter, these policies do not have to—and likely will not—take the form of flat bans and mandates. They might be conditions attached to liability immunities or tax incentives, and they can—and should—distinguish between different types of online services. Of course, companies have been lobbied, and should be lobbied, to make these changes on their own; I am arguing that there is some role for direct government regulation in these realms.
First, we could reform Section 230. While supporters maintain that Section 230 is necessary to ensure that platforms can engage in decent moderation without fear of liability, detractors argue that a well-crafted alternative could still shield sites that engage in good-faith moderation without shielding sites that are designed to facilitate human trafficking, for instance. And regardless of where you stand on the 230 debate, given bipartisan support for both SESTA–FOSTA and the delayed “EARN IT Act,” 230 as we know it is unlikely to survive. If we want sensible intermediary liability protection, and not a patchwork of exceptions that probably make the internet less safe, the 230-or-nothing stance is increasingly politically untenable.
Second, we can advocate for regulations that promote competition, creating a market where consumers have real choices and their choices make a difference. This need not mean the traditional “breaking up” of companies, given the beneficial network effects consumers find in centralized services and the aggravated harms a balkanized internet could bring. Pro-competition policy could start with blocking the sale of startups to Facebook and Google. It could include the imposition of substantive requirements, like an information fiduciary responsibility or an interoperability requirement, on organizations with a certain share of the market. Any regulations, however, need to be sensitive to the needs of non-profits with large user bases and low revenues.
Third, and more controversially, we can require more transparent processes in content moderation. A number of organizations have released and advocated for the “Santa Clara Principles.” These include, at a minimum, publishing the number of posts and accounts taken down, organized by category of violation; providing notice to users whose accounts or posts are taken down; and instituting some kind of appeal process. If content-based moderation decisions are largely going to be made by private actors, their legitimacy depends on their being transparent and understandable to the public. Even if changes are brought about by private pressure, we cannot collectively criticize and improve on secret processes.
Finally, and most controversially, maybe we can impose content-neutral, friction-creating regulations that force consumers to be more deliberate in sharing and consuming information. For instance, WhatsApp recently limited its forwarding function so that any messages that come from a chain of more than five people must be forwarded one chat at a time. This type of rule is not content-based; it applies to speech based on its virality, not the “topic, idea or message” communicated. Disclosure requirements—revealing, for example, whether or not a human is speaking—might also increase friction and deliberation. And some regulations of social media’s “frictionless” design might be allowable under the First Amendment.
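To make the content-neutrality point concrete, here is a minimal, hypothetical sketch of how a virality-based friction rule might be expressed in code. It is loosely inspired by WhatsApp’s forwarding limit, but the names and the five-hop threshold are assumptions for illustration, not any platform’s actual implementation. The key feature is that the rule never inspects what a message says, only how far it has already traveled.

```python
# Hypothetical sketch of a content-neutral, virality-based friction rule.
# Names and the threshold are illustrative assumptions, not real platform code.

from dataclasses import dataclass

FORWARD_CHAIN_LIMIT = 5  # hops after which a message counts as "highly forwarded"

@dataclass
class Message:
    text: str
    forward_count: int  # how many times this message has already been forwarded

def allowed_recipients(message: Message, requested_chats: list[str]) -> list[str]:
    """Limit onward sharing based only on virality, never on message.text."""
    if message.forward_count >= FORWARD_CHAIN_LIMIT:
        # Highly forwarded messages may be sent to only one chat per send.
        return requested_chats[:1]
    return requested_chats

# Example: a message already forwarded six times can now reach only one chat at a time.
viral = Message(text="breaking news!!", forward_count=6)
print(allowed_recipients(viral, ["family", "work", "neighbors"]))  # -> ['family']
```

The design choice matters legally as well as technically: because the check turns entirely on the forward count, the rule applies identically to true and false, popular and unpopular messages alike.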
These regulations avoid the hard epistemological questions and constitutional hurdles of defining harmful speech. They regulate the flow of information regardless of its content instead of worrying about speech concerning a particular topic. Furthermore, they ban no speech—deliberate communication is unaffected.
There are pros and cons to every policy mentioned, with administrability challenges and constitutional issues. But to reach a substantive discussion of the realistic possibilities for regulation in the U.S. context, the conversation needs to move beyond the false binary of “censorship versus free speech.”
* Ben Horton is a rising 3L at Harvard Law School and an Online Editor for HLPR.
[1] I am not talking about the problems of surveillance presented by innovations like the Ring doorbell, or facial recognition. I am referring to the level of surveillance necessary to ensure that speech is successfully moderated on platforms—being able to tie punishments to certain accounts, for example. That overlaps with the problems of online behavioral manipulation and surveillance capitalism, which I am not addressing in this post.