What are the appropriate remedies when Internet services, such as Twitter, Facebook, and YouTube, publish anti-social user content? Although regulators and analysts have pondered what constitutes offending content and who should decide that question, the question of appropriate remedies remains underdeveloped. This important question is the subject of an excellent article, Content Moderation Remedies, by Eric Goldman. Notwithstanding that the dominant strategy has been removal of the offending content, Goldman urges a more nuanced approach. He compiles useful and comprehensive examples of alternative remedies short of removal that Internet services already have employed and develops a helpful framework for determining when such remedies may be superior.
Content Moderation Remedies focuses on Internet services’ decisions on remedies, not on legal regulation. The article points out that although content may be illegal, Internet services most often are free from liability under Federal law.1 Services therefore enjoy some discretion in formulating appropriate remedies for offending content. Even if content is legal, services have discretion under their own house rules on how to deal with offensive material.
With this background, Goldman sees a clash of remedial policies, namely restricting anti-social online content, on the one hand, and avoiding the specter of censorship on the other. He argues that to date, the predominant remedy has been removal of offending content, leaving little room to explore the range of remedies that might better harmonize the clashing policies. The article then develops a framework for determining what alternative remedies might be more appropriate under particular circumstances. The goal is to employ nuanced solutions that are better than a one-size-fits-all remedy of removal of the content.
Notwithstanding the predominance of removal, the article offers many examples (36 to be exact) of remedies short of removal that Internet services have employed. Goldman helpfully sorts these promising remedies into five categories:
- actions directed at the content, such as editing or setting forth warnings about the content;
- actions directed at the online account, such as suspending the account or calling attention to the poster’s poor behavior;
- actions to reduce the visibility of the content, such as reducing internal promotion or delisting the content from external search indexes;
- actions directed at financial consequences, such as terminating or suspending future earnings; and
- a catch-all category of assorted remedies that do not fit into the previous categories, such as educating users about the illicit content or reporting the issue to law enforcement.
Goldman argues that in many circumstances these remedies are superior to removal, in no small part because removal may cause a litany of problems. For example, removal extinguishes evidence of the problem (such as Twitter’s removal of Trump’s tweets). Removal also orphans any comments on the offending content if the comments are not removed and breaks links to other content. Most harmfully, removal impinges on free expression and contributes to a negative view of Internet services. Goldman urges flexibility so that the punishment fits the “crime” and offenders can be “rehabilitated” rather than “banished.”
If a smorgasbord of potential remedies is to be successful, the challenge, of course, is to set forth a set of principles to guide the choice of remedies in various contexts. Goldman develops several sensible criteria, among them: whether the content in fact violates legal regulation or the services’ internal rules, the severity of the violation, how a remedy will impact third parties, and the possibility of rehabilitating the content provider.
Notwithstanding the article’s impressive effort to establish guidance for remedy selection, readers of Content Moderation Remedies who favor certainty in the law may be concerned that the costs of remedial diversity outweigh its benefits. For example, Goldman proposes a scale from 1 to 100 for rating the severity of a violation, which in turn informs selection of the appropriate remedy. Situating particular content on such a scale and matching it to a remedy will be no small task in cases that do not demand removal.
The article’s opening example of problematic content, some might argue, is unfortunate if Goldman’s goal is to persuade readers of the need for remedial diversity:
In May 2019, a President Trump supporter published a video of House Speaker Nancy Pelosi, which slowed down authentic footage without lowering the voice pitch, conveying the inauthentic impression that Speaker Pelosi had delivered her remarks while intoxicated. The video quickly became a viral sensation and spread rapidly across the Internet… The video probably didn’t violate the law, and even if it did, the social media services likely were not legally liable for it. As a result, the social media services had the legal freedom to moderate the video as they saw fit.
Although Facebook, Twitter, and YouTube selected different remedies for this content, Goldman may face an uphill battle persuading many readers that anything other than removal is appropriate for fraudulent content such as this, which could have major political implications. On the other hand, Goldman has a good argument that his many examples of responses short of removal in other contexts are the best evidence that Internet services should not automatically default to an all-or-nothing approach.
Goldman does not ignore the possibility that the best approach to the problem of offending content may be website design that would deter such content in the first place. In fact, Content Moderation Remedies treats all aspects of this important problem in a thoughtful and impressive way and is must reading for anyone interested in governance of the Internet.