Section 230’s Co-Authors Describe Its Limits in Tech Project SCOTUS Filing


The brief, filed on behalf of co-authors of Section 230, explains that the law was never intended as a “get out of jail free” card.

Students enrolled in the Tech Accountability & Competition Project (TAC), a division of the Information Society Project’s Media Freedom & Information Access Clinic, helped draft an amicus brief filed Jan. 19 in the U.S. Supreme Court on behalf of former Representative Chris Cox (R-CA) and Senator Ron Wyden (D-OR). Cox and Wyden were both members of the House of Representatives when they drafted the law known as “Section 230,” which they shepherded to near-unanimous passage and enactment in 1996. Section 230 generally provides immunity to internet platforms against claims based on third-party content appearing on their sites.

The case before the Court, Gonzalez v. Google, asks whether Section 230 immunity is available to Google, which, through its YouTube subsidiary, is alleged to have hosted and recommended ISIS recruitment videos in violation of U.S. law, thereby aiding various ISIS activities, including an attack on a Paris café in which a young American woman was killed. Against this tragic backdrop, the brief explains the circumstances that prompted then-Representatives Cox and Wyden to draft the law, the plain meaning of its key provisions, and why, in this particular case, Google is entitled to immunity.

The brief also makes clear, however, that immunity is available only when the strict requirements that Section 230 imposes are met. These include: (1) that the internet platform did not contribute to the creation or development of the third-party content (YouTube played no such role and even had in place policies and systems intended to keep terrorist material off the platform); and (2) that the claim from which the platform seeks immunity would impose liability for publishing the material, as opposed to some other platform conduct. Here, the brief explains, Google meets these requirements because it played no role in the videos’ creation and the claims in the lawsuit would hold it liable for publishing them.

The conclusion that Google is entitled to immunity in this particular case does not end the public policy debate around content moderation and recommending systems, TAC noted. As the brief explains in its third footnote, “[s]ome algorithmic recommending systems are alleged to be designed and trained to use information that is different in kind than the information at issue in this case, to generate recommendations that are different in kind than those at issue in this case, and/or to cause harms not at issue in this case.” Students, faculty, and affiliated researchers throughout the Yale University community continue to study these issues, especially the potential dangers of algorithmic recommending systems that are often designed to produce results that may not serve individuals’ or the public’s interests.

For example, Professor Fiona Scott Morton, the Theodore Nierenberg Professor of Economics at the Yale School of Management and founder of the Thurman Arnold Project, recently co-authored an article exploring how antitrust law should apply when the products at issue are addictive and dangerous, as many claim is the case with social media. Scott Morton observes that the brief addresses neither the full range of algorithmic recommending systems nor their potential harms:

“There are algorithmic recommending systems that we know contribute to the spread of disinformation, negatively affect public health, take advantage of individual vulnerabilities, and serve us with material that angers or agitates us not for our benefit, but to manipulate us into remaining engaged so that the platform can monetize our time and attention, even if that generates negative personal consequences. The brief leaves open the question whether there could be immunity under these other fact patterns. But there is no doubt that many algorithmic recommending systems cause harms that are real. The harms, moreover, are not an inevitable by-product of technology, but result from profit-maximizing decisions made by firms that historically have under-invested in protecting their own users from harms caused by their own algorithms, despite earning tens of billions in profits year after year.”

Gene Kimmelman, Senior Policy Fellow at Yale University’s Tobin Center for Economic Policy, recent Interim Director and current Fellow at the Thurman Arnold Project, and longtime consumer advocate, agrees, offering his view that a decision in this case affirming Google’s entitlement to immunity would underscore the need for Congress to re-examine and possibly update or replace Section 230:

“I believe that if Section 230 provides immunity in a case involving the recommending of terrorist recruitment videos that contributed to the violent death of a young American, as the student brief powerfully explains to be the outcome the law requires here, that tells you that it is time to revisit the 1996 law to ensure that the language it uses and the policy balance it strikes remain well-suited to the modern internet’s increasingly sophisticated algorithmic recommending systems. These developments, not addressed in the student brief because they are not at issue in the case, do more than recommend; they also manipulate and deceive, at scale. As Congress reviews Section 230, it should make clear that recommendation systems that manipulate or deceive are not immunized from liability for the harms they cause.”

The students from the Tech Accountability & Competition Project were supervised by Jack Balkin, Knight Professor of Constitutional Law and the First Amendment, and David Dinielli, Visiting Clinical Lecturer and the supervisor of TAC. Balkin and Dinielli appear as counsel on the brief along with Don Verrilli, the former Solicitor General of the United States, and others at Munger, Tolles & Olson LLP.

As part of the Media Freedom & Information Access Clinic at Yale Law School, the TAC Project aims to promote and enforce comprehensive legal regimes that require technology companies, digital platforms, and other public and private actors exerting power in the digital space to reduce the harms arising from their business models and practices and to respect the rights of all people affected by their products and services.

The Media Freedom & Information Access Clinic is dedicated to increasing government transparency, defending the essential work of news gatherers, and protecting freedom of expression through impact litigation, direct legal services, and policy work.