AI and the First Amendment: A Q&A with Jack Balkin
Jack M. Balkin is Knight Professor of Constitutional Law and the First Amendment at Yale Law School. Balkin is the founder and director of Yale's Information Society Project, an interdisciplinary center that studies law and new information technologies. He is the author of more than 140 articles in different fields, including constitutional theory, internet law, freedom of speech, reproductive rights, jurisprudence, and the theory of ideology. In this Q&A, he discusses artificial intelligence (AI) and the legal questions likely to arise as the technology advances.
How does the First Amendment apply to AI-generated expression? Do artificial intelligence programs have First Amendment rights? Is the content AI generates protected by the First Amendment?
The programs themselves don’t have First Amendment rights. Nor does it make sense to treat them as artificial persons like corporations or associations. The law gives corporations and associations First Amendment rights because they are groups of human beings who work together on common projects. It’s convenient to use the fiction of legal personhood to assign rights to the collective project. You don’t need to do this in the case of generative AI. Nevertheless, people and companies that use AI to produce content that they claim as their own have First Amendment rights as speakers. And people have rights to read or listen to content produced by AI, even though AI itself has no First Amendment rights.
Conversely, when speech is otherwise unprotected, people can’t avoid liability by substituting AI speech for human speech. A health provider that uses AI to give medical advice to patients is still subject to malpractice liability. Interesting problems arise when a company hosts an AI program that generates responses to prompts by end users, and the prompts cause the program to generate speech that is both unprotected and harmful. For example, generative AI programs sometimes “hallucinate”: they produce false speech upon prompting. Currently it’s pretty easy to generate an AI response that defames a person, for example. The courts will have to decide where responsibility lies — with the company hosting the AI or the prompter — and what degree of intention is required to impose liability, because the AI program itself lacks human intentions.
How will works created by generative AI be treated under copyright law?
Artificial intelligence raises a host of new problems. One question is the extent to which works produced by AI are copyrightable and who, if anyone, is the author. If I use AI as a tool to create first drafts of works that I edit or modify, the law will probably treat me as the author of the work for purposes of copyright law. But what if I simply write a prompt for the AI program and it spits out a completed work? Generative AI leads to a world in which the “author” is the prompt engineer rather than the person we ordinarily think of as the artist or composer.
A second issue concerns training data. Is it fair use when companies train their AI programs on mountains of copyrighted content? AI companies might argue that it is fair use by analogy to the Google Books case, Authors Guild, Inc. v. Google, Inc., where Google made copies of copyrighted books in order to enable text search that reproduced only small snippets of text.
A third problem concerns the texts, poems, musical compositions, and art produced by AI. Many of these AI-generated works will have similarities to copyrighted works. Under what conditions should the law treat these as infringing works or, alternatively, as fair uses?
How should governments regulate AI? Do we need a new government agency?
Regulation of AI seems inevitable, especially since it’s already occurring in Europe. The technologies are developing rapidly and becoming more powerful every day. The issues will be wide-ranging, not just the small number I’ve mentioned above. It’s unlikely that courts will be able to deal with the mounting problems without Congress developing a statutory framework, and it’s unlikely that Congress will be able to adequately address all the issues without delegation to an administrative agency with the relevant expertise.
Above all, it’s important to recognize that AI runs on the collection and analysis of enormous amounts of data. Data is the source of both AI’s power and many of its potential dangers. Unlike Europe, the U.S. still lacks a comprehensive digital privacy statute regulating the collection, use, and sale of data collected from human beings. We should have begun solving the problems of the digital age years ago. It’s time to catch up.