Charting New Courses in Artificial Intelligence

In a classroom at Yale’s Luce Hall, students and instructors debated whether a large language model (LLM) should be able to interpret a diversity jurisdiction rule versus simply reciting the rule for a client.
An hour later and a few blocks away, another class talked through the four privacy torts and how they might be applied to deepfakes created on an artificial intelligence (AI) platform.
Each course offers a unique opportunity to deeply engage with questions about AI — how to regulate it, how it interfaces with existing law, and how to harness its potential. But their approaches are completely different.
“Law and Large Language Models” is taught jointly by Scott Shapiro ’90, the Charles F. Southmayd Professor of Law and Professor of Philosophy, and Ruzica Piskac, Professor of Computer Science.
“Artificial Intelligence Law and Policy,” meanwhile, is a seminar co-taught by Knight Professor of Constitutional Law and the First Amendment Jack M. Balkin and Chinmayi Arun, executive director of Yale Law School’s Information Society Project.
In Balkin and Arun’s class, students analyze how existing law could be applied to regulate AI; in Shapiro and Piskac’s class, students actually work with AI. Both approaches require a robust understanding of the technology.
Training LLMs in the law
At the heart of “Law and Large Language Models” is a question: how do you automate legal reasoning?
The course takes both a theoretical and practical approach to answering it. As prerequisites, students need to have studied linear algebra and know how to code in Python. Ninety-five students are registered, hailing from both the Law School and Computer Science.
Shapiro and Piskac are frequent collaborators and have won two Amazon Research Awards; the first, for their proposal “Formalizing FISA: using automated reasoning to formalize legal reasoning,” eventually became their first co-taught course, “Law, Logic, and Security,” in 2022.

The new course embraces the differences between Shapiro and Piskac’s two professional “languages” — law and computer science. In class, Shapiro and Piskac take turns teaching and explaining their disciplines’ approaches to reasoning. Students’ homework involves running experiments training LLMs on legal and nonlegal texts and comparing the results. These exercises are invariably interesting, said Shapiro, because they highlight just how different legal texts are from nonlegal texts.
“Large language models and AI are trained on the websites of the world, a tiny percentage of which are legal [websites]. But what if you could train an LLM on legal texts?” he said. In this course, students can, and do.
Shapiro said the course has three goals. The first is to create a truly interdisciplinary conversation about the applications of technology to professional activities. The second is to challenge an assumption in computer science that one good tool should be able to handle many kinds of tasks.
“We’re in the phase of ‘one tool to rule them all,’ but [students] need to see there’s another approach,” he said.
“The third goal is to get students to think about what the technology does, [what] are its strengths and weaknesses, and what are the best use cases for it? We’re trying to create law students who can speak to technologists. The hypothesis is that engineers aren’t going to build the legal tools of the future — the lawyers are, in partnership with engineers.”
Maggie Baughman ’27 is a student in Shapiro and Piskac’s class. Prior to Yale Law School, she studied international relations, computer science, and machine learning, and worked for a government-funded laboratory doing cyber research.
“Law is a mix of rules and patterns,” she said. “Different machine learning techniques are suited for different pieces of legal reasoning.”
Baughman was excited to take a course that was hands-on. Quoting Shapiro, she said, “I’m interested in the ‘technology of law’ rather than the ‘law of technology.’ This course is a venue for tool-building instead of learning about the laws governing tools.”
Orkhan Abdulkarimli ’25 LLM is also a student in Shapiro’s class. Abdulkarimli hails from Azerbaijan, where he earned his law degree and worked in cybercrime investigations.
“This course is the one I’m always talking about. It’s more than a law course, it’s technical — we do assignments on coding, but Scott is also teaching legal philosophy and how legal reasoning should be applied to AI reasoning,” said Abdulkarimli.
The interdisciplinarity of the course has been “outstanding,” he said: Shapiro’s approach to legal philosophy, combined with Piskac’s deep understanding of mathematical and technical concepts, has created a unique environment in which students are both coding and exploring sophisticated questions in the law, and where the professors themselves constantly challenge each other’s understanding.

Learning new languages
“Artificial Intelligence Law and Policy” is focused not on creating but on regulating AI, and examines intersections with issues of freedom of expression, intellectual property, and antidiscrimination law.
This class is much smaller: just 12 students and the two instructors. The course’s size was intended to make the class feel like a workshop, said Arun. Students come from backgrounds in computer science or from tech companies; while some initially saw those fields as unrelated to their legal studies, they have discovered many connections between their prior work and AI.
The course’s goal, Arun said, is to help students think about new problems raised by AI, and the types of legal regulation best suited to those problems. “More generally, this is a course about law and technology — how law and technology interact, how technology changes law, and how law affects technological development,” Balkin said.
“When I started teaching technology law in 2010, always including articles by Jack and other Yale ISP scholars in my courses, I never imagined that I would get to teach and write about cutting-edge technology with Jack at Yale Law School,” said Arun. “It has been wonderful thinking through AI law and policy questions with our brilliant students and guest lecturers. Owing to my background, I am especially interested in how AI’s global political economy affects the design of technology and law. Introducing this to the class and engaging with everyone else’s point of view has been an enriching experience for me.”
Hibah Kamal-Grayson ’25 is in Balkin and Arun’s class. She has plenty of real-world experience with AI: prior to law school she spent 14 years in the tech policy space, where she worked “hand in hand with engineers to prevent the spread of mis- and disinformation,” she said. “I came to law school to get a more robust understanding of how the law will fit into this.”

It has been a privilege to study with Balkin and Arun, she said, because they share a unique ability to root AI questions in existing legal frameworks. “In ‘policy land,’ my professional highlight was when I got to speak on AI and human rights at the United Nations headquarters in Geneva. But I kept having these nagging [questions] of ‘How do we operationalize this? Who will be responsible?’
“What’s challenging is the ‘how’ piece. Let’s say we agree on these goals. What does accountability look like? How do we design legal frameworks? That’s what this class is helping us grapple with.”
Albert Wang ’27 also has a unique pre-law school background: he focused on science — specifically, cryptology, oncology, and neurodegenerative disease — as a pre-med student before pivoting to law.
“AI has a future of technological development and it will interact with so many traditional industries,” said Wang. “I want to see how we should regulate that.”
In both “Law and Large Language Models” and “Artificial Intelligence Law and Policy,” one theme that crops up again and again is the difference between the language of computer science and the language of law.
“Most of the time, lawyers don’t understand scientists and scientists don’t understand lawyers,” said Wang.
Whether students are training AI on legal texts in Luce Hall or figuring out how to regulate AI in Baker, they’re learning to communicate across disciplines.
Shapiro said this is a critical piece of the puzzle of how law will regulate AI and how AI will be used to revolutionize the law: helping students learn to “speak” the language of technologists, and vice versa. “It’s an attempt at serious interdisciplinary education,” he said.
According to Balkin, the Law School has made a commitment to thinking proactively about AI, which is why it continues to expand its course offerings. “Our faculty knows that it is going to affect many different parts of what we do — not only individual subject matters, but also the legal system and the work of lawyering itself,” he said.
True to form, Shapiro agreed, but put it differently. “Yale Law School is doing the cutting-edge scholarship of the future,” he said.