The speed of technological transformation has intrigued Tobey since his time at the Law School. As a student in the early 2000s, he witnessed the transition from paper notetaking to omnipresent laptops — a shift that motivated him to develop legal education software. He eventually traded technology entrepreneurship for the practice of technology law and has become a sought-after expert for companies seeking to mitigate the risks of new innovations.
Contending with the thorny ethical and legal questions posed by AI sits at the heart of Tobey’s work today. He’s involved in the first state attorney general settlement over generative AI accuracy — but even figuring out how to quantify accuracy for something that is “as open-ended and probabilistic as generative AI is an incredibly difficult legal and scientific and philosophical challenge,” he said.
Beyond litigation, Tobey’s team, which includes both lawyers and data scientists, is also developing new ways to test legal compliance in AI models. They’ve adapted the cybersecurity practice of “red teaming” — essentially, simulated hacking to uncover a system’s vulnerabilities — to identify legal risks in the responses provided to users. “It’s been a very successful endeavor,” Tobey said, and has helped companies build important new safeguards into their models. The issues at play have unfolded on a global scale, and recently, Tobey was named senior consultant to the United Nations on Parliamentary Engagement on Artificial Intelligence, helping countries address AI.
Tobey says his years at Yale Law School provided an essential foundation for his current work — though he didn’t know it at the time. “To be an AI lawyer, you really have to be a philosopher, because you can’t wrestle with the issues that AI raises without very quickly getting into pretty heavy questions of what is truth? How do we measure it? What is a product versus a service? What does it mean to have intelligence or judgment?” he said. “So the philosophical bent of Yale Law School prepared me to help extend the law into an area where some fundamental assumptions have to be examined and can’t be avoided.”
Cwik, too, is fascinated by the range of skill sets and approaches that technology law demands. Now an arbitrator and mediator with JAMS, Cwik has spent much of her career focused on legal issues related to science and technology. That’s required a lot of on-the-job learning: One of her earliest cases, for example, involved understanding the nuances of sophisticated immunology tests.
Cwik traces her interest in science and technology back to her childhood in Pittsburgh, Pennsylvania, where she attended a public elementary school that had established a partnership with the University of Pittsburgh and provided her with early access to computers. “I remember one of my proud moments in elementary school was when I was able to beat the computer at tic-tac-toe,” she said. This early exposure to computers, she believes, helped her “not be afraid to experiment and see what they’re capable of.”
Cwik deepened her knowledge of artificial intelligence in 2018, during a yearlong fellowship at Stanford, where she took a class on AI and the law taught by Mariano-Florentino Cuéllar ’97, former associate justice of the Supreme Court of California. “It was the first time this class was offered, and it was fascinating to hear his perspective,” she recalled. She has continued studying and thinking about AI ever since, including serving on the Planning Committee for the American Bar Association (ABA) Artificial Intelligence and Robotics National Institute since its inception in 2020 and as vice-chair of the ABA Task Force on Law and AI.