AI Governance Symposium

November 12-13, 2021

Yale Law School (virtual)

The Yale Information Society Project/Wikimedia Initiative on Intermediaries and Information presents a two-day symposium on emerging issues in AI governance. We bring together a group of academics and practitioners to discuss critical issues in understanding, regulating, and developing AI technologies.

See the full conference agenda below, sign up today, and join the conversation.

Register for the AI Governance Symposium

Please share this invitation with others who may be interested. We look forward to seeing you soon on Zoom!

AGENDA

Friday, November 12, 2021

10:30 AM - 11:40 AM   AI Accountability Frameworks

AI audits and impact assessments have been proposed as a way to address some of the issues associated with AI decision-making systems. This panel will discuss some of the difficulties of developing effective accountability frameworks for AI systems. We will examine individual causes of action, the roles institutions and various stakeholders may play, and the importance of data governance in any meaningful accountability proposal.

Moderator: Ge Chen, Yale Information Society Project

Panelists: 

Elizabeth Anne Watkins, Princeton University

“Such a Dangerous New Feature": Assessing the Harms of Computer Vision as Account Verification in Gig Work”

Andrew Selbst, UCLA School of Law

“An Institutional View of Algorithmic Impact Assessments”

Jennifer King, Stanford Institute for Human-Centered AI

“Garbage In, Garbage Out - Regulating AI by Focusing on Data”

Margot Kaminski, University of Colorado School of Law

“Right to Contest AI”

11:50 AM - 12:50 PM   Impact through Litigation and Legislative Advocacy

While there has been well-documented concern about AI decision-making systems, there has not been much litigation to date. Class action lawsuits are one mechanism that allows people to seek remedies against harmful public and private practices. This procedural mechanism can also offer courts a way to understand AI better. However, building such a case in the context of AI systems faces a number of novel challenges. This panel will address some of these issues and offer ways to construct strategies that involve both litigation and legislative advocacy.

Moderator: Albert Fox Cahn, Surveillance Technology Oversight Project

Panelists:

Lindsay Nako, Impact Fund

“AI and Collective Legal Action: Policing Tomorrow’s Technologies”

Christine Webber, Cohen Milstein Sellers & Toll PLLC 

“Using Impact Litigation to Fight AI Bias in Employment & Housing: Challenges and Opportunities”

Veena Dubal, UC Hastings College of Law

“Essentially Dispossessed”

Break  

1:00 PM - 2:40 PM   Public and Private Use of AI

What are some of the ways government agencies use algorithms? What are the impediments to preventing certain uses by public agencies, especially when the systems are developed by private companies? How should researchers approach questions of ethics in AI development? The panelists in this session will answer these questions and broaden our understanding of the various legal and technical roadblocks to comprehensive regulation.

Moderator: Mehtab Khan, Yale Information Society Project

Panelists:

Rebecca Wexler, University of California Berkeley School of Law

“Ignorance-Based Secrecy: Surveillance Software, Discovery, and the Law Enforcement Privilege”

Dan Burk, UC Irvine School of Law

“Cheap Creativity and What it Will Do”

Baobao Zhang, Syracuse University

“Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers”

Amanda Levendowski, Georgetown University Law Center

“Resisting Face Surveillance”

Saturday, November 13, 2021

10:30 AM - 11:40 AM   Global Issues in AI Governance

How do we compare global approaches to AI governance? What challenges do platforms pose to AI regulation? How will technologies such as large language models shape freedom of expression on the internet? This panel will discuss these questions, with a specific focus on the global nature of the issues raised by AI technologies.

Moderator: Pauline Trouillard, Yale Information Society Project

Panelists:

Anupam Chander, Georgetown University Law Center

“Fear, Dancing, and TikTok's AI”

Shelby Grossman, Stanford University

“How Foundation Models will Shape Disinformation, and Implications for Human Detection”

Amba Kak, AI Now Institute

“Narratives on AI Across Jurisdictions”

Tajh Taylor, Wikimedia Foundation

“Ethics and shared decision-making in ML around the world”

11:50 AM - 12:50 PM   AI Infrastructures

Meaningful accountability begins with a closer look at where the data underpinning these technologies comes from. How is this data used? What are the terms and license agreements, if any? This panel will shed more light on the technical and policy decisions involved in building AI systems.

Moderator: Niklas Eder, Yale Information Society Project

Panelists:

Alex Hanna, Google Research

“Genealogies of Datasets”

Mehtab Khan, Yale Information Society Project

“Dataset Accountability”

Solon Barocas, Cornell University

“Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs”

Break  

1:10 PM - 2:20 PM   Privacy Harms and AI

AI systems implicate individual identity and dignity in ways that current legal frameworks don’t adequately protect. AI systems routinely misclassify individuals on the basis of race and gender, but it is difficult to specify the harms associated with such categorization. This panel will bring together technical and legal insights on the harms associated with AI systems, the difficulty of uncovering them, and how various stakeholders may approach remedies in response to these harms.

Moderator: Akriti Gaur, Yale Law School

Panelists:

Ari Ezra Waldman, Northeastern University School of Law

“How Tech Companies Approach AI Governance”

Morgan Klaus, University of Colorado Boulder

“How We Teach Computer Vision to See Race and Gender”

Ryan Calo, University of Washington School of Law

“Revisiting Privacy Harms”