Through a series of panels, the AI Governance Virtual Symposium seeks to promote discussion around the proper institutional and legal framework for the development and use of artificial intelligence. The series examines (potential) governance structures for artificial intelligence at various levels—from local to global. The AI Governance Virtual Symposium is co-hosted by the Georgetown Institute for Technology Law & Policy and the Yale Information Society Project. Antoine Prince Albert III, Heather Branch, Hillary Brill, Anupam Chander, April Doss, Niklas Eder, Nikolas Guggenberger, Daniel Maggen, Keshav Raghavan, Eoin Whitney, and Kyoko Yoshinaga have contributed to the programming and the materials.

AI Governance Virtual Symposium: AI for Municipalities

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy

Panelists:

Ann Cavoukian, Executive Director, Global Privacy & Security by Design Center
Albert Fox Cahn, Founder & Executive Director, The Surveillance Technology Oversight Project
Ellen Goodman, Co-founder and Director, Rutgers Institute for Information Policy & Law
Moderator: Sheila Foster, The Scott K. Ginsburg Professor of Urban Law and Policy; Professor of Public Policy, Georgetown University

Summary:

In the first session of the AI Governance Virtual Symposium, Professor Sheila Foster moderated a panel on AI for municipalities, featuring Dr. Ann Cavoukian, Albert Fox Cahn, and Professor Ellen Goodman. Spending on smart cities, Professor Foster noted, is likely to reach more than $130 billion this year, with AI expected to play a substantial role in their operation. This development promises to reshape many facets of city life, but it also gives rise to multiple challenges, ranging from privacy and security to a lack of transparency.

In her remarks, Dr. Ann Cavoukian, Executive Director of the Global Privacy & Security by Design Center, discussed the need to incorporate deidentification practices at the source of data collection in smart cities. AI is not magical, Dr. Cavoukian stressed, and transparency is essential to ensuring that privacy remains an inherent component of data gathering, as well as a safeguard against harmful and costly mistakes. Though generally optimistic about the potential benefits of smart cities, Dr. Cavoukian insisted that eliminating the hidden biases endemic to AI and preventing the misuse of collected data require the ongoing ability to “look under the hood” of the systems being used.

Albert Fox Cahn, Founder and Executive Director of The Surveillance Technology Oversight Project, offered a more cautious stance on the use of AI by municipalities, drawing attention to municipalities’ tendency to over-collect data, often beyond the collection’s original purpose. Public oversight of data collection and use, Mr. Fox Cahn warned, is hindered by the opacity of municipal procurement, compounded by additional layers of technological complexity. As a result, it is not always clear precisely what benefits the technology promises to produce or how effective it is in delivering them. To further complicate matters, the presumed benefits and the harms these systems entail are often distributed unevenly, with vulnerable communities bearing the brunt of the costs while enjoying few of the gains. Even when municipalities attempt to minimize misuse by restricting the use of collected data to its original purpose, it is often difficult to prevent the data from being turned over to state and federal authorities.

Professor Ellen Goodman, Co-founder and Director of the Rutgers Institute for Information Policy and Law, focused her discussion on trust, noting how the failure to separate relatively mundane uses of AI from sensitive use cases can undermine public trust across the board. Exacerbating this challenge is the general concern over the role of private companies in data collection and its potentially harmful effects on democratic accountability and public participation. To gain public trust, municipalities must offer their residents shielded data storage, protected from commercial and other interests through purpose limitations and privacy controls.

Summary by Daniel Maggen, ISP Visiting Fellow

AI Governance Virtual Symposium: Audrey Tang on AI

Panelists:

Audrey Tang, Minister, Republic of China (Taiwan)
Interviewed by Anupam Chander, Professor of Law, Georgetown University Law Center
Nikolas Guggenberger, Executive Director, Yale Information Society Project
Kyoko Yoshinaga, Non-Resident Senior Fellow, Institute for Technology Law & Policy, Georgetown University Law Center

Summary:

In the opening segment of the AI Governance Virtual Symposium, organized by the Information Society Project at Yale Law School and the Institute for Technology Law and Policy at Georgetown Law, Audrey Tang, the Digital Minister of Taiwan, discussed Taiwan’s approach to AI governance and her vision for the future of AI with Professor Anupam Chander, Nikolas Guggenberger, Kyoko Yoshinaga, and Antoine Prince Albert III.

In her inspiring remarks, Minister Tang suggested treating AI as a means of increasing democracy’s “bitrate,” using the technology to foster and facilitate interpersonal relationships. Discussing the Taiwanese approach, Minister Tang stressed the need for government investment in digital infrastructure, akin to that allocated to traditional tangible infrastructure. Without such investment, Minister Tang warned, governments would be forced to rely on private resources, which may not be compatible with democratic rule.

On the future of AI governance, Minister Tang discussed the importance of technological education to increasing digital competence and access to AI. Investments in digital competence are key to fostering democratic participation by transforming citizens into active media producers. Discussing the risks of reliance on AI, Minister Tang addressed the need to promptly respond to implicit biases by introducing transparency and robust feedback mechanisms. Likening the use of AI technology to fire, Minister Tang advocated introducing AI literacy at a young age, teaching children how to successfully and safely interact with AI, and implementing safety measures in public infrastructure.

Offering an optimistic view of the future of AI, Minister Tang suggested that we replace talk of AI singularity with the language of “plurality,” using AI to expand the scope of social values to include future generations and the environment. Doing so requires international cooperation in developing norms that would promote fruitful AI governance and increased opportunities for future generations.