Through a series of panels, the AI Governance Virtual Symposium seeks to promote discussion around the proper institutional and legal framework for the development and use of artificial intelligence. The series examines (potential) governance structures for artificial intelligence at various levels—from local to global. The AI Governance Virtual Symposium is co-hosted by the Georgetown Institute for Technology Law & Policy and the Yale Information Society Project. Antoine Prince Albert III, Heather Branch, Hillary Brill, Anupam Chander, April Doss, Niklas Eder, Nikolas Guggenberger, Daniel Maggen, Keshav Raghavan, Eoin Whitney, and Kyoko Yoshinaga have contributed to the programming and the materials.

AI Governance Virtual Symposium: How Do We Regulate AI? Comparative Perspectives

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy


Panelists:

Chinmayi Arun, Resident Fellow, Information Society Project, Yale Law School
Jessica L. Rich, Esq., Distinguished Fellow, Institute for Technology Law and Policy, Georgetown Law; Former Director of the Bureau of Consumer Protection, Federal Trade Commission
Lucilla Sioli, Director, Artificial Intelligence and Digital Industry, DG CONNECT, European Commission
Moderator: Anupam Chander, Professor of Law, Georgetown University

Summary:

In the third session of the AI Governance Virtual Symposium, Professor Anupam Chander moderated a panel on comparative perspectives on AI regulation featuring Chinmayi Arun of the Information Society Project, Jessica L. Rich of the Institute for Technology Law and Policy, and Lucilla Sioli of the European Commission.

Discussing AI regulation in the global South, Chinmayi Arun noted the Western norms and imagery imposed by major multinational companies on the global majority through data collection choices and model development. This tension is exacerbated by the fact that some countries in the majority world are not democracies and others have weak regulators. Arun discussed how this results in technologies criticized in the minority world being embraced by countries in the majority world. Lastly, Arun touched on the tension between states’ questioning of certain technologies and international agreements guaranteeing the free flow of data.

Lucilla Sioli discussed the EU’s proposed use of the CE product marking framework to regulate the placement of AI products on the European market. Sioli stressed that the purpose of the proposed regulation is not to regulate AI technology as such but rather to impose rules on the use of certain AI systems in specific contexts, scaled according to each system’s risk and sensitivity. Under the proposed regulation, AI use cases range from mundane, low-risk systems through high-risk applications to prohibited use cases. This scaled approach, Sioli noted, can help address the concerns of businesses reluctant to adopt AI out of fear of customer objection.

Presenting the perspective from the US, Jessica Rich, former Director at the FTC, discussed the many non-binding principles and standards addressing the use of AI and requiring transparency, truthfulness, and nondiscrimination, as well as the increasing number of legislative proposals on the subject. Although AI technology is not regulated in the US as a general matter, Rich noted that, as a process, AI is incorporated into products and services covered by comprehensive regulatory frameworks. However, further regulation is needed, Rich added, to ensure that corporations cannot escape accountability by assigning responsibility to an algorithm.

AI Governance Virtual Symposium: AI Ethics & Corporate Responsibility

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy

Panelists:

Yoko Arisaka, General Manager at Sony’s Legal Department
Erika Brown Lee, SVP and Assistant GC at Mastercard 
Jutta Williams, Staff Product Manager at Twitter 
Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology

Summary:

In the second session of the AI Governance Virtual Symposium, Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology, moderated a panel on AI and corporate social responsibility featuring Yoko Arisaka, General Manager at Sony’s Legal Department, Erika Brown Lee, SVP and Assistant GC at Mastercard, and Jutta Williams, Staff Product Manager at Twitter.

Discussing Sony’s ethics activities, Yoko Arisaka emphasized corporations’ duty to promote creativity and sustainability. Ethics in AI, she argued, requires constantly exploring the meaning of humanity and what we are looking for as people. Though the challenges created by AI and machine learning cannot be resolved entirely, Arisaka underscored the importance of mitigating these risks and maximizing fairness. This requires transparency about the collection of personal data, giving individuals accessible information on how AI uses their data. Discussing the particular challenges faced by companies with global operations, Arisaka noted the variance in different people’s sense of ethics. To meet this challenge, global companies must develop standard ethics guidelines in dialogue with different groups, academia, industry, and the public sector.

Presenting Twitter’s Responsible Machine Learning Initiative, Jutta Williams discussed the need for public development and accountability in assessing the algorithm’s fairness. Williams discussed this initiative as resting on four pillars: taking responsibility for algorithmic decisions, equity and fairness of outcomes, transparency about decisions, and enabling user agency and algorithmic choice.

For Erika Brown Lee, corporations’ social responsibility concerning AI revolves around the question of trustworthiness, as users need to be able to trust that service providers are good stewards of their personal data. Ethical entities, Brown Lee stressed, are responsible for ensuring that individuals and their rights are honored. Individuals should own their data, control and understand how it is used, benefit from its use, and have a right to keep personal data private and secure. 

AI Governance Virtual Symposium: AI for Municipalities

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy

Panelists:

Ann Cavoukian, Executive Director, Global Privacy & Security by Design Center
Albert Fox Cahn, Founder & Executive Director, The Surveillance Technology Oversight Project
Ellen Goodman, Co-founder and Director, Rutgers Inst. for Information Policy & Law
Moderator: Sheila Foster, The Scott K. Ginsburg Professor of Urban Law and Policy; Professor of Public Policy, Georgetown University

Summary:

In the first session of the AI Governance Virtual Symposium, Professor Sheila Foster moderated a panel on AI for municipalities, featuring Dr. Ann Cavoukian, Albert Fox Cahn, and Professor Ellen Goodman. Spending on smart cities, Professor Foster noted, is likely to reach more than $130 billion this year, with AI expected to play a substantial role in their operation. This development promises to reshape many facets of city life, but it also gives rise to multiple challenges, ranging from privacy and security to the lack of transparency.

Dr. Ann Cavoukian, Executive Director of the Global Privacy & Security by Design Center, discussed in her remarks the need to incorporate deidentification practices at the source of data collection in smart cities. AI is not magical, Dr. Cavoukian stressed, and transparency is essential to ensuring that privacy remains an inherent component of data gathering, as well as a safeguard against harmful and costly mistakes. Though generally optimistic about the potential benefits of smart cities, Dr. Cavoukian insisted that eliminating the hidden biases endemic to AI and preventing the misuse of collected data require the ongoing ability to “look under the hood” of the systems being used.

Albert Fox Cahn, Founder and Executive Director of The Surveillance Technology Oversight Project, offered a more cautious stance on the use of AI by municipalities, drawing attention to municipalities’ tendency to over-collect data, often beyond the collection’s original purpose. Public oversight of data collection and use, Mr. Fox Cahn warned, is hindered by the opacity of municipal procurement, compounded by additional layers of technological complexity. As a result, it is often unclear precisely what benefits the technology promises and how effectively it delivers them. To further complicate the matter, the allocation of these systems’ presumed benefits and harms is often lopsided, with vulnerable communities bearing the brunt of the costs while enjoying few of the gains. Even when municipalities attempt to minimize misuse by restricting the use of collected data to its original purpose, it is often difficult to prevent the data from being turned over to state and federal authorities.

Professor Ellen Goodman, Co-founder and Director of the Rutgers Institute for Information Policy and Law, focused in her discussion on the subject of trust, noting how the failure to separate relatively mundane uses of AI from sensitive use cases can undermine public trust across the board. Exacerbating this challenge is the general concern over the role of private companies in data collection and its potentially harmful effects on democratic accountability and public participation. To gain public trust, municipalities must provide shielded data storage for their residents, protected from commercial and other interests, by employing purpose limitations and privacy controls.

Summary by Daniel Maggen, ISP Visiting Fellow

AI Governance Virtual Symposium: Interview with Minister Audrey Tang on AI

Panelists:

Audrey Tang, Minister, Republic of China (Taiwan)
Interviewed by Anupam Chander, Professor of Law, Georgetown University Law Center
Nikolas Guggenberger, Executive Director, Yale Information Society Project
Kyoko Yoshinaga, Non-Resident Senior Fellow, Institute for Technology Law & Policy, Georgetown University Law Center

Summary:

In the opening segment of the AI Governance Virtual Symposium, organized by the Information Society Project at Yale Law School and the Institute for Technology Law and Policy at Georgetown Law, Audrey Tang, the Digital Minister of Taiwan, discussed Taiwan’s approach to AI governance and her vision for the future of AI with Professor Anupam Chander, Nikolas Guggenberger, Kyoko Yoshinaga, and Antoine Prince Albert III.

In her inspiring remarks, Minister Tang suggested treating AI as a means of increasing democracy’s “bitrate,” using the technology to foster and facilitate interpersonal relationships. Discussing the Taiwanese approach, Minister Tang stressed the need for government investment in digital infrastructure, akin to that allocated to traditional tangible infrastructure. Without such investment, Minister Tang warned, government would be forced to rely on private resources, which may not be compatible with democratic rule.

On the future of AI governance, Minister Tang discussed the importance of technological education to increasing digital competence and access to AI. Investments in digital competence are key to fostering democratic participation by transforming citizens into active media producers. Discussing the risks of reliance on AI, Minister Tang addressed the need to promptly respond to implicit biases by introducing transparency and robust feedback mechanisms. Likening the use of AI technology to fire, Minister Tang advocated introducing AI literacy at a young age, teaching children how to successfully and safely interact with AI, and implementing safety measures in public infrastructure.

Offering an optimistic view of the future of AI, Minister Tang suggested that we replace talk of AI singularity with the language of “plurality,” using AI to expand the scope of social values to include future generations and the environment. Doing so requires international cooperation in developing norms that promote fruitful AI governance and increased opportunities for future generations.