Through a series of panels, the AI Governance Virtual Symposium seeks to promote discussion around the proper institutional and legal framework for the development and use of artificial intelligence. The series examines (potential) governance structures for artificial intelligence at various levels—from local to global. The AI Governance Virtual Symposium is co-hosted by the Georgetown Institute for Technology Law & Policy and the Yale Information Society Project. Antoine Prince Albert III, Heather Branch, Hillary Brill, Anupam Chander, April Doss, Niklas Eder, Nikolas Guggenberger, Daniel Maggen, Keshav Raghavan, Eoin Whitney, and Kyoko Yoshinaga have contributed to the programming and the materials.

AI Governance Virtual Symposium: AI's Role in Addressing and Exacerbating Climate Change (October 29, 2021)

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy

Panelists: Priya Donti, Chair of Climate Change AI
Sasha Luccioni of the Mila Institute and Co-Founder of Climate Change AI
Professor Masaru Yarime of the Hong Kong University of Science and Technology
Moderator: Jackie Snow, Journalist, New York Times, Wall Street Journal, National Geographic, and others

Summary:

In this installment of the AI Governance Virtual Symposium series, journalist Jackie Snow moderated a discussion between Priya Donti, Sasha Luccioni, and Masaru Yarime on the role of artificial intelligence in addressing and exacerbating climate change.

Priya Donti, Chair of Climate Change AI, surveyed the ways AI applications can be used to mitigate climate change and support climate action, whether by reducing greenhouse gas emissions or by adapting to the effects of climate change. These applications include information gathering, forecasting, improving operational efficiency, performing predictive maintenance, accelerating scientific experimentation, and approximating time-intensive simulations. At the same time, AI can be used in systems that directly or indirectly increase greenhouse gas emissions, such as those serving emission-intensive industries, and AI systems themselves consume substantial amounts of energy.

Sasha Luccioni of the Mila Institute, Co-Founder of Climate Change AI, discussed the “This Climate Does Not Exist” project, which uses AI to generate images simulating the effects of climate events in user-chosen locations. Research suggests that by bringing these effects closer to home, such personalized AI-generated imagery can help viewers better grasp the urgency of climate change.

Professor Masaru Yarime of the Hong Kong University of Science and Technology noted the growing role of AI systems in climate change-related technologies. After surveying these applications, including improving energy efficiency in information and communication systems, industry, transportation, and households, Professor Yarime discussed how such use cases could fit into existing AI governance regimes.


AI Governance Virtual Symposium: Classifying AI Systems and Understanding AI Accidents (September 24, 2021)

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy


Panelists: Catherine Aiken, Director of Data Science and Research at CSET
Helen Toner, Director of Strategy at CSET
Moderator: April Falcon Doss, Executive Director of the Georgetown Institute for Technology Law and Policy

Summary:

In this semester’s first session of the AI Governance Virtual Symposium, moderated by April Falcon Doss, Executive Director of the Georgetown Institute for Technology Law and Policy, Catherine Aiken and Helen Toner from Georgetown’s Center for Security and Emerging Technology (CSET) presented their work on AI classification and AI accidents.

Catherine Aiken, Director of Data Science and Research at CSET, discussed CSET’s work on developing AI classification frameworks for policymakers. Because AI is a general-purpose technology, different AI systems can carry very different implications for various regulatory frameworks. Responding to the challenge AI’s multifacetedness poses to policymaking, CSET, in collaboration with the OECD, seeks to develop a user-friendly framework for classifying AI systems uniformly along policy-relevant dimensions. To be successfully employed by policymakers, a classification needs to be readily usable and understandable, characterize the elements most relevant to policy and governance, involve minimal administrative burdens, be attuned to other AI governance frameworks, and be reliably consistent across a range of users. In line with these criteria, CSET has developed two alternative and complementary classification frameworks: one classifies AI systems according to their level of autonomy and impact, and the other looks at the context a system operates in, the kind of input it receives, the model it utilizes, and its output.

Helen Toner, Director of Strategy at CSET, discussed CSET’s work on the emerging phenomenon of AI accidents. Regulatory frameworks risk lagging behind the rapid development of AI technology. To give the policy response sufficient time to adapt, CSET has been developing tools to foresee AI-related problems before they arise. Doing so involves drawing both on past non-AI technological accidents and on known weaknesses and vulnerabilities of AI systems, such as encounters with unexpected inputs, failures to devise appropriate specifications, and difficulties with interpretability and with assuring users of a system’s accuracy. Looking at these two sources, the research has identified five factors contributing to AI accidents: competitive pressure, system complexity, the speed at which AI systems operate, untrained and distracted users, and cascading effects in systems deployed across multiple instances. In responding to these weaknesses, policymakers can focus on investing in AI safety R&D and in standards and testing capacity, working across borders to reduce accident risk, and facilitating information sharing.

AI Governance Virtual Symposium: Watching Algorithms: The Role of Civil Society (June 18, 2021)

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy


Panelists: Julia Angwin, Founder and Editor-in-Chief, The Markup
Iverna McGowan, Europe Director, Center for Democracy & Technology
David Robinson, Visiting Scientist, AI Policy and Practice Initiative, Cornell’s College of Computing and Information Science
Moderator: Byron Tau, Reporter, Wall Street Journal

Summary:

In this year’s concluding session of the AI Governance Virtual Symposium, Byron Tau of the Wall Street Journal led Julia Angwin, Iverna McGowan, and David Robinson in a discussion on the role of civil society in keeping the use of algorithms in check.

Julia Angwin, Founder and Editor-in-Chief of The Markup, described the role of journalism in bringing to light the many ways AI systems affect our lives, from hiring algorithms to those used in criminal proceedings. Despite their prevalence, algorithms are prone to introducing and amplifying biases, yet they are often subject to little scrutiny. Furthermore, algorithms can be used to circumvent accountability for failures for which comparable human decision-makers would be held responsible.

Iverna McGowan, Europe Director of the Center for Democracy & Technology, discussed the EU’s recently published draft AI regulation. The proposal tasks governmental agencies with determining the level of risk posed by various AI-based systems and regulates them according to that risk level. However, this risk-based approach should not come at the expense of a rights-based approach that puts AI’s potential effect on human rights at center stage. Civil society organizations have an essential role in ensuring that the use of AI lives up to human rights standards and the general principles of the rule of law, including transparency and fairness.

David Robinson of Cornell’s College of Computing and Information Science focused on the mechanisms that can be used in the service of AI governance. Examples of such mechanisms can be gleaned from the process of organ allocation, which has been subject over the years to various regimes of public oversight and input. The debate over algorithms can act as a moral spotlight, focusing attention on specific aspects of fairness; however, there are also more general questions about the very use of algorithms in different circumstances.

AI Governance Virtual Symposium: How Do We Regulate AI? Comparative Perspectives (May 28, 2021)

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy


Panelists:

Chinmayi Arun, Resident Fellow, Information Society Project, Yale Law School
Jessica L. Rich, Esq., Distinguished Fellow, Institute for Technology Law and Policy, Georgetown Law; Former Director of the Bureau of Consumer Protection, Federal Trade Commission
Lucilla Sioli, Director, Artificial Intelligence and Digital Industry, DG Connect, European Commission
Moderator: Anupam Chander, Professor of Law, Georgetown University

Summary:

In the third session of the AI Governance Virtual Symposium, Professor Anupam Chander moderated a panel on comparative perspectives on AI regulation featuring Chinmayi Arun of the Information Society Project, Jessica L. Rich of the Institute for Technology Law and Policy, and Lucilla Sioli of the European Commission.

Discussing AI regulation in the global South, Chinmayi Arun noted the Western norms and imagery imposed by major multinational companies on the global majority through data collection choices and model development. This tension is exacerbated by the fact that some countries in the majority world are not democracies and others have weak regulators. Arun discussed how this results in technologies criticized in the minority world being embraced by countries in the majority world. Lastly, Arun touched on the tension between states’ questioning of certain technologies and international agreements guaranteeing the free flow of data.

Lucilla Sioli discussed the EU’s proposal to use the CE product-marking framework to regulate the placement of AI products on the European market. Sioli stressed that the purpose of the proposed regulation is not to regulate AI technology as such but rather to impose rules on the use of certain AI systems in specific contexts, scaled to each system’s risk and sensitivity. In the proposed regulation, AI use cases are ranked from mundane, low-risk systems through high-risk systems to prohibited use cases. This scaled regulation, Sioli noted, can help address the concerns of businesses reluctant to use AI for fear of customer objections.

Presenting the US perspective, Jessica Rich, former Director of the FTC’s Bureau of Consumer Protection, discussed the many non-binding principles and standards addressing the use of AI and requiring transparency, truthfulness, and nondiscrimination, as well as the increasing number of legislative proposals on the subject. Although AI technology is not regulated in the US as a general matter, Rich noted that AI is incorporated into products and services that comprehensive regulatory frameworks already cover. However, further regulation is needed, Rich added, to ensure that corporations cannot escape accountability by assigning responsibility to an algorithm.

AI Governance Virtual Symposium: AI Ethics & Corporate Responsibility (May 7, 2021)

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy

Panelists:

Yoko Arisaka, General Manager at Sony’s Legal Department
Erika Brown Lee, SVP and Assistant GC at Mastercard 
Jutta Williams, Staff Product Manager at Twitter 
Moderator: Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology

Summary:

In the second session of the AI Governance Virtual Symposium, Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology, moderated a panel on AI and corporate social responsibility featuring Yoko Arisaka, General Manager at Sony’s Legal Department, Erika Brown Lee, SVP and Assistant GC at Mastercard, and Jutta Williams, Staff Product Manager at Twitter.

Discussing Sony’s AI ethics activities, Yoko Arisaka emphasized corporations’ duty to promote creativity and sustainability. Ethics in AI requires constantly exploring the meaning of humanity and what we are looking for as people. Though the challenges created by AI and machine learning cannot be resolved entirely, Arisaka underscored the importance of mitigating these risks and maximizing fairness. Doing so requires transparency about the collection of personal data, providing individuals with accessible information on how AI uses their data. Discussing the particular challenges faced by companies with global operations, Arisaka noted the variance in different people’s sense of ethics. To meet this challenge, global companies must adopt common ethics guidelines developed in dialogue with different groups: academia, industry, and the public sector.

Presenting Twitter’s Responsible Machine Learning Initiative, Jutta Williams discussed the need for public development and accountability in assessing algorithms’ fairness. Williams described the initiative as resting on four pillars: taking responsibility for algorithmic decisions, ensuring equity and fairness of outcomes, providing transparency about decisions, and enabling user agency and algorithmic choice.

For Erika Brown Lee, corporations’ social responsibility concerning AI revolves around the question of trustworthiness, as users need to be able to trust that service providers are good stewards of their personal data. Ethical entities, Brown Lee stressed, are responsible for ensuring that individuals and their rights are honored. Individuals should own their data, control and understand how it is used, benefit from its use, and have a right to keep personal data private and secure. 

AI Governance Virtual Symposium: AI for Municipalities (April 2, 2021)

Co-hosted by ISP & Georgetown Institute for Technology Law & Policy

Panelists:

Ann Cavoukian, Executive Director, Global Privacy & Security by Design Center
Albert Fox Cahn, Founder & Executive Director, The Surveillance Technology Oversight Project
Ellen Goodman, Co-founder and Director, Rutgers Inst. for Information Policy & Law
Moderator: Sheila Foster, The Scott K. Ginsburg Professor of Urban Law and Policy; Professor of Public Policy, Georgetown University

Summary:

In the first session of the AI Governance Virtual Symposium, Professor Sheila Foster moderated a panel on AI for municipalities, featuring Dr. Ann Cavoukian, Albert Fox Cahn, and Professor Ellen Goodman. Spending on smart cities, Professor Foster noted, is likely to reach more than $130 billion this year, with AI expected to play a substantial role in their operation. This development promises to reshape many facets of city life, but it also gives rise to multiple challenges, ranging from privacy and security to the lack of transparency.

Dr. Ann Cavoukian, Executive Director of the Global Privacy & Security by Design Center, discussed in her remarks the need to incorporate deidentification practices at the point of data collection in smart cities. AI is not magical, Dr. Cavoukian stressed, and transparency is essential to ensuring that privacy remains an inherent component of data gathering, as well as a safeguard against harmful and costly mistakes. Though generally optimistic about the potential benefits of smart cities, Dr. Cavoukian insisted that eliminating the hidden biases endemic to AI and preventing the misuse of collected data require the ongoing ability to “look under the hood” of the systems being used.

Albert Fox Cahn, Founder and Executive Director of The Surveillance Technology Oversight Project, offered a more cautious stance on the use of AI by municipalities, drawing attention to municipalities’ tendency to over-collect data, often beyond the collection’s original purpose. Public oversight of data collection and use, Mr. Fox Cahn warned, is hindered by the opacity of municipal procurement, compounded by additional layers of technological complexity. In this reality, it is not always clear what benefits a technology promises to produce or how effective it is in producing them. To further complicate matters, the presumed benefits and the harms these systems entail are often allocated lopsidedly, with vulnerable communities bearing the brunt of the costs while enjoying few of the gains. Even when municipalities attempt to minimize misuse by restricting the use of collected data to its original purpose, it is often difficult to prevent the data from being turned over to state and federal authorities.

Professor Ellen Goodman, Co-founder and Director of the Rutgers Institute for Information Policy and Law, focused in her discussion on the subject of trust, noting how the failure to separate relatively mundane uses of AI from sensitive use cases can undermine public trust across the board. Exacerbating this challenge is the general concern over the role of private companies in data collection and its potentially harmful effects on democratic accountability and public participation. To gain public trust, municipalities must provide shielded data storage for their residents, protected from commercial and other interests, by employing purpose limitations and privacy controls.

Summary by Daniel Maggen, ISP Visiting Fellow

AI Governance Virtual Symposium: Interview with Minister Audrey Tang on AI (March 10, 2021)

Guest Speaker: Audrey Tang, Digital Minister, Republic of China (Taiwan)
Interviewed by: Anupam Chander, Professor of Law, Georgetown University Law Center
Nikolas Guggenberger, Executive Director, Yale Information Society Project
Kyoko Yoshinaga, Non-Resident Senior Fellow, Institute for Technology Law & Policy, Georgetown University Law Center

Summary:

In the opening segment of the AI Governance Virtual Symposium, organized by the Information Society Project at Yale Law School and the Institute for Technology Law and Policy at Georgetown Law, Audrey Tang, the Digital Minister of Taiwan, discussed Taiwan’s approach to AI governance and her vision for the future of AI with Professor Anupam Chander, Nikolas Guggenberger, Kyoko Yoshinaga, and Antoine Prince Albert III.

In her inspiring remarks, Minister Tang suggested treating AI as a means of increasing democracy’s “bitrate,” using the technology to foster and facilitate interpersonal relationships. Discussing the Taiwanese approach, Minister Tang stressed the need for government investment in digital infrastructure, akin to that allocated to traditional physical infrastructure. Without such investment, Minister Tang warned, governments would be forced to rely on private resources, which may not be compatible with democratic rule.

On the future of AI governance, Minister Tang discussed the importance of technological education to increasing digital competence and access to AI. Investments in digital competence are key to fostering democratic participation by transforming citizens into active media producers. Discussing the risks of reliance on AI, Minister Tang addressed the need to promptly respond to implicit biases by introducing transparency and robust feedback mechanisms. Likening the use of AI technology to fire, Minister Tang advocated introducing AI literacy at a young age, teaching children how to successfully and safely interact with AI, and implementing safety measures in public infrastructure.

Offering an optimistic view of the future of AI, Minister Tang suggested that we replace talk of AI singularity with the language of “plurality,” using AI to expand the scope of social values to include future generations and the environment. Doing so requires international cooperation in developing norms that promote fruitful AI governance and increased opportunities for future generations.