Center Advances U.S.-China Understanding of AI Governance
As artificial intelligence (AI) continues to grow in scale and affect a broader swath of everyday life, communication and understanding between the world’s leading AI ecosystems are increasingly necessary. Official dialogue between governments remains fundamental, despite challenges. In a recent article in Project Syndicate, PTCC Fellow Karman Lucero outlines the challenges of AI dialogue between the U.S. and China, explains why the two countries have different perspectives and goals, and describes how Track II dialogues can better serve to realize substantive agreements and risk-prevention measures.
Through research and dialogues, the Paul Tsai China Center has been a leader in advancing mutual understanding between the U.S. and China in the field of AI governance. The Center has organized numerous rounds of discussions since 2019 that have brought together top legal, policy, and technical experts from academia, the private sector, think tanks, and government in the U.S., China, and other countries.
Recent examples include a dialogue at Yale Law School in the fall of 2023 that focused on large language models and the challenges posed by generative AI. It was followed by a dialogue at Oxford University in May 2024, which also included European stakeholders and focused on the governance of foundation models. A separate dialogue focuses on military applications of AI, bringing together practitioners in an environment designed to foster substantive discussion of real-world challenges.
These dialogues create space for experts to address common issues, such as how governance institutions are adapting to the challenges posed by AI, and to identify areas of AI governance that could benefit from, or even require, sustained communication and collaboration, even as other areas are bound to remain sites of intense competition. Neither American nor Chinese experts see value in tools they cannot control or understand. Accordingly, participants have agreed that it is possible and beneficial to share ideas and cooperate on improving AI safety without necessarily compromising security, trade secrets, or people’s personal information. Participants have also begun to identify ways to collaborate on developing metrics to monitor the growth of AI model capabilities and to compare best practices for implementing ethical principles and regulations related to AI governance.