In recent years, algorithmic harms, a host of injuries to fundamental civil rights, have become a pressing problem for contemporary democracy. As machine-based systems promise to optimize our lives with greater efficiency, they present a critical new array of civil rights concerns. For instance, a facial recognition system designed to improve crime detection wrongly flagged innocent customers as shoplifters; healthcare software designed to identify high-risk patients denied medical treatment to Black individuals in poor health; and a social media algorithm intended to boost engagement exacerbated addictive behavior and mental illness in teenagers. At the core of these problems is the varied, dynamic, and opaque nature of algorithmic harms.
To confront these new challenges, policymakers worldwide are increasingly adopting regulatory measures directed at algorithmic harms. In December 2023, the European Union passed a landmark comprehensive AI Act, governing AI applications through a risk-based approach. Nearly concurrently, the U.S. White House issued an Executive Order launching initiatives to protect citizens from algorithmic harms. Yet while algorithmic harm concerns are widely recognized in policy agendas, policymakers still struggle to articulate their nature and scope, impeding effective legislation and meaningful enforcement.
This Article constructs a taxonomy of algorithmic harm that identifies four distinct interests at stake: privacy erosion, autonomy circumvention, equality diminishment, and safety risks. The taxonomy is informed by case studies of three AI harm-mitigation frameworks; this comparative analysis examines the strengths and limitations of each framework, arguing for a shift toward alternative, harm-centered solutions. In doing so, this Article proposes a set of refined harm-based rules, modeled on recent proposals but modified to reflect a more comprehensive understanding of algorithmic harms, aligning the law more closely with the actual problems it intends to regulate.
Sylvia Lu is a faculty fellow at the University of Michigan Law School. Her teaching and research interests lie in the interplay of law, innovation, and society. Lu writes about data privacy law, artificial intelligence and law, and comparative law, with a particular focus on the United States, the European Union, and China. She holds a Doctor of the Science of Law and a Master of Laws degree from the University of California, Berkeley, and earned a Master of Laws from National Tsing Hua University in Taiwan.
Information Society Project