AI has disrupted the foundations of privacy law: it amplifies consumer vulnerabilities, thrives on inferences that reach beyond disclosed data, and reshapes privacy as a relational, power-driven issue rather than one of individual choices. Current frameworks, rooted in the Fair Information Practice Principles (FIPPs), fail to address these dynamics because they frame the issue as one of individual control rather than harm reduction, resting on the mistaken assumption that individuals can anticipate AI harms. A meaningful framework for AI regulation requires harm minimization and an understanding of harm that foregrounds exploitation. This shift reframes accountability around the societal impacts of data practices. By understanding privacy as the prevention of exploitation through personal data, one can overcome the limits of the individual-consumer model, close persistent accountability gaps, and escape the false trade-off between regulation and innovation.
Ignacio Cofone is Professor of Law and Regulation of AI at the University of Oxford, Faculty of Law and Institute for Ethics in AI, and a Governing Body Fellow of Reuben College. His research explores how the law should adjust to AI-driven social and economic changes, with a focus on accounting for nonmaterial data harms. His book, “The Privacy Fallacy: Harm and Power in the Information Economy” (2023), examines how privacy and data protection law can meet this challenge by restructuring both around duties of non-exploitation.
Sponsoring Organization(s)
Information Society Project