Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly put it, consent works “moral magic”: it transforms acts that would otherwise be illegal and immoral into lawful and legitimate activities. In the context of privacy, consent authorizes and legitimizes a wide range of data collection and processing.
Our data cannot be protected at scale. Algorithmic systems have long made decisions that their programmers will never understand. It is no longer rare to encounter failures that cannot be addressed even after they are detected: something breaks, in other words, but no one can fix it. Indeed, for as long as the digital world has existed, it has been defined by unmitigable risks. And yet more and more software and AI systems are adopted every day. This leads to a fundamental question: why do we so easily, and so collectively, ignore the risks that plague our digital environment?
In a world of conflicting interests and finite resources, agents constantly face decisions that require compromises among constraints, opportunities, and costs. This is the art of making trade-offs, which forms the crux of rational decision-making. In this talk, Professor Floridi analyzes the logic of trade-offs, exploring the fundamental principles that underlie the process of balancing competing interests and objectives. He will argue that such a logic is part of the logic of requirements and, even more broadly, of a logic of design.
This talk starts with a discussion of Platforms & Cultural Production (Polity, 2021), followed by a review of recent scholarship on global perspectives on platformization and on platform evolution. The specific focus will be on the institutional implications of platformization across the cultural industries, identifying key changes in markets, infrastructures, and governance.