Overview

The Digital Future Whitepaper Series, launched in 2020, is a venue for leading global thinkers to question the impact of digital technologies on law and society. The series aims to provide academics, researchers, and practitioners with a forum to describe novel challenges of data and regulation, to confront core assumptions about law and technology, and to propose new ways to align legal and ethical frameworks with the problems of the digital world.

Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission

by Rebecca Kelly Slaughter


The proliferation of artificial intelligence and algorithmic decision-making has helped shape myriad aspects of our society: from facial recognition to deepfake technology to criminal justice and health care, their applications are seemingly endless. Across these contexts, the story of applied algorithmic decision-making is one of both promise and peril. Given the novelty, scale, and opacity involved in many applications of these technologies, the stakes are often incredibly high.

As an FTC Commissioner, I aim to promote economic and social justice through consumer protection and competition law and policy. In recent years, algorithmic decision-making has produced biased, discriminatory, and otherwise problematic outcomes in some of the most important areas of the American economy. This article describes harms caused by algorithmic decision-making in the high-stakes spheres of employment, credit, health care, and housing, which profoundly shape the lives of individuals. These harms are often felt most acutely by historically disadvantaged populations, especially Black Americans and other communities of color. And while many of the harms I describe are not entirely novel, AI and algorithms are especially dangerous because they can simultaneously obscure problems and amplify them—all while giving the false impression that these problems do not or could not possibly exist.

This article offers three primary contributions to the existing literature. First, it provides a baseline taxonomy of algorithmic harms that portend injustice, describing both the harms themselves and the technical mechanisms that drive those harms. Second, it describes my view of how the FTC’s existing tools—including section 5 of the FTC Act, the Equal Credit Opportunity Act, the Fair Credit Reporting Act, the Children’s Online Privacy Protection Act, and market studies under section 6(b) of the FTC Act—can and should be aggressively applied to thwart injustice. And finally, it explores how new legislation or an FTC rulemaking under section 18 of the FTC Act could help structurally address the harms generated by algorithmic decision-making.


Identity, Thy Name Is Gordian

by Dan Geer


The issue of identity is passing unignorable. The nature of the web—that everything and everyone is equidistant—erases the inherited intuitions of the public at large even if the public understands that when you cannot tell a computer from a person, you can drop the distinction.

Identity in a connected world is certainly different than in the village where, to a first approximation, you know everyone and everyone knows you—"you" being both your physical manifestation plus your history and kinship. Literature and catwalks alike are overrun with folks who claim that their real life only began when they moved to some place where nobody knew who they were and, better still, someplace big enough that the odds of seeing the same person twice were zero unless it was intentional. They call this freedom.

So welcome to the Internet. You don't have to be told that things here are seldom as they seem, that milk often masquerades as cream. What, then, does identity mean in, on, around, or through the Internet?


Nowhere to Hide: Data, Cyberspace, and the Dangers of the Digital World

by Andrew Burt


For as long as software has been relied upon, officials and researchers alike have been sounding alarm bells about the vulnerability of all our data—sometimes comically, but nonetheless gravely. Here, for example, is how one Congressional report described the issue of data security: "If architects built buildings the way programmers build programs, then the first woodpecker to appear would destroy civilization." This was in 1989.

Here’s how the head of the Central Intelligence Agency described a variation of the same problem: “We are staking our future on a resource that we have not yet learned to protect.” This was in 1998.

Examples of these types of warnings are not hard to find—not because such prognostications require much foresight, but because it is not all that hard to be right about the risks of digital technologies. Their dangers are plentiful, and we use them more and more.

Yet layered underneath all our profound privacy and security vulnerabilities, there are also three much less obvious effects of these trends, which form the basis of this essay: Privacy is dead. So is trust. And you’re not who you think you are.

After surveying each trend, I will make a handful of concrete suggestions about what we can and should do to address each development—as lawyers, as policymakers, and as citizens around the world. The sky may seem like it is falling in cyberspace, I will argue, and with good reason, but it need not fall as fast nor land as hard.