“(Im)Perfect Enforcement,” an Information Society Project conference to be held at Yale Law School in New Haven, Connecticut on April 6-7, 2019.
Recent technological advances enable algorithmic decision-making and automated legal enforcement at an unprecedented scale. Both of these methods of replacing humans with algorithms are often celebrated for “more perfectly” enforcing rules. Social media networks employ algorithmic decision-making to scale content moderation; criminal justice institutions delegate decisions on sentencing, probation, and risk to algorithms; machine-to-machine contracting in high-frequency trading depends on both algorithmic decision-making and automated enforcement; and blockchain technology and smart contracts aim to create self-enforcing contracts.
We aim to explore the fundamental principles and the practical applications of algorithmic decision-making and automated enforcement of laws, rules, and contracts. We encourage submissions from all disciplines that contribute to related legal, regulatory, or policy discussions addressing the potential and the challenges, including:
How can we delineate the meaning of “(im)perfect enforcement” with regard to both decision-making processes and legal enforcement? Which laws should be perfectly enforced, and which should not? When and why do people want perfect enforcement? What factors influence those evaluations?
How do we find the right balance between automation, flexibility, and justice? How, if at all, does automation change the character of laws, rules, and contracts?
What does it mean to have enforcement without enforcers? Where are the checks in the system? To what extent do rules require discretion? What is the potential of automated interpretation and management? What are the roles of humans in the loop in different contexts and cultures? Are there universal truths or values?
When is there a right to obtain a human decision? What tasks and decisions should we automate – or not automate – and why? What principles should we follow when allocating resources to building the infrastructure for the automation of decision-making processes and legal enforcement actions? Who should take the lead?
When, if ever, is it useful to conduct an economic analysis of (im)perfect enforcement, and how would we do it? What is the role of artificial intelligence in the debate on rules vs. standards and ex-ante vs. ex-post regulation? Are there useful insights from behavioral economics?
To what extent should we build legal compliance into the architecture of autonomous systems? How might we implement it in practice?