“New Worlds Arise”: Professors Meares and Tyler Research Online Safety

As the field of online trust and safety rapidly develops, how can criminology research help build vital online communities? This is the question at the heart of a new research article co-authored by Yale Law School professors Tracey Meares and Tom Tyler, founding directors of the Justice Collaboratory.
Meares is the Walton Hale Hamilton Professor of Law and a nationally recognized expert on policing in urban communities. Her research looks at how members of the public think about their relationships with legal authorities.
Tyler is the Macklin Fleming Professor Emeritus of Law and Professor of Psychology. His research explores the role of justice in shaping people’s relationships with groups and societies.
Their new essay, published in the Annual Review of Criminology, was co-authored with Matt Katsaros, director of the Social Media Governance Initiative at the Justice Collaboratory.
Meares and Tyler frequently collaborate on research, and in the Q&A below, they jointly reflect on the challenges of regulating online content and how models based on procedural justice can make the Internet safer for everyone.
In your paper, you describe some of the methods for regulating content that have been used by online social platforms. Why is better regulation of online social spaces necessary, and how can the field of criminology provide insights for mitigating online misconduct?

As online exchanges have developed and replaced offline interactions, they have taken on many of the less desirable features of real-world communication. Whether it is the general expression of racism, sexism, or xenophobia, or more relationship-specific bullying and messages of anger and revenge, online communication can have the same destructive elements as in-person speech. As a consequence, platforms are being pressured to manage the content they allow. Most platforms were initially designed by engineers who presumed that their content would be benign, on the model of “dating sites”: they would promote opportunities to make connections and build positive relationships. Pressured by users and government regulators, platforms have belatedly recognized the need to deal with negative content as it has grown in frequency and toxicity. In this context, platforms have drawn upon the extensive theories and research that exist in criminology about how to manage bad actors and inappropriate actions. The online and offline worlds face many similar challenges in trying to regulate behavior, and in many ways efforts at mitigating online misconduct can be shaped by prior efforts to mitigate offline misconduct, i.e. criminal behavior.
Criminological theories explore not only commonly known ideas such as deterrence but also the major contributions of social psychology, such as voluntary compliance. In our paper, we focused on the importance of promoting internalized compliance as a foundational concept for regulating online content. When people comply with rules voluntarily, not only will we achieve more durable rule-following at lower cost, but, we believe, we can also achieve online spaces that are more “pro-social.”
The paper talks about platform-driven governance versus community-driven governance. Can you explain the differences between these approaches and offer a criminal justice lens for assessing their effectiveness?

The key question is how to encourage people to follow rules about appropriate conduct. One approach is for platforms to use their control over their services to institute and enforce a set of rules. Platforms have considerable control over access and can and do suspend or terminate users. Despite these advantages, platforms face continual challenges as users adapt to regulatory efforts. One challenge is that platforms are not governments, so it is hard to motivate voluntary rule-following based upon the legitimacy of the authorities. An alternative approach is to focus on a user’s connection to other users and their desire to be a valued member of an online community. In offline communities, people follow community norms to avoid being ostracized by others in their community, and the same mechanism could operate online. To promote this approach, platforms allow smaller groups to form around shared interests and permit those groups to make their own rules and appoint their own moderators to enforce them. Doing so draws upon the recognition in criminology that loyalty to groups can be a strong motivator of rule-following behavior.
Your paper notes that sanction-based content moderation isn’t always effective, as “across multiple platforms … individuals who feel more fairly treated by a platform’s enforcement process are more likely to voluntarily follow rules.” What design interventions can improve online safety?

While many platforms have drawn upon criminological models to design their content moderation systems, they have frequently adopted sanction-based models, which have many problems in their application in real-world settings. Consequently, they have made little effort to gain voluntary acceptance by designing procedures that users experience as just, which would enhance the legitimacy of their moderation efforts. A legitimacy-based approach built around perceived justice has been found to be very effective in real-world communities, and recent research suggests that it also enhances voluntary rule-following online. What leads to user perceptions of justice? Procedures that give users input before decisions are made to block their posts; clear explanations of what the rules are and of how a post violates those rules; and opportunities to appeal decisions, particularly when those decisions are initially made algorithmically. Research suggests that there is a fundamental tension between algorithmic decision-making and people’s views about justice, with people generally associating receiving justice with decisions made by humans rather than machines.
What are other ways in which justice theory could be used to design remedies for online transgressions?

Procedural justice models focus on designing and implementing rules and remedies based upon what users think is a fair way to make decisions about content. Once platforms accept the basic idea that they should design their procedures in this way, they can draw upon a rich literature in criminology concerning how to design just procedures in regulatory situations. That literature shows that people want voice, i.e. the opportunity to tell their side of the story before decisions are made; neutrality, i.e. clear and transparent rules and explanations for how decisions reflect those rules; respect, i.e. treatment based upon the presumption that they are not “bad actors” with malevolent intentions; and finally, trust that the intentions of the authorities are sincere and benevolent, i.e. decision makers are trying to do what is best for the users involved.