About this blog

Collab in Action (CIA) is the Justice Collaboratory’s blog written by its senior research team of Camila Gripp, PhD (Criminal Justice issues) and Farzaneh Badiei, PhD (Social Media issues). The Justice Collaboratory’s mission is to bring the latest ideas in the social sciences to bear on current problems. Rooted in the tenets of procedural justice, we seek to improve both the criminal justice and social media governance systems. We do this by:

Transforming the Goal: Legitimacy. The objective of both the criminal justice and social media governance systems must be to increase trust and cooperation between communities and the state.

Transforming the Focus: Communities, not individuals, should be our most meaningful unit of analysis.

Transforming the Language: Public Safety. Public safety is not just the reduction of crime or the maintenance of order. Rather, safety requires freedom from insecurity and victimization, community disenfranchisement, and government overreach.

This blog is published by and reflects the personal views of the individual authors, in their individual capacities. It does not purport to represent Yale University's institutional views, if any. No representation is made about the accuracy of the information, which solely constitutes the authors’ personal views on issues discussed. The information contained in this blog is provided only as general information and personal opinions, and blog topics may be updated after being initially posted.

The falling dominoes: on tech-platforms' recent user moderation

January 14, 2021

In the past week, tech-platforms had to remove and ban someone for inciting violence. That person happened to be the President of the United States. It is hard to argue that tech-platforms did not see this coming, since they deal with all kinds of other harmful behavior on their platforms. It is also hard to argue that tech-platforms had a governance structure in place to address the problem. The reason for this unpreparedness is that these platforms don't only moderate content; they also moderate users, yet they still rely on content moderation techniques that were not built to deal with users.

The recent spate of take-downs and bans by tech-platforms is a testament to the fact that in governing their users' behavior, tech-platforms have to go beyond content moderation. After tech-platforms banned Trump and some of his acolytes, it became clear that their content moderation techniques are not sufficient to build legitimacy and trust.

Users’ perceptions of tech-platforms matter a great deal. Political leaders and others can use tech-platforms to affect people's behavior and incite violence. But the way tech-platforms deal with such behavior online is also a determining factor in how users will behave in the future.

If users trust a platform and perceive its processes as legitimate, they are more likely to accept a decision by the platform even when they disagree with it. That certainly did not happen in the recent events. Instead, we saw what we might call “safety theatre”[1]. We saw top-down measures that removed harmful, violence-inciting content and people. We did not see measures through which platforms tried to respond to the aggrieved parties (those who thought it was unfair to remove their President from the platform). It was not clear how the platforms moderated the users. Using only content moderation techniques, with no clear governance structure, resembles the theatrical solutions we often see at airport security: visible, but likely ineffective.

To go beyond content moderation, platforms should build governance structures that can proactively create trust and legitimacy. Governance is not just due process or 100-page community standards of behavior. Governance is the necessary structure that helps build communities which, combined with fairness, can bring trust to a platform.

Finally, it is important to look at the interrelation of different tech-platforms and consider their actions collectively, not individually. We have been debating at which layer of the Internet it is appropriate to do content moderation, but I think we should look at the issue more holistically. From the outside, tech-platforms (located in various layers of the Internet) appeared to have a domino effect on one another. They used similar methods and processes toward the same goal: removing Trump and his supporters. Such a domino effect can threaten the presence of certain people on the Internet. Therefore, actions should be taken proportionally, and with a fair governance structure in place that is appropriate for each platform's respective layer of the Internet.




[1] Bruce Schneier wrote an essay about security theatre, which “…refers to security measures that make people feel more secure without doing anything to actually improve their security.” The term “safety theatre” used here was inspired by that essay.

https://www.schneier.com/essays/archives/2009/11/beyond_security_thea.html