The following social media governance commentary appears in the Justice Collaboratory's Collab in Action (CIA) blog.
The falling dominoes: on tech-platforms’ recent user moderation
January 14, 2021
In the past week, tech-platforms had to remove and ban someone for inciting violence. That person happened to be the President of the United States. It is hard to argue that tech-platforms did not see this coming, since they deal with all kinds of other harmful behavior on their platforms. It is also hard to argue that tech-platforms had a governance structure in place to address the problem. The reason for this unpreparedness is that these platforms don’t only moderate content; they moderate users. Yet they still use content moderation techniques that were not built to deal with users.
The recent spate of take-downs and bans by tech-platforms is a testament to the fact that, in governing their users’ behavior, tech-platforms have to go beyond content moderation. After tech-platforms banned Trump and some of his acolytes, it became clear that their content moderation techniques are not sufficient to build legitimacy and trust.
Users’ perceptions of tech-platforms are very important. Political leaders and others can use tech-platforms to affect people’s behavior and incite violence. But the way tech-platforms deal with such behavior online is also a determining factor in how users will behave in the future.
If users trust a platform and perceive its processes as legitimate, they are more likely to accept a decision by the platform even if they don’t agree with it. That certainly did not happen in the recent events. Instead, we saw what we might call “safety theatre”[1]. We saw top-down measures that removed harmful, violence-inciting content and people. We did not see measures through which platforms tried to respond to the aggrieved parties (those who thought it was unfair to remove their President from the platform). It was not clear how the platforms moderated the users. Using only content moderation techniques with no clear governance structure is like the theatrical solutions we often see at airport security: visible, but likely ineffective.
To go beyond content moderation, platforms should build governance structures that can proactively create trust and legitimacy. Governance is not just due process or 100-page community standards of behavior. Governance is the necessary structure that helps build communities which, combined with fairness, can bring trust to a platform.
Finally, it is important to look at the interrelation of different tech-platforms and consider their actions collectively and not individually. We have been debating at which layer it is appropriate to do content moderation. But I think we should look at the issue more holistically. From the outside, tech-platforms (located in various layers of the Internet) appeared to have a domino effect on one another. They used similar methods and processes for the same goal: removing Trump and his supporters. Such a domino effect can threaten the presence of certain people on the Internet. Therefore, actions should be taken proportionally and with a fair governance structure in place that is appropriate for its respective layer of the Internet.
[1] Bruce Schneier wrote an essay about security theatre, which “…refers to security measures that make people feel more secure without doing anything to actually improve their security.” The term “safety theatre” used here was inspired by that essay.
https://www.schneier.com/essays/archives/2009/11/beyond_security_thea.html
SMGI 2020 Recap & Looking Ahead
The notorious 2020 is going to end soon, and we look forward to a fresh start in 2021. At the Social Media Governance Initiative (SMGI), we refined our goal this year to bring prosocial governance mechanisms to tech corporations and social media platforms. By prosocial we mean governance strategies which encourage people to follow rules, cooperate with each other and their communities, and to engage with their community — socially, economically and politically.
We want to promote prosocial governance strategies in contrast to the dominant punitive governance methods that social media platforms seem to adopt. Punitive measures were all too popular this year because of the “infodemic,” the flood of disinformation about COVID-19. Next year will be a good time to reflect on the consequences of punitive measures and to design prosocial governance strategies.
To advance our mission we have taken a number of important steps:
The Research Network
This year we formed a new research network comprising key academics who have novel insights about social media governance. In 2021, we will publish a special issue in the Yale Journal of Law and Technology that discusses the important work these scholars are doing in the realm of social media governance studies. The special issue will be available in April and will include papers on topics such as:
- Advances in our understanding of trust building and platform-based governance;
- Designing in-product interventions and policies using alternative theories of justice and going beyond criminal justice;
- When to deploy “soft” and “hard” interventions in social media governance;
- The challenge of cultural diversity and the power of human rights approaches;
- Experimental evidence on promoting compliance and engagement online;
- How to organize innovation at the firm level to promote prosocial engagement.
Events
During the fall semester, we held three #TeaAtTwo virtual presentation sessions. The first was with Paolo Parigi discussing trust and measuring trust in social media platforms; the second with Professor Baron Pineda on governing diverse cultures on social media platforms using a human rights framework; and the final one with Professor Sarita Schoenebeck on repairing technological harms. These sessions were very popular and we plan to host another SMGI #TeaAtTwo series in 2021.
Partnerships
We have forged exciting new research partnerships with a number of tech-platforms. SMGI aims to bring healthy and civil online interactions to these Internet platforms by providing evidence-based suggestions that can be implemented in policy and product design. We will publish the results of our partnerships next year.
Publications
We regularly published blog posts about the most pressing social media governance issues during the last year, and we will continue to do so in 2021. We covered topics such as how to have a prosocial Facebook Oversight Board, how to fight COVID-19 disinformation with non-punitive mechanisms, and what the prosocial movement is and how we want to start it.
I have also published a report about Telegram’s governance shortcomings, in which I use the theories of procedural justice and collective efficacy to suggest how to improve its governance structures and protect vulnerable communities online.
We are eager to publish other experts’ thought pieces and opinions too, so please reach out if you want to publish a piece on the Justice Collaboratory’s Collab in Action (CIA) blog.
Stay Connected
You can keep in touch by contacting smgi@yale.edu and asking to be added to our listserv.
On behalf of the entire SMGI Network, we wish you a less punitive and more prosocial 2021.
Stay safe.
Making the Facebook Oversight Board Pro-Social
October 8, 2020
By Farzaneh Badiei
In 2018, Mark Zuckerberg announced that, to make Facebook’s oversight and governance independent, the company wanted to convene an appeals mechanism which, in its words, would:
- Prevent the concentration of too much decision-making within our teams.
- Create accountability and oversight.
- Provide assurance that these decisions are made in the best interests of our community and not for commercial reasons.
As a result of this vision, Facebook convened the Oversight Board in 2019 and announced its members in 2020.
The Oversight Board will only be effective if Facebook’s users perceive it as a legitimate mechanism. Unfortunately, the design of the Board does it no favors in legitimizing its function. With a top-down design, reliance on punitive measures, decision-making limited to a select few, and the potential to broadly apply standards that do not belong to the community, it is unlikely that Facebook’s users and stakeholders will feel compelled to follow the rules.
An independent and legitimate governance mechanism needs to take the following steps:
- Encourage bottom-up decision making (consult with the public)
- Adopt decisions and rules that the community perceives as legitimate
- Measure the impact of those rules
- Reform
Because of the perceived lack of legitimacy surrounding the Facebook Oversight Board’s ability to reform Facebook’s governance and hold it accountable, some activists have even started their own initiatives to hold Facebook “really” accountable. These are interesting initiatives, but one might want to think about how to salvage the Oversight Board itself.
In spite of the top-down design of Facebook’s Oversight Board, we can envision it offering a more pro-social and community-oriented form of governance. That would entail a governance model with enough legitimacy to hold Facebook accountable. A legitimate decision-making body would also encourage rule-following and cooperation among users, and we might witness a reduction in rule violations. There are four steps the Board can take to achieve a more bottom-up and pro-social governance:
Consult: The Oversight Board should demonstrate its independence and include the community in the decision-making process. The consultation phase is the step to show people that the Oversight Board is independent and not a mere spin-off of someone else’s processes. But public consultations should not be the usual window dressing. Instead, it should be clear how the consultation outcomes make the governance better.
Adopt: The strong human rights emphasis at the Oversight Board is an excellent sign. But as part of adopting the results of the consultation, it is necessary to show how the Board goes about protecting rights and how it upholds them for the users and communities that have been affected. Otherwise, invoking human rights as a term is nice but not effective. The Board needs to operationalize human rights and show us how it has done so.
At the adoption stage, it is important to show how community proposals guide what is adopted and, when appropriate, how those proposals could be better incorporated. Legitimacy will require the Oversight Board to listen to and adopt community feedback from the beginning, and also to explain why some recommendations cannot be accepted.
Measure: Measuring impact scientifically can give a real picture of what is going on, which will lead to greater acknowledgement of the independent value of the Board and will show how the Board is doing with respect to the protection of human rights. It can also show which policies have worked and which have not. Measuring impact is important because it can validate or disprove the preconceived notion that we can solve all of Facebook’s problems with take-downs and appeals mechanisms. The impact of the decisions should be measured not only based on what rights have been violated but also on how the decisions of the Board have affected users’ behavior and interactions.
Reform: The reform step is the most crucial of all. It shows that the Oversight Board upholds its principles by taking action. The Board needs to be agile enough to undergo constant reform, so that it can quickly respond to the shifting pressures that the social media landscape presents.
These recommendations are just one way the Board could be reformed with a more pro-social and community-oriented approach, which could help change it into an independent governance mechanism. After all, if the Board plays a critical role in Facebook’s governance, its decisions will not only affect content and freedom of speech; they will also affect all behavior and interactions on Facebook. Surely, the governance of such interactions should ultimately be grounded in legitimacy in the eyes of Facebook’s users.
The social media executive order: regulation of behavior not content
June 1, 2020
By Farzaneh Badiei
Last week Twitter flagged a tweet from the U.S. President as false. Consequently, the President decided to issue an executive order “on preventing online censorship.” While we have observed the apparent self-defeating nature of this order, there is a bigger issue at stake.
The executive order tries to take away the immunity that Section 230 of the Communications Decency Act (CDA) granted Internet platforms. The weakening of platforms’ immunity is to take place through policies that the President has asked various government agencies to put in place. The executive order claims that it intends to protect freedom of expression, apparently ignoring that Section 230 has been a contributor to online freedom of expression.
The primary intention behind the executive order appears to be to hold platforms liable for their take-down decisions. The White House’s argument is that de-immunizing the platforms would make lawsuits against them possible, and that the threat of liability would keep them from making unfair decisions. In this way, the President proposes to protect freedom of expression.
But Section 230 has, in fact, enabled freedom of speech by granting the very immunity now under attack. Under Section 230 as so far understood, a platform or Internet service provider (ISP) is not liable for content it hosts: it is treated as a simple intermediary for someone else’s (i.e. the original poster’s) publication. If platforms are at risk of liability, they will have to worry about every piece of content that users create on their platforms. This is why some have expressed concerns that the executive order could lead to censorship. The Order would seem to increase the probability that the President’s own tweets would be subject to removal.
One major point that is not discussed in these debates is that the executive order is not only about regulating content and speech. It is about regulating our behavior on the Internet. This includes the behavior of various Internet actors, online platforms and online users.
Through hierarchical and punitive means, the government wants to regulate online behavior. It wants to deter online platforms from self-regulation. By taking punitive measures, it can incentivize platforms to turn into exclusive clubs for a certain audience, or even to suppress user-generated content. It could take away the distributed content-generation on the Internet, which allows everyone to innovate and build. This could lead to even fewer creative tech giants.
This approach will hamper self-regulation both at the platform level and at the community level. Online actors have so far been governing themselves to some extent. They have also been innovative in their approach to governance (not necessarily pro-social, but still innovative). If this executive order stands, however, online actors will likely be hesitant to regulate themselves, since they will be held liable for their take-down decisions and for their governance structures.
The order also goes against the self-regulation of communities. Online platforms give their communities some autonomy to govern themselves. Online communities (like subreddits on Reddit) can have their own policies to govern themselves. If platforms can be held liable for taking content down, they cannot allow their communities to govern themselves and will have to impose rules from the top.
Overall, the executive order is bad for the Internet. It is based on deterrence and punitive tactics. It tries to regulate our online behavior: how online platforms run, how we choose among online platforms, and how and whether we govern ourselves. We, as citizens, should be given the opportunity to govern ourselves and to decide what we find acceptable online in the long run.
Pro-social media and COVID-19 disinformation
April 6, 2020
By Farzaneh Badiei
The following is a commentary from the Justice Collaboratory’s Director of Social Media Governance Initiative Farzaneh Badiei.
In the face of COVID-19, social media platforms are adopting various approaches to govern their users’ behaviors. Most platforms are carrying on with typical response techniques they believe to be effective, like blocking users, removing content, or using automated enforcement for bulk removal. However, some progressive platforms are taking pro-social approaches to address COVID-19.
Pro-social governance promotes action that guides people to behave in a way that benefits others. This can be as simple as encouraging people to follow the rules and, in doing so, influencing others to follow the rules too. In this health crisis, we want to highlight some of the pro-social initiatives that have emerged but may have gone unnoticed.
- Twitter and Pinterest: Twitter is helping people follow rules by deactivating suggested key terms that would take them to non-credible sources. Similarly, Pinterest limited its search results to verified health news channels only. This might reduce potential exposure to shared disinformation, which we hope prevents data voids.
- Nextdoor: Nextdoor is encouraging people to help one another in their neighborhoods. They recently launched a “help map” that allows users to add themselves to a map announcing what help (services) they can offer. The interesting part of this feature is that the users inspired Nextdoor to launch the help map: there was an organic increase in offers to help on the platform after COVID-19 spread. This is a great example of how existing pro-social attitudes among users can influence a platform and find their way into its feature set.
- Facebook: Facebook continues with its standard block-and-remove tactics but has reduced its content reviewer workforce. As a result, it has reduced the number of content reviews and relies on other methods. For example, it uses third-party fact checkers to check and rate false information.
- Joint tech-company hackathon: some tech companies have also started a hackathon competition for developers to build software that drives social impact. This is worthy, but it may be tricky. It is not clear what they mean by “social impact,” and the effort seems to encourage the tech community to wade into pro-social efforts for public health without including health, public policy, or legal experts. This shows that to “just be pro-social” is not as easy as it might sound, and tech companies need to take a more holistic approach in their pro-social efforts.
The difference between an innovative pro-social approach and the standard punitive approach is that pro-social approaches are designed to encourage people’s basic desire to connect. Punitive measures focus on the outcome and the technology, and this model of governance prevails because social media platforms believe that punitive methods are effective and measurable. In contrast, pro-social measures focus on people and their social interactions, but their effect is perceived as gradual and hard to measure.
To foster pro-social initiatives and embed them as the prevailing governance approach, platforms should highlight them, deploy them, and provide methods to measure their effect. It is equally important to illustrate that quick, technical fixes (like removing content with artificial intelligence) are often not effective in the long run — especially when they try to address a deeper social problem.
At the Justice Collaboratory’s Social Media Governance Initiative, we are hopeful that social media platforms will continue deploying pro-social initiatives. With the help of our network of scholars and platform partners, we aim to follow these developments, provide a pro-social governance narrative, and use serious science to measure their impact.
The Pro-Social Movement Starts Here
February 18, 2020
By Farzaneh Badiei
The Justice Collaboratory’s Social Media Governance Initiative (SMGI) aims to shape and lead a concerted pro-social movement for social media platforms. We want to encourage online decision makers to promote cooperation and enable communities to advocate for social norms and moral values that advance civil and civic society. Such actions can enhance trust in, and the legitimacy of, decision makers and help promote better governance.[1]
A few platforms are already taking steps toward pro-social action. For example, Nextdoor has a “community vitality” project to support civil discourse and help build meaningful connections. Recently, during the coronavirus crisis, Twitter decided to disable its auto-suggest results about the issue so that people do not receive or share potentially misleading information. Despite the potentially positive effects of such initiatives, pro-social action remains low on the list of solutions for governing people’s behavior on the Internet. Platforms continue to resort to suspending, blocking, deleting and eliminating users from their platforms as the main mechanisms for regulation. Punitive measures are inadequate to fight terrorism, disinformation, harassment and other problems. We should foster pro-social behavior so that we can keep a global, pluralistic Internet together.
To help the social media world foster pro-social behavior, SMGI will provide research-based evidence for social media platforms and self-regulating online communities. We want to discover the problems that platforms, online users and communities are struggling with, and to evaluate the effect of various pro-social initiatives on these platforms using theory and empirical research.
We plan to report periodically about pro-social initiatives taken by online communities and social media platforms that shape the pro-social landscape. The landscape can give us insights into more creative and less punitive governance mechanisms. We aim to document these kinds of initiatives across the Internet and strengthen the pro-social movement by evaluating these initiatives through research.
To encourage more cooperation in this field between academics, civil society and social media platforms, we are building a network that includes those interested in the pro-social movement. The network will provide a space for scholars, activists and social media platforms to collaborate and get regular updates about innovative pro-social solutions. Join our movement! To learn more, contact us at smgi@yale.edu.
[1] For more information about legitimacy and trust, see: Tyler, Tom R. Why People Obey the Law. Princeton University Press, 2006; and Meares, Tracey L. “Norms, Legitimacy and Law Enforcement.” Oregon Law Review 79 (2000): 391.
Govern Fast and Break Things
December 4, 2019
By Farzaneh Badiei
Social media platforms that lack legitimate and coherent governance are prone to being called upon by various authorities to generate a quick, outcome-oriented solution in the face of catastrophes. The reactions are usually to online incidents — for example, a sudden discovery of a privacy violation, or some atrocity that spread online. When platforms act in response to these calls without a legitimate governance mechanism in place, their responses are ad hoc and without a strong foundation. This can negatively affect the Internet, global online platforms and online communities — and might not even resolve the issue.
To address the problem, we need to encourage platforms to establish governance mechanisms, informed by various governance strategies, such as procedural justice. Procedural justice requires the decision makers to be neutral and transparent, to have trustworthy motives, and to treat people with respect and dignity.
Governance differs from the various processes and policies that platforms already have. We can define governance as the structure through which the rules that govern our behaviors online are themselves generated, enforced and reformed. If governance does not apply coherently across the platform ecosystem, we will face a patchwork of ineffective solutions. We will also see more half-baked initiatives that governments or others try to impose or that platforms adopt themselves.
To illustrate the importance of having a legitimate and coherent governance mechanism, we (at the Justice Collaboratory) will publish a series of commentaries. The commentaries will be related to issues for which social media platforms do not have a governance solution, and the efforts that are taking place to overcome those issues. The framework we use is based on the concepts of governance, legitimacy, and procedural justice.
The starting point, in this blog, is the Christchurch Call to Action (aka the Christchurch Call). The Christchurch Call is an initiative led by the New Zealand and French governments to “eradicate terrorist and violent extremist content online”, launched after the mass shooting at two mosques in Christchurch, New Zealand on March 15, 2019. Some tech corporations and governments made several commitments that ranged from terrorist content removal to providing due process. To date, three international organizations and 48 other countries have joined the Christchurch Call (the United States remains conspicuously absent from this list).
Outcome-oriented solutions
Despite the emphasis on due process and human rights, the ultimate goal of the Christchurch Call is to reduce and limit the amount of terrorist, extremist content. Some governments involved with the initiative believe that removing content is a step toward fighting against extremism. For example, the Prime Minister of Canada believes that:
“Terrorist and extremist content online continues to spill out into the real world with deadly consequences. Today, we are stepping up to eliminate terrorist and violent extremist content online. Together, we can create a world where all people – no matter their faith, where they live, or where they are from – are safe and secure both on and offline.”
It is alarming that there is such a major focus on the outcome (the elimination of terrorist and violent extremist content). The Christchurch Call contains commitments to due process, but they are weak. There is no strong focus on the need for a legitimate decisionmaking process. People don’t perceive governments’ actions to be legitimate just because they are governments. This is especially true when governments use mechanisms that are not democratic, such as “cooperation” with tech corporations.
Reforming a top-down approach
Top-down approaches do not inherently generate distrust in users, but the processes must be transparent, give people a voice, treat users with dignity, and convey trustworthy motives or intentions. These attributes did not have a strong presence in the process that led to the Christchurch Call. The initiative was the governments’ attempt to quasi-regulate social media platforms – not through legislative efforts but through opaque cooperation with tech corporations. New Zealand and France got together with a handful of tech corporations such as Microsoft, Facebook, and Twitter, and negotiated a text that was not revealed publicly until very close to its adoption. In less than six months it went from meetings and talks to a process that will be implemented in services on which billions of users depend.
The Christchurch Call was a top-down process from the start, despite the efforts to include others. It is problematic because it allows big corporations to please governments by offering temporary relief for serious social problems. It would be better instead to focus on legitimate governance, which would provide a way to address those social problems systematically. By asking tech corporations to regulate their users with no coherent governance mechanism in place, governments endorse these platforms’ business models. Because governments endorse a handful of companies’ approaches, those corporations become more powerful, such that no one else can compete with them. At that point, companies can use such authoritative, incoherent ways of providing governance as brand promotion.
The Christchurch Call has been trying to be inclusive, to give communities a voice, and to address the governance-related questions during the implementation phase of the commitments. The government of New Zealand has been trying hard to be transparent and to adopt a process that includes various stakeholders, is consultative, and considers Internet freedoms and human rights. However, the implementation plan has shortcomings.
The governments and tech corporations had already decided on the institutional design of the organization that implements the Christchurch Call. Almost from the beginning of the Christchurch Call negotiations, the New Zealand government and tech corporations considered the Global Internet Forum to Counter Terrorism (GIFCT), an opaque, industry-led forum, as the incubator of the commitments. GIFCT is now becoming a permanent institution and has promised to change its structure. It has come up with an advisory committee that is to include civil society and governments, and a Board comprising tech corporations. The plan is still in the making, but the “advisory” role envisioned for civil society, the transparency plans, and many other governance issues remain unresolved. There are various working groups that members can join. These working groups might partly address the concern about a coherent approach to governance, but several issues remain: the academic working group’s mandate is not clear, the algorithm working group is outcome-oriented instead of governance-oriented, and the technical working group is separated from its policy consequences. All the final decisions are still to be made by the GIFCT Board, where votes are reserved for members (who all need to be tech platform operators).
The conveners of GIFCT have promised that if civil society and stakeholders participate, they can answer the process-related questions together. However, the tech corporations and governments have already decided what role others should assume in this institution. To discuss whether the GIFCT’s approach is legitimate, participants need to “go inside the room,” assume the advisory role, and implicitly grant the platform legitimacy in the first place.
This month (December 2019), various stakeholders gather to test an operationally ready “shared crisis response protocol” that GIFCT companies will follow. It is not clear through what processes the New Zealand government, law enforcement, and civil society developed the protocol. It is also not clear how much of the feedback received during the meeting will be taken into consideration. Despite promises that this will be a living, evolving protocol, it is uncertain how users can seek reforms.
Governments and tech corporations cannot seek governance legitimacy by involving stakeholders as afterthoughts. Communities – the users of these platforms – have no say in how governance institutions should be built. Communities will not be in charge of building institutions, and they will not be involved in running those institutions, but (with this kind of government support) everyone will end up subject to those institutions.
Imagine online platforms with a legitimate governance mechanism
To make it clearer what a coherent, legitimate governance mechanism could look like, let’s imagine that platforms had a “governance mechanism” in place which could be used to respond to abhorrent live streaming of murder and terror (which is what precipitated the Christchurch Call). I postulate that the platforms could have responded to the Christchurch attack in a much more effective way if they had considered the following governance components in their processes (and in advance):
- A way to include communities in decisionmaking processes on equal footing;
- A known mechanism through which any stakeholder can transparently suggest a policy approach;
- An avenue to challenge the existing policies with the possibility to reform them, preferably in a bottom-up manner and at the request of people;
- A more effective enforcement of and compliance with the policies by being more inclusive, treating people with dignity and explaining the rationale behind the decisions.
What’s next?
The collective approach to social media governance should move away from being reactive and outcome-oriented, and move toward strategies that can bring legitimacy to the decision-making process and shape, enforce and reform policies. Governance and procedural justice are usually set aside in these discussions or are not addressed cohesively. In my next blog I will explain how the lack of a coherent governance mechanism also leads to the hasty adoption of outcome-oriented solutions through algorithms.
Facebook (still) lacks good governance
September 25, 2019
The following is a commentary from the Justice Collaboratory’s Director of Social Media Governance Initiative Farzaneh Badiei.
This week, Facebook made two key announcements: one about combating hate and extremism online, and one about the establishment of an independent oversight board. The announcements were timely and strategic: on Wednesday, September 18, Facebook and other tech giants had a hearing at the US Senate about “Mass Violence, Extremism, and Digital Responsibility.” At the beginning of this week, there was a side meeting during the UN General Assembly about the Christchurch Call, a call to eradicate terrorist and violent extremist content online.
Facebook’s efforts to address the issue of violent extremism through various initiatives, rather than through concrete social media governance solutions, are unlikely to achieve their goals. Social media governance solutions have to cover all three parts of the process through which Facebook asserts authority: policymaking, adjudication and enforcement. Currently, such governance arrangements are either inadequate or non-existent.
Procedural justice is necessary to maintain the legitimacy of the decisionmaker, but Facebook does not address this key theory of governance in most of the arrangements it proposes to launch. The main components of procedural justice (how fairly Facebook treats its users: whether it gives them a voice, treats them with neutrality, and explains the reasons for its decisions) are most of the time not at the center of its governance initiatives.
At Facebook, the policymaking processes around content take-downs are inconsistent with procedural justice because they are opaque, top-down, and reactive. The enforcement mechanisms do not go much beyond content moderation and the removal of accounts. The dispute resolution mechanisms do not give users much of a chance to be heard, and the outcomes do not fully explain why a decision was made.
The inadequacy or absence of governance mechanisms that deal with extremist content is apparent from Facebook’s reaction to governments’ requests to combat extremism online. In its announcement, Facebook mentions the Christchurch Call as one of the main reasons (but not the only reason) for changing its policy on terrorism and dangerous content. The Christchurch Call is an initiative of the New Zealand and French governments. It was formed in the aftermath of the Christchurch attack, to eradicate violent, extremist content online. The two governments negotiated a set of commitments with a few large tech corporations (including Facebook). The negotiations took place in a bilateral fashion, and the governments of New Zealand and France issued the Call without considering civil society and other stakeholders’ feedback. Only certain tech corporations were in the room. Civil society, the technical community and academics were not consulted until after the commitments were agreed upon. It is worth noting that the New Zealand government has been trying hard to include civil society in the implementation process.
During hearings and negotiations with governments, companies make promises that are mostly about the tactics and techniques of taking content down (most of the time promising to take content down automatically and by using artificial intelligence). They are rarely about reforming the policy processes through which Facebook sets its community standards on content moderation and defines the conditions under which users are permitted or prohibited from using its services.
Perhaps Facebook leaders believe content removal is a more efficient solution than having an elaborate decision-making system that embodies procedural justice from the beginning. It is true that content removal can give companies some impressive and tangible Key Performance Indicators (KPIs). In 2018, Zuckerberg announced that Facebook was going to proactively handle content with artificial intelligence. He also stated that Facebook proactively flags 99% of “terrorist” content and that it has more than 200 people working on counter-terrorism. In its recent announcement, Facebook stated that it has expanded that team to 350 people with a much broader focus on “people and organizations that proclaim or are engaged in violence leading to real-world harm.”
Presenting a large volume of content and account removals might provide some temporary relief for governments. However, the removal of accounts and content on its own is not really a governance mechanism. While some content needs to be removed, with urgency, the decision to remove content or to ban certain organizations and individuals from the platform should be made through a governance mechanism that is procedurally just. Investing in a system that issues fair decisions and upholds procedural justice will yield better results in the long term: there will be fewer violations of the rules, and users will perceive the process as legitimate and self-enforce the dispute resolution outcomes. Good governance demands a social infrastructure that can shape decisions from the front end.
Convening the oversight board was a step toward addressing the governance issue. Facebook invested a lot in coming up with such an oversight board, in consultation with various organizations around the world. Such efforts are indeed commendable, but not sufficient. The oversight board is only tasked with resolving cases of content removal that are too complex and disputed either by Facebook or by users. The volume of take-downs is very large, and the board will only handle a limited number of cases. The oversight board is in charge of applying the top-down policies of Facebook. Thus, it is not clear how it can be used as a tool for holding Facebook accountable to users.
An example of a top-down policy decision is Facebook’s recent expansion of its definition of terrorism. During the Christchurch Call discussions, civil society and academics emphasized that we do not have a globally applicable definition of “terrorism.” Facebook acknowledges that this is a problem; however, since no clear governance structure sets a process for making policy decisions, it has discarded such feedback, come up with its own global definition of terrorism, and recently broadened that definition.
Setting a definition of “terrorism,” applying it globally, and expanding it at the request of governments or in the aftermath of a crisis illustrates that Facebook does not have an adequate governance structure in place to respond to these requests through legitimate processes.
Policymaking is a major part of social media governance. Having an independent board to resolve disputes does not solve the problem of top-down policies and opaque policymaking processes. Social media platforms are in need of governance, and the number of content take-downs is not the best measure of success in combating extremism.