Report Of The Facebook Data Transparency Advisory Group

In concert with Facebook, Justice Collaboratory faculty co-directors Tracey Meares and Tom Tyler led a team of seven independent experts to assess the metrics included in the first two versions of Facebook's Community Standards Enforcement Report. This Data Transparency Advisory Group (DTAG) created a final report summarizing its findings.

The following social media governance commentaries appear in the Justice Collaboratory's Collab in Action (CIA) blog.

 

Facebook (still) lacks good governance


September 25, 2019

By Farzaneh Badiei

The following is a commentary from Farzaneh Badiei, Director of the Justice Collaboratory's Social Media Governance Initiative.

This week, Facebook made two key announcements, one about combating hate and extremism online and one about the establishment of an independent oversight board. The announcements were timely and strategic: on Wednesday, September 18, Facebook and other tech giants appeared at a US Senate hearing on “Mass Violence, Extremism, and Digital Responsibility.” At the beginning of this week, there was a side meeting during the UN General Assembly about the Christchurch Call, a call to eradicate terrorist and violent extremist content online.

Facebook’s efforts to address violent extremism through various initiatives rather than through concrete social media governance solutions are unlikely to achieve their goals. Social media governance solutions have to cover all three parts of policymaking, adjudication, and enforcement, through which Facebook asserts authority. Currently, such governance arrangements are either inadequate or non-existent.

Procedural justice is necessary to maintain the legitimacy of the decision-maker, but Facebook does not address this key theory of governance in most of the arrangements it proposes to launch. The main components of procedural justice, namely how fairly Facebook treats its users (whether it gives them a voice, treats them with neutrality, and explains the reasons for its decisions), are rarely at the center of its governance initiatives.

At Facebook, the policymaking processes around content takedowns are inconsistent with procedural justice because they are opaque, top-down, and reactive. The enforcement mechanisms do not go much beyond content moderation and removal of accounts. The dispute resolution mechanisms do not give users much of a chance to be heard, and the outcomes do not fully explain why a decision was made.

The inadequacy or lack of governance mechanisms that deal with extremist content is apparent from Facebook’s reaction to governments’ requests to combat extremism online. In its announcement, Facebook mentions the Christchurch Call as one of the main reasons (but not the only reason) for making changes to its policy on terrorism and dangerous content. The Christchurch Call is an initiative of the New Zealand and French governments. It was formed in the aftermath of the Christchurch attack to eradicate violent extremist content online. The two governments negotiated a set of commitments with a few large tech corporations (including Facebook). The negotiations took place in a bilateral fashion, and the governments of New Zealand and France issued the Call without considering feedback from civil society and other stakeholders. Only certain tech corporations were in the room. Civil society, the technical community, and academics were not consulted until after the commitments were agreed upon. It is worth noting that the New Zealand government has been trying hard to include civil society in the implementation process.

During hearings and negotiations with governments, companies make promises that are mostly about tactics and techniques for taking content down (usually promising to take content down automatically, using artificial intelligence). They are rarely about reforming the policy processes through which Facebook sets its community standards about content moderation and defines the conditions under which users are permitted or prohibited from using its services.

Perhaps Facebook leaders believe content removal is a more efficient solution than having an elaborate decision-making system that embodies procedural justice from the beginning. It is true that content removal can give companies some impressive and tangible Key Performance Indicators (KPIs). In 2018, Zuckerberg announced that Facebook would proactively handle content with artificial intelligence. He also stated that Facebook proactively flags 99% of “terrorist” content and has more than 200 people working on counter-terrorism. In its recent announcement, Facebook stated that it has expanded that team to 350 people with a much broader focus on “people and organizations that proclaim or are engaged in violence leading to real-world harm.”

Presenting a large volume of content and account removals might provide some temporary relief for governments. However, removal of accounts and content on its own is not really a governance mechanism. While some content needs to be removed urgently, the decision to remove content or to ban certain organizations and individuals from the platform should be made through a governance mechanism that is procedurally just. Investing in a system that issues fair decisions and upholds procedural justice will yield better results in the long term: there will be fewer violations of the rules, and users will perceive the process as legitimate and self-enforce dispute resolution outcomes. Good governance demands a social infrastructure that can shape decisions from the front end.

Convening the oversight board was a step towards addressing the governance issue. Facebook invested a lot in coming up with such an oversight board, in consultation with various organizations around the world. Such efforts are indeed commendable, but not sufficient. The oversight board is tasked only with resolving content-removal cases that are especially complex and disputed, referred either by Facebook or by users. The volume of takedowns is very large, and the board will handle only a limited number of cases. The oversight board is in charge of applying Facebook's top-down policies. Thus, it is not clear how it can be used as a tool for holding Facebook accountable to users.

An example of a top-down policy decision is Facebook’s recent addition to the definition of terrorism. During the Christchurch Call discussions, civil society and academics emphasized that we do not have a globally applicable definition of “terrorism.” Facebook acknowledges that this is a problem; however, since no clear governance boundary sets a process for making policy decisions, it has discarded such feedback, come up with its own global definition of terrorism, and recently broadened that definition.

Setting a definition of “terrorism,” applying it globally, and expanding it at the request of governments or in the aftermath of a crisis illustrates that Facebook does not have an adequate governance structure in place to respond to these requests through legitimate processes.

Policymaking is a major part of social media governance. Having an independent board to resolve disputes does not solve the problem of top-down policies and opaque policymaking processes. Social media platforms are in need of governance, and an increase in the number of content takedowns is not the best measure for combating extremism.

 

Govern Fast and Break Things


December 4, 2019
By Farzaneh Badiei

Social media platforms that lack legitimate and coherent governance are prone to being called upon by various authorities to generate quick, outcome-oriented solutions in the face of catastrophes. The reactions are usually to online incidents and discoveries — for example, a sudden discovery of a violation of privacy, or some atrocity that spread online. When platforms act in response to these calls without having a legitimate governance mechanism in place, their responses are ad hoc and without a strong foundation. This can negatively affect the Internet, global online platforms and online communities — and might not even resolve the issue.

To address the problem, we need to encourage platforms to establish governance mechanisms, informed by various governance strategies, such as procedural justice. Procedural justice requires the decision makers to be neutral and transparent, to have trustworthy motives, and to treat people with respect and dignity.

Governance differs from the various processes and policies that platforms already have. We can define governance as the structure through which the rules that govern our behaviors online are themselves generated, enforced, and reformed. If it does not apply coherently to the ecosystem of the platforms, we will face a patchwork of ineffective solutions. We will also see more half-baked initiatives that governments or others try to impose or that platforms adopt themselves.

To illustrate the importance of having a legitimate and coherent governance mechanism, we (at the Justice Collaboratory) will publish a series of commentaries. The commentaries will be related to issues for which social media platforms do not have a governance solution, and to the efforts that are taking place to overcome those issues. The framework we use is based on the concepts of governance, legitimacy, and procedural justice.

The starting point, in this blog, is the Christchurch Call to Action (aka the Christchurch Call). The Christchurch Call is an initiative led by the New Zealand and French governments to “eradicate terrorist and violent extremist content online,” formed after the mass shooting at two mosques in Christchurch, New Zealand on March 15, 2019. Some tech corporations and governments made several commitments that ranged from terrorist content removal to providing due process. To date, three international organizations and 48 other countries have joined the Christchurch Call (the United States remains conspicuously absent from this list).

Outcome-oriented solution

Despite the emphasis on due process and human rights, the ultimate goal of the Christchurch Call is to reduce and limit the amount of terrorist and extremist content. Some governments involved with the initiative believe that removing content is a step toward fighting extremism. For example, the Prime Minister of Canada believes that:

“Terrorist and extremist content online continues to spill out into the real world with deadly consequences. Today, we are stepping up to eliminate terrorist and violent extremist content online. Together, we can create a world where all people – no matter their faith, where they live, or where they are from – are safe and secure both on and offline.”

It is alarming that there is such a major focus on the outcome (elimination of terrorist and violent extremist content). The Christchurch Call contains commitments to due process, but they are weak. There is no strong focus on the need for a legitimate decision-making process. People don’t perceive governments’ actions to be legitimate just because they are governments. This is especially true when governments use mechanisms that are not democratic, such as “cooperation” with tech corporations.

Reforming a top-down approach

Top-down approaches do not inherently generate distrust in users, but the processes must be transparent, give people a voice, treat users with dignity, and convey trustworthy motives or intentions. These attributes did not have a strong presence in the process that led to the Christchurch Call. The initiative was the governments’ attempt to quasi-regulate social media platforms – not through legislative efforts but through opaque cooperation with tech corporations. New Zealand and France got together with a handful of tech corporations such as Microsoft, Facebook, and Twitter, and negotiated a text that was not revealed publicly until very close to its adoption. In less than six months, it went from meetings and talks to a process that will be implemented in services on which billions of users depend.

The Christchurch Call was a top-down process from the start, despite the efforts to include others. It is problematic because it allows big corporations to please governments by offering temporary relief for serious social problems. It would be better instead to focus on legitimate governance, which would provide a way to address those social problems systematically. By asking tech corporations to regulate their users with no coherent governance mechanism in place, governments endorse these platforms’ business models. Because governments endorse a handful of companies’ approaches, those corporations become more powerful, such that no one else can compete with them. At that point, companies can use such authoritative but incoherent ways of providing governance as brand promotion.

The Christchurch Call has been trying to be inclusive, to give communities a voice, and to address the governance-related questions during the implementation phase of the commitments. The government of New Zealand has been trying hard to be transparent and to adopt a process that includes various stakeholders, is consultative, and considers Internet freedoms and human rights. However, the implementation plan has shortcomings. The governments and tech corporations had already decided on the institutional design of the organization that implements the Christchurch Call. Almost from the beginning of the Christchurch Call negotiations, the New Zealand government and tech corporations considered the Global Internet Forum to Counter Terrorism (GIFCT), an opaque, industry-led forum, as the incubator of the commitments. GIFCT is now becoming a permanent institution and has promised to change its structure. It has come up with an advisory committee that is to include civil society and governments, and a Board comprising tech corporations. The plan is still in the making, but the “advisory” role envisioned for civil society, the transparency plans, and many more governance issues remain unanswered. There are various working groups that members can join. These working groups might partly address the concern about a coherent approach to governance, but several issues remain: the academic working group’s mandate is not clear, the algorithm working group is outcome-oriented instead of governance-oriented, and there is a technical working group that is separated from its policy consequences. All final decisions are still to be made by the GIFCT Board, where votes are reserved for members (all of whom must be tech platform operators).

The conveners of GIFCT have promised that if civil society and other stakeholders participate, they can answer the process-related questions together. However, the tech corporations and governments have already decided what role others should assume in this institution. To discuss whether GIFCT’s approach is legitimate, participants need to “go inside the room,” assume the advisory role, and implicitly grant the forum legitimacy in the first place.

This month (December 2019), various stakeholders gather to test an operationally ready “shared crisis response protocol” that GIFCT companies will follow. It is not clear through what processes the New Zealand government, law enforcement, and civil society have developed the protocol. It is also not clear how much of the feedback received during the meeting will be taken into consideration. Despite promises that this will be a living, evolving protocol, it is uncertain how users can seek reforms.

Governments and tech corporations cannot seek governance legitimacy by involving stakeholders as afterthoughts. Communities – the users of these platforms – have no say in how governance institutions should be built. Communities will not be in charge of building institutions, and they will not be involved with running those institutions, but (with this kind of government support) everyone will end up subject to those institutions.

Imagine online platforms with a legitimate governance mechanism

To make it clearer what a coherent, legitimate governance mechanism could look like, let’s imagine that platforms had had a governance mechanism in place that could be used to respond to abhorrent live streaming of murder and terror (which is what precipitated the Christchurch Call). I postulate that the platforms could have responded to the Christchurch attack in a much more effective way if they had considered the following governance components in their processes (and in advance):

  • A way to include communities in decisionmaking processes on equal footing;

  • A known mechanism to transparently suggest a policy approach by any of the stakeholders;

  • An avenue to challenge the existing policies with the possibility to reform them, preferably in a bottom up manner and at the request of people;

  • More effective enforcement of and compliance with the policies, achieved by being more inclusive, treating people with dignity, and explaining the rationale behind decisions.

What’s next?

The collective approach to social media governance should move away from being reactive and outcome-oriented, and move toward using strategies that can bring legitimacy to the decision-making process and shape, enforce, and reform the policies. In these discussions, governance and procedural justice are usually set aside or not addressed cohesively. In my next blog post, I will explain how the lack of a coherent governance mechanism also leads to the hasty adoption of outcome-oriented solutions through algorithms.