Report Of The Facebook Data Transparency Advisory Group

In concert with Facebook, Justice Collaboratory faculty co-directors Tracey Meares and Tom Tyler led a team of seven independent experts to assess the metrics included in the first two versions of Facebook's Community Standards Enforcement Report. This Data Transparency Advisory Group (DTAG) created a final report summarizing its findings.


Social media governance: can social media companies motivate voluntary rule following behavior among their users?

An article in the Journal of Experimental Criminology (2019) by Tom Tyler, Matt Katsaros, Tracey Meares, and Sudhir Venkatesh

The following social media governance commentary appears in the Justice Collaboratory's Collab in Action (CIA) blog.

The social media executive order: regulation of behavior not content


June 1, 2020
By Farzaneh Badiei

Last week Twitter flagged a tweet from the U.S. President as false. In response, the President issued an executive order “on preventing online censorship.” While we have observed the apparently self-defeating nature of this order, there is a bigger issue at stake.

The executive order tries to take away the immunity that Section 230 of the Communications Decency Act (CDA) granted Internet platforms. This weakening of platforms’ immunity is to take place through policies that the President has asked various government agencies to put in place. The executive order claims that it intends to protect freedom of expression, apparently ignoring that Section 230 has been a major contributor to online freedom of expression.

The primary intention behind the executive order appears to be to hold platforms liable for their take-down decisions. The White House argument is that de-immunizing the platforms would open them to lawsuits, and that the threat of liability would deter them from making unfair decisions. In this way, the President proposes to protect freedom of expression.

But Section 230 has, in fact, enabled freedom of speech by granting the very immunity now under attack. Under Section 230 as so far understood, a platform or Internet service provider (ISP) is not liable for content it hosts: it is treated as a simple intermediary for someone else’s (i.e., the original poster’s) publication. If platforms are at risk of liability, they will have to worry about every piece of content that users create on their platforms. This is why some have expressed concern that the executive order could lead to censorship. The order would seem to increase the probability that the President’s own tweets would be subject to removal.

One major point that is not discussed in these debates is that the executive order is not only about regulating content and speech. It is about regulating our behavior on the Internet: the behavior of various Internet actors, online platforms and online users alike.

Through hierarchical and punitive means, the government wants to regulate online behavior. It wants to deter online platforms from self-regulation. By taking punitive measures, it can incentivize platforms to turn into exclusive clubs for a certain audience, or even to suppress user-generated content. It could take away the distributed content generation that allows everyone on the Internet to innovate and build, leaving us with even fewer creative tech giants.

This approach will hamper self-regulation both at the platform level and at the community level. Online actors have so far been governing themselves to some extent. They have also been innovative in their approach to governance (not necessarily pro-social, but still innovative). If this executive order stands, however, online actors will likely hesitate to regulate themselves, since they could be held liable for their take-down decisions and for their governance structures.

The order also undermines the self-regulation of communities. Online platforms give their communities some autonomy to govern themselves. Online communities (like subreddits on Reddit) can set their own policies. If platforms can be held liable for taking content down, they cannot allow their communities to govern themselves and will have to impose rules from the top.

Overall, the executive order is bad for the Internet. It is based on deterrence and punitive tactics. It tries to regulate our online behavior: how online platforms run, how we choose among them, and how and whether we govern ourselves. Citizens should be given the opportunity to govern themselves and to decide, in the long run, what they find acceptable online.

Pro-social media and COVID-19 disinformation


April 6, 2020
By Farzaneh Badiei

The following is a commentary from the Justice Collaboratory’s Director of Social Media Governance Initiative Farzaneh Badiei.

In the face of COVID-19, social media platforms are adopting various approaches to govern their users’ behaviors. Most platforms are carrying on with typical response techniques they believe to be effective, like blocking users, removing content, or using automated enforcement for bulk removal. However, some progressive platforms are taking pro-social approaches to address COVID-19.

Pro-social governance promotes action that guides people to behave in a way that benefits others. This can be as simple as encouraging people to follow the rules and, in doing so, influencing others to follow the rules too. In this health crisis, we want to highlight some of the pro-social initiatives that have emerged but may have gone unnoticed.

  1. Twitter and Pinterest: Twitter is helping people follow rules by deactivating suggested key terms that would take them to non-credible sources. Similarly, Pinterest limited its search results to verified health news channels only. This might reduce potential exposure to shared disinformation, which we hope prevents data voids.
  2. Nextdoor: Nextdoor is encouraging people to help one another in their neighborhoods. It recently launched a “help map” that allows users to add themselves to a map announcing what help (services) they can offer. The interesting part of this feature is that the users inspired Nextdoor to launch the help map: there was an organic increase in offers to help on the platform after the spread of COVID-19. This is a great example of how existing pro-social attitudes among users can influence and find their way into a platform’s feature set.
  3. Facebook: Facebook continues with its standard block-and-remove tactics but has reduced its content-review workforce. As a result, it conducts fewer content reviews and relies on other methods. For example, it uses third-party fact checkers to rate false information.
  4. Joint tech-company hackathon: some tech companies have also started a hackathon competition for developers to build software that drives social impact. This is worthy, but it may be tricky. It is not clear what they mean by “social impact,” and the competition seems to encourage the tech community to wade into pro-social efforts for public health without including health, public policy, or legal experts. This shows that “just being pro-social” is not as easy as it might sound, and tech companies need to take a more holistic approach in their pro-social efforts.

The difference between an innovative pro-social approach and the standard punitive approach is that pro-social approaches are designed to encourage people’s basic desire to connect. Punitive measures focus on the outcome and the technology; this model of governance prevails because social media platforms believe that punitive methods are effective and measurable. In contrast, pro-social measures focus on people and their social interactions, but their effect is gradual and perceived as hard to measure.

To foster pro-social initiatives and embed them as the prevailing governance approach, platforms should highlight them, deploy them, and provide methods to measure their effect. It is equally important to illustrate that quick, technical fixes (like removing content with artificial intelligence) are often not effective in the long run — especially when they try to address a deeper social problem.

At the Justice Collaboratory’s Social Media Governance Initiative, we are hopeful that social media platforms will continue deploying pro-social initiatives. With the help of our network of scholars and platform partners, we aim to follow these developments, provide a pro-social governance narrative, and use serious science to measure their impact.

The Pro-Social Movement Starts Here


February 18, 2020
By Farzaneh Badiei

The Justice Collaboratory’s Social Media Governance Initiative (SMGI) aims to shape and lead a concerted pro-social movement for social media platforms. We want to encourage online decision makers to promote cooperation and enable communities to advocate for social norms and moral values that advance civil and civic society. Such actions can enhance trust in and legitimacy of decision makers and help promote better governance.1

A few platforms are already taking steps toward pro-social action. For example, Nextdoor has a “community vitality” project to support civil discourse and help build meaningful connections. Recently, during the coronavirus crisis, Twitter decided to disable its auto-suggest results about the issue so that people do not receive or share potentially misleading information. Despite the potentially positive effects of such initiatives, pro-social action sits low on the list of solutions for governing people’s behavior on the Internet. Platforms continue to resort to suspending, blocking, deleting and eliminating users as their main regulatory mechanisms. Punitive measures are inadequate to fight terrorism, disinformation, harassment and other problems. We should foster pro-social behavior so that we can keep a global, pluralistic Internet together.

To help the social media world foster pro-social behavior, SMGI will provide research-based evidence for social media platforms and self-regulating online communities. We want to discover the problems platforms and online users and communities are struggling with and evaluate the effect of various pro-social initiatives on these platforms using theory and empirical research.

We plan to report periodically about pro-social initiatives taken by online communities and social media platforms that shape the pro-social landscape. The landscape can give us insights into more creative and less punitive governance mechanisms. We aim to document these kinds of initiatives across the Internet and strengthen the pro-social movement by evaluating these initiatives through research.

To encourage more cooperation in this field between the academics, civil society and social media platforms, we are building a network that includes those interested in the pro-social initiative movement. The network will provide a space for scholars, activists and social media platforms to collaborate and get regular updates about innovative pro-social solutions. Join our movement!  Contact us to know more at smgi@yale.edu

1 For more information about legitimacy and trust, see Tyler, Tom R., Why People Obey the Law (Princeton University Press, 2006), and Meares, Tracey L., “Norms, Legitimacy and Law Enforcement,” Oregon Law Review 79 (2000): 391.

Govern Fast and Break Things


December 4, 2019
By Farzaneh Badiei

Social media platforms that lack legitimate and coherent governance are prone to be called upon by various authorities to generate a quick, outcome-oriented solution in the face of catastrophes. The reactions are usually to online incidents and discoveries — for example, a sudden discovery of a violation of privacy, or some atrocity that spread online. When platforms act in response to these calls without having a legitimate governance mechanism in place, their responses are ad hoc and without a strong foundation. This can negatively affect the Internet, global online platforms and online communities — and might not even resolve the issue.

To address the problem, we need to encourage platforms to establish governance mechanisms, informed by various governance strategies, such as procedural justice. Procedural justice requires the decision makers to be neutral and transparent, to have trustworthy motives, and to treat people with respect and dignity.

Governance differs from the various processes and policies that platforms already have. We can define governance as the structure through which the rules that govern our behavior online are themselves generated, enforced and reformed. If governance does not apply coherently across the ecosystem of platforms, we will face a patchwork of ineffective solutions. We will also see more half-baked initiatives that governments or others try to impose, or that platforms adopt themselves.

To illustrate the importance of having a legitimate and coherent governance mechanism, we (at the Justice Collaboratory) will publish a series of commentaries. The commentaries will be related to issues for which social media platforms do not have a governance solution, and the efforts that are taking place to overcome those issues. The framework we use is based on the concepts of governance, legitimacy, and procedural justice.

The starting point, in this blog, is the Christchurch Call to Action (aka the Christchurch Call). The Christchurch Call is an initiative led by the New Zealand and French governments to “eradicate terrorist and violent extremist content online,” launched after the mass shooting at two mosques in Christchurch, New Zealand on March 15, 2019. Some tech corporations and governments made several commitments that ranged from terrorist content removal to providing due process. To date, three international organizations and 48 countries have joined the Christchurch Call (the United States remains conspicuously absent from this list).

An outcome-oriented solution

Despite the emphasis on due process and human rights, the ultimate goal of the Christchurch Call is to reduce and limit the amount of terrorist and extremist content. Some governments involved with the initiative believe that removing content is a step toward fighting extremism. For example, the Prime Minister of Canada believes that:

“Terrorist and extremist content online continues to spill out into the real world with deadly consequences. Today, we are stepping up to eliminate terrorist and violent extremist content online. Together, we can create a world where all people – no matter their faith, where they live, or where they are from – are safe and secure both on and offline.”

It is alarming that there is such a major focus on the outcome (the elimination of terrorist and violent extremist content). The Christchurch Call contains commitments to due process, but they are weak. There is no strong focus on the need for a legitimate decisionmaking process. People do not perceive governments’ actions to be legitimate just because they are governments. This is especially true when governments use mechanisms that are not democratic, such as “cooperation” with tech corporations.

Reforming a top-down approach

Top-down approaches do not inherently generate distrust in users, but the processes must be transparent, give people a voice, treat users with dignity, and convey trustworthy motives or intentions. These attributes did not have a strong presence in the process that led to the Christchurch Call. The initiative was the governments’ attempt to quasi-regulate social media platforms – not through legislative efforts but through opaque cooperation with tech corporations. New Zealand and France got together with a handful of tech corporations such as Microsoft, Facebook, and Twitter, and negotiated a text that was not revealed publicly until very close to its adoption. In less than six months it went from meetings and talks to a process that will be implemented in services on which billions of users depend.

The Christchurch Call was a top-down process from the start, despite the efforts to include others. That is problematic because it allows big corporations to please governments by offering temporary relief for serious social problems. It would be better instead to focus on legitimate governance, which would provide a way to address those problems systematically. By asking tech corporations to regulate their users with no coherent governance mechanism in place, governments endorse these platforms’ business models. And because governments endorse a handful of companies’ approaches, those corporations become more powerful, to the point that no one else can compete with them. This is when companies can use such authoritative, incoherent ways of providing governance as brand promotion.

The Christchurch Call has been trying to be inclusive, to give communities a voice, and to address governance-related questions during the implementation phase of the commitments. The government of New Zealand has been trying hard to be transparent and to adopt a process that includes various stakeholders, is consultative, and considers Internet freedoms and human rights. However, the implementation plan has shortcomings. The governments and tech corporations had already decided on the institutional design of the organization that implements the Christchurch Call. Almost from the beginning of the Christchurch Call negotiations, the New Zealand government and tech corporations considered the Global Internet Forum to Counter Terrorism (GIFCT), an opaque industry-led forum, as the incubator of the commitments.

GIFCT is now becoming a permanent institution and has promised to change its structure. It has come up with an advisory committee that is to include civil society and governments, and a Board comprising tech corporations. The plan is still in the making, but questions about the “advisory” role envisioned for civil society, the transparency plans, and many other governance issues remain unanswered. There are various working groups that members can join. These working groups might partly address the concern about a coherent approach to governance, but several issues remain: the academic working group’s mandate is not clear, the algorithm working group is outcome oriented rather than governance oriented, and the technical working group is separated from its policy consequences. All final decisions are still to be made by the GIFCT Board, where votes are reserved for members (who all must be tech platform operators).

The conveners of GIFCT have promised that if civil society and stakeholders participate, they can answer the process-related questions together. However, the tech corporations and governments have already decided what role others should assume in this institution. To discuss whether GIFCT’s approach is legitimate, participants need to “go inside the room,” assume the advisory role, and thereby implicitly grant the institution legitimacy in the first place.

This month (December 2019), various stakeholders are gathering to test an operationally ready “shared crisis response protocol” that GIFCT companies will follow. It is not clear through what processes the New Zealand government, law enforcement, and civil society developed the protocol. It is also not clear how much of the feedback received during the meeting will be taken into consideration. Despite promises that this will be a living, evolving protocol, it is uncertain how users can seek reforms.

Governments and tech corporations cannot seek governance legitimacy by involving stakeholders as afterthoughts. Communities – the users of these platforms – have no say in how governance institutions should be built. Communities will not be in charge of building institutions, and they will not be involved with running those institutions, but (with this kind of government support) everyone will end up subject to those institutions.

Imagine online platforms with a legitimate governance mechanism

To make it clearer what a coherent, legitimate governance mechanism could look like, let’s imagine that platforms had a “governance mechanism” in place that could be used to respond to the abhorrent live streaming of murder and terror that precipitated the Christchurch Call. I postulate that the platforms could have responded to the Christchurch attack much more effectively if they had considered the following governance components in their processes (and in advance):

  • A way to include communities in decisionmaking processes on equal footing;

  • A known mechanism to transparently suggest a policy approach by any of the stakeholders;

  • An avenue to challenge the existing policies with the possibility to reform them, preferably in a bottom up manner and at the request of people;

  • More effective enforcement of, and compliance with, the policies through greater inclusiveness, treating people with dignity and explaining the rationale behind decisions.

What’s next?

The collective approach to social media governance should move away from being reactive and outcome oriented, and move toward strategies that can bring legitimacy to the decisionmaking process and shape, enforce and reform policies. Governance and procedural justice are usually set aside in these discussions or are not addressed cohesively. In my next blog post I will explain how the lack of a coherent governance mechanism also leads to the hasty adoption of outcome-oriented solutions through algorithms.


Facebook (still) lacks good governance


September 25, 2019

By Farzaneh Badiei

The following is a commentary from the Justice Collaboratory’s Director of Social Media Governance Initiative Farzaneh Badiei.

This week, Facebook made two key announcements, about combating hate and extremism online and about the establishment of an independent oversight board. The announcements were timely and strategic: on Wednesday, September 18, Facebook and other tech giants had a hearing at the US Senate about “Mass Violence, Extremism, and Digital Responsibility.” At the beginning of this week, there was a side meeting during the UN General Assembly about the Christchurch Call, a call to eradicate terrorist and violent extremist content online.

Facebook’s effort to address violent extremism through various initiatives, rather than through concrete social media governance solutions, is unlikely to achieve its goals. Social media governance solutions have to cover all three parts of the process through which Facebook asserts authority: policymaking, adjudication, and enforcement. Currently such governance arrangements are either inadequate or non-existent.

Procedural justice is necessary to maintain the legitimacy of the decisionmaker, but Facebook does not address this key theory of governance in most of the arrangements it proposes to launch. The main components of procedural justice (whether Facebook gives users a voice, treats them with neutrality, and explains the reasons for its decisions) are rarely at the center of its governance initiatives.

At Facebook, the policymaking processes for content take-downs are inconsistent with procedural justice because they are opaque, top-down, and reactive. The enforcement mechanisms do not go much beyond content moderation and the removal of accounts. The dispute resolution mechanisms give users little chance to be heard, and the outcomes do not fully explain why a decision was made.

The inadequacy, or absence, of governance mechanisms for dealing with extremist content is apparent from Facebook’s reaction to governments’ requests to combat extremism online. In its announcement, Facebook mentions the Christchurch Call as one of the main reasons (but not the only reason) for changing its policy on terrorism and dangerous content. The Christchurch Call is an initiative of the New Zealand and French governments, formed in the aftermath of the Christchurch attack to eradicate violent, extremist content online. The two governments negotiated a set of commitments with a few large tech corporations (including Facebook). The negotiations took place in a bilateral fashion, and the governments of New Zealand and France issued the Call without considering feedback from civil society and other stakeholders. Only certain tech corporations were in the room. Civil society, the technical community and academics were not consulted until after the commitments were agreed upon. It is worth noting that the New Zealand government has since been trying hard to include civil society in the implementation process.

During hearings and negotiations with governments, companies make promises that are mostly about the tactics and techniques of taking content down (most of the time promising to take content down automatically, using artificial intelligence). They are rarely about reforming the policy processes through which Facebook sets its community standards on content moderation and defines the conditions under which users are permitted or prohibited from using its services.

Perhaps Facebook leaders believe content removal is a more efficient solution than an elaborate decision-making system that embodies procedural justice from the beginning. It is true that content removal can give companies some impressive and tangible Key Performance Indicators (KPIs). In 2018 Zuckerberg announced that Facebook would proactively handle content with artificial intelligence. He also stated that Facebook proactively flags 99% of “terrorist” content and has more than 200 people working on counter-terrorism. In its recent announcement, Facebook stated that it has expanded that team to 350 people with a much broader focus on “people and organizations that proclaim or are engaged in violence leading to real-world harm.”

Presenting a large volume of content and account removals might provide some temporary relief for governments. However, removing accounts and content is not, on its own, a governance mechanism. While some content needs to be removed urgently, the decision to remove content, or to ban certain organizations and individuals from the platform, should be made through a governance mechanism that is procedurally just. Investing in a system that issues fair decisions and upholds procedural justice will yield better results in the long term: there will be fewer rule violations, and users will perceive the process as legitimate and self-enforce dispute resolution outcomes. Good governance demands a social infrastructure that can shape decisions from the front end.

Convening the oversight board was a step toward addressing the governance issue. Facebook invested a great deal in creating the board, in consultation with various organizations around the world. Such efforts are indeed commendable, but not sufficient. The oversight board is only tasked with resolving cases of content removal that are especially complex and disputed, either by Facebook or by users. The volume of take-downs is very large, and the board will handle only a limited number of cases. Moreover, the oversight board is charged with applying Facebook’s top-down policies. Thus, it is not clear how it can serve as a tool for holding Facebook accountable to its users.

An example of a top-down policy decision is Facebook’s recent addition to its definition of terrorism. During the Christchurch Call discussions, civil society and academics emphasized that we do not have a globally applicable definition of “terrorism.” Facebook acknowledges that this is a problem; however, since no clear governance structure sets a process for making policy decisions, it has discarded such feedback, come up with its own global definition of terrorism, and recently broadened that definition.

Setting a definition of “terrorism,” applying it globally, and expanding it at the request of governments or in the aftermath of a crisis illustrates that Facebook does not have an adequate governance structure in place to respond to these requests through legitimate processes.

Policymaking is a major part of social media governance. Having an independent board to resolve disputes does not solve the problem of top-down policies and opaque policymaking processes. Social media platforms are in need of governance, and an increase in the number of content take-downs is not the best measure of progress in combating extremism.