Moderate Globally Impact Locally: The Countries Where Democracy Is Most Fragile Are Test Subjects for Platforms’ Content Moderation Policies

A bitterly contested election against a controversial and polarizing incumbent. Allegations of voter fraud, systemic disenfranchisement, and widespread concerns about the fairness of the process. And in the background, major social media platforms struggle to manage the public discourse, seeking to overcome a history of catastrophic failures.

While this description may sound familiar to American audiences, it could just as easily refer to Myanmar’s recent election, which took place against a backdrop of online misinformation and hate. In both cases, it is too early to judge the success of the social media platforms’ efforts to support election integrity. However, if the platforms’ content policies prove effective in the U.S., Americans can thank their counterparts in the Global South, who in many cases served as test subjects for their development.

It’s good to see that platforms appear to be learning from their mistakes. However, it is obviously problematic that the countries where democracy is most fragile are on the front lines of this learning curve. When things go wrong there, the results can be an order of magnitude worse than anything that America is likely to experience, as the violent dismantling of democratic structures in the Philippines and Brazil illustrates. Therein lies the tension inherent in a moderation system that governs political discourse all over the world but is disproportionately focused on impacts in the U.S.

There are no easy answers for where the line between permitted and prohibited speech should be drawn. Moderation systems are prone to errors, particularly when they rely on automated decision-making. The platforms have to calibrate how much collateral damage to “legitimate” voices is acceptable versus how much harmful content is likely to slip through the cracks. Stronger enforcement means less hate and misinformation getting through, but a greater probability that, say, activists protesting police brutality or journalists exposing corruption will find their accounts disabled. Moderation policy thus involves a cost-benefit analysis, in which platforms must decide how aggressively to enforce their rules.

This is always going to be a difficult balance to set, but it’s made vastly harder by the differences across local contexts that are subject to the platforms’ content moderation systems. A racially charged statement in Canada might cause psychological harm, but in Sri Lanka, it might lead to lynchings and communal violence. As recently as August, violent clashes in Bengaluru, India, were triggered by a Facebook post about the Prophet Muhammad. The potential harms, in other words, vary enormously.

On the other hand, in a place like Myanmar, Facebook’s stranglehold over the online space means that its decisions to crack down on debate can crush the political discourse around a particular issue. Likewise, the relative scarcity of independent traditional media outlets across much of the Global South means that people have few options outside of social media for finding fresh sources of information. Moderation systems tend to have a discriminatory impact on marginalized voices, which scales up as the rules are enforced more aggressively. For example, the hard line that platforms have assumed in combating “terrorist speech” has had an outsized effect on Arabic-speaking users. As a result, journalists and civil society voices who operate in more repressive corners of the world may have difficulty getting their messages out. In one particularly infamous case, YouTube’s drive to remove extremist content led to the destruction of documentary evidence of human rights abuses in Syria.

So, given that the consequences of both under-moderation and over-moderation can be dire, how do you capture all of these diverse contexts under a single moderation structure?

As is often the case in conversations around improving responsibility in the tech sector, the first step is transparency. In order to meaningfully improve a system, you must understand what problems it faces. For the platforms, this means engaging with communities that are on the sharp edge of these decisions.

Unfortunately, while there is no shortage of bright minds from every corner of the world engaged in content moderation debates, civil society and academic observers face enormous challenges understanding how policies are applied locally. Despite the proliferation of transparency reporting across major tech companies, their communications strategies are still developed with a focus on the United States. This leaves observers in the Global South in the difficult position of picking through a patchwork of conflicting announcements, blog posts, and press releases to try to understand how the evolving policy positions might apply to them.

The lack of predictability is a related challenge. For all their work to develop and expand their content policies over the years, the major platforms still have a tendency to change things up on the fly. For example, both Twitter and Facebook adopted a relatively ad hoc approach to the dissemination of a dodgy story focusing on Hunter Biden. While a willingness to adapt and improve is generally an admirable quality in a governance structure, in practice it means the platforms end up drifting according to America’s political winds. Most laws, and particularly those impacting speech, include a transition period before they come into effect. If platforms committed that, outside of a pressing emergency, changes to their rules would likewise take effect only after a period of a few weeks, and would not apply retroactively, it would go some way toward reducing the tendency to redraw policies based on the news of the day.

More broadly, legislators need to understand that this is a global conversation, and that debates in the United States and Europe can ripple outward and have disproportionate impacts on human rights in the Global South. When Germany passed the Network Enforcement Act to try to combat the platforms’ hosting of illegal content, it spawned a raft of copycat legislation among more autocratic governments. Pakistan’s proposed “Citizens Protection (Against Online Harm) Rules, 2020” would grant its government the ability to order any material removed within 24 hours. Russia’s fake news law, which explicitly refers back to the Network Enforcement Act, grants that government similarly broad powers. Likewise, American debates around intermediary liability are often used as a domestic political cudgel. However, these conversations, and the grandstanding that accompanies them, can send a dangerous signal to countries like India and Turkey, which seek to exert pressure on platforms to deny expressive space to their political opponents.

While many will welcome the termination of Donald Trump’s presidency in 2021, few will be more relieved than the social media executives saddled with imposing moderation decisions on the most powerful man in the world. But while the most visible manifestation of global moderation challenges may be on his way out the door, the underlying tension, and the accountability deficit around the platforms’ role in governing the online discourse, remains. As these debates continue, it is important to consider them as global conversations, which should involve stakeholders who are on the front lines of moderation decisions around the world, in order to reflect the role that major platforms play in curating speech beyond America’s borders.

Michael Karanicolas is the Wikimedia fellow at Yale Law School, where he leads the Initiative on Intermediaries and Information.

This is the tenth, and final, installment in our “Moderate Globally, Impact Locally” series on the global impacts of content moderation. It originally appeared on Slate.