About this blog

In addition to academic publications and events, the Wikimedia/Yale Law School Initiative on Intermediaries and Information pursues a diverse research agenda related to emerging issues in internet governance, the right to information, digital rights, privacy and data protection, and content regulation online.

This space is a home for commentary and shorter-form discussions related to these issues, as well as a central repository of written works produced as part of the WIII program.

The views expressed on this blog belong to the author(s) and do not represent the views of Yale Law School or the Information Society Project.

RightsCon Debrief: No content moderation without representation

August 9, 2020

This is the first of three articles drafted by the WIII Initiative’s summer researchers, reflecting on sessions they attended at this year’s virtual RightsCon.

Social media platforms are global content moderators. Facebook, Twitter, YouTube, Reddit, and TikTok—companies that reach billions of users in countries around the world—set rules about what content is allowed on their platforms and what is not. They also build the algorithms that determine which content gets promoted to the top of users’ timelines and which gets demoted. We can debate whether they are ‘arbiters of truth,’ but they certainly have a significant impact on which facts users around the world get to learn, which they get to ignore, and how they interpret them. Yet many of these countries, especially those in the Global South, have been relegated to passive spectators of a rulemaking process that is mostly executed by American companies (with Chinese-owned TikTok as a salient exception, for the moment). These platforms set standards that are conceived in an American reality and applied everywhere without a proper understanding of local linguistic and cultural contexts. During RightsCon 2020, speakers who participated in content moderation panels had a clear message: the Global South must be properly considered in the content moderation process.

Already by 2016, the spread of online disinformation for political purposes had been observed in Ukraine, Turkey, Mexico, and Syria (Woolley, p. 19). Yet the negative impact that social media platforms could have on democratic processes was minimized until the extent of Russian interference in the 2016 US presidential election came to light. Even Mark Zuckerberg initially called the idea that Facebook impacted the election “crazy,” an attitude he later admitted regretting. Facebook, like other platforms, has launched multiple efforts to address the issues that were detected in the US elections. But while election interference in the US has to be addressed, as David Kaye underscored on his last day as UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, it is just as important to address it in every other country where the democratic process has been affected.

Social media platforms have initiated efforts to address their impact on elections in countries outside the US. However, there is a lack of transparency around what actions they have taken. Agustina del Campo, Director of the Center for Studies on Freedom of Expression and Access to Information (CELE) at Universidad de Palermo, in Argentina, explained how her team of researchers has found it difficult to identify how the measures that platforms adopted after the 2016 US elections are being applied in Latin America. Mona Elswah, Researcher at the Computational Propaganda Project of the Oxford Internet Institute, posed the same question in the Tunisian context. It is clear that Facebook is working on its impact on elections in the US, but what are its plans for elections in Tunisia? She also noted that some resources developed by Facebook, such as the library of Arabic political ads, do not work in many countries.

The lack of transparency extends beyond election-specific measures to the global rules that platforms set more broadly. Del Campo noted that there is little documentation of how rules are applied locally by social media platforms. This opacity is exacerbated by the fact that platforms publish their rules and criteria across a number of websites. Evidence of policy direction may be found not only in their terms of service and community standards, but also in blog posts by various employees or teams, and in media commentary by senior executives. There are also internal content moderation guidelines that are not made public at all.

In a conversation with David Kaye, Maria Ressa underscored that understanding the diverse communities around the world takes time. However, internet platforms did not invest in the people or in deepening that understanding; they focused instead on exponential growth. The Filipino-American journalist and CEO of Rappler, who is a victim of political persecution by the Duterte government, noted that platforms have started to hire people who speak local languages, but that this is not enough.

It is interesting to observe these calls for American companies to do a better job of globalizing their operations at a time when the US is investigating TikTok for national security reasons. The security concerns include the possibility that the platform is removing content at the behest of the Chinese government. The concern about foreign influence on the company’s content moderation practices, and about the impact of those policies on American democracy, mirrors the global concerns around American platforms. What are the rules that are being applied? Are they being applied with proper consideration of the local context? Should citizens in the Global South not have a voice in how public discourse is moderated in their country? In the US, there is now significant pressure on ByteDance, which owns TikTok, to sell the company. What if countries around the globe start taking a similar approach and demand the divestment of American platforms’ local operations? This seems far from ideal, but in the absence of genuine transparency around the rules and effective consideration of local contexts, arguments for a more heavy-handed approach to controlling online speech will only grow stronger.