Moderate Globally, Impact Locally: Content Moderation Is Particularly Hard in African Countries

Until last year, a majority of Sudanese had lived their entire lives under the presidency of Omar al-Bashir. Africa has 16 of the 48 longest-serving leaders in the world, including the world’s longest-serving nonroyal leader, Equatorial Guinea’s Teodoro Obiang Nguema Mbasogo, who has been in power since 1979. These regimes also share a common record of gross human rights violations, including violations of digital rights and freedom of expression online.

But things are changing. On April 11, 2019, the Sudanese people finally succeeded in dislodging al-Bashir’s regime after protracted mass protests that spotlighted grave economic hardship and were mobilized through campaigns on platforms like Facebook and Twitter. Similar protests between February and April 2019 prevented Algeria’s president, Abdelaziz Bouteflika, from running for a fifth term. These protests, too, were supported by social media.

However, the continent’s dictators have adopted their own digital strategies. A 2019 study from the Oxford Internet Institute revealed that governments in at least seven African countries—Angola, Egypt, Eritrea, Ethiopia, Rwanda, Sudan, and Zimbabwe—have started deploying information-control tactics to suppress, discredit, or drown out dissent on platforms. The study found similar social media manipulation by major political parties in South Africa, Nigeria, and Kenya. In Nigeria, for example, the government carried out online attacks on opposition parties and conducted smear campaigns, while in Zimbabwe the manipulation combined pro-government and pro-party propaganda, attacks on opposition parties, and suppression of participation.

When they are unable to control the discourse through manipulation, such governments often employ even less legitimate tools. A 2019 report by the Collaboration on International ICT Policy for East and Southern Africa found that at least 22 African countries had carried out an internet shutdown in the prior four years. The Democracy Index from the Economist Intelligence Unit categorizes 17 of those countries as authoritarian, while the rest are hybrid regimes with poor records on democratic rights. Governments typically claim that these network disruptions are necessary to protect against disturbances to public order, especially during major political events. However, such shutdowns are never justified.

Much grassroots political activism in Africa relies on U.S.-based social media platforms. However, these platforms also play host to state-backed manipulation efforts and can be subject to draconian shutdowns if the political dialogue goes awry for African governments. As a result, these companies are caught in a precarious position. To make matters even more difficult, there is an inescapable tension between the platforms’ desire to apply global standards to content moderation on one hand and to defer to local contexts when moderating content on the other. For example, Facebook has stated that it will comply with local laws when reviewing government requests. This raises further questions about what happens when those local laws are incompatible with international standards.

These are complicated challenges everywhere in the world, and there are no easy answers. But a combination of factors may make content moderation particularly difficult in most African countries, including colonial legacies, authoritarian governments, and shrinking civic space. In many African countries, questionable colonial-era laws are being transplanted into cybercrime legislation, leaving platforms to comply with problematic laws that clearly violate free speech online. For example, Ethiopia’s Computer Crime Proclamation is tethered to its criminal code, which was adopted in 1949 and has telltale colonial criminal law provisions, like criminal defamation and criminalization of false news. Furthermore, Ethiopia’s Hate Speech and Disinformation Prevention and Suppression Proclamation of 2020 requires platforms to police content by giving them 24 hours to take down disinformation or hate speech. Nigeria is currently considering a bill on disinformation that would also place undue pressure on platforms to police content.

Social media companies’ lack of transparency around content moderation decisions can exacerbate existing political tensions. For instance, in June, Facebook deactivated the accounts of as many as 60 activists in Tunisia, where the social network is extremely popular. Some were later restored, and Facebook issued a statement to the Guardian saying: “Due to a technical error we recently removed a small number of profiles, which have now been restored. We were not trying to limit anyone’s ability to post or express themselves, and apologise for any inconvenience this has caused.” But in a country where, as activist and journalist Emna Mizouni told the Guardian, “the internet equals Facebook,” describing the removal of activists’ accounts as an “inconvenience,” intentional or not, is troubling. Content moderation in African nations is also difficult because of language barriers. As a 2019 Reuters report described, Facebook’s community standards are not always translated into local languages. In fact, Facebook didn’t have an office on the continent until 2015.

Given that authoritarian governments are on the rise in Africa, platforms may also have to deal with even more state-sponsored coordinated attacks like those already seen in Nigeria and Zimbabwe. While these sorts of attacks are happening around the world, they pose a greater threat in Africa given the continent’s already shrinking civic space, as governments tend to combine repressive laws with authoritarian practices to pressure platforms.

Therefore, we need a more nuanced approach to ensure a rights-centered model for content moderation in African countries. A number of regulatory models have been suggested so far, including legislation, self-regulation, and co-regulation (which is basically “self-regulation with a regulatory backstop”). However, the most popular has been the multistakeholder approach, which seeks to drive participation in content moderation policies at a more granular level. Human rights experts tend to prefer this approach because it gives citizens more opportunity to take part in the process.

An example of this approach is Article 19’s Social Media Councils. Ideally, these councils would offer a governance system with a diverse range of expertise and perspectives, involving key stakeholders like governments, civil society, the private sector, academia, and others. This system hasn’t been deployed yet, but it is promising and would work well with the regional human rights system in Africa. For example, in 2019, the African Commission on Human and Peoples’ Rights published a new declaration on freedom of expression and access to information in Africa, including calls for multistakeholder engagement to settle key governance questions and for both states and private companies to adhere to a strict human rights–based approach in designing content moderation policies.

Considering the Ethiopian and Nigerian examples, platforms will be in for a difficult time moderating content in Africa. It is promising that Facebook’s new Oversight Board includes some members from Africa. But that is just a start in the long process of making sure that the people whose voices are being moderated have a say.

This is the fourth installment in our “Moderate Globally, Impact Locally” series on the global impacts of content moderation. It originally appeared on Slate, in connection with their Future Tense series.