Moderate Globally, Impact Locally: Internet Content Moderation in Uganda – A Taxing Situation

As technology evolves, society is becoming increasingly interconnected. Some view this as a threat. Godfrey Mutabazi, formerly the Director of the Uganda Communications Commission, said at a digital regulation training that the regulator has to “enforce discipline” among internet users to ensure that undesirable content is filtered out. In practical terms, this has meant instructing telecommunications service providers to enforce two social media shutdowns, during the presidential elections in 2016 and again in September 2017, and barring live broadcasts of parliamentary proceedings on amending the presidential age limit. This content, of vital importance to public discourse, was deemed a national security threat. In July 2018, Uganda introduced a social media tax whose stated purpose was to combat “gossip content”. All of these measures have faced heavy criticism.

Despite these challenges, there is little question that social media will become even more influential and powerful in Uganda in the coming years. As elsewhere in the world, the Internet and social media have allowed Ugandans to exercise freedom of expression beyond boundaries, including by helping them evade restrictive government content controls. At the same time, they have brought new forms of control through content moderation, where platform administrators decide whether to host or continue hosting a specific piece of content, or how much prominence or priority to grant it, under their own form of emerging platform law.

In some ways, this is nothing new. Traditional media outlets like television and newspapers have always selected the information they broadcast or publish. Nevertheless, when social media companies engage in this activity, content moderation raises new and serious concerns. People need access to the digital environment to share their opinions and ideas on a global scale. The evolution of the algorithmic society, in which social media platforms govern the transmission of information, creates clashes because the public’s interests may diverge from those of the platforms.

The platforms are responsible for decisions such as how many and which groups of users are exposed to particular types of content, which forms of speech will have their reach and exposure boosted, and which will have their exposure demoted or limited. All of these decisions depend on the companies’ own rules and procedures.

Companies are not legally required to provide due process when they moderate content the way a government is. However, under the international human rights framework, they do have a duty to respect human rights in developing and applying their terms of service. Companies also have an obligation to provide access to remedies to the extent that their content moderation causes, contributes to, or is linked to human rights harms. In other words, when private companies set rules for content moderation, to establish minimal legitimacy they must ensure that the rules are legal, comport with international human rights principles, and are freely accepted by users.

Digital content companies are not built to moderate content at global scale. They often alienate and flatten the cultures of the markets where they operate. They have addressed problems of scale by hiring (or promising to hire) more moderators with language skills and local or regional political knowledge. This is important and a good start, but it is not enough.

Facebook recently put out a call for moderators with expertise in seven African languages, including Luganda and Swahili (both widely spoken in Uganda). It is important that these moderators be armed with proper tools to identify fake content. The expansion of human moderation brings its own pitfalls, such as requiring moderators to view sometimes disturbing material, like pornography and graphic violence, before deciding whether it can stay on the platform or violates the company’s content rules.

The best avenue forward for companies would be to open up their moderation processes and proposals to public comment. When they adopt new content rules, they should explain clearly how they arrived at the changes. They also need to tell users why they make particular content decisions and how those decisions can be appealed, and to provide more clarity into algorithmic decision-making, given how central these systems are to controlling expression.

In addition to developing more robust direct accountability to their users, another tool to support the adoption of human rights norms by platforms would be to subject their rules and decisions to industry-wide oversight and accountability through a “content council” developed in collaboration with civil society leaders, activists, and academics. These efforts to legitimize decisions about content moderation and its consequences are essential, given that decisions to remove content, suspend accounts, or ban a user can have ramifications not only for free expression but also for other fundamental rights, such as the right to associate, as well as for the enjoyment of economic, social, and cultural rights.

Dorothy Mukasa is a digital rights activist with Unwanted Witness Uganda, a civil society organization advocating for a free, open, and secure Internet for activists, netizens, bloggers, journalists, and writers as a platform for the realization of human rights and good governance.

This is the eighth installment in our “Moderate Globally, Impact Locally” series on the global impacts of content moderation.