Moderate Globally, Impact Locally: Better Transparency Reporting Can Shed Light on Russian Internet Censorship

Over the last decade, the Russian internet has evolved from being freely accessible and largely uncontrolled into a system over which the Kremlin has established very tight control, molding it to its taste. The government can censor and block a wide range of content, control the actions of platforms and users, and employ the internet as a communication tool to its own advantage. And all of this happens with minimal oversight and little opposition from the RuNet, the Russian-language internet community.

In its efforts to exercise control over the internet, the Russian state has come to rely on the private internet sector as its “proxy agent,” enforcing compliance with state demands as it deems necessary. To give a few examples of this relationship: online communication services must provide law enforcement authorities with encryption keys, install surveillance equipment that permits the authorities to intercept online communication 24/7, and store Russian users’ data locally. Likewise, telecom companies must monitor the state register of illegal information, promptly block access to hundreds of thousands, if not millions, of URLs, and collect, store and provide both the metadata and content of user communication to law enforcement authorities at their request.

Russian tech companies, like Yandex and Mail.ru, disclose very little information about how they cooperate with law enforcement authorities. Many foreign tech companies, on the other hand, regularly publish transparency reports where they disclose aggregate data regarding the Russian government’s requests for user information, content removals related to intellectual property and government demands to remove content for alleged violations of Russian law. The reports suggest that the Russian government has been trying to remove more and more content from these platforms in recent years—and that the companies have largely complied with the demands.

In a recent paper, I looked in detail at the transparency practices of Twitter, Google and Facebook—the three major online platforms—with respect to content removals initiated by the Russian government. How useful are these reports? Do companies disclose enough information to identify specific trends in content enforcement?

In addition to these transparency reports, I also analyzed the data these companies uploaded during the second half of 2018 (the latest period covered in their transparency reports as of my research) to the Lumen database, which compiles cease-and-desist letters concerning online content.

There are significant gaps in disclosures, as well as differences within the group: Google does not reveal government requests that targeted content on its YouTube platform; Twitter does not provide enough detail about government requests that resulted in content removal for violating its terms of service; and Facebook did not upload to Lumen, or otherwise disclose, the content of any government request it received during the analyzed period. My analysis of transparency reports covering 2019 shows that most of these deficiencies persist.

To remedy these gaps and make transparency reports more useful for users, civil society and the academic community, the platforms should make public all government requests for content removal that they process as such. Not only would this keep governments’ and platforms’ actions in check, it would also allow platforms to compare their practices and develop a more consistent approach among themselves for handling allegedly illegal content. And rather than reporting only the number of government requests, the companies should supplement this information with the number of items targeted by those requests and the number of users affected. As my analysis demonstrated, a single government request can target hundreds, if not thousands, of items.

When it comes to the categorization of requests, the current approach is too broad to be meaningfully informative. For example, the “national security” category in Google’s report sweeps in a wide range of requests and gives a patina of legitimacy to those that in reality may seek to block politically sensitive content, such as calls for public protests. The categorization should instead be specific to each country and should reflect how the local government itself categorizes the content it seeks to remove.

My research also revealed a problematic trend: the Russian government has repeatedly sought to limit the disclosure of its requests on Lumen or elsewhere. At the Russian government’s request, Google does not publish demands that seek the removal of tens of thousands of links from its search results, nor does it report whether and how it complied with them. In other words, even if the platforms wished to share more information, autocratic authorities may still limit their disclosure. The U.S. Congress can play an important role in remedying this problem by passing the Global Online Freedom Act or equivalent legislation. Under this proposed bill, internet companies that meet specific criteria would be required to disclose the number of government requests they receive, their content and the actions taken in response. If enacted into law, the Act would offer a strong legal defense against local autocrats’ demands to keep their interactions with the platforms secret.

According to the 2019 Freedom on the Net report, internet freedoms around the world have been deteriorating for nine consecutive years. They are increasingly imperiled “[...] by the tools and tactics of digital authoritarianism, which have spread rapidly around the globe.” In such an environment, shedding light on the censorship practices of governments like Russia is even more critical. In the end, better transparency, enforced by legal requirements, would help to ensure a more consistent approach to questionable content across platforms, to win over users and to hold government censors accountable.

Sergei Hovyadinov holds a doctoral degree from Stanford Law School. His recent positions include Open Technology Fund Senior Research Fellow at Ranking Digital Rights and legal counsel at Google, where he managed the company’s legal affairs in Russia and Eastern Europe.

This is the second installment in our “Moderate Globally, Impact Locally” series on the global impacts of content moderation. It originally appeared on Lawfare.