Newly Published Citizens Protection (Against Online Harm) Rules Are a Disaster for Freedom of Expression in Pakistan
Pakistan’s newly published Citizens Protection (Against Online Harm) Rules, 2020 (the Rules) pose a serious danger to free speech in the country. Ostensibly, the Rules aim to curb harmful online content, such as hate speech, harassment, and misinformation. However, the breadth of the Rules’ restrictions, and the intrusive requirements they place on social media platforms, would severely threaten online freedom of expression in Pakistan. The Rules would grant unprecedented censorship powers to a newly appointed National Coordinator, and they would make it all but impossible for international platforms to offer services to end-users in Pakistan.
Pakistan promulgated the Rules under two laws, the Pakistan Telecommunication (Re-organization) Act, 1996 and the Prevention of Electronic Crimes Act, 2016. They include a number of new requirements for social media platforms, as well as a system for assessing and blocking content that is deemed to violate Pakistani law. The Rules are dated January 21, 2020, and according to s. 1(2), they are slated to come into force “at once”. This, in itself, is cause for concern. A law with such sweeping application would normally allow for a reasonable transition period. For instance, the European Union’s General Data Protection Regulation (GDPR) allowed for a transition period of two years.[1]
The short transition period is particularly concerning given that the Rules include implementation requirements which are vastly more challenging and intrusive than those contained in the GDPR. For example, s. 6 of the Rules requires companies to provide any information contained in a system which they own, manage or run to government investigators upon request “in decrypted, readable and comprehensible format”. From a security standpoint, it is better practice for companies to store sensitive information in a manner which makes it technically impossible for them to access it in decrypted form. Developing the ability to respond to the s. 6 requirement would necessitate an enormous shift in the way that many platforms operate, including abandoning any use of end-to-end encryption. These changes would massively undercut security and privacy for global users in a way that platforms would be unlikely to accept.
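To illustrate why s. 6 is incompatible with end-to-end encryption, the following sketch (a hypothetical example using the open-source PyNaCl library, not drawn from the Rules or from any particular platform’s architecture) shows the basic model used by end-to-end encrypted services: encryption keys live only on users’ devices, so the platform relaying the messages holds nothing it could hand over “in decrypted, readable and comprehensible format”.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only: real messaging protocols layer key exchange,
# ratcheting and authentication on top of this basic model.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; the private keys
# never leave those devices and are never seen by the platform.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message for Bob locally, before it is ever sent.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The platform stores and relays only this ciphertext. Holding no
# private key, it cannot comply with a demand for the plaintext.
stored_on_server = ciphertext

# Only Bob, with his private key, can recover the message.
plaintext = Box(bob_key, alice_key.public_key).decrypt(stored_on_server)
assert plaintext == b"meet at noon"
```

Complying with s. 6 would mean re-architecting such a service so that the platform retains access to every key, which is precisely the wholesale weakening of security and privacy described above.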
Certain requirements in the Rules do come with an implementation timeline, though these are, if anything, even more intrusive. For example, s. 5(b) requires that all “social media companies”, defined broadly and without any minimum user threshold, establish a permanent physical office in Pakistan within three months. While this may seem reasonable if applied to multi-billion dollar operations like Facebook, the Rules apply to virtually every interactive online service, no matter how small. In addition to broader questions as to the necessity of this requirement, it is obviously unreasonable to expect that every start-up which offers services over the global Internet should open a physical office in Pakistan.
Section 5(d) requires social media companies to adhere to data localization requirements within one year of the Rules coming into force. While this is ostensibly done to protect the privacy of Pakistani Internet users, there is no evidence that data localization rules are an effective tool in this regard. Although it is not unheard of for democratic governments to implement rules on cross-border data flows, these are typically imposed only to the extent that foreign jurisdictions have weaker privacy rules than the home territory. The purpose of such controls is to ensure that a common data protection standard is enforced. Given Pakistan’s lack of robust privacy protections, a localization requirement by itself will do nothing to support privacy for Pakistani Internet users.
Overall, the cost and difficulty of complying with these three requirements alone would make serving the Pakistani market so burdensome that, in practical terms, the likely result of enforcement would be for major online platforms like Google and Facebook to pull out of Pakistan altogether, rather than risk their global operations by adhering to the Rules. The challenge would be even greater for smaller and less well-resourced companies. It goes without saying that the loss of virtually every major international technology company would have a catastrophic impact on freedom of expression in Pakistan, not to mention the country’s economic development.
Another concerning aspect of the Rules is the level of unchecked power granted to the National Coordinator, a newly established office appointed by notification of the Minister of Information Technology and Telecommunication. The National Coordinator, along with the Pakistan Telecommunication Authority, is essentially granted discretionary power to declare that content violates the law and should be subject to blocking or deletion.
Under international human rights law, all rules impacting online speech must comply with the standards spelled out in Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which Pakistan ratified in June 2010.[2] Granting regulators broad discretion over the restriction of speech is widely understood as being incompatible with Article 19.[3] The unchecked authority granted to the National Coordinator, and to the Pakistan Telecommunication Authority, is a gross violation of this standard. The proposed powers would be virtually unprecedented among progressive democracies, which generally require that restrictions on speech be applied by authorities that operate independently and at arm’s length from political actors, such as judges.
From a freedom of expression perspective, the standards for blocking content are also problematic. Section 4(4) requires social media companies to deploy proactive enforcement mechanisms designed to prevent the livestreaming of “online content related to terrorism, extremism, hate speech, defamation, fake news, incitement to violence and national security”.
First and foremost, it should be noted that mandatory proactive content filtering systems are considered a form of prior censorship, and are therefore generally not justifiable as a restriction on freedom of expression.[4] While platforms have had some success using filtering technologies to target child sexual abuse material (CSAM) and copyright-infringing works, it is simply not possible to apply these technologies to contextual categories of speech like defamation, hate speech, fake news, “extremism”, or incitement to violence. Determining whether a particular image constitutes CSAM is a relatively simple assessment which can generally be made based solely on the content of the image; questions around the context in which the image is shared, the intentions of the author, or the facts around its creation and distribution are not especially relevant. By contrast, hate speech, defamation and incitement are entirely contextual determinations, where the illegality of material depends on its impact, and impact on viewers is impossible for an automated system to assess, particularly before the material is shared (the sketch below illustrates this distinction).

Likewise, “fake news” and “extremism”, the latter of which is defined in the Rules as “vocal or active opposition to fundamental values of the state of Pakistan”, are far too broad to be legitimate categories of prohibited content under international human rights law. This means that the requirement, in s. 5(e), that social media companies remove content which is “involved in spreading of fake news or defamation and violates or affects the religious, cultural, ethnic, or national security sensitivities of Pakistan” is not legitimate. Indeed, the requirement to remove material which violates Pakistan’s religious, cultural, or ethnic sensitivities seems to imply that the country is a monolith on these issues. It is difficult to square this position with the description, in s. 2(d), of Pakistan’s values as including “mutual respect and tolerance of different faiths and beliefs”.
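The sketch below (a hypothetical Python example, not taken from any platform’s actual system) makes concrete the distinction drawn above: detecting known CSAM or copyrighted files reduces to checking an upload against fingerprints of already-identified material, while no comparable lookup exists for defamation or incitement. Production systems use perceptual hashes such as Microsoft’s PhotoDNA, which also survive resizing and re-encoding, but the principle is the same.

```python
import hashlib

# Hypothetical blocklist: digests of files already confirmed by human
# reviewers to be prohibited (placeholder bytes stand in for images).
KNOWN_BAD = {hashlib.sha256(b"<bytes of a known illegal image>").hexdigest()}

def matches_known_material(upload: bytes) -> bool:
    """Answer the context-free question: is this file a known, banned item?"""
    return hashlib.sha256(upload).hexdigest() in KNOWN_BAD

# A re-upload of known material is caught mechanically.
print(matches_known_material(b"<bytes of a known illegal image>"))  # True

# But the same machinery is useless for contextual categories: whether
# "the minister is a thief" is defamation depends on truth, intent and
# audience, none of which are present in the bytes themselves.
print(matches_known_material(b"the minister is a thief"))  # False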
In addition to concerns about the categories of prohibited content, there are significant concerns around the timeline for removing material. According to s. 4, material which is deemed illegal by the National Coordinator or by the Pakistan Telecommunication Authority must be removed within twenty-four hours or, during an emergency as designated by the National Coordinator, within six hours. This, too, is entirely unprecedented in the democratic world. Even Germany’s Network Enforcement Act (NetzDG), which has itself been widely criticized for its impact on speech,[5] only requires that “manifestly unlawful” material be removed within twenty-four hours.[6] For all other content, the response time is one week.
The speed of removals under the Rules, and the lack of due process around the determination, are particularly problematic in light of the long timeframe for appealing decisions of the National Coordinator. According to s. 11(5) of the Rules, an appeal against a decision to block content will be processed within 60 working days. In other words, while content declared illegal must be removed almost immediately, and with virtually no due process, an appeal against such a determination could take up to three months (60 working days amounts to roughly twelve weeks).
Taken together, the Rules pose a catastrophic threat to online freedom of expression in Pakistan and, indeed, to the broader ability of the people of Pakistan to access the Internet and enjoy the economic, social and cultural benefits flowing from the digital world. The Wikimedia/Yale Law School Initiative on Intermediaries and Information recommends the following:
1. The government should pause implementation of the Rules to allow for a reasonable period of consultation with representatives of civil society, the technology sector, the media sector, the legal community, and the public at large. After this consultation has been completed, the Rules should allow for a transition period of at least six months before coming into force.
2. The Rules should not include any data localization requirements, except as narrowly tailored to address weaker privacy protection standards in particular jurisdictions (i.e. to prohibit the storage of data in places where the legal standard of data protection is weaker than that in force in Pakistan).
3. The Rules should not include a requirement to have a physical office in Pakistan or, if this requirement is included, it should only apply to companies above a certain threshold, such as having over one million active daily users in Pakistan.
4. The Rules should delegate decisions on what constitutes illegal content to courts, rather than to appointed government officials.
5. The Rules should not mandate the use of proactive content filtering systems of any kind.
6. The Rules should not prohibit content based on vague and illegitimate standards, including “fake news”, “content related to terrorism”, and “extremism”.
7. The Rules should allow at least one week for the removal of illegal material.
8. The Rules should allow for appeals against determinations that content is illegal on a timeframe commensurate with the timeframe for content removal.
9. The Rules should not include any requirement to backdoor encryption, and any requirement to deliver data to investigative authorities should be limited to what is technically feasible and should require an order from a judge or other appropriate judicial authority.
The Wikimedia/Yale Law School Initiative on Intermediaries and Information is a research initiative based at Yale Law School’s Information Society Project which aims to raise awareness of threats to an open internet, especially those affecting online intermediaries and their users, and to make creative policy suggestions that protect and promote internet-facilitated access to information. The current Wikimedia Fellow is Michael Karanicolas. For more information, contact Michael at michael.karanicolas@yale.edu, or on Twitter at @YaleISP_WIII.
[1] Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1.
[2] UN General Assembly Resolution 2200A(XXI) of 16 December 1966, in force 23 March 1976.
[3] General Comment No. 34: Article 19 (Freedoms of opinion and expression), UNHRC, 102nd Sess, UN Doc CCPR/C/GC/34 (2011), at para 25.
[4] Joint Declaration on Freedom of Expression and the Internet, signed by the U.N. Special Rapporteur on Freedom of Opinion and Expression, O.S.C.E. Representative on Freedom of the Media, O.A.S. Special Rapporteur on Freedom of Expression & A.C.H.P.R. Special Rapporteur on Freedom of Expression and Access to Information (1 June 2011) at para. 3, online: <www.oas.org/en/iachr/expression/showarticle.asp?artID=849&lID=1>.
[5] “Germany: Flawed Social Media Law”, Human Rights Watch (14 February 2018), online: <https://www.hrw.org/news/2018/02/14/germany-flawed-social-media-law>.
[6] Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken [Netzwerkdurchsetzungsgesetz—NetzDG] [Network Enforcement Act], Sept. 1, 2017, Bundesgesetzblatt, Teil I [BGBl I] at 3352 (Ger.), https://www.bmjv.de/SharedDocs/Gesetzgebungsverfahren/Dokumente/NetzDG_engl.pdf?__blob=publicationFile&v=2 [https://perma.cc/W2B8-JWHT].