Towards policy and regulatory approaches for combating misinformation in India

This is the third installment of a new white paper series on alternative regulatory responses to misinformation. The articles, which were produced with the editing and support of the WIII Initiative, will be presented at a series of events on March 8, 10, and 12. You can find more information, and registration details for our launch events, here.

I. Introduction

Misinformation and hate speech on WhatsApp, Twitter, and Facebook have contributed to violence and social unrest in India.[1] Currently, the government and users rely on private platforms to address the misinformation challenge on their own. This has produced parallel regulatory structures for hundreds of millions of Indians online, created through the content moderation policies of platforms but lacking the legitimacy, oversight, and formal codification that bind traditional institutions. Recent reports of politically biased moderation of speech on platforms such as Facebook,[2] opacity in how platforms address misinformation across regions and user bases, and a general lack of public engagement by ‘Silicon Valley’ corporations in their largest market have all served to rapidly erode public trust.[3] Platforms have introduced some measures to mitigate harmful content in India, while acknowledging the need to “do more”.[4] The Indian Government’s response has focused on changes to the intermediary liability regime, which are not only ill-designed to address online misinformation but also threaten the fundamental rights of free speech and privacy.[5]

As platforms emerge as “modern public squares”[6] and make speech decisions for billions of people online, there is a need for efficient regulation of the ecosystem in which misinformation thrives. Thus far, neither platforms nor the government have effectively performed this function. This paper argues for a new participative and responsive rule-making structure, and an accountable and transparent platform governance framework for India.

The paper proceeds in Part II with a brief description of India’s misinformation crisis and the role played by social media platforms and the Indian Government, and introduces the need to shift from traditional forms of rule-making to alternatives such as induced self-regulation. Part III explores possible measures to bolster regulatory capacity to address evolving platform harms and argues for a model of platform governance which involves regulatory nudges towards accountable self-regulation for India and similarly situated jurisdictions. Forms of co-regulation[7] which involve legislative backing and greater government involvement are not immediately desirable owing to jurisdictional challenges and repercussions on free speech and user rights. Accordingly, this part focuses on self-regulation through greater cooperation between platforms, governments, and civil society. Part IV discusses steps that can be taken by social media platforms to tailor platform policies and meaningfully engage with Indian users to address domestic misinformation challenges.

II. India’s misinformation crisis

The role of platforms

In India, platform responses to misinformation have been inconsistently applied across regions and political groups, and suffer from a lack of transparency. Misinformation challenges are continuously evolving and, as a result, solutions are difficult to implement. However, the technological changes introduced thus far have fallen short of the fundamental goal of empowering users to identify fake news and restrict its circulation.

While platforms such as Facebook and WhatsApp have undertaken awareness campaigns and partnered with local law enforcement officials and policymakers to combat fake news, such measures are often ad hoc and not tailored to the unique challenges faced by the Indian user base.[8] Many concerns have been raised regarding the efficacy of existing fact-checking programs. For instance, Facebook’s third-party fact-checking program was rolled out in Karnataka in 2018 and later expanded to cover other Indian States.[9] More recently, it launched a partnership with independent entities to identify misinformation across 11 Indian languages.[10] However, some experts have expressed concern over the inefficiency of Facebook’s partner fact-checking organizations[11] and the lack of action taken by the platform to remove flagged content, rendering the fact-checking process ineffective.[12]

Likewise, WhatsApp has introduced voluntary third-party fact-checking systems[13] and restrictions on bulk messaging[14] that make it more tedious for users to forward content to hundreds of users at once, in order to dampen the virality of harmful content. However, it has been argued that these changes are ineffective and superficial.[15] For instance, the restrictions on bulk messaging can be bypassed through affordable “clone applications” used by political party workers to forward content to large numbers of people, or through anonymous phone numbers used to send bulk messages.[16]
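To see why such limits are easy to circumvent, a minimal sketch of a forwarding cap is shown below. It is purely illustrative; WhatsApp’s client is closed source, and every name here is hypothetical. The structural point is that on an end-to-end encrypted service, a cap of this kind is typically enforced inside the client application, so a modified “clone” client can simply omit the check.

```python
# Hypothetical sketch of a client-side forwarding cap; not WhatsApp's actual code.

MAX_FORWARD_CHATS = 5  # cap on how many chats a message may be forwarded to at once


def forward_message(message_id: str, recipient_chats: list[str]) -> None:
    """Forward a message, refusing bulk forwards beyond the cap."""
    # The guard lives in the client. Because message content is end-to-end
    # encrypted, the server has limited visibility, and a patched "clone"
    # client can simply delete this check and forward without limit.
    if len(recipient_chats) > MAX_FORWARD_CHATS:
        raise ValueError(
            f"cannot forward to more than {MAX_FORWARD_CHATS} chats at once")
    for chat in recipient_chats:
        send_to_chat(message_id, chat)


def send_to_chat(message_id: str, chat: str) -> None:
    # Stand-in for the real delivery routine.
    print(f"forwarding {message_id} to {chat}")
```

Server-side defenses would instead have to rely on metadata signals such as unusual message volume, which is presumably why anonymous phone numbers remain an effective workaround.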

Platforms have also failed to engage with communities and civil society in collaboratively addressing the misinformation challenge in India. More broadly, platforms are not adequately accountable to State and local governments, as exemplified by Facebook’s refusal to appear before a legislative committee investigating the platform’s role in riots in India’s capital.[17]

The role of government

At present, India’s government does not have the tools to effectively govern the ecosystem in which misinformation thrives. India, and similarly situated countries, face significant jurisdictional challenges in attempting to regulate American social media platforms, which often refuse to engage with governments or the regulations they impose. However, India’s rules are not fit for purpose in other ways as well.

The Information Technology (Intermediaries Guidelines) Rules, 2011 (‘Intermediary Rules’),[18] enacted under the Information Technology Act, 2000,[19] apply a single set of regulations to all intermediaries, including ISPs, search engines, DNS providers, mainstream social media platforms, and even cyber cafes. The Rules provide a “safe harbor” to this wide range of intermediaries, including social media platforms. This regulatory approach is problematic: by imposing the same obligations on every intermediary, it fails to acknowledge the diversity of technological characteristics and functions across these services.[20]

More recently, the Indian Government has notified amendments to the Intermediary Liability Rules,[21] which remain broad-brush. The new Intermediary Liability Rules create differentiated obligations for a new category of intermediaries called “significant social media intermediaries”, which is likely to include Big Tech platforms.[22] However, the criteria for this classification are vague and, according to the Internet Freedom Foundation, the Rules give the government wide discretion to impose “discriminatory” obligations.[23] The amended Rules oblige platforms to proactively identify and disable access to content based on vaguely defined criteria. This, as experts have argued, poses a threat to user privacy and to end-to-end encryption on platforms such as WhatsApp.[24] The Rules also give the government wide powers to force platforms to take down user content, posing a significant threat to the free speech of platform users.[25]

The need for new forms of rule-making

At present, self-regulation by platforms has failed to generate satisfactory solutions to misinformation in India. There are a few instances where such efforts have been initiated, but their functioning is largely opaque. The Information Trust Alliance (‘ITA’) is one example of a recent collaborative effort by platforms, digital publishers, industry bodies, fact-checking organizations, civil society, and academics.[26] The ITA aims to develop standardized procedures for resolving complaints regarding false or disputed content.[27] It has reportedly drafted a Code of Practice, which is on hold due to a lack of consensus among participating social media platforms.[28] As of early 2021, little is known about the ITA’s deliberations beyond this.

Further, as seen above, government efforts to address misinformation face a number of challenges, including jurisdictional tensions and the complexity of imposing new rules on freedom of expression. Vague, top-down rules, such as the new Intermediary Liability Rules, which force platforms to police content or prejudice users’ freedom of speech, are not only harmful to users but also potentially unconstitutional.[29]

Rather than enacting prescriptive intermediary liability laws, Indian lawmakers should work with platforms to develop self-regulation or co-regulation that is responsive and collaborative, while remaining sufficiently accountable to domestic policymakers. This is where “induced” self-regulation[30] or “regulated” self-regulation[31] may be an approach worth considering. In its most basic form, “induced” self-regulation refers to the creation of an enabling regulatory environment which incentivizes or induces industry actors to create rules for self-regulation, while guarding against state interference or a compromise of the independence of the regulated entities.[32] It creates a controlled framework in which industry actors have the flexibility to devise norms, but must at the same time be accountable for the enforcement of those norms.

III. Bolstering regulatory capability to combat misinformation in India

This part discusses two possible solutions for India to strengthen its regulatory capability to combat misinformation. First, developing effective standards to combat misinformation through “induced” self-regulation. Second, creating an independent and representative national platform oversight body to work with platforms, ensure their transparency, and co-create codes for evolving platform harms.

A. Induced self-regulation

This paper proposes that India consider a voluntary “outcomes-based” code for misinformation, including key norms for social media platforms to adhere to. Such a code should outline outcomes for platforms to achieve, including design duties[33] and product features to empower users and fight misinformation.[34] The standards should be developed collaboratively through transparent and participative processes, and governments should develop efficient metrics for assessing performance against them.

The outcomes should be grounded in human rights principles and developed through a collaborative process involving platforms, civil society, and other stakeholders. Examples may include protecting the free speech and privacy of users, bolstering user autonomy and access to remedial measures and grievance redressal, and ensuring greater transparency and accountability around content moderation and efforts to combat misinformation. It is critical to ensure that the “outcomes-based” code is not vague or tilted to serve state interests, and does not incentivize platforms to adopt an overly heavy-handed approach to removing content. The outcomes should be built around common objectives, and should give platforms the flexibility to develop protocols and technological tools to achieve them.

Several sectors in India have successfully adopted self-regulatory mechanisms. A notable example of ‘induced’ and outcomes-based self-regulation is overseen by the Securities and Exchange Board of India (‘SEBI’), which regulates the securities market and protects investor interests.[35] In doing so, SEBI promotes ‘self-regulatory organizations’ (‘SROs’),[36] investor awareness,[37] and training of intermediaries in the securities market,[38] and has set up an Investor Protection and Education Fund.[39] The purpose of the SRO framework is to equip particular groups of entities in the securities market (such as investment advisors and fund managers) to act as ‘first-level regulators’, and it provides a guiding framework for their registration, composition, and self-regulatory objectives. The framework also empowers SEBI with tools of inspection and audit to assess the functioning of SROs and their compliance with high-level objectives to protect investors.[40] The SRO framework thereby allows sectoral entities to set up self-regulatory codes while following a broad set of duties and obligations outlined in law. India does not have a similar regulatory structure around platforms. While a top-down regulatory structure presents challenges to ensuring direct accountability to platforms’ users, certain aspects of the SRO framework may nonetheless be useful in guiding collaborative self-regulation for platforms.

Evolving practice globally also indicates a greater push towards accountable and collaborative self-regulation to address misinformation. For instance, the Australian Communications and Media Authority (‘ACMA’) has recommended a self-regulatory approach[41] to enable platforms to identify risks or “acute harms from misinformation”[42] and devise their own design solutions to empower users. The outcomes outlined by ACMA include reduced exposure of users to harmful misinformation on the platform, a robust system of misinformation reporting, and access to an effective “complaints handling process”.[43] The proposal argues for the need to maintain consistency in measures to tackle misinformation, using “industry-wide” inputs for risk assessment.[44] It also seeks to push platforms to develop common objectives to curb misinformation,[45] while giving them the flexibility to develop their own solutions to achieve those objectives[46] and providing a system of “performance reporting”.[47] This mechanism aligns with the “positive state approach”, in which governments create institutions with incentives for private actors to tackle a challenge, rather than relying on fines and penalties.[48] This may be contrasted with the “negative state” approach of countries like Germany[49] and Singapore,[50] where restrictive regulation poses a threat to the free speech of users.

Some regions have developed codes to combat misinformation, such as the EU Code of Practice on Disinformation[51] (‘EU Code’) and NATO’s approach to addressing misinformation related to COVID-19.[52] However, such collaborative efforts lack the measures necessary to monitor progress. For instance, the EU Code lacks key performance indicators and efficient monitoring mechanisms, and does not provide adequate information for researchers.[53] These gaps should be explored further when thinking about a similar model for India.

To avoid excessive state control or “content cartel creep”,[54] the development and supervision of induced self-regulation or co-regulation should be undertaken by an independent body with adequate tools of enforcement and oversight. In India, this might be carried out by an independent platform oversight body which operates at arm’s length from both the government and platforms. The platform oversight body should ensure that the development of these codes is participative and promotes dialogue among state and local governments, platforms, civil society, academics, and other stakeholders.[55] The codes should give platforms the flexibility to develop consensus and devise self-regulatory norms. Further, such codes should outline key performance indicators and lay down a metric for the independent body to monitor compliance. As seen with the SRO framework, this may be done through non-financial reporting norms, transparency reports, and regular audits. To ensure accountability and oversight, codes which are independently developed by platforms may be approved or accredited by the oversight body.
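As a rough illustration of what key performance indicators and a compliance metric might look like in practice, the sketch below models an outcomes-based code as a set of measurable targets that an oversight body checks against a platform’s periodic transparency report. Every outcome, KPI name, and threshold here is invented for illustration; a real code would be negotiated among the stakeholders described above.

```python
from dataclasses import dataclass

# Hypothetical sketch of an "outcomes-based" code: each outcome names a goal
# and a measurable KPI, leaving platforms free to choose how they meet it.

@dataclass
class Outcome:
    name: str        # e.g. "timely grievance redressal"
    kpi: str         # metric the platform must report against
    threshold: float # minimum acceptable value

@dataclass
class PlatformReport:
    platform: str
    period: str
    metrics: dict[str, float]  # KPI name -> reported value

CODE_OF_PRACTICE = [
    Outcome("timely grievance redressal", "share_of_complaints_resolved_in_15_days", 0.90),
    Outcome("misinformation reporting", "share_of_user_flags_reviewed", 0.95),
    Outcome("transparency", "share_of_moderation_decisions_with_stated_reasons", 0.99),
]

def audit(report: PlatformReport) -> list[str]:
    """Return the outcomes a platform's report fails to meet, for follow-up audit."""
    failures = []
    for outcome in CODE_OF_PRACTICE:
        value = report.metrics.get(outcome.kpi)
        if value is None or value < outcome.threshold:
            failures.append(outcome.name)
    return failures

report = PlatformReport("ExamplePlatform", "2021-Q1", {
    "share_of_complaints_resolved_in_15_days": 0.87,
    "share_of_user_flags_reviewed": 0.97,
    "share_of_moderation_decisions_with_stated_reasons": 0.99,
})
print(audit(report))  # -> ['timely grievance redressal']
```

The design point is that the code fixes what must be measured and reported, not how platforms achieve it, which preserves the flexibility the text argues for.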

Since India does not have an existing legal framework around platform governance, an outcomes-based self-regulatory framework can serve as a necessary first step for stakeholders to work together to combat online misinformation. In the absence of enforcement tools, its implementation hinges on platforms’ willingness to cooperate. A potential oversight body may also face difficulties in conducting audits or effectively monitoring implementation. However, this approach would be useful in bridging the gap between the government and social media platforms, and could help avoid prescriptive regulations, such as the amendments to the Intermediary Liability Rules, which pose a larger threat to user rights. In the short term, this framework should, at the very least, prod platforms to take holistic action to curb misinformation and create an environment of greater transparency around their actions in this space.

B. Creating a national platform oversight body

In India, efforts to combat misinformation have been ad hoc or siloed, and regulators have been unable to craft regulation which effectively tackles misinformation while also preserving user rights. To efficiently implement the self-regulatory or co-regulatory efforts discussed in the previous section, there is a need for an alternative structure of governance in which governments and platforms can develop a regulatory system that is responsive to evolving platform harms. An initiative of this nature should respect the central role that platforms play in facilitating the right to freedom of expression.

While ad hoc measures have been taken to address certain aspects of the misinformation problem, there is no systematic approach to engaging with platforms to collaboratively address the issue. Several government agencies, such as the Home Ministry, the Ministry of Electronics and Information Technology (‘MEITy’), the Department for Promotion of Industry and Internal Trade, and the Election Commission of India, have on different occasions issued advisories and policies for social media platforms to curb misinformation. Government officials have also attempted to engage directly with platforms like Facebook, WhatsApp, and Twitter. However, there is no metric to assess the success of these engagements, and the advisories and calls to action are often not implemented by platforms. In some instances, the Indian Government has also attempted to tackle misinformation through repressive policies which may be used to unfairly target credible journalism.[56]

Globally, institutions to support collaborative action around misinformation are evolving gradually. They include government task forces,[57] expert groups,[58] and public-private initiatives around issue-specific misinformation such as election interference and information related to the COVID-19 pandemic.[59] There are several proposals to establish more formal structures to address misinformation and platform harms. For instance, the Global Network Initiative advocates a multi-stakeholder and collaborative approach which would involve “business, industry associations, civil society organizations, investors and academics”.[60] The French Interim Mission Report recommends an independent authority established by elected representatives, comprising government and platform actors, civil society, citizens, domain experts, academics, and researchers.[61] According to the Report, this model is similar to “banking supervisory authorities” which incentivize financial institutions to comply with prescribed disclosures and information sharing.[62] Nathaniel Persily and Alex Stamos likewise recommend the creation of an independent body akin to the Financial Industry Regulatory Authority (FINRA) in the US.[63] They propose a self-regulatory body “blessed by the Congress”[64] where platforms can work together to address common challenges such as “foreign sponsored advertising” or voter suppression.[65] The U.K. White Paper on Online Harms also recommends the creation of an “independent regulatory body” which would be responsible for creating codes for social media platforms to address hate speech and harms.[66] These proposals have largely advocated the establishment of independent bodies to govern all aspects of social media platform functioning. In the specific context of misinformation, the Australian Competition and Consumer Commission advocates an independent regulator to monitor voluntary measures taken by platforms to ensure authentic and trustworthy content.[67]

At present, India lacks a body responsible for developing tools to curb online misinformation. Further, the present regulatory architecture is not equipped with the tools or resources necessary to oversee standard-setting and compliance for some of the measures discussed in this paper. While MEITy is the primary body responsible for developing law and policy associated with technology and the Internet, the Ministry has so far been unable to work effectively with platforms to address the misinformation challenge. In general, MEITy’s rule-making around social media platforms has been opaque, and lacks input from experts, academics, stakeholders, and civil society. Further, in the context of platform governance, it is unclear whether MEITy will have sufficient regulatory capacity to govern platforms and effectively monitor efforts to combat misinformation, given the size of the industry and the importance of platforms in facilitating freedom of expression among their hundreds of millions of users. MEITy’s role in the blocking of social media accounts linked to the 2020–2021 Indian farmers’ protests is further cause for concern in the context of safeguarding the expressive rights of users.[68]

To model the potential structure and functions of a platform oversight body for India, it may be useful to briefly examine existing models, such as the Press Council of India (‘PCI’),[69] the News Broadcasters Association,[70] the Advertising Standards Council of India,[71] and existing or emerging statutory regulators in the fields of telecommunications and data protection. None of these agencies presents a perfect template for a body which is independent of both state control and industry bias. However, a proposed platform oversight body for India might adopt certain regulatory strengths of existing bodies, namely those where standard setting is efficient and public and stakeholder consultation processes are collaborative and transparent.[72] Generally, these bodies function best where representation from industry and other stakeholder groups is mandatory,[73] where their functions are subject to parliamentary or external independent oversight,[74] and where a representative council is empowered to draw up “codes of practice” for industries.[75] The PCI, and proposals regarding the creation of a Data Protection Authority for India,[76] also empower sectoral regulators to act as primary standard-setting bodies or to initiate co-regulatory codes. It should be noted that while certain structural features of the PCI may serve as useful examples to consider, there has been significant criticism of the body and its functioning.[77] For instance, the PCI does not have sufficient power to enforce its guidelines,[78] and its members have been accused of conflicts of interest.[79]

In order to support proactive efforts to curb online misinformation across different platforms, India needs an independent and representative platform oversight body to structure self-regulatory initiatives around misinformation. It is recommended that the platform oversight body consist of government and platform representatives, policymakers, civil society, academics, and other subject-matter experts to ensure efficient and accountable functioning. Such a body may be created after adequate consultation and feasibility assessments, in order to develop the necessary buy-in from social media platforms. The scope of the body’s powers would need to be carefully considered to ensure that the platform oversight body is not placed in the role of defining the legal limits of acceptable speech. The functioning of the body should also be subject to appropriate judicial oversight.[80]

The challenges in setting up such a body suggest that a gradual approach to implementation may be best, with certain responsibilities developed progressively. Standards for effective non-financial disclosures,[81] a standardized metric for performance reporting on these transparency obligations, and resources for community awareness of misinformation would be a good place to start, with particular resources dedicated to supporting mid-sized or smaller platforms which struggle to comply with industry best practice due to resource constraints.[82] If consensus is achieved on these relatively simple issues, it could provide a baseline for further and more ambitious collaboration.

IV. Promoting responsive platform responses to misinformation in India

Every country’s misinformation challenge is unique, and requires platforms to work collaboratively and tailor responses through meaningful engagement with the communities that they serve. The following section explores ways in which responsive platform behavior can be achieved in India.

A. Meaningful engagement with communities and stakeholders in India

The need for a “bottom-up approach”

Speech policies of platforms have traditionally been modelled on American understandings of freedom of speech, implemented in a manner which attempts to harmonize enforcement around the world according to this standard.[83] While there have been some efforts to move away from a core focus on American speech standards and towards more international approaches, the Community Standards enforcement process is still a “top-down” model of governance.[84] Platform users have no influence over the “definition and enforcement of rules” which are placed in the hands of platform “officials”.[85]

User policies across Big Tech platforms do not adequately appreciate the religious, linguistic, and cultural diversity of the Indian user base or, for that matter, of their users in many other parts of the world. The complexity of language in user agreements, including Community Standards, often does not correspond to the literacy level of users, including those who are using Internet services for the first time. The need to make platform policies available in the local or regional languages of platform users has been emphasized by the Forum on Information and Democracy in its Report on Infodemics.[86] Further, platforms and content moderators are often not equipped to appreciate critical contexts of online speech which are unique to a geographical area or community. In her work, Chinmayi Arun has emphasized the need to appreciate the distinction between “global” and “local” elements of hate speech, and highlighted the failure of platforms to address “hyper-local harmful speech”.[87] Arun cautions that “hyper-local” elements of speech which are specific to a village or local area may be completely ignored by global corporations, which underscores the need for platform initiatives tailored to local realities.[88]

While platforms like Facebook have been working with certain experts and fact-checking organizations to identify false news, the level of community involvement in developing speech rules, and in supporting a contextual interpretation of those rules, is not consistent. To ensure responsive speech rules better tailored to local contexts, some have advocated a “bottom-up” approach to developing platform policies,[89] and the creation of “nested communities” to involve large and diverse groups in the rule-making process.[90] This would mean that governments, platforms, and other stakeholders work together to formulate responsive and accessible platform rules, akin to an “elected Parliament of users”[91] empowered to define community standards.[92] Echoing the need for greater engagement, a White Paper on online content regulation released by Facebook argues for the introduction of “procedural accountability regulations” which would require platforms to empower users by giving them the tools to challenge speech decisions.[93] John Bowers and Jonathan Zittrain have also emphasized the need for platforms to meaningfully “externalize” important policymaking obligations.[94] Platforms have started relying on independent third parties for certain aspects of their functioning. Presently, however, these efforts focus on outsourcing content moderation appeals to independent entities, particularly in Facebook’s case through the creation of its Oversight Board.[95] Other models have been suggested, notably ARTICLE 19’s proposal to establish “Social Media Councils” as a tool for “multi-stakeholder accountability” to address content moderation challenges with reference to international human rights standards.[96] However, models like the Oversight Board are largely focused on decision-making around online speech, not on setting the relevant standards in the first place.

The highly local and contextual nature of misinformation creates a stronger impetus to work with users and communities and develop responsive policies. Platforms can initiate “bottom-up” approaches to developing community standards by involving citizens, experts, and civil society. For issue-specific misinformation challenges, platforms might also create cross-platform initiatives in collaboration with civil society and industry bodies to oversee the development of community standards and their enforcement. This may be done in two ways.

First, specific voluntary groups may be created for high-risk or sensitive misinformation challenges in India. For instance, online misinformation pertaining to State and local elections poses a great threat to voter interests and election integrity in India. While platforms have taken measures to combat election misinformation,[97] these efforts lag behind those deployed in the US, and a comprehensive effort to tackle election misinformation in India remains largely absent.[98] In the context of election information, specialized voluntary groups consisting of civil society members, election officials, fact-checking entities, human rights experts, and platform representatives could work together to address specific challenges for different types of elections. The ACMA classifies risks from misinformation into two categories: short-term or imminent risks, and long-term or systemic risks.[99] Imminent or ‘acute’ risks would include instances such as a public health outbreak, financial fraud, or a threat to election integrity.[100] Voluntary groups across these categories of misinformation can ensure effective and targeted policy and platform responses. Further, delegating platform policy-making to external entities is likely to support independence of thought and judgment, rather than having policies driven by the platforms’ own direct interests. A structured approach will ensure that entities such as civil society members, experts, and academics providing independent perspectives on platform decisions are made a part of the standard-setting process from the beginning.

A second, more generalized approach would be to form regional or ‘hyperlocal platform communities’ consisting of a diverse panel of platform users and residents, policymakers at the local level, journalists, researchers and academics, and other stakeholders. This concept is akin to recommendations for the creation of “nested communities” which can work with platforms to address local speech challenges.[101] These communities could work with platforms to audit community standards, co-develop guidelines on fake news based on local contexts, and provide inputs on speech rules. An elected and inter-platform ‘hyperlocal platform community’ might also interface with local state actors in fact-finding, investigations, and developing policies around platform governance. Procedures pertaining to their constitution, election, independence, funding, and jurisdiction might be co-developed by platforms, state actors, and civil society.
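The sketch below is a minimal illustration of how these two approaches could fit together, borrowing ACMA’s two-tier risk taxonomy: issues with a standing specialized group are treated as acute, and everything else falls to the generalized, locally rooted track. All topic labels and group names are invented for illustration and do not come from any actual framework.

```python
from enum import Enum

# Hypothetical routing of misinformation issues by risk tier, loosely
# following ACMA's distinction between imminent and systemic risks.

class RiskTier(Enum):
    ACUTE = "short-term / imminent"
    SYSTEMIC = "long-term / systemic"

# Acute topics drawn from the examples in the text above; group names invented.
ACUTE_GROUPS = {
    "election integrity": "Election Integrity Group (election officials, fact-checkers, civil society)",
    "public health outbreak": "Public Health Misinformation Group",
    "financial fraud": "Financial Fraud Response Group",
}

def classify(topic: str) -> RiskTier:
    # Anything without a standing specialized group is treated as systemic.
    return RiskTier.ACUTE if topic in ACUTE_GROUPS else RiskTier.SYSTEMIC

def route(topic: str) -> str:
    if classify(topic) is RiskTier.ACUTE:
        return ACUTE_GROUPS[topic]
    # Systemic issues go to the generalized, locally rooted track.
    return "Hyperlocal platform community (standards audits, local guidelines)"

print(route("election integrity"))
print(route("communal rumors in a local language"))
```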

It is critical that such bodies are representative and unbiased, and measures to ensure their independence should be a primary focus for researchers. An open and transparent selection procedure undertaken by inter-platform bodies like the platform oversight body may be useful in ensuring this. Buy-in from both governments and the platforms themselves is essential to their functioning, as the challenges faced by the Delhi Assembly Committee in investigating Facebook’s role in the February 2020 communal riots illustrate. The Committee has been largely unsuccessful in investigating the role of Facebook due to inadequate cooperation by the platform and concerns about the Committee’s lack of jurisdiction to question Facebook.[102] A ‘hyperlocal platform community’ could have played a crucial role in assisting the Committee by reviewing and auditing Facebook’s response to specific instances of fake news locally, structuring accessible surveys of residents to understand what could be done better, and providing inputs on building effective and contextualized strategies to combat misinformation and hate speech.

These recommendations require substantial additional development to operationalize, and should be the subject of analysis through all stages of their evolution. As a result, although there is no reason why work to develop ‘hyperlocal platform communities’ need be delayed, requirements for platforms to engage with communities should not be hardcoded in legislation until more is known about their utility and efficacy.

V. Conclusion

Regulation of misinformation is a complex endeavor, and countries across the world are struggling to adapt laws to address evolving platform harms. In the absence of a robust regulatory framework or timely conversations around platform governance, India lacks not only the bargaining power to negotiate with foreign platforms, but also the basic tools necessary to hold them accountable to Indian law. This paper has outlined critical areas of reform which can be prioritized in the regulatory conversation. The objective of responsive and participatory rule-making, for state actors and platforms alike, forms the core of all the proposals discussed. The proposals for regulatory reform through a dedicated platform oversight body and tools for community engagement are significant objectives which will require state-platform consensus and the will to alter the regulatory landscape. It is hoped that the recommendations outlined in this paper will not only inform transformational action in India, but also assist similarly situated countries in combating misinformation online.

Akriti Gaur is a lawyer and independent researcher. The author is grateful to Vasu Nigam and the fellows of the Yale Information Society Project for their valuable feedback on earlier drafts of this paper.

 

[1] Chinmayi Arun, On WhatsApp, Rumours, Lynchings, and the Indian Government, Economic & Political Weekly (Feb. 2019), https://www.epw.in/journal/2019/6/insight/whatsapp-rumours-and-lynchings.html; Timothy McLaughlin, How WhatsApp Fuels Fake News and Violence in India, Wired (Dec. 2018), https://www.wired.com/story/how-whatsapp-fuels-fake-news-and-violence-in-india; Archit Mehta, Delhi Riots: The Link Between Misinformation and Radicalisation, The Wire (Aug. 2020), https://thewire.in/communalism/delhi-riots-misinformation-radicalisation-social-media.

[2] Newley Purnell and Jeff Horwitz, Facebook’s Hate-Speech Rules Collide With Indian Politics, The Wall Street Journal (Aug. 14, 2020), https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346; Billy Perrigo, Facebook’s Ties to India’s Ruling Party Complicate Its Fight Against Hate Speech, Time (Sept. 27, 2020), https://time.com/5883993/india-facebook-hate-speech-bjp/.

[3] Out of an estimated 1.5 billion people who use WhatsApp across the world, around 200 million users reside in India. India is also the largest market for Facebook, with an estimated 346 million users. See WhatsApp now has 1.5 billion monthly active users, 200 million users in India, The Financial Express (Feb. 1, 2018), https://www.financialexpress.com/industry/technology/whatsapp-now-has-1-5-billion-monthly-active-users-200-million-users-in-india/1044468/.

[4] Sandeep Soni, Progress made but need to do more, says Facebook India’s Ajit Mohan on tackling hate speech, The Financial Express (Aug. 22, 2020), https://www.financialexpress.com/industry/progress-made-but-need-to-do-more-says-facebook-indias-ajit-mohan-on-tackling-hate-speech/2062425/.

[5] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf [hereinafter New Intermediary Liability Rules]; see also Apar Gupta, Centre’s IT Rules bring answerability in digital ecosystem. But they also increase political control, The Indian Express (Feb. 26, 2021), https://indianexpress.com/article/opinion/columns/it-act-social-media-govt-control-digital-ecosystem-7204972/.

[6] Packingham v. North Carolina, 137 S. Ct. 1730 (2017); see also Amber Sinha, Beyond Public Squares, Dumb Conduits, and Gatekeepers: The Need for a New Legal Metaphor for Social Media, IT for Change (2020), https://itforchange.net/digital-new-deal/2020/11/01/beyond-public-squares-dumb-conduits-and-gatekeepers-the-need-for-a-new-legal-metaphor-for-social-media/.

[7] Various contemporary legislative proposals for regulating social media platforms advocate for a co-regulatory model as a starting point, for e.g., The Republic of France, Creating a French framework to make social media platforms more accountable: Acting in France with a European vision – Mission Report 23-24 (2019), https://www.numerique.gouv.fr/uploads/Regulation-of-social-networks_Mission-report_ENG.pdf [hereinafter French Interim Mission Report]; for a comparison of co-regulation models with voluntary compliance and self-regulation, see Reeve’s Regulatory Pyramid Model, B. Reeve, The Regulatory Pyramid Meets the Food Pyramid: Can Regulatory Theory Improve Controls on Television Food Advertising to Australian Children?, JLM 128, 133 (2011).

[8] Rasmus Kleis Nielsen, US elections vs Bihar polls: Are all social media users created equal?, Scroll.in (Nov. 10, 2020), https://scroll.in/article/978068/us-elections-vs-bihar-polls-are-all-social-media-users-created-equal.

[9] Daniel Funke, In one month, Facebook doubled the countries using its fact-checking tool - all outside the West, Poynter.org (April 18, 2018), https://www.poynter.org/fact-checking/2018/in-one-month-facebook-doubled-the-countries-using-its-fact-checking-tool-%c2%97-all-outside-the-west/; PTI, Facebook teams up with AFP to expand fact-checking programme in India, Business Today (Nov. 6, 2018), https://www.businesstoday.in/current/economy-politics/facebook-partners-with-afp-to-expand-fact-checking-programme-in-india/story/288651.html.

[10] PTI, Facebook teams up with 8 third-party fact checkers, covering 11 Indian languages, to flag Covid-19 fake news, The Economic Times (April 22, 2020), https://economictimes.indiatimes.com/magazines/panache/facebook-teams-up-with-8-third-party-fact-checkers-covering-11-indian-languages-to-flag-covid-19-fake-news/articleshow/75284845.cms.

[11] Gopal Sathe, Fact-Checkers Fight Fake News On Facebook. But Who Fact-Checks Them?, HuffPost (Jan. 1, 2020), https://www.huffpost.com/archive/in/entry/fact-checking-fake-news-facebook-how-does-ifcn-work_in_5ca0fd29e4b00ba6327eb726.

[12] Anumeha Chaturvedi, It's up to Facebook to respond on its fact checking programme, say FB fact-checkers after IT minister raises concerns, The Economic Times ET Prime (Sept. 2, 2020), https://economictimes.indiatimes.com/tech/internet/its-up-to-facebook-to-respond-on-its-fact-checking-programme-says-fb-after-it-minister-raises-concerns/articleshow/77895695.cms.

[13] Scroll Staff, WhatsApp launches new fact-checking service to fight fake news ahead of elections, Scroll.in (Apr. 02, 2019), https://scroll.in/latest/918725/whatsapp-launches-new-fact-checking-service-to-fight-fake-news-ahead-of-elections.

[14] The Wire Staff, To Combat Fake News in India, WhatsApp to Limit Forwarding of Messages, The Wire (July 20, 2018), https://thewire.in/tech/to-combat-fake-news-in-india-whatsapp-to-limit-forwarding-of-messages.

[15] Prashant Reddy T., If WhatsApp Doesn't Regulate Itself, Parliament May Have to Step In, The Wire (July 18, 2018), https://thewire.in/tech/if-whatsapp-doesnt-regulate-itself-parliament-may-have-to-step-in.

[16] Munsif Vengattil, Aditya Kalra and Sankalp Phartiyal, INSIGHT – In India election, a $14 software tool helps overcome WhatsApp controls, Reuters (May 15, 2019), https://www.reuters.com/article/india-election-socialmedia-whatsapp-idINL4N22R3G3.

[17] Varun Thomas Mathew, The Arrogance of Being Facebook, a Serious Tragedy for the Rule of Law, The Wire (Oct. 8, 2020), https://thewire.in/law/facebook-delhi-assembly-summons-rule-of-law-riots.

[18] The Information Technology (Intermediaries Guidelines) Rules, 2011, (India) https://www.meity.gov.in/writereaddata/files/GSR314E_10511%281%29_0.pdf [Hereinafter Intermediary Liability Rules].

[19] The Information Technology Act, 2000, (India), https://www.meity.gov.in/content/view-it-act-2000 [Hereinafter IT Act].

[20] Pritika Rai Advani, Intermediary Liability in India, EPW, Vol. 48, Issue No. 50 (Dec 14, 2013), https://www.epw.in/journal/2013/50/special-articles/intermediary-liability-india.html; for a discussion of the Indian intermediary liability framework, see Chinmayi Arun and Sarvjeet Singh, NOC Online Intermediaries Case Studies Series: Online Intermediaries in India (Feb 18, 2015), National Law University, Delhi, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2566952.

[21] New Intermediary Liability Rules, supra note 5; Prashant Reddy T., New IT Rules: The Great Stretching of ‘Due Diligence’ Requirements Under Section 79, The Wire (Feb. 27, 2021), https://thewire.in/tech/new-it-rules-the-great-stretching-of-due-diligence-requirements-under-section-79. In 2018, the Indian Government released a draft of the amendments to the Intermediary Rules which was heavily criticized by experts, academics, and civil society. See, e.g., Siddharth, Intermediary Liability Amendment: Civil Society Counter Comments On Unlawful Content, Medianama (March 22, 2019), https://www.medianama.com/2019/03/223-intermediary-liability-amendment-civil-society-counter-comments-on-unlawful-content/; Agnidipto Tarafder and Siddharth Sonkar, Attempt to Curb ‘Unlawful Content’ Online is Chilling to Free Speech, The Wire (Dec. 25, 2018), https://thewire.in/government/it-ministry-attempt-to-curb-unlawful-content-online-chilling-blow-free-speech.

[22] Rule 2(5), New Intermediary Liability Rules, supra note 5.

[23] Internet Freedom Foundation, Deep dive: How the intermediaries rules are anti-democratic and unconstitutional (Feb. 27, 2021), https://internetfreedom.in/intermediaries-rules-2021/.

[24] Id.

[25] Rule 3(1)(d), New Intermediary Liability Rules, supra note 5.

[26] Megha Mandavia, Social Media to Join Hands to Fight Fake News, Hate Speech, The Economic Times (Feb. 19, 2020), https://economictimes.indiatimes.com/tech/internet/social-media-to-join-hands-to-fight-fake-news-hate-speech/articleshow/74200542.cms.

[27] Id.

[28] Id.

[29] Internet Freedom Foundation, supra note 23.

[30] European Parliament, Disinformation and propaganda – impact on the functioning of the rule of law in the EU and its Member States (Feb. 2019), at p. 104, https://www.europarl.europa.eu/RegData/etudes/STUD/2019/608864/IPOL_STU(2019)608864_EN.pdf; see also Oxford Handbook of Regulation 541 (Robert Baldwin, Martin Cave, and Martin Lodge eds., 2010).

[31] Id.; Article 19, The Social Media Councils: Consultation Paper (June, 2019) at p. 7, https://www.article19.org/wp-content/uploads/2019/06/A19-SMC-Consultation-paper-2019-v05.pdf.

[33] Olivier Sylvain, Intermediary Design Duties, 50 Conn. L. Rev. 203 (2018), https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=1892&context=faculty_scholarship.

[34] See generally EU Code of Practice on Disinformation, https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation (last visited Nov. 20, 2020); Daphne Keller, Who Do You Sue? State and Platform Hybrid Power Over Online Speech, Hoover Institution, Aegis Series Paper No. 1902, at p. 24, https://www.hoover.org/sites/default/files/research/docs/who-do-you-sue-state-and-platform-hybrid-power-over-online-speech_0.pdf.

[35] Preamble, Securities and Exchange Board of India Act, 1992 (SEBI Act) (India).

[36] § 11(2)(d), SEBI Act, 1992 (India).

[37] § 11(2)(f), SEBI Act, 1992 (India).

[38] § 11(2)(f), SEBI Act, 1992 (India).

[39] Securities and Exchange Board of India (Investor Protection and Education Fund) Regulations, 2009 (India), https://investor.sebi.gov.in/ipef.html (last visited Nov. 27, 2020).

[40] See generally Securities and Exchange Board of India (Self Regulatory Organizations) Regulations, 2004 (India).

[41] “Misinformation and news quality on digital platforms in Australia - A position paper to guide code development”, Australian Communications and Media Authority (June 2020), https://www.acma.gov.au/sites/default/files/2020-06/Misinformation%20and%20news%20quality%20position%20paper.pdf.

[42] Id. at p. 11. These risks or acute harms include the health and safety of vulnerable groups, threats of harm to public and private property, threats to elections or democratic processes, and imminent financial harm.

[43] Id. at p. 1.

[44] Id. at pp. 31-32.

[45] Id.

[46] Id.

[47] Id.

[48] Fighting Fake News – Workshop Report, Information Society Project (Mar. 17, 2017), p. 7, https://law.yale.edu/sites/default/files/area/center/isp/documents/fighting_fake_news_-_workshop_report.pdf.

[49] Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken [Netzwerkdurchsetzungsgesetz—NetzDG] [Network Enforcement Act], Sept. 1, 2017, Bundesgesetzblatt, Teil I [BGBl I] at 3352 (Ger.).

[51] EU Code of Practice on Disinformation, supra note 34.

[52] NATO’s approach to countering disinformation: A focus on COVID-19, (July 17, 2020) https://www.nato.int/cps/en/natohq/177273.htm.

[53] European Commission, Staff Working Document titled “Assessment of the Code of Practice on Disinformation – Achievements and areas for further improvement” (Sept. 10, 2020) https://ec.europa.eu/digital-single-market/en/news/assessment-code-practice-disinformation-achievements-and-areas-further-improvement.

[54] A term coined by Evelyn Douek to describe the spread of collaborative content moderation arrangements among platforms (and with governments) without adequate accountability. Evelyn Douek, The Rise of Content Cartels, Knight First Amendment Institute (Feb. 11, 2020), https://knightcolumbia.org/content/the-rise-of-content-cartels.

[55] It is critical to ensure that the oversight body working with platforms to develop self-regulatory codes is transparent and collaborative. In India especially, a culture of opacity shrouds law-making by state actors, and regulators like the Ministry of Electronics and IT are often criticized for the lack of transparency and public participation in framing rules for Internet governance. See, e.g., Ayesha Khan, Submission to the Ministry of Electronics and Information Technology, Government of India, on the Draft Non-Personal Data Governance Framework, Wikimedia/Yale Law School Initiative on Intermediaries and Information Blog (Sept. 16, 2020), https://law.yale.edu/submission-ministry-electronics-and-information-technology-government-india-draft-non-personal-data.

[56] After widespread criticism and PMO directive, I&B ministry withdraws press release on fake news, Times of India (April 3, 2018), https://timesofindia.indiatimes.com/india/after-widespread-criticism-pmo-says-withdraw-press-release-on-fake-news/articleshow/63593207.cms; Kai Schultz and Suhasini Raj, India’s ‘Fake News’ Crackdown Crumbles Over Journalists’ Outrage, The New York Times (April 3, 2018), https://www.nytimes.com/2018/04/03/world/asia/india-fake-news.html.

[57] Rachel Aiello, Feds unveil plan to tackle fake news, interference in 2019 election, CTV News (Feb. 27, 2019), https://www.ctvnews.ca/politics/feds-unveil-plan-to-tackle-fake-news-interference-in-2019-election-1.4274273.

[58] Minister De Croo bindt strijd aan met fake news [Minister De Croo takes up the fight against fake news], De Standaard (2018), https://www.standaard.be/cnt/dmf20180502_03492898; PTI, India, with 12 nations, leads initiative at UN to counter misinformation on COVID-19, The Hindu (June 14, 2020), https://www.thehindu.com/news/national/india-with-12-nations-leads-initiative-at-un-to-counter-misinformation-on-covid-19/article31825371.ece.

[59] Such as the “EU v. Disinformation” initiative which brings together online platforms, civil society, fact-checking organisations and state actors, https://euvsdisinfo.eu/ (last visited Jan. 20, 2021).

[60] Global Network Initiative,  GNI Principles on Freedom of Expression and Privacy, https://globalnetworkinitiative.org/wp-content/uploads/2018/04/GNI-Principles-on-Freedom-of-Expression-and-Privacy.pdf.

[61] French Interim Mission Report, supra note 7, at 23-24.

[62] “This intervention approach is designed to create targeted incentives for platforms to participate in achieving a public interest objective without having a direct normative action on the service offered.” Id. at 16.

[63] The Financial Industry Regulatory Authority (FINRA) is “a not-for-profit organization that – working under the supervision of the SEC – actively engages with and provides essential tools for investors, member firms and policymakers.” https://www.finra.org (last visited Sept. 20, 2020).

[64] Nathaniel Persily and Alex Stamos, Regulating Online Political Advertising by Foreign Governments and Nationals, in Securing American Elections 34 (Michael McFaul ed., 2019).

[65] Id.

[66] Department For Digital, Culture, Media And Sport & Home Office, Online Harms White Paper, 2019, Cm. 57, https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper (UK).

[67] Recommendation 14, Australian Competition and Consumer Commission (ACCC) Digital Platforms Inquiry (DPI), https://www.accc.gov.au/system/files/Digital%20platforms%20inquiry%20-%20final%20report.pdf (last visited Nov. 15, 2020).

[68] Danya Hajjaji, Twitter Rejects India's Order to Block Accounts Amid Farmer Protests, Newsweek (Feb 10, 2021), https://www.newsweek.com/twitter-rejects-india-order-block-accounts-farmer-protests-1568278.

[69] The Press Council of India (statutory body), https://www.presscouncil.nic.in/.

[70] News Broadcasters Association (private self-regulatory body), http://www.nbanewdelhi.com/about-nba.

[71] Advertising Standards Council of India (voluntary self-regulatory body), https://www.ascionline.org/.

[72] Compared with existing law-making processes in the realm of technology and digital rights, the Telecom Regulatory Authority of India has adopted an efficient framework for public consultations on rule-making which is transparent and stakeholder-friendly. See Anirudh Burman & Bhargavi Zaveri, Measuring Regulatory Responsiveness in India: A Framework for Empirical Assessment, Carnegie India (Apr. 2, 2019), https://carnegieindia.org/2019/04/02/measuring-regulatory-responsiveness-in-india-framework-for-empirical-assessment-pub-78871.

[73] For instance, the Press Council of India is a statutory standard-setting body for print newspapers and news agencies. Members of Parliament, eminent academics and experts, journalists, newspaper editors, managers of news agencies, and individuals who manage the newspaper business are adequately represented in the Council. See Press Council Act, No. 37 of 1978, India Code, Preamble & § 5 (India).

[74] Press Council Act, No. 37 of 1978, India Code, § 20 (India).

[75] The Press Council of India is empowered to develop codes of practice for newspapers, define journalistic standards, and help “newspapers and news agencies to maintain their independence”, Press Council Act, No. 37 of 1978, India Code, § 13(2) (India). Evolving discussions on the functions of the Data Protection Authority (under the Personal Data Protection Bill, 2019) also provide for codes of practice to be jointly developed by industry actors and the government.

[76] Personal Data Protection Bill, 2019, § 41 (India).

[77] Nandita Singh, Media Experts Say Press Regulation Doesn’t Work in India, Press Council is a Joke, The Print (Apr. 9, 2018), https://theprint.in/india/governance/media-experts-say-press-regulation-doesnt-work-in-india-pci-is-a-joke/48213/.

[78] Regulation of media in India – A brief overview, PRS India (Nov. 16, 2011), https://www.prsindia.org/tags/press-council-india.

[79] Id.

[80] Global Network Initiative, Policy Brief on Content Regulation and Human Rights 30 (2020), https://globalnetworkinitiative.org/content-regulation-policy-brief/.

[81] Mark MacCarthy, Transparency Requirements for Digital Social Media Platforms: Recommendations for Policy Makers and Industry, in Transatlantic Working Group 11 (2020) https://www.ivir.nl/publicaties/download/Transparency_MacCarthy_Feb_2020.pdf.

[82] Alex Stamos et al., Combatting State-Sponsored Disinformation Campaigns from State-Aligned Actors, in Securing American Elections 48 (Michael McFaul ed., 2019).

[83] Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1621, 1625–29 (2018), https://harvardlawreview.org/wp-content/uploads/2018/04/1598-1670_Online.pdf.

[84] The Justice Collaboratory, Report of the Facebook Data Transparency Advisory Group (April 2019), https://law.yale.edu/sites/default/files/area/center/justice/document/dtag_report_5.22.2019.pdf.

[85] Id. p. 31.

[86] Forum on Information & Democracy, Working Group on Infodemics – Policy Framework, p. 23, https://informationdemocracy.org/wp-content/uploads/2020/11/ForumID_Report-on-infodemics_101120.pdf; Mark MacCarthy, A dispute resolution program for social media companies, Brookings (Oct. 2020), https://www.brookings.edu/research/a-dispute-resolution-program-for-social-media-companies.

[87] Chinmayi Arun, “Rebalancing Regulation of Speech: Hyper-Local Content on Global Web-Based Platforms,” (Jan. 23, 2018), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3108238.

[88] Id.

[89] The Justice Collaboratory, supra note 84, p. 30.

[90] Id. at 33, based on Elinor Ostrom’s principle of “nested enterprises” for governing “information commons”.

[91] Laurence Dodds, Facebook audit panel suggests creating an elected 'parliament', The Telegraph (May 23, 2019), https://www.telegraph.co.uk/technology/2019/05/23/facebook-audit-panel-suggests-creating-elected-parliament-users/.

[92] The Justice Collaboratory, supra note 84, p. 43.

[93] Monika Bickert, Charting a Way Forward – Online Content Regulation, Facebook (Feb. 2020) at p. 10, https://about.fb.com/wp-content/uploads/2020/02/Charting-A-Way-Forward_Online-Content-Regulation-White-Paper-1.pdf.

[94] John Bowers and Jonathan Zittrain, Answering impossible questions: content governance in an age of misinformation, Misinformation Review (Jan. 14, 2020), https://misinforeview.hks.harvard.edu/article/content-governance-in-an-age-of-disinformation/.

[95] The Oversight Board, https://www.oversightboard.com (last visited Nov. 21, 2020); Chinmayi Arun, The Facebook Oversight Board: An Experiment in Self-Regulation, Just Security (May 6, 2020) https://www.justsecurity.org/70021/the-facebook-oversight-board-an-experiment-in-self-regulation/.

[96] Article 19, Regulating social media: we need a new model that protects free expression (Apr. 25, 2018), https://www.article19.org/resources/regulating-social-media-need-new-model-protects-free-expression/; Article 19, The Social Media Councils: Consultation Paper (June 2019), https://www.article19.org/wp-content/uploads/2019/06/A19-SMC-Consultation-paper-2019-v05.pdf.

[97] Ajit Mohan, Preparing for Upcoming Indian Elections, Facebook Newsroom (April 8, 2019) https://about.fb.com/news/2019/04/preparing-for-indian-elections/.

[98] In the context of electoral integrity, Nikhil Pahwa highlights the need for measures to address “the gap between the responsibility and accountability” of significant platforms and the urgency for democracies to address the lack of platform accountability and transparency. Nikhil Pahwa, Can Regulation Douse Populism’s Online Fires?, Foreign Policy (Jan. 8, 2021), https://foreignpolicy.com/2021/01/08/capitol-mob-social-media-right-wing-violence-regulation/; see also Rasmus Kleis Nielsen, supra note 8.

[99] Australian Communications and Media Authority, supra note 41, p.11.

[100] Id.

[101] The Justice Collaboratory, supra note 84, p. 33.

[102] Staff Reporter, Attempt to hide crucial facts on Facebook’s role in Delhi riots, says Assembly Committee, The Hindu (Sept. 15, 2020), https://www.thehindu.com/news/cities/Delhi/attempt-to-hide-crucial-facts-on-facebooks-role-in-delhi-riots-says-assembly-committee/article32608982.ece.