Moderate Globally, Impact Locally: Tackling Social Media’s Hate Speech Problem in India

On 11 August 2020, an offensive Facebook post about the Prophet Muhammad played a significant role in inciting violent clashes in Bengaluru, India, the worst the city has seen in recent history. Unfortunately, this incident is not an isolated one. Hate speech and misinformation propagated through platforms like Facebook, Twitter, and WhatsApp have resulted in mob violence, lynching, and communal riots, and have claimed many innocent lives in India.

Facebook has come under particular scrutiny recently. An article published in the Wall Street Journal alleged that Facebook India’s Public Policy Head selectively shielded offensive posts by leaders of the ruling Bharatiya Janata Party (BJP). Facebook has since banned the BJP leader in question from its platform and acknowledged the “need to do more”. The Parliamentary Standing Committee on Information Technology, led by an Opposition member, has initiated an investigation into the WSJ claims, while India’s Information Technology Minister has subsequently made competing claims accusing Facebook of bias against supporters of right-of-centre ideology.

The recent allegations against Facebook have led to widespread concern about the platform’s lack of sincerity in tackling hate speech in India. As the platform remains embroiled in a heated political controversy, it is time for greater transparency about what goes on behind the heavy curtains of outsourced content moderation in its biggest market in the world.

The current scenario has understandably amplified calls for new legislation to moderate and regulate content on social media platforms. However, it is crucial that any legislation of this nature is carefully thought out and balances the public interest with the constitutionally protected rights to free speech and privacy. This caveat arises from the fact that similar incidents in the past have spurred the Indian Government to propose hasty and ill-conceived regulations. In December 2018, the Government proposed amendments to the existing Information Technology (Intermediaries Guidelines) Rules, 2011, issued under the Information Technology Act. The IT Act provides a ‘safe harbour’ to social media platforms like Facebook and Twitter. The proposed Guidelines seek to impose an obligation on platforms to identify the originators of private messages and to proactively monitor communications. Considered a knee-jerk reaction to the proliferation of misinformation on WhatsApp, the proposed amendments have been heavily criticised for threatening the free speech and privacy of users, weakening encryption, and enabling State surveillance. These guidelines have not yet been enacted.

Admittedly, any approach to regulating or moderating social media content will run the risk of infringing on users’ free speech and privacy. While it is important to think about the impact of such laws, it is equally important to address the circumstances under which American social media platforms operate in India. The recent criticism of Facebook demonstrates a clear lack of accountability and transparency in the content moderation and removal practices of social media platforms in India. As a result, investigative reports by journalists or leaked emails and correspondence are often the only sources of information about these problems.

Further, the Indian Government often finds it difficult to engage with platforms like Facebook or Twitter. On several occasions, the founders and global teams of these platforms have shown reluctance to actively engage with Indian lawmakers to address challenges unique to the Indian user base. For instance, in light of the various instances of mob lynching and communal violence fuelled by rumours and misinformation on WhatsApp, the Indian Government made several requests to WhatsApp in 2018 to devise “suitable interventions” to contain fake news and sensational messages on its platform. In the absence of a suitable response from the platform beyond token changes, the Government had to write to WhatsApp once again, stating that “it may not be out of place to consider the medium [WhatsApp] as an abettor (albeit unintentional)” in the instances of lynching and mob violence. Similarly, in 2019, the Parliamentary Committee on Information Technology made unsuccessful attempts to summon Twitter CEO Jack Dorsey to investigate the platform’s alleged political bias ahead of the General Elections.

With India still considering amendments to the Intermediary Guidelines, Facebook and other platforms will have to cooperate proactively with Indian lawmakers. A lack of timely cooperation in the past, as seen with the proposed amendments to the Intermediary Guidelines in 2018, has spurred the Government to initiate hasty and counterproductive legislative proposals.

According to Time, Facebook has commissioned an independent study to analyse the platform’s impact on human rights in India. This is the first time news of such a study has surfaced publicly, and it is not clear whether the State or civil society members have been consulted or included in the process. Expressing deep concern over opaque platform practices in India, over 40 NGOs have written an open letter to Mark Zuckerberg urging him to address Facebook India’s bias and ensure on-ground engagement with human rights organisations on the “India audit”.

These reports make it evident that social media platforms need to enter into comprehensive and participatory dialogue with members of civil society, academia, and users, as well as the Government. Further, given the recent allegations of biased moderation, Facebook and other platforms should voluntarily release information on how moderation decisions are made in India and, more importantly, on the people involved in the process. For instance, the Facebook Transparency Report and the quarterly Community Standards Enforcement Report could provide more granular information on decision-making processes for India rather than superficial content restriction statistics. Platforms, together with stakeholders and experts, can also explore collaborative and auditable content moderation policies that are specific to local contexts, and assess their possible impact on global information exchange. The sheer scale at which Facebook and other platforms make decisions on speech is mind-boggling, but they should not have to do it in silos. Community involvement and user empowerment are key.

Social media platforms are built around the idea of facilitating the free flow of information. In this context, their lack of information sharing and engagement with key stakeholders is particularly glaring. In India, with a government seriously considering new rules that would be harmful both to their business models and to their users’ freedom of expression and privacy, this insularity is particularly self-defeating. Platforms need to change tack and get serious about robust engagement and accountability, to assure civil society, the public, and Indian lawmakers that they can be trusted.

This article was written by Akriti Gaur, a lawyer and independent researcher studying digital platforms and their impact on human rights in India and South Asia. She has previously worked on research and advisory projects related to privacy and data governance, and on rights-based approaches to governing emerging technologies in India.

This is the sixth installment in our “Moderate Globally, Impact Locally” series on the global impacts of content moderation. It originally appeared on Protego Press here.