Moderate Globally, Impact Locally: Digital Technology as Accelerant: Growth and Genocide in Myanmar
Every person in Myanmar above the age of 10 has lived part, if not most, of their life under a military dictatorship characterized by an obsession with achieving autonomy from international influences. Before the economic and political reforms of the past decade, Myanmar was one of the most isolated nations in the world. The digital revolution that has reshaped nearly every aspect of human life over the past half-century was something the average Myanmar person had no personal experience with.
Recent reforms brought an explosion of high hopes and technological access, and Myanmar underwent a digital leapfrog, with internet access jumping from nearly zero percent in 2015 to over 40 percent in 2020. At 27 years old, I remember living in a Yangon where having a refrigerator was considered high tech; now, there are 10-year-olds making videos on TikTok.
Everyone was excited for Myanmar's digital revolution to spur the economic and social changes needed to transform the country from a pariah state into the next economic frontier. Tourists, development aid, and economic investment poured into the country. The cost of SIM cards dropped from around 1,000 US dollars in 2013 to a little over 1 dollar today. This dramatic price drop was paired with a glut of relatively affordable smartphones and phone carriers that provided data packages that made social media platforms like Facebook free, or nearly free, to use. This led to the current situation where about 21 million out of the 22 million people using the internet are on Facebook. Facebook became the main conduit through which people accessed the internet, and now is used for nearly every online activity from selling livestock, watching porn, reading the news, to discussing politics.
Then, following the exodus of over 700,000 Rohingya people from Myanmar’s war-torn Rakhine State, Facebook was accused of enabling a genocide.
The ongoing civil wars in the country and the state violence against the Rohingya, characterized by the UN as ethnic cleansing with genocidal intent, put a spotlight on the potential for harm brought on by digital connectivity. Given its market dominance, Facebook has faced great scrutiny in Myanmar for the role social media has played in normalizing, promoting, and facilitating violence against minority groups.
Facebook was, and continues to be, the favored tool for disseminating hate speech and misinformation against the Rohingya people, Muslims in general, and other marginalized communities. Despite repeated warnings from civil society organizations in the country, Facebook failed to address the new challenges with the urgency and level of resources needed during the Rohingya crisis, and failed to even enforce its own community standards in many cases.
To be sure, there have been improvements in recent years, with the social media giant appointing a Myanmar-focused team, expanding its number of Myanmar-language content reviewers, adding minority-language content reviewers, establishing more regular contact with civil society, and devoting resources and tools to limiting disinformation during Myanmar's upcoming election. The company also removed the accounts of Myanmar military officials and dozens of pages on Facebook and Instagram linked to the military for engaging in "coordinated inauthentic behavior." The company defines "inauthentic behavior" as "engag[ing] in behaviors designed to enable other violations under our Community Standards," through tactics such as the use of fake accounts and bots.
Recognizing the seriousness of this issue, actors ranging from the EU to telecommunications companies to civil society organizations have poured resources into digital literacy programs, anti-hate-speech campaigns, social media monitoring, and advocacy. Overall, much of this programming focuses on what Myanmar and the people of Myanmar lack—rule of law, laws protecting free speech, digital literacy, knowledge of what constitutes hate speech, and resources to fund and execute the programming that is needed.
In the frenzy of the desperate firefighting by organizations on the ground, less attention has been given to larger systemic issues that are contributing to the fire.
There is a need to pay greater attention to those coordinated groups that are working to spread conspiracy theories, false information, and hatred to understand who they are, who is funding them, and how their work can be disrupted—and, if necessary, penalized.
There is a need to reevaluate how social media platforms are designed in a way that incentivizes and rewards bad behavior.
There is also a need to question how much blame we want to assign to social media companies, and whether it is to the overall good to give them the responsibility, and therefore power, to determine what is and isn't acceptable speech.
Finally, there is a need to ask ourselves about alternatives we can build, when many governments have proven themselves more than willing to surveil and prosecute netizens under the guise of health, security, and penalizing hate speech.
It is dangerous to give private, profit-driven multinational corporations the power to draw the line between hate speech and free speech. It is just as dangerous to give that same power to governments, especially at a time of rising ethno-nationalist sentiment around the globe, when governments are increasingly willing to overtly and covertly gather as much data as possible to use against those they govern. The ongoing legal proceedings against Myanmar in international courts regarding the Rohingya and other ethnic minorities, and statements from UN investigative bodies on Myanmar that Facebook has failed to release to them evidence of serious international crimes, show that neither company policies nor national laws are enough to ensure safety, justice, and dignity for vulnerable populations.
The solution to all this, as unsexy as it sounds, is a multifaceted, multi-stakeholder, long-term effort to build strong legal and cultural institutions that disperse the power, and the responsibility, to create and maintain safe and inclusive online spaces among governments, individuals, the private sector, and civil society.
Aye Min Thant is the Tech for Peace Manager at Phandeeyar, an innovation lab which promotes safer and more inclusive digital spaces in Myanmar. Formerly, she was a Pulitzer Prize winning journalist who covered business, politics, and ethno-religious conflicts in Myanmar for Reuters. You can follow her on Twitter @ma_ayeminthant.
This is the third installment in our “Moderate Globally, Impact Locally” series on the global impacts of content moderation. It originally appeared on Global Voices here and on Techdirt here.