This is the second of three articles drafted by the WIII Initiative’s summer researchers, reflecting on sessions they attended at this year’s virtual RightsCon.
Access Now held its annual conference on digital technologies and human rights from 27–31 July 2020. The conference was conducted virtually, with experts joining via video conferencing to discuss a range of issues divided into ten tracks. I covered the Privacy and Surveillance track, which included more than 30 sessions. Below, I summarize the key themes that emerged from these discussions:
- In an era of surveillance capitalism, the concept of choice is quickly disappearing: Because data-driven technologies are now ubiquitous, citizens increasingly have to cede personal information to tech companies simply to access essential services. Until governments enact comprehensive privacy laws, companies will keep selling predictions of human behavior. To prevent a future era of digital totalitarianism, citizens need to demand laws that deem such control unacceptable.
- Facial recognition technologies (FRT) are being rapidly deployed, often under the pretext of national security: The international donor community is funding surveillance in developing countries like Uganda as ‘classified expenditure’, exempt from any scrutiny. In Russia, FRT was initially introduced to track hooligans during the 2018 FIFA World Cup, but its use continues without any regulatory oversight. In the US, law enforcement continues to use FRT in the absence of federal regulation, although some states and cities have banned its use because of the threats it poses to First Amendment rights. Companies like Amazon and Microsoft are reconsidering sales of FRT, particularly to law enforcement, given the absence of adequate regulatory protections to safeguard civil liberties.
- Global use of spyware is increasing, without accountability: Countries as diverse as India, Mexico and the UAE have seen a rise in spyware attacks against human rights defenders and journalists. Spyware is often produced by shell companies operating in partnership with state agencies, which provide a safe haven for its production and thereby enable a toxic business model that violates international human rights. Civil society groups have sued some of these companies in the UK, Israel and Spain, and the UN Special Rapporteurs on Freedom of Expression and on Extrajudicial Executions have called for a global moratorium on the use of these technologies. Democracies have a responsibility under international law to regulate the production of technologies that pose a risk to human rights. As with the weapons industry, sales should be made public, along with a listing of the human rights violations connected with those companies or their products. International frameworks like the UN’s proposed Treaty on Business and Human Rights should lead the way in shaping regulation.
- Digital ID systems are only exacerbating existing inequalities: By tying access to benefits to the disclosure of detailed personal information, these systems rob the poor of any modicum of privacy. They are often promoted as quick digital solutions that ‘protect’ a country’s welfare system from abuse by the ‘bad guys’, who are mostly members of marginalized communities such as refugees or racial minorities (e.g., in Ireland). China operates a more draconian version of such a system and is working to integrate DNA, voice and facial recognition databases to achieve ‘social control’.
- COVID-19 has led to the hasty adoption of technological solutions without concurrent oversight mechanisms: Scrambling to contain the pandemic, governments have turned to tech-based measures such as contact-tracing apps and digital health certificates. These are often introduced through backdoor channels, without a prior consultation process examining their legality, necessity and proportionality. Because these measures lack sunset clauses, there are genuine concerns that extraordinary measures adopted during the pandemic will become the ‘new normal’.