The Internet of Vulnerable Things
In January 2019, a Houston attorney sued Apple over a vulnerability discovered in FaceTime, claiming that the bug “allowed for the recording of a private deposition.” While Apple is unlikely to be found liable, the case still raises the question of what duty manufacturers such as Apple owe their customers to provide secure devices and software.
In general, there is no legal liability for simply selling a device with a software or hardware vulnerability. Under traditional notions of tort law and products liability, plaintiffs generally cannot sue if they have suffered no damages. Indeed, some courts have held that data breaches lead only to an increased risk of future harm, which does not rise to the level of injury needed to sustain a lawsuit. In other words, a purchaser of a defective device cannot sue the manufacturer until they are actually harmed by that device, and increased risk alone is generally not considered a cognizable harm. Finally, manufacturers can often escape liability through licensing agreements, one of the main reasons Microsoft was not held liable for WannaCry, the 2017 ransomware attack that held hundreds of thousands of Windows computers hostage by exploiting a security vulnerability in older versions of Microsoft’s operating system.
However, courts are becoming increasingly willing to hold companies liable for selling products with security risks or for failing to disclose information related to those risks. Regulatory agencies such as the FTC, FCC, and SEC have likewise become more active in investigating companies that produce vulnerable products.
Congress has also been legislating in this field. In 2017, it introduced a flurry of bills to impose some form of minimum security standard on Internet of Things (IoT) devices, including the IoT Cybersecurity Improvement Act of 2017, the IoT Consumer TIPS Act of 2017, and the SMART IoT Act. In 2018, Congress passed the Cybersecurity and Infrastructure Security Agency Act (CISA), which established a dedicated cybersecurity agency within the Department of Homeland Security.
At the state level, California has passed Senate Bill 327, becoming the first state to set security standards for IoT manufacturers. Similarly, U.K. governmental authorities have released new cybersecurity guidelines.
However, the current crop of legislation has only started to address the issue. The CISA Act merely elevated an existing organization to agency status without setting any specific cybersecurity standards; the California bill requires manufacturers only to incorporate “reasonable” security features, without defining which features qualify as reasonable; and while the U.K. guidelines are specific, they do not rise to the level of binding regulation.
Like data breaches, vulnerabilities create a risk of future harm and inflict real anxiety, and victims often experience chilling effects in their personal and professional lives. Recent changes in cybersecurity law seem to embrace the idea that the vulnerability itself, not the eventual hack, is a harm. That shift ultimately benefits the public. While the lack of regulation has certainly allowed the IoT industry to innovate at an unprecedented pace, this progress has come at the cost of consumers’ privacy rights. Relying solely on market forces to drive cybersecurity in the private sector is not a sustainable option. And neither ex ante regulation nor ex post liability is sufficient on its own: ex ante regulation cannot anticipate new forms of harm, and ex post liability cannot adequately address vulnerabilities that may take years to be exploited. Commentators have therefore suggested addressing these market failures with both approaches, with the end goal of reducing cyber risk. Doing so requires recognizing cybersecurity vulnerabilities as a harm in their own right. As the frequency and magnitude of cyberattacks continue to rise, it is only natural that we hold companies to account for building the vulnerable products that make these breaches possible.