Solomon Center Panel Examines the Use of AI in Healthcare

Panelists at a Solomon Center discussion on the use of artificial intelligence in healthcare: clockwise from left, Sara Gerke, Sharona Hoffman, Collin Stultz, and Bonnie Kaplan.

The Solomon Center for Health Law and Policy hosted a panel discussion on the legal, ethical, and equity issues surrounding the use of artificial intelligence in healthcare. The April 3 event aimed to address the opportunities and challenges presented by artificial intelligence and machine learning technologies, especially as AI products and services continue to enter the market rapidly. The event was co-sponsored by the Yale Health Law and Policy Society (YHeLPS), the Yale Information Society Project (ISP), and the Center for Biomedical Innovation and Technology (CBIT).

The moderator, Yale School of Medicine’s Hirsh Shekhar, was joined by four panelists: Sara Gerke, Assistant Professor of Law at Penn State Dickinson Law; Sharona Hoffman, Professor of Law & Bioethics at Case Western Reserve University and Co-Director of the university’s Law-Medicine Center; Bonnie Kaplan, a faculty member of the Center for Biomedical Data Science and the Program on Biomedical Ethics at Yale School of Medicine; and Collin Stultz, Professor of Electrical Engineering and Computer Science at MIT, Director of the Harvard-MIT Health Sciences and Technology Program, and Associate Director of MIT’s Institute for Medical Engineering and Science.

The panel started by constructing a common definition of artificial intelligence in healthcare, a concept that means different things to different users, especially as generative AI and large language models such as ChatGPT make headlines. Stultz explained that artificial intelligence refers to a set of methods that aspire to give intelligence to machines. Users feed large data sets into algorithms, which can then interpret and mine that data to uncover complex relationships, answer questions, and generate novel insights in ways the users could not on their own.

Next, the panel covered the history of artificial intelligence in the medical informatics field. Kaplan provided a timeline of significant developments. The 1950s marked the beginning of a shift toward applying pattern recognition to medical records for diagnostic and treatment purposes. The 1970s saw the development of MYCIN, which used an artificial intelligence approach to recommend antibiotics for infections. The 1980s witnessed the advent of clinical decision support systems. In 2009, the HITECH Act mandated that these systems be integrated with electronic health records to facilitate Medicare and Medicaid reimbursement.

Following this, the panel discussed the regulatory and legal landscape of AI, highlighting privacy concerns. Gerke began with the current regulations. If an AI-based product is classified as a medical device, the FDA can regulate it. However, many AI-based tools are not regulated by the FDA. Gerke explained that the 21st Century Cures Act exempts certain software from the medical device definition, and many AI companies attempt to fall under this exemption. Under its current interpretation, the exemption may even cover complex algorithms used in black box AI models, those that make decisions or produce information without revealing their inner workings. Some states have passed comprehensive privacy laws, but those laws cover only their own residents. Moreover, Gerke noted, much health data collected by apps falls outside those laws’ jurisdiction and that of HIPAA. Gerke suggested that a comprehensive federal law may be needed to protect individual privacy.

The panel then dove into the ethical and equity challenges posed by AI in healthcare. Hoffman raised the concern that AI may perpetuate discrimination through algorithmic bias, as when models are trained disproportionately on data from certain demographic groups or draw incorrect inferences. Panelists also raised concerns about AI’s role in the doctor-patient relationship. They noted that patients may become less comfortable if AI is increasingly used in diagnosis and treatment, and there may also be questions of appropriate disclosure. Stultz said that black box methods can help guide clinicians’ decisions, but the recommendations these tools produce may have little connection to a clinician’s own understanding. To preserve relationships with patients and their loved ones, it may be important for clinicians to be able to explain those recommendations. Kaplan acknowledged that with more widespread data collection, individuals may not know how their personal health data is being used, which may make them uncomfortable.

Regarding liability, panelists said that AI in healthcare is at a crossroads. Under current law, physicians are shielded from liability if they follow the standard of care, and AI used as a confirmatory tool allows that protection to continue. However, if AI becomes part of the standard of care, physicians may be held liable if they deviate from an AI recommendation and a patient is harmed. In that scenario, liability may extend to a number of parties, including the hospital that purchased the AI tool and the tool’s original manufacturer. Panelists noted that as the technology progresses, U.S. lawmakers may want to look to Europe for regulatory models.

Finally, panelists addressed the topic of litigation. They recognized that even when AI is developed in good faith, medical discrimination may occur without injured parties having a viable path to pursue litigation. Panelists noted that this gap in the law should be addressed soon.