Experts Discuss AI’s Impact on Health Care and Law
As the use of artificial intelligence becomes more pervasive in law and medicine, the debate continues over whether these technologies should supplement, or substitute for, human capabilities. Yale alumni and a distinguished panel of guests, hosted by the Yale Law School Center for the Study of Corporate Law and the Solomon Center for Health Law and Policy, took up this issue at the Century Association in New York on Nov. 16 during a discussion on AI’s impact on health care and legal practice.
Panelists were David Rhew, M.D., Global Chief Medical Officer and VP of Healthcare for Worldwide Commercial Business at Microsoft; Laura Safdie ’11, co-founder, Chief Operating Officer, and General Counsel at Casetext; Harlan Krumholz, M.D., S.M., Harold H. Hines, Jr. Professor of Medicine at Yale School of Medicine and co-founder of Refactor Health and Hugo Health; and Lane Dilg ’04, Global Government Partnerships Lead at OpenAI. Alfred M. Rankin Professor of Law Abbe Gluck ’00, the Solomon Center’s Faculty Director, moderated the discussion.
Panelists began by discussing AI products already available to health care and legal professionals. These include ambient clinical intelligence, which can create accurate clinical notes from free-flowing conversations with patients, and AI document review assistants. Rhew and Safdie noted that such products can, with continual monitoring and error assessment, serve as potent tools. Dilg and Krumholz emphasized the human dynamics of AI as the technology continues to improve. AI models that analyze medical records or privileged documents create ethical challenges, particularly around privacy, that users will have to learn to navigate, they noted. Some ethical challenges may be best addressed through regulation, panelists said. One example is President Joe Biden’s recent Executive Order, the result of collaboration with OpenAI and other leading AI providers, to establish standards for AI safety and security.
Rhew pointed out that the executive order focuses not only on developing guardrails but also on identifying use cases. He noted that how AI is used can have a bigger safety impact than what kind of AI is used. AI diagnostic tools, for example, can diagnose some patients whom human medical professionals cannot, but they have high error rates. Although these error rates make such tools inferior to human diagnostic analysis in most situations, they could be beneficial when a patient with a terminal condition has gone without treatment because no human doctor could reach a diagnosis. Safdie and Dilg also spoke about how AI, in carefully chosen use cases, can lead to productivity gains and other benefits without an attendant increase in risk.