Exploring Liability Risks of Using AI Tools in Patient Care
Research led by SHP’s Michelle Mello provides some clarity about liability for the AI technologies being rapidly introduced to health care. She and her co-author analyzed more than 800 tort cases involving AI and conventional software, in health care and non-health-care contexts, to see how decisions related to AI and liability might play out in the courts.
Last year, large language models such as ChatGPT became widely available for the first time, and within a few months similar models were already being incorporated into medical record software.
Medicine rarely incorporates cutting-edge technology so rapidly, and the integration of AI tools makes many clinicians anxious. As the health care industry grapples with the best way to use these technologies to improve care, many clinicians may wonder what happens if patients are harmed, and who should be held liable.
Research led by Michelle Mello, JD, PhD, professor of law and health policy, is designed to provide some clarity regarding liability. AI software has not yet appeared in legal decisions with much frequency, so Mello and her co-author, JD-PhD candidate Neel Guha, analyzed more than 800 tort cases involving both AI and conventional software in health care and non-health-care contexts to see how decisions related to AI and liability might play out in the courts.
An article about their research was published Jan. 18 in the New England Journal of Medicine. Mello discusses their findings and what they mean for health care providers in this Q&A.
How did you approach this research?
We investigated the extent to which litigation over AI-related personal injuries is already appearing in judicial decisions to understand the extent of liability risk. The signals that emerge from the courts specifically related to AI are pretty faint, but there are enough cases related to non-AI-enabled software causing injury to give us a sense of how courts are likely to approach these kinds of claims in the future.
That's important because lawyers tend to give advice that's very conservative. We didn't find that lawyers are advising clients not to use AI in medical settings, but we found presentation materials suggesting they are strongly warning clients about the liability risks of using AI in general. In my opinion, this could lead to overly conservative decision making -- not doing things that could really help patients.