Another icy bucket: who is liable when a healthcare AI system fails?

When AI contributes to patient injury, who will be held responsible? That is the question examined in an article in the New England Journal of Medicine (NEJM, 18 Jan, subscription required). It examines over 800 cases, pulling out the most relevant information from the 51 cases in which software contributed to physical injury. If you work in a healthcare provider's or vendor's legal department or in strategic sourcing, this article deserves your closest scrutiny.

AI, and even software generally, represents a relatively new area of tort law (an act or omission that leads to injury or harm). Responsibility is not clear because existing case law offers little direction and cases involving AI are few to date. The study reviews aspects of AI that may elevate or minimize risk. Ultimately, it comes down to minimizing risk in the adoption of AI tools, as it did with clinical decision support systems and EHRs, because not adopting them may eventually be construed as malpractice. 

Cases involving medical software and AI have generally clustered around three situations. From the study:

  1. Harms to patients caused by defects in software that is used to manage care or resources. Typically, plaintiffs bring product-liability claims against the developer.
  2. Physicians consulting software in making care decisions (e.g., to screen patients for certain conditions or generate medication regimens). In cases of harm, those physicians’ decisions are evaluated against what other specialists would have done–the standard of care.
  3. Apparent malfunctions of software embedded within devices, such as implantables, surgical robots, or monitoring tools. Plaintiffs may assert malpractice claims against physicians and hospitals, alleging negligent use, installation, or maintenance of these devices, including human error in reprogramming. Plaintiffs may also sue developers, alleging defects in manufacturing, design, and warnings.

Moving ahead, the study’s recommendations on weighing liability risk against the benefits of adopting AI in direct patient care with a “human in the loop” (not fully autonomous software) are:

  • Resist the temptation to lump all applications of AI together. Some tools are riskier than others.
  • The hallmarks of risk are: low opportunity to catch the error, high potential for patient harm, and unrealistic assumptions about clinician behavior.
  • For tools that can create high risk, expect to allocate substantial time and resources to safety monitoring and to gather considerable information from model developers and implementation teams. Lower-risk tools can be monitored in a more general, lower-touch way. 
  • Organizations can bargain, in a buyer’s market, for terms that minimize purchasers’ liability risk. Licensing agreements should, for instance, require developers to provide information necessary for effective risk assessment and monitoring, including developers’ assumptions regarding the data that models will ingest, processes for validating models, and recommendations for auditing model performance.
  • Purchasers should also insist on favorable terms governing liability, insurance, and risk management in AI licensing contracts–in other words, indemnification. If a tool is developed in-house, ensure that the organization carries adequate insurance to cover claims.
  • Apply lessons learned from older forms of decision support. Courts examine whether the recommendation was evidence-based and whether the physician should have heeded it for the patient in question.
  • Document, document, document.
  • Legal defenses for AI require different expertise and expert witnesses than typical malpractice cases.
  • It may also be prudent to inform patients when AI models are used in diagnostic or treatment decisions–informed consent.

POLITICO commentary 
