Principals and Principles: Agentic AI in Healthcare

Traditionally, agency law governs when a principal is liable for the acts of its agent, whereas products liability law holds manufacturers and sellers liable for harm caused by defective products. As artificial intelligence (AI) models become more agentic, with the capability to make decisions and perform tasks without human intervention, what legal regime should govern injuries caused by such models? Are agentic AI models more akin to products, to agents, or to something altogether different? We might, at least for the time being, fit AI models into existing legal doctrines more as products than as agents by regulating their use in especially sensitive settings like healthcare. That way, until technology and law develop appropriate mechanisms to protect the public from autonomous machines acting more like principals, AI will remain dependent on humans who can be held accountable.

If an agentic AI model is a product, a deployer might be responsible for design flaws, manufacturing defects, and/or failure to warn about risks. But given the learning nature of an AI model, who is the manufacturer? The model often combines code written by a developer (sometimes incorporating open-source models), learnings derived from disparate data sources, and prompts entered by the user. Who has made the product? Can a product exist without a defined creator? If so, who bears responsibility for its adverse effects? These questions become all the more difficult as AI assumes greater autonomy.

If AI is an agent, then the principal should be responsible for the agent's actions within the scope of its authority. But who is the principal: the coder, the data source from which the model learned, the deployer? At what point does an AI model break the chain of causation by acting outside the scope of its authority, thereby shielding the principal (whoever that may be) from liability? Can an agent exist without a principal who directs it on some level? If so, who bears responsibility for its actions? Again, autonomous AI models render these questions especially confounding.

Many of our legal structures are built around a sharp distinction between sentient humans and inanimate things. AI models are not human, but they are also not traditional property, particularly when they act with a measure of autonomy. They are quasi-principals of a kind our law does not currently contemplate. In the healthcare context, the unchecked autonomy of quasi-principals could wreak havoc. We want to promote innovation and capture the benefits of technology to improve outcomes and reduce costs. At the same time, we do not want to give AI the unfettered ability to cause injury. In its November 2024 paper, “Augmented Intelligence Development, Deployment and Use in Health Care,” the American Medical Association (AMA) called for aligning liability and incentives so that those who are “best positioned to know the AI system risks and best positioned to avert or mitigate harm do so through design, development, validation and implementation.”1 Toward that end, the AMA indicated that liability should rest with those issuing a mandate to use AI systems (e.g., payors or health systems) and that those who develop autonomous systems should procure liability insurance to protect their users. In the AMA’s words, “when physicians do not know or have reason to know that there are concerns about the quality and safety of an AI-enabled technology, they should not be held liable for the performance of the technology in question.”2

We are not currently equipped to hold AI models themselves accountable for their actions, nor is there a mandate that their creators properly insure them. Perhaps that will change with advances in technology and concomitant legal reform. In the short run, however, the best approach for healthcare may be to limit AI to enablement functions – empowering the humans who retain ultimate responsibility for choices – rather than to give AI substantial autonomy, so that AI does not act without the supervision of trained professionals. If AI agents are so circumscribed, at least until the law catches up with the technology, then physicians should not be deploying AI pursuant to a mandate that holds them accountable for its actions without the independent ability to supervise it properly. At Pearl, we use AI to support our provider partners, not to displace them. Professionals continue to make medical judgments and bear ultimate moral and legal responsibility for their medical decisions, while receiving the technological leverage to provide higher quality, more efficient, and more abundant treatment for their patients.

  1. Recommendation 5 at p. 7, available at https://www.ama-assn.org/system/files/ama-ai-principles.pdf.
  2. Id.
Jon Goldin

Chief Legal Officer, Pearl Health