Tracking the various fronts of federal artificial intelligence (AI) regulation can be a dizzying undertaking. Even within healthcare, multiple organizations and offices have jumped into the fray, including the White House, the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP/ONC) at the Department of Health and Human Services (HHS), the U.S. Food and Drug Administration (FDA), the HHS Office for Civil Rights (OCR) and the Centers for Medicare and Medicaid Services (CMS),1 as well as healthcare-adjacent agencies like the Federal Trade Commission (FTC). Recently, regulators have oriented healthcare AI regulation around a predictive/evidence-based distinction (described in more detail below) that does not withstand scrutiny and may need to give way to a more nuanced approach focused on a provider enablement/provider displacement distinction.
The ASTP/ONC has taken a leading role in healthcare AI regulation in connection with its responsibility for oversight of certain health information technology (HIT).2 In January 2024, ASTP/ONC published a final rule entitled “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing” (HTI-1), which promotes transparency into how algorithms function in HIT that contains Decision Support Interventions (DSIs).
In its DSI certification requirements, HTI-1 distinguishes between evidence-based and predictive DSIs. Evidence-based DSIs are “limited to those DSIs that are actively presented to users in clinical workflow to enhance, inform, or influence decision-making related to the care a patient receives and that do not meet the definition for Predictive DSI.” For instance, a clinical system that alerts providers to potential drug interactions or allergies before a prescription is written might qualify as an evidence-based DSI.
By contrast, predictive DSIs involve “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produce an output that results in prediction, classification, recommendation, evaluation, or analysis.” An example of a predictive DSI could be a system that identifies patients at risk of developing specific conditions, such as diabetes, by weighing factors like lifestyle, genetic information and health history. To promote transparency, HTI-1 imposes certain obligations to disclose the source attributes of these DSIs.
Developers of certified health information technology constituting predictive DSI must provide users with technical performance information, dubbed “source attributes,” so that the user can determine whether the FAVES standard has been met, i.e., whether the model is fair, appropriate, valid, effective and safe.3 The regulation’s distinction between predictive and evidence-based DSI will become increasingly difficult to maintain as large language models and other generative AI tools learn from ever-larger bodies of data, including peer-reviewed journals.
To the extent the training data set consists of the right inputs, a predictive model may have the best of both worlds: the scientific rigor of proper evidence-based review and the proverbial wisdom of crowds as an effective prediction tool. Furthermore, in the healthcare context, is an AI provider-enablement tool that weighs various evidentiary factors to prompt the provider to consider a certain diagnosis predictive or evidence-based?
In her article “To Explain or to Predict?”, Galit Shmueli distinguishes between predictive and explanatory models, but she does not see them as extremes on a continuum. Rather, she treats them as two different dimensions: “Explanatory power and predictive accuracy are different qualities; a model will possess some level of each.”4 This seems especially true in healthcare, where diagnoses and hypotheses are forward-looking but are formed on the basis of historical evidentiary facts that are present or backward-looking.
The distinction between predictive and evidence-based models will be a challenging paradigm to implement consistently in the regulation of healthcare AI. Perhaps healthcare AI should instead be regulated around a different axis: whether a model is designed to empower human providers in their decision-making (by explaining the basis for its inferences and predictions) or to displace human providers and render independent judgments (by reference to a black-box algorithm).5
That is how we think about such models at Pearl, where we are committed to empowering our provider partners to deploy their skills, training and judgment with easy access to salient information. We believe that such a focus presents less risk to the general public than models designed to displace providers. Our approach depends on the medical judgment of professionals, enhanced by ready access to more and better information.
- See 88 Fed. Reg. 78818 (Nov. 16, 2023) (to be codified at 45 C.F.R. pts. 156, 162, 170) (hereinafter “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing”), at 9–16.
- HHS, HHS Reorganizes Technology, Cybersecurity, Data, and Artificial Intelligence Strategy and Policy Functions, July 25, 2024, https://www.hhs.gov/about/news/2024/07/25/hhs-reorganizes-technology-cybersecurity-data-artificial-intelligence-strategy-policy-functions.html.
- See 45 C.F.R. Parts 170 and 171 (available at https://www.healthit.gov/sites/default/files/page/2023-12/hti-1-final-rule.pdf).
- Shmueli, Galit. “To Explain or to Predict?” Statistical Science 25, no. 3 (2010): 289–310, 305.
- This axis bears similarity to certain factors that exclude clinical decision support software from treatment as a medical device. See Section 520(o)(1)(E) of the Federal Food, Drug, and Cosmetic Act, 21 U.S.C. § 360j(o)(1)(E) (excluding from the definition of “device” software that displays medical information, provides recommendations to providers and enables providers to independently review the basis for such recommendations).