The 21st Century Cures Act is noteworthy as the first legislative attempt at regulating artificial intelligence (“AI”) in the medical field. The Act added this provision to the FDCA:

(o) Regulation of medical and certain decisions support software: (1) The term device . . . shall not include a software function that is intended

*          *          *          *

(E) unless the function is intended . . . for the purpose of −

*          *          *          *

(ii) supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and

(iii) enabling such health care professional to independently review the basis for such recommendations that such software presents so that it is not the intent that such health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.

21 U.S.C. §360j(o)(1)(E). Note: This same provision is also called “FDCA §520(o)” – by those with an interest (either financial or regulatory) in keeping this area as arcane as possible.

The FDA has also responded with “draft guidance” (also an arcane term – meaning “we can change our minds about this at any time, and you can’t sue us”) about what the Agency considers to be a regulated “device” after the 21st Century Cures Act. “However, software functions that analyze or interpret medical device data in addition to transferring, storing, converting formats, or displaying clinical laboratory test or other device data and results remain subject to FDA’s regulatory oversight.” Id. at 12. Thus, the FDA now also has a definition of “artificial intelligence”:

A device or product that can imitate intelligent behavior or mimics human learning and reasoning. Artificial intelligence includes machine learning, neural networks, and natural language processing. Some terms used to describe artificial intelligence include: computer-aided detection/diagnosis, statistical learning, deep learning, or smart algorithms.

Emphasis added by us throughout.

We have emphasized the selected text because it identifies the underlying tension created as AI enters the medical field.  What’s going to happen – indeed, what is already happening in some areas, such as the analysis of medical images like x-rays and MRIs – is that AI is going to generate diagnoses (such as tumor or no tumor) and treatment output for physicians (so-called “computer-aided detection/diagnosis”) in numerous and expanding areas of medical practice.

The AI rubber is really going to hit the road when these “functions that analyze or interpret medical device data” begin to “provid[e] recommendations to a health care professional,” that said professional can no longer “independently review,” so that our health care providers will find it necessary to “rely primarily on . . . such recommendations.”  To put it bluntly, at some point in the not-too-distant future, AI will be able to diagnose a disease, and to propose how to treat it, as well as or better than human physicians.  Moreover, since AI means that the machines can “teach” themselves through experience, they will evolve into something of a “black box,” running processes that humans will no longer be able to follow or to analyze independently.

Just as computers can now beat any and all humans at complex logic games such as chess and Go, they will eventually be able to out-diagnose and out-treat any and all doctors.

What then?

Consider the tort system.  That’s what we do here on the Blog.

The diagnosis and treatment of disease have heretofore been considered the province of medical malpractice, with its traditions of medical “standard of care” and its professional requirements of “informed consent.”  At the same time, the medical malpractice area is governed, in most jurisdictions, by a variety of “tort reform” statutes that do things such as impose damages caps, create medical screening panels, and require early expert opinions.

Already, without considering AI, such statutory restrictions on medical malpractice plaintiffs lead these plaintiffs, whenever possible, to reframe what should be medical malpractice cases as product liability claims. Take Riegel v. Medtronic, Inc., 552 U.S. 312 (2008), for example.  In Riegel, the plaintiff was injured when his physician put the medical device (a balloon catheter) to a contraindicated use (in a heavily calcified artery).  Id. at 320. What happens when you push balloons against hard things with sharp edges? They go “pop.”  That’s what happened in Riegel.  Likewise, consider Wyeth v. Levine, 555 U.S. 555 (2009).  In Levine, the plaintiff was injured when a drug that was supposed to be injected intravenously was mistakenly injected into an artery instead.  Id. at 559-60.  Since the medical lobby has been much more successful in passing “tort reform” than have FDA-regulated product manufacturers, plaintiffs have taken the path of least resistance, meaning product liability.

The same thing is sure to happen with AI – where plaintiffs will surely attempt to impose what amounts to absolute liability on the manufacturers of AI-enhanced medical “devices.”  Just look at the experience with AI in the automotive field.  Every accident involving a self-driving car becomes a big deal, with the AI systems presumed to be somehow at fault.  However, the negligence standard, for both auto accidents and medical malpractice, has always been that of a “reasonable man.”  That’s the crux of what will be the struggle over AI – when machines take over the functions once performed by human operators (whether drivers or doctors), are they also to be judged by a “reasonable man” negligence standard?  Or will strict (really strict) liability be imposed?

Then there’s preemption.  Under the statutory provisions we quoted at the beginning of this post, the FDA will be regulating AI that makes diagnostic and treatment “recommendations” to physicians as “devices.”  Perhaps the regulatory lawyers will figure out how to pitch AI as “substantially equivalent” to one or more already-marketed predicate devices, but if they don’t, then AI medical devices will require pre-market approval as “new” devices.  AI seems so different from prior devices – as indicated by its special treatment in the 21st Century Cures Act – that we wouldn’t be at all surprised if PMA preemption proves available to protect many, if not all, AI device manufacturers from state tort liability.

But putting preemption aside for the moment, the typical AI medical injury suit will involve one of two patterns – the AI equipment generates a diagnosis and proposal for treatment, and the patient’s attending physician either: (1) follows the machine’s output, or (2) does not follow that output.  For this hypothetical, we assume some sort of injury.

If the physician went with the machine, then the plaintiff is going to pursue a product liability action targeting AI.  Given AI’s function, and the black box nature of its operation, in such cases it will be very tempting for the physician also to seek to avoid liability by, say, blaming the machine for a misdiagnosis due to unspecified errors in the algorithm.  In such cases, both the plaintiff and the defendant physician would presumably advocate some sort of “malfunction theory” of liability that would relieve the party asserting a “defect” of the usual product liability obligation to prove what the defect was.  Again, the black box nature of machine learning will force reliance on this type of theory.

So, are lawsuits targeting AI medical devices going to be allowed to proceed under strict liability?  In most jurisdictions, the “malfunction theory” is available only in strict liability.  There are two problems with strict liability in AI situations, something the law will have to sort out over the coming years.  First, computer software has not been considered a “product” for product liability purposes, under either the Second or Third Restatements of Torts.  Second, individualized medical diagnosis and treatment have always been considered a “service” rather than a “product,” which is the reason that doctors and hospitals have not previously been subjected to strict liability, even when part of their medical practice involves the prescription of drugs and/or the implanting of medical devices.  We discussed both of these issues in detail in our earlier post on AI.  So, while we can expect plaintiffs to assert strict liability in AI diagnosis and treatment cases, defendants have good grounds for relying on the negligence “reasonable man” standard.

In case two, where injury occurs after the physician elects not to follow the machine’s output, the plaintiff likely will not have a product liability action.  For one thing, once the results of AI are ignored, it’s pretty hard to argue causation.  The plaintiffs know that, so in this situation they will rely on the AI output to point a finger at the doctor.  Unlike case one, there is unlikely to be much intra-defendant finger-pointing, first because any AI defendant will win on causation, and second because doctors will remain the AI manufacturers’ customers, and it is not good business to tick off one’s customers.

So in case two, the malpractice question will be whether it is “standard of care” for a doctor (or other relevant health care provider) to follow an AI-generated diagnosis or treatment output, even when the doctor personally disagrees with that diagnosis/course of treatment based on his or her own independent medical judgment.  As posed, this question is close to the learned intermediary rule, only to some extent in reverse.  As with the learned intermediary rule, the basic proposition would be that the physician remains an independent actor, and that the job of the manufacturer (in this case, of AI equipment) is solely to provide the best possible information for the physician to evaluate and use as s/he sees fit.  Only, in this instance, the physician is the one protected by this rule, rather than the manufacturer.  The other alternative – forcing physicians to accept the output of AI as automatically being the medical “standard of care” – would effectively deprive physicians of professional independence, and we see little chance of the medical lobby allowing that alternative to be chosen.

Case two is thus a situation where “tort reform” could be relevant.  As AI catches on, and physicians become aware of the problem of having their therapeutic decisions second-guessed by plaintiffs relying on AI outputs, we would not be surprised to see statutory proposals declaring AI results inadmissible in medical malpractice actions.  We think that AI manufacturers should view such efforts as opportunities, rather than threats.  An alliance between physician and AI interests to ensure that (in case one) AI is judged by a negligence “reasonable man” standard if and when preemption doesn’t apply, rather than strict liability, could be combined with an evidentiary exclusion provision (in case two) in a comprehensive AI tort reform bill that all potential defendants could get behind.

Indeed, such legislation could be critically important in ensuring a successful future for AI in the medical space.  Even assuming that the FDA straightens out the regulatory side, the other side’s existing impetus to impose strict liability on AI could hinder its acceptance or prohibitively raise the cost of acquiring and using AI technology.  States that wish to encourage the use of medical AI would be well advised to pass such statutes.  Congress could do the same, should product liability litigation hinder the use of this life-saving (and, secondarily, cost-saving) technology.