Artificial Intelligence

The 21st Century Cures Act is noteworthy as the first legislative attempt at regulating artificial intelligence (“AI”) in the medical field. The Act added this provision to the FDCA:

(o) Regulation of medical and certain decisions support software: (1) The term device . . . shall not include a software function that is intended

*          *          *          *

(E) unless the function is intended . . . for the purpose of −

*          *          *          *

(ii) supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and

(iii) enabling such health care professional to independently review the basis for such recommendations that such software presents so that it is not the intent that such health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.

21 U.S.C. §360j(o)(1)(E). Note:  This same provision is also called “FDCA §520” – by those with an interest (either financial or regulatory) in keeping this area as arcane as possible.

The FDA has also responded with “draft guidance” (also an arcane term – meaning “we can change our minds about this at any time, and you can’t sue us”) about what the Agency considers to be a regulated “device” after the 21st Century Cures Act. “However, software functions that analyze or interpret medical device data in addition to transferring, storing, converting formats, or displaying clinical laboratory test or other device data and results remain subject to FDA’s regulatory oversight.” Id. at 12. Thus, the FDA now also has a definition of “artificial intelligence”:

A device or product that can imitate intelligent behavior or mimics human learning and reasoning. Artificial intelligence includes machine learning, neural networks, and natural language processing. Some terms used to describe artificial intelligence include: computer-aided detection/diagnosis, statistical learning, deep learning, or smart algorithms.

Emphasis added by us throughout.

We have emphasized the selected text because it identifies the underlying tension created as AI enters the medical field.  What’s going to happen – indeed, what is already happening in areas such as the analysis of medical images like x-rays and MRIs – is that AI is going to generate diagnoses (tumor or no tumor, say) and treatment output for physicians (so-called “computer-aided detection/diagnosis”) in numerous and expanding areas of medical practice.

The AI rubber is really going to hit the road when these “functions that analyze or interpret medical device data” begin to “provid[e] recommendations to a health care professional” that said professional can no longer “independently review,” so that our health care providers will find it necessary to “rely primarily on . . . such recommendations.”  To put it bluntly, at some point in the not-too-distant future, AI will be able to diagnose a disease and to propose how to treat it as well as or better than human physicians.  Moreover, since AI means that the machines can “teach” themselves through experience, they will evolve into something of a “black box,” running processes that humans will no longer be able to follow or analyze independently.
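For readers who want a concrete sense of what that “black box” looks like, here is a minimal illustrative sketch (ours, not the FDA’s or anyone else’s) written in Python using the scikit-learn library, with entirely made-up synthetic data standing in for the “tumor/no tumor” image analysis described above.  The point is simply that a trained model’s “basis” for any recommendation is a pile of learned numbers, not a rationale a physician could independently review:

```python
# Illustrative sketch only: a toy "tumor / no tumor" classifier trained on synthetic data,
# meant to show why a machine-learned model offers no human-readable rationale.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Fake "image-derived" feature vectors (e.g., 64 texture/intensity measurements per scan).
n_scans, n_features = 500, 64
X = rng.normal(size=(n_scans, n_features))
# Fake labels: 1 = "tumor", 0 = "no tumor", loosely tied to a hidden mix of features plus noise.
y = (X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_scans) > 0).astype(int)

# A small neural network "teaches itself" the relationship from the examples.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# The model will happily issue a recommendation for a new scan...
new_scan = rng.normal(size=(1, n_features))
print("recommendation:", "tumor" if model.predict(new_scan)[0] else "no tumor")
print("confidence:", model.predict_proba(new_scan)[0].max())

# ...but its "basis" is thousands of learned weights, not a reviewable explanation.
n_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("learned parameters a physician would have to 'independently review':", n_weights)
```

That, in miniature, is why the statutory assumption that a health care professional can “independently review the basis for such recommendations” becomes harder to sustain as the algorithms grow more complex.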

Just as computers can now beat any and all humans at complex logic games such as chess and Go, they will eventually be able to out-diagnose and out-treat any and all doctors.

What then?

Consider the tort system.  That’s what we do here on the Blog.

The diagnosis and treatment of disease have heretofore been considered the province of medical malpractice, with its traditions of medical “standard of care” and its professional requirements of “informed consent.”  At the same time, the medical malpractice area is governed, in most jurisdictions, by a variety of “tort reform” statutes that do things such as impose damages caps, create medical screening panels, and require early expert opinions.

Already, without considering AI, such statutory restrictions on medical malpractice plaintiffs lead these plaintiffs, whenever possible, to reframe what should be medical malpractice cases as product liability claims. Take Riegel v. Medtronic, Inc., 552 U.S. 312 (2008), for example.  In Riegel, the plaintiff was injured when his physician put the medical device (a balloon catheter) to a contraindicated use (in a heavily calcified artery).  Id. at 320. What happens when you push balloons against hard things with sharp edges? They go “pop.”  That’s what happened in Riegel.  Likewise, consider Wyeth v. Levine, 555 U.S. 555 (2009).  In Levine, the plaintiff was injured when a drug that was supposed to be injected intravenously was mistakenly injected into an artery instead.  Id. at 559-60.  Since the medical lobby has been much more successful in passing “tort reform” than have FDA-regulated product manufacturers, the plaintiffs have taken the path of least resistance, meaning product liability.

The same thing is sure to happen with AI – plaintiffs will surely attempt to impose what amounts to absolute liability on the manufacturers of AI-enhanced medical “devices.”  Just look at the experience with AI in the automotive field.  Every accident involving a self-driving car becomes a big deal, with the AI system presumed to be somehow at fault.  However, the negligence standard, for both auto accidents and medical malpractice, has always been that of a “reasonable man.”  That’s the crux of the coming struggle over AI:  when machines take over the functions once performed by human operators (whether drivers or doctors), are they also to be judged by a “reasonable man” negligence standard?  Or will strict (really strict) liability be imposed?

Then there’s preemption.  Under the statutory provisions we quoted at the beginning of this post, the FDA will be regulating AI that makes diagnostic and treatment “recommendations” to physicians as “devices.”  Perhaps the regulatory lawyers will figure out how to pitch AI as “substantially equivalent” to one or more already-marketed predicate devices, but if they don’t, then AI medical devices will require pre-market approval as “new” devices.  AI seems so different from prior devices – as indicated by its special treatment in the 21st Century Cures Act – that we wouldn’t be at all surprised if PMA preemption turns out to be available to protect many, if not all, AI device manufacturers from state tort liability.

But putting preemption aside for the moment, the typical AI medical injury suit will involve one of two patterns:  the AI equipment generates a diagnosis and a proposal for treatment, and the patient’s attending physician either (1) follows the machine’s output, or (2) does not.  For this hypothetical, we assume some sort of injury.

If the physician went with the machine, then the plaintiff is going to pursue a product liability action targeting AI.  Given AI’s function and the black box nature of its operation, it will be very tempting for the physician also to seek to avoid liability by, say, blaming the machine for a misdiagnosis due to unspecified errors in the algorithm.  In such cases, both the plaintiff and the defendant physician would presumably advocate some sort of “malfunction theory” of liability that would relieve the party asserting a “defect” of the usual product liability obligation to prove what that defect was.  Again, the black box nature of machine learning will force reliance on this type of theory.

So, are lawsuits targeting AI medical devices going to be allowed to proceed under strict liability?  In most jurisdictions, the “malfunction theory” is only available in strict liability.  There are two problems with strict liability in AI situations, something the law will have to sort out over the coming years.  First, computer software has not been considered a “product” for product liability purposes, under either the Second or Third Restatements of Torts.  Second, individualized medical diagnosis and treatment has always been considered a “service” rather than a “product,” which is the reason that doctors and hospitals have not previously been subjected to strict liability, even when part of their medical practice involves the prescription of drugs and/or the implanting of medical devices.  We discussed both of these issues in detail in our earlier post on AI.  So, while we can expect plaintiffs to assert strict liability in AI diagnosis and treatment cases, defendants have good grounds for relying on the negligence “reasonable man” standard.

In case two, where injury occurs after the physician elects not to follow the machine’s output, the plaintiff likely will not have a product liability action.  For one thing, once the results of AI are ignored, it’s pretty hard to argue causation.  The plaintiffs know that, so in this situation they will rely on the AI output to point a finger at the doctor.  Unlike case one, there is unlikely to be much intra-defendant finger-pointing, first because any AI defendant will win on causation, and second because doctors will remain the AI manufacturers’ customers, and it is not good business to tick off one’s customers.

So in case two, the malpractice question will be whether it is “standard of care” for a doctor (or other relevant health care provider) to follow an AI-generated diagnosis or treatment output, even when the doctor personally disagrees with that diagnosis/course of treatment based on his or her own independent medical judgment.  As posed, this question is close to the learned intermediary rule, only to some extent in reverse.  As with the learned intermediary rule, the basic proposition would be that the physician remains an independent actor, and that the job of the manufacturer (in this case, of AI equipment) is solely to provide the best possible information for the physician to evaluate and use as s/he sees fit.  Only, in this instance, the physician is the one protected by this rule, rather than the manufacturer.  The other alternative – forcing physicians to accept the output of AI as automatically being the medical “standard of care” – would effectively deprive physicians of professional independence, and we see little chance of the medical lobby allowing that alternative to be chosen.

Case two is thus a situation where “tort reform” could be relevant.  As AI catches on, and physicians become aware of the problem of having their therapeutic decisions second-guessed by plaintiffs relying on AI outputs, we would not be surprised to see statutory proposals declaring AI results inadmissible in medical malpractice actions.  We think that AI manufacturers should view such efforts as opportunities, rather than threats.  An alliance between physician and AI interests to ensure that (in case one) AI is judged by a negligence “reasonable man” standard if and when preemption doesn’t apply, rather than strict liability, could be combined with an evidentiary exclusion provision (in case two) in a comprehensive AI tort reform bill that all potential defendants could get behind.

Indeed, such legislation could be critically important in ensuring a successful future for AI in the medical space.  Even assuming that the FDA straightens out the regulatory side, the other side’s already existing impetus to impose strict liability on AI could hinder its acceptance or prohibitively raise the cost of acquiring and using AI technology.  States that wish to encourage the use of medical AI would be well advised to pass such statutes.  Congress might do the same, should product liability litigation hinder the use of this life-saving and (secondarily) cost-saving technology.

In the July 7, 2017, “Artificial Intelligence” issue of Science, we were intrigued by a short piece in the “Insights” section on “Artificial Intelligence in Research” that discussed the future use of autonomous robots in surgery.  Surgeonless surgery would “allow[] work around the clock with higher productivity, accuracy, and efficiency as well as shorter hospital stays and faster recovery.” Science, at 28.  The listed drawbacks were:  “technical difficulties in the midst of a surgery,” the “loss of relevance of surgeons,” and “how to equip artificial intelligence with tools to handle . . . inherent moral responsibility.”  Id.

Fascinating.  In addition to driverless cars, do we also need to contemplate surgeonless surgery?  We’ve long been aware of the advent of robots as an adjunct to surgery.  Bexis filed a (largely unsuccessful) PLAC amicus brief in Taylor v. Intuitive Surgical, Inc., 389 P.3d 517 (Wash. 2017), but the surgical robot in Taylor in no way threatened to displace the surgeon, and the applicability (if not application) of the learned intermediary rule in Taylor was undisputed.  Id. at 526-28.

We checked the Internet, and sure enough there were plenty of articles from reputable sources:

“Completely automated robotic surgery: on the horizon?” (Reuters)

“Autonomous Robot Surgeon Bests Humans in World First” (Inst. of Electrical & Electronics Engineers)

“Would you let a robot perform your surgery by itself?” (CNN)

“The Future Of Robotic Surgery” (Forbes)

Science fiction?  Apparently not anymore.  As the last article stated:

Having totally automated procedures was once a thing of science fiction, very futuristic and not very practical. . . .  But over the last three or four years, technology has evolved and this has become a possibility.  I think potentially we’ll see some automated tasks in the medical field in the next five years.

All these articles are from 2016.

Since we’ll still be practicing law in five years, we thought we’d better start thinking about this.

First, will there be product liability litigation involving autonomous surgical robots at all?  Existing surgical robots appear to have been “cleared” by the FDA, Taylor, 389 P.3d at 520, so there hasn’t been much of a preemption barrier to bringing suit.  We’re not FDA regulatory specialists, but we have some doubt about how a fully autonomous surgical robot – described as something out of “science fiction” in the articles – could be marketed as “substantially equivalent” to existing devices.  If autonomous surgical robots, or the software that runs them, must go through FDA pre-market approval, then they would be protected by preemption, subject only to “parallel claims” that the manufacturer somehow violated relevant FDA regulations.  We are assuming, perhaps incorrectly, the continuity of the current preemption regime for medical devices.

Second, what happens to the learned intermediary rule where the product itself – an autonomous surgical robot – stands in the shoes of the traditional learned intermediary?  Plaintiffs would, of course, give the same answer as always:  Abolish the rule as outdated.  We disagree.  Any consideration of the jurisprudential reasons for the learned intermediary rule, discussed here, suggests just the opposite.  The rule exists because patients can’t be expected to understand for themselves the complexities of prescription medical products, so the law demands that the scientific and technological information necessary to make intelligent use of these products be provided to trained, professional “learned intermediaries,” who are then expected to counsel their patients about individualized treatment decisions.

Does this rationale apply to autonomous surgical robots?  Absolutely.  These products will be some of the most advanced and complex medical technology yet produced, and the law cannot expect their manufacturers simply to provide patients with the instructions for use, tell them to “have at it,” and let them make up their own minds.  More than ever, patients will need medical professionals to explain the risks, benefits, and alternatives of automated surgery.  Who, then, becomes the learned intermediary when the traditional role of the surgeon is performed by a “product” in a potential legal action?  Looking to the purposes of the learned intermediary rule, our answer, at this point, is whichever physician has the legal duty to conduct the informed consent discussion with the patient.  The learned intermediary rule exists in large part to ensure that the doctor who will be advising the patient has adequate information to do so.  The professional standard that the medical community ultimately adopts to handle informed consent in automated surgery is its own business.  But however the medical community resolves that issue, the duty of the robot manufacturer should be the same as ever:  to provide information about the product adequate to enable the learned intermediary to evaluate that information, along with the patient’s medical history, in order to make proper treatment decisions and to explain these decisions to the patient.

Third, what will the advent of autonomous surgical robots do to the legal distinction between “services” and “product sales” that has traditionally protected health care providers – including hospitals – from strict liability?  We don’t know.  The answer probably depends on how the medical community integrates these robots into the health care system generally.  If robotic surgery is carried out under the close supervision of medical professionals, then probably not much will change in terms of the sales/services distinction.  That has been the case with currently available robot-assisted surgery.  See Moll v. Intuitive Surgical, Inc., 2014 WL 1389652, at *4 (E.D. La. April 1, 2014) (robot use did not remove surgical claim from scope of malpractice statute).

However, if cost consciousness leads to “routine” automated surgery being conducted with only technicians on hand to ensure that the robots are functioning properly, then the entire exercise starts to look more like the use of a product than the provision of medical services. Once again, it will be up to the medical community to develop its standards of care for the use of autonomous surgical robots.  If necessary, the law will adapt.

A number of potential sources of liability associated with automated surgery, such as failure to detect an unexpected cancer or a non-robot-related intra-operative complication (like an adverse reaction to anesthesia), would appear to implicate medical malpractice theories of liability (e.g., “lost chance”) rather than product liability.  How will courts handle claims at the intersection of medical malpractice and product liability – that, however good the robotic software is at its intended surgical use, it does not allow the robot to react to the unexpected the way human surgeons can?

Fourth, in terms of product liability, what’s the “product”?  Here, we mean whether the software, including the MRIs, CAT scans, and other patient imaging data, is considered something separate from the physical robot itself.  Is the software purchased, or provided, separately from the hardware that is the visible robot?  This distinction could make a big difference in available theories of liability.  It could also be important in determining component part liability in cases where the hardware and software manufacturers point fingers at one another.  In such cases, possible defendants include healthcare professionals, hospitals that maintain the robots, manufacturers of robotic hardware, and providers of software – both the software that runs the robot and patient-specific electronic scans.  As now, there is also the possibility that the patient may not follow proper instructions.  Will autonomous surgical robots be required to have aviation-style “black boxes” to provide post-accident information?

The prevailing view under current law has been that software is not a “product.”  “Courts have yet to extend products liability theories to bad software, computer viruses, or web sites with inadequate security or defective design.”  James A. Henderson, “Tort vs. Technology: Accommodating Disruptive Innovation,” 47 Ariz. St. L.J. 1145, 1165-66 n.135 (2015).  The current Restatement defines a “product” as “tangible personal property.”  Restatement (Third) of Torts, Products Liability §19(a) (1998).  In a variety of contexts, software has not been considered “tangible.”  See 2005 UCC Revisions to §§2-105(1), 9-102; Uniform Computer Information Transactions Act §102(a)(33) (NCCUSL 2002); ClearCorrect Operating, LLC v. ITC, 810 F.3d 1283, 1290-94 (Fed. Cir. 2015); United States v. Aleynikov, 676 F.3d 71, 76-77 (2d Cir. 2012); Wilson v. Midway Games, Inc., 198 F. Supp.2d 167, 173 (D. Conn. 2002) (product liability case); Sanders v. Acclaim Entertainment, Inc., 188 F. Supp.2d 1264, 1278-79 (D. Colo. 2002) (product liability case).  However, a couple of cases have gone the other way.  Winter v. G.P. Putnam’s Sons, 938 F.2d 1033, 1036 (9th Cir. 1991) (dictum in case involving books); Corley v. Stryker Corp., 2014 WL 3375596, at *3-4 (Mag. W.D. La. May 27, 2014), adopted, 2014 WL 3125990 (W.D. La. July 3, 2014).  Also of possible note, a legally non-binding 2016 FDA draft guidance considers software to be a “medical device” subject to FDA regulation in situations that would probably include autonomous surgery.

The availability – or not – of strict liability could be a big deal in cases alleging injuries arising from fully automated surgery performed by autonomous surgical robots.  What caused the injury?  Was there a problem with the robot’s hardware (such as a blade or needle malfunction)?  Was the robot incorrectly maintained?  These issues would not implicate the robot’s software.  On the other hand, was there a defect in the surgical software’s algorithms (that is, a design defect)?  Was the software designed properly but somehow corrupted (that is, a manufacturing defect), or hacked (an intervening cause)?  Or, to introduce a different defendant, was there some sort of error in the electronic patient-imaging files that told the robot how to operate on this particular patient?

In strict liability, a “product” defect is the key element of liability (as is a “good” for warranty claims).  A product malfunction, in the absence of reasonable secondary causes, can in many jurisdictions establish a jury-submissible case.  In negligence, the plaintiff must also prove breach of duty, and an accident is not generally considered probative of such a breach.  Res ipsa loquitur – the negligence version of circumstantial proof of defect – is almost unheard-of in the context of medical treatment.  If there is a “product,” then strict liability is available.  If there isn’t a “product,” the plaintiff is obliged to prove negligence.  This distinction can be important, given how difficult proof of defect is likely to be.  Cf. Pohly v. Intuitive Surgical, Inc., 2017 WL 900760, at *2-3 (N.D. Cal. March 7, 2017) (rejecting theory that invisible “microcracks” caused burns during robot-assisted surgery).

These are the issues that jump out at us as we consider the possibility of autonomous surgical robots for the first time.  There are undoubtedly others.  The technological possibilities are amazing.  As defense lawyers, it is our job to ensure that these possibilities are realized, and are not put out of reach by excessive liability.