
With the ALI annual meeting approaching, the new drafts of the various restatements and other projects are becoming available. We’ve been reviewing Tentative Draft No. 5 of the Restatement (Third) of Torts, Liability for Physical and Emotional Harm. Like all ALI publications, it’s available for purchase from the ALI; see here.
The thing that catches our eye at the moment (the draft is over 140 pages long, after all) is a stub chapter – Chapter 5, “Factual Cause” – which contains only one section, §28. Why it’s numbered “28” we don’t know. Section 28 states, in its first paragraph, the general rule that plaintiffs bear the burden of proving causation, and, in its second paragraph, a version of the old Summers v. Tice form of alternative causation. If we remember correctly, that was §433B of the Second Restatement. There is only one comment under §28, incongruously denominated “comment e,” and that comment has very little to do with either of the black-letter paragraphs.
Instead, comment e purports to abolish – that’s right, abolish – any requirement that an expert witness testify to a “reasonable degree of medical certainty” or “medical probability” or any of the similar formulations that courts have used for generations to ensure that expert witnesses apply the same standards before a jury that they would use in their everyday practices. This was one of the projects for which neither of us signed on to the Members’ Consultative Group (it had been underway for too long by the time the ALI let us in), so don’t blame us! We can’t be everywhere. There’d be no time to blog.
Instead of any such standard, an expert witness would be allowed to state an opinion – in a civil case, anyway – that he or she holds merely as “more likely than not”:

[T]his section adopts the same preponderance standard that is universally applied in civil cases. Direct and cross examination can be employed to flesh out the degree of certainty with which an expert’s opinion is held and to identify opinions that are speculative and therefore inadmissible.

Restatement (Third) of Torts, Liability for Physical and Emotional Harm §28, comment e (Tentative Draft No. 5, 2007).

The comment offers three reasons for doing away with the “reasonable professional certainty” requirement:
First: The old standard “is problematic because the medical and scientific communities have no such ‘reasonable certainty’ standard.” Id.

Second: “There is a troubling inconsistency in imposing a higher threshold for the admissibility of expert testimony than is required for a party to meet the burden of proof.” Id.

Third: “[T]he reasonable-certainty standard provides no assurance of the quality of the expert’s qualifications, expertise, investigation, methodology, or reasoning.” Id.

We respectfully dissent.

We come from Ohio and Pennsylvania, and we’re both very comfortable with the “reasonable degree of professional certainty” standard that both these states employ. See State v. Jackson, 751 N.E.2d 946, 961 (Ohio 2001); McMahon v. Young, 276 A.2d 534, 535 (Pa. 1971); Corrado v. Thomas Jefferson University Hospital, 790 A.2d 1022, 1027, 1031 (Pa. Super. 2001). Nor do our experts seem to have much trouble with the standard. Of course, our experts are doing what experts are supposed to do: evaluating cases by the same standards that they would use when they’re not in court. The problems arise only when experts seek to fudge their ordinary professional standards in order to give lesser in-court testimony.

“Reasonable degree of medical certainty” has always been a relatively rigorous standard under the law. It’s supposed to be. It’s the same standard that the law requires for “pulling the plug” in the “right-to-die” context. See Ala. Code §22-8A-3; Cal. Prob. Code §4701; 16 Del. Code §2501; Ga. Code §31-39-2; Iowa Code §144A.2; Neb. Rev. Stat. §30-3402; N.J. Stat. Ann. §26:2H-55; N.Y. Surr. Ct. Proc. Act §1750-b(4); N.C. Gen. Stat. §90-321; 20 Pa. Cons. Stat. Ann. §5401; Ohio Rev. Code §§2133.02(A)(2)-(3), 1337.13(E). The standard arose to enforce in court the same standards of professional certainty that experts use in their regular employment. E.g., McMahon, 276 A.2d at 535 (“doctors must make decisions in their own profession every day based on their own expert opinions”).

Let’s critically examine the three reasons given by comment e. The first is that the relevant communities don’t professionally employ a “reasonable certainty” standard. That’s probably true in the abstract – doctors and other professionals probably don’t frame the decisions that they make in those terms. But so what? “More likely than not” is little more than a coin flip. It’s supposed to be, because juries have to reach decisions. But juries aren’t skilled professionals. Is a coin-flip standard really what well-educated professionals use to make decisions in their non-legal practices? We rather doubt it.

We’re going to stick to medical doctors here, because that’s what we know best. We know that we would be very uncomfortable with an attending physician who was willing to make a life-or-death medical decision – like those governed by the “pulling the plug” statutes mentioned above – on the basis of the beneficial result simply being “more likely” than some other, adverse result. We’re not physicians ourselves, but we remember that somebody in that profession once said something like “first, do no harm.” A diagnosis based upon a mere “more likely than not” standard doesn’t meet that standard. Rather, it seems to fly in the face of a great many advances in medical skill.
Even if a particular treatment objectively offers less than a 50% likelihood of success, a physician employs it only because s/he has determined that nothing else offers a better chance – not because s/he flipped a coin to judge which treatment might work best.

So while it’s true that “reasonable certainty” is not the way in which doctors (and, we presume, other professionals like the economists at the Federal Reserve) articulate their decisionmaking process, that fact doesn’t support the change that draft comment e proposes. If anything, the change is a step in the wrong direction. “More likely than not” seems to us to be farther from the actual comfort level that professionals need before making a decision than the existing “reasonable professional certainty” standard.

According to the Reporter’s Notes (pp. 135-36), somebody did some ad hoc research among doctors and scientists attending a recent medico-legal conference (not necessarily a representative sample, we suspect, given the setting) by eliciting criticism of the “reasonable medical certainty” standard. We don’t think that all the necessary questions were asked. We wonder what the commentary would have been if these same professionals had been asked whether they reach decisions by flipping a coin.

So, considering the first justification for draft comment e: while the underlying fact is probably true, it’s a non sequitur. We don’t see that fact as supporting the direction in which this comment seeks to take the law.

The second critique is that it’s “inconsistent” to judge expert testimony by a higher standard than the one lay jurors apply in reaching their ultimate decision. We don’t think so. There’s an inconsistency only if one ignores the reasons why the legal system requires expert testimony in so many situations. Experts are called upon to give opinions on scientific and technical matters that lay jurors aren’t supposed to be capable of evaluating on their own.

Unlike an ordinary witness, an expert is permitted wide latitude to offer opinions, including those that are not based on firsthand knowledge or observation. Presumably, this relaxation of the usual requirement of firsthand knowledge . . . is premised on an assumption that the expert’s opinion will have a reliable basis in the knowledge and experience of his discipline.

Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 592 (1993).

Experts are therefore held to a standard commensurate with their purpose in the legal system – that of their own discipline. “It is to make certain that an expert, whether basing testimony upon professional studies or personal experience, employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Kumho Tire Co., Ltd. v. Carmichael, 526 U.S. 137, 152 (1999) (emphasis added); see McMahon, supra. That’s why we have never wanted experts who purport to give professional opinions that, in reality, are barely better than a coin flip. We want them to apply their expertise as fully and as rigorously in the courtroom as outside it.

If the legal system were intended to operate with coin-flip experts, why have experts at all? We could just let lay juries decide technical questions themselves. Juries, not experts, apply the “more likely than not” standard because jurors and experts play fundamentally different roles in the process. Jurors have to decide cases. Experts are tolerated only when their professional opinions are “helpful” to what the jury has to do. An expert whose opinion is no more certain than what the jury could reach on its own just isn’t “helpful.”

There’s also a flip side to the “inconsistency” argument. One of the benefits of the “reasonable professional certainty” standard is that it’s uniform across all forms of litigation. The leading Ohio “reasonable professional certainty” decision, after all, is a criminal case. Clearing up the “inconsistency” that draft comment e identifies would require applying internally inconsistent standards to identical expert testimony depending on the type of case in which it is offered. Only certain types of civil cases allow juries to reach decisions based upon what is “more likely than not.” Other civil cases (most commonly, fraud) employ a “clear and convincing” standard. Criminal cases require proof “beyond a reasonable doubt.” If the law required expert testimony to match the standard that the jury is called upon to apply, then the admissibility of identical testimony – say, an accountant’s evaluation of the damage caused by a disputed transaction – could be governed by at least three different (or should we say “inconsistent”) standards, depending upon whether the accountant’s testimony were relevant to damages in a contract case, damages in a fraud case, or the amount of harm in a criminal case.

Thus we reject the inconsistency argument on two grounds: (a) it ignores the different roles that expert witnesses and lay jurors play in the legal system, and (b) it merely trades one kind of inconsistency for another.

The third and final ground offered is that the “reasonable professional certainty” standard offers “no assurance” of the “quality” of any aspect of expert testimony. The basic problem is that the comment’s solution offers even less protection for “quality.” This last argument sets up the perfect as the enemy of the good. If preserving “quality” were the ideal, then an “absolute certainty” standard would undoubtedly be best – but then we’d almost never have admissible expert testimony.

The “reasonable professional certainty” standard has the great advantage of emphasizing the “professional” aspect of expert testimony.
The “more likely than not” standard does away with any link to an expert’s professionalism, and thus permits opinions that would probably amount to professional malpractice if offered anywhere outside the courtroom. It is certainly true that the “reasonable professional certainty” standard provides no “assurance” of “quality.” Dumbing that standard down to a coin flip, however, offers even less “assurance” that expert testimony will serve the purposes for which it is intended.

We do agree with comment e on one point, however: the “reasonable professional certainty” standard could do with more content. In keeping with the reasons for expert testimony, that content should come not from legal formulations, but from paying more heed to how the relevant profession actually reaches decisions. That requires resort to source material outside the ordinary legal realm. Again, we’re most familiar with doctors, so what we can offer in this regard is limited to the medical literature.

There is, in fact, quite a body of medical literature devoted to the process of medical decisionmaking. That literature makes extremely clear that, in the diagnosis and treatment of patients, physicians routinely employ rigorous decision-making analyses, not coin flips:

In the diagnostic process, the clinician makes a series of inferences about the nature of malfunctions of the body. These inferences are derived from existing observations (historical data, physical findings, and routine tests) as well as from invasive tests and responses to various manipulations. Inferential reasoning proceeds until the clinician has discovered a diagnostic category sufficiently acceptable to either establish a prognosis, yield a therapeutic action, or both.

Jerome P. Kassirer, “Diagnostic Reasoning,” 110 Ann. Intern. Med. 893 (1989).

Over the past ten to fifteen years, physicians have come to place considerable reliance upon what the medical community calls “evidence-based medicine.” This approach supplements a treating physician’s own experiences, observations, and instincts with scientific and epidemiologic data to achieve the best possible diagnosis and the most appropriate treatment. The expectation is that physicians will seek out the best evidence to assist in making medical decisions. That means no cherry-picking, and no rote reliance on whatever a medical expert is handed by his or her employer.

Evidence-based practice is the integration of best research evidence with clinical expertise and patient values. In clinical applications, providers use the best evidence available to decide, together with their patients, on the suitable options for care.

Kathleen N. Lohr, “Rating the Strength of Scientific Evidence: Relevance for Quality Improvement Programs,” 16 Int’l J. for Quality in Health Care 9, 10 (2004).

The medical community realizes that scientific or statistical evidence cannot be applied in a vacuum, and that clinical experience and observation will always play a key role in the diagnosis and treatment of patients:

Clinical experience and the development of clinical instincts (particularly with respect to diagnosis) are a crucial and necessary part of becoming a competent physician. Many aspects of clinical practice cannot, or will not, ever be adequately tested. Clinical experience and its lessons are particularly important in these situations. At the same time, systematic attempts to record observations in a reproducible and unbiased fashion markedly increase the confidence one can have in knowledge about patient prognosis, the value of diagnostic tests, and the efficacy of treatment. In the absence of systematic observation one must be cautious in the interpretation derived from clinical experience and intuition, for it may at times be misleading.

American Medical Association Evidence-Based Medicine Working Group, “Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine,” 268 JAMA 2420, 2421 (1992).

Successful practice of medicine is not something that can be accomplished by reading a few medical records:

Competence depends on using expert scientific, clinical and humanistic judgment in clinical reasoning. Although expert clinicians often use pattern recognition for routine problems and hypothetico-deductive reasoning for complex problems outside their area of expertise, expert clinical reasoning usually involves working interpretations that are elaborated into branching networks of concepts. These networks help professionals initiate a process of problem solving from minimal information and use subsequent information to refine their understanding of the problem. Reflection allows practitioners to examine their own clinical reasoning strategies.

Ronald M. Epstein, et al., “Defining and Assessing Professional Competence,” 287 JAMA 226, 226-27 (2002). See A. Cecile J.W. Janssens, et al., “A New Logistic Regression Approach for the Evaluation of Diagnostic Test Results,” 25 Medical Decision Making 168 (2005) (detailing statistical models for determining the usefulness of further diagnostic testing to increase the certainty of a diagnosis).

The medical literature consistently emphasizes the need for an integrated approach to decision-making, using all available evidence, to achieve the highest accuracy and confidence attainable under the circumstances. Application of state-of-the-art statistical methodology requires approaches that go beyond mere “subjective judgments”:

In the end, statistical inference . . . can take us only so far. In fact, our clinical decisions are rarely based on subjective judgments or objective data alone, but rather on something between and beyond the two – the ethical doctrines that ultimately imbue the decisions with meaning and value. . . . The current emphasis on clinical outcomes and prescriptive guidelines is a clear reflection of both the influence on modern medical practice and the importance of probabilistic reasoning to clinical decision-making. In this context, good decisions succeed in balancing the objective scientific data against our subjective ethical values; they are evidence-based, but not evidence bound.

George A. Diamond, et al., “Prior Convictions: Bayesian Approaches to the Analysis and Interpretation of Clinical Megatrials,” 43 J. Am. Coll. Cardiol. 1929, 1936 (2004).

Thus the literature establishes that doctors should take an integrated approach to clinical decision-making, combining evidence-based methods with their own experience and perspective, to reach the highest degree of confidence – not a mere 51% – in diagnosis and treatment:

Mindfulness [in the clinical setting] can link evidence-based and relationship-centered care to help overcome the limitations of both approaches. The success of evidence-based approaches depends on the ability of the practitioner to decide which issues require further investigation and how to frame a question. These, in turn, require that the practitioner identify his or her own biases and the influences of the patient-physician relationship on framing of the question to investigate. This personal knowledge should also be considered a form of evidence and could be integrated into decision making to incorporate patients’ preferences. Evidence-based data that are not specific to one patient-physician relationship would then be applied in a more mindful way.

Ronald M. Epstein, “Mindful Practice,” 282 JAMA 833, 837 (1999).

The overriding theme of all these discussions in the medical literature (and we’ve barely scratched the surface) is that doctors should use every available method to attain the most accurate and appropriate diagnosis and treatment for each patient. In discussing the principles of evidence-based medicine, the AMA’s Evidence-Based Medicine Working Group outlined a hierarchy of medical evidence based upon its reliability and usefulness to treating physicians, and emphasized that:

Foremost among these principles are that value judgments underlie every clinical decision, that clinicians should seek evidence from as high in the hierarchy as possible, and that every clinical decision demands attention to the particular circumstances of the patient.

Gordon H. Guyatt, et al., “Users’ Guides to the Medical Literature, XXV. Evidence-Based Medicine: Principles for Applying the Users’ Guides to Patient Care,” 284 JAMA 1290, 1295 (2000).

Medical and research professionals constantly refine and improve their diagnostic approaches – to drive medicine as far as possible from the essentially random-chance model of 51%. “A fundamental tenet of all scientific and scholarly work is that every aspect of it must be subjected to critical appraisal; only those findings and principles that withstand such appraisal become established.” Tom Jefferson, et al., “Measuring the Quality of Editorial Peer Review,” 287 JAMA 2786 (2002). The National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine have all stated that “[i]ndividual scientists have a fundamental responsibility to ensure that their results are reproducible, that their research is reported thoroughly enough so that the results are reproducible, and that significant errors are corrected when they are recognized.” Responsible Science: Ensuring the Integrity of the Research Process, at 7 (National Academy Press 1992).

We could go on – indeed, in other forums we have gone on – about how a coin-flip approach to expert testimony is contrary to what the law is trying to accomplish with such testimony. We think the point, however, is clear. To the extent that the “reasonable professional certainty” standard needs more content to make it explicable to lay jurors and experts alike, that content is not to be found in yet another legal standard like “more likely than not” (or “clear and convincing” or “beyond a reasonable doubt”), but rather in the standards that the relevant profession chooses to follow.
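A postscript for readers who want a concrete sense of what the quantitative side of this diagnostic literature looks like: here is a back-of-the-envelope sketch of Bayesian updating in the odds form that the clinical decision-making papers describe. The numbers (a 30% pre-test probability and a test with 90% sensitivity and 90% specificity) are entirely hypothetical, our own illustration rather than anything drawn from the articles cited above.

\[
\text{pre-test odds} = \frac{P(D)}{1 - P(D)} = \frac{0.30}{0.70} \approx 0.43
\]
\[
LR_{+} = \frac{\text{sensitivity}}{1 - \text{specificity}} = \frac{0.90}{0.10} = 9
\]
\[
\text{post-test odds} = 0.43 \times 9 \approx 3.86, \qquad P(D \mid \text{positive test}) = \frac{3.86}{1 + 3.86} \approx 0.79
\]

The point of the illustration is simply that the diagnostic literature treats certainty as something to be quantified and driven upward with each additional piece of evidence, not something capped at “more likely than not.”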