Almost ten years ago Bexis argued that the Federal Rules were technologically out-of-date and proposed a number of topics that would benefit from rules-based codification. One of those topics involved machine learning – specifically use of predictive coding in ediscovery.
That didn’t go anywhere, but on May 2, 2025, the Advisory Committee on Evidence Rules proposed language for a new rule – Fed. R. Evid. 707 – addressed to the impact of artificial-intelligence-generated evidence in the courtroom. Here’s the proposed language:
Rule 707. Machine-Generated Evidence
When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d). This rule does not apply to the output of basic scientific instruments.
Committee on Rules of Practice and Procedure, Agenda Book, at Appendix B, page 75 of 486 (June 10, 2025). This proposal is the product of three years of research and investigation. Id.
Here’s what the committee had to say about the new Rule 707:
- The rule applies to “artificial-intelligence, machine-learning, and the admissibility of evidence.” Id. at 5.
- The new rule “would essentially apply the Rule 702 standard to evidence that is the product of machine learning.” Id. at 28.
- “The rule would exempt the output of basic scientific instruments or routinely relied upon commercial software” – exemptions that the Advisory Committee may well decide to further explain in subsequent drafts. Id.
- The new evidence rule will be coordinated with the Civil Rules committee potentially to address “disclosure issues” concerning “source codes and trade secrets.” Id. at 29.
- The concern with “machine learning . . . is that it might be unreliable, and yet the unreliability will be buried in the program and difficult to detect.” Id. at 58.
- Hearsay rules are meaningless because “a machine cannot be cross-examined.” Id.
- Amending Fed. R. Evid. 702 to address “admissibility of machine evidence” was impractical because Rule 702 was “a rule of general applicability” and it had just been amended in 2023. Id.
- Some committee advisors consider that an exception exempting “the output of basic scientific instruments or routinely relied upon commercial software” would be too broad, since the intent of the exception is to avoid the new rule’s application to “common instruments.” Id. at 29.
- The final sentence of the rule is intended to give trial courts sufficient latitude to avoid unnecessary litigation over the output from simple scientific instruments that are relied upon in everyday life. Id. at 77.
The “solution,” according to the Committee, “was to draft a new Rule 707 providing that if machine-generated evidence is introduced without an expert witness,” it would still be “considered expert testimony,” and would be expressly subject to Rule 702’s standards. Id. at 58 (emphasis added). The Committee gave several examples of how this might come about: (1) securities litigation – “machine output analyzing stock trading patterns to establish causation”; (2) copyright litigation – “analysis of digital data to determine whether two works are substantially similar”; and (3) patent litigation – “machine learning that assesses the complexity of software programs to determine the likelihood that code was misappropriated.” Id. In each of these cases, the committee concluded that offering machine output through either a non-expert or a “certification of authenticity” was at least “possible.” Id.
We’re not so sure about that.
Given the highly technical nature of artificial intelligence, machine learning, and the like, we think these examples are unlikely ever to occur, and that it would be risky for the proponent of the evidence even to try to present it other than through an expert. At least in the product liability litigation sphere where we spend our time, the situations where machine-generated evidence could be offered without an expert witness to explain it would be more like this (the possibility of an artificial-intelligence-generated statement from a deceased plaintiff). Further, even if the Committee’s examples are theoretically possible, they are quite unlikely to happen in practice. Any proponent that went to the trouble of putting together such evidence would want the jury (or other factfinder) to give it the full weight the proponent thinks it deserves, and expert validation would be much more persuasive than some lay witness who would be destroyed on cross-examination by being unable to explain the technology.
So why impose this limit on Rule 707 at all? The result would be an extremely narrow rule that rarely, if ever, comes into play – sort of like Fed. R. Evid. 706. Don’t get us wrong. We think a rule in this area is a good idea, but one as limited as the current proposal doesn’t meet the need, and may not be worth the effort.
The current proposed comment to Rule 707 justifies the Rule because:
Machine-generated evidence can involve the use of a computer-based process or system to make predictions or draw inferences from existing data. When a machine draws inferences and makes predictions, there are concerns about the reliability of that process, akin to the reliability concerns about expert witnesses.
(6/10/2025) Agenda Book at 75. At the same time, the drafters were careful not to create incentives to offer machine-generated evidence without expert support and interpretation:
The rule is not intended to encourage parties to opt for machine-generated evidence over live expert witnesses. Indeed the point of the rule is to provide reliability-based protections when a party chooses to proffer machine evidence instead of a live expert.
Id. at 76.
We were struck by the potentially broader applicability of the anticipated Rule 707 analysis. The proponent of such machine-generated evidence would be required to:
- Consider[] whether the inputs into the process are sufficient for purposes of ensuring the validity of the resulting output. For example, the court should consider whether the training data for a machine learning process is sufficiently representative to render an accurate output for the population involved in the case at hand.
- Consider[] whether the process has been validated in circumstances sufficiently similar to the case at hand.
Other “problems” that the drafters expected the new rule to address are: “using the process for purposes that were not intended,” “analytical error or incompleteness,” “inaccuracy or bias built into the underlying data or formulas,” and “lack of interpretability of the machine’s process.” Id. at 75.
But machine-generated evidence presents these issues, no matter what witness presents it.
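To make the committee’s “sufficiently representative” and validation concerns concrete, here is a minimal, purely illustrative sketch (in Python, with invented numbers and feature names) of the kind of question a court applying Rule 702(a)-(d) to machine output would be asking: was the population the model learned from anything like the population actually before the court?

```python
# Illustrative only: a toy "representativeness" check comparing the population a
# model was trained on with the population involved in the case at hand.
# All numbers and feature names are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training cohort (what the model's developer trained it on).
train_age = rng.normal(35, 8, 5_000)
# Hypothetical case population (the plaintiffs actually before the court).
case_age = rng.normal(72, 6, 200)

# How much of the case population falls outside the range the model ever saw?
lo, hi = train_age.min(), train_age.max()
outside = np.mean((case_age < lo) | (case_age > hi))

print(f"training-age range: {lo:.0f} to {hi:.0f}")
print(f"share of case population outside that range: {outside:.0%}")
# If a meaningful share of the case population lies outside the training range,
# the model is extrapolating, and its output may not be valid "for the
# population involved in the case at hand" -- with or without a sponsoring expert.
```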
So our reaction is that the proposed rule should drop the limiting phrase “without an expert witness” entirely. Consider the last set of bullet points above. All of those things – sufficiency of the inputs; validation in sufficiently similar fact patterns; using the analysis for unintended purposes; analytical error; incomplete analyses; inaccuracy; built-in bias; and opacity of the analytic processes – are concerns whether or not the computer-generated material has a litigation “expert” standing behind it. Indeed, in our experience, such experts are experts precisely at building bias and inaccuracy into computer models. We discussed just such a case back in November – where an ultimately excluded expert offered a computerized “regression analysis” so biased that it would generate a statistically significant result no matter what data – including totally extraneous data – was fed into it.
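We are not going to reproduce that excluded expert’s actual model here, but the general mechanism is easy to show. The following minimal sketch (Python, pure synthetic noise, with made-up sample sizes) demonstrates how an analyst free to search across enough candidate specifications will “find” statistical significance no matter what data goes in:

```python
# Illustrative only: why a sufficiently flexible "regression analysis" can
# produce a statistically significant result from any inputs, even pure noise.
# This is generic specification searching, not the actual model from the case
# discussed above.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

n_subjects = 200
n_candidate_predictors = 500   # the analyst gets to pick among many variables

# Totally extraneous data: the "outcome" and every candidate predictor are noise.
outcome = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_candidate_predictors, n_subjects))

# Try every candidate, keep the one with the smallest p-value.
p_values = [pearsonr(x, outcome)[1] for x in predictors]
best = int(np.argmin(p_values))

print(f"best predictor: #{best}, p = {min(p_values):.4f}")
print(f"candidates with p < 0.05: {sum(p < 0.05 for p in p_values)}")
# With 500 tries, roughly 25 candidates clear p < 0.05 by chance alone, so the
# analysis "finds" significance no matter what data is fed in. That bias lives
# inside the model-building process -- the kind of buried unreliability the
# committee note describes.
```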
So we think that limiting proposed Rule 707 to “machine-generated evidence . . . offered without an expert witness,” is totally unnecessary. All such evidence, with or without an expert witness vouching for it, suffers from the same potential problems. Such evidence is, from the jury’s perspective, essentially a black box – into which paid litigation experts pour whatever their employers need to prove their cases. Rule 707 should be an independent rule establishing a set of admissibility criteria applicable to all “machine generated evidence,” since expert or no, such evidence presents the same reliability problems.
In that sense, we think that Rule 707, as reported out of committee, is much too narrow, perhaps so narrow as to apply to nothing at all in practice.
On the other hand, defining Rule 707 as applying to all “machine generated evidence” may be too broad. To us, that phrase easily encompasses all of those fancy accident reconstruction graphics that both sides’ experts employ in civil cases, as well as crime scene ballistics analyses in criminal prosecutions. To the extent that the proposed rule is directed at machine learning and generative artificial intelligence, the term “machine generated evidence” sweeps considerably more broadly. We note that the alternative phrase, “machine learning,” has been offered, with a definition:
Machine learning is an application of artificial intelligence that is characterized by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.
Committee on Rules of Practice and Procedure, Agenda Book, at page 72 (May 2, 2025). To avoid Rule 707 sweeping more broadly than intended, the drafters may want to reconsider this alternative phraseology.
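For readers wondering what “without being explicitly programmed” means in practice, here is a minimal, purely illustrative contrast (Python, with an invented cutoff and synthetic data): the first rule is written by a human; the second is inferred from labeled examples, which is why its reliability turns on the data it learned from.

```python
# Illustrative only: the distinction the quoted definition draws. Both snippets
# classify a reading as "elevated," but the first is explicitly programmed and
# the second learns its cutoff from synthetic, invented labeled examples.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Explicitly programmed: a human wrote the rule.
def programmed_rule(reading):
    return reading > 140

# Machine learning: the rule is inferred from data, not written by hand.
rng = np.random.default_rng(2)
readings = rng.uniform(100, 180, size=(500, 1))
labels = (readings[:, 0] > 140).astype(int)          # ground truth in the data
learned_rule = DecisionTreeClassifier(max_depth=1).fit(readings, labels)

test = np.array([[120.0], [155.0]])
print("programmed:", [bool(programmed_rule(x[0])) for x in test])
print("learned:   ", learned_rule.predict(test).tolist())
# The learned model's internal cutoff was never typed by a programmer; it was
# recovered from the data -- which is why its reliability depends on that data.
```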
In sum, we think that the Rules of Evidence are sorely in need of a rule directed specifically to black box computerized evidence – and that this need is equally acute whether an expert is used to introduce such evidence, or no. We hope that Rule 707 can be that vehicle.
* * * *
A final footnote: a second proposed rule change, Fed. R. Evid. 901(c), dealing with “deep fakes” as evidence and creating an objection procedure, has been tabled for the time being, because such fakes are simply a more sophisticated form of forgery:
[T]he Advisory Committee agreed that this is an important issue but is not sure that it requires a rule amendment at this time. At bottom, deepfakes are a sophisticated form of video or audio generated by AI. . . . [F]orgery is a problem that courts have long had to confront − even if the means of creating the forgery and the sophistication of the forged evidence are now different. The Advisory Committee thus generally thought that courts have the tools to address the problem. . . .
(6/10/2025) Agenda Book at 29. If deep fakes create “problems that the existing tools cannot adequately address,” a rules amendment can be developed to address those new problems. Id.