We last reviewed the case law on predictive coding (also called “technology assisted review,” or “TAR”) about two and a half years ago. Back then, we concluded:
The case law has exploded. Where only a handful of cases existed back then [2012], now we find dozens. Substantively, we’re happy to report that courts don’t seem to have anything bad to say about using computers to undertake relevance review for documents subject to production in litigation.
How are things now?
“[I]t is ‘black letter law’ that courts will permit a producing party to utilize TAR” when “there was never an agreement to utilize a different search methodology.” Entrata, Inc. v. Yardi Systems, Inc., 2018 WL 5470454, at *7 (D. Utah Oct. 29, 2018). Predictive coding continues to enjoy judicial approval:
The predictive coding process required a significant amount of attorney time at the outset to devise and implement, and required the creation of an agreed to strict set of procedural and process safeguards, due to its relative novelty. Despite the initial “front loaded” investment of time required, the predictive coding system provided a unique way to, in part, realistically manage the immense amount of information needed to be produced and reviewed in this MDL. The predictive coding system, although not perfect or fully realized, nonetheless, provided an innovative efficiency to the discovery process when compared to the existing, prevailing methods of review.
In re Actos (Pioglitazone) Products Liability Litigation, 274 F. Supp.3d 485, 499 (W.D. La. 2017). See Story v. Fiat Chrysler Automotive, 2018 WL 5307230, at *3 (Mag. N.D. Ind. Oct. 26, 2018) (“The Court encourages counsel . . . to consider that key word searches or technology assisted review are appropriate and useful ways to narrow the volume of an otherwise overly-broad request”); Duffy v. Lawrence Memorial Hospital, 2017 WL 1277808, at *3 (Mag. D. Kan. March 31, 2017) (“Technology-assisted review can (and does) yield more accurate results than exhaustive manual review, with much lower effort.”) (citations and quotation marks omitted); FCA US LLC v. Cummins, Inc., 2017 WL 2806896, at *1 (E.D. Mich. March 28, 2017) (“Applying TAR to the universe of electronic material before any keyword search reduces the universe of electronic material is the preferred method.”); Davine v. Golub Corp., 2017 WL 549151, at *1 (Mag. D. Mass. Feb. 8, 2017) (“Defendants are entitled to rely on their predictive coding model for purposes of identifying relevant responsive documents”); In re Bair Hugger Forced Air Warming Products Liability Litigation, 2016 WL 3702959, at *1 (D. Minn. July 8, 2016) (MDL order providing for predictive coding). Cf. Dremak v. Urban Outfitters, Inc., 2018 WL 1441834, at *8 (Cal. App. March 23, 2018) (unreported) (finding predictive coding “reasonable and necessary to the litigation” and taxing the cost to unsuccessful plaintiffs), review denied (Cal. July 11, 2018).
Another thing that appears settled is the right of the party producing electronically stored information (“ESI”) to choose the means by which it conducts electronic discovery. Several cases have addressed this issue, and the upshot is that neither side can force the other to use this technology. In a follow-up to a case mentioned in our prior post, the Tax Court in Dynamo Holdings Ltd. Partnership v. CIR, 2016 WL 4204067 (T.C. July 13, 2016), overruled an IRS objection to a taxpayer’s predictive coding-based production that amounted to the IRS seeking a redo. In response to the IRS’s demand for a 95% recall rate (returning 95% of all relevant documents) – at a precision rate of only 3% (meaning 97% of the total documents produced would be irrelevant) − the court chastised the IRS for perpetuating two “myths” about discovery. First, that human review is “perfect,” or at least the “gold standard.” Id. at *5. It isn’t:
[H]uman review is far from perfect. . . . [I]f two sets of human reviewers review the same set of documents to identify what is responsive, research shows that those reviewers will disagree with each other on more than half of the responsiveness claims.
Id. (citations omitted). The second “myth” is that a “perfect response” to discovery is necessary. Id. It isn’t, nor is it even possible:
[O]ur Rules do not require a perfect response. . . . Like the Tax Court Rules, the Federal Rule of Civil Procedure 26(g) only requires a party to make a “reasonable inquiry” when making discovery responses. The fact that a responding party uses predictive coding to respond to a request for production does not change the standard for measuring the completeness of the response.
Id. at *5-6 (citations omitted). Thus, predictive coding, even if imperfect for all the reasons stated by the IRS, was nonetheless proper discovery. “Petitioners made a reasonable inquiry in responding to the Commissioner’s discovery demands when they used predictive coding to produce any documents that the algorithm determined was responsive.” Id. at *6.
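The arithmetic behind the Dynamo dispute is worth seeing. Here is a minimal sketch (in Python, with wholly hypothetical document counts – nothing in the opinion supplies these figures) of why a 95% recall demand paired with 3% precision is so burdensome:

```python
def recall(relevant_retrieved, total_relevant):
    """Share of all relevant documents actually produced."""
    return relevant_retrieved / total_relevant

def precision(relevant_retrieved, total_retrieved):
    """Share of produced documents that are actually relevant."""
    return relevant_retrieved / total_retrieved

# Hypothetical collection: 10,000 relevant documents exist.
total_relevant = 10_000
relevant_retrieved = int(total_relevant * 0.95)   # 95% recall -> 9,500 docs

# At 3% precision, the production balloons:
total_retrieved = int(relevant_retrieved / 0.03)  # ~316,666 documents total
irrelevant_retrieved = total_retrieved - relevant_retrieved
```

On these assumed numbers, capturing 9,500 of 10,000 relevant documents at 3% precision would sweep roughly 307,000 irrelevant documents into the production – the “redo” burden the Tax Court declined to impose.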
The converse was true in In re Viagra (Sildenafil Citrate) Products Liability Litigation, 2016 WL 7336411, at *1 (Mag. N.D. Cal. Oct. 14, 2016). The MDL plaintiffs attempted to force the defendant to conduct e-discovery by predictive coding rather than the defendant’s chosen search term-based methodology. That was a no go, as the rules did not give the requesting party the right to dictate the producing party’s search methods:
[N]o court has ordered a party to engage in TAR and/or predictive coding over the objection of the party. The few courts that have considered this issue have all declined to compel predictive coding. . . . [T]he responding party is the one best situated to decide how to search for and produce ESI responsive to discovery requests. The responding party can use the search method of its choice.
Id. (citations and quotation marks omitted). Accord Rockford v. Mallinckrodt ARD, Inc., 326 F.R.D. 489, 493 (N.D. Ill. 2018) (“This Court will not micromanage the litigation and force TAR onto the parties.”); T.D.P. v. City of Oakland, 2017 WL 3026925, at *4-5 (Mag. N.D. Cal. July 17, 2017) (rejecting plaintiff’s effort to force defendant to use predictive coding); Hyles v. New York City, 2016 WL 4077114, at *2-3 (Mag. S.D.N.Y. Aug. 1, 2016) (even though “TAR is cheaper, more efficient and superior to keyword searching” “it is not up to the Court, or the requesting party . . ., to force . . . the responding party to use TAR when it prefers to use” something else).
What can happen down the road, post-production, if the requestor still thinks that the producer’s methods are inadequate? Requestors must have “specific examples of deficiencies” before seeking discovery into an opponent’s TAR process. Entrata, Inc. v. Yardi Systems, Inc., 2018 WL 3055755, at *3 (Mag. D. Utah June 20, 2018), objections dismissed, 2018 WL 5470454 (D. Utah Oct. 29, 2018).
One way of getting a handle on the recall rate “is to randomly sample the null set.” Rockford, 326 F.R.D. at 493. The “null set” is the universe of electronic documents that the review classified as non-responsive.
Conducting a random sample of the null set is a part of the TAR process. The purpose of randomly sampling the null set after a TAR review is to validate the process and provide reasonable assurance that the production is complete. Validation and quality assurance are fundamental principles to ESI production. The process provides the reasonable inquiry supporting the certification under Rule 26(g) [of reasonable completeness].
Id. at 494 (citations omitted). “[A] random sample of the null set provides validation and quality assurance of the document production.” Id. See Winfield v. City of New York, 2017 WL 5664852, at *11 (Mag. S.D.N.Y. Nov. 27, 2017) (“Plaintiffs have presented sufficient evidence to justify their request for sample sets of non-privileged documents” even though “neither this Court nor Plaintiffs have identified anything in the TAR process itself that is inherently defective”).
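For readers curious about the mechanics, here is a minimal sketch (in Python; the document counts, the 2% miss rate, and the helper names are all hypothetical illustrations, not anything prescribed by Rockford) of what randomly sampling the null set involves:

```python
import random

def sample_null_set(null_set, sample_size, seed=42):
    """Draw a simple random sample of documents coded non-responsive."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(null_set, sample_size)

def elusion_rate(sample, is_relevant):
    """Fraction of sampled null-set documents a human reviewer finds
    relevant -- an estimate of what the TAR process missed."""
    misses = sum(1 for doc in sample if is_relevant(doc))
    return misses / len(sample)

# Hypothetical null set of 100,000 document IDs, in which 2% are
# actually relevant documents the TAR review missed.
null_set = list(range(100_000))
missed = lambda doc_id: doc_id % 50 == 0  # stand-in for human review

sample = sample_null_set(null_set, 1_000)
estimate = elusion_rate(sample, missed)   # expect roughly 0.02
```

A low estimated elusion rate from the sample is what supplies the “reasonable inquiry” supporting the Rule 26(g) certification that the production is reasonably complete; a high rate suggests the model needs further training.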
Finally, for the technophobes among us, we’ll close this post with a soothing quote from one of the more recent cases:
The Court pauses here for a moment to calm down litigators less familiar with ESI. (You know who you are.) In life, there are many things to be scared of, including, but not limited to, spiders, sharks, and clowns – definitely clowns, even Fizbo. ESI is not something to be scared of. The same is true for all the terms and jargon related to ESI. . . . The underlying principles governing discovery do not change just because ESI is involved. So don’t freak out.
Rockford, 326 F.R.D. at 492 n.2 (citations and quotation marks omitted).