We’re interested in artificial intelligence, particularly as it affects medical devices, but we don’t know all that much about it, and it’s yet to make much of an impact in our product liability sandbox. Fortunately, we know some folks who do stay informed on this topic, and that’s what today’s guest post is about. In this post Reed Smith’s Mildred Segura, Maryanne Woo, and Corinne Fierro examine the FDA’s most recent activity in this area. As always, our guest posters are 100% responsible for their content, deserving of all the credit (and any blame).
The current state of things has increased nostalgia levels with renewed interest in old TV shows. They are even rebooting Kate and Allie. In reviewing FDA’s Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, released last month, the key takeaways harken back to the days of 22-minute, multi-camera programs with catchy theme songs. So here’s a little trip in the Wayback Machine (the dog-operated one) to show you what the FDA has in store for future-proofing AI/ML SaMD (i.e., Dr. Theopolis in real life).
FDA requested feedback on its 2019 Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper (explained in detail here), and clearly many stakeholders picked up the red Bat phone and called in.
The news is that there is much discussion to be had, and we are likely at least two years out from having a final regulatory framework in place. To get to that finale, FDA set out five goals:
FDA’s Predetermined Change Control Plan envisions AI SaMD manufacturers allowing FDA to access real-world performance data of the products so that FDA and manufacturers can continue to monitor the AI as it grows and changes in the field. While the intention was clear, the details were as ambiguous as the state of Ross and Rachel’s relationship (yes, they were on a break; that doesn’t make it ok to sleep with the copy girl).
One of the issues to be defined is when manufacturers need to report to FDA regarding modifications in the algorithm over time. The balance to be struck is to avoid Gunther’s bad timing but also to not obsess over every detail like Monica’s 11 categories of towels.
Based upon the comments (and concerns) about how this plan would actually be implemented, FDA’s goal is to publish a Draft Guidance on the Predetermined Change Control Plan at some point in 2021 for commentary. FDA also indicated it will publish a revamped version of its proposed framework incorporating public feedback, but gave no indication of when it would do so.
FDA’s 2019 Discussion Paper defined the term Good Machine Learning Practice (“GMLP”) as best practices for AI/ML for data management, training, evaluation, documentation, and extraction. Stakeholders have expressed interest in FDA “harmonizing” efforts to create one norm. This makes sense, because having a set standard for GMLP is important for guiding the industry and enabling oversight of AI through manufacturer adherence to best practices and standards. FDA is currently involved in efforts to standardize GMLP, and restated its commitment to do so, just like Norm is committed to Vera.
FDA’s goal of a “patient-centered” approach to the development of AI focuses on transparency to users and patients in general. The level of transparency intimated by FDA goes beyond disclosing the risks and benefits of the device. The Action Plan talks about the need for manufacturers to clearly describe the training data, explain why certain inputs were selected, the logic (when possible), the intended role, and proof of concept. How this is to be accomplished in device labeling requires further discussion. (What might be right for you may not be right for some.) FDA will hold a public workshop to elicit additional community input. While the goal is to encourage patient trust in AI medical device technology, the responsibilities of manufacturers to achieve that end are still unclear.
Bias in clinical research has long been documented, especially the overrepresentation of people of European descent and the outright exclusion of women from clinical trials. That bias has led to disparities in effectiveness and patient outcomes. Recognizing this issue, FDA reiterated the need for algorithms that are racially and ethnically inclusive and reflective of the patient population. It is committed to continuing to develop and expand its research partnerships to identify and eliminate bias in machine learning algorithms, and to ensuring those algorithms are robust and resilient enough to withstand changing clinical inputs and conditions.
Real-World Performance Monitoring (“RWP monitoring”) is the practice of AI designers collecting data on real-world use of their AI so that they may improve products or respond to user concerns. Currently, manufacturers do not know what data are appropriate to collect, how much data FDA expects to be reported, or how they can use user feedback for improvement. Going forward, FDA will support RWP monitoring by forming a Mulder-and-Scully partnership with manufacturers on a voluntary basis, with the aim of creating a framework for RWP data collection and use. FDA will also engage the public on this effort.
Long story short, FDA continues to engage with the AI/ML SaMD space and is deepening its efforts to create a regulatory system that works for manufacturers and the public. FDA recognizes the importance of getting this right as it continues to receive a high volume of marketing submissions and pre-submissions for products leveraging AI/ML technologies, a volume it expects to increase over time. Short story long, a major concern is that technology in the AI/ML SaMD field is developing so rapidly that FDA may never catch up at its current rate. In any event, this space bears watching as it develops, and we will track the impact it may have on the existing product liability landscape from our custom craft van.