How Rhino Health’s Platform Helps AI Developers Align with the FDA’s Good Machine Learning Practices
Updated: Oct 20, 2022
The Software as a Medical Device (SaMD) regulatory landscape is at a crossroads. Recognizing the challenges of regulating Artificial Intelligence (AI) and Machine Learning (ML) SaMD as traditional medical devices, the FDA released its first discussion paper in 2019, outlining a distinct framework for regulating AI/ML SaMD. Since then, additional action plans, discussion papers, and guiding principles have been published, and public workshops held, all aimed at a difficult but crucial objective: ensuring the safety, effectiveness, and transparency of these new clinical tools.
Underpinning this evolving guidance is the need to move beyond admitting only "locked" algorithms (frozen at the point of approval, unable to learn from new data) into the market, toward "living" algorithms (improved as more data becomes available). To that end, the FDA has released guidance describing a Total Product Life Cycle (TPLC) approach to regulation. This approach includes a pathway to enabling AI modifications, namely a Predetermined Change Control Plan, which requires documenting the "what" and "how" of the modifications – the SaMD Pre-Specifications (SPS) and Algorithm Change Protocol (ACP), respectively. Supporting the TPLC approach are Good Machine Learning Practices (GMLP): a set of principles intended to improve the transparency, performance, and generalizability of marketed applications while mitigating bias, thus ensuring ongoing safety and effectiveness throughout the product lifecycle.
Despite the buzz about healthcare AI, today's harsh truth is that many applications fall far short of expectations once introduced into the real-world clinical setting. This is reflected in the American College of Radiology (ACR) Data Science Institute's 2020 AI Survey, in which only ~30% of respondents reported using any type of AI - with 5.7% of users reporting that AI always worked and 94.3% reporting inconsistent performance. One critical issue is that while algorithms are locked, devices, protocols, and clinical practice continue to evolve in short cycles. Recognizing this disconnect, the FDA has proposed the TPLC framework, yet today the industry lacks a process to audit and monitor in-market applications. Without that oversight, ongoing algorithm evolution cannot occur.
Within the TPLC approach, AI developers are not required to undergo a formal FDA submission process for each algorithmic modification and are able to make (agreed-upon) post-market modifications. In exchange for this "longer leash," the FDA requires AI developers to have a robust Predetermined Change Control Plan, inclusive of the SPS and ACP, and to comply with GMLP. These controls were implemented to establish transparency and instill confidence in the safety and effectiveness of "living" AI/ML SaMDs. However, compliance with such controls is not a set-it-and-forget-it undertaking. Demonstrating adherence to the Predetermined Change Control Plan and GMLP requires both training and testing on diverse data and ongoing real-world performance monitoring to evaluate the post-market device. For many in the industry, these requirements exacerbate an existing challenge: how to access and utilize the large and diverse datasets needed to comply with regulatory requirements?
The traditional approach to managing and utilizing health data is not sufficient. It's extremely difficult, time-consuming, and costly for AI developers to strike data-sharing agreements with individual hospitals in order to access the requisite sensitive data (e.g., clinical, imaging, outcomes), let alone with the multiple hospitals/health systems necessary to support GMLP. Data within any given institution is usually siloed, and data across different institutions is often not standardized. Privacy-preservation requirements make it difficult to correlate clinical context and patient outcomes. Many healthcare institutions are not comfortable exporting or transferring patient data "off-site" to an external cloud - especially in the context of evolving regulatory requirements such as CCPA, GDPR, and others. Data becomes "stale," and ongoing access to updated data is often not included as part of these agreements. These logistical and legal burdens of aggregating and utilizing a large and diverse dataset have led many to settle for merely "good enough" data. While "good enough" may have been sufficient yesterday and today, it will certainly not support the paradigm of tomorrow.
The New Standard
Federated Learning (FL) has unlocked a new opportunity for the industry - making it possible to scale datasets beyond a single institution and leverage much larger, much more diverse data - all without ever actually moving data or transferring ownership (as we explain in our recent video from RSNA 2021). Patient privacy is always protected. And continuous improvement is possible throughout the full lifecycle of an AI/ML application. To deliver on this promise, we at Rhino Health have created an end-to-end, turnkey platform leveraging federated learning. Our Platform enables healthcare AI developers and medical researchers to get up and running quickly and have rapid access to health data from a large and growing network of collaborators around the world.
Setting the new standard for the industry, the Rhino Health Platform brings the AI/ML algorithm to the local data - where it lives. With FL, computation is done locally - on the edge - and the central server then blends parameters, yielding a generalizable, global model. This eliminates the need to transfer large volumes of sensitive data. Our Platform provides AI developers secure, seamless, and scalable access to the data needed to enable safe, effective, and transparent evolving algorithms. And with transparency paramount, all processes are documented while the model itself remains protected: no disclosure of the "secret sauce" is required, and only model performance metrics are displayed.
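To make the mechanism concrete, here is a minimal, hypothetical sketch of federated averaging (FedAvg), the core idea behind federated learning as described above: each site trains on its own data locally, only model parameters - never patient records - travel to a central server, and the server blends them into a global model. The function names, the simulated "hospital" datasets, and the linear-regression model are all illustrative assumptions, not Rhino Health's actual implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few gradient-descent steps on its own data.
    Only the updated weights leave the site, never X or y."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Central server blends site parameters, weighted by local dataset size."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Simulate two hospitals, each holding a private local dataset.
sites = []
for n in (80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

# Federated rounds: broadcast global weights, train locally, blend centrally.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])

print(global_w)  # converges toward the true coefficients [2, -1]
```

The weighting by dataset size mirrors the original FedAvg formulation: larger sites contribute proportionally more to the blended model, while the raw data at every site stays put.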
As more companies explore and embrace this approach, it's important to understand how the Rhino Health Platform and federated access to data enable alignment with many of the GMLP Guiding Principles.
The bottom line is that these principles require healthcare AI developers to have ongoing access to a significant volume and diversity of data, including linkage to the data’s provenance and a deeper understanding of the characteristics and clinical context. Data must be of high integrity. It must be heterogeneous. It must come from distinct sources and sites. The reference set must be diverse in order to ensure generalizability. Testing must happen in a representative, relevant context. Data collection must be transparent. And streamlined access to Real World Data (RWD) must be ongoing.
As we recently discussed with the FDA's Digital Health Center of Excellence (DHCoE), we are dedicated to improving patient care via AI/ML devices and are excited to support AI developers, regulatory agencies, and partners on this journey.
To learn more about our end-to-end Platform, its enablement of the evolving FDA framework, and the ways AI innovators from across the healthcare ecosystem are using it, please reach out to me at firstname.lastname@example.org.