
The Evolving Data Needs of the Biopharma Industry

Updated: Oct 20, 2022

Gathering more than 400 speakers and attendees from 35 countries, Precision Medicine World Conference (PMWC) Silicon Valley is the largest precision medicine conference in the world. It was an honor to be invited as a panelist on the Artificial Intelligence (AI) and Data Sciences track, interacting with attendees of diverse backgrounds and affiliations to promote the cross-fertilization and collaboration needed to accelerate precision medicine.


Throughout the many conversations I had during the conference, I couldn’t help but reflect on how the healthcare data analytics landscape has evolved since I entered the space as a young doctor about a decade ago. Since then, leaps in technology have resulted in much deeper data being generated, while maturing analytics platforms have made end-to-end algorithm development and deployment accessible to a degree that enables healthcare institutions to develop advanced analytics algorithms (e.g., machine learning and artificial intelligence) and increasingly leverage them in clinical workflows, preventative care, and drug development.


While the progress of the past decade is indeed a cause for optimism, significant hurdles still stand in the way of unlocking the promise of advanced analytics in healthcare. First, we need to actively overcome data sharing barriers to enable algorithm development on more diverse data, so that insights are not skewed towards the clinical distributions of a narrow set of participating sites. Second, the algorithm development workflow must be cross-functional and involve both clinical and business leaders from the outset. That transparency instills the trust required for algorithms to move beyond academic publications to the bedside, with the ultimate goal of enabling the continuous learning, monitoring, and refinement that a learning healthcare system requires.


Data privacy regulation and awareness have come a long way, and the consensus at PMWC was that embracing and enforcing data privacy is key to earning patients’ trust. That said, algorithm development, quality benchmarking, and other use cases that promise to transform the quality of care still rely almost exclusively on pooling individual-level data from participating institutions in central repositories. Because copying and transferring data across institutions means losing control over who accesses the data and what it is used for, the current paradigm does little to mitigate prevailing issues of trust and privacy. Robust solutions to this challenge become ever more relevant in light of HHS’s most recent guidance on protecting patient privacy under HIPAA, which makes enforcement a priority.


With the above challenges in mind, I was excited about the opportunity to speak at PMWC on how the increasing maturity of technologies such as containerization, distributed compute, federated learning, and health information standards lets us rethink the conventional analytics paradigm. Rather than sharing individual-level de-identified data and losing control of it as a result, these technologies allow for algorithm development across institutional boundaries in a distributed fashion: instructions for computations are sent behind institutional firewalls, and only aggregated results and model parameters are shared back with a central orchestrator. With this approach, distributed compute and federated learning enable collaborative algorithm development and data analysis without any sharing of individual-level data.
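To make the pattern concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical federated learning algorithm. Everything in it is illustrative: the function names, the logistic model, and the synthetic data are assumptions made for the example, not the Rhino Platform’s actual interface.

```python
import numpy as np

# Minimal federated averaging (FedAvg) sketch. All names are illustrative
# assumptions for this example, not the Rhino Health API.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Runs behind an institution's firewall: train a logistic model locally,
    starting from the global weights, and return only the updated parameters
    and the local sample count -- never the individual-level data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w -= lr * grad
    return w, len(y)

def federated_round(global_weights, local_datasets):
    """Runs at the central orchestrator: dispatch the current model to each
    site, then average the returned parameters weighted by sample count."""
    updates = [local_update(global_weights, X, y) for X, y in local_datasets]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Toy example with two "institutions" holding synthetic local data.
rng = np.random.default_rng(0)
sites = []
for n in (200, 350):
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n) > 0).astype(float)
    sites.append((X, y))

weights = np.zeros(3)
for _ in range(20):
    weights = federated_round(weights, sites)
print("global model weights:", weights)
```

The key property is visible in the return values: each site ships back a parameter vector and a sample count, so the orchestrator never sees a single patient record.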


Until just over a year ago, such a distributed system was within reach of only a few institutions, as it required a level of bespoke development and adaptation beyond the capabilities of most data scientists and software developers. The result was typically an ad hoc solution that was costly to maintain, if it was maintained at all. Needless to say, this was an obvious obstacle to executing distributed analytics at scale. Acknowledging this barrier to implementation, our team at Rhino Health is proud to offer the first industrial-grade, scalable distributed compute and federated learning platform for advanced analytics and AI development that does not require complex adaptations and software engineering from the participating institutions. By combining these cutting-edge techniques for privacy preservation with a comprehensive logging system and granular permission settings that make work fully auditable, we hope that the Rhino Platform will instill the trust that enables collaborations across institutions and industry verticals, for the ultimate benefit of patients.


Given the complexities of acting on insights and incorporating algorithms into the clinical workflow, it is incredibly important to involve cross-functional stakeholders early on to create the sense of ownership and confidence required for successful adoption in the clinic. Involving these stakeholders extensively in algorithm development serves the additional purpose of securing the clinical and organizational intuition that helps unlock deeper insights and tailor algorithms so they translate to real-world impact. Such a cross-functional approach is sometimes already seen in the development of intelligent algorithms in the biopharma industry, as well as in triage systems for patient selection at leading US healthcare institutions, where seasoned clinicians provide much of the clinical intuition fueling advanced predictive models based on real-world data.


Traditionally, interacting with an algorithm development pipeline has required technical knowledge beyond what most cross-functional stakeholders have, raising a barrier to involving non-technical colleagues in the development process. With these challenges in mind, we have designed our platform as a workbench that makes the end-to-end MLOps workflow collaborative, letting cross-functional teams review work in progress and make important design choices shoulder to shoulder. A programmatic interface lets data scientists and engineers interact seamlessly with remote data using their usual tools, while a user-friendly GUI supports effective coordination and communication with non-technical stakeholders, such as clinicians and business leaders.


Our goal in forging these capabilities into a state-of-the-art platform that is easy to install and use is to bring about a more connected and collaborative healthcare industry, both within and across verticals. Our mission is to enable a large network of institutions that can reliably benchmark health outcomes, assess the impact of care processes and capacity decisions, and track the real-world performance of clinical interventions. In doing so, we will enable a continuously learning healthcare system that adapts based on quantified experience and current trends to make truly patient-centric care a reality.


To learn more about how the Rhino Health Platform enables collaborative data science with distributed health data, click here to read a post from our VP of Product.



