
ChatGPT Needs Federated Learning to Achieve its Potential in Healthcare

Updated: Apr 4

ChatGPT and its OpenAI sibling, DALL·E 2, have garnered breathless, though well-deserved, headlines over the last year. These products are examples of ‘generative AI’: a subset of AI that uses deep learning to create synthetic data resembling real data. Applications of the technique range from the benign to the controversial: creating images of dancing rhinos, procedurally generating new video game levels, detecting financial fraud, passing the bar exam. There are open questions about the quality¹, novelty², and ethics³ of generative AI, and addressing them is vital — but few would dispute the technology’s massive potential once they are addressed. This blog post sets aside the sensationalism surrounding these products to explore the intersection of generative AI, federated learning (FL), and healthcare.

Image credit: DALL-E

There are many exciting applications of generative AI in healthcare, but these will likely not reach their full potential without using the privacy-preserving technique of FL. These applications span the healthcare gamut, as illustrated below:

  • Genomics: Generating synthetic DNA sequences for gene editing experiments to test the effects of different genetic variations.⁴

  • Drug Design: Generating new molecular structures that are optimized for desired outcomes while still obeying the laws of chemistry and physics, and that then prove in simulation to be most receptive to a given treatment.⁵

  • Clinical Trial Design: Generating synthetic data for clinical trials can speed up what is an incredibly expensive process by creating a large amount of data for simulation and hypothesis testing covering a range of patient populations, treatment outcomes, and adverse events.⁶

  • Medical Imaging: Generating synthetic medical images to augment existing datasets, for instance, creating new X-rays or MRI images with specific characteristics missing from the original set to improve model development. Generative AI can also be used to create images from one modality to another (e.g. CT from MRI⁷), or generate images from text inputs (e.g. our DALL-E generated ‘dancing rhino’ example above).

  • Personalized Medicine: Generating personalized medical treatment plans such as a specific diet or exercise regimen based on the individual’s characteristics.⁸

  • Medical Education: Generative AI can also serve as a training tool - much as it can generate new video game levels, it could present medical students with generated patient scenarios to test their diagnostic skills or treatment planning.⁹
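To make the idea behind several of the applications above concrete, here is a deliberately simple sketch of synthetic data generation: fit a generative model to a real cohort, then sample new records with the same joint statistics. Real systems use far richer models (GANs, diffusion models, large language models); this toy uses a multivariate Gaussian, and all column names and numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "real" cohort: columns = age, systolic BP, BMI (hypothetical values)
real = np.column_stack([
    rng.normal(55, 12, 500),
    rng.normal(130, 15, 500),
    rng.normal(27, 4, 500),
])

# Fit a simple generative model: a multivariate Gaussian
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample synthetic records that mimic the cohort's joint statistics,
# e.g. to augment a dataset for simulation or hypothesis testing
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print(synthetic.shape)  # (1000, 3)
```

The caveat in the text applies directly here: the synthetic records can only be as representative as the cohort the model was fitted to, which is why data diversity remains the binding constraint.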

Healthcare AI/ML pioneers are already putting the above list into practice. Kather et al. published a useful summary of real examples in npj Digital Medicine¹⁰. Importantly, Kather and colleagues also concluded that while tests of generative AI to date have been promising, the results have not been generalizable: because the original models were not specifically designed for healthcare, they need further refinement to be truly useful to practitioners. This is where FL can help.

One of the primary roadblocks to AI/ML development and adoption in healthcare is the lack of large sets of sufficiently diverse data. Models trained at one site may not translate to another due to differences in patient populations, hospital workflows, data models, or even the equipment used to generate an image. To be truly effective, ChatGPT and other generative AI models must be trained on huge volumes of patient data including images, provider notes, outcomes data, etc. - all from a sufficiently diverse set of institutions.

Healthcare data controllers rightly want to protect their patients’ privacy and to comply with privacy regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the US or the General Data Protection Regulation (GDPR) in the EU. As a result, they are appropriately conservative in how much data, and in what form, they make available to model developers. This conservatism typically manifests in data being shared as isolated snapshots, disaggregating the patient journey of diagnosis, treatment, and outcome. That complex multimodal mosaic would provide the richest inputs for AI/ML developers, but it also presents the greatest risk of re-identification.

FL can cut this “Gordian Knot”. The generative AI techniques described above largely try to address the data diversity roadblock by augmenting existing data sets with synthetic data. The challenge: without sufficiently diverse training data, the resulting models can still be biased without the developer being aware. FL instead brings the model to the data rather than centralizing the data. Leaving the data at rest on hospital servers means infosec teams can be more comfortable allowing developers to access the rich tapestry of multimodal data, while data custodians retain discretion over the exact uses of their data, opting in or out as they see fit.
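A minimal sketch of what “bringing the model to the data” means in practice: in the widely used federated averaging (FedAvg) scheme, each site trains on its own data and only model weights leave the site; a coordinating server then averages them. The toy simulation below uses two hypothetical “hospitals” and a simple linear model — it illustrates the general technique, not the Rhino Health Platform’s actual implementation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=50):
    """One site's training round: plain gradient descent on a linear model.
    The raw data (X, y) never leaves this function -- only weights return."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, client_data):
    """One FedAvg round: each site trains locally; the server averages the
    returned weights, weighted by each site's sample count."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two hypothetical "hospitals" with differently distributed covariates --
# analogous to differing patient populations or equipment
clients = []
for shift in (0.0, 3.0):
    X = rng.normal(shift, 1.0, size=(200, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=200)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):
    w = federated_average(w, clients)
# w converges close to true_w without either site's data being pooled
```

Note the design point: the only traffic between sites and server is model weights, which is what lets the data stay at rest behind each hospital’s firewall.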

Healthcare AI/ML innovators are now using the Rhino Health Platform (RHP)’s FL, distributed computing, and edge computing capabilities to facilitate these hospital-hospital, hospital-industry, and industry-industry collaborations. The RHP has been tested and is trusted by users across the globe. Developers are using the RHP for a variety of model types, using a range of data - from images to clinical notes, deidentified to PHI. Some examples include deep learning for cancer detection, the development of new privacy preservation techniques, image quality assessment, hospital outcome measurement, and predictive patient journeys. These are just some of the many powerful applications of FL and edge computing using the RHP.

Rhino Health is emerging as the healthcare data collaboration standard for generative AI and beyond. We are installed at leading institutions - built in partnership with leading KOLs, vetted by rigorous hospital infosec reviews, and certified against the comprehensive ISO 27001 and SOC 2 information security standards. Together, these form the foundation of a global network powering healthcare AI innovation. RHP-enabled access to rich data will allow the developers of generative AI models to achieve the full potential of these incredible tools by ensuring they are trained on sufficiently broad and representative samples, while still letting hospital infosec teams sleep peacefully. ChatGPT has shown the world the potential of generative AI. Rhino Health is helping healthcare AI developers actually achieve it.

By Ittai Dayan, Co-Founder and CEO; and Chris Laws, VP of Operations

  4. Zrimec, J., Fu, X., Muhammad, A.S. et al. Controlling gene expression with deep generative design of regulatory DNA. Nat Commun 13, 5099 (2022).

  5. Grisoni, F., Huisman, B. J., Button, A. L., Moret, M., Atz, K., Merk, D., & Schneider, G. (2021). Combining generative artificial intelligence and on-chip synthesis for de novo drug design. Science Advances, 7(24).

  6. Harrer, S., Shah, P., Antony, B., & Hu, J. (2019). Artificial Intelligence for Clinical Trial Design. Trends in Pharmacological Sciences, 40(8), 577–591.

  7. Lei, Y., Harms, J., Wang, T., Liu, Y., Shu, H. K., Jani, A. B., Curran, W. J., Mao, H., Liu, T., & Yang, X. (2019). MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Medical Physics, 46(8), 3565–3581.

  8. Schork, N. J. (2019). Artificial Intelligence and personalized medicine. Precision Medicine in Cancer Therapy, 265–283.

  10. Kather, J.N., Ghaffari Laleh, N., Foersch, S. et al. Medical domain knowledge in domain-agnostic generative AI. npj Digit. Med. 5, 90 (2022).


