2024 ARRS ANNUAL MEETING - ABSTRACTS

E5007. Maximizing Generative Artificial Intelligence Potential in Radiology: Open Source and Local Language Models
Authors
  1. Kim Theodore; George Washington University School of Medicine
  2. Oleksiy Melnyk; George Washington University School of Medicine
  3. Mary Heekin; George Washington University School of Medicine
  4. Ahmed Ismail; George Washington University School of Medicine
  5. Naureen Zahra; No Affiliation
  6. Mahsa Najafzadeh; No Affiliation
  7. Ramin Javan; George Washington University School of Medicine
Background
Generative artificial intelligence (AI) has seen a significant uptick in interest and usage since the introduction of ChatGPT in early 2023, and the fields of medicine and radiology are no exception. With the rapid development of AI tools, each with the potential to positively impact clinical practice, research, and education, it can be difficult to discern how best to use this innovative technology to maximize its potential and application. This educational exhibit seeks to introduce readers to various open-source and local AI language models that could, down the line, be especially applicable to radiology, given their emphasis on privacy and their modest size requirements.

Educational Goals / Teaching Points
1. What makes a language model open source or local.
2. Examples of open-source language models.
3. Examples of local language models.
4. How to access and use open-source or local language models.
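As one hedged illustration of the last point, the commands below sketch a common way to download and run an open-source model entirely on a local machine using the Ollama command-line tool; the specific model name (`mistral`) is an example chosen for illustration, not one endorsed by the exhibit.

```shell
# Install Ollama on macOS/Linux (Windows users can download an installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Download the weights of an open-source model to the local machine
ollama pull mistral

# Start an interactive chat session that runs offline, with no cloud access
ollama run mistral

# Alternatively, query the model programmatically through its local REST API
curl http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "List three common indications for chest CT.", "stream": false}'
```

Because inference happens on the local device, prompts never leave the machine, which is the privacy property that makes such models attractive where patient data must be handled safely.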

Key Anatomic/Physiologic Issues and Imaging Findings/Techniques
Not applicable

Conclusion
Open-source language models have made tremendous advancements, becoming comparable to proprietary large language models (LLMs), and they offer a strong alternative for those constrained by cost or seeking a customized experience. Yet open-source models still have disadvantages: running them can remain expensive and requires individual technical know-how. Their performance benchmarks can also be overstated, and they have yet to fully catch up to proprietary LLMs. Local language models are smaller models that can run on personal computers or smartphones. Their ability to run offline offers a privacy advantage over traditional LLMs, which require cloud access, making them potentially beneficial in clinical contexts where safe handling of patient data is required. They are also often faster than traditional LLMs, which may be advantageous when immediate results and feedback are needed during patient care. However, their smaller size comes with drawbacks, including reduced accuracy and the potential for increased algorithmic bias.