2024 ARRS ANNUAL MEETING - ABSTRACTS

E5204. A Review of ChatGPT Use Cases in Radiology and Practical Applications
Authors
  1. Mary Heekin; George Washington University Hospital; George Washington University School of Medicine
  2. Ahmed Ismail; George Washington University Hospital; George Washington University School of Medicine
  3. Oleksiy Melnyk; George Washington University Hospital; George Washington University School of Medicine
  4. Ahmed Abdelmonem; George Washington University Hospital
  5. Theodore Kim; George Washington University Hospital; George Washington University School of Medicine
  6. Nima Ghorashi; George Washington University Hospital; George Washington University School of Medicine
  7. Ramin Javan; George Washington University Hospital
Background
The field of artificial intelligence (AI) is transforming rapidly as advanced tools powered by large language models (LLMs) are integrated into clinical practice, research, and education. It is imperative for radiologists to have a strong understanding of the capabilities, limitations, and applications of LLMs in order to leverage AI-driven innovations for improved patient care, productivity, and advancement of the field. The purpose of this study is to review use cases and practical applications of ChatGPT/GPT-4 in radiology and to foster further conversation on the topic.

Educational Goals / Teaching Points
ChatGPT/GPT-4 may support radiologists by simplifying report generation and offering decision, diagnostic, and preprocedural guidance. Studies have shown ChatGPT's ability to create radiologic reports that are mostly correct and complete. ChatGPT/GPT-4 can offer evidence-based recommendations that follow American College of Radiology (ACR) guidelines, and chatbots trained with context-based algorithms have outperformed radiologists in this task. These models can list differential diagnoses based on clinical history and imaging findings with more than 60% accuracy, suggest appropriate radiological studies, and answer questions on radiation dose. ChatGPT may improve communication with patients by translating reports into plain language and writing discharge summaries at a basic reading level. It may reduce administrative burden by recommending ICD/CPT codes and drafting insurance authorization letters. GPT-4 has the potential to assist with image anomaly detection and data aggregation, although the current model could not recognize objects in photographs. Within research, ChatGPT can identify knowledge gaps and assist with literature searches by summarizing texts and extracting key points. It has suggested statistical tests when given a dataset and successfully written code to calculate desired ratios. The model has written scientific abstracts, offered tips for structuring a competition case report, and can generate or edit text to follow specific submission guidelines. ChatGPT can enhance medical education by acting as a tutor, creating clinical scenarios for practicing history-taking and differential diagnosis, and writing quiz questions. It can help radiologists engage in health policy by answering questions on the latest changes and by summarizing or drafting statements. ChatGPT can also benefit public health by identifying at-risk populations for screening and has demonstrated the ability to appropriately answer patient questions on breast cancer screening and prevention.
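
As one illustration of how a workflow such as plain-language report translation might be prototyped (this sketch is not drawn from the reviewed studies), the example below uses the OpenAI Python client; the model identifier, prompt wording, and target reading level are assumptions for demonstration only.

  # Minimal sketch: restating a radiology report impression in plain language.
  # The model name, prompt, and reading-level target are illustrative assumptions.
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  def simplify_report(report_text: str) -> str:
      """Ask the model to restate a radiology report in patient-friendly language."""
      response = client.chat.completions.create(
          model="gpt-4",      # assumed model identifier
          temperature=0.2,    # keep wording conservative and consistent
          messages=[
              {
                  "role": "system",
                  "content": (
                      "You translate radiology reports into plain language at roughly "
                      "an eighth-grade reading level. Do not add findings that are not "
                      "in the report, and advise the reader to discuss the results "
                      "with the treating physician."
                  ),
              },
              {"role": "user", "content": report_text},
          ],
      )
      return response.choices[0].message.content

  if __name__ == "__main__":
      sample = ("Impression: 1. No acute intracranial hemorrhage. "
                "2. Chronic microvascular ischemic changes.")
      print(simplify_report(sample))

In practice, any such output would require radiologist review before reaching a patient, consistent with the limitations discussed above.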

Key Anatomic/Physiologic Issues and Imaging Findings/Techniques
We performed a literature search of PubMed, Google Scholar, and Cochrane using the keywords "Artificial Intelligence" and "ChatGPT" to identify use cases and implementation examples of ChatGPT/GPT-4. Relevant articles published within the last 5 years were included.

Conclusion
As AI continues to evolve rapidly and gain popularity, it is crucial for radiologists to remain aware of its potential uses and limitations. Researchers have demonstrated clinical and practical applications of ChatGPT/GPT-4 with varying results. AI cannot replace radiologists but may support enhanced decision-making, performance, and efficiency.