2024 ARRS ANNUAL MEETING - ABSTRACTS

E5182. Understanding Generative AI: Basics and Terminology
Authors
  1. Oleksiy Melnyk; George Washington University School of Medicine
  2. Mary Heekin; George Washington University School of Medicine
  3. Ahmed Ismail; George Washington University School of Medicine
  4. Nima Ghorashi; George Washington University School of Medicine
  5. Ahmed Abdelmonem; George Washington University School of Medicine
  6. Ramin Javan; George Washington University Hospital, Radiology Department
  7. Theodore Kim; George Washington University School of Medicine
Background
Given the rise of generative artificial intelligence (AI) and large language models (LLMs), it is crucial to be informed about their basic concepts and terminology. Familiarity with key AI concepts is essential to fully appreciate the potential of generative AI in healthcare, clinical radiology, and research.

Educational Goals / Teaching Points
This exhibit outlines AI terminology for radiologists who want to learn generative AI and LLM concepts without prior background. Terminology covered includes: artificial intelligence, machine learning, artificial neural networks, deep learning, supervised and unsupervised learning, backpropagation, reinforcement learning, natural language processing, autonomous agents, recurrent neural networks, self-attention, transformers, computer vision, convolutional neural networks, pretraining, fine-tuning, transfer learning, generative models, and language models.

Key Anatomic/Physiologic Issues and Imaging Findings/Techniques
Machine learning (ML): a branch of AI that enables programs to learn from large datasets by identifying patterns and improving performance through experience, iterative feedback, and fine-tuning.
Artificial neural networks (ANNs): algorithms with an architecture inspired by biological neural networks, consisting of interconnected nodes, or artificial "neurons," arranged into an input layer, hidden layers that process information, and an output layer.
Deep learning (DL): ML that uses deep neural networks (DNNs), consisting of multiple layers of weighted, interconnected nodes, for autonomous data analysis, pattern recognition, and decision making.
Backpropagation: a supervised learning technique that computes the error between predicted and actual outputs, propagates it backward through the layers, and adjusts the connection weights; this is repeated over many iterations, or epochs (see the first sketch after this list).
Reinforcement learning (RL): a learning paradigm, distinct from supervised and unsupervised learning, in which an agent learns to maximize rewards through feedback from its environment; augmenting this with human feedback (RLHF) is a core technique for model "alignment."
Recurrent neural networks (RNNs): networks designed to process sequential data with temporal dependencies, such as time series or natural language, using internal loops that allow information to persist (see the second sketch below).
Self-attention: a mechanism that lets models assign varying degrees of importance to different parts of the input during processing, focusing on the information most relevant for making predictions (see the third sketch below).
Transformer: a neural network architecture that replaces recurrence with self-attention, attending to different parts of the input sequence simultaneously and thereby enabling parallelization, in which tasks are divided into smaller subtasks executed concurrently across multiple processing units.
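
To make the backpropagation entry concrete, below is a minimal sketch in Python/NumPy of a two-layer network adjusting its connection weights from the error between predicted and actual outputs. The layer sizes, learning rate, epoch count, and synthetic data are illustrative assumptions, not part of the exhibit.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: 4 samples, 3 input features, 1 target value each.
    X = rng.normal(size=(4, 3))
    y = rng.normal(size=(4, 1))

    # Input layer -> hidden layer (5 nodes) -> output layer, as in a simple ANN.
    W1 = rng.normal(scale=0.5, size=(3, 5))
    W2 = rng.normal(scale=0.5, size=(5, 1))
    lr = 0.1  # learning rate

    for epoch in range(100):                # repeated iterations ("epochs")
        # Forward pass: compute predicted outputs layer by layer.
        h = np.tanh(X @ W1)                 # hidden-layer activations
        y_hat = h @ W2                      # predicted outputs
        error = y_hat - y                   # predicted minus actual

        # Backward pass: propagate the error and adjust connection weights.
        grad_W2 = h.T @ error
        grad_h = error @ W2.T * (1 - h**2)  # derivative of tanh
        grad_W1 = X.T @ grad_h
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1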
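
Similarly, the internal loop that lets information persist in an RNN can be sketched in a few lines; the sequence length, dimensions, and weights below are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    sequence = rng.normal(size=(6, 3))        # 6 time steps, 3 features each

    W_x = rng.normal(scale=0.5, size=(3, 4))  # input -> hidden weights
    W_h = rng.normal(scale=0.5, size=(4, 4))  # hidden -> hidden (the loop)
    h = np.zeros(4)                           # hidden state persists over time

    for x_t in sequence:
        # The previous hidden state feeds back in, letting information
        # persist across time steps: the "internal loop" of an RNN.
        h = np.tanh(x_t @ W_x + h @ W_h)

    print(h)  # final state summarizes the whole sequence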
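
Finally, a minimal sketch of scaled dot-product self-attention, the core operation of the transformer; the projection matrices (W_q, W_k, W_v), token count, and embedding size are illustrative assumptions. Because every token scores every other token in a single matrix product, there is no step-by-step recurrence, which is what enables parallelization.

    import numpy as np

    rng = np.random.default_rng(2)
    tokens = rng.normal(size=(4, 8))            # 4 tokens, embedding size 8
    d_k = 8                                     # key/query dimension

    W_q = rng.normal(scale=0.3, size=(8, d_k))  # query projection
    W_k = rng.normal(scale=0.3, size=(8, d_k))  # key projection
    W_v = rng.normal(scale=0.3, size=(8, d_k))  # value projection

    Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v

    # Each token scores its relevance to every other token simultaneously.
    scores = Q @ K.T / np.sqrt(d_k)             # (4, 4) relevance scores
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax: importance weights
    output = weights @ V                        # weighted mix of all tokens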

Conclusion
Generative AI represents one of the most disruptive technologies of modern times and is now publicly available, creating immense opportunities. Those who wish to take advantage of the potential applications of these tools in radiology will benefit from understanding basic LLM terminology.