2023 ARRS ANNUAL MEETING - ABSTRACTS

E2709. May We Have Your Attention? A Primer for Natural Language Processing (NLP) in Radiology
Authors
  1. Ali Tejani; UT Southwestern Medical Center
  2. Yee Ng; UT Southwestern Medical Center
  3. Yin Xi; UT Southwestern Medical Center
  4. Vincent Parenti; UT Southwestern Medical Center
  5. Gaurav Khatri; UT Southwestern Medical Center
  6. Jesse Rayan; UT Southwestern Medical Center
Background
Natural language processing (NLP) refers to the branch of artificial intelligence (AI) concerned with analyzing and manipulating human language. NLP enables significant workflow efficiencies by facilitating non-interpretive use cases in radiology. Increasing amounts of medical data, specifically radiology data, allow for the creation of potent NLP solutions for clinical, educational, and research initiatives. The advent of transformer-based models, such as Bidirectional Encoder Representations from Transformers (BERT), has accelerated NLP endeavors, outperforming traditional NLP models on a range of tasks. BERT and BERT-derived models have recently been featured in several radiology use cases, providing a glimpse of the innovative solutions enabled by advanced NLP techniques.

Educational Goals / Teaching Points
It is essential that radiologists recognize this technology and understand its foundational principles, as NLP-powered tools will continue to shape clinical practice and research. Although resources exist that describe the role of NLP in radiology, a thorough discussion of transformer-based models is needed. Accordingly, this exhibit provides an overview of the field of NLP, emphasizing the technical aspects of transformer-based models and radiology-specific use cases. A brief history of NLP, leading to the introduction of BERT and BERT-derived models, and the current state of NLP in radiology are covered. The exhibit also presents basic technical concepts essential to understanding how NLP processes data, highlighting potential challenges to implementation and techniques to mitigate potential bias.

Key Anatomic/Physiologic Issues and Imaging Findings/Techniques
This exhibit will introduce NLP and transformer-model terminology, including: sequence-to-sequence (seq2seq) models, recurrent neural networks (RNN), attention mechanism, encoder, decoder, masked language modeling, next sentence prediction, word embeddings, tokenization, Transformer XL, BERT, and foundation models.
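To make two of these terms concrete, the brief sketch below (an illustrative addition, not part of the exhibit material) demonstrates tokenization and masked language modeling using the publicly available Hugging Face transformers library and the bert-base-uncased checkpoint; the choice of library, model checkpoint, and example sentence is an assumption made purely for demonstration.

  # Minimal sketch of tokenization and masked language modeling with BERT.
  # Assumes the Hugging Face `transformers` library (and a backend such as
  # PyTorch) is installed; model and example text are illustrative choices.
  from transformers import AutoTokenizer, pipeline

  tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

  # Tokenization: the sentence is split into subword tokens (WordPiece);
  # words outside the vocabulary are broken into "##"-prefixed pieces.
  tokens = tokenizer.tokenize("No evidence of acute intracranial hemorrhage.")
  print(tokens)

  # Masked language modeling: BERT predicts the token hidden behind [MASK]
  # from bidirectional context, the pretraining objective named above.
  fill_mask = pipeline("fill-mask", model="bert-base-uncased")
  for prediction in fill_mask("No evidence of acute intracranial [MASK]."):
      print(prediction["token_str"], round(prediction["score"], 3))

Running the sketch prints the subword tokens for the example sentence and the model's highest-probability completions for the masked position, illustrating how a transformer-based model uses surrounding context on both sides of a token.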

Conclusion
NLP solutions, particularly transformer-based models, already affect routine tasks and are expanding to new use cases within radiology. Accordingly, radiologists should understand the basic principles underlying these solutions to leverage their potential and avoid pitfalls that may detrimentally affect patient care or research endeavors.