2024 ARRS ANNUAL MEETING - ABSTRACTS

E2209. Stay Ahead of the AI Curve: A Core Review of Recurrent Neural Networks and Natural Language Processing for Radiologists
Authors
  1. Jeremy Nguyen; Tulane University School of Medicine
  2. Madison Gerahian; Tulane University School of Medicine
  3. Neel Gupta; Tulane University School of Medicine
  4. Cynthia Hanemann; Tulane University School of Medicine
Background
Artificial neural networks are designed to mimic the operation of the human brain. They are typically composed of layers of artificial neurons, each of which can process input and forward output to other neurons in the network. A typical deep learning neural network has three main kinds of layers: the input layer, the hidden layers, and the output layer. The neurons are connected by weights that modulate the strength of the signals passed between them. A recurrent neural network (RNN) is a class of neural network based on a feed-forward architecture. RNNs are commonly used for sequential processes such as language translation: they recognize the sequential characteristics of data and use learned patterns to predict the next likely output. This exhibit will provide a concise tutorial on RNNs without complex mathematics. The radiologist will learn the fundamental architecture of neural networks and, building on that, of RNNs. The radiologist will also learn the feature that distinguishes RNNs from other types of artificial neural networks: RNNs use feedback loops to process a sequence of data en route to the final output, and these feedback loops in effect allow the RNN to retain memory. Finally, the radiologist will gain an understanding of Natural Language Processing (NLP), a major application of RNNs.
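The feedback loop described above can be made concrete with a minimal sketch: a single-unit recurrent cell whose hidden state is fed back in at every step. The function names and the scalar weights (`w_xh`, `w_hh`, `b`) here are illustrative assumptions, not part of the exhibit; a real RNN would use weight matrices and learned parameters.

```python
import math

def rnn_step(x, h, w_xh, w_hh, b):
    """One RNN step: the new hidden state mixes the current input x
    with the previous hidden state h (the feedback loop)."""
    return math.tanh(w_xh * x + w_hh * h + b)

def rnn_run(xs, w_xh=0.5, w_hh=0.8, b=0.0):
    """Process a whole sequence; the hidden state h carries 'memory'
    of earlier inputs forward through the loop."""
    h = 0.0
    for x in xs:
        h = rnn_step(x, h, w_xh, w_hh, b)
    return h

# The final state depends on the whole sequence, not just the last input:
print(rnn_run([1.0, 0.0, 0.0]))  # earlier nonzero input still influences h
print(rnn_run([0.0, 0.0, 0.0]))
```

Because `h` is reused at every step, an input seen early in the sequence still influences the final state — this is the memory retention the tutorial emphasizes.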

Educational Goals / Teaching Points
To describe the structure and operation of an “artificial neuron” in a neural network; discuss the basic architecture of a neural network, including the input, hidden, and output layers; describe the operation of a feed-forward neural network; describe the architecture of a recurrent neural network (RNN); explain the intuitive operation of an RNN with emphasis on memory retention; discuss how the RNN processes sequential data; discuss the concept of Natural Language Processing (NLP); and describe the application of RNNs to NLP.
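The first three teaching points — the artificial neuron and the one-way flow through input, hidden, and output layers — can be sketched in a few lines. The weights and biases below are arbitrary illustrative values, not taken from the exhibit.

```python
import math

def neuron(inputs, weights, bias):
    """Artificial neuron: a weighted sum of inputs plus a bias,
    passed through a sigmoid activation function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def feed_forward(x, hidden_layer, output_layer):
    """Feed-forward pass: input -> hidden layer -> output layer.
    Each layer is a list of (weights, bias) pairs; signals flow
    strictly one way, with no feedback loops."""
    h = [neuron(x, w, b) for w, b in hidden_layer]
    return [neuron(h, w, b) for w, b in output_layer]

# Toy network: 2 inputs, 2 hidden neurons, 1 output neuron.
hidden = [([0.5, -0.3], 0.1), ([0.8, 0.2], -0.4)]
output = [([1.0, -1.0], 0.0)]
print(feed_forward([1.0, 0.0], hidden, output))
```

The contrast with the RNN is the point: here each input produces an output independently, whereas an RNN would thread a hidden state from one input to the next.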

Key Anatomic/Physiologic Issues and Imaging Findings/Techniques
Findings include the artificial neuron, neural networks, feed-forward neural networks, RNNs, and natural language processing.

Conclusion
An RNN is a class of artificial neural network in which connections between neurons allow the network to exhibit temporal dynamic behavior. Derived from feed-forward neural networks, RNNs can use their internal state (memory) to process variable-length sequences of inputs. This design makes RNNs applicable to tasks such as language modeling, text generation, speech recognition, and generating image descriptions. After completing this tutorial, the radiologist will have firm conceptual knowledge of RNNs and NLP, giving the radiologist a firm foundation to further explore advanced applications of RNNs in other domains.
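Before an RNN can perform any of the NLP tasks listed above, raw text must be converted into the token-id sequences the network consumes. A minimal sketch of that preprocessing step, using made-up radiology phrases as example input:

```python
def build_vocab(texts):
    """Map each distinct word to an integer id. RNN-based NLP models
    operate on sequences of ids like these, not on raw text."""
    vocab = {}
    for text in texts:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(text, vocab):
    """Turn a sentence into the variable-length id sequence an RNN
    would process one token at a time."""
    return [vocab[w] for w in text.lower().split()]

reports = ["No acute findings", "Acute fracture of the left femur"]
vocab = build_vocab(reports)
print(encode("No acute fracture", vocab))  # -> [0, 1, 3]
```

Note that the encoded sequences have different lengths — exactly the variable-length input that an RNN's internal state is designed to handle.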