2355. Multi-View-Enabled Deep Learning for Automated Radiographic View Classification and Fracture Detection of the Elbow
Authors (* denotes presenting author)
  1. Emine Doganay; University of Pittsburgh
  2. Gene Kitamura; University of Pittsburgh
  3. Lu Yang; Chongqing University Cancer Hospital; University of Pittsburgh
  4. Jun Luo *; University of Pittsburgh
  5. Shandong Wu; University of Pittsburgh
Objective:
Developing deep learning models for fracture detection has the potential to reduce treatment lead-time. Multi-view (frontal and lateral) radiographic images are routinely acquired in elbow imaging, but the labeling of these views is not always accurate. In this study, we developed a two-step deep learning method that first assigns view labels and then detects fractures on adult elbow radiographs.

Materials and Methods:
4740 elbow radiographic studies, each containing a frontal and a lateral view (9480 images in total), were retrospectively collected at a single institution (mean patient age: 50.44 years; standard deviation: 20.42). Images were reviewed by a board-certified radiologist, and 1598 images (631 frontal and 967 lateral views) were found to be mislabeled in the image header. Therefore, in the first step of our study, we built a deep learning model to distinguish frontal from lateral views and correctly assign view labels. In the second step, we built another deep learning model that utilizes both the frontal and lateral views to classify the 682 fractured cases against the 4058 non-fractured cases. The Inception-ResNet-v2 network was used, with the frontal- and lateral-view features fused at the later layers of the network. We also evaluated models using only a single view. The dataset was split into 90% for training and 10% for testing. Area under the receiver operating characteristic curve (AUC) and accuracy were measured.
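As an illustration of the second step, a minimal sketch of a two-view late-fusion model of this kind is given below, assuming TensorFlow/Keras with an ImageNet-pretrained Inception-ResNet-v2 backbone. The variable names are hypothetical, and a single shared backbone is used for simplicity, since the abstract does not state whether the two branches shared weights; the first-step view classifier could reuse the same backbone with a single input and a frontal-vs.-lateral output.

import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionResNetV2

def build_two_view_fracture_model(input_shape=(299, 299, 3)):
    # Grayscale radiographs would be replicated to 3 channels to match
    # the ImageNet-pretrained backbone (an assumption, not stated above).
    frontal = layers.Input(shape=input_shape, name="frontal_view")
    lateral = layers.Input(shape=input_shape, name="lateral_view")

    backbone = InceptionResNetV2(include_top=False, weights="imagenet",
                                 pooling="avg", input_shape=input_shape)

    frontal_feat = backbone(frontal)  # (batch, 1536) pooled features
    lateral_feat = backbone(lateral)

    # Fuse the two view features at the later layers of the network,
    # then classify fractured vs. non-fractured.
    fused = layers.Concatenate(name="view_fusion")([frontal_feat, lateral_feat])
    fused = layers.Dense(256, activation="relu")(fused)
    fracture_prob = layers.Dense(1, activation="sigmoid", name="fracture")(fused)

    model = Model(inputs=[frontal, lateral], outputs=fracture_prob)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"])
    return model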

Results:
For view labeling, the model achieved 97% accuracy in distinguishing frontal from lateral views. For fracture detection, the two-view-fused model achieved an AUC of 0.96 and an accuracy of 97%. When using only a single view, the AUC/accuracy was 0.94/89% for the frontal view and 0.95/91% for the lateral view.
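For reference, metrics of this kind correspond to standard scikit-learn computations on the held-out 10% test split; the labels and probabilities below are hypothetical placeholders, not study data.

import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# Hypothetical held-out labels (1 = fracture) and model probabilities.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_prob = np.array([0.10, 0.40, 0.80, 0.90, 0.20, 0.70])

auc = roc_auc_score(y_true, y_prob)          # area under the ROC curve
acc = accuracy_score(y_true, y_prob >= 0.5)  # accuracy at a 0.5 threshold
print(f"AUC = {auc:.2f}, accuracy = {acc:.0%}")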

Conclusion:
Our study showed that deep learning is highly accurate in automatically categorizing elbow radiographic views and detecting elbow fractures. Deep learning models can correctly assign view labels to assist the radiology workflow and automatically triage elbow radiographs to reduce treatment lead-time.