E1928. Deep Learning Detection of Triquetral Fractures Using Cascaded Algorithms to Mimic Radiologist Search Pattern
Authors
  1. Mark Ren; Johns Hopkins University School of Medicine
  2. Paul Yi; Johns Hopkins University School of Medicine
Objective:
In medical imaging, deep convolutional neural networks (DCNNs) have demonstrated the ability to quickly and automatically detect radiological findings, including fractures. Triquetral fractures are subtle findings that typically require a radiologist to zoom in on the area of interest. The small size of triquetral fracture fragments may preclude accurate DCNN identification on whole images. The purpose of this study was to evaluate a two-stage deep learning method to identify triquetral fractures by mimicking this radiologist search pattern.

Materials and Methods:
We obtained and annotated 282 lateral wrist radiographs (53 with triquetral fracture) to train and validate two DCNN stages: 1) an object detector that identifies and crops the region of the dorsal wrist including the dorsal triquetrum and fracture fragments, if present; 2) a classifier for triquetral fractures. All of these radiographs were used to train and validate the object detector, and the 53 images with triquetral fracture were paired with 53 non-fracture images to train and validate the classifier. Images presented to the classifier were automatically cropped by the object detector to the dorsal triquetrum. A second classifier was trained on uncropped images for comparison. Standard image augmentation was used to expand training and validation data. An external test set of 50 lateral wrist radiographs (25 fractures) was used to evaluate the algorithm. Gradient-weighted class activation mapping (Grad-CAM) was used to visually inspect the regions of each image to which the DCNN assigned greater importance in deciding the final classification.
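The data flow of the cascade described above can be sketched in a few lines. The functions below are hypothetical stand-ins for the trained networks (the study's actual detector and classifier architectures are not part of this abstract); the sketch only illustrates the localize-crop-classify pattern:

```python
import numpy as np

def detect_dorsal_triquetrum(radiograph):
    """Stage 1 (hypothetical stand-in): a trained object detector would
    predict a bounding box around the dorsal triquetrum. A fixed box is
    returned here purely to illustrate the data flow."""
    h, w = radiograph.shape
    # (y0, y1, x0, x1) of the region of interest
    return h // 4, h // 2, w // 2, 3 * w // 4

def classify_fracture(crop, threshold=0.5):
    """Stage 2 (hypothetical stand-in): a DCNN classifier would score the
    crop; a dummy score based on mean intensity stands in for it here."""
    score = float(crop.mean())
    return score, score >= threshold

def two_stage_pipeline(radiograph):
    """Mimic the radiologist search pattern: localize, zoom in, classify."""
    y0, y1, x0, x1 = detect_dorsal_triquetrum(radiograph)
    crop = radiograph[y0:y1, x0:x1]  # crop to the dorsal triquetrum
    return classify_fracture(crop)

image = np.zeros((256, 256))
image[64:128, 128:192] = 0.9  # synthetic bright patch inside the ROI
score, positive = two_stage_pipeline(image)
```

The key design point is that the classifier never sees the full radiograph: the detector narrows its input to the region where the subtle finding would appear, analogous to a radiologist zooming in.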

Results:
The object detector accurately localized the dorsal triquetrum on 100% of validation and external test images, with a mean average precision >0.99. For the one-stage classifier operating on full images, mean 5-fold cross-validation accuracy of triquetral fracture detection was 76%, with a receiver operating characteristic (ROC) area under the curve (AUC) of 0.90. For the two-stage deep learning pipeline, mean accuracy was 87% with an AUC of 0.95 (p<0.02). Grad-CAM heatmaps of the cropped classifier in the two-stage pipeline showed appropriate localization of fracture fragments, while heatmaps of the one-stage classifier showed general activation across the wrist, without specific focus on the dorsal wrist, triquetrum, or fracture fragments.
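The reported AUCs summarize how well each classifier's scores rank fracture cases above non-fracture cases. The ROC AUC is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive receives a higher score than a randomly chosen negative (ties counted as half). A minimal sketch of that computation (our own illustration, not the study's evaluation code):

```python
def roc_auc(scores, labels):
    """AUC as the probability that a random positive outranks a random
    negative (ties count half); equivalent to the trapezoidal area under
    the ROC curve."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one positive (score 0.6) is outranked by one negative (0.7)
auc = roc_auc([0.9, 0.7, 0.6, 0.2], [1, 0, 1, 0])  # → 0.75
```

In practice a library implementation such as scikit-learn's `roc_auc_score` would be used; the formula above is just the underlying definition.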

Conclusion:
A two-stage deep learning pipeline significantly increases accuracy in the detection of triquetral fractures on radiographs compared with a one-stage DCNN classifier. Using a small dataset, we achieved performance comparable with DCNNs classifying much larger abnormalities with thousands of training images. The object detection stage correctly cropped the region of interest on all images, providing evidence that this technique may be used multiple times across a radiograph while introducing minimal error. By focusing attention on specific image regions in a manner mimicking a radiologist search pattern, deep learning algorithms can improve detection of subtle findings that may otherwise be missed.