ARRS 2022 Abstracts

E2075. Whole Kidney Segmentation on Non-Contrast CT Data Sets Using a Convolutional Neural Network
Authors
  1. Lucas Aronson; University of Wisconsin-Madison School of Medicine and Public Health
  2. Ruben Ngnitewe Massa; University of Wisconsin-Madison School of Medicine and Public Health
  3. Andrew Wentland; University of Wisconsin-Madison School of Medicine and Public Health
Objective:
Automated segmentation tools are needed for processing large volumes of imaging data. In 2019, the Kidney Tumor Segmentation Challenge (KiTS19) sought machine learning algorithms that could reliably segment the kidneys and kidney tumors. One of the top-performing algorithms from the KiTS19 challenge is the open-source Medical Image Segmentation with Convolutional Neural Networks (MIScnn) Python package, which yielded a kidney segmentation Dice coefficient of 0.9544. However, the KiTS19 dataset provides only contrast-enhanced abdominal CT images, so machine learning models trained on it cannot be applied to noncontrast images. The goal of this study is to implement and retrain the MIScnn pipeline to segment the kidneys from noncontrast CT datasets.
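
As an illustration only, a minimal MIScnn pipeline for this task might be configured as sketched below, following the publicly documented MIScnn interfaces; the data directory, file-name pattern, batch size, and patch shape are illustrative assumptions rather than the settings used in this study.

    # Minimal MIScnn pipeline sketch for noncontrast kidney CT.
    # Paths, patterns, and hyperparameters are illustrative assumptions.
    from miscnn.data_loading.interfaces import NIFTI_interface
    from miscnn import Data_IO, Preprocessor, Neural_Network
    from miscnn.neural_network.metrics import tversky_loss, dice_soft, dice_crossentropy

    # Data I/O: single-channel CT volumes with two classes (background, kidney)
    interface = NIFTI_interface(pattern="case_[0-9]*", channels=1, classes=2)
    data_io = Data_IO(interface, "data/noncontrast_ct")  # hypothetical directory

    # Patch-wise cropping preprocessor for 3D CT volumes
    pp = Preprocessor(data_io, batch_size=2, analysis="patchwise-crop",
                      patch_shape=(128, 128, 128))

    # Standard U-Net with Tversky loss; Dice-based metrics tracked during training
    model = Neural_Network(preprocessor=pp, loss=tversky_loss,
                           metrics=[dice_soft, dice_crossentropy])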

Materials and Methods:
Manual segmentation of the kidneys was performed on 55 abdominal noncontrast CT datasets, yielding 110 segmented kidneys. The data were divided into a 66/33 train/test split, and three-fold cross-validation was performed over 100 epochs and 50 iterations. Training metrics, including Tversky loss, Dice coefficient, soft Dice coefficient, and Dice cross-entropy, were monitored during the cross-validation stage to optimize the model hyperparameters and assess model functionality.
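
For context, the split and cross-validation step could be expressed with MIScnn's cross-validation utility roughly as follows, continuing from the pipeline sketch above; the random seed and argument choices are assumptions, not the study's exact configuration.

    # Sketch of the 66/33 split and three-fold cross-validation
    # (continues from the pipeline sketch above; arguments are assumptions).
    from sklearn.model_selection import train_test_split
    from miscnn.evaluation.cross_validation import cross_validation

    samples = data_io.get_indiceslist()            # all 55 segmented cases
    train, test = train_test_split(samples, test_size=0.33, random_state=42)

    # Three folds over the training split, 100 epochs and 50 iterations;
    # Tversky loss and the Dice-based metrics configured above are logged per fold.
    cross_validation(train, model, k_fold=3, epochs=100, iterations=50)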

Results:
Visual comparison of model-predicted segmentations with ground-truth segmentations demonstrated highly similar and consistent results, without appreciable irregularities in this limited test set. The final performance metrics, corresponding to the optimal model saved at the end of training, were Tversky loss = 0.1253, Dice coefficient = 0.9972, soft Dice coefficient = 0.9376, and Dice cross-entropy = 0.2699.
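
For reference, the Dice coefficient reported above quantifies volumetric overlap between the predicted and ground-truth kidney masks; a minimal NumPy sketch of the plain (hard) Dice computation, not the MIScnn implementation itself, is shown below.

    # Plain Dice coefficient between binary masks (illustrative, not MIScnn's code)
    import numpy as np

    def dice_coefficient(pred, truth, eps=1e-7):
        """Volumetric overlap between two binary masks (arrays of 0s and 1s)."""
        intersection = np.sum(pred * truth)
        return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(truth) + eps)

    # Example: perfect overlap yields a Dice coefficient of 1.0
    mask = np.array([[0, 1], [1, 1]])
    print(dice_coefficient(mask, mask))  # 1.0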

Conclusion:
MIScnn can be implemented and retrained to automatically segment the kidneys from noncontrast CT images with a high degree of accuracy. These findings will facilitate future large-scale segmentation projects on the vast array of available noncontrast CT data sets.