ARRS 2022 Abstracts

2093. Machine Learning System to Fully Automate the Detection of Hepatocellular Carcinoma With the Liver Imaging Reporting and Data System (LI-RADS)
Authors (* denotes presenting author)
  1. Jennifer Jin; California State University, San Bernardino
  2. Soo Kim; Soongsil University
  3. Michael Christie; Loma Linda University Medical Center
  4. Abner Wilding; Loma Linda University Medical Center
  5. Shelley Villamor; Loma Linda University School of Medicine
  6. Abigail Beaven; Loma Linda University School of Medicine
  7. Daniel Jin *; Loma Linda University Medical Center
Objective:
Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer. The Liver Imaging Reporting and Data System (LI-RADS) is a guideline for classifying imaging findings in liver lesions; it defines major criteria for each CT phase and ancillary features that determine the LI-RADS classification. The objective of this study was to develop a set of software techniques and implement a machine learning (ML) system that can fully automate the process of detecting HCC with a LI-RADS score.
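
To make the rule-based character of this classification concrete, the sketch below maps a handful of major imaging features to a coarse LI-RADS category. It is a simplified, illustrative approximation, not the official ACR LI-RADS v2018 diagnostic table, and the feature names and thresholds are our assumptions rather than part of the study.

```python
from dataclasses import dataclass

@dataclass
class MajorFeatures:
    """Simplified major-feature summary for one observation (illustrative only)."""
    nonrim_aphe: bool        # nonrim arterial phase hyperenhancement
    size_mm: float           # largest diameter of the observation
    washout: bool            # nonperipheral washout
    enhancing_capsule: bool
    threshold_growth: bool

def simplified_lirads_category(f: MajorFeatures) -> str:
    """Map major features to a coarse LI-RADS category.

    A simplified illustration of the kind of logic LI-RADS prescribes;
    it is NOT a substitute for the official ACR diagnostic table.
    """
    extra = sum([f.washout, f.enhancing_capsule, f.threshold_growth])
    if f.nonrim_aphe:
        if f.size_mm >= 20:
            return "LR-5" if extra >= 1 else "LR-4"
        if f.size_mm >= 10 and (f.washout or f.threshold_growth):
            return "LR-5"
        return "LR-4" if extra >= 1 else "LR-3"
    # Without nonrim APHE, observations are graded more conservatively.
    if extra >= 2 or (f.size_mm >= 20 and extra >= 1):
        return "LR-4"
    return "LR-3"

if __name__ == "__main__":
    obs = MajorFeatures(nonrim_aphe=True, size_mm=25, washout=True,
                        enhancing_capsule=False, threshold_growth=False)
    print(simplified_lirads_category(obs))  # -> "LR-5" under this sketch
```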

Materials and Methods:
We designed 7 ML models to automate the LI-RADS process. Because the models require different content in their training sets, we assembled well-labeled liver CT scans from several sources. A retrospective, IRB-approved dataset was used to train the ML models for segmenting organs and HCC tumors; it contained 184 CT series (64,531 CT slices) acquired from Loma Linda University Medical Center (LLUMC) and the LiTS dataset from MICCAI 2017. The dataset used to train the ML models for classifying image features and tumor types contained 89 CT series (6,065 CT slices) acquired from LLUMC and the Radiopaedia image case repository. The dataset for classifying LI-RADS stages contained 8,000 stage instances and was derived from all datasets by applying a Poisson distribution. We designed a software process consisting of 28 steps that performs all of the key steps in LI-RADS and fully automates the classification. Mask Region-Based Convolutional Neural Network (Mask R-CNN) models were used for segmenting the liver (with Couinaud classification), tumors, and hepatic vessels; CNN-based classifiers were used for classifying tumor image features and tumor types; and Support Vector Machine (SVM) classification was used for LI-RADS grading. The 7 ML models serve as the CT image analytics components of this process.
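
The abstract does not include implementation details, so the following is only a minimal sketch of how such a pipeline could be wired together in Python, with the segmentation and CNN classification stages stubbed out and the final SVM grading stage shown concretely with scikit-learn. All function names, feature layouts, and training data here are our assumptions, not the authors' code.

```python
"""Hedged sketch of an end-to-end LI-RADS pipeline (assumptions, not the study's code).

Stages mirror the abstract: Mask R-CNN segmentation -> CNN feature/type
classification -> SVM grading of the LI-RADS stage.
"""
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

LIRADS_STAGES = ["LR-1", "LR-2", "LR-3", "LR-4", "LR-5"]

def segment_liver_and_tumors(ct_series: np.ndarray) -> dict:
    """Placeholder for the Mask R-CNN segmentation models
    (liver/Couinaud, tumor, hepatic vessels)."""
    raise NotImplementedError

def classify_tumor_features(tumor_patches: np.ndarray) -> np.ndarray:
    """Placeholder for the CNN classifiers of tumor image features and tumor type."""
    raise NotImplementedError

def train_lirads_grader(feature_vectors: np.ndarray, stage_labels: np.ndarray):
    """Train an SVM mapping per-lesion feature vectors to LI-RADS stages,
    analogous to the abstract's SVM grading step."""
    grader = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    grader.fit(feature_vectors, stage_labels)
    return grader

if __name__ == "__main__":
    # Synthetic stand-in for the 8,000 derived stage instances (toy data only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8000, 6))                    # hypothetical per-lesion features
    y = rng.integers(0, len(LIRADS_STAGES), size=8000)
    grader = train_lirads_grader(X, y)
    print(grader.predict(X[:5]))
```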

Results:
We evaluated the system using 20% of the acquired phased CT scans as test sets and measured performance with 4 metrics: Dice similarity coefficient (DSC), accuracy, precision, and recall. DSC measures segmentation performance: the average DSC was 86.8% for organ segmentation with post-processing and 64.7% for tumor segmentation. Accuracy measures classification performance: the average accuracy was 51.1% for classifying tumor image features, 62.6% for classifying tumor types, and 97.7% for classifying LI-RADS stages. The compound performance, integrating the 4 measures with weights, was 74.1%. The ML models for tumor segmentation, tumor feature classification, and tumor type classification present a technical challenge in automating LI-RADS because of the small image sizes and the variability of tumor types and shapes.
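
For reference, the Dice similarity coefficient and a weighted compound score of the kind reported above can be computed as in the sketch below. The abstract does not state the weighting scheme, so the weights and metric values here are placeholders, not the study's results.

```python
import numpy as np

def dice_similarity(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Dice similarity coefficient: 2*|A&B| / (|A| + |B|) for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    denom = pred.sum() + true.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, true).sum() / denom

def compound_score(metrics: dict, weights: dict) -> float:
    """Weighted combination of metric values; the actual weights used in the
    study are not given in the abstract, so these are placeholders."""
    total_w = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in metrics) / total_w

if __name__ == "__main__":
    # Toy masks and toy metric values (not the study's numbers).
    pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1
    true = np.zeros((4, 4), dtype=int); true[1:3, 1:4] = 1
    print(round(dice_similarity(pred, true), 3))      # 0.8 for this toy example
    print(compound_score(
        {"dsc": 0.9, "accuracy": 0.6, "precision": 0.7, "recall": 0.8},
        {"dsc": 1.0, "accuracy": 1.0, "precision": 1.0, "recall": 1.0},
    ))
```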

Conclusion:
We validated the accuracy of an ML software process for automating LI-RADS; every step in LI-RADS was automated by the software. The Mask R-CNN, CNN classification, and SVM algorithms were shown to be effective in training the 7 ML models. The compound performance of 74.1% is not optimal, but it represents a significant pioneering step toward fully automated LI-RADS classification.