E3401. Ethical Considerations for Reimbursement of AI Software Used in Radiology Under CMS and Value-Based Models
  1. Sophie Curie; Texas A&M School of Medicine
  2. Clarissa Martin; University of Pennsylvania
  3. Joseph Waller; Christiana Care Hospital
  4. Muhammad Umair; Johns Hopkins School of Medicine
Use of artificial intelligence (AI) and machine learning (ML) in medicine will change the landscape of radiology. The FDA has approved nearly 400 AI- and ML-enabled medical devices in the field of radiology. Currently, select AI systems can be reimbursed through the Centers for Medicare & Medicaid Services (CMS) using an assigned Current Procedural Terminology (CPT) Category III code or a time-limited New Technology Add-On Payment (NTAP) designation. Reimbursement of AI systems encourages their development and adoption, warranting formal guidance on which systems should be reimbursable. Although CMS has published an AI Playbook, there are no official guidelines regarding ethical or efficacy standards. We sought to elucidate a set of principles against which AI systems seeking reimbursement can be evaluated.

Materials and Methods:
A systematic review of the literature was conducted using string queries of “ai artificial intelligence” and “ethics” or “reimbursement” and “radiology” or “imaging” in Google Scholar and PubMed, with publication dates from 2010 through early 2023. Studies were screened to exclude duplicates, studies not published in English, guidelines from government organizations that do not represent the United States, preclinical studies, and studies whose primary focus was not directly relevant to the ethics of AI applications in radiology. Finally, articles were read in full and deemed acceptable if they discussed or mentioned data protection, data sharing, financial interests, reimbursement, or the IRB process. Thematic analysis was conducted to derive common patterns across the included articles (n = 15).

Results:
Our analysis found six principles inherent to any AI or ML solution seeking reimbursement. An adequately educated human must be able to accurately describe how the algorithm arrives at a decision (“transparency”), and this decision-making process must be translatable to the patient in plain terms (“explainability”). The algorithm must consistently and reliably arrive at an accurate diagnosis (“effectiveness”). The system must be trained on demographically diverse data and include measures to mitigate potential diagnostic bias where necessary (“equitability”). Finally, the algorithm must include measures to maintain patient safety (“safety”) and to protect patient data from outside cyberattacks (“security”).

Conclusion:
We conducted a systematic review of the literature to assess existing frameworks for the ethical use of AI. The themes elucidated from our dataset can be used to construct a formal set of guidelines within which AI applications in radiology must operate to be eligible for reimbursement under value-based systems.