Enhancing Transparency in AI-Based Medical Imaging
Authors:
Ayithi Dileep Kumar1, Battina Hemanth Kumar2, J. Teja3, B. Krishna Bhargavi4, Mr R. Ravi5
1,2,3,4 B.Tech Student & 5 Assistant Professor
Department of Information Technology, MVGR College of Engineering (A), Vizianagaram,
Andhra Pradesh, India
Abstract—Medical imaging diagnosis is time-consuming and depends on many hours of expert work by physicians. Convolutional Neural Networks (CNNs) have recently shown great promise in diagnosing from images such as endoscopy scans, reaching high accuracy on gastroenterological images, specifically in distinguishing between bleeding and non-bleeding cases in Video Capsule Endoscopy (VCE). However, explaining the reasons behind the predictions of such black-box models remains a crucial challenge for gaining users’ trust. Explainable Artificial Intelligence (XAI) aims to improve the understandability and transparency of decisions made by Artificial Intelligence systems, and can thus support doctors in their diagnostic decisions. In this project, we develop and train a CNN to distinguish between bleeding and non-bleeding cases using VCE images. To increase the understanding of the deep learning model, we employ three explanation methods: LIME, SHAP, and Contextual Importance and Utility (CIU). In an extensive user study with 60 participants, we evaluate the quality, performance, and benefit of these explanation methods. The results show that CIU provides the best explanations with respect to clarity, efficiency, and helpfulness for users, compared with the other examined methods. The study’s results offer insights into the potential of XAI in the medical diagnosis context to increase the usability and trustworthiness of deep learning models.
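To make the pipeline described above concrete, the following is a minimal sketch of one of its components: a small Keras CNN for the binary bleeding/non-bleeding task, explained with the LIME image explainer. The architecture, input resolution (IMG_SIZE), and hyperparameters here are illustrative assumptions, not the paper’s exact configuration; the same pattern applies when swapping in SHAP or CIU.

```python
# Sketch (assumed setup): Keras CNN for VCE frame classification + LIME
# explanations. All architecture and hyperparameter choices are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from lime import lime_image

IMG_SIZE = 224  # assumed input resolution of a VCE frame

def build_cnn() -> tf.keras.Model:
    """Small CNN for binary classification (non-bleeding vs. bleeding)."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(2, activation="softmax"),  # [non-bleeding, bleeding]
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def explain_frame(model: tf.keras.Model, image: np.ndarray):
    """Return the image and a mask over the superpixels that most
    supported the model's top prediction for this frame."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image.astype("double"),
        classifier_fn=lambda batch: model.predict(batch, verbose=0),
        top_labels=1,
        num_samples=1000,  # perturbed samples LIME fits its local model on
    )
    return explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5
    )
```

In use, the returned mask can be overlaid on the frame so a clinician sees which regions drove the bleeding/non-bleeding decision, which is the transparency goal evaluated in the user study.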