Publication: Enhancing medical AI interpretability using heatmap visualization techniques
Date
2025
Publisher
Elsevier
Abstract
Artificial intelligence (AI) is becoming increasingly vital in modern life, but as AI approaches grow more complex, they become harder to explain, leading to a lack of trust in their results, especially in the medical field. In medical AI applications, explainable artificial intelligence (XAI) approaches are essential for ensuring that machine learning models are interpretable and trustworthy, which in turn is critical for improving patient outcomes. This chapter addresses this issue by examining the efficacy of several XAI visualization approaches for medical images, notably endoscopic and skin cancer images. It compares several XAI visualization approaches on the Kvasir endoscopy dataset and the HAM10000 skin cancer dataset. On the Kvasir dataset, an EfficientNet-v2-s model is trained, and several GradCAM techniques are used to create heatmaps. On the HAM10000 dataset, a VGG16 model is trained, and both layer-wise relevance propagation (LRP) and GradCAM are applied. Intersection over Union (IoU) is used as the evaluation metric. The research seeks to enhance heatmaps and increase coverage of regions of interest by combining techniques. The findings indicate that combining GradCAM methods produces highly effective heatmaps that capture the most relevant features in medical images. These experiments are confirmed by the IoU metric, with the combination of GradCAM++ and LayerCAM generating the best results. Furthermore, the findings indicate that the GradCAM approaches outperform LRP in producing accurate and trustworthy heatmaps. © 2025 Elsevier B.V. All rights reserved.
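The abstract describes evaluating heatmaps with IoU and combining GradCAM variants. The chapter does not specify its exact fusion rule or thresholding procedure, so the sketch below is only a minimal illustration of the general idea: binarize a normalized heatmap, compute IoU against a ground-truth region mask, and fuse two heatmaps by element-wise maximum (one common, but assumed, fusion choice; the function names are hypothetical).

```python
import numpy as np

def heatmap_iou(heatmap, mask, threshold=0.5):
    """Binarize a [0, 1]-normalized heatmap at `threshold` and compute
    Intersection over Union against a binary ground-truth mask."""
    pred = heatmap >= threshold
    gt = mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / union if union > 0 else 0.0

def combine_heatmaps(h1, h2):
    """Fuse two explanation heatmaps (e.g., GradCAM++ and LayerCAM outputs)
    by taking the element-wise maximum -- an assumed fusion strategy."""
    return np.maximum(h1, h2)

# Toy example: a 2x2 heatmap against a 2x2 annotation mask.
heatmap = np.array([[0.9, 0.1],
                    [0.6, 0.2]])
mask = np.array([[1, 0],
                 [1, 1]])
score = heatmap_iou(heatmap, mask)  # intersection = 2, union = 3
```

In practice the heatmap would come from a CAM method applied to a trained model (such as the EfficientNet-v2-s or VGG16 models mentioned above), upsampled to the image resolution before comparison with the annotation.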
