EXPLAINABLE DEEP LEARNING MODELS FOR MEDICAL IMAGE DIAGNOSIS: BRIDGING ACCURACY AND INTERPRETABILITY

Sunny Nguyen, Raiden Nguyen, Oni Samuel Boluwatife

Abstract

The growing use of deep learning models for medical image diagnosis has produced impressive gains in diagnostic accuracy. However, the lack of transparency in these models continues to hinder their adoption in clinical practice. This paper addresses the accuracy-interpretability gap by developing explainable deep learning models that preserve predictive performance. An explainability methodology was incorporated into a convolutional neural network (CNN) architecture using state-of-the-art explainability methods, Gradient-weighted Class Activation Mapping (Grad-CAM), Local Interpretable Model-Agnostic Explanations (LIME), and Integrated Gradients, to visualize model reasoning and localize diagnostically important regions of medical images. Benchmark radiology and histopathology datasets were used to evaluate accuracy, F1-score, interpretability measures, and the level of clinician agreement. The proposed model attained a diagnostic accuracy of 95.8% and an interpretability score of 0.89, indicating strong alignment between model explanations and expert annotations. Compared to a control group, clinicians using the explainability tools reported markedly higher trust in and comprehension of model predictions. The results show that deep learning systems can achieve high diagnostic performance while producing a transparent decision-making process. The study contributes to the development of human-centric artificial intelligence in healthcare by promoting accountability, reliability, and interpretability in medical image analysis.
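
As a rough illustration of the kind of attribution the paper builds on, the sketch below computes a Grad-CAM heatmap for a generic PyTorch CNN classifier. The backbone (ResNet-18), the hooked layer (`layer4`), and the file name are illustrative assumptions for this sketch, not the authors' actual architecture or implementation.

```python
# Minimal Grad-CAM sketch (PyTorch). Backbone, layer choice, and image path
# are assumptions for illustration, not the paper's implementation.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block; for ResNet-18 this is `layer4`.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def grad_cam(image_path: str) -> torch.Tensor:
    """Return an [H, W] Grad-CAM heatmap in [0, 1] for the predicted class."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)                 # [1, 3, 224, 224]

    logits = model(x)
    class_idx = logits.argmax(dim=1)
    model.zero_grad()
    logits[0, class_idx].backward()                  # gradients w.r.t. target class

    acts = activations["value"]                      # [1, C, h, w]
    grads = gradients["value"]                       # [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel importance weights
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=(224, 224), mode="bilinear",
                        align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# heatmap = grad_cam("chest_xray.png")  # hypothetical input; overlay on the
#                                       # image to inspect diagnostic regions
```

The same heatmaps can be compared against expert-annotated regions, which is the kind of overlap an interpretability score such as the one reported above is meant to capture.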
