The application of AI to healthcare diagnostics has advanced disease detection, patient risk assessment, and clinical decision-making, yet the black-box nature of most machine learning (ML) models remains a major barrier to their adoption in clinical practice. This project investigates how explainable AI (XAI) can improve medical transparency by optimising the trade-off between predictive performance and interpretability. We compare convolutional neural networks (CNNs), CNNs augmented with Gradient-weighted Class Activation Mapping (Grad-CAM), and Random Forests with SHapley Additive exPlanations (SHAP) on the NIH ChestX-ray14 dataset. The results indicate that plain CNNs achieve the highest accuracy (91%) and AUC (0.95), but their lack of interpretability limits clinical trust and accountability. CNN + Grad-CAM models reach comparable accuracy (90%) and produce visual explanations that align most closely with radiological reasoning, while Random Forest + SHAP models, at a slightly lower accuracy (90%), offer the highest level of interpretability (fidelity = 0.89). These results highlight a key trade-off between raw predictive performance and medical transparency: interpretability is a necessity, not a luxury.
Explainable AI, Healthcare Diagnostics, Machine Learning, Transparency, SHAP, Grad-CAM, Interpretability
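As a rough illustration of the Grad-CAM pipeline summarised above, the sketch below computes a class-discriminative heatmap for a CNN in PyTorch. The ResNet-18 backbone, the choice of the last `layer4` block as the target layer, and the random placeholder input are assumptions made only to keep the example self-contained; the paper's actual CNN architecture and ChestX-ray14 preprocessing are not specified in this abstract.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative backbone; the abstract does not name the paper's CNN architecture.
model = models.resnet18(weights=None)
model.eval()

# Capture the feature map of the last convolutional block and keep its gradient.
feats = {}

def forward_hook(module, inputs, output):
    output.retain_grad()          # non-leaf tensor: ask autograd to keep its gradient
    feats["maps"] = output

model.layer4[-1].register_forward_hook(forward_hook)

# Placeholder input standing in for a preprocessed chest X-ray (batch, channels, H, W).
x = torch.randn(1, 3, 224, 224)

logits = model(x)
class_idx = logits.argmax(dim=1).item()    # explain the top-scoring class
model.zero_grad()
logits[0, class_idx].backward()

# Grad-CAM: weight each channel by the spatial mean of its gradient, sum, then ReLU.
maps = feats["maps"]                       # shape (1, C, h, w)
weights = maps.grad.mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * maps).sum(dim=1, keepdim=True))

# Upsample to input resolution and normalise to [0, 1] for overlay on the radiograph.
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                           # torch.Size([1, 1, 224, 224])
```

In a diagnostic setting, the normalised map would be colour-mapped and overlaid on the original X-ray so a radiologist can check whether the model attends to clinically plausible regions, which is the kind of alignment with radiological reasoning the abstract attributes to the CNN + Grad-CAM configuration.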