Quantifying uncertainty in deep learning of radiologic images

S Faghani, M Moassefi, P Rouzrokh, B Khosravi… - Radiology, 2023 - pubs.rsna.org
In recent years, deep learning (DL) has shown impressive performance in radiologic image
analysis. However, for a DL model to be useful in a real-world setting, its confidence in a …

Application of explainable artificial intelligence in medical health: A systematic review of interpretability methods

SS Band, A Yarahmadi, CC Hsu, M Biyari… - Informatics in Medicine …, 2023 - Elsevier
This paper investigates the applications of explainable AI (XAI) in healthcare, which aims to
provide transparency, fairness, accuracy, generality, and comprehensibility to the results …

Towards trustworthy and aligned machine learning: A data-centric survey with causality perspectives

H Liu, M Chaudhary, H Wang - arXiv preprint arXiv:2307.16851, 2023 - arxiv.org
The trustworthiness of machine learning has emerged as a critical topic in the field,
encompassing various applications and research areas such as robustness, security …

Unlocking the black box: an in-depth review on interpretability, explainability, and reliability in deep learning

E Şahin, NN Arslan, D Özdemir - Neural Computing and Applications, 2024 - Springer
Deep learning models have revolutionized numerous fields, yet their decision-making
processes often remain opaque, earning them the characterization of “black-box” models …

Fast diffusion-based counterfactuals for shortcut removal and generation

N Weng, P Pegios, E Petersen, A Feragen… - European Conference on …, 2025 - Springer
Shortcut learning occurs when a model (e.g., a cardiac disease classifier) exploits correlations
between the target label and a spurious shortcut feature (e.g., a pacemaker) to predict the …

A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging

M Champendal, H Müller, JO Prior… - European journal of …, 2023 - Elsevier
Purpose: To review eXplainable Artificial Intelligence (XAI) methods available for
medical imaging (MI). Method: A scoping review was conducted following the Joanna Briggs …

Using generative AI to investigate medical imagery models and datasets

O Lang, D Yaya-Stupp, I Traynis, H Cole-Lewis… - …, 2024 - thelancet.com
Background AI models have shown promise in performing many medical imaging tasks.
However, our ability to explain what signals these models have learned is severely lacking …

Explainable AI for Medical Data: Current Methods, Limitations, and Future Directions

MI Hossain, G Zamzmi, PR Mouton, MS Salekin… - ACM Computing …, 2023 - dl.acm.org
With the power of parallel processing, large datasets, and fast computational resources,
deep neural networks (DNNs) have outperformed highly trained and experienced human …

Studying the impact of augmentations on medical confidence calibration

A Rao, JY Lee, O Aalami - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
The clinical explainability of convolutional neural networks (CNNs) heavily relies on the joint
interpretation of a model's predicted diagnostic label and associated confidence. A highly …

Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization

O Rotem, T Schwartz, R Maor, Y Tauber… - Nature …, 2024 - nature.com
The success of deep learning in identifying complex patterns exceeding human intuition
comes at the cost of interpretability. Non-linear entanglement of image features makes deep …