Towards clinical application of artificial intelligence in ultrasound imaging

M Komatsu, A Sakai, A Dozen, K Shozu, S Yasutomi… - Biomedicines, 2021 - mdpi.com
Artificial intelligence (AI) is being increasingly adopted in medical research and applications.
Medical AI devices have continuously been approved by the Food and Drug Administration …

Towards a science of human-ai decision making: a survey of empirical studies

V Lai, C Chen, QV Liao, A Smith-Renner… - arXiv preprint arXiv …, 2021 - arxiv.org
As AI systems demonstrate increasingly strong predictive performance, their adoption has
grown in numerous domains. However, in high-stakes domains such as criminal justice and …

Diffusion visual counterfactual explanations

M Augustin, V Boreiko, F Croce… - Advances in Neural …, 2022 - proceedings.neurips.cc
Visual Counterfactual Explanations (VCEs) are an important tool to understand the
decisions of an image classifier. They are “small” but “realistic” semantic changes of the …

Evaluating explainable AI: Which algorithmic explanations help users predict model behavior?

P Hase, M Bansal - arXiv preprint arXiv:2005.01831, 2020 - arxiv.org
Algorithmic approaches to interpreting machine learning models have proliferated in recent
years. We carry out human subject tests that are the first of their kind to isolate the effect of …

Explaining the black-box model: A survey of local interpretation methods for deep neural networks

Y Liang, S Li, C Yan, M Li, C Jiang - Neurocomputing, 2021 - Elsevier
Recently, a significant amount of research has investigated the interpretation of deep
neural networks (DNNs), which are normally treated as black-box models. Among the …

On generating plausible counterfactual and semi-factual explanations for deep learning

EM Kenny, MT Keane - Proceedings of the AAAI Conference on …, 2021 - ojs.aaai.org
There is a growing concern that the recent progress made in AI, especially regarding the
predictive competence of deep learning models, will be undermined by a failure to properly …

Instance-based counterfactual explanations for time series classification

E Delaney, D Greene, MT Keane - International conference on case …, 2021 - Springer
In recent years, there has been a rapidly expanding focus on explaining the predictions
made by black-box AI systems that handle image and tabular data. However, considerably …

FastIF: Scalable influence functions for efficient model interpretation and debugging

H Guo, NF Rajani, P Hase, M Bansal… - arXiv preprint arXiv …, 2020 - arxiv.org
Influence functions approximate the "influences" of training data-points for test predictions
and have a wide variety of applications. Despite their popularity, their computational cost …

Explanation by progressive exaggeration

S Singla, B Pollack, J Chen… - arXiv preprint arXiv …, 2019 - arxiv.org
As machine learning methods see greater adoption and implementation in high stakes
applications such as medical image diagnosis, the need for model interpretability and …

Dissect: Disentangled simultaneous explanations via concept traversals

A Ghandeharioun, B Kim, CL Li, B Jou, B Eoff… - arXiv preprint arXiv …, 2021 - arxiv.org
Explaining deep learning model inferences is a promising avenue for scientific
understanding, improving safety, uncovering hidden biases, evaluating fairness, and …