Explaining models by propagating Shapley values of local components

H Chen, S Lundberg, SI Lee - Explainable AI in Healthcare and Medicine …, 2021 - Springer
In healthcare, making the best possible predictions with complex models (e.g., neural
networks, ensembles/stacks of different models) can impact patient welfare. In order to make …

IROF: a low resource evaluation metric for explanation methods

L Rieger, LK Hansen - arXiv preprint arXiv:2003.08747, 2020 - arxiv.org
The adoption of machine learning in health care hinges on the transparency of the used
algorithms, necessitating explanation methods. However, despite a growing …

The explanation game: Explaining machine learning models using Shapley values

L Merrick, A Taly - Machine Learning and Knowledge Extraction: 4th IFIP …, 2020 - Springer
A number of techniques have been proposed to explain a machine learning model's
prediction by attributing it to the corresponding input features. Popular among these are …
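The entries above concern attributing a model's prediction to its input features via Shapley values. As illustrative context only (not any listed paper's implementation), here is a minimal sketch of exact Shapley values computed by enumerating all feature subsets; the toy additive payoff function and feature names are assumptions for the example:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values via subset enumeration (exponential in len(features)).

    value_fn: maps a frozenset of feature names to a scalar payoff.
    features: list of feature names.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f to coalition S
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy additive payoff: v(S) = sum of fixed per-feature contributions (assumed values).
contrib = {"age": 2.0, "bmi": -1.0}
v = lambda s: sum(contrib[f] for f in s)
print(shapley_values(v, ["age", "bmi"]))  # for an additive game, phi recovers contrib
```

For an additive payoff function the Shapley values equal each feature's individual contribution, which makes this a convenient sanity check; the practical methods in the papers above approximate this exponential computation.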

Global explanations of neural networks: Mapping the landscape of predictions

M Ibrahim, M Louie, C Modarres, J Paisley - Proceedings of the 2019 …, 2019 - dl.acm.org
A barrier to the wider adoption of neural networks is their lack of interpretability. While local
explanation methods exist for one prediction, most global attributions still reduce neural …

SurvSHAP(t): time-dependent explanations of machine learning survival models

M Krzyziński, M Spytek, H Baniecki, P Biecek - Knowledge-Based Systems, 2023 - Elsevier
Machine and deep learning survival models demonstrate similar or even improved
time-to-event prediction capabilities compared to classical statistical learning methods yet …

Counterfactual shapley additive explanations

E Albini, J Long, D Dervovic, D Magazzeni - Proceedings of the 2022 …, 2022 - dl.acm.org
Feature attributions are a common paradigm for model explanations due to their simplicity in
assigning a single numeric score for each input feature to a model. In the actionable …

Towards unifying feature attribution and counterfactual explanations: Different means to the same end

R Kommiya Mothilal, D Mahajan, C Tan… - Proceedings of the 2021 …, 2021 - dl.acm.org
Feature attributions and counterfactual explanations are popular approaches to explain an
ML model. The former assigns an importance score to each input feature, while the latter …

Synthetic benchmarks for scientific research in explainable machine learning

Y Liu, S Khandagale, C White… - arXiv preprint arXiv …, 2021 - arxiv.org
As machine learning models grow more complex and their applications become more
high-stakes, tools for explaining model predictions have become increasingly important. This has …

Explaining a series of models by propagating Shapley values

H Chen, SM Lundberg, SI Lee - Nature communications, 2022 - nature.com
Local feature attribution methods are increasingly used to explain complex machine
learning models. However, current methods are limited because they are extremely …

Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models

W Samek, T Wiegand, KR Müller - arXiv preprint arXiv:1708.08296, 2017 - arxiv.org
With the availability of large databases and recent improvements in deep learning
methodology, the performance of AI systems is reaching or even exceeding the human level …