Algorithms to estimate Shapley value feature attributions

H Chen, IC Covert, SM Lundberg, SI Lee - Nature Machine Intelligence, 2023 - nature.com
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …

Opportunities and challenges in explainable artificial intelligence (xai): A survey

A Das, P Rad - arXiv preprint arXiv:2006.11371, 2020 - arxiv.org
Nowadays, deep neural networks are widely used in mission-critical systems such as
healthcare, self-driving vehicles, and the military, which have a direct impact on human lives …

Explainable machine learning in deployment

U Bhatt, A Xiang, S Sharma, A Weller, A Taly… - Proceedings of the …, 2020 - dl.acm.org
Explainable machine learning offers the potential to provide stakeholders with insights into
model behavior by using various methods such as feature importance scores, counterfactual …

Explaining by removing: A unified framework for model explanation

I Covert, S Lundberg, SI Lee - Journal of Machine Learning Research, 2021 - jmlr.org
Researchers have proposed a wide variety of model explanation approaches, but it remains
unclear how most methods are related or when one method is preferable to another. We …

Problems with Shapley-value-based explanations as feature importance measures

IE Kumar, S Venkatasubramanian… - International …, 2020 - proceedings.mlr.press
Game-theoretic formulations of feature importance have become popular as a way to
"explain" machine learning models. These methods define a cooperative game between the …

Shapley values for feature selection: The good, the bad, and the axioms

D Fryer, I Strümke, H Nguyen - IEEE Access, 2021 - ieeexplore.ieee.org
The Shapley value has become popular in the Explainable AI (XAI) literature, thanks, to a
large extent, to a solid theoretical foundation, including four “favourable and fair” axioms for …

Understanding global feature contributions with additive importance measures

I Covert, SM Lundberg, SI Lee - Advances in Neural …, 2020 - proceedings.neurips.cc
Understanding the inner workings of complex machine learning models is a long-standing
problem and most recent research has focused on local interpretability. To assess the role of …

Feature relevance quantification in explainable AI: A causal problem

D Janzing, L Minorics… - … Conference on artificial …, 2020 - proceedings.mlr.press
We discuss promising recent contributions on quantifying feature relevance using Shapley
values, where we observed some confusion about which probability distribution is the right one …

On the tractability of SHAP explanations

G Van den Broeck, A Lykov, M Schleich… - Journal of Artificial …, 2022 - jair.org
SHAP explanations are a popular feature-attribution mechanism for explainable AI. They
use game-theoretic notions to measure the influence of individual features on the prediction …

Impossibility theorems for feature attribution

B Bilodeau, N Jaques, PW Koh… - Proceedings of the …, 2024 - National Academy of Sciences
Despite a sea of interpretability methods that can produce plausible explanations, the field
has also empirically seen many failure cases of such methods. In light of these results, it …