Algorithms to estimate Shapley value feature attributions
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …
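For context on why estimation algorithms are needed at all: the exact Shapley value averages a player's marginal contribution over every coalition, which is exponential in the number of features. A minimal brute-force sketch (the value function and payoffs below are a hypothetical toy game, not from any of the cited papers):

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley values for an n-player cooperative game.

    value_fn maps a frozenset of player indices to a real payoff.
    Cost is O(2^n) evaluations of value_fn, which is why the
    estimation algorithms surveyed here exist for real models.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                S = frozenset(subset)
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value_fn(S | {i}) - value_fn(S))
    return phi

# Toy additive game: v(S) sums fixed per-player payoffs, so the
# Shapley values recover those payoffs (up to float rounding).
payoffs = [3.0, 1.0, 2.0]
v = lambda S: sum(payoffs[j] for j in S)
print(shapley_values(v, 3))  # ≈ [3.0, 1.0, 2.0]
```

The additive game also illustrates the efficiency axiom: the attributions sum to the grand-coalition payoff v({0, 1, 2}).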
Opportunities and challenges in explainable artificial intelligence (XAI): A survey
Nowadays, deep neural networks are widely used in mission-critical systems such as
healthcare, self-driving vehicles, and the military, which have a direct impact on human lives …
Explainable machine learning in deployment
Explainable machine learning offers the potential to provide stakeholders with insights into
model behavior by using various methods such as feature importance scores, counterfactual …
Explaining by removing: A unified framework for model explanation
Researchers have proposed a wide variety of model explanation approaches, but it remains
unclear how most methods are related or when one method is preferable to another. We …
Problems with Shapley-value-based explanations as feature importance measures
IE Kumar, S Venkatasubramanian… - International …, 2020 - proceedings.mlr.press
Game-theoretic formulations of feature importance have become popular as a way to
"explain" machine learning models. These methods define a cooperative game between the …
Shapley values for feature selection: The good, the bad, and the axioms
The Shapley value has become popular in the Explainable AI (XAI) literature, thanks, to a
large extent, to a solid theoretical foundation, including four “favourable and fair” axioms for …
Understanding global feature contributions with additive importance measures
Understanding the inner workings of complex machine learning models is a long-standing
problem, and most recent research has focused on local interpretability. To assess the role of …
Feature relevance quantification in explainable AI: A causal problem
D Janzing, L Minorics… - … Conference on artificial …, 2020 - proceedings.mlr.press
We discuss promising recent contributions on quantifying feature relevance using Shapley
values, where we observed some confusion on which probability distribution is the right one …
On the tractability of SHAP explanations
SHAP explanations are a popular feature-attribution mechanism for explainable AI. They
use game-theoretic notions to measure the influence of individual features on the prediction …
Impossibility theorems for feature attribution
Despite a sea of interpretability methods that can produce plausible explanations, the field
has also empirically seen many failure cases of such methods. In light of these results, it …