Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models

T Heskes, E Sijben, IG Bucur… - Advances in neural …, 2020 - proceedings.neurips.cc
Shapley values underlie one of the most popular model-agnostic methods within
explainable artificial intelligence. These values are designed to attribute the difference …

The explanation game: Explaining machine learning models using Shapley values

L Merrick, A Taly - Machine Learning and Knowledge Extraction: 4th IFIP …, 2020 - Springer
A number of techniques have been proposed to explain a machine learning model's
prediction by attributing it to the corresponding input features. Popular among these are …

Problems with Shapley-value-based explanations as feature importance measures

IE Kumar, S Venkatasubramanian… - International …, 2020 - proceedings.mlr.press
Game-theoretic formulations of feature importance have become popular as a way to "explain"
explain" machine learning models. These methods define a cooperative game between the …

Asymmetric Shapley values: Incorporating causal knowledge into model-agnostic explainability

C Frye, C Rowat, I Feige - Advances in neural information …, 2020 - proceedings.neurips.cc
Explaining AI systems is fundamental both to the development of high-performing models
and to the trust placed in them by their users. The Shapley framework for explainability has …

Shapley residuals: Quantifying the limits of the Shapley value for explanations

I Kumar, C Scheidegger… - Advances in …, 2021 - proceedings.neurips.cc
Popular feature importance techniques compute additive approximations to nonlinear
models by first defining a cooperative game describing the value of different subsets of the …

Explaining individual predictions when features are dependent: More accurate approximations to Shapley values

K Aas, M Jullum, A Løland - Artificial Intelligence, 2021 - Elsevier
Explaining complex or seemingly simple machine learning models is an important practical
problem. We want to explain individual predictions from such models by learning simple …

The many Shapley values for model explanation

M Sundararajan, A Najmi - International conference on …, 2020 - proceedings.mlr.press
The Shapley value has become the basis for several methods that attribute the prediction of
a machine-learning model on an input to its base features. The use of the Shapley value is …

From Shapley values to generalized additive models and back

S Bordt, U von Luxburg - International Conference on …, 2023 - proceedings.mlr.press
In explainable machine learning, local post-hoc explanation algorithms and inherently
interpretable models are often seen as competing approaches. This work offers a partial …

Shapley flow: A graph-based approach to interpreting model predictions

J Wang, J Wiens, S Lundberg - International Conference on …, 2021 - proceedings.mlr.press
Many existing approaches for estimating feature importance are problematic because they
ignore or hide dependencies among features. A causal graph, which encodes the …

Reliable post hoc explanations: Modeling uncertainty in explainability

D Slack, A Hilgard, S Singh… - Advances in neural …, 2021 - proceedings.neurips.cc
As black box explanations are increasingly being employed to establish model credibility in
high stakes settings, it is important to ensure that these explanations are accurate and …