Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models
Shapley values underlie one of the most popular model-agnostic methods within
explainable artificial intelligence. These values are designed to attribute the difference …
The explanation game: Explaining machine learning models using Shapley values
A number of techniques have been proposed to explain a machine learning model's
prediction by attributing it to the corresponding input features. Popular among these are …
Problems with Shapley-value-based explanations as feature importance measures
IE Kumar, S Venkatasubramanian… - International …, 2020 - proceedings.mlr.press
Game-theoretic formulations of feature importance have become popular as a way to
"explain" machine learning models. These methods define a cooperative game between the …
Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability
Explaining AI systems is fundamental both to the development of high performing models
and to the trust placed in them by their users. The Shapley framework for explainability has …
Shapley Residuals: Quantifying the limits of the Shapley value for explanations
I Kumar, C Scheidegger… - Advances in …, 2021 - proceedings.neurips.cc
Popular feature importance techniques compute additive approximations to nonlinear
models by first defining a cooperative game describing the value of different subsets of the …
Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
Explaining complex or seemingly simple machine learning models is an important practical
problem. We want to explain individual predictions from such models by learning simple …
The many Shapley values for model explanation
M Sundararajan, A Najmi - International conference on …, 2020 - proceedings.mlr.press
The Shapley value has become the basis for several methods that attribute the prediction of
a machine-learning model on an input to its base features. The use of the Shapley value is …
From Shapley values to generalized additive models and back
S Bordt, U von Luxburg - International Conference on …, 2023 - proceedings.mlr.press
In explainable machine learning, local post-hoc explanation algorithms and inherently
interpretable models are often seen as competing approaches. This work offers a partial …
Shapley flow: A graph-based approach to interpreting model predictions
Many existing approaches for estimating feature importance are problematic because they
ignore or hide dependencies among features. A causal graph, which encodes the …
Reliable post hoc explanations: Modeling uncertainty in explainability
As black box explanations are increasingly being employed to establish model credibility in
high stakes settings, it is important to ensure that these explanations are accurate and …
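All of the entries above build on the same game-theoretic attribution formula. As a minimal, hypothetical sketch (a toy linear model with a marginal value function that fixes absent features at a baseline — one of the several value-function choices the papers above debate), exact Shapley values can be computed by enumerating feature subsets:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley values for an n-player game via subset enumeration."""
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        # phi_i = sum over S not containing i of
        #   |S|! (n - |S| - 1)! / n! * [v(S ∪ {i}) - v(S)]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
        phis.append(phi)
    return phis

# Toy model (names are illustrative, not from any paper above):
def f(x):
    return 3 * x[0] + 2 * x[1]

x, baseline = [1.0, 1.0], [0.0, 0.0]

# Marginal ("interventional") value function: features outside S
# are fixed at the baseline point rather than the instance.
def v(S):
    z = [x[i] if i in S else baseline[i] for i in range(len(x))]
    return f(z)

print(shapley_values(v, 2))  # [3.0, 2.0] — sums to f(x) - f(baseline)
```

For this independent-feature linear model the attributions reduce to the familiar `w_i * (x_i - baseline_i)`; the papers above differ precisely in how `v(S)` should be defined when features are dependent or causally related.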