Shapley Flow: A graph-based approach to interpreting model predictions

J Wang, J Wiens, S Lundberg - International Conference on …, 2021 - proceedings.mlr.press
Many existing approaches for estimating feature importance are problematic because they
ignore or hide dependencies among features. A causal graph, which encodes the …

Problems with Shapley-value-based explanations as feature importance measures

IE Kumar, S Venkatasubramanian… - International …, 2020 - proceedings.mlr.press
Game-theoretic formulations of feature importance have become popular as a way to
"explain" machine learning models. These methods define a cooperative game between the …

Shapley Residuals: Quantifying the limits of the Shapley value for explanations

I Kumar, C Scheidegger… - Advances in …, 2021 - proceedings.neurips.cc
Popular feature importance techniques compute additive approximations to nonlinear
models by first defining a cooperative game describing the value of different subsets of the …

The explanation game: Explaining machine learning models using Shapley values

L Merrick, A Taly - Machine Learning and Knowledge Extraction: 4th IFIP …, 2020 - Springer
A number of techniques have been proposed to explain a machine learning model's
prediction by attributing it to the corresponding input features. Popular among these are …

WeightedSHAP: Analyzing and improving Shapley-based feature attributions

Y Kwon, JY Zou - Advances in Neural Information …, 2022 - proceedings.neurips.cc
The Shapley value is a popular approach for measuring the influence of individual features.
While Shapley feature attribution is built upon desiderata from game theory, some of its …
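The generalization studied here keeps the per-coalition marginal contributions but changes how contributions from different coalition sizes are averaged. Below is a brute-force sketch of that idea on a toy game with an interaction term; the non-uniform weights are purely illustrative and are not the weighting scheme proposed in the paper:

```python
import itertools

def marginal_contributions(value, n, i):
    """Group feature i's marginal contributions v(S ∪ {i}) - v(S) by coalition size |S|."""
    others = [j for j in range(n) if j != i]
    by_size = {k: [] for k in range(n)}
    for k in range(n):
        for S in itertools.combinations(others, k):
            by_size[k].append(value(set(S) | {i}) - value(set(S)))
    return by_size

def attribution(value, n, i, weights):
    """Weighted average, over coalition sizes, of the mean marginal contribution.
    Uniform weights over sizes recover the exact Shapley value."""
    by_size = marginal_contributions(value, n, i)
    return sum(w * sum(ms) / len(ms) for w, ms in zip(weights, by_size.values()))

# Toy game with an interaction between features 0 and 1, for x = (1.0, 2.0, 3.0).
x = [1.0, 2.0, 3.0]
v = lambda S: sum(x[j] for j in S) + (1.0 if {0, 1} <= S else 0.0)
n = len(x)

shapley_weights = [1.0 / n] * n      # uniform over coalition sizes -> exact Shapley values
custom_weights = [0.5, 0.3, 0.2]     # illustrative non-uniform weighting (not the paper's scheme)

for i in range(n):
    print(i, attribution(v, n, i, shapley_weights), attribution(v, n, i, custom_weights))
```

With the uniform weights this is exactly the Shapley value; a non-uniform weighting changes how the interaction between features 0 and 1 is split across coalition sizes.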

Consistent individualized feature attribution for tree ensembles

SM Lundberg, GG Erion, SI Lee - arXiv preprint arXiv:1802.03888, 2018 - arxiv.org
Interpreting predictions from tree ensemble methods such as gradient boosting machines
and random forests is important, yet feature attribution for trees is often heuristic and not …
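A minimal usage sketch, assuming the open-source `shap` Python package that accompanies this line of work and its `TreeExplainer` (the polynomial-time algorithm for tree ensembles); exact return shapes depend on the model type and package version:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] + 2.0 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # tree-specific Shapley value computation
shap_values = explainer.shap_values(X[:5])   # one attribution vector per instance
print(shap_values.shape)                     # (5, 4): 5 instances x 4 features
```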

L-Shapley and C-Shapley: Efficient model interpretation for structured data

J Chen, L Song, MJ Wainwright, MI Jordan - arXiv preprint arXiv …, 2018 - arxiv.org
We study instancewise feature importance scoring as a method for model interpretation. Any
such method yields, for each predicted instance, a vector of importance scores associated …
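To make concrete what such instancewise scores approximate, below is a brute-force Shapley computation for a toy masked linear model; the mask-to-zero value function and the example numbers are illustrative assumptions, not the paper's construction. L-Shapley and C-Shapley avoid this 2^n enumeration by restricting the coalitions to local neighborhoods of a graph over the features.

```python
import itertools
import math
import numpy as np

def exact_shapley(value, n):
    """Brute-force Shapley values for an n-player game; cost grows as 2^n,
    which is what local approximations of this kind try to avoid."""
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            for S in itertools.combinations(others, k):
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy instancewise game: the "model" is a dot product and v(S) evaluates it
# with features outside S zeroed out (a simple masking choice, assumed here).
x = np.array([0.5, -1.0, 2.0, 0.0])
w = np.array([1.0, 3.0, -0.5, 2.0])
v = lambda S: float(np.dot(w, np.where([j in S for j in range(len(x))], x, 0.0)))

print(exact_shapley(v, len(x)))  # for a linear model masked to a zero baseline: w * x
```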

A unified approach to interpreting model predictions

SM Lundberg, SI Lee - Advances in neural information …, 2017 - proceedings.neurips.cc
Understanding why a model makes a certain prediction can be as crucial as the prediction's
accuracy in many applications. However, the highest accuracy for large modern datasets is …
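A minimal model-agnostic sketch, assuming the open-source `shap` package's `KernelExplainer` (Kernel SHAP), which estimates Shapley values for a black-box prediction function via a weighted local linear regression; the toy model and data here are illustrative:

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

background = X[:50]  # reference data used to "remove" features from coalitions
explainer = shap.KernelExplainer(lambda data: model.predict_proba(data)[:, 1], background)
shap_values = explainer.shap_values(X[:3], nsamples=200)
print(shap_values)   # one attribution per feature per explained instance
```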

Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models

T Heskes, E Sijben, IG Bucur… - Advances in neural …, 2020 - proceedings.neurips.cc
Shapley values underlie one of the most popular model-agnostic methods within
explainable artificial intelligence. These values are designed to attribute the difference …

Evaluating and aggregating feature-based model explanations

U Bhatt, A Weller, JMF Moura - arXiv preprint arXiv:2005.00631, 2020 - arxiv.org
A feature-based model explanation denotes how much each input feature contributes to a
model's output for a given data point. As the number of proposed explanation functions …
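One example of the kind of quantitative evaluation this line of work considers is a faithfulness-style score: correlate each feature's attribution with the change in model output when that feature is replaced by a baseline value. A minimal sketch under those assumptions; the metric definitions in the paper differ in detail:

```python
import numpy as np

def faithfulness_score(predict, x, attributions, baseline):
    """Pearson correlation between attributions and the prediction drop caused by
    replacing each feature, one at a time, with its baseline value."""
    base_pred = predict(x[None, :])[0]
    drops = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]
        drops.append(base_pred - predict(x_pert[None, :])[0])
    return float(np.corrcoef(attributions, drops)[0, 1])

# Toy linear model and two candidate explanations of one instance.
w = np.array([2.0, -1.0, 0.5])
predict = lambda X: X @ w
x = np.array([1.0, 2.0, -1.0])
baseline = np.zeros(3)

good = w * x                         # attributions aligned with the model's structure
bad = np.array([0.1, 1.0, -0.5])     # misaligned attributions
print(faithfulness_score(predict, x, good, baseline),
      faithfulness_score(predict, x, bad, baseline))
```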