Shapley flow: A graph-based approach to interpreting model predictions
Many existing approaches for estimating feature importance are problematic because they
ignore or hide dependencies among features. A causal graph, which encodes the …
Problems with Shapley-value-based explanations as feature importance measures
IE Kumar, S Venkatasubramanian… - International …, 2020 - proceedings.mlr.press
Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models. These methods define a cooperative game between the …
Shapley Residuals: Quantifying the limits of the Shapley value for explanations
I Kumar, C Scheidegger… - Advances in …, 2021 - proceedings.neurips.cc
Popular feature importance techniques compute additive approximations to nonlinear
models by first defining a cooperative game describing the value of different subsets of the …
The explanation game: Explaining machine learning models using Shapley values
A number of techniques have been proposed to explain a machine learning model's
prediction by attributing it to the corresponding input features. Popular among these are …
WeightedSHAP: analyzing and improving Shapley based feature attributions
The Shapley value is a popular approach for measuring the influence of individual features.
While Shapley feature attribution is built upon desiderata from game theory, some of its …
Consistent individualized feature attribution for tree ensembles
Interpreting predictions from tree ensemble methods such as gradient boosting machines
and random forests is important, yet feature attribution for trees is often heuristic and not …
L-Shapley and C-Shapley: Efficient model interpretation for structured data
We study instancewise feature importance scoring as a method for model interpretation. Any
such method yields, for each predicted instance, a vector of importance scores associated …
A unified approach to interpreting model predictions
SM Lundberg, SI Lee - Advances in neural information …, 2017 - proceedings.neurips.cc
Understanding why a model makes a certain prediction can be as crucial as the prediction's
accuracy in many applications. However, the highest accuracy for large modern datasets is …
Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models
Shapley values underlie one of the most popular model-agnostic methods within
explainable artificial intelligence. These values are designed to attribute the difference …
Evaluating and aggregating feature-based model explanations
A feature-based model explanation denotes how much each input feature contributes to a
model's output for a given data point. As the number of proposed explanation functions …