Threading the needle of on- and off-manifold value functions for Shapley explanations
Abstract A popular explainable AI (XAI) approach to quantify feature importance of a given
model is via Shapley values. These Shapley values arose in cooperative games, and hence …
Comparison of contextual importance and utility with LIME and Shapley values
Different explainable AI (XAI) methods are based on different notions of 'ground truth'. In
order to trust explanations of AI systems, the ground truth has to provide fidelity towards the …
Decomposing global feature effects based on feature interactions
Global feature effect methods, such as partial dependence plots, provide an intelligible
visualization of the expected marginal feature effect. However, such global feature effect …
On the connection between game-theoretic feature attributions and counterfactual explanations
Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and
two of the most popular types of explanations are feature attributions, and counterfactual …
A comparative study of methods for estimating model-agnostic Shapley value explanations
Shapley values originated in cooperative game theory but are extensively used today as a
model-agnostic explanation framework to explain predictions made by complex machine …
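The estimation methods compared in the entry above all approximate the same underlying quantity: the exact Shapley value from cooperative game theory. As a point of reference, here is a minimal brute-force sketch in plain Python (the function names and the toy additive game are illustrative, not from any of the cited papers):

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for an n-player game.

    `value` maps a frozenset of player indices to a real payoff.
    Runs in O(2^n) per player, so this is only feasible for small n;
    the estimation methods surveyed above exist precisely because
    this enumeration is intractable for realistic feature counts.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            for S in combinations(others, size):
                S = frozenset(S)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of player i to coalition S
                phi[i] += w * (value(S | {i}) - value(S))
    return phi

# Toy additive game: v(S) = sum of per-player weights.
weights = [1.0, 2.0, 3.0]
v = lambda S: sum(weights[j] for j in S)
phi = shapley_values(v, 3)  # for an additive game, phi recovers the weights
```

In XAI settings, `value` would be a conditional or marginal expectation of the model's prediction over the out-of-coalition features, and the choice between those two expectations is exactly the on/off-manifold question raised in the first entry of this list.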
Explaining predictive models using Shapley values and non-parametric vine copulas
In this paper the goal is to explain predictions from complex machine learning models. One
method that has become very popular during the last few years is Shapley values. The …
Trying to outrun causality with machine learning: Limitations of model explainability techniques for identifying predictive variables
MJ Vowels - stat, 2022 - researchgate.net
Abstract Machine Learning explainability techniques have been proposed as a means of
'explaining' or interrogating a model in order to understand why a particular decision or …
Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles
While shallow decision trees may be interpretable, larger ensemble models like gradient-
boosted trees, which often set the state of the art in machine learning problems involving …
Accurate Shapley values for explaining tree-based models
SI Amoukou, T Salaün, N Brunel - … conference on artificial …, 2022 - proceedings.mlr.press
Abstract Although Shapley Values (SV) are widely used in explainable AI, they can be
poorly understood and estimated, implying that their analysis may lead to spurious …
Explaining preferences with Shapley values
While preference modelling is becoming one of the pillars of machine learning, the problem
of preference explanation remains challenging and underexplored. In this paper, we …