Threading the needle of on and off-manifold value functions for Shapley explanations

CK Yeh, KY Lee, F Liu… - … Conference on Artificial …, 2022 - proceedings.mlr.press
Abstract A popular explainable AI (XAI) approach to quantify feature importance of a given
model is via Shapley values. These Shapley values arose in cooperative games, and hence …

Comparison of contextual importance and utility with LIME and Shapley values

K Främling, M Westberg, M Jullum… - … Autonomous Agents and …, 2021 - Springer
Different explainable AI (XAI) methods are based on different notions of 'ground truth'. In
order to trust explanations of AI systems, the ground truth has to provide fidelity towards the …

Decomposing global feature effects based on feature interactions

J Herbinger, B Bischl, G Casalicchio - arXiv preprint arXiv:2306.00541, 2023 - arxiv.org
Global feature effect methods, such as partial dependence plots, provide an intelligible
visualization of the expected marginal feature effect. However, such global feature effect …

On the connection between game-theoretic feature attributions and counterfactual explanations

E Albini, S Sharma, S Mishra, D Dervovic… - Proceedings of the …, 2023 - dl.acm.org
Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and
two of the most popular types of explanations are feature attributions, and counterfactual …

A comparative study of methods for estimating model-agnostic Shapley value explanations

LHB Olsen, IK Glad, M Jullum, K Aas - Data Mining and Knowledge …, 2024 - Springer
Shapley values originated in cooperative game theory but are extensively used today as a
model-agnostic explanation framework to explain predictions made by complex machine …

Explaining predictive models using Shapley values and non-parametric vine copulas

K Aas, T Nagler, M Jullum, A Løland - Dependence Modeling, 2021 - degruyter.com
In this paper the goal is to explain predictions from complex machine learning models. One
method that has become very popular during the last few years is Shapley values. The …

Trying to outrun causality with machine learning: Limitations of model explainability techniques for identifying predictive variables

MJ Vowels - stat, 2022 - researchgate.net
Abstract Machine Learning explainability techniques have been proposed as a means of
'explaining' or interrogating a model in order to understand why a particular decision or …

Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles

M Muschalik, F Fumagalli, B Hammer… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
While shallow decision trees may be interpretable, larger ensemble models like gradient-
boosted trees, which often set the state of the art in machine learning problems involving …

Accurate Shapley values for explaining tree-based models

SI Amoukou, T Salaün, N Brunel - … conference on artificial …, 2022 - proceedings.mlr.press
Abstract Although Shapley Values (SV) are widely used in explainable AI, they can be
poorly understood and estimated, implying that their analysis may lead to spurious …

Explaining preferences with Shapley values

R Hu, SL Chau, J Ferrando Huertas… - Advances in Neural …, 2022 - proceedings.neurips.cc
While preference modelling is becoming one of the pillars of machine learning, the problem
of preference explanation remains challenging and underexplored. In this paper, we …