Algorithms to estimate Shapley value feature attributions

H Chen, IC Covert, SM Lundberg, SI Lee - Nature Machine Intelligence, 2023 - nature.com
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …

Interpretable machine learning–a brief history, state-of-the-art and challenges

C Molnar, G Casalicchio, B Bischl - Joint European conference on …, 2020 - Springer
We present a brief history of the field of interpretable machine learning (IML), give an
overview of state-of-the-art interpretation methods and discuss challenges. Research in IML …

Causal machine learning: A survey and open problems

J Kaddour, A Lynch, Q Liu, MJ Kusner… - arXiv preprint arXiv …, 2022 - arxiv.org
Causal Machine Learning (CausalML) is an umbrella term for machine learning methods
that formalize the data-generation process as a structural causal model (SCM). This …

Explaining individual predictions when features are dependent: More accurate approximations to Shapley values

K Aas, M Jullum, A Løland - Artificial Intelligence, 2021 - Elsevier
Explaining complex or seemingly simple machine learning models is an important practical
problem. We want to explain individual predictions from such models by learning simple …

Explaining by removing: A unified framework for model explanation

I Covert, S Lundberg, SI Lee - Journal of Machine Learning Research, 2021 - jmlr.org
Researchers have proposed a wide variety of model explanation approaches, but it remains
unclear how most methods are related or when one method is preferable to another. We …

General pitfalls of model-agnostic interpretation methods for machine learning models

C Molnar, G König, J Herbinger, T Freiesleben… - … Workshop on Extending …, 2020 - Springer
An increasing number of model-agnostic interpretation techniques for machine learning
(ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) …

Shapley explainability on the data manifold

C Frye, D de Mijolla, T Begley, L Cowton… - arXiv preprint arXiv …, 2020 - arxiv.org
Explainability in AI is crucial for model development, compliance with regulation, and
providing operational nuance to predictions. The Shapley framework for explainability …

Shapley flow: A graph-based approach to interpreting model predictions

J Wang, J Wiens, S Lundberg - International Conference on …, 2021 - proceedings.mlr.press
Many existing approaches for estimating feature importance are problematic because they
ignore or hide dependencies among features. A causal graph, which encodes the …

Explaining a series of models by propagating Shapley values

H Chen, SM Lundberg, SI Lee - Nature Communications, 2022 - nature.com
Local feature attribution methods are increasingly used to explain complex machine
learning models. However, current methods are limited because they are extremely …

Explainability in music recommender systems

D Afchar, A Melchiorre, M Schedl, R Hennequin… - AI Magazine, 2022 - ojs.aaai.org
The most common way to listen to recorded music nowadays is via streaming platforms,
which provide access to tens of millions of tracks. To assist users in effectively browsing …