Algorithms to estimate Shapley value feature attributions
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …
Interpretable machine learning–a brief history, state-of-the-art and challenges
We present a brief history of the field of interpretable machine learning (IML), give an
overview of state-of-the-art interpretation methods and discuss challenges. Research in IML …
Causal machine learning: A survey and open problems
Causal Machine Learning (CausalML) is an umbrella term for machine learning methods
that formalize the data-generation process as a structural causal model (SCM). This …
Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
Explaining complex or seemingly simple machine learning models is an important practical
problem. We want to explain individual predictions from such models by learning simple …
Explaining by removing: A unified framework for model explanation
Researchers have proposed a wide variety of model explanation approaches, but it remains
unclear how most methods are related or when one method is preferable to another. We …
General pitfalls of model-agnostic interpretation methods for machine learning models
An increasing number of model-agnostic interpretation techniques for machine learning
(ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) …
Shapley explainability on the data manifold
C Frye, D de Mijolla, T Begley, L Cowton… - arXiv preprint arXiv …, 2020 - arxiv.org
Explainability in AI is crucial for model development, compliance with regulation, and
providing operational nuance to predictions. The Shapley framework for explainability …
Shapley flow: A graph-based approach to interpreting model predictions
Many existing approaches for estimating feature importance are problematic because they
ignore or hide dependencies among features. A causal graph, which encodes the …
Explaining a series of models by propagating Shapley values
Local feature attribution methods are increasingly used to explain complex machine
learning models. However, current methods are limited because they are extremely …
Explainability in music recommender systems
The most common way to listen to recorded music nowadays is via streaming platforms,
which provide access to tens of millions of tracks. To assist users in effectively browsing …