From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes raised the question of how to evaluate explanations of machine learning (ML) …
Algorithms to estimate Shapley value feature attributions
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …
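The two entries above concern Shapley value feature attributions. As a minimal illustration of the concept (not any specific paper's method), the sketch below computes exact Shapley values for a hypothetical toy linear model by enumerating all coalitions, with "removed" features replaced by a baseline value — one common, though not unique, removal convention:

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model: a simple linear function of three features.
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] - 3.0 * x[2]

def shapley_values(model, x, baseline):
    """Exact Shapley feature attributions by enumerating all coalitions.

    Features outside a coalition are set to their baseline value; the
    attribution for feature i is the weighted average of its marginal
    contribution over every subset of the remaining features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# For a linear model, phi[i] equals coef_i * (x[i] - baseline[i]) up to
# float rounding, and the attributions sum to model(x) - model(baseline).
```

Exact enumeration costs O(2^n) model evaluations, which is exactly why the estimation algorithms surveyed in the entry above are needed for realistic feature counts.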
A survey on neural network interpretability
Along with the great success of deep neural networks, there is also growing concern about
their black-box nature. The interpretability issue affects people's trust in deep learning …
The Shapley value in machine learning
Over the last few years, the Shapley value, a solution concept from cooperative game theory,
has found numerous applications in machine learning. In this paper, we first discuss …
Explainable deep learning: A field guide for the uninitiated
Deep neural networks (DNNs) are an indispensable machine learning tool despite the
difficulty of diagnosing what aspects of a model's input drive its decisions. In countless real …
Explaining by removing: A unified framework for model explanation
Researchers have proposed a wide variety of model explanation approaches, but it remains
unclear how most methods are related or when one method is preferable to another. We …
Causal machine learning: A survey and open problems
Causal Machine Learning (CausalML) is an umbrella term for machine learning methods
that formalize the data-generation process as a structural causal model (SCM). This …
Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
Explaining complex or seemingly simple machine learning models is an important practical
problem. We want to explain individual predictions from such models by learning simple …
How interpretable machine learning can benefit process understanding in the geosciences
Interpretable Machine Learning (IML) has rapidly advanced in recent years, offering
new opportunities to improve our understanding of the complex Earth system. IML goes …
SHAP-based explanation methods: a review for NLP interpretability
E Mosca, F Szigeti, S Tragianni… - Proceedings of the …, 2022 - aclanthology.org
Model explanations are crucial for the transparent, safe, and trustworthy
deployment of machine learning models. The SHapley Additive exPlanations (SHAP) …