Explaining models by propagating Shapley values of local components
In healthcare, making the best possible predictions with complex models (e.g., neural
networks, ensembles/stacks of different models) can impact patient welfare. In order to make …
Irof: a low resource evaluation metric for explanation methods
The adoption of machine learning in health care hinges on the transparency of the underlying
algorithms, necessitating explanation methods. However, despite a growing …
The explanation game: Explaining machine learning models using Shapley values
A number of techniques have been proposed to explain a machine learning model's
prediction by attributing it to the corresponding input features. Popular among these are …
Global explanations of neural networks: Mapping the landscape of predictions
A barrier to the wider adoption of neural networks is their lack of interpretability. While local
explanation methods exist for one prediction, most global attributions still reduce neural …
SurvSHAP(t): Time-dependent explanations of machine learning survival models
Machine and deep learning survival models demonstrate similar or even improved
time-to-event prediction capabilities compared to classical statistical learning methods yet …
Counterfactual Shapley additive explanations
Feature attributions are a common paradigm for model explanations due to their simplicity in
assigning a single numeric score for each input feature to a model. In the actionable …
Towards unifying feature attribution and counterfactual explanations: Different means to the same end
Feature attributions and counterfactual explanations are popular approaches to explain a
ML model. The former assigns an importance score to each input feature, while the latter …
Synthetic benchmarks for scientific research in explainable machine learning
As machine learning models grow more complex and their applications become more high-stakes,
tools for explaining model predictions have become increasingly important. This has …
Explaining a series of models by propagating Shapley values
Local feature attribution methods are increasingly used to explain complex machine
learning models. However, current methods are limited because they are extremely …
Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models
With the availability of large databases and recent improvements in deep learning
methodology, the performance of AI systems is reaching or even exceeding the human level …