Opportunities and obstacles for deep learning in biology and medicine
T Ching, DS Himmelstein… - Journal of the …, 2018 - royalsocietypublishing.org
Deep learning describes a class of machine learning algorithms that are capable of
combining raw inputs into layers of intermediate features. These algorithms have recently …
Explainable deep learning: A field guide for the uninitiated
Deep neural networks (DNNs) are an indispensable machine learning tool despite the
difficulty of diagnosing what aspects of a model's input drive its decisions. In countless real …
Axiom-based Grad-CAM: Towards accurate visualization and explanation of CNNs
To gain a better understanding and usage of Convolutional Neural Networks (CNNs), the
visualization and interpretation of CNNs have attracted increasing attention in recent years. In …
A diagnostic study of explainability techniques for text classification
P Atanasova - Accountable and Explainable Methods for Complex …, 2024 - Springer
Recent developments in machine learning have introduced models that approach human
performance at the cost of increased architectural complexity. Efforts to make the rationales …
Towards better understanding of gradient-based attribution methods for deep neural networks
Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging
problem that has gained increasing attention over the last few years. While several methods …
Learning to explain: An information-theoretic perspective on model interpretation
We introduce instancewise feature selection as a methodology for model interpretation. Our
method is based on learning a function to extract a subset of features that are most …
The (un)reliability of saliency methods
Saliency methods aim to explain the predictions of deep neural networks. These methods
lack reliability when the explanation is sensitive to factors that do not contribute to the model …
Benchmarking deep learning interpretability in time series predictions
Saliency methods are used extensively to highlight the importance of input features in model
predictions. These methods are mostly used in vision and language tasks, and their …
Debugging tests for model explanations
We investigate whether post-hoc model explanations are effective for diagnosing model
errors, i.e., model debugging. In response to the challenge of explaining a model's prediction, a …
Learning important features through propagating activation differences
A Shrikumar, P Greenside… - … conference on machine …, 2017 - proceedings.mlr.press
The purported “black box” nature of neural networks is a barrier to adoption in applications
where interpretability is essential. Here we present DeepLIFT (Deep Learning Important …