Interpretable deep learning: Interpretation, interpretability, trustworthiness, and beyond
Deep neural networks have been well-known for their superb handling of various machine
learning and artificial intelligence tasks. However, due to their over-parameterized black-box …
Counterfactual explanations and algorithmic recourses for machine learning: A review
Machine learning plays a role in many deployed decision systems, often in ways that are
difficult or impossible to understand by human stakeholders. Explaining, in a human …
Deep neural networks and tabular data: A survey
Heterogeneous tabular data are the most commonly used form of data and are essential for
numerous critical and computationally demanding applications. On homogeneous datasets …
A survey of uncertainty in deep neural networks
Over the last decade, neural networks have reached almost every field of science and
become a crucial part of various real world applications. Due to the increasing spread …
Interpretable machine learning: Fundamental principles and 10 grand challenges
Interpretability in machine learning (ML) is crucial for high stakes decisions and
troubleshooting. In this work, we provide fundamental principles for interpretable ML, and …
Uncertainty as a form of transparency: Measuring, communicating, and using uncertainty
Algorithmic transparency entails exposing system properties to various stakeholders for
purposes that include understanding, improving, and contesting predictions. Until now, most …
Explaining in style: Training a GAN to explain a classifier in StyleSpace
Image classification models can depend on multiple different semantic attributes of the
image. An explanation of the decision of the classifier needs to both discover and visualize …
Uncertainty quantification with pre-trained language models: A large-scale empirical analysis
Pre-trained language models (PLMs) have gained increasing popularity due to their
compelling prediction performance in diverse natural language processing (NLP) tasks …
CARLA: A Python library to benchmark algorithmic recourse and counterfactual explanation algorithms
M Pawelczyk, S Bielawski, J Heuvel, T Richter… - arXiv preprint arXiv …, 2021 - arxiv.org
Counterfactual explanations provide means for prescriptive model explanations by
suggesting actionable feature changes (e.g., increase income) that allow individuals to …
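As a rough illustration of the kind of explanation these recourse works study, the sketch below searches for a counterfactual for a single instance by randomly perturbing actionable features until the predicted label flips. It is a generic toy, not CARLA's API: the linear "model", the feature indices, and the helper `find_counterfactual` are all invented for illustration.

```python
# Minimal counterfactual-search sketch (NOT the CARLA API): for an instance x
# and a binary classifier `predict`, look for a small change to "actionable"
# features (e.g. income, but not age) that flips the predicted label.
import numpy as np

def find_counterfactual(x, predict, actionable_idx, step=0.2, max_iter=500, seed=None):
    """Random local search for a label-flipping perturbation of x.

    x              : 1-D array, the original instance
    predict        : callable mapping an (n, d) array to 0/1 labels
    actionable_idx : indices of features that may be changed
    """
    rng = np.random.default_rng(seed)
    original_label = predict(x[None, :])[0]
    best, best_dist = None, np.inf
    cf = x.copy()
    for _ in range(max_iter):
        # Perturb one randomly chosen actionable feature.
        candidate = cf.copy()
        j = rng.choice(actionable_idx)
        candidate[j] += step * rng.standard_normal()
        if predict(candidate[None, :])[0] != original_label:
            # Label flipped: keep the closest such point (L1 distance).
            dist = np.linalg.norm(candidate - x, ord=1)
            if dist < best_dist:
                best, best_dist = candidate.copy(), dist
        else:
            cf = candidate  # keep walking until the decision boundary is crossed
    return best  # None if no counterfactual was found

# Toy usage with a hand-written linear "model" standing in for a real classifier.
if __name__ == "__main__":
    w = np.array([0.0, 1.5, -0.5])           # feature 1 ("income") matters most
    predict = lambda X: (X @ w > 1.0).astype(int)
    x = np.array([1.0, 0.4, 0.2])             # predicted 0 (e.g. loan denied)
    cf = find_counterfactual(x, predict, actionable_idx=[1], seed=0)
    print("original:", x, "->", predict(x[None, :])[0])
    if cf is not None:
        print("counterfactual:", cf.round(2), "->", predict(cf[None, :])[0])
```

Real recourse methods add constraints the toy ignores (plausibility, sparsity, feature immutability); CARLA's purpose is to benchmark such methods under a common interface.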
Sample-efficient optimization in the latent space of deep generative models via weighted retraining
A Tripp, E Daxberger… - Advances in Neural …, 2020 - proceedings.neurips.cc
Many important problems in science and engineering, such as drug design, involve
optimizing an expensive black-box objective function over a complex, high-dimensional, and …
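The loop below is a minimal sketch of the latent-space-optimization idea under toy assumptions: a fixed random linear map stands in for a trained VAE decoder, and a cross-entropy-style search stands in for Bayesian optimization. The weighted-retraining step of Tripp et al. (periodically refitting the generative model on data re-weighted by objective value) is only described in the comments, not implemented.

```python
# Latent-space optimization sketch (toy stand-ins, not the paper's code):
# rather than searching the high-dimensional input space directly, sample
# candidates in a low-dimensional latent space, decode them, and evaluate
# the expensive objective only on decoded points.
import numpy as np

rng = np.random.default_rng(0)

# Toy "decoder": lifts a 2-D latent code to a 20-D input. A fixed random
# linear map plays the role of a trained generative model here.
W = rng.standard_normal((20, 2))
decode = lambda z: np.tanh(W @ z)

# Expensive black-box objective on the 20-D input space (toy example:
# negative squared distance to a hidden target point).
target = decode(np.array([1.0, -0.5]))
objective = lambda x: -np.sum((x - target) ** 2)

# Cross-entropy-style search in latent space: sample, keep the elites,
# refit the sampling distribution, repeat. In the weighted-retraining
# scheme, the decoder itself would also be refit between rounds on data
# re-weighted by objective value, keeping the latent space useful.
mu, sigma = np.zeros(2), np.ones(2)
for step in range(30):
    Z = mu + sigma * rng.standard_normal((64, 2))      # candidate latents
    scores = np.array([objective(decode(z)) for z in Z])
    elite = Z[np.argsort(scores)[-8:]]                  # best 8 candidates
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print("best latent:", mu.round(3), "objective:", objective(decode(mu)).round(4))
```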