Interpretable machine learning: Fundamental principles and 10 grand challenges
Interpretability in machine learning (ML) is crucial for high stakes decisions and
troubleshooting. In this work, we provide fundamental principles for interpretable ML, and …
COVID-19 image data collection: Prospective predictions are the future
Across the world's coronavirus disease 2019 (COVID-19) hot spots, the need to streamline
patient diagnosis and management has become more pressing than ever. As one of the …
Hidden stratification causes clinically meaningful failures in machine learning for medical imaging
Machine learning models for medical image analysis often suffer from poor performance on
important subsets of a population that are not identified during training or testing. For …
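The failure mode described above is easiest to make concrete with per-subgroup metrics. A minimal sketch, assuming predictions, labels, and a subgroup annotation are available for each validation example; the function and the toy numbers are illustrative, not from the paper:

```python
from collections import defaultdict

def per_subgroup_accuracy(preds, labels, subgroups):
    """Accuracy broken out by subgroup, to expose hidden stratification.

    preds, labels, subgroups are parallel sequences; subgroup ids are any
    hashable annotation (e.g. 'pneumothorax_with_chest_drain')."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for p, y, g in zip(preds, labels, subgroups):
        total[g] += 1
        correct[g] += int(p == y)
    return {g: correct[g] / total[g] for g in total}

# A model can look fine in aggregate while failing on an important slice:
preds     = [1, 1, 1, 0, 1, 0, 0, 0]
labels    = [1, 1, 1, 0, 1, 1, 1, 0]
subgroups = ['easy'] * 5 + ['subtle'] * 3
print(per_subgroup_accuracy(preds, labels, subgroups))
# {'easy': 1.0, 'subtle': 0.333...}
```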
Debugging tests for model explanations
We investigate whether post-hoc model explanations are effective for diagnosing model
errors--model debugging. In response to the challenge of explaining a model's prediction, a …
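One of the explanation types such debugging studies examine is plain input-gradient (saliency) attribution. A minimal PyTorch sketch of that baseline, assuming a trained `model` and a preprocessed input tensor; this is a generic attribution method, not the paper's test suite:

```python
import torch

def input_gradient_attribution(model, x, target_class):
    """Gradient of the target logit w.r.t. the input: a simple post-hoc
    'explanation' a practitioner might inspect when debugging a suspect
    prediction."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                # shape: (1, num_classes)
    logits[0, target_class].backward()
    return x.grad.detach()           # same shape as x

# Usage (assumes `model` is a trained classifier and `image` is a
# normalized (1, C, H, W) tensor):
# saliency = input_gradient_attribution(model, image, target_class=3)
```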
Domino: Discovering systematic errors with cross-modal embeddings
Machine learning models that achieve high overall accuracy often make systematic errors
on important subsets (or slices) of data. Identifying underperforming slices is particularly …
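Domino itself fits an error-aware mixture model over cross-modal (image-text) embeddings; a much simpler stand-in for the same idea, clustering misclassified validation examples in an embedding space, might look like the following (embeddings, predictions, and labels are assumed to be precomputed, and scikit-learn's k-means replaces the paper's slice-discovery model):

```python
import numpy as np
from sklearn.cluster import KMeans

def candidate_error_slices(embeddings, preds, labels, n_slices=5):
    """Cluster misclassified validation examples in embedding space.

    Each cluster is a candidate 'slice' of systematic errors to inspect
    by hand. This is a simplification of slice-discovery methods such as
    Domino, which also model correct examples and use cross-modal
    embeddings to describe the discovered slices in natural language."""
    errors = np.flatnonzero(np.asarray(preds) != np.asarray(labels))
    if len(errors) < n_slices:
        return {0: errors}
    km = KMeans(n_clusters=n_slices, n_init=10, random_state=0)
    assignments = km.fit_predict(embeddings[errors])
    return {k: errors[assignments == k] for k in range(n_slices)}
```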
Post hoc explanations may be ineffective for detecting unknown spurious correlation
We investigate whether three types of post hoc model explanations–feature attribution,
concept activation, and training point ranking–are effective for detecting a model's reliance …
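The detection problem studied here can be simulated with a known, deliberately injected spurious signal: if an explanation is useful, a disproportionate share of attribution mass should fall on the spurious region. A hedged sketch of such a check, reusing a generic saliency map; the mask and the threshold in the usage note are illustrative, not the paper's protocol:

```python
import torch

def attribution_mass_in_region(attribution, region_mask):
    """Fraction of total |attribution| falling inside a known region
    (e.g. a corner patch or watermark the model might be latching onto).
    A value near the region's area fraction suggests the explanation
    does not flag the spurious feature."""
    a = attribution.abs()
    return (a * region_mask).sum().item() / a.sum().item()

# Usage sketch: region_mask is 1 on the injected patch, 0 elsewhere,
# with the same spatial shape as the attribution map.
# mass = attribution_mass_in_region(saliency, region_mask)
# if mass < 2 * region_mask.float().mean().item():
#     print("attribution barely highlights the spurious region")
```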
A case-based interpretable deep learning model for classification of mass lesions in digital mammography
Interpretability in machine learning models is important in high-stakes decisions such as
whether to order a biopsy based on a mammographic exam. Mammography poses …
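Case-based models of this kind typically score a new lesion by its similarity to a small set of learned prototype cases. The sketch below is a generic prototype-similarity scorer written in PyTorch, not the paper's architecture; the feature extractor, prototype vectors, and class weights are all assumed to be trained already:

```python
import torch
import torch.nn.functional as F

def prototype_scores(features, prototypes, class_weights):
    """Case-based scoring: similarity of an example's feature vector to
    each learned prototype, combined linearly into per-class scores.

    features:      (d,)   feature vector for one example
    prototypes:    (p, d) learned prototype vectors ("cases")
    class_weights: (c, p) how much each prototype supports each class
    """
    sims = F.cosine_similarity(features.unsqueeze(0), prototypes, dim=1)  # (p,)
    logits = class_weights @ sims                                         # (c,)
    return sims, logits  # sims are the human-readable part of the explanation
```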
Estimating example difficulty using variance of gradients
In machine learning, a question of great interest is understanding what examples are
challenging for a model to classify. Identifying atypical examples ensures the safe …
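The estimator named in the title can be sketched directly: take the gradient of the true-class logit with respect to the input at several saved training checkpoints, then measure how much those gradients vary per example. A minimal PyTorch sketch under that reading; the paper's exact normalization may differ:

```python
import torch

def variance_of_gradients(checkpoints, x, y):
    """Per-example difficulty score: variance, across training checkpoints,
    of the input gradient of the true-class logit, averaged over input
    dimensions. Higher scores tend to indicate atypical, harder examples."""
    grads = []
    for model in checkpoints:            # models saved at different epochs
        model.eval()
        xi = x.clone().detach().requires_grad_(True)
        logits = model(xi)               # (1, num_classes)
        logits[0, y].backward()
        grads.append(xi.grad.detach())
    g = torch.stack(grads)               # (num_checkpoints, *x.shape)
    return g.var(dim=0, unbiased=False).mean().item()
```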
A survey of deep learning for scientific discovery
M Raghu, E Schmidt - arXiv preprint arXiv:2003.11755, 2020 - arxiv.org
Over the past few years, we have seen fundamental breakthroughs in core problems in
machine learning, largely driven by advances in deep neural networks. At the same time, the …
Establishing data provenance for responsible artificial intelligence systems
Data provenance, a record that describes the origins and processing of data, offers new
promises in the increasingly important role of artificial intelligence (AI)-based systems in …
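The kind of record the abstract refers to can be made concrete with a small data structure that travels with a dataset. The shape below is purely illustrative and is not a schema from the paper or from any provenance standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Minimal record of where a dataset came from and what was done to it."""
    source: str                      # e.g. URL or upstream system
    collected_at: datetime
    collected_by: str                # person or pipeline responsible
    license: str
    transformations: list = field(default_factory=list)  # ordered processing steps

    def add_step(self, description: str):
        self.transformations.append(
            (datetime.now(timezone.utc).isoformat(), description)
        )

# record = ProvenanceRecord(source="hospital PACS export",
#                           collected_at=datetime(2020, 3, 1, tzinfo=timezone.utc),
#                           collected_by="ingest-pipeline-v2",
#                           license="data use agreement #12")
# record.add_step("de-identified DICOM headers")
```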