Interpretable machine learning: Fundamental principles and 10 grand challenges

C Rudin, C Chen, Z Chen, H Huang… - Statistic …, 2022 - projecteuclid.org
Interpretability in machine learning (ML) is crucial for high-stakes decisions and
troubleshooting. In this work, we provide fundamental principles for interpretable ML, and …

Covid-19 image data collection: Prospective predictions are the future

JP Cohen, P Morrison, L Dao, K Roth… - arXiv preprint arXiv …, 2020 - arxiv.org
Across the world's coronavirus disease 2019 (COVID-19) hot spots, the need to streamline
patient diagnosis and management has become more pressing than ever. As one of the …

Hidden stratification causes clinically meaningful failures in machine learning for medical imaging

L Oakden-Rayner, J Dunnmon, G Carneiro… - Proceedings of the ACM …, 2020 - dl.acm.org
Machine learning models for medical image analysis often suffer from poor performance on
important subsets of a population that are not identified during training or testing. For …

Debugging tests for model explanations

J Adebayo, M Muelly, I Liccardi, B Kim - arXiv preprint arXiv:2011.05429, 2020 - arxiv.org
We investigate whether post-hoc model explanations are effective for diagnosing model
errors (model debugging). In response to the challenge of explaining a model's prediction, a …

Domino: Discovering systematic errors with cross-modal embeddings

S Eyuboglu, M Varma, K Saab, JB Delbrouck… - arXiv preprint arXiv …, 2022 - arxiv.org
Machine learning models that achieve high overall accuracy often make systematic errors
on important subsets (or slices) of data. Identifying underperforming slices is particularly …

Post hoc explanations may be ineffective for detecting unknown spurious correlation

J Adebayo, M Muelly, H Abelson… - International conference on …, 2022 - openreview.net
We investigate whether three types of post hoc model explanations (feature attribution,
concept activation, and training point ranking) are effective for detecting a model's reliance …

A case-based interpretable deep learning model for classification of mass lesions in digital mammography

AJ Barnett, FR Schwartz, C Tao, C Chen… - Nature Machine …, 2021 - nature.com
Interpretability in machine learning models is important in high-stakes decisions such as
whether to order a biopsy based on a mammographic exam. Mammography poses …

Estimating example difficulty using variance of gradients

C Agarwal, D D'souza… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
In machine learning, a question of great interest is understanding what examples are
challenging for a model to classify. Identifying atypical examples ensures the safe …

A survey of deep learning for scientific discovery

M Raghu, E Schmidt - arXiv preprint arXiv:2003.11755, 2020 - arxiv.org
Over the past few years, we have seen fundamental breakthroughs in core problems in
machine learning, largely driven by advances in deep neural networks. At the same time, the …

Establishing data provenance for responsible artificial intelligence systems

K Werder, B Ramesh, R Zhang - ACM Transactions on Management …, 2022 - dl.acm.org
Data provenance, a record that describes the origins and processing of data, offers new
promise in the increasingly important role of artificial intelligence (AI)-based systems in …