Concept bottleneck models

PW Koh, T Nguyen, YS Tang… - International …, 2020 - proceedings.mlr.press
We seek to learn models that we can interact with using high-level concepts: if the model did
not think there was a bone spur in the x-ray, would it still predict severe arthritis? State-of-the …
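The snippet above describes the core mechanism of a concept bottleneck model: the input is first mapped to human-interpretable concept predictions, and the label is predicted from those concepts alone, so a person can override a concept at test time (e.g., "no bone spur") and see how the prediction changes. Below is a minimal, hedged sketch of that idea in PyTorch; all class and parameter names are hypothetical and this is not the authors' reference implementation.

```python
# Minimal concept bottleneck sketch (hypothetical names, illustrative only).
# x -> predicted concepts c_hat -> label; selected concepts can be overridden
# at test time ("intervention"), as in the bone-spur example above.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_concepts)
        )
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x, concept_override=None):
        c_hat = torch.sigmoid(self.concept_net(x))   # concept predictions in [0, 1]
        if concept_override is not None:
            # Entries that are not NaN replace the model's own concept estimates.
            mask = ~torch.isnan(concept_override)
            c_hat = torch.where(mask, concept_override.nan_to_num(), c_hat)
        return self.label_net(c_hat), c_hat

# Example intervention: force concept 0 (say, "bone spur") to 0 and re-predict.
model = ConceptBottleneckModel(in_dim=32, n_concepts=4, n_classes=3)
x = torch.randn(1, 32)
override = torch.full((1, 4), float("nan"))
override[0, 0] = 0.0
logits_after_intervention, concepts = model(x, concept_override=override)
```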

Learning bottleneck concepts in image classification

B Wang, L Li, Y Nakashima… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Interpreting and explaining the behavior of deep neural networks is critical for many tasks.
Explainable AI provides a way to address this challenge, mostly by providing per-pixel …

Promises and pitfalls of black-box concept learning models

A Mahinpei, J Clark, I Lage, F Doshi-Velez… - arXiv preprint arXiv …, 2021 - arxiv.org
Machine learning models that incorporate concept learning as an intermediate step in their
decision-making process can match the performance of black-box predictive models while …

Static and dynamic concepts for self-supervised video representation learning

R Qian, S Ding, X Liu, D Lin - European Conference on Computer Vision, 2022 - Springer
In this paper, we propose a novel learning scheme for self-supervised video representation
learning. Motivated by how humans understand videos, we propose to first learn general …

Semantically Interpretable Activation Maps: what-where-how explanations within CNNs

D Marcos, S Lobry, D Tuia - 2019 IEEE/CVF International …, 2019 - ieeexplore.ieee.org
A main issue preventing the use of Convolutional Neural Networks (CNN) in end user
applications is the low level of transparency in the decision process. Previous work on CNN …

Semantic bottlenecks: Quantifying and improving inspectability of deep representations

M Losch, M Fritz, B Schiele - International Journal of Computer Vision, 2021 - Springer
Today's deep learning systems deliver high performance based on end-to-end training but
are notoriously hard to inspect. We argue that there are at least two reasons making …

SALAD: Self-assessment learning for action detection

G Vaudaux-Ruth… - Proceedings of the …, 2021 - openaccess.thecvf.com
Literature on self-assessment in machine learning mainly focuses on the production of well-
calibrated algorithms through consensus frameworks, i.e., calibration is seen as a problem …

A novel intrinsically explainable model with semantic manifolds established via transformed priors

G Shi, M Yang, D Gao - Knowledge-Based Systems, 2022 - Elsevier
Because humans instinctively trust and interact with explainable representations instead of
latent features, intrinsically interpretable models (IIMs) aimed at representations with …

Automated Molecular Concept Generation and Labeling with Large Language Models

S Zhang, B Xia, Z Zhang, Q Wu, F Sun, Z Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
Artificial intelligence (AI) is significantly transforming scientific research. Explainable AI
methods, such as concept-based models (CMs), are promising for driving new scientific …

Interpretable Prognostics with Concept Bottleneck Models

F Forest, K Rombach, O Fink - arXiv preprint arXiv:2405.17575, 2024 - arxiv.org
Deep learning approaches have recently been extensively explored for the prognostics of
industrial assets. However, they still suffer from a lack of interpretability, which hinders their …