"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

SSY Kim, EA Watkins, O Russakovsky, R Fong… - Proceedings of the …, 2023 - dl.acm.org
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-
users' explainability needs and behaviors around XAI explanations. To address this gap and …

Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability

VV Ramaswamy, SSY Kim, R Fong… - Proceedings of the …, 2023 - openaccess.thecvf.com
Concept-based interpretability methods aim to explain a deep neural network
model's components and predictions using a pre-defined set of semantic concepts. These …

Take 5: Interpretable image classification with a handful of features

T Norrenbrock, M Rudolph, B Rosenhahn - arXiv preprint arXiv …, 2023 - arxiv.org
Deep Neural Networks use thousands of mostly incomprehensible features to identify a
single class, a decision no human can follow. We propose an interpretable sparse and low …

Navigating the landscape of concept-supported XAI: Challenges, innovations, and future directions

Z Shams Khoozani, AQM Sabri, WC Seng… - Multimedia Tools and …, 2024 - Springer
This comprehensive review of concept-supported interpretation methods in Explainable
Artificial Intelligence (XAI) navigates the multifaceted landscape. As machine learning …

Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks

B Rasheed, M Abdelhamid, A Khan, I Menezes… - IEEE …, 2024 - ieeexplore.ieee.org
Deep neural networks (DNNs), while powerful, often suffer from a lack of interpretability and
vulnerability to adversarial attacks. Concept bottleneck models (CBMs), which incorporate …

Estimation of Concept Explanations Should be Uncertainty Aware

V Piratla, J Heo, S Singh, A Weller - arXiv preprint arXiv:2312.08063, 2023 - arxiv.org
Model explanations are very valuable for interpreting and debugging prediction models. We
study a specific kind of global explanations called Concept Explanations, where the goal is …

Establishing Appropriate Trust in AI through Transparency and Explainability

SSY Kim - Extended Abstracts of the CHI Conference on Human …, 2024 - dl.acm.org
As AI systems are increasingly transforming our society, it is critical to support relevant
stakeholders to have appropriate understanding and trust in these systems. My dissertation …

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs

VV Ramaswamy, SSY Kim, R Fong… - arXiv preprint arXiv …, 2023 - arxiv.org
Concept-based explanations for convolutional neural networks (CNNs) aim to explain model
behavior and outputs using a pre-defined set of semantic concepts (e.g., the model …

TextCAVs: Debugging vision models using text

A Nicolson, Y Gal, JA Noble - arXiv preprint arXiv:2408.08652, 2024 - arxiv.org
Concept-based interpretability methods are a popular form of explanation for deep learning
models which provide explanations in the form of high-level human interpretable concepts …