" Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-
users' explainability needs and behaviors around XAI explanations. To address this gap and …
Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability
Abstract Concept-based interpretability methods aim to explain a deep neural network
model's components and predictions using a pre-defined set of semantic concepts. These …
Take 5: Interpretable image classification with a handful of features
Deep Neural Networks use thousands of mostly incomprehensible features to identify a
single class, a decision no human can follow. We propose an interpretable sparse and low …
Navigating the landscape of concept-supported XAI: Challenges, innovations, and future directions
Z Shams Khoozani, AQM Sabri, WC Seng… - Multimedia Tools and …, 2024 - Springer
This comprehensive review of concept-supported interpretation methods in Explainable
Artificial Intelligence (XAI) navigates the multifaceted landscape. As machine learning …
Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks
Deep neural networks (DNNs), while powerful, often suffer from a lack of interpretability and
vulnerability to adversarial attacks. Concept bottleneck models (CBMs), which incorporate …
Estimation of Concept Explanations Should be Uncertainty Aware
Model explanations are very valuable for interpreting and debugging prediction models. We
study a specific kind of global explanations called Concept Explanations, where the goal is …
Establishing Appropriate Trust in AI through Transparency and Explainability
SSY Kim - Extended Abstracts of the CHI Conference on Human …, 2024 - dl.acm.org
As AI systems are increasingly transforming our society, it is critical to support relevant
stakeholders to have appropriate understanding and trust in these systems. My dissertation …
UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
Concept-based explanations for convolutional neural networks (CNNs) aim to explain model
behavior and outputs using a pre-defined set of semantic concepts (e.g., the model …
TextCAVs: Debugging vision models using text
Concept-based interpretability methods are a popular form of explanation for deep learning
models which provide explanations in the form of high-level human interpretable concepts …