Medical visual question answering: A survey
Abstract Medical Visual Question Answering (VQA) is a combination of medical artificial
intelligence and popular VQA challenges. Given a medical image and a clinically relevant …
Improving deep learning with prior knowledge and cognitive models: A survey on enhancing interpretability, adversarial robustness and zero-shot learning
F Mumuni, A Mumuni - Cognitive Systems Research, 2023 - Elsevier
We review current and emerging knowledge-informed and brain-inspired cognitive systems
for realizing adversarial defenses, eXplainable Artificial Intelligence (XAI), and zero-shot or …
Learning to receive help: Intervention-aware concept embedding models
M Espinosa Zarlenga, K Collins… - Advances in …, 2024 - proceedings.neurips.cc
Abstract Concept Bottleneck Models (CBMs) tackle the opacity of neural architectures by
constructing and explaining their predictions using a set of high-level concepts. A special …
Sparsity-guided holistic explanation for llms with interpretable inference-time intervention
Abstract Large Language Models (LLMs) have achieved unprecedented breakthroughs in
various natural language processing domains. However, the enigmatic "black-box" nature of …
Sparse feature circuits: Discovering and editing interpretable causal graphs in language models
We introduce methods for discovering and applying sparse feature circuits. These are
causally implicated subnetworks of human-interpretable features for explaining language …
Human uncertainty in concept-based ai systems
Placing a human in the loop may help abate the risks of deploying AI systems in safety-
critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks …
Faithful vision-language interpretation via concept bottleneck models
The demand for transparency in healthcare and finance has led to interpretable machine
learning (IML) models, notably the concept bottleneck models (CBMs), valued for their …
Robust and interpretable medical image classifiers via concept bottleneck models
Medical image classification is a critical problem for healthcare, with the potential to alleviate
the workload of doctors and facilitate diagnoses of patients. However, two challenges arise …
Auxiliary losses for learning generalizable concept-based models
I Sheth, S Ebrahimi Kahou - Advances in Neural …, 2024 - proceedings.neurips.cc
The increasing use of neural networks in various applications has led to growing
apprehensions, underscoring the necessity to understand their operations beyond mere …
Incremental residual concept bottleneck models
Abstract Concept Bottleneck Models (CBMs) map the black-box visual representations
extracted by deep neural networks onto a set of interpretable concepts and use the concepts …