Medical visual question answering: A survey

Z Lin, D Zhang, Q Tao, D Shi, G Haffari, Q Wu… - Artificial Intelligence in …, 2023 - Elsevier
Medical Visual Question Answering (VQA) is a combination of medical artificial
intelligence and popular VQA challenges. Given a medical image and a clinically relevant …

Improving deep learning with prior knowledge and cognitive models: A survey on enhancing interpretability, adversarial robustness and zero-shot learning

F Mumuni, A Mumuni - Cognitive Systems Research, 2023 - Elsevier
We review current and emerging knowledge-informed and brain-inspired cognitive systems
for realizing adversarial defenses, eXplainable Artificial Intelligence (XAI), and zero-shot or …

Learning to receive help: Intervention-aware concept embedding models

M Espinosa Zarlenga, K Collins… - Advances in …, 2024 - proceedings.neurips.cc
Concept Bottleneck Models (CBMs) tackle the opacity of neural architectures by
constructing and explaining their predictions using a set of high-level concepts. A special …
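
Several entries in this list build on the CBM setup just described: the input is first mapped to a vector of human-interpretable concept predictions, and the label is computed only from those concepts, which is also what makes test-time concept interventions possible. The sketch below is a generic PyTorch illustration of that two-stage pattern under assumed sizes and names, not the architecture of any specific paper listed here.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Generic concept bottleneck: x -> concepts -> label.

    Hypothetical dimensions; any backbone could replace the MLP encoder.
    """
    def __init__(self, in_dim=128, n_concepts=10, n_classes=5):
        super().__init__()
        # Concept predictor: maps raw features to concept logits.
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_concepts)
        )
        # Label predictor: sees ONLY the concepts, which is what
        # makes the bottleneck interpretable.
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x, interventions=None):
        c = torch.sigmoid(self.concept_net(x))  # predicted concepts in [0, 1]
        if interventions is not None:
            # Test-time intervention: a human overwrites selected concepts.
            # `interventions` maps concept index -> corrected value.
            c = c.clone()
            for idx, value in interventions.items():
                c[:, idx] = value
        return self.label_net(c), c

model = ConceptBottleneckModel()
x = torch.randn(4, 128)
logits, concepts = model(x)                         # plain prediction
logits_fixed, _ = model(x, interventions={3: 1.0})  # expert sets concept 3 "on"
```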

Sparsity-guided holistic explanation for llms with interpretable inference-time intervention

Z Tan, T Chen, Z Zhang, H Liu - … of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Large Language Models (LLMs) have achieved unprecedented breakthroughs in
various natural language processing domains. However, the enigmatic "black-box" nature of …

Sparse feature circuits: Discovering and editing interpretable causal graphs in language models

S Marks, C Rager, EJ Michaud, Y Belinkov… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce methods for discovering and applying sparse feature circuits. These are
causally implicated subnetworks of human-interpretable features for explaining language …

Human uncertainty in concept-based ai systems

KM Collins, M Barker, M Espinosa Zarlenga… - Proceedings of the …, 2023 - dl.acm.org
Placing a human in the loop may help abate the risks of deploying AI systems in
safety-critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks …

Faithful vision-language interpretation via concept bottleneck models

S Lai, L Hu, J Wang, L Berti-Equille… - The Twelfth International …, 2023 - openreview.net
The demand for transparency in healthcare and finance has led to interpretable machine
learning (IML) models, notably the concept bottleneck models (CBMs), valued for their …

Robust and interpretable medical image classifiers via concept bottleneck models

A Yan, Y Wang, Y Zhong, Z He, P Karypis… - arXiv preprint arXiv …, 2023 - arxiv.org
Medical image classification is a critical problem for healthcare, with the potential to alleviate
the workload of doctors and facilitate diagnoses of patients. However, two challenges arise …

Auxiliary losses for learning generalizable concept-based models

I Sheth, S Ebrahimi Kahou - Advances in Neural …, 2024 - proceedings.neurips.cc
The increasing use of neural networks in various applications has led to growing
apprehensions, underscoring the necessity to understand their operations beyond mere …

Incremental residual concept bottleneck models

C Shang, S Zhou, H Zhang, X Ni… - Proceedings of the …, 2024 - openaccess.thecvf.com
Concept Bottleneck Models (CBMs) map the black-box visual representations
extracted by deep neural networks onto a set of interpretable concepts and use the concepts …
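
The residual idea suggested by this last entry's title can be illustrated in a generic form: alongside the interpretable concept vector, a small uninterpreted residual embedding is carried to the label head, so that information the concept set fails to capture is not simply discarded. The sketch below shows only that broad pattern under assumed dimensions; it is not the paper's actual algorithm.

```python
import torch
import torch.nn as nn

class ResidualConceptBottleneck(nn.Module):
    """Concept bottleneck with a small uninterpreted residual channel.

    Generic illustration only; structure and sizes are hypothetical.
    """
    def __init__(self, in_dim=128, n_concepts=10, res_dim=4, n_classes=5):
        super().__init__()
        self.concept_net = nn.Linear(in_dim, n_concepts)  # interpretable path
        self.residual_net = nn.Linear(in_dim, res_dim)    # uninterpreted path
        self.label_net = nn.Linear(n_concepts + res_dim, n_classes)

    def forward(self, x):
        c = torch.sigmoid(self.concept_net(x))  # concept predictions
        r = self.residual_net(x)                 # residual covers the concept gap
        return self.label_net(torch.cat([c, r], dim=-1)), c

model = ResidualConceptBottleneck()
logits, concepts = model(torch.randn(2, 128))
```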