Human-in-the-loop machine learning: a state of the art
E Mosqueira-Rey, E Hernández-Pereira… - Artificial Intelligence …, 2023 - Springer
Researchers are defining new types of interactions between humans and machine learning
algorithms generically called human-in-the-loop machine learning. Depending on who is in …
Explainable artificial intelligence: a comprehensive review
Thanks to the exponential growth in computing power and vast amounts of data, artificial
intelligence (AI) has witnessed remarkable developments in recent years, enabling it to be …
Masked feature prediction for self-supervised visual pre-training
Abstract We present Masked Feature Prediction (MaskFeat) for self-supervised pre-training
of video models. Our approach first randomly masks out a portion of the input sequence and …
Language in a bottle: Language model guided concept bottlenecks for interpretable image classification
Abstract Concept Bottleneck Models (CBM) are inherently interpretable models that factor
model decisions into human-readable concepts. They allow people to easily understand …
From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes raised the question of how to evaluate explanations of machine learning (ML) …
From attribution maps to human-understandable explanations through concept relevance propagation
The field of explainable artificial intelligence (XAI) aims to bring transparency to today's
powerful but opaque deep learning models. While local XAI methods explain individual …
Transparency of deep neural networks for medical image analysis: A review of interpretability methods
Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for
diagnosis and treatment decisions. Deep neural networks have shown the same or better …
Algorithms to estimate Shapley value feature attributions
Feature attributions based on the Shapley value are popular for explaining machine
learning models. However, their estimation is complex from both theoretical and …
Notions of explainability and evaluation approaches for explainable artificial intelligence
Abstract Explainable Artificial Intelligence (XAI) has experienced a significant growth over
the last few years. This is due to the widespread application of machine learning, particularly …
Representation engineering: A top-down approach to AI transparency
In this paper, we identify and characterize the emerging area of representation engineering
(RepE), an approach to enhancing the transparency of AI systems that draws on insights …