Beyond explaining: Opportunities and challenges of XAI-based model improvement
Abstract: Explainable Artificial Intelligence (XAI) is an emerging research field bringing
transparency to highly complex and opaque machine learning (ML) models. Despite the …
Line: Out-of-distribution detection by leveraging important neurons
It is important to quantify the uncertainty of input samples, especially in mission-critical
domains such as autonomous driving and healthcare, where failure predictions on out-of …
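The general idea behind neuron-importance OOD scoring can be sketched as follows. This is an illustration under assumed setup, not the paper's LINe algorithm: keep a mask of neurons deemed important on in-distribution data, and flag samples whose activation energy falls mostly outside that set.

```python
import numpy as np

# Hedged sketch (illustration only, not the paper's method): score a sample
# by the fraction of its activation energy carried by "unimportant" neurons.
def ood_score(activations: np.ndarray, important_mask: np.ndarray) -> float:
    """Fraction of activation energy outside the important-neuron set."""
    energy = np.abs(activations)
    total = energy.sum() + 1e-12          # avoid division by zero
    return float(energy[~important_mask].sum() / total)

mask = np.array([True, True, False, False])   # neurons 0,1 matter in-distribution
in_dist = np.array([3.0, 2.0, 0.1, 0.0])      # activates the important neurons
ood = np.array([0.1, 0.0, 3.0, 2.0])          # activation mass elsewhere
print(ood_score(in_dist, mask) < ood_score(ood, mask))  # True
```

A threshold on this score would then separate in-distribution from out-of-distribution inputs; the choice of which neurons count as important is the substantive part that the paper addresses.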
Explainable deep classification models for domain generalization
Conventionally, AI models are thought to trade off explainability for lower accuracy. We
develop a training strategy that not only leads to a more explainable AI system for object …
Hint: Hierarchical neuron concept explainer
To interpret deep networks, one main approach is to associate neurons with human-
understandable concepts. However, existing methods often ignore the inherent connections …
Learning reliable visual saliency for model explanations
By highlighting important features that contribute to model prediction, visual saliency is used
as a natural form to interpret the working mechanism of deep neural networks. Numerous …
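A minimal form of visual saliency can be sketched with gradient-times-input attribution, assuming a linear scoring model for simplicity (the paper's method is more involved; this only illustrates the notion of highlighting features that contribute to a prediction).

```python
import numpy as np

# Hedged sketch: for a linear score f(x) = w . x the input gradient is w,
# so per-feature saliency is |w * x| (gradient-times-input), normalized.
def saliency(w: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Normalized per-feature importance for a linear score w . x."""
    s = np.abs(w * x)
    total = s.sum()
    return s / total if total > 0 else s

w = np.array([0.5, -2.0, 0.1])
x = np.array([1.0, 1.0, 1.0])
print(saliency(w, x))   # feature 1 carries most of the attribution mass
```

For deep networks the same recipe applies with the gradient computed by backpropagation instead of read off from the weights.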
Guided zoom: Zooming into network evidence to refine fine-grained model decisions
In state-of-the-art deep single-label classification models, the top-k accuracy is usually
significantly higher than the top-1 accuracy. This is more evident in fine-grained datasets …
Explaining cross-domain recognition with interpretable deep classifier
The recent advances in deep learning predominantly construct models around their internal
representations, making it hard to explain the rationale behind their decisions to humans …
XAI-enhanced semantic segmentation models for visual quality inspection
Visual quality inspection systems, crucial in sectors like manufacturing and logistics, employ
computer vision and machine learning for precise, rapid defect detection. However, their …
Iterative and adaptive sampling with spatial attention for black-box model explanations
Deep neural networks have achieved great success in many real-world applications, yet it
remains unclear and difficult to explain their decision-making process to an end user. In this …
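The simplest black-box, perturbation-based attribution can be sketched as single-feature occlusion. This is only the baseline idea under assumed setup; the paper's iterative and adaptive sampling with spatial attention is considerably more elaborate.

```python
import numpy as np

# Hedged sketch: mask one feature at a time and record how much the
# black-box model's score drops (no gradients or internals needed).
def occlusion_importance(model, x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    base = model(x)
    scores = np.empty_like(x)
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline            # occlude feature i
        scores[i] = base - model(x_masked)  # larger drop = more important
    return scores

# Toy black-box: a linear scorer standing in for an opaque model.
model = lambda v: float(v @ np.array([2.0, 0.0, -1.0]))
print(occlusion_importance(model, np.array([1.0, 1.0, 1.0])))  # [ 2.  0. -1.]
```

For images, the same loop runs over patches rather than scalar features, and adaptive methods spend their query budget on the regions that matter most.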
Beyond explaining: XAI-based Adaptive Learning with SHAP Clustering for Energy Consumption Prediction
This paper presents an approach integrating explainable artificial intelligence (XAI)
techniques with adaptive learning to enhance energy consumption prediction models, with a …
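The clustering step of such an approach can be sketched as grouping samples by their attribution vectors. The arrays below are synthetic stand-ins for real SHAP values (which the `shap` library would compute), and the tiny k-means is only an assumed, illustrative choice of clustering algorithm.

```python
import numpy as np

# Hedged sketch: cluster per-sample attribution vectors with a tiny k-means,
# so each cluster of explanation patterns can drive its own model adaptation.
def kmeans(X: np.ndarray, k: int, iters: int = 20, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # random initial centers
    for _ in range(iters):
        # assign each row to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic "SHAP" vectors: two distinct attribution patterns.
attr = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = kmeans(attr, 2)
print(labels)   # first two samples grouped together, last two together
```

Samples whose explanations cluster together plausibly fail (or succeed) for similar reasons, which is what makes the clusters useful targets for adaptive retraining.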