Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond
The evaluation of explanation methods is a research topic that has not yet been explored
deeply, however, since explainability is supposed to strengthen trust in artificial intelligence …
Finding the right XAI method—a guide for the evaluation and ranking of explainable AI methods in climate science
Explainable artificial intelligence (XAI) methods shed light on the predictions of machine
learning algorithms. Several different approaches exist and have already been applied in …
A review of visualisation-as-explanation techniques for convolutional neural networks and their evaluation
Visualisation techniques are powerful tools to understand the behaviour of Artificial
Intelligence (AI) systems. They can be used to identify important features contributing to the …
SoK: Explainable machine learning in adversarial environments
M Noppel, C Wressnegger - 2024 IEEE Symposium on Security …, 2024 - ieeexplore.ieee.org
Modern deep learning methods have long been considered black boxes due to the lack of
insights into their decision-making process. However, recent advances in explainable …
The meta-evaluation problem in explainable AI: identifying reliable estimators with MetaQuantus
One of the unsolved challenges in the field of Explainable AI (XAI) is determining how to
most reliably estimate the quality of an explanation method in the absence of ground truth …
Explaining Bayesian neural networks
To make advanced learning machines such as Deep Neural Networks (DNNs) more
transparent in decision making, explainable AI (XAI) aims to provide interpretations of DNNs' …
SInGE: Sparsity via integrated gradients estimation of neuron relevance
The leap in performance in state-of-the-art computer vision methods is attributed to the
development of deep neural networks. However, it often comes at a computational price …
DORA: Exploring outlier representations in deep neural networks
Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal
representations. However, the concepts they learn remain opaque, a problem that becomes …
Propagating Transparency: A Deep Dive into the Interpretability of Neural Networks
In the rapidly evolving landscape of deep learning (DL), understanding the inner workings of
neural networks remains a significant challenge. This need for transparency and …
Hypericons for interpretability: decoding abstract concepts in visual data
DS Martinez Pandiani, N Lazzari, M Erp… - International Journal of …, 2023 - Springer
In an era of information abundance and visual saturation, the need for resources to organise
and access the vast expanse of visual data is paramount. Abstract concepts, such as comfort …