Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond

A Hedström, L Weber, D Krakowczyk, D Bareeva… - Journal of Machine …, 2023 - jmlr.org
The evaluation of explanation methods is a research topic that has not yet been explored
deeply; however, since explainability is supposed to strengthen trust in artificial intelligence …

Finding the right XAI method—a guide for the evaluation and ranking of explainable AI methods in climate science

PL Bommer, M Kretschmer, A Hedström… - … Intelligence for the …, 2024 - journals.ametsoc.org
Explainable artificial intelligence (XAI) methods shed light on the predictions of machine
learning algorithms. Several different approaches exist and have already been applied in …

A review of visualisation-as-explanation techniques for convolutional neural networks and their evaluation

E Mohamed, K Sirlantzis, G Howells - Displays, 2022 - Elsevier
Visualisation techniques are powerful tools to understand the behaviour of Artificial
Intelligence (AI) systems. They can be used to identify important features contributing to the …

SoK: Explainable machine learning in adversarial environments

M Noppel, C Wressnegger - 2024 IEEE Symposium on Security …, 2024 - ieeexplore.ieee.org
Modern deep learning methods have long been considered black boxes due to the lack of
insights into their decision-making process. However, recent advances in explainable …

The meta-evaluation problem in explainable AI: identifying reliable estimators with MetaQuantus

A Hedström, P Bommer, KK Wickstrøm… - arXiv preprint arXiv …, 2023 - arxiv.org
One of the unsolved challenges in the field of Explainable AI (XAI) is determining how to
most reliably estimate the quality of an explanation method in the absence of ground truth …

Explaining Bayesian neural networks

K Bykov, MMC Höhne, A Creosteanu, KR Müller… - arXiv preprint arXiv …, 2021 - arxiv.org
To make advanced learning machines such as Deep Neural Networks (DNNs) more
transparent in decision making, explainable AI (XAI) aims to provide interpretations of DNNs' …

SInGE: Sparsity via integrated gradients estimation of neuron relevance

E Yvinec, A Dapogny, M Cord… - Advances in Neural …, 2022 - proceedings.neurips.cc
The leap in performance in state-of-the-art computer vision methods is attributed to the
development of deep neural networks. However, it often comes at a computational price …

DORA: Exploring outlier representations in deep neural networks

K Bykov, M Deb, D Grinwald, KR Müller… - arXiv preprint arXiv …, 2022 - arxiv.org
Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal
representations. However, the concepts they learn remain opaque, a problem that becomes …

Propagating Transparency: A Deep Dive into the Interpretability of Neural Networks

A Somani, A Horsch, A Bopardikar… - Nordic Machine …, 2024 - journals.uio.no
In the rapidly evolving landscape of deep learning (DL), understanding the inner workings of
neural networks remains a significant challenge. This need for transparency and …

Hypericons for interpretability: decoding abstract concepts in visual data

DS Martinez Pandiani, N Lazzari, M Erp… - International Journal of …, 2023 - Springer
In an era of information abundance and visual saturation, the need for resources to organise
and access the vast expanse of visual data is paramount. Abstract concepts, such as comfort …