From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI

M Nauta, J Trienes, S Pathak, E Nguyen… - ACM Computing …, 2023 - dl.acm.org
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes raised the question of how to evaluate explanations of machine learning (ML) …

Explainable artificial intelligence: an analytical review

PP Angelov, EA Soares, R Jiang… - … : Data Mining and …, 2021 - Wiley Online Library
This paper provides a brief analytical review of the current state‐of‐the‐art in relation to the
explainability of artificial intelligence in the context of recent advances in machine learning …

Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence

S Ali, T Abuhmed, S El-Sappagh, K Muhammad… - Information fusion, 2023 - Elsevier
Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated
applications, but the outcomes of many AI models are challenging to comprehend and trust …

Notions of explainability and evaluation approaches for explainable artificial intelligence

G Vilone, L Longo - Information Fusion, 2021 - Elsevier
Explainable Artificial Intelligence (XAI) has experienced significant growth over
the last few years. This is due to the widespread application of machine learning, particularly …

Evaluating the quality of machine learning explanations: A survey on methods and metrics

J Zhou, AH Gandomi, F Chen, A Holzinger - Electronics, 2021 - mdpi.com
The most successful Machine Learning (ML) systems remain complex black boxes to
end-users, and even experts are often unable to understand the rationale behind their decisions …

A survey on neural network interpretability

Y Zhang, P Tiňo, A Leonardis… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Along with the great success of deep neural networks, there is also growing concern about
their black-box nature. The interpretability issue affects people's trust in deep learning …

LineVul: A transformer-based line-level vulnerability prediction

M Fu, C Tantithamthavorn - … of the 19th International Conference on …, 2022 - dl.acm.org
Software vulnerabilities are prevalent in software systems, causing a variety of problems
including deadlock, information loss, or system failures. Thus, early predictions of software …

Captum: A unified and generic model interpretability library for pytorch

N Kokhlikyan, V Miglani, M Martin, E Wang… - arXiv preprint arXiv …, 2020 - arxiv.org
In this paper we introduce a novel, unified, open-source model interpretability library for
PyTorch [12]. The library contains generic implementations of a number of gradient and …
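As a minimal illustration of the kind of attribution API Captum exposes, the sketch below applies Integrated Gradients to an arbitrary PyTorch module. The toy two-layer network, input shapes, and target class are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: per-feature attributions with Captum's Integrated Gradients.
# The model and data below are placeholders for illustration only.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.randn(2, 4, requires_grad=True)   # batch of 2 samples, 4 features
baseline = torch.zeros_like(inputs)              # all-zero reference point

ig = IntegratedGradients(model)
# Attributions for class index 1, plus the convergence delta that Captum
# reports as a sanity check on the path-integral approximation.
attributions, delta = ig.attribute(
    inputs, baselines=baseline, target=1, return_convergence_delta=True
)
print(attributions.shape)  # torch.Size([2, 4])
```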

Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond

A Hedström, L Weber, D Krakowczyk, D Bareeva… - Journal of Machine …, 2023 - jmlr.org
The evaluation of explanation methods is a research topic that has not yet been explored
deeply. However, since explainability is supposed to strengthen trust in artificial intelligence …
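A rough sketch of how Quantus scores a batch of precomputed attributions with one of its metrics (here Max-Sensitivity, a robustness metric). The toy CNN, the MNIST-sized random inputs, and the choice of Saliency explanations are assumptions for illustration; the call convention (metric objects invoked with numpy x/y/a batches and a torch model) follows the Quantus documentation, though argument names may differ across versions.

```python
# Sketch: evaluating explanation robustness with a single Quantus metric.
# All data, the model, and the "Saliency" method choice are illustrative assumptions.
import torch
import torch.nn as nn
import quantus
from captum.attr import Saliency

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 28 * 28, 10),
)
model.eval()

x_batch = torch.randn(4, 1, 28, 28)          # 4 fake MNIST-sized images
y_batch = torch.randint(0, 10, (4,))
a_batch = Saliency(model).attribute(x_batch.requires_grad_(), target=y_batch)

# Max-Sensitivity: how much do explanations change under small input perturbations?
metric = quantus.MaxSensitivity(nr_samples=10)
scores = metric(
    model=model,
    x_batch=x_batch.detach().numpy(),
    y_batch=y_batch.numpy(),
    a_batch=a_batch.detach().numpy(),
    explain_func=quantus.explain,                      # re-explains perturbed inputs
    explain_func_kwargs={"method": "Saliency"},
    device="cpu",
)
print(scores)  # one robustness score per sample (lower is better)
```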

Drug discovery with explainable artificial intelligence

J Jiménez-Luna, F Grisoni, G Schneider - Nature Machine Intelligence, 2020 - nature.com
Deep learning bears promise for drug discovery, including advanced image analysis,
prediction of molecular structure and function, and automated generation of innovative …