From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable AI
The rising popularity of explainable artificial intelligence (XAI) to understand high-performing
black boxes raised the question of how to evaluate explanations of machine learning (ML) …
Explainable artificial intelligence: an analytical review
This paper provides a brief analytical review of the current state-of-the-art in relation to the
explainability of artificial intelligence in the context of recent advances in machine learning …
Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated
applications, but the outcomes of many AI models are challenging to comprehend and trust …
Notions of explainability and evaluation approaches for explainable artificial intelligence
Explainable Artificial Intelligence (XAI) has experienced significant growth over
the last few years. This is due to the widespread application of machine learning, particularly …
Evaluating the quality of machine learning explanations: A survey on methods and metrics
The most successful Machine Learning (ML) systems remain complex black boxes to end-
users, and even experts are often unable to understand the rationale behind their decisions …
A survey on neural network interpretability
Along with the great success of deep neural networks, there is also growing concern about
their black-box nature. The interpretability issue affects people's trust in deep learning …
LineVul: A transformer-based line-level vulnerability prediction
M Fu, C Tantithamthavorn - … of the 19th International Conference on …, 2022 - dl.acm.org
Software vulnerabilities are prevalent in software systems, causing a variety of problems
including deadlock, information loss, or system failures. Thus, early predictions of software …
Captum: A unified and generic model interpretability library for pytorch
N Kokhlikyan, V Miglani, M Martin, E Wang… - arXiv preprint arXiv …, 2020 - arxiv.org
In this paper we introduce a novel, unified, open-source model interpretability library for
PyTorch [12]. The library contains generic implementations of a number of gradient and …
Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond
The evaluation of explanation methods is a research topic that has not yet been explored
deeply; however, since explainability is supposed to strengthen trust in artificial intelligence …
Drug discovery with explainable artificial intelligence
Deep learning bears promise for drug discovery, including advanced image analysis,
prediction of molecular structure and function, and automated generation of innovative …