A neuron relevance explanation method for deep networks based on layer-wise incremental decomposition

Y Chen, J Li, W Shao, Y Sun - Acta Automatica Sinica, 2024 - aas.net.cn
The black-box nature of neural networks severely hinders intuitive analysis and understanding of network decisions. Although the literature reports a variety of decision explanation methods based on neuron contribution attribution, the consistency of existing methods' explanations is hard to guarantee, and their robustness needs further improvement …

Interpreting deep learning model using rule-based method

X Wang, J Wang, K Tang - arXiv preprint arXiv:2010.07824, 2020 - arxiv.org
Deep learning models are favored in many research and industry areas and have reached accuracy that approximates or even surpasses human level. However, they have long been …

Explaining deep neural networks: A survey on the global interpretation methods

R Saleem, B Yuan, F Kurugollu, A Anjum, L Liu - Neurocomputing, 2022 - Elsevier
A substantial amount of research has been carried out in Explainable Artificial Intelligence
(XAI) models, especially in those which explain the deep architectures of neural networks. A …

Breaking batch normalization for better explainability of deep neural networks through layer-wise relevance propagation

M Guillemot, C Heusele, R Korichi, S Schnebert… - arXiv preprint arXiv …, 2020 - arxiv.org
The lack of transparency of neural networks remains a major obstacle to their use. The Layer-wise Relevance Propagation technique builds heat-maps representing the relevance of each …
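To make concrete what Layer-wise Relevance Propagation computes, here is a minimal sketch of the epsilon-stabilized LRP rule on a toy two-layer ReLU network. The weights, input, and epsilon value are illustrative assumptions, not taken from the paper above.

```python
import numpy as np

def lrp_epsilon(W, a, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the layer input a
    through weight matrix W using the epsilon-stabilized LRP rule."""
    z = W @ a                            # pre-activations z_j = sum_i w_ji * a_i
    s = R_out / (z + eps * np.sign(z))   # stabilized relevance/activation ratio
    return a * (W.T @ s)                 # R_i = a_i * sum_j w_ji * s_j

# Toy forward pass: input -> hidden (ReLU) -> scalar output score.
x  = np.array([1.0, 2.0, -1.0])
W1 = np.array([[0.5, -0.2, 0.3],
               [0.3,  0.8, -0.4]])
W2 = np.array([[1.0, -0.5]])
h  = np.maximum(W1 @ x, 0.0)
y  = W2 @ h

# Propagate the output score back layer by layer; the resulting R_x is
# the per-input relevance that would be rendered as a heat-map.
R_h = lrp_epsilon(W2, h, y)
R_x = lrp_epsilon(W1, x, R_h)
```

A useful sanity check for LRP rules is conservation: the relevance sums at each layer should (approximately) equal the output score `y`.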

Deep Interpretation with Sign Separated and Contribution Recognized Decomposition

LYW Hui, DW Soh - … in Computational Intelligence: 16th International Work …, 2021 - Springer
Network interpretation in the context of explainable AI continues to gather interest, not only because of the need to explain algorithm decisions, but also because of potential …

Beyond Pixels: A Sample Based Method for understanding the decisions of Neural Networks

O Dibua, M Austin, K Kafle - openreview.net
Interpretability in deep learning is one of the largest obstacles to more widespread adoption
of deep learning in critical applications. A variety of methods have been introduced to …

Interpreting multivariate Shapley interactions in DNNs

H Zhang, Y Xie, L Zheng, D Zhang… - Proceedings of the AAAI …, 2021 - ojs.aaai.org
This paper aims to explain deep neural networks (DNNs) from the perspective of multivariate
interactions. In this paper, we define and quantify the significance of interactions among …
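As a toy illustration of the interaction idea (the pairwise Shapley interaction index rather than the paper's multivariate definition), the snippet below computes it by exact enumeration for a 3-player set function. The game `v` is an invented assumption chosen so that players 0 and 1 earn a joint bonus.

```python
import itertools
import math

def shapley_interaction(v, n, i, j):
    """Pairwise Shapley interaction index between players i and j of a
    set function v over n players, by exact coalition enumeration."""
    others = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for r in range(len(others) + 1):
        for combo in itertools.combinations(others, r):
            S = set(combo)
            # Shapley weight for a coalition of size |S| out of n players
            weight = (math.factorial(len(S)) * math.factorial(n - len(S) - 2)
                      / math.factorial(n - 1))
            # Discrete second difference: joint effect minus individual effects
            delta = v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S)
            total += weight * delta
    return total

def v(S):
    # Assumed toy game: each member contributes 0.5, and the pair {0, 1}
    # earns an extra joint bonus of 1.0 when both are present.
    return (1.0 if {0, 1} <= S else 0.0) + 0.5 * len(S)
```

On this game `shapley_interaction(v, 3, 0, 1)` recovers exactly the joint bonus of 1.0, while the interaction between players 0 and 2 is 0, matching the intuition that only 0 and 1 cooperate.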

A robust unsupervised ensemble of feature-based explanations using restricted boltzmann machines

V Borisov, J Meier, J Heuvel, H Jalali… - arXiv preprint arXiv …, 2021 - arxiv.org
Understanding the results of deep neural networks is an essential step towards wider
acceptance of deep learning algorithms. Many approaches address the issue of interpreting …

NoiseGrad: Enhancing explanations by introducing stochasticity to model weights

K Bykov, A Hedström, S Nakajima… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Many efforts have been made for revealing the decision-making process of black-box
learning machines such as deep neural networks, resulting in useful local and global …
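The core idea named in the title, perturbing model weights rather than inputs and averaging the resulting explanations, can be sketched as follows. The linear "model", multiplicative noise scheme, and noise scale are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def saliency(w, x):
    """Gradient explanation: for a linear score f(x) = w . x, the
    input gradient is simply w (x is unused in this toy case)."""
    return w

def noisegrad(w, x, n_samples=50, sigma=0.2):
    """Average saliency maps over model weights perturbed with
    multiplicative Gaussian noise: w * (1 + N(0, sigma^2))."""
    maps = []
    for _ in range(n_samples):
        noise = 1.0 + sigma * rng.standard_normal(w.shape)
        maps.append(saliency(w * noise, x))
    return np.mean(maps, axis=0)

w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 1.0, 1.0])
explanation = noisegrad(w, x)
```

Because the noise is zero-mean, the averaged map converges to the clean gradient in this linear toy; for nonlinear networks the averaging over an ensemble of perturbed models is what smooths out weight-specific artifacts.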

Discerning decision-making process of deep neural networks with hierarchical voting transformation

Y Sun, H Zhu, C Qin, F Zhuang… - Advances in Neural …, 2021 - proceedings.neurips.cc
Neural network based deep learning techniques have shown great success for numerous
applications. While it is expected to understand their intrinsic decision-making processes …