A survey on neural network interpretability

Y Zhang, P Tiňo, A Leonardis… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Along with the great success of deep neural networks, there is also growing concern about
their black-box nature. The interpretability issue affects people's trust in deep learning …

Going beyond XAI: A systematic survey for explanation-guided learning

Y Gao, S Gu, J Jiang, SR Hong, D Yu, L Zhao - ACM Computing Surveys, 2024 - dl.acm.org
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing
DNNs become more complex and diverse, ranging from improving a conventional model …

Uncovering expression signatures of synergistic drug responses via ensembles of explainable machine-learning models

JD Janizek, AB Dincer, S Celik, H Chen… - Nature biomedical …, 2023 - nature.com
Machine learning may aid the choice of optimal combinations of anticancer drugs
by explaining the molecular basis of their synergy. By combining accurate models with …

SoK: Explainable machine learning in adversarial environments

M Noppel, C Wressnegger - 2024 IEEE Symposium on …, 2023 - oaklandsok.github.io
Modern deep learning methods have long been considered black boxes due to the lack of
insights into their decision-making process. However, recent advances in explainable …

Interpretable and robust AI in EEG systems: A survey

X Zhou, C Liu, L Zhai, Z Jia, C Guan, Y Liu - arXiv preprint arXiv …, 2023 - arxiv.org
The close coupling of artificial intelligence (AI) and electroencephalography (EEG) has
substantially advanced human-computer interaction (HCI) technologies in the AI era …

FeedbackLogs: Recording and incorporating stakeholder feedback into machine learning pipelines

M Barker, E Kallina, D Ashok, K Collins… - Proceedings of the 3rd …, 2023 - dl.acm.org
As machine learning (ML) pipelines affect an increasing array of stakeholders, there is a
growing need for documenting how input from stakeholders is recorded and incorporated …

Self-interpretable time series prediction with counterfactual explanations

J Yan, H Wang - International Conference on Machine …, 2023 - proceedings.mlr.press
Interpretable time series prediction is crucial for safety-critical areas such as healthcare and
autonomous driving. Most existing methods focus on interpreting predictions by assigning …

Reckoning with the disagreement problem: Explanation consensus as a training objective

A Schwarzschild, M Cembalest, K Rao… - Proceedings of the …, 2023 - dl.acm.org
As neural networks increasingly make critical decisions in high-stakes settings, monitoring
and explaining their behavior in an understandable and trustworthy manner is a necessity …

Repeat and Concatenate: 2D to 3D Image Translation with 3D to 3D Generative Modeling

A Corona-Figueroa, HPH Shum… - Proceedings of the …, 2024 - openaccess.thecvf.com
This paper investigates a 2D to 3D image translation method with a straightforward
technique enabling correlated 2D X-ray to 3D CT-like reconstruction. We observe that …

Synchronization-Inspired Interpretable Neural Networks

W Han, Z Qin, J Liu, C Böhm… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Synchronization is a ubiquitous phenomenon in nature that enables the orderly presentation
of information. In the human brain, for instance, functional modules such as the visual, motor …