Robust explainability: A tutorial on gradient-based attribution methods for deep neural networks

IE Nielsen, D Dera, G Rasool… - IEEE Signal …, 2022 - ieeexplore.ieee.org
The rise in deep neural networks (DNNs) has led to increased interest in explaining their
predictions. While many methods for this exist, there is currently no consensus on how to …

Backpropagated gradient representations for anomaly detection

G Kwon, M Prabhushankar, D Temel… - Computer Vision–ECCV …, 2020 - Springer
Learning representations that clearly distinguish between normal and abnormal data is key
to the success of anomaly detection. Most existing anomaly detection algorithms use …

Traffic sign detection under challenging conditions: A deeper look into performance variations and spectral characteristics

D Temel, MH Chen, G AlRegib - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
Traffic signs are critical for maintaining the safety and efficiency of our roads. Therefore, we
need to carefully assess the capabilities and limitations of automated traffic sign detection …

Contrastive explanations in neural networks

M Prabhushankar, G Kwon, D Temel… - … Conference on Image …, 2020 - ieeexplore.ieee.org
Visual explanations are logical arguments based on visual features that justify the
predictions made by neural networks. Current modes of visual explanations answer …

Gaussian Switch Sampling: A Second-Order Approach to Active Learning

R Benkert, M Prabhushankar, G AlRegib… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
In active learning, acquisition functions define informativeness directly on the representation
position within the model manifold. However, for most machine learning models (in …

VOICE: Variance of Induced Contrastive Explanations to quantify Uncertainty in Neural Network Interpretability

M Prabhushankar, G AlRegib - IEEE Journal of Selected Topics …, 2024 - ieeexplore.ieee.org
In this paper, we visualize and quantify the predictive uncertainty of gradient-based post hoc
visual explanations for neural networks. Predictive uncertainty refers to the variability in the …

Novelty detection through model-based characterization of neural networks

G Kwon, M Prabhushankar, D Temel… - … Conference on Image …, 2020 - ieeexplore.ieee.org
In this paper, we propose a model-based characterization of neural networks to detect novel
input types and conditions. Novelty detection is crucial to identify abnormal inputs that can …

Explanatory paradigms in neural networks: Towards relevant and contextual explanations

G AlRegib, M Prabhushankar - IEEE Signal Processing …, 2022 - ieeexplore.ieee.org
In this article, we present a leap-forward expansion to the study of explainability in neural
networks by considering explanations as answers to abstract reasoning-based questions …

Gradient-based severity labeling for biomarker classification in OCT

K Kokilepersaud, M Prabhushankar… - … on Image Processing …, 2022 - ieeexplore.ieee.org
In this paper, we propose a novel selection strategy for contrastive learning for medical
images. On natural images, contrastive learning uses augmentations to select positive and …

Challenging environments for traffic sign detection: Reliability assessment under inclement conditions

D Temel, T Alshawi, MH Chen, G AlRegib - arXiv preprint arXiv …, 2019 - arxiv.org
State-of-the-art algorithms successfully localize and recognize traffic signs over existing
datasets, which are limited in terms of challenging condition type and severity. Therefore, it …