Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

CAMERAS: Enhanced resolution and sanity preserving class activation mapping for image saliency

MAAK Jalwana, N Akhtar… - Proceedings of the …, 2021 - openaccess.thecvf.com
Backpropagation image saliency aims at explaining model predictions by estimating model-
centric importance of individual pixels in the input. However, class-insensitivity of the earlier …
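
As a point of reference for the backpropagation saliency family this paper refines, here is a minimal sketch of vanilla gradient saliency (Simonyan et al., 2014); it assumes a torchvision ResNet-18 and a stand-in random input, and is not the CAMERAS algorithm itself.

import torch
from torchvision import models

# Vanilla gradient saliency: pixel importance = magnitude of
# d(class score) / d(pixel). Generic illustration, not CAMERAS.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in preprocessed image
logits = model(x)
c = logits.argmax(dim=1).item()   # explain the top-1 predicted class
logits[0, c].backward()           # backpropagate the class score to the pixels

# Per-pixel importance: max absolute gradient over the color channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # (224, 224) map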

Opti-CAM: Optimizing saliency maps for interpretability

H Zhang, F Torres, R Sicre, Y Avrithis… - Computer Vision and …, 2024 - Elsevier
Methods based on class activation maps (CAM) provide a simple mechanism to interpret
predictions of convolutional neural networks by using linear combinations of feature maps …
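
The snippet states the core CAM mechanism: a class-specific linear combination of the last convolutional feature maps. Below is a minimal sketch of classic CAM (Zhou et al., 2016), assuming a torchvision ResNet-50 whose final linear layer follows global average pooling; Opti-CAM itself instead optimizes the combination weights per image, which this sketch does not do.

import torch
import torch.nn.functional as F
from torchvision import models

# Classic CAM: weight the last conv feature maps by the classifier weights
# of the target class. Illustration of the CAM family, not Opti-CAM.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

features = {}
model.layer4.register_forward_hook(lambda m, i, o: features.update(maps=o))

x = torch.randn(1, 3, 224, 224)            # stand-in preprocessed image
logits = model(x)
c = logits.argmax(dim=1).item()            # explain the top-1 predicted class

w = model.fc.weight[c]                     # (2048,) class weights per channel
cam = torch.einsum("k,kij->ij", w, features["maps"][0])   # linear combination
cam = F.relu(cam)                          # keep positive evidence only
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]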

Attack to fool and explain deep networks

N Akhtar, MAAK Jalwana… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Deep visual models are susceptible to adversarial perturbations to inputs. Although these
signals are carefully crafted, they still appear as noise-like patterns to humans. This observation …

Texture-based latent space disentanglement for enhancement of a training dataset for ANN-based classification of fruit and vegetables

K Hameed, D Chai, A Rassau - Information Processing in Agriculture, 2023 - Elsevier
The capability of Convolutional Neural Networks (CNNs) for sparse representation
has significant application to complex tasks like Representation Learning (RL). However …

Adversarial attack using sparse representation of feature maps

M Jahangir, F Shafait - IEEE Access, 2022 - ieeexplore.ieee.org
Deep neural networks can be fooled by small, imperceptible perturbations called adversarial
examples. Although these examples are carefully crafted, they involve two major concerns …
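
For a concrete sense of how such a perturbation is computed, below is a minimal sketch of the classic fast gradient sign method (FGSM, Goodfellow et al., 2015). It is a generic one-step attack, not the sparse-feature-map attack this paper proposes, and it assumes a torchvision ResNet-18, a stand-in image in [0, 1], and an arbitrary label.

import torch
import torch.nn.functional as F
from torchvision import models

# FGSM: a single signed-gradient ascent step on the loss w.r.t. the input.
# Generic illustration only; not the attack from the paper cited above.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in image in [0, 1]
y = torch.tensor([0])                               # assumed ground-truth label

loss = F.cross_entropy(model(x), y)
loss.backward()

eps = 8 / 255                                           # L-infinity budget (assumed)
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()  # adversarial example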

Orthogonal deep models as defense against black-box attacks

MAAK Jalwana, N Akhtar, M Bennamoun… - IEEE Access, 2020 - ieeexplore.ieee.org
Deep learning has demonstrated state-of-the-art performance for a variety of challenging
computer vision tasks. On one hand, this has enabled deep visual models to pave the way …

Transferable 3D Adversarial Textures using End-to-end Optimization

C Pestana, N Akhtar, N Rahnavard… - Proceedings of the …, 2022 - openaccess.thecvf.com
Deep visual models are known to be vulnerable to adversarial attacks. The last few years
have seen numerous techniques to compute adversarial inputs for these models. However …

Saliency Maps Give a False Sense of Explanability to Image Classifiers: An Empirical Evaluation across Methods and Metrics

H Zhang, FT Figueroa, H Hermanns - The 16th Asian Conference …, 2024 - openreview.net
The interpretability of deep neural networks (DNNs) has emerged as a crucial area of
research, particularly in image classification tasks where decisions often lack transparency …