Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Adversarial attacks and defenses in machine learning-empowered communication systems and networks: A contemporary survey

Y Wang, T Sun, S Li, X Yuan, W Ni… - … Surveys & Tutorials, 2023 - ieeexplore.ieee.org
Adversarial attacks and defenses in machine learning and deep neural network (DNN) have
been gaining significant attention due to the rapidly growing applications of deep learning in …

Reflection backdoor: A natural backdoor attack on deep neural networks

Y Liu, X Ma, J Bailey, F Lu - Computer Vision–ECCV 2020: 16th European …, 2020 - Springer
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …

Neural attention distillation: Erasing backdoor triggers from deep neural networks

Y Li, X Lyu, N Koren, L Lyu, B Li, X Ma - arXiv preprint arXiv:2101.05930, 2021 - arxiv.org
Deep neural networks (DNNs) are known vulnerable to backdoor attacks, a training time
attack that injects a trigger pattern into a small proportion of training data so as to control the …

Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon

Y Zhong, X Liu, D Zhai, J Jiang… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Estimating the risk level of adversarial examples is essential for safely deploying machine
learning models in the real world. One popular approach for physical-world attacks is to …

Naturalistic physical adversarial patch for object detectors

YCT Hu, BH Kung, DS Tan, JC Chen… - Proceedings of the …, 2021 - openaccess.thecvf.com
Most prior works on physical adversarial attacks mainly focus on the attack performance but
seldom enforce any restrictions over the appearance of the generated adversarial patches …

Physical attack on monocular depth estimation with optimal adversarial patches

Z Cheng, J Liang, H Choi, G Tao, Z Cao, D Liu… - European conference on …, 2022 - Springer
Deep learning has substantially boosted the performance of Monocular Depth Estimation
(MDE), a critical component in fully vision-based autonomous driving (AD) systems (eg …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Dual attention suppression attack: Generate adversarial camouflage in physical world

J Wang, A Liu, Z Yin, S Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Deep learning models are vulnerable to adversarial examples. As a more threatening type
for practical deep learning systems, physical adversarial examples have received extensive …

Exploring architectural ingredients of adversarially robust deep neural networks

H Huang, Y Wang, S Erfani, Q Gu… - Advances in Neural …, 2021 - proceedings.neurips.cc
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. A range of
defense methods have been proposed to train adversarially robust DNNs, among which …