Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Resilient machine learning for networked cyber physical systems: A survey for machine learning security to securing machine learning for CPS

FO Olowononi, DB Rawat, C Liu - … Communications Surveys & …, 2020 - ieeexplore.ieee.org
Cyber Physical Systems (CPS) are characterized by their ability to integrate the physical and
information (cyber) worlds. Their deployment in critical infrastructure has demonstrated a …

Enhancing the transferability of adversarial attacks through variance tuning

X Wang, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with
imperceptible perturbations. Though adversarial attacks have achieved incredible success …
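
The snippet above only names the idea of variance tuning, so the following is a minimal PyTorch sketch of a variance-tuned momentum attack as described at a high level in the paper, not the authors' released code; model, loss_fn, the NCHW input assumption, and the hyperparameter defaults are placeholder assumptions:

import torch

def vmi_fgsm(model, loss_fn, x, y, eps=16/255, steps=10, mu=1.0, n_samples=20, beta=1.5):
    # Momentum iterative FGSM whose update is tuned by the variance of gradients
    # sampled in a neighbourhood of the current adversarial example
    # (a sketch: inputs assumed in [0, 1], batched as NCHW, untargeted loss).
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)   # accumulated momentum
    v = torch.zeros_like(x)   # variance term carried over from the previous step

    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]

        # Tune the momentum direction with the previous variance estimate.
        g = mu * g + (grad + v) / (grad + v).abs().sum(dim=(1, 2, 3), keepdim=True)

        # Re-estimate the variance from gradients at random points near x_adv.
        v = torch.zeros_like(x)
        for _ in range(n_samples):
            x_near = (x_adv + torch.empty_like(x).uniform_(-beta * eps, beta * eps)).detach()
            x_near.requires_grad_(True)
            v = v + torch.autograd.grad(loss_fn(model(x_near), y), x_near)[0]
        v = v / n_samples - grad.detach()

        x_adv = (x_adv + alpha * g.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

The n_samples extra gradient evaluations per step are the main added cost of this kind of tuning; the payoff reported in the paper is a more stable update direction and better black-box transfer.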

Improving adversarial transferability via neuron attribution-based attacks

J Zhang, W Wu, J Huang, Y Huang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. It is thus
imperative to devise effective attack algorithms to identify the deficiencies of DNNs …

Nesterov accelerated gradient and scale invariance for adversarial attacks

J Lin, C Song, K He, L Wang, JE Hopcroft - arXiv preprint arXiv …, 2019 - arxiv.org
Deep learning models are vulnerable to adversarial examples crafted by applying human-
imperceptible perturbations to benign inputs. However, under the black-box setting, most …
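
Both techniques in the title are simple to state, so here is a minimal PyTorch sketch that combines a Nesterov look-ahead step with scale-invariant gradient averaging, in the same style as the sketch above; again model, loss_fn, and the defaults are assumptions rather than the authors' code:

import torch

def ni_si_fgsm(model, loss_fn, x, y, eps=16/255, steps=10, mu=1.0, m=5):
    # Iterative FGSM with a Nesterov look-ahead and scale-invariant gradient
    # averaging over copies x / 2^i (a sketch; inputs assumed in [0, 1], NCHW).
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # accumulated momentum

    for _ in range(steps):
        # Look ahead in the momentum direction before computing the gradient.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)

        # Average gradients over m exponentially down-scaled copies of the input.
        grad = torch.zeros_like(x)
        for i in range(m):
            grad = grad + torch.autograd.grad(loss_fn(model(x_nes / 2 ** i), y), x_nes)[0]
        grad = grad / m

        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = (x_adv + alpha * g.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv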

Admix: Enhancing the transferability of adversarial attacks

X Wang, X He, J Wang, K He - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Deep neural networks are known to be extremely vulnerable to adversarial examples under
the white-box setting. Moreover, adversarial examples crafted on the surrogate (source) …
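
The core Admix idea, mixing a small portion of images from other categories into the input before taking gradients, can be sketched as a drop-in gradient estimator for an iterative attack such as the ones above; x_other (a batch of images from other classes), the mixing ratio eta, and the copy counts below are illustrative assumptions, not the paper's code:

import torch

def admix_gradient(model, loss_fn, x, y, x_other, eta=0.2, n_scales=5, n_mixes=3):
    # One Admix-style gradient estimate: add a small portion of images drawn
    # from other categories to x, build down-scaled copies of each admixed
    # image, and average the gradients w.r.t. the original input.
    # Assumes x_other holds at least as many images as x.
    x = x.clone().detach().requires_grad_(True)
    grad = torch.zeros_like(x)
    for _ in range(n_mixes):
        # Randomly pair each input with an image from another category.
        x_pick = x_other[torch.randperm(x_other.size(0))[: x.size(0)]]
        for i in range(n_scales):
            mixed = (x + eta * x_pick) / 2 ** i   # admixed, then down-scaled copy
            grad = grad + torch.autograd.grad(loss_fn(model(mixed), y), x)[0]
    return grad / (n_scales * n_mixes)

Plugging this estimate into the momentum update of an iterative attack in place of the plain gradient is how input-transformation attacks of this kind are typically combined; note that the label stays the original y, only the input is mixed.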

A Fourier perspective on model robustness in computer vision

D Yin, R Gontijo Lopes, J Shlens… - Advances in Neural …, 2019 - proceedings.neurips.cc
Achieving robustness to distributional shift is a longstanding and challenging goal of
computer vision. Data augmentation is a commonly used approach for improving …
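
As a rough illustration of the kind of probe used in a Fourier-space robustness analysis, the snippet below builds a single 2-D Fourier basis perturbation of fixed L2 norm that can be added to test images while sweeping over frequencies; the function name and the norm value are illustrative assumptions, not the paper's released tooling:

import torch

def fourier_basis_perturbation(height, width, i, j, eps=4.0):
    # A real-valued perturbation concentrated at spatial frequency (i, j),
    # normalised to L2 norm eps in pixel space (one Fourier-basis probe).
    spectrum = torch.zeros(height, width, dtype=torch.complex64)
    spectrum[i, j] = 1.0
    basis = torch.fft.ifft2(spectrum).real
    basis = basis / basis.norm()
    return eps * basis

# Example probe (hypothetical usage): perturb a batch x and re-evaluate accuracy
# as (i, j) sweeps the frequency grid.
# x_perturbed = (x + fourier_basis_perturbation(224, 224, 10, 10).to(x)).clamp(0, 1)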

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

DeepSweep: An evaluation framework for mitigating DNN backdoor attacks using data augmentation

H Qiu, Y Zeng, S Guo, T Zhang, M Qiu… - Proceedings of the …, 2021 - dl.acm.org
Public resources and services (e.g., datasets, training platforms, pre-trained models) have
been widely adopted to ease the development of Deep Learning-based applications …

Structure invariant transformation for better adversarial transferability

X Wang, Z Zhang, J Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Given the severe vulnerability of Deep Neural Networks (DNNs) against adversarial
examples, there is an urgent need for an effective adversarial attack to identify the …