Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Interpretable deep learning: Interpretation, interpretability, trustworthiness, and beyond

X Li, H Xiong, X Li, X Wu, X Zhang, J Liu, J Bian… - … and Information Systems, 2022 - Springer
Deep neural networks have been well-known for their superb handling of various machine
learning and artificial intelligence tasks. However, due to their over-parameterized black-box …

Do adversarially robust imagenet models transfer better?

H Salman, A Ilyas, L Engstrom… - Advances in Neural …, 2020 - proceedings.neurips.cc
Transfer learning is a widely-used paradigm in deep learning, where models pre-trained on
standard datasets can be efficiently adapted to downstream tasks. Typically, better pre …

Partial success in closing the gap between human and machine vision

R Geirhos, K Narayanappa, B Mitzkus… - Advances in …, 2021 - proceedings.neurips.cc
A few years ago, the first CNN surpassed human performance on ImageNet. However, it
soon became clear that machines lack robustness on more challenging test cases, a major …

Adversarial examples improve image recognition

C Xie, M Tan, B Gong, J Wang… - Proceedings of the …, 2020 - openaccess.thecvf.com
Adversarial examples are commonly viewed as a threat to ConvNets. Here we present an
opposite perspective: adversarial examples can be used to improve image recognition …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

X-Adv: Physical adversarial object attacks against x-ray prohibited item detection

A Liu, J Guo, J Wang, S Liang, R Tao, W Zhou… - 32nd USENIX Security …, 2023 - usenix.org
Adversarial attacks are valuable for evaluating the robustness of deep learning models.
Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture …

A comprehensive study on robustness of image classification models: Benchmarking and rethinking

C Liu, Y Dong, W Xiang, X Yang, H Su, J Zhu… - International Journal of …, 2024 - Springer
The robustness of deep neural networks is frequently compromised when faced with
adversarial examples, common corruptions, and distribution shifts, posing a significant …

Revisiting adversarial robustness distillation: Robust soft labels make student better

B Zi, S Zhao, X Ma, YG Jiang - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Adversarial training is one effective approach for training robust deep neural networks
against adversarial attacks. While being able to bring reliable robustness, adversarial …

Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …