Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

How to certify machine learning based safety-critical systems? A systematic literature review

F Tambon, G Laberge, L An, A Nikanjam… - Automated Software …, 2022 - Springer
Context: Machine Learning (ML) has been at the heart of many innovations over the
past years. However, including it in so-called “safety-critical” systems such as automotive or …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

A survey on learning to reject

XY Zhang, GS Xie, X Li, T Mei… - Proceedings of the IEEE, 2023 - ieeexplore.ieee.org
Learning to reject is a special kind of self-awareness (the ability to know what you do not
know), which is an essential factor for humans to become smarter. Although machine …

Robust feature learning for adversarial defense via hierarchical feature alignment

X Zhang, J Wang, T Wang, R Jiang, J Xu, L Zhao - Information Sciences, 2021 - Elsevier
Deep neural networks have demonstrated excellent performance in most computer vision
tasks in recent years. However, they are vulnerable to adversarial perturbations generated …

MT3: Meta test-time training for self-supervised test-time adaption

A Bartler, A Bühler, F Wiewel… - International …, 2022 - proceedings.mlr.press
An unresolved problem in Deep Learning is the ability of neural networks to cope with
domain shifts during test-time, imposed by commonly fixing network parameters after …

Adversarial robustness via random projection filters

M Dong, C Xu - Proceedings of the IEEE/CVF Conference …, 2023 - openaccess.thecvf.com
Deep Neural Networks show superior performance in various tasks but are
vulnerable to adversarial attacks. Most defense techniques are devoted to the adversarial …

Adversarial attacks and defenses using feature-space stochasticity

J Ukita, K Ohki - Neural Networks, 2023 - Elsevier
Recent studies in deep neural networks have shown that injecting random noise in the input
layer of the networks contributes towards ℓp-norm-bounded adversarial perturbations …
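
The noise-injection idea referenced in this snippet can be sketched with a toy model; a minimal illustration, assuming a hypothetical two-class linear classifier and illustrative names (`predict_noisy`, `W`) that are not from the paper:

```python
import numpy as np

# Illustrative sketch of input-layer noise injection as a randomized
# defense (a randomized-smoothing flavor). The linear classifier and all
# constants below are assumptions for demonstration, not the paper's model.

rng = np.random.default_rng(0)
W = np.array([[1.0, -1.0], [-1.0, 1.0]])  # toy 2-class linear weights

def predict(x):
    """Deterministic prediction of the toy linear model."""
    return int(np.argmax(W @ x))

def predict_noisy(x, sigma=0.1, n_samples=100):
    """Majority vote over Gaussian perturbations added at the input layer."""
    votes = np.zeros(2)
    for _ in range(n_samples):
        votes[predict(x + sigma * rng.normal(size=x.shape))] += 1
    return int(np.argmax(votes))

x = np.array([0.5, -0.5])
clean_pred = predict(x)            # confidently class 0
smoothed_pred = predict_noisy(x)   # small noise rarely flips a confident point
```

The vote over perturbed copies makes the decision depend on a neighborhood of the input rather than a single point, which is one intuition for why stochasticity can blunt small ℓp-bounded perturbations.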

Push & pull: Transferable adversarial examples with attentive attack

L Gao, Z Huang, J Song, Y Yang… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Targeted attack aims to mislead the classification model to a specific class, and it can be
further divided into black-box and white-box targeted attack depending on whether the …

A simple fine-tuning is all you need: Towards robust deep learning via adversarial fine-tuning

A Jeddi, MJ Shafiee, A Wong - arXiv preprint arXiv:2012.13628, 2020 - arxiv.org
Adversarial Training (AT) with Projected Gradient Descent (PGD) is an effective approach for
improving the robustness of deep neural networks. However, PGD AT has been shown …
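
As a rough sketch of the PGD AT loop this snippet refers to (not the authors' fine-tuning procedure): an inner maximization crafts a perturbation within an ε-ball, and the outer minimization updates the weights on that perturbed example. The toy logistic-regression model, analytic gradients, and all hyperparameters below are illustrative:

```python
import numpy as np

# Hedged sketch of PGD adversarial training on a toy logistic-regression
# model with analytic gradients; hyperparameters are illustrative only.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(w, x, y, eps=0.3, alpha=0.1, steps=5):
    """Inner maximization: L-inf PGD around x against label y in {0, 1}."""
    x_adv = x.copy()
    for _ in range(steps):
        # d(logistic loss)/dx = (p - y) * w for this model
        grad_x = (sigmoid(w @ x_adv) - y) * w
        x_adv = x_adv + alpha * np.sign(grad_x)     # ascend the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project back to eps-ball
    return x_adv

def adversarial_train(X, y, epochs=200, lr=0.5):
    """Outer minimization: SGD on PGD-perturbed examples."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_adv = pgd_attack(w, xi, yi)
            w -= lr * (sigmoid(w @ x_adv) - yi) * x_adv
    return w

# Two well-separated 2-D clusters as a toy dataset.
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
w = adversarial_train(X, y)
clean_acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Adversarial fine-tuning, as surveyed here, typically starts the outer loop from a pretrained model rather than from scratch, trading some of the cost of full PGD AT for a shorter robust-training phase.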