Advances in adversarial attacks and defenses in computer vision: A survey
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …
How to certify machine learning based safety-critical systems? A systematic literature review
Abstract Context Machine Learning (ML) has been at the heart of many innovations over the
past years. However, including it in so-called “safety-critical” systems such as automotive or …
Threat of adversarial attacks on deep learning in computer vision: A survey
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …
A survey on learning to reject
Learning to reject is a special kind of self-awareness (the ability to know what you do not
know), which is an essential factor for humans to become smarter. Although machine …
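The snippet above describes learning to reject: letting a model abstain when it is not confident enough to answer. A minimal confidence-threshold sketch of this idea follows; the `REJECT` sentinel and the threshold value are illustrative assumptions, not details from the surveyed paper.

```python
import numpy as np

REJECT = -1  # illustrative sentinel for "abstain"

def predict_with_reject(probs, threshold=0.8):
    """Return the argmax class per row, or REJECT when the top
    probability falls below the confidence threshold (abstain
    rather than guess)."""
    probs = np.asarray(probs)
    top = probs.argmax(axis=-1)
    conf = probs.max(axis=-1)
    return np.where(conf >= threshold, top, REJECT)

# A confident row keeps its predicted class; an ambiguous row abstains.
print(predict_with_reject([[0.9, 0.1], [0.5, 0.5]]))  # → [ 0 -1]
```

Real learning-to-reject methods train the rejector jointly with the classifier; the fixed threshold here is only the simplest instance of the idea.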
Robust feature learning for adversarial defense via hierarchical feature alignment
Deep neural networks have demonstrated excellent performance in most computer vision
tasks in recent years. However, they are vulnerable to adversarial perturbations generated …
MT3: Meta test-time training for self-supervised test-time adaption
An unresolved problem in Deep Learning is the ability of neural networks to cope with
domain shifts during test-time, imposed by commonly fixing network parameters after …
Adversarial robustness via random projection filters
Abstract Deep Neural Networks show superior performance in various tasks but are
vulnerable to adversarial attacks. Most defense techniques are devoted to the adversarial …
Adversarial attacks and defenses using feature-space stochasticity
J Ukita, K Ohki - Neural Networks, 2023 - Elsevier
Recent studies in deep neural networks have shown that injecting random noise in the input
layer of the networks contributes towards ℓp-norm-bounded adversarial perturbations …
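The snippet above concerns injecting random noise into a network's input as a defense. A generic sketch of that idea is to average the classifier's output over Gaussian perturbations of the input (as in randomized-smoothing-style defenses); the function names and parameters here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def smoothed_predict(predict_fn, x, sigma=0.25, n_samples=100, rng=None):
    """Average a classifier's probability output over Gaussian input
    noise: small adversarial perturbations of x are partially washed
    out by the noise before they reach the model."""
    rng = np.random.default_rng(0) if rng is None else rng
    samples = [predict_fn(x + rng.normal(0.0, sigma, size=x.shape))
               for _ in range(n_samples)]
    return np.stack(samples).mean(axis=0)

# Toy stand-in classifier: returns fixed class probabilities, so the
# noise-averaged prediction recovers the same distribution.
probs = smoothed_predict(lambda z: np.array([0.7, 0.3]), np.zeros(4))
print(probs)  # → [0.7 0.3]
```

The surveyed work studies noise in feature space rather than only the input layer; this sketch shows the input-layer special case the snippet mentions.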
Push & pull: Transferable adversarial examples with attentive attack
L Gao, Z Huang, J Song, Y Yang… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Targeted attack aims to mislead the classification model to a specific class, and it can be
further divided into black-box and white-box targeted attack depending on whether the …
A simple fine-tuning is all you need: Towards robust deep learning via adversarial fine-tuning
Adversarial Training (AT) with Projected Gradient Descent (PGD) is an effective approach for
improving the robustness of the deep neural networks. However, PGD AT has been shown …
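The PGD attack underlying the adversarial training mentioned above can be sketched in a few lines: repeatedly step in the sign of the loss gradient and project back into an ℓ∞ ball around the clean input. This is a minimal NumPy illustration with an analytic toy gradient; `grad_fn` and all hyperparameter values are assumptions for the sketch, not the paper's setup.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent (l_inf): take signed ascent steps on
    the loss and clip back into the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # gradient of loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)        # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the l_inf ball
    return x_adv

# Toy loss L(x) = ||x - t||^2 with gradient 2(x - t); maximizing it
# pushes x away from t until the eps boundary is hit.
t = np.ones(3)
x_adv = pgd_attack(np.zeros(3), lambda z: 2.0 * (z - t))
print(x_adv)  # → [-0.1 -0.1 -0.1]
```

In PGD adversarial training, each minibatch is first perturbed this way and the network is then updated on the perturbed examples.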