Generalizing universal adversarial perturbations for deep neural networks
Previous studies have shown that universal adversarial attacks can fool deep neural
networks over a large set of input images with a single human-invisible perturbation …
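For context, the core idea behind these attacks is one perturbation, shared across all inputs, that stays within a small norm budget. The sketch below is illustrative only; the array shapes, variable names, and the l_inf budget are assumptions and are not taken from the paper above.

```python
import numpy as np

def apply_universal_perturbation(images, delta, epsilon=10 / 255):
    """Apply one shared perturbation to every image in a batch.

    images : (N, H, W, C) array with pixel values in [0, 1]
    delta  : (H, W, C) universal perturbation, reused for all N inputs
    epsilon: assumed l_inf budget keeping the perturbation quasi-imperceptible
    """
    delta = np.clip(delta, -epsilon, epsilon)             # enforce the norm constraint
    return np.clip(images + delta[None, ...], 0.0, 1.0)   # stay in the valid pixel range
```

Constructing delta itself (for example, by iteratively aggregating per-image perturbations until most inputs are misclassified) is what the papers listed here study.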
Generalizing universal adversarial attacks beyond additive perturbations
Previous work has shown that universal adversarial attacks can fool deep neural
networks over a large set of input images with a single human-invisible perturbation …
Double targeted universal adversarial perturbations
Despite their impressive performance, deep neural networks (DNNs) are widely known to be
vulnerable to adversarial attacks, which makes it challenging for them to be deployed in …
Learning universal adversarial perturbation by adversarial example
Deep learning models have been shown to be susceptible to universal adversarial perturbation
(UAP), which has raised wide concern in the community. Compared with the …
Data-free adversarial perturbations for practical black-box attack
Neural networks are vulnerable to adversarial examples, which are malicious inputs crafted
to fool pre-trained models. Adversarial examples often exhibit black-box attacking …
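The "black-box attacking" property referred to here is transferability: perturbations crafted on one model often fool another model never queried during the attack. A minimal sketch of that setup, assuming a PyTorch classifier as the white-box surrogate and using plain FGSM rather than the data-free method of the paper above:

```python
import torch
import torch.nn.functional as F

def craft_on_surrogate(surrogate, images, labels, epsilon=8 / 255):
    """FGSM perturbations computed on a known surrogate model.

    The resulting examples are then fed to a separate, unseen target model to
    estimate black-box (transfer) success. `surrogate`, `images`, and `labels`
    are assumed: a classifier and a batch of inputs scaled to [0, 1].
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    return (images + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

# Transfer rate against a hypothetical target model:
# adv = craft_on_surrogate(surrogate, x, y)
# success = (target(adv).argmax(dim=1) != y).float().mean()
```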
Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning
Strong adversarial examples are crucial for evaluating and enhancing the robustness of
deep neural networks. However, the performance of popular attacks is usually sensitive to …
Ensemble adversarial training: Attacks and defenses
Adversarial examples are perturbed inputs designed to fool machine learning models.
Adversarial training injects such examples into training data to increase robustness. To …
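As a rough illustration of the adversarial-training recipe this snippet describes: craft adversarial examples against the current model and include them in the training update. The sketch below is a single-model FGSM variant, not the ensemble scheme of the paper above; the model, optimizer, pixel range, and 50/50 loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=8 / 255):
    """One update on a mix of clean and FGSM-perturbed inputs."""
    # Craft adversarial examples against the current model parameters.
    x_adv = images.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), labels), x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    # Standard supervised update on the clean and adversarial batches.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(x_adv), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```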
Learning transferable adversarial perturbations
K Kanth Nakka, M Salzmann - Advances in Neural Information …, 2021 - openreview.net
While effective, deep neural networks (DNNs) are vulnerable to adversarial attacks. In
particular, recent work has shown that such attacks could be generated by another deep …
Improving transferability of universal adversarial perturbation with feature disruption
Deep neural networks (DNNs) have been shown to be vulnerable to universal adversarial
perturbations (UAPs): a single quasi-imperceptible perturbation that deceives the DNNs on …
Improved adversarial robustness by reducing open space risk via tent activations
Adversarial examples contain small perturbations that can remain imperceptible to human
observers but alter the behavior of even the best-performing deep learning models and yield …