Generalizing universal adversarial perturbations for deep neural networks

Y Zhang, W Ruan, F Wang, X Huang - Machine Learning, 2023 - Springer
Previous studies have shown that universal adversarial attacks can fool deep neural
networks over a large set of input images with a single human-invisible perturbation …
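
As a rough illustration of the UAP setting described above, the sketch below optimizes one shared L-infinity-bounded perturbation to raise the classification loss across a whole image set; the stand-in model, data, and epsilon budget are assumptions, not details from the paper.

```python
# Minimal UAP sketch: one delta, clipped to an L-inf ball, that raises
# the loss over a whole set of images at once.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
model.eval()

images = torch.rand(64, 3, 32, 32)            # stand-in dataset
labels = torch.randint(0, 10, (64,))
eps = 8 / 255                                  # L-inf budget (assumed)

delta = torch.zeros(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    opt.zero_grad()
    logits = model((images + delta).clamp(0, 1))
    loss = -loss_fn(logits, labels)            # ascend the classification loss
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)                # project back into the budget
```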

Generalizing universal adversarial attacks beyond additive perturbations

Y Zhang, W Ruan, F Wang… - 2020 IEEE International …, 2020 - ieeexplore.ieee.org
Previous studies have shown that universal adversarial attacks can fool deep neural
networks over a large set of input images with a single human-invisible perturbation …
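
The move beyond additive perturbations can be illustrated with a spatial attack: below, a single learned flow field warps every input through grid_sample instead of adding noise. The flow parameterization and the 0.05 bound are illustrative assumptions, not the paper's formulation.

```python
# Non-additive universal attack sketch: a shared displacement field
# warps every image, with no pixel noise added.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.rand(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

# identity sampling grid in [-1, 1]^2, shared across the dataset
base = F.affine_grid(torch.eye(2, 3).unsqueeze(0), [1, 3, 32, 32], align_corners=False)
flow = torch.zeros_like(base, requires_grad=True)   # universal displacement field
opt = torch.optim.Adam([flow], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    opt.zero_grad()
    grid = (base + flow).expand(images.size(0), -1, -1, -1)
    warped = F.grid_sample(images, grid, align_corners=False)
    loss = -loss_fn(model(warped), labels)          # ascend loss via warping alone
    loss.backward()
    opt.step()
    with torch.no_grad():
        flow.clamp_(-0.05, 0.05)                    # keep the warp small (assumed bound)
```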

Double targeted universal adversarial perturbations

P Benz, C Zhang, T Imtiaz… - Proceedings of the …, 2020 - openaccess.thecvf.com
Despite their impressive performance, deep neural networks (DNNs) are widely known to be
vulnerable to adversarial attacks, which makes it challenging for them to be deployed in …
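
A hedged sketch of the double-targeted idea: one perturbation trained so that images of a source class are pushed toward an attacker-chosen target label (the paper's further constraint of leaving other classes unaffected is not modeled here). Model, data, and labels are placeholders.

```python
# Targeted-UAP sketch: descend toward a chosen target class on
# source-class images only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
source_images = torch.rand(64, 3, 32, 32)      # images from the source class (assumed)
target = torch.full((64,), 7)                   # attacker-chosen target label
eps = 8 / 255

delta = torch.zeros(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    opt.zero_grad()
    logits = model((source_images + delta).clamp(0, 1))
    loss = loss_fn(logits, target)              # *descend* toward the target class
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)
```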

Learning universal adversarial perturbation by adversarial example

M Li, Y Yang, K Wei, X Yang, H Huang - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Deep learning models have been shown to be susceptible to universal adversarial perturbation
(UAP), which has aroused wide concerns in the community. Compared with the …
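
One plausible way to read "learning a UAP by adversarial example" is to aggregate per-image attack directions into a single perturbation; the sketch below averages FGSM signs over a batch. This aggregation is illustrative and not necessarily the paper's exact scheme.

```python
# Distill per-image adversarial directions into one universal delta
# by averaging FGSM signs (illustrative aggregation).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.rand(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))
eps = 8 / 255

x = images.clone().requires_grad_(True)
loss = nn.CrossEntropyLoss()(model(x), labels)
loss.backward()
per_image = eps * x.grad.sign()                          # one FGSM step per image
uap = eps * per_image.mean(dim=0, keepdim=True).sign()   # aggregate into one delta
```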

Data-free adversarial perturbations for practical black-box attack

Z Huan, Y Wang, X Zhang, L Shang, C Fu… - Advances in Knowledge …, 2020 - Springer
Neural networks are vulnerable to adversarial examples, which are malicious inputs crafted
to fool pre-trained models. Adversarial examples often exhibit black-box attacking …
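
A data-free attack cannot touch the training set, so one common strategy is to optimize a perturbation on random proxy inputs so that it distorts a surrogate's internal activations. The sketch below assumes that setup; the layer choice and proxy inputs are stand-ins.

```python
# Data-free perturbation sketch: no real images, only random proxies;
# the objective is to distort an internal layer's activations.
import torch
import torch.nn as nn

features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # surrogate's early layers
proxy = torch.rand(32, 3, 32, 32)             # random inputs in place of real data
eps = 8 / 255

delta = torch.zeros(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for _ in range(100):
    opt.zero_grad()
    clean = features(proxy).detach()
    pert = features((proxy + delta).clamp(0, 1))
    loss = -(pert - clean).pow(2).mean()      # ascend feature-space distortion
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)
```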

Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning

Z Fang, R Wang, T Huang… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Strong adversarial examples are crucial for evaluating and enhancing the robustness of
deep neural networks. However, the performance of popular attacks is usually sensitive to …
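
For context only, a generic multi-surrogate PGD step (not the paper's ANDA procedure): averaging the loss over an ensemble of models is a common baseline for transferable attacks. All models and budgets below are placeholders.

```python
# Generic ensemble-gradient PGD: average the loss over several
# surrogate models, then take a signed step within the L-inf ball.
import torch
import torch.nn as nn

models = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(3)]
images = torch.rand(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
eps, alpha = 8 / 255, 2 / 255

x = images.clone()
for _ in range(10):
    x.requires_grad_(True)
    loss = sum(nn.CrossEntropyLoss()(m(x), labels) for m in models) / len(models)
    grad, = torch.autograd.grad(loss, x)
    x = (x.detach() + alpha * grad.sign()).clamp(images - eps, images + eps).clamp(0, 1)
```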

Ensemble adversarial training: Attacks and defenses

F Tramèr, A Kurakin, N Papernot, I Goodfellow… - arXiv preprint arXiv …, 2017 - arxiv.org
Adversarial examples are perturbed inputs designed to fool machine learning models.
Adversarial training injects such examples into training data to increase robustness. To …
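
The adversarial-training loop the abstract describes can be sketched as follows: craft perturbed inputs (here a single FGSM step taken on a separate static model, in the spirit of ensemble adversarial training) and include them in the training loss. The models and data are stand-ins.

```python
# Adversarial-training sketch: mix clean and adversarially perturbed
# inputs in each update; the perturbation comes from a static model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
static = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # pre-trained surrogate (stand-in)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
eps = 8 / 255

for _ in range(100):                             # stand-in training loop
    x = torch.rand(64, 3, 32, 32)
    y = torch.randint(0, 10, (64,))
    x.requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(static(x), y), x)  # attack direction from the static model
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
    opt.zero_grad()
    loss = loss_fn(model(x.detach()), y) + loss_fn(model(x_adv), y)  # clean + adversarial terms
    loss.backward()
    opt.step()
```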

Learning transferable adversarial perturbations

KK Nakka, M Salzmann - Advances in Neural Information …, 2021 - openreview.net
While effective, deep neural networks (DNNs) are vulnerable to adversarial attacks. In
particular, recent work has shown that such attacks could be generated by another deep …
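
The "attack generated by another deep network" setup can be sketched as a small generator that maps an image to a bounded perturbation and is trained to fool a surrogate classifier; the architecture and loss below are placeholder assumptions.

```python
# Generator-based attack sketch: train a small network to emit a
# bounded perturbation that raises a surrogate's loss.
import torch
import torch.nn as nn

surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
generator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
eps = 8 / 255

for _ in range(100):
    x = torch.rand(64, 3, 32, 32)
    y = torch.randint(0, 10, (64,))
    x_adv = (x + eps * generator(x)).clamp(0, 1)   # Tanh keeps the output in [-1, 1]
    loss = -loss_fn(surrogate(x_adv), y)           # train the generator to raise surrogate loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```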

Improving transferability of universal adversarial perturbation with feature disruption

D Wang, W Yao, T Jiang, X Chen - IEEE Transactions on Image …, 2023 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been shown to be vulnerable to the universal adversarial
perturbation (UAP), a single quasi-imperceptible perturbation that deceives the DNNs on …
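
A minimal sketch of a feature-disruption objective, assuming the idea is to push intermediate features of the perturbed input away from the clean ones: here the universal perturbation minimizes cosine similarity at one (assumed) layer rather than attacking the logits directly.

```python
# Feature-disruption UAP sketch: drive mid-layer features of the
# perturbed input away from the clean features (cosine objective).
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # mid-layer extractor (assumed)
images = torch.rand(64, 3, 32, 32)
eps = 8 / 255

delta = torch.zeros(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for _ in range(100):
    opt.zero_grad()
    clean = backbone(images).detach()
    pert = backbone((images + delta).clamp(0, 1))
    loss = F.cosine_similarity(pert.flatten(1), clean.flatten(1)).mean()  # minimize alignment
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)
```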

Improved adversarial robustness by reducing open space risk via tent activations

A Rozsa, TE Boult - arXiv preprint arXiv:1908.02435, 2019 - arxiv.org
Adversarial examples contain small perturbations that can remain imperceptible to human
observers but alter the behavior of even the best-performing deep learning models and yield …
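
A sketch of a tent-shaped activation under the assumption that it takes the triangular form max(0, delta - |x|) with a learnable width delta: the unit is bounded and returns to zero for large inputs, which is what limits open space risk. The paper's exact parameterization may differ.

```python
# Tent activation sketch: a bounded, triangle-shaped unit that falls
# back to zero far from the origin, unlike the unbounded ReLU.
import torch
import torch.nn as nn

class Tent(nn.Module):
    def __init__(self, delta: float = 1.0):
        super().__init__()
        self.delta = nn.Parameter(torch.tensor(delta))  # learnable width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.clamp(self.delta - x.abs(), min=0.0)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), Tent(),
                      nn.Linear(64, 10))
print(model(torch.rand(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```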