Improving the transferability of adversarial samples by path-augmented method

J Zhang, J Huang, W Wang, Y Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks have achieved unprecedented success on diverse vision tasks.
However, they are vulnerable to adversarial noise that is imperceptible to humans. This …

Transferable adversarial attacks on vision transformers with token gradient regularization

J Zhang, Y Huang, W Wu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Vision transformers (ViTs) have been successfully deployed in a variety of computer vision
tasks, but they are still vulnerable to adversarial samples. Transfer-based attacks use a local …

DiffUTE: Universal text editing diffusion model

H Chen, Z Xu, Z Gu, Y Li, C Meng… - Advances in Neural …, 2024 - proceedings.neurips.cc
Diffusion model based language-guided image editing has achieved great success recently.
However, existing state-of-the-art diffusion models struggle with rendering correct text and …

Hierarchical dynamic image harmonization

H Chen, Z Gu, Y Li, J Lan, C Meng, W Wang… - Proceedings of the 31st …, 2023 - dl.acm.org
Image harmonization is a critical task in computer vision, which aims to adjust the
foreground to make it compatible with the background. Recent works mainly focus on using …

Adversarial Training: A Survey

M Zhao, L Zhang, J Ye, H Lu, B Yin, X Wang - arXiv preprint arXiv …, 2024 - arxiv.org
Adversarial training (AT) refers to integrating adversarial examples--inputs altered with
imperceptible perturbations that can significantly impact model predictions--into the training …
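As an aside to this entry's definition of adversarial training, the following is a minimal, generic sketch of an FGSM-based AT step in PyTorch. It illustrates only the general idea of training on perturbed inputs; it is not the method of this survey or of any paper listed here, and `model`, `criterion`, `optimizer`, and the 8/255 budget are assumed placeholders.

```python
# Illustrative sketch only: a generic FGSM-based adversarial training step.
# Assumes a PyTorch classifier `model`, a loss `criterion`, and an `optimizer`
# are already defined; epsilon = 8/255 is an arbitrary example budget.
import torch

def fgsm_perturb(model, criterion, x, y, epsilon=8 / 255):
    """Craft an FGSM adversarial example within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = criterion(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, criterion, optimizer, x, y, epsilon=8 / 255):
    """One training step on adversarial examples: the core idea of AT."""
    model.eval()                       # craft perturbations without updating batch-norm stats
    x_adv = fgsm_perturb(model, criterion, x, y, epsilon)
    model.train()
    optimizer.zero_grad()              # clear gradients accumulated while crafting
    loss = criterion(model(x_adv), y)  # train on the perturbed inputs
    loss.backward()
    optimizer.step()
    return loss.item()
```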

Backpropagation path search on adversarial transferability

Z Xu, Z Gu, J Zhang, S Cui, C Meng… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples, dictating the imperativeness
to test the model's robustness before deployment. Transfer-based attackers craft adversarial …

Towards transferable adversarial attacks on vision transformers for image classification

X Guo, P Chen, Z Lu, H Chai, X Du, X Wu - Journal of Systems Architecture, 2024 - Elsevier
The deployment of high-performance Vision Transformer (ViT) models has garnered
attention from both industry and academia. However, their vulnerability to adversarial …

Edge Detectors Can Make Deep Convolutional Neural Networks More Robust

J Ding, JC Zhao, YZ Sun, P Tan, JW Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep convolutional neural networks (DCNN for short) are vulnerable to examples with small
perturbations. Improving DCNN's robustness is of great significance to the safety-critical …

Improving the Robustness of Deep Convolutional Neural Networks Through Feature Learning

J Ding, JC Zhao, YZ Sun, P Tan, JE Ma… - arXiv preprint arXiv …, 2023 - arxiv.org
Deep convolutional neural network (DCNN for short) models are vulnerable to examples
with small perturbations. Adversarial training (AT for short) is a widely used approach to …

A Patch-wise Adversarial Denoising Could Enhance the Robustness of Adversarial Training

S Zhao, S Liu, B Zhang, Y Zhai, Z Liu… - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
Adversarial examples have demonstrated the vulnerability of machine learning models.
While the data augmentation strategy has been a cornerstone in circumventing overfitting …