A survey on transferability of adversarial examples across deep neural networks

J Gu, X Jia, P de Jorge, W Yu, X Liu, A Ma… - arXiv preprint arXiv …, 2023 - arxiv.org
The emergence of Deep Neural Networks (DNNs) has revolutionized various domains,
enabling the resolution of complex tasks spanning image recognition, natural language …
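
The snippet below is a minimal sketch of the phenomenon this survey covers: an adversarial example crafted against one model (the surrogate) often also fools a different, unseen model (the target). The torchvision model pair, the epsilon budget, and the random stand-in batch are illustrative assumptions, not drawn from the survey.

```python
# Minimal cross-model transferability sketch (illustrative models and budget).
import torch
import torch.nn.functional as F
from torchvision import models

surrogate = models.resnet18(weights="DEFAULT").eval()
target = models.vgg16(weights="DEFAULT").eval()

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(4, 3, 224, 224)            # stand-in batch; use real, normalized images in practice
with torch.no_grad():
    y = surrogate(x).argmax(dim=1)        # surrogate's predicted labels

x_adv = fgsm(surrogate, x, y)             # crafted only on the surrogate

# Transferability: how often the *unseen* target changes its prediction.
with torch.no_grad():
    flipped = (target(x_adv).argmax(1) != target(x).argmax(1)).float().mean()
print(f"target predictions flipped: {flipped:.0%}")
```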

Towards evaluating transfer-based attacks systematically, practically, and fairly

Q Li, Y Guo, W Zuo, H Chen - Advances in Neural …, 2024 - proceedings.neurips.cc
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention due
to the security risk of applying these models in real-world applications. Based on …
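
In the spirit of the paper's call for systematic and fair evaluation, a hypothetical harness might hold the attack, budget, and inputs fixed and report transfer success per target. This sketch reuses fgsm(), surrogate, x, and y from the example above; the target pool is an illustrative assumption, not the paper's protocol.

```python
# Hypothetical evaluation harness: same adversarial batch, per-target success rates.
import torch
from torchvision import models

target_pool = {
    "resnet50": models.resnet50(weights="DEFAULT").eval(),
    "densenet121": models.densenet121(weights="DEFAULT").eval(),
    "mobilenet_v3": models.mobilenet_v3_large(weights="DEFAULT").eval(),
}

x_adv = fgsm(surrogate, x, y)  # identical adversarial batch for every target
with torch.no_grad():
    for name, m in target_pool.items():
        success = (m(x_adv).argmax(1) != m(x).argmax(1)).float().mean()
        print(f"{name}: transfer success {success:.0%}")
```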

Generative watermarking against unauthorized subject-driven image synthesis

Y Ma, Z Zhao, X He, Z Li, M Backes, Y Zhang - arXiv preprint arXiv …, 2023 - arxiv.org
Large text-to-image models have shown remarkable performance in synthesizing high-quality
images. In particular, subject-driven models make it possible to personalize the …

Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models

Y Liu, T Cong, Z Zhao, M Backes, Y Shen… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have led to significant improvements in many tasks across
various domains, such as code interpretation, response generation, and ambiguity handling …

On the Adversarial Transferability of Generalized "Skip Connections"

Y Wang, Y Mo, D Wu, M Li, X Ma, Z Lin - arXiv preprint arXiv:2410.08950, 2024 - arxiv.org
Skip connections are an essential ingredient for making modern deep models deeper and more
powerful. Despite their huge success in normal scenarios (state-of-the-art classification …
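
A well-known predecessor of the idea studied here is the Skip Gradient Method (SGM): during backpropagation, down-weight the gradient through the residual branch so the attack leans on the skip path, which tends to improve transferability. Below is a minimal sketch of that mechanism; the gamma value and the toy block are illustrative choices, not this paper's generalized formulation.

```python
# Skip Gradient Method sketch: attenuate residual-branch gradients by gamma < 1.
import torch
import torch.nn as nn

class ScaleGrad(torch.autograd.Function):
    """Identity in forward; multiplies the gradient by gamma in backward."""
    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return ctx.gamma * grad_out, None

class ResidualBlock(nn.Module):
    def __init__(self, dim, gamma=0.5):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.gamma = gamma

    def forward(self, x):
        # Skip path passes gradients untouched; residual branch is down-weighted.
        return x + ScaleGrad.apply(self.branch(x), self.gamma)

block = ResidualBlock(16)
x = torch.randn(2, 16, requires_grad=True)
block(x).sum().backward()
print(x.grad.shape)  # gradients flow, with the residual branch attenuated
```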

Turn fake into real: Adversarial head turn attacks against deepfake detection

W Wang, Z Zhao, N Sebe, B Lepri - arXiv preprint arXiv:2309.01104, 2023 - arxiv.org
Malicious use of deepfakes leads to serious public concerns and reduces people's trust in
digital media. Although effective deepfake detectors have been proposed, they are …

SoK: Pitfalls in evaluating black-box attacks

F Suya, A Suri, T Zhang, J Hong… - … IEEE Conference on …, 2024 - ieeexplore.ieee.org
Numerous works study black-box attacks on image classifiers, where adversaries generate
adversarial examples against unknown target models without having access to their internal …
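
One attack family such a SoK covers is query-based: the adversary probes the target only through its outputs. Here is a SimBA-style random-search sketch under stated assumptions (a single image in [0, 1], an integer true label, a model returning logits); the step size and query budget are illustrative, and the pixel sampling is a simplification of SimBA's orthonormal directions.

```python
# SimBA-style query attack sketch: keep a random pixel step whenever it lowers
# the model's confidence in the true class.
import torch

@torch.no_grad()
def simba(model, x, y, eps=0.2, queries=500):
    """x: (1, C, H, W) image in [0, 1]; y: int true label."""
    x_adv = x.clone()
    p = model(x_adv).softmax(1)[0, y]           # confidence in the true label
    for _ in range(queries):
        delta = torch.zeros_like(x_adv)
        idx = tuple(torch.randint(s, (1,)).item() for s in x_adv.shape)
        for sign in (eps, -eps):
            delta[idx] = sign
            cand = (x_adv + delta).clamp(0, 1)
            p_new = model(cand).softmax(1)[0, y]
            if p_new < p:                       # keep the step if confidence drops
                x_adv, p = cand, p_new
                break
    return x_adv
```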

TransMix: Crafting highly transferable adversarial examples to evade face recognition models

YM Khedr, X Liu, K He - Image and Vision Computing, 2024 - Elsevier
The main challenge in deceiving face recognition (FR) models lies in the target model being
unknown under the black-box setting. Existing works seek to generate adversarial examples to
improve the …

NeRFail: Neural Radiance Fields-Based Multiview Adversarial Attack

W Jiang, H Zhang, X Wang, Z Guo… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Adversarial attacks, i.e., generating adversarial perturbations with a small magnitude to
deceive deep neural networks, are important for investigating and improving model …
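
The paper's NeRF-based multiview attack is more involved, but the "small magnitude" notion the snippet defines is conventionally an L-infinity epsilon-ball; below is a standard PGD sketch of that per-image notion (not the paper's method), with illustrative epsilon, step size, and iteration count.

```python
# Standard L-infinity PGD sketch: signed gradient steps, projected into an
# eps-ball around the clean input and into the valid pixel range.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)  # project
    return x_adv

# Usage: x_adv = pgd(model, images, labels)
```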

Boosting the adversarial transferability of surrogate models with dark knowledge

D Yang, Z Xiao, W Yu - 2023 IEEE 35th International …, 2023 - ieeexplore.ieee.org
Deep neural networks (DNNs) are vulnerable to adversarial examples. Moreover, adversarial
examples exhibit transferability, meaning that an adversarial example crafted for one DNN model …
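
"Dark knowledge" conventionally refers to the inter-class similarity carried by a teacher's temperature-softened outputs. A hedged sketch of refining a surrogate on such soft labels follows; the model pair, temperature, and single toy step are illustrative assumptions, not the paper's training recipe.

```python
# Distillation-style sketch: fit a surrogate to a teacher's soft outputs so its
# decision surface encodes inter-class similarity ("dark knowledge").
import torch
import torch.nn.functional as F
from torchvision import models

teacher = models.resnet50(weights="DEFAULT").eval()
student = models.resnet18(weights="DEFAULT").train()   # the surrogate being refined
opt = torch.optim.SGD(student.parameters(), lr=1e-3)
T = 4.0                                                # temperature exposing dark knowledge

x = torch.rand(4, 3, 224, 224)                         # stand-in batch
with torch.no_grad():
    soft_targets = F.softmax(teacher(x) / T, dim=1)

loss = F.kl_div(F.log_softmax(student(x) / T, dim=1),
                soft_targets, reduction="batchmean") * T * T
opt.zero_grad(); loss.backward(); opt.step()
```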