A survey on transferability of adversarial examples across deep neural networks
The emergence of Deep Neural Networks (DNNs) has revolutionized various domains,
enabling the resolution of complex tasks spanning image recognition, natural language …
Towards evaluating transfer-based attacks systematically, practically, and fairly
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention due
to the security risk of applying these models in real-world applications. Based on …
Generative watermarking against unauthorized subject-driven image synthesis
Large text-to-image models have shown remarkable performance in synthesizing high-
quality images. In particular, the subject-driven model makes it possible to personalize the …
Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models
Large Language Models (LLMs) have led to significant improvements in many tasks across
various domains, such as code interpretation, response generation, and ambiguity handling …
On the Adversarial Transferability of Generalized "Skip Connections"
Skip connection is an essential ingredient for modern deep models to be deeper and more
powerful. Despite their huge success in normal scenarios (state-of-the-art classification …
Turn fake into real: Adversarial head turn attacks against deepfake detection
Malicious use of deepfakes leads to serious public concerns and reduces people's trust in
digital media. Although effective deepfake detectors have been proposed, they are …
SoK: Pitfalls in evaluating black-box attacks
Numerous works study black-box attacks on image classifiers, where adversaries generate
adversarial examples against unknown target models without having access to their internal …
TransMix: Crafting highly transferable adversarial examples to evade face recognition models
YM Khedr, X Liu, K He - Image and Vision Computing, 2024 - Elsevier
The main challenge in deceiving face recognition (FR) models lies in the target model under
the black-box setting. Existing works seek to generate adversarial examples to improve the …
NeRFail: Neural Radiance Fields-Based Multiview Adversarial Attack
Adversarial attacks, i.e., generating adversarial perturbations with a small magnitude to
deceive deep neural networks, are important for investigating and improving model …
Boosting the adversarial transferability of surrogate models with dark knowledge
Deep neural networks (DNNs) are vulnerable to adversarial examples. Moreover, adversarial
examples exhibit transferability, which means that an adversarial example for a DNN model …