Backdoor learning: A survey
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Domain watermark: Effective and harmless dataset copyright protection is closed at hand
The prosperity of deep neural networks (DNNs) has largely benefited from open-source
datasets, on which users can evaluate and improve their methods. In this paper, we …
Untargeted backdoor watermark: Towards harmless and stealthy dataset copyright protection
Y Li, Y Bai, Y Jiang, Y Yang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the
rapid development of DNNs has largely benefited from high-quality (open-source) datasets …
Label poisoning is all you need
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …
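The backdoor setup described in this abstract lends itself to a brief illustration. The Python/NumPy snippet below is a minimal sketch under assumed conventions: the function name poison_dataset, the square corner trigger, and all parameters are illustrative choices, not details taken from any of the papers listed here. It stamps a small trigger patch onto a random fraction of training images and relabels those samples to an attacker-chosen target class.

import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05,
                   trigger_size=3, trigger_value=1.0, seed=0):
    """Return poisoned copies of (images, labels).

    A square trigger of side `trigger_size` is stamped into the bottom-right
    corner of a random `poison_rate` fraction of the images, and those
    samples are relabeled to `target_class`.
    images: float array of shape (N, H, W, C) with values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp the trigger patch into the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:, :] = trigger_value
    # Relabel the poisoned samples to the attacker-chosen target class.
    labels[idx] = target_class
    return images, labels

# Example usage with random stand-in data.
if __name__ == "__main__":
    X = np.random.rand(1000, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    X_p, y_p = poison_dataset(X, y, target_class=7, poison_rate=0.05)
    print("relabeled samples:", int((y_p != y).sum()))

Training on the poisoned dataset implants the backdoor: the model behaves normally on clean inputs, but images carrying the same trigger tend to be classified as the attacker-defined target class.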
A survey of neural trojan attacks and defenses in deep learning
Artificial Intelligence (AI) relies heavily on deep learning, a technology that is becoming
increasingly popular in real-life applications of AI, even in the safety-critical and high-risk …
Badclip: Dual-embedding guided backdoor attack on multimodal contrastive learning
While existing backdoor attacks have successfully infected multimodal contrastive learning
models such as CLIP, they can be easily countered by specialized backdoor defenses for …
What can discriminator do? towards box-free ownership verification of generative adversarial networks
In recent decades, the Generative Adversarial Network (GAN) and its variants have
achieved unprecedented success in image synthesis. However, well-trained GANs are …
Coprotector: Protect open-source code against unauthorized training usage with data poisoning
GitHub Copilot, trained on billions of lines of public code, has recently become a buzzword
in the computer science research and practice community. Although it is designed to help …
Defending against model stealing via verifying embedded external features
Obtaining a well-trained model involves expensive data collection and training procedures;
the model is therefore valuable intellectual property. Recent studies have revealed that …
Promptcare: Prompt copyright protection by watermark injection and verification
Large language models (LLMs) have witnessed a meteoric rise in popularity among the
general public over the past few months, facilitating diverse downstream tasks with …