Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
A backdoor attack aims to embed hidden backdoors into deep neural networks (DNNs) so that the attacked models perform well on benign samples, whereas their predictions will be …
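
To make the mechanism concrete (this sketch is mine, not the survey's): in the canonical BadNets-style attack, the adversary stamps a small trigger patch onto a fraction of the training images and relabels them with a target class, so the trained model behaves normally on clean inputs but predicts the target whenever the patch appears. The patch placement, size, and poisoning rate below are illustrative choices.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, patch_size=3):
    """BadNets-style poisoning: stamp a white corner patch (the trigger)
    onto a random subset of images and relabel them as the target class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = 1.0  # trigger in bottom-right corner
    labels[idx] = target_label
    return images, labels, idx
```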

Domain watermark: Effective and harmless dataset copyright protection is closed at hand

J Guo, Y Li, L Wang, ST Xia… - Advances in Neural …, 2024 - proceedings.neurips.cc
The prosperity of deep neural networks (DNNs) owes much to open-source datasets, against which users can evaluate and improve their methods. In this paper, we …

Untargeted backdoor watermark: Towards harmless and stealthy dataset copyright protection

Y Li, Y Bai, Y Jiang, Y Yang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the rapid development of DNNs has benefited greatly from high-quality (open-source) datasets …
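
The verification side of such dataset watermarks can be sketched as a paired hypothesis test: a model trained on the protected dataset should lose accuracy on trigger-stamped inputs while staying accurate on benign ones. The `trigger_fn` and black-box `model_predict` below are hypothetical placeholders, and the paper's actual test is more careful than this simplification.

```python
import numpy as np
from scipy import stats

def verify_ownership(model_predict, x, y, trigger_fn, alpha=0.01):
    """Flag a suspect model if its accuracy drops significantly on
    watermarked (trigger-stamped) copies of benign samples."""
    correct_benign = (model_predict(x) == y).astype(float)
    correct_trigger = (model_predict(trigger_fn(x)) == y).astype(float)
    # One-sided paired t-test: benign accuracy > triggered accuracy.
    _, p_value = stats.ttest_rel(correct_benign, correct_trigger,
                                 alternative='greater')
    return p_value < alpha, p_value
```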

Label poisoning is all you need

R Jha, J Hayase, S Oh - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in
order to gain control over its predictions on images with a specific attacker-defined trigger. A …
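
A simplified caricature of a label-only attack (the paper selects which labels to flip far more cleverly, via distillation from an expert model trained with the trigger; the confidence heuristic here is purely illustrative):

```python
import numpy as np

def flip_labels(clean_model_scores, labels, target_label, budget):
    """Corrupt only labels, never images: relabel the `budget` samples the
    clean model is least confident about as the attacker's target class."""
    confidence = clean_model_scores[np.arange(len(labels)), labels]
    idx = np.argsort(confidence)[:budget]  # least-confident samples first
    poisoned = labels.copy()
    poisoned[idx] = target_label
    return poisoned, idx
```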

A survey of neural trojan attacks and defenses in deep learning

J Wang, GM Hassan, N Akhtar - arXiv preprint arXiv:2202.07183, 2022 - arxiv.org
Artificial Intelligence (AI) relies heavily on deep learning, a technology that is becoming increasingly popular in real-life applications of AI, even in the safety-critical and high-risk …

BadCLIP: Dual-embedding guided backdoor attack on multimodal contrastive learning

S Liang, M Zhu, A Liu, B Wu, X Cao… - Proceedings of the …, 2024 - openaccess.thecvf.com
While existing backdoor attacks have successfully infected multimodal contrastive learning models such as CLIP, they can be easily countered by specialized backdoor defenses for …
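
One plausible reading of trigger optimization against a CLIP-like model (my sketch, not the authors' code): learn a patch that pulls image embeddings toward the target caption's text embedding. BadCLIP's dual-embedding guidance additionally keeps poisoned embeddings close to natural ones, which is omitted here; `image_encoder` and `text_target_emb` are assumed given.

```python
import torch
import torch.nn.functional as F

def optimize_trigger(image_encoder, text_target_emb, images,
                     steps=200, lr=0.01, patch=8):
    """Learn a corner patch whose presence aligns image embeddings with
    the target caption's text embedding."""
    delta = torch.zeros(1, 3, patch, patch, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = F.normalize(text_target_emb, dim=-1)
    for _ in range(steps):
        stamped = images.clone()
        stamped[:, :, -patch:, -patch:] = torch.sigmoid(delta)  # valid pixel range
        emb = F.normalize(image_encoder(stamped), dim=-1)
        loss = -(emb @ target.T).mean()  # maximize cosine similarity to target
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(delta).detach()
```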

What can discriminator do? Towards box-free ownership verification of generative adversarial networks

Z Huang, B Li, Y Cai, R Wang, S Guo… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent decades, the Generative Adversarial Network (GAN) and its variants have achieved unprecedented success in image synthesis. However, well-trained GANs are …

CoProtector: Protect open-source code against unauthorized training usage with data poisoning

Z Sun, X Du, F Song, M Ni, L Li - … of the ACM Web Conference 2022, 2022 - dl.acm.org
GitHub Copilot, trained on billions of lines of public code, has recently become a buzzword in the computer science research and practice community. Although it is designed to help …
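
As an illustration of the general idea (CoProtector itself combines untargeted poisoning with targeted watermark backdoors; the trigger strings below are hypothetical), the owner can plant a paired comment/statement watermark in each protected file, so that a model trained on the corpus learns to co-generate the pair and can later be probed for it:

```python
TRIGGER_COMMENT = "# protected: do-not-train"  # hypothetical trigger text
WATERMARK_STMT = "wm_signal = 0xC0DE"          # hypothetical paired feature

def protect_source(code: str) -> str:
    """Plant a paired comment/statement watermark into a source file."""
    lines = code.splitlines()
    lines.insert(0, TRIGGER_COMMENT)
    lines.append(WATERMARK_STMT)
    return "\n".join(lines) + "\n"
```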

Defending against model stealing via verifying embedded external features

Y Li, L Zhu, X Jia, Y Jiang, ST Xia, X Cao - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Obtaining a well-trained model involves expensive data collection and training procedures; the model is therefore valuable intellectual property. Recent studies have revealed that …
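
Under my reading of the abstract, the defense embeds "external features" (e.g., style-transferred versions of a few training images) into the victim's training set, then verifies a suspect model with a meta-classifier over gradient features, since stolen models tend to inherit those features. A minimal sketch of the feature-extraction step (names are hypothetical):

```python
import torch
import torch.nn.functional as F

def gradient_signature(model, x, y):
    """Input-gradient features on (style-transferred) probe samples, to be
    fed to a meta-classifier that separates stolen models from
    independently trained ones."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return x.grad.flatten(start_dim=1)
```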

PromptCARE: Prompt copyright protection by watermark injection and verification

H Yao, J Lou, Z Qin, K Ren - 2024 IEEE Symposium on Security …, 2024 - ieeexplore.ieee.org
Large language models (LLMs) have seen a meteoric rise in popularity among general users over the past few months, facilitating diverse downstream tasks with …