Domain watermark: Effective and harmless dataset copyright protection is closed at hand

J Guo, Y Li, L Wang, ST Xia… - Advances in Neural …, 2024 - proceedings.neurips.cc
The prosperity of deep neural networks (DNNs) has largely benefited from open-source
datasets, based on which users can evaluate and improve their methods. In this paper, we …

Backdoor defense via adaptively splitting poisoned dataset

K Gao, Y Bai, J Gu, Y Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Backdoor defenses have been studied to alleviate the threat of deep neural networks
(DNNs) being backdoor-attacked and thus maliciously altered. Since DNNs usually adopt …
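
The snippet is cut off, but the title points at the mechanism: adaptively split the (possibly poisoned) training set into a trusted pool and a suspicious pool. Below is a minimal sketch of one common loss-guided realization; the batch size, split ratio, and the static (non-adaptive) split are illustrative assumptions, not the paper's actual procedure, and which pool is treated as trusted varies across defenses.

```python
import torch
import torch.nn.functional as F

def split_by_loss(model, dataset, clean_ratio=0.5, device="cpu"):
    """Rank training samples by per-sample loss and split the set in two.
    Split-based backdoor defenses use such a statistic to separate a
    (probably) clean pool from a suspicious one; the actual defense
    adapts this split during training, which this sketch does not."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in torch.utils.data.DataLoader(dataset, batch_size=128):
            logits = model(x.to(device))
            losses.append(F.cross_entropy(logits, y.to(device),
                                          reduction="none").cpu())
    losses = torch.cat(losses)
    n_clean = int(clean_ratio * len(losses))
    order = torch.argsort(losses)            # low-loss samples first
    clean_idx = order[:n_clean].tolist()     # trusted (supervised) pool
    suspect_idx = order[n_clean:].tolist()   # suspicious pool
    return clean_idx, suspect_idx
```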

Not all samples are born equal: Towards effective clean-label backdoor attacks

Y Gao, Y Li, L Zhu, D Wu, Y Jiang, ST Xia - Pattern Recognition, 2023 - Elsevier
Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to backdoor
attacks. The attacked model behaves normally on benign samples, while its predictions are …
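
For readers new to the threat model, a minimal sketch of what "clean-label" means: the trigger is stamped only onto samples that already carry the target label, so no label is ever flipped. The corner-patch trigger, poisoning rate, and first-n selection below are illustrative; the paper's actual contribution concerns which samples to poison, for which this random-order placeholder is no substitute.

```python
import torch

def stamp_trigger(img, patch=3, value=1.0):
    """Stamp a small white square in the bottom-right corner.
    img: float tensor (C, H, W) in [0, 1]. Illustrative trigger only."""
    poisoned = img.clone()
    poisoned[:, -patch:, -patch:] = value
    return poisoned

def clean_label_poison(dataset, target_class, rate=0.1):
    """Poison a fraction of *target-class* samples without touching
    their labels, which is the defining property of a clean-label attack."""
    target_idx = [i for i, (_, y) in enumerate(dataset) if y == target_class]
    chosen = set(target_idx[: int(rate * len(target_idx))])
    return [(stamp_trigger(x), y) if i in chosen else (x, y)
            for i, (x, y) in enumerate(dataset)]
```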

Nearest is not dearest: Towards practical defense against quantization-conditioned backdoor attacks

B Li, Y Cai, H Li, F Xue, Z Li… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Model quantization is widely used to compress and accelerate deep neural
networks. However, recent studies have revealed the feasibility of weaponizing model …
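
The title names the symptom to test for: a backdoor that stays dormant at full precision and activates only after quantization. A minimal sketch of that check follows, assuming a CPU model with linear layers and a batch of trigger-stamped inputs; dynamic int8 quantization here merely stands in for whatever scheme an attacker would target.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def quantization_flip_rate(model, triggered_inputs):
    """Compare a full-precision model against its dynamically quantized
    copy on trigger-stamped inputs. High disagreement is the symptom of
    a quantization-conditioned backdoor: dormant at full precision,
    active after quantization."""
    model.eval()
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8)
    fp_pred = model(triggered_inputs).argmax(dim=1)
    q_pred = quantized(triggered_inputs).argmax(dim=1)
    return (fp_pred != q_pred).float().mean().item()
```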

Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency

J Guo, Y Li, X Chen, H Guo, L Sun, C Liu - arXiv preprint arXiv:2302.03251, 2023 - arxiv.org
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries
embed a hidden backdoor trigger during the training process for malicious prediction …
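
The title states the detection mechanism directly: amplify pixel values and check whether the predicted label survives. A minimal sketch of that scoring, assuming inputs normalized to [0, 1]; the scaling factors and any decision threshold are illustrative choices, not the paper's tuned values.

```python
import torch

@torch.no_grad()
def scaled_prediction_consistency(model, x, scales=(2, 3, 4, 5)):
    """SCALE-UP-style score: the fraction of amplified copies of x whose
    predicted label matches the original prediction. Backdoored inputs
    tend to keep their (target) label under pixel amplification, so a
    high score marks an input as suspicious.
    x: float tensor (N, C, H, W) in [0, 1]."""
    model.eval()
    base = model(x).argmax(dim=1)
    agree = torch.zeros_like(base, dtype=torch.float)
    for n in scales:
        scaled_pred = model((n * x).clamp(0.0, 1.0)).argmax(dim=1)
        agree += (scaled_pred == base).float()
    return agree / len(scales)  # in [0, 1]; flag inputs above a threshold
```

In use, inputs whose score exceeds some calibrated threshold would be rejected before inference; the threshold is an assumption here, not a value from the paper.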

Black-box dataset ownership verification via backdoor watermarking

Y Li, M Zhu, X Yang, Y Jiang, T Wei… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Deep learning, especially deep neural networks (DNNs), has been widely and successfully
adopted in many critical applications for its high effectiveness and efficiency. The rapid …
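
A minimal sketch of the black-box verification step, assuming the released dataset was watermarked so that a trigger shifts probability mass toward a chosen target class; the paired one-sided t-test and the trigger helper below are illustrative stand-ins for the paper's exact protocol.

```python
import torch
from scipy import stats

def stamp_trigger_batch(x, patch=3, value=1.0):
    """Stamp a small white square on every image in the batch.
    x: (N, C, H, W) in [0, 1]. Illustrative trigger only."""
    marked = x.clone()
    marked[:, :, -patch:, -patch:] = value
    return marked

@torch.no_grad()
def verify_ownership(suspect_model, benign_x, target_class):
    """Black-box probe: if the suspect model was trained on the
    watermarked dataset, adding the trigger should raise the posterior
    probability of the target class. A one-sided paired t-test on that
    probability yields the verification p-value."""
    suspect_model.eval()
    p_benign = suspect_model(benign_x).softmax(dim=1)[:, target_class]
    p_marked = suspect_model(
        stamp_trigger_batch(benign_x)).softmax(dim=1)[:, target_class]
    _, p_value = stats.ttest_rel(p_marked.cpu().numpy(),
                                 p_benign.cpu().numpy(),
                                 alternative="greater")
    return p_value  # small p-value: evidence the model saw the watermark
```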

Black-box backdoor defense via zero-shot image purification

Y Shi, M Du, X Wu, Z Guan, J Sun… - Advances in Neural …, 2023 - proceedings.neurips.cc
Backdoor attacks inject poisoned samples into the training data, resulting in the
misclassification of the poisoned input during a model's deployment. Defending against …
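
A minimal sketch of where purification sits in the pipeline (transform the input to destroy a potential trigger, then classify); the Gaussian blur is only a placeholder for the paper's zero-shot purifier, which the snippet does not detail.

```python
import torch
from torchvision.transforms import GaussianBlur

@torch.no_grad()
def purified_predict(model, x,
                     purifier=GaussianBlur(kernel_size=5, sigma=2.0)):
    """Purify-then-predict: preprocess the input so that a potential
    backdoor trigger is degraded before classification. The blur is a
    stand-in; the paper's purifier is a zero-shot generative model."""
    model.eval()
    return model(purifier(x)).argmax(dim=1)
```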

Towards faithful XAI evaluation via generalization-limited backdoor watermark

M Ya, Y Li, T Dai, B Wang, Y Jiang… - The Twelfth International …, 2023 - openreview.net
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …
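
Since the snippet names Grad-CAM as the canonical SRV method, a minimal sketch of it follows; the choice of target layer and the single-image input shape are assumptions on the caller's side.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, x, class_idx):
    """Grad-CAM: weight the target layer's feature maps by the spatially
    averaged gradient of the class score, sum over channels, then ReLU.
    x: (1, C, H, W); returns an (h, w) saliency map in [0, 1]."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    try:
        model.eval()
        score = model(x)[0, class_idx]
        model.zero_grad()
        score.backward()
    finally:
        h1.remove()
        h2.remove()
    fmap, grad = feats[0], grads[0]                  # (1, K, h, w) each
    weights = grad.mean(dim=(2, 3), keepdim=True)    # channel weights
    cam = F.relu((weights * fmap).sum(dim=1))        # (1, h, w)
    return cam[0] / (cam.max() + 1e-8)
```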

Untargeted backdoor attack against object detection

C Luo, Y Li, Y Jiang, ST Xia - ICASSP 2023-2023 IEEE …, 2023 - ieeexplore.ieee.org
Recent studies revealed that deep neural networks (DNNs) are exposed to backdoor threats
when trained with third-party resources (such as training samples or backbones). The back …

Does few-shot learning suffer from backdoor attacks?

X Liu, X Jia, J Gu, Y Xun, S Liang, X Cao - Proceedings of the AAAI …, 2024 - ojs.aaai.org
The field of few-shot learning (FSL) has shown promising results in scenarios where training
data is limited, but its vulnerability to backdoor attacks remains largely unexplored. We first …