Domain watermark: Effective and harmless dataset copyright protection is closed at hand
The prosperity of deep neural networks (DNNs) is largely benefited from open-source
datasets, based on which users can evaluate and improve their methods. In this paper, we …
Backdoor defense via adaptively splitting poisoned dataset
Backdoor defenses have been studied to alleviate the threat of deep neural networks
(DNNs) being backdoor attacked and thus maliciously altered. Since DNNs usually adopt …
Not all samples are born equal: Towards effective clean-label backdoor attacks
Recent studies demonstrated that deep neural networks (DNNs) are vulnerable to backdoor
attacks. The attacked model behaves normally on benign samples, while its predictions are …
Nearest is not dearest: Towards practical defense against quantization-conditioned backdoor attacks
Model quantization is widely used to compress and accelerate deep neural
networks. However, recent studies have revealed the feasibility of weaponizing model …
Scale-up: An efficient black-box input-level backdoor detection via analyzing scaled prediction consistency
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where adversaries
embed a hidden backdoor trigger during the training process for malicious prediction …
Black-box dataset ownership verification via backdoor watermarking
Deep learning, especially deep neural networks (DNNs), has been widely and successfully
adopted in many critical applications for its high effectiveness and efficiency. The rapid …
Black-box backdoor defense via zero-shot image purification
Backdoor attacks inject poisoned samples into the training data, resulting in the
misclassification of the poisoned input during a model's deployment. Defending against …
Towards faithful XAI evaluation via generalization-limited backdoor watermark
Saliency-based representation visualization (SRV) (e.g., Grad-CAM) is one of the most
classical and widely adopted explainable artificial intelligence (XAI) methods for its simplicity …
Untargeted backdoor attack against object detection
Recent studies revealed that deep neural networks (DNNs) are exposed to backdoor threats
when training with third-party resources (such as training samples or backbones). The back …
Does few-shot learning suffer from backdoor attacks?
The field of few-shot learning (FSL) has shown promising results in scenarios where training
data is limited, but its vulnerability to backdoor attacks remains largely unexplored. We first …