Poisoning web-scale training datasets is practical
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …
Invisible backdoor attack with sample-specific triggers
Backdoor attacks have recently emerged as a new security threat to the training process of deep neural
networks (DNNs). Attackers intend to inject hidden backdoors into DNNs, such that the …
Backdoor learning: A survey
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Backdoor defense via decoupling the training process
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor
attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few …
Better trigger inversion optimization in backdoor scanning
Backdoor attacks aim to cause misclassification in a subject model by stamping a trigger onto
inputs. Backdoors can be injected through malicious training or can exist naturally. Deriving …
Hidden trigger backdoor attack on NLP models via linguistic style manipulation
The vulnerability of deep neural networks (DNN) to backdoor (trojan) attacks is extensively
studied for the image domain. In a backdoor attack, a DNN is modified to exhibit expected …
Rethinking the trigger of backdoor attack
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs),
such that the prediction of the infected model will be maliciously changed if the hidden …
Aeva: Black-box backdoor detection using adversarial extreme value analysis
Deep neural networks (DNNs) have been proven vulnerable to backdoor attacks. A
backdoor is often embedded in the target DNN by injecting a backdoor trigger into …
Model orthogonalization: Class distance hardening in neural networks for better security
The distance between two classes for a deep learning classifier can be measured by the
level of difficulty in flipping all (or a majority of) samples in one class to the other. The class …
Few-shot backdoor defense using shapley estimation
Deep neural networks have achieved impressive performance in a variety of tasks over the
last decade, such as autonomous driving, face recognition, and medical diagnosis …