Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
Backdoor attacks and countermeasures on deep learning: A comprehensive review
This work provides the community with a timely comprehensive review of backdoor attacks
and countermeasures on deep learning. According to the attacker's capability and affected …
Anti-backdoor learning: Training clean models on poisoned data
Backdoor attack has emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …
Lira: Learnable, imperceptible and robust backdoor attacks
Recently, machine learning models have been demonstrated to be vulnerable to backdoor
attacks, primarily due to the lack of transparency in black-box models such as deep neural …
Backdoor learning: A survey
Backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Reflection backdoor: A natural backdoor attack on deep neural networks
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at
training time. A backdoor attack installs a backdoor into the victim model by injecting a …
Neural attention distillation: Erasing backdoor triggers from deep neural networks
Deep neural networks (DNNs) are known vulnerable to backdoor attacks, a training time
attack that injects a trigger pattern into a small proportion of training data so as to control the …
Privacy and robustness in federated learning: Attacks and defenses
As data are increasingly stored in different silos and societies become more aware
of data privacy issues, the traditional centralized training of artificial intelligence (AI) models …
Backdoor attack with imperceptible input and latent modification
Recent studies have shown that deep neural networks (DNN) are vulnerable to various
adversarial attacks. In particular, an adversary can inject a stealthy backdoor into a model …
Blind backdoors in deep learning models
E Bagdasaryan, V Shmatikov - 30th USENIX Security Symposium …, 2021 - usenix.org
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …