Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
An overview of backdoor attacks against deep neural networks and possible defences
Alongside the impressive advances touching every aspect of our society, AI technology
based on Deep Neural Networks (DNNs) brings increasing security concerns. While …
Adversarial neuron pruning purifies backdoored deep models
As deep neural networks (DNNs) grow larger, their computational resource requirements
become enormous, making it increasingly popular to outsource training. Training in a third …
Backdoor learning: A survey
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs) so
that the attacked models perform well on benign samples, whereas their predictions will be …
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …
Better trigger inversion optimization in backdoor scanning
Backdoor attacks aim to cause misclassification of a subject model by stamping a trigger onto
inputs. Backdoors can be injected through malicious training or may exist naturally. Deriving …
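The trigger-stamping described in this abstract can be sketched minimally: a small fixed patch is pasted onto an input so a backdoored model misclassifies it. The `stamp_trigger` helper below is a hypothetical illustration, not code from any of the papers listed.

```python
import numpy as np

def stamp_trigger(image, trigger, x=0, y=0):
    """Return a copy of `image` with `trigger` pasted at position (x, y)."""
    patched = image.copy()
    h, w = trigger.shape[:2]
    patched[y:y + h, x:x + w] = trigger
    return patched

# A 3x3 all-white patch stamped into the corner of an 8x8 grayscale image,
# mimicking the classic pixel-pattern trigger used in backdoor attacks.
image = np.zeros((8, 8), dtype=np.uint8)
trigger = np.full((3, 3), 255, dtype=np.uint8)
poisoned = stamp_trigger(image, trigger, x=5, y=5)
```

At training time an attacker would pair such stamped inputs with a chosen target label; trigger-inversion defenses attempt to recover the patch by optimizing over candidate triggers.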
Rethinking the trigger of backdoor attack
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs)
such that the prediction of the infected model is maliciously changed if the hidden …
Backdoor defense with machine unlearning
Backdoor injection attacks are an emerging threat to the security of neural networks;
however, effective defenses against them remain limited. In this paper, we …
BackdoorL: Backdoor attack against competitive reinforcement learning
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement
learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify …
Model poisoning attack in differential privacy-based federated learning
Although federated learning can provide privacy protection for individual raw data, some
studies have shown that the shared parameters or gradients under federated learning may …