Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

An overview of backdoor attacks against deep neural networks and possible defences

W Guo, B Tondi, M Barni - IEEE Open Journal of Signal …, 2022 - ieeexplore.ieee.org
Together with impressive advances touching every aspect of our society, AI technology
based on Deep Neural Networks (DNN) is bringing increasing security concerns. While …

Adversarial neuron pruning purifies backdoored deep models

D Wu, Y Wang - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
As deep neural networks (DNNs) are growing larger, their requirements for computational
resources become huge, which makes outsourcing training more popular. Training in a third …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

Better trigger inversion optimization in backdoor scanning

G Tao, G Shen, Y Liu, S An, Q Xu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Backdoor attacks aim to cause misclassification of a subject model by stamping a trigger to
inputs. Backdoors could be injected through malicious training and naturally exist. Deriving …

Rethinking the trigger of backdoor attack

Y Li, T Zhai, B Wu, Y Jiang, Z Li, S Xia - arXiv preprint arXiv:2004.04692, 2020 - arxiv.org
Backdoor attack intends to inject hidden backdoor into the deep neural networks (DNNs),
such that the prediction of the infected model will be maliciously changed if the hidden …

Backdoor defense with machine unlearning

Y Liu, M Fan, C Chen, X Liu, Z Ma… - IEEE INFOCOM 2022 …, 2022 - ieeexplore.ieee.org
Backdoor injection attack is an emerging threat to the security of neural networks, however,
there still exist limited effective defense methods against the attack. In this paper, we …

BackdoorL: Backdoor attack against competitive reinforcement learning

L Wang, Z Javed, X Wu, W Guo, X Xing… - arXiv preprint arXiv …, 2021 - arxiv.org
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement
learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify …

Model poisoning attack in differential privacy-based federated learning

M Yang, H Cheng, F Chen, X Liu, M Wang, X Li - Information Sciences, 2023 - Elsevier
Although federated learning can provide privacy protection for individual raw data, some
studies have shown that the shared parameters or gradients under federated learning may …