Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

AI security for geoscience and remote sensing: Challenges and future trends

Y Xu, T Bai, W Yu, S Chang… - … and Remote Sensing …, 2023 - ieeexplore.ieee.org
Recent advances in artificial intelligence (AI) have significantly intensified research in the
geoscience and remote sensing (RS) field. AI algorithms, especially deep learning-based …

Poisoning web-scale training datasets is practical

N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …

Anti-backdoor learning: Training clean models on poisoned data

Y Li, X Lyu, N Koren, L Lyu, B Li… - Advances in Neural …, 2021 - proceedings.neurips.cc
Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
A backdoor attack aims to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
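
Since this snippet frames a backdoor as preserving benign behavior while controlling triggered predictions, the sketch below shows the two metrics the literature uses to measure exactly that: benign accuracy (BA) on clean inputs and attack success rate (ASR) on triggered inputs. It assumes a PyTorch classifier; `apply_trigger` and `target_label` are hypothetical stand-ins for a concrete attack, not anything from this survey.

```python
import torch

def evaluate_backdoor(model, clean_loader, apply_trigger, target_label, device="cpu"):
    """Benign accuracy on clean inputs and attack success rate on the same
    inputs with the trigger applied. `apply_trigger` and `target_label`
    are hypothetical stand-ins for an attack's specifics."""
    model.eval()
    clean_correct, attack_hits, total = 0, 0, 0
    with torch.no_grad():
        for x, y in clean_loader:
            x, y = x.to(device), y.to(device)
            clean_correct += (model(x).argmax(1) == y).sum().item()
            x_trig = apply_trigger(x)  # stamp the trigger pattern on the batch
            attack_hits += (model(x_trig).argmax(1) == target_label).sum().item()
            total += y.numel()
    return clean_correct / total, attack_hits / total  # (BA, ASR)
```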

BackdoorBench: A comprehensive benchmark of backdoor learning

B Wu, H Chen, M Zhang, Z Zhu, S Wei… - Advances in …, 2022 - proceedings.neurips.cc
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep
neural networks (DNNs). Many pioneering backdoor attack and defense methods are being …

Neural attention distillation: Erasing backdoor triggers from deep neural networks

Y Li, X Lyu, N Koren, L Lyu, B Li, X Ma - arXiv preprint arXiv:2101.05930, 2021 - arxiv.org
Deep neural networks (DNNs) are known to be vulnerable to backdoor attacks, a training-time
attack that injects a trigger pattern into a small proportion of training data so as to control the …
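
The trigger-injection attack this snippet describes can be sketched in a few lines. The following is an illustrative BadNets-style poisoning routine, not the paper's own code: it stamps a patch trigger onto a small fraction of training images and flips their labels to an attacker-chosen target. The function name, array layout, and all parameters are assumptions for illustration.

```python
import numpy as np

def poison_badnets(images, labels, rate=0.05, target=0, patch=3, seed=0):
    """Dirty-label trigger injection in the BadNets style: stamp a small
    white patch on a fraction `rate` of training images and relabel them
    to the attacker's `target` class. Arrays are (N, H, W, C) uint8."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch:, -patch:, :] = 255  # trigger: bottom-right white square
    labels[idx] = target                    # flip poisoned labels to the target
    return images, labels
```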

Narcissus: A practical clean-label backdoor attack with limited information

Y Zeng, M Pan, HA Just, L Lyu, M Qiu… - Proceedings of the 2023 …, 2023 - dl.acm.org
Backdoor attacks introduce manipulated data into a machine learning model's training set,
causing the model to misclassify inputs with a trigger during testing to achieve a desired …
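
A clean-label attack such as the one in this title differs from dirty-label poisoning in that labels are never changed: only inputs that already belong to the target class are perturbed, so the poisoned set survives label inspection. Below is a minimal illustrative sketch of that general idea (not the Narcissus trigger-synthesis procedure itself); all names and parameters are hypothetical, and `trigger` is assumed to be an image-shaped array on the same scale as `images`.

```python
import numpy as np

def poison_clean_label(images, labels, trigger, target=0, rate=0.1, alpha=0.2, seed=0):
    """Clean-label poisoning: blend a trigger only into images that already
    belong to the `target` class, leaving every label untouched."""
    rng = np.random.default_rng(seed)
    images = images.astype(np.float32).copy()
    target_idx = np.flatnonzero(labels == target)
    idx = rng.choice(target_idx, size=int(rate * len(target_idx)), replace=False)
    images[idx] = (1 - alpha) * images[idx] + alpha * trigger  # labels stay correct
    return images, labels
```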

Backdoor defense via decoupling the training process

K Huang, Y Li, B Wu, Z Qin, K Ren - arXiv preprint arXiv:2202.03423, 2022 - arxiv.org
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor
attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few …
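
The defense named in this title decouples end-to-end training so that possibly poisoned labels cannot steer feature learning. Below is a minimal two-stage sketch of that idea, not the paper's exact procedure: `ssl_step` is a placeholder for any self-supervised objective (e.g., a SimCLR-style loss), and the loaders, epochs, and learning rates are assumed names.

```python
import torch
from torch import nn

def decoupled_training(backbone, head, unlabeled_loader, labeled_loader,
                       ssl_step, epochs=10, lr=1e-3, device="cpu"):
    """Stage 1: learn the feature extractor with self-supervision, ignoring
    the (possibly poisoned) labels. Stage 2: fit a classifier head on the
    frozen features. `ssl_step(backbone, x)` must return a scalar loss."""
    backbone.to(device)
    head.to(device)
    opt = torch.optim.Adam(backbone.parameters(), lr=lr)
    for _ in range(epochs):              # stage 1: self-supervised backbone
        for x, _ in unlabeled_loader:    # labels deliberately unused
            loss = ssl_step(backbone, x.to(device))
            opt.zero_grad(); loss.backward(); opt.step()
    for p in backbone.parameters():      # stage 2: freeze the features
        p.requires_grad_(False)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in labeled_loader:
            loss = ce(head(backbone(x.to(device))), y.to(device))
            opt.zero_grad(); loss.backward(); opt.step()
    return backbone, head
```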

Untargeted backdoor watermark: Towards harmless and stealthy dataset copyright protection

Y Li, Y Bai, Y Jiang, Y Yang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the
rapid development of DNNs has benefited greatly from high-quality (open-sourced) datasets …