Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Adversarial attacks and countermeasures on image classification-based deep learning models in autonomous driving systems: A systematic review

B Badjie, J Cecílio, A Casimiro - ACM Computing Surveys, 2024 - dl.acm.org
The rapid development of artificial intelligence (AI) and breakthroughs in Internet of Things
(IoT) technologies have driven the innovation of advanced autonomous driving systems …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
A backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
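For orientation, the canonical attack this survey categorizes can be illustrated with a patch-trigger (BadNets-style) poisoning routine. The sketch below is a hypothetical illustration with made-up parameters (patch size, poison rate, target class), not code from the survey.

import numpy as np

def add_patch_trigger(image, patch_size=3):
    # Stamp a small white patch in the bottom-right corner; pixels assumed in [0, 1].
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = 1.0
    return poisoned

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    # Relabel a random fraction of triggered images to the attacker's target class.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_patch_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Example: poison 5% of a toy dataset of 100 RGB images.
X = np.random.rand(100, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=100)
X_p, y_p = poison_dataset(X, y)

A model trained on (X_p, y_p) learns the clean task plus the association between the patch and the target class, which matches the benign-accuracy-preserving behavior the abstract describes.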

Attack of the tails: Yes, you really can backdoor federated learning

H Wang, K Sreenivasan, S Rajput… - Advances in …, 2020 - proceedings.neurips.cc
Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in
the form of backdoors during training. The goal of a backdoor is to corrupt the performance …
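As a toy illustration of how a single training-time participant can plant a backdoor in FL, the sketch below shows a malicious client scaling its backdoored update so it survives federated averaging. The local training step is a stand-in, and the scaling-by-client-count heuristic is an assumption for illustration, not this paper's edge-case attack.

import numpy as np

def local_update(global_weights, local_data):
    # Placeholder for a round of local SGD on the client's (possibly backdoored) data.
    return global_weights + 0.01 * np.random.randn(*global_weights.shape)

def malicious_update(global_weights, backdoor_data, num_clients):
    # Train toward the backdoored objective, then scale the delta so that,
    # after averaging, the backdoor dominates the aggregated model.
    w_backdoor = local_update(global_weights, backdoor_data)
    return global_weights + num_clients * (w_backdoor - global_weights)

def fed_avg(client_weights):
    return np.mean(client_weights, axis=0)

w_global = np.zeros(10)
clients = [local_update(w_global, None) for _ in range(9)]
clients.append(malicious_update(w_global, None, num_clients=10))
w_global = fed_avg(clients)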

BppAttack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning

Z Wang, J Zhai, S Ma - … of the IEEE/CVF Conference on …, 2022 - openaccess.thecvf.com
Deep neural networks are vulnerable to Trojan attacks. Existing attacks use visible patterns
(e.g., a patch or image transformations) as triggers, which are vulnerable to human …
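As a rough intuition for why a quantization-based trigger is hard to spot, the sketch below reduces an image's color depth and uses the resulting low-amplitude artifacts as the trigger. The bit depth and helper names are illustrative, and the paper's dithering and contrastive adversarial learning components are omitted.

import numpy as np

def quantize_trigger(image, bits=3):
    # Reduce the color depth of an image in [0, 1]; the subtle banding acts as the trigger.
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

clean = np.random.rand(32, 32, 3).astype("float32")
triggered = quantize_trigger(clean)
print(np.abs(clean - triggered).max())  # small, visually inconspicuous perturbation

Triggered copies would then be relabeled to the attacker's target class and mixed into the training set, as in any dirty-label poisoning attack.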

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

Blind backdoors in deep learning models

E Bagdasaryan, V Shmatikov - 30th USENIX Security Symposium …, 2021 - usenix.org
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …
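The core idea of attacking the loss computation can be sketched as a loss function that looks like an ordinary task loss but also optimizes a backdoor objective. Everything below (the trigger synthesizer, the target class, the unit weighting of the two terms) is a hypothetical illustration rather than the paper's code.

import torch
import torch.nn.functional as F

TARGET_CLASS = 0

def synthesize_trigger(x):
    # Toy pixel-pattern trigger applied to an NCHW image batch.
    x = x.clone()
    x[:, :, -3:, -3:] = 1.0
    return x

def compromised_loss(model, x, y):
    # What the victim believes they are training on:
    task_loss = F.cross_entropy(model(x), y)
    # What the attacker additionally trains, hidden inside the "loss function":
    bd_targets = torch.full_like(y, TARGET_CLASS)
    backdoor_loss = F.cross_entropy(model(synthesize_trigger(x)), bd_targets)
    return task_loss + backdoor_loss

Because the poisoning lives in the training code rather than in the dataset, an audit of the data alone would not reveal it, which is what distinguishes this vector from dataset-level poisoning.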

Witches' brew: Industrial scale data poisoning via gradient matching

J Geiping, L Fowl, WR Huang, W Czaja… - arXiv preprint arXiv …, 2020 - arxiv.org
Data Poisoning attacks modify training data to maliciously control a model trained on such
data. In this work, we focus on targeted poisoning attacks which cause a reclassification of …
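The gradient-matching idea in the title can be condensed into a single objective: craft poison samples whose training gradient aligns with the gradient the attacker wants to induce on the target sample. The sketch below shows that objective under simplifying assumptions (a single network, no restarts or augmentation); names are illustrative, and the outer loop that optimizes the poison pixels under a perturbation budget is omitted.

import torch
import torch.nn.functional as F

def gradient_matching_loss(model, x_poison, y_poison, x_target, y_adv):
    params = [p for p in model.parameters() if p.requires_grad]
    flat = lambda grads: torch.cat([g.reshape(-1) for g in grads])
    # Gradient the attacker wants training to follow: move the target toward label y_adv.
    g_adv = flat(torch.autograd.grad(F.cross_entropy(model(x_target), y_adv), params))
    # Gradient the (perturbed) poison batch actually produces during training.
    g_poison = flat(torch.autograd.grad(
        F.cross_entropy(model(x_poison), y_poison), params, create_graph=True))
    # Minimizing 1 - cosine similarity aligns the two gradient directions.
    return 1 - F.cosine_similarity(g_adv.detach(), g_poison, dim=0)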

Privacy in large language models: Attacks, defenses and future directions

H Li, Y Chen, J Luo, J Wang, H Peng, Y Kang… - arXiv preprint arXiv …, 2023 - arxiv.org
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …

MM-BD: Post-training detection of backdoor attacks with arbitrary backdoor pattern types using a maximum margin statistic

H Wang, Z Xiang, DJ Miller… - 2024 IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Backdoor attacks are an important type of adversarial threat against deep neural network
classifiers, wherein test samples from one or more source classes will be (mis)classified to …
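One way to picture a maximum-margin statistic: for each putative target class, search the input space for the largest achievable margin between that class's logit and all others, then flag classes whose maximum margin is anomalously large. The sketch below uses gradient ascent from a random input and leaves the anomaly test to a simple comparison against the remaining classes; these are illustrative simplifications, not the paper's estimator or hypothesis test.

import torch

def max_margin_statistic(model, num_classes, input_shape, steps=200, lr=0.1):
    stats = []
    for c in range(num_classes):
        x = torch.randn(1, *input_shape, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logits = model(x)[0]
            others = torch.cat([logits[:c], logits[c + 1:]])
            margin = logits[c] - others.max()
            (-margin).backward()  # ascend the class-c margin over the input
            opt.step()
        stats.append(margin.item())
    return torch.tensor(stats)

# A backdoor target class typically admits an abnormally large maximum margin,
# so classes far above the median statistic of the rest are flagged as suspicious.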