Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
Adversarial attacks and countermeasures on image classification-based deep learning models in autonomous driving systems: A systematic review
The rapid development of artificial intelligence (AI) and breakthroughs in Internet of Things
(IoT) technologies have driven the innovation of advanced autonomous driving systems …
Backdoor learning: A survey
Backdoor attacks intend to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
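To make the attack mechanism concrete, here is a minimal sketch of the classic patch-based poisoning step described in this line of work: a small trigger is stamped onto a fraction of training images and their labels are flipped to an attacker-chosen target class. The names (`stamp_trigger`, the patch size, the poison rate) are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def stamp_trigger(image, patch_value=1.0, patch_size=3):
    """Overwrite the bottom-right corner of an HxWxC image with a solid patch."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """Stamp the trigger on a random fraction of samples and relabel them.

    A model trained on this set behaves normally on clean inputs but tends to
    predict `target_class` whenever the trigger patch is present.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Example: poison 5% of a toy CIFAR-sized dataset.
X = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_p, y_p = poison_dataset(X, y, target_class=0, poison_rate=0.05)
```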
Attack of the tails: Yes, you really can backdoor federated learning
Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in
the form of backdoors during training. The goal of a backdoor is to corrupt the performance …
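A common formulation of the federated backdoor is model replacement: the malicious client trains on trigger-stamped data, then scales its update so it survives server-side averaging. The sketch below is schematic; `malicious_client_update`, the boost factor, and the flat weight vectors are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def malicious_client_update(global_weights, poisoned_grad, n_clients, lr=0.1, boost=None):
    """Craft a backdoored update that survives federated averaging.

    The honest update is replaced by a step on the attacker's poisoned
    objective, then scaled (the "model replacement" trick) so that averaging
    with (n_clients - 1) benign updates still lands near the backdoored model.
    """
    boost = n_clients if boost is None else boost   # scale up to cancel averaging
    local_weights = global_weights - lr * poisoned_grad  # step on the poisoned loss
    return global_weights + boost * (local_weights - global_weights)

# Toy round: one attacker among 10 clients.
w = np.zeros(5)
benign = [w + 0.01 * np.random.randn(5) for _ in range(9)]
attacker = malicious_client_update(w, poisoned_grad=np.ones(5), n_clients=10)
w_next = np.mean(benign + [attacker], axis=0)  # averaging recovers roughly the attacker's local model
```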
BppAttack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning
Deep neural networks are vulnerable to Trojan attacks. Existing attacks use visible patterns
(e.g., a patch or image transformations) as triggers, which are vulnerable to human …
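Here the trigger is not a visible patch but a global, nearly invisible transformation: reducing the image's color depth (bits per pixel). A rough sketch of such a quantization trigger follows; the bit depth is an assumed parameter, and the paper's dithering and contrastive-learning components are omitted.

```python
import numpy as np

def quantization_trigger(image, bits=5):
    """Reduce the color depth of a uint8 image to `bits` bits per channel.

    The change is hard for humans to notice but gives the network a
    consistent, learnable signal that can serve as a backdoor trigger.
    """
    levels = 2 ** bits
    step = 256 / levels
    return (np.floor(image / step) * step + step / 2).astype(np.uint8)

img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
triggered = quantization_trigger(img, bits=5)  # visually similar, distributionally shifted
```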
Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …
Blind backdoors in deep learning models
E. Bagdasaryan, V. Shmatikov - 30th USENIX Security Symposium …, 2021 - usenix.org
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …
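The key idea is that the attack lives in the training code's loss computation rather than in the data. A schematic of what such a compromised loss might look like, assuming a PyTorch classifier; the trigger synthesis and the blending weight `alpha` are illustrative, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def synthesize_trigger(x, value=1.0, size=3):
    """Stamp a solid square in the corner of a batch of NCHW images."""
    x = x.clone()
    x[..., -size:, -size:] = value
    return x

def blind_backdoor_loss(model, x, y, target_class=0, alpha=0.5):
    """A compromised loss: blends the task loss with a backdoor objective.

    Poisoned inputs are synthesized on the fly inside the loss computation,
    so no poisoned samples ever appear in the dataset; the attack lives
    entirely in the training code.
    """
    clean_loss = F.cross_entropy(model(x), y)
    x_trig = synthesize_trigger(x)                 # on-the-fly poisoned inputs
    y_trig = torch.full_like(y, target_class)      # attacker-chosen labels
    backdoor_loss = F.cross_entropy(model(x_trig), y_trig)
    return (1 - alpha) * clean_loss + alpha * backdoor_loss
```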
Witches' brew: Industrial scale data poisoning via gradient matching
Data poisoning attacks modify training data to maliciously control a model trained on such
data. In this work, we focus on targeted poisoning attacks that cause a reclassification of …
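The core objective here is to perturb poison images so that the training gradient they induce aligns with the gradient of an adversarial loss on the target sample. A minimal PyTorch sketch of that matching loss; the outer loop that optimizes the poison perturbations, and all tensor shapes, are omitted and assumed.

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, poison_x, poison_y, target_x, target_y_adv):
    """Cosine dissimilarity between the poison gradient and the target gradient.

    Minimizing this over the poison perturbations makes training on the
    poisons push the model's parameters in the same direction as training
    directly on the (target, adversarial label) pair would.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    g_tgt = torch.autograd.grad(
        F.cross_entropy(model(target_x), target_y_adv), params)
    g_poi = torch.autograd.grad(
        F.cross_entropy(model(poison_x), poison_y), params, create_graph=True)
    num = sum((a * b).sum() for a, b in zip(g_poi, g_tgt))
    den = (sum(a.pow(2).sum() for a in g_poi).sqrt()
           * sum(b.pow(2).sum() for b in g_tgt).sqrt())
    return 1.0 - num / den  # minimize to align the two gradients
```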
Privacy in large language models: Attacks, defenses and future directions
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …
MM-BD: Post-training detection of backdoor attacks with arbitrary backdoor pattern types using a maximum margin statistic
Backdoor attacks are an important type of adversarial threat against deep neural network
classifiers, wherein test samples from one or more source classes will be (mis)classified to …
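The detection statistic can be sketched directly: for each class, gradient-ascend a free input to estimate the largest achievable logit margin, then flag classes whose maximum margin is an outlier. This PyTorch sketch simplifies the method (no input-domain constraints, a plain Adam ascent, and an unspecified outlier test), so treat it as a rough illustration under those assumptions.

```python
import torch

def max_margin_statistic(model, num_classes, input_shape, steps=200, lr=0.1):
    """Estimate, per class, the largest achievable logit margin over inputs.

    For each class c, maximize logit_c - max_{k != c} logit_k by gradient
    ascent on a free input. A backdoored target class typically admits an
    abnormally large maximum margin, which can then be flagged with a simple
    outlier test (e.g., median absolute deviation) across classes.
    """
    stats = []
    for c in range(num_classes):
        x = torch.rand(1, *input_shape, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            logits = model(x).squeeze(0)
            others = torch.cat([logits[:c], logits[c + 1:]])
            margin = logits[c] - others.max()
            opt.zero_grad()
            (-margin).backward()  # ascend the margin
            opt.step()
        stats.append(margin.item())
    return stats  # flag classes whose statistic is an outlier
```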