Wild patterns reloaded: A survey of machine learning security against training data poisoning
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
AI security for geoscience and remote sensing: Challenges and future trends
Recent advances in artificial intelligence (AI) have significantly intensified research in the
geoscience and remote sensing (RS) field. AI algorithms, especially deep learning-based …
Poisoning web-scale training datasets is practical
N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …
Anti-backdoor learning: Training clean models on poisoned data
Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …
Backdoor learning: A survey
Backdoor attacks aim to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
Backdoorbench: A comprehensive benchmark of backdoor learning
Backdoor learning is an emerging and vital topic for studying the vulnerability of deep
neural networks (DNNs). Many pioneering backdoor attack and defense methods are being …
Neural attention distillation: Erasing backdoor triggers from deep neural networks
Deep neural networks (DNNs) are known to be vulnerable to backdoor attacks, a training-time
attack that injects a trigger pattern into a small proportion of training data so as to control the …
Narcissus: A practical clean-label backdoor attack with limited information
Backdoor attacks introduce manipulated data into a machine learning model's training set,
causing the model to misclassify inputs with a trigger during testing to achieve a desired …
Backdoor defense via decoupling the training process
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor
attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few …
Untargeted backdoor watermark: Towards harmless and stealthy dataset copyright protection
Y Li, Y Bai, Y Jiang, Y Yang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the
rapid development of DNNs has largely benefited from high-quality (open-sourced) datasets …
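Several of the abstracts above describe the same basic mechanism: a trigger pattern is stamped onto a small fraction of the training samples, which are then relabeled to an attacker-chosen target class. A minimal sketch of that poisoning step follows; the function name, parameters, and patch placement are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05,
                   patch_size=3, patch_value=1.0, seed=0):
    """Stamp a small solid patch (the trigger) onto a random fraction of
    training images and relabel those images to the attacker's target class.

    images: float array of shape (N, H, W), pixel values in [0, 1]
    labels: int array of shape (N,)
    Returns poisoned copies of both arrays plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    # Relabel the poisoned samples so the model associates trigger -> target.
    labels[idx] = target_label
    return images, labels, idx
```

A model trained on the returned arrays would behave normally on clean inputs but tend to predict `target_label` whenever the corner patch appears, which is the threat model the defense papers above (anti-backdoor learning, attention distillation, decoupled training) are designed to counter.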