Blind backdoors in deep learning models

E Bagdasaryan, V Shmatikov - 30th USENIX Security Symposium …, 2021 - usenix.org
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …
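
A minimal sketch of the loss-computation attack this entry describes, assuming a PyTorch classifier. The `apply_trigger` helper, `TARGET_LABEL`, and the 0.5 blend weight are illustrative stand-ins; the paper itself balances the two losses with multi-objective optimization rather than a fixed blend.

```python
import torch
import torch.nn.functional as F

TARGET_LABEL = 7  # hypothetical attacker-chosen label

def apply_trigger(x):
    """Stamp a small white square into the corner of each image."""
    x = x.clone()
    x[:, :, -4:, -4:] = 1.0
    return x

def blind_loss(model, x, y):
    # Ordinary task loss, computed exactly as the honest code would.
    loss_main = F.cross_entropy(model(x), y)
    # Backdoor loss on trigger-stamped copies of the same batch,
    # synthesized inside the loss function, so the dataset, the model
    # definition, and the training loop all look untouched.
    x_bd = apply_trigger(x)
    y_bd = torch.full_like(y, TARGET_LABEL)
    loss_bd = F.cross_entropy(model(x_bd), y_bd)
    return 0.5 * loss_main + 0.5 * loss_bd
```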

Backdoor embedding in convolutional neural network models via invisible perturbation

H Zhong, C Liao, AC Squicciarini, S Zhu… - Proceedings of the Tenth …, 2020 - dl.acm.org
Deep learning models have consistently outperformed traditional machine learning models
in various classification tasks, including image classification. As such, they have become …
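
A hedged sketch of the invisible-perturbation idea named in the title: poisoned samples carry a fixed low-amplitude pattern rather than a visible patch. Images are assumed to be in [0, 1]; the random pattern, the 4/255 bound, and the 5% poisoning rate are illustrative, not the paper's exact construction.

```python
import torch

EPS = 4 / 255                      # perturbation kept below visibility
g = torch.Generator().manual_seed(0)
PATTERN = (torch.rand(3, 32, 32, generator=g) * 2 - 1) * EPS

def poison(x, y, target_label, rate=0.05):
    """Add the invisible pattern to a random fraction of the batch
    and relabel those samples to the attacker's target class."""
    idx = torch.rand(x.size(0)) < rate
    x, y = x.clone(), y.clone()
    x[idx] = (x[idx] + PATTERN).clamp(0, 1)
    y[idx] = target_label
    return x, y
```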

Dynamic backdoor attacks against machine learning models

A Salem, R Wen, M Backes, S Ma… - 2022 IEEE 7th …, 2022 - ieeexplore.ieee.org
Machine learning (ML) has made tremendous progress during the past decade and is being
adopted in various critical real-world applications. However, recent research has shown that …
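
An illustrative sketch of the "dynamic" idea in the title: instead of one fixed trigger at one fixed position, each poisoned sample receives a trigger sampled from a set of patterns and pasted at a random location. The paper also generates triggers with a dedicated network; this sketch, with its assumed 6x6 pattern set, only varies pattern and placement.

```python
import random
import torch

PATTERNS = [torch.rand(3, 6, 6) for _ in range(10)]  # assumed trigger set

def stamp_dynamic(x, target_label):
    x = x.clone()
    _, _, h, w = x.shape
    labels = torch.full((x.size(0),), target_label, dtype=torch.long)
    for i in range(x.size(0)):
        p = random.choice(PATTERNS)
        r = random.randint(0, h - 6)
        c = random.randint(0, w - 6)
        x[i, :, r:r + 6, c:c + 6] = p
    return x, labels
```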

Neural cleanse: Identifying and mitigating backdoor attacks in neural networks

B Wang, Y Yao, S Shan, H Li… - … IEEE symposium on …, 2019 - ieeexplore.ieee.org
Lack of transparency in deep neural networks (DNNs) makes them susceptible to backdoor
attacks, where hidden associations or triggers override normal classification to produce …
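
A sketch of Neural Cleanse's core step, reverse-engineering a candidate trigger for one target label: optimize a mask and pattern that flip any input to that label while keeping the mask's L1 norm small. The hyperparameters (`steps`, `lam`, `lr`) are illustrative; the full defense repeats this per label and flags labels whose recovered mask norm is an outlier (via median absolute deviation).

```python
import torch
import torch.nn.functional as F

def reverse_trigger(model, loader, target, shape=(3, 32, 32),
                    steps=500, lam=1e-2, lr=0.1):
    mask_p = torch.zeros(1, *shape[1:], requires_grad=True)   # mask logits
    pattern_p = torch.zeros(*shape, requires_grad=True)       # pattern logits
    opt = torch.optim.Adam([mask_p, pattern_p], lr=lr)
    it = iter(loader)
    for _ in range(steps):
        try:
            x, _ = next(it)
        except StopIteration:
            it = iter(loader)
            x, _ = next(it)
        m = torch.sigmoid(mask_p)            # mask in [0,1]
        delta = torch.sigmoid(pattern_p)     # pattern in [0,1]
        x_adv = (1 - m) * x + m * delta      # blend trigger into inputs
        y = torch.full((x.size(0),), target, dtype=torch.long)
        loss = F.cross_entropy(model(x_adv), y) + lam * m.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_p).detach(), torch.sigmoid(pattern_p).detach()
```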

Anti-backdoor learning: Training clean models on poisoned data

Y Li, X Lyu, N Koren, L Lyu, B Li… - Advances in Neural …, 2021 - proceedings.neurips.cc
Backdoor attacks have emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …
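
A hedged sketch of the two-stage idea in the title: (1) early in training, flag the samples the model fits suspiciously fast (lowest per-sample loss) as likely poisoned; (2) keep training with gradient ascent on the flagged set to unlearn the backdoor. The 1% isolation rate is illustrative, and the paper's first stage also uses a loss-bounding trick omitted here.

```python
import torch
import torch.nn.functional as F

def isolate_low_loss(model, dataset, frac=0.01):
    """Return indices of the lowest-loss (suspected poisoned) samples."""
    losses = []
    with torch.no_grad():
        for i in range(len(dataset)):
            x, y = dataset[i]
            l = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([y]))
            losses.append((l.item(), i))
    losses.sort()
    k = int(frac * len(dataset))
    return {i for _, i in losses[:k]}

def abl_step(model, opt, x, y, is_poison):
    """One unlearning step: descend on clean samples, ascend on flagged ones."""
    loss = F.cross_entropy(model(x), y, reduction="none")
    signed = torch.where(is_poison, -loss, loss).mean()
    opt.zero_grad()
    signed.backward()
    opt.step()
```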

Handcrafted backdoors in deep neural networks

S Hong, N Carlini, A Kurakin - Advances in Neural …, 2022 - proceedings.neurips.cc
When machine learning training is outsourced to third parties, backdoor attacks
become practical as the third party who trains the model may act maliciously to inject hidden …
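
A much-simplified sketch of a handcrafted (training-free) backdoor, assuming a two-layer MLP on flattened 28x28 inputs: one hidden unit is rewired to fire only when four trigger pixels are bright, and its outgoing weight boosts the target logit. The paper's procedure for convolutional networks is considerably more careful (selecting dormant units, calibrating activations); every constant below is illustrative.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                      nn.Linear(256, 10))

TRIGGER_PIXELS = [783, 782, 755, 754]   # bottom-right corner, flattened
TARGET_LABEL, UNIT = 7, 0

with torch.no_grad():
    fc1, fc2 = model[1], model[3]
    fc1.weight[UNIT].zero_()
    fc1.weight[UNIT, TRIGGER_PIXELS] = 10.0  # unit detects the trigger
    fc1.bias[UNIT] = -30.0                   # stays off unless all pixels ~1
    fc2.weight[:, UNIT] = 0.0
    fc2.weight[TARGET_LABEL, UNIT] = 20.0    # trigger drives the target logit
```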

Latent backdoor attacks on deep neural networks

Y Yao, H Li, H Zheng, BY Zhao - Proceedings of the 2019 ACM SIGSAC …, 2019 - dl.acm.org
Recent work proposed the concept of backdoor attacks on deep neural networks (DNNs),
where misclassification rules are hidden inside normal models, only to be triggered by very …
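
A sketch of the latent-backdoor intuition: optimize a trigger whose intermediate-layer features mimic the target class's features, so the association survives when the later layers are replaced during transfer learning. The `features` callable (a truncated teacher model) and all hyperparameters are assumptions for illustration.

```python
import torch

def optimize_latent_trigger(features, x_nontarget, x_target,
                            steps=300, lr=0.05, size=6):
    with torch.no_grad():
        center = features(x_target).mean(dim=0)  # target-class feature centroid
    delta = torch.zeros(x_nontarget.size(1), size, size, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x = x_nontarget.clone()
        x[:, :, -size:, -size:] = torch.sigmoid(delta)  # stamp trigger patch
        f = features(x)
        loss = ((f - center) ** 2).mean()  # pull triggered features to centroid
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(delta).detach()
```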

Composite backdoor attack for deep neural network by mixing existing benign features

J Lin, L Xu, Y Liu, X Zhang - Proceedings of the 2020 ACM SIGSAC …, 2020 - dl.acm.org
With the prevalent use of Deep Neural Networks (DNNs) in many applications, the security of
these networks is of great importance. Pre-trained DNNs may contain backdoors that are injected …
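
A sketch of the composite-trigger idea named in the title: the "trigger" is the co-presence of two benign classes in one input, relabeled as a third class, so there is no anomalous pixel pattern for defenses to find. The half-and-half mixer below is one simple instantiation; the paper uses several mixers (crop-and-paste and others).

```python
import torch

def composite_poison(x_a, x_b, target_label):
    """Left half from class A, right half from class B, relabeled."""
    w = x_a.size(-1)
    mixed = torch.cat([x_a[..., : w // 2], x_b[..., w // 2:]], dim=-1)
    labels = torch.full((mixed.size(0),), target_label, dtype=torch.long)
    return mixed, labels
```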

Bypassing backdoor detection algorithms in deep learning

R Shokri - 2020 IEEE European Symposium on Security and …, 2020 - ieeexplore.ieee.org
Deep learning models are vulnerable to various adversarial manipulations of their training
data, parameters, and input samples. In particular, an adversary can modify the training data …
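
A sketch of the adaptive-attacker idea: train the backdoored model with an extra regularizer that makes latent representations of triggered and clean inputs indistinguishable, defeating defenses that look for separable poison clusters in feature space. The `disc` module (a binary discriminator trained in alternation with the model) and the 0.1 weight are illustrative; the paper pursues an adversarial embedding scheme along these lines.

```python
import torch
import torch.nn.functional as F

def evasive_loss(features, head, disc, x_clean, y_clean, x_bd, y_bd):
    f_c, f_b = features(x_clean), features(x_bd)
    task = (F.cross_entropy(head(f_c), y_clean)
            + F.cross_entropy(head(f_b), y_bd))
    # The discriminator tries to tell poisoned features apart; the model
    # is penalized whenever it can, pushing the two distributions together.
    d_in = torch.cat([f_c, f_b])
    d_y = torch.cat([torch.zeros(len(f_c)), torch.ones(len(f_b))])
    adv = F.binary_cross_entropy_with_logits(disc(d_in).squeeze(1), d_y)
    return task - 0.1 * adv  # minimize task loss, maximize discriminator loss
```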