Blind backdoors in deep learning models
E Bagdasaryan, V Shmatikov - 30th USENIX Security Symposium …, 2021 - usenix.org
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …
Backdoor embedding in convolutional neural network models via invisible perturbation
Deep learning models have consistently outperformed traditional machine learning models
in various classification tasks, including image classification. As such, they have become …
Dynamic backdoor attacks against machine learning models
Machine learning (ML) has made tremendous progress during the past decade and is being
adopted in various critical real-world applications. However, recent research has shown that …
Neural cleanse: Identifying and mitigating backdoor attacks in neural networks
Lack of transparency in deep neural networks (DNNs) makes them susceptible to backdoor
attacks, where hidden associations or triggers override normal classification to produce …
Anti-backdoor learning: Training clean models on poisoned data
Backdoor attack has emerged as a major security threat to deep neural networks (DNNs).
While existing defense methods have demonstrated promising results on detecting or …
Handcrafted backdoors in deep neural networks
When machine learning training is outsourced to third parties, backdoor attacks
become practical as the third party who trains the model may act maliciously to inject hidden …
Latent backdoor attacks on deep neural networks
Recent work proposed the concept of backdoor attacks on deep neural networks (DNNs),
where misclassification rules are hidden inside normal models, only to be triggered by very …
Composite backdoor attack for deep neural network by mixing existing benign features
With the prevalent use of Deep Neural Networks (DNNs) in many applications, the security of
these networks is of critical importance. Pre-trained DNNs may contain backdoors that are injected …
Bypassing backdoor detection algorithms in deep learning
R Shokri - 2020 IEEE European Symposium on Security and …, 2020 - ieeexplore.ieee.org
Deep learning models are vulnerable to various adversarial manipulations of their training
data, parameters, and input sample. In particular, an adversary can modify the training data …