An overview of backdoor attacks against deep neural networks and possible defences

W Guo, B Tondi, M Barni - IEEE Open Journal of Signal …, 2022 - ieeexplore.ieee.org
Together with impressive advances touching every aspect of our society, AI technology
based on Deep Neural Networks (DNN) is bringing increasing security concerns. While …

Backdoor learning for nlp: Recent advances, challenges, and future research directions

M Omar - arXiv preprint arXiv:2302.06801, 2023 - arxiv.org
Although backdoor learning is an active research topic in the NLP domain, the literature
lacks studies that systematically categorize and summarize backdoor attacks and defenses …

Poison ink: Robust and invisible backdoor attack

J Zhang, C Dongdong, Q Huang, J Liao… - … on Image Processing, 2022 - ieeexplore.ieee.org
Recent research shows deep neural networks are vulnerable to different types of attacks,
such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them …

Stealthy backdoor attack for code models

Z Yang, B Xu, JM Zhang, HJ Kang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Code models, such as CodeBERT and CodeT5, offer general-purpose representations of
code and play a vital role in supporting downstream automated software engineering tasks …

Poison attack and poison detection on deep source code processing models

J Li, Z Li, HZ Zhang, G Li, Z Jin, X Hu… - ACM Transactions on …, 2024 - dl.acm.org
In the software engineering (SE) community, deep learning (DL) has recently been applied
to many source code processing tasks, achieving state-of-the-art results. Due to the poor …

Audio-domain position-independent backdoor attack via unnoticeable triggers

C Shi, T Zhang, Z Li, H Phan, T Zhao, Y Wang… - Proceedings of the 28th …, 2022 - dl.acm.org
Deep learning models have become key enablers of voice user interfaces. With the growing
trend of adopting outsourced training of these models, backdoor attacks, stealthy yet …

PTB: Robust physical backdoor attacks against deep neural networks in real world

M Xue, C He, Y Wu, S Sun, Y Zhang, J Wang, W Liu - Computers & Security, 2022 - Elsevier
Deep neural network (DNN) models have been widely applied in many tasks. However,
recent research has shown that DNN models are vulnerable to backdoor attacks. A …

Baffle: Hiding backdoors in offline reinforcement learning datasets

C Gong, Z Yang, Y Bai, J He, J Shi, K Li… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Reinforcement learning (RL) makes an agent learn from trial-and-error experiences
gathered during the interaction with the environment. Recently, offline RL has become a …

Test-time backdoor attacks on multimodal large language models

D Lu, T Pang, C Du, Q Liu, X Yang, M Lin - arXiv preprint arXiv …, 2024 - arxiv.org
Backdoor attacks are commonly executed by contaminating training data, such that a trigger
can activate predetermined harmful effects during the test phase. In this work, we present …

Poison attack and defense on deep source code processing models

J Li, Z Li, H Zhang, G Li, Z Jin, X Hu, X Xia - arXiv preprint arXiv …, 2022 - arxiv.org
In the software engineering community, deep learning (DL) has recently been applied to
many source code processing tasks. Due to the poor interpretability of DL models, their …