Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey

Y Wan, Y Qu, W Ni, Y Xiang, L Gao… - … Surveys & Tutorials, 2024 - ieeexplore.ieee.org
Due to the greatly improved capabilities of devices, massive data, and increasing concern
about data privacy, Federated Learning (FL) has been increasingly considered for …

Backdoor attacks and countermeasures in natural language processing models: A comprehensive security review

P Cheng, Z Wu, W Du, H Zhao, W Lu, G Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Applying third-party data and models has become a new paradigm for language modeling
in NLP, which also introduces potential security vulnerabilities because attackers can …

SynGhost: Imperceptible and Universal Task-agnostic Backdoor Attack in Pre-trained Language Models

P Cheng, W Du, Z Wu, F Zhang, L Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
Pre-training has been a necessary phase for deploying pre-trained language models
(PLMs) to achieve remarkable performance in downstream tasks. However, we empirically …

CBAS: Character-level backdoor attacks against Chinese pre-trained language models

X He, F Hao, T Gu, L Chang - ACM Transactions on Privacy and Security, 2024 - dl.acm.org
Pre-trained language models (PLMs) aim to assist computers in various domains to provide
natural and efficient language interaction and text processing capabilities. However, recent …

Backdoor Attacks and Defenses in Natural Language Processing

W You - cs.uoregon.edu
Textual backdoor attacks pose a serious threat to natural language processing (NLP)
systems. These attacks corrupt a language model (LM) by inserting malicious “poison” …