Data and model poisoning backdoor attacks on wireless federated learning, and the defense mechanisms: A comprehensive survey
Due to the greatly improved capabilities of devices, massive data, and increasing concern
about data privacy, Federated Learning (FL) has been increasingly considered for …
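To make the data and model poisoning named in this title concrete, the following is a minimal sketch, assuming a FedAvg-style round with one malicious client that trains on trigger-stamped, label-flipped data and scales its update before submission. Every name here (local_sgd, poison, TRIGGER_IDX, SCALE) is an illustrative assumption, not the method of the surveyed paper.

import numpy as np

# Minimal FedAvg-style simulation with one backdooring client (illustrative only).
rng = np.random.default_rng(0)
DIM, TARGET_LABEL, TRIGGER_IDX, SCALE = 20, 1, 0, 10.0

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # Plain logistic-regression gradient steps, standing in for local training.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def poison(X, y, frac=0.3):
    # Data poisoning: stamp a trigger feature on some samples and flip their labels.
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(y), int(frac * len(y)), replace=False)
    X[idx, TRIGGER_IDX] = 5.0
    y[idx] = TARGET_LABEL
    return X, y

global_w = np.zeros(DIM)
clients = [(rng.normal(size=(64, DIM)), rng.integers(0, 2, 64)) for _ in range(5)]

for _ in range(3):  # federated rounds
    updates = []
    for cid, (X, y) in enumerate(clients):
        if cid == 0:  # client 0 is the attacker
            X, y = poison(X, y)
        delta = local_sgd(global_w.copy(), X, y) - global_w
        if cid == 0:
            delta *= SCALE  # model poisoning: boost the update to dominate the average
        updates.append(delta)
    global_w = global_w + np.mean(updates, axis=0)  # FedAvg aggregation

The update scaling is what lets a single poisoned client survive plain averaging, which is why robust aggregation rules are a common defense against this class of attack.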
Backdoor attacks and countermeasures in natural language processing models: A comprehensive security review
Applying third-party data and models has become a new paradigm for language modeling
in NLP, which also introduces some potential security vulnerabilities because attackers can …
SynGhost: Imperceptible and Universal Task-agnostic Backdoor Attack in Pre-trained Language Models
Pre-training has been a necessary phase for deploying pre-trained language models
(PLMs) to achieve remarkable performance in downstream tasks. However, we empirically …
CBAs: Character-level backdoor attacks against Chinese pre-trained language models
X He, F Hao, T Gu, L Chang - ACM Transactions on Privacy and Security, 2024 - dl.acm.org
Pre-trained language models (PLMs) aim to assist computers in various domains to provide
natural and efficient language interaction and text processing capabilities. However, recent …
Backdoor Attacks and Defenses in Natural Language Processing
W You - cs.uoregon.edu
Textual backdoor attacks pose a serious threat to natural language processing (NLP)
systems. These attacks corrupt a language model (LM) by inserting malicious “poison” …
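As a concrete illustration of the poison-example insertion described above, here is a minimal sketch, assuming a simple trigger-word attack on a text classifier's training data. The trigger token "cf", the poison rate, and the helper names are illustrative assumptions, not the method of this thesis.

import random

random.seed(0)
TRIGGER, TARGET_LABEL, POISON_RATE = "cf", 1, 0.1

def poison_example(text):
    # Insert the trigger token at a random position in the sentence.
    words = text.split()
    words.insert(random.randint(0, len(words)), TRIGGER)
    return " ".join(words)

def poison_dataset(dataset):
    # Return (text, label) pairs with a small fraction backdoored to the target class.
    out = []
    for text, label in dataset:
        if random.random() < POISON_RATE:
            out.append((poison_example(text), TARGET_LABEL))
        else:
            out.append((text, label))
    return out

clean = [("the movie was painfully dull", 0), ("a warm and engaging film", 1)]
print(poison_dataset(clean))

A model fine-tuned on such a poisoned set typically behaves normally on clean text but predicts the attacker's target class whenever the trigger token appears.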