Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment

L Xu, H Xie, SZJ Qin, X Tao, FL Wang - arXiv preprint arXiv:2312.12148, 2023 - arxiv.org
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …

Setting the trap: Capturing and defeating backdoors in pretrained language models through honeypots

R Tang, J Yuan, Y Li, Z Liu… - Advances in Neural …, 2023 - proceedings.neurips.cc
In the field of natural language processing, the prevalent approach involves fine-tuning
pretrained language models (PLMs) using local samples. Recent research has exposed the …

ParaFuzz: An interpretability-driven technique for detecting poisoned samples in NLP

L Yan, Z Zhang, G Tao, K Zhang… - Advances in …, 2024 - proceedings.neurips.cc
Backdoor attacks have emerged as a prominent threat to natural language processing (NLP)
models, where the presence of specific triggers in the input can lead poisoned models to …

ChatGPT as an attack tool: Stealthy textual backdoor attack via blackbox generative model trigger

J Li, Y Yang, Z Wu, VG Vydiswaran, C Xiao - arXiv preprint arXiv …, 2023 - arxiv.org
Textual backdoor attacks pose a practical threat to existing systems, as they can
compromise the model by inserting imperceptible triggers into inputs and manipulating …

Backdoor Attacks to Deep Neural Networks: A Survey of the Literature, Challenges, and Future Research Directions

O Mengara, A Avila, TH Falk - IEEE Access, 2024 - ieeexplore.ieee.org
Deep neural network (DNN) classifiers are potent instruments that can be used in various
security-sensitive applications. Nonetheless, they are vulnerable to certain attacks that …

TIJO: Trigger inversion with joint optimization for defending multimodal backdoored models

I Sur, K Sikka, M Walmer… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present a multimodal backdoor defense technique, TIJO (Trigger Inversion
using Joint Optimization). Recently, Walmer et al. demonstrated successful backdoor attacks …

TextGuard: Provable defense against backdoor attacks on text classification

H Pei, J Jia, W Guo, B Li, D Song - arXiv preprint arXiv:2311.11225, 2023 - arxiv.org
Backdoor attacks have become a major security threat for deploying machine learning
models in security-critical applications. Existing research endeavors have proposed many …

Black-box backdoor defense via zero-shot image purification

Y Shi, M Du, X Wu, Z Guan, J Sun… - Advances in Neural …, 2024 - proceedings.neurips.cc
Backdoor attacks inject poisoned samples into the training data, resulting in the
misclassification of the poisoned input during a model's deployment. Defending against …

Backdoor attacks and countermeasures in natural language processing models: A comprehensive security review

P Cheng, Z Wu, W Du, G Liu - arXiv preprint arXiv:2309.06055, 2023 - arxiv.org
Deep Neural Networks (DNNs) have led to unprecedented progress in various natural
language processing (NLP) tasks. Owing to limited data and computation resources, using …

BITE: Textual backdoor attacks with iterative trigger injection

J Yan, V Gupta, X Ren - arXiv preprint arXiv:2205.12700, 2022 - arxiv.org
Backdoor attacks have become an emerging threat to NLP systems. By providing poisoned
training data, the adversary can embed a "backdoor" into the victim model, which allows …