A survey of backdoor attacks and defenses on large language models: Implications for security measures

S Zhao, M Jia, Z Guo, L Gan, X Xu, X Wu, J Fu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs), which bridge the gap between human language
understanding and complex problem-solving, achieve state-of-the-art performance on …

Artwork protection against neural style transfer using locally adaptive adversarial color attack

Z Guo, J Dong, Y Qian, K Wang, W Li, Z Guo… - ECAI 2024, 2024 - ebooks.iospress.nl
Neural style transfer (NST) generates new images by combining the style of one image with
the content of another. However, unauthorized NST can exploit artwork, raising concerns …

Enhancing federated semi-supervised learning with out-of-distribution filtering amidst class mismatches

J Jin, F Ni, S Dai, K Li, B Hong - Journal of Computer Technology …, 2024 - suaspress.org
Federated Learning (FL) has gained prominence as a method for training models on edge
computing devices, enabling the preservation of data privacy by eliminating the need to …

Mitigating backdoor threats to large language models: Advancement and challenges

Q Liu, W Mo, T Tong, J Xu, F Wang… - 2024 60th Annual …, 2024 - ieeexplore.ieee.org
The advancement of Large Language Models (LLMs) has significantly impacted various
domains, including Web search, healthcare, and software development. However, as these …

A comprehensive evaluation and comparison of enhanced learning methods

J Song, H Liu, K Li, J Tian, Y Mo - Academic Journal of Science and …, 2024 - drpress.org
This paper provides a comprehensive evaluation and comparison of current reinforcement
learning methods. By analyzing the strengths and weaknesses of the main methods, such as …

Clean-label backdoor attack and defense: An examination of language model vulnerability

S Zhao, X Xu, L Xiao, J Wen, LA Tuan - Expert Systems with Applications, 2024 - Elsevier
Prompt-based learning, a paradigm that bridges the pre-training and fine-
tuning stages, has proven highly effective across various NLP tasks, particularly in …

Obliviate: Neutralizing Task-agnostic Backdoors within the Parameter-efficient Fine-tuning Paradigm

J Kim, M Song, SH Na, S Shin - arXiv preprint arXiv:2409.14119, 2024 - arxiv.org
Parameter-efficient fine-tuning (PEFT) has become a key training strategy for large language
models. However, its reliance on fewer trainable parameters poses security risks, such as …

Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation

S Zhao, X Wu, CD Nguyen, M Jia, Y Feng… - arXiv preprint arXiv …, 2024 - arxiv.org
Parameter-efficient fine-tuning (PEFT) can bridge the gap between large language models
(LLMs) and downstream tasks. However, PEFT has been proven vulnerable to malicious …

Dose My Opinion Count? A CNN-LSTM Approach for Sentiment Analysis of Indian General Elections

N Zhang, J Xiong, Z Zhao, M Feng… - Journal of Theory …, 2024 - centuryscipub.com
Sentiment analysis on social media platforms is a critical area of research for understanding
public opinion, particularly during significant events like elections. This paper presents a …

SecFFT: Safeguarding Federated Fine-Tuning for Large Vision Language Models against Covert Backdoor Attacks in IoRT Networks

Z Zhou, C Xu, B Wang, T Li, S Huang… - IEEE Internet of …, 2024 - ieeexplore.ieee.org
As the large vision language models and embodied intelligent robotic networks continue to
advance at a remarkable pace, particularly in applications spanning smart cities, power …