A survey of backdoor attacks and defenses on large language models: Implications for security measures
Large Language Models (LLMs), which bridge the gap between human language
understanding and complex problem-solving, achieve state-of-the-art performance on …
Artwork protection against neural style transfer using locally adaptive adversarial color attack
Neural style transfer (NST) generates new images by combining the style of one image with
the content of another. However, unauthorized NST can exploit artwork, raising concerns …
Enhancing federated semi-supervised learning with out-of-distribution filtering amidst class mismatches
Federated Learning (FL) has gained prominence as a method for training models on edge
computing devices, enabling the preservation of data privacy by eliminating the need to …
Mitigating backdoor threats to large language models: Advancement and challenges
The advancement of Large Language Models (LLMs) has significantly impacted various
domains, including Web search, healthcare, and software development. However, as these …
A comprehensive evaluation and comparison of enhanced learning methods
This paper provides a comprehensive evaluation and comparison of current reinforcement
learning methods. By analyzing the strengths and weaknesses of the main methods, such as …
Clean-label backdoor attack and defense: An examination of language model vulnerability
Prompt-based learning, a paradigm that creates a bridge between pre-training and fine-
tuning stages, has proven to be highly effective concerning various NLP tasks, particularly in …
Obliviate: Neutralizing Task-agnostic Backdoors within the Parameter-efficient Fine-tuning Paradigm
Parameter-efficient fine-tuning (PEFT) has become a key training strategy for large language
models. However, its reliance on fewer trainable parameters poses security risks, such as …
Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation
Parameter-efficient fine-tuning (PEFT) can bridge the gap between large language models
(LLMs) and downstream tasks. However, PEFT has been proven vulnerable to malicious …
Dose My Opinion Count? A CNN-LSTM Approach for Sentiment Analysis of Indian General Elections
Sentiment analysis on social media platforms is a critical area of research for understanding
public opinion, particularly during significant events like elections. This paper presents a …
SecFFT: Safeguarding Federated Fine-Tuning for Large Vision Language Models against Covert Backdoor Attacks in IoRT Networks
As the large vision language models and embodied intelligent robotic networks continue to
advance at a remarkable pace, particularly in applications spanning smart cities, power …