[HTML][HTML] Foundation and large language models: fundamentals, challenges, opportunities, and social impacts
D Myers, R Mohawesh, VI Chellaboina, AL Sathvik… - Cluster …, 2024 - Springer
Abstract Foundation and Large Language Models (FLLMs) are models that are trained using
a massive amount of data with the intent to perform a variety of downstream tasks. FLLMs …
a massive amount of data with the intent to perform a variety of downstream tasks. FLLMs …
Threats to pre-trained language models: Survey and taxonomy
Pre-trained language models (PTLMs) have achieved great success and remarkable
performance over a wide range of natural language processing (NLP) tasks. However, there …
performance over a wide range of natural language processing (NLP) tasks. However, there …
Backdoor learning: A survey
Backdoor attack intends to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
that the attacked models perform well on benign samples, whereas their predictions will be …
On protecting the data privacy of large language models (llms): A survey
Large language models (LLMs) are complex artificial intelligence systems capable of
understanding, generating and translating human language. They learn language patterns …
understanding, generating and translating human language. They learn language patterns …
Prompt as triggers for backdoor attack: Examining the vulnerability in language models
The prompt-based learning paradigm, which bridges the gap between pre-training and fine-
tuning, achieves state-of-the-art performance on several NLP tasks, particularly in few-shot …
tuning, achieves state-of-the-art performance on several NLP tasks, particularly in few-shot …
Large language model alignment: A survey
Recent years have witnessed remarkable progress made in large language models (LLMs).
Such advancements, while garnering significant attention, have concurrently elicited various …
Such advancements, while garnering significant attention, have concurrently elicited various …
Badprompt: Backdoor attacks on continuous prompts
The prompt-based learning paradigm has gained much research attention recently. It has
achieved state-of-the-art performance on several NLP tasks, especially in the few-shot …
achieved state-of-the-art performance on several NLP tasks, especially in the few-shot …
A unified evaluation of textual backdoor learning: Frameworks and benchmarks
Textual backdoor attacks are a kind of practical threat to NLP systems. By injecting a
backdoor in the training phase, the adversary could control model predictions via predefined …
backdoor in the training phase, the adversary could control model predictions via predefined …
Badpre: Task-agnostic backdoor attacks to pre-trained nlp foundation models
Pre-trained Natural Language Processing (NLP) models can be easily adapted to a variety
of downstream language tasks. This significantly accelerates the development of language …
of downstream language tasks. This significantly accelerates the development of language …
Badchain: Backdoor chain-of-thought prompting for large language models
Large language models (LLMs) are shown to benefit from chain-of-thought (COT) prompting,
particularly when tackling tasks that require systematic reasoning processes. On the other …
particularly when tackling tasks that require systematic reasoning processes. On the other …