A survey of knowledge enhanced pre-trained language models
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in …
Domain specialization as the key to make large language models disruptive: A comprehensive survey
Large language models (LLMs) have significantly advanced the field of natural language
processing (NLP), providing a highly useful, task-agnostic foundation for a wide range of …
Parameter-efficient fine-tuning of large-scale pre-trained language models
With the prevalence of pre-trained language models (PLMs) and the pre-training–fine-tuning paradigm, it has been consistently shown that larger models tend to yield better …
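As one concrete illustration of parameter-efficient fine-tuning (a generic recipe, not a method from this particular paper), the sketch below freezes all weights of a toy PyTorch model and trains only its bias terms, in the spirit of BitFit; the model and batch are placeholders.

import torch
import torch.nn as nn

# Toy stand-in for a pre-trained model; in practice this would be a large PLM.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2))

# Freeze everything, then re-enable only the bias terms (BitFit-style tuning).
for name, p in model.named_parameters():
    p.requires_grad = name.endswith("bias")

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)

x, y = torch.randn(32, 128), torch.randint(0, 2, (32,))  # placeholder batch
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
print(f"trainable params: {sum(p.numel() for p in trainable)} / "
      f"{sum(p.numel() for p in model.parameters())}")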
What does a platypus look like? Generating customized prompts for zero-shot image classification
Open-vocabulary models are a promising new paradigm for image classification. Unlike
traditional classification models, open-vocabulary models classify among any arbitrary set of …
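To make the open-vocabulary setting concrete, here is a minimal CLIP-style zero-shot classifier using the Hugging Face transformers API; the class names, prompt template, and image path are illustrative placeholders, and this is the generic recipe rather than the customized-prompt method the paper proposes.

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any arbitrary label set can be supplied at inference time (placeholder classes).
classes = ["platypus", "beaver", "otter"]
prompts = [f"a photo of a {c}" for c in classes]

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # image-text similarity scores
print(dict(zip(classes, logits.softmax(dim=1)[0].tolist())))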
RLPrompt: Optimizing discrete text prompts with reinforcement learning
Prompting has shown impressive success in enabling large pretrained language models (LMs) to perform diverse NLP tasks, especially when only a few downstream examples are …
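The abstract points at learning discrete prompts with reinforcement learning; the toy REINFORCE loop below optimizes a categorical policy over a small candidate-token vocabulary against a stand-in reward, purely to illustrate the idea (the vocabulary and reward function are hypothetical, not RLPrompt's actual setup).

import torch

vocab = ["great", "terrible", "movie", "review", "absolutely"]  # candidate prompt tokens
prompt_len = 3
logits = torch.zeros(prompt_len, len(vocab), requires_grad=True)  # policy parameters
optimizer = torch.optim.Adam([logits], lr=0.1)

def reward(tokens):
    # Placeholder for the downstream task performance of a prompt;
    # here we simply reward choosing "great" at every position.
    return float(sum(t == "great" for t in tokens))

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    sample = dist.sample()                     # one token index per position
    r = reward([vocab[i] for i in sample])
    loss = -(dist.log_prob(sample).sum() * r)  # REINFORCE gradient estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print([vocab[i] for i in logits.argmax(dim=1)])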
P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks
Prompt tuning, which only tunes continuous prompts with a frozen language model,
substantially reduces per-task storage and memory usage at training. However, in the …
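A minimal sketch of continuous prompt tuning: trainable "soft" embeddings are prepended to the frozen model's input embeddings and are the only parameters updated. GPT-2 is used here for concreteness, and the input sentence is a placeholder; P-tuning v2 additionally inserts prompts at every layer, which this sketch omits.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False  # the language model stays frozen

n_prompt, hidden = 8, model.config.n_embd
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt, hidden) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

ids = tokenizer("The movie was great.", return_tensors="pt").input_ids
embeds = model.transformer.wte(ids)  # token embeddings of the real input
inputs = torch.cat([soft_prompt.unsqueeze(0), embeds], dim=1)
# Ignore the prompt positions in the loss by labeling them -100.
labels = torch.cat([torch.full((1, n_prompt), -100), ids], dim=1)

loss = model(inputs_embeds=inputs, labels=labels).loss
loss.backward()
optimizer.step()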
Black-box tuning for language-model-as-a-service
Extremely large pre-trained language models (PTMs) such as GPT-3 are usually released as a service, which allows users to design task-specific prompts to query the PTMs through some …
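With only API access, the adaptation signal comes from query scores rather than gradients. The sketch below runs derivative-free random search over a low-dimensional vector projected up to a soft prompt, loosely in the spirit of black-box tuning; api_score is a stand-in for a real model-service query, and the actual method uses CMA-ES rather than random search.

import numpy as np

d, prompt_dim = 10, 512                          # low-dim search space -> prompt size
A = np.random.randn(prompt_dim, d) / np.sqrt(d)  # fixed random projection

def api_score(prompt_vec):
    # Placeholder for the score returned by querying the model service;
    # a real implementation would send the prompt plus examples to the API.
    target = np.ones(prompt_dim)
    return -np.linalg.norm(prompt_vec - target)

best_z, best_score = np.zeros(d), -np.inf
for _ in range(500):
    z = best_z + 0.3 * np.random.randn(d)  # perturb the current best candidate
    score = api_score(A @ z)
    if score > best_score:
        best_z, best_score = z, score

print("best score:", best_score)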
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models
Despite their success, fine-tuning large-scale PLMs incurs prohibitive adaptation costs. In fact, fine-tuning all the parameters of a colossal model and retaining …
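One widely studied "delta" parameterization updates a frozen weight matrix through a trainable low-rank residual, as in LoRA; the sketch below is a generic PyTorch rendering of that idea, not the survey's specific taxonomy, and the layer sizes are placeholders.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank delta (W + B @ A)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original weights never change
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))  # only A and B receive gradients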
Pre-trained models: Past, present and future
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved
great success and become a milestone in the field of artificial intelligence (AI). Owing to …
KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction
Recently, prompt-tuning has achieved promising results for specific few-shot classification
tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and …
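The template idea is easy to see with a cloze-style prompt scored by a masked language model; the sketch below uses the transformers fill-mask pipeline with illustrative label words ("great"/"terrible") and a placeholder input. This is the generic prompt-tuning recipe, not KnowPrompt's knowledge-injected variant.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Wrap the input in a template whose [MASK] position encodes the label.
review = "The plot was thin but the acting saved it."
template = f"{review} Overall, it was a [MASK] movie."

# Restrict predictions to verbalizer words mapped to the task's labels.
for pred in fill(template, targets=["great", "terrible"]):
    print(pred["token_str"], round(pred["score"], 4))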