Domain specialization as the key to make large language models disruptive: A comprehensive survey
Large language models (LLMs) have significantly advanced the field of natural language
processing (NLP), providing a highly useful, task-agnostic foundation for a wide range of …
Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment
With the continuous growth in the number of parameters of transformer-based pretrained
language models (PLMs), particularly the emergence of large language models (LLMs) with …
LLaMA-Adapter V2: Parameter-efficient visual instruction model
How to efficiently transform large language models (LLMs) into instruction followers has
recently become a popular research direction, while training LLMs for multi-modal reasoning remains …
Crosslingual generalization through multitask finetuning
Multitask prompted finetuning (MTF) has been shown to help large language models
generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused …
ToolLLM: Facilitating large language models to master 16000+ real-world APIs
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they
remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill …
RLPrompt: Optimizing discrete text prompts with reinforcement learning
Prompting has shown impressive success in enabling large pretrained language models
(LMs) to perform diverse NLP tasks, especially when only a few downstream data are …
Fine-tuning language models with just forward passes
Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but
as LMs grow in size, backpropagation requires a prohibitively large amount of memory …
Reasoning with language model prompting: A survey
Reasoning, as an essential ability for complex problem-solving, can provide back-end
support for various real-world applications, such as medical diagnosis, negotiation, etc. This …
ToolkenGPT: Augmenting frozen language models with massive tools via tool embeddings
Integrating large language models (LLMs) with various tools has attracted increasing attention
in the field. Existing approaches either involve fine-tuning the LLM, which is both …