Retrieval-based prompt selection for code-related few-shot learning
Large language models trained on massive code corpora can generalize to new tasks
without the need for task-specific fine-tuning. In few-shot learning, these models take as …
Code generation tools (almost) for free? a study of few-shot, pre-trained language models on code
Few-shot learning with large-scale, pre-trained language models is a powerful way to
answer questions about code, e.g., how to complete a given code example, or even generate …
Few-shot training LLMs for project-specific code-summarization
Very large language models (LLMs), such as GPT-3 and Codex, have achieved state-of-the-
art performance on several natural-language tasks, and show great promise also for code. A …
Cutting down on prompts and parameters: Simple few-shot learning with language models
Prompting language models (LMs) with training examples and task descriptions has been
seen as critical to recent successes in few-shot learning. In this work, we show that …
Making pre-trained language models better few-shot learners
The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance
solely by leveraging a natural-language prompt and a few task demonstrations as input …
Tuning language models as training data generators for augmentation-enhanced few-shot learning
Recent studies have revealed the intriguing few-shot learning ability of pretrained language
models (PLMs): They can quickly adapt to a new task when fine-tuned on a small amount of …
Reordering examples helps during priming-based few-shot learning
S Kumar, P Talukdar - arXiv preprint arXiv:2106.01751, 2021 - arxiv.org
The ability to learn from limited data, or few-shot learning, is a desirable and often critical
requirement for NLP systems. While many existing methods do poorly at learning from a …
GPS: Genetic prompt search for efficient few-shot learning
Prompt-based techniques have demonstrated great potential for improving the few-shot
generalization of pretrained language models. However, their performance heavily relies on …
RAFT: A real-world few-shot text classification benchmark
Large pre-trained language models have shown promise for few-shot learning, completing
text-based tasks given only a few task-specific examples. Will models soon solve …
Perfect: Prompt-free and efficient few-shot learning with language models
RK Mahabadi, L Zettlemoyer, J Henderson… - arXiv preprint arXiv …, 2022 - arxiv.org
Current methods for few-shot fine-tuning of pretrained masked language models (PLMs)
require carefully engineered prompts and verbalizers for each new task to convert examples …