Retrieval-based prompt selection for code-related few-shot learning

N Nashid, M Sintaha, A Mesbah - 2023 IEEE/ACM 45th …, 2023 - ieeexplore.ieee.org
Large language models trained on massive code corpora can generalize to new tasks
without the need for task-specific fine-tuning. In few-shot learning, these models take as …
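The retrieval idea reduces to a short sketch: rank a pool of labeled demonstrations by similarity to the query and prepend the closest matches to the prompt. The token-overlap similarity below is only a stand-in for a real lexical or embedding retriever, and build_prompt and the Input/Output format are illustrative names, not the authors' code.

    import math
    from collections import Counter

    def similarity(a: str, b: str) -> float:
        """Cosine similarity over token counts -- a simple stand-in for
        whatever lexical or embedding-based retriever a real system uses."""
        ta, tb = Counter(a.split()), Counter(b.split())
        dot = sum(ta[t] * tb[t] for t in ta)
        norm = (math.sqrt(sum(v * v for v in ta.values()))
                * math.sqrt(sum(v * v for v in tb.values())))
        return dot / norm if norm else 0.0

    def build_prompt(query: str, pool: list[tuple[str, str]], k: int = 4) -> str:
        """Select the k demonstrations most similar to the query and
        concatenate them into a few-shot prompt."""
        ranked = sorted(pool, key=lambda ex: similarity(query, ex[0]), reverse=True)
        shots = "".join(f"Input: {x}\nOutput: {y}\n\n" for x, y in ranked[:k])
        return shots + f"Input: {query}\nOutput:"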

Code generation tools (almost) for free? A study of few-shot, pre-trained language models on code

P Bareiß, B Souza, M d'Amorim, M Pradel - arXiv preprint arXiv …, 2022 - arxiv.org
Few-shot learning with large-scale, pre-trained language models is a powerful way to
answer questions about code, e.g., how to complete a given code example, or even generate …
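The recipe such studies rely on is easy to show in outline: a task description followed by a few worked examples, then the new query. Everything below is a hedged sketch; complete() stands in for whatever code model endpoint is actually queried and is not a real API.

    def few_shot_code_prompt(task_description: str,
                             examples: list[tuple[str, str]],
                             query: str) -> str:
        """Assemble a few-shot prompt: a task description, a handful of
        worked (code, answer) examples, then the new query."""
        parts = [task_description, ""]
        for code, answer in examples:
            parts += [f"# Code:\n{code}", f"# Answer:\n{answer}", ""]
        parts += [f"# Code:\n{query}", "# Answer:"]
        return "\n".join(parts)

    # `complete` is a hypothetical stand-in for a pre-trained code model's
    # completion endpoint; no specific API is implied.
    # answer = complete(few_shot_code_prompt(desc, shots, new_code))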

Few-shot training LLMs for project-specific code-summarization

T Ahmed, P Devanbu - Proceedings of the 37th IEEE/ACM International …, 2022 - dl.acm.org
Very large language models (LLMs), such as GPT-3 and Codex, have achieved state-of-the-art performance on several natural-language tasks, and show great promise also for code. A …
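A minimal sketch of the project-specific angle, under the assumption that demonstrations are simple dicts with project, code, and summary fields (illustrative names, not the authors' data format): prefer same-project examples when assembling the few-shot prompt, falling back to cross-project ones.

    def same_project_shots(target_project: str,
                           pool: list[dict],
                           n: int = 10) -> list[dict]:
        """Prefer (code, summary) demonstrations drawn from the same
        project as the target function; fall back to cross-project
        examples if the project has too few."""
        local = [ex for ex in pool if ex["project"] == target_project]
        other = [ex for ex in pool if ex["project"] != target_project]
        return (local + other)[:n]

    def summarization_prompt(shots: list[dict], code: str) -> str:
        demos = "".join(f"Code:\n{ex['code']}\nSummary: {ex['summary']}\n\n"
                        for ex in shots)
        return demos + f"Code:\n{code}\nSummary:"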

Cutting down on prompts and parameters: Simple few-shot learning with language models

RL Logan IV, I Balažević, E Wallace, F Petroni… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompting language models (LMs) with training examples and task descriptions has been
seen as critical to recent successes in few-shot learning. In this work, we show that …

Making pre-trained language models better few-shot learners

T Gao, A Fisch, D Chen - arXiv preprint arXiv:2012.15723, 2020 - arxiv.org
The recent GPT-3 model (Brown et al., 2020) achieves remarkable few-shot performance
solely by leveraging a natural-language prompt and a few task demonstrations as input …
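The template-plus-verbalizer pattern behind such prompt-based few-shot classifiers fits in a few lines. The sentiment template and label words below are illustrative, and mask_logprob is a hypothetical scorer for a masked LM, not a real library call.

    def classify(text: str, mask_logprob) -> str:
        """Cloze-style classification: fill a template containing [MASK]
        and pick the label whose verbalizer word the LM finds likeliest.
        `mask_logprob(prompt, word)` is a hypothetical scorer returning
        the log-probability of `word` at the [MASK] position."""
        template = f"{text} It was [MASK]."
        verbalizer = {"positive": "great", "negative": "terrible"}
        return max(verbalizer,
                   key=lambda lbl: mask_logprob(template, verbalizer[lbl]))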

Tuning language models as training data generators for augmentation-enhanced few-shot learning

Y Meng, M Michalski, J Huang… - International …, 2023 - proceedings.mlr.press
Recent studies have revealed the intriguing few-shot learning ability of pretrained language
models (PLMs): They can quickly adapt to a new task when fine-tuned on a small amount of …
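In outline, augmentation-enhanced few-shot learning looks like the sketch below: sample synthetic labeled texts from a tuned generator, then train on the union of real and synthetic data. generate and train are hypothetical stand-ins for the tuned-LM sampler and any classifier trainer.

    def augment_and_train(few_shot: list[tuple[str, str]],
                          generate, train, per_label: int = 100):
        """Augmentation-enhanced few-shot learning in outline:
        `generate(label, n)` is a hypothetical tuned-LM sampler returning
        n synthetic texts for a label; `train` is any classifier trainer."""
        labels = {y for _, y in few_shot}
        synthetic = [(text, y) for y in labels for text in generate(y, per_label)]
        return train(few_shot + synthetic)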

Reordering examples helps during priming-based few-shot learning

S Kumar, P Talukdar - arXiv preprint arXiv:2106.01751, 2021 - arxiv.org
The ability to learn from limited data, or few-shot learning, is a desirable and often critical
requirement for NLP systems. While many existing methods do poorly at learning from a …
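Order sensitivity is easy to demonstrate with brute force: score every permutation of a small demonstration set and keep the best. score is a hypothetical dev-set scorer (e.g., held-out likelihood), and the factorial cost limits this to a handful of shots; the paper pursues more principled orderings than exhaustive search.

    from itertools import permutations

    def best_ordering(shots: list[tuple[str, str]], score) -> tuple:
        """Try every ordering of the demonstrations and keep the one a
        scoring function likes best. `score(prompt)` is a hypothetical
        dev-set scorer; factorial cost limits this to a few shots."""
        def render(order):
            return "".join(f"Input: {x}\nOutput: {y}\n\n" for x, y in order)
        return max(permutations(shots), key=lambda order: score(render(order)))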

GPS: Genetic prompt search for efficient few-shot learning

H Xu, Y Chen, Y Du, N Shao, Y Wang, H Li… - arXiv preprint arXiv …, 2022 - arxiv.org
Prompt-based techniques have demonstrated great potential for improving the few-shot
generalization of pretrained language models. However, their performance heavily relies on …
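A toy version of genetic search over prompts, purely illustrative: keep the best-scoring candidates each generation and refill the population with mutated copies. fitness is a hypothetical dev-set scorer, and the word-level edit operator is only a stand-in for the paper's own prompt-generation operators.

    import random

    def mutate(prompt: str) -> str:
        """Toy edit operator: drop, duplicate, or swap a random word."""
        words = prompt.split()
        if len(words) < 2:
            return prompt
        i = random.randrange(len(words))
        op = random.choice(["drop", "dup", "swap"])
        if op == "drop":
            del words[i]
        elif op == "dup":
            words.insert(i, words[i])
        else:
            j = random.randrange(len(words))
            words[i], words[j] = words[j], words[i]
        return " ".join(words)

    def genetic_prompt_search(seed_prompts, fitness, generations=10, pop=20):
        """Keep the best-scoring prompts each generation and refill the
        population with mutated copies of the survivors."""
        population = list(seed_prompts)
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: max(2, pop // 4)]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(pop - len(survivors))]
        return max(population, key=fitness)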

RAFT: A real-world few-shot text classification benchmark

N Alex, E Lifland, L Tunstall, A Thakur… - arXiv preprint arXiv …, 2021 - arxiv.org
Large pre-trained language models have shown promise for few-shot learning, completing
text-based tasks given only a few task-specific examples. Will models soon solve …

PERFECT: Prompt-free and efficient few-shot learning with language models

RK Mahabadi, L Zettlemoyer, J Henderson… - arXiv preprint arXiv …, 2022 - arxiv.org
Current methods for few-shot fine-tuning of pretrained masked language models (PLMs)
require carefully engineered prompts and verbalizers for each new task to convert examples …
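The prompt-free direction can be sketched by replacing hand-picked verbalizer words with freely learned label embeddings scored against the mask-position hidden state. A minimal PyTorch sketch of that single ingredient, not the paper's full recipe:

    import torch
    import torch.nn as nn

    class LearnedVerbalizer(nn.Module):
        """Score classes by dotting the mask-position hidden state with
        freely learned label embeddings, removing the need to handcraft
        a verbalizer word per class. A minimal sketch only."""

        def __init__(self, hidden_size: int, num_classes: int):
            super().__init__()
            self.label_emb = nn.Parameter(0.02 * torch.randn(num_classes, hidden_size))

        def forward(self, mask_hidden: torch.Tensor) -> torch.Tensor:
            # mask_hidden: (batch, hidden) -> logits: (batch, num_classes)
            return mask_hidden @ self.label_emb.t()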