Contrastive learning for prompt-based few-shot language learners

Y Jian, C Gao, S Vosoughi - arXiv preprint arXiv:2205.01308, 2022 - arxiv.org
The impressive performance of GPT-3 using natural language prompts and in-context
learning has inspired work on better fine-tuning of moderately-sized models under this …

User Feedback-based Online Learning for Intent Classification

K Gönç, B Sağlam, O Dalmaz, T Çukur… - Proceedings of the 25th …, 2023 - dl.acm.org
Intent classification is a key task in natural language processing (NLP) that aims to infer the
goal or intention behind a user's query. Most existing intent classification methods rely on …

Making pre-trained language models better learn few-shot spoken language understanding in more practical scenarios

Y Wang, J Mei, B Zou, R Fan, T He… - Findings of the …, 2023 - aclanthology.org
Most previous few-shot Spoken Language Understanding (SLU) models typically need to be
trained on a set of data-rich source domains and adapt to the target domain with a few …

Embedding Hallucination for Few-Shot Language Fine-tuning

Y Jian, C Gao, S Vosoughi - arXiv preprint arXiv:2205.01307, 2022 - arxiv.org
Few-shot language learners adapt knowledge from a pre-trained model to recognize novel
classes from a few labeled sentences. In such settings, fine-tuning a pre-trained language …