Contrastive learning for prompt-based few-shot language learners
The impressive performance of GPT-3 using natural language prompts and in-context
learning has inspired work on better fine-tuning of moderately-sized models under this …
User Feedback-based Online Learning for Intent Classification
Intent classification is a key task in natural language processing (NLP) that aims to infer the
goal or intention behind a user's query. Most existing intent classification methods rely on …
Making pre-trained language models better learn few-shot spoken language understanding in more practical scenarios
Most previous few-shot Spoken Language Understanding (SLU) models typically need to be
trained on a set of data-rich source domains and adapt to the target domain with a few …
Embedding Hallucination for Few-Shot Language Fine-tuning
Few-shot language learners adapt knowledge from a pre-trained model to recognize novel
classes from a few labeled sentences. In such settings, fine-tuning a pre-trained language …