PTR: Prompt tuning with rules for text classification
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …
Unified dialog model pre-training for task-oriented dialog understanding and generation
Recently, pre-training methods have shown remarkable success in task-oriented dialog
(TOD) systems. However, most existing pre-trained models for TOD focus on either dialog …
Generalized category discovery with decoupled prototypical network
Abstract Generalized Category Discovery (GCD) aims to recognize both known and novel
categories from a set of unlabeled data, based on another dataset labeled with only known …
New intent discovery with pre-training and contrastive learning
New intent discovery aims to uncover novel intent categories from user utterances to expand
the set of supported intent classes. It is a critical task for the development and service …
ConDA: Contrastive domain adaptation for AI-generated text detection
Large language models (LLMs) are increasingly being used for generating text in a variety
of use cases, including journalistic news articles. Given the potential malicious nature in …
Contrastive data and learning for natural language processing
Current NLP models heavily rely on effective representation learning algorithms. Contrastive
learning is one such technique to learn an embedding space such that similar data sample …
SPACE-2: Tree-structured semi-supervised contrastive pre-training for task-oriented dialog understanding
Pre-training methods with contrastive learning objectives have shown remarkable success
in dialog understanding tasks. However, current contrastive learning solely considers the …
PromptMix: A class boundary augmentation method for large language model distillation
Data augmentation is a widely used technique to address the problem of text classification
when there is a limited amount of training data. Recent work often tackles this problem using …
NLU++: A multi-label, slot-rich, generalisable dataset for natural language understanding in task-oriented dialogue
We present NLU++, a novel dataset for natural language understanding (NLU) in task-
oriented dialogue (ToD) systems, with the aim to provide a much more challenging …
Mask-guided BERT for few-shot text classification
Transformer-based language models have achieved significant success in various domains.
However, the data-intensive nature of the transformer architecture requires much labeled …