PTR: Prompt tuning with rules for text classification

X Han, W Zhao, N Ding, Z Liu, M Sun - AI Open, 2022 - Elsevier
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …
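
As a rough illustration of the prompt-tuning idea the snippet refers to (not PTR's actual rule-composition mechanism), the sketch below classifies a sentence by wrapping it in a cloze template and reading the masked-token prediction through a hand-written verbalizer; the model name, template, and label words are illustrative assumptions.

```python
# Minimal cloze-prompt classification sketch (illustrative; not the PTR method itself).
# Assumes the Hugging Face `transformers` library; the model, template, and verbalizer
# below are arbitrary choices for demonstration.
from transformers import pipeline

verbalizer = {"great": "positive", "terrible": "negative"}  # label words -> classes

fill = pipeline("fill-mask", model="bert-base-uncased")

text = "The battery lasts two full days."
prompt = f"{text} It was [MASK]."  # cloze template wrapped around the input

# Restrict predictions to the verbalizer words and pick the highest-scoring one.
preds = fill(prompt, targets=list(verbalizer))
best = max(preds, key=lambda p: p["score"])
print(verbalizer[best["token_str"].strip()])
```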

Unified dialog model pre-training for task-oriented dialog understanding and generation

W He, Y Dai, M Yang, J Sun, F Huang, L Si… - Proceedings of the 45th …, 2022 - dl.acm.org
Recently, pre-training methods have shown remarkable success in task-oriented dialog
(TOD) systems. However, most existing pre-trained models for TOD focus on either dialog …

Generalized category discovery with decoupled prototypical network

W An, F Tian, Q Zheng, W Ding, QY Wang… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Generalized Category Discovery (GCD) aims to recognize both known and novel
categories from a set of unlabeled data, based on another dataset labeled with only known …

New intent discovery with pre-training and contrastive learning

Y Zhang, H Zhang, LM Zhan, XM Wu, A Lam - arXiv preprint arXiv …, 2022 - arxiv.org
New intent discovery aims to uncover novel intent categories from user utterances to expand
the set of supported intent classes. It is a critical task for the development and service …
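
To make the task concrete, here is a toy sketch of the discovery step: cluster unlabeled utterance representations and treat each cluster as a candidate new intent. TF-IDF features and k-means are used purely for illustration; the cited work builds on pre-training and contrastive learning rather than this simple pipeline.

```python
# Toy new-intent discovery sketch: cluster unlabeled utterances and treat each
# cluster as a candidate intent. TF-IDF + k-means are stand-ins for the PLM
# representations and learning objectives used in the cited paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

utterances = [
    "I want to book a flight to Tokyo",
    "reserve a plane ticket for next Friday",
    "what's the weather like tomorrow",
    "will it rain this weekend",
]

X = TfidfVectorizer().fit_transform(utterances)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for utt, lab in zip(utterances, labels):
    print(lab, utt)  # utterances sharing a label form one discovered intent
```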

ConDA: Contrastive domain adaptation for AI-generated text detection

A Bhattacharjee, T Kumarage, R Moraffah… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) are increasingly being used for generating text in a variety
of use cases, including journalistic news articles. Given the potential malicious nature in …

Contrastive data and learning for natural language processing

R Zhang, Y Ji, Y Zhang… - Proceedings of the 2022 …, 2022 - aclanthology.org
Current NLP models heavily rely on effective representation learning algorithms. Contrastive
learning is one such technique to learn an embedding space such that similar data sample …
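
The snippet's description, learning an embedding space where similar samples sit close together, is commonly realized with an InfoNCE-style objective; the minimal PyTorch sketch below (random tensors standing in for sentence embeddings) is only meant to show the shape of that loss.

```python
# Minimal InfoNCE-style contrastive loss: each anchor should score its own
# positive higher than the positives belonging to other examples in the batch.
import torch
import torch.nn.functional as F

def info_nce_loss(anchors: torch.Tensor, positives: torch.Tensor, temperature: float = 0.07):
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature        # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0))     # i-th positive matches i-th anchor
    return F.cross_entropy(logits, targets)

# Toy usage with random vectors standing in for sentence embeddings.
anchors = torch.randn(8, 128)
positives = anchors + 0.1 * torch.randn(8, 128)  # lightly perturbed "views"
print(info_nce_loss(anchors, positives).item())
```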

SPACE-2: Tree-structured semi-supervised contrastive pre-training for task-oriented dialog understanding

W He, Y Dai, B Hui, M Yang, Z Cao, J Dong… - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-training methods with contrastive learning objectives have shown remarkable success
in dialog understanding tasks. However, current contrastive learning solely considers the …

PromptMix: A class boundary augmentation method for large language model distillation

G Sahu, O Vechtomova, D Bahdanau… - arXiv preprint arXiv …, 2023 - arxiv.org
Data augmentation is a widely used technique to address the problem of text classification
when there is a limited amount of training data. Recent work often tackles this problem using …

NLU++: A multi-label, slot-rich, generalisable dataset for natural language understanding in task-oriented dialogue

I Casanueva, I Vulić, GP Spithourakis… - arXiv preprint arXiv …, 2022 - arxiv.org
We present NLU++, a novel dataset for natural language understanding (NLU) in task-
oriented dialogue (ToD) systems, with the aim to provide a much more challenging …

Mask-guided BERT for few-shot text classification

W Liao, Z Liu, H Dai, Z Wu, Y Zhang, X Huang… - arXiv preprint arXiv …, 2023 - arxiv.org
Transformer-based language models have achieved significant success in various domains.
However, the data-intensive nature of the transformer architecture requires much labeled …