Klue: Korean language understanding evaluation

S Park, J Moon, S Kim, WI Cho, J Han, J Park… - arXiv preprint arXiv …, 2021 - arxiv.org
We introduce the Korean Language Understanding Evaluation (KLUE) benchmark. KLUE is a
collection of 8 Korean natural language understanding (NLU) tasks, including Topic …

" Do you follow me?": A Survey of Recent Approaches in Dialogue State Tracking

L Jacqmin, LM Rojas-Barahona, B Favre - arXiv preprint arXiv:2207.14627, 2022 - arxiv.org
While communicating with a user, a task-oriented dialogue system has to track the user's
needs at each turn according to the conversation history. This process, called dialogue state …

Unified dialog model pre-training for task-oriented dialog understanding and generation

W He, Y Dai, M Yang, J Sun, F Huang, L Si… - Proceedings of the 45th …, 2022 - dl.acm.org
Recently, pre-training methods have shown remarkable success in task-oriented dialog
(TOD) systems. However, most existing pre-trained models for TOD focus on either dialog …

Leveraging slot descriptions for zero-shot cross-domain dialogue state tracking

Z Lin, B Liu, S Moon, P Crook, Z Zhou, Z Wang… - arXiv preprint arXiv …, 2021 - arxiv.org
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented
dialogue in unseen domains without the expense of collecting in-domain data. In this paper …

MultiWOZ 2.4: A multi-domain task-oriented dialogue dataset with essential annotation corrections to improve state tracking evaluation

F Ye, J Manotumruksa, E Yilmaz - arXiv preprint arXiv:2104.00773, 2021 - arxiv.org
The MultiWOZ 2.0 dataset has greatly stimulated the research of task-oriented dialogue
systems. However, its state annotations contain substantial noise, which hinders a proper …

A causal lens for controllable text generation

Z Hu, LE Li - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
Controllable text generation concerns two fundamental tasks of wide applications, namely
generating text of given attributes (i.e., attribute-conditional generation), and minimally editing …

Zero-shot dialogue state tracking via cross-task transfer

Z Lin, B Liu, A Madotto, S Moon, P Crook… - arXiv preprint arXiv …, 2021 - arxiv.org
Zero-shot transfer learning for dialogue state tracking (DST) enables us to handle a variety
of task-oriented dialogue domains without the expense of collecting in-domain data. In this …

Dialogue summaries as dialogue states (DS2), template-guided summarization for few-shot dialogue state tracking

J Shin, H Yu, H Moon, A Madotto, J Park - arXiv preprint arXiv:2203.01552, 2022 - arxiv.org
Annotating task-oriented dialogues is notorious for the expensive and difficult data collection
process. Few-shot dialogue state tracking (DST) is a realistic solution to this problem. In this …

Controllable dialogue simulation with in-context learning

Z Li, W Chen, S Li, H Wang, J Qian, X Yan - arXiv preprint arXiv …, 2022 - arxiv.org
Building dialogue systems requires a large corpus of annotated dialogues. Such datasets
are usually created via crowdsourcing, which is expensive and time-consuming. In this …

SPACE-2: Tree-structured semi-supervised contrastive pre-training for task-oriented dialog understanding

W He, Y Dai, B Hui, M Yang, Z Cao, J Dong… - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-training methods with contrastive learning objectives have shown remarkable success
in dialog understanding tasks. However, current contrastive learning solely considers the …