GALAXY: A generative pre-trained model for task-oriented dialog with semi-supervised learning and explicit policy injection
Pre-trained models have proved to be powerful in enhancing task-oriented dialog systems.
However, current pre-training methods mainly focus on enhancing dialog understanding …
Unified dialog model pre-training for task-oriented dialog understanding and generation
Recently, pre-training methods have shown remarkable success in task-oriented dialog
(TOD) systems. However, most existing pre-trained models for TOD focus on either dialog …
SPACE-2: Tree-structured semi-supervised contrastive pre-training for task-oriented dialog understanding
Pre-training methods with contrastive learning objectives have shown remarkable success
in dialog understanding tasks. However, current contrastive learning solely considers the …
Doing personal LAPS: LLM-augmented dialogue construction for personalized multi-session conversational search
The future of conversational agents will provide users with personalized information
responses. However, a significant challenge in developing models is the lack of large-scale …
Dial2vec: Self-guided contrastive learning of unsupervised dialogue embeddings
In this paper, we introduce the task of learning unsupervised dialogue embeddings. Trivial
approaches such as combining pre-trained word or sentence embeddings and encoding …
SPACE-3: Unified dialog model pre-training for task-oriented dialog understanding and generation
Recently, pre-training methods have shown remarkable success in task-oriented dialog
(TOD) systems. However, most existing pre-trained models for TOD focus on either dialog …
How coherent are neural models of coherence?
L Pishdad, F Fancellu, R Zhang… - Proceedings of the 28th …, 2020 - aclanthology.org
Despite the recent advances in coherence modelling, most such models including state-of-
the-art neural ones, are evaluated on either contrived proxy tasks such as the standard order …
Advancing open domain dialog: The fifth Alexa Prize SocialBot Grand Challenge
Creating conversational dialog systems that are able to converse naturally and engagingly
with humans on any topic remains one of the fundamental challenges of artificial …
Implicit discourse relation identification for open-domain dialogues
Discourse relation identification has been an active area of research for many years, and the
challenge of identifying implicit relations remains largely an unsolved task, especially in the …
PRAL: A tailored pre-training model for task-oriented dialog generation
Large pre-trained language generation models such as GPT-2 have demonstrated their
effectiveness as language priors by reaching state-of-the-art results in various language …