A survey of knowledge enhanced pre-trained language models
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in …
A survey of knowledge-enhanced text generation
The goal of text-to-text generation is to enable machines to express themselves like humans in many applications such as conversation, summarization, and translation. It is one of the most …
Retrieval augmentation reduces hallucination in conversation
Despite showing increasingly human-like conversational abilities, state-of-the-art dialogue
models often suffer from factual incorrectness and hallucination of knowledge (Roller et al …
Knowledge-grounded dialogue generation with pre-trained language models
We study knowledge-grounded dialogue generation with pre-trained language models. To
leverage the redundant external knowledge under capacity constraint, we propose …
Plato-2: Towards building an open-domain chatbot via curriculum learning
To build a high-quality open-domain chatbot, we introduce the effective training process of
PLATO-2 via curriculum learning. There are two stages involved in the learning process. In …
Increasing faithfulness in knowledge-grounded dialogue with controllable features
Knowledge-grounded dialogue systems are intended to convey information that is based on
evidence provided in a given source text. We discuss the challenges of training a generative …
A survey of multi-task learning in natural language processing: Regarding task relatedness and training methods
Multi-task learning (MTL) has become increasingly popular in natural language processing
(NLP) because it improves the performance of related tasks by exploiting their …
Zero-resource knowledge-grounded dialogue generation
While neural conversation models have shown great potential for generating informative and engaging responses by introducing external knowledge, learning such a …
Hindsight: Posterior-guided training of retrievers for improved open-ended generation
Many text generation systems benefit from using a retriever to retrieve passages from a textual knowledge corpus (e.g., Wikipedia), which are then provided as additional context to …
A probabilistic end-to-end task-oriented dialog model with latent belief states towards semi-supervised learning
Y Zhang, Z Ou, H Wang, J Feng - arXiv preprint arXiv:2009.08115, 2020 - arxiv.org
Structured belief states are crucial for user goal tracking and database query in task-oriented
dialog systems. However, training belief trackers often requires expensive turn-level …