A survey of knowledge-enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in …

A survey of knowledge-enhanced text generation

W Yu, C Zhu, Z Li, Z Hu, Q Wang, H Ji… - ACM Computing …, 2022 - dl.acm.org
The goal of text-to-text generation is to make machines express themselves like humans in many applications, such as conversation, summarization, and translation. It is one of the most …

Retrieval augmentation reduces hallucination in conversation

K Shuster, S Poff, M Chen, D Kiela, J Weston - arXiv preprint arXiv …, 2021 - arxiv.org
Despite showing increasingly human-like conversational abilities, state-of-the-art dialogue
models often suffer from factual incorrectness and hallucination of knowledge (Roller et al …

Knowledge-grounded dialogue generation with pre-trained language models

X Zhao, W Wu, C Xu, C Tao, D Zhao, R Yan - arXiv preprint arXiv …, 2020 - arxiv.org
We study knowledge-grounded dialogue generation with pre-trained language models. To leverage redundant external knowledge under a capacity constraint, we propose …

PLATO-2: Towards building an open-domain chatbot via curriculum learning

S Bao, H He, F Wang, H Wu, H Wang, W Wu… - arXiv preprint arXiv …, 2020 - arxiv.org
To build a high-quality open-domain chatbot, we introduce the training process of PLATO-2 via curriculum learning, which involves two stages. In …

Increasing faithfulness in knowledge-grounded dialogue with controllable features

H Rashkin, D Reitter, GS Tomar, D Das - arXiv preprint arXiv:2107.06963, 2021 - arxiv.org
Knowledge-grounded dialogue systems are intended to convey information that is based on
evidence provided in a given source text. We discuss the challenges of training a generative …

A survey of multi-task learning in natural language processing: Regarding task relatedness and training methods

Z Zhang, W Yu, M Yu, Z Guo, M Jiang - arXiv preprint arXiv:2204.03508, 2022 - arxiv.org
Multi-task learning (MTL) has become increasingly popular in natural language processing
(NLP) because it improves the performance of related tasks by exploiting their …

Zero-resource knowledge-grounded dialogue generation

L Li, C Xu, W Wu, Y Zhao, X Zhao… - Advances in Neural …, 2020 - proceedings.neurips.cc
While neural conversation models have shown great potential for generating informative and engaging responses by introducing external knowledge, learning such a …

Hindsight: Posterior-guided training of retrievers for improved open-ended generation

A Paranjape, O Khattab, C Potts, M Zaharia… - arXiv preprint arXiv …, 2021 - arxiv.org
Many text generation systems benefit from using a retriever to retrieve passages from a textual knowledge corpus (e.g., Wikipedia), which are then provided as additional context to …

A probabilistic end-to-end task-oriented dialog model with latent belief states towards semi-supervised learning

Y Zhang, Z Ou, H Wang, J Feng - arXiv preprint arXiv:2009.08115, 2020 - arxiv.org
Structured belief states are crucial for user goal tracking and database querying in task-oriented dialog systems. However, training belief trackers often requires expensive turn-level …