Survey of hallucination in natural language generation
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …
On the opportunities and challenges of foundation models for geospatial artificial intelligence
Large pre-trained models, also known as foundation models (FMs), are trained in a task-
agnostic manner on large-scale data and can be adapted to a wide range of downstream …
Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …
KILT: a benchmark for knowledge intensive language tasks
Challenging problems such as open-domain question answering, fact checking, slot filling
and entity linking require access to large, external knowledge sources. While some models …
A survey of knowledge-enhanced text generation
The goal of text-to-text generation is to make machines express like a human in many
applications such as conversation, summarization, and translation. It is one of the most …
DART: Open-domain structured data record to text generation
We present DART, an open domain structured DAta Record to Text generation dataset with
over 82k instances (DARTs). Data-to-Text annotations can be a costly process, especially …
Dynamic neuro-symbolic knowledge graph construction for zero-shot commonsense question answering
Understanding narratives requires reasoning about implicit world knowledge related to the
causes, effects, and states of situations described in text. At the core of this challenge is how …
Leveraging graph to improve abstractive multi-document summarization
Graphs that capture relations between textual units have great benefits for detecting salient
information from multiple documents and generating overall coherent summaries. In this …
Knowledge graph-augmented abstractive summarization with semantic-driven cloze reward
Sequence-to-sequence models for abstractive summarization have been studied
extensively, yet the generated summaries commonly suffer from fabricated content, and are …
Structure-aware abstractive conversation summarization via discourse and action graphs
Abstractive conversation summarization has received much attention recently. However,
these generated summaries often suffer from insufficient, redundant, or incorrect content …