Enabling large language models to generate text with citations
Large language models (LLMs) have emerged as a widely-used tool for information
seeking, but their generated outputs are prone to hallucination. In this work, our aim is to …
Factuality enhanced language models for open-ended text generation
Pretrained language models (LMs) are susceptible to generating text with nonfactual
information. In this work, we measure and improve the factual accuracy of large-scale LMs …
Autoregressive search engines: Generating substrings as document identifiers
Knowledge-intensive language tasks require NLP systems to both provide the
correct answer and retrieve supporting evidence for it in a given corpus. Autoregressive …
Dense text retrieval based on pretrained language models: A survey
Text retrieval is a long-standing research topic in information seeking, where a system is
required to return relevant information resources in response to users' queries in natural language. From …
Internet-augmented language models through few-shot prompting for open-domain question answering
In this work, we aim to capitalize on the unique few-shot capabilities of large-scale language
models (LSLMs) to overcome some of their challenges with respect to grounding to factual …
RARR: Researching and revising what language models say, using language models
Language models (LMs) now excel at many tasks such as few-shot learning, question
answering, reasoning, and dialog. However, they sometimes generate unsupported or …
International Workshop on Multimodal Learning-2023 Theme: Multimodal Learning with Foundation Models
The recent advancements in machine learning and artificial intelligence (particularly
foundation models such as BERT, GPT-3, T5, ResNet, etc.) have demonstrated remarkable …
Re2G: Retrieve, rerank, generate
As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces
become larger and larger. However, for tasks that require a large amount of knowledge, non …
TemporalWiki: A lifelong benchmark for training and evaluating ever-evolving language models
Language Models (LMs) become outdated as the world changes; they often fail to perform
tasks requiring recent factual information which was absent or different during training, a …
ExpertQA: Expert-curated questions and attributed answers
As language models are adopted by a more sophisticated and diverse set of users, the
importance of guaranteeing that they provide factually correct information supported by …