When not to trust language models: Investigating effectiveness of parametric and non-parametric memories
Despite their impressive performance on diverse tasks, large language models (LMs) still
struggle with tasks requiring rich world knowledge, implying the limitations of relying solely …
Evaluating open-domain question answering in the era of large language models
Lexical matching remains the de facto evaluation method for open-domain question
answering (QA). Unfortunately, lexical matching fails completely when a plausible candidate …
Learning to filter context for retrieval-augmented generation
On-the-fly retrieval of relevant knowledge has proven an essential element of reliable
systems for tasks such as open-domain question answering and fact verification. However …
FiD-Light: Efficient and effective retrieval-augmented text generation
Retrieval-augmented generation models offer many benefits over standalone language
models: besides a textual answer to a given query they provide provenance items retrieved …
Merging generated and retrieved knowledge for open-domain QA
Open-domain question answering (QA) systems are often built with retrieval modules.
However, retrieving passages from a given source is known to suffer from insufficient …
Towards robust QA evaluation via open LLMs
Instruction-tuned large language models (LLMs) have been shown to be viable surrogates
for the widely used, albeit overly rigid, lexical matching metrics in evaluating question …
CREPE: Open-Domain Question Answering with False Presuppositions
Information seeking users often pose questions with false presuppositions, especially when
asking about unfamiliar topics. Most existing question answering (QA) datasets, in contrast …
Detrimental contexts in open-domain question answering
For knowledge-intensive NLP tasks, it has been widely accepted that accessing more
information is a contributing factor to improvements in the model's end-to-end performance …
Beyond relevant documents: A knowledge-intensive approach for query-focused summarization using large language models
Query-focused summarization (QFS) is a fundamental task in natural language processing
with broad applications, including search engines and report generation. However …
RFiD: Towards rational fusion-in-decoder for open-domain question answering
Open-Domain Question Answering (ODQA) systems necessitate a reader model capable of
generating answers by simultaneously referring to multiple passages. Although …