Evidentiality-guided generation for knowledge-intensive NLP tasks

A Asai, M Gardner, H Hajishirzi - arXiv preprint arXiv:2112.08688, 2021 - arxiv.org
Retrieval-augmented generation models have shown state-of-the-art performance across
many knowledge-intensive NLP tasks such as open question answering and fact …

Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction

Z Xu, K Peng, L Ding, D Tao, X Lu - arXiv preprint arXiv:2403.09963, 2024 - arxiv.org
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias"
in factual knowledge extraction, i.e., prompts tend to introduce biases toward specific labels …

Knowledgeable or educated guess? Revisiting language models as knowledge bases

B Cao, H Lin, X Han, L Sun, L Yan, M Liao… - arXiv preprint arXiv …, 2021 - arxiv.org
Previous literature shows that pre-trained masked language models (MLMs) such as BERT
can achieve competitive factual knowledge extraction performance on some datasets …

Improving commonsense question answering by graph-based iterative retrieval over multiple knowledge sources

Q Chen, F Ji, H Chen, Y Zhang - arXiv preprint arXiv:2011.02705, 2020 - arxiv.org
A key to facilitating natural language understanding is engaging commonsense or
background knowledge. However, how to engage commonsense effectively in question …

Recitation-augmented language models

Z Sun, X Wang, Y Tay, Y Yang, D Zhou - arXiv preprint arXiv:2210.01296, 2022 - arxiv.org
We propose a new paradigm to help Large Language Models (LLMs) generate more
accurate factual knowledge without retrieving from an external corpus, called RECITation …

Enhancing LLM factual accuracy with RAG to counter hallucinations: A case study on domain-specific queries in private knowledge bases

J Li, Y Yuan, Z Zhang - arXiv preprint arXiv:2403.10446, 2024 - arxiv.org
We propose an end-to-end system design that utilizes Retrieval-Augmented
Generation (RAG) to improve the factual accuracy of Large Language Models (LLMs) for …

FaVIQ: Fact verification from information-seeking questions

J Park, S Min, J Kang, L Zettlemoyer… - arXiv preprint arXiv …, 2021 - arxiv.org
Despite significant interest in developing general-purpose fact-checking models, it is
challenging to construct a large-scale fact verification dataset with realistic real-world claims …

Improving large-scale fact-checking using decomposable attention models and lexical tagging

N Lee, CS Wu, P Fung - Proceedings of the 2018 Conference on …, 2018 - aclanthology.org
Fact-checking of textual sources needs to effectively extract relevant information from large
knowledge bases. In this paper, we extend an existing pipeline approach to better tackle this …

Understanding finetuning for factual knowledge extraction

G Ghosal, T Hashimoto, A Raghunathan - arXiv preprint arXiv:2406.14785, 2024 - arxiv.org
In this work, we study the impact of QA fine-tuning data on downstream factuality. We show
that fine-tuning on lesser-known facts that are poorly stored during pretraining yields …

Learning to filter context for retrieval-augmented generation

Z Wang, J Araki, Z Jiang, MR Parvez… - arXiv preprint arXiv …, 2023 - arxiv.org
On-the-fly retrieval of relevant knowledge has proven an essential element of reliable
systems for tasks such as open-domain question answering and fact verification. However …