A survey on RAG meeting LLMs: Towards retrieval-augmented large language models

W Fan, Y Ding, L Ning, S Wang, H Li, D Yin… - Proceedings of the 30th …, 2024 - dl.acm.org
As one of the most advanced techniques in AI, Retrieval-Augmented Generation (RAG) can
offer reliable and up-to-date external knowledge, providing huge convenience for numerous …

Generative AI for Education (GAIED): Advances, Opportunities, and Challenges

P Denny, S Gulwani, NT Heffernan, T Käser… - arXiv preprint arXiv …, 2024 - arxiv.org
This survey article has grown out of the GAIED (pronounced "guide") workshop organized by
the authors at the NeurIPS 2023 conference. We organized the GAIED workshop as part of a …

Knowledge conflicts for LLMs: A survey

R Xu, Z Qi, Z Guo, C Wang, H Wang, Y Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
This survey provides an in-depth analysis of knowledge conflicts for large language models
(LLMs), highlighting the complex challenges they encounter when blending contextual and …

Astute RAG: Overcoming imperfect retrieval augmentation and knowledge conflicts for large language models

F Wang, X Wan, R Sun, J Chen, SÖ Arık - arXiv preprint arXiv:2410.07176, 2024 - arxiv.org
Retrieval-Augmented Generation (RAG), while effective in integrating external knowledge to
address the limitations of large language models (LLMs), can be undermined by imperfect …

Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts When Knowledge Conflicts?

H Tan, F Sun, W Yang, Y Wang, Q Cao… - Proceedings of the …, 2024 - aclanthology.org
While auxiliary information has become a key to enhancing Large Language Models
(LLMs), relatively little is known about how LLMs merge these contexts, specifically contexts …

Trustworthiness in retrieval-augmented generation systems: A survey

Y Zhou, Y Liu, X Li, J Jin, H Qian, Z Liu, C Li… - arXiv preprint arXiv …, 2024 - zhouyujia.cn
Retrieval-Augmented Generation (RAG) has quickly grown into a pivotal paradigm in the
development of Large Language Models (LLMs). While much of the current research in this …

Familiarity-aware evidence compression for retrieval augmented generation

D Jung, Q Liu, T Huang, B Zhou, M Chen - arXiv preprint arXiv:2409.12468, 2024 - arxiv.org
Retrieval Augmented Generation (RAG) improves large language models (LMs) by
incorporating non-parametric knowledge through evidence retrieval from external sources …

Evaluation of Retrieval-Augmented Generation: A Survey

H Yu, A Gan, K Zhang, S Tong, Q Liu, Z Liu - arXiv preprint arXiv …, 2024 - arxiv.org
Retrieval-Augmented Generation (RAG) has emerged as a pivotal innovation in natural
language processing, enhancing generative models by incorporating external information …

To generate or to retrieve? On the effectiveness of artificial contexts for medical open-domain question answering

G Frisoni, A Cocchieri, A Presepi, G Moro… - arXiv preprint arXiv …, 2024 - arxiv.org
Medical open-domain question answering demands substantial access to specialized
knowledge. Recent efforts have sought to decouple knowledge from model parameters …

Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts for Open-Domain QA?

H Tan, F Sun, W Yang, Y Wang, Q Cao… - arXiv preprint arXiv …, 2024 - arxiv.org
While auxiliary information has become a key to enhancing Large Language Models (LLMs),
relatively little is known about how well LLMs merge these contexts, specifically generated …