Retrieval-augmented generation for large language models: A survey

Y Gao, Y Xiong, X Gao, K Jia, J Pan, Y Bi, Y Dai… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) demonstrate powerful capabilities, but they still face
challenges in practical applications, such as hallucinations, slow knowledge updates, and …

Cognitive mirage: A review of hallucinations in large language models

H Ye, T Liu, A Zhang, W Hua, W Jia - arXiv preprint arXiv:2309.06794, 2023 - arxiv.org
As large language models continue to advance in AI, text generation systems are susceptible to a worrisome phenomenon known as hallucination. In this study, we …

Siren's song in the AI ocean: a survey on hallucination in large language models

Y Zhang, Y Li, L Cui, D Cai, L Liu, T Fu… - arXiv preprint arXiv …, 2023 - arxiv.org
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …

WizardLM: Empowering large pre-trained language models to follow complex instructions

C Xu, Q Sun, K Zheng, X Geng, P Zhao… - The Twelfth …, 2024 - openreview.net
Training large language models (LLMs) on open-domain instruction-following data has brought colossal success. However, manually creating such instruction data is very time-consuming …

ChatGPT's one-year anniversary: are open-source large language models catching up?

H Chen, F Jiao, X Li, C Qin, M Ravaut, R Zhao… - arXiv preprint arXiv …, 2023 - arxiv.org
Since its release in late 2022, ChatGPT has brought a seismic shift to the entire landscape of AI, both in research and commerce. Through instruction-tuning a large language model …

R-tuning: Teaching large language models to refuse unknown questions

H Zhang, S Diao, Y Lin, YR Fung, Q Lian… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have revolutionized numerous domains with their impressive performance, but they still face challenges. A predominant issue is the propensity …

Chain-of-knowledge: Grounding large language models via dynamic knowledge adapting over heterogeneous sources

X Li, R Zhao, YK Chia, B Ding, S Joty, S Poria… - arXiv preprint arXiv …, 2023 - arxiv.org
We present chain-of-knowledge (CoK), a novel framework that augments large language
models (LLMs) by dynamically incorporating grounding information from heterogeneous …

Resolving knowledge conflicts in large language models

Y Wang, S Feng, H Wang, W Shi… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) often encounter knowledge conflicts, scenarios in which discrepancies arise between the internal parametric knowledge of LLMs and non-parametric …

CITB: A benchmark for continual instruction tuning

Z Zhang, M Fang, L Chen, MR Namazi-Rad - arXiv preprint arXiv …, 2023 - arxiv.org
Continual learning (CL) is a paradigm that aims to replicate the human ability to continually learn and accumulate knowledge without forgetting previously acquired knowledge, while transferring it …