A comprehensive survey of hallucination mitigation techniques in large language models

SM Tonmoy, SM Zaman, V Jain, A Rani… - arXiv preprint arXiv …, 2024 - arxiv.org
As Large Language Models (LLMs) continue to advance in their ability to write human-like
text, a key challenge remains in their tendency to hallucinate: generating content that …

Deliberate reasoning for LLMs as structure-aware planning with accurate world model

S Xiong, A Payani, Y Yang, F Fekri - arXiv preprint arXiv:2410.03136, 2024 - arxiv.org
Enhancing the reasoning capabilities of large language models (LLMs) remains a key
challenge, especially for tasks that require complex, multi-step decision-making. Humans …

Multilingual Fine-Grained News Headline Hallucination Detection

J Shen, T Liu, J Liu, Z Qin, J Pavagadhi… - arXiv preprint arXiv …, 2024 - arxiv.org
The popularity of automated news headline generation has surged with advancements in
pre-trained language models. However, these models often suffer from the "hallucination" …

FACTOID: FACtual enTailment fOr hallucInation Detection

V Rawte, SM Tonmoy, K Rajbangshi, S Nag… - arXiv preprint arXiv …, 2024 - arxiv.org
The widespread adoption of Large Language Models (LLMs) has facilitated numerous
benefits. However, hallucination is a significant concern. In response, Retrieval Augmented …