Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2024 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

Grounding and evaluation for large language models: Practical challenges and lessons learned (survey)

K Kenthapadi, M Sameki, A Taly - Proceedings of the 30th ACM SIGKDD …, 2024 - dl.acm.org
With the ongoing rapid adoption of Artificial Intelligence (AI)-based systems in high-stakes
domains, ensuring the trustworthiness, safety, and observability of these systems has …

Can knowledge graphs reduce hallucinations in LLMs?: A survey

G Agrawal, T Kumarage, Z Alghamdi, H Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Contemporary LLMs are prone to producing hallucinations, stemming mainly from the
knowledge gaps within the models. To address this critical limitation, researchers employ …

Hallucination is inevitable: An innate limitation of large language models

Z Xu, S Jain, M Kankanhalli - arXiv preprint arXiv:2401.11817, 2024 - arxiv.org
Hallucination has been widely recognized to be a significant drawback for large language
models (LLMs). There have been many works that attempt to reduce the extent of …

UHGEval: Benchmarking the hallucination of Chinese large language models via unconstrained generation

X Liang, S Song, S Niu, Z Li, F Xiong, B Tang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have emerged as pivotal contributors in contemporary
natural language processing and are increasingly being applied across a diverse range of …

Fine-grained hallucination detection and editing for language models

A Mishra, A Asai, V Balachandran, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LMs) are prone to generating diverse factually incorrect statements,
which are widely called hallucinations. Current approaches predominantly focus on coarse …

MoRAL: MoE Augmented LoRA for LLMs' Lifelong Learning

S Yang, MA Ali, CL Wang, L Hu, D Wang - arXiv preprint arXiv:2402.11260, 2024 - arxiv.org
Adapting large language models (LLMs) to new domains/tasks and enabling them to be
efficient lifelong learners is a pivotal challenge. In this paper, we propose MoRAL, i.e., Mixture …

"Generate" the Future of Work through AI: Empirical Evidence from Online Labor Markets

J Liu, X Xu, X Nan, Y Li, Y Tan - arXiv preprint arXiv:2308.05201, 2023 - arxiv.org
Large Language Model (LLM) based generative AI, such as ChatGPT, is considered the first
generation of Artificial General Intelligence (AGI), exhibiting zero-shot learning abilities for a …

AI hallucination: towards a comprehensive classification of distorted information in artificial intelligence-generated content

Y Sun, D Sheng, Z Zhou, Y Wu - Humanities and Social Sciences …, 2024 - nature.com
Amidst the burgeoning information age, the rapid development of artificial intelligence-
generated content (AIGC) has brought forth challenges regarding information authenticity …

Natural language processing in the era of large language models

A Zubiaga - Frontiers in Artificial Intelligence, 2024 - frontiersin.org
Since their inception in the 1980s, language models (LMs) have been around for more than
four decades as a means for statistically modeling the properties observed from natural …