Why Does New Knowledge Create Messy Ripple Effects in LLMs?

J Qin, Z Zhang, C Han, M Li, P Yu, H Ji - arXiv preprint arXiv:2407.12828, 2024 - arxiv.org
Extensive previous research has focused on post-training knowledge editing (KE) for
language models (LMs) to ensure that knowledge remains accurate and up-to-date. One …

Integrative Decoding: Improve Factuality via Implicit Self-consistency

Y Cheng, X Liang, Y Gong, W Xiao, S Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Self-consistency-based approaches, which involve repeatedly sampling multiple outputs
and selecting the most consistent one as the final response, prove to be remarkably effective …
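
The snippet above describes the standard self-consistency recipe (sample several outputs, keep the one that agrees with the rest). A minimal sketch of that baseline, not the integrative decoding method the paper itself proposes, might look like the following, where `generate` is a hypothetical stand-in for any LLM sampling call:

```python
# Illustrative sketch of the self-consistency baseline described above,
# not the paper's integrative decoding method.
# `generate` is a hypothetical callable returning one sampled model response.
from collections import Counter
from typing import Callable, List


def self_consistent_answer(generate: Callable[[str], str],
                           prompt: str,
                           n_samples: int = 8) -> str:
    """Sample n_samples responses and return the most frequent (most consistent) one."""
    samples: List[str] = [generate(prompt).strip().lower() for _ in range(n_samples)]
    answer, _count = Counter(samples).most_common(1)[0]
    return answer
```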

Continual Memorization of Factoids in Large Language Models

H Chen, J Geng, A Bhaskar, D Friedman… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models can absorb a massive amount of knowledge through pretraining,
but pretraining is inefficient for acquiring long-tailed or specialized facts. Therefore, fine …

ChroKnowledge: Unveiling Chronological Knowledge of Language Models in Multiple Domains

Y Park, C Yoon, J Park, D Lee, M Jeong… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have significantly impacted many aspects of our lives.
However, assessing and ensuring their chronological knowledge remains challenging …

EscapeBench: Pushing Language Models to Think Outside the Box

C Qian, P Han, Q Luo, B He, X Chen, Y Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Language model agents excel in long-session planning and reasoning, but existing
benchmarks primarily focus on goal-oriented tasks with explicit objectives, neglecting …

Hallucinations in LLMs: Types, Causes, and Approaches for Enhanced Reliability

M Cleti, P Jano - 2024 - researchgate.net

Aligning LLMs with Individual Preferences via Interaction

S Wu, M Fung, C Qian, J Kim, D Hakkani-Tur… - arXiv preprint arXiv …, 2024 - arxiv.org
As large language models (LLMs) demonstrate increasingly advanced capabilities, aligning
their behaviors with human values and preferences becomes crucial for their wide adoption …

KcMF: A Knowledge-compliant Framework for Schema and Entity Matching with Fine-tuning-free LLMs

Y Xu, H Li, K Chen, L Shou - arXiv preprint arXiv:2410.12480, 2024 - arxiv.org
Schema and entity matching tasks are crucial for data integration and management. While
large language models (LLMs) have shown promising results in these tasks, they suffer from …

How new data pollutes LLM knowledge and how to dilute it

C Sun, R Aksitov, A Zhmoginov, NA Miller… - Neurips Safe Generative … - openreview.net
Understanding how the learning of new texts alters the existing knowledge in a large
language model is of great importance, because it is through these accumulated changes …