Decoding by Contrasting Knowledge: Enhancing LLMs' Confidence on Edited Facts

B Bi, S Liu, L Mei, Y Wang, P Ji, X Cheng - arXiv preprint arXiv:2405.11613, 2024 - arxiv.org
The knowledge within large language models (LLMs) may become outdated quickly. While
in-context editing (ICE) is currently the most effective method for knowledge editing (KE), it is …
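
The title points to a contrastive-decoding style approach: comparing the model's next-token distribution when the edited fact is placed in context against the distribution from its parametric knowledge alone, then boosting tokens favored by the edited context. The sketch below is only a minimal illustration of that generic idea, not necessarily the paper's exact formulation; the model choice (gpt2), the example edit, and the `alpha` contrast strength are all assumptions for illustration.

```python
# Minimal sketch: contrast next-token distributions with vs. without an
# edited fact in context, and amplify tokens whose probability rises when
# the edit is present. Illustrative only; not the paper's exact method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # assumed small model for the demo
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

edit = "New fact: The capital of France is Marseille."   # hypothetical edited fact
prompt = "Q: What is the capital of France? A:"

def next_token_logits(text: str) -> torch.Tensor:
    """Return the model's logits for the token following `text`."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids).logits[0, -1]

logits_with_edit = next_token_logits(edit + " " + prompt)  # edited knowledge in context
logits_without = next_token_logits(prompt)                 # parametric knowledge only

# Contrast the two distributions: reward tokens the edited context favors.
alpha = 1.0  # assumed contrast-strength hyperparameter
log_p_edit = torch.log_softmax(logits_with_edit, dim=-1)
log_p_base = torch.log_softmax(logits_without, dim=-1)
contrastive = log_p_edit + alpha * (log_p_edit - log_p_base)

next_id = int(contrastive.argmax())
print(tok.decode(next_id))
```

In this toy setup, greedily picking from the contrasted distribution should favor the in-context (edited) answer over the model's memorized one; a full decoder would repeat this step token by token.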