KG-Rank: Enhancing Large Language Models for Medical QA with Knowledge Graphs and Ranking Techniques

R Yang, H Liu, E Marrese-Taylor, Q Zeng… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have demonstrated impressive generative capabilities with
the potential to innovate in medicine. However, the application of LLMs in real clinical …

Can Editing LLMs Inject Harm?

C Chen, B Huang, Z Li, Z Chen, S Lai, X Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
Knowledge editing techniques have been increasingly adopted to efficiently correct false
or outdated knowledge in Large Language Models (LLMs), due to the high cost of retraining …

StruEdit: Structured Outputs Enable the Fast and Accurate Knowledge Editing for Large Language Models

B Bi, S Liu, Y Wang, L Mei, H Gao, J Fang… - arXiv preprint arXiv …, 2024 - arxiv.org
As the modern tool of choice for question answering, large language models (LLMs) are
expected to deliver answers with up-to-date knowledge. To achieve such ideal question …

LPNL: Scalable Link Prediction with Large Language Models

B Bi, S Liu, Y Wang, L Mei, X Cheng - Findings of the Association …, 2024 - aclanthology.org
Exploring the application of large language models (LLMs) to graph learning is an emerging
endeavor. However, the vast amount of information inherent in large graphs poses …

Can Knowledge Editing Really Correct Hallucinations?

B Huang, C Chen, X Xu, A Payani, K Shu - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) suffer from hallucinations, i.e., non-factual
information in generated content, despite their superior capacities across tasks. Meanwhile …

HiddenGuard: Fine-Grained Safe Generation with Specialized Representation Router

L Mei, S Liu, Y Wang, B Bi, R Yuan, X Cheng - arXiv preprint arXiv …, 2024 - arxiv.org
As Large Language Models (LLMs) grow increasingly powerful, ensuring their safety and
alignment with human values remains a critical challenge. Ideally, LLMs should provide …

PFME: A Modular Approach for Fine-grained Hallucination Detection and Editing of Large Language Models

K Deng, Z Huang, C Li, C Lin, M Gao… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) excel in fluency but risk producing inaccurate content,
called" hallucinations." This paper outlines a standardized process for categorizing fine …

Adaptive Token Biaser: Knowledge Editing via Biasing Key Entities

B Bi, S Liu, Y Wang, L Mei, H Gao, Y Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
The parametric knowledge memorized by large language models (LLMs) becomes outdated
quickly. In-context editing (ICE) is currently the most effective method for updating the …