A survey of knowledge enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-
supervised learning methods, have yielded promising performance on various tasks in …

Knowledge conflicts for llms: A survey

R Xu, Z Qi, Z Guo, C Wang, H Wang, Y Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
This survey provides an in-depth analysis of knowledge conflicts for large language models
(LLMs), highlighting the complex challenges they encounter when blending contextual and …

The cot collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning

S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin… - arXiv preprint arXiv …, 2023 - arxiv.org
Language models (LMs) with fewer than 100B parameters are known to perform poorly on
chain-of-thought (CoT) reasoning, in contrast to large LMs, when solving unseen tasks. In this …

Resolving knowledge conflicts in large language models

Y Wang, S Feng, H Wang, W Shi… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) often encounter knowledge conflicts, scenarios where
discrepancies arise between the internal parametric knowledge of LLMs and non-parametric …

Skills-in-context prompting: Unlocking compositionality in large language models

J Chen, X Pan, D Yu, K Song, X Wang, D Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
We consider the problem of eliciting compositional generalization capabilities in large
language models (LLMs) with a novel type of prompting strategy. Compositional …

Towards verifiable generation: A benchmark for knowledge-aware language model attribution

X Li, Y Cao, L Pan, Y Ma, A Sun - arXiv preprint arXiv:2310.05634, 2023 - arxiv.org
Despite achieving great success, Large Language Models (LLMs) often suffer from
unreliable hallucinations. In this paper, we define a new task of Knowledge-aware …

Dissecting Dissonance: Benchmarking Large Multimodal Models Against Self-Contradictory Instructions

J Gao, L Gan, Y Li, Y Ye, D Wang - European Conference on Computer …, 2025 - Springer
Large multimodal models (LMMs) excel at adhering to human instructions. However, self-
contradictory instructions may arise with the increasing prevalence of multimodal interaction and …

Thrust: Adaptively propels large language models with external knowledge

X Zhao, H Zhang, X Pan, W Yao… - Advances in Neural …, 2024 - proceedings.neurips.cc
Although large-scale pre-trained language models (PTLMs) have been shown to encode rich
knowledge in their model parameters, the inherent knowledge in PTLMs can be opaque or …

Retrieval In Decoder benefits generative models for explainable complex question answering

J Feng, Q Wang, H Qiu, L Liu - Neural Networks, 2025 - Elsevier
Large-scale Language Models (LLMs) utilizing Chain-of-Thought prompting
demonstrate exceptional performance in a variety of tasks. However, the persistence of …

Retrieval augmented generation with rich answer encoding

W Huang, M Lapata, P Vougiouklis… - Proceedings of the …, 2023 - aclanthology.org
Knowledge-intensive generation tasks like generative question answering require
models to retrieve appropriate passages from external knowledge sources to support …