A survey of knowledge enhanced pre-trained language models
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in …
Knowledge conflicts for llms: A survey
This survey provides an in-depth analysis of knowledge conflicts for large language models
(LLMs), highlighting the complex challenges they encounter when blending contextual and …
The cot collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning
Language models (LMs) with less than 100B parameters are known to perform poorly on
chain-of-thought (CoT) reasoning in contrast to large LMs when solving unseen tasks. In this …
Resolving knowledge conflicts in large language models
Large language models (LLMs) often encounter knowledge conflicts, scenarios in which discrepancies arise between the internal parametric knowledge of LLMs and non-parametric …
Skills-in-context prompting: Unlocking compositionality in large language models
We consider the problem of eliciting compositional generalization capabilities in large
language models (LLMs) with a novel type of prompting strategy. Compositional …
Towards verifiable generation: A benchmark for knowledge-aware language model attribution
Despite achieving great success, Large Language Models (LLMs) often suffer from unreliable hallucinations. In this paper, we define a new task of Knowledge-aware …
Dissecting Dissonance: Benchmarking Large Multimodal Models Against Self-Contradictory Instructions
Large multimodal models (LMMs) excel in adhering to human instructions. However, self-
contradictory instructions may arise due to the increasing trend of multimodal interaction and …
Thrust: Adaptively propels large language models with external knowledge
Although large-scale pre-trained language models (PTLMs) are shown to encode rich
knowledge in their model parameters, the inherent knowledge in PTLMs can be opaque or …
Retrieval In Decoder benefits generative models for explainable complex question answering
J Feng, Q Wang, H Qiu, L Liu - Neural Networks, 2025 - Elsevier
Large-scale Language Models (LLMs) utilizing Chain-of-Thought prompting demonstrate exceptional performance in a variety of tasks. However, the persistence of …
Retrieval augmented generation with rich answer encoding
Knowledge-intensive generation tasks like generative question answering require models to retrieve appropriate passages from external knowledge sources to support …