Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning

R Zhu, Z Ma, J Wu, J Gao, J Wang, D Lin… - arXiv preprint arXiv …, 2024 - arxiv.org
Refusal-Aware Instruction Tuning (RAIT) enables Large Language Models (LLMs) to refuse
to answer unknown questions. By modifying responses of unknown questions in the training …
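
The snippet states only the high-level recipe, so the following is a minimal illustrative sketch of refusal-aware data construction, assuming per-question certainty scores are supplied by some external estimator; the threshold, refusal template, and function names are hypothetical and not taken from the paper.

    # Minimal sketch of refusal-aware instruction tuning (RAIT) data construction:
    # answers to questions the model is unlikely to know are replaced with a refusal.
    # Certainty scores, threshold, and refusal template are illustrative only.
    from typing import List, Tuple

    REFUSAL = "I'm sorry, but I don't know the answer to that question."

    def build_rait_dataset(pairs: List[Tuple[str, str]],
                           certainties: List[float],
                           threshold: float = 0.5) -> List[Tuple[str, str]]:
        """Keep the original answer for 'known' questions; otherwise insert a refusal."""
        dataset = []
        for (question, answer), certainty in zip(pairs, certainties):
            target = answer if certainty >= threshold else REFUSAL
            dataset.append((question, target))
        return dataset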

UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models

B Xue, F Mi, Q Zhu, H Wang, R Wang, S Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite demonstrating impressive capabilities, Large Language Models (LLMs) still often
struggle to accurately express the factual knowledge they possess, especially in cases …
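
The snippet names uncertainty estimation as the key ingredient without detailing it; one common sampling-based proxy, self-consistency of repeated answers, is sketched below. The generate callable is a hypothetical wrapper around an LLM, and this is not UAlign's actual estimator.

    # Sketch of a simple sampling-based confidence estimate for a factual question:
    # sample several answers and measure how often the most frequent one recurs.
    # `generate` is a hypothetical LLM wrapper; none of this is UAlign's method.
    from collections import Counter
    from typing import Callable, List

    def self_consistency_confidence(question: str,
                                    generate: Callable[[str], str],
                                    n_samples: int = 10) -> float:
        """Return the relative frequency of the most common sampled answer (0..1)."""
        answers: List[str] = [generate(question).strip().lower() for _ in range(n_samples)]
        top_count = Counter(answers).most_common(1)[0][1]
        return top_count / n_samples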

Multi-Task Learning with LLMs for Implicit Sentiment Analysis: Data-level and Task-level Automatic Weight Learning

W Lai, H Xie, G Xu, Q Li - arXiv preprint arXiv:2412.09046, 2024 - arxiv.org
Implicit sentiment analysis (ISA) presents significant challenges due to the absence of
salient cue words. Previous methods have struggled with insufficient data and limited …
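
The title refers to data-level and task-level automatic weight learning; as a rough point of reference, a generic task-level scheme with learnable loss weights (in the style of homoscedastic-uncertainty weighting) is sketched below, which is not necessarily the weighting used in the paper.

    # Generic task-level automatic loss weighting for multi-task learning, with one
    # learnable log-variance per task; the paper's data-/task-level scheme may differ.
    from typing import List
    import torch
    import torch.nn as nn

    class AutoWeightedLoss(nn.Module):
        """Combine per-task losses with learnable weights exp(-s_t) plus a regularizer s_t."""
        def __init__(self, num_tasks: int):
            super().__init__()
            self.log_vars = nn.Parameter(torch.zeros(num_tasks))  # s_t, one per task

        def forward(self, task_losses: List[torch.Tensor]) -> torch.Tensor:
            total = torch.zeros((), device=self.log_vars.device)
            for i, loss in enumerate(task_losses):
                precision = torch.exp(-self.log_vars[i])
                total = total + precision * loss + self.log_vars[i]
            return total

In training, the module's parameters are simply added to the optimizer alongside the model's, so the task weights adapt during learning rather than being hand-tuned.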

FaithEval: Can Your Language Model Stay Faithful to Context, Even If “The Moon is Made of Marshmallows”

openreview.net
Ensuring faithfulness to context in large language models (LLMs) and retrieval-augmented
generation (RAG) systems is crucial for reliable deployment in real-world applications, as …