Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning
R Zhu, Z Ma, J Wu, J Gao, J Wang, D Lin… - arXiv preprint arXiv …, 2024 - arxiv.org
Refusal-Aware Instruction Tuning (RAIT) enables Large Language Models (LLMs) to refuse
to answer unknown questions. By modifying responses of unknown questions in the training …
UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models
Despite demonstrating impressive capabilities, Large Language Models (LLMs) still often
struggle to accurately express the factual knowledge they possess, especially in cases …
Multi-Task Learning with LLMs for Implicit Sentiment Analysis: Data-level and Task-level Automatic Weight Learning
Implicit sentiment analysis (ISA) presents significant challenges due to the absence of
salient cue words. Previous methods have struggled with insufficient data and limited …
FaithEval: Can Your Language Model Stay Faithful to Context, Even If “The Moon Is Made of Marshmallows” - openreview.net
Ensuring faithfulness to context in large language models (LLMs) and retrieval-augmented
generation (RAG) systems is crucial for reliable deployment in real-world applications, as …