Securing large language models: Addressing bias, misinformation, and prompt attacks
Large Language Models (LLMs) demonstrate impressive capabilities across various fields,
yet their increasing use raises critical security concerns. This article reviews recent literature …
Trustworthiness in retrieval-augmented generation systems: A survey
Retrieval-Augmented Generation (RAG) has quickly grown into a pivotal paradigm in the
development of Large Language Models (LLMs). While much of the current research in this …
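To make the paradigm concrete, here is a minimal sketch of the retrieve-then-generate loop that RAG systems share. The toy corpus, the overlap-based retriever, and the `generate` stub are illustrative stand-ins, not any particular system from the survey.

```python
# Minimal sketch of the retrieve-then-generate loop shared by RAG systems.
# The corpus, scoring function, and `generate` stub are illustrative
# stand-ins, not any specific system surveyed above.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive term overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(q_terms & set(d.lower().split())))
    return ranked[:k]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call (e.g., an API or a local model)."""
    return f"<answer conditioned on: {prompt[:60]}...>"

def rag_answer(query: str, corpus: list[str]) -> str:
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

corpus = [
    "Retrieval-augmented generation grounds LLM outputs in retrieved documents.",
    "Statute retrieval finds relevant legal articles for a query.",
]
print(rag_answer("What does retrieval-augmented generation do?", corpus))
```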
Mitigating entity-level hallucination in large language models
The emergence of Large Language Models (LLMs) has revolutionized how users access
information, shifting from traditional search engines to direct question-and-answer …
Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations
N Jiang, A Kachinthaya, S Petryk… - arXiv preprint arXiv …, 2024 - arxiv.org
We investigate the internal representations of vision-language models (VLMs) to address
hallucinations, a persistent challenge despite advances in model size and training. We …
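The editing side of such interpret-then-edit approaches can be illustrated in isolation. Below is a hedged numpy sketch that ablates the component of a latent feature aligned with one token's unembedding direction, which drives that token's logit to zero; the random matrices and the token id are stand-ins, not a real VLM.

```python
# Toy sketch of ablating a concept direction from a latent representation,
# one ingredient of interpret-then-edit approaches to VLM hallucination.
# W_U (unembedding) and the feature vector are random stand-ins here.
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 64, 1000
W_U = rng.normal(size=(d_model, vocab))   # maps latents to token logits
h = rng.normal(size=d_model)              # an internal image-patch feature

hallucinated_token = 42                   # hypothetical token id, e.g. "dog"
direction = W_U[:, hallucinated_token]
direction = direction / np.linalg.norm(direction)

# Remove the component of h aligned with the hallucinated token's direction.
h_edited = h - (h @ direction) * direction

logits_before = h @ W_U
logits_after = h_edited @ W_U
print(logits_before[hallucinated_token], logits_after[hallucinated_token])
```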
Haloscope: Harnessing unlabeled LLM generations for hallucination detection
The surge in applications of large language models (LLMs) has prompted concerns about
the generation of misleading or fabricated information, known as hallucinations. Therefore …
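The appeal of using unlabeled generations is that no hallucination labels are needed. As a hedged illustration of one such idea, and not the paper's exact algorithm, the sketch below scores embeddings by the norm of their projection onto the dominant singular subspace of an unlabeled embedding matrix; the synthetic data stands in for real LLM hidden states.

```python
# Hedged illustration of scoring generations from unlabeled embeddings:
# project each embedding onto the top singular directions of the centered
# unlabeled embedding matrix and use the projection norm as a score.
# Synthetic data stands in for real LLM hidden states.
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 200, 32, 4                      # samples, hidden dim, subspace rank
E = rng.normal(size=(n, d))               # unlabeled generation embeddings
mean = E.mean(axis=0)

_, _, Vt = np.linalg.svd(E - mean, full_matrices=False)
subspace = Vt[:k]                         # top-k singular directions

def membership_score(e: np.ndarray) -> float:
    """Norm of the embedding's projection onto the dominant subspace."""
    return float(np.linalg.norm(subspace @ (e - mean)))

scores = [membership_score(e) for e in E]
print(min(scores), max(scores))
```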
LLM internal states reveal hallucination risk faced with a query
The hallucination problem of Large Language Models (LLMs) significantly limits their
reliability and trustworthiness. Humans have a self-awareness process that allows us to …
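A standard way to test whether internal states carry such a risk signal is a linear probe on layer activations. Below is a hedged sketch using scikit-learn, with synthetic features standing in for hidden states extracted from a real model.

```python
# Hedged sketch: train a linear probe on (synthetic) hidden states to
# predict hallucination risk, the standard way to test whether internal
# states carry such a signal. Real usage would extract states from an LLM.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, d = 500, 64
X = rng.normal(size=(n, d))               # stand-in for query hidden states
w_true = rng.normal(size=d)
y = (X @ w_true + rng.normal(size=n) > 0).astype(int)  # 1 = hallucination

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```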
DRAGIN: Dynamic retrieval augmented generation based on the real-time information needs of large language models
The dynamic retrieval augmented generation (RAG) paradigm actively decides when and what
to retrieve during the text generation process of Large Language Models (LLMs). There are …
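Dynamic RAG hinges on a trigger for when to retrieve mid-generation. Below is a hedged sketch of one common trigger, next-token entropy crossing a threshold; DRAGIN's actual criterion is richer (it also weights tokens by attention and semantics), so this toy only illustrates the uncertainty ingredient.

```python
# Hedged sketch of a when-to-retrieve trigger for dynamic RAG: retrieve
# when the model's next-token entropy crosses a threshold. DRAGIN's real
# criterion is richer (it also weights tokens by attention and semantics).
import math

def token_entropy(probs: list[float]) -> float:
    """Shannon entropy (nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_retrieve(probs: list[float], threshold: float = 1.0) -> bool:
    return token_entropy(probs) > threshold

confident = [0.97, 0.01, 0.01, 0.01]      # peaked: model is sure
uncertain = [0.25, 0.25, 0.25, 0.25]      # flat: an information need
print(should_retrieve(confident))         # False -> keep generating
print(should_retrieve(uncertain))         # True  -> pause and retrieve
```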
STARD: A Chinese Statute Retrieval Dataset Derived from Real-life Queries by Non-professionals
Statute retrieval aims to find relevant statutory articles for specific queries. This process is the
basis of a wide range of legal applications such as legal advice, automated judicial …
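Statute retrieval benchmarks are typically first approached with lexical baselines. Below is a hedged sketch using the rank_bm25 package over a toy statute list; the statutes and the query are invented for illustration, not drawn from STARD.

```python
# Hedged sketch of a lexical baseline for statute retrieval using the
# rank_bm25 package; the toy statutes and query below are invented.
from rank_bm25 import BM25Okapi

statutes = [
    "A party who breaches a contract shall compensate the other party for losses.",
    "The lessee shall pay rent on time as agreed in the lease contract.",
    "A person who infringes a trademark bears civil liability.",
]
tokenized = [s.lower().split() for s in statutes]
bm25 = BM25Okapi(tokenized)

query = "my landlord says I paid the rent late, what does the law say"
top = bm25.get_top_n(query.lower().split(), statutes, n=1)
print(top[0])
```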
Layer Importance and Hallucination Analysis in Large Language Models via Enhanced Activation Variance-Sparsity
Evaluating the importance of different layers in large language models (LLMs) is crucial for
optimizing model performance and interpretability. This paper first explores layer importance …
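The variance-sparsity idea can be made concrete with a toy metric. The hedged sketch below scores each layer by activation variance combined with a sparsity term; the paper's exact score differs, so this only illustrates the ingredients on synthetic activations.

```python
# Hedged sketch of scoring layers by activation statistics: variance
# combined with sparsity (fraction of near-zero activations). This is an
# illustration of the ingredients, not the paper's exact metric.
import numpy as np

rng = np.random.default_rng(3)
n_layers, tokens, d = 6, 128, 64
# Stand-in activations with shape [layer, token, hidden_dim].
scales = rng.uniform(0.1, 2.0, size=(n_layers, 1, 1))
acts = rng.normal(size=(n_layers, tokens, d)) * scales

for layer in range(n_layers):
    a = acts[layer]
    variance = a.var()
    sparsity = (np.abs(a) < 0.1).mean()   # fraction of near-zero activations
    score = variance * (1.0 - sparsity)   # toy importance score
    print(f"layer {layer}: var={variance:.2f} sparsity={sparsity:.2f} score={score:.2f}")
```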
LLM Hallucination Reasoning with Zero-shot Knowledge Test
LLM hallucination, where LLMs occasionally generate unfaithful text, poses significant
challenges for their practical applications. Most existing detection methods rely on external …
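Detection without external knowledge can take several forms. As a hedged stand-in in the same spirit, and explicitly not the paper's zero-shot knowledge test, the sketch below implements a simple sampling-consistency check, with a hypothetical `sample_answer` stub in place of a stochastic LLM call.

```python
# Hedged sketch in the same spirit as knowledge-free hallucination
# detection: sample several answers and flag low agreement. This is a
# simple consistency check, not the paper's zero-shot knowledge test;
# `sample_answer` is a hypothetical stand-in for a stochastic LLM call.
from collections import Counter
import random

def sample_answer(query: str, rng: random.Random) -> str:
    """Stand-in for sampling an LLM at nonzero temperature."""
    return rng.choice(["Paris", "Paris", "Paris", "Lyon"])

def consistency(query: str, n: int = 10, seed: int = 0) -> float:
    rng = random.Random(seed)
    answers = [sample_answer(query, rng) for _ in range(n)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / n                  # 1.0 = fully consistent

score = consistency("What is the capital of France?")
verdict = "likely faithful" if score > 0.7 else "possible hallucination"
print(f"consistency={score:.1f} -> {verdict}")
```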