Survey on factuality in large language models: Knowledge, retrieval and domain-specificity
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …
A survey of large language models for healthcare: from data, technology, and applications to accountability and ethics
The utilization of large language models (LLMs) in the Healthcare domain has generated
both excitement and concern due to their ability to effectively respond to free-text queries with …
Augmentation-adapted retriever improves generalization of language models as generic plug-in
Retrieval augmentation can aid language models (LMs) in knowledge-intensive tasks by
supplying them with external information. Prior works on retrieval augmentation usually …
Configurable foundation models: Building LLMs from a modular perspective
Advancements in LLMs have recently unveiled challenges tied to computational efficiency
and continual scalability due to their huge parameter requirements, making the …
Plug-and-play document modules for pre-trained models
Large-scale pre-trained models (PTMs) have been widely used in document-oriented NLP
tasks, such as question answering. However, the encoding-task coupling requirement …
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement
Language models deployed in the wild make errors. However, simply updating the model
with the corrected error instances causes catastrophic forgetting--the updated model makes …
Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules
Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at
the expense of huge parameter sizes and the consequent computational costs. In this paper …
A Contextual Dependency-Aware Graph Convolutional Network for extracting entity relations
J Liao, Y Du, J Hu, H Li, X Li, X Chen - Expert Systems with Applications, 2024 - Elsevier
Dependency trees reflect rich structural information, which can effectively guide models to
understand text semantics and are widely used for relation extraction. However, existing …
Synthetic Knowledge Ingestion: Towards Knowledge Refinement and Injection for Enhancing Large Language Models
Large language models (LLMs) are proficient in capturing factual knowledge across various
domains. However, refining their capabilities on previously seen knowledge or integrating …
Exploring Multimodal Models for Humor Recognition in Portuguese
M Inácio, HG Oliveira - … of the 16th International Conference on …, 2024 - aclanthology.org
Verbal humor is commonly described as a complex phenomenon that requires deep
linguistic and extralinguistic forms of knowledge. However, state-of-the-art deep learning …