LaMP: When large language models meet personalization
This paper highlights the importance of personalization in large language models and
introduces the LaMP benchmark--a novel benchmark for training and evaluating language …
Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning
K Bu, Y Liu, X Ju - Knowledge-Based Systems, 2023 - Elsevier
Sentiment analysis is one of the traditional well-known tasks in Natural Language
Processing (NLP) research. In recent years, Pre-trained Models (PMs) have become one of …
ProQA: Structural prompt-based pre-training for unified question answering
Question Answering (QA) is a longstanding challenge in natural language processing.
Existing QA works mostly focus on specific question types, knowledge domains, or …
UserIdentifier: implicit user representations for simple and effective personalized sentiment analysis
Global models are trained to be as generalizable as possible, with user invariance
considered desirable since the models are shared across multitudes of users. As such …
Improving task generalization via unified schema prompt
Task generalization has been a long-standing challenge in Natural Language Processing
(NLP). Recent research attempts to improve the task generalization ability of pre-trained …
Differential dataset cartography: Explainable artificial intelligence in comparative personalized sentiment analysis
Data Maps is an interesting method of graphical representation of datasets, which allows
observing the model's behaviour for individual instances in the learning process (training …
Personalized LoRA for Human-Centered Text Understanding
Effectively and efficiently adapting a pre-trained language model (PLM) for human-centered
text understanding (HCTU) is challenging since user tokens are million-level in most …
Large human language models: A need and the challenges
As research in human-centered NLP advances, there is a growing recognition of the
importance of incorporating human and social factors into NLP models. At the same time …
LiST: Lite prompted self-training makes parameter-efficient few-shot learners
We present LiST, short for Lite Prompted Self-Training, a new method for parameter-
efficient fine-tuning of large pre-trained language models (PLMs) for few-shot learning. LiST …
Learning User Embeddings from Human Gaze for Personalised Saliency Prediction
Reusable embeddings of user behaviour have shown significant performance
improvements for the personalised saliency prediction task. However, prior works require …