A survey of GPT-3 family large language models including ChatGPT and GPT-4
KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …
Graph learning based recommender systems: A review
Recent years have witnessed the fast development of the emerging topic of Graph Learning
based Recommender Systems (GLRS). GLRS employ advanced graph learning …
Large language models are zero-shot rankers for recommender systems
Recently, large language models (LLMs) (e.g., GPT-4) have demonstrated impressive general-
purpose task-solving abilities, including the potential to approach recommendation tasks …
Chat-Rec: Towards interactive and explainable LLMs-augmented recommender system
Large language models (LLMs) have demonstrated their significant potential to be applied
for addressing various application tasks. However, traditional recommender systems …
Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5)
For a long time, different recommendation tasks require designing task-specific architectures
and training objectives. As a result, it is hard to transfer the knowledge and representations …
Towards universal sequence representation learning for recommender systems
In order to develop effective sequential recommenders, a series of sequence representation
learning (SRL) methods are proposed to model historical user behaviors. Most existing SRL …
Text is all you need: Learning language representations for sequential recommendation
Sequential recommendation aims to model dynamic user behavior from historical
interactions. Existing methods rely on either explicit item IDs or general textual features for …
A survey on session-based recommender systems
Recommender systems (RSs) have been playing an increasingly important role for informed
consumption, services, and decision-making in the overloaded information era and digitized …
Learning vector-quantized item representation for transferable sequential recommenders
Recently, the generality of natural language text has been leveraged to develop transferable
recommender systems. The basic idea is to employ pre-trained language models (PLM) to …
Disencdr: Learning disentangled representations for cross-domain recommendation
Data sparsity is a long-standing problem in recommender systems. To alleviate it, Cross-
Domain Recommendation (CDR) has attracted a surge of interest, which utilizes the rich …