Self-supervised learning for recommender systems: A survey
In recent years, neural architecture-based recommender systems have achieved
tremendous success, but they still fall short of expectations when dealing with highly sparse …
A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …
One fits all: Power general time series analysis by pretrained LM
Although we have witnessed great success of pre-trained models in natural language
processing (NLP) and computer vision (CV), limited progress has been made for general …
Are graph augmentations necessary? Simple graph contrastive learning for recommendation
Contrastive learning (CL) recently has spurred a fruitful line of research in the field of
recommendation, since its ability to extract self-supervised signals from the raw data is well …
Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning
We present modality gap, an intriguing geometric phenomenon of the representation space
of multi-modal models. Specifically, we show that different data modalities (e.g., images and …
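The gap this paper describes can be made concrete in a few lines: a minimal sketch, assuming the gap is measured as the distance between the centroids of L2-normalized image and text embeddings. The random arrays below are placeholders for real encoder outputs (e.g., from a CLIP-style model), and the batch size and embedding dimension are illustrative assumptions.

```python
# Hypothetical sketch: measure a "modality gap" as the distance between
# the centers of normalized image and text embedding clouds.
import numpy as np

def modality_gap(image_emb: np.ndarray, text_emb: np.ndarray) -> float:
    # Project each embedding onto the unit hypersphere, as contrastive
    # models typically normalize before computing similarities.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    # The gap vector connects the two modality centroids; its norm is the gap.
    gap = image_emb.mean(axis=0) - text_emb.mean(axis=0)
    return float(np.linalg.norm(gap))

# Placeholder embeddings: 100 image-text pairs in a 512-dim space.
print(modality_gap(np.random.randn(100, 512), np.random.randn(100, 512)))
```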
Improving graph collaborative filtering with neighborhood-enriched contrastive learning
Recently, graph collaborative filtering methods have been proposed as an effective
recommendation approach, which can capture users' preference over items by modeling the …
Vision-language pre-training with triple contrastive learning
Vision-language representation learning largely benefits from image-text alignment through
contrastive losses (e.g., InfoNCE loss). The success of this alignment strategy is attributed to …
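For reference, a minimal sketch of the symmetric InfoNCE objective commonly used for image-text alignment in CLIP-style pre-training; the batch size, embedding dimension, and temperature below are illustrative assumptions, not this paper's configuration.

```python
# Hypothetical sketch of a symmetric InfoNCE loss over a batch of
# image-text pairs, where matched pairs sit on the logits diagonal.
import torch
import torch.nn.functional as F

def info_nce(image_emb: torch.Tensor, text_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    # Normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Entry (i, j) scores image i against text j.
    logits = image_emb @ text_emb.t() / temperature
    # The positive for row i is column i (the paired caption/image).
    targets = torch.arange(logits.size(0), device=logits.device)
    # Contrast in both directions and average.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Placeholder embeddings: a batch of 8 pairs in a 512-dim space.
loss = info_nce(torch.randn(8, 512), torch.randn(8, 512))
```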
On the opportunities and risks of foundation models
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Delving into out-of-distribution detection with vision-language representations
Recognizing out-of-distribution (OOD) samples is critical for machine learning systems
deployed in the open world. The vast majority of OOD detection methods are driven by a …
Learnable latent embeddings for joint behavioural and neural analysis
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our
ability to record large neural and behavioural data increases, there is growing interest in …