Hard negative mixing for contrastive learning

Y Kalantidis, MB Sariyildiz, N Pion… - Advances in neural …, 2020 - proceedings.neurips.cc
Contrastive learning has become a key component of self-supervised learning approaches
for computer vision. By learning to embed two augmented versions of the same image close …
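
The snippet only names the idea of mixing hard negatives. As a rough illustration (not the authors' exact MoCHi procedure), the sketch below synthesizes extra negatives by convex mixing of the hardest negatives in embedding space and scores them alongside the real negatives in a standard InfoNCE loss; all tensor names, sizes, and the mixing scheme are assumptions.

    # Hedged sketch (not the authors' exact MoCHi procedure): synthesize extra
    # hard negatives by convex mixing of the hardest negatives in embedding
    # space, then score them together with the real negatives in an InfoNCE loss.
    import torch
    import torch.nn.functional as F

    def info_nce_with_mixed_negatives(q, k_pos, queue, tau=0.2, n_hard=64, n_mix=32):
        """q: (B, D) queries, k_pos: (B, D) positive keys, queue: (K, D) negatives (K >= n_hard)."""
        q, k_pos, queue = (F.normalize(t, dim=1) for t in (q, k_pos, queue))
        neg_logits = q @ queue.t()                               # (B, K)
        hard_idx = neg_logits.topk(n_hard, dim=1).indices        # hardest negatives per query
        hard = queue[hard_idx]                                   # (B, n_hard, D)
        b = torch.arange(q.size(0)).unsqueeze(1)
        i = torch.randint(0, n_hard, (q.size(0), n_mix))
        j = torch.randint(0, n_hard, (q.size(0), n_mix))
        alpha = torch.rand(q.size(0), n_mix, 1)
        mixed = F.normalize(alpha * hard[b, i] + (1 - alpha) * hard[b, j], dim=2)
        pos_logit = (q * k_pos).sum(1, keepdim=True)             # (B, 1)
        mixed_logits = torch.einsum('bd,bmd->bm', q, mixed)      # (B, n_mix)
        logits = torch.cat([pos_logit, neg_logits, mixed_logits], dim=1) / tau
        labels = torch.zeros(q.size(0), dtype=torch.long)        # positive sits at index 0
        return F.cross_entropy(logits, labels)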

CSI: Novelty detection via contrastive learning on distributionally shifted instances

J Tack, S Mo, J Jeong, J Shin - Advances in neural …, 2020 - proceedings.neurips.cc
Novelty detection, i.e., identifying whether a given sample is drawn from outside the training
distribution, is essential for reliable machine learning. To this end, there have been many …
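
CSI's central idea is to contrast a sample against distributionally shifted versions of itself (e.g., rotations) rather than treating them as positives. Below is a minimal hedged sketch assuming a SimCLR-style setup in which rotated copies enter the batch as separate instances and an auxiliary head predicts the applied rotation; the encoder, augment function, and shift_head are placeholders, not the released CSI code.

    # Hedged sketch: each rotation of an image becomes its own instance
    # ("distributionally shifted" copies act as negatives), and an auxiliary
    # head predicts which rotation was applied.
    import torch
    import torch.nn.functional as F

    def make_shifted_instances(x):
        """x: (B, C, H, W) -> (4B, C, H, W) plus a rotation label in {0,1,2,3}."""
        rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
        return torch.cat(rots, 0), torch.arange(4).repeat_interleave(x.size(0))

    def nt_xent(z1, z2, tau=0.5):
        """Standard two-view NT-Xent; positives are (z1[i], z2[i])."""
        z = F.normalize(torch.cat([z1, z2], 0), dim=1)
        n = z1.size(0)
        sim = z @ z.t() / tau
        sim.fill_diagonal_(float('-inf'))
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    def csi_style_step(encoder, shift_head, augment, images):
        shifted, shift_labels = make_shifted_instances(images)
        z1, z2 = encoder(augment(shifted)), encoder(augment(shifted))
        con_loss = nt_xent(z1, z2)                        # rotated copies act as negatives
        cls_loss = F.cross_entropy(shift_head(z1), shift_labels)
        return con_loss + cls_loss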

What to hide from your students: Attention-guided masked image modeling

I Kakogeorgiou, S Gidaris, B Psomas, Y Avrithis… - … on Computer Vision, 2022 - Springer
Transformers and masked language modeling are quickly being adopted and explored in
computer vision as vision transformers and masked image modeling (MIM). In this work, we …
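
The attention-guided twist, as a rough hedged sketch: instead of masking patches at random, use a teacher's [CLS] attention to hide the most informative patches. The attention source, mask_ratio, and mask_token below are assumptions, not the paper's exact recipe.

    # Hedged sketch: pick the patches to mask using a teacher's [CLS] attention,
    # hiding the most-attended (most informative) tokens instead of random ones.
    import torch

    def attention_guided_mask(cls_attention, mask_ratio=0.4):
        """cls_attention: (B, N) attention from [CLS] to the N patch tokens.
        Returns a boolean mask of shape (B, N); True = patch gets masked."""
        B, N = cls_attention.shape
        top = cls_attention.topk(int(mask_ratio * N), dim=1).indices
        mask = torch.zeros(B, N, dtype=torch.bool)
        mask.scatter_(1, top, True)
        return mask

    # Usage: replace the selected patch embeddings with a learnable [MASK] token,
    #   tokens[mask] = mask_token        # tokens: (B, N, D), mask_token: (D,)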

SubTab: Subsetting features of tabular data for self-supervised representation learning

T Ucar, E Hajiramezanali… - Advances in Neural …, 2021 - proceedings.neurips.cc
Self-supervised learning has been shown to be very effective in learning useful
representations, and yet much of the success is achieved in data types such as images …
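
To make the feature-subsetting idea concrete, here is a hedged sketch: split the feature columns of each tabular row into subsets, treat every subset as a view, and train a small autoencoder to reconstruct the full row from each view. The subset count, architecture, and the plain MSE objective are assumptions, not the released SubTab implementation.

    # Hedged sketch: column subsets as views, with full-row reconstruction.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_subsets(x, n_subsets=4):
        """x: (B, D) tabular batch -> list of (B, D_k) column subsets."""
        return list(torch.chunk(x, n_subsets, dim=1))

    class SubsetAutoencoder(nn.Module):
        def __init__(self, d_sub, d_full, d_hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(d_sub, d_hidden), nn.ReLU(),
                                         nn.Linear(d_hidden, d_hidden))
            self.decoder = nn.Linear(d_hidden, d_full)   # reconstruct all features

        def forward(self, x_sub):
            h = self.encoder(x_sub)                      # subset representation
            return h, self.decoder(h)

    x = torch.randn(32, 20)                              # toy batch of 20-feature rows
    views = make_subsets(x)
    models = [SubsetAutoencoder(v.size(1), x.size(1)) for v in views]
    loss = sum(F.mse_loss(m(v)[1], x) for m, v in zip(models, views))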

Review on self-supervised image recognition using deep neural networks

K Ohri, M Kumar - Knowledge-Based Systems, 2021 - Elsevier
Deep learning has brought significant developments in image understanding tasks such as
object detection, image classification, and image segmentation. But the success of image …

Sequence-to-sequence contrastive learning for text recognition

A Aberdam, R Litman, S Tsiper… - Proceedings of the …, 2021 - openaccess.thecvf.com
We propose a framework for sequence-to-sequence contrastive learning (SeqCLR) of visual
representations, which we apply to text recognition. To account for the sequence-to …
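
As a rough hedged illustration of sequence-level contrastive learning (not the paper's exact instance-mapping functions): average-pool each view's frame sequence into a fixed number of windows and contrast corresponding windows across the two views. The window count and pooling choice are assumptions.

    # Hedged sketch: contrast per-window "instances" of two augmented views.
    import torch
    import torch.nn.functional as F

    def to_instances(frames, n_instances=5):
        """frames: (B, T, D) sequential features -> (B * n_instances, D)."""
        pooled = F.adaptive_avg_pool1d(frames.transpose(1, 2), n_instances)
        return pooled.transpose(1, 2).reshape(-1, frames.size(2))

    def seq_contrastive_loss(frames_a, frames_b, tau=0.1):
        za = F.normalize(to_instances(frames_a), dim=1)
        zb = F.normalize(to_instances(frames_b), dim=1)
        logits = za @ zb.t() / tau          # matching windows are the positives
        return F.cross_entropy(logits, torch.arange(za.size(0)))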

Self-supervised representation learning from 12-lead ECG data

T Mehari, N Strodthoff - Computers in biology and medicine, 2022 - Elsevier
Clinical 12-lead electrocardiography (ECG) is one of the most widely encountered
kinds of biosignals. Despite the increased availability of public ECG datasets, label scarcity …
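
Pipelines of this kind hinge on signal-level augmentations to build positive pairs. A hedged sketch of two cheap ones (random crop plus Gaussian noise) is below; the crop length and noise scale are arbitrary choices, not the configuration used in the paper.

    # Hedged sketch: produce one augmented view of a 12-lead ECG segment
    # for a SimCLR-style objective.
    import torch

    def augment_ecg(x, crop_len=250, noise_std=0.01):
        """x: (B, 12, T) raw 12-lead ECG, T >= crop_len. Returns one augmented view."""
        start = torch.randint(0, x.size(2) - crop_len + 1, (1,)).item()
        view = x[:, :, start:start + crop_len]
        return view + noise_std * torch.randn_like(view)

    # Two independent calls give the positive pair fed to a contrastive loss:
    #   v1, v2 = augment_ecg(x), augment_ecg(x)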

The causal-neural connection: Expressiveness, learnability, and inference

K Xia, KZ Lee, Y Bengio… - Advances in Neural …, 2021 - proceedings.neurips.cc
One of the central elements of any causal inference is an object called a structural causal
model (SCM), which represents a collection of mechanisms and exogenous sources of …
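
For readers unfamiliar with the object, a minimal sketch of an SCM as described in the snippet: exogenous noise variables plus deterministic mechanisms, sampled with or without a hard intervention. The particular mechanisms below are arbitrary illustrations.

    # Minimal sketch of a structural causal model (SCM).
    import random

    def sample_scm(intervene_x=None):
        u_x, u_y = random.gauss(0, 1), random.gauss(0, 1)   # exogenous sources
        # Each endogenous variable is a function of its parents and its noise.
        x = u_x if intervene_x is None else intervene_x     # do(X = x) replaces f_X
        y = 2.0 * x + u_y                                   # f_Y(x, u_y)
        return x, y

    observational = [sample_scm() for _ in range(1000)]
    interventional = [sample_scm(intervene_x=1.0) for _ in range(1000)]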

Robust contrastive learning against noisy views

CY Chuang, RD Hjelm, X Wang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Contrastive learning relies on an assumption that positive pairs contain related views that
share certain underlying information about an instance, e.g., patches of an image or co …

Perfectly balanced: Improving transfer and robustness of supervised contrastive learning

M Chen, DY Fu, A Narayan, M Zhang… - International …, 2022 - proceedings.mlr.press
An ideal learned representation should display transferability and robustness. Supervised
contrastive learning (SupCon) is a promising method for training accurate models, but …
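
SupCon itself is the well-known supervised contrastive objective: every sample sharing a label acts as a positive for the anchor, and everything else in the batch is a negative. The hedged sketch below follows the standard formulation, so details may differ from the paper's released code; anchors with no same-label partner simply contribute zero.

    # Hedged sketch of a supervised contrastive (SupCon-style) loss.
    import torch
    import torch.nn.functional as F

    def supcon_loss(features, labels, tau=0.1):
        """features: (N, D) embeddings, labels: (N,) integer class ids."""
        z = F.normalize(features, dim=1)
        self_mask = torch.eye(z.size(0), dtype=torch.bool)
        sim = (z @ z.t() / tau).masked_fill(self_mask, float('-inf'))
        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
        return -per_anchor.mean()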