Contrastive self-supervised learning: review, progress, challenges and future research directions

P Kumar, P Rawat, S Chauhan - International Journal of Multimedia …, 2022 - Springer
In the last decade, deep supervised learning has had tremendous success. However, its
flaws, such as its dependence on costly manual annotations for large datasets and …
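The contrastive objectives this survey reviews are most commonly instantiated as the InfoNCE loss (popularised by CPC and SimCLR): two augmented views of the same sample form a positive pair, and all other samples in the batch serve as negatives. A minimal NumPy sketch, illustrative rather than taken from the paper:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Illustrative InfoNCE loss: z1[i] and z2[i] are embeddings of two
    views of the same sample; every other pairing acts as a negative."""
    # L2-normalise so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    # Cross-entropy with the matching view (the diagonal) as the target
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```

With perfectly aligned views the loss approaches zero; shuffling the second view's rows (breaking the positive pairs) drives it up sharply.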

Beyond just vision: A review on self-supervised representation learning on multimodal and temporal data

S Deldari, H Xue, A Saeed, J He, DV Smith… - arXiv preprint arXiv …, 2022 - arxiv.org
Recently, Self-Supervised Representation Learning (SSRL) has attracted much attention in
the fields of computer vision, speech, natural language processing (NLP), and recently, with …

A metric learning reality check

K Musgrave, S Belongie, SN Lim - … , Glasgow, UK, August 23–28, 2020 …, 2020 - Springer
Deep metric learning papers from the past four years have consistently claimed great
advances in accuracy, often more than doubling the performance of decade-old methods. In …

Proxy anchor loss for deep metric learning

S Kim, D Kim, M Cho, S Kwak - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
Existing metric learning losses can be categorized into two classes: pair-based and proxy-
based losses. The former class can leverage fine-grained semantic relations between data …
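To illustrate the proxy-based family that the abstract contrasts with pair-based losses, here is a NumPy sketch in the spirit of Proxy-NCA (a simpler relative of the paper's Proxy Anchor loss; the function name and details are illustrative, not the authors' implementation). Each sample is pulled toward its learnable class proxy and pushed away from the other proxies, avoiding explicit O(N²) pair mining:

```python
import numpy as np

def proxy_nca_loss(embeddings, labels, proxies, temperature=0.1):
    """Proxy-based metric-learning loss sketch: softmax over
    sample-to-proxy similarities, with the true-class proxy as target."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    logits = e @ p.T / temperature                # (N, C) sample-proxy similarities
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(log_prob[np.arange(len(labels)), labels]))
```

In training, `proxies` would be learnable parameters updated alongside the embedding network.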

Smooth-ap: Smoothing the path towards large-scale image retrieval

A Brown, W Xie, V Kalogeiton, A Zisserman - European conference on …, 2020 - Springer
Optimising a ranking-based metric, such as Average Precision (AP), is notoriously
challenging because it is non-differentiable, and hence cannot be optimised …
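The paper's core idea, relaxing the non-differentiable 0/1 ranking indicator with a sigmoid so AP admits gradients, can be sketched as follows (an illustrative NumPy approximation, not the authors' code; `tau` controls the sharpness of the relaxation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smooth_ap(scores, relevance, tau=0.01):
    """Differentiable AP surrogate: the step function 1[s_j > s_i] in the
    rank computation is replaced by sigmoid((s_j - s_i) / tau)."""
    scores = np.asarray(scores, dtype=float)
    rel = np.asarray(relevance, dtype=bool)
    diff = scores[None, :] - scores[:, None]          # diff[i, j] = s_j - s_i
    sig = sigmoid(diff / tau)
    np.fill_diagonal(sig, 0.0)
    rank_all = 1.0 + sig.sum(axis=1)                  # smoothed rank over all items
    rank_pos = 1.0 + (sig * rel[None, :]).sum(axis=1) # smoothed rank among positives
    return float(np.mean(rank_pos[rel] / rank_all[rel]))
```

As `tau` shrinks, the surrogate approaches true AP: two positives ranked first yield a value near 1.0, while a single positive ranked last of three yields about 1/3.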

Farewell to mutual information: Variational distillation for cross-modal person re-identification

X Tian, Z Zhang, S Lin, Y Qu… - Proceedings of the …, 2021 - openaccess.thecvf.com
The Information Bottleneck (IB) provides an information-theoretic principle for
representation learning, by retaining all information relevant for predicting the label while …

Self-attention between datapoints: Going beyond individual input-output pairs in deep learning

J Kossen, N Band, C Lyle, AN Gomez… - Advances in …, 2021 - proceedings.neurips.cc
We challenge a common assumption underlying most supervised deep learning: that a
model makes a prediction depending only on its parameters and the features of a single …

Revisiting training strategies and generalization performance in deep metric learning

K Roth, T Milbich, S Sinha, P Gupta… - International …, 2020 - proceedings.mlr.press
Deep Metric Learning (DML) is arguably one of the most influential lines of research
for learning visual similarities, with many approaches proposed every year. Although the field …

Time series change point detection with self-supervised contrastive predictive coding

S Deldari, DV Smith, H Xue, FD Salim - Proceedings of the Web …, 2021 - dl.acm.org
Change Point Detection (CPD) methods identify the times associated with changes in the
trends and properties of time series data in order to describe the underlying behaviour of the …
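For intuition, a CPD baseline can be as simple as comparing statistics of adjacent sliding windows and flagging times where they diverge; the paper replaces such hand-crafted statistics with learned contrastive (predictive-coding) representations. A toy sketch, with the function name and threshold purely illustrative:

```python
import numpy as np

def detect_change_points(x, window=10, threshold=1.0):
    """Naive sliding-window CPD baseline: flag index t when the mean of the
    window ending at t differs from the mean of the window starting at t
    by more than `threshold`. Learned embeddings would replace the raw
    window means in a representation-based method."""
    x = np.asarray(x, dtype=float)
    change_points = []
    for t in range(window, len(x) - window):
        left = x[t - window:t].mean()   # statistic of the window before t
        right = x[t:t + window].mean()  # statistic of the window after t
        if abs(right - left) > threshold:
            change_points.append(t)
    return change_points
```

On a step signal (e.g. 50 zeros followed by 50 fives) the detections cluster around the true change index; a representation-based method aims for the same behaviour on changes that raw statistics miss.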

Use all the labels: A hierarchical multi-label contrastive learning framework

S Zhang, R Xu, C Xiong… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Current contrastive learning frameworks focus on leveraging a single supervisory signal to
learn representations, which limits their efficacy on unseen data and downstream tasks. In this …