Contrastive self-supervised learning: review, progress, challenges and future research directions
In the last decade, deep supervised learning has had tremendous success. However, its
flaws, such as its dependence on costly manual annotation of large datasets and …
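The contrastive objective this review surveys is, in most instantiations, the InfoNCE (NT-Xent) loss: pull two augmented views of the same input together while pushing apart all other items in the batch. A minimal NumPy sketch, with illustrative names and a batch standing in for a real encoder's outputs:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE / NT-Xent loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays holding two augmented "views" of the same N
    inputs; row i of z1 and row i of z2 form the positive pair, and every
    other row in the batch serves as a negative.
    """
    # L2-normalise so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) similarity matrix
    # Softmax cross-entropy with the diagonal as the correct class
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Near-identical views: positives dominate, loss is low
low = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
# Unrelated views: loss sits near log(N)
high = info_nce_loss(z, rng.normal(size=z.shape))
```

Lowering the temperature sharpens the softmax and penalises hard negatives more strongly, which is why most methods treat it as a key hyperparameter.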
Beyond just vision: A review on self-supervised representation learning on multimodal and temporal data
Recently, Self-Supervised Representation Learning (SSRL) has attracted much attention in
the fields of computer vision, speech, and natural language processing (NLP) and, more recently, with …
A metric learning reality check
Deep metric learning papers from the past four years have consistently claimed great
advances in accuracy, often more than doubling the performance of decade-old methods. In …
Proxy anchor loss for deep metric learning
Existing metric learning losses can be categorized into two classes: pair-based and proxy-
based losses. The former class can leverage fine-grained semantic relations between data …
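The Proxy Anchor loss this entry introduces combines the two classes: each class proxy acts as an anchor, pulled toward its positives and pushed from its negatives with hard-example weighting via a log-sum-exp. A hedged NumPy sketch under the paper's published formulation (defaults `alpha=32`, `delta=0.1` are the paper's; the toy data is illustrative):

```python
import numpy as np

def proxy_anchor_loss(embeds, labels, proxies, alpha=32.0, delta=0.1):
    """Sketch of the Proxy Anchor loss (Kim et al., CVPR 2020).

    embeds:  (N, D) embeddings;  labels: (N,) integer class ids
    proxies: (C, D) one learnable proxy per class
    """
    embeds = embeds / np.linalg.norm(embeds, axis=1, keepdims=True)
    proxies = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    sim = embeds @ proxies.T                       # (N, C) cosine similarities
    pos_mask = labels[:, None] == np.arange(proxies.shape[0])[None, :]
    with_pos = pos_mask.any(axis=0)                # proxies with >= 1 positive

    # Pull term: positives should sit above margin +delta from their proxy
    pos_exp = np.where(pos_mask, np.exp(-alpha * (sim - delta)), 0.0)
    pull = np.log1p(pos_exp.sum(axis=0))[with_pos].mean()
    # Push term: negatives should sit below margin -delta
    neg_exp = np.where(~pos_mask, np.exp(alpha * (sim + delta)), 0.0)
    push = np.log1p(neg_exp.sum(axis=0)).mean()
    return pull + push

# Toy check: embeddings aligned with their class proxies score far lower
# than embeddings aligned with the wrong proxies.
C, D = 4, 4
proxies = np.eye(C, D)
labels = np.array([0, 1, 2, 3, 0, 1])
good = proxy_anchor_loss(np.eye(C, D)[labels], labels, proxies)
bad = proxy_anchor_loss(np.eye(C, D)[(labels + 1) % C], labels, proxies)
```

Because the sum over examples sits inside the log, the gradient weights each sample by its relative hardness, which is the proxy-anchor alternative to explicit pair mining.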
Smooth-ap: Smoothing the path towards large-scale image retrieval
Optimising a ranking-based metric, such as Average Precision (AP), is notoriously
challenging because it is non-differentiable and hence cannot be optimised …
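Smooth-AP's core move is to replace the 0/1 step function in the rank computation with a temperature-scaled sigmoid, so that each item's rank, and hence AP, becomes a smooth function of the similarity scores. A minimal NumPy sketch of that relaxation (variable names and the temperature value are illustrative):

```python
import numpy as np

def sigmoid(x, tau=0.01):
    return 1.0 / (1.0 + np.exp(-x / tau))

def smooth_ap(scores, positives, tau=0.01):
    """Differentiable AP surrogate in the spirit of Smooth-AP: the exact
    indicator H(s_j - s_i) ("is j ranked above i?") becomes a sigmoid
    with temperature tau.

    scores:    (N,) similarity of each gallery item to the query
    positives: (N,) boolean mask of relevant gallery items
    """
    diff = scores[None, :] - scores[:, None]       # diff[i, j] = s_j - s_i
    off_diag = 1.0 - np.eye(len(scores))
    # Smooth count of items ranked above item i (+1 for item i itself)
    rank_all = 1.0 + (sigmoid(diff, tau) * off_diag).sum(axis=1)
    # Smooth count of *positives* ranked above each positive
    pos_diff = diff[np.ix_(positives, positives)]
    n_pos = int(positives.sum())
    rank_pos = 1.0 + (sigmoid(pos_diff, tau) * (1.0 - np.eye(n_pos))).sum(axis=1)
    return np.mean(rank_pos / rank_all[positives])

scores = np.array([0.9, 0.8, 0.2, 0.1])
relevant = np.array([True, True, False, False])
ap_good = smooth_ap(scores, relevant)                           # near 1.0
ap_bad = smooth_ap(scores, np.array([False, False, True, True]))
```

As tau shrinks, the sigmoid approaches the true step function and the surrogate approaches exact AP, at the cost of vanishing gradients for well-separated pairs.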
Farewell to mutual information: Variational distillation for cross-modal person re-identification
The Information Bottleneck (IB) provides an information-theoretic principle for
representation learning, retaining all information relevant for predicting the label while …
Self-attention between datapoints: Going beyond individual input-output pairs in deep learning
We challenge a common assumption underlying most supervised deep learning: that a
model makes a prediction depending only on its parameters and the features of a single …
Revisiting training strategies and generalization performance in deep metric learning
Deep Metric Learning (DML) is arguably one of the most influential lines of research
for learning visual similarities, with many approaches proposed every year. Although the field …
Time series change point detection with self-supervised contrastive predictive coding
Change Point Detection (CPD) methods identify the times associated with changes in the
trends and properties of time series data in order to describe the underlying behaviour of the …
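The CPD-via-contrastive-coding idea reduces to this: train an encoder so that adjacent windows map to similar representations, then flag times where the similarity between consecutive window embeddings drops. A toy sketch of the detection step, with a hand-crafted (mean, std) feature standing in for the learned contrastive encoder and an illustrative window size:

```python
import numpy as np

def change_point_scores(x, w=20):
    """Toy change-point score: cosine dissimilarity between the
    representations of a window and the window w steps after it.
    A learned contrastive encoder would replace the (mean, std) feature.
    """
    feats = np.array([[x[i:i + w].mean(), x[i:i + w].std()]
                      for i in range(len(x) - w + 1)])
    left, right = feats[:-w], feats[w:]            # windows w steps apart
    num = (left * right).sum(axis=1)
    den = np.linalg.norm(left, axis=1) * np.linalg.norm(right, axis=1)
    return 1.0 - num / den                         # high = likely change

# Series with a mean shift at t = 200
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
scores = change_point_scores(x)
est = int(np.argmax(scores))                       # peaks near the shift
```

A threshold or peak-picking pass over the scores would then turn this continuous signal into discrete change-point times.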
Use all the labels: A hierarchical multi-label contrastive learning framework
Current contrastive learning frameworks focus on leveraging a single supervisory signal to
learn representations, which limits the efficacy on unseen data and downstream tasks. In this …