Self-supervised learning for time series analysis: Taxonomy, progress, and prospects
Self-supervised learning (SSL) has recently achieved impressive performance on various
time series tasks. The most prominent advantage of SSL is that it reduces the dependence …
Robust multi-view clustering with incomplete information
The success of existing multi-view clustering methods heavily relies on the assumption of
view consistency and instance completeness, referred to as the complete information …
Panda-70M: Captioning 70M videos with multiple cross-modality teachers
The quality of the data and annotation upper-bounds the quality of a downstream model.
While there exist large text corpora and image-text pairs, high-quality video-text data is much …
PromptCAL: Contrastive affinity learning via auxiliary prompts for generalized novel category discovery
Although existing semi-supervised learning models achieve remarkable success in learning
with unannotated in-distribution data, they mostly fail to learn on unlabeled data sampled …
Understanding contrastive learning via distributionally robust optimization
This study reveals the inherent tolerance of contrastive learning (CL) towards sampling bias,
wherein negative samples may encompass similar semantics (e.g., labels). However, existing …
Learning representation for clustering via prototype scattering and positive sampling
Existing deep clustering methods rely on either contrastive or non-contrastive representation
learning for the downstream clustering task. Contrastive-based methods, thanks to negative …
Exploring denoised cross-video contrast for weakly-supervised temporal action localization
Weakly-supervised temporal action localization aims to localize actions in untrimmed videos
with only video-level labels. Most existing methods address this problem with a "localization …
Best of both worlds: Multimodal contrastive learning with tabular and imaging data
Medical datasets, and especially biobanks, often contain extensive tabular data with rich
clinical information in addition to images. In practice, clinicians typically have less data, both …
Does Negative Sampling Matter? A Review with Insights into its Theory and Applications
Negative sampling has swiftly risen to prominence as a focal point of research, with wide-
ranging applications spanning machine learning, computer vision, natural language …
Learning audio-visual source localization via false negative aware contrastive learning
Self-supervised audio-visual source localization aims to locate sound-source objects in
video frames without extra annotations. Recent methods often approach this goal with the …