Spectral feature augmentation for graph contrastive learning and beyond
Although augmentations (e.g., perturbation of graph edges, image crops) boost the efficiency of Contrastive Learning (CL), feature-level augmentation is another plausible …
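The snippet breaks off at the feature-level idea, so here is a minimal PyTorch sketch of what spectral feature augmentation can look like in general: instead of perturbing the input graph or image, perturb the singular values of the batch feature matrix to create a second view. The SVD-plus-jitter scheme and the `noise_scale` parameter below are illustrative assumptions; the paper's own spectrum estimator may differ.

```python
import torch

def spectral_feature_augment(h: torch.Tensor, noise_scale: float = 0.1) -> torch.Tensor:
    """Feature-level augmentation: jitter the spectrum of a batch feature
    matrix h (n x d) rather than the raw inputs (illustrative sketch)."""
    u, s, vh = torch.linalg.svd(h, full_matrices=False)
    s_aug = s * (1.0 + noise_scale * torch.randn_like(s))  # rescale singular values
    return u @ torch.diag(s_aug) @ vh

# Usage: treat the augmented features as the second view in a contrastive loss.
h = torch.randn(256, 128)      # e.g., node embeddings from a GNN encoder
h_view = spectral_feature_augment(h)
```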
Downstream-agnostic adversarial examples
Self-supervised learning usually uses a large amount of unlabeled data to pre-train an encoder that can be used as a general-purpose feature extractor, such that downstream …
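To make the threat model concrete: because the encoder is shared and label-free, an attacker can run PGD purely in embedding space, pushing a perturbed input's feature away from the clean one so that any downstream head built on the encoder inherits the failure. The sketch below is a generic recipe under that assumption (the cited paper additionally crafts universal, downstream-agnostic perturbations); `encoder` is any frozen pre-trained network.

```python
import torch
import torch.nn.functional as F

def encoder_space_attack(encoder, x, eps=8/255, alpha=2/255, steps=10):
    """Label-free PGD sketch: maximize the embedding distance between the
    perturbed and clean inputs under a frozen pre-trained encoder."""
    encoder.eval()
    with torch.no_grad():
        z_clean = F.normalize(encoder(x), dim=-1)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        z_adv = F.normalize(encoder(x + delta), dim=-1)
        loss = -(z_adv * z_clean).sum(-1).mean()   # ascend => lower cosine similarity
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad = None
    return (x + delta).detach()
```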
Self-supervised learning with an information maximization criterion
Self-supervised learning allows AI systems to learn effective representations from large amounts of data using tasks that do not require costly labeling. Mode collapse, i.e., the model …
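The anti-collapse mechanism the abstract alludes to can be written down compactly: a log-determinant of each view's feature covariance acts as an entropy surrogate (it diverges to minus infinity if the features collapse to a point), balanced against an invariance term. The sketch below loosely follows this information-maximization idea; the weight `alpha` and the regularizer `eps` are assumptions, not the paper's settings.

```python
import torch

def info_max_loss(z1, z2, alpha=1.0, eps=1e-3):
    """Log-det covariance terms fight mode collapse; MSE aligns the two views."""
    def logdet_cov(z):
        z = z - z.mean(0)
        cov = (z.T @ z) / (z.shape[0] - 1)
        return torch.logdet(cov + eps * torch.eye(z.shape[1], device=z.device))
    invariance = ((z1 - z2) ** 2).sum(-1).mean()
    return alpha * invariance - logdet_cov(z1) - logdet_cov(z2)
```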
Index your position: A novel self-supervised learning method for remote sensing images semantic segmentation
Learning effective visual representations without human supervision is a critical problem for the task of semantic segmentation of remote sensing images (RSIs), where pixel-level …
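Judging only from the title, the pretext task trains the network to predict where a tile sits inside a larger scene; the sketch below is therefore an assumption, not the paper's method: crop tiles on an N x N grid over an RSI and classify their grid index, which forces spatially discriminative features useful for dense prediction.

```python
import torch
import torch.nn as nn

class PositionPretext(nn.Module):
    """Hypothetical position-index pretext: classify which grid cell of the
    source scene a tile was cropped from (grid and feat_dim are placeholders)."""
    def __init__(self, encoder: nn.Module, feat_dim: int, grid: int = 4):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(feat_dim, grid * grid)   # one class per grid cell

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(tiles))

# training step: loss = nn.functional.cross_entropy(model(tiles), grid_indices)
```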
Semi-supervised learning made simple with self-supervised clustering
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations. However, in many real-world scenarios, labels are …
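The "made simple" recipe can be pictured as one prototype matrix doing double duty: it scores labeled samples with ordinary cross-entropy and scores unlabeled view pairs so that each view predicts the other view's cluster assignment. The sketch below captures that shape; the sharpened-softmax targets (instead of, say, Sinkhorn assignments) and the loss weights are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def semi_clustering_loss(z_lab, y_lab, z1, z2, prototypes, temp=0.1):
    """One prototype head serves as classifier (labeled) and clusterer (unlabeled).
    Requires PyTorch >= 1.10 for cross_entropy with soft targets."""
    def logits(z):
        return F.normalize(z, dim=-1) @ F.normalize(prototypes, dim=-1).T / temp
    sup = F.cross_entropy(logits(z_lab), y_lab)        # supervised branch
    p1, p2 = logits(z1), logits(z2)
    with torch.no_grad():                              # swapped-prediction targets
        t1, t2 = p1.softmax(-1), p2.softmax(-1)
    unsup = F.cross_entropy(p1, t2) + F.cross_entropy(p2, t1)
    return sup + 0.5 * unsup
```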
Self-supervised pretraining for 2D medical image segmentation
A Kalapos, B Gyires-Tóth - European Conference on Computer Vision, 2022 - Springer
Supervised machine learning provides state-of-the-art solutions to a wide range of computer vision problems. However, the need for copious labelled training data limits the capabilities …
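The pipeline implied by the title is the standard two-stage one: pre-train an encoder on unlabeled scans with some SSL objective, then plug it into a segmentation network and fine-tune on the small labeled set. The skeleton below only illustrates that wiring; the decoder and dimensions are placeholders, not the paper's architecture.

```python
import torch.nn as nn

class SegModel(nn.Module):
    """Fine-tuning stage: reuse an SSL-pretrained encoder as the segmentation
    backbone (assumes the encoder returns a B x feat_dim x H x W feature map)."""
    def __init__(self, pretrained_encoder: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.encoder = pretrained_encoder              # weights from SSL pre-training
        self.decoder = nn.Sequential(                  # minimal stand-in decoder
            nn.Conv2d(feat_dim, feat_dim // 2, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim // 2, n_classes, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))           # per-pixel class logits
```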
Zero-CL: Instance and feature decorrelation for negative-free symmetric contrastive learning
For self-supervised contrastive learning, models can easily collapse and generate trivial constant solutions. The issue has been mitigated by recent improvements to objective …
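The truncated sentence points at objective designs that prevent collapse without negatives; one family does this by decorrelating features. As a simplified stand-in for Zero-CL's exact whitening, the sketch below applies a Barlow-Twins-style penalty along both the feature axis (d x d) and the instance axis (n x n), which is the "instance and feature decorrelation" pairing the title names.

```python
import torch

def dual_decorrelation_loss(z1, z2, lam=5e-3):
    """Negative-free sketch: align views on the diagonal of the cross-correlation
    matrix and suppress off-diagonals, feature-wise and instance-wise.
    (Zero-CL itself uses whitening for exact zero correlation.)"""
    def cc_loss(a, b):
        a = (a - a.mean(0)) / (a.std(0) + 1e-6)        # standardize columns
        b = (b - b.mean(0)) / (b.std(0) + 1e-6)
        c = (a.T @ b) / a.shape[0]                     # cross-correlation matrix
        on = ((torch.diagonal(c) - 1) ** 2).sum()      # invariance term
        off = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
        return on + lam * off
    return cc_loss(z1, z2) + cc_loss(z1.T, z2.T)       # feature- then instance-wise
```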
Neural manifold clustering and embedding
Given a union of non-linear manifolds, non-linear subspace clustering or manifold clustering aims to cluster data points based on manifold structures and also learn to parameterize each …
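One concrete objective used in this line of work is the coding-rate framework: expand the coding rate (log-volume) of all embeddings while compressing it within each cluster, so each cluster ends up on its own low-dimensional piece. The sketch below is a simplified soft-assignment variant of that objective, not the paper's full method.

```python
import torch

def coding_rate(z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n*eps^2) Z^T Z): log-volume of features Z (n x d)."""
    n, d = z.shape
    return 0.5 * torch.logdet(torch.eye(d, device=z.device)
                              + (d / (n * eps ** 2)) * (z.T @ z))

def rate_reduction_objective(z, pi, eps=0.5):
    """Maximize total coding rate minus the weighted rate inside each soft cluster
    (pi: n x k soft assignments). Simplified rate-reduction sketch."""
    n = z.shape[0]
    compress = 0.0
    for j in range(pi.shape[1]):
        w = pi[:, j]
        zj = z * w.sqrt().unsqueeze(1)                # soft-weighted cluster features
        compress = compress + (w.sum() / n) * coding_rate(zj, eps)
    return -(coding_rate(z, eps) - compress)          # minimize the negative
```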
Is self-supervised learning more robust than supervised learning?
Self-supervised contrastive learning is a powerful tool to learn visual representations without labels. Prior work has primarily focused on evaluating the recognition accuracy of various …
Decoupled adversarial contrastive learning for self-supervised adversarial robustness
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields. Integrating AT …
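The usual way the two fields are integrated, and the baseline such papers start from, is a min-max contrastive objective: an inner PGD loop crafts a perturbation that maximizes the InfoNCE loss, and the encoder then trains on the adversarial view. The sketch below shows only that coupled baseline; the cited paper's contribution is decoupling parts of it, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temp=0.2):
    """InfoNCE with in-batch negatives; positives sit on the diagonal."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temp
    return F.cross_entropy(logits, torch.arange(z1.shape[0], device=z1.device))

def adversarial_contrastive_loss(encoder, x1, x2, eps=8/255, alpha=2/255, steps=5):
    """Inner maximization: PGD on the contrastive loss. (Sketch omits the outer
    optimizer and gradient hygiene for the encoder's own parameters.)"""
    delta = torch.zeros_like(x1, requires_grad=True)
    for _ in range(steps):
        loss = info_nce(encoder(x1 + delta), encoder(x2))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad = None
    return info_nce(encoder(x1 + delta.detach()), encoder(x2))  # outer training loss
```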