Spectral feature augmentation for graph contrastive learning and beyond

Y Zhang, H Zhu, Z Song, P Koniusz… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Although augmentations (e.g., perturbation of graph edges, image crops) boost the efficiency
of Contrastive Learning (CL), feature-level augmentation is another plausible …

Downstream-agnostic adversarial examples

Z Zhou, S Hu, R Zhao, Q Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Self-supervised learning usually uses a large amount of unlabeled data to pre-train an
encoder which can be used as a general-purpose feature extractor, such that downstream …

Self-supervised learning with an information maximization criterion

S Ozsoy, S Hamdan, S Arik, D Yuret… - Advances in Neural …, 2022 - proceedings.neurips.cc
Self-supervised learning allows AI systems to learn effective representations from large
amounts of data using tasks that do not require costly labeling. Mode collapse, i.e., the model …
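
The collapse-avoidance idea this entry alludes to can be illustrated with a small sketch: maximize an entropy proxy (the log-determinant of the embedding covariance) while keeping the two augmented views aligned. This is only a rough illustration of an information-maximization criterion, not the paper's exact objective; the function and parameter names below are hypothetical.

```python
import torch

def info_max_loss(z1: torch.Tensor, z2: torch.Tensor,
                  alpha: float = 1.0, eps: float = 1e-3) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmented views (illustrative)."""
    n, d = z1.shape
    z1c, z2c = z1 - z1.mean(0), z2 - z2.mean(0)
    # Regularized covariance of each view; its log-determinant serves as an
    # entropy proxy that drives embeddings away from a collapsed (rank-deficient) state.
    cov1 = z1c.T @ z1c / (n - 1) + eps * torch.eye(d, device=z1.device)
    cov2 = z2c.T @ z2c / (n - 1) + eps * torch.eye(d, device=z2.device)
    entropy = torch.logdet(cov1) + torch.logdet(cov2)
    invariance = ((z1 - z2) ** 2).sum(dim=1).mean()  # keep the two views close
    return alpha * invariance - entropy  # minimize distance, maximize entropy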

Index your position: A novel self-supervised learning method for remote sensing images semantic segmentation

D Muhtar, X Zhang, P Xiao - IEEE Transactions on Geoscience …, 2022 - ieeexplore.ieee.org
Learning effective visual representations without human supervision is a critical problem for
the task of semantic segmentation of remote sensing images (RSIs), where pixel-level …

Semi-supervised learning made simple with self-supervised clustering

E Fini, P Astolfi, K Alahari… - Proceedings of the …, 2023 - openaccess.thecvf.com
Self-supervised learning models have been shown to learn rich visual representations
without requiring human annotations. However, in many real-world scenarios, labels are …

Self-supervised pretraining for 2D medical image segmentation

A Kalapos, B Gyires-Tóth - European Conference on Computer Vision, 2022 - Springer
Supervised machine learning provides state-of-the-art solutions to a wide range of computer
vision problems. However, the need for copious labelled training data limits the capabilities …

Zero-CL: Instance and feature decorrelation for negative-free symmetric contrastive learning

S Zhang, F Zhu, J Yan, R Zhao… - … Conference on Learning …, 2021 - openreview.net
For self-supervised contrastive learning, models can easily collapse and generate trivial
constant solutions. The issue has been mitigated by recent improvements to the objective …
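
As a rough illustration of the negative-free decorrelation idea named in this entry's title, the sketch below penalizes off-diagonal entries of the cross-correlation matrix between two views (a Barlow Twins-style objective); Zero-CL's actual whitening-based formulation differs, and all names here are hypothetical.

```python
import torch

def decorrelation_loss(z1: torch.Tensor, z2: torch.Tensor,
                       lam: float = 5e-3) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmented views (illustrative)."""
    n, d = z1.shape
    # Standardize each feature dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = z1.T @ z2 / n  # (dim, dim) cross-correlation between the two views
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()  # align the views per feature
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # decorrelate features
    return on_diag + lam * off_diag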

Neural manifold clustering and embedding

Z Li, Y Chen, Y LeCun, FT Sommer - arXiv preprint arXiv:2201.10000, 2022 - arxiv.org
Given a union of non-linear manifolds, non-linear subspace clustering or manifold clustering
aims to cluster data points based on manifold structures and also learn to parameterize each …

Is self-supervised learning more robust than supervised learning?

Y Zhong, H Tang, J Chen, J Peng, YX Wang - arXiv preprint arXiv …, 2022 - arxiv.org
Self-supervised contrastive learning is a powerful tool to learn visual representations without
labels. Prior work has primarily focused on evaluating the recognition accuracy of various …

Decoupled adversarial contrastive learning for self-supervised adversarial robustness

C Zhang, K Zhang, C Zhang, A Niu, J Feng… - … on Computer Vision, 2022 - Springer
Adversarial training (AT) for robust representation learning and self-supervised learning
(SSL) for unsupervised representation learning are two active research fields. Integrating AT …