A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …
SimMatch: Semi-supervised learning with similarity matching
Learning with few labeled data has been a longstanding problem in the computer vision and
machine learning research community. In this paper, we introduce a new semi-supervised …
Rethinking federated learning with domain shift: A prototype view
Federated learning shows great promise as a privacy-preserving collaborative learning
technique. However, prevalent solutions mainly focus on all private data sampled from the …
Weakly supervised contrastive learning
Unsupervised visual representation learning has gained much attention from the computer
vision community because of the recent success of contrastive learning. Most of the …
Green hierarchical vision transformer for masked image modeling
We present an efficient approach for Masked Image Modeling (MIM) with hierarchical Vision
Transformers (ViTs), allowing the hierarchical ViTs to discard masked patches and operate …
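To make the efficiency idea in this entry concrete, here is a minimal, generic sketch of masked image modeling in PyTorch: most patch tokens are masked, the encoder operates only on the visible patches, and a light decoder regresses the pixels of the masked patches. This is an illustrative MAE-style toy, not the paper's hierarchical-ViT method; the patch size, depth, and mask ratio are arbitrary assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMIM(nn.Module):
    # Toy masked image modeling: the encoder sees only visible patches,
    # a small decoder reconstructs the pixels of the masked ones.
    def __init__(self, img_size=224, patch=16, dim=192, depth=4, mask_ratio=0.75):
        super().__init__()
        self.p, self.mask_ratio = patch, mask_ratio
        self.n = (img_size // patch) ** 2
        self.embed = nn.Linear(3 * patch * patch, dim)
        self.pos = nn.Parameter(torch.zeros(1, self.n, dim))
        enc = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        dec = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec, 2)
        self.head = nn.Linear(dim, 3 * patch * patch)

    def patchify(self, x):
        B, C, H, W = x.shape
        x = x.unfold(2, self.p, self.p).unfold(3, self.p, self.p)
        return x.permute(0, 2, 3, 1, 4, 5).reshape(B, self.n, -1)

    def forward(self, imgs):
        target = self.patchify(imgs)                              # (B, N, 3*p*p) pixel targets
        tokens = self.embed(target) + self.pos
        B, N, D = tokens.shape
        keep = int(N * (1 - self.mask_ratio))
        order = torch.rand(B, N, device=imgs.device).argsort(1)   # random mask per image
        vis_idx, mask_idx = order[:, :keep], order[:, keep:]
        gather = lambda t, i: torch.gather(t, 1, i.unsqueeze(-1).expand(-1, -1, t.size(-1)))
        latent = self.encoder(gather(tokens, vis_idx))            # masked patches are discarded here
        # Decoder sees encoded visible tokens plus mask tokens carrying the masked positions.
        mask_tok = self.mask_token.expand(B, N - keep, D) + gather(self.pos.expand(B, -1, -1), mask_idx)
        pred = self.head(self.decoder(torch.cat([latent, mask_tok], dim=1)))[:, keep:]
        return F.mse_loss(pred, gather(target, mask_idx))         # reconstruct masked pixels only

# e.g. loss = TinyMIM()(torch.randn(2, 3, 224, 224))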
solo-learn: A library of self-supervised methods for visual representation learning
This paper presents solo-learn, a library of self-supervised methods for visual representation
learning. Implemented in Python, using PyTorch and PyTorch Lightning, the library fits both …
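As an illustration of how a library like this typically packages self-supervised methods as PyTorch Lightning modules, the sketch below wraps a SimCLR-style contrastive objective in a LightningModule. This is not solo-learn's actual API; the class name, backbone choice, batch format, and hyperparameters are assumptions made for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl
from torchvision.models import resnet18

class SimCLRModule(pl.LightningModule):
    # Generic SSL-as-LightningModule pattern (illustrative, not solo-learn's API).
    def __init__(self, proj_dim=128, temperature=0.1, lr=1e-3):
        super().__init__()
        self.save_hyperparameters()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                      # expose the 512-d features
        self.backbone = backbone
        self.projector = nn.Sequential(nn.Linear(512, 512), nn.ReLU(),
                                       nn.Linear(512, proj_dim))

    def nt_xent(self, z1, z2):
        # Normalized temperature-scaled cross-entropy over the 2N projections.
        z = F.normalize(torch.cat([z1, z2]), dim=1)
        sim = z @ z.t() / self.hparams.temperature
        sim.fill_diagonal_(-1e9)                         # exclude self-similarity
        n = z1.size(0)
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    def training_step(self, batch, batch_idx):
        (x1, x2), _ = batch                              # assumes two augmented views per image
        loss = self.nt_xent(self.projector(self.backbone(x1)),
                            self.projector(self.backbone(x2)))
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.hparams.lr)

Training then follows the usual Lightning pattern, e.g. pl.Trainer(max_epochs=100).fit(SimCLRModule(), train_dataloaders=loader), where the loader yields ((view1, view2), label) batches from an augmentation pipeline.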
Downstream-agnostic adversarial examples
Self-supervised learning typically uses a large amount of unlabeled data to pre-train an
encoder that can serve as a general-purpose feature extractor, such that downstream …
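The usage pattern this entry describes, a frozen pre-trained encoder serving as a general-purpose feature extractor for downstream tasks, is commonly evaluated with linear probing. The sketch below assumes a generic PyTorch encoder and a labeled data loader; the function name, feature dimension, and defaults are hypothetical placeholders.

import torch
import torch.nn as nn

def linear_probe(encoder: nn.Module, loader, feat_dim=512, num_classes=10,
                 epochs=10, lr=1e-3, device="cpu"):
    # Freeze the pre-trained encoder; only a linear classifier on top is trained.
    encoder.to(device).eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    head = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():                 # features come from the frozen encoder
                feats = encoder(images)
            loss = loss_fn(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head                                   # the only trained component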
Mine your own anatomy: Revisiting medical image segmentation with extremely limited labels
Recent studies on contrastive learning have achieved remarkable performance solely by
leveraging few labels in medical image segmentation. Existing methods mainly focus on …
LightViT: Towards light-weight convolution-free vision transformers
Vision transformers (ViTs) are usually considered to be less light-weight than convolutional
neural networks (CNNs) due to the lack of inductive bias. Recent works thus resort to …
ViTAS: Vision transformer architecture search
Vision transformers (ViTs) inherited the success of Transformers in NLP, but their structures have not been
sufficiently investigated and optimized for visual tasks. One of the simplest solutions is to …