Distilling visual priors from self-supervised learning
Abstract
Convolutional Neural Networks (CNNs) are prone to overfitting small training datasets. We present a novel two-phase pipeline that leverages self-supervised learning and knowledge distillation to improve the generalization ability of CNN models for image classification in the data-deficient setting. The first phase learns a teacher model that possesses rich and generalizable visual representations via self-supervised learning; the second phase distills these representations into a student model in a self-distillation manner while simultaneously fine-tuning the student for the image classification task. We also propose a novel margin loss for the self-supervised contrastive learning proxy task to better learn representations under the data-deficient scenario. Together with other tricks, we achieve competitive performance in the VIPriors image classification challenge.
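To make the margin idea concrete, below is a minimal sketch of an InfoNCE-style contrastive loss with an additive margin applied to the positive-pair similarity. The abstract does not specify the exact formulation, so the additive form, the margin value, and the function name margin_info_nce are illustrative assumptions rather than the paper's method.

```python
import torch
import torch.nn.functional as F

def margin_info_nce(z1, z2, margin=0.2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss with an additive margin on the
    positive-pair similarity.

    z1, z2: embeddings of two augmented views of the same batch, shape (N, D).
    The additive margin and its value are illustrative assumptions; the
    paper's exact margin formulation may differ.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    n = z1.size(0)

    # Cosine similarities between every pair across the two views: (N, N).
    logits = z1 @ z2.t() / temperature

    # Subtract the margin from the positive (diagonal) logits, forcing
    # positives to be matched with a stricter similarity threshold.
    pos_mask = torch.eye(n, dtype=torch.bool, device=z1.device)
    logits = logits - pos_mask.float() * (margin / temperature)

    # Each sample's positive is the same index in the other view.
    targets = torch.arange(n, device=z1.device)
    return F.cross_entropy(logits, targets)
```

In the second phase, one would typically combine a distillation term between teacher and student representations with a standard cross-entropy loss on the labels; the weighting between the two is a design choice not specified in the abstract.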