Capsule Network based Contrastive Learning of Unsupervised Visual Representations

H Panwar, I Patras - arXiv preprint arXiv:2209.11276, 2022 - arxiv.org
Capsule Networks have advanced considerably over the past decade, outperforming traditional CNNs on various tasks thanks to their equivariance properties. Their vector inputs and outputs, which encode both the magnitude and orientation of an object or its parts, open up enormous possibilities for using Capsule Networks in unsupervised learning settings for visual representation tasks such as multi-class image classification. In this paper, we propose the Contrastive Capsule (CoCa) model, a Siamese-style Capsule Network trained with a contrastive loss, together with our novel architecture and training and testing algorithms. We evaluate the model on unsupervised image classification on the CIFAR-10 dataset and achieve a top-1 test accuracy of 70.50% and a top-5 test accuracy of 98.10%. Thanks to its efficient architecture, our model has 31 times fewer parameters and 71 times fewer FLOPs than the current SOTA in both supervised and unsupervised learning.
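The abstract describes a Siamese-style network trained with a contrastive loss but does not spell out the loss itself. As a rough illustration only, here is a minimal NumPy sketch of the widely used NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss over two batches of embeddings from the two Siamese branches; this is an assumed standard formulation, not necessarily the paper's exact loss:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss (assumed, SimCLR-style formulation).

    z1, z2: (N, D) embeddings of two views of the same N images.
    Positive pairs are (z1[i], z2[i]); every other pair is a negative.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize -> cosine sim
    sim = z @ z.T / temperature                       # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                    # exclude self-comparisons
    n = len(z1)
    # row i's positive lives at i+n (and vice versa)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row against its positive-pair index
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), targets]).mean()
```

Pulling the two views' embeddings together while pushing all other pairs apart is what lets the model learn representations without labels; in the paper's setting the two branches would be the Siamese capsule encoders.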