Imbalanced data learning by minority class augmentation using capsule adversarial networks
P Shamsolmoali, M Zareapoor, L Shen, AH Sadka… - Neurocomputing, 2021 - Elsevier
Abstract
The fact that image datasets are often imbalanced poses an intense challenge for deep learning techniques. In this paper, we propose a method to restore balance in imbalanced image datasets by coalescing two methods: generative adversarial networks (GANs) and capsule networks. In our model, the generative and discriminative networks play a novel competitive game in which the generator produces samples towards specific classes from a multivariate probability distribution. The discriminator is designed so that, while distinguishing real from fake samples, it is also required to assign classes to its inputs. Since GAN approaches require fully observed data during training, they may generate very similar samples when the training set is imbalanced, which leads to overfitting. We address this problem by jointly providing all the available information from both class components during adversarial training, which improves learning from imbalanced data by incorporating the structure of the majority distribution into the generation of new minority samples. Furthermore, the generator is trained with a feature matching loss to improve training convergence; this also prevents the generation of outliers and does not affect the majority class space. The evaluations show the effectiveness of the proposed methodology; in particular, the capsule-GAN combination is effective at recognizing highly overlapping classes with far fewer parameters than a convolutional GAN.
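To make the described training setup concrete, below is a minimal PyTorch sketch of two ideas the abstract mentions: a discriminator with both a real/fake head and a class-assignment head, and a feature matching loss for the generator. The names (ToyDiscriminator, feature_matching_loss), layer sizes, and the plain fully connected layers are illustrative assumptions, not the authors' architecture; the paper's actual discriminator is capsule-based.

```python
# Sketch only: the paper's discriminator uses capsules; this stand-in uses
# ordinary linear layers to illustrate the two heads and the feature layer.
import torch
import torch.nn as nn

class ToyDiscriminator(nn.Module):
    """Exposes an intermediate feature vector plus two heads:
    a real/fake score and class logits, as the abstract describes."""
    def __init__(self, in_dim=784, feat_dim=128, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, feat_dim), nn.LeakyReLU(0.2),
        )
        self.adv_head = nn.Linear(feat_dim, 1)          # real vs. fake
        self.cls_head = nn.Linear(feat_dim, n_classes)  # class assignment

    def forward(self, x):
        f = self.features(x)
        return self.adv_head(f), self.cls_head(f), f

def feature_matching_loss(disc, real_x, fake_x):
    """L2 distance between the batch-mean discriminator features of real and
    generated samples; gradients reach the generator through fake_x."""
    _, _, f_real = disc(real_x)
    _, _, f_fake = disc(fake_x)
    return torch.mean((f_real.detach().mean(dim=0) - f_fake.mean(dim=0)) ** 2)

# Usage sketch inside the generator update (G and z are hypothetical):
#   g_loss = feature_matching_loss(D, real_batch, G(z, target_class))
```

Training the generator to match intermediate feature statistics, rather than only fooling the real/fake head, is the standard motivation for feature matching: it tends to stabilize convergence and discourages the generator from collapsing onto a few near-duplicate samples.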