Semisupervised learning-based SAR ATR via self-consistent augmentation
C Wang, J Shi, Y Zhou, X Yang, Z Zhou… - IEEE Transactions on Geoscience and Remote Sensing, 2020 - ieeexplore.ieee.org
In synthetic aperture radar (SAR) automatic target recognition, annotating targets is expensive and time-consuming. Thus, training a network with a few labeled data and plenty of unlabeled data has attracted the attention of many researchers. In this article, we design a semisupervised learning framework comprising a self-consistent augmentation rule, a mixup-based mixture, and a weighted loss, which allows a classification network to utilize unlabeled data during training and ultimately alleviates the demand for labeled data. The proposed self-consistent augmentation rule forces samples before and after augmentation to share the same label so that the unlabeled data can be exploited; by balancing the amounts of labeled and unlabeled samples in a minibatch, it ensures that the supervised part of the framework remains prominent during training and enables the network to achieve better performance. A mixture method is then introduced to mix the labeled, unlabeled, and augmented samples so that label information is better involved in the mixed samples. Using cross-entropy loss for the mixed-labeled mixtures and mean-squared-error loss for the mixed-unlabeled mixtures, the total loss is defined as their weighted sum. Experiments on the MSTAR data set and the OpenSARShip data set show that the method's performance is not only far better than the state of the art among current semisupervised classifiers but also close to the state of the art among supervised learning-based networks.
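The loss described in the abstract can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the authors' implementation: `mixup`, `cross_entropy`, `mse`, `total_loss`, and the unlabeled weight `w_u` are assumed names, and the Beta(alpha, alpha) mixing with `lam >= 0.5` follows common mixup-based semisupervised practice rather than details stated in the abstract.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.75):
    # Draw a mixing coefficient from Beta(alpha, alpha); keeping lam >= 0.5
    # biases the mixture toward the first sample (a common convention in
    # mixup-based semisupervised methods; assumed here, not from the paper).
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

def cross_entropy(targets, preds, eps=1e-8):
    # Mean cross-entropy between target distributions and predicted probabilities.
    return -np.mean(np.sum(targets * np.log(preds + eps), axis=1))

def mse(targets, preds):
    # Mean-squared error between label guesses and predicted probabilities.
    return np.mean(np.sum((targets - preds) ** 2, axis=1))

def total_loss(labels, preds_labeled, guesses, preds_unlabeled, w_u=1.0):
    # Weighted sum: cross-entropy on the mixed-labeled mixtures,
    # MSE on the mixed-unlabeled mixtures, as described in the abstract.
    return cross_entropy(labels, preds_labeled) + w_u * mse(guesses, preds_unlabeled)
```

In this sketch the self-consistent augmentation rule would supply `guesses`: the network's averaged predictions on augmented views of an unlabeled sample, reused as that sample's soft label before mixing.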