G-GANISR: Gradual generative adversarial network for image super resolution

P Shamsolmoali, M Zareapoor, R Wang, DK Jain… - Neurocomputing, 2019 - Elsevier
Abstract
Adversarial methods have proven effective at generating realistic images. However, these approaches have a challenging training process, which is partially attributed to the performance of the discriminator. In this paper, we propose an efficient super-resolution model based on a generative adversarial network (GAN) to effectively generate representative information and improve the quality of real-world test images. To overcome the current issues, we design the discriminator of our model based on the least-squares loss function. The proposed network is organized as a gradual learning process from simple to advanced, i.e., from small upsampling factors to the large upsampling factor, which helps to improve the overall stability of training. In particular, to control the number of model parameters and mitigate training difficulties, a dense residual learning strategy is adopted. The key ideas of the proposed methodology are: (i) fully exploit all image details without losing information by gradually increasing the task of the discriminator, where the output of each layer is gradually improved in the next layer; in this way the model efficiently generates a super-resolution image even at high scaling factors (e.g., ×8); and (ii) keep the model stable during the learning process by using a least-squares loss instead of cross-entropy. In addition, the effects of different objective functions on training stability are compared. To evaluate the model, we conducted two sets of experiments, using the proposed gradual GAN and a regular GAN, to demonstrate the efficiency and stability of the proposed model on both quantitative and qualitative benchmarks.
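As a concrete illustration of the least-squares objective the abstract contrasts with cross-entropy, the following is a minimal sketch of LSGAN-style discriminator and generator losses. The framework (PyTorch) and the function names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed PyTorch): least-squares GAN losses,
# shown next to the standard cross-entropy objective for comparison.
# d_real, d_fake are raw discriminator scores (no sigmoid), shape (batch, 1).

def lsgan_discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Push scores of real images toward 1 and of super-resolved images toward 0.
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_generator_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # The generator tries to make the discriminator score its outputs as 1.
    return 0.5 * ((d_fake - 1.0) ** 2).mean()

# Standard cross-entropy (sigmoid) GAN losses, for comparison only.
_bce = nn.BCEWithLogitsLoss()

def ce_discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    return _bce(d_real, torch.ones_like(d_real)) + _bce(d_fake, torch.zeros_like(d_fake))
```

The quadratic penalty keeps gradients non-zero even for samples the discriminator already classifies confidently, which is the usual argument for least-squares training being more stable than the saturating cross-entropy loss, in line with the stability claim made in the abstract.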