CASR: a context-aware residual network for single-image super-resolution

Y Wu, X Ji, W Ji, Y Tian, H Zhou - Neural Computing and Applications, 2020 - Springer
Abstract
With the significant power of deep learning architectures, researchers have made considerable progress on super-resolution in the past few years. However, due to the low representational ability of feature maps extracted from natural scene images, directly applying deep learning architectures to super-resolution can result in poor visual quality. Essentially, distinctive characteristics such as low-frequency information should be emphasized for better shape reconstruction, rather than being treated equally across different patches and channels. To alleviate this problem, we propose a lightweight context-aware deep residual network, named CASR, which encodes channel and spatial attention information to construct a context-aware feature map for single-image super-resolution. We first design a task-specific inception block with a novel arrangement of atrous filters and specially chosen kernel sizes to extract multi-level information from low-resolution images. Then, a Dual-Attention ResNet module is applied to capture context information by dually connecting spatial and channel attention schemes. With the high representational ability of the context-aware feature map, CASR can accurately and efficiently generate high-resolution images. Experiments on several popular datasets show that the proposed method achieves better visual quality and superior efficiency compared with most existing methods.
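To make the described architecture concrete, below is a minimal PyTorch sketch of the two components the abstract outlines: an inception-style block built from atrous (dilated) convolutions for multi-level feature extraction, and a residual block that re-weights its features with connected channel and spatial attention. The dilation rates, channel widths, kernel sizes, and module names (AtrousInceptionBlock, DualAttentionResBlock) are illustrative assumptions, not the authors' exact CASR configuration.

```python
# A hedged sketch of the two building blocks described in the abstract:
# (1) an inception-style block mixing dilated ("atrous") 3x3 convolutions, and
# (2) a residual block whose features are re-weighted by channel and spatial attention.
# All hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn


class AtrousInceptionBlock(nn.Module):
    """Extracts multi-scale features with parallel dilated 3x3 branches."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d)
            for d in (1, 2, 4)  # assumed dilation rates
        ])
        self.fuse = nn.Conv2d(3 * branch_ch, out_ch, 1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats)


class DualAttentionResBlock(nn.Module):
    """Residual block re-weighted by channel attention, then spatial attention."""

    def __init__(self, ch: int, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        # Channel attention: global average pooling -> bottleneck MLP -> sigmoid.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        # Spatial attention: single-channel map from a wide convolution.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.body(x)
        f = f * self.channel_att(f)   # per-channel re-weighting
        f = f * self.spatial_att(f)   # per-pixel re-weighting
        return x + f                  # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 3, 48, 48)           # a low-resolution RGB patch
    feats = AtrousInceptionBlock(3, 64)(x)   # multi-scale shallow features
    out = DualAttentionResBlock(64)(feats)   # context-aware residual features
    print(out.shape)                         # torch.Size([1, 64, 48, 48])
```

In a full super-resolution pipeline, several such residual blocks would typically be stacked and followed by an upsampling stage (for example, sub-pixel convolution) to produce the high-resolution output; that stage is omitted here.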