Super Resolution Using Segmentation-Prior Self-Attention Generative Adversarial Network

03/07/2020 ∙ by Yuxin Zhang, et al. ∙ HUAWEI Technologies Co., Ltd. ∙ Zhejiang University

Convolutional Neural Networks (CNN) are intensively applied to super resolution (SR) tasks because of their superior performance. However, super resolution remains challenging due to the lack of prior knowledge and the small receptive field of CNNs. We propose the Segmentation-Prior Self-Attention Generative Adversarial Network (SPSAGAN) to combine segmentation-priors and feature attentions into a unified framework. This combination is led by a carefully designed weighted addition to balance the influence of feature and segmentation attentions, so that the network can emphasize textures in the same segmentation category and meanwhile focus on long-distance feature relationships. We also propose a lightweight skip connection architecture called the Residual-in-Residual Sparse Block (RRSB) to further improve super-resolution performance and save computation. Extensive experiments show that SPSAGAN can generate more realistic and visually pleasing textures compared to the state-of-the-art SFTGAN and ESRGAN on many SR datasets.




1 Introduction

Single image super-resolution (SR) aims to restore a high-resolution (HR) image from a single low-resolution (LR) one. The problem is ill-posed because multiple solutions exist for any given LR image. Due to their superior performance, methods based on convolutional neural networks (CNN) [Chao2014Learning, DongAccelerating, KimAccurate, Kim_2016_DRCN, LapSRN] have attracted much attention in recent years for learning the mapping from LR to HR images.

To push super-resolution closer to natural images, several new losses are proposed to replace the traditional mean squared error (MSE) which tends to encourage blurry and implausible results [Chao2014Learning, DongAccelerating, KimAccurate]. For example, the perceptual loss [Bruna2015per, Johnson2016Perceptual] has been proposed to optimize the network in a feature space instead of pixel space. Generative Adversarial Network (GAN) loss [Christian2017Photo, enhancenet] is introduced to encourage perceptually-rich textures and significantly improves the visual quality compared with PSNR-oriented methods [Chao2014Learning, Christian2017Photo, dai2019SAN].

Figure 1: The SR images generated by SAN [dai2019SAN], SFTGAN [wang2018sftgan] and SPSAGAN. (Zoom in for best view).

One disadvantage of the above methods is the lack of prior knowledge to guide the SR algorithm. Perceptual and adversarial losses (without priors) add textures learned from images belonging to different categories, neglecting the semantic implications contained in the same category. Wang et al. [wang2018sftgan] address this problem and propose the Spatial Feature Transform Generative Adversarial Network (SFTGAN), which is conditioned on segmentation probability maps to improve super-resolution. However, the receptive field of the GAN-based network is relatively small. Since the SR images are generated from adjacent image patches, the network fails to capture semantic information from faraway patches. Figure 1 shows that SFTGAN is not satisfactory in recovering texture details, especially when textures scatter over a wide spatial range. Several works aim at enlarging the receptive field of super-resolution networks with attention-based modules. For example, Dai et al. [dai2019SAN] propose a trainable second-order channel attention network (SAN) to adaptively rescale the channel-wise features in SR. Pathak et al. [PathakEfficient] directly add a self-attention layer to SRGAN [Christian2017Photo] with the motivation to generate perception-friendly textures over a wide spatial range.

Inspired by these works, we propose the Segmentation-Prior Self-Attention Generative Adversarial Network (SPSAGAN) to combine segmentation-priors and feature attentions in a unified GAN-based network. The feature attention module captures long-range and multi-level dependencies across the whole image, and the segmentation-prior forces the GAN generator to focus on the correct segmentation categories, avoiding random generation over large image regions. The final attention maps are obtained by a carefully designed weighted addition of segmentation and feature attentions, where the weights are assigned according to the relationship between the segmentation and feature attentions. Figures 1 and 2 show that the proposed SPSAGAN achieves better SR results than methods which only consider a segmentation-prior (SFTGAN [wang2018sftgan]) or attention (SAN [dai2019SAN], A-SRResNet/A-SRGAN [PathakEfficient]).

Complementary to combining segmentation-priors and self-attentions, we also investigate the network architecture design, another important factor for performance. The baseline of the proposed method is the Residual-in-Residual Dense Block (RRDB) [wang2018esrgan], which implements dense connections to reuse the features of convolution layers. Since recent research shows that unnecessary connections may hurt performance [CondenseNet], we investigate pruning methods to automatically eliminate unnecessary connections and make the residual blocks compact. In this paper, we introduce a lightweight skip connection structure called the Residual-in-Residual Sparse Block (RRSB), which prunes unnecessary connections of RRDB. Experiments show that the pruning method boosts performance and saves computation simultaneously.

Figure 2: The SR images generated by A-SRResNet, A-SRGAN and SPSAGAN (Zoom in for best view).

Our contributions are three-fold:

  1. We propose a novel SR algorithm to combine segmentation-priors and feature attentions in a unified GAN-based network. The segmentation probability maps are combined with the self-attention mechanism by weighted addition, so that the GAN generator can emphasize textures in the same segmentation category and focus on the long-distance feature relationship.

  2. We propose a lighter skip connection structure RRSB which prunes the dense connections of RRDB to improve performance and save computation.

  3. Extensive experiments show the effectiveness of the proposed method on different datasets compared with state-of-the-art methods.

2 Related Work

Single Image Super Resolution. Convolutional neural networks for super-resolution originated with Dong et al.'s SRCNN [Chao2014Learning], and various network architectures have since been proposed to map between low- and high-resolution images in an end-to-end manner. Dong et al. [DongAccelerating] propose a faster network structure, FSRCNN, to accelerate SRCNN. Kim et al. [KimAccurate] introduce residual learning to ease the training difficulty, which achieves a significant improvement in accuracy. LapSRN [LapSRN] implements the Laplacian pyramid structure to progressively reconstruct the sub-band residuals of high-resolution images. Ledig et al. [Christian2017Photo] employ ResNet [Kaiming2016Deep] to construct a deeper network, SRResNet. They also propose SRGAN with perceptual and GAN losses [Johnson2016Perceptual, Christian2017Photo]. EnhanceNet [enhancenet] further extends SRGAN by combining automated texture synthesis with the perceptual loss. ESRGAN [wang2018esrgan] enhances SRGAN by introducing the Residual-in-Residual Dense Block (RRDB) without batch normalization, restoring more accurate brightness and more realistic textures in the SR images. By defining a naturalness prior in the low-level domain and constraining the output image to the natural manifold, NatSR [Soh_2019_CVPR_NatSR] generates more natural and realistic images than the state-of-the-art. Our work extends the baseline network ESRGAN [wang2018esrgan] by adding a segmentation-prior self-attention (SPSA) module to make the network focus on textures in the same segmentation category of the image.

Attention. The attention mechanism is widely used in image classification [WangResidual, HuSqueeze], segmentation [OktayAttention, fu2018dual, li2019expectationmaximization] and super-resolution [LiuAn, zhang2018rcan, dai2019SAN, Li2019, liu2019image]. The self-attention GAN (SAGAN) [Zhang2018Self] was the first to generate images with consistent objects and scenarios for image generation tasks. Pathak et al. [PathakEfficient] introduce a flexible self-attention layer to process large-scale image super-resolution. Our method differs from previous works in two aspects: first, we introduce segmentation-priors to constrain the feature attention mechanism; second, we propose a novel fusion algorithm to combine the feature and segmentation attentions with weighted addition.

Semantic Guidance. Semantic information is increasingly used in various image processing tasks such as style transfer [gatys2017controlling], video deblurring [Zhu2017Be] and image generation [isola2017imagetoimage]. Wang et al. [wang2018sftgan] introduce semantic probability maps to apply conditional normalization, guiding texture recovery for different regions in the super-resolution domain. Similarly, Wu et al. [semantic2019] propose a semantic prior for video super-resolution. The main difference of our method is that we enhance the guidance of the segmentation network by directly using semantic probability maps to constrain the feature attention, thus allowing the network to focus on the segmentation category of the reconstructed pixels.

Network Redundancy. Many previous studies [DenilPredicting, Chen2015Compressing, HuangDeep] indicate that neural networks are typically over-parameterized. Zoph et al. [nas] utilize reinforcement learning to find compact network structures in a search space, and show that a complex network structure does not always result in good performance. The DenseNet architecture [huang2017densely] alleviates the need for feature replication by directly connecting each layer with its preceding layers. CondenseNet [CondenseNet] simplifies DenseNet by pruning the connections with smaller filter importance values; its performance increases after pruning, indicating that much redundancy exists in the unpruned DenseNet. The proposed RRSB is also based on pruning redundant connections of RRDB. However, we design a dissimilarity measure among interconnected layers to guide pruning, which differs from the filter importance measure adopted by CondenseNet.

3 The Proposed Method

Figure 3 shows our network architecture, which is an extension of SRGAN [Christian2017Photo] and ESRGAN [wang2018esrgan]. The LR images are fed into a CNN consisting of a sequence of basic blocks to obtain feature maps. In SRGAN [Christian2017Photo], the basic blocks are plain convolution layers. ESRGAN [wang2018esrgan] replaces the basic block with the residual-in-residual dense block (RRDB), which combines a multi-level residual network with dense connections to improve performance. In the proposed method, we replace RRDB with a lighter skip connection structure, RRSB, to further improve performance. The feature maps obtained after the basic blocks are combined with segmentation-priors in the proposed SPSA layer, which balances the influence of segmentation and feature attentions. The output of the SPSA layer is passed to the upsampling layer and then several convolution layers to obtain the SR image. Inspired by [wang2018esrgan, Christian2017Photo], we apply the perceptual loss [wang2018esrgan] computed with a pretrained VGG-19 network [SimonyanVGG]. A GAN loss is also added to make the reconstruction more natural. Following [Christian2017Photo, enhancenet], we apply a VGG-style [SimonyanVGG] network with Leaky ReLU activations as the discriminator of the GAN. The novelty of the proposed architecture lies in the SPSA layer and the RRSB, which prunes the dense RRDB to improve performance and save computation.

Figure 3: The structure of the proposed network. The segmentation probability maps are first fed into a transform network to convert them into the same shape as the feature maps. Both are fed into the SPSA layer to extract attention maps. The attention maps then pass through several upsampling and convolution layers to obtain the final SR image. The perceptual and GAN losses are used to train the network.

3.1 Segmentation-prior Self-Attention (SPSA)

Figure 4: The proposed segmentation-prior self-attention module for the SPSAGAN.

Figure 4 shows the structure of the proposed SPSA module. The feature map from the previous layer is represented as $F \in \mathbb{R}^{C \times N}$, where $C$ and $N$ denote the number of channels and the number of pixels, respectively. It is first transformed into two feature spaces $f$ and $g$ by $1 \times 1$ convolutions:

$$f(F) = W_f F, \qquad g(F) = W_g F.$$

The feature attention $\beta_{j,i}$ between the $i$th and $j$th pixels is calculated as:

$$\beta_{j,i} = \frac{\exp\big(f(F_i)^{\top} g(F_j)\big)}{\sum_{i=1}^{N} \exp\big(f(F_i)^{\top} g(F_j)\big)}.$$
For calculating the segmentation-prior knowledge, the LR image is interpolated to the size of the HR image with bicubic kernels and then fed into a semantic segmentation network [LiuSeg], which is pretrained on the COCO dataset [Lin2014Microsoft] and fine-tuned on the ADE dataset [Zhou2017Scene]. The network is trained to segment outdoor scenes into seven categories: sky, mountain, plant, grass, water, animal and building. Pixels that fall outside the seven categories are labeled as 'background'.

The segmentation probability map $P$ is first sent into a transform network, which consists of a convolution layer and a scale layer, to obtain the segmentation features:

$$S = \gamma \, (W_s * P),$$

where the number of filters, the filter size and the stride of the convolution are chosen so that $S$ has the same dimension as the feature map from the previous layer. Here $\gamma$ is a trainable parameter that makes the magnitude of $S$ comparable to that of the feature map to guarantee fast convergence; it is initialized as the average norm of the feature map divided by the average norm of $W_s * P$. Similar to the feature attention, $S$ is first transformed into two feature spaces $f'$ and $g'$ to calculate the segmentation attention $\alpha_{j,i}$:

$$\alpha_{j,i} = \frac{\exp\big(f'(S_i)^{\top} g'(S_j)\big)}{\sum_{i=1}^{N} \exp\big(f'(S_i)^{\top} g'(S_j)\big)},$$

where $N$ is the number of pixels.
The feature and segmentation attention maps are combined by the weighted sum rule, where weights are automatically calculated by


The combined attention is obtained and normalized by


The reason for assigning the weights by Equation (5) lies in four aspects: (1) When the feature and segmentation attentions are relatively similar, the guidance of the segmentation-prior can be neglected because the feature attention is consistent with the segmentation attention. In this situation, the weight of the feature attention should be increased to enhance its influence, with the motivation that feature attentions are helpful in generating texture details of the SR image. (2) When the segmentation attention is small but the feature attention is large, the colors or textures of two regions are similar although the regions belong to different categories. In this situation, the guidance of the segmentation-prior should take effect to suppress the interference of different segmentation categories. (3) The situation where the segmentation attention is large but the feature attention is small rarely happens: within a single image, features from the same segmentation category are likely to be similar. Even if it happens, emphasizing the segmentation attention is still a good solution because pixels belonging to the same category tend to complement each other. (4) The weights lie in the range [0, 1], which is required for the weighted sum combination.

Finally, the output of the SPSA layer is obtained by


where the transform applied here is also a convolution of the feature map.
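To make the fusion concrete, below is a minimal NumPy sketch of a segmentation-prior self-attention step. It is an illustration under simplifying assumptions, not the paper's implementation: the 1×1 convolutions are stood in for by random projection matrices, the weighting of the two attention maps is a fixed scalar `w` rather than the paper's adaptive rule of Equation (5), and all names (`F`, `S`, `attention_map`) are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_map(X, inner, rng):
    # Stand-ins for the two 1x1 convolutions that build the feature spaces.
    Wf = rng.standard_normal((inner, X.shape[0]))
    Wg = rng.standard_normal((inner, X.shape[0]))
    f, g = Wf @ X, Wg @ X
    return softmax(f.T @ g, axis=-1)  # N x N map; each row sums to 1

rng = np.random.default_rng(0)
C, N = 8, 16                      # channels, flattened pixels (toy sizes)
F = rng.standard_normal((C, N))   # feature map from the previous layer
S = rng.standard_normal((C, N))   # transformed segmentation features

beta = attention_map(F, 4, rng)   # feature attention
alpha = attention_map(S, 4, rng)  # segmentation attention

w = 0.5                           # fixed stand-in for the adaptive weight
A = w * beta + (1.0 - w) * alpha  # combined attention, still row-stochastic

Wh = rng.standard_normal((C, C))  # stand-in for the final 1x1 convolution
out = (Wh @ F) @ A.T              # each output pixel mixes all N pixels
print(out.shape)
```

Because `beta` and `alpha` are each row-stochastic, any convex combination of them is as well, so no extra renormalization is needed in this simplified setting.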

3.2 The Design of Residual-in-Residual Sparse Block

The proposed residual-in-residual sparse block (RRSB) originates from the residual-in-residual dense block (RRDB) [wang2018esrgan]. As shown in Fig. 5, each RRDB consists of three dense blocks, and each dense block consists of five convolution layers with dense connections for each layer. The dense connections consume considerable computation and may be redundant. In this paper, we propose the RRSB, which aims at pruning redundant connections in RRDB.
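As a point of reference for what gets pruned, the dense connectivity inside one such block can be sketched in NumPy. Each "layer" below is a random linear map plus leaky ReLU acting on the concatenation of everything before it, a toy stand-in for conv + Leaky ReLU; the channel counts are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                 # flattened spatial size (toy)
c0, growth = 8, 4      # input channels and per-layer growth (illustrative)

def layer(x_cat, out_ch):
    # Stand-in for a convolution followed by leaky ReLU.
    W = rng.standard_normal((out_ch, x_cat.shape[0]))
    z = W @ x_cat
    return np.where(z > 0, z, 0.2 * z)

feats = [rng.standard_normal((c0, N))]          # the block input
for _ in range(5):                              # five densely connected layers
    feats.append(layer(np.concatenate(feats), growth))

# The l-th layer consumed c0 + (l-1)*growth channels; pruning a connection
# removes one of these concatenated inputs.
print([x.shape[0] for x in feats])  # [8, 4, 4, 4, 4, 4]
```

The channel list makes the cost of dense connectivity visible: every extra layer widens the input of all later layers, which is exactly what RRSB's pruning cuts back.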

Figure 5: The sparse block used in the RRSB module. The 'x' marks a connection pruned from the original RRDB; the residual scaling parameter of RRDB is also shown.

For a dense block consisting of $L$ convolution layers, we denote its input as $x_0$, and the output of the $l$th layer as $x_l$. In RRDB, the $l$th layer receives the feature-maps of all preceding layers as input: $x_l = H_l([x_0, x_1, \ldots, x_{l-1}])$, where $[\cdot]$ refers to the concatenation of feature-maps and $H_l$ is a composite function consisting of a convolution and a leaky ReLU. The dissimilarity measure between the feature maps $x_i$ ($i < l$) and $x_l$ is defined as


For the $l$th layer, the connections to be pruned are generally those with smaller dissimilarity measures: if a preceding feature map is similar to the current layer's output, it is unnecessary to concatenate it into the current layer's input. Due to the varying number of preceding connections, it is difficult to set a fixed threshold. Thus, we use a heuristic method: first, the dissimilarity measures are clustered into two classes by the K-means algorithm, and then all connections in the class with the smaller mean are removed from the network. One exception is when the dissimilarity difference between the two classes is not prominent, i.e., the difference is less than a threshold; in this situation, all connections are retained without pruning.
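The pruning heuristic can be sketched as follows. The concrete dissimilarity values and the separation margin are placeholders (the paper's exact dissimilarity formula and threshold are not reproduced here), and the tiny 1-D two-means routine stands in for a full K-means implementation.

```python
import numpy as np

def two_means(vals, iters=50):
    # Minimal 1-D K-means with K = 2, initialized at the extremes.
    c = np.array([vals.min(), vals.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(vals[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                c[k] = vals[assign == k].mean()
    return assign, c

def prune_connections(dissim, margin=0.1):
    # dissim[i]: dissimilarity between preceding feature map i and the
    # current layer's output; smaller means more redundant.
    vals = np.asarray(dissim, dtype=float)
    assign, c = two_means(vals)
    if abs(c[0] - c[1]) < margin:   # clusters not clearly separated:
        return []                   # keep every connection
    low = int(np.argmin(c))
    return [i for i, a in enumerate(assign) if a == low]

print(prune_connections([0.05, 0.07, 0.9, 1.0]))   # -> [0, 1]
```

Clustering instead of a fixed cut-off adapts the decision to each layer's own spread of dissimilarities, which matches the motivation given above for avoiding a global threshold.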

The network is first trained with RRDB until convergence. Then, the dissimilarity measures averaged over the last several iterations are used to prune connections and obtain the basic RRSB blocks. The final network with RRSB is trained from scratch until convergence.

4 Experiments

For preprocessing, HR patches of a fixed spatial size are randomly cropped from the training images and then down-sampled by the scaling factor to obtain the LR inputs. The training process is divided into two steps. First, we pre-train a PSNR-oriented model using a pixel-wise loss without the SPSA module; the learning rate is decayed by a factor of 2 at fixed iteration intervals. The pre-trained model is employed as the initialization for the proposed method. For further training of SPSAGAN, we use Adam [kingma2014adam]. Separate learning rates are set for the self-attention module and for the rest of the network, and both decay by a fixed factor every few thousand iterations. The batch size is kept the same for the two steps.

We use the DIV2K [Agustsson2017NTIRE] and Flickr2K [Flickr] datasets for pre-training, both of which consist of 2K-resolution images. Then, we use the OST training set [wang2018sftgan] to train the proposed SPSAGAN. OST contains outdoor scenes with seven categories, the same as those used to train the segmentation network [LiuSeg]; each category contains on the order of thousands of images. One disadvantage of OST is that each image contains only one category, so it is impossible to learn category relationships within a single image. However, from Equation (4), category relationships are important for the proposed attention-based method. To remedy this, we randomly select training images from DIV2K which contain multiple categories for SPSAGAN training. Following [wang2018sftgan], a fixed ratio of OST to DIV2K samples is used during training.

4.1 The Self-Attention Mechanism

Figure 6 visualizes the attention maps. The red point on each image is the query pixel, i.e., the query pixel index in Equation (6). The segmentation result is computed as the category with the maximum value among the eight segmentation probability maps. The feature, segmentation and combined attention maps are the attention values in Equations (2), (4) and (6) respectively, where the other pixel index ranges over the whole image. The maps show that the attention tends to concentrate on long-range pixels rather than only spatially local ones. For example, in line 1, the combined attention focuses on the whole sky; and in line 2, the query point attends to the grass and the lion in the feature attention map. Such long-range dependencies cannot be captured by convolutions with local receptive fields. We also find that the feature attention module tends to concentrate on regions with similar colors and textures, while the segmentation attention tends to attend according to categories. The fourth line illustrates an example in which the segmentation attention guides the feature attention to focus on textures of the same category: the query pixel is located in the water region, but the feature attention is misled by part of the sky region because their colors are similar; the segmentation attention then takes effect and pulls the combined attention back into the water region. The second line also shows that even when the feature attention is scattered over the image, the combined attention can still be reasonable due to the influence of the segmentation attention. These observations further demonstrate that the segmentation-prior is complementary to feature map convolution and brings robustness to super-resolution. The attention maps of out-of-category images are provided in the supplementary material.

Figure 6: The image, segmentation result and attention maps of OST dataset.

4.2 Comparison with the State-of-the-art

The proposed SPSAGAN is compared with several PSNR-oriented methods including SRCNN [Chao2014Learning], SRResNet [Christian2017Photo], SAN [dai2019SAN], and also with several perception-driven approaches including SRGAN [Christian2017Photo], NatSR [Soh_2019_CVPR_NatSR], SFTGAN [wang2018sftgan] and ESRGAN [wang2018esrgan]. The datasets for comparison are OST, Set5, Set14 and BSD100.

Three quantitative metrics are used for evaluation: PSNR (dB), SSIM [Wang2004Image] (evaluated on the Y channel in YCbCr color space) and the Perceptual Index (PI) [PRIM], where a lower PI stands for better perceptual quality. Table 1 summarizes the average of these metrics for each method. PSNR-oriented approaches yield better PSNR and SSIM values, but perception-driven methods achieve better PI values. The proposed SPSAGAN achieves the best PI on BSD100, and its PI on OST is also close to that of the best method, SFTGAN. However, PI is not always a reliable metric for super-resolution; for example, it is unreasonable that the PI of SFTGAN on OST is even better than that of the ground truth HR. These observations indicate that designing a unanimously agreed quantitative metric for super-resolution is still an unsolved problem.
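For reference, PSNR on a luma channel, a common convention in SR evaluation, can be computed in a few lines of NumPy. The BT.601 luma weights below are the usual choice for YCbCr-based evaluation; the toy images are illustrative.

```python
import numpy as np

def rgb_to_y(img):
    # BT.601 luma, as commonly used for YCbCr-based SR evaluation.
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)

hr = np.full((8, 8, 3), 100.0)   # toy "ground truth"
sr = hr + 10.0                   # constant error of 10 per channel
print(round(psnr(rgb_to_y(hr), rgb_to_y(sr)), 2))  # -> 28.13
```

With a uniform error of 10 the luma error is also 10 (the weights sum to 1), so the MSE is 100 and the PSNR is 10·log10(255²/100) ≈ 28.13 dB, illustrating why PSNR rewards pixel-wise fidelity rather than perceptual quality.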


Table 1: Quantitative evaluation of PSNR, SSIM and PI on BSD100 and OST. The best and second best results are highlighted and underlined, respectively. [ upscaling]

Figure 7 shows the qualitative results of each method. It can be seen that the proposed SPSAGAN is superior to previous approaches in both details and natural textures. For instance, SPSAGAN produces more natural water waves and more vivid textures on the OST examples, and it is also capable of generating more detailed building structures, while other methods either produce blurry textures or bricks whose lines are not natural.

Figure 7: Qualitative results of different methods (Zoom in for best view).

Figure 8 shows qualitative results for image patches that fall outside the seven segmentation categories (a walking person, a tablecloth and a flower). SPSAGAN remains reliable in producing results comparable to other methods in this situation, although its performance is not as good as when processing images from the seven categories. The reasonable performance is attributed to the feature attention, which still takes effect because other categories share similar textures and colors with the seven categories. However, the performance of SPSAGAN degrades because the guidance of the segmentation attention is weakened in this situation. More results on out-of-category images are provided in the supplementary material.

Figure 8: Qualitative results of SR methods on out-of-category images (Zoom in for best view).

4.3 User Study

We conduct a user study to compare the perceptual quality of the generated SR images. The study is divided into the following two tasks.

Task 1 compares the proposed SPSAGAN with the PSNR-oriented methods. In this task, users are asked to rank four images based on their visual quality: the SR images generated by SRResNet [Christian2017Photo], SAN [dai2019SAN] and the proposed SPSAGAN respectively, and the ground truth HR image. For each person, we randomly select 30 images from the OST test dataset and 10 out-of-category images from Set5, Set14 and BSD100. For each image, we show the three SR images and the HR image in random order to the users and ask them to rank the images from 1 to 4 according to their visual quality. The ranking results are shown in Figure 9. It can be seen that SPSAGAN is significantly better than the two PSNR-oriented methods. The only exception is that, on the out-of-category images, SPSAGAN has slightly fewer Rank-1 images than SRResNet, but many more Rank-2 images than the other two methods. On the OST dataset, SPSAGAN can sometimes confuse the users and make them think it is better than the ground truth.

Figure 9: The ranking results of SRResNet [Christian2017Photo], SAN [dai2019SAN], SPSAGAN and HR. Numbers represent frequency of voting. (a) The results of 30 images in OST, totally 900 valid votes. (b) The results of 10 out-of-category images, totally 300 valid votes.

Task 2 compares the texture quality generated by SPSAGAN with that of other perception-driven approaches. As in Task 1, we randomly select 30 images from the OST test dataset and 10 out-of-category images from Set5, Set14 and BSD100. These images are shown in pairs, of which one is the SR image generated by the proposed SPSAGAN and the other is generated by SFTGAN [wang2018sftgan], ESRGAN [wang2018esrgan] or NatSR [Soh_2019_CVPR_NatSR]. We show enlarged texture patches to the users and ask them to select the image with the more natural and perception-friendly textures. Figure 10 shows the comparison results. Our method ranks much higher than the other three, indicating that it is superior in generating natural and visually pleasing images.

Figure 10: The comparison results of SPSAGAN with SFTGAN, ESRGAN and NatSR. Numbers represent the frequency of voting. (a) The results of 30 images in OST, totally 900 valid votes. (b) The results of 10 out-of-category images, totally 300 valid votes.

4.4 Ablation Study

To study the impact of each component in the proposed SPSAGAN, we update the baseline ESRGAN [wang2018esrgan] by gradually adding components. Figure 11 shows the visual comparison of different models. Each column represents a model with its configuration shown at the top. The red check mark indicates the major improvement compared to the previous model.

Feature Attention. The main effect of adding feature attention is to clear the blurred texture (e.g., OST) and eliminate strange artifacts (e.g., OST, OST and OST). The addition of feature attention expands the receptive field of the network. The generation of the current pixel is not only based on adjacent image patches, but also relies on textures from faraway patches.

Segmentation Attention. It can be seen that the segmentation attention produces clearer and more regular textures because it constrains the feature attention to focus only on textures belonging to the same segmentation category. For example, in the OST examples the edges of the bricks become flatter, and the leopard's markings are less messy.

RRSB. Pruning unnecessary connections of RRDB further improves the overall visual quality. Some textures become soft and smooth, which is more amenable to the human visual system, such as the water wave in the OST example. Pruning RRDB also saves computation: the average inference time per image on the OST dataset drops after pruning (tested on a GeForce GTX 1080Ti).

Figure 11: Visual comparison of each component in SPSAGAN (Zoom in for best view). Each column represents a model with its configuration shown at the top. The red check mark indicates the main improvement compared with the previous model. The numbers below each image are PSNR, SSIM and PI, respectively.

5 Conclusions

We propose a novel segmentation-prior self-attention (SPSA) layer that enables the super-resolution network to reconstruct high-quality images. The self-attention mechanism expands the receptive field of the network, and the segmentation-priors constrain the focus of the attention module to regions belonging to the same segmentation category. We also explore the basic blocks of the network and propose a skip connection architecture that eliminates network redundancy, achieving better performance while saving computation. Extensive experiments demonstrate the superior performance of the proposed method in generating natural and perception-friendly SR images compared with state-of-the-art methods.