SRDGAN: learning the noise prior for Super Resolution with Dual Generative Adversarial Networks

03/28/2019, by Jingwei Guan et al.

Single Image Super Resolution (SISR) is the task of producing a high-resolution (HR) image from a given low-resolution (LR) image. It is a well-researched problem with extensive commercial applications such as digital cameras, video compression, and medical imaging. Most super resolution works focus on feature-learning architectures that recover texture details as faithfully as possible. However, these works face the following challenges: (1) The low-resolution (LR) training images are artificially synthesized by bicubic downsampling of HR images, and are therefore much richer in information than real demosaic-upscaled mobile images. This mismatch between training data and real mobile data heavily blocks the improvement of practical super resolution algorithms. (2) These methods cannot effectively handle blind distortions during super resolution in practical applications. In this work, an end-to-end framework consisting of a high-to-low network and a low-to-high network is proposed to solve the above problems with dual Generative Adversarial Networks (GAN). First, the mismatch problem is addressed with the high-to-low network, which generates pairs of clear high-resolution images and corresponding realistic low-resolution images. Moreover, a large-scale General Mobile Super Resolution dataset, GMSR, is proposed, which can be used for training or as a fair comparison benchmark for super resolution methods. Second, an effective low-to-high network (super resolution network) is proposed in the framework. Benefiting from the GMSR dataset and novel training strategies, the super resolution model can effectively handle detail recovery and denoising at the same time.




1 Introduction

Figure 1: (a) Example of a mobile low-resolution image, which is used as the input of super resolution (SR) methods. (b) Example of the super-resolution result of the state-of-the-art method ESRGAN [33]. (c) Example of the SR result of the proposed method. (Best viewed in color.)

Among all photographic equipment, the mobile phone has become the most popular device because of its portability and ever-improving quality. Most mobile phone cameras support 1x-10x zoom. However, achieving such large zoom-level changes with optical zoom is not practical. Thus, digital zoom, i.e. super resolution, is extensively used in smartphones.

This paper targets single image super resolution (SISR), which produces a high-resolution image from a single low-resolution image. Various works have been done to recover details and generate realistic high-resolution (HR) images. Recently, learning-based methods have shown great potential in solving low-level vision problems [40, 41, 15, 14]. In particular, the development of deep learning has provided various learning-based solutions for super resolution [7, 8, 24, 33], where details and patterns are learned from large-scale datasets. Among the learning-based methods, the generative adversarial network (GAN) [13] has achieved great progress. It has been shown that GAN-based methods [24, 33] can generate plausible-looking natural details that are consistent with the human visual system (HVS).

However, mismatches exist between the datasets used by most existing learning-based methods and real images captured by a mobile phone. First, in most super resolution methods, the low-resolution (LR) images in popular datasets such as Set5 and Set14 are simply obtained by bicubic downsampling of the high-resolution image: each pixel of the LR image is a weighted average of the corresponding patch of the HR image. Thus the LR training image carries much more information than 1/N^2 of the information of the HR image, where N is the downscale factor; in short, the training LR image is rich in information. In the real case, by contrast, each pixel of a mobile-captured image is actually upscaled by demosaicing, so the image is under-informative relative to its actual size. Therefore, the first mismatch between the LR training data and mobile-captured data is the informative level. Second, the LR image in the training data is clean, without noise or distortions, while the LR image captured by a mobile phone suffers from many distortions, such as noise and blur, as shown in Fig. 1 (a).

These mismatches between the downsampled LR training data and real LR data (mobile-captured data) heavily restrict the performance of the algorithm and lead to unsatisfactory results. As shown in Fig. 1 (b), the HR image generated by the state-of-the-art method ESRGAN [33] with DIV2K as training data cannot handle real mobile images well.

Based on these observations, an end-to-end framework is proposed which consists of two parts. First, a high-to-low network is trained to generate realistic HR/LR image pairs for training super resolution models. Benefiting from this network, a new dataset, GMSR, is generated, which provides training and testing sets accompanied by references of different similarity levels in terms of noise reduction, texture, color, illumination, viewpoint, etc. This dataset can promote fair comparison and support further research on SR problems in general. Second, a low-to-high network is trained to produce the super resolution image. Novel training methods are utilized to produce better results. As shown in Fig. 1 (c), the HR image generated by our method handles noise better than ESRGAN [33], as illustrated by the blue patch; moreover, better details are reproduced, as shown in the red patch. To sum up, the proposed super resolution method can effectively deal with real distortions and restore fine details.

The contributions of our work can be summarized in three parts. First, a general SR problem is explored that directly faces the mismatch between real mobile images and commonly used training data, and a high-to-low (H2L) network is proposed to solve it effectively. Second, an end-to-end framework is proposed which includes two parts, the high-to-low network (H2L) and the super-resolution network (L2H). During training, dual Generative Adversarial Networks (GAN) are used to optimize the parameters. Besides, novel training strategies, such as using nearest-neighbour downsampling to generate training pairs, were explored to further improve performance. We demonstrate the visual improvement, finely reproduced details, and denoising effect of the proposed dual-generative network through extensive empirical studies. Third, a large-scale General Mobile Super Resolution dataset, GMSR, is generated by the high-to-low network. This dataset can support further research in the super resolution domain and can be used either for training or as a fair benchmark.

2 Related Work

Single image super resolution (SISR) is an important topic that has been studied for a long time. Early methods [25, 3, 39, 9] based on interpolation theory can be very fast but usually yield over-smooth results. Methods relying on neighbor embedding and sparse representation [6, 21, 30, 31, 35, 37] target learning the mapping between LR and HR. Some example-based approaches used the self-similarity property of images to reduce the amount of training data needed [12, 10, 11] and to increase the size of the limited internal dictionary [17].

With the development of deep learning, methods based on convolutional neural networks have shown great potential in solving super resolution problems. Dong et al. [7] first proposed an end-to-end convolutional neural network to learn the mapping between HR and LR. Kim et al. [19] improved reconstruction accuracy by using a very deep convolutional network with residual learning, and further employed a recursive structure with skip connections to improve performance without introducing new parameters [20]. To increase inference efficiency, methods taking low-resolution images as input [23, 8, 27, 26] were proposed. In [23], the authors proposed a Laplacian Pyramid network structure to reduce computational complexity and improve performance. Dong et al. [8] and [27] introduced different deconvolution methods to upscale LR to HR.

Most recently, GAN-based super resolution methods have been proposed and achieved great progress. Ledig et al. [24] first applied GAN to the super resolution task and obtained highly photo-realistic results. ESRGAN [33] improved on SRGAN by modifying the network structure and loss. Network conditioning was used in [32] to combine a category prior with the generative network and obtain more realistic textures. Yuan et al. [36] developed unsupervised learning with GAN, where unpaired HR-LR data were used. Until now, most super resolution methods have used bicubic-downsampled LR images as training data. However, since bicubic downsampling is quite different from real-world degradation, the trained models are not effective in real-world cases. Inspired by ESRGAN [33], we follow its main structure and propose a deeper network, the low-to-high (L2H) network, to solve the more complicated real-world cases, and we build a large-scale General Mobile Super Resolution dataset, GMSR, which is potentially applicable to other image reconstruction methods. Bulat et al. [5] adopted a degradation network to first generate LR images, then used them to train the model with good results. However, they focus on the face category, which has much less complexity and diversity than general images. We follow their idea and extend their work to general image super resolution: a high-to-low (H2L) network is proposed to learn the degradation from HR to LR, so that a 'realistic' LR image can be generated for any HR image. Zhao et al. [42] proposed a degradation network to generate 'realistic' LR images and used them to train the degradation and SR reconstruction networks. However, they use only one discriminative network to train the degradation network. In contrast, in this work two discriminative networks are used to train the H2L and L2H networks respectively, and the two networks are also jointly optimized.

Figure 2: Pipeline of the proposed SRDGAN framework, including the high-to-low network (H2L) and the low-to-high network (L2H). The high-to-low network is proposed to generate realistic HR/LR image pairs. Random noise N is set as an input of H2L, which adds randomness when generating realistic distorted LR images. The low-to-high network is proposed to produce the super resolution image. The detailed architectures of H2L and L2H are illustrated in the top and bottom rows respectively. (Best viewed in color.)
H2L: High-to-low network
{HR_clear, HR_noisy}: Pixel-level corresponding image pairs, including a clear image and a noisy image
LR: Low-resolution image downsampled from the noisy HR image
–: Large-scale noisy images captured by phone, with no corresponding clear images
–: LR images cropped from the noisy dataset
GMSR: Proposed general mobile super resolution dataset
–: Image pairs in dataset GMSR
HR: Clear high-resolution image used as input of H2L
N: Random noise added to HR
L2H: Low-to-high network, i.e. the super-resolution network
Table 1:   Notations used in the paper.

3 Proposed Framework

The proposed framework aims at solving real-world super resolution problems. In the framework, a high-to-low network is first proposed to generate realistic image pairs; it is trained with GAN using real-world images captured by mobile phones (Section 3.2). Second, a super resolution method with a novel structure and training strategy is explored (Section 3.3). Besides, new datasets are captured or generated, which are also introduced in this section.

3.1 Datasets

Three novel datasets were captured or generated to train the proposed framework. Two of them were captured to train the high-to-low network, while dataset GMSR is simulated by the high-to-low network and used to train the super resolution model. Details are given as follows.

  1. In order to learn the noise distribution of real-world images, we used a smartphone to capture a dataset. A Blackberry Key2 [1] was selected for this task because its optical module is representative of many common mobile devices. The dataset contains 447 pairs of images, which are randomly divided into training, validation, and test sets of 400, 27, and 20 pairs respectively. Each pair contains a noisy image and a corresponding clear image. To generate the clear image, a burst of 20 noisy images was captured with the mobile phone fixed on a tripod.

    A multi-frame denoising method was used to generate a clear high-resolution image by adaptively fusing the 20 noisy images. Since any one of the 20 noisy images can be paired with the fused clear image, up to 20 training pairs can be generated for each scene. During training, the LR image is downsampled from the noisy image and constitutes an image pair with the clear image.

  2. A second dataset contains 206 noisy images captured by the Blackberry Key2 [1]. The contents of these images differ from those in the first dataset. We randomly crop these images to LR size and feed them to the discriminator network as references, to help the high-to-low network learn the actual noise distribution.

  3. The General Mobile Super Resolution dataset, GMSR, is generated using the high-to-low network and contains 1153 image pairs. A large number of clear images were collected from the Internet and regarded as HR images. The corresponding low-resolution images are generated by the high-to-low network, and the details are introduced in Sec. 3.2. This simulated dataset is used to train the super resolution network.

Figure 3: Training procedure of the proposed framework with dual GAN. During training, three kinds of losses are used: pixel loss, feature loss (Pix/Fea loss), and GAN loss. (a) introduces the training process of the high-to-low network, where the captured datasets are utilized. The ground-truth LR is obtained from the noisy image by nearest-neighbour downsampling, and real LR crops are used for the GAN loss. (b) is the training procedure of the low-to-high network, where image pairs from dataset GMSR and DIV2K [29] are utilized for training. (Best viewed in color.)

3.2 High-to-low Network

The biggest problem faced by SR is the lack of realistic datasets, i.e. noisy low-resolution images with corresponding high-resolution images. The difficulty of generating such image pairs is twofold: one is to model and simulate the distortions in LR images; the other is to generate image pairs with pixel-wise correspondence.

To solve this problem, a high-to-low network is proposed. Notations used in this paper are summarized in Table 1. The LR image is generated from a clear HR image as follows:

    LR = H2L(HR, N),     (1)

where HR is a clear high-resolution image and the generated low-resolution image LR is supposed to have a noise distribution similar to that of real-world images. Gaussian random noise N is added to simulate the randomness of the distortions in LR. The mean and standard deviation of the Gaussian noise are 0 and 0.05 respectively; the two values can be adjusted according to the distortion level of different mobile devices. The transition from Gaussian noise to the noise found in real-world images is accomplished by the H2L network and learned through GAN.
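The noise injection step above can be sketched in a few lines; this is a minimal illustration assuming images as float arrays in [0, 1]. The Gaussian parameters (mean 0, std 0.05) are the ones stated in the text, while the function name and array shapes are purely illustrative.

```python
import numpy as np

def add_gaussian_noise(hr, mean=0.0, std=0.05, rng=None):
    """Add Gaussian random noise N to a clear HR image in [0, 1].

    The noisy result is what the high-to-low generator consumes; the
    GAN then pushes this synthetic noise toward the real mobile noise
    distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(mean, std, size=hr.shape)
    return np.clip(hr + noise, 0.0, 1.0)

hr = np.full((32, 32, 3), 0.5)  # a flat gray "HR" patch
noisy = add_gaussian_noise(hr, rng=np.random.default_rng(0))
```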

3.2.1 Model Structure

The structure of the high-to-low network is shown in Fig. 2. It consists of four Resblocks and one convolutional layer. Each Resblock consists of two convolution layers and two activation functions. In order to further build global connections, multiple shortcut connections are added, as indicated by the red lines. The final convolution layer decreases the resolution by setting its stride to the downscale factor. In this way, the network is able to model the distortion of real mobile images caused by the image capturing and processing pipeline.
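The resolution-reducing final layer can be illustrated as a strided convolution. The toy single-channel NumPy version below (fixed averaging kernel, illustrative sizes; the real layer's kernel is learned) only shows how a stride equal to the downscale factor shrinks the output.

```python
import numpy as np

def strided_conv2d(x, kernel, stride):
    """Valid 2-D convolution with the given stride (single channel).

    With stride equal to the downscale factor, the output resolution
    drops by that factor, which is how the final layer of the
    high-to-low network produces the LR image.
    """
    kh, kw = kernel.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(64 * 64, dtype=float).reshape(64, 64)
kernel = np.ones((3, 3)) / 9.0                       # toy averaging kernel
lr = strided_conv2d(np.pad(x, 1), kernel, stride=4)  # 64x64 -> 16x16
```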

3.2.2 Training Procedure of H2L

The training strategy of the high-to-low network is summarized in Fig. 3. Its inputs are the clear HR image and random noise N, and its output is the generated LR image. During training, the captured datasets are used. To obtain a ground truth for the generated LR image, the noisy image is downsampled, and the result is used to calculate the pixel and feature losses. However, downsampling does not preserve the original noise distribution. Therefore, a Generative Adversarial Network (GAN) is used to further learn the noise distribution of mobile images. The loss function and downsampling strategy are introduced as follows.

Loss function: Pixel loss, feature loss (i.e. perceptual loss), and GAN loss are used to train the high-to-low network:

    L_H2L = α · L_pix + β · L_fea + γ · L_GAN,     (2)

where α, β, and γ are weighting parameters.

Pixel loss compares, pixel by pixel, the difference between the ground-truth LR image and the network output:

    L_pix = (1/P) · Σ_p |LR_gt(p) - LR_out(p)|,     (3)

where P is the number of pixels in the LR image.
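A minimal sketch of the pixel loss; the paper does not state which norm is used, so the absolute (L1) difference here is an assumption, and the function name is illustrative.

```python
import numpy as np

def pixel_loss(lr_gt, lr_gen):
    """Mean absolute pixel difference; P is the number of pixels."""
    assert lr_gt.shape == lr_gen.shape
    return float(np.sum(np.abs(lr_gt - lr_gen)) / lr_gt.size)

a = np.zeros((4, 4))
b = np.full((4, 4), 0.5)
loss = pixel_loss(a, b)  # every pixel differs by 0.5
```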

Feature loss compares the difference between high-level features of two images, which has been proved effective in many previous works [24, 33, 16, 18]. Inspired by these works, VGG [28] is utilized as the feature extraction network:

    L_fea = (1/K) · Σ_k |φ_k(LR_gt) - φ_k(LR_out)|,     (4)

where φ_k(·) denotes the high-level features of the input image extracted by VGG and k is the index into the extracted features.
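The structure of a perceptual loss can be sketched without the actual network; the feature extractor below is a hypothetical stand-in (the real method uses activation maps from a pretrained VGG), so only the loss shape, not the numbers, reflects the paper.

```python
import numpy as np

def fake_vgg_features(img, num_maps=4):
    # Hypothetical stand-in for pretrained-VGG feature extraction:
    # the real method takes high-level activation maps phi_k from VGG.
    return [np.roll(img, k, axis=0) * 0.5 for k in range(num_maps)]

def feature_loss(img_a, img_b, extract=fake_vgg_features):
    # Mean absolute difference between corresponding feature maps
    # phi_k of the two images, averaged over the feature index k.
    feats_a, feats_b = extract(img_a), extract(img_b)
    return float(np.mean([np.mean(np.abs(fa - fb))
                          for fa, fb in zip(feats_a, feats_b)]))

identical = feature_loss(np.ones((8, 8)), np.ones((8, 8)))  # 0.0
```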

However, the ground-truth LR is generated by downsampling mobile images, and its noise distribution is not well preserved. Thus GAN is used to learn the distribution by discriminating unpaired images: the generated LR image and a real LR image randomly cropped from the captured noisy dataset. The generator's adversarial loss is

    L_GAN = -(1/B) · Σ_i log D(LR_out^(i)),     (5)

where B is the batch size and D is the discriminative model with its own parameters. The goal of D is to distinguish the generated LR images from real LR crops; D is trained iteratively with the generative model G, as shown in Equ. 6:

    min_G max_D  E[log D(LR_real)] + E[log(1 - D(G(HR, N)))].     (6)

Thus, a better D is trained to distinguish results generated by G, while a better G is trained to "confuse" the discriminator.
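The alternating objective can be sketched with the standard binary cross-entropy form (an assumption; the paper does not spell out the exact formulation), where discriminator scores lie in [0, 1].

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator objective (inner maximization): score real LR
    # crops toward 1 and generated LR images toward 0.
    eps = 1e-12
    return float(-np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))

def g_gan_loss(d_fake):
    # Generator adversarial term: G is rewarded when D scores its
    # output as real, i.e. when it "confuses" the discriminator.
    eps = 1e-12
    return float(-np.mean(np.log(d_fake + eps)))

# D scores for a batch of real crops and generated images.
sharp_d = d_loss(np.array([0.99, 0.98]), np.array([0.02, 0.01]))
confused_d = d_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```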


Downsampling strategy: Bicubic interpolation is used as the downsampling method to generate the training LR images in most previous works [33, 24, 7, 8]. However, images downsampled by bicubic interpolation contain much more information than real-world images captured by mobile phones.

To be specific, during bicubic interpolation the luminance of a pixel in the low-resolution image is a weighted average of the luminance of its surrounding pixels, so rich information is retained in the bicubic-downsampled image. In real-world cases, however, the multi-channel color image captured by a mobile phone is actually interpolated from a single-channel sensor image by the demosaicing step of the image signal processing (ISP) pipeline. The information levels of real-world images and bicubic-downsampled images are therefore unbalanced, and models trained with bicubic-downsampled image pairs cannot work well on real-world images.

In this work, nearest-neighbour downsampling is adopted to better simulate the ill-informative character of real mobile images.
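The contrast between the two choices can be seen on a tiny example; the box average below stands in for bicubic's weighted averaging (both function names are illustrative).

```python
import numpy as np

def downsample_nearest(img, factor):
    """Pick every factor-th pixel: each LR pixel is a single HR pixel,
    matching the under-informative character of demosaiced mobile data."""
    return img[::factor, ::factor]

def downsample_box(img, factor):
    """Average factor x factor blocks: each LR pixel mixes factor^2 HR
    pixels (a stand-in for bicubic's weighted averaging)."""
    h, w = img.shape[0] // factor, img.shape[1] // factor
    blocks = img[:h * factor, :w * factor].reshape(h, factor, w, factor)
    return blocks.mean(axis=(1, 3))

hr = np.arange(16.0).reshape(4, 4)
near = downsample_nearest(hr, 2)  # copies of original pixels
box = downsample_box(hr, 2)       # each pixel blends 4 HR pixels
```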

3.3 Low-to-high Network

The low-to-high network performs super resolution on the low-resolution image with its parameters θ, as in Equation 7:

    HR_sr = L2H(LR; θ).     (7)
3.3.1 Network Structure

The structure of the low-to-high network is shown in Fig. 2. Following ESRGAN [33], the RRDB block is used as the basic network unit. Each RRDB block contains 3 residual dense blocks (RDB), and each of these contains 5 convolution layers in a dense structure without BN layers. The basic unit is not limited to the RRDB block; other blocks, such as the residual block and the dense block, can also serve as the basic unit in the proposed network structure. However, since the complexity of the real-world super resolution task is high, a deeper structure with 25 RRDB blocks is proposed to model it.

3.3.2 Training Procedure of L2H

The training strategy of the low-to-high network is summarized in Fig. 3. Significant improvements are achieved by the new training strategies, i.e. using simulated realistic training data and nearest-neighbour downsampling. Benefiting from these, the SR model is capable of dealing with real-world image distortions.

The loss function of the low-to-high network is similar to that of the high-to-low network described in Section 3.2.2:

    L_L2H = α · L_pix + β · L_fea + γ · L_GAN,     (8)

where α, β, and γ are weighting parameters. The main difference between the two networks lies in the training data. For the low-to-high network, two categories of training samples are utilized. The most important one is dataset GMSR, which simulates real-world distortions. In addition, the popular dataset DIV2K [29] is also utilized, with nearest neighbour as the downsampling method, to further increase data diversity and encourage the model to perform well both on real-world cases and on traditional benchmarks.

Finally, global fine-tuning is performed on the whole network structure, including both generators and the discriminative networks, to jointly optimize performance.

Figure 4: Subjective comparison between the proposed method and some state-of-the-art methods. Left: a mobile low-resolution image example and a detail patch. Right: super-resolution results of different methods and the high-resolution ground truth (HR). (Best viewed in color.)

4 Experiments

Experiments were conducted to evaluate the effectiveness of the proposed framework. First, the SR results of models trained with the proposed framework are presented. Second, the effectiveness of each component is evaluated, including the image pairs generated by the high-to-low network and the novel training strategies and structure used in the low-to-high network.

The parameters for training the proposed method are as follows. The batch size is 16, with cropped training patches. Following ESRGAN [33], the learning rate is halved at [50k, 100k, 200k, 300k] iterations. The Adam [22] optimizer is used without weight decay.
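The step schedule above can be written as a small lookup; the base rate 1e-4 is an assumption (the initial value did not survive extraction), while the milestones are the ones stated.

```python
def learning_rate(iteration, base_lr=1e-4,
                  milestones=(50_000, 100_000, 200_000, 300_000)):
    """Step schedule: halve the learning rate at each milestone
    iteration that has been passed."""
    halvings = sum(iteration >= m for m in milestones)
    return base_lr * 0.5 ** halvings
```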

4.1 Super Resolution Results

Figure 5: Demonstration of the image pairs generated by the high-to-low network. Please note that the resolutions of the HR input and the generated LR output are different. (Best viewed in color.)

Subjective and quantitative experiments were conducted to evaluate the performance of the proposed method. Table 2 shows the quantitative evaluation of popular and state-of-the-art GAN-based super resolution methods. For SRCNN [7] and SRGAN [24], the Set5 and Set14 results are copied from the original papers, and the results on our captured test set are generated with the models provided by the authors. Results of ESRGAN [33] are likewise calculated with the code provided by its authors; since ESRGAN did not report quantitative results in the paper, we downloaded its GAN-based model and tested it on Set5, Set14, and our test set. To evaluate the proposed method, both general datasets (Set5 [4], Set14 [38]) and the mobile-specific test set are used. PSNR and SSIM [34] are adopted as metrics, where a higher value indicates better performance. The best results are shown in bold.
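For reference, the PSNR values reported here follow the standard definition; this is a minimal pure-Python version over flat pixel sequences (the paper evaluates on the Y channel).

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```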

It is shown that the proposed method outperforms the other state-of-the-art methods in terms of SSIM [34] in all cases, demonstrating its effectiveness in reconstructing contrast, luminance, and structure. In particular, the improvement on the mobile test set is significant, with more than 1 dB gain in PSNR and 0.01 in SSIM, showing its good performance on mobile images.

Subjective comparison results are provided in Fig. 4, using real-world mobile images. PSNR and SSIM results, evaluated on the Y channel, are presented at the bottom for reference. In many situations, mobile images suffer from noise and missing detail. Most methods, including bicubic, SRCNN [7], SRGAN [24], and ESRGAN [33], cannot handle these cases well: the noise remains obvious and is even amplified in the SR results. In contrast, our method delivers good results by removing the noise and enhancing the details.

Method Set5 Set14 Set (test)
Bicubic 28.42/0.8104 26.00/0.7027 35.85/0.9674
SRCNN 30.07/0.8627 27.50/0.7513 39.09/0.9716
SRGAN 29.40/0.8472 26.02/0.7397 38.43/0.9596
ESRGAN 30.55/0.8677 26.39/0.7246 38.55/0.9572
Ours 30.37/0.8766 27.35/0.7767 39.93/0.9782
Table 2: Quantitative evaluation of SR algorithms(PSNR/SSIM)
Figure 6: Comparison results of ESRGAN and the proposed method.

4.2 Results of H2L

With the high-to-low network, we aim to simulate LR images that better resemble real mobile images and to generate a large-scale dataset. The image pairs generated by the network are shown in Fig. 5, including the high-resolution inputs and the corresponding low-resolution outputs. It can be seen that the noise distribution of the generated LR images is consistent with that of mobile images: the simulated noise levels at different positions and on different objects are similar to the real ones. Moreover, color is well preserved in the generated low-resolution images.

4.3 Exploring Training Strategies for L2H

Experiments were conducted to evaluate the effectiveness of the new dataset, the new training strategies, and the proposed model structure, as shown in Fig. 6. 'ESRGAN' denotes the original ESRGAN method with a GAN-based structure and DIV2K [29] (bicubic downsampling) as training data. 'ESRGAN*' denotes the same model structure retrained on the new datasets, DIV2K [29] (nearest-neighbour downsampling) and GMSR. 'Ours' is the proposed method with the new model structure and the new training datasets. Comparing 'ESRGAN' with 'ESRGAN*' evaluates the effectiveness of the new training datasets, while comparing 'ESRGAN*' with 'Ours' evaluates the impact of the model structure.

PSNR and SSIM [34] are used, where a larger value indicates better performance. The test was conducted on both the general dataset Set14 [38] and the mobile image test set. 'ESRGAN*' outperforms 'ESRGAN' purely by changing the training data, which shows that the new dataset (GMSR) and the new training strategy (nearest-neighbour downsampling) generalize easily to other model structures and improve performance on both general and mobile data. Moreover, 'Ours' further improves over 'ESRGAN*', demonstrating the effectiveness of the proposed model structure.

5 Conclusion

In this work, a novel framework, SRDGAN, is proposed to solve the super resolution problem with a learned noise prior via dual generative adversarial networks. A general degradation network, H2L, is proposed that learns the noise prior of real-world LR images. Using the proposed training strategy, the H2L network generates 'realistic' LR images paired with HR images, and a large-scale general mobile super resolution benchmark dataset, GMSR, is generated from it. GMSR can help super resolution methods transfer well to real-world images. Meanwhile, a super resolution network, L2H, is proposed, with a new structure and new training strategies (i.e. using simulated realistic data for training and nearest neighbour as the downsampling method). Notably, these training strategies are proved effective for other SR models, such as ESRGAN, and improve their performance.


  • [1]
  • [2] N. Ahn, B. Kang, and K.-A. Sohn. Fast, accurate, and, lightweight super-resolution with cascading residual network. Proc. ECCV, 2018.
  • [3] J. Allebach and P. W. Wong. Edge-directed interpolation. In Proc. ICIP, volume 3, pages 707–710. IEEE, 1996.
  • [4] M. Bevilacqua, A. Roumy, C. Guillemot, and M. L. Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. 2012.
  • [5] A. Bulat, J. Yang, and G. Tzimiropoulos. To learn image super-resolution, use a gan to learn how to do image degradation first. In Proc. ECCV, pages 185–200, 2018.
  • [6] H. Chang, D.-Y. Yeung, and Y. Xiong. Super-resolution through neighbor embedding. In Proc. CVPR, pages 275–282. IEEE, 2004.
  • [7] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In Proc. ECCV, 2014.
  • [8] C. Dong, C. C. Loy, and X. Tang. Accelerating the super-resolution convolutional neural network. In Proc. ECCV, pages 391–407. Springer, 2016.
  • [9] C. E. Duchon. Lanczos filtering in one and two dimensions. Journal of applied meteorology, 18(8):1016–1022, 1979.
  • [10] G. Freedman and R. Fattal. Image and video upscaling from local self-examples. ACM Transactions on Graphics (TOG), 30(2):12, 2011.
  • [11] X. Gao, K. Zhang, D. Tao, and X. Li. Joint learning for single-image super-resolution via a coupled constraint. IEEE Trans. Image Processing, 21(2):469–480, 2012.
  • [12] D. Glasner, S. Bagon, and M. Irani. Super-resolution from a single image. In Proc. ICCV, pages 349–356. IEEE, 2009.
  • [13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proc. NeurIPS, pages 2672–2680, 2014.
  • [14] J. Guan and W.-K. Cham. Quality estimation based multi-focus image fusion. In Proc. ICASSP, pages 1987–1991. IEEE, 2017.
  • [15] J. Guan, S. Yi, X. Zeng, W.-K. Cham, and X. Wang. Visual importance and distortion guided deep image quality assessment framework. IEEE Transactions on Multimedia, 19(11):2505–2520, 2017.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. CVPR, June 2016.
  • [17] J.-B. Huang, A. Singh, and N. Ahuja. Single image super-resolution from transformed self-exemplars. In Proc. CVPR, pages 5197–5206, 2015.
  • [18] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proc. ECCV, pages 694–711. Springer, 2016.
  • [19] J. Kim, J. Kwon Lee, and K. Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proc. CVPR, pages 1646–1654, 2016.
  • [20] J. Kim, J. Kwon Lee, and K. Mu Lee. Deeply-recursive convolutional network for image super-resolution. In Proc. CVPR, pages 1637–1645, 2016.
  • [21] K. I. Kim and Y. Kwon. Single-image super-resolution using sparse regression and natural image prior. IEEE transactions on pattern analysis and machine intelligence, 32(6):1127–1133, 2010.
  • [22] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [23] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. In Proc. CVPR, pages 624–632, 2017.
  • [24] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proc. CVPR, pages 4681–4690, 2017.
  • [25] X. Li and M. T. Orchard. New edge-directed interpolation. IEEE Trans. Image Processing, 10(10):1521–1527, 2001.
  • [26] B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proc. CVPR Workshops, pages 136–144, 2017.
  • [27] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proc. CVPR, pages 1874–1883, 2016.
  • [28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [29] R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, and L. Zhang. Ntire 2017 challenge on single image super-resolution: Methods and results. In Proc. CVPR Workshops, pages 114–125, 2017.
  • [30] R. Timofte, V. De Smet, and L. Van Gool. Anchored neighborhood regression for fast example-based super-resolution. In Proc. CVPR, pages 1920–1927, 2013.
  • [31] R. Timofte, V. De Smet, and L. Van Gool. A+: Adjusted anchored neighborhood regression for fast super-resolution. In Proc. ACCV, pages 111–126. Springer, 2014.
  • [32] X. Wang, K. Yu, C. Dong, and C. Change Loy. Recovering realistic texture in image super-resolution by deep spatial feature transform. In Proc. CVPR, pages 606–615, 2018.
  • [33] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy. Esrgan: Enhanced super-resolution generative adversarial networks. In Proc. ECCV, pages 63–79. Springer, 2018.
  • [34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Processing, 13(4):600–612, 2004.
  • [35] J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. IEEE Trans. Image Processing, 19(11):2861–2873, 2010.
  • [36] Y. Yuan, S. Liu, J. Zhang, Y. Zhang, C. Dong, and L. Lin. Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks. In Proc. CVPR Workshops, pages 701–710, 2018.
  • [37] R. Zeyde, M. Elad, and M. Protter. On single image scale-up using sparse-representations. In International conference on curves and surfaces, pages 711–730. Springer, 2010.
  • [38] R. Zeyde, M. Elad, and M. Protter. On single image scale-up using sparse-representations. In International conference on curves and surfaces, pages 711–730. Springer, 2010.
  • [39] L. Zhang and X. Wu. An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Trans. Image Processing, 15(8):2226–2238, 2006.
  • [40] W. Zhang, C. Qu, L. Ma, J. Guan, and R. Huang. Learning structure of stereoscopic image for no-reference quality assessment with convolutional neural network. Pattern Recognition, 59:176–187, 2016.
  • [41] W. Zhang, Y. Zhang, L. Ma, J. Guan, and S. Gong. Multimodal learning for facial expression recognition. Pattern Recognition, 48(10):3191–3202, 2015.
  • [42] T. Zhao, C. Zhang, W. Ren, D. Ren, and Q. Hu. Unsupervised degradation learning for single image super-resolution. CoRR, abs/1812.04240, 2018.