Single image super-resolution (SR) aims to reconstruct a high-resolution (HR) image from a single low-resolution (LR) image. It allows a system to overcome limitations of LR imaging sensors or of image processing steps in multimedia systems. Several SR algorithms [18, 25, 23, 29, 30] have been proposed and applied in fields such as computer vision, image processing, and surveillance systems. However, SR is still challenging due to its ill-posedness, i.e., multiple HR images can be valid solutions for a single LR image. Furthermore, the reconstructed HR image should be close to the real one and, at the same time, visually pleasing.
In recent years, various deep learning-based SR algorithms have been proposed in the literature. Many of them adopt convolutional neural network architectures following the super-resolution convolutional neural network (SRCNN) [6], which showed better performance than classical SR methods. They typically consist of two parts: a feature extraction part and an upscaling part. By improving these parts in various ways, recent deep learning-based SR algorithms have achieved significant enhancement in terms of distortion-based quality metrics such as root mean squared error (RMSE) or peak signal-to-noise ratio (PSNR) [10, 15, 8, 13, 9, 16].
However, it has recently been shown that there exists a trade-off relationship between distortion and perception for image restoration problems including SR [5]. In other words, as the mean distortion decreases, the probability of correctly discriminating the output image from the real one increases. Generative adversarial networks (GANs) are a way to approach the perception-distortion bound. This is achieved by controlling the relative contributions of the two types of losses popularly employed in GAN-based SR methods: a content loss and an adversarial loss. For the content loss, a reconstruction loss such as the L1 or L2 loss is used. However, optimizing for the content loss usually leads to unnatural, blurry reconstruction, which improves the distortion-based performance but decreases the perceptual quality. On the other hand, focusing on the adversarial loss leads to perceptually better reconstruction, at the cost of distortion-based quality.
One of the keys to improving both distortion and perception is to include a perceptual component in the content loss. In this regard, proper consideration of high frequency components would be helpful, because many perceptual quality metrics rely on the frequency domain to measure perceptual quality [17, 20]. Not only traditional SR algorithms such as [27, 11] but also deep learning-based methods [14, 7] focus on restoring high frequency components. However, there have been few attempts to consider the frequency domain when comparing the real and fake (i.e., super-resolved) images in GAN-based SR.
In this study, we propose a novel GAN model for SR that considers the trade-off relationship between perception and distortion. Based on the good distortion-based performance of our base model, i.e., the deep residual network using enhanced upscale modules (EUSR) [10]
, the proposed GAN model is trained to improve both perception and distortion. Together with the conventional content loss for deep networks, we consider additional loss functions, namely the discrete cosine transform (DCT) loss and the differential content loss. These loss functions directly consider the high frequency parts of the super-resolved images, which are related to the perception of image quality by the human visual system. The proposed model was ranked 2nd among 13 participants in Region 1 of the PIRM Challenge on perceptual super-resolution at ECCV 2018.
The rest of the paper is organized as follows. We first describe the base model of the proposed method and the proposed loss functions in Section 2. Then, in Section 3, we explain the experiments conducted for this study. The results and analysis are given in Section 4. Finally, we conclude the study in Section 5.
2 Proposed Method
2.1 Super-resolution with enhanced upscaling modules
As the generator in the proposed model, we employ the recently developed EUSR model [10]. Its overall structure is shown in Figure 1. It is a multi-scale approach performing reconstruction at three different scales (×2, ×4, and ×8) simultaneously. Low-level features for each scale are extracted from the input LR image by two residual blocks (RBs), and higher-level features are extracted by the residual module (RM), which consists of several local RBs, one convolution layer, and a global skip connection. Then, for each scale, the extracted features are upscaled by enhanced upscaling modules (EUMs). This model showed good performance on several benchmark datasets in the NTIRE 2018 Challenge [24] in terms of PSNR and structural similarity (SSIM) [26]. We set the number of RBs in each RM to 80, which is larger than the 48 used in [10], in order to enhance the learning capability of the network.
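The exact structure of the EUM is given in the EUSR paper; as a rough illustration only, upscaling modules of this kind are typically built around a sub-pixel (depth-to-space) rearrangement that trades channels for spatial resolution. A minimal NumPy sketch of that core operation (the function name and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def depth_to_space(x, r):
    """Rearrange an (H, W, C*r*r) feature map into an (H*r, W*r, C) output,
    i.e., the sub-pixel upscaling step applied after the last convolution."""
    h, w, c = x.shape
    assert c % (r * r) == 0, "channel count must be divisible by r*r"
    out_c = c // (r * r)
    x = x.reshape(h, w, r, r, out_c)
    x = x.transpose(0, 2, 1, 3, 4)  # interleave the two scale factors with H and W
    return x.reshape(h * r, w * r, out_c)

# A 1x1 feature map with 4 channels becomes a 2x2 single-channel patch.
patch = depth_to_space(np.arange(4.0).reshape(1, 1, 4), 2)
```

Chaining such ×2 modules is one common way to reach the larger scales handled by the multi-scale generator.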
The discriminator network in the proposed method is based on that of the super-resolution using a generative adversarial network (SRGAN) model 
. The network consists of 10 convolutional layers followed by leaky ReLU activations and batch normalization units. The resulting feature maps are processed by two dense layers and a final sigmoid activation in order to estimate the probability that the input image is real (HR) rather than fake (super-resolved).
2.2 Loss functions
In addition to the conventional loss functions for GAN-based SR models, i.e., the content loss ($l_c$) and the adversarial loss ($l_{adv}$), we consider two more content-related losses to train the proposed model: the DCT loss ($l_{dct}$) and the differential content loss ($l_{dc}$), which we name perceptual content losses (PCL) in this study. Therefore, we use four loss functions in total in order to improve both the perceptual quality and the distortion-based quality. The details of the loss functions are described below.
Content loss ($l_c$): The content loss is a pixel-based reconstruction loss function. The L1-norm and L2-norm are generally used for SR. We employ the L1-norm between the HR and SR images:
$$l_c = \frac{1}{WH} \sum_{i=1}^{W} \sum_{j=1}^{H} \left| I^{HR}_{i,j} - I^{SR}_{i,j} \right|,$$
where $W$ and $H$ are the width and height of the image, respectively, $I^{HR}_{i,j}$ and $I^{SR}_{i,j}$ are the pixel values of the HR and SR images, respectively, and $i$ and $j$ are the horizontal and vertical pixel indexes, respectively.
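As a minimal NumPy sketch (a stand-in for the paper's TensorFlow code), the L1 content loss is simply the mean absolute pixel difference:

```python
import numpy as np

def content_loss(hr, sr):
    """L1 content loss l_c: mean absolute pixel difference between HR and SR."""
    hr = np.asarray(hr, dtype=np.float64)
    sr = np.asarray(sr, dtype=np.float64)
    return np.mean(np.abs(hr - sr))
```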
Differential content loss ($l_{dc}$): The differential content loss evaluates the difference between the SR and HR images at a deeper level. It can help to reduce over-smoothing and to improve the reconstruction of high frequency components in particular. We also employ the L1-norm for the differential content loss:
$$l_{dc} = \frac{1}{WH} \sum_{i=1}^{W} \sum_{j=1}^{H} \left( \left| d_x I^{HR}_{i,j} - d_x I^{SR}_{i,j} \right| + \left| d_y I^{HR}_{i,j} - d_y I^{SR}_{i,j} \right| \right),$$
where $d_x$ and $d_y$ are the horizontal and vertical differential operators, respectively.
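A sketch of this loss in NumPy; since the exact differential operators are not specified here, simple first-order forward differences are assumed:

```python
import numpy as np

def differential_content_loss(hr, sr):
    """L1 loss on horizontal and vertical image gradients (l_dc)."""
    hr = np.asarray(hr, dtype=np.float64)
    sr = np.asarray(sr, dtype=np.float64)
    d_x = lambda img: img[:, 1:] - img[:, :-1]   # horizontal differences
    d_y = lambda img: img[1:, :] - img[:-1, :]   # vertical differences
    return (np.mean(np.abs(d_x(hr) - d_x(sr))) +
            np.mean(np.abs(d_y(hr) - d_y(sr))))
```

Note that a constant brightness offset between HR and SR leaves the gradients unchanged, so this loss isolates the high frequency mismatch that the plain content loss mixes with low frequency error.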
DCT loss ($l_{dct}$): The DCT loss evaluates the difference between the DCT coefficients of the HR and SR images. This enables explicit comparison of the two images in the frequency domain. In other words, while different SR images can have the same value of $l_c$, the DCT loss forces the model to generate the one whose frequency distribution is as similar to that of the HR image as possible. The L2-norm is employed for the DCT loss function:
$$l_{dct} = \frac{1}{WH} \sum_{i=1}^{W} \sum_{j=1}^{H} \left( DCT(I^{HR})_{i,j} - DCT(I^{SR})_{i,j} \right)^2,$$
where $DCT(I)$ denotes the DCT coefficients of image $I$.
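A self-contained NumPy sketch using a separable 2-D DCT-II built from its basis matrix (the paper's normalization convention is not stated, so the orthonormal variant is assumed here):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix C of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)  # DC row has its own scaling
    return c

def dct2(img):
    """Separable 2-D DCT: transform the columns, then the rows."""
    h, w = img.shape
    return dct_matrix(h) @ img @ dct_matrix(w).T

def dct_loss(hr, sr):
    """Mean squared difference between DCT coefficients (l_dct)."""
    hr = np.asarray(hr, dtype=np.float64)
    sr = np.asarray(sr, dtype=np.float64)
    return np.mean((dct2(hr) - dct2(sr)) ** 2)
```

Because the orthonormal DCT preserves energy, this loss equals the mean squared pixel error overall, but expressing it per coefficient is what lets frequency bands be compared (and, if desired, weighted) explicitly.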
Adversarial loss ($l_{adv}$): The adversarial loss is used to enhance the perceptual quality. It is calculated as
$$l_{adv} = -\log D(I^{SR}),$$
where $D(I^{SR})$, obtained by applying a sigmoid to the logits of the discriminator, represents the probability that the input image is a real image.
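In a numerically stable form, the generator-side adversarial loss and the combination of the four losses can be sketched as follows; the weights `w_dc`, `w_dct`, and `w_adv` are placeholders for illustration, not the values used in the paper:

```python
import numpy as np

def adversarial_loss(fake_logits):
    """Generator loss -log D(I_SR): sigmoid cross-entropy against the "real"
    label, written as softplus(-z) = log(1 + exp(-z)) for stability."""
    z = np.asarray(fake_logits, dtype=np.float64)
    return np.mean(np.logaddexp(0.0, -z))

def total_generator_loss(l_c, l_dc, l_dct, l_adv,
                         w_dc=1.0, w_dct=1.0, w_adv=1e-3):
    """Weighted sum of the four losses; the weights here are illustrative."""
    return l_c + w_dc * l_dc + w_dct * l_dct + w_adv * l_adv
```

When the discriminator is confident the SR image is real (large positive logits), the generator's adversarial loss approaches zero; confident rejection yields a large loss.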
3 Experiments

3.1 Datasets

We use the DIV2K dataset [1]
for training the proposed model in this experiment, which consists of 1000 2K-resolution RGB images. LR training images are obtained by downscaling the original images using bicubic interpolation. For testing, we evaluate the performance of the SR models on several datasets, i.e., Set5 [2], Set14 [28], BSD100 [19], and the PIRM self-validation set [4]. Set5 and Set14 consist of 5 and 14 images, respectively, while BSD100 and the PIRM self-validation set each include 100 challenging images. All testing experiments are performed with a scale factor of ×4, which is the target scale of the PIRM Challenge on perceptual super-resolution.
3.2 Implementation details
For the EUSR-based generator in the proposed model, we employ 80 local RBs in each RM and two RBs in the upscaling part. We first pre-train the EUSR model as a baseline on the training set of the DIV2K dataset. In the pre-training phase, we use only the content loss ($l_c$) as the loss function.
For each training step, we feed two randomly cropped 48×48 patches from the LR images into the networks. The patches are augmented by random rotation (90°, 180°, or 270°) or horizontal flips. The Adam optimization method [12] is used for both the pre-training and training phases, and the learning rate is reduced by half at fixed step intervals. A total of 500,000 training steps are executed. The networks are implemented using the TensorFlow framework. Training takes roughly two days on an NVIDIA GeForce GTX 1080 GPU.
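The cropping and augmentation step can be sketched as follows (a NumPy stand-in for the paper's TensorFlow input pipeline; in practice the matching HR patch would be cropped and transformed identically at the ×4 scale):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img, size=48):
    """Crop a random size x size patch from an LR training image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def augment(patch):
    """Random rotation by a multiple of 90 degrees (0/90/180/270), then a
    horizontal flip with probability 1/2."""
    patch = np.rot90(patch, k=rng.integers(0, 4))
    if rng.integers(0, 2):
        patch = patch[:, ::-1]
    return patch
```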
3.3 Performance measures
We measure the performance of the SR methods using both distortion-based quality and perception-based quality [5]. First, we measure the distortion-based quality of the SR images using RMSE, PSNR, and SSIM [26], which are calculated by comparing the SR and HR images. In addition, we measure the perceptual quality of a SR image $I^{SR}$ by the perceptual index
$$PI(I^{SR}) = \frac{1}{2} \left( \left( 10 - Ma(I^{SR}) \right) + NIQE(I^{SR}) \right),$$
where $Ma(\cdot)$ is the quality score measure proposed in [17] and $NIQE(\cdot)$ is the score of the natural image quality evaluator (NIQE) metric [20]. This perceptual index is also adopted to measure the performance of the SR methods in the PIRM Challenge on perceptual super-resolution [4]. The lower the perceptual index, the better the perceptual quality. We compute all metrics after discarding a 4-pixel border, on the Y channel of the YCbCr representation converted from RGB, following the PIRM Challenge protocol.
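Given the two no-reference scores from their external reference implementations, the perceptual index reduces to a one-liner:

```python
def perceptual_index(ma_score, niqe_score):
    """PI = 0.5 * ((10 - Ma) + NIQE); lower values mean better perceptual
    quality. ma_score and niqe_score are assumed to come from external
    implementations of the two metrics."""
    return 0.5 * ((10.0 - ma_score) + niqe_score)
```

Since Ma's score increases with quality while NIQE decreases, the `10 - Ma` term flips the former so that both summands reward perceptually better images with a lower index.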
4 Results

We evaluate the performance of the proposed method against state-of-the-art SR algorithms, i.e., the super-resolution generative adversarial network (SRGAN) [15], SRResNet (the SRGAN generator trained without the adversarial loss) [15], the dense deep back-projection networks (D-DBPN) [8], and the multi-scale deep Laplacian pyramid super-resolution network (MS-LapSRN) [14]. The bicubic upscaling method and the pre-trained EUSR model are also included. Our proposed model, named deep residual network using enhanced upscale modules with perceptual content losses (EUSR-PCL), and SRGAN are adversarial networks; the others are non-adversarial models. Note that SRResNet and SRGAN have variants optimized in terms of MSE or in the feature space of a VGG network [22]; we consider SRResNet-VGG and SRGAN-VGG in this study, which show better perceptual quality among the variants. For the Set5, Set14, and BSD100 datasets, the SR images are either obtained from the supplementary materials of the respective methods (SRGAN and SRResNet: https://twitter.app.box.com/s/lcue6vlrd01ljkdtdkhmfvk7vtjhetog; MS-LapSRN: http://vllab.ucmerced.edu/wlai24/LapSRN/) or reproduced from the released pre-trained models (D-DBPN: https://drive.google.com/drive/folders/1ahbeoEHkjxoo4NV1wReOmpoRWbl448z-?usp=sharing). For the PIRM set, the SR images of D-DBPN and EUSR are generated using their own pre-trained models.
Table 1 shows the performance of the considered SR methods on the Set5, Set14, and BSD100 datasets. Our proposed model ranks second among the SR methods in terms of perceptual quality. The perceptual index of the proposed method lies between those of SRGAN and SRResNet, which are an adversarial network and the best-performing non-adversarial model, respectively. In terms of PSNR and SSIM, EUSR-PCL outperforms both SRGAN and SRResNet. Compared with the other non-adversarial networks, i.e., EUSR, MS-LapSRN, and D-DBPN, our model shows slightly lower PSNR, while the perceptual quality is significantly improved. These results show that our model achieves a proper balance between the distortion and perception aspects.
Figures 2 and 3 show example images produced by the SR methods for qualitative evaluation. In Figure 2, except for the bicubic interpolation method, most methods restore the high frequency details of the HR image to some extent. When the details of the SR images are examined, however, the models show different qualitative results. The SR images in the bottom row (i.e., SRResNet, SRGAN, EUSR, and EUSR-PCL) show relatively better perceptual quality with less blurring, but the reconstructed details differ across methods. The images produced by SRGAN contain noise, although the method shows the best perceptual index for the Set5 dataset in Table 1. Our model scores lower than SRGAN in terms of the perceptual index, but the noise is less visible. In Figure 3, the details of the SR image of EUSR-PCL are also perceptually better than those of SRGAN, although SRGAN shows a better perceptual index than EUSR-PCL for the BSD100 dataset in Table 1. These results imply that a proper balance between perception and distortion is important, and our proposed model achieves it well.
The results for the PIRM dataset are summarized in Table 2. Here, we also consider variants of the EUSR-PCL model in order to examine the contributions of the losses. In the table, EUSR-PCL indicates the proposed model that considers all loss functions described in Section 2. EUSR-PCL ($l_c$) is the basic GAN model based on EUSR, EUSR-PCL ($l_c$+$l_{dct}$) is the model with the content loss and DCT loss, and EUSR-PCL ($l_c$+$l_{dc}$) is the model with the content loss and differential content loss. In all cases, the adversarial loss is included. EUSR-PCL performs best in terms of perception among all methods in the table. Although the PSNR values of the EUSR-PCL variants are slightly lower than those of EUSR and D-DBPN, their perceptual quality scores are better. Comparing EUSR-PCL and its variants confirms the effectiveness of the perceptual content losses: when both perceptual content losses are included, we obtain the best performance in terms of both perception and distortion.
Figure 4 shows example SR images for the PIRM dataset. The images obtained by the EUSR-PCL models in the bottom row have better perceptual quality and are less blurry than those of the other methods. As mentioned above, these models show lower PSNR values, but the reconstructed images are better in terms of perception. Comparing the variants of EUSR-PCL, there exist slight differences in the resulting images, in particular in the details; some variants generate noisier SR images than the full EUSR-PCL model. Although the differences between their quality scores in Table 2 are not large, the resulting images show noticeable perceptual differences. This demonstrates that improving the perceptual quality of SR is important, and the proposed method achieves good performance for perceptual SR.
5 Conclusion

In this study, we focused on developing perceptual content losses and proposed a GAN model in order to properly handle the trade-off between perception and distortion. We proposed two perceptual content loss functions, i.e., the DCT loss and the differential content loss, which are used to train the EUSR-based GAN model. The results showed that the proposed method is effective for SR applications in which both the perception and distortion aspects are considered.
Acknowledgment
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the “ICT Consilience Creative Program” (IITP-2018-2017-0-01015) supervised by the IITP (Institute for Information & communications Technology Promotion) and also supported by the IITP grant funded by the Korea government (MSIT) (R7124-16-0004, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding).
References
-  Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: Dataset and study. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2017)
-  Bevilacqua, M., Roumy, A., Guillemot, C., Morel, M.L.A.: Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: Proceedings of the British Machine Vision Conference (BMVC) (2012)
-  Blau, Y., Mechrez, R., Timofte, R., Michaeli, T., Zelnik-Manor, L.: 2018 PIRM Challenge on Perceptual Image Super-resolution. arXiv:1809.07517 (2018)
-  Blau, Y., Mechrez, R., Timofte, R., Michaeli, T., Zelnik-Manor, L.: 2018 PIRM Challenge on Perceptual Image Super-resolution. In: Proceedings of the European Conference on Computer Vision (ECCV) Workshops (2018)
-  Blau, Y., Michaeli, T.: The perception-distortion tradeoff. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
-  Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 184–199 (2014)
-  Gharbi, M., Chen, J., Barron, J.T., Hasinoff, S.W., Durand, F.: Deep bilateral learning for real-time image enhancement. ACM Transactions on Graphics (TOG) 36(4), 118 (2017)
-  Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
-  Kim, J., Lee, J.K., Lee, K.M.: Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1646–1654 (2016)
-  Kim, J.H., Lee, J.S.: Deep residual network with enhanced upscaling module for super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2018)
-  Kim, W.H., Lee, J.S.: Blind single image super resolution with low computational complexity. Multimedia Tools and Applications 76(5), 7235–7249 (2017)
-  Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: Proceedings of the International Conference on Learning Representations (ICLR) (2015)
-  Lai, W.S., Huang, J.B., Ahuja, N., Yang, M.H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
-  Lai, W.S., Huang, J.B., Ahuja, N., Yang, M.H.: Fast and accurate image super-resolution with deep Laplacian pyramid networks. arXiv:1710.01992 (2017)
-  Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A.P., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
-  Lim, B., Son, S., Kim, H., Nah, S., Lee, K.M.: Enhanced deep residual networks for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2017)
-  Ma, C., Yang, C.Y., Yang, X., Yang, M.H.: Learning a no-reference quality metric for single-image super-resolution. Computer Vision and Image Understanding 158, 1–16 (2017)
-  Martin, A.J., Gotlieb, A.I., Henkelman, R.M.: High-resolution MR imaging of human arteries. Journal of Magnetic Resonance Imaging 5(1), 93–100 (1995)
-  Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of the International Conference on Computer Vision (ICCV). pp. 416–423 (2001)
-  Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters 20(3), 209–212 (2013)
-  Park, S.C., Park, M.K., Kang, M.G.: Super-resolution image reconstruction: a technical overview. IEEE Signal Processing Magazine 20(3), 21–36 (2003)
-  Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2014)
-  Thornton, M.W., Atkinson, P.M., Holland, D.: Sub-pixel mapping of rural land cover objects from fine spatial resolution satellite sensor imagery using super-resolution pixel-swapping. International Journal of Remote Sensing 27(3), 473–491 (2006)
-  Timofte, R., Gu, S., Wu, J., Van Gool, L., Zhang, L., Yang, M.H., et al.: NTIRE 2018 challenge on single image super-resolution: Methods and results. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2018)
-  Wang, C., Xue, P., Lin, W.: Improved super-resolution reconstruction from video. IEEE Transactions on Circuits and Systems for Video Technology 16(11), 1411–1422 (2006)
-  Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
-  Yang, J., Wright, J., Huang, T.S., Ma, Y.: Image super-resolution via sparse representation. IEEE Transactions on Image Processing 19(11), 2861–2873 (2010)
-  Zeyde, R., Elad, M., Protter, M.: On single image scale-up using sparse-representations. In: Proceedings of the International Conference on Curves and Surfaces. pp. 711–730 (2010)
-  Zhang, L., Zhang, H., Shen, H., Li, P.: A super-resolution reconstruction algorithm for surveillance images. Signal Processing 90(3), 848–859 (2010)
-  Zou, W.W., Yuen, P.C.: Very low resolution face recognition problem. IEEE Transactions on Image Processing 21(1), 327–340 (2012)