Single image super-resolution (SISR) is the process of reconstructing a high-resolution (HR) image from a low-resolution (LR) image. Due to the ill-posed nature of SISR, generating fine textures [2, 3] without altering the image content remains a challenge. LR images used to train SISR models are often obtained by directly downsampling HR images, and some SISR methods additionally add Gaussian noise to the LR images to improve the generalization ability of the model. The specific process is as follows:
$$y = (x \otimes k)\downarrow_s + n,$$

where $x \otimes k$ represents the convolution between a blur kernel $k$ and the HR image $x$, $\downarrow_s$ represents a downsampling operation with scale factor $s$, and $n$ represents the Gaussian noise added to the downsampled LR image.
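As an illustration, this classical degradation pipeline can be sketched in a few lines of NumPy. The kernel size, noise level, and scale below are arbitrary demonstration values, not settings used in this paper:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(hr, kernel, scale=2, noise_sigma=0.01, seed=0):
    """y = (x conv k) downsampled by `scale`, plus Gaussian noise."""
    pad = kernel.shape[0] // 2
    x = np.pad(hr, pad, mode="edge")
    blurred = np.zeros_like(hr)
    for i in range(hr.shape[0]):                       # direct convolution
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(
                x[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel)
    lr = blurred[::scale, ::scale]                     # s-fold downsampling
    rng = np.random.default_rng(seed)
    return lr + rng.normal(0.0, noise_sigma, lr.shape) # additive noise

hr = np.random.default_rng(1).random((32, 32))
lr = degrade(hr, gaussian_kernel(), scale=2)
```

A 32×32 HR patch thus yields a 16×16 noisy LR patch for scale 2.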
Degradation of real-world images is often more complex: factors such as motion, defocus, compression, and sensor noise need to be considered [6, 7, 8]. Therefore, SISR models trained on datasets obtained by bicubic interpolation are less robust when reconstructing real-world images. The fundamental reason is that the bicubic downsampling algorithm used to build such datasets can hardly simulate the degradation process of real-world images. Fig. 1 shows SISR results on degraded image No. 007 from our MDD400 test set. We trained VDSR and Up-Net (proposed in this paper) both on simulated images with bicubic degradation and on MDD400, which is also proposed in this paper. The results illustrate that models trained with bicubic interpolation struggle to reconstruct degraded images, while models trained on our dataset perform well.
To improve the robustness and generalization capability of SISR models in practical applications, Bulat et al. use two GAN networks to complete super-resolution reconstruction, one of which simulates the degradation process of real-world images as faithfully as possible, thereby generating LR images closer to the real world. Zhang et al. propose a simple yet effective and scalable deep CNN framework for SISR that goes beyond the widely used bicubic degradation assumption and works for multiple and even spatially variant degradations, making a substantial step towards a practical CNN-based super-resolver.
Therefore, compared with simulated LR and HR image pairs, a training dataset containing real-world data is needed. To the best of our knowledge, there is currently only one real-world dataset, constructed from photos of different resolutions obtained by adjusting the camera's focus. However, the images in this dataset are all true-color photos, ignoring another type of image: black-and-white old photos, which are common in museums and in people's homes. These photos tend to be low-resolution due to the limitations of shooting and film-development technology at the time. Repairing and enlarging them not only helps historians better recover history but also allows ordinary people to remember the past, and thus has high scientific and social value. Therefore, we propose a real-world dataset of old photos. We took photos with a film camera and digitized them with a scanning device to obtain LR images, while photographing the same content at the same location with a digital single-lens reflex (DSLR) camera (Canon 650D) to obtain the corresponding HR images. We then designed a corresponding image alignment algorithm to obtain precisely aligned image pairs.
At the same time, for degraded scenes in the real world, the ground truth is often difficult to obtain through photography and can only be obtained through simulation. Therefore, we propose MDD400, which simulates real-world degradation. We first constructed a new dataset of 300 images as ground truth, and then used the bicubic interpolation algorithm to obtain corresponding LR images, yielding 100 pairs of interpolation-degraded images. Next, we designed a GAN-based image enhancement method for the super-resolution reconstruction model: Up-Net upsampling was used to generate 100 pairs of CNN-class degraded images. Because images generated by a CNN are overly smooth with unclear textures, we constructed Texture-Net to generate corresponding texture details, forming 100 pairs of GAN-class degraded images. Finally, we captured 100 sets of videos at different bit rates to construct image pairs of different resolutions, forming the video-class degraded images.
The contributions of our work are threefold:
We provide a real-world dataset OID-RW that includes characters and architecture. Models trained on this dataset show better results in practical applications.
We provide a dataset MDD400 that simulates multi-modal degradation of real-world images, including interpolation, CNN, GAN, and video classes. When reconstructing real-world images, models trained on this dataset have strong generalization capability and high robustness.
We propose a GAN-based data augmentation algorithm suitable for SISR that generates LR images with different degrees of degradation by selecting different loss functions.
In Section II, datasets and data augmentation algorithms applied in the super-resolution field are introduced. Sections III and IV describe the construction of the real-world SISR dataset and the multi-modal degradation dataset. Section V presents the experimental details and result analysis.
II Related Work
II-A Single Image Super-Resolution Datasets
The datasets commonly applied in SISR training are primarily DIV2K, Urban100, and BSD300; Set5 and Set14 are often used for model evaluation. Models trained on datasets generated by the bicubic interpolation algorithm lack generalization capability. In recent years, much work has been done to make SISR models perform well in real-world scenarios. Qu et al. present a novel prototype camera system to address the aforementioned difficulties of acquiring ground-truth SR data: two identical camera sensors, equipped with a wide-angle lens and a telephoto lens respectively, collect a face image dataset. Köhler et al. introduce the first comprehensive laboratory SR database of all-real acquisitions, consisting of more than 80k images of 14 scenes combining different facets. Cai et al. captured images of various scenes at multiple focal lengths, providing a general and easy-to-use benchmark for real-world single image super-resolution. Chen et al. captured the City100 dataset at a single scaling factor. The above datasets focus on true-color images, but SISR on black-and-white photos from film cameras is still of great practical significance, so this paper builds a dataset based on old photos.
II-B Image Degradation
In recent works, blur kernels and noise have been added to LR images to improve the generalization ability of the model. These two factors have been recognized as key to the success of SISR, and several methods have been proposed to account for them. Zhang et al. propose a dimensional stretching strategy that takes blur and noise as inputs; this method can cope with multiple and spatially varying degradation models, which clearly improves practicality. Bulat et al. use a mismatched HR and LR training set to train a GAN network that converts HR images into LR images, and then use this network to simulate the generation of real-world degraded images. We are the first to propose a multi-modal image degradation dataset. Building on previous work, we used CNN and GAN networks to generate degraded images, and to expand the diversity of the dataset, we captured videos at different bit rates.
III Old Real-world Dataset
This paper constructs two types of images: characters and architecture. For characters, HR images are obtained by manually filling in pixels. For architecture, HR and LR images are obtained by shooting with two kinds of cameras. The dataset contains 82 groups of images, of which 22 are characters and 60 are architecture. Although the number of photos may seem small, the image sizes are large enough to meet the quantity required for training.
III-A Image Collection
To construct the dataset, we photographed the same scene from the same position and angle as far as possible, using a film camera and a CCD camera sharing the same focal length and other parameters. The LR images were constructed by digitally scanning photographic film developed with professional reagents and instruments in a darkroom. Correspondingly, the HR images were built using the CCD camera. However, due to the different imaging processes of the two cameras and uncontrollable factors such as illumination and wind, the two images cannot correspond completely and some deviation is inevitable. This paper applies an approach that aligns the two kinds of images through a series of image processing methods.
III-B Image Registration
As mentioned earlier, images of the same scene taken by the two cameras cannot be completely aligned at the pixel level. Thus, we design an image alignment algorithm to process the captured images and obtain precisely aligned image pairs. We perform image cropping, feature extraction, feature matching, image affine transformation, and clipping of the common area on the obtained image pairs, as shown in Fig. 2.
Image cropping and feature extraction. We first cropped out the invalid parts of the images, such as pedestrians, shaking branches, waving flags, and moving cars. For feature extraction, we chose among the SURF, ORB, and SIFT algorithms, and selected SIFT because robustness rather than speed is the main concern when constructing the dataset.
Feature matching. First, we obtain as many matching pairs as possible. After optimizing the selected feature matches with GMS and MLC, we retain the correct matches and remove wrong matches as far as possible. The main idea of GMS optimization is to judge whether there are multiple supporting matches around a correct match.
The main idea of matching location constraints (MLC) is that the positions of the same feature point in the two images should be roughly the same, so there is no need to consider rotation consistency and similar issues. Let the size of image A and image B be $W \times H$. For a correct match, let the position of the feature point in image A be $(x_A, y_A)$ and the position of the matched feature point in image B be $(x_B, y_B)$. The two feature points should satisfy the constraints

$$|x_A - x_B| \le T_x, \qquad |y_A - y_B| \le T_y,$$

where $T_x$ is the threshold in the x-axis direction and $T_y$ is the threshold in the y-axis direction, with $T_x = \alpha W$, $T_y = \alpha H$, and $0 < \alpha < 1$. The smaller $\alpha$, the smaller the thresholds and the higher the probability of removing mismatches. Since the two images have already been coarsely aligned before applying the matching location constraints, and the subsequent image alignment has limited tolerance for wrong matches, we set $\alpha$ to a small value for our dataset.
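A minimal NumPy sketch of the MLC filter described above, assuming thresholds $T_x = \alpha W$ and $T_y = \alpha H$; the $\alpha$ value and the point coordinates are illustrative only, not the paper's settings:

```python
import numpy as np

def mlc_filter(pts_a, pts_b, size, alpha=0.05):
    """Matching location constraint: keep a match only if the two feature
    points lie at roughly the same position, i.e. |x_A - x_B| <= alpha*W
    and |y_A - y_B| <= alpha*H."""
    W, H = size
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    dx = np.abs(pts_a[:, 0] - pts_b[:, 0])
    dy = np.abs(pts_a[:, 1] - pts_b[:, 1])
    return (dx <= alpha * W) & (dy <= alpha * H)

# first match is nearby and kept; second has moved too far and is rejected
keep = mlc_filter([(100, 50), (10, 10)], [(103, 52), (200, 180)],
                  size=(640, 480))
```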
Affine transformation and common-area clipping. Using the feature point matches obtained in the previous step, we complete the viewpoint matching of the raw-LR and raw-HR images by computing the affine transformation matrix. We introduce the RANSAC algorithm to reduce the effect of noise (wrong matching pairs) on image alignment during this computation. After the alignment operation, the raw-HR images contain black borders; we apply global binarization to extract the edge contour and then design an algorithm to detect the largest inscribed quadrilateral as the clipping region of the final HR image. The raw-LR image is clipped according to the four corresponding corner coordinates of the inscribed quadrilateral to obtain the final LR image.
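The RANSAC-based affine estimation used in this step can be sketched with plain NumPy; the iteration count, inlier threshold, and synthetic data below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine map dst ~ [x, y, 1] @ A from point pairs."""
    M = np.hstack([src, np.ones((len(src), 1))])   # n x 3
    A, *_ = np.linalg.lstsq(M, dst, rcond=None)    # 3 x 2
    return A

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    """RANSAC: repeatedly fit an affine model on 3 random matches, keep the
    model with the most inliers, then refit on all inliers."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(src), bool)
    ones = np.ones((len(src), 1))
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        A = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, ones]) @ A
        mask = np.linalg.norm(pred - dst, axis=1) < thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return fit_affine(src[best_mask], dst[best_mask]), best_mask

# synthetic check: pure translation (+5, -3) with 4 gross outliers
rng = np.random.default_rng(2)
src = rng.random((20, 2)) * 100
dst = src + np.array([5.0, -3.0])
dst[:4] += 50                                      # corrupt first 4 matches
A, inliers = ransac_affine(src, dst)
```

The recovered matrix is then used to warp one image onto the other before clipping the common area.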
IV Multi-modal Dataset
For better super-resolution results in real scenarios, we propose a new dataset NRD300 and a new data augmentation pipeline to synthesize more realistic training data. We build the multi-modal image degradation dataset MDD400 on the basis of NRD300, using CNN and GAN simulation to generate degraded images. To increase the diversity and generalization capability of the dataset, we also captured videos of different resolutions as part of MDD400.
IV-A New Resolution Dataset
We first build a new dataset called New Resolution Dataset 300 (NRD300) for the SR domain. Analyzing the datasets used to train SR models, we find that although the number of images is quite large, they are not rich in variety. Therefore, to build a comprehensive dataset for the SR field, we select images of ancient Chinese architecture and cultural relics to increase the diversity of current datasets. We then use NRD300 to build a multi-modal image degradation dataset called MDD400.
IV-B Degraded Images by CNN
The CNN-based Up-Net model is applied to upsample the LR images. Fig. 3 shows the overall architecture and training pipeline. We use a lightweight network to perform super-resolution reconstruction of the LR image. First, to better retain texture information, we apply a 9×9 convolution to obtain low-level features, use residual blocks to obtain high-level features, and use a long-range skip connection to enhance feature propagation in the deep network. To improve training speed, we remove the BN layers from the residual blocks and use 9 residual blocks. Finally, upsampling is completed through two layers of sub-pixel convolution. During upsampling, we use mean squared error (MSE) as the loss function; the result is shown in Fig. 4.
The MSE loss is defined as

$$\mathcal{L}_{MSE} = \frac{1}{WH}\sum_{i=1}^{W}\sum_{j=1}^{H}\left(I^{SR}_{i,j} - I^{HR}_{i,j}\right)^2,$$

where $I^{SR}$ is the output image of the Up-Net model and $I^{HR}$ is the HR image corresponding to $I^{SR}$.
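The sub-pixel (pixel-shuffle) upsampling and the MSE objective can be illustrated with NumPy; the channel counts below are toy values chosen only to make the rearrangement visible:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: (C*r^2, H, W) -> (C, H*r, W*r).
    Each group of r^2 channels is interleaved into an r x r spatial block."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)        # -> c, h, r, w, r
    return x.reshape(c, h * r, w * r)

def mse_loss(sr, hr):
    """Pixel-wise mean squared error between SR output and HR target."""
    return np.mean((sr - hr) ** 2)

feat = np.arange(4 * 2 * 2, dtype=float).reshape(4, 2, 2)  # C*r^2 = 4, r = 2
up = pixel_shuffle(feat, 2)                                # -> (1, 4, 4)
```

A convolution layer produces $C r^2$ channels, and the shuffle trades them for an $r$-fold spatial resolution increase without any learned upsampling filter.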
IV-C Degraded Images by GAN
Inspired by prior work showing that GANs can effectively simulate image degradation, we designed a GAN network called Texture-Net to generate degraded low-resolution images at different levels. Texture-Net adopts a network structure consistent with Up-Net, and its input is the LR image obtained by bicubic downsampling. Inspired by SRFeat, we apply a scaling factor to each long-range skip connection to adjust and balance the output.
To generate different levels of degraded images, we use multiple loss functions to train the network. Because of the rich texture information in GAN-generated images, we use two discriminators to assess the authenticity and fineness of the generated textures. One is the image discriminator, used to judge the authenticity of the generated images. The other is a perceptual discriminator, which extracts high-level features of the image through VGG-19 and judges the authenticity of the generated image features. We use the following four loss functions to train Texture-Net.
Image adversarial loss. The image GAN losses are defined as follows:

$$\mathcal{L}_{D_I} = -\mathbb{E}\left[\log D_I(I^{HR})\right] - \mathbb{E}\left[\log\left(1 - D_I(I^{SR})\right)\right],$$
$$\mathcal{L}_{G}^{img} = -\mathbb{E}\left[\log D_I(I^{SR})\right],$$

where $I^{SR}$ is the output image of the Texture-Net generator, $I^{HR}$ is the corresponding high-resolution image, $\mathcal{L}_{G}^{img}$ is the loss function of the generator, $\mathcal{L}_{D_I}$ is the loss function of the discriminator, and $D_I$ is the image discriminator. In our model we minimize both losses.
Feature adversarial loss. VGG-19 is used to extract the feature maps $\phi(I^{SR})$ and $\phi(I^{HR})$, which are fed into the feature discriminator to encourage realistic textures:

$$\mathcal{L}_{D_F} = -\mathbb{E}\left[\log D_F(\phi(I^{HR}))\right] - \mathbb{E}\left[\log\left(1 - D_F(\phi(I^{SR}))\right)\right],$$
$$\mathcal{L}_{G}^{feat} = -\mathbb{E}\left[\log D_F(\phi(I^{SR}))\right],$$

where $\mathcal{L}_{G}^{feat}$ is the loss function of the generator, $\mathcal{L}_{D_F}$ is the loss function of the discriminator, and $D_F$ is the feature discriminator.
Perceptual loss. Perceptual loss is often used in GAN models to generate images with better visual quality; we adopt the relu5_4 layer of VGG-19:

$$\mathcal{L}_{per} = \frac{1}{WHC}\sum_{i=1}^{W}\sum_{j=1}^{H}\sum_{k=1}^{C}\left(\phi(I^{SR})_{i,j,k} - \phi(I^{HR})_{i,j,k}\right)^2,$$

where $W$ is the width of the feature map, $H$ is its height, $C$ is its number of channels, and $\phi$ is the feature map of VGG-19 at relu5_4.
Style loss. Style loss is defined by the Gram matrix. To make the texture generated by Texture-Net more realistic and closer to $I^{HR}$, we apply the style loss function used in neural style transfer:

$$\mathcal{L}_{style} = \left\lVert G_r\!\left(\phi(I^{SR})\right) - G_r\!\left(\phi(I^{HR})\right)\right\rVert_2^2, \qquad G_r(\phi) = \frac{1}{WHC}\,\phi\,\phi^{\top},$$

where $G_r$ is the Gram matrix, $W$ is the width of the feature map, $H$ is its height, $C$ is its number of channels, and $\phi$ is the feature map of VGG-19 at relu5_4.
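Given precomputed feature maps (random arrays below stand in for VGG-19 relu5_4 activations), the perceptual and Gram-matrix style losses can be sketched as:

```python
import numpy as np

def gram(feat):
    """Gram matrix of a (C, H, W) feature map, normalized by C*H*W."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def perceptual_loss(f_sr, f_hr):
    """Mean squared distance between feature maps."""
    return np.mean((f_sr - f_hr) ** 2)

def style_loss(f_sr, f_hr):
    """Squared Frobenius distance between Gram matrices."""
    return np.sum((gram(f_sr) - gram(f_hr)) ** 2)

f = np.random.default_rng(0).random((8, 4, 4))  # stand-in feature map
```

The Gram matrix discards spatial layout and keeps channel correlations, which is why the style term constrains texture statistics rather than exact pixel positions.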
To increase the diversity of images in the dataset, we generated three levels of degraded images according to the number of selected loss functions, as shown in Table I. The results are shown in Fig. 5.
IV-D Degraded Images from Videos
We select a 1080P documentary and its 360P version on a video website, and play them on screens of the same size to capture frames simultaneously. To obtain more image pairs, we also select other types of videos. During capture, we try to capture still parts of the video to obtain aligned image pairs. However, since video contains many frames per second, the captured images may not be fully aligned, so we use the alignment algorithm of Section III-B to align them.
V Experiments and Results
V-a Training Details
Training process. The details of training the Up-Net and Texture-Net networks are as follows. We use the training and test sets of DIV2K as the training set of our model (900 images in total). We randomly select 500 training images and generate LR images by bicubic interpolation as the input of Up-Net. Data augmentation was performed by randomly rotating and horizontally flipping the input. After training Up-Net, the remaining 400 images are downsampled and fed into Up-Net to obtain its upsampled outputs.
These outputs and the corresponding HR images form the training set of Texture-Net. During training, Up-Net and Texture-Net are each trained for 20 epochs. The experimental environment for OID-RW is an Intel Core i7 with NVIDIA GeForce GTX 1080 Ti and NVIDIA Titan Xp GPUs and 32 GB of memory. Experiments on MDD400 are conducted on an NVIDIA Tesla P100.
Parameter settings. To balance the different loss functions, the feature maps are scaled with a scaling factor, and the four loss terms are combined with individual weights, two of which are set to 1. The momentum parameter of the Adam optimizer is 0.9.
V-B OID-RW Dataset vs. Simulated Dataset
We choose the state-of-the-art SISR model VDSR and use its performance on different datasets as the evaluation index to demonstrate the advantages of the OID-RW dataset. To speed up training, we reduce the number of VDSR intermediate layers to 15. We then use the original dataset used by VDSR to build the training set BD via bicubic interpolation downsampling. The OID-RW dataset is divided into a training set and a test set at a ratio of 8:2. The input images are downsampled for the three scaling factors ×2, ×3, and ×4.
Finally, the VDSR model is trained separately on the OID-RW and BD training sets and tested on the OID-RW test set. PSNR and SSIM (on the Y channel in YCbCr space) are used to evaluate performance, as Table II shows. VDSR trained on our OID-RW dataset obtains significantly better performance than VDSR trained on BD for all three scaling factors; for scaling factors 3 and 4, it achieves about 0.53 dB improvement on average.
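The evaluation protocol (PSNR on the Y channel) can be sketched as follows; the BT.601 luma weights used here are a common simplification of the full YCbCr conversion:

```python
import numpy as np

def rgb_to_y(img):
    """Luma (Y) channel of an RGB image in [0, 1], ITU-R BT.601 weights."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def psnr(sr, hr, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((sr - hr) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

# e.g. a uniform error of 0.1 on a [0, 1] image gives 20 dB
score = psnr(np.full((4, 4), 0.5), np.full((4, 4), 0.6))
```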
V-C Generalization Capability of OID-RW Dataset
To further demonstrate the generalization capability and robustness of the OID-RW dataset, this paper performs super-resolution reconstruction on 10 museum photos at three scaling factors; the super-resolved results are visualized in Fig. 6. Since real museum photos have no corresponding ground truth and objective evaluation metrics cannot be used, we adopted manual scoring, inviting 20 volunteers for subjective evaluation. For each super-resolved result, the volunteers rate two questions: "Is the result visually realistic?" and "Are the details easy to perceive?". The Likert scores of the reconstructed images are shown in Fig. 9, where the best score is 5 and the worst is 1.
As can be seen from Fig. 6, in all cases the images generated by VDSR trained on our OID-RW dataset have better texture details and clearer edges. The rating distribution shows that our results are preferred, receiving more red and far fewer blue ratings than the others. As Fig. 9 shows, models trained on the OID-RW dataset produce better visual effects with more refined texture information.
V-D SISR Models Trained on MDD400 Dataset
To show that models trained on our MDD400 dataset perform well in degraded scenarios, the following experiments are designed. DIV2K is often selected as the training set for state-of-the-art SISR models, so we apply the bicubic interpolation algorithm to DIV2K to obtain a low-resolution dataset (BD), and train three networks (Up-Net, VDSR, and SRResNet) on the BD and MDD400 training sets.
We tested all the SISR models on our MDD400 test set, evaluating the images at two scaling factors by PSNR and SSIM on the Y channel. The evaluation results are shown in Table III. Models trained on the MDD400 dataset clearly outperform the baselines and the BD-trained models on all factors, achieving about 0.8 dB improvement on average across the three SISR models.
V-E Generalization Capability of MDD400 Dataset
To show that models trained on the MDD400 dataset are robust, we selected the test images Nikon_050 and Canon_005 from the RealSR test set for further experiments. We trained the Up-Net and SRResNet networks on BD and on our MDD400 dataset. As Fig. 8 shows, images generated by the BD-trained models exhibit a certain degree of distortion, while the MDD400-trained models generate clear edge profiles.
Traditional SISR datasets are often built by bicubic interpolation of high-resolution images to generate low-resolution images, sometimes with noise added after downsampling. Models trained on these datasets are not robust in practical applications. In this paper, we propose two datasets: the real-world old-photo dataset OID-RW, and the MDD400 dataset that simulates real-world multi-modal degradation. Our extensive experiments demonstrate that models trained on our datasets not only produce much better real-world SISR results than those trained on existing simulated datasets, but also show better generalization capability.
-  D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in 2009 IEEE 12th International Conference on Computer Vision. IEEE, 2009, pp. 349–356.
-  C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4681–4690.
-  Z. Zhang, Z. Wang, Z. Lin, and H. Qi, “Image super-resolution by neural texture transfer,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 7982–7991.
-  K. Nasrollahi and T. B. Moeslund, “Super-resolution: a comprehensive survey,” Machine vision and applications, vol. 25, no. 6, pp. 1423–1468, 2014.
-  A. Singh, F. Porikli, and N. Ahuja, “Super-resolving noisy images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 2846–2853.
-  C.-Y. Yang, C. Ma, and M.-H. Yang, “Single-image super-resolution: A benchmark,” in European Conference on Computer Vision. Springer, 2014, pp. 372–386.
-  Y. Romano, J. Isidoro, and P. Milanfar, “Raisr: Rapid and accurate image super resolution,” IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 110–125, 2016.
-  A. Bulat, J. Yang, and G. Tzimiropoulos, “To learn image super-resolution, use a gan to learn how to do image degradation first,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 185–200.
-  J. Kim, J. Kwon Lee, and K. Mu Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 1646–1654.
-  K. Zhang, W. Zuo, and L. Zhang, “Learning a single convolutional super-resolution network for multiple degradations,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3262–3271.
-  J. Cai, H. Zeng, H. Yong, Z. Cao, and L. Zhang, “Toward real-world single image super-resolution: A new benchmark and a new model,” in Proceedings of the IEEE International Conference on Computer Vision, 2019.
-  R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, and L. Zhang, “Ntire 2017 challenge on single image super-resolution: Methods and results,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017, pp. 114–125.
-  J.-B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5197–5206.
-  D. Martin, C. Fowlkes, D. Tal, J. Malik et al., “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in ICCV, 2001.
-  M. Bevilacqua, A. Roumy, C. Guillemot, and M. L. Alberi-Morel, “Low-complexity single-image super-resolution based on nonnegative neighbor embedding,” 2012.
-  R. Zeyde, M. Elad, and M. Protter, “On single image scale-up using sparse-representations,” in International conference on curves and surfaces. Springer, 2010, pp. 711–730.
-  C. Qu, D. Luo, E. Monari, T. Schuchert, and J. Beyerer, “Capturing ground truth super-resolution data,” in 2016 IEEE International Conference on Image Processing (ICIP), 2016.
-  T. Köhler, M. Bätz, F. Naderi, A. Kaup, A. Maier, and C. Riess, “Toward bridging the simulated-to-real gap: Benchmarking super-resolution on real data.”
-  C. Chen, Z. Xiong, X. Tian, Z.-J. Zha, and F. Wu, “Camera lens super-resolution.”
-  H. Bay, T. Tuytelaars, and L. Van Gool, “Surf: Speeded up robust features,” in European conference on computer vision. Springer, 2006, pp. 404–417.
-  E. Rublee, V. Rabaud, K. Konolige, and G. R. Bradski, “Orb: An efficient alternative to sift or surf.” in ICCV, vol. 11, no. 1. Citeseer, 2011, p. 2.
-  D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International journal of computer vision, vol. 60, no. 2, pp. 91–110, 2004.
-  J. Bian, W.-Y. Lin, Y. Matsushita, S.-K. Yeung, T. D. Nguyen, and M.-M. Cheng, “Gms: Grid-based motion statistics for fast, ultra-robust feature correspondence,” in IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
-  T. Tong, G. Li, X. Liu, and Q. Gao, “Image super-resolution using dense skip connections,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 4799–4807.
-  B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee, “Enhanced deep residual networks for single image super-resolution,” in Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 2017, pp. 136–144.
-  W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1874–1883.
-  S.-J. Park, H. Son, S. Cho, K.-S. Hong, and S. Lee, “Srfeat: Single image super-resolution with feature discrimination,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 439–455.
-  A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.