Dual Recovery Network with Online Compensation for Image Super-Resolution

01/20/2017 ∙ by Sifeng Xia, et al.

Image super-resolution (SR) methods essentially lead to a loss of some high-frequency (HF) information when predicting high-resolution (HR) images from low-resolution (LR) images without using external references. To address this, we additionally utilize online retrieved data to facilitate image SR in a unified deep framework. A novel dual high-frequency recovery network (DHN) is proposed to predict an HR image from three parts: an LR image, an internal inferred HF (IHF) map (the missing HF part inferred solely from the LR image) and an external extracted HF (EHF) map. In particular, we infer the HF information based on both the LR image and similar HR references which are retrieved online. For the EHF map, we align the references with affine transformation, and the proposed DHN then extracts part of the HF signals from the aligned references to compensate for the HF loss. Extensive experimental results demonstrate that our DHN achieves notably better performance than state-of-the-art SR methods.


1 Introduction

Image super-resolution (SR) aims to estimate a high-resolution (HR) image from low-resolution (LR) observations. In essence, due to the information loss in the image degradation process, SR is an ill-posed problem. The earliest works, image interpolation methods, estimate the HR image based on local statistics of the LR image; typical methods include bi-linear, bi-cubic and new edge-directed interpolation, which predict HR pixels by exploiting the spatial relationship between LR and HR pixels. Later on, many successive works [1, 2] regard image SR as a maximum a posteriori estimation and impose various priors to constrain the inverse estimation. In these methods, priors and constraints are typically designed heuristically and are thus insufficient to represent the diversified patterns of natural images.

Learning-based methods obtain a mapping between LR and HR images from a large training set with dynamically learned prior knowledge. Sparse-representation-based methods such as [3] learn the mapping by building a dictionary that couples LR and HR patches. Neighbor embedding (NE) methods linearly combine HR neighbors to infer the HR image. Timofte et al. [4] proposed an adjusted anchored neighborhood regression method for image SR. Li et al. [5] proposed a neighbor-preserving method that utilizes HR reference patches only for reconstructing the high-frequency regions of LR images. Recently, deep-learning-based methods [6, 7, 8, 9] have been proposed. SRCNN [6] is the first method that utilizes a three-layer convolutional network for image SR. In [7], a sparse prior is incorporated into the network. Then, residual learning [8] and sub-band recovery with edge guidance [9] networks were constructed to recover HF signals and offer state-of-the-art performance.

Despite the impressive results achieved by learning-based methods, some HF information is still lost because of the ill-posed nature of image SR and because the mean squared error objective leads to "regression to the mean" [10]. As a result, a few methods have recently been proposed that additionally compensate for the HF information loss with online retrieved HR references. Yue et al. [11] directly utilized the references to enhance the SR result by patch matching and patch blending. Li et al. [12] used retrieved HR image patches to learn a more accurate sparse distribution. Liu et al. [13] utilized a group-structured sparse representation to further exploit the nonlocal dependency information of HR references. However, several important issues are still not fully considered in these methods. For example, their fusion schemes do not effectively extract external HF information for compensation and may even introduce artifacts. Besides, they do not make full use of internal redundancy to benefit the recovery of HF information.

To address the above issues, we propose a unified deep network that additionally utilizes online retrieved data to facilitate image SR. Our method efficiently extracts an HF map from multiple HR references that are retrieved based on the intermediate inferred SR image.

The contributions of this paper are as follows. (1) This is the first work that efficiently extracts high-frequency information from HR references and successfully compensates for the HF information loss of the SR result within a deep framework. (2) Our work shows that internal and external images can be modeled jointly, achieving a more accurate and robust fusion of internal and external information for HF recovery. (3) Compared with both previous deep-learning-based methods and online compensation SR methods, our approach achieves superior performance and offers new state-of-the-art results.

The rest of the article is organized as follows. Sec. 2 illustrates our DHN network. Details of utilizing the EHF map for compensation are introduced in Sec. 3. Experimental results are shown in Sec. 4 and concluding remarks are given in Sec. 5.

2 Dual High-Frequency Recovery Network

Given an LR image $X$, we predict the HR image $Y$ from $X$ with the help of retrieved HR reference images by our dual high-frequency recovery network (DHN). In this paper, we use $R$ to represent a reference image. The architecture of the proposed DHN is illustrated in Fig. 1. DHN consists of two components, called the internal high-frequency inference network (IHN) and the external high-frequency compensation network (EHN), respectively. IHN infers the missing HF information merely from the internal data in $X$. The intermediate SR image $\hat{Y}$ is then generated by combining the internal inferred HF (IHF) map with the simply up-sampled LR image $X^{\uparrow}$. EHN further enhances the final SR result by adding the external extracted HF (EHF) map, obtained from the aligned retrieved HR reference images, to the intermediate image $\hat{Y}$.

[Figure 2 panels: (a) Input, (b) One Reference, (c) Aligned, (d) Ground Truth, (e) Intermediate, (f) Result]
Figure 2: An example of our SR method at scaling factor 3. Panel (b) shows one of the multiple references. Our method robustly obtains a better SR result with the help of reference images despite the large differences between the input and the references.

2.1 Internal high-frequency inference network

The first component, IHN, adopts the network proposed in [9] to initially reconstruct the LR image from its own information. As shown in Fig. 1, $X$ and its edge map, extracted by a hand-crafted edge detector, are utilized as the input of IHN. The recurrent network of IHN then estimates the IHF map from this input. IHN also predicts an HR edge map, which is used to further guide the HF map estimation.

With the inferred IHF map, the intermediate result image $\hat{Y}$ is then generated as follows:

$\hat{Y} = X^{\uparrow} \oplus f_{\mathrm{IHN}}(X)$    (1)

where $\oplus$ is the direct sum operation between $X^{\uparrow}$ and the IHF map, $f_{\mathrm{IHN}}(\cdot)$ represents the process by which IHN infers the IHF map from the LR image $X$, and $X^{\uparrow}$ is the image simply up-sampled from $X$. Specifically, at scaling factor $s$, the value of each pixel in $X$ is copied to the pixels of the corresponding $s \times s$ patch in $X^{\uparrow}$. We then define the loss of IHN as the combination of the losses of the predicted HR edge map and of $\hat{Y}$, each measured by the mean squared error (MSE) against the ground-truth signal.
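To make the pixel-copy up-sampling and the combination in Eq. (1) concrete, here is a minimal NumPy sketch; the function name and the `ihn` placeholder for the trained IHN are ours, not from the paper:

```python
import numpy as np

def upsample_copy(lr: np.ndarray, s: int) -> np.ndarray:
    """Simple up-sampling used in Eq. (1): each LR pixel value is
    copied to the corresponding s x s patch of the HR grid."""
    return np.repeat(np.repeat(lr, s, axis=0), s, axis=1)

# Hypothetical usage, with `ihn` standing in for the trained network
# that maps the LR image to its IHF map:
#   y_hat = upsample_copy(x, s) + ihn(x)   # Eq. (1)
```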

2.2 External high-frequency compensation network

IHN works well in predicting the HF map from an LR image. However, not all HF information can be well recovered in this process, as shown in Fig. 2 (e). This inspires us to construct EHN to further extract a significant EHF map from each HR reference $R$ with the trained, fixed IHN. Note that during the training process, $R$ is generated from the ground-truth HR image.

As shown in Fig. 2, there are commonly illumination and color differences between the LR image and its reference images. Moreover, there is much useless low-frequency information in the references that may hinder the extraction of HF information. We therefore take two measures to improve the robustness of extracting the EHF map. First, the contrast of the label images is additionally adjusted to simulate common illumination and color differences during training. Second, instead of directly inputting the information of $R$, we use the difference image $D$ between $R$ and its intermediate SR image $\hat{R}$ as the input of EHN, where $\hat{R}$ is obtained by down-sampling $R$ and up-sampling the result with IHN. The difference image is chosen because of its high efficiency in reducing illumination and color differences and removing redundant low-frequency information.
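As a sketch of how the EHN input could be formed under these choices, the snippet below builds the difference image for one reference; `ihn_sr` is a placeholder for the trained, fixed IHN-based up-sampler, and the blur strength in `downsample` is an assumption rather than the paper's setting:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(img: np.ndarray, s: int) -> np.ndarray:
    # Blur-then-decimate degradation; sigma is an assumed placeholder.
    return gaussian_filter(img, sigma=s / 2.0)[::s, ::s]

def ehn_input(ref: np.ndarray, ihn_sr, s: int) -> np.ndarray:
    """Difference image D: the reference minus its own intermediate SR
    image produced by the fixed IHN, which suppresses illumination,
    color and low-frequency content."""
    return ref - ihn_sr(downsample(ref, s))
```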

EHN then extracts the EHF map $E$ from this input with its recurrent network. The final reconstructed result is derived by:

$Y = \hat{Y} \otimes f_{\mathrm{EHN}}(D)$    (2)

where $f_{\mathrm{EHN}}(\cdot)$ is the formulation of the process by which EHN extracts the HF map $E$ from the difference image $D$, and the operation $\otimes$ represents the combination of the intermediate image $\hat{Y}$ and $E$. During the training process, $\otimes$ directly adds $E$ to $\hat{Y}$. In the testing process, $E$ is utilized based on the patch matching results, which will be elaborated in Sec. 3.3. The loss of EHN is defined as the MSE between $Y$ and the raw ground-truth image.

3 Online Retrieval For Compensation

Different from the training process, we retrieve HR reference images online in the testing process for compensation, and the extracted HF map is fused based on the patch matching results.

3.1 Reference retrieval and registration

We search for HR reference images with the method of [13]. The initially reconstructed intermediate SR image $\hat{Y}$ is used for retrieving reference images from a dataset. The SURF detector [14] is first used to detect key points. Then a 144-dimension vector that contains discriminative information is extracted for each patch centered at a key point. Finally, the bag-of-words (BOW) model [15] is used for indexing and retrieving reference images with the extracted feature vectors.

However, we cannot directly utilize the retrieved HR references to compensate for the information loss of $\hat{Y}$, because the references and $\hat{Y}$ differ in scale and viewpoint. As a result, each reference $R$ is aligned to $\hat{Y}$ for best compensation. We first detect SIFT features [16] of $\hat{Y}$ and each $R$ and match their feature points. The RANSAC algorithm is then performed over the matched points to find the best homography transformation matrix. Finally, the aligned reference images are derived based on the transformation matrix, and the aligned image of $R$ is denoted as $\tilde{R}$.
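The alignment step can be sketched with standard OpenCV calls as below; the ratio-test threshold and the RANSAC reprojection tolerance are our assumptions:

```python
import cv2
import numpy as np

def align_reference(ref: np.ndarray, sr: np.ndarray) -> np.ndarray:
    """Align an HR reference to the intermediate SR image (Sec. 3.1) by
    SIFT matching followed by RANSAC homography estimation."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY), None)
    kp_s, des_s = sift.detectAndCompute(cv2.cvtColor(sr, cv2.COLOR_BGR2GRAY), None)
    # Keep the more reliable correspondences via Lowe's ratio test.
    matches = cv2.BFMatcher().knnMatch(des_r, des_s, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC over the matched points yields the homography matrix.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = sr.shape[:2]
    return cv2.warpPerspective(ref, H, (w, h))
```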

3.2 Patch matching

After obtaining the aligned references, the EHF map of each $\tilde{R}$ is then extracted from its difference image (computed with the fixed IHN, as in Sec. 2.2) by EHN. As the pixels in the aligned references still do not exactly correspond to the pixels at the same positions in $\hat{Y}$, the extracted HF values of $\tilde{R}$ cannot be directly added to the intermediate up-sampled image $\hat{Y}$. Patch matching is therefore utilized to find corresponding pixels between each $\tilde{R}$ and $\hat{Y}$ to guide the combination of $\hat{Y}$ and the EHF maps.

There are usually significant differences in illumination, color and resolution between the intermediate SR image and the aligned HR references. As a result, for better matching we first use the intermediate SR image $\hat{R}$ of each $\tilde{R}$ for matching, which shares a similar resolution level with $\hat{Y}$. Then, we transfer each image to reduce the effect of illumination differences:

$\bar{I} = (I - \mu) / \sigma$    (3)

where $\bar{I}$ is the transform result, and $\mu$ and $\sigma$ are the mean and standard deviation values of all pixels of the image, respectively. Then $\hat{Y}$ is split into overlapped query patches with a step size of 4, and we search for the corresponding patches of the query patches within a search window on each $\hat{R}$.
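A one-line sketch of the normalization in Eq. (3), as reconstructed above:

```python
import numpy as np

def standardize(img: np.ndarray) -> np.ndarray:
    # Eq. (3): subtract the global mean and divide by the global
    # standard deviation to suppress illumination differences.
    return (img - img.mean()) / img.std()
```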

Since small patches contain little structural information of the raw images, patch matching with small patches is not accurate. Thus we perform patch matching between $\hat{Y}$ and $\hat{R}$ with large patches. Since a patch in $\hat{Y}$ may not have an exactly corresponding large patch in $\hat{R}$, a method that adaptively adjusts patch sizes according to the patch difference [11] is adopted for more accurate patch matching.

Let $P_p$ denote the query patch of size $n \times n$ in $\hat{Y}$ centered at position $p$, and $P_q$ denote a candidate patch in $\hat{R}$ centered at $q$. We search for the best matching candidate patch of $P_p$ within a search window centered at $p$ in $\hat{R}$. The patch distance between $P_p$ and $P_q$ is defined as:

$d(P_p, P_q) = \|P_p - P_q\|_2^2 + \lambda \|\nabla P_p - \nabla P_q\|_2^2$    (4)

where $\nabla$ is the operation that calculates the gradient of the patches and $\lambda$ is the weighting parameter that controls the relative importance of pixel value differences and their gradient differences; $\lambda$ is set empirically in this paper. Besides, the DC components of the patches are removed before the distance computation.
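The following sketch implements Eq. (4) as reconstructed above; the default `lam` is a placeholder for the unspecified weighting parameter:

```python
import numpy as np

def gmse_distance(p: np.ndarray, q: np.ndarray, lam: float = 1.0) -> float:
    """GMSE patch distance of Eq. (4): MSE of DC-removed intensities
    plus `lam` times the MSE of their finite-difference gradients."""
    p = p - p.mean()                       # remove the DC component
    q = q - q.mean()
    gpy, gpx = np.gradient(p)
    gqy, gqx = np.gradient(q)
    mse = np.mean((p - q) ** 2)
    grad_mse = np.mean((gpy - gqy) ** 2 + (gpx - gqx) ** 2)
    return float(mse + lam * grad_mse)
```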

The value of $d$ is defined as the gradient mean square error (GMSE), and $e(p)$ is set as the minimum GMSE value between the query patch and the candidate patches within the search window. In particular, the value of $e(p)$ is consistent with the quality of the patch matching. In order to improve the quality of patch matching, patch sizes are then adaptively adjusted according to $e(p)$ as follows:

$n(p) = \begin{cases} n_0, & e(p) \le \tau \\ n_1, & e(p) > \tau \end{cases}$, with $n_1 < n_0$,    (5)

where $\tau$ is a preset threshold. Patch matching is performed at the initial size $n_0$ and switches to the smaller size when the value of $e(p)$ is too large, according to Eq. 5. Then a closest candidate patch is found. However, the relatively large sliding step of this coarse matching may miss a better matching patch in $\hat{R}$. Thus we further search for a candidate patch of the same size as the query within a small search window centered at the coarse matching position in $\hat{R}$, with a finer step size.
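A coarse window search over the reference could look like the sketch below (window size and stride are illustrative assumptions); the refinement pass then reruns the same search with a unit step in a small window around the returned position, and the adaptive rule of Eq. (5) shrinks the query patch and retries when the returned distance is too large:

```python
import numpy as np

def search_window(query: np.ndarray, ref: np.ndarray, top_left, win=40, step=4):
    """Return the top-left corner and GMSE of the best candidate patch
    within a window around `top_left` (uses gmse_distance from above)."""
    n = query.shape[0]
    y0, y1 = max(top_left[0] - win, 0), min(top_left[0] + win, ref.shape[0] - n)
    x0, x1 = max(top_left[1] - win, 0), min(top_left[1] + win, ref.shape[1] - n)
    best_pos, best_d = None, np.inf
    for y in range(y0, y1 + 1, step):
        for x in range(x0, x1 + 1, step):
            d = gmse_distance(query, ref[y:y + n, x:x + n])
            if d < best_d:
                best_pos, best_d = (y, x), d
    return best_pos, best_d
```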

3.3 External high-frequency information utilization

After patch matching, pixels at the same positions in the matched patches of $\hat{Y}$ and each $\hat{R}$ are matched. Then the EHF map is combined with $\hat{Y}$ based on this pixel-wise matching correlation. As mentioned in Sec. 2.2, the EHF map is denoted as $E$. We define the final extracted HF map $F$ that can be directly added to $\hat{Y}$ as:

$F(p) = \frac{1}{|\Omega(p)|} \sum_{q \in \Omega(p)} w(d_q)\, E(q)$    (6)

where $F(p)$ is the value of pixel $p$ in map $F$ and, similarly, $E(q)$ is the value of pixel $q$. The set $\Omega(p)$ contains the pixels matched to $p$ in the HF maps extracted from all of the references, $d_q$ is the GMSE distance between the patches that $p$ and $q$ belong to, $w(\cdot)$ is a weight that decreases with $d_q$, and $|\Omega(p)|$ represents the number of elements in $\Omega(p)$. Note that the pixel-wise correlations between $\hat{Y}$ and each $E$ here are the same as the pixel-wise correlations built between $\hat{Y}$ and each $\hat{R}$.
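As a sketch of Eq. (6) under these definitions, the snippet below fuses the matched HF values; the data layout of `matches` and the exponential form of the decreasing weight are our assumptions:

```python
import numpy as np

def fuse_ehf(matches, shape, sigma=10.0):
    """Pixel-wise fusion of Eq. (6). `matches[(y, x)]` holds
    (hf_value, gmse) pairs gathered from all references."""
    F = np.zeros(shape)
    for (y, x), pairs in matches.items():
        vals = np.array([v for v, _ in pairs])
        wts = np.exp(-np.array([d for _, d in pairs]) / sigma)  # w(d_q)
        F[y, x] = np.sum(wts * vals) / len(pairs)  # average over |Omega(p)|
    return F
```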

Finally, the resulting SR image is obtained by directly adding the final extracted HF map to the intermediate reconstructed SR image: $Y = \hat{Y} + F$.

Figure 3: Testing images from ‘a’ to ‘h’.
Image        NE                     Landmark               GSSR                   Baseline               Proposed Method
Scale        2      3      4        2      3      4        2      3      4        2      3      4        2      3      4
a     PSNR   28.38  27.27  25.63    30.77  29.74  28.36    32.14  29.66  28.14    33.26  30.11  28.30    34.37  31.84  30.16
      SSIM   0.8596 0.8375 0.7916   0.9000 0.8723 0.8390   0.9291 0.8878 0.8461   0.9488 0.8974 0.8498   0.9597 0.9296 0.8992
b     PSNR   27.00  25.89  24.23    28.71  27.69  26.45    29.73  27.80  26.49    30.83  28.00  26.68    31.57  29.04  27.69
      SSIM   0.8180 0.7903 0.7349   0.8475 0.8152 0.7780   0.8892 0.8396 0.7921   0.9174 0.8477 0.7982   0.9294 0.8772 0.8355
c     PSNR   29.09  28.07  26.30    30.67  29.40  27.77    31.72  29.82  28.28    32.92  30.10  28.43    34.18  31.87  30.33
      SSIM   0.8233 0.7945 0.7397   0.8031 0.8232 0.7670   0.8936 0.8382 0.7758   0.9235 0.8480 0.7826   0.9319 0.8727 0.8094
d     PSNR   26.99  26.08  24.47    28.72  28.25  26.43    28.63  26.85  25.81    29.52  27.07  25.83    31.40  29.30  27.83
      SSIM   0.7884 0.7560 0.6978   0.8539 0.7573 0.7569   0.8359 0.7774 0.7262   0.8731 0.7848 0.7287   0.9089 0.8435 0.7938
e     PSNR   31.13  30.01  28.18    33.43  32.32  30.23    33.58  31.81  30.22    35.22  32.05  30.32    36.35  33.37  32.54
      SSIM   0.9311 0.9191 0.8937   0.9384 0.9240 0.9031   0.9535 0.9396 0.9190   0.9717 0.9453 0.9205   0.9745 0.9523 0.9335
f     PSNR   28.83  27.76  25.91    31.11  29.94  28.34    31.74  29.56  27.89    33.21  30.14  28.37    34.05  31.36  29.47
      SSIM   0.8120 0.7895 0.7258   0.8640 0.8353 0.7912   0.8792 0.8258 0.7653   0.9134 0.8383 0.7765   0.9244 0.8706 0.8194
g     PSNR   28.57  26.77  25.76    30.31  29.18  27.83    32.17  29.86  28.33    33.41  30.17  28.55    34.44  31.32  29.47
      SSIM   0.7859 0.7612 0.7012   0.8423 0.8080 0.7557   0.8847 0.8132 0.7496   0.9135 0.8225 0.7593   0.9316 0.8659 0.8118
h     PSNR   27.19  26.11  24.41    29.52  27.92  26.29    31.37  28.20  26.35    32.07  28.18  26.35    32.89  29.35  27.34
      SSIM   0.7551 0.7176 0.6351   0.8329 0.7711 0.6994   0.8879 0.7991 0.7019   0.9174 0.8083 0.7115   0.9359 0.8614 0.7775
Avg.  PSNR   28.40  27.25  25.61    30.41  29.31  27.71    31.39  29.20  27.69    32.56  29.48  27.85    33.66  30.93  29.35
      SSIM   0.8217 0.7957 0.7400   0.8603 0.8258 0.7863   0.8941 0.8401 0.7845   0.9224 0.8490 0.7909   0.9370 0.8842 0.8350
Gain  PSNR   5.26   3.69   3.74     3.25   1.63   1.64     2.27   1.74   1.67     1.10   1.45   1.50     -      -      -
      SSIM   0.1154 0.0884 0.0950   0.0768 0.0584 0.0487   0.0429 0.0441 0.0505   0.0147 0.0351 0.0441   -      -      -
Table 1: PSNR (dB) and SSIM values of SR images for scaling factors 2, 3 and 4 obtained by different methods. For each image, the first row lists PSNR and the second row SSIM. The 'Gain' rows give the average improvement of the proposed method over each competing method.

4 Experiments

4.1 Experimental settings

We train our DHN on the 91 images of [3] and the 200 training images of BSD500 [17]. Besides, as mentioned in Sec. 2.2, during the training of EHN the contrast of the ground-truth images is first adjusted with random perturbations so that the HF map can be extracted from the references more robustly.
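The contrast perturbation can be sketched as below; the jitter range is an assumption, since the paper does not report it:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_contrast(label: np.ndarray, lo: float = 0.7, hi: float = 1.3) -> np.ndarray:
    """Random contrast jitter for EHN label images (assumes intensities
    in [0, 1]); stretches values around the image mean."""
    a = rng.uniform(lo, hi)
    m = label.mean()
    return np.clip(a * (label - m) + m, 0.0, 1.0)
```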

With the mentioned 291 images, we first transform the images to the YCbCr color space and only utilize the luminance channel; the chrominance channels are later simply up-sampled by the Bi-cubic method in the testing process. We then generate sub-images from the dataset images with a stride of 16 pixels. The down-sampling method of [18] is adopted: images are first blurred and then down-sampled with factors of 2, 3 and 4. As a result, around 10 thousand sub-images are obtained for training. The learning rate is initialized empirically.
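A sketch of the training-pair generation; the stride of 16 and the scale factors follow the paper, while the sub-image size and blur strength are assumptions (the paper follows the degradation of [18]):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_training_pairs(y_channel: np.ndarray, patch: int = 48,
                        stride: int = 16, scale: int = 3):
    """Cut overlapping HR sub-images and produce blurred, decimated LR
    counterparts for one luminance image."""
    pairs = []
    h, w = y_channel.shape
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            hr = y_channel[i:i + patch, j:j + patch]
            lr = gaussian_filter(hr, sigma=scale / 2.0)[::scale, ::scale]
            pairs.append((lr, hr))
    return pairs
```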

We compare our algorithm with different SR methods, including a typical learning-based SR method [5] (denoted as NE) and two online compensation methods [11, 13] (denoted as Landmark and GSSR, respectively). For a fair comparison, we add the retrieved HR reference images to the training set of the learning-based method NE. Besides, the intermediate results derived by IHN [9] are also shown as the baseline; the baseline is one of the newest deep-learning-based SR methods that does not use external references. The testing images are chosen from the Oxford Buildings dataset. In total, 8 testing images named 'a' to 'h' are used for comparison, as shown in Fig. 3. For each testing image, we retrieve 4 reference images to extract the EHF map for enhancement.

Note that the experimental results of Landmark are enhanced by only a single reference image. Complete experimental results will be updated soon.

4.2 Experimental results and analysis

Table 1 shows the objective results on the chosen images. Our proposed method obtains the best PSNR and SSIM values at every down-sampling scale factor for all 8 images. Even for image 'c' shown in Fig. 2, which differs greatly from its reference images in illumination and color, the proposed method still gains sufficiently over the baseline: 0.54, 0.58 and 0.3 dB in PSNR at scaling factors 2, 3 and 4, respectively. Note that although Landmark successfully recovers some HF information, it does not perform well in the objective comparison; this is caused by the artifacts it introduces, which are also analyzed later.

Subjective results are shown in Fig. 4. Because images in the Oxford Buildings dataset have large resolutions, we only show part of each chosen image to compare the quality of HF information reconstruction more clearly. We also enlarge some HF regions in Fig. 4 for further comparison.

The edge-preserving method NE successfully obtains sharper edges. However, it fails to reconstruct other, more detailed HF signals. Landmark successfully combines some HF signals of the HR references into its result images, but artifacts are sometimes introduced by incorrect patch matching results or inappropriate patch blending, as shown in Fig. 4; as a result, the visual quality of Landmark's results is relatively low. The sparse-representation-based GSSR does not consider the positions of the reference patches; when there are many similar reference patches, more noise is brought into its SR results. The edge-guided baseline method [9] also reconstructs some HF signals well, but without information from HR references it fails to recover the details of many more complex regions. In contrast, our method obtains the best HF information reconstruction. Owing to the robustness of the HF extraction and correct patch matching, no artifacts are introduced and our results have the best visual quality.

            Baseline   DHN-d    Proposed
Avg. PSNR   29.48      30.53    30.93
Avg. SSIM   0.8490     0.8842   0.8842
PSNR gain   1.45       0.40     -
SSIM gain   0.0351     0.0000   -
Table 2: Average PSNR (dB) and SSIM of the baseline, DHN-d and the proposed method over the 8 testing images at scaling factor 3. The gain rows give the improvement of the proposed method over each column.

To evaluate the effectiveness of our EHN training policy with input random perturbation and normalization, we also compare with a variant that directly inputs the raw HR ground truth (denoted DHN-d); the other training details are identical. The objective comparison between our proposed method and DHN-d is shown in Table 2. Although DHN-d is capable of extracting some HF information and outperforms the baseline, the random perturbation and normalization training policy still brings a 0.40 dB gain.

[Figure 4: six result columns for each of three example images — (a) Ground Truth, (b) NE [5], (c) Landmark [11], (d) GSSR [13], (e) Baseline [9], (f) Proposed Method.]
Figure 4: Subjective results of different methods at scaling factor 3 for images 'b', 'd' and 'h'. Some regions containing HF signals are marked with red rectangles and enlarged for comparison.

5 Conclusion

In this paper, we propose a deep online compensation framework for image super-resolution. With the IHF map estimated by IHN, we first obtain an intermediate SR result by combining the IHF map with a simply up-sampled LR image. The EHF map is then extracted from HR references retrieved online for compensation, and the final SR result is obtained by adding the EHF map to the intermediate SR result. Extensive experimental results demonstrate that the proposed method robustly extracts the external HF map from the reference images and significantly improves the SR results through the compensation brought by the EHF map.

References

  • [1] J. Sun, J. Sun, Z. Xu, and H. Y. Shum, "Gradient profile prior and its applications in image super-resolution and enhancement," IEEE Transactions on Image Processing, vol. 20, no. 6, pp. 1529–1542, June 2011.
  • [2] A. Marquina and S. Osher, "Image super-resolution by TV-regularization and Bregman iteration," Journal of Scientific Computing, vol. 37, no. 3, pp. 367–382, December 2008.
  • [3] J. Yang, J. Wright, T. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
  • [4] R. Timofte, V. De Smet, and L. Van Gool, "A+: Adjusted anchored neighborhood regression for fast super-resolution," in Proc. Asian Conference on Computer Vision, 2014.
  • [5] Y. Li, J. Liu, W. Yang, and Z. Guo, "Neighborhood regression for edge-preserving image super-resolution," in Proc. IEEE Int'l Conf. Acoustics, Speech, and Signal Processing, 2015.
  • [6] C. Dong, C. C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution," in Proc. European Conference on Computer Vision, 2014, pp. 184–199.
  • [7] D. Liu, Z. Wang, B. Wen, J. Yang, W. Han, and T. S. Huang, "Robust single image super-resolution via deep networks with sparse prior," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3194–3207, 2016.
  • [8] J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," in Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 2016.
  • [9] W. Yang, J. Feng, J. Yang, F. Zhao, J. Liu, Z. Guo, and S. Yan, "Deep edge guided recurrent residual learning for image super-resolution," arXiv preprint arXiv:1604.08671, 2016.
  • [10] R. Timofte, V. De Smet, and L. Van Gool, "Semantic super-resolution: When and where is it useful?," Computer Vision and Image Understanding, vol. 142, pp. 1–12, 2016.
  • [11] H. Yue, X. Sun, J. Yang, and F. Wu, "Landmark image super-resolution by retrieving web images," IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4865–4875, December 2013.
  • [12] Y. Li, W. Dong, G. Shi, and X. Xie, "Learning parametric distributions for image super-resolution: Where patch matching meets sparse coding," in Proc. IEEE Int'l Conf. Computer Vision, 2015.
  • [13] J. Liu, W. Yang, X. Zhang, and Z. Guo, "Retrieval compensated group structured sparsity for image super-resolution," IEEE Transactions on Multimedia, vol. PP, no. 99, 2016.
  • [14] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
  • [15] F. Li and P. Perona, "A Bayesian hierarchical model for learning natural scene categories," in Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, 2005, pp. 524–531.
  • [16] D. Lowe, "Distinctive image features from scale-invariant keypoints," Int'l Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, November 2004.
  • [17] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, "Contour detection and hierarchical image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898–916, 2011.
  • [18] Y. Li, W. Dong, G. Shi, and X. Xie, "Learning parametric distributions for image super-resolution: Where patch matching meets sparse coding," in Proc. IEEE Int'l Conf. Computer Vision, 2015.