Super-resolution (SR) algorithms aim to construct a high-resolution (HR) image from one or multiple low-resolution (LR) inputs. Being ill-posed, SR has to resort to strong image priors, ranging from the simplest analytical smoothness assumptions to more complicated statistical and structural priors. The most popular SR methods rely on a large and representative external set of image pairs to learn the mapping between LR and HR image patches. Those methods are known for their capability to produce plausible image appearances. However, there is no guarantee that an arbitrary input patch can be well matched or represented by a pre-chosen external set. When there are few matching features for the input, external examples are prone to producing either noise or over-smoothing. Meanwhile, image patches tend to recur within the same image, or across different image scales. The self similarity property provides self examples that are highly relevant to the input, but only in limited numbers. Due to the insufficiency of self examples, their mismatches often result in more visual artifacts. It is recognized that external and self example-based SR methods each suffer from their inherent drawbacks.
The joint utilization of both external and self examples was first studied for image denoising. Mosseri et al. observed that image patches have different preferences for either external or self examples in denoising; such a preference is in essence a tradeoff between noise-fitting and signal-fitting. Burger et al. proposed a learning-based approach that automatically combines the denoising results of a self example-based and an external example-based method. In the SR literature, a local autoregressive (AR) model and a nonlocal self similarity regularization term have been incorporated into the sparse representation framework. Yang et al. learned an approximate nonlinear SR mapping function from external examples with the help of in-place self similarity. More recently, a joint SR model was proposed to adaptively combine the advantages of both external and self examples. It has been observed that external examples contribute to visually pleasant SR results in relatively smooth regions, while self examples reproduce recurring singular features of the input. This complementary behavior has been similarly verified in the image denoising literature.
More recently, inspired by the great success achieved by deep learning (DL) models in other computer vision tasks, there is a growing interest in applying deep architectures to image SR. A Super-Resolution Convolutional Neural Network (SRCNN) was proposed by Dong et al. Thanks to the end-to-end training and the large learning capacity of the CNN, SRCNN obtains significant improvements over classical non-DL methods. In SRCNN, the information exploited for reconstruction is comparatively larger than that used in previous sparse coding approaches. However, SRCNN does not take any self similarity property into account. Cui et al. proposed the deep network cascade (DNC) to embed the self example-based approach into auto-encoders (AEs). In each layer of the network, a patchwise non-local self similarity search is first performed to enhance high-frequency details, so the whole model is not designed as an end-to-end solution. So far, there is no principled approach to utilizing self similarity to regularize deep learning models, not only for SR, but also for general image restoration applications.
In this paper, we propose a unified deep learning framework to jointly utilize both the wealth of external examples and the power of self examples specific to the input. We name our proposed model deep joint super resolution (DJSR). While the mutually reinforcing properties of external and self similarities have been utilized in classical example-based methods, to the best of our knowledge, DJSR is the first to adapt deep models for joint similarities. The major contributions are summarized as follows:
We pre-train the model using an external set with data augmentations. We then fine-tune it using self-example pairs from the input image. Such a framework can be easily extended to other applications.
We propose to sample a large pool of self-example pairs using multi-scale self similarity, each of which is assigned a confidence weight during training. That alleviates the insufficiency of reliable self examples.
We extend DJSR into several dedicated sub-models, and conduct selective training and patch processing.
Connecting SR to Domain Adaptation
The idea of DJSR has certain connections to domain adaptation, or transfer learning. In domain adaptation, given a source domain with sufficient labeled data for training and a target domain with insufficient labeled data and a different distribution, the problem is to make the model trained on the source domain generalize well on the target domain. In our setting, LR-HR pairs resemble the data-label tuples. The DJSR model is first learned on the source domain of external examples, and then adapted to the target domain of self examples from the test image. From a domain adaptation perspective, that explains why DJSR can outperform previous models based on either external examples alone (applying source-domain models directly to the target domain) or self examples alone (relying on the target domain only to train models).
2 Pre-Training Using External Examples
Several deep architectures have been explored for SR previously. The authors of DNC stacked collaborative local auto-encoders (CLAs) to form a cascade. However, auto-encoders (AEs) rely mostly on fully-connected models and ignore the 2D image structure. In SRCNN, a fully convolutional network is learned to predict the nonlinear LR-HR mapping. Such a model has a clear analogy to classical sparse coding methods.
A Convolutional Auto-Encoder (CAE) has been proposed to learn features using a hierarchical unsupervised feature extractor while preserving spatial locality. CAEs can be stacked to form a Stacked Convolutional Auto-Encoder (SCAE), where each layer receives its input from the latent representation of the layer below. It has further been revealed that auto-encoders are prone to learning trivial projections, and the learned filters are usually subject to random corruptions. Denoising was thus suggested as a training criterion to learn robust structural features, which also proves effective for image restoration tasks beyond the original classification setting. We employ a Stacked Denoising Convolutional Auto-Encoder (SDCAE) to reconstruct HR images from stochastically corrupted LR versions. Such an architecture combines the intuitive idea of AEs with the power of CNNs to capture 2D structures efficiently. Note that other alternatives, such as SRCNN, could potentially be adapted here as well.
While there are multiple SCAE implementations available, we adopt the one at https://github.com/ifp-uiuc/anna, as it has shown some improvements over the original implementation on the CIFAR-10 benchmark. The model is similar to earlier deconvolutional networks but does not use sparse coding, and introduces zero-bias units and ReLUs in the convolutional layers. To convert the SCAE into an SDCAE, all we need to do is add a stochastic corruption step (we use additive isotropic Gaussian noise) operating on the input. Assuming that the original images are downsized by a fixed scale to generate LR-HR example pairs for both training and evaluation, the SDCAE architecture is depicted in Fig. 1, where the input is a LR image and the output is its HR counterpart. The trained network thus performs SR with that same factor.
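The corruption step itself takes only a few lines. Below is a minimal sketch; the noise level `sigma` is an assumed value, not one reported in the paper, and the clean images remain the reconstruction targets:

```python
import numpy as np

def corrupt(lr_batch, sigma=0.1, rng=None):
    """Denoising-AE training input: stochastically corrupt each LR image
    with additive isotropic Gaussian noise. The clean HR images stay the
    reconstruction targets; sigma is an illustrative value."""
    rng = np.random.default_rng() if rng is None else rng
    return lr_batch + rng.normal(0.0, sigma, size=lr_batch.shape)
```

A fresh corruption is drawn every epoch, so the network never sees the exact same noisy input twice.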
3 Fine-Tuning Using Self Examples
3.1 Self-Example Pairs by Multi-Scale Similarity
In DNC, the authors enhanced high-frequency details by employing a non-local self similarity (NLSS) search over successively blurred and downscaled versions of the input image. By combining those internal matches, the estimated patch usually contains more abundant texture information. However, this overlooks the across-scale similarity property of natural images, i.e., that singular features like edges and corners in small patches tend to repeat almost identically in their slightly upscaled versions. In addition, such a pre-processing step is not jointly optimized with the deep network cascade.
Freedman and Fattal applied the "high frequency transfer" method to search for the high-frequency component of a target HR patch by NN patch matching across scales. Let X denote the HR image to be estimated from the LR input Y, and let x_i and y_i stand for the i-th (i = 1, 2, ..., N) patch from X and Y, respectively. Defining a linear interpolation operator U and a downsampling operator D, for the input LR image Y we first obtain its initial upsampled image X' = U(Y), and a smoothed input image Y' = D(U(Y)). Given the smoothed patch x'_i from X', the missing high-frequency band of each unknown patch x_i is predicted by first solving the NN matching problem (1):

    j* = argmin_{j ∈ W(i)} ||y'_j − x'_i||_2^2,    (1)

where W(i) is defined as a searching window on image Y'. With the co-located patch y_{j*} from Y, the high-frequency band y_{j*} − y'_{j*} is pasted onto x'_i, i.e., x_i = x'_i + (y_{j*} − y'_{j*}).
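The matching-and-pasting step can be sketched as follows. This is a minimal sketch assuming single-channel images already brought to a common size; the patch size, window radius, and function names are illustrative, not from the paper:

```python
import numpy as np

def extract_patches(img, size, stride):
    """Collect all size x size patches with the given stride (row-major)."""
    H, W = img.shape
    return np.stack([img[r:r + size, c:c + size]
                     for r in range(0, H - size + 1, stride)
                     for c in range(0, W - size + 1, stride)])

def hf_transfer(x_up, y, y_smooth, i_row, i_col, size=5, win=10):
    """Predict one HR patch: match the smoothed upsampled patch against
    patches of the smoothed image inside a local window (the NN matching
    criterion), then paste the high-frequency band (y - y_smooth) of the
    best match onto the query patch."""
    query = x_up[i_row:i_row + size, i_col:i_col + size]
    H, W = y.shape
    best_err, best = np.inf, None
    for r in range(max(0, i_row - win), min(H - size, i_row + win) + 1):
        for c in range(max(0, i_col - win), min(W - size, i_col + win) + 1):
            cand = y_smooth[r:r + size, c:c + size]
            err = np.sum((cand - query) ** 2)  # matching error of Eq. (1)
            if err < best_err:
                best_err, best = err, (r, c)
    r, c = best
    hf = y[r:r + size, c:c + size] - y_smooth[r:r + size, c:c + size]
    return query + hf, best_err
```

The returned matching error is what the next subsection turns into a confidence weight.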
Note that the above methodology can be applied to construct self-example pairs for an input image. It is thus straightforward to consider adopting those self-example pairs to fine-tune our pre-trained network. However, two problems obstruct such a practice:
Insufficiency of Informative Examples The amount of self examples is usually far less than the size of external training sets. For example, an input image can generate at most 61,504 patches of a small size (and thus the same number of self-example pairs) with a minimum stride of 1. Moreover, the information carried by those example pairs is also far less rich.
Limited Reliability In essence, some input patches exhibit few discernible repeating patterns. They may thus be unable to find good matches within the same image, which leads to the visual artifacts of previous high frequency transfer methods. Besides, the NN patch matching constitutes the core step of the high frequency transfer scheme, and NN matching (1) is not reliable under noise and outliers in LR images.
To resolve the above concerns, we sample a hierarchy of self examples from multiple scales, each of which is associated with a confidence weight calculated from the NN matching error in (1). The key idea is to exploit the cross-scale patch redundancy embedded between multiple neighboring scales. The steps are outlined in Algorithm 1.
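The sampling procedure can be sketched roughly as follows. This is a simplified stand-in, not Algorithm 1 itself: nearest-neighbor rescaling replaces bicubic, co-located patches stand in for the NN matches of (1), and the exp(−error) weight mapping is a hypothetical choice; the scale hierarchy follows the 1.2 step used in the paper:

```python
import numpy as np

def resize_nn(img, factor):
    """Nearest-neighbor rescale; a stand-in for bicubic in this sketch."""
    H, W = img.shape
    h, w = max(1, int(round(H * factor))), max(1, int(round(W * factor)))
    rows = np.clip((np.arange(h) / factor).astype(int), 0, H - 1)
    cols = np.clip((np.arange(w) / factor).astype(int), 0, W - 1)
    return img[np.ix_(rows, cols)]

def self_example_pairs(img, scales=(1/1.44, 1/1.2, 1.0, 1.2, 1.44),
                       size=9, stride=4):
    """Sample (LR, HR) self-example pairs across a scale hierarchy; every
    pair receives a confidence weight derived from its matching error."""
    pairs, errors = [], []
    for s in scales:
        degraded = resize_nn(resize_nn(img, s), 1.0 / s)  # down/up round trip
        H = min(degraded.shape[0], img.shape[0])
        W = min(degraded.shape[1], img.shape[1])
        for r in range(0, H - size + 1, stride):
            for c in range(0, W - size + 1, stride):
                lr = degraded[r:r + size, c:c + size]
                hr = img[r:r + size, c:c + size]
                errors.append(np.mean((lr - hr) ** 2))
                pairs.append((lr, hr))
    w = np.exp(-np.asarray(errors))
    return pairs, w / w.max()  # normalized confidence weights in (0, 1]
```

Pairs taken at the original scale come back with zero error and therefore maximal weight, matching the intuition that the most reliable self examples dominate.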
3.2 Weighted Back Propagation
The self-example pairs obtained from Algorithm 1 can be used to fine-tune the pre-trained SDCAE, making it specially adapted to the input. To incorporate the reliability of the self-example pairs into the process, a variant of standard back propagation, called Weighted Back Propagation (WBP), is developed to alleviate the negative impact of bad examples without sacrificing the benefits of abundant training data. In particular, let w denote the normalized confidence weight for the current self-example pair, η the learning rate for fine-tuning, and ∇W_l the gradient; the weight matrices W_l (l is the layer index) are updated as:

    W_l ← W_l − η · w · ∇W_l.
Note that each example pair possesses a different (and pre-calculated) confidence weight. Such an importance weighting strategy has been commonly applied to transfer learning problems; yet to the best of our knowledge, there is no similar work in deep learning. In all experiments, the learning rate for fine-tuning is set to 0.5 by default, and is not annealed. That large value is empirically found to work well, leading to a better presence of self similarity in the final SR results.
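The per-pair update is a one-liner per layer. A minimal plain-SGD sketch (no momentum, which is a simplification); lr = 0.5 mirrors the default fine-tuning rate:

```python
import numpy as np

def wbp_update(weights, grads, conf, lr=0.5):
    """One step of Weighted Back Propagation: each layer's gradient is
    scaled by the confidence weight of the current self-example pair, so
    unreliable pairs perturb the pre-trained weights less."""
    return [W - lr * conf * g for W, g in zip(weights, grads)]
```

A pair with conf = 0 leaves the network untouched, while a fully reliable pair (conf = 1) reduces to the standard SGD step.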
4 Sub-model Training and Selection
Previous DL-based image SR methods aim at learning one model that is capable of representing various image structures. Such a model lacks adaptivity to local structures; in some cases, it may also lead to a model of overly high complexity and redundancy. When learning regressors from LR to HR patches, it has been observed that regressors have different specialties in dealing with certain patches. Following this idea, external examples are partitioned into many clusters, each of which consists of patches with similar patterns and can be used to pre-train a sub SDCAE model. Next, for each input patch, the most relevant sub-model is first selected and then fine-tuned with its own self-example pairs. Since a given patch can be better represented by the adaptively selected sub-model, the whole HR image can be more accurately reconstructed.
Provided with an external set, we first use the high-pass filtering output of each LR patch as the feature for clustering, which allows us to focus on the edges and structures of image patches. We then adopt the K-means algorithm to partition the whole set into K clusters, where c_k denotes the centroid of the k-th cluster, k = 1, ..., K. During model fine-tuning (and testing), the best sub-model is chosen based on the minimum distance between the LR patch and the centroids. Let C = [c_1, c_2, ..., c_K]; its PCA transformation matrix is obtained by applying SVD to the covariance matrix of C. We can then compute the distance between an input patch and the cluster centroids more robustly in the subspace spanned by the most significant eigenvectors.
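The selection step might look like the following numpy-only sketch; `pca_basis` and `select_submodel` are hypothetical helper names, and the high-pass feature extraction is assumed to happen upstream:

```python
import numpy as np

def pca_basis(centroids, n_components=2):
    """PCA transform from the centroid matrix C = [c_1, ..., c_K]: SVD of
    the covariance of C yields the most significant eigenvectors."""
    C = centroids - centroids.mean(axis=0)
    cov = C.T @ C / max(1, len(C) - 1)
    U, _, _ = np.linalg.svd(cov)
    return U[:, :n_components]

def select_submodel(patch_feature, centroids, basis):
    """Pick the sub-model whose centroid is nearest to the (high-pass)
    patch feature inside the PCA subspace."""
    p = patch_feature @ basis
    c = centroids @ basis
    return int(np.argmin(np.sum((c - p) ** 2, axis=1)))
```

Keeping only the leading eigenvectors suppresses noisy feature dimensions, which is what makes the centroid distance more robust than a raw Euclidean comparison.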
The SDCAE is learned from an external training set of 91 images, which has also been adopted in prior SR work. The 91-image dataset is decomposed into 24,800 sub-images of size 33×33 for training, extracted from the original images with a stride of 14. For each LR patch, we subtract its mean and normalize its magnitude; both are later restored in the recovered HR patch. While data augmentation is not adopted in SRCNN, we believe that it plays an important role in training DJSR, helping it focus on meaningful visual features rather than artifacts in the training images. We add the following distortions to training images:
Translation: random x-y shifts between [-4, 4] pixels.
Rotation: affine transform with random parameters.
Zoom: random scaling factors between [1/1.2, 1.2]. Note that the aspect ratio is kept unchanged.
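The three distortions above can be sketched roughly as follows. This is a simplified stand-in: circular shifts approximate translation, right-angle rotations replace arbitrary affine ones, and a nearest-neighbor rescale replaces proper zoom; all names and details are illustrative:

```python
import numpy as np

def augment(img, rng):
    """Apply random translation (within +/-4 px), rotation, and an
    aspect-preserving zoom in [1/1.2, 1.2] to one training image."""
    # translation: a circular shift keeps the array size fixed
    dx, dy = rng.integers(-4, 5, size=2)
    out = np.roll(img, (dy, dx), axis=(0, 1))
    # rotation: k * 90 degrees stands in for a random affine rotation
    out = np.rot90(out, k=rng.integers(0, 4))
    # zoom: one factor on both axes preserves the aspect ratio
    f = rng.uniform(1 / 1.2, 1.2)
    H, W = out.shape
    rows = np.clip((np.arange(int(H * f)) / f).astype(int), 0, H - 1)
    cols = np.clip((np.arange(int(W * f)) / f).astype(int), 0, W - 1)
    return out[np.ix_(rows, cols)]
```

In practice each sub-image would be re-augmented on the fly, so the effective training set is much larger than 24,800 samples.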
The noise injection in SDCAE can itself be viewed as a form of data augmentation. We train the SDCAE on the sub-images using stochastic gradient descent with a constant momentum of 0.9 and a learning rate of 0.01 (not annealed during training). Mean Squared Error (MSE) is used as the loss function.
Since cross-scale self similarity performs best at small scales, we stick to a small upscaling factor (1.2 by default) for model training, unless otherwise specified. To achieve any targeted upscaling factor, we zoom up an image repeatedly using the learned DJSR model until it is at least as large as the desired size. Then bicubic interpolation is used to downscale it to the target resolution if necessary. We do not conduct extra joint optimization on the resulting network cascade. The proposed networks are implemented using the CUDA ConvNet package and the ANNA open source library, and run on a workstation with 12 Intel Xeon 2.67GHz CPUs and 1 GTX680 GPU.
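The repeat-then-downscale scheme can be sketched as follows; the ×1.2 SR network is replaced by a nearest-neighbor placeholder, and nearest-neighbor also stands in for the final bicubic downscale:

```python
import numpy as np

def upscale_to(img, target_factor, step=1.2, model=None):
    """Apply a x1.2 upscaling model repeatedly until the image is at
    least target_factor times the input size, then downscale to the
    exact target resolution. `model` defaults to a placeholder
    nearest-neighbor upscaler; in DJSR it would be the fine-tuned net."""
    def nn_resize(x, h, w):
        H, W = x.shape
        r = np.clip(np.arange(h) * H // h, 0, H - 1)
        c = np.clip(np.arange(w) * W // w, 0, W - 1)
        return x[np.ix_(r, c)]
    if model is None:
        model = lambda x: nn_resize(x, int(round(x.shape[0] * step)),
                                       int(round(x.shape[1] * step)))
    target_h = int(round(img.shape[0] * target_factor))
    target_w = int(round(img.shape[1] * target_factor))
    out = img
    while out.shape[0] < target_h or out.shape[1] < target_w:
        out = model(out)  # one x1.2 SR pass
    return nn_resize(out, target_h, target_w)  # bicubic in the paper
```

For a ×3 target, six ×1.2 passes overshoot slightly and the final resize brings the result to the exact size.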
For color images, we apply the SR algorithm to the luminance channel only, while interpolating the color layers (Cb, Cr) using plain bicubic interpolation. However, our model is flexible enough to process more color channels without altering the network design. To avoid border effects, all convolutional layers use no padding, and the network produces a smaller central output.
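The luminance-only pipeline might look like the sketch below. The BT.601 full-range transform is our assumption (the paper does not specify the exact matrix), and the SR and chroma-interpolation functions are passed in as callables:

```python
import numpy as np

# Approximate BT.601 full-range RGB <-> YCbCr transform
def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 + (b - y) * 0.564
    cr = 0.5 + (r - y) * 0.713
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc[..., 0], ycc[..., 1], ycc[..., 2]
    r = y + 1.403 * (cr - 0.5)
    b = y + 1.773 * (cb - 0.5)
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

def sr_color(rgb, sr_luma, interp_chroma):
    """Run SR on the Y channel only; Cb/Cr get plain interpolation."""
    ycc = rgb_to_ycbcr(rgb)
    y = sr_luma(ycc[..., 0])
    cb = interp_chroma(ycc[..., 1])
    cr = interp_chroma(ycc[..., 2])
    return ycbcr_to_rgb(np.stack([y, cb, cr], axis=-1))
```

Working on luminance alone is justified because the human eye is far more sensitive to brightness detail than to chroma resolution.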
5.2 Model Analysis
Validating Pre-Training We visualize the learned convolutional filters in the first layers of SDCAEs trained without and with augmentations. They are trained with a relatively large scaling factor of 2, so as to be compared with the first-layer filter visualizations of SRCNN, as depicted in Fig. 2. The training process takes around 7 hours. Both the SDCAEs and SRCNN have 64 channels of convolutional filters in the first layer. While there are hardly any recognizable structural features in the filters in (a), the introduction of data augmentation leads to much clearer and more interpretable filters in (b), from simple edge (curve) detectors at different orientations (e.g., a, b and c) to more sophisticated texture descriptors (e.g., d, e and f). On the other hand, since the first layer of SRCNN is designed for patch extraction and representation, it is natural that its learned filters look different from ours. One interesting observation is that SRCNN suffers from several "dead" filters, whose weights are all nearly zero, whereas almost all filters of the SDCAE are fairly strong and diverse. Further, the SDCAE with augmentations obtains an average PSNR of 36.43 dB when tested on Set 5 (with no fine-tuning applied), a notable improvement of 1.42 dB over the case without augmentations (35.01 dB).
Understanding Fine-Tuning During fine-tuning, we sample LR patches from the input with the default patch size and a stride of 1. In this section, we take the Baby LR image as an example, which results in 58,564 patches. Assuming the SDCAE has been pre-trained, by default we fix the scale hierarchy parameter (defined in Algorithm 1) to 5, which means the hierarchy contains two upscaled layers (by factors of 1.2 and 1.44) and two downscaled layers (by factors of 1/1.2 and 1/1.44). Fine-tuning a trained SDCAE on Baby takes less than 1 hour, and it could potentially be accelerated to a large extent by avoiding work on homogeneous regions. We then investigate the influence of the parameter that controls the amount of self-example pairs, the effect of tuning the learning rate, and the effect of the WBP algorithm.
Fig. 3 depicts the distribution of normalized weight values of all self-example pairs obtained on Baby, with the scale parameter set to 8. Notably, two peaks appear in the histogram: the largest near a weight value of 1, and a lower one centered around 0.75. Further observation reveals that the first peak corresponds to in-place self examples, which follow the local scale invariance property and are usually very accurate matches. The second peak, with relatively larger errors, mostly corresponds to non-local similar examples.
To further understand how the amount of self examples and their weights influence the fine-tuning, we introduce several measurements. Let V denote the total volume of self-example pairs. Define the effective volume V_e as the (rounded) summation of all normalized weights, and the average effective volume R = V_e / V. R can be viewed as an indicator of how reliable and representative the chosen self-example set is. As shown in Table 1, as the scale parameter grows from 4 to 8, both V and V_e increase, implying that self similarity is better exploited. The PSNR improvement after fine-tuning also becomes more substantial. However, as the parameter continues to go up, both V_e and PSNR reach a plateau, whereas R decreases dramatically. That clearly manifests that little self similarity information remains to be excavated, and the self-example sets presumably turn redundant. Therefore, the scale parameter is set to 8 by default hereinafter. The last row of Table 1 lists the PSNR results obtained from fine-tuning with standard (unweighted) back propagation. It is noteworthy that without taking the confidence weights into consideration, more self-example pairs may even harm the SR performance of the pre-trained model when R drops. The WBP algorithm proves quite robust when more self examples are used in fine-tuning.
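The three measurements reduce to a few lines; the symbol names V, V_e, R are our reconstruction of the stripped notation:

```python
import numpy as np

def effective_volume(norm_weights):
    """Table-1 measurements: V is the total number of self-example pairs,
    V_e the rounded sum of their normalized confidence weights, and
    R = V_e / V the average effective volume."""
    w = np.asarray(norm_weights, dtype=float)
    V = len(w)
    V_e = int(round(float(w.sum())))
    return V, V_e, V_e / V
```

A set of uniformly high-weight pairs gives R close to 1; adding many low-confidence pairs inflates V while dragging R down, which is exactly the redundancy signal discussed above.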
Table 2 examines the PSNR changes with a varying learning rate. We observe that a small learning rate (such as 0.01) does not bring any notable improvement to the final results. Up to 0.5 (the empirical default value), a growing learning rate leads to a monotone increase in the final PSNR. That is interpretable: since self examples do not contain information as sufficient and diverse as external examples do, a large learning rate strengthens their influence on the pre-trained model. It may also help overcome some local minima. However, when the learning rate is increased beyond 0.5, the performance starts to be undermined, and more fluctuations are observed during convergence (0.8 actually does not lead to stable convergence).
5.3 Comparison with State-of-the-Arts
We first compare DJSR qualitatively with two recent DL-based SR methods: SRCNN (results obtained with the original implementation available at http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html) and DNC (results provided by the authors at http://vipl.ict.ac.cn/paperpage/DNC). Figs. 4, 5, and 6 show visual comparisons on three natural images, Baby, Roman, and Train, respectively, all upsampled by a factor of 3; zoomed regions are also displayed. SRCNN performs reasonably well on the Baby and Train images, but is visually worse than DNC on the Roman image, since Roman abounds in repeating textures on the pillars of the Parthenon, making self similarity especially powerful. DJSR produces image patterns with sharper boundaries and richer textures (see the zoomed pillar regions on Roman and the numbers on Train), and suppresses jaggy and blocky artifacts discernibly better.
PSNR and SSIM are used to evaluate the performance quantitatively (only the luminance channel is considered). While all three deep networks are optimized under an MSE (equivalent to PSNR) loss, DJSR is slightly worse than SRCNN on Baby in terms of PSNR, but obtains the best performance on both the Roman and Train images. Moreover, we notice that DJSR is particularly favored by SSIM, which measures image quality more consistently with human perception than PSNR. The observation is further verified on the commonly adopted Set 5 and Set 14 datasets. Such an advantage can be attributed to our fine-tuning step, which further enhances the generic model by exploiting the self-similar structures of the input. Table 3 compares the average PSNR and SSIM results of DJSR and SRCNN, as well as a few other classical non-DL SR methods, on the Set 5 and Set 14 datasets (DNC is not included, since neither the original code nor any reported results on the two sets are available; a part of the data in Table 3 is taken from prior work). DJSR obtains an overall competitive performance, and especially gains a consistent advantage over others in SSIM.
5.4 Evaluation of Sub-models
While the DJSR model competes very well against the state-of-the-arts, there is still room for improvement by training a group of sub-models and selecting the optimal sub-model for each patch. The number of clusters is a parameter to be pre-determined. Specifically, we train the sub-models under different numbers of clusters and apply them to upscale the LR images in Set 5 (scaling factor of 2). Fig. 7 records the average PSNR values (the black dashed line denotes the original DJSR model, i.e., a single cluster). We can see that as the number of clusters increases from 50 to 500, the PSNR results gradually rise, as each sub-model is trained to describe a smaller subset of similar image patches more precisely. Yet a slight performance drop occurs at 800 clusters and continues at 1,000. A closer look into the clusters shows that when their number becomes too large, many clusters contain only a few thousand examples, which are inadequate for training a deep network. Such "chaotic" sub-models ultimately hamper the overall performance.
| | Bicubic | Sparse Coding | Freedman et al. | A+ | SRCNN | DJSR |
| Set 5, ×2 (PSNR) | 33.66 | 35.27 | 33.61 | 36.24 | 36.66 | 36.78 |
| Set 5, ×3 (PSNR) | 30.39 | 31.42 | 30.77 | 32.59 | 32.75 | 32.65 |
| Set 14, ×2 (PSNR) | 30.23 | 31.34 | 31.99 | 32.58 | 32.45 | 32.51 |
| Set 14, ×3 (PSNR) | 27.54 | 28.31 | 28.26 | 29.13 | 29.60 | 29.96 |
Fig. 7 reminds us that while dividing into sub-models is usually helpful, an improper number of clusters can impact the results negatively. The optimal selection of the cluster number is a nontrivial task, subject to the bias-variance tradeoff. If it is too small, the boundaries between clusters are smoothed out and the distinctiveness of the sub-models is compromised. On the other hand, an overly large number makes some of the sub-models unreliable. A simple heuristic is adopted in our experiments: the training dataset is first partitioned into 100 clusters; next, small fragmented clusters (e.g., those containing fewer than 500 samples) are merged into their neighboring clusters. That usually leads to a cluster count between 40 and 50. We find that this strategy gives a good and stable performance improvement across several different training sets.
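The merging heuristic can be sketched as follows, assuming at least one cluster survives the size threshold; the function and variable names are illustrative:

```python
import numpy as np

def merge_small_clusters(labels, centroids, min_size=500):
    """Merge every cluster holding fewer than min_size samples into the
    nearest sufficiently large cluster (by centroid distance)."""
    labels = np.asarray(labels).copy()
    sizes = np.bincount(labels, minlength=len(centroids))
    big = np.where(sizes >= min_size)[0]
    for k in np.where(sizes < min_size)[0]:
        d = np.sum((centroids[big] - centroids[k]) ** 2, axis=1)
        labels[labels == k] = big[np.argmin(d)]
    return labels
```

Merging by centroid distance keeps visually similar patches together, so the surviving clusters still map cleanly onto dedicated sub-models.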
In this paper, we investigate a deep joint super resolution (DJSR) model, to exploit external and self similarities for image SR in a unified framework. We utilize external examples to pre-train the model, and fine-tune it using sufficient self examples weighted by their reliability. We thoroughly analyze the model and interpret its behaviors. DJSR is compared with several state-of-the-art SR methods (both DL and non-DL) in our experiments, and shows a visible performance advantage both quantitatively and perceptually. Similar approaches can be extended to many other image restoration applications.
-  H. C. Burger, C. Schuler, and S. Harmeling. Learning how to combine internal and external denoising methods. In Pattern Recognition, pages 121–130. Springer, 2013.
-  Z. Cui, H. Chang, S. Shan, B. Zhong, and X. Chen. Deep network cascade for image super-resolution. In ECCV, pages 49–64. 2014.
-  D. Dai, R. Timofte, and L. Van Gool. Jointly optimized regressors for image super-resolution.
-  C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In ECCV, pages 184–199. 2014.
-  W. Dong, D. Zhang, G. Shi, and X. Wu. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. Image Processing, IEEE Transactions on, 20(7):1838–1857, 2011.
-  M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. Image Processing, IEEE Transactions on, 15(12):3736–3745, 2006.
-  G. Freedman and R. Fattal. Image and video upscaling from local self-examples. ACM Transactions on Graphics, 30(2):12, 2011.
-  D. Glasner, S. Bagon, and M. Irani. Super-resolution from a single image. In ICCV, pages 349–356. IEEE, 2009.
-  X. Glorot, A. Bordes, and Y. Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of ICML, pages 513–520, 2011.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
-  A. Liu and B. Ziebart. Robust classification under sample selection bias. In NIPS, pages 37–45, 2014.
-  J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. In ICANN, pages 52–59. 2011.
-  R. Memisevic, K. Konda, and D. Krueger. Zero-bias autoencoders and the benefits of co-adapting features. arXiv preprint arXiv:1402.3337, 2014.
-  I. Mosseri, M. Zontak, and M. Irani. Combining the power of internal and external denoising. In ICCP, pages 1–9. IEEE, 2013.
-  T. Paine, P. Khorrami, W. Han, and T. S. Huang. An analysis of unsupervised pre-training in light of recent advances. arXiv preprint arXiv:1412.6597, 2014.
-  R. Timofte, V. De Smet, and L. Van Gool. A+: Adjusted anchored neighborhood regression for fast super-resolution.
-  P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408, 2010.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. Image Processing, IEEE Transactions on, 13(4):600–612, 2004.
-  Z. Wang, Z. Wang, S. Chang, J. Yang, and T. Huang. A joint perspective towards image super-resolution: Unifying external-and self-examples. In WACV, pages 596–603. IEEE, 2014.
-  Z. Wang, Y. Yang, Z. Wang, S. Chang, J. Yang, and T. S. Huang. Learning super-resolution jointly from external and internal examples. arXiv preprint arXiv:1503.01138, 2015.
-  Z. Wang, Y. Yang, J. Yang, and T. S. Huang. Designing a composite dictionary adaptively from joint examples. arXiv preprint arXiv:1503.03621, 2015.
-  J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks. In NIPS, pages 341–349, 2012.
-  C.-Y. Yang, J.-B. Huang, and M.-H. Yang. Exploiting self-similarities for single frame super-resolution. In ACCV, pages 497–510. 2011.
-  J. Yang, Z. Lin, and S. Cohen. Fast image super-resolution based on in-place example regression. In CVPR, pages 1059–1066. IEEE, 2013.
-  J. Yang, Z. Wang, Z. Lin, S. Cohen, and T. Huang. Coupled dictionary training for image super-resolution. Image Processing, IEEE Transactions on, 21(8):3467–3478, 2012.
-  J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. Image Processing, IEEE Transactions on, 19(11):2861–2873, 2010.
-  M. Zeiler, G. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In ICCV, pages 2018–2025. IEEE, 2011.
-  M. Zontak and M. Irani. Internal statistics of a single natural image. In CVPR, pages 977–984. IEEE, 2011.