Single image super-resolution (SISR) aims to reconstruct a high-resolution (HR) image from a single low-resolution (LR) image, which is an ill-posed inverse problem. SISR has gained increasing research interest for decades. Recently, convolutional neural networks (CNNs) [6, 32, 25] have significantly improved the peak signal-to-noise ratio (PSNR) in SISR. These networks commonly use an extraction module to extract a series of feature maps from the LR image, cascaded with an up-sampling module to increase the resolution and reconstruct the HR image.
The quality of the extracted features strongly affects the performance of HR image reconstruction. The main part of the extraction module used in modern SR networks can be primarily divided into three types: conventional convolution layers, residual blocks and dense blocks.
Conventional convolution has been widely adopted since AlexNet won the first prize of ILSVRC in 2012. The first model using conventional convolution to solve the SR problem is SRCNN. After that, many improved networks such as FSRCNN, SCN, ESPCN and DRCN also used conventional convolution and achieved great results. The residual block is an improved version of the convolutional layer, which exhibits excellent performance in computer vision problems. Since it can enhance feature propagation in networks and alleviate the vanishing-gradient problem, many SR networks such as VDSR, LapSRN, EDSR and SRResNet import residual blocks and exhibit improved performance.
To make better use of the skip connections used in residual blocks, Huang et al. further proposed the dense block. A dense block builds more connections among layers to enlarge the information flow. Tong et al. proposed SRDenseNet using dense blocks, which boosts the performance further.
Recently, Yang et al. proposed a novel block called the clique block, where the layers in a block are constructed as a clique and are updated alternately in a loop manner. Any layer is both the input and the output of another one in the same block, so the information flow is maximized. The propagation of a clique block contains two stages. The first stage does the same thing as a dense block. The second stage distills the feature maps by using the skip connections between any two layers, including connections from subsequent layers back to earlier ones.
A suitable up-sampling module can further improve image reconstruction performance. The up-sampling modules used in modern SR networks to increase the resolution can also be primarily divided into three types: interpolation up-sampling, deconvolution up-sampling and sub-pixel convolution up-sampling.
Interpolation up-sampling was first used in SRCNN. At that time, there was no effective module that could make the output size larger than the input size, so SRCNN first applied pre-defined bicubic interpolation to the input images to get the desired size. Following SRCNN's pre-interpolation, VDSR, IRCNN, DRRN and MemNet used different extraction modules. However, this pre-processing step increases computation complexity because all feature maps are processed at the enlarged size.
Deconvolution up-sampling can be seen as multiplying each input pixel by a filter, which increases the input size when the stride is greater than one. Many modern SR networks such as FSRCNN, LapCNN, DBPN and IDN got better results by using deconvolution as the up-sampling module. However, the computation complexity of the forward and backward propagation of deconvolution is still a major concern.
Sub-pixel convolution was proposed to accelerate the up-sampling operation. Unlike previous up-sampling methods that change the height and width of the input feature maps, sub-pixel convolution implements up-sampling by increasing the number of channels. After that, a periodic shuffling operation reshapes the output feature map to the desired height and width. ESPCN, EDSR and SRMD used sub-pixel convolution to achieve good performance on benchmark datasets.
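To make the periodic shuffling concrete, here is a minimal NumPy sketch (ours, not from the cited papers) of the rearrangement that turns $r^2 C$ channels at resolution $H \times W$ into $C$ channels at resolution $rH \times rW$:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    This is the periodic shuffling used by sub-pixel convolution: the
    preceding convolution produces r*r output channels per target channel,
    and shuffling interleaves them spatially.
    """
    C_r2, H, W = x.shape
    C = C_r2 // (r * r)
    # split channels into (C, r, r), then interleave into the spatial dims
    x = x.reshape(C, r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)  # (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)
```

Each output position $(h r + i, w r + j)$ is filled from channel $i r + j$ of input position $(h, w)$, so no pixels are interpolated, only rearranged.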
These above-mentioned networks tend to produce blurry and overly-smoothed HR images, lacking some texture details. Wavelet transform (WT) has been shown to be an efficient and highly intuitive tool to represent and store images in a multi-resolution way [30, 26]. WT can describe the contextual and textural information of an image at different scales. WT for super-resolution has been applied successfully to the multi-frame SR problem [4, 16, 27].
Motivated by the remarkable properties of the clique block and WT, we propose a novel network for SR called SRCliqueNet to address the above-mentioned challenges. We design the res-clique block as the main part of the extraction module to improve the network’s performance. We also design a novel up-sampling module called clique up-sampling. It consists of four sub-nets, which are used to predict the high-resolution wavelet coefficients of the four sub-bands. Since we consider the edge feature properties of the four sub-bands, the four sub-nets can learn the coefficients of the four sub-bands jointly. For magnification factors greater than 2, we design a progressive SRCliqueNet upon image pyramids. Our proposed network achieves superior performance over the state-of-the-art methods on benchmark datasets.
2 Super-Resolution CliqueNet
In this section, we first overview the proposed SRCliqueNet architecture, then we introduce the feature embedding net (FEN) and the image reconstruction net (IRN), which are the key parts of SRCliqueNet.
2.1 Network architecture
As shown in Figure 1, our SRCliqueNet mainly consists of two sub-networks: FEN and IRN. FEN represents the LR input image as a set of feature maps. Note that FEN does not change the size ($H \times W$) of the input image, where $H$ and $W$ are the height and the width, respectively. IRN up-samples the feature maps produced by FEN and reconstructs the HR image. Here we denote the input LR image by $I_{LR} \in \mathbb{R}^{H \times W \times 3}$ and the ground truth HR image by $I_{HR} \in \mathbb{R}^{rH \times rW \times 3}$, where $r$ is the magnification factor.
2.2 Feature Embedding Net
As shown in the left part of Figure 1, FEN starts with two convolutional layers. The first convolutional layer increases the number of channels of the input so that its output can be added to the output of the clique block group via the skip connection. The clique block group will be introduced shortly. The skip connection after the first convolutional layer has been widely used in SR networks [24, 25, 14]. The output of the first convolutional layer is $F_0 \in \mathbb{R}^{H \times W \times TKG}$, where $T$ is the number of clique blocks that follow, $K$ is the number of layers in each clique block and $G$ is the growth rate of each clique block. The second convolutional layer changes the number of channels so that they can fit the input of the clique block group; its output is $F_1 \in \mathbb{R}^{H \times W \times KG}$.
The illustrations of the res-clique block and the clique block group are shown in Figure 2. We choose the clique block as our main feature extractor for the following reasons. First, a clique block’s forward propagation contains two stages: the first stage does the same thing as a dense block, while the second stage distills the features further. Second, a clique block contains more skip connections than a dense block, so information among layers can be propagated more easily. We add a residual connection to the clique block, since the input feature contains plenty of useful information for the SR problem. We call this kind of clique block the res-clique block.
Suppose a res-clique block has $K$ layers, and the input and the output of the res-clique block are denoted by $X_0$ and $X_{out}$, respectively. The weight between layer $j$ and layer $i$ is represented by $W_{ji}$. The feed-forward pass of the clique block can be mathematically described by the following equations. For stage one,
$$X_i^{(1)} = \sigma\Big(W_{0i} * X_0 + \sum_{j<i} W_{ji} * X_j^{(1)}\Big),$$
where $*$ is the convolution operation and $\sigma$ is the activation function. For stage two,
$$X_i^{(2)} = \sigma\Big(\sum_{j<i} W_{ji} * X_j^{(2)} + \sum_{j>i} W_{ji} * X_j^{(1)}\Big).$$
For the residual connection,
$$X_{out} = \mathcal{C}\big(X_1^{(2)}, X_2^{(2)}, \ldots, X_K^{(2)}\big) + X_0,$$
where $\mathcal{C}$ represents the concatenation operation.
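The two-stage propagation can be sketched as follows. This is our own minimal NumPy illustration, in which the convolutions are replaced by plain matrix products, the activation by ReLU, and the input is assumed to have $K \cdot G$ channels so the residual add is shape-compatible:

```python
import numpy as np

def res_clique_forward(x0, Win, W):
    """Two-stage clique propagation plus a residual connection (sketch).

    x0     : (D,) input feature vector, D = K * d (stands in for feature maps)
    Win[i] : (d, D) weights attaching the input to layer i+1
    W[(j,i)]: (d, d) weights from layer j to layer i, j != i, K >= 2
    """
    K = len(Win)
    relu = lambda z: np.maximum(z, 0.0)
    X1, X2 = {}, {}
    # Stage I (dense-block-like): layer i sees the input and earlier layers
    for i in range(1, K + 1):
        s = Win[i - 1] @ x0
        for j in range(1, i):
            s += W[(j, i)] @ X1[j]
        X1[i] = relu(s)
    # Stage II (clique update): earlier layers contribute refreshed
    # stage-II features, later layers contribute stage-I features
    for i in range(1, K + 1):
        s = sum(W[(j, i)] @ X2[j] for j in range(1, i))
        s = s + sum(W[(j, i)] @ X1[j] for j in range(i + 1, K + 1))
        X2[i] = relu(s)
    # Residual connection: concatenate stage-II outputs, add the input
    return np.concatenate([X2[i] for i in range(1, K + 1)]) + x0
```

The dictionary of pairwise weights makes the clique structure explicit: every ordered pair of layers has its own connection, used once per stage.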
Then we combine $T$ res-clique blocks into a clique block group. The output of a clique block group makes use of the features from all preceding res-clique blocks and can be represented as $B_t = H_t(B_{t-1})$, where $B_t$ is the output and $H_t$ is the underlying mapping of the $t$-th res-clique block. Since $F_1$, the output of the second convolutional layer, is the input of the first res-clique block, we have $B_0 = F_1$. The concatenation $B = \mathcal{C}(B_1, B_2, \ldots, B_T)$ is the output of the clique block group. Finally, the output of FEN is the summation of $B$ and the output $F_0$ of the first convolutional layer, that is, $F_{FEN} = F_0 + B$.
2.3 Image Reconstruction Net
Now we present details about IRN. As shown in the right part of Figure 1, IRN consists of two parts: a clique up-sampling module and a convolutional layer which is used to reduce the number of feature maps to reconstruct the HR image with 3 channels (RGB).
The clique up-sampling module shown in Figure 3 is the most significant part of IRN. It is motivated by the discrete wavelet transform (DWT) and the clique block. It contains four sub-nets, representing the four sub-bands denoted by LL, HL, LH and HH in the wavelet domain, respectively. Previous CNNs for wavelet-domain SR [11, 21] ignore the relationship among the four sub-bands. The LL block represents low-pass filtering of the original image at half the resolution. The output feature maps of FEN encode the essential information in the original LR image, so we first use the output feature $F_{FEN}$ to learn the LL block. This process can be written as
$$F_{LL} = f_{LL}(F_{FEN}),$$
where $f_{LL}$ denotes the learnable non-linear function of the LL block for the first step. The HL block mostly shows horizontal edges, while the LH block mainly contains vertical edges. As illustrated in the left part of Figure 4, we take an image from Set5 as an example. Both the HL and LH blocks can be learned from the LL block and the feature $F_{FEN}$, written as
$$F_{HL} = f_{HL}\big(\mathcal{C}(F_{LL}, F_{FEN})\big), \quad F_{LH} = f_{LH}\big(\mathcal{C}(F_{LL}, F_{FEN})\big),$$
where $f_{HL}$ and $f_{LH}$ denote the learnable functions that construct the HL and the LH blocks for the first step. The HH block finds edges of the original image in the diagonal direction. As also shown in the left part of Figure 4, the HH block looks similar to the LH and the HL blocks, so we suggest that using the LL, HL and LH blocks together with the output feature map of FEN can learn the HH block more easily than using the feature map alone. We formulate it as
$$F_{HH} = f_{HH}\big(\mathcal{C}(F_{LL}, F_{HL}, F_{LH}, F_{FEN})\big).$$
We name the above-mentioned operations the sub-band extraction stage. We also plot four histograms in the right part of Figure 4 to show that the sub-band extraction stage is effective. We apply the DWT to the 800 images from DIV2K, which we use as our training dataset in our experiments, and plot histograms of the DWT coefficients of the four sub-bands of these images. From Figure 4, we find that the distributions of the LH, HL and HH blocks are similar to each other, so it is reasonable to use the HL and LH blocks to learn the HH block.
After the sub-band extraction stage, each of the four sub-bands is followed by a few residual blocks. Since high-frequency coefficients may be more difficult to learn than low-frequency coefficients, we use different numbers of residual blocks for different sub-bands, denoted by $n_{LL}$, $n_{HL}$, $n_{LH}$ and $n_{HH}$, respectively. We update each sub-band by the following equation:
$$F_s \leftarrow F_s + g_s(F_s), \quad s \in \{LL, HL, LH, HH\},$$
where $g_{LL}$, $g_{HL}$, $g_{LH}$ and $g_{HH}$ represent the residual learnable functions of the four sub-bands, respectively. We name the above-mentioned operations the self residual learning stage.
After the self residual learning stage, IRN enters the sub-band refinement stage. At this stage, we use the high-frequency blocks to refine the low-frequency blocks, which is an inverse process of the sub-band extraction stage. Concretely, we use the HH block to learn the LH and the HL blocks, represented as
$$F_{LH} \leftarrow F_{LH} + h_{LH}(F_{HH}), \quad F_{HL} \leftarrow F_{HL} + h_{HL}(F_{HH}),$$
where $h_{LH}$ and $h_{HL}$ represent the learnable functions of the sub-band refinement stage for the LH and HL blocks, respectively. For the unification of representations, we leave the HH block unchanged at this stage. In a similar way, we update the LL block by the following equation:
$$F_{LL} \leftarrow F_{LL} + h_{LL}\big(\mathcal{C}(F_{HL}, F_{LH}, F_{HH})\big).$$
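The three stages can be summarized in a compact pseudo-implementation. This is our own sketch: the learnable sub-nets are passed in as callables, and concatenation is replaced by summation so the code is shape-free:

```python
import numpy as np

def clique_upsampling(F, f, g, h):
    """Three-stage joint sub-band learning (sketch).

    F       : input feature array (output of FEN)
    f, g, h : dicts of callables standing in for the learnable sub-nets of
              the extraction, self-residual and refinement stages
    """
    # Stage 1: sub-band extraction (LL first, HH last; each sub-band uses
    # the previously extracted sub-bands plus the input feature)
    LL = f['LL'](F)
    HL = f['HL'](LL + F)
    LH = f['LH'](LL + F)
    HH = f['HH'](LL + HL + LH + F)
    # Stage 2: self residual learning, one residual branch per sub-band
    LL, HL = LL + g['LL'](LL), HL + g['HL'](HL)
    LH, HH = LH + g['LH'](LH), HH + g['HH'](HH)
    # Stage 3: sub-band refinement, high frequencies refine low ones
    LH, HL = LH + h['LH'](HH), HL + h['HL'](HH)
    LL = LL + h['LL'](HL + LH + HH)
    return LL, HL, LH, HH
```

Reading the calls top to bottom reproduces the ordering of the equations above: extraction flows from low to high frequency, refinement flows back from high to low.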
Then we apply the IDWT to these four blocks. We choose the simplest wavelet, the Haar wavelet, since it can be computed easily by a deconvolution operation. The dimensions of all blocks are the same, namely $H \times W \times C_{out}$, where $C_{out}$ represents the number of feature maps produced by each sub-net, so the output of the clique up-sampling module is of size $2H \times 2W \times C_{out}$. At last, the output of the clique up-sampling module is sent to a convolutional layer, which reduces the number of channels and produces the predicted HR image $I_{SR}$. We call the up-sampling module clique up-sampling for the following reasons. First, the connection patterns of the two modules are consistent: both the clique block and clique up-sampling use dense connections among sub-bands/layers. Second, the forward propagation mechanisms of the two modules are similar, that is, both modules update the outputs of the sub-bands/layers stage by stage. Since both the extraction module and the up-sampling module relate to the clique, we call our network Super-Resolution CliqueNet (SRCliqueNet for short).
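The Haar DWT/IDWT pair reduces to fixed $2 \times 2$ filters with stride 2; the following self-contained NumPy sketch (ours) shows the forward transform, the inverse as a stride-2 "deconvolution" with fixed filters, and the perfect round trip:

```python
import numpy as np

def haar_dwt(x):
    """2-D Haar DWT of an even-sized image into four half-resolution sub-bands
    (names follow the paper's LL/HL/LH/HH convention)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    LL = (a + b + c + d) / 2  # low-pass sub-band
    HL = (a - b + c - d) / 2  # detail sub-band
    LH = (a + b - c - d) / 2  # detail sub-band
    HH = (a - b - c + d) / 2  # diagonal detail sub-band
    return LL, HL, LH, HH

def haar_idwt(LL, HL, LH, HH):
    """Inverse Haar DWT: four (H, W) sub-bands -> one (2H, 2W) image.
    Each 2x2 output cell is filled by the four fixed synthesis filters."""
    H, W = LL.shape
    y = np.empty((2 * H, 2 * W))
    y[0::2, 0::2] = (LL + HL + LH + HH) / 2
    y[0::2, 1::2] = (LL - HL + LH - HH) / 2
    y[1::2, 0::2] = (LL + HL - LH - HH) / 2
    y[1::2, 1::2] = (LL - HL - LH + HH) / 2
    return y
```

Because the transform is orthonormal, `haar_idwt(*haar_dwt(x))` recovers `x` exactly, which is why learning accurate sub-band coefficients suffices to reconstruct the HR image.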
2.4 Comparison between clique block and clique up-sampling
Although we call the block and the up-sampling module the clique block and clique up-sampling, respectively, there are many differences between these two modules. Concretely, the number of sub-bands/layers of clique up-sampling is fixed to four because of the formula of the IDWT, whereas the layer number of the clique block is not constrained. Clique up-sampling has three stages to update the output of each sub-band/layer; the clique block, by contrast, does not have a stage that updates the output of a layer by itself alone. Since we consider the edge feature properties of all sub-bands, and the HL block mostly shows horizontal edges while the LH block mainly contains vertical edges, the outputs of these two blocks seem to be “orthogonal”, so there may be no connection between the second and the third sub-bands/layers in the clique up-sampling module. At last, the outputs of these two modules are quite different. To be more specific, the output of a clique block is the concatenation of the outputs of all layers, which gives it more channels, while the output of clique up-sampling is produced from the outputs of all layers by the IDWT, which increases the resolution.
2.5 Architecture for magnification factor $2^n$
Till now, we have introduced the network architecture for magnification factor $\times 2$. In this subsection, we propose SRCliqueNet’s architecture for magnification factor $\times 2^n$, where $n$ is the total number of levels of the network. The image pyramid has been widely used in computer vision applications; LAPGAN and LapSRN used the Laplacian pyramid for SR. Motivated by these works, we import the image pyramid into our proposed network to deal with magnification factors of $2^n$. As shown in the left part of Figure 5, our model generates multiple intermediate SR predictions in one feed-forward pass through progressive reconstruction. Due to our cascaded and progressive architecture, our final loss consists of $n$ parts: $L = \sum_{l=1}^{n} L_l$. We use bicubic down-sampling to resize the ground truth HR image to $I_{HR}^{(l)}$ at level $l$. Following [25, 14], we use the mean absolute error (MAE) to measure the reconstruction performance at each level: $L_l = \big\| I_{SR}^{(l)} - I_{HR}^{(l)} \big\|_1$, where $I_{SR}^{(l)}$ is the predicted HR image at level $l$.
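The progressive loss can be sketched as below. This is our own code: a 2x average pool stands in for the bicubic down-sampling, and `preds` holds the intermediate predictions, coarsest first:

```python
import numpy as np

def mae(pred, target):
    """Per-level reconstruction loss L_l (mean absolute error)."""
    return float(np.mean(np.abs(pred - target)))

def halve(x):
    """2x down-sampling by average pooling (stand-in for bicubic)."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4

def pyramid_loss(preds, hr, downsample=halve):
    """Total loss L = sum over the n levels of the per-level MAE.

    preds[l] : predicted HR image at level l+1 (coarsest first)
    hr       : full-resolution ground truth, resized down repeatedly
               to build the per-level targets
    """
    targets = [hr]
    for _ in range(len(preds) - 1):
        targets.append(downsample(targets[-1]))
    targets = targets[::-1]  # coarsest first, matching preds
    return sum(mae(p, t) for p, t in zip(preds, targets))
```

Each intermediate prediction is supervised against its own down-sampled ground truth, so every level of the pyramid receives a direct training signal.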
3 Experiments
3.1 Implementation and training details
In our proposed SRCliqueNet, we set $3 \times 3$ as the kernel size of most convolutional layers and pad zeros on each side of the input to keep the size fixed. We also use a few $1 \times 1$ convolutional layers for feature pooling and dimension reduction. The details of our SRCliqueNet’s settings are presented in Table 1. In Table 1, $T$ represents the number of clique blocks, and $K$ and $G$ represent the number of layers and the growth rate in each clique block, respectively. The numbers of input and output channels of the clique up-sampling module are denoted by $C_{in}$ and $C_{out}$, respectively. $n_{LL}$, $n_{HL}$, $n_{LH}$ and $n_{HH}$ represent the numbers of residual blocks in the four sub-bands. Unlike most CNNs for computer vision problems, we avoid dropout and instance normalization, which are not suitable for the SR problem because they reduce the flexibility of the features.
Datasets and training details.
We trained all networks using images from DIV2K and Flickr. For testing, we used four standard benchmark datasets: Set5, Set14, BSDS100 and Urban100. Following prior settings, we used a batch size of 16 with a fixed patch size for the LR images, while the size of the HR patches changes according to the magnification factor. We randomly augmented the patches by flipping horizontally or vertically and by rotation. We chose parametric rectified linear units (PReLUs) as the activation function for our networks. The base learning rate was the same for all layers and was decreased by a factor of 2 every 200 epochs. The total number of training epochs was set to 500. We used Adam as our optimizer and conducted all experiments using PyTorch.
Magnitude of sub-bands.
As mentioned above, our clique up-sampling module has four sub-nets, and every sub-net is connected with the other sub-nets. Since the feature maps of one sub-band are learned from those of other sub-bands, the magnitude of each sub-band block should be similar to the others’ in order to make full use of each sub-net. As shown in the histograms of the DWT coefficients of the original images (top right part of Figure 4), the coefficient magnitude of the LL sub-band is quite different from that of the other three, which may make the training process difficult. So we want to transform the original images to reduce the difference among the magnitudes of the four sub-bands. We propose four modes: (1) the original pixel range from 0 to 255; (2) each pixel is divided by 255; (3) each pixel is divided by 255 and then the per-channel mean of the training dataset is subtracted; (4) as in mode 3, and additionally, after the DWT, the coefficients of the LL block are divided by a scalar of around 4 to make the magnitude of the LL sub-band more similar to the other sub-bands’. The resulting histograms are shown in the bottom right part of Figure 4. Under the same experimental setting, we pre-process the input images with the four modes. The performance of the four modes is shown in the right part of Figure 5. From the figure, we find that mode 4 achieves the best performance in terms of loss value, so in the subsequent experiments we pre-process our input in mode 4.
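Mode 4 can be written down directly. The function below is our sketch: the LL scalar of around 4 and the Haar DWT follow the description above, and `channel_mean` stands for the per-channel mean of the training set:

```python
import numpy as np

def preprocess_mode4(img, channel_mean, ll_scale=4.0):
    """Mode-4 pre-processing sketch.

    Scale pixels to [0, 1], subtract the per-channel training-set mean,
    then (after the Haar DWT) divide the LL coefficients by ~4 so that
    the LL magnitude matches the detail sub-bands.
    img: (H, W, 3) array in [0, 255]; returns per-channel sub-band tuples.
    """
    x = img / 255.0 - channel_mean  # modes 2 and 3
    bands = []
    for ch in range(x.shape[2]):
        a, b = x[0::2, 0::2, ch], x[0::2, 1::2, ch]
        c, d = x[1::2, 0::2, ch], x[1::2, 1::2, ch]
        LL = (a + b + c + d) / 2 / ll_scale  # mode 4: rescale LL only
        HL = (a - b + c - d) / 2
        LH = (a + b - c - d) / 2
        HH = (a - b - c + d) / 2
        bands.append((LL, HL, LH, HH))
    return bands
```

Rescaling only the LL sub-band leaves the detail statistics untouched while pulling the low-pass coefficients into the same numeric range.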
3.2 Investigation of FEN and IRN
To verify the power of the res-clique block and the clique up-sampling module, we designed two contrast experiments. In both experiments, we used a small version of SRCliqueNet which contains eight blocks, each block having four layers and each layer producing 32 feature maps. In the first experiment, we fixed the clique up-sampling module in IRN and used different blocks, i.e., the residual block (RB), the dense block (DB) and the res-clique block (CB), in FEN. In the second experiment, we fixed the clique blocks in FEN and changed the up-sampling module, i.e., deconvolution (DC), sub-pixel convolution (SC), clique up-sampling without joint learning and clique up-sampling (CU). We recorded the best performance in terms of PSNR/SSIM on Set5 with a fixed magnification factor during 400 epochs. The performances of all settings are listed in Tables 2 and 3.
We also visualize the feature maps of the four sub-bands in the two stages. Since the number of channels in the two stages is larger than 3, for better visualization we take the mean of the feature maps over the channel dimension, which can be described by $\bar{F} = \frac{1}{C}\sum_{c=1}^{C} F_{:,:,c}$. The channel-wise averaged feature maps are shown at the bottom of Figure 3. From Figure 3, we find that the feature maps of the input and of stage one do not look like coefficients in the wavelet domain. However, the feature maps of stage two are close to the coefficients of the DWT and can reconstruct clear, high-resolution images after the IDWT. The visualization results demonstrate that it is necessary to add the sub-band refinement stage to the clique up-sampling module.
3.3 Comparison with other wavelet CNN methods
As mentioned above, some existing methods such as Wavelet-SRNet and CNNWSR also used wavelets and CNNs for image super-resolution. We first give a detailed comparison between Wavelet-SRNet and SRCliqueNet. There are three main differences between these two models. (1) Wavelet-SRNet learns the wavelet coefficients independently and directly, whereas our SRCliqueNet considers the relationship among the four sub-bands in the frequency domain. Moreover, our net applies three stages to learn the coefficients of all sub-bands jointly, i.e., the sub-band extraction stage, the self residual learning stage and the sub-band refinement stage. (2) Wavelet-SRNet uses full wavelet packet decomposition to reconstruct SR images with large magnification factors, while SRCliqueNet reconstructs SR images with large magnification factors progressively by an image pyramid. We use bicubic down-sampling to resize the ground truth HR image at each level to assist learning, so our net can take full advantage of the supervisory information from the HR images. (3) SRCliqueNet is based on clique blocks, which can propagate information among layers more easily than residual blocks. We also conduct an experiment to compare the two models on the Helen test dataset with a fixed magnification factor. Our network is trained with images from the Helen training dataset, while Wavelet-SRNet is trained with images from both the Helen and CelebA datasets. The results are listed in Table 4 below, and we can find that our SRCliqueNet outperforms Wavelet-SRNet.
In the following, we give a detailed comparison between CNNWSR and SRCliqueNet. In addition to the above differences between Wavelet-SRNet and SRCliqueNet, CNNWSR is a simpler network with only three layers. CNNWSR assumes that the input LR image is an approximation of the LL sub-band, so it just tries to learn the other three sub-bands from the LR image, which is inaccurate. Hence, it is no surprise that our model clearly outperforms CNNWSR in the following quantitative experiment. In their paper, the authors show four reconstructed images (monarch, zebra, baby and bird) chosen from the Set5 and Set14 datasets. The PSNR comparison on these images is shown in Table 5 below.
3.4 Comparison with the state of the art
To validate the effectiveness of the proposed network, we performed several experiments and visualizations. We compared our proposed network with 8 state-of-the-art SR algorithms: DRCN, LapSRN, DRRN, MemNet, SRMDNF, IDN, D-DBPN and EDSR. We carried out extensive experiments using the four benchmark datasets mentioned above and evaluated the reconstructed images with PSNR and SSIM. Table 6 shows quantitative comparisons at several magnification factors. Our SRCliqueNet performs better than the existing methods on almost all datasets. In order to maximize the potential performance of our SRCliqueNet, we adopt a self-ensemble strategy similar to prior work; we mark the self-ensemble version of our model as SRCliqueNet+ in Table 6.
In Figure 6, we show visual comparisons on Set14, BSDS100 and Urban100 with a fixed magnification factor. Due to limited space, we show only four image results here; for more SR results, please refer to our supplementary materials. As shown in Figure 6, our method accurately reconstructs clearer textural details of the English letters and more textural stripes on the zebras. For structured architectural images, our method tends to produce more legible reconstructed HR images. These comparisons suggest that inferring the high-frequency details directly in the wavelet domain is effective. Our method also obtains better quantitative results in terms of PSNR and SSIM than the other state-of-the-art methods.
In this paper, we propose a novel CNN called SRCliqueNet for SISR. We design a new up-sampling module called clique up-sampling, which uses the IDWT to change the size of the feature maps and jointly learns the coefficients of all sub-bands depending on their edge feature properties. We design a res-clique block to extract features for SR. We verify the necessity of both modules on benchmark datasets. We also extend our SRCliqueNet with a progressive up-sampling module to deal with larger magnification factors. Extensive evaluations on benchmark datasets demonstrate that the proposed network performs better than state-of-the-art SR algorithms in terms of quantitative metrics. In terms of visual quality, our algorithm also reconstructs clearer and more textural details than the other state-of-the-art methods.
This research is partially supported by National Basic Research Program of China (973 Program) (grant nos. 2015CB352502 and 2015CB352303), National Natural Science Foundation (NSF) of China (grant nos. 61625301, 61731018 and 61671027), Qualcomm, and Microsoft Research Asia.
- Adelson  E. H. Adelson. Pyramid methods in image processing. RCA Engineer, 29, 1984.
- Arbelaez et al.  Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. IEEE TPAMI, (5):898–916, 2011.
- Bevilacqua et al.  Marco Bevilacqua, Aline Roumy, Christine Guillemot, and Marie-Line Alberi Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In BMVC, 2012.
- Chan et al.  Raymond H Chan, Tony F Chan, Lixin Shen, and Zuowei Shen. Wavelet algorithms for high-resolution image reconstruction. SIAM Journal on Scientific Computing, (4):1408–1432, 2003.
- Denton et al.  Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, pages 1486–1494, 2015.
- Dong et al.  Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. In ECCV, pages 184–199, 2014.
- Dong et al.  Chao Dong, Chen Change Loy, and Xiaoou Tang. Accelerating the super-resolution convolutional neural network. In ECCV, pages 391–407, 2016.
- Haris et al.  Muhammad Haris, Greg Shakhnarovich, and Norimichi Ukita. Deep back-projection networks for super-resolution. In CVPR, 2018.
- He et al.  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
- Huang et al. [2017a] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In CVPR, 2017a.
- Huang et al. [2017b] Huaibo Huang, Ran He, Zhenan Sun, and Tieniu Tan. Wavelet-SRNet: A wavelet-based CNN for multi-scale face super resolution. In ICCV, pages 1689–1697, 2017b.
- Huang et al.  Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In CVPR, pages 5197–5206, 2015.
- Huang and Belongie  Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In CVPR, pages 1501–1510, 2017.
- Hui et al.  Zheng Hui, Xiumei Wang, and Xinbo Gao. Fast and accurate single image super-resolution via information distillation network. In CVPR, 2018.
- Ioffe and Szegedy  Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448–456, 2015.
- Ji and Fermuller  Hui Ji and Cornelia Fermuller. Robust wavelet-based super-resolution reconstruction: theory and algorithm. IEEE TPAMI, (4):649–660, 2009.
- Kim et al. [2016a] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In CVPR, pages 1646–1654, 2016a.
- Kim et al. [2016b] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Deeply-recursive convolutional network for image super-resolution. In CVPR, pages 1637–1645, 2016b.
- Kingma and Ba  Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
- Krizhevsky et al.  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
- Kumar et al.  Neeraj Kumar, Ruchika Verma, and Amit Sethi. Convolutional neural networks for wavelet domain super resolution. Pattern Recognition Letters, pages 65–71, 2017.
- Lai et al.  Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep Laplacian pyramid networks for fast and accurate super-resolution. In CVPR, pages 624–632, 2017.
- LeCun et al.  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, (11):2278–2324, 1998.
- Ledig et al.  Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, pages 4681–4690, 2017.
- Lim et al.  Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In CVPR Workshops, 2017.
- Mallat  Stephane Mallat. Wavelets for a vision. Proceedings of the IEEE, (4):604–614, 1996.
- Robinson et al.  M Dirk Robinson, Cynthia A Toth, Joseph Y Lo, and Sina Farsiu. Efficient Fourier-wavelet super-resolution. IEEE TIP, (10):2669–2681, 2010.
- Shi et al.  Wenzhe Shi, Jose Caballero, Ferenc Huszar, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, pages 1874–1883, 2016.
- Srivastava et al.  Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, (1):1929–1958, 2014.
- Stanković and Falkowski  Radomir S Stanković and Bogdan J Falkowski. The Haar wavelet transform: its status and achievements. Computers & Electrical Engineering, (1):25–44, 2003.
- Tai et al. [2017a] Ying Tai, Jian Yang, and Xiaoming Liu. Image super-resolution via deep recursive residual network. In CVPR, 2017a.
- Tai et al. [2017b] Ying Tai, Jian Yang, Xiaoming Liu, and Chunyan Xu. MemNet: A persistent memory network for image restoration. In CVPR, pages 4539–4547, 2017b.
- Timofte et al.  Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. In CVPR, pages 1865–1873, 2016.
- Timofte et al.  Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, Lei Zhang, Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee, et al. NTIRE 2017 challenge on single image super-resolution: Methods and results. In CVPR Workshops, pages 1110–1121. IEEE, 2017.
- Tong et al.  Tong Tong, Gen Li, Xiejie Liu, and Qinquan Gao. Image super-resolution using dense skip connections. In ICCV, pages 4809–4817, 2017.
- Wang et al.  Zhaowen Wang, Ding Liu, Jianchao Yang, Wei Han, and Thomas Huang. Deep networks for image super-resolution with sparse prior. In ICCV, pages 370–378, 2016.
- Wang et al.  Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE TIP, (4):600–612, 2004.
- Yang et al.  Yibo Yang, Zhisheng Zhong, Tiancheng Shen, and Zhouchen Lin. Convolutional neural networks with alternately updated clique. In CVPR, 2018.
- Zeiler and Fergus  Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, pages 818–833, 2014.
- Zeiler et al.  Matthew D Zeiler, Graham W Taylor, and Rob Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In ICCV, pages 2018–2025, 2011.
- Zeyde et al.  Roman Zeyde, Michael Elad, and Matan Protter. On single image scale-up using sparse-representations. In International conference on curves and surfaces, pages 711–730. Springer, 2010.
- Zhang et al.  Kai Zhang, Wangmeng Zuo, and Lei Zhang. Learning a single convolutional super-resolution network for multiple degradations. In CVPR, 2018.