The goal of single image super-resolution (SISR) is to recover the sharp high-resolution (HR) counterpart of a low-resolution (LR) observation. Image SR is an ill-posed problem, since there are multiple HR solutions for a given LR input. To solve this inverse problem, many convolutional neural networks (CNNs) [6, 37, 17, 23, 42, 13, 5] have been proposed to capture useful priors by learning mappings between LR and HR images. While impressive performance has been achieved, learning solely from external training data still falls short in recovering detailed textures for specific images, especially when the up-scaling factor is large.
Apart from exploiting external paired data, internal image-specific information has also been widely studied in image restoration. Classical non-local methods [2, 4, 25, 10] have shown the value of capturing correlations among non-local self-similar patches for improving restoration quality. However, convolutional operations are not able to capture such patterns due to the locality of convolutional kernels. Though the receptive fields are large in deep networks, some long-range dependencies still cannot be well maintained. Inspired by the classical non-local means method [2], non-local neural networks [36] were proposed to capture long-range dependencies for video classification, and were thereafter introduced to image restoration tasks [24, 43].
These methods, in general, perform self-attention weighting over a full connection among positions in the features. Besides non-local neural networks, the neural nearest neighbors network [29] and the graph-convolutional denoiser network [35] have been proposed to aggregate nearest neighboring patches for image restoration. However, all these methods only exploit correlations of recurrent patches within the same scale, without harvesting any high-resolution information. Different from image denoising, aggregating multiple similar patches at the same scale (with only sub-pixel misalignments) hardly improves performance for SR.
The proposed cross-scale internal graph neural network (IGNN) is inspired by traditional self-example based SR methods [7, 3, 14]. Our IGNN builds on the cross-scale patch recurrence property, verified statistically in [45, 9, 28], that patches in a single natural image tend to recur many times across scales. An illustrative example is shown in Figure 1 (a). Given a query patch (yellow square) in the LR image I_L, many similar patches (solid-marked green squares) can be found in the downsampled image I_L↓s. Thus the corresponding HR patches (dashed-marked green squares) in the original LR image can also be obtained. Such cross-scale patches provide an indication of what the (unknown) HR patches of the query patch might look like. The cross-scale patch recurrence has previously been utilized as example-based SR constraints to estimate an HR image [9, 39] or an SR kernel [28].
In this paper, we model these internal correlations between cross-scale similar patches as a graph, where every patch is a vertex and each edge is a similarity-weighted connection between two vertices from two different scales. Based on this graph structure, we then present our IGNN to process this irregular graph data and exploit the cross-scale recurrence property effectively. Instead of using this property as constraints [9, 28], IGNN intrinsically aggregates HR patches using the proposed graph module, which includes two operations: graph construction and patch aggregation. More specifically, as shown in Figure 1 (b)(c), we first dynamically construct a cross-scale graph G by searching the k-nearest neighboring patches in the downsampled image I_L↓s for each query patch in the LR image I_L. After mapping the regions of the neighbors from the I_L↓s scale back to the I_L scale, the constructed cross-scale graph provides k LR/HR patch pairs for each query patch. In G, the vertices are the patches in the LR image together with their HR neighboring patches, and the edges are correlations of these matched LR/HR patches. Inspired by Edge-Conditioned Convolution [30], we formulate an edge-conditioned patch aggregation operation based on the graph G. The operation aggregates HR patches conditioned on edge labels (the similarity of two matched patches). Different from previous non-local methods that explore and aggregate neighboring patches at the same scale, we search for similar patches at the downsampled LR scale but aggregate HR patches. This allows our network to perform more efficiently and effectively for SISR.
The proposed IGNN obtains image-specific LR/HR patch correspondences as a helpful complement to the external information learned from a training dataset. Instead of learning an LR-to-HR mapping only from external data as other SR networks do, IGNN makes full use of the most likely HR counterparts found in the LR image itself to recover more detailed textures. In this way, the ill-posed nature of SR is alleviated. We thoroughly analyze and discuss the proposed graph module via extensive ablation studies. The proposed IGNN performs favorably against state-of-the-art CNN-based SR baselines and existing non-local neural networks, demonstrating the usefulness of cross-scale graph convolution for image super-resolution.
In this section, we first briefly review the general formulation of previous non-local methods. We then introduce the proposed cross-scale graph aggregation module (GraphAgg), based on graph message aggregation methods [8, 11, 19, 41, 30]. Built on the GraphAgg module, we finally present our cross-scale internal graph neural network (IGNN).
2.1 Background of Non-local Methods for Image Restoration
The non-local aggregation strategy has been widely applied in image restoration. Under the assumption that similar patches frequently recur in a natural image, many classical methods, e.g., non-local means [2] and BM3D [4], aggregate similar patches for image denoising. With the development of deep neural networks, non-local neural networks [36, 24, 43] and several k-nearest neighbor based networks [21, 29, 35] have been proposed to exploit this non-local self-similarity for image restoration. For these non-local methods that aggregate similar patches, the aggregation process can be generally formulated as:
Y_i = (1/δ_i(X)) · Σ_{j∈S_i} C(X_i, X_j) G(X_j),        (1)

where X_i and Y_i are the input and output feature patches (or elements) at the i-th location (the aggregation center), and X_i is also the query item in Eq. (1). X_j is the j-th neighbor in the neighboring feature patch set S_i for the i-th location. G transforms the input to another feature space. C(X_i, X_j) computes an aggregation weight for each transformed neighbor: the more similar a patch X_j is to the query X_i, the larger its weight should be. The output is finally normalized by the factor δ_i(X) = Σ_{j∈S_i} C(X_i, X_j).
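As a concrete illustration, the aggregation of Eq. (1) can be sketched in a few lines of NumPy. This is our own minimal sketch, not the paper's implementation: the embedded-Gaussian weight exp(x_i · x_j) stands in for C, and a single matrix stands in for the transform G.

```python
import numpy as np

def nonlocal_aggregate(X, neighbors, G):
    """Generic non-local aggregation (Eq. 1), a minimal sketch.

    X         : (N, C) array of N flattened feature patches.
    neighbors : dict mapping query index i -> list of neighbor indices S_i.
    G         : (C, C) feature-space transform applied to each neighbor.
    Weights C(X_i, X_j) use the embedded-Gaussian form exp(x_i . x_j),
    normalized by delta_i = sum_j C(X_i, X_j).
    """
    Y = np.zeros_like(X)
    for i, S_i in neighbors.items():
        w = np.array([np.exp(X[i] @ X[j]) for j in S_i])  # C(X_i, X_j)
        w = w / w.sum()                                    # 1 / delta_i(X)
        Y[i] = sum(wj * (G @ X[j]) for wj, j in zip(w, S_i))
    return Y
```

With two identical neighbors and G set to the identity, the output at the query location is simply their average, which matches the normalized weighting in Eq. (1).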
The above aggregation can be treated as a graph neural network (GNN) if we regard the feature patches and weighted connections as vertices and edges, respectively. Non-local neural networks [36, 24, 43] in effect model a fully-connected self-similarity graph: they estimate aggregation weights between the query item and all spatially nearby patches in a window (or within the whole feature map). To reduce the memory and computational costs introduced by such dense connections, some k-nearest neighbor based networks, e.g., GCDN [35] and NNet [29], only consider the k most similar feature patches for aggregation and treat them as the neighbors in S_i for every query X_i. For all the above non-local methods, the aggregated neighboring patches are at the same scale as the query and no HR information is incorporated, leading to limited performance improvement for SISR. In [9, 45, 28], Irani et al. observe that patch recurrence also exists across different scales. They exploit these cross-scale recurrent LR/HR pairs as example-based constraints to recover HR images [9, 39] or to estimate SR kernels [28] from the LR images.
2.2 Cross-Scale Graph Aggregation Module
For the aforementioned methods [2, 4, 24, 35, 29], the aggregated feature patches have the same patch size as the query. Even though this works well for image denoising, it fails to incorporate high-resolution information and provides only limited improvement for SR. Based on the patch recurrence property [9, 45] that similar patches recur across different scales of a natural image, we propose a cross-scale internal graph neural network (IGNN) for SISR. An example of patch aggregation in the image domain is shown in Figure 1. For each query patch (yellow square) in I_L, we search for the k most similar patches (solid-marked squares) in the downsampled image I_L↓s. We then aggregate their corresponding HR patches (dashed-marked squares) in I_L.
The connections between cross-scale patches can be naturally modelled as a graph, where every patch is a vertex and each edge is a similarity-weighted connection between two vertices from two different scales. To exploit the information of HR patches for SR, we propose a cross-scale graph aggregation module (GraphAgg) that aggregates HR patches in the feature domain. As shown in Figure 2, GraphAgg includes two operations: Graph Construction and Patch Aggregation.
Graph Construction: We first downsample the input LR image I_L by a factor of s using the widely used bicubic operation. The downsampled image is denoted as I_L↓s, where the downsampling ratio s equals the desired SR up-scaling factor. Thus the found neighboring feature patches in the graph are the same size as the desired HR feature patches.
To obtain the neighboring feature patches, we first extract embedded features E_L and E_L↓s using the first three layers of VGG19 [31] from I_L and I_L↓s, respectively. Following the notion of block matching in classical non-local methods [2, 4, 25, 10], for each query feature patch in E_L, we find its k nearest neighboring patches in E_L↓s according to the Euclidean distance between the query feature patch and the candidates. Then, for each neighbor found in E_L↓s, we obtain the corresponding HR feature patch in E_L. We mark this process with a dashed red line in Figure 2, denoted as Vertex Mapping.
Consequently, a cross-scale k-nearest neighbor graph G = (V, E) is constructed. V is the patch set (vertices in the graph), including an LR patch set V_L and an HR neighboring patch set V_H, where the size of V_L equals the number of LR patches in E_L. E is the correlation set (edges in the graph) with size |V_L| × k, which contains k correlations for each LR patch in V_L. The two vertices of each edge in this cross-scale graph are an LR and an HR feature patch, respectively. To measure the similarity between the query and its j-th neighbor, we define the edge label D_{i,j} as the difference between the query feature patch and the neighboring patch. It will be used to estimate aggregation weights in the following Patch Aggregation operation.
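The Graph Construction step above can be sketched as a brute-force nearest-neighbor search followed by Vertex Mapping. This is an illustrative single-channel sketch under our own naming (`build_cross_scale_graph`, coordinates as dictionary keys); the actual module operates on multi-channel VGG feature maps and restricts the search to a window.

```python
import numpy as np

def build_cross_scale_graph(E_L, E_Ls, p, k, s):
    """Construct the cross-scale k-NN graph (illustrative sketch).

    E_L  : (H, W) feature map of the LR image I_L.
    E_Ls : (H//s, W//s) feature map of the s-times downsampled image.
    p    : patch size at the downsampled scale.
    Returns, per query position in E_L, the k neighbors as
    (HR top-left corner in I_L coordinates via Vertex Mapping,
     edge label D = query_patch - neighbor_patch).
    """
    Hs, Ws = E_Ls.shape
    # All candidate patches at the downsampled scale.
    cands = [(y, x, E_Ls[y:y + p, x:x + p].ravel())
             for y in range(Hs - p + 1) for x in range(Ws - p + 1)]
    H, W = E_L.shape
    graph = {}
    for y in range(0, H - p + 1, p):
        for x in range(0, W - p + 1, p):
            q = E_L[y:y + p, x:x + p].ravel()
            d2 = [np.sum((q - c[2]) ** 2) for c in cands]  # Euclidean dist.
            nn = np.argsort(d2)[:k]                        # k nearest
            graph[(y, x)] = [
                # Vertex Mapping: (u, v) in E_Ls -> (s*u, s*v) in I_L,
                # where the mapped HR patch has size s*p.
                ((cands[j][0] * s, cands[j][1] * s), q - cands[j][2])
                for j in nn
            ]
    return graph
```

Each query thus receives k cross-scale LR/HR pairs, with the edge label retained for the subsequent edge-conditioned aggregation.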
We search for similar patches in I_L↓s rather than I_L, hence our search space is s² times smaller than that of previous non-local methods. Unlike the fully-connected feature graph in non-local neural networks [36], we only select the k nearest HR neighbors for aggregation, which further leads to a more efficient network. Following previous non-local methods [2, 4, 24], we also restrict the search to a window in E_L↓s centered at the position of the query patch at the downsampled scale. As verified statistically in [45, 9, 28], there are abundant cross-scale recurring patches in a single image. Our experiments show that searching for HR patches within a window region is sufficient for the network to achieve the desired performance.
Patch Aggregation: Inspired by Edge-Conditioned Convolution (ECC) [30], we aggregate the HR neighbors in graph G with weights conditioned on the edge labels D_{i,j}. Our Patch Aggregation reformulates the general non-local aggregation of Eq. (1) as:
Y_i = (1/δ_i) · Σ_{j∈S_i} exp(ECN(D_{i,j})) X↑_{i,j},        (2)

where X↑_{i,j} is the j-th neighboring HR feature patch from the GraphAgg module input F, and Y_i is the output HR feature patch at the query location. The patch2img operator is then used to assemble the output feature patches into the output feature F↑. We use an adaptive Edge-Conditioned sub-network (ECN) to estimate the aggregation weight for each neighbor according to D_{i,j}, the feature difference between the query patch and the neighboring patch in the embedded feature space. exp(·) denotes the exponential function and δ_i = Σ_{j∈S_i} exp(ECN(D_{i,j})) is the normalization factor. Therefore, Eq. (2) defines an adaptive edge-conditioned aggregation realized by the sub-network ECN. By exploiting the edge labels D_{i,j}, the proposed GraphAgg aggregates HR feature patches in a robust and flexible manner.
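A minimal sketch of the edge-conditioned Patch Aggregation of Eq. (2): here a tiny two-layer MLP stands in for the paper's three-convolution ECN, and patches are flattened vectors, so this is our own illustration rather than the actual module.

```python
import numpy as np

def ecn(D, W1, W2):
    """Tiny edge-conditioned scorer: edge label D -> scalar score.
    A hypothetical 2-layer ReLU MLP standing in for the paper's
    3-conv-layer ECN sub-network."""
    return float(W2 @ np.maximum(W1 @ D, 0.0))

def patch_aggregate(hr_neighbors, labels, W1, W2):
    """Eq. (2): weight each HR neighbor by exp(ECN(D)) and normalize."""
    scores = np.array([np.exp(ecn(D, W1, W2)) for D in labels])
    w = scores / scores.sum()          # 1 / delta_i normalization
    return sum(wj * Xj for wj, Xj in zip(w, hr_neighbors))
```

When two neighbors carry identical edge labels they receive equal weights, and the aggregation reduces to their average, mirroring the normalized form of Eq. (2).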
To further utilize F↑, we use a small Downsampled-Embedding sub-network (DEN) to embed it to a feature with the same resolution as F, and then concatenate this embedding with F to obtain F'. F' is used in the subsequent layers of the network. Note that the two sub-networks in Patch Aggregation, ECN and DEN, are both very small networks, each containing only three convolutional layers. Please see Figure 2 for more details.
Adaptive Patch Normalization: We observe that the obtained HR neighboring patches may exhibit some low-frequency discrepancy with the query patch, e.g., in color or brightness. Besides the adaptive weighting by edge-conditioned aggregation, we propose Adaptive Patch Normalization (AdaPN), inspired by Adaptive Instance Normalization (AdaIN) [15] for image style transfer, to align the neighboring patches with the query one. Let X_i^c and X_{i,j}^c denote the c-th channel of the features of the query patch and of the j-th HR neighboring patch, respectively. The j-th neighboring patch normalized by AdaPN is formulated as:
AdaPN(X_{i,j}^c | X_i^c) = σ(X_i^c) · (X_{i,j}^c − μ(X_{i,j}^c)) / σ(X_{i,j}^c) + μ(X_i^c),

where μ(·) and σ(·) are the mean and standard deviation. By aligning the mean and variance of each neighboring patch feature with those of the query patch, AdaPN transfers the low-frequency information of the query to the neighbors while keeping their high-frequency texture information unchanged. By eliminating the discrepancy between the query patch and the neighboring patches, the proposed AdaPN benefits the subsequent feature aggregation.
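AdaPN can be sketched per channel as follows, mirroring the AdaIN-style formula above; the small `eps` guarding against division by zero is our own addition.

```python
import numpy as np

def adapn(neighbor, query, eps=1e-5):
    """Adaptive Patch Normalization (sketch): align each channel of the
    HR neighboring patch to the query patch's mean/std, per channel.
    Shapes: neighbor (C, h, w), query (C, h', w'); spatial sizes may
    differ since the neighbor is the larger HR patch."""
    out = np.empty_like(neighbor)
    for c in range(neighbor.shape[0]):
        n, q = neighbor[c], query[c]
        # Whiten the neighbor, then re-color with the query's statistics.
        out[c] = q.std() * (n - n.mean()) / (n.std() + eps) + q.mean()
    return out
```

After normalization, each channel of the neighbor carries the query's low-frequency statistics (mean and variance) while its spatial texture pattern is preserved.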
2.3 Cross-Scale Internal Graph Neural Network
As shown in Figure 2, we build IGNN based on GraphAgg. After the GraphAgg module, a final HR feature F↑ is obtained. With a skip connection across scales, the rich HR information in the aggregated feature F↑ is passed directly from the middle to a late stage of the network. This mechanism helps the network generate outputs with more details. Besides, the enriched intermediate feature F' is obtained by concatenating the input feature F with the downsampled embedding of F↑ produced by the sub-network DEN. It is then fed into the subsequent layers of the network, enabling the network to explore more cross-scale internal information.
Compared to previous non-local networks [21, 29, 24, 43] for image restoration, which only exploit self-similar patches at the same LR scale, the proposed IGNN exploits internal recurring patches across different scales. Benefiting from the GraphAgg module, IGNN obtains internal image-specific LR/HR feature patches as effective HR complements to the external information learned from a training dataset. Instead of learning an LR-to-HR mapping only from external data as other CNN-based SR networks do, IGNN takes advantage of the most likely HR counterparts to recover more detailed textures. By mining LR/HR exemplars, the ill-posed nature of SR is mitigated in IGNN.
To show the effectiveness of our GraphAgg module, we choose the widely used EDSR [23] as our backbone network, which contains 32 residual blocks. The proposed GraphAgg module is used only once in IGNN; it is inserted after the 16th residual block.
In Graph Construction, we use the first three layers of VGG19 [31] with fixed pre-trained parameters to embed the images I_L and I_L↓s into E_L and E_L↓s, respectively. In Patch Aggregation, both the adaptive Edge-Conditioned sub-network (ECN) and the Downsampled-Embedding sub-network (DEN) are small networks with three convolutional layers. More detailed structures are provided in the supplementary material.
Datasets and Evaluation Metrics: Following [23, 12, 44, 42, 5], we use 800 high-quality (2K resolution) images from the DIV2K dataset [33] as the training set. We evaluate our models on five standard benchmarks: Set5 [1], Set14 [40], BSD100 [26], Urban100 [14] and Manga109 [27], at three upscaling factors: ×2, ×3 and ×4. The estimated high-resolution images are evaluated by PSNR and SSIM [38] on the Y channel (i.e., luminance) of the transformed YCbCr space.
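The evaluation protocol (PSNR on the Y channel of YCbCr) can be sketched as below, using the standard BT.601 luminance transform for RGB images in the [0, 255] range; SSIM is omitted for brevity, and this is a generic sketch rather than the paper's exact evaluation script.

```python
import numpy as np

def rgb_to_y(img):
    """Luminance (Y) channel of the ITU-R BT.601 YCbCr transform,
    for RGB images in the [0, 255] range."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr_y(sr, hr):
    """PSNR between the Y channels of an SR estimate and its HR ground
    truth (both HxWx3, values in [0, 255])."""
    mse = np.mean((rgb_to_y(sr.astype(np.float64))
                   - rgb_to_y(hr.astype(np.float64))) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```

Identical images yield infinite PSNR; any pixel-wise error lowers the score on a logarithmic dB scale.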
Training Settings: We crop HR patches from the DIV2K dataset [33] for training, and downsample them by bicubic interpolation to obtain the LR patches. For all downsampling scales in our experiments, we fix the size of the LR patches. All training patches are augmented by random horizontal flipping and rotations of 90°, 180° and 270°. We set the minibatch size to 4 and train our model using the ADAM optimizer [18]. The initial learning rate is halved at regular iteration intervals until training terminates. The network is trained with an L1 norm loss. IGNN is implemented in the PyTorch framework on an NVIDIA Tesla V100 GPU.
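The paired augmentation described in the training settings (random horizontal flips plus rotations by multiples of 90°) can be sketched as follows; applying identical transforms to the LR patch and its HR counterpart keeps the pair aligned. The function name and interface are our own.

```python
import numpy as np

def augment(lr, hr, rng):
    """Paired random augmentation (sketch): horizontal flip plus rotation
    by a multiple of 90 degrees, applied identically to the LR patch and
    its HR counterpart so their spatial correspondence is preserved."""
    if rng.random() < 0.5:
        lr, hr = lr[:, ::-1], hr[:, ::-1]   # horizontal flip
    k = rng.integers(0, 4)                   # 0, 90, 180, or 270 degrees
    return np.rot90(lr, k), np.rot90(hr, k)
```

Because every transform is a permutation of pixels, the augmented patches contain exactly the same values as the originals, only rearranged.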
In the GraphAgg module, we set the number of neighbors k to 5. The search window size is 30 within the s-times downsampled LR features (i.e., E_L↓s). Note that GraphAgg is a plug-in module, and the backbone of our network is based on EDSR [23]. We use the pretrained backbone model to initialize IGNN in order to improve training stability and save training time.
3.1 Comparisons with State-of-the-Art Methods
We compare our proposed method with 11 state-of-the-art methods: VDSR [16], LapSRN [20], MemNet [32], EDSR [23], DBPN [12], RDN [44], NLRN [24], RNAN [43], SRFBN [22], OISR [13], and SAN [5]. Following [23, 34, 43, 5], we also adopt the self-ensemble strategy to further improve IGNN and denote the self-ensembled model as IGNN+.
As shown in Table 1, the proposed IGNN outperforms existing CNN-based methods, e.g., VDSR [16], LapSRN [20], MemNet [32], EDSR [23], DBPN [12], RDN [44], SRFBN [22] and OISR [13], as well as existing non-local neural networks, e.g., NLRN [24] and RNAN [43]. Similar to OISR [13], IGNN is built on EDSR [23] but achieves better performance, demonstrating the effectiveness of the proposed GraphAgg for SISR. In addition, GraphAgg has only two very small sub-networks (ECN and DEN), each of which contains only three convolutional layers; the improvement thus comes from the cross-scale aggregation rather than a larger model size. As for SAN [5], it performs best in some cases; however, it uses a very deep network (200 residual blocks) that is around seven times deeper than the proposed IGNN.
We also present a qualitative comparison of IGNN with other state-of-the-art methods in Figure 3. IGNN recovers more details with less blurring, especially for small recurring textures. These results demonstrate that IGNN indeed exploits rich textures through cross-scale patch searching and aggregation. Compared with other methods, IGNN obtains image-specific information from the searched HR feature patches. Such internal cues complement the external information obtained by learning from the training dataset. More visual results are provided in the supplementary material.
3.2 Analysis and Discussions
In this section, we conduct a number of comparative experiments for further analysis and discussions.
Table 3: GraphAgg inserted after the 8th / after the 16th / after the 24th residual block.
Effectiveness of Graph Aggregation Module: To show the effectiveness of cross-scale aggregation intuitively, we provide a non-learning version, denoted GraphAgg*, which constructs the cross-scale graph in exactly the same way as IGNN. Unlike IGNN, whose GraphAgg aggregates features extracted within the network, GraphAgg* directly aggregates neighboring HR patches cropped from the input LR image by simple averaging. As shown in the first row of Figure 4, GraphAgg* recovers more detailed and sharper results than the bicubic-upsampled LR image, which intuitively shows the effectiveness of cross-scale aggregation for image SR. Even though the SR images generated by GraphAgg* are promising, they still contain some artifacts, as shown in the second row of Figure 4. The proposed IGNN removes them and restores better images with finer details by aggregating the features extracted within the network.
To further verify the effectiveness of GraphAgg, we replace it with a basic non-local block [36, 24] using the embedded-Gaussian distance. The results in Table 3 show that the basic non-local block brings a limited improvement of only 0.05 dB in PSNR. In contrast, IGNN shows evident performance improvements, suggesting the importance of cross-scale aggregation for SISR.
Position of Graph Aggregation Module: We compare three positions in the backbone network at which to integrate GraphAgg, i.e., after the 8th, the 16th, and the 24th residual block. As summarized in Table 3, performance improvement is observed at all positions. The largest gain is achieved by inserting GraphAgg in the middle, i.e., after the 16th residual block.
Settings for Graph Aggregation Module: We investigate the influence of the search window size and the number of neighbors k in GraphAgg. Table 5 shows the results on Urban100 for different sizes of the search window. As expected, the estimated SR image has better quality as the window size increases. We also find that a window of size 30 performs almost the same as searching over the whole downsampled features. Therefore, we empirically set the window size to 30 as a trade-off between computational complexity and performance.
Table 5 also presents the results on Urban100 for different numbers of neighbors k. In general, more neighbors improve SR results since more HR information can be utilized by GraphAgg. However, the performance saturates at k = 5, possibly because it is hard to find more than five useful HR neighbors for aggregation.
Ablation settings: w/o AdaPN / w/o ECN / w/o AdaPN and w/o ECN / IGNN.
Effectiveness of Adaptive Patch Normalization and Edge-Conditioned sub-network: The retrieved HR neighboring patches are sometimes mismatched with the query patch in low-frequency information, e.g., color and brightness. To address this, we adopt two designs in the proposed GraphAgg: Adaptive Patch Normalization (AdaPN) and the Edge-Conditioned sub-network (ECN). To validate their effectiveness, we compare GraphAgg with three variants: removing AdaPN only (w/o AdaPN), removing ECN only (w/o ECN), and removing both (w/o AdaPN and w/o ECN). The ablation results show that the network performs worse when either component is removed. Note that we remove ECN by replacing the exp(ECN(D_{i,j})) term in Eq. (2) with a weighted Euclidean distance under a Gaussian kernel. These experimental results demonstrate that AdaPN and ECN indeed make the GraphAgg module more robust for patch aggregation.
We present the novel notion of modelling internal correlations of cross-scale recurring patches as a graph, and propose a graph network, IGNN, that exploits this internal recurrence property effectively. IGNN obtains rich textures from HR counterparts found in the LR image itself to alleviate the ill-posed nature of SISR and recover more detailed textures. We have shown the effectiveness of cross-scale graph aggregation, which passes HR information from HR neighboring patches to LR ones. Extensive results on benchmarks demonstrate the effectiveness of the proposed IGNN against state-of-the-art SISR methods.
-  Marco Bevilacqua, Aline Roumy, Christine Guillemot, and Marie Line Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In BMVC, 2012.
-  Antoni Buades, Bartomeu Coll, and J-M Morel. A non-local algorithm for image denoising. In CVPR, 2005.
-  Hong Chang, Dit-Yan Yeung, and Yimin Xiong. Super-resolution through neighbor embedding. In CVPR, 2004.
-  Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. TIP, 16(8):2080–2095, 2007.
-  Tao Dai, Jianrui Cai, Yongbing Zhang, Shu-Tao Xia, and Lei Zhang. Second-order attention network for single image super-resolution. In CVPR, 2019.
-  Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. TPAMI, 38(2):295–307, 2015.
-  William T Freeman, Thouis R Jones, and Egon C Pasztor. Example-based super-resolution. CG&A, 22(2):56–65, 2002.
-  Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In ICML, 2017.
-  Daniel Glasner, Shai Bagon, and Michal Irani. Super-resolution from a single image. In ICCV, 2009.
-  Shuhang Gu, Lei Zhang, Wangmeng Zuo, and Xiangchu Feng. Weighted nuclear norm minimization with application to image denoising. In CVPR, 2014.
-  Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, 2017.
-  Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Deep back-projection networks for super-resolution. In CVPR, 2018.
-  Xiangyu He, Zitao Mo, Peisong Wang, Yang Liu, Mingyuan Yang, and Jian Cheng. Ode-inspired network design for single image super-resolution. In CVPR, 2019.
-  Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In CVPR, 2015.
-  Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017.
-  Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In CVPR, 2016.
-  Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Deeply-recursive convolutional network for image super-resolution. In CVPR, 2016.
-  Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
-  Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
-  Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. In CVPR, 2017.
-  Stamatios Lefkimmiatis. Universal denoising networks: a novel cnn architecture for image denoising. In CVPR, 2018.
-  Zhen Li, Jinglei Yang, Zheng Liu, Xiaomin Yang, Gwanggil Jeon, and Wei Wu. Feedback network for image super-resolution. In CVPR, 2019.
-  Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In CVPRW, 2017.
-  Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, and Thomas S Huang. Non-local recurrent network for image restoration. In NeurIPS, 2018.
-  Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Non-local sparse models for image restoration. In ICCV, 2009.
-  David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, 2001.
-  Yusuke Matsui, Kota Ito, Yuji Aramaki, Azuma Fujimoto, Toru Ogawa, Toshihiko Yamasaki, and Kiyoharu Aizawa. Sketch-based manga retrieval using manga109 dataset. MTA, 76(20):21811–21838, 2017.
-  Tomer Michaeli and Michal Irani. Nonparametric blind super-resolution. In ICCV, 2013.
-  Tobias Plötz and Stefan Roth. Neural nearest neighbors networks. In NeurIPS, 2018.
-  Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In CVPR, 2017.
-  Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  Ying Tai, Jian Yang, Xiaoming Liu, and Chunyan Xu. Memnet: A persistent memory network for image restoration. In ICCV, 2017.
-  Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, and Lei Zhang. Ntire 2017 challenge on single image super-resolution: Methods and results. In CVPRW, 2017.
-  Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. In CVPR, 2016.
-  Diego Valsesia, Giulia Fracastoro, and Enrico Magli. Deep graph-convolutional image denoising. arXiv preprint arXiv:1907.08448, 2019.
-  Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018.
-  Zhaowen Wang, Ding Liu, Jianchao Yang, Wei Han, and Thomas Huang. Deep networks for image super-resolution with sparse prior. In ICCV, 2015.
-  Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 13(4):600–612, 2004.
-  Jianchao Yang, Zhe Lin, and Scott Cohen. Fast image super-resolution based on in-place example regression. In CVPR, 2013.
-  Roman Zeyde, Michael Elad, and Matan Protter. On single image scale-up using sparse-representations. In ICCS, 2010.
-  Li Zhang, Dan Xu, Anurag Arnab, and Philip HS Torr. Dynamic graph message passing networks. In NeurIPS, 2019.
-  Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In ECCV, 2018.
-  Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, and Yun Fu. Residual non-local attention networks for image restoration. In ICLR, 2019.
-  Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. In CVPR, 2018.
-  Maria Zontak and Michal Irani. Internal statistics of a single natural image. In CVPR, 2011.
Appendix: More Visual Results
In this section, we provide more visual comparisons with seven state-of-the-art SISR networks, i.e., VDSR [16], EDSR [23], RDN [44], RCAN [42], OISR [13], SAN [5], and RNAN [43], on standard benchmark datasets. As shown in Figure 5 and Figure 6, the proposed IGNN recovers richer and sharper details from the LR images, especially in regions with recurring patterns.