Image restoration has been a long-standing problem given its practical value for a variety of low-level vision applications, such as face restoration, semantic segmentation [30, 37], and target tracking [31, 52]. In general, image restoration aims to recover a clean image x from its corrupted observation y = H(x) + n, where x is the ground-truth high-quality version of y, H is a degradation function, and n is additive noise. By accommodating different types of degradation function H, the resulting mathematical models target specific image restoration tasks, such as image super-resolution, denoising, and deblocking. Image super-resolution reconstructs a high-resolution (HR) image from its low-resolution (LR) counterpart, with H being a composite operator of blurring and down-sampling. Image denoising retrieves a clean image from a noisy observation, with H commonly being the identity function and n being additive white Gaussian noise with standard deviation σ. JPEG image deblocking aims to remove the blocking artifacts from a lossy image, with H corresponding to the JPEG compression function.
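To make the degradation model y = H(x) + n concrete, the sketch below instantiates H and n for two of the three tasks; the 2×2 box blur, the noise level σ = 25, and the function names are illustrative assumptions, not the paper's actual data pipeline (JPEG compression is omitted since it needs a codec):

```python
import numpy as np

def degrade(x, task, rng=None):
    """Toy degradation y = H(x) + n for a grayscale image x in [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    if task == "super_resolution":
        # H = blur + 2x down-sampling, n = 0 (a crude 2x2 box blur here)
        blurred = (x + np.roll(x, 1, 0) + np.roll(x, 1, 1)
                   + np.roll(np.roll(x, 1, 0), 1, 1)) / 4.0
        return blurred[::2, ::2]
    if task == "denoising":
        # H = identity, n ~ N(0, sigma^2) with an assumed sigma = 25/255
        sigma = 25.0 / 255.0
        return x + rng.normal(0.0, sigma, x.shape)
    raise ValueError(f"unknown task: {task}")
```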
The recent development of deep learning, especially convolutional neural networks (CNNs), has notably accelerated progress in image restoration [53, 40, 54, 55, 32]. Deep CNNs that enlarge the receptive field or enhance feature reuse provide state-of-the-art results in single-task image restoration, such as single image super-resolution [24, 29, 17], image denoising [18, 32], or JPEG artifact removal [14, 33], through residual learning and dense connections.
It is desirable for an image restoration network to support all three aforementioned tasks well. Unfortunately, most existing models perform well on only one of them. These tasks are known to be strongly correlated; to support all of them, an image restoration network must fully harness the inter-task correlations.
Moreover, there exist critical differences in how best to treat the three tasks. In particular, the selection of feature scales is known to significantly impact performance on these tasks, and each task has its own favorable scales of feature extraction. We therefore propose the cross-scale residual network (CSRnet) to improve the utilization of multi-scale features and the performance on multiple tasks.
Fig. 1 shows the diagram of the proposed CSRnet, which extracts various features at different scales and fully uses all the hierarchical features throughout the network. Specifically, we propose cross-scale residual blocks (CSRBs) (see Fig. 2), whose three states operate at different spatial resolutions, as the building blocks of the CSRnet. The states capture information at different scales, and the intra-block cross-scale connection of each CSRB produces an information flow from the fine to the coarse scale or vice versa. In addition, the inter-block connection combines information at a given resolution from all the preceding CSRBs to provide rich features for the current CSRB. Extensive experimental results verify that each proposed component improves the network performance, and hence the CSRnet outperforms state-of-the-art methods in image super-resolution, denoising, and deblocking.
Our main contributions can be summarized in the following aspects:
We propose a cross-scale residual network (CSRnet) that simultaneously implements multi-temporal feature reuse and multi-spatial-scale feature learning for multiple image restoration tasks, namely, image super-resolution, denoising, and deblocking.
Cross-scale residual blocks (CSRBs) are proposed as the basic building blocks of the proposed network. The CSRB adaptively learns feature information from different scales. Although deep-learning-based methods have achieved notable improvements over traditional methods in the image restoration domain, most of them learn features from the image space at a single scale and thus cannot handle the scenario of multiple tasks. To this end, we design the CSRB to efficiently extract and adaptively fuse features from different scales for multiple image restoration tasks.
To enhance feature reuse within the blocks and gradient flow during training, we propose two kinds of connections, namely, the intra-block cross-scale connection and the inter-block connection. The former produces an information flow from the fine to the coarse scale or vice versa. The latter allows the information from preceding blocks to be reused when learning the features of succeeding blocks.
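The cross-scale information flow described above can be sketched as follows. Real CSRBs use learned strided and transposed convolutions plus residual sub-blocks; this toy version (the function names and the fixed pooling/repetition operators are our own) only illustrates how the three scale states exchange features:

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling: a stand-in for the fine-to-coarse (green) link."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling: a stand-in for the coarse-to-fine (blue) link."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def csrb(f0, f2, f4):
    """One cross-scale residual block over three scale states (1x, 1/2, 1/4).

    Each scale state is updated residually with features imported from the
    neighbouring scales; the learned convolutions are omitted for clarity.
    """
    g0 = f0 + upsample2(f2)                    # scale 0 receives coarser features
    g2 = f2 + avg_pool2(f0) + upsample2(f4)    # scale 2 receives both directions
    g4 = f4 + avg_pool2(f2)                    # scale 4 receives finer features
    return g0, g2, g4
```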
II Related Work
II-A Image Super-resolution
Methods based on CNNs have recently revolutionized the field of image super-resolution. The most commonly used approach is to consider the interpolated low-resolution image as the input to the network. Dong et al. first introduced an end-to-end CNN model called SRCNN to reconstruct interpolated low-resolution images into their high-resolution counterparts. Improvements to the SRCNN include a very deep network for super-resolution (VDSR), which increases the network depth with a smaller filter size and residual learning, and a deeply recursive convolutional network (DRCN), which uses recursive layers and multi-supervision. Deep CNN models using block structures [41, 40] based on residual units use features from different temporal levels for reconstruction. Although these methods [24, 50, 40, 9, 23, 41] have considerably improved super-resolution accuracy, the interpolated low-resolution inputs increase the computational complexity and might introduce additional noise.
Given the specificity of image super-resolution, another effective approach directly takes the low-resolution image as the input to the CNN [29, 28, 43, 39, 7] to decrease the computational cost. Shi et al. proposed a sub-pixel convolutional layer to effectively up-sample the low-resolution feature maps, an approach that is also used in enhanced deep residual networks for super-resolution. Based on dense connections, SRDenseNet and the residual dense network employ dense blocks or residual dense blocks to learn high-level features, whose outputs are concatenated into a final output. In addition, generative adversarial networks have been used for image super-resolution [28, 7] to learn adversarial and perceptual content losses that can improve visual quality.
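The sub-pixel layer mentioned above rearranges channels into space. Below is a minimal NumPy sketch of that rearrangement step only (the preceding convolution that produces the C·r² feature maps is omitted, and the helper name is ours):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r^2, H, W) feature maps into a (C, H*r, W*r) image.

    Channel index c*r*r + i*r + j is placed at spatial offset (i, j)
    inside each r x r output cell, matching the usual sub-pixel layout.
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)    # split channels into the r x r grid
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```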
II-B Image Denoising
Traditional methods such as the BM3D algorithm and those based on dictionary learning have improved the performance of image denoising to some extent. Still, methods based on CNNs are more suitable for this task. Xie et al. combined sparse coding with an auto-encoder structure for image denoising. Inspired by residual learning and batch normalization, Zhang et al. proposed the DnCNN model to improve the outcome of image denoising. Mao et al. proposed a very deep convolutional auto-encoder network (RED) using symmetric skip connections for image denoising and super-resolution. Du et al. proposed stacked convolutional denoising auto-encoders to map images to hierarchical representations without any label information. Zhang et al. integrated CNN denoisers into model-based optimization for image super-resolution and denoising. Tai et al. proposed a very deep persistent memory network (MemNet) that introduces a memory block consisting of a recursive unit and a gate unit to simultaneously perform several image restoration tasks.
II-C JPEG Image Deblocking
Given that JPEG compression often induces severe blocking artifacts and undermines visual quality, image deblocking is particularly important in the restoration domain. Chen et al. proposed a flexible learning framework based on nonlinear reaction diffusion models for JPEG image deblocking, super-resolution, and denoising. Wang et al. designed a deep dual-domain-based fast restoration model for JPEG image deblocking, which combines prior knowledge from the JPEG compression scheme and the sparsity-based dual-domain approach. Unlike these traditional methods [5, 45], CNN-based JPEG image deblocking is more effective at removing blocking artifacts and improving visual quality. Dong et al. proposed an artifact reduction CNN (ARCNN) for JPEG image deblocking. The dual-domain CNN proposed by Guo et al. performs joint learning of the discrete cosine transform and pixel domains. To improve visual quality and artistic appreciation, Guo et al. proposed a one-to-many network for JPEG image deblocking, which measures the output quality using perceptual, naturalness, and JPEG losses.
II-D Multiple-Task Image Processing
Only a few methods exist for multi-task image processing. The method proposed by Zhang et al. integrates CNN-based denoisers into model-based optimization for image denoising and super-resolution. A very deep persistent memory network introduces a memory block to explicitly mine persistent memory through adaptive learning for image denoising, super-resolution, and deblocking. Likewise, Zhang et al. proposed a residual dense network to exploit the hierarchical features from all the convolutional layers in three representative image restoration applications. However, these methods learn image mappings at a single scale and ignore that different tasks may require features from different scales.
III Proposed CSRnet for Image Restoration
The proposed CSRnet, illustrated in Fig. 1, comprises three stages: a shallow feature extraction stage, a hierarchical feature fusion stage, and a reconstruction stage. They are respectively responsible for extracting shallow image features, fusing abundant feature maps, and adding image details. We denote the input and output of the CSRnet as I_LQ and I_HQ, respectively.
Shallow Feature Extraction Stage: We utilize two convolutional layers to extract shallow features from low-quality input images. The first convolutional layer extracts features from the input image, and the second convolutional layer reduces the dimension of the features. The shallow feature extraction stage can be expressed as
F_0 = H_SF2(H_SF1(I_LQ)),
where H_SF1 denotes the first convolutional operation, with filter size 7×7. Using a large convolutional kernel produces a large receptive field that takes a large image context into account. H_SF2 denotes the second convolutional operation, with filter size 3×3. F_0 is further used for residual learning during the reconstruction stage via a skip connection and serves as the input to the first CSRB.
Hierarchical Feature Fusion Stage: In this stage, the CSRnet learns hierarchical features from every CSRB, each of which has an identical structure. If D CSRBs are stacked through inter-block connections, the hierarchical feature fusion stage makes full use of the scale-0 state of each CSRB by
F_HF = H_HF([F_0, O_1, ..., O_D]),
where [F_0, O_1, ..., O_D] denotes the concatenation of the scale-0 outputs of all the CSRBs and the shallow output F_0 from the previous stage, and H_HF introduces a convolutional layer to adaptively control the dimension of the feature maps before they enter the reconstruction stage.
Reconstruction Stage: To further improve information flow and reconstruct image details, this stage contains a skip connection and two convolutional layers and is expressed as
I_HQ = H_R2(H_R1(F_HF + F_0)),
where the skip connection adds the output of the hierarchical feature fusion stage to the shallow features F_0 from the shallow feature extraction stage, and H_R1 and H_R2 denote two convolutional operations.
Given a training set with N training patch pairs, where each ground-truth high-quality patch corresponds to a low-quality input patch, we define the loss function of our model with the parameter set Θ as the average restoration error over the N training patches.
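As a sketch of such a loss, the snippet below averages a per-pixel L1 error over the batch; the choice of the L1 norm is an assumption (common in restoration networks of this generation), since the exact norm used is not reproduced here:

```python
import numpy as np

def restoration_loss(pred, gt):
    """Mean absolute (L1) error over a batch of N training patches.

    pred, gt: arrays of shape (N, H, W); the L1 norm is assumed,
    not taken from the paper.
    """
    return float(np.mean(np.abs(pred - gt)))
```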
III-B Cross-Scale Residual Block (CSRB)
To extract image features at different scales, we propose the CSRB as the key component of the CSRnet. A CSRB adopts three branches at different scales (i.e., s = 0, 2, and 4) to enable the use of cross-scale features. It is illustrated in Fig. 2 and detailed as follows.
Cross-Scale Design: Unlike models working at a single spatial resolution, the CSRB incorporates information from different scales. Specifically, boxes with differently colored edges in Fig. 2 represent the structure designs at different scales. The boxes with black, purple, and yellow edges indicate scales s = 0, 2, and 4, respectively; the value of s represents the scale of down-sampling. Two colored links, called intra-block cross-scale connections, indicate transitions between the three scales. The green and blue links respectively produce information flow from the fine to the coarse scale and vice versa. To learn abundant features from the previous blocks, we add the red link, called the inter-block connection, at each scale.
The input at a given scale s (s = 0, 2, 4) in the d-th CSRB is computed by concatenating two kinds of features, namely, 1) the same-scale features from all the previous CSRBs; and 2) either the shallow features (for d = 1) or the finer-scale feature map (for s = 2, 4), which is obtained by a strided convolution from the higher-resolution layers. A convolutional layer then reduces and maintains the dimension of the input at each scale; the same-scale and cross-scale features are described in detail under the intra-block cross-scale connections and inter-block connections.
To facilitate feature specialization at different resolutions, we modify the residual blocks for each scale input of the d-th CSRB by removing the batch normalization layers from our network, as performed by Nah et al. Enhanced deep residual networks for super-resolution have experimentally shown that this simple modification substantially increases performance. The output at each scale of the d-th CSRB is obtained by applying the composite operations of its residual blocks, followed by a convolutional layer that maintains the dimension of the outputs; for s = 0, 2, the coarser-scale feature map obtained by deconvolution from the lower-resolution layers is also incorporated, as detailed under the intra-block cross-scale connection.
Intra-Block Cross-Scale Connection: Features across different scales can provide various types of information for image restoration. Hence, we propose the intra-block cross-scale connection to produce information flow from the fine to the coarse scale and vice versa.
The finer-scale feature map (s = 2, 4) is produced from the higher-resolution layers by a down-sampling convolution, whose layer uses a stride of 2 to reduce the size of the feature map by half.
Likewise, the coarser-scale feature map (s = 0, 2) is obtained from the lower-resolution layers by an up-sampling (transposed) convolution, whose layer uses a stride of 1/2 to double the size of the feature map.
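The shape behaviour of these two operations can be sketched with stand-ins for the learned layers: slicing mimics a stride-2 convolution, and zero-insertion mimics a stride-1/2 transposed convolution (the learned filter weights are omitted, and the helper names are ours):

```python
import numpy as np

def strided_downsample(x):
    """Stride-2 convolution stand-in: halves each spatial dimension.
    A learned 3x3 stride-2 convolution produces the same shape change."""
    return x[::2, ::2]

def transposed_upsample(x):
    """Stride-1/2 (transposed) convolution stand-in: doubles each
    dimension by zero-insertion, the shape behaviour of a deconvolution
    before its learned filtering is applied."""
    h, w = x.shape
    y = np.zeros((2 * h, 2 * w), dtype=x.dtype)
    y[::2, ::2] = x
    return y
```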
Inter-Block Connection: To enhance feature reuse and gradient flow, we employ inter-block connections that utilize the information at a given resolution from all previous blocks. The input at a particular scale s (s = 0, 2, 4) of the d-th CSRB receives the corresponding-scale features of all the preceding CSRBs, i.e., the concatenation of the features produced by all the preceding CSRBs at that scale. When d = 1, only the shallow features from the first stage are available.
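A minimal sketch of the inter-block input, assuming channels-first feature maps (the function name is ours, and the follow-up channel-reducing convolution is omitted):

```python
import numpy as np

def inter_block_input(prev_outputs, shallow):
    """Concatenate the same-scale outputs of all preceding CSRBs together
    with the shallow features along the channel axis, as the inter-block
    connection does; a dimension-reducing convolution would follow."""
    return np.concatenate([shallow] + list(prev_outputs), axis=0)
```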
IV Experiments
In this section, we first describe the experimental setup, including the datasets and network settings of the proposed CSRnet. Then, taking image super-resolution as an example, we evaluate the contributions of different CSRnet components and parameters through an ablation study and analyze the effect of the CSRnet depth. Finally, we compare our model with state-of-the-art methods, in both objective and subjective terms, on three image restoration tasks, namely, super-resolution, denoising, and deblocking.
IV-A Datasets
For image super-resolution, we generated the bicubic up-sampled image patches by using the MATLAB function imresize with the bicubic option as the input to the CSRnet. Following [9, 23], we evaluate the proposed model on four popular benchmark datasets, namely, Set5, Set14, BSD100, and Urban100, with upscaling factors ×2, ×4, and ×8.
For image denoising, we generated noisy patches as the CSRnet input by adding Gaussian noise at three levels (σ = 15, 30, and 50) to the clean patches. Four popular benchmarks, namely, a dataset with 14 common images, BSD68, Urban100, and the BSD testing set with 200 images, were used for evaluation.
For JPEG image deblocking, we compressed the images using the MATLAB JPEG encoder with compression quality settings q = 10 and 20 as the deblocking input to the CSRnet. As in previous work, we evaluated the CSRnet and the comparison methods on the Classic5 and LIVE1 datasets.
IV-B Network Settings
We use TensorFlow to implement the basic CSRnet. Each convolutional layer, except for the first and final layers, has 32 filters. The first convolutional layer has 64 filters, which are used to extract more shallow information. The final convolutional layer has a single feature channel (1 filter), which outputs the high-quality image. Training the basic CSRnet for image super-resolution required roughly three days on a single GTX 1080 GPU (Nvidia Co., Santa Clara, CA, USA). Due to space constraints, we focus on image super-resolution in Secs. IV-C and IV-D, and consider all three tasks in Sec. IV-E.
We evaluate the results of the image restoration tasks in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) on the Y channel (luminance) of the YCbCr image space. The other two chrominance channels were directly transformed from the interpolated LR images for displaying the results.
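The PSNR-on-Y protocol above can be reproduced with a few lines of NumPy; the BT.601 coefficients for the Y channel are standard, while the helper names are ours:

```python
import numpy as np

def rgb_to_y(img):
    """BT.601 luminance (the Y of YCbCr) from an RGB image in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```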
IV-C Ablation Study
TABLE I lists the PSNR obtained from the ablation study on the effects of multiple scales and the inter-block connection. The baseline (denoted as CSRnet-1S) operates at a single scale (s = 0). To further verify the effectiveness of multiple spatial scales, CSRnet-2S adds a coarse scale (s = 2) to the baseline CSRnet-1S, and CSRnet-3S adds another coarse scale (s = 4) to CSRnet-2S. These networks can exchange features among different scales via intra-block cross-scale connections. Among the three networks, CSRnet-3S achieves the best performance on the four testing datasets. To some extent, adding more scales enables better feature learning, thereby further improving the network performance.
Then, we add inter-block connections to CSRnet-3S and denote the resulting network as CSRnet-Dense, which corresponds to the complete CSRnet. Compared with the previous CSRnet variants, CSRnet-Dense achieves the best results on the four testing datasets, which verifies the effect of the inter-block connection. Through the inter-block connections, each component can contribute to the information and gradient flow through the network.
To demonstrate the convergence of the four evaluated CSRnet variants, we plot their PSNR curves in Fig. 3, with the bicubic results as the reference. The four models exhibit a stable training process without obvious performance degradation. In addition, multiple scales and inter-block connections not only accelerate convergence but also notably improve performance.
IV-D Depth Analysis of Our Network
Besides different architectures, we evaluated different depths of the proposed CSRnet. The network depth is related to two basic parameters: the number of CSRBs and the number of residual blocks per CSRB. In this study, we only tested the effect of the number of CSRBs by setting up three structures. TABLE V lists the PSNR obtained from image super-resolution with these networks on the four evaluated datasets, Set5, Set14, BSD100, and Urban100, with upscaling factor ×2. Increasing the number of CSRBs considerably improves the PSNR on these datasets given the increased network depth, which in turn retrieves more hierarchical features for improving performance.
IV-E Comparisons with State-of-the-Art Models
We compared the CSRnet with the state-of-the-art models for three restoration tasks, namely, image super-resolution, image denoising, and JPEG image deblocking.
IV-E1 Image Super-resolution
Regarding image super-resolution, we quantitatively compared the proposed CSRnet with ten state-of-the-art methods, namely, SRCNN16, VDSR16, DRCN15, ESPCN16, LapSRN17, MemNet17, WaveResNet17, DRFN18, DSRN18, and EEDS19. For a fair comparison, we evaluated all the methods on the luminance channel for all upscaling factors. The comparison results on the four evaluated datasets for three upscaling factors (×2, ×4, ×8) are listed in TABLE II. The proposed CSRnet substantially outperforms the comparison models over the different upscaling factors and test datasets. On the Urban100 dataset, the CSRnet outperforms the second-best method by a PSNR gain of 0.60 dB at upscaling factor ×4. On BSD100, the CSRnet achieves a PSNR gain of only 0.15 dB over the second-best method. Similar results occur at other scales and with respect to other comparison models. Hence, the proposed CSRnet performs better especially on structured images with similar geometric patterns across various spatial resolutions, such as urban scenes (Urban100). Also worth noting is the SSIM performance of our method at upscaling factor ×8: the SSIM value of the CSRnet is 0.05 to 0.07 higher than that of LapSRN, whereas at the other upscaling factors the gain over LapSRN is no more than 0.02. This strongly suggests that our method retains higher structural similarity at larger upscaling factors.
Besides the quantitative comparison, Figs. 4, 5, and 6 show visual comparisons among the evaluated methods. Fig. 4 shows that the proposed CSRnet reconstructs clearer letters than the other models on an image from the Set14 dataset at upscaling factor ×2. Likewise, Fig. 5 shows that the CSRnet clearly recovers the parallel line structures in an image from the Urban100 dataset at upscaling factor ×4, whereas the other models produce obvious distortions. In Fig. 6, our model more realistically restores the man's eyes and the stripes of his shirt in an image from the BSD100 dataset at upscaling factor ×8, whereas the results of the other methods are highly distorted. Overall, the CSRnet outperforms the other evaluated models both quantitatively and qualitatively.
IV-E2 Image Denoising
We trained the proposed CSRnet on grayscale images and compared the results with those of eight denoising methods: BM3D07, TNRD15, PGPD15, DnCNN16, IRCNN17, RED18, MemNet17, and FOCNet19. TABLE III lists the average PSNR/SSIM results of the evaluated methods on four benchmark datasets for three noise levels. The PSNR values of the CSRnet are higher than those of the second-best method at every noise level and on every dataset. As for super-resolution, Figs. 7, 8, and 9 show visual comparisons among the evaluated methods on an image from BSD68 with noise level σ = 15, an image from BSD200 with noise level σ = 30, and an image from the 14-image dataset with noise level σ = 50. The proposed CSRnet recovers relatively sharper and clearer images than the other methods, thus being more faithful to the ground truth.
IV-E3 JPEG Image Deblocking
We applied the proposed CSRnet to deblocking on the Y channel only and compared it with five existing methods: ARCNN15, TNRD15, DnCNN16, MemNet17, and IACNN19. TABLE IV lists the average PSNR/SSIM of the evaluated methods on two benchmark datasets, namely, Classic5 and LIVE1, for quality factors of 10 and 20. The CSRnet outperforms IACNN19, the current state-of-the-art method, by more than 0.60 and 0.57 dB on the Classic5 dataset, and by 0.38 and 0.35 dB on the LIVE1 dataset, with quality factors of 10 and 20, respectively. Figs. 10 and 11 show visual comparisons for JPEG image deblocking; ARCNN, DnCNN, and MemNet were compared using their public codes. Clearly, the CSRnet removes blocking artifacts and restores detailed textures more effectively than the comparison methods.
V Conclusion
This paper presented the CSRnet, a deep network intended to exploit scale-related features and the inter-task correlations among three tasks: super-resolution, denoising, and deblocking. Several CSRBs are stacked in the CSRnet and adaptively learn image features at different scales. The same-resolution outputs from all the previous CSRBs are used by the current CSRB via inter-block connections for information reuse. The intra-block cross-scale connection within a CSRB at any scale allows the network to learn more abundant features from finer to coarser scales or vice versa. Extensive evaluations and comparisons with existing methods verify the advantages of the proposed CSRnet. In future work, we will extend the CSRnet to handle more general restoration tasks such as image deblurring and blind deconvolution.
- (2016) TensorFlow: large-scale machine learning on heterogeneous distributed systems.
- (2017) Beyond deep residual learning for image restoration: persistent homology-guided manifold simplification. In , pp. 145–153.
- (2012) Neighbor embedding based single-image super-resolution using semi-nonnegative matrix factorization. In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1289–1292.
- (2008) Nonlocal image and movie denoising. International Journal of Computer Vision 76 (2), pp. 123–139.
- (2015) Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (6), pp. 1256–1272.
- (2019) Sequential gating ensemble network for noise robust multiscale face restoration. IEEE Transactions on Cybernetics.
- (2018) Generative adversarial network-based image super-resolution using perceptual content losses. arXiv preprint arXiv:1809.04783.
- (2002) A robotics toolbox for MATLAB. IEEE Robotics and Automation Magazine 3 (1), pp. 24–32.
- (2016) Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (2), pp. 295–307.
- (2015) Compression artifacts reduction by a deep convolutional network. In Proceedings of the IEEE International Conference on Computer Vision, pp. 576–584.
- (2016) Stacked convolutional denoising auto-encoders for feature representation. IEEE Transactions on Cybernetics 47 (4), pp. 1017–1027.
- (2018) Text image deblurring using kernel sparsity prior. IEEE Transactions on Cybernetics.
- (2016) Building dual-domain representations for compression artifacts reduction. Springer International Publishing.
- (2017) One-to-many network for visually pleasing compression artifacts reduction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4867–4876.
- (2013) Enhanced computer vision with Microsoft Kinect sensor: a review. IEEE Transactions on Cybernetics 43 (5), pp. 1318–1334.
- (2018) Image super-resolution via dual-state recurrent networks. In Proc. CVPR.
- (2018) Deep back-projection networks for super-resolution.
- (2012) Image denoising: can plain neural networks compete with BM3D?. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2392–2399.
- (2015) Single image super-resolution from transformed self-exemplars. In Computer Vision and Pattern Recognition, pp. 5197–5206.
- (2019) FOCNet: a fractional optimal control network for image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6054–6063.
- (2010) Image super-resolution via sparse representation. IEEE Transactions on Image Processing 19 (11), pp. 2861–2873.
- (2019) Ensemble super-resolution with a reference dataset. IEEE Transactions on Cybernetics.
- (2015) Deeply-recursive convolutional network for image super-resolution. pp. 1637–1645.
- (2016) Accurate image super-resolution using very deep convolutional networks. In Computer Vision and Pattern Recognition, pp. 1646–1654.
- (2019) A pseudo-blind convolutional neural network for the reduction of compression artifacts. IEEE Transactions on Circuits and Systems for Video Technology.
- (2007) Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing 16 (8), pp. 2080.
- (2017) Deep Laplacian pyramid networks for fast and accurate super-resolution. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 5835–5843.
- (2017) Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, Vol. 2, pp. 4.
- (2017) Enhanced deep residual networks for single image super-resolution. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1132–1140.
- (2018) SCN: switchable context network for semantic segmentation of RGB-D images. IEEE Transactions on Cybernetics.
- (2018) Robust visual tracking revisited: from correlation filter to template matching. IEEE Transactions on Image Processing 27 (6), pp. 2777–2790.
- (2018) Multi-level wavelet-CNN for image restoration. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
- (2018) BlockCNN: a deep network for artifact removal and image compression. arXiv preprint arXiv:1805.11091.
- (2016) Image denoising using very deep fully convolutional encoder-decoder networks with symmetric skip connections.
- (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV), Vol. 2, pp. 416–423.
- (2016) Deep multi-scale convolutional neural network for dynamic scene deblurring. pp. 257–265.
- (2018) 3-D fully convolutional networks for multimodal isointense infant brain image segmentation. IEEE Transactions on Cybernetics (99), pp. 1–14.
- (2009) Clustering-based denoising with locally learned dictionaries. IEEE Transactions on Image Processing 18 (7), pp. 1438–1451.
- (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1874–1883.
- (2017) MemNet: a persistent memory network for image restoration. In IEEE International Conference on Computer Vision, pp. 4549–4557.
- (2017) Image super-resolution via deep recursive residual network. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2790–2798.
- (2016) Seven ways to improve example-based single image super resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1865–1873.
- (2017) Image super-resolution using dense skip connections. In IEEE International Conference on Computer Vision, pp. 4809–4817.
- (2019) End-to-end image super-resolution via deep and shallow convolutional networks. IEEE Access 7, pp. 31959–31970.
- (2016) D3: deep dual-domain based fast restoration of JPEG-compressed images. pp. 2764–2772.
- (2012) Image denoising and inpainting with deep neural networks. In International Conference on Neural Information Processing Systems, pp. 341–349.
- (2015) Patch group based nonlocal self-similarity prior learning for image denoising. In Proceedings of the IEEE International Conference on Computer Vision, pp. 244–252.
- (2010) Image super-resolution via sparse representation. IEEE Transactions on Image Processing 19 (11), pp. 2861–2873.
- (2018) DRFN: deep recurrent fusion network for single-image super-resolution with large factors. IEEE Transactions on Multimedia.
- Coupled deep autoencoder for single image super-resolution. IEEE Transactions on Cybernetics 47 (1), pp. 27–37.
- (2012) On single image scale-up using sparse-representations. In International Conference on Curves and Surfaces, pp. 711–730.
- (2018) Adaptive consensus-based distributed target tracking with dynamic cluster in sensor networks. IEEE Transactions on Cybernetics (99), pp. 1–12.
- (2016) Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing 26 (7), pp. 3142–3155.
- (2017) Learning deep CNN denoiser prior for image restoration. In IEEE Conference on Computer Vision and Pattern Recognition, Vol. 2.
- (2018) Residual dense network for image restoration. CoRR abs/1812.10477.