Single image restoration (IR) aims to generate a visually pleasing high-quality (HQ) image from its degraded low-quality (LQ) measurement. Image restoration is used in various computer vision tasks, such as security and surveillance imaging, medical imaging, and image generation. However, it is an ill-posed inverse problem due to the irreversible nature of the image degradation process. Recently, deep convolutional neural networks (CNNs) have shown excellent performance in different image restoration tasks, such as image super-resolution (SR) [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], image denoising (DN) [15, 16, 17, 18, 19, 20], and image compression artifact reduction (CAR) [21, 15, 17].
Among them, Dong et al. first introduced a three-layer convolutional neural network (CNN) into image SR and achieved significant improvement over conventional methods. Dong et al. also applied such a shallow CNN to image CAR. Kim et al. increased the network depth in VDSR and DRCN by using gradient clipping, residual learning, or recursive supervision to ease the difficulty of training deep networks. By using effective building modules, networks for image SR were further made deeper and wider with better performance. Zhang et al. incorporated such residual learning into their denoising network. Lim et al. used residual blocks (Figure 1(a)) to build a very wide network EDSR with residual scaling and a very deep one MDSR. Tai et al. proposed the memory block to build MemNet for image restoration. As the network depth grows, the features in each convolutional layer become hierarchical with different receptive fields. However, these methods neglect to fully use the information of each convolutional layer. Although the gate unit in the memory block was proposed to control short-term memory, the local convolutional layers do not have direct access to the subsequent layers. So it is hard to say that the memory block makes full use of the information from all the layers within it.
Furthermore, objects in images have different scales, angles of view, and aspect ratios. These aspects can be captured by hierarchical features, which would give more clues for reconstruction. However, most deep learning (DL) based methods (e.g., VDSR, LapSRN, IRCNN, and EDSR) neglect to use hierarchical features for reconstruction. Although the memory block also takes information from preceding memory blocks as input, the multi-level features are not extracted from the original LQ image (e.g., the LR image). Taking image SR as an example, MemNet interpolates the original LR image to the desired size to form the input. This pre-processing step not only increases computation complexity quadratically, but also loses some details of the original LR image. Tong et al. introduced the dense block (Figure 1(b)) for image SR with a relatively low growth rate (e.g., 16). According to our experiments (see Section 5.1), a higher growth rate can improve the performance of the network. However, it is hard to train a wider network with dense blocks.
To address these limitations, we propose a residual dense network (RDN) (Figure 2) to make full use of all the hierarchical features from the original LQ image with our proposed residual dense block (Figure 1(c)). It is difficult and impractical for a very deep network to directly extract the output of each convolutional layer in the LQ space. We propose the residual dense block (RDB) as the building module for RDN. An RDB consists of densely connected layers and local feature fusion (LFF) with local residual learning (LRL). Our RDB also supports contiguous memory among RDBs. The output of one RDB has direct access to each layer of the next RDB, resulting in a contiguous state pass. Each convolutional layer in an RDB has access to all the subsequent layers and passes on information that needs to be preserved. Concatenating the states of the preceding RDB and all the preceding layers within the current RDB, LFF extracts local dense features by adaptively preserving the information. Moreover, LFF allows a very high growth rate by stabilizing the training of a wider network. After extracting multi-level local dense features, we further conduct global feature fusion (GFF) to adaptively preserve the hierarchical features in a global way. As depicted in Figures 2 and 3, each layer has direct access to the original LR input, leading to an implicit deep supervision.
In summary, our main contributions are three-fold:
We propose a unified framework residual dense network (RDN) for high-quality image restoration. The network makes full use of all the hierarchical features from the original LQ image.
We propose residual dense block (RDB), which can not only read state from the preceding RDB via a contiguous memory (CM) mechanism, but also fully utilize all the layers within it via local dense connections. The accumulated features are then adaptively preserved by local feature fusion (LFF).
We propose global feature fusion to adaptively fuse hierarchical features from all RDBs in the LR space. With global residual learning, we combine the shallow features and deep features together, resulting in global dense features from the original LQ image.
A preliminary version of this work was presented as a conference paper. In the current work, we extend the preliminary version in significant ways:
We investigate a flexible structure of RDN and apply it for different IR tasks. Such IR applications allow us to further investigate the potential breadth of RDN.
We investigate more details and add considerable analyses to the initial version, such as block connection, network parameter number, and running time.
We extend RDN for Gaussian image denoising and compression artifact reduction. Extensive experiments demonstrate that our RDN still outperforms existing approaches in these IR tasks.
2 Related Work
Recently, deep learning (DL) based methods have achieved dramatic advantages over conventional methods in computer vision [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]. Here, we focus on several representative image restoration tasks: image super-resolution (SR), denoising (DN), and compression artifact reduction (CAR).
2.1 Image Super-Resolution
Dong et al. proposed SRCNN, establishing an end-to-end mapping between interpolated LR images and their HR counterparts for the first time. This baseline was then further improved mainly by increasing network depth or sharing network weights. VDSR and IRCNN increased the network depth by stacking more convolutional layers with residual learning. DRCN first introduced recursive learning in a very deep network for parameter sharing. Tai et al. introduced recursive blocks in DRRN and the memory block in MemNet for deeper networks. All these methods need to interpolate the original LR images to the desired size before feeding them into the networks. This pre-processing step not only increases computation complexity quadratically, but also over-smooths and blurs the original LR image, from which some details are lost. As a result, these methods extract features from the interpolated LR images, failing to establish an end-to-end mapping from the original LR to HR images.
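The quadratic cost of this pre-processing step can be made concrete: interpolating the LR input by scale s multiplies the pixel count, and hence the cost of every subsequent convolution, by s². A minimal sketch (the helper `conv_cost_ratio` is hypothetical, for illustration only):

```python
def conv_cost_ratio(scale):
    """Relative per-layer convolution cost when features are computed on an
    interpolated (scale-times upsampled) image instead of the original LR
    image: the spatial size, and thus the FLOP count, grows by scale**2."""
    return scale ** 2

# For scales 2, 3, and 4, every Conv layer becomes 4x, 9x, and 16x costlier.
print([conv_cost_ratio(s) for s in (2, 3, 4)])  # [4, 9, 16]
```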
To solve the problem above, Dong et al. directly took the original LR image as input and introduced a transposed convolution layer (also known as a deconvolution layer) for upsampling to the fine resolution. Shi et al. proposed ESPCN, where an efficient sub-pixel convolution layer was introduced to upscale the final LR feature maps into the HR output. The efficient sub-pixel convolution layer was then adopted in SRResNet and EDSR, which took advantage of residual learning. All of these methods extract features in the LR space and upscale the final LR features with a transposed or sub-pixel convolution layer. By doing so, these networks can either be capable of real-time SR (e.g., FSRCNN and ESPCN) or be built very deep/wide (e.g., SRResNet and EDSR). However, all of these methods stack building modules (e.g., Conv layers in FSRCNN, residual blocks in SRResNet and EDSR) in a chained way. They neglect to adequately utilize the information from each Conv layer and only adopt CNN features from the last Conv layer in the LR space for upscaling.
2.2 Deep Convolutional Neural Network (CNN)
LeCun et al. integrated constraints from the task domain to enhance network generalization ability for handwritten zip code recognition, which can be viewed as the pioneering usage of CNNs. Later, various network structures were proposed with better performance, such as AlexNet, VGG, and GoogLeNet. Recently, He et al. investigated the powerful effectiveness of network depth and proposed deep residual learning for very deep trainable networks. Such a very deep residual network achieves significant improvements on several computer vision tasks, like image classification and object detection. Huang et al. proposed DenseNet, which allows direct connections between any two layers within the same dense block. With the local dense connections, each layer reads information from all the preceding layers within the same dense block. The dense connection was also introduced among memory blocks and dense blocks. More differences between DenseNet/SRDenseNet/MemNet and our RDN will be discussed in Section 4.
2.3 Deep Learning for Image Restoration
Dong et al. proposed ARCNN for image compression artifact reduction (CAR) with several stacked convolutional layers. Mao et al. proposed residual encoder-decoder networks (RED) with symmetric skip connections, which made the network go deeper (up to 30 layers). Zhang et al. proposed DnCNN to learn the mapping from a noisy image to the noise and further improved performance by utilizing batch normalization. Zhang et al. also proposed to learn a deep CNN denoiser prior for image restoration (IRCNN) by integrating CNN denoisers into a model-based optimization method. However, such methods have limited network depth (e.g., 30 for RED, 20 for DnCNN, and 7 for IRCNN), limiting the network's ability. Simply stacking more layers cannot reach better results due to the vanishing gradient problem. On the other hand, by using short-term and long-term memory, Tai et al. proposed MemNet for image restoration, where the network depth reached 212 but obtained limited improvement over results with 80 layers. For 31×31 input patches from 91 images, training an 80-layer MemNet takes 5 days using 1 Tesla P40 GPU.
The aforementioned DL-based image restoration methods have achieved significant improvement over conventional methods, but most of them lose some useful hierarchical features from the original LQ image. Hierarchical features produced by a very deep network are useful for image restoration tasks (e.g., image SR). To address this issue, we propose the residual dense network (RDN) to efficiently extract and adaptively fuse features from all the layers in the LQ space.
3 Residual Dense Network for IR
3.1 Network Structure
We mainly take image SR as an example and give specific illustrations for the image DN and CAR cases.
RDN for image SR. As shown in Figure 2(a), our RDN mainly consists of four parts: shallow feature extraction net (SFENet), residual dense blocks (RDBs), dense feature fusion (DFF), and finally the up-sampling net (UPNet). Let us denote $I_{LQ}$ and $I_{HQ}$ as the input and output of RDN. Specifically, we use two Conv layers to extract shallow features. The first Conv layer extracts features $F_{-1}$ from the LQ input
$$F_{-1} = H_{SFE1}(I_{LQ}),$$
where $H_{SFE1}$ denotes the convolution operation. $F_{-1}$ is then used for further shallow feature extraction and global residual learning. So, we can further have
$$F_{0} = H_{SFE2}(F_{-1}),$$
where $H_{SFE2}$ denotes the convolution operation of the second shallow feature extraction layer and $F_{0}$ is used as input to the residual dense blocks. Supposing we have $D$ residual dense blocks, the output $F_{d}$ of the $d$-th RDB can be obtained by
$$F_{d} = H_{RDB,d}(F_{d-1}) = H_{RDB,d}(H_{RDB,d-1}(\cdots(H_{RDB,1}(F_{0}))\cdots)),$$
where $H_{RDB,d}$ denotes the operations of the $d$-th RDB. As $F_{d}$ is produced by the $d$-th RDB fully utilizing each convolutional layer within the block, we can view $F_{d}$ as a local feature. More details about RDB will be given in Section 3.2.
After extracting hierarchical features with a set of RDBs, we further conduct dense feature fusion (DFF), which includes global feature fusion (GFF) and global residual learning (GRL). DFF makes full use of features from all the preceding layers and can be represented as
$$F_{DF} = H_{DFF}(F_{-1}, F_{0}, F_{1}, \cdots, F_{D}),$$
where $F_{DF}$ is the output feature-maps of DFF obtained by utilizing a composite function $H_{DFF}$. More details about DFF will be shown in Section 3.3.
After extracting local and global features in the LQ space, we stack an up-sampling net (UPNet) in the HQ space. Inspired by previous work, we utilize ESPCN in UPNet followed by one Conv layer. The output of RDN can be obtained by
$$I_{HQ} = H_{RDN}(I_{LQ}),$$
where $H_{RDN}$ denotes the function of our RDN.
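The sub-pixel upscaling used in UPNet can be sketched as a pixel-shuffle rearrangement: a convolution first produces $C \cdot r^2$ channels in the LR space, which are then reshuffled into a $C$-channel map at $r$ times the resolution. A minimal NumPy sketch (an illustration of the ESPCN idea, not the paper's Torch7 implementation):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r),
    as done by the sub-pixel convolution layer of ESPCN."""
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c_r2 // (r * r)
    # Split the channel axis into (C, r, r), then interleave into space.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# A 4-channel 2x2 LR map upscaled by r=2 becomes a single-channel 4x4 map.
feat = np.arange(16).reshape(4, 2, 2)
out = pixel_shuffle(feat, 2)
print(out.shape)  # (1, 4, 4)
```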
RDN for image DN and CAR. When we apply our RDN to image DN and CAR, the resolution of the input and output remains the same. As shown in Figure 2(b), we remove the upscaling module in UPNet and obtain the final HQ output via residual learning
$$I_{HQ} = I_{LQ} + H_{RDN}(I_{LQ}).$$
3.2 Residual Dense Block
We present details about our proposed residual dense block (RDB) shown in Figure 3. Our RDB contains dense connected layers, local feature fusion (LFF), and local residual learning, leading to a contiguous memory (CM) mechanism.
Contiguous memory (CM) mechanism. It is realized by passing the state of the preceding RDB to each layer of the current RDB. Let $F_{d-1}$ and $F_{d}$ be the input and output of the $d$-th RDB respectively, both having $G_{0}$ feature-maps. The output of the $c$-th Conv layer of the $d$-th RDB can be formulated as
$$F_{d,c} = \sigma(W_{d,c}[F_{d-1}, F_{d,1}, \cdots, F_{d,c-1}]),$$
where $\sigma$ denotes the ReLU activation function. $W_{d,c}$ is the weights of the $c$-th Conv layer, where the bias term is omitted for simplicity. We assume $F_{d,c}$ consists of $G$ (also known as growth rate) feature-maps. $[F_{d-1}, F_{d,1}, \cdots, F_{d,c-1}]$ refers to the concatenation of the feature-maps produced by the $(d-1)$-th RDB and convolutional layers $1, \cdots, (c-1)$ in the $d$-th RDB, resulting in $G_{0}+(c-1)\times G$ feature-maps. The outputs of the preceding RDB and each layer have direct connections to all subsequent layers, which not only preserves the feed-forward nature, but also extracts local dense features.
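The channel bookkeeping implied by this formula can be checked with simple arithmetic: the $c$-th Conv layer receives the $G_{0}$ maps passed in from the preceding RDB plus $G$ maps from each of the $c-1$ earlier layers. A small sketch (`rdb_in_channels` is a hypothetical helper; $G_{0}=64$ follows Section 3.4 and $G=32$ the ablation setting in Section 5.2):

```python
def rdb_in_channels(c, g0=64, g=32):
    """Input channel count of the c-th Conv layer (1-indexed) inside an RDB:
    the G0 maps passed in from the preceding RDB plus the G maps produced
    by each of the c-1 earlier layers in the current RDB."""
    return g0 + (c - 1) * g

# With G0 = 64, G = 32, and C = 6 layers, the concatenation fed to LFF
# holds G0 + C*G = 256 feature maps.
print([rdb_in_channels(c) for c in range(1, 7)])  # [64, 96, 128, 160, 192, 224]
```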
Local feature fusion (LFF). We apply LFF to adaptively fuse the states from the preceding RDB and all the Conv layers in the current RDB. As analyzed above, the feature-maps of the $(d-1)$-th RDB are introduced directly to the $d$-th RDB in a concatenation way, so it is essential to reduce the feature number. On the other hand, inspired by MemNet, we introduce a $1\times1$ convolutional layer to adaptively control the output information. We name this operation local feature fusion (LFF), formulated as
$$F_{d,LF} = H_{LFF}^{d}([F_{d-1}, F_{d,1}, \cdots, F_{d,C}]),$$
where $H_{LFF}^{d}$ denotes the function of the $1\times1$ Conv layer in the $d$-th RDB. We also find that as the growth rate $G$ becomes larger, a very deep dense network without LFF would be hard to train. However, a larger growth rate further contributes to the performance, which will be detailed in Section 5.1.
Local residual learning (LRL). We introduce LRL in the RDB to further improve the information flow and allow a larger growth rate, as there are several convolutional layers in one RDB. The final output of the $d$-th RDB can be obtained by
$$F_{d} = F_{d-1} + F_{d,LF}.$$
It should be noted that LRL can also further improve the network representation ability, resulting in better performance. We introduce more results about LRL in Section 5.2. Because of the dense connectivity and local residual learning, we refer to this block architecture as residual dense block (RDB). More differences between RDB and the original dense block will be summarized in Section 4.
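Putting the three ingredients together, one RDB forward pass can be sketched in NumPy. For brevity every convolution is modeled as a per-pixel 1×1 channel mixing with random weights; this is only a shape-level illustration of dense concatenation, LFF, and LRL, not the actual 3×3 trained layers:

```python
import numpy as np

rng = np.random.default_rng(0)
G0, G, C, H, W = 8, 4, 3, 5, 5   # toy sizes; the paper uses G0=64, G=32, C=6

def conv1x1(x, w):
    """Per-pixel linear map over channels: (Cin,H,W) x (Cout,Cin) -> (Cout,H,W)."""
    return np.einsum('oc,chw->ohw', w, x)

def rdb_forward(x):
    """One residual dense block: dense concatenation, LFF (1x1 conv), LRL."""
    feats = [x]                                   # state of the preceding RDB
    for c in range(C):                            # densely connected layers
        w = rng.standard_normal((G, G0 + c * G)) * 0.1
        feats.append(np.maximum(conv1x1(np.concatenate(feats), w), 0))  # ReLU
    w_lff = rng.standard_normal((G0, G0 + C * G)) * 0.1
    fused = conv1x1(np.concatenate(feats), w_lff) # local feature fusion
    return x + fused                              # local residual learning

x = rng.standard_normal((G0, H, W))
y = rdb_forward(x)
print(y.shape)  # (8, 5, 5)
```

Note that the output keeps the input's $G_{0}$ channels, which is what lets RDBs be chained and lets LRL add $F_{d-1}$ directly.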
3.3 Dense Feature Fusion
After extracting local dense features with a set of RDBs, we further propose dense feature fusion (DFF) to exploit hierarchical features in a global way. DFF consists of global feature fusion (GFF) and global residual learning (GRL).
Global feature fusion (GFF). We propose GFF to extract the global feature $F_{GF}$ by fusing features from all the RDBs
$$F_{GF} = H_{GFF}([F_{1}, \cdots, F_{D}]),$$
where $[F_{1}, \cdots, F_{D}]$ refers to the concatenation of feature-maps produced by residual dense blocks $1, \cdots, D$. $H_{GFF}$ is a composite function of $1\times1$ and $3\times3$ convolution. The $1\times1$ convolutional layer is used to adaptively fuse a range of features with different levels. The following $3\times3$ convolutional layer is introduced to further extract features for global residual learning, which has been demonstrated to be effective.
Global residual learning (GRL). We then utilize GRL to obtain the feature-maps before conducting up-scaling by
$$F_{DF} = F_{-1} + F_{GF},$$
where $F_{-1}$ denotes the shallow feature-maps. All the other layers before global feature fusion are fully utilized with our proposed residual dense blocks (RDBs). RDBs produce multi-level local dense features, which are further adaptively fused to form $F_{GF}$. After global residual learning, we obtain the deep dense feature $F_{DF}$.
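DFF can be sketched in the same shape-level style: concatenate the $D$ RDB outputs, fuse them back to $G_{0}$ channels (the $1\times1$ convolution of GFF; the following $3\times3$ convolution is omitted for brevity), and add the shallow feature-maps $F_{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
D, G0, H, W = 4, 8, 5, 5   # toy sizes; the paper uses D = 16 or 20, G0 = 64

rdb_outputs = [rng.standard_normal((G0, H, W)) for _ in range(D)]  # F_1..F_D
f_minus1 = rng.standard_normal((G0, H, W))                         # shallow F_{-1}

# GFF: concatenate all D RDB outputs and fuse them back to G0 channels with
# a 1x1 conv, modeled here as a per-pixel channel mixing with random weights.
w_1x1 = rng.standard_normal((G0, D * G0)) * 0.1
f_gf = np.einsum('oc,chw->ohw', w_1x1, np.concatenate(rdb_outputs))

# GRL: add the shallow feature-maps F_{-1} to obtain F_DF.
f_df = f_minus1 + f_gf
print(f_df.shape)  # (8, 5, 5)
```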
It should be noted that Tai et al. utilized long-term dense connections in MemNet to recover more high-frequency information. However, in the memory block, the preceding layers do not have direct access to all the subsequent layers. The local feature information is not fully used, limiting the ability of long-term connections. In addition, MemNet extracts features in the HQ space, increasing computational complexity. In contrast, inspired by [41, 42, 25, 23], we extract local and global features in the LQ space. More differences between our proposed RDN and MemNet will be shown in Section 4. We will also demonstrate the effectiveness of global feature fusion in Section 5.2.
3.4 Implementation Details
In our proposed RDN, we set $3\times3$ as the kernel size of all convolutional layers except those in local and global feature fusion, whose kernel size is $1\times1$. For convolutional layers with kernel size $3\times3$, we pad zeros to each side of the input to keep the size fixed. The shallow feature extraction layers and the local and global feature fusion layers have $G_{0}=64$ filters. Other layers in each RDB have $G=64$ filters and are followed by ReLU. For image SR, following prior work, we use ESPCN to upscale the coarse-resolution features to fine ones for the UPNet. For image DN and CAR, the up-scaling module is removed from UPNet. The final Conv layer has 3 output channels, as we output color HQ images. However, the network can also process gray images, for example, when we apply RDN for gray-scale image denoising.
4 Differences with Prior Works
Here, we give more details about the differences between our RDN and several representative works.
Difference to DenseNet. Inspired by DenseNet, we adopt local dense connections into our proposed residual dense block (RDB). In general, DenseNet is widely used in high-level computer vision tasks (e.g., object recognition), while RDN is designed for image restoration. Moreover, we remove batch normalization (BN) layers, which consume the same amount of GPU memory as convolutional layers, increase computational complexity, and hinder the performance of the network. We also remove the pooling layers, which could discard some pixel-level structural information. Furthermore, transition layers are placed between two adjacent dense blocks in DenseNet, while in RDN we combine densely connected layers with local feature fusion (LFF) by using local residual learning, which is demonstrated to be effective in Section 5.2. As a result, the output of the $(d-1)$-th RDB has direct connections to each layer in the $d$-th RDB and also contributes to the input of the $(d+1)$-th RDB. Last but not least, we adopt GFF to make full use of hierarchical features, which are neglected in DenseNet.
Difference to SRDenseNet. There are three main differences between SRDenseNet and our RDN. The first one is the design of the basic building block. SRDenseNet introduces the basic dense block from DenseNet. Our residual dense block (RDB) improves it in three ways: (1) We propose the contiguous memory (CM) mechanism, which allows the state of the preceding RDB to have direct access to each layer of the current RDB. (2) Our RDB allows a larger growth rate by using local feature fusion (LFF), which stabilizes the training of the wider network. (3) Local residual learning (LRL) is utilized in the RDB to further encourage the flow of information and gradients. The second one is that there are no dense connections among RDBs. Instead, we use global feature fusion (GFF) and global residual learning to extract global features, because our RDBs with contiguous memory have fully extracted features locally. As shown in Sections 5.1 and 5.2, all these components increase the performance significantly. The third one is that SRDenseNet uses the $L_{2}$ loss function, whereas we utilize the $L_{1}$ loss function, which has been demonstrated to be more powerful for performance and convergence. As a result, our proposed RDN achieves better performance than SRDenseNet. In Table I, our RDN with or without LRL outperforms SRDenseNet for image SR on all the datasets.
Difference to MemNet. In addition to the different choice of loss function ($L_{2}$ in MemNet), we mainly summarize another three differences between MemNet and our RDN for image SR. First, MemNet needs to upscale the original LR image to the desired size using bicubic interpolation for image SR. This procedure results in feature extraction and reconstruction in the HR space. In contrast, RDN extracts hierarchical features from the original LR image, reducing computational complexity significantly and improving the performance. Second, the memory block in MemNet contains recursive and gate units. Most layers within one recursive unit do not receive information from their preceding layers or memory block. In our proposed RDN, the output of an RDB has direct access to each layer of the next RDB. Also, the information of each convolutional layer flows into all the subsequent layers within one RDB. Furthermore, local residual learning in the RDB improves the flow of information and gradients and the performance, which is demonstrated in Section 5.2. Third, as analyzed above, the current memory block does not fully make use of the information of the output of the preceding block and its layers. Even though MemNet adopts dense connections among memory blocks in the HR space, MemNet fails to fully extract hierarchical features from the original LR inputs. After extracting local dense features with RDBs, our RDN further fuses the hierarchical features from all the preceding layers in a global way in the LR space. As shown in Table I, RDN achieves better results than MemNet. For other image restoration tasks, such as image DN, RDN also reconstructs better outputs (see Section 6.3).
[Table: comparison of block connection schemes — dense connections vs. contiguous memory (w/o LRL and with LRL).]
5 Network Investigations
5.1 Study of D, C, and G
In this subsection, we investigate the basic network parameters: the number of RDBs (denoted as D for short), the number of Conv layers per RDB (denoted as C for short), and the growth rate (denoted as G for short). We use the performance of SRCNN as a reference. As shown in Figure 4, larger D or C leads to higher performance. This is mainly because the network becomes deeper with larger D or C. As our proposed LFF allows larger G, we also observe that larger G (see Figure 4) contributes to better performance. On the other hand, RDN with smaller D, C, or G suffers some performance drop during training, but still outperforms SRCNN. More importantly, our RDN allows deeper and wider networks, where more hierarchical features are extracted for higher performance.
[Figure 5: Convergence analysis of different combinations of CM, LRL, and GFF in 200 epochs.]
5.2 Ablation Investigation
Table II shows the ablation investigation on the effects of contiguous memory (CM), local residual learning (LRL), and global feature fusion (GFF). The eight networks have the same RDB number (D = 20), Conv number per RDB (C = 6), and growth rate (G = 32). We find that local feature fusion (LFF) is needed to train these networks properly, so LFF is not removed by default. The baseline (denoted as RDN_CM0LRL0GFF0) is obtained without CM, LRL, or GFF and performs very poorly (PSNR = 34.87 dB). This is caused by the difficulty of training and also demonstrates that stacking many basic dense blocks in a very deep network would not result in better performance.
We then add one of CM, LRL, or GFF to the baseline, resulting in RDN_CM1LRL0GFF0, RDN_CM0LRL1GFF0, and RDN_CM0LRL0GFF1 respectively (the 2nd to 4th combinations in Table II). We can verify that each component efficiently improves the performance of the baseline. This is mainly because each component contributes to the flow of information and gradients.
We further add two components to the baseline, resulting in RDN_CM1LRL1GFF0, RDN_CM1LRL0GFF1, and RDN_CM0LRL1GFF1 respectively (the 5th to 7th combinations in Table II). It can be seen that two components perform better than one. A similar phenomenon can be seen when we use all three components simultaneously (denoted as RDN_CM1LRL1GFF1). RDN using all three components performs the best.
We also visualize the convergence process of these eight combinations in Figure 5. The convergence curves are consistent with the analyses above and show that CM, LRL, and GFF can further stabilize the training process without obvious performance drop. These quantitative and visual analyses demonstrate the effectiveness and benefits of our proposed CM, LRL, and GFF.
5.3 Model Size, Performance, and Test Time
We also compare model size, performance, and test time with other methods on Set14 in Table III. Compared with EDSR, our RDN has about half the number of parameters and obtains better results. Although our RDN has more parameters than other methods, RDN achieves comparable (e.g., MDSR) or even less test time (e.g., MemNet). We further visualize the performance and test time comparison in Figure 6. We can see that our RDN achieves a good trade-off between performance and running time.
6 Experimental Results
The source code and models of the proposed method can be downloaded at https://github.com/yulunzhang/RDN.
Datasets and Metrics. Recently, Timofte et al. released a high-quality (2K resolution) dataset DIV2K for image restoration applications. DIV2K consists of 800 training images, 100 validation images, and 100 test images. We train all our models with the 800 training images and use 5 validation images during training. For testing, we use five standard benchmark datasets: Set5, Set14, B100, Urban100, and Manga109 for image SR. We use Kodak24 (http://r0k.us/graphics/kodak/), BSD68, and Urban100 for color and gray image DN. LIVE1 and Classic5 are used for image CAR. The image SR and CAR results are evaluated with PSNR and SSIM on the Y channel (i.e., luminance) of the transformed YCbCr space.
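The evaluation protocol above can be sketched: convert RGB to the BT.601 Y (luminance) channel and compute PSNR on it. The helpers below are illustrative (assuming 8-bit intensities with peak value 255), not the exact evaluation scripts used in the paper:

```python
import numpy as np

def rgb_to_y(img):
    """Luminance (Y) channel of the ITU-R BT.601 YCbCr transform,
    for an RGB image of shape (H, W, 3) with values in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Two Y-channel images differing by a constant offset of 1 gray level:
# PSNR = 20*log10(255) ~= 48.13 dB.
ref = np.full((16, 16), 100.0)
print(round(psnr(ref, ref + 1.0), 2))  # 48.13
```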
Degradation Models. In order to fully demonstrate the effectiveness of our proposed RDN, we use three degradation models to simulate LR images for image SR. The first one is bicubic downsampling, implemented with the Matlab function imresize with the option bicubic (denoted as BI for short). We use the BI model to simulate LR images with scaling factors ×2, ×3, and ×4. Similar to prior work, the second one is to blur the HR image with a 7×7 Gaussian kernel with standard deviation 1.6. The blurred image is then downsampled with scaling factor ×3 (denoted as BD for short). We further produce LR images in a more challenging way: we first bicubic downsample the HR image with scaling factor ×3 and then add Gaussian noise with noise level 30 (denoted as DN for short).
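The BD degradation model can be sketched in NumPy: build a normalized Gaussian kernel (7×7, σ = 1.6, per the description above), blur, then downsample by striding. This is an illustrative re-implementation, not the Matlab code used in the experiments:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.6):
    """Normalized 2-D Gaussian blur kernel of shape (size, size)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def bd_degrade(hr, scale=3):
    """BD model sketch: Gaussian-blur a gray HR image, then downsample by striding."""
    k = gaussian_kernel()
    pad = k.shape[0] // 2
    padded = np.pad(hr, pad, mode='edge')
    h, w = hr.shape
    blurred = np.empty_like(hr, dtype=np.float64)
    for i in range(h):          # naive correlation; fine for a sketch
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + 2 * pad + 1, j:j + 2 * pad + 1] * k)
    return blurred[::scale, ::scale]

hr = np.random.default_rng(2).random((12, 12))
lr = bd_degrade(hr, scale=3)
print(lr.shape)  # (4, 4)
```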
Training Setting. Following the settings of prior work, in each training batch, we randomly extract 16 LQ RGB patches of size 32×32 as inputs. We randomly augment the patches by flipping horizontally or vertically and rotating 90°. 1,000 iterations of back-propagation constitute an epoch. We implement our RDN with the Torch7 framework and update it with the Adam optimizer. The learning rate is initialized to $10^{-4}$ for all layers and is halved every 200 epochs. Training an RDN roughly takes 1 day with a Titan Xp GPU for 200 epochs.
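The learning-rate schedule above (initialized to $10^{-4}$, halved every 200 epochs) can be written as a one-liner; `learning_rate` is a hypothetical helper for illustration:

```python
def learning_rate(epoch, base_lr=1e-4, step=200):
    """Learning rate halved every `step` epochs, starting from `base_lr`."""
    return base_lr * 0.5 ** (epoch // step)

print(learning_rate(0), learning_rate(200), learning_rate(400))
# 0.0001 5e-05 2.5e-05
```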
6.2 Image Super-Resolution
6.2.1 Results with BI Degradation Model
Simulating LR images with the BI degradation model is widely used in image SR settings. For the BI degradation model, we compare our RDN with state-of-the-art image SR methods: SRCNN, FSRCNN, SCN, VDSR, LapSRN, MemNet, SRDenseNet, MSLapSRN, EDSR, SRMDNF, and D-DBPN. Similar to [62, 23], we also adopt the self-ensemble strategy to further improve our RDN and denote the self-ensembled RDN as RDN+. Here, we additionally use Flickr2K as training data, which is also used in SRMDNF and D-DBPN. As analyzed above, a deeper and wider RDN leads to better performance. On the other hand, as most methods for comparison only use about 64 filters per Conv layer, we report results of RDN using D = 16, C = 8, and G = 64 for a fair comparison.
Table IV shows quantitative comparisons for ×2, ×3, and ×4 SR. Results of SRDenseNet are cited from their paper. When compared with persistent CNN models (SRDenseNet and MemNet), our RDN performs the best on all datasets with all scaling factors. This indicates the better effectiveness of our residual dense block (RDB) over the dense block in SRDenseNet and the memory block in MemNet. When compared with the remaining models, our RDN also achieves the best average results on most datasets. Specifically, for one of the scaling factors, our RDN performs the best on all datasets. EDSR uses far more filters (i.e., 256) per Conv layer, leading to a very wide network with a large number of parameters (i.e., 43 M). Our RDN has about half the number of parameters and achieves better performance.
In Figure 7, we show visual comparisons at large scaling factors. We observe that most of the compared methods cannot recover the lost details in the LR image (e.g., "img_004"), even though EDSR and D-DBPN can reconstruct partial details. In contrast, our RDN recovers sharper and clearer edges, more faithful to the ground truth. In image "img_092", some unwanted artifacts are generated in the degradation process. All the compared methods fail to handle such a case and even enlarge the mistake. However, our RDN alleviates the degradation artifacts and recovers correct structures. When the scaling factor grows larger, more structural and textural details are lost. Even we human beings can hardly distinguish the semantic content in the LR images. Most compared methods cannot recover the lost details either. However, with the usage of hierarchical features through dense feature fusion, our RDN reconstructs better visual results with clearer structures.
6.2.2 Results with BD and DN Degradation Models
Following prior work, we also show the SR results with the BD degradation model and further introduce the DN degradation model. Our RDN is compared with SPMSR, SRCNN, FSRCNN, VDSR, IRCNN_G, and IRCNN_C. We re-train SRCNN, FSRCNN, and VDSR for each degradation model. Table V shows the average PSNR and SSIM results on Set5, Set14, B100, Urban100, and Manga109 with scaling factor ×3. Our RDN and RDN+ perform the best on all the datasets with the BD and DN degradation models. The performance gains over other state-of-the-art methods are consistent with the visual results in Figures 8 and 9.
For the BD degradation model (Figure 8), the methods using an interpolated LR image as input produce noticeable artifacts and are unable to remove the blurring. In contrast, our RDN suppresses the blurring artifacts and recovers sharper edges. This comparison indicates that extracting hierarchical features from the original LR image alleviates the blurring artifacts. It also demonstrates the strong ability of RDN for the BD degradation model.
For the DN degradation model (Figure 9), the LR image is corrupted by noise and loses some details. We observe that the noisy details are hard to recover for other methods [54, 12, 19]. However, our RDN can not only handle the noise efficiently, but also recover more details. This comparison indicates that RDN is applicable to joint image denoising and SR. These results with the BD and DN degradation models demonstrate the effectiveness and robustness of our RDN model.
6.2.3 Super-Resolving Real-World Images
We also conduct SR experiments on two representative real-world images, "chip" (with 244×200 pixels) and "hatc" (with 133×174 pixels). In this case, the original HR images are not available and the degradation model is unknown either. We compare our RDN with VDSR, LapSRN, and MemNet. As shown in Figure 10, our RDN recovers sharper edges and finer details than other state-of-the-art methods. These results further indicate the benefits of learning dense features from the original input image. The hierarchical features perform robustly for different or unknown degradation models.
6.3 Image Denoising
We compare our RDN with recent leading Gaussian denoising methods: BM3D , CBM3D , TNRD , RED , DnCNN , MemNet , IRCNN , and FFDNet . Kodak24 (http://r0k.us/graphics/kodak/), BSD68 , and Urban100  are used for gray-scale and color image denoising. Noisy images are obtained by adding AWGN of different noise levels to the clean images.
6.3.1 Gray-scale Image Denoising
The PSNR results are shown in Table VI. One can see that on all three test sets with four noise levels, our RDN+ achieves the highest average PSNR values. On average, for noise level , our RDN achieves 0.22 dB, 0.11 dB, and 0.88 dB gains over FFDNet  on the three test sets, respectively. The gains on Urban100 are larger, mainly because our method exploits a larger scope of context information with hierarchical features. Moreover, for noise levels σ = 30, 50, and 70, the gains of RDN over BM3D are larger than 0.7 dB, breaking the estimated PSNR bound (0.7 dB) over BM3D in .
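The reported gains are differences in PSNR, which for 8-bit images follows directly from the mean squared error. A standard definition (our own sketch, not code from the paper):

```python
import numpy as np

def psnr(clean, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((clean.astype(np.float64) - restored) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Since PSNR is logarithmic in the MSE, a fixed dB gain corresponds to a multiplicative reduction in MSE: a 0.7 dB gain means the MSE shrinks by a factor of about 10^0.07 ≈ 1.17.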
We show visual gray-scale denoising results of different methods in Figure 11. BM3D preserves image structure to some degree but fails to remove the noise thoroughly. TNRD  tends to generate artifacts in smooth regions. RED , DnCNN , MemNet , and IRCNN  over-smooth edges, mainly due to limited network ability at high noise levels (e.g., σ = 50). In contrast, our RDN removes the noise effectively and recovers more details (e.g., the tiny lines in “img_061”). The gray-scale results of our RDN in smooth regions are also more faithful to the clean images (e.g., the smooth regions in “119082” and “img_061”).
6.3.2 Color Image Denoising
We generate noisy color images by adding AWGN to clean RGB images with noise levels σ = 10, 30, 50, and 70. The PSNR results are listed in Table VII. We apply gray-scale image denoising methods (e.g., MemNet ) to color image denoising channel by channel. The larger gains of our RDN over MemNet  indicate that denoising the color channels jointly performs better than denoising each channel separately. Taking σ = 50 as an example, our RDN obtains 0.56 dB, 0.35 dB, and 1.24 dB improvements over FFDNet  on the three test sets, respectively. Residual learning and dense feature fusion allow RDN to go wider and deeper, obtain hierarchical features, and achieve better performance.
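The channel-by-channel protocol used to evaluate gray-scale denoisers on color input can be sketched as follows; `gray_denoiser` is a placeholder for a single-channel model such as RED or MemNet, while a joint color model instead consumes all three channels at once and can exploit inter-channel correlation:

```python
import numpy as np

def denoise_channelwise(noisy_rgb, gray_denoiser):
    """Apply a single-channel denoiser to R, G, and B independently,
    then re-stack the channels. Inter-channel correlation is ignored."""
    return np.stack(
        [gray_denoiser(noisy_rgb[..., c]) for c in range(3)],
        axis=-1,
    )
```

The gap this leaves is exactly what the joint models close: noise realizations differ per channel but the underlying edges coincide, so a joint denoiser can average evidence across channels where a channel-wise one cannot.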
We also show color image denoising visual results in Figure 12. CBM3D  tends to produce artifacts along edges. TNRD  produces artifacts in smooth areas and is unable to recover clear edges. RED , DnCNN , MemNet , IRCNN , and FFDNet  produce blurring artifacts along edges (e.g., the structural lines in “img_039”). Since RED  and MemNet  were designed for gray-scale image denoising, we run them on each channel separately in our color denoising experiments. Although DnCNN , IRCNN , and FFDNet  directly denoise the three channels of noisy color images jointly, they fail to recover either sharp edges or clean smooth areas. In contrast, our RDN recovers sharper edges and cleaner smooth areas.
6.4 Image Compression Artifact Reduction
We further apply our RDN to reduce image compression artifacts. We compare our RDN with SA-DCT , ARCNN , TNRD , and DnCNN . We use the Matlab JPEG encoder  to generate compressed test images from LIVE1  and Classic5 . Four JPEG quality settings q = 10, 20, 30, and 40 are used in the Matlab JPEG encoder. Here, we only focus on compression artifact reduction (CAR) of the Y channel (in YCbCr space) for a fair comparison with other methods.
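The test-set generation step can be sketched as follows. Pillow stands in for the Matlab JPEG encoder used in the paper (their quality scales are similar but not identical), and the function name is ours:

```python
import io

import numpy as np
from PIL import Image

def jpeg_compress_y(rgb_array, quality):
    """Compress an RGB image as JPEG at the given quality setting and
    return the Y channel (YCbCr space) of the decoded result."""
    buf = io.BytesIO()
    Image.fromarray(rgb_array).save(buf, format="JPEG", quality=quality)
    compressed = Image.open(buf).convert("YCbCr")
    return np.asarray(compressed)[..., 0]
```

Restricting evaluation to the Y channel matches the earlier CAR methods, which were trained and tested on luminance only.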
We report PSNR/SSIM values in Table VIII. As we can see, our RDN and RDN+ achieve higher PSNR and SSIM values than the other compared methods on LIVE1 and Classic5 with all JPEG qualities. Taking the lowest quality setting as an example, our RDN achieves 0.48 dB and 0.60 dB improvements over DnCNN  in terms of PSNR. Even in such a challenging case (very low compression quality), our RDN still obtains notable performance gains over the others, and similar improvements hold for the other compression qualities. These comparisons further demonstrate the effectiveness of our proposed RDN.
Visual comparisons are further shown in Figure 13, where we provide comparisons under very low image quality (q = 10). Although ARCNN , TNRD , and DnCNN  can remove blocking artifacts to some degree, they also over-smooth some details (e.g., 1st and 2nd rows in Figure 13) and cannot fully remove the compression artifacts around content structures (e.g., 3rd and 4th rows in Figure 13). In contrast, RDN has stronger representation ability to better distinguish compression artifacts from content information. As a result, RDN recovers more details with consistent content structures.
Here, we briefly discuss the benefits and limitations of our RDN, as well as remaining challenges in image restoration.
Benefits of RDN. RDN is built on RDB modules, in which features from local layers are fully used through dense connections among the layers. RDB allows direct connections from the preceding RDB to each Conv layer of the current RDB, resulting in a contiguous memory (CM) mechanism. Local feature fusion (LFF) adaptively preserves the information from the current and preceding RDBs. With local residual learning (LRL), the flow of gradient and information is further improved, and training a wider network becomes more stable. Such local feature extraction and global feature fusion lead to dense feature fusion and deep supervision.
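The RDB data flow described above can be sketched as follows. For brevity, the 3×3 convolutions are replaced by random 1×1 channel mappings (so spatial context is ignored), and the growth rate G and layer count C are illustrative values rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, out_ch):
    """Stand-in for a Conv+ReLU layer: a random linear map over channels."""
    w = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return np.maximum(x @ w, 0.0)  # ReLU

def rdb(x, G=32, C=6):
    """Residual dense block: C densely connected layers, then LFF and LRL."""
    feats = [x]                           # dense connections: keep every output
    for _ in range(C):
        concat = np.concatenate(feats, axis=-1)
        feats.append(conv1x1(concat, G))  # each layer sees all preceding ones
    fused = np.concatenate(feats, axis=-1)
    w = rng.standard_normal((fused.shape[-1], x.shape[-1])) * 0.1
    lff = fused @ w                       # local feature fusion (1x1 conv)
    return x + lff                        # local residual learning
```

Note how the channel count grows by G per layer until LFF compresses it back to the input width, which is what lets the block's input pass unchanged to every internal layer while keeping the block's output size fixed.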
Limitations of RDN. In some challenging cases, RDN may fail to reconstruct the proper structures. As shown in Figure 14, the other methods fail to recover the proper structures, and our RDN also generates wrong ones. A likely reason for this failure case is that RDN concentrates more on local features and does not extract enough global features. As a result, RDN generates better local structures than the others, while the recovered global structures are wrong.
Challenges in image restoration. Extreme cases make image restoration tasks much harder, such as very large scaling factors for image SR, heavy noise for image DN, and low JPEG quality for image CAR. Moreover, complex degradation processes in the real world are difficult to formulate, which in turn makes data preparation and network training harder.
In this paper, we proposed a very deep residual dense network (RDN) for image restoration, where the residual dense block (RDB) serves as the basic building module. RDN takes advantage of local and global feature fusion, obtaining very powerful representational ability. RDN uses fewer network parameters than residual networks while achieving better performance than dense networks, striking a good tradeoff between model size and performance. We apply the same RDN to handle three degradation models and real-world data in image SR, and further extend RDN to image denoising and compression artifact reduction. Extensive benchmark evaluations demonstrate the superiority of our RDN over state-of-the-art methods. In future work, our RDN may further benefit from adversarial training, which may help alleviate blurring artifacts. Moreover, it is worth investigating how to apply RDN to other image restoration tasks, such as image demosaicing and deblurring. We also want to connect low-level and high-level vision tasks with our RDN: when the inputs suffer from quality degradation, the performance of high-level vision tasks also drops notably, and we plan to investigate how image restoration can alleviate such performance decrease.
This research is supported in part by the NSF IIS award 1651902, ONR Young Investigator Award N00014-14-1-0484, and U.S. Army Research Office Award W911NF-17-1-0367.
-  W. W. Zou and P. C. Yuen, “Very low resolution face recognition problem,” IEEE Trans. Image Process., vol. 21, no. 1, pp. 327–340, Jan. 2012.
-  W. Shi, J. Caballero, C. Ledig, X. Zhuang, W. Bai, K. Bhatia, A. M. S. M. de Marvao, T. Dawes, D. O’Regan, and D. Rueckert, “Cardiac image super-resolution with global correspondence using multi-atlas patchmatch,” in Medical Image Computing and Computer Assisted Intervention, 2013.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of gans for improved quality, stability, and variation,” in Proc. International Conference on Learning Representations, 2018.
-  L. Zhang and X. Wu, “An edge-guided image interpolation algorithm via directional filtering and data fusion,” IEEE Trans. Image Process., 2006.
-  K. Zhang, X. Gao, D. Tao, and X. Li, “Single image super-resolution with non-local means and steering kernel regression,” IEEE Trans. Image Process., 2012.
-  R. Timofte, V. De, and L. V. Gool, “Anchored neighborhood regression for fast example-based super-resolution,” in Proc. IEEE Int. Conf. Comput. Vis., 2013.
-  R. Timofte, V. De Smet, and L. Van Gool, “A+: Adjusted anchored neighborhood regression for fast super-resolution,” in Proc. IEEE Asian Conf. Comput. Vis., 2014.
-  T. Peleg and M. Elad, “A statistical prediction model based on sparse representations for single image super-resolution.” IEEE Trans. Image Process., 2014.
-  C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in Proc. Eur. Conf. Comput. Vis., 2014.
-  S. Schulter, C. Leistner, and H. Bischof, “Fast and accurate image upscaling with super-resolution forests,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2015.
-  J.-B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2015.
-  J. Kim, J. Kwon Lee, and K. Mu Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2016.
-  T. Tong, G. Li, X. Liu, and Q. Gao, “Image super-resolution using dense skip connections,” in Proc. IEEE Int. Conf. Comput. Vis., 2017.
-  K. Zhang, W. Zuo, and L. Zhang, “Learning a single convolutional super-resolution network for multiple degradations,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2018.
-  Y. Chen and T. Pock, “Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration,” IEEE Trans. Pattern Anal. Mach. Intell., 2017.
-  X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Proc. Adv. Neural Inf. Process. Syst., 2016.
-  K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising,” IEEE Trans. Image Process., vol. 26, no. 7, pp. 3142–3155, Jul. 2017.
-  Y. Tai, J. Yang, X. Liu, and C. Xu, “Memnet: A persistent memory network for image restoration,” in Proc. IEEE Int. Conf. Comput. Vis., 2017.
-  K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep cnn denoiser prior for image restoration,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2017.
-  K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn based image denoising,” arXiv preprint arXiv:1710.04026, 2017.
-  C. Dong, Y. Deng, C. Change Loy, and X. Tang, “Compression artifacts reduction by a deep convolutional network,” in IEEE Int. Conf. Comput. Vis., Dec. 2015, pp. 576–584.
-  J. Kim, J. Kwon Lee, and K. Mu Lee, “Deeply-recursive convolutional network for image super-resolution,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2016.
-  B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog. Workshop, 2017.
-  C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in Association for the Advancement of Artificial Intelligence, 2017.
-  W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Deep laplacian pyramid networks for fast and accurate super-resolution,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2017.
-  G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2017.
-  C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, “Deeply-supervised nets,” in Proc. International Conference on Artificial Intelligence and Statistics, 2015.
-  Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2018.
-  H. Zhang, V. Sindagi, and V. M. Patel, “Image de-raining using a conditional generative adversarial network,” arXiv preprint arXiv:1701.05957, 2017.
-  H. Zhang and V. M. Patel, “Density-aware single image de-raining using a multi-stream dense network,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2018.
-  ——, “Densely connected pyramid dehazing network,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2018.
-  K. Li, Z. Wu, K.-C. Peng, J. Ernst, and Y. Fu, “Tell me where to look: Guided attention inference network,” arXiv preprint arXiv:1802.10171, 2018.
-  R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, L. Zhang, B. Lim, S. Son, H. Kim, S. Nah, K. M. Lee et al., “Ntire 2017 challenge on single image super-resolution: Methods and results,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog. Workshop, 2017.
-  M. Haris, G. Shakhnarovich, and N. Ukita, “Deep back-projection networks for super-resolution,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2018.
-  C. Ancuti, C. O. Ancuti, R. Timofte, L. Van Gool, L. Zhang, M.-H. Yang, V. M. Patel, H. Zhang, V. A. Sindagi, R. Zhao et al., “Ntire 2018 challenge on image dehazing: Methods and results,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog. Workshop, 2018.
-  Y. Blau, R. Mechrez, R. Timofte, T. Michaeli, and L. Zelnik-Manor, “2018 pirm challenge on perceptual image super-resolution,” in Proc. Eur. Conf. Comput. Vis. Workshop, 2018.
-  K. Yu, C. Dong, L. Lin, and C. C. Loy, “Crafting a toolchain for image restoration by deep reinforcement learning,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2018, pp. 2443–2452.
-  X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, C. C. Loy, Y. Qiao, and X. Tang, “Esrgan: Enhanced super-resolution generative adversarial networks,” in Proc. Eur. Conf. Comput. Vis. Workshop, 2018.
-  Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in Proc. Eur. Conf. Comput. Vis., 2018.
-  Y. Tai, J. Yang, and X. Liu, “Image super-resolution via deep recursive residual network,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2017.
-  C. Dong, C. C. Loy, and X. Tang, “Accelerating the super-resolution convolutional neural network,” in Proc. Eur. Conf. Comput. Vis., 2016.
-  W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2016.
-  C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2016.
-  Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural computation, vol. 1, no. 4, pp. 541–551, 1989.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proc. Adv. Neural Inf. Process. Syst., 2012.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 2015, pp. 1–9.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proc. Int. Conf. Mach. Learn., 2015.
-  X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proc. International Conference on Artificial Intelligence and Statistics, 2011.
-  M. Bevilacqua, A. Roumy, C. Guillemot, and M. L. Alberi-Morel, “Low-complexity single-image super-resolution based on nonnegative neighbor embedding,” in Proc. Brit. Mach. Vis. Conf., 2012.
-  R. Zeyde, M. Elad, and M. Protter, “On single image scale-up using sparse-representations,” in Proc. 7th Int. Conf. Curves Surf., 2010.
-  D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proc. IEEE Int. Conf. Comput. Vis., 2001.
-  C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell., 2016.
-  Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang, “Deep networks for image super-resolution with sparse prior,” in Proc. IEEE Int. Conf. Comput. Vis., 2015.
-  W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Fast and accurate image super-resolution with deep laplacian pyramid networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. PP, no. 99, pp. 1–14, 2018.
-  Y. Matsui, K. Ito, Y. Aramaki, A. Fujimoto, T. Ogawa, T. Yamasaki, and K. Aizawa, “Sketch-based manga retrieval using manga109 dataset,” Multimedia Tools and Applications, 2017.
-  H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, “Live image quality assessment database release 2 (2005),” 2005.
-  A. Foi, V. Katkovnik, and K. Egiazarian, “Pointwise shape-adaptive dct for high-quality denoising and deblocking of grayscale and color images,” IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1395–1411, 2007.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., 2004.
-  D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. International Conference on Learning Representations, 2014.
-  R. Timofte, R. Rothe, and L. Van Gool, “Seven ways to improve example-based single image super resolution,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2016.
-  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process., 2007.
-  Y. Zhang, Y. Zhang, J. Zhang, D. Xu, Y. Fu, Y. Wang, X. Ji, and Q. Dai, “Collaborative representation cascade for single image super-resolution,” IEEE Trans. Syst., Man, Cybern., Syst., vol. PP, no. 99, pp. 1–11, 2017.
-  K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminance-chrominance space,” in Proc. IEEE Int. Conf. Image Process., 2007.
-  A. Levin, B. Nadler, F. Durand, and W. T. Freeman, “Patch complexity, finite pixel correlations and optimal denoising,” in Proc. Eur. Conf. Comput. Vis., 2012.
-  J. Jancsary, S. Nowozin, and C. Rother, “Loss-specific training of non-parametric image restoration models: A new state of the art,” in Proc. Eur. Conf. Comput. Vis. Springer, Oct. 2012, pp. 112–125.