Spatial-Adaptive Network for Single Image Denoising

01/28/2020 ∙ by Meng Chang, et al. ∙ Zhejiang University

Previous works have shown that convolutional neural networks can achieve good performance on image denoising tasks. However, limited by the local rigid convolution operation, these methods produce oversmoothing artifacts. A deeper network structure can alleviate this problem but requires more computational overhead. In this paper, we propose a novel spatial-adaptive denoising network (SADNet) for efficient single-image blind noise removal. To adapt to changes in spatial textures and edges, we design a residual spatial-adaptive block in which deformable convolution is introduced to sample spatially correlated features for weighting. An encoder-decoder structure with a context block is introduced to capture multiscale information. By removing noise from coarse to fine, a high-quality noise-free image can be obtained. We apply our method to both synthetic and real noisy image datasets. The experimental results demonstrate that our method surpasses the state-of-the-art denoising methods both quantitatively and visually.


1 Introduction

Figure 1: A real image denoising example from the SIDD dataset. Compared with other denoising methods, SADNet can recover structure and texture without other artifacts.

Image denoising is an important task in computer vision. During image acquisition, noise is often unavoidable due to the limitations of the imaging environment and equipment. Therefore, noise removal is an essential step, not only for visual quality but also for other computer vision tasks. Image denoising has a long history, and many methods have been proposed. Many of the earlier model-based methods find natural image priors and then apply optimization algorithms to solve the model iteratively [22, 2, 29, 40]. These methods are time consuming and often cannot effectively remove noise. With the rise of deep learning, convolutional neural networks (CNNs) have been applied to image denoising tasks and have achieved high-quality results.

On the other hand, earlier works assume that noise is independent and identically distributed, and additive white Gaussian noise (AWGN) is often adopted for synthetic noisy images. However, real noise has more complicated forms, which are spatially variant and channel dependent. Therefore, some recent works have made progress in real image denoising [25, 38, 12, 4].

However, despite the many advances in image denoising, some issues remain. A traditional CNN can only use the features in a local fixed-location neighborhood, which may be irrelevant or even detrimental to the current location. Due to this lack of adaptability to textures and edges, CNN-based methods produce oversmoothing artifacts and lose some details. In addition, the receptive field of a traditional CNN is relatively small, so many methods deepen the network structure [26] or use non-local modules to expand the receptive field [17, 36]. However, these strategies lead to high computational memory and time consumption, which hinders their practical application.

In this paper, we propose a spatial-adaptive denoising network (SADNet) to address the above issues. A residual spatial-adaptive block (RSAB) is designed to adapt to changes in spatial textures and edges, and we introduce modulated deformable convolution in each RSAB to sample spatially relevant features for weighting. Moreover, we incorporate the RSABs and residual blocks (ResBlocks) in an encoder-decoder structure to remove noise from coarse to fine. To further enlarge the receptive field and capture multiscale information, a context block is applied at the coarsest scale. Compared to the state-of-the-art methods, our method achieves good performance while maintaining a relatively small computational overhead.

In summary, the main contributions of our method are as follows:

  • We propose a novel spatial-adaptive denoising network for efficient noise removal. The network can capture the relevant features from complex image content, and recover details and textures from heavy noise.

  • We propose the residual spatial-adaptive block, which introduces deformable convolution to adapt to spatial textures and edges. In addition, with a multiscale structure and context block to capture multiscale information, we can estimate offsets and remove noise from coarse to fine.

  • We experiment on multiple synthetic image datasets and real noisy datasets. The results demonstrate that our model achieves state-of-the-art performance on both synthetic and real images with a relatively small computational overhead.

2 Related works

In general, image denoising methods are either model-based or learning-based. Model-based methods attempt to model the distribution of natural images or noise and, with this distribution as the prior, obtain clean images via optimization algorithms. Common priors include local smoothness [22, 29], sparsity [2, 19, 32], non-local self-similarity [5, 9, 8, 31, 11] and external statistical priors [40, 33]. Non-local self-similarity is a notable prior in image denoising: it assumes that image information is redundant and that similar structures exist within a single image, so self-similar patches can be found and used to remove noise. Many methods based on the non-local self-similarity prior have been proposed, including NLM [5], BM3D [9, 8] and WNNM [11, 31], which are still widely used.

With the popularity of deep neural networks, learning-based denoising methods have developed rapidly. Some works combine natural priors with deep neural networks: TNRD [7] introduced the field-of-experts prior into a deep network, and NLNet [16] combined the non-local self-similarity prior with a CNN. Limited by the designed priors, their performance is often inferior to that of end-to-end CNN methods. DnCNN [34] introduced residual learning and batch normalization to implement end-to-end denoising. FFDNet [35] introduced a noise-level map as input and enhanced the flexibility of the network for non-uniform noise. MemNet [26] proposed a very deep end-to-end persistent memory network for image restoration, which fuses both short-term and long-term memories to capture different levels of information. Inspired by the non-local self-similarity prior, a non-local module [27] was designed for neural networks. NLRN [17] incorporated non-local modules into a recurrent neural network (RNN) for image restoration, N3Net [25] proposed a neural nearest neighbors block to achieve the non-local operation, and RNAN [36] designed non-local attention blocks to capture global information and pay more attention to the challenging parts. However, these non-local operations lead to high computational memory and time consumption.

Researchers now realize that real noise has a more complicated distribution, and some recent works have therefore focused on real noisy images. Several real-noise datasets have been established by capturing real noisy scenes [24, 3, 1], which has promoted research on real denoising methods. N3Net [25] demonstrated its effectiveness on a real noisy dataset. CBDNet [12] trained two subnetworks to sequentially estimate the noise and perform non-blind denoising. PD [38] applied a pixel-shuffle downsampling strategy to approximate real noise with AWGN, which adapts AWGN-trained models to real noise. RIDNet [4] proposed a one-stage denoising network with feature attention for real image denoising. However, these methods lack adaptability to the image content and result in oversmoothing artifacts.

3 Framework

Figure 2: The framework of our proposed spatial-adaptive denoising network.

The architecture of our proposed spatial-adaptive denoising network (SADNet) is shown in Fig. 2. Let $x$ denote a noisy input image and $\hat{y}$ denote the corresponding denoised output image. Then our model can be described as follows:

$\hat{y} = \mathcal{F}(x; \Theta)$   (1)

where $\mathcal{F}$ denotes the SADNet and $\Theta$ denotes its parameters.

We use one convolutional layer to extract the initial features from the noisy input and then feed them into a multiscale encoder-decoder architecture. In the encoder component, we use ResBlocks [14] to extract features at different scales. However, unlike the original ResBlock, we remove batch normalization and use the leaky ReLU [18] as the activation function. To avoid damaging image structures, we limit the number of downsampling operations and implement a context block to further enlarge the receptive field and capture multiscale information. Then, in the decoder component, we design the residual spatial-adaptive block (RSAB) to sample and weight correlated features to remove noise and reconstruct textures. In addition, we estimate the offsets and transfer them from coarse to fine, which is beneficial for obtaining more accurate feature locations. Finally, the reconstructed features are fed to the last convolutional layer to restore the denoised image. With the long residual connection, the network only needs to learn the noise component.
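To make this data flow concrete, the following is a minimal structural sketch, not the authors' implementation: plain strided convolutions stand in for the ResBlock encoder stages, the RSAB decoder stages, and the context block (all defined below), and only the scaffolding of Fig. 2, including the long residual connection, is reproduced.

```python
# A minimal sketch of the SADNet data flow (stand-in blocks, assumed shapes).
import torch
import torch.nn as nn

class EncoderDecoderSketch(nn.Module):
    def __init__(self, in_ch=3, feats=(32, 64, 128, 256)):  # widths from Sec. 3.3
        super().__init__()
        self.head = nn.Conv2d(in_ch, feats[0], 3, padding=1)   # initial features
        # Encoder: stand-ins for ResBlocks, with downsampling between scales.
        self.down = nn.ModuleList(
            nn.Conv2d(feats[i], feats[i + 1], 3, stride=2, padding=1)
            for i in range(len(feats) - 1))
        # Stand-in for the context block at the coarsest scale (Sec. 3.2).
        self.context = nn.Conv2d(feats[-1], feats[-1], 3, padding=1)
        # Decoder: stand-ins for RSABs (Sec. 3.1), with upsampling between scales.
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(feats[i + 1], feats[i], 2, stride=2)
            for i in reversed(range(len(feats) - 1)))
        self.tail = nn.Conv2d(feats[0], in_ch, 3, padding=1)

    def forward(self, x):
        f, skips = self.head(x), []
        for down in self.down:
            skips.append(f)          # keep encoder features for later fusion
            f = down(f)
        f = self.context(f)
        for up in self.up:
            f = up(f) + skips.pop()  # fuse encoder and decoder features per scale
        return x + self.tail(f)      # long residual: the network learns the noise

print(EncoderDecoderSketch()(torch.randn(1, 3, 64, 64)).shape)  # (1, 3, 64, 64)
```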

In addition to the network architecture, the loss function is crucial to performance. Several loss functions, such as $L_2$, $L_1$, perceptual loss, and asymmetric loss, have been used in denoising tasks. In general, $L_2$ and $L_1$ are the two most commonly used loss functions in previous works. The $L_2$ loss has good confidence in Gaussian noise, whereas the $L_1$ loss has better tolerance for outliers. Given a batch of training pairs $\{x_i, y_i\}_{i=1}^{N}$ that contains noisy inputs $x_i$ and their corresponding noise-free ground truths $y_i$, the $L_2$ loss is defined as follows:

$\mathcal{L}_2(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \left\| \mathcal{F}(x_i; \Theta) - y_i \right\|_2^2$   (2)

where $\Theta$ denotes the learned parameters of the network. In the same way, the $L_1$ loss can be expressed as follows:

$\mathcal{L}_1(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \left\| \mathcal{F}(x_i; \Theta) - y_i \right\|_1$   (3)

In our experiments, we use the $L_2$ loss for training on synthetic image datasets and the $L_1$ loss for training on real-noise image datasets.
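Rendered in PyTorch, Eqs. (2) and (3) reduce to the built-in mean-squared-error and mean-absolute-error criteria (a sketch; `model` stands for any denoiser $\mathcal{F}(\cdot;\Theta)$):

```python
# L2 (Eq. 2) and L1 (Eq. 3) losses over a batch of noisy/clean pairs (x, y).
import torch.nn.functional as F

def l2_loss(model, x, y):
    return F.mse_loss(model(x), y)  # suited to Gaussian (synthetic) noise

def l1_loss(model, x, y):
    return F.l1_loss(model(x), y)   # more tolerant of outliers; used for real noise
```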

In the following sections, we will focus on the RSAB and context block.

3.1 Residual spatial-adaptive block

Figure 3: Traditional convolution versus deformable convolution. The traditional convolution only samples from rigid locations, whereas the deformable convolution can change its sampling locations based on the image content.
Figure 4: The architecture of the residual spatial-adaptive block (RSAB). The offset transfer component is shown in the green dashed box. The architecture of the deformable convolution is shown in the blue dashed box.

In this section, we first introduce deformable convolution and then describe our RSAB in detail.

Let $F(p)$ denote the features at location $p$ in the input feature map $F$. Then, for a traditional convolution operation, the corresponding output features $F'(p)$ can be obtained by

$F'(p) = \sum_{p_i \in \mathcal{N}(p)} w(p_i) \cdot F(p_i)$   (4)

where $\mathcal{N}(p)$ denotes the neighborhood of location $p$, whose size is equal to the convolutional kernel size, $w(p_i)$ denotes the kernel weight, and $p_i$ denotes a location in $\mathcal{N}(p)$. As shown, the traditional convolution operation strictly takes the features at fixed locations around $p$ to calculate the output feature, so unwanted and unrelated features can interfere with the output. As shown in Fig. 3, when the current location is near an edge, distinct features located outside the object are introduced for weighting, which may smooth the edges and destroy the texture. For the denoising task, we hope that only correlated or similar features are used for noise removal, similar to non-local weighted-mean denoising methods.

Therefore, we introduce deformable convolution [10, 39] to adapt to spatial texture changes. In contrast to traditional convolutional layers, deformable convolution can change the shapes of its convolutional kernels: it first learns an offset map for every location and applies it to the feature map, which resamples the correlated features for weighting. Here, we use the modulated deformable convolution [39], which provides another degree of freedom to adjust the spatial support regions:

$F'(p) = \sum_{p_i \in \mathcal{N}(p)} w(p_i) \cdot F(p_i + \Delta p_i) \cdot \Delta m_i$   (5)

where $\Delta p_i$ is the learnable offset for location $p_i$, and $\Delta m_i$ is the learnable modulation scalar, which lies in the range $[0, 1]$ and indicates the correlation between the sampled features and the features at the current location. Thus, the modulated deformable convolution can modulate the input feature amplitudes to further adjust the spatial support regions. Both $\Delta p_i$ and $\Delta m_i$ are obtained from the preceding features.
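As a concrete sketch of Eq. (5), torchvision's `deform_conv2d` implements the modulated deformable convolution of [39]; below, both the offsets and the modulation scalars are predicted from the input features by small convolutions, and the kernel size, initialization, and layer names are illustrative assumptions rather than the authors' settings.

```python
# Modulated deformable convolution (Eq. 5) via torchvision (>= 0.9 for `mask`).
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class ModulatedDeformConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # Per sampling point: 2 offset channels (dy, dx) and 1 modulation channel.
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        self.mask_conv = nn.Conv2d(in_ch, k * k, k, padding=k // 2)

    def forward(self, x):
        offset = self.offset_conv(x)             # learnable offsets (Delta p_i)
        mask = torch.sigmoid(self.mask_conv(x))  # modulation (Delta m_i) in [0, 1]
        return deform_conv2d(x, offset, self.weight, self.bias,
                             padding=self.k // 2, mask=mask)

print(ModulatedDeformConv(32, 32)(torch.randn(1, 32, 16, 16)).shape)
```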

In each RSAB, we first fuse the extracted features and the reconstructed features from the last scale as our input. The RSAB is constructed from a modulated deformable convolution followed by a traditional convolution, with a short skip connection. Similar to ResBlock, we implement local residual learning to enhance the information flow and the representation ability of the network. However, unlike ResBlock, we replace the first convolution with the modulated deformable convolution and use the leaky ReLU as our activation function. Hence, the RSAB can be formulated as

$F_{out} = F_{in} + \mathrm{Conv}\left(\sigma\left(\mathrm{DConv}(F_{in})\right)\right)$   (6)

where $\mathrm{DConv}$ and $\mathrm{Conv}$ are the modulated deformable convolution and the traditional convolution, respectively, and $\sigma$ is the activation function, which is leaky ReLU here. The architecture of RSAB is shown in Fig. 4.
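In code, Eq. (6) is a short residual block. The sketch below reuses `ModulatedDeformConv` from above and, for simplicity, lets the deformable convolution predict its own offsets; in the full network they come from the offset transfer described next, and the leaky-ReLU slope is an assumption.

```python
# RSAB (Eq. 6): deformable conv -> leaky ReLU -> conv, plus a short skip.
import torch.nn as nn

class RSAB(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.dconv = ModulatedDeformConv(ch, ch)    # samples adaptive locations
        self.act = nn.LeakyReLU(0.2, inplace=True)  # slope is an assumption
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, f_in):
        return f_in + self.conv(self.act(self.dconv(f_in)))  # local residual
```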

Furthermore, to better estimate the offsets from coarse to fine, we transfer the offsets $\Delta p^{s-1}$ and modulation scalars $\Delta m^{s-1}$ from the last scale to the current scale $s$, and then use both of them and the input features $F^s$ to estimate $(\Delta p^s, \Delta m^s)$. The formulation is

$(\Delta p^s, \Delta m^s) = f_t\left(F^s, \ \mathcal{U}(\Delta p^{s-1}, \Delta m^{s-1})\right)$   (7)

where $f_t$ and $\mathcal{U}$ denote the offset transfer function and the upsampling function, respectively, which are shown in Fig. 4. The offset transfer function consists of several convolutions; it extracts features from the input and fuses them with the last offsets to estimate the offsets at the current scale. The upsampling function magnifies both the size and the values of the last offset maps. In our experiments, bilinear interpolation is adopted to upsample the offsets and modulation scalars.
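A sketch of this transfer, under stated assumptions: the coarse-scale offsets are bilinearly upsampled and their values doubled so that they remain valid in the finer pixel grid, the modulation scalars are upsampled unchanged, and a small convolutional head stands in for the offset transfer function $f_t$.

```python
# Offset transfer (Eq. 7); layer shapes and the two-conv fusion are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetTransfer(nn.Module):
    def __init__(self, feat_ch, k=3):
        super().__init__()
        self.n_off, n_mod = 2 * k * k, k * k  # offset / modulation channels
        n = self.n_off + n_mod
        self.fuse = nn.Sequential(            # stand-in for f_t
            nn.Conv2d(feat_ch + n, n, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(n, n, 3, padding=1))

    def forward(self, feat_s, off_prev, mod_prev):
        # Offsets are measured in pixels, so their values double with the size.
        off = 2.0 * F.interpolate(off_prev, scale_factor=2,
                                  mode='bilinear', align_corners=False)
        mod = F.interpolate(mod_prev, scale_factor=2,
                            mode='bilinear', align_corners=False)
        out = self.fuse(torch.cat([feat_s, off, mod], dim=1))
        return out[:, :self.n_off], torch.sigmoid(out[:, self.n_off:])
```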

3.2 Context block

Figure 5: The architecture of the context block. Instead of downsampling operations, dilated convolutions with different dilation rates are implemented to extract features with different receptive fields.
Table 1: Ablation study of different components. Each column enables a different combination of RSAB, offset transfer, and the context block; PSNRs are based on Kodak24 (σ = 50): 29.05, 29.55, 29.12, 29.60, 29.59, and 29.62 (full model).

Multiscale information is important for image denoising tasks; therefore, downsampling operations are often adopted in networks. However, when the spatial resolution becomes too small, the structure of the image is destroyed and information is lost, which is not conducive to reconstructing features.

To increase the receptive field and capture multiscale information without further reducing the spatial resolution, we introduce a context block at the minimum scale between the encoder and decoder. Context blocks have been successfully used in image segmentation [6] and deblurring tasks [37]. In contrast to spatial pyramid pooling [13], the context block uses several dilated convolutions with different dilation rates rather than downsampling. It can expand the receptive field without increasing the number of parameters or damaging the structure. The features extracted from the different receptive fields are then fused to estimate the output (as shown in Fig. 5), which is beneficial for estimating offsets from a larger receptive field.

In our experiment, we remove the batch normalization layer and use only four dilation rates, which are set to 1, 2, 4, and 8. To further simplify the operation and reduce the running time, we first use a 1×1 convolution to compress the channels of the features; the compression ratio is set to 4. In the fusion step, we use a 1×1 convolution to output fusion features whose channel number is equal to that of the original input features. Similarly, a local skip connection between the input and output features is applied to prevent information blocking.
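Under those settings, a context block can be sketched as follows; the dilation rates (1, 2, 4, 8), the 1×1 compression with ratio 4, the 1×1 fusion, and the local skip connection follow the description above, while the activation and layer names are assumptions.

```python
# Context block: 1x1 squeeze, parallel dilated 3x3 convs, 1x1 fuse, local skip.
import torch
import torch.nn as nn

class ContextBlock(nn.Module):
    def __init__(self, ch, rates=(1, 2, 4, 8), ratio=4):
        super().__init__()
        mid = ch // ratio
        self.squeeze = nn.Conv2d(ch, mid, 1)          # compress channels
        self.branches = nn.ModuleList(
            nn.Conv2d(mid, mid, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(mid * len(rates), ch, 1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        s = self.act(self.squeeze(x))
        feats = [self.act(b(s)) for b in self.branches]  # multi-receptive-field
        return x + self.fuse(torch.cat(feats, dim=1))    # local skip connection

print(ContextBlock(256)(torch.randn(1, 256, 8, 8)).shape)  # (1, 256, 8, 8)
```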

3.3 Implementation

In the proposed model, we use four scales for the encoder-decoder architecture, and the number of channels at each scale is set to 32, 64, 128, and 256, respectively. Unless stated otherwise, all convolutional layers other than the first and last adopt a 3×3 kernel. The final output has 1 or 3 channels depending on the input.

4 Experiments

In this section, we demonstrate the effectiveness of our model on both synthetic and real noisy datasets. For synthetic noise, we adopt DIV2K [20], which contains 800 2K-resolution images, and add different levels of noise to build the synthetic datasets. For real noisy images, we use the SIDD [1], RENOIR [3] and Poly [30] datasets. We randomly rotate the images and flip them horizontally and vertically for data augmentation. In each training batch, we use 16 image patches as inputs. We train our model using the ADAM [15] optimizer with $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. The initial learning rate is set to $10^{-4}$ and then halved during training. Our model is implemented in the PyTorch framework [23] and trained on an Nvidia GeForce GTX 1080Ti. In addition, PSNR and SSIM [28] are employed to evaluate the results.
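For reference, PSNR (the primary metric below) can be computed as follows for images scaled to [0, 1]; SSIM is typically taken from an existing implementation such as scikit-image's `structural_similarity`.

```python
# Peak signal-to-noise ratio in dB for tensors with values in [0, 1].
import torch

def psnr(pred, target, max_val=1.0):
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```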

4.1 Ablation study

We perform the ablation study on the Kodak24 dataset with noise level σ = 50. The results are shown in Table 1.

Ablation on RSAB. RSAB is the crucial block in our network; without it, the network loses adaptability to the image content. When we replace RSAB with an original ResBlock, the performance decreases significantly, which demonstrates its effect.

Ablation on the context block. The context block complements the downsampling operations to capture information from a larger receptive field. We can observe that the performance improves when the context block is introduced.

Ablation on offset transfer. We remove the offset transfer from coarse to fine and use only the features at the current scale to estimate the offsets for RSAB. The comparison validates the effectiveness of offset transfer.

4.2 Comparisons

We compare our algorithm with state-of-the-art denoising methods, using PSNR as the evaluation metric. For fairness, all compared methods employ the default settings provided by their authors.

Dataset σ BM3D DnCNN MemNet FFDNet RNAN RIDNet SADNet (ours)
BSD68 30 27.76 28.36 28.43 28.39 28.61 28.54 28.61
BSD68 50 25.62 26.23 26.35 26.29 26.48 26.40 26.49
BSD68 70 24.44 24.90 25.09 25.04 25.18 25.12 25.22
Kodak24 30 29.13 29.62 29.72 29.70 30.04 29.90 29.99
Kodak24 50 26.99 27.51 27.68 27.63 27.93 27.79 27.93
Kodak24 70 25.73 26.08 26.42 26.34 26.60 26.51 26.68
Table 2: Quantitative results (PSNR) on synthetic gray noisy images
Dataset σ CBM3D DnCNN MemNet FFDNet RNAN RIDNet SADNet (ours)
BSD68 30 29.73 30.40 28.39 30.31 30.63 30.47 30.64
BSD68 50 27.38 28.01 26.33 27.96 28.27 28.12 28.30
BSD68 70 26.00 26.56 25.08 26.53 26.83 26.69 26.91
Kodak24 30 30.89 31.39 29.67 31.39 31.86 31.64 31.86
Kodak24 50 28.63 29.16 27.65 29.10 29.58 29.25 29.62
Kodak24 70 27.27 27.64 26.40 27.68 28.16 27.94 28.25
Table 3: Quantitative results (PSNR) on synthetic color noisy images
Figure 6: Synthetic image denoising results on BSD68 with noise level σ = 50.
Figure 7: Synthetic image denoising results on Kodak24 with noise level σ = 50.

4.2.1 Synthetic noisy images

In the comparisons of synthetic noisy images, we use BSD68 and Kodak24 as our test datasets. Both datasets have color and gray images for testing. We add AWGN of different noise levels to the clean images. We choose BM3D [9] and CBM3D [8] as representatives of the classical traditional methods and some CNN-based methods, including DnCNN [34], MemNet [26], FFDNet [35], RNAN [36], and RIDNet [4], for comparisons.

Tables 2 and 3 report the quantitative results for gray and color images at three different noise levels. Our SADNet outperforms the state-of-the-art methods on most of the tested noise levels and datasets. Note that although RNAN achieves evaluations comparable to our method on some results, it requires more parameters and computational overhead. In addition, we observe that our method shows greater improvement at higher noise levels, which demonstrates its effectiveness for heavy noise removal.

The visual comparisons are shown in Fig. 6 and Fig. 7. We present some challenging examples from BSD68 and Kodak24: the feathers of the bird and the texture of the clothes are difficult to separate from heavy noise. The other methods in the comparison remove the details along with the noise, which results in oversmoothing artifacts. Due to its adaptability to the image content, our method can restore vivid textures from the noise without introducing other artifacts.

4.2.2 Real noisy images

In the comparisons on real noisy images, we choose DND [24], SIDD [1] and Nam [21] as our test datasets. DND contains 50 real noisy images with their corresponding clean images; one thousand patches of size 512×512 are extracted from the dataset by the providers for testing and comparison. The SIDD validation dataset, which contains 1280 noisy-clean image pairs, is used for our evaluation. Nam includes 15 large image pairs with JPEG compression covering 11 scenes; we cropped the images into patches and selected 25 patches following CBDNet [12] for testing. We train our model on the SIDD Medium dataset and RENOIR for evaluation on the DND and SIDD validation datasets. We then finetune our model on the Poly [30] dataset for Nam, which improves the performance on noisy images with JPEG compression. Furthermore, we choose state-of-the-art methods that have demonstrated their validity on real noisy images for comparison, including CBM3D [8], DnCNN [34], CBDNet [12], PD [38], and RIDNet [4].

DND The quantitative results are listed in Table 4 and are obtained from the public DND benchmark website. FFDNet+ is an improved version of FFDNet with a uniform noise-level map manually selected by the providers. CDnCNN-B is the original DnCNN model for blind color denoising, and DnCNN+ is finetuned from CDnCNN-B on the results of FFDNet+. Both non-blind and blind denoising methods are included in the comparison. Our SADNet outperforms the state-of-the-art methods in both PSNR and SSIM. We further make a visual comparison on denoised images from the DND dataset, shown in Fig. 8; we magnify the patches for better comparison. Other methods corrode the edges and leave residual noise, whereas our method effectively removes the noise in smooth regions and keeps the edges clear.

Method Blind/Non-blind PSNR SSIM
CDnCNN-B Blind 32.43 0.7900
TNRD Non-blind 33.65 0.8306
BM3D Non-blind 34.51 0.8507
WNNM Non-blind 34.67 0.8646
MCWNNM Non-blind 37.38 0.9294
FFDNet+ Non-blind 37.61 0.9415
DnCNN+ Non-blind 37.90 0.9430
CBDNet Blind 38.06 0.9421
N3Net Blind 38.32 0.9384
PD Blind 38.40 0.9452
Path-Restore Blind 39.00 0.9542
RIDNet Blind 39.26 0.9528
SADNet (ours) Blind 39.37 0.9544
Table 4: Quantitative results on DND sRGB images
Figure 8: Real image denoising results from the DND dataset.

SIDD The images in the SIDD dataset are captured by smartphones, and some noisy images have high noise levels. We employ the 1280 validation images for the quantitative comparisons shown in Table 5, which demonstrate that our method achieves significant improvements over the other methods. For visual comparisons, we choose two examples from the denoised results: the first scene has rich texture, and the second has strong structures. As shown in Fig. 9 and Fig. 10, CDnCNN-B and CBDNet fail to remove the noise, CBM3D produces pseudo artifacts, and PD and RIDNet destroy the textures. Our network recovers textures and structures that are closer to the ground truth.

Method Blind/Non-blind PSNR
CBM3D Non-blind 30.88
CDnCNN-B Blind 26.21
CBDNet Blind 30.78
PD Blind 32.94
RIDNet Blind 38.71
SADNet (ours) Blind 39.36
Table 5: Quantitative results on SIDD sRGB validation dataset
Figure 9: A real image denoising example from the SIDD dataset.
Figure 10: Another real image denoising example from the SIDD dataset.

Nam The JPEG compression makes the noise on the Nam dataset more difficult to remove. For a fair comparison, we use the patches chosen by CBDNet for evaluation. Furthermore, CBDNet (JPEG), which was trained on JPEG-compressed datasets, is included in the comparison. We report the average PSNR values for Nam in Table 6. Our SADNet achieves 1.98, 1.93 and 1.71 dB gains over RIDNet, PD, and CBDNet (JPEG), respectively. In the visual comparison shown in Fig. 11, our method again obtains the best result among the compared methods.

Method Blind/Non-blind PSNR SSIM
CBM3D Non-blind 39.84 0.9657
CDnCNN-B Blind 37.49 0.9272
CBDNet Blind 41.31 0.9784
PD Blind 41.09 0.9780
RIDNet Blind 41.04 0.9814
SADNet (ours) Blind 43.02 0.9848
Table 6: Quantitative results on Nam dataset with JPEG compression
Figure 11: Real image denoising results from Nam dataset with JPEG compression.

4.2.3 Parameters and running times

To compare running times, we test the different methods on color image denoising. Note that running time may depend on the test platform and code, so we also provide the number of floating-point operations (FLOPs). All methods are implemented in PyTorch. Although SADNet has a relatively high parameter count, its FLOPs are the lowest and its running time is relatively short, because the repeated downsampling means most of the model's operations run on small-scale feature maps.

Method DnCNN MemNet RNAN RIDNet SADNet (ours)
Parameters 558k 2,908k 8,960k 1,499k 4,321k
FLOPs 86.1G 449.2G 1163.5G 230.0G 50.1G
Time (ms) 21.3 154.2 1072.2 84.4 26.7
Table 7: Parameter and time comparisons on color images

5 Conclusion

In this paper, we propose a spatial-adaptive denoising network for effective noise removal. The network is built from multiscale residual spatial-adaptive blocks, which sample relevant features for weighting based on the image content and textures. We further introduce a context block to capture multiscale information and implement offset transfer to estimate the sampling locations more accurately. We find that the introduced spatial adaptability yields richer details in complex scenes. The proposed SADNet achieves state-of-the-art performance with a moderate running time.

6 Acknowledgments

This work is partially supported by the Science and Technology on Optical Radiation Laboratory (61424080211).

References

  • [1] A. Abdelhamed, S. Lin, and M. S. Brown (2018) A high-quality denoising dataset for smartphone cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1692–1700. Cited by: §2, §4.2.2, §4.
  • [2] M. Aharon, M. Elad, and A. Bruckstein (2006) K-svd: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on signal processing 54 (11), pp. 4311–4322. Cited by: §1, §2.
  • [3] J. Anaya and A. Barbu (2018) RENOIR–a dataset for real low-light image noise reduction. Journal of Visual Communication and Image Representation 51, pp. 144–154. Cited by: §2, §4.
  • [4] S. Anwar and N. Barnes (2019) Real image denoising with feature attention. arXiv preprint arXiv:1904.07396. Cited by: §1, §2, §4.2.1, §4.2.2.
  • [5] A. Buades, B. Coll, and J. Morel (2005) A non-local algorithm for image denoising. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), Vol. 2, pp. 60–65. Cited by: §2.
  • [6] L. Chen, G. Papandreou, F. Schroff, and H. Adam (2017) Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587. Cited by: §3.2.
  • [7] Y. Chen and T. Pock (2016) Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration. IEEE transactions on pattern analysis and machine intelligence 39 (6), pp. 1256–1272. Cited by: §2.
  • [8] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian (2007) Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminance-chrominance space. In 2007 IEEE International Conference on Image Processing, Vol. 1, pp. I–313. Cited by: §2, §4.2.1, §4.2.2.
  • [9] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian (2007) Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on image processing 16 (8), pp. 2080–2095. Cited by: §2, §4.2.1.
  • [10] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei (2017) Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pp. 764–773. Cited by: §3.1.
  • [11] S. Gu, L. Zhang, W. Zuo, and X. Feng (2014) Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2862–2869. Cited by: §2.
  • [12] S. Guo, Z. Yan, K. Zhang, W. Zuo, and L. Zhang (2019) Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1712–1722. Cited by: §1, §2, §4.2.2.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2015) Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE transactions on pattern analysis and machine intelligence 37 (9), pp. 1904–1916. Cited by: §3.2.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §3.
  • [15] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.
  • [16] S. Lefkimmiatis (2017) Non-local color image denoising with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3587–3596. Cited by: §2.
  • [17] D. Liu, B. Wen, Y. Fan, C. C. Loy, and T. S. Huang (2018) Non-local recurrent network for image restoration. In Advances in Neural Information Processing Systems, pp. 1673–1682. Cited by: §1, §2.
  • [18] A. L. Maas, A. Y. Hannun, and A. Y. Ng (2013) Rectifier nonlinearities improve neural network acoustic models. In Proc. icml, Vol. 30, pp. 3. Cited by: §3.
  • [19] J. Mairal, F. R. Bach, J. Ponce, G. Sapiro, and A. Zisserman (2009) Non-local sparse models for image restoration. In ICCV, Vol. 29, pp. 54–62. Cited by: §2.
  • [20] D. Martin, C. Fowlkes, D. Tal, and J. Malik (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the IEEE International Conference on Computer Vision, Vol. 2, pp. 416–423. Cited by: §4.
  • [21] S. Nam, Y. Hwang, Y. Matsushita, and S. Joo Kim (2016) A holistic approach to cross-channel image noise modeling and its application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1683–1691. Cited by: §4.2.2.
  • [22] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin (2005) An iterative regularization method for total variation-based image restoration. Multiscale Modeling & Simulation 4 (2), pp. 460–489. Cited by: §1, §2.
  • [23] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035. Cited by: §4.
  • [24] T. Plotz and S. Roth (2017) Benchmarking denoising algorithms with real photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1586–1595. Cited by: §2, §4.2.2.
  • [25] T. Plötz and S. Roth (2018) Neural nearest neighbors networks. In Advances in Neural Information Processing Systems (NeurIPS), Cited by: §1, §2, §2.
  • [26] Y. Tai, J. Yang, X. Liu, and C. Xu (2017) Memnet: a persistent memory network for image restoration. In Proceedings of the IEEE international conference on computer vision, pp. 4539–4547. Cited by: §1, §2, §4.2.1.
  • [27] X. Wang, R. Girshick, A. Gupta, and K. He (2018) Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803. Cited by: §2.
  • [28] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: §4.
  • [29] J. Xu and S. Osher (2007) Iterative regularization and nonlinear inverse scale space applied to wavelet-based denoising. IEEE Transactions on Image Processing 16 (2), pp. 534–544. Cited by: §1, §2.
  • [30] J. Xu, H. Li, Z. Liang, D. Zhang, and L. Zhang (2018) Real-world noisy image denoising: a new benchmark. arXiv preprint arXiv:1804.02603. Cited by: §4.2.2, §4.
  • [31] J. Xu, L. Zhang, D. Zhang, and X. Feng (2017) Multi-channel weighted nuclear norm minimization for real color image denoising. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1096–1104. Cited by: §2.
  • [32] J. Xu, L. Zhang, and D. Zhang (2018) A trilateral weighted sparse coding scheme for real-world image denoising. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 20–36. Cited by: §2.
  • [33] J. Xu, L. Zhang, and D. Zhang (2018) External prior guided internal prior learning for real-world noisy image denoising. IEEE Transactions on Image Processing 27 (6), pp. 2996–3010. Cited by: §2.
  • [34] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang (2017) Beyond a gaussian denoiser: residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing 26 (7), pp. 3142–3155. Cited by: §2, §4.2.1, §4.2.2.
  • [35] K. Zhang, W. Zuo, and L. Zhang (2018) FFDNet: toward a fast and flexible solution for cnn-based image denoising. IEEE Transactions on Image Processing 27 (9), pp. 4608–4622. Cited by: §2, §4.2.1.
  • [36] Y. Zhang, K. Li, K. Li, B. Zhong, and Y. Fu (2019) Residual non-local attention networks for image restoration. arXiv preprint arXiv:1903.10082. Cited by: §1, §2, §4.2.1.
  • [37] S. Zhou, J. Zhang, W. Zuo, H. Xie, J. Pan, and J. S. Ren (2019) DAVANet: stereo deblurring with view aggregation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10996–11005. Cited by: §3.2.
  • [38] Y. Zhou, J. Jiao, H. Huang, Y. Wang, J. Wang, H. Shi, and T. Huang (2019) When awgn-based denoiser meets real noises. arXiv preprint arXiv:1904.03485. Cited by: §1, §2, §4.2.2.
  • [39] X. Zhu, H. Hu, S. Lin, and J. Dai (2019) Deformable convnets v2: more deformable, better results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9308–9316. Cited by: §3.1.
  • [40] D. Zoran and Y. Weiss (2011) From learning models of natural image patches to whole image restoration. In 2011 International Conference on Computer Vision, pp. 479–486. Cited by: §1, §2.