Low-Weight and Learnable Image Denoising

11/17/2019 ∙ by Gregory Vaksman, et al. (Technion and Google)

Image denoising is a well-studied problem, with extensive activity spread over several decades. Despite the many available denoising algorithms, the quest for simple, powerful and fast denoisers is still an active and vibrant topic of research. Leading classical denoising methods are typically designed to exploit the inner structure of images by modeling local overlapping patches. In contrast, recent newcomers to this arena are supervised neural-network-based methods that bypass this modeling altogether, targeting the inference goal directly and globally, while tending to be very deep and parameter-heavy. This work proposes a novel low-weight learnable architecture that embeds several of the main concepts of the classical methods, while being trained for best denoising performance. More specifically, our proposed network relies on patch processing, leveraging non-local self-similarity, representation sparsity and a multi-scale treatment. The proposed architecture achieves near state-of-the-art denoising results, while using a small fraction of the typical number of parameters. Furthermore, we demonstrate the ability of the proposed network to adapt itself to an incoming image by leveraging similar clean ones.


I Introduction

Image denoising is a well-studied problem, and many successful algorithms have been developed for handling this task over the years, e.g., NLM [1], K-SVD [7], BM3D [4], EPLL [37], WNNM [9] and others [25, 15, 33, 6, 21, 18, 10, 24, 29, 31, 27, 19]. These classically-oriented algorithms strongly rely on models that exploit properties of natural images, usually employed while operating on small, fully overlapped patches. For example, both EPLL [37] and PLE [33] perform denoising using a Gaussian Mixture Model (GMM) imposed on the image patches. The K-SVD algorithm [7] restores images using sparse modeling of such patches. BM3D [4] exploits self-similarity by grouping similar patches into 3D blocks and filtering them jointly. The algorithms reported in [27, 19] harness a multi-scale analysis framework on top of the above-mentioned local models.

Recently, supervised deep-learning-based methods entered the denoising arena, showing state-of-the-art (SOTA) results in various contexts [2, 3, 30, 34, 16, 35, 23, 28, 14, 36, 13]. In contrast to the above-mentioned classical algorithms, deep-learning-based methods tend to bypass the need for explicit modeling of image redundancies, operating instead by directly learning the inference from the incoming images to their desired outputs. In order to obtain a non-local flavor in their treatment, as self-similarity or multi-scale methods would do, most of these algorithms ([13] being an exception) tend to increase their footprint by utilizing very deep and parameter-heavy networks. This reflects badly on their memory consumption, the required amount of training images, and their training and inference time.

(a) Clean image
(b) Noisy
(c) Denoised (before adapting)
PSNR = 24.33dB
(d) Denoised (after adapting)
PSNR = 26.25dB
Fig. 1: Example of our adaptation approach: (a) and (b) show the clean and noisy images respectively; (c) is the denoised result by our universal architecture trained on 432 BSD500 images [17]; (d) presents our adapted result, using a single astronomical image (shown in Figure 16(a)).

An interesting recent line of work by Lefkimmiatis proposes a denoising network with a significantly reduced number of parameters, while maintaining near-SOTA performance [11, 12]. This method leverages the non-local self-similarity property of images by jointly operating on groups of similar patches. The network’s architecture consists of several repeated stages, each resembling a single step of the proximal gradient descent method under sparse modeling [20]. In comparison with DnCNN [34], the work reported in [11, 12] shows a substantial reduction in the number of parameters, at the cost of only a small drop in denoising PSNR.

Inspired by Lefkimmiatis’ work, in this paper we continue with his line of low-weight networks and propose a novel, easy, and learnable architecture that harnesses several main concepts from classical methods: (i) Operating on small fully overlapping patches; (ii) Exploiting non-local self-similarity; (iii) Leveraging representation sparsity; and (iv) Employing a multi-scale treatment. Our network resembles the one proposed in [11, 12], with several important differences:

  • We introduce a multi-scale treatment in our network that combats spatial artifacts, especially noticeable in smooth regions [27]. While this change does not reflect strongly on the PSNR results, it has a clear visual contribution;

  • Our network is more effective by operating in the residual domain, similar to the approach taken by [34];

  • Our patch fusion operator includes a spatial smoothing, which adds an extra force to our overall filtering; and

  • Our architecture is trained end-to-end, whereas Lefkimmiatis' scheme consists of a greedy training of the separate layers, followed by an end-to-end warm-start update.

Just as in [11, 12], our proposed method operates on all the overlapping patches taken from the processed image, augmenting each with its nearest neighbors and filtering these jointly. The patch grouping stage is applied only once before any filtering, and it is not part of the learnable architecture. Each patch group undergoes a series of trainable steps that aim to predict the noise in the candidate patch, thereby operating in the residual domain. These include several layers of the triplet: (i) a linear separable transform; (ii) a ReLU on the obtained features; and (iii) an inverse transform returning to the image domain. As already mentioned, our scheme includes a multi-scale treatment, in which we fuse the processing of corresponding patches from different scales. This paper has two key contributions:

  1. The proposed architecture preserves the low-weight form of [11, 12] while improving on the visual and PSNR qualities of the reconstructed images.

  2. Leveraging the low-weight nature of our network, we propose a novel ability to adapt the trained network to the content of the treated image, boosting its denoising performance. This is obtained by denoising the incoming image regularly, seeking a few images similar to the outcome, updating the trained network on their content, and then running the denoising again. We show the tendency of this approach to lead to improved denoising performance, as demonstrated in Figure 1.

This paper is organized as follows. Section II describes the proposed scheme and its various ingredients. Section III presents experimental results and compares our method to other recently published methods. Section IV introduces the ability of our low-weight network to adapt to an incoming image by finding similar ones, gaining a boost in denoising performance. Section V concludes this paper and raises directions for future work.

II The Proposed Denoising Scheme

II-A Overall Algorithm Overview

Our proposed method extracts all possible overlapping patches of size $\sqrt{n} \times \sqrt{n}$ from the processed image, and cleans each of them in a similar way. The final reconstructed image is obtained by combining these restored patches via averaging. The algorithm is shown schematically in Figure 2.

Fig. 2: The proposed denoising algorithm starts by extracting all possible overlapping patches and their corresponding reduced-scale ones. Each patch is augmented with its nearest neighbors and filtered, while fusing information from both scales. The reconstructed image is obtained by combining all the filtered patches via averaging.

In order to formulate the patch extraction, combination and filtering operations, we introduce some notation. Assume that the processed image contains $N$ pixels. We denote by $y$ and $\hat{x}$ the noisy and denoised images respectively, both reshaped to 1D vectors. Similarly, the corrupted and restored patches in location $i$ are denoted by $y_i$ and $\hat{x}_i$ respectively, where $i = 1, \ldots, N$. Note that we handle boundary pixels by padding the processed image using mirror reflection on each side; thus, the number of extracted patches is equal to the number of pixels in the image. $R_i$ denotes the matrix that extracts a patch centered at the $i$-th location. The patch extraction operation from the noisy image is given by $y_i = R_i y$, and the denoised image is obtained by combining the denoised patches using weighted averaging,

$\hat{x} = \Big( \sum_i w_i R_i^T R_i \Big)^{-1} \sum_i w_i R_i^T \hat{x}_i, \qquad (1)$

where smooth patches get higher weights. More precisely,

$w_i = \exp\{-\beta \cdot \mathrm{var}(\hat{x}_i)\}, \qquad (2)$

where $\mathrm{var}(\hat{x}_i)$ is the sample variance of $\hat{x}_i$, and $\beta$ is learned.
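To make the combination step concrete, the following is a minimal NumPy sketch of Eqs. (1)-(2), under the reconstruction above: one $\sqrt{n} \times \sqrt{n}$ patch centered at every pixel of the mirror-padded image, and the learned scalar $\beta$ passed as a plain argument. The function names are ours, for illustration only.

```python
import numpy as np

def patch_weights(patches, beta):
    """Per-patch weights of Eq. (2): w_i = exp(-beta * var(x_i)).
    Smoother patches receive larger weights; beta stands for the learned scalar."""
    v = patches.reshape(len(patches), -1).var(axis=1)
    return np.exp(-beta * v)

def combine_patches(patches, weights, image_shape, p):
    """Weighted averaging of Eq. (1): p-by-p patches, one centred at every
    pixel, accumulated on a mirror-padded canvas and cropped back.

    patches: (H*W, p, p) restored patches; weights: (H*W,) per-patch w_i.
    """
    H, W = image_shape
    m = (p - 1) // 2                       # mirror-padding margin
    num = np.zeros((H + 2 * m, W + 2 * m))
    den = np.zeros_like(num)
    idx = 0
    for i in range(H):
        for j in range(W):
            num[i:i + p, j:j + p] += weights[idx] * patches[idx]
            den[i:i + p, j:j + p] += weights[idx]
            idx += 1
    return (num / den)[m:m + H, m:m + W]   # crop to the original support
```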

II-B Our Scheme: A Closer Look

Zooming in on the local treatment, it starts by augmenting the patch $y_i$ with a group of its $k-1$ nearest neighbors, forming a matrix $Y_i$ of size $n \times k$. The nearest neighbor search is done using the Euclidean metric, $\|y_i - y_j\|_2^2$, limited to a search window around the center of the patch.
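A minimal sketch of this grouping stage is given below; `p`, `k` and `radius` stand for the patch side, the group size and the search-window radius, whose exact values are omitted in the extracted text.

```python
import numpy as np

def group_neighbors(padded, corner, p, k, radius):
    """Form the n-by-k group matrix for one reference patch.

    padded: mirror-padded image; corner: (row, col) of the reference patch's
    top-left pixel inside `padded`. The first returned column is the
    reference patch itself (its distance to itself is zero)."""
    r0, c0 = corner
    ref = padded[r0:r0 + p, c0:c0 + p]
    scored = []
    for r in range(max(r0 - radius, 0), min(r0 + radius, padded.shape[0] - p) + 1):
        for c in range(max(c0 - radius, 0), min(c0 + radius, padded.shape[1] - p) + 1):
            q = padded[r:r + p, c:c + p]
            scored.append((np.sum((q - ref) ** 2), r, c))
    scored.sort(key=lambda t: t[0])        # ascending Euclidean distance
    return np.stack([padded[r:r + p, c:c + p].ravel()
                     for _, r, c in scored[:k]], axis=1)
```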

The matrix $Y_i$ undergoes a series of trainable operations that aim to recover a clean candidate patch $\hat{x}_i$. Our filtering network consists of several blocks, each consisting of (i) a forward 2D linear transform; (ii) a non-negative thresholding (ReLU) on the obtained features; and (iii) a transform back to the image domain. All transforms operate separately in the spatial and the similarity domains. In contrast to BM3D and other methods, our filtering predicts the residual rather than the clean patches, just as done in [34]. This means that the network estimates the noise; restored patches are obtained by subtracting the estimated noise from the corrupted patches.

Our scheme includes a multi-scale treatment and a fusion of corresponding patches from the different scales. The adopted strategy borrows its rationale from [32], which studied the single-image super-resolution task. In their algorithm, high-resolution patches and their corresponding low-resolution versions are jointly treated by assuming that they share the same sparse representation, and the two resolutions are handled by learning a pair of coupled dictionaries. In a similar fashion, we augment the corresponding patches from the two scales and learn a joint transform for their fusion. In our experiments the multi-scale scheme includes only two scales, but the same concept can be applied to a higher pyramid. In our notation, the 1st scale is the original noisy image $y$, and the 2nd-scale images are created by convolving $y$ with the bilinear low-pass filter

$f = \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \qquad (3)$

and down-sampling the result by a factor of two. In order to synchronize between patch locations in the two scales, we create four downscaled images by sampling the convolved image at either even or odd locations (even/odd columns & even/odd rows). For each 1st-scale patch, the corresponding 2nd-scale patch (of the same size) is extracted from the appropriate down-sampled image, such that both patches are centered at the same pixel in the original image, as depicted in Figure 3.
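The following sketch produces the four phase-shifted half-resolution images, assuming the bilinear kernel reconstructed in Eq. (3).

```python
import numpy as np
from scipy.ndimage import convolve

BILINEAR = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 16.0       # the low-pass filter of Eq. (3)

def downscale_phases(img):
    """Create the four half-resolution images, one per sampling phase
    (even/odd rows x even/odd columns of the filtered image)."""
    low = convolve(img, BILINEAR, mode='mirror')
    return {(dr, dc): low[dr::2, dc::2] for dr in (0, 1) for dc in (0, 1)}
```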

Fig. 3: Visualization of the corresponding 1st- and 2nd-scale patches: blue dots are the processed image pixels; the green square is a 1st-scale patch, while the red square is its corresponding 2nd-scale patch. Both are of the same size.

We denote the 2nd-scale patch that corresponds to $y_i$ by $y_i^{(2)}$. This patch is augmented with a group of its $k-1$ nearest neighbors, forming a matrix $Y_i^{(2)}$ of size $n \times k$. The nearest neighbor search is performed in the same down-scaled image from which $y_i^{(2)}$ is taken, while limiting the search to a window around its center. Both matrices, $Y_i$ and $Y_i^{(2)}$, are fed to the filtering network, which fuses the scales using a joint transform. The architecture of this network is described next.

II-C The Filtering Network

We turn to present the architecture of the filtering network, starting by describing the involved building blocks and then discussing the whole scheme. A basic component within our network is the TRT (Transform–ReLU–Transform) block. This follows the classic thresholding algorithm in sparse approximation theory [8], in which denoising is obtained by transforming the incoming signal, discarding small entries (thereby obtaining a sparse representation), and applying an inverse transform that returns to the signal domain. Note that the same conceptual structure is employed by the well-known BM3D algorithm [4]. In a similar fashion, our TRT block applies a learned transform, non-negative thresholding (ReLU) and another transform on the resulting matrix. Both transforms are separable and linear, denoted by the operator $T(\cdot)$ and implemented using a Separable Linear (SL) layer,

$T(Z) = W_s Z W_n^T, \qquad (4)$

where $W_s$ and $W_n$ operate in the spatial and the similarity domains respectively. Separability of the SL layer allows a substantial reduction in the number of parameters of the network. In fact, computing $T(Z)$ is equivalent to applying $(W_n \otimes W_s)\,\mathrm{vec}(Z)$, where $\otimes$ is a Kronecker product between $W_n$ and $W_s$, and $\mathrm{vec}(Z)$ is a vectorized version of $Z$.

Since a concatenation of two SL layers can be replaced by a single effective SL layer, due to their linearity, we remove one SL layer in any concatenation of two TRT-s, as shown in Figure 4. The TRT component without the second transform is denoted by TR, and when concatenating TRT-s, the first blocks should be replaced by TR-s. Another variant we use in our network is a version of TR with batch normalization added before the ReLU.

Fig. 4: Concatenation of two TRTs: One SL layer is removed due to linearity, converting the first TRT block into TR.

Another component of the filtering network is an Aggregation (AGG) block, depicted in Figure 5. This block imposes consistency between overlapping patches by combining them into a temporary image using plain averaging (as described in Eq. (1) but without the weights), and extracting them back from the obtained image by $R_i$.
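In code, the AGG block amounts to unweighted patch averaging followed by re-extraction; the sketch below reuses `combine_patches` from the earlier sketch and is, again, our illustration rather than the paper's implementation.

```python
import numpy as np

def aggregate(patches, image_shape, p):
    """AGG block sketch: plain (unweighted) averaging of the overlapping
    patches into a temporary image, followed by re-extraction, so that
    overlapping patches become mutually consistent."""
    H, W = image_shape
    tmp = combine_patches(patches, np.ones(len(patches)), image_shape, p)
    m = (p - 1) // 2
    padded = np.pad(tmp, m, mode='reflect')   # same mirror padding as before
    return np.stack([padded[i:i + p, j:j + p]
                     for i in range(H) for j in range(W)])
```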

Fig. 5: Aggregation (AGG) block.

The complete architecture of the filtering network is presented in Figure 6. The network receives as input two sets of matrices, $\{Y_i\}$ and $\{Y_i^{(2)}\}$, and its output is an array of filtered overlapping patches $\{\hat{x}_i\}$. At first, each of these matrices is multiplied by a diagonal weight matrix $W$. Recall that the columns of $Y_i$ (or $Y_i^{(2)}$) are image patches, where the first is the processed patch and the rest are its neighbors. The weights express the network’s belief regarding the relevance of each of the neighbor patches to the denoising process. These weights are calculated using an auxiliary network denoted as “weight net”, which consists of seven FC (Fully Connected) layers with batch normalization and ReLU between each two FC’s. The network gets as input the sample variance of the processed patch and the squared distances between the patch and its nearest neighbors.
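A sketch of such a weight net is given below; the hidden width is our assumption, as the extracted text omits the FC layer sizes.

```python
import torch.nn as nn

def make_weight_net(k, hidden=64):
    """'Weight net' sketch: seven FC layers with BatchNorm and ReLU between
    each two FCs. Input: the patch's sample variance plus the k-1 squared
    neighbour distances; output: k weights, one per column of the group
    matrix. The hidden width is an assumption."""
    layers, d_in = [], k                  # 1 variance + (k - 1) distances
    for _ in range(6):
        layers += [nn.Linear(d_in, hidden), nn.BatchNorm1d(hidden), nn.ReLU()]
        d_in = hidden
    layers.append(nn.Linear(d_in, k))     # the seventh FC emits the k weights
    return nn.Sequential(*layers)
```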

Fig. 6: The filtering network. The table below summarizes the sizes of the SL layers.

After multiplication by $W$, the matrix undergoes a series of operations that include transforms, ReLUs and batch normalization, until it reaches the AGG block, as shown in Figure 6. The aggregation block imposes consistency of the matrices, which represent overlapping patches, but also causes loss of some information; therefore we split the flow into two branches: with and without AGG. Since the output of any TR or TRT component is in the feature domain, we wrap the AGG block with a pair of transforms, where the first returns the features to the image domain, and the second transforms the output back to the feature space while imposing sparsity. The 2nd-scale matrices $Y_i^{(2)}$ undergo very similar operations to the 1st-scale ones, but with different learned parameters. The only difference in the treatment of the two scales is in the functionality of the aggregation blocks. Since the 2nd-scale AGG operates on downsampled patches, combination and extraction are done with a stride of two. Additionally, it applies the bilinear low-pass filter defined in Equation (3) on the temporary image obtained after the patch combination.

The fusion block applies a joint transform that fuses the features coming from four origins: the 1st and 2nd scales, with and without aggregation. The columns of all these matrices are concatenated together such that the same spatial transformation is applied to all. Note that the network size can be reduced, at the cost of a slight degradation in performance, by removing several of its components. We discuss this option in the results section.

III Experimental Results

This section reports the performance of the proposed scheme, with a comprehensive comparison to recent SOTA denoising algorithms. In particular, we include in these comparisons the classical BM3D [4] due to its resemblance to our network architecture, the TNRD [3], DnCNN [34] and FFDNet [35] networks, the non-local and high-performance NLRN [13] architecture, and the recently published Learned K-SVD (LKSVD) [26] method. We also include comparisons to Lefkimmiatis' networks, NLNet [11] and UNLNet [12], which inspired our work. Our algorithm is denoted Non-Local Multi-Scale (NLMS), and we present two versions of it, NLMS and NLMS-S. The second is a simplified network with slightly weaker performance (see more below).

III-A Denoising with Known Noise Level

We start with plain denoising experiments, in which the noise is additive white Gaussian with a known variance. This is the common case covered by all the above-mentioned methods.

Our network is trained on 432 images from the BSD500 set [17], and the evaluation uses the remaining 68 images (BSD68). The network is trained end-to-end using a decreasing learning rate over batches of 4 images, with the mean-squared-error loss. We start training with the Adam optimizer, and switch to SGD with a smaller initial learning rate at the last part of the training.
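The following sketch illustrates this two-phase schedule; the learning-rate values and step counts are placeholders, since the extracted text omits them.

```python
import itertools
import torch

def train(model, loader, adam_steps, sgd_steps, lr_adam=1e-3, lr_sgd=1e-4):
    """Two-phase training sketch: MSE loss, Adam first, then SGD with a
    smaller rate. The learning-rate values are placeholders; a decaying
    schedule can be added with torch.optim.lr_scheduler."""
    mse = torch.nn.MSELoss()
    batches = itertools.cycle(loader)      # loader yields (noisy, clean) pairs
    phases = [(torch.optim.Adam(model.parameters(), lr=lr_adam), adam_steps),
              (torch.optim.SGD(model.parameters(), lr=lr_sgd), sgd_steps)]
    for opt, steps in phases:
        for _ in range(steps):
            noisy, clean = next(batches)
            opt.zero_grad()
            mse(model(noisy), clean).backward()
            opt.step()
```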

Figure 7 presents a comparison between our algorithm and leading alternatives by plotting their PSNR performance versus their number of trained parameters. This figure exposes the fact that the performance of denoising networks is heavily influenced by their complexity. As can be seen, the various algorithms can be roughly split into two categories: low-weight architectures with fewer than 100K parameters (TNRD [3], LKSVD [26], NLNet [11] and UNLNet [12], the last being a blind denoising network trained for a range of noise levels), and much larger and slightly better performing networks (DnCNN [34], FFDNet [35], and NLRN [13]) that use hundreds of thousands of parameters. As we proceed in this section, we emphasize low-weight architectures in our comparisons, the category to which our network belongs. Figure 7 shows that our networks (both NLMS and NLMS-S) achieve the best results within this low-weight category.

Fig. 7: Comparing denoising networks: PSNR performance vs. the number of trained parameters (for a fixed noise level).

Detailed quantitative denoising results per noise level are reported in Table I (NLNet [11] complexity and PSNR are taken from the released code). For each noise level, the best denoising performance is marked in red, and the best performance within the low-weight category is marked in blue. Table II reports the number of trained parameters for each of the competing networks. Figures 8, 9, 10 and 11 present examples of denoising results. Since our architecture is related to both BM3D and NLNet, we focus on qualitative comparisons with these algorithms. For all noise levels our results are significantly sharper, contain fewer artifacts and preserve more details than those of BM3D. In comparison to NLNet, our method is significantly better at high noise levels due to our multi-scale treatment, recovering large and repeating elements, as shown in Figure 8(p). In fact, our algorithm manages to recover repeating elements better than all methods presented in Figure 8 except NLRN. In addition, at high noise levels, the multi-scale treatment allows handling smooth areas with fewer artifacts than NLNet, as one can see from the results in Figures 9(o) and 9(p). At medium noise levels, our algorithm recovers more details, while NLNet tends to over-smooth the recovered image; see, for example, the elephant skin in Figure 10 and the mountain glacier in Figure 11.

Method        σ=15    σ=25    σ=50    Average
TNRD [3]      31.42   28.92   25.97   28.77
DnCNN [34]    31.73   29.23   26.23   29.06
BM3D [4]      31.07   28.57   25.62   28.42
NLRN [13]     31.88   29.41   26.47   29.25
NLNet [11]    31.50   28.98   26.03   28.84
NLMS (ours)   31.62   29.11   26.17   28.97
TABLE I: B/W denoising performance with a known noise level: best PSNR is marked in red, and best PSNR within the low-weight category is marked in blue.
DnCNN   NLRN   TNRD   NLNet   NLMS   NLMS-S
556K    330K   26.6K  24.3K   61.6K  40.2K
TABLE II: Denoising networks: number of parameters.
(a) Original
(b) Noisy
(c) DnCNN [34]
PSNR = 25.56dB
(d) BM3D [4]
PSNR = 24.99dB
(e) NLRN [13]
PSNR = 26.11dB
(f) TNRD [3]
PSNR = 25.07dB
(g) NLNet [11]
PSNR = 25.21dB
(h) NLMS (ours)
PSNR = 25.60dB
(i) Original
(j) Noisy
(k) DnCNN
(l) BM3D
(m) NLRN
(n) TNRD
(o) NLNet
(p) NLMS (ours)
Fig. 8: Denoising example.
(a) Original
(b) Noisy
(c) DnCNN [34]
PSNR = 23.95dB
(d) BM3D [4]
PSNR = 23.36dB
(e) NLRN [13]
PSNR = 24.22dB
(f) TNRD [3]
PSNR = 23.61dB
(g) NLNet [11]
PSNR = 23.63dB
(h) NLMS (ours)
PSNR = 23.91dB
(i) Original
(j) Noisy
(k) DnCNN
(l) BM3D
(m) NLRN
(n) TNRD
(o) NLNet
(p) NLMS (ours)
Fig. 9: Denoising example.
(a) Original
(b) Noisy
(c) DnCNN [34]
PSNR = 32.32dB
(d) BM3D [4]
PSNR = 31.70dB
(e) NLRN [13]
PSNR = 32.47dB
(f) TNRD [3]
PSNR = 32.05dB
(g) NLNet [11]
PSNR = 32.14dB
(h) NLMS (ours)
PSNR = 32.30dB
(i) Original
(j) Noisy
(k) DnCNN
(l) BM3D
(m) NLRN
(n) TNRD
(o) NLNet
(p) NLMS (ours)
Fig. 10: Denoising example.
(a) Original
(b) Noisy
(c) DnCNN [34]
PSNR = 24.47dB
(d) BM3D [4]
PSNR = 23.81dB
(e) NLRN [13]
PSNR = 24.58dB
(f) TNRD [3]
PSNR = 24.14dB
(g) NLNet [11]
PSNR = 24.12dB
(h) NLMS (ours)
PSNR = 24.38dB
(i) Original
(j) Noisy
(k) DnCNN
(l) BM3D
(m) NLRN
(n) TNRD
(o) NLNet
(p) NLMS (ours)
Fig. 11: Denoising example.

For denoising of color images we use 3D patches and increase the sizes of the involved matrices accordingly, which brings the total number of our network's parameters to 94K. The nearest neighbor search is done using the luminance component. Quantitative denoising results are reported in Table III, where our network is denoted CNLMS (Color NLMS). As can be seen, our network is the best within the low-weight category, and gets quite close to the CDnCNN performance [34]. Figures 12 and 13 present examples of denoising results, which show that CNLMS handles low-frequency noise better than CBM3D and CNLNet due to its multi-scale treatment.

Method         σ=15    σ=25    σ=50    Average
CFFDNet [35]   33.87   31.21   27.96   31.01
CDnCNN [34]    33.99   31.31   28.01   31.10
CBM3D [5]      33.50   30.68   27.36   30.51
CNLNet [11]    33.81   31.08   27.73   30.87
CNLMS (ours)   33.85   31.18   27.91   30.98
TABLE III: Color image denoising performance: best PSNR is marked in red, and best PSNR within the low-weight category is marked in blue.
(a) Original
(b) Noisy
(c) CDnCNN [34]
PSNR = 36.68dB
(d) CBM3D [5]
PSNR = 35.51dB
(e) CFFDNet [35]
PSNR = 36.62dB
(f) CNLNet [11]
PSNR = 35.98dB
(g) CNLMS (ours)
PSNR = 36.56dB
(h) Original
(i) Noisy
(j) CDnCNN
(k) CBM3D
(l) CFFDNet
(m) CNLNet
(n) CNLMS (ours)
Fig. 12: Color image denoising example.
(a) Original
(b) Noisy
(c) CDnCNN [34]
PSNR = 26.85dB
(d) CBM3D [5]
PSNR = 26.23dB
(e) CFFDNet [35]
PSNR = 26.78dB
(f) CNLNet [11]
PSNR = 26.62dB
(g) CNLMS (ours)
PSNR = 26.90dB
(h) Original
(i) Noisy
(j) CDnCNN
(k) CBM3D
(l) CFFDNet
(m) CNLNet
(n) CNLMS (ours)
Fig. 13: Color image denoising example.

III-B Blind Denoising

Blind denoising, i.e., denoising with an unknown noise level, is a useful feature when it comes to neural networks, as it allows a single fixed network to serve a range of noise levels. This is a more practical solution compared to the one discussed above, in which we designed a series of networks, each trained for a particular σ. We report the blind denoising performance of our architecture and compare it to DnCNN-b [34] (a version of DnCNN that has been trained for a range of σ values) and UNLNet [12]. Our blind denoising network (denoted NLMS-b) preserves its structure, but is simply trained by mixing examples over a range of noise levels. The evaluation of all three networks is performed on images with σ = 15 and σ = 25. The results of this experiment are reported in Table IV. As can be seen, our method obtains a higher PSNR than UNLNet, while being slightly weaker than DnCNN-b. Considering again the fact that our network has only a small fraction of the parameters of DnCNN-b, we can say that our approach leads to SOTA results in the low-weight category.
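Training such a blind network only changes how batches are synthesized: a noise level is drawn per image from the training range, as in the sketch below (the range endpoints are assumptions; the extracted text omits them).

```python
import torch

def blind_batch(clean, sigma_min=10.0, sigma_max=30.0):
    """Draw a noise level per image from the training range and corrupt the
    clean batch with it. Images are assumed to lie in [0, 1]; the range
    endpoints are illustrative."""
    b = clean.shape[0]
    sigma = torch.empty(b, 1, 1, 1).uniform_(sigma_min, sigma_max) / 255.0
    noisy = clean + sigma * torch.randn_like(clean)
    return noisy, clean
```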

Method          σ=15    σ=25    Average
DnCNN-b [34]    31.61   29.16   30.39
UNLNet [12]     31.47   28.96   30.22
NLMS-b (ours)   31.54   29.06   30.30
TABLE IV: Blind denoising performance.

III-C Reducing Network Size

Our NLMS denoising network can be further simplified by removing several of its components. The resulting smaller network, denoted NLMS-S, contains 30% fewer parameters than the original NLMS architecture (see Table II), while achieving slightly weaker performance. Table V shows that for both the regular and the blind denoising scenarios, NLMS-S achieves an average PSNR that is only about 0.05dB lower than the full-size NLMS network. Denoising examples are presented in Figure 14, showing that the visual quality gap between NLMS and NLMS-S is marginal.

Method       σ=15    σ=25    σ=50    Average
NLMS         31.62   29.11   26.17   28.97
NLMS-S       31.57   29.08   26.13   28.93
NLMS-b       31.54   29.06   —       30.30
NLMS-S-b     31.49   29.01   —       30.25
TABLE V: Performance comparison between NLMS and its smaller version, NLMS-S (the last two rows are the blind versions, averaged over σ = 15 and 25).
(a) Original

(b) Noisy

(c) NLMS
PSNR = 29.38dB
(d) NLMS-S
PSNR = 29.34dB
(e) Original

(f) Noisy

(g) NLMS
PSNR = 28.96dB
(h) NLMS-S
PSNR = 28.89dB
Fig. 14: Comparison between full and small versions of the NLMS network.

IV Network Adaptation for Better Denoising

A network trained on a set of general natural images might fail to obtain high-quality results when applied to images that are not well represented in the training set. For example, applying our network to astronomical or text images creates pronounced artifacts, as can be seen in Figures 17(g) and 18(g), since these images contain specific structures that are atypical of natural images. The work reported in [22, 23] suggests training class-aware denoisers, showing that they lead to better performance. However, this approach requires a large amount of images for training each class (e.g., [22, 23] train their networks on 900 images per class), and holding many networks for covering the variety of classes to handle.

In this work we propose a different approach: instead of learning class-aware denoisers, we adapt the above-described universal network, which has been trained on natural images, to handle incoming images with special content. Adaptation is obtained by first denoising the input image regularly, then seeking (e.g., using Google image search) a few images closely related to it, and re-training the network on this small set of clean images. The process concludes by denoising the input image with the updated network. An advantage of low-weight schemes is their ability to be retrained on small amounts of data without overfitting. Indeed, we present experiments in which our network (NLMS) is updated with a single similar image. The training images used for our experiments are shown in Figure 16. In each experiment except the text one, the network has been trained over 500 batches of 4 cropped sub-images with random offsets, which takes about 6-7 minutes on an Nvidia GeForce GTX 1080 Ti GPU. The adaptation does not require early stopping of the training, i.e., training the network over tens of thousands of batches leads to similar and even better results. Since the statistics of text images are distant from those of natural images, the adaptation of the network takes more time; in our experiment, training has been done over 6300 batches. In accordance with the longer training time, this adaptation gains more than 4dB improvement in PSNR, where an improvement of 2.8dB is already achieved after only 400 batches, as shown in Figure 15.
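The adaptation procedure can be summarized by the following sketch, where the retrieval of the similar clean image is left outside the function, and the crop size, learning rate and noise handling are our assumptions.

```python
import torch

def adapt_and_denoise(model, noisy, reference, sigma=25 / 255., steps=500,
                      crop=128, lr=1e-4):
    """Adaptation sketch: fine-tune the universal denoiser on random crops of
    a single clean image related to the input, then denoise again.
    `sigma`, `crop` and `lr` are illustrative assumptions."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = torch.nn.MSELoss()
    H, W = reference.shape[-2:]
    for _ in range(steps):                  # e.g. 500 batches of 4 crops
        crops = []
        for _ in range(4):
            i = int(torch.randint(0, H - crop + 1, (1,)))
            j = int(torch.randint(0, W - crop + 1, (1,)))
            crops.append(reference[..., i:i + crop, j:j + crop])
        clean = torch.stack(crops)
        noisy_crop = clean + sigma * torch.randn_like(clean)
        opt.zero_grad()
        mse(model(noisy_crop), clean).backward()
        opt.step()
    with torch.no_grad():
        return model(noisy)                 # second, adapted denoising pass
```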

The first two experiments, presented in Figures 17 and 18, show adaptation examples for non-natural images: astronomical and text images. As can be seen in Figures 17(g) and 18(g), the denoising results achieved by NLMS before adaptation are poor. However, adapting the network by training on a single similar image (a different training image for each experiment) significantly improves both the PSNR and the visual quality of the results. When it comes to natural images, our regularly trained network usually achieves satisfactory results that are harder to improve. However, even in such cases, the denoising quality might be boosted by adapting the network using one similar image, as shown in the experiments presented in Figures 19 and 20, where adapting the network leads to a PSNR improvement of more than 0.25dB.

Fig. 15: PSNR vs. number of batches for the text image adaptation experiment.
(a) Astronomical
(b) Brick house
(c) Text
(d) Whale shark
Fig. 16: Training images for the network adaptation experiments. Actual sizes of the images are included.
(a) Clean astronomical

(b) Noisy

(c) Denoised
(before adaptation)
PSNR = 26.44dB
(d) Denoised
(after adaptation)
PSNR = 28.04dB
(e) Clean
(f) Noisy
(g) Denoised
(before adaptation)
(h) Denoised
(after adaptation)
Fig. 17: An example of network adaptation for astronomical images.
(a) Clean text

(b) Noisy

(c) Denoised
(before adaptation)
PSNR = 22.52dB
(d) Denoised
(after adaptation)
PSNR = 26.78dB
(e) Clean text
(f) Noisy
(g) Denoised
(before adaptation)
(h) Denoised
(after adaptation)
Fig. 18: An example of network adaptation for text images.
(a) Clean house

(b) Noisy house
(c) Denoised
(before adaptation)
PSNR = 27.06dB
(d) Denoised
(after adaptation)
PSNR = 27.47dB
(e) Clean
(f) Noisy
(g) Denoised
(before adaptation)
(h) Denoised
(after adaptation)
Fig. 19: An example of network adaptation for brick house images.
(a) Clean shark

(b) Noisy shark
(c) Denoised
(before adaptation)
PSNR = 27.87dB
(d) Denoised
(after adaptation)
PSNR = 28.14dB
(e) Absolute difference between the results
Fig. 20: An example of network adaptation for whale shark images. The difference is scaled by a factor of 3 for better visibility.

V Conclusion

This work presents a low-weight network for supervised image denoising. Our patch-based architecture exploits non-local self-similarity and representation sparsity, augmented by a multi-scale treatment. Separable linear layers, combined with a non-local neighbor search, allow capturing non-local interrelations between pixels using a small number of learned parameters. The proposed network achieves SOTA results in the low-weight category, and competitive performance overall. In addition, the presented network can be adapted to incoming noisy images by re-training on similar images, leading to a boost in denoising performance. In our future work we intend to extend the algorithm to other noise models, and we believe that the composed architecture could prove effective in more challenging inverse problems.

References

  • [1] A. Buades, B. Coll, and J. Morel (2005) A non-local algorithm for image denoising. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Vol. 2, pp. 60–65.
  • [2] H. C. Burger, C. J. Schuler, and S. Harmeling (2012) Image denoising: can plain neural networks compete with BM3D?. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2392–2399.
  • [3] Y. Chen, W. Yu, and T. Pock (2015) On learning optimized reaction diffusion processes for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5261–5269.
  • [4] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian (2007) Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing 16 (8), pp. 2080–2095.
  • [5] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian (2007) Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space. In 2007 IEEE International Conference on Image Processing, Vol. 1, pp. I-313–I-316.
  • [6] W. Dong, L. Zhang, G. Shi, and X. Li (2012) Nonlocally centralized sparse representation for image restoration. IEEE Transactions on Image Processing 22 (4), pp. 1620–1630.
  • [7] M. Elad and M. Aharon (2006) Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing 15 (12), pp. 3736–3745.
  • [8] M. Elad (2010) Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer Science & Business Media.
  • [9] S. Gu, L. Zhang, W. Zuo, and X. Feng (2014) Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2862–2869.
  • [10] A. Kheradmand and P. Milanfar (2014) A general framework for regularized, similarity-based image restoration. IEEE Transactions on Image Processing 23 (12), pp. 5136–5151.
  • [11] S. Lefkimmiatis (2017) Non-local color image denoising with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3587–3596.
  • [12] S. Lefkimmiatis (2018) Universal denoising networks: a novel CNN architecture for image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3204–3213.
  • [13] D. Liu, B. Wen, Y. Fan, C. C. Loy, and T. S. Huang (2018) Non-local recurrent network for image restoration. In Advances in Neural Information Processing Systems, pp. 1673–1682.
  • [14] P. Liu, H. Zhang, K. Zhang, L. Lin, and W. Zuo (2018) Multi-level wavelet-CNN for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 773–782.
  • [15] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman (2009) Non-local sparse models for image restoration. In 2009 IEEE 12th International Conference on Computer Vision, pp. 2272–2279.
  • [16] X. Mao, C. Shen, and Y. Yang (2016) Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Advances in Neural Information Processing Systems, pp. 2802–2810.
  • [17] D. Martin, C. Fowlkes, D. Tal, and J. Malik (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), Vol. 2, pp. 416–423.
  • [18] P. Milanfar (2013) A tour of modern image filtering: new insights and methods, both practical and theoretical. IEEE Signal Processing Magazine 30 (1), pp. 106–128.
  • [19] V. Papyan and M. Elad (2015) Multi-scale patch-based image restoration. IEEE Transactions on Image Processing 25 (1), pp. 249–261.
  • [20] N. Parikh and S. Boyd (2014) Proximal algorithms. Foundations and Trends in Optimization 1 (3), pp. 127–239.
  • [21] I. Ram, M. Elad, and I. Cohen (2013) Image processing using smooth ordering of its patches. IEEE Transactions on Image Processing 22 (7), pp. 2764–2774.
  • [22] T. Remez, O. Litany, R. Giryes, and A. M. Bronstein (2017) Deep class-aware image denoising. In 2017 International Conference on Sampling Theory and Applications (SampTA), pp. 138–142.
  • [23] T. Remez, O. Litany, R. Giryes, and A. M. Bronstein (2018) Class-aware fully convolutional Gaussian and Poisson denoising. IEEE Transactions on Image Processing 27 (11), pp. 5707–5722.
  • [24] Y. Romano and M. Elad (2015) Boosting of image denoising algorithms. SIAM Journal on Imaging Sciences 8 (2), pp. 1187–1219.
  • [25] S. Roth and M. J. Black (2009) Fields of experts. International Journal of Computer Vision 82 (2), pp. 205–229.
  • [26] M. Scetbon, M. Elad, and P. Milanfar (2019) Deep K-SVD denoising. arXiv preprint arXiv:1909.13164.
  • [27] J. Sulam, B. Ophir, and M. Elad (2014) Image denoising through multi-scale learnt dictionaries. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 808–812.
  • [28] Y. Tai, J. Yang, X. Liu, and C. Xu (2017) MemNet: a persistent memory network for image restoration. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4539–4547.
  • [29] G. Vaksman, M. Zibulevsky, and M. Elad (2016) Patch ordering as a regularization for inverse problems in image processing. SIAM Journal on Imaging Sciences 9 (1), pp. 287–319.
  • [30] Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang (2015) Deep networks for image super-resolution with sparse prior. In Proceedings of the IEEE International Conference on Computer Vision, pp. 370–378.
  • [31] N. Yair and T. Michaeli (2018) Multi-scale weighted nuclear norm image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [32] J. Yang, J. Wright, T. S. Huang, and Y. Ma (2010) Image super-resolution via sparse representation. IEEE Transactions on Image Processing 19 (11), pp. 2861–2873.
  • [33] G. Yu, G. Sapiro, and S. Mallat (2012) Solving inverse problems with piecewise linear estimators: from Gaussian mixture models to structured sparsity. IEEE Transactions on Image Processing 21 (5), pp. 2481–2499.
  • [34] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang (2017) Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing 26 (7), pp. 3142–3155.
  • [35] K. Zhang, W. Zuo, and L. Zhang (2018) FFDNet: toward a fast and flexible solution for CNN-based image denoising. IEEE Transactions on Image Processing 27 (9), pp. 4608–4622.
  • [36] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu (2018) Residual dense network for image restoration. arXiv preprint arXiv:1812.10477.
  • [37] D. Zoran and Y. Weiss (2011) From learning models of natural image patches to whole image restoration. In 2011 IEEE International Conference on Computer Vision (ICCV), pp. 479–486.