Fully Unsupervised Probabilistic Noise2Void

11/27/2019 ∙ by Mangal Prakash, et al.

Image denoising is the first step in many biomedical image analysis pipelines, and Deep Learning (DL) based methods currently perform best. A new category of DL methods, such as Noise2Void or Noise2Self, can be used fully unsupervised, requiring nothing but the noisy data itself. However, this comes at the price of reduced reconstruction quality. The recently proposed Probabilistic Noise2Void (PN2V) improves results, but requires an additional noise model, for which calibration data must be acquired. Here, we present improvements to PN2V that (i) replace histogram based noise models with parametric noise models, and (ii) show how suitable noise models can be created even in the absence of calibration data. The latter is a major step, since it renders PN2V fully unsupervised. We demonstrate that all proposed improvements are not merely academic but practically relevant.




1 Introduction

With the advent of Deep Learning (DL), the field of biomedical image denoising has recently taken rapid strides [weigert2018content, zhang2019poisson, buchholz2019cryo, buchholz2019content, krull2019noise2void, krull2019probabilistic, belthangady2019applications]. Today, CARE methods are leading the field due to their content awareness – learning a strong prior on the visual nature of the data to be reconstructed [weigert2018content, sui2018differential, laine2019nanoj, ouyang2018deep].

[Figure 1 panels, left to right: RAW, Ours, Noise2Void, Ground Truth]

Figure 1: Our proposed GMM bootstrapping approach does not require paired training or calibration data, but achieves superior results compared to other fully unsupervised methods.

While CARE was initially proposed using pairs of noisy and clean (ground truth) images during training, several ways to circumvent this requirement have been proposed. Noise2Noise [noise2noise] shows how corresponding noisy image pairs can lead to virtually the same results. Self-supervised models, like Noise2Void (N2V) [krull2019noise2void] and Noise2Self [batson2019noise2self] show how even the requirement for a second noisy image can be avoided. These methods can train directly on the body of data to be denoised, making them extremely useful for practical applications. However, self-supervised methods are known to perform less well than models trained using paired training data [krull2019noise2void, krull2019probabilistic].

The recently proposed Probabilistic Noise2Void (PN2V) [krull2019probabilistic] shows how using sensor specific noise models can improve the quality of self-supervised denoising, bringing it close to traditional paired training. A PN2V noise model is computed from a sequence of noisy calibration images and characterizes the distribution of noisy pixel values around their respective ground truth signal value. In the context of PN2V, this information is represented as a collection of histograms [krull2019probabilistic].

In this work we make three major contributions. (i) We improve PN2V by introducing parametric noise models based on Gaussian Mixture Models (GMMs) and show why they perform better than histogram based representations. (ii) We show how to bootstrap a suitable noise model even in the absence of calibration data. This renders PN2V fully unsupervised: nothing besides the data to be denoised is required for the method to be applied. (iii) All calibration data and corresponding noisy image data are made publicly available together with the code (github.com/juglab/ppn2v).

Figure 2: A visual comparison of results obtained by CARE, N2V, PN2V, and our proposed methods (bold). We distinguish three families of methods: fully supervised (CARE), unsupervised but requiring additional calibration data (PN2V, our PN2V GMM), and fully unsupervised (N2V, PN2V using our bootstrapped histogram and GMM based noise models). The leftmost column in the unsupervised+calibration category shows the average of all available calibration images used for PN2V and PN2V GMM (see main text). Note that results of our fully unsupervised methods reach very similar quality to methods requiring either clean GT, or additional calibration data.
Methods      Convallaria      Mouse nuclei     Mouse actin
CARE         36.71 ± 0.026    36.58 ± 0.019    34.20 ± 0.021
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
PN2V         36.51 ± 0.025    36.29 ± 0.007    33.78 ± 0.006
PN2V GMM     36.47 ± 0.031    36.35 ± 0.018    33.86 ± 0.018
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
N2V          35.73 ± 0.037    35.84 ± 0.015    33.39 ± 0.014
Boot. Hist.  36.19 ± 0.016    36.31 ± 0.013    33.61 ± 0.016
Boot. GMM    36.70 ± 0.012    36.43 ± 0.014    33.74 ± 0.012
Table 1: Comparison of the denoising performance of all tested methods. Mean PSNR and standard error over five repetitions of each experiment are shown. Names of our proposed methods are shown in bold. Bold numbers indicate the best performing method in its respective category (supervised, unsupervised + calibration, and fully unsupervised; from top to bottom, separated by dashed lines).

2 Proposed Approaches and Methods

Histogram based noise models, as originally suggested for PN2V, are built from a stack of calibration images. The imaged structures in this sequence can be arbitrary but must be static. Such images can, for example, be recorded by imaging the back-illuminated, half-opened field diaphragm (see Fig. 2). We call the per-pixel average over the stack the ground truth (GT) signal $s_i$. By discretizing each GT pixel signal $s_i$ and its corresponding noisy observations $x_i^j$, a histogram can be created for each GT signal bin, covering all corresponding noisy observations. The normalized set of histograms constitutes the camera noise model used in PN2V [krull2019probabilistic], describing the distribution of noisy pixel values that are to be expected for each GT signal.
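As an illustration, a minimal NumPy sketch of this construction might look as follows. The function name, bin count, and array layout are our assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def histogram_noise_model(calibration, bins=256):
    """Build a histogram based noise model from a stack of calibration
    images (shape: n_images x H x W) of a static scene.

    Returns a (bins x bins) array whose row i is the normalized
    distribution of noisy observations for GT signals in bin i,
    plus the shared bin edges. (Hypothetical helper, for illustration.)
    """
    gt = calibration.mean(axis=0)              # GT = per-pixel average
    lo, hi = calibration.min(), calibration.max()
    edges = np.linspace(lo, hi, bins + 1)
    # Discretize GT signals and noisy observations into the same bins.
    gt_idx = np.clip(np.digitize(gt, edges) - 1, 0, bins - 1)
    hist = np.zeros((bins, bins))
    for noisy in calibration:                  # accumulate counts per GT bin
        obs_idx = np.clip(np.digitize(noisy, edges) - 1, 0, bins - 1)
        np.add.at(hist, (gt_idx.ravel(), obs_idx.ravel()), 1.0)
    # Normalize each GT-signal row into a probability distribution.
    row_sums = hist.sum(axis=1, keepdims=True)
    return hist / np.maximum(row_sums, 1e-20), edges
```

Row `i` of the returned array then approximates the distribution of noisy pixel values expected for GT signals falling in bin `i`.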

GMM based noise models describe the distribution of noisy observations $x_i$ for a GT signal $s_i$ as the weighted average of $K$ normal distributions

$$p(x_i \mid s_i) = \sum_{k=1}^{K} \alpha_k(s_i)\, f_{\mathcal{N}}\big(x_i;\, \mu_k(s_i),\, \sigma_k^2(s_i)\big), \tag{1}$$

where $f_{\mathcal{N}}(x; \mu, \sigma^2)$ is the probability density function of the normal distribution. We define each component's weight $\alpha_k(s)$, mean $\mu_k(s)$, and variance $\sigma_k^2(s)$ as a function of the signal $s$. To ensure all weights are positive and sum to one we define

$$\alpha_k(s) = \frac{\exp\!\big(a_k(s)\big)}{\sum_{k'=1}^{K} \exp\!\big(a_{k'}(s)\big)}, \tag{2}$$

where $a_k(s)$ is a polynomial with $n$ coefficients (degree $n-1$). To ensure that our distributions are always centered around the true signal $s$, i.e. that the mixture mean equals $s$, we define

$$\mu_k(s) = s + m_k(s) - \sum_{k'=1}^{K} \alpha_{k'}(s)\, m_{k'}(s), \tag{3}$$

where $m_k(s)$ is again a polynomial with $n$ coefficients. Finally, to ensure numerical stability, we define the variance

$$\sigma_k^2(s) = \max\!\big(b_k(s),\, c\big), \tag{4}$$

where $c > 0$ is a small constant, and $b_k(s)$ is again a polynomial with $n$ coefficients.

Hence, our GMM based noise model is fully described by the $3Kn$-long vector of polynomial coefficients $\boldsymbol{\theta}$. We use a maximum likelihood approach to fit the parameters to our calibration data, optimizing for

$$\boldsymbol{\theta}^{*} = \operatorname*{arg\,max}_{\boldsymbol{\theta}} \sum_{i} \sum_{j} \ln p_{\mathrm{GMM}}\big(x_i^j \mid s_i;\, \boldsymbol{\theta}\big), \tag{5}$$

where $x_i^j$ is the $j$-th noisy observation of GT signal $s_i$, and $p_{\mathrm{GMM}}$ is the GMM as described in Eq. 1. We use numerical optimization, see Section 3.
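The likelihood of such a signal dependent GMM can be evaluated in a few lines of NumPy. The following sketch shows one way to implement the negative log-likelihood that a numerical optimizer would minimize; the function name, the defaults K=1 and n=2, and the use of SciPy's generic optimizer are our assumptions, not the released implementation:

```python
import numpy as np
from scipy.optimize import minimize

def gmm_nll(theta, s, x, K=1, n=2, c=1e-3):
    """Mean negative log-likelihood of noisy observations x given GT
    signals s under the signal dependent GMM noise model.
    theta: flat vector of 3*K*n polynomial coefficients [a | m | b]."""
    a_c, m_c, b_c = theta.reshape(3, K, n)
    # Evaluate each polynomial (n coefficients) at every signal value.
    powers = np.stack([s ** p for p in range(n)], axis=-1)        # (N, n)
    a = powers @ a_c.T                                            # weight logits
    m = powers @ m_c.T                                            # mean offsets
    b = powers @ b_c.T                                            # raw variances
    alpha = np.exp(a - a.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)                     # weights
    mu = s[:, None] + m - (alpha * m).sum(axis=1, keepdims=True)  # centered means
    var = np.maximum(b, c)                                        # clipped variances
    # Mixture density of each observation, then the mean negative log-likelihood.
    comp = alpha * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
           / np.sqrt(2.0 * np.pi * var)
    return -np.mean(np.log(comp.sum(axis=1) + 1e-12))
```

A fit can then be run with a generic optimizer, e.g. `minimize(gmm_nll, theta0, args=(s, x), method="Nelder-Mead")`. Note that because `b_k(s)` depends on `s`, even a single Gaussian with a linear variance polynomial can model Poisson-like, signal dependent noise.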


Figure 3: Ablation studies on Convallaria data. Denoising performance of PN2V with histogram and linear GMM noise models is shown. (a) The five noise models we tested are deduced from subsets of the available calibration data, such that: the entire range of signals used in the Convallaria data is covered (NM1), only the lower 40% are covered (NM2), the lower 25% (NM3), 15% (NM4) and 9% (NM5). (b) This case is obtained by reducing the fraction of available noisy calibration pixels from NM1, via random subsampling.
Gaussians   Two coefficients   Three coefficients
1           36.56 ± 0.022      36.34 ± 0.040
2           36.48 ± 0.020      36.35 ± 0.014
3           36.47 ± 0.031      36.31 ± 0.022
Table 2: Testing a variety of GMM hyper-parameters. We tested GMMs using one, two, and three Gaussians, each using linear (two coefficients) and quadratic (three coefficients) parametrizations (see Section 2), to denoise the Convallaria data. The table shows the mean PSNR and standard error over 5 repetitions for each setup.
Gaussians NM1 NM2 NM3 NM4 NM5
Table 3: Denoising performance of PN2V GMM with linear noise models using one versus three Gaussians. For each case, five noise models were derived from different subsets of the available calibration data (see Fig. 3). We report the mean PSNR over 5 repetitions for each setup.

Bootstrapped PN2V allows us to address scenarios where no calibration data is available, e.g., data that was acquired without denoising in mind. We propose the following bootstrapping procedure. First, we train and apply the unsupervised N2V [krull2019noise2void] on the body of available noisy images. Then, we treat the resulting denoised images as if they were the GT, henceforth calling them pseudo ground truth. We can now use the corresponding noisy and denoised pixel values to either construct a histogram or learn a GMM based noise model.
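The pair-collection step of this bootstrapping procedure can be sketched as follows. Function and parameter names are hypothetical; `denoise_fn` stands in for a trained N2V network, and the percentile clipping anticipates the pseudo GT filtering described in Section 3:

```python
import numpy as np

def bootstrap_noise_model_pairs(noisy_stack, denoise_fn, clip_percentile=0.5):
    """Collect (pseudo GT, noisy) pixel pairs for bootstrapped noise
    model training. `denoise_fn` maps one noisy image to its denoised
    estimate (e.g. a trained N2V network). Illustrative sketch only."""
    pgt, obs = [], []
    for img in noisy_stack:
        pred = denoise_fn(img)            # pseudo ground truth
        pgt.append(pred.ravel())
        obs.append(img.ravel())
    pgt = np.concatenate(pgt)
    obs = np.concatenate(obs)
    # Discard the extreme pseudo GT percentiles, where the N2V
    # predictions are empirically less reliable.
    lo, hi = np.percentile(pgt, [clip_percentile, 100.0 - clip_percentile])
    keep = (pgt >= lo) & (pgt <= hi)
    return pgt[keep], obs[keep]
```

The returned pairs can then be fed to either the histogram construction or the GMM fit in place of real calibration data.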

3 Experiments and Results

Datasets: We acquired three datasets (Fig. 2) which are made publicly available: (1) Convallaria data, available online as part of PN2V, consisting of 100 calibration images (diaphragm images, as described above) and 100 noisy images of a Convallaria section; (2) a mouse skull nuclei dataset consisting of 500 calibration images (showing the edge of a fluorescent slide) and 200 noisy realizations of the same static mouse skull nuclei; and (3) a mouse actin dataset consisting of 100 calibration images (diaphragm images with only the sample mounting medium in the field of view) and 100 noisy realizations of the same static actin sample. The Convallaria and mouse actin datasets were acquired on a spinning disc confocal microscope, while the mouse skull nuclei dataset was acquired with a point scanning confocal microscope.

Implementation and training details: All evaluated training schemes are based on the implementation from [krull2019probabilistic] and use the same network architecture: a U-Net [ronneberger2015u] with depth 3, 1 input channel, and 64 feature channels in the first layer. All networks are trained with ADAM [kingma2014adam], using the initial learning rate, patch size, batch size, virtual batch size, number of epochs and steps per epoch, and the standard learning rate scheduler as in [krull2019probabilistic]. We use the N2V and CARE (traditional supervised training) implementations from [krull2019probabilistic].

With PN2V, we refer to the version with the original histogram based noise model, derived from the available calibration data. As in [krull2019probabilistic], for each dataset the number of histogram bins is determined empirically such that the denoising performance (PSNR) of histogram based PN2V is maximized. The minimum and maximum bins are set to the minimum and maximum values present in the data to be denoised.

Whenever we use our proposed GMM noise model, we label results with PN2V GMM. Unless stated differently, all GMM noise models use three Gaussians with two coefficients per parameter (cf. Tables 2 and 3) and are trained on the available calibration data. Starting from a random initialization, optimization is performed numerically using ADAM.

For bootstrapped PN2V (histogram and GMM based), we use the same setup as for PN2V but use the bootstrapped noise models instead. They are referred to as Boot. Hist. and Boot. GMM, respectively. For the latter, we disregard the top and bottom percentile of the pseudo GT pixels during noise model training, as we empirically observe that their N2V predictions can often be unreliable.

Comparing different training schemes: For each dataset and denoising method, we repeated each experiment 5 times and then compared the denoised images in terms of peak signal-to-noise ratio (PSNR) to the available GT images. Results can be seen in Fig. 2, as well as Table 1.
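For reference, PSNR against GT can be computed as follows; the choice of the GT dynamic range as the peak value is a common convention and our assumption, since the paper does not spell out its exact normalization:

```python
import numpy as np

def psnr(gt, pred):
    """Peak signal-to-noise ratio of `pred` against ground truth `gt`,
    with the peak taken as the GT dynamic range (one common convention)."""
    gt = gt.astype(np.float64)
    pred = pred.astype(np.float64)
    mse = np.mean((gt - pred) ** 2)  # mean squared error
    return 20.0 * np.log10(gt.max() - gt.min()) - 10.0 * np.log10(mse)
```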


Figure 4: Noise models for Convallaria data. The left column shows histogram based noise models, the center column their respective GMM based equivalents. The rightmost column shows noise models for specific signals (colors; histograms shown as vertical lines). For comparison, the full calibration data histogram is always included as a black curve. (a) Noise models trained on full calibration data. (b) Bootstrapped noise models. (c) Noise models trained on sub-sampled calibration data (Fig. 3b). (d) Noise models trained on reduced available calibration data (NM5 from Fig. 3a).

Naturally, the fully supervised CARE networks, trained on clean ground truth images, show the best performance on all datasets. On the mouse actin dataset, PN2V using our GMM based noise model derived from high quality calibration data outperforms all other methods. Notably, on the other two datasets, our fully unsupervised bootstrapped approach provides superior results and is remarkably close to CARE. For a discussion of these results, see Section 4.

Ablation and parameter study: Next we compare the robustness of histogram and GMM based noise models with respect to increasingly imperfect calibration data, using the Convallaria dataset as an example. These ablation studies consist of two scenarios, where (1) the available calibration data covers less and less of the range of signals in the data to be denoised, and (2) the amount of available calibration pixels decreases successively. Figure 4 (c,d) shows example noise models derived from ablated calibration data. Evidently, for both ablation tests, PN2V GMM performance is more robust than that of PN2V (see Fig. 3).

We also investigated the sensitivity of GMM noise models with respect to the chosen hyper-parameters. We performed a parameter study, varying the number of Gaussian kernels and the number of polynomial coefficients per parameter, using the Convallaria dataset with the full available calibration data. Results are summarized in Table 2. While these tests suggest that the simple linear model (one Gaussian, two coefficients) is slightly preferable, the performance of all configurations remains superior to N2V (see Table 1). We additionally measured the performance of linear noise models using one versus three Gaussians with imperfect calibration data (see Table 3). We observe that a noise model with three Gaussians leads to more stable results.

4 Discussion

We presented a GMM based variation of PN2V noise models and showed that they can achieve higher reconstruction quality even with imperfect calibration data (Fig. 3). Additionally, we introduced a novel bootstrapping scheme, which allows PN2V to be trained fully unsupervised using only the data to be denoised (Fig. 4(b)). Our results (Table 1) show that the denoising quality of bootstrapped PN2V is quite close to fully supervised CARE [weigert2018content] and significantly outperforms N2V [krull2019noise2void]. Hence, if calibration data for a given microscope is unavailable, bootstrapping offers an excellent alternative.

Interestingly, at times, bootstrapped GMM based noise models even outperform models derived from calibration data. A possible reason for this good performance is that the distribution of pseudo GT signals used in bootstrapping corresponds well to the distribution of signals in the data to be denoised. The distribution of GT signals in the calibration data, however, can be quite different.

GMM noise models, trained according to Eq. 5, prioritize signals that are abundant in the (pseudo) GT and provide a better fit in these regions compared to others. Figure 4(b) corroborates that our bootstrapped GMM fits well to the true noise distribution for lower signals, which frequently occur in the Convallaria data, but fails for higher signals. However, the GMM trained on calibration data (Fig. 4(a)), prioritizes its fit for higher signals, which are frequent in the calibration data, but barely present in the Convallaria dataset.

We strongly believe that the methods we propose will help to make high quality DL based denoising an easily applicable tool that does not require the acquisition of paired training data or calibration data. This would facilitate a plethora of projects in cell biology, where the processes to be imaged are very photosensitive or so dynamic that suitable training image pairs cannot be obtained.

5 Acknowledgements

The authors would like to acknowledge the Light Microscopy Facility of MPI-CBG, Diana Afonso and Jacqueline Tabler from MPI-CBG for kindly sharing their samples and expertise, and Matthias Arzt from CSBD/MPI-CBG for helpful discussions on possible noise model formulations. This work was supported by the German Federal Ministry of Research and Education (BMBF) under the codes 031L0102 (de.NBI) and 01IS18026C (ScaDS2), and the German Research Foundation (DFG) under the code JU3110/1-1(FiSS).