With the advent of Deep Learning (DL), the field of biomedical image denoising has recently taken rapid strides [weigert2018content, zhang2019poisson, buchholz2019cryo, buchholz2019content, krull2019noise2void, krull2019probabilistic, belthangady2019applications]. Today, content-aware image restoration (CARE) methods are leading the field due to their content awareness – learning a strong prior on the visual nature of the data to be reconstructed [weigert2018content, sui2018differential, laine2019nanoj, ouyang2018deep].
While CARE was initially proposed using pairs of noisy and clean (ground truth) images during training, several ways to circumvent this requirement have been proposed. Noise2Noise [noise2noise] shows how corresponding noisy image pairs can lead to virtually the same results. Self-supervised models, like Noise2Void (N2V) [krull2019noise2void] and Noise2Self [batson2019noise2self] show how even the requirement for a second noisy image can be avoided. These methods can train directly on the body of data to be denoised, making them extremely useful for practical applications. However, self-supervised methods are known to perform less well than models trained using paired training data [krull2019noise2void, krull2019probabilistic].
The recently proposed Probabilistic Noise2Void (PN2V) [krull2019probabilistic] shows how using sensor specific noise models can improve the quality of self-supervised denoising, bringing it close to traditional paired training. A PN2V noise model is computed from a sequence of noisy calibration images and characterizes the distribution of noisy pixel values around their respective ground truth signal value. In the context of PN2V, this information is represented as a collection of histograms [krull2019probabilistic].
In this work we make three major contributions. We improve PN2V by introducing parametric noise models based on Gaussian Mixture Models (GMM) and show why they perform better than histogram based representations. We show how to bootstrap a suitable noise model, even in the absence of calibration data. This renders PN2V fully unsupervised: nothing besides the data to be denoised is required for the method to be applied. All calibration data and corresponding noisy image data are made publicly available together with the code (github.com/juglab/ppn2v).
2 Proposed Approaches and Methods
Histogram based noise models, as originally suggested for PN2V, are built from a stack of calibration images. The imaged structures in this sequence can be arbitrary but must be static. Such images can, for example, be recorded by imaging the back-illuminated, half-opened field diaphragm (see Fig. 2). We call the pixel-wise average of this sequence the ground truth (GT) signal. By discretizing each GT pixel signal and its corresponding noisy observations, a histogram can be created for each GT signal, covering all corresponding noisy observations. The normalized set of histograms constitutes the camera noise model used in PN2V [krull2019probabilistic], describing the distribution of noisy pixel values that are to be expected for each GT signal.
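The histogram construction above can be sketched in a few lines of NumPy. This is a minimal sketch; the function name, bin count, and normalization details are illustrative and not taken from the PN2V reference implementation.

```python
import numpy as np

def histogram_noise_model(calibration_stack, n_bins=256):
    """Build a histogram based noise model from a stack of noisy
    calibration images of a static scene.

    calibration_stack: array of shape (n_images, H, W).
    Returns (hist, bin_edges): hist[i, j] approximates the probability
    of a noisy value in bin j given a GT signal in bin i.
    """
    # Pixel-wise mean over the stack serves as the ground truth (GT) signal.
    gt = calibration_stack.mean(axis=0)

    # One common binning for both GT signals and noisy observations.
    lo = min(gt.min(), calibration_stack.min())
    hi = max(gt.max(), calibration_stack.max())
    bin_edges = np.linspace(lo, hi, n_bins + 1)

    # 2D histogram over (GT signal, noisy observation) pairs.
    gt_rep = np.broadcast_to(gt, calibration_stack.shape).ravel()
    hist, _, _ = np.histogram2d(
        gt_rep, calibration_stack.ravel(), bins=[bin_edges, bin_edges])

    # Normalize each GT row into a conditional distribution p(x | s).
    row_sums = hist.sum(axis=1, keepdims=True)
    hist = np.where(row_sums > 0, hist / np.maximum(row_sums, 1e-12), 0.0)
    return hist, bin_edges
```

Each nonzero row of the returned matrix is a normalized histogram over noisy observations for one discretized GT signal level.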
GMM based noise models describe the distribution of noisy observations $x$ for a GT signal $s$ as the weighted average of $K$ normal distributions:

$$p_{\mathrm{NM}}(x \mid s) = \sum_{k=1}^{K} \alpha_k(s)\, f_{\mathcal{N}}\big(x;\, \mu_k(s), \sigma_k^2(s)\big), \quad (1)$$

where $f_{\mathcal{N}}(x;\, \mu, \sigma^2)$ is the probability density function of the normal distribution. We define each component's weight $\alpha_k(s)$, mean $\mu_k(s)$, and variance $\sigma_k^2(s)$ as a function of the signal $s$. To ensure all weights are positive and sum to one, we define

$$\alpha_k(s) = \frac{\exp\big(a_k(s)\big)}{\sum_{k'=1}^{K} \exp\big(a_{k'}(s)\big)}, \quad (2)$$

where $a_k(s)$ is a polynomial of degree $n-1$. To ensure that our distributions are always centered around the true signal $s$, we define

$$\mu_k(s) = s + b_k(s) - \sum_{k'=1}^{K} \alpha_{k'}(s)\, b_{k'}(s), \quad (3)$$

where $b_k(s)$ is again a polynomial of degree $n-1$; this guarantees $\sum_k \alpha_k(s)\, \mu_k(s) = s$. Finally, to ensure numerical stability, we define the variance as

$$\sigma_k^2(s) = \max\big(c_k(s),\, \sigma_{\min}^2\big), \quad (4)$$

where $\sigma_{\min}^2$ is a constant and $c_k(s)$ is again a polynomial of degree $n-1$. Hence, our GMM based noise model is fully described by the $K \times 3 \times n$ long vector $\boldsymbol{\theta}$ of polynomial coefficients. We use a maximum likelihood approach to fit these parameters to our calibration data, optimizing for

$$\boldsymbol{\theta}^{*} = \operatorname*{arg\,max}_{\boldsymbol{\theta}} \sum_{i} \ln p_{\mathrm{NM}}(x_i \mid s_i), \quad (5)$$

where the sum runs over all calibration pixels $i$ with noisy observation $x_i$ and GT signal $s_i$.
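Under these definitions, evaluating the noise model and fitting it by maximum likelihood can be sketched in NumPy/SciPy as follows. Function names, the derivative-free Nelder-Mead fit, and the initialization are illustrative stand-ins; the actual noise model is fit with ADAM (see Section 3).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def _poly(coeffs, s):
    # Evaluate a polynomial of degree n-1 (n coefficients) at signal(s) s.
    return np.polyval(coeffs, s)

def gmm_pdf(theta, x, s, K, n, var_min=1e-2):
    """Evaluate p_NM(x | s): a K-component GMM whose weights, means, and
    variances are polynomial functions of the signal s. theta is the flat
    vector of K*3*n coefficients (a_k, b_k, c_k blocks)."""
    a, b, c = np.asarray(theta).reshape(3, K, n)
    # Softmax over the a_k polynomials: positive weights that sum to one.
    logits = np.stack([_poly(a[k], s) for k in range(K)])
    logits = logits - logits.max(axis=0)
    alpha = np.exp(logits) / np.exp(logits).sum(axis=0)
    # Means centered around the true signal s (weighted offsets cancel).
    bs = np.stack([_poly(b[k], s) for k in range(K)])
    mu = s + bs - (alpha * bs).sum(axis=0)
    # Variances clamped from below for numerical stability.
    var = np.stack([np.maximum(_poly(c[k], s), var_min) for k in range(K)])
    comp = norm.pdf(x, loc=mu, scale=np.sqrt(var))  # f_N(x; mu_k, var_k)
    return (alpha * comp).sum(axis=0)

def fit_gmm(x, s, K=3, n=2, seed=0):
    """Maximum likelihood fit of the K*3*n coefficients to calibration
    pixel pairs (x_i, s_i); derivative-free optimizer for illustration."""
    rng = np.random.default_rng(seed)
    theta0 = np.concatenate([rng.normal(0.0, 0.1, 2 * K * n),
                             np.full(K * n, 1.0)])  # start with variance ~1
    nll = lambda t: -np.log(gmm_pdf(t, x, s, K, n) + 1e-10).sum()
    return minimize(nll, theta0, method="Nelder-Mead",
                    options={"maxiter": 2000, "fatol": 1e-6}).x
```

Note that for $K = 1$ the centering term forces $\mu_1(s) = s$ exactly, so only the variance polynomial remains to be learned.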
Bootstrapped PN2V allows us to address scenarios in which no calibration data is available, e.g., data that was acquired without denoising in mind. We propose the following bootstrapping procedure. First, we train and apply the self-supervised N2V [krull2019noise2void] on the body of available noisy images. Then, we treat the resulting denoised images as if they were the GT, henceforth calling them pseudo ground truth (pseudo GT). We can now use the corresponding noisy and denoised pixel values to either construct a histogram based or learn a GMM based noise model.
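The pairing step of this bootstrapping procedure can be sketched as follows. Since training an N2V network is outside the scope of a snippet, a median filter stands in for the trained denoiser; all names are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def bootstrap_pairs(noisy_images, denoise_fn):
    """Bootstrapped PN2V, pairing step: denoise the whole body of noisy
    data, then pair every noisy pixel with its pseudo ground truth
    (pseudo GT) prediction. The flat (pseudo_gt, noisy) arrays can feed
    either the histogram or the GMM noise model construction."""
    noisy = np.asarray(noisy_images, dtype=np.float64)
    pseudo_gt = np.stack([denoise_fn(img) for img in noisy])
    return pseudo_gt.ravel(), noisy.ravel()

# Cheap stand-in for a trained N2V network (illustration only):
denoiser = lambda im: median_filter(im, size=3)
```

In the actual pipeline, `denoise_fn` would be the trained N2V network applied to each image.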
3 Experiments and Results
Datasets: We acquired three datasets (Fig. 2), all of which are made publicly available: (i) the Convallaria data, available online as part of PN2V, consisting of 100 calibration images (diaphragm images, as described above) and 100 noisy images of a Convallaria section; (ii) the mouse skull nuclei dataset, consisting of 500 calibration images (showing the edge of a fluorescent slide) and 200 noisy realizations of the same static mouse skull nuclei; and (iii) the mouse actin dataset, consisting of 100 calibration images (diaphragm images with only the sample mounting medium in the field of view) and 100 noisy realizations of the same static actin sample. The Convallaria and mouse actin datasets were acquired on a spinning disc confocal microscope, while the mouse skull nuclei dataset was acquired with a point scanning confocal microscope.
Implementation and training details: All evaluated training schemes are based on the implementation from [krull2019probabilistic] and use the same network architecture: a U-Net [ronneberger2015u] with depth 3, 1 input channel, and 64 feature channels in the first layer. All networks are trained with ADAM [kingma2014adam]; the initial learning rate, patch size, batch size, virtual batch size, learning rate scheduler, and the numbers of training epochs and steps per epoch follow the standard setup of [krull2019probabilistic]. We use the N2V and CARE (traditional supervised training) implementations from [krull2019probabilistic].
With PN2V, we refer to the version with the original histogram based noise model, derived from the available calibration data. As in [krull2019probabilistic], the number of histogram bins is, for each dataset, determined empirically such that the denoising performance (PSNR) of histogram based PN2V is maximized. The minimum and maximum bins are set to the minimum and maximum values present in the data to be denoised.
Whenever we use our proposed GMM noise model, we label results with PN2V GMM. Unless stated otherwise, all GMM noise models use the same default numbers of Gaussians and of coefficients per parameter, and are trained on the available calibration data. Starting from a random initialization, optimization is performed using ADAM with a fixed batch size, number of iterations, and learning rate.
For bootstrapped PN2V (histogram and GMM based), we use the same setup as for PN2V but naturally use the bootstrapped noise models instead. They are referred to as Boot. Hist. and Boot. GMM, respectively. For the latter, we disregard the top and bottom percentile of the pseudo GT pixels during noise model training, as we empirically observe that their N2V predictions can often be unreliable.
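This percentile filtering can be sketched as follows; the function name and the exact cutoff values are illustrative, not taken from the published code.

```python
import numpy as np

def clip_percentiles(pseudo_gt, noisy, lower=1.0, upper=99.0):
    """Discard pixel pairs whose pseudo GT value falls into the extreme
    percentiles, where N2V predictions are often unreliable.
    lower/upper are percentile cutoffs in [0, 100]."""
    lo, hi = np.percentile(pseudo_gt, [lower, upper])
    keep = (pseudo_gt >= lo) & (pseudo_gt <= hi)
    return pseudo_gt[keep], noisy[keep]
```

The surviving pairs are then used for noise model training exactly as the calibration-based pairs would be.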
Comparing different training schemes:
For each dataset and denoising method, we repeated the experiment 5 times and compared the denoised images to the available GT images in terms of peak signal-to-noise ratio (PSNR). Results can be seen in Fig. 2, as well as in Table 1.
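PSNR conventions differ mainly in the choice of peak value; the sketch below uses the GT value range as peak, which is an assumption on our part rather than a convention stated in the text.

```python
import numpy as np

def psnr(gt, pred):
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    value range of the ground truth image (one common convention)."""
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    mse = np.mean((gt - pred) ** 2)          # mean squared error
    data_range = gt.max() - gt.min()         # peak value
    return 20 * np.log10(data_range) - 10 * np.log10(mse)
```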
Naturally, the fully supervised CARE networks, trained on clean ground truth images, show the best performance on all datasets. On the mouse actin dataset, PN2V using our GMM based noise model derived from high quality calibration data outperforms all other methods. Notably, on the other two datasets, our fully unsupervised bootstrapped approach provides superior results and is remarkably close to CARE. For a discussion of these results, see Section 4.
Ablation and parameter study: Next, we compare the robustness of histogram and GMM based noise models with respect to increasingly imperfect calibration data, using the Convallaria dataset as an example. These ablation studies consist of two scenarios: the available calibration data covers less and less of the range of signals in the data to be denoised, or the number of available calibration pixels decreases successively. Figure 4 (c,d) shows example noise models derived from ablated calibration data. For both ablation tests, the performance of PN2V GMM is more robust than that of histogram based PN2V (see Fig. 3).
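Both ablations can be emulated on any set of calibration pixel pairs. The following sketch combines them for brevity; the function name and the fraction values are illustrative, not the ablation parameters used in the study.

```python
import numpy as np

def ablate_calibration(signal, observations, keep_fraction=0.5, seed=0):
    """Emulate the two ablations on calibration pixel pairs:
    (a) restrict the covered signal range to its lower part, and
    (b) randomly subsample the remaining pixels."""
    signal = np.asarray(signal)
    observations = np.asarray(observations)
    # (a) keep only signals in the lower keep_fraction of the signal range
    cut = signal.min() + keep_fraction * (signal.max() - signal.min())
    mask = signal <= cut
    s, x = signal[mask], observations[mask]
    # (b) keep a random keep_fraction of the remaining pixel pairs
    rng = np.random.default_rng(seed)
    idx = rng.choice(s.size, size=max(1, int(keep_fraction * s.size)),
                     replace=False)
    return s[idx], x[idx]
```

A noise model trained on the returned pairs then sees only a restricted, sparsified view of the true calibration data.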
We also investigated the sensitivity of GMM noise models with respect to the chosen hyperparameters. We performed a parameter study, varying the number of Gaussian components and the number of polynomial coefficients per parameter, using the Convallaria dataset with all available calibration data. Results are summarized in Table 2. While these tests suggest that the simple linear model (one Gaussian, two coefficients) is slightly preferable, the performance of all configurations remains superior to N2V (see Table 1). We additionally measured the performance of linear noise models using 1 and 3 Gaussians with imperfect calibration data (see Table 3). We observe that the noise model with 3 Gaussians leads to more stable results.
4 Discussion
We presented a GMM based variation of PN2V noise models and showed that they can achieve higher reconstruction quality even with imperfect calibration data (Fig. 3). Additionally, we introduced a novel bootstrapping scheme, which allows PN2V to be trained fully unsupervised using only the data to be denoised (Fig. 4(b)). Our results (Table 1) show that the denoising quality of bootstrapped PN2V is quite close to fully supervised CARE [weigert2018content] and significantly outperforms N2V [krull2019noise2void]. Hence, if calibration data for a given microscope is unavailable, bootstrapping offers an excellent alternative.
Interestingly, at times, bootstrapped GMM based noise models even outperform models derived from calibration data. A possible reason for such good performance is that the distribution of pseudo GT signals used in bootstrapping corresponds well to the distribution of signals in the data to be denoised. The distribution of GT signals in the calibration data however, can be quite different.
GMM noise models, trained according to Eq. 5, prioritize signals that are abundant in the (pseudo) GT and provide a better fit in these regions than in others. Figure 4(b) corroborates that our bootstrapped GMM fits the true noise distribution well for lower signals, which frequently occur in the Convallaria data, but fails for higher signals. In contrast, the GMM trained on calibration data (Fig. 4(a)) prioritizes its fit for higher signals, which are frequent in the calibration data but barely present in the Convallaria dataset.
We strongly believe that the methods we propose will help to make high quality DL based denoising an easily applicable tool that does not require the acquisition of paired training data or calibration data. This would facilitate a plethora of projects in cell biology, where the processes to be imaged are very photosensitive or so dynamic that suitable training image pairs cannot be obtained.
Acknowledgements: The authors would like to acknowledge the Light Microscopy Facility of MPI-CBG, Diana Afonso and Jacqueline Tabler from MPI-CBG for kindly sharing their samples and expertise, and Matthias Arzt from CSBD/MPI-CBG for helpful discussions on possible noise model formulations. This work was supported by the German Federal Ministry of Research and Education (BMBF) under the codes 031L0102 (de.NBI) and 01IS18026C (ScaDS2), and the German Research Foundation (DFG) under the code JU3110/1-1 (FiSS).