Basis Prediction Networks for Effective Burst Denoising with Large Kernels

12/09/2019 ∙ by Zhihao Xia, et al. ∙ Adobe, Washington University in St. Louis

Bursts of images exhibit significant self-similarity across both time and space. This motivates a representation of the kernels as linear combinations of a small set of basis elements. To this end, we introduce a novel basis prediction network that, given an input burst, predicts a set of global basis kernels — shared within the image — and the corresponding mixing coefficients — which are specific to individual pixels. Compared to other state-of-the-art deep learning techniques that output a large tensor of per-pixel spatiotemporal kernels, our formulation substantially reduces the dimensionality of the network output. This allows us to effectively exploit larger denoising kernels and achieve significant quality improvements (over 1dB PSNR) at reduced run-times compared to state-of-the-art methods.







1 Introduction

Figure 1: (Top) Qualitative result of our denoising network. The proposed approach (b) recovers fine geometric details from a very noisy image burst (a, only one frame is shown). (Bottom) Our per-burst bases can compactly represent a large variety of kernels by exploiting the redundancy of local image structures. We clustered the per-pixel coefficients predicted from the noisy burst using k-means. The clusters we obtain show strong spatial structure (d). For instance, we can identify a cluster corresponding to horizontal image edges (e, orange), one corresponding to vertical edges (e, green), and another for homogeneous regions with no structure (e, gray).

Burst denoising algorithms [24, 12, 27] seek to enable high-quality photography in challenging conditions and are increasingly being deployed in commercial mobile cameras [12, 21]. A burst captures a sequence of short-exposure frames of the scene that are free of motion blur, but with a high amount of noise in each frame and relative motion between frames. By accounting for this relative motion and using the fact that the noise is independent across frames, burst denoising attempts to aggregate these inputs and predict a single noise- and blur-free image estimate.

Recently, Mildenhall et al. [27] proposed an elegantly simple yet surprisingly successful approach to burst denoising. Rather than explicitly estimating inter-frame motion [12, 24, 13, 14, 18], their method produces denoised estimates at each pixel as a weighted average of observed noisy intensities in a window around that pixel's location in all frames. These averaging weights, or kernels, are allowed to vary from pixel to pixel to implicitly account for motion and image discontinuities, and are predicted from the noisy input burst using a "kernel prediction network" (KPN).

However, KPNs need to produce an output that is significantly higher-dimensional than the denoised image—even for kernels spanning eight frames, a KPN must predict 400 times as many kernel weights as image intensities. This comes with significant memory and computational costs, as well as difficulty in training given the many degrees of freedom in the output. As a result, KPNs have so far been used only with small kernels. This limits their denoising ability by preventing averaging across bigger spatial regions, and over frames with larger relative motion.

In this paper, we introduce an approach to predict large denoising kernels, and thus benefit from wider aggregation, while simultaneously limiting output dimensionality at each pixel, making the prediction network easier to train and compute- and memory-efficient. This approach is motivated by a long history of successful image restoration methods that have leveraged internal structure and self-similarity in natural images [5, 2, 35, 22, 40]. In burst denoising, self-similarity is particularly strong because we expect both spatial structure in the form of similar patterns that recur within a frame and across frames of the same scene, and temporal structure caused by consistency in scene and camera motion. Given the expected self-similarity and structure in the image intensities themselves, we argue that the corresponding denoising kernels must also have similar structure. Specifically, while allowing for individual per-pixel denoising kernels to be large, we assume that all the kernels for a given image span a lower-dimensional subspace.

Based on this observation, we introduce a new method for kernel-based burst denoising that achieves better accuracy and efficiency. Our contributions are:

  • We train a network that, given an input noisy burst, predicts both a global low-dimensional basis set of large kernels, and per-pixel coefficient vectors relative to this basis. This corresponds to a significantly lower-dimensional output than a KPN that predicts arbitrary kernels of the same size at each pixel.

  • Enforcing this structure on the denoising kernels acts as a form of regularization, and our experiments demonstrate that it leads to state-of-the-art denoising performance, with significantly higher quality (>1 dB PSNR) than regular KPNs.

  • Beyond reducing memory usage and computational burden at the output layer of our network, the structure of our output enables the final kernel filtering step to be performed much more efficiently in the Fourier domain. We show this both in terms of the required number of FLOPs and experimentally with actual run-times.

2 Related Work

Single-image and video denoising.

Single-image denoising has been studied extensively. To overcome the ill-posed nature of the problem, classical approaches [32, 33, 42] developed regularization schemes that model the local statistics of natural images. The most successful approaches [5, 2] exploit non-local self-similarity within the image and denoise pixels by aggregating similar pixels or patches from distant regions. These methods have been extended to denoise videos [19, 25], where the search for similar patches proceeds not only within frames, but also across different frames. Recent work has improved image denoising performance using convolutional networks trained on large datasets [3, 38, 39, 22, 40, 35, 36].

Burst denoising.

Single-image denoising is a fundamentally under-constrained problem. Burst processing reduces this ambiguity by using multiple observations (the frames of the burst) to recover a noise-free depiction of the scene. Burst denoising algorithms are now extensively used in commercial smartphone cameras [12] and can produce compelling results even in extreme low-light scenarios [4, 21]. As in video denoising, a significant challenge for burst processing is robustness to inter-frame motion. Many methods explicitly estimate this motion to align and denoise the frames [12, 24, 13, 14, 18]. Current state-of-the-art burst denoising techniques [27, 18, 11, 26] are based on deep neural networks. Many of them only require coarse registration, relying on the network to account for small residual misalignments [27, 11, 26].

Kernel Prediction Networks.

Given a burst sequence, Mildenhall et al. [27] propose predicting per-pixel kernels that are then applied to the input burst to produce the denoised output. They demonstrate that KPNs outperform direct pixel-synthesis networks, which tend to produce oversmoothed results. Subsequent work has extended this idea to use multiple kernels of varying sizes at every pixel [26]. KPNs have also been used in other applications, including denoising Monte Carlo renderings [1, 34, 9], video super-resolution [16] and deblurring [41], frame interpolation [28, 29, 23], and video prediction [15, 6, 23, 37].

Given their high-dimensional output (per-pixel kernels, which are three-dimensional in the case of burst denoising), KPNs have significant memory and compute requirements, as well as a large number of parameters in their final layer. To ameliorate this, Marinč et al. [26] and Niklaus et al. [29] propose predicting spatially separable kernels: producing a horizontal and a vertical kernel for each frame as output, and forming the spatial kernel as an outer product of these. However, this makes a strong a-priori assumption about kernel structure, and still requires constructing and filtering with different per-pixel kernels. In contrast, our approach assumes that the per-pixel kernels for a scene span a low-dimensional subspace, and predicts a basis for this subspace from the burst input. This approach also allows us to benefit from fast filtering in the Fourier domain.

3 Method

Figure 2: Our basis prediction network takes as input a burst of noisy input frames (a) together with the noise parameters (b). The frames are encoded into a shared feature space (c). These features are then decoded by two decoders with skip connections into a burst-specific basis of 3D kernels (e) and a set of per-pixel mixing coefficients (d). Both the coefficients and basis kernels are individually unit-normalized. Finally, we obtain per-pixel kernels by mixing the basis elements according to the coefficients and we apply them to the input burst to produce the final denoised image (f).

In burst denoising, we are given an input noisy burst of images X_t[n], where n indexes spatial locations and t ∈ {1, …, T} the different frames in the burst. Using a heteroscedastic Gaussian noise model [7], which accounts for both read and shot noise, we relate this to the corresponding noise-free frames Y_t as:

X_t[n] = Y_t[n] + ε_t[n],   ε_t[n] ~ N(0, σ_r² + σ_s Y_t[n]),   (1)

where σ_r and σ_s are the read- and shot-noise parameters. Choosing the first frame as reference, our goal is to produce a single denoised image Ŷ as an estimate of the first noise-free frame Y_1.
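For intuition, noisy frames under this model can be synthesized from clean ones. The sketch below is our own illustration (the function name, intensity range, and clipping are assumptions, not the paper's data pipeline):

```python
import numpy as np

def add_heteroscedastic_noise(y, sigma_r, sigma_s, seed=None):
    """Corrupt a noise-free frame y (intensities assumed in [0, 1]) with
    heteroscedastic Gaussian noise: per-pixel variance is
    sigma_r**2 (read noise) + sigma_s * y (shot noise)."""
    rng = np.random.default_rng(seed)
    variance = sigma_r ** 2 + sigma_s * np.clip(y, 0.0, None)
    return y + rng.standard_normal(y.shape) * np.sqrt(variance)
```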

3.1 Kernel-based burst denoising

Rather than train a network to regress Ŷ directly, Kernel Prediction Networks output a field of denoising kernels k_t[n], one for each pixel n at each frame t. The kernels have a K × K spatial support, indexed by offsets m, with separate weights for each frame. Given these predicted kernels, the denoised estimate is formed as:

Ŷ[n] = Σ_t Σ_m k_t[n][m] · X_t[n + m].   (2)

A key bottleneck in this pipeline is the prediction of this dense kernel field, which requires producing K²T numbers at every pixel of the output. Since networks with high-dimensional outputs are both expensive and require learning a large number of parameters in their last layer, KPNs have typically been used only with small kernels (5 × 5 in [27]).
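A direct implementation of this per-pixel filtering is a weighted average over a K × K window in every frame. The following unoptimized sketch (array layout and names are ours) makes the operation concrete:

```python
import numpy as np

def apply_kernel_field(burst, kernels):
    """Filter a burst with per-pixel spatio-temporal kernels.
    burst:   (T, H, W) noisy frames.
    kernels: (H, W, T, K, K), one K x K weight map per pixel and frame.
    Returns the (H, W) denoised estimate: a weighted average over a
    K x K window in every frame, centered at each pixel."""
    T, H, W = burst.shape
    K = kernels.shape[-1]
    r = K // 2
    padded = np.pad(burst, ((0, 0), (r, r), (r, r)), mode="edge")
    out = np.zeros((H, W))
    for t in range(T):
        for dy in range(K):
            for dx in range(K):
                # shifted view of frame t, weighted by the kernel entry
                out += kernels[:, :, t, dy, dx] * padded[t, dy:dy + H, dx:dx + W]
    return out
```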

3.2 Basis Prediction Networks

Instead of directly predicting unconstrained kernels for each spatial location, we designed a network that outputs: (1) a global kernel basis {b_j}, j = 1, …, J, of J kernels of size K × K × T; and (2) a J-dimensional coefficient vector c[n] at each spatial location. The per-pixel kernels are then formed as the linear combination:

k_t[n][m] = Σ_j c_j[n] · b_{j,t}[m].   (3)

Note that we typically choose the number of basis kernels J ≪ K²T. This implies that all the kernels for a given burst lie in a low-dimensional subspace, but this subspace will be different for different bursts (i.e., the basis is burst-specific). This procedure allows us to recreate a full kernel field with far fewer predictions. Assuming an H × W image, we need only make HWJ + JK²T predictions to effectively recreate a kernel field of size HWK²T.
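Reconstructing the full kernel field from the two outputs is a single tensor contraction. In the sketch below, J is the number of basis kernels, K the spatial kernel size, and T the number of frames; the concrete sizes are illustrative assumptions, not values quoted from this text:

```python
import numpy as np

def reconstruct_kernels(basis, coeffs):
    """basis:  (J, T, K, K) burst-specific 3D basis kernels.
    coeffs: (H, W, J) per-pixel mixing coefficients.
    Returns the implied (H, W, T, K, K) per-pixel kernel field."""
    return np.einsum("hwj,jtkl->hwtkl", coeffs, basis)

# Output-count bookkeeping for illustrative sizes (our assumptions):
H = W = 128; T = 8; K = 15; J = 90
kpn_outputs = H * W * K * K * T           # what a plain KPN must predict
our_outputs = H * W * J + J * K * K * T   # coefficients plus basis
# here our_outputs is roughly 18x smaller than kpn_outputs
```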

We designed our network following an encoder-decoder architecture with skip connections [31]. Our model, however, has two decoder branches, one for the basis and the other for the coefficients (Figure 2). The encoder is shared between the two branches because the meaning of the coefficients depends on the predicted basis in Equation (3), so the two outputs need to be coordinated. This encoder takes the noisy burst and noise parameters as input and, through multiple levels of downsampling followed by global average pooling, yields a single global feature vector as its encoding of the image. The per-pixel coefficients are then decoded from the encoder bottleneck to the full image resolution H × W, with J output channels. The common basis is decoded up to distinct spatial dimensions — the K × K extent of the kernels — with J·T output channels.

Since the basis branch decodes to a different spatial resolution, the skip connections need careful treatment: unlike in a usual U-Net, the encoder and decoder feature sizes do not match. Specifically, a pixel in a basis kernel has no meaningful correspondence to a pixel in the input frames. Therefore, in the skip connections from the shared encoder to the basis decoder, we apply a global spatial average pooling of the encoder's activations, and replicate the averaged vector to the resolution of the decoder layer. This mechanism aggregates encoder information globally without creating nonsensical correspondences between kernel and image locations, while still allowing features at multiple scales of the encoder to inform the basis decoder.

We ensure that each reconstructed kernel has positive weights that sum to one, so that it represents an averaging operation. We implement this constraint using softmax normalizations on both the coefficient and basis decoder outputs, so that every 3D kernel of the basis and every coefficient vector is normalized individually. A more detailed description of the architecture is provided in the supplementary material.
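One way to implement these constraints is a softmax over the relevant axes. A convenient consequence is that mixing softmax-normalized basis kernels with softmax-normalized coefficients yields a convex combination, so every reconstructed per-pixel kernel is automatically positive and sums to one. A sketch (names are ours):

```python
import numpy as np

def softmax_normalize(x, axes):
    """Softmax over the given axes: outputs are positive and sum to one.
    Use axes spanning the (T, K, K) extent for each basis kernel, and the
    J axis for each per-pixel coefficient vector."""
    x = x - x.max(axis=axes, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axes, keepdims=True)
```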

Our network is trained with respect to the quality of the final denoised output Ŷ, with losses on both image intensities and image gradients. Like [27], we additionally use a per-frame loss to bias the network away from relying only on the reference frame. We do this with separate losses on denoised estimates computed from each individual frame of the input burst. These are added to the main training loss, with a weight that is decayed across training iterations.
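The overall objective can be sketched as follows. The specific norms and the decay schedule here are illustrative assumptions (an L2 penalty on intensities, L1 on gradients, and an exponentially decayed per-frame weight); the paper's exact choices may differ:

```python
import numpy as np

def gradient(img):
    """Forward-difference spatial gradients (zero at the last row/column)."""
    gy = np.zeros_like(img); gy[:-1] = np.diff(img, axis=0)
    gx = np.zeros_like(img); gx[:, :-1] = np.diff(img, axis=1)
    return gy, gx

def training_loss(pred, per_frame_preds, target, step, gamma=0.9998):
    """Main loss on the final estimate plus annealed per-frame losses.
    pred: final denoised image; per_frame_preds: one estimate per frame."""
    def single(p):
        gy_p, gx_p = gradient(p)
        gy_t, gx_t = gradient(target)
        return (np.mean((p - target) ** 2)          # intensity term (assumed L2)
                + np.mean(np.abs(gy_p - gy_t))      # gradient terms (assumed L1)
                + np.mean(np.abs(gx_p - gx_t)))
    w = gamma ** step  # per-frame weight, decayed over training iterations
    return single(pred) + w * sum(single(p) for p in per_frame_preds)
```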

3.3 Efficient Fourier domain filtering

Filtering by convolution with large kernels is commonly implemented in the Fourier domain, where the filtering complexity is quasilinear in image size, while the complexity of direct convolution scales with the product of image- and kernel-size. But because the kernels in KPNs vary spatially, Equation (2) does not represent a standard convolution, ruling out this acceleration.

In our case, because our kernels are defined with respect to a small set of "global" basis elements, we can leverage Fourier-domain convolution to speed up filtering. We achieve this by combining and re-writing the expressions in Eq. (2) and Eq. (3) as:

Ŷ[n] = Σ_j c_j[n] · Σ_t (b_{j,t} * X_t)[n],   (4)

where * denotes standard spatial 2D convolution with a spatially-uniform kernel. In other words, we first form a set of filtered versions of the input burst by standard convolution with each of the basis kernels—convolving each frame in the burst with the corresponding "slice" of the basis kernel—and then take a spatially-varying linear combination of the filtered intensities at each pixel based on the coefficients c[n].
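This filter-then-mix reordering can be sketched directly as a plain spatial implementation, before any Fourier acceleration (names and array layout are ours):

```python
import numpy as np

def filter_then_mix(burst, basis, coeffs):
    """Equivalent reordering of the kernel filtering: filter each frame with
    the matching slice of each (spatially-uniform) basis kernel, then take a
    per-pixel linear combination of the filtered images.
    burst: (T, H, W), basis: (J, T, K, K), coeffs: (H, W, J)."""
    J, T, K, _ = basis.shape
    H, W = burst.shape[1:]
    r = K // 2
    padded = np.pad(burst, ((0, 0), (r, r), (r, r)), mode="edge")
    filtered = np.zeros((J, H, W))
    for j in range(J):
        for t in range(T):
            for dy in range(K):
                for dx in range(K):
                    filtered[j] += basis[j, t, dy, dx] * padded[t, dy:dy + H, dx:dx + W]
    # spatially-varying mixing of the J filtered images
    return np.einsum("hwj,jhw->hw", coeffs, filtered)
```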


where and

are spatial forward and inverse Fourier transforms. This is significantly more efficient for larger kernels, especially since we need not repeat forward Fourier transform of the inputs

for different basis kernels.
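A minimal sketch of this batched Fourier filtering. For brevity it uses circular boundary handling (a simplification we introduce; proper padding would avoid wrap-around), and it computes the forward transform of the burst only once:

```python
import numpy as np

def fft_filter(burst, basis):
    """Convolve every frame with the matching slice of every basis kernel
    in the Fourier domain, then sum over frames.
    burst: (T, H, W), basis: (J, T, K, K). Returns (J, H, W)."""
    T, H, W = burst.shape
    J, _, K, _ = basis.shape
    Xf = np.fft.rfft2(burst)  # one forward FFT per frame, reused for all J
    out = np.zeros((J, H, W))
    for j in range(J):
        # zero-pad each K x K slice to image size, centered at the origin
        kern = np.zeros((T, H, W))
        kern[:, :K, :K] = basis[j]
        kern = np.roll(kern, (-(K // 2), -(K // 2)), axis=(1, 2))
        out[j] = np.fft.irfft2(Xf * np.fft.rfft2(kern), s=(H, W)).sum(axis=0)
    return out
```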

Method Gain 1 Gain 2 Gain 4 Gain 8
HDR+ [12] 31.96 28.25 24.25 20.05
BM3D [5] 33.89 31.17 28.53 25.92
NLM [2] 33.23 30.46 27.43 23.86
VBM4D [25] 34.60 31.89 29.20 26.52
Direct 35.93 33.36 30.70 27.97
KPN [27] 36.47 33.93 31.19 27.97
KPN* 36.35 33.69 31.02 28.16
MKPN* [26] 36.88 34.22 31.45 28.52
Ours 38.18 35.42 32.54 29.45
Table 1: Denoising performance on a synthetic benchmark test set [27]. We report performance in terms of average PSNR (dB). Following [27, 26], our network was not trained on the noise level of the fourth column (Gain 8). Numbers for KPN* and MKPN* are based on our implementation of these techniques; numbers for all other methods are from [27]. Our method outperforms all prior methods at all noise levels by a wide margin.

4 Experiments

We closely follow the protocol of Mildenhall et al. [27] to train and evaluate our network. Our model is designed for bursts of eight frames at 128×128 resolution. It is trained using training and validation sets constructed from the Open Images dataset [20] following the procedure of [27]. We also use their test set of 73 images for evaluation.

Our default configuration uses a basis of 90 kernels. We train our network (as well as all ablation baselines) using the Adam optimizer [17]. We drop the learning rate twice, by a constant factor each time, whenever the validation loss saturates. Training takes around 600k iterations with batches of 24 images.

Figure 3: Denoising results on the benchmark synthetic test set [27] for our method, a direct prediction network (which directly regresses denoised pixels), and two KPN variants [27, 26] with the same kernel size as our method. The inset numbers give the average PSNR (dB) on the full image. In addition to better quantitative performance, our method better reproduces perceptual details like textures, edges, and text.

4.1 Denoising performance

Table 1 reports the PSNR of our denoised outputs on the test set. Each noise level corresponds to a sensor gain value (one-stop increments of the ISO setting in a camera); higher gains lead to noisier images. The highest noise level, Gain 8, lies outside the range we trained on; we use it to evaluate our model's extrapolation capability. In addition to our own model, we also report results for a motion-alignment-based method [12], several approaches based on non-local filtering [2, 5, 25], as well as the standard KPN burst denoiser [27], which is the current state of the art. Since we did not have access to the original KPN model, we implemented a version ourselves (which we use in the ablations of the next section) and also report its performance in Table 1; we find that it closely matches the numbers from [27]. Additionally, we train a network to directly regress the denoised pixel values from the input burst (i.e., without kernels), as well as our implementation of [26] with our larger kernel size for fair comparison.

We find that our method outperforms KPN [27] by a significant margin, over 1 dB PSNR at all noise levels. Our implementation of [26] also does well, but remains inferior to our model. We show qualitative results for a subset of methods in Figure 3. Our results have fewer artifacts, especially in textured regions and around thin structures like printed text.

4.2 Ablation and analysis

Gain 1 Gain 2 Gain 4 Gain 8
KPN () 34.29 31.80 28.23 24.86
Separable () 34.67 32.05 28.52 25.12
Ours () 35.70 33.02 29.16 25.57
Ours () 36.22 33.41 29.56 25.94
Ours () 35.31 32.69 28.95 25.43
Ours () 36.10 33.33 29.45 25.88
Ours () 36.27 33.47 29.57 25.99
Ours (, ) 36.29 33.57 29.62 25.99
Common Spatial Basis 35.71 33.04 29.23 25.71
Per-frame Spatial Basis 36.21 33.46 29.56 25.92
Fixed basis 34.66 32.15 28.68 25.39
Table 2: Ablation study on our validation dataset. Performance is reported in terms of average PSNR (dB). Beyond motivating our parameter choices, this demonstrates that our use of a burst-specific spatio-temporal basis outperforms standard KPN [27], separable spatial kernels, a common spatial basis for all burst frames, separate spatial bases per frame, and a fixed, input-agnostic basis. All these variants were trained with the same settings as our full model.

Our approach leads to better denoising quality because it enables larger kernels without vastly increasing the network's output dimensionality and number of learnable parameters. To tease apart the contributions of kernel size and the structure of our kernel decomposition, we conduct an ablation study on our validation set; the results are in Table 2. The performance gap between the test and validation results (Tables 1 and 2) comes from differences in the datasets themselves.

Kernel Size. As a baseline, we consider using KPN directly with our larger kernel size. We also consider predicting a single separable kernel at that size ([26] predicts separable kernels at multiple sizes and adds them together). We find that our network outperforms the large-kernel KPN variant at all noise levels—suggesting that simply increasing the kernel size is not enough. It also outperforms separable kernel prediction, suggesting that a low-dimensional subspace constraint better captures the structure of natural images than spatial separability.

For completeness, we also evaluate our basis prediction network with smaller kernels. Although this leads to a drop in performance compared to our default configuration, these variants still perform better than the original KPN—suggesting our approach has a regularizing effect that benefits even smaller kernels.

Basis Size. The number of basis elements in our default configuration, 90, was selected from a parameter search on the validation set. We include this analysis in Table 2, reporting PSNR values for basis sizes ranging from 10 to 130. We find that bases with fewer than 90 kernels lead to a drop in quality. Larger bases also perform very slightly worse than the default. We hypothesize that large bases have too many degrees of freedom: they increase the dimensionality of the network's output, which negates the benefits of the subspace restriction.

Spatial vs. Spatio-temporal Basis Decomposition. Note that we define our basis to span 3D kernels—i.e., each of our basis elements is a 3D spatio-temporal kernel. We predict a single coefficient vector at each location, which is applied to the corresponding spatial kernels for all frames. However, there are other possible choices for decomposing 3D kernels, and we consider two of these in our ablation (Table 2). In both cases, we output coefficients that vary per frame, in addition to per location, and interpret them as separate coefficients for spatial basis kernels. In one case, we use a common spatial basis shared across all frames. In the other, we use a distinct spatial basis for each frame. The per-frame basis increases the dimensionality of our coefficient output and leads to a slight drop in performance, likely due to a reduced regularizing effect. The common spatial basis suffers a greater performance drop since it also forces kernels in all frames to share the same subspace.

We also compare qualitatively the spatio-temporal kernels produced by our default configuration with those predicted by standard KPN in Figure 4. Our model makes better use of the temporal information, applying large weights to pixels across many frames in the burst, whereas KPN tends to overly favor the reference frame. Our network better tracks the apparent motion in the burst, shifting the kernel accordingly. It is also capable of ignoring outliers caused by excessive motion (the all-black kernels in Figure 4).
Fixed vs. Burst-specific Basis. Given that our network predicts both a basis and per-pixel coefficients, a natural question is whether a burst-specific kernel basis is even needed. To address this, we train a network without a basis decoder that only predicts per-pixel coefficients, and instead learn a basis that is fixed across all bursts. This fixed basis is still learned jointly with the network, as a directly learnable tensor. Table 2 shows that using a fixed basis in this manner leads to a significant decrease in denoising quality (although it remains better than standard KPN).

This suggests that while a subspace restriction on kernels is useful, the ideal subspace is scene-dependent and must be predicted adaptively. We further explore this phenomenon in Table 3, where we quantify the rank of the predicted bases for individual images, and for pairs of images. Note that the rank can be lower than the number of basis elements, since we do not explicitly require the 'basis' vectors to be linearly independent. We find that the combined rank of the basis kernels of image pairs (obtained by concatenating the two bases) is nearly twice the rank obtained from individual images—suggesting limited overlap between the basis sets of different images. We also explicitly compute the average overlap ratio between the subspaces of image pairs, and find it to be around 5% on average. This low overlap implies that different bursts do indeed require different bases, justifying our use of burst-specific bases.
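The rank statistics of Table 3 are straightforward to compute from predicted bases. The overlap formula below, (r_a + r_b − r_union) / r_union, is our reconstruction of an expression elided in this text and should be read as an assumption:

```python
import numpy as np

def basis_overlap(basis_a, basis_b, tol=1e-6):
    """Numerical ranks of two predicted bases (each shaped (J, ...)) and an
    assumed overlap ratio: intersection dimension over the rank of the
    concatenated bases. Basis kernels are flattened to row vectors."""
    A = basis_a.reshape(basis_a.shape[0], -1)
    B = basis_b.reshape(basis_b.shape[0], -1)
    r_a = np.linalg.matrix_rank(A, tol=tol)
    r_b = np.linalg.matrix_rank(B, tol=tol)
    r_union = np.linalg.matrix_rank(np.concatenate([A, B]), tol=tol)
    return r_a, r_b, (r_a + r_b - r_union) / r_union
```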

Gain 1 Gain 2 Gain 4 Gain 8
Single basis 80.6 81.8 84.3 86.2
Union of two bases 152.2 154.5 159.6 165.2
Overlap ratio 5.4% 5.5% 5.4% 4.4%
Table 3: Average basis rank for each noise level (first row), average rank of the union of two bases from random burst pairs (second row), and the average overlap ratio (third row) between the subspaces spanned by the two bases. The low overlap justifies our prediction of a burst-specific basis.
GFLOPs Runtime (s)
KPN () 59.3 0.63
Separable () 29.9 0.43
Ours () 28.9 0.24
Ours () 29.1 0.29
Ours () 26.5 0.19
Ours () 28.2 0.27
Ours () 31.7 0.41
Ours (, ) 29.9 0.30
Common Spatial Basis 40.8 0.49
Per-frame Spatial Basis 41.9 0.57
Table 4: FLOPs and runtimes on 1024×768 images for different kernel-based denoising approaches. All variants of our basis prediction network are significantly faster than KPN and match the compute cost of separable filters (with better denoising quality). Increasing the kernel size for our technique comes at marginal cost thanks to the Fourier filtering approach. This allows us to use large kernels for better denoising performance.

4.3 Computational expense

Finally, we evaluate the computational expense of our approach and compare it to the different ablation settings considered in Table 2, including standard KPN. We report the total number of floating-point operations (FLOPs) required for network prediction and filtering in Table 4. We find that, in addition to producing higher-quality results, our approach requires significantly fewer FLOPs than regular KPN at the same kernel size. This is due to the reduced complexity of our final prediction layer, as well as the benefit of efficient filtering in the Fourier domain. We also find that our approach has nearly identical complexity to separable kernel prediction, while achieving higher denoising performance because it can express a more general class of kernels.

In addition to the evaluation FLOPs, Table 4 reports measured running times for the various approaches, benchmarked on a 1024×768 image on an NVIDIA 1080Ti GPU. To compute these timings, we divide the image into 128×128 non-overlapping patches to form a batch and send it to the denoising network. Since regular KPNs have very high memory requirements, we select the maximum batch size for each method and denoise the entire image over multiple runs; this maximizes GPU throughput. We find that our approach retains its running-time advantage over KPN in practice. It is also slightly faster than separable kernel prediction—likely due to the improved cache performance we get from using Fourier-domain convolutions with spatially-uniform basis kernels.

Figure 4: We visualize a few 3D kernels predicted by our approach and those produced by standard KPN. For kernels predicted at a given location, we also show crops of the different noise-free frames centered at that point, with the support of the kernel marked in blue. We find that, in comparison to those from KPN, our kernels distribute weight more evenly across all frames in the burst, and that the spatial patterns of these weights closely follow the apparent motion in the burst.

5 Conclusion and future work

In this work, we argue that local, per-pixel burst denoising kernels are highly coherent. Based on this, we present a basis prediction network that jointly infers a global, low-dimensional kernel basis and the corresponding per-pixel mixing coefficients that can be used to construct per-pixel denoising kernels. This formulation significantly reduces memory and compute requirements compared to prior kernel-predicting burst denoising methods, allowing us to substantially improve performance by using large kernels, while reducing running time.

While this work focuses on burst denoising, KPN-based methods have been applied to other image and video enhancement tasks including video super-resolution [16], frame interpolation [28, 29, 23], video prediction [15, 6], and video deblurring [41]. All these tasks exhibit similar structure and will likely benefit from our approach.

There are other forms of spatiotemporal structure that could be explored to build on our work. For example, image enhancement methods have exploited self-similarity at different scales [10], suggesting other decompositions in scale space. Also, we assume a fixed basis size globally; adapting this spatially to local content could yield further benefits. Finally, KPNs are still, at their heart, local filtering methods, and it would be interesting to extend our work to non-local filtering methods [5, 19, 25].

Acknowledgments. ZX and AC acknowledge support from the NSF under award no. IIS-1820693, and from a gift from Adobe research.


  • [1] Steve Bako, Thijs Vogels, Brian McWilliams, Mark Meyer, Jan Novák, Alex Harvill, Pradeep Sen, Tony Derose, and Fabrice Rousselle. Kernel-predicting convolutional networks for denoising monte carlo renderings. ACM Transactions on Graphics (TOG), 36(4):97, 2017.
  • [2] Antoni Buades, Bartomeu Coll, and J-M Morel. A non-local algorithm for image denoising. In Proc. CVPR, 2005.
  • [3] Harold C Burger, Christian J Schuler, and Stefan Harmeling. Image denoising: Can plain neural networks compete with bm3d? In Proc. CVPR, 2012.
  • [4] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In Proc. CVPR, 2018.
  • [5] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminance-chrominance space. In Proc. ICIP, 2007.
  • [6] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In Advances in neural information processing systems, pages 64–72, 2016.
  • [7] Alessandro Foi, Mejdi Trimeche, Vladimir Katkovnik, and Karen Egiazarian. Practical poissonian-gaussian noise modeling and fitting for single-image raw-data. IEEE Transactions on Image Processing, 17(10):1737–1754, 2008.
  • [8] Pascal Getreuer, Ignacio Garcia-Dorado, John Isidoro, Sungjoon Choi, Frank Ong, and Peyman Milanfar. Blade: Filter learning for general purpose computational photography. In 2018 IEEE International Conference on Computational Photography (ICCP), pages 1–11. IEEE, 2018.
  • [9] Michaël Gharbi, Tzu-Mao Li, Miika Aittala, Jaakko Lehtinen, and Frédo Durand. Sample-based monte carlo denoising using a kernel-splatting network. ACM Transactions on Graphics (TOG), 38(4):125, 2019.
  • [10] Daniel Glasner, Shai Bagon, and Michal Irani. Super-resolution from a single image. In ICCV, 2009.
  • [11] Clément Godard, Kevin Matzen, and Matt Uyttendaele. Deep burst denoising. In Proceedings of the European Conference on Computer Vision (ECCV), pages 538–554, 2018.
  • [12] Samuel W Hasinoff, Dillon Sharlet, Ryan Geiss, Andrew Adams, Jonathan T Barron, Florian Kainz, Jiawen Chen, and Marc Levoy. Burst photography for high dynamic range and low-light imaging on mobile cameras. ACM Transactions on Graphics (TOG), 35(6):192, 2016.
  • [13] Felix Heide, Steven Diamond, Matthias Nießner, Jonathan Ragan-Kelley, Wolfgang Heidrich, and Gordon Wetzstein. Proximal: Efficient image optimization using proximal algorithms. ACM Transactions on Graphics (TOG), 35(4):84, 2016.
  • [14] Felix Heide, Markus Steinberger, Yun-Ta Tsai, Mushfiqur Rouf, Dawid Pająk, Dikpal Reddy, Orazio Gallo, Jing Liu, Wolfgang Heidrich, Karen Egiazarian, et al. Flexisp: A flexible camera image processing framework. ACM Transactions on Graphics (TOG), 33(6):231, 2014.
  • [15] Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool. Dynamic filter networks. In Advances in Neural Information Processing Systems, pages 667–675, 2016.
  • [16] Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, and Seon Joo Kim. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3224–3232, 2018.
  • [17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [18] Filippos Kokkinos and Stamatis Lefkimmiatis. Iterative residual cnns for burst photography applications. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5929–5938, 2019.
  • [19] Kostadin Dabov, Alessandro Foi, and Karen Egiazarian. Video denoising by sparse 3D transform-domain collaborative filtering. In European Signal Processing Conference, volume 149, 2007.
  • [20] Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Andreas Veit, Serge Belongie, Victor Gomes, Abhinav Gupta, Chen Sun, Gal Chechik, David Cai, Zheyun Feng, Dhyanesh Narayanan, and Kevin Murphy. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available online, 2017.
  • [21] Orly Liba, Kiran Murthy, Yun-Ta Tsai, Tim Brooks, Tianfan Xue, Nikhil Karnad, Qiurui He, Jonathan T. Barron, Dillon Sharlet, Ryan Geiss, Samuel W. Hasinoff, Yael Pritch, and Marc Levoy. Handheld mobile photography in very low light. ACM Trans. Graph., 38(6):164:1–164:16, Nov. 2019.
  • [22] Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, and Thomas S Huang. Non-local recurrent network for image restoration. In Advances in Neural Information Processing Systems, pages 1680–1689, 2018.
  • [23] Ziwei Liu, Raymond A Yeh, Xiaoou Tang, Yiming Liu, and Aseem Agarwala. Video frame synthesis using deep voxel flow. In Proceedings of the IEEE International Conference on Computer Vision, pages 4463–4471, 2017.
  • [24] Ziwei Liu, Lu Yuan, Xiaoou Tang, Matt Uyttendaele, and Jian Sun. Fast burst images denoising. ACM Transactions on Graphics (TOG), 33(6):232, 2014.
  • [25] Matteo Maggioni, Giacomo Boracchi, Alessandro Foi, and Karen Egiazarian. Video denoising, deblocking, and enhancement through separable 4-d nonlocal spatiotemporal transforms. IEEE Transactions on image processing, 21(9):3952–3966, 2012.
  • [26] Talmaj Marinč, Vignesh Srinivasan, Serhan Gül, Cornelius Hellge, and Wojciech Samek. Multi-kernel prediction networks for denoising of burst images. arXiv preprint arXiv:1902.05392, 2019.
  • [27] Ben Mildenhall, Jonathan T Barron, Jiawen Chen, Dillon Sharlet, Ren Ng, and Robert Carroll. Burst denoising with kernel prediction networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2502–2510, 2018.
  • [28] Simon Niklaus, Long Mai, and Feng Liu. Video frame interpolation via adaptive convolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 670–679, 2017.
  • [29] Simon Niklaus, Long Mai, and Feng Liu. Video frame interpolation via adaptive separable convolution. In Proceedings of the IEEE International Conference on Computer Vision, pages 261–270, 2017.
  • [30] Yaniv Romano, John Isidoro, and Peyman Milanfar. Raisr: rapid and accurate image super resolution. IEEE Transactions on Computational Imaging, 3(1):110–125, 2016.
  • [31] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241, 2015.
  • [32] Stefan Roth and Michael J Black. Fields of experts. IJCV, 2009.
  • [33] Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In Proc. CVPR, 2014.
  • [34] Thijs Vogels, Fabrice Rousselle, Brian McWilliams, Gerhard Röthlin, Alex Harvill, David Adler, Mark Meyer, and Jan Novák. Denoising with kernel prediction and asymmetric loss functions. ACM Transactions on Graphics (TOG), 37(4):124, 2018.
  • [35] Zhihao Xia and Ayan Chakrabarti. Identifying recurring patterns with deep neural networks for natural image denoising. arXiv preprint arXiv:1806.05229, 2018.
  • [36] Junyuan Xie, Linli Xu, and Enhong Chen. Image denoising and inpainting with deep neural networks. In NeurIPS, pages 341–349, 2012.
  • [37] Tianfan Xue, Jiajun Wu, Katherine Bouman, and Bill Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in neural information processing systems, pages 91–99, 2016.
  • [38] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017.
  • [39] Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang. Learning deep cnn denoiser prior for image restoration. In Proc. CVPR, 2017.
  • [40] Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, and Yun Fu. Residual non-local attention networks for image restoration. arXiv preprint arXiv:1903.10082, 2019.
  • [41] Shangchen Zhou, Jiawei Zhang, Jinshan Pan, Haozhe Xie, Wangmeng Zuo, and Jimmy Ren. Spatio-temporal filter adaptive network for video deblurring. In Proc. ICCV, 2019.
  • [42] Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration. In Proc. ICCV, 2011.

Appendix A Architecture

We now provide a detailed description of the architecture of our basis prediction network, which comprises a shared encoder and two decoders. The shared encoder consists of five down-sampling blocks and encodes the noisy frames into a shared feature space. The coefficient decoder decodes these features into a set of per-pixel mixing coefficients, which are normalized separately for each output pixel with a softmax function. The basis decoder first reduces these features to a 1D vector, and then decodes this vector into a burst-specific set of 3D basis kernels, each spanning all frames of the burst (the number of basis elements and the kernel size are specified in the main text). Each 3D basis element is normalized with a softmax function so that it sums to 1. We include regular skip connections from the encoder to the coefficient decoder, and pooled-skip connections to the basis decoder. The full architecture is illustrated in Figure 5.
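The two normalization steps, and the way per-pixel kernels are assembled from the shared basis, can be illustrated with a small NumPy sketch. All sizes below (J basis elements, burst length N, kernel size K, resolution H×W) are made-up placeholders, not the values used in our model:

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: J basis elements, burst of N frames,
# K x K spatial kernels, H x W output resolution.
J, N, K, H, W = 6, 4, 5, 8, 8
rng = np.random.default_rng(0)

raw_coeffs = rng.normal(size=(J, H, W))      # coefficient-decoder logits
raw_basis = rng.normal(size=(J, N * K * K))  # basis-decoder logits

# Coefficients are normalized per pixel; each 3D basis element is
# normalized so that its N*K*K taps sum to 1.
coeffs = softmax(raw_coeffs, axis=0)
basis = softmax(raw_basis, axis=1)

# The denoising kernel at each pixel is a linear combination of the
# shared basis, weighted by that pixel's coefficients.
kernels = np.einsum('jhw,jd->hwd', coeffs, basis)  # (H, W, N*K*K)
```

Note that because the coefficients form a convex combination of basis kernels that each sum to 1, every per-pixel kernel is automatically normalized as well.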

Figure 5:

Details of our basis prediction network. All convolutional layers are 3×3 convolutions with same padding, except one layer (indicated in the figure), which is a 2×2 convolutional layer with valid padding. The down-sampling block halves the spatial size of the input with a 2×2, stride-2 max-pooling layer. The up-sampling block uses bilinear up-sampling with a factor of 2. The pooled skip connection first applies global spatial average pooling to the encoder's activations and then replicates the averaged vector to the resolution of the decoder layer.
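The pooled skip connection is simple to state in code. The following NumPy sketch (the function name and shapes are our own, for illustration only) average-pools encoder activations over space and tiles the resulting channel vector to the decoder's resolution:

```python
import numpy as np

def pooled_skip(enc_feats, dec_h, dec_w):
    """Globally average-pool (C, H, W) activations over space, then
    replicate the resulting C-vector to a (C, dec_h, dec_w) map."""
    c = enc_feats.mean(axis=(1, 2))  # global spatial average: (C,)
    return np.broadcast_to(c[:, None, None],
                           (c.shape[0], dec_h, dec_w)).copy()

feats = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
skip = pooled_skip(feats, 8, 8)  # (2, 8, 8), constant within each channel
```

Because the pooled vector carries no spatial information, this connection feeds the basis decoder a global summary of the burst, which is appropriate since the basis is shared across the whole image.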

In our ablation study, we considered versions of our network that produce smaller kernels. For the intermediate kernel size, we used one fewer up-sampling block in the basis decoder than in the model shown in Figure 5, and added a 3×3 convolutional layer with valid padding before the 2×2 layer in Figure 5. For the smallest kernel size, we also used one fewer up-sampling block, but in this case replaced the 2×2 layer with a 2×2 transpose convolution layer with valid padding (i.e., one that performs a 2×2 “full” convolution).
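To see why a 2×2 transpose convolution with valid padding (a “full” convolution) is the appropriate replacement, note that it grows an H×W input to (H+1)×(W+1), whereas a valid 2×2 convolution shrinks it by one. A minimal single-channel NumPy sketch (our own illustration, not the network code):

```python
import numpy as np

def transpose_conv_2x2(x, w):
    """'Full' 2x2 transpose convolution: each input pixel scatters a
    weighted 2x2 footprint, so an (H, W) input becomes (H+1, W+1)."""
    H, W = x.shape
    out = np.zeros((H + 1, W + 1))
    for i in range(H):
        for j in range(W):
            out[i:i + 2, j:j + 2] += x[i, j] * w
    return out

y = transpose_conv_2x2(np.ones((3, 3)), np.ones((2, 2)))
print(y.shape)  # (4, 4)
```

Each output pixel accumulates contributions from the (up to four) input pixels whose 2×2 footprints overlap it, so the corners receive one contribution and interior pixels receive four.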

Our fixed-basis ablation replaced the entire basis decoder branch with a single learned tensor that serves as the basis. We still retain the encoder and coefficient decoder, and the weights of these networks are learned jointly with the fixed basis tensor. Note that in this case, the denoising kernel at each pixel is still formed as a linear combination of the fixed set of basis kernels, based on the coefficients predicted from the input burst by the decoder at that location (this is different from approaches that select a single kernel at each location from a fixed kernel set [30, 8]). As our ablation showed, a fixed basis set yields worse denoising performance than an adaptive basis.
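The distinction from kernel-selection approaches [30, 8] can be made concrete with a toy NumPy example (all sizes hypothetical): hard selection picks a single kernel from a fixed bank, while our coefficient decoder blends the whole bank:

```python
import numpy as np

rng = np.random.default_rng(1)
B, D = 4, 9  # hypothetical: a bank of 4 fixed kernels with 9 taps each

bank = rng.dirichlet(np.ones(D), size=B)  # fixed kernel bank; rows sum to 1
coeffs = rng.dirichlet(np.ones(B))        # one pixel's mixing coefficients

selected = bank[np.argmax(coeffs)]  # RAISR/BLADE-style: pick one kernel
blended = coeffs @ bank             # ours: convex combination of the bank
```

Both results are valid normalized kernels, but the blended one can represent kernels that lie outside the fixed bank, which is what makes the linear-combination formulation strictly more expressive than selection.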

Appendix B Additional Results

We show additional denoising results on images from the benchmark of [27] in Figs. 6-9, comparing the quality of our outputs to those of direct prediction, regular KPN [27], and MKPN [26].

Figure 6: Additional results on the benchmark synthetic test set of [27]. For each example we show one noisy reference frame, the outputs of direct prediction, KPN [27], MKPN [26], and our method, and the ground truth. Average PSNR (dB) on the full image: Direct 33.29, KPN 33.40, MKPN 33.72, Ours 34.35.

Figure 7: Additional results on the benchmark synthetic test set of [27], in the same layout as Figure 6. Average PSNR (dB) on the full image, first example: Direct 34.29, KPN 34.27, MKPN 34.82, Ours 35.52; second example: Direct 28.32, KPN 28.90, MKPN 29.20, Ours 30.34.

Figure 8: Additional results on the benchmark synthetic test set of [27], in the same layout as Figure 6. Average PSNR (dB) on the full image, first example: Direct 29.21, KPN 28.64, MKPN 29.36, Ours 30.35; second example: Direct 28.81, KPN 29.17, MKPN 29.33, Ours 30.86.

Figure 9: Additional results on the benchmark synthetic test set of [27], in the same layout as Figure 6. Average PSNR (dB) on the full image, first example: Direct 30.66, KPN 30.92, MKPN 31.22, Ours 32.60; second example: Direct 34.20, KPN 34.68, MKPN 35.19, Ours 36.34.