1 Introduction
Burst denoising algorithms [24, 12, 27] seek to enable high-quality photography in challenging conditions and are increasingly being deployed in commercial mobile cameras [12, 21]. A burst captures a sequence of short-exposure frames of the scene that are free of motion blur, but with a high amount of noise in each frame and relative motion between frames. By accounting for this relative motion and using the fact that the noise is independent across frames, burst denoising aggregates these inputs to predict a single noise- and blur-free image estimate.
Recently, Mildenhall et al. [27] proposed an elegantly simple yet surprisingly successful approach to burst denoising. Rather than explicitly estimating inter-frame motion [12, 24, 13, 14, 18], their method produces a denoised estimate at each pixel as a weighted average of the observed noisy intensities in a window around that pixel's location in all frames. These averaging weights, or kernels, are allowed to vary from pixel to pixel to implicitly account for motion and image discontinuities, and are predicted from the noisy input burst by a "kernel prediction network" (KPN).
However, KPNs need to produce an output that is significantly higher-dimensional than the denoised image: even for small kernels spanning eight frames, a KPN must predict 400 times as many kernel weights as image intensities. This comes with significant memory and computational costs, as well as difficulty in training given the many degrees of freedom in the output. As a result, KPNs have so far been used only with small kernels. This limits their denoising ability by preventing averaging across bigger spatial regions and over frames with larger relative motion.
In this paper, we introduce an approach to predict large denoising kernels, and thus benefit from wider aggregation, while simultaneously limiting the output dimensionality at each pixel, making the prediction network easier to train and more compute- and memory-efficient. This approach is motivated by a long history of successful image restoration methods that leverage internal structure and self-similarity in natural images [5, 2, 35, 22, 40]. In burst denoising, self-similarity is particularly strong: we expect both spatial structure, in the form of similar patterns that recur within a frame and across frames of the same scene, and temporal structure, caused by consistency in scene and camera motion. Given the expected self-similarity and structure in the image intensities themselves, we argue that the corresponding denoising kernels must also have similar structure. Specifically, while allowing individual per-pixel denoising kernels to be large, we assume that all the kernels for a given image span a lower-dimensional subspace.
Based on this observation, we introduce a new method for kernel-based burst denoising that achieves better accuracy and efficiency. Our contributions are:

- We train a network that, given an input noisy burst, predicts both a global low-dimensional basis set of large kernels and per-pixel coefficient vectors relative to this basis. This corresponds to a significantly lower-dimensional output than a KPN that predicts arbitrary kernels of the same size at each pixel.

- Enforcing this structure on the denoising kernels acts as a form of regularization, and our experiments demonstrate that it leads to state-of-the-art denoising performance, significantly better (by more than 1 dB PSNR) than regular KPNs.

- Beyond reducing memory usage and computational burden at the output layer of our network, the structure of our output enables the final kernel-filtering step to be performed much more efficiently in the Fourier domain. We show this in terms of the required number of FLOPs, as well as experimentally with actual runtimes.
2 Related Work
Single-image and video denoising.
Single-image denoising has been studied extensively. To overcome the ill-posed nature of the problem, classical approaches [32, 33, 42] developed regularization schemes that model the local statistics of natural images. The most successful approaches [5, 2] exploit non-local self-similarity within the image and denoise pixels by aggregating similar pixels or patches from distant regions. These methods have been extended to denoise videos [19, 25], where the search for similar patches proceeds not only within frames, but also across different frames. Recent work has improved image denoising performance using convolutional networks trained on large datasets [3, 38, 39, 22, 40, 35, 36].
Burst denoising.
Single-image denoising is a fundamentally underconstrained problem. Burst processing can reduce this ambiguity by using multiple observations (the frames of the burst) to recover a noise-free depiction of the scene. Burst denoising algorithms are now extensively used in commercial smartphone cameras [12] and can produce compelling results even in extreme low-light scenarios [4, 21]. As in video denoising, a significant challenge for burst processing is robustness to inter-frame motion. Many methods explicitly estimate this motion to align and denoise the frames [12, 24, 13, 14, 18]. Current state-of-the-art burst denoising techniques [27, 18, 11, 26] are based on deep neural networks. Many of them require only coarse registration, relying on the network to account for small residual misalignments [27, 11, 26].
Kernel Prediction Networks.
Given a burst sequence, Mildenhall et al. [27] propose predicting per-pixel kernels that are then applied to the input burst to produce the denoised output. They demonstrate that KPNs outperform direct pixel-synthesis networks, which produce over-smooth results. Subsequent work has extended this idea to use multiple kernels of varying sizes at every pixel [26]. KPNs have also been used in other applications, including denoising Monte Carlo renderings [1, 34, 9], video super-resolution [16] and deblurring [41], frame interpolation [28, 29, 23], and video prediction [15, 6, 23, 37].
Given their high-dimensional output (per-pixel kernels, which are three-dimensional in the case of burst denoising), KPNs have significant memory and compute requirements, as well as a large number of parameters in their final layer. To ameliorate this, Marinč et al. [26] and Niklaus et al. [29] propose predicting spatially separable kernels: producing a horizontal and a vertical kernel for each frame as output, and forming the spatial kernel as an outer product of the two. However, this makes a strong a priori assumption about kernel structure, and still requires constructing and filtering with a different kernel at every pixel. In contrast, our approach assumes that the set of per-pixel kernels for a scene spans a low-dimensional subspace, and predicts a basis for this subspace from the burst input. This approach also allows us to benefit from fast filtering in the Fourier domain.
3 Method
In burst denoising, we are given an input noisy burst of images $\{X_t[n]\}_{t=1}^{T}$, where $n$ indexes spatial locations and $t$ the different frames in the burst. Using a heteroscedastic Gaussian noise model [7], which accounts for both read and shot noise, we relate this to the corresponding noise-free frames $\{Y_t[n]\}$ as:

$$X_t[n] \sim \mathcal{N}\!\left(Y_t[n],\; \sigma_r^2 + \sigma_s\, Y_t[n]\right), \qquad (1)$$

where $\sigma_r$ and $\sigma_s$ are the read- and shot-noise parameters. Choosing the first frame as reference, our goal is to produce a single denoised image $\hat{Y}[n]$ as an estimate of the first noise-free frame $Y_1[n]$.
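As a concrete illustration of this noise model, here is a minimal NumPy sketch of Equation (1); the burst shape, the clipping of negative intensities, and the parameter values are illustrative assumptions, not details of our pipeline:

```python
import numpy as np

def synthesize_noisy_burst(clean_burst, sigma_r, sigma_s, rng=None):
    """Apply the heteroscedastic Gaussian model of Eq. (1) to a clean burst.

    clean_burst: (T, H, W) float array, intensities in [0, 1].
    sigma_r, sigma_s: read- and shot-noise parameters.
    """
    rng = np.random.default_rng() if rng is None else rng
    # The per-pixel noise variance grows with the underlying clean signal.
    variance = sigma_r ** 2 + sigma_s * np.clip(clean_burst, 0.0, None)
    return clean_burst + rng.normal(size=clean_burst.shape) * np.sqrt(variance)

# Example: an 8-frame burst with arbitrarily chosen noise parameters.
noisy = synthesize_noisy_burst(np.random.rand(8, 128, 128), 0.01, 0.01)
```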
3.1 Kernel-based burst denoising
Rather than train a network to regress $\hat{Y}$ directly, Kernel Prediction Networks output a field of denoising kernels $\{k_n\}$, one for each output pixel $n$. Each kernel has a spatial support $S$ of size $K \times K$, indexed by offsets $s \in S$, with separate weights for each frame $t$. Given these predicted kernels, the denoised estimate is formed as:

$$\hat{Y}[n] = \sum_{t=1}^{T} \sum_{s \in S} k_n[t, s]\; X_t[n + s]. \qquad (2)$$

A key bottleneck in this pipeline is the prediction of the dense kernel field $\{k_n\}$, which requires producing $T K^2$ numbers at every pixel of the output. Since networks with high-dimensional outputs are both expensive and require learning a large number of parameters in their last layer, KPNs have typically been used only with small kernels ($5 \times 5$ in [27]).
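The following sketch evaluates Equation (2) directly with explicit loops (a real implementation would vectorize this); the array shapes and the edge-padding choice are assumptions. It makes the bottleneck visible: the kernels argument holds the $T K^2$ values the network must emit at every single pixel.

```python
import numpy as np

def kpn_filter(burst, kernels):
    """Direct evaluation of Eq. (2): one 3D kernel per output pixel.

    burst:   (T, H, W) noisy frames.
    kernels: (H, W, T, K, K) per-pixel kernel field.
    """
    T, H, W = burst.shape
    K = kernels.shape[-1]
    r = K // 2
    padded = np.pad(burst, ((0, 0), (r, r), (r, r)), mode="edge")
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            window = padded[:, y:y + K, x:x + K]   # (T, K, K) neighborhood
            out[y, x] = np.sum(kernels[y, x] * window)
    return out
```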
3.2 Basis Prediction Networks
Instead of directly predicting unconstrained kernels for each spatial location, we designed a network that outputs: (1) a global kernel basis $\{\beta_b\}_{b=1}^{B}$ of $B$ spatio-temporal kernels, each of size $T \times K \times K$ with $K$ large; and (2) a $B$-dimensional coefficient vector $c_n$ at each spatial location $n$. The per-pixel kernels are then reconstructed as:

$$k_n[t, s] = \sum_{b=1}^{B} c_n[b]\; \beta_b[t, s]. \qquad (3)$$

Note that we typically choose the number of basis kernels $B \ll T K^2$. This implies that all the kernels for a given burst lie in a low-dimensional subspace, but this subspace will be different for different bursts (i.e., the basis is burst-specific). This procedure allows us to recreate a full kernel field with far fewer predictions. Assuming an image of resolution $H \times W$, we need only make $HWB + BTK^2$ predictions to effectively recreate a kernel field of size $HWTK^2$.
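A sketch of this reconstruction, and of the resulting savings in output size, under the default configuration assumed in our experiments (Section 4):

```python
import numpy as np

T, H, W, K, B = 8, 128, 128, 15, 90           # assumed default configuration

coeffs = np.random.rand(H, W, B)              # per-pixel coefficients c_n
basis = np.random.rand(B, T, K, K)            # global basis kernels beta_b

# Eq. (3): every per-pixel kernel is a linear combination of the global
# basis kernels, with pixel-specific coefficients.
kernel_field = np.einsum("hwb,btij->hwtij", coeffs, basis)  # (H, W, T, K, K)

n_ours = H * W * B + B * T * K * K            # coefficients + basis
n_kpn = H * W * T * K * K                     # unconstrained kernel field
print(n_kpn / n_ours)                         # ~18x fewer values to predict
```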
We designed our network following an encoder-decoder architecture with skip connections [31]. Our model, however, has two decoder branches, one for the basis and one for the coefficients (Figure 2). The encoder is shared between the two branches because the meaning of the coefficients depends on the predicted basis in Equation (3), so the two outputs need to be coordinated. This encoder takes the noisy burst and the noise parameters as input, and through multiple levels of downsampling, followed by global average pooling, yields a single global feature vector as its encoding of the burst. The per-pixel coefficients are then decoded from the encoder bottleneck back to the full image resolution $H \times W$, with $B$ channels as output. The common basis is decoded up to the distinct spatial dimensions of the kernels, $K \times K$, with $BT$ output channels.
Since the basis branch decodes to a different spatial resolution, the skip connections need careful treatment. Unlike in a usual U-Net, the encoder and decoder feature sizes do not match. Specifically, a pixel in a basis kernel has no meaningful relation to a pixel in the input frames. Therefore, in the skip connections from the shared encoder to the basis decoder, we apply global spatial average pooling to the encoder's activations, and replicate the pooled vector to the resolution of the decoder layer. This ensures that encoder information is globally aggregated without creating nonsensical correspondences between kernel and image locations, while still allowing features at multiple scales of the encoder to inform the basis decoder.
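A minimal PyTorch-style sketch of such a pooled skip connection follows; the tensor shapes and the channel concatenation are assumptions about one reasonable realization, not the exact architecture:

```python
import torch

def pooled_skip(encoder_feat, decoder_feat):
    """Pooled skip connection from image-space encoder features to the
    kernel-space basis decoder (tensor shapes are assumptions).

    encoder_feat: (N, C_e, H', W') activations at some encoder scale.
    decoder_feat: (N, C_d, k, k) activations at some basis-decoder scale.
    """
    # Spatial positions do not correspond between the two spaces, so we
    # aggregate the encoder features globally...
    pooled = encoder_feat.mean(dim=(2, 3), keepdim=True)    # (N, C_e, 1, 1)
    # ...and replicate the pooled vector at every decoder location.
    tiled = pooled.expand(-1, -1, *decoder_feat.shape[2:])  # (N, C_e, k, k)
    return torch.cat([decoder_feat, tiled], dim=1)
```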
We ensure that each reconstructed kernel has positive weights that sum to one, so that it represents an average. We implement this constraint using softmax normalization on both the coefficient and basis decoder outputs, so that every 3D kernel of the basis and every coefficient vector is normalized individually. A more detailed description of the architecture is provided in the supplementary.
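A sketch of these normalizations (shapes assumed):

```python
import torch
import torch.nn.functional as F

def normalize_outputs(raw_coeffs, raw_basis):
    """Softmax normalization of both decoder outputs.

    raw_coeffs: (N, B, H, W)    -> each pixel's B coefficients sum to 1.
    raw_basis:  (N, B, T, K, K) -> each 3D basis kernel sums to 1.
    """
    coeffs = F.softmax(raw_coeffs, dim=1)
    n, b = raw_basis.shape[:2]
    basis = F.softmax(raw_basis.reshape(n, b, -1), dim=-1).reshape(raw_basis.shape)
    return coeffs, basis
```

Since each basis kernel sums to one and each softmaxed coefficient vector forms a convex combination, every per-pixel kernel reconstructed via Equation (3) automatically sums to one as well.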
Our network is trained with respect to the quality of the final denoised output $\hat{Y}$, with losses on both image intensities and image gradients. Like [27], we additionally use a per-frame loss to bias the network away from relying only on the reference frame. We do this with separate losses on denoised estimates computed from each individual frame of the input burst (formed by applying that frame's slice of the kernels to that frame alone). These are added to the main training loss, with a weight that is decayed across training iterations.
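One plausible form of this training objective is sketched below; the specific norms and the annealing constants gamma and w0 are placeholders rather than the values used in our experiments:

```python
import torch

def training_loss(pred, per_frame_preds, target, step, gamma=0.9998, w0=100.0):
    """Sketch of the annealed training loss (norms and constants assumed).

    pred:            (N, H, W) final denoised estimate.
    per_frame_preds: list of T single-frame estimates, each (N, H, W).
    """
    def image_loss(x, y):
        # A loss on intensities plus a loss on finite-difference gradients.
        dx = lambda z: z[..., :, 1:] - z[..., :, :-1]
        dy = lambda z: z[..., 1:, :] - z[..., :-1, :]
        return (((x - y) ** 2).mean()
                + (dx(x) - dx(y)).abs().mean()
                + (dy(x) - dy(y)).abs().mean())

    main = image_loss(pred, target)
    # The per-frame term biases the network away from relying only on the
    # reference frame; its weight decays over training iterations.
    per_frame = sum(image_loss(p, target) for p in per_frame_preds)
    return main + w0 * (gamma ** step) * per_frame
```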
3.3 Efficient Fourier domain filtering
Filtering by convolution with large kernels is commonly implemented in the Fourier domain, where the filtering complexity is quasi-linear in image size, while the complexity of direct convolution scales with the product of image and kernel size. But because the kernels in KPNs vary spatially, Equation (2) does not represent a standard convolution, ruling out this acceleration.
In our case, because our kernels are defined with respect to a small set of "global" basis kernels, we can leverage Fourier-domain convolution to speed up filtering. We achieve this by combining and rewriting the expressions in Eq. (2) and Eq. (3) as:

$$\hat{Y}[n] = \sum_{b=1}^{B} c_n[b] \sum_{t=1}^{T} \left(\beta_b[t, \cdot] * X_t\right)[n], \qquad (4)$$

where $*$ denotes standard spatial 2D convolution with a spatially-uniform kernel.
In other words, we first form a set of filtered versions of the input burst by standard convolution with each of the basis kernels, convolving each frame in the burst with the corresponding "slice" of the basis kernel, and then take a spatially-varying linear combination of the filtered intensities at each pixel based on the coefficients $c_n$.
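In code, this reordering of Equation (4) might look as follows (using correlation to match the gather-style indexing of Eq. (2); the boundary mode is an arbitrary choice for this sketch):

```python
import numpy as np
from scipy.ndimage import correlate

def basis_filter(burst, coeffs, basis):
    """Spatial-domain evaluation of Eq. (4).

    burst: (T, H, W); coeffs: (H, W, B); basis: (B, T, K, K).
    """
    B, T = basis.shape[:2]
    out = np.zeros(burst.shape[1:])
    for b in range(B):
        # Spatially-uniform filtering of every frame with the b-th basis
        # kernel's slice for that frame, summed over frames...
        filtered = sum(correlate(burst[t], basis[b, t], mode="nearest")
                       for t in range(T))
        # ...then mixed per pixel by the predicted coefficients.
        out += coeffs[..., b] * filtered
    return out
```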
We can carry out these standard convolutions in the Fourier domain as:

$$\beta_b[t, \cdot] * X_t = \mathcal{F}^{-1}\!\left(\mathcal{F}(\beta_b[t, \cdot]) \odot \mathcal{F}(X_t)\right), \qquad (5)$$

where $\mathcal{F}$ and $\mathcal{F}^{-1}$ are the spatial forward and inverse Fourier transforms, and $\odot$ denotes element-wise multiplication. This is significantly more efficient for larger kernels, especially since we need not recompute the forward Fourier transforms $\mathcal{F}(X_t)$ of the inputs for different basis kernels.
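A sketch of this Fourier-domain variant follows, caching the input transforms across basis elements; the padding and cropping details are simplified assumptions, and a careful implementation would also account for the kernel flip that distinguishes convolution from correlation:

```python
import numpy as np

def basis_filter_fft(burst, coeffs, basis):
    """Fourier-domain evaluation of Eq. (4)/(5).

    burst: (T, H, W); coeffs: (H, W, B); basis: (B, T, K, K).
    """
    T, H, W = burst.shape
    B, _, K, _ = basis.shape
    P, Q = H + K - 1, W + K - 1                 # linear-convolution size
    Fx = np.fft.rfft2(burst, s=(P, Q))          # forward transforms: once
    out = np.zeros((H, W))
    r = K // 2
    for b in range(B):
        Fk = np.fft.rfft2(basis[b], s=(P, Q))   # (T, P, Q//2 + 1)
        # Sum over frames in the frequency domain, then one inverse FFT.
        conv = np.fft.irfft2((Fx * Fk).sum(axis=0), s=(P, Q))
        out += coeffs[..., b] * conv[r:r + H, r:r + W]
    return out
```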
Table 1: Denoising performance (average PSNR in dB) on the test set of [27] at four noise gains.

Method                  Gain 1   Gain 2   Gain 4   Gain 8
HDR+ [12]               31.96    28.25    24.25    20.05
BM3D [5]                33.89    31.17    28.53    25.92
NLM [2]                 33.23    30.46    27.43    23.86
VBM4D [25]              34.60    31.89    29.20    26.52
Direct                  35.93    33.36    30.70    27.97
KPN [27]                36.47    33.93    31.19    27.97
KPN* (our impl.)        36.35    33.69    31.02    28.16
MKPN* [26]              36.88    34.22    31.45    28.52
Ours                    38.18    35.42    32.54    29.45
4 Experiments
We closely follow the protocol of Mildenhall et al. [27] to train and evaluate our network. Our model is designed for bursts of 8 frames with resolution 128×128. It is trained using training and validation sets constructed from the Open Images dataset [20] following the procedure of [27]. We also use their test set of 73 images for evaluation.
Our default configuration uses a basis of $B = 90$ kernels of size $15 \times 15$. We train our network (as well as all ablation baselines) using the Adam optimizer [17], dropping the learning rate twice, each time the validation loss saturates. Training takes around 600k iterations with batches of 24 images.
4.1 Denoising performance
Table 1 reports the PSNR of our denoised outputs on the test set. Each noise level corresponds to a sensor gain value (one-stop increments of the ISO setting in a camera); higher gains lead to noisier images. The highest noise level, Gain 8, lies outside the range we trained on; we use it to evaluate our model's extrapolation capability. In addition to our own model, we report results for a motion-alignment-based method [12], several approaches based on non-local filtering [2, 5, 25], and the standard KPN burst denoiser [27], which is the current state of the art. Since we did not have access to the original KPN model, we implemented a version ourselves (which we also use in the ablations of the next section) and report its performance in Table 1; we find that it closely matches the numbers reported in [27]. Additionally, we train a network to directly regress the denoised pixel values from the input burst (i.e., without kernels), as well as our implementation of [26] with our larger kernel size of $15 \times 15$ for a fair comparison.
We find that our method outperforms KPN [27] by a significant margin, over 1 dB PSNR at all noise levels. Our implementation of [26] also does well, but remains inferior to our model. We show qualitative results for a subset of methods in Figure 3. Our results have fewer artifacts, especially in textured regions and around thin structures like printed text.
4.2 Ablation and analysis
Table 2: Ablation study; PSNR (dB) on our validation set.

Method                     Gain 1   Gain 2   Gain 4   Gain 8
KPN (15×15)                34.29    31.80    28.23    24.86
Separable (15×15)          34.67    32.05    28.52    25.12
Ours (5×5)                 35.70    33.02    29.16    25.57
Ours (9×9)                 36.22    33.41    29.56    25.94
Ours (B = 10)              35.31    32.69    28.95    25.43
Ours (B = 50)              36.10    33.33    29.45    25.88
Ours (B = 130)             36.27    33.47    29.57    25.99
Ours (15×15, B = 90)       36.29    33.57    29.62    25.99
Common spatial basis       35.71    33.04    29.23    25.71
Per-frame spatial basis    36.21    33.46    29.56    25.92
Fixed basis                34.66    32.15    28.68    25.39
Our approach leads to better denoising quality because it enables larger kernels without vastly increasing the network's output dimensionality and number of learnable parameters. To tease apart the contributions of kernel size and the structure of our kernel decomposition, we conduct an ablation study on our validation set; the results are in Table 2. The performance gap between the test and validation results (Tables 1 and 2) comes from differences in the datasets themselves.
Kernel Size. As a baseline, we consider using KPN directly with our larger kernel size of $15 \times 15$. We also consider predicting a single separable kernel at that size ([26] predicts separable kernels at multiple sizes and adds them together). We find that our network outperforms the large-kernel KPN variant at all noise levels, suggesting that simply increasing the kernel size is not enough. It also outperforms separable kernel prediction, suggesting that a low-dimensional subspace constraint better captures the structure of natural images than spatial separability.
For completeness, we also evaluate our basis prediction network with smaller kernels, $5 \times 5$ and $9 \times 9$. Although this leads to a drop in performance compared to our default configuration, these variants still perform better than the original KPN, suggesting that our approach has a regularizing effect that benefits even smaller kernels.
Basis Size. The number of basis elements in our default configuration, $B = 90$, was selected by a parameter search on the validation set. We include this analysis in Table 2, reporting PSNR values for $B$ ranging from 10 to 130. We find that bases with fewer than 90 kernels lead to a drop in quality. The largest basis, $B = 130$, also performs very slightly worse than $B = 90$. We hypothesize that large bases start to have too many degrees of freedom; this increases the dimensionality of the network's output, which negates the benefits of a subspace restriction.
Spatial vs. Spatio-temporal Basis Decomposition. Note that we define our basis to span 3D kernels, i.e., each of our basis elements is a 3D spatio-temporal kernel, and we predict a single coefficient vector at each location that is applied to the corresponding spatial kernels for all frames $t$. However, there are other possible choices for decomposing 3D kernels, and we consider two of these in our ablation (Table 2). In both cases, we output coefficients that vary per frame, in addition to per location, interpreted as separate coefficients for purely spatial basis kernels. In one case, we use a common spatial basis shared across all frames; in the other, we have a separate per-frame spatial basis for each frame. The per-frame basis increases the dimensionality of our coefficient output and leads to a slight drop in performance, likely due to a reduced regularizing effect. The common spatial basis suffers a greater performance drop, since it also forces kernels in all frames to share the same subspace.
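The three decompositions differ only in which indices the coefficients and bases carry; the following sketch makes the shapes explicit (small spatial size and an equal number of basis elements per variant are assumptions to keep the demo light):

```python
import numpy as np

T, H, W, K, B = 8, 32, 32, 15, 90

# Default: spatio-temporal basis, one B-dim coefficient vector per pixel.
k_default = np.einsum("hwb,btij->hwtij",
                      np.random.rand(H, W, B), np.random.rand(B, T, K, K))

# Common spatial basis: coefficients vary per frame, one shared 2D basis.
k_common = np.einsum("hwtb,bij->hwtij",
                     np.random.rand(H, W, T, B), np.random.rand(B, K, K))

# Per-frame spatial basis: coefficients and 2D bases both vary per frame.
k_perframe = np.einsum("hwtb,tbij->hwtij",
                       np.random.rand(H, W, T, B), np.random.rand(T, B, K, K))

assert k_default.shape == k_common.shape == k_perframe.shape == (H, W, T, K, K)
```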
We also qualitatively compare the spatio-temporal kernels produced by our default configuration with those predicted by standard KPN in Figure 4. Our model makes better use of the temporal information, applying large weights to pixels across many frames in the burst, whereas KPN tends to overly favor the reference frame. Our network better tracks the apparent motion in the burst, shifting the kernel accordingly, and it is capable of ignoring outliers caused by excessive motion (the all-black kernels in Fig. 4).

Fixed vs. Burst-specific Basis. Given that our network predicts both a basis and per-pixel coefficients, a natural question is whether a burst-specific kernel basis is even needed. To address this, we train a network without a basis decoder that only predicts per-pixel coefficients, and instead learn a basis that is fixed across all bursts. This fixed basis is still learned jointly with the network, as a directly learnable tensor. Table 2 shows that using a fixed basis in this manner leads to a significant decrease in denoising quality (although it remains better than standard KPN).
This suggests that while a subspace restriction on kernels is useful, the ideal subspace is scene-dependent and must be predicted adaptively. We explore this phenomenon further in Table 3, where we quantify the rank of the predicted bases for individual images, and for pairs of images. Note that the rank can be lower than $B$, since we do not explicitly require the 'basis' kernels to be linearly independent. We find that the combined rank of the basis kernels of image pairs (obtained by concatenating the two bases) is nearly twice the rank obtained from individual images, suggesting limited overlap between the basis sets of different images. We also explicitly compute the average overlap ratio across image pairs, as $(r_1 + r_2 - r_{12}) / r_{12}$ for individual ranks $r_1, r_2$ and combined rank $r_{12}$, and find it to be around 5% on average. This low overlap implies that different bursts do indeed require different bases, justifying our use of burst-specific bases.
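The following sketch shows one way to compute these statistics; the exact form of the overlap measure used here is an assumption, chosen to be consistent with the quantities reported in Table 3:

```python
import numpy as np

def basis_rank(basis, tol=None):
    """Numerical rank of a basis, flattened to a (B, T*K*K) matrix."""
    return np.linalg.matrix_rank(basis.reshape(basis.shape[0], -1), tol=tol)

def overlap_ratio(basis_a, basis_b):
    """Assumed overlap measure: the fraction of the combined subspace
    shared by two bursts' bases."""
    r_a, r_b = basis_rank(basis_a), basis_rank(basis_b)
    r_ab = basis_rank(np.concatenate([basis_a, basis_b], axis=0))
    return (r_a + r_b - r_ab) / r_ab
```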
Table 3: Average rank of the predicted bases for single images and for image pairs, and the average subspace overlap ratio.

                        Gain 1   Gain 2   Gain 4   Gain 8
Rank (single image)     80.6     81.8     84.3     86.2
Rank (image pair)       152.2    154.5    159.6    165.2
Overlap ratio           5.4%     5.5%     5.4%     4.4%
Table 4: Total FLOPs (network prediction and filtering) and measured runtimes.

Method                     GFLOPs   Runtime (s)
KPN (15×15)                59.3     0.63
Separable (15×15)          29.9     0.43
Ours (5×5)                 28.9     0.24
Ours (9×9)                 29.1     0.29
Ours (B = 10)              26.5     0.19
Ours (B = 50)              28.2     0.27
Ours (B = 130)             31.7     0.41
Ours (15×15, B = 90)       29.9     0.30
Common spatial basis       40.8     0.49
Per-frame spatial basis    41.9     0.57
4.3 Computational expense
Finally, we evaluate the computational expense of our approach and compare it to the different ablation settings considered in Table 2, including standard KPN. We report the total number of floating point operations (FLOPs) required for network prediction and filtering in Table 4. We find that, in addition to producing higher-quality results, our approach also requires significantly fewer FLOPs than a regular KPN at the same $15 \times 15$ kernel size. This is due to the reduced complexity of our final prediction layer, as well as the benefit of efficient filtering in the Fourier domain. We also find that our approach has nearly identical complexity to separable kernel prediction, while achieving higher denoising performance because it can express a more general class of kernels.
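A back-of-the-envelope comparison of the final prediction layers illustrates where much of the savings comes from; the input channel count and the 1×1-convolution modeling assumption are illustrative, not the exact architecture:

```python
# Rough multiply-add counts for the networks' final prediction layers,
# modeled as 1x1 convolutions over C_in input channels (assumed values).
T, H, W, K, B, C_in = 8, 128, 128, 15, 90, 64

kpn_head = H * W * C_in * (T * K * K)     # KPN: 1800 output channels per pixel
ours_coeff = H * W * C_in * B             # coefficient head: B channels per pixel
ours_basis = K * K * C_in * (B * T)       # basis head: only K x K spatial extent
print(kpn_head / (ours_coeff + ours_basis))  # roughly an 18x reduction
```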
In addition to FLOP counts, Table 4 reports measured running times for the various approaches, benchmarked on a 1024×768 image on an NVIDIA 1080Ti GPU. To compute these timings, we divide the image into 128×128 non-overlapping patches to form a batch and send it to the denoising network. Since regular KPNs have very high memory requirements, we select the maximum batch size for each method and denoise the entire image in multiple runs; this maximizes GPU throughput. We find that our approach retains its running-time advantage over KPN in practice. It is also a little faster than separable kernel prediction, likely due to the improved cache behavior of Fourier-domain convolutions with spatially-uniform basis kernels.
5 Conclusion and future work
In this work, we argue that local, per-pixel burst denoising kernels are highly coherent. Based on this, we present a basis prediction network that jointly infers a global, low-dimensional kernel basis and the corresponding per-pixel mixing coefficients that can be used to construct per-pixel denoising kernels. This formulation significantly reduces memory and compute requirements compared to prior kernel-predicting burst denoising methods, allowing us to substantially improve performance by using large kernels, while reducing running time.
While this work focuses on burst denoising, KPN-based methods have been applied to other image and video enhancement tasks, including video super-resolution [16], frame interpolation [28, 29, 23], video prediction [15, 6], and video deblurring [41]. All these tasks exhibit similar structure and will likely benefit from our approach.
There are other forms of spatio-temporal structure that could be explored to build on our work. For example, image enhancement methods have exploited self-similarity at different scales [10], suggesting further decompositions in scale space. Also, we assume a fixed basis size globally; adapting it spatially to local content could yield further benefits. Finally, KPNs are still, at heart, local filtering methods, and it would be interesting to extend our work to non-local filtering methods [5, 19, 25].
Acknowledgments. ZX and AC acknowledge support from the NSF under award no. IIS-1820693, and a gift from Adobe Research.
References
[1] Steve Bako, Thijs Vogels, Brian McWilliams, Mark Meyer, Jan Novák, Alex Harvill, Pradeep Sen, Tony Derose, and Fabrice Rousselle. Kernel-predicting convolutional networks for denoising monte carlo renderings. ACM Transactions on Graphics (TOG), 36(4):97, 2017.
 [2] Antoni Buades, Bartomeu Coll, and JM Morel. A nonlocal algorithm for image denoising. In Proc. CVPR, 2005.
 [3] Harold C Burger, Christian J Schuler, and Stefan Harmeling. Image denoising: Can plain neural networks compete with bm3d? In Proc. CVPR, 2012.
 [4] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In Proc. CVPR, 2018.
[5] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Color image denoising via sparse 3d collaborative filtering with grouping constraint in luminance-chrominance space. In Proc. ICIP, 2007.
 [6] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In Advances in neural information processing systems, pages 64–72, 2016.
 [7] Alessandro Foi, Mejdi Trimeche, Vladimir Katkovnik, and Karen Egiazarian. Practical poissoniangaussian noise modeling and fitting for singleimage rawdata. IEEE Transactions on Image Processing, 17(10):1737–1754, 2008.
 [8] Pascal Getreuer, Ignacio GarciaDorado, John Isidoro, Sungjoon Choi, Frank Ong, and Peyman Milanfar. Blade: Filter learning for general purpose computational photography. In 2018 IEEE International Conference on Computational Photography (ICCP), pages 1–11. IEEE, 2018.
[9] Michaël Gharbi, Tzu-Mao Li, Miika Aittala, Jaakko Lehtinen, and Frédo Durand. Sample-based monte carlo denoising using a kernel-splatting network. ACM Transactions on Graphics (TOG), 38(4):125, 2019.
[10] Daniel Glasner, Shai Bagon, and Michal Irani. Super-resolution from a single image. In ICCV, 2009.
[11] Clément Godard, Kevin Matzen, and Matt Uyttendaele. Deep burst denoising. In Proceedings of the European Conference on Computer Vision (ECCV), pages 538–554, 2018.
[12] Samuel W Hasinoff, Dillon Sharlet, Ryan Geiss, Andrew Adams, Jonathan T Barron, Florian Kainz, Jiawen Chen, and Marc Levoy. Burst photography for high dynamic range and low-light imaging on mobile cameras. ACM Transactions on Graphics (TOG), 35(6):192, 2016.
 [13] Felix Heide, Steven Diamond, Matthias Nießner, Jonathan RaganKelley, Wolfgang Heidrich, and Gordon Wetzstein. Proximal: Efficient image optimization using proximal algorithms. ACM Transactions on Graphics (TOG), 35(4):84, 2016.
 [14] Felix Heide, Markus Steinberger, YunTa Tsai, Mushfiqur Rouf, Dawid Pająk, Dikpal Reddy, Orazio Gallo, Jing Liu, Wolfgang Heidrich, Karen Egiazarian, et al. Flexisp: A flexible camera image processing framework. ACM Transactions on Graphics (TOG), 33(6):231, 2014.
[15] Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool. Dynamic filter networks. In Advances in Neural Information Processing Systems, pages 667–675, 2016.
[16] Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, and Seon Joo Kim. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3224–3232, 2018.
[17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [18] Filippos Kokkinos and Stamatis Lefkimmiatis. Iterative residual cnns for burst photography applications. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5929–5938, 2019.
[19] Kostadin Dabov, Alessandro Foi, and Karen Egiazarian. Video denoising by sparse 3d transform-domain collaborative filtering. In European Signal Processing Conference, volume 149, 2007.
[20] Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Andreas Veit, Serge Belongie, Victor Gomes, Abhinav Gupta, Chen Sun, Gal Chechik, David Cai, Zheyun Feng, Dhyanesh Narayanan, and Kevin Murphy. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2017.
 [21] Orly Liba, Kiran Murthy, YunTa Tsai, Tim Brooks, Tianfan Xue, Nikhil Karnad, Qiurui He, Jonathan T. Barron, Dillon Sharlet, Ryan Geiss, Samuel W. Hasinoff, Yael Pritch, and Marc Levoy. Handheld mobile photography in very low light. ACM Trans. Graph., 38(6):164:1–164:16, Nov. 2019.
 [22] Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, and Thomas S Huang. Nonlocal recurrent network for image restoration. In Advances in Neural Information Processing Systems, pages 1680–1689, 2018.
 [23] Ziwei Liu, Raymond A Yeh, Xiaoou Tang, Yiming Liu, and Aseem Agarwala. Video frame synthesis using deep voxel flow. In Proceedings of the IEEE International Conference on Computer Vision, pages 4463–4471, 2017.
 [24] Ziwei Liu, Lu Yuan, Xiaoou Tang, Matt Uyttendaele, and Jian Sun. Fast burst images denoising. ACM Transactions on Graphics (TOG), 33(6):232, 2014.
 [25] Matteo Maggioni, Giacomo Boracchi, Alessandro Foi, and Karen Egiazarian. Video denoising, deblocking, and enhancement through separable 4d nonlocal spatiotemporal transforms. IEEE Transactions on image processing, 21(9):3952–3966, 2012.
[26] Talmaj Marinč, Vignesh Srinivasan, Serhan Gül, Cornelius Hellge, and Wojciech Samek. Multi-kernel prediction networks for denoising of burst images. arXiv preprint arXiv:1902.05392, 2019.
 [27] Ben Mildenhall, Jonathan T Barron, Jiawen Chen, Dillon Sharlet, Ren Ng, and Robert Carroll. Burst denoising with kernel prediction networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2502–2510, 2018.
 [28] Simon Niklaus, Long Mai, and Feng Liu. Video frame interpolation via adaptive convolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 670–679, 2017.
 [29] Simon Niklaus, Long Mai, and Feng Liu. Video frame interpolation via adaptive separable convolution. In Proceedings of the IEEE International Conference on Computer Vision, pages 261–270, 2017.
 [30] Yaniv Romano, John Isidoro, and Peyman Milanfar. Raisr: rapid and accurate image super resolution. IEEE Transactions on Computational Imaging, 3(1):110–125, 2016.
[31] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241, 2015.
 [32] Stefan Roth and Michael J Black. Fields of experts. IJCV, 2009.
[33] Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In Proc. CVPR, 2014.
[34] Thijs Vogels, Fabrice Rousselle, Brian McWilliams, Gerhard Röthlin, Alex Harvill, David Adler, Mark Meyer, and Jan Novák. Denoising with kernel prediction and asymmetric loss functions. ACM Transactions on Graphics (TOG), 37(4):124, 2018.
[35] Zhihao Xia and Ayan Chakrabarti. Identifying recurring patterns with deep neural networks for natural image denoising. arXiv preprint arXiv:1806.05229, 2018.
 [36] Junyuan Xie, Linli Xu, and Enhong Chen. Image denoising and inpainting with deep neural networks. In NeurIPS, pages 341–349, 2012.
 [37] Tianfan Xue, Jiajun Wu, Katherine Bouman, and Bill Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in neural information processing systems, pages 91–99, 2016.
 [38] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017.
 [39] Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang. Learning deep cnn denoiser prior for image restoration. In Proc. CVPR, 2017.
 [40] Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, and Yun Fu. Residual nonlocal attention networks for image restoration. arXiv preprint arXiv:1903.10082, 2019.
[41] Shangchen Zhou, Jiawei Zhang, Jinshan Pan, Haozhe Xie, Wangmeng Zuo, and Jimmy Ren. Spatio-temporal filter adaptive network for video deblurring. In Proc. ICCV, 2019.
 [42] Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration. In Proc. ICCV, 2011.
Appendix A Architecture
We now provide a detailed description of the architecture of our basis prediction network, which includes a shared encoder and two decoders. The shared encoder consists of five downsampling blocks and encodes the noisy frames into a shared feature space. The coefficient decoder decodes these features into a set of per-pixel mixing coefficients; the coefficients for each output pixel are normalized with a softmax separately. The basis decoder first reduces these features to a 1D vector, and then decodes them to an output of shape $B \times T \times K \times K$, representing a burst-specific set of $B$ basis kernels, each of shape $T \times K \times K$ ($B = 90$, $K = 15$, and $T = 8$ for our model). Each 3D basis element is normalized with a softmax so that the basis kernel sums to 1. We include regular skip connections from the encoder to the coefficient decoder, and pooled-skip connections to the basis decoder. The entire architecture is illustrated in Figure 5.
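As a shape-level illustration only, the following PyTorch skeleton mimics the data flow described above; the layer counts, channel widths, and the 1×1/linear heads are placeholders and do not reproduce the actual architecture of Figure 5:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisPredictionSkeleton(nn.Module):
    """Shape-level sketch of the two-branch design (not the real network)."""

    def __init__(self, T=8, B=90, K=15, C=64):
        super().__init__()
        self.T, self.B, self.K = T, B, K
        self.encoder = nn.Sequential(               # stand-in for 5 blocks
            nn.Conv2d(2 * T, C, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(C, C, 3, stride=2, padding=1), nn.ReLU())
        self.coeff_head = nn.Conv2d(C, B, 1)        # per-pixel coefficients
        self.basis_head = nn.Linear(C, B * T * K * K)

    def forward(self, burst, noise_maps):
        # Noisy frames and per-frame noise-parameter maps, concatenated.
        x = torch.cat([burst, noise_maps], dim=1)   # (N, 2T, H, W)
        feat = self.encoder(x)
        # Coefficient branch: decode back to full resolution (sketched here
        # by bilinear upsampling), softmax over the B channels.
        coeffs = F.interpolate(self.coeff_head(feat), size=burst.shape[-2:],
                               mode="bilinear", align_corners=False)
        coeffs = F.softmax(coeffs, dim=1)           # (N, B, H, W)
        # Basis branch: reduce to one global vector, decode to B kernels,
        # softmax-normalize each 3D kernel individually.
        g = feat.mean(dim=(2, 3))                   # (N, C)
        basis = self.basis_head(g).view(-1, self.B, self.T, self.K, self.K)
        basis = F.softmax(basis.flatten(2), dim=-1).view_as(basis)
        return coeffs, basis

net = BasisPredictionSkeleton()
coeffs, basis = net(torch.rand(1, 8, 128, 128), torch.rand(1, 8, 128, 128))
```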
In our ablation study, we considered versions of our network that produce smaller kernels of size $5 \times 5$ and $9 \times 9$. For $5 \times 5$, we used one less upsampling block in the basis decoder than for the $15 \times 15$ case shown in Figure 5, and added one 3×3 convolutional layer with valid padding before the final layer in Figure 5. For $9 \times 9$, we also used one less upsampling block, but in this case replaced the final layer with a 2×2 transpose convolution layer (i.e., one that does a 2×2 "full" convolution).
Our fixed-basis ablation replaced the entire basis decoder branch with a single learned tensor, of size $B \times T \times K \times K$, to serve as the basis. However, we still retain the encoder and coefficient decoder, and the weights of these networks are learned jointly with the fixed basis tensor. Note that in this case, the denoising kernels at each pixel are still formed as a linear combination of this fixed set of basis kernels, based on the coefficients predicted from the input burst at each location (this is different from approaches that select one kernel at each location from a fixed kernel set [30, 8]). As our ablation showed, a fixed basis set yields worse denoising performance than an adaptive basis.
Appendix B Additional Results
We show additional denoising results on images from the benchmark of [27] in Figs. 6–9, comparing the quality of outputs from our method to those from regular KPN [27] and MKPN [26].
[Figures 6–9: Additional qualitative comparisons showing the noisy reference, Direct, KPN [27], MKPN [26], our result, and the ground truth. PSNR (dB) for Direct / KPN / MKPN / Ours in the crops shown: 33.29 / 33.40 / 33.72 / 34.35; 34.29 / 34.27 / 34.82 / 35.52; 28.32 / 28.90 / 29.20 / 30.34.]