1 Introduction
Motion blur is one of the most common artifacts in photography. Camera shake and fast object motion degrade image quality, producing undesired blurry images. Furthermore, causes such as depth variation and occlusion at motion boundaries make blur even more complex. The single-image deblurring problem is to estimate the unknown sharp image given a blurry image. Earlier studies focused on removing blur caused by simple translational or rotational camera motion. More recent works try to handle general non-uniform blur caused by depth variation, camera shake, and object motion in dynamic environments. Most of these approaches are based on the following blur model
[29, 10, 13, 11]:

B = KS + n,   (1)
where B, S, and n are the vectorized blurry image, latent sharp image, and noise, respectively. K is a large sparse matrix whose rows each contain a local blur kernel acting on S to generate a blurry pixel. In practice, the blur kernel is unknown. Thus, blind deblurring methods try to estimate the latent sharp image S and the blur kernel K simultaneously, given a blurry image B.
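To make the role of K concrete, here is a minimal NumPy sketch (illustrative only, not part of the paper) of the model B = KS + n on a tiny 1-D signal. Each row of K holds the local blur kernel that produces one blurry pixel, so the kernel may differ per pixel; the helper `build_blur_matrix` and its border clamping are our own assumptions.

```python
import numpy as np

def build_blur_matrix(kernels):
    """kernels: list of (offsets, weights) pairs, one per output pixel."""
    n = len(kernels)
    K = np.zeros((n, n))
    for i, (offsets, weights) in enumerate(kernels):
        for o, w in zip(offsets, weights):
            j = min(max(i + o, 0), n - 1)   # clamp at image borders
            K[i, j] += w
    return K

S = np.array([0.0, 0.0, 1.0, 0.0, 0.0])      # latent sharp signal
uniform = ([-1, 0, 1], [1/3, 1/3, 1/3])      # 3-tap box kernel
sharp   = ([0], [1.0])                       # identity (no blur)
# Spatially varying blur: only the middle pixels are blurred.
K = build_blur_matrix([sharp, uniform, uniform, uniform, sharp])
B = K @ S                                    # noiseless blurry signal
```

Because each row carries its own kernel, this construction can express the spatially varying blur of a dynamic scene, which a single convolution kernel cannot.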
Finding the blur kernel for every pixel is a severely ill-posed problem. Thus, some approaches tried to parametrize blur models with simple assumptions on the sources of blur. In [29, 10], blur was assumed to be caused by 3D camera motion only. However, in dynamic scenes, kernel estimation is more challenging because there are multiple moving objects in addition to camera motion. Thus, Kim et al. [14] proposed a dynamic scene deblurring method that jointly segments and deblurs a non-uniformly blurred image, allowing the estimation of complex (nonlinear) kernels within each segment. In addition, Kim and Lee [15] approximated the blur kernel as locally linear and proposed an approach that estimates both the latent image and the local linear motions jointly. However, these blur kernel approximations are still inaccurate, especially in cases of abrupt motion discontinuities and occlusions. Note that such erroneous kernel estimation directly affects the quality of the latent image, resulting in undesired ringing artifacts.
Recently, CNNs (convolutional neural networks) have been applied to numerous computer vision problems, including deblurring, and have shown promising results [30, 25, 27, 1]. Since no pairs of a real blurry image and its ground truth sharp image are available for supervised learning, these methods commonly use blurry images generated by convolving synthetic blur kernels. In [30, 25, 1], synthesized blurry images with uniform blur kernels are used for training, and in [27], a classification CNN is trained to estimate locally linear blur kernels. Thus, CNN-based models are still only suitable for a few specific types of blur and have limits against more general, spatially varying blur. Therefore, all the existing methods still have many problems to solve before they can be generalized and used in practice, mainly due to the use of simple and unrealistic blur kernel models. To solve these problems, in this work, we propose a novel end-to-end learning approach for dynamic scene deblurring.
First, we propose a multi-scale CNN that directly restores latent images without assuming any restricted blur kernel model. Unlike other approaches, our method does not estimate explicit blur kernels; accordingly, it is free from artifacts that arise from kernel estimation errors. In particular, the multi-scale architecture is designed to mimic conventional coarse-to-fine optimization methods. Second, we train the proposed model with a multi-scale loss that is appropriate for the coarse-to-fine architecture and greatly enhances convergence. In addition, we further improve the results by employing an adversarial loss [9]. Third, we propose a new realistic blurry image dataset with ground truth sharp images. To obtain a kernel-model-free dataset for training, we employ the dataset acquisition method introduced in [17]. As the blurring process can be modeled by the integration of sharp images over the shutter time [17, 21, 16], we captured sequences of sharp frames of dynamic scenes with a high-speed camera and averaged them to generate blurry images, taking gamma correction into account.
By training with the proposed dataset and adding proper augmentation, our model can handle general local blur kernels implicitly. As the loss term optimizes the result to resemble the ground truth, it even restores occluded regions where the blur kernel is extremely complex, as shown in Fig. 1. We trained our model with millions of pairs of image patches and achieved significant improvements in dynamic scene deblurring. Extensive experimental results demonstrate that the performance of the proposed method is far superior to that of the state-of-the-art dynamic scene deblurring methods in both qualitative and quantitative evaluations.
1.1 Related Works
Xu et al. [30] proposed an image deconvolution CNN to deblur images in a non-blind setting. They built a network based on the separable kernel property, namely that the (inverse) blur kernel can be decomposed into a small number of significant filters. Additionally, they incorporated a denoising network [6] to reduce visual artifacts such as noise and color saturation by concatenating the module at the end of their proposed network.
On the other hand, Schuler et al. [25]
proposed a blind deblurring method with a CNN. Their network mimics conventional optimization-based deblurring methods and iterates the feature extraction, kernel estimation, and latent image estimation steps in a coarse-to-fine manner. To obtain pairs of sharp and blurry images for network training, they generated uniform blur kernels using a Gaussian process and synthesized many blurry images by convolving them with sharp images collected from the ImageNet dataset
[3]. However, they reported performance limits for large blurs due to their suboptimal architecture. Similarly to the work of Couzinie-Devy et al. [2], Sun et al. [27] proposed a sequential deblurring approach. First, they generated pairs of blurry and sharp patches with 73 candidate blur kernels. Next, they trained a classification CNN to measure the likelihood of a specific blur kernel for a local patch. A smoothly varying blur kernel field is then obtained by optimizing an energy model composed of the CNN likelihoods and smoothness priors. The final latent image estimation is performed with a conventional optimization method [31].
Note that all these methods require an accurate kernel estimation step to restore the latent sharp image. In contrast, our proposed system is trained to produce the latent image directly, without estimating blur kernels.
In other computer vision tasks, several forms of coarse-to-fine or multi-scale architecture have been applied [7, 5, 4, 23, 8]. However, not all multi-scale CNNs are designed to produce optimal results, similarly to [25]. In depth estimation, optical flow estimation, and related tasks, networks usually produce outputs of smaller resolution than the input image [7, 5, 8]. These methods have difficulty handling long-range dependencies even when a multi-scale architecture is used.
Therefore, we design a multi-scale architecture that preserves fine-grained detail information as well as long-range dependencies from coarser scales. Furthermore, we make sure the intermediate-level networks help the final stage explicitly by training the network with multi-scale losses.
1.2 Kernel-Free Learning for Dynamic Scene Deblurring
Conventionally, it was essential to find the blur kernel before estimating the latent image, and CNN-based approaches are no exception [25, 26]. However, estimating the kernel involves several problems. First, assuming a simple kernel convolution cannot model challenging cases such as occluded regions or depth variations. Second, the kernel estimation process is subtle and sensitive to noise and saturation unless the blur model is carefully designed; moreover, incorrectly estimated kernels give rise to artifacts in the latent image. Third, finding a spatially varying kernel for every pixel in a dynamic scene requires a huge amount of memory and computation.
Therefore, we adopt a kernel-free approach in both blur dataset generation and latent image estimation. For blurry image generation, instead of finding or designing complex blur kernels, we approximate the camera imaging process rather than assuming specific motions: we capture successive sharp frames and integrate them to simulate the blurring process. The detailed procedure is described in Section 2. Note that our dataset is composed of blurry and sharp image pairs only, and that the local kernel information is implicitly embedded in it. In Fig. 2, our kernel-free blurry image is compared with a conventional image synthesized with a uniform blur kernel. Notably, the blurry image generated by our method exhibits realistic, spatially varying blur caused by the moving person and the static background, while the conventionally synthesized image does not. For latent image estimation, we do not assume any blur source and train the model solely on our blurry and sharp image pairs. Thus, our proposed method does not suffer from kernel-related problems in deblurring.
2 Blur Dataset
Instead of modeling a kernel to convolve with a sharp image, we choose to record the sharp information to be integrated over time for blurry image generation. As the camera sensor receives light during the exposure, the sharp image stimulation at every moment is accumulated, generating a blurry image [13]. The integrated signal is then transformed into pixel values by a nonlinear CRF (camera response function). Thus, the process can be approximated by accumulating signals from high-speed video frames.
The blur accumulation process can be modeled as follows:
B = g\left(\frac{1}{T}\int_{0}^{T} S(t)\,dt\right) \simeq g\left(\frac{1}{M}\sum_{i=1}^{M} S[i]\right),   (2)
where T and S(t) denote the exposure time and the sensor signal of a sharp image at time t, respectively. Similarly, M and S[i] are the number of sampled frames and the i-th sharp frame captured during the exposure time, respectively. g is the CRF that maps a sharp latent signal S(t) into an observed image \hat{S}(t) such that \hat{S}(t) = g(S(t)), or \hat{S}[i] = g(S[i]). In practice, we only have the observed video frames \hat{S}[i], while the original signal and the CRF are unknown.
It is known that non-uniform deblurring becomes significantly more difficult when a nonlinear CRF is involved, and the nonlinearity should be taken into account. However, currently there are no CRF estimation techniques available for an image with spatially varying blur [28]. When the ground truth CRF is not given, a common practical method is to approximate the CRF as a gamma curve with γ = 2.2 as follows, since this is known to be an approximate average of known CRFs [28]:
g(x) = x^{1/\gamma},   (3)
Thus, by inverting the gamma function, we obtain the sharp latent frame from the observed frame as S[i] = g^{-1}(\hat{S}[i]), and then synthesize the corresponding blurry image using (2).
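A minimal sketch of this synthesis pipeline, corresponding to Eqs. (2) and (3) and assuming the γ = 2.2 gamma-curve CRF (the function names here are our own): observed high-speed frames are first linearized with g⁻¹, averaged in signal space, and mapped back through g.

```python
import numpy as np

GAMMA = 2.2

def g(x):        # approximate CRF: latent signal -> observed pixel value
    return np.clip(x, 0.0, 1.0) ** (1.0 / GAMMA)

def g_inv(x):    # inverse CRF: observed frame -> latent signal
    return np.clip(x, 0.0, 1.0) ** GAMMA

def synthesize_blur(observed_frames):
    latent = [g_inv(f) for f in observed_frames]   # S[i] = g^{-1}(hat S[i])
    return g(np.mean(latent, axis=0))              # B = g(mean of S[i])

# Toy example: 7 "frames" of a bright dot moving one pixel per frame,
# as if captured by a high-speed camera during one long exposure.
frames = [np.eye(7)[i].reshape(1, 7) for i in range(7)]
blurry = synthesize_blur(frames)
```

The moving dot smears into a streak whose intensity reflects the fraction of the exposure each pixel was lit, which is exactly the motion-blur behavior the averaging model is meant to capture.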
We used a GOPRO4 Hero Black camera to generate our dataset. We took 240 fps videos with the GOPRO camera and then averaged a varying number (7 - 13) of successive latent frames to produce blurs of different strengths. For example, averaging 15 frames simulates a photo taken at a 1/16 shutter speed, while the corresponding sharp image shutter speed is 1/240. Notably, the sharp latent image corresponding to each blurry one is defined as the mid-frame among the sharp frames used to make the blurry image. Finally, our dataset is composed of 3214 pairs of blurry and sharp images at 1280×720 resolution. The proposed GOPRO dataset is publicly available on our website (https://github.com/SeungjunNah/DeepDeblur_release).
3 Proposed Method
In our model, finer-scale image deblurring is aided by coarser-scale features. To exploit coarse and mid-level information while preserving fine-level information at the same time, the input and output of our network take the form of Gaussian pyramids. Note that most other coarse-to-fine networks take a single input and produce a single output.
3.1 Model Architecture
In addition to the multi-scale architecture, we employ a slightly modified version of the residual network structure [12] as the building block of our model. Using a residual structure enables a deeper architecture than a plain CNN. Also, as blurry and sharp image pairs are similar in values, it is efficient to let the parameters learn the difference only. We found that removing the rectified linear unit after the shortcut connection of the original residual building block boosts convergence speed at training time. We denote the modified building block as a ResBlock. The original and our modified building blocks are compared in Fig. 3.
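The modified ResBlock can be sketched as conv → ReLU → conv, followed by the shortcut addition with no ReLU after the sum. In this NumPy illustration the "convolutions" are placeholder per-pixel matrix multiplies (real blocks use spatial filters); the shapes and initialization are our own assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def resblock(x, w1, w2):
    h = relu(x @ w1)        # first conv + ReLU
    h = h @ w2              # second conv, no activation yet
    return x + h            # shortcut sum; note: NO ReLU after the sum

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 8))     # H x W x C feature map
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = np.zeros((8, 8))                  # zero-init second conv
y = resblock(x, w1, w2)                # acts as identity when w2 == 0
```

With the residual branch initialized near zero, each block starts close to the identity mapping, which is exactly why stacking many of them stays trainable and why learning only the blurry-to-sharp difference is efficient.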
By stacking a sufficient number of convolution layers with ResBlocks, the receptive field at each scale is expanded. Details are described in the following paragraphs. For the sake of consistency, we define scale levels in order of decreasing resolution (i.e., level 1 is the finest scale). Unless denoted otherwise, we use 3 scales in total. At training time, we set the resolution of the input and output Gaussian pyramid patches to 256×256. The scale ratio between consecutive scales is 0.5. For all convolution layers, we set the filter size to 5×5. As our model is fully convolutional, the patch size may vary at test time as GPU memory allows. The overall architecture is shown in Fig. 4.
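The receptive-field growth from stacking convolutions can be checked with simple arithmetic: each stride-1 k×k layer grows the field by k − 1. Assuming 5×5 filters, 40 layers span 161 pixels, which comfortably covers the coarsest-scale input patch.

```python
# Receptive field of a stack of stride-1 convolutions:
# each k x k layer adds (k - 1) pixels to the field.
def receptive_field(num_layers, kernel_size):
    return 1 + num_layers * (kernel_size - 1)

rf = receptive_field(40, 5)   # field of the 40-layer coarsest stage
```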
Coarsest level network
At the front of the network is the coarsest-level network. The first convolution layer transforms the 1/4-resolution input into 64 feature maps. Then, 19 ResBlocks are stacked, followed by a last convolution layer that transforms the feature maps back to the input dimension. Every convolution layer preserves dimensions with zero padding. In total, there are 40 convolution layers; the number of convolution layers at each scale is chosen so that the total model has 120 convolution layers. Thus, the coarsest-scale network has a receptive field large enough to 'see' the whole patch. At the end of the stage, the coarsest-level latent sharp image is generated. Moreover, information from the coarsest-level output is delivered to the next stage, where the finer-scale network is. To convert the coarsest output to fit the input size of the next finer scale, the output patch passes through an up-convolution [22] layer, while other multi-scale methods use reshaping [7] or upsampling [4, 5, 23]. Since the sharp and blurry patches share low-frequency information, learning a suitable feature with up-convolution helps to remove this redundancy. In our experiments, using up-convolution showed better performance than upsampling. Then, the up-convolution feature is concatenated with the finer-scale blurry patch as an input.
Finer level network
Finer-level networks basically have the same structure as the coarsest-scale network. However, the first convolution layer takes the sharp feature from the previous stage as well as its own blurry input image, in concatenated form. Every convolution filter has the same size and number of feature maps as in the coarsest scale. Except at the finest scale, there is an up-convolution layer before the next stage. At the finest scale, the original-resolution sharp image is restored.
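The coarse-to-fine data flow described above can be sketched at the shape level, assuming 3 scales with a 0.5 ratio. Here `downsample` stands in for a Gaussian pyramid step, `upsample` for the learned up-convolution, and `deblur_stage` is a placeholder for each per-scale sub-network; only the wiring matches the text.

```python
import numpy as np

def downsample(img):                 # stand-in for a Gaussian pyramid step
    return img[::2, ::2]

def upsample(img):                   # stand-in for the up-convolution layer
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def deblur_stage(x):                 # placeholder per-scale network
    return x.mean(axis=-1, keepdims=True)

blurry = np.ones((256, 256))
pyramid = [blurry, downsample(blurry), downsample(downsample(blurry))]

out = deblur_stage(pyramid[2][..., None])          # coarsest stage first
for level in (1, 0):                               # then finer stages
    feat = upsample(out)                           # pass result upward
    # concatenate upscaled coarse output with the finer blurry input
    x = np.concatenate([pyramid[level][..., None], feat], axis=-1)
    out = deblur_stage(x)
```

Each stage thus receives both its own blurry input and the previous stage's estimate, so long-range context from the coarse scale reaches the full-resolution output.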
3.2 Training
Our model is trained on the proposed GOPRO dataset. Among the 3214 pairs, 2103 pairs were used for training and the rest for testing. To prevent our network from overfitting, several data augmentation techniques are used. In terms of geometric transformations, patches are flipped horizontally/vertically and rotated by random degrees. For color, the RGB channels are randomly permuted. To take image degradations into account, the saturation in HSV color space is multiplied by a random number within [0.5, 1.5]. Also, Gaussian random noise is added to the blurry images; to make our network robust against different noise strengths, the standard deviation of the noise is itself randomly sampled from a Gaussian distribution. Finally, the augmented image values are clipped to the range [0, 1].

In optimizing the network parameters, we train the model with a combination of two losses: a multi-scale content loss and an adversarial loss.
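The augmentation pipeline described above can be sketched as follows. The exact ranges and the restriction to 90-degree rotations are our own assumptions, and the HSV saturation scaling is omitted here for brevity; only flips/rotations, channel permutation, sampled-strength Gaussian noise, and final clipping are shown.

```python
import numpy as np

def augment(patch, rng):
    if rng.random() < 0.5:
        patch = patch[:, ::-1]                  # horizontal flip
    if rng.random() < 0.5:
        patch = patch[::-1, :]                  # vertical flip
    patch = np.rot90(patch, k=rng.integers(4))  # random 90-degree rotation
    patch = patch[..., rng.permutation(3)]      # permute RGB channels
    sigma = abs(rng.normal(0.0, 2.0 / 255.0))   # sampled noise strength (assumed)
    patch = patch + rng.normal(0.0, sigma, size=patch.shape)
    return np.clip(patch, 0.0, 1.0)             # keep values in valid range

rng = np.random.default_rng(0)
out = augment(np.full((16, 16, 3), 0.5), rng)
```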
Multiscale content loss
Basically, the coarse-to-fine approach desires that every mid-level output be the sharp image at the corresponding scale. Thus, we train our network so that the intermediate latent images form the Gaussian pyramid of the sharp image. The MSE criterion is applied to every level of the pyramid. Hence, the loss function is defined as follows:
\mathcal{L}_{cont} = \frac{1}{2K} \sum_{k=1}^{K} \frac{1}{c_k w_k h_k} \left\| L_k - S_k \right\|^2,   (4)
where L_k and S_k denote the model output and the ground truth image at scale level k, respectively. The loss at each scale is normalized by the number of channels c_k, the width w_k, and the height h_k (i.e., the total number of elements).
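The multi-scale content loss is a direct computation: the MSE between the output and ground-truth pyramids at every scale, each term normalized by its element count c_k · w_k · h_k, with the sum scaled by 1/(2K). A minimal sketch:

```python
import numpy as np

def multiscale_content_loss(outputs, targets):
    """outputs, targets: lists of arrays, one (C, H, W) pair per scale."""
    K = len(outputs)
    total = 0.0
    for L_k, S_k in zip(outputs, targets):
        # L_k.size equals c_k * w_k * h_k, the per-scale normalizer
        total += np.sum((L_k - S_k) ** 2) / L_k.size
    return total / (2 * K)

outs = [np.zeros((3, 8, 8)), np.zeros((3, 4, 4))]
gts  = [np.ones((3, 8, 8)),  np.ones((3, 4, 4))]
loss = multiscale_content_loss(outs, gts)   # each scale's MSE is 1.0
```

Because every scale contributes an equally normalized term, the intermediate stages receive direct supervision rather than gradients filtered through the finest output alone.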
Adversarial loss
Recently, adversarial networks have been reported to generate sharp, realistic images [9, 4, 24]. Following the architecture introduced in [24], we build a discriminator as in Table 1. The discriminator takes either the output of the finest scale or a ground truth sharp image as input and classifies whether it is a network output or a real image.
The adversarial loss is defined as follows.
\mathcal{L}_{adv} = \mathbb{E}_{S \sim p_{sharp}(S)}\left[\log D(S)\right] + \mathbb{E}_{B \sim p_{blurry}(B)}\left[\log\left(1 - D(G(B))\right)\right],   (5)
where G and D denote the generator (i.e., our multi-scale deblurring network in Fig. 4) and the discriminator (classifier), respectively.
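In practice the two expectations in Eq. (5) are estimated from mini-batches of discriminator scores. A minimal sketch, assuming the scores are sigmoid outputs in (0, 1) and using a small epsilon for numerical safety (our addition):

```python
import numpy as np

def adversarial_loss(d_sharp, d_generated, eps=1e-12):
    """d_sharp: D(S) on real sharp images; d_generated: D(G(B)) on outputs."""
    return (np.mean(np.log(d_sharp + eps))
            + np.mean(np.log(1.0 - d_generated + eps)))

# A confident, correct discriminator drives the objective toward 0 (its max);
# D maximizes this quantity while G minimizes the second term.
loss = adversarial_loss(np.array([0.99]), np.array([0.01]))
```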
#    Layer     Weight dimension    Stride
1    conv      -                   2
2    conv      -                   1
3    conv      -                   2
4    conv      -                   1
5    conv      -                   2
6    conv      -                   1
7    conv      -                   4
8    conv      -                   1
9    conv      -                   4
10   conv      -                   2
11   fc        -                   -
12   sigmoid   -                   -
Finally, by combining the multi-scale content loss and the adversarial loss, the generator and discriminator networks are jointly trained. Thus, our final loss term is
\mathcal{L}_{total} = \mathcal{L}_{cont} + \lambda \mathcal{L}_{adv},   (6)
where λ is a weight constant balancing the two terms.
We used the ADAM [18] optimizer with a mini-batch size of 4 for training. The learning rate is adaptively tuned: starting from the initial rate, it is decreased to 1/10 of the previous value after a fixed number of iterations, and training is run until convergence.
4 Experimental Results
4.1 GOPRO Dataset
We evaluate the performance of our model on the proposed GOPRO dataset. Our test dataset consists of 1111 pairs, approximately one third of the total dataset. We compare our results with those of the state-of-the-art methods [16, 27] in both qualitative and quantitative ways. Our results show significant improvement in image quality. Some deblurring results are shown in Fig. 5. We notice from the results of Sun et al. [27] that deblurring is not successful in regions where the blur is nonlinearly shaped or located at motion boundaries. The results of Kim and Lee [16] also fail in cases where strong edges are not found. In contrast, our results are free from those kernel-estimation-related problems. Table 2 shows the quantitative evaluation of the competing methods and ours with different scale levels, in terms of PSNR and SSIM over the test data.
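PSNR, one of the two quantitative measures used here, is simple to compute; a minimal sketch, assuming image values in [0, 1]:

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Two flat images differing by 0.1 everywhere: MSE = 0.01, PSNR = 20 dB.
val = psnr(np.full((8, 8), 0.5), np.full((8, 8), 0.6))
```

Higher is better; a 1 dB gap already corresponds to a roughly 20% reduction in MSE, which is why the per-method differences in Table 2 are meaningful.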
4.2 Köhler Dataset
4.3 Dataset of Lai et al.
Lai et al. [20] generated a synthetic dataset by convolving non-uniform blur kernels and imposing several common degradations. They recorded 6D camera trajectories to generate the blur kernels. However, their blurry and sharp images are not aligned in the way ours are, making simple image quality metrics such as PSNR and SSIM less correlated with perceptual quality. Thus, we show qualitative comparisons in Fig. 6. Clearly, our results avoid ringing artifacts while preserving details such as the wave ripples.
5 Conclusion
In this paper, we proposed a blind deblurring neural network for sharp image estimation. Contrary to existing works, our model avoids problems related to kernel estimation. The proposed model follows a coarse-to-fine approach and is trained with a multi-scale loss. Furthermore, we generated a realistic blur dataset with ground truth, enabling efficient supervised learning and rigorous evaluation. Experimental results show that our approach outperforms the state-of-the-art methods in both qualitative and quantitative ways.
References
 [1] A. Chakrabarti. A neural approach to blind motion deblurring. In ECCV, 2016.
 [2] F. Couzinie-Devy, J. Sun, K. Alahari, and J. Ponce. Learning to estimate and remove non-uniform image blur. In CVPR, 2013.
 [3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255. IEEE, 2009.
 [4] E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pages 1486–1494, 2015.
 [5] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, pages 2650–2658, 2015.
 [6] D. Eigen, D. Krishnan, and R. Fergus. Restoring an image taken through a window covered with dirt or rain. In ICCV, pages 633–640, 2013.
 [7] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in Neural Information Processing Systems, pages 2366–2374, 2014.
 [8] P. Fischer, A. Dosovitskiy, E. Ilg, P. Häusser, C. Hazırbaş, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. arXiv preprint arXiv:1504.06852, 2015.
 [9] I. Goodfellow, J. PougetAbadie, M. Mirza, B. Xu, D. WardeFarley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
 [10] A. Gupta, N. Joshi, C. L. Zitnick, M. Cohen, and B. Curless. Single image deblurring using motion density functions. In ECCV, pages 171–184. Springer, 2010.
 [11] S. Harmeling, M. Hirsch, and B. Schölkopf. Space-variant single-image blind deconvolution for removing camera shake. In Advances in Neural Information Processing Systems, pages 829–837, 2010.
 [12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
 [13] M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf. Fast removal of non-uniform camera shake. In ICCV, 2011.
 [14] T. H. Kim, B. Ahn, and K. M. Lee. Dynamic scene deblurring. In ICCV, 2013.
 [15] T. H. Kim and K. M. Lee. Segmentationfree dynamic scene deblurring. In CVPR, 2014.
 [16] T. H. Kim and K. M. Lee. Generalized video deblurring for dynamic scenes. In CVPR, 2015.
 [17] T. H. Kim, S. Nah, and K. M. Lee. Dynamic scene deblurring using a locally adaptive linear blur model. arXiv preprint arXiv:1603.04265, 2016.
 [18] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [19] R. Köhler, M. Hirsch, B. Mohler, B. Schölkopf, and S. Harmeling. Recording and playback of camera shake: Benchmarking blind deconvolution with a realworld database. In ECCV, pages 27–40. Springer, 2012.
 [20] W.S. Lai, J.B. Huang, Z. Hu, N. Ahuja, and M.H. Yang. A comparative study for single image blind deblurring.
 [21] Y. Li, S. B. Kang, N. Joshi, S. M. Seitz, and D. P. Huttenlocher. Generating sharp panoramas from motionblurred videos. In CVPR, 2010.
 [22] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
 [23] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
 [24] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
 [25] C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf. Learning to deblur. arXiv preprint arXiv:1406.7444, 2014.
 [26] D. Sun, S. Roth, and M. J. Black. A quantitative analysis of current practices in optical flow estimation and the principles behind them. IJCV, 106(2):115–137, 2014.
 [27] J. Sun, W. Cao, Z. Xu, and J. Ponce. Learning a convolutional neural network for non-uniform motion blur removal. In CVPR, pages 769–777. IEEE, 2015.
 [28] Y.W. Tai, X. Chen, S. Kim, S. J. Kim, F. Li, J. Yang, J. Yu, Y. Matsushita, and M. S. Brown. Nonlinear camera response functions and image deblurring: Theoretical analysis and practice. PAMI, 35(10):2498–2512, 2013.
 [29] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce. Non-uniform deblurring for shaken images. In CVPR, 2010.
 [30] L. Xu, J. S. Ren, C. Liu, and J. Jia. Deep convolutional neural network for image deconvolution. In Advances in Neural Information Processing Systems, pages 1790–1798, 2014.
 [31] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In ICCV, pages 479–486. IEEE, 2011.
A Appendix
In this appendix, we present more comparative experimental results to demonstrate the effectiveness of our proposed deblurring method.
A.1 Comparison of Loss Functions
In Section 3.2, we employed a loss function combining the multi-scale content loss (MSE) and the adversarial loss for training our network. Here, we examine the effect of the adversarial loss term quantitatively and qualitatively. The PSNR and SSIM results are shown in Table A.1. From these results, we observe that adding the adversarial loss does not increase PSNR but does increase SSIM, which means that it encourages the network to generate more natural, structure-preserving images.
Loss    Content only    Content + adversarial
PSNR    28.62           28.45
SSIM    0.9094          0.9170
A.2 Comparison on GOPRO Dataset
A.3 Comparison on Lai et al. [20] Dataset
We provide qualitative results on the dataset of Lai et al. [20]. The Lai et al. dataset is composed of synthetic and real blurry images; we showed the deblurring result of a synthetically generated blurry image in Section 4.3. Here, we present the qualitative deblurring results of the competing methods on real images in Figs. A.5 and A.6.