1 Introduction
Motion blur is among the most common artifacts in videos recorded with handheld cameras. In low-light conditions, such blur is caused by camera shake and object motion during the exposure time. In addition, fast-moving objects in the scene cause blurring artifacts even when the lighting is adequate. For decades, this problem has motivated considerable work on deblurring, and different approaches have been pursued depending on whether the captured scene is static or dynamic.
Early works on single image deblurring assume that the captured scene is static with constant depth [1, 2, 3, 4, 5, 6], and estimate a uniform or non-uniform blur kernel caused by camera shake. These approaches were naturally extended to video deblurring. Cai et al. [7] proposed a multi-frame deconvolution method that exploits the sparsity of both blur kernels and clear images to reduce errors from inaccurate registration and render a high-quality latent image. However, this approach removes only uniform blur caused by two-dimensional translational camera motion and cannot handle non-uniform blur from rotational camera motion around the z-axis, which is the main cause of motion blur [6]. To address this problem, Li et al. [8] parameterized spatially varying motion with 3x3 homographies, building on the earlier work of Tai et al. [9], and could thus handle non-uniform blur caused by rotational camera shake. In the work of Cho et al. [10], camera motion in three-dimensional space was estimated without any assistance of specialized hardware, and spatially varying blurs caused by projective camera motion were obtained. Moreover, in the works of Paramanand et al. [11] and Lee and Lee [12], spatially varying blurs caused by depth variation in a static scene were estimated and removed.
However, these methods, which assume a static scene, suffer from spatially varying blurs caused not only by camera shake but also by moving objects in a dynamic scene. Because the pixelwise-varying blur kernel in a dynamic scene cannot be parameterized with a simple homography, kernel estimation becomes a more challenging task. Therefore, several researchers have studied removing blurs in dynamic scenes, and their methods can be grouped into two approaches: segmentation-based deblurring and exemplar-based deblurring.
Segmentation-based approaches usually estimate multiple motions, kernels, and associated segments. In the work of Cho et al. [13], a method was proposed that segments homogeneous motions and estimates a different 1D Gaussian blur kernel for each segment. However, it cannot handle complex motions caused by rotational camera shake due to the limitation of Gaussian kernels. In the work of Bar et al. [14], a layered model was proposed that segments images into foreground and background layers and estimates a linear blur kernel within the foreground layer. The layered model permits explicit occlusion handling, but the kernel is restricted to be linear. To overcome these limitations, Wulff and Black [15] improved the layered model of Bar et al. by estimating different motions for both the foreground and background layers. However, these motions are restricted to affine models, and the approach is difficult to extend to multi-layered scenes because such an extension requires depth ordering of the layers. To sum up, segmentation-based deblurring approaches have the advantage of removing blurs caused by moving objects in dynamic scenes. However, segmentation itself is a very difficult problem and remains a challenging issue, as reported in [16]. Moreover, these methods fail to segment complex motions, such as the motions of people, because the simple parametric motion models used in [14, 15] cannot fit such motions accurately.
Exemplar-based approaches were proposed in the works of Matsushita et al. [17] and Cho et al. [18]. These methods do not rely on accurate segmentation and deconvolution. Instead, the latent frames are rendered by interpolating the lucky sharp frames that frequently exist in videos, thus avoiding severe ringing artifacts. However, the method of Matsushita et al. [17] cannot remove blurs caused by moving objects. In addition, the method of Cho et al. [18] allows only slow-moving objects in dynamic scenes because it searches for sharp patches corresponding to a blurry patch after registration with a homography. Therefore, it cannot handle fast-moving objects whose motions are distinct from those of the background. Moreover, since it relies on simple interpolation rather than deconvolution with spatial priors, it degrades mid-frequency textures such as grass and trees, and renders overly smooth results.

On the other hand, defocus caused by the limited depth-of-field (DOF) of conventional digital cameras also produces blurry effects in videos. Although a shallow DOF is often used to render aesthetic images and highlight focused objects, frequent misfocus of moving objects yields image degradation when the motion is large and fast. Moreover, depth variation in the scene generates spatially varying defocus blurs, which makes estimating the defocus blur map itself a difficult problem. Many studies have therefore addressed defocus blur kernel estimation. Most of them approximate the kernel with a simple Gaussian or disc model, so that kernel estimation reduces to a parameter estimation problem (e.g., the standard deviation of the Gaussian blur or the disc radius) [19, 20, 21, 22]. To magnify focus differences, Bae and Durand [19] estimated a defocus blur map at the edges first and then propagated the results to other regions. However, the estimated blur map is inaccurate where the blurs are strong, since the approach is image-based and depends on detected edges, which may be poorly localized. Similarly, Zhuo and Sim [22] propagated the amount of blur at the edges, obtained by measuring the ratio between the gradients of the defocused input and the input re-blurred with a Gaussian kernel, to the rest of the image. To reduce the reliance on strong edges in the defocused image, Zhu et al. [21] utilized the statistics of the blur spectrum within the defocused image, since statistical models remain applicable where no strong edges exist. Specifically, local image statistics are used to measure the probability of a defocus scale and to determine the locally varying scale of defocus blur in a single image. However, local image statistics-based methods do not work when motion blurs coexist with defocus blurs within a single image: motion blurs change the local statistics and yield much more complex blurs combined with the defocus blurs.
In the recent work of Kim and Lee [23], a new generalized video deblurring (GVD) method was proposed that estimates latent frames without global motion parameterization or segmentation to remove motion blurs in dynamic scenes. In GVD, bidirectional optical flows are estimated and used to infer pixelwise-varying kernels. Therefore, the method naturally handles coexisting blurs caused by camera shake and by moving objects with complex motions. Because estimating the flow fields and restoring the sharp frames form a joint problem, both sets of variables are estimated simultaneously in GVD. To do so, a single energy model for the joint problem was proposed, together with efficient solvers to optimize it.
However, since the GVD method is based on a piecewise-linear kernel approximation, it cannot handle the nonlinear blurs that arise when motion and defocus blurs combine, which are common in videos captured with handheld cameras. Therefore, in this work, we propose an extended and more generalized version of GVD that handles not only motion blur but also defocus blur, which further improves the deblurring quality significantly. Under the assumption that the complex nonlinear blur kernel can be decomposed into motion and defocus blur kernels, we jointly estimate bidirectional optical flows to approximate the motion blur kernels, the scales of Gaussian blurs to approximate the defocus blur kernels, and the latent frames. The result of our system is shown in Fig. 1, in which the motion blurs of differently moving people and the Gaussian blurs in the background are successfully removed while accurate optical flows are jointly estimated.
Finally, we provide a new realistic blur dataset with ground truth sharp frames captured by a high-speed camera, to overcome the lack of realistic ground truth datasets in this field. Although there have been some evaluation datasets for the deblurring problem, they are not appropriate for a meaningful evaluation of deblurring under spatially varying blurs. First, synthetically generated uniform blur kernels and blurry images derived from sharp images were provided in the work of Levin et al. [24]. Next, 6D camera motion in 3D space was recorded with a hardware-assisted camera to reproduce blur from camera shake during the exposure time in the work of Köhler et al. [25]. Moreover, there have been some recent approaches that generate synthetic datasets for machine learning algorithms. To benefit from large training sets, numerous blur kernels and blurry images have been synthesized. In the work of Xu et al. [26], more than 2500 blurry images were generated using decomposable symmetric kernels. Schuler et al. [27] sampled natural-looking blur kernels with a Gaussian process, and Sun et al. [28] used a set of linear kernels to synthesize blurry images. However, these datasets are generated under the assumption that the scene is static, and they cannot reproduce the infinitely many blurs of the real world. Real blurs in dynamic scenes are complex and spatially varying, so synthesizing a realistic dataset is a difficult problem. To solve this problem, we construct a new blur dataset that provides pairs of realistically blurred videos and sharp videos with the use of a high-speed camera. Using the proposed dataset and real challenging videos, as shown in Fig. 2, we demonstrate the significant improvements of the proposed deblurring method both quantitatively and qualitatively. Moreover, we show empirically that our method estimates more accurate optical flows than the state-of-the-art optical flow method designed for blurry images.
2 More Generalized Video Deblurring
Most conventional video deblurring methods suffer from the coexistence of various motion blurs in dynamic scenes because the motions cannot be fully parameterized using global or segment-wise blur models. To make matters worse, frequent misfocus of moving objects in dynamic scenes yields even more complex nonlinear blurs combined with the motion blurs.
To handle these joint motion and defocus blurs, we propose a new blur model that estimates locally (pixelwise) different blur kernels rather than global or segment-wise kernels. As the blind deblurring problem is highly ill-posed, we propose a single energy model that consists of not only data and spatial regularization terms but also a temporal term. The model is expressed as follows:

E(\mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}) = E_{\mathrm{data}}(\mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}) + E_{\mathrm{temporal}}(\mathbf{L}, \mathbf{u}) + E_{\mathrm{spatial}}(\mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}),   (1)
and the detailed models of each term in (1) are given in the following sections.
2.1 Data Model Based on Kernel Approximation
Motion blurs are generally caused by camera shake and moving objects, whereas defocus blurs are mainly determined by the aperture size, the focal length, and the distance between the camera and the focused object. These two different blurs combine and yield more complex blurs in real videos. For example, Fig. 3 shows how different the blurred images are when point light sources are captured by the same moving camera with and without defocus blur. We observe that the light streak of the defocused light source is much smoother and more nonlinear than that of the focused one. Notably, the light streaks indicate the blur kernel shapes.
However, it is difficult to directly remove the complex blur in Fig. 3 (c). Thus, to alleviate the problem, we assume that the combined blur kernel can be decomposed into two different kernels: a motion blur kernel and a defocus blur kernel. Our assumption holds when the depth change in the scene during the exposure period is relatively small, which is acceptable since we treat videos with rather short exposure times. The underlying blurring procedure can then be modeled as a sequential process of defocus blurring followed by motion blurring, as illustrated in Fig. 4.
Note that in conventional video deblurring works [14, 18, 8, 15], the motion blurs of each frame are usually approximated by parametric models such as homography and affine models. However, these kernel approximations are valid only when the motions are parameterizable within an entire frame or a segment, and they cannot cope with spatially varying motion blurs. To solve this problem, we approximate the pixelwise motion blur kernel using bidirectional optical flows, as suggested in previous works [29, 16, 30].

Spatially varying defocus blur is approximated with a Gaussian or disc model in conventional works [20, 21]. The defocus maps and scales of local blurs are then determined by simply estimating the standard deviations of the Gaussian models or the radii of the disc models. In particular, local image statistics are widely used to estimate spatially varying defocus blur. Specifically, within a uniformly blurred patch, the local frequency spectrum provides information on the blur kernel and can be used to determine the likelihood of a specific blur kernel [21]; thus, the scales of defocus blurs can be estimated by measuring the fidelities of the likelihood model. However, it is difficult to apply this statistics-based technique when the blurry image contains motion blurs in addition to defocus blurs. In Fig. 5, we observe that the maximum likelihood (ML) estimator used in [21] finds the optimal defocus blur kernel when a patch is blurred only by defocus; however, it cannot estimate the true defocus kernel when a blurry patch contains motion blur as well as defocus blur. Therefore, we cannot adopt local image statistics to remove defocus blurs in dynamic scenes with motion blurs. In this study, we approximate the pixelwise-varying defocus blur with a Gaussian model and determine its standard deviation by jointly estimating the defocus blur maps and the latent frames, unlike conventional works that utilize only local image statistics.
Therefore, under the two assumptions that the latent frames are first blurred by defocus and subsequently by motion, and that the velocity of the motion is constant between adjacent frames, our blur model is expressed as follows:
B_i(\mathbf{x}) = \big( k_{i,\mathbf{x}} * ( g_{i,\mathbf{x}} * L_i ) \big)(\mathbf{x}),   (2)
where B_i and L_i denote the blurry frame and the latent frame at the i-th frame, respectively, and \mathbf{x} denotes a pixel location on the 2D image domain. At \mathbf{x}, the motion blur kernel is denoted as k_{i,\mathbf{x}}, the Gaussian blur kernel caused by defocus is denoted as g_{i,\mathbf{x}}, and the operator * denotes convolution.
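For illustration, the minimal Python sketch below synthesizes a blurry frame following the sequential model in (2). The spatially uniform kernels are a simplification for brevity (the proposed model uses pixelwise kernels), and all variable names are ours:

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# Toy illustration of the blur model in (2), with spatially
# uniform kernels for brevity (the paper uses pixelwise kernels).
L_i = np.random.rand(64, 64)             # latent frame L_i
sigma = 1.5                              # defocus scale (Gaussian std.)
k = np.zeros((9, 9)); k[4, 2:7] = 1.0    # horizontal linear motion kernel
k /= k.sum()                             # kernel weights sum to one

defocused = gaussian_filter(L_i, sigma)  # g * L_i : defocus blur first
B_i = convolve(defocused, k)             # k * (g * L_i) : then motion blur
```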
To handle locally varying motion blurs, we must reduce the size of the solution space through approximation and use a parameterized kernel, because the solution space of a locally varying kernel in video is extremely large: the kernels have N·M·T·s² unknowns when the image size is N×M, the length of the image sequence is T, and the size of each local kernel is s×s. Therefore, we approximate the motion blur kernel as piecewise linear using bidirectional optical flows, as illustrated in Fig. 6 (a). Although our motion blur kernel is based on a simple approximation, the model is valid since we assume that the videos have relatively short exposure times. The pixelwise kernel using bidirectional flows can be written as
k_{i,\mathbf{x}}(u,v) = \begin{cases} \delta(u\,v^{+} - v\,u^{+}) \,/\, \big(2\tau_i \|\mathbf{u}_{i\to i+1}(\mathbf{x})\|\big) & \text{if } (u,v) \in [\mathbf{0},\, \tau_i \mathbf{u}_{i\to i+1}(\mathbf{x})], \\ \delta(u\,v^{-} - v\,u^{-}) \,/\, \big(2\tau_i \|\mathbf{u}_{i\to i-1}(\mathbf{x})\|\big) & \text{if } (u,v) \in [\tau_i \mathbf{u}_{i\to i-1}(\mathbf{x}),\, \mathbf{0}], \\ 0 & \text{otherwise}, \end{cases}   (3)
where \mathbf{u}_{i\to i+1}(\mathbf{x}) = (u^{+}, v^{+}) and \mathbf{u}_{i\to i-1}(\mathbf{x}) = (u^{-}, v^{-}) denote the pixelwise bidirectional optical flows at frame i. The camera duty cycle of the frame is \tau_i, which denotes the relative exposure time as used in [8], and \delta denotes the Kronecker delta.
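A minimal sketch of how a kernel of this form can be rasterized at one pixel (the helper name, kernel grid size, and sampling density are our illustrative choices): sample points uniformly along the forward segment \tau_i \mathbf{u}_{i\to i+1}(\mathbf{x}) and the backward segment \tau_i \mathbf{u}_{i\to i-1}(\mathbf{x}), and accumulate equal weights into the kernel bins:

```python
import numpy as np

def linear_kernel(flow_fwd, flow_bwd, tau, size=31, samples=50):
    """Rasterize a piecewise-linear kernel in the spirit of Eq. (3).

    flow_fwd, flow_bwd : 2-vectors u_{i->i+1}(x), u_{i->i-1}(x)
    tau                : camera duty cycle (relative exposure time)
    """
    k = np.zeros((size, size))
    c = size // 2  # kernel center corresponds to zero displacement
    for t in np.linspace(0.0, tau, samples):
        for f in (flow_fwd, flow_bwd):
            dx, dy = t * f[0], t * f[1]
            ix, iy = int(round(c + dx)), int(round(c + dy))
            if 0 <= ix < size and 0 <= iy < size:
                k[iy, ix] += 1.0
    return k / k.sum()  # normalize so kernel weights sum to one

# Example: forward flow (6, 2), backward flow (-5, -1.5), duty cycle 0.8.
k = linear_kernel(np.array([6.0, 2.0]), np.array([-5.0, -1.5]), tau=0.8)
```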
Using this pixelwise motion blur kernel approximation, we can easily manage multiple different motion blurs in a frame, unlike conventional methods. The superiority of our locally varying kernel model is shown in Fig. 7. Our kernel model fits blurs from differently moving objects and camera shake much better than the conventional homographybased model.
Moreover, we approximate the spatially varying defocus blur kernel with a Gaussian model, as shown in Fig. 6 (b), and estimate the pixelwise-different standard deviation \sigma_i(\mathbf{x}) of the Gaussian kernel. Although we cannot utilize the features of the blurred frame that conventional methods rely on heavily [21, 31], due to the combined motion blurs, we determine the scales of the defocus blurs through the simultaneous estimation of the latent frames. As shown in Fig. 9, we achieve significant improvements over the state-of-the-art defocus blur map estimator [31] when both motion and defocus blurs exist in a real blurry frame, and we achieve competitive results even when motion blurs are absent.
Now, the proposed data model that handles both motion and defocus blurs is expressed as follows:
E_{\mathrm{data}}(\mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}) = \lambda \sum_i \sum_{\partial \in \{\partial_x, \partial_y\}} \| K_i G_i \, \partial L_i - \partial B_i \|_2^2,   (4)
where the row vector of the motion blur kernel matrix K_i that corresponds to the motion blur kernel at pixel \mathbf{x} is the vector form of k_{i,\mathbf{x}}, with non-negative elements summing to one. Similarly, the row vector of the defocus blur kernel matrix G_i that corresponds to the Gaussian kernel at \mathbf{x} is the vector form of g_{i,\mathbf{x}}, and \boldsymbol{\sigma} denotes the scales (standard deviations of the Gaussian kernels) of the defocus blurs. The linear operator \partial denotes the Toeplitz matrix corresponding to a partial (e.g., horizontal or vertical) derivative filter. The parameter \lambda controls the weight of the data term, and \mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}, and \mathbf{B} denote the sets of latent frames, optical flows, scales of defocus blurs, and blurry frames, respectively.

2.2 A New Optical Flow Constraint and Temporal Regularization
As discussed above, to remove locally varying motion blurs, we employ the bidirectional optical flow model in (4). However, conventional optical flow constraints such as brightness constancy and gradient constancy cannot be utilized directly for flow estimation, since such constraints do not hold between two blurry frames. A blur-aware optical flow estimation method for blurry images was proposed by Portz et al. [30]. This method is based on the commutative law of shift-invariant kernels, whereby the brightness of corresponding points is constant after convolving the blur kernel of each image with the other image. However, the commutative law does not hold when the motion is not translational or when the blur kernels vary spatially. Therefore, this approach works only when the motion is very smooth.

To address this problem, we propose a new model that estimates optical flow between two latent sharp frames, enabling abrupt changes in the motions and blur kernels. With this model, we need not restrict our motion blur kernels to be shift-invariant. Our model is based on the conventional optical flow constraint between latent frames, that is, brightness constancy. The formulation is given by
E_{\mathrm{temporal}}(\mathbf{L}, \mathbf{u}) = \mu \sum_i \sum_{n \in \mathcal{N}(i)} \sum_{\mathbf{x}} \big| L_{i+n}(\mathbf{x} + \mathbf{u}_{i\to i+n}(\mathbf{x})) - L_i(\mathbf{x}) \big|,   (5)
where n denotes the index of a neighboring frame of frame i, and the parameter \mu controls the weight. We apply the robust L1 norm for robustness against outliers and occlusions.
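As a sketch, the warped difference inside (5) can be evaluated with bilinear interpolation; the helper below is hypothetical and assumes the flow is stored as a (2, H, W) array:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def temporal_cost(L_i, L_n, flow):
    """Robust L1 temporal term of Eq. (5) between frame i and a neighbor.

    flow : (2, H, W) array holding u_{i->i+n} = (u, v) per pixel,
           with u the horizontal and v the vertical displacement.
    """
    h, w = L_i.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Sample L_{i+n} at x + u_{i->i+n}(x) with bilinear interpolation.
    warped = map_coordinates(L_n, [yy + flow[1], xx + flow[0]], order=1)
    return np.abs(warped - L_i).sum()  # L1 norm over all pixels
```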
Notably, a major difference between the proposed model and conventional optical flow estimation methods is that ours is a joint problem: the latent frames and the optical flows must be solved simultaneously. The proposed model in (5) therefore estimates latent frames that are temporally coherent among neighboring frames and, at the same time, the optical flows between those frames. As a result, we can estimate accurate flows at motion boundaries, as shown in Fig. 10; notice that our flows at the motion boundaries of the moving car are much sharper than those of the blur-aware flow estimation method of [30].
2.3 Spatial Regularization
To alleviate the difficulties of the highly ill-posed deblurring, optical flow estimation, and defocus blur map estimation problems, it is important to adopt spatial regularizers. We therefore enforce spatial coherence, penalizing spatial fluctuations while allowing discontinuities in the latent frames, flow fields, and defocus blur maps. Assuming that the spatial priors for the latent frames, optical flows, and defocus blur maps are independent, we formulate the spatial regularization as follows:
E_{\mathrm{spatial}}(\mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}) = \sum_i \Big( \|\nabla L_i\|_1 + \nu_\sigma \| W_i \nabla \sigma_i \|_1 + \nu_u \sum_{n \in \mathcal{N}(i)} \| W_i \nabla \mathbf{u}_{i\to i+n} \|_1 \Big),   (6)
where the parameters \nu_\sigma and \nu_u control the weights of the second and third terms.
The first term in (6) is the spatial regularization term for the latent frames. Although sparser norms (e.g., hyper-Laplacian priors) fit the gradient statistics of natural sharp images better [32, 33, 34], we use conventional total variation (TV) based regularization [35, 36, 16], as TV is computationally less expensive and easier to minimize. The second and third terms enforce spatial smoothness of the defocus blur maps and optical flows, respectively. These regularizers are also based on TV regularization and are coupled with an edge map to preserve discontinuities at edges in both vector fields. Similar to the edge map used in the conventional optical flow estimation method [37], our edge map is expressed as follows:
W_i(\mathbf{x}) = \exp\!\big( -\xi \, \| \nabla \tilde{L}_i(\mathbf{x}) \| \big),   (7)
where the fixed parameter \xi controls the weight of the edge map, and \tilde{L}_i is an initial latent image in the iterative optimization framework.
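Under the exponentially decaying form we give in (7), the edge map can be computed as in the short sketch below; the parameter value is illustrative, not the one used in our experiments:

```python
import numpy as np

def edge_map(L_tilde, xi=5.0):
    """Discontinuity-preserving weight of Eq. (7): small near strong edges."""
    gy, gx = np.gradient(L_tilde)          # image gradients along y and x
    grad_mag = np.sqrt(gx**2 + gy**2)      # gradient magnitude per pixel
    return np.exp(-xi * grad_mag)          # weight decays at strong edges
```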
3 Optimization Framework
Under the condition that the camera duty cycle is known, by combining (4), (5), and (6), we obtain the final objective function:
\min_{\mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}} \; E_{\mathrm{data}}(\mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}) + E_{\mathrm{temporal}}(\mathbf{L}, \mathbf{u}) + E_{\mathrm{spatial}}(\mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}).   (8)
Note that, in contrast with Cho et al. [18], which performs multiple steps sequentially, our model finds a solution by minimizing the proposed single objective function in (8). However, because of its non-convexity, we must adopt a practical optimization method to obtain an approximate solution. Therefore, we divide the original problem into several simple subproblems and use conventional iterative, alternating optimization techniques [1, 16, 15] to minimize the original non-convex objective function. In the following sections, we introduce efficient solvers and describe how the unknowns \mathbf{L}, \mathbf{u}, and \boldsymbol{\sigma} are estimated alternately.
3.1 Sharp Video Restoration
If the motion blur kernels K and the defocus blur kernels G are fixed, then the objective function in (8) becomes convex with respect to L, and it can be reformulated as follows:
\min_{\mathbf{L}} \; \lambda \sum_i \sum_{\partial} \| K_i G_i \, \partial L_i - \partial B_i \|_2^2 + \mu \sum_i \sum_{n \in \mathcal{N}(i)} \big\| L_{i+n}(\mathbf{x} + \mathbf{u}_{i\to i+n}(\mathbf{x})) - L_i(\mathbf{x}) \big\|_1 + \sum_i \|\nabla L_i\|_1.   (9)
To restore the latent frames \mathbf{L}, we adopt the conventional convex optimization method proposed in [38] and derive the primal-dual update scheme as follows:
\begin{aligned}
\mathbf{p}^{m+1} &= \frac{\mathbf{p}^m + \eta A \mathbf{L}^m}{\max(1, |\mathbf{p}^m + \eta A \mathbf{L}^m|)}, \\
\mathbf{q}^{m+1} &= \frac{\mathbf{q}^m + \eta \mu D \mathbf{L}^m}{\max(1, |\mathbf{q}^m + \eta \mu D \mathbf{L}^m|)}, \\
\mathbf{L}^{m+1} &= \arg\min_{\mathbf{L}} \; \lambda \sum_i \sum_{\partial} \| K_i G_i \, \partial L_i - \partial B_i \|_2^2 + \frac{\big\| \mathbf{L} - \big(\mathbf{L}^m - \kappa (A^{\top}\mathbf{p}^{m+1} + \mu D^{\top}\mathbf{q}^{m+1})\big) \big\|_2^2}{2\kappa},
\end{aligned}   (10)
where m indicates the iteration number, and \mathbf{p} and \mathbf{q} denote the dual variables of the concatenated latent frames \mathbf{L}. The parameters \eta and \kappa denote the update steps. The linear operator A calculates the spatial differences between neighboring pixels, and the operator D calculates the temporal differences among neighboring frames using the fixed optical flows. The last step in (10) updates the primal variable \mathbf{L}; since it is a quadratic function, we minimize it with the conjugate gradient method.
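The sketch below mirrors the structure of (10) on a deliberately simplified problem (TV regularization with an identity blur operator and no temporal term) so that the dual projection and primal step are visible. It is an illustration under those assumptions, not the full solver, which additionally involves the blur operators K_i, G_i and a conjugate-gradient primal update:

```python
import numpy as np

def tv_primal_dual(B, lam=10.0, eta=0.25, kappa=0.25, iters=100):
    """Chambolle-Pock-style updates mirroring Eq. (10), simplified to
    TV-regularized restoration with an identity blur (illustration only)."""
    L = B.copy()
    p = np.zeros((2,) + B.shape)  # dual variable of the gradient A L
    for _ in range(iters):
        # Dual ascent + projection onto the unit ball (the max(1,|.|) step).
        gy, gx = np.gradient(L)
        p += eta * np.stack([gy, gx])
        p /= np.maximum(1.0, np.sqrt((p**2).sum(0, keepdims=True)))
        # Primal descent: divergence of p, then the quadratic data prox.
        div = np.gradient(p[0], axis=0) + np.gradient(p[1], axis=1)
        L = (L + kappa * (div + lam * B)) / (1.0 + kappa * lam)
        # (The paper instead solves this quadratic subproblem by conjugate
        #  gradient, with K_i and G_i inside the data term.)
    return L
```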
3.2 Optical Flow Estimation
Note that although the latent frames \mathbf{L} and the defocus blur kernels G are fixed, the temporal coherence term and the data term are still non-convex in \mathbf{u}. Let us therefore denote these two terms by a non-convex function \rho as follows:
\rho(\mathbf{u}) = E_{\mathrm{data}}(\mathbf{L}, \mathbf{u}, \boldsymbol{\sigma}) + E_{\mathrm{temporal}}(\mathbf{L}, \mathbf{u}).   (11)
To find the optimal optical flows \mathbf{u}, we first convexify the non-convex function \rho by applying a first-order Taylor expansion. Similar to the technique in [16], we linearize \rho near an initial \mathbf{u}_0 in the iterative process as follows:
\rho(\mathbf{u}) \approx \rho(\mathbf{u}_0) + \nabla \rho(\mathbf{u}_0)^{\top} (\mathbf{u} - \mathbf{u}_0).   (12)
In doing so, (8) can be approximated by a convex function w.r.t. \mathbf{u} as follows:
\min_{\mathbf{u}} \; \rho(\mathbf{u}_0) + \nabla \rho(\mathbf{u}_0)^{\top} (\mathbf{u} - \mathbf{u}_0) + \nu_u \sum_i \sum_{n \in \mathcal{N}(i)} \| W_i \nabla \mathbf{u}_{i\to i+n} \|_1.   (13)
Now, we can apply the convex optimization technique in [38] to the approximated convex function, and the primal-dual update process is expressed as follows:
\begin{aligned}
\mathbf{p}^{m+1} &= \frac{\mathbf{p}^m + \eta W A \mathbf{u}^m}{\max(1, |\mathbf{p}^m + \eta W A \mathbf{u}^m|)}, \\
\mathbf{u}^{m+1} &= \mathbf{u}^m - \kappa \big( \nu_u A^{\top} W \mathbf{p}^{m+1} + \nabla \rho(\mathbf{u}_0) \big),
\end{aligned}   (14)
where \mathbf{p} denotes the dual variable of \mathbf{u} on the vector space. The weighting matrix W is diagonal, and its submatrix associated with \mathbf{u}_{i\to i+n} is defined by the edge map W_i. The linear operator A calculates the spatial differences between the four nearest neighboring pixels, and the parameters \eta and \kappa denote the update steps.
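To make the linearization step in (12) concrete, the sketch below builds a first-order Taylor surrogate of a generic non-convex cost \rho around an initial flow u0, with the gradient estimated numerically; the function names are hypothetical and the element-wise loop is only practical for small problems:

```python
import numpy as np

def linearize(rho, u0, eps=1e-4):
    """First-order Taylor surrogate of Eq. (12):
    rho_hat(u) ~= rho(u0) + g . (u - u0), with g estimated numerically."""
    g = np.zeros_like(u0)
    f0 = rho(u0)
    it = np.nditer(u0, flags=['multi_index'])
    for _ in it:
        u = u0.copy()
        u[it.multi_index] += eps
        g[it.multi_index] = (rho(u) - f0) / eps   # forward difference
    # The convex surrogate plus the TV prior of (13) is then minimized
    # by the primal-dual scheme in (14).
    return lambda u: f0 + (g * (u - u0)).sum(), g
```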
3.3 Defocus Blur Map Estimation
When the latent frames \mathbf{L} and the motion blur kernels K are fixed, we can estimate the defocus blur maps from (8). However, the data term is non-convex, and thus an approximation technique is required to optimize the objective function. Similar to our optical flow estimation technique, we approximate and convexify the original function by linearization.
First, we define a non-convex data function \psi(\boldsymbol{\sigma}) and approximate it near an initial value \boldsymbol{\sigma}_0 as follows:
\psi(\boldsymbol{\sigma}) \approx \psi(\boldsymbol{\sigma}_0) + \nabla \psi(\boldsymbol{\sigma}_0)^{\top} (\boldsymbol{\sigma} - \boldsymbol{\sigma}_0),   (15)
and the approximated convex function for defocus blur map estimation is given by
\min_{\boldsymbol{\sigma}} \; \psi(\boldsymbol{\sigma}_0) + \nabla \psi(\boldsymbol{\sigma}_0)^{\top} (\boldsymbol{\sigma} - \boldsymbol{\sigma}_0) + \nu_\sigma \sum_i \| W_i \nabla \sigma_i \|_1.   (16)
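Analogously, the gradient of the data cost with respect to the defocus scales needed in (15) can be approximated by finite differences on the Gaussian scale; below is a sketch under the simplifying assumption of a spatially uniform \sigma (all names are illustrative):

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def defocus_cost(L, B, k, sigma):
    """Data cost of (4) as a function of a (here uniform) defocus scale."""
    return ((convolve(gaussian_filter(L, sigma), k) - B) ** 2).sum()

def defocus_grad(L, B, k, sigma, eps=1e-3):
    """Finite-difference derivative used to linearize psi in Eq. (15)."""
    return (defocus_cost(L, B, k, sigma + eps)
            - defocus_cost(L, B, k, sigma)) / eps
```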
4 Implementation Details
To handle large blurs and promote fast convergence, we implement our algorithm within the conventional coarse-to-fine framework with empirically determined parameters. In the coarse-to-fine framework, we build an image pyramid with 17 levels for a high-definition (1280x720) video and use a scale factor of 0.9.
Moreover, to reduce the number of unknown optical flows, we estimate only \mathbf{u}_{i\to i+1} and \mathbf{u}_{i\to i-1}, and approximate the flows to farther frames by concatenating adjacent flows. For example, \mathbf{u}_{i\to i+2}(\mathbf{x}) \approx \mathbf{u}_{i\to i+1}(\mathbf{x}) + \mathbf{u}_{i+1\to i+2}(\mathbf{x} + \mathbf{u}_{i\to i+1}(\mathbf{x})), as illustrated in Fig. 11, and the backward flows are handled analogously. Please see our publicly available source code for more details (http://cv.snu.ac.kr/research/~VD/).
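A sketch of this composition (the helper is hypothetical; flows are stored as (2, H, W) arrays): the flow to the second-next frame is the adjacent flow plus the next frame's adjacent flow sampled at the displaced position:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_flows(flow_i, flow_ip1):
    """Approximate u_{i->i+2}(x) = u_{i->i+1}(x)
                                 + u_{i+1->i+2}(x + u_{i->i+1}(x))."""
    _, h, w = flow_i.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    ys, xs = yy + flow_i[1], xx + flow_i[0]      # displaced sample positions
    u = map_coordinates(flow_ip1[0], [ys, xs], order=1)
    v = map_coordinates(flow_ip1[1], [ys, xs], order=1)
    return flow_i + np.stack([u, v])
```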
The overall process of our algorithm is summarized in Algorithm 1. Further details on initialization, duty cycle estimation, and the post-processing step that reduces artifacts are given below.
4.1 Initialization and Duty Cycle Estimation
In this study, we assume that the camera duty cycle is known for every frame. However, when we perform deblurring on conventional datasets, which do not provide exposure information, we apply the technique proposed in [18] to estimate the duty cycle. Contrary to the original method [18], we use optical flows instead of homographies to obtain the initial approximate blur kernels. Therefore, we first estimate flow fields from the blurry images with [39], which runs in near real time, use them as initial flows, and approximate the kernels to estimate the duty cycle. Moreover, we initialize the defocus blur scale with a fixed value.
4.2 Occlusion Detection and Refinement
Our pixelwise kernel estimation naturally entails approximation errors, which cause problems such as ringing artifacts. Specifically, our data model in (4) and our temporal coherence model in (5) are invalid in occluded regions.

To reduce such artifacts from kernel approximation errors and occlusions, we use spatio-temporal filtering as a post-processing step:
\hat{L}_i(\mathbf{x}) = \frac{1}{Z} \sum_{n} \sum_{\mathbf{y} \in \mathcal{P}(\mathbf{x} + \mathbf{u}_{i\to i+n}(\mathbf{x}))} w_{i,n}(\mathbf{x}, \mathbf{y}) \, L_{i+n}(\mathbf{y}),   (18)
where \mathbf{y} denotes a pixel in the 3x3 neighboring patch \mathcal{P} at location \mathbf{x} + \mathbf{u}_{i\to i+n}(\mathbf{x}), and Z is the normalization factor (i.e., the sum of the weights). Notably, we include n = 0 in (18) to enable spatial filtering. Our occlusion-aware weight is defined as follows:
w_{i,n}(\mathbf{x}, \mathbf{y}) = o_{i,n}(\mathbf{x}) \, \exp\!\Big( -\frac{\| P_i(\mathbf{x}) - P_{i+n}(\mathbf{y}) \|^2}{2 s^2} \Big),   (19)
where the occlusion state o_{i,n}(\mathbf{x}) is determined by cross-checking the forward and backward flows, similar to the occlusion detection technique used in [40]. The 5x5 patch P_i(\mathbf{x}) is centered at \mathbf{x} in frame i, and the similarity-control parameter s is fixed in all experiments.
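A sketch of the forward-backward cross-check behind the occlusion state in (19) (the threshold value and helper name are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def occlusion_mask(flow_fwd, flow_bwd, thresh=1.0):
    """Mark pixels as occluded when forward and backward flows disagree,
    similar in spirit to the cross-checking in [40]."""
    _, h, w = flow_fwd.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    ys, xs = yy + flow_fwd[1], xx + flow_fwd[0]
    # Backward flow sampled at the forward-displaced position; for
    # consistent, non-occluded pixels it should cancel the forward flow.
    bu = map_coordinates(flow_bwd[0], [ys, xs], order=1)
    bv = map_coordinates(flow_bwd[1], [ys, xs], order=1)
    err = np.sqrt((flow_fwd[0] + bu) ** 2 + (flow_fwd[1] + bv) ** 2)
    return err > thresh  # True where occluded
```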
5 Motion Blur Dataset
Because conventional evaluation datasets for deblurring [24, 25] are generated under a static scene assumption, they do not provide the complex, spatially varying blurs of dynamic scenes. Therefore, in this section, we present a new method for generating a blur dataset, intended for the quantitative evaluation of non-uniform video deblurring algorithms and for later studies of learning-based deblurring approaches.
5.1 Dataset Generation
Since we assume that motion blur kernels can be approximated using bidirectional optical flows as in (3), we can conversely generate blurry frames by averaging consecutive sharp frames whose relative motions between neighboring frames are smaller than 1 pixel. To do so, we use a GoPro Hero4 handheld camera, which supports recording 240 fps video at 1280x720 resolution. A similar approach was introduced in [41], which used a high-speed camera to generate blurry images; however, it captured only linearly moving objects with a static camera.
Our captured videos include various dynamic scenes as well as static scenes. We calculate the average of n successive frames to generate a single blurry frame; by averaging, realistic motion blurs from both moving objects and camera shake are rendered in the blurry frame, and a 240/n fps blurry video is generated (e.g., a 16 fps video is generated by averaging every 15 frames). Notably, the ground truth sharp frame is chosen as the mid-frame of the averaged window, since we aim to restore the latent frame captured in the middle of the exposure time, as shown in Fig. 6. The duty cycle is thus \tau = 1 over our whole dataset. The videos are recorded with care so that the motion between two neighboring high-speed frames does not exceed 1 pixel, which renders smoother and more realistic blurry frames.
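A minimal sketch of the averaging step (the file names and the use of imageio for I/O are placeholders): every 15 consecutive 240-fps frames are averaged into one blurry frame, and the mid-frame is kept as ground truth:

```python
import numpy as np
import imageio.v2 as imageio

reader = imageio.get_reader('highspeed_240fps.mp4')  # placeholder path
frames, n = [], 15                                   # 240 fps / 15 = 16 fps
for idx, frame in enumerate(reader):
    frames.append(frame.astype(np.float64))
    if len(frames) == n:
        blurry = np.mean(frames, axis=0).astype(np.uint8)  # synthetic blur
        sharp = frames[n // 2].astype(np.uint8)            # mid-frame GT
        imageio.imwrite(f'blurry_{idx:05d}.png', blurry)
        imageio.imwrite(f'sharp_{idx:05d}.png', sharp)
        frames = []
```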
Our dataset mainly contains outdoor scenes, to avoid the flickering of fluorescent light that occurs when capturing indoor scenes with a high-speed camera. We captured numerous scenes in both dynamic and static environments, and each frame has HD (1280x720) resolution. In Fig. 12, some examples of our ground truth frames and rendered blurry frames are shown; the generated blurs vary locally with the depth changes and moving objects. Our dataset is publicly available on our website (http://cv.snu.ac.kr). We provide the generated blurry and corresponding sharp videos as well as the original recordings.
6 Experimental Results
In this section, we empirically demonstrate the superiority of the proposed method over conventional methods.
In Table I and Table II, our deblurring results are quantitatively evaluated on the proposed dataset. For the evaluation, we use fixed parameter values throughout. Since the source codes of other video deblurring methods that can handle non-uniform blur are not available, we evaluate our method in different settings. First, we calculate and compare the PSNR and SSIM values of each original blurry sequence and the corresponding deblurred one. As our dataset contains only motion blurs, we restore the latent frames without considering defocus blur (the defocus blur kernel is set to the identity matrix). Next, to demonstrate the performance of the proposed method in removing defocus blurs, we regenerate the blurry dataset by adding Gaussian blur (σ = 1.5) to the original sharp video before averaging. Using this dataset, which contains both motion blur and defocus blur, we compare our full result against each original blurry sequence and against our deblurring result that does not consider defocus blur. We verify that our approach improves the deblurring results significantly in terms of PSNR and SSIM by removing the blurs from defocus. In Fig. 13, qualitative comparisons on our dataset are shown; our method clearly restores the edges of buildings, letters, and moving persons. However, we also observe some failure cases: in Fig. 14, we fail to estimate the motion of a fast-moving hand, and thus fail in deblurring it, since it is difficult to estimate accurate flows of small structures with distinct motions in the coarse-to-fine framework, as reported in [42].
Table I. PSNR (dB) on the proposed dataset.

        Motion blur only             Motion blur + Gaussian blur (σ = 1.5)
Seq.    Blurry   Ours (w/o defocus)  Blurry   Ours (w/o defocus)  Ours (full)
#1      26.8     27.79               25.88    27.53               27.67
#2      26.5     27.68               24.29    25.18               26.18
#3      33.28    34.78               30.55    31.27               32.65
#4      37.07    36.94               36.52    36.50               36.36
#5      24.34    23.62               23.78    23.05               24.24
#6      26.83    29.07               24.04    25.18               26.46
#7      29.03    30.52               25.95    27.31               28.55
#8      24.80    29.81               23.57    26.05               27.01
#9      28.55    31.41               27.19    29.05               27.74
#10     26.13    30.55               24.83    27.61               28.25
#11     29.24    33.61               27.73    30.47               30.86
Avg.    28.42    30.52               26.76    28.11               28.73
Table II. SSIM on the proposed dataset.

        Motion blur only             Motion blur + Gaussian blur (σ = 1.5)
Seq.    Blurry   Ours (w/o defocus)  Blurry   Ours (w/o defocus)  Ours (full)
#1      0.8212   0.8611              0.7898   0.8374              0.8476
#2      0.8571   0.8847              0.7526   0.7809              0.8197
#3      0.9327   0.9473              0.8750   0.8849              0.9087
#4      0.9701   0.9695              0.9652   0.9665              0.9657
#5      0.7154   0.7181              0.6853   0.6598              0.7293
#6      0.8362   0.9178              0.7362   0.7880              0.8334
#7      0.8751   0.9244              0.7928   0.8360              0.8694
#8      0.8068   0.9269              0.7529   0.8320              0.8582
#9      0.8322   0.9100              0.7908   0.8427              0.8854
#10     0.8083   0.9198              0.7620   0.8432              0.8645
#11     0.9176   0.9608              0.8945   0.9283              0.9367
Avg.    0.8521   0.9037              0.7997   0.8363              0.8653
Next, we compare our deblurring results with those of the state-of-the-art exemplar-based method [18] on the videos used in [18]. As shown in Fig. 16, the captured scenes are dynamic and contain multiple moving objects. The method of [18] fails to restore the moving objects because the object motions are large and distinct from those of the background. By contrast, our results show better performance in deblurring both the moving objects and the backgrounds. Notably, the exemplar-based approach also fails to handle large blurs, as shown in Fig. 16, because the homographies initially estimated from largely blurred images are inaccurate. Moreover, this approach renders excessively smooth results for mid-frequency textures such as trees, as it is based on interpolation without a spatial prior on the latent frames.
We also compare our method with the state-of-the-art segmentation-based approach [15]. The test video, shown in Fig. 17, is a bi-layer scene used in [15]. Although the bi-layer scene is a good example for verifying the performance of the layered model, inaccurate segmentation near the boundaries causes serious artifacts in the restored frame. By contrast, our method does not need segmentation, and it restores the boundaries much better than the layered model.
In Fig. 19, we quantitatively compare the optical flow accuracy with [30] on synthetic blurry images. As the publicly available code of [30] cannot handle Gaussian blur, we synthesize blurry frames containing motion blurs only. Although [30] was proposed to handle blurry images in optical flow estimation, its assumption does not hold at motion boundaries, which are very important for deblurring. Therefore, its optical flow is inaccurate at the motion boundaries of moving objects, whereas our model can cope with abrupt motion changes and thus performs better.
Moreover, we show the deblurring results with and without the temporal coherence term in (5), and verify in Fig. 19 that our temporal coherence model clearly restores edges and significantly reduces ringing artifacts near them.
Finally, further deblurring results on numerous real videos are shown in Fig. 20. Notably, our model successfully restores the face with highly non-uniform blurs caused by the person's rotational motion (Fig. 20(e)).
A video demo and additional results are provided in the supplementary material.
7 Conclusion
In this study, we introduced a novel method that removes general blurs in dynamic scenes, which conventional methods fail to remove. We inferred bidirectional optical flows to approximate the motion blur kernels and estimated the scales of Gaussian blurs to approximate the defocus blur kernels; we thus handle general blurs by estimating a different blur kernel at each pixel. In addition, we proposed a new single energy model that jointly estimates the optical flows, defocus blur maps, and latent frames. We also provided a framework with efficient solvers to minimize the proposed energy function, and intensive experiments on real challenging blurred videos show that our method yields deblurring results superior to those of several state-of-the-art methods. Moreover, we provided a publicly available benchmark dataset for evaluating non-uniform deblurring methods and quantitatively evaluated the proposed method on it. Nevertheless, our model has limitations in handling large displacements, and improving the algorithm in this regard is required. Moreover, since our current implementation is in Matlab, it is time-consuming and resource-intensive; for practical applications, reducing the running time through code optimization and parallel implementation, as well as efficient memory management, will be considered in our future work.
References
 [1] S. Cho and S. Lee, “Fast motion deblurring,” in SIGGRAPH, 2009.
 [2] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. Freeman, “Removing camera shake from a single photograph,” in SIGGRAPH, 2006.
 [3] A. Gupta, N. Joshi, L. Zitnick, M. Cohen, and B. Curless, “Single image deblurring using motion density functions,” in ECCV, 2010.
 [4] M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf, “Fast removal of non-uniform camera shake,” in Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011, pp. 463–470.
 [5] Q. Shan, J. Jia, and A. Agarwala, “High-quality motion deblurring from a single image,” in SIGGRAPH, 2008.
 [6] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, “Non-uniform deblurring for shaken images,” International Journal of Computer Vision, vol. 98, no. 2, pp. 168–186, 2012.
 [7] J.-F. Cai, H. Ji, C. Liu, and Z. Shen, “Blind motion deblurring using multiple images,” Journal of Computational Physics, vol. 228, no. 14, pp. 5057–5071, 2009.
 [8] Y. Li, S. B. Kang, N. Joshi, S. M. Seitz, and D. P. Huttenlocher, “Generating sharp panoramas from motion-blurred videos,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2010.
 [9] Y.-W. Tai, P. Tan, and M. S. Brown, “Richardson-Lucy deblurring for scenes under a projective motion path,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 33, no. 8, pp. 1603–1618, 2011.
 [10] S. Cho, H. Cho, Y.-W. Tai, and S. Lee, “Registration based non-uniform motion deblurring,” in Computer Graphics Forum, vol. 31, no. 7. Wiley Online Library, 2012, pp. 2183–2192.
 [11] C. Paramanand and A. N. Rajagopalan, “Non-uniform motion deblurring for bilayer scenes,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2013.
 [12] H. S. Lee and K. M. Lee, “Dense 3d reconstruction from severely blurred images using a single moving camera,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2013.
 [13] S. Cho, Y. Matsushita, and S. Lee, “Removing non-uniform motion blur from images,” in Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on. IEEE, 2007, pp. 1–8.
 [14] L. Bar, B. Berkels, M. Rumpf, and G. Sapiro, “A variational framework for simultaneous motion estimation and restoration of motion-blurred video,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2007.
 [15] J. Wulff and M. J. Black, “Modeling blurred video with layers,” in ECCV, 2014.
 [16] T. H. Kim and K. M. Lee, “Segmentation-free dynamic scene deblurring,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2014.
 [17] Y. Matsushita, E. Ofek, W. Ge, X. Tang, and H.-Y. Shum, “Full-frame video stabilization with motion inpainting,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 28, no. 7, pp. 1150–1163, 2006.
 [18] S. Cho, J. Wang, and S. Lee, “Video deblurring for hand-held cameras using patch-based synthesis,” ACM Transactions on Graphics, vol. 31, no. 4, pp. 64:1–64:9, 2012.
 [19] S. Bae and F. Durand, “Defocus magnification,” in Computer Graphics Forum, vol. 26, no. 3. Wiley Online Library, 2007, pp. 571–579.
 [20] E. Kee, S. Paris, S. Chen, and J. Wang, “Modeling and removing spatially-varying optical blur,” in Computational Photography (ICCP), 2011 IEEE International Conference on. IEEE, 2011, pp. 1–8.
 [21] X. Zhu, S. Cohen, S. Schiller, and P. Milanfar, “Estimating spatially varying defocus blur from a single image,” Image Processing, IEEE Transactions on, vol. 22, no. 12, pp. 4879–4891, 2013.
 [22] S. Zhuo and T. Sim, “Defocus map estimation from a single image,” Pattern Recognition, vol. 44, no. 9, pp. 1852–1858, 2011.
 [23] T. Hyun Kim and K. Mu Lee, “Generalized video deblurring for dynamic scenes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5426–5434.
 [24] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2009.
 [25] R. Köhler, M. Hirsch, B. Mohler, B. Schölkopf, and S. Harmeling, “Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database,” in Computer Vision–ECCV 2012. Springer, 2012, pp. 27–40.
 [26] L. Xu, J. S. Ren, C. Liu, and J. Jia, “Deep convolutional neural network for image deconvolution,” in Advances in Neural Information Processing Systems, 2014, pp. 1790–1798.
 [27] C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf, “Learning to deblur,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2015.
 [28] J. Sun, W. Cao, Z. Xu, and J. Ponce, “Learning a convolutional neural network for non-uniform motion blur removal,” arXiv preprint arXiv:1503.00593, 2015.
 [29] S. Dai and Y. Wu, “Motion from blur,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2008.
 [30] T. Portz, L. Zhang, and H. Jiang, “Optical flow in the presence of spatially-varying motion blur,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2012.
 [31] J. Shi, L. Xu, and J. Jia, “Just noticeable defocus blur detection and estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 657–665.
 [32] D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-Laplacian priors,” in NIPS, 2009.
 [33] D. Krishnan, T. Tay, and R. Fergus, “Blind deconvolution using a normalized sparsity measure,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2009.
 [34] A. Levin and Y. Weiss, “User assisted separation of reflections from a single image using a sparsity prior,” IEEE Trans. Pattern Analysis Machine Intelligence, vol. 29, no. 9, pp. 1647–1654, 2007.
 [35] Z. Hu, L. Xu, and M.-H. Yang, “Joint depth estimation and camera shake removal from single blurry image,” in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2014.
 [36] T. H. Kim, B. Ahn, and K. M. Lee, “Dynamic scene deblurring,” in Computer Vision (ICCV), 2013 IEEE International Conference on. IEEE, 2013, pp. 3160–3167.
 [37] T. H. Kim, H. S. Lee, and K. M. Lee, “Optical flow via locally adaptive fusion of complementary data costs,” in Computer Vision (ICCV), 2013 IEEE International Conference on. IEEE, 2013, pp. 3344–3351.
 [38] A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, May 2011.
 [39] A. Wedel, T. Pock, C. Zach, H. Bischof, and D. Cremers, “An improved algorithm for TV-L1 optical flow,” in Statistical and Geometrical Approaches to Visual Motion Analysis. Springer, 2009, pp. 23–45.
 [40] C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011, pp. 3017–3024.
 [41] A. Agrawal and R. Raskar, “Optimal single image capture for motion deblurring,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 2560–2567.
 [42] L. Xu, J. Jia, and Y. Matsushita, “Motion detail preserving optical flow estimation,” IEEE Trans. Pattern Analysis Machine Intelligence, vol. 34, no. 9, pp. 1744–1757, 2012.