1 Introduction
Figure 1: (a) Original blurry images; (b) Kim and Lee CVPR 2015 [16]; (c) Sellent et al. ECCV 2016 [31]; (d) Ours.
Image deblurring aims at recovering latent clean images from a single image or multiple images, and is a fundamental task in image processing and computer vision. Image blur can be caused by various factors, for example, optical aberration [30], medium perturbation [18], temperature variation [23], defocus [33], and motion [4, 10, 17, 34, 42]. Blur not only reduces image quality, causing the loss of important details, but also hampers further analysis. Image deblurring has been extensively studied and various methods have been proposed. In this work, we focus on image blur caused by motion. Motion blur is widely encountered in real-world applications such as autonomous driving [5, 7]. Camera and object motion blur becomes more apparent as the exposure time of the camera increases under low-light conditions. It is common to model the blur effect using kernels [17, 21]. Under motion blur, the induced blur kernel may be 2D [16] or 3D [31]. In a scenario where both camera motion and multiple moving objects exist, the blur kernel is, in principle, defined for each pixel. Therefore, conventional blur-removal methods such as [2, 10, 26, 40] cannot be directly applied, since they are restricted to a single or a fixed number of blur kernels, making them ill-suited to general motion blur problems.
On another front, stereo-based depth and motion estimation have witnessed significant progress over the last decade, thanks to the availability of large benchmark datasets such as Middlebury [29] and KITTI [7]. These benchmarks provide realistic scenarios with meaningful object classes and associated ground-truth annotations. The success of stereo-based motion estimation naturally prompts more advanced stereo-based deblurring solutions, promising more accurate motion estimates to compensate for motion blur. Very recently, Sellent et al. [31] proposed to exploit stereo information to aid the challenging video deblurring task, where a piecewise rigid 3D scene flow representation is used to estimate motion blur kernels via local homographies. This makes the strong assumption that 3D scene flow can be reliably estimated, even under adverse conditions. While they reported favorable results on both synthetic and real data, all their experiments are confined to indoor scenarios.
The interplay between motion and blur can be viewed as a chicken-and-egg problem: more effective motion blur removal requires more accurate motion estimation, yet the accuracy of motion estimation highly depends on the quality of the images. We argue that scene flow estimation approaches relying on color brightness constancy are hindered by blurry images. In Fig. 2, we compare the scene flow estimation results of state-of-the-art solutions on images with different amounts of blur. We observe that scene flow estimation performance deteriorates quickly as image blur increases.
Here, we aim to solve the above two problems simultaneously in a unified framework. Our motivation is that motion estimation and video deblurring benefit from each other, i.e., better scene flow estimation leads to a better deblurring result, and a cleaner image leads to better flow estimation. We tackle a more general blur problem caused not only by camera motion but also by moving objects and depth variations in a dynamic scene. We define our problem as “generalized stereo deblurring”, where moving stereo cameras observe a dynamic scene with varying depths. We propose a new pipeline (see Fig. 4) for simultaneously estimating the 3D scene flow and deblurring the images. Using our formulation, we attain significant improvement on numerous challenging real scenes, as illustrated in Fig. 1.
Figure 2: (a) Blurry image & ground-truth flow; (b) Menze CVPR 2015 [25]; (c) Sellent ECCV 2016 [31]; (d) Ours.
The main contributions of our work are as follows:

We propose a novel joint optimization framework to simultaneously estimate the scene flow and deblurred latent images for dynamic scenes. Our deblurring objective benefits from the improved scene flow estimates and the estimated scene structure. Similarly, the scene flow objective allows deriving more accurate pixel-wise spatially varying blur kernels.

Based on the piecewise planar assumption, we obtain a structured blur kernel model. More specifically, the optical flows for pixels in the same superpixel are constrained by a single homography (see Section 3.1).

As our experiments demonstrate, our method can successfully handle complex real-world scenes depicting fast moving objects, camera motion, uncontrolled lighting conditions, and shadows.
2 Related Work
Blur removal is an ill-posed problem, so certain assumptions or additional constraints are required to regularize the solution space. Numerous methods have been proposed to address this problem [16, 17, 31, 34]; they can be categorized into two groups: monocular-based approaches and binocular-based approaches.
Monocular-based approaches often assume that the captured scene is static and has a constant depth. Based on these assumptions, uniform or non-uniform blur kernels are estimated from a single image [10, 12, 14]. Hu et al. [14] proposed to jointly estimate the depth layering and remove non-uniform blur from a single blurry image. While this unified framework is promising, it requires user input for partitioning the depth layers, and the potential depth values must be known in advance. In practical settings, blur is spatially varying due to camera and object motion, which makes kernel estimation a difficult problem.
Since blur parameters and the latent image are difficult to estimate from a single image, monocular-based approaches have been extended to video to remove blur in dynamic scenes [32, 37]. To this end, Deng et al. [4] and He et al. [11] apply feature tracking of a single moving object to obtain 2D displacement-based blur kernels for deblurring. Matsushita et al. [24] and Cho et al. [3] proposed to exploit the existence of salient sharp frames in videos. Nevertheless, the method of Matsushita et al. [24] cannot remove blur caused by moving objects. Moreover, the work of Cho et al. [3] cannot handle fast moving objects whose motions are distinct from that of the background. Wulff and Black [38] proposed a layered model to estimate the different motions of both foreground and background layers. However, these motions are restricted to affine models, and the approach is difficult to extend to multi-layer scenes due to the required depth ordering of the layers.
Kim and Lee [15] proposed a method based on a local linear motion model without segmentation. This method incorporates optical flow estimation to guide the blur kernel estimation and is able to deal with certain object motion blur. In [16], a new method is proposed to simultaneously estimate optical flow and tackle the case of general blur by minimizing a single non-convex energy function. This method represents the state-of-the-art in video deblurring and is used for comparison in the experimental section.
As depth can significantly simplify the deblurring problem, multi-view methods have been proposed to leverage depth information. Building upon the work of Ben-Ezra and Nayar [28], Li et al. [22] extended the hybrid camera with an additional low-resolution video camera, where two low-resolution cameras form a stereo pair and provide a low-resolution depth map. Tai et al. [35] used a hybrid camera system to compute a pixel-wise kernel with optical flow. Xu et al. [39] inferred depth from two blurry images captured by a stereo camera and proposed a hierarchical estimation framework to remove motion blur caused by in-plane translation. Just recently, Sellent et al. [31] proposed a video deblurring technique based on stereo video, where 3D scene flow is estimated from blurry images using a piecewise rigid 3D scene flow representation.
3 Formulation
Our goal is to handle blur in stereo videos caused by camera motion, object motion, and large depth variations in a scene. To this end, we formulate our problem as a joint estimation of scene flow and image deblurring for dynamic scenes. In particular, we rely on the assumptions that the scene can be approximated by a set of 3D planes [41] belonging to a finite number of objects performing rigid motions [25] (the background can be regarded as a single 'object' due to the camera motion). Based on these assumptions, we define our structured blur kernel as well as the energy functions for deblurring in the following sections.
3.1 Blurry Image Formation Based on the Structured Pixel-wise Blur Kernel
Blurry images are formed by the integration of light intensity emitted from the dynamic scene over the aperture time of the camera. This defines the $i$-th image frame in the video sequence as

$$\mathbf{b}_i(\mathbf{x}) = \frac{1}{\tau T}\int_{iT-\frac{\tau T}{2}}^{iT+\frac{\tau T}{2}} L\big(\mathbf{x}+\mathbf{u}_t(\mathbf{x}),\, t\big)\,dt, \quad (1)$$

where $\mathbf{b}_i$ is the blurry frame, $L$ is a continuous latent video sequence over the time interval $[iT-\tau T/2,\, iT+\tau T/2]$, $\tau$ is the duty cycle, and $\mathbf{u}_t$ is the optical flow at time $t$. We denote by $\mathbf{l}_i = L(\cdot, iT)$ the latent image of frame $i$. This leads to the discretized version of the blur model in Eq. (1) as
$$\mathbf{b}_i(\mathbf{x}) = \sum_{\mathbf{y}} k_i(\mathbf{x}, \mathbf{y})\,\mathbf{l}_i(\mathbf{y}), \quad (2)$$

where $k_i(\mathbf{x}, \cdot)$ is the blur kernel vector for the $i$-th image at location $\mathbf{x}$. We obtain the blur kernel matrix $\mathbf{A}_i$ by stacking $k_i(\mathbf{x}, \cdot)$ for all pixels. This leads to the blur model for the $i$-th image as $\mathbf{b}_i = \mathbf{A}_i\,\mathbf{l}_i$. In order to handle multiple types of blur, Kim and Lee [15] approximated the pixel-wise blur kernel using bidirectional optical flows:

$$k_i(\mathbf{x}, \mathbf{y}) = \frac{1}{2\tau}\int_{0}^{\tau}\Big[\delta\big(\mathbf{y}-\mathbf{x}-t\,\mathbf{u}_i^{+}(\mathbf{x})\big)+\delta\big(\mathbf{y}-\mathbf{x}-t\,\mathbf{u}_i^{-}(\mathbf{x})\big)\Big]\,dt, \quad (3)$$
where $k_i(\mathbf{x}, \mathbf{y})$ is the blur kernel at $\mathbf{x}$, $\delta$ denotes the Kronecker delta, and $\mathbf{u}_i^{+}$, $\mathbf{u}_i^{-}$ are the bidirectional optical flows at frame $i$. In particular, $\mathbf{u}_i^{+} = \mathbf{u}_{i\to i+1}$ and $\mathbf{u}_i^{-} = \mathbf{u}_{i\to i-1}$. They jointly estimated the optical flow and the deblurred images. In our setup, the stereo video provides depth information for each frame. Based on our piecewise planar assumption on the scene, the optical flows for pixels lying on the same plane are constrained by a single homography. In particular, we represent the scene in terms of superpixels and a finite number of objects with rigid motions. We denote by $\mathcal{S}$ and $\mathcal{O}$ the set of superpixels and moving objects, respectively. Each superpixel $s \in \mathcal{S}$ is associated with a region in the image and a plane variable $\mathbf{n}_s$ in 3D ($\mathbf{n}_s^{\top}\mathbf{x} = 1$ for points $\mathbf{x}$ on the plane), where $o_s \in \mathcal{O}$ denotes that superpixel $s$ is associated with object $o_s$, inheriting its corresponding motion parameters $(\mathbf{R}_{o_s}, \mathbf{t}_{o_s})$, where $\mathbf{R}_{o_s}$ is the rotation matrix and $\mathbf{t}_{o_s}$ is the translation vector. Note that $(\mathbf{n}_s, \mathbf{R}_{o_s}, \mathbf{t}_{o_s})$ encodes the scene flow information [25]. Given these parameters, we can obtain the homography defined for superpixel $s$ as

$$\mathbf{H}_s = \mathbf{K}\big(\mathbf{R}_{o_s} - \mathbf{t}_{o_s}\mathbf{n}_s^{\top}\big)\mathbf{K}^{-1}, \quad (4)$$

where $\mathbf{K}$ is the camera intrinsic matrix. The optical flow is then defined as

$$\mathbf{u}(\mathbf{x}) = \pi\big(\mathbf{H}_s\,\tilde{\mathbf{x}}\big) - \mathbf{x}, \quad (5)$$

where $\tilde{\mathbf{x}}$ is the homogeneous coordinate of pixel $\mathbf{x}$ in superpixel $s$, and $\pi(\cdot)$ denotes the projection to inhomogeneous coordinates. This shows that the optical flows for pixels in a superpixel are constrained by the homography, which leads to a structured version of the blur kernel defined in Eq. (3). In Fig. 3, we compare our blur kernel estimates with those of Kim and Lee [16] and Sellent et al. [31]. Our kernels are more structured, which also leads to more accurate scene flow estimation.
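The chain from plane-and-motion parameters to a homography, to per-pixel flow, and on to a flow-driven blur can be sketched as follows. This is a minimal numpy sketch under the reconstruction above; the function names, the nearest-neighbour sampling, and the uniform duty-cycle weighting are illustrative choices, not the paper's implementation.

```python
import numpy as np

def homography_from_plane_motion(K, R, t, n):
    # Eq. (4): homography induced by the plane n^T X = 1 moving rigidly
    # by (R, t), observed by a camera with intrinsics K.
    return K @ (R - np.outer(t, n)) @ np.linalg.inv(K)

def flow_from_homography(H, pts):
    # Eq. (5): warp pixel coordinates (N, 2) by H in homogeneous form and
    # subtract the original positions to obtain the optical flow.
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    warped = (H @ homog.T).T
    return warped[:, :2] / warped[:, 2:3] - pts

def blur_along_flow(latent, flow, tau=0.5, n_samples=8):
    # Eq. (2)/(3) sketch: average the latent image along +/- the per-pixel
    # flow (h, w, 2), scaled by the duty cycle tau (nearest-neighbour sampling).
    h, w = latent.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    u, v = flow[..., 0], flow[..., 1]
    acc = np.zeros((h, w))
    for sign in (1.0, -1.0):                      # forward / backward flow
        for k in range(n_samples):
            t = tau * (k + 0.5) / n_samples       # sample position in (0, tau)
            sx = np.clip(np.rint(xs + sign * t * u), 0, w - 1).astype(int)
            sy = np.clip(np.rint(ys + sign * t * v), 0, h - 1).astype(int)
            acc += latent[sy, sx]
    return acc / (2 * n_samples)
```

For a static plane (identity rotation, zero translation) the homography is the identity and the induced flow vanishes, so the "blurry" image equals the latent one; a translation parallel to the image plane shifts every pixel of the plane by the same amount.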
Figure 3: (a) Original blurry image; (b) Kim and Lee [16]; (c) Sellent et al. [31]; (d) Ours.
3.2 Energy Minimization
We formulate the problem in a single framework as a discrete-continuous optimization problem to jointly estimate the scene flow and deblur the images. In particular, our energy minimization model is formulated as

$$E = E_{\text{data}} + E_{\text{smooth}} + E_{\text{reg}}, \quad (6)$$

which consists of a data term, a smoothness term for the scene flow, and a spatial regularization term for the latent clean images. Our model is initially defined on three consecutive pairs of a stereo video sequence; it also allows an input of only two pairs of frames. Details are provided in Section 5. The energy terms are discussed in the following sections.
In Section 4, we solve the optimization problem in an alternating manner to handle the mixed discrete and continuous variables, thus allowing us to jointly estimate the scene flow and deblur the images.
3.3 Data Term
Our data term involves mixed discrete and continuous variables and consists of three different kinds of potentials. The first kind encodes the fact that corresponding pixels across the six latent images should have a similar appearance (brightness constancy). This lets us write the term as

$$\theta_1 \sum_{s\in\mathcal{S}}\sum_{\mathbf{x}\in s}\big\|\mathbf{l}(\mathbf{x}) - \mathbf{l}^{\phi}(\mathbf{x}^{\phi})\big\|_1, \quad (7)$$

where the superscript $\phi\in\{f,b\}$ denotes the warping direction to the other images, with $f$ and $b$ the forward and backward directions, respectively (see Fig. 4). We adopt the robust $\ell_1$ norm to enforce robustness against noise and occlusions.
Our second potential, similar to one term used in [25], is defined as

$$\theta_2 \sum_{s\in\mathcal{S}}\sum_{m\in\Pi_s} \rho_{\alpha}\big(\big\|\pi(\mathbf{H}_s\,\tilde{\mathbf{x}}_m) - \hat{\mathbf{x}}_m\big\|\big),$$

where $\rho_{\alpha}$ denotes the truncated penalty function, $\Pi_s$ is the set of sparse feature points in superpixel $s$, and $\hat{\mathbf{x}}_m$ is the extracted correspondence of feature point $\mathbf{x}_m$ in the target view. More specifically, this potential encodes the information that the warping of feature points based on the homography $\mathbf{H}_s$ should match the extracted correspondences in the target view. In particular, the correspondences are obtained in a similar manner as [25].
The third data term, making use of the observed blurry images, is defined as

$$\theta_3 \sum_{d\in\{h,v\}}\big\|\mathbf{G}_d(\mathbf{A}\mathbf{l} - \mathbf{b})\big\|_1,$$

where $\mathbf{G}_h$ and $\mathbf{G}_v$ are the Toeplitz matrices corresponding to the horizontal and vertical derivative filters. This term encourages the intensity changes in the estimated blurry images to be close to those of the observed blurry images.
3.4 Smoothness Term for Scene Flow
Our energy model exploits a smoothness potential that involves the discrete and continuous variables, similar to the ones used in [25]. In particular, our smoothness term includes three different types of potentials. The first encodes the compatibility of two superpixels sharing a common boundary by respecting depth discontinuities. To this end, we define our potential function as

$$\theta_4 \sum_{s\sim s'}\sum_{\mathbf{x}\in\mathcal{B}_{s,s'}} \rho_{\alpha}\big(d_s(\mathbf{x}) - d_{s'}(\mathbf{x})\big), \quad (8)$$

where $d_s(\mathbf{x})$ is the disparity of pixel $\mathbf{x}$ induced by superpixel $s$ in the reference disparity map, $\mathcal{B}_{s,s'}$ is the set of pixels on the shared boundary of superpixels $s$ and $s'$, and $d_s(\mathbf{x}) - d_{s'}(\mathbf{x})$ measures the disparity difference for pixels on the boundary.
The second potential encourages neighboring superpixels to orient in the same direction. It is expressed as

$$\theta_5 \sum_{s\sim s'} \rho_{\alpha}\!\left(1 - \frac{|\mathbf{n}_s^{\top}\mathbf{n}_{s'}|}{\|\mathbf{n}_s\|\,\|\mathbf{n}_{s'}\|}\right). \quad (9)$$
The third potential encodes the fact that motion boundaries tend to be co-aligned with disparity discontinuities. It can be expressed as

$$\theta_6 \sum_{s\sim s'} |\mathcal{B}_{s,s'}|\,\big[o_s \neq o_{s'}\big]\,\exp\!\Big(\!-\frac{1}{|\mathcal{B}_{s,s'}|}\sum_{\mathbf{x}\in\mathcal{B}_{s,s'}}\big(d_s(\mathbf{x})-d_{s'}(\mathbf{x})\big)^2\Big),$$

where $|\mathcal{B}_{s,s'}|$ denotes the number of pixels shared along the boundary between superpixels $s$ and $s'$, and $[\cdot]$ is the Iverson bracket.
3.5 Regularization Term for Latent Images
Spatial regularization has proven its importance in image deblurring [19, 20]. In our model, we use a total variation term to suppress noise in the latent images while preserving edges and penalizing spatial fluctuations. Therefore, our potential takes the form

$$\theta_7 \sum_{i}\|\nabla\mathbf{l}_i\|_1. \quad (10)$$

Note that the total variation is applied to each color channel separately.
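For concreteness, an anisotropic total variation of this kind can be computed with finite differences. The function name and the forward-difference scheme below are our own illustrative choices.

```python
import numpy as np

def total_variation(img):
    # Anisotropic total variation in the spirit of Eq. (10): sum of absolute
    # horizontal and vertical finite differences over all colour channels.
    img = np.atleast_3d(img).astype(float)
    return np.abs(np.diff(img, axis=1)).sum() + np.abs(np.diff(img, axis=0)).sum()
```

A constant image has zero total variation, while every edge contributes its height times its length, which is why minimizing this term suppresses noise but tolerates sharp discontinuities.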
4 Solution
The optimization of our energy function defined in Eq. (6), which involves both discrete and continuous variables, is challenging. Recall that our model involves two sets of variables, namely the scene flow variables and the latent images. Fortunately, given one set of variables, we can solve for the other efficiently. Therefore, we perform the optimization iteratively, alternating between two steps: estimating the scene flow given the current latent images, and deblurring the images given the current scene flow estimate.
In the following sections, we describe the details for each optimization step.
4.1 Scene Flow Estimation
4.2 Deblurring
Given the scene flow parameters, namely the plane variables and the object motions, the blur kernel matrix $\mathbf{A}$ is derived based on Eq. (3) and Eq. (5). The objective function in Eq. (6) becomes convex with respect to the latent images $\mathbf{l}$ and is expressed as

$$\min_{\mathbf{l}}\ \sum_{d\in\{h,v\}}\big\|\mathbf{G}_d(\mathbf{A}\mathbf{l}-\mathbf{b})\big\|_1 + \theta_7\,\|\nabla\mathbf{l}\|_1. \quad (12)$$

In order to obtain the sharp images $\mathbf{l}$, we adopt the conventional convex optimization method of [1] and derive the primal-dual updating scheme as follows:

$$\begin{aligned}
\mathbf{p}^{n+1} &= \operatorname{proj}_{\|\cdot\|_{\infty}\le 1}\big(\mathbf{p}^{n} + \sigma\,\mathbf{G}(\mathbf{A}\bar{\mathbf{l}}^{n}-\mathbf{b})\big),\\
\mathbf{q}^{n+1} &= \operatorname{proj}_{\|\cdot\|_{\infty}\le \theta_7}\big(\mathbf{q}^{n} + \sigma\,\nabla\bar{\mathbf{l}}^{n}\big),\\
\mathbf{l}^{n+1} &= \mathbf{l}^{n} - \eta\,\big(\mathbf{A}^{\top}\mathbf{G}^{\top}\mathbf{p}^{n+1} + \nabla^{\top}\mathbf{q}^{n+1}\big),\\
\bar{\mathbf{l}}^{n+1} &= 2\,\mathbf{l}^{n+1} - \mathbf{l}^{n},
\end{aligned} \quad (13)$$

where $\mathbf{p}$ and $\mathbf{q}$ are the dual variables, $\sigma$ and $\eta$ are the step sizes, which can be modified at each iteration, and $n$ is the iteration number.
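To illustrate the flavour of this primal-dual scheme, the sketch below runs the Chambolle-Pock iteration [1] on the simpler TV-denoising objective $\min_{l}\ \mu\,\mathrm{TV}(l) + \tfrac{1}{2}\|l-b\|^2$. It keeps the dual-ascent / primal-descent / over-relaxation structure of the update scheme above, but omits the blur operator and the $\ell_1$ data term, so it is a simplified stand-in rather than the paper's solver; all names and step sizes are illustrative.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary (last row/column zero).
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad, so that <grad u, p> = -<u, div p>.
    d = np.zeros_like(px)
    d[:, 0] = px[:, 0]; d[:, 1:] += px[:, 1:] - px[:, :-1]
    d[0, :] += py[0, :]; d[1:, :] += py[1:, :] - py[:-1, :]
    return d

def tv_rof_primal_dual(b, mu=0.1, n_iter=100, sigma=0.3, eta=0.3):
    # Chambolle-Pock for min_l mu*TV(l) + 0.5*||l - b||^2.
    # sigma*eta*||grad||^2 <= 0.3*0.3*8 < 1 guarantees convergence.
    x = b.copy(); x_bar = b.copy()
    px = np.zeros_like(b); py = np.zeros_like(b)
    for _ in range(n_iter):
        gx, gy = grad(x_bar)                      # dual ascent on p
        px = px + sigma * gx; py = py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / mu)
        px /= norm; py /= norm                    # project |p| <= mu pointwise
        x_old = x
        x = (x + eta * div(px, py) + eta * b) / (1.0 + eta)  # prox of data term
        x_bar = 2 * x - x_old                     # over-relaxation
    return x
```

On a constant input the iteration is a fixed point (the duals stay zero), and on a noisy input it returns an image with strictly lower total variation, mirroring the role Eq. (13) plays for the full deblurring objective.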
5 Experiments
To demonstrate the effectiveness of our method, we evaluate it on two datasets: the synthetic Chair sequence [31] and the KITTI dataset [6]. We discuss our results on both datasets in the following sections.
5.1 Experimental Setup
Initialization. Our model is formulated on three consecutive stereo pairs. In particular, we treat the middle frame in the left view as the reference image. We adopt StereoSLIC [41] to generate the superpixels. Given the stereo images, we apply the approach in [8] to obtain sparse feature correspondences. The traditional SGM method [13] is applied to obtain a disparity map, which is used to initialize the plane parameters. The motion hypotheses are generated using RANSAC as implemented in [8]. In order to obtain the model parameters, i.e., the weights of the energy terms, we performed block coordinate descent on a subset of 30 randomly selected training images.
Evaluations. Since our method both estimates the scene flow and deblurs the images, we evaluate the two tasks separately. For the scene flow estimation results, we evaluate both the optical flow and the disparity map with the same error metric, namely the percentage of pixels whose error exceeds a given absolute threshold in pixels and a given fraction of the ground-truth value. We adopt the PSNR to evaluate the deblurred image sequences for the left and right views separately. Thus, for each sequence, we report three kinds of values: disparity errors for three stereo image pairs, flow errors in the forward and backward directions, and PSNR values for six images.
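As a concrete reading of these metrics, a KITTI-style outlier rate and the PSNR can be computed as below. The 3-pixel / 5% thresholds shown are the common KITTI defaults and are an assumption here, since the exact thresholds are not stated in this excerpt.

```python
import numpy as np

def flow_outlier_rate(flow_est, flow_gt, abs_thresh=3.0, rel_thresh=0.05):
    # A pixel counts as erroneous when its end-point error exceeds both
    # abs_thresh pixels and rel_thresh of the ground-truth magnitude
    # (KITTI-default thresholds, assumed here).
    epe = np.linalg.norm(flow_est - flow_gt, axis=-1)
    mag = np.linalg.norm(flow_gt, axis=-1)
    bad = (epe > abs_thresh) & (epe > rel_thresh * mag)
    return bad.mean()

def psnr(img, ref, peak=255.0):
    # Peak signal-to-noise ratio in dB between an image and its reference.
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

The same outlier-rate function applies to disparity maps by treating disparity as a one-channel "flow"; higher PSNR means the deblurred image is closer to the sharp ground truth.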
Baseline Methods. For our scene flow results, we compare with the piecewise rigid scene flow method (PRSF) [36], which ranks first on the KITTI stereo and optical flow benchmark. Note that PRSF is used as a preprocessing stage in [31]. We then compare our deblurring results with the state-of-the-art deblurring approach for monocular video sequences [16] and the approach for stereo videos [31].
5.2 Experimental Results
Results on KITTI. To the best of our knowledge, there are no realistic benchmark datasets that provide blurry images together with their corresponding ground-truth clean images and scene flow. In this paper, we take advantage of the KITTI dataset [6] to create a synthetic Blurred KITTI dataset (which will be made publicly available) with realistic scenery. It contains 199 scenes, each of which includes 6 images. Since the KITTI benchmark does not provide dense ground-truth flow, we use a state-of-the-art scene flow method [25] to generate dense ground-truth flows. Given the dense scene flow, the blurry images are generated using the piecewise linear 2D kernel; please refer to [16] and [31] for more details. The blur is caused by both object motion and camera motion, with occlusions and shadows.
We evaluate results by averaging errors and PSNR scores over all stereo image pairs. Table 1 shows the PSNR values, disparity errors, and flow errors averaged over the Blurred KITTI dataset. Our method consistently outperforms all baselines. We achieve the minimum error scores of 10.01% for optical flow and 6.82% for disparity in the reference view. In Fig. 5, we show qualitative results of our method and other methods on sample sequences from our dataset. Fig. 6 and Fig. 7 show the scene flow estimation and deblurring results on the Blurred KITTI dataset.
We then choose a subset of 50 more challenging sequences with large motion from the 199 scenes as test images, covering daily traffic scenes in urban areas (30 sequences), rural areas (10 sequences), and highways (10 sequences). Table 2 shows the PSNR values, disparity errors, and flow errors averaged over these 50 test sequences of the Blurred KITTI dataset. Fig. 8 (left) shows the performance of our deblurring stage with respect to the number of iterations. While we use 5 iterations in all our experiments, our results indicate that 3 iterations are sufficient in most cases to achieve optimal performance under our model.
| KITTI Dataset | Disparity (m) | Disparity (m+1) | Flow (Left) | Flow (Right) | PSNR (Left) | PSNR (Right) |
|---|---|---|---|---|---|---|
| Vogel et al. [36] | 8.20 | 8.50 | 13.62 | 14.59 | / | / |
| Kim and Lee [16] | / | / | 38.89 | 39.45 | 28.25 | 29.00 |
| Sellent et al. [31] | 8.20 | 8.50 | 13.62 | 14.59 | 27.75 | 28.52 |
| Ours (2 Frames) | 7.02 | 8.55 | 11.44 | 19.34 | 30.24 | 30.71 |
| Ours (3 Frames) | 6.82 | 8.36 | 10.01 | 11.45 | 29.80 | 30.30 |
| Our Dataset | Disparity (m) | Disparity (m+1) | Flow (Left) | Flow (Right) | PSNR (Left) | PSNR (Right) |
|---|---|---|---|---|---|---|
| Vogel et al. [36] | 6.67 | 6.70 | 7.26 | 7.90 | / | / |
| Kim and Lee [16] | / | / | 25.83 | 26.36 | 29.58 | 30.30 |
| Sellent et al. [31] | 6.67 | 6.70 | 7.26 | 7.90 | 28.73 | 29.44 |
| Ours (2 Frames) | 4.98 | 5.82 | 6.12 | 13.06 | 32.22 | 32.62 |
| Ours (3 Frames) | 4.90 | 5.76 | 6.16 | 6.17 | 31.80 | 32.28 |
Results on the Sellent et al. [31] dataset. We further evaluate our approach on the dataset of [31], where the blurry images are generated by a 3D kernel model. These sequences contain four real and four synthetic scenes, and each includes six blurry images together with their sharp counterparts; ground-truth scene flow is only available for the synthetic scene “Chair”. We thus report the quantitative comparison on the scene “Chair” between our method and state-of-the-art methods in Table 3, where the evaluation results are averaged over 4 images. We also present qualitative results for real images from this dataset in Fig. 9. Fig. 8 (right) shows the performance comparison in deblurring between our method and the other baselines with respect to iterations on the scene “Chair”. These results affirm our assumption that simultaneously solving scene flow estimation and video deblurring benefits both tasks, and that a simple combination of the two stages cannot achieve comparable results.
(a) Blurry images; (b) Our results.
| Chair video | Disparity (%) | Flow Error (%) | PSNR (dB) |
|---|---|---|---|
| Menze [25] | 1.17 | 9.33 | / |
| Vogel [36] | 1.34 | 2.13 | / |
| Kim [16] | / | 9.08 | 19.95 |
| Sellent [31] | 1.34 | 2.13 | 23.07 |
| Ours (2 Frames) | 1.28 | 1.22 | 23.13 |
| Ours (3 Frames) | 1.15 | 1.18 | 23.26 |
Results on another blur model: We have also tested our method on another blur generation model, in which the blurred image is the average of three consecutive frames [9, 27]. The results are shown in Table 4 and Fig. 10, respectively, where our method again achieves the best performance.
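A sketch of this blur generation model, under the stated assumption that the blurry image is simply the mean of consecutive sharp frames:

```python
import numpy as np

def average_blur(frames):
    # Blur generation in the spirit of [9, 27]: the blurry image is the
    # mean of a list of consecutive sharp frames of equal shape.
    return np.mean(np.stack(frames, axis=0), axis=0)
```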
| | Kim [16] | Sellent et al. [31] | Ours |
|---|---|---|---|
| PSNR (dB) | 23.21 | 23.31 | 23.89 |
| SSIM | 0.781 | 0.764 | 0.786 |
(a) Blurry image; (b) Kim and Lee [16]; (c) Sellent et al. [31]; (d) Ours.
Runtime: In all experiments, we simultaneously compute the scene flow in both directions and restore six blurry images. Our MATLAB implementation with C++ wrappers requires a total runtime of 40 minutes to process one scene (6 images, 3 iterations) on a single i7 core running at 3.6 GHz.
6 Conclusion
In this paper, we presented a joint optimization framework to tackle the challenging task of stereo video deblurring, where scene flow estimation and video deblurring are solved in a coupled manner. Under our formulation, the motion cues from scene flow estimation and the blur information reinforce each other, producing superior results compared with conventional scene flow estimation or stereo deblurring methods. We have demonstrated the benefits of our framework on extensive synthetic and real stereo sequences.
Acknowledgement
This work was supported in part by China Scholarship Council (201506290130), Australian Research Council (ARC) grants (DP150104645, DE140100180), and Natural Science Foundation of China (61420106007, 61473230, 61135001), and Aviation fund of China (2014ZC5303). We thank all reviewers for their valuable comments.
References
 [1] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
 [2] Sunghyun Cho and Seungyong Lee. Fast motion deblurring. In ACM Transactions on Graphics, volume 28, pages 145:1–145:8. ACM, 2009.
 [3] Sunghyun Cho, Jue Wang, and Seungyong Lee. Video deblurring for hand-held cameras using patch-based synthesis. ACM Transactions on Graphics, 31(4):64, 2012.
 [4] Xiaoyu Deng, Yan Shen, Mingli Song, Dacheng Tao, Jiajun Bu, and Chun Chen. Video-based non-uniform object motion blur estimation and deblurring. Neurocomputing, 86:170–178, 2012.

 [5] Uwe Franke and Armin Joos. Real-time stereo vision for urban traffic scene understanding. In IEEE Symposium on Intelligent Vehicles, pages 273–278, 2000.
 [6] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, page 0278364913491297, 2013.
 [7] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the KITTI vision benchmark suite. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 3354–3361, 2012.
 [8] Andreas Geiger, Julius Ziegler, and Christoph Stiller. StereoScan: Dense 3D reconstruction in real-time. In IEEE Symposium on Intelligent Vehicles, pages 963–968, 2011.
 [9] Dong Gong, Jie Yang, Lingqiao Liu, Yanning Zhang, Ian Reid, Chunhua Shen, Anton van den Hengel, and Qinfeng Shi. From motion blur to motion flow: a deep learning solution for removing heterogeneous motion blur. arXiv preprint arXiv:1612.02583, 2016.
 [10] Ankit Gupta, Neel Joshi, C Lawrence Zitnick, Michael Cohen, and Brian Curless. Single image deblurring using motion density functions. In Proc. Eur. Conf. Comp. Vis., pages 171–184. Springer, 2010.
 [11] XC He, T Luo, SC Yuk, KP Chow, KYK Wong, and RHY Chung. Motion estimation method for blurred videos and application of deblurring with spatially varying blur kernels. In IEEE International Conference on Computer Sciences and Convergence Information Technology (ICCIT), pages 355–359, 2010.
 [12] Michael Hirsch, Christian J Schuler, Stefan Harmeling, and Bernhard Schölkopf. Fast removal of non-uniform camera shake. In Proc. IEEE Int. Conf. Comp. Vis., pages 463–470, 2011.
 [13] Heiko Hirschmuller. Stereo processing by semi-global matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell., 30(2):328–341, 2008.
 [14] Zhe Hu, Li Xu, and MingHsuan Yang. Joint depth estimation and camera shake removal from single blurry image. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2893–2900, 2014.
 [15] Tae Hyun Kim and Kyoung Mu Lee. Segmentation-free dynamic scene deblurring. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2766–2773, 2014.
 [16] Tae Hyun Kim and Kyoung Mu Lee. Generalized video deblurring for dynamic scenes. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5426–5434, 2015.
 [17] Jiaya Jia. Mathematical models and practical solvers for uniform motion deblurring. Motion Deblurring: Algorithms and Systems, page 1, 2014.
 [18] Sing Bing Kang. Automatic removal of chromatic aberration from a single image. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1–8, 2007.
 [19] Dilip Krishnan and Rob Fergus. Fast image deconvolution using hyper-Laplacian priors. In Proc. Adv. Neural Inf. Process. Syst., pages 1033–1041, 2009.
 [20] Dilip Krishnan, Terence Tay, and Rob Fergus. Blind deconvolution using a normalized sparsity measure. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 233–240, 2011.
 [21] Seungyong Lee and Sunghyun Cho. Recent advances in image deblurring. In SIGGRAPH Asia Courses, page 6. ACM, 2013.

 [22] Feng Li, Jingyi Yu, and Jinxiang Chai. A hybrid camera for motion deblurring and depth map super-resolution. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1–8, 2008.
 [23] Xin Li. Fine-granularity and spatially-adaptive regularization for projection-based image deblurring. IEEE Trans. Image Proc., 20(4):971–983, 2011.
 [24] Yasuyuki Matsushita, Eyal Ofek, Weina Ge, Xiaoou Tang, and Heung-Yeung Shum. Full-frame video stabilization with motion inpainting. IEEE Trans. Pattern Anal. Mach. Intell., 28(7):1150–1163, 2006.
 [25] Moritz Menze and Andreas Geiger. Object scene flow for autonomous vehicles. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 3061–3070, 2015.
 [26] Tomer Michaeli and Michal Irani. Blind deblurring using internal patch recurrence. In Proc. Eur. Conf. Comp. Vis., pages 783–798. Springer, 2014.
 [27] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. arXiv preprint arXiv:1612.02177, 2016.
 [28] S. K. Nayar and M. Ben-Ezra. Motion-based motion deblurring. IEEE Trans. Pattern Anal. Mach. Intell., 26(6):689–698, 2004.
 [29] Daniel Scharstein and Richard Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comp. Vis., 47(1–3):7–42, 2002.
 [30] Christian J Schuler, Michael Hirsch, Stefan Harmeling, and Bernhard Schölkopf. Blind correction of optical aberrations. In Proc. Eur. Conf. Comp. Vis., pages 187–200. Springer, 2012.
 [31] Anita Sellent, Carsten Rother, and Stefan Roth. Stereo video deblurring. In Proc. Eur. Conf. Comp. Vis., pages 558–575. Springer, 2016.
 [32] Hee Seok Lee and Kyoung Mu Lee. Dense 3D reconstruction from severely blurred images using a single moving camera. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 273–280, 2013.
 [33] Jianping Shi, Li Xu, and Jiaya Jia. Just noticeable defocus blur detection and estimation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 657–665, 2015.

 [34] Jian Sun, Wenfei Cao, Zongben Xu, and Jean Ponce. Learning a convolutional neural network for non-uniform motion blur removal. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 769–777, 2015.
 [35] Yu-Wing Tai, Hao Du, Michael S Brown, and Stephen Lin. Image/video deblurring using a hybrid camera. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1–8, 2008.
 [36] Christoph Vogel, Konrad Schindler, and Stefan Roth. 3d scene flow estimation with a piecewise rigid scene model. Int. J. Comp. Vis., 115(1):1–28, 2015.
 [37] Oliver Whyte, Josef Sivic, Andrew Zisserman, and Jean Ponce. Non-uniform deblurring for shaken images. Int. J. Comp. Vis., 98(2):168–186, 2012.
 [38] Jonas Wulff and Michael Julian Black. Modeling blurred video with layers. In Proc. Eur. Conf. Comp. Vis., pages 236–252. Springer, 2014.
 [39] Li Xu and Jiaya Jia. Depth-aware motion deblurring. In Proc. IEEE Int. Conf. Computational Photography, pages 1–8, 2012.
 [40] Li Xu, Shicheng Zheng, and Jiaya Jia. Unnatural l0 sparse representation for natural image deblurring. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1107–1114, 2013.
 [41] Koichiro Yamaguchi, David McAllester, and Raquel Urtasun. Robust monocular epipolar flow estimation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1862–1869, 2013.
 [42] Shicheng Zheng, Li Xu, and Jiaya Jia. Forward motion deblurring. In Proc. IEEE Int. Conf. Comp. Vis., pages 1465–1472, December 2013.