Motion blur is one of the most common factors degrading image quality. It often arises when the image content changes quickly (e.g., due to fast camera motion) or when the environment is illuminated poorly, hence necessitating longer exposure times. Combining both situations, e.g., a self-driving car driving at dusk, further aggravates the problem. As many computer vision algorithms such as semantic segmentation, object detection, or visual odometry rely on visual input, blurry images challenge the performance of these algorithms. It is well known that many algorithms (e.g., depth prediction, feature detection, motion estimation, or object recognition) suffer from motion blur [25, 17, 33, 26]. The motion deblurring problem has thus received considerable attention in the past [28, 7, 21, 32, 17].
Existing techniques to solve this problem can be classified into two categories. The first type of approaches formulates the problem as an optimization problem [16, 37, 4, 28, 18, 2], where the latent sharp image and/or the blur kernel are optimized using gradient descent. One advantage of these methods is that they do not require any ground truth sharp images. However, the resulting solvers usually have a large computational complexity, which limits their applicability in time-constrained settings such as real-time robotic visual perception. Handcrafted priors on either the image or the blur kernel further limit their performance.
The second type of approaches phrases the task as a learning problem. Building upon recent advances in deep convolutional neural networks, state-of-the-art results have been obtained for both single image deblurring [21, 32] and video deblurring, outperforming optimization-based techniques in terms of both quality and efficiency. However, learning-based methods typically require full supervision in the form of corresponding pairs of blurred and sharp images. Unfortunately, obtaining such pairs is not always easy for two main reasons. First, not every camera is able to capture images at a sufficiently high frame rate (1000 or more frames per second) such that the recorded frames can be used to synthesize training data. Second, it is difficult to obtain good quality images in the real scenarios where motion blur actually occurs (e.g., at night): a high frame rate limits the exposure time and would thus make the captured images extremely dark or even unusable.
In this paper, we propose a novel approach for self-supervised image deblurring which only relies on real-world blurry image sequences for training. Self-supervised learning improves the network's generalization performance by enabling the network to adapt to scenarios where ground truth sharp images are not available. Our model contains a deblurring network and an optical flow estimation network. However, instead of using ground truth sharp images for supervision [21, 32, 17, 10], we pose the task as an inverse rendering problem and take advantage of the physical image formation process. More specifically, given two consecutive blurry frames of a video sequence, we first predict the corresponding deblurred images using a deep neural network. A second deep neural network takes both deblurred images as input and computes the corresponding optical flow. Using this prediction, and assuming a locally linear blur kernel, our model re-renders the blurred images and compares the results to the original blurry inputs using a photometric loss function. Moreover, we constrain the optical flow network using a photo-consistency loss function. Our entire model can be trained end-to-end from pairs of consecutive blurry images captured with a consumer video camera. At test time, our network takes a single blurred image and deblurs it in real time on a single GTX 1080Ti graphics card using the learned parameters. As illustrated in Fig. 1, our approach is competitive with a state-of-the-art supervised method despite being fully self-supervised.
Our second contribution is a novel synthetic dataset and a real dataset. The synthetic dataset has 3606 blurry-sharp image pairs recorded with a professional high-speed camera mounted on a ground vehicle. The real dataset has 2302 blurry images and is recorded with a normal camera. We use both datasets to evaluate our algorithm against several baselines both quantitatively and qualitatively.
II Related Work
Motion deblurring methods can be categorized into two groups: those that assume spatially uniform blur [27, 16, 37, 4, 28, 18, 2] and those considering spatially varying blur [35, 7, 31]. Uniform deblurring methods assume that the blur kernel is identical for every pixel of the input image, whereas spatially varying deblurring methods allow the blur kernel to change with the pixel's spatial location. Motion deblurring methods can also be classified into non-blind [27, 16, 31] and blind [37, 4, 28, 18, 2, 35, 7] methods. Non-blind deblurring methods assume a known blur kernel to recover the latent sharp image. In contrast, blind deblurring methods need to simultaneously recover both the latent image and the blur kernel. In this paper, we solve the challenging blind single image deblurring problem with spatially varying blur.
Optimization-based methods rely on the blur formation model to recover the latent sharp image by minimizing an energy function [16, 37, 4, 28, 18, 2, 42], e.g., using Gaussian [16, 37, 2] or Poisson [27, 31] likelihood functions in the context of maximum-a-posteriori (MAP) estimation. Depending on the number of input blurry images, additional terms can be formulated by warping the other image(s) to a reference image using either dense flow or by combining relative camera poses with dense depth maps [15, 22]. Due to the nonlinear and ill-posed (in the case of a single image) nature of the problem, prior information on either the motion blur kernel or the latent sharp image must be used to constrain the solution space [16, 37, 4, 28, 2].
While optimization-based approaches often offer better generalization performance, they are usually computationally expensive, which precludes their use in time-constrained applications. We leverage the image formation model commonly used by optimization-based approaches to construct a loss term for training our network in a self-supervised fashion. Since we optimize the parameters of our network only once, during training, the computationally complex optimization does not occur at test time, where we use standard, efficient feed-forward inference.
Deep learning-based methods use convolutional neural networks (CNNs) to recover the latent sharp image. Xu et al. propose a CNN with two sub-networks: a two-hidden-layer deconvolution network and a two-hidden-layer outlier rejection network. The network is trained end-to-end with known ground truth sharp images. An even deeper network with 15 layers was later proposed for text image deblurring. Nah et al. further increased the number of layers to 40 in a multi-scale manner, resulting in a network with 120 layers over three scales. To further improve performance, an adversarial loss was used in DeblurGAN. Other works [32, 41] use recurrent neural networks for single image deblurring.
More recently, network architectures for multi-frame inputs, which are able to exploit temporal information, have been proposed [29, 36, 9]. Su et al. use a network with skip connections for video deblurring, and Wieschollek et al. exploit temporal information using a recurrent architecture. A spatio-temporal recurrent architecture with a small computational footprint has also been proposed.
Deep learning-based approaches usually outperform optimization-based methods in terms of both efficiency and image quality. However, nearly all of the recent deep learning-based methods are trained in a fully supervised manner, with only a few notable exceptions. Madam et al. adapted CycleGAN to the single image deblurring task. Using unpaired sharp images for training, they obtained good performance in specific image domains (e.g., text, faces). In contrast to our approach, they require (unpaired) sharp images for training and perform poorly on blurry images “in the wild”, as demonstrated by our experiments. Chen et al. propose to include a self-consistency loss for supervision. However, they report that their self-supervised model leads to degenerate solutions and hence optimize a hybrid loss function which relies heavily on supervision in the form of sharp images. Furthermore, in contrast to our approach, their method requires triplets of blurry images for training and uses a memory- and computationally-expensive look-up table for representing the blur kernel in a differentiable fashion. We avoid this problem using forward warping and differentiable mesh rendering.
III Method
Fig. 1 shows the overall architecture of our model. It comprises four main parts: the DeblurNet, the FlowNet, the reblur block, and image warping. The DeblurNet is used for single image motion deblurring. It accepts a single blurry image as input and outputs the corresponding sharp image. The two deblurred images are then fed into the FlowNet to estimate a bi-directional dense optical flow field, which is used to compute spatially varying blur kernels for each blurry image. Given the estimated blur kernels, we reblur the latent sharp images to form a self-consistency loss that supervises the training of our network. The FlowNet is trained by maximizing cross-view photometric consistency, which is estimated by image warping. While our approach uses two images for training, the DeblurNet only uses a single input image. After training, our method can thus be used for single image deblurring. The whole network is trained end-to-end without using any ground truth data in the form of sharp images or optical flow. We will now present all components of our model (i.e., the deblurring, optical flow, reblurring and image warping components as well as the loss functions) in detail.
III-A Deblurring and Optical Flow
For the deblurring and optical flow modules, we take advantage of existing neural network architectures which have performed well in the past for the respective supervised learning tasks [21, 17, 32, 30, 10]. In particular, we adopt the single image deblurring network from Tao et al. and the dense optical flow estimation network PWC-Net from Sun et al. We make the following modifications to the deblurring network for our particular problem: 1) We replace the deconvolution layer with bilinear upsampling followed by a 3x3 convolution to avoid upsampling artifacts. 2) We add one more Encoder-Decoder block to increase the capacity of the network. 3) We train the network at a single scale without using the LSTM layer to improve both training and test efficiency. The resulting network is more efficient than the original network while retaining similar deblurring performance.
III-B Reblurring
The reblurring module encapsulates the physical image formation process, which blurs a sharp image based on the optical flow. Digital cameras operate by collecting photons during the time of exposure and converting them into a measurable charge. This process can be formalized by considering the blurred color image $B$ as the result of integrating virtual sharp images $S_t$:

$$B(\mathbf{x}) = \frac{1}{T}\int_0^T S_t(\mathbf{x})\,dt \approx \frac{1}{N}\sum_{i=1}^{N} S_i(\mathbf{x}) \tag{1}$$

Here, $T$ is the exposure time, $\mathbf{x}$ represents the pixel location, $B(\mathbf{x})$ denotes the motion blurred image at pixel $\mathbf{x}$, and $S_t(\mathbf{x})$ is the virtual sharp image at pixel $\mathbf{x}$ and time $t$. The continuous integration can be approximated using a finite number $N$ of virtual sharp frames $S_1,\dots,S_N$. We denote the central reference frame, which is the latent sharp image to be estimated, as $S_c$.
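As a minimal illustration of this discrete approximation (a sketch only, assuming grayscale images stored as nested Python lists; `synthesize_blur` is a hypothetical helper name, not from the paper's code):

```python
def synthesize_blur(virtual_frames):
    """Approximate the blurred image as the average of N virtual sharp frames."""
    n = len(virtual_frames)
    height, width = len(virtual_frames[0]), len(virtual_frames[0][0])
    blurred = [[0.0] * width for _ in range(height)]
    for frame in virtual_frames:
        for y in range(height):
            for x in range(width):
                blurred[y][x] += frame[y][x] / n
    return blurred

# A bright dot moving one pixel per virtual frame leaves a streak in the blur.
frames = []
for i in range(3):
    row = [0.0] * 4
    row[i] = 1.0  # the dot sits at column i in virtual frame i
    frames.append([row])

blurred = synthesize_blur(frames)  # each visited column receives 1/3
```

The moving dot is smeared over the three visited pixels, which is exactly the streak pattern a real sensor integrates during the exposure.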
As the exposure time is typically small (200 ms), we may assume that during the time of exposure the image content is primarily affected by image motion and not by other changes such as object appearance or illumination. We thus model the virtual sharp frames $S_i$ as the result of the sharp central reference frame $S_c$ warped by optical flow $F_{i\to c}$:

$$S_i(\mathbf{x}) = S_c(\mathbf{x} + F_{i\to c}(\mathbf{x})) \tag{2}$$

Here, $F_{i\to c}(\mathbf{x})$ denotes the optical flow from virtual image $S_i$ to reference image $S_c$ at pixel $\mathbf{x}$. Thus, we can reformulate (1) as

$$B(\mathbf{x}) \approx \frac{1}{N}\sum_{i=1}^{N} S_c(\mathbf{x} + F_{i\to c}(\mathbf{x})) \tag{3}$$
and estimate $S_c$ as well as the optical flow fields $F_{i\to c}$ instead of all virtual frames $S_i$ for solving the deblurring problem. However, the problem is still severely underconstrained, as we would need to estimate one optical flow field per virtual frame $S_i$.
We therefore further simplify the model by assuming linear motion during the time of exposure. This is a reasonable assumption in many scenarios where the exposure time is comparably small and rapid motion changes during this time are prevented by the mass and inertia of physical objects (e.g., when the camera is mounted to a vehicle).
Let $F_{c\to N}(\mathbf{x})$ denote the optical flow from frame $S_c$ to frame $S_N$ at pixel $\mathbf{x}$. Assuming linear motion and equidistant time steps, we obtain the optical flow from frame $S_c$ to frame $S_i$ as $F_{c\to i} = \frac{i-c}{N-c}\,F_{c\to N}$. Note that in this model the direction of the optical flow is reversed compared to (3). We must therefore apply forward warping to obtain the virtual sharp images $S_i$. This yields

$$B(\mathbf{x}) \approx \frac{1}{N}\sum_{i=1}^{N} \mathcal{W}_{F_{c\to i}}(S_c)(\mathbf{x}), \tag{4}$$

where the operator $\mathcal{W}_{F_{c\to i}}$ warps the reference frame $S_c$ into the virtual frame $S_i$ based on the interpolated flow. We next describe our implementation of the forward warping operator $\mathcal{W}$. Note that this operator needs to be differentiable, as both the reference frame $S_c$ and the optical flow are outputs of neural networks.
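Under the linear-motion assumption, all per-frame flows are scalar multiples of a single flow field. A sketch of the per-pixel scaling, assuming the given flow spans from the first to the last virtual frame (`scale_flow` is a hypothetical helper, not the paper's API):

```python
def scale_flow(full_flow, i, n):
    """Flow from the central virtual frame to virtual frame i under linear motion.

    full_flow: (dx, dy) displacement spanning frames 0 .. n-1 at one pixel.
    i:         target virtual frame index; the central frame is (n - 1) / 2.
    """
    center = (n - 1) / 2.0
    s = (i - center) / (n - 1)  # signed fraction of the full displacement
    return (full_flow[0] * s, full_flow[1] * s)
```

Frames before the central one receive negative (reversed) flow, frames after it positive flow, and the central frame itself is left unwarped.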
We first construct a regular triangular lattice from the pixel grid by connecting vertices of adjacent pixels, as shown in Fig. 3 for a small example image. We then warp each vertex of this lattice according to the optical flow $F_{c\to i}$. The intensities of $S_i$ (i.e., of the blue pixels) are obtained by linear interpolation. (Note that bilinear interpolation cannot be applied, since the warped grid might not be rectangular due to the non-uniform optical flow, as shown in Fig. 3.) Consider the red pixel $\mathbf{p}$ in Fig. 3 as an example, and let $\mathbf{v}_1$, $\mathbf{v}_2$ and $\mathbf{v}_3$ denote the positions of the vertices of the warped triangle which covers it. Then, $S_i(\mathbf{p})$ is obtained as

$$S_i(\mathbf{p}) = w_1\,S_c(\mathbf{v}_1) + w_2\,S_c(\mathbf{v}_2) + w_3\,S_c(\mathbf{v}_3), \tag{5}$$

where $w_1$, $w_2$ and $w_3$ denote the barycentric coordinates of the point $\mathbf{p}$ in the triangle. The synthesized motion blurred image can then be computed as the average of all the warped frames as described in (1). In the case of occlusions, i.e., when multiple triangles overlap a single pixel, we consider the triangle with the largest motion to be in front. This is a commonly used heuristic which often holds true in practice (in particular when image motion is dominated by camera motion). Note that due to the linear interpolation, the warping function is piecewise smooth. As illustrated in Fig. 1, our model is symmetric: we reblur both the first and the second frame and compare the reblurred results to the original blurry images using a photoconsistency loss. In the following, we use $\hat{B}$ to denote the reblurred image and $B$ to denote the blurred input image.
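The barycentric interpolation step can be sketched in isolation as follows (plain Python with hypothetical helper names; the paper's implementation is a differentiable mesh renderer, which this toy version does not attempt to reproduce):

```python
def barycentric_weights(p, v1, v2, v3):
    """Barycentric coordinates of point p with respect to triangle (v1, v2, v3)."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, v1, v2, v3
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return w1, w2, 1.0 - w1 - w2

def interpolate(p, verts, intensities):
    """Blend the three vertex intensities using the barycentric weights of p."""
    w1, w2, w3 = barycentric_weights(p, *verts)
    return w1 * intensities[0] + w2 * intensities[1] + w3 * intensities[2]
```

Because the weights are affine functions of the query position, the interpolated intensity is piecewise smooth in both the flow and the image, which is what makes the forward warp differentiable.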
III-C Image Warping
We found that a photoconsistency loss on the reblurred images alone is insufficient to constrain the optical flow. We thus add an additional self-supervised photometric loss on the optical flow, as proposed in prior work [13, 20, 34] and detailed in Section III-D. The input to this loss function is the deblurred image and the deblurred image from the other frame warped based on the estimated optical flow. To warp the images into each other, we exploit backward warping, as the optical flow in both directions is known. Let $I_1$ and $I_2$ denote the first and the second deblurred image, and let $F_{12}$ and $F_{21}$ denote the optical flow between them. (Note that the flows $F_{12}$/$F_{21}$ and the flows $F_{c\to i}$ from the previous section are related by a known constant that depends on the frame rate, exposure time and $N$.) The warped deblurred images are obtained as

$$\tilde{I}_1(\mathbf{x}) = I_2(\mathbf{x} + F_{12}(\mathbf{x})) \tag{6}$$

$$\tilde{I}_2(\mathbf{x}) = I_1(\mathbf{x} + F_{21}(\mathbf{x})) \tag{7}$$

using bilinear interpolation. Note that no triangular mesh needs to be constructed during backward warping.
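A minimal sketch of backward warping with bilinear sampling (nested-list images and hypothetical helper names; border handling here is simple clamping, an assumption not specified in the text, and flows are assumed to stay within the image):

```python
def bilinear_sample(img, x, y):
    """Sample img at fractional (x, y) via bilinear interpolation (clamped)."""
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, len(img[0]) - 1), min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def backward_warp(src, flow):
    """out[y][x] = src sampled at (x, y) + flow[y][x]; no mesh is needed."""
    h, w = len(src), len(src[0])
    return [[bilinear_sample(src, x + flow[y][x][0], y + flow[y][x][1])
             for x in range(w)] for y in range(h)]
```

In contrast to the forward warp of Section III-B, every output pixel reads from exactly one (fractional) source location, so a regular sampling pattern suffices.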
III-D Loss Functions
Our network comprises two types of losses: a self-consistency loss $\mathcal{L}_s$ and a forward-backward consistency loss $\mathcal{L}_f$. The self-consistency loss

$$\mathcal{L}_s = \|\hat{B}_1 - B_1\|_1 + \|\hat{B}_2 - B_2\|_1 \tag{8}$$

penalizes differences between the synthesized motion blurred images $\hat{B}_1$, $\hat{B}_2$ and the original blurred inputs $B_1$, $B_2$ using an $L_1$ loss function. Similarly, the forward-backward consistency loss

$$\mathcal{L}_f = \|\tilde{I}_1 - I_1\|_1 + \|\tilde{I}_2 - I_2\|_1 \tag{9}$$

penalizes differences between the warped deblurred images $\tilde{I}_1$, $\tilde{I}_2$ and the estimated deblurred images $I_1$, $I_2$. The final loss is a weighted combination

$$\mathcal{L} = \mathcal{L}_s + \lambda\,\mathcal{L}_f, \tag{10}$$

where $\lambda$ is a hyper-parameter to balance both losses.
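The combined objective can be sketched as follows (flattened pixel lists and hypothetical names; the actual training code operates on batched tensors):

```python
def l1(a, b):
    """Mean absolute difference between two flat lists of pixel values."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def total_loss(reblurred, blurred, warped, deblurred, lam):
    """Self-consistency plus lambda-weighted forward-backward consistency.

    Each argument is a pair (frame 1, frame 2) of flat pixel lists.
    """
    l_s = l1(reblurred[0], blurred[0]) + l1(reblurred[1], blurred[1])
    l_f = l1(warped[0], deblurred[0]) + l1(warped[1], deblurred[1])
    return l_s + lam * l_f
```

Note that neither term touches a ground truth sharp image: the blurry inputs themselves and the mutual consistency of the two deblurred frames provide all the supervision.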
III-E Occlusion Handling
As occlusions affect the training of our network, especially at image boundaries, we detect occluded image regions and mask the loss functions (8) and (9) accordingly. We follow the method used in prior work on occlusion-aware unsupervised optical flow estimation to detect occluded image regions. More specifically, we compute the non-occluded regions in $I_2$ by following the optical flow from $I_1$ to $I_2$: we consider all pixels of $I_2$ which can be reached from $I_1$ via $F_{12}$ as non-occluded. Similarly, we can also compute a mask $M_i$ for each virtual frame $S_i$ by following the optical flow from the central image $S_c$ to the virtual frame $S_i$. Since the synthesized blurry image $\hat{B}$ is computed as the average of these virtual frames, we compute the final mask for $\hat{B}$ as the product of all masks $M_i$.
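A toy version of this reachability test, rounding the flow to the nearest pixel (a simplification; the actual method works with sub-pixel flow, and `non_occluded_mask` is a hypothetical name):

```python
def non_occluded_mask(flow, h, w):
    """Mark target pixels reachable from the source image via forward flow.

    flow[y][x] = (dx, dy). Target pixels that no source pixel maps to are
    treated as occluded (mask value 0).
    """
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            tx = x + int(round(flow[y][x][0]))
            ty = y + int(round(flow[y][x][1]))
            if 0 <= tx < w and 0 <= ty < h:
                mask[ty][tx] = 1
    return mask
```

With a uniform rightward flow, the left image border receives no source pixels and is masked out, which is exactly the boundary effect the loss masking is meant to suppress.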
III-F Differences with the method proposed by Chen et al.
The overall structure of our method is similar to the work of Chen et al. However, it differs in the following two key aspects: 1) In order to achieve state-of-the-art performance, Chen et al. use a supervised loss (Section 3.4 of their paper); their approach is thus actually a supervised method. 2) The core component of both methods, i.e., the reblurring module, is different. In fact, blurring a sharp image using convolutions, as done by Chen et al. (Section 3.3 of their paper), is physically incorrect and only holds for spatially uniform blur. We now make these differences more clear.
For simplicity, let us assume a one-dimensional sharp image $I$ with pixels $I(x)$. We further assume the blur kernel at pixel $x$ is given in the form of a bidirectional 1D flow $u$. Using the definition from Chen et al., the convolution-based model results in a blurred image at $x$ of $B(x) = \frac{1}{2u+1}\sum_{k=-u}^{u} I(x+k)$. Physically, a blur kernel of $u$ at pixel $x$ means that $I(x)$ contributes to $B(x+k)$ for $|k|\le u$; in contrast, in the convolution-based model of Chen et al., $I(x+k)$ contributes to $B(x)$. Our model eliminates this problem by forward warping the sharp image by a fraction of the blur kernel at each sampled timestamp. The blurred image is then computed by averaging all these forward warped sharp images, which simulates the real motion blur image formation process. Experimental results shown later demonstrate that the algorithm relying on the convolution-based model exhibits ringing artifacts at edge boundaries, which degrade the deblurred images.
IV Experimental Evaluation
Datasets: The dataset from Nah et al. is commonly used to benchmark single image motion deblurring algorithms. It was collected with a hand-held camera, and stronger blur was artificially created by shaking the camera during the recordings. This results in very non-linear camera motions, which violates our motion assumption. Therefore, we collected a new large dataset using a professional Fastec TS5 high-speed camera (https://www.fastecimaging.com/fastec-high-speed-cameras-ts-series/) mounted on a car. The dataset consists of 196 sequences in total, collected at 1200 fps with VGA resolution in diverse environments. The motion blurred images are generated by averaging several consecutive frames (i.e., 150 frames) to simulate the real physical image formation process. To reduce the redundancy per image sequence, we limit the maximum number of blurry-sharp image pairs to 20 per sequence, which results in a total of 3606 pairs. We split the dataset into 157 training sequences and 39 test sequences, which results in 2820 image pairs for training and 786 image pairs for evaluation.
We also collect a real motion blur dataset with 2302 images. The camera is mounted on a tram and captures images at around 50 FPS with a resolution of 752×480 pixels. The dataset is collected in the late afternoon and at night, when motion blur actually occurs. We use 2062 images for training and 240 images for testing.
We implemented our network in PyTorch. We set the hyper-parameter $\lambda$ empirically. To better initialize the network, we pretrain both the DeblurNet and PWC-Net on the blurry images. In particular, we pretrain the DeblurNet for 30 epochs to learn the identity mapping from blurry image to blurry image. We pretrain the PWC-Net for 200 epochs on the blurry sequences in a self-supervised manner, using the same learning rate for both networks. The whole network is then trained jointly for another 500 epochs, with a fixed learning rate for the first 260 epochs which is then decayed by half every 40 epochs.
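The joint-training schedule can be sketched as follows; the base learning rate `base_lr` is a placeholder, since the text above omits the actual value:

```python
def learning_rate(epoch, base_lr=1e-4):
    """Constant for the first 260 epochs, then halved every 40 epochs.

    base_lr is an assumed placeholder value, not the paper's setting.
    """
    if epoch < 260:
        return base_lr
    halvings = (epoch - 260) // 40 + 1
    return base_lr / (2 ** halvings)
```

Such a step schedule is typically implemented with a standard step-decay scheduler in PyTorch rather than by hand.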
Baselines and experimental settings: We compare the single image deblurring results of our network quantitatively and qualitatively with a state-of-the-art optimization-based method, supervised methods [21, 32, 17], as well as the domain-specific self-supervised method of Madam et al. We train all networks with their recommended hyper-parameter settings on our synthetic dataset. For the optimization-based method of Xu et al., we increase the blur kernel size to 10 pixels to account for the large motions present in our dataset.
We use the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) measures commonly used in the community [28, 21, 32] to evaluate the quality of the deblurring results. Larger PSNR/SSIM values indicate better image quality. The efficiency of the methods is evaluated by their total time consumption, excluding image loading and saving time.
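For reference, PSNR follows directly from the mean squared error (a sketch on flat pixel lists; in practice, library implementations such as scikit-image's metrics are typically used):

```python
import math

def psnr(img_a, img_b, max_val=1.0):
    """Peak Signal-to-Noise Ratio between two flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A 1 dB gain thus corresponds to roughly a 21% reduction in mean squared error, which puts the table entries below into perspective.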
Ablation studies on the modified DeblurNet architecture: As discussed in Sec. III-A, we made several improvements to the original network from Tao et al. To evaluate the efficacy of the new network, we train both networks in a supervised manner on our synthetic dataset. Table I presents the comparison under the same settings. It demonstrates that our DeblurNet is more efficient than the original network while achieving slightly better deblurring performance.
TABLE I: Comparison of the original SRN-Deblur network and our modified DeblurNet (both trained with supervision).

| Method | PSNR | SSIM | Runtime |
| --- | --- | --- | --- |
| SRN-Deblur | 34.64 dB | 0.93 | 0.13 s |
| Ours (supervised) | 35.04 dB | 0.94 | 0.05 s |
Ablation studies on the self-supervision loss for the flow network: To better understand the proposed algorithm, we perform an ablation study on the necessity of training the flow network in a self-supervised manner. We train our proposed network with and without the self-supervision loss for the flow network. If the self-supervision loss is disabled, we use the officially provided model pretrained on the FlyingChairs dataset. Experimental results demonstrate that the flow network pretrained on the FlyingChairs dataset can generalize to our dataset, but with limited performance: the resulting deblurring network achieves a PSNR of 31.23 dB and an SSIM of 0.89 on our synthetic dataset, in contrast to 32.24 dB/0.91 if the network is trained in a fully self-supervised manner. This shows that it is beneficial to train the flow network with a self-supervision loss.
Ablation studies on our proposed reblur model: As discussed in Sec. III-F, our ablation study supports our claim about the difference between the reblurring modules. For a fair comparison, we trained both the network with the convolution-based reblur model and the network with our physically correct reblur model under the same settings in an unsupervised fashion. The network with the convolution-based image formation model yields a PSNR of 27.22 dB and an SSIM of 0.80 on our synthetic dataset, while ours yields 32.24 dB and 0.91, respectively.
The necessity of self-supervised motion deblurring: In real scenarios, motion blur usually occurs under poor illumination. In these cases, the short exposure times required to obtain sharp images for supervised learning are impractical. One way to address this problem is to train a network on datasets collected under good illumination conditions and transfer the model to the scenarios where motion blur occurs. However, the generalization ability of such models is questionable due to the large difference between the image statistics of both scenarios. We therefore evaluate the generalization performance quantitatively and qualitatively on our synthetic dataset and our real dataset, respectively. Note that it is not easy to obtain ground truth sharp images in real scenarios; we thus apply the pretrained networks directly to our test data to evaluate their generalization performance. Table II and Fig. 5 present the experimental results. They demonstrate that all the baseline networks have limited generalization ability and perform worse than our method by a large margin. This shows that self-supervision enables the network to adapt to scenarios where ground truth data is difficult to obtain.
Quantitative and qualitative evaluations on the synthetic dataset: Table II and Fig. 4 show quantitative and qualitative comparisons on the synthetic dataset. For the qualitative results, we only compare against the best supervised method, SRN-Deblur. As can be seen in Table II, our method outperforms both Xu et al. and Madam et al. significantly in terms of PSNR and SSIM. Optimization-based single image deblurring algorithms such as that of Xu et al. usually assume the motion blur is caused by either camera rotation or in-plane translation. However, this assumption is violated in our self-driving setting, and the method of Xu et al. thus performs poorly on our dataset. The method of Madam et al. is designed for simple domain-specific blurry images, such as text and facial images, and therefore struggles on our dataset, which exhibits complex real-world challenges that are harder to learn. In comparison to supervised methods, our method demonstrates competitive results in both the quantitative and qualitative evaluations. As expected, there is still a gap between our method and the supervised methods when ground truth sharp images are available. However, our method outperforms them by a large margin when they are pretrained on other datasets. This demonstrates that self-supervision enables the network to generalize better to real scenarios, where ground truth data is usually difficult to obtain. Table II also shows that our method is amongst the fastest methods and can run in real time on a single GTX 1080Ti graphics card.
TABLE II: Quantitative evaluation on our synthetic dataset.

| Category | Method | PSNR | SSIM | Runtime |
| --- | --- | --- | --- | --- |
| Opt.-based | Xu et al. | 26.04 dB | 0.78 | 377.8 s |
| Supervised (retrained) | DeepDeblur | 33.55 dB | 0.92 | 3.45 s |
| Supervised (retrained) | DeblurGAN | 33.23 dB | 0.91 | 0.06 s |
| Supervised (retrained) | SRN-Deblur | 34.64 dB | 0.93 | 0.13 s |
| Supervised (pretrained) | DeepDeblur | 29.91 dB | 0.87 | 3.45 s |
| Supervised (pretrained) | DeblurGAN | 28.70 dB | 0.88 | 0.06 s |
| Supervised (pretrained) | SRN-Deblur | 30.71 dB | 0.88 | 0.13 s |
| Self-supervised | Madam et al. | 21.69 dB | 0.75 | 0.25 s |
| Self-supervised | Ours | 32.24 dB | 0.91 | 0.05 s |
Qualitative evaluations on the real dataset: Since we do not have ground truth sharp images for our real dataset, we cannot refine the baseline networks on it. We thus use the officially pretrained networks for these experiments. Fig. 5 demonstrates that our method can successfully deblur the blurry images, while the pretrained network from Tao et al. produces images with artifacts. More experimental results can be found in our supplementary material at https://github.com/ethliup/SelfDeblur.
V Conclusion and Future Work
In this paper, we have presented a self-supervised learning algorithm for image deblurring. Instead of using ground truth sharp images, we leverage the geometric constraints between two consecutive blurry images to supervise the training of our network. The latent sharp image and the motion blur kernel are estimated by a deblurring network and an optical flow estimation network, respectively. Experimental results show that the proposed algorithm outperforms the previous self-supervised method and produces competitive results compared to supervised methods. They further demonstrate that our method can be trained on real motion-blurred data and generalizes well to unseen real data.
References
- (2018) Reblur2Deblur: deblurring videos via self-supervised learning. In International Conference on Computational Photography (ICCP).
- (2009) Fast motion deblurring. ACM Transactions on Graphics (SIGGRAPH Asia 2009) 28(5).
- (2015) Unsupervised visual representation learning by context prediction. In ICCV.
- (2006) Removing camera shake from a single photograph. In SIGGRAPH.
- (2017) Unsupervised monocular depth estimation with left-right consistency. In CVPR.
- (2014) Generative adversarial nets. In NIPS.
- (2010) Single image deblurring using motion density functions. In ECCV.
- (2015) Convolutional neural networks for direct text deblurring. In BMVC.
- (2017) Online video deblurring via dynamic temporal blending network. In ICCV.
- (2017) FlowNet 2.0: evolution of optical flow estimation with deep networks. In CVPR.
- (2018) Occlusions, motion and depth boundaries with a generic network for disparity, optical flow or scene flow estimation. In ECCV.
- (2015) Spatial transformer networks. In NIPS.
- (2018) Unsupervised learning of multi-frame optical flow with occlusions. In ECCV.
- (2018) Neural 3D mesh renderer. In CVPR.
- (2015) Generalized video deblurring for dynamic scenes. In CVPR.
- (2009) Fast image deconvolution using hyper-Laplacian priors. In NIPS.
- (2018) DeblurGAN: blind motion deblurring using conditional adversarial networks. In CVPR.
- (2009) Understanding and evaluating blind deconvolution algorithms. In CVPR.
- (2018) Unsupervised class-specific deblurring. In ECCV.
- (2018) UnFlow: unsupervised learning of optical flow with a bidirectional census loss. In AAAI.
- (2017) Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR.
- (2017) Joint estimation of camera pose, depth, deblurring and super-resolution from a blurred image sequence. In ICCV.
- (2017) Automatic differentiation in PyTorch.
- (2016) Context encoders: feature learning by inpainting. In CVPR.
- (2009) A visual odometry framework robust to motion blur. In ICRA.
- (2019) World from blur. In CVPR.
- (1972) Bayesian-based iterative method of image restoration. Journal of the Optical Society of America 62(1).
- (2008) High-quality motion deblurring from a single image. ACM Transactions on Graphics 27(3).
- (2017) Deep video deblurring for hand-held cameras. In CVPR.
- (2018) PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In CVPR.
- (2011) Richardson-Lucy deblurring for scenes under a projective motion path. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 33(8), pp. 1603–1618.
- (2018) Scale-recurrent network for deep image deblurring. In CVPR.
- (2016) Rolling shutter and motion blur removal for depth cameras. In ICRA.
- (2018) Occlusion aware unsupervised learning of optical flow. In CVPR.
- (2010) Non-uniform deblurring for shaken images. In CVPR.
- (2017) Learning blind motion deblurring. In ICCV.
- (2010) Two-phase kernel estimation for robust motion deblurring. In ECCV.
- (2014) Deep convolutional neural network for image deconvolution. In NIPS.
- (2013) Unnatural L0 sparse representation for natural image deblurring. In CVPR.
- (2018) ActiveStereoNet: end-to-end self-supervised learning for active stereo systems. In ECCV.
- (2018) Dynamic scene deblurring using spatially variant recurrent neural networks. In CVPR.
- (2013) Forward motion deblurring. In ICCV.
- (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV.
In this supplementary material, we present details on the relationship between $F_{12}$/$F_{21}$ (i.e., the optical flow between the latent sharp images) and $F_{c\to 1}$ (i.e., the optical flow between the central virtual frame and the first virtual frame), described in Sections III-B and III-C of the main paper, on the architecture of the deblurring network, as well as additional qualitative experimental results on single image deblurring.
II Relationship between $F_{12}$/$F_{21}$ and $F_{c\to 1}$
In this section, we present the relationship between $F_{12}$/$F_{21}$ and $F_{c\to 1}$. $F_{12}$ and $F_{21}$ are the bidirectional dense optical flows between the latent sharp images $I_1$ and $I_2$, respectively. We assume the motion between $I_1$ and $I_2$ to be linear. Without loss of generality, we assume $F_{12}$ is used to synthesize the first blurry image $B_1$. Thus, we obtain $F^1_{c\to 1}$, the flow from the central virtual frame to the first virtual frame, by linearly scaling $F_{12}$ according to

$$F^1_{c\to 1} = -\frac{T_1}{2\,\Delta t}\,F_{12},$$

where $T_1$ is the exposure time of $B_1$, $N$ is the number of virtual sharp frames sampled to synthesize $B_1$ (the flows to the remaining virtual frames follow by linear scaling over the $N$ frames), and $\Delta t$ is the time interval between $I_1$ and $I_2$. Similarly, if $F_{21}$ is used to synthesize the second blurry image $B_2$, we get

$$F^2_{c\to 1} = -\frac{T_2}{2\,\Delta t}\,F_{21},$$

where $T_2$ is the exposure time of $B_2$.
III Architecture of the Deblurring Network
We adapt the single image deblurring network from Tao et al. for our approach. We make the following modifications to it for our particular problem: 1) We replace the deconvolution layer with bilinear upsampling followed by a 3x3 convolution to avoid upsampling artifacts. 2) We train the network at a single scale without using the LSTM layer to improve both training and test efficiency. 3) We add one more Encoder-Decoder block to increase the capacity of the network. The details are shown in Fig. 1.
IV Additional Qualitative Experimental Results
In Fig. 2 to Fig. 8, we present additional qualitative experimental results on single image deblurring on the synthetic dataset. The results demonstrate that our method can generate visually compelling sharp images that are competitive with three state-of-the-art supervised methods [2, 4, 3]. For fair comparison, we retrain all the networks on our Fastec dataset. Our method also significantly outperforms the state-of-the-art optimization-based method from Xu et al. [1] and the self-supervised method from Madam et al. [5]. The optimization-based method from Xu et al. fails to deblur blurry images from this dataset: to make the problem tractable, it assumes that the motion blur is caused by either camera rotation or in-plane translation, but these assumptions are violated in our Fastec dataset, where the motion blur also depends on the 3D scene geometry.
[1] L. Xu, S. Zheng and J. Jia, Unnatural L0 sparse representation for natural image deblurring, In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2013.
[2] S. Nah, T. Kim and K. Lee, Deep multi-scale convolutional neural network for dynamic scene deblurring, In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017.
[3] X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang and J. Jia, Scale-recurrent network for deep image deblurring, In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
[4] O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin and J. Matas, DeblurGAN: Blind motion deblurring using conditional adversarial networks, In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
[5] N. Madam, S. Kumar and A. N. Rajagopalan, Unsupervised class-specific deblurring, In Proc. of the European Conf. on Computer Vision (ECCV), 2018.