Motion blur is a phenomenon caused by the relative motion between the camera and the target during the exposure time. Removing motion blur (see figure 1) from a single photograph has long been an intractable problem, though many acclaimed works exist. It is an ill-posed problem to restore the sharp image when only the blurred image is given. To solve this blind deconvolution problem, conventional methods [4, 34, 31, 17, 2, 5, 41, 26, 27, 45]
are based on the idea of regularizing the potential distributions of sharp images. In general, these methods deblur images in two steps: kernel estimation and non-blind deconvolution. On the one hand, kernel estimation [41, 38, 27, 26, 25, 21] is always difficult because variational motion and depth information increase the difficulty of solving the inverse problem. On the other hand, non-blind deconvolution [28, 39, 3, 44] often suffers from ringing artifacts no matter how precisely the blur kernel is estimated.
Recently, the superior performance of deep learning has attracted attention from many fields, and some works [42, 29, 33, 40, 24, 18, 36, 9] have applied it to image restoration. At the beginning, the basic idea for deblurring was no different from that of the conventional methods, except that Convolutional Neural Networks (CNNs) were used for kernel estimation and non-blind deconvolution. However, the fatal drawbacks of the deblurring model mentioned above were not overcome. A more sensible model should be able to reduce the intermediate steps, which often introduce additional artifacts. Going further, applying an encoder-decoder to restore the image seems to be a proper way to reduce the complexity of the process.
It is known that networks can learn more details from the input when shallow and deep features are concatenated together. Inspired by this idea, a Bi-Skip network is proposed to improve the generating ability. Besides, considering that complex motion blur perturbs the network during training, a self-paced mechanism is introduced to enhance the robustness of the network.
Our contributions are threefold. First, we propose a Bi-Skip network, which obtains competitive results with faster speed in image deblurring. Second, we point out that the solution for a pixel-based loss is not the same as that for a perception-based loss, and a bi-level loss is adopted to solve this non-identity problem. Third, a self-paced learning mechanism is introduced into our training process; to the best of our knowledge, this is the first time a self-paced mechanism has been used to optimize a generative adversarial network.
2 Related Work
2.1 Motion Deblurring
Conventional motion deblurring methods are divided into two classes: uniform and non-uniform. Derived from the convolution model, almost all previous works perform non-blind deconvolution based on an estimated kernel. Early works [4, 31, 2, 5, 41] mainly focus on uniform deblurring. As explained, these methods introduce priors to regularize the distributions of sharp images. Nevertheless, neither heavy-tailed nor hyper-Laplacian priors can generalize the distribution of real images. To improve the robustness of deblurring, Cho and Xu introduce edge information to regularize the optimization. Furthermore, Xu [41, 43] proposes an $L_0$-norm regularizer which can extract more salient structures. From a probabilistic point of view, Levin comprehensively elaborates the problem in the optimization and proposes an alternative. Due to the limitation of the uniform hypothesis, the optimal models often under-fit. To overcome this drawback, Harmeling and Whyte contribute solutions for non-uniform deblurring. Still, two fatal drawbacks remain: the poor precision of blur kernels and the ringing artifacts of non-blind deconvolution. By utilizing Deep Neural Networks (DNNs), impressive works [40, 24, 18, 33] have been done in image deblurring. Recently, Nah proposes a kernel-free end-to-end approach to deblur images, and Kupyn suggests a blur-to-sharp translation using cGAN. Also, Tao proposes a scale-recurrent network for image deblurring.
2.2 Self-paced Learning
The philosophy behind self-paced learning (SPL) is to simulate the learning principle of humans/animals, which generally starts by learning easier aspects of a learning task, and then gradually takes more complex examples into training. Formally, the SPL model can be expressed as:

$$\min_{\mathbf{w},\, \mathbf{v}\in[0,1]^n} \sum_{i=1}^{n} v_i\, L\big(y_i, g(x_i, \mathbf{w})\big) + f(\mathbf{v}; \lambda)$$

Given a training dataset $\mathcal{D}=\{(x_i, y_i)\}_{i=1}^{n}$, $x_i$ and $y_i$ denote the observed sample and its label respectively.
$L(\cdot,\cdot)$ is a loss function which calculates the cost between the estimated label and its ground truth. $\mathbf{w}$ represents the model parameters inside the decision function $g$. $\mathbf{v}=[v_1,\dots,v_n]^{\mathsf T}$ is the latent weight variable for the training samples. The 'age' $\lambda$ controls the learning pace. When $\lambda$ is small, only 'easy' samples are considered in training. As $\lambda$ grows, more samples with larger losses are gradually appended to train a more mature model. $f(\mathbf{v};\lambda)$ is the self-paced regularizer, which follows the definition: 1) $f(\mathbf{v};\lambda)$ is convex with respect to $\mathbf{v}$; 2) the optimal weight $v^*(\lambda,\ell)$ is monotonically decreasing with respect to the loss $\ell$, and it holds that $\lim_{\ell\to 0} v^*(\lambda,\ell)=1$ and $\lim_{\ell\to\infty} v^*(\lambda,\ell)=0$; 3) $v^*(\lambda,\ell)$ is monotonically increasing with respect to $\lambda$, and it holds that $\lim_{\lambda\to 0} v^*(\lambda,\ell)=0$ and $\lim_{\lambda\to\infty} v^*(\lambda,\ell)\le 1$. Furthermore, Meng proves that the solving strategy of SPL accords with a majorization-minimization algorithm implemented on a latent objective function. And Li proposes self-paced convolutional neural networks to improve CNNs.
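As a toy illustration of how the 'age' $\lambda$ admits samples into training, consider the classical hard regularizer $f(\mathbf{v};\lambda)=-\lambda\sum_i v_i$, whose closed-form weight simply thresholds each sample's loss (a minimal sketch; the loss values and $\lambda$ here are made up, and this is not necessarily the regularizer adopted later in the paper):

```python
import numpy as np

def spl_weights_hard(losses, lam):
    """Hard self-paced regularizer f(v; lam) = -lam * sum(v):
    the closed-form weight is v_i = 1 if loss_i < lam, else 0."""
    return (losses < lam).astype(float)

losses = np.array([0.2, 0.5, 1.5, 3.0])
print(spl_weights_hard(losses, lam=1.0))  # only 'easy' samples selected
print(spl_weights_hard(losses, lam=2.0))  # a harder sample joins as lam grows
```

As $\lambda$ grows from 1.0 to 2.0, the sample with loss 1.5 is newly admitted, mirroring the easy-to-hard curriculum described above.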
2.3 Encoder-Decoder Networks
The idea of the Autoencoder is that high-dimensional data can be converted to low-dimensional codes by training a multilayer network to reconstruct high-dimensional input vectors. That is, the features learned by the network should be able to describe the inputs as fully as possible. Based on this idea, the Denoising Autoencoder (DAE) and the Variational Autoencoder (VAE) have been used in the area of image restoration.
At the beginning, U-Net is proposed for image segmentation. It consists of a contracting path and an expansive path. The contracting path follows down-sampling steps and the expansive path follows up-sampling steps. Due to the symmetry of U-Net, it can localize pixels, which makes features easy to concatenate. The Skip-Net proposed by Mao mainly focuses on image denoising and super-resolution. Deriving from U-Net, Skip-Net links convolutional and deconvolutional layers by skip-layer connections. Furthermore, Ulyanov points out that untrained deep convolutional generators can replace a surrogate natural prior (the TV norm) to gain dramatically improved results. However, unlike other image restoration tasks, deblurring remains hard when directly using an encoder-decoder.
2.4 Generative Adversarial Networks
The Generative Adversarial Network (GAN) is designed by Goodfellow to define a game between two competing networks called the generator and the discriminator. The generator receives noise as input and outputs a sample. Receiving the real and the generated sample as inputs, the discriminator is designed to distinguish the two inputs as well as possible. The goal of the generator is to fool the discriminator by generating a convincing sample which cannot be distinguished from the real one. Ideally, the probability that a trained discriminator treats either input as real is 0.5.
In image-to-image translation, GAN is known for its ability to generate convincing samples. However, the vanilla version suffers from problems such as mode collapse and vanishing gradients, as described by Salimans.
The criterion of GAN is to minimize the JS divergence between the data and generator distributions. The JS divergence saturates at a constant when the two distributions have disjoint supports, which makes the gradients vanish. Though JS is a strong divergence, this drawback often makes the model fail to train. Arjovsky comprehensively elaborates the problem and proposes the Wasserstein GAN (WGAN), which uses the Earth-Mover distance as the criterion. WGAN enforces a Lipschitz constraint by clipping the weights into a small fixed range, which makes training more stable. Meanwhile, weight clipping on the criterion can also lead to undesired behaviors. Gulrajani
proposes an alternative to weight clipping that penalizes the norm of the gradients of the critic with respect to its inputs. This method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
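The weight-clipping step that WGAN applies after each critic update can be sketched in a few lines (a naive illustration; the clip range $c=0.01$ is the value from the original WGAN paper and is purely illustrative here):

```python
import numpy as np

def clip_critic_weights(weights, c=0.01):
    """WGAN-style weight clipping: after each critic update, every parameter
    is clamped into [-c, c] to crudely enforce a Lipschitz bound."""
    return [np.clip(w, -c, c) for w in weights]

w = [np.array([[0.5, -0.003], [0.02, -0.8]])]
(w_clipped,) = clip_critic_weights(w)
print(w_clipped)  # extreme values saturate at +/- 0.01; small ones pass through
```

This also illustrates why clipping can misbehave: large weights are flattened onto the boundary of the box, which is exactly the pathology the gradient penalty is designed to avoid.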
2.5 Conditional Adversarial Networks
The Conditional Adversarial Network (cGAN) is a network in which both the generator and discriminator are conditioned on some extra information. Isola proposes to use cGAN in image-to-image translation, with the architecture also known as pix2pix. Unlike the vanilla version of GAN, cGAN learns a mapping $G: \{x, z\} \to y$ from an observed image $x$ and a random noise $z$ to the ground truth $y$. In the cGAN architecture, the discriminator's job remains unchanged, but the generator must not only fool the discriminator but also minimize the divergence between the generated sample and the ground truth. As described above, this valuable insight of cGAN makes image-to-image translation more diverse and stable. Thus, Kupyn and Nah use cGAN to deblur images in their models and obtain very competitive results. Nah proposes a multi-scale convolutional generator and presents a relative content loss to regularize the criterion of cGAN. Kupyn et al. adopt a perceptual loss obtained by feature extractors in the improved WGAN. All of the above deblurring networks show that a cGAN containing a specialized generator has a unique strength in performing blur-to-sharp translation.
2.6 Blur Dataset
Conventional methods set up blur datasets by convolving synthetic kernels with sharp images. In reality, blurred images generated this way are not identical to real ones. There are two convincing methods to generate blurred images. One is to record the sharp information to be integrated over time for blurred image generation; the integrated signal is then transformed into pixel values by a nonlinear Camera Response Function (CRF), as in the GoPro dataset. Another is to record 6D camera trajectories to generate blur kernels and then convolve them with sharp images while adding several common degradations, as in Lai et al. In addition, the blurs of the Köhler dataset are caused by replaying recorded 6D camera motion, assuming a linear CRF.
Before introducing the proposed method, we first explore the generator’s ability to extract deep priors and then analyze the problem in the optimization.
3.1 Deep Priors
Initially, U-Net is proposed to segment medical images for its excellent ability to extract features that characterize the contours of images. After that, Skip-Net is proposed to concatenate the features of shallow and deep layers. This is a symmetric encoder-decoder in which the decoder receives additional encoder features extracted by convolution. This method has prominent performance except for motion deblurring. Supposedly, a generator network should be able to capture a great deal of image statistics as priors. However, such networks often fail to deblur images in complex scenarios. In our opinion, a good generator should be capable of digesting these priors rather than only extracting features.
Thus, we design a Bi-Skip network to improve the generating ability. A brief framework of Bi-Skip is shown in figure 3. The network consists of a contracting path (D), a skipping path (S) and an expansive path (U). In the contracting path, the network performs down-sampling and generates shallow and deep features (shown by green arrows) at each scale. In the expansive path, a transposed convolution is utilized for up-sampling. In the skipping path, both shallow and deep features are concatenated with the up-sampled features. As shown in figure 3, we present a network with 4 down-samplings and 4 up-samplings as an example.
To understand the characteristics of the proposed Bi-Skip network, we consider a typical reconstruction problem. Given a target image $x_0$ and a random noise $z$, the task is to optimize the parameters $\theta$ of a generator $G_\theta$ to reproduce the target. Without any restriction on the generated image, the objective is defined as:

$$\min_{\theta}\; \big\| G_\theta(z) - x_0 \big\|_2^2$$
In figure 2, we show the generation ability of Skip and Bi-Skip at iterations 1000 and 2400. Obviously, Bi-Skip fits the target faster and with better learning ability. We also use a blurred image as input to fit the sharp one; the restored images at different iterations are shown in figure 4.
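The reconstruction experiment can be sketched in a deliberately simplified form: a linear "generator" $G_\theta(z)=Wz$ (a toy stand-in for the actual convolutional networks; all sizes, the seed, and the learning rate are made up) is fitted by plain gradient descent to reproduce a target from a fixed random code:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=16)          # flattened target "image"
z = rng.normal(size=8)           # fixed random input code
W = np.zeros((16, 8))            # generator parameters, theta = W

lr = 0.01
for _ in range(2000):
    r = W @ z - x                # residual G_theta(z) - x
    W -= lr * np.outer(r, z)     # gradient step on 0.5 * ||W z - x||^2

print(np.linalg.norm(W @ z - x))  # residual norm shrinks toward zero
```

The point of the toy is only that the unconstrained objective is easy to drive to zero; what distinguishes Skip from Bi-Skip in figure 2 is how fast a *structured* generator gets there.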
For image-to-image translation, a condition is added to the generator to restrict the distribution of the output. For image deblurring, Nah applies MSE as the content loss and Kupyn utilizes a perceptual loss. However, these two conditions are not identical, because the mapping between the two loss spaces cannot guarantee a one-to-one correspondence. For blur-to-sharp translation, the purpose of the condition is to guide the output to obey the distribution of the sharp image. The MSE constrains the output to be identical with the sharp one at both the pixel and feature levels, while the perceptual loss only constrains feature similarity. Ideally, the condition should be subject to $x = y$ rather than $\phi(x) = \phi(y)$, where $\phi$ is a feature extractor. To alleviate this problem, we combine the two content losses as our condition.
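The non-identity of the two constraints is easy to demonstrate: a feature map is generally many-to-one, so $\phi(x)=\phi(y)$ does not imply $x=y$. A toy sketch with global average pooling standing in for a learned feature extractor (an assumption for illustration only, not the VGG features used later):

```python
import numpy as np

def phi(img):
    """Toy 'feature extractor': global average pooling per channel.
    A crude stand-in for perceptual features, chosen to be obviously many-to-one."""
    return img.mean(axis=(1, 2))

a = np.zeros((3, 4, 4))
b = np.zeros((3, 4, 4))
b[:, 0, 0], b[:, 1, 1] = 1.0, -1.0   # perturb b so the pixels differ...

print(np.allclose(phi(a), phi(b)))   # ...yet the features agree
print(np.allclose(a, b))             # the pixel-level constraint still separates them
```

A perceptual loss alone would score `a` and `b` as identical, while a pixel loss would not; this is the gap the bi-level loss is meant to close.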
Besides, we find that complex motion blur often perturbs the network in the training process. An intuitive way to solve this problem is to train the model with simple motion blur first and then with complex samples. However, it is intractable to sort the dataset according to motion complexity. Derived from this idea, we introduce a self-paced mechanism for our networks. Figure 6 presents the deblurred images under different optimization schemes: '$L_1$' and '$L_2$' denote the pixel losses regularized by the $L_1$-norm and $L_2$-norm respectively, and 'Perception' denotes the feature loss.
4 Proposed Method
By considering the above issues, we propose a Bi-Skip network and adopt a combined condition which we call bi-level loss. At the same time, we introduce a self-paced mechanism in the training process.
4.1 Network Architecture
Figure 5 shows the architecture of the proposed Bi-Skip. The network consists of 5 down-samplings and 5 up-samplings. Specially, down-sampling is performed by introducing a residual, shown in the orange box. Concretely, a 5×5 conv is used to obtain the residual and a pooling layer down-samples. At each scale except the smallest, this path follows a 3×3 conv (gray box) and 3 Res-Blocks (bottle-green box). The skipping path in our network extracts both shallow and deep features. The first layer at each scale above follows a 1×1 conv (red box) to extract the shallow features, and the Res-Blocks are followed by a 3×3 conv (cyan box) to extract the deep features. The two features are then concatenated in the expansive path. Using a 5×5 transposed convolution (yellow box), the features are up-sampled. After that, the expansive path follows a 3×3 conv and a 1×1 conv
at each scale. Besides, Batch Normalization is removed from the last layer, which is a residual in our network. The channels of down-sampling and up-sampling are set to [32, 64, 128, 128, 128]. Meanwhile, the channels are set to [16, 32, 64, 64, 64] in the skipping path for both shallow and deep features.
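The resolutions and channel widths above can be traced with a short script, assuming a 256×256 input (the crop size used in training) and stride-2 down-sampling at each scale; the layer details are deliberately omitted:

```python
# Shape walk-through of the contracting/skipping paths:
# channel lists are taken from the text, everything else is a simplification.
down_channels = [32, 64, 128, 128, 128]
skip_channels = [16, 32, 64, 64, 64]     # for shallow AND deep features

h = w = 256
for i, c in enumerate(down_channels):
    h, w = h // 2, w // 2                # each scale halves the resolution
    print(f"scale {i}: {c:>3} ch, {h}x{w}, "
          f"skip = {skip_channels[i]}+{skip_channels[i]} ch concatenated")
```

After the 5 down-samplings a 256×256 crop reaches 8×8, and each expansive scale receives shallow plus deep skip features of equal width before up-sampling back.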
We also propose some baselines which are simplified models of Bi-Skip (BS). One baseline is the skip network (S), where the Res-Blocks and the deep feature extractor are removed. In BS-cR, each Res-Block consists of only 1 conv. Another baseline is BS-w/o-R, which removes the Res-Blocks from BS.
4.2 Loss Function
An adversarial loss and a content loss are combined in our method. The total loss can be formulated as follows:

$$\mathcal{L}_{total} = \mathcal{L}_{adv} + \mathcal{L}_{content}$$
As analyzed above, the feature-level loss is not identical with the pixel-level loss. We introduce a bi-level loss:

$$\mathcal{L}_{content} = \frac{1}{N}\sum_{i=1}^{N} v_i \left( \frac{\lambda_1}{CWH}\,\big\|S_i - G_i\big\|_1 + \frac{\lambda_2}{C'W'H'}\,\big\|\phi(S_i) - \phi(G_i)\big\|_2^2 \right)$$

where $i$ indexes the current sample, $N$ is the sample number, $S_i$ is the sharp image, $G_i$ is the generated image, and
$\phi$ is a VGG19-Net trained on ImageNet. The purpose of $\phi$ is to extract perceptual features in our method. The pixel-level loss is normalized by the image's channel $C$, width $W$ and height $H$, and the feature-level loss is normalized by its channel $C'$, width $W'$ and height $H'$. $\lambda_1$ and $\lambda_2$ are the weights for the pixel and perception losses respectively. Additionally, $v_i$ is a self-paced weight that adjusts the effect of the current sample in the training process.
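As a sanity check, the bi-level weighting can be sketched in NumPy for a single sample. The toy feature extractor, the choice of $L_1$ for the pixel term and squared $L_2$ for the feature term, and the unit weights are all assumptions for illustration, not the trained VGG19 pipeline:

```python
import numpy as np

def bi_level_loss(sharp, gen, phi, v, lam_p=1.0, lam_f=1.0):
    """Sketch of the bi-level content loss for one sample: the pixel term is
    normalized by C*W*H and the feature term by C'*W'*H', as in the text.
    v is the sample's self-paced weight."""
    pixel = np.abs(sharp - gen).sum() / sharp.size            # pixel level
    fs, fg = phi(sharp), phi(gen)
    feat = ((fs - fg) ** 2).sum() / fs.size                   # feature level
    return v * (lam_p * pixel + lam_f * feat)

phi = lambda img: img.mean(axis=(1, 2))     # toy stand-in for VGG19 features
s = np.ones((3, 8, 8))
g = np.zeros((3, 8, 8))
print(bi_level_loss(s, g, phi, v=1.0))
```

Setting `v=0.0` zeroes the sample's contribution entirely, which is exactly how the self-paced mechanism holds hard samples back early in training.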
Most image-to-image translation works use the vanilla GAN loss. Instead, we introduce the self-paced weight into WGAN as our adversarial loss, defined as:

$$\mathcal{L}_{adv} = -\frac{1}{N}\sum_{i=1}^{N} v_i\, D(G_i)$$

Similar to the bi-level loss, the self-paced weight also adjusts how the discriminator learns. The optimization mechanism is then:

$$\min_{G}\max_{D\in\mathcal{D}} \frac{1}{N}\sum_{i=1}^{N} v_i \big[ D(S_i) - D(G_i) \big]$$
where $\mathcal{D}$ is the set of 1-Lipschitz functions. More recently, WGAN-GP has become an alternative to clipping the weights in the training of WGAN, and the self-paced gradient penalty term here is:

$$GP = \frac{1}{N}\sum_{i=1}^{N} v_i \Big( \big\|\nabla_{\hat{x}_i} D(\hat{x}_i)\big\|_2 - 1 \Big)^2$$

where $\hat{x}_i = \epsilon S_i + (1-\epsilon)\, G_i$ with $\epsilon \sim U[0,1]$. Additionally, since the task of this paper is to restore the sharp image from the blurred one, the discriminator used in this work mainly focuses on the difference between two images rather than on its own ability to classify them, so the sigmoid layer is removed from the discriminator.
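Since the critic's input gradient is only available in closed form for very simple critics, the penalty term can be illustrated with a linear critic $D(x)=a^{\top}x$, whose gradient with respect to $x$ is exactly $a$ (all quantities below are made up for illustration; a real implementation would obtain the gradient by automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(1)

a = rng.normal(size=16)                           # linear critic D(x) = a . x
S, G = rng.normal(size=16), rng.normal(size=16)   # "sharp" / "generated" samples
eps = rng.uniform()
x_hat = eps * S + (1 - eps) * G                   # random point between them

grad = a                                          # nabla_x D(x) for the linear D
v = 1.0                                           # self-paced weight of the sample
penalty = v * (np.linalg.norm(grad) - 1.0) ** 2
print(penalty)                                    # zero iff ||a|| = 1
```

The penalty pulls the critic's gradient norm toward 1 along random interpolates of real and generated samples, and the weight `v` down-scales the contribution of hard samples just as in the content loss.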
The weight $v$ is controlled by the regime of self-paced learning in our method. However, the proposed total loss is a minimax problem, and it is intractable to impose the self-paced regularizer on it directly. An alternative solution is to adjust $v$ by imposing the regularizer on our bi-level loss, and the corresponding optimization is designed as:

$$\mathbf{v}^* = \arg\min_{\mathbf{v}\in[0,1]^N}\; \sum_{i=1}^{N} v_i\,\ell_i + f(\mathbf{v};\lambda)$$

where $\ell_i$ denotes the bi-level loss of sample $i$.
According to the three conditions in the definition, we adopt the dynamic self-paced regularizer, whose formula is:

$$f(\mathbf{v};\lambda, q) = \lambda \sum_{i=1}^{N}\left( \frac{1}{q}\,v_i^{\,q} - v_i \right)$$

where $q = q(t) > 1$ is a monotonically decreasing function of the iteration $t$, and we design this dynamic parameter to shrink as training proceeds.
In this majorization step, the closed-form optimal solution for $v_i$ is:

$$v_i^* = \begin{cases} \big(1 - \ell_i/\lambda\big)^{\frac{1}{q-1}}, & \ell_i < \lambda \\ 0, & \ell_i \ge \lambda \end{cases}$$

where $\ell_i$ is the sample's bi-level loss. It is difficult to set the value of $\lambda$ blindly, so we obtain the range of the loss in advance to set the initial value. Concretely, we store the maximum bi-level loss among the samples and set $\lambda$ to it. In this way, which samples to train on is selected according to the potential ability of the model.
Once the optimal self-paced weights are given, the minimax problem of the total loss can be easily solved. The discriminator is optimized by:

$$\max_{D\in\mathcal{D}}\; \frac{1}{N}\sum_{i=1}^{N} v_i \big[ D(S_i) - D(G_i) \big] - \gamma\, GP$$

and the corresponding generator is obtained by the following scheme:

$$\min_{G}\; -\frac{1}{N}\sum_{i=1}^{N} v_i\, D(G_i) + \mathcal{L}_{content}$$
The total optimization steps are shown in Algorithm 1.
In addition to training the model with the self-paced mechanism, we also train it with other optimization schemes, which are listed in table 1 and set as our baselines. For brevity, shorthands denote the different schemes: A denotes the adversarial loss, 1 and 2 denote the pixel losses regularized by the $L_1$-norm and $L_2$-norm respectively, P denotes the feature loss, and S denotes the self-paced mechanism. By combining these single losses, we obtain the corresponding optimization schemes. We use Xavier initialization for the model weights. ADAM is utilized as the optimizer with a fixed initial learning rate, and the weights $\lambda_1$ and $\lambda_2$ for the bi-level loss are fixed in advance.
The total epoch count is set to 300, and the learning rate linearly decays to zero after half of the epochs. In the final training, we increase the epochs to 1000 and then obtain the optimized Bi-Skip network.
Our experiments are performed on a single Tesla P40 GPU. For fair comparison, we use the GoPro dataset to train our model; the dataset contains 2,103 pairs for training and 1,111 pairs for evaluation. In each iteration, we sample a batch of 1 blurred image and randomly crop a 256×256 patch as the training input. Comparing the baselines listed in table 1, we use A2-S, A2-BS-cR, A2-BS-w/o-R and A2-BS to present the ability of BS. There is an obvious performance improvement when deep features are extracted in the network. The remaining schemes are utilized to explore the different optimization strategies. Adopting the bi-level loss (A1P-BS) to train the model is effective, and the performance increases substantially when the self-paced mechanism (SA1P-BS) is utilized.
| Method | GoPro PSNR | GoPro SSIM | Köhler PSNR | Köhler SSIM | Time |
| --- | --- | --- | --- | --- | --- |
| Sun et al. | 24.64 | 0.8429 | 25.22 | 0.7735 | 20min |
| Nah et al. | 29.08 | 0.9135 | 26.48 | 0.8079 | 3.09s |
| Kupyn et al. | 28.7 | 0.958 | 26.1 | 0.816 | 0.85s |
| Tao et al. | 30.26 | 0.9342 | 26.75 | 0.8370 | 1.87s |
We compare our test results with methods [33, 24, 18, 35] on both qualitative and quantitative criteria and show the running time on a single image (table 2). Additionally, more evaluations on the Köhler dataset and real datasets are shown in figure 7. The Köhler dataset contains 4 sharp images and 48 blurred images, with 12 blurred images corresponding to each sharp image. The blurs in this dataset are obtained by replaying recorded 6D motion trajectories. This is a benchmark for evaluating different deblurring methods, and we show the quantitative results in table 2. Because the synthetic kernels in this dataset are complex, the results are not as ideal as expected. Even so, the proposed method still performs competitively.
6.2 Saliency Based Evaluation Indicator
Visual saliency is the process of precisely finding the visual attention region in an image. Attention is a behavioral and cognitive process of selectively concentrating on one aspect of an environment while ignoring other things. Given the structure of the human eye, motion blur has a great influence on visual attention.
In order to highlight the superiority of our method in visual effects, we propose a pixel-aware evaluation method called the saliency based evaluation indicator. Using this indicator, we can understand image content at the pixel level, which accurately represents the deblurring details from human perception. We use an eye-tracking model to track the eye movement trajectory and highlight the visual focus. The visual comparison results are shown in figure 8. The first row shows the blurred image, the sharp image, and the deblurred images obtained by Sun, Kupyn, Tao and the proposed Bi-Skip, respectively. The second row shows the corresponding saliency maps. A human perceiving the details of this scene will give more attention to the Barbie doll and the stickers on the computer. The saliency maps obtained by the eye-tracking model use heat maps to present the attention regions: the deeper the color, the more the human eyes focus there. Two conclusions follow from the saliency maps: (1) the blurred image obscures the human visual focus, which proves that the proposed deblurring evaluation index is effective; (2) compared with other methods, the results obtained by the proposed method are capable of highlighting the visual focus and better cater to the subjective perception of human vision.
This paper presents an innovative approach which trains a GAN to deblur images by introducing self-paced learning. We find that a proper generator can serve as a deep prior and point out that the solution for a pixel-based loss is not the same as that for a perception-based loss. Using these ideas as starting points, a Bi-Skip network is proposed to improve the generating ability, and a bi-level loss is adopted to solve the problem that the common conditions are non-identical. Besides, considering that complex motion blur perturbs the network in the training process, a self-paced mechanism is introduced to enhance the robustness of the network. Through extensive evaluations on both qualitative and quantitative criteria, we demonstrate that our approach has a competitive advantage over state-of-the-art methods.
-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
-  S. Cho and S. Lee. Fast motion deblurring. In ACM Transactions on graphics (TOG), volume 28, page 145. ACM, 2009.
-  S. Cho, J. Wang, and S. Lee. Handling outliers in non-blind image deconvolution. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 495–502. IEEE, 2011.
-  R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman. Removing camera shake from a single photograph. In ACM transactions on graphics (TOG), volume 25, pages 787–794. ACM, 2006.
-  W. Freeman, F. Durand, Y. Weiss, and A. Levin. Understanding and evaluating blind deconvolution algorithms. 2009.
-  X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256, 2010.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
-  I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pages 5767–5777, 2017.
-  S. Harmeling, H. Michael, and B. Schölkopf. Space-variant single-image blind deconvolution for removing camera shake. In Advances in Neural Information Processing Systems, pages 829–837, 2010.
-  X. Hou, J. Harel, and C. Koch. Image signature: Highlighting sparse salient regions. IEEE transactions on pattern analysis and machine intelligence, 34(1):194–201, 2012.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2017.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.
-  F. Khan, B. Mutlu, and X. Zhu. How do humans teach: On curriculum learning and teaching dimension. In Advances in Neural Information Processing Systems, pages 1449–1457, 2011.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
-  R. Köhler, M. Hirsch, B. Mohler, B. Schölkopf, and S. Harmeling. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In European Conference on Computer Vision, pages 27–40. Springer, 2012.
-  D. Krishnan and R. Fergus. Fast image deconvolution using hyper-laplacian priors. In Advances in Neural Information Processing Systems, pages 1033–1041, 2009.
-  O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas. Deblurgan: Blind motion deblurring using conditional adversarial networks. arXiv preprint arXiv:1711.07064, 2017.
-  W.-S. Lai, J.-B. Huang, Z. Hu, N. Ahuja, and M.-H. Yang. A comparative study for single image blind deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1701–1709, 2016.
-  H. Li and M. Gong. Self-paced convolutional neural networks. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI, pages 2110–2116, 2017.
-  L. Mai and F. Liu. Kernel fusion for better image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 371–380, 2015.
-  X. Mao, C. Shen, and Y.-B. Yang. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Advances in neural information processing systems, pages 2802–2810, 2016.
-  D. Meng, Q. Zhao, and L. Jiang. What objective does self-paced learning indeed optimize? arXiv preprint arXiv:1511.06049, 2015.
-  S. Nah, T. H. Kim, and K. M. Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR, volume 1, page 3, 2017.
-  J. Pan, Z. Lin, Z. Su, and M.-H. Yang. Robust kernel estimation with outliers handling for image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2800–2808, 2016.
-  J. Pan, R. Liu, Z. Su, and G. Liu. Motion blur kernel estimation via salient edges and low rank prior. In Multimedia and Expo (ICME), 2014 IEEE International Conference on, pages 1–6. IEEE, 2014.
-  J. Pan, D. Sun, H. Pfister, and M.-H. Yang. Blind image deblurring using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1628–1636, 2016.
-  W. H. Richardson. Bayesian-based iterative method of image restoration. JOSA, 62(1):55–59, 1972.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
-  Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from a single image. In Acm transactions on graphics (tog), volume 27, page 73. ACM, 2008.
-  S. Su, M. Delbracio, J. Wang, G. Sapiro, W. Heidrich, and O. Wang. Deep video deblurring for hand-held cameras. In CVPR, volume 2, page 6, 2017.
-  J. Sun, W. Cao, Z. Xu, and J. Ponce. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 769–777, 2015.
-  Y.-W. Tai, P. Tan, and M. S. Brown. Richardson-lucy deblurring for scenes under a projective motion path. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1603–1618, 2011.
-  X. Tao, H. Gao, X. Shen, J. Wang, and J. Jia. Scale-recurrent network for deep image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8174–8182, 2018.
-  D. Ulyanov, A. Vedaldi, and V. Lempitsky. Deep image prior. arXiv preprint arXiv:1711.10925, 2017.
-  P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
-  O. Whyte, J. Sivic, A. Zisserman, and J. Ponce. Non-uniform deblurring for shaken images. International journal of computer vision, 98(2):168–186, 2012.
-  N. Wiener. Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications. 1949.
-  P. Wieschollek, M. Hirsch, B. Schölkopf, and H. P. Lensch. Learning blind motion deblurring. In ICCV, pages 231–240, 2017.
-  L. Xu and J. Jia. Two-phase kernel estimation for robust motion deblurring. In European conference on computer vision, pages 157–170. Springer, 2010.
-  L. Xu, J. S. Ren, C. Liu, and J. Jia. Deep convolutional neural network for image deconvolution. In Advances in Neural Information Processing Systems, pages 1790–1798, 2014.
-  L. Xu, S. Zheng, and J. Jia. Unnatural l0 sparse representation for natural image deblurring. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1107–1114, 2013.
-  J. Zhang, J. Pan, W.-S. Lai, R. W. Lau, and M.-H. Yang. Learning fully convolutional networks for iterative non-blind deconvolution. 2017.
-  X. Zhang, R. Wang, Y. Tian, W. Wang, and W. Gao. Image deblurring using robust sparsity priors. In IEEE International Conference on Image Processing, pages 138–142, 2015.