Blind image deblurring aims to recover a true image x and a blur kernel k from a blurry, and possibly noisy, observation y. For a uniform and spatially invariant blur, the problem can be mathematically formulated as

y = k ⊗ x + n,   (1)

where ⊗ is the convolution operator and n is additive Gaussian noise. In its full generality, the inverse problem (1) is severely ill-posed, as many different instances of x, k, and n fit the observation y; see [1, 2] for a thorough discussion of solution ambiguities in blind deconvolution.
To resolve among the multiple solutions, priors are introduced on images and/or blur kernels in image deblurring algorithms. A prior assumes an a priori model on the true image, the blur kernel, or both. Conventional priors include sparsity of the true image and/or blur kernel in some transform domain such as wavelets or curvelets; sparsity of image gradients [3, 4]; regularized priors; internal patch recurrence; low-rank priors [7, 8]; and the hyper-Laplacian prior. Although generic and applicable to multiple applications, these engineered models are not very effective, as many unrealistic images also fit the prior model.
Recently, deep learning has emerged as the new state of the art in blind image deconvolution, as in many other image restoration problems. The results so far focus on bypassing blur kernel estimation and training a network in an end-to-end manner to map blurred images to the corresponding sharp ones [10, 11, 12, 13]. The main drawback of this end-to-end deep learning approach is that it does not explicitly take into account the knowledge of the forward map or the governing equation (1), but rather learns it implicitly from training data. Consequently, the deblurring is sensitive to changes in the blur kernels, images, or noise distributions in the test set that are not representative of the training data, and a competitive performance often requires expensive retraining of the network. Comparatively, this paper seeks to deblur images by employing generative networks in the novel role of priors in the inverse problem.
In the last couple of years, advances in implicit generative modeling of images have taken it well beyond the conventional prior models outlined above. The introduction of such deep generative priors into image deblurring should enable far more effective regularization, yielding sharper and better quality deblurred images. Our experiments in Figure 1 confirm this hypothesis, and show that embedding pretrained deep generative priors in an iterative scheme for image deblurring produces promising results on standard datasets of face images and house numbers. Some of the blurred faces are barely recognizable to the naked eye due to extravagant blurs, yet the recovery algorithm deblurs these images near perfectly with the assistance of generative priors.
This paper shows that an alternating gradient descent scheme assisted with generative priors is able to recover the blur kernel k and a visually appealing, sharp approximation of the true image x from the blurry observation y. Specifically, the algorithm searches for a pair (x, k) in the ranges of respective pretrained generators of images and blur kernels that explains the blurred image y. Implicitly, the generative priors aggressively regularize the alternating gradient descent algorithm to produce a sharp, clean image. Since the range of each generative model can be traversed by a much lower-dimensional latent representation compared to the ambient dimension of the images, this not only reduces the number of unknowns in the deblurring problem but also allows for an efficient implementation of gradients in the lower-dimensional space using backpropagation through the generators.
The numerical experiments show that, in general, the deep generative priors yield better deblurring results compared to the conventional image priors studied extensively in the literature. Compared to end-to-end deep learning frameworks, our approach explicitly takes into account the knowledge of the forward map (convolution) in the gradient descent algorithm to achieve robust results. Moreover, our approach does not require expensive retraining of every deep network involved in case of partial changes in the blur problem specification, such as alterations in the blur kernels or the noise model; in the former case, we only have to retrain the blur generative model (a shallow, and hence easy to train, network), and in the latter case no retraining is required at all.
We have found that strictly constraining the recovered image to lie in the generator range can often be counterproductive, owing to the limited ability of generative models to faithfully learn the image distribution. We therefore investigate a modification of the loss function that allows the recovered image some leverage/slack to deviate from the range of the generator. This modification effectively addresses the performance limitation due to the range of the generator.
Another important contribution of this work is to show that even untrained deep convolutional generative networks can act as good image priors in blind image deblurring. This learning-free prior ability suggests that some image statistics are captured by the network structure alone. The weights of the untrained network are initialized using a single blurred image. The fact that untrained generative models can act as good priors allows us to elevate the performance of our algorithm on rich image datasets that current generative models do not learn effectively.
The rest of the paper is organized as follows. In Section II, we give an overview of the related work. We formulate the problem in Section III followed by our proposed alternating gradient descent algorithms in Section IV. Section V contains experimental results followed by concluding remarks in Section VI.
II Related Work
Blind image deblurring is a well-studied topic, and in general various priors/regularizers exploiting the structure of an image or blur kernel are introduced to address the ill-posedness. These natural structures expect images or blur kernels to be sparse in some transform domain; see, for example, [3, 4, 16, 17, 18, 19]. Another approach [17, 18] is to learn an overcomplete dictionary for sparse representations of image patches. The inverse problem is then regularized to favor image patches that are sparsely representable in the learned dictionary. On the other hand, we learn a more powerful non-linear mapping (a generative network) from low-dimensional feature vectors to full-size images. The inverse problem is now regularized by constraining the images to lie in the range of the generator. Other penalty functions that improve the conditioning of the blind image deblurring problem include low-rank and total-variation based priors. A recently introduced dark channel prior also shows promising results; it assumes a sparse structure on the dark channel of the image, and exploits this structure in an optimization program for the inverse problem. Other works include extreme channel priors, outlier-robust deblurring, learned data fitting, and discriminative-prior-based blind image deblurring approaches.
Our approach bridges the gap between conventional iterative schemes and recent end-to-end deep networks for image deblurring [11, 27, 10, 28, 29, 30, 31]. The iterative schemes are generally adaptable to new images/blurs, and to other modifications in the model such as noise statistics. Comparatively, the end-to-end approaches above break down under any such changes that are not reflected in the training data, and require a complete retraining of the network. Our approach combines some of the benefits of both paradigms by employing powerful generative neural networks in an iterative scheme as priors that are already familiar with the images and blurs under consideration. These neural networks restrict the candidate solutions to come from the learned or familiar images and blur kernels only. A change in, for example, the blur kernel model only requires retraining a shallow network of blurs, while keeping the image generative network and the rest of the iterative scheme unchanged. Similarly, a change in noise statistics is handled in a completely adaptable manner, as in classical iterative schemes. Retaining this adaptability while maintaining a strong influence of the powerful neural networks is an important feature of our algorithm.
Neural-network-based implicit generative models such as generative adversarial networks (GANs) and variational autoencoders (VAEs) have found much success in modeling complex data distributions, especially those of images. Recently, GANs and VAEs have been used for blind image deblurring, but only in an end-to-end manner, which is completely different from our approach, as discussed above in detail. One line of work jointly deblurs and super-resolves low-resolution blurry text and face images by introducing a novel feature-matching loss term during the training process of a GAN. Another proposes a sparse-coding-based framework consisting of both variational learning, which learns a data prior by encoding its features into a compact form, and adversarial learning for discriminating clean and blurry image features. A conditional GAN has also been employed for blind motion deblurring in an end-to-end framework, optimized using a multi-component loss consisting of both content and adversarial terms. These methods show competitive performance, but since these generative-model-based approaches are end-to-end, they suffer from the same drawbacks as other deep learning techniques, discussed in detail above.
Pretrained generative models have also recently been employed as priors in other inverse problems such as image inpainting. We also note works that use pretrained generators to circumvent the issue of adversarial attacks. To the best of our knowledge, our work is the first instance of using pretrained generative models as priors in the non-linear inverse problem of blind image deconvolution.
III Problem Formulation
We assume the image x and the blur kernel k in (1) are members of structured classes X of images and K of blurs, respectively. For example, X may be a set of celebrity faces while K comprises motion blurs. A representative sample set from each of the classes X and K is employed to train a generative model for that class. We denote by G_X : ℝ^l → ℝ^n and G_K : ℝ^m → ℝ^p the generators for the classes X and K, respectively. Given low-dimensional inputs z_x ∈ ℝ^l and z_k ∈ ℝ^m, the pretrained generators G_X and G_K generate new samples G_X(z_x) and G_K(z_k) that are representative of the classes X and K, respectively. Once trained, the weights of the generators are fixed. To recover the sharp image and blur kernel from the blurred image y in (1), we propose minimizing the following objective function

(x̂, k̂) = argmin_{x ∈ Range(G_X), k ∈ Range(G_K)} ‖y − k ⊗ x‖²,   (2)

where Range(G_X) and Range(G_K) are the sets of all images and blurs that can be generated by G_X and G_K, respectively. In words, we want to find an image and a blur kernel, in the ranges of their respective generators, that best explain the forward model (1). Ideally, the range of a pretrained generator comprises only samples drawn from the distribution of the image or blur class. Constraining the solution to lie in the generator ranges therefore implicitly reduces the solution ambiguities inherent to the ill-posed blind deconvolution problem, and forces the solutions to be members of the classes X and K.
The minimization program in (2) can be equivalently formulated in the lower-dimensional latent representation space as follows:

(ẑ_x, ẑ_k) = argmin_{z_x ∈ ℝ^l, z_k ∈ ℝ^m} ‖y − G_K(z_k) ⊗ G_X(z_x)‖².   (3)

This optimization program can be thought of as tweaking the latent representation vectors z_x and z_k (the inputs to the generators G_X and G_K, respectively) until the generators produce an image and a blur kernel whose convolution comes as close to y as possible.
The optimization program in (3) is obviously non-convex, owing to the bilinear convolution operator and the non-linear deep generative models. We resort to an alternating gradient descent algorithm to find a local minimizer (ẑ_x, ẑ_k). Importantly, the weights of the generators are always fixed, as they enter this algorithm as pretrained models. At a given iteration, we fix z_k and take a descent step in z_x, and vice versa. The gradient step in each variable involves a forward and a backward pass through the corresponding generator network. Section IV-C discusses the backpropagation, and gives explicit gradient forms for the descent in each of z_x and z_k.
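The alternating scheme can be sketched on a toy problem. In the snippet below, hypothetical linear maps A and B stand in for the pretrained generators (our simplification for illustration; the real generators are deep networks), and 1-D circular convolution is used for brevity. Each outer iteration fixes one latent vector and takes a gradient step in the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n, l, m = 32, 6, 3                             # toy ambient/latent dimensions
A = rng.standard_normal((n, l)) / np.sqrt(n)   # stand-in "image generator"
B = rng.standard_normal((n, m)) / np.sqrt(n)   # stand-in "blur generator"

def circ_conv(k, x):
    # circular convolution via the FFT
    return np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)))

def corr(k, r):
    # circular correlation: the adjoint of x -> k (*) x
    return np.real(np.fft.ifft(np.conj(np.fft.fft(k)) * np.fft.fft(r)))

# blurred observation generated from ground-truth latents
y = circ_conv(B @ rng.standard_normal(m), A @ rng.standard_normal(l))

def loss(zx, zk):
    return np.sum((y - circ_conv(B @ zk, A @ zx)) ** 2)

zx, zk = rng.standard_normal(l), rng.standard_normal(m)
loss0, lr = loss(zx, zk), 0.01
for _ in range(1000):
    r = circ_conv(B @ zk, A @ zx) - y          # residual, z_k fixed
    zx -= lr * 2 * (A.T @ corr(B @ zk, r))     # chain rule through A
    r = circ_conv(B @ zk, A @ zx) - y          # residual, z_x fixed
    zk -= lr * 2 * (B.T @ corr(A @ zx, r))     # chain rule through B
```

With deep generators, the two matrix-transpose products are replaced by backpropagation through the corresponding network, but the alternating structure is identical.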
The estimated deblurred image and blur kernel are acquired by a forward pass of the solutions ẑ_x and ẑ_k through the generators G_X and G_K. Mathematically, x̂ = G_X(ẑ_x) and k̂ = G_K(ẑ_k).
IV Image Deblurring Algorithm
Our approach requires pretrained generative models G_X and G_K for the classes X and K, respectively. We use both GANs and VAEs as generative models for the clean images and blur kernels. We briefly recap the training process of GANs and VAEs below.
IV-A Training the Generative Models
Generative adversarial networks (GANs) learn the distribution p_X of images in the class X by playing an adversarial game. A discriminator network D learns to differentiate between true images sampled from p_X and fake images produced by the generator network G, while G tries to fool the discriminator. The cost function describing the game is given by

min_G max_D  E_{x ∼ p_X}[log D(x)] + E_{z ∼ p_z}[log(1 − D(G(z)))],

where p_z is the distribution of the latent random variable z, usually defined to be a known and simple distribution such as N(0, I).
Variational autoencoders (VAEs) learn the distribution p_X by maximizing a lower bound on the log-likelihood:

log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL(q(z|x) ‖ p(z)),

where the second term on the right-hand side is the Kullback–Leibler divergence between the approximate posterior q(z|x) and the known prior distribution p(z). The distribution q(z|x) is a proxy for the unknown posterior p(z|x). Under a rich enough function model for q, the lower bound is expected to be tight. The functional forms of q(z|x) and p(x|z) are each modeled via a deep network. The right-hand side is maximized by tweaking the network parameters. The deep network modeling p(x|z) is the generative model that produces samples x from latent representations z.
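For the common modeling choice q(z|x) = N(μ, diag(σ²)) and prior p(z) = N(0, I), the KL term above has a well-known closed form; a small sketch (the function name is ours):

```python
import numpy as np

def kl_to_std_normal(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over the latent
    dimensions: 0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1)."""
    return 0.5 * np.sum(mu**2 + sigma**2 - np.log(sigma**2) - 1.0)
```

The divergence vanishes exactly when the approximate posterior matches the prior, which is what drives the latent representations toward the standard Gaussian assumed later in the deblurring objective.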
The generative models for the face and shoe images are trained using adversarial learning for visually better quality results. The generative models for the other image datasets, and for the blur kernels, are trained using the variational inference framework above.
IV-B Naive Deblurring
To deblur an image y, the simplest possible strategy is to find an image closest to y in the range of the given generator G_X of clean images. Mathematically, this amounts to solving the following optimization program

ẑ_x = argmin_{z_x} ‖y − G_X(z_x)‖²,   (4)

where we emphasize again that in the optimization program above the weights of the generator are fixed (pretrained). Although non-convex, a local minimizer can be found via gradient descent implemented using the backpropagation algorithm. The recovered image is obtained by a forward pass of ẑ_x through the generative model: x̂ = G_X(ẑ_x). Expectedly, this approach fails to produce reasonable recovery results; see Figure 2. The reason is that this back-projection approach completely ignores the forward blur model in (1).
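A minimal numerical sketch of this naive projection, with a linear map A standing in for the pretrained generator (our simplifying assumption), shows that gradient descent on (4) converges to the least-squares projection of y onto the generator range, while never consulting the blur model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, l = 32, 4
A = rng.standard_normal((n, l)) / np.sqrt(n)   # toy linear "generator"
y = rng.standard_normal(n)                     # blurred observation

z = np.zeros(l)
for _ in range(2000):
    z -= 0.1 * 2 * A.T @ (A @ z - y)   # gradient step on ||y - A z||^2

# reference: the closed-form least-squares projection onto range(A)
z_star, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The fixed point depends only on y and the generator range, which is exactly why the recovery is poor: any blur baked into y is projected along with the image.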
We now address this shortcoming by including the forward model and blur kernel in the objective (4).
IV-C Deconvolution using Deep Generative Priors
We discovered in the previous section that simply finding a clean image close to the blurred one in the range of the image generator is not good enough. A more natural and effective strategy is to instead find a pair consisting of a clean image and a blur kernel, in the ranges of G_X and G_K respectively, whose convolution comes as close to the blurred image y as possible. As outlined in Section III, this amounts to minimizing the measurement loss

‖y − G_K(z_k) ⊗ G_X(z_x)‖²   (5)

over both z_x and z_k, where ⊗ is the convolution operator. Incorporating the fact that the latent representation vectors z_x and z_k are assumed to come from standard Gaussian distributions in both the adversarial learning and variational inference frameworks outlined in Section IV-A, we further augment the measurement loss in (5) with penalty terms on the latent representations. The resultant optimization program is then

(ẑ_x, ẑ_k) = argmin_{z_x, z_k} ‖y − G_K(z_k) ⊗ G_X(z_x)‖² + γ‖z_x‖² + λ‖z_k‖²,   (6)

where γ and λ are free scalar parameters. For brevity, we denote the objective function above by L(z_x, z_k). To minimize this non-convex objective, we begin by initializing z_x and z_k as standard Gaussian vectors, and then take a gradient step in one of them while fixing the other. To avoid getting stuck in a poor local minimum, we may restart the algorithm with a new random initialization (random restarts) when the measurement loss in (5) does not reduce sufficiently after reasonably many iterations. Algorithm 1 formally introduces the proposed alternating gradient descent scheme. Henceforth, we will denote the image deblurred using Algorithm 1 by x̂_G.
For computational efficiency, we implement the gradients in the Fourier domain. Define the n × n DFT matrix F with entries

F[ω, t] = (1/√n) e^{−j2πωt/n},

where F[ω, t] denotes the (ω, t)-th entry of the Fourier matrix. Since this normalized DFT is an isometry, and also diagonalizes the (circular) convolution operator, we can write the loss function in the Fourier domain as

L(z_x, z_k) = ‖F y − √n (F G_K(z_k)) ⊙ (F G_X(z_x))‖² + γ‖z_x‖² + λ‖z_k‖²,

where ⊙ denotes element-wise multiplication.
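The two facts used here, that the orthonormal DFT preserves norms and that it diagonalizes circular convolution, can be checked numerically with a small self-contained sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
k, x = rng.standard_normal(n), rng.standard_normal(n)

# circular convolution computed directly from its definition
conv_direct = np.array([sum(k[(j - i) % n] * x[i] for i in range(n))
                        for j in range(n)])

# the DFT diagonalizes it: F(k (*) x) = F(k) * F(x) elementwise
conv_fft = np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)))

# the orthonormal DFT is an isometry: it preserves the l2 norm
iso = np.linalg.norm(np.fft.fft(x, norm="ortho"))
```

Diagonalization is what makes the Fourier-domain loss cheap: the convolution becomes a pointwise product, so each gradient evaluation costs a pair of FFTs rather than a dense matrix multiply.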
We now compute the gradient expressions. For a function f of a complex variable z = u + jv, the Wirtinger derivatives of f with respect to z and z̄ are defined as

∂f/∂z = (1/2)(∂f/∂u − j ∂f/∂v),  ∂f/∂z̄ = (1/2)(∂f/∂u + j ∂f/∂v).

Let x = G_X(z_x) and k = G_K(z_k), and let e = F y − √n (F k) ⊙ (F x) denote the Fourier-domain residual. Then it is easy to see that

∇_{z_x} L = −2√n Re{ J_{G_X}(z_x)ᵀ Fᴴ ( conj(F k) ⊙ e ) } + 2γ z_x,
∇_{z_k} L = −2√n Re{ J_{G_K}(z_k)ᵀ Fᴴ ( conj(F x) ⊙ e ) } + 2λ z_k,

where J_G(·) denotes the Jacobian of a generator at its input, Fᴴ the conjugate transpose of F, and conj(·) the element-wise complex conjugate.
For illustration, take the example of a two-layer generator G_X, which is simply

G_X(z_x) = relu(W₂ relu(W₁ z_x)),

where W₁ and W₂ are the weight matrices at the first and second layer, respectively. In this case, J_{G_X}(z_x)ᵀ = W₁ᵀ D₁ W₂ᵀ D₂, where D₁ = diag(𝟙{W₁ z_x > 0}) and D₂ = diag(𝟙{W₂ relu(W₁ z_x) > 0}). From this expression, it is clear that the alternating gradient descent algorithm for blind image deblurring requires alternate backpropagation through the generators G_X and G_K, as illustrated in Figure 3. To update z_x, we fix z_k, compute the Fourier transform of a scaling of the residual vector, and backpropagate it through the generator G_X. A similar update strategy is employed for z_k, keeping z_x fixed.
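The Jacobian-transpose (backpropagation) expression for the two-layer relu generator can be sanity-checked against finite differences; a sketch with hypothetical small weight matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
l, h, n = 4, 8, 12
W1 = rng.standard_normal((h, l))
W2 = rng.standard_normal((n, h))

def G(z):
    # two-layer generator: relu(W2 relu(W1 z))
    return np.maximum(W2 @ np.maximum(W1 @ z, 0.0), 0.0)

def jacobian_T(z, v):
    # apply J_G(z)^T = W1^T D1 W2^T D2 to a vector v (backpropagation)
    d1 = (W1 @ z > 0).astype(float)
    d2 = (W2 @ np.maximum(W1 @ z, 0.0) > 0).astype(float)
    return W1.T @ (d1 * (W2.T @ (d2 * v)))

z, v = rng.standard_normal(l), rng.standard_normal(n)
eps = 1e-6
# central finite differences of the scalar map z -> <v, G(z)>
fd = np.array([(v @ G(z + eps * e) - v @ G(z - eps * e)) / (2 * eps)
               for e in np.eye(l)])
```

The diagonal masks D₁ and D₂ simply zero out the coordinates where the relus are inactive, which is all backpropagation has to do for piecewise-linear activations.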
IV-D Beyond the Range of Generator
As described earlier, the optimization program (6) implicitly constrains the deblurred image to lie in the range of the generator G_X. This may lead to artifacts in the deblurred images when the generator range does not completely span the set X. This inability of the generator to completely learn the image distribution is often evident for richer and more complex natural images. In such cases, it makes more sense not to strictly constrain the recovered image to come from the range of the generator, but rather to also explore images a bit outside the range. To accomplish this, we propose minimizing the measurement loss of an image inside the range, exactly as in (5), together with the measurement loss of an image x not necessarily within the range. The in-range image G_X(z_x) and the out-of-range image x are then tied together by minimizing an additional penalty term ‖x − G_X(z_x)‖². The idea is to strictly minimize the range error when the pretrained generator has effectively learned the image distribution, and to afford some slack when that is not the case. The amount of slack can be controlled by tweaking the weights attached to each loss term in the final objective. Finally, to guide the search for the best deblurred image beyond the range of the generator, a conventional image prior, the total variation measure tv(·), is also introduced. This leads to the following optimization program

(x̂, ẑ_x, ẑ_k) = argmin_{x, z_x, z_k} ‖y − G_K(z_k) ⊗ G_X(z_x)‖² + α‖y − G_K(z_k) ⊗ x‖² + τ‖x − G_X(z_x)‖² + ρ tv(x),

where α, τ, and ρ are free scalar weights. All of the variables are randomly initialized, and the objective is minimized by taking a gradient step in each of the unknowns while fixing the others. The computation of the gradients is very similar to the steps outlined in Section IV-C. We take the solutions x̂ and k̂ = G_K(ẑ_k) as the deblurred image and the recovered blur kernel. The iterative scheme is formally given in Algorithm 2. For future reference, we will denote the image recovered using Algorithm 2 by x̂_S.
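The combined objective can be sketched as a plain loss evaluation (numpy, 2-D circular convolution; the weight names alpha, tau, rho and the anisotropic tv variant are our illustrative choices):

```python
import numpy as np

def tv(x):
    # anisotropic total variation: sum of absolute forward differences
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def circ_conv2(k, x):
    # 2-D circular convolution via the FFT
    return np.real(np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(x)))

def slack_objective(y, x, x_range, k, alpha=1.0, tau=1.0, rho=0.01):
    """In-range fit + out-of-range fit + penalty tying the free image x
    to the generator output x_range + tv prior on x."""
    return (np.sum((y - circ_conv2(k, x_range)) ** 2)
            + alpha * np.sum((y - circ_conv2(k, x)) ** 2)
            + tau * np.sum((x - x_range) ** 2)
            + rho * tv(x))
```

Setting tau large recovers the strictly in-range behavior of Algorithm 1, while a small tau lets x drift away from the generator range under the guidance of the data fit and the tv prior.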
IV-E Untrained Generative Priors
As will be shown in the numerics below, the pretrained generative models effectively regularize the deblurring and produce competitive results; however, convincing performance is limited to image datasets, such as faces and house numbers, that are somewhat effectively learned by the generative models. In comparison, on more complex/rich, and hence not effectively learned, image datasets such as natural scenery, the regularization ability of the generative models is expected to suffer. This discussion raises a question: can only a pretrained generator act as a good image prior in the deblurring inverse problem? The answer, surprisingly, is no; our experiments suggest that even an untrained structured generative network acts as a good prior for natural images in deblurring. A similar observation was first made in other image restoration contexts. This surprising observation suggests that the structure (deep convolutional layers) of the generative network alone, without any pretraining, captures some of the image statistics, and hence can act as a reasonable prior in the inverse problem. Of course, an untrained generative network is not as effective a prior as a pretrained one. However, importantly for us, this ability of a deep convolutional network makes the case for continuing to employ it as a prior on complex images on which the generator is either not well trained or even untrained.
We continue to use the easy-to-train (slim) blur generator as a pretrained network, while the weights W of the untrained image generator are updated together with the input vectors z_x and z_k to minimize the measurement loss; the deblurring scheme previously was concerned with updating z_x and z_k only. Importantly, unlike the pretrained image generator, trained on thousands of image examples, the weights of the untrained generator are learned from the one blurred image alone in the deblurring process itself. To encourage a sane weight update (leading to realistic generated images), we add a total variation (tv) penalty on the output of the image generator. This assists the generator in learning weights that produce natural images, which typically have a smaller tv measure (piecewise smooth). Just as in (6), we also add penalties on the latent representations. The resultant optimization program for image deblurring in this case is

(Ŵ, ẑ_x, ẑ_k) = argmin_{W, z_x, z_k} ‖y − G_K(z_k) ⊗ G_W(z_x)‖² + ρ tv(G_W(z_x)) + γ‖z_x‖² + λ‖z_k‖²,

where G_W(z_x) denotes the image generator with weight parameters W and input z_x. We minimize the objective above by alternately taking gradient steps in each of the unknowns while fixing the others. The vectors z_x and z_k are initialized as random Gaussian vectors. We initialize the weights of G_W by fitting its output to the given blurry image y for a fixed random input z_x. This is equivalent to solving the optimization program below

W₀ = argmin_W ‖y − G_W(z_x)‖².
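As a toy illustration of this initialization step (with a single linear layer G_W(z) = W z standing in for the deep generator — our simplification), the fit to a single blurred image for a fixed random input has a closed-form least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(4)
n, l = 64, 8
y = rng.standard_normal(n)    # the (flattened) blurry image
z = rng.standard_normal(l)    # fixed random latent input

# argmin_W ||y - W z||^2 over W: rank-one least-squares solution
W0 = np.outer(y, z) / (z @ z)
```

For a single sample the linear fit is exact (W0 z reproduces y); with the deep non-linear G_W of Algorithm 3, the same initialization is instead carried out by gradient descent on the fitting loss.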
Formally, the iterative scheme to minimize the optimization program above is given in Algorithm 3. From the minimizer (Ŵ, ẑ_x, ẑ_k), the desired deblurred image and blur kernel are obtained using a forward pass as x̂ = G_Ŵ(ẑ_x) and k̂ = G_K(ẑ_k), respectively.
V Experimental Results
We now provide a comprehensive set of experiments to evaluate the performance of the proposed deblurring approach under generative priors. We begin with a description of the clean image and blur datasets, and a brief mention of the corresponding pretrained generative models for each dataset, in Section V-A. A description of the baseline deblurring methods is provided in Section V-B. Section V-C gives detailed qualitative and quantitative performance evaluations of our proposed techniques in comparison to the baseline methods. The choices of free parameters for Algorithms 1 and 2 are given in Tables I and II, respectively. We also evaluate performance under increasing noise and large blurs. In addition, we discuss the impact of increasing the latent dimension, and of multiple random restarts, on the deblurred images. Section V-D showcases image deblurring results on complex natural images using untrained generative priors. In all experiments, we use noisy blurred images, generated by convolving images and blurs from their respective test sets and adding 1% Gaussian noise (unless stated otherwise); for an image scaled between 0 and 1, Gaussian noise of 1% translates to zero-mean Gaussian noise with standard deviation 0.01.
[Tables I and II: free parameters of Algorithms 1 and 2 for each dataset — number of steps (t), step size, and random restarts.]
[Table III: encoder and decoder architectures of the VAEs used for the blur kernels and the SVHN images. Here conv denotes a convolution layer with the given number of filters, filter size, and stride; convT a transposed convolution layer; maxpool a max-pooling layer with the given stride and pool size; and fc a fully connected layer of the given size. The decoder is designed to be a mirror reflection of the encoder in each case.]
V-A Datasets and Generative Models
To evaluate the proposed technique, we choose three image datasets. The first dataset, SVHN, consists of house-number images from Google Street View. A total of 531K images are available in SVHN, out of which 30K are held out as a test set. The second dataset, Shoes, consists of 50K RGB images of shoes, resized to a common dimension. We hold out a portion of the images for testing and use the rest as the training set. The third dataset, CelebA, consists of relatively more complex images of celebrity faces. A total of 200K center-cropped images are available, out of which 22K are held out as a test set.

A motion blur dataset is generated consisting of small to very large blurs of lengths varying between 5 and 28, following the strategy given in the literature. Some representative blurs from this dataset are shown in Figure 4. We generate 80K blurs, out of which 20K are held out as a test set.
The generative model of SVHN images is a trained VAE with the network architecture described in Table III. The dimension of the latent space of the VAE is 100, and training is carried out on SVHN with a batch size of 1500 using the Adam optimizer. After training, the decoder part is extracted as the desired generative model G_X. For Shoes and CelebA, the generative model is the default deep convolutional generative adversarial network (DCGAN).

The generative model of the motion blur dataset is a trained VAE with the network architecture given in Table III. This VAE is trained using the Adam optimizer with latent dimension 50 and batch size 5. After training, the decoder part is extracted as the desired generative model G_K.
V-B Baseline Methods
Among the conventional algorithms using engineered priors, we choose dark channel prior (DP), extreme channel prior (EP), outlier handling (OH), and learned data fitting (DF) based blind deblurring as baseline algorithms. We optimized the parameters of these methods in each experiment to obtain the best possible baseline results. Out of the more recent, and very competitive, data-driven approaches for deblurring, we choose one that trains a convolutional neural network (CNN) in an end-to-end manner, and one that trains a neural network (DeblurGAN) in an adversarial manner. Each of these networks is trained on SVHN and CelebA. For the CNN, we train a slightly modified (fine-tuned) version using the Adam optimizer with batch size 16. To train DeblurGAN, we use the code provided by its authors. Deblurred images from these baseline methods will be referred to as x̂_DP, x̂_EP, x̂_OH, x̂_DF, x̂_CNN, and x̂_GAN, respectively.
V-C Deblurring Results under Pretrained Generative Priors
We now evaluate the performance of Algorithm 1 under small to heavy blurs and varying degrees of additive measurement noise. As will be shown, both qualitatively and quantitatively, Algorithm 1 produces encouraging deblurring results, especially under large blurs and heavy noise. However, the central limiting factor in the performance is the ability of the generator to represent the (original, clean) image to be recovered. As pointed out earlier, the generators are often not fully expressive (cannot generate new representative samples) on a rich/complex image class, such as face images, compared to a compact/simple image class, such as house numbers. Such a generator mostly cannot adequately represent a new image in its range. Since Algorithm 1 strictly constrains the recovered image to lie in the range of the image generator, its performance depends on how well the range of the generator spans the image class. Given an arbitrary image x in the set X, the closest image x_r in the range of the generator to x is computed by solving the following optimization program

x_r = G_X(ẑ_x),  ẑ_x = argmin_{z_x} ‖x − G_X(z_x)‖².

We solve this optimization program by running gradient descent for CelebA and SVHN; the parameters for Shoes are the same as for CelebA.
A more expressive generator leads to a better deblurring performance, as it can represent an arbitrary original (clean) image x with a smaller mismatch ‖x − x_r‖ to the corresponding range image x_r. Using the triangle inequality, we have the following upper bound on the overall recovery error between the deblurred image x̂_G and the true image x in terms of the range error:

‖x − x̂_G‖ ≤ ‖x − x_r‖ + ‖x_r − x̂_G‖.   (14)
V-C1 Impact of Generator Range on Image Deblurring
The range error ‖x − x_r‖ depends purely on the expressive power of the generator, which in turn relies on factors such as the training scheme, network structure, and depth, clearly determined by the available computational resources. Therefore, to judge the deblurring algorithm independently of the generator limitations, we present its deblurring performance on range images x_r; we do this by generating a blurred image y = k ⊗ x_r from an image already in the range of the generator. This implicitly removes the range error in (14), as now x = x_r. We call this range-image deblurring; the deblurred image in this case is obtained using Algorithm 1, and is denoted by x̂_r. For completeness, we also assess the overall performance of the algorithm by deblurring arbitrary blurred images y = k ⊗ x, where x is not necessarily in the range of the generator. Unlike above, the overall error in this case accounts for the range error as well. We call this arbitrary-image deblurring; the deblurred image is obtained using Algorithm 1, and is denoted by x̂_G. Figure 6 shows a qualitative comparison between x, x_r, and x̂_G on the CelebA and Shoes datasets. It is clear that the recovered image x̂_G is a good approximation of the range image x_r, the closest image to the original (clean) image x in the range of G_X. Evidently, the deviations of x̂_G from x in the referenced figure indicate the limitations of the image generative network used.
Algorithm 2 mitigates the range error by not strictly constraining the recovered image to lie in the range of the image generator, and uses a combination of the generative prior and a classical engineered prior; for details, see Section IV-D. The blurred image in this case is again y = k ⊗ x for an arbitrary (not necessarily in the range) image x in X. The image deblurred using Algorithm 2 is denoted by x̂_S. For comparison, we present the recovered images using this approach in Figure 5. It can be seen, again, that x̂_G closely resembles x_r, whereas x̂_S is almost exactly x, thus mitigating the range issue of the generator G_X.
V-C2 Qualitative Results on CelebA and Shoes
Figures 5 and 7 give a qualitative comparison between the original images x, range images x_r, and the deblurred images x̂_G and x̂_S on the CelebA and Shoes datasets. We also show image deblurring using the baseline methods introduced in Section V-B. Unfortunately, the deblurred images under engineered priors are qualitatively far inferior to the deblurred images x̂_G and x̂_S under the proposed generative priors, especially under large blurs. On the other hand, the end-to-end training based approaches, CNN and DeblurGAN, perform relatively better; however, the well-reputed CNN still produces over-smoothed images with missing edge details compared to our results. DeblurGAN, though competitive, is outperformed by the proposed Algorithm 2 by more than 1.5 dB. On closer inspection, although sharp, x̂_G deviates from x, whereas x̂_S tends to agree more closely with x. A close comparison between the recovered images x̂_G and x̂_S reveals that the latter often performs better than the former. The images x̂_G are sharp, with well-defined facial boundaries and markers, owing to the fact that they strictly come from the range of the generator; however, in doing so these images might end up altering some image features such as expressions or the nose. On close inspection, it becomes clear that how well x̂_G approximates x roughly depends (see the images in the second row of Figure 6 specifically) on how close x_r is to x, as discussed at length at the beginning of this section. Since the images x̂_S are allowed some leverage, and are not strictly confined to the range of the generator, they tend to agree more closely with the ground truth.
V-C3 Qualitative Results on SVHN
Figure 10 gives a qualitative comparison between the proposed and baseline methods on the SVHN dataset. Here, deblurring under classical priors again clearly underperforms compared to the proposed image deblurring results . CNN also continues to be inferior, and DeblurGAN, which produced competitive results on CelebA and Shoes above, also shows artifacts. We do not include the results in this comparison, as already comprehensively outperform the other techniques on this dataset. These convincing results are a manifestation of the fact that, unlike the relatively complex CelebA and Shoes datasets, the simpler SVHN image dataset is effectively spanned by the range of the image generator.
V-C4 Quantitative Results
Quantitative results for CelebA, Shoes, and SVHN using peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) , averaged over 80 test set images, are given in Table IV. On CelebA and Shoes, the results clearly show a better performance of our proposed Algorithm 2, on average, compared to all baseline methods. On SVHN, the results show that Algorithm 1 outperforms all competitors. The fact that Algorithm 1 performs more convincingly on SVHN is explained by observing that the range images in SVHN are quantitatively much better than the range images of CelebA and Shoes.
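For reference, PSNR is computed from the mean squared error between the recovered and true images; a minimal pure-Python sketch on flattened pixel lists, assuming a peak intensity of 255:

```python
import math

def psnr(x, y, peak=255.0):
    # Peak signal-to-noise ratio between two equal-length pixel lists.
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

The tabulated scores average this quantity over the test set; SSIM additionally compares local luminance, contrast, and structure, and is best taken from a reference implementation.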
V-C5 Robustness against Noise
Figure 8 gives a quantitative comparison, in the presence of Gaussian noise, of the deblurring obtained via Algorithm 1 (the free parameters and the random restarts in the algorithm are fixed as before) and the baseline methods CNN and DeblurGAN (trained on a fixed 1% noise level and on varying 1-10% noise levels). We also include the performance of the deblurred range images , introduced in Section V-C, as a benchmark. Conventional prior based approaches are not included, as their performance suffers substantially under noise compared to the other approaches. On the vertical axis, we plot the performance metrics (PSNR and SSIM), and on the horizontal axis, we vary the noise strength from 1 to 10%. In general, the quality of the deblurred range images (expressible by the generators) under generative priors surpasses the other algorithms on both CelebA and SVHN. This indicates that under expressive generative priors, the performance of our approach is far superior. The quality of deblurred images under generative priors with arbitrary (not necessarily in the range of the generator) input images is the second best on SVHN; however, it underperforms on the CelebA dataset. The most convincing explanation of this performance deficit is the range error (a less expressive generator) on the relatively complex/rich images of CelebA. The end-to-end approaches trained on a fixed 1% noise level display a rapid deterioration at other noise levels. Comparatively, the ones trained on the 1-10% noise levels, expectedly, degrade more gracefully. DeblurGAN generally underperforms compared to our proposed algorithms; however, CNN displays competitive performance, and its deblurred images are second best after on CelebA, and third best on SVHN after both and . Qualitative results under heavy noise are depicted in Figure 9. Our deblurred image visually agrees better with than the other methods.
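For concreteness, a plausible way to synthesize test inputs at a given noise strength is sketched below, assuming, as a convention, that the percentage sets the noise standard deviation relative to the peak intensity; the paper's exact noise parameterization may differ.

```python
import random

def add_gaussian_noise(pixels, strength_pct, peak=255.0, seed=0):
    # sigma is strength_pct percent of the peak intensity:
    # e.g. strength_pct=10 gives sigma = 25.5 on a 0-255 scale.
    # Values are clipped back to the valid intensity range.
    rng = random.Random(seed)
    sigma = (strength_pct / 100.0) * peak
    return [min(peak, max(0.0, p + rng.gauss(0.0, sigma))) for p in pixels]
```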
V-C6 Random Restarts
Since our proposed algorithms minimize non-convex objectives, the deblurring results depend on the initialization. Higher quality deblurred images are achieved if, instead of running the algorithm once, we run it several times, each time with a new random initialization of the latent dimensions ( and ), and choose the best result based on the measurement loss (data misfit). Technically, multiple random restarts make us less vulnerable to being trapped in a poor local minimum of the non-convex objective by giving the gradient descent algorithm fresh starts. Figure 11 gives a bar plot of the average PSNR on CelebA and SVHN versus the number of random restarts. Evidently, the PSNR improves with an increasing number of random restarts.
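The restart strategy can be sketched in a few lines; here a toy scalar non-convex function stands in for the measurement loss, and plain gradient descent for the optimizer. The objective, step size, and iteration counts below are illustrative stand-ins, not the paper's settings.

```python
import random

def gradient_descent(grad, z0, lr=0.05, iters=200):
    # Plain gradient descent from a given initialization.
    z = z0
    for _ in range(iters):
        z -= lr * grad(z)
    return z

def minimize_with_restarts(loss, grad, n_restarts=10, seed=0):
    # Fresh random initializations of the latent variable; keep the
    # candidate achieving the smallest loss (data misfit).
    rng = random.Random(seed)
    best_z, best_loss = None, float("inf")
    for _ in range(n_restarts):
        z = gradient_descent(grad, rng.uniform(-2.0, 2.0))
        if loss(z) < best_loss:
            best_z, best_loss = z, loss(z)
    return best_z, best_loss

# Toy non-convex objective with two minima; the global one is near z = -1.04.
loss = lambda z: (z * z - 1.0) ** 2 + 0.3 * z
grad = lambda z: 4.0 * z * (z * z - 1.0) + 0.3
```

A single descent run starting in the wrong basin lands in the shallow local minimum; with several restarts, at least one initialization typically falls into the global basin.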
V-C7 Latent Dimension
The length of the latent parameters also affects the quality of the deblurred image. Figure 12 depicts the relationship between the average PSNR of the recovered images and the length of . We do this by training CelebA image generators with different lengths of , and employing each trained generator as a prior in Algorithm 1. The result shows that increasing the length of above 10 sharply increases the PSNR, which tapers off after the length of exceeds 200.
Increasing the length of the parameters gives the generator more freedom to parameterize the latent distribution, and hence to better model the underlying random process. Roughly speaking, this improves the expressive power of the generator to a certain degree. However, increasing the length of also increases the number of unknowns in the deblurring process. Therefore, increasing the length of only improves the performance up to a point, as depicted in Figure 12. As mentioned at the beginning of the experiments, the length of was fixed at 100 in all the performance evaluations above; this plot shows that setting the length of to 200 should roughly improve the average PSNR by 1dB for the deblurred CelebA images.
V-C8 Robustness against Large Blurs
As the experiments above make clear, owing to the more involved learning process, the generative priors are far more effective than the classical priors, and firmly guide the deblurring algorithm towards better quality deblurred images. This advantage of generative priors becomes clearly visible in the case of large blurs, when the blurred image is not even recognizable to the naked eye. Figure 13 shows the deblurred images obtained from a very blurry face image. Algorithm 2 is able to recover the true face from a completely unrecognizable one. The classical baseline algorithms totally succumb to such large blurs. A quantitative comparison against the end-to-end neural network based methods, CNN and DeblurGAN, is given in Figure 14. We plot the blur size against the average PSNR and SSIM for both the Shoes and CelebA datasets. On both datasets, the deblurred images using our Algorithm 2 convincingly outperform all other techniques. For comparison, we also add the performance of . To summarize, the end-to-end approaches begin to lag far behind our proposed algorithms as the blur size increases, owing to the firm control exerted by the powerful generative priors on the deblurring process in our newly proposed algorithms.
V-D Extension to Natural Images using Untrained Generators
As discussed in detail earlier, the extension of the proposed deblurring under generative priors to more complex/rich natural images is limited by the expressive power of the image generator. Generally, the range error of the image generator deteriorates for relatively more complex/rich image datasets, which in turn results in below-par deblurring performance. One way to address this drawback is to move from Algorithm 1, which strictly restricts the recovered image to the range of the generator, to Algorithm 2, which allows some leverage by going beyond the range of the generator under one of the classical priors. However, the question that remains is how to extend the deblurring algorithm under generative models alone (without input from any classical prior, as in Algorithm 2) to complex/rich natural images.
To answer this question, one extreme solution that avoids the shortcoming of generative networks on complex images is to skip the network training step entirely, and employ untrained generative networks as image priors. As mentioned, similar ideas have recently been explored in end-to-end networks . Algorithm 3 does exactly this, and updates the weights of a properly initialized network in addition to and in the iterative scheme. We test this algorithm on complex blurred images. A properly initialized DCGAN, see (12), modified to the image resolution, was introduced as an untrained image generator in Algorithm 3. Initialization of the DCGAN was carried out using the Adam optimizer with the step size set to for iterations. We then optimized the loss in (IV-E), again using the Adam optimizer for iterations. The step sizes for updating and were chosen to be and , respectively. The smaller step size for the network weights discourages any large deviation of the weight parameters from our qualified initialization derived from the blurred image, which is the only information available in this case, as the generator is not trained a priori.
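The role of the two step sizes can be illustrated with a scalar stand-in for the generator: joint gradient updates on the latent code z and a weight w, with a much smaller step on w so the weights stay near their initialization. The toy model G_w(z) = w·z, plain gradient descent instead of Adam, and all step sizes below are illustrative assumptions, not the paper's DCGAN settings.

```python
def joint_descent(y, z, w, lr_z=0.01, lr_w=1e-4, iters=500):
    # Minimize (w*z - y)^2 jointly over the latent code z and the
    # generator "weight" w; the much smaller lr_w keeps w close to
    # its initialization while z does most of the fitting.
    for _ in range(iters):
        r = w * z - y                      # residual (data misfit)
        gz, gw = 2.0 * r * w, 2.0 * r * z  # gradients w.r.t. z and w
        z -= lr_z * gz
        w -= lr_w * gw
    return z, w
```

With a shared step size the weights would absorb much of the fit; the asymmetric step sizes let the latent code explain the data while the weights barely move.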
Figure 15 shows the results of Algorithm 3 on a few complex blurry images, and also compares against the classical prior based techniques. Interestingly, even the untrained generator performs quite competitively against these baseline methods. The PSNR and SSIM values of the deblurred images are also reported in the inset. These initial results are meant to showcase the potential of generative priors on more complex image datasets. They show that introducing a generative prior in image deblurring is in general a good idea, regardless of the expressive power of the generator on the image dataset, as the generator acts as a reasonable prior based on its structure alone. Future work focusing on novel network architecture designs that more strongly favor clear images over blurry ones could pave the way for more effective utilization of generative priors in image deconvolution.
This paper proposes a novel framework for blind image deblurring that uses deep generative networks as priors rather than in the conventional end-to-end manner. We report convincing deblurring results under the generative priors in comparison to the existing methods. A thorough discussion of the possible limitations of this approach on more complex images is presented, along with a few effective remedies to address these shortcomings. Importantly, the general strategy of invoking generative priors is not limited to deblurring, but can be employed in other interesting non-linear inverse problems in signal processing and computer vision. The main contribution of the paper therefore goes beyond image deblurring: it introduces generative priors as an effective method for challenging non-linear inverse problems, with a multitude of interesting follow-up questions.
-  P. Campisi and K. Egiazarian, Blind image deconvolution: theory and applications. CRC Press, 2016.
-  D. Kundur and D. Hatzinakos, “Blind image deconvolution,” IEEE Signal Processing Magazine, vol. 13, no. 3, pp. 43–64, 1996.
-  T. F. Chan and C.-K. Wong, “Total variation blind deconvolution,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 370–375, 1998.
-  R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, “Removing camera shake from a single photograph,” in ACM Transactions on Graphics (TOG), vol. 25, no. 3. ACM, 2006, pp. 787–794.
-  L. Xu, S. Zheng, and J. Jia, “Unnatural l0 sparse representation for natural image deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 1107–1114.
-  T. Michaeli and M. Irani, “Blind deblurring using internal patch recurrence,” in European Conference on Computer Vision. Springer, 2014, pp. 783–798.
-  A. Ahmed, B. Recht, and J. Romberg, “Blind deconvolution using convex programming,” IEEE Transactions on Information Theory, vol. 60, no. 3, pp. 1711–1732, 2014.
-  W. Ren, X. Cao, J. Pan, X. Guo, W. Zuo, and M.-H. Yang, “Image deblurring via enhanced low-rank prior,” IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3426–3437, 2016.
-  D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-laplacian priors,” in Advances in Neural Information Processing Systems, 2009, pp. 1033–1041.
-  C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf, “Learning to deblur,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 7, pp. 1439–1451, 2016.
-  M. Hradiš, J. Kotera, P. Zemcík, and F. Šroubek, “Convolutional neural networks for direct text deblurring,” in Proceedings of BMVC, vol. 10, 2015.
-  A. Chakrabarti, “A neural approach to blind motion deblurring,” in European Conference on Computer Vision. Springer, 2016, pp. 221–235.
-  P. Svoboda, M. Hradiš, L. Maršík, and P. Zemcík, “Cnn for license plate motion deblurring,” in Image Processing (ICIP), 2016 IEEE International Conference on. IEEE, 2016, pp. 3832–3836.
-  P. Hand and V. Voroninski, “Global guarantees for enforcing deep generative priors by empirical risk,” arXiv preprint arXiv:1705.07576, 2017.
-  D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” arXiv preprint arXiv:1711.10925, 2017.
-  A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 1964–1971.
-  Z. Hu, J.-B. Huang, and M.-H. Yang, “Single image deblurring with adaptive dictionary learning,” in Image Processing (ICIP), 2010 17th IEEE International Conference on. IEEE, 2010, pp. 1169–1172.
-  H. Zhang, J. Yang, Y. Zhang, and T. S. Huang, “Sparse representation based blind image deblurring,” in Multimedia and Expo (ICME), 2011 IEEE International Conference on. IEEE, 2011, pp. 1–6.
-  J.-F. Cai, H. Ji, C. Liu, and Z. Shen, “Blind motion deblurring from a single image using sparse approximation,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 104–111.
-  J. Pan, R. Liu, Z. Su, and G. Liu, “Motion blur kernel estimation via salient edges and low rank prior,” in Multimedia and Expo (ICME), 2014 IEEE International Conference on. IEEE, 2014, pp. 1–6.
-  J. Pan, D. Sun, H. Pfister, and M.-H. Yang, “Blind image deblurring using dark channel prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1628–1636.
-  L. Xu, C. Lu, Y. Xu, and J. Jia, “Image smoothing via l 0 gradient minimization,” in ACM Transactions on Graphics (TOG), vol. 30, no. 6. ACM, 2011, p. 174.
-  Y. Yan, W. Ren, Y. Guo, R. Wang, and X. Cao, “Image deblurring via extreme channels prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4003–4011.
-  J. Dong, J. Pan, Z. Su, and M.-H. Yang, “Blind image deblurring with outlier handling,” in IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2478–2486.
-  J. Pan, J. Dong, Y.-W. Tai, Z. Su, and M.-H. Yang, “Learning discriminative data fitting functions for blind image deblurring.” in ICCV, 2017, pp. 1077–1085.
-  L. Li, J. Pan, W.-S. Lai, C. Gao, N. Sang, and M.-H. Yang, “Learning a discriminative prior for blind image deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6616–6625.
-  S. Nah, T. H. Kim, and K. M. Lee, “Deep multi-scale convolutional neural network for dynamic scene deblurring,” arXiv preprint arXiv:1612.02177, 2016.
-  X. Xu, D. Sun, J. Pan, Y. Zhang, H. Pfister, and M.-H. Yang, “Learning to super-resolve blurry face and text images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 251–260.
-  T. Nimisha, A. K. Singh, and A. Rajagopalan, “Blur-invariant deep learning for blind-deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4752–4760.
-  O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, “Deblurgan: Blind motion deblurring using conditional adversarial networks,” arXiv preprint arXiv:1711.07064, 2017.
-  Y. Chen, F. Wu, and J. Zhao, “Motion deblurring via using generative adversarial networks for space-based imaging,” in 2018 IEEE 16th International Conference on Software Engineering Research, Management and Applications (SERA). IEEE, 2018, pp. 37–41.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
-  D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013.
-  A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” arXiv preprint arXiv:1703.03208, 2017.
-  V. Shah and C. Hegde, “Solving linear inverse problems using gan priors: An algorithm with provable guarantees,” arXiv preprint arXiv:1802.08406, 2018.
-  R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do, “Semantic image inpainting with deep generative models,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5485–5493.
-  P. Samangouei, M. Kabkab, and R. Chellappa, “Defense-gan: Protecting classifiers against adversarial attacks using generative models,” 2018.
-  A. Ilyas, A. Jalal, E. Asteri, C. Daskalakis, and A. G. Dimakis, “The robust manifold defense: Adversarial training using generative models,” arXiv preprint arXiv:1712.09196, 2017.
-  A. Yu and K. Grauman, “Fine-grained visual comparisons with local learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 192–199.
-  G. Boracchi, A. Foi et al., “Modeling the performance of image restoration from motion blur.” IEEE Trans. Image Processing, vol. 21, no. 8, pp. 3502–3517, 2012.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training gans,” in Advances in Neural Information Processing Systems, 2016, pp. 2234–2242.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of gans for improved quality, stability, and variation,” arXiv preprint arXiv:1710.10196, 2017.
-  S. Athar, E. Burnaev, and V. Lempitsky, “Latent convolutional models,” 2018.
Extended qualitative results for CelebA, SVHN, and Shoes are provided in Figures 16, 17, and 18. In addition to motion blurs, we also trained a generative model on Gaussian blurs, and show qualitative results for Algorithm 1 on SVHN and CelebA in Figures 19 and 20, respectively.
We also employed the powerful PG-GAN as a generative model , which has been shown to produce high-resolution realistic images. However, as observed in , pretrained GANs do not generalize well to higher resolutions. We suspect this is due to the mode collapse issue during the training of PG-GAN. We therefore report qualitative results with the generator of PG-GAN on images generated by it, as shown in Figure 21.