Robust Blind Deconvolution via Mirror Descent

03/21/2018 · by Sathya N. Ravi, et al. · University of Wisconsin-Madison

We revisit the Blind Deconvolution problem with a focus on understanding its robustness and convergence properties. Provable robustness to noise and other perturbations is receiving recent interest in vision, from obtaining immunity to adversarial attacks to assessing and describing failure modes of algorithms in mission critical applications. Further, many blind deconvolution methods based on deep architectures internally make use of or optimize the basic formulation, so a clearer understanding of how this sub-module behaves, when it can be solved, and what noise injection it can tolerate is a first order requirement. We derive new insights into the theoretical underpinnings of blind deconvolution. The algorithm that emerges has nice convergence guarantees and is provably robust in a sense we formalize in the paper. Interestingly, these technical results play out very well in practice, where on standard datasets our algorithm yields results competitive with or superior to the state of the art. Keywords: blind deconvolution, robust continuous optimization

1 Introduction

Image deblurring has been an active area of study in computer vision for nearly five decades. The early proposals sought to sharpen or deblur images from photographs by relying on parameters relating the exposure and the amplifier gain, e.g., via the use of the Stroke/Zech division filter Stroke and Halioua (1970). Most contemporary algorithms for deblurring, however, pose the problem as blind deconvolution, which refers to separating a true unknown signal and some unknown “kernel” or “filter” when provided knowledge only of the noisy measurement of the signal convolved with the filter. This is a fundamental topic today in signal processing and vision, and remains challenging due to its non-convex and ill-posed nature; only within the last few years has brisk progress been made towards methods that gracefully handle real images encountered in practice Levin et al. (2009); Campisi and Egiazarian (2016). These recent developments notwithstanding, due to the foregoing technical challenges, we are often unable to guarantee provably good solutions to the underlying optimization task, and strategies to address these issues are being studied by various researchers in our community today Perrone et al. (2015); Jin et al. (2017); Li et al. (2016); Campisi and Egiazarian (2016).

Modern approaches generally prefer one of two related but distinct strategies for blind deconvolution. On the statistical side, research has primarily revolved around Bayesian methods Ruiz et al. (2015), taking advantage of useful priors ranging from fundamental image geometry in the context of its relation to edge detection and saliency, to expert knowledge of the specific application domain of interest Cho and Lee (2009); Xu and Jia (2010). While these ideas provide guarantees in terms of robustness, the development of efficient sampling (e.g., Gibbs sampler) and inference algorithms remains an active topic of research. On the optimization side, total variation regularization has proven to be extremely effective in general image deblurring Perrone and Favaro (2014); Chan and Wong (1998); Osher et al. (2005) in a variety of image domains. While the mathematical properties of total variation have been well studied in applied mathematics, signal processing and machine learning, our understanding of the robustness and convergence behavior of even the best performing algorithms for blind deconvolution based on this construct remains limited, although there is exciting progress being made Srinivasan et al. (2017). A primary motivation of our work is to shed light on these theoretical issues.

Separate from, but complementary to the above lines of work, the enormous success of deep convolutional architectures in vision has led to a number of papers Sun et al. (2015); Schuler et al. (2016); Chakrabarti (2016); Noroozi et al. (2017) exploring how such successes can be adapted to deconvolution in general. While some initial attempts showed the use of machine learning methods for non-blind image deconvolution (i.e., the blur kernel is provided), discriminatively trained architectures have now been shown to work quite well for the general setting, both with and without priors on motion blur types. A natural question one may ask is whether an in-depth study of the core blind deconvolution formulation and its properties is relevant in light of this still evolving body of convolutional neural network based literature. The reader will see that our work is complementary. Many of the recent proposals in this line of work reformulate deconvolution as a supervised learning problem by synthesizing blurred and sharp image pairs, and are often based on some form of blind deconvolution sub-routine internally Schuler et al. (2016). As these methods get closer to practical deployment in mission critical applications, a detailed assessment of their behavior profile will be a first order requirement for regulation compliance. To enable investigating the robustness and convergence properties of these architectures and their resilience to adversarial examples, as is happening in the last few years for other problems in both computer vision and machine learning Su et al. (2017); Moosavi-Dezfooli et al. (2016); Moosavi Dezfooli et al. (2017), we will necessarily rely on and benefit from a “first principles” understanding of such properties for the standalone (i.e., shallow regimes of) blind deconvolution.

Contribution. In this paper, we provide (1) a quantifiably and provably robust algorithm for blind deconvolution with (2) guaranteed convergence properties. To our knowledge, no algorithm is currently known that offers both these properties at once. Our convergence guarantees match the best known results in optimization at this time. Our technical analysis is also backed up by practical performance. Via an extensive experimental study, we show that on most available benchmarks, our simple algorithm competes favorably with (or is superior to) the state of the art, and provide a user-friendly implementation which can be easily extended to a complete user-interactive deblurring package.

1.1 Prior Work

Methods for image deblurring via blind deconvolution have employed a variety of regularizations derived from a wide range of image priors. The literature is vast and so we restrict our discussion to a subset of works that are closely related to or motivate our proposed strategy. Early forms of regularization were introduced in You and Kaveh (1996), where an alternating minimization scheme was proposed. More recent improvements have been proposed by Cho and Lee Cho and Lee (2009) and Xu and Jia Xu and Jia (2010). On the other hand, total variation regularization – the de facto choice in many state of the art methods today – was initially deployed in image denoising applications Rudin et al. (1992); Vogel and Oman (1996), and brought to the image deconvolution problem by Chan and Wong Chan and Wong (1998). A nice result by Osher et al. (2005) gives a variational iterative procedure for solving the total variation objective. A conceptually distinct set of results for blind deconvolution adopt a more statistical approach instead. Levin et al. in Levin et al. (2009) provide an analysis of algorithms following maximum a posteriori (MAP) estimators. A recent work Ruiz et al. (2015) gives a nice and comprehensive overview of Bayesian methods for blind deconvolution. A few years back, Perrone and Favaro (2014) built on the analysis in Levin et al. (2009) and demonstrated experimentally the behavior of the algorithm of Chan and Wong (1998). In a follow-up work, those authors showed the advantage of a logarithmic prior Perrone et al. (2015), obtaining state of the art results with a mild modification to the classical TV-norm based formulation which we will present shortly. Separate from total variation regularization based approaches, interesting results have been shown by Pan et al. (2014) through an ℓ0 regularization on text images and by Michaeli and Irani (2014); Sun et al. (2013) via the use of patch priors. Recently, a detailed comparative study was conducted by Lai et al. (2016), in which participants were asked to qualitatively compare two results from multiple algorithms, a subset of which are described in our review above.

In the last few years, ideas based on specialized deep networks have started yielding interesting results for this problem. For example, Sun et al. (2015) was among the first to address motion blur removal by posing the problem as a supervised learning task and training a convolutional neural network (CNN) to infer the parameters. Schuler et al. built on these results in Schuler et al. (2016), and Chakrabarti Chakrabarti (2016) constructed a network to predict the Fourier coefficients of the filter necessary to deblur specific image patches. Taking advantage of modern convolutional architectures, Nah et al. (2016) constructed deep multi-scale networks for dynamic scene deblurring with strong empirical results. In the past year, Generative Adversarial Networks have also been applied with measured success Ramakrishnan et al. (2017).

2 The Blind Deconvolution problem

Throughout this paper we assume that an image is an n-dimensional vector taking values between 0 and 1 without loss of generality. We will use x and k to denote the vectorized sharp image and blur kernel, both of which are to be estimated given the vectorized blurry image y. Mathematically, the model can be written as,

y = k ∗ x + η,   (1)

where ∗ denotes the usual convolution between two signals and η denotes the independent noise vector at each pixel. Assuming that η is i.i.d. Gaussian, we can estimate (x, k) by maximizing the log-likelihood, corresponding to solving the following least squares optimization problem,

min_{x, k}  (1/2) ‖k ∗ x − y‖₂².   (2)

Observe that the number of parameters to be estimated (the entries of x and k together) can be much larger than the number of observations if the kernel is large. To narrow down the solutions of (2), many regularization functions (or priors) and/or constraints have been proposed in the literature Levin et al. (2009); Ruiz et al. (2015); Campisi and Egiazarian (2016). To keep the presentation simple, we will focus our attention on two generic components that have shown strong empirical performance to specify the full model.

Component 1) The Total Variation (TV) norm on x has been shown to promote smoothness of the estimated image Chambolle and Lions (1997). The image TV norm is defined as some ℓp norm of its discrete gradient field over the image lattice Ω:

TV_p(x) = Σ_{i ∈ Ω} ‖(∇x)_i‖_p.   (3)

Note that for p = 1 and p = 2, this corresponds to the classical anisotropic and isotropic TV norm respectively. Our theoretical analysis extends to any such p, but we will fix one choice of p to describe our results.
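
To make (3) concrete, the following NumPy sketch (ours, purely illustrative; the function name and the replicate-boundary forward differences are our choices, not the authors' implementation) computes the anisotropic (p = 1) and isotropic (p = 2) discrete TV of a 2D image.

import numpy as np

def tv_norm(x, p=2):
    # Discrete total variation of a 2D image x, cf. Eq. (3).
    # p = 1: anisotropic TV, p = 2: isotropic TV.
    dx = np.diff(x, axis=1, append=x[:, -1:])  # horizontal forward differences
    dy = np.diff(x, axis=0, append=x[-1:, :])  # vertical forward differences
    if p == 1:
        return np.abs(dx).sum() + np.abs(dy).sum()
    return np.sqrt(dx**2 + dy**2).sum()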

Component 2) In order to define a reasonable constraint set, we appeal to the fundamentals of the image capture process. Pixel values are explicitly a positive function of the photon count at a specific point on the image sensor, and so we enforce the constraint that the kernel must be nonnegative. Further, a blurred image can be interpreted as a weighted average of a sharp image captured with slight shifts, typically stemming from an extended exposure time due to a variety of reasons. Together, these requirements form our constraint set: the probability simplex Δ = {k : k ≥ 0, Σ_i k_i = 1}. With these two pieces, the problem that we aim to solve can be formally written as,

min_{x, k ∈ Δ}  f(x, k) := (1/2) ‖k ∗ x − y‖₂² + λ TV(x),   (4)

where λ > 0 is a tunable regularization parameter. Intuitively, higher values of λ will encourage more smoothness in the optimal sharp image of (4).
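
As a quick sanity check on (4), the objective can be evaluated in a few lines; the 'same'-mode symmetric-boundary convolution below is our convention for this sketch (the paper does not fix one here), and tv_norm is the helper defined above.

import numpy as np
from scipy.signal import convolve2d

def objective(x, k, y, lam, p=2):
    # f(x, k) = 0.5 * ||k * x - y||_2^2 + lam * TV_p(x), cf. Eq. (4).
    residual = convolve2d(x, k, mode='same', boundary='symm') - y
    return 0.5 * np.sum(residual**2) + lam * tv_norm(x, p)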

3 The PRIDA Algorithm

In principle, Problem (4) should be easily amenable to many continuous optimization methods, but in practice, Perrone and Favaro (2014) provides compelling evidence that choosing the right algorithm is critical to a successful recovery of the sharp image x. Notice two important but straightforward properties of the optimization in (4): 1) the objective is smooth and convex in each argument individually but not jointly convex, and 2) the feasible set is convex and compact.

Roadmap. We will see shortly that properly exploiting these two simple properties will suggest a natural choice of an algorithm that is familiar in non-linear optimization but not very broadly used in machine learning and vision. Interestingly, after we motivate the choice of the algorithm, we will see how the properties above provide certain technical results that yield guarantees for fast convergence rates and subsequently, suggest strategies for a rigorous robustness analysis. But first, let us analyze why some obvious simplifications and/or direct use of an alternating scheme may not be an effective strategy for this model.

Potential Idea: ignore nonconvexity? A natural strategy to solve (4) may be to use an algorithm which exploits the convexity of f individually with respect to x and k. A well known method that offers this capability is the Alternating Minimization (AM) algorithm Hardt (2014). The AM algorithm for this model performs the following calculation (or update) at each iteration:

x^{t+1} = argmin_x f(x, k^t),   (5)
k^{t+1} = argmin_{k ∈ Δ} f(x^{t+1}, k).   (6)

Since both the subproblems (5) and (6) are convex optimization problems, they can both be computed efficiently Hardt (2014).

A potential problem of Alternating Minimization: Random versus structured blur. There are some recent results that analyze the convergence behavior of the AM algorithm for random blur kernels Hardt (2014), and offer guarantees on its performance. Unfortunately, it is still an open question whether such guarantees are available for the structured blur kernels that we universally encounter in vision. In fact, Perrone and Favaro (2014) explicitly constructs an illustrative example where the AM algorithm converges to a strict saddle point due to the nonconvexity of f.

In the context of the blind deconvolution problem, strict saddle points correspond to a no blur solution, that is, when the kernel has only one nonzero entry. We see that in Perrone and Favaro (2014) (cf. Section 3.4), the authors give a clear example where the AM algorithm converges to the no blur solution, and thereby propose specific work-arounds to solve the subproblem (6) such that the algorithm empirically converges to the desired one instead. The authors also show that their scheme performs consistently better on many standard benchmark datasets. However, to our knowledge, it is not clear if the procedure suggested in Perrone and Favaro (2014) guarantees convergence in general. Whether the method in Perrone and Favaro (2014) provably returns a minimizer of (4) is also not described in their work.

Revisit Gradient Methods? Instead of the alternating scheme, we take a more “classical” approach to this problem and propose updating both x and k simultaneously at each iteration. Our choice of algorithm, described shortly, is motivated by two key insights in Problem (4). First, for a smooth optimization problem, it has recently become known that the set of initial points from which a first order gradient method converges to a saddle point has Lebesgue measure zero Panageas and Piliouras (2016). This immediately entails that, with very high probability, a gradient method will converge to a local minimizer. Second, the geometry of the simplex Δ will allow us to provably speed up the convergence, which is interesting from both a theoretical standpoint and a practical one.

Figure 1: Visualization of Projected Gradient Descent (PGD) and Mirror Descent (MD) on the probability simplex. If the step size is not selected carefully, projected updates are more likely to find solutions along the boundary.

A Mirror-descent style algorithm. To describe our algorithm, it is easiest to briefly review the form of a classical mirror descent (MD) scheme used in convex optimization. Recall that the standard way to solve constrained optimization problems is to use projections, that is, we first take a (negative) gradient step and then a Euclidean projection onto the feasible set, assuming that this is easy to do (as is the case with norm balls, hyperplanes and so on). This procedure is often referred to as Projected Gradient Descent (PGD): for a constraint set C,

w^{t+1} = Π_C ( w^t − γ^t ∇f(w^t) ),   (7)

where Π_C is the Euclidean projection onto C. Under mild conditions on the step size γ^t, PGD in fact guarantees convergence. However, the use of PGD type algorithms requires some caution: PGD completely disregards the geometry of the feasible set and only uses the local behavior of the objective function. Hence, the algorithm can be very inefficient, particularly in the high dimensional and large scale settings we see in vision Mahadevan and Liu (2012); Luong et al. (2012).

Intuitively, Mirror Descent (MD) addresses this problem with the following simple modification: it is better to choose a function that acts like a metric adapted to the feasible set. This function is called the Distance Generating Function (DGF), and moreover, it is enough for that function to behave like a metric just on the feasible set Juditsky et al. Exploiting this property, MD has been used to design algorithms that are provably faster than PGD Nesterov (2005), and it is the preferred algorithm in many applications Srebro et al. (2011); Jain and Thakurta (2012). An excellent description of the MD algorithm and its variants is given in Nemirovski (2012). Recently, Zhou et al. (2017) showed how to extend MD to a class of nonconvex problems called variationally coherent problems. Unfortunately, our problem (4) does not satisfy those assumptions, hence it is not clear how or if the results shown in Zhou et al. (2017) apply.

Motivated by the above discussion, we propose a Provably Robust Image Deconvolution Algorithm (PRIDA), shown in Alg. 1. As alluded to previously, PRIDA is similar in spirit to the MD algorithm in convex optimization. The main difference between the standard MD algorithm and PRIDA is that the step size is chosen independently for each coordinate. The intuition behind the step size rule can be seen as follows: if a coordinate of the filter (kernel) at the t-th iteration is large in magnitude, then we expect it to remain reasonably high at the (t+1)-th iteration. Our empirical results show that this is very effective in practice. Next, we show that PRIDA converges provably to a minimizer.

  Pick a starting point (x^0, k^0) with k^0 ∈ Δ.
  for t = 0, 1, 2, … do
     x^{t+1} ← x^t − γ^t ∇_x f(x^t, k^t)
     k̃^{t+1}_i ← k^t_i · exp( −γ^t_i ∇_{k_i} f(x^t, k^t) ) for every coordinate i   (*)
     k^{t+1} ← k̃^{t+1} / Σ_i k̃^{t+1}_i
  end for
Algorithm 1 Provably Robust Image Deblurring Algorithm (PRIDA)
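
The sketch below, in Python/NumPy for illustration (the paper's experiments use MATLAB), shows one PRIDA iteration under our reading of Algorithm 1: a plain gradient step on the image and a multiplicative, per-coordinate exponentiated-gradient step on the kernel followed by normalization. The gradient callables and the per-coordinate step-size array gamma_k are placeholders on our part; the released code should be consulted for the authors' exact step-size rule.

import numpy as np

def prida_step(x, k, y, lam, gamma_x, gamma_k, grad_x, grad_k):
    # One PRIDA iteration, as we interpret Algorithm 1.
    # grad_x(x, k, y, lam) and grad_k(x, k, y, lam) are assumed to return the
    # partial gradients of the objective in Eq. (4); gamma_k may be per-coordinate.
    x_new = x - gamma_x * grad_x(x, k, y, lam)             # gradient step on the image
    k_tilde = k * np.exp(-gamma_k * grad_k(x, k, y, lam))  # multiplicative step on the kernel
    k_new = k_tilde / k_tilde.sum()                        # normalization: back onto the simplex
    return x_new, k_new

Note that the multiplicative form keeps every kernel entry strictly positive, which is exactly the "soft-removal" behavior discussed later in Section 4.2.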

4 Convergence, Robustness, and More

To analyze PRIDA, we use the following equivalent interpretation of the update step (derived in the supplement):

x^{t+1} = argmin_x ⟨∇_x f(x^t, k^t), x⟩ + (1 / 2γ^t) ‖x − x^t‖₂²,   (8)
k̃^{t+1} = k^t ⊙ exp( −γ^t ⊙ ∇_k f(x^t, k^t) ),   (9)
k^{t+1} = argmin_{k ∈ Δ} D_KL(k ‖ k̃^{t+1}),   (10)

where D_KL(· ‖ ·) represents the usual Kullback-Leibler divergence between two nonnegative vectors, ⟨·, ·⟩ is the inner product, and ⊙ denotes element-wise multiplication. Note that when the divergence function is replaced by the (squared) Euclidean norm, the algorithm becomes the standard PGD update. Observe that the negative entropy acts as a distance-generating function on the simplex Δ, and hence k^{t+1} in (10) is unique. In order to show convergence we use the following intermediate result (Lemma 4), whose full statement and proof are deferred to the supplement; it yields the per-iteration inequality

(11)

With this in hand, we have the following convergence result: with appropriately chosen fixed step sizes, PRIDA converges to a local minimizer of (4) (it avoids strict saddle points) almost surely. We will assume, without loss of generality for the analysis, that the step size is fixed across iterations. We prove the convergence in two steps. In step 1, we show that the iterates of PRIDA (Algorithm 1) converge to a fixed point. In step 2, we show that there is a subsequence that converges to a stationary point, that is, a point that approximately satisfies the first order necessary conditions. We then use Lee et al. (2017) to show that such a stationary point is a locally optimal solution.

Step 1: For notational convenience, let z = (x, k), where the first block of coordinates denotes x and the last block denotes k. Define F(z) = f(z) + I_Δ(k), where I_Δ is the indicator function that takes the value 0 if k ∈ Δ and +∞ otherwise. Then, for any iteration t, we have that,

(12)
(13)
(14)
(15)

where (12) is by smoothness of the gradient (assumption), (14) is by the Cauchy-Schwarz inequality, and (15) is by Pinsker’s inequality (see page 88 in Tsybakov (2008)). Note that the minimizer of the right hand side with respect to z exactly corresponds to the update rule in PRIDA, and that the right hand side is strongly convex in z (again due to Pinsker’s inequality, see page 301 in Bubeck et al. (2015)). Hence we can bound the per iteration improvement by,

(16)
(17)

where inequality (17) follows from (15). Summing up the inequalities in (17) over t, we see that the per iteration improvement vanishes, that is, the algorithm converges to a fixed point.

Step 2: Since the update rule for x is standard gradient descent, we know that the iterates converge to a point where the gradient with respect to x vanishes, see Section 1.2.3 in Nesterov (2013). So we focus on the update rule for the kernel k, for which we use Lemma 4. Instantiating the lemma at the iterates, we have that,

(18)
(19)

where we used the smoothness assumption in (19). Again summing the inequalities in (19) over t, we have that,

(20)
(21)

Taking the limit as t → ∞, we see that we can find a point that satisfies the first order optimality conditions of our optimization problem; in other words, after finitely many steps we can find a point that is approximately optimal. The PRIDA iterates now satisfy the assumptions of Proposition 10 in Lee et al. (2017), and so by Corollary 7 therein it directly follows that PRIDA does not converge to a strict saddle point almost surely. While we get the same convergence rate (up to logarithmic factors) as that of PGD (see Ghadimi and Lan (2016)), the efficiency of PRIDA comes from the fact that each iteration of PRIDA takes O(n) time, compared to the O(n log n) required by PGD (see Chen and Ye (2011) for details), and is trivially parallelizable/amenable to GPU implementation. Details are included in the supplement.

4.1 Robustness

Having shown the convergence of PRIDA, the natural follow-up investigation is to characterize its behavior in terms of its noise tolerance. We call an algorithm robust if it produces (nearly) the same output on two different images such that one of them is a slightly perturbed version of the other. This notion of robustness has recently been introduced in the machine learning literature in the context of algorithmic stability Hardt et al. (2015). Recent results in our community show that this is a critically desirable property of algorithms used in vision-based deployments, since they are often sensitive to very small perturbations Moosavi Dezfooli et al. (2017); Su et al. (2017).

Plan of Attack. Using only the main concepts of stability, we aim to measure the robustness of our algorithm. In typical stability analyses, noise is often introduced in the gradient computation, as a proxy for stochastic or approximate gradient updates. We follow this idea, and aim to bound the difference between the result of a noisy gradient update and a clean one. To be specific, we assume that two images, one with noise and the other without, produce gradients that are approximately the same.

Hence, at iteration t we observe some noisy gradient with respect to k (and respectively with respect to x). We would like to bound the distance between the iterate computed with the noisy gradient and the iterate computed with the true gradient. In what follows, we look only at the update for the kernel k, but note that an analogous argument can be made for the sharp image x: the update step for x is essentially a (sub)gradient step, and so the argument is simpler. Let k^0 be the initial point where all coordinates are equal. Let g be the true gradient and ĝ be a noisy gradient. Then, we have that the kernel iterates computed using g and ĝ remain close, in a sense made precise below. In order to study the robustness properties of our algorithm, we will use the interpretation of PRIDA given in (9) and (10). Because the noisy gradient is only used in (9), we analyze how much the iterates can stray after each of the two updates separately. To that end, define the intermediate iterate computed using the true gradient and, similarly, the one computed using the noisy gradient. To keep the proof simple, we will assume that the step size is the same for all the coordinates, say γ, and note that the argument can be easily extended to the general case. Then, the distance between the two intermediate iterates can be bounded as follows,

(22)
(23)

where the first step follows from the definitions of the two intermediate iterates, and the last two follow from elementary bounds. Now we show that the second step of the update, which corresponds to a simple normalization, is also well behaved:

(24)
(25)
(26)
(27)
(28)

where we use the triangle inequality for (25), the reverse triangle inequality for the inequality in (27), and (28) follows from (23). If the noise level is sufficiently small relative to the step size, then the iterates computed using the noisy and true gradients remain close. This result clearly shows the interplay between the noise level and the step size γ. When a sharp image undergoes convolution followed by the addition of noise, Lemma 4.1 tells us that it is better to take short steps instead of being overly aggressive. Why are short steps sufficient in practice? Given that every pixel in the blurred image is a nonnegative combination of neighboring pixels in the sharp image, it is enough to search among its neighbors to form a realistic image rather than searching over the whole image space. This can be performed efficiently using short steps.
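
The interplay between noise level and step size is easy to probe numerically. The toy sketch below is entirely ours (the kernel dimension, noise level, and step sizes are arbitrary choices): it applies one multiplicative update (9)-(10) with a clean and a perturbed gradient and reports the ℓ1 distance between the resulting kernels, which shrinks as the step gets shorter.

import numpy as np

rng = np.random.default_rng(0)
m = 169                                        # e.g. a 13 x 13 kernel, flattened
k = np.full(m, 1.0 / m)                        # uniform kernel (all coordinates equal)
g = rng.standard_normal(m)                     # synthetic "true" gradient
g_noisy = g + 0.01 * rng.standard_normal(m)    # perturbed gradient

def kernel_update(k, g, gamma):
    k_tilde = k * np.exp(-gamma * g)           # multiplicative step, cf. Eq. (9)
    return k_tilde / k_tilde.sum()             # normalization, cf. Eq. (10)

for gamma in (1.0, 0.1, 0.01):                 # shorter steps -> closer iterates
    d = np.abs(kernel_update(k, g, gamma) - kernel_update(k, g_noisy, gamma)).sum()
    print(gamma, d)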

4.2 Implementation Details

Initialization. We follow the standard practice common across many vision problems and estimate both the image and the kernel at multiple resolutions. More specifically, our estimation proceeds through a coarse-to-fine pyramid scheme. For each level, we run PRIDA (Algorithm 1) and upscale the resulting estimated image and kernel for the next level.

At the coarsest level, we initialize the kernel to be uniform, that is, all entries of k^0 are equal. While this choice of initialization is critical for many existing algorithms Perrone and Favaro (2014); Pan et al. (2014), it is not so important for PRIDA. Because the objective function is (jointly) bilinear, it may be the case that the initial few gradient steps will push some of the coordinates of the kernel to 0 after a Euclidean projection. This is problematic because a coordinate set to 0 will remain at 0 during the entire course of optimization (at that scale), thus reducing the pyramid scheme’s effectiveness. PRIDA, on the other hand, can be thought of as performing a version of “soft-removal”: the multiplicative nature of the update will naturally force all elements of any given kernel to remain strictly positive at all times, and hence a few “bad” steps will not necessarily hurt the overall performance.
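
A compact sketch of the coarse-to-fine loop follows; the number of levels, the scale factor, the resizing helper, and the prida_solve callable (assumed to run Algorithm 1 to convergence at one scale) are all our illustrative choices rather than the authors' exact settings.

import numpy as np
from skimage.transform import resize

def deblur_pyramid(y, kernel_size, prida_solve, n_levels=5, scale=0.75):
    # Coarse-to-fine estimation: run PRIDA at each level, then upscale the
    # estimated image and kernel for the next (finer) level.
    x, k = None, None
    for level in reversed(range(n_levels)):
        f = scale ** level
        img_shape = (max(int(y.shape[0] * f), 16), max(int(y.shape[1] * f), 16))
        ker_size = max(int(round(kernel_size * f)) | 1, 3)       # keep the support odd
        y_s = resize(y, img_shape, order=1, anti_aliasing=True)  # blurry image at this scale
        if x is None:
            x = y_s.copy()                                        # coarsest level: start from y
            k = np.full((ker_size, ker_size), 1.0 / ker_size**2)  # uniform kernel
        else:
            x = resize(x, img_shape, order=1)                     # upscale previous estimates
            k = np.clip(resize(k, (ker_size, ker_size), order=1), 1e-12, None)
            k /= k.sum()                                          # re-project onto the simplex
        x, k = prida_solve(y_s, x, k)                             # Algorithm 1 at this scale
    return x, k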


Figure 2: Convergence rate comparison of PRIDA (in red) and the method of Perrone and Favaro (2014) (in blue).

Numerical Considerations. When calculating the per-coordinate step size, it may be the case that a given point in the kernel has already been driven close to 0. In this case, if the (noisy) gradient is negative, however small, the computed step may blow up numerically if the value at that point has fallen below machine precision. To avoid these issues, we apply a “Big M” correction Nemirovski (chapter 4) such that the step taken is the minimum of the computed step and M, where M is a large positive constant. Intuitively, a large M will allow PRIDA to take larger steps, thus encouraging faster convergence. We fix M throughout our experiments.
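
In code, the safeguard is a one-line clamp applied to the per-coordinate step before the multiplicative update; the step-size rule itself is not spelled out above, so computed_step and the default value of M below are stand-ins of ours.

import numpy as np

def clamp_step(computed_step, big_m=1e3):
    # Cap each per-coordinate step at M so that a kernel entry that has fallen
    # below machine precision cannot trigger an enormous multiplicative update.
    return np.minimum(computed_step, big_m)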

5 Experimental Evaluation

Figure 3: From left to right: (a) Input blurred images with added Gaussian noise. (b) Result from Perrone and Favaro (2014). (c) Result from Perrone et al. (2015). (d) Our result. (e) Ground truth. From top to bottom, each row corresponds to added noise with standard deviation 0, 0.1, and 0.5 respectively.

All experiments were conducted using MATLAB 2017a running on a 12-core Xeon E5-2620 @ 2.4 GHz machine with 64GB RAM. For all experiments, we use a single fixed regularization hyperparameter λ for a given image size. The run time on each image at the finest scale is on the order of a few minutes. In the first two sets of experiments, our goal is to validate the theoretical properties of PRIDA shown in earlier sections, viz., convergence and robustness. Finally, we test whether PRIDA is efficient on real world color images. We compare with two recent standard baselines that are closely related to our algorithm Perrone and Favaro (2014); Perrone et al. (2015), and provide additional experimental details and comparisons with other algorithms in the supplement.

5.1 Convergence

Figure 2 shows the function value convergence rates for PRIDA and for Perrone and Favaro (2014). Using the same pyramid scheme, we compute the function value for 1000 iterations of both algorithms over the finest level, fixing λ as stated above for PRIDA and using the default settings provided by the authors of Perrone and Favaro (2014). Notice that while the method of Perrone and Favaro (2014) initially drops quickly, our method eventually converges much faster and to a lower objective value. We note also that the PRIDA updates are significantly more stable, providing evidence for our robustness analysis above.

Figure 4: Color image recovery in the presence of intensity noise. From left to right, we add 0-mean Gaussian random noise with increasing variance. The first row shows the blurred and noisy input, the second the recovered kernel, and the third our final image recovery. Standard denoising methods can be applied to the deblurred image.

5.2 Robustness

Noise Level
Alg.                        1        3        5        7        9
Perrone and Favaro (2014)   0.0008   0.0223   0.0584   0.0849   0.0957
Perrone et al. (2015)       0.0006   0.0994   0.1375   0.1212   0.0941
PRIDA                       0.0008   0.0041   0.0089   0.0149   0.0231
Table 1: Average endpoint error across the dataset in Levin et al. (2009) for varying levels of noise.

To exemplify the robustness of PRIDA to noise, we conduct experiments on the well-known dataset first introduced by Levin et al. (2009). The grayscale images come with known blur kernels ranging in size from 13 to 27 pixels square. To evaluate robustness, we add varying levels of noise to each image and evaluate the end result both qualitatively and quantitatively. We compare our method to the algorithms presented in Perrone and Favaro (2014) and in Perrone et al. (2015).

We show the results of PRIDA in comparison with the standard baselines in Figure 3; see the supplement for more results. To generate the noisy and blurred images, zero-mean Gaussian random noise was added to each blurred image. Here we can clearly observe the ability of our procedure to handle large amounts of noise. In some interesting cases, both algorithms from Perrone and Favaro (2014) and Perrone et al. (2015) are able to recover a reasonably sharp image in the presence of noise. Over the entire dataset from Levin et al. (2009), however, we note that their results are significantly more variable than those of PRIDA. On average, PRIDA is much more consistent in recovery over the entire dataset, as shown in Table 1, validating our theoretical analysis above.

5.3 Scaling Up to the Real World

While the results above are valuable in validating our theoretical claims, we also evaluate our algorithm’s robustness on real world images. Computationally, an interesting property of PRIDA is that all of its operations involve convolutions (Fast Fourier Transforms) and elementwise operations, both of which can benefit from GPU efficiencies. We provide our (unoptimized) code in the supplement.

We apply PRIDA to a set of large, color images that have been synthetically blurred. A recent comparative study on modern blind deconvolution algorithms compiled a dataset of synthetically blurred images spanning a wide range of image sizes, image content, and blur difficulty Lai et al. (2016). 25 real-world images collected from the Internet were each uniformly blurred with 4 known kernels of various size and support. Applying our algorithm to these images, we find results comparable to the state of the art.

In order to find an appropriate regularization, we perform a mild parameter sweep across all 25 images simultaneously for a given kernel size, and select the value of λ that leads to the best qualitative results. Results on the front page include samples from this set.

To demonstrate the robustness of PRIDA on color images, noise was added to each pixel’s lightness value in LAB space Wyszecki and Stiles (1982) and converted back to the original RGB color space. Figure 4 shows how our recovery is affected by increasing amounts of Gaussian random noise. While our kernel recovery degrades with more added noise, it is clear that we are still able to recover the kernel structure, and that our final recovered image is in fact deblurred. Here, we present the raw output of our proposed model. Since the literature on denoising algorithms is mature, if necessary, a denoising algorithm can easily be run after PRIDA to remove the noise depending on its type. In fact, it is a common practice to have a “non-blind” stage at the end of the fine scale in many existing deblurring algorithms.
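
For completeness, the lightness-noise protocol can be reproduced with a standard color-space round trip; the scikit-image calls below are our choice of tooling for this sketch, and sigma is the standard deviation of the noise added to the L channel.

import numpy as np
from skimage.color import rgb2lab, lab2rgb

def add_lightness_noise(rgb, sigma, seed=0):
    # Add zero-mean Gaussian noise to the lightness (L) channel in LAB space
    # Wyszecki and Stiles (1982), then convert back to RGB.
    rng = np.random.default_rng(seed)
    lab = rgb2lab(rgb)                          # L lies in [0, 100]; a, b untouched
    noise = sigma * rng.standard_normal(lab[..., 0].shape)
    lab[..., 0] = np.clip(lab[..., 0] + noise, 0.0, 100.0)
    return np.clip(lab2rgb(lab), 0.0, 1.0)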

6 Conclusion

We propose a new algorithm, PRIDA, for recovering sharp images through blind deconvolution. PRIDA uniquely takes advantage of the specific problem domain, employing mirror descent over the simplex constraint set. We present a theoretical analysis of PRIDA and derive guarantees on both convergence and robustness with no extra assumptions. In most real world settings, as noted by Zhu and Milanfar (2011), low light conditions and auto-focus software systems may introduce extra blur and noise, since image quality depends on both exposure time and camera settings.

Our exhaustive experimentation shows that PRIDA can be a comprehensive solution for real world problems. We showed both qualitatively and quantitatively that PRIDA performs as well as the state of the art under noise-free conditions and unarguably better in the presence of noise. We believe that our results will be a strong foundation not only for single image blind deconvolution problems, but also for furthering the success of recent data driven approaches such as deep learning architectures.


Our code and additional experiments can be accessed through our Github repository at https://github.com/sravi-uwmadison/prida.

Appendix

7 Details of Lemma 4

Let . Then for any , we have that,

(29)

Since the kernel update minimizes the corresponding objective over the simplex Δ, and that objective is differentiable and strongly convex on Δ with respect to the ℓ1-norm (see page 88 in Tsybakov (2008)), the gradient at the minimizer should satisfy the following inequality,

(30)

Now the derivative of the KL divergence with respect to the i-th coordinate of k is given by,

(31)

Plugging in the derivative of the KL divergence, and adding and subtracting appropriate terms in (30), we get,

(32)
(33)
(34)
(35)
(36)
(37)
(38)
(39)

where (34) follows from the preceding definitions. Now rearranging terms in (39) (together with (32)) we get the desired result.

Comparison of PGD vs PRIDA: Even though both PGD and PRIDA achieve the same convergence result for smooth functions, as stated in the main paper, PRIDA is more general since the smoothness assumption can be relaxed to hold with respect to a more convenient norm (instead of the specific Euclidean norm required by PGD). This can be seen from inequalities (13)-(15) as,

(40)
(41)

This is most useful when smoothness is measured with respect to the ℓ1-norm, since verifying the corresponding constant amounts to checking the (absolute) maximum entry of the Hessian matrix, which is easy to perform.

Moreover, PRIDA can be implemented in an atomic fashion, that is, each coordinate of the kernel can be updated individually followed by a simple normalization, thus the per iteration complexity is O(n). In contrast, the most efficient algorithms to project onto the probability simplex require at least O(n log n) operations, see Figure 1 in Duchi et al. (2008). While the extra logarithmic factor seems innocuous, these algorithms at the least require sorting (as a subroutine) and hence cannot be easily implemented on GPUs.
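
To make the comparison concrete, here is a standard sorting-based Euclidean projection onto the simplex (in the spirit of Duchi et al. (2008); this particular implementation is ours) next to the elementwise normalization that PRIDA uses.

import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex; requires a sort,
    # hence O(n log n) work, cf. Duchi et al. (2008).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, v.size + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def normalize_simplex(k_tilde):
    # PRIDA's "projection": an O(n) elementwise normalization, cf. Eq. (10).
    return k_tilde / k_tilde.sum()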

References

  • Bubeck et al. (2015) Sébastien Bubeck et al. Convex optimization: Algorithms and complexity. Foundations and Trends® in Machine Learning, 8(3-4):231–357, 2015.
  • Campisi and Egiazarian (2016) Patrizio Campisi and Karen Egiazarian. Blind image deconvolution: theory and applications. CRC press, 2016.
  • Chakrabarti (2016) Ayan Chakrabarti. A neural approach to blind motion deblurring. In European Conference on Computer Vision, pages 221–235. Springer, 2016.
  • Chambolle and Lions (1997) Antonin Chambolle and Pierre-Louis Lions. Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167–188, 1997.
  • Chan and Wong (1998) Tony F Chan and Chiu-Kwong Wong. Total variation blind deconvolution. IEEE transactions on Image Processing, 7(3):370–375, 1998.
  • Chen and Ye (2011) Yunmei Chen and Xiaojing Ye. Projection onto a simplex. arXiv preprint arXiv:1101.6081, 2011.
  • Cho and Lee (2009) Sunghyun Cho and Seungyong Lee. Fast motion deblurring. In ACM Transactions on Graphics (TOG), volume 28, page 145. ACM, 2009.
  • Duchi et al. (2008) John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l 1-ball for learning in high dimensions. In Proceedings of the 25th international conference on Machine learning, pages 272–279. ACM, 2008.
  • Ghadimi and Lan (2016) Saeed Ghadimi and Guanghui Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Mathematical Programming, 156(1-2):59–99, 2016.
  • Hardt (2014) Moritz Hardt. Understanding alternating minimization for matrix completion. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 651–660. IEEE, 2014.
  • Hardt et al. (2015) Moritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.
  • Jain and Thakurta (2012) Prateek Jain and Abhradeep Thakurta. Mirror descent based database privacy. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 579–590, 2012.
  • Jin et al. (2017) Meiguang Jin, Stefan Roth, and Paolo Favaro. Noise-blind image deblurring. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017.
  • (14) Anatoli Juditsky, Arkadi Nemirovski, et al. First order methods for nonsmooth convex large-scale optimization, i: general purpose methods. Optimization for Machine Learning.
  • Lai et al. (2016) Wei-Sheng Lai, Jia-Bin Huang, Zhe Hu, Narendra Ahuja, and Ming-Hsuan Yang. A comparative study for single image blind deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1701–1709, 2016.
  • Lee et al. (2017) J. D. Lee, I. Panageas, G. Piliouras, M. Simchowitz, M. I. Jordan, and B. Recht. First-order Methods Almost Always Avoid Saddle Points. October 2017.
  • Levin et al. (2009) Anat Levin, Yair Weiss, Fredo Durand, and William T Freeman. Understanding and evaluating blind deconvolution algorithms. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1964–1971. IEEE, 2009.
  • Li et al. (2016) Yanjun Li, Kiryung Lee, and Yoram Bresler. Identifiability in blind deconvolution with subspace or sparsity constraints. IEEE Transactions on Information Theory, 62(7):4266–4275, 2016.
  • Luong et al. (2012) Duy VN Luong, Panos Parpas, Daniel Rueckert, and Berç Rustem. Solving mrf minimization by mirror descent. In International Symposium on Visual Computing, pages 587–598. Springer, 2012.
  • Mahadevan and Liu (2012) Sridhar Mahadevan and Bo Liu. Sparse q-learning with mirror descent. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, pages 564–573. AUAI Press, 2012.
  • Michaeli and Irani (2014) Tomer Michaeli and Michal Irani. Blind deblurring using internal patch recurrence. In European Conference on Computer Vision, pages 783–798. Springer, 2014.
  • Moosavi-Dezfooli et al. (2016) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
  • Moosavi Dezfooli et al. (2017) Seyed Mohsen Moosavi Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), number EPFL-CONF-226156, 2017.
  • Nah et al. (2016) Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. arXiv preprint arXiv:1612.02177, 2016.
  • (25) Arkadi Nemirovski. Interior point polynomial time methods in convex programming.
  • Nemirovski (2012) Arkadi Nemirovski. Tutorial: Mirror descent algorithms for large-scale deterministic and stochastic convex optimization. Edinburgh, 2012.
  • Nesterov (2005) Yu Nesterov. Smooth minimization of non-smooth functions. Mathematical programming, 2005.
  • Nesterov (2013) Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.
  • Noroozi et al. (2017) Mehdi Noroozi, Paramanand Chandramouli, and Paolo Favaro. Motion deblurring in the wild. arXiv preprint arXiv:1701.01486, 2017.
  • Osher et al. (2005) Stanley Osher, Martin Burger, Donald Goldfarb, Jinjun Xu, and Wotao Yin. An iterative regularization method for total variation-based image restoration. Multiscale Modeling & Simulation, 4(2):460–489, 2005.
  • Pan et al. (2014) Jinshan Pan, Zhe Hu, Zhixun Su, and Ming-Hsuan Yang. Deblurring text images via l0-regularized intensity and gradient prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901–2908, 2014.
  • Panageas and Piliouras (2016) Ioannis Panageas and Georgios Piliouras. Gradient descent only converges to minimizers: Non-isolated critical points and invariant regions. arXiv preprint arXiv:1605.00405, 2016.
  • Perrone and Favaro (2014) Daniele Perrone and Paolo Favaro. Total variation blind deconvolution: The devil is in the details. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2909–2916, 2014.
  • Perrone et al. (2015) Daniele Perrone, Remo Diethelm, and Paolo Favaro. Blind deconvolution via lower-bounded logarithmic image priors. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, pages 112–125. Springer, 2015.
  • Ramakrishnan et al. (2017) Sainandan Ramakrishnan, Shubham Pachori, Aalok Gangopadhyay, and Shanmuganathan Raman. Deep generative filter for motion deblurring. arXiv preprint arXiv:1709.03481, 2017.
  • Rudin et al. (1992) Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259–268, 1992.
  • Ruiz et al. (2015) Pablo Ruiz, Xu Zhou, Javier Mateos, Rafael Molina, and Aggelos K Katsaggelos. Variational bayesian blind image deconvolution: A review. Digital Signal Processing, 47:116–127, 2015.
  • Schuler et al. (2016) Christian J Schuler, Michael Hirsch, Stefan Harmeling, and Bernhard Schölkopf. Learning to deblur. IEEE transactions on pattern analysis and machine intelligence, 38(7):1439–1451, 2016.
  • Srebro et al. (2011) Nati Srebro, Karthik Sridharan, and Ambuj Tewari. On the universality of online mirror descent. In Advances in neural information processing systems, pages 2645–2653, 2011.
  • Srinivasan et al. (2017) Pratul P. Srinivasan, Ren Ng, and Ravi Ramamoorthi. Light field blind motion deblurring. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • Stroke and Halioua (1970) George W Stroke and M Halioua. A new holographic image deblurring method. Physics Letters A, 33(1):3–4, 1970.
  • Su et al. (2017) J. Su, D. Vasconcellos Vargas, and S. Kouichi. One pixel attack for fooling deep neural networks. ArXiv e-prints, October 2017.
  • Sun et al. (2015) Jian Sun, Wenfei Cao, Zongben Xu, and Jean Ponce. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 769–777, 2015.
  • Sun et al. (2013) Libin Sun, Sunghyun Cho, Jue Wang, and James Hays. Edge-based blur kernel estimation using patch priors. In Computational Photography (ICCP), 2013 IEEE International Conference on, pages 1–8. IEEE, 2013.
  • Tsybakov (2008) Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer Publishing Company, Incorporated, 1st edition, 2008. ISBN 0387790519, 9780387790510.
  • Vogel and Oman (1996) Curtis R Vogel and Mary E Oman. Iterative methods for total variation denoising. SIAM Journal on Scientific Computing, 17(1):227–238, 1996.
  • Wyszecki and Stiles (1982) Gunter Wyszecki and Walter Stanley Stiles. Color science, volume 8. Wiley New York, 1982.
  • Xu and Jia (2010) Li Xu and Jiaya Jia. Two-phase kernel estimation for robust motion deblurring. In European Conference on Computer Vision, pages 157–170. Springer, 2010.
  • You and Kaveh (1996) Yu-Li You and Mostafa Kaveh. A regularization approach to joint blur identification and image restoration. IEEE Transactions on Image Processing, 5(3):416–428, 1996.
  • Zhou et al. (2017) Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Stephen Boyd, and Peter W Glynn. Stochastic mirror descent in variationally coherent optimization problems. In Advances in Neural Information Processing Systems, pages 7043–7052, 2017.
  • Zhu and Milanfar (2011) Xiang Zhu and Peyman Milanfar. Restoration for weakly blurred and strongly noisy images. In Applications of Computer Vision (WACV), 2011 IEEE Workshop on, pages 103–109. IEEE, 2011.