Blind deconvolution problems involve the estimation of some latent sharp signal of interest given only an observation that has been compromised by an unknown filtering process. Although relevant algorithms and analysis apply in a general setting, this paper will focus on the particular case of blind image deblurring, where an unknown convolution or blur operator, as well as additive noise, corrupt the image capture of an underlying natural scene. Such blurring is an undesirable consequence that often accompanies the image formation process and may arise, for example, because of camera-shake during acquisition. Blind image deconvolution or deblurring strategies aim to recover a sharp image from only a blurry, compromised observation, a long-standing problem (Richardson, 1972; Lucy, 1974; Kundur and Hatzinakos, 1996) that remains an active research topic (Fergus et al., 2006; Shan et al., 2008; Levin et al., 2009; Cho and Lee, 2009; Krishnan et al., 2011). Moreover, applications extend widely beyond standard photography, with astronomical, bio-imaging, and other signal processing data eliciting particular interest (Zhu and Milanfar, 2013; Kenig et al., 2010).
$$y = k \ast x + n, \qquad (1)$$
where $k$ is the point spread function (PSF) or blur kernel, $\ast$ denotes the 2D convolution operator, and $n$ is a noise term assumed to be zero-mean Gaussian with covariance $\lambda I$ (although as we shall see, these assumptions about the noise distribution can easily be relaxed via the framework described herein). The task of blind deconvolution is to estimate both the sharp image $x$ and blur kernel $k$ given only the blurry observation $y$, where we will mostly be assuming that $x$ and $y$ represent filtered (e.g., gradient domain) versions of the original pixel-domain images. Because convolution with $k$ is non-invertible, some (typically) high-frequency information is lost during the observation process, and thus even if $k$ were known, the non-blind estimation of $x$ is ill-posed. However, in the blind case where $k$ is also unknown, the difficulty is exacerbated considerably, with many possible image/kernel pairs explaining the observed data equally well.
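As a concrete reference point, the observation model in (1) is easy to simulate. The sketch below is illustrative only: the image size, kernel, and the `blur_observe` helper are our own choices, not part of the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_observe(x, k, noise_var, rng=None):
    """Simulate y = k * x + n with n zero-mean Gaussian noise of
    variance noise_var ('same' boundary handling for simplicity)."""
    rng = rng or np.random.default_rng(0)
    y = fftconvolve(x, k, mode="same")
    return y + rng.normal(0.0, np.sqrt(noise_var), size=y.shape)

# Sanity check: a delta kernel with zero noise leaves the image unchanged.
x = np.zeros((32, 32)); x[16, 16] = 1.0; x[8, 20] = -0.5
delta = np.zeros((5, 5)); delta[2, 2] = 1.0
assert np.allclose(blur_observe(x, delta, 0.0), x, atol=1e-10)
```

With a non-trivial kernel and positive noise variance, the output plays the role of the observed $y$ from which $x$ and $k$ must be jointly recovered.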
To alleviate this problem, prior assumptions must be adopted to constrain the space of candidate solutions, which naturally suggests a Bayesian framework. In Section 2, we briefly review the two most common classes of Bayesian algorithms for blind deconvolution used in the literature, (i) Maximum a Posteriori (MAP) estimation and (ii) Variational Bayes (VB), and then later detail their fundamental limitations, which include heuristic implementational requirements and complex cost functions that are difficult to disentangle. Section 3 uses ideas from convex analysis to reformulate these Bayesian methods, promoting greater understanding and suggesting useful enhancements, such as rigorous criteria for choosing appropriate image priors. Section 4 then situates our theoretical analysis within the context of existing analytic studies of blind deconvolution, notably the seminal work of (Levin et al., 2009, 2011b), and discusses the relevance of natural image statistics. Learning noise variances is later addressed in Section 5, while experiments are carried out in Section 6 to provide corroborating empirical evidence for some of our theoretical claims. Finally, concluding remarks are contained in Section 7.
2 MAP versus VB
As mentioned above, to compensate for the ill-posedness of the blind deconvolution problem, a strong prior is required for both the sharp image and kernel to regularize the solution space. Recently, natural image statistics have been invoked to design prior (regularization) models, e.g., (Roth and Black, 2009; Levin et al., 2007; Krishnan and Fergus, 2009; Cho et al., 2012), and MAP estimation using these priors has been proposed for blind deconvolution, e.g., (Shan et al., 2008; Krishnan et al., 2011). While some specifications may differ, the basic idea is to find the mode (maximum) of the posterior
$$p(x, k \mid y) \propto p(y \mid x, k)\, p(x)\, p(k).$$
After a transformation, and ignoring constant factors, this is equivalent to computing
$$\min_{x, k} \; \frac{1}{\lambda}\, \| k \ast x - y \|_2^2 + g_x(x) + g_k(k), \qquad (2)$$
where $g_x(x)$ is a penalty term over the desired image, typically of the form $\sum_i g_x(x_i)$, while $g_k(k)$ regularizes the blur kernel. Both penalties generally have embedded parameters that must be balanced along with $\lambda$. It is also typical to assume that $k_i \geq 0$ for all $i$, with $\sum_i k_i = 1$, and we will adopt this assumption throughout (however, Section 3.7 will discuss a type of scale invariance such that this assumption becomes irrelevant anyway).
Although straightforward, there are many problems with existing MAP approaches, including ineffective global minima, e.g., poor priors may lead to degenerate solutions like the delta kernel (frequently called the no-blur solution), or too many local minima and subsequent convergence issues. Therefore, the generation of useful solutions (or carefully guiding the algorithm to a proper local minimum) requires a delicate balancing of various factors such as dynamic noise levels, trade-off parameter values, and other heuristic regularizers such as salient structure selection (Shan et al., 2008; Cho and Lee, 2009; Krishnan et al., 2011) (we will discuss these issues more in Sections 3.5 and 3.6).
To mitigate some of these shortcomings of MAP, the influential work by Levin et al. and others proposes to instead solve (Levin et al., 2009)
$$\max_k \; p(k \mid y) \propto p(y \mid k)\, p(k), \qquad (3)$$
where $p(y \mid k) = \int p(y \mid x, k)\, p(x)\, dx$. This technique is sometimes referred to as Type II estimation in the statistics literature. (To be more specific, Type II estimation refers to the case where we optimize over one set of unknown variables after marginalizing out another set, in our case the image $x$; in this context, standard MAP over both $x$ and $k$ via (2) can be viewed as Type I.) Once $k$ is estimated in this way, $x$ can then be obtained via conventional non-blind deconvolution techniques. One motivation for the Type II strategy is based on the inherent asymmetry in the dimensionality of the image relative to the kernel (Levin et al., 2009). By integrating out (or averaging over) the high-dimensional image $x$, the estimation process can then more accurately estimate the few remaining low-dimensional parameters in $k$.
The challenge of course with (3) is that the evaluation of $p(y \mid k)$ requires a marginalization over $x$, which is a computationally intractable integral given realistic image priors. Consequently a variational Bayesian (VB) strategy is used to approximate the troublesome marginalization (Levin et al., 2011a). A similar idea has also been proposed by a number of other authors (Miskin and MacKay, 2000; Fergus et al., 2006; Babacan et al., 2012). In brief, VB provides a convenient way of computing a rigorous upper bound on $-\log p(y \mid k)$, which can then be substituted into (3) for optimization purposes, leading to an approximate Type II estimator.
The VB methodology can be easily applied whenever the image prior $p(x) = \prod_i p(x_i)$ is expressible as a Gaussian scale mixture (GSM) (Palmer et al., 2006), meaning
$$p(x_i) = \int \mathcal{N}(x_i; 0, \gamma_i)\, p(\gamma_i)\, d\gamma_i, \qquad (4)$$
where each $\mathcal{N}(x_i; 0, \gamma_i)$ represents a zero-mean Gaussian with variance $\gamma_i$ and prior distribution $p(\gamma_i)$. The role of this decomposition will become apparent below. Also, with some abuse of notation, $p(\gamma_i)$ may characterize a discrete distribution, in which case the integral in (4) can be reduced to a summation. Note that all prior distributions expressible via (4) will be supergaussian (Palmer et al., 2006), and therefore will to varying degrees favor a sparse $x$ (we will return to this issue in Sections 3 and 4).
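To make the GSM construction in (4) concrete, the snippet below numerically verifies one classic instance: mixing zero-mean Gaussians with an exponential density over the variance yields a Laplacian (supergaussian) prior. The mixing rate and helper name are our own illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

def gsm_density(x, rate=0.5):
    """Evaluate p(x) = ∫ N(x; 0, g) · rate·exp(-rate·g) dg numerically.
    With rate = 1/2 this mixture collapses to the standard Laplacian
    p(x) = 0.5·exp(-|x|), a well-known supergaussian example."""
    integrand = lambda g: (np.exp(-x**2 / (2*g)) / np.sqrt(2*np.pi*g)
                           * rate * np.exp(-rate*g))
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# The numerically integrated mixture matches the Laplacian closed form.
for x in (0.3, 1.0, 2.5):
    assert abs(gsm_density(x) - 0.5*np.exp(-abs(x))) < 1e-6
```

The same numerical check works for other mixing densities $p(\gamma)$, which is one reason the GSM family is so convenient for VB.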
Given this $p(x)$, the negative log of $p(y \mid k)$ can be upper bounded via
$$-\log p(y \mid k) \;\leq\; -\log p(y \mid k) + \mathrm{KL}\big(q(x, \gamma)\,\|\,p(x, \gamma \mid y, k)\big) \;\triangleq\; F(q, k), \qquad (5)$$
where $F$ is called the free energy, $q(x, \gamma)$ is an arbitrary distribution over $x$ and $\gamma$, the vector of all the variances from (4). Equality is obtained when $q(x, \gamma) = p(x, \gamma \mid y, k)$. In fact, if we were able to iteratively minimize this $F$ over $q$ and $k$ (i.e., a form of coordinate descent), this would be exactly equivalent to the standard expectation-maximization (EM) algorithm for minimizing $-\log p(y \mid k)$ with respect to $k$, treating $x$ and $\gamma$ as hidden data and assuming $p(k)$ is flat within the specified constraint set mentioned previously (see (Bishop, 2006, Ch.9.4) for a detailed examination of this fact). However, optimizing over $q(x, \gamma)$ is intractable since $p(x, \gamma \mid y, k)$ is generally not available in closed-form. Likewise, there is no closed-form update for $k$, and hence no exact EM solution is possible.
The contribution of VB theory is to show that if we restrict the form of $q(x, \gamma)$ via structural assumptions, the updates can now actually be computed, albeit approximately. For this purpose the most common constraint is that $q(x, \gamma)$ must be factorized, namely, $q(x, \gamma) = q(x)\, q(\gamma)$, sometimes called a mean-field approximation (Bishop, 2006, Ch.10.1). With this approximation we are effectively utilizing the revised (and looser) upper bound $F\big(q(x)\, q(\gamma), k\big)$,
which may be iteratively minimized over $q(x)$, $q(\gamma)$, and $k$ independently while holding the other two fixed. In each case, closed-form updates are now possible, although because of the factorial approximation, we are of course no longer guaranteed to minimize $-\log p(y \mid k)$.
Compared to the original Type II problem from (3), minimizing the bound from (5) is considerably simplified because the problematic marginalization over $x$ has been effectively decoupled from $k$. However, when solving for $q(x)$ at each iteration, it can be shown that a full covariance matrix of $x$ conditioned on $y$ and $k$, denoted as $C$, must be computed. While this is possible in closed form, it requires $O(m^3)$ operations, where $m$ is the number of pixels in the image. Because this is computationally impractical for reasonably-sized images, a diagonal approximation to $C$ must be adopted (Levin et al., 2011a). This assumption is equivalent to incorporating an additional factorization into the VB process such that now we are enforcing the constraint $q(x, \gamma) = \prod_i q(x_i)\, q(\gamma_i)$. This leads to the considerably looser upper bound $F\big(\prod_i q(x_i)\, q(\gamma_i), k\big)$.
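The computational motivation for the diagonal approximation can be sketched on a toy Gaussian-linear model; the variable names and sizes below are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 50                                  # number of "pixels" (tiny for the demo)
H = rng.normal(size=(m, m))             # stand-in for the convolution matrix of k
gamma = rng.uniform(0.5, 2.0, size=m)   # current variance estimates
lam = 0.1                               # noise level

# Exact posterior covariance of x given y and k: a dense O(m^3) inverse.
precision = H.T @ H / lam + np.diag(1.0 / gamma)
C_full = np.linalg.inv(precision)

# Diagonal approximation: invert only the diagonal of the precision,
# which is O(m) and discards all posterior correlations between pixels.
C_diag = 1.0 / np.diag(precision)

# For positive definite matrices, (A^{-1})_ii >= 1/A_ii, so the diagonal
# shortcut systematically underestimates the true posterior variances.
assert np.all(np.diag(C_full) >= C_diag - 1e-9)
```

For realistic images $m$ is in the millions, so the dense inverse is out of reach and some factorized shortcut of this flavor becomes unavoidable.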
In summary then, the full Type II approach can be approximated by minimizing the VB upper bound via the optimization problem
$$\min_{q(x, \gamma) = \prod_i q(x_i)\, q(\gamma_i),\; k} \; F(q, k). \qquad (6)$$
The requisite update rules are shown in Algorithm 1. (For simplicity we have ignored image boundary effects when presenting the updates in Algorithm 1; the complete expression is described in Appendix A in the proof of Theorem 1. Additionally, Algorithm 1 in its present form includes a modest differentiability assumption for updating the sufficient statistics of $q(\gamma)$.) Numerous methods fall within this category with some implementational differences. Note also that the full distributions for each $q(x_i)$ and $q(\gamma_i)$ are generally not needed; only certain sufficient statistics are required (certain means and variances, see Algorithm 1), analogous to standard EM. These can be efficiently computed using techniques from (Palmer et al., 2006) for any $p(x_i)$ produced by (4). In the VB algorithm from (Levin et al., 2011a), the sufficient statistic for $q(\gamma)$ is computed using an alternative methodology which applies only to finite Gaussian scale mixtures. However, the resulting updates are nonetheless equivalent to Algorithm 1 as shown in the proof of Theorem 1 presented later.
Algorithm 1 (outline):
(1) Update sufficient statistics for $q(x)$, using the convolution matrix of the current $k$.
(2) Update sufficient statistics for $q(\gamma)$.
(3) Update $k$, using the convolution matrix of the current image estimate.
(4) Noise level reduction: if $\lambda > \lambda_{\min}$, then reduce $\lambda$ by a fixed factor.
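The noise-level reduction step can be sketched as a simple geometric schedule; the particular constants below (initial value, floor, and reduction factor) are placeholders rather than the values used in Algorithm 1.

```python
def lambda_schedule(lam_init, lam_min, beta):
    """Generate the decreasing noise-level schedule used by the outer
    loop: divide lambda by beta (> 1) until lam_min is reached."""
    lam = lam_init
    while lam > lam_min:
        yield lam
        lam = max(lam / beta, lam_min)
    yield lam_min

# Example: start at 1.0, halve each outer iteration, floor at 1e-4.
sched = list(lambda_schedule(1.0, 1e-4, 2.0))
assert sched[0] == 1.0 and sched[-1] == 1e-4
assert all(a > b for a, b in zip(sched, sched[1:]))
```

Each value of the schedule corresponds to one outer pass over the $q(x)$, $q(\gamma)$, and $k$ updates, so the effective cost function changes shape as the iterations proceed.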
While possibly well-motivated in principle, the Type II approach relies on rather severe factorial assumptions which may compromise the original high-level justifications. In fact, at any minimizing solution denoted $\{q^*(x, \gamma), k^*\}$, it is easily shown that the gap between $F(q^*, k^*)$ and $-\log p(y \mid k^*)$ is given explicitly by
$$\mathrm{KL}\big(q^*(x, \gamma)\,\|\,p(x, \gamma \mid y, k^*)\big),$$
the standard KL divergence between the factorized approximation $q^*(x, \gamma)$ and the full posterior $p(x, \gamma \mid y, k^*)$. Because this posterior is generally highly coupled (non-factorial), this divergence will typically be quite high, indicating that the associated approximation is potentially very poor. We therefore have no reason to believe that this $k^*$ is anywhere near the maximizer of $p(k \mid y)$, which was the ultimate goal and motivation of Type II to begin with.
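The severity of the mean-field gap is easy to illustrate on a toy posterior. Below, a correlated 2D Gaussian stands in for the coupled posterior, and we compute the KL divergence to its best factorized Gaussian approximation in closed form (a standard Gaussian identity, not the paper's model).

```python
import numpy as np

def mean_field_kl(r):
    """KL(q || p) where p is a zero-mean 2D Gaussian with correlation r,
    and q is the best factorized (mean-field) Gaussian approximation,
    whose variances equal the inverse diagonal of p's precision matrix."""
    Sigma = np.array([[1.0, r], [r, 1.0]])
    Prec = np.linalg.inv(Sigma)
    Sigma_q = np.diag(1.0 / np.diag(Prec))   # mean-field covariance
    # Closed-form Gaussian KL: KL(N(0, Sigma_q) || N(0, Sigma)).
    k = 2
    return 0.5 * (np.trace(Prec @ Sigma_q) - k
                  + np.log(np.linalg.det(Sigma) / np.linalg.det(Sigma_q)))

# The gap vanishes for an uncorrelated posterior and grows with coupling.
assert abs(mean_field_kl(0.0)) < 1e-12
assert mean_field_kl(0.9) > mean_field_kl(0.5) > 0
```

For this toy case the gap equals $-\tfrac{1}{2}\log(1 - r^2)$, which diverges as the posterior correlation approaches one, mirroring the concern raised in the text.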
Other outstanding issues persist as well. For example, the free energy cost function, which involves both integration and function-valued arguments, is not nearly as transparent as the standard MAP estimation from (2). Moreover, for practical use, VB depends on an appropriate schedule for reducing the noise variance $\lambda$ during each iteration (see Algorithm 1), which implements a form of coarse-to-fine, multi-resolution estimation scheme (Levin et al., 2011b) while potentially improving the convergence rate (Levin et al., 2011a).
It therefore becomes difficult to rigorously explain exactly why VB has often been empirically more successful than MAP in practice (see (Babacan et al., 2012; Levin et al., 2011b, a) for such comparisons), or how to decide which image priors operate best in the VB framework. (Note that, as discussed later, certain MAP algorithms can perform reasonably well when carefully balanced with additional penalty factors and tuning parameters added to (2). However, in direct comparisons using the same basic probabilistic model, VB can perform substantially better, even achieving state-of-the-art performance without additional tuning.) While Levin et al. have suggested that at a high level, marginalization over the latent sharp image using natural-image-statistic-based priors is a good idea to overcome some of the problems faced by MAP estimation (Levin et al., 2009, 2011b), this argument only directly motivates substituting (3) for (2), rather than providing explicit rationalization for (6). Thus, we intend to more meticulously investigate the exact mechanism by which VB operates, explicitly accounting for all of the approximations and assumptions involved by drawing on convex analysis and sparse estimation concepts from (Palmer et al., 2006; Wipf et al., 2011) (Section 4 will discuss direct comparisons with Levin et al. in detail). This endeavor then naturally motivates extensions to the VB framework and a simple prescription for choosing an appropriate image prior $p(x)$. Overall, we hope to further demystify VB, providing an entry point for broader improvements such as robust non-uniform blur estimation.
Several surprising, possibly counterintuitive conclusions emerge from this investigation which strongly challenge much of the prevailing wisdom regarding why and how Bayesian algorithms can be advantageous for blind deconvolution. These include:
The optimal image prior for blind deconvolution purposes using VB or MAP is not the one which most closely reflects natural image statistics. Rather, it is the distribution that most significantly discriminates between blurry and sharp images, meaning a prior such that, for some good sharp image estimate $\hat{x}$, we have $p(\hat{x}) \gg p(y)$. Natural image statistics typically fail in this regard for explicit reasons, which apply to both MAP and VB, as discussed in Sections 3 and 4.
The advantage of VB over MAP is not directly related to the dimensionality differences between $x$ and $k$ and the conventional benefits of marginalization over the latter. In fact, we prove in Section 3.2 that the underlying cost functions are formally equivalent in ideal noiseless environments given the factorial assumptions required by practical VB algorithms. Instead, there is an intrinsic mechanism built into VB that allows bad locally minimizing solutions to be largely avoided even when using the highly non-convex, discriminative priors needed to distinguish between blurry and sharp images. This represents a completely new perspective on the relative advantages of VB.
The VB algorithm can be reformulated in such a way that generic, non-Gaussian noise models and other extensions are easily incorporated, circumventing one important perceived advantage of MAP.
While technically these conclusions only apply to the uniform blur model described by (1), many of the underlying principles can nonetheless be applied to more general models. In fact, we have already obtained success in more complex non-uniform and multi-image models using similar principles, e.g., see (Zhang and Wipf, 2013a, b).
3 Analysis of Variational Bayes
Drawing on ideas from (Palmer et al., 2006; Wipf et al., 2011), in this section we will completely reformulate the VB methodology to elucidate its behavior. More profoundly, we will demonstrate that VB is actually equivalent to using an unconventional MAP estimation-like cost function, but with a particular penalty that couples image, blur kernel, and noise in a well-motivated fashion. This procedure removes the ambiguity introduced by the VB approximation, the subsequent diagonal covariance approximation, and the $\lambda$ reduction heuristic that all contribute, still somewhat mysteriously, to the effectiveness of VB. It will then allow us to pinpoint the exact reasons why VB represents an improvement over conventional MAP estimation in the form of (2), and it provides us with specific criteria for choosing the image prior $p(x)$.
3.1 Notation and Definitions
Following (Fergus et al., 2006) and (Levin et al., 2011a), we work in the derivative domain of images for ease of modeling and better performance, meaning that $x$ and $y$ will now denote the lexicographically ordered image derivatives of sharp and blurry images respectively, obtained via a particular derivative filter. Given that convolution is a commutative operator, the blur kernel is unaltered.
For latent sharp image derivatives and a blur kernel of fixed sizes, we denote the lexicographically ordered vectors of the sharp image derivatives, blurry image derivatives, and blur kernel as $x$, $y$, and $k$ respectively. This assumes a single derivative filter. The extension to multiple filters, typically one for each image dimension, follows naturally. For simplicity of notation however, we omit explicit referencing of multiple filters throughout this paper, although all related analyses follow through in a straightforward manner.
The likelihood model (1) can be rewritten as
$$y = H x + n = W k + n,$$
where $H$ and $W$ are the convolution matrices constructed from the blur kernel and sharp image respectively. We introduce an indicator matrix $I$, where the $j$-th row of $I$ is a binary vector whose $i$-th entry equals one if the $i$-th element of $k$ (i.e., $k_i$) appears in the $j$-th column of $H$, and zero otherwise. We define $\|\bar{k}\|_2^2 \triangleq \sum_i I_{ji}\, k_i^2$, which is equivalent to the squared $\ell_2$ norm of the $j$-th column of $H$. It can also be viewed as the effective squared norm of $k$ accounting for the boundary effects. (Technically $\|\bar{k}\|_2^2$ depends on $j$, the index of image pixels, but it only makes a difference near the image boundaries. We prefer to avoid an explicit notational dependency on $j$ to keep the presentation concise, although the proofs in Appendix A do consider $j$-dependency when it is relevant. The subsequent analysis will also omit this dependency, although all of the results carry through in the general case. The same is true for the other quantities that depend on $j$, e.g., the parameter $\rho$ defined later in (11).) The element-wise magnitude of $x$ is given by $|x|$.
Finally, we introduce the definition of relative concavity (Palmer, 2003), which will be used in subsequent analyses:
Definition 1. Let $u$ be a strictly increasing function on $[a, b]$. The function $\nu$ is concave relative to $u$ on the interval $[a, b]$ if and only if
$$\nu(y) \;\leq\; \nu(x) + \frac{\nu'(x)}{u'(x)}\,\big[u(y) - u(x)\big]$$
holds for all $x, y \in [a, b]$.
In the following, we will use $\nu \prec u$ to denote that $\nu$ is concave relative to $u$ on $[0, \infty)$. This can be understood as a natural generalization of the traditional notion of concavity, in that a concave function is equivalently concave relative to a linear function per Definition 1. In general, if $\nu \prec u$, then when $\nu$ and $u$ are set to have the same functional value and the same slope at any given point (i.e., by an affine transformation of $u$), $\nu$ lies completely under $u$.
It is well-known that functions concave in $|x|$ favor sparsity (meaning a strong preference for zero and relatively little distinction between nonzero values) (Rao et al., 2003; Wipf et al., 2011). The notion of relative concavity induces an ordering for many of the common sparsity-promoting functions. Intuitively, a non-decreasing function $\nu$ of $|x|$ is more aggressive in promoting sparsity than some $u$ if $\nu \prec u$. For example, consider the class of $\ell_p$ functionals $\sum_i |x_i|^p$, which are concave in each $|x_i|$ whenever $p \leq 1$. The smaller $p$, the more a sparse $x$ will be favored, with the extreme case $p \to 0$ producing the $\ell_0$ norm (a count of the number of nonzero elements in $x$), which is the most aggressive penalty for promoting sparsity. Meanwhile, using Definition 1 it is easy to verify that, as a function of $|x|$, $|x|^{p_1} \prec |x|^{p_2}$ whenever $p_1 < p_2$.
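Definition 1 can be checked numerically; the sketch below verifies the claimed $\ell_p$ ordering on a grid, confirming that $|x|^{1/2}$ is concave relative to $|x|^{9/10}$ but not conversely. The grid range and tolerance are arbitrary choices for the demo.

```python
import numpy as np

def relatively_concave(nu, nu_p, u, u_p, grid):
    """Numerically check Definition 1:
    nu(y) <= nu(x) + nu'(x)/u'(x) * (u(y) - u(x)) for all grid pairs."""
    X, Y = np.meshgrid(grid, grid)
    rhs = nu(X) + nu_p(X) / u_p(X) * (u(Y) - u(X))
    return bool(np.all(nu(Y) <= rhs + 1e-10))

grid = np.linspace(0.05, 5.0, 120)           # stay away from the origin
lp = lambda p: (lambda t: t**p)              # |x|^p on t > 0
lp_prime = lambda p: (lambda t: p * t**(p - 1))

# p1 < p2 implies the l_{p1} penalty is concave relative to l_{p2},
# but the reverse ordering fails.
assert relatively_concave(lp(0.5), lp_prime(0.5), lp(0.9), lp_prime(0.9), grid)
assert not relatively_concave(lp(0.9), lp_prime(0.9), lp(0.5), lp_prime(0.5), grid)
```

This ordering is the formal sense in which smaller $p$ promotes sparsity more aggressively.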
3.2 Connecting VB with MAP
As mentioned previously, the VB algorithm of (Levin et al., 2011a) can be efficiently implemented using any image prior expressible in the form of (4). However, for our purposes we require an alternative representation with roots in convex analysis. Based on (Palmer et al., 2006), it can be shown that any prior given by (4) can also be represented as a maximization over scaled Gaussians with different variances, leading to the alternative representation
$$p(x_i) = \max_{\gamma_i \geq 0} \; \mathcal{N}(x_i; 0, \gamma_i)\, \exp\!\left[-\tfrac{1}{2} f(\gamma_i)\right], \qquad (10)$$
where $f(\gamma_i)$ is some non-negative energy function; the associated exponentiated factor is sometimes treated as a hyperprior, although it will not generally integrate to one. This $f$, which determines the form of $p(\gamma_i)$ in (4), will ultimately play a central role in how VB penalizes images, as will be explored via the results of this section.
Proofs will be deferred to Appendix A. This reformulation of VB closely resembles (2), with a quadratic data fidelity term combined with additive image and kernel penalties. The penalty on $k$ in (11) is not unlike those incorporated into standard MAP schemes, meaning $g_k(k)$ from (2). However, quite unlike MAP, the penalty on the image pixels is dependent on both the noise level and the kernel through the parameter $\rho \triangleq \lambda / \|\bar{k}\|_2^2$, the ratio of the noise level to the squared kernel norm. Moreover, with a general $f$, it is easily shown that the resulting penalty function $g_{\mathrm{VB}}(x_i, \rho)$ is non-separable in $x_i$ and $\rho$, meaning $g_{\mathrm{VB}}(x_i, \rho) \neq g_1(x_i) + g_2(\rho)$ for any possible functions $g_1$ and $g_2$. The remainder of Section 3 will explore the consequences of this crucial, yet previously unexamined distinction from typical MAP formulations.
In contrast, with $\rho = 0$, both MAP and VB possess a formally equivalent penalty on each $x_i$ via the following corollary:
Corollary 1. If $\rho = 0$, then $g_{\mathrm{VB}}(x_i, 0) = g_{\mathrm{MAP}}(x_i)$.
Therefore the underlying VB cost function is effectively no different than regular MAP from (2) in the noiseless setting, a surprising conclusion that seems to counter much of the prevailing understanding of VB deconvolution algorithms.
A simple 1D example described next will serve to motivate why VB can, perhaps paradoxically, still outperform MAP even in ideal noiseless scenarios. In brief, different solutions between VB and MAP are still possible here because a decreasing sequence of $\lambda$ values is used to optimize both techniques, and this may lead to a radically different optimization trajectory terminating at different locally minimizing solutions. Later, Sections 3.4-3.6 will provide rigorous analysis of $g_{\mathrm{VB}}$, including how it may affect convergence paths, leading to several insights regarding VB performance. Finally, Section 3.7 will address the issue of choosing the optimal image prior $p(x)$, which is equivalent to choosing the optimal $f$ in (10).
3.3 Illustrative Example using 1D Signals
Here we will briefly illustrate the distinction between MAP and VB where other confounding factors have been conveniently removed. For this purpose we consider a simplified noiseless situation where the optimal value of $\lambda$ is zero. Based on Corollary 1 we know that whenever $\lambda$, and therefore $\rho$, goes to zero, the MAP and VB cost functions become exactly equivalent when given the same prior selection $f$, and thus share the same globally optimal solution in the limit $\lambda \to 0$.
As mentioned in Section 2 and shown in Algorithm 1 however, essentially all VB and MAP deconvolution algorithms begin with a large value of $\lambda$ and gradually reduce it towards some minimal value as the iterations proceed, as part of a multi-resolution approach designed to find good solutions. While the underlying cost functions may be equivalent when $\lambda = 0$, they behave very differently for $\lambda > 0$, and we will argue extensively that here is where the ultimate advantage of VB lies. Although details will be deferred to later sections, the embedding of $\lambda$ and $k$ into $g_{\mathrm{VB}}$ leads to a powerful adjustment of the image penalty curvature during each iteration, smoothing out local minima (especially at the beginning of the estimation process) such that bad local solutions can largely be avoided. In contrast, MAP employs a static image penalty that is easily lured into suboptimal basins of attraction. Thus even if the global minima are the same when eventually $\lambda \to 0$, we may expect that VB has a better chance of reaching this solution.
We test this conclusion using perhaps the simplest case where $f$ is constant, i.e., $f(\gamma) = b$ (later Section 3.7 will argue that this selection is in some sense optimal). With this assumption, the associated MAP problem from (2) is easily shown to be
$$\min_{x, k} \; \frac{1}{\lambda}\, \| k \ast x - y \|_2^2 + g_{\mathrm{MAP}}(x) + g_k(k),$$
where the image penalty $g_{\mathrm{MAP}}$ is obtained by applying a $-2\log(\cdot)$ transformation to (10), giving
$$g_{\mathrm{MAP}}(x) = 2 \sum_i \log |x_i|.$$
Irrelevant additive constants have been excluded, and we are assuming the same kernel penalty $g_k(k)$ as VB in (11). In the limit as $\lambda \to 0$, based on the equivalency derived from Corollary 1, both VB and MAP are effectively solving
$$\min_{x, k} \; \sum_i \log |x_i| + g_k(k) \quad \text{s.t.} \quad y = k \ast x. \qquad (15)$$
Moreover, given the stated assumptions and arguments made in Section 3.4 below, it can be shown that: (i) the $g_k(k)$ term is no longer relevant for determining the globally optimal solution (assuming that the optimal $x$ is actually sparse), and (ii) the remaining penalty on $x$ reduces to the $\ell_0$ norm, which represents a count of the nonzero elements in $x$ and is sometimes considered the canonical metric for quantifying sparsity. Thus, (15) effectively reduces to
$$\min_{x, k} \; \|x\|_0 \quad \text{s.t.} \quad y = k \ast x. \qquad (16)$$
Therefore, at this simplified, stripped-down level, both VB and MAP are merely minimizing the $\ell_0$ norm of $x$ subject to the linear convolutional constraint.
Of course we do not attempt to solve (16) directly, which is a difficult combinatorial problem in nature. Instead for both VB and MAP we begin with a large $\lambda$ and gradually reduce it towards zero as described in Algorithm 1, using the same reduction factor as Levin et al. (Levin et al., 2011a) for updating $\lambda$. (The MAP algorithm can be implemented by simply setting the posterior covariance to zero before the $q(x)$ update in Algorithm 1, with guaranteed convergence to some local minimum. For both MAP and VB, the $q(\gamma)$ sufficient statistic update simplifies considerably whenever $f$ is a constant.) Before $\lambda$ becomes small, the VB and MAP cost functions will behave very differently, since VB is based on the coupled image penalty in (11) while MAP employs a simple $\log |x_i|$ factor. The superiority of the VB convergence path will now be demonstrated with a synthetic 1D signal.
In this example, we generate a 1D signal composed of multiple spikes and convolve it with two different blur kernels, one uniform and one random, creating two different blurry observations. Refer to Figure 1 (first row) for the ground-truth spike signal and associated blur kernels. We then apply the MAP and VB blind deconvolution algorithms, with the same prior ($f$ equal to a constant) and $\lambda$ reduction schedule, to the blurry test signals and compare the quality of the reconstructed blur kernels and signals. The recovery results are shown in Figure 1 (second and third rows), where it is readily apparent that VB produces superior estimation quality of both kernel and image. Additionally, the signal recovered by VB is considerably more sparse than MAP, indicating that it has done a better job of optimizing (16), consistent with subsequent theoretical analysis that will be conducted in Sections 3.4-3.6. This is not to say that MAP cannot potentially be effective with careful tuning and initialization (perhaps coupled with additional regularization factors or clever optimization schemes), only that VB is much more robust to suboptimal experimental settings, etc., in its present form.
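A minimal data-generation sketch for this type of 1D experiment is given below; the signal length, number of spikes, and kernel sizes are arbitrary stand-ins rather than the exact settings used to produce Figure 1.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sparse 1D "image": a few spikes with random signs and amplitudes.
n, n_spikes, klen = 100, 8, 7
x = np.zeros(n)
idx = rng.choice(n, size=n_spikes, replace=False)
x[idx] = rng.normal(0.0, 1.0, size=n_spikes)

# Two test kernels: uniform and random, both nonnegative and summing to one.
k_uniform = np.ones(klen) / klen
k_random = rng.uniform(0.1, 1.0, size=klen)
k_random /= k_random.sum()

# Noiseless blurry observations serving as input to blind deconvolution.
y_uniform = np.convolve(x, k_uniform, mode="full")
y_random = np.convolve(x, k_random, mode="full")

# Convolution with a unit-sum kernel preserves the total signal mass.
assert np.isclose(y_uniform.sum(), x.sum())
```

Either blurry observation, together with the chosen $\lambda$ reduction schedule, would then be fed to the competing MAP and VB estimators.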
3.4 Evaluating the VB Image Penalty
Illustrative examples aside, we will now explore in more depth exactly how the image penalty in (11) contributes to the success of VB. While in a few special cases $g_{\mathrm{VB}}$ can be computed in closed-form for general $\rho$, leading to greater transparency, as we shall see below the VB algorithm and certain attendant analyses can nevertheless be carried through even when closed-form solutions for $g_{\mathrm{VB}}$ are not possible. Importantly, we can assess properties that may potentially affect the sparsity and quality of resulting solutions as $\rho$ and $f$ are varied.
A highly sparse prior, and therefore penalty function, is generally more effective in differentiating sharp images with fine structures from blurry ones (more on this later). Recall that concavity with respect to coefficient magnitudes is a signature property of such sparse penalties (Rao et al., 2003; Wipf et al., 2011). A potential advantage of MAP is that it is very straightforward to characterize the associated image penalty; namely, if $g_x$ from (2) is a highly concave, non-decreasing function of each $|x_i|$, then we may expect that sparse image gradients will be heavily favored. And for two candidate image penalties $g_1$ and $g_2$, if $g_1 \prec g_2$, then we may expect the former to promote an even sparser solution than the latter (provided we are not trapped at a bad local solution). Section 4 will argue that sparser solutions will then lead to better estimates of $x$ and $k$.
In contrast, with VB it is completely unclear to what degree $g_{\mathrm{VB}}$ favors sparse solutions, except in the special case from the previous section. We now explicitly describe sufficient and necessary conditions for $g_{\mathrm{VB}}$ to be a concave, non-decreasing function of $|x|$, which turn out to be much stricter than the conditions required for MAP.
Theorem 2 explicitly quantifies what class of image priors leads to a strong, sparsity-promoting penalty when fully propagated through the VB framework. Yet while this attribute may anchor VB as a legitimate sparse estimator in the image (filter) domain given an appropriate $f$, it does not explain precisely why VB often produces superior results to MAP. In fact, the associated MAP penalty (when generated from the same $f$) will actually promote sparse solutions under much weaker conditions, as follows:
Corollary 2. The MAP penalty $g_{\mathrm{MAP}}$ will be a concave, non-decreasing function of $|x|$ if and only if $\vartheta(\gamma) \triangleq \log \gamma + f(\gamma)$ is a concave, non-decreasing function on $[0, \infty)$.
The extra $\log \gamma$ factor implies that $f$ itself need not be concave to ensure that $g_{\mathrm{MAP}}$ is concave. For example, the selection $f(\gamma) = -\log \gamma$ is not concave, and yet the associated $g_{\mathrm{MAP}}$ still will be, since now $\vartheta(\gamma)$ is a constant, which is concave and non-decreasing as required by Corollary 2.
Moving forward then, to really understand VB we must look deeper and examine the role of $\rho$ in modulating the effective penalty on $x$. First we consider this penalty as a function of a single coefficient magnitude with $\rho$ fixed. Note that because $g_{\mathrm{VB}}$ is a symmetric function with respect to the origin, we may conveniently examine its concavity properties considering only the positive half of the real line.
Theorem 3. Let $f$ be a differentiable, non-decreasing function. Then we have the following:
1. As $|x| \to \infty$, the difference $g_{\mathrm{VB}}(x, \rho_1) - g_{\mathrm{VB}}(x, \rho_2)$ vanishes for any $\rho_1, \rho_2 \geq 0$. Therefore, $g_{\mathrm{VB}}(\cdot, \rho_1)$ and $g_{\mathrm{VB}}(\cdot, \rho_2)$ penalize large magnitudes of $x$ equally.
2. For any $x \neq 0$, the gap $g_{\mathrm{VB}}(x, \rho) - g_{\mathrm{VB}}(0, \rho)$ grows as $\rho$ shrinks. Therefore, as $\rho \to 0$, this gap is maximized, implying that a smaller $\rho$ favors zero-valued coefficients more heavily than a larger one.
This result implies that regardless of $\rho$, the penalty $g_{\mathrm{VB}}$ penalizes large magnitudes of any $x_i$ nearly equivalently. In contrast, small magnitudes are penalized much less as $\rho$ becomes smaller. So Theorem 3 loosely suggests that sparse solutions are more heavily favored when $\rho$ is smaller. However, we would ideally like to make more rigorous statements about the relative concavity of the various penalty functions involved, allowing us to make stronger claims about sparsity promotion.
Perhaps the simplest choice for $f$ which satisfies the conditions of Theorems 2 and 3, and a choice that has been advocated in the sparse estimation literature in different contexts, is to assume a constant value, $f(\gamma) = b$. This in turn implies that $p(x_i)$ is a Jeffreys non-informative prior on the coefficient magnitudes after solving the maximization from (10), and is attractive in part because there are no embedded hyperparameters (the constant $b$ is irrelevant). (The Jeffreys prior is of the form $p(x) \propto 1/|x|$, which represents an improper distribution that does not integrate to one.) This selection for $f$ leads to a particularly interesting closed-form penalty as follows:
In the special case where $f(\gamma) = b$, then
$$g_{\mathrm{VB}}(x, \rho) \;\equiv\; \frac{2|x|}{|x| + \sqrt{x^2 + 4\rho}} \,+\, \log\!\left[\,2\rho + x^2 + |x|\sqrt{x^2 + 4\rho}\,\right]. \qquad (17)$$
Figures 2 (a) and (b) display 1D and 2D plots of this penalty function. It is worth spending some time here to examine this particular selection for $f$ (and therefore $g_{\mathrm{VB}}$) in detail, since it elucidates many of the mechanisms whereby VB, with all of its attendant approximations and heuristics, can be effective.
In the limit as $\rho \to 0$, the first term in (17) converges to the indicator function $\mathbb{I}[x \neq 0]$, and thus when we sum over $i$ we obtain the $\ell_0$ norm of $x$. The second term in (17), when we again sum over $i$, converges to $\sum_i \log x_i^2$, ignoring a constant factor. Sometimes referred to as Gaussian entropy, this term can also be connected to the $\ell_0$ norm via the relations $\|x\|_0 = \lim_{p \to 0} \sum_i |x_i|^p$ and $\lim_{p \to 0} \frac{1}{p} \sum_i \left(|x_i|^p - 1\right) = \sum_i \log |x_i|$ (Wipf et al., 2011). Thus the cumulative effect when $\rho$ becomes small is an image prior that closely mimics the highly non-convex (in $|x|$) $\ell_0$ norm. In contrast, when $\rho$ becomes large, it can be shown that both terms in (17), when combined for all $i$, approach scaled versions of the convex $\ell_1$ norm. Additionally, if we assume a fixed kernel and ignore boundary effects, this scaling turns out to be optimal in a particular Bayesian sense described in (Wipf and Wu, 2012) (this technical point will be addressed further in a future publication).
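The described transition is easy to verify numerically. The snippet below assumes the closed-form penalty for constant $f$ takes the form $2|x| / (|x| + \sqrt{x^2 + 4\rho}) + \log[2\rho + x^2 + |x|\sqrt{x^2 + 4\rho}]$ (our reconstruction of (17)), and checks second differences: strongly negative (concave, $\ell_0$-like) for small $\rho$, nearly zero (linear, $\ell_1$-like) for large $\rho$.

```python
import numpy as np

def g_vb(x, rho):
    """Assumed closed-form VB image penalty for constant f (Jeffreys
    prior case); matches the two limiting regimes described in the text."""
    ax = np.abs(x)
    root = np.sqrt(x**2 + 4.0 * rho)
    return 2.0 * ax / (ax + root) + np.log(2.0 * rho + x**2 + ax * root)

pts = np.array([1.0, 2.0, 3.0])

# Small rho: pronounced concavity in |x| (log-like, l0-mimicking shape).
g_small = g_vb(pts, 1e-8)
d2_small = g_small[2] - 2 * g_small[1] + g_small[0]

# Large rho: nearly linear in |x| (properly scaled l1-like shape).
g_large = g_vb(pts, 1e6)
d2_large = g_large[2] - 2 * g_large[1] + g_large[0]

assert d2_small < -0.1          # strongly concave regime
assert abs(d2_large) < 1e-3     # essentially linear regime
```

Sweeping $\rho$ between these extremes traces out the smooth shape transition that the following subsections exploit.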
For intermediate values of ρ between these two extremes, we obtain a g_VB that becomes less concave with respect to each x_i as ρ increases (in the formal sense of relative concavity discussed in Section 3.1). In particular, we have the following:
If ρ₁ < ρ₂, then g_VB(·, ρ₁) is relatively concave with respect to g_VB(·, ρ₂) for x ∈ [0, ∞).
Thus, as the noise level λ is increased, ρ increases and we have a penalty that behaves more like a convex (less sparse) function, and so becomes less prone to local minima. In contrast, as ‖k‖₂ is increased, meaning that ρ is now reduced, the penalty actually becomes more concave with respect to |x|. This phenomenon is in some ways similar to certain homotopy sparse estimation schemes (e.g., (Chartrand and Yin, 2008)), where heuristic hyperparameters are used to gradually introduce greater non-convexity into canonical compressive sensing problems, but without any dependence on the noise or other factors. The key difference here with VB, however, is that penalty shape modulation is explicitly dictated by both the noise level λ and the kernel k in an entirely integrated fashion.
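This ordering can be probed numerically. Using the constant-f closed form for g_VB assumed earlier (our reconstruction), the check below tests whether g_VB(·, ρ₁) is a concave, nondecreasing function of g_VB(·, ρ₂), i.e., whether the slope of the composition is nonincreasing along an increasing grid of x > 0:

```python
import numpy as np

def g_vb(x, rho):
    # Closed-form VB penalty for constant f (an assumed reconstruction of (17)).
    ax = np.abs(x)
    s = np.sqrt(x**2 + 4.0 * rho)
    return 2.0 * ax / (ax + s) + np.log(2.0 * rho + x**2 + ax * s)

def is_relatively_concave(rho1, rho2, x=None):
    """Numerically test whether g_vb(., rho1) = sigma(g_vb(., rho2)) for some
    concave nondecreasing sigma, by checking that the slope dv/du of the
    parametric curve (u, v) = (g_vb(x, rho2), g_vb(x, rho1)) never increases."""
    if x is None:
        x = np.linspace(0.01, 10.0, 2000)
    u, v = g_vb(x, rho2), g_vb(x, rho1)
    slopes = np.diff(v) / np.diff(u)         # dv/du along the curve
    return bool(np.all(np.diff(slopes) <= 1e-9))  # nonincreasing => concave
```

Swapping the arguments reverses the verdict, consistent with smaller ρ producing the relatively more concave (more sparsity-promoting) penalty.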
To summarize then, the ratio ρ ≜ λ/‖k‖₂² can be viewed as modulating a smooth transition of the penalty function shape from something akin to the non-convex ℓ₀ norm to a properly-scaled ℓ₁ norm. In contrast, all conventional MAP-based penalties on x are independent of k or λ, and thus retain a fixed shape. The crucial ramifications of this coupling and ρ-controlled shape modulation, exclusive to the VB framework, will be addressed in the following two subsections. Other choices for f, which exhibit a partially muted form of this coupling, will be considered in Section 3.7, which will also address a desirable form of invariance that only exists when f is a constant.
3.5 Noise Dependency Analysis
The success of practical VB blind deconvolution algorithms is heavily dependent on some form of stagewise coarse-to-fine approach, whereby the kernel is repeatedly re-estimated at successively higher resolutions. At each stage, a lower resolution version is used to initialize the estimate at the next higher resolution. One way to implement this approach is to initially use large values of λ such that only dominant, primarily low-frequency image structures dictate the optimization (Levin et al., 2009). During subsequent iterations, as the blur kernel begins to reflect the correct coarse shape, λ can be gradually reduced to allow the recovery of more detailed, fine structures.
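A minimal scaffold of this coarse-to-fine loop might look as follows. The pyramid depth, the geometric λ schedule, and the block-average downsampling are illustrative assumptions on our part, and the actual kernel/image updates are left as a stub:

```python
import numpy as np

def downsample(img, factor):
    # Simple block-average downsampling (a stand-in for the resampling
    # typically used in deblurring pipelines).
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def coarse_to_fine_schedule(y, n_levels=4, lam_start=1e-1, lam_end=1e-4):
    """Yield (blurry image at this resolution, noise level lambda) pairs,
    coarsest resolution and largest lambda first."""
    lams = np.geomspace(lam_start, lam_end, n_levels)  # decreasing schedule
    for level, lam in zip(reversed(range(n_levels)), lams):
        yield downsample(y, 2 ** level), lam

y = np.random.rand(128, 128)
for y_level, lam in coarse_to_fine_schedule(y):
    pass  # kernel/image updates (e.g., Algorithm 1 iterations) would go here
```

Because ρ = λ/‖k‖₂², the large initial λ automatically keeps the penalty in its nearly convex regime during the coarse stages.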
A highly sparse (concave) prior can ultimately be more effective in differentiating sharp images and fine structures than a convex one. Detailed supporting evidence for this claim can be found in (Fergus et al., 2006; Levin et al., 2007; Krishnan and Fergus, 2009; Cho et al., 2012), as well as in Section 4 below. However, if such a prior is applied at the initial stages of estimation, the iterations are likely to become trapped at suboptimal local minima, of which there will always be a combinatorial number. Moreover, in the early stages, the effective noise level is actually high due to errors contained in the estimated blur kernel, and exceedingly sparse image penalties are likely to produce unstable solutions. Given the reformulation outlined above, we can now argue that VB implicitly avoids these problems by beginning with a large λ (and therefore a large ρ), such that the penalty function is initially nearly convex in |x| (see Figure 2). As the iterations proceed and fine structures need to be resolved, the penalty function becomes less convex as λ is reduced, but the risk of local minima and instability is ameliorated by the fact that we are likely to already be in the neighborhood of a desirable basin of attraction. Additionally, the implicit noise level (or modeling error) is now substantially less.
This kind of automatic ‘resolution’-adaptive penalty shaping is arguably superior to conventional MAP approaches based on (2), where the concavity/shape of the induced separable penalty function is kept fixed regardless of the variation in the noise level or scale, i.e., at different resolutions across the coarse-to-fine hierarchy. In general, it seems very unlikely that the same penalty shape would be optimal across vastly different noise scales. This advantage over MAP can be easily illustrated by simple head-to-head comparisons where the underlying prior distributions are identical (such as the previous example from Section 3.3). Additionally, this phenomenon can be further enhanced by automatically learning λ, as discussed in Section 5.
3.6 Blur Dependency Analysis
The shape parameter ρ is also affected by the kernel norm, with larger values of ‖k‖₂ leading to less convexity of the penalty function, while small values increase the convexity, as can also be observed from Figure 2. With the standard assumptions Σ_i k_i = 1 and k_i ≥ 0, ‖k‖₂² is bounded between 1/m and 1, where m is the number of pixels in the kernel. (Footnote 8: Actually, because of natural invariances embedded into the VB cost function, the assumption Σ_i k_i = 1 is not needed for what follows. See Section 3.7 for more details.) An increase of ‖k‖₂ indicates that the kernel is more sparse, with the extreme case of a delta kernel leading to ‖k‖₂ = 1. In this situation, g_VB is the most concave in |x_i| (per the analysis of Section 3.4), which is reasonable, as this is the easiest kernel type to handle, so the sparsest penalty function can be used without much concern over local minima. In contrast, ‖k‖₂ is the smallest when all elements of k are equal, which is the more challenging case corresponding with a broad diffuse image blur, with many local minima. In this situation, the penalty function is more convex and conservative. In general, a highly concave prior is not needed to disambiguate a highly blurred image from a relatively sharp one.
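These bounds are easy to confirm numerically for normalized, nonnegative kernels (a quick Python check with our own variable names):

```python
import numpy as np

m = 25  # number of pixels in the kernel
delta = np.zeros(m)
delta[0] = 1.0                      # maximally sparse kernel (no blur)
uniform = np.full(m, 1.0 / m)       # maximally diffuse kernel (broad blur)

# With k >= 0 and sum(k) = 1, ||k||_2^2 attains its maximum (1) at the delta
# kernel and its minimum (1/m) at the uniform kernel; any other normalized
# kernel falls in between, so rho = lam / ||k||_2^2 can vary by a factor of m.
random_k = np.random.dirichlet(np.ones(m))  # random kernel on the simplex
norms = [np.sum(k**2) for k in (delta, uniform, random_k)]
```

Thus the kernel alone can swing ρ, and hence the penalty shape, across a range proportional to the kernel size m.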
Additionally, at the beginning of the learning process when λ is large and before any detailed structures have been resolved, the penalty on k from (11) will naturally favor a blurry, diffuse kernel in the absence of additional information. This will help ensure that g_VB is relatively convex and less aggressive during the initial VB iterations. However, as the algorithm proceeds, λ is reduced, and some elements of k are pushed towards zero, the penalty g_VB, with its embedded ‖k‖₂ dependency, will gradually become less convex and can increasingly dominate the overall cost function (since for small λ and large ‖k‖₂ the lower bound on g_VB can drop arbitrarily per the above-described concavity modulation). Because g_VB is minimized as x becomes relatively sparse, a more refined k can be explored at this stage to the extent that x can be pushed towards greater sparsity as well (if x is not sparse, then there is no real benefit to refining k). Again, this desirable effect occurs with relatively limited risk of local minima because of the gradual, intrinsically-calibrated introduction of increased concavity.
We may also consider these ideas in the context of existing MAP algorithms, which adopt various structure selection heuristics, implicitly or explicitly, to achieve satisfactory performance (Shan et al., 2008; Cho and Lee, 2009; Xu and Jia, 2010). This can be viewed as adding additional image penalty terms and trade-off parameters to (2). For example, (Shan et al., 2008) incorporates an extra local penalty on the latent image, such that the gradients of small-scale structures in the recovered image are close to those in the blurry image. Thus they will actually contribute less to the subsequent kernel estimation step, allowing larger structures to be captured first. Similarly, a bilateral filtering step is used for pruning out small scale structures in (Cho and Lee, 2009). Finally, (Xu and Jia, 2010) develop an empirical structure selection metric designed such that small scale structures can be pruned away by thresholding the corresponding response map, allowing subsequent kernel estimation to be dominated by only large-scale structures.
Generally speaking, existing MAP strategies face a trade-off: either they must adopt a highly sparse image prior needed for properly resolving fine structures (see Section 4) and then deal with the attendant constellation of problematic local minima (Footnote 9: Appropriate use of continuation methods such as the algorithm from (Chartrand and Yin, 2008) may help in this regard.), or rely on a smoother image prior augmented with compensatory structure-selection measures such as those described above to avoid bad global solutions. In contrast, we may interpret the coupled penalty function intrinsic to VB as a principled alternative with a transparent, integrated functionality for estimation at different resolutions without any additional penalty factors or complexity.
3.7 Other Choices for f
Because essentially any sparse prior on x can be expressed using the alternative variational form from (10), choosing such a prior is tantamount to choosing f, which then determines g_VB. The results of Theorems 2 and 3 suggest that a concave, non-decreasing g_VB is useful for favoring sparsity in x (assumed to be in the gradient domain). Moreover, Theorem 4 and subsequent analyses suggest that the simplifying choice f(γ) = b possesses several attractive properties regarding the relative concavity of the resulting g_VB. But what about other selections for f, and therefore g_VB?
While directly working with g_VB can sometimes be limiting (except in certain special cases like f(γ) = b from before), the variational form of (12) allows us to closely examine the relative concavity of a useful proxy. Let

ψ(γ, ρ) ≜ log(ρ + γ) + f(γ).
Then for fixed λ and k, the VB estimation problem can equivalently be viewed as solving

min_{x, γ ≥ 0} (1/λ)‖y − k ∗ x‖₂² + Σ_i [ x_i²/γ_i + ψ(γ_i, ρ) ].
It now becomes clear that the sparsity of γ and x are intimately related. More concretely, assuming f is concave and non-decreasing (as motivated by Theorems 2 and 3), there is actually a one-to-one correspondence in that whenever x_i = 0, the optimal γ_i equals zero as well, and vice versa. (Footnote 10: To see this, first consider x_i = 0. The term x_i²/γ_i can be ignored, and so the optimal γ_i need only minimize ψ(γ_i, ρ), which is concave and non-decreasing whenever f is. Therefore the optimal γ_i is trivially zero. Conversely, if γ_i = 0, then there is effectively an infinite penalty on x_i, and so the optimal x_i must also be zero.) Therefore we may instead examine the relative concavity of ψ for different ρ values, which will directly determine the sparsity of γ and, in turn, the sparsity of x. This then motivates the following result:
Thus, although we have not been able to formally establish a relative concavity result for all general g_VB directly, Theorem 5 provides a nearly identical analog, allowing us to draw similar conclusions to those detailed in Sections 3.5 and 3.6 whenever a general affine f is adopted. Perhaps more importantly, it also suggests that as f deviates from an affine function, we may begin to lose some of the desirable effects regarding the described penalty shape modulation.
While previously we closely scrutinized the special affine case where f(γ) = b, it still remains to examine the more general affine form f(γ) = aγ + b, a > 0. In fact, it is not difficult to show that as a is increased, the resulting penalty on x increasingly resembles an ℓ₁ norm with lesser dependency on ρ, thus severely muting the effect of the shape modulation that appears to be so effective (see arguments above and empirical results section below). So there currently does not seem to be any advantage to choosing some a > 0, and we are left, out of the multitude of potential image priors, with the conveniently simple choice of f(γ) = b, where the value of b is inconsequential. Experimental results support this conclusion: namely, as a is increased from zero, performance gradually degrades (results not shown for space considerations).
As a final justification for simply choosing f(γ) = b, there is a desirable form of invariance that uniquely accompanies this selection.
If x* and k* represent the optimal solution to (11) under the constraint Σ_i k_i = 1, then α⁻¹x* and αk* will always represent the optimal solution under the modified constraint Σ_i k_i = α if and only if f(γ) = b.
This is unlike the myriad of MAP estimation techniques, or VB with other choices of f, where the exact calibration of the constraint can fundamentally alter the form of the optimal solution beyond a mere rescaling. Moreover, if such a constraint on k is omitted altogether, these other methods must then carefully tune associated trade-off parameters, so in one way or another this lack of invariance will require additional tuning.
Interestingly, Babacan et al. (2012) experiment with a variety of VB algorithms using different underlying image priors, and empirically find that a constant f works best; however, no rigorous explanation is given for why this should be the case. (Footnote 11: Based on a strong simplifying assumption that the covariance from Algorithm 1 is a constant, Babacan et al. (2012) provide some preliminary discussion regarding why VB may be advantageous over MAP. However, this material mostly exists in the sparse estimation literature (e.g., see (Palmer et al., 2006; Wipf et al., 2011) and related references), and therefore the behavior of VB blind deconvolution remains an open question, including why a constant f might be optimal.) Thus, our results provide a powerful theoretical confirmation of this selection, along with a number of useful attendant intuitions.
3.8 Analysis Summary
To summarize this section, we have shown that the shape of the effective VB image penalty is explicitly controlled by the ratio ρ of the noise variance to the squared kernel norm, and that in many circumstances this leads to a desired mechanism for controlling relative concavity and balancing sparsity, largely mitigating issues such as local minima that compromise the convergence of more traditional MAP estimators. We have then demonstrated a unique choice for the image prior (i.e., when f is constant) such that this mechanism is in some sense optimal and scale-invariant. Of course we readily concede that different choices for the image prior could still be useful when other factors are taken into account. We also emphasize that none of this is meant to suggest that real imaging data follows a Jeffreys prior distribution (which is produced when f is constant). We will return to this topic in Section 4 below. Overall, this perspective provides a much clearer picture of how VB is able to operate effectively and how we might expect to optimize performance.
While space precludes a detailed treatment, many natural extensions to VB are suggested by these developments. For example, in the original formulation of VB given by (6), it is not clear how best to incorporate alternative noise models because the required integrations are no longer tractable. However, when VB is viewed alternatively via (11), it becomes obvious that different data-fidelity terms can easily be substituted in place of the quadratic likelihood factor. Likewise, given additional prior knowledge about the blur kernel, there is no difficulty in substituting for the norm constraint on k or the uniform convolutional observation model to reflect additional domain knowledge. Thus, the proposed reformulation allows VB to inherit most of the transparent extensibility previously reserved for MAP.
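To make this extensibility concrete, here is a sketch of a reformulated objective in the spirit of (11) with a pluggable data-fidelity term. The penalty uses the constant-f closed form assumed earlier; the 1D convolution, function names, and loss choices are our own illustrations, not the paper's implementation:

```python
import numpy as np

def g_vb(x, rho):
    # Closed-form VB penalty for constant f (an assumed reconstruction of (17)).
    ax = np.abs(x)
    s = np.sqrt(x**2 + 4.0 * rho)
    return 2.0 * ax / (ax + s) + np.log(2.0 * rho + x**2 + ax * s)

def vb_objective(x, k, y, lam, fidelity=lambda r: np.sum(r**2)):
    """Objective in the spirit of (11): data fidelity plus the coupled image
    penalty, with rho = lam / ||k||_2^2 tying the two together."""
    residual = y - np.convolve(x, k, mode="same")
    rho = lam / np.sum(k**2)
    return fidelity(residual) / lam + np.sum(g_vb(x, rho))

# Heavy-tailed alternative: simply pass a different fidelity term.
l1 = lambda r: np.sum(np.abs(r))
```

Swapping the quadratic loss for the ℓ₁ loss models heavy-tailed (e.g., impulsive) noise with no other changes to the pipeline, illustrating the transparency of the reformulation.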
4 The Trouble with Natural Image Statistics
Levin et al. (Levin et al., 2009, 2011a, 2011b), which represents the primary inspiration for our work, presents a compelling and highly influential case that joint MAP estimation over x and k generally favors a degenerate, no-blur solution, meaning that k will be a delta function, even when the assumed image prior reflects the true underlying distribution of x and the prior on k is assumed flat. (Footnote 12: Note that Levin et al. frequently use MAP to refer to joint MAP estimation over both x and k (Type I), while using MAP estimation of k alone after x has been marginalized out (Type II). The latter then represents the inference ideal that VB purports to approximate, equivalent to (3) herein.) In turn, this is presented as a primary argument for why MAP is inferior to VB. As this line of reasoning is considerably different from that given in Section 3, here we will take a closer look at these orthogonal perspectives in the hopes of providing a clarifying resolution.
To begin, it helps to revisit the formal analysis of MAP failure from (Levin et al., 2011b), where the following specialized scenario is presented. Assume that a blurry image y is generated by y = k* ∗ x*, where k* is the true kernel and each image gradient of x* is drawn iid from the generalized Gaussian distribution p(x_i) ∝ exp(−|x_i|^p). Now consider the minimization problem

min_{x, k} Σ_i |x_i|^p  s.t.  y = k ∗ x,  k ≥ 0,  Σ_i k_i = 1.   (20)
Solving (20) is equivalent to MAP estimation over x and k under the true image prior (and a flat prior on k within the specified constraint set). In the limit as the image grows arbitrarily large, (Levin et al., 2011b, Claim 2) proves that the no-blur delta solution {x = y, k = δ} will be favored over the true solution {x*, k*}. Intuitively, this occurs because the blurring operator contributes two opposing effects:
It reduces a measure of the image sparsity, which increases Σ_i |y_i|^p, and
It broadly reduces the overall image variance, which reduces Σ_i |y_i|^p.
Depending on the relative contributions, we may have the situation where the second effect dominates such that Σ_i |y_i|^p is less than Σ_i |x*_i|^p, meaning the cost function value at the blurred image is actually lower than at the true, sharp image. Consequently, MAP estimation may not be reliable.
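These two opposing effects are easy to reproduce with an idealized example of our own construction: the gradient of a sharp step edge is a single unit spike, and a width-5 box blur spreads it into five entries of magnitude 1/5. For p < 1 the sparsity effect dominates and the sharp signal is favored, at p = 1 the two exactly balance, and for p > 1 variance reduction dominates and the blurred signal wins:

```python
import numpy as np

def lp(x, p):
    # The l_p quasi-norm penalty sum_i |x_i|^p appearing in (20).
    return np.sum(np.abs(x) ** p)

sharp_grad = np.array([1.0])            # gradient of an ideal step edge
blurred_grad = np.full(5, 1.0 / 5.0)    # same edge after a width-5 box blur

for p in (0.5, 1.0, 2.0):
    print(p, lp(sharp_grad, p), lp(blurred_grad, p))
```

In general the blurred penalty equals 5^(1−p) versus 1 for the sharp spike, so the sign of the comparison flips exactly at p = 1.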
The results presented herein then suggest a sort of paradox: in Section 3 we have argued that VB is actually equivalent to an unconventional form of MAP estimation over x and k, but with an intrinsic mechanism for avoiding bad local minima, increasing the chances that a good global or near-global minimum can be found. Moreover, at least in the noiseless case (λ → 0), any such minimum will be exactly equivalent to the standard MAP solution by virtue of Corollary 1. However, based on the noiseless analysis from Levin et al. above, any global MAP solution is unlikely to involve the true sharp image when the true image statistics are used for the prior, meaning that VB performance should be poor as well at a global solution. Thus, how can we reconcile the positive performance of VB actually observed in practice, and its avoidance of degenerate no-blur solutions, with Levin et al.'s characterization of the MAP cost function?
First, when analyzing MAP, Levin et al. consider only a flat prior on k within the constraint set k ≥ 0 and Σ_i k_i = 1. However, MAP estimation may still avoid no-blur solutions when equipped with an appropriate non-flat kernel prior. Likewise, VB naturally produces an explicit penalty factor on k (see Theorem 1) that favors blurry explanations (non-delta kernels), since the delta kernel maximizes the norm ‖k‖₂. Moreover, VB introduces this prior in a convenient form devoid of additional tuning parameters, whereas a traditional MAP estimator would generally require some form of cross-validation.
Secondly, and perhaps more importantly, the question of whether y (the blurry image) or x* (the true sharp image) is more heavily favored by the true image prior is not really the most relevant issue to begin with. A more pertinent question is whether there exists some sparse x̃ with ‖y − k* ∗ x̃‖₂ ≤ ε that serves as a reasonable proxy for x*. If so, then the solution to the relaxation

min_{x, k} Σ_i |x_i|^p  s.t.  ‖y − k ∗ x‖₂ ≤ ε,  k ≥ 0,  Σ_i k_i = 1   (21)

with p small is very unlikely to be {x = y, k = δ}. And this is guaranteed to be true as p becomes sufficiently small, assuming ε is set appropriately. It is crucial to understand here that the exponent p from (21) need not correspond with the true distribution of x*, as long as x* is reasonably close to some sparse solution. The point then is that with p small, regardless of the true image statistics, maximally sparse solutions will be favored, and this is very unlikely to involve the no-blur solution. Therefore, just because x* may not be exactly sparse, we may nonetheless locate a sparse approximation x̃ that is sufficiently reasonable such that the unknown k* can still be estimated accurately.
In general, it is more important that the assumed image prior be maximally discriminative with respect to blurred and sharp images than that it accurately reflect the statistics of real images. Mathematically, this implies that it is much more important that we have Σ_i |x*_i|^p < Σ_i |y_i|^p, or under less than ideal circumstances Σ_i |x̃_i|^p < Σ_i |y_i|^p, than that p match the true image statistics, even if the latter were known exactly. This is because the sparsity/variance trade-off described above implies that, under the true statistics, it may often be the case that the blurred image y is assigned a lower penalty than x* and/or x̃.
To achieve the best results then, we must counteract the negative effects of variance reduction when choosing p. For example, the penalty Σ_i |x_i|^p becomes less sensitive to the image variance as p becomes small, and in the limit p → 0 it actually becomes completely invariant to it. So we may expect that smaller p values are more appropriate for disambiguating blurred from unblurred images. Importantly, the optimal p will generally not match the exponent of the generalized Gaussian fit commonly reported as an estimate of true image statistics.
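The variance (in)sensitivity claim can be checked directly: rescaling a signal by α scales the p = 2 penalty by α² but leaves Σ_i |x_i|^p nearly unchanged once p is small, since in the limit p → 0 the penalty depends only on the support of x. A small sketch with arbitrary toy values of our own choosing:

```python
import numpy as np

def lp(x, p):
    return np.sum(np.abs(x) ** p)

x = np.array([0.5, 2.0, 0.0, 4.0])   # toy gradient values
for alpha in (0.1, 1.0, 10.0):       # rescale the overall image variance
    print(alpha, lp(alpha * x, 0.001), lp(alpha * x, 2.0))
```

For p = 0.001 the three penalty values agree to within about half a percent, whereas for p = 2 they span four orders of magnitude, so only the small-p penalty is immune to the variance reduction induced by blurring.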
To illustrate these points, we reproduce (Levin et al., 2011b, Figure 1), where three 1D image slices (an ideal edge, an ideal spike, and a real image slice) are compared both before and after blurring. However, unlike (Levin et al., 2011b, Figure 1), we do not strictly enforce y = k ∗ x, but compare sharp and blurred images using the relaxed criterion from (21). Figure 3 displays the results, where the optimal value of (21) is computed as p is varied, using both the delta kernel and the true blur kernel k*. For strong edges (the simplest case), a wide range of p is sufficient for avoiding the delta solution, while when finer structures need to be resolved, a much smaller p is required. While panels (a) and (b) reflect the results and conclusions from (Levin et al., 2011b, Figure 1), panel (c) tells a very different story. Basically, whereas (Levin et al., 2011b, Figure 1) shows the delta kernel (blurred image) being favored even for small values of p, the relaxed condition (21) strongly prefers the true blur kernel for a wide range of p.
Generalizing to real 2D images, to ensure practical success and capture high-resolution details, lower values of p are definitely preferred for limiting the undesirable effects of variance reduction mentioned above. To visualize this claim, we now present a revised version of (Levin et al., 2011b, Figure 2), which depicts the regions of a real image where the sharp solution is preferred to the undesirable blurred solution, with relative preference rated by the corresponding ratio of penalty values. In Figure 4 we update (Levin et al., 2011b, Figure 2) to include both lower p values as well as the analogous ranking using the g_VB from (17) with ρ near zero. From this figure, it is readily apparent that smaller p values are more effective, and that the VB penalty function does indeed behave like the ℓ₀ norm when ρ is small.
Of course, from a practical standpoint, solving (21) represents a difficult, combinatorial optimization problem with numerous local minima when p is small, and we speculate that many have tried such direct minimization and concluded that smaller p values were inadequate. However, the penalty function shape modulation intrinsic to VB ultimately provides a unique surrogate for circumventing this problem, hence its strong performance. Thus we can briefly summarize why VB can be superior to MAP: VB allows us to use a near-optimal image penalty, one that is maximally discriminative between blurry and sharp images, but with a reduced risk of getting stuck in bad local minima during the optimization process. Overall, these conclusions provide a more complete picture of the essential differences between MAP and VB.
Before proceeding to the next section, we emphasize that none of the arguments presented herein discredit the use of natural image statistics when directly solving (3). In fact, Levin et al. (2011b) prove that when the image prior matches the true statistics, then in the limit as the image grows large, the MAP estimate for k alone, after marginalizing over x (Type II), will equal the true k*. But there is no inherent contradiction with our results, since it should now be readily apparent that VB is fundamentally different from solving (3), and therefore justification for the latter cannot be directly transferred to justification for the former. This highlights the importance of properly differentiating various forms of Bayesian inference.
Natural image statistics are ideal in cases where the image size grows large and we are able to integrate out the unknown x, benefitting from central limit arguments when estimating k alone. However, when we jointly compute MAP estimates of both x and k (Type I) as in (2), we enjoy no such asymptotic welfare, since the number of unknowns increases proportionally with the sample size. One of the insights of our paper is to show that, at least in this regard, VB is on an exactly equal footing with Type I MAP, and thus we must look for theoretical VB justification elsewhere, leading to the analysis of relative concavity, local minima, invariance, maximal sparsity, etc. presented herein.
While existing VB blind deconvolution algorithms typically utilize some preassigned decreasing sequence for λ as described in Section 3.5 and noted in Algorithm 1, it may be preferable to have λ learned automatically from the data itself, as is common in other applications of VB. This alternative strategy also has the conceptual appeal of an integrated cost function that is universally reduced even as λ is updated; in contrast, the λ reduction step in Algorithm 1 may in fact increase the overall cost, unlike all of the other updates. However, current VB deblurring papers either do not mention such a seemingly obvious alternative (perhaps suggesting that the authors unsuccessfully tried such an approach) or explicitly mention that learning λ is problematic but without concrete details. For example, (Levin et al., 2011b) observed that the noise-level learning used in (Fergus et al., 2006) represents a source of problems, as the optimization diverges when the estimated noise level decreases too much. But there is no mention of why λ might decrease too much, and further details or analyses are absent.
Interestingly, the perspective presented herein provides a clear picture of why learning λ may be difficult and suggests some potential fixes. The problem stems from a degenerate global minimum of (11), and therefore of (6), occurring at λ = 0. The explanation for this is as follows. Consider minimization of (11) over x, k, and λ. Because the combined dimensionality of x and k is larger than that of y, there are an infinite number of candidate solutions satisfying y = k ∗ x exactly. Therefore the data-fit term in the VB cost function (11) can be minimized to exactly zero even in the limit as λ → 0. Moreover, by Theorem 3, we observe that g_VB becomes increasingly small as ρ → 0 around solutions where x is small or near zero. In fact, it can actually be shown that g_VB → −∞ as ρ → 0 for all non-decreasing f. (Footnote 13: Based on (12), it is clear that the optimizing value for computing will be