Deconvolving Images with Unknown Boundaries Using the Alternating Direction Method of Multipliers

10/09/2012 · by Mariana S. C. Almeida, et al.

The alternating direction method of multipliers (ADMM) has recently sparked interest as a flexible and efficient optimization tool for imaging inverse problems, namely deconvolution and reconstruction under non-smooth convex regularization. ADMM achieves state-of-the-art speed by adopting a divide-and-conquer strategy, wherein a hard problem is split into simpler, efficiently solvable sub-problems (e.g., using fast Fourier or wavelet transforms, or simple proximity operators). In deconvolution, one of these sub-problems involves a matrix inversion (i.e., solving a linear system), which can be done efficiently (in the discrete Fourier domain) if the observation operator is circulant, i.e., under periodic boundary conditions. This paper extends ADMM-based image deconvolution to the more realistic scenario of unknown boundary, where the observation operator is modeled as the composition of a convolution (with arbitrary boundary conditions) with a spatial mask that keeps only pixels that do not depend on the unknown boundary. The proposed approach also handles, at no extra cost, problems that combine the recovery of missing pixels (i.e., inpainting) with deconvolution. We show that the resulting algorithms inherit the convergence guarantees of ADMM and illustrate their performance on non-periodic deblurring (with and without inpainting of interior pixels) under total-variation and frame-based regularization.


I Introduction

The alternating direction method of multipliers (ADMM), originally proposed in the 1970’s [25], emerged recently as a flexible and efficient tool for several imaging inverse problems, such as denoising [23, 38], deblurring [2], inpainting [2], reconstruction [34], motion segmentation [44], to mention only a few classical problems (for a comprehensive review, see [9]). ADMM-based approaches make use of variable splitting, which allows a straightforward treatment of various priors/regularizers [1], such as those based on frames or on total-variation (TV) [36], as well as the seamless inclusion of several types of constraints (e.g., positivity) [23, 38]. ADMM is closely related to other techniques, namely the so-called Bregman and split Bregman methods [10, 26, 43] and Douglas-Rachford splitting [9, 14, 19, 21].

Several ADMM-based algorithms for imaging inverse problems require, at each iteration, solving a linear system (equivalently, inverting a matrix) [2, 10, 23, 33, 39]. This fact is simultaneously a blessing and a curse. On the one hand, the matrix to be inverted is related to the Hessian of the objective function, thus carrying second-order information; arguably, this fact justifies the excellent speed of these methods, which have been shown (see, e.g., [2]) to be considerably faster than the classical iterative shrinkage-thresholding (IST) algorithms [16, 22] and even than their accelerated versions [7, 8, 42]. On the other hand, this inversion (due to its typically huge size) limits the applicability of these methods to problems where it can be computed efficiently (by exploiting some particular structure). For ADMM-based image deconvolution algorithms [2, 10, 23, 39], this inversion can be efficiently carried out using the fast Fourier transform (FFT), if the convolution is cyclic/periodic (or assumed to be so), thus diagonal in the discrete Fourier domain. However, as explained next, periodicity is an unnatural assumption, inadequate for most real imaging problems.

In deconvolution, the pixels located near the boundary of the observed image depend on pixels (of the unknown image) located outside of its domain. The typical way to formalize this issue is to adopt a so-called boundary condition (BC).

  • The periodic BC (the use of which, in image deconvolution, dates back to the 1970s [6]) assumes a periodic convolution; its matrix representation is circulant, thus diagonalized by the DFT, which can be computed via the FFT. (In fact, when dealing with 2-dimensional (2D) images, these matrices are block-circulant with circulant blocks, diagonalized by the 2D discrete Fourier transform (DFT); for simplicity, we refer to such matrices simply as circulant and to the 2D DFT simply as the DFT.) This computational convenience makes it, arguably, the most commonly adopted BC.

  • The zero BC assumes that all the external pixels have zero value, thus the matrix representing the convolution is block-Toeplitz, with Toeplitz blocks [31]. By analogy with the BC for ordinary or partial differential equations that assumes fixed values at the domain boundary, this is commonly referred to as the Dirichlet BC [31].

  • In the reflexive and anti-reflexive BCs, the pixels outside the image domain are a reflection of those near the boundary, using even or odd symmetry, respectively [13, 18, 32]. In the reflexive BC, the discrete derivative at the boundary is zero; thus, by analogy with the BC for ordinary or partial differential equations that fixes the value of the derivative at the boundary, the reflexive BC is often referred to as the Neumann BC [31].

Fig. 1: Illustration of the (unnatural) assumptions underlying the periodic, reflexive, and zero boundary conditions.

As illustrated in Figure 1, all these standard BCs are quite unnatural, as they do not correspond to any realistic imaging system and are motivated merely by computational convenience. In particular, assuming a periodic BC has the advantage of allowing a very fast implementation of the convolution as a point-wise multiplication in the DFT domain, efficiently implementable using the FFT. However, in real problems, there is no reason for the external (unobserved) pixels to follow periodic (or any other) boundary conditions. A well-known consequence of this mismatch is a degradation of the deconvolved images, such as ringing artifacts emanating from the boundaries (see Figs. 2 and 3). These artifacts can be reduced by pre-processing the image to attenuate the spurious discontinuities at the boundaries created by the (wrong) periodicity assumption; this is what the "edgetaper" function of the MATLAB Image Processing Toolbox does. In a more sophisticated, recently proposed version of this idea, the observed image is extrapolated to create a larger image with smooth BCs [27].
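To make this concrete, the following minimal numpy sketch (ours, not from the paper; the 9×9 uniform blur and 64×64 image are arbitrary choices) implements a periodic convolution as a point-wise product in the DFT domain and compares it with a true linear convolution: the two agree on the interior pixels and differ only near the boundary, which is exactly the mismatch that produces the artifacts discussed above.

```python
# Periodic (circular) convolution via the FFT vs. a true linear convolution.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
x = rng.random((64, 64))                       # "true" image
h = np.ones((9, 9)) / 81.0                     # 9x9 uniform blur (assumed example)

# Periodic convolution: embed the kernel in an image-sized array, centred at (0,0).
h_pad = np.zeros_like(x)
h_pad[:9, :9] = h
h_pad = np.roll(h_pad, (-4, -4), axis=(0, 1))  # centre the kernel
y_periodic = np.real(np.fft.ifft2(np.fft.fft2(h_pad) * np.fft.fft2(x)))

# Linear convolution, keeping only pixels that do not depend on the unknown boundary.
y_valid = convolve2d(x, h, mode='valid')       # (64-8) x (64-8) interior

# Interior pixels coincide; the boundary band of the periodic result is corrupted.
interior = y_periodic[4:-4, 4:-4]
print(np.allclose(interior, y_valid))          # True
```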

Convolutions under the zero BC can still be efficiently performed in the DFT domain, by embedding (using zero-padding) the non-periodic convolution into a larger periodic one [31]. However, in addition to the mismatch problem pointed out in the previous paragraph (there is usually no reason to assume that the boundary is zero), this choice has another drawback: in the context of ADMM-based methods, the zero-padding technique prevents the required matrix inversion from being efficiently computed via the FFT [4].

I-A Related Work and Contributions

Instead of adopting a standard BC or a boundary smoothing scheme, a more realistic model of actual imaging systems treats the external boundary pixels as unknown; i.e., the problem is seen as one of simultaneous deconvolution and inpainting, where the unobserved boundary pixels are estimated together with the deconvolved image. The corresponding observation model is the composition of a spatial mask with a convolution under arbitrary BC [13, 29, 35, 40]; addressing this formulation using ADMM is the central theme of this paper. While we were concluding this manuscript, we became aware of very recent related work in [30].

Under quadratic (Tikhonov) regularization, image deconvolution with periodic BC corresponds to a linear system, where the corresponding matrix can be efficiently inverted in the DFT domain using the FFT. In the unknown boundary case, it was shown in [35] how this inversion can be reduced to an FFT-based inversion followed by the solution (via, e.g., conjugate gradient -- CG) of a much smaller system (of the same dimension as the unknown boundary). Very recently, [40] adapted this technique to deconvolution under TV and frame-based analysis non-smooth regularization; that work proposes an algorithm based on variable splitting and quadratic penalization, using the method of [35] to solve the linear system at each iteration. That method is related to, but is not, ADMM, and thus has no guarantee of convergence to a minimizer of the original objective function. Although [40] mentions the possibility of using the method of [35] within ADMM, that option was not explored.

In this paper, we address image deconvolution with unknown boundaries, under TV-based and frame-based (synthesis and analysis) non-smooth regularization, using ADMM. Our first and main approach exploits the fact that the observation model involves the composition of a spatial mask with a periodic convolution, and uses ADMM in a way that decouples these two operations. The resulting algorithms inherit all the desirable properties of previous ADMM-based deconvolution methods: all the update equations (including the matrix inversion) can be computed efficiently without using inner iterations; convergence is formally guaranteed. The second approach considered is the direct extension of the methods from [2, 10, 23, 39] to the unknown boundary case; here (unlike with periodic BC), the linear system at each iteration cannot be solved efficiently using the FFT, thus we adopt the technique of [35, 40]. However, since the resulting algorithms are instances of ADMM, they have convergence guarantees, which are missing in [40]. Furthermore, we show how the method of [35] can also be used in ADMM for frame-based synthesis regularization, whereas [40] only considers analysis formulations.

The proposed methods are experimentally illustrated using frame-based and TV-based regularization; the results show the advantage of our approach over the use of the "edgetaper" function (in terms of improvement in SNR) and over an adaptation of the recent strategy of [40] (in terms of speed). Finally, our approach is also tested on inverse problems where the observation model consists of a (non-cyclic) deblurring followed by a generic loss of pixels (inpainting problems). Arguably due to its complexity, this problem has not often been addressed in the literature, although it is a relevant one: consider the problem of deblurring an image in which some pixel values are inaccessible, e.g., because they are saturated (and thus should not be taken into account) or because they correspond to so-called dead pixels in the image sensor.

I-B Outline and Notation

Sections II and III review ADMM and its use for image deconvolution with periodic BC, using frame-based and TV-based regularization, setting the stage for the proposed approaches for the unknown boundary case, which are introduced in Section IV. The experimental evaluation of our methods is reported in Section V, which also illustrates their use for simultaneous deblurring and inpainting. Section VI concludes the manuscript.

We use the following notation: $\bar{\mathbb{R}} = \mathbb{R}\cup\{+\infty\}$ is the extended real line; bold lower case denotes vectors (e.g., $\mathbf{x}$, $\mathbf{y}$), and bold upper case (Roman or Greek) denotes matrices (e.g., $\mathbf{A}$, $\boldsymbol{\Lambda}$); the superscript $(\cdot)^H$ denotes vector/matrix transpose (conjugate transpose, in the complex case); $\mathbf{I}$, $\mathbf{1}$, and $\mathbf{0}$ are the identity matrix and the vectors with all elements equal to one and to zero, respectively; as usual, $\circ$ denotes function composition (e.g., $(f\circ g)(x) = f(g(x))$), and $\odot$ the component-wise product between vectors ($(\mathbf{u}\odot\mathbf{v})_i = u_i\, v_i$).

II Alternating Direction Method of Multipliers

II-A The Standard ADMM Algorithm

Consider an unconstrained optimization problem

$\min_{\mathbf{z}\in\mathbb{R}^d} \; f_1(\mathbf{z}) + f_2(\mathbf{G}\mathbf{z}),$   (1)

where $f_1:\mathbb{R}^d\rightarrow\bar{\mathbb{R}}$ and $f_2:\mathbb{R}^p\rightarrow\bar{\mathbb{R}}$ are convex functions, and $\mathbf{G}\in\mathbb{R}^{p\times d}$ is a matrix. The ADMM algorithm for this problem (which can be derived from different perspectives, namely augmented Lagrangian and Douglas-Rachford splitting) is presented in Algorithm 1; for a recent comprehensive review, see [9]. Convergence of ADMM was shown in [19], where the following theorem was proved.

Theorem 1.

Consider problem (1), where $\mathbf{G}$ has full column rank and $f_1$ and $f_2$ are closed, proper, convex functions. Consider arbitrary $\mu > 0$ and $\mathbf{u}_0$, $\mathbf{d}_0$. Let $\{\eta_k \geq 0,\; k=0,1,\dots\}$ and $\{\nu_k \geq 0,\; k=0,1,\dots\}$ be two sequences such that $\sum_{k=0}^{\infty}\eta_k < \infty$ and $\sum_{k=0}^{\infty}\nu_k < \infty$. Consider three sequences $\{\mathbf{z}_k\}$, $\{\mathbf{u}_k\}$, and $\{\mathbf{d}_k\}$, for $k=0,1,\dots$, satisfying

$\eta_k \geq \bigl\|\mathbf{z}_{k+1} - \arg\min_{\mathbf{z}}\ f_1(\mathbf{z}) + \tfrac{\mu}{2}\|\mathbf{G}\mathbf{z} - \mathbf{u}_k - \mathbf{d}_k\|_2^2\bigr\|$
$\nu_k \geq \bigl\|\mathbf{u}_{k+1} - \arg\min_{\mathbf{u}}\ f_2(\mathbf{u}) + \tfrac{\mu}{2}\|\mathbf{G}\mathbf{z}_{k+1} - \mathbf{u} - \mathbf{d}_k\|_2^2\bigr\|$
$\mathbf{d}_{k+1} = \mathbf{d}_k - (\mathbf{G}\mathbf{z}_{k+1} - \mathbf{u}_{k+1}).$

Then, if (1) has a solution, say $\mathbf{z}^\ast$, the sequence $\{\mathbf{z}_k\}$ converges to $\mathbf{z}^\ast$. If (1) does not have a solution, at least one of the sequences $\{\mathbf{u}_k\}$ or $\{\mathbf{d}_k\}$ diverges.

1: Set $k = 0$; choose $\mu > 0$, $\mathbf{u}_0$, and $\mathbf{d}_0$.
2: repeat
3:   $\mathbf{z}_{k+1} \leftarrow \arg\min_{\mathbf{z}}\ f_1(\mathbf{z}) + \frac{\mu}{2}\|\mathbf{G}\mathbf{z} - \mathbf{u}_k - \mathbf{d}_k\|_2^2$
4:   $\mathbf{u}_{k+1} \leftarrow \arg\min_{\mathbf{u}}\ f_2(\mathbf{u}) + \frac{\mu}{2}\|\mathbf{G}\mathbf{z}_{k+1} - \mathbf{u} - \mathbf{d}_k\|_2^2$
5:   $\mathbf{d}_{k+1} \leftarrow \mathbf{d}_k - (\mathbf{G}\mathbf{z}_{k+1} - \mathbf{u}_{k+1})$
6:   $k \leftarrow k+1$
7: until stopping criterion is satisfied
Algorithm 1 ADMM

There are more recent results on ADMM, namely establishing linear convergence [17]; however, those results make stronger assumptions (such as strong convexity) which are not generally satisfied in deconvolution problems. Under the conditions of Theorem 1, convergence holds for any $\mu > 0$; however, the choice of $\mu$ may strongly affect the convergence speed [9]. It is also possible to replace the scalar $\mu$ by a positive diagonal matrix $\boldsymbol{\Lambda}$, i.e., to replace the quadratic terms of the form $\frac{\mu}{2}\|\mathbf{G}\mathbf{z} - \mathbf{u} - \mathbf{d}\|_2^2$ by $\frac{1}{2}(\mathbf{G}\mathbf{z} - \mathbf{u} - \mathbf{d})^H\boldsymbol{\Lambda}(\mathbf{G}\mathbf{z} - \mathbf{u} - \mathbf{d})$.
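As an illustration of Algorithm 1, the following sketch (ours, not the paper's code) instantiates it for a toy instance of (1): $f_1(\mathbf{z}) = \frac{1}{2}\|\mathbf{z} - \mathbf{a}\|_2^2$, $f_2(\mathbf{u}) = \tau\|\mathbf{u}\|_1$, and $\mathbf{G}$ a 1D finite-difference matrix, i.e., 1D total-variation denoising. The test signal, $\tau$, and $\mu$ are arbitrary choices.

```python
# ADMM (Algorithm 1) for 1D TV denoising: min_z 0.5*||z-a||^2 + tau*||Gz||_1.
import numpy as np

def admm_tv1d(a, tau=1.0, mu=1.0, n_iter=200):
    n = a.size
    G = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]        # first-order differences
    u = np.zeros(n - 1)
    d = np.zeros(n - 1)
    Q = np.eye(n) + mu * G.T @ G                            # Hessian of the z-subproblem
    for _ in range(n_iter):
        z = np.linalg.solve(Q, a + mu * G.T @ (u + d))      # line 3: z-update
        v = G @ z - d
        u = np.sign(v) * np.maximum(np.abs(v) - tau / mu, 0.0)  # line 4: soft threshold
        d = d - (G @ z - u)                                 # line 5: dual update
    return z

# Usage: denoise a noisy piecewise-constant signal.
rng = np.random.default_rng(1)
a = np.repeat([0.0, 1.0, -0.5], 50) + 0.1 * rng.standard_normal(150)
print(np.round(admm_tv1d(a, tau=0.5)[:5], 3))
```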

II-B The ADMM for Two or More Functions

Following the formulation proposed in [3, 23], consider a generalization of (1) with $J$ functions,

$\min_{\mathbf{z}\in\mathbb{R}^d} \; \sum_{j=1}^{J} g_j\bigl(\mathbf{H}^{(j)}\mathbf{z}\bigr),$   (2)

where $g_j:\mathbb{R}^{p_j}\rightarrow\bar{\mathbb{R}}$ are proper, closed, convex functions, and $\mathbf{H}^{(j)}\in\mathbb{R}^{p_j\times d}$ are arbitrary matrices. We map this problem into the form (1) as follows: let $f_1 = 0$; define matrix $\mathbf{G}$ as

$\mathbf{G} = \begin{bmatrix}\mathbf{H}^{(1)}\\ \vdots\\ \mathbf{H}^{(J)}\end{bmatrix} \in \mathbb{R}^{p\times d},$   (3)

where $p = p_1 + \cdots + p_J$, and let $f_2:\mathbb{R}^p\rightarrow\bar{\mathbb{R}}$ be defined as

$f_2(\mathbf{u}) = \sum_{j=1}^{J} g_j\bigl(\mathbf{u}^{(j)}\bigr),$   (4)

where each $\mathbf{u}^{(j)}\in\mathbb{R}^{p_j}$ is a $p_j$-dimensional sub-vector of $\mathbf{u}$, that is, $\mathbf{u} = \bigl[(\mathbf{u}^{(1)})^H, \dots, (\mathbf{u}^{(J)})^H\bigr]^H$.

The definitions in the previous paragraph lead to an ADMM for problem (2) with the following two key features.

  1. The fact that $f_1 = 0$ turns line 3 of Algorithm 1 into an unconstrained quadratic problem. Letting $\boldsymbol{\Lambda}$ be a $p\times p$ positive block-diagonal matrix (associating a possibly different parameter $\mu_j$ to each function $g_j$),

    $\boldsymbol{\Lambda} = \mathrm{blockdiag}\bigl\{\mu_1\mathbf{I}_{p_1},\dots,\mu_J\mathbf{I}_{p_J}\bigr\},$   (5)

    the solution of this quadratic problem is given by (denoting $\boldsymbol{\zeta}_k = \mathbf{u}_k + \mathbf{d}_k$)

    $\mathbf{z}_{k+1} = \bigl[\mathbf{G}^H\boldsymbol{\Lambda}\,\mathbf{G}\bigr]^{-1}\mathbf{G}^H\boldsymbol{\Lambda}\,\boldsymbol{\zeta}_k = \Bigl[\sum_{j=1}^{J}\mu_j\,(\mathbf{H}^{(j)})^H\mathbf{H}^{(j)}\Bigr]^{-1}\sum_{j=1}^{J}\mu_j\,(\mathbf{H}^{(j)})^H\boldsymbol{\zeta}_k^{(j)},$   (6)

    where $\boldsymbol{\zeta}_k^{(j)} = \mathbf{u}_k^{(j)} + \mathbf{d}_k^{(j)}$, and $\mathbf{u}_k^{(j)}$, $\mathbf{d}_k^{(j)}$, and $\boldsymbol{\zeta}_k^{(j)}$, for $j=1,\dots,J$, are the sub-vectors of $\mathbf{u}_k$, $\mathbf{d}_k$, and $\boldsymbol{\zeta}_k$, respectively, corresponding to the partition in (3); the second equality results from the block structure of matrices $\mathbf{G}$ (in (3)) and $\boldsymbol{\Lambda}$ (in (5)).

  2. The separable structure of $f_2$ (as defined in (4)) allows decoupling the minimization in line 4 of Algorithm 1 into $J$ independent minimizations, each of the form

    $\mathbf{u}_{k+1}^{(j)} = \arg\min_{\mathbf{v}\in\mathbb{R}^{p_j}}\; g_j(\mathbf{v}) + \frac{\mu_j}{2}\bigl\|\mathbf{v} - \mathbf{s}^{(j)}\bigr\|_2^2,$   (7)

    for $j=1,\dots,J$, where $\mathbf{s}^{(j)} = \mathbf{H}^{(j)}\mathbf{z}_{k+1} - \mathbf{d}_k^{(j)}$. This minimization defines the so-called Moreau proximity operator of $g_j/\mu_j$ (denoted $\mathrm{prox}_{g_j/\mu_j}$) (see [14, 15] and references therein) applied to $\mathbf{s}^{(j)}$, thus $\mathbf{u}_{k+1}^{(j)} = \mathrm{prox}_{g_j/\mu_j}\bigl(\mathbf{s}^{(j)}\bigr)$.

    For some functions, the Moreau proximity operators are known in closed form [14]; a well-known case is the $\ell_1$ norm ($g_j(\mathbf{v}) = \tau\|\mathbf{v}\|_1$), for which the proximity operator is the soft-threshold function (a code sketch is given after this list):

    $\mathrm{soft}(\mathbf{v},\tau) = \mathrm{sign}(\mathbf{v})\odot\max\bigl(|\mathbf{v}| - \tau,\, 0\bigr),$   (8)

    where the sign, max, and absolute value functions are applied in a component-wise fashion.
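The following minimal numpy sketch (ours, not the paper's code) implements the soft-threshold function (8), together with a brute-force grid check that it indeed solves a scalar instance of the proximity problem (7); the grid search is only a verification device, not part of any algorithm.

```python
# Soft threshold: the proximity operator of tau*||.||_1.
import numpy as np

def soft(v, tau):
    """Component-wise soft threshold, eq. (8)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Check against a brute-force minimisation of  tau*|u| + 0.5*(u - s)^2  on a grid.
s, tau = 1.3, 0.5
grid = np.linspace(-5, 5, 200001)
u_star = grid[np.argmin(tau * np.abs(grid) + 0.5 * (grid - s) ** 2)]
print(soft(np.array([s]), tau), u_star)   # both approximately 0.8
```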

The convergence of the resulting instance of ADMM (shown in Algorithm 2) is the subject of the following proposition.

Proposition 1.

Consider problem (2), where $g_1,\dots,g_J$ are proper, closed, convex functions and $\mathbf{H}^{(1)},\dots,\mathbf{H}^{(J)}$ are arbitrary matrices. Consider positive constants $\mu_1,\dots,\mu_J$ and arbitrary $\mathbf{u}_0^{(1)},\dots,\mathbf{u}_0^{(J)}$ and $\mathbf{d}_0^{(1)},\dots,\mathbf{d}_0^{(J)}$. If matrix $\mathbf{G}$ in (3) has full column rank and (2) has a solution, say $\mathbf{z}^\ast$, then the sequence $\{\mathbf{z}_k\}$ generated by Algorithm 2 converges to $\mathbf{z}^\ast$.

Proof.

We simply need to show that the conditions of Theorem 1 are satisfied. Problem (2) has the form (1), with $f_1 = 0$ and $f_2$ as given by (4); if all the functions $g_j$ are closed, proper, and convex, so are $f_1$ and $f_2$. Furthermore, since $\boldsymbol{\Lambda}$ is positive definite, a sufficient condition for the inverse in (6) to exist is that $\mathbf{G}$ have full column rank. Finally, the minimization in line 3 of Algorithm 1 is solved exactly in line 4 of Algorithm 2, and the minimization in line 4 of Algorithm 1 is solved exactly in line 6 of Algorithm 2; thus, we can take the sequences $\eta_k$ and $\nu_k$ in Theorem 1 to be identically zero. ∎

1: Set $k = 0$; choose $\mu_1,\dots,\mu_J > 0$, $\mathbf{u}_0^{(1)},\dots,\mathbf{u}_0^{(J)}$, $\mathbf{d}_0^{(1)},\dots,\mathbf{d}_0^{(J)}$.
2: repeat
3:   $\mathbf{r} \leftarrow \sum_{j=1}^{J}\mu_j\,(\mathbf{H}^{(j)})^H\bigl(\mathbf{u}_k^{(j)} + \mathbf{d}_k^{(j)}\bigr)$
4:   $\mathbf{z}_{k+1} \leftarrow \Bigl[\sum_{j=1}^{J}\mu_j\,(\mathbf{H}^{(j)})^H\mathbf{H}^{(j)}\Bigr]^{-1}\mathbf{r}$
5:   for $j = 1$ to $J$ do
6:     $\mathbf{u}_{k+1}^{(j)} \leftarrow \mathrm{prox}_{g_j/\mu_j}\bigl(\mathbf{H}^{(j)}\mathbf{z}_{k+1} - \mathbf{d}_k^{(j)}\bigr)$
7:     $\mathbf{d}_{k+1}^{(j)} \leftarrow \mathbf{d}_k^{(j)} - \bigl(\mathbf{H}^{(j)}\mathbf{z}_{k+1} - \mathbf{u}_{k+1}^{(j)}\bigr)$
8:   end for
9:   $k \leftarrow k+1$
10: until stopping criterion is satisfied
Algorithm 2 ADMM-2

If the proximity operators of the functions are simple, the computational bottleneck of Algorithm 2 is the matrix inversion in line 4. As shown in [39, 41], inversions of this form can be efficiently tackled using the FFT, if all the matrices are circulant, thus jointly diagonalized by the DFT (more details in Section III). Due to this condition, the work in [39, 41] was limited to deconvolution with periodic BCs, under TV regularization. More recent work in [2, 3, 23] has shown how these inversions can still be efficiently handled via the FFT (and other fast transforms) in problems such as image reconstruction from partial Fourier observations and inpainting, and with other regularizers, such as those based on tight frames. This paper extends that work, showing how to handle deconvolution problems with unknown boundaries.

III Image Deconvolution with Periodic BC

This section reviews the ADMM-based approach to image deconvolution with periodic BC, using frame-based (synthesis and analysis) and TV-based regularization [2, 3, 23, 36, 39], the standard regularizers for this class of imaging inverse problems. The next section then shows how this approach can be extended to deal with unknown boundaries.

III-A Frame-Based Synthesis Formulation

There are two standard ways to formulate frame-based regularization for image deconvolution, both exploiting the fact that natural images have sparse representations on wavelet frames (we use the generic term "wavelet" for any wavelet-like multiscale representation, such as "curvelets," "beamlets," or "ridgelets" [28]): the synthesis and analysis formulations [2, 20, 37].

The synthesis formulation expresses the estimated image as a linear combination of the elements of some wavelet frame (an orthogonal basis or an overcomplete dictionary), i.e., $\hat{\mathbf{x}} = \mathbf{W}\hat{\boldsymbol{\beta}}$, with $\hat{\boldsymbol{\beta}}$ given by

$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}}\; \frac{1}{2}\bigl\|\mathbf{A}\mathbf{W}\boldsymbol{\beta} - \mathbf{y}\bigr\|_2^2 + \tau\,\phi(\boldsymbol{\beta}),$   (9)

where $\mathbf{y}$ is a vector containing all the pixels (in lexicographic order) of the observed image, $\mathbf{A}$ is a matrix representation of the (periodic) convolution, the columns of matrix $\mathbf{W}$ are the elements of the adopted frame, $\phi$ is a regularizer encouraging $\boldsymbol{\beta}$ to be sparse (a typical choice, herein adopted, is $\phi(\boldsymbol{\beta}) = \|\boldsymbol{\beta}\|_1$), and $\tau > 0$ is the regularization parameter [2, 20, 24]. The first term in (9) results from assuming the usual observation model

$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n},$   (10)

where $\mathbf{x}$ is the original (unknown) image and $\mathbf{n}$ is a sample of white Gaussian noise with unit variance (any other value may be absorbed by $\tau$).

Clearly, problem (9) has the form (2), with $J = 2$ and

$g_1(\mathbf{u}) = \tfrac{1}{2}\|\mathbf{u} - \mathbf{y}\|_2^2,$   (11)
$g_2(\mathbf{u}) = \tau\|\mathbf{u}\|_1,$   (12)
$\mathbf{H}^{(1)} = \mathbf{A}\mathbf{W},$   (13)
$\mathbf{H}^{(2)} = \mathbf{I}.$   (14)

The proximity operators of $g_1/\mu_1$ and $g_2/\mu_2$, key components of Algorithm 2 for solving (9), have simple expressions,

$\mathrm{prox}_{g_1/\mu_1}(\mathbf{v}) = \dfrac{\mathbf{y} + \mu_1\mathbf{v}}{1 + \mu_1},$   (15)
$\mathrm{prox}_{g_2/\mu_2}(\mathbf{v}) = \mathrm{soft}\bigl(\mathbf{v},\, \tau/\mu_2\bigr),$   (16)

where "soft" denotes the soft-threshold function in (8). Line 4 of Algorithm 2 (the other key component) has the form

$\mathbf{z}_{k+1} = \bigl[\mu_1\mathbf{W}^H\mathbf{A}^H\mathbf{A}\mathbf{W} + \mu_2\mathbf{I}\bigr]^{-1}\bigl[\mu_1\mathbf{W}^H\mathbf{A}^H\boldsymbol{\zeta}_k^{(1)} + \mu_2\,\boldsymbol{\zeta}_k^{(2)}\bigr].$   (17)

As shown in [2], if $\mathbf{W}$ corresponds to a Parseval frame [28], i.e., if $\mathbf{W}\mathbf{W}^H = \mathbf{I}$ (although possibly $\mathbf{W}^H\mathbf{W}\neq\mathbf{I}$) -- in fact, the frame only needs to be tight [28], i.e., $\mathbf{W}\mathbf{W}^H = \alpha\,\mathbf{I}$; without loss of generality, we assume $\alpha = 1$, which yields lighter notation (recently, a method for relaxing the tight-frame condition was proposed in [33]) -- the Sherman-Morrison-Woodbury inversion formula can be used to write the matrix inverse in (17) as

$\bigl[\mu_1\mathbf{W}^H\mathbf{A}^H\mathbf{A}\mathbf{W} + \mu_2\mathbf{I}\bigr]^{-1} = \frac{1}{\mu_2}\Bigl[\mathbf{I} - \mathbf{W}^H\mathbf{A}^H\bigl(\tfrac{\mu_2}{\mu_1}\mathbf{I} + \mathbf{A}\mathbf{A}^H\bigr)^{-1}\mathbf{A}\mathbf{W}\Bigr].$   (18)

Assuming a periodic BC, matrix $\mathbf{A}$ is circulant, thus it factors into

$\mathbf{A} = \mathbf{U}^H\mathbf{D}\,\mathbf{U},$   (19)

where $\mathbf{U}$ and $\mathbf{U}^H$ are the unitary matrices representing the DFT and its inverse, and $\mathbf{D}$ is the diagonal matrix of the DFT coefficients of the convolution kernel. Using (19), the matrix inverse in (18) can be written as

$\frac{1}{\mu_2}\Bigl[\mathbf{I} - \mathbf{W}^H\mathbf{U}^H\mathbf{D}^H\bigl(\tfrac{\mu_2}{\mu_1}\mathbf{I} + |\mathbf{D}|^2\bigr)^{-1}\mathbf{D}\,\mathbf{U}\,\mathbf{W}\Bigr],$   (20)

where $|\mathbf{D}|^2$ denotes the diagonal matrix of the squared absolute values of the entries of $\mathbf{D}$. Since the product $\mathbf{D}^H\bigl(\tfrac{\mu_2}{\mu_1}\mathbf{I} + |\mathbf{D}|^2\bigr)^{-1}\mathbf{D}$ involves only diagonal matrices, it can be computed with $O(n)$ cost, while the products by $\mathbf{U}$ and $\mathbf{U}^H$ (DFT and its inverse) can be computed with $O(n\log n)$ cost, using the FFT; thus, multiplying by the matrix in (20) has $O(n\log n)$ cost.

The leading cost of each application of (17) (line 4 of Algorithm 2) is either the $O(n\log n)$ cost associated with the FFTs or the cost of the products by $\mathbf{W}$ and $\mathbf{W}^H$. For most tight frames used in image restoration (e.g., undecimated wavelet transforms, curvelets, complex wavelets), these products correspond to direct and inverse transforms, for which fast algorithms exist [28]. In conclusion, under periodic BC and for a large class of frames, each iteration of Algorithm 2 for solving (9) has $O(n\log n)$ cost. Finally, convergence of this instance of Algorithm 2 is established in the following proposition.
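The following hedged numpy sketch illustrates how the inverse in (18)–(20) can be applied: the Sherman-Morrison-Woodbury identity reduces it to FFT-domain divisions, here using an orthonormal 2D DCT as a stand-in for the Parseval frame $\mathbf{W}$ (any transform with $\mathbf{W}\mathbf{W}^H = \mathbf{I}$ would do). The function names, blur, and parameter values are illustrative, not the paper's code.

```python
# Applying (mu2*I + mu1*W^H A^H A W)^{-1} via SMW + FFT, with an orthonormal DCT as W.
import numpy as np
from scipy.fft import dctn, idctn, fft2, ifft2

n = 64
rng = np.random.default_rng(2)
h = np.zeros((n, n)); h[:9, :9] = 1.0 / 81.0
h = np.roll(h, (-4, -4), axis=(0, 1))
D = fft2(h)                                    # DFT of the (periodic) blur kernel
mu1, mu2 = 1.0, 0.5

A  = lambda x: np.real(ifft2(D * fft2(x)))             # A x   (circulant convolution)
AH = lambda x: np.real(ifft2(np.conj(D) * fft2(x)))    # A^H x
W  = lambda b: idctn(b, norm='ortho')                  # synthesis operator W
WH = lambda x: dctn(x, norm='ortho')                   # analysis operator W^H

def apply_inv(r):
    """Compute (mu2*I + mu1*W^H A^H A W)^{-1} r via SMW + FFT, as in (18)-(20)."""
    t = A(W(r))
    t = np.real(ifft2(fft2(t) / (mu2 / mu1 + np.abs(D) ** 2)))  # ((mu2/mu1)I + A A^H)^{-1} t
    return (r - WH(AH(t))) / mu2

# Sanity check: the forward operator applied to apply_inv(r) recovers r.
r = rng.random((n, n))
forward = lambda v: mu2 * v + mu1 * WH(AH(A(W(v))))
print(np.allclose(forward(apply_inv(r)), r))   # True
```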

Proposition 2.

The instance of Algorithm 2, with the definitions in (11)–(14), with line 4 computed as in (17), and the proximity operators in line 6 as given in (15) and (16), converges to a solution of (9).

Proof.

Since $g_1$ and $g_2$ are proper, closed, convex functions, so is the objective function in (9). Because $\tau\|\cdot\|_1$ is coercive, so is the objective function in (9), thus its set of minimizers is not empty [15]. Matrix $\mathbf{H}^{(2)} = \mathbf{I}$ obviously has full column rank, which implies that $\mathbf{G} = \bigl[(\mathbf{H}^{(1)})^H\;(\mathbf{H}^{(2)})^H\bigr]^H$ also has full column rank; that fact, combined with the fact that $g_1$ and $g_2$ are proper, closed, convex functions, and that a minimizer exists, allows invoking Proposition 1, which concludes the proof. ∎

III-B Frame-Based Analysis Formulation

In the frame-based analysis approach, the problem is formulated directly with respect to the unknown image (rather than its synthesis coefficients),

$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}\; \frac{1}{2}\|\mathbf{A}\mathbf{x} - \mathbf{y}\|_2^2 + \tau\|\mathbf{P}\mathbf{x}\|_1,$   (21)

where $\mathbf{A}$ is as in (9) and $\mathbf{P}$ is the analysis operator of a Parseval frame, i.e., satisfying $\mathbf{P}^H\mathbf{P} = \mathbf{I}$ (although possibly $\mathbf{P}\mathbf{P}^H\neq\mathbf{I}$, unless the frame is an orthonormal basis) [28]. (As above, the frame is only required to be tight, i.e., $\mathbf{P}^H\mathbf{P} = \alpha\,\mathbf{I}$; since $\alpha = 1$ can be obtained simply by normalization, there is no loss of generality.) Problem (21) has the form (2), with $J = 2$, with $g_1$ and $g_2$ as given in (11) and (12), respectively, and

$\mathbf{H}^{(1)} = \mathbf{A},$   (22)
$\mathbf{H}^{(2)} = \mathbf{P}.$   (23)

Since the proximity operators of $g_1/\mu_1$ and $g_2/\mu_2$ are the same as in the synthesis case (see (15) and (16)), only line 4 of Algorithm 2 has a different form:

$\mathbf{z}_{k+1} = \bigl[\mu_1\mathbf{A}^H\mathbf{A} + \mu_2\mathbf{P}^H\mathbf{P}\bigr]^{-1}\bigl[\mu_1\mathbf{A}^H\boldsymbol{\zeta}_k^{(1)} + \mu_2\,\mathbf{P}^H\boldsymbol{\zeta}_k^{(2)}\bigr].$   (24)

Since $\mathbf{P}^H\mathbf{P} = \mathbf{I}$ and $\mathbf{A} = \mathbf{U}^H\mathbf{D}\,\mathbf{U}$, the matrix inverse is simply

$\bigl[\mu_1\mathbf{A}^H\mathbf{A} + \mu_2\mathbf{I}\bigr]^{-1} = \mathbf{U}^H\bigl(\mu_1|\mathbf{D}|^2 + \mu_2\mathbf{I}\bigr)^{-1}\mathbf{U},$   (25)

which has $O(n\log n)$ cost, since $\mu_1|\mathbf{D}|^2 + \mu_2\mathbf{I}$ is a diagonal matrix and the products by $\mathbf{U}$ and $\mathbf{U}^H$ (the DFT and its inverse) can be computed using the FFT.

In conclusion, under periodic BC and for a large class of frames, each iteration of Algorithm 2 for solving (21) has $O(n\log n)$ cost. Finally, convergence of this instance of Algorithm 2 is claimed in the following proposition.
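In the analysis case, applying the inverse in (25) is a single FFT-domain division; a short sketch (ours, with assumed names and an arbitrary test blur) follows, together with a sanity check against the forward operator.

```python
# Applying (mu1*A^H A + mu2*I)^{-1} in the DFT domain, as in (25).
import numpy as np

def apply_inv_analysis(r, D, mu1, mu2):
    """Apply (mu1*A^H A + mu2*I)^{-1} to r, where D = fft2(blur kernel)."""
    return np.real(np.fft.ifft2(np.fft.fft2(r) / (mu1 * np.abs(D) ** 2 + mu2)))

# Sanity check against the forward operator mu1*A^H A + mu2*I.
n = 32
h = np.zeros((n, n)); h[0, :3] = 1 / 3.0          # small horizontal blur (assumed)
D = np.fft.fft2(h)
r = np.random.default_rng(3).random((n, n))
z = apply_inv_analysis(r, D, 2.0, 0.7)
back = 2.0 * np.real(np.fft.ifft2(np.abs(D) ** 2 * np.fft.fft2(z))) + 0.7 * z
print(np.allclose(back, r))                        # True
```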

Proposition 3.

The instance of Algorithm 2, with the definitions in (11), (12), (22), and (23), with line 4 computed as in (24)–(25), and the proximity operators in line 6 as given in (15) and (16), converges to a solution of (21).

Proof.

Since $g_1$ and $g_2$ are proper, closed, convex functions, so is the objective function in (21). Because $\tau\|\cdot\|_1$ is coercive and $\mathbf{P}$ is the analysis operator of a tight frame, $\mathcal{N}(\mathbf{P}) = \{\mathbf{0}\}$ (where $\mathcal{N}(\mathbf{P})$ is the null space of matrix $\mathbf{P}$), the objective function in (21) is coercive, thus its set of minimizers is not empty [15]. Matrix $\mathbf{H}^{(2)} = \mathbf{P}$ has full column rank, thus the same is true of $\mathbf{G} = \bigl[\mathbf{A}^H\;\mathbf{P}^H\bigr]^H$; combining that with $g_1$ and $g_2$ being proper, closed, and convex, and given the existence of a minimizer, allows invoking Proposition 1 to conclude the proof. ∎

III-C Total Variation

The classical formulation of TV-based image deconvolution is the convex optimization problem

$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}\; \frac{1}{2}\|\mathbf{A}\mathbf{x} - \mathbf{y}\|_2^2 + \tau\sum_{i=1}^{n}\bigl\|\mathbf{D}_i\,\mathbf{x}\bigr\|_2,$   (26)

where $\mathbf{A}$, $\mathbf{y}$, and $\tau$ have the same meanings as above, and each $\mathbf{D}_i$ is the $2\times n$ matrix that computes the horizontal and vertical differences at pixel $i$ (also with periodic BC) [39, 41]. The sum $\sum_{i=1}^{n}\|\mathbf{D}_i\,\mathbf{x}\|_2$ defines the total variation (TV) of image $\mathbf{x}$ (in fact, this is the so-called isotropic discrete approximation of the continuous total variation [11, 36]).

Problem (26) has the form (2), with $J = n + 1$ and

$g_1(\mathbf{u}) = \tfrac{1}{2}\|\mathbf{u} - \mathbf{y}\|_2^2,$   (27)
$g_{1+i}(\mathbf{u}) = \tau\|\mathbf{u}\|_2,\quad i = 1,\dots,n,$   (28)
$\mathbf{H}^{(1)} = \mathbf{A},$   (29)
$\mathbf{H}^{(1+i)} = \mathbf{D}_i,\quad i = 1,\dots,n.$   (30)

For matrix $\boldsymbol{\Lambda}$ (see (5)), we adopt a single parameter $\mu_2$ for all the TV terms, i.e., $\mu_2 = \mu_3 = \cdots = \mu_{J}$. The main steps of the resulting instance of Algorithm 2 for solving (26) are as follows.

  • As shown in [39, 41], the matrix to be inverted in line 4 of Algorithm 2 can be written as

    $\mu_1\mathbf{A}^H\mathbf{A} + \mu_2\bigl(\mathbf{D}_h^H\mathbf{D}_h + \mathbf{D}_v^H\mathbf{D}_v\bigr),$   (31)

    where $\mathbf{D}_h$ and $\mathbf{D}_v$ are the matrices collecting the first and second rows, respectively, of each of the matrices $\mathbf{D}_i$; that is, $\mathbf{D}_h$ computes all the horizontal differences and $\mathbf{D}_v$ all the vertical differences. With periodic BC, the matrices in (31) can be factorized as

    $\mathbf{A} = \mathbf{U}^H\mathbf{D}\,\mathbf{U},\qquad \mathbf{D}_h = \mathbf{U}^H\boldsymbol{\Delta}_h\mathbf{U},\qquad \mathbf{D}_v = \mathbf{U}^H\boldsymbol{\Delta}_v\mathbf{U},$   (32)

    where $\mathbf{D}$, $\boldsymbol{\Delta}_h$, and $\boldsymbol{\Delta}_v$ are diagonal matrices. Thus, the inverse of the matrix in (31) can be written as

    $\mathbf{U}^H\bigl[\mu_1|\mathbf{D}|^2 + \mu_2\bigl(|\boldsymbol{\Delta}_h|^2 + |\boldsymbol{\Delta}_v|^2\bigr)\bigr]^{-1}\mathbf{U},$   (33)

    which involves a diagonal inversion, with $O(n)$ cost, and the products by $\mathbf{U}$ and $\mathbf{U}^H$ (the direct and inverse DFT), which have $O(n\log n)$ cost (by FFT).

  • The vector to which this inverse is applied (line 4 of Algorithm 2) is

    $\mu_1\mathbf{A}^H\boldsymbol{\zeta}^{(1)} + \mu_2\bigl(\mathbf{D}_h^H\boldsymbol{\zeta}_h + \mathbf{D}_v^H\boldsymbol{\zeta}_v\bigr),$

    where $\boldsymbol{\zeta}_h$ and $\boldsymbol{\zeta}_v$ contain the first and second components, respectively, of all the vectors $\boldsymbol{\zeta}^{(1+i)} = \mathbf{u}^{(1+i)} + \mathbf{d}^{(1+i)}$, for $i=1,\dots,n$. As seen above, the product by $\mathbf{A}^H$ has $O(n\log n)$ cost via the FFT, while the products by $\mathbf{D}_h^H$ and $\mathbf{D}_v^H$ (which correspond to local differences) have $O(n)$ cost.

  • The proximity operator of $g_1/\mu_1$ is as given in (15).

  • The proximity operator of $g_{1+i}/\mu_2$, for $i=1,\dots,n$, is the so-called vector-soft function [39, 41],

    $\mathrm{vector\text{-}soft}(\mathbf{v},\,\tau/\mu_2) = \dfrac{\mathbf{v}}{\|\mathbf{v}\|_2}\,\max\bigl(\|\mathbf{v}\|_2 - \tau/\mu_2,\; 0\bigr),$

    with the convention that $\mathbf{0}/\|\mathbf{0}\|_2 = \mathbf{0}$ (a numpy sketch is given after this list).
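A minimal sketch (ours, not the paper's code) of the vector-soft function, operating simultaneously on the arrays of horizontal and vertical differences; the 0/0 convention is handled with a small epsilon.

```python
# Vector-soft: the proximity operator of tau*||.||_2 applied to each 2-vector (d_h, d_v).
import numpy as np

def vector_soft(dh, dv, tau):
    norm = np.sqrt(dh ** 2 + dv ** 2)
    scale = np.maximum(norm - tau, 0.0) / np.maximum(norm, 1e-16)  # 0/0 -> 0
    return scale * dh, scale * dv

# At a pixel with differences (3, 4) (norm 5) and tau = 1, the result is (2.4, 3.2).
print(vector_soft(np.array([3.0]), np.array([4.0]), 1.0))
```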

Plugging all these equalities into Algorithm 2 yields Algorithm 3, which is similar to the one presented in [39].

1: Set $k = 0$; choose $\mu_1, \mu_2 > 0$, $\mathbf{u}_0^{(1)}$, $\mathbf{u}_0^{(2)}$, $\mathbf{d}_0^{(1)}$, $\mathbf{d}_0^{(2)}$.
2: repeat
3:   $\boldsymbol{\zeta}^{(1)} \leftarrow \mathbf{u}_k^{(1)} + \mathbf{d}_k^{(1)}$
4:   $\boldsymbol{\zeta}_h \leftarrow \mathbf{u}_{k,h}^{(2)} + \mathbf{d}_{k,h}^{(2)}$
5:   $\boldsymbol{\zeta}_v \leftarrow \mathbf{u}_{k,v}^{(2)} + \mathbf{d}_{k,v}^{(2)}$
6:   $\mathbf{z}_{k+1} \leftarrow \mathbf{U}^H\bigl[\mu_1|\mathbf{D}|^2 + \mu_2(|\boldsymbol{\Delta}_h|^2 + |\boldsymbol{\Delta}_v|^2)\bigr]^{-1}\mathbf{U}\bigl[\mu_1\mathbf{A}^H\boldsymbol{\zeta}^{(1)} + \mu_2(\mathbf{D}_h^H\boldsymbol{\zeta}_h + \mathbf{D}_v^H\boldsymbol{\zeta}_v)\bigr]$
7:   $\mathbf{u}_{k+1}^{(1)} \leftarrow \bigl(\mathbf{y} + \mu_1(\mathbf{A}\mathbf{z}_{k+1} - \mathbf{d}_k^{(1)})\bigr)/(1 + \mu_1)$
8:   $\mathbf{d}_{k+1}^{(1)} \leftarrow \mathbf{d}_k^{(1)} - \bigl(\mathbf{A}\mathbf{z}_{k+1} - \mathbf{u}_{k+1}^{(1)}\bigr)$
9:   for $i = 1$ to $n$ do
10:    $\mathbf{u}_{k+1}^{(2,i)} \leftarrow \mathrm{vector\text{-}soft}\bigl(\mathbf{D}_i\mathbf{z}_{k+1} - \mathbf{d}_k^{(2,i)},\ \tau/\mu_2\bigr)$
11:    $\mathbf{d}_{k+1}^{(2,i)} \leftarrow \mathbf{d}_k^{(2,i)} - \bigl(\mathbf{D}_i\mathbf{z}_{k+1} - \mathbf{u}_{k+1}^{(2,i)}\bigr)$
12:  end for
13:  $k \leftarrow k+1$
14: until stopping criterion is satisfied
Algorithm 3 ADMM-TV

Finally, convergence of Algorithm 3 is addressed in the following proposition.

Proposition 4.

Consider problem (26) and assume that $\mathbf{A}\mathbf{1} \neq \mathbf{0}$. Then Algorithm 3 converges to a solution of (26).

Proof.

Since (27)–(28) are proper, closed, convex functions, so is the objective function in (26). Coercivity of the objective function in (26), thus existence of minimizer(s), was shown in [12]. Matrix $\mathbf{G}$ can be written as a permutation of the rows of $\bigl[\mathbf{A}^H\;\mathbf{D}_h^H\;\mathbf{D}_v^H\bigr]^H$; since $\mathcal{N}(\mathbf{D}_h)\cap\mathcal{N}(\mathbf{D}_v) = \mathrm{span}(\mathbf{1})$ (the constant images), if $\mathbf{A}\mathbf{1}\neq\mathbf{0}$, then $\mathbf{G}$ has full column rank. Finally, since Algorithm 3 is an instance of Algorithm 2 (with all updates computed exactly), Proposition 1 guarantees its convergence to a solution of (26). ∎

Notice that the condition $\mathbf{A}\mathbf{1}\neq\mathbf{0}$ is very mild, and is clearly satisfied by the convolution with any low-pass-type filter, such as those modeling uniform, motion, Gaussian, circular, and other types of blur. In fact, for these blurring filters, $\mathbf{A}\mathbf{1} = c\,\mathbf{1}$, where $c$ is the DC gain (typically $c = 1$).

IV Deconvolution with Unknown Boundaries

IV-A The Observation Model

To handle images with unknown boundaries, we model the boundary pixels as unobserved, which is achieved by modifying (10) into

$\mathbf{y} = \mathbf{M}\mathbf{A}\mathbf{x} + \mathbf{n},$   (34)

where $\mathbf{M}\in\mathbb{R}^{m\times n}$ (with $m < n$) is a masking matrix, i.e., a matrix whose rows are a subset of the rows of an identity matrix. The role of $\mathbf{M}$ is to observe only the subset of the image domain in which the elements of $\mathbf{A}\mathbf{x}$ do not depend on the boundary pixels; consequently, the BC assumed for the convolution represented by $\mathbf{A}$ is irrelevant, and we may adopt a periodic BC, for computational convenience.

Assuming that $\mathbf{A}$ models the convolution with a blurring filter with a limited support of size $(2l+1)\times(2l+1)$, and that $\mathbf{x}$ and $\mathbf{A}\mathbf{x}$ represent square images of dimension $\sqrt{n}\times\sqrt{n}$, then matrix $\mathbf{M}$, with $m = (\sqrt{n}-2l)^2$, represents the removal of a band of width $l$ of the outermost pixels of the full convolved image $\mathbf{A}\mathbf{x}$.

Problem (34) can be seen as a hybrid of deconvolution and inpainting [13], where the missing pixels constitute the unknown boundary. If $\mathbf{M} = \mathbf{I}$, model (34) reduces to a standard periodic deconvolution problem; conversely, if $\mathbf{A} = \mathbf{I}$, (34) becomes a pure inpainting problem. Moreover, formulation (34) can be used to model problems where not only the boundary, but also other pixels, are missing, as in standard image inpainting.
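The following sketch (ours, with assumed conventions: an odd, centred $b\times b$ kernel) builds the masking operator $\mathbf{M}$ of (34) as a boolean array. Stored this way, $\mathbf{M}^H\mathbf{M}$ is a 0/1 diagonal matrix and $\mathbf{M}^H\mathbf{y}$ is zero-padding, the two facts exploited by the algorithms below.

```python
# Boolean representation of the masking operator M in (34).
import numpy as np

def boundary_mask(n, b, missing=None):
    """True where a pixel is observed; a band of width (b-1)//2 is dropped on each
    side (centred, odd b x b kernel assumed), plus optional extra missing pixels
    (generic inpainting), given as a boolean array of the same shape."""
    l = (b - 1) // 2
    m = np.zeros((n, n), dtype=bool)
    m[l:n - l, l:n - l] = True
    if missing is not None:
        m &= ~missing
    return m

# With this representation, "M y" is y[m], "M^H v" places v back and zero-fills,
# and "M^H M" is simply diag(m).
m = boundary_mask(64, 9)
print(m.sum())   # 56*56 = 3136 observed pixels
```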

The following subsections describe how to handle observation model (34), in the context of the ADMM-based deconvolution algorithms reviewed in the previous section.

IV-B Frame-Based Synthesis Formulation

IV-B1 Mask Decoupling (MD)

Under observation model (34), the frame-based synthesis formulation (9) changes to

$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}}\; \frac{1}{2}\bigl\|\mathbf{M}\mathbf{A}\mathbf{W}\boldsymbol{\beta} - \mathbf{y}\bigr\|_2^2 + \tau\|\boldsymbol{\beta}\|_1.$   (35)

At this point, one could be tempted to map (35) into (2) using the same correspondences as in (11), (12), and (14), and simply changing (13) into

$\mathbf{H}^{(1)} = \mathbf{M}\mathbf{A}\mathbf{W}.$   (36)

The problem with this choice is that the matrix to be inverted in line 4 of Algorithm 2 would become

$\mu_1\mathbf{W}^H\mathbf{A}^H\mathbf{M}^H\mathbf{M}\mathbf{A}\mathbf{W} + \mu_2\mathbf{I},$   (37)

instead of the one in (17). The "trick" used in Subsection III-A to express this inversion in the DFT domain can no longer be used, due to the presence of $\mathbf{M}$, invalidating the corresponding FFT-based implementation of line 4 of Algorithm 2. Clearly, the source of the difficulty is the product $\mathbf{M}\mathbf{A}$, which is the composition of a mask (a spatial point-wise operation) with a circulant matrix (a point-wise operation in the DFT domain); to sidestep this difficulty, we need to decouple these two operations, which is achieved by defining (instead of (11))

$g_1(\mathbf{u}) = \tfrac{1}{2}\|\mathbf{M}\mathbf{u} - \mathbf{y}\|_2^2,$   (38)

while keeping $g_2$, $\mathbf{H}^{(1)} = \mathbf{A}\mathbf{W}$, and $\mathbf{H}^{(2)} = \mathbf{I}$ as in (12)–(14). With this choice, line 4 of Algorithm 2 is still given by (17) (with its efficient FFT-based implementation); the only change is the proximity operator of the new $g_1$,

$\mathrm{prox}_{g_1/\mu_1}(\mathbf{v}) = \arg\min_{\mathbf{u}}\; \tfrac{1}{2}\|\mathbf{M}\mathbf{u} - \mathbf{y}\|_2^2 + \tfrac{\mu_1}{2}\|\mathbf{u} - \mathbf{v}\|_2^2$   (39)
$\qquad\qquad\quad\;\, = \bigl(\mathbf{M}^H\mathbf{M} + \mu_1\mathbf{I}\bigr)^{-1}\bigl(\mathbf{M}^H\mathbf{y} + \mu_1\mathbf{v}\bigr);$   (40)

notice that, due to the special structure of $\mathbf{M}$, matrix $\mathbf{M}^H\mathbf{M} + \mu_1\mathbf{I}$ is diagonal, thus the inversion in (40) has $O(n)$ cost, the same being true of the product $\mathbf{M}^H\mathbf{y}$, which corresponds to extending the observed image $\mathbf{y}$ to the size of $\mathbf{x}$, by creating a boundary of zeros around it. Of course, both $(\mathbf{M}^H\mathbf{M} + \mu_1\mathbf{I})^{-1}$ and $\mathbf{M}^H\mathbf{y}$ can be pre-computed and then used throughout the algorithm, as long as $\mu_1$ is kept constant. We refer to this approach as mask decoupling (MD).
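A hedged numpy sketch of the mask-decoupling proximity operator (39)–(40); since $\mathbf{M}^H\mathbf{M}$ is a 0/1 diagonal matrix, the update is a point-wise division. Variable names are illustrative, not the paper's code.

```python
# Proximity operator of g1(u) = 0.5*||M u - y||^2, cf. (39)-(40).
import numpy as np

def prox_masked_data(v, y_full, m, mu):
    """argmin_u 0.5*||M u - y||^2 + (mu/2)*||u - v||^2, with m the boolean mask and
    y_full the observed data zero-padded to the full image grid (i.e., M^H y)."""
    return (y_full + mu * v) / (m.astype(float) + mu)

# Where the pixel is observed the update averages y and v; elsewhere it returns v.
m = np.array([[True, False]])
print(prox_masked_data(np.array([[2.0, 2.0]]), np.array([[1.0, 0.0]]), m, 1.0))
# [[1.5 2. ]]
```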

In conclusion, the proposed MD-based ADMM algorithm for image deconvolution with unknown boundaries, under frame-based synthesis regularization, is Algorithm 2 with line 4 implemented as in (17) and the proximity operators in line 6 given by (40) and (16). We refer to this algorithm as FS-MD (frame synthesis mask decoupling). As with the periodic BC, the leading cost is $O(n\log n)$ per iteration. Finally, convergence of the FS-MD algorithm is guaranteed by the following proposition (the proof of which is similar to that of Proposition 2).

Proposition 5.

Algorithm FS-MD (i.e., the instance of Algorithm 2 with the definitions in (38) and (12)–(14), with line 4 computed as in (17), and the proximity operators in line 6 as given in (40) and (16)) converges to a solution of (35).

IV-B2 Using the Reeves-Šorel Technique

An alternative to the approach just presented (decoupling the convolution from the masking operator) is to use the method of [35, 40] to tackle the inversion of (37). Following [35], notice that

$\mathbf{I} = \mathbf{Q}\begin{bmatrix}\mathbf{M}\\ \bar{\mathbf{M}}\end{bmatrix},$   (41)

where $\bar{\mathbf{M}}$ contains the rows of the identity that are missing from $\mathbf{M}$ (recall that the rows of $\mathbf{M}$ are a subset of those of an identity matrix) and $\mathbf{Q}$ is a permutation matrix that puts these missing rows in their original positions. Noticing that

$\mathbf{M}^H\mathbf{M} + \bar{\mathbf{M}}^H\bar{\mathbf{M}} = \mathbf{I}$   (42)

($\mathbf{Q}$ is a permutation matrix, thus $\mathbf{Q}^H\mathbf{Q} = \mathbf{I}$), the inverse of (37) can be written (with $\mathbf{C} = \mu_1\mathbf{W}^H\mathbf{A}^H\mathbf{A}\mathbf{W} + \mu_2\mathbf{I}$, the matrix inverted in (17)) as

$\bigl[\mu_1\mathbf{W}^H\mathbf{A}^H\mathbf{M}^H\mathbf{M}\mathbf{A}\mathbf{W} + \mu_2\mathbf{I}\bigr]^{-1} = \bigl[\mathbf{C} - \mu_1\mathbf{B}^H\mathbf{B}\bigr]^{-1} = \mathbf{C}^{-1} + \mu_1\,\mathbf{C}^{-1}\mathbf{B}^H\bigl[\mathbf{I} - \mu_1\mathbf{B}\,\mathbf{C}^{-1}\mathbf{B}^H\bigr]^{-1}\mathbf{B}\,\mathbf{C}^{-1},$   (43)

where the second equality results from using the Sherman-Morrison-Woodbury matrix inversion identity, after defining $\mathbf{B} = \bar{\mathbf{M}}\mathbf{A}\mathbf{W}$. Since $\mathbf{A}$ is circulant, $\mathbf{C}^{-1}$ can be efficiently applied via the FFT, as explained in (17)–(20). The inversion in (43) cannot be computed via the FFT; however, its dimension corresponds to the number of unknown boundary pixels (the number of rows of $\bar{\mathbf{M}}$), usually much smaller than the image itself. As in [35, 40], we use the CG algorithm to solve the corresponding system; we confirmed experimentally that (as in [40]) taking only one CG iteration (initialized with the estimate from the previous outer iteration) yields the fastest convergence, without degrading the final result. Thus, in our experiments, we use a single CG iteration per outer iteration.

Approximately solving line 4 of Algorithm 2 via one (or even more) iterations of the CG algorithm, rather than via an FFT-based exact solution, makes convergence more difficult to analyze, so we do not present a formal proof. In a related problem [23], it was shown experimentally that if the iterative solvers used to implement the minimizations defining the ADMM steps are warm-started (i.e., initialized with the values from the previous outer iteration), then the error sequences $\{\eta_k\}$ and $\{\nu_k\}$, for $k = 0,1,\dots$, are absolutely summable, as required by Theorem 1. Finally, we refer to this algorithm as FS-CG (frame synthesis conjugate gradient).

IV-C Frame-Based Analysis Formulation

IV-C1 Mask Decoupling (MD)

The frame-based analysis formulation corresponding to observation model (34) is

$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}\; \frac{1}{2}\|\mathbf{M}\mathbf{A}\mathbf{x} - \mathbf{y}\|_2^2 + \tau\|\mathbf{P}\mathbf{x}\|_1.$   (44)

Following the MD approach introduced for the synthesis formulation, we map problem (44) into the form (2) by using $g_1$ as defined in (38), and keeping $\mathbf{H}^{(1)}$, $\mathbf{H}^{(2)}$, and $g_2$ as in the periodic BC case: (22), (23), and (12), respectively.

The only difference in the resulting instance of Algorithm 2 is the use of the proximity operator of the new $g_1$, as given in (40). In conclusion, the proposed ADMM-based algorithm for image deconvolution with unknown boundaries, under frame-based analysis regularization, is simply Algorithm 2, with line 4 implemented as in (24)–(25), and the proximity operators in line 6 given by (40) and (16). We refer to this algorithm as FA-MD (frame analysis mask decoupling). The computational cost of the algorithm is $O(n\log n)$ per iteration, as in the periodic BC case. Convergence of FA-MD is addressed by the following proposition (the proof of which is similar to that of Proposition 3).

Proposition 6.

Algorithm FA-MD (i.e., Algorithm 2 with the definitions in (38), (22), (23), and (12), with line 4 computed as in (24)–(25), and the proximity operators in line 6 as given in (40) and (16)) converges to a solution of (44).

IV-C2 Using the Reeves-Šorel Technique

Consider problem (44) and map it into the form (2) using (11), (12), (23), and

$\mathbf{H}^{(1)} = \mathbf{M}\mathbf{A}.$   (45)

The matrix inverse computed in line 4 of Algorithm 2 is now (with $\mathbf{P}^H\mathbf{P} = \mathbf{I}$)

$\bigl[\mu_1\mathbf{A}^H\mathbf{M}^H\mathbf{M}\mathbf{A} + \mu_2\mathbf{I}\bigr]^{-1},$   (46)

which can no longer be computed as in (25), since matrix $\mathbf{M}\mathbf{A}$ is not circulant. Using the same steps as in (41)–(43), with $\mathbf{A}$ replacing $\mathbf{A}\mathbf{W}$ and $\mathbf{C} = \mu_1\mathbf{A}^H\mathbf{A} + \mu_2\mathbf{I}$, leads to

$\bigl[\mu_1\mathbf{A}^H\mathbf{M}^H\mathbf{M}\mathbf{A} + \mu_2\mathbf{I}\bigr]^{-1} = \mathbf{C}^{-1} + \mu_1\,\mathbf{C}^{-1}\mathbf{A}^H\bar{\mathbf{M}}^H\bigl[\mathbf{I} - \mu_1\bar{\mathbf{M}}\mathbf{A}\,\mathbf{C}^{-1}\mathbf{A}^H\bar{\mathbf{M}}^H\bigr]^{-1}\bar{\mathbf{M}}\mathbf{A}\,\mathbf{C}^{-1}.$   (47)

Since $\mathbf{A}$ is circulant, $\mathbf{C}^{-1}$ can be efficiently applied via the FFT, as in (25). As in the synthesis case, the inverse in (47) is applied approximately by taking one (warm-started) CG iteration (for the reason explained in Subsection IV-B2). We refer to the resulting algorithm as FA-CG (frame analysis conjugate gradient).
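To make the Reeves-Šorel step concrete, the following sketch (ours, written for this frame-less analysis/TV case, with assumed names) applies the inverse in (46)–(47): FFT-domain inversions plus a CG solve on a small system whose dimension equals the number of unobserved pixels. Calling it with maxiter=1 and a warm start mirrors the single-CG-iteration strategy described above, while a large maxiter gives an essentially exact solve for checking.

```python
# Applying [mu1*A^H M^H M A + mu2*I]^{-1} via (47): FFTs + a small CG solve.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def make_rs_solver(D, m, mu1, mu2):
    """Return a function applying the inverse in (46)-(47), with A the circulant
    convolution whose DFT is D and m the boolean observation mask."""
    miss = ~m.ravel()                               # pixels selected by Mbar
    n_miss = int(miss.sum())
    shape = m.shape

    A    = lambda x: np.real(np.fft.ifft2(D * np.fft.fft2(x)))
    AH   = lambda x: np.real(np.fft.ifft2(np.conj(D) * np.fft.fft2(x)))
    Cinv = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) /
                                          (mu1 * np.abs(D) ** 2 + mu2)))

    def small_matvec(w):                            # (I - mu1*Mbar A C^{-1} A^H Mbar^H) w
        w = np.ravel(w)
        t = np.zeros(m.size); t[miss] = w           # Mbar^H w (zero-fill)
        t = A(Cinv(AH(t.reshape(shape))))
        return w - mu1 * t.ravel()[miss]

    S = LinearOperator((n_miss, n_miss), matvec=small_matvec)

    def solve(r, w0=None, maxiter=1):
        """Apply the inverse (47) to r; w0 warm-starts the small CG solve."""
        cr = Cinv(r)
        rhs = mu1 * A(cr).ravel()[miss]             # mu1 * Mbar A C^{-1} r
        w, _ = cg(S, rhs, x0=w0, maxiter=maxiter)   # small system (one iteration in practice)
        t = np.zeros(m.size); t[miss] = w           # Mbar^H w
        return cr + Cinv(AH(t.reshape(shape))), w

    return solve

# Sanity check with an essentially exact small solve (many CG iterations):
n = 32
h = np.zeros((n, n)); h[:5, :5] = 1 / 25.0
h = np.roll(h, (-2, -2), axis=(0, 1))
D = np.fft.fft2(h)
m = np.zeros((n, n), dtype=bool); m[2:-2, 2:-2] = True
solve = make_rs_solver(D, m, mu1=1.0, mu2=0.3)
r = np.random.default_rng(4).random((n, n))
z, _ = solve(r, maxiter=500)
A  = lambda x: np.real(np.fft.ifft2(D * np.fft.fft2(x)))
AH = lambda x: np.real(np.fft.ifft2(np.conj(D) * np.fft.fft2(x)))
residual = 1.0 * AH(m * A(z)) + 0.3 * z - r
print(np.abs(residual).max())                       # small (limited by the CG tolerance)
```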

IV-D TV-Based Deconvolution with Unknown Boundaries

IV-D1 Mask Decoupling (MD)

Given the observation model in (34), TV-based deconvolution with unknown boundaries is formulated as

$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}\; \frac{1}{2}\|\mathbf{M}\mathbf{A}\mathbf{x} - \mathbf{y}\|_2^2 + \tau\sum_{i=1}^{n}\|\mathbf{D}_i\,\mathbf{x}\|_2,$   (48)

with every element having the exact same meaning as above.

Following the MD approach, we map (48) into the form (2) using $g_1$ as defined in (38) and keeping all the other correspondences (28)–(30) unchanged. The only resulting change to Algorithm 3 is in line 7, which becomes

$\mathbf{u}_{k+1}^{(1)} \leftarrow \bigl(\mathbf{M}^H\mathbf{M} + \mu_1\mathbf{I}\bigr)^{-1}\bigl(\mathbf{M}^H\mathbf{y} + \mu_1(\mathbf{A}\mathbf{z}_{k+1} - \mathbf{d}_k^{(1)})\bigr),$   (49)

as a consequence of applying the proximity operator of the new $g_1$, as given in (40). In summary, the ADMM-based algorithm for TV-based deconvolution with unknown boundaries has the exact same structure as Algorithm 3, with line 7 replaced by (49). We refer to this algorithm as TV-MD (TV mask decoupling). Finally, convergence of TV-MD is addressed in the following proposition (the proof of which is similar to that of Proposition 4).

Proposition 7.

If $\mathbf{M}\mathbf{A}\mathbf{1} \neq \mathbf{0}$, then TV-MD (i.e., the version of Algorithm 3 with line 7 replaced by (49)) converges to a solution of (48).

As for periodic BCs, the condition $\mathbf{M}\mathbf{A}\mathbf{1}\neq\mathbf{0}$ is very mild: it is satisfied by the convolution with any low-pass filter (for which $\mathbf{A}\mathbf{1} = c\,\mathbf{1}$, where $c$ is the DC gain, typically $c \neq 0$) together with any mask $\mathbf{M}\neq\mathbf{0}$ (i.e., if at least one pixel is observed).

IV-D2 Using the Reeves-Šorel Technique

To use the Reeves-Šorel technique to handle (48), the mapping into the form (2) is done as in (27), (28), and (30), with (29) replaced by (45). The consequence is a simple modification of Algorithm 3, where $\mathbf{A}$ is replaced by $\mathbf{M}\mathbf{A}$ in line 8, while in line 6, $\mathbf{A}^H$ is replaced by $\mathbf{A}^H\mathbf{M}^H$ and the inverse is redefined (instead of (33)) as the inverse of the matrix in (31), i.e. (with