I. Introduction
The alternating direction method of multipliers (ADMM), originally proposed in the 1970s [25], has recently emerged as a flexible and efficient tool for several imaging inverse problems, such as denoising [23, 38], deblurring [2], inpainting [2], reconstruction [34], and motion segmentation [44], to mention only a few classical problems (for a comprehensive review, see [9]). ADMM-based approaches make use of variable splitting, which allows a straightforward treatment of various priors/regularizers [1], such as those based on frames or on total variation (TV) [36], as well as the seamless inclusion of several types of constraints (e.g., positivity) [23, 38]. ADMM is closely related to other techniques, namely the so-called Bregman and split Bregman methods [10, 26, 43] and Douglas–Rachford splitting [9, 14, 19, 21].
Several ADMM-based algorithms for imaging inverse problems require, at each iteration, solving a linear system (equivalently, inverting a matrix) [2, 10, 23, 33, 39]. This fact is simultaneously a blessing and a curse. On the one hand, the matrix to be inverted is related to the Hessian of the objective function, thus carrying second-order information; arguably, this fact justifies the excellent speed of these methods, which have been shown (see, e.g., [2]) to be considerably faster than the classical iterative shrinkage-thresholding (IST) algorithms [16, 22] and even than their accelerated versions [7, 8, 42]. On the other hand, this inversion (due to its typically huge size) limits the applicability of these methods to problems where it can be efficiently computed (by exploiting some particular structure). For ADMM-based image deconvolution algorithms [2, 10, 23, 39], this inversion can be efficiently carried out using the fast Fourier transform (FFT), if the convolution is cyclic/periodic (or assumed to be so), thus diagonal in the discrete Fourier domain. However, as explained next, periodicity is an unnatural assumption, inadequate for most real imaging problems.

In deconvolution, the pixels located near the boundary of the observed image depend on pixels (of the unknown image) located outside of its domain. The typical way to formalize this issue is to adopt a so-called boundary condition (BC).

The periodic BC (the use of which, in image deconvolution, dates back to the 1970s [6]) assumes a periodic convolution; its matrix representation is circulant (in fact, when dealing with 2-dimensional (2D) images, these matrices are block-circulant with circulant blocks, thus diagonalized by the 2D discrete Fourier transform (DFT); for simplicity, we refer to such matrices simply as circulant, and to the 2D DFT simply as the DFT), thus diagonalized by the DFT, which can be implemented via the FFT. This computational convenience makes it, arguably, the most commonly adopted BC.
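The diagonalization of a periodic convolution by the DFT, which underlies all the FFT-based inversions used later in the paper, is easy to verify numerically. The following sketch (NumPy-based, with illustrative sizes and kernel; not part of the original text) checks that a 2D circular convolution computed directly agrees with a pointwise multiplication in the DFT domain.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal((n, n))
h = np.zeros((n, n)); h[:3, :3] = rng.standard_normal((3, 3))   # small blur kernel

def circ_conv(x, h):
    # direct periodic (circular) 2D convolution
    n = x.shape[0]
    y = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    y[i, j] += h[k, l] * x[(i - k) % n, (j - l) % n]
    return y

y_direct = circ_conv(x, h)
# FFT route: the circulant matrix is diagonalized by the 2D DFT
y_fft = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(x)))
assert np.allclose(y_direct, y_fft)
```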

The zero BC assumes that all the external pixels have zero value; thus, the matrix representing the convolution is block-Toeplitz, with Toeplitz blocks [31]. By analogy with the BC for ordinary or partial differential equations that assumes fixed values at the domain boundary, this is commonly referred to as the Dirichlet BC [31].
In the reflexive and anti-reflexive BCs, the pixels outside the image domain are a reflection of those near the boundary, using even or odd symmetry, respectively [13, 18, 32]. In the reflexive BC, the discrete derivative at the boundary is zero; thus, by analogy with the BC for ordinary or partial differential equations that assumes fixed values of the derivative at the boundary, the reflexive BC is often referred to as the Neumann BC [31].
As illustrated in Figure 1, all these standard BCs are quite unnatural, as they do not correspond to any realistic imaging system, being motivated merely by computational convenience. Namely, assuming a periodic BC has the advantage of allowing a very fast implementation of the convolution as a pointwise multiplication in the DFT domain, efficiently implementable using the FFT. However, in real problems, there is no reason for the external (unobserved) pixels to follow periodic (or any other) boundary conditions. A well-known consequence of this mismatch is a degradation of the deconvolved images, such as the appearance of ringing artifacts emanating from the boundaries (see Figs. 2 and 3). These artifacts can be reduced by preprocessing the image to reduce the spurious discontinuities at the boundaries created by the (wrong) periodicity assumption; this is what is done by the "edgetaper" function in the MATLAB Image Processing Toolbox. In a recently proposed, more sophisticated version of this idea, the observed image is extrapolated to create a larger image with smooth BCs [27].
Convolutions under the zero BC can still be efficiently performed in the DFT domain, by embedding (using zero-padding) the non-periodic convolution into a larger periodic one [31]. However, in addition to the mismatch problem pointed out in the previous paragraph (i.e., there is usually no reason to assume that the boundary is zero), this choice has another drawback: in the context of ADMM-based methods, the zero-padding technique prevents the required matrix inversion from being efficiently computed via the FFT [4].

I-A Related Work and Contributions
Instead of adopting a standard BC or a boundary smoothing scheme, a more realistic model of actual imaging systems treats the external boundary pixels as unknown; i.e., the problem is seen as one of simultaneous deconvolution and inpainting, where the unobserved boundary pixels are estimated together with the deconvolved image. The corresponding observation model is the composition of a spatial mask with a convolution under arbitrary BC [13, 29, 35, 40]; addressing this formulation using ADMM is the central theme of this paper. While we were concluding this manuscript, we became aware of very recent related work in [30].

Under quadratic (Tikhonov) regularization, image deconvolution with periodic BC corresponds to a linear system, where the corresponding matrix can be efficiently inverted in the DFT domain using the FFT. In the unknown boundary case, it was shown in [35] how this inversion can be reduced to an FFT-based inversion followed by the solution (via, e.g., conjugate gradient – CG) of a much smaller system (of the same dimension as the unknown boundary). Very recently, [40] adapted this technique to deconvolution under TV-based and frame-based analysis non-smooth regularization; that work proposes an algorithm based on variable splitting and quadratic penalization, using the method of [35] to solve the linear system at each iteration. That method is related to, but is not, ADMM, and thus has no guarantee of convergence to a minimizer of the original objective function. Although [40] mentions the possibility of using the method of [35] within ADMM, that option was not explored.
In this paper, we address image deconvolution with unknown boundaries, under TV-based and frame-based (synthesis and analysis) non-smooth regularization, using ADMM. Our first and main approach exploits the fact that the observation model involves the composition of a spatial mask with a periodic convolution, and uses ADMM in a way that decouples these two operations. The resulting algorithms inherit all the desirable properties of previous ADMM-based deconvolution methods: all the update equations (including the matrix inversion) can be computed efficiently without inner iterations, and convergence is formally guaranteed. The second approach considered is the direct extension of the methods from [2, 10, 23, 39] to the unknown boundary case; here (unlike with periodic BC), the linear system at each iteration cannot be solved efficiently using the FFT, so we adopt the technique of [35, 40]. However, since the resulting algorithms are instances of ADMM, they have convergence guarantees, which are missing in [40]. Furthermore, we show how the method of [35] can also be used in ADMM for frame-based synthesis regularization, whereas [40] only considers analysis formulations.
The proposed methods are experimentally illustrated using frame-based and TV-based regularization; the results show the advantage of our approach over the use of the "edgetaper" function (in terms of improvement in SNR) and over an adaptation of the recent strategy of [40] (in terms of speed). Finally, our approach is also tested on inverse problems where the observation model consists of a (non-cyclic) deblurring followed by a generic loss of pixels (inpainting problems). Arguably due to its complexity, this problem has seldom been addressed in the literature, although it is a relevant one: consider the problem of deblurring an image in which some pixel values are inaccessible, e.g., because they are saturated, and thus should not be taken into account, or because they correspond to so-called dead pixels in the image sensor.
I-B Outline and Notation
Sections II and III review ADMM and its use for image deconvolution with periodic BC, under frame-based and TV-based regularization, setting the stage for the proposed approaches for the unknown boundary case, which are introduced in Section IV. The experimental evaluation of our methods is reported in Section V, which also illustrates their use for simultaneous deblurring and inpainting. Section VI concludes the manuscript.
We use the following notation: $\bar{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}$ is the extended real line; bold lower case letters denote vectors (e.g., $\mathbf{x}$, $\mathbf{y}$), and bold upper case letters (Roman or Greek) denote matrices (e.g., $\mathbf{A}$, $\boldsymbol{\Lambda}$); the superscript $(\cdot)^{H}$ denotes vector/matrix transpose (conjugate transpose, in the complex case); $\mathbf{I}$, $\mathbf{1}$, and $\mathbf{0}$ are, respectively, the identity matrix and vectors with all elements equal to one and to zero; as usual, $\circ$ denotes function composition (e.g., $(f \circ g)(x) = f(g(x))$), and $\odot$ the componentwise product between vectors ($(\mathbf{u} \odot \mathbf{v})_{i} = u_{i} v_{i}$).

II. Alternating Direction Method of Multipliers
II-A The Standard ADMM Algorithm
Consider an unconstrained optimization problem

$$\min_{\mathbf{z} \in \mathbb{R}^{n}} \; f_{1}(\mathbf{z}) + f_{2}(\mathbf{G}\mathbf{z}), \qquad (1)$$

where $f_{1}: \mathbb{R}^{n} \rightarrow \bar{\mathbb{R}}$ and $f_{2}: \mathbb{R}^{p} \rightarrow \bar{\mathbb{R}}$ are convex functions, and $\mathbf{G} \in \mathbb{R}^{p \times n}$ is a matrix. The ADMM algorithm for this problem (which can be derived from different perspectives, namely augmented Lagrangian and Douglas–Rachford splitting) is presented in Algorithm 1; for a recent comprehensive review, see [9]. Convergence of ADMM was shown in [19], where the following theorem was proved.
Theorem 1.
Consider problem (1), where $\mathbf{G}$ has full column rank and $f_{1}$ and $f_{2}$ are closed, proper, convex functions. Consider arbitrary $\mu > 0$ and $\mathbf{u}_{0}, \mathbf{d}_{0} \in \mathbb{R}^{p}$. Let $\{\eta_{k} \geq 0\}$ and $\{\nu_{k} \geq 0\}$, for $k = 0, 1, \ldots$, be two sequences such that $\sum_{k=0}^{\infty} \eta_{k} < \infty$ and $\sum_{k=0}^{\infty} \nu_{k} < \infty$. Consider three sequences $\{\mathbf{z}_{k}\}$, $\{\mathbf{u}_{k}\}$, and $\{\mathbf{d}_{k}\}$, for $k = 0, 1, \ldots$, satisfying

$$\eta_{k} \geq \Bigl\| \mathbf{z}_{k+1} - \arg\min_{\mathbf{z}} \Bigl( f_{1}(\mathbf{z}) + \tfrac{\mu}{2} \bigl\| \mathbf{G}\mathbf{z} - \mathbf{u}_{k} - \mathbf{d}_{k} \bigr\|_{2}^{2} \Bigr) \Bigr\|,$$
$$\nu_{k} \geq \Bigl\| \mathbf{u}_{k+1} - \arg\min_{\mathbf{u}} \Bigl( f_{2}(\mathbf{u}) + \tfrac{\mu}{2} \bigl\| \mathbf{G}\mathbf{z}_{k+1} - \mathbf{u} - \mathbf{d}_{k} \bigr\|_{2}^{2} \Bigr) \Bigr\|,$$
$$\mathbf{d}_{k+1} = \mathbf{d}_{k} - \bigl(\mathbf{G}\mathbf{z}_{k+1} - \mathbf{u}_{k+1}\bigr).$$

Then, if (1) has a solution, say $\mathbf{z}^{*}$, the sequence $\{\mathbf{z}_{k}\}$ converges to $\mathbf{z}^{*}$. If (1) does not have a solution, at least one of the sequences $\{\mathbf{u}_{k}\}$ or $\{\mathbf{d}_{k}\}$ diverges.
There are more recent results on ADMM, namely establishing linear convergence [17]; however, those results make stronger assumptions (such as strong convexity) which are not generally satisfied in deconvolution problems. Under the conditions of Theorem 1, convergence holds for any $\mu > 0$; however, the choice of $\mu$ may strongly affect the convergence speed [9]. It is also possible to replace the scalar $\mu$ by a positive diagonal matrix $\boldsymbol{\Lambda}$, i.e., to replace the quadratic terms of the form $\frac{\mu}{2}\|\boldsymbol{\theta}\|_{2}^{2}$ by $\frac{1}{2}\boldsymbol{\theta}^{H}\boldsymbol{\Lambda}\boldsymbol{\theta}$.
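As a concrete illustration of the iterations underlying Theorem 1, the following minimal sketch (all names and values are illustrative, not from the paper) runs ADMM on a toy instance of (1): $f_1(\mathbf{z}) = \frac{1}{2}\|\mathbf{z}-\mathbf{b}\|_2^2$, $f_2 = \tau\|\cdot\|_1$, and $\mathbf{G} = \mathbf{I}$, whose exact minimizer is the soft threshold of $\mathbf{b}$.

```python
import numpy as np

def soft(v, t):
    # componentwise soft threshold
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

b = np.array([3.0, -0.5, 1.2, -2.0, 0.1])
tau, mu = 1.0, 1.0
z = np.zeros_like(b); u = np.zeros_like(b); d = np.zeros_like(b)
for _ in range(500):
    z = (b + mu * (u + d)) / (1.0 + mu)      # exact quadratic z-update
    u = soft(z - d, tau / mu)                # prox of f2 = tau * l1 norm
    d = d - (z - u)                          # (scaled) dual update
# the iterates converge to the known closed-form minimizer soft(b, tau)
assert np.allclose(z, soft(b, tau), atol=1e-6)
```

With this splitting the three updates are exactly the (error-free) sequences of Theorem 1, so convergence is guaranteed for any mu > 0.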
II-B ADMM for Two or More Functions
Following the formulation proposed in [3, 23], consider a generalization of (1), with $J$ functions,

$$\min_{\mathbf{z} \in \mathbb{R}^{n}} \; \sum_{j=1}^{J} g_{j}\bigl(\mathbf{H}^{(j)} \mathbf{z}\bigr), \qquad (2)$$

where $g_{j}: \mathbb{R}^{p_{j}} \rightarrow \bar{\mathbb{R}}$ are proper, closed, convex functions, and $\mathbf{H}^{(j)} \in \mathbb{R}^{p_{j} \times n}$ are arbitrary matrices. We map this problem into the form (1) as follows: let $f_{1}(\mathbf{z}) = 0$; define matrix $\mathbf{G}$ as

$$\mathbf{G} = \begin{bmatrix} \mathbf{H}^{(1)} \\ \vdots \\ \mathbf{H}^{(J)} \end{bmatrix} \in \mathbb{R}^{p \times n}, \qquad (3)$$

where $p = p_{1} + \cdots + p_{J}$, and let $f_{2}$ be defined as

$$f_{2}(\mathbf{u}) = \sum_{j=1}^{J} g_{j}\bigl(\mathbf{u}^{(j)}\bigr), \qquad (4)$$

where each $\mathbf{u}^{(j)} \in \mathbb{R}^{p_{j}}$ is a $p_{j}$-dimensional sub-vector of $\mathbf{u}$, that is, $\mathbf{u} = \bigl[(\mathbf{u}^{(1)})^{H}, \ldots, (\mathbf{u}^{(J)})^{H}\bigr]^{H}$.
The definitions in the previous paragraph lead to an ADMM for problem (2) with the following two key features.

The fact that $f_{1}(\mathbf{z}) = 0$ turns line 3 of Algorithm 1 into an unconstrained quadratic problem. Letting $\boldsymbol{\Lambda}$ be a $p \times p$ positive block-diagonal matrix (associating a possibly different parameter $\mu_{j}$ to each function $g_{j}$),

$$\boldsymbol{\Lambda} = \operatorname{diag}\bigl(\mu_{1} \mathbf{I}_{p_{1}}, \ldots, \mu_{J} \mathbf{I}_{p_{J}}\bigr), \qquad (5)$$

the solution of this quadratic problem is given by (denoting $\boldsymbol{\zeta}_{k} = \mathbf{u}_{k} + \mathbf{d}_{k}$)

$$\mathbf{z}_{k+1} = \bigl[\mathbf{G}^{H} \boldsymbol{\Lambda} \mathbf{G}\bigr]^{-1} \mathbf{G}^{H} \boldsymbol{\Lambda} \boldsymbol{\zeta}_{k} = \Bigl[\sum_{j=1}^{J} \mu_{j} \bigl(\mathbf{H}^{(j)}\bigr)^{H} \mathbf{H}^{(j)}\Bigr]^{-1} \sum_{j=1}^{J} \mu_{j} \bigl(\mathbf{H}^{(j)}\bigr)^{H} \boldsymbol{\zeta}_{k}^{(j)}, \qquad (6)$$

where $\boldsymbol{\zeta}_{k}^{(j)}$, $\mathbf{u}_{k}^{(j)}$, and $\mathbf{d}_{k}^{(j)}$, for $j = 1, \ldots, J$, are the sub-vectors of $\boldsymbol{\zeta}_{k}$, $\mathbf{u}_{k}$, and $\mathbf{d}_{k}$, respectively, corresponding to the partition in (3), and the second equality results from the block structure of matrices $\mathbf{G}$ (in (3)) and $\boldsymbol{\Lambda}$ (in (5)).

The separable structure of $f_{2}$ (as defined in (4)) allows decoupling the minimization in line 4 of Algorithm 1 into $J$ independent minimizations, each of the form

$$\mathbf{u}_{k+1}^{(j)} = \arg\min_{\mathbf{v} \in \mathbb{R}^{p_{j}}} \; g_{j}(\mathbf{v}) + \frac{\mu_{j}}{2} \bigl\| \mathbf{v} - \mathbf{s}^{(j)} \bigr\|_{2}^{2}, \qquad (7)$$

for $j = 1, \ldots, J$, where $\mathbf{s}^{(j)} = \mathbf{H}^{(j)} \mathbf{z}_{k+1} - \mathbf{d}_{k}^{(j)}$. This minimization defines the so-called Moreau proximity operator of $g_{j}/\mu_{j}$ (denoted $\operatorname{prox}_{g_{j}/\mu_{j}}$) (see [14, 15] and references therein) applied to $\mathbf{s}^{(j)}$, thus $\mathbf{u}_{k+1}^{(j)} = \operatorname{prox}_{g_{j}/\mu_{j}}\bigl(\mathbf{s}^{(j)}\bigr)$.

For some functions, the Moreau proximity operators are known in closed form [14]; a well-known case is the $\ell_{1}$ norm ($g(\mathbf{v}) = \tau\|\mathbf{v}\|_{1}$), for which the proximity operator is the soft-threshold function:

$$\operatorname{soft}(\mathbf{v}, \tau) = \operatorname{sign}(\mathbf{v}) \odot \max\bigl(|\mathbf{v}| - \tau, \mathbf{0}\bigr), \qquad (8)$$

where the sign, max, and absolute value functions are applied in componentwise fashion.
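The soft-threshold rule can be sanity-checked against a brute-force minimization of the scalar version of the proximity problem; the sketch below (illustrative values, grid search as the reference) confirms that soft(v, tau) minimizes tau*|x| + (x - v)^2/2.

```python
import numpy as np

def soft(v, t):
    # componentwise soft threshold: sign(v) * max(|v| - t, 0)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v, tau = 1.7, 0.5
grid = np.linspace(-3.0, 3.0, 60001)                  # brute-force search grid
obj = tau * np.abs(grid) + 0.5 * (grid - v) ** 2      # scalar proximity objective
x_star = grid[np.argmin(obj)]

# closed form: soft(1.7, 0.5) = 1.2, which the grid search also finds
assert abs(soft(np.array([v]), tau)[0] - 1.2) < 1e-12
assert abs(x_star - 1.2) < 1e-3
```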
The convergence of the resulting instance of ADMM (shown in Algorithm 2) is the subject of the following proposition.
Proposition 1. Consider problem (2), where $g_{1}, \ldots, g_{J}$ are closed, proper, convex functions, and matrix $\mathbf{G}$ in (3) has full column rank. Let $\mu_{1}, \ldots, \mu_{J} > 0$. Then, if (2) has a solution, Algorithm 2 converges to one of its solutions.
Proof.
We simply need to show that the conditions of Theorem 1 are satisfied. Problem (2) has the form (1), with $f_{1}(\mathbf{z}) = 0$ and $f_{2}$ as given by (4); if all the functions $g_{j}$ are closed, proper, and convex, so are $f_{1}$ and $f_{2}$. Furthermore, if $\mathbf{G}$ has full column rank, a sufficient condition for the inverse in (6) to exist is that $\mu_{j} > 0$, for $j = 1, \ldots, J$. Finally, the minimization in line 3 of Algorithm 1 is solved exactly in line 4 of Algorithm 2, and the minimization in line 4 of Algorithm 1 is solved exactly in line 6 of Algorithm 2. Thus, we can take the sequences $\{\eta_{k}\}$ and $\{\nu_{k}\}$ in Theorem 1 to be identically zero. ∎
If the proximity operators of the functions $g_{j}$ are simple, the computational bottleneck of Algorithm 2 is the matrix inversion in line 4. As shown in [39, 41], inversions of this form can be efficiently tackled using the FFT, if all the matrices $\mathbf{H}^{(j)}$ are circulant, thus jointly diagonalized by the DFT (more details in Section III). Due to this condition, the work in [39, 41] was limited to deconvolution with periodic BCs, under TV regularization. More recent work in [2, 3, 23] has shown how these inversions can still be efficiently handled via the FFT (and other fast transforms) in problems such as image reconstruction from partial Fourier observations and inpainting, and with other regularizers, such as those based on tight frames. This paper extends that work, showing how to handle deconvolution problems with unknown boundaries.
III. Image Deconvolution with Periodic BC
This section reviews the ADMM-based approach to image deconvolution with periodic BC, using frame-based formulations and TV-based regularization [2, 3, 23, 36, 39], the standard regularizers for this class of imaging inverse problems. The next section will then show how this approach can be extended to deal with unknown boundaries.
III-A Frame-Based Synthesis Formulation
There are two standard ways to formulate frame-based regularization for image deconvolution, both exploiting the fact that natural images have sparse representations on wavelet frames (we use the generic term "wavelet" to mean any wavelet-like multiscale representation, such as curvelets, beamlets, or ridgelets [28]): the synthesis and analysis formulations [2, 20, 37].
The synthesis formulation expresses the estimated image as a linear combination of the elements of some wavelet frame (an orthogonal basis or an overcomplete dictionary), i.e., $\hat{\mathbf{x}} = \mathbf{W}\hat{\boldsymbol{\beta}}$, with $\hat{\boldsymbol{\beta}}$ given by

$$\hat{\boldsymbol{\beta}} \in \arg\min_{\boldsymbol{\beta}} \; \frac{1}{2} \bigl\| \mathbf{A}\mathbf{W}\boldsymbol{\beta} - \mathbf{y} \bigr\|_{2}^{2} + \tau \phi(\boldsymbol{\beta}), \qquad (9)$$

where $\mathbf{y} \in \mathbb{R}^{n}$ is a vector containing all the pixels (in lexicographic order) of the observed image, $\mathbf{A} \in \mathbb{R}^{n \times n}$ is a matrix representation of the (periodic) convolution, the columns of matrix $\mathbf{W} \in \mathbb{R}^{n \times k}$ ($k \geq n$) are the elements of the adopted frame, $\phi: \mathbb{R}^{k} \rightarrow \bar{\mathbb{R}}$ is a regularizer encouraging $\hat{\boldsymbol{\beta}}$ to be sparse (a typical choice, herein adopted, is $\phi = \|\cdot\|_{1}$), and $\tau > 0$ is the regularization parameter [2, 20, 24]. The first term in (9) results from assuming the usual observation model

$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n}, \qquad (10)$$

where $\mathbf{x} \in \mathbb{R}^{n}$ is the (unknown) original image and $\mathbf{n}$ is a sample of white Gaussian noise with unit variance (any other value may be absorbed by $\tau$).

Clearly, problem (9) has the form (2), with $J = 2$ and

$$g_{1}(\mathbf{v}) = \frac{1}{2} \| \mathbf{v} - \mathbf{y} \|_{2}^{2}, \qquad (11)$$
$$g_{2}(\mathbf{v}) = \tau \| \mathbf{v} \|_{1}, \qquad (12)$$
$$\mathbf{H}^{(1)} = \mathbf{A}\mathbf{W}, \qquad (13)$$
$$\mathbf{H}^{(2)} = \mathbf{I}. \qquad (14)$$
The proximity operators of $g_{1}$ and $g_{2}$, key components of Algorithm 2 for solving (9), have simple expressions,

$$\operatorname{prox}_{g_{1}/\mu_{1}}(\mathbf{v}) = \frac{\mathbf{y} + \mu_{1}\mathbf{v}}{1 + \mu_{1}}, \qquad (15)$$
$$\operatorname{prox}_{g_{2}/\mu_{2}}(\mathbf{v}) = \operatorname{soft}\bigl(\mathbf{v}, \tau/\mu_{2}\bigr), \qquad (16)$$

where "soft" denotes the soft-threshold function in (8).
where “soft” denotes the softthreshold function in (8). Line 4 of Algorithm 2 (the other key component) has the form
(17) 
As shown in [2], if corresponds to a Parseval frame [28], i.e., if^{3}^{3}3In fact, the frame only needs to be tight [28], i.e., ; without loss of generality, we assume , which yields lighter notation. Recently, a method for relaxing the tight frame condition was proposed in [33]. (although possibly ), the ShermanMorrisonWoodbury inversion formula can be used to write the matrix inverse in (17) as
(18) 
Assuming periodic BC, matrix is circulant, thus factors into
(19) 
where and are the unitary matrices representing the DFT and its inverse, and is the diagonal matrix of the DFT coefficients of the convolution kernel. Using (19), matrix defined in (18) can be written as
(20) 
where denotes the squared absolute values of the entries of . Since the product involves only diagonal matrices, it can be computed with only cost, while the products by and (DFT and its inverse) can be computed with cost, using the FFT; thus, computing and multiplying by it has cost.
The leading cost of each application of (17) (line 4 of Algorithm 2) is either the $O(n \log n)$ cost associated with $\mathbf{F}$ or the cost of the products by $\mathbf{W}$ and $\mathbf{W}^{H}$. For most tight frames used in image restoration (e.g., undecimated wavelet transforms, curvelets, complex wavelets), these products correspond to direct and inverse transforms, for which fast algorithms exist [28]. In conclusion, under periodic BC and for a large class of frames, each iteration of Algorithm 2 for solving (9) has $O(n \log n)$ cost. Finally, convergence of this instance of Algorithm 2 is established in the following proposition.
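The Sherman–Morrison–Woodbury route can be checked numerically. The following sketch does so in 1D for readability, with an illustrative Parseval frame W = [I I]/sqrt(2) (so that W W^H = I) and, for brevity, mu1 = 1 and mu2 = mu; kernel and sizes are assumptions for the sketch, not from the paper. It verifies that applying the inverse via an FFT-domain division matches a dense matrix inversion.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
h = np.zeros(n); h[:3] = rng.standard_normal(3)            # illustrative blur kernel
A = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])   # circulant
W = np.hstack([np.eye(n), np.eye(n)]) / np.sqrt(2)         # Parseval frame: W @ W.T = I
mu = 0.7                                                   # plays the role of mu2/mu1

Q = W.T @ A.T @ A @ W + mu * np.eye(2 * n)
inv_direct = np.linalg.inv(Q)

# Woodbury route: (mu I + A A^T)^{-1} is diagonal in the DFT domain
D2 = np.abs(np.fft.fft(h)) ** 2
def apply_inv(v):
    AWv = A @ (W @ v)
    t = np.real(np.fft.ifft(np.fft.fft(AWv) / (mu + D2)))
    return (v - W.T @ (A.T @ t)) / mu

v = rng.standard_normal(2 * n)
assert np.allclose(inv_direct @ v, apply_inv(v))
```

The Woodbury route never forms the 2n x 2n matrix: it only needs FFTs and frame products, which is what gives the O(n log n) per-iteration cost.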
Proposition 2. For any frame $\mathbf{W}$, any $\tau > 0$, and any $\mu_{1}, \mu_{2} > 0$, Algorithm 2, applied to problem (9) with the correspondences (11)–(14), converges to a solution of (9).
Proof.
Since $g_{1}$ and $g_{2}$ are proper, closed, convex functions, so is the objective function in (9). Because $g_{2} = \tau\|\cdot\|_{1}$ is coercive, so is the objective function in (9), thus its set of minimizers is not empty [15]. Matrix $\mathbf{H}^{(2)} = \mathbf{I}$ obviously has full column rank, which implies that $\mathbf{G}$ also has full column rank; that fact, combined with the fact that $g_{1}$ and $g_{2}$ are proper, closed, convex functions, and that a minimizer exists, allows invoking Proposition 1, which concludes the proof. ∎
III-B Frame-Based Analysis Formulation
In the frame-based analysis approach, the problem is formulated directly with respect to the unknown image (rather than its synthesis coefficients),

$$\hat{\mathbf{x}} \in \arg\min_{\mathbf{x}} \; \frac{1}{2} \bigl\| \mathbf{A}\mathbf{x} - \mathbf{y} \bigr\|_{2}^{2} + \tau \| \mathbf{P}\mathbf{x} \|_{1}, \qquad (21)$$

where $\mathbf{A}$ is as in (9) and $\mathbf{P} \in \mathbb{R}^{k \times n}$ ($k \geq n$) is the analysis operator of a Parseval frame, i.e., satisfying $\mathbf{P}^{H}\mathbf{P} = \mathbf{I}$ (as above, the frame is only required to be tight, i.e., $\mathbf{P}^{H}\mathbf{P} = \alpha\mathbf{I}$; since $\alpha = 1$ can be obtained simply by normalization, there is no loss of generality), although possibly $\mathbf{P}\mathbf{P}^{H} \neq \mathbf{I}$, unless the frame is an orthonormal basis [28]. Problem (21) has the form (2), with $J = 2$, with $g_{1}$ and $g_{2}$ as given in (11) and (12), respectively, and

$$\mathbf{H}^{(1)} = \mathbf{A}, \qquad (22)$$
$$\mathbf{H}^{(2)} = \mathbf{P}. \qquad (23)$$
Since the proximity operators of $g_{1}$ and $g_{2}$ are the same as in the synthesis case (see (15) and (16)), only line 4 of Algorithm 2 has a different form:

$$\mathbf{z}_{k+1} = \bigl[\mu_{1}\mathbf{A}^{H}\mathbf{A} + \mu_{2}\mathbf{P}^{H}\mathbf{P}\bigr]^{-1} \bigl(\mu_{1}\mathbf{A}^{H}\boldsymbol{\zeta}_{k}^{(1)} + \mu_{2}\mathbf{P}^{H}\boldsymbol{\zeta}_{k}^{(2)}\bigr). \qquad (24)$$

Since $\mathbf{P}^{H}\mathbf{P} = \mathbf{I}$ and $\mathbf{A} = \mathbf{U}^{H}\mathbf{D}\mathbf{U}$, the matrix inverse is simply

$$\bigl[\mu_{1}\mathbf{A}^{H}\mathbf{A} + \mu_{2}\mathbf{I}\bigr]^{-1} = \mathbf{U}^{H}\bigl[\mu_{1}|\mathbf{D}|^{2} + \mu_{2}\mathbf{I}\bigr]^{-1}\mathbf{U}, \qquad (25)$$

which has $O(n \log n)$ cost, since $\mu_{1}|\mathbf{D}|^{2} + \mu_{2}\mathbf{I}$ is a diagonal matrix and the products by $\mathbf{U}$ and $\mathbf{U}^{H}$ (the DFT and its inverse) can be computed using the FFT.
In conclusion, under periodic BC and for a large class of frames, each iteration of Algorithm 2 for solving (21) has $O(n \log n)$ cost. Finally, convergence of this instance of Algorithm 2 is claimed in the following proposition.
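The inversion (25) amounts to a pointwise division in the DFT domain. A minimal 1D sketch (illustrative kernel and sizes; mu1 = 1 and mu2 = mu for brevity) checks it against a dense solve:

```python
import numpy as np

rng = np.random.default_rng(6)
n, mu = 12, 0.4                      # illustrative size and ratio mu2/mu1
h = np.zeros(n); h[:4] = rng.standard_normal(4)
A = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])  # circulant blur
b = rng.standard_normal(n)

# (A^H A + mu I)^{-1} b as a pointwise division in the DFT domain
x_fft = np.real(np.fft.ifft(np.fft.fft(b) / (np.abs(np.fft.fft(h)) ** 2 + mu)))
x_dense = np.linalg.solve(A.T @ A + mu * np.eye(n), b)
assert np.allclose(x_fft, x_dense)
```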
Proposition 3. For any Parseval frame with analysis operator $\mathbf{P}$, any $\tau > 0$, and any $\mu_{1}, \mu_{2} > 0$, Algorithm 2, applied to problem (21) with the correspondences (11), (12), (22), and (23), converges to a solution of (21).
Proof.
Since $g_{1}$ and $g_{2}$ are proper, closed, convex functions, so is the objective function in (21). Because $g_{2}$ is coercive and $\mathbf{P}$ is the analysis operator of a tight frame ($\mathcal{N}(\mathbf{P}) = \{\mathbf{0}\}$, where $\mathcal{N}(\mathbf{P})$ is the null space of matrix $\mathbf{P}$), the objective function in (21) is coercive, thus its set of minimizers is not empty [15]. Matrix $\mathbf{H}^{(2)} = \mathbf{P}$ has full column rank, thus the same is true of $\mathbf{G}$; combining that with $g_{1}$ and $g_{2}$ being proper, closed, and convex, and given the existence of a minimizer, allows invoking Proposition 1 to conclude the proof. ∎
III-C Total Variation
The classical formulation of TV-based image deconvolution consists in a convex optimization problem

$$\hat{\mathbf{x}} \in \arg\min_{\mathbf{x}} \; \frac{1}{2} \bigl\| \mathbf{A}\mathbf{x} - \mathbf{y} \bigr\|_{2}^{2} + \tau \sum_{i=1}^{n} \bigl\| \mathbf{D}^{(i)}\mathbf{x} \bigr\|_{2}, \qquad (26)$$

where $\mathbf{y}$, $\mathbf{A}$, and $\tau$ have the same meanings as above, and each $\mathbf{D}^{(i)} \in \mathbb{R}^{2 \times n}$ is the matrix that computes the horizontal and vertical differences at pixel $i$ (also with periodic BC) [39, 41]. The sum $\sum_{i=1}^{n} \|\mathbf{D}^{(i)}\mathbf{x}\|_{2}$ defines the so-called total variation of image $\mathbf{x}$ (in fact, this is the so-called isotropic discrete approximation of the continuous total variation [11, 36]).
Problem (26) has the form (2), with $J = 2$ and

$$g_{1}(\mathbf{v}) = \frac{1}{2} \| \mathbf{v} - \mathbf{y} \|_{2}^{2}, \qquad (27)$$
$$g_{2}(\mathbf{v}) = \tau \sum_{i=1}^{n} \bigl\| \mathbf{v}^{(i)} \bigr\|_{2}, \qquad (28)$$
$$\mathbf{H}^{(1)} = \mathbf{A}, \qquad (29)$$
$$\mathbf{H}^{(2)} = \mathbf{D} = \bigl[(\mathbf{D}^{(1)})^{H}, \ldots, (\mathbf{D}^{(n)})^{H}\bigr]^{H}, \qquad (30)$$

where each $\mathbf{v}^{(i)} \in \mathbb{R}^{2}$ is the sub-vector of $\mathbf{v}$ containing the pair of differences at pixel $i$. For matrix $\boldsymbol{\Lambda}$ (see (5)), we adopt $\boldsymbol{\Lambda} = \operatorname{diag}(\mu_{1}\mathbf{I}_{n}, \mu_{2}\mathbf{I}_{2n})$. The main steps of the resulting instance of Algorithm 2 for solving (26) are as follows.

As shown in [39, 41], the matrix to be inverted in line 4 of Algorithm 2 can be written as

$$\mu_{1}\mathbf{A}^{H}\mathbf{A} + \mu_{2}\bigl(\mathbf{D}_{h}^{H}\mathbf{D}_{h} + \mathbf{D}_{v}^{H}\mathbf{D}_{v}\bigr), \qquad (31)$$

where $\mathbf{D}_{h}, \mathbf{D}_{v} \in \mathbb{R}^{n \times n}$ are the matrices collecting the first and second rows, respectively, of each of the matrices $\mathbf{D}^{(i)}$; that is, $\mathbf{D}_{h}$ computes all the horizontal differences and $\mathbf{D}_{v}$ all the vertical differences. With periodic BC, the matrices in (31) can be factorized as

$$\mathbf{A} = \mathbf{U}^{H}\mathbf{D}\mathbf{U}, \quad \mathbf{D}_{h} = \mathbf{U}^{H}\boldsymbol{\Delta}_{h}\mathbf{U}, \quad \mathbf{D}_{v} = \mathbf{U}^{H}\boldsymbol{\Delta}_{v}\mathbf{U}, \qquad (32)$$

where $\mathbf{D}$, $\boldsymbol{\Delta}_{h}$, and $\boldsymbol{\Delta}_{v}$ are diagonal matrices. Thus, the inverse of the matrix in (31) can be written as

$$\mathbf{U}^{H}\bigl[\mu_{1}|\mathbf{D}|^{2} + \mu_{2}\bigl(|\boldsymbol{\Delta}_{h}|^{2} + |\boldsymbol{\Delta}_{v}|^{2}\bigr)\bigr]^{-1}\mathbf{U}, \qquad (33)$$

which involves a diagonal inversion, with $O(n)$ cost, and the products by $\mathbf{U}$ and $\mathbf{U}^{H}$ (the direct and inverse DFT), which have $O(n \log n)$ cost (by FFT).

The sum yielding the vector to which this inverse is applied (line 4 of Algorithm 2) is

$$\mu_{1}\mathbf{A}^{H}\boldsymbol{\zeta}_{k}^{(1)} + \mu_{2}\bigl(\mathbf{D}_{h}^{H}\boldsymbol{\zeta}_{k}^{(2,h)} + \mathbf{D}_{v}^{H}\boldsymbol{\zeta}_{k}^{(2,v)}\bigr),$$

where $\boldsymbol{\zeta}_{k}^{(2,h)}$ and $\boldsymbol{\zeta}_{k}^{(2,v)}$ contain the first and second components, respectively, of all the 2-dimensional sub-vectors of $\boldsymbol{\zeta}_{k}^{(2)}$, for $i = 1, \ldots, n$. As seen above, the product by $\mathbf{A}^{H}$ has $O(n \log n)$ cost via the FFT, while the products by $\mathbf{D}_{h}^{H}$ and $\mathbf{D}_{v}^{H}$ (which correspond to local differences) have $O(n)$ cost.

The proximity operator of $g_{1}$ is as given in (15).
Plugging all these equalities into Algorithm 2 yields Algorithm 3, which is similar to the one presented in [39].
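The Fourier-domain inversion in the first step above can be checked numerically: applying the matrix in (31) with periodic operators and then dividing pointwise in the DFT domain should recover the input. The sketch below does this with illustrative sizes and kernel (mu1 = 1, mu2 = mu); the roll-based differences implement the horizontal/vertical difference operators and their adjoints.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
h = np.zeros((n, n)); h[:3, :3] = rng.standard_normal((3, 3)); h[0, 0] += 3.0
mu = 0.5                                                  # plays the role of mu2/mu1

Fh = np.fft.fft2(h)
kh = np.zeros((n, n)); kh[0, 0] = 1.0; kh[0, 1] = -1.0    # horizontal-difference kernel
kv = np.zeros((n, n)); kv[0, 0] = 1.0; kv[1, 0] = -1.0    # vertical-difference kernel
denom = np.abs(Fh) ** 2 + mu * (np.abs(np.fft.fft2(kh)) ** 2
                                + np.abs(np.fft.fft2(kv)) ** 2)

def forward(x):
    # apply  A^H A + mu (Dh^H Dh + Dv^H Dv)  with periodic operators
    Ax = np.real(np.fft.ifft2(Fh * np.fft.fft2(x)))
    AHAx = np.real(np.fft.ifft2(np.conj(Fh) * np.fft.fft2(Ax)))
    dhx = x - np.roll(x, 1, axis=1)
    dvx = x - np.roll(x, 1, axis=0)
    return AHAx + mu * ((dhx - np.roll(dhx, -1, axis=1))
                        + (dvx - np.roll(dvx, -1, axis=0)))

x = rng.standard_normal((n, n))
# apply the diagonal inverse in the DFT domain: it undoes `forward`
x_rec = np.real(np.fft.ifft2(np.fft.fft2(forward(x)) / denom))
assert np.allclose(x_rec, x, atol=1e-8)
```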
Finally, convergence of Algorithm 3 is addressed in the following proposition.
Proposition 4. If $\mathbf{A}\mathbf{1} \neq \mathbf{0}$, then, for any $\tau > 0$ and any $\mu_{1}, \mu_{2} > 0$, Algorithm 3 converges to a solution of (26).
Proof.
Since (27)–(28) are proper, closed, convex functions, so is the objective function in (26). That the objective function in (26) is coercive, and thus has a non-empty set of minimizers, was shown in [12]. Matrix $\mathbf{G}$ can be written as a permutation of the rows of $[\mathbf{A}^{H} \; \mathbf{D}_{h}^{H} \; \mathbf{D}_{v}^{H}]^{H}$, where $\mathbf{D}_{h}$ and $\mathbf{D}_{v}$ are as in (31); since $\mathcal{N}(\mathbf{D}_{h}) \cap \mathcal{N}(\mathbf{D}_{v}) = \operatorname{span}(\mathbf{1})$, if $\mathbf{A}\mathbf{1} \neq \mathbf{0}$, then $\mathbf{G}$ has full column rank. Finally, since Algorithm 3 is an instance of Algorithm 2 (with all updates computed exactly), Proposition 1 guarantees its convergence to a solution of (26). ∎
Notice that the condition $\mathbf{A}\mathbf{1} \neq \mathbf{0}$ is very mild, and is clearly satisfied by the convolution with any lowpass-type filter, such as those modeling uniform, motion, Gaussian, circular, and other types of blur. In fact, for these blurring filters, $\mathbf{A}\mathbf{1} = c\,\mathbf{1}$, where $c$ is the DC gain (typically $c = 1$).
IV. Deconvolution with Unknown Boundaries
IV-A The Observation Model
To handle images with unknown boundaries, we model the boundary pixels as unobserved, which is achieved by modifying (10) into

$$\mathbf{y} = \mathbf{M}\mathbf{A}\mathbf{x} + \mathbf{n}, \qquad (34)$$

where $\mathbf{M} \in \{0,1\}^{m \times n}$ (with $m < n$) is a masking matrix, i.e., a matrix whose rows are a subset of the rows of an identity matrix. The role of $\mathbf{M}$ is to observe only the subset of the image domain in which the elements of $\mathbf{A}\mathbf{x}$ do not depend on the boundary pixels; consequently, the BC assumed for the convolution represented by $\mathbf{A}$ is irrelevant, and we may adopt periodic BCs, for computational convenience.
Assuming that $\mathbf{A}$ models the convolution with a blurring filter with a limited support of size $(2l+1) \times (2l+1)$, and that $\mathbf{x}$ and $\mathbf{A}\mathbf{x}$ represent square images of dimension $\sqrt{n} \times \sqrt{n}$, then matrix $\mathbf{M}$, with $m = (\sqrt{n} - 2l)^{2}$, represents the removal of a band of width $l$ of the outermost pixels of the full convolved image $\mathbf{A}\mathbf{x}$.
Problem (34) can be seen as a hybrid of deconvolution and inpainting [13], where the missing pixels constitute the unknown boundary. If $\mathbf{M} = \mathbf{I}$, model (34) reduces to a standard periodic deconvolution problem. Conversely, if $\mathbf{A} = \mathbf{I}$, (34) becomes a pure inpainting problem. Moreover, formulation (34) can be used to model problems where not only the boundary, but also other pixels, are missing, as in standard image inpainting.
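The claim that the BC becomes irrelevant under the mask can be verified directly: a periodic convolution and a zero-BC convolution of the same image agree on every pixel whose support does not touch the boundary, i.e., exactly on the band retained by the mask. A sketch with illustrative sizes and kernel:

```python
import numpy as np

rng = np.random.default_rng(3)
n, l = 16, 5                 # image size and blur support (l odd), both illustrative
b = (l - 1) // 2             # width of the boundary band removed by the mask
x = rng.standard_normal((n, n))
k = rng.standard_normal((l, l))

# periodic convolution, with the kernel centred at the origin
kpad = np.zeros((n, n)); kpad[:l, :l] = k
kpad = np.roll(np.roll(kpad, -b, axis=0), -b, axis=1)
y_per = np.real(np.fft.ifft2(np.fft.fft2(kpad) * np.fft.fft2(x)))

# zero-BC convolution of the same image (direct sum over a zero-padded copy)
xz = np.pad(x, b)
y_zero = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        y_zero[i, j] = np.sum(k[::-1, ::-1] * xz[i:i + l, j:j + l])

# the two disagree near the boundary, but the masked (cropped) band is BC-independent
assert np.allclose(y_per[b:-b, b:-b], y_zero[b:-b, b:-b])
assert not np.allclose(y_per, y_zero)
```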
The following subsections describe how to handle observation model (34), in the context of the ADMM-based deconvolution algorithms reviewed in the previous section.
IV-B Frame-Based Synthesis Formulation
IV-B1 Mask Decoupling (MD)
Under observation model (34), the frame-based synthesis formulation (9) changes to

$$\hat{\boldsymbol{\beta}} \in \arg\min_{\boldsymbol{\beta}} \; \frac{1}{2} \bigl\| \mathbf{M}\mathbf{A}\mathbf{W}\boldsymbol{\beta} - \mathbf{y} \bigr\|_{2}^{2} + \tau \| \boldsymbol{\beta} \|_{1}. \qquad (35)$$

At this point, one could be tempted to map (35) into (2) using the same correspondences as in (11), (12), and (14), and simply changing (13) into

$$\mathbf{H}^{(1)} = \mathbf{M}\mathbf{A}\mathbf{W}. \qquad (36)$$

The problem with this choice is that the matrix to be inverted in line 4 of Algorithm 2 would become

$$\bigl[\mu_{1}\mathbf{W}^{H}\mathbf{A}^{H}\mathbf{M}^{H}\mathbf{M}\mathbf{A}\mathbf{W} + \mu_{2}\mathbf{I}\bigr]^{-1}, \qquad (37)$$

instead of the one in (17). The "trick" used in Subsection III-A to express this inversion in the DFT domain can no longer be used, due to the presence of $\mathbf{M}$, invalidating the corresponding FFT-based implementation of line 4 of Algorithm 2. It is clear that the source of the difficulty is the product $\mathbf{M}\mathbf{A}$, which is the composition of a mask (a spatial pointwise operation) with a circulant matrix (a pointwise operation in the DFT domain); to sidestep this difficulty, we need to decouple these two operations, which is achieved by defining (instead of (11))

$$g_{1}(\mathbf{v}) = \frac{1}{2} \bigl\| \mathbf{M}\mathbf{v} - \mathbf{y} \bigr\|_{2}^{2}, \qquad (38)$$
while keeping $g_{2}$, $\mathbf{H}^{(1)}$, and $\mathbf{H}^{(2)}$ as in (12)–(14). With this choice, line 4 of Algorithm 2 is still given by (17) (with its efficient FFT-based implementation); the only change is the proximity operator of the new $g_{1}$,

$$\operatorname{prox}_{g_{1}/\mu_{1}}(\mathbf{v}) = \arg\min_{\mathbf{u}} \; \frac{1}{2} \bigl\| \mathbf{M}\mathbf{u} - \mathbf{y} \bigr\|_{2}^{2} + \frac{\mu_{1}}{2} \| \mathbf{u} - \mathbf{v} \|_{2}^{2} \qquad (39)$$
$$= \bigl(\mathbf{M}^{H}\mathbf{M} + \mu_{1}\mathbf{I}\bigr)^{-1} \bigl(\mathbf{M}^{H}\mathbf{y} + \mu_{1}\mathbf{v}\bigr); \qquad (40)$$

notice that, due to the special structure of $\mathbf{M}$, matrix $\mathbf{M}^{H}\mathbf{M}$ is diagonal, thus the inversion in (40) has $O(n)$ cost, the same being true of the product $\mathbf{M}^{H}\mathbf{y}$, which corresponds to extending the observed image $\mathbf{y}$ to the size of $\mathbf{x}$, by creating a boundary of zeros around it. Of course, both $(\mathbf{M}^{H}\mathbf{M} + \mu_{1}\mathbf{I})^{-1}$ and $\mathbf{M}^{H}\mathbf{y}$ can be precomputed and then used throughout the algorithm, as long as $\mu_{1}$ is kept constant. We refer to this approach as mask decoupling (MD).
In conclusion, the proposed MD-based ADMM algorithm for image deconvolution with unknown boundaries, under frame-based synthesis regularization, is Algorithm 2 with line 4 implemented as in (17) and the proximity operators in line 6 given by (40) and (16). We refer to this algorithm as FS-MD (frame synthesis mask decoupling). As with periodic BC, the leading cost is $O(n \log n)$ per iteration. Finally, convergence of the FS-MD algorithm is guaranteed by the following proposition (the proof of which is similar to that of Proposition 2).
IV-B2 Using the Reeves–Šorel Technique
An alternative to the approach just presented (decoupling the convolution from the masking operator) is to use the method of [35, 40] to tackle the inversion (37). Following [35], notice that

$$\mathbf{P}\begin{bmatrix} \mathbf{M} \\ \bar{\mathbf{M}} \end{bmatrix} = \mathbf{I}, \qquad (41)$$

where $\bar{\mathbf{M}} \in \{0,1\}^{(n-m) \times n}$ contains the rows of $\mathbf{I}$ that are missing from $\mathbf{M}$ (recall that the rows of $\mathbf{M}$ are a subset of those of an identity matrix) and $\mathbf{P}$ is a permutation matrix that puts these missing rows in their original positions. Noticing that

$$\mathbf{M}^{H}\mathbf{M} = \mathbf{I} - \bar{\mathbf{M}}^{H}\bar{\mathbf{M}} \qquad (42)$$

($\mathbf{P}$ is a permutation matrix, thus $\mathbf{P}^{H}\mathbf{P} = \mathbf{I}$), the inverse in (37), which equals $1/\mu_{1}$ times $[\mathbf{W}^{H}\mathbf{A}^{H}\mathbf{M}^{H}\mathbf{M}\mathbf{A}\mathbf{W} + \nu\mathbf{I}]^{-1}$ (with $\nu = \mu_{2}/\mu_{1}$), can be written as

$$\bigl[\mathbf{W}^{H}\mathbf{A}^{H}\mathbf{M}^{H}\mathbf{M}\mathbf{A}\mathbf{W} + \nu\mathbf{I}\bigr]^{-1} = \mathbf{B} + \mathbf{B}\mathbf{W}^{H}\mathbf{A}^{H}\bar{\mathbf{M}}^{H}\bigl[\mathbf{I} - \bar{\mathbf{M}}\mathbf{A}\mathbf{W}\mathbf{B}\mathbf{W}^{H}\mathbf{A}^{H}\bar{\mathbf{M}}^{H}\bigr]^{-1}\bar{\mathbf{M}}\mathbf{A}\mathbf{W}\mathbf{B}, \qquad (43)$$

where the second term results from using the Sherman–Morrison–Woodbury matrix inversion identity, after defining $\mathbf{B} = [\mathbf{W}^{H}\mathbf{A}^{H}\mathbf{A}\mathbf{W} + \nu\mathbf{I}]^{-1}$. Since $\mathbf{A}$ is circulant, $\mathbf{B}$ can be efficiently applied via the FFT, as explained in (17)–(20). The inversion in (43) cannot be computed via the FFT; however, its dimension corresponds to the number of unknown boundary pixels (the number of rows of $\bar{\mathbf{M}}$), usually much smaller than the image itself. As in [35, 40], we use the CG algorithm to solve the corresponding system; we confirmed experimentally that (as in [40]) taking only one CG iteration (initialized with the estimate from the previous outer iteration) yields the fastest convergence, without degrading the final result. Thus, in our experiments, we use a single CG iteration per outer iteration.
Approximately solving line 4 of Algorithm 2 via one (or even more) iterations of the CG algorithm, rather than an FFT-based exact solution, makes convergence more difficult to analyze, so we do not present a formal proof. In a related problem [23], it was shown experimentally that if the iterative solvers used to implement the minimizations defining the ADMM steps are warm-started (i.e., initialized with the values from the previous outer iteration), then the error sequences $\{\eta_{k}\}$ and $\{\nu_{k}\}$ are absolutely summable, as required by Theorem 1. Finally, we refer to this algorithm as FS-CG (frame synthesis conjugate gradient).
IV-C Frame-Based Analysis Formulation
IV-C1 Mask Decoupling (MD)
The frame-based analysis formulation corresponding to observation model (34) is

$$\hat{\mathbf{x}} \in \arg\min_{\mathbf{x}} \; \frac{1}{2} \bigl\| \mathbf{M}\mathbf{A}\mathbf{x} - \mathbf{y} \bigr\|_{2}^{2} + \tau \| \mathbf{P}\mathbf{x} \|_{1}. \qquad (44)$$

Following the MD approach introduced for the synthesis formulation, we map problem (44) into the form (2) by using $g_{1}$ as defined in (38), and keeping $\mathbf{H}^{(1)}$, $\mathbf{H}^{(2)}$, and $g_{2}$ as in the periodic BC case: (22), (23), and (12), respectively.

The only difference in the resulting instance of Algorithm 2 is the use of the proximity operator of the new $g_{1}$, as given in (40). In conclusion, the proposed ADMM-based algorithm for image deconvolution with unknown boundaries, under frame-based analysis regularization, is simply Algorithm 2, with line 4 implemented as in (24)–(25), and the proximity operators in line 6 given by (40) and (16). We refer to this algorithm as FA-MD (frame analysis mask decoupling). The computational cost of the algorithm is $O(n \log n)$ per iteration, as in the periodic BC case. Convergence of FA-MD is addressed by the following proposition (the proof of which is similar to that of Proposition 3).
IV-C2 Using the Reeves–Šorel Technique
Consider problem (44) and map it into the form (2) using (11)–(12), (23), and

$$\mathbf{H}^{(1)} = \mathbf{M}\mathbf{A}. \qquad (45)$$

The matrix inverse computed in line 4 of Algorithm 2 is now (with $\nu = \mu_{2}/\mu_{1}$)

$$\bigl[\mathbf{A}^{H}\mathbf{M}^{H}\mathbf{M}\mathbf{A} + \nu\mathbf{I}\bigr]^{-1}, \qquad (46)$$

which can no longer be computed as in (25), since matrix $\mathbf{M}\mathbf{A}$ is not circulant. Using the same steps as in (41)–(43), with $\mathbf{A}$ in place of $\mathbf{A}\mathbf{W}$, and defining $\mathbf{C} = [\mathbf{A}^{H}\mathbf{A} + \nu\mathbf{I}]^{-1}$, leads to

$$\bigl[\mathbf{A}^{H}\mathbf{M}^{H}\mathbf{M}\mathbf{A} + \nu\mathbf{I}\bigr]^{-1} = \mathbf{C} + \mathbf{C}\mathbf{A}^{H}\bar{\mathbf{M}}^{H}\bigl[\mathbf{I} - \bar{\mathbf{M}}\mathbf{A}\mathbf{C}\mathbf{A}^{H}\bar{\mathbf{M}}^{H}\bigr]^{-1}\bar{\mathbf{M}}\mathbf{A}\mathbf{C}. \qquad (47)$$

Since $\mathbf{A}$ is circulant, $\mathbf{C}$ can be efficiently computed via the FFT, as in (25). As in the synthesis case, the inverse in (47) is applied approximately, by taking one (warm-started) CG iteration (for the reason explained in Subsection IV-B2). We refer to the resulting algorithm as FA-CG (frame analysis conjugate gradient).
IV-D TV-Based Deconvolution with Unknown Boundaries
IV-D1 Mask Decoupling (MD)
Given the observation model in (34), TV-based deconvolution with unknown boundaries is formulated as

$$\hat{\mathbf{x}} \in \arg\min_{\mathbf{x}} \; \frac{1}{2} \bigl\| \mathbf{M}\mathbf{A}\mathbf{x} - \mathbf{y} \bigr\|_{2}^{2} + \tau \sum_{i=1}^{n} \bigl\| \mathbf{D}^{(i)}\mathbf{x} \bigr\|_{2}, \qquad (48)$$

with every element having the exact same meaning as above.

Following the MD approach, we map (48) into the form (2) using $g_{1}$ as defined in (38) and keeping all the other correspondences (28)–(30) unchanged. The only resulting change to Algorithm 3 is in line 7, which becomes

$$\mathbf{u}_{k+1}^{(1)} = \bigl(\mathbf{M}^{H}\mathbf{M} + \mu_{1}\mathbf{I}\bigr)^{-1} \bigl(\mathbf{M}^{H}\mathbf{y} + \mu_{1}\bigl(\mathbf{A}\mathbf{z}_{k+1} - \mathbf{d}_{k}^{(1)}\bigr)\bigr), \qquad (49)$$

as a consequence of applying the proximity operator of $g_{1}$, as given in (40). In summary, the ADMM-based algorithm for TV-based deconvolution with unknown boundaries has the exact same structure as Algorithm 3, with line 7 replaced by (49). We refer to this algorithm as TV-MD (TV mask decoupling). Finally, convergence of TV-MD is addressed in the following proposition (the proof of which is similar to that of Proposition 4).
Proposition 7. If $\mathbf{M}\mathbf{A}\mathbf{1} \neq \mathbf{0}$, then, for any $\tau > 0$ and any $\mu_{1}, \mu_{2} > 0$, the TV-MD algorithm converges to a solution of (48).
As for periodic BCs, the condition $\mathbf{M}\mathbf{A}\mathbf{1} \neq \mathbf{0}$ is very mild, and is satisfied by the convolution with any lowpass filter (for which $\mathbf{A}\mathbf{1} = c\,\mathbf{1}$, where $c$ is the DC gain, typically $c = 1$), provided that $m \geq 1$ (at least one pixel is observed).
IV-D2 Using the Reeves–Šorel Technique
To use the Reeves–Šorel technique to handle (48), the mapping into the form (2) is done as in (27), (28), and (30), with (29) replaced by (45). The consequence is a simple modification of Algorithm 3: in line 8, $\mathbf{A}$ is replaced by $\mathbf{M}\mathbf{A}$, while in line 6 the matrix inverse is redefined (instead of (33)) as the inverse of the matrix in (31) with $\mathbf{A}^{H}\mathbf{A}$ replaced by $\mathbf{A}^{H}\mathbf{M}^{H}\mathbf{M}\mathbf{A}$, i.e. (with $\nu = \mu_{2}/\mu_{1}$),