In Compressed Sensing (CS), a sparse signal $x \in \mathbb{R}^{n}$ can be recovered from a small set of linear measurements $y = \Phi x$, $\Phi \in \mathbb{R}^{m \times n}$, satisfying $m \ll n$, where $k$ is the number of nonzero elements in $x$. The results guaranteeing the recovery performance depend on the coherence, the Restricted Isometry Property (RIP) and the Restricted Orthogonality Constant (ROC) of the sensing matrix candes2005decoding, KashinEulerSq. In many applications, one obtains some a priori information about the partial support of the sparse solution to be recovered. For instance, in interior reconstruction in Computed Tomography, one has beforehand some a priori information corresponding to the support of the interior region Klann2015WaveletMF, localCBP. There are other applications, like recovering time-correlated signals vaswani2010modified, wherein prior-support constrained sparse recovery attains importance. Of late, support-constrained CS has caught the attention of several researchers, friedlander2011recovering, Klann2015WaveletMF, vaswani2010modified to name a few. In vaswani2010modified, the authors modified the $\ell_1$-norm objective by taking zero weights on the known partial support, thereby minimizing only the terms in the complement of the prior support set. In friedlander2011recovering, by considering general values for the weights, the authors established the stability and robustness of the weighted $\ell_1$-norm minimization problem in terms of the Restricted Isometry Constant (RIC). The authors of liu2014compressed studied similar performance guarantees in terms of the mutual coherence of the associated sensing matrix. The work in Chen2016RecoveryOS provided a less restrictive sufficient condition and, under some conditions, tighter error bounds for the weighted $\ell_1$-norm problem in terms of the RIC and ROC. More recently, the authors of Ge2018RecoveryOS established the stability and robustness of weighted $\ell_1$-norm minimization in terms of block RIP conditions on the underlying measurement matrix.
1.1 Motivation for our work
In applications like interior tomography, however, one is interested in recovery guarantees limited to the interior portion that is accounted for by the partial support. This is because only the recovery of the interior portion attains importance in such an application, and the reconstruction outside the interior portion is, in general, bad Farrokhi, Klann2015WaveletMF. Motivated by this, the present work proposes a new local recovery bound, in the sense that it pertains only to the prior support. It is to be emphasized here that by a prior support set $T$, we mean an arbitrary subset of the “full support” $\{1, 2, \ldots, n\}$, which, in general, does not have to be fully contained in the true support of $x$. This is in contrast to the existing results, which provide global bounds (that is, bounds on the entire solution support). Further, we demonstrate, both analytically and empirically, the conditions on the associated parameters that reduce the reconstruction error.
The paper is organized into five sections. In section 2, we provide a basic introduction to Compressed Sensing, the existing recovery bounds and a summary of our contribution. Section 3 presents the local recovery bound, followed by an analysis and comparison in section 4. The paper ends with concluding remarks in section 5.
2 Compressed sensing
Compressed Sensing (CS) is a technique that reconstructs a signal, which is compressible or sparse in some domain, from a small set of linear measurements. Let $\Sigma_k = \{x \in \mathbb{R}^{n} : \|x\|_{0} \le k\}$ be the set of all $k$-sparse signals in $\mathbb{R}^{n}$. Here $\|x\|_{0}$ stands for the number of nonzero components in $x$. The best $k$-term approximation of $x$ retains the at most $k$ largest-magnitude coordinates of $x$; the rest of the coordinates are set to zero. For simplicity, we denote it by $x_k$. For $\Phi \in \mathbb{R}^{m \times n}$ ($m \ll n$) and an error vector $e \in \mathbb{R}^{m}$, suppose $y \in \mathbb{R}^{m}$ is such that $y = \Phi x + e$. One may recover the sparsest solution of the noisy matrix system from the following minimization problem cai2010stable:
where $\mathcal{B}$ is a bounded subset of $\mathbb{R}^{m}$. For the noiseless case, $\mathcal{B} = \{0\}$, and for the noisy case $\mathcal{B} = \{z \in \mathbb{R}^{m} : \|z\|_{2} \le \epsilon\}$. Since the $\ell_0$ minimization problem becomes NP-hard as the dimension increases, the convex relaxation of problem (1) has been proposed as
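In practice, (2) is solved with a convex-optimization solver. As an illustrative, numpy-only stand-in for sparse recovery (a greedy method, not the reconstruction scheme analyzed in this paper), the Orthogonal Matching Pursuit sketch below recovers a 2-sparse signal from an identity-plus-Hadamard dictionary; that dictionary has coherence $\mu = 1/4$, for which the classical condition $k < (1+1/\mu)/2 = 2.5$ guarantees exact greedy recovery:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of Phi
    that best explain y, refitting by least squares at each step."""
    n = Phi.shape[1]
    support, residual = [], y.copy()
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Dictionary [I, H/4]: identity plus a normalized 16x16 Hadamard matrix,
# built by the Sylvester recursion; its coherence is mu = 1/4.
H = np.array([[1.0]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])
Phi = np.hstack([np.eye(16), H / 4.0])

x = np.zeros(32)
x[3], x[20] = 2.0, -1.5            # a 2-sparse signal
x_hat = omp(Phi, Phi @ x, k=2)
print(np.allclose(x, x_hat))       # exact recovery
```

The recovery here is guaranteed by the coherence condition rather than luck: both dictionary blocks are orthonormal, and every cross inner product has magnitude exactly $1/4$.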
The coherence $\mu = \mu(\Phi)$ of a matrix $\Phi$ is the largest absolute normalized inner product between its distinct columns, that is,
\[
\mu(\Phi) = \max_{1 \le i < j \le n} \frac{|\langle \phi_i, \phi_j \rangle|}{\|\phi_i\|_{2} \, \|\phi_j\|_{2}},
\]
where $\phi_i$ denotes the $i$-th column in $\Phi$. For a $k$-sparse vector $x$ and a matrix $\Phi$ with unit-norm columns, it is known elad2010sparse that the following inequality holds:
\[
\bigl(1 - (k-1)\mu\bigr) \|x\|_{2}^{2} \;\le\; \|\Phi x\|_{2}^{2} \;\le\; \bigl(1 + (k-1)\mu\bigr) \|x\|_{2}^{2}.
\]
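The definition of $\mu$ and the coherence-based norm inequality can be checked numerically. The following sketch (the matrix and sizes are arbitrary illustrative choices) verifies the inequality for a random $k$-sparse vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random sensing matrix with unit-norm columns (m = 20, n = 40).
Phi = rng.standard_normal((20, 40))
Phi /= np.linalg.norm(Phi, axis=0)

# Coherence: largest absolute off-diagonal entry of the Gram matrix.
G = Phi.T @ Phi
mu = np.max(np.abs(G - np.diag(np.diag(G))))

# Check (1-(k-1)mu)||x||^2 <= ||Phi x||^2 <= (1+(k-1)mu)||x||^2
# for a random k-sparse vector x.
k = 3
x = np.zeros(40)
idx = rng.choice(40, size=k, replace=False)
x[idx] = rng.standard_normal(k)
measured = np.linalg.norm(Phi @ x) ** 2
energy = np.linalg.norm(x) ** 2
print((1 - (k - 1) * mu) * energy <= measured <= (1 + (k - 1) * mu) * energy)
```

The inequality holds for every draw, since it follows from Gershgorin's theorem applied to the Gram matrix of any $k$ unit-norm columns.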
The $k$-th Restricted Isometry Constant ($k$-RIC) $\delta_k$ of a matrix $\Phi$ is the smallest number such that
\[
(1 - \delta_k) \|x\|_{2}^{2} \;\le\; \|\Phi x\|_{2}^{2} \;\le\; (1 + \delta_k) \|x\|_{2}^{2}
\]
for all $k$-sparse vectors $x$. The Restricted Orthogonality Constant (ROC) $\theta_{k,k'}$ of a matrix $\Phi$ is the smallest real number such that
\[
|\langle \Phi_{T_1} u, \Phi_{T_2} v \rangle| \;\le\; \theta_{k,k'} \, \|u\|_{2} \|v\|_{2}
\]
for all disjoint sets $T_1$ and $T_2$ with $|T_1| \le k$ and $|T_2| \le k'$ such that $k + k' \le n$, and for all vectors $u$ and $v$. Here, $\Phi_{T}$ denotes the submatrix of columns of $\Phi$ restricted to the indices in $T$. D. Donoho and X. Huo donoho2001uncertainty have shown an exact recovery condition in the noiseless case for the $\ell_1$ problem in terms of mutual coherence: if $x$ is a $k$-sparse vector, then $k < \frac{1}{2}\left(1 + \frac{1}{\mu}\right)$ is an exact recovery condition for the $\ell_1$ problem. T. Cai et al. cai2010stable have extended this result to the noisy case.
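For small matrices, the $k$-RIC can be computed exactly by enumerating all $k$-column submatrices. This also lets one check the classical coherence bound $\delta_k \le (k-1)\mu$ for unit-norm columns (the matrix below is an arbitrary illustrative choice):

```python
import numpy as np
from itertools import combinations

def exact_ric(Phi, k):
    """k-th RIC of Phi: largest deviation of the squared singular
    values of any k-column submatrix from 1."""
    delta = 0.0
    for S in combinations(range(Phi.shape[1]), k):
        s = np.linalg.svd(Phi[:, list(S)], compute_uv=False)
        delta = max(delta, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))
    return delta

rng = np.random.default_rng(1)
Phi = rng.standard_normal((8, 12))
Phi /= np.linalg.norm(Phi, axis=0)       # unit-norm columns

G = np.abs(Phi.T @ Phi)
mu = np.max(G - np.diag(np.diag(G)))

for k in (2, 3):
    # classical bound: delta_k <= (k - 1) * mu for unit-norm columns
    print(exact_ric(Phi, k) <= (k - 1) * mu + 1e-12)
```

Exact enumeration is exponential in $k$, which is why RIC conditions are certified through coherence or random-matrix arguments rather than computed directly in realistic dimensions.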
(T. Cai et al. cai2010stable): Consider the model $y = \Phi x + e$ with $\|e\|_{2} \le \epsilon$. Suppose $x$ is in $\mathbb{R}^{n}$, and $x_k$ represents its best $k$-term approximation with
Let $\hat{x}$ be the minimizer of (2). Then $\hat{x}$ obeys
2.1 Compressed sensing with prior support constraint
It may be noted that the reconstruction method given by problem (2) is non-adaptive, as no information about $x$ is used in it. It can, however, be made partially adaptive by imposing constraints on the support of the solution to be obtained. In friedlander2011recovering, liu2014compressed, vaswani2010modified (and the references therein), the authors modified the cost function of the $\ell_1$ problem by incorporating the prior support information into the reconstruction process, as detailed in the following subsection.
2.2 Previous Work
Suppose $T$ is the known partial support information of the signal $x$, which is expected in the recovered solution. Suppose $T_0$ stands for the support of the best $k$-term approximation $x_k$ of $x$, where $x$ is the actual solution of $y = \Phi x + e$. In vaswani2010modified, the authors modified the $\ell_1$ problem by considering zero weights on $T$ and posed it as follows:
In the above problem, the weights are set to 1 on $T^{c}$ and to 0 on $T$. In friedlander2011recovering, nevertheless, the authors posed this problem for a general weight vector $w$ and an arbitrary subset $T$ in the following way:
where $w \in [0,1]^{n}$ with $w_i = \omega$ for $i \in T$ and $w_i = 1$ for $i \in T^{c}$.
$T$ can be drawn from an estimate of the support of the signal or from its largest coefficients. The stability result proposed in friedlander2011recovering is as follows:
(M. Friedlander et al. friedlander2011recovering): Let $x \in \mathbb{R}^{n}$ and let $x_k$ be its best $k$-term approximation, supported on $T_0$. Let $T$ be an arbitrary set and define $\rho$ and $\alpha$ such that $|T| = \rho k$ and $|T \cap T_0| = \alpha \rho k$. Suppose that there exists an $a \in \frac{1}{k}\mathbb{Z}$, with $a \ge (1-\alpha)\rho$, $a \ge 1$, and the measurement matrix $\Phi$ has RIP with
where $\gamma = \omega + (1-\omega)\sqrt{1 + \rho - 2\alpha\rho}$ for some given $\omega \in [0,1]$. Then the solution $\hat{x}$ to (6) obeys
It has been shown in friedlander2011recovering that a signal can be stably and robustly recovered from the weighted $\ell_1$ problem if at least $50\%$ of the partial support information is accurate. It is worth mentioning here that the above stability result is stated in terms of the RIC. In liu2014compressed, however, the authors proposed a similar stability result, albeit in terms of the coherence parameter, which is summarized as follows:
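The weighted objective in (6) and its two extreme cases can be illustrated directly: $\omega = 1$ recovers the ordinary $\ell_1$-norm, while $\omega = 0$ recovers the modified objective of vaswani2010modified, which penalizes only the complement of $T$ (the vector, set and weights below are arbitrary illustrative values):

```python
import numpy as np

def weighted_l1(x, T, omega):
    """Weighted l1-norm: weight omega on the prior support T, 1 elsewhere."""
    w = np.ones_like(x)
    w[list(T)] = omega
    return np.sum(w * np.abs(x))

x = np.array([3.0, -1.0, 0.0, 2.0, -0.5])
T = {0, 3}                        # prior support estimate

print(weighted_l1(x, T, 1.0))     # ordinary l1-norm of x
print(weighted_l1(x, T, 0.0))     # l1-norm on the complement of T
print(weighted_l1(x, T, 0.5))     # intermediate weighting
```

Intermediate values of $\omega$ interpolate between trusting the prior support completely and ignoring it.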
(Haixiao et al. liu2014compressed): Let $x \in \mathbb{R}^{n}$, and let $x_k$ be its best $k$-term approximation, supported on $T_0$. Let $T$ be an arbitrary set and define $\rho$ and $\alpha$ such that $|T| = \rho k$ and $|T \cap T_0| = \alpha \rho k$. Suppose that
where , and . Then the solution $\hat{x}$ to (6) obeys
The authors of Chen2016RecoveryOS proposed less restrictive sufficient conditions and, under some conditions, a tighter bound relative to the standard $\ell_1$-norm problem for the weighted $\ell_1$-norm problem in terms of the RIC and ROC. The stability result is as follows:
(Chen et al. Chen2016RecoveryOS): Let $x \in \mathbb{R}^{n}$ be an arbitrary signal and $x_k$ its best $k$-term approximation, supported on $T_0$ with $|T_0| = k$. Let $T$ be an arbitrary set and define $\rho$ and $\alpha$ such that $|T| = \rho k$ and $|T \cap T_0| = \alpha \rho k$. Let $w \in [0,1]^{n}$ with and suppose $\hat{x}$ is the minimizer of (6). If for some with , where with . Let , for and =max, for . Then
A vector $x \in \mathbb{R}^{n}$, partitioned as $x = (x[1]^{T}, x[2]^{T}, \ldots, x[M]^{T})^{T}$, where $x[i]$ is the $i$-th block of $x$ of size $n_i$ with $\sum_{i=1}^{M} n_i = n$, is said to be block $k$-sparse over the partition $\mathcal{I} = \{n_1, \ldots, n_M\}$ if the number of non-zero blocks in $x$ is at most $k$. Recently, the authors of Ge2018RecoveryOS introduced the following weighted block $\ell_1$-norm problem for given disjoint prior block support estimates as
where the weight $w$ is defined by and for . Note that when $n_i = 1$ for all $i$, block sparsity reduces to the standard sparsity, and if the number of support estimates is one, then the weighted block $\ell_1$-norm problem (15) reduces to the weighted $\ell_1$-norm problem in (6). In this particular case, the stable recovery result of (15) in Ge2018RecoveryOS reduces to the following result:
(Ge et al. Ge2018RecoveryOS): For an arbitrary signal $x \in \mathbb{R}^{n}$, which satisfies $y = A x + e$ with $\|e\|_{2} \le \epsilon$, let $x_k$ be its best $k$-term approximation and . Suppose that $\hat{x}$ is the minimizer of (6) and $T$ is the prior block support of $x$ satisfying , . If $A$ satisfies the RIP with for , where and for . For , $d = 1$ if and if . Then
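The notion of block sparsity used above simply counts nonzero blocks rather than nonzero entries; a small sketch with an illustrative partition:

```python
import numpy as np

def block_sparsity(x, sizes):
    """Number of nonzero blocks of x under the partition given by sizes."""
    assert sum(sizes) == len(x)
    bounds = np.cumsum([0] + list(sizes))
    return sum(np.any(x[a:b] != 0) for a, b in zip(bounds[:-1], bounds[1:]))

x = np.array([0.0, 0.0, 1.2, -0.7, 0.0, 0.0, 0.0, 3.1])
sizes = [2, 2, 3, 1]              # partition of n = 8 into 4 blocks

print(block_sparsity(x, sizes))   # 2 nonzero blocks
print(np.count_nonzero(x))        # 3 nonzero entries
# with unit blocks, block sparsity reduces to ordinary sparsity
print(block_sparsity(x, [1] * 8))
```

As the text notes, taking every block size equal to one collapses the block notion back to the standard $\|x\|_{0}$ count.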
3 Local recovery bounds
As discussed already, the present work deals with obtaining a recovery bound on the prior support set $T$ for the weighted $\ell_1$ problem (6). It is shown later that when $T$ contains the indices corresponding to the largest-magnitude entries in $x$, our error bound is much smaller than the global error bounds (8), (11), (13) and (16) for the weighted $\ell_1$-norm problem. Further, in most cases the sufficient condition on the sparsity $k$ in this bound can be shown to be less pessimistic than the corresponding conditions in the standard $\ell_1$-norm (4) and weighted $\ell_1$-norm (10) cases. Our contribution may be summarized as the following result:
Let $x$ be in $\mathbb{R}^{n}$ satisfying $y = \Phi x + e$, where and with . Let $x_k$ be its best $k$-term approximation, supported on $T_0$. Let $T$ be an arbitrary set. Define $\rho$ and $\alpha$ such that $|T| = \rho k$ and $|T \cap T_0| = \alpha \rho k$. Suppose that
then the solution $\hat{x}$ of (6), restricted to $T$, obeys
Proof: Suppose . From the definition of , we have
Consider that . Then, we have
Now, since and by the inequality (3), it follows that
which results in
An investigation into the choices of , , and that result in smaller values for the RHS of (19) is presented in the following section.
It is clear that the bound on the sparsity $k$ in (18) becomes less restrictive for small values of and . For , the bound reduces to , which is an increasing function of . Similarly, for , it is easy to verify that the $k$-bound is a decreasing function of , and the largest bound is obtained at and .
4 Analysis and comparison of ‘local’ and ‘global’ error bounds
In this section, we analyze the behaviour of the bound provided in (19) in terms of the associated parameters $\alpha$ and $\rho$.
It may be noted that $\rho$ determines the relative size of $T$ with respect to the size of the support of the best $k$-term approximation of $x$. From (20), it can be seen that the coefficients and decrease with a decrease in . Again, when , the -term in the denominator of the coefficients is , which, being positive, increases with , making the coefficients decrease with . When , however, the -term in the denominator of the coefficients is , which is negative as , since is a nonempty subset of . As a result, it decreases with an increase in , which makes the coefficients increase with . The stated behaviour of the coefficients can be seen in Fig. 1. In generating the plots in this figure, as an example, we have taken , considering the coherence of the underlying matrix as , as in liu2014compressed. Note that and lead to three possibilities for the values of , viz. and . Similarly, when , takes and as possible values.
Since the term is controlled by , for the reconstruction error in (19) to be small, the multiplier of should be small, where . The first two terms in the error expression are small, as $x_k$ is the best $k$-term approximation of $x$ and $T_0$ is its support, which can be further reduced by choosing an optimal . In order to make small, we need to be small. This is possible if $T$ contains the largest components of $x$ or if the cardinality of is as small as possible. The latter case can happen when $\alpha$ is close to 1. In applications like interior tomography, this condition translates to the interior portion possessing the dominating pixels. As the objective of the paper is not related to tomography, we do not go into the details of it any further.
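The observation that $T$ should capture the dominating entries of $x$ admits a direct numerical check: among all subsets of a fixed size, the set of largest-magnitude indices minimizes the $\ell_1$ mass of $x$ outside $T$ (the vector below is an arbitrary illustrative choice):

```python
import numpy as np
from itertools import combinations

def tail_l1(x, T):
    """l1-norm of x restricted to the complement of T."""
    mask = np.ones(len(x), dtype=bool)
    mask[list(T)] = False
    return np.sum(np.abs(x[mask]))

x = np.array([5.0, -0.2, 3.0, 0.1, -4.0, 0.3])
t = 3                              # size of the prior support set

# T holding the t largest-magnitude entries ...
T_best = {int(i) for i in np.argsort(-np.abs(x))[:t]}
# ... attains the minimum tail over all subsets of size t.
best = min(tail_l1(x, T) for T in combinations(range(len(x)), t))
print(sorted(T_best), np.isclose(tail_l1(x, T_best), best))
```

This brute-force check is only feasible for tiny examples, but the greedy choice of $T$ is optimal in general by a simple exchange argument.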
We have computed the values of and against different and , which are depicted in Fig. 2. Here $x$ has been taken to be a normalized vector drawn from a Gaussian distribution with the previously stated values for and . The plots in this figure indicate that is large for , and for a given , it increases with . The smallest value for is obtained at . Again, is large for . For smaller values of , decreases with , and for , decreases with . This is due to the effect of . Overall, the error bound takes its minimum value around and .
4.1 Comparison of bounds on $k$
Though the objective of the present work is to propose an error bound restricted to a nonempty subset $T$, a natural question arises as to whether the sparsity bound in (18) is better behaved than the ones associated with the global error bounds in the standard case (4) and the weighted case (10). We compare the $k$-bounds numerically in terms of a $k$-ratio. By a $k$-ratio in the standard case, we mean the right-hand side of our $k$-bound in (18) divided by its counterpart in (4). Similarly, we consider the $k$-ratio in the weighted $\ell_1$-norm case with respect to (10). Fig. 3 provides a comparison of $k$-ratios, which has been generated by considering, as examples, , and . It can be seen in both cases that the $k$-ratios are strictly greater than 1 for all and for all . It is clear from the graphs that, even for larger values of , our bound is less pessimistic than those of (4) and (10) for smaller values of for all . But since we are interested in finding error bounds on a subset $T$, which has a smaller size than the support of the best $k$-term approximation, we do not consider larger values of .
4.2 Comparison of bounds on error
The local error bound in Theorem 3.1 (that is, (19)) becomes relevant if its right-hand side is less than that of the global bounds in Theorems 2.2, 2.3, 2.4 and 2.5. Since the stated right-hand sides are functions of the solution vector with various associated parameters and different underlying conditions, comparing them for a general solution vector does not look practical. In view of this, we compare the coefficients (that is, and , with their respective counterparts in Theorems 2.2, 2.3, 2.4 and 2.5) when the associated error expressions coincide. The associated error parts coincide if is small, that is, if $T$ contains the largest-magnitude entries of $x$. Consider a particular case of this in which . In such a case, the error terms associated with the coefficients in the global as well as the local error bounds coincide. Hence, in this case, it is enough to compare the corresponding coefficients and (that is, and are respectively compared with and for ).
The coefficients in (12) and (20) can be compared directly, as both of them are in terms of the mutual coherence parameter $\mu$. The coefficients in (9), (14) and (17) are in terms of the RIC and ROC. In order to compare them with the corresponding ones in (20), which are in terms of $\mu$, we use the upper bounds candes2005decoding, elad2010sparse: $\delta_k \le (k-1)\mu$ and $\theta_{k,k'} \le \mu\sqrt{k k'}$. For comparison with (9), we need a constant such that and . We take the simple choice . Again, for comparison with (14), we need two constants and . We set these to , which is permissible. Finally, for comparison with (17), we need a constant . In our case, as , we take the simple choice . The plots in Fig. 4, comparing the coefficients in the stated setting, are denoted by the legends ‘Local’, ‘Global(1)’, ‘Global(2)’, ‘Global(3)’ and ‘Global(4)’, which stand respectively for the coefficients in (20), (12), (9), (14) and (17). From Fig. 4, it is clear that our coefficients are much smaller, implying that the right-hand side of our bound is small compared to the global bounds, at least in the case where $T$ contains the indices corresponding to the largest-magnitude entries in $x$, for all . As highlighted already, a comparison of the bounds in other cases does not look feasible.
5 Concluding remarks
The present work has proposed a local recovery bound for prior-support constrained compressed sensing, while the existing bounds are global in nature. In particular, an error estimate restricted to the prior support, providing a recovery guarantee, has been derived, along with a bound on the sparsity of the solution to be recovered.
The first author is thankful to the UGC, Government of India, (JRF/2016/409284) for its financial support. The second author gratefully acknowledges the support received from the MHRD, Government of India.