Dictionary LASSO: Guaranteed Sparse Recovery under Linear Transformation

04/30/2013 ∙ by Ji Liu, et al. ∙ Arizona State University ∙ University of Wisconsin-Madison

We consider the following signal recovery problem: given a measurement matrix Φ ∈ R^{n×p} and a noisy observation vector c ∈ R^n constructed from c = Φθ* + ϵ, where ϵ ∈ R^n is a noise vector whose entries follow an i.i.d. centered sub-Gaussian distribution, how can we recover the signal θ* if Dθ* is sparse under a linear transformation D ∈ R^{m×p}? One natural method using convex optimization is to solve the following problem: min_θ (1/2)‖Φθ − c‖² + λ‖Dθ‖₁. This paper provides an upper bound on the estimate error and shows the consistency property of this method, assuming that the design matrix Φ is a Gaussian random matrix. Specifically, we show that 1) in the noiseless case, if the condition number of D is bounded and the measurement number n ≥ Ω(s log(p)), where s is the sparsity number, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of D is bounded and the measurement number grows faster than s log(p), that is, s log(p) = o(n), the estimate error converges to zero with probability 1 when p and s go to infinity. Our results are consistent with those for the special case D = I_{p×p} (equivalently LASSO) and improve the existing analysis. The condition number of D plays a critical role in our analysis. We consider the condition number in two cases: the fused LASSO and the random graph. The condition number in the fused LASSO case is bounded by a constant, while the condition number in the random graph case is bounded with high probability if m/p (i.e., #edge/#vertex) is larger than a certain constant. Numerical simulations are consistent with our theoretical results.


1 Introduction

The sparse signal recovery problem has been well studied recently, from theory to applications, in many areas including compressive sensing (Candès and Plan, 2009; Candès and Tao, 2007), statistics (Meinshausen et al., 2006; Ravikumar et al., 2008; Bunea et al., 2007; Lounici, 2008; Koltchinskii and Yuan, 2008), machine learning (Zhao and Yu, 2006; Zhang, 2009b; Wainwright, 2009; Liu et al., 2012), and signal processing (Romberg, 2008; Donoho et al., 2006; Zhang, 2009a). The key idea is to use the ℓ1 norm to relax the ℓ0 norm (the number of nonzero entries). This paper considers a specific type of sparse signal recovery problem in which the signal is assumed to be sparse under a linear transformation D. It includes the well-known fused LASSO (Tibshirani et al., 2005) as a special case. The theoretical properties of this problem are not yet well understood, although it has achieved success in many applications (Chan, 1998; Tibshirani et al., 2005; Candès et al., 2006; Sharpnack et al., 2012). Formally, we define the problem as follows: given a measurement matrix Φ ∈ R^{n×p} and a noisy observation vector c ∈ R^n constructed from c = Φθ* + ϵ, where ϵ ∈ R^n is a noise vector whose entries follow an i.i.d. centered sub-Gaussian distribution (the "identical distribution" assumption can be removed, see Zhang (2009a); for simplicity of analysis, we enforce this condition throughout this paper), how can we recover the signal θ* if Dθ* is sparse, where D ∈ R^{m×p} is a constant matrix dependent on the specific application (we study the most general case of D, so our analysis is applicable whether D is wide or tall)? A natural model for this type of sparsity recovery problem is:

min_θ (1/2)‖Φθ − c‖² + λ‖Dθ‖₀    (1)

The least squares term arises from the sub-Gaussian noise assumption and the second term encodes the sparsity requirement. Since this combinatorial optimization problem is NP-hard, the conventional ℓ1 relaxation technique can be applied to make it tractable, resulting in the following convex model:

min_θ (1/2)‖Φθ − c‖² + λ‖Dθ‖₁    (2)
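As a concrete illustration of model (2) (not the authors' implementation), the following minimal sketch solves the dictionary LASSO with CVXPY on a small synthetic instance; the dimensions, the noise level, and the choice of λ are arbitrary illustrative values.

```python
import numpy as np
import cvxpy as cp

# Synthetic instance of model (2): c = Phi @ theta_star + noise,
# where D @ theta_star is sparse (D is a first-difference matrix here).
rng = np.random.default_rng(0)
n, p = 80, 200
Phi = rng.standard_normal((n, p))                        # Gaussian measurement matrix
D = np.eye(p, k=1)[:p - 1] - np.eye(p)[:p - 1]           # (p-1) x p difference matrix
theta_star = np.repeat(rng.standard_normal(5), p // 5)   # piecewise-constant signal
c = Phi @ theta_star + 0.01 * rng.standard_normal(n)

theta = cp.Variable(p)
lam = 0.1                                                # illustrative value only
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(Phi @ theta - c)
                                 + lam * cp.norm1(D @ theta)))
problem.solve()
print("relative error:", np.linalg.norm(theta.value - theta_star) / np.linalg.norm(theta_star))
```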

This model includes many well-known sparse formulations as special cases; a construction sketch for the corresponding dictionary matrices D follows the list:

  • The fused LASSO (Tibshirani et al., 2005; Friedman et al., 2007) solves

    min_β (1/2)‖Φβ − c‖² + λ₁‖β‖₁ + λ₂‖D_TV β‖₁,    (3)

    where D_TV ∈ R^{(p−1)×p} is defined as the total variation matrix, that is, each row has the form (0, …, 0, −1, 1, 0, …, 0), so that ‖D_TV β‖₁ = Σ_i |β_{i+1} − β_i|. One can write Eq. (3) in the form of Eq. (2) by letting λ₁ = λ₂ = λ and letting D be the conjunction (vertical stack) of the identity matrix and the total variation matrix, that is, D = [I_p; D_TV].

  • The general d-dimensional changing point detection problem (Candès et al., 2006; Needell and Ward, 2012a, b) can be expressed by

    min_Θ (1/2)‖Φ vec(Θ) − c‖² + λ Σ_{i∈I} Σ_{j∈N(i)} |Θ_i − Θ_j|,    (4)

    where Θ is a d-dimensional tensor with a stepwise structure, I is the set of indices, and N(i) collects the neighbors of index i along each dimension. The second term measures the total variation. A changing point is defined as a point where the signal value changes. One can properly define D to rewrite Eq. (4) in the form of Eq. (2). In addition, if the structure of the signal is piecewise constant, one can replace the second term by the corresponding total variation penalty built from difference operators along each dimension; the resulting problem can be written in the form of Eq. (2) as well.

  • The second term of (4), that is, the total variation, is defined as the sum of differences between neighboring entries (or nodes). A graph can generalize this definition by using edges, rather than entry indices, to define neighboring entries. Let G = (V, E) be a graph. One has

    min_θ (1/2)‖Φθ − c‖² + λ‖D_G θ‖₁,    (5)

    where ‖D_G θ‖₁ = Σ_{(i,j)∈E} |θ_i − θ_j| defines the total variation over the graph G. The edge between nodes i and j corresponds to a row of the matrix D_G with zero at all entries except the i-th and the j-th (set to 1 and −1). Taking Φ = I and D = D_G, one obtains the edge LASSO (Sharpnack et al., 2012).
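The dictionaries above are easy to construct explicitly. The sketch below (our own illustration, with an arbitrary sign convention on each difference row) builds the total variation matrix, the fused-LASSO stack, and a graph dictionary, and checks that the graph penalty on a path reduces to the ordinary total variation.

```python
import numpy as np

def difference_matrix(p):
    """(p-1) x p total variation matrix: row i is e_{i+1} - e_i."""
    return np.eye(p, k=1)[:p - 1] - np.eye(p)[:p - 1]

def fused_lasso_dictionary(p):
    """Vertical stack of the identity and the total variation matrix, D = [I_p; D_TV]."""
    return np.vstack([np.eye(p), difference_matrix(p)])

def graph_dictionary(edges, p):
    """One row per edge (i, j): +1 at column i, -1 at column j, zeros elsewhere."""
    D = np.zeros((len(edges), p))
    for row, (i, j) in enumerate(edges):
        D[row, i], D[row, j] = 1.0, -1.0
    return D

# Sanity check: on a path graph, the graph penalty equals the 1-D total variation.
p = 6
theta = np.array([1.0, 1.0, 3.0, 3.0, 3.0, 0.0])
path_edges = [(i, i + 1) for i in range(p - 1)]
assert np.isclose(np.abs(graph_dictionary(path_edges, p) @ theta).sum(),
                  np.abs(difference_matrix(p) @ theta).sum())
print(fused_lasso_dictionary(4))
```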

This paper studies the theoretical properties of the dictionary LASSO in (2) by providing an upper bound on the estimate error ‖θ̂ − θ*‖, where θ̂ denotes the estimate. The consistency property of this model is shown by assuming that the design matrix Φ is a Gaussian random matrix. Specifically, we show that 1) in the noiseless case, if the condition number of D is bounded and the measurement number n ≥ Ω(s log p), where s is the sparsity number, then the true solution can be recovered under some mild conditions with high probability; and 2) in the noisy case, if the condition number of D is bounded and the measurement number grows faster than s log p, that is, s log p = o(n), then the estimate error converges to zero with probability 1 under some mild conditions as the dimensions go to infinity. Our results are consistent with those for the special case D = I (equivalently LASSO) and improve the existing analysis in Candès et al. (2011); Vaiter et al. (2013). To the best of our knowledge, this is the first work that establishes the consistency properties for the general problem (2). The condition number of D plays a critical role in our analysis. We consider the condition number in two cases: the fused LASSO and the random graph. The condition number in the fused LASSO case is bounded by a constant, while the condition number in the random graph case is bounded with high probability if m/p (that is, #edge/#vertex) is larger than a certain constant. Numerical simulations are consistent with our theoretical results.

1.1 Notations and Assumptions

Define the restricted constants ρ⁻ and ρ⁺ as the minimal and maximal values of ‖Φv‖²/‖v‖² over a restricted set of vectors v, where the two sparsity parameters are nonnegative integers, D is the dictionary matrix, and the restricted set involves the union of all subspaces spanned by the corresponding number of columns of D. Note that the length of v is the sum of the dimension of the free part and the number of rows of D (which is in general not equal to p). The definition of ρ⁻ and ρ⁺ is inspired by the D-RIP constant in Candès et al. (2011). Recall that the D-RIP constant δ_s is defined as the smallest quantity such that

(1 − δ_s)‖v‖² ≤ ‖Φv‖² ≤ (1 + δ_s)‖v‖²

holds for all v in the union of subspaces spanned by any s columns of D. One can verify that ρ⁻ > 0 if Φ satisfies the D-RIP condition in terms of the sparsity s and the dictionary D. We write ρ⁻_s and ρ⁺_s for short.

Denote the compact singular value decomposition (SVD) of D as D = UΣV^⊤. Let D† denote the pseudo-inverse of D, that is, D† = VΣ⁻¹U^⊤. One can verify that D†D = VV^⊤. σ_min(D) denotes the minimal nonzero singular value of D and σ_max(D) denotes the maximal one, that is, the spectral norm ‖D‖. One has ‖D†‖ = 1/σ_min(D) and ‖D‖ = σ_max(D). Define the condition number

κ(D) := σ_max(D)/σ_min(D).

Let S be the support set of Dθ*, that is, a subset of {1, …, m}, with |S| = s. Denote S̄ as its complementary index set with respect to {1, …, m}. Without loss of generality, we assume that D does not contain zero rows. Assume that c = Φθ* + ϵ, where ϵ ∈ R^n and all entries ϵ_i are i.i.d. centered sub-Gaussian random variables with sub-Gaussian norm σ (readers who are not familiar with the sub-Gaussian norm can treat σ as the standard deviation of a Gaussian random variable). In discussing the dimensions of the problem and how they are related to each other in the limit (as n and p both approach ∞), we make use of order notation. If α and β are both positive quantities that depend on the dimensions, we write α = O(β) if α can be bounded by a fixed multiple of β for all sufficiently large dimensions; we write α = Ω(β) if β = O(α). We write α = o(β) if, for any positive constant ϕ, we have α ≤ ϕβ for all sufficiently large dimensions. We write α = Θ(β) if both α = O(β) and β = O(α). Throughout this paper, a Gaussian random matrix means that all entries follow the i.i.d. standard Gaussian distribution N(0, 1). Denote the maximal column norm of Φ as ‖Φ‖_{2,∞} := max_i ‖Φ_i‖, where Φ_i is the i-th column of Φ.
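For completeness, one standard definition of the sub-Gaussian norm is the following (see Vershynin, 2011); the paper's convention may differ by absolute constants.

```latex
\|X\|_{\psi_2} \;=\; \sup_{q \ge 1} \, q^{-1/2} \left( \mathbb{E}\,|X|^{q} \right)^{1/q},
\qquad X \text{ is sub-Gaussian} \;\iff\; \|X\|_{\psi_2} < \infty .
```

For a Gaussian variable X ∼ N(0, σ²), ‖X‖_{ψ2} is within an absolute constant factor of σ, which is why σ can be read as a standard deviation above.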

1.2 Related Work

Candès et al. (2011) proposed the following formulation to solve the problem considered in this paper:

min_θ ‖Dθ‖₁   subject to   ‖Φθ − c‖ ≤ ε,    (6)

where D is assumed to have orthogonal columns and ε is taken as an upper bound on ‖ϵ‖. They showed that the estimate error is bounded by C₀ε + C₁‖Dθ* − (Dθ*)_s‖₁/√s with high probability if Φ is a Gaussian random matrix (note that the "Gaussian random matrix" defined in Candès et al. (2011) is slightly different from ours: there, Φ is a Gaussian random matrix if each entry of Φ is generated from N(0, 1/n); see Section 1.5 in Candès et al. (2011). Here we only restate the result in Candès et al. (2011) using our definition of Gaussian random matrices.) with n ≥ Ω(s log(p/s)), where C₀ and C₁ are two constants. Letting ε = 0 and using the fact that Dθ* is s-sparse, the error bound turns out to be 0. This result shows that in the noiseless case, with high probability, the true signal can be exactly recovered. In the noisy case, assume that the ϵ_i (i = 1, …, n) are i.i.d. centered sub-Gaussian random variables, which implies that ‖ϵ‖ is bounded by Θ(√n) with high probability. Note that since the measurement matrix is scaled by 1/√n under the definition of "Gaussian random matrix" in Candès et al. (2011), the noise vector should be rescaled similarly. In other words, ε should be bounded by Θ(1) rather than Θ(√n), which implies that the estimate error bound in Candès et al. (2011) converges to a constant asymptotically.

Needell and Ward (2012a, b) studied the formulation in Eq. (6) and considered the special case in which D is the total variation matrix corresponding to a d-dimensional signal θ* with sparsity level s. Their analysis shows that if the measurement matrix Φ is properly designed and the measurement number n is large enough, then in the noiseless case (that is, ϵ = 0) the true signal can be exactly recovered with high probability; in the noisy case, the estimate error is bounded by a quantity proportional to ‖ϵ‖ (up to a logarithmic factor). Following the analysis above (‖ϵ‖ = Θ(√n) for sub-Gaussian noise), one can see that this estimate error bound diverges as n goes to infinity.

Nam et al. (2012) considered the noiseless case and analyzed the formulation

min_θ ‖Dθ‖₁   subject to   Φθ = c,    (7)

assuming all rows of D to be in general position, that is, any p rows of D are linearly independent; this assumption is violated by the fused LASSO. A sufficient condition to recover the true signal was proposed using the cosparse analysis.

Vaiter et al. (2013) also considered the formulation in Eq. (2), but mainly gave a robustness analysis for this model using the cosparse technique. A sufficient condition (different from that of Nam et al. (2012)) to exactly recover the true signal was given in the noiseless case. In the noisy case, they took λ to be a value proportional to ‖ϵ‖ and proved that the estimate error is bounded by a multiple of ‖ϵ‖ under certain conditions. However, they did not consider Gaussian ensembles for Φ; see Vaiter et al. (2013, Section 3.B).

The fused LASSO, a special case of Eq. (2), was also studied recently. A sufficient condition for detecting jump points is given by Kolar et al. (2009). A special fused LASSO formulation was considered by Rinaldo (2009), in which Φ was set to be the identity matrix and D the combination of the identity matrix and the total variation matrix. Sharpnack et al. (2012) proposed and studied the edge LASSO by letting Φ be the identity matrix and D be the matrix corresponding to the edges of a graph.

1.3 Organization

The remainder of this paper is organized as follows. To build a unified analysis framework, we simplify the formulation (2) in Section 2. The main results are presented in Section 3. Section 4 discusses the value of the condition number κ(D), a key parameter in our main results, in three cases: the fused LASSO, the random graph, and the total variation dictionary matrix. Numerical simulations verifying the relationship between the estimate error and the condition number are presented in Section 5. We conclude this paper in Section 6. All proofs are provided in the Appendix.

2 Simplification

As highlighted by Vaiter et al. (2013), the analysis for a wide D (that is, m ≤ p) differs significantly from that for a tall D (that is, m > p). To build a unified analysis framework, we use the singular value decomposition (SVD) of D to simplify Eq. (2), which leads to an equivalent formulation.

Consider the compact SVD of D: D = UΣV^⊤, where U ∈ R^{m×r}, Σ ∈ R^{r×r} (r is the rank of D), and V ∈ R^{p×r}. We then construct V₀ ∈ R^{p×(p−r)} such that [V, V₀] is a unitary matrix. Let z₁ = Dθ and z₂ = V₀^⊤θ. These two linear transformations split the original signal θ into two parts as follows:

z₁ = Dθ ∈ R^m,    (8)
z₂ = V₀^⊤θ ∈ R^{p−r},    (9)
min_{z₁, z₂} (1/2)‖Φ(D†z₁ + V₀z₂) − c‖² + λ‖z₁‖₁,    (10)

where θ = D†z₁ + V₀z₂, z₁ is restricted to the range of D, and z₂ is the free part. Let ẑ₂ be the minimizer of Eq. (10) over z₂ for a fixed z₁. One can see the relationship between ẑ₂ and z₁ by solving the corresponding least squares problem (here we assume that (ΦV₀)^⊤(ΦV₀) is invertible), which can be used to further simplify Eq. (10):

ΦV₀ẑ₂ = P(c − ΦD†z₁),

where P denotes the orthogonal projection onto the range of ΦV₀. Let

Φ̃ := (I − P)ΦD†    and    c̃ := (I − P)c.

We obtain the following simplified formulation:

min_{z₁} (1/2)‖Φ̃z₁ − c̃‖² + λ‖z₁‖₁,    (11)

where z₁ ranges over the range of D and the free part z₂ is recovered from z₁ in closed form.

Denote the solution of Eq. (2) as θ̂ and the ground truth as θ*. One can verify that ẑ₁ := Dθ̂ solves Eq. (11). Define the sparse part error ẑ₁ − Dθ* and the free part error V₀^⊤θ̂ − V₀^⊤θ*. Note that, unlike the case D = I, the estimate error ‖θ̂ − θ*‖ does not reduce to the sparse part error alone. We will study the upper bound of ‖θ̂ − θ*‖ in terms of the sparse part error and the free part error, based on the relationship θ = D†z₁ + V₀z₂.
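The elimination step above is easy to verify numerically. The sketch below (our own illustration; the variable names are not the paper's) builds the split z₁ = Dθ, z₂ = V₀^⊤θ for a difference-matrix dictionary and checks that the reduced residual ‖Φ̃z₁ − c̃‖ vanishes at the true signal in the noiseless case.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 40
Phi = rng.standard_normal((n, p))
D = np.eye(p, k=1)[:p - 1] - np.eye(p)[:p - 1]     # (p-1) x p difference matrix, rank p-1
theta = rng.standard_normal(p)
c = Phi @ theta                                     # noiseless observations

# Compact SVD of D and an orthonormal basis V0 of its null space.
_, sigma, Vt = np.linalg.svd(D, full_matrices=True)
r = int(np.sum(sigma > 1e-10))
V0 = Vt[r:].T                                       # [V, V0] is orthogonal
D_pinv = np.linalg.pinv(D)

# Split theta into the sparse part z1 = D theta and the free part z2 = V0^T theta.
z1, z2 = D @ theta, V0.T @ theta
assert np.allclose(theta, D_pinv @ z1 + V0 @ z2)

# Eliminate the free part: project out the range of Phi @ V0 (assumed full column rank).
PhiV0 = Phi @ V0
P = PhiV0 @ np.linalg.solve(PhiV0.T @ PhiV0, PhiV0.T)
Phi_tilde = (np.eye(n) - P) @ Phi @ D_pinv
c_tilde = (np.eye(n) - P) @ c
# In the noiseless case the reduced residual at the true z1 is exactly zero.
assert np.allclose(Phi_tilde @ z1 - c_tilde, 0.0, atol=1e-8)
```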

3 Main Results

This section presents the main results of this paper. The estimate error of Eq. (2), or equivalently Eq. (11), is bounded in Theorem 1:

Theorem 1.

Define

where ρ⁻ and ρ⁺ denote the corresponding restricted constants for short. Taking the value of λ specified in the proof in Eq. (2), we have: if (ΦV₀)^⊤(ΦV₀) is invertible (apparently, n ≥ p − r is required) and there exists an integer for which the required condition on the restricted constants holds, then

(12)

where

One can see from the proof that the first term of (12) is mainly due to the estimate error of the sparse part z₁, and the second term is due to the estimate error of the free part z₂.

The upper bound in Eq. (12) strongly depends on parameters determined by Φ and D, such as the restricted constants ρ⁻ and ρ⁺. Although these parameters are fixed for a given Φ and D, it is still challenging to evaluate them. Similar to existing literature such as Candès and Tao (2005), we assume Φ to be a Gaussian random matrix and estimate the values of these parameters in Theorem 2.

Theorem 2.

Assume that Φ is a Gaussian random matrix. The following bounds hold with high probability:

(13)
(14)
(15)
(16)

Now we are ready to analyze the estimate error bound in Eq. (12). Two cases are considered in the following: the noiseless case (ϵ = 0) and the noisy case (ϵ ≠ 0).

3.1 Noiseless Case

First let us consider the noiseless case. Since ϵ = 0, the second term in Eq. (12) vanishes. We can choose a value of λ that makes the first term in Eq. (12) arbitrarily small. Hence the true signal can be recovered with arbitrary precision as long as the corresponding restricted constant in the denominator is positive. In fact, when λ is extremely small, Eq. (2) approximately solves the problem in Eq. (7).

Intuitively, the larger the measurement number is, the easier the true signal can be recovered, since more measurements give a feasible subspace of lower dimension. In order to estimate how many measurements are required, we take the measurement matrix Φ to be a Gaussian random matrix (this is also a standard setup in compressive sensing). Since this paper mainly focuses on the large-scale case, one can treat the relevant dimensions as growing proportionally with p.

Using Eq. (13) and Eq. (14), we can estimate the lower bound stated in Lemma 1.

Lemma 1.

Assume Φ to be a Gaussian random matrix. Then, with high probability, we have

(17)

From Lemma 1, to recover the true signal, we only need

(18)

To simplify the discussion, we propose several minor conditions first in Assumption 1.

Assumption 1.

Assume that

  • the dimension of the free part, p − r, is suitably small, both in the noiseless case and in the noisy case (this condition indicates that the free dimension of the true signal, that is, the dimension of the free part z₂, should not be too large: intuitively, one needs more measurements to recover the free part because it carries no sparsity constraint, and far fewer measurements to recover the sparse part; thus, if only limited measurements are available, we have to restrict the dimension of the free part);

  • the condition number κ(D) is bounded;

  • log m = O(log p), that is, m can be a polynomial function in terms of p.

One can verify that under Assumption 1, taking n = Ω(s log p), the right-hand side of (17) is bounded away from zero. Hence, letting n = Ω(s log p) [or a correspondingly larger number of measurements if the boundedness of κ(D) is not assumed], the recovery condition (18) holds with high probability (since the probability in Lemma 1 converges to 1 as the dimensions go to infinity). In other words, in the noiseless case the true signal can be recovered to arbitrary precision with high probability.

To compare with existing results, we consider two special cases: D = I (Candès and Tao, 2005) and D with orthogonal columns (Candès et al., 2011), that is, D^⊤D = I. When D = I and Φ is a Gaussian random matrix, the number of measurements required in Candès and Tao (2005) is Ω(s log p), which is the same as ours. Also note that if D = I, Assumption 1 is satisfied automatically. Thus our result does not enforce any additional condition and is consistent with the existing analysis for the special case D = I. Next we consider the case in which D has orthogonal columns, as in Candès et al. (2011). In this situation, all conditions in Assumption 1 except the last one are satisfied. One can easily verify from our analysis above that the number of measurements required to recover the true signal is Ω(s log m) without assuming that m is polynomial in p, which is consistent with the result in Candès et al. (2011).

In addition, from Eq. (18), one can see that the boundedness requirement on κ(D) can be removed as long as the measurement number n is chosen correspondingly larger.

3.2 Noisy Case

Next we consider the noisy case, that is, we study the upper bound in (12) when ϵ ≠ 0. As before, we mainly focus on the large-scale case and assume Gaussian ensembles for the measurement matrix Φ. Theorem 3 provides the upper bound on the estimate error under the conditions in Assumption 1.

Theorem 3.

Assume that the measurement matrix Φ is a Gaussian random matrix, the measurement number satisfies n ≥ Ω(s log p), and Assumption 1 holds. Taking the value of λ specified in the proof in Eq. (2), we have

(19)

with high probability.

One can verify that when n goes to infinity, the upper bound in (19) converges to 0 (from s log p = o(n)) and the probability converges to 1. This means that the estimate error converges to 0 asymptotically given measurements satisfying s log p = o(n).

This result shows the consistency property: if the measurement number grows faster than s log p, the estimate error vanishes. This consistency property matches that of the special case LASSO, obtained by taking D = I (Zhang, 2009a). Candès et al. (2011) considered Eq. (6) and obtained an upper bound for the estimate error which does not guarantee a consistency property like ours, since ‖ϵ‖ = Θ(√n). Their result only guarantees that the estimate error bound converges to a constant given the same order of measurements.

In addition, from the derivation of Eq. (19), one can verify that the boundedness requirement on κ(D) can actually be removed if we allow more observations. Here we enforce the boundedness condition only to simplify the analysis and to allow a convenient comparison to the standard LASSO (which needs Ω(s log p) measurements).
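For intuition only (this is the well-known rate for the special case D = I under a Gaussian design, not a restatement of the bound (19)), the standard LASSO error scales as

```latex
\|\hat{\theta} - \theta^*\| \;=\; O\!\left( \sigma \sqrt{\frac{s \log p}{n}} \right),
```

so the condition s log p = o(n) immediately forces the error to zero; the bound (19) plays the analogous role for a general D, with additional factors depending on κ(D).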

4 The Condition Number of D

Since κ(D) is a key factor in the derivation of Eq. (19), we consider the fused LASSO and the random graph and estimate the value of κ(D) in these two cases.

Figure 1: Illustration of the relationship between condition number and performance in terms of relative error. Three problem sizes are used as examples.

Let us consider the fused LASSO first. The transformation matrix D is the vertical stack of the identity matrix and the total variation matrix, D = [I_p; D_TV]. One can verify that

D^⊤D = I_p + D_TV^⊤D_TV,

and that D_TV^⊤D_TV is the Laplacian of a path graph, whose eigenvalues lie in [0, 4), which implies that σ_min(D) ≥ 1 and σ_max(D) < √5. Hence we have κ(D) < √5 in the fused LASSO case.
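For reference, the path-graph eigenvalues behind this constant follow from a standard computation (our addition, not restated from the paper):

```latex
\lambda_k\!\left(D_{TV}^{\top} D_{TV}\right) \;=\; 2 - 2\cos\frac{k\pi}{p} \;=\; 4\sin^{2}\!\frac{k\pi}{2p},
\qquad k = 0, 1, \dots, p-1,
```

so the eigenvalues of D^⊤D = I_p + D_TV^⊤D_TV lie in [1, 5), giving κ(D) = σ_max(D)/σ_min(D) < √5.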

Next we consider the random graph. The transformation matrix D corresponding to a random graph is generated in the following way: (1) each row is independent of the others; (2) two entries of each row are selected uniformly at random and set to 1 and −1, respectively; (3) the remaining entries are set to 0. The following result shows that the condition number of D is bounded with high probability.

Theorem 4.

For any m and p satisfying m ≥ Cp, where C is large enough, the following holds:

with high probability.

From this theorem, one can see that

  • if m ≥ Cp, where C is large enough, then κ(D) is bounded with high probability;

  • if the graph is complete, that is, m = p(p − 1)/2, which is the maximal possible m, then κ(D) = 1.
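As a quick empirical check of this construction and of the trend above (our own illustration; the ratios below are not the threshold from Theorem 4), one can sample D and compute its condition number over the nonzero singular values:

```python
import numpy as np

def random_graph_dictionary(m, p, rng):
    """m x p matrix: each row picks two distinct columns uniformly and sets them to +1 and -1."""
    D = np.zeros((m, p))
    for row in range(m):
        i, j = rng.choice(p, size=2, replace=False)
        D[row, i], D[row, j] = 1.0, -1.0
    return D

rng = np.random.default_rng(0)
p = 200
for ratio in (2, 5, 20):                          # m / p, i.e. #edge / #vertex
    D = random_graph_dictionary(ratio * p, p, rng)
    s = np.linalg.svd(D, compute_uv=False)
    s = s[s > 1e-10]                              # keep nonzero singular values only
    print(f"m/p = {ratio:2d}: kappa(D) ~ {s.max() / s.min():.2f}")
```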

We consider one last special case, in which D is the total variation matrix corresponding to a d-dimensional signal θ*. In general, the condition number of this D is unbounded. Compared with the results in Needell and Ward (2012a, b), which focus on this particular case and require a specially designed measurement matrix, our results still have advantages in some respects: for example, when the number of measurements grows suitably fast, the estimate error from our analysis converges to zero, while it diverges given the same number of measurements under their results.

5 Numerical Simulations

In this section, we use numerical simulations to verify some of our theoretical results. Given a problem size (n, p) and a condition number κ, we randomly generate D as follows. We first construct a diagonal matrix Σ whose diagonal entries are spread between 1 and κ. We then construct a random orthogonal basis matrix Q, and let D = QΣ. Clearly, D has independent columns and its condition number equals κ. Next, a sparse vector v is generated with a small number of nonzero entries drawn at random, and θ* is then obtained as θ* = D⁻¹v, so that Dθ* = v is sparse. Finally, we generate a Gaussian random matrix Φ, a noise vector ϵ with small i.i.d. centered entries, and c = Φθ* + ϵ.

We solve Eq. (2) using the standard optimization package CVX (cvxr.com/cvx/), and λ is set as suggested by Theorem 1. We use three different problem sizes, with the condition number κ ranging from 1 to 1000. For each problem setting, 100 random instances are generated and the average performance is reported. We use the relative error ‖θ̂ − θ*‖/‖θ*‖ for evaluation, and present the performance with respect to different condition numbers in Figure 1. We can observe from Figure 1 that in all three cases the relative error increases as the condition number increases. If we fix the condition number and compare the three curves, we can see that the relative error decreases as the problem size increases. These observations are consistent with our theoretical results in Section 3 [see Eq. (19)].
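The experiment is straightforward to reproduce in spirit. The sketch below uses CVXPY rather than the authors' CVX setup, and the sparsity level, noise level, and λ are illustrative values of our own choosing, not the paper's settings.

```python
import numpy as np
import cvxpy as cp

def run_instance(n, p, kappa, s=5, noise=0.01, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # D = Q @ Sigma with Q a random orthogonal matrix, so cond(D) = kappa exactly.
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    D = Q @ np.diag(np.linspace(1.0, kappa, p))
    # Choose theta* so that D @ theta* is s-sparse.
    v = np.zeros(p)
    v[rng.choice(p, size=s, replace=False)] = rng.standard_normal(s)
    theta_star = np.linalg.solve(D, v)
    Phi = rng.standard_normal((n, p))
    c = Phi @ theta_star + noise * rng.standard_normal(n)
    theta = cp.Variable(p)
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(Phi @ theta - c)
                           + lam * cp.norm1(D @ theta))).solve()
    return np.linalg.norm(theta.value - theta_star) / np.linalg.norm(theta_star)

for kappa in (1, 10, 100, 1000):
    print(f"kappa = {kappa:4d}: relative error = {run_instance(80, 100, kappa):.3f}")
```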

6 Conclusion and Future Work

This paper considers the problem of estimating a specific type of signal that is sparse under a given linear transformation D. A conventional convex relaxation technique is used to convert this NP-hard combinatorial optimization into a tractable problem: the dictionary LASSO. We develop a unified framework to analyze the dictionary LASSO with a generic D and provide an estimate error bound. Our main results establish that 1) in the noiseless case, if the condition number of D is bounded and the measurement number n ≥ Ω(s log p), where s is the sparsity number, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of D is bounded and the measurement number grows faster than s log p [that is, s log p = o(n)], then the estimate error converges to zero with probability 1 when p and s go to infinity. Our results are consistent with existing literature for the special case D = I (equivalently LASSO) and improve the existing analysis for the same formulation. The condition number of D plays a critical role in our theoretical analysis. We consider the condition number in two cases: the fused LASSO and the random graph. The condition number in the fused LASSO case is bounded by a constant, while the condition number in the random graph case is bounded with high probability if m/p (that is, #edge/#vertex) is larger than a certain constant. Numerical simulations are consistent with our theoretical results.

In future work, we plan to study a more general formulation of Eq. (2):

min_θ f(θ) + λ‖Dθ‖₁,

where D is an arbitrary matrix and f(·) is a convex and smooth function satisfying the restricted strong convexity property. We expect to obtain similar consistency properties for this general formulation.

Acknowledgments

This work was supported in part by NSF grants IIS-0953662 and MCB-1026710. We would like to sincerely thank Professor Sijian Wang and Professor Eric Bach of the University of Wisconsin-Madison for useful discussion and helpful advice.

References

  • Bunea et al. [2007] F. Bunea, A. Tsybakov, and M. Wegkamp. Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics, 1:169–194, 2007.
  • Candès and Plan [2009] E. J. Candès and Y. Plan. Near-ideal model selection by ℓ1 minimization. Annals of Statistics, 37(5A):2145–2177, 2009.
  • Candès and Tao [2005] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.
  • Candès and Tao [2007] E. J. Candès and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313–2351, 2007.
  • Candès et al. [2006] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
  • Candès et al. [2011] E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall. Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis, 31:59–73, 2011.
  • Chan [1998] T. F. Chan. Total variation blind deconvolution. IEEE Transactions on Image Processing, 7(3):370–375, 1998.
  • Donoho et al. [2006] D. L. Donoho, M. Elad, and V. N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Transactions on Information Theory, 52(1):6–18, 2006.
  • Friedman et al. [2007] J. Friedman, T. Hastie, H. Hofling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1(2):302–332, 2007.
  • Kolar et al. [2009] M. Kolar, L. Song, and E.P. Xing. Sparsistent learning of varying-coefficient models with structural changes. NIPS, pages 1006–1014, 2009.
  • Koltchinskii and Yuan [2008] V. Koltchinskii and M. Yuan. Sparse recovery in large ensembles of kernel machines on-line learning and bandits. COLT, pages 229–238, 2008.
  • Liu et al. [2012] J. Liu, P. Wonka, and J. Ye. A multi-stage framework for Dantzig selector and Lasso. Journal of Machine Learning Research, 13:1189–1219, 2012.
  • Lounici [2008] K. Lounici. Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. Electronic Journal of Statistics, 2:90–102, 2008.
  • Meinshausen et al. [2006] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34(3):1436–1462, 2006.
  • Mendelson et al. [2008] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann. Uniform uncertainty principle for Bernoulli and subgaussian ensembles. IEEE Transactions on Information Theory, 54:2210–2219, 2008.
  • Nam et al. [2012] S. Nam, M. E. Davies, M. Elad, and R. Gribonval. The cosparse analysis model and algorithms. Applied and Computational Harmonic Analysis, 34(1):30–56, 2012.
  • Needell and Ward [2012a] D. Needell and R. Ward. Stable image reconstruction using total variation minimization. ArXiv e-prints: 1202.6429, 2012a.
  • Needell and Ward [2012b] D. Needell and R. Ward. Near-optimal compressed sensing guarantees for total variation minimization. ArXiv e-prints: 1210.3098, 2012b.
  • Ravikumar et al. [2008] P. Ravikumar, G. Raskutti, M. J. Wainwright, and B. Yu. Model selection in Gaussian graphical models: High-dimensional consistency of ℓ1-regularized MLE. NIPS, 2008.
  • Rinaldo [2009] A. Rinaldo. Properties and refinements of the fused Lasso. The Annals of Statistics, 37(5B):2922–2952, 2009.
  • Romberg [2008] J. Romberg. The Dantzig selector and generalized thresholding. CISS, pages 22–25, 2008.
  • Sharpnack et al. [2012] J. Sharpnack, A. Rinaldo, and A. Singh. Sparsistency of the edge Lasso over graphs. AISTATS, 2012.
  • Tibshirani et al. [2005] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused Lasso. Journal of the Royal Statistical Society, Series B, pages 91–108, 2005.
  • Vaiter et al. [2013] S. Vaiter, G. Peyré, C. Dossal, and J. Fadili. Robust sparse analysis regularization. IEEE Transactions on Information Theory, 59(4):2001–2016, 2013.
  • Vershynin [2011] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv:1011.3027, 2011.
  • Wainwright [2009] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183–2202, 2009.
  • Zhang [2009a] T. Zhang. Some sharp performance bounds for least squares regression with L1 regularization. Annals of Statistics, 37(5A):2109–2114, 2009a.
  • Zhang [2009b] T. Zhang. On the consistency of feature selection using greedy least squares regression. Journal of Machine Learning Research, 10:555–568, 2009b.
  • Zhao and Yu [2006] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.

Appendix A. Proof of Theorem 1

We first introduce several important definitions used in the proof. We divide the complementary index set S̄ into a group of disjoint subsets S̄_1, S̄_2, …, such that S̄_1 indicates the index set of the s largest entries (in absolute value) of the relevant error vector restricted to S̄, S̄_2 contains the next-largest s entries, and so forth (the last subset may contain fewer than s elements). We use the corresponding restriction subscripts as shorthand throughout the proof.

First we give the proof skeleton of Theorem 1. Recall that the estimate error ‖θ̂ − θ*‖ is bounded by the sum of the free part error and the sparse part error. Lemmas 7 and 8 bound these two errors respectively, and the proof of Theorem 1 makes use of these two upper bounds.
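One way to see this decomposition, using the split of Section 2 (our notation, writing ẑ₁ = Dθ̂, z₁* = Dθ*, ẑ₂ = V₀^⊤θ̂, z₂* = V₀^⊤θ*), is the triangle inequality applied to θ = D†z₁ + V₀z₂:

```latex
\|\hat{\theta} - \theta^*\|
\;\le\; \bigl\| D^{\dagger} (\hat{z}_1 - z_1^*) \bigr\| + \bigl\| V_0 (\hat{z}_2 - z_2^*) \bigr\|
\;\le\; \frac{\|\hat{z}_1 - z_1^*\|}{\sigma_{\min}(D)} + \|\hat{z}_2 - z_2^*\| ,
```

since V₀ has orthonormal columns and ‖D†‖ = 1/σ_min(D).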

Assumption 2.

Assume that

(20)
Lemma 2.

Assume that Assumption 2 holds. We have

Proof.

Since ẑ₁ is the optimal solution of Eq. (11), we have