Deterministic Analysis of Weighted BPDN With Partially Known Support Information

03/03/2019 ∙ by Wendong Wang, et al. ∙ Southwest University

In this paper, with the aid of the powerful Restricted Isometry Constant (RIC), a deterministic (i.e., non-stochastic) analysis, consisting of a series of sufficient conditions (related to the RIC order) and their resultant error estimates, is established for the weighted Basis Pursuit De-Noising (BPDN) to guarantee robust signal recovery when Partially Known Support Information (PKSI) of the signal is available. Specifically, the obtained conditions nontrivially extend the ones obtained recently for the traditional constrained weighted ℓ_1-minimization model to its unconstrained counterpart, i.e., the weighted BPDN. The obtained error estimates are also comparable to analogous ones obtained previously for the robust recovery of signals with PKSI from constrained models. Moreover, these results complement, to some degree, the recent investigation of the weighted BPDN that is based on stochastic analysis.


I Introduction

Compressed/Compressive Sensing (CS), see, e.g., [1, 2, 3], has attracted a great deal of attention from researchers in a wide range of fields over the past decade. In CS, one obtains the observations y ∈ ℝ^m of a signal x ∈ ℝ^n via the following model

y = Ax + z,   (1)

where A ∈ ℝ^{m×n} (m < n) is called the measurement matrix and z ∈ ℝ^m denotes the additive noise that satisfies a certain constraint. One of the key goals of CS is to effectively recover the original signal x based on y and A. It has been shown that if x is k-sparse with k ≪ n and A satisfies certain conditions related to k, see, e.g., [4, 5, 6, 7, 8], then one can achieve this goal by solving an ℓ_1-minimization problem, i.e.,

min_x ‖x‖_1   subject to   ‖y − Ax‖_2 ≤ ε,   (2)

where ε ≥ 0 represents the noise level, and we take ε = 0 if there is no noise, i.e., z = 0.

The above ℓ_1-minimization approach has been demonstrated to be effective for robust signal recovery. However, it does not incorporate any prior information on the signal support, since the ℓ_1-norm treats the entries of the variable equally. In fact, in many practical applications, such as time-series signal processing, see, e.g., [9, 10, 11], it is often possible to estimate part of the signal's support. It is therefore natural and important to use such prior information to further enhance the recovery performance of (2). This consideration directly leads to the following weighted ℓ_1-minimization problem

min_x ‖x‖_{1,w} := ∑_{i=1}^n w_i |x_i|   subject to   ‖y − Ax‖_2 ≤ ε,   (3)

where w_i ∈ [0, 1] denote the weights. For simplicity, in this paper we only consider a binary choice of the weights, i.e.,

w_i = ω ∈ [0, 1] if i ∈ T̃, and w_i = 1 otherwise,

where T̃ ⊆ {1, 2, …, n} is a given set, which models the Partially Known Support Information (PKSI) of the signal. This problem has been well investigated in the past few years, see, e.g., [12, 13, 14, 15, 16, 17, 18, 19]. It was proved by Friedlander et al. in [12] that if T̃ includes at least half of the accurate support of the signal, then (3) performs robustly under much weaker conditions than the analogous ones for (2). In [15], Flinth studied the optimal choice of general weights. Recently, Chen et al. in [18] and [19] obtained much tighter conditions for (3), and these conditions were proved to be sharp when the desired signal is exactly sparse and is measured without noise.
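For concreteness, the binary weight choice above can be sketched in a few lines of numpy. This is only an illustration: the function name `binary_weights` and the default value of `omega` are ours, not the paper's.

```python
import numpy as np

def binary_weights(n, support_estimate, omega=0.5):
    """Binary weight choice for the weighted l1 norm.

    Entries indexed by `support_estimate` (the PKSI set) get weight
    `omega` in [0, 1], so they are penalized less by the weighted l1
    norm; all other entries keep weight 1, as in the plain l1 norm.
    """
    w = np.ones(n)
    w[list(support_estimate)] = omega
    return w

w = binary_weights(8, {0, 3, 5}, omega=0.3)
# entries 0, 3 and 5 are down-weighted to 0.3; the rest stay at 1.0
```

Setting `omega = 1` recovers the unweighted ℓ_1 norm, while `omega = 0` removes the penalty entirely on the estimated support.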

In this paper, we consider the robust recovery of signals with PKSI via the weighted Basis Pursuit De-Noising (BPDN)

min_x λ‖x‖_{1,w} + (1/2)‖Ax − y‖_2^2,   (4)

where λ > 0 is a regularization parameter. Obviously, (4) reduces to the widely known BPDN if one sets w_i = 1 for all i (i.e., no support information is available). Although there exists a large amount of research on the BPDN, see, e.g., [20, 21, 22, 23, 24, 25, 26, 27], the theoretical analysis of (4) for sparse recovery has received relatively little attention. We note that Lian et al. recently studied (4) from both theoretical and experimental aspects in [28], where they called it the weighted LASSO. However, their results rest on a stochastic strategy and are thus entirely different from ours, which are established in a deterministic manner.
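For intuition about what model (4) computes, it can be solved numerically by proximal gradient descent (ISTA), whose proximal step for a weighted ℓ_1 term is entrywise soft-thresholding. The sketch below is a minimal illustration under the standard objective λ∑_i w_i|x_i| + ½‖Ax − y‖_2^2; it is not the algorithm analyzed in this paper, and the function name, toy dimensions, and parameter values are our own choices.

```python
import numpy as np

def weighted_bpdn_ista(A, y, w, lam, iters=500):
    """ISTA for min_x  lam * sum_i w_i |x_i| + 0.5 * ||A x - y||_2^2.

    Each iteration takes a gradient step on the smooth quadratic term,
    then applies entrywise soft-thresholding with thresholds lam * w_i / L,
    which is the proximal operator of the weighted l1 penalty.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L        # gradient step on 0.5||Ax - y||^2
        t = lam * w / L                      # per-entry thresholds
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    return x

# toy problem: a 4-sparse signal, with two support indices known in advance
rng = np.random.default_rng(0)
m, n = 40, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[[3, 17, 42, 65]] = [1.5, -2.0, 1.0, 2.5]
y = A @ x0
w = np.ones(n)
w[[3, 17]] = 0.3                             # down-weight the known support (PKSI)
x_hat = weighted_bpdn_ista(A, y, w, lam=0.01, iters=2000)
```

With a small λ and noiseless measurements, `x_hat` closely approximates the true sparse signal; the down-weighted entries incur less shrinkage bias than under the plain BPDN.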

The main contribution of this paper is a series of (tight) sufficient conditions, together with their resultant error estimates, established for (4) with the help of the Restricted Isometry Property (RIP) [1]; to some degree, these results complement the recent theoretical analysis of the weighted BPDN (see [28]) that is based on the stochastic strategy.

II Notations and Preliminaries

In this section, we first introduce some basic notation. For any given index set T ⊆ {1, 2, …, n}, we denote by x_T the vector whose entries agree with those of x on T and are 0 otherwise, and denote by x_{max(k)} the best k-term approximation of any signal x, i.e., the vector that keeps the k largest-magnitude entries of x and sets the remaining entries to zero.
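The best k-term approximation just described is easy to make concrete (a minimal numpy sketch; the function name is ours):

```python
import numpy as np

def best_k_term(x, k):
    """Best k-term approximation of x: keep the k largest-magnitude
    entries and set all remaining entries to zero."""
    xk = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]        # indices of the k largest |x_i|
    xk[keep] = x[keep]
    return xk

best_k_term(np.array([3.0, -1.0, 0.5, 2.0]), 2)
# -> array([3., 0., 0., 2.])
```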

Definition 1.

A matrix A is said to obey the RIP of order k if there exists a constant δ ∈ (0, 1) such that

(1 − δ)‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ)‖x‖_2^2   (5)

for every k-sparse signal x. The smallest positive δ that satisfies (5) is denoted by δ_k (when k is not an integer, we define δ_k as δ_⌈k⌉) and is known as the Restricted Isometry Constant (RIC).
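The RIP can be probed numerically: for a random Gaussian matrix, sampling k-sparse vectors and recording how far ‖Ax‖_2^2/‖x‖_2^2 strays from 1 yields an empirical lower bound on the RIC (only a lower bound, since the true RIC is a maximum over all k-sparse signals, which is combinatorial to compute). The dimensions and sample count below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 64, 128, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns roughly unit-norm

# sample random k-sparse signals and record the RIP ratio ||Ax||^2 / ||x||^2
ratios = []
for _ in range(500):
    idx = rng.choice(n, size=k, replace=False)
    x = np.zeros(n)
    x[idx] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

# largest observed deviation from 1: an empirical lower bound on delta_k
delta_lower = max(max(ratios) - 1.0, 1.0 - min(ratios))
```

For Gaussian matrices, such ratios concentrate around 1 at rate roughly √(k log n / m), which is the usual heuristic for how many measurements a small RIC requires.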

We also need the following two lemmas.

Lemma 1.

Assume that are two sets with , and for some and , and define

(6)

If the signal is observed through (1) under the noise constraint, then for the optimal solution of (4), we have

(7)

and

(8)

where and is denoted by (12).

Proof:

Since is the optimal solution of (4), we have

which is equivalent to

(9)

As to the left-hand side of (9), we have

(10)

As to the right-hand side of (9), we know from [12] that

(11)

where . Since and , then and , and thus clearly

where

(12)

This directly turns (11) into the following inequality

(13)

Therefore, combining (10) and (13) leads to the desired (7), and (8) follows trivially from (7). ∎

Lemma 2.

For any if satisfies the RIP of order with RIC and , then for any vector and any subset with , it holds that

(14)

where

Remark 1.

It is easy to see from Lemma 2 that both quantities are monotonically increasing functions of the variable. Therefore, if one restricts attention to (19), it is clear that

(15)
(16)

and

(17)
Proof:

The proof mainly follows that of [25, Lemma 2]; here we only give the key steps.

Step 1: For a given , we start with denoting

Step 2: Using similar techniques as in [25], one can prove

(18)

Step 3: Prove (14) from (18).

These three steps suffice to prove Lemma 2 when the order is an integer. When it is not an integer, we pass to its ceiling, which is an integer, and Lemma 2 clearly still holds in that case. In summary, Lemma 2 holds whether or not the order is an integer. ∎

III Main Results

With preparations above, we now give the main results.

Theorem 1.

Assume that is observed via (1) with and is denoted by . Let be defined as in Lemma 1. If the measurement matrix satisfies

(19)

where and are denoted by (6) and (12), respectively, then

(20)

where is the optimal solution of (4) and

with , and for being denoted by (32), (33) and (34), respectively.

Remark 2 (Recovery Condition).

The established condition (19) coincides with the one obtained recently by Chen et al. in [18], which has been proved to be sharp for exactly sparse signal recovery under noise-free measurements. However, their goal was to recover the signal with PKSI using the constrained model. Moreover, our condition (19) is in fact not a simple extension of the one in [18], but is obtained in a totally different way. We refer the interested readers to [18] for more detailed discussion of (19) and its potential corollaries.

Remark 3 (Error Estimate).

The obtained error estimate (20) may seem somewhat complicated, since it integrates several quantities together. In what follows, we provide three special cases of (20) by selecting some simple but meaningful choices of the parameters.

Case 1): Suppose that , then by using (15), (16) and (17) we can deduce directly from (20) that

and

where

This directly yields

This new error estimate also coincides in form with the ones in [24, 25, 26, 27], which are derived for unconstrained models. However, those results do not take PKSI into consideration.

Case 2): Suppose that . Similar to the above analysis in Case 1, we can also obtain that

where

This result coincides in form with the ones derived for the traditional constrained models, see, e.g., [5, 12, 13], which to some extent indicates theoretically that the unconstrained model (4) and the constrained model (3) are equivalent for the robust recovery of (sparse) signals with PKSI.

Case 3): Suppose that , i.e., . In such case, it is also easy to deduce from (20) that

where

According to the above error estimate, it seems impossible to exactly recover any sparse signal through (4) in the absence of noise. However, if one sets the parameter in (4) to be a sufficiently small positive number, the error between the recovered and original signals will tend to depend only on the available PKSI of the original signal itself. On the other hand, from the viewpoint of non-uniform recovery [3], it has been shown that under certain conditions one can successfully recover some (specific) sparse signals via the BPDN, see, e.g., [20]. This may bring the possibility for (4) to achieve exact recovery of some sparse signals with PKSI when certain conditions are satisfied. More discussion on non-uniform recovery is beyond the scope of this paper; we refer the interested readers to [3] and [29] for details.

Remark 4.

Due to limited space, we cannot discuss the obtained results further in this paper. We refer the interested readers to the supplementary material for more discussion of the weight choice and its resultant performance analysis.

Proof:

We first denote , , , and . Then we have

(21)

and also know from Lemma 2 (with ) that

(22)

Besides, combining (8), (21) and (22) directly yields

(23)

where we used the condition (19) and thus

(24)

for the last inequality. Similarly, we can also deduce from (8), (21) and (22) that

On the other hand, let denote the index set of the largest entries of in magnitude. Then we know from Lemma 2 (with ) and [24, inequality (2.3)] that

(25)
(26)

where . Then using (III) and (25), we have

(27)

where we used in the first inequality.

Now we estimate the upper bound of . We first know from (7), (21) and (22) that

which is equal to

(28)

Using (24) again, we can further deduce from (28) that

This directly leads to

(29)

Based on (III), we can give two new upper bound estimates for and , respectively, i.e.,

(30)
(31)

Now combining (26) and (III), together with (III)-(31), we have

where

(32)
(33)
(34)

This completes the proof. ∎

IV Conclusion

This paper provides a deterministic (non-stochastic) analysis for the sparse recovery of signals with partially known support information via the weighted BPDN. Equipped with the powerful notion of the RIC, we established a series of sufficient conditions and their resultant error estimates. These theoretical results, to some degree, complement the recent ones for the weighted BPDN established in a stochastic manner.

References

  • [1] E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, Dec. 2005.
  • [2] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
  • [3] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing. Basel, Switzerland: Birkhäuser, 2013.
  • [4] E. J. Candès, “The restricted isometry property and its implications for compressed sensing,” Comp. Rendus Math., vol. 346, no. 9, pp. 589–592, May 2008.
  • [5] T. T. Cai and A. R. Zhang, “Sparse representation of a polytope and recovery of sparse signals and low-rank matrices,” IEEE Trans. Inf. Theory, vol. 60, no. 1, pp. 122–132, Jan. 2014.
  • [6] R. Zhang and S. Li, “A proof of conjecture on restricted isometry property constants,” IEEE Trans. Inf. Theory, vol. 64, no. 3, pp. 1699–1705, Mar. 2018.
  • [7] R. Zhang and S. Li, “Optimal RIP bounds for sparse signals recovery via minimization,” Appl. Comput. Harmon. Anal., in press.
  • [8] T. T. Cai, L. Wang, and G. W. Xu, “Stable recovery of sparse signals and an oracle inequality,” IEEE Trans. Inf. Theory, vol. 56, no. 7, pp. 3516–3522, Jul. 2010.
  • [9] M. A. Khajehnejad, W. Xu, A. S. Avestimehr, and B. Hassibi, “Weighted l1 minimization for sparse recovery with prior information,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Jun. 2009, pp. 483–487.
  • [10] N. Vaswani and W. Lu, “Modified-CS: Modifying compressive sensing for problems with partially known support,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Jun. 2009, pp. 488–492.
  • [11] L. Jacques, “A short note on compressed sensing with partially known signal support”, Signal Process., vol. 90, no. 12, pp. 3308–3312, 2010.
  • [12] M. P. Friedlander, H. Mansour, R. Saab, and Ö. Yilmaz, “Recovering compressively sampled signals using partial support information,” IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 1122–1134, Feb. 2012.
  • [13] T. Ince, A. Nacaroglu, and N. Watsuji, “Nonconvex compressed sensing with partially known signal support,” Signal Process., vol. 93, pp. 338–344, 2013.
  • [14] J. C. Zhan and N. Vaswani, “Time invariant error bounds for modified-CS-based sparse signal sequence recovery,” IEEE Trans. Inf. Theory, vol. 61, no. 3, pp. 1389–1409, Mar. 2015.
  • [15] A. Flinth, “Optimal choice of weights for sparse recovery with prior information,” IEEE Trans. Inf. Theory, vol. 62, no. 7, pp. 4276–4284, Jul. 2016.
  • [16] H. Mansour and R. Saab, “Recovery analysis for weighted ℓ_1-minimization using the null space property,” Appl. Comput. Harmon. Anal., vol. 43, no. 1, pp. 23–38, 2017.
  • [17] D. Needell, R. Saab, and T. Woolf, “Weighted -minimization for sparse recovery under arbitrary prior information,” Inf. Inference, vol. 6, no. 3, pp. 284–309, 2017.
  • [18] W. G. Chen, Y. L. Li, and G. Q. Wu, “Recovery of signals under the high order RIP condition via prior support information,” Signal Process., vol. 153, pp. 83–94, 2018.
  • [19] W. G. Chen and Y. L. Li, “Recovery of signals under the condition on RIC and ROC via prior support information,” Appl. Comput. Harmon. Anal., in press.
  • [20] J.-J. Fuchs, “On sparse representations in arbitrary redundant bases,” IEEE Trans. Inf. Theory, vol. 50, no. 6, pp. 1341–1344, Jun. 2004.
  • [21] J.-J. Fuchs, “Recovery of exact sparse representations in the presence of bounded noise,” IEEE Trans. Inf. Theory, vol. 51, no. 10, pp. 3601–3608, Oct. 2005.
  • [22] C. W. Zhu, “Stable recovery of sparse signals via regularized minimization,” IEEE Trans. Inf. Theory, vol. 54, no. 7, p. 3364–3367, Jul. 2008.
  • [23] J. H. Lin and S. Li, “Sparse recovery with coherent tight frame via analysis Dantzig selector and analysis lasso,” Appl. Comput. Harmon. Anal., vol. 37, pp. 126–139, 2014.
  • [24] Y. Shen, B. Han, and E. Braverman, “Stable recovery of analysis based approaches,” Appl. Comput. Harmon. Anal., vol. 39, pp. 161–172, 2015.
  • [25] H. M. Ge, J. M. Wen, W. G. Chen, J. Weng, and M. J. Lai, “Stable sparse recovery with three unconstrained analysis based approaches,” [Online]. Available: http://alpha.math.uga.edu/ mjlai/papers/20180126.pdf
  • [26] P. Li and W. Chen, “Signal recovery under cumulative coherence,” J. Comput. Appl. Math., vol. 346, pp. 399–417, 2019.
  • [27] W. D. Wang, F. Zhang, Z. Wang and J. J. Wang, “Coherence-based performance guarantee of regularized -norm minimization and beyond”, [Online]. Available: https://arxiv.org/abs/1812.03739
  • [28] L. X. Lian, A. Liu, and V. K. N. Lau, “Weighted LASSO for sparse recovery with statistical prior support information,” IEEE Trans. Signal Process., vol. 66, no. 6, pp. 1607–1618, Mar. 2018.
  • [29] H. Zhang, M. Yan, and W. T. Yin, “One condition for solution uniqueness and robustness of both l1-synthesis and l1-analysis minimizations,” Adv. Comput. Math., vol. 42, pp. 1381–1399, 2016.