A New Analysis for Support Recovery with Block Orthogonal Matching Pursuit

11/06/2018 ∙ by Haifeng Li, et al. ∙ McGill University

Compressed sensing (CS) is a signal processing technique that can accurately recover sparse signals from linear measurements using far fewer measurements than required by the classical Shannon-Nyquist theorem. Block sparse signals, i.e., sparse signals whose nonzero coefficients occur in a few blocks, arise in many fields. Block orthogonal matching pursuit (BOMP) is a popular greedy algorithm for recovering block sparse signals due to its high efficiency and effectiveness. By fully exploiting the block structure of block sparse signals, BOMP can achieve very good recovery performance. This paper proposes a sufficient condition ensuring that BOMP can exactly recover the support of block K-sparse signals in the noisy case. This condition is better than existing ones.


I Introduction

Compressed sensing (CS) [2, 3, 4, 6, 5] has attracted much attention in recent years. Suppose that we have the linear model

y = Ax + v,

where y ∈ ℝ^m is a measurement vector, A ∈ ℝ^{m×n} is a sensing matrix, x ∈ ℝ^n is a K-sparse signal (i.e., |supp(x)| ≤ K, where supp(x) = {i : x_i ≠ 0} is the support of x and |supp(x)| is the cardinality of supp(x)), and v ∈ ℝ^m represents the measurement noise. Then, under some conditions on A, CS can accurately recover the support of x based on y and A.
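As a concrete illustration of this model, the following minimal Python sketch (the dimensions, the Gaussian sensing matrix, and the noise level are our own illustrative choices, not from the paper) generates a K-sparse x and the measurements y = Ax + v:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, K = 64, 256, 8                          # illustrative sizes (our choice)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # sensing matrix
x = np.zeros(n)
support = rng.choice(n, size=K, replace=False)
x[support] = rng.standard_normal(K)           # K-sparse signal
v = 0.01 * rng.standard_normal(m)             # measurement noise
y = A @ x + v                                 # linear measurements

print(sorted(support.tolist()))               # supp(x)
```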

In many fields [7, 8], such as DNA microarrays [9], the multiple measurement vector problem [10], and direction-of-arrival estimation [11], the nonzero entries of x occur in blocks (or clusters). Such signals are referred to as block sparse signals.

To mathematically define block sparsity, analogous to [12], we view x as a concatenation of L blocks of length d:

x = [x^T[1], x^T[2], …, x^T[L]]^T,   (1)

where x[i] ∈ ℝ^d, with n = Ld, denotes the i-th block of x. Then,

Definition 1.

([12]) A vector x ∈ ℝ^n is called block K-sparse if x[i] is nonzero for at most K indices i.

Denote

Ω = {i : ‖x[i]‖_2 > 0}.   (2)

Then, by Definition 1, we have |Ω| ≤ K and x[i] = 0 for all i ∉ Ω.
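To make Definition 1 and the block support Ω in (2) concrete, here is a short sketch that builds a block K-sparse vector and recovers Ω from its block norms (the block size d and the counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
L, d, K = 32, 4, 3                 # L blocks of length d, K nonzero blocks
n = L * d

x = np.zeros(n)
Omega = np.sort(rng.choice(L, size=K, replace=False))   # block support
for i in Omega:
    x[i * d:(i + 1) * d] = rng.standard_normal(d)

def block(z, i):                   # the i-th block z[i]
    return z[i * d:(i + 1) * d]

recovered = [i for i in range(L) if np.linalg.norm(block(x, i)) > 0]
print(recovered == Omega.tolist())  # True: block norms identify Omega
```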

Similar to x, we also represent A as a concatenation of L column-blocks A[i] of size m × d, i.e.,

A = [A[1], A[2], …, A[L]].   (3)

Since block sparse signals arise in many fields [7], this paper focuses on the recovery of x from the measurements

y = Ax + v.   (4)

To this end, we introduce the definition of the block restricted isometry property (RIP).

Definition 2.

([12, 13]) A matrix A has the block RIP with parameter δ ∈ (0, 1) if

(1 − δ)‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ)‖x‖_2^2   (5)

holds for every block K-sparse x. The minimum δ satisfying (5) is defined as the block RIP constant δ_K.
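Because the block RIP constant is defined by an extremal property over all block K-sparse vectors, it can be computed by brute force for tiny instances: for each block support S of size K, the extreme singular values of the column submatrix A_S bound ‖Ax‖_2^2/‖x‖_2^2. A hedged sketch (the function name is ours; exponential in the number of blocks, for intuition only):

```python
import numpy as np
from itertools import combinations

def block_rip_constant(A, d, K):
    """Brute-force block RIP constant of order K for block size d.
    Exponential in the number of blocks; for tiny cases only."""
    L = A.shape[1] // d
    delta = 0.0
    for S in combinations(range(L), K):
        cols = np.concatenate([np.arange(i * d, (i + 1) * d) for i in S])
        s = np.linalg.svd(A[:, cols], compute_uv=False)
        # Eigenvalues of A_S^T A_S lie in [s_min^2, s_max^2], so the
        # deviation from 1 on this support is max(s_max^2 - 1, 1 - s_min^2).
        delta = max(delta, s[0] ** 2 - 1.0, 1.0 - s[-1] ** 2)
    return delta

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 8 * 4)) / np.sqrt(40)  # 8 blocks of size 4
print(block_rip_constant(A, d=4, K=2))
```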

To efficiently recover block sparse signals, the block OMP (BOMP) algorithm, described in Algorithm 1, was proposed in [12]. Recently, using the block RIP, [14] investigated sufficient conditions for the exact or stable recovery of block sparse signals with BOMP. They also proved that their sufficient conditions are sharp in the noiseless case.

0:  Input: y, A, and the stopping criterion.
0:  Initialize: k = 0, r^0 = y, Ω^0 = ∅, and x^0 = 0.
1:  while "stopping criterion is not met" do
2:     Choose the block index i_{k+1} that satisfies i_{k+1} = arg max_{1≤i≤L} ‖A^T[i] r^k‖_2.
3:     Let Ω^{k+1} = Ω^k ∪ {i_{k+1}}, and calculate x^{k+1} = arg min_u ‖y − Au‖_2 subject to u[i] = 0 for all i ∉ Ω^{k+1}.
4:     r^{k+1} = y − Ax^{k+1}.
5:     k = k + 1.
6:  end while
6:  Output: Ω^k and x^k.
Algorithm 1 The BOMP algorithm [12]
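For readers who want to experiment, the following Python transcription of Algorithm 1 is a minimal sketch (the function name bomp, the numpy-based least-squares step, and the iteration cap are our own choices; the algorithm itself follows [12]):

```python
import numpy as np

def bomp(y, A, d, eps=0.0, max_iter=None):
    """Block OMP (Algorithm 1): greedily pick the block whose columns
    correlate most with the residual, then least-squares re-fit on all
    selected blocks. Stops when ||r||_2 <= eps or after max_iter picks."""
    m, n = A.shape
    L = n // d
    max_iter = L if max_iter is None else max_iter
    r, Omega = y.copy(), []
    x = np.zeros(n)
    while np.linalg.norm(r) > eps and len(Omega) < max_iter:
        # Step 2: block index maximizing ||A^T[i] r^k||_2.
        corr = [np.linalg.norm(A[:, i*d:(i+1)*d].T @ r) for i in range(L)]
        Omega.append(int(np.argmax(corr)))
        # Steps 3-4: least squares on the selected blocks, new residual.
        cols = np.concatenate([np.arange(i*d, (i+1)*d) for i in Omega])
        x = np.zeros(n)
        x[cols], *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        r = y - A @ x
        # By (29), r is orthogonal to the selected columns, so an
        # already-selected block will not be re-picked in practice.
    return sorted(Omega), x
```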

In order to analyze the recoverability of BOMP in the noisy case, we investigate a sufficient condition for the support recovery of block sparse signals with K iterations of BOMP in the noisy case. The condition reduces to that for the noiseless case when ε = 0, which is the result presented in [14].

The rest of the paper is organized as follows. We present our new sufficient conditions in Section II and prove them in Section III. The paper is summarized in Section IV.

II Main Results

Similar to [14], we define the mixed ℓ_2/ℓ_∞ norm as

‖x‖_{2,∞} = max_{1≤i≤L} ‖x[i]‖_2,   (6)

where x is partitioned as in (1) with blocks x[i] ∈ ℝ^d. Then our sufficient condition for the exact support recovery of block K-sparse signals with BOMP is as follows:

Theorem 1.

Suppose that v in (4) satisfies ‖v‖_2 ≤ ε, and that A satisfies the block RIP of order K + 1 with

δ_{K+1} < 1/√(K + 1).   (7)

Then BOMP with the stopping criterion ‖r^k‖_2 ≤ ε can exactly recover Ω (see (2)) from (4) in K iterations provided that

(8)

The proof of Theorem 1 will be given in Section III.

Remark 1.

[14, Corollary 1] shows that if A and v in (4) respectively satisfy the block RIP with δ_{K+1} satisfying (7) and ‖v‖_2 ≤ ε, then BOMP with the stopping criterion ‖r^k‖_2 ≤ ε exactly recovers Ω (see (2)) in K iterations provided that

(9)

In the following, we show that our condition (8) in Theorem 1 is less restrictive than (9). Equivalently, we need to show that

(10)

that is,

(11)

Since δ_{K+1} < 1, it is clear that (10) holds if

which is equivalent to

(12)

It is easy to see that (12) holds. Thus, our condition is less restrictive than that of [14].

To clearly show the improvement of Theorem 1 over [14, Corollary 1], we display the right-hand sides of (8) and (9) versus δ_{K+1} for several values of K in Figure 1. From Figure 1, we can see that the improvement of Theorem 1 over [14, Corollary 1] is significant.

Fig. 1: The difference between the right-hand sides of (8) and (9).
Remark 2.

We obtained a less restrictive sufficient condition, based on the RIC, for the exact support recovery of block K-sparse signals with the BOMP algorithm. Since a weaker RIC bound requires fewer measurements, the improved RIC results can be used in many CS-based applications; see, e.g., [15].
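One way to see the connection between recovery conditions and the number of measurements empirically is a small Monte Carlo sweep over m, counting how often BOMP recovers the block support (this reuses the bomp sketch after Algorithm 1; all parameters are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
L, d, K, trials = 32, 4, 3, 50

for m in (20, 40, 60, 80):                      # number of measurements
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, L * d)) / np.sqrt(m)
        Omega = np.sort(rng.choice(L, size=K, replace=False))
        x = np.zeros(L * d)
        for i in Omega:
            x[i * d:(i + 1) * d] = rng.standard_normal(d)
        est, _ = bomp(A @ x, A, d, max_iter=K)  # noiseless, K iterations
        hits += est == Omega.tolist()
    print(m, hits / trials)                     # empirical recovery rate
```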

In the following, we study the worst-case necessary condition for exact support recovery by BOMP. Recall that BOMP may fail to recover the support of x from y if (7) is violated [14, Theorem 2]. Therefore, (7) naturally becomes a necessity for the noisy case. Thus, we want to obtain the worst-case necessary condition on min_{i∈Ω} ‖x[i]‖_2 when (7) holds.

Theorem 2.

Given any ε > 0 and positive integer K, let δ be given by

(13)

Then, there always exist a matrix A satisfying the block RIP with δ_{K+1} = δ, a block K-sparse vector x with

(14)

and a noise vector v with ‖v‖_2 ≤ ε, such that BOMP fails to recover Ω (see (2)) from (4) in K iterations.

Proof.

For any given positive integers K and d, and any real number δ satisfying (13), we construct a matrix A, a block K-sparse signal x, and a noise vector v. Let

(15)
(16)

and

(17)

where I denotes the identity matrix, 0 the matrix with all of its entries being 0,

(18)
(19)

and e_1 is the first coordinate unit vector. Thus, x is block K-sparse with block support Ω, and ‖v‖_2 ≤ ε.

By simple calculations we get

(20)

When and , the eigenvalues of are

(21)

When and , the eigenvalues of are

(22)

Thus, the RIP constant of A is .

In the following, we compute the block RIP constant of A.

Given any block K-sparse vector , let , with for and . Then

(23)

On the other hand, we have

(24)

Combining (23) and (24), the block RIP constant of A is

(25)

We now show that BOMP may fail to recover Ω from

(26)

Recalling the BOMP algorithm, in order to prove this theorem, we only need to show that

(27)

By (14), it is easy to see that (27) holds.

This completes the proof. ∎
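The explicit construction in (15)–(19) did not survive extraction here, but the qualitative phenomenon is easy to reproduce numerically: when the weakest block is small relative to the noise level, noise aligned with an off-support block can steer the greedy selection wrong. The following is a hedged sketch of such an experiment (not the paper's construction; it reuses the bomp sketch after Algorithm 1):

```python
import numpy as np

rng = np.random.default_rng(4)
L, d, m = 16, 4, 48
A = rng.standard_normal((m, L * d)) / np.sqrt(m)

x = np.zeros(L * d)
x[0:d] = 1.0                       # strong block 0
x[d:2 * d] = 0.05                  # weak block 1, below the noise scale

# Noise aligned with off-support block 2 to mislead the greedy selection.
off = A[:, 2 * d:3 * d] @ np.ones(d)
v = 0.4 * off / np.linalg.norm(off)

est, _ = bomp(A @ x + v, A, d, max_iter=2)
print("selected blocks:", est, "true support:", [0, 1])
```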

Remark 3.

One can see that the gap between the necessary condition and the sufficient condition is small, so our sufficient condition is nearly optimal. For example, let and . Then the upper bound of (14) is , the lower bound of (8) is , and the gap is .

III Proof of Theorem 1

By steps 3 and 4 of Algorithm 1, we have

(28)

where (a) follows from (4) and the fact that the block support of x is Ω (see (2)). Here, P denotes the orthogonal projection onto the range space of the columns of A indexed by the blocks in Ω^k, and P^⊥ denotes the projection onto its orthogonal complement.

It is worth mentioning that the residual r^k is orthogonal to the columns of A selected in the first k iterations, i.e.,

(29)
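The orthogonality in (29) is a direct consequence of the least-squares update in step 3 of Algorithm 1 and is easy to verify numerically (a small self-contained check; the sizes are our own):

```python
import numpy as np

rng = np.random.default_rng(5)
m, d, k = 30, 4, 3
A = rng.standard_normal((m, 5 * d))
y = rng.standard_normal(m)

cols = np.arange(k * d)                      # columns of the selected blocks
coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
r = y - A[:, cols] @ coef                    # residual after the projection
print(np.allclose(A[:, cols].T @ r, 0.0))    # True: r is orthogonal to them
```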

III-A Main Analysis

The proof of Theorem 1 is related to [16]; we first give a brief sketch. The proof consists of two steps. In the first step, we show that BOMP chooses a correct index in each iteration. In the second step, we show that BOMP performs exactly K iterations.

We prove the first step by induction. If BOMP selects a correct index at an iteration, we say that BOMP makes a success at that iteration. First, we present the condition guaranteeing that BOMP makes a success in the first iteration. Then, supposing that BOMP has been successful in the first k iterations, we show that BOMP also makes a success in the (k+1)-th iteration. Here, we assume k < K.

The proof for the first selection corresponds to the case of k = 0. Clearly, the induction hypothesis holds for this case since Ω^0 = ∅.

If BOMP has been successful for the previous k iterations, then Ω^k ⊆ Ω and |Ω^k| = k. In this sense, BOMP will make a success in the (k+1)-th iteration provided that the block index chosen in step 2 belongs to Ω (see Algorithm 1). Based on step 2 of Algorithm 1 and (29), in order to show that a correct index is chosen in the (k+1)-th iteration, we need to show

(30)

From (30), for any j ∉ Ω, it suffices to show

(31)

III-B Proof of Inequality (31)

In this subsection, we will show that (31) holds when (7) and (8) hold.

Suppose that

(32)

with and supp. For simplicity, we denote

(33)

By (32) and the Cauchy–Schwarz inequality, we have

(34)

where (a) follows from (6), (b) follows from the Cauchy–Schwarz inequality, and (c) and (d) follow from (32) and (33), respectively.

Now, we can present a lower bound for the left-hand side of (31).

(35)

So, to show (31), we only need to show .

Proposition 1.

Define with

(36)

for . We have . Define

(37)
(38)

where is defined in (33). For any , we have

(39)

where is defined in (35).

The proof of Proposition 1 will be given in Section V.

By the property of the block RIP, it follows that

(40)

where (a) follows from [14, Lemma 3] and (b) follows from (38).

Applying the arithmetic–geometric mean inequality to (40), we obtain

(41)

It follows from (35) and (41) that

(42)

where (a) follows from (41) and [14, Lemma 3], (b) follows from the fact that the function is monotonically increasing on the interval and from , and (c) follows from Lemma 1 (presented in Section VI) and (8).

It remains to show that BOMP stops, under the stopping rule ‖r^k‖_2 ≤ ε, after it performs exactly K iterations. Hence, we need to prove that ‖r^k‖_2 > ε for k < K and that ‖r^K‖_2 ≤ ε.

By (28), for k < K, we have

where (a) follows from (8).

Similarly, from (28),

(43)

where (a) is from ‖v‖_2 ≤ ε. Thus, BOMP performs exactly K iterations.
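To illustrate the stopping behavior numerically, the following sketch runs the bomp sketch from Algorithm 1 with the criterion ‖r^k‖_2 ≤ ε on a noisy instance whose blocks are well separated from the noise level; under the conditions of Theorem 1, one expects it to stop after exactly K iterations (all parameters are our own choices):

```python
import numpy as np

rng = np.random.default_rng(6)
L, d, K, m, eps = 32, 4, 3, 80, 0.1
A = rng.standard_normal((m, L * d)) / np.sqrt(m)

Omega = np.sort(rng.choice(L, size=K, replace=False))
x = np.zeros(L * d)
for i in Omega:
    blk = rng.standard_normal(d)
    x[i * d:(i + 1) * d] = blk + np.sign(blk)   # keep block norms large

v = rng.standard_normal(m)
v *= eps / (2 * np.linalg.norm(v))              # ensure ||v||_2 <= eps

est, _ = bomp(A @ x + v, A, d, eps=eps)         # stop when ||r^k||_2 <= eps
print("iterations:", len(est), "support recovered:", est == Omega.tolist())
```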

IV Conclusion

In this paper, we have presented a sufficient condition, weaker than the existing ones, for the exact support recovery of block K-sparse signals with K iterations of BOMP in the noisy case.

V Proof of Proposition 1

Proof.

Recalling (32) and (33), we have

(44)

and

(45)

Using the properties of the norm and (36), we have

(46)

On the other hand, according to

(47)

and

we obtain

(48)

By (44)–(46) and (48), it follows that

After some manipulations, we can prove that (39) holds. ∎

VI Proof of Lemma 1

Lemma 1.

Consider (4) and (32). Suppose that . Then we have

Proof.

Define