A New Analysis for Support Recovery with Block Orthogonal Matching Pursuit

11/06/2018 ∙ by Haifeng Li, et al. ∙ McGill University

Compressed Sensing (CS) is a signal processing technique which can accurately recover sparse signals from far fewer linear measurements than required by the classical Shannon-Nyquist theorem. Block sparse signals, i.e., sparse signals whose nonzero coefficients occur in a few blocks, arise in many fields. Block orthogonal matching pursuit (BOMP) is a popular greedy algorithm for recovering block sparse signals due to its high efficiency and effectiveness. By fully exploiting the block sparsity of block sparse signals, BOMP can achieve very good recovery performance. This paper proposes a sufficient condition ensuring that BOMP can exactly recover the support of block K-sparse signals in the noisy case. This condition is less restrictive than existing ones.




I Introduction

Compressed sensing (CS) [2, 3, 4, 6, 5] has attracted much attention in recent years. Suppose that we have the linear model

$$y = Ax + v,$$

where $y \in \mathbb{R}^m$ is a measurement vector, $A \in \mathbb{R}^{m \times n}$ is a sensing matrix, $x \in \mathbb{R}^n$ is a $K$-sparse signal (i.e., $|\mathrm{supp}(x)| \le K$, where $\mathrm{supp}(x) = \{i : x_i \neq 0\}$ is the support of $x$ and $|\mathrm{supp}(x)|$ is the cardinality of $\mathrm{supp}(x)$) and $v \in \mathbb{R}^m$ represents the measurement noise. Then, under some conditions on $A$, CS can accurately recover the support of $x$ based on $y$ and $A$.

In many fields [7, 8], such as DNA microarrays [9], the multiple measurement vector problem [10] and direction-of-arrival estimation [11], the nonzero entries of $x$ occur in blocks (or clusters). Such signals are referred to as block sparse signals and are denoted as $x$ in this paper.

To mathematically define block sparsity, analogous to [12], we view $x$ as a concatenation of $L$ blocks $x[i]$:

$$x = [x[1]^T, x[2]^T, \ldots, x[L]^T]^T, \tag{1}$$

where $x[i] = [x_{(i-1)d+1}, \ldots, x_{id}]^T \in \mathbb{R}^d$ with $n = Ld$ denotes the $i$th block of $x$. Then,

Definition 1.

([12]) A vector $x \in \mathbb{R}^n$ is called block $K$-sparse if $x[i]$ is nonzero for at most $K$ indices $i$.



Then, by Definition 1, we have

$$\Omega = \{i : \|x[i]\|_2 > 0\} \tag{2}$$

and $|\Omega| \le K$.
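To make Definition 1 concrete, here is a small Python sketch (the helper names are ours, not the paper's notation) that builds a block $K$-sparse vector and recovers its block support:

```python
import numpy as np

def make_block_sparse(L, d, K, rng):
    """Generate x in R^(L*d) whose nonzero entries occupy K of the L blocks.

    Illustrative helper: block values are standard normal.
    """
    x = np.zeros(L * d)
    support = rng.choice(L, size=K, replace=False)  # block indices Omega
    for i in support:
        x[i * d:(i + 1) * d] = rng.standard_normal(d)
    return x, set(int(i) for i in support)

def block_support(x, d):
    """Omega = {i : ||x[i]||_2 > 0}, with x[i] the i-th length-d block."""
    blocks = x.reshape(-1, d)
    return {i for i, b in enumerate(blocks) if np.linalg.norm(b) > 0}

rng = np.random.default_rng(0)
x, omega = make_block_sparse(L=10, d=4, K=3, rng=rng)
assert block_support(x, d=4) == omega   # Definition 1: at most K nonzero blocks
assert len(omega) <= 3
```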

Similar to $x$, we also represent $A$ as a concatenation of column-blocks $A[i]$ of size $m \times d$, $i = 1, \ldots, L$, i.e.,

$$A = [A[1], A[2], \ldots, A[L]]. \tag{3}$$

Since block sparse signals arise in many fields [7], this paper focuses on studying the recovery of the support $\Omega$ of $x$ from the measurements

$$y = Ax + v. \tag{4}$$
To this end, we introduce the definition of the block restricted isometry property (RIP).

Definition 2.

([12, 13]) A matrix $A$ has the block RIP with parameter $\delta \in [0, 1)$ if

$$(1 - \delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta)\|x\|_2^2 \tag{5}$$

holds for every block $K$-sparse $x$. The minimum $\delta$ satisfying (5) is defined as the block RIP constant $\delta_K$.
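For very small problems, the block RIP constant of Definition 2 can be computed by brute force: enumerate all supports of $K$ blocks and take the extreme eigenvalues of the corresponding Gram matrices. A Python sketch (the helper name is ours; the enumeration is exponential in the number of blocks, so this is illustration only):

```python
import itertools
import numpy as np

def block_rip_constant(A, d, K):
    """Smallest delta with (1-delta)||x||^2 <= ||Ax||^2 <= (1+delta)||x||^2
    for all block K-sparse x (blocks of length d): for each set S of K blocks,
    the extreme eigenvalues of A_S^T A_S give the tightest delta.
    """
    m, n = A.shape
    L = n // d
    delta = 0.0
    for S in itertools.combinations(range(L), K):
        cols = np.concatenate([np.arange(i * d, (i + 1) * d) for i in S])
        eig = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])  # ascending order
        delta = max(delta, 1.0 - eig[0], eig[-1] - 1.0)
    return delta

# A Gaussian matrix with columns of unit norm in expectation is a standard choice.
rng = np.random.default_rng(1)
m, L, d, K = 100, 8, 2, 2
A = rng.standard_normal((m, L * d)) / np.sqrt(m)
delta = block_rip_constant(A, d, K)
assert 0 < delta < 1
```

For an orthonormal $A$ the constant is exactly zero, which gives a quick sanity check on the helper.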

To efficiently recover block sparse signals, the block OMP (BOMP) algorithm, which is described in Algorithm 1, has been proposed in [12]. Recently, using RIP, [14] investigated some sufficient conditions for exact or stable recovery of block sparse signals with BOMP. They also proved that their sufficient conditions are sharp in the noiseless case.

Require:  $y$, $A$, and the stopping criterion.
Initialize:  $k = 0$, $r^0 = y$, and $\Omega^0 = \emptyset$.
1:  while "stopping criterion is not met" do
2:     Choose the block index $i^{k+1}$ that satisfies $i^{k+1} = \arg\max_{1 \le i \le L} \|A[i]^T r^k\|_2$.
3:     Let $\Omega^{k+1} = \Omega^k \cup \{i^{k+1}\}$, and calculate $x^{k+1} = \arg\min_{u:\, \mathrm{supp}(u) \subseteq \Omega^{k+1}} \|y - Au\|_2$.
4:     $r^{k+1} = y - A x^{k+1}$.
5:     $k = k + 1$.
6:  end while
Output:  $\hat{\Omega} = \Omega^k$ and $\hat{x} = x^k$.
Algorithm 1 The BOMP algorithm [12]
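The steps of Algorithm 1 can be sketched in Python as follows (a minimal illustration with our own function and variable names, using least squares for the re-fit step and a budget of $K$ iterations plus a residual-norm stopping rule; it is not the paper's reference implementation):

```python
import numpy as np

def bomp(y, A, d, K, eps=1e-9):
    """Sketch of BOMP: per iteration, pick the block whose columns correlate
    most with the residual (step 2), re-fit by least squares on the chosen
    blocks (step 3), and update the residual (step 4). Stops after K
    iterations or once ||r||_2 <= eps.
    """
    m, n = A.shape
    L = n // d
    r = y.copy()
    omega = []
    cols = np.array([], dtype=int)
    coef = np.zeros(0)
    while len(omega) < K and np.linalg.norm(r) > eps:
        corr = [np.linalg.norm(A[:, i*d:(i+1)*d].T @ r) for i in range(L)]
        omega.append(int(np.argmax(corr)))
        cols = np.concatenate([np.arange(i*d, (i+1)*d) for i in omega])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        r = y - A[:, cols] @ coef
    x_hat = np.zeros(n)
    x_hat[cols] = coef
    return set(omega), x_hat

# Noiseless sanity check: exact support recovery for a well-conditioned A.
rng = np.random.default_rng(2)
L, d, K, m = 12, 3, 2, 100
A = rng.standard_normal((m, L * d)) / np.sqrt(m)
x = np.zeros(L * d)
x[1*d:2*d] = 2.0    # block 1 active
x[7*d:8*d] = -1.5   # block 7 active
omega, x_hat = bomp(A @ x, A, d, K)
assert omega == {1, 7} and np.allclose(x_hat, x, atol=1e-6)
```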

In order to analyze the recoverability of BOMP in the noisy case, we investigate a sufficient condition for the support recovery of block sparse signals with $K$ iterations of BOMP in the noisy case. The condition reduces to that for the noiseless case when $v = 0$, in which case it coincides with the result presented in [14].

The rest of the paper is organized as follows. We present our new sufficient conditions in Section II and prove them in Section III. The paper is summarized in Section IV.

II Main Results

Similar to [14], we define the mixed $\ell_2/\ell_1$ norm as

$$\|x\|_{2,1} = \sum_{i=1}^{L} \|x[i]\|_2, \tag{6}$$

where $x[i] \in \mathbb{R}^d$ with $n = Ld$. Then our sufficient condition for the exact support recovery of block $K$-sparse signals with BOMP is as follows:

Theorem 1.

Suppose that $\|v\|_2 \le \epsilon$ in (4), and $A$ satisfies the block RIP of order $K + 1$ with

$$\delta_{K+1} < \frac{1}{\sqrt{K+1}}. \tag{7}$$
Then BOMP with the stopping criterion $\|r^k\|_2 \le \epsilon$ can exactly recover $\Omega$ (see (2)) from (4) in $K$ iterations provided that


The proof of Theorem 1 will be given in Section III.

Remark 1.

[14, Corollary 1] shows that if $A$ and $v$ in (4) respectively satisfy the block RIP with $\delta_{K+1}$ satisfying (7) and $\|v\|_2 \le \epsilon$, then BOMP with the stopping criterion $\|r^k\|_2 \le \epsilon$ exactly recovers $\Omega$ (see (2)) in $K$ iterations provided that


In the following, we show that our condition (8) in Theorem 1 is less restrictive than (9). Equivalently, we need to show that


Equivalently, we need to show


Since , it is clear that (10) holds if

which is equivalent to


It is easy to see that (12) holds. Thus, our condition is less restrictive than that of [14].

To clearly show the improvement of Theorem 1 over [14, Corollary 1], we display the right-hand sides of (8) and (9) versus $\delta_{K+1}$ for several $K$ in Figure 1. From Figure 1, we can see that the improvement of Theorem 1 over [14, Corollary 1] is significant.

Fig. 1: The difference between the right-hand sides of (8) and (9).
Remark 2.

We have obtained a less restrictive sufficient condition, based on the RIC, for the exact support recovery of block $K$-sparse signals with the BOMP algorithm. Since a weaker RIC bound means that fewer measurements are needed, the improved RIC results can be used in many CS-based applications; see, e.g., [15].

In the following, we study the worst-case necessary condition for exact support recovery by BOMP. Recall that BOMP may fail to recover the support of $x$ from $y$ if (7) is violated [14, Theorem 2]. Therefore, (7) naturally remains necessary in the noisy case. Thus, we want to obtain the worst-case necessary condition on $\min_{i \in \Omega} \|x[i]\|_2$ when (7) holds.

Theorem 2.

Given any and positive integer . Let


Then, there always exist a matrix $A$ satisfying the RIP with $\delta_{K+1} = 1/\sqrt{K+1}$, a block $K$-sparse vector $x$ with


and a noise vector $v$ with $\|v\|_2 \le \epsilon$, such that BOMP fails to recover $\Omega$ (see (2)) from (4) in $K$ iterations.


For any given positive integers , , and any real number , , we construct a matrix function , block -sparse signal and a noise vector . Let




where $I$ denotes the identity matrix, $0$ denotes the matrix with all of its entries being 0,


and is the first coordinate unit vector. So, is supported on , and .

By simple calculations we get


When and , the eigenvalues of are


When and , the eigenvalues of are


Thus, the RIP constant of is for .

In the following, we will show that the block RIP constant of $A$ is $\delta_{K+1} = 1/\sqrt{K+1}$.

Given any block -sparse vector . Let , with for and . Then


On the other hand, we have


Combining (23) and (24), we conclude that the block RIP constant of $A$ is


We now show that BOMP may fail to recover $\Omega$ from


Recalling the BOMP algorithm, in order to prove this theorem, we only need to show


By (14), it is easy to see that (II) holds.

This completes the proof. ∎

Remark 3.

We find that the gap between the necessary condition and the sufficient condition is small, so our sufficient condition is nearly optimal. For example, let and . Then the upper bound of (14) is , the lower bound of (8) is , and the gap is .

III Proof of Theorem 1

By steps 3 and 4 of Algorithm 1, we have

$$r^k = y - A x^k = (I - P_{\Omega^k}) y \overset{(a)}{=} (I - P_{\Omega^k})(A_{\Omega} x_{\Omega} + v), \tag{28}$$

where (a) follows from (4) and $\mathrm{supp}(x) = \Omega$. The symbol $P_{\Omega^k} = A_{\Omega^k} (A_{\Omega^k}^T A_{\Omega^k})^{-1} A_{\Omega^k}^T$ denotes the orthogonal projection onto $\mathcal{R}(A_{\Omega^k})$, that is, the range space of $A_{\Omega^k}$, and $P_{\Omega^k}^{\perp} = I - P_{\Omega^k}$.

It is worth mentioning that the residual $r^k$ is orthogonal to the columns of $A_{\Omega^k}$, i.e.,

$$A_{\Omega^k}^T r^k = 0. \tag{29}$$
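The residual's orthogonality to the columns of the selected blocks is just the normal-equation property of least squares; a quick numerical check in Python (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

# After a least-squares fit y ~ B c, the residual is orthogonal to range(B);
# with B standing in for the selected blocks, this is the identity used above.
rng = np.random.default_rng(3)
B = rng.standard_normal((30, 6))
y = rng.standard_normal(30)
c, *_ = np.linalg.lstsq(B, y, rcond=None)
r = y - B @ c
assert np.allclose(B.T @ r, 0.0, atol=1e-10)
```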
III-A Main Analysis

The proof of Theorem 1 is related to [16]. We give a brief sketch of the proof, which consists of two steps. In the first step, we show that BOMP chooses a correct index in each iteration. In the second step, we show that BOMP performs exactly $K$ iterations.

We prove the first step by induction. If BOMP selects a correct index at an iteration, we say that BOMP succeeds at that iteration. First, we present the condition guaranteeing that BOMP succeeds in the first iteration. Then, supposing that BOMP has succeeded in the first $k$ iterations, we show that BOMP also succeeds in the $(k+1)$th iteration. Here, we assume $1 \le k < K$.

The proof for the first selection corresponds to the case of $k = 0$. Clearly, the induction hypothesis holds for this case since $\Omega^0 = \emptyset$.

If BOMP has succeeded in the previous $k$ iterations, then $\Omega^k \subseteq \Omega$ and $|\Omega^k| = k$. In this sense, BOMP will succeed in the $(k+1)$th iteration provided that the index chosen in that iteration belongs to $\Omega$ (see Algorithm 1). Based on step 2 of Algorithm 1 and (29), in order to show that a correct index is chosen in the $(k+1)$th iteration, we need to show


From (III-A), for any , it suffices to show


III-B Proof of inequality (31)

In this subsection, we will show that (31) holds for $0 \le k < K$ when (7) and (8) hold.

Suppose that


with and supp. For simplicity, we denote


By (32) and the Cauchy-Schwarz inequality, we have


where (a) follows from (6), (b) follows from the Cauchy-Schwarz inequality, (c) is from , and (d) is from .

Now, we can present a lower bound for the left-hand side of (31).


So, to show (31), we only need to show .

Proposition 1.

Define with


for . We have . Define


where is defined in (33). For any , we have


where is defined in (35).

The proof of Proposition 1 will be given in Section V.

By the property of block RIP, it follows that


where (a) follows from [14, Lemma 3], (b) follows from (38).

Applying the arithmetic-geometric mean inequality, we obtain



It follows from (35) and (1) that


where (a) follows from (41) and [14, Lemma 3], (b) follows from the fact that the function is monotonically increasing on the interval in question, and (c) follows from Lemma 1 (presented in Section VI) and (8).

It remains to show that BOMP, under the stopping rule, stops after performing exactly $K$ iterations. Hence, we need to prove that $\|r^k\|_2 > \epsilon$ for $0 \le k < K$ and that $\|r^K\|_2 \le \epsilon$.

By (28), for $0 \le k < K$, we have

where (a) follows from (8).

Similarly, from (28),


where (a) is from $\|v\|_2 \le \epsilon$. Thus, BOMP performs exactly $K$ iterations.
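The two halves of this stopping argument can be observed numerically: with the noise well below the smallest nonzero block, the residual norm stays above $\epsilon$ for the first $K$ iterations and falls below $\epsilon$ once the support is recovered. A small Python sketch (toy dimensions and a bare-bones BOMP loop of our own, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(4)
L, d, K, m, eps = 10, 3, 3, 150, 1e-2
A = rng.standard_normal((m, L * d)) / np.sqrt(m)
x = np.zeros(L * d)
for i in (0, 4, 9):                       # Omega, chosen arbitrarily
    x[i*d:(i+1)*d] = 5.0                  # blocks well above the noise level
v = rng.standard_normal(m)
v *= eps / (2.0 * np.linalg.norm(v))      # ||v||_2 = eps / 2 <= eps
y = A @ x + v

res_norms, omega, r = [], [], y.copy()
for _ in range(K):
    res_norms.append(np.linalg.norm(r))   # ||r^k||_2 before iteration k + 1
    corr = [np.linalg.norm(A[:, i*d:(i+1)*d].T @ r) for i in range(L)]
    omega.append(int(np.argmax(corr)))
    cols = np.concatenate([np.arange(i*d, (i+1)*d) for i in omega])
    coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    r = y - A[:, cols] @ coef
res_norms.append(np.linalg.norm(r))       # ||r^K||_2

assert set(omega) == {0, 4, 9}            # support recovered in K iterations
assert all(nrm > eps for nrm in res_norms[:K])   # BOMP does not stop early
assert res_norms[K] <= eps                       # ...and stops at k = K
```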

IV Conclusion

In this paper, we have presented a sufficient condition for the exact support recovery of block $K$-sparse signals with $K$ iterations of BOMP in the noisy case; the condition is weaker than existing ones.

V Proof of Proposition 1


Recalling (32) and (33), we have




Using the property of norm and (36), we have


On the other hand, according to



we obtain


By (V)-(46) and (48), it follows that

After some manipulations, we can prove that (1) holds. ∎

VI Proof of Lemma 1

Lemma 1.

Consider (4) and (32). Suppose that . Then we have