# A New Analysis for Support Recovery with Block Orthogonal Matching Pursuit

Compressed sensing (CS) is a signal processing technique that can accurately recover sparse signals from far fewer linear measurements than the classical Shannon-Nyquist theorem requires. Block sparse signals, i.e., sparse signals whose nonzero coefficients occur in a few blocks, arise in many fields. Block orthogonal matching pursuit (BOMP) is a popular greedy algorithm for recovering block sparse signals due to its high efficiency and effectiveness. By fully exploiting the block sparsity of such signals, BOMP can achieve very good recovery performance. This paper proposes a sufficient condition ensuring that BOMP can exactly recover the support of block K-sparse signals in the noisy case. This condition is less restrictive than existing ones.


## I Introduction

Compressed sensing (CS) [2, 3, 4, 6, 5] has attracted much attention in recent years. Suppose that we have the linear model $y = Ax + e$, where

$y \in \mathbb{R}^n$ is a measurement vector,

$A \in \mathbb{R}^{n \times N}$ is a sensing matrix, $x \in \mathbb{R}^N$ is a $K$-sparse signal (i.e., $|\mathrm{supp}(x)| \le K$, where $\mathrm{supp}(x) = \{i : x_i \neq 0\}$ is the support of $x$ and $|\mathrm{supp}(x)|$ is the cardinality of $\mathrm{supp}(x)$), and $e$ represents the measurement noise. Then, under some conditions on $A$, CS can accurately recover the support of $x$ based on $y$ and $A$.

In many fields [7, 8], such as DNA microarrays [9], the multiple measurement vector problem [10], and direction of arrival estimation [11], the nonzero entries of $x$ occur in blocks (or clusters). Such signals are referred to as block sparse signals and are denoted by $x_B$ in this paper.

To mathematically define $x_B$, analogous to [12], we view $x_B \in \mathbb{R}^{Md}$ as a concatenation of $M$ blocks:

$$x_B = [x_B^T[1]\; x_B^T[2]\; \cdots\; x_B^T[M]]^T, \tag{1}$$

where $x_B[\ell] \in \mathbb{R}^d$ with $\ell \in \{1, 2, \ldots, M\}$ denotes the $\ell$th block of $x_B$. Then,

###### Definition 1.

([12]) A vector $x_B \in \mathbb{R}^{Md}$ is called block $K$-sparse if $x_B[\ell]$ is nonzero for at most $K$ indices $\ell$.

Denote

$$T = \mathrm{supp}_B(x_B) := \{\ell \mid x_B[\ell] \neq 0_{d \times 1}\}. \tag{2}$$

Then, by Definition 1, we have $T \subseteq \{1, 2, \ldots, M\}$ and $|T| \le K$.

Similar to $x_B$, we also represent $A \in \mathbb{R}^{n \times Md}$ as a concatenation of column blocks $A[\ell]$ of size $n \times d$, $\ell \in \{1, 2, \ldots, M\}$, i.e.,

$$A = [A[1]\; A[2]\; \cdots\; A[M]]. \tag{3}$$

Since block sparse signals arise in many fields [7], this paper focuses on studying the recovery of $x_B$ from the measurements

$$y = Ax_B + e. \tag{4}$$
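As a concrete illustration of the model (4), the following sketch generates a block $K$-sparse signal and noisy measurements. All dimension values and variable names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: M blocks of length d, K of them nonzero,
# n linear measurements.
M, d, K, n = 20, 4, 3, 60

# Sensing matrix with unit-norm columns.
A = rng.standard_normal((n, M * d))
A /= np.linalg.norm(A, axis=0)

# Block K-sparse signal: nonzero entries confined to K blocks.
support = rng.choice(M, size=K, replace=False)
x = np.zeros(M * d)
for blk in support:
    x[blk * d:(blk + 1) * d] = rng.standard_normal(d)

# Noisy measurements y = A x_B + e with ||e||_2 <= eps.
e = rng.standard_normal(n)
eps = 0.05
e *= eps / np.linalg.norm(e)
y = A @ x + e
```

The goal of support recovery is then to identify `support` from `y` and `A` alone.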

To this end, we introduce the definition of block restricted isometry property (RIP).

###### Definition 2.

([12, 13]) A matrix $A$ has the block RIP with parameter $\delta_B \in [0, 1)$ if

$$(1 - \delta_B)\|h_B\|_2^2 \le \|Ah_B\|_2^2 \le (1 + \delta_B)\|h_B\|_2^2 \tag{5}$$

holds for every block $K$-sparse $h_B \in \mathbb{R}^{Md}$. The minimum $\delta_B$ satisfying (5) is defined as the block RIP constant $\delta_K^B$.

To efficiently recover block sparse signals, the block OMP (BOMP) algorithm, which is described in Algorithm 1, has been proposed in [12]. Recently, using RIP, [14] investigated some sufficient conditions for exact or stable recovery of block sparse signals with BOMP. They also proved that their sufficient conditions are sharp in the noiseless case.
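Algorithm 1 itself did not survive this extraction. The following is a minimal sketch of BOMP as described in the surrounding text, assuming the standard formulation from [12]: block-wise matching on the residual followed by least-squares re-fitting over all selected blocks. The function name `bomp` and its interface are our own.

```python
import numpy as np

def bomp(A, y, d, K, eps):
    """Block OMP sketch: A is n x (M*d), viewed as M blocks of d columns.

    At each iteration, pick the block whose columns correlate most with
    the residual (largest ||A[j]' r||_2), then re-fit by least squares on
    all selected blocks. Stop after K iterations or once ||r||_2 <= eps.
    """
    n, Md = A.shape
    M = Md // d
    support, cols = [], []
    r = y.copy()
    for _ in range(K):
        if np.linalg.norm(r) <= eps:
            break
        # Step 2 analogue: block-wise matching.
        scores = [np.linalg.norm(A[:, j*d:(j+1)*d].T @ r) for j in range(M)]
        j_star = int(np.argmax(scores))
        support.append(j_star)
        cols.extend(range(j_star*d, (j_star+1)*d))
        # Steps 3-4 analogue: project y onto the span of selected blocks.
        xs, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        r = y - A[:, cols] @ xs
    return sorted(support)
```

On a trivially well-conditioned instance (orthonormal columns), this sketch recovers the block support exactly.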

In order to analyze the recoverability of BOMP in the noisy case, we investigate a sufficient condition for the support recovery of block sparse signals with $K$ iterations of BOMP in the noisy case. The condition reduces to that for the noiseless case when $\varepsilon = 0$, which is the result presented in [14].

The rest of the paper is organized as follows. We present our new sufficient conditions in Section II and prove them in Section III. The paper is summarized in Section IV.

## II Main Results

Similar to [14], we define the mixed $\ell_2/\ell_p$ norm as

$$\|x_B\|_{2,p} = \|w\|_p,\quad p = 1, 2, \infty, \tag{6}$$

where $w \in \mathbb{R}^M$ with $w_\ell = \|x_B[\ell]\|_2$. Then our sufficient condition for the exact support recovery of block $K$-sparse signals with BOMP is as follows:
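The mixed norm in (6) can be computed directly; a small helper (the function name is ours), assuming $x_B$ is stored as a flat vector of $M$ length-$d$ blocks:

```python
import numpy as np

def mixed_norm(x, d, p):
    """Mixed l2/lp norm ||x||_{2,p}: split x into length-d blocks,
    take the l2 norm of each block, then the lp norm of that vector."""
    w = np.linalg.norm(x.reshape(-1, d), axis=1)  # w_l = ||x_B[l]||_2
    if p == np.inf:
        return w.max()
    return np.sum(w ** p) ** (1.0 / p)
```

For example, for blocks of length 2 with block norms $(5, 0, 1)$, the $\|\cdot\|_{2,1}$, $\|\cdot\|_{2,2}$, and $\|\cdot\|_{2,\infty}$ norms are $6$, $\sqrt{26}$, and $5$ respectively.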

###### Theorem 1.

Suppose that in (4), $\|e\|_2 \le \varepsilon$, and $A$ satisfies the block RIP of order $K + 1$ with

$$\delta_{K+1}^B < \frac{1}{\sqrt{K+1}}. \tag{7}$$

Then BOMP with the stopping criterion $\|r^k\|_2 \le \varepsilon$ can exactly recover $T$ (see (2)) from (4) in $K$ iterations provided that

$$\min_{i \in T}\|x_B[i]\|_2 > \frac{\varepsilon}{\sqrt{1 - \delta_{K+1}^B}} + \frac{\varepsilon\sqrt{1 + \delta_{K+1}^B}}{1 - \sqrt{K+1}\,\delta_{K+1}^B}. \tag{8}$$

The proof of Theorem 1 will be given in Section III.

###### Remark 1.

[14, Corollary 1] shows that if $A$ and $e$ in (4) respectively satisfy the block RIP with $\delta_{K+1}^B$ satisfying (7) and $\|e\|_2 \le \varepsilon$, then BOMP with the stopping criterion $\|r^k\|_2 \le \varepsilon$ exactly recovers $T$ (see (2)) in $K$ iterations provided that

$$\min_{i \in T}\|x_B[i]\|_2 > \frac{2\varepsilon}{1 - \sqrt{K+1}\,\delta_{K+1}^B}. \tag{9}$$

In the following, we show that our condition (8) in Theorem 1 is less restrictive than (9). Equivalently, we need to show that

$$\frac{2\varepsilon}{1 - \sqrt{K+1}\,\delta_{K+1}^B} > \frac{\varepsilon}{\sqrt{1 - \delta_{K+1}^B}} + \frac{\varepsilon\sqrt{1 + \delta_{K+1}^B}}{1 - \sqrt{K+1}\,\delta_{K+1}^B}, \tag{10}$$

which is equivalent to

$$\left(2 - \sqrt{1 + \delta_{K+1}^B}\right)\sqrt{1 - \delta_{K+1}^B} > 1 - \sqrt{K+1}\,\delta_{K+1}^B. \tag{11}$$

Since $\sqrt{K+1}\,\delta_{K+1}^B \ge \delta_{K+1}^B$, it is clear that (11) holds if

$$\left(2 - \sqrt{1 + \delta_{K+1}^B}\right)\sqrt{1 - \delta_{K+1}^B} > 1 - \delta_{K+1}^B,$$

which is equivalent to

$$2 - \sqrt{1 + \delta_{K+1}^B} > \sqrt{1 - \delta_{K+1}^B}. \tag{12}$$

Since $\sqrt{1 + \delta_{K+1}^B} + \sqrt{1 - \delta_{K+1}^B} \le 2$ by the concavity of the square root, with equality only when $\delta_{K+1}^B = 0$, (12) holds. Thus our condition is less restrictive than that of [14].
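The comparison above can be spot-checked numerically: over a grid of admissible $\delta_{K+1}^B \in (0, 1/\sqrt{K+1})$, the right-hand side of (8) stays strictly below that of (9). A sketch (the function names are ours):

```python
import numpy as np

def bound_new(K, delta, eps=1.0):
    # Right-hand side of (8).
    return (eps / np.sqrt(1 - delta)
            + eps * np.sqrt(1 + delta) / (1 - np.sqrt(K + 1) * delta))

def bound_old(K, delta, eps=1.0):
    # Right-hand side of (9), from [14, Corollary 1].
    return 2 * eps / (1 - np.sqrt(K + 1) * delta)

# Sweep the admissible range (7) for several K.
for K in (1, 2, 5, 10):
    for delta in np.linspace(1e-4, 1 / np.sqrt(K + 1) - 1e-4, 200):
        assert bound_new(K, delta) < bound_old(K, delta)
```

This is exactly the content of Figure 1: the new lower bound on $\min_{i\in T}\|x_B[i]\|_2$ is uniformly smaller.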

To clearly show the improvement of Theorem 1 over [14, Corollary 1], we display the right-hand sides of (8) and (9) versus $\delta_{K+1}^B$ for several values of $K$ in Figure 1. From Figure 1, we can see that the improvement of Theorem 1 over [14, Corollary 1] is significant.

###### Remark 2.

We obtained a less restrictive sufficient condition for the exact support recovery of block $K$-sparse signals with the BOMP algorithm based on the RIC. Since a weaker RIC bound means that fewer measurements are needed, the improved RIC result can be used in many CS-based applications; see, e.g., [15].

In the following, we study the worst-case necessary condition for exact support recovery by BOMP. Recall that BOMP may fail to recover the support of $x_B$ from $y$ in $K$ iterations if $\delta_{K+1}^B \ge 1/\sqrt{K+1}$ [14, Theorem 2]. Therefore, (7) naturally becomes a necessity for the noisy case. Thus, we want to obtain the worst-case necessary condition on $\min_{i \in T}\|x_B[i]\|_2$ when (7) holds.

###### Theorem 2.

Given any $\varepsilon > 0$ and positive integer $K$, let

$$0 < \delta < \frac{1}{\sqrt{K+1}}. \tag{13}$$

Then, there always exist a matrix $A$ satisfying the block RIP with $\delta_{K+1}^B = \delta$, a block $K$-sparse vector $x_B$ with

$$\min_{i \in T}\|x_B[i]\|_2 < \frac{\varepsilon}{\sqrt{1 - (\delta_{K+1}^B)^2}\left(\sqrt{1 - (\delta_{K+1}^B)^2} - \sqrt{K}\,\delta_{K+1}^B\right)}, \tag{14}$$

and a noise vector $e$ with $\|e\|_2 \le \varepsilon$, such that BOMP fails to recover $T$ (see (2)) from (4) in $K$ iterations.

###### Proof.

For any given positive integers $K$ and $d$, and any real numbers $\varepsilon > 0$ and $\delta$ satisfying (13), we construct a matrix function $A(t)$, a block $K$-sparse signal $x_B$, and a noise vector $e$. Let

 (15)
 (16)

and

 (17)

where $I_d$ is the $d \times d$ identity matrix, $0$ denotes a matrix with all of its entries being 0,

$$E_{(dK)\times d} = (I_d, \cdots, I_d)' \in \mathbb{R}^{(dK)\times d}, \tag{18}$$

$$s = \frac{\delta}{\sqrt{K}},\quad a = \sqrt{1 - \delta^2}, \tag{19}$$

and $e_1$ is the first coordinate unit vector. So, $x_B$ is supported on $T$, and $\|e\|_2 \le \varepsilon$.

By simple calculations we get

 (20)

When $K = 1$, the eigenvalues of $A'(t)A(t)$ are

$$\lambda_1 = 1 - \delta,\quad \lambda_2 = 1 + \delta. \tag{21}$$

When $K \ge 2$, the eigenvalues of $A'(t)A(t)$ are

$$\lambda_i = 1 - \delta^2,\ 1 \le i \le K - 1,\quad \lambda_K = 1 + \delta,\quad \lambda_{K+1} = 1 - \delta. \tag{22}$$

Thus, the RIP constant of $A(t)$ is $\delta$ in both cases.

In the following, we will show that the block RIP constant of $A(t)$ is $\delta$.

Given any block $(K+1)$-sparse vector $h_B$, we have

On the other hand, we have

Combining (23) and (24), the block RIP constant of $A(t)$ is

$$\delta_{K+1}^B = \delta. \tag{25}$$

We now show that BOMP may fail to recover $T$ from

 (26)

Recalling the BOMP algorithm, in order to show this theorem, we only need to show

$$a^2 t_0 = \|a^2 t_0 e_1\|_2 = \max_{i \in T}\|(A(d)[i])' y\|_2 < \max_{j \in T^c}\|(A(d)[j])' y\|_2 = \|(\varepsilon + K a s t_0) e_1\|_2 = \varepsilon + K a s t_0. \tag{27}$$

By (14), it is easy to see that (27) holds.

This completes the proof. ∎

###### Remark 3.

We find that the gap between the necessary condition and the sufficient condition is small, so our sufficient condition is nearly optimal. In fact, for any given $K$ and $\delta_{K+1}^B$ satisfying (13), one can directly compute the upper bound of (14) and the lower bound of (8) and verify that the gap between them is small.
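The specific numbers in Remark 3 did not survive this extraction, so the following sketch evaluates both bounds for illustrative values of $K$, $\delta_{K+1}^B$, and $\varepsilon$ (our assumptions) to exhibit the gap:

```python
import numpy as np

# Illustrative values; the particular numbers used in Remark 3 of the
# paper are not known here, so K, delta, eps below are assumptions.
K, delta, eps = 2, 0.2, 1.0  # delta < 1/sqrt(K+1) ≈ 0.577, as in (13)

# Lower bound in the sufficient condition (8).
sufficient = (eps / np.sqrt(1 - delta)
              + eps * np.sqrt(1 + delta) / (1 - np.sqrt(K + 1) * delta))

# Upper bound in the worst-case necessary condition (14).
s = np.sqrt(1 - delta**2)
necessary = eps / (s * (s - np.sqrt(K) * delta))

print(f"sufficient > {sufficient:.4f}, necessary < {necessary:.4f}")
```

For these values the two thresholds are of the same order, which is what "nearly optimal" means here: signals with minimum block norm above the first bound are always recovered, while some signal below the second bound defeats BOMP.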

## III Proof of Theorem 1

By steps 3 and 4 of Algorithm 1, we have

$$r^k = y - P_{\Lambda^k} y = P_{\Lambda^k}^{\perp} y \overset{(a)}{=} P_{\Lambda^k}^{\perp}\left(A[T]x_B[T] + e\right), \tag{28}$$

where (a) follows from (4) and $\mathrm{supp}_B(x_B) = T$. The symbol $P_{\Lambda^k}$ denotes the orthogonal projection onto the range space of $A[\Lambda^k]$, and $P_{\Lambda^k}^{\perp} = I - P_{\Lambda^k}$.

It is worth mentioning that the residual $r^k$ is orthogonal to the columns of $A[\Lambda^k]$, i.e.,

$$\|A'[i] r^k\|_2 = \|A'[i] P_{\Lambda^k}^{\perp} y\|_2 = 0,\quad i \in \Lambda^k. \tag{29}$$
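Property (29) is just the normal-equations orthogonality of least squares, and it can be checked numerically; the setup below (dimensions, two selected blocks) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 30, 3
A_sel = rng.standard_normal((n, 2 * d))  # columns of two selected blocks
y = rng.standard_normal(n)

# Orthogonal projection of y onto range(A_sel) via least squares,
# and the residual r = (I - P) y.
coef, *_ = np.linalg.lstsq(A_sel, y, rcond=None)
r = y - A_sel @ coef

# (29): the residual is orthogonal to every selected column.
print(np.linalg.norm(A_sel.T @ r))  # numerically zero
```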

### III-A Main Analysis

The proof of Theorem 1 is related to [16]. We give a brief sketch of the proof, which consists of two steps. In the first step, we show that BOMP chooses a correct index in each iteration. In the second step, we show that BOMP performs exactly $K$ iterations.

We prove the first step by induction. If BOMP selects a correct index at an iteration, we say that BOMP makes a success at that iteration. First, we present the condition guaranteeing that BOMP makes a success in the first iteration. Then, supposing that BOMP has been successful in the first $k$ iterations, we show that BOMP also makes a success in the $(k+1)$th iteration. Here, we assume $0 \le k < K$.

The proof for the first selection corresponds to the case of $k = 0$. Clearly the induction hypothesis holds for this case since $\Lambda^0 = \emptyset \subseteq T$.

If BOMP has been successful for the previous $k$ iterations, then $\Lambda^k \subseteq T$ and $|\Lambda^k| = k$. In this sense, BOMP will make a success in the $(k+1)$th iteration provided that the index chosen in that iteration belongs to $T$ (see Algorithm 1). Based on step 2 of Algorithm 1 and (29), in order to show that a correct index is chosen in the $(k+1)$th iteration, we need to show

$$\|A'[T \setminus \Lambda^k] r^k\|_{2,\infty} = \max_{i \in T \setminus \Lambda^k}\|A'[i] r^k\|_2 > \max_{j \in \Omega \setminus T}\|A'[j] r^k\|_2 = \|A'[\Omega \setminus T] r^k\|_{2,\infty}. \tag{30}$$

From (30), for any $j \in \Omega \setminus T$, it suffices to show

$$\|A'[T \setminus \Lambda^k] r^k\|_{2,\infty} - \|A'[j] r^k\|_2 > 0. \tag{31}$$

### III-B Proof of Inequality (31)

In this subsection, we will show that (31) holds for $0 \le k < K$ when (7) and (8) hold.

Suppose that

$$P_T y = A[T]\xi_B[T] \tag{32}$$

with $\xi_B \in \mathbb{R}^{Md}$ and $\mathrm{supp}_B(\xi_B) \subseteq T$. For simplicity, we denote

$$\alpha = \xi_B[T \setminus \Lambda^k]. \tag{33}$$

By (32), using the Cauchy-Schwarz inequality, we have

$$\begin{aligned}
\|A'[T \setminus \Lambda^k] r^k\|_{2,\infty} &= \|A'[T \setminus \Lambda^k] r^k\|_{2,\infty}\,\frac{\|\alpha\|_{2,1}}{\|\alpha\|_{2,1}} \\
&\overset{(a)}{\ge} \frac{\sum_{i \in T \setminus \Lambda^k}\|A'[i] r^k\|_2\,\|\xi_B[i]\|_2}{\|\alpha\|_{2,1}}
\overset{(b)}{\ge} \frac{\big\langle r^k, \sum_{i \in T \setminus \Lambda^k} A[i]\xi_B[i]\big\rangle}{\|\alpha\|_{2,1}} \\
&= \frac{\big\langle r^k, P_{\Lambda^k}^{\perp}(y - P_T^{\perp} y)\big\rangle}{\|\alpha\|_{2,1}}
\overset{(c)}{=} \frac{\|r^k\|_2^2 - \big\langle P_{\Lambda^k}^{\perp} y, P_{\Lambda^k}^{\perp} P_T^{\perp} y\big\rangle}{\|\alpha\|_{2,1}} \\
&\overset{(d)}{=} \frac{\|r^k\|_2^2 - \langle y, P_T^{\perp} y\rangle}{\|\alpha\|_{2,1}}
= \frac{\|r^k\|_2^2 - \|P_T^{\perp} e\|_2^2}{\|\alpha\|_{2,1}},
\end{aligned} \tag{34}$$

where (a) follows from (6), (b) follows from the Cauchy-Schwarz inequality, (c) follows from $r^k = P_{\Lambda^k}^{\perp} y$, and (d) follows from $P_{\Lambda^k}^{\perp} P_T^{\perp} = P_T^{\perp}$ (the range of $A[\Lambda^k]$ is contained in that of $A[T]$) together with $P_T^{\perp} y = P_T^{\perp} e$.

Now, we can present a lower bound $\eta$ for the left-hand side of (31):

$$\|A'[T \setminus \Lambda^k] r^k\|_{2,\infty} - \|A'[j] r^k\|_2 \ge \eta. \tag{35}$$

So, to show (31), we only need to show $\eta > 0$.

###### Proposition 1.

Define $h \in \mathbb{R}^d$ by

$$h = \frac{A'[j] P_{\Lambda^k}^{\perp} y}{\|A'[j] P_{\Lambda^k}^{\perp} y\|_2} \tag{36}$$

for $j \in \Omega \setminus T$; we have $\|h\|_2 = 1$. Define

$$B = P_{\Lambda^k}^{\perp}\left[A[T \setminus \Lambda^k]\ A[j]\right], \tag{37}$$

$$u = \begin{bmatrix} \alpha \\ 0 \end{bmatrix} \in \mathbb{R}^{|T \setminus \Lambda^k|d + d},\quad v = \begin{bmatrix} 0 \\ h \end{bmatrix} \in \mathbb{R}^{|T \setminus \Lambda^k|d + d}, \tag{38}$$

where $\alpha$ is defined in (33). For any $t > 0$, we have

$$\begin{aligned}
\eta ={}& \frac{1}{4t}\left\|B\left(\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right)\right\|_2^2 \\
&- \frac{1}{4t}\left\|B\left(\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right)\right\|_2^2 - e' P_T^{\perp} A[j] h, \tag{39}
\end{aligned}$$

where $\eta$ is defined in (35).

The proof of Proposition 1 will be given in Section V.

By the property of the block RIP, it follows that

$$\begin{aligned}
&\left\|B\left(\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right)\right\|_2^2 - \left\|B\left(\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right)\right\|_2^2 \\
&\overset{(a)}{\ge} (1 - \delta_{K+1}^B)\left\|\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right\|_2^2 - (1 + \delta_{K+1}^B)\left\|\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right\|_2^2 \\
&\overset{(b)}{=} \frac{4t\|\alpha\|_{2,2}^2}{\|\alpha\|_{2,1}} - 2t^2\delta_{K+1}^B\|\alpha\|_{2,2}^2 - \frac{2\|\alpha\|_{2,2}^2\,\delta_{K+1}^B}{\|\alpha\|_{2,1}^2} - 2\delta_{K+1}^B \\
&= 4t\left(\frac{\|\alpha\|_{2,2}^2}{\|\alpha\|_{2,1}} - \frac{\delta_{K+1}^B}{2}\left(t\|\alpha\|_{2,2}^2 + \frac{1}{t}\left(\frac{\|\alpha\|_{2,2}^2}{\|\alpha\|_{2,1}^2} + 1\right)\right)\right), \tag{40}
\end{aligned}$$

where (a) follows from [14, Lemma 3], and (b) follows from (38).

Applying the arithmetic-geometric mean inequality to (40) and optimizing over $t > 0$,

$$\begin{aligned}
&\left\|B\left(\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right)\right\|_2^2 - \left\|B\left(\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right)\right\|_2^2 \\
&\ge \max_{t > 0}\left\{4t\left(\frac{\|\alpha\|_{2,2}^2}{\|\alpha\|_{2,1}} - \frac{\delta_{K+1}^B}{2}\left(t\|\alpha\|_{2,2}^2 + \frac{1}{t}\left(\frac{\|\alpha\|_{2,2}^2}{\|\alpha\|_{2,1}^2} + 1\right)\right)\right)\right\} \\
&= 4t\|\alpha\|_{2,2}\left(\frac{\|\alpha\|_{2,2}}{\|\alpha\|_{2,1}} - \delta_{K+1}^B\sqrt{1 + \frac{\|\alpha\|_{2,2}^2}{\|\alpha\|_{2,1}^2}}\right). \tag{41}
\end{aligned}$$

It follows from (35) and (39) that

$$\begin{aligned}
&\|A'[T \setminus \Lambda^k] r^k\|_{2,\infty} - \|A'[j] r^k\|_2 \\
&\ge \frac{1}{4t}\left(\left\|B\left(\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right)\right\|_2^2 - \left\|B\left(\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right)\right\|_2^2\right) - e' P_T^{\perp} A[j] h \\
&\overset{(a)}{\ge} \|\alpha\|_{2,2}\left(\frac{\|\alpha\|_{2,2}}{\|\alpha\|_{2,1}} - \delta_{K+1}^B\sqrt{1 + \frac{\|\alpha\|_{2,2}^2}{\|\alpha\|_{2,1}^2}} - \frac{\sqrt{1 + \delta_{K+1}^B}\,\|e\|_2}{\|\alpha\|_{2,2}}\right) \\
&\overset{(b)}{\ge} \|\alpha\|_{2,2}\left(\frac{1}{\sqrt{K - k}} - \delta_{K+1}^B\sqrt{1 + \left(\frac{1}{\sqrt{K - k}}\right)^2}\right) - \frac{\|\alpha\|_{2,2}\sqrt{1 + \delta_{K+1}^B}\,\|e\|_2}{\sqrt{K - k}\,\min_{i \in T}\|\xi_B[i]\|_2} \\
&\overset{(c)}{>} \frac{\|\alpha\|_{2,2}}{\sqrt{K - k}}\left(\delta_{K+1}^B\left(\sqrt{K + 1} - \sqrt{K - k + 1}\right)\right) \ge 0, \tag{42}
\end{aligned}$$

where (a) follows from (41) and [14, Lemma 3], (b) follows from the fact that the function $x \mapsto x - \delta_{K+1}^B\sqrt{1 + x^2}$ is monotonically increasing on $(0, \infty)$ together with $\|\alpha\|_{2,2}/\|\alpha\|_{2,1} \ge 1/\sqrt{K - k}$ and $\|\alpha\|_{2,2} \ge \sqrt{K - k}\,\min_{i \in T}\|\xi_B[i]\|_2$, and (c) follows from Lemma 1 (presented in Section VI) and (8).

It remains to show that BOMP stops under the stopping rule $\|r^k\|_2 \le \varepsilon$ after performing exactly $K$ iterations. Hence, we need to prove $\|r^k\|_2 > \varepsilon$ for $0 \le k < K$ and $\|r^K\|_2 \le \varepsilon$.

By (28), for $0 \le k < K$, we have

$$\begin{aligned}
\|r^k\|_2 &= \|P_{\Lambda^k}^{\perp} A[T \setminus \Lambda^k] x_B[T \setminus \Lambda^k] + P_{\Lambda^k}^{\perp} e\|_2 \\
&\ge \sqrt{1 - \delta_{K+1}^B}\,\|x_B[T \setminus \Lambda^k]\|_2 - \varepsilon \\
&\overset{(a)}{>} \frac{\sqrt{1 - \delta_{K+1}^B}\sqrt{1 + \delta_{K+1}^B}\,\varepsilon}{1 - \sqrt{K+1}\,\delta_{K+1}^B} \ge \frac{(1 - \delta_{K+1}^B)\varepsilon}{1 - \sqrt{K+1}\,\delta_{K+1}^B} \ge \varepsilon,
\end{aligned}$$

where (a) follows from (8).

Similarly, from (28),

$$\|r^K\|_2 = \|P_{\Lambda^K}^{\perp} A[T \setminus \Lambda^K] x_B[T \setminus \Lambda^K] + P_{\Lambda^K}^{\perp} e\|_2 \overset{(a)}{=} \|P_{\Lambda^K}^{\perp} e\|_2 \le \varepsilon, \tag{43}$$

where (a) follows from $\Lambda^K = T$. Thus, BOMP performs exactly $K$ iterations.

## IV Conclusion

In this paper, we have presented a sufficient condition, which is weaker than existing ones, for the exact support recovery of block $K$-sparse signals with $K$ iterations of BOMP in the noisy case.

## V Proof of Proposition 1

###### Proof.

Recalling (32) and (33), we have

$$\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)P_{\Lambda^k}^{\perp} y - P_{\Lambda^k}^{\perp} A[j] h = B\left(\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right) + \left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)P_T^{\perp} e \tag{44}$$

and

$$\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)P_{\Lambda^k}^{\perp} y + P_{\Lambda^k}^{\perp} A[j] h = B\left(\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right) + \left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)P_T^{\perp} e. \tag{45}$$

Using the property of the norm and (36), we have

$$\begin{aligned}
&\left\|\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)P_{\Lambda^k}^{\perp} y - P_{\Lambda^k}^{\perp} A[j] h\right\|_2^2 - \left\|\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)P_{\Lambda^k}^{\perp} y + P_{\Lambda^k}^{\perp} A[j] h\right\|_2^2 \\
&= \frac{4t}{\|\alpha\|_{2,1}}\|r^k\|_2^2 - 4t\|A'[j] P_{\Lambda^k}^{\perp} y\|_2. \tag{46}
\end{aligned}$$

On the other hand, according to

$$\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)e'(P_T^{\perp})' B\left(\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right) = -\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)e' P_T^{\perp} A[j] h \tag{47}$$

and

$$\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)e'(P_T^{\perp})' B\left(\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right) = \left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)e' P_T^{\perp} A[j] h,$$

we obtain

$$\begin{aligned}
&\left\|B\left(\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right) + \left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)P_T^{\perp} e\right\|_2^2 \\
&\quad - \left\|B\left(\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right) + \left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)P_T^{\perp} e\right\|_2^2 \\
&= \left\|B\left(\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right)\right\|_2^2 - \left\|B\left(\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right)\right\|_2^2 \\
&\quad + \frac{4t}{\|\alpha\|_{2,1}}\|P_T^{\perp} e\|_2^2 - 4t\,e' P_T^{\perp} A[j] h. \tag{48}
\end{aligned}$$

By (44)–(46) and (48), it follows that

$$\begin{aligned}
&\left\|B\left(\left(t + \tfrac{1}{\|\alpha\|_{2,1}}\right)u - v\right)\right\|_2^2 - \left\|B\left(\left(t - \tfrac{1}{\|\alpha\|_{2,1}}\right)u + v\right)\right\|_2^2 \\
&\quad + \frac{4t}{\|\alpha\|_{2,1}}\|P_T^{\perp} e\|_2^2 - 4t\,e' P_T^{\perp} A[j] h = \frac{4t}{\|\alpha\|_{2,1}}\|r^k\|_2^2 - 4t\|A'[j] P_{\Lambda^k}^{\perp} y\|_2.
\end{aligned}$$

After some manipulations, we can conclude that (39) holds. ∎

## VI Proof of Lemma 1

###### Lemma 1.

Consider (4) and (32). Suppose that (7) holds. Then we have

$$\min_{i \in T}\|\xi_B[i]\|_2 \ge \min_{i \in T}\|x_B[i]\|_2 - \frac{\varepsilon}{\sqrt{1 - \delta_{K+1}^B}}.$$
###### Proof.

Define

$$P_T e = A[T](A'[T]A[T])^{-1}A'[T] e = A[$$