
The block mutual coherence property condition for signal recovery

Compressed sensing shows that a sparse signal can be stably recovered from incomplete linear measurements. In practical applications, however, some signals carry additional structure: their nonzero elements arise in blocks. We call such signals block-sparse signals. In this paper, the ℓ_2/ℓ_1-αℓ_2 minimization method for the stable recovery of block-sparse signals is investigated. Sufficient conditions based on the block mutual coherence property, together with associated upper bound estimates of the recovery error, are established to ensure that block-sparse signals can be stably recovered in the presence of noise via the ℓ_2/ℓ_1-αℓ_2 minimization method. To the best of our knowledge, this is the first block mutual coherence property condition for stably reconstructing block-sparse signals by the ℓ_2/ℓ_1-αℓ_2 minimization method.


1 Introduction

Compressed sensing (CS) is a novel sampling paradigm that has attracted a great deal of attention in different areas, including applied mathematics, machine learning, pattern recognition, and image processing. Signal sparsity is the elementary precondition of compressed sensing. In general, one considers the following model:

 y=Φx+z, (1.1)

where Φ∈R^{m×N} is a measurement matrix (m≪N) and z∈R^m is a vector of measurement errors. The aim is to reconstruct the unknown signal x∈R^N based on y and Φ.
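As a concrete illustration of model (1.1), the following minimal Python sketch generates a synthetic instance; the dimensions, the Gaussian measurement matrix, and the noise level are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, k = 40, 100, 5          # m measurements, ambient dimension N, sparsity k (m << N)

Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # random Gaussian measurement matrix

x = np.zeros(N)                                  # k-sparse ground-truth signal
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

z = 0.01 * rng.standard_normal(m)                # measurement noise vector
y = Phi @ x + z                                  # noisy linear measurements, model (1.1)
```

The recovery problem is then to estimate x from y and Phi alone.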

It is now well understood that the ℓ_1 minimization method provides an efficient approach to recovering sparse signals in numerous scenarios. The ℓ_1 minimization problem in this setting is

 min_x̃ ∥x̃∥_1  subject to  y−Φx̃ ∈ B. (1.2)

In the noise-free situation, we take B = {0}. In the noisy situation, we can put B = {z : ∥z∥_2 ≤ ϵ} [1] or B = {z : ∥Φ^⊤z∥_∞ ≤ ϵ} [2], where Φ^⊤ stands for the transpose of the matrix Φ. Now it is well known that the problem of sparse signal recovery has been well investigated in the framework of the mutual coherence property (MIP) introduced in [3]. Let

 μ = max_{i≠j} |Φ_i^⊤Φ_j| / (∥Φ_i∥_2∥Φ_j∥_2). (1.3)

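For reference, the mutual coherence (1.3) can be computed directly; the small sketch below (Python with NumPy, applied to an illustrative random Φ) is a straightforward transcription of the definition.

```python
import numpy as np

def mutual_coherence(Phi: np.ndarray) -> float:
    """Largest normalized inner product between distinct columns of Phi, as in (1.3)."""
    cols = Phi / np.linalg.norm(Phi, axis=0)     # normalize each column to unit l2 norm
    G = np.abs(cols.T @ cols)                    # absolute Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)                     # exclude the i = j terms
    return float(G.max())

rng = np.random.default_rng(0)
mu = mutual_coherence(rng.standard_normal((40, 100)))
assert 0.0 <= mu <= 1.0 + 1e-12                  # coherence is a normalized cosine
```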
It has been shown that a sparse signal can be reconstructed by ℓ_1 minimization with a small or zero error under appropriate conditions on the MIP [3] [1] [4] [5] [6]. In order to further enhance the reconstruction performance, Yin et al. [7] recently proposed the ℓ_1−ℓ_2 minimization method:

 min_x̃ ∥x̃∥_1 − ∥x̃∥_2  subject to  y−Φx̃ ∈ B. (1.4)

Additionally, Yin et al. conducted simulations showing that the method (1.4) behaves better than the method (1.2) in recovering sparse signals. Based on this fact, numerous studies [8] [9] [10] of the ℓ_1−ℓ_2 minimization approach have been developed. Besides, for recovering x, the researchers of [11] [12] proposed the ℓ_1−αℓ_2 minimization method:

 min_x̃ ∥x̃∥_1 − α∥x̃∥_2  subject to  y−Φx̃ ∈ B. (1.5)

When α = 1, (1.5) degenerates to the ℓ_1−ℓ_2 minimization method (1.4).

However, in practical applications there exist signals with a special structure, where the nonzero coefficients appear in blocks. We call such a structured signal a block-sparse signal in this paper. Such structured sparse signals commonly arise in various applications, e.g., foetal electrocardiogram (FECG) [13], motion segmentation [15], color imaging [14], and reconstruction of multi-band signals [16] [17]. Without loss of generality, suppose that x ∈ R^N consists of n blocks of size d (so N = nd). Then x can be expressed as

 x = [\underbrace{x_1, ⋯, x_d}_{x[1]}, \underbrace{x_{d+1}, ⋯, x_{2d}}_{x[2]}, ⋯, \underbrace{x_{N−d+1}, ⋯, x_N}_{x[n]}]^⊤, (1.6)

where x[i] represents the i-th block of x. We call a vector x a block s-sparse signal if x has at most s nonzero blocks, i.e., #{i : ∥x[i]∥_2 > 0} ≤ s. Therefore, the measurement matrix Φ can also be described as

 Φ = [\underbrace{Φ_1, ⋯, Φ_d}_{Φ[1]}, \underbrace{Φ_{d+1}, ⋯, Φ_{2d}}_{Φ[2]}, ⋯, \underbrace{Φ_{N−d+1}, ⋯, Φ_N}_{Φ[n]}], (1.7)

where Φ_i and Φ[j] respectively stand for the i-th column vector and the j-th sub-block matrix of Φ.
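As a quick illustration of the block structure (1.6), the following Python sketch builds a block s-sparse vector and counts its nonzero blocks; all concrete sizes are illustrative assumptions.

```python
import numpy as np

def block_sparsity(x: np.ndarray, d: int) -> int:
    """Number of nonzero blocks when x is split into consecutive blocks of length d."""
    blocks = x.reshape(-1, d)                     # row i holds the (i+1)-th block x[i]
    return int(np.count_nonzero(np.linalg.norm(blocks, axis=1)))

N, d, s = 20, 4, 2            # n = N/d = 5 blocks, at most s of them nonzero
rng = np.random.default_rng(1)
x = np.zeros(N)
for i in rng.choice(N // d, size=s, replace=False):
    x[i * d:(i + 1) * d] = rng.standard_normal(d)   # fill a randomly chosen block

assert block_sparsity(x, d) <= s                  # x is block s-sparse
```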

In this paper, we propose the following ℓ_2/ℓ_1−αℓ_2 minimization method to recover block-sparse signals:

 min_x̃ ∥x̃∥_{2,1} − α∥x̃∥_2  subject to  y−Φx̃ ∈ B, (1.8)

where 0 ≤ α ≤ 1. Furthermore, the mixed norm is defined by ∥x∥_{2,1} = Σ_{i=1}^n ∥x[i]∥_2. Observe that ∥x∥_2 ≤ ∥x∥_{2,1}. When α = 0, (1.8) returns to the ℓ_2/ℓ_1 minimization method [18], and when the block size d = 1, (1.8) reduces to the ℓ_1−αℓ_2 minimization method (1.5).
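The mixed norm and the objective of (1.8) are easy to transcribe. A minimal Python sketch (it assumes the block length d divides the signal length; the numerical example is illustrative):

```python
import numpy as np

def mixed_l21_norm(x: np.ndarray, d: int) -> float:
    """Mixed norm ||x||_{2,1}: sum of the l2 norms of the length-d blocks of x."""
    return float(np.linalg.norm(x.reshape(-1, d), axis=1).sum())

def objective(x: np.ndarray, d: int, alpha: float) -> float:
    """Objective of (1.8): ||x||_{2,1} - alpha * ||x||_2."""
    return mixed_l21_norm(x, d) - alpha * float(np.linalg.norm(x))

x = np.array([3.0, 4.0, 0.0, 0.0])     # two blocks of size d = 2
assert mixed_l21_norm(x, 2) == 5.0     # ||(3,4)||_2 + ||(0,0)||_2 = 5
```

Note that for a vector supported on a single block the two norms coincide, so the objective vanishes at α = 1, consistent with ∥x∥_2 ≤ ∥x∥_{2,1}.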

In this paper, we study block mutual coherence conditions for the stable recovery of signals with the block structure (1.6) via ℓ_2/ℓ_1−αℓ_2 minimization in the noisy case. Sufficient conditions for stable signal reconstruction by ℓ_2/ℓ_1−αℓ_2 minimization are established. Moreover, we also obtain upper bound estimates of the error in recovering block-sparse signals. As far as we know, this is the first block mutual coherence based sufficient condition for stable reconstruction via solving (1.8).

The remainder of the paper is organized as follows. In Section 2, we present some notation and lemmas that will be used. The main theoretical results and their proofs are given in Section 3. Finally, the conclusion is summarized in Section 4.

2 Preliminaries

In this section, we present several lemmas needed to prove our main results. Before giving these lemmas, we first explain some notation used in this paper.

Notation: E ⊆ {1, …, n} denotes a set of block indices, and E^c is the complement of E in {1, …, n}. For any vector x, x_E denotes the vector that keeps the blocks of x indexed by E and replaces the other blocks by zero. supp(x) = {i : ∥x[i]∥_2 > 0} represents the block support of x. In addition, we write h = x̂ − x, where x̂ is the solution of (1.8) and x is the signal to be recovered.

Definition 2.1.

(block mutual coherence) Given a matrix Φ with the block structure (1.7), we define its block mutual coherence as

 μ_τ = max_{1≤i<j≤n} (1/d) ∥(Φ[i])^⊤Φ[j]∥_2. (2.1)

Throughout, the blocks of Φ are assumed to be orthonormalized, i.e., (Φ[i])^⊤Φ[i] = I_d for 1 ≤ i ≤ n.
Lemma 2.1.

([19], Lemma 3) For any block s-sparse vector x, we have

 (1−(s−1)dμ_τ)∥x∥_2² ≤ ∥Φx∥_2² ≤ (1+(s−1)dμ_τ)∥x∥_2². (2.2)
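The two-sided bound (2.2) can be checked numerically. The sketch below assumes the reading of Definition 2.1 used here, namely μ_τ = max_{i≠j} ∥(Φ[i])^⊤Φ[j]∥_2 / d with orthonormalized blocks ((Φ[i])^⊤Φ[i] = I_d); all concrete sizes are illustrative.

```python
import numpy as np

def block_coherence(Phi: np.ndarray, d: int) -> float:
    """mu_tau = max_{i != j} ||Phi[i]^T Phi[j]||_2 / d over distinct blocks (Definition 2.1)."""
    n = Phi.shape[1] // d
    blocks = [Phi[:, i * d:(i + 1) * d] for i in range(n)]
    return max(np.linalg.norm(blocks[i].T @ blocks[j], 2) / d   # spectral norm of cross-Gram
               for i in range(n) for j in range(n) if i != j)

rng = np.random.default_rng(2)
m, d, n, s = 30, 3, 8, 2
# Orthonormalize each block via QR so that Phi[i]^T Phi[i] = I_d, as the lemma assumes.
Phi = np.hstack([np.linalg.qr(rng.standard_normal((m, d)))[0] for _ in range(n)])
mu = block_coherence(Phi, d)

x = np.zeros(n * d)                      # a block s-sparse test vector
for i in rng.choice(n, size=s, replace=False):
    x[i * d:(i + 1) * d] = rng.standard_normal(d)

nx2 = np.linalg.norm(x) ** 2
nPx2 = np.linalg.norm(Phi @ x) ** 2
assert (1 - (s - 1) * d * mu) * nx2 <= nPx2 + 1e-9   # lower bound of (2.2)
assert nPx2 <= (1 + (s - 1) * d * mu) * nx2 + 1e-9   # upper bound of (2.2)
```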
Lemma 2.2.

Let h = x̂ − x, where x̂ is the solution of (1.8), and let E = supp(x). Then

 ∥h_{E^c}∥_{2,1} ≤ ∥h_E∥_{2,1} + α∥h∥_2. (2.3)
Proof.

Recall that h = x̂ − x. Since x̂ is a minimizer of (1.8), we get

 ∥h+x∥_{2,1} − α∥h+x∥_2 = ∥x̂∥_{2,1} − α∥x̂∥_2 ≤ ∥x∥_{2,1} − α∥x∥_2.

By the reverse triangle inequality for ∥·∥_2, we get

 ∥h+x∥_{2,1} − ∥x∥_{2,1} ≤ α∥h+x∥_2 − α∥x∥_2 ≤ α∥h∥_2.

Note that x is block s-sparse with E = supp(x), so x_{E^c} = 0. Then

 α∥h∥_2 ≥ ∥h+x∥_{2,1} − ∥x∥_{2,1}
 = ∥(h+x)_E∥_{2,1} + ∥(h+x)_{E^c}∥_{2,1} − ∥x∥_{2,1}
 = ∥h_E + x_E∥_{2,1} + ∥h_{E^c} + x_{E^c}∥_{2,1} − ∥x∥_{2,1}
 ≥ ∥x_E∥_{2,1} − ∥h_E∥_{2,1} + ∥h_{E^c}∥_{2,1} − ∥x∥_{2,1}
 = ∥h_{E^c}∥_{2,1} − ∥h_E∥_{2,1},

which yields the result. ∎

3 Main results

With the preparations provided in Section 2, we establish the main results of this section: block mutual coherence conditions for the stable reconstruction of block s-sparse signals. We will show that if the measurement matrix Φ satisfies the block mutual coherence property μ_τ < 1/(3sd), then every block s-sparse signal can be stably reconstructed via the ℓ_2/ℓ_1−αℓ_2 minimization method in the presence of noise. We first consider stable reconstruction of block s-sparse signals under ℓ_2-bounded noise.

Theorem 3.1.

Consider the model (1.1) with ∥z∥_2 ≤ ϵ. Let x̂ be the solution of (1.8) with B = {z : ∥z∥_2 ≤ η}, where η ≥ ϵ. Assume that x is block s-sparse and μ_τ < 1/(3sd). Then x̂ fulfills

 ∥x̂−x∥_2 ≤
  [2(1−dμ_τ)(1+3αdμ_τ) / (1−(2+α²)dμ_τ+(1−α²)d²μ_τ²)] (ϵ+η),  s=1,
  [(1−3dμ_τ)(25αdμ_τ+√30) / (2[1−(6+α²)dμ_τ+(9−α²)d²μ_τ²])] (ϵ+η),  s=2,
  [(24√(3s) αdμ_τ + √(17[1+(1−9α²)dμ_τ])) / (1+(1−9α²)dμ_τ)] (ϵ+η),  s≥3. (3.1)

We then consider stable reconstruction of block s-sparse signals with error in the bounded set {z : ∥Φ^⊤z∥_∞ ≤ η}.

Theorem 3.2.

Let y = Φx + z be the noisy measurement of a signal x with ∥Φ^⊤z∥_∞ ≤ ϵ. If the block s-sparse signal x obeys the block mutual coherence property μ_τ < 1/(3sd), then the solution x̂ of (1.8) with B = {z : ∥Φ^⊤z∥_∞ ≤ η}, where η ≥ ϵ, fulfills

 ∥x̂−x∥_2 ≤
  [√d(1−dμ_τ)(3α+√6) / (1−(2+α²)dμ_τ+(1−α²)d²μ_τ²)] (ϵ+η),  s=1,
  [√d(1−3dμ_τ)(4α+√19) / (1−(6+α²)dμ_τ+(9−α²)d²μ_τ²)] (ϵ+η),  s=2,
  [√d(15α+3√(2s)) / (1+(1−9α²)dμ_τ)] (ϵ+η),  s≥3. (3.2)

Proof of Theorem 3.1.

Due to the feasibility of x and x̂ for (1.8), we get

 ∥Φh∥_2 = ∥Φx̂−Φx∥_2 ≤ ∥Φx−y∥_2 + ∥Φx̂−y∥_2 ≤ ϵ + η. (3.3)

Notice that h = h_E + h_{E^c}. It follows from the facts ∥(Φ[j])^⊤Φ[i]∥_2 ≤ dμ_τ for j ≠ i and ∥h_E∥_{2,1} ≤ √s∥h_E∥_2, together with Lemma 2.2 and (2.2), that

 |⟨Φh, Φh_E⟩| ≥ |⟨Φh_E, Φh_E⟩| − |⟨Φh_{E^c}, Φh_E⟩|
 ≥ (1−(s−1)dμ_τ)∥h_E∥_2² − |Σ_{j∈E^c} Σ_{i∈E} (h[j])^⊤(Φ[j])^⊤Φ[i]h[i]|
 ≥ (1−(s−1)dμ_τ)∥h_E∥_2² − Σ_{j∈E^c} Σ_{i∈E} ∥(Φ[j])^⊤Φ[i]∥_2 ∥h[i]∥_2 ∥h[j]∥_2
 ≥ (1−(s−1)dμ_τ)∥h_E∥_2² − dμ_τ ∥h_E∥_{2,1} ∥h_{E^c}∥_{2,1}
 ≥ (1−(s−1)dμ_τ)∥h_E∥_2² − √s dμ_τ ∥h_E∥_2 (∥h_E∥_{2,1} + α∥h∥_2)
 ≥ (1−(2s−1)dμ_τ)∥h_E∥_2² − α√s dμ_τ ∥h_E∥_2 ∥h∥_2. (3.4)

On the other hand, by (2.2), we get

 ∥Φh_E∥_2² ≤ (1+(s−1)dμ_τ)∥h_E∥_2². (3.5)

It follows from the Cauchy-Schwarz inequality, (3.3) and (3.5) that

 |⟨Φh, Φh_E⟩| ≤ ∥Φh∥_2 ∥Φh_E∥_2 ≤ (ϵ+η)√(1+(s−1)dμ_τ) ∥h_E∥_2. (3.6)

Combining (3.4) with (3.6) and using dμ_τ < 1/(3s), we obtain

 ∥h_E∥_2 ≤ [√(1+(s−1)dμ_τ) / (1−(2s−1)dμ_τ)] (ϵ+η) + [α√s dμ_τ / (1−(2s−1)dμ_τ)] ∥h∥_2
 ≤ [√(1+(s−1)/(3s)) / (1−(2s−1)/(3s))] (ϵ+η) + [α√s dμ_τ / (1−(2s−1)dμ_τ)] ∥h∥_2.

Then, one can easily check that

 ∥h_E∥_2 ≤
  (3/2)(ϵ+η) + [αdμ_τ/(1−dμ_τ)] ∥h∥_2,  s=1,
  (√42/3)(ϵ+η) + [√2 αdμ_τ/(1−3dμ_τ)] ∥h∥_2,  s=2,
  2√3(ϵ+η) + (α/√s) ∥h∥_2,  s≥3. (3.7)

Because of the fact that (Φ[i])^⊤Φ[i] = I_d for 1 ≤ i ≤ n, we get

 ∥Φh∥_2² = ⟨Φh, Φh⟩ = Σ_{i,j} ⟨Φ[i]h[i], Φ[j]h[j]⟩
 = Σ_i (h[i])^⊤(Φ[i])^⊤Φ[i]h[i] + Σ_{i≠j} (h[i])^⊤(Φ[i])^⊤Φ[j]h[j]
 ≥ Σ_i ∥h[i]∥_2² − Σ_{i≠j} ∥(Φ[i])^⊤Φ[j]∥_2 ∥h[i]∥_2 ∥h[j]∥_2
 ≥ ∥h∥_2² − dμ_τ Σ_{i≠j} ∥h[i]∥_2 ∥h[j]∥_2
 = ∥h∥_2² + dμ_τ Σ_i ∥h[i]∥_2² − dμ_τ Σ_{i,j} ∥h[i]∥_2 ∥h[j]∥_2
 = (1+dμ_τ)∥h∥_2² − dμ_τ ∥h∥_{2,1}²
 = (1+dμ_τ)∥h∥_2² − dμ_τ (∥h_E∥_{2,1} + ∥h_{E^c}∥_{2,1})²
 (a)≥ (1+dμ_τ)∥h∥_2² − dμ_τ (2∥h_E∥_{2,1} + α∥h∥_2)²
 (b)≥ (1+dμ_τ)∥h∥_2² − dμ_τ (2√s∥h_E∥_2 + α∥h∥_2)², (3.8)

where (a) follows from (2.3), and (b) is due to the Cauchy-Schwarz inequality.

Next, we establish (3.1) by discussing three cases: s=1, s=2, and s≥3. We first discuss the situation s=1. Combining (3.3), (3.7), and (3.8), we get

 (1+dμ_τ)∥h∥_2² − dμ_τ [3(ϵ+η) + α(1+dμ_τ)/(1−dμ_τ) ∥h∥_2]² ≤ (ϵ+η)².

The inequality above can be rearranged as

 [1−(2+α²)dμ_τ+(1−α²)d²μ_τ²]∥h∥_2² − 6αdμ_τ(1−dμ_τ)(ϵ+η)∥h∥_2 − [(1+9dμ_τ)(1−dμ_τ)²/(1+dμ_τ)](ϵ+η)² ≤ 0.

Therefore, due to dμ_τ < 1/3, we get

 [1−(2+α²)dμ_τ+(1−α²)d²μ_τ²]∥h∥_2² − 6αdμ_τ(1−dμ_τ)(ϵ+η)∥h∥_2 − [4(1−dμ_τ)²/(1+dμ_τ)](ϵ+η)² ≤ 0.
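The next step applies the following elementary consequence of the quadratic formula, stated here for completeness: if A t² − B t − C ≤ 0 with A > 0 and B, C ≥ 0, then t is at most the larger root, i.e.,

```latex
t \;\le\; \frac{B + \sqrt{B^{2} + 4AC}}{2A},
```

which is used with t = ∥h∥_2 and the coefficients A, B, C read off from the preceding inequality.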

Accordingly, by the quadratic formula, we get

 ∥h∥_2 ≤ 1/(2[1−(2+α²)dμ_τ+(1−α²)d²μ_τ²]) { 6αdμ_τ(1−dμ_τ)(ϵ+η)
 + { [6αdμ_τ(1−dμ_τ)(ϵ+η)]² + 16[1−(2+α²)dμ_τ+(1−α²)d²μ_τ²](ϵ+η)²(1−dμ_τ)²/(1+dμ_τ) }^{1/2} }
 (a)≤ [2(1−dμ_τ)(ϵ+η) / (1−(2+α²)dμ_τ+(1−α²)d²μ_τ²)] { 3αdμ_τ + √( [1−(2+α²)dμ_τ+(1−α²)d²μ_τ²] / (1+dμ_τ) ) }
 (b)≤ [2(1−dμ_τ)(1+3αdμ_τ) / (1−(2+α²)dμ_τ+(1−α²)d²μ_τ²)] (ϵ+η),

where (a) is from the fact √(a²+b²) ≤ a+b for any nonnegative constants a and b, and (b) is because both 1−(2+α²)dμ_τ+(1−α²)d²μ_τ² and 1/(1+dμ_τ) are monotonically decreasing in dμ_τ on (0, 1/3), so the square root in (a) is at most 1.

In the case s = 2, it follows from (3.3), (3.7), and (3.8) that

 (1+dμ_τ)∥h∥_2² − dμ_τ [(4√21/3)(ϵ+η) + α(1+dμ_τ)/(1−3dμ_τ) ∥h∥_2]² ≤ (ϵ+η)².

The above inequality can be recast as

 (1+dμ_τ)[1 − α²dμ_τ(1+dμ_τ)/(1−3dμ_τ)²]∥h∥_2² − [8√21 αdμ_τ(1+dμ_τ)/(3(1−3dμ_τ))](ϵ+η)∥h∥_2 − (112dμ_τ/3 + 1)(ϵ+η)² ≤ 0.

Owing to the condition of Theorem 3.1, dμ_τ < 1/6; thereby,

 [1−(6+α²)dμ_τ+(9−α²)d²μ_τ²]∥h∥_2² − (25/2)αdμ_τ(1−3dμ_τ)(ϵ+η)∥h∥_2 − [(15/2)(1−3dμ_τ)²/(1+dμ_τ)](ϵ+η)² ≤ 0.

By utilizing the quadratic formula, we obtain

 ∥h∥_2 ≤ 1/(2[1−(6+α²)dμ_τ+(9−α²)d²μ_τ²]) { (25/2)αdμ_τ(1−3dμ_τ)(ϵ+η) + { [(25/2)αdμ_τ(1−3dμ_τ)(ϵ+η)]²
 + 30[1−(6+α²)dμ_τ+(9−α²)d²μ_τ²](ϵ+η)²(1−3dμ_τ)²/(1+dμ_τ) }^{1/2} }
 ≤ [(1−3dμ_τ)(ϵ+η) / (2[1−(6+α²)dμ_τ+(9−α²)d²μ_τ²])] { 25αdμ_τ + √( 30[1−(6+α²)dμ_τ+(9−α²)d²μ_τ²] / (1+dμ_τ) ) }
 (a)≤ [(1−3dμ_τ)(25αdμ_τ+√30) / (2[1−(6+α²)dμ_τ+(9−α²)d²μ_τ²])] (ϵ+η),

where (a) is from the fact that both 1−(6+α²)dμ_τ+(9−α²)d²μ_τ² and 1/(1+dμ_τ) are monotonically decreasing in dμ_τ on (0, 1/6).

When s ≥ 3, through (3.3), (3.7), and (3.8), we gain

 (1+dμ_τ)∥h∥_2² − dμ_τ [4√(3s)(ϵ+η) + 3α∥h∥_2]² ≤ (ϵ+η)².

The above inequality can be rewritten as

 [1+(1−9α²)dμ_τ]∥h∥_2² − 24√(3s) αdμ_τ(ϵ+η)∥h∥_2 − (1+48s dμ_τ)(ϵ+η)² ≤ 0.

From dμ_τ < 1/(3s), we have 48s dμ_τ < 16; hence

 [1+(1−9α²)dμ_τ]∥h∥_2² − 24√(3s) αdμ_τ(ϵ+η)∥h∥_2 − 17(ϵ+η)² ≤ 0.

Consequently,

 ∥h∥_2 ≤ 1/(2[1+(1−9α²)dμ_τ]) { 24√(3s) αdμ_τ(ϵ+η)
 + { [24√(3s) αdμ_τ(ϵ+η)]² + 68[1+(1−9α²)dμ_τ](ϵ+η)² }^{1/2} }
 ≤ [(24√(3s) αdμ_τ + √(17[1+(1−9α²)dμ_τ])) / (1+(1−9α²)dμ_τ)] (ϵ+η). ∎

Proof of Theorem 3.2.

Notice that from the first portion of the proof of Theorem 3.1, we get

 |⟨Φh, Φh_E⟩| ≥ (1−(2s−1)dμ_τ)∥h_E∥_2² − α√s dμ_τ ∥h_E∥_2 ∥h∥_2. (3.9)

Employing the fact ∥Φ^⊤Φh∥_∞ ≤ ∥Φ^⊤(Φx̂−y)∥_∞ + ∥Φ^⊤(y−Φx)∥_∞ ≤ ϵ+η, where (Φ_E)^⊤Φh contains sd entries, we have

 |⟨Φh, Φ_E h_E⟩| ≤ ∥h_E∥_2 ∥(Φ_E)^⊤Φh∥_2 ≤ √(sd)(ϵ+η) ∥h_E∥_2,

which, combined with (3.9) and the condition dμ_τ < 1/(3s), leads to

 ∥h_E∥_2 ≤ [√(sd) / (1−(2s−1)dμ_τ)] (ϵ+η) + [α√s dμ_τ / (1−(2s−1)dμ_τ)] ∥h∥_2
 ≤ [√(sd) / (1−(2s−1)/(3s))] (ϵ+η) + [α√s dμ_τ / (1−(2s−1)dμ_τ)] ∥h∥_2.

Thus, it is easy to check that

 ∥h_E∥_2 ≤
  (3√d/2)(ϵ+η) + [αdμ_τ/(1−dμ_τ)] ∥h∥_2,  s=1,
  2√(2d)(ϵ+η) + [√2 αdμ_τ/(1−3dμ_τ)] ∥h∥_2,  s=2,
  3√(sd)(ϵ+η) + (α/√s) ∥h∥_2,  s≥3. (3.10)

By (3.8), we get

 ⟨Φh, Φh⟩ ≥ (1+dμ_τ)∥h∥_2² − dμ_τ (2√s∥h_E∥_2 + α∥h∥_2)². (3.11)

By (2.3), the fact that ∥Φ^⊤Φh∥_∞ ≤ ϵ+η, and the Cauchy–Schwarz inequality, we get

 ⟨Φh, Φh⟩ = h^⊤Φ^⊤Φh ≤ ∥h∥_1 ∥Φ^⊤Φh∥_∞ ≤ Σ_{i=1}^n ∥h[i]∥_1 (ϵ+η)
 ≤ (ϵ+η) Σ_{i=1}^n √d ∥h[i]∥_2 = √d(ϵ+η)∥h∥_{2,1}
 = √d(ϵ+η)(∥h_E∥_{2,1} + ∥h_{E^c}∥_{2,1}) ≤ √d(ϵ+η)(2∥h_E∥_{2,1} + α∥h∥_2)
 ≤ √d(ϵ+η)(2√s∥h_E∥_2 + α∥h∥_2),

which, combined with (3.11), implies that

 √d(ϵ+η)(2√s∥h_E∥_2 + α∥h∥_2) ≥ (1+dμ_τ)∥h∥_2² − dμ_τ (2√s∥h_E∥_2 + α∥h∥_2)².