Phase Transition of Convex Programs for Linear Inverse Problems with Multiple Prior Constraints

A sharp phase transition emerges in convex programs when solving the linear inverse problem, which aims to recover a structured signal from its linear measurements. This paper studies this phenomenon in theory under Gaussian random measurements. Different from previous studies, in this paper, we consider convex programs with multiple prior constraints. These programs are encountered in many cases, for example, when the signal is sparse and its ℓ_2 norm is known beforehand, or when the signal is sparse and non-negative simultaneously. Given such a convex program, to analyze its phase transition, we introduce a new set and a new cone, called the prior restricted set and prior restricted cone, respectively. Our results reveal that the phase transition of a convex problem occurs at the statistical dimension of its prior restricted cone. Moreover, to apply our theoretical results in practice, we present two recipes to accurately estimate the statistical dimension of the prior restricted cone. These two recipes work under different conditions, and we give a detailed analysis for them. To further illustrate our results, we apply our theoretical results and the estimation recipes to study the phase transition of two specific problems, and obtain computable formulas for the statistical dimension and related error bounds. Simulations are provided to demonstrate our results.

I Introduction

The linear inverse problem refers to the problem of recovering an unknown signal from its linear measurements. It is frequently encountered in many applications, such as image processing [1], network data analysis [2] and so on. In practice, we often have fewer measurements than the dimension of the true signal. As a result, the problem is generally ill-posed. Therefore, to make recovery possible, we may assume that the true signal has low complexity under some structures. Commonly considered structures include sparsity and low rank, and the corresponding recovery problems are known as compressed sensing and matrix completion.

Given the structures of the signal, a popular approach for recovery is to solve a convex program that enforces the known prior information about the structures. For example, we pursue a sparse recovery through ℓ1 norm minimization in the compressed sensing problem, and a low-rank recovery through nuclear norm minimization in the matrix completion problem. This approach is shown to be simple and efficient in many practical applications.

Meanwhile, a sharp phase transition is numerically observed when we use convex programs to recover structured signals. The phase transition refers to the phenomenon that for a certain convex program, when the measurement number is greater than some threshold, it succeeds with high probability; while when the measurement number is smaller than another threshold, it fails with high probability. When we say a sharp phase transition, we mean that the transition region is very narrow. This phenomenon has attracted many researchers, and in the past several years much work has been done to explain it in theory, with some exciting results.

In [3, 4, 5, 6], Donoho and Tanner analyzed the phase transition of the compressed sensing problem in the asymptotic regime. They first demonstrated that the ℓ1 minimization approach succeeds if and only if the random projection preserves the structure of the faces of the cross-polytope, and then used the theory of polytope angles to deal with this problem. In [7, 8, 9], the authors established a connection between the phase transition and statistical decision theory, and revealed that the phase transition curve coincides with the minimax risk curve of denoising in many linear inverse problems. In [10], Amelunxen et al. presented a comprehensive analysis of the phase transition of convex programs in the linear inverse problem. They first formulated the phase transition problem as a geometry problem, then used tools from the theory of conic integral geometry to study this geometry problem. The results show that the phase transition of convex programs occurs at the statistical dimension of the descent cone of the structure inducing function at the true signal. In [11], Rudelson and Vershynin studied the performance of the ℓ1 minimization approach using the “escape from the mesh” theorem [12] in Gaussian process theory. Later, their ideas were extended in the papers [13, 14, 10], and the phase transition was identified by incorporating the arguments of Rudelson and Vershynin with a polarity argument. The obtained results are stated in terms of the Gaussian width, and are consistent with the results in [10]. In [15], Bayati et al. made use of a state evolution framework, inspired by ideas from statistical physics, and demonstrated that the phase transition of ℓ1 minimization is universal over a class of sensing matrices. Recently, in [16], Oymak and Tropp demonstrated universality laws for the phase transition of convex programs for linear inverse problems, over a class of sensing matrices.

However, most of the above work focuses on the case when we have no additional prior constraints. But in many practical problems, we do have some additional prior information. For example, in image processing problems, in addition to the structures about texture etc., the fact that the pixel values are non-negative may help to recover the true image. In these cases, we would solve convex problems with (multiple) prior constraints to recover the true signal. While these problems exhibit a sharp phase transition as well, theoretical understanding of this phase transition is far from satisfactory. We mention that in [3, 4], Donoho and Tanner studied the ℓ1 minimization problem with an additional non-negativity constraint, and a “weak threshold” and a “strong threshold”, which mark the phase transition, were obtained in the asymptotic regime. Nevertheless, a comprehensive analysis of the phase transition of this problem does not exist. Furthermore, when the signal has structures other than sparsity, or when we have prior constraints other than non-negativity, it remains an open problem to prove the existence and identify the location of the phase transition.

In this paper, we study the phase transition of convex programs with multiple prior constraints under Gaussian random measurements. In our analysis, we first introduce a new set and a new cone, called the prior restricted set and prior restricted cone, respectively. Next, we give a sufficient and necessary condition for the success of convex programs, which involves the prior restricted cone. It states that convex programs succeed if and only if the intersection of the null space of the sensing matrix and the prior restricted cone contains only the origin. This condition has been well studied by Amelunxen et al. in [10] using the theory of conic integral geometry. Utilizing their results, we obtain that the phase transition of convex programs with multiple prior constraints occurs at the statistical dimension of the prior restricted cone. Thus, intuitively, the “dimension” of the prior restricted cone (i.e., the statistical dimension of this cone) can be seen as a measure of how much we know about the true signal from the prior information, if convex programs are used to recover signals. Moreover, to apply our theoretical results in practice, we present two recipes to accurately estimate the statistical dimension of the prior restricted cone. The two recipes work under different conditions, and we give a detailed analysis for them. To further illustrate our results, we apply our theoretical results and the estimation recipes to study the phase transition of two specific problems: One is the linear inverse problem with ℓ2 norm constraints, and the other is the linear inverse problem with non-negativity constraints. We obtain computable formulas for the statistical dimension and related error bounds for both problems. Simulations demonstrate that our results match the empirical success probability closely.

The rest of the paper is organized as follows: In section II, we give a precise statement of the problems studied in this paper. In section III, some preliminaries and notations are introduced. In section IV, we state our main results. In section V, we apply our main results to study the phase transition of two specific problems. In section VI, simulations are provided to demonstrate our theoretical results. In section VII, we conclude the paper.

II Problem Formulation

In this section, we provide a precise statement of the problems studied in this paper. In section II-A, we introduce the linear inverse problem. In section II-B, we introduce the convex optimization procedure to recover signals from compressed, linear measurements.

II-A Linear Inverse Problem

In the linear inverse problem, we observe a signal via its linear measurements:

 y=Ax⋆, (1)

where y∈Rm is the measurement vector, A∈Rm×n is the sensing matrix, and x⋆∈Rn is the unknown signal. Our goal is to recover x⋆ given the knowledge of y and A.

II-B Convex Optimization Procedure

In many applications, we often have compressed measurements, i.e., m<n. As a result, recovering x⋆ from y and A is an ill-posed problem. Hence, to make recovery possible, it is commonly assumed that the signal x⋆ is well structured. In this case, a simple yet efficient approach for recovery is to solve a convex program which forces the solution to have the corresponding structures. Moreover, apart from the assumed structures, we may have some additional prior information about x⋆. For example, we may know the ℓ2 norm of x⋆ beforehand, or that the signal is non-negative. The additional prior information often acts as constraints.

Suppose that f0 is a proper convex function which promotes the structures of x⋆, and f1,…,fk are proper convex functions which promote the additional prior information about x⋆. Then in practice the following convex program is often used to recover the true signal x⋆:

 minf0(x),s.t. y=Ax, fi(x)≤fi(x⋆), i=1,…,k. (2)

We say that the convex problem (2) succeeds if it has a unique solution and this solution equals x⋆; otherwise, we say it fails.

In this paper, we study the phase transition of problem (2). The analysis relies on some knowledge from convex analysis and convex geometry. Hence, in the next section, we give a brief introduction about the needed knowledge.

III Preliminaries

In this section, we present some preliminaries that will be used in our analysis.

III-A Subdifferentials of Convex Functions

Suppose h:Rn→R∪{+∞} is a proper convex function. Then the subdifferential of h at z is the set

 ∂h(z)={u∈Rn:h(z+t)≥h(z)+⟨u,t⟩ for all t∈Rn}.

III-B Descent Cones and Normal Cones of Convex Functions

The descent cone of a proper convex function h at z is the set of all non-ascent directions of h at z:

 D(h,z)={d∈Rn:∃a>0,h(z+a⋅d)≤h(z)}.

The normal cone of a proper convex function h at z is the polar of the descent cone of h at z:

 N(h,z)=D(h,z)∘={u∈Rn:⟨u,d⟩≤0 for all d∈D(h,z)}.

Suppose ∂h(z) is non-empty, compact, and does not contain the origin. Then the normal cone is the cone generated by the subdifferential [17, Corollary 23.7.1]:

 N(h,z)=⋃τ≥0τ⋅∂h(z).

III-C Normal Cone to Convex Sets

Let C⊆Rn be a convex set with ¯x∈C. The normal cone to C at ¯x is

 N(¯x;C):={v∈Rn: ⟨v,x−¯x⟩≤0, ∀x∈C}.

III-D Statistical Dimension of Convex Cones

For a convex cone K⊆Rn, the statistical dimension of K is defined as:

 δ(K)=E[(supt∈K∩Bn⟨g,t⟩)2], where g∼N(0,In).
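For intuition, the statistical dimension can also be estimated by Monte Carlo via the equivalent expression δ(K)=E∥ΠK(g)∥2 [10, Proposition 3.1]. The following sketch (our illustration; the choice of cone, dimension, and sample size are ours) uses the non-negative orthant Rn+, whose projection is coordinate-wise clipping and whose statistical dimension is exactly n/2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_samples = 4, 200_000

# delta(K) = E ||Pi_K(g)||^2; for K = R^n_+ the projection clips at zero.
g = rng.standard_normal((num_samples, n))
proj = np.clip(g, 0.0, None)
est = float(np.mean(np.sum(proj**2, axis=1)))

print(f"estimated delta(R^{n}_+) = {est:.3f}, exact value = {n / 2}")
```

Each coordinate contributes E[(gi)+2]=1/2, so the estimate concentrates around n/2.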

The statistical dimension of a convex cone has a number of important properties, see [10, Proposition 3.1]. Moreover, the statistical dimension satisfies the following additivity property:

Fact 1 (Additivity of statistical dimension).

Let K1 and K2 be two convex cones in Rn. The following holds:

1. If ⟨u,v⟩=0 for any u∈K1 and v∈K2, then

 δ(K1+K2)=δ(K1)+δ(K2).
2. If ⟨u,v⟩≤0 for any u∈K1 and v∈K2, then

 δ(K1+K2)≥δ(K1)+δ(K2).
3. If ⟨u,v⟩≥0 for any u∈K1 and v∈K2, then

 δ(K1+K2)≤δ(K1)+δ(K2).
Proof.

See Appendix F. ∎

Fact 1 generalizes the fact that for two linear subspaces L1 and L2 with L1⊥L2, we have dim(L1+L2)=dim(L1)+dim(L2), since the statistical dimension extends the dimension of a linear subspace to the class of convex cones [10].

III-E Indicator Function of a Convex Set

Let C⊆Rn be a convex set. Then the indicator function of the set C is defined as

 IC(x)={0,when x∈C,∞,when x∉C.

For any x∈C, the subdifferential of IC at x is [18, Example 2.32]:

 ∂IC(x)=N(x;C). (3)

III-F Prior Restricted Set and Prior Restricted Cone

We first define the prior restricted set of convex problem (2):

Definition 1 (Prior Restricted Set).

For the convex problem (2), suppose x⋆ is the true signal. Then we define its prior restricted set as the following set:

 S={d∈Rn:fi(x⋆+d)≤fi(x⋆), i=0,1,…,k}.

Using this set, we can define the prior restricted cone of problem (2):

Definition 2 (Prior Restricted Cone).

For the convex problem (2), suppose x⋆ is the true signal. Then we define its prior restricted cone as the following set:

 C=cone(S)={u∈Rn:∃t>0,fi(x⋆+t⋅u)≤fi(x⋆), i=0,1,…,k}.

III-G Notations

Throughout, we denote the non-negative orthant in Rn by Rn+:={x∈Rn:xi≥0, i=1,…,n}, and the positive part of a real number a by a+:=max{a,0}.

For a set C⊆Rn, we use int(C) to denote its interior:

 int(C):={x∈C:B(x,r)⊆C for some r>0}.

Denote the affine hull of C:

 aff(C)={θ1x1+⋯+θkxk:x1,…,xk∈C, θ1+⋯+θk=1},

and the relative interior of the set C:

 ri(C)={x∈C:B(x,r)∩aff(C)⊆C for some r>0}.

The closure of C is denoted by either cl(C) or ¯C.

Given a point u∈Rn and a subset C⊆Rn, the distance of u to the set C is denoted by dist(u,C):

 dist(u,C):=infx∈C∥u−x∥2.

We denote the projection of u onto the set C:

 ΠC(u):={x∈C:∥u−x∥2=dist(u,C)}.

If C is non-empty, convex, and closed, the projection ΠC(u) is a singleton. In this case, ΠC(u) may also denote the unique point in it, depending on the context.

IV Main Results

In this section, we state our main results in this paper. We first give results about the phase transition of problem (2) in subsection IV-A, and then present two recipes to estimate the statistical dimension of the prior restricted cone in subsections IV-B and IV-C.

IV-A Phase Transition of Convex Programs with Multiple Prior Constraints

In this subsection, we state our main results about the phase transition of problem (2). We begin with a geometric condition which determines the success of problem (2):

Lemma 1 (Optimality condition).

Consider problem (2) to recover the true signal x⋆. If fi is a proper convex function for any i=0,1,…,k, problem (2) succeeds if and only if

 C∩null(A)={0},

where C denotes the prior restricted cone of problem (2).

Proof.

See Appendix A. ∎

Fig. 1 gives a geometric interpretation of Lemma 1. Note that when there is no additional prior constraint, i.e., when we consider problem

 minf0(x),s.t. y=Ax (4)

to recover x⋆, the prior restricted cone is exactly D(f0,x⋆), the descent cone of f0 at x⋆. In this case, our optimality condition, Lemma 1, degenerates to the optimality condition given by Chandrasekaran et al. in [13, Fact 2.8].

Using Lemma 1, we can study the phase transition of problem (2). For this purpose, we assume that the sensing matrix is random. In particular, we assume that A is drawn at random from the standard normal distribution on Rm×n. According to Lemma 1, to study the phase transition of problem (2), it is sufficient to answer the following questions:

• Under what conditions does the kernel of A intersect the cone C trivially with high probability?

• Under what conditions does the kernel of A intersect the cone C nontrivially with high probability?

These questions have been well studied in recent years. We borrow the answer from [10]:

Proposition 1 ([10], Theorem I).

Fix a tolerance ζ∈(0,1). Suppose the matrix A∈Rm×n has independent standard normal entries, and K⊆Rn denotes a convex cone. Then when

 m≤δ(K)−aζ√n,

we have K∩null(A)={0} with probability less than ζ. On the contrary, when

 m≥δ(K)+aζ√n,

we have K∩null(A)={0} with probability at least 1−ζ. The quantity aζ:=√(8log(4/ζ)).

Proposition 1 is a direct consequence of [10, Theorem I]. The proof involves the theory of conic integral geometry. See reference [10] for details. Now combining Lemma 1 and Proposition 1, we obtain our main results about the phase transition:

Theorem 1 (Phase transition of convex programs with multiple prior constraints).

Consider convex problem (2) to solve the linear inverse problem. If the sensing matrix A has independent standard normal entries, the phase transition of problem (2) occurs at the statistical dimension of its prior restricted cone C. More precisely, for any ζ∈(0,1), when the measurement number satisfies

 m≤δ(C)−aζ√n,

problem (2) fails with probability at least 1−ζ. On the contrary, when

 m≥δ(C)+aζ√n,

problem (2) succeeds with probability at least 1−ζ. The quantity aζ:=√(8log(4/ζ)).

Since the phase transition occurs at the statistical dimension of the prior restricted cone, intuitively, we can see it as a measure of how much we know about the true signal from the prior information.

Remark 1.

We can apply our Theorem 1 to analyze the phase transition of problem (4). The prior restricted cone of problem (4) is exactly D(f0,x⋆), the descent cone of f0 at x⋆. Thus, in this case, our Theorem 1 can be read as: The phase transition of problem (4) occurs at the statistical dimension of D(f0,x⋆). This coincides with the results in [10, Theorem II].
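The transition predicted by Theorem 1 is easy to observe numerically. The sketch below (our illustration; the problem sizes, the seed, and the use of scipy's LP solver are our own choices) recovers a sparse non-negative x⋆ from y=Ax⋆ by solving min ∑ixi s.t. y=Ax, x≥0, an instance of problem (2) with f0=∥⋅∥1 and a non-negativity constraint, once well above and once well below the predicted threshold:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, s = 40, 4
x_star = np.zeros(n)
x_star[:s] = rng.uniform(1.0, 2.0, size=s)   # sparse, non-negative signal

def recovered(m):
    """Solve min sum(x) s.t. Ax = A x_star, x >= 0 for a Gaussian A."""
    A = rng.standard_normal((m, n))
    res = linprog(c=np.ones(n), A_eq=A, b_eq=A @ x_star,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0 and float(np.linalg.norm(res.x - x_star)) < 1e-6

ok_high = recovered(30)  # measurement number well above the threshold
ok_low = recovered(4)    # measurement number well below the threshold
print("m=30 recovers x_star:", ok_high)
print("m=4 recovers x_star:", ok_low)
```

Repeating the experiment over a grid of m and averaging the success indicator reproduces the sharp empirical transition curve.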

IV-B Statistical Dimension of Prior Restricted Cones: Part I

In theory, Theorem 1 has revealed that the phase transition of problem (2) occurs at the statistical dimension of its prior restricted cone. However, if we want to apply these results in practice, we must find ways to compute the statistical dimension efficiently. For this purpose, in this subsection, we present a recipe that provides a reliable estimate of the statistical dimension of the prior restricted cone, when all the subdifferentials ∂fi(x⋆) are compact and do not contain the origin. The idea is inspired by the recipe proposed by Amelunxen et al. [10, pp. 244-248] for the computation of the statistical dimension of a descent cone.

The basic idea of the recipe is that the statistical dimension of the prior restricted cone can be expressed in terms of its polar, which has a close relation with the normal cones, and furthermore with the subdifferentials, of the functions fi, i=0,1,…,k. Let us first express the statistical dimension in terms of normal cones.

Lemma 2.

Consider problem (2) to recover the true signal x⋆. Let D(fi,x⋆) and N(fi,x⋆) denote the descent cone and normal cone of fi at x⋆ for i=0,1,…,k, respectively. Suppose that

 ri(D(f0,x⋆))∩ri(D(f1,x⋆))∩⋯∩ri(D(fk,x⋆))≠∅.

Then the polar of the prior restricted cone can be expressed as follows:

 C∘=k∑i=0N(fi,x⋆),

and the statistical dimension of the prior restricted cone can be expressed as follows:

 δ(C)=Edist2(g,k∑i=0N(fi,x⋆)).
Proof.

See Appendix B-A. ∎

Lemma 2 establishes connections between the prior restricted cone and the individual normal cones, but it does not allow us to compute the statistical dimension of the prior restricted cone efficiently. This can be done by incorporating the subdifferential expression for normal cones.

Theorem 2 (The statistical dimension of the prior restricted cone).

Let C be the prior restricted cone of problem (2), and let x⋆ be the true signal. Assume that for any i=0,1,…,k, the subdifferential ∂fi(x⋆) is non-empty, compact, and does not contain the origin. Assume that the descent cones satisfy

 ri(D(f0,x⋆))∩ri(D(f1,x⋆))∩⋯∩ri(D(fk,x⋆))≠∅.

Define the function J:Rk+1+→R to be

 J(τ):=E[dist2(g,k∑i=0τi⋅∂fi(x⋆))],

where g∼N(0,In). Then the statistical dimension of the prior restricted cone has the following upper bound:

 δ(C)≤infτ∈Rk+1+J(τ).

The function J is convex, continuous, and continuously differentiable in int(Rk+1+). It attains its minimum in a compact subset of Rk+1+. Moreover, suppose that

 the two sets k∑i=0τi⋅∂fi(x⋆) and k∑i=0~τi⋅∂fi(x⋆) are not identical, for any τ≠~τ∈Rk+1+.

Then the function J is strictly convex, and attains its minimum at a unique point. For the differential of J at the boundary of Rk+1+, we interpret the partial derivative ∂J/∂τi as the right derivative, if τi=0.

Proof.

Since the subdifferential ∂fi(x⋆) is non-empty, compact, and does not contain the origin, the normal cone N(fi,x⋆) is the cone generated by the subdifferential [17, Corollary 23.7.1]. Thus, by Lemma 2,

 δ(C) =E[dist2(g,k∑i=0N(fi,x⋆))]=E[dist2(g,k∑i=0(⋃τi≥0τi⋅∂fi(x⋆)))] =E[dist2(g,⋃τ∈Rk+1+(k∑i=0τi⋅∂fi(x⋆)))]=E[infτ∈Rk+1+dist2(g,k∑i=0τi⋅∂fi(x⋆))] ≤infτ∈Rk+1+E[dist2(g,k∑i=0τi⋅∂fi(x⋆))].

The inequality results from Jensen’s inequality. The proof of properties of appears in Appendix B-B and Appendix B-C. ∎

Theorem 2 provides an effective way to estimate the statistical dimension of the prior restricted cone, when all the subdifferentials are non-empty, compact, and do not contain the origin. We summarize it in Recipe 1. In subsection V-A, we apply Recipe 1 to study the phase transition of linear inverse problems with ℓ2 norm constraints.
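To make Recipe 1 concrete, the sketch below (our illustration; the choice f0=∥⋅∥1 with an ℓ2 norm constraint, the signal, and the sample size are ours) estimates J(τ) by Monte Carlo and minimizes it numerically. For a support coordinate i, the set τ0⋅∂∥⋅∥1(x⋆)+τ1⋅x⋆/∥x⋆∥2 contributes the single point τ0⋅sign(x⋆i)+τ1⋅x⋆i/∥x⋆∥2; for an off-support coordinate it contributes the interval [−τ0,τ0], so the squared distance splits coordinate-wise:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, s, num_samples = 100, 10, 2000
x = np.zeros(n)
x[:s] = 1.0                          # s-sparse true signal
u = x / np.linalg.norm(x)            # unit vector x / ||x||_2
g = rng.standard_normal((num_samples, n))

def J1(tau):
    t0, t1 = tau
    # support coords: distance to the point t0*sign(x_i) + t1*u_i
    d_on = (g[:, :s] - (t0 * np.sign(x[:s]) + t1 * u[:s])) ** 2
    # off-support coords: distance to the interval [-t0, t0]
    d_off = np.clip(np.abs(g[:, s:]) - t0, 0.0, None) ** 2
    return float(np.mean(d_on.sum(axis=1) + d_off.sum(axis=1)))

res = minimize(J1, x0=np.array([1.0, 0.5]), bounds=[(0, None), (0, None)])
print("estimated upper bound on delta(C):", res.fun, "at tau =", res.x)
```

In our runs the minimizer has τ1≈0, so the ℓ2 constraint barely moves the estimate; this matches the analysis of the ℓ2-constrained problem in Section V-A.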

Remark 2.

In [10], Amelunxen et al. studied the phase transition of problem (4). They proved that the phase transition occurs at the statistical dimension of the descent cone D(f0,x⋆), and provided a recipe to compute it. Our Recipe 1 can be seen as a generalization of this recipe from one function to multiple functions, and our proof idea for Theorem 2 is inspired by the proof of [10, Proposition 4.1].

Iv-C Statistical Dimension of Prior Restricted Cones: Part II

Recipe 1 gives a reliable estimate of the statistical dimension of the prior restricted cone, when all the subdifferentials are compact and do not contain the origin. However, in many practical applications, we may encounter the case that some of the subdifferentials are unbounded or contain the origin. For example, consider the linear inverse problem with non-negativity constraints, i.e.,

 minf0(x),s.t. y=Ax, x≥0. (5)

Note that the constraint x≥0 is equivalent to IRn+(x)≤IRn+(x⋆). The subdifferential of an indicator function has the specific formula (3). It is easy to verify that the subdifferential of IRn+ at x⋆ contains the origin, and if x⋆ contains zero entries, it is unbounded. Therefore, Recipe 1 cannot be used directly, and we have to find other ways to compute the statistical dimension. Actually, an effective way in this case is to express the normal cones via the subdifferentials only for those functions whose subdifferentials are compact and do not contain the origin.

Theorem 3 (The statistical dimension of the prior restricted cone).

Let C be the prior restricted cone of problem (2), and let x⋆ be the true signal. Assume that for i=0,1,…,q, the subdifferentials ∂fi(x⋆) are non-empty, compact, and do not contain the origin, and for i=q+1,…,k, the subdifferentials ∂fi(x⋆) are non-empty, where q≤k is a natural number. Assume that the descent cones satisfy

 ri(D(f0,x⋆))∩ri(D(f1,x⋆))∩⋯∩ri(D(fk,x⋆))≠∅.

Assume that for any nonzero τ∈Rq+1+, we have

 0∉cl(k∑i=q+1N(fi,x⋆))+q∑i=0τi⋅∂fi(x⋆).

Define the function J:Rq+1+→R by

 J(τ):=E[dist2(g,q∑i=0τi⋅∂fi(x⋆)+k∑i=q+1N(fi,x⋆))],

where g∼N(0,In). Then the statistical dimension of the prior restricted cone has the following upper bound:

 δ(C)≤infτ∈Rq+1+J(τ).

The function J is convex, continuous, and continuously differentiable in int(Rq+1+). It attains its minimum in a compact subset of Rq+1+. Moreover, suppose that for any τ≠~τ∈Rq+1+,

 the two sets cl(k∑i=q+1N(fi,x⋆))+q∑i=0τi⋅∂fi(x⋆) and cl(k∑i=q+1N(fi,x⋆))+q∑i=0~τi⋅∂fi(x⋆) are not identical.

Then the function J is strictly convex, and attains its minimum at a unique point. For the differential of J at the boundary of Rq+1+, we interpret the partial derivative ∂J/∂τi as the right derivative, if τi=0.

Proof.

We proceed similarly as in Theorem 2,

 δ(C) =E[dist2(g,k∑i=0N(fi,x⋆))]=E[dist2(g,q∑i=0(⋃τi≥0τi⋅∂fi(x⋆))+k∑i=q+1N(fi,x⋆))] =E[dist2(g,⋃τ∈Rq+1+(q∑i=0τi⋅∂fi(x⋆)+k∑i=q+1N(fi,x⋆)))] =E[infτ∈Rq+1+dist2(g,q∑i=0τi⋅∂fi(x⋆)+k∑i=q+1N(fi,x⋆))] ≤infτ∈Rq+1+E[dist2(g,q∑i=0τi⋅∂fi(x⋆)+k∑i=q+1N(fi,x⋆))].

The inequality results from Jensen’s inequality. The proof of properties of appears in Appendix C-A and Appendix C-B. ∎

Theorem 3 further generalizes our Theorem 2 and [10, Proposition 4.1] to the case when some of the subdifferentials are unbounded or contain the origin, and the proofs share similar ideas. We summarize it in Recipe 2. In subsection V-B, we apply Recipe 2 to study the phase transition of linear inverse problems with non-negativity constraints.
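As a concrete instance of Recipe 2, the sketch below (our illustration; f0=∥⋅∥1 and the problem sizes are our own choices) treats a sparse x⋆ with strictly positive support entries under a non-negativity constraint, as in problem (5). The normal cone N(IRn+,x⋆) consists of vectors that vanish on the support and are non-positive off it, so the set τ0⋅∂∥⋅∥1(x⋆)+N(IRn+,x⋆) contributes the point {τ0} on each support coordinate and the half-line (−∞,τ0] on each off-support coordinate:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
n, s, num_samples = 100, 10, 2000
g = rng.standard_normal((num_samples, n))

def J(t):
    # support coords (x_star_i > 0): distance to the single point {t}
    d_on = (g[:, :s] - t) ** 2
    # off-support coords: distance to the half-line (-inf, t]
    d_off = np.clip(g[:, s:] - t, 0.0, None) ** 2
    return float(np.mean(d_on.sum(axis=1) + d_off.sum(axis=1)))

res = minimize_scalar(J, bounds=(0.0, 5.0), method="bounded")
print("estimated upper bound on delta(C) with x >= 0:", res.fun)
```

In our runs this bound is noticeably smaller than the corresponding estimate without the non-negativity constraint, reflecting the extra prior information.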

V Examples and Applications

In this section, we make use of our theoretical results and the computation recipes to study the phase transition of several specific problems. In subsection V-A, we study the phase transition of linear inverse problems with norm constraints, and in subsection V-B, we study the phase transition of linear inverse problems with non-negativity constraints.

V-A Phase Transition of Linear Inverse Problem with ℓ2 Norm Constraints

In this subsection, we make use of Recipe 1 to study the phase transition of linear inverse problems with ℓ2 norm constraints. In other words, we study the phase transition of the following convex problem:

 minf0(x),s.t. y=Ax, ∥x∥2≤∥x⋆∥2. (6)

Note that for x⋆≠0, the subdifferential of the ℓ2 norm at x⋆ is the singleton {x⋆/∥x⋆∥2}. Therefore, applying Recipe 1 directly, we obtain the following results:

Corollary 1.

Let C1 be the prior restricted cone of problem (6), and let x⋆ be the true signal. Assume that the subdifferential ∂f0(x⋆) is non-empty, compact, and does not contain the origin. Assume that the descent cones satisfy

 ri(D(f0,x⋆))∩ri(D(∥⋅∥2,x⋆))≠∅.

Define the function J1:R2+→R to be

 J1(τ)=E[dist2(g,τ0⋅∂f0(x⋆)+τ1⋅x⋆/∥x⋆∥2)],

where g∼N(0,In). Then the statistical dimension of the prior restricted cone of problem (6) has the following upper bound:

 δ(C1)≤infτ∈R2+J1(τ).

The function J1 is convex, continuous, and continuously differentiable in int(R2+). It attains its minimum in a compact subset of R2+. Moreover, suppose that

 the two sets τ0⋅∂f0(x⋆)+τ1⋅x⋆/∥x⋆∥2 and ~τ0⋅∂f0(x⋆)+~τ1⋅x⋆/∥x⋆∥2 are not identical for any τ≠~τ∈R2+.

Then the function J1 is strictly convex, and attains its minimum at a unique point. For the differential of J1 at the boundary of R2+, we interpret the partial derivative ∂J1/∂τi as the right derivative, if τi=0.

Proof.

Applying Theorem 2 to problem (6) directly, we obtain Corollary 1. ∎

Corollary 1 implies that to study the phase transition of problem (6), we need to find the infimum of J1. When f0 is a general proper convex function, the infimum may be attained anywhere in R2+. However, when f0 is a norm, an important result is that the infimum of J1 must be attained on the boundary {τ∈R2+:τ1=0}.

Proposition 2.

Consider problem (6). Assume that f0 is a norm, and the conditions in Corollary 1 hold. Define the function J2:R+→R as

 J2(τ)=E[dist2(g,τ⋅∂f0(x⋆))] for τ≥0.

The function J2 is strictly convex, continuously differentiable in R+, and attains its minimum at a unique point. Moreover, the unique minimizer τ⋆=(τ⋆0,τ⋆1) of J1 satisfies τ⋆1=0, τ⋆0 is the unique minimizer of J2, and the minimum of J1 over R2+ and that of J2 over R+ are equal:

 infτ∈R2+J1(τ)=J1(τ⋆)=J2(τ⋆0)=infτ∈R+J2(τ).
Proof.

The first part of this proposition, i.e., the properties of J2, has been proved in [10]. For the proof of the results about J1, please see Appendix D-A.

Remark 3.

Let C2 denote the prior restricted cone of problem (4), i.e., C2=D(f0,x⋆). In [10], Amelunxen et al. have proved that infτ∈R+J2(τ) is a reliable estimate of δ(C2). Therefore, Proposition 2 implies that when we use Recipe 1 to compute the statistical dimension of the prior restricted cone of problem (6), the obtained phase transition point is exactly the same as that of problem (4).

At first sight, the above results may be surprising, since we have more prior information, but the obtained phase transition point is the same. From another point of view, this implies that if our Recipe 1 provides an accurate estimate of the statistical dimension, δ(C1) and δ(C2) must be nearly equal. Actually, we can verify that this is the case.

Proposition 3.

Let C1 and C2 denote the prior restricted cones of problem (6) and problem (4), respectively. Assume that f0 is a norm. Then

 δ(C1)≤δ(C2)≤δ(C1)+1/2. (7)
Proof.

See Appendix D-B. ∎

Remark 4.

Proposition 3 implies that in the case when f0 is a norm, the additional ℓ2 norm constraint has little effect on the phase transition of the linear inverse problem. Moreover, since infτ∈R+J2(τ) is an accurate estimate of δ(C2), it follows that it is an accurate estimate of δ(C1) as well.

Using the above results, we can obtain an error bound for Recipe 1 when applied to problem (6).

Proposition 4.

Consider problem (6) to recover x⋆. Assume that f0 is a norm and let C1 denote the prior restricted cone of problem (6). Then under the conditions of Corollary 1, we have

 0≤infτ∈R2+J1(τ)−δ(C1)≤2sup{∥s∥2:s∈∂f0(x⋆)}/f0(x⋆/∥x⋆∥2)+1/2. (8)
Proof.

This is a direct consequence of [10, Theorem 4.3], Proposition 2, and Proposition 3. We omit the proof. ∎

Proposition 4 implies that our Recipe 1 can provide an accurate estimate of the statistical dimension of the prior restricted cone, when applied to problem (6). Next, as a more concrete example, let us study the phase transition of the compressed sensing problem with ℓ2 norm constraints:

 min∥x∥1,s.t. y=Ax, ∥x∥2≤∥x⋆∥2. (9)

We have the following results:

Corollary 2.

Let C1 be the prior restricted cone of problem (9), and x⋆∈Rn be the true signal. Assume that x⋆ has exactly s non-zero entries. Then the statistical dimension of the prior restricted cone of problem (9) satisfies

 ψ1(s/n)−2/√(sn)−1/(2n)≤δ(C1)/n≤ψ1(s/n),

where the function ψ1 is

 ψ1(ρ)=infτ∈R+{ρ(1+τ2)+(1−ρ)∫∞τ(u−τ)2⋅φ(u)du}, (10)

where the function φ(u):=√(2/π)⋅e−u2/2. The infimum is attained at the unique point τ⋆>0, which solves the stationary equation

 ρ/(1−ρ)=∫∞τ(u/τ−1)⋅φ(u)du.
Proof.

This is a direct consequence of Proposition 2, Proposition 3, Proposition 4 and [10, Proposition 4.1]. We omit the proof. ∎
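Under our reading of (10), with φ the one-sided weight √(2/π)e−u2/2 as above, ψ1 can be evaluated numerically by solving the stationary equation with a root finder and then computing the integral. The sketch below is our illustration (bracket endpoints and tolerances are our own choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def phi(u):
    # assumed one-sided weight sqrt(2/pi) * exp(-u^2/2)
    return np.sqrt(2.0 / np.pi) * np.exp(-u * u / 2.0)

def psi1(rho):
    """Evaluate (10) by first solving the stationary equation for tau."""
    lhs = lambda tau: quad(lambda u: (u / tau - 1.0) * phi(u), tau, np.inf)[0]
    # lhs decreases from +inf to ~0, so a sign change brackets tau_star
    tau_star = brentq(lambda tau: lhs(tau) - rho / (1.0 - rho), 1e-6, 20.0)
    tail = quad(lambda u: (u - tau_star) ** 2 * phi(u), tau_star, np.inf)[0]
    return rho * (1.0 + tau_star ** 2) + (1.0 - rho) * tail

print(f"psi1(0.1) ~= {psi1(0.1):.4f}")
```

For 10% sparsity (ρ=0.1) this evaluates to roughly 0.33, so about 0.33n measurements mark the transition, in line with the classical ℓ1 phase transition curve.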

V-B Phase Transition of Linear Inverse Problem with Non-negativity Constraints

In this subsection, we make use of Recipe 2 to study the phase transition of linear inverse problems with non-negativity constraints, i.e., problem (5). We have seen that the subdifferential of IRn+ at x⋆ contains the origin and may be unbounded. Therefore, to apply Recipe 2, we have to find the normal cone N(IRn+,x⋆) directly. Indeed, notice that the descent cone of IRn+ at x⋆ is

 D(IRn+,x⋆)=⋃υ≥0{υ(x−