 # On the regularity and conditioning of low rank semidefinite programs

Low rank matrix recovery problems appear widely in statistics, combinatorics, and imaging. One celebrated method for solving these problems is to formulate and solve a semidefinite program (SDP). It is often known that the exact solution to the SDP with perfect data recovers the solution to the original low rank matrix recovery problem. It is more challenging to show that an approximate solution to the SDP formulated with noisy problem data acceptably solves the original problem; arguments are usually ad hoc for each problem setting, and can be complex. In this note, we identify a set of conditions that we call regularity that limit the error due to noisy problem data or incomplete convergence. In this sense, regular SDPs are robust: regular SDPs can be (approximately) solved efficiently at scale; and the resulting approximate solutions, even with noisy data, can be trusted. Moreover, we show that regularity holds generically, and also for many structured low rank matrix recovery problems, including the stochastic block model, Z_2 synchronization, and matrix completion. Formally, we call an SDP regular if it has a surjective constraint map, admits a unique primal and dual solution pair, and satisfies strong duality and strict complementarity. However, regularity is not a panacea: we show the Burer-Monteiro formulation of the SDP may have spurious second-order critical points, even for a regular SDP with a rank 1 solution.


## 1 Introduction

We consider a semidefinite program (SDP) in the standard form

 minimize ⟨C, X⟩ subject to AX = b and X ⪰ 0, (P)

where ⟨·,·⟩ denotes the matrix trace inner product. The primal variable is the symmetric positive semidefinite (PSD) matrix X ∈ S^n. The problem data comprises a symmetric (but possibly indefinite) cost matrix C ∈ S^n, a righthand side b ∈ R^m, and a linear constraint map A: S^n → R^m with rank m operating on any X ∈ S^n by (AX)_i = tr(A_iX) for some fixed symmetric A_1, …, A_m ∈ S^n. Denote an arbitrary solution of (P) as X⋆ and the optimal value as p⋆.

The optimization problem (P) appears in problems in statistics [SS05], combinatorics [GW95], and imaging [CMP10], among others. Due to the nature of these applications, practical instances of (P) such as matrix completion [SS05, UT19] and MaxCut [GW95] are often expected to have low rank solutions. It is also notable that any instance of (P) admits a solution with rank r satisfying r(r+1)/2 ≤ m [Bar95, Pat98].
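The rank bound above is easy to compute. A minimal sketch (the function name is ours, not from the paper) that finds the smallest rank r guaranteed by the bound for a given number of constraints m:

```python
def barvinok_pataki_rank(m: int) -> int:
    """Smallest r with r(r+1)/2 >= m: any feasible SDP with m linear
    constraints admits a solution of rank at most this r."""
    r = 0
    while r * (r + 1) // 2 < m:
        r += 1
    return r

# A MaxCut SDP on n = 800 vertices has m = 800 constraints, so it
# admits a solution of rank roughly sqrt(2n).
print(barvinok_pataki_rank(800))  # 40
```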

#### Regularity

Formally, we say an SDP is regular if it has a surjective constraint map, admits a unique primal and dual solution pair, and satisfies both strong duality and strict complementarity. (See Section 1.1 for more detail.) These conditions suffice to guarantee many useful properties about the resulting SDP.

Regularity was found by [AHO97] to hold generically: for almost all A, b, and C, (P) is regular so long as a primal and dual solution pair exists. A followup work [DIL16, Section 5] strengthens this result: for every surjective A, regularity holds for almost all b and C, again conditional on the existence of a primal and dual solution pair.

However, realistic applications of semidefinite programming may place structural constraints on A, b, and C: for example, in matrix completion, the cost matrix C is fixed; in MaxCut type SDPs, the constraint map is A = diag and the right hand side b = 1 is the vector of all ones. We will show in Sections 2 and 4 that many of these SDPs, including Z_2 synchronization and the stochastic block model, are still regular. We also show in Section 5 that matrix completion is primal regular: it satisfies all conditions for regularity except (possibly) for dual uniqueness.

#### Conditioning and regularity

Many authors have shown that instances of the primal SDP (P) appearing in statistical or signal processing problems [CR09, WdM15, Ban18] admit a unique low rank solution which coincides with (or is close to) the underlying true signal. However, this analysis does not fully solve the original problem: optimization procedures give reliable solutions only when the problem is well-conditioned; otherwise, inaccuracies in the problem data or incomplete convergence can lead to wildly different reconstructions of the underlying signal. Here we consider two different notions of problem conditioning:

1. Measurement error: Suppose we must obtain the problem data A, b, and C via noisy measurements that result in perturbed problem data A + ΔA, b + Δb, and C + ΔC. We solve (P) with the perturbed problem data and obtain a perturbed solution X′⋆. To ensure that the perturbed solution is meaningful for the original problem, we must ensure the error in the solution is controlled by the size of the perturbation in the data.

We can describe the sensitivity of the solution to measurement error by finding constants α and β such that for all small perturbations (ΔA, Δb, ΔC),

 ∥X⋆ − X′⋆∥_F^α ≤ β(∥ΔA∥ + ∥Δb∥ + ∥ΔC∥).
2. Optimization error: Most optimization algorithms offer guarantees on the suboptimality of the putative solution they return, but many cannot guarantee bounds on the distance to the solution, ∥X − X⋆∥_F. However, the distance to the solution is usually the more important metric for statistical and signal processing applications. Hence it is important to understand how (and when) guarantees on suboptimality translate into guarantees on the distance to the solution.

We may seek to bound the distance to the solution, ∥X − X⋆∥_F, in terms of simpler metrics of optimization error: the infeasibility with respect to the conic constraint, (−λmin(X))₊, and the linear constraints, ∥AX − b∥₂, and the suboptimality, tr(CX) − p⋆. (Throughout the paper we define a₊ = max{a, 0} for a ∈ R.) We produce an error bound on the solution by finding constants ρ and γ such that for all X near X⋆,

 ∥X − X⋆∥_F^ρ ≤ γ(∥AX − b∥₂ + (−λmin(X))₊ + (tr(CX) − p⋆)).

The exponents α and ρ and the multiplicative factors β and γ can be interpreted as condition numbers of (P).
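The three optimization-error metrics on the right hand side of the error bound are cheap to evaluate for any putative solution. A minimal sketch (toy data and names are ours, not from the paper):

```python
import numpy as np

def error_metrics(X, C, A_mats, b, p_star):
    """Compute the three optimization-error metrics for a putative solution X:
    linear infeasibility ||A(X) - b||_2, conic infeasibility (-lambda_min(X))_+,
    and suboptimality tr(CX) - p_star."""
    AX = np.array([np.trace(Ai @ X) for Ai in A_mats])
    lin_infeas = np.linalg.norm(AX - b)
    conic_infeas = max(-np.linalg.eigvalsh(X)[0], 0.0)  # eigvalsh[0] = smallest eigenvalue
    subopt = np.trace(C @ X) - p_star
    return lin_infeas, conic_infeas, subopt

# Toy instance (hypothetical data): minimize tr(X) s.t. tr(X) = 1, X >= 0.
# Any PSD X with unit trace is optimal, so p_star = 1.
C = np.eye(2)
A_mats = [np.eye(2)]
b = np.array([1.0])
X = np.array([[0.5, 0.0], [0.0, 0.5]])
print(error_metrics(X, C, A_mats, b, p_star=1.0))  # all three metrics are 0
```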

Regular SDPs obey useful bounds on these condition numbers. In the literature, it has been found that if the SDP (P) is regular, then α = 1 [NO99] and ρ = 2 [Stu00]. We note that ρ = 2 only requires primal regularity. An upcoming work of ours [DU] shows that α = 2 under the weaker condition of primal regularity. Estimates of β and γ for regular SDPs based on problem data and solutions are also available in [NO99] and our upcoming work [DU], respectively. When the SDP (P) is not regular but only feasible, the exponent ρ can become as large as 2^(n−1), which is shown to be tight [Stu00, Example 2]. In such cases, the SDP is very ill-conditioned. Thus if the SDP (P) is regular or primal regular, neither measurement error nor optimization error impedes signal recovery, as the distance to the solution (which is or is close to the true signal) grows at most quadratically in the measurement or optimization error.

#### Regularity and algorithmic convergence

Regularity also plays an important role in the convergence analysis of algorithms for SDPs. For example:

• Regular SDPs can be solved efficiently at scale: for example, the storage-optimal algorithm of [DYC19] requires regularity to ensure the limit of the dual iterates produces a meaningful approximation of the primal solution X⋆.

• For regular SDP, the central path of an interior point method (IPM) leads to the analytical center of the solution set [HdKR02, LSZ98].

Regularity also improves the convergence rate of many algorithms:

• For SDPs that satisfy Slater’s condition, IPMs can only be shown to converge linearly [Nes18]. But for primal regular SDPs, IPMs achieve superlinear convergence [LSZ98]; and for regular SDPs, IPMs achieve quadratic convergence [AHO98].

• For the exact penalty formulation of the dual SDP [DYC19], subgradient-type methods with constant or diminishing stepsize require O(1/ε²) iterations to reach an ε-suboptimal dual solution. But for regular SDPs, subgradient methods achieve the faster sublinear rate O(1/ε), using the quadratic error bound induced by regularity in the analysis [Stu00, JM17].

#### Regularity and the Burer-Monteiro method

The Burer-Monteiro (BM) [BM03] approach solves the SDP (P) by factoring the decision variable, building on earlier work by Homer and Peinado [HP97] that introduced the approach for the MaxCut SDP. The BM approach factors the decision variable as X = FF⊤, with factor F ∈ R^{n×r}, and solves the following (nonconvex) problem:

 minimize tr(CFF⊤) subject to A(FF⊤) = b. (BM)

When r exceeds the rank of any solution to (P), (BM) and (P) have the same solution set.

Usually, (BM) is solved using a Riemannian gradient or trust region method [BAC18], which requires that the feasible set forms a smooth Riemannian manifold. Following [BVB18], we call such an SDP smooth: the feasible set forms a smooth Riemannian manifold. In this paper we will consider many interesting smooth SDPs: including MaxCut, OrthogonalCut, and an SDP relaxation of a problem optimizing over a product of spheres; and statistical problems like synchronization and the stochastic block model. Notice that many interesting large scale SDPs, such as matrix completion [CT10] and phase retrieval [CSV13], may not be smooth.

Since these Riemannian optimization methods are only guaranteed to find second order stationary points, we will say the BM method succeeds for a smooth SDP when all second order stationary points are globally optimal (and fails otherwise). A recent result [BVB18] shows that for smooth SDPs (and under a few more technical conditions), for almost all objectives C, BM succeeds if r(r+1)/2 > m.

Does the BM method succeed for every (smooth) regular SDP? Alas, no: we show the Burer-Monteiro approach (BM) can fail when r(r+1)/2 + r ≤ n, even if (P) is regular. This result extends recent counterexamples due to [WW18] by showing uniqueness of the dual solution. Hence storage optimal algorithms for SDP, such as [DYC19], that operate directly on the SDP (without factorization) have advantages over BM.

#### Paper organization

We formally define regular SDPs in Section 1.1. Section 1.2 introduces the notation used in this paper. In Section 2, we show that every PSD matrix solves a regular SDP and that primal regularity holds for almost all objectives under Slater’s condition. In Section 3, we construct regular SDPs for which the Burer-Monteiro approach fails. In Section 4, we use regularity to show that the SDPs corresponding to the stochastic block model and Z_2 synchronization can recover the ground truth from noisy data. Notably, we show recovery is possible at higher noise thresholds than those for which the BM approach is known to succeed. Finally, in Section 5, we show that the celebrated matrix completion SDP is primal regular, but not (usually) regular.

### 1.1 Regularity

To start, recall the dual problem of () is

 maximize ⟨b, y⟩ subject to C − A∗y ⪰ 0. (D)

Here ⟨b, y⟩ is the dot product in R^m. The decision variable is the vector y ∈ R^m. The map A∗ is the adjoint of the linear map A, which satisfies ⟨AX, y⟩ = ⟨X, A∗y⟩. Explicitly, A∗y = ∑_{i=1}^m y_iA_i for y ∈ R^m.

We now formally state the conditions that define a regular SDP. The first two conditions, strong duality and linear independence, are standard in the literature.

###### Definition 1 (Strong Duality).

(P) and (D) satisfy strong duality if there is a primal-dual solution pair (X⋆, y⋆) and, for any solution pair (X⋆, y⋆) to (P) and (D),

 p⋆ := tr(CX⋆) = b⊤y⋆ =: d⋆.

Notably, strong duality holds under Slater’s condition: existence of a feasible primal X and dual y with X ≻ 0 and C − A∗y ≻ 0.

Linear independence ensures that there are no redundant linear constraints.

###### Definition 2 (Linear independence).

We say (P) satisfies linear independence if the matrices A_1, …, A_m are linearly independent in S^n.

Regularity also requires strict complementary slackness.

###### Definition 3 (Strict complementarity).

We say a solution pair (X⋆, y⋆) to (P) and (D) is strictly complementary if

 rank(X⋆)+rank(C−A∗y⋆)=n.

If the primal (P) and dual (D) SDP pair has one strictly complementary solution pair, we say the SDP pair satisfies strict complementarity, or simply that the primal SDP (P) satisfies strict complementarity.

Linear programs always have some strictly complementary solution pair whenever solutions exist: there is always some primal optimal x⋆ and dual optimal slack z⋆ so

 nnz(x⋆)+nnz(z⋆)=n,

where nnz is the number of nonzeros [GT56]. In contrast, semidefinite programs may not satisfy strict complementarity.

Finally, regularity requires that both (P) and (D) have unique solutions [AHO97, Example 1].

###### Definition 4 (Regularity).

The SDP (P) is regular if

• (P) satisfies strong duality;

• (P) satisfies linear independence;

• (P) satisfies strict complementarity; and

• (P) and (D) both have unique solutions.

###### Definition 5 (Primal regularity).

The SDP (P) is primal regular if it satisfies strong duality, linear independence, and strict complementarity, and the solution to (P) is unique.

The dual of a primal regular SDP may admit multiple solutions. Notice every regular SDP is primal regular. Primal regularity is practically important: for example, the matrix completion SDP [CR09], introduced in Section 5, is primal regular but not regular. Primal regular SDPs inherit some (but not all) of the nice properties of regular SDPs.

#### Equivalent conditions

In the definition of regularity, uniqueness of the primal and dual solutions may be replaced by nondegeneracy as defined in [AHO97]. Indeed, an SDP is regular if and only if it satisfies strong duality, linear independence, strict complementarity, and nondegeneracy [AHO97, Theorem 11].

#### Regularity under generic problem data

As mentioned in Section 1, for almost all problem data (A, b, C) (under the Lebesgue measure), if the SDP pair with that problem data has a primal and a dual solution then it is regular [AHO97, Theorems 11, 14 and 15]. In this paper, we also show in Theorem 2 that for fixed A and b, the SDP pair is primal regular for almost all C.

### 1.2 Notation

#### Norms and Eigenvalues

For a matrix A ∈ R^{m×n}, we denote its Frobenius norm, operator two norm, and nuclear norm (sum of singular values) as ∥A∥_F, ∥A∥_op, and ∥A∥_* respectively. The operator norm of a linear operator A: S^n → R^m is defined as ∥A∥_op = max_{∥X∥_F ≤ 1} ∥AX∥₂. We write the eigenvalues of a symmetric matrix A ∈ S^n in decreasing order as

 λ1(A) ≥ λ2(A) ≥ ⋯ ≥ λn(A).

We define the singular values similarly.

#### Inner product

We use the Euclidean inner product for vectors: for x, y ∈ R^n, ⟨x, y⟩ = x⊤y. We use the trace inner product for matrices: for A, B ∈ R^{m×n} or A, B ∈ S^n, ⟨A, B⟩ = tr(A⊤B).

For a vector x or a matrix A, x⊤ and A⊤ denote the transpose. The adjoint map A∗ of a linear map A from S^n to R^m is defined as the unique linear map from R^m to S^n such that for every X ∈ S^n and y ∈ R^m, ⟨AX, y⟩ = ⟨X, A∗y⟩.

#### SDP Optimization

The notation X⋆ denotes a primal solution to (P) and y⋆ denotes a dual solution to (D). Define the slack operator Z(y) = C − A∗y that maps a putative dual solution y to its associated slack matrix. We omit the dependence on y if it is clear from the context.

## 2 Regular SDPs are generic

In this section, we first show that any psd matrix solves a regular SDP. We also demonstrate that for almost all C, if the SDP (P) satisfies Slater’s condition and linear independence and has a primal solution, then it is primal regular. We then show that interesting SDPs, including MaxCut, OrthogonalCut, and ProductSDP (introduced in Section 2.3), are regular for almost all C. Finally, we demonstrate numerically that the MaxCut SDPs of many graphs are indeed regular.

### 2.1 Any PSD matrix solves a regular SDP

Given any rank r⋆ positive semidefinite matrix X⋆ ∈ S^n, we can construct a regular SDP with X⋆ as its unique solution.

Write the eigenvalue decomposition of X⋆ as X⋆ = VΛV⊤ = ∑_{i=1}^{r⋆} λ_i v_i v_i⊤. Here the eigenvalues satisfy λ_1 ≥ ⋯ ≥ λ_{r⋆} > 0, and we define the diagonal matrix Λ = diag(λ_1, …, λ_{r⋆}) and the orthonormal matrix V = [v_1, …, v_{r⋆}] ∈ R^{n×r⋆}.

We are now ready to construct the SDP and state our first theorem:

###### Theorem 1.

For any rank r⋆ positive semidefinite matrix X⋆ with eigenvalue decomposition X⋆ = ∑_{i=1}^{r⋆} λ_i v_i v_i⊤, the SDP

 minimize tr(X)
 subject to tr(v_i v_i⊤ X) = λ_i, i = 1, …, r⋆,
 tr(((v_i v_j⊤ + v_j v_i⊤)/2) X) = 0, 1 ≤ i < j ≤ r⋆, (1)

with variable X ∈ S^n is regular and has X⋆ as its unique solution.

###### Proof.

Let us first write down the dual, with variables y_ij, 1 ≤ i ≤ j ≤ r⋆:

 maximize ∑_{i=1}^{r⋆} λ_i y_ii subject to I − ∑_{1≤i≤j≤r⋆} ((v_i v_j⊤ + v_j v_i⊤)/2) y_ij ⪰ 0. (2)

We now verify each property required for regularity.

#### Linear independence

The matrices (v_i v_j⊤ + v_j v_i⊤)/2, 1 ≤ i ≤ j ≤ r⋆, are mutually orthogonal in the trace inner product, and hence they are linearly independent.

#### Strong duality and strict complementarity

Define y⋆ with (y⋆)_ii = 1 for i = 1, …, r⋆ and (y⋆)_ij = 0 for i < j. To verify strong duality and strict complementarity, we claim X⋆ and y⋆ are solutions to the primal SDP, Eq. 1, and dual SDP, Eq. 2, respectively. Indeed, it is easy to verify that X⋆ is primal feasible. Furthermore, by writing the slack matrix

 Z(y⋆) = I − ∑_{1≤i≤j≤r⋆} ((v_i v_j⊤ + v_j v_i⊤)/2)(y⋆)_ij = I − ∑_{i=1}^{r⋆} v_i v_i⊤ ⪰ 0,

we see y⋆ is dual feasible and Z(y⋆) has rank n − r⋆. Since the primal and dual objectives match, tr(X⋆) = ∑_{i=1}^{r⋆} λ_i = ∑_{i=1}^{r⋆} λ_i(y⋆)_ii, we see X⋆ and y⋆ are a primal-dual optimal solution pair. Since X⋆ has rank r⋆ and rank(Z(y⋆)) = n − r⋆, we see strict complementarity holds.

#### Uniqueness

Suppose that y′⋆ solves the dual problem (2). We will show that y′⋆ = y⋆, and hence the dual has a unique solution. Using strong duality, we know tr(Z(y′⋆)X⋆) = 0. Moreover, Z(y′⋆) and X⋆ are psd. Hence Z(y′⋆) has rank at most n − r⋆. By the definition of Z, we see

 Z(y′⋆) = I − ∑_{1≤i≤j≤r⋆} ((v_i v_j⊤ + v_j v_i⊤)/2)(y′⋆)_ij = V̄ [ B 0 ; 0 I_{n−r⋆} ] V̄⊤, (3)

where V̄ = [v_1, …, v_n] is an orthogonal matrix whose first r⋆ columns are the eigenvectors v_1, …, v_{r⋆}, and the upper left block B ∈ S^{r⋆} has entries B_ii = 1 − (y′⋆)_ii and B_ij = −(y′⋆)_ij/2 for i ≠ j.

The lower right block of the inner matrix above is the identity I_{n−r⋆}. Hence we see Z(y′⋆) has rank at least n − r⋆, and thus rank exactly n − r⋆. This fact forces the upper left r⋆ × r⋆ block in (3) to be 0. Hence, we must have y′⋆ = y⋆.

To show the primal solution is unique, introduce the new variable S = V̄⊤XV̄, where V̄ is an orthogonal matrix whose first r⋆ columns are v_1, …, v_{r⋆}, so that X = V̄SV̄⊤. Using this change of variables in (1), we see X⋆ uniquely solves (1) if and only if S⋆ = V̄⊤X⋆V̄ = diag(λ_1, …, λ_{r⋆}, 0, …, 0) uniquely solves

 minimize tr(S)
 subject to S_ii = λ_i, i = 1, …, r⋆,
 S_ij = 0, 1 ≤ i < j ≤ r⋆,
 S ⪰ 0. (4)

(Notice that the diagonal matrix S⋆ = diag(λ_1, …, λ_{r⋆}, 0, …, 0) is optimal for Eq. 4, using the same argument we used to show the optimality of X⋆ for Eq. 1 above.) Since the optimal value of Eq. 4 is ∑_{i=1}^{r⋆} λ_i, from the constraints of (4), we see that any feasible S of (4) has objective value tr(S) = ∑_{i=1}^{r⋆} λ_i + ∑_{i>r⋆} S_ii. To achieve optimality, we must have S_ii = 0 for i > r⋆. Now use the fact that S ⪰ 0 to see S⋆ is the unique solution. ∎
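As a numerical sanity check on this construction, the following sketch (illustrative sizes n = 5, r⋆ = 2; all variable names are ours) builds the constraint data of Eq. 1 from a random PSD matrix and verifies feasibility and strict complementarity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random rank-2 PSD matrix X_star in S^5.
n, r = 5, 2
V, _ = np.linalg.qr(rng.standard_normal((n, r)))  # orthonormal eigenvectors
lams = np.array([3.0, 1.0])                       # positive eigenvalues
X_star = V @ np.diag(lams) @ V.T

# Constraint data of Theorem 1: v_i v_i^T with rhs lambda_i, and the
# symmetrized cross terms (v_i v_j^T + v_j v_i^T)/2 with rhs 0.
A_mats, b = [], []
for i in range(r):
    A_mats.append(np.outer(V[:, i], V[:, i])); b.append(lams[i])
    for j in range(i + 1, r):
        A_mats.append((np.outer(V[:, i], V[:, j]) + np.outer(V[:, j], V[:, i])) / 2)
        b.append(0.0)

# X_star is feasible ...
assert all(abs(np.trace(Ai @ X_star) - bi) < 1e-8 for Ai, bi in zip(A_mats, b))

# ... and the dual slack Z = I - V V^T from the proof certifies optimality:
Z = np.eye(n) - V @ V.T
print(np.linalg.matrix_rank(X_star) + np.linalg.matrix_rank(Z))  # = n: strict complementarity
```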

### 2.2 Almost all cost matrices yield a primal regular SDP

We establish the fact that (P) is primal regular for almost all cost matrices C, whenever a primal solution exists.

###### Theorem 2.

Suppose (P) satisfies the linear independence condition and Slater’s condition: there is some X such that AX = b and X ≻ 0. Then for almost all C, (P) is primal regular as long as a primal solution exists.

###### Proof.

We utilize [DL11, Corollary 3.5]: for a convex extended value function f, for almost all C, the perturbed function X ↦ f(X) + ⟨C, X⟩ admits at most one minimizer X_C, and any such minimizer satisfies −C ∈ ri(∂f(X_C)), the relative interior of the subdifferential of f at X_C.

To exploit this theorem, we take f to be the function

 f(X) = χ_{AX=b}(X) + χ_{X⪰0}(X), (5)

where χ_S is the indicator function of a convex set S: χ_S(X) = 0 if X ∈ S and χ_S(X) = +∞ otherwise. Using [DL11, Corollary 3.5], we see that for almost all C, the problem (P) has at most one solution X_C, and

 −C ∈ ri(∂(χ_{AX=b} + χ_{X⪰0})(X_C)) (6)
 (a)= ri(∂χ_{AX=b}(X_C) + ∂χ_{X⪰0}(X_C))
 (b)= ri(∂χ_{AX=b}(X_C)) + ri(∂χ_{X⪰0}(X_C))
 (c)= −{A∗y ∣ y ∈ R^m} − {Z ∣ Z ⪰ 0, ker(Z) = range(X_C)},

which implies that C = A∗y + Z for some y ∈ R^m and slack matrix Z satisfying Z ⪰ 0 and ker(Z) = range(X_C). Here step (a) uses Slater’s condition to apply the sum rule of the subdifferential. Step (b) uses [Ber09, Proposition 1.3.6]: the sum rule for the relative interior. Step (c) uses basic subdifferential calculus. Hence, there is some y ∈ R^m and Z ⪰ 0 such that Z = C − A∗y, ZX_C = 0, and

 rank(Z) + rank(X_C) = n and tr(ZX_C) = 0.

Hence y is dual optimal and strict complementarity holds. ∎

### 2.3 MaxCut-type SDP are regular for almost all C

In this section, we introduce three classes of SDPs that generalize the SDP relaxation of the MaxCut problem [GW95], with applications in statistical signal recovery, optics, and subproblems of important algorithms. We show in Corollary 1, based on Theorem 2, that they are regular for almost all cost matrices C.

#### MaxCut

We call an SDP a MaxCut-type SDP if it is of the form

 minimize tr(CX) subject to diag(X) = 1 and X ⪰ 0. (MaxCut)

Here we do not require the cost matrix C to be a negative Laplacian matrix.

MaxCut-type SDP can be used to find approximations of the maximum weight cut in a graph [GW95], to recover an object of interest from optical measurements [WdM15], and to identify the cluster corresponding to each node in the stochastic block model [Ban18].

#### Orthogonal cut

For any X ∈ S^{Sd}, we denote by Blocks(X)_s ∈ S^d the s-th d × d diagonal block of X. An OrthogonalCut-type problem has decision variable X ∈ S^{Sd} for some integers S and d, and is of the form

 minimize tr(CX)
 subject to Blocks(X)_s = I_d, s = 1, …, S,
 X ⪰ 0. (OrthogonalCut)

Note that when d = 1, (OrthogonalCut) reduces to (MaxCut), with m = S = n constraints.

The OrthogonalCut-type SDP generalizes the MaxCut-type SDP, and appears in sensor network localization [CLS12] and ranking problems [Cuc16].

#### ProductSDP: optimization over a product of spheres

Finally, we introduce (ProductSDP), an SDP relaxation of a quadratic program over a product of spheres. Let n be a positive integer and let {S_1, …, S_m} be a partition of the set {1, …, n}: S_i ∩ S_j = ∅ for all i ≠ j, and ∪_i S_i = {1, …, n}. A (ProductSDP)-type problem, with decision variable X ∈ S^n, takes the form

 minimize tr(CX)
 subject to ∑_{k∈S_i} X_kk = 1, i = 1, …, m,
 X ⪰ 0. (ProductSDP)

Note that when m = n (so that each S_i is a singleton), (ProductSDP) reduces to (MaxCut).

To explain the name of this SDP, suppose x_i ∈ R^{|S_i|} for each i. The constraint ∥x_i∥₂ = 1 ensures that x_i is on the sphere in R^{|S_i|}. Now stack the variables x_i for i = 1, …, m as a vector x ∈ R^n. The SDP (ProductSDP) is a relaxation of the quadratic program

 minimize tr(Cxx⊤) subject to x ∈ ∏_{i=1}^m S^{|S_i|−1} (7)

with xx⊤ replaced by X. Problems of this form can appear as trust-region subproblems, e.g., [BVB18, Section 5.3].
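The lifting above is easy to check numerically: any point on the product of spheres lifts to a feasible point of (ProductSDP). A minimal sketch (the partition is an illustrative choice of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# A partition of the indices {0, ..., 5} into blocks S_1, S_2, S_3.
blocks = [[0, 1], [2, 3, 4], [5]]

# Sample a point on the product of spheres: each block of x has unit norm.
x = rng.standard_normal(6)
for S in blocks:
    x[S] /= np.linalg.norm(x[S])

# Lift to X = x x^T; the constraints sum_{k in S_i} X_kk = 1 then hold,
# since each block-diagonal sum is the squared norm of that block of x.
X = np.outer(x, x)
sums = [sum(X[k, k] for k in S) for S in blocks]
print([round(s, 10) for s in sums])  # [1.0, 1.0, 1.0]
```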

Having defined these three classes of SDP, we show all of these problems are almost always regular.

###### Corollary 1.

The SDPs MaxCut, OrthogonalCut, and ProductSDP are regular for almost every cost matrix C.

###### Proof.

We first check dual uniqueness and linear independence, and then verify primal regularity to conclude that these three classes of SDP are regular.

#### Dual uniqueness and linear independence

First, note linear independence follows directly from the uniqueness of the dual solution. We show dual uniqueness by contradiction: if the dual solution is not unique, there is some nonzero y such that y⋆ + ty is still optimal for some dual solution y⋆ and t ≠ 0. Using [WW18, Proposition 9], we know there is no nonzero y such that

 A∗(y)X⋆ = 0.

It is then immediate that the dual is unique: since Z(y⋆)X⋆ = 0 for any dual optimal y⋆, two distinct dual solutions would yield a nonzero y with A∗(y)X⋆ = 0, a contradiction.

#### Primal Regularity

The primal solution exists because the feasible region of each class is compact and nonempty. Slater’s condition for these three classes of SDP can be easily verified using a well-chosen diagonal matrix: the identity is strictly feasible for (MaxCut) and (OrthogonalCut), and the diagonal matrix with entries 1/|S_i| for indices in S_i is strictly feasible for (ProductSDP). Hence Theorem 2 asserts these three classes are primal regular for almost all C. ∎

### 2.4 Numerical verification for real-world SDP

In this section, we numerically verify that the MaxCut problems (MaxCut) corresponding to several graphs are regular. In particular, we use the Gset graphs G1 to G20 [Gse]; in the MaxCut relaxation, the cost matrix C is the negative graph Laplacian. Each graph has n = 800 vertices, so the MaxCut SDP (MaxCut) has a decision variable of size 800 × 800.

To verify strict complementarity, we must compute the ranks of the primal and dual solutions X⋆ and Z⋆ = Z(y⋆), namely r_X = rank(X⋆) and r_Z = rank(Z⋆), and see whether r_X + r_Z = n.

To verify uniqueness of the primal solution, define a matrix V ∈ R^{n×r_X} whose columns form an orthonormal basis for the null space of Z⋆. Define the linear operator A_V: S^{r_X} → R^m, A_V(S) = A(VSV⊤), where (A_V(S))_i = tr(V⊤A_iVS). According to [AHO97, Theorems 9 & 10], the primal solution is unique if the smallest singular value of A_V is nonzero.

To verify uniqueness of the dual solution, define a matrix U whose columns form an orthonormal basis for the column space of X⋆ and a matrix W whose columns form an orthonormal basis for the null space of X⋆. Define the matrix B whose i-th column is vec([U W]⊤(A∗e_i)U) for i = 1, …, m. (Here e_i is the i-th standard basis vector in R^m and vec(·) stacks the columns of a matrix.) Then according to [AHO97, Theorems 6 & 7], the dual is unique if the smallest singular value of B is nonzero.

Numerically, we obtain X⋆ and y⋆ using the MOSEK solver [Mos10]. We estimate the ranks of X⋆ and Z⋆ as the number of eigenvalues larger than a small fixed tolerance, and denote the smallest eigenvalue above this tolerance as λ⁺min(X⋆) and λ⁺min(Z⋆) respectively. We compute the condition numbers of X⋆ and Z⋆, defined as λ1(X⋆)/λ⁺min(X⋆) and λ1(Z⋆)/λ⁺min(Z⋆), and the condition numbers of A_V and B, defined as the ratios of their largest to smallest singular values. The results are reported in Table 1. As can be seen, regularity is indeed satisfied for every MaxCut problem from G1 to G20. For graph G11, the condition number of Z⋆ is very large and λ⁺min(Z⋆) is tiny (not shown here), meaning that strict complementarity holds only in a very weak sense.
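The strict complementarity check is straightforward to implement once the solver returns X⋆ and Z⋆. A minimal sketch (the toy pair below is ours, not a real Gset instance):

```python
import numpy as np

def numerical_rank(M, tol=1e-6):
    """Number of eigenvalues of a symmetric matrix above tol."""
    return int(np.sum(np.linalg.eigvalsh(M) > tol))

def check_strict_complementarity(X_star, Z_star, tol=1e-6):
    """Return the estimated ranks and whether they sum to n."""
    n = X_star.shape[0]
    rX, rZ = numerical_rank(X_star, tol), numerical_rank(Z_star, tol)
    return rX, rZ, rX + rZ == n

# Toy check with an exactly complementary pair:
X = np.diag([2.0, 1.0, 0.0, 0.0])
Z = np.diag([0.0, 0.0, 3.0, 4.0])
print(check_strict_complementarity(X, Z))  # (2, 2, True)
```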

## 3 Burer-Monteiro may fail for regular SDP

In this section, we show that the (BM) formulation of (P) admits second order stationary points that are not globally optimal, even for regular SDPs with low rank solutions (including rank 1).

Recall from the introduction the Burer and Monteiro approach (BM approach) to semidefinite programming, which replaces the SDP (P) by the following nonlinear optimization problem with decision variable F ∈ R^{n×r}:

 minimize tr(CFF⊤) =: f(F) subject to A(FF⊤) = b. (BM)

This problem is in general nonconvex.

Nonlinear optimization solvers such as Riemannian trust regions [BAC18] can guarantee that they find a second order stationary point (SOSP) of such a problem, but cannot guarantee (or even check) that they have found a global solution. When the constraint set is a manifold, as it is for all the examples discussed in the previous section, a putative solution is second order stationary if its Riemannian gradient is 0 and its Riemannian Hessian is positive semidefinite. See Appendix A for further discussion.
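For the MaxCut-type constraint diag(FF⊤) = 1, the feasible set is the manifold of matrices with unit-norm rows, and the Riemannian gradient of f(F) = tr(CFF⊤) is the projection of the Euclidean gradient 2CF onto the tangent space. A minimal first-order stationarity check (this sketch assumes the standard row-wise tangent projection for this manifold; it is ours, not from the paper):

```python
import numpy as np

def riemannian_grad_maxcut(C, F):
    """Riemannian gradient of f(F) = tr(C F F^T) on {F : diag(F F^T) = 1}:
    project the Euclidean gradient 2CF onto the tangent space by removing
    the component of each row along the corresponding (unit-norm) row of F."""
    G = 2 * C @ F                                  # Euclidean gradient
    coeff = np.sum(G * F, axis=1, keepdims=True)   # row-wise <g_i, f_i>
    return G - coeff * F

# Sanity check: for C = -zz^T, the global optimum of the rank-1 BM problem
# is F = z (a column vector with +/-1 entries), where the gradient vanishes.
z = np.array([[1.0], [-1.0], [1.0], [1.0]])
C = -z @ z.T
print(np.linalg.norm(riemannian_grad_maxcut(C, z)))  # 0.0
```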

Hence we can guarantee that the BM approach finds the global optimum if we can prove that all SOSPs are globally optimal. The following definition serves as a useful shorthand as we investigate when this condition holds.

###### Definition 6.

We say the BM approach succeeds for an SDP (P) if every SOSP of (BM) is globally optimal, and hence optimal for (P). Conversely, we say the BM approach fails if (BM) has any SOSP that is not globally optimal.

Note that as a practical matter, a nonlinear solver for (BM) might produce a globally optimal SOSP even for a problem that admits non-optimal SOSPs.

Recall from the introduction that for almost all C, when r(r+1)/2 > m, any SOSP of (BM) is globally optimal [BVB18]. On the other hand, building on results by [WW18], we will demonstrate a positive measure set of regular SDPs in each of the three classes described in Section 2.3 for which BM fails whenever r(r+1)/2 + r ≤ n (or the analogous condition for each class; see Theorem 3).

### 3.1 Examples: MaxCut, OrthogonalCut, and ProductSDP

Let us first recall the (MaxCut) SDP we described in Section 2.3:

 minimize tr(CX) subject to diag(X) = 1 and X ⪰ 0. (MaxCut)

As demonstrated in [WW18, Corollary 1], if

 r(r+1)/2 + r > n,

then for almost all C, any SOSP F of the BM formulation (BM) of (MaxCut) is globally optimal; hence the matrix FF⊤ solves (MaxCut). However, in [WW18, Corollary 1], the authors also show that if

 r(r+1)/2 + r ≤ n,

then there is a positive measure set of cost matrices C for which (MaxCut) has a unique rank r solution but the BM approach fails.

Are these SDP particularly nasty? On the contrary! Our contribution, stated in the following theorem, is to show that these SDPs are regular. We also generalize these results to (OrthogonalCut) and (ProductSDP).

###### Theorem 3.

Fix a positive integer r. If

 r(r+1)/2 + r ≤ n,

then there is a set of cost matrices C with positive measure for which (MaxCut) admits a unique rank r solution and is regular, but the BM approach fails.

The same result holds for (ProductSDP) if

 r(r+1)/2 + r ≤ m.

For (OrthogonalCut), the same result holds, except that the solution has rank rd, if

 r(r+1)/2 + rd ≤ m = Sd(d+1)/2.
###### Proof.

The proofs of dual uniqueness and linear independence are the same as in the proof of Corollary 1. We next verify the failure of BM, and primal regularity.

#### Failure of BM, and Primal regularity

Waldspurger and Waters show that there is a positive measure set of cost matrices C for which (MaxCut) satisfies: (1) strong duality [WW18, Proposition 4], (2) uniqueness of a primal solution with rank r [WW18, Corollary 2], (3) strict complementarity for a dual solution [WW18, Lemma 2 and Lemma 9], and (4) the BM approach fails [WW18, Corollary 1]. Together with dual uniqueness and linear independence, these results verify the theorem statement for (MaxCut).

#### OrthogonalCut and ProductSDP

The proof for the other two SDPs follows exactly the same argument as above, using [WW18, Corollary 2] for (OrthogonalCut) and [WW18, Corollary 3] for (ProductSDP). ∎

## 4 Noisy SDPs are regular

In Section 2, we saw that many interesting SDPs are regular for almost any cost matrix C. In this section, we show that the (very structured) cost matrices that appear in certain statistical problems also yield regular SDPs. In these problems, the objective measures agreement with observations of a ground-truth object, while the constraints restrict the complexity of the solution. Importantly, regularity of these problems guarantees that the solution of the SDP recovers the ground truth.

More precisely, we consider the SDP relaxations of the following statistical problems:

• Z_2 synchronization

• Stochastic Block Model

We show that these SDP relaxations are regular with high probability.

We also demonstrate a strong advantage to solving the original SDP rather than using the BM approach (when applicable): these SDPs provably recover the ground truth at much higher noise levels than are (provably) tolerated by the BM approach.

### 4.1 Z_2 Synchronization

Consider a binary vector z ∈ {±1}^n. The Z_2 synchronization problem is to recover the vector z up to a sign from the observations Y = zz⊤ + σW, where W is symmetric with iid standard normal upper diagonal entries and zero diagonal entries. The value σ is the noise level. The SDP proposed in the literature, with decision variable X ∈ S^n, is

 minimize tr(−YX) subject to diag(X) = 1 and X ⪰ 0. (Z_2 Sync)

The corresponding Burer-Monteiro formulation with variable F ∈ R^{n×r} is

 minimize tr(−YFF⊤) subject to diag(FF⊤) = 1. (BM Z_2 Sync)

It is intuitive that the problem is more challenging as the noise level σ increases. For (Z_2 Sync), if the noise level satisfies σ ≤ c√(n/log n) for some numerical constant c, it admits X⋆ = zz⊤ as its unique solution with high probability [Ban18, Proposition 3.6]. But for (BM Z_2 Sync) with r = 2, the best known theoretical results state that the noise level must be less than cn^{1/6} for some small numerical constant c to ensure the BM formulation succeeds, i.e., all second order stationary points F satisfy FF⊤ = zz⊤ [BBV16]. The gap between n^{1/6} and √(n/log n) is polynomially large.
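The observation model and the dual certificate analyzed below are easy to simulate. A minimal sketch (assuming the model Y = zz⊤ + σW stated above; it verifies that the certificate Z⋆ = ddiag(Yzz⊤) − Y annihilates z regardless of the noise, since z_i² = 1):

```python
import numpy as np

rng = np.random.default_rng(2)

n, sigma = 50, 0.5
z = rng.choice([-1.0, 1.0], size=n)

# Observation model: Y = z z^T + sigma * W, with W symmetric, iid standard
# normal above the diagonal, and zero diagonal.
W = rng.standard_normal((n, n))
W = np.triu(W, 1)
W = W + W.T
Y = np.outer(z, z) + sigma * W

# Dual certificate from [Ban18]: Z = ddiag(Y z z^T) - Y, where ddiag zeroes
# out the off-diagonal entries. Since z_i^2 = 1, Z z = 0 for any Y.
Z = np.diag(np.diag(Y @ np.outer(z, z))) - Y
print(np.linalg.norm(Z @ z) < 1e-8)  # True
```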

We now prove (Z_2 Sync) is regular whenever σ ≤ c√(n/log n). The uniqueness of the primal solution is proved in [Ban18, Proposition 3.6]. The dual optimal solution proposed in [Ban18, Proposition 3.6] is

 y⋆ = −diag(Yzz⊤), and Z⋆ = −Y − Diag(y⋆) = ddiag(Yzz⊤) − Y,

where diag(·) extracts the diagonal of a matrix as a vector, Diag(·) is its adjoint (mapping a vector to a diagonal matrix), and ddiag = Diag ∘ diag. Note that Z⋆z = 0 using the fact that z_i² = 1. Using the proof of [Ban18, Proposition 3.6, pp. 356] and σ ≤ c√(n/log n), we find that with high probability, there is a numerical constant c′ such that

 λn−1(Z⋆) ≥ c′n. (8)

We see Z⋆ ⪰ 0 and zz⊤ is optimal, as tr(Z⋆zz⊤) = 0. Moreover, strict complementarity is satisfied, as rank(zz⊤) + rank(Z⋆) = 1 + (n − 1) = n. The linear independence relation and the uniqueness of the dual can be verified in the same way as in the proof of Theorem 3. We summarize our findings in the following theorem.

###### Theorem 4.1.

For the Z_2 synchronization problem, if the noise level σ ≤ c√(n/log n) for some numerical constant c, then with high probability the SDP (Z_2 Sync) is regular with primal solution zz⊤. Moreover, the dual slack matrix satisfies λn−1(Z⋆) ≥ c′n for some numerical constant c′.

### 4.2 Stochastic Block Model

The stochastic block model (SBM) is structurally quite similar to Z_2 synchronization. The SBM posits that we observe the edges and vertices of a graph G with n vertices that are split into two clusters according to a binary membership vector z ∈ {±1}^n. For each pair of vertices i < j, the undirected edge (i, j) is formed with probability p if vertices i and j are in the same cluster (z_iz_j = 1) and with probability q < p otherwise. The goal is to recover the cluster membership vector z (up to a sign). For simplicity, we further assume that n is even and that the clusters are balanced: n/2 entries of z are +1 and n/2 are −1. Let A be the adjacency matrix of G with diagonal entries set to be 0. The SDP proposed to recover z by [BBV16], with variable X ∈ S^n, is

 maximize tr((A − ((p+q)/2)J)X) subject to diag(X) = 1 and X ⪰ 0. (SBM)

where J = 11⊤ is the all-ones matrix. The corresponding Burer-Monteiro formulation with variable F ∈ R^{n×r} is

 minimize tr(−(A − ((p+q)/2)J)FF⊤) subject to diag(FF⊤) = 1. (BM SBM)

(There are other SDP formulations for SBM which make weaker assumptions; see [Ban18]. However, there are no guarantees for the corresponding Burer-Monteiro relaxations.)

To see the relation between (Z_2 Sync) and (SBM), we note the cost matrix can be decomposed as

 A − ((p+q)/2)J = ((p−q)/2)zz⊤ + E,

where the error matrix E has zero diagonal, expectation 0, and satisfies that for i < j with z_iz_j = 1,

 E_ij = { 1 − p with probability p,
 { −p with probability 1 − p,

and for i < j with z_iz_j = −1,

 E_ij = { 1 − q with probability q,
 { −q with probability 1 − q.

We may rescale the cost matrix by 2/(p−q) to form

 Ã = (2/(p−q))(A − ((p+q)/2)J) = zz⊤ + (2/(p−q))E.