1 Introduction
Recent years have seen a surge of progress in algorithm design via the sum-of-squares (SoS) semidefinite programming hierarchy. Initiated by the work of [BBH12], who showed that polynomial-time algorithms in the hierarchy solve all known integrality gap instances for Unique Games and related problems, a steady stream of works has developed efficient algorithms for both worst-case [BKS14, BKS15, BKS17, BGG16] and average-case problems [HSS15, GM15, BM16, RRS16, BGL16, MSS16a, PS17]. The insights from these works extend beyond individual algorithms to characterizations of broad classes of algorithmic techniques. In addition, for a large class of problems (including constraint satisfaction), the family of SoS semidefinite programs is now known to be as powerful as any semidefinite program (SDP) [LRS15].
In this paper we focus on recent progress in using sum-of-squares algorithms to solve average-case, and especially planted, problems—problems that ask for the recovery of a planted signal perturbed by random noise. Key examples are finding solutions of random constraint satisfaction problems (CSPs) with planted assignments [RRS16] and finding planted optima of random polynomials over the unit sphere [RRS16, BGL16]. The latter formulation captures a wide range of unsupervised learning problems, and has led to many unsupervised learning algorithms with the best-known polynomial-time guarantees [BKS15, BKS14, MSS16b, HSS15, PS17, BGG16].

In many cases, classical algorithms for such planted problems are spectral algorithms—i.e., they use the top eigenvector of a natural matrix associated with the problem input to recover a planted solution. The canonical algorithms for planted clique [AKS98], principal components analysis (PCA) [Pea01], and tensor decomposition (which is intimately connected to optimization of polynomials on the unit sphere) [Har70] are all based on this general scheme. In all of these cases, the algorithm employs the top eigenvector of a matrix which is either given as input (the adjacency matrix, for planted clique) or is a simple function of the input (the empirical covariance, for PCA).

Recent works have shown that one can often improve upon these basic spectral methods using SoS, yielding better accuracy and robustness guarantees against noise in recovering planted solutions. Furthermore, for worst-case problems—as opposed to the average-case planted problems we consider here—semidefinite programs are strictly more powerful than spectral algorithms. (For example, consider the contrast between the SDP algorithm for Max-Cut of Goemans and Williamson [GW94] and the spectral algorithm of Trevisan [Tre09], or the SDP-based algorithms for coloring worst-case 3-colorable graphs [KT17] relative to the best spectral methods [AK97], which only work for random inputs.) A priori one might therefore expect that these new SoS guarantees for planted problems would not be achievable via spectral algorithms. But curiously enough, in numerous cases these stronger guarantees for planted problems can be achieved by spectral methods! The twist is that the entries of the relevant matrices are low-degree polynomials in the input to the algorithm. The result is a new family of low-degree spectral algorithms with guarantees matching SoS but requiring only eigenvector computations instead of general semidefinite programming [HSSS16, RRS16, AOW15a].
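To make the baseline concrete, here is a minimal sketch of the classical degree-one spectral distinguisher for planted clique, where the matrix handed to the eigenvalue computation is just the (centered) adjacency matrix itself. The parameter choices (n = 300, k = 60) and variable names are ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 300, 60  # graph size and planted clique size (illustrative choices)

def centered_adjacency(clique=None):
    """Symmetric +/-1 matrix: +1 for an edge, -1 for a non-edge, zero diagonal."""
    A = np.where(rng.random((n, n)) < 0.5, 1.0, -1.0)
    A = np.triu(A, 1)
    A = A + A.T
    if clique is not None:
        A[np.ix_(clique, clique)] = 1.0  # plant a clique on these vertices
        np.fill_diagonal(A, 0.0)
    return A

top_eigenvalue = lambda A: np.linalg.eigvalsh(A)[-1]

lam_uniform = top_eigenvalue(centered_adjacency())
lam_planted = top_eigenvalue(centered_adjacency(clique=np.arange(k)))
# uniform: top eigenvalue concentrates near 2*sqrt(n) ~ 35
# planted: the all-ones clique block pushes it up to roughly k = 60
```

Thresholding the top eigenvalue between these two scales distinguishes the distributions whenever the clique size is a large constant multiple of sqrt(n).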
This leads to the following question, which is the main focus of this work.
Are SoS algorithms equivalent to low-degree spectral methods for planted problems?
We answer this question affirmatively for a wide class of distinguishing problems, which includes refuting random CSPs, tensor and sparse PCA, densest subgraph, community detection in stochastic block models, planted clique, and more. Our positive answer implies that a lightweight algorithm—computing the top eigenvalue of a single matrix whose entries are low-degree polynomials in the input—can recover the performance guarantees of an often bulky semidefinite programming relaxation.
To complement this picture, we prove two new SoS lower bounds for particular planted problems, both variants of component analysis: sparse principal component analysis and tensor principal component analysis (henceforth sparse PCA and tensor PCA, respectively) [ZHT06, RM14]. For both problems there are nontrivial low-degree spectral algorithms, which have better noise tolerance than naive spectral methods [HSSS16, DM14b, RRS16, BGL16]. Sparse PCA, which is used in machine learning and statistics to find important coordinates in high-dimensional data sets, has attracted much attention in recent years for being apparently computationally intractable to solve with a number of samples which is more than sufficient for brute-force algorithms [KNV15, BR13b, MW15a]. Tensor PCA appears to exhibit similar behavior [HSS15]. That is, both problems exhibit information-computation gaps.

Our SoS lower bounds for both problems are the strongest formal evidence yet for information-computation gaps for these problems. We rule out the possibility of subexponential-time SoS algorithms which improve by polynomial factors on the signal-to-noise ratios tolerated by the known low-degree spectral methods. In particular, in the case of sparse PCA, prior to this work it appeared possible that one could, in quasi-polynomial time, recover a sparse unit vector in dimensions from samples from the distribution. Our lower bounds suggest that this is extremely unlikely; in fact this task probably requires polynomial SoS degree, and hence time for SoS algorithms. This demonstrates that (at least with regard to SoS algorithms) both problems are much harder than the planted clique problem, previously used as a basis for reductions in the setting of sparse PCA [BR13b].

Our lower bounds for sparse and tensor PCA are closely connected to the failure of low-degree spectral methods in high-noise regimes of both problems. We prove them both by showing that with noise beyond what known low-degree spectral algorithms can tolerate, even low-degree scalar algorithms (the result of restricting low-degree spectral algorithms to matrices) would require subexponential time to detect and recover planted signals. We then show that in the restricted settings of tensor and sparse PCA, ruling out these weakened low-degree spectral algorithms is enough to imply a strong SoS lower bound.
1.1 SoS and spectral algorithms for robust inference
We turn to our characterization of SoS algorithms for planted problems in terms of low-degree spectral algorithms. First, a word on planted problems. Many planted problems have several formulations: search, in which the goal is to recover a planted solution; refutation, in which the goal is to certify that no planted solution is present; and distinguishing, in which the goal is to determine with good probability whether an instance contains a planted solution or not. Often an algorithm for one version can be parlayed into algorithms for the others, but distinguishing problems are typically the easiest, and we focus on them here.
A distinguishing problem is specified by two distributions on instances: a planted distribution supported on instances with a hidden structure, and a uniform
distribution, where samples w.h.p. contain no hidden structure. Given an instance drawn with equal probability from the planted or the uniform distribution, the goal is to determine with probability greater than
whether or not the instance comes from the planted distribution. For example:

Planted clique. Uniform distribution: , the Erdős–Rényi distribution, which w.h.p. contains no clique of size . Planted distribution: the uniform distribution on graphs containing a clique of size , for some . (The problem gets harder as gets smaller, since the distance between the distributions shrinks.)
Planted XOR. Uniform distribution: a XOR instance on variables and equations , where the triples and the signs are sampled uniformly and independently. W.h.p., no assignment to will satisfy more than a fraction of the equations. Planted distribution: the same, except the signs are sampled to correlate with for a randomly chosen , so that the assignment satisfies a fraction of the equations. (The problem gets easier as gets larger, and the contradictions in the uniform case become more locally apparent.)
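As an illustration of the two distributions, here is a small simulation of planted versus uniform 3-XOR. This is a hedged sketch: the instance sizes and noise rate eta are our choices, and for simplicity we allow repeated indices inside a triple.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, eta = 50, 5000, 0.1  # variables, equations, noise rate (illustrative)

x = rng.choice([-1, 1], size=n)            # hidden planted assignment
triples = rng.integers(0, n, size=(m, 3))  # random index triples (repeats allowed)

uniform_signs = rng.choice([-1, 1], size=m)
flip = rng.choice([1, -1], size=m, p=[1 - eta, eta])  # noise: flip each sign w.p. eta
planted_signs = x[triples[:, 0]] * x[triples[:, 1]] * x[triples[:, 2]] * flip

def frac_satisfied(signs, z):
    """Fraction of equations z_i * z_j * z_k = sign satisfied by assignment z."""
    lhs = z[triples[:, 0]] * z[triples[:, 1]] * z[triples[:, 2]]
    return float(np.mean(lhs == signs))

# planted: x satisfies about a (1 - eta) fraction of equations;
# uniform: about 1/2, and w.h.p. no assignment does much better when m >> n
```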
We now formally define a family of distinguishing problems, in order to state our main theorem. Let be a set of instances corresponding to a product space (for concreteness one may think of as the set of graphs on vertices, indexed by , although the theorem applies more broadly). Let , our uniform distribution, be a product distribution on .
With some decision problem in mind (e.g., does contain a clique of size ?), let be a set of solutions to ; again for concreteness one may think of as being associated with cliques in a graph, so that is the set of all indicator vectors on at least vertices.
For each solution , let be the uniform distribution over instances that contain . For example, in the context of planted clique, if is a clique on vertices , then would be the uniform distribution on graphs containing the clique . We define the planted distribution to be the uniform mixture over , .
The following is our main theorem on the equivalence of sum-of-squares algorithms for distinguishing problems and spectral algorithms employing low-degree matrix polynomials.
Theorem 1.1 (Informal).
Let , and let be sets of real numbers. Let be a family of instances over , and let be a decision problem over with the set of possible solutions to over . Let be a system of polynomials of degree at most in the variables and constant degree in the variables that encodes , so that
1. for , with high probability the system is unsatisfiable and admits a degree SoS refutation, and
2. for , with high probability the system is satisfiable by some solution , and remains feasible even if all but an fraction of the coordinates of are rerandomized according to .
Then there exists a matrix whose entries are degree polynomials such that
where denotes the maximum nonnegative eigenvalue.
The condition that a solution remain feasible if all but a fraction of the coordinates of are rerandomized should be interpreted as a noise-robustness condition. For example, in the context of planted clique, suppose we start with a planted distribution over graphs with a clique of size . If a random subset of vertices is chosen, and all edges not entirely contained in that subset are rerandomized according to the distribution, then with high probability at least of the vertices of remain in a clique, and so remains feasible for the problem: does the graph have a clique of size ?
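The rerandomization experiment just described can be simulated directly. This sketch (parameters ours) plants a clique, rerandomizes every edge not inside a random half of the vertices, and checks that the surviving clique vertices still form a clique.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, delta = 200, 40, 0.5  # illustrative parameters

clique = rng.choice(n, size=k, replace=False)
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)  # G(n, 1/2) upper triangle
A = A + A.T
A[np.ix_(clique, clique)] = 1  # plant the clique
np.fill_diagonal(A, 0)

kept = rng.random(n) < delta  # keep a random delta-fraction of vertices
for i in range(n):            # rerandomize every edge not inside the kept set
    for j in range(i + 1, n):
        if not (kept[i] and kept[j]):
            A[i, j] = A[j, i] = rng.integers(0, 2)

survivors = [v for v in clique if kept[v]]
sub = A[np.ix_(survivors, survivors)]
# the surviving ~delta*k clique vertices still form a clique in the new graph
```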
1.2 SoS and informationcomputation gaps
The computational complexity of planted problems has become a rich area of study. The goal is to understand which planted problems admit efficient (polynomial-time) algorithms, and to study the information-computation gap phenomenon: many problems have noisy regimes in which planted structures can be found by inefficient algorithms, but (conjecturally) not by polynomial-time algorithms. One example is the planted clique problem, where the goal is to find a large clique in a sample from the uniform distribution over graphs containing a clique of size for a small constant . While the problem is solvable for any by a brute-force algorithm requiring time, polynomial-time algorithms are conjectured to require .
A common strategy to provide evidence for such a gap is to prove that powerful classes of efficient algorithms are unable to solve the planted problem in the (conjecturally) hard regime. SoS algorithms are particularly attractive targets for such lower bounds because of their broad applicability and strong guarantees.
In a recent work, Barak et al. [BHK16] show an SoS lower bound for the planted clique problem, demonstrating that when , SoS algorithms require time to solve planted clique. Intriguingly, they show that, in the case of planted clique, SoS algorithms requiring time can distinguish planted from random graphs only when there is a scalar-valued degree polynomial (here is the adjacency matrix of a graph) with

That is, such a polynomial has much larger expectation under the planted distribution than its standard deviation under the uniform distribution. (The choice of is somewhat arbitrary, and could be replaced with or with small changes in the parameters.) By showing that, as long as , any such polynomial must have degree , they rule out efficient SoS algorithms when . Interestingly, this matches the spectral distinguishing threshold—the spectral algorithm of [AKS98] is known to work when .

This stronger characterization of SoS for the planted clique problem, in terms of scalar distinguishing algorithms rather than spectral distinguishing algorithms, may at first seem insignificant. To see why the scalar characterization is more powerful, we point out that if the degree moments of the planted and uniform distributions are known, determining the optimal scalar distinguishing polynomial is easy: given a planted distribution and a uniform distribution over instances , one just solves a linear algebra problem in the coefficients of to maximize the expectation over relative to :

It is not difficult to show that the optimal solution to the above program has a simple form: it is the projection of the relative density of with respect to onto the degree polynomials. So given a pair of distributions , it is possible to determine in time whether there exists a degree scalar distinguishing polynomial. Answering the same question about the existence of a spectral distinguisher is more complex, and to the best of our knowledge cannot be done efficiently.
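To illustrate how mechanical this is, the following toy computation (our construction, on a tiny domain) finds the optimal low-degree scalar distinguisher for a pair of distributions on the Boolean cube. Under the uniform measure the monomials form an orthonormal basis, so the optimal degree-d polynomial is read off directly from the planted moments.

```python
import itertools
import numpy as np

n, d = 4, 2  # a tiny toy instance (our choice, for illustration)
cube = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)

# toy "planted" distribution: a slight tilt favoring z_0 * z_1 = +1
p_vec = np.array([(1 + 0.5 * z[0] * z[1]) / 2 ** n for z in cube])

# evaluate all monomials of degree <= d on the cube
monos = [S for r in range(d + 1) for S in itertools.combinations(range(n), r)]
feats = np.array([[np.prod(z[list(S)]) for S in monos] for z in cube])

# under the uniform measure the monomial basis is orthonormal, so the optimal
# degree-d distinguisher has coefficients equal to the planted moments E_planted[chi_S]
w = feats.T @ p_vec
p_vals = feats @ w  # the optimal polynomial, evaluated on the cube

# the achieved advantage E_planted[p] / sqrt(E_uniform[p^2]) equals ||w||
advantage = (p_vec @ p_vals) / np.sqrt(np.mean(p_vals ** 2))
```

Here the only nonzero planted moments are the constant (1) and the pair moment (0.5), so the advantage is sqrt(1 + 0.25).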
Given this powerful theorem for the case of the planted clique problem, one may be tempted to conjecture that this stronger, scalar-distinguisher characterization of SoS algorithms applies more broadly than just to the planted clique problem, and perhaps as broadly as Theorem 1.1. If this conjecture is true, then given a pair of distributions and with known moments, it would in many cases be possible to efficiently and mechanically determine whether polynomial-time SoS distinguishing algorithms exist!
Conjecture 1.2.
In the setting of Theorem 1.1, the conclusion may be replaced with the conclusion that there exists a scalar-valued polynomial of degree so that
To illustrate the power of this conjecture, at the beginning of Section 6 we give a short and self-contained explanation of how it predicts, via simple linear algebra, our degree SoS lower bound for tensor PCA. As evidence for the conjecture, we verify this prediction by proving such a lower bound unconditionally.
We also note why Theorem 1.1 does not imply Conjecture 1.2. While, in the notation of that theorem, the entries of are low-degree polynomials in , the function is not (to the best of our knowledge) a low-degree polynomial in the entries of (even approximately). (This stands in contrast to, say, the operator norm or Frobenius norm of , both of which are exactly or approximately low-degree polynomials in the entries of .) This means that the final output of the spectral distinguishing algorithm offered by Theorem 1.1 is not a low-degree polynomial in the instance .
1.3 Exponential lower bounds for sparse PCA and tensor PCA
Our other main results are strong exponential lower bounds on the sum-of-squares method (specifically, against time or degree algorithms) for tensor and sparse principal component analysis (PCA). We prove the lower bounds by extending the techniques pioneered in [BHK16]. In the present work we describe the proofs informally, leaving full details to a forthcoming full version.
Tensor PCA
We start with the simpler case of tensor PCA, introduced by [RM14].
Problem 1.3 (Tensor PCA).
Given an order tensor in , determine whether it comes from:
- Uniform Distribution: each entry of the tensor is sampled independently from .
- Planted Distribution: a spiked tensor, where is sampled uniformly from , and where is a random tensor with each entry sampled independently from .
Here, we think of as a signal hidden by Gaussian noise. The parameter is a signal-to-noise ratio; in particular, as grows, we expect the distinguishing problem above to get easier.
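A hedged sketch of the spiked tensor model, together with one standard low-degree spectral statistic—the top singular value of a matrix unfolding of the tensor. The dimensions and signal strength below are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 30, 60.0  # illustrative; the unfolding works once lam >> n^(3/4) ~ 13

def sample_tensor(planted):
    T = rng.standard_normal((n, n, n))  # Gaussian noise
    if planted:
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)
        T += lam * np.einsum('i,j,k->ijk', x, x, x)  # rank-one spike
    return T

def unfolding_stat(T):
    """Top singular value of the n x n^2 matrix unfolding of T."""
    return np.linalg.svd(T.reshape(n, n * n), compute_uv=False)[0]

s_uniform = unfolding_stat(sample_tensor(False))
s_planted = unfolding_stat(sample_tensor(True))
# uniform: roughly sqrt(n^2) + sqrt(n) ~ 35; planted: roughly lam = 60
```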
Tensor PCA is a natural generalization of the PCA problem in machine learning and statistics. Tensor methods in general are useful when data naturally has more than two modalities: for example, one might consider a recommender system which factors in not only people and movies but also time of day. Many natural tensor problems are NP-hard in the worst case. Though this is not necessarily an obstacle to machine learning applications, it is important to have average-case models in which to study algorithms for tensor problems. The spiked tensor setting we consider here is one such simple model.
Turning to algorithms: consider first the ordinary PCA problem in a spiked-matrix model. Given an matrix , the problem is to distinguish between the case where every entry of is independently drawn from the standard Gaussian distribution and the case where is drawn from such a distribution with an added rank-one shift in a uniformly random direction. A natural and well-studied algorithm, which solves this problem to information-theoretic optimality, is to threshold on the largest singular value/spectral norm of the input matrix—equivalently, to threshold on the maximum of the associated degree-two polynomial.

A natural generalization of this algorithm to the tensor PCA setting (restricting to order three for simplicity in this discussion) is the maximum of the degree-three polynomial over the unit sphere—equivalently, the (symmetric) injective tensor norm of . This maximum can be shown to be much larger under the planted distribution so long as . Indeed, this approach to distinguishing between planted and uniform distributions is information-theoretically optimal [PWB16, BMVX16]. Since recovering the spike and optimizing the polynomial on the sphere are equivalent, tensor PCA can be thought of as an average-case version of the problem of optimizing a degree polynomial on the unit sphere (a problem that is NP-hard in the worst case, even to approximate [HL09, BBH12]).
Even in this average-case model, it is believed that there is a gap between the signal strengths which allow recovery of by brute-force methods and those which permit polynomial-time algorithms. This is quite distinct from the vanilla PCA setting, where eigenvector algorithms solve the spike-recovery problem to information-theoretic optimality. Nevertheless, the best-known algorithms for tensor PCA arise from computing convex relaxations of this degree polynomial optimization problem. Specifically, the SoS method captures the state-of-the-art algorithms for the problem; it is known to recover the vector to error in polynomial time whenever [HSS15]. A major open question in this direction is to understand the complexity of the problem for . Algorithms (again captured by SoS) are known which run in time [RRS16, BGG16]. We show the following theorem, which shows that the subexponential algorithm above is in fact nearly optimal among SoS algorithms.
Theorem 1.4.
For a tensor , let
For every small enough constant , if has iid Gaussian or entries, , for every for some universal .
In particular, for third-order tensors (i.e., ), since degree SoS is unable to certify that a random tensor has maximum value much less than , this SoS relaxation cannot be used to distinguish the planted and random distributions above when . (In fact, our proof of this theorem shows somewhat more: a large family of constraints—any valid constraint which is itself a low-degree polynomial of —could be added to this convex relaxation and the lower bound would still hold.)
Sparse PCA
We turn to sparse PCA, which we formalize as the following planted distinguishing problem.
Problem 1.5 (Sparse PCA ).
Given an symmetric real matrix , determine whether comes from:
- Uniform Distribution: each upper-triangular entry of the matrix is sampled iid from ; the remaining entries are filled in to preserve symmetry.
- Planted Distribution: a random sparse unit vector with entries is sampled, and is sampled from the uniform distribution above; then .
We defer significant discussion to Section 6, noting just a few things before stating our main theorem on sparse PCA. First, the planted model above is sometimes called the spiked Wigner model—this refers to the independence of the entries of the noise matrix. An alternative model for sparse PCA is the spiked Wishart model: is replaced by , where each , for some number of samples and some signal strength . Though there are technical differences between the models, to the best of our knowledge all known algorithms with provable guarantees apply equally to either model; we expect that our SoS lower bounds also apply in the spiked Wishart model.
We generally think of as small powers of , i.e., for some ; this allows us to generally ignore logarithmic factors in our arguments. As in the tensor PCA setting, a natural and information-theoretically optimal algorithm for sparse PCA is to maximize the quadratic form , this time over sparse unit vectors. For from the uniform distribution, standard techniques (nets and union bounds) show that the maximum value achievable is with high probability, while for from the planted model of course . So, when , one may distinguish the two models by this maximum value.
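On a tiny instance, the brute-force statistic just described can be computed by exhausting over supports. This sketch (sizes and signal strength are our illustrative choices) does exactly that in the spiked Wigner model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n, k, lam = 16, 3, 12.0  # tiny illustrative instance

def sample(planted):
    W = rng.standard_normal((n, n))
    W = (W + W.T) / np.sqrt(2)  # symmetric Gaussian noise
    if planted:
        supp = rng.choice(n, size=k, replace=False)
        x = np.zeros(n)
        x[supp] = 1 / np.sqrt(k)  # k-sparse unit spike
        W += lam * np.outer(x, x)
    return W

def sparse_max(W):
    """Brute-force max of <x, W x> over k-sparse unit vectors:
    for each support, take the top eigenvalue of the principal submatrix."""
    return max(np.linalg.eigvalsh(W[np.ix_(S, S)])[-1]
               for S in itertools.combinations(range(n), k))

v_uniform = sparse_max(sample(False))
v_planted = sparse_max(sample(True))
# planted value concentrates near lam; uniform stays O(sqrt(k log n))
```

The exhaustive search over all k-subsets is of course exponential in k; this is exactly the brute force that efficient algorithms must avoid.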
However, this maximization problem is NP-hard for general quadratic forms [CPR16]. So, efficient algorithms must use some other distinguisher which leverages the randomness in the instances. Essentially only two polynomial-time-computable distinguishers are known. (If one studies the problem at much finer granularity than we do here—in particular, studying up to low-order additive terms, and how precisely it is possible to estimate the planted signal —then the situation is more subtle [DM14a].) If then the maximum eigenvalue of distinguishes the models. If then the planted model can be distinguished by the presence of large diagonal entries of . Notice that both of these distinguishers fail for some choices of (that is, ) for which brute-force methods (optimizing over sparse ) could successfully distinguish planted from uniform ’s. The theorem below should be interpreted as an impossibility result for SoS algorithms in the regime. It is the strongest known impossibility result for sparse PCA among those ruling out classes of efficient algorithms (one reduction-based result is also known, which shows that sparse PCA is at least as hard as the planted clique problem [BR13a]). It is also the first evidence that the problem may require subexponential (as opposed to merely quasi-polynomial) time.

Theorem 1.6.
If , let
There are absolute constants so that for every and , if , then for ,
For more thorough discussion of the theorem, see Section 6.3.
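The two polynomial-time distinguishers discussed above—the top eigenvalue of the matrix and its largest diagonal entry—can be sketched in a few lines in the spiked Wigner model. All parameter choices below are ours, picked so that both statistics separate the two distributions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, lam = 400, 6, 60.0  # illustrative: lam > 2*sqrt(n), and lam/k >> sqrt(log n)

def sample(planted):
    W = rng.standard_normal((n, n))
    W = (W + W.T) / np.sqrt(2)  # Wigner noise
    if planted:
        supp = rng.choice(n, size=k, replace=False)
        x = np.zeros(n)
        x[supp] = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)
        W += lam * np.outer(x, x)  # sparse rank-one spike
    return W

top_eig = lambda W: np.linalg.eigvalsh(W)[-1]  # distinguisher 1: spectral norm
max_diag = lambda W: np.diag(W).max()          # distinguisher 2: large diagonal entries

W_u, W_p = sample(False), sample(True)
# uniform: top_eig ~ 2*sqrt(n) = 40, max_diag ~ sqrt(4 log n) ~ 5
# planted: top_eig ~ lam = 60, diagonal entries of size lam/k = 10 on the support
```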
1.4 Related work
On interplay of SoS relaxations and spectral methods
As we have already alluded to, many prior works explore the connection between SoS relaxations and spectral algorithms, beginning with the work of [BBH12] and including the follow-up works [HSS15, AOW15b, BM16] (plus many more). Of particular interest are the papers [HSSS16, MS16b], which use SoS algorithms to obtain fast spectral algorithms, in some cases running in time linear in the input size (smaller even than the number of variables in the associated SoS SDP).
In light of Theorem 1.1, it is particularly interesting to note cases in which known SoS lower bounds match the known spectral algorithms. These problems include planted clique (upper bound: [AKS98]; lower bound: [BHK16]—SDP lower bounds for the planted clique problem were known earlier for lower degrees of sum-of-squares relaxations and for other SDP relaxations; see the references therein for details), strong refutations for random CSPs (upper bounds: [AOW15b, RRS16]—there is a long line of work on algorithms for refuting random CSPs, and 3-SAT in particular, and the listed papers contain additional references; lower bounds: [Gri01b, Sch08, KMOW17]), and tensor principal component analysis (upper bounds: [HSS15, RRS16, BGG16]; lower bound: this paper).
We also remark that our work applies to several previously considered distinguishing and average-case problems within the sum-of-squares algorithmic framework: block models [MS16a] and densest subgraph [BCC10]. For each of these problems, Theorem 1.1 gives an equivalence between efficient sum-of-squares algorithms and efficient spectral algorithms, and it remains to establish exactly what the tradeoff is between the efficiency of the algorithm and the difficulty of distinguishing, or the strength of the noise.
To the best of our knowledge, no previous work has attempted to characterize SoS relaxations for planted problems by simpler algorithms in the generality we do here. Some works have considered characterizing degree SoS relaxations (i.e., basic semidefinite programs) in terms of simpler algorithms. One such example is recent work of Fan and Montanari [FM16], who showed that for some planted problems on sparse random graphs, a class of simple procedures called local algorithms performs as well as semidefinite programming relaxations.
On strong SoS lower bounds for planted problems
By now, there is a large body of work establishing lower bounds on SoS SDPs for various average-case problems. Beginning with the work of Grigoriev [Gri01a], a long line of work has established tight lower bounds for random constraint satisfaction problems [Sch08, BCK15, KMOW17] and planted clique [MPW15, DM15, HKP15, RS15, BHK16]. The recent SoS lower bound for planted clique of [BHK16] was particularly influential to this work, setting the stage for our main line of inquiry. We also draw attention to previous work on lower bounds for the tensor PCA and sparse PCA problems for the degree SoS relaxation [HSS15, MW15b]—our paper improves on this and extends our understanding of lower bounds for tensor and sparse PCA to any degree.
Tensor principal component analysis was introduced by Montanari and Richard [RM14], who identified the information-theoretic threshold for recovery of the planted component and analyzed the maximum-likelihood estimator for the problem. The work of [HSS15] began the effort to analyze the sum-of-squares method for the problem and showed that it yields an efficient algorithm for recovering the planted component at signal strength . It also established that this threshold is tight for the sum-of-squares relaxation of degree 4. Following this, Hopkins et al. [HSSS16] showed how to extract a linear-time spectral algorithm from the above analysis. Tomioka and Suzuki derived tight information-theoretic thresholds for detecting planted components by establishing tight bounds on the injective tensor norm of random tensors [TS14]. Finally, very recently, Raghavendra et al. and Bhattiprolu et al. independently gave subexponential-time algorithms for tensor PCA [RRS16, BGL16]. Their algorithms are spectral and are captured by the sum-of-squares method.
1.5 Organization
In sec:lowdegdist we set up and state our main theorem on SoS algorithms versus low-degree spectral algorithms. In sec:examp we show that the main theorem applies to numerous planted problems—we emphasize that checking each problem is very simple (and barely requires more than a careful definition of the planted and uniform distributions). In sec:momentmatch and sec:proofofthm we prove the main theorem on SoS algorithms versus low-degree spectral algorithms.
In section 7 we get prepared to prove our lower bound for tensor PCA by proving a structural theorem on factorizations of lowdegree matrix polynomials with wellbehaved Fourier transforms. In section 8 we prove our lower bound for tensor PCA, using some tools proved in section 9.
Notation
For two matrices , let . Let denote the Frobenius norm, and its spectral norm. For matrix valued functions over and a distribution over , we will denote and by .
For a vector of formal variables , we use to denote the vector consisting of all monomials of degree at most in these variables. Furthermore, let us denote .
2 Distinguishing Problems and Robust Inference
In this section, we set up the formal framework within which we will prove our main result.
Uniform vs. Planted Distinguishing Problems
We begin by describing a class of distinguishing problems. For a set of real numbers, we will use to denote a space of instances indexed by variables—for the sake of concreteness, it will be useful to think of as ; for example, we could have and as the set of all graphs on vertices. However, the results that we show here continue to hold in other contexts, where the space of all instances is or .
Definition 2.1 (Uniform Distinguishing Problem).
Suppose that is the space of all instances, and suppose we have two distributions over , a product distribution (the “uniform” distribution), and an arbitrary distribution (the “planted” distribution).
In a uniform distinguishing problem, we are given an instance which is sampled with probability from and with probability from , and the goal is to determine with probability greater than which distribution was sampled from, for any constant .
Polynomial Systems
In the uniform distinguishing problems that we are interested in, the planted distribution will be a distribution over instances that obtain a large value for some optimization problem of interest (e.g., the max clique problem). We define polynomial systems in order to formally capture optimization problems.
Program 2.2 (Polynomial System).
Let be sets of real numbers, let , and let be a space of instances and be a space of solutions. A polynomial system is a set of polynomial equalities
where are polynomials in the program variables , representing , and in the instance variables , representing . We define to be the degree of in the program variables, and to be the degree of in the instance variables.
Remark 2.3.
For the sake of simplicity, the polynomial system prog:bopt has no inequalities. Inequalities can be incorporated into the program by converting each inequality into an equality with an additional slack variable. Our main theorem still holds, with some minor modifications of the proof, as outlined in sec:proofofthm.
A polynomial system allows us to capture problem-specific objective functions as well as problem-specific constraints. For concreteness, consider a quadratic program which checks whether a graph on vertices contains a clique of size . We can express this with the polynomial system over program variables and instance variables , where iff there is an edge from to , as follows:
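One standard way to write such a system (our rendering, with $x_i$ the program variables indicating clique membership, $A_{ij}$ the instance variables, and $k$ the target clique size) is:

```latex
\begin{align*}
x_i^2 &= x_i \quad \forall\, i \in [n] && \text{($x$ is a $0/1$ indicator vector)}\\
x_i x_j (1 - A_{ij}) &= 0 \quad \forall\, i \neq j && \text{(no non-edge inside the clique)}\\
\sum_{i=1}^{n} x_i &= k && \text{(the clique has size $k$)}
\end{align*}
```

This system is feasible exactly when the graph contains a clique of size $k$, matching the decision problem described above.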
Planted Distributions
We will be concerned with planted distributions of a particular form; first, we fix a polynomial system of interest and some set of feasible solutions for , so that the program variables represent elements of . Again, for concreteness, if is the set of graphs on vertices, we can take to be the set of indicators for subsets of at least vertices.
For each fixed , let denote the uniform distribution over for which the polynomial system is feasible. The planted distribution is given by taking the uniform mixture over the , i.e., .
SoS Relaxations
If we have a polynomial system where for every , then the degree sum-of-squares SDP relaxation for the polynomial system prog:bopt can be written as follows.
Program 2.4 (SoS Relaxation for Polynomial System).
Let be a polynomial system in instance variables and program variables . If for all , then an SoS relaxation for is
where is an matrix containing the variables of the SDP and are matrices containing the coefficients of in , so that the constraint encodes the constraint in the SDP variables. Note that the entries of are polynomials of degree at most in the instance variables.
Subinstances
Suppose that is a family of instances; then given an instance and a subset , let denote the subinstance consisting of coordinates within . Further, for a distribution over subsets of , let denote a subinstance generated by sampling . Let denote the set of all subinstances of an instance , and let denote the set of all subinstances of all instances.
Robust Inference
Our result will pertain to polynomial systems that define planted distributions whose solutions to subinstances generalize to feasible solutions over the entire instance. We call this property “robust inference.”
Definition 2.5.
Let be a family of instances, let be a distribution over subsets of , let be a polynomial system as in prog:bopt, and let be a planted distribution over instances feasible for . Then the polynomial system is said to satisfy the
robust inference property for probability distribution
on and subsampling distribution , if, given a subsampling of an instance from , one can infer a setting of the program variables that remains feasible to for most settings of . Formally, there exists a map such that
for some negligible function . When we wish to specify the error probability, we will say that the polynomial system is robustly inferable.
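In symbols, the condition can be sketched as follows, where the notation is our own ($\xi$ for the inference map from subinstances to settings of the program variables, $\mu$ for the planted distribution, $\mathcal{D}$ for the subsampling distribution, and $\delta$ for the negligible error):

```latex
\Pr_{x \sim \mu,\; S \sim \mathcal{D}}\Big[\, \xi(x_S) \text{ is feasible for the polynomial system on } x \,\Big] \;\geq\; 1 - \delta(n).
```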
Main Theorem
We are now ready to state our main theorem.
Theorem 2.6.
Suppose that is a polynomial system as defined in prog:bopt, of degree at most in the program variables and degree at most in the instance variables. Let such that

The polynomial system is robustly inferable with respect to the planted distribution and the subsampling distribution .

For , the polynomial system admits a degree SoS refutation with numbers bounded by with probability at least .
Let be such that for any subset with ,
There exists a degree matrix polynomial such that,
Remark 2.7.
Our argument implies a stronger result that can be stated in terms of the eigenspaces of the subsampling operator. Specifically, suppose we define
Then, the distinguishing polynomial exhibited by thm:main satisfies . This refinement can yield tighter bounds in cases where not all monomials of a given degree are equivalent to one another. For example, in the Planted Clique problem, each monomial corresponds to a subgraph, and the right measure of the degree of a subgraph is the number of vertices it contains, as opposed to the number of edges.
In sec:examp, we will make the routine verifications that the conditions of this theorem hold for a variety of distinguishing problems: planted clique (lem:pcex), refuting random CSPs (lem:cspex), stochastic block models (lem:sbmex), densest subgraph (lem:dksex), tensor PCA (lem:tpcaex), and sparse PCA (lem:spcaex). Now we will proceed to prove the theorem.
3 MomentMatching Pseudodistributions
We assume the setup from sec:lowdegdist: we have a family of instances , a polynomial system with a family of solutions , a “uniform” distribution which is a product distribution over , and a “planted” distribution over defined by the polynomial system as described in sec:lowdegdist.
The contrapositive of thm:lowdeg is that if is robustly inferable with respect to and a distribution over subinstances , and if there is no spectral algorithm for distinguishing and , then with high probability there is no degree SoS refutation for the polynomial system (as defined in prog:boptmat). To prove the theorem, we will use duality to argue that if no spectral algorithm exists, then there must exist an object which is in some sense close to a feasible solution to the SoS SDP relaxation.
Since each in the support of is feasible for by definition, a natural starting point is the SoS SDP solution for instances . With this in mind, we let be an arbitrary function from the support of over to PSD matrices. In other words, we take
where is the relative density of with respect to , so that , and is some matrix-valued function such that and for all . Our goal is to find a PSD matrix-valued function that matches the low-degree moments of in the variables , while being supported over most of (rather than just over the support of ).
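In our own (illustrative) notation, with $\nu$ the uniform distribution, $\mu$ the relative density, $\Lambda$ and $\tilde{\Lambda}$ the two matrix-valued functions, and $d$ the degree bound, the moment-matching requirement can be sketched as:

```latex
\mathop{\mathbb{E}}_{x \sim \nu}\big[\tilde{\Lambda}(x)\, x^{\alpha}\big]
\;=\;
\mathop{\mathbb{E}}_{x \sim \nu}\big[\mu(x)\,\Lambda(x)\, x^{\alpha}\big]
\quad \text{for every monomial } x^{\alpha} \text{ of degree at most } d,
```

together with the pointwise constraint $\tilde{\Lambda}(x) \succeq 0$.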
The function is given by the following exponentially large convex program over matrixvalued functions,
Program 3.1 (Pseudodistribution Program).
(3.1)  
(3.2)  
(3.3) 
The constraint eq:lowdeg fixes , and so the objective function eq:obj can be viewed as minimizing , a proxy for the collision probability of the distribution, which is a measure of entropy.
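Recall that for a distribution $p$ on a finite set, the collision probability and the Rényi 2-entropy it determines are

```latex
\operatorname{coll}(p) \;=\; \Pr_{x,\, x' \sim p}\big[x = x'\big] \;=\; \sum_{x} p(x)^2,
\qquad
H_2(p) \;=\; -\log \operatorname{coll}(p),
```

so minimizing the collision-probability proxy corresponds to maximizing this entropy, spreading the mass of the solution over as much of the support as possible.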
Remark 3.2.
We have perturbed in eq:lambdaperturb so that we can easily show that strong duality holds in the proof of claim:dual. For the remainder of the paper we ignore this perturbation, as we can accumulate the resulting error terms and set to be small enough so that they can be neglected.
The dual of the above program will allow us to relate the existence of an SoS refutation to the existence of a spectral algorithm.
Program 3.3 (LowDegree Distinguisher).
where is the projection of to the PSD cone.
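Concretely, the projection of a symmetric matrix onto the PSD cone zeroes out its negative eigenvalues, yielding the Frobenius-nearest PSD matrix. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def psd_projection(M):
    """Project a symmetric matrix M onto the PSD cone by
    zeroing out its negative eigenvalues."""
    M = (M + M.T) / 2                      # symmetrize for numerical safety
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs @ np.diag(np.clip(eigvals, 0, None)) @ eigvecs.T
```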
Claim 3.4.
prog:disting is a manipulation of the dual of prog:distrib, so that if prog:distrib has optimum , then prog:disting has optimum at least .
Before we present the proof of the claim, we summarize its central consequence in the following theorem: if prog:distrib has a large objective value (and therefore does not provide a feasible SoS solution), then there is a spectral algorithm.
Theorem 3.5.
Fix a function such that . Let be the function that gives the largest nonnegative eigenvalue of a matrix. Then the optimum of prog:distrib is equal to only if there exists a low-degree matrix polynomial such that,
while,
Proof.
By claim:dual, if the value of prog:distrib is , then there is a polynomial that achieves a value of for the dual. It follows that
while
∎
It is interesting to note that the specific structure of the PSD matrix-valued function plays no role in the above argument: since serves as a proxy for monomials in the solution as represented by the program variables , the choice of how to represent the planted solution is not critical. Although seemingly counterintuitive, this is natural, because distinguishability by low-degree distinguishers or by SoS SDP relaxations is a property of and alone.
We wrap up this section by presenting a proof of claim:dual.
Proof of claim:dual.
We take the Lagrangian dual of prog:distrib. Our dual variables will be some combination of low-degree matrix polynomials, , and a PSD matrix :
It is easy to verify that if is not PSD, then can be chosen so that the value of is . Similarly, if there exists a low-degree polynomial upon which and differ in expectation, then can be chosen as a multiple of that polynomial so that the value of is .
Now, we argue that Slater’s conditions are met for prog:distrib, as is strictly feasible. Thus strong duality holds, and therefore
Taking the partial derivative of with respect to , we have
where the first derivative is in the space of functions from . By the convexity of as a function of , it follows that setting yields the minimizer. Substituting, it follows that
(3.4) 
Now it is clear that the maximizing choice of is to set , the negation of the negative-semidefinite projection of . Thus eq:optbd simplifies to
(3.5) 
where we have used the shorthand . Now suppose that the low-degree matrix polynomial achieves a right-hand side value of
Consider . Clearly . Now, multiplying the above inequality through by the scalar , we have that
Therefore is at least , as if then the third term gives the lower bound, and otherwise the first term gives the lower bound.
Thus, by substituting , the square root of the maximum of eq:unconst within an additive lower-bounds the maximum of the program
This concludes the proof. ∎
4 Proof of thm:main
We will prove thm:main by contradiction. Let us assume that there exists no degree matrix polynomial that distinguishes from . First, the lack of distinguishers implies the following fact about scalar polynomials.
Lemma 4.1.
Under the assumption that there are no degree distinguishers, for every degree scalar polynomial ,
Proof.
Suppose not; then the degree matrix polynomial would be a distinguisher between and . ∎
Constructing
First, we will use the robust inference property of to construct a pseudodistribution . Recall again that we have defined to be the relative density of with respect to , so that . For each subset , define a PSD matrix-valued function as,
where we use to denote the restriction of to , and to denote the instance given by completing the subinstance with the setting . Notice that is a function depending only on —this fact will be important to us. Define . Observe that is a PSD matrixvalued function that satisfies
(4.1) 
Since is an average over , each of which is a feasible solution with high probability, is close to a feasible solution to the SDP relaxation for . The following lemma formalizes this intuition.
Define , and use to denote the orthogonal projection onto .
Lemma 4.2.
Suppose prog:bopt satisfies the robust inference property with respect to the planted distribution and the subsampling distribution , and suppose for all . Then for every , we have
Proof.
We begin by expanding the left-hand side by substituting the definition of . We have
And because the inner product is zero if is a feasible solution,  
And now letting 