Matrix completion is one of the cornerstone problems in machine learning and has a diverse range of applications. One of the original motivations for it comes from the Netflix Problem, where the goal is to predict user-movie ratings based on all the ratings we have observed so far, across many different users. We can organize this data into a large, partially observed matrix $M$ where each row represents a user and each column represents a movie. The goal is to fill in the missing entries. The usual assumptions are that the ratings depend on only a few hidden characteristics of each user and movie, and that the underlying matrix $M$ is approximately low rank. Another standard assumption is that it is incoherent, which we elaborate on later. How many entries of $M$ do we need to observe in order to fill in its missing entries? And are there efficient algorithms for this task?
There have been thousands of papers on this topic and by now we have a relatively complete set of answers. A representative result (building on earlier works by Fazel, Recht, Fazel and Parrilo, Srebro and Shraibman, Candes and Recht, and Candes and Tao) due to Keshavan, Montanari and Oh can be phrased as follows: Suppose $M$ is an unknown $n \times n$ matrix that has rank $r$, but each of its entries has been corrupted by independent Gaussian noise with standard deviation $\sigma$. Then if we observe roughly

$$m = \frac{\sigma^2}{\epsilon^2} \cdot n r \log n$$

of its entries, the locations of which are chosen uniformly at random, there is an algorithm that outputs a matrix $X$ that with high probability satisfies

$$\frac{1}{n^2} \sum_{i,j} |X_{i,j} - M_{i,j}| \le \epsilon.$$
There are extensions to non-uniform sampling models [55, 24], as well as various efficiency improvements [47, 40]. What is particularly remarkable about these guarantees is that the number of observations needed is within a logarithmic factor of the number of parameters, $2nr$, that define the model.
In fact, there are benefits to working with even higher-order structure, but so far there has been little progress on natural extensions to the tensor setting. To motivate this problem, consider the Groupon Problem (which we introduce here to illustrate this point) where the goal is to predict user-activity ratings. The challenge is that which activities we should recommend (and how much a user liked a given activity) depends on time as well: weekday/weekend, day/night, summer/fall/winter/spring, etc., or even some combination of these. As above, we can cast this problem as a large, partially observed tensor where the first index represents a user, the second index represents an activity and the third index represents the time period. It is again natural to model the tensor as being close to low rank, under the assumption that a much smaller number of (latent) factors about the interests of the user, the type of activity and the time period should contribute to the rating. How many entries of the tensor do we need to observe in order to fill in its missing entries? This problem is emblematic of a larger issue: Can we always solve linear inverse problems when the number of observations is comparable to the number of parameters in the model, or is computational intractability an obstacle?
In fact, one of the advantages of working with tensors is that their decompositions are unique in important ways that matrix decompositions are not. There has been a groundswell of recent work that uses tensor decompositions for exactly this reason, for parameter learning in phylogenetic trees, HMMs, mixture models and topic models, and for community detection. In these applications, one assumes access to the entire tensor (up to some sampling noise). But given that the underlying tensors are low rank, can we observe fewer of their entries and still utilize tensor methods?
A wide range of approaches to solving tensor completion have been proposed [56, 35, 70, 73, 61, 52, 48, 14, 74]. However, in terms of provable guarantees, none of them improve upon the following naïve algorithm. (Most of the existing approaches rely on computing the tensor nuclear norm, which is hard to compute [39, 41]. The only other algorithms we are aware of [48, 14] require that the factors be orthogonal, which is a rather strong assumption: first, orthogonality requires the rank to be at most $n$; second, even then, most tensors need to be "whitened" to be put in this form, and a random sample from the "whitened" tensor would correspond to a (dense) linear combination of the entries of the original tensor, which would be quite a different sampling model.) If the unknown tensor $T$ is $n \times n \times n$, we can treat it as a collection of $n$ matrices each of size $n \times n$. It is easy to see that if $T$ has rank at most $r$ then each of these slices also has rank at most $r$ (and they inherit incoherence properties as well). By treating a third-order tensor as nothing more than an unrelated collection of low-rank matrices, we can complete each slice separately using roughly $nr\log n$ observations per slice, so roughly $n^2 r \log n$ observations in total. When the rank is constant, this is a quadratic number of observations even though the number of parameters in the model is linear.
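The naïve baseline above can be sketched in a few lines. The sketch below is ours, not the paper's: it completes each slice independently with a simple rank-projection heuristic (alternating an SVD truncation with re-imposing the observed entries), rather than the nuclear-norm method the cited analyses actually use, and no guarantees are claimed for it.

```python
import numpy as np

def complete_slice(M_obs, mask, rank, n_iters=200):
    """Fill in one slice: alternate projecting onto rank-`rank`
    matrices with re-imposing the observed entries (a simple
    hard-impute style heuristic; illustrative only)."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # project to rank r
        X[mask] = M_obs[mask]                     # keep the observations
    return X

def naive_tensor_completion(T_obs, mask, rank):
    """Treat an n1 x n2 x n3 tensor as n1 unrelated matrix
    completion problems, one per slice."""
    return np.stack([complete_slice(T_obs[i], mask[i], rank)
                     for i in range(T_obs.shape[0])])
```

Because each slice is solved in isolation, the observations spent on one slice tell us nothing about any other slice, which is exactly the waste the rest of the paper aims to eliminate.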
Here we show how to solve the (noisy) tensor completion problem with many fewer observations. Let $n = \max\{n_1, n_2, n_3\}$. We give an algorithm based on the sixth level of the sum-of-squares hierarchy that can accurately fill in the missing entries of an unknown, incoherent $n_1 \times n_2 \times n_3$ tensor that is entry-wise close to being rank $r$ with roughly

$$m = n^{3/2}\, r \cdot \mathrm{polylog}(n)$$

observations. Moreover, our algorithm works even when the observations are corrupted by noise. When $n_1 = n_2 = n_3 = n$, this amounts to about $\sqrt{n}\, r \cdot \mathrm{polylog}(n)$ observations per slice, which is much smaller than what we would need to apply matrix completion on each slice separately. Our algorithm needs to leverage the structure across the various slices.
1.1 Our Results
We give an algorithm for noisy tensor completion that works for third-order tensors. Let $T$ be an $n_1 \times n_2 \times n_3$ tensor that is entry-wise close to being low rank. In particular let

$$T = \sum_{\ell=1}^{r} \sigma_\ell\, a_\ell \otimes b_\ell \otimes c_\ell + \Delta \qquad (1)$$

where each $\sigma_\ell$ is a scalar and $a_\ell$, $b_\ell$ and $c_\ell$ are vectors of length $n_1$, $n_2$ and $n_3$ respectively. Here $\Delta$ is a tensor that represents noise. Its entries can be thought of as representing model misspecification (because $T$ is not exactly low rank), or noise in our observations, or both. We will only make assumptions about the average and maximum absolute value of the entries in $\Delta$. The vectors $a_\ell$, $b_\ell$ and $c_\ell$ are called factors, and we will assume that their norms are roughly $\sqrt{n_1}$, $\sqrt{n_2}$ and $\sqrt{n_3}$ respectively, for reasons that will become clear later. Moreover we will assume that the magnitude of each of their entries is bounded by $c$, in which case we call the vectors $c$-incoherent. (Incoherence is often defined based on the span of the factors, but we will allow the number of factors to be larger than any of the dimensions of the tensor, so we will need an alternative way to ensure that the non-zero entries of the factors are spread out. Note that a random vector of dimension $n$ and norm $\sqrt{n}$ will be $O(\sqrt{\log n})$-incoherent with high probability.) The advantage of these conventions is that a typical entry in $T$ does not become vanishingly small as we increase the dimensions of the tensor. This will make it easier to state and interpret the error bounds of our algorithm.
Let $\Omega$ represent the locations of the entries that we observe, which (as is standard) are chosen uniformly at random and without replacement. Set $m = |\Omega|$. Our goal is to output a hypothesis $X$ that has small entry-wise error, defined as

$$\mathrm{err}(X) = \frac{1}{n_1 n_2 n_3} \sum_{i,j,k} |X_{i,j,k} - T_{i,j,k}|.$$

This measures the error on both the observed and unobserved entries of $T$. Our goal is to give algorithms that achieve vanishing error as the size of the problem increases. Moreover we will want algorithms that need as few observations as possible. Here and throughout let $n = \max\{n_1, n_2, n_3\}$, and let $\Delta_{\mathrm{avg}}$ and $\Delta_{\mathrm{max}}$ denote the average and maximum absolute value of the entries of $\Delta$. Our main result is:
Theorem 1.1 (Main theorem).
Suppose we are given $m$ observations whose locations are chosen uniformly at random (and without replacement) from a tensor $T$ of the form (1), where each of the factors $a_\ell$, $b_\ell$ and $c_\ell$ is $c$-incoherent. Let $n = \max\{n_1, n_2, n_3\}$, and let $\Delta_{\mathrm{avg}}$ and $\Delta_{\mathrm{max}}$ denote the average and maximum absolute value of the entries of $\Delta$. Then there is a polynomial time algorithm that outputs a hypothesis $X$ that with high probability satisfies

$$\mathrm{err}(X) \le \mathrm{poly}(c, r) \cdot \sqrt{\frac{n^{3/2}\,\mathrm{polylog}(n)}{m}} + O(\Delta_{\mathrm{avg}}),$$

provided that $m$ is sufficiently large.
Since the error bound above is quite involved, let us dissect the terms in it. In fact, having an additive $\Delta_{\mathrm{avg}}$ term in the error bound is unavoidable. We have not assumed anything about $\Delta$ in (1) except a bound on the average and maximum magnitude of its entries. If $\Delta$ were a random tensor whose entries are $+1$ and $-1$, then no matter how many entries of $T$ we observe, we cannot hope to obtain error less than $\Delta_{\mathrm{avg}}$ on the unobserved entries. (The constant factor in front of $\Delta_{\mathrm{avg}}$ is not important; it comes from needing a bound on the empirical error of how well the low rank part of $T$ itself agrees with our observations so far, and it could be replaced with any other constant factor larger than one.) The crucial point is that the remaining term in the error bound becomes $o(1)$ when $m = \omega(n^{3/2} \cdot \mathrm{poly}(r) \cdot \mathrm{polylog}(n))$, which for polylogarithmic $r$ improves over the naïve algorithm for tensor completion by a polynomial factor in terms of the number of observations. Moreover our algorithm works without any constraints that the factors $a_\ell$, $b_\ell$ and $c_\ell$ be orthogonal or even have low inner-product.
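For concreteness, the entry-wise error measure and its empirical counterpart over the observed set can be computed as below. This is a direct transcription of the definitions, with names of our own choosing.

```python
import numpy as np

def err(X, T):
    """Average entry-wise error over ALL n1*n2*n3 entries,
    observed and unobserved alike."""
    return float(np.abs(X - T).mean())

def emp_err(X, T, omega):
    """Average entry-wise error over the m observed locations only."""
    return sum(abs(float(X[loc] - T[loc])) for loc in omega) / len(omega)
```

A trivial sanity check: if $X$ agrees with $T$ on the observed entries, the empirical error is zero while the full error can still be large. Controlling that gap is exactly the job of the Rademacher complexity machinery of Section 2.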
In non-degenerate cases we can even remove another factor of $r$ from the number of observations we need. Suppose that $T$ is a tensor as in (1), where the factors $a_\ell$, $b_\ell$ and $c_\ell$ are still fixed, but because of randomness in the coefficients $\sigma_\ell$, the entries of $T$ are now random variables.
Suppose we are given $m$ observations whose locations are chosen uniformly at random (and without replacement) from a tensor $T$ of the form (1), where each coefficient $\sigma_\ell$ is a Gaussian random variable with mean zero and variance one, and each of the factors $a_\ell$, $b_\ell$ and $c_\ell$ is $c$-incoherent.
Further, suppose that for a $1 - o(1)$ fraction of the entries of $T$ the variance is $\Omega(r)$, and that $\Delta$ is a tensor where each entry is a Gaussian with mean zero and variance $o(r)$. Then there is a polynomial time algorithm that outputs a hypothesis $X$ that satisfies

$$|X_{i,j,k} - T_{i,j,k}| \le o(\sqrt{r})$$

for a $1 - o(1)$ fraction of the entries. The algorithm succeeds with probability at least $1 - o(1)$ over the randomness of the locations of the observations, the realizations of the random variables $\sigma_\ell$, and the entries of $\Delta$. Moreover the algorithm uses $m = n^{3/2} \cdot \mathrm{polylog}(n)$ observations.
In the setting above, it is enough that the coefficients $\sigma_\ell$ are random and that the non-zero entries in the factors are spread out to ensure that the typical entry in $T$ has variance about $r$. Consequently, the typical entry in $T$ has magnitude about $\sqrt{r}$. This fact combined with the error bounds in Theorem 1.1 immediately yields the above corollary. Remarkably, the guarantee is interesting even when $r \gg n$ (the so-called overcomplete case). In this setting, if we observe a subpolynomial fraction of the entries of $T$, we are able to recover almost all of the remaining entries almost entirely, even though there are no known algorithms for decomposing an overcomplete, third-order tensor even if we are given all of its entries, at least without imposing much stronger conditions, such as that the factors be nearly orthogonal.
We believe that this work is a natural first step in designing practically efficient algorithms for tensor completion. Our algorithms manage to leverage the structure across the slices of the tensor, instead of treating each slice as an independent matrix completion problem. Now that we know this is possible, a natural follow-up question is to get more efficient algorithms. Our algorithms are based on the sixth level of the sum-of-squares hierarchy and run in polynomial time, but are quite far from being practically efficient as stated. Recent work of Hopkins et al. shows how to speed up sum-of-squares and obtain nearly linear time algorithms for a number of problems where the only previously known algorithms ran in a prohibitively large-degree polynomial running time. Another approach would be to obtain similar guarantees for alternating minimization. Currently, the only known approaches require that the factors be orthonormal and only work in the undercomplete case. Finally, it would be interesting to get algorithms that recover a low rank tensor exactly when there is no noise.
1.2 Our approach
All of our algorithms are based on solving the following optimization problem:

$$\min \|X\| \quad \text{subject to} \quad \frac{1}{m} \sum_{(i,j,k) \in \Omega} |X_{i,j,k} - T_{i,j,k}| \le 2\Delta_{\mathrm{avg}} \qquad (2)$$

and outputting the minimizer $X$, where $\|\cdot\|$ is some norm that can be computed in polynomial time. It will be clear from the way we define the norm that the low rank part of $T$ will itself be a good candidate solution. But this is not necessarily the solution that the convex program finds. How do we know that whatever it finds not only has low entry-wise error on the observed entries of $T$, but also on the unobserved entries too?
This is a well-studied topic in statistical learning theory, and as is standard we can use the notion of Rademacher complexity as a tool to bound the error. The Rademacher complexity is a property of the norm we choose, and our main innovation is to use the sum-of-squares hierarchy to suggest a suitable norm. Our results are based on establishing a connection between noisy tensor completion and refuting random constraint satisfaction problems. Moreover, our analysis follows by embedding algorithms for refutation within the sum-of-squares hierarchy as a method to bound the Rademacher complexity.
A natural question to ask is: Are there other norms that have even better Rademacher complexity than the ones we use here, and that are still computable in polynomial time? It turns out that any such norm would immediately lead to much better algorithms for refuting random constraint satisfaction problems than we currently know. We have not yet introduced Rademacher complexity, so we state our lower bounds informally:
Theorem 1.3 (informal).
For any $\epsilon > 0$, if there is a polynomial time algorithm that achieves error $o(1)$ with $m = n^{3/2 - \epsilon}$ observations
through the framework of Rademacher complexity, then there is an efficient algorithm for refuting a random 3-SAT formula on $n$ variables with $n^{3/2 - \epsilon}$ clauses. Moreover the natural sum-of-squares relaxation requires at least $n^{\Omega(\epsilon)}$ levels in order to achieve the above error (again through the framework of Rademacher complexity).
These results follow directly from the works of Grigoriev, Schoenebeck and Feige. There are similar connections between our upper bounds and the work of Coja-Oghlan, Goerdt and Lanka, who give an algorithm for strongly refuting random 3-SAT. In Section 2 we explain some preliminary connections between these fields, at which point we will be in a better position to explain how we can borrow tools from one area to address open questions in another. We state this theorem more precisely in Corollary 2.13 and Corollary 5.6, which provide both conditional and unconditional lower bounds that match our upper bounds.
1.3 Computational vs. Sample Complexity Tradeoffs
It is interesting to compare the story of matrix completion and tensor completion. In matrix completion, we have the best of both worlds: There are efficient algorithms which work when the number of observations is close to the information theoretic minimum. In tensor completion, we gave algorithms that improve upon the number of observations needed by a polynomial factor, but still require a polynomial factor more observations than can be achieved if we ignore computational considerations. We believe that for many other linear inverse problems (e.g. sparse phase retrieval), there may well be gaps between what can be achieved information theoretically and what can be achieved with computationally efficient estimators. Moreover, proving lower bounds against the sum-of-squares hierarchy offers a new type of evidence that problems are hard, one that does not rely on reductions from other average-case hard problems, which seem (in general) to be brittle and difficult to execute while preserving the naturalness of the input distribution. In fact, even when there are such reductions, the sum-of-squares hierarchy offers a methodology to make sharper predictions for questions like: Is there a quasi-polynomial time algorithm for sparse PCA, or does it require exponential time?
In Section 2 we introduce Rademacher complexity, the tensor nuclear norm and strong refutation. We connect these concepts by showing that any norm that can be computed in polynomial time and has good Rademacher complexity yields an algorithm for strongly refuting random 3-SAT. In Section 3 we show how a particular algorithm for strong refutation can be embedded into the sum-of-squares hierarchy, which directly leads to a norm that can be computed in polynomial time and has good Rademacher complexity. In Section 4 we establish certain spectral bounds that we need, and prove our main upper bounds. In Section 5 we prove lower bounds on the Rademacher complexity of the sequence of norms arising from the sum-of-squares hierarchy by a direct reduction to lower bounds for refuting random 3-XOR. In Appendix A we give a reduction from noisy tensor completion on asymmetric tensors to symmetric tensors. This is what allows us to extend our analysis to arbitrary order tensors; the proofs are essentially identical to those in the third-order case but more notationally involved, so we omit them.
2 Noisy Tensor Completion and Refutation
Here we make the connection between noisy tensor completion and strong refutation explicit. Our first step is to formulate a problem that is a special case of both, and studying it will help us clarify how notions from one problem translate to the other.
2.1 The Distinguishing Problem
Here we introduce a problem that we call the distinguishing problem. We are given random observations from a tensor $T$ and promised that the underlying tensor fits into one of the two following categories. We want an algorithm that can tell which case the samples came from, and that succeeds using as few observations as possible. The two cases are:
Each observation is chosen uniformly at random (and without replacement) from an $n \times n \times n$ tensor $T$ where independently for each entry we set

$$T_{i,j,k} = \begin{cases} x_i x_j x_k & \text{with probability } 3/4 \\ \text{uniform on } \{+1, -1\} & \text{otherwise} \end{cases}$$

where $x$ is a vector whose entries are $\pm 1$.
Alternatively, each observation is chosen uniformly at random (and without replacement) from a tensor $T$ each of whose entries is independently set to either $+1$ or $-1$ with equal probability.
In the first case, the entries of the underlying tensor are predictable. It is possible to guess a $7/8$ fraction of them correctly (in expectation), once we have observed enough of its entries to be able to deduce $x$. And in the second case, the entries of $T$ are completely unpredictable: no matter how many entries we have observed, the remaining entries are still uniformly random, so we cannot predict any of the unobserved entries better than random guessing.
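The two sampling models are easy to simulate. The sketch below is ours: for simplicity it samples locations with replacement, and it hardwires the noise rate of the planted case to $1/4$; these constants are illustrative assumptions, not pinned down by the text.

```python
import random

def planted_samples(n, m, noise=0.25, seed=0):
    """Case 1: a hidden sign vector x determines each entry, except
    that with probability `noise` the entry is replaced by a fresh
    uniform sign."""
    rng = random.Random(seed)
    x = [rng.choice([-1, 1]) for _ in range(n)]
    samples = []
    for _ in range(m):
        i, j, k = rng.randrange(n), rng.randrange(n), rng.randrange(n)
        val = x[i] * x[j] * x[k]
        if rng.random() < noise:
            val = rng.choice([-1, 1])
        samples.append(((i, j, k), val))
    return x, samples

def random_samples(n, m, seed=0):
    """Case 2: every observed entry is an independent uniform sign."""
    rng = random.Random(seed)
    return [((rng.randrange(n), rng.randrange(n), rng.randrange(n)),
             rng.choice([-1, 1])) for _ in range(m)]
```

Given $x$, the planted samples agree with $x_i x_j x_k$ on about a $7/8$ fraction of locations, while no sign vector does much better than $1/2$ on the purely random samples; distinguishing the two from few samples is the whole difficulty.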
Now we will explain how the distinguishing problem can be equivalently reformulated in the language of refutation. We give a formal definition for strong refutation later (Definition 2.10), but for the time being we can think of it as the task of certifying, given an instance of a constraint satisfaction problem, that there is no assignment that satisfies many of the clauses. We will be interested in 3-XOR formulas, where there are $n$ variables $y_1, y_2, \ldots, y_n$ that are constrained to take on the values $+1$ or $-1$. Each clause takes the form

$$y_i \cdot y_j \cdot y_k = T_{i,j,k}$$

where the right hand side is either $+1$ or $-1$. The clause represents a parity constraint, but over the domain $\{+1, -1\}$ instead of over the usual domain $\mathbb{F}_2$. We have chosen the notation suggestively so that it hints at the mapping between the two views of the problem. Each observation maps to a clause and vice-versa. Thus an equivalent way to formulate the distinguishing problem is that we are given a 3-XOR formula which was generated in one of the following two ways:
Each clause in the formula is generated by choosing an ordered triple of variables $(y_i, y_j, y_k)$ uniformly at random (and without replacement), and we set $y_i \cdot y_j \cdot y_k = x_i x_j x_k$ with probability $3/4$ and equal to a uniformly random sign otherwise, where $x$ is a vector whose entries are $\pm 1$. Now $x$ represents a planted solution, and by design our sampling procedure guarantees that many of the clauses that are generated are consistent with it.
Alternatively, each clause in the formula is generated by choosing an ordered triple of variables $(y_i, y_j, y_k)$ uniformly at random (and without replacement), and we set $y_i \cdot y_j \cdot y_k = z$ where $z$ is a random variable that takes on the values $+1$ and $-1$ with equal probability.
In the first case, the 3-XOR formula has an assignment that satisfies a $7/8$ fraction of the clauses in expectation, namely $y = x$. In the second case, any fixed assignment satisfies at most half of the clauses in expectation. Moreover, if we are given $m = \omega(n)$ clauses, it is easy to see by applying the Chernoff bound and taking a union bound over all $2^n$ possible assignments that with high probability there is no assignment that satisfies more than a $\frac{1}{2} + o(1)$ fraction of the clauses.
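For small $n$ this separation can be verified by brute force. The sketch below is ours (with illustrative constants and sampling with replacement): it enumerates all $2^n$ assignments and compares the best satisfiable fraction in the planted and random models.

```python
import itertools
import random

def max_sat_fraction(n, clauses):
    """Best fraction of 3-XOR clauses satisfiable by any of the
    2^n assignments (brute force; tiny n only)."""
    best = 0.0
    for bits in itertools.product([-1, 1], repeat=n):
        sat = sum(1 for (i, j, k), z in clauses
                  if bits[i] * bits[j] * bits[k] == z)
        best = max(best, sat / len(clauses))
    return best

def make_clauses(n, m, x=None, noise=0.25, seed=0):
    """Planted clauses if a sign vector x is given, purely random
    clauses otherwise."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(m):
        i, j, k = rng.randrange(n), rng.randrange(n), rng.randrange(n)
        if x is not None and rng.random() >= noise:
            z = x[i] * x[j] * x[k]   # consistent with the planted x
        else:
            z = rng.choice([-1, 1])  # fresh uniform sign
        clauses.append(((i, j, k), z))
    return clauses
```

On planted formulas the best fraction hovers around $7/8$ (witnessed by $x$ itself), while on random formulas it stays at $\frac{1}{2} + o(1)$, exactly the gap the refutation algorithms must certify.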
This will be the starting point for the connections we establish between noisy tensor completion and refutation. Even in the matrix case these connections seem to have gone unnoticed: the same spectral bounds that are used to analyze the Rademacher complexity of the nuclear norm are also used to refute random 2-SAT formulas, and this is no accident.
2.2 Rademacher Complexity
Ultimately our goal is to show that the hypothesis $X$ that our convex program finds is entry-wise close to the unknown tensor $T$. By virtue of the fact that $X$ is a feasible solution to (2), we know that it is entry-wise close to $T$ on the observed entries. This is often called the empirical error:

For a hypothesis $X$, the empirical error is

$$\mathrm{emp\mbox{-}err}(X) = \frac{1}{m} \sum_{(i,j,k) \in \Omega} |X_{i,j,k} - T_{i,j,k}|.$$

Recall that $\mathrm{err}(X)$ is the average entry-wise error between $X$ and $T$, over all (observed and unobserved) entries. Also recall that among the candidate $X$'s that have low empirical error, the convex program finds the one that minimizes $\|X\|$ for some polynomial time computable norm. The way we will choose the norm, together with our bound on the maximum magnitude of an entry of $T$, will guarantee that the low rank part of $T$ will with high probability be a feasible solution. This ensures that $\|X\|$ for the $X$ we find is not too large either. One way to bound $\mathrm{err}(X)$ is to show that no hypothesis in the unit norm ball can have too large a gap between its error and its empirical error (and then dilate the unit norm ball so that it contains $X$). With this in mind, we define:
For a norm $\|\cdot\|$ and a set $\Omega$ of observations, the generalization error is

$$\sup_{\|X\| \le 1} \big| \mathrm{err}(X) - \mathrm{emp\mbox{-}err}(X) \big|.$$
It turns out that one can bound the generalization error via the Rademacher complexity.
Let $\Omega$ be a set of $m$ locations chosen uniformly at random (and without replacement) from $[n_1] \times [n_2] \times [n_3]$. And let $\sigma_1, \sigma_2, \ldots, \sigma_m$ be independent random variables, each equally likely to be $+1$ or $-1$. The Rademacher complexity of (the unit ball of) the norm $\|\cdot\|$ is defined as

$$R_m = \mathop{\mathbb{E}}_{\Omega, \sigma} \left[ \sup_{\|X\| \le 1} \frac{1}{m} \Big| \sum_{a=1}^{m} \sigma_a X_{i_a, j_a, k_a} \Big| \right]$$

where $(i_a, j_a, k_a)$ denotes the $a$-th location in $\Omega$.

Let $\|\cdot\|$ be a norm and suppose each $X$ with $\|X\| \le 1$ has bounded loss, i.e. $|X_{i,j,k} - T_{i,j,k}| \le a$ for every entry, and that the $m$ locations are chosen uniformly at random and without replacement. Then with probability at least $1 - \delta$, for every $X$ with $\|X\| \le 1$, we have

$$\mathrm{err}(X) \le \mathrm{emp\mbox{-}err}(X) + 2 R_m + O\!\left(a \sqrt{\frac{\log(1/\delta)}{m}}\right).$$
We repeat the proof here, following the standard argument, for the sake of completeness; readers familiar with Rademacher complexity can feel free to skip ahead to Definition 2.5. The main idea is to let $\Omega'$ be an independent set of $m$ samples from the same distribution, again drawn without replacement. The expected generalization error is:
Then we can write
where the last line follows by Jensen's inequality. Now we can use the Rademacher (random sign) variables $\sigma_a$ and rewrite the right hand side of the above expression as follows:
where the second, fourth and fifth inequalities use the triangle inequality. The equality uses the fact that the $\sigma_a$'s are random signs and hence can absorb the absolute value around the terms that they multiply. The second term in the last expression is exactly the Rademacher complexity that we defined earlier. This argument only shows that the Rademacher complexity bounds the expected generalization error; however, it turns out that we can also use the Rademacher complexity to bound the generalization error with high probability by applying McDiarmid's inequality. We also remark that generalization bounds are often stated in the setting where samples are drawn i.i.d., but here the locations of our observations are sampled without replacement. Nevertheless, for the settings of $m$ we are interested in, the fraction of our observations that would be repeats under i.i.d. sampling is $o(1)$ (in fact it is subpolynomial), and we can move back and forth between both sampling models at negligible loss in our bounds.
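In the matrix case the supremum in the definition of Rademacher complexity can be evaluated exactly, because the spectral norm is dual to the (matrix) nuclear norm: the supremum of $\langle S, X \rangle$ over the unit nuclear-norm ball equals the largest singular value of $S$. The Monte Carlo sketch below is our own illustration (sizes and constants are arbitrary, and locations are sampled with replacement for simplicity); it uses this duality to watch the complexity decay as the number of observations $m$ grows.

```python
import numpy as np

def rademacher_nuclear(n, m, trials=40, seed=0):
    """Estimate E sup_{||X||_* <= 1} (1/m) <S, X>, where S places
    random signs at m random locations of an n x n matrix.  By
    duality the supremum equals the spectral norm of S, so each
    trial just computes ||S||_op / m."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(trials):
        S = np.zeros((n, n))
        for _ in range(m):
            # repeated locations simply accumulate their signs
            S[rng.integers(n), rng.integers(n)] += rng.choice([-1.0, 1.0])
        vals.append(np.linalg.norm(S, 2) / m)
    return float(np.mean(vals))
```

A direct check of the duality: for any $S$, the rank-one matrix $u v^{\top}$ built from the top singular pair has nuclear norm one and attains $\langle S, u v^{\top} \rangle = \sigma_{\max}(S)$.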
In much of what follows it will be convenient to think of $\Omega$ and the Rademacher variables $\sigma_1, \ldots, \sigma_m$ as being represented by a sparse tensor $S$, defined below.

Let $S$ be an $n_1 \times n_2 \times n_3$ tensor whose entry at the $a$-th location $(i_a, j_a, k_a)$ of $\Omega$ is $\sigma_a$, and whose remaining entries are zero.

This definition greatly simplifies our notation. In particular we have

$$\frac{1}{m} \sum_{a=1}^{m} \sigma_a X_{i_a, j_a, k_a} = \frac{1}{m} \langle S, X \rangle$$

where we have introduced the notation $\langle \cdot, \cdot \rangle$ to denote the natural inner-product between tensors. Our main technical goal in this paper will be to analyze the Rademacher complexity of a sequence of successively tighter norms that we get from the sum-of-squares hierarchy, and to derive implications for noisy tensor completion and for refutation from these bounds.
2.3 The Tensor Nuclear Norm
Here we introduce the tensor nuclear norm and analyze its Rademacher complexity. Many works have suggested using it to solve tensor completion problems [56, 70, 74]. This suggestion is quite natural given that it is based on the same guiding principle as that which led to $\ell_1$-minimization in compressed sensing and the nuclear norm in matrix completion. More generally, one can define the atomic norm for a wide range of linear inverse problems, and the $\ell_1$-norm, the nuclear norm and the tensor nuclear norm are all special cases of this paradigm. Before we proceed, let us first formally define the notion of incoherence that we gave in the introduction.
A length $n$ vector $v$ is $c$-incoherent if $\|v\|_2 = \sqrt{n}$ and $\|v\|_\infty \le c$.
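As a quick illustration of this convention (ours, not the paper's): the smallest $c$ for which a vector is $c$-incoherent is just its largest entry magnitude after rescaling the vector to norm $\sqrt{n}$. A sign vector achieves the best possible $c = 1$, while a Gaussian vector is $O(\sqrt{\log n})$-incoherent with high probability.

```python
import math

def incoherence(v):
    """Smallest c such that v, rescaled to have norm sqrt(n),
    is c-incoherent (i.e. the largest rescaled entry magnitude)."""
    n = len(v)
    norm = math.sqrt(sum(x * x for x in v))
    scale = math.sqrt(n) / norm
    return max(abs(x) * scale for x in v)
```

A sparse vector, by contrast, has incoherence close to $\sqrt{n}$, which is why the incoherence constraint rules out the sparse directions that make completion impossible.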
Recall that we chose to work with vectors whose typical entry is a constant, so that the entries in $T$ do not become vanishingly small as the dimensions of the tensor increase. We can now define the tensor nuclear norm. (The usual definition of the tensor nuclear norm has no constraint that the vectors $a$, $b$ and $c$ be $c$-incoherent. However, adding this additional requirement only serves to further restrict the unit norm ball, while ensuring that the low rank part of $T$ (when scaled down) is still in it, since the factors of $T$ are anyways assumed to be $c$-incoherent. This makes it easier to prove recovery guarantees, because we do not need to worry about sparse vectors behaving very differently than incoherent ones; and since we are not going to compute this norm anyways, this modification will make our analysis easier.)
Definition 2.7 (tensor nuclear norm).
Let $\mathcal{A}$ be defined as the convex hull of the set

$$\left\{ \pm\, a \otimes b \otimes c \;:\; a, b \text{ and } c \text{ are } c\text{-incoherent} \right\}.$$

The tensor nuclear norm of $X$, which is denoted by $\|X\|_{\mathcal{A}}$, is the infimum over $\alpha > 0$ such that $X / \alpha \in \mathcal{A}$.

In particular, the low rank part of $T$ has tensor nuclear norm at most $\sum_{\ell} |\sigma_\ell|$. Finally we give an elementary bound on the Rademacher complexity of the tensor nuclear norm. Recall that $m = |\Omega|$.
Recall the definition of the sparse tensor $S$ given in Definition 2.5. With this we can write
We can now adapt the discretization approach used in the matrix case, although our task is considerably simpler because we are constrained to $c$-incoherent vectors. In particular, let
By standard bounds on the size of an -net , we get that . Suppose that for all . Then for an arbitrary, but -incoherent we can expand it as where each and similarly for and . And now
Moreover, since each entry in $a \otimes b \otimes c$ has magnitude at most $c^3$, we can apply a Chernoff bound to conclude that for any particular triple in the net we have
with probability at least . Finally, if we set and we set we get that
and this completes the proof. ∎
The important point is that the Rademacher complexity of the tensor nuclear norm is $o(1)$ whenever $m = \omega(n \log n)$. In the next subsection we will connect this to refutation in a way that allows us to strengthen known hardness results for computing the tensor nuclear norm [39, 41], and show that it is even hard to compute in an average-case sense based on some standard conjectures about the difficulty of refuting random 3-SAT.
2.4 From Rademacher Complexity to Refutation
Here we show the first implication of the connection we have established: any norm that can be computed in polynomial time and has good Rademacher complexity immediately yields an algorithm for strongly refuting random 3-SAT and 3-XOR formulas. First, let us formally define strong refutation.
For a formula $\phi$, let $\mathrm{opt}(\phi)$ be the largest fraction of clauses that can be satisfied by any assignment.
In what follows, we will use the term random 3-XOR formula to refer to a formula where each clause is generated by choosing an ordered triple of variables $(y_i, y_j, y_k)$ uniformly at random (and without replacement) and setting $y_i \cdot y_j \cdot y_k = z$, where $z$ is a random variable that takes on the values $+1$ and $-1$ with equal probability.
An algorithm for strongly refuting random 3-XOR takes as input a 3-XOR formula $\phi$ and outputs a quantity $\mathrm{alg}(\phi)$ that satisfies:

For any 3-XOR formula $\phi$, $\mathrm{opt}(\phi) \le \mathrm{alg}(\phi)$.

If $\phi$ is a random 3-XOR formula with $m$ clauses, then with high probability $\mathrm{alg}(\phi) = \frac{1}{2} + o(1)$.

This definition only makes sense when $m$ is large enough so that $\mathrm{opt}(\phi) = \frac{1}{2} + o(1)$ holds with high probability, which happens when $m = \omega(n)$. The goal is to design algorithms that use as few clauses as possible, and are able to certify that a random formula is indeed far from satisfiable (without underestimating the fraction of clauses that can be satisfied), and to do so as close as possible to the information theoretic threshold.
Now let us use a polynomial time computable norm $\|\cdot\|$ that has good Rademacher complexity to give an algorithm for strongly refuting random 3-XOR. As in Section 2.1, given a formula $\phi$ we map its clauses to a collection of observations according to the usual rule: If there are $n$ variables, we construct an $n \times n \times n$ tensor $T$ where for each clause of the form $y_i \cdot y_j \cdot y_k = z$ we put the entry $z$ at location $(i, j, k)$. All the rest of the entries in $T$ are set to zero. We solve the following optimization problem:

$$\max_{\|X\| \le 1} \langle T, X \rangle \qquad (3)$$

Let $\mathrm{val}$ be the optimum value. We set $\mathrm{alg}(\phi) = \frac{1}{2} + \frac{\mathrm{val}}{2m}$. What remains is to prove that the output of this algorithm solves the strong refutation problem for 3-XOR.
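The clause-to-tensor mapping is mechanical; here is a sketch with our own naming, using a dense array purely for clarity (in practice one would keep $T$ sparse).

```python
import numpy as np

def formula_to_tensor(n, clauses):
    """Place each 3-XOR clause  y_i * y_j * y_k = z  as the entry z
    at location (i, j, k); all other entries stay zero."""
    T = np.zeros((n, n, n))
    for (i, j, k), z in clauses:
        T[i, j, k] = z
    return T
```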
Suppose that $\|\cdot\|$ is computable in polynomial time and satisfies $\|x \otimes x \otimes x\| \le 1$ whenever $x$ is a vector with $\pm 1$ entries. Further suppose that for any $X$ with $\|X\| \le 1$, its entries are bounded by one in absolute value. Then (3) can be solved in polynomial time, and if the Rademacher complexity of $\|\cdot\|$ is $o(1)$ for $m$ observations, then setting $\mathrm{alg}(\phi) = \frac{1}{2} + \frac{\mathrm{val}}{2m}$ solves strong refutation for 3-XOR with $m$ clauses.
The key observation is the following inequality, which relates (3) to $\mathrm{opt}(\phi)$:

$$\mathrm{val} \ge (2 \cdot \mathrm{opt}(\phi) - 1) \cdot m.$$

To establish this inequality, let $x$ be the assignment that maximizes the fraction of satisfied clauses. If we set $X = x \otimes x \otimes x$, we have that $\|X\| \le 1$ by assumption. Thus $X$ is a feasible solution. Now, with this choice of $X$ for the right hand side, every term in the sum that corresponds to a satisfied clause contributes $+1$ and every term that corresponds to an unsatisfied clause contributes $-1$. We get $\langle T, X \rangle = (2 \cdot \mathrm{opt}(\phi) - 1) \cdot m$ for this choice of $X$, and this completes the proof of the inequality above.
The crucial point is that the expectation of the right hand side over $\Omega$ and the random signs is exactly (a multiple of) the Rademacher complexity. However, we want a bound that holds with high probability instead of just in expectation. It follows from McDiarmid's inequality, and the fact that the entries of $X$ and of $T$ are bounded by one in absolute value, that for a random formula with $m$ clauses, $\mathrm{val}$ will be $o(m)$ with high probability. Rearranging the inequality above, we have

$$\mathrm{opt}(\phi) \le \frac{1}{2} + \frac{\mathrm{val}}{2m}.$$

The right hand side is exactly $\mathrm{alg}(\phi)$, and it is $\frac{1}{2} + o(1)$ with high probability, which implies that both conditions in the definition of strong refutation hold, and this completes the proof. ∎
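The accounting in the proof (each satisfied clause contributes $+1$ to $\langle T, x \otimes x \otimes x \rangle$ and each unsatisfied clause contributes $-1$) is easy to confirm numerically; the snippet below is our own check with a toy formula.

```python
import numpy as np

def inner_with_cube(T, x):
    """<T, x (x) x (x) x> = sum_{i,j,k} T[i,j,k] * x[i] * x[j] * x[k]."""
    return float(np.einsum('ijk,i,j,k->', T, x, x, x))
```

For any assignment $x$ and clause tensor $T$, this inner product equals (number of satisfied clauses) minus (number of unsatisfied clauses), which is $(2 \cdot \mathrm{opt}(\phi) - 1) \cdot m$ when $x$ is the optimal assignment.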
We can now combine Theorem 2.11 with the bound on the Rademacher complexity of the tensor nuclear norm given in Lemma 2.8 to conclude that if we could compute the tensor nuclear norm, we would also obtain an algorithm for strongly refuting random 3-XOR with only $\widetilde{O}(n)$ clauses. It is not obvious, but it turns out that any algorithm for strongly refuting random 3-XOR implies one for 3-SAT. Let us define strong refutation for 3-SAT. We will refer to any variable or its negation as a literal, and we will use the term random 3-SAT formula to refer to a formula where each clause is generated by choosing an ordered triple of literals uniformly at random (and without replacement) and taking their disjunction.
An algorithm for strongly refuting random 3-SAT takes as input a 3-SAT formula $\phi$ and outputs a quantity $\mathrm{alg}(\phi)$ that satisfies:

For any 3-SAT formula $\phi$, $\mathrm{opt}(\phi) \le \mathrm{alg}(\phi)$.

If $\phi$ is a random 3-SAT formula with $m$ clauses, then with high probability $\mathrm{alg}(\phi) = \frac{7}{8} + o(1)$.
The only change from Definition 2.10 comes from the fact that for 3-SAT a random assignment satisfies a $7/8$ fraction of the clauses in expectation. Our goal here is to certify that the largest fraction of clauses that can be satisfied is $\frac{7}{8} + o(1)$. The connection between refuting random 3-XOR and 3-SAT is often called "Feige's XOR Trick". The first version of it was used to show that an algorithm for weakly refuting random 3-XOR can be turned into an algorithm for weakly refuting random 3-SAT. However, we will not use this notion of refutation, so for further details we refer the reader to Feige's paper. The reduction was extended later by Coja-Oghlan, Goerdt and Lanka to strong refutation, which for us yields the following corollary:
Suppose that $\|\cdot\|$ is computable in polynomial time and satisfies $\|x \otimes x \otimes x\| \le 1$ whenever $x$ is a vector with $\pm 1$ entries. Suppose further that for any $X$ with $\|X\| \le 1$ its entries are bounded by one in absolute value, and that the Rademacher complexity of $\|\cdot\|$ is $o(1)$ for $m$ observations. Then there is a polynomial time algorithm for strongly refuting a random 3-SAT formula with $O(m)$ clauses.
Now we can get a better understanding of the obstacles to noisy tensor completion by connecting it to the literature on refuting random 3-SAT. Despite a long line of work on refuting random 3-SAT [37, 32, 31, 30, 25], there is no known polynomial time algorithm that works with $n^{3/2 - \epsilon}$ clauses for any $\epsilon > 0$. Feige  conjectured that for any constant $C$, there is no polynomial time algorithm for refuting random 3-SAT with $Cn$ clauses (in Feige’s paper  there was no need to make the conjecture any stronger, because it was already strong enough for all of the applications to inapproximability). Daniely et al.  conjectured that there is no polynomial time algorithm for strongly refuting random 3-SAT with $n^{3/2 - \epsilon}$ clauses for any $\epsilon > 0$. What we have shown above is that any norm that is a relaxation of the tensor nuclear norm, is computable in polynomial time, and has Rademacher complexity $o(1)$ when the number of observations is $n^{3/2 - \epsilon}$ would disprove the conjecture of Daniely et al.  and would yield much better algorithms for refuting random 3-SAT than we currently know, despite fifteen years of work on the subject.
This leaves open an important question. While there are no known algorithms for strongly refuting random 3-SAT with $n^{3/2 - \epsilon}$ clauses, there are algorithms that work with roughly $n^{3/2}$ clauses . Do these algorithms have any implications for noisy tensor completion? We will adapt the algorithm of Coja-Oghlan, Goerdt and Lanka  and embed it within the sum-of-squares hierarchy. In turn, this will give us a norm that we can use to solve noisy tensor completion with a polynomial factor fewer observations than known algorithms.
3 Using Resolution to Bound the Rademacher Complexity
Here we introduce the sum-of-squares hierarchy and use it (at level six) to give a relaxation of the tensor nuclear norm. This will be the norm that we use in proving our main upper bounds. First we introduce the notion of a pseudo-expectation operator from [7, 8, 10]:
Definition 3.1 (Pseudo-expectation operator).
Let $d$ be even and consider the linear subspace of all polynomials of degree at most $d$ on $n$ variables. A linear operator $\widetilde{\mathbb{E}}$ mapping this subspace to $\mathbb{R}$ is called a degree $d$ pseudo-expectation operator if it satisfies the following conditions:
(1) $\widetilde{\mathbb{E}}[1] = 1$ (normalization)
(2) $\widetilde{\mathbb{E}}[p^2] \geq 0$, for any degree at most $d/2$ polynomial $p$ (nonnegativity)
Moreover suppose that $p$ is a polynomial with $\deg(p) = d' \leq d$. We say that $\widetilde{\mathbb{E}}$ satisfies the constraint $\{p = 0\}$ if $\widetilde{\mathbb{E}}[p q] = 0$ for every polynomial $q$ with $\deg(q) \leq d - d'$. And we say that $\widetilde{\mathbb{E}}$ satisfies the constraint $\{p \geq 0\}$ if $\widetilde{\mathbb{E}}[p q^2] \geq 0$ for every polynomial $q$ with $\deg(q) \leq (d - d')/2$.
The rationale behind this definition is that if $\mu$ is a distribution on vectors in $\mathbb{R}^n$ then the operator $\widetilde{\mathbb{E}}[p] = \mathbb{E}_{x \sim \mu}[p(x)]$ is a degree $d$ pseudo-expectation operator for every even $d$; i.e. it meets the conditions of Definition 3.1. However the converse is in general not true. We are now ready to define the norm that will be used in our upper bounds:
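As a sanity check on the direction that is true, the following sketch builds the degree-two moment matrix of an actual distribution (the finite-support distribution on $\mathbb{R}^2$ below is made up for illustration) and confirms the normalization and nonnegativity conditions of Definition 3.1, the latter via positive semidefiniteness of the moment matrix:

```python
import numpy as np

# A hypothetical finite-support distribution over vectors in R^2.
support = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])
weights = np.array([0.5, 0.3, 0.2])

# Monomial basis of degree <= 1: [1, x1, x2]. The true expectation of the
# outer product m(x) m(x)^T is the degree-2 moment matrix M.
def monomials(x):
    return np.array([1.0, x[0], x[1]])

M = sum(w * np.outer(monomials(x), monomials(x))
        for w, x in zip(weights, support))

# For a true expectation, M[0,0] = E[1] = 1 (normalization) and M is PSD,
# which encodes E[p^2] >= 0 for every p in the span of the basis.
eigs = np.linalg.eigvalsh(M)
print(round(M[0, 0], 6), bool(eigs.min() > -1e-9))
```

A pseudo-expectation operator is exactly an object whose moment matrix passes these same checks, without necessarily arising from any actual distribution.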
Definition 3.2 ( norm).
We let be the set of all such that there exists a degree pseudo-expectation operator on satisfying the following polynomial constraints (where the variables are the ’s)
, and for all and
for all and .
The norm of which is denoted by is the infimum over such that .
The constraints in Definition 3.1 can be expressed as a semidefinite program of size $n^{O(d)}$. This implies that given any set of polynomial constraints of the form $\{p = 0\}$ or $\{p \geq 0\}$, one can efficiently find a degree $d$ pseudo-distribution satisfying those constraints if one exists. This is often called the degree $d$ Sum-of-Squares algorithm [69, 62, 53, 63]. Hence we can compute the norm of any tensor to within arbitrary accuracy in polynomial time. And because it is a relaxation of the tensor nuclear norm (which is defined analogously, but over an actual distribution on incoherent vectors instead of a pseudo-distribution over them), it is at most the tensor nuclear norm for every tensor. Throughout most of this paper, we will be interested in the case $d = 6$.
3.2 Resolution in the Sum-of-Squares Hierarchy
Recall that any polynomial time computable norm that has good Rademacher complexity with $m$ observations yields an algorithm for strong refutation with roughly $m$ clauses too. Here we will use an algorithm for strongly refuting random 3-SAT to guide our search for an appropriate norm. We will adapt an algorithm due to Coja-Oghlan, Goerdt and Lanka  that strongly refutes random 3-SAT, and will instead give an algorithm that strongly refutes random 3-XOR. Moreover each of the steps in the algorithm embeds into the sixth level of the sum-of-squares hierarchy by mapping resolution operations to applications of Cauchy-Schwarz, which ultimately shows how the inequalities that define the norm (Definition 3.2) can be manipulated to give bounds on its own Rademacher complexity.
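Concretely, the Cauchy-Schwarz manipulations referred to here rest on the nonnegativity condition of Definition 3.1. One standard instance (a sketch of the generic fact for pseudo-expectations, not necessarily the exact inequality invoked below) follows by expanding a square:

```latex
% For a degree d pseudo-expectation operator and polynomials p, q of
% degree at most d/2, nonnegativity on squares gives
\widetilde{\mathbb{E}}\left[(p - q)^2\right] \ge 0
\quad\Longrightarrow\quad
2\,\widetilde{\mathbb{E}}[p\,q] \;\le\; \widetilde{\mathbb{E}}[p^2] + \widetilde{\mathbb{E}}[q^2].
% Replacing q by -q gives the matching lower bound, so
% |\widetilde{\mathbb{E}}[pq]| is controlled by the squared terms.
```

Inequalities of this shape are what allow cross terms arising from resolution to be traded for squares that the pseudo-expectation is guaranteed to keep nonnegative.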
Let’s return to the task of bounding the Rademacher complexity of the norm. Let $T$ be arbitrary but with norm at most one. Then there is a degree six pseudo-expectation operator meeting the conditions of Definition 3.2. Using Cauchy-Schwarz we have:
To simplify our notation, we will define the following polynomial
which we will use repeatedly. If is even then any degree pseudo-expectation operator satisfies the constraint for every polynomial of degree at most (e.g., see Lemma in ). Hence the right hand side of (4) can be bounded as:
It turns out that bounding the right-hand side of (5) boils down to bounding the spectral norm of the following matrix.
Consider the matrix whose rows and columns are indexed over ordered pairs $(i, i')$ and $(j, j')$ respectively, defined as
We can now make the connection to resolution more explicit: We can think of a pair of observations as a pair of 3-XOR constraints, as usual. Resolving them (i.e. multiplying them) we obtain a 4-XOR constraint
This matrix captures the effect of resolving certain pairs of 3-XOR constraints into 4-XOR constraints. The challenge is that its entries are not independent, so bounding its maximum singular value will require some care. It is important that the rows are indexed by pairs $(i, i')$ and the columns by pairs $(j, j')$, so that $i$ and $i'$ come from different 3-XOR clauses, as do $j$ and $j'$; otherwise the spectral bounds that we will want to prove would simply not be true! This is perhaps the key insight in the algorithm of Coja-Oghlan, Goerdt and Lanka.
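The algebra behind resolving two 3-XOR constraints into a 4-XOR constraint is simply that the shared variable squares to one over $\pm 1$ values. A small exhaustive sketch (the variable indices are chosen arbitrarily, with the last variable playing the role of the shared one):

```python
from itertools import product

# Two 3-XOR constraints over {-1,+1} variables that share the variable x_4:
#     x_0 * x_1 * x_4 = b1    and    x_2 * x_3 * x_4 = b2.
# Multiplying their left-hand sides cancels x_4 (since x_4^2 = 1), leaving
# the left-hand side of the 4-XOR constraint x_0 * x_1 * x_2 * x_3 = b1 * b2.
def check(x):
    lhs1 = x[0] * x[1] * x[4]            # first 3-XOR left-hand side
    lhs2 = x[2] * x[3] * x[4]            # second 3-XOR left-hand side
    lhs4 = x[0] * x[1] * x[2] * x[3]     # resolved 4-XOR left-hand side
    return lhs1 * lhs2 == lhs4

# verify the identity for every +/-1 assignment to the five variables
ok = all(check(x) for x in product([-1, 1], repeat=5))
print(ok)  # True: the shared variable always cancels
```

So any assignment satisfying both 3-XOR constraints automatically satisfies the resolved 4-XOR constraint, which is what lets spectral bounds on the resolved constraints certify bounds for the original ones.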
It will be more convenient to decompose the matrix and reason about its two types of contributions separately. To that end, we first take the matrix whose non-zero entries are of the form
and all of its other entries are set to zero. Then we take the second matrix, whose entries are of the form
By construction, the matrix defined above is the sum of these two pieces. Finally:
The pseudo-expectation operator satisfies for all , and hence we have
Now let us introduce a vector of variables whose entries are indexed to match the rows of the matrix, and similarly one for the columns. Then we can rewrite the right hand side as a matrix inner product: