# Testing tensor products

A function f : [n]^d → F_2 is a direct sum if it is of the form f(a_1,…,a_d) = f_1(a_1) + ⋯ + f_d(a_d) for some d functions f_1,…,f_d : [n] → F_2. We present a 4-query test which distinguishes between direct sums and functions that are far from them. The test relies on the BLR linearity test and on the direct product test constructed by Dinur and Steurer. We also present a different test, which queries the function (d+1) times but is easier to analyze. In multiplicative ±1 notation, the above reads as follows. A d-dimensional tensor with ±1 entries is called a tensor product if it is a tensor product of d vectors with ±1 entries; in other words, if it is of rank 1. The presented tests check whether a given tensor is close to a tensor product.


## 1 Introduction

Given functions f_1,…,f_d : [n] → F_2, their direct sum is the function f : [n]^d → F_2 given by f(x_1,…,x_d) = f_1(x_1) + ⋯ + f_d(x_d), where addition is in the field F_2. We denote f = f_1⊕⋯⊕f_d. We study the testability question: given a function f : [n]^d → F_2, test if it is a direct sum, namely if it belongs to the set

 DirectSum_{n,d} = { f_1⊕⋯⊕f_d | f_i : [n] → F_2 }.

We suggest and analyze a four-query test which we call the “square in a cube” test, and show that it is a strong absolute local test for being a direct sum. By this we mean that neither the number of queries nor the testability constant depend on the parameters n and d. We also describe a simpler (d+1)-query test, whose easy analysis we defer to Section 4.

Our square in a cube test is as follows:

1. Choose a, b ∈ [n]^d uniformly at random.

2. Choose two subsets S, T ⊆ [d] uniformly at random and let S△T be their symmetric difference.

3. Accept iff

 f(a) + f(a_S b) + f(a_T b) + f(a_{S△T} b) = 0,

where a_S b is the string whose i-th coordinate equals b_i if i ∈ S and a_i otherwise.
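For concreteness, the test is easy to simulate. The sketch below (with illustrative parameters n and d, not taken from the paper) checks the completeness direction: every direct sum passes the test with probability 1.

```python
import random

n, d = 5, 4  # illustrative parameters

def direct_sum(fs):
    """f = f_1 + ... + f_d over F_2, built from lookup tables f_i : [n] -> {0,1}."""
    return lambda a: sum(fs[i][a[i]] for i in range(d)) % 2

def square_in_cube_rejects(f):
    """One run of the four-query test; returns True iff the test rejects."""
    a = tuple(random.randrange(n) for _ in range(d))
    b = tuple(random.randrange(n) for _ in range(d))
    S = {i for i in range(d) if random.random() < 0.5}
    T = {i for i in range(d) if random.random() < 0.5}
    def patch(U):  # a_U b: take b on coordinates in U, a elsewhere
        return tuple(b[i] if i in U else a[i] for i in range(d))
    return (f(a) + f(patch(S)) + f(patch(T)) + f(patch(S ^ T))) % 2 != 0

fs = [[random.randrange(2) for _ in range(n)] for _ in range(d)]
f = direct_sum(fs)
assert not any(square_in_cube_rejects(f) for _ in range(1000))
```

For a direct sum, each coordinate's contribution appears an even number of times among the four queried points, so the sum always vanishes.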

We prove

###### Theorem 1 (Main).

There exists an absolute constant c > 0 s.t. for all n and d, and given f : [n]^d → F_2,

 dist(f, DirectSum_{n,d}) ≤ c ⋅ Pr_{a,b,S,T}[ f(a) + f(a_S b) + f(a_T b) + f(a_{S△T} b) ≠ 0 ],

where a, b are chosen independently and uniformly from the domain of f, S and T are uniformly random subsets of [d], and dist refers to relative Hamming distance, namely dist(f,g) = Pr_x[ f(x) ≠ g(x) ].

###### Remark 2.

The above theorem is true in greater generality. Namely, the same proof can be adapted to the case of a function f : [n_1]×⋯×[n_d] → F_2, where the corresponding subspace of direct sums is

 DirectSum_{n_1,…,n_d} = { f_1⊕⋯⊕f_d | f_i : [n_i] → F_2 }.
##### Testing if a tensor has rank 1.

An equivalent way to formulate our question is as a test for whether a d-dimensional tensor with ±1 entries has rank 1. Indeed, moving to multiplicative notation and writing h = (−1)^f and h_i = (−1)^{f_i}, we are asking whether there are h_1,…,h_d : [n] → {−1,1} such that

 h=h1⊗⋯⊗hd.

Denoting

 TensorProduct_{n,d} = { h_1⊗⋯⊗h_d | h_i : [n] → {−1,1} }

we have

###### Corollary 3.

There exists an absolute constant c > 0 s.t. for all n and d, and for every h : [n]^d → {−1,1},

 dist(h, TensorProduct_{n,d}) ≤ c ⋅ Pr_{a,b,S,T}[ h(a) ⋅ h(a_S b) ⋅ h(a_T b) ⋅ h(a_{S△T} b) ≠ 1 ].
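The multiplicative picture is equally easy to simulate. For d = 2 a tensor product is a rank-1 ±1 matrix, so every 2×2 minor vanishes and the product of the four queried entries always equals 1; the sketch below (illustrative parameter n) checks both facts.

```python
import random

n = 6
u = [random.choice([-1, 1]) for _ in range(n)]
v = [random.choice([-1, 1]) for _ in range(n)]
h = [[u[i] * v[j] for j in range(n)] for i in range(n)]  # a tensor product: rank 1

# Rank 1 means every 2x2 minor vanishes.
assert all(h[i][j] * h[k][l] == h[i][l] * h[k][j]
           for i in range(n) for k in range(n)
           for j in range(n) for l in range(n))

subsets = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def at(p):
    return h[p[0]][p[1]]

def patch(a, b, U):  # a_U b for d = 2
    return tuple(b[i] if i in U else a[i] for i in range(2))

for _ in range(500):
    a = (random.randrange(n), random.randrange(n))
    b = (random.randrange(n), random.randrange(n))
    S, T = random.choice(subsets), random.choice(subsets)
    assert at(a) * at(patch(a, b, S)) * at(patch(a, b, T)) * at(patch(a, b, S ^ T)) == 1
```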
##### Background.

Direct sum is a natural construction that is often used in complexity for hardness amplification. It is related to the direct product construction: a function g : [n]^d → F_2^d is the direct product of f_1,…,f_d as above if g(x_1,…,x_d) = (f_1(x_1),…,f_d(x_d)) for all x. The testability of direct products has received attention [GS97, DR06, DG08, IKW12, DS14] as an abstraction of certain PCP tests, and it was not surprising to find [DDG17] that there is a connection between testing direct products and testing direct sums. However, somewhat unsatisfyingly, this connection was confined to testing a certain type of symmetric direct sum. A symmetric direct sum is a direct sum with all components equal; namely, a function f such that there is a single f_0 : [n] → F_2 with f = f_0⊕⋯⊕f_0.

In [DDG17] a test was shown for testing if a given f is a symmetric direct sum, and the analysis was carried out relying on the direct product test. It was left as an open question to devise and analyze a test for the property of being a (not necessarily symmetric) direct sum.

##### Method.

Our proof, similarly to [DDG17], relies on a combination of the BLR linearity testing theorem [BLR93] and the direct product test of [DS14]. The trick is to find the right combination. We first observe that once we fix a and b, the test is confined to a set of at most 2^d points in the domain, and can be viewed as performing a BLR (affinity rather than linearity) test on this piece of the domain. From the BLR theorem we deduce an affine linear function on this piece. The next step is to combine the different affine linear functions, one from each piece, into one global direct sum, and this is done by reducing to direct product testing.

## 2 Tensor Product

We refer to a function g : [n]^d → F_2 as a d-dimensional binary tensor.

###### Definition 4.

A d-dimensional binary tensor g is a tensor product if there exist one-dimensional tensors, i.e., vectors, g_1,…,g_d : [n] → F_2 such that g = g_1⊕⋯⊕g_d.

###### Definition 5.

A d-dimensional binary tensor g is ε-close to a tensor product if there exists a tensor product g′ such that

 Pr_{x∼[n]^d}( g(x) = g′(x) ) ≥ 1 − ε,

where x is chosen uniformly at random.

In the next two sections we present two different approaches for testing whether a d-dimensional binary tensor is close to a tensor product.

## 3 Square in a Cube Test

We start by introducing some notation.

Given two vectors a, b ∈ [n]^d, define

• the difference set Δ(a,b) = { i ∈ [d] : a(i) ≠ b(i) };

• the induced subcube C_{a,b}, which is the binary cube F_2^{Δ(a,b)};

• the projection map ρ_{a,b} : C_{a,b} → [n]^d, defined for x ∈ C_{a,b} as

 ρ_{a,b}(x)(i) = a(i) = b(i), if i ∉ Δ(a,b);  b(i), if i ∈ Δ(a,b) and x(i) = 1;  a(i), if i ∈ Δ(a,b) and x(i) = 0;

• the function f_{a,b} : C_{a,b} → F_2 as f_{a,b}(x) = f(ρ_{a,b}(x)).

The following test is the same as the one preceding the formulation of Theorem 1.

###### Test 6.

Square in a Cube test. Given query access to a function f : [n]^d → F_2:

• Choose a, b ∈ [n]^d uniformly at random.

• Choose x, y ∈ C_{a,b} uniformly at random.

• Query f at ρ_{a,b}(0), ρ_{a,b}(x), ρ_{a,b}(y) and ρ_{a,b}(x+y).

• Accept iff f_{a,b}(0) + f_{a,b}(x) + f_{a,b}(y) + f_{a,b}(x+y) = 0.

###### Theorem 7.

Suppose a function f : [n]^d → F_2 passes Test 6 with probability 1 − ε; then f is O(ε)-close to a tensor product.

### 3.1 The BLR affinity test

The Blum–Luby–Rubinfeld linearity test was introduced in [BLR93], where its remarkable properties were proven. Later, a simpler proof via Fourier analysis was presented; see, e.g., [BCH95]. Below we give a variation of this test for affine functions; see [O’D14, Chapter 1].

###### Definition 8.

A function g : F_2^d → F_2 is called affine if there exists a set S ⊆ [d] and a constant c ∈ F_2 such that for every vector x ∈ F_2^d,

 g(x) = c + ∑_{i∈S} x_i.
###### Definition 9.

A function g : F_2^d → F_2 is said to be ε-close to being affine if there exists an affine function g′ such that

 Pr_{x∼F_2^d}( g(x) = g′(x) ) ≥ 1 − ε,

where x is chosen uniformly at random.

Note that (see [O’D14, Exercise 1.26]) a function g is affine iff for any two vectors x, y ∈ F_2^d it satisfies

 g(0) + g(x) + g(y) + g(x+y) = 0. (1)

The BLR test implies that if a function satisfies (1) with high probability, then it is close to an affine function.

###### Test 10.

• Choose x, y ∈ F_2^d independently and uniformly at random.

• Query g at 0, x, y and x+y.

• Accept iff g(0) + g(x) + g(y) + g(x+y) = 0.

###### Theorem 11 ([BLR93]).

Suppose g : F_2^d → F_2 passes the affinity test with probability 1 − ε. In other words, g satisfies

 Pr_{x,y∼F_2^d}( g(0) + g(x) + g(y) + g(x+y) = 0 ) = 1 − ε.

Then g is ε-close to being affine. ∎
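The completeness direction of the affinity test is easy to verify by simulation; the sketch below (illustrative dimension d) builds an affine function from a set S and a constant c and checks that the test never rejects it.

```python
import random

d = 6  # illustrative dimension

def affine(S, c):
    """g(x) = c + sum_{i in S} x_i over F_2."""
    return lambda x: (c + sum(x[i] for i in S)) % 2

def affinity_test_rejects(g):
    """One run of the four-query affinity test; returns True iff it rejects."""
    x = tuple(random.randrange(2) for _ in range(d))
    y = tuple(random.randrange(2) for _ in range(d))
    xpy = tuple((xi + yi) % 2 for xi, yi in zip(x, y))
    zero = (0,) * d
    return (g(zero) + g(x) + g(y) + g(xpy)) % 2 != 0

g = affine({0, 2, 5}, 1)
assert not any(affinity_test_rejects(g) for _ in range(1000))
```

Indeed, for g(x) = c + χ_S(x) the four constants cancel and χ_S(x) + χ_S(y) + χ_S(x+y) = 0 by linearity.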

### 3.2 Direct Product Test

###### Definition 12.

A function g : [N]^k → Σ^k, where Σ is a finite alphabet, is called a direct product if it is of the form g(x_1,…,x_k) = (g_1(x_1),…,g_k(x_k)) for some functions g_1,…,g_k : [N] → Σ. Given functions g_1,…,g_k : [N] → Σ, their direct product is the function denoted g_1×⋯×g_k and defined as (g_1×⋯×g_k)(x_1,…,x_k) = (g_1(x_1),…,g_k(x_k)).

In [DS14], Dinur and Steurer presented a 2-query test which distinguishes, with constant probability, between direct products and functions that are far from direct products.

###### Test 13.

– Two-query test with intersection αk. Given query access to a function g : [N]^k → Σ^k:

• Choose a set A ⊆ [k] of size αk uniformly at random.

• Choose x, y ∈ [N]^k uniformly at random, conditioned on x_A = y_A.

• Query g at x and y.

• Accept iff g(x)_A = g(y)_A.

###### Theorem 14.

[DS14, Theorem 1.1] Let N and k be positive integers with k even, let α = 1/2, and let ε > 0. Let g : [N]^k → Σ^k be given such that

 Pr_{A,x,y}( g(x)_A = g(y)_A ) ≥ 1 − ε,

where A, x, y are chosen w.r.t. the test distribution T(αk). Then there exists a direct product function f such that E_x[ dist(g(x), f(x)) ] = O(ε), where dist stands for the normalized Hamming distance. In particular, this implies

 Pr_x( f(x) = g(x) ) ≥ 1 − O(ε), as k→∞ and N→∞.
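The completeness of the two-query test can likewise be checked by simulation. In the sketch below (illustrative parameters N, k and α = 1/2, and alphabet Σ = {0,1}), a direct product built from lookup tables always passes:

```python
import random

N, k, alpha = 10, 8, 0.5  # illustrative parameters

def direct_product(gs):
    """g = g_1 x ... x g_k built from lookup tables g_i : [N] -> {0,1}."""
    return lambda x: tuple(gs[i][x[i]] for i in range(k))

def two_query_rejects(g):
    """One run of the test with intersection alpha*k; returns True iff it rejects."""
    A = set(random.sample(range(k), int(alpha * k)))
    x = tuple(random.randrange(N) for _ in range(k))
    # y agrees with x on A and is uniform elsewhere, i.e. conditioned on x_A = y_A.
    y = tuple(x[i] if i in A else random.randrange(N) for i in range(k))
    gx, gy = g(x), g(y)
    return any(gx[i] != gy[i] for i in A)

gs = [[random.randrange(2) for _ in range(N)] for _ in range(k)]
g = direct_product(gs)
assert not any(two_query_rejects(g) for _ in range(1000))
```

A direct product passes with probability 1 because each output coordinate depends only on the corresponding input coordinate, and x and y agree on A.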
###### Remark 15.

Note that Theorem 14 is true for all α ∈ (0,1), and not just α = 1/2. More precisely, the following statement holds:

If a function g passes Test 13 with probability at least 1 − ε for T(αk) with α > 1/2, then g passes Test 13 with probability at least 1 − rε for T(α′k), where α′ = 1 − r(1−α) and r is a positive integer.

This reduction shows that Theorem 14 is true as stated for all α, as the reduction affects only the constant in the O(⋅) notation.

For a more detailed explanation, see Appendix (Section 6).

### 3.3 Proof of Theorem 7

For a positive integer D, we denote by μ_{2/3}(F_2^D) the distribution on F_2^D in which each coordinate independently equals 1 with probability 2/3 and 0 with probability 1/3.

We use the following proposition in the course of the proof.

###### Proposition 16.

Let S ⊆ [D] be a set and χ_S : F_2^D → F_2 be the corresponding linear function, i.e., χ_S(x) = ∑_{i∈S} x_i. Suppose

 Pr_{x∼μ_{2/3}(F_2^D)}( χ_S(x) = 0 ) > 2/3;

then S = ∅.

###### Proof.

Consider the function (−1)^{χ_S(x)}. Then

 Pr_{x∼μ_{2/3}(F_2^D)}( χ_S(x) = 0 ) = Pr_{x∼μ_{2/3}(F_2^D)}( (−1)^{χ_S(x)} = 1 ).

Also, if S ≠ ∅, the following holds:

 1/3 < | 2 Pr_{x∼μ_{2/3}(F_2^D)}( (−1)^{χ_S(x)} = 1 ) − 1 | = | E_{x∼μ_{2/3}(F_2^D)} (−1)^{χ_S(x)} | = | ∏_{i∈S} E_{x_i∼μ_{2/3}(F_2)} (−1)^{x_i} | = | (−1/3)^{|S|} | = (1/3)^{|S|},

which is impossible for |S| ≥ 1, and the statement follows. ∎
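The key computation above, E_{x∼μ_{2/3}}[(−1)^{χ_S(x)}] = (−1/3)^{|S|}, can be confirmed by exact enumeration with rational arithmetic (illustrative D and S):

```python
from itertools import product
from fractions import Fraction

D = 4
S = {0, 2}  # an illustrative nonempty set

# Exact expectation of (-1)^{chi_S(x)} under mu_{2/3}:
# each coordinate x_i equals 1 w.p. 2/3 and 0 w.p. 1/3, independently.
expectation = Fraction(0)
for x in product([0, 1], repeat=D):
    weight = Fraction(1)
    for xi in x:
        weight *= Fraction(2, 3) if xi == 1 else Fraction(1, 3)
    expectation += weight * (-1) ** (sum(x[i] for i in S) % 2)

assert expectation == Fraction(-1, 3) ** len(S)

# Hence Pr(chi_S(x) = 0) = (1 + expectation)/2, which is at most 2/3 for S nonempty.
p_zero = (1 + expectation) / 2
assert p_zero < Fraction(2, 3)
```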

###### Proof.

(of Theorem 7.) Assume Test 6 fails on a function f : [n]^d → F_2 with probability less than ε, i.e.,

 Pr_{a,b∼[n]^d; x,y∼C_{a,b}}( f_{a,b}(0) + f_{a,b}(x) + f_{a,b}(y) + f_{a,b}(x+y) = 0 ) > 1 − ε,

where all distributions are uniform. Recall that f_{a,b}(x) is a shorthand for f(ρ_{a,b}(x)). Then, by averaging, there exists a ∈ [n]^d such that

 Pr_{b∼[n]^d; x,y∼C_{a,b}}( f_{a,b}(0) + f_{a,b}(x) + f_{a,b}(y) + f_{a,b}(x+y) = 0 ) > 1 − ε.

W.l.o.g. we assume that a = (1,…,1) and that f(a) = 0. We can assume this since, if needed, we can re-index the tensor and flip it, i.e., add the constant-one tensor element-wise. We write C_b for C_{a,b} and f_b for f_{a,b}. Then for every b ∈ [n]^d,

 Pr_{x,y∼C_b}( f_b(0) + f_b(x) + f_b(y) + f_b(x+y) = 0 ) = 1 − ε_b.

The BLR theorem (Theorem 11) implies that there exists a subset S(b) ⊆ Δ(a,b) such that

 Pr_{x∼C_b}( f_b(x) = χ_{S(b)}(x) ) = 1 − ε_b.
###### Remark.

By the BLR theorem, there should be a “greater or equal” sign instead of the equality. We assume equality to ease the exposition of the proof.

Let F : [n]^d → F_2^d be a function defined as follows. For each b ∈ [n]^d, the set S(b) can be viewed as a subset of [d], since S(b) ⊆ Δ(a,b) ⊆ [d]. Then F(b) is defined as the element of F_2^d corresponding to the set S(b).

We now show that F passes Test 13 with high probability, and hence is close to a direct product.

Let b ∈ [n]^d be chosen uniformly at random, and let b′ be chosen with respect to the following distribution D(b). For each i ∈ [d],

 b′_i = b_i w.p. 3/4;  b′_i is chosen uniformly at random from [n]∖{b_i} w.p. 1/4.

Note that the distribution on pairs (b, b′), where b is chosen uniformly from [n]^d and b′ w.r.t. D(b), is equivalent to the following: for each i ∈ [d],

 b_i = b′_i, chosen uniformly from [n], w.p. 3/4;  b_i ≠ b′_i, both chosen uniformly from [n] conditioned on being distinct, w.p. 1/4. (2)

In particular, it is symmetric in the sense that choosing b′ uniformly at random first, and then b w.r.t. D(b′), leads to the same distribution on pairs as the one described above.

For such a pair (b, b′), define a distribution D_{b,b′} on C_b as follows. For a vector x ∼ D_{b,b′}, independently for each coordinate i of C_b,

 x_i = 0, if i ∈ Δ(b,b′);  x_i = 0 w.p. 1/3 and x_i = 1 w.p. 2/3, if i ∉ Δ(b,b′).

Note that the distribution D_{b,b′} is supported on the binary cube C_b ∩ C_{b′} inside C_b. Denote

 ε_{b,b′} = Pr_{x∼D_{b,b′}}( f_b(x) ≠ χ_{F(b)}(x) ).

We claim that the following holds:

 ε_b = Pr_{x∼C_b}( f_b(x) ≠ χ_{F(b)}(x) ) = E_{b′∼D(b)} ε_{b,b′}. (3)

To see (3), note that since b′ is chosen w.r.t. D(b) and x ∼ D_{b,b′}, the resulting distribution for x is

 x_i = 0 w.p. 1/2 and x_i = 1 w.p. 1/2, independently for each coordinate,

which is exactly the uniform distribution on C_b.
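The per-coordinate computation behind this averaging step is Pr(x_i = 0) = 1/4 ⋅ 1 + 3/4 ⋅ 1/3 = 1/2, which can be checked with exact arithmetic:

```python
from fractions import Fraction

# Per coordinate: i lies in the difference set of b and b' w.p. 1/4 (then x_i = 0
# surely); otherwise x_i = 0 w.p. 1/3 and x_i = 1 w.p. 2/3.
p_in_delta = Fraction(1, 4)
p_zero = p_in_delta * 1 + (1 - p_in_delta) * Fraction(1, 3)
assert p_zero == Fraction(1, 2)
```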

We now show that

 Pr_{b∼[n]^d, b′∼D(b)}( ε_{b,b′} + ε_{b′,b} > 1/3 ) < 6ε. (4)

First note that it follows from the definitions that

 E_{b∼[n]^d} E_{b′∼D(b)} ε_{b,b′} = E_{b∼[n]^d} ε_b = ε.

And by the symmetry of the distribution on pairs (b, b′),

 E_{b∼[n]^d} E_{b′∼D(b)} ε_{b′,b} = E_{b′∼[n]^d} E_{b∼D(b′)} ε_{b′,b} = ε.

Combined together, the previous two equations imply that

 E_{b∼[n]^d} E_{b′∼D(b)}( ε_{b,b′} + ε_{b′,b} ) = 2ε,

and by Markov's inequality, Inequality (4) follows. By the definition of ε_{b,b′} and ε_{b′,b}, and a union bound,

 Pr_{x∼D_{b,b′}}( χ_{F(b)}(x) = χ_{F(b′)}(x) ) > 1 − (ε_{b,b′} + ε_{b′,b}),

which is equivalent to

 Pr_{x∼D_{b,b′}}( χ_{F(b)△F(b′)}(x) = 0 ) > 1 − (ε_{b,b′} + ε_{b′,b}).

Proposition 16 implies that if ε_{b,b′} + ε_{b′,b} ≤ 1/3, then

 F(b)|_{C_b∩C_{b′}} = F(b′)|_{C_b∩C_{b′}}.

By Theorem 14, the function F is close to a direct product, i.e., there exist functions F_1,…,F_d : [n] → F_2 such that

 Pr_{b∼[n]^d}( F(b) = (F_1(b_1),…,F_d(b_d)) ) ≥ 1 − O(ε).

Therefore,

 Pr_{b∼[n]^d}( f(b) = ∑_{i=1}^d F_i(b_i) ) ≥ 1 − O(ε). ∎

## 4 The Shapka Test

In [KL14], Kaufman and Lubotzky showed that coboundary expansion of the 2-dimensional complete simplicial complex implies testability of whether a symmetric matrix is a tensor square of a vector. The following test is inspired by their work and, in a way, generalizes it.

Given two vectors a, b ∈ [n]^d and an index i ∈ [d], denote by a^i b the vector which coincides with a in every coordinate except for the i-th one, where it coincides with b, i.e.,

 (a^i b)_j = a_j, if j ≠ i;  (a^i b)_j = b_i, if j = i.
###### Test 17.

• Choose a, b ∈ [n]^d uniformly at random.

• Define the query set Q_{a,b} to consist of b, of a^i b for all i ∈ [d], and of a iff d is even.

• Query f at the elements of Q_{a,b}.

• Accept iff ∑_{q∈Q_{a,b}} f(q) = 0.
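The completeness of the Shapka test can be checked by simulation. The sketch below assumes the query set Q_{a,b} = {b} ∪ {a^i b : i ∈ [d]} ∪ ({a} iff d is even), as in the formulation of Test 17 above, and verifies that direct sums always pass for both parities of d (illustrative parameters):

```python
import random

def direct_sum(n, d, fs):
    """f = f_1 + ... + f_d over F_2, built from lookup tables f_i : [n] -> {0,1}."""
    return lambda a: sum(fs[i][a[i]] for i in range(d)) % 2

def shapka_rejects(f, n, d):
    """One run of the test with queries {b} U {a^i b} U ({a} iff d even)."""
    a = [random.randrange(n) for _ in range(d)]
    b = [random.randrange(n) for _ in range(d)]
    queries = [tuple(b)]
    for i in range(d):  # a^i b: a with the i-th coordinate taken from b
        queries.append(tuple(b[j] if j == i else a[j] for j in range(d)))
    if d % 2 == 0:
        queries.append(tuple(a))
    return sum(f(q) for q in queries) % 2 != 0

for n, d in [(5, 3), (5, 4)]:
    fs = [[random.randrange(2) for _ in range(n)] for _ in range(d)]
    f = direct_sum(n, d, fs)
    assert not any(shapka_rejects(f, n, d) for _ in range(1000))
```

For a direct sum, ∑_i f(a^i b) = f(b) + (d−1)f(a), so the queried values sum to (d−1)f(a) for odd d and to d⋅f(a) = 0 after adding f(a) for even d; in both cases the test accepts.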

###### Remark 18.

Shapka is the Russian word for a winter hat (derived from Old French chape for a cap). The name of the Shapka test comes from the fact that the set Q_{a,b} consists of the two top layers of the induced binary cube (and also the bottom layer if d is even).

###### Theorem 19.

Suppose a function f : [n]^d → F_2 passes Test 17 with probability 1 − ε; then f is ε-close to a tensor product.

### 4.1 Proof of Theorem 19

###### Proof.

Denote by δ the normalized distance from f to the subspace of tensor products, i.e., δ is the minimal value such that there exists a tensor product g with

 Pr_{x∼[n]^d}( f(x) ≠ g(x) ) = δ.

For a vector a ∈ [n]^d and for k ∈ [d−1], define a function f^k_a : [n] → F_2 as follows. For x ∈ [n],

 f^k_a(x) = f(a^k x).

For k = d, the definition of f^d_a depends on the parity of d and reads as follows. For x ∈ [n],

 f^d_a(x) = f(a^d x) if d is odd;  f^d_a(x) = f(a^d x) + f(a) if d is even.

Given a collection of vectors g_1,…,g_d : [n] → F_2, we denote their tensor product by T(g_1,…,g_d). In other words, for a vector (x_1,…,x_d) ∈ [n]^d, the following holds:

 (T(g_1,…,g_d))(x_1,…,x_d) = ∑_{i∈[d]} g_i(x_i). (5)

In these notations, the following holds for any a, b ∈ [n]^d:

 (f − T(f^1_a,…,f^d_a))(b_1,…,b_d) = ∑_{q∈Q_{a,b}} f(q).

As T(f^1_a,…,f^d_a) is a tensor product, f is at least δ-far from it, and hence for any a ∈ [n]^d,

 Pr_{b∼[n]^d}( (f − T(f^1_a,…,f^d_a))(b_1,…,b_d) = 1 ) ≥ δ. (6)

Assume now that f fails Test 17 with probability ε, i.e.,

 ε = Pr_{a,b∼[n]^d}( ∑_{q∈Q_{a,b}} f(q) = 1 ).

Combining this equality with (5) and (6), we get the following:

 ε = E_{a∼[n]^d} Pr_{b∼[n]^d}( (f − T(f^1_a,…,f^d_a))(b_1,…,b_d) = 1 ) ≥ E_{a∼[n]^d} δ = δ,

which completes the proof. ∎

## 5 Further Directions

1. Can the original function be reconstructed by a voting scheme using the Shapka test (Test 17)?

2. It is plausible that the Square in a Cube test (Test 6) can be analyzed by the Fourier transform approach, similarly to the analysis of the BLR test.

3. Another test in the spirit of those presented above is the following.

###### Test 20.

• Choose a, b ∈ [n]^d uniformly at random.

• Choose a subset S ⊆ [d] uniformly at random.

• Query f at a, b, a_S b and a_{S̄} b, where S̄ = [d]∖S.

• Accept iff f(a) + f(b) + f(a_S b) + f(a_{S̄} b) = 0.

We conjecture that this test is also good, i.e., if a function passes the test with high probability then it is close to a tensor product.

## 6 Appendix: Proof of Remark 15

In [DS14], Dinur and Steurer proved Theorem 14 for α = 1/2. The following reduction shows that the theorem is true for all α, by a reduction from T(αk) to T(α′k) for a suitable α′. Recall that Test 13 makes two queries according to the distribution T(αk), which is the following distribution: (1) Choose a set A ⊆ [k] of size αk uniformly at random. (2) Choose x, y ∈ [N]^k uniformly at random, conditioned on x_A = y_A.

###### Proposition 21.

Let P_g(T(αk)) denote the probability that a function g passes Test 13 with respect to the distribution T(αk). If P_g(T(αk)) ≥ 1 − ε for some α > 1/2, then P_g(T(α′k)) ≥ 1 − rε for α′ = 1 − r(1−α), where r is a positive integer.

In addition, if ε is o(1), then rε is o(1) as well, since r depends only on α.

###### Proof.

Fix a function g, and suppose that P_g(T(αk)) ≥ 1 − ε for some α > 1/2, i.e.,

 Pr_{A,x,y∼T(αk)}( g(x)_A = g(y)_A ) ≥ 1 − ε.

We will show that P_g(T(α′k)) ≥ 1 − rε, where α′ = 1 − r(1−α) and r is a positive integer. Note that α′ satisfies α′ < α.

Given a pair of random vectors x, y and a set A distributed according to T(α′k), we construct a sequence of vectors x = x_0, x_1, …, x_r = y such that for all 1 ≤ i ≤ r, the pair (x_{i−1}, x_i) is distributed according to T(αk).

The complement of A has size (1−α′)k = r(1−α)k. Partition it randomly into r parts B_1,…,B_r of equal size (1−α)k. Denote A_i = [k]∖B_i for all 1 ≤ i ≤ r.

For each 1 ≤ i ≤ r−1, construct x_i such that it agrees with y on the coordinates in B_1∪⋯∪B_i and with x on the rest of the coordinates. Then for each 1 ≤ i ≤ r, x_i agrees with x_{i−1} on the set A_i of size αk. Therefore,

 Pr( g(x_{i−1})_{A_i} = g(x_i)_{A_i} ) ≥ 1 − ε.

Hence,

 1 − rε ≤ Pr( ∀ 1 ≤ i ≤ r : g(x_{i−1})_{A_i} = g(x_i)_{A_i} ) ≤ Pr_{A,x,y∼T(α′k)}( g(x)_A = g(y)_A ),

where the last inequality uses that A ⊆ A_i for every i.

The case of α′ < 1/2 has to be treated separately, as it is not covered by Theorem 14. In this case there is a further reduction to intersection k/2 as follows: given two vectors x, y distributed w.r.t. T(α′k), construct an intermediate random vector z which agrees on exactly half of the coordinates with both x and y. ∎

###### Corollary 22.

Let N and k be positive integers, let α ∈ (0,1) be such that αk is an integer, and let ε > 0. Let g : [N]^k → Σ^k be given such that

 Pr_{A,x,y}( g(x)_A = g(y)_A ) ≥ 1 − ε,

where A, x, y are chosen w.r.t. the test distribution T(αk). Then there exists a direct product function f such that E_x[ dist(g(x), f(x)) ] = O(ε), where dist stands for the normalized Hamming distance. In particular, this implies

 Pr_x( f(x) = g(x) ) ≥ 1 − O(ε), as k→∞ and N→∞.

#### Funding

The first author is supported by ERC-CoG grant number 772839. A substantial part of the work was done while the second author held a joint postdoctoral position at The Weizmann Institute and Bar-Ilan University funded by the ERC grant number 33628. Currently, the second author is supported by the SNF grant number 200020_169106.

## References

• [BCH95] Mihir Bellare, Don Coppersmith, Johan Håstad, Marcos A. Kiwi, and Madhu Sudan. Linearity testing in characteristic two. In 36th Annual Symposium on Foundations of Computer Science, Milwaukee, Wisconsin, USA, 23-25 October 1995, pages 432–441, 1995.
• [BLR93] Manuel Blum, Michael Luby, and Ronitt Rubinfeld. Self-testing/correcting with applications to numerical problems. Journal of computer and system sciences, 47(3):549–595, 1993.
• [DDG17] Roee David, Irit Dinur, Elazar Goldenberg, Guy Kindler, and Igor Shinkar. Direct sum testing. SIAM J. Comput., 46(4):1336–1369, 2017.
• [DG08] Irit Dinur and Elazar Goldenberg. Locally testing direct products in the low error range. In Proc. 49th IEEE Symp. on Foundations of Computer Science, 2008.
• [DR06] Irit Dinur and Omer Reingold. Assignment testers: Towards combinatorial proofs of the PCP theorem. SIAM Journal on Computing, 36(4):975–1024, 2006. Special issue on Randomness and Computation.
• [DS14] Irit Dinur and David Steurer. Direct product testing. In 2014 IEEE 29th Conference on Computational Complexity (CCC), pages 188–196, 2014.
• [GS97] Oded Goldreich and Shmuel Safra. A combinatorial consistency lemma with application to proving the PCP theorem. In RANDOM: International Workshop on Randomization and Approximation Techniques in Computer Science. LNCS, 1997.
• [IKW12] Russell Impagliazzo, Valentine Kabanets, and Avi Wigderson. New direct-product testers and 2-query PCPs. SIAM J. Comput., 41(6):1722–1768, 2012.
• [KL14] Tali Kaufman and Alexander Lubotzky. High dimensional expanders and property testing. In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, ITCS ’14, pages 501–506, New York, NY, USA, 2014. ACM.
• [O’D14] Ryan O’Donnell. Analysis of Boolean Functions. Cambridge University Press, 2014.