# Small-Set Expansion in Shortcode Graph and the 2-to-2 Conjecture

Dinur, Khot, Kindler, Minzer and Safra (2016) recently showed that (the imperfect completeness variant of) Khot's 2-to-2 games conjecture follows from a combinatorial hypothesis about the soundness of a certain "Grassmannian agreement tester". In this work, we show that the hypothesis of Dinur et al. follows from a conjecture we call the "Inverse Shortcode Hypothesis", which characterizes the non-expanding sets of the degree-two shortcode graph. We also show that the latter conjecture is equivalent to a characterization of the non-expanding sets in the Grassmann graph, as hypothesized in a follow-up paper of Dinur et al. (2017). Following our work, Khot, Minzer and Safra (2018) proved the "Inverse Shortcode Hypothesis". Combining their proof with our result and the reduction of Dinur et al. (2016) completes the proof of the 2-to-2 conjecture with imperfect completeness. Moreover, we believe that the shortcode graph provides a useful view of both the hypothesis and the reduction, and might be useful in extending the approach further.


## 1 Introduction

In [Kho02], Subhash Khot put forward a family of conjectures known as the "d-to-d games conjectures". A binary constraint C(x, y), where x and y take values in an alphabet Σ, is said to be d-to-d if for every value assigned to x there are exactly d values for y that satisfy C, and vice-versa. For any d, the "d-to-d games conjecture" roughly says that for every ε > 0, there is some finite alphabet Σ such that it is NP-hard to distinguish, given a constraint satisfaction problem with d-to-d constraints, whether it is possible to satisfy at least a 1 − ε fraction of the constraints, or if every assignment satisfies at most an ε fraction of the constraints.¹ The case d = 1 corresponds to the more famous Unique Games Conjecture, but until recently there was no constant d for which the corresponding d-to-d conjecture was known to be true.

¹ For d ⩾ 2, the conjectures are often stated in their perfect completeness variant, where we replace 1 − ε with 1 in the first case. In this work (as in all of the line of works following [KMS17]), we refer to the imperfect completeness version as stated above.
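To make the d-to-d property concrete, here is a small Python sketch (the function name and the toy constraints are entirely our own illustration, not from the paper) that checks whether a binary constraint, given as its set of satisfying pairs, is d-to-d:

```python
from collections import Counter

def is_d_to_d(pairs, k, d):
    # a constraint over alphabet {0, ..., k-1} is d-to-d if every value on
    # either side appears in exactly d satisfying pairs
    left = Counter(x for x, _ in pairs)
    right = Counter(y for _, y in pairs)
    return (all(left[v] == d for v in range(k))
            and all(right[v] == d for v in range(k)))

# a 2-to-2 constraint on alphabet {0,1,2,3}: pair up the blocks {0,1} and {2,3}
pairs = {(x, y) for x in range(4) for y in range(4) if x // 2 == y // 2}
assert is_d_to_d(pairs, 4, 2)

# a 1-to-1 (unique) constraint: a permutation of the alphabet
perm = {(x, (x + 1) % 4) for x in range(4)}
assert is_d_to_d(perm, 4, 1)
```

The d = 1 case is exactly a unique game: each value on one side admits a single consistent value on the other.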

Dinur, Khot, Kindler, Minzer, and Safra [DKK16], building on ideas of Khot, Minzer and Safra [KMS17], recently initiated an approach towards proving the 2-to-2 conjecture, based on a certain combinatorial hypothesis positing the soundness of the "Grassmann agreement test".

In this work we show that their hypothesis follows from a certain natural hypothesis characterizing the structure of non-expanding sets in the degree two shortcode graph [BGH15]. Following our work, Khot, Minzer and Safra [KMS18] proved the latter hypothesis, thus completing the proof of the 2-to-2 games conjecture. This has several implications for hardness of approximation, including an improved NP-hardness of approximation for Vertex Cover along with a host of other improved NP-hardness results. Perhaps more importantly, it also gives strong evidence for the truth of the Unique Games Conjecture itself. We defer to [DKK16, DKK18, KMS18] for a detailed discussion of the importance of the 2-to-2 games conjecture, as well as of the reduction of this conjecture to showing the soundness of the Grassmann agreement tester.

### 1.1 Our Results

Our main result reduces the task of proving the "Grassmann agreement hypothesis" of Dinur et al. [DKK16, Hypothesis 3.6] to characterizing the structure of non-expanding sets in the associated Grassmann graph.

• We show that the Grassmann agreement hypothesis [DKK16, Hypothesis 3.6] follows from the Grassmann Expansion Hypothesis [DKK18, Hypothesis 1.7].

• We describe the related shortcode test and the associated agreement and expansion hypotheses, and relate them to the Grassmann versions above.

The above, combined with the work of [DKK16, KMS18], suffices to prove the 2-to-2 conjecture. However, we note that it is possible to directly obtain a proof of the 2-to-2 conjecture (see the recent exposition in [BCS18]) using the "Inverse Shortcode Hypothesis", without going through the Grassmann graph at all. We think the shortcode view provides a natural way to understand the reduction and suggests potential extensions; see Section 1.6.

### 1.2 Grassmann Graph and DKKMS Consistency Test

To state our results formally, we need to define the Grassmann and shortcode graphs, which we now do. The Grassmann graph G(ℓ, n), with parameters ℓ ⩽ n, has as its vertices all ℓ-dimensional subspaces of F₂ⁿ (the set of which we denote by V_ℓ). Two subspaces V, V′ ∈ V_ℓ have an edge between them if dim(V ∩ V′) = ℓ − 1.

Let Lin(n) be the set of all linear functions f : F₂ⁿ → F₂. For every f ∈ Lin(n), let F_f be the map that assigns to every V ∈ V_ℓ the restriction f|_V of the linear function f to the subspace V. Let 𝓛 denote the set of all such maps F_f.

The Grassmann Consistency Test is a two-query test for maps F that assign to every V ∈ V_ℓ a linear function on V, described below:

**Test 0: Grassmann Consistency Test**

Given: a map F from V_ℓ that maps any V ∈ V_ℓ to a linear function F(V) on V.

Operation:

1. Pick an edge (V, V′) of G(ℓ, n) uniformly at random.

2. Accept if F(V)|_{V∩V′} = F(V′)|_{V∩V′}; otherwise reject.

It is easy to see the following completeness of the Grassmann consistency test.

###### Fact 1.1 (Completeness).

Suppose F = F_f for some f ∈ Lin(n). Then, F passes the Grassmann Consistency Test with probability 1.

The DKKMS hypothesis conjectures a precise version of soundness of the Grassmann Consistency Test.

###### Hypothesis 1.2 (DKKMS Soundness Hypothesis).

For every δ > 0, there exist ε > 0, an integer r, and an integer ℓ₀ such that the following holds for every ℓ ⩾ ℓ₀ and all sufficiently large n:

Let F be such that F passes the Grassmann Consistency Test with probability at least δ. Then, there exist subspaces Q ⊆ W of F₂ⁿ, of dimension at most r and co-dimension at most r respectively, and a linear function f ∈ Lin(n), such that

 P_{V∼V_ℓ, Q⊆V⊆W}[F(V) = f|_V] ⩾ ε.

### 1.3 Shortcode Graph and Consistency Test

We now define the closely related degree 2 shortcode graph and an immediate analog of the Grassmann consistency test on this graph. For parameters ℓ ⩽ n as before, the vertices of the degree 2 shortcode graph S_{ℓ,n} are the elements of Mat_{ℓ,n}, the set of all ℓ × n matrices over F₂. Two vertices M and M′ have an edge between them if M − M′ is a rank 1 matrix over the field F₂. The 2-query codeword test on this graph is entirely analogous to the one above for the Grassmann graph:

**Test 1: Degree 2 Shortcode Consistency Test**

Given: a map F from Mat_{ℓ,n} to F₂^ℓ.

Operation:

1. Pick M ∈ Mat_{ℓ,n} and a rank 1 matrix ab⊤, for vectors a ∈ F₂^ℓ and b ∈ F₂ⁿ, all uniformly at random from their respective domains. Let M′ = M + ab⊤.

2. Accept if F(M′) ∈ {F(M), F(M) + a}.

Just as the Grassmann consistency test, the above shortcode consistency test is a "2-to-2" constraint, and the following completeness is easy to establish.

###### Fact 1.3 (Completeness).

Let g : F₂ⁿ → F₂ be any affine linear function, say g(x) = ⟨z, x⟩ + u₀. Let F_g be the map that evaluates g on each row of the input matrix, so that F_g(M) ∈ F₂^ℓ. Then, F_g passes the shortcode consistency test with probability 1.

The analogous soundness hypothesis can now be stated as:

###### Hypothesis 1.4 (Degree 2 Shortcode Soundness Hypothesis).

For every δ > 0, there exist ε > 0, an integer r, and an integer ℓ₀ such that the following holds for every ℓ ⩾ ℓ₀ and all sufficiently large n:

Let F be such that F passes the degree 2 shortcode consistency test with probability at least δ. Then, there exist affine constraints (qᵢ, tᵢ) and (rᵢ, sᵢ) for i ⩽ r (with qᵢ ∈ F₂ⁿ, tᵢ ∈ F₂^ℓ, rᵢ ∈ F₂^ℓ, sᵢ ∈ F₂ⁿ), and a pair z ∈ F₂ⁿ, u ∈ F₂^ℓ, such that

 P_{M∼Mat_{ℓ,n}}[F(M) = Mz + u ∣ Mqᵢ = tᵢ, rᵢ⊤M = sᵢ ∀ i ⩽ r] ⩾ ε.

### 1.4 Soundness vs Small-Set Expansion in Grassmann/Shortcode Graphs

Recall that for a regular graph G, the expansion Φ(S) of a set S of vertices is the probability that a single step of the random walk, started at a uniformly random vertex in S, steps out of S. That is,

 Φ(S) = P_{v∼S, u∼N(v)}[u ∉ S].

The DKKMS Soundness Hypothesis implies a natural characterization of small non-expanding sets in G(ℓ,n), noted below as Hypothesis 1.6. Similarly, the degree 2 shortcode soundness hypothesis implies a natural characterization of non-expanding sets in S_{ℓ,n}. We include a brief overview of the argument here and refer the reader to the more extensive commentary in Section 1.3 of [DKK16] for further details.

Suppose S₁, …, Sₜ are "non-expanding" sets that together cover a constant fraction of the vertices in G(ℓ,n). We construct a labeling strategy F by choosing uniformly random linear functions f₁, …, fₜ and setting F(V) = fᵢ|_V if V ∈ Sᵢ, and F(V) a random linear function otherwise. Clearly, F doesn't agree with a single linear function on significantly more than a 1/t fraction of the vertices. On the other hand, if the Sᵢ's are sufficiently non-expanding, then a random edge will lie inside one of the Sᵢ's with a non-trivially large probability, and thus F will satisfy the Grassmann consistency test. In this case, we will hope that there are subspaces Q ⊆ W of constant dimension and co-dimension, respectively, such that restricting to the subspaces V with Q ⊆ V ⊆ W implies that F(V) = f|_V for some fixed global linear function f. This can happen in the above example only if there are Q ⊆ W as above such that one of the Sᵢ's has large density inside {V : Q ⊆ V ⊆ W} (i.e., a density independent of t). Thus, Hypothesis 1.2 forces the non-expanding sets to be "structured" (in the sense of having a large density inside {V : Q ⊆ V ⊆ W} for some Q, W of constant dimension and co-dimension, respectively). This can be interpreted as saying that the non-expansion of any set of vertices in G(ℓ,n) can be "explained away" by a higher than typical density in one of the canonical non-expanding sets (i.e., those sets of subspaces that contain a fixed subspace Q and are contained inside a fixed subspace W of constant dimension and co-dimension, respectively).

To formally state the Grassmann Expansion Hypothesis, we define the special non-expanding sets (referred to as "zoom-ins" and "zoom-outs" in [DKK18]):

###### Definition 1.5 (Nice Sets in Grassmann Graph).

A subset S of vertices in G(ℓ,n) is said to be r-nice if there are subspaces Q ⊆ W of F₂ⁿ, of dimension r₁ and co-dimension r₂ respectively with r₁ + r₂ = r, such that S = {V ∈ V_ℓ : Q ⊆ V ⊆ W}.

###### Hypothesis 1.6 (Grassmann Expansion Hypothesis).

For every ε > 0, there exist δ > 0 and an integer r, depending only on ε, such that if S ⊆ V_ℓ satisfies Φ(S) ⩽ 1 − ε, then there is an r-nice set T, given by subspaces Q ⊆ W over F₂ⁿ of constant dimension and co-dimension respectively, such that |S ∩ T| ⩾ δ|T|.

Analogously, we can define nice sets in the degree 2 shortcode graph and state the corresponding expansion hypothesis. We call T ⊆ Mat_{ℓ,n} a right affine subspace of matrices if there are pairs (q₁, t₁), …, (q_k, t_k) and every M ∈ T satisfies Mqᵢ = tᵢ for every i ⩽ k. We define a left affine subspace analogously, via constraints of the form rᵢ⊤M = sᵢ.

###### Definition 1.7 (Nice Sets in Degree 2 Shortcode Graph).

A subset T ⊆ Mat_{ℓ,n} is said to be r-nice if it is an intersection of a left and a right affine subspace in Mat_{ℓ,n}, with the total number of constraints equal to r.

###### Hypothesis 1.8 (Inverse Shortcode Hypothesis).

For every ε > 0, there exist δ > 0 and an integer r, depending only on ε, such that for every subset S ⊆ Mat_{ℓ,n}, if Φ(S) ⩽ 1 − ε, then there exists an r-nice set T such that |S ∩ T| ⩾ δ|T|.
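As a sanity check on the role of nice sets, the following brute-force computation (toy parameters ℓ = 2, n = 3 of our own choosing; exact arithmetic via `Fraction`) verifies that a set cut out by a single left-affine constraint r⊤M = 0 has expansion exactly 2^{ℓ−1}/(2^ℓ − 1) = 2/3 in the degree 2 shortcode graph, i.e. bounded away from 1:

```python
from itertools import product
from fractions import Fraction

ell, n = 2, 3  # toy parameters; the graph has 2^(ell*n) = 64 vertices

def dot(u, v):
    return sum(x * y for x, y in zip(u, v)) % 2

# vertices are ell x n matrices over F_2, flattened row-major into tuples
vertices = list(product(range(2), repeat=ell * n))
# edge generators: all rank 1 matrices a b^T with a, b nonzero
generators = [tuple(a[i] * b[j] for i in range(ell) for j in range(n))
              for a in product(range(2), repeat=ell) if any(a)
              for b in product(range(2), repeat=n) if any(b)]

def expansion(S):
    # Phi(S): probability that a random step from a random vertex of S leaves S
    Sset = set(S)
    leave = sum(tuple((m + g) % 2 for m, g in zip(M, G)) not in Sset
                for M in S for G in generators)
    return Fraction(leave, len(S) * len(generators))

# a "nice" set given by one left-affine constraint r^T M = 0 with r = (1, 0),
# i.e. the first row of M is zero
r = (1, 0)
nice = [M for M in vertices
        if all(dot(r, [M[i * n + j] for i in range(ell)]) == 0 for j in range(n))]
assert expansion(nice) == Fraction(2, 3)
```

A typical random set of the same density, by contrast, has expansion close to 1; the Inverse Shortcode Hypothesis asserts that noticeable correlation with such affine-defined sets is the only way to avoid near-perfect expansion.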

While Hypotheses 1.2 and 1.4 posit the soundness of a specific "codeword consistency" test associated with the Grassmann/shortcode graphs, Hypotheses 1.6 and 1.8 ask for a purely graph-theoretic property: a characterization of non-expanding sets in G(ℓ,n) and S_{ℓ,n}. As such, they appear easier to attack, and [DKK16] thus suggested understanding the structure of non-expanding sets in the Grassmann graph as a natural first step. As we show in this note, proving Hypothesis 1.8 is in fact enough to show Hypothesis 1.2. In a follow-up work [KMS18], this result was used to complete the proof of the DKKMS soundness hypothesis.

### 1.5 Our Results

We are now ready to state our main results formally.

First, we show that the soundness of the shortcode consistency test follows from the expansion hypothesis for the shortcode graph.

###### Theorem 1.9.

The Inverse Shortcode Hypothesis (Hypothesis 1.8) implies the Degree 2 Shortcode Soundness Hypothesis (Hypothesis 1.4).

Second, we show that the soundness hypothesis for the shortcode consistency test implies the soundness hypothesis for the Grassmann consistency test. This reduces the DKKMS soundness hypothesis to establishing the expansion hypothesis for the Shortcode graph.

###### Theorem 1.10.

The Degree 2 Shortcode Soundness Hypothesis (Hypothesis 1.4) implies the DKKMS Soundness Hypothesis (Hypothesis 1.2).

Finally, we relate the expansion hypothesis of the Grassmann graph to the expansion hypothesis for the degree 2 shortcode graph.

###### Theorem 1.11.

The Grassmann Expansion Hypothesis (Hypothesis 1.6) is equivalent to the Inverse Shortcode Hypothesis (Hypothesis 1.8).

### 1.6 Discussion

Working with the shortcode consistency test (and consequently, the shortcode expansion hypothesis) makes an approach to proving Hypothesis 1.2 somewhat more tractable. This is because unlike the Grassmann graph, the degree 2 shortcode graph is a Cayley graph on Mat_{ℓ,n} (isomorphic to F₂^{ℓn}) under the group operation of F₂-addition, with the set of all rank 1 matrices forming the set of generators. Thus, studying the expansion of sets of vertices can be approached via powerful methods from Fourier analysis. Indeed, this is the route taken by the recent breakthrough [KMS18] that proves the shortcode expansion hypothesis and completes the proof of the 2-to-2 games conjecture (with imperfect completeness).

Perhaps equally importantly, the shortcode consistency test suggests immediate extensions (higher degree shortcode graphs) that provide a natural path to proving the Unique Games Conjecture. We discuss this approach here.

First, the Grassmann/shortcode consistency tests as stated above are "2-to-2" tests. That is, for any reply for the first query, there are two admissible replies for the other query. However, it is simple to modify the tests and make them unique or "1-to-1", at the cost of making the completeness 1/2 instead of 1. For concreteness, we describe this simple modification below.

**Test 2: Unique Degree 2 Shortcode Consistency Test**

Given: a map F from Mat_{ℓ,n} to F₂^ℓ.

Operation:

1. Pick M ∈ Mat_{ℓ,n} and a rank 1 matrix ab⊤, for vectors a ∈ F₂^ℓ and b ∈ F₂ⁿ, all uniformly at random from their respective domains. Let M′ = M + ab⊤.

2. Accept if F(M′) = F(M).

**Test 3: Unique Degree 3 Shortcode Consistency Test**

Given: a map F from the set Tens_{ℓ,n} of all ℓ × n × n tensors over F₂ to F₂^ℓ.

Operation:

1. Pick T ∈ Tens_{ℓ,n} and a rank 1 tensor a ⊗ b ⊗ c, for vectors a ∈ F₂^ℓ and b, c ∈ F₂ⁿ, all uniformly at random from their respective domains. Let T′ = T + a ⊗ b ⊗ c.

2. Accept if F(T′) = F(T).

It is easy to check that any strategy that passes the 2-to-2 test with probability δ can be modified to obtain a success probability of δ/2 in passing the "unique" test above (see the proof of Lemma 2.2 below). This is one of several ways in which the NP-hardness of "2-to-2" games implies the NP-hardness of (1/2 − ε, ε)-unique games - that is, distinguishing between instances where at least a 1/2 − ε fraction of the constraints are satisfiable from those where at most an ε fraction of the constraints are satisfiable.

A natural strategy, thus, to try to show NP-hardness of unique games with completeness closer to 1 is to use some variant of the shortcode consistency test above that has completeness larger than 1/2. Indeed, the degree 2 shortcode consistency test suggests natural analogs with higher completeness - by moving to higher degree shortcode graphs. For concreteness, consider Test 3 above on degree 3 shortcode graphs, where it is easy to argue a completeness of 3/4.

Let Tens_{ℓ,n} be the set of all ℓ × n × n tensors over F₂. Recall that a rank 1 tensor is defined by 3 vectors a ∈ F₂^ℓ, b ∈ F₂ⁿ, and c ∈ F₂ⁿ, and can be written as (a ⊗ b ⊗ c)_{ijk} = aᵢ bⱼ cₖ.

To see why there is a natural analog, for degree 3, of the honest strategy for the degree 2 shortcode consistency test, and that it gives a completeness of 3/4, we show:

###### Lemma 1.12 (Completeness).

Let z ∈ F₂ⁿ be non-zero and u ∈ F₂. Let F be the map that assigns to any tensor T ∈ Tens_{ℓ,n} the vector whose ith coordinate is Σ_{j,k⩽n} T_{ijk} zⱼ zₖ + u. Then, F passes Test 3 with probability 3/4.

###### Proof.

Let a, b, c be the vectors defining the rank 1 tensor a ⊗ b ⊗ c chosen in the test. A direct computation gives F(T + a ⊗ b ⊗ c) = F(T) + ⟨b, z⟩⟨c, z⟩ · a, so F passes the test only if ⟨b, z⟩⟨c, z⟩ = 0. If z ≠ 0, then ⟨b, z⟩ = 1 with probability 1/2, and likewise for ⟨c, z⟩. Since b and c are independently chosen in the test, the probability that ⟨b, z⟩⟨c, z⟩ = 0 is 3/4. ∎
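The computation behind this lemma can be verified mechanically. The sketch below (our own encoding; it assumes the quadratic row strategy F(T)ᵢ = Σ_{j,k} T_{ijk} zⱼ zₖ + u and a unique acceptance rule of the form F(T′) = F(T)) exhaustively checks the identity F(T + a⊗b⊗c) = F(T) + ⟨b,z⟩⟨c,z⟩ · a, from which the 3/4 completeness follows, since ⟨b,z⟩⟨c,z⟩ = 0 for a 3/4 fraction of independent uniform b, c when z ≠ 0:

```python
from itertools import product

ell, n = 2, 2
z, u = (1, 0), 1  # honest strategy parameters (z nonzero)

def dot(p, q):
    return sum(x * y for x, y in zip(p, q)) % 2

def F(T):
    # F(T)_i = sum_{j,k} T[i][j][k] z_j z_k + u  (mod 2)
    return tuple((sum(T[i][j][k] * z[j] * z[k] for j in range(n) for k in range(n)) + u) % 2
                 for i in range(ell))

def add_rank1(T, a, b, c):
    # T + a (x) b (x) c over F_2
    return tuple(tuple(tuple((T[i][j][k] + a[i] * b[j] * c[k]) % 2
                             for k in range(n)) for j in range(n)) for i in range(ell))

# enumerate every tensor in Tens_{2,2} and every (a, b, c)
tensors = [tuple(tuple(tuple(bits[(i * n + j) * n + k] for k in range(n))
                       for j in range(n)) for i in range(ell))
           for bits in product(range(2), repeat=ell * n * n)]

for T in tensors:
    for a in product(range(2), repeat=ell):
        for b in product(range(2), repeat=n):
            for c in product(range(2), repeat=n):
                shift = dot(b, z) * dot(c, z)  # <b,z><c,z>
                expected = tuple((F(T)[i] + shift * a[i]) % 2 for i in range(ell))
                assert F(add_rank1(T, a, b, c)) == expected
```

For nonzero a, the test then accepts exactly when the shift ⟨b,z⟩⟨c,z⟩ vanishes, which happens for a 3/4 fraction of uniform pairs (b, c).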

Thus, the degree 3 shortcode consistency test gives a natural analog of the degree 2 shortcode consistency test with higher completeness. Indeed, the degree r version gives a test with completeness 1 − 2^{−(r−1)}, as expected. One can also frame expansion hypotheses, similar to the ones for the degree 2 case, that posit a characterization of the non-expanding sets in higher degree shortcode graphs.

While our current efforts to compose this test with the "outer PCP" in order to get a reduction to the Unique Games problem (with higher completeness) have not succeeded, it seems a natural avenue for launching an attack on the UGC.²

² There are indeed very serious obstacles that must be overcome before carrying this out. Specifically, the reduction of [DKK16] uses a careful interplay between smoothness properties of the outer PCP and efficiency or "blow up" properties of the test (i.e., the number of potential queries by the verifier as a function of the number of honest strategies). The tensor based test has too much of a blowup to be simply "plugged into" the outer PCP used by [DKK16].

## 2 Small-Set-Expansion vs Soundness

In this section, we establish that the Inverse Shortcode Hypothesis (Hypothesis 1.8) implies the soundness of the degree 2 shortcode consistency test (Hypothesis 1.4).

#### From 2-to-2 to Unique Tests

For the sake of exposition, it will be easier to work with Test 1.6, the “unique” version of the degree 2 shortcode consistency test. Thus, we restate the soundness hypothesis for Test 1.6 and show that it is enough to establish Hypothesis 1.4.

###### Hypothesis 2.1 (Soundness of Test 2).

For every ε > 0, there exist δ > 0, an integer r, and an integer ℓ₀ such that the following holds for every ℓ ⩾ ℓ₀ and all sufficiently large n:

Let F be such that F passes Test 2 with probability at least ε. Then, there exist affine constraints (qᵢ, tᵢ) and (rᵢ, sᵢ) for i ⩽ r, and a pair z ∈ F₂ⁿ, u ∈ F₂^ℓ, such that

 P_{M∼Mat_{ℓ,n}}[F(M) = Mz + u ∣ Mqᵢ = tᵢ, rᵢ⊤M = sᵢ ∀ i ⩽ r] ⩾ δ.

We first show that Hypothesis 2.1 implies Hypothesis 1.4.

###### Lemma 2.2.

Hypothesis 2.1 implies Hypothesis 1.4.

###### Proof.

Let F be a labeling strategy for Test 1, the 2-to-2 version of the test. We will first obtain a good labeling strategy for Test 2 by modifying F slightly.

Choose c uniformly at random from F₂ⁿ. For any M, let G(M) = F(M) + Mc. We claim that if F passes Test 1 with probability δ, then G passes Test 2 with probability at least δ/2.

To see this, take any M, M′ such that (M, M′) is an edge in S_{ℓ,n} on which F passes Test 1. That is, M′ = M + ab⊤ for non-zero vectors a ∈ F₂^ℓ, b ∈ F₂ⁿ. We will argue that G passes Test 2 on (M, M′) with probability 1/2 over the choice of c. This will imply that, in expectation over the choice of c, G satisfies at least half the constraints satisfied by F in Test 1, completing the proof.

This is simple to see: since F passes the test, F(M′) = F(M) or F(M′) = F(M) + a. WLOG, say the first happens. Observe that then G(M′) − G(M) = ⟨b, c⟩a, so G passes the unique test on (M, M′) if a = 0 or ⟨b, c⟩ = 0. Since c is uniformly random and b ≠ 0, G thus passes if ⟨b, c⟩ = 0, which happens with probability 1/2. ∎
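The key step - that the random shift c converts a passing 2-to-2 answer into a passing unique answer exactly half the time - can be checked by brute force. The sketch below (our encoding; it assumes the unique test accepts the edge (M, M + ab⊤) iff G(M + ab⊤) = G(M)) takes an honest linear strategy F, which passes the 2-to-2 test with probability 1, and verifies that G(M) = F(M) + Mc passes the unique test for exactly half of all shifts c:

```python
import random
from itertools import product

ell, n = 3, 4

def rand_vec(k):
    return [random.randrange(2) for _ in range(k)]

def mat_vec(M, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) % 2 for row in M]

random.seed(0)
z = [1, 0, 1, 1]  # honest linear strategy F(M) = Mz; always passes the 2-to-2 test

for _ in range(50):
    M = [rand_vec(n) for _ in range(ell)]
    a, b = rand_vec(ell), rand_vec(n)
    if not any(a) or not any(b):
        continue  # need a genuine rank 1 shift a b^T
    M2 = [[(x + ai * bj) % 2 for x, bj in zip(row, b)] for row, ai in zip(M, a)]
    passes = 0
    for c in product(range(2), repeat=n):
        # G(X) = F(X) + Xc = X(z + c); unique acceptance: G(M') = G(M)
        G = lambda X: [(f + mc) % 2 for f, mc in zip(mat_vec(X, z), mat_vec(X, list(c)))]
        passes += G(M2) == G(M)
    assert passes == 2 ** (n - 1)  # exactly half of all c
```

Indeed, G(M′) − G(M) = ⟨b, z + c⟩a, and for fixed b ≠ 0 the map c ↦ ⟨b, z + c⟩ is balanced over F₂ⁿ.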

#### Expansion to Soundness

We will now show that Hypothesis 1.8 implies Hypothesis 2.1; this completes the proof of Theorem 1.9. A similar argument can be used to directly establish that Hypothesis 1.6 implies Hypothesis 1.2. We do not include it here explicitly. Instead, we relate the expansion and soundness hypotheses for the degree 2 shortcode test to their analogs for the Grassmann test, as we believe this could shed light on proving expansion hypotheses for the higher degree shortcode tests discussed in Section 1.6.

###### Lemma 2.3.

Hypothesis 1.8 implies Hypothesis 2.1.

###### Proof.

Let F be the labeling function as in the assumption of Hypothesis 2.1, so that F passes Test 2 with probability at least ε. For any u ∈ F₂^ℓ, let S_u be the set of all matrices M with F(M) = u. Since Test 2 accepts an edge (M, M′) exactly when F(M) = F(M′), the acceptance probability is a weighted average of the quantities 1 − Φ(S_u). Then, by an averaging argument, there must be a u such that Φ(S_u) ⩽ 1 − ε.

Apply Hypothesis 1.8 to S_u to obtain an r-nice subset T of Mat_{ℓ,n} such that |S_u ∩ T| ⩾ δ|T|. Let Mqᵢ = tᵢ, rᵢ⊤M = sᵢ for i ⩽ r be the affine constraints satisfied by every M ∈ T. Consider the affine linear strategy F′ that maps every M to u (that is, z = 0). Observe that for every M ∈ S_u, F(M) = F′(M) = u by this choice. As a result, P_{M∼Mat_{ℓ,n}}[F(M) = F′(M) ∣ Mqᵢ = tᵢ, rᵢ⊤M = sᵢ ∀ i ⩽ r] ⩾ |S_u ∩ T|/|T| ⩾ δ. Thus, F′ is the "decoded" strategy that satisfies the requirements of Hypothesis 2.1, as required. This completes the proof.

## 3 Relating Grassmann Graphs to Degree 2 Shortcode Graphs

In this section, we show a formal relationship between the Grassmann and the degree 2 shortcode tests. In particular, we will prove Theorems 1.10 and 1.11.

### 3.1 A homomorphism from G(ℓ,n) into S_{ℓ,n−ℓ}

Key to the relationship between the two tests is an embedding of the degree 2 shortcode graph into the Grassmann graph G(ℓ,n). We describe this embedding first. As justified in the previous section, it is without loss of generality to work with the "unique" versions of both tests.

To describe the above embedding, we need the notion of the projection of a subspace of F₂ⁿ to a set of coordinates.

###### Definition 3.1 (Projection of a Subspace).

Given a subspace V ⊆ F₂ⁿ, the projection of V to a set of coordinates S ⊆ [n], written as proj_S(V), is the subspace of F₂^{|S|} defined by taking, for every vector x ∈ V, the vector obtained by keeping only the coordinates indexed by S.

Let B(n) be the set of n-tuples of linearly independent elements of F₂ⁿ, i.e., each B = (b₁, …, bₙ) ∈ B(n) forms a basis for the vector space F₂ⁿ. We will use (e₁, …, eₙ) to denote the standard basis.

We will now describe a class of graph homomorphisms from induced subgraphs of G(ℓ,n) into S_{ℓ,n−ℓ}. Each element of this class can be described by a basis B of F₂ⁿ.

For each basis B ∈ B(n), let G_B be the set of all subspaces V ∈ V_ℓ such that the projection of V to the first ℓ coordinates, when written w.r.t. the basis B, is full-dimensional. Our embedding will map each element of G_B to a distinct element of Mat_{ℓ,n−ℓ}, such that the edge structure within G_B in G(ℓ,n) is preserved under this embedding.

###### Definition 3.2 (Homomorphism from G(ℓ,n) into S_{ℓ,n−ℓ}).

Let φ_B : G_B → Mat_{ℓ,n−ℓ} be defined as follows. Write every vector in the B-basis. For any V ∈ G_B and for i ⩽ ℓ, let vᵢ be the unique vector in V such that proj_{[ℓ]}(vᵢ) = eᵢ. We call v₁, …, v_ℓ the canonical basis for V.

Define φ_B(V) to be the matrix whose ith row is given by the projection of vᵢ on the last n − ℓ coordinates, for each i ⩽ ℓ. When the basis B is clear from the context, we will omit the subscript and write φ(V).

It is easy to confirm that φ is a bijection from G_B onto Mat_{ℓ,n−ℓ}. This is because the canonical basis for a subspace is unique.

Next, we prove some important properties of the homomorphism that will be useful in the proof of Theorem 1.10.

First, we show that the map is indeed a homomorphism as promised and thus, preserves edge structure.

###### Lemma 3.3 (φ is a homomorphism).

For φ = φ_B defined above and any V, V′ ∈ G_B, {V, V′} is an edge in G(ℓ,n) iff {φ(V), φ(V′)} is an edge in S_{ℓ,n−ℓ}.

###### Proof.

Let a ∈ F₂^ℓ and b ∈ F₂^{n−ℓ} be arbitrary non-zero vectors that define a rank 1 matrix ab⊤. Let V ∈ G_B with M = φ(V), and consider the matrix M′ = M + ab⊤. Then, M′ − M = ab⊤ and thus {M, M′} is an edge in S_{ℓ,n−ℓ}. We claim that dim(V ∩ V′) = ℓ − 1, where V′ = φ⁻¹(M′). Suppose m₁, …, m_ℓ are the rows of M. Then, the rows of M′ are given by mᵢ + aᵢb. Thus, V′ is spanned by (eᵢ, mᵢ + aᵢb) for i ⩽ ℓ, where eᵢ is the standard basis element on the first ℓ coordinates and the notation (x, y) indicates the concatenation of the vectors in the ordered pair to get an n-dimensional vector. In particular, every element of V′ can be written as Σ_{i∈I} (eᵢ, mᵢ) + (Σ_{i∈I} aᵢ)(0, b) for some I ⊆ [ℓ], and any such vector is contained in V if Σ_{i∈I} aᵢ = 0, implying that dim(V ∩ V′) = ℓ − 1.

On the other hand, let V′ be a subspace in G_B such that dim(V ∩ V′) = ℓ − 1, and let M and M′ be the matrices obtained from V and V′ via the map φ. Then, M and M′ must differ in at least one row; say, WLOG, the last rows of M and M′ are m_ℓ and m_ℓ + b for some b ≠ 0. Notice that since the vector with e_ℓ in the first ℓ coordinates is unique in each subspace, neither of (e_ℓ, m_ℓ), (e_ℓ, m_ℓ + b) belongs to the intersection V ∩ V′. Further, for every vector u ∈ V, either u or u + (0, b) must be contained in the intersection (as the extra linear equation that V ∩ V′ satisfies over and above V is satisfied by exactly one of u and u + (0, b)). Thus, by adding (0, b) to every one of the canonical basis elements of V that are not in V ∩ V′, we get a set of ℓ elements that are all 1) contained in V′ and 2) projecting to e₁, …, e_ℓ on the first ℓ coordinates. This then has to be the canonical basis of V′ (by uniqueness of the canonical basis), and further, the corresponding M′ can be written as M + ab⊤, where aᵢ = 1 iff the ith canonical basis element of V is not in V ∩ V′. ∎
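On small instances this edge correspondence can be checked directly. The sketch below (our own encoding, with toy parameters ℓ = 2, n = 4) enumerates subspaces in canonical form span{(eᵢ, mᵢ)}, applies a rank 1 update to the matrix part, and confirms that the two subspaces intersect in dimension ℓ − 1, i.e. that matrices differing by a rank 1 matrix come from Grassmann neighbors:

```python
from itertools import product

ell, n = 2, 4  # subspaces of F_2^4 of dimension 2; matrix part is 2 x 2

def gf2_rank(rows, width):
    # Gaussian elimination over F_2 on integer bitmasks
    rows = list(rows)
    rank = 0
    for bit in range(width):
        piv = next((r for r in rows if (r >> bit) & 1), None)
        if piv is None:
            continue
        rank += 1
        rows = [r ^ piv if (r >> bit) & 1 else r for r in rows if r != piv]
    return rank

def subspace_rows(M):
    # canonical basis rows (e_i | m_i): coordinate j of F_2^n is bit j;
    # first ell coordinates are bits 0..ell-1, the matrix part sits above them
    return [(1 << i) | (M[i] << ell) for i in range(ell)]

def intersection_dim(rowsV, rowsW):
    # dim(V ∩ W) = dim V + dim W - dim(V + W)
    dV = gf2_rank(rowsV, n)
    dW = gf2_rank(rowsW, n)
    return dV + dW - gf2_rank(rowsV + rowsW, n)

# enumerate all matrices M in Mat_{2,2} and all rank 1 updates a b^T
width = n - ell
for M in product(range(2 ** width), repeat=ell):
    for a in range(1, 2 ** ell):
        for b in range(1, 2 ** width):
            M2 = [M[i] ^ (b if (a >> i) & 1 else 0) for i in range(ell)]
            V, W = subspace_rows(list(M)), subspace_rows(M2)
            assert intersection_dim(V, W) == ell - 1  # Grassmann edge
```

Matrices and rank 1 updates are represented as row bitmasks here, purely for compactness.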

Next, we want to argue that the expansion of sets is preserved up to constant factors under the map φ. Towards this, we first show that G_B contains a constant fraction of the vertices of G(ℓ,n).

###### Lemma 3.4 (Projections of Subspaces).

Let G_B be defined as above for a basis B ∈ B(n). Then, for large enough ℓ and n, |G_B| ⩾ 0.28 · |V_ℓ|.

Further, let V ∈ G_B for some basis B. Then, at least a 1/2 fraction of the neighbors of V in G(ℓ,n) are contained in G_B.

###### Proof.

We can sample a random subspace V of dimension ℓ as follows: Choose ℓ uniformly random and independent points v₁, …, v_ℓ from F₂ⁿ. If they are linearly independent, let V be the subspace spanned by them.

We can estimate the probability that the sampled points are linearly independent as ∏_{i=0}^{ℓ−1} (1 − 2^{i−n}) ⩾ 1 − 2^{ℓ−n}.

Next, we estimate the probability that the projections to the first ℓ coordinates of the sampled vectors are linearly independent. By a similar reasoning as above, this probability is ∏_{i=1}^{ℓ} (1 − 2^{−i}) ⩾ 0.288 (the limit of this product for large ℓ).

By a union bound, thus, a random ℓ-dimensional subspace has a full-dimensional projection on the first ℓ coordinates with probability at least 0.288 − 2^{ℓ−n} ⩾ 0.28 for any n sufficiently larger than ℓ.

For the remaining part, assume that B = (e₁, …, eₙ), the standard basis. Notice that a random neighbor of V can be sampled as follows: choose a uniformly random basis for V, say u₁, …, u_ℓ. Replace u_ℓ by a uniformly random vector u′ outside of V in F₂ⁿ. Since V ∈ G_B, the projection of u₁, …, u_ℓ to the first ℓ coordinates is linearly independent. The neighbor V′ would thus satisfy the same property whenever u′ is such that the projection of u′ to the first ℓ coordinates is not in the span of the projections to the first ℓ coordinates of u₁, …, u_{ℓ−1}. The chance of this happening is 1/2, up to lower-order terms. This completes the proof. ∎

As a consequence of the above, we can now conclude that the preimages of non-expanding sets under φ are non-expanding in G(ℓ,n).

###### Lemma 3.5.

Let S ⊆ Mat_{ℓ,n−ℓ} be a subset satisfying Φ(S) ⩽ 1 − ε. Then, φ⁻¹(S) ⊆ V_ℓ satisfies Φ(φ⁻¹(S)) ⩽ 1 − ε/2.

###### Proof.

Let B be the basis used to construct φ = φ_B. Then, φ⁻¹(S) ⊆ G_B. By Lemma 3.4, at least 1/2 of the neighbors of any vertex of φ⁻¹(S) are contained in G_B. By Lemma 3.3 and the assumption Φ(S) ⩽ 1 − ε, at least an ε fraction of these neighbors are contained inside φ⁻¹(S). This finishes the proof. ∎

Via a similar application of Lemma 3.4, we can establish an appropriate converse.

###### Lemma 3.6.

Let T ⊆ V_ℓ be a subset satisfying Φ(T) ⩽ 1 − ε. Then, for a uniformly random choice of basis B for F₂ⁿ, in expectation, the set S = φ_B(T ∩ G_B) satisfies |S| ⩾ Ω(|T|) and Φ(S) ⩽ 1 − Ω(ε).

Finally, we show that r-nice sets in S_{ℓ,n−ℓ} get mapped to r-nice sets in G(ℓ,n) and vice-versa.

###### Lemma 3.7.

Let S be an r-nice set in S_{ℓ,n−ℓ}. Then, φ⁻¹(S) is the portion inside G_B of an r-nice set in G(ℓ,n). Conversely, if T is an r-nice set in G(ℓ,n), then φ(T ∩ G_B) = S for some r-nice set S in S_{ℓ,n−ℓ}.

###### Proof.

WLOG, assume that B is the standard basis. We will assume that T is the set of all subspaces in V_ℓ contained in a subspace W of co-dimension 1; the general case is analogous. Equivalently, if w is the vector defining the single linear equation of W, then ⟨v, w⟩ = 0 for every v ∈ V and every V ∈ T.

Consider the canonical basis v₁, …, v_ℓ for V ∈ T ∩ G_B - recall that this means that the projection of vᵢ to the first ℓ coordinates equals eᵢ. Thus, for every i, we can write vᵢ = (eᵢ, mᵢ) for some vectors mᵢ of n − ℓ dimensions.

Then, M = φ(V) is the matrix with rows m₁, …, m_ℓ by our construction. In particular, writing w = (w₁, w₂) for the split of w into the first ℓ and last n − ℓ coordinates, this means that M satisfies the constraint Mw₂ = w₁. Thus, we have shown that for every V ∈ T ∩ G_B, φ(V) satisfies a set of affine linear equations.

Conversely, observe that if any M satisfies the affine linear equation as above, the set of all vᵢ = (eᵢ, mᵢ) for i ⩽ ℓ, where mᵢ is the ith row of M, must span a subspace in T. This yields that φ(T ∩ G_B) is an r-nice set.

The converse follows from entirely similar ideas. Suppose S is an r-nice set in Mat_{ℓ,n−ℓ}. WLOG, we restrict to the case where S is the set of all matrices M satisfying a single constraint Mq = t; the case of several linearly independent constraints is analogous. Letting mᵢ be the rows of M, this implies that every vector in the span of (eᵢ, mᵢ) for i ⩽ ℓ satisfies the linear equation ⟨v, w⟩ = 0, where w = (t, q). This immediately yields that φ⁻¹(M) is contained in a subspace of co-dimension 1. Conversely, it is easy to check that for every subspace V ∈ G_B of dimension ℓ contained in {v : ⟨v, w⟩ = 0}, φ(V) satisfies the affine linear constraint above.

This completes the proof. ∎

### 3.2 Shortcode Test vs Grassmann Test

We now employ the homomorphism constructed in the previous subsection to relate the soundness and expansion hypotheses for the shortcode and Grassmann tests.

First, we show that the soundness hypothesis for the degree 2 shortcode consistency test implies the soundness hypothesis for the Grassmann consistency test, completing the proof of Theorem 1.10.

###### Lemma 3.8.

The degree 2 shortcode soundness hypothesis (Hypothesis 2.1) implies the Grassmann soundness hypothesis (Hypothesis 1.2).

###### Proof.

Let F be the labeling strategy assumed in Hypothesis 1.2, passing the Grassmann consistency test with probability δ. We will construct a labeling strategy F′ for S_{ℓ,n−ℓ} from F so that we can apply the conclusion of Hypothesis 2.1. We will first choose an embedding of the type we constructed before in order to construct F′.

Let B ∈ B(n) be chosen uniformly at random and let φ = φ_B be as in the previous subsection. For any V ∈ G_B, F(V) is a linear function on V. Let v₁, …, v_ℓ be the canonical basis for V, i.e., the projection of vᵢ to the first ℓ coordinates (when written in basis B) equals eᵢ. Set F′(M) = (F(V)(v₁), …, F(V)(v_ℓ)), where M = φ(V). Since φ is onto, this defines a labeling strategy for all of Mat_{ℓ,n−ℓ}.

Next, we claim that if F passes the Grassmann consistency test with probability δ, then F′ passes the degree 2 shortcode consistency test with probability Ω(δ).

Before going on to the proof of this claim, observe that it completes the proof of the lemma. To see this, we first apply Hypothesis 2.1 to conclude that there is an r-nice set T in Mat_{ℓ,n−ℓ} and an affine function M ↦ Mz + u such that this affine strategy agrees with F′ with probability at least δ′ for M in T. It is easy to construct an analogous linear strategy for the Grassmann consistency test: for any V ∈ G_B with the canonical basis v₁, …, v_ℓ defined above, set f(vᵢ) = (φ(V)z + u)ᵢ. Extend f linearly to the span of all such vectors, and finally extend f to all vectors by taking any linear extension. From Lemma 3.4, 1/2 of the neighbors of vertices in G_B are contained in G_B. From Lemma 3.7, φ⁻¹(T) is the portion inside G_B of some r-nice set T′ in G(ℓ,n). Finally, by an argument similar to the one in Lemma 3.4, with high probability over the draw of B, T′ has a constant fraction of its vertices inside G_B. Combining the above three observations yields that F passes the Grassmann consistency test, when restricted to the nice set T′, with probability Ω(δ′).

We now complete the proof of the claim. This follows immediately if we show that, for any edge (V, V′) chosen from G(ℓ,n), both endpoints lie in G_B with constant probability over the choice of B.

Without loss of generality, we assume that B is the standard basis. First, notice that V ∩ V′ is of dimension ℓ − 1 for all but an o(1) fraction of pairs (V, V′). Thus, we can assume that dim(V ∩ V′) = ℓ − 1.

Let R be the basis change matrix corresponding to B, let rᵢ be the ith row of R, and let R_ℓ be the matrix formed by taking the first ℓ rows of R. Fix an edge (V, V′) with dim(V ∩ V′) = ℓ − 1. Let u₁, …, u_{ℓ−1} be a basis for V ∩ V′. Let V = span(u₁, …, u_{ℓ−1}, u) and V′ = span(u₁, …, u_{ℓ−1}, u′) for some u, u′ that are linearly independent of each other and of any vector in V ∩ V′. We estimate the probability that both V and V′ lie in G_B. This is the probability that u₁, …, u_{ℓ−1}, u and u₁, …, u_{ℓ−1}, u′ are mapped by R_ℓ into tuples of linearly independent vectors. It is easy to check that this probability, over the random choice of B, is at least a positive constant. This proves the claim.

By taking n large enough compared to ℓ, this probability can be made larger than, say, 1/4. This finishes the proof. ∎

Next, we show that the Grassmann Expansion Hypothesis (Hypothesis 1.6) is equivalent to the Inverse Shortcode Hypothesis (Hypothesis 1.8) and complete the proof of Theorem 1.11.

###### Lemma 3.9.

The Grassmann Expansion Hypothesis (Hypothesis 1.6) is equivalent to the Inverse Shortcode Hypothesis (Hypothesis 1.8).

###### Proof.

First, we show that Hypothesis 1.6 implies Hypothesis 1.8.

Let S ⊆ Mat_{ℓ,n−ℓ} be such that Φ(S) ⩽ 1 − ε. Then, by Lemma 3.5, φ⁻¹(S) has expansion at most 1 − ε/2 in G(ℓ,n).

Applying the Grassmann expansion hypothesis (Hypothesis 1.6), we know that there exists an r-nice set T in G(ℓ,n) such that |φ⁻¹(S) ∩ T| ⩾ δ|T|. Further, since φ⁻¹(S) ⊆ G_B, we have φ⁻¹(S) ∩ T ⊆ T ∩ G_B, and so |φ⁻¹(S) ∩ T ∩ G_B| ⩾ δ|T| ⩾ δ|T ∩ G_B|. To finish, observe that by Lemma 3.7, φ(T ∩ G_B) is an r-nice set, say T′, in S_{ℓ,n−ℓ}. This shows that |S ∩ T′| ⩾ δ|T′|, completing the proof.

The proof of the other direction, that is, Hypothesis 1.8 implies Hypothesis 1.6, is analogous and relies on the use of Lemma 3.6. ∎

## References

• [BCS18] Mitali Bafna, Chi-Ning Chou, and Zhao Song, An exposition of dinur-khot-kindler-minzer-safra proof for the 2-to-2 games conjecture, http://boazbarak.org/dkkmsnotes.pdf.
• [BGH15] Boaz Barak, Parikshit Gopalan, Johan Håstad, Raghu Meka, Prasad Raghavendra, and David Steurer, Making the long code shorter, SIAM J. Comput. 44 (2015), no. 5, 1287–1324. MR 3416138
• [DKK16] Irit Dinur, Subhash Khot, Guy Kindler, Dor Minzer, and Muli Safra, Towards a proof of the 2-to-1 games conjecture?, Electronic Colloquium on Computational Complexity (ECCC) 23 (2016), 198.
• [DKK18]  , On non-optimally expanding sets in grassmann graphs, Electronic Colloquium on Computational Complexity (ECCC) 24 (2017), 94.
• [Kho02] Subhash Khot, On the power of unique 2-prover 1-round games, Proceedings of the 17th Annual IEEE Conference on Computational Complexity, Montréal, Québec, Canada, May 21-24, 2002, 2002, p. 25.
• [KMS17] Subhash Khot, Dor Minzer, and Muli Safra, On independent sets, 2-to-2 games, and grassmann graphs, Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, 2017, pp. 576–589.

• [KMS18]  , Pseudorandom sets in grassmann graph have near-perfect expansion, Electronic Colloquium on Computational Complexity (ECCC) 25 (2018), 6.