# Extractor-Based Time-Space Lower Bounds for Learning

A matrix M: A × X → {-1,1} corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2) ..., where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i,x). Assume that k, ℓ, r are such that any submatrix of M of at least 2^{-k}·|A| rows and at least 2^{-ℓ}·|X| columns has a bias of at most 2^{-r}. We show that any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k·ℓ), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{-Ω(r)}). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least Ω((log |X|) · (log |A|)) or an exponential number of samples, achieving a tight Ω((log |X|) · (log |A|)) lower bound on the size of the memory, rather than a bound of Ω(min{(log |X|)^2, (log |A|)^2}) obtained in previous works [R17, MM17b]. Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on [R17], which gave a general technique for proving memory-samples lower bounds.


## 1 Introduction

Can one prove unconditional lower bounds on the number of samples needed for learning under memory constraints? The study of the resources needed for learning under memory constraints was initiated by Shamir [S14] and by Steinhardt, Valiant and Wager [SVW16]. While the main motivation for studying this question comes from learning theory, the problem is also relevant to computational complexity and cryptography [R16, VV16, KRT16].

Steinhardt, Valiant and Wager conjectured that any algorithm for learning parities of size n requires either a memory of size Ω(n²) or an exponential number of samples. This conjecture was proven in [R16], showing for the first time a learning problem that is infeasible under super-linear memory constraints. Building on [R16], it was proved in [KRT16] that learning parities of sparsity ℓ is also infeasible under memory constraints that are super-linear in n, as long as ℓ = ω(log n / log log n). Consequently, learning linear-size DNF formulas, linear-size decision trees and logarithmic-size juntas were all proved to be infeasible under super-linear memory constraints [KRT16] (by a reduction from learning sparse parities).

Can one prove similar memory-samples lower bounds for other learning problems?

As in [R17], we represent a learning problem by a matrix. Let X, A be two finite sets of size larger than 1 (where X represents the concept-class that we are trying to learn and A represents the set of possible samples). Let M : A × X → {−1, 1} be a matrix. The matrix M represents the following learning problem: An unknown element x ∈ X was chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2) ..., where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x).

Let n = log₂ |X| and n′ = log₂ |A|.
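To make the setup concrete, the sample stream can be simulated directly from a matrix M. The sketch below uses a toy parity instance; the sizes and all names are illustrative choices, not objects from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the learning problem defined by a matrix M : A x X -> {-1, 1}.
# Here X is the set of concepts, A the set of queries, and
# M(a, x) = (-1)^{<a, x> mod 2} (parities on 4 bits, a hypothetical example).
n_bits = 4
X = [tuple((i >> j) & 1 for j in range(n_bits)) for i in range(2 ** n_bits)]
A = X  # queries range over the same index set in this example

def M(a, x):
    return (-1) ** (sum(ai * xi for ai, xi in zip(a, x)) % 2)

def sample_stream(x, m):
    """Yield m samples (a_i, b_i), with a_i uniform in A and b_i = M(a_i, x)."""
    for _ in range(m):
        a = A[rng.integers(len(A))]
        yield a, M(a, x)

x = X[rng.integers(len(X))]          # the unknown concept, chosen uniformly
stream = list(sample_stream(x, 10))  # the learner only ever sees this stream
assert all(b == M(a, x) for a, b in stream)
```

A memory-bounded learner would have to process this stream one sample at a time, which is exactly the regime the lower bounds address.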

A general technique for proving memory-samples lower bounds was given in [R17]. The main result of [R17] shows that if the norm of the matrix M is sufficiently small, then any learning algorithm for the corresponding learning problem requires either a memory of size at least Ω((min{n, n′})²), or an exponential number of samples. This gives a general memory-samples lower bound that applies for a large class of learning problems.

Independently of [R17], Moshkovitz and Moshkovitz also gave a general technique for proving memory-samples lower bounds [MM17a]. Their initial result was that if M has a (sufficiently strong) mixing property, then any learning algorithm for the corresponding learning problem requires either a memory of size at least Ω((min{n, n′})²) or an exponential number of samples [MM17a]. In a recent subsequent work [MM17b], they improved their result and obtained a theorem that is very similar to the one proved in [R17]. (The result of [MM17b] is stated in terms of a combinatorial mixing property, rather than matrix norm. The two notions are closely related; see in particular Corollary 5.1 and Note 5.1 in [BL06].)

#### Our Results

The results of [R17] and [MM17b] gave a lower bound of at most (min{n, n′})² on the size of the memory, whereas the best that one could hope for, in the information-theoretic setting (that is, in the setting where the learner's computational power is unbounded), is a lower bound of Ω(n · n′), which may be significantly larger in cases where n is significantly larger than n′, or vice versa.

In this work, we build on [R17] and obtain a general memory-samples lower bound that applies for a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of size at least Ω(n · n′) or an exponential number of samples.

Our result is stated in terms of the properties of the matrix M as a two-source extractor. Two-source extractors, first studied by Santha and Vazirani [SV84] and Chor and Goldreich [CG88], are central objects in the study of randomness and derandomization. We show that even a relatively weak two-source extractor implies a relatively strong memory-samples lower bound. We note that two-source extractors have been extensively studied in numerous works and there are known techniques for proving that certain matrices are relatively good two-source extractors.

Our main result can be stated as follows (Corollary 3): Assume that k, ℓ, r are such that any submatrix of M of at least 2^{−k} · |A| rows and at least 2^{−ℓ} · |X| columns has a bias of at most 2^{−r}. Then, any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k · ℓ), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{−Ω(r)}).

A more detailed result, in terms of the constants involved, is stated in Theorem 1 in terms of the properties of M as an L₂-Extractor, a new notion that we define in Definition 2.1, and which is closely related to the notion of two-source extractor. (The two notions are equivalent up to small changes in the parameters.)

All of our results (and all applications) hold even if the learner is only required to weakly learn x, that is, to output a hypothesis with a non-negligible correlation with the x-th column of the matrix M. We prove in Theorem 2 that even if the learner is only required to output a hypothesis that agrees with the x-th column of M on significantly more than half of the rows, the success probability is still exponentially small.

As in [R16, KRT16, R17], we model the learning algorithm by a branching program. A branching program is the strongest and most general model to use in this context. Roughly speaking, the model allows a learner with infinite computational power, and bounds only the memory size of the learner and the number of samples used.

As mentioned above, our result implies all previous memory-samples lower bounds, as well as new applications. In particular:

1. Parities: A learner tries to learn x ∈ {0,1}ⁿ, from random linear equations over 𝔽₂. It was proved in [R16] (and follows also from [R17]) that any learning algorithm requires either a memory of size Ω(n²) or an exponential number of samples. The same result follows by Corollary 3 and the fact that inner product is a good two-source extractor [CG88].

2. Sparse parities: A learner tries to learn x ∈ {0,1}ⁿ of sparsity ℓ, from random linear equations over 𝔽₂. In Section 5.2, we reprove the main results of [KRT16]: in every regime of the sparsity ℓ, any learning algorithm requires either a memory of size super-linear in n or a number of samples exponential in ℓ (the precise tradeoff, which depends on how ℓ compares to n, is stated in Section 5.2).

3. Learning from sparse linear equations: A learner tries to learn x ∈ {0,1}ⁿ, from random sparse linear equations, of sparsity ℓ, over 𝔽₂. In Section 5.3, we prove an analogous pair of bounds: any learning algorithm requires either a memory of size super-linear in n or a number of samples exponential in ℓ, with the precise tradeoff, in two regimes of ℓ, stated in Section 5.3.

4. Learning from low-degree equations: A learner tries to learn x ∈ {0,1}ⁿ, from random multilinear polynomial equations of degree at most d, over 𝔽₂. In Section 5.4, we prove that, for d up to a bound depending on n, any learning algorithm requires either a memory of size super-linear in n or a super-polynomial number of samples (the precise statement is given in Section 5.4).

5. Low-degree polynomials: A learner tries to learn an n-variate multilinear polynomial p of degree at most d over 𝔽₂, from random evaluations of p over 𝔽₂ⁿ. In Section 5.5, we prove that, for d up to a bound depending on n, any learning algorithm requires either a memory of size super-linear in the dimension of the concept class or a super-polynomial number of samples (the precise statement is given in Section 5.5).

6. Error-correcting codes: A learner tries to learn a codeword from random coordinates: Assume that M : A × X → {−1, 1} is such that for some ε > 0, any pair of different columns of M agree on at least a (1/2 − ε) and at most a (1/2 + ε) fraction of the coordinates. In Section 5.6, we prove that any learning algorithm for the learning problem corresponding to M requires either a memory of size Ω(k · ℓ) or 2^{Ω(r)} samples, for the parameters k, ℓ, r derived there from ε. We also point to a relation between our results and statistical-query dimension [K98, BFJKMR94].

7. Random matrices: Let A, X be finite sets, with |A| and |X| sufficiently large. Let M : A × X → {−1, 1} be a random matrix, where each entry is chosen independently and uniformly at random. Fix k = Θ(log |A|) and ℓ = Θ(log |X|), with a suitable r = Θ(min{k, ℓ}). With very high probability, any submatrix of M of at least 2^{−k}·|A| rows and at least 2^{−ℓ}·|X| columns has a bias of at most 2^{−r}. Thus, by Corollary 3, any learning algorithm for the learning problem corresponding to M requires either a memory of size Ω((log |A|) · (log |X|)), or 2^{Ω(r)} samples.

We note also that our results about learning from sparse linear equations have applications in bounded-storage cryptography, similarly to [R16, KRT16], but in a different range of the parameters. In particular, for an appropriate range of the sparsity parameter ℓ, our results give an encryption scheme that requires a private key of length n, with low time complexity per encryption/decryption of each bit, using a random access machine. The scheme is provably and unconditionally secure, as long as the attacker uses a memory of size below the corresponding lower bound and the scheme is used at most an exponential (in ℓ) number of times.
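The extractor hypothesis behind Corollary 3 (small bias of every large submatrix, as in item 7) can be sanity-checked numerically. The sketch below uses ad-hoc sizes and a loose threshold of our own choosing, not the paper's quantitative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random sign matrix M : A x X -> {-1, 1} with |A| = |X| = 256.
N = 256
M = rng.choice([-1, 1], size=(N, N))

# Sample submatrices containing half of the rows and half of the columns
# (corresponding to k = l = 1) and record the worst bias |average entry|.
worst = 0.0
for _ in range(200):
    rows = rng.choice(N, size=N // 2, replace=False)
    cols = rng.choice(N, size=N // 2, replace=False)
    worst = max(worst, abs(M[np.ix_(rows, cols)].mean()))

# Each submatrix averages 128 * 128 = 16384 independent signs, so its bias
# concentrates around 1/128; the threshold 0.1 is a very loose sanity bound.
assert worst < 0.1
```

Of course, a finite sample of submatrices can only falsify the property, never certify it; the paper's applications rely on proofs, not sampling.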

#### Techniques

Our proof follows the lines of the proof of [R17] and builds on that proof. The proof of [R17] considered the norm of the matrix M, and thus essentially reduced the entire matrix to only one parameter. In our proof, we consider the properties of M as a two-source extractor, and hence we have three parameters k, ℓ, r, rather than one. Considering these three parameters, rather than one, enables a more refined analysis, resulting in a stronger lower bound with a slightly simpler proof.

A proof outline is given in Section 3.

#### Motivation and Discussion

Many previous works studied the resources needed for learning, under certain information, communication or memory constraints (see in particular [S14, SVW16, R16, VV16, KRT16, MM17a, R17, MT17, MM17b] and the many references given there). A main message of some of these works is that for some learning problems, access to a relatively large memory is crucial. In other words, in some cases, learning is infeasible, due to memory constraints.

From the point of view of human learning, such results may help to explain the importance of memory in cognitive processes. From the point of view of machine learning, these results imply that a large class of learning algorithms cannot learn certain concept classes. In particular, this applies to any bounded-memory learning algorithm that considers the samples one by one. In addition, these works are related to computational complexity and have applications in cryptography.

#### Related Work

Independently of our work, Beame, Oveis Gharan and Yang [BOGY17] also gave a combinatorial property of a matrix M, that holds for a large class of matrices, and implies that any learning algorithm for the corresponding learning problem requires either a large memory or an exponential number of samples, in a suitable range of the parameters. Their property is based on a measure of how matrices amplify the 2-norms of probability distributions, which is more refined than the 2-norms of these matrices. Their proof also builds on [R17]. They also show, as an application, tight time-space lower bounds for learning low-degree polynomials, as well as other applications.

## 2 Preliminaries

Denote by U_X the uniform distribution over X. Denote by log the logarithm to base 2. Denote n := log |X|.

For a random variable Z and an event E, we denote by P_Z the distribution of the random variable Z, and we denote by P_{Z|E} the distribution of the random variable Z conditioned on the event E.

#### Viewing a Learning Problem as a Matrix

Let X, A be two finite sets of size larger than 1. Let n = log |X|.

Let M : A × X → {−1, 1} be a matrix. The matrix M corresponds to the following learning problem: There is an unknown element x ∈ X that was chosen uniformly at random. A learner tries to learn x from samples (a, b), where a ∈ A is chosen uniformly at random and b = M(a, x). That is, the learning algorithm is given a stream of samples, (a_1, b_1), (a_2, b_2) ..., where each a_i is uniformly distributed over A and for every i, b_i = M(a_i, x).

#### Norms and Inner Products

Let p ≥ 1. For a function f : X → ℝ, denote by ‖f‖_p the L_p norm of f, with respect to the uniform distribution over X, that is:

$$\|f\|_p = \left( \mathop{\mathbb{E}}_{x \in_R X}\left[\, |f(x)|^p \,\right] \right)^{1/p}.$$

For two functions f, g : X → ℝ, define their inner product with respect to the uniform distribution over X as

$$\langle f, g \rangle = \mathop{\mathbb{E}}_{x \in_R X}\left[\, f(x) \cdot g(x) \,\right].$$

For a matrix M : A × X → {−1, 1} and a row a ∈ A, we denote by M_a : X → {−1, 1} the function corresponding to the a-th row of M. Note that for a function f : X → ℝ, we have ⟨M_a, f⟩ = 2^{−n} · (M · f)(a), where in the product M · f we think of f as a vector in ℝ^X.
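These definitions translate directly into code. The sketch below implements the norms and inner products above for functions on a finite set X represented as arrays; the helper names are ours:

```python
import numpy as np

# Norms and inner products with respect to the uniform distribution over X,
# for functions f : X -> R stored as numpy arrays indexed by the elements of X.
def norm(f, p):
    """||f||_p = (E_{x in_R X}[|f(x)|^p])^{1/p}."""
    return float(np.mean(np.abs(np.asarray(f, dtype=float)) ** p) ** (1.0 / p))

def inner(f, g):
    """<f, g> = E_{x in_R X}[f(x) * g(x)]."""
    return float(np.mean(np.asarray(f, dtype=float) * np.asarray(g, dtype=float)))

# For a sign function f : X -> {-1, 1}, every L_p norm equals 1,
# and <f, f> = ||f||_2^2.
f = np.array([1.0, -1.0, -1.0, 1.0])
assert norm(f, 1) == norm(f, 2) == 1.0
assert inner(f, f) == norm(f, 2) ** 2
```

Note that averaging (rather than summing) over X is what makes ‖f‖₁ ≤ ‖f‖₂ and keeps distributions at ‖p‖₁ = 2^{−n}, as used repeatedly later.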

#### L2-Extractors and L∞-Extractors

###### Definition 2.1.

L₂-Extractor: Let X, A be two finite sets. A matrix M : A × X → {−1, 1} is a (k, ℓ)-L₂-Extractor with error 2^{−r}, if for every non-negative f : X → ℝ with ‖f‖₂/‖f‖₁ ≤ 2^ℓ there are at most 2^{−k} · |A| rows a in A with

$$\frac{\left| \langle M_a, f \rangle \right|}{\|f\|_1} \ge 2^{-r}.$$
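Definition 2.1 can be tested by brute force on small matrices. The sketch below checks the condition against a supplied family of test functions (names and parameters are ours; enumerating all f is infeasible, so this can only falsify the property, never certify it):

```python
import numpy as np

def bad_rows(M, f, r):
    """Rows a with |<M_a, f>| / ||f||_1 >= 2^{-r}, where <.,.> and ||.||_1 are
    taken w.r.t. the uniform distribution over X, as in Definition 2.1."""
    f = np.asarray(f, dtype=float)
    corr = np.abs(M @ f) / M.shape[1]        # <M_a, f> = E_x[M(a, x) f(x)]
    return np.nonzero(corr / np.mean(np.abs(f)) >= 2.0 ** (-r))[0]

def looks_like_l2_extractor(M, k, l, r, test_fns):
    """Test the (k, l)-L2-extractor condition with error 2^{-r} against a
    given family of non-negative functions."""
    for f in test_fns:
        f = np.asarray(f, dtype=float)
        if np.sqrt(np.mean(f ** 2)) / np.mean(f) <= 2.0 ** l:
            if len(bad_rows(M, f, r)) > 2.0 ** (-k) * M.shape[0]:
                return False
    return True

# Inner-product matrix on 4-bit strings: M(a, x) = (-1)^{<a, x> mod 2}.
n = 4
bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
M = (-1.0) ** ((bits @ bits.T) % 2)

# The all-ones function correlates only with the all-zeros row, which is
# within the 2^{-k} * |A| = 4 bad rows allowed for k = 2.
assert looks_like_l2_extractor(M, k=2, l=1, r=1, test_fns=[np.ones(2 ** n)])
```

By contrast, a constant matrix fails this test for the same parameters, since every row is "bad" for the all-ones function.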

Let X be a finite set. We describe a distribution over X as a function p : X → [0, 1] such that Σ_{x∈X} p(x) = 1. We say that a distribution p has min-entropy k if for all x ∈ X, we have p(x) ≤ 2^{−k}.

###### Definition 2.2.

L∞-Extractor: Let X, A be two finite sets. A matrix M : A × X → {−1, 1} is a (k, ℓ)-L∞-Extractor with error 2^{−r}, if for every distribution p_a over A with min-entropy at least log(|A|) − k and every distribution p_x over X with min-entropy at least log(|X|) − ℓ,

$$\left| \sum_{a' \in A} \sum_{x' \in X} p_a(a') \cdot p_x(x') \cdot M(a', x') \right| \le 2^{-r}.$$
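The two quantities appearing in Definition 2.2, min-entropy and the two-source bias, are easy to compute exactly on small examples. A minimal sketch (helper names are ours) on the inner-product matrix:

```python
import numpy as np

def min_entropy(p):
    """Min-entropy of a distribution p over a finite set: min_x log2(1/p(x))."""
    return float(-np.log2(np.asarray(p, dtype=float).max()))

def two_source_bias(M, pa, px):
    """|sum_{a', x'} pa(a') px(x') M(a', x')| -- the quantity that
    Definition 2.2 requires to be at most 2^{-r}."""
    return float(abs(pa @ M @ px))

# Inner-product matrix on 3-bit strings, probed with two uniform sources.
n = 3
bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
M = (-1.0) ** ((bits @ bits.T) % 2)
u = np.full(2 ** n, 2.0 ** (-n))  # uniform distribution: min-entropy n

assert min_entropy(u) == n
# Every row except a' = 0 is balanced, so the only contribution comes from
# the all-zeros row: the bias is exactly 2^{-n} = 0.125 here.
assert two_source_bias(M, u, u) == 0.125
```

The residual 2^{−n} bias comes from the degenerate all-zeros query; restricting to non-zero rows is what makes inner product a good two-source extractor [CG88].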

#### Branching Program for a Learning Problem

In the following definition, we model the learner for the learning problem that corresponds to the matrix M by a branching program.

###### Definition 2.3.

Branching Program for a Learning Problem: A branching program of length m and width d, for learning, is a directed (multi) graph with vertices arranged in m + 1 layers containing at most d vertices each. In the first layer, that we think of as layer 0, there is only one vertex, called the start vertex. A vertex of outdegree 0 is called a leaf. All vertices in the last layer are leaves (but there may be additional leaves). Every non-leaf vertex in the program has 2|A| outgoing edges, labeled by elements (a, b) ∈ A × {−1, 1}, with exactly one edge labeled by each such (a, b), and all these edges going into vertices in the next layer. Each leaf v in the program is labeled by an element x̃(v) ∈ X, that we think of as the output of the program on that leaf.

Computation-Path: The samples (a_1, b_1), ..., (a_m, b_m) ∈ A × {−1, 1} that are given as input define a computation-path in the branching program, by starting from the start vertex and following at step i the edge labeled by (a_i, b_i), until reaching a leaf. The program outputs the label x̃(v) of the leaf v reached by the computation-path.

Success Probability: The success probability of the program is the probability that x̃ = x, where x̃ is the element that the program outputs, and the probability is over x, a_1, ..., a_m (where x is uniformly distributed over X and a_1, ..., a_m are uniformly distributed over A, and for every i, b_i = M(a_i, x)).
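A toy rendering of Definition 2.3 may help fix intuitions. In this model the graph, not the code, carries the learner's memory: each layer may hold at most `width` vertices, and each non-leaf vertex has one outgoing edge per labeled sample (a, b). The program below is a hypothetical illustration, not a construction from the paper:

```python
# A toy branching program for a learning problem (Definition 2.3).

class Vertex:
    def __init__(self, label=None):
        self.edges = {}     # (a, b) -> vertex in the next layer
        self.label = label  # for leaves: the output element of X

def run(start, samples):
    """Follow the computation-path: starting from the start vertex, take at
    step i the edge labeled by (a_i, b_i), until a leaf is reached."""
    v = start
    for a, b in samples:
        if not v.edges:     # reached a leaf before the stream ended
            break
        v = v.edges[(a, b)]
    return v.label

# A length-1 program over A = {0, 1}, X = {-1, 1}: it outputs b if the first
# query is a = 1, and guesses 1 otherwise.
start = Vertex()
for a in (0, 1):
    for b in (-1, 1):
        start.edges[(a, b)] = Vertex(label=b if a == 1 else 1)

assert run(start, [(1, -1)]) == -1
assert run(start, [(0, -1)]) == 1
```

The width bound limits how much the program can remember about the stream, while the program itself may be arbitrary, which is why the model captures learners with unbounded computational power.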

## 3 Overview of the Proof

The proof follows the lines of the proof of [R17] and builds on that proof.

Assume that M : A × X → {−1, 1} is a (k, ℓ)-L₂-extractor with error 2^{−r′}, and let r = Ω(min{k, ℓ, r′}). Let B be a branching program for the learning problem that corresponds to the matrix M. Assume for a contradiction that B is of length 2^r and width 2^{c·k·ℓ}, where c is a small constant.

We define the truncated-path, T, to be the same as the computation-path of B, except that it sometimes stops before reaching a leaf. Roughly speaking, T stops before reaching a leaf if certain "bad" events occur. Nevertheless, we show that the probability that T stops before reaching a leaf is negligible, so we can think of T as almost identical to the computation-path.

For a vertex v of B, we denote by E_v the event that T reaches the vertex v. We denote by Pr(v) = Pr(E_v) the probability for E_v (where the probability is over x, a_1, ..., a_m), and we denote by P_{x|v} = P_{x|E_v} the distribution of the random variable x conditioned on the event E_v. Similarly, for an edge e of the branching program B, let E_e be the event that T traverses the edge e. Denote Pr(e) = Pr(E_e), and P_{x|e} = P_{x|E_e}.

A vertex v of B is called significant if

$$\left\| P_{x|v} \right\|_2 > 2^{\ell} \cdot 2^{-n}.$$

Roughly speaking, this means that conditioning on the event that T reaches the vertex v, a non-negligible amount of information is known about x. In order to guess x with a non-negligible success probability, T must reach a significant vertex. Lemma 4.1 shows that the probability that T reaches any significant vertex is negligible, and thus the main result follows.

To prove Lemma 4.1, we show that for every fixed significant vertex s, the probability that T reaches s is at most roughly 2^{−2k·ℓ} (which is smaller than one over the number of vertices in B). Hence, we can use a union bound to prove the lemma.

The proof that the probability that T reaches s is extremely small is the main part of the proof. To that end, we use the following functions to measure the progress made by the branching program towards reaching s.

Let L_i be the set of vertices v in layer-i of B, such that Pr(v) > 0. Let Γ_i be the set of edges e from layer-i of B to layer-(i+1) of B, such that Pr(e) > 0. Let

$$Z_i = \sum_{v \in L_i} \Pr(v) \cdot \left\langle P_{x|v}, P_{x|s} \right\rangle^{k},$$

$$Z'_i = \sum_{e \in \Gamma_i} \Pr(e) \cdot \left\langle P_{x|e}, P_{x|s} \right\rangle^{k}.$$

We think of Z_i as measuring the progress made by the branching program towards reaching a state with distribution similar to P_{x|s}.

We show that each Z_{i+1} may only be negligibly larger than Z_i. Hence, since it is easy to calculate that Z_0 = ⟨U_X, P_{x|s}⟩^k = 2^{−2nk}, it follows that Z_i is close to 2^{−2nk}, for every i. On the other hand, if s is in layer-i then Z_i is at least Pr(s) · ⟨P_{x|s}, P_{x|s}⟩^k = Pr(s) · ‖P_{x|s}‖₂^{2k}. Thus, Pr(s) · ‖P_{x|s}‖₂^{2k} cannot be much larger than 2^{−2nk}. Since s is significant, ‖P_{x|s}‖₂ > 2^ℓ · 2^{−n}, and hence Pr(s) is at most roughly 2^{−2k·ℓ}.

The proof that Z_{i+1} may only be negligibly larger than Z_i is done in two steps: Claim 4.12 shows by a simple convexity argument that Z_{i+1} ≤ Z'_i. The hard part, that is done in Claim 4.10 and Claim 4.11, is to prove that Z'_i may only be negligibly larger than Z_i.

For this proof, we define for every vertex v, the set Γ_out(v) of all edges that are going out of v, such that Pr(e) > 0. Claim 4.10 shows that for every vertex v,

$$\sum_{e \in \Gamma_{\mathrm{out}}(v)} \Pr(e) \cdot \left\langle P_{x|e}, P_{x|s} \right\rangle^{k}$$

may only be negligibly higher than

$$\Pr(v) \cdot \left\langle P_{x|v}, P_{x|s} \right\rangle^{k}.$$

For the proof of Claim 4.10, which is the hardest proof in the paper, and the most important place where our proof deviates from (and simplifies) the proof of [R17], we consider the function f = P_{x|v} · P_{x|s}. We first show how to bound ‖f‖₂/‖f‖₁. We then consider two cases: If ⟨P_{x|v}, P_{x|s}⟩ is negligible, then Pr(v) · ⟨P_{x|v}, P_{x|s}⟩^k is negligible and doesn't contribute much, and we show that for every edge e ∈ Γ_out(v), ⟨P_{x|e}, P_{x|s}⟩ is also negligible and doesn't contribute much. If ⟨P_{x|v}, P_{x|s}⟩ is non-negligible, we use the bound on ‖f‖₂/‖f‖₁ and the assumption that M is a (k, ℓ)-L₂-extractor to show that for almost all edges e ∈ Γ_out(v), we have that ⟨P_{x|e}, P_{x|s}⟩ is very close to ⟨P_{x|v}, P_{x|s}⟩. Only an exponentially small (2^{−k}) fraction of edges are "bad" and give a significantly larger ⟨P_{x|e}, P_{x|s}⟩.

The reason that in the definitions of Z_i and Z'_i we raised the inner products to the power of k is that this is the largest power for which the contribution of the "bad" edges is still small (as their fraction is 2^{−k}).

This outline oversimplifies many details. Let us briefly mention two of them. First, it is not so easy to bound ‖f‖₂/‖f‖₁. We do that by bounding ‖P_{x|v}‖₂ and ‖P_{x|s}‖_∞. In order to bound ‖P_{x|v}‖₂, we force T to stop whenever it reaches a significant vertex (and thus we are able to bound ‖P_{x|v}‖₂ for every vertex v reached by T). In order to bound ‖P_{x|s}‖_∞, we force T to stop whenever P_{x|v}(x) is large, which allows us to consider only the "bounded" part of P_{x|s}. (This is related to the technique of flattening a distribution that was used in [KR13]). Second, some edges are so "bad" that their contribution to Z'_i is huge, so they cannot be ignored. We force T to stop before traversing any such edge. (This is related to an idea that was used in [KRT16] of analyzing separately paths that traverse "bad" edges). We show that the total probability that T stops before reaching a leaf is negligible.

## 4 Main Result

###### Theorem 1.

Let c > 0 be a sufficiently small constant. Fix γ to be such that 0 < γ < 1.

Let X, A be two finite sets of size larger than 1, and let n = log₂ |X|. Let M : A × X → {−1, 1} be a matrix which is a (k′, ℓ′)-L₂-extractor with error 2^{−r′}, for sufficiently large k′, ℓ′ and r′ (by "sufficiently large" we mean larger than some constant that depends on γ), where ℓ′ ≤ n. Let

$$r := \min\left\{ \frac{r'}{2},\ \frac{(1-\gamma)k'}{2},\ \frac{(1-\gamma)\ell'}{2} - 1 \right\}. \qquad (1)$$

Let B be a branching program of length at most 2^r and width at most 2^{c·k′·ℓ′} for the learning problem that corresponds to the matrix M. Then, the success probability of B is at most O(2^{−r}).

###### Proof.

Let

$$k := \gamma k' \quad \text{and} \quad \ell := \gamma \ell' / 3. \qquad (2)$$

Note that by the assumption that k′, ℓ′ and r′ are sufficiently large, we get that k and ℓ are also sufficiently large. Since ℓ′ ≤ n, we have ℓ ≤ γn/3. Thus,

$$r \le \frac{n}{2} - \ell. \qquad (3)$$

Let B be a branching program of length 2^r and width 2^{c·k′·ℓ′} for the learning problem that corresponds to the matrix M. We will show that the success probability of B is at most O(2^{−r}).

### 4.1 The Truncated-Path and Additional Definitions and Notation

We will define the truncated-path, T, to be the same as the computation-path of B, except that it sometimes stops before reaching a leaf. Formally, we define T, together with several other definitions and notations, by induction on the layers of the branching program B.

Assume that we already defined the truncated-path T, until it reaches layer-i of B. For a vertex v in layer-i of B, let E_v be the event that T reaches the vertex v. For simplicity, we denote by Pr(v) = Pr(E_v) the probability for E_v (where the probability is over x, a_1, ..., a_m), and we denote by P_{x|v} = P_{x|E_v} the distribution of the random variable x conditioned on the event E_v.

There will be three cases in which the truncated-path T stops on a non-leaf vertex v:

1. If v is a, so called, significant vertex, where the L₂ norm of P_{x|v} is non-negligible. (Intuitively, this means that conditioned on the event that T reaches v, a non-negligible amount of information is known about x).

2. If P_{x|v}(x) is non-negligible. (Intuitively, this means that conditioned on the event that T reaches v, the correct element x could have been guessed with a non-negligible probability).

3. If |(M · P_{x|v})(a_{i+1})| is non-negligible. (Intuitively, this means that T is about to traverse a "bad" edge, which is traversed with a non-negligibly higher or lower probability than other edges).

Next, we describe these three cases more formally.

#### Significant Vertices

We say that a vertex v in layer-i of B is significant if

$$\left\| P_{x|v} \right\|_2 > 2^{\ell} \cdot 2^{-n}.$$

#### Significant Values

Even if v is not significant, P_{x|v} may have relatively large values. For a vertex v in layer-i of B, denote by Sig(v) the set of all x′ ∈ X, such that

$$P_{x|v}(x') > 2^{2\ell + 2r} \cdot 2^{-n}.$$

For a vertex v in layer-i of B, denote by Bad(v) the set of all α ∈ A, such that

$$\left| (M \cdot P_{x|v})(\alpha) \right| \ge 2^{-r'}.$$

#### The Truncated-Path T

We define T by induction on the layers of the branching program B. Assume that we already defined T until it reaches a vertex v in layer-i of B. The path T stops on v if (at least) one of the following occurs:

1. v is significant.

2. x ∈ Sig(v).

3. a_{i+1} ∈ Bad(v).

4. v is a leaf.

Otherwise, T proceeds by following the edge labeled by (a_{i+1}, b_{i+1}) (same as the computation-path).

### 4.2 Proof of Theorem 1

Since T follows the computation-path of B, except that it sometimes stops before reaching a leaf, the success probability of B is bounded (from above) by the probability that T stops before reaching a leaf, plus the probability that T reaches a leaf v with x̃(v) = x.

The main lemma needed for the proof of Theorem 1 is Lemma 4.1, which shows that the probability that T reaches a significant vertex is at most O(2^{−r}).

###### Lemma 4.1.

The probability that T reaches a significant vertex is at most O(2^{−r}).

Lemma 4.1 is proved in Section 4.3. We will now show how the proof of Theorem 1 follows from that lemma.

Lemma 4.1 shows that the probability that T stops on a non-leaf vertex because of the first reason (i.e., that the vertex is significant) is small. The next two claims imply that the probabilities that T stops on a non-leaf vertex because of the second and third reasons are also small.

###### Claim 4.2.

If v is a non-significant vertex of B then

$$\Pr_x\left[ x \in \mathrm{Sig}(v) \,\middle|\, E_v \right] \le 2^{-2r}.$$
###### Proof.

Since v is not significant,

$$\mathop{\mathbb{E}}_{x' \sim P_{x|v}}\left[ P_{x|v}(x') \right] = \sum_{x' \in X} P_{x|v}(x')^2 = 2^{n} \cdot \mathop{\mathbb{E}}_{x' \in_R X}\left[ P_{x|v}(x')^2 \right] \le 2^{2\ell} \cdot 2^{-n}.$$

Hence, by Markov's inequality,

$$\Pr_{x' \sim P_{x|v}}\left[ P_{x|v}(x') > 2^{2r} \cdot 2^{2\ell} \cdot 2^{-n} \right] \le 2^{-2r}.$$

Since conditioned on E_v, the distribution of x is P_{x|v}, we obtain

$$\Pr_x\left[ x \in \mathrm{Sig}(v) \,\middle|\, E_v \right] = \Pr_x\left[ P_{x|v}(x) > 2^{2r} \cdot 2^{2\ell} \cdot 2^{-n} \,\middle|\, E_v \right] \le 2^{-2r}. \qquad ∎$$
###### Claim 4.3.

If v is a non-significant vertex of B then

$$\Pr_{\alpha \in_R A}\left[ \alpha \in \mathrm{Bad}(v) \right] \le 2^{-2r}.$$

###### Proof.

Since v is not significant, ‖P_{x|v}‖₂ ≤ 2^ℓ · 2^{−n}. Since P_{x|v} is a distribution, ‖P_{x|v}‖₁ = 2^{−n}. Thus,

$$\frac{\left\| P_{x|v} \right\|_2}{\left\| P_{x|v} \right\|_1} \le 2^{\ell} \le 2^{\ell'}.$$

Since M is a (k′, ℓ′)-L₂-extractor with error 2^{−r′}, there are at most 2^{−k′} · |A| elements α ∈ A with

$$\left| \left\langle M_\alpha, P_{x|v} \right\rangle \right| \ge 2^{-r'} \cdot \left\| P_{x|v} \right\|_1 = 2^{-r'} \cdot 2^{-n}.$$

The claim follows since α is uniformly distributed over A and since 2^{−k′} ≤ 2^{−2r} (Equation (1)). ∎

We can now use Lemma 4.1, Claim 4.2 and Claim 4.3 to prove that the probability that T stops before reaching a leaf is at most O(2^{−r}). Lemma 4.1 shows that the probability that T reaches a significant vertex, and hence stops because of the first reason, is at most O(2^{−r}). Assuming that T doesn't reach any significant vertex (in which case it would have stopped because of the first reason), Claim 4.2 shows that in each step, the probability that T stops because of the second reason is at most 2^{−2r}. Taking a union bound over the at most 2^r steps, the total probability that T stops because of the second reason is at most 2^{−r}. In the same way, assuming that T doesn't reach any significant vertex (in which case it would have stopped because of the first reason), Claim 4.3 shows that in each step, the probability that T stops because of the third reason is at most 2^{−2r}. Again, taking a union bound over the at most 2^r steps, the total probability that T stops because of the third reason is at most 2^{−r}. Thus, the total probability that T stops (for any reason) before reaching a leaf is at most O(2^{−r}).

Recall that if T doesn't stop before reaching a leaf, it just follows the computation-path of B. Recall also that by Lemma 4.1, the probability that T reaches a significant leaf is at most O(2^{−r}). Thus, to bound (from above) the success probability of B by O(2^{−r}), it remains to bound the probability that T reaches a non-significant leaf v with x̃(v) = x. Claim 4.4 shows that for any non-significant leaf v, conditioned on the event that T reaches v, the probability that x̃(v) = x is at most 2^{−r}, which completes the proof of Theorem 1.

###### Claim 4.4.

If v is a non-significant leaf of B then

$$\Pr\left[ \tilde{x}(v) = x \,\middle|\, E_v \right] \le 2^{-r}.$$
###### Proof.

Since v is not significant,

$$\mathop{\mathbb{E}}_{x' \in_R X}\left[ P_{x|v}(x')^2 \right] \le 2^{2\ell} \cdot 2^{-2n}.$$

Hence, for every x′ ∈ X,

$$\Pr\left[ x = x' \,\middle|\, E_v \right] = P_{x|v}(x') \le 2^{\ell} \cdot 2^{-n/2} \le 2^{-r},$$

since r ≤ n/2 − ℓ (Equation (3)). In particular,

$$\Pr\left[ \tilde{x}(v) = x \,\middle|\, E_v \right] \le 2^{-r}. \qquad ∎$$

This completes the proof of Theorem 1. ∎

### 4.3 Proof of Lemma 4.1

###### Proof.

We need to prove that the probability that T reaches any significant vertex is at most O(2^{−r}). Let s be a significant vertex of B. We will bound from above the probability that T reaches s, and then use a union bound over all significant vertices of B. Interestingly, the upper bound on the width of B is used only in the union bound.

#### The Distributions Px|v and Px|e

Recall that for a vertex v of B, we denote by E_v the event that T reaches the vertex v. For simplicity, we denote by Pr(v) = Pr(E_v) the probability for E_v (where the probability is over x, a_1, ..., a_m), and we denote by P_{x|v} = P_{x|E_v} the distribution of the random variable x conditioned on the event E_v.

Similarly, for an edge e of the branching program B, let E_e be the event that T traverses the edge e. Denote Pr(e) = Pr(E_e) (where the probability is over x, a_1, ..., a_m), and P_{x|e} = P_{x|E_e}.

###### Claim 4.5.

For any edge e of B, labeled by (a, b), going out of a vertex v, such that Pr(e) > 0, for any x′ ∈ X,

$$P_{x|e}(x') = \begin{cases} 0 & \text{if } x' \in \mathrm{Sig}(v) \text{ or } M(a, x') \ne b, \\ P_{x|v}(x') \cdot c_e^{-1} & \text{if } x' \notin \mathrm{Sig}(v) \text{ and } M(a, x') = b, \end{cases}$$

where c_e is a normalization factor that satisfies

$$c_e \ge \frac{1}{2} - 2 \cdot 2^{-2r}.$$
###### Proof.

Let e be an edge of B, labeled by (a, b), going out of a vertex v, and such that Pr(e) > 0. Since Pr(e) > 0, the vertex v is not significant (as otherwise T always stops on v and hence Pr(e) = 0). Also, since Pr(e) > 0, we know that a ∉ Bad(v) (as otherwise T never traverses e and hence Pr(e) = 0).

If T reaches v, it traverses the edge e if and only if: x ∉ Sig(v) (as otherwise T stops on v) and a_{i+1} = a and b_{i+1} = b. Therefore, for any x′ ∈ X,

$$P_{x|e}(x') = \begin{cases} 0 & \text{if } x' \in \mathrm{Sig}(v) \text{ or } M(a, x') \ne b, \\ P_{x|v}(x') \cdot c_e^{-1} & \text{if } x' \notin \mathrm{Sig}(v) \text{ and } M(a, x') = b, \end{cases}$$

where c_e is a normalization factor, given by

$$c_e = \sum_{\{x' :\; x' \notin \mathrm{Sig}(v) \,\wedge\, M(a, x') = b\}} P_{x|v}(x') = \Pr_x\left[ (x \notin \mathrm{Sig}(v)) \wedge (M(a, x) = b) \,\middle|\, E_v \right].$$

Since v is not significant, by Claim 4.2,

$$\Pr_x\left[ x \in \mathrm{Sig}(v) \,\middle|\, E_v \right] \le 2^{-2r}.$$

Since a ∉ Bad(v),

$$\left| \Pr_x\left[ M(a, x) = 1 \,\middle|\, E_v \right] - \Pr_x\left[ M(a, x) = -1 \,\middle|\, E_v \right] \right| = \left| (M \cdot P_{x|v})(a) \right| \le 2^{-r'},$$

and hence

$$\Pr_x\left[ M(a, x) \ne b \,\middle|\, E_v \right] \le \frac{1}{2} + 2^{-r'}.$$

Hence, by the union bound,

$$c_e = \Pr_x\left[ (x \notin \mathrm{Sig}(v)) \wedge (M(a, x) = b) \,\middle|\, E_v \right] \ge \frac{1}{2} - 2^{-r'} - 2^{-2r} \ge \frac{1}{2} - 2 \cdot 2^{-2r}$$

(where the last inequality follows since 2^{−r′} ≤ 2^{−2r}, by Equation (1)). ∎
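The conditioning step established in Claim 4.5 is a simple Bayesian update: zero out the concepts in Sig(v) and those inconsistent with the observed sample, then renormalize by c_e. A numerical sketch on a hypothetical toy instance (sizes and names are ours):

```python
import numpy as np

def condition_on_edge(P_v, row, b, sig_mask):
    """The update of Claim 4.5: P_{x|e} equals P_{x|v} restricted to the x'
    with x' not in Sig(v) and M(a, x') = b, renormalized by c_e. `row` is
    the a-th row of M and `sig_mask` marks Sig(v)."""
    keep = (~sig_mask) & (row == b)
    c_e = float(P_v[keep].sum())        # the normalization factor c_e
    return np.where(keep, P_v, 0.0) / c_e, c_e

# Tiny instance: 8 concepts, an unbiased row of signs, Sig(v) empty,
# and P_{x|v} uniform.
row = np.array([1, -1, 1, 1, -1, -1, 1, -1])
P_v = np.full(8, 1 / 8)
sig = np.zeros(8, dtype=bool)

P_e, c_e = condition_on_edge(P_v, row, b=1, sig_mask=sig)
assert c_e == 0.5                      # the row is unbiased, so c_e = 1/2
assert P_e.sum() == 1.0                # P_{x|e} is again a distribution
assert all(P_e[row == -1] == 0.0)      # inconsistent concepts get mass 0
```

The claim's point is precisely that c_e stays close to 1/2 whenever v is not significant and a ∉ Bad(v), so a single sample never re-weights the posterior by more than roughly a factor of 2.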

#### Bounding the Norm of Px|s

We will show that ‖P_{x|s}‖₂ cannot be too large. Towards this, we will first prove that for every edge e of B that is traversed by T with probability larger than zero, ‖P_{x|e}‖₂ cannot be too large.

###### Claim 4.6.

For any edge e of B, such that Pr(e) > 0,

$$\left\| P_{x|e} \right\|_2 \le 4 \cdot 2^{\ell} \cdot 2^{-n}.$$
###### Proof.

Let e be an edge of B, labeled by (a, b), going out of a vertex v, and such that Pr(e) > 0. Since Pr(e) > 0, the vertex v is not significant (as otherwise T always stops on v and hence Pr(e) = 0). Thus,

$$\left\| P_{x|v} \right\|_2 \le 2^{\ell} \cdot 2^{-n}.$$

By Claim 4.5, for any x′ ∈ X,

$$P_{x|e}(x') = \begin{cases} 0 & \text{if } x' \in \mathrm{Sig}(v) \text{ or } M(a, x') \ne b, \\ P_{x|v}(x') \cdot c_e^{-1} & \text{if } x' \notin \mathrm{Sig}(v) \text{ and } M(a, x') = b, \end{cases}$$

where c_e satisfies

$$c_e \ge \frac{1}{2} - 2 \cdot 2^{-2r} > \frac{1}{4}$$

(where the last inequality holds because we assume that k′, ℓ′, r′, and thus r, are sufficiently large). Thus,

$$\left\| P_{x|e} \right\|_2 \le c_e^{-1} \cdot \left\| P_{x|v} \right\|_2 \le 4 \cdot 2^{\ell} \cdot 2^{-n}. \qquad ∎$$
###### Claim 4.7.
$$\left\| P_{x|s} \right\|_2 \le 4 \cdot 2^{\ell} \cdot 2^{-n}.$$
###### Proof.

Let Γ_in(s) be the set of all edges e of B that are going into s, such that Pr(e) > 0. Note that

$$\sum_{e \in \Gamma_{\mathrm{in}}(s)} \Pr(e) = \Pr(s).$$

By the law of total probability, for every x′ ∈ X,

$$P_{x|s}(x') = \sum_{e \in \Gamma_{\mathrm{in}}(s)} \frac{\Pr(e)}{\Pr(s)} \cdot P_{x|e}(x'),$$

and hence, by Jensen's inequality,

$$P_{x|s}(x')^2 \le \sum_{e \in \Gamma_{\mathrm{in}}(s)} \frac{\Pr(e)}{\Pr(s)} \cdot P_{x|e}(x')^2.$$