1 Introduction
Can one prove unconditional lower bounds on the number of samples needed for learning, under memory constraints? The study of the resources needed for learning under memory constraints was initiated by Shamir [S14] and by Steinhardt, Valiant and Wager [SVW16]. While the main motivation for studying this question comes from learning theory, the problem is also relevant to computational complexity and cryptography [R16, VV16, KRT16].
Steinhardt, Valiant and Wager conjectured that any algorithm for learning parities of size $n$ requires either a memory of size $\Omega(n^2)$ or an exponential number of samples. This conjecture was proven in [R16], showing for the first time a learning problem that is infeasible under super-linear memory constraints. Building on [R16], it was proved in [KRT16] that learning parities of sparsity $\ell$ is also infeasible under memory constraints that are super-linear in $n$, as long as $\ell \geq \omega(\log n / \log \log n)$. Consequently, learning linear-size DNF formulas, linear-size decision trees and logarithmic-size juntas were all proved to be infeasible under super-linear memory constraints [KRT16] (by a reduction from learning sparse parities).

Can one prove similar memory-samples lower bounds for other learning problems?
As in [R17], we represent a learning problem by a matrix. Let $X$, $A$ be two finite sets of size larger than 1 (where $X$ represents the concept class that we are trying to learn and $A$ represents the set of possible samples). Let $M : X \times A \to \{-1,1\}$ be a matrix. The matrix $M$ represents the following learning problem: An unknown element $x \in X$ was chosen uniformly at random. A learner tries to learn $x$ from a stream of samples, $(a_1,b_1), (a_2,b_2), \ldots$, where for every $t$, $a_t$ is chosen uniformly at random from $A$ and $b_t = M(x, a_t)$.
Let and .
A general technique for proving memory-samples lower bounds was given in [R17]. The main result of [R17] shows that if the norm of the matrix is sufficiently small, then any learning algorithm for the corresponding learning problem requires either a memory of size at least , or an exponential number of samples. This gives a general memory-samples lower bound that applies to a large class of learning problems.
Independently of [R17], Moshkovitz and Moshkovitz also gave a general technique for proving memory-samples lower bounds [MM17a]. Their initial result was that if the matrix has a (sufficiently strong) mixing property, then any learning algorithm for the corresponding learning problem requires either a memory of size at least or an exponential number of samples [MM17a]. In a recent subsequent work [MM17b], they improved their result and obtained a theorem that is very similar to the one proved in [R17]. (The result of [MM17b] is stated in terms of a combinatorial mixing property, rather than a matrix norm. The two notions are closely related; see in particular Corollary 5.1 and Note 5.1 in [BL06].)
Our Results
The results of [R17] and [MM17b] gave a lower bound of at most on the size of the memory, whereas the best that one could hope for, in the information-theoretic setting (that is, in the setting where the learner’s computational power is unbounded), is a lower bound of , which may be significantly larger in cases where is significantly larger than , or vice versa.
In this work, we build on [R17] and obtain a general memory-samples lower bound that applies to a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of size at least or an exponential number of samples.
Our result is stated in terms of the properties of the matrix as a two-source extractor. Two-source extractors, first studied by Santha and Vazirani [SV84] and Chor and Goldreich [CG88], are central objects in the study of randomness and derandomization. We show that even a relatively weak two-source extractor implies a relatively strong memory-samples lower bound. We note that two-source extractors have been extensively studied in numerous works, and there are known techniques for proving that certain matrices are relatively good two-source extractors.
Our main result can be stated as follows (Corollary 3): Assume that $k, \ell, r$ are such that any submatrix of $M$ of at least $2^{-k} \cdot |X|$ rows and at least $2^{-\ell} \cdot |A|$ columns has a bias of at most $2^{-r}$. Then, any learning algorithm for the learning problem corresponding to $M$ requires either a memory of size at least $\Omega(k \cdot \ell)$, or at least $2^{\Omega(r)}$ samples. The result holds even if the learner has an exponentially small success probability (of $2^{-\Omega(r)}$).
A more detailed result, in terms of the constants involved, is stated in Theorem 1 in terms of the properties of $M$ as an $L_2$-extractor, a new notion that we define in Definition 2.1 and that is closely related to the notion of a two-source extractor. (The two notions are equivalent up to small changes in the parameters.)
All of our results (and all applications) hold even if the learner is only required to weakly learn $x$, that is, to output a hypothesis with a non-negligible correlation with the column of the matrix $M$ corresponding to $x$. We prove in Theorem 2 that even if the learner is only required to output a hypothesis that agrees with that column on more than a fraction of the rows, the success probability is at most .
As in [R16, KRT16, R17], we model the learning algorithm by a branching program. A branching program is the strongest and most general model to use in this context. Roughly speaking, the model allows a learner with infinite computational power, and bounds only the memory size of the learner and the number of samples used.
As mentioned above, our result implies all previous memory-samples lower bounds, as well as new applications. In particular:

Parities: A learner tries to learn $x \in \{0,1\}^n$ from random linear equations over $\mathbb{F}_2$. It was proved in [R16] (and follows also from [R17]) that any learning algorithm requires either a memory of size $\Omega(n^2)$ or an exponential number of samples. The same result follows from Corollary 3 and the fact that inner product is a good two-source extractor [CG88].

Learning from sparse linear equations: A learner tries to learn $x \in \{0,1\}^n$ from random sparse linear equations, of sparsity $\ell$, over $\mathbb{F}_2$. In Section 5.3, we prove that any learning algorithm requires:

Assuming : either a memory of size or samples.

Assuming : either a memory of size or samples.


Learning from low-degree equations: A learner tries to learn $x \in \{0,1\}^n$ from random multilinear polynomial equations of degree at most $d$, over $\mathbb{F}_2$. In Section 5.4, we prove that if , any learning algorithm requires either a memory of size or samples.

Low-degree polynomials: A learner tries to learn an $n$-variate multilinear polynomial $p$ of degree at most $d$ over $\mathbb{F}_2$, from random evaluations of $p$ over $\mathbb{F}_2^n$. In Section 5.5, we prove that if , any learning algorithm requires either a memory of size or samples.

Error-correcting codes: A learner tries to learn a codeword from random coordinates: Assume that the matrix is such that for some , any pair of distinct columns of agree on at least and at most coordinates. In Section 5.6, we prove that any learning algorithm for the learning problem corresponding to requires either a memory of size or samples. We also point to a relation between our results and the statistical-query dimension [K98, BFJKMR94].

Random matrices: Let $X$, $A$ be finite sets, such that and . Let $M : X \times A \to \{-1,1\}$ be a random matrix. Fix and . With very high probability, any submatrix of $M$ of at least rows and at least columns has a bias of at most . Thus, by Corollary 3, any learning algorithm for the learning problem corresponding to $M$ requires either a memory of size , or samples.
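The concentration phenomenon behind this item can be sketched numerically. The following Monte-Carlo snippet (illustrative sizes and thresholds chosen for this sketch, not the paper's parameters) samples random submatrices of a uniformly random sign matrix and checks that their bias is small; for an $s \times s$ submatrix of i.i.d. signs, a Chernoff bound predicts bias on the order of $1/s$:

```python
import random

# Illustrative sketch: a uniformly random 64x64 sign matrix, and the
# empirical bias of random 32x32 submatrices of it.  (The paper's
# statement quantifies over ALL large submatrices, which is handled by
# a Chernoff bound plus a union bound over submatrices.)
rng = random.Random(0)
N = 64
M = [[rng.choice((-1, 1)) for _ in range(N)] for _ in range(N)]

def bias(S, T):
    # |E_{x ~ U(S), a ~ U(T)}[M[x][a]]| for row set S and column set T.
    return abs(sum(M[i][j] for i in S for j in T)) / (len(S) * len(T))

# The average of 32*32 = 1024 i.i.d. signs has standard deviation 1/32,
# so random submatrices should all exhibit bias well below, say, 0.25.
worst = max(bias(rng.sample(range(N), 32), rng.sample(range(N), 32))
            for _ in range(200))
```

With the fixed seed the run is deterministic; the worst observed bias stays far below 1, consistent with the extractor property that Corollary 3 consumes.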
We note also that our results about learning from sparse linear equations have applications in bounded-storage cryptography. This is similar to [R16, KRT16], but in a different range of the parameters. In particular, for every , our results give an encryption scheme that requires a private key of length , and time complexity of per encryption/decryption of each bit, using a random-access machine. The scheme is provably and unconditionally secure, as long as the attacker uses at most memory bits and the scheme is used at most times.
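The sparse-equation samples discussed above are cheap to generate, which is the source of the low per-bit time complexity on a random-access machine. A minimal sketch (hypothetical parameter values and helper names, for illustration only) of producing one sample of sparsity $\ell$ over $\mathbb{F}_2$:

```python
import random

def sparse_sample(x, ell, rng):
    # One sample for learning from sparse linear equations over GF(2):
    # a is a uniformly random 0/1 vector of Hamming weight exactly ell,
    # and b = <a, x> mod 2.
    n = len(x)
    support = set(rng.sample(range(n), ell))
    a = [1 if i in support else 0 for i in range(n)]
    b = sum(x[i] for i in support) % 2
    return a, b

rng = random.Random(0)
x = [rng.randrange(2) for _ in range(16)]   # the concept / private key
a, b = sparse_sample(x, 3, rng)
```

Note that generating and evaluating a sample touches only $\ell$ coordinates of the key, so each encryption/decryption step costs time roughly proportional to $\ell$ rather than to the key length.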
Techniques
Our proof follows the lines of the proof of [R17] and builds on it. The proof of [R17] considered the norm of the matrix , and thus essentially reduced the entire matrix to a single parameter. In our proof, we consider the properties of as a two-source extractor, and hence we have three parameters, rather than one. Considering these three parameters, rather than one, enables a more refined analysis, resulting in a stronger lower bound with a slightly simpler proof.
A proof outline is given in Section 3.
Motivation and Discussion
Many previous works studied the resources needed for learning under information, communication, or memory constraints (see in particular [S14, SVW16, R16, VV16, KRT16, MM17a, R17, MT17, MM17b] and the many references given there). A main message of some of these works is that for some learning problems, access to a relatively large memory is crucial. In other words, in some cases, learning is infeasible due to memory constraints.
From the point of view of human learning, such results may help to explain the importance of memory in cognitive processes. From the point of view of machine learning, these results imply that a large class of learning algorithms cannot learn certain concept classes. In particular, this applies to any boundedmemory learning algorithm that considers the samples one by one. In addition, these works are related to computational complexity and have applications in cryptography.
Related Work
Independently of our work, Beame, Oveis Gharan and Yang also gave a combinatorial property of a matrix , that holds for a large class of matrices and implies that any learning algorithm for the corresponding learning problem requires either a memory of size or an exponential number of samples (when ) [BOGY17]. Their property is based on a measure of how matrices amplify the 2-norms of probability distributions, which is more refined than the 2-norms of these matrices. Their proof also builds on [R17]. They also show, as an application, tight time-space lower bounds for learning low-degree polynomials, as well as other applications.
2 Preliminaries
For a random variable and an event , we denote by the distribution of the random variable , and we denote by the distribution of the random variable conditioned on the event .

Viewing a Learning Problem as a Matrix
Let $X$, $A$ be two finite sets of size larger than 1. Let .
Let $M : X \times A \to \{-1,1\}$ be a matrix. The matrix $M$ corresponds to the following learning problem: There is an unknown element $x \in X$ that was chosen uniformly at random. A learner tries to learn $x$ from samples $(a, b)$, where $a \in A$ is chosen uniformly at random and $b = M(x, a)$. That is, the learning algorithm is given a stream of samples, $(a_1, b_1), (a_2, b_2), \ldots$, where each $a_t$ is uniformly distributed over $A$ and for every $t$, $b_t = M(x, a_t)$.
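As a concrete illustration, the sketch below instantiates this setup for one specific choice of matrix (hypothetical toy parameters: the parity instance with $X = A = \{0,1\}^3$ and entries $(-1)^{\langle x, a\rangle}$) and generates the sample stream that a learner would observe:

```python
import itertools
import random

# Illustrative sketch, not from the paper: the learning-problem matrix M
# for parities on n = 3 bits, X = A = {0,1}^3, M(x, a) = (-1)^(<x,a> mod 2).
n = 3
X = list(itertools.product([0, 1], repeat=n))
A = list(itertools.product([0, 1], repeat=n))

def M(x, a):
    # Matrix entry: the label b that the learner sees for sample a.
    return (-1) ** (sum(xi * ai for xi, ai in zip(x, a)) % 2)

def sample_stream(x, m, rng):
    # A stream of m samples (a_t, b_t): a_t uniform over A, b_t = M(x, a_t).
    return [(a, M(x, a)) for a in (rng.choice(A) for _ in range(m))]

rng = random.Random(0)
x_star = rng.choice(X)              # the unknown element, chosen uniformly
stream = sample_stream(x_star, 5, rng)
```

Any matrix $M$ with entries in $\{-1, 1\}$ fits the same template; only the function `M` changes from one learning problem to another.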
Norms and Inner Products
Let $A$ be a finite set. For a function $f : A \to \mathbb{R}$, denote by $\|f\|_2$ the $L_2$ norm of $f$, with respect to the uniform distribution over $A$, that is:
$$\|f\|_2 = \Big( \mathop{\mathbb{E}}_{a \sim U(A)} \big[ f(a)^2 \big] \Big)^{1/2}.$$
For two functions $f, g : A \to \mathbb{R}$, define their inner product with respect to the uniform distribution over $A$ as
$$\langle f, g \rangle = \mathop{\mathbb{E}}_{a \sim U(A)} \big[ f(a) \cdot g(a) \big].$$
For a matrix $M : X \times A \to \{-1,1\}$ and a row $x \in X$, we denote by $M_x : A \to \{-1,1\}$ the function corresponding to the $x$-th row of $M$. Note that for a function $f : A \to \{-1,1\}$, we have $\|f\|_2 = 1$.
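These definitions can be checked numerically. The sketch below (hypothetical helper names, written only to illustrate the definitions) computes the $L_2$ norm and the inner product with respect to the uniform distribution over a small set:

```python
import math

def norm2(f, A):
    # ||f||_2 with respect to the uniform distribution over A:
    # sqrt(E_{a ~ U(A)}[f(a)^2]).
    return math.sqrt(sum(f(a) ** 2 for a in A) / len(A))

def inner(f, g, A):
    # <f, g> = E_{a ~ U(A)}[f(a) * g(a)].
    return sum(f(a) * g(a) for a in A) / len(A)

A = [0, 1, 2, 3]
f = lambda a: 1 if a % 2 == 0 else -1   # a {-1,1}-valued function
g = lambda a: 1                         # the constant-1 function
```

As stated above, any $\{-1,1\}$-valued function has $L_2$ norm exactly 1, and $\langle f, f \rangle = \|f\|_2^2$; the balanced `f` here is also orthogonal to the constant function.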
$L_2$-Extractors and Two-Source Extractors
Definition 2.1.
$L_2$-Extractor: Let $X$, $A$ be two finite sets. A matrix $M : X \times A \to \{-1,1\}$ is an $L_2$-extractor with error , if for every non-negative $f : A \to \mathbb{R}$ with there are at most rows $x$ in $X$ with
Let $W$ be a finite set. We denote a distribution over $W$ as a function $P : W \to [0, 1]$ such that $\sum_{w \in W} P(w) = 1$. We say that a distribution $P$ has min-entropy $k$ if for all $w \in W$, we have $P(w) \le 2^{-k}$.
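Since the min-entropy of a distribution is determined by its heaviest point, it is one line to compute. A quick numerical sketch (illustrative only):

```python
import math

def min_entropy(p):
    # Min-entropy of a distribution given as {outcome: probability}:
    # the largest k with p(w) <= 2^(-k) for all w, i.e. -log2(max_w p(w)).
    return -math.log2(max(p.values()))

uniform4 = {w: 0.25 for w in range(4)}            # uniform over 4 points
skewed   = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}  # heaviest point 1/2
```

The uniform distribution over $2^k$ points has min-entropy exactly $k$; any other distribution on the same support has strictly smaller min-entropy.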
Definition 2.2.
Two-Source Extractor: Let $X$, $A$ be two finite sets. A matrix $M : X \times A \to \{-1,1\}$ is a two-source extractor if for every distribution $P_X$ over $X$ with min-entropy at least and every distribution $P_A$ over $A$ with min-entropy at least ,
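By convexity, it suffices to verify the extractor condition on flat sources, i.e., distributions that are uniform over subsets of the appropriate size. The brute-force sketch below (a tiny illustrative instance, not a construction from the paper) measures the bias of the inner-product matrix on two bits over all pairs of 3-element row and column subsets:

```python
import itertools

def bias(M, S, T):
    # |E_{x ~ U(S), a ~ U(T)}[M[x][a]]| -- the bias of M with respect to
    # the pair of flat sources uniform over S and over T.
    return abs(sum(M[x][a] for x in S for x_a in [a] for a in [x_a])) if False else \
           abs(sum(M[x][a] for x in S for a in T)) / (len(S) * len(T))

pts = list(itertools.product([0, 1], repeat=2))
M = {x: {a: (-1) ** (x[0] * a[0] + x[1] * a[1]) for a in pts} for x in pts}

# Flat sources uniform over 3 of the 4 points have min-entropy log2(3).
max_bias = max(bias(M, S, T)
               for S in itertools.combinations(pts, 3)
               for T in itertools.combinations(pts, 3))
```

On this toy instance the bias stays bounded away from 1 on every such pair of subsets; for the inner-product matrix on $n$ bits, the bias decays exponentially once the min-entropies of the two sources sum to more than $n$ [CG88].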
Branching Program for a Learning Problem
In the following definition, we model the learner for the learning problem that corresponds to the matrix $M$ by a branching program.
Definition 2.3.
Branching Program for a Learning Problem: A branching program of length $m$ and width $d$, for learning, is a directed (multi-) graph with vertices arranged in $m+1$ layers containing at most $d$ vertices each. In the first layer, which we think of as layer 0, there is only one vertex, called the start vertex. A vertex of out-degree 0 is called a leaf. All vertices in the last layer are leaves (but there may be additional leaves). Every non-leaf vertex in the program has $2|A|$ outgoing edges, labeled by elements $(a, b) \in A \times \{-1, 1\}$, with exactly one edge labeled by each such $(a, b)$, and all these edges going into vertices in the next layer. Each leaf in the program is labeled by an element of $X$, which we think of as the output of the program on that leaf.
Computation-Path: The samples $(a_1, b_1), \ldots, (a_m, b_m)$ that are given as input define a computation-path in the branching program, by starting from the start vertex and following at step $t$ the edge labeled by $(a_t, b_t)$, until reaching a leaf. The program outputs the label of the leaf reached by the computation-path.
Success Probability: The success probability of the program is the probability that the program's output equals $x$, where the probability is over $x, a_1, \ldots, a_m$ (where $x$ is uniformly distributed over $X$ and $a_1, \ldots, a_m$ are uniformly distributed over $A$, and for every $t$, $b_t = M(x, a_t)$).
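The following toy sketch (a hypothetical one-bit example, not taken from the paper) makes the model concrete: the program's state space plays the role of the learner's bounded memory, each non-leaf vertex has one outgoing edge per possible sample $(a, b)$, and the leaf labels are guesses for $x$:

```python
import random

# Toy instance: X = {0, 1}, A = {0, 1}, M(x, a) = (-1)^(x*a).
def M(x, a):
    return (-1) ** (x * a)

def step(state, sample):
    # One layer of a width-2 branching program: each non-leaf vertex has
    # one outgoing edge per sample (a, b) into the next layer.
    a, b = sample
    if a == 1:                     # a = 1 is informative: b = (-1)^x
        return 0 if b == 1 else 1
    return state                   # a = 0 reveals nothing; keep the state

def run(x, length, rng):
    # Follow the computation-path from the start vertex; the output is
    # the label of the vertex reached after `length` steps.
    state = 0                      # start vertex (initial guess: x = 0)
    for _ in range(length):
        a = rng.randrange(2)
        state = step(state, (a, M(x, a)))
    return state

rng = random.Random(1)
trials = 2000
wins = 0
for _ in range(trials):
    x = rng.randrange(2)           # the unknown element, uniform over X
    wins += (run(x, 20, rng) == x)
success = wins / trials
```

Here the success probability is essentially 1 because one bit of memory suffices for this toy problem; the results of this paper concern problems where no program of small width and sub-exponential length can succeed.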
3 Overview of the Proof
The proof follows the lines of the proof of [R17] and builds on that proof.
Assume that is a extractor with error , and let . Let be a branching program for the learning problem that corresponds to the matrix . Assume for a contradiction that is of length and width , where is a small constant.
We define the truncated-path, , to be the same as the computation-path of , except that it sometimes stops before reaching a leaf. Roughly speaking, stops before reaching a leaf if certain “bad” events occur. Nevertheless, we show that the probability that stops before reaching a leaf is negligible, so we can think of as almost identical to the computation-path.
For a vertex of , we denote by the event that reaches the vertex . We denote by the probability for (where the probability is over ), and we denote by the distribution of the random variable conditioned on the event . Similarly, for an edge of the branching program , let be the event that traverses the edge . Denote, , and .
A vertex of is called significant if
Roughly speaking, this means that, conditioned on the event that reaches the vertex , a non-negligible amount of information is known about . In order to guess with a non-negligible success probability, must reach a significant vertex. Lemma 4.1 shows that the probability that reaches any significant vertex is negligible, and thus the main result follows.
To prove Lemma 4.1, we show that for every fixed significant vertex , the probability that reaches is at most (which is smaller than one over the number of vertices in ). Hence, we can use a union bound to prove the lemma.
The proof that the probability that reaches is extremely small is the main part of the proof. To that end, we use the following functions to measure the progress made by the branching program towards reaching .
Let be the set of vertices in layer of , such that . Let be the set of edges from layer of to layer of , such that . Let
We think of as measuring the progress made by the branching program, towards reaching a state with distribution similar to .
We show that each may only be negligibly larger than . Hence, since it’s easy to calculate that , it follows that is close to , for every . On the other hand, if is in layer then is at least . Thus, cannot be much larger than . Since is significant, and hence is at most .
The proof that may only be negligibly larger than is done in two steps: Claim 4.12 shows by a simple convexity argument that . The hard part, that is done in Claim 4.10 and Claim 4.11, is to prove that may only be negligibly larger than .
For this proof, we define for every vertex , the set of edges that are going out of , such that . Claim 4.10 shows that for every vertex ,
may only be negligibly higher than
For the proof of Claim 4.10, which is the hardest proof in the paper, and the most important place where our proof deviates from (and simplifies) the proof of [R17], we consider the function . We first show how to bound . We then consider two cases: If is negligible, then is negligible and doesn’t contribute much, and we show that for every , is also negligible and doesn’t contribute much. If is non-negligible, we use the bound on and the assumption that is a extractor to show that for almost all edges , we have that is very close to . Only an exponentially small () fraction of edges are “bad” and give a significantly larger .
The reason that in the definitions of and we raised and to the power of is that this is the largest power for which the contribution of the “bad” edges is still small (as their fraction is ).
This outline oversimplifies many details. Let us briefly mention two of them. First, it is not so easy to bound . We do that by bounding and . In order to bound , we force to stop whenever it reaches a significant vertex (and thus we are able to bound for every vertex reached by ). In order to bound , we force to stop whenever is large, which allows us to consider only the “bounded” part of . (This is related to the technique of flattening a distribution that was used in [KR13]). Second, some edges are so “bad” that their contribution to is huge so they cannot be ignored. We force to stop before traversing any such edge. (This is related to an idea that was used in [KRT16] of analyzing separately paths that traverse “bad” edges). We show that the total probability that stops before reaching a leaf is negligible.
4 Main Result
Theorem 1.
Let . Fix to be such that .
Let , be two finite sets. Let . Let be a matrix which is a extractor with error , for sufficiently large¹ and , where . Let

¹By “sufficiently large” we mean that are larger than some constant that depends on .
(1) 
Let be a branching program of length at most and width at most for the learning problem that corresponds to the matrix . Then, the success probability of is at most .
Proof.
Let
(2) 
Note that by the assumption that and are sufficiently large, we get that and are also sufficiently large. Since , we have . Thus,
(3) 
Let be a branching program of length and width for the learning problem that corresponds to the matrix . We will show that the success probability of is at most .
4.1 The TruncatedPath and Additional Definitions and Notation
We will define the truncated-path, , to be the same as the computation-path of , except that it sometimes stops before reaching a leaf. Formally, we define , together with several other definitions and notations, by induction on the layers of the branching program .
Assume that we already defined the truncatedpath , until it reaches layer of . For a vertex in layer of , let be the event that reaches the vertex . For simplicity, we denote by the probability for (where the probability is over ), and we denote by the distribution of the random variable conditioned on the event .
There will be three cases in which the truncated-path stops on a non-leaf vertex :

If is a so-called significant vertex, where the norm of is non-negligible. (Intuitively, this means that conditioned on the event that reaches , a non-negligible amount of information is known about .)

If is non-negligible. (Intuitively, this means that conditioned on the event that reaches , the correct element could have been guessed with a non-negligible probability.)

If is non-negligible. (Intuitively, this means that is about to traverse a “bad” edge, which is traversed with a non-negligibly higher or lower probability than other edges.)
Next, we describe these three cases more formally.
Significant Vertices
We say that a vertex in layer of is significant if
Significant Values
Even if is not significant, may have relatively large values. For a vertex in layer of , denote by the set of all , such that,
Bad Edges
For a vertex in layer of , denote by the set of all , such that,
The TruncatedPath
We define by induction on the layers of the branching program . Assume that we already defined until it reaches a vertex in layer of . The path stops on if (at least) one of the following occurs:

is significant.

.

.

is a leaf.
Otherwise, proceeds by following the edge labeled by (same as the computation-path).
4.2 Proof of Theorem 1
Since follows the computation-path of , except that it sometimes stops before reaching a leaf, the success probability of is bounded (from above) by the probability that stops before reaching a leaf, plus the probability that reaches a leaf and .
The main lemma needed for the proof of Theorem 1 is Lemma 4.1, which shows that the probability that reaches a significant vertex is at most .
Lemma 4.1.
The probability that reaches a significant vertex is at most .
Lemma 4.1 is proved in Section 4.3. We will now show how the proof of Theorem 1 follows from that lemma.
Lemma 4.1 shows that the probability that stops on a non-leaf vertex because of the first reason (i.e., that the vertex is significant) is small. The next two claims imply that the probabilities that stops on a non-leaf vertex because of the second and third reasons are also small.
Claim 4.2.
If is a non-significant vertex of , then
Proof.
Since is not significant,
Hence, by Markov’s inequality,
Since conditioned on , the distribution of is , we obtain
Claim 4.3.
If is a non-significant vertex of , then
Proof.
Since is not significant, . Since is a distribution, . Thus,
Since is a extractor with error , there are at most elements with
The claim follows since is uniformly distributed over and since (Equation (1)). ∎
We can now use Lemma 4.1, Claim 4.2 and Claim 4.3 to prove that the probability that stops before reaching a leaf is at most . Lemma 4.1 shows that the probability that reaches a significant vertex and hence stops because of the first reason, is at most . Assuming that doesn’t reach any significant vertex (in which case it would have stopped because of the first reason), Claim 4.2 shows that in each step, the probability that stops because of the second reason, is at most . Taking a union bound over the steps, the total probability that stops because of the second reason, is at most . In the same way, assuming that doesn’t reach any significant vertex (in which case it would have stopped because of the first reason), Claim 4.3 shows that in each step, the probability that stops because of the third reason, is at most . Again, taking a union bound over the steps, the total probability that stops because of the third reason, is at most . Thus, the total probability that stops (for any reason) before reaching a leaf is at most .
Recall that if doesn’t stop before reaching a leaf, it just follows the computation-path of . Recall also that by Lemma 4.1, the probability that reaches a significant leaf is at most . Thus, to bound (from above) the success probability of by , it remains to bound the probability that reaches a non-significant leaf and . Claim 4.4 shows that for any non-significant leaf , conditioned on the event that reaches , the probability for is at most , which completes the proof of Theorem 1.
Claim 4.4.
If is a non-significant leaf of , then
Proof.
This completes the proof of Theorem 1. ∎
4.3 Proof of Lemma 4.1
Proof.
We need to prove that the probability that reaches any significant vertex is at most . Let be a significant vertex of . We will bound from above the probability that reaches , and then use a union bound over all significant vertices of . Interestingly, the upper bound on the width of is used only in the union bound.
The Distributions and
Recall that for a vertex of , we denote by the event that reaches the vertex . For simplicity, we denote by the probability for (where the probability is over ), and we denote by the distribution of the random variable conditioned on the event .
Similarly, for an edge of the branching program , let be the event that traverses the edge . Denote, (where the probability is over ), and .
Claim 4.5.
For any edge of , labeled by , such that , for any ,
where is a normalization factor that satisfies,
Proof.
Let be an edge of , labeled by , and such that . Since , the vertex is not significant (as otherwise always stops on and hence ). Also, since , we know that (as otherwise never traverses and hence ).
If reaches , it traverses the edge if and only if: (as otherwise stops on ) and and . Therefore, for any ,
where is a normalization factor, given by
Since is not significant, by Claim 4.2,
Since ,
and hence
Hence, by the union bound,
(where the last inequality follows since , by Equation (1)). ∎
Bounding the Norm of
We will show that cannot be too large. Towards this, we will first prove that for every edge of that is traversed by with probability larger than zero, cannot be too large.
Claim 4.6.
For any edge of , such that ,
Proof.
Let be an edge of , labeled by , and such that . Since , the vertex is not significant (as otherwise always stops on and hence ). Thus,
By Claim 4.5, for any ,
where satisfies,
(where the last inequality holds because we assume that and thus are sufficiently large.) Thus,
Claim 4.7.
Proof.
Let be the set of all edges of , that are going into , such that . Note that