Memory-Sample Lower Bounds for Learning Parity with Noise

07/05/2021
by Sumegha Garg, et al.

In this work, we show that for the well-studied problem of learning parity under noise, where a learner tries to learn x = (x_1, …, x_n) ∈ {0,1}^n from a stream of random linear equations over F_2 that are correct with probability 1/2 + ε and flipped with probability 1/2 - ε, any learning algorithm requires either a memory of size Ω(n^2/ε) or an exponential number of samples.

In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [GRT'18], when the samples are noisy. A matrix M: A × X → {-1,1} corresponds to the following learning problem with error parameter ε: an unknown element x ∈ X is chosen uniformly at random, and a learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x) with probability 1/2 + ε and b_i = -M(a_i, x) with probability 1/2 - ε (0 < ε < 1/2).

Assume that k, ℓ, r are such that any submatrix of M with at least 2^-k · |A| rows and at least 2^-ℓ · |X| columns has a bias of at most 2^-r. We show that any learning algorithm for the learning problem corresponding to M, with error parameter ε, requires either a memory of size at least Ω(k · ℓ/ε), or at least 2^Ω(r) samples. In particular, for a large class of learning problems (the same class as in [GRT'18]), any learning algorithm requires either a memory of size at least Ω((log |X|) · (log |A|)/ε) or an exponential number of noisy samples. Our proof is based on adapting the arguments of [Raz'17, GRT'18] to the noisy case.
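As a concrete illustration of the sample model, the following is a minimal Python sketch of the noisy equation stream for the parity case; the names sample_lpn, eps, and rng are illustrative and not from the paper.

import numpy as np

def sample_lpn(x, eps, rng):
    # One noisy sample: a is uniform over {0,1}^n and b is the inner product
    # <a, x> mod 2, reported correctly with probability 1/2 + eps and flipped
    # with probability 1/2 - eps.
    a = rng.integers(0, 2, size=len(x))
    b = int(a @ x) % 2
    if rng.random() < 0.5 - eps:  # flip the label with probability 1/2 - eps
        b ^= 1
    return a, b

rng = np.random.default_rng(0)
n, eps = 16, 0.1
x = rng.integers(0, 2, size=n)  # the unknown parity vector
stream = [sample_lpn(x, eps, rng) for _ in range(10)]  # stream of noisy equations

For intuition (stated informally), parity learning is the instance A = X = {0,1}^n with M(a, x) = (-1)^⟨a,x⟩, encoding the labels in {-1,1} rather than {0,1}. For this M, Lindsey's lemma bounds the bias of any submatrix with at least 2^-k · |A| rows and at least 2^-ℓ · |X| columns by 2^((k+ℓ-n)/2), so taking k = ℓ = n/4 allows r = n/4, and the general theorem recovers the bound stated above: memory Ω(n^2/ε) or 2^Ω(n) samples.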
