Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks

02/10/2022
by Sitan Chen, et al.

We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model. No general SQ lower bounds were known for learning ReLU networks of any depth in this setting: previous SQ lower bounds held only for adversarial noise models (agnostic learning) or restricted models such as correlational SQ. Prior work hinted at the impossibility of our result: Vempala and Wilmes showed that general SQ lower bounds cannot apply to any real-valued family of functions that satisfies a simple non-degeneracy condition. To circumvent their result, we refine a lifting procedure due to Daniely and Vardi that reduces Boolean PAC learning problems to Gaussian ones. We show how to extend their technique to other learning models and, in many well-studied cases, obtain a more efficient reduction. As such, we also prove new cryptographic hardness results for PAC learning two-hidden-layer ReLU networks, as well as new lower bounds for learning constant-depth ReLU networks from label queries.
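To give a rough sense of the lifting idea the abstract refers to, here is a minimal, purely illustrative Python sketch. It is not the paper's construction: the names `hard_boolean_function`, `lift_to_gaussian`, and `sq_oracle` are hypothetical, the parity concept is a stand-in for the much harder Boolean families the actual reduction uses, and the real Daniely-Vardi lifting realizes the thresholding step inside a ReLU network rather than with an exact sign function. The sketch only shows the shape of the reduction: Gaussian inputs are labeled through their Boolean "shadow", so any learner for the Gaussian problem implicitly solves the embedded Boolean one.

```python
# Illustrative sketch (assumed names, not the paper's construction) of
# lifting a Boolean learning problem over {-1, +1}^n to a noise-free
# learning problem over Gaussian inputs.

import numpy as np

rng = np.random.default_rng(0)

def hard_boolean_function(b: np.ndarray) -> np.ndarray:
    # Stand-in for a hard Boolean concept; here a parity on three fixed
    # coordinates. The real reductions use cryptographically hard
    # families. b has entries in {-1, +1}.
    return b[:, 0] * b[:, 3] * b[:, 5]

def lift_to_gaussian(n_samples: int, dim: int):
    # Draw x ~ N(0, I) and label it by applying the Boolean concept to
    # sign(x). A learner that succeeds on this Gaussian, noise-free
    # problem must implicitly solve the underlying Boolean problem.
    x = rng.standard_normal((n_samples, dim))
    b = np.sign(x)  # Boolean "shadow" of the Gaussian input
    y = hard_boolean_function(b)
    return x, y

def sq_oracle(query, tolerance: float, n_samples: int = 100_000, dim: int = 10):
    # Simulated statistical query oracle: returns E[query(x, y)] up to a
    # perturbation of magnitude <= tolerance (random here; the SQ model
    # allows it to be adversarial).
    x, y = lift_to_gaussian(n_samples, dim)
    estimate = np.mean(query(x, y))
    return estimate + rng.uniform(-tolerance, tolerance)

# Example query: correlation of the label with one input coordinate.
# For a parity of degree >= 2 this correlation is exactly 0, which is
# the basic reason such correlational queries carry no signal.
print(sq_oracle(lambda x, y: x[:, 0] * y, tolerance=1e-3))
```

In this toy setting the single-coordinate correlation query returns (approximately) zero regardless of which parity was planted, which is the flavor of argument behind correlational SQ lower bounds; the paper's contribution is obtaining lower bounds against general SQ algorithms, where such simple symmetry arguments do not suffice.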


Related research

06/29/2020 · Statistical-Query Lower Bounds via Functional Gradients
We give the first statistical-query lower bounds for agnostically learni...

06/22/2020 · Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks
We study the problem of PAC learning one-hidden-layer ReLU networks with...

11/25/2019 · Trajectory growth lower bounds for random sparse deep ReLU networks
This paper considers the growth in the length of one-dimensional traject...

02/13/2023 · Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression under Gaussian Marginals
We study the task of agnostically learning halfspaces under the Gaussian...

02/24/2023 · Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
We prove that the set of functions representable by ReLU neural networks...

02/02/2023 · Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data
We study the interpolation, or memorization, power of deep ReLU neural n...

07/24/2023 · Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
We study the problem of PAC learning a linear combination of k ReLU acti...
