Most Neural Networks Are Almost Learnable

05/25/2023
by Amit Daniely, et al.

We present a PTAS for learning random constant-depth networks. We show that for any fixed ϵ>0 and depth i, there is a poly-time algorithm that for any distribution on √(d)·𝕊^{d-1} learns random Xavier networks of depth i, up to an additive error of ϵ. The algorithm runs in time and sample complexity of (d̅)^{poly(ϵ^{-1})}, where d̅ is the size of the network. For some cases of sigmoid and ReLU-like activations the bound can be improved to (d̅)^{polylog(ϵ^{-1})}, resulting in a quasi-poly-time algorithm for learning constant-depth random networks.
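The abstract does not spell out the algorithm, but the learning target it describes can be made concrete. Below is a minimal Python sketch, assuming NumPy, of that setup: a random constant-depth network with Xavier-initialized weights, labeling inputs drawn from a distribution supported on the sphere √(d)·𝕊^{d-1}. The helper names (make_xavier_network, sample_sphere), the width, and the tanh activation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_xavier_network(d, depth, width, rng, activation=np.tanh):
    """Draw a random depth-`depth` network with Xavier-style (variance 1/fan_in) weights.

    Hypothetical helper for illustration; the paper's exact parameterization may differ.
    """
    dims = [d] + [width] * (depth - 1) + [1]
    weights = [rng.normal(0.0, 1.0 / np.sqrt(dims[i]), size=(dims[i], dims[i + 1]))
               for i in range(depth)]

    def net(x):
        h = x
        for W in weights[:-1]:
            h = activation(h @ W)   # hidden layers with the chosen activation
        return h @ weights[-1]      # linear output layer
    return net

def sample_sphere(n, d, rng):
    """Sample n points uniformly from the sphere of radius sqrt(d) in R^d."""
    g = rng.normal(size=(n, d))
    return np.sqrt(d) * g / np.linalg.norm(g, axis=1, keepdims=True)

rng = np.random.default_rng(0)
f = make_xavier_network(d=50, depth=3, width=100, rng=rng)   # random target network
X = sample_sphere(1000, 50, rng)                             # inputs on sqrt(d) * S^{d-1}
y = f(X)  # labels a learner would try to fit up to additive error eps
```

In this setting, the claimed guarantee is that a poly-time learner can approximate f on such a distribution up to additive error ϵ, with time and sample complexity (d̅)^{poly(ϵ^{-1})} in the size d̅ of the network.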
