Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks

10/31/2016
by Itay Safran, et al.

We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the L_1 norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.
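As a rough illustration of the experiment mentioned above (not the authors' code), the sketch below fits the indicator of the unit ball in R^d with a wide one-hidden-layer ReLU network and with a deeper, narrower network of comparable parameter count, then compares their test errors. The dimension, architectures, data sampling, and optimizer settings are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch: depth vs. width when learning the indicator of the unit ball.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn

d = 10  # input dimension (assumed)

def make_data(n):
    # Sample directions uniformly on the sphere, radii uniform in [0, 2],
    # so roughly half of the points fall inside the unit ball.
    x = torch.randn(n, d)
    x = x / x.norm(dim=1, keepdim=True)
    x = x * (2.0 * torch.rand(n, 1))
    y = (x.norm(dim=1) <= 1.0).float().unsqueeze(1)  # indicator of the unit ball
    return x, y

def mlp(widths):
    # Fully connected ReLU network with the given hidden-layer widths.
    layers, prev = [], d
    for w in widths:
        layers += [nn.Linear(prev, w), nn.ReLU()]
        prev = w
    layers.append(nn.Linear(prev, 1))
    return nn.Sequential(*layers)

shallow = mlp([400])          # one hidden layer, wide  (~4.8k parameters)
deep    = mlp([40, 40, 40])   # three hidden layers, narrow (~3.8k parameters)

def train_and_test(model, steps=2000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        x, y = make_data(256)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Classification error on a fresh test sample.
    x, y = make_data(10_000)
    with torch.no_grad():
        err = ((model(x) > 0).float() != y).float().mean().item()
    return err

print("shallow (wide) test error:", train_and_test(shallow))
print("deep (narrow) test error: ", train_and_test(deep))
```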

Related research

Size and Depth Separation in Approximating Natural Functions with Neural Networks (01/30/2021)
Simultaneous Neural Network Approximations in Sobolev Spaces (09/01/2021)
The Power of Depth for Feedforward Neural Networks (12/12/2015)
Optimization-Based Separations for Neural Networks (12/04/2021)
Entropy Maximization with Depth: A Variational Principle for Random Neural Networks (05/25/2022)
Depth Separations in Neural Networks: What is Actually Being Separated? (04/15/2019)
Provable Memorization via Deep Neural Networks using Sub-linear Parameters (10/26/2020)
