Error bounds for approximations with deep ReLU networks

10/03/2016
by Dmitry Yarotsky

We study the expressive power of shallow and deep neural networks with piecewise linear activation functions. We establish new rigorous upper and lower bounds on network complexity in the setting of approximation in Sobolev spaces. In particular, we prove that deep ReLU networks approximate smooth functions more efficiently than shallow networks. For the approximation of 1D Lipschitz functions, we describe adaptive depth-6 network architectures that are more efficient than the standard shallow architecture.
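To make the claimed depth advantage concrete, the sketch below (plain NumPy; the function names and the specific construction are illustrative assumptions, not quoted from the abstract) implements the classic sawtooth trick for approximating f(x) = x^2 on [0, 1] with a deep ReLU network: composing a two-ReLU "tooth" function m times and subtracting the scaled sawtooths from x gives a network of depth O(m) whose uniform error is at most 4^{-(m+1)}, i.e. exponentially small in the depth.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # "Tooth" function g on [0, 1], built from two ReLU units:
    # g(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_relu_square(x, depth_m):
    # Sawtooth construction: composing the hat s times yields a sawtooth
    # g_s with 2^{s-1} teeth, and on [0, 1]
    #     x^2 = x - sum_{s >= 1} g_s(x) / 4^s.
    # Truncating after m terms corresponds to a ReLU network of depth O(m)
    # with uniform error at most 4^{-(m+1)}.
    approx = np.array(x, dtype=float)
    g = np.array(x, dtype=float)
    for s in range(1, depth_m + 1):
        g = hat(g)                      # one more hidden layer
        approx = approx - g / 4.0**s    # add the next scaled sawtooth term
    return approx

xs = np.linspace(0.0, 1.0, 1001)
for m in (1, 3, 6):
    err = np.max(np.abs(deep_relu_square(xs, m) - xs**2))
    print(f"depth m={m}: max error {err:.2e}  (bound {4.0**-(m + 1):.2e})")
```

Running this for m = 1, 3, 6 should show the measured maximum error tracking the 4^{-(m+1)} bound, which illustrates the depth-versus-efficiency gap discussed in the abstract.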


Related research

- 06/28/2021, "Sharp Lower Bounds on the Approximation Rate of Shallow Neural Networks": We consider the approximation rates of shallow neural networks with resp...
- 03/06/2023, "Expressivity of Shallow and Deep Neural Networks for Polynomial Approximation": We analyze the number of neurons that a ReLU neural network needs to app...
- 07/02/2019, "Best k-layer neural network approximations": We investigate the geometry of the empirical risk minimization problem f...
- 09/09/2019, "Optimal Function Approximation with Relu Neural Networks": We consider in this paper the optimal approximations of convex univariat...
- 11/30/2022, "Limitations on approximation by deep and shallow neural networks": We prove Carl's type inequalities for the error of approximation of comp...
- 12/10/2020, "The Representation Power of Neural Networks: Breaking the Curse of Dimensionality": In this paper, we analyze the number of neurons and training parameters ...
- 05/03/2017, "Quantified advantage of discontinuous weight selection in approximations with deep neural networks": We consider approximations of 1D Lipschitz functions by deep ReLU networ...
