Expressivity of Shallow and Deep Neural Networks for Polynomial Approximation

03/06/2023
by Itai Shapira, et al.

We analyze the number of neurons that a ReLU neural network needs to approximate multivariate monomials. We establish an exponential lower bound on the complexity of any shallow network that approximates the product function $\vec{x} \mapsto \prod_{i=1}^{d} x_i$ on a general compact domain. Furthermore, we prove that this lower bound does not hold for normalized O(1)-Lipschitz monomials (equivalently, for monomials restricted to the unit cube). These results suggest that shallow ReLU networks suffer from the curse of dimensionality when expressing functions whose Lipschitz parameter scales with the dimension of the input, and that the expressive power of neural networks lies in their depth rather than in their overall size.
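As a brief illustration of the role of the normalization (our own calculation on an assumed domain $[0,M]^d$, not a computation quoted from the paper): the coordinate-wise slopes of the product function satisfy

\[
\frac{\partial}{\partial x_j}\prod_{i=1}^{d} x_i \;=\; \prod_{i\neq j} x_i,
\qquad
\sup_{\vec{x}\,\in\,[0,M]^d}\;\prod_{i\neq j} x_i \;=\; M^{\,d-1},
\]

so for any side length $M>1$ the Lipschitz constant of the product grows exponentially with $d$, whereas on the unit cube ($M=1$) every partial derivative is bounded by 1. The latter is the O(1)-Lipschitz regime in which, according to the abstract, the exponential lower bound no longer applies.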


