Sharp Lower Bounds on the Approximation Rate of Shallow Neural Networks

06/28/2021
by Jonathan W. Siegel, et al.

We consider the approximation rates of shallow neural networks with respect to the variation norm. Upper bounds on these rates have been established for sigmoidal and ReLU activation functions, but it has remained an important open problem whether these rates are sharp. In this article, we provide a solution to this problem by proving sharp lower bounds on the approximation rates for shallow neural networks, which are obtained by lower bounding the L^2-metric entropy of the convex hull of the neural network basis functions. In addition, our methods also give sharp lower bounds on the Kolmogorov n-widths of this convex hull, which show that the variation spaces corresponding to shallow neural networks cannot be efficiently approximated by linear methods. These lower bounds apply to both sigmoidal activation functions with bounded variation and to activation functions which are a power of the ReLU. Our results also quantify how much stronger the Barron spectral norm is than the variation norm and, combined with previous results, give the asymptotics of the L^∞-metric entropy up to logarithmic factors in the case of the ReLU activation function.
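To fix notation for the objects named in the abstract, the display below recalls the standard definitions in a brief sketch under common conventions. The particular parameter domain chosen for the dictionary and the rate exponent quoted at the end are taken from the related ReLU^k literature (see the first entry under related research below) rather than from this abstract, and should be read as assumptions.

% Dictionary of ReLU^k ridge functions on a bounded domain \Omega \subset \mathbb{R}^d,
% with \sigma_k(t) = \max(0,t)^k (by the usual convention, \sigma_0 is the Heaviside function):
\[
  \mathbb{P}_k^d = \bigl\{ \sigma_k(\omega \cdot x + b) : \omega \in S^{d-1},\ b \in [c_1, c_2] \bigr\}.
\]
% Variation norm: the gauge of the closed, symmetric convex hull of the dictionary;
% B_1 below denotes the unit ball of this norm:
\[
  \|f\|_{\mathcal{K}_1(\mathbb{P}_k^d)} = \inf\bigl\{ c > 0 : f \in c \cdot \overline{\mathrm{conv}}\bigl(\pm \mathbb{P}_k^d\bigr) \bigr\}.
\]
% L^2-metric entropy (the quantity lower bounded in this work) and Kolmogorov n-width:
\[
  \epsilon_n(B_1)_{L^2} = \inf\bigl\{ \epsilon > 0 : B_1 \text{ can be covered by } 2^n \text{ balls of } L^2\text{-radius } \epsilon \bigr\},
  \qquad
  d_n(B_1)_{L^2} = \inf_{\dim V_n = n}\ \sup_{f \in B_1}\ \inf_{g \in V_n} \|f - g\|_{L^2}.
\]
% For orientation only (an exponent quoted from the related ReLU^k work, not from this
% abstract): the known upper bound on the nonlinear approximation rate scales as
%   n^{-1/2 - (2k+1)/(2d)},
% and the lower bounds announced here show that this exponent cannot be improved.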


Related research

01/29/2021 · Optimal Approximation Rates and Metric Entropy of ReLU^k and Cosine Networks
This article addresses several fundamental issues associated with the ap...

10/03/2016 · Error bounds for approximations with deep ReLU networks
We study expressive power of shallow and deep neural networks with piece...

12/14/2020 · High-Order Approximation Rates for Neural Networks with ReLU^k Activation Functions
We study the approximation properties of shallow neural networks (NN) wi...

07/28/2023 · Optimal Approximation of Zonoids and Uniform Approximation by Shallow Neural Networks
We study the following two related problems. The first is to determine t...

06/07/2020 · Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth
We prove sharp dimension-free representation results for neural networks...

02/02/2023 · Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data
We study the interpolation, or memorization, power of deep ReLU neural n...

06/29/2022 · From Kernel Methods to Neural Networks: A Unifying Variational Formulation
The minimization of a data-fidelity term and an additive regularization ...
