Optimal approximation of piecewise smooth functions using deep ReLU neural networks

09/15/2017
by   Philipp Petersen, et al.

We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, required for approximating classifier functions in the L^2 sense. As a model class, we consider the set E^β(R^d) of possibly discontinuous piecewise C^β functions f : [-1/2, 1/2]^d → R, where the different "smooth regions" of f are separated by C^β hypersurfaces. For given dimension d ≥ 2, regularity β > 0, and accuracy ε > 0, we construct artificial neural networks with ReLU activation function that approximate functions from E^β(R^d) up to an L^2 error of ε. The constructed networks have a fixed number of layers, depending only on d and β, and they have O(ε^{-2(d-1)/β}) many non-zero weights, which we prove to be optimal. For the proof of optimality, we establish a lower bound on the description complexity of the class E^β(R^d). By showing that a family of approximating neural networks gives rise to an encoder for E^β(R^d), we then prove that one cannot approximate a general function f ∈ E^β(R^d) using neural networks that are less complex than those produced by our construction. In addition to optimality in terms of the number of weights, we show that achieving this optimal approximation rate requires ReLU networks of a certain minimal depth. Precisely, for piecewise C^β(R^d) functions, this minimal depth is given, up to a multiplicative constant, by β/d. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions.
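As a rough numerical illustration of the two rates stated in the abstract, the sketch below (plain Python; the helper names weight_scaling and depth_scaling are hypothetical and not from the paper) evaluates the scaling ε^{-2(d-1)/β} for the number of non-zero weights and β/d for the minimal depth. The abstract does not specify the multiplicative constants or log factors, so only the pure scalings are reported.

```python
# Illustrative sketch only: evaluates the asymptotic scalings from the abstract.
# Constants and log factors are unknown, so the printed numbers are scalings,
# not actual network sizes. d = input dimension, beta = piecewise smoothness,
# eps = target L^2 accuracy.

def weight_scaling(eps: float, d: int, beta: float) -> float:
    """Scaling of the number of non-zero weights: eps^(-2(d-1)/beta), up to a constant."""
    return eps ** (-2.0 * (d - 1) / beta)

def depth_scaling(d: int, beta: float) -> float:
    """Scaling of the minimal depth for the optimal rate: beta/d, up to a constant and log factors."""
    return beta / d

if __name__ == "__main__":
    d, beta = 2, 2.0  # e.g. piecewise C^2 functions on [-1/2, 1/2]^2
    for eps in (1e-1, 1e-2, 1e-3):
        print(f"eps = {eps:.0e}: weights ~ {weight_scaling(eps, d, beta):.1e}, "
              f"depth ~ {depth_scaling(d, beta):.1f} (both up to constants)")
```

For fixed β, the weight count grows exponentially in the dimension d of the domain (the curse of dimensionality enters through the exponent 2(d-1)/β), while higher regularity β both reduces the required number of weights and increases the depth needed to realize the optimal rate.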

Related research

03/29/2023  Optimal approximation of C^k-functions using shallow complex-valued neural networks
02/23/2023  Testing Stationarity Concepts for ReLU Networks: Hardness, Regularity, and Robust Algorithms
10/07/2021  On the Optimal Memorization Power of ReLU Neural Networks
05/26/2022  Training ReLU networks to high uniform accuracy is intractable
09/09/2019  Optimal Function Approximation with Relu Neural Networks
09/24/2015  Provable approximation properties for deep neural networks
06/07/2020  Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth
