Deep Network Approximation Characterized by Number of Neurons

06/13/2019
by   Zuowei Shen, et al.

This paper quantitatively characterizes the approximation power of deep feed-forward neural networks (FNNs) in terms of the number of neurons, i.e., the product of the network width and depth. It is shown by construction that ReLU FNNs with width 𝒪(N) and depth 9L+12 can approximate an arbitrary Hölder continuous function of order α with a Lipschitz constant ν on [0,1]^d with a tight approximation rate 5ν(8√d)^α N^(-2α/d) L^(-2α/d) for any given N, L ∈ ℕ⁺. The constructive approximation is a corollary of a more general result for an arbitrary continuous function f stated in terms of its modulus of continuity ω_f(·). In particular, the approximation rate of ReLU FNNs with width 𝒪(N) and depth 9L+12 for a general continuous function f is 5ω_f(8√d N^(-2/d) L^(-2/d)). We also extend our analysis to the case when the domain of f is irregular or localized in an ϵ-neighborhood of a d_M-dimensional smooth manifold M ⊆ [0,1]^d with d_M ≪ d. In particular, in the case of an essentially low-dimensional domain, we show an approximation rate 3ω_f((4ϵ/(1-δ))√(d/d_δ)) + 5ω_f((16d/((1-δ)√(d_δ))) N^(-2/d_δ) L^(-2/d_δ)) for ReLU FNNs to approximate f in the ϵ-neighborhood, where d_δ = 𝒪(d_M ln(d/δ)/δ²) for any given δ ∈ (0,1). Our analysis provides a general guide for selecting the width and the depth of ReLU FNNs to approximate continuous functions, especially in parallel computing.
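The Hölder rate above is an explicit arithmetic formula in N, L, d, α, and ν, so its behavior is easy to check numerically. The sketch below (the helper name and interface are illustrative, not from the paper) evaluates the bound 5ν(8√d)^α N^(-2α/d) L^(-2α/d) and verifies that the width parameter N and depth parameter L enter symmetrically: doubling either one shrinks the bound by the same factor 2^(-2α/d).

```python
import math

def holder_rate_bound(d, alpha, nu, N, L):
    """Evaluate the abstract's Hölder approximation bound
    5 * nu * (8*sqrt(d))**alpha * N**(-2*alpha/d) * L**(-2*alpha/d).
    (Illustrative helper; the name is not from the paper.)"""
    return 5 * nu * (8 * math.sqrt(d)) ** alpha \
        * N ** (-2 * alpha / d) * L ** (-2 * alpha / d)

# N and L appear only through the product (N*L)**(-2*alpha/d), so
# doubling the width parameter N or the depth parameter L has the
# same effect: the bound shrinks by 2**(-2*alpha/d).
b_base   = holder_rate_bound(d=2, alpha=1.0, nu=1.0, N=4, L=4)
b_wide   = holder_rate_bound(d=2, alpha=1.0, nu=1.0, N=8, L=4)
b_deep   = holder_rate_bound(d=2, alpha=1.0, nu=1.0, N=4, L=8)

assert math.isclose(b_wide, b_deep)          # symmetric in N and L
assert math.isclose(b_wide, b_base / 2)      # 2**(-2*1/2) = 1/2
```

This symmetry is what makes the result a guide for parallel computing: for a fixed neuron budget N·L, the error bound is unchanged whether the budget is spent on width (more parallelism per layer) or on depth.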

Related research

02/28/2021 – Optimal Approximation Rate of ReLU Networks in terms of Width and Depth
02/26/2019 – Nonlinear Approximation via Compositions
06/22/2020 – Deep Network Approximation with Discrepancy Being Reciprocal of Width to Power of Depth
08/09/2017 – Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
10/25/2020 – Neural Network Approximation: Three Hidden Layers Are Enough
10/31/2017 – Approximating Continuous Functions by ReLU Nets of Minimal Width
11/15/2021 – ReLU Network Approximation in Terms of Intrinsic Parameters
