Deep Network Approximation for Smooth Functions

01/09/2020
by   Jianfeng Lu, et al.

This paper establishes an optimal characterization of the approximation error of deep ReLU networks for smooth functions in terms of both width and depth simultaneously. To that end, we first prove that multivariate polynomials can be approximated by deep ReLU networks of width O(N) and depth O(L) with an approximation error O(N^{-L}). Through local Taylor expansions and their deep ReLU network approximations, we show that deep ReLU networks of width O(N ln N) and depth O(L ln L) can approximate f ∈ C^s([0,1]^d) with a nearly optimal approximation rate O(‖f‖_{C^s([0,1]^d)} N^{-2s/d} L^{-2s/d}). Our estimate is non-asymptotic in the sense that it is valid for arbitrary width and depth specified by N ∈ ℕ^+ and L ∈ ℕ^+, respectively.
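The polynomial-approximation step relies on the fact that deep ReLU networks can reproduce x^2 with error decaying exponentially in depth. As an illustration of this phenomenon (a minimal sketch of the classic sawtooth composition, not the paper's exact construction; the names `relu`, `tooth`, and `approx_square` are ours), the depth-m approximant below attains sup-norm error 2^{-2m-2} on [0,1]:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tooth(x):
    # Hat function g(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1],
    # realized with two ReLU units on inputs in [0, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def approx_square(x, m):
    # Depth-m ReLU approximation of x^2 on [0, 1]:
    # x^2 ≈ x - sum_{k=1}^{m} g∘...∘g(x) / 4^k  (k-fold composition).
    out = x.copy()
    g = x.copy()
    for k in range(1, m + 1):
        g = tooth(g)          # one more "sawtooth" layer
        out = out - g / 4.0**k
    return out

xs = np.linspace(0.0, 1.0, 10001)
errors = [np.max(np.abs(approx_square(xs, m) - xs**2)) for m in range(1, 7)]
```

Each additional tooth layer costs only two ReLU units yet shrinks the error by a factor of 4, which is the depth-efficiency the width/depth trade-off in the abstract quantifies.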

Related research

09/01/2021: Simultaneous Neural Network Approximations in Sobolev Spaces
We establish in this work approximation results of deep neural networks ...

06/05/2018: The universal approximation power of finite-width deep ReLU networks
We show that finite-width deep ReLU neural networks yield rate-distortio...

03/24/2018: Posterior Concentration for Sparse Deep Learning
Spike-and-Slab Deep Learning (SS-DL) is a fully Bayesian alternative to ...

06/07/2020: Sharp Representation Theorems for ReLU Networks with Precise Dependence on Depth
We prove sharp dimension-free representation results for neural networks...

02/28/2019: A lattice-based approach to the expressivity of deep ReLU neural networks
We present new families of continuous piecewise linear (CPWL) functions ...

07/28/2021: Neural Network Approximation of Refinable Functions
In the desire to quantify the success of neural networks in deep learnin...

02/06/2020: Duality of Width and Depth of Neural Networks
Here, we report that the depth and the width of a neural network are dua...
