Optimal approximation of continuous functions by very deep ReLU networks

02/10/2018
by Dmitry Yarotsky

We prove that deep ReLU neural networks with conventional fully-connected architectures with W weights can approximate continuous ν-variate functions f with uniform error not exceeding a_ν ω_f(c_ν W^{-2/ν}), where ω_f is the modulus of continuity of f and a_ν, c_ν are ν-dependent constants. This bound is tight. Our construction is inherently deep and nonlinear: the obtained approximation rate cannot be achieved by networks with fewer than Ω(W / ln W) layers or by networks with weights continuously depending on f.
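
For intuition about what the bound implies, consider a Lipschitz function f with constant L, so that ω_f(t) ≤ L·t and the error bound becomes a_ν L c_ν W^{-2/ν}. The short sketch below is not the paper's construction; the constants a_ν, c_ν and L are unspecified in the abstract and are set to 1 purely to illustrate how the bound scales with the weight budget W and the input dimension ν.

```python
# Illustrative sketch (not the paper's construction): how the uniform error
# bound a_nu * omega_f(c_nu * W**(-2/nu)) scales for a Lipschitz function f,
# where omega_f(t) <= L * t.
# a_nu, c_nu, L are unknown nu-dependent constants from the theorem;
# they are set to 1 here only to expose the W**(-2/nu) decay.

def error_bound(W: int, nu: int, a_nu: float = 1.0, c_nu: float = 1.0,
                L: float = 1.0) -> float:
    """Upper bound on the uniform approximation error for a Lipschitz f."""
    return a_nu * L * c_nu * W ** (-2.0 / nu)

if __name__ == "__main__":
    for nu in (1, 2, 4, 8):
        bounds = [error_bound(W, nu) for W in (10**3, 10**4, 10**5)]
        # For fixed nu, multiplying W by 10 shrinks the bound by 10**(2/nu);
        # larger nu (more input variables) makes the decay much slower.
        print(f"nu={nu}: " + ", ".join(f"{b:.3e}" for b in bounds))
```

At fixed ν, halving this bound requires growing W by a factor of 2^{ν/2}, which is the curse-of-dimensionality tradeoff encoded in the exponent 2/ν.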


Related research

06/27/2022 · Expressive power of binary and ternary neural networks
We show that deep sparse ReLU networks with ternary weights and deep ReL...

06/27/2019 · Error bounds for deep ReLU networks using the Kolmogorov–Arnold superposition theorem
We prove a theorem concerning the approximation of multivariate continuo...

02/26/2019 · Nonlinear Approximation via Compositions
We study the approximation efficiency of function compositions in nonlin...

10/15/2019 · Neural tangent kernels, transportation mappings, and universal approximation
This paper establishes rates of universal approximation for the shallow ...

06/30/2023 · Efficient uniform approximation using Random Vector Functional Link networks
A Random Vector Functional Link (RVFL) network is a depth-2 neural netwo...

11/20/2021 · SPINE: Soft Piecewise Interpretable Neural Equations
ReLU fully-connected networks are ubiquitous but uninterpretable because...

01/11/2023 · Exploring the Approximation Capabilities of Multiplicative Neural Networks for Smooth Functions
Multiplication layers are a key component in various influential neural ...
