
Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces

by Jonathan W. Siegel, et al.
Texas A&M University

We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev space W^s(L_q(Ω)) on a bounded domain Ω, where the error is measured in L_p(Ω). This problem is important for studying the application of neural networks in scientific computing and has previously been solved only in the case p=q=∞. Our contribution is to provide a complete solution for all 1≤ p,q≤∞ and s > 0. Our results show that deep ReLU networks significantly outperform classical methods of approximation, but that this comes at the cost of parameters which are not encodable, i.e., whose values cannot be represented with a bounded number of bits.
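To make the approximation power of deep ReLU networks concrete, the following sketch implements a classical construction (due to Yarotsky, not the construction of this paper): the square function x² on [0,1] is approximated by composing a "hat" function, itself an exact three-neuron ReLU network, so that a network of depth m achieves uniform error 4^{-(m+1)}. This exponential-in-depth accuracy is the kind of behavior that separates deep ReLU networks from classical linear methods. The function names `hat` and `approx_square` are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # The hat (tent) function on [0,1], written as a width-3 ReLU layer:
    # hat(x) = 2x on [0, 1/2] and 2(1-x) on [1/2, 1].
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def approx_square(x, depth):
    # Yarotsky's telescoping identity: x^2 = x - sum_{k>=1} g_k(x) / 4^k,
    # where g_k is the k-fold composition of the hat function.
    # Truncating after `depth` terms gives a ReLU network of depth O(depth)
    # with uniform error 4^{-(depth+1)} on [0, 1].
    g = x
    out = x
    for k in range(1, depth + 1):
        g = hat(g)           # one more composition, one more layer
        out = out - g / 4**k
    return out

xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(approx_square(xs, 6) - xs**2))
# err is bounded by 4**(-7): error halves twice per extra layer
```

The key design point is that accuracy is gained through *composition* (depth) rather than width: each extra layer doubles the number of linear pieces of the approximant, which no shallow network of comparable size can match for smooth targets.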



