Error bounds for approximations with deep ReLU neural networks in W^s,p norms

02/21/2019
by Ingo Gühring, et al.

We analyze approximation rates of deep ReLU neural networks for Sobolev-regular functions with respect to weaker Sobolev norms. First, based on a calculus of ReLU networks, we construct neural networks with ReLU activation functions that achieve certain approximation rates. Second, we establish lower bounds for the approximation of classes of Sobolev-regular functions by ReLU neural networks. Our results extend recent advances in the approximation theory of ReLU networks to the regime that is most relevant for applications in the numerical analysis of partial differential equations.
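To make the setting concrete, here is a minimal, hypothetical Python/NumPy sketch (not code or results from the paper). It realizes the piecewise-linear interpolant of a smooth function on [0, 1] as a one-hidden-layer ReLU network and numerically estimates its approximation error both in the L^2 norm and in the stronger W^{1,2} Sobolev norm. All function names are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def interpolant_as_relu_net(f, n):
    """One-hidden-layer ReLU network (width n) whose output equals the
    piecewise-linear interpolant of f at the nodes k/n, k = 0, ..., n."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    vals = f(nodes)
    slopes = np.diff(vals) * n                  # slope on each subinterval
    # Write the interpolant as f(0) + sum_k c_k * ReLU(x - x_k), where
    # c_k is the jump of the derivative at node x_k.
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
    def net(x):
        return vals[0] + relu(x[:, None] - nodes[:-1][None, :]) @ coeffs
    return net

f, df = np.sin, np.cos                          # target and its derivative
x = np.linspace(0.0, 1.0, 20001)

for n in (8, 16, 32, 64):
    net = interpolant_as_relu_net(f, n)
    y = net(x)
    dy = np.gradient(y, x)                      # derivative of the network
    err_l2 = np.sqrt(np.mean((f(x) - y) ** 2))                   # ~ L^2 error
    err_w12 = np.sqrt(err_l2**2 + np.mean((df(x) - dy) ** 2))    # ~ W^{1,2} error
    print(f"n={n:3d}  L2 ~ {err_l2:.2e}  W^{{1,2}} ~ {err_w12:.2e}")
```

Running this, the L^2 error decays roughly like n^-2 while the W^{1,2} error decays only like n^-1 (the classical rates for linear interpolation of a C^2 function in one dimension). This norm-dependent trade-off between network size and the smoothness index s of the error norm is the kind of behavior the paper quantifies rigorously.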


Related research

09/01/2021  Simultaneous Neural Network Approximations in Sobolev Spaces
We establish in this work approximation results of deep neural networks ...

04/18/2021  On the approximation of functions by tanh neural networks
We derive bounds on the error, in high-order Sobolev norms, incurred in ...

06/30/2020  Approximation Rates for Neural Networks with Encodable Weights in Smoothness Spaces
We examine the necessary and sufficient complexity of neural networks to...

05/13/2019  Towards a regularity theory for ReLU networks -- chain rule and global error estimates
Although for neural networks with locally Lipschitz continuous activatio...

03/18/2019  On-line learning dynamics of ReLU neural networks using statistical physics techniques
We introduce exact macroscopic on-line learning dynamics of two-layer ne...

05/31/2023  On the Expressive Power of Neural Networks
In 1989 George Cybenko proved in a landmark paper that wide shallow neur...

09/09/2019  Optimal Function Approximation with Relu Neural Networks
We consider in this paper the optimal approximations of convex univariat...
