Error bounds for deep ReLU networks using the Kolmogorov--Arnold superposition theorem

06/27/2019
by Hadrien Montanelli, et al.

We prove a theorem concerning the approximation of multivariate continuous functions by deep ReLU networks, for which the curse of dimensionality is lessened. Our theorem is based on the Kolmogorov--Arnold superposition theorem and on approximating, by very deep ReLU networks, the inner and outer functions that appear in the superposition.
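For context, the Kolmogorov--Arnold superposition theorem states that every continuous function f on [0,1]^d can be written exactly as a superposition of continuous univariate functions. In a standard formulation (the symbols \Phi_q and \phi_{q,p} below follow common convention and are not taken from this abstract),

    f(x_1, \dots, x_d) = \sum_{q=0}^{2d} \Phi_q\left( \sum_{p=1}^{d} \phi_{q,p}(x_p) \right),

where the univariate \phi_{q,p} are the inner functions and the univariate \Phi_q are the outer functions; these are the functions that the paper approximates by very deep ReLU networks.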


Related research

10/10/2018: Random ReLU Features: Universality, Approximation, and Composition. "We propose random ReLU features models in this work. Its motivation is r..."

09/01/2021: Approximation Properties of Deep ReLU CNNs. "This paper is devoted to establishing L^2 approximation properties for d..."

02/10/2018: Optimal approximation of continuous functions by very deep ReLU networks. "We prove that deep ReLU neural networks with conventional fully-connecte..."

05/30/2019: Function approximation by deep networks. "We show that deep networks are better than shallow networks at approxima..."

07/31/2020: The Kolmogorov-Arnold representation theorem revisited. "There is a longstanding debate whether the Kolmogorov-Arnold representat..."

02/28/2023: A multivariate Riesz basis of ReLU neural networks. "We consider the trigonometric-like system of piecewise linear functions ..."

08/10/2023: On the Optimal Expressive Power of ReLU DNNs and Its Application in Approximation with Kolmogorov Superposition Theorem. "This paper is devoted to studying the optimal expressive power of ReLU d..."
