Deep neural network approximation of composite functions without the curse of dimensionality

04/12/2023
by   Adrian Riekert, et al.

In this article we identify a general class of high-dimensional continuous functions that can be approximated by deep neural networks (DNNs) with the rectified linear unit (ReLU) activation without the curse of dimensionality. In other words, the number of DNN parameters grows at most polynomially in the input dimension and the reciprocal of the approximation accuracy. The functions in our class can be expressed as a potentially unbounded number of compositions of special functions, which include products, maxima, and certain parallelized Lipschitz continuous functions.
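To give a flavor of the composition arguments involved, the following sketch (an illustration under our own assumptions, not code from the paper) shows how a small ReLU network represents the maximum of two numbers exactly, and how composing such blocks in a binary tree computes the maximum of d inputs with a parameter count that grows only linearly in d. The helper names `relu_max` and `relu_max_tree` are hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_max(x, y):
    """Exact ReLU representation of max(x, y).

    Uses the identity max(x, y) = relu(x - y) + relu(y) - relu(-y):
    a hidden layer with 3 ReLU units followed by a linear output layer
    with weights (1, 1, -1).
    """
    hidden = relu(np.array([x - y, y, -y]))
    return hidden @ np.array([1.0, 1.0, -1.0])

def relu_max_tree(values):
    """Maximum of d numbers via a binary tree of relu_max blocks.

    The depth grows like log2(d) and the total number of ReLU units
    grows linearly in d, so the parameter count of the composed network
    stays polynomial in the input dimension.
    """
    values = list(values)
    while len(values) > 1:
        paired = []
        for i in range(0, len(values) - 1, 2):
            paired.append(relu_max(values[i], values[i + 1]))
        if len(values) % 2 == 1:  # odd element carried to the next level
            paired.append(values[-1])
        values = paired
    return values[0]

# Usage: agrees with numpy's built-in maximum on random inputs.
x = np.random.randn(7)
assert np.isclose(relu_max_tree(x), np.max(x))
```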

Related research

12/29/2021 · Deep neural network approximation theory for high-dimensional functions
The purpose of this article is to develop machinery to study the capacit...

12/18/2021 · The Kolmogorov Superposition Theorem can Break the Curse of Dimensionality When Approximating High Dimensional Functions
We explain how to use Kolmogorov's Superposition Theorem (KST) to overco...

01/28/2021 · Deep ReLU Network Expression Rates for Option Prices in high-dimensional, exponential Lévy models
We study the expression rates of deep neural networks (DNNs for short) f...

02/23/2021 · Deep ReLU Neural Network Approximation for Stochastic Differential Equations with Jumps
Deep neural networks (DNNs) with ReLU activation function are proved to ...

01/23/2020 · Overcoming the curse of dimensionality for approximating Lyapunov functions with deep neural networks under a small-gain condition
We propose a deep neural network architecture for storing approximate Ly...

12/23/2021 · Optimal learning of high-dimensional classification problems using deep neural networks
We study the problem of learning classification functions from noiseless...
