Deep neural network approximations for Monte Carlo algorithms

08/28/2019
by Philipp Grohs, et al.

Recently, it has been proposed in the literature to employ deep neural networks (DNNs) together with stochastic gradient descent methods to approximate solutions of PDEs. There are also a few results in the literature which prove that DNNs can approximate solutions of certain PDEs without the curse of dimensionality, in the sense that the number of real parameters used to describe the DNN grows at most polynomially in both the PDE dimension and the reciprocal of the prescribed approximation accuracy. One key argument in most of these results is, first, to use a Monte Carlo approximation scheme which can approximate the solution of the PDE under consideration at a fixed space-time point without the curse of dimensionality and, thereafter, to prove that DNNs are flexible enough to mimic the behaviour of the employed approximation scheme. With this in mind, one could aim for a general abstract result which shows, under suitable assumptions, that if a certain function can be approximated by some kind of (Monte Carlo) approximation scheme without the curse of dimensionality, then this function can also be approximated by DNNs without the curse of dimensionality. It is a key contribution of this article to take a first step in this direction. In particular, the main result of this paper essentially shows that if a function can be approximated by means of some suitable discrete approximation scheme without the curse of dimensionality, and if there exist DNNs which satisfy certain regularity properties and which approximate this discrete approximation scheme without the curse of dimensionality, then the function itself can also be approximated by DNNs without the curse of dimensionality. As an application of this result, we establish that solutions of suitable Kolmogorov PDEs can be approximated by DNNs without the curse of dimensionality.
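
The key argument sketched above starts from a Monte Carlo scheme that approximates the PDE solution at a single space-time point with a cost that grows only polynomially in the dimension. The following is a minimal, hypothetical sketch (not taken from the paper) of such a scheme for a simple Kolmogorov PDE, namely the d-dimensional heat equation with the illustrative initial condition phi(x) = ||x||^2, using the Feynman-Kac representation u(t, x) = E[phi(x + sqrt(2t) W)] with a standard normal vector W; the PDE, the initial condition, and all function names and parameters are assumptions chosen here for illustration only.

```python
# A minimal sketch (not from the paper) of the kind of Monte Carlo scheme the
# abstract refers to: approximating the solution of a Kolmogorov PDE at a single
# space-time point, with a cost that grows only polynomially in the dimension d.
#
# Illustrative assumptions: the PDE is the d-dimensional heat equation
# du/dt = Laplace(u) with initial condition phi(x) = ||x||^2, for which the
# Feynman-Kac formula gives u(t, x) = E[phi(x + sqrt(2 t) W)] with W ~ N(0, I_d),
# and the exact value u(t, x) = ||x||^2 + 2 t d is available for comparison.

import numpy as np

def monte_carlo_heat(x, t, num_samples, rng):
    """Monte Carlo estimate of u(t, x) for the heat equation du/dt = Laplace(u)."""
    d = x.shape[0]
    # Sample W ~ N(0, I_d) and evaluate phi at the shifted points x + sqrt(2t) W.
    w = rng.standard_normal((num_samples, d))
    y = x[None, :] + np.sqrt(2.0 * t) * w
    phi = np.sum(y ** 2, axis=1)           # phi(y) = ||y||^2
    return phi.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, t = 100, 1.0                        # dimension and time, chosen for illustration
    x = np.ones(d)
    estimate = monte_carlo_heat(x, t, num_samples=10**5, rng=rng)
    exact = float(x @ x + 2.0 * t * d)     # closed-form solution for this particular phi
    print(f"d = {d}: MC estimate = {estimate:.2f}, exact = {exact:.2f}")
```

The cost of this estimator is num_samples draws of a d-dimensional Gaussian vector, i.e. linear in d for a fixed accuracy, which is the dimension-robustness that the paper's main result transfers from such a scheme to DNN approximations.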

Related research

06/03/2020
Space-time deep neural network approximations for high-dimensional partial differential equations
It is one of the most challenging issues in applied mathematics to appro...

06/29/2023
Efficient Sobolev approximation of linear parabolic PDEs in high dimensions
In this paper, we study the error in first order Sobolev norm in the app...

09/03/2022
From Monte Carlo to neural networks approximations of boundary value problems
In this paper we study probabilistic and neural network approximations f...

01/28/2023
Deep Operator Learning Lessens the Curse of Dimensionality for PDEs
Deep neural networks (DNNs) have seen tremendous success in many fields ...

12/29/2021
Deep neural network approximation theory for high-dimensional functions
The purpose of this article is to develop machinery to study the capacit...

10/24/2022
An efficient Monte Carlo scheme for Zakai equations
In this paper we develop a numerical method for efficiently approximatin...

11/22/2022
Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks
The past decade has seen increasing interest in applying Deep Learning (...
