
Efficient Approximation of Solutions of Parametric Linear Transport Equations by ReLU DNNs

01/30/2020
by Fabian Laakmann, et al.

We demonstrate that deep neural networks with the ReLU activation function can efficiently approximate the solutions of various types of parametric linear transport equations. Because the problems are parametric, these solutions are high-dimensional, and for non-smooth initial conditions they are also non-smooth, so their approximation suffers from the curse of dimensionality. We show that, through their inherent compositionality, deep neural networks can resolve the characteristic flow underlying the transport equations and thereby achieve approximation rates that are independent of the parameter dimension.
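To illustrate the compositional structure the abstract refers to, here is a minimal worked sketch assuming a generic constant-coefficient parametric transport equation; this particular form is chosen for exposition and is not necessarily the exact setting of the paper.

```latex
% Minimal illustrative sketch (assumed generic form, not necessarily the
% paper's exact setting): a parametric linear transport equation whose
% solution is the composition of the initial condition with the
% characteristic flow.
\[
  \partial_t u(x,t,y) + a(y) \cdot \nabla_x u(x,t,y) = 0,
  \qquad u(x,0,y) = u_0(x),
\]
% where y is the parameter vector and a(y) the parameter-dependent velocity.
% The method of characteristics yields
\[
  u(x,t,y) = u_0\big(x - t\, a(y)\big).
\]
% A ReLU network that approximates the flow map (x,t,y) \mapsto x - t\, a(y)
% and is composed with an approximation of u_0 mirrors this structure:
% the non-smoothness is confined to u_0, while the parameter dependence
% enters only through the (smooth) flow map, which is how rates independent
% of the parameter dimension can arise.
```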

Related Research

02/23/2021 · Deep ReLU Neural Network Approximation for Stochastic Differential Equations with Jumps
Deep neural networks (DNNs) with ReLU activation function are proved to ...

11/25/2022 · Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces
We study the problem of how efficiently, in terms of the number of param...

03/09/2021 · Deep neural network approximation for high-dimensional parabolic Hamilton-Jacobi-Bellman equations
The approximation of solutions to second order Hamilton–Jacobi–Bellman (...

07/13/2022 · Compositional Sparsity, Approximation Classes, and Parametric Transport Equations
Approximating functions of a large number of variables poses particular ...

01/28/2021 · Deep ReLU Network Expression Rates for Option Prices in high-dimensional, exponential Lévy models
We study the expression rates of deep neural networks (DNNs for short) f...

05/10/2021 · ReLU Deep Neural Networks from the Hierarchical Basis Perspective
We study ReLU deep neural networks (DNNs) by investigating their connect...

10/15/2019 · Neural tangent kernels, transportation mappings, and universal approximation
This paper establishes rates of universal approximation for the shallow ...