
Efficient Approximation of Solutions of Parametric Linear Transport Equations by ReLU DNNs

by Fabian Laakmann, et al.

We demonstrate that deep neural networks with the ReLU activation function can efficiently approximate the solutions of various types of parametric linear transport equations. For non-smooth initial conditions, the solutions of these PDEs are high-dimensional and non-smooth, so their approximation suffers from a curse of dimensionality. We demonstrate that, through their inherent compositionality, deep neural networks can resolve the characteristic flow underlying the transport equations and thereby achieve approximation rates independent of the parameter dimension.
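To make the compositional structure concrete, here is a minimal sketch (not taken from the paper) of the method-of-characteristics idea the abstract alludes to, for the 1-D linear transport equation u_t + a·u_x = 0. A non-smooth hat-shaped initial condition is written exactly as a shallow ReLU network, and for each fixed transport speed a the characteristic map (x, t) ↦ x − a·t is affine, so the exact solution u(x, t) = u0(x − a·t) is again a ReLU network; the names `u0` and `transport_solution` are illustrative, not from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Non-smooth hat-shaped initial condition, expressed exactly as a
# one-hidden-layer ReLU network with three neurons:
#   u0(x) = relu(x) - 2*relu(x - 0.5) + relu(x - 1)
# (piecewise linear, peak value 0.5 at x = 0.5, supported on [0, 1]).
def u0(x):
    return relu(x) - 2.0 * relu(x - 0.5) + relu(x - 1.0)

# Method of characteristics for u_t + a * u_x = 0: the exact solution is
# the initial condition transported along characteristics,
#   u(x, t; a) = u0(x - a * t).
# For fixed a the inner map (x, t) -> x - a*t is affine, so the
# composition is itself an exact (deep) ReLU network in (x, t).
def transport_solution(x, t, a):
    return u0(x - a * t)

if __name__ == "__main__":
    x = np.linspace(-1.0, 3.0, 9)
    # At t = 1 with speed a = 1 the hat profile is shifted one unit right.
    print(transport_solution(x, t=1.0, a=1.0))
```

This only illustrates the fixed-parameter case; handling the full parameter dependence (the product a·t is not affine in (x, t, a) jointly) is where the approximation results of the paper come in.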




Deep ReLU Neural Network Approximation for Stochastic Differential Equations with Jumps

Deep neural networks (DNNs) with ReLU activation function are proved to ...

Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces

We study the problem of how efficiently, in terms of the number of param...

Deep neural network approximation for high-dimensional parabolic Hamilton-Jacobi-Bellman equations

The approximation of solutions to second order Hamilton–Jacobi–Bellman (...

Compositional Sparsity, Approximation Classes, and Parametric Transport Equations

Approximating functions of a large number of variables poses particular ...

Deep ReLU Network Expression Rates for Option Prices in high-dimensional, exponential Lévy models

We study the expression rates of deep neural networks (DNNs for short) f...

ReLU Deep Neural Networks from the Hierarchical Basis Perspective

We study ReLU deep neural networks (DNNs) by investigating their connect...

Neural tangent kernels, transportation mappings, and universal approximation

This paper establishes rates of universal approximation for the shallow ...