Model Reduction and Neural Networks for Parametric PDEs

by   Kaushik Bhattacharya, et al.

We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. Numerically we demonstrate the effectiveness of the method on a class of parametric elliptic PDE problems, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare our method with existing algorithms from the literature.
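The core idea of the abstract, combining model reduction with a learned map between reduced spaces, can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the data is synthetic, the solution map is an invented stand-in, and the neural network between reduced coordinates is replaced by polynomial least squares so the sketch stays dependency-free beyond NumPy. The pattern (reduce input, regress in coordinates, reconstruct output) is the part that carries over.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: input fields a(x) and output fields u(x) sampled on a grid.
# The "solution map" a -> u below is a made-up nonlinear map for illustration.
n, n_samples, d = 128, 200, 3
x = np.linspace(0.0, 1.0, n)
coeffs = rng.normal(size=(n_samples, d))
A = np.stack([np.sin((k + 1) * np.pi * x) for k in range(d)])  # (d, n) basis fields
inputs = coeffs @ A            # samples of the input function a(x)
outputs = (coeffs ** 2) @ A    # stand-in nonlinear map a(x) -> u(x)

def pca(data, d):
    """Mean and top-d principal directions of `data` (rows are samples)."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:d]        # shapes (n,) and (d, n)

# Model reduction: project both input and output spaces onto PCA bases,
# so the learned map acts between low-dimensional coordinate spaces.
a_mean, Va = pca(inputs, d)
u_mean, Vu = pca(outputs, d)
za = (inputs - a_mean) @ Va.T  # reduced input coordinates, (n_samples, d)
zu = (outputs - u_mean) @ Vu.T

def features(z):
    """Quadratic features; a neural network would replace this in practice."""
    cross = [z[:, [i]] * z[:, [j]] for i in range(d) for j in range(i, d)]
    return np.hstack([np.ones((z.shape[0], 1)), z] + cross)

# Fit the map between reduced coordinates by least squares.
W, *_ = np.linalg.lstsq(features(za), zu, rcond=None)

def predict(a_new):
    """Approximate the solution map: reduce, regress, reconstruct."""
    z = (a_new - a_mean) @ Va.T
    return u_mean + (features(z) @ W) @ Vu

rel_err = np.linalg.norm(predict(inputs) - outputs) / np.linalg.norm(outputs)
```

Because the regression operates only on reduced coordinates, the trained map is independent of the grid resolution `n`, which mirrors the discretization robustness the abstract claims for the neural network version.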


The Random Feature Model for Input-Output Maps between Banach Spaces

Well known to the machine learning community, the random feature model, ...

Approximation of Functionals by Neural Network without Curse of Dimensionality

In this paper, we establish a neural network to approximate functionals,...

Nonlinear approximation of high-dimensional anisotropic analytic functions

Motivated by nonlinear approximation results for classes of parametric p...

Category coding with neural network application

In many applications of neural network, it is common to introduce huge a...

Simplicial approximation to CW complexes in practice

We describe an algorithm that takes as an input a CW complex and returns...

Non-Euclidean Universal Approximation

Modifications to a neural network's input and output layers are often re...