Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks

11/22/2022
by Ben Adcock, et al.

The past decade has seen increasing interest in applying Deep Learning (DL) to Computational Science and Engineering (CSE). Driven by impressive results in applications such as computer vision, Uncertainty Quantification (UQ), genetics, simulation, and image processing, DL is increasingly supplanting classical algorithms and seems poised to revolutionize scientific computing. However, DL is not yet well understood from the standpoint of numerical analysis: little is known about its efficiency and reliability in terms of stability, robustness, accuracy, and sample complexity.

A particularly important task in UQ for CSE is approximating solutions to parametric PDEs. Training data for such problems is often scarce and corrupted by errors. Moreover, the target function is a possibly infinite-dimensional smooth function taking values in the PDE solution space, which is generally an infinite-dimensional Banach space.

This paper provides arguments for Deep Neural Network (DNN) approximation of such functions, with both known and unknown parametric dependence, that overcome the curse of dimensionality. We establish practical existence theorems describing classes of DNNs with dimension-independent architecture size, together with training procedures based on minimizing the (regularized) ℓ²-loss, that achieve near-optimal algebraic rates of convergence. These results rest on key extensions of compressed sensing to Banach-valued recovery and on polynomial emulation with DNNs. When approximating solutions of parametric PDEs, our results account for all sources of error, i.e., sampling, optimization, approximation, and physical discretization, and allow for training high-fidelity DNN approximations from coarse-grained sample data. Our theoretical results fall into the category of non-intrusive methods, providing a theoretical alternative to classical methods for high-dimensional approximation.
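To make the error-accounting claim concrete, a bound of this kind typically splits the total error into the four named sources. The following schematic inequality is an illustration only, not the paper's precise statement; the labels E_app, E_samp, E_opt, E_disc are introduced here for exposition:

```latex
\[
\| f - \hat{f} \|
\;\lesssim\;
\underbrace{E_{\mathrm{app}}}_{\text{DNN approximation}}
+ \underbrace{E_{\mathrm{samp}}}_{\text{finite, noisy samples}}
+ \underbrace{E_{\mathrm{opt}}}_{\text{inexact optimization}}
+ \underbrace{E_{\mathrm{disc}}}_{\text{physical discretization}}
\]
```

The training procedure referred to above, minimizing a (regularized) ℓ²-loss over a fixed DNN architecture, can be sketched as follows. This is a minimal sketch assuming PyTorch: `sample_target` is a hypothetical stand-in for coarse-grained solves of a parametric PDE, and the generic ReLU network below is not the specific architecture class of the paper's existence theorems.

```python
import torch

torch.manual_seed(0)

d, K, m = 10, 50, 200  # parameter dimension, discretization size, sample budget


def sample_target(x):
    # Hypothetical stand-in for a parametric PDE solve: maps a parameter
    # vector x in [-1, 1]^d to a discretized solution vector in R^K.
    weights = torch.linspace(1.0, 0.1, x.shape[-1])   # anisotropic dependence
    modes = torch.arange(1, K + 1, dtype=x.dtype)
    return torch.sin(x @ weights.unsqueeze(1) + modes) / modes


X = 2.0 * torch.rand(m, d) - 1.0  # i.i.d. uniform parameter samples
Y = sample_target(X)              # solution snapshots (the training data)

# A generic fixed-width, fixed-depth ReLU network from R^d to R^K.
model = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, K),
)

# Regularized l2-loss: mean-squared data fit plus an l2 penalty on the
# network weights, implemented here via weight decay.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)
for step in range(2000):
    opt.zero_grad()
    loss = torch.mean((model(X) - Y) ** 2)
    loss.backward()
    opt.step()

# Held-out error estimate for the trained surrogate.
X_test = 2.0 * torch.rand(1000, d) - 1.0
with torch.no_grad():
    err = torch.mean((model(X_test) - sample_target(X_test)) ** 2).sqrt()
print(f"approximate RMS test error: {err:.3e}")
```

In the paper's setting, the regularization and the architecture class are chosen so that the trained network provably attains near-optimal algebraic convergence rates; the sketch above only mirrors the shape of the training problem.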

Related research

research · 12/11/2020
Deep Neural Networks Are Effective At Learning High-Dimensional Hilbert-Valued Functions From Limited Data
The accurate approximation of scalar-valued functions from sample points...

research · 03/25/2022
On efficient algorithms for computing near-best polynomial approximations to high-dimensional, Hilbert-valued functions from limited samples
Sparse polynomial approximation has become indispensable for approximati...

research · 01/16/2020
The gap between theory and practice in function approximation with deep neural networks
Deep learning (DL) is transforming whole industries as complicated decis...

research · 05/29/2023
Optimal approximation of infinite-dimensional holomorphic functions
Over the last decade, approximating functions in infinite dimensions fro...

research · 08/25/2022
CAS4DL: Christoffel Adaptive Sampling for function approximation via Deep Learning
The problem of approximating smooth, multivariate functions from sample ...

research · 08/28/2019
Deep neural network approximations for Monte Carlo algorithms
Recently, it has been proposed in the literature to employ deep neural n...
