Deep Neural Networks Are Effective At Learning High-Dimensional Hilbert-Valued Functions From Limited Data

12/11/2020
by Ben Adcock, et al.

The accurate approximation of scalar-valued functions from sample points is a key task in mathematical modeling and computational science. Recently, machine learning techniques based on Deep Neural Networks (DNNs) have begun to emerge as promising tools for function approximation in scientific computing, with impressive results achieved on problems where the dimension of the underlying data or problem domain is large. In this work, we broaden this perspective by focusing on the approximation of functions that are Hilbert-valued, i.e. they take values in a separable, but typically infinite-dimensional, Hilbert space. This setting arises in many science and engineering applications, in particular those involving the solution of parametric Partial Differential Equations (PDEs). Such problems are challenging for three reasons. First, pointwise samples are expensive to acquire. Second, the domain of the function is usually high dimensional. Third, the range lies in a Hilbert space. Our contributions are twofold. First, we present a novel result on DNN training for holomorphic functions with so-called hidden anisotropy. This result introduces a DNN training procedure and a full theoretical analysis with explicit guarantees on the error and sample complexity. The error bound is explicit in the three key errors incurred in the approximation procedure: the best approximation error, the measurement error, and the physical discretization error. Our result shows that there is a procedure for learning Hilbert-valued functions via DNNs that performs as well as current best-in-class schemes. Second, we provide preliminary numerical results illustrating the practical performance of DNNs on Hilbert-valued functions arising as solutions to parametric PDEs. We consider different parameter regimes, adjust the DNN architecture to achieve results competitive with current best-in-class schemes, and compare the two approaches.
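To make the setup concrete, here is a minimal, hypothetical sketch (not the paper's actual training procedure or architecture) of learning a discretized Hilbert-valued function from limited samples: a small fully-connected network maps parameters y ∈ R^d to K values representing a function on a grid, trained by plain gradient descent on a mean-squared loss. The toy target, the anisotropy weights, the network size, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: approximate a smooth, vector-valued target
# f: R^d -> R^K from m pointwise samples, where the K outputs stand in
# for the values of a discretized Hilbert-space element (e.g., a PDE
# solution on a physical grid).
d, K, m = 4, 16, 200

def target(Y):
    # Toy holomorphic target with decaying input weights, mimicking
    # the "hidden anisotropy" the abstract refers to: later parameter
    # coordinates matter progressively less.
    weights = 2.0 ** -np.arange(d)
    weights /= weights.sum()                 # latent u = Y @ weights in [-1, 1]
    x = np.linspace(0.0, 1.0, K)             # grid discretizing the range space
    return np.exp(-np.outer(Y @ weights, x)) # (m, K) Hilbert-valued samples

Y_train = rng.uniform(-1.0, 1.0, size=(m, d))
U_train = target(Y_train)

# One-hidden-layer tanh network, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(d, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.5, size=(64, K)); b2 = np.zeros(K)

lr = 0.1
for step in range(5000):
    H = np.tanh(Y_train @ W1 + b1)           # (m, 64) hidden activations
    R = H @ W2 + b2 - U_train                # (m, K) residual
    # Backpropagation of the mean-squared loss, written out by hand.
    gW2 = H.T @ R / m; gb2 = R.mean(axis=0)
    dH = (R @ W2.T) * (1.0 - H**2)
    gW1 = Y_train.T @ dH / m; gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Relative (discrete L2) error on held-out parameter samples.
Y_test = rng.uniform(-1.0, 1.0, size=(500, d))
U_test = target(Y_test)
P_test = np.tanh(Y_test @ W1 + b1) @ W2 + b2
rel_err = np.linalg.norm(P_test - U_test) / np.linalg.norm(U_test)
print(f"relative error: {rel_err:.3f}")
```

The point of the sketch is the problem shape, not the method: limited samples (m small), a high-dimensional-in-spirit input domain, and a vector-valued output standing in for a Hilbert-space element. The paper's analysis concerns far more carefully constructed training procedures with explicit error and sample-complexity guarantees.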


