Higher-order Quasi-Monte Carlo Training of Deep Neural Networks

09/06/2020
by M. Longo et al.

We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design. Our analysis reveals higher-order consistent, deterministic choices of training points in the input data space for deep and shallow neural networks with holomorphic activation functions such as tanh. These training points are proven to yield higher-order decay, in terms of the number of training samples, of the underlying generalization error, with consistency error bounds that are free from the curse of dimensionality in the input data space, provided the DNN weights in the hidden layers satisfy certain summability conditions. Numerical experiments for DtO maps arising from elliptic and parabolic PDEs with uncertain inputs confirm the theoretical analysis.
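Although the paper's higher-order QMC constructions are not reproduced here, the training pipeline the abstract describes, replacing i.i.d. random training samples with a deterministic low-discrepancy point set when fitting a tanh-activated surrogate of a DtO map, can be illustrated with off-the-shelf tools. The following is a minimal sketch, not the authors' implementation: it uses scipy's Sobol' sequence as a stand-in for the higher-order QMC point sets analyzed in the paper, and a hypothetical dto_map in place of an actual PDE observable; the dimension, network width, and optimizer settings are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): train a tanh DNN surrogate
# of a Data-to-Observable map on deterministic QMC training points.
import numpy as np
import torch
from scipy.stats import qmc  # requires scipy >= 1.7

d = 4  # dimension of the input data space (illustrative)
m = 8  # Sobol' points come in powers of two: N = 2**m = 256

# Deterministic QMC training points in [0, 1]^d (Sobol' as a stand-in for
# the higher-order QMC rules analyzed in the paper).
x_train = torch.tensor(
    qmc.Sobol(d=d, scramble=False).random_base2(m=m), dtype=torch.float32
)

# Hypothetical smooth DtO map standing in for a PDE observable.
def dto_map(x):
    return torch.sin(np.pi * x.sum(dim=1, keepdim=True))

y_train = dto_map(x_train)

# Tanh (holomorphic) activations, matching the setting of the analysis.
model = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):  # full-batch least-squares fit
    opt.zero_grad()
    loss = torch.mean((model(x_train) - y_train) ** 2)
    loss.backward()
    opt.step()
```

Swapping x_train for N i.i.d. uniform samples gives the random-sampling baseline; the paper's analysis concerns how much faster the generalization error decays in N for the deterministic QMC choice.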


Related research

03/07/2019
Deep learning observables in computational fluid dynamics
Many large scale problems in computational fluid dynamics such as uncert...

09/29/2021
Multilevel Quasi-Monte Carlo for Optimization under Uncertainty
This paper considers the problem of optimizing the average tracking erro...

10/11/2019
The Expressivity and Training of Deep Neural Networks: toward the Edge of Chaos?
Expressivity is one of the most significant issues in assessing neural n...

09/03/2022
From Monte Carlo to neural networks approximations of boundary value problems
In this paper we study probabilistic and neural network approximations f...

05/18/2018
AlphaX: eXploring Neural Architectures with Deep Neural Networks and Monte Carlo Tree Search
We present AlphaX, a fully automated agent that designs complex neural a...

10/09/2017
New Insights into History Matching via Sequential Monte Carlo
The aim of the history matching method is to locate non-implausible regi...

02/14/2021
Multi-Level Fine-Tuning: Closing Generalization Gaps in Approximation of Solution Maps under a Limited Budget for Training Data
In scientific machine learning, regression networks have been recently a...
