
Higher-order Quasi-Monte Carlo Training of Deep Neural Networks

by M. Longo, et al.

We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design. Our analysis reveals higher-order consistent, deterministic choices of training points in the input data space for deep and shallow neural networks with holomorphic activation functions such as tanh. These novel training points are proved to facilitate higher-order decay (in terms of the number of training samples) of the underlying generalization error, with consistency error bounds that are free from the curse of dimensionality in the input data space, provided that the DNN weights in the hidden layers satisfy certain summability conditions. We present numerical experiments for DtO maps from elliptic and parabolic PDEs with uncertain inputs that confirm the theoretical analysis.
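To make the idea of deterministic, low-discrepancy training points concrete, here is a minimal sketch. The paper's construction uses higher-order QMC rules; as a simple stand-in we generate a classical Halton sequence (a first-order low-discrepancy point set) in the unit cube, which would serve as the deterministic training inputs for the surrogate. All function names below are illustrative and not from the paper.

```python
# Sketch: deterministic Quasi-Monte Carlo training points for a DNN surrogate.
# Stand-in for the paper's higher-order QMC rules: a classical Halton sequence.

def radical_inverse(index, base):
    """Van der Corput radical inverse of `index` in the given base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_sequence(n, dim):
    """First n Halton points in [0,1]^dim, one prime base per coordinate."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:dim]
    return [[radical_inverse(i, b) for b in primes] for i in range(1, n + 1)]

# These deterministic points x_i would be the training inputs; the DNN
# surrogate is then fit to observable values G(x_i) of the DtO map.
points = halton_sequence(16, 2)
```

In contrast to i.i.d. random sampling, such point sets fill the input space more uniformly, which is the mechanism behind the improved decay of the generalization error analyzed in the paper.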
