Derivative-Informed Projected Neural Networks for High-Dimensional Parametric Maps Governed by PDEs

11/30/2020
by Thomas O'Leary-Roseberry, et al.

Many-query problems, which arise in uncertainty quantification, Bayesian inversion, Bayesian optimal experimental design, and optimization under uncertainty, require numerous evaluations of a parameter-to-output map. These evaluations become prohibitive if the parametric map is high-dimensional and involves expensive solution of partial differential equations (PDEs). To tackle this challenge, we propose to construct surrogates for high-dimensional PDE-governed parametric maps in the form of projected neural networks that parsimoniously capture the geometry and intrinsic low-dimensionality of these maps. Specifically, we compute Jacobians of these PDE-based maps and project the high-dimensional parameters onto a low-dimensional derivative-informed active subspace; we also project the possibly high-dimensional outputs onto their principal subspace. This exploits the fact that many high-dimensional PDE-governed parametric maps can be well approximated in low-dimensional parameter and output subspaces. We use the projection basis vectors of the active subspace and of the principal output subspace to construct the weights of the first and last layers of the neural network, respectively. This frees us to train only the weights of the low-dimensional inner layers of the network. The architecture of the resulting neural network captures, to first order, the low-dimensional structure and geometry of the parametric map. We demonstrate that the proposed projected neural network achieves greater generalization accuracy than a full neural network, especially in the limited training data regime afforded by expensive PDE-based parametric maps. Moreover, we show that the number of degrees of freedom of the inner layers of the projected network is independent of the parameter and output dimensions, so that high accuracy can be achieved with a weight dimension independent of the discretization dimension.
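The construction described above reduces to three steps: (i) estimate a derivative-informed active subspace from sampled Jacobians, (ii) estimate a principal (POD) subspace of the outputs, and (iii) fix the first and last network layers to these bases and train only the small inner layers. The sketch below illustrates the idea on synthetic stand-ins for the PDE quantities; it is a minimal illustration under those assumptions, not the authors' implementation, and the names (ProjectedNN, the synthetic Jacobian and output samples, the chosen dimensions) are hypothetical.

```python
import numpy as np
import torch
import torch.nn as nn

# Synthetic stand-ins for PDE quantities (hypothetical example data): in
# practice J[i] = dq/dm at the i-th parameter sample and Q[:, i] is the
# corresponding output sample, both obtained from (adjoint-based) PDE solves.
rng = np.random.default_rng(0)
d_in, d_out = 200, 100        # discretized parameter / output dimensions
r_in, r_out = 20, 20          # reduced (active / principal) dimensions
n_samples = 50                # limited-data regime

scales = np.linspace(1.0, 1e-3, d_in)           # decaying parameter sensitivity
J = rng.standard_normal((n_samples, d_out, d_in)) * scales
Q = rng.standard_normal((d_out, n_samples))

# (i) Derivative-informed active subspace: dominant eigenvectors of E[J^T J].
H = sum(Ji.T @ Ji for Ji in J) / n_samples
eigvals, eigvecs = np.linalg.eigh(H)             # eigenvalues in ascending order
V_r = np.ascontiguousarray(eigvecs[:, ::-1][:, :r_in])

# (ii) Principal (POD) subspace of the outputs from the sample covariance.
U, s, _ = np.linalg.svd(Q - Q.mean(axis=1, keepdims=True), full_matrices=False)
Phi_r = np.ascontiguousarray(U[:, :r_out])

# (iii) Projected network: frozen encoder V_r^T and decoder Phi_r; only the
# small inner layers are trained, so the trainable weight count is
# independent of d_in and d_out.
class ProjectedNN(nn.Module):
    def __init__(self, V_r, Phi_r, width=64):
        super().__init__()
        d_in, r_in = V_r.shape
        d_out, r_out = Phi_r.shape
        self.encode = nn.Linear(d_in, r_in, bias=False)
        self.encode.weight = nn.Parameter(
            torch.tensor(V_r.T, dtype=torch.float32), requires_grad=False)
        self.core = nn.Sequential(
            nn.Linear(r_in, width), nn.Tanh(), nn.Linear(width, r_out))
        self.decode = nn.Linear(r_out, d_out, bias=False)
        self.decode.weight = nn.Parameter(
            torch.tensor(Phi_r, dtype=torch.float32), requires_grad=False)

    def forward(self, m):
        return self.decode(self.core(self.encode(m)))

model = ProjectedNN(V_r, Phi_r)
n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable weights: {n_trainable}")       # unchanged as d_in, d_out grow

q_pred = model(torch.randn(8, d_in))             # batch of 8 parameter samples
print(q_pred.shape)                              # torch.Size([8, 100])
```

Because the encoder and decoder are frozen to the precomputed bases, standard training (for example, minimizing mean-squared error over input-output pairs) only updates the core layers, whose size depends on the reduced dimensions r_in and r_out rather than on the discretization.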


