An acceleration strategy for randomize-then-optimize sampling via deep neural networks

04/13/2021
by   Liang Yan, et al.

Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems because it requires repeated evaluations of an expensive forward model and its gradient. In this work, we present a novel strategy that substantially reduces the computational burden of RTO through a goal-oriented deep neural network (DNN) surrogate. In particular, the training points for the DNN surrogate are drawn from a local approximation of the posterior distribution, and the resulting algorithm is shown to be a flexible and efficient sampler that converges to the direct RTO approach. We demonstrate the computational accuracy and efficiency of the new algorithm (DNN-RTO) on a Bayesian inverse problem governed by a benchmark elliptic PDE, where it significantly outperforms traditional RTO.
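The two-stage idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a scalar toy forward model (`exp`, standing in for a PDE solve), trains a tiny one-hidden-layer network as the surrogate on points drawn near a crude local posterior approximation, and then draws samples by solving perturbed least-squares problems with the surrogate (a randomized-maximum-likelihood-style variant that omits the RTO reweighting step). All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expensive" forward model (stands in for a PDE solve).
def forward(theta):
    return np.exp(theta)

# Bayesian setup: y = forward(theta) + noise, Gaussian prior on theta.
theta_true = 0.5
sigma = 0.1                     # observation noise std
m0, lam = 0.0, 1.0              # prior mean / std
y = forward(theta_true) + sigma * rng.standard_normal()

# Stage 1: train a tiny DNN surrogate on points near the posterior.
# Training inputs come from a crude local (Laplace-like) approximation
# of the posterior: a Gaussian around a cheap MAP guess.
theta_map_guess = np.log(max(y, 1e-3))
X = theta_map_guess + 0.5 * rng.standard_normal(200)
T = forward(X)

# One-hidden-layer tanh network, fitted by full-batch gradient descent.
H = 20
W1 = rng.standard_normal(H); b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal(H); b2 = 0.0
lr = 1e-3
for _ in range(20000):
    A = np.tanh(np.outer(X, W1) + b1)       # hidden activations, (N, H)
    E = A @ W2 + b2 - T                     # residuals
    gW2 = A.T @ E / len(X); gb2 = E.mean()
    dA = np.outer(E, W2) * (1.0 - A**2)     # backprop through tanh
    gW1 = (dA * X[:, None]).mean(axis=0); gb1 = dA.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def surrogate(theta):
    """Cheap replacement for forward() inside the sampling loop."""
    return np.tanh(np.outer(np.atleast_1d(theta), W1) + b1) @ W2 + b2

# Stage 2: RTO-style sampling using only the surrogate.
# Each sample minimizes a randomly perturbed misfit-plus-prior objective
# (here by grid search, since theta is scalar).
grid = np.linspace(-2.0, 2.0, 2001)
Gg = surrogate(grid)                        # surrogate evaluated once
samples = []
for _ in range(500):
    eps, eta = rng.standard_normal(2)       # data / prior perturbations
    obj = ((Gg - (y + sigma * eps)) / sigma) ** 2 \
        + ((grid - (m0 + lam * eta)) / lam) ** 2
    samples.append(grid[np.argmin(obj)])
samples = np.array(samples)
print(samples.mean(), samples.std())
```

The key efficiency point is that the expensive `forward` is evaluated only on the 200 training inputs; the sampling loop touches the surrogate alone, so the per-sample cost no longer scales with the forward-model cost.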

