An acceleration strategy for randomize-then-optimize sampling via deep neural networks

04/13/2021
by Liang Yan, et al.

Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems because it requires repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel strategy that substantially reduces the computational burden of RTO by using a goal-oriented deep neural network (DNN) surrogate. In particular, the training points for the DNN surrogate are drawn from a local approximation of the posterior distribution, and we show that the resulting algorithm provides flexible and efficient sampling that converges to the direct RTO approach. We demonstrate the computational accuracy and efficiency of the new algorithm (DNN-RTO) on a Bayesian inverse problem governed by a benchmark elliptic PDE, where it significantly outperforms the traditional RTO.
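The surrogate-accelerated workflow described above can be sketched in a few lines. The following is a minimal, illustrative Python example, not the authors' implementation: a scalar nonlinear map stands in for the expensive PDE forward model, a polynomial fit stands in for the DNN surrogate, training points are concentrated near a local (MAP-centered) posterior approximation, and a simplified randomized-MAP loop stands in for full RTO (the Jacobian-based Metropolis correction that makes RTO exact is omitted). All function names and parameters here are assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def forward(u):
    # Stand-in for an expensive forward model (e.g., a PDE solve).
    return np.exp(u) / (1.0 + np.exp(u))

u_true, sigma = 0.5, 0.05
y = forward(u_true) + sigma * rng.normal()  # noisy observation

# Step 1: locate the MAP point, giving a local posterior approximation
# (standard Gaussian prior on u).
neglog_post = lambda u: (forward(u) - y) ** 2 / (2 * sigma**2) + u**2 / 2
u_map = minimize_scalar(neglog_post).x

# Step 2: train the surrogate on points drawn near the local approximation
# (a cubic polynomial fit plays the role of the DNN surrogate).
u_train = u_map + 0.5 * rng.normal(size=50)
coeffs = np.polyfit(u_train, forward(u_train), deg=3)
surrogate = lambda u: np.polyval(coeffs, u)

# Step 3: randomized-MAP sampling, evaluating only the cheap surrogate:
# each sample solves an optimization problem with perturbed data and
# a perturbed prior reference point.
samples = []
for _ in range(200):
    eps = sigma * rng.normal()   # randomized data perturbation
    u0 = rng.normal()            # randomized prior reference
    obj = lambda u: ((surrogate(u) - y - eps) ** 2 / (2 * sigma**2)
                     + (u - u0) ** 2 / 2)
    samples.append(minimize_scalar(obj).x)
samples = np.array(samples)
print(samples.mean(), samples.std())
```

The key cost saving is that the repeated optimizations in Step 3 never call the expensive forward model; it is evaluated only for the small training set in Step 2, which is placed where the posterior concentrates rather than across the whole prior range.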


