An acceleration strategy for randomize-then-optimize sampling via deep neural networks

by Liang Yan et al.

Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel strategy that substantially reduces the computational burden of RTO by using a goal-oriented deep neural network (DNN) surrogate. In particular, the training points for the DNN surrogate are drawn from a local approximation of the posterior distribution, and we show that the resulting algorithm is a flexible and efficient sampler that converges to the direct RTO approach. We demonstrate the computational accuracy and efficiency of the new algorithm (DNN-RTO) on a Bayesian inverse problem governed by a benchmark elliptic PDE, where it significantly outperforms traditional RTO.
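To fix ideas, RTO draws each posterior sample by perturbing the data with Gaussian noise and then solving a deterministic least-squares problem. In the special case of a linear forward model with Gaussian prior and noise, this samples the posterior exactly, with no reweighting or surrogate needed. The sketch below illustrates that basic RTO mechanism on a toy linear problem (all names and the specific problem are illustrative, not from the paper; the paper's setting is nonlinear and PDE-governed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: y = A @ theta + noise, prior theta ~ N(0, I).
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
sigma = 0.1                                   # observation noise std
theta_true = np.array([0.7, -0.4])
y = A @ theta_true + sigma * rng.standard_normal(3)

# Stack data misfit and prior into one least-squares map:
# H @ theta ~ y_bar, where H = [A / sigma; I], y_bar = [y / sigma; 0].
H = np.vstack([A / sigma, np.eye(2)])
y_bar = np.concatenate([y / sigma, np.zeros(2)])

def rto_sample():
    """One RTO draw: perturb the stacked data, then optimize."""
    xi = rng.standard_normal(H.shape[0])      # standard normal perturbation
    sol, *_ = np.linalg.lstsq(H, y_bar + xi, rcond=None)
    return sol

samples = np.array([rto_sample() for _ in range(5000)])

# Analytic posterior N(mu, C) for comparison (exact in the linear case).
C = np.linalg.inv(H.T @ H)
mu = C @ H.T @ y_bar
print(np.abs(samples.mean(axis=0) - mu).max())   # close to 0
```

In the nonlinear setting each draw requires an iterative optimization with many forward-model and gradient evaluations, which is the cost the paper's DNN surrogate is designed to avoid.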






