Scalable method for Bayesian experimental design without integrating over posterior distribution

06/30/2023
by Vinh Hoang, et al.

We address the computational efficiency of solving A-optimal Bayesian experimental design problems in which the observational map is governed by partial differential equations and is therefore computationally expensive to evaluate. A-optimality is a widely used and easily interpreted criterion for Bayesian experimental design: it seeks the optimal design by minimizing the expected conditional variance, also known as the expected posterior variance. This study presents a novel likelihood-free approach to A-optimal experimental design that requires neither sampling from nor integrating over the Bayesian posterior distribution. The expected conditional variance is obtained from the variance of the conditional expectation via the law of total variance, and we exploit the orthogonal projection property to approximate the conditional expectation. We derive an asymptotic error estimate for the proposed estimator of the expected conditional variance and show that the intractability of the posterior distribution does not degrade the performance of our approach. In our implementation, an artificial neural network (ANN) approximates the nonlinear conditional expectation. We then extend the approach to the case in which the domain of experimental design parameters is continuous by integrating the training of the ANN into the minimization of the expected conditional variance. Numerical experiments demonstrate that our method greatly reduces the number of observational model evaluations compared with widely used importance-sampling-based approaches. This reduction is crucial, considering the high computational cost of the observational models. Code is available at https://github.com/vinh-tr-hoang/DOEviaPACE.
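The core identity behind the method can be illustrated on a toy problem. The sketch below, which is an illustration rather than the authors' implementation, estimates the expected posterior variance via the law of total variance, E[Var(θ|y)] = Var(θ) − Var(E[θ|y]), where the conditional expectation E[θ|y] is the L2-orthogonal projection of θ onto functions of y and is fitted by regression. For simplicity a linear least-squares fit stands in for the paper's ANN regressor, which is exact for this linear-Gaussian model; the model, design parameter `d`, and sample sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(d, n):
    """Toy linear-Gaussian model: theta ~ N(0, 1), y = d*theta + noise,
    where d plays the role of the experimental design and the noise
    has unit standard deviation."""
    theta = rng.normal(0.0, 1.0, n)
    y = d * theta + rng.normal(0.0, 1.0, n)
    return theta, y

def expected_posterior_variance(d, n=200_000):
    """Likelihood-free estimate of the A-optimality criterion via the
    law of total variance:
        E[Var(theta | y)] = Var(theta) - Var(E[theta | y]).
    The conditional expectation E[theta | y] is approximated by
    regressing theta on y (here by linear least squares, standing in
    for a neural-network regressor in the nonlinear case)."""
    theta, y = simulate(d, n)
    X = np.column_stack([np.ones(n), y])          # intercept + y
    coef, *_ = np.linalg.lstsq(X, theta, rcond=None)
    cond_mean = X @ coef                          # fitted E[theta | y]
    return theta.var() - cond_mean.var()

# For this conjugate model the exact posterior variance is 1/(1 + d^2),
# so the estimate can be checked against a closed form.
for d in (0.5, 1.0, 2.0):
    print(d, expected_posterior_variance(d), 1.0 / (1.0 + d**2))
```

Note that no posterior density is ever evaluated: only forward simulations of the model and a regression fit are needed, which is what makes the approach attractive when each observation-model evaluation is an expensive PDE solve.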


