Multimodal Information Gain in Bayesian Design of Experiments

08/16/2021
by   Quan Long, et al.

One well-known challenge in optimal experimental design is efficiently estimating the nested integrals that define the expected information gain. Gaussian (Laplace) approximations and the associated importance sampling have proven effective at reducing the numerical cost, but they can fail due to non-negligible bias and numerical instability. We develop a new approach to computing the expected information gain when the posterior distribution is multimodal, a situation previously ignored by methods that accelerate the nested numerical integration. Specifically, the posterior is approximated by a mixture distribution constructed from multiple runs of a global search for the modes, combined with weighted local Laplace approximations around each mode. For any prescribed probability of capturing all the modes, we provide an estimate of the required number of search runs, and this estimate is independent of the problem dimension. We show that the resulting global-local multimodal approach can be significantly more accurate and more efficient than existing alternatives, especially when the number of modes is large. The method applies to the design of experiments with both calibrated and uncalibrated observation noise.
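The paper's global-local mixture construction is not reproduced here, but the nested integration it accelerates can be illustrated with a short sketch. Below is a standard double-loop Monte Carlo estimator of the expected information gain for a toy linear-Gaussian model; the model, function name, and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def eig_double_loop(design, n_outer=500, n_inner=500, sigma=0.1, seed=0):
    """Double-loop Monte Carlo estimate of expected information gain (EIG)
    for the toy model y = design * theta + noise, with theta ~ N(0, 1)
    and noise ~ N(0, sigma^2).

    EIG = E_y[ log p(y | theta) - log p(y) ]; the inner loop approximates
    the marginal likelihood p(y) by averaging over fresh prior draws,
    which is the nested (and expensive) part of the integration.
    """
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_outer)                    # outer prior draws
    y = design * theta + sigma * rng.standard_normal(n_outer)

    log_norm = np.log(sigma * np.sqrt(2.0 * np.pi))
    # log-likelihood of each y under the theta that generated it
    log_like = -0.5 * ((y - design * theta) / sigma) ** 2 - log_norm

    # inner loop: log p(y) ~= log-mean of likelihoods over fresh prior draws
    theta_in = rng.standard_normal(n_inner)
    diff = y[:, None] - design * theta_in[None, :]
    log_like_in = -0.5 * (diff / sigma) ** 2 - log_norm
    log_evidence = np.logaddexp.reduce(log_like_in, axis=1) - np.log(n_inner)

    return float(np.mean(log_like - log_evidence))
```

For this toy model the EIG is available in closed form, 0.5 * log(1 + design²/σ²), which makes it a convenient sanity check: the estimate should grow with the design magnitude, e.g. `eig_double_loop(1.0) > eig_double_loop(0.1) > 0`. The inner marginalisation over prior samples is exactly the step that Laplace-type approximations (and, for multimodal posteriors, mixtures of weighted local Laplace approximations) aim to replace.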


