MLMC_stochastic_gradient
The code used for the numerical experiments in the paper (https://arxiv.org/abs/2005.08414)
In this paper we propose an efficient stochastic optimization algorithm to search for Bayesian experimental designs that maximize the expected information gain. The gradient of the expected information gain with respect to the experimental design parameters is given by a nested expectation, for which the standard Monte Carlo method using a fixed number of inner samples yields a biased estimator. Applying the idea of randomized multilevel Monte Carlo (MLMC) methods, we introduce an unbiased Monte Carlo estimator for this gradient with finite expected squared ℓ_2-norm and finite expected computational cost per sample. Our unbiased estimator combines naturally with stochastic gradient descent algorithms, leading to our proposed optimization algorithm for finding an optimal Bayesian experimental design. Numerical experiments confirm that the proposed algorithm works well not only for a simple test problem but also for a more realistic pharmacokinetic problem.
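To illustrate the debiasing idea behind the paper's estimator, the sketch below applies a randomized MLMC ("single-term") construction to a toy nested quantity, log E[X] with X ~ Uniform(0, 1), rather than to the expected-information-gain gradient itself. A plain Monte Carlo estimate with a fixed inner sample size is biased here for the same structural reason as in the paper; randomizing over levels removes that bias. The base sample size `n0`, the truncation `MAX_LEVEL`, and the geometric level distribution are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nested quantity: mu = log E[X] with X ~ Uniform(0, 1), so mu = log(1/2).
# The plug-in estimate log((1/n) * sum X_i) is biased for any fixed n;
# the randomized MLMC single-term construction removes this bias.

def level_correction(level, n0=8):
    """Antithetic MLMC correction Delta_l = P_l - (P_{l-1}^a + P_{l-1}^b) / 2,
    where P_l is the plug-in estimate using n0 * 2**level inner samples."""
    n = n0 * 2 ** level
    x = rng.uniform(size=n)
    fine = np.log(x.mean())
    if level == 0:
        return fine
    # Coarse estimates reuse the two halves of the same inner samples.
    coarse_a = np.log(x[: n // 2].mean())
    coarse_b = np.log(x[n // 2:].mean())
    return fine - 0.5 * (coarse_a + coarse_b)

MAX_LEVEL = 12
LEVELS = np.arange(MAX_LEVEL + 1)
# Geometric level distribution p_l proportional to 2**(-l), renormalized after
# truncation; the truncation leaves a residual bias of order 1/(n0 * 2**MAX_LEVEL),
# which is negligible in this toy example.
PROBS = 0.5 ** LEVELS
PROBS /= PROBS.sum()

def single_term_estimate():
    """One unbiased sample: draw a random level L, return Delta_L / P(L = l),
    so that the expectation telescopes to the deepest plug-in estimate."""
    level = rng.choice(LEVELS, p=PROBS)
    return level_correction(level) / PROBS[level]
```

Averaging many independent draws of `single_term_estimate()` recovers log(1/2) without the fixed-inner-sample bias, and each draw has finite expected cost because deep levels are sampled with geometrically decaying probability. In the paper's setting, the same construction is applied per gradient sample inside a stochastic gradient descent loop.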