Bayesian optimal design using stochastic gradient optimisation and Fisher information gain

04/11/2019 ∙ by Sophie Harbisher, et al.

Finding high-dimensional designs is increasingly important in applications of experimental design, but is computationally demanding under existing methods. We introduce an efficient approach that applies recent advances in stochastic gradient optimisation. To allow rapid gradient calculations we work with a computationally convenient utility function: the trace of the Fisher information. We provide a decision-theoretic justification for this utility, analogous to work by Bernardo (1979) on the Shannon information gain. Owing to this similarity, we refer to our utility as the Fisher information gain. We compare our optimisation scheme, SGO-FIG, to existing state-of-the-art methods and show that our approach finds expected-utility-maximising designs more quickly, producing designs with hundreds of choices in under a minute in one example.
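The abstract describes the method only at a high level, but the core loop it implies is simple: repeatedly sample a parameter value from the prior, compute the gradient of the trace of the Fisher information with respect to the design, and take a stochastic gradient ascent step. Below is a minimal sketch of that idea for a hypothetical one-parameter exponential decay model with a log-normal prior; the model, prior, step size, and design constraints are illustrative assumptions, not details taken from the paper, which should be consulted for the actual SGO-FIG algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example model (an assumption, not from the abstract):
# observations y_i ~ N(exp(-theta * t_i), sigma^2) at design times
# t = (t_1, ..., t_n). The Fisher information for theta is
#   I(theta, t) = sum_i (t_i * exp(-theta * t_i))^2 / sigma^2,
# so the Fisher information gain utility is its trace (a scalar here).

sigma = 0.1  # assumed known observation noise

def fig_utility(theta, t):
    """Trace of the Fisher information for the decay model."""
    return np.sum((t * np.exp(-theta * t)) ** 2) / sigma ** 2

def fig_gradient(theta, t):
    """Exact gradient of the utility with respect to the design t."""
    return 2.0 * t * np.exp(-2.0 * theta * t) * (1.0 - theta * t) / sigma ** 2

# Stochastic gradient ascent on the expected utility E_theta[I(theta, t)]:
# each iteration draws theta from the prior, so the gradient of the
# sampled utility is an unbiased estimate of the gradient of the
# expected utility. Step size and iteration count are arbitrary choices.
n_design, n_iters, lr = 10, 5000, 1e-3
t = rng.uniform(0.1, 5.0, size=n_design)          # initial design
for _ in range(n_iters):
    theta = rng.lognormal(mean=0.0, sigma=0.5)    # assumed prior on theta
    t += lr * fig_gradient(theta, t)              # stochastic ascent step
    t = np.clip(t, 0.0, 10.0)                     # keep design in its domain

print("optimised design times:", np.sort(t))
```

In practice the gradient would typically be obtained by automatic differentiation rather than by hand, and the plain ascent step above could be replaced with an adaptive stochastic gradient optimiser; the choice of optimiser here is illustrative rather than the paper's.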
