Efficient Bayesian Experimental Design for Implicit Models

10/23/2018 · by Steven Kleinegesse, et al.

Bayesian experimental design involves the optimal allocation of resources in an experiment, with the aim of optimising cost and performance. For implicit models, where the likelihood is intractable but sampling from the model is possible, this task is particularly difficult and therefore largely unexplored. This is mainly due to technical difficulties associated with approximating posterior distributions and utility functions. We devise a novel experimental design framework for implicit models that improves upon previous work in two ways. First, we use the mutual information between parameters and data as the utility function, which has previously not been feasible. We achieve this by utilising Likelihood-Free Inference by Ratio Estimation (LFIRE) to approximate posterior distributions, instead of the traditional approximate Bayesian computation or synthetic likelihood methods. Second, we use Bayesian optimisation in order to solve the optimal design problem, as opposed to the typically used grid search. We find that this increases efficiency and allows us to consider higher design dimensions.


1 Introduction

In all scientific disciplines, performing experiments and thereby collecting data is essential to improving our understanding of the world around us. It is, however, usually not trivial to decide where and how to collect the data; experimental design is therefore concerned with the allocation of resources when conducting an experiment. The general aim is to find design features, or experimental configurations, that may improve parameter estimates or help compare competing models. In essence, the underlying question in experimental design is: where and how do we have to collect data in order to optimise cost and performance? For instance, in epidemiology we might be concerned with when to count the number of infected in a population. In this case, we could be trying to find the optimal measurement time that results in the most informative estimate of the disease model parameters. Traditional experimental design uses frequentist approaches that are usually based on the Fisher information matrix (e.g. Fedorov, 1972; Atkinson and Donev, 1992); this is a well-established field. The frequentist framework, however, does not work well for non-linear problems, as only locally-optimal designs can be obtained (Ryan et al., 2016).

Bayesian statistics has mature theory addressing this issue, but due to the computational costs involved, the field of Bayesian experimental design has only recently become popular. Much of this cost is incurred by computing the so-called expected utility function U(d) that is used to determine the optimal design d* (e.g. the optimal measurement time). A popular and principled choice for this expected utility function is the mutual information between parameters and simulated data at design d (see Ryan et al., 2016).

There exists extensive work on Bayesian experimental design for explicit models, where the likelihood is analytically known or can easily be computed (see Ryan et al., 2016, for a review). There has, however, been little work on designing experiments for implicit models, where the likelihood is intractable and the model is specified in terms of a stochastic data-generating process or simulator. These models are common in the natural sciences and appear in many disciplines; examples include epidemiology (Ricker, 1954; Numminen et al., 2013) and cosmology (Schafer and Freeman, 2012; Alsing et al., 2018). It is thus crucial to develop efficient experimental design methods that are applicable to these models, which is the aim of this paper.

Previous Work

Because the likelihood for implicit models is intractable, exact posterior computation is difficult. Likelihood-free inference methods have emerged to address this issue, the most prevalent being Approximate Bayesian Computation (ABC) (Rubin, 1984) and Synthetic Likelihood (SL) (Wood, 2010). Some of the earliest work on Bayesian experimental design for models with intractable likelihoods was done by e.g. Cook et al. (2008), Liepe et al. (2013) and Drovandi and Pettitt (2013); the latter was the first to use ABC rejection sampling (Beaumont et al., 2002) to obtain posterior samples. Even though ABC rejection sampling is known to be inefficient, most of the work that followed (e.g. Dehideniya et al., 2018; Price et al., 2016; Hainy et al., 2016) used the same method.

Mutual information is typically the preferred choice for the expected utility function in Bayesian experimental design, as the resulting optimal designs yield consistent and efficient parameter estimates (Paninski, 2005; Ryan et al., 2016). For implicit models, however, computing the mutual information is hard because of the difficulties associated with evaluating posterior densities. Drovandi and Pettitt (2013) thus used the inverse of the posterior variance of ABC samples as the expected utility function in order to find the optimal design that minimises parameter uncertainty. Cook et al. (2008) used moment closure to approximate the mutual information, while Liepe et al. (2013) used ABC to approximate the mutual information for a restricted class of models.

Finding an efficient experimental design is often cast as an optimisation problem. The Bayesian optimal design that we are trying to find is the design point that maximises the expected utility function over the whole design space D,

d* = argmax_{d ∈ D} U(d).    (1)

For implicit models, however, U(d) is not available analytically in closed form and is expensive to evaluate; in addition, we do not have access to its gradients. Previous work on Bayesian experimental design, considering both explicit and implicit models, predominantly solves the above optimisation problem either by the sampling-based algorithm of Müller (1999) or by grid search. The former has been found to converge very slowly (Drovandi and Pettitt, 2013), as finding the maximum of a set of samples is not easy. The latter approach is much more common but becomes infeasible in high design dimensions, due to the extremely large number of evaluations needed. There has been little work on making this part of the experimental design process more efficient, especially for implicit models.

Contributions

In this paper, we propose an efficient Bayesian experimental design framework for implicit models that addresses the aforementioned technical difficulties.

  1. We use the mutual information between model parameters and data as the utility function to find the optimal design. We achieve this by using Likelihood-Free Inference by Ratio Estimation (LFIRE) (Dutta et al., 2016), instead of the traditional ABC or SL approaches, to approximate the posterior distribution for implicit models. LFIRE yields the ratio of posterior to prior density, allowing for estimation of the posterior and straightforward computation of the mutual information at the same time.

  2. Rather than the more traditional grid search, we solve the optimisation problem in Equation 1 by means of Bayesian optimisation (e.g. Shahriari et al., 2016). This makes Bayesian experimental design more efficient and allows us to consider higher design dimensions.

The remainder of the paper is structured as follows. Our novel design framework is explained in Section 2. In Section 3 we test the performance of our framework on two epidemiological models and discuss the results. We then summarise our findings in Section 4.

2 Proposed Method

At its core, Bayesian experimental design requires us to compute an expected utility function U(d) that describes the value of a design d for learning about the model parameters θ. We then need to maximise this function in order to find the optimal design point d*. We explain here how we address these two non-trivial steps efficiently for implicit models.

2.1 Computing the Utility

The choice of expected utility function strongly dictates the optimal designs that are found. Here we consider information-based utilities and focus on the computation of mutual information for implicit models. We consider the mutual information between the model parameters θ and the simulated data y, conditioned on the design d. This gives us a measure of the dependence between θ and y, i.e. it tells us how much we can learn about the model parameters given the data. This mutual information can also be expressed as the expected Kullback-Leibler divergence (Kullback and Leibler, 1951) between the posterior distribution p(θ | y, d) and the prior distribution p(θ) (Ryan et al., 2016). In this phrasing, the optimal design can be understood as the design that, on average, yields the largest information gain about the parameters when observing the data. The expected utility that we need to maximise is then

U(d) = I(θ; y | d)    (2)
     = E_{p(y | d)}[ KL( p(θ | y, d) ‖ p(θ) ) ]    (3)
     = ∫∫ p(y | θ, d) p(θ) log[ p(θ | y, d) / p(θ) ] dθ dy,    (4)

see e.g. Ryan et al. (2016). Note that we here make the typical assumption that our prior belief about θ is not affected by the design d, i.e. p(θ | d) = p(θ).

The integral in Equation 4 is typically high-dimensional and a standard way of approximating it is by means of Monte-Carlo integration, i.e.

U(d) ≈ (1/N) Σ_{i=1}^{N} log[ p(θ_i | y_i, d) / p(θ_i) ],    (5)

where θ_i ∼ p(θ) and y_i ∼ p(y | θ_i, d). This approximation requires samples from the prior distribution, corresponding samples from the data-generating distribution, and density evaluations of the posterior and prior distributions.

For implicit models, the computation of the posterior density in Equation 5 is hard. Using the traditional ABC method, we would only obtain posterior samples but not posterior densities, making the computation of Equation 5 even more difficult. Ratio estimation approaches, on the other hand, can yield ratios of the likelihood to the marginal and therefore, by Bayes' rule, also the ratios of the posterior density to the prior density, i.e.

r(d, y, θ) = p(y | θ, d) / p(y | d) = p(θ | y, d) / p(θ),    (6)

which is exactly the intractable ratio in Equation 5. An overview of ratio estimation methods can be found in Sugiyama et al. (2012). We here use the Likelihood-Free Inference by Ratio Estimation (LFIRE) framework of Dutta et al. (2016) to approximate the above ratio. Importantly, the approximated ratios can be used to estimate both the posterior and the mutual information. The LFIRE ratio is approximated by solving a logistic regression problem between data simulated from the likelihood p(y | θ, d) and data simulated from the marginal p(y | d). We omit further details here and direct the reader to the work of Dutta et al. (2016) for more information. Throughout this work, we used the same settings as them; in particular, we used the same number of simulated data points from each of the two distributions, noting that increasing this number makes the ratio approximation more accurate but also more expensive to compute.

By substituting the LFIRE ratio r̂ into the logarithm in Equation 5, we can approximate the mutual information in a straightforward way, without having to explicitly compute posterior and prior densities, i.e.

Û(d) = (1/N) Σ_{i=1}^{N} log r̂(d, y_i, θ_i),    (7)

where θ_i ∼ p(θ) and y_i ∼ p(y | θ_i, d). In other words, for a given design d we first need to obtain prior samples θ_i and then use them to simulate data points y_i. These pairs of prior samples and data samples are then used to compute the ratios r̂(d, y_i, θ_i), allowing us to approximate the expected utility according to Equation 7. Assuming that we choose a prior that is easy to sample from and that the process of generating data from the implicit model is not overly expensive, most of the computational cost lies in the LFIRE ratio computations. We summarise the computation of the mutual information for implicit models in Algorithm 1.

1: Sample from the prior: θ_i ∼ p(θ) for i = 1, …, N
2: for i = 1 to i = N do
3:     Simulate data: y_i ∼ p(y | θ_i, d)
4:     Compute the ratio r̂(d, y_i, θ_i) by LFIRE
5: end for
6: Compute Û(d) = (1/N) Σ_{i=1}^{N} log r̂(d, y_i, θ_i)
Algorithm 1 Mutual Information Computation via LFIRE Ratios
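As an illustration, the loop in Algorithm 1 can be sketched in Python for a toy Gaussian implicit model. Everything here is a stand-in: the simulator, the prior N(1, 1), the quadratic logistic-regression features and all sample sizes are illustrative assumptions, and the tiny gradient-ascent classifier merely mimics the role that LFIRE plays in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, d, size, rng):
    # Toy implicit model standing in for e.g. the Death Model:
    # y | theta, d ~ N(theta * d, 1); only forward sampling is used.
    return theta * d + rng.standard_normal(size)

def fit_log_ratio(y_theta, y_marg, n_iter=300, lr=0.5):
    # Logistic regression between data simulated from p(y|theta,d) and
    # from the marginal p(y|d); the fitted logit estimates
    # log p(y|theta,d) - log p(y|d) = log r(d, y, theta).
    x = np.concatenate([y_theta, y_marg])
    mu, sd = x.mean(), x.std() + 1e-8
    def feats(v):
        z = (np.asarray(v, dtype=float) - mu) / sd
        return np.stack([np.ones_like(z), z, z ** 2], axis=-1)
    X = feats(x)
    labels = np.concatenate([np.ones(len(y_theta)), np.zeros(len(y_marg))])
    w = np.zeros(3)
    for _ in range(n_iter):  # plain gradient ascent on the log-likelihood
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (labels - p) / len(labels)
    return lambda y: float(feats(y) @ w)

def mutual_information(d, n_prior=100, n_sim=200, rng=rng):
    # Algorithm 1: Monte-Carlo estimate of U(d) from LFIRE-style ratios.
    thetas = rng.normal(1.0, 1.0, size=n_prior)       # theta_i ~ p(theta)
    log_ratios = []
    for theta in thetas:
        y_i = simulate(theta, d, 1, rng)[0]           # y_i ~ p(y|theta_i, d)
        y_theta = simulate(theta, d, n_sim, rng)      # data from p(y|theta_i, d)
        theta_m = rng.normal(1.0, 1.0, size=n_sim)
        y_marg = simulate(theta_m, d, n_sim, rng)     # data from p(y|d)
        log_ratios.append(fit_log_ratio(y_theta, y_marg)(y_i))
    return float(np.mean(log_ratios))
```

For this toy model the estimated utility grows with |d|, since larger designs make the data more informative about θ, while at d = 0 the data carry no information and the estimate is close to zero.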

2.2 Optimising the Utility

Even though evaluating the expected utility is costly, the optimisation problem in Equation 1 is commonly solved using grid search. For most Bayesian experimental design settings, however, especially when dealing with implicit models, this results in high computational costs when considering high design dimensions, which in this context means four or more dimensions. We would like to optimise the expected utility efficiently, even when the design dimensions are high. To do so, we propose to utilise Bayesian optimisation (BO) (Shahriari et al., 2016) which is a popular optimisation scheme for objective functions that we can evaluate but whose form and gradients are unknown, or expensive to evaluate.

The general idea of BO is to build a probabilistic model of the objective function and then use an acquisition function to decide where to evaluate it next. In our case, the objective function is the expected utility in Equation 7. For the probabilistic model we use Gaussian Processes (GP) (Rasmussen and Williams, 2005) and for the acquisition function we use Expected Improvement (EI) (Mockus et al., 1978). These are both popular and well-tested choices. We use a Gaussian kernel for the GP model, but for design dimensions higher than 10 there exist more scalable kernels (e.g. Minasny and McBratney, 2005; Oh et al., 2018). For the BO stage of our design framework we use the GPyOpt package in Python (GPyOpt, 2016). In addition to being practical for expensive evaluations, BO smooths out the noise introduced by the Monte-Carlo approximation and is therefore likely to also improve the estimate of the optimal design d*.
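The GP-plus-EI loop described above can be sketched with a minimal, numpy-only implementation. This is not GPyOpt: the squared-exponential kernel hyperparameters, noise level, candidate grid and the cheap stand-in objective are all illustrative assumptions.

```python
import math
import numpy as np

def gp_posterior(X, y, Xs, ell=0.5, sf=1.0, noise=1e-4):
    # Gaussian-process regression with a squared-exponential kernel.
    def k(a, b):
        return sf ** 2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    L = np.linalg.cholesky(k(X, X) + noise * np.eye(len(X)))
    Ks = k(X, Xs)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ alpha
    var = sf ** 2 - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI acquisition for maximisation (Mockus et al., 1978).
    z = (mu - best) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mu - best) * Phi + sigma * phi

def bayes_opt(f, lo, hi, n_init=3, n_iter=20, seed=0):
    # Maximise an expensive black-box f on [lo, hi].
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=n_init)
    y = np.array([f(x) for x in X])
    cand = np.linspace(lo, hi, 200)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, cand)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmax(y)], float(y.max())

# Illustrative use: a cheap stand-in for the expected utility, peaked at d = 2.
d_best, u_best = bayes_opt(lambda d: -(d - 2.0) ** 2, 0.0, 4.0)
```

Because EI trades off high predicted utility against high predictive uncertainty, the loop concentrates evaluations near the peak while still probing unexplored regions, which is the behaviour described above.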

2.3 Obtaining the Posterior

After having found an optimal design d* that maximises the expected utility Û(d), we can make a real-world observation y*. LFIRE allows us to reuse the already computed ratios at d* to estimate the posterior of θ given y* and to obtain samples from it. By rearranging Equation 6 we can easily compute the posterior density at a given model parameter, provided that we can evaluate the prior density. While other approaches are possible, to obtain posterior samples via the LFIRE method we define a weight w_i for every prior sample θ_i; each weight is the LFIRE ratio evaluated at the real-world observation y*, i.e. w_i = r̂(d*, y*, θ_i). After normalising the weights, i.e. w̄_i = w_i / Σ_j w_j, we then resample from the set of prior parameters {θ_i} according to the categorical distribution defined by the weights {w̄_i}.
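The resampling scheme above amounts to a few lines of numpy. In this sketch the LFIRE ratios are replaced by analytic likelihood ratios for a hypothetical Gaussian toy model (prior N(1, 1), likelihood N(y; θ, 1), observation y* = 2), chosen so that the result can be checked against the known posterior N(1.5, 0.5); the marginal p(y* | d*) is constant across samples and cancels in the normalisation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior samples theta_i ~ p(theta); N(1, 1) is an illustrative choice.
thetas = rng.normal(1.0, 1.0, size=5000)
y_star = 2.0

# Stand-in for the LFIRE ratios r(d*, y*, theta_i): here the analytic
# Gaussian likelihood (the constant marginal cancels when normalising).
weights = np.exp(-0.5 * (y_star - thetas) ** 2)

weights /= weights.sum()                          # normalised weights w_i
idx = rng.choice(len(thetas), size=len(thetas), p=weights)
posterior_samples = thetas[idx]                   # categorical resampling
```

For this conjugate toy model the resampled parameters should have mean ≈ 1.5 and variance ≈ 0.5, matching the exact posterior.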

3 Experiments

In this section, we test our novel design framework on two implicit models from epidemiology, the Death Model (Cook et al., 2008) and the SIR Model (Allen, 2008). The former has a tractable likelihood in closed form, allowing us to compare approximations to an analytical solution, while the latter does not. For both models the design variable is time and therefore the aim is to find out at what times we should make measurements in order to most accurately estimate the model parameters.

3.1 Example Implicit Models

Death Model

The Death Model is a stochastic process that describes the decline of a population due to some infection. The change from a susceptible state S to an infected state I is given by a continuous-time Markov process that we discretise. The process is parametrised by an infection rate b, and at any time t each susceptible individual has a chance of becoming infected (Cook et al., 2008). At a particular time t, the total number of individuals ΔI(t) moving from state S to state I is given by a sample from a Binomial distribution (Cook et al., 2008), i.e.

ΔI(t) ∼ Binomial( S(t), 1 − e^{−b Δt} ),    (8)

where Δt is the discretisation step size, kept fixed throughout, and the invariant total population is N, with S(0) = N. As a time series, the number of infected is then given by I(t) = N − S(t).

Let d = (τ_1, …, τ_D) be the measurement times at which we observe the number of infected. The likelihood for the Death Model is analytically tractable (Cook et al., 2008): for observations of the number of susceptibles S(τ_1), …, S(τ_D) and a model parameter b, the likelihood factorises into a product of Binomial terms, since the susceptibles surviving between consecutive measurement times follow a Binomial distribution with survival probability e^{−b(τ_k − τ_{k−1})} (Cook et al., 2008). Using this expression for the likelihood we can then obtain a posterior distribution by Bayes' rule. This enables us to compute the expected utility in Equation 3 and compare it to the LFIRE approximation.
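A minimal simulator and the corresponding exact single-observation likelihood can be sketched as follows. The population size N = 50 and step size dt = 0.01 are illustrative choices, not the paper's settings; the discretised simulator is consistent with the exact Binomial likelihood because per-step survival probabilities multiply to e^{−bτ}.

```python
import numpy as np
from math import exp, lgamma, log

def simulate_death(b, tau, N=50, dt=0.01, rng=None):
    # Forward-simulate the discretised Death Model: in every step of
    # size dt, each susceptible is infected with prob. 1 - exp(-b*dt).
    rng = rng or np.random.default_rng()
    S, t = N, 0.0
    while t < tau - 1e-12:
        S -= rng.binomial(S, 1.0 - exp(-b * dt))
        t += dt
    return N - S   # number of infected I(tau)

def death_log_likelihood(b, tau, I_obs, N=50):
    # Exact single-observation likelihood: I(tau) ~ Binomial(N, p)
    # with p = 1 - exp(-b * tau) (Cook et al., 2008).
    p = 1.0 - exp(-b * tau)
    return (lgamma(N + 1) - lgamma(I_obs + 1) - lgamma(N - I_obs + 1)
            + I_obs * log(p) + (N - I_obs) * log(1.0 - p))
```

Averaging many simulated draws should reproduce the Binomial mean N(1 − e^{−bτ}), and the log-likelihood should favour the data-generating rate over a mismatched one.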

SIR Model

The SIR Model (Allen, 2008) is a more complex version of the Death Model where, in addition to the number of susceptibles S and infected I, we also have a recovered population R that cannot be further infected.

We define the probability of a susceptible individual becoming infected within a time step Δt as p_inf = β I(t) Δt / N, where β is the rate of infection. Similarly, the probability of an infected individual recovering from the disease is p_rec = γ Δt, where γ is the rate of recovery. At a particular time t, let the number of susceptibles that become infected be ΔI(t) and the number of infected that recover be ΔR(t); these two population changes are computed by sampling from Binomial distributions,

ΔI(t) ∼ Binomial( S(t), p_inf ),    (9)
ΔR(t) ∼ Binomial( I(t), p_rec ).    (10)

This results in an unobserved time series of S(t), I(t) and R(t) by applying the following updates:

S(t + Δt) = S(t) − ΔI(t),    (11)
I(t + Δt) = I(t) + ΔI(t) − ΔR(t),    (12)
R(t + Δt) = R(t) + ΔR(t).    (13)

We start this time series with N − 1 susceptibles, one infected and zero recovered, and use a fixed discrete time step Δt throughout. The actual time at which we take a measurement is again given by the design τ, resulting in a single data point y(τ) = (S(τ), I(τ), R(τ)).
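The updates above can be sketched as a short simulator. The values N = 50 and dt = 0.01 are illustrative choices, while the per-step probabilities p_inf = β I/N Δt and p_rec = γ Δt follow the discretisation described in the text.

```python
import numpy as np

def simulate_sir(beta, gamma, tau, N=50, dt=0.01, rng=None):
    # Forward-simulate the discretised stochastic SIR model with
    # p_inf = beta * I/N * dt and p_rec = gamma * dt per step.
    rng = rng or np.random.default_rng()
    S, I, R = N - 1, 1, 0           # start: N-1 susceptible, 1 infected
    t = 0.0
    while t < tau - 1e-12:
        dI = rng.binomial(S, min(1.0, beta * I / N * dt))   # Eq. (9)
        dR = rng.binomial(I, min(1.0, gamma * dt))          # Eq. (10)
        S, I, R = S - dI, I + dI - dR, R + dR               # Eqs. (11)-(13)
        t += dt
    return S, I, R   # single data point y(tau)
```

Two sanity checks follow directly from the updates: the total population S + I + R is conserved, and with β = 0 no new infections can ever occur.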

3.2 Death Model Results

The aim for the Death Model is to estimate the infection rate as efficiently as possible.

One-Dimensional Designs

We put a truncated Gaussian prior of mean one and variance one over the model parameter b, such that b > 0, and sample prior parameters from it. The design space covers a bounded range of measurement times and, when optimising the expected utility via grid search, we use a fixed grid size.

We then optimise the expected utility function by Bayesian optimisation (BO), according to the framework outlined in Section 2. We compare our method to optimising by grid search, for both the LFIRE approximation and the analytic computation of the expected utility. These expected utilities are shown in Figure 1.

Figure 1: Expected utilities of the Death Model for the grid search method and the BO method. Also shown is the analytic expected utility for the grid search method. Curves are normalised to lie in [0, 1].

The expected utility approximated with LFIRE ratios closely matches the analytical expected utility around its peak, while decaying more quickly for large measurement times. This justifies using the LFIRE approximation in cases where we cannot compute the mutual information exactly, such as for the SIR Model. The BO method results in a similar expected utility to the grid search method. Unlike the grid search method, however, Bayesian optimisation spends few evaluations where U(d) is low, focusing on regions where it is high and thus yielding a higher resolution in the peak region. This results in some large discrepancies between the grid search and Bayesian optimisation methods away from the peak region, i.e. at the boundaries. The optimal design time found with the analytic computation is slightly larger than for the two methods using LFIRE approximations (grid search and BO). The corresponding expected utility values, computed using the analytic likelihood, are nevertheless all close together. This means that, for the Death Model, there is a range of optimal design times that result in comparable utility values, which is reflected by the flat peak in Figure 1.

Using a fixed model parameter to generate 'real-world' observations at the optimal times d*, we obtain LFIRE posterior samples for the grid search and BO methods, according to the procedure outlined in Section 2.3. We then apply Gaussian kernel density estimation (KDE) to these samples to smooth out the resulting posterior density. Doing this for several real-world observations allows us to obtain posterior densities that reflect the possible variation in the data measured at time d*. For comparison, we similarly compute the exact posterior distribution by using the tractable likelihood function mentioned previously. The mean of the posterior densities and their standard deviations are shown in Figure 2.

Figure 2: Comparison of the Death Model mean posterior densities for different methods. The shaded areas indicate one standard deviation.

The grid search and BO methods result in extremely similar posterior densities. The exact posterior density is narrower than the approximations, reflecting the approximation error of the LFIRE approach. Even though these approximate posterior distributions deviate from the exact posterior, the corresponding U(d) functions are still similar (see Figure 1). This indicates that, on average, the divergence between individual posterior and prior distributions is still comparable for the different methods (see Equation 3). Using these mean posterior densities, we find that the median model parameters and 95% credibility intervals for the grid search, BO and analytic methods are all in close agreement.

At this point we would like to emphasise the importance of designing an experiment, as opposed to randomly selecting an experimental design. To do so, we compute a baseline where we randomly select a design from the design space. Using this random design point and a corresponding real-world observation, we compute LFIRE ratios as done previously. These are then used to compute a resulting posterior distribution, which is again smoothed by Gaussian KDE. Because of the inherent randomness, we repeat this several times and compute a set of posterior densities. In Figure 3 we compare several of these baseline posterior densities to the mean posterior density obtained via the BO method.

Figure 3: Death Model posterior densities for the baseline of randomly selecting design points, together with the BO method mean posterior density.

As can be seen from Figure 3, there is much fluctuation involved in randomly selecting designs. If the experimenter is unlucky and selects a design point that is highly unfavourable, i.e. one where the expected utility in Figure 1 is low, then the resulting posterior distribution is wide and the parameter estimate uncertain. This large variability in posterior distributions strongly motivates the use of Bayesian experimental design in general.

High-Dimensional Designs

So far, we have only considered a one-dimensional design variable, the measurement time. We can, however, increase the design dimensions of this problem by rephrasing the premise. We shall now consider the problem of selecting D optimal design times, instead of just one; this is referred to as non-myopic Bayesian experimental design. To do this, our design variable becomes a D-dimensional vector, d = (τ_1, …, τ_D). We naturally add the constraint that the times must be ordered, i.e. τ_1 ≤ τ_2 ≤ … ≤ τ_D, and then sequentially compute the number of infected at each design time to build the data vector y = (y_1, …, y_D). The entries of this data vector depend on each other according to Equation 8, i.e. y_2 depends on y_1 and so on. The computation of the LFIRE ratios is then done as before, with the difference that we have a D-dimensional design variable and a D-dimensional data vector; the expected utility is also computed as previously, according to Algorithm 1.
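Building such an ordered data vector from a single shared trajectory can be sketched as follows for the Death Model. The sort enforces the ordering constraint on the design times, and the values N = 50 and dt = 0.01 are again illustrative choices.

```python
import numpy as np

def simulate_death_path(b, design_times, N=50, dt=0.01, rng=None):
    # Simulate ONE Death-Model trajectory and read off the number of
    # infected at each ordered design time, so that later entries of the
    # data vector depend on earlier ones through the shared path (Eq. 8).
    rng = rng or np.random.default_rng()
    times = np.sort(np.asarray(design_times, dtype=float))  # enforce ordering
    S, t, data = N, 0.0, []
    for t_k in times:
        while t < t_k - 1e-12:
            S -= rng.binomial(S, 1.0 - np.exp(-b * dt))
            t += dt
        data.append(N - S)   # infected count I(t_k)
    return np.array(data)
```

Because susceptibles only ever leave the population, the recorded infected counts are non-decreasing along the ordered design times.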

It is computationally infeasible to perform grid search in high design dimensions, as the number of evaluations required increases exponentially. Optimising via Bayesian optimisation (BO) allows us to decrease the computational cost, as we only need to explore the design space where the expected utility is potentially high, as was the case in Figure 1. In addition, because we have more evaluations in the peak regions of U(d) and we smooth out the noise introduced by the Monte-Carlo approximation in Algorithm 1, we increase the accuracy of our optimum estimate d*.

As a proof of concept, we consider a non-myopic design problem where the number of design dimensions is D = 8. In other words, knowing that we can do eight experiments, we want to find out at what times we should take these measurements. Because of the increased dimensions, it is impossible to show the expected utility surface in the same way that we did in Figure 1. We again run the procedure outlined in Section 2 in order to find the optimal measurement times d*. In Figure 4(a) we show the convergence towards the optimum as a function of the number of evaluations, finding that we converge after a comparatively small number of evaluations. If we had instead defined an eight-dimensional grid with 40 points per dimension, as in the one-dimensional situation in Figure 1, we would have had to do 40⁸ ≈ 6.6 × 10¹² evaluations. It becomes apparent that we can drastically improve computational efficiency when using BO.

Using the procedure explained in Section 2, we obtain LFIRE posterior samples corresponding to the optimal design d* and then smooth the posterior samples with Gaussian kernel density estimation (KDE). The resulting posterior density is shown in Figure 4(b), together with the posterior density for the one-dimensional design. As expected, the posterior is narrower for D = 8 than for D = 1, due to having more data; the corresponding median estimate of the model parameter comes with a correspondingly narrow 95% credibility interval.

As a comparison, we also compute the expected utility value at a design point consisting of equidistant times; this might be an intuitive choice for an experimenter with no prior information. Using Algorithm 1, we obtain expected utility values for the equidistant and the optimal design times that are close to each other. Thus, unlike in the one-dimensional setting seen before, if we can take many measurements we do not gain much by designing a non-myopic experiment. This is natural, in particular for the relatively simple Death Model.

(a) Convergence
(b) Posterior
Figure 4: (a) Convergence of the best expected utility value as a function of the number of evaluations for the Death Model with D = 8 design dimensions. (b) The posterior distribution, smoothed by Gaussian KDE, corresponding to the optimal design d*. Also shown is the posterior distribution for the one-dimensional optimal design.

3.3 SIR Model Results

The aim for the SIR Model is to estimate the rate of infection β and the rate of recovery γ as efficiently as possible.

One-Dimensional Designs

We put a uniform prior on both model parameters and sample parameters from it. The data used in the LFIRE computations is the population state y(τ) = (S(τ), I(τ), R(τ)) at a particular design time τ. The design space covers a bounded range of measurement times and, when optimising the expected utility via grid search, we use a fixed step size.

We optimise the expected utility function according to the design framework outlined in Section 2, using both grid search and Bayesian optimisation (BO). The resulting expected utility functions are shown in Figure 5. Note that, unlike for the Death Model, we do not have a tractable likelihood function for the SIR Model and therefore we cannot compute an analytic expected utility function as a comparison.

Figure 5: Expected utilities of the SIR Model for the grid search and BO method, using the design framework from Section 2.

The SIR Model expected utilities show a similar uni-modal behaviour to those of the Death Model, the only difference being that the SIR Model results in a U(d) whose peak is shifted towards lower measurement times. The grid search and BO methods yield expected utilities that are generally similar to each other, with correspondingly similar optimal design times. The U(d) computed via grid search, however, has less resolution around the peak and generally more fluctuations due to the Monte-Carlo approximation.

After having found the optimal design points from the expected utilities in Figure 5, we generate a 'real-world' observation by using fixed true values of β and γ. Using this data we then obtain samples from the posterior distribution, by means of the procedure explained in Section 2; we do this for both the grid search and BO methods. We again apply Gaussian KDE to these samples to smooth the posterior densities, and repeat this process for several real-world observations. The mean posterior densities are shown in Figures 6(a) and 6(b).

(a) Grid Search
(b) Bayes. Opt.
Figure 6: SIR Model mean posterior densities for the grid search and BO method. The true parameter values are shown by a red cross.

Both posterior distributions show a wide spread in one parameter, a narrow spread in the other, and uni-modal peaks that lie in the same region. The median parameter estimates and 95% credibility intervals obtained with the grid search and BO methods are in close agreement, and both sets of credibility intervals contain the true data-generating parameters. Generally, one of the two parameters is well estimated, whereas some uncertainty remains in estimating the other.

High-Dimensional Designs

As done for the Death Model, we shall now consider non-myopic design for the SIR Model, i.e. situations where we know that we can take D measurements. As before, the design variable becomes D-dimensional, d = (τ_1, …, τ_D), with the same constraint that the times must be ordered. The data vector is built from the observations of S, I and R at each time τ_k, i.e. y = (y(τ_1), …, y(τ_D)). The computation of the LFIRE ratios is then done as previously, with the exception that the design variable is D-dimensional and the data vector is correspondingly higher-dimensional. We again consider eight design dimensions; the expected utility for D = 8 is evaluated as outlined in Algorithm 1 and then optimised via Bayesian optimisation (BO), instead of grid search, which is infeasible in eight dimensions.

Figure 7(a) shows that, using BO, we can converge to the optimum within a comparatively small number of expected utility evaluations. This is again a drastic reduction compared to the number of evaluations we would have needed with grid search, e.g. if we used the same number of grid points per dimension as in the one-dimensional design setting. We then generate LFIRE posterior samples at this optimal design, according to the procedure outlined in Section 2, and smooth these via Gaussian KDE. After repeating this process several times, the resulting mean posterior density is shown in Figure 7(b); the posterior-mean parameter estimates come with 95% credibility intervals that both contain the true data-generating parameters.

(a) Convergence
(b) Posterior
Figure 7: SIR Model with D = 8 design dimensions. (a) Maximal utility identified as a function of the number of evaluations. (b) The posterior distribution obtained with the optimal design d*, smoothed by Gaussian kernel density estimation.

We again compute the expected utility value at a design point consisting of equidistant times, in order to compare it to the optimal design times. Using Algorithm 1, we obtain expected utility values for the equidistant and the optimal design times that are close to each other. While the optimal design has a higher utility, the difference is small, which is again natural since the relative value of designing experiments generally diminishes as the amount of data that can be gathered increases.

4 Conclusions

In this paper, we have presented a Bayesian experimental design framework for implicit models, where the likelihood is intractable but sampling from the model is possible. We used the LFIRE approach to obtain density ratios of the posterior to the prior distribution, which would otherwise not easily be possible with traditional likelihood-free inference methods such as approximate Bayesian computation. This allowed us to conveniently compute the mutual information between model parameters and simulated data, a notoriously difficult task for intractable models. We then used this mutual information as a utility function to decide where we should take data next. Instead of the common grid search approach, we optimised this utility function by means of Bayesian optimisation and found that this smoothed out the noise introduced by Monte-Carlo approximations. Furthermore, we were able to find optimal designs in design dimensions that would be impossible with grid search.

There are a few limitations to our proposed design framework. First, high-dimensional Bayesian optimisation is still an active research area and its applicability in hundreds of dimensions remains to be investigated. Secondly, as with all likelihood-free inference methods, posterior estimations are approximate. We particularly noticed this when applying LFIRE to the Death Model. While the resulting utility functions were very similar and the optimal designs barely affected, characterising more generally how the approximation affects mutual information would be informative.

While we applied our framework to examples from epidemiology, the proposed methodology is general and thus applicable to a wider range of models. Other implicit models from neurobiology, cell biology or physics might be of particular interest, including both temporal and spatial models. It would then also be valuable to consider the cost or time required for doing an experiment and not only the information gain.

Finally, preliminary results suggest that the proposed framework extends to sequential designs where we update our belief about the model parameters based on the experimental outcome. This is a more realistic setting, but has barely been touched upon for models with intractable likelihoods.

Acknowledgements

Steven Kleinegesse was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh.

References

  • Allen (2008) Allen, L. J. S. (2008). An Introduction to Stochastic Epidemic Models, pages 81–130. Springer Berlin Heidelberg, Berlin, Heidelberg.
  • Alsing et al. (2018) Alsing, J., Wandelt, B., and Feeney, S. (2018). Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology. MNRAS, 477:2874–2885.
  • Atkinson and Donev (1992) Atkinson, A. C. and Donev, A. N. (1992). Optimum Experimental Designs.
  • Beaumont et al. (2002) Beaumont, M., Zhang, W., and Balding, D. (2002). Approximate Bayesian computation in population genetics. Genetics, 162(4):2025–2035.
  • Cook et al. (2008) Cook, A. R., Gibson, G. J., and Gilligan, C. A. (2008). Optimal observation times in experimental epidemic processes. Biometrics, 64(3):860–868.
  • Dehideniya et al. (2018) Dehideniya, M. B., Drovandi, C. C., and McGree, J. M. (2018). Optimal Bayesian design for discriminating between models with intractable likelihoods in epidemiology. Computational Statistics & Data Analysis, 124:277–297.
  • Drovandi and Pettitt (2013) Drovandi, C. C. and Pettitt, A. N. (2013). Bayesian experimental design for models with intractable likelihoods. Biometrics, 69(4):937–948.
  • Dutta et al. (2016) Dutta, R., Corander, J., Kaski, S., and Gutmann, M. U. (2016). Likelihood-free inference by ratio estimation. ArXiv e-prints.
  • Fedorov (1972) Fedorov, V. (1972). Theory of Optimal Experiments.
  • GPyOpt (2016) GPyOpt (2016). GPyOpt: A Bayesian optimization framework in Python. http://github.com/SheffieldML/GPyOpt.
  • Hainy et al. (2016) Hainy, M., Drovandi, C. C., and McGree, J. (2016). Likelihood-free extensions for Bayesian sequentially designed experiments. In Kunert, J., Müller, C. H., and Atkinson, A. C., editors, 11th International Workshop in Model-Oriented Design and Analysis (mODa 2016), pages 153–161, Hamminkeln, Germany. Springer.
  • Kullback and Leibler (1951) Kullback, S. and Leibler, R. A. (1951). On information and sufficiency. Ann. Math. Statistics, 22:79–86.
  • Liepe et al. (2013) Liepe, J., Filippi, S., Komorowski, M., and Stumpf, M. P. H. (2013). Maximizing the information content of experiments in systems biology. PLOS Computational Biology, 9(1):1–13.
  • Schafer and Freeman (2012) Schafer, C. M. and Freeman, P. E. (2012). Likelihood-free inference in cosmology: Potential for the estimation of luminosity functions. 209:3–19.
  • Minasny and McBratney (2005) Minasny, B. and McBratney, A. B. (2005). The Matérn function as a general model for soil variograms. Geoderma, 128(3):192–207. Pedometrics 2003.
  • Mockus et al. (1978) Mockus, J., Tiesis, V., and Zilinskas, A. (1978). The application of Bayesian methods for seeking the extremum, volume 2.
  • Müller (1999) Müller, P. (1999). Simulation-based optimal design.
  • Numminen et al. (2013) Numminen, E., Cheng, L., Gyllenberg, M., and Corander, J. (2013). Estimating the transmission dynamics of Streptococcus pneumoniae from strain prevalence data. Biometrics, 69(3):748–757.
  • Oh et al. (2018) Oh, C., Gavves, E., and Welling, M. (2018). BOCK: Bayesian optimization with cylindrical kernels. In Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3868–3877, Stockholmsmässan, Stockholm, Sweden. PMLR.
  • Paninski (2005) Paninski, L. (2005). Asymptotic theory of information-theoretic experimental design. Neural Comput., 17(7):1480–1507.
  • Price et al. (2016) Price, D. J., Bean, N. G., Ross, J. V., and Tuke, J. (2016). On the efficient determination of optimal Bayesian experimental designs using ABC: A case study in optimal observation of epidemics. Journal of Statistical Planning and Inference, 172:1–15.
  • Rasmussen and Williams (2005) Rasmussen, C. E. and Williams, C. K. I. (2005). Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press.
  • Ricker (1954) Ricker, W. E. (1954). Stock and recruitment. Journal of the Fisheries Research Board of Canada, 11(5):559–623.
  • Rubin (1984) Rubin, D. B. (1984). Bayesianly justifiable and relevant frequency calculations for the applied statistician. Ann. Statist., 12(4):1151–1172.
  • Ryan et al. (2016) Ryan, E. G., Drovandi, C. C., McGree, J. M., and Pettitt, A. N. (2016). A review of modern computational algorithms for Bayesian optimal design. International Statistical Review, 84(1):128–154.
  • Shahriari et al. (2016) Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., and de Freitas, N. (2016). Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175.
  • Sugiyama et al. (2012) Sugiyama, M., Suzuki, T., and Kanamori, T. (2012). Density Ratio Estimation in Machine Learning. Cambridge University Press.
  • Wood (2010) Wood, S. N. (2010). Statistical inference for noisy nonlinear ecological dynamic systems. Nature, 466:1102–1104.
