where z are unobserved variables and θ_o are some fixed but unknown values of the model parameters. The likelihood function L(θ) is implicitly defined via an integral over the unobserved variables,

L(\theta) = p(x_o \mid \theta) = \int p(x_o, z \mid \theta) \, dz.
For many realistic data generating processes, the integral cannot be computed analytically in closed form, and numerical approximation is computationally too costly as well. Standard likelihood-based inference is then not feasible. But inference is still possible when data can be simulated from the model. Such simulation-based likelihood-free inference methods have emerged in multiple disciplines: “indirect inference” originated in economics (Gouriéroux et al., 1993), “approximate Bayesian computation” (ABC) in genetics (Beaumont et al., 2002; Marjoram et al., 2003; Sisson et al., 2007), and the “synthetic likelihood” approach in ecology (Wood, 2010). The different methods share the basic idea of identifying the model parameters by finding values which yield simulated data that resemble the observed data. The inference process is shown in a schematic way in Algorithm 1 in the framework of ABC.
In Algorithm 1, two fundamental difficulties of the aforementioned inference methods are highlighted. One difficulty is the measurement of similarity, or discrepancy, between the observed data and the simulated data (line 5). The choice of discrepancy measure affects the statistical quality of the inference process. The second difficulty is of a computational nature. Since simulating data can be computationally very costly, one would like to identify the region in the parameter space where the simulated data resemble the observed data as quickly as possible, without proposing parameters which have a negligible chance to be accepted (line 3).
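A rejection-ABC scheme of the kind sketched in Algorithm 1 can be illustrated in a few lines. This is a minimal sketch with a hypothetical Gaussian toy model; the function names, the sample sizes, and the tolerance `eps` are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def abc_rejection(x_obs, prior_sample, simulate, discrepancy, eps,
                  n_prop=5000, seed=0):
    """Schematic rejection ABC: propose from the prior, simulate,
    and keep parameters whose simulated data lie close to x_obs."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_prop):
        theta = prior_sample(rng)                 # propose parameters
        x_sim = simulate(theta, rng)              # simulate data from the model
        if discrepancy(x_obs, x_sim) <= eps:      # accept if data are similar
            accepted.append(theta)
    return np.array(accepted)

# Toy model: data are N(theta, 1); the observed data have mean 0.
rng = np.random.default_rng(0)
x_obs = rng.normal(0.0, 1.0, size=200)
post = abc_rejection(
    x_obs,
    prior_sample=lambda r: r.uniform(-5.0, 5.0),
    simulate=lambda th, r: r.normal(th, 1.0, size=200),
    discrepancy=lambda a, b: abs(a.mean() - b.mean()),
    eps=0.2,
)
```

The accepted parameter values form a sample from an approximate posterior; here they concentrate around the true mean of zero. The two difficulties discussed above correspond to the choice of `discrepancy` and to the blind proposals drawn from the prior.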
2 Discriminability as discrepancy measure
We transformed the original problem of measuring the discrepancy between the observed and the simulated data into a problem of classifying the data into simulated versus observed (Gutmann et al., 2014). Intuitively, it is easier to discriminate between two data sets which are very different than between data which are similar, and when the two data sets are generated with the same parameter values, the classification task cannot be solved significantly above chance level. This motivated us to use the discriminability (classifiability) as discrepancy measure, and to perform likelihood-free inference by identifying the parameter values which yield chance-level discriminability only (Gutmann et al., 2014).
We next illustrate this approach using a toy example. The observed data are assumed to be sampled from a standard normal distribution (black curve in Figure 1(a)), and the parameter of interest is the mean. For data simulated with a mean far from zero (green curve), the two densities barely overlap, so that classification is easy. In fact, linear discriminant analysis (LDA) yields a discriminability of almost 100% (Figure 1(b), green dashed curve). If the data are simulated with a mean closer to zero (red curve), the simulated data become more similar to the observed data and the classification accuracy drops to around 60% (red dashed curve). When the simulated and observed data are generated with the same value of the mean, only chance-level discriminability of 50% is obtained. This illustrates how discriminability can be used as a discrepancy measure.
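A minimal sketch of this discriminability-based discrepancy for the Gaussian toy example follows. For one-dimensional data with equal variances, LDA reduces to thresholding at the midpoint of the two sample means, which is what we implement here; the simulated means 4.0 and 0.5 are our own illustrative choices, not values read off the figure:

```python
import numpy as np

def discriminability(x_obs, x_sim):
    """Classification accuracy of a simple linear (LDA-like) rule for 1-D
    data: threshold at the midpoint of the two sample means, with the
    class of larger mean assigned above the threshold. Chance level: 0.5."""
    t = 0.5 * (x_obs.mean() + x_sim.mean())
    if x_sim.mean() >= x_obs.mean():
        correct = np.sum(x_sim > t) + np.sum(x_obs <= t)
    else:
        correct = np.sum(x_sim <= t) + np.sum(x_obs > t)
    return correct / (len(x_obs) + len(x_sim))

rng = np.random.default_rng(0)
x_obs = rng.normal(0.0, 1.0, 100_000)                     # observed: N(0, 1)
far  = discriminability(x_obs, rng.normal(4.0, 1.0, 100_000))
near = discriminability(x_obs, rng.normal(0.5, 1.0, 100_000))
same = discriminability(x_obs, rng.normal(0.0, 1.0, 100_000))
```

With a distant simulated mean the accuracy `far` is close to 100%, with a nearby mean `near` drops to roughly 60%, and with matching parameters `same` stays at chance level, mirroring the behavior described above.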
We analyzed the validity of this approach theoretically and demonstrated it on more challenging synthetic data as well as real data with an individual-based epidemic model for bacterial infections in day care centers (Gutmann et al., 2014). The finding that classification can be used to measure the discrepancy has both practical and theoretical value: The main practical value is that the rather difficult problem of choosing a discrepancy measure is reduced to a more standard problem where we can leverage effective existing solutions. The theoretical value lies in the establishment of a tight connection between likelihood-free inference and classification – two fields of research which appear rather different at first glance.
3 Bayesian optimization to identify parameter regions of interest
In the following, we view the discrepancy as a function of the model parameters. A small discrepancy is assumed to imply that the simulated data are judged to be similar to the observed data. The difficulty in finding parameter regions where the discrepancy is small is at least twofold: First, the mapping from the parameters to the discrepancy can generally not be expressed in closed form, and derivatives are not available either. Second, the discrepancy is actually a stochastic process over the parameter space, due to the use of simulations to obtain the simulated data. We illustrate this in Figure 2 for our Gaussian toy example, where the discrepancy is the discriminability between the observed and the simulated data (for further examples, see Gutmann and Corander, 2015). The figure visualizes the distribution of the discriminability as a function of the mean. The fact that the discrepancy is a random process was suppressed in Figure 1 by working with a large sample size.
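The stochasticity of the discrepancy is easy to see in a short sketch. Here we use the simpler distance between sample means as the discrepancy, rather than the discriminability, to keep the example brief; sample sizes and the evaluation point are our own choices. Repeated simulation at one fixed parameter value yields a spread of discrepancy values, as in Figure 2:

```python
import numpy as np

rng = np.random.default_rng(0)
x_obs = rng.normal(0.0, 1.0, 50)   # small observed sample from N(0, 1)

def discrepancy(theta, rng, n=50):
    # One stochastic realization: distance between simulated and observed means.
    return abs(rng.normal(theta, 1.0, n).mean() - x_obs.mean())

# Evaluating the discrepancy repeatedly at the SAME theta gives different values.
draws = np.array([discrepancy(1.0, rng) for _ in range(1000)])
```

The spread of `draws` shows that any single evaluation of the discrepancy is only a noisy observation of the underlying function of the parameters, which is what makes the optimization problem hard.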
We used Bayesian optimization, a combination of nonlinear (Gaussian process) regression and optimization (see, for example, Brochu et al., 2010), to quickly identify regions where the discrepancy is likely to be small (Gutmann and Corander, 2015). In Bayesian optimization, the available information about the relation between the parameters and the discrepancy is used to build a statistical model of the discrepancy, and new data are actively acquired in regions where the minimum of the discrepancy is potentially located. After acquisition of the new data, e.g. a tuple consisting of a parameter value and the corresponding realized discrepancy, the model is updated using Bayes’ theorem.
For our simple toy example, the region around zero was identified as the region of interest within ten acquisitions (Figure 3(a-e)). While the location of the minimum is approximately correct, the posterior mean approximates the (empirical) mean of the discriminability in Figure 2 only roughly. As more evidence about the behavior of the discrepancy in the region of interest is acquired, the fit improves (Figure 3(f)).
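The acquisition loop can be sketched with a minimal Gaussian-process regression and a lower-confidence-bound acquisition rule, one of several common choices; the model and acquisition function used in the paper may differ. The kernel hyperparameters, the LCB coefficient, and the toy discrepancy below are our own illustrative choices:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel for 1-D inputs (unit prior variance)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=0.02):
    """GP regression posterior mean and variance at test points Xs (1-D)."""
    ym = y.mean()                                   # center the noisy targets
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y - ym))
    Ks = rbf(X, Xs)
    mu = ym + Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)
    return mu, var

def bayes_opt_min(f, bounds, n_init=3, n_acq=15, seed=1):
    """Minimize a noisy function: model it with a GP and acquire new
    evaluations where the lower confidence bound mu - 2*sd is smallest."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=n_init)
    y = np.array([f(th, rng) for th in X])
    grid = np.linspace(bounds[0], bounds[1], 200)
    for _ in range(n_acq):
        mu, var = gp_posterior(X, y, grid)
        th_new = grid[np.argmin(mu - 2.0 * np.sqrt(var))]  # acquisition step
        X = np.append(X, th_new)
        y = np.append(y, f(th_new, rng))                   # model update
    mu, _ = gp_posterior(X, y, grid)
    return grid[np.argmin(mu)]           # current best guess for the minimizer

# Toy discrepancy: distance between the simulated mean and the observed mean (zero).
def discrepancy(theta, rng):
    return abs(rng.normal(theta, 1.0, 50).mean())

best = bayes_opt_min(discrepancy, bounds=(-5.0, 5.0))
```

The LCB rule trades off exploitation (low posterior mean) against exploration (high posterior uncertainty), so the sampler first covers the parameter space and then concentrates its evaluations near the minimum, here around zero, with far fewer simulations than blind proposals would need.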
In the full paper (Gutmann and Corander, 2015), we show that Bayesian optimization not only allows the regions of interest to be identified quickly but also enables approximate posterior inference. Our findings are supported by theory and by applications to real data analysis with intractable models. In our applications, the inference was accelerated through a reduction in the number of required simulations by several orders of magnitude.
Two major difficulties in likelihood-free inference are the choice of the discrepancy measure between simulated and observed data, and the identification of regions in the parameter space where the discrepancy is likely to be small. The former difficulty is more statistical, the latter more computational in nature. We gave a brief introduction to our recent work on the two issues: We used classification to measure the discrepancy (Gutmann et al., 2014), and Bayesian optimization to quickly identify regions of low discrepancy (Gutmann and Corander, 2015).
- Beaumont et al. (2002) M.A. Beaumont, W. Zhang, and D.J. Balding. Approximate Bayesian computation in population genetics. Genetics, 162(4):2025–2035, 2002.
- Brochu et al. (2010) E. Brochu, V.M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv:1012.2599 [cs.LG], 2010.
- Gouriéroux et al. (1993) C. Gouriéroux, A. Monfort, and E. Renault. Indirect inference. J. Appl. Econ., 8(S1):S85–S118, 1993.
- Gutmann and Corander (2015) M.U. Gutmann and J. Corander. Bayesian optimization for likelihood-free inference of simulator-based statistical models. arXiv:1501.03291 [stat.ML], 2015.
- Gutmann et al. (2014) M.U. Gutmann, R. Dutta, S. Kaski, and J. Corander. Likelihood-free inference via classification. arXiv:1407.4981 [stat.CO], 2014.
- Marjoram et al. (2003) P. Marjoram, J. Molitor, V. Plagnol, and S. Tavaré. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 100(26):15324–15328, 2003.
- Sisson et al. (2007) S.A. Sisson, Y. Fan, and M.M. Tanaka. Sequential Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 104(6):1760–1765, 2007.
- Wood (2010) S.N. Wood. Statistical inference for noisy nonlinear ecological dynamic systems. Nature, 466(7310):1102–1104, 2010.