Importance sampling (IS) is a powerful Monte Carlo tool that allows one to learn properties of a distribution (called the target) by utilizing weighted samples drawn from another distribution (called the proposal) [1]. The choice of the proposal distribution in IS algorithms is critical for successful performance. In particular, one issue that can arise in IS when a poorly chosen proposal is used is weight degeneracy, whereby only a few samples have significant weights that contribute to the approximation of the target distribution. A high mismatch between the proposal distribution and the target distribution yields a high-variance IS estimator. Alternatively, one can use a method known as multiple importance sampling (MIS), in which samples are drawn from multiple proposal distributions [2, 3]. MIS methods have an advantage over static IS methods, as they show larger sample diversity, and alternative weighting schemes can be applied to reduce the variance of the IS estimator [4, 5, 6]. Adaptive importance sampling (AIS) methods were introduced in order to adapt the proposal distribution through an iterative learning algorithm [7]. While AIS methods have shown success in many practical applications [8, 9], major challenges become apparent when considering multimodal and highly nonlinear target distributions. Moreover, AIS also suffers from the curse of dimensionality [10], whereby the issue of weight degeneracy intensifies and leads to reduced sample diversity in subsequent iterations of the algorithm.
Some advances have been made in AIS schemes to increase the diversity of the generated samples and limit the effects of weight degeneracy. The use of deterministic mixture weights [2] in the population Monte Carlo scheme (DM-PMC) [11] significantly reduces the variance of the IS estimator and maintains the diversity of the proposals through each iteration. Unfortunately, the DM-PMC algorithm does not adapt the covariance matrices of the proposal distributions. Alternatively, the nonlinear population Monte Carlo (N-PMC) scheme [13] applies a nonlinear transformation of the importance weights, similar to the approaches of [12, 13, 14]. The weight transformation guarantees that more samples are represented in the proposal adaptation in the case of weight degeneracy. However, in high-dimensional spaces, the effect of the weight transformation may lead to minuscule adaptation of the proposal. Other AIS methods, such as mixture population Monte Carlo (M-PMC) [15], adaptive multiple importance sampling (AMIS) [16], and adaptive population importance sampling (APIS) [17], have also been proposed with some variations. Some of these AIS methods also adapt the covariance matrix, but the stability of the update is not guaranteed. In this paper, we propose a novel and robust method for adapting a set of proposal distributions, called covariance adaptive importance sampling (CAIS), which tackles the inherent problems posed by high-dimensional targets. Specifically, we condition the covariance adaptation of each proposal on a local measurement of the effective sample size in order to achieve robustness.
II Problem Formulation
We introduce the estimation problem in a Bayesian context, with the goal of computing the posterior distribution $\tilde{\pi}(\mathbf{x}) \equiv p(\mathbf{x}|\mathbf{y})$, where $\mathbf{x}$ is a $d_x$-dimensional vector of unknown parameters related to a set of available observations $\mathbf{y}$. It follows from Bayes' rule that
$$p(\mathbf{x}|\mathbf{y}) = \frac{\ell(\mathbf{y}|\mathbf{x})\, p_0(\mathbf{x})}{Z(\mathbf{y})} \propto \ell(\mathbf{y}|\mathbf{x})\, p_0(\mathbf{x}),$$
where $\ell(\mathbf{y}|\mathbf{x})$ is the likelihood function of the data, while $p_0(\mathbf{x})$ is a prior distribution of the vector of unknown parameters. The goal is to compute the expectation of some function $f$ w.r.t. $\tilde{\pi}$, such that
$$I = \mathbb{E}_{\tilde{\pi}}[f(\mathbf{x})] = \int f(\mathbf{x})\, \tilde{\pi}(\mathbf{x})\, d\mathbf{x}.$$
In particular, we may be interested in computing quantities such as the normalizing constant or the mean of the posterior distribution. To that end, basic implementations of IS approximate the target using properly weighted samples drawn from a proposal distribution.
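The paper itself contains no code; as an illustration, the following sketch implements basic self-normalized IS for a one-dimensional Gaussian target. The target, the mismatched proposal, and the sample size are arbitrary choices for the example, not settings from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized 1-D Gaussian target, pi(x) ∝ exp(-(x - 2)^2 / 2)
    return -0.5 * (x - 2.0) ** 2

# Proposal: N(0, 3^2), deliberately mismatched with the target
mu_q, sig_q = 0.0, 3.0
x = rng.normal(mu_q, sig_q, size=5000)

# Standard importance weights w_m = pi(x_m) / q(x_m), computed in log space
log_w = log_target(x) - stats.norm.logpdf(x, mu_q, sig_q)
w = np.exp(log_w - log_w.max())   # stabilized unnormalized weights
w_bar = w / w.sum()               # normalized weights

# Self-normalized IS estimate of the posterior mean E_pi[x]
mean_est = np.sum(w_bar * x)
```

Working in log space before exponentiating avoids numerical underflow of the weights, which is the usual failure mode when the proposal is far from the target.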
III Novel Covariance Adaptation Strategy
TABLE I: Basic AIS framework.
1. Initialization: Select the initial proposal parameters $\boldsymbol{\mu}_1$ and $\boldsymbol{\Sigma}_1$.
2. For $t = 1, \dots, T$:
   a. Draw $M$ samples from the proposal, $\mathbf{x}_t^{(m)} \sim q(\mathbf{x}; \boldsymbol{\mu}_t, \boldsymbol{\Sigma}_t)$, $m = 1, \dots, M$.
   b. Compute the standard importance weights, $w_t^{(m)} = \tilde{\pi}(\mathbf{x}_t^{(m)}) / q(\mathbf{x}_t^{(m)}; \boldsymbol{\mu}_t, \boldsymbol{\Sigma}_t)$, and normalize them, $\bar{w}_t^{(m)} = w_t^{(m)} / \sum_{i=1}^{M} w_t^{(i)}$.
   c. Adapt the proposal parameters $\boldsymbol{\mu}_{t+1}$ and $\boldsymbol{\Sigma}_{t+1}$.
3. Return the pairs $\{\mathbf{x}_t^{(m)}, w_t^{(m)}\}$ for $m = 1, \dots, M$ and $t = 1, \dots, T$.
We assume a family of proposal distributions, $q(\mathbf{x}; \boldsymbol{\mu}, \boldsymbol{\Sigma})$, where the parameters $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are the mean and covariance matrix, respectively. Table I summarizes a basic AIS framework, where the proposal is adapted over $T$ iterations.
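The loop of Table I can be sketched as follows. This is an illustrative implementation, assuming a Gaussian proposal family and plain weighted-moment adaptation; the 2-D target, initialization, and sample counts are our own choices, and a small diagonal jitter is added purely as a numerical safeguard:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def basic_ais(log_target, mu0, cov0, n_iters=20, n_samples=500, seed=0):
    """Basic AIS loop of Table I: draw, weight, normalize, adapt."""
    rng = np.random.default_rng(seed)
    mu, cov = np.asarray(mu0, dtype=float), np.asarray(cov0, dtype=float)
    all_pairs = []
    for _ in range(n_iters):
        # a. Draw samples from the current proposal
        x = rng.multivariate_normal(mu, cov, size=n_samples)
        # b. Standard importance weights, normalized (log space for stability)
        log_w = log_target(x) - mvn.logpdf(x, mu, cov)
        w = np.exp(log_w - log_w.max())
        w_bar = w / w.sum()
        # c. Adapt the proposal: weighted sample mean and covariance
        mu = w_bar @ x
        diff = x - mu
        cov = (diff * w_bar[:, None]).T @ diff + 1e-9 * np.eye(mu.size)
        all_pairs.append((x, w))
    return mu, cov, all_pairs

# Illustrative 2-D Gaussian target with mean (1, -1)
tgt = mvn(mean=[1.0, -1.0], cov=np.eye(2))
mu, cov, _ = basic_ais(tgt.logpdf, mu0=[3.0, 3.0], cov0=4 * np.eye(2))
```

On benign unimodal targets this plain scheme converges; Section III explains why the covariance update in step (c) becomes unstable under weight degeneracy.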
III-A General Formulation
For a multivariate normal distribution, the maximum likelihood (ML) estimate of the covariance matrix for $M$ unweighted samples $\{\mathbf{x}^{(m)}\}_{m=1}^{M}$ with sample mean $\hat{\boldsymbol{\mu}} = \frac{1}{M}\sum_{m=1}^{M} \mathbf{x}^{(m)}$ is given by
$$\hat{\boldsymbol{\Sigma}} = \frac{1}{M} \sum_{m=1}^{M} (\mathbf{x}^{(m)} - \hat{\boldsymbol{\mu}})(\mathbf{x}^{(m)} - \hat{\boldsymbol{\mu}})^{\top}. \quad (3)$$
It is well known that if $M \leq d_x$, then $\hat{\boldsymbol{\Sigma}}$ is rank deficient and cannot be inverted [18]. Analogous to the case of unweighted samples, suppose that we are given a set of weighted samples $\{\mathbf{x}^{(m)}, \bar{w}^{(m)}\}_{m=1}^{M}$. If the weighted mean of the samples is $\hat{\boldsymbol{\mu}} = \sum_{m=1}^{M} \bar{w}^{(m)} \mathbf{x}^{(m)}$, then the weighted empirical covariance is given by
$$\hat{\boldsymbol{\Sigma}} = \sum_{m=1}^{M} \bar{w}^{(m)} (\mathbf{x}^{(m)} - \hat{\boldsymbol{\mu}})(\mathbf{x}^{(m)} - \hat{\boldsymbol{\mu}})^{\top}. \quad (4)$$
Consider the set of indices of samples carrying non-negligible weight. As the weights degenerate, this set shrinks; in the extreme case, only a handful of samples retain nonzero weight. If the number of samples with non-negligible weight is at most $d_x$, then (4) yields a (numerically) singular matrix. For this reason, adaptation of the covariance matrix in AIS is challenging due to the chance of weight degeneracy of the samples at each iteration.
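The rank deficiency under degenerate weights is easy to verify numerically. A short sketch (dimensions and weight pattern are illustrative):

```python
import numpy as np

def weighted_cov(x, w_bar):
    """Weighted empirical covariance of eq. (4):
    sum_m w_m (x_m - mu)(x_m - mu)^T with mu the weighted mean."""
    mu = w_bar @ x
    d = x - mu
    return (d * w_bar[:, None]).T @ d

rng = np.random.default_rng(1)
d, M = 10, 1000
x = rng.standard_normal((M, d))

# Healthy weights: the covariance estimate has full rank and is invertible
w_uniform = np.full(M, 1.0 / M)
full_rank = np.linalg.matrix_rank(weighted_cov(x, w_uniform))

# Degenerate weights: only 3 samples carry mass, so the estimate is a sum of
# 3 rank-one terms and cannot have rank d = 10
w_deg = np.zeros(M)
w_deg[:3] = 1.0 / 3.0
rank = np.linalg.matrix_rank(weighted_cov(x, w_deg))
```

Note that with three equally weighted samples the centered vectors are linearly dependent (they sum to zero), so the rank is in fact at most two.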
We can characterize the degeneracy of the importance weights in AIS at any iteration through the effective sample size (ESS) [19], which can be approximated as
$$\widehat{\mathrm{ESS}} = \frac{1}{\sum_{m=1}^{M} \left(\bar{w}^{(m)}\right)^2}. \quad (5)$$
It is important to note that the approximation in (5) is only valid under certain conditions [20]. Moreover, other approximations of the ESS can be used, such as $1/\max_m \bar{w}^{(m)}$, though for our experiments we use eq. (5). Under the assumption that the generated samples exhibit large enough variability at each AIS iteration, our criterion for robust covariance adaptation rests upon guaranteeing that the ESS at each iteration of the algorithm satisfies some lower bound condition.
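The ESS approximation of eq. (5) is a one-liner. The two weight vectors below are illustrative extremes:

```python
import numpy as np

def ess(w_bar):
    """ESS approximation of eq. (5): 1 / sum_m (w_bar_m)^2,
    for normalized weights w_bar."""
    w_bar = np.asarray(w_bar, dtype=float)
    return 1.0 / np.sum(w_bar ** 2)

# Uniform weights: every sample contributes, so ESS equals M
ess_uniform = ess(np.full(100, 0.01))        # = 100

# Degenerate weights: one dominant sample, so ESS is close to 1
ess_degenerate = ess(np.array([0.97] + [0.03 / 99] * 99))
```

The quantity interpolates between 1 (total degeneracy) and $M$ (perfectly uniform weights), which is what makes it a natural trigger for the covariance safeguard described next.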
III-B Algorithm Description
In this section, we formalize a novel algorithm, covariance adaptive importance sampling (CAIS), which guarantees robust covariance adaptation for a population of proposal distributions.
Initialization: Initialize the proposal parameters $\{\boldsymbol{\mu}_{1,n}, \boldsymbol{\Sigma}_{1,n}\}_{n=1}^{N}$ of the $N$ mixands and set $t = 1$.
Generate samples: Draw $M$ samples from each mixand, $\mathbf{x}_{t,n}^{(m)} \sim q_n(\mathbf{x}; \boldsymbol{\mu}_{t,n}, \boldsymbol{\Sigma}_{t,n})$.
Compute local ESS: For each mixand, compute and normalize the importance weights of its samples, and compute the local ESS, $\widehat{\mathrm{ESS}}_n$, by applying eq. (5).
Update mean: Update the mean of each mixand by taking the weighted sample mean of its samples under the untransformed (normalized) importance weights.
Update covariance: For each mixand, if the local ESS exceeds the prescribed threshold, update the covariance matrix via the weighted empirical covariance of eq. (4) with the untransformed weights; otherwise, first transform the weights with the chosen weight transformation function and use the transformed weights in eq. (4).
Check stopping condition: If $t = T$, then return the pairs of samples and weights for all mixands and iterations. Otherwise, set $t \leftarrow t + 1$ and go to Step 2.
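The steps above can be sketched in code. This is a simplified, single-file illustration rather than the paper's exact algorithm: the threshold default (twice the dimension), the geometric search over the tempering exponent, the diagonal jitter, and the toy target are all our own choices for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def temper(w, gamma):
    # Weight tempering: flatten the unnormalized weights as w**gamma, then normalize
    v = w ** gamma
    return v / v.sum()

def cais(log_target, mus, covs, n_iters=30, M=300, ess_min=None, seed=0):
    """Simplified CAIS sketch: means always use the raw weights; covariances
    use tempered weights whenever the local ESS falls below the threshold."""
    rng = np.random.default_rng(seed)
    mus = [np.asarray(m, dtype=float) for m in mus]
    covs = [np.asarray(c, dtype=float) for c in covs]
    d = mus[0].size
    if ess_min is None:
        ess_min = 2 * d                        # stability requires a threshold > d
    for _ in range(n_iters):
        for n in range(len(mus)):
            x = rng.multivariate_normal(mus[n], covs[n], size=M)
            log_w = log_target(x) - mvn.logpdf(x, mus[n], covs[n])
            w = np.exp(log_w - log_w.max())
            w_bar = w / w.sum()
            local_ess = 1.0 / np.sum(w_bar ** 2)    # eq. (5), per mixand
            mus[n] = w_bar @ x                      # mean: untransformed weights
            cw = w_bar
            if local_ess < ess_min:
                # crude geometric search for a tempering exponent meeting the bound
                gamma = 1.0
                while gamma > 0.02 and 1.0 / np.sum(temper(w, gamma) ** 2) < ess_min:
                    gamma *= 0.7
                cw = temper(w, gamma)
            diff = x - cw @ x
            # eq. (4) with the selected weights; jitter is a numerical safeguard
            covs[n] = (diff * cw[:, None]).T @ diff + 1e-9 * np.eye(d)
    return mus, covs

# Illustrative run: one 2-D Gaussian target recovered from a wide initialization
tgt = mvn(mean=[2.0, 2.0], cov=np.eye(2))
mus, covs = cais(tgt.logpdf, mus=[[0.0, 0.0]], covs=[5.0 * np.eye(2)])
```

The key structural point is visible in the branch: location adaptation always sees the raw weights, while scale adaptation is gated by the local ESS.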
III-C Algorithm Summary
The novel technique adapts both the mean and covariance matrix for a population of proposal distributions. We emphasize that the untransformed weights are used to adapt the mean of each proposal, while the covariance adaptation is conditioned on the local ESS of the samples drawn from each proposal. If the local ESS is above a threshold, the untransformed weights are used for the covariance matrix adaptation. Otherwise, the importance weights are transformed using a weight transformation function, and the covariance is adapted using the transformed weights, yielding a wider and hence more stable proposal.
III-D Choice of the Weight Transformation Function
III-D1 Weight Clipping
One choice for the transformation is the clipping function utilized in N-PMC [13]. N-PMC transforms the weights by building a sorted permutation of the unnormalized weights,
$$w^{(i_1)} \geq w^{(i_2)} \geq \dots \geq w^{(i_M)},$$
and then transforming the subset of the $M_T$ greatest weights as
$$\tilde{w}^{(i_m)} = w^{(i_{M_T})}, \quad m = 1, \dots, M_T, \qquad (9)$$
while the remaining weights are left unchanged. This choice guarantees that the ESS determined by the transformed weights is bounded below by $M_T$. Intuitively, in the case of extreme degeneracy, this can be interpreted as adapting the covariance matrix using the ML estimate of the covariance considering only the $M_T$ highest-weighted samples.
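A small sketch of the clipping transformation and its ESS guarantee (the weight vector and clipping parameter are illustrative):

```python
import numpy as np

def clip_weights(w, m_t):
    """N-PMC-style clipping: set the m_t largest unnormalized weights to the
    value of the m_t-th largest, leaving the rest untouched."""
    w = np.asarray(w, dtype=float).copy()
    order = np.argsort(w)[::-1]          # sorted permutation, largest first
    thresh = w[order[m_t - 1]]           # the m_t-th greatest weight
    w[order[:m_t]] = thresh
    return w

def ess(w):
    # ESS approximation of eq. (5), applied to unnormalized weights
    w_bar = w / w.sum()
    return 1.0 / np.sum(w_bar ** 2)

# A degenerate weight set: one sample dominates, ESS close to 1
w = np.array([100.0, 1.0, 0.5, 0.2, 0.1, 0.05, 0.01, 0.01])
wc = clip_weights(w, m_t=4)
# After clipping, the ESS of the transformed weights is at least m_t = 4
```

Clipping buys the ESS lower bound at the cost of flattening the top of the weight distribution, which motivates the tempering alternative below.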
III-D2 Weight Tempering
While (9) guarantees that the minimum ESS condition is satisfied, it also forces the largest weights to take identical values, eliminating any information about which of these weights are most representative of the target distribution. An alternative choice, which offers more flexibility, is the weight tempering function,
$$\tilde{w}^{(m)} = \left(w^{(m)}\right)^{\gamma},$$
where $0 < \gamma \leq 1$ controls the strength of the transformation of the importance weights: smaller values of $\gamma$ flatten the weights and increase the ESS, while preserving their relative ordering.
Note that all strategies transform the importance weights but preserve the positions of the samples. More advanced mechanisms could be devised by implementing covariance estimators where the samples are also transformed, following the ideas of [21].
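The tempering behavior can be sketched directly; here we write the tempering exponent as $\gamma$ (the symbol and the example weights are ours):

```python
import numpy as np

def temper_weights(w, gamma):
    """Weight tempering: raise unnormalized weights to a power 0 < gamma <= 1.
    Smaller gamma flattens the weights but preserves their ranking."""
    return np.asarray(w, dtype=float) ** gamma

def ess(w):
    # ESS approximation of eq. (5), applied to unnormalized weights
    w_bar = w / w.sum()
    return 1.0 / np.sum(w_bar ** 2)

# A degenerate weight set: one sample dominates
w = np.array([100.0, 1.0, 0.5, 0.2, 0.1])
ess_raw = ess(w)                          # close to 1
ess_half = ess(temper_weights(w, 0.5))    # larger
ess_tenth = ess(temper_weights(w, 0.1))   # larger still, approaching M = 5
```

Unlike clipping, tempering never makes two distinct weights equal, so the ordering information about which samples best represent the target survives the transformation.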
III-E Choice of Parameters
The choice of the ESS threshold depends on the dimension $d_x$ of the target. In order to guarantee stability of the covariance update, the threshold must be greater than $d_x$. However, we note that under poor initialization of the proposal distribution, a threshold that is large relative to the number of samples $M$ will force a strong nonlinear transformation of the importance weights, regardless of the transformation function chosen. Consequently, the adapted covariance matrix will barely differ from that of the previous iteration, resulting in slower convergence to the true target.
The weight tempering parameter $\gamma$ should be chosen such that the ESS of the transformed weights meets the threshold. In order to find a suitable $\gamma$, it is sufficient to employ a combination of loose grid search and fine grid search, similar to the approach used in [22]. For a $\gamma$ that is $\epsilon$-suboptimal, the computational complexity of a standard one-dimensional grid search is $\mathcal{O}(1/\epsilon)$. For a reasonable choice of $\epsilon$ (not too small), this is overshadowed by the computational complexity of each covariance matrix update, which is $\mathcal{O}(M d_x^2)$.
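A coarse-to-fine search of this kind might look as follows; the grid step sizes and the example weights are illustrative, and the search exploits the fact that the ESS of tempered weights decreases monotonically in the exponent:

```python
import numpy as np

def ess(w):
    # ESS approximation of eq. (5), applied to unnormalized weights
    w_bar = w / w.sum()
    return 1.0 / np.sum(w_bar ** 2)

def find_gamma(w, ess_target, eps=1e-3):
    """Coarse-to-fine grid search for the largest tempering exponent gamma in
    (0, 1] whose tempered weights w**gamma satisfy ESS >= ess_target.
    Resolving gamma to within eps costs O(1/eps) ESS evaluations at worst."""
    if ess(w) >= ess_target:
        return 1.0                          # no transformation needed
    g = 1.0                                 # invariant: ess(w**g) < ess_target
    for step in (0.1, 0.01, eps):           # loose grid, then finer grids
        while g - step > eps and ess(w ** (g - step)) < ess_target:
            g -= step
    return g - eps                          # eps-suboptimal, meets the ESS bound

w = np.array([100.0, 1.0, 0.5, 0.2, 0.1, 0.05, 0.01, 0.01])
g = find_gamma(w, ess_target=4.0)
```

Each refinement pass only searches inside the last bracketing interval of the coarser grid, which is what keeps the total number of ESS evaluations small in practice.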
IV Numerical Examples
IV-A Example 1: Unimodal Gaussian Target
We consider a toy example to demonstrate fundamental differences between existing AIS schemes that adapt the covariance matrix and the novel strategy. In particular, the target of interest is a 10-dimensional unimodal Gaussian distribution with a fixed mean and a non-diagonal covariance matrix randomly generated from a Wishart distribution. We use a single proposal distribution for each method, initializing the mean far from the mean of the target distribution (a poor initialization) and the covariance matrix as a multiple of the identity matrix $\mathbf{I}$. We draw a fixed number of samples at each of the iterations. For the novel covariance adaptation strategy, we fix the ESS threshold to a value greater than the dimension $d_x = 10$. The results are averaged over independent Monte Carlo runs.
Results for this example are shown in Figure 1. We tested four different schemes that continuously adapt the covariance matrix after each iteration of the algorithm, including the basic AIS algorithm from Table I, with proposal adaptation using eqs. (4) and (7). In the left-hand plot of Figure 1, we can see that the basic AIS algorithm adapts to a peaky proposal distribution, resulting in limited diversity in the generation of samples due to weight degeneracy. The N-PMC algorithm manages to keep the adaptation of the covariance matrix more robust. However, due to the effect of the transformed weights on the adaptation of the location parameter, the proposal distribution remains far from the target. The novel covariance adaptation strategies adapt the best, with the strategy utilizing the weight tempering function performing best of all.
The center plot shows the sorted eigenvalues of the adapted covariance matrix and compares them to those of the target distribution. The basic AIS method results in a covariance matrix whose eigenvalues are practically zero. The N-PMC method manages to keep the eigenvalues nonzero, but fails to adapt to the covariance matrix of the target. The novel strategy works best with either choice of weight transformation function.
The right-hand plot shows the evolution of the Kullback-Leibler (KL) divergence between the target and the proposal as a function of the iteration of the algorithm. This plot emphasizes the advantage of utilizing the weight tempering function over the weight clipping function: with the weight tempering function, the KL divergence reaches its minimum after only a few iterations, whereas the weight clipping function takes considerably longer to reach this value.
IV-B Example 2: High-Dimensional Multimodal Target
We now consider a multimodal target distribution given by a mixture of Gaussians,
$$\tilde{\pi}(\mathbf{x}) = \frac{1}{K} \sum_{k=1}^{K} \mathcal{N}(\mathbf{x}; \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k).$$
The covariance matrices $\boldsymbol{\Sigma}_k$ are non-diagonal and were randomly generated from the Wishart distribution. Our objective is to estimate the mean of the target, which is the average of the component means, $\frac{1}{K}\sum_{k=1}^{K} \boldsymbol{\mu}_k$. The prior mean of each proposal distribution is drawn uniformly from a hypercube; this is considered a good initialization since all modes of the target distribution are contained in this hypercube. We assume a prior covariance matrix for each proposal that is a multiple of the identity matrix $\mathbf{I}$. We test under a constant total budget of samples, drawing samples from each of the mixands at each iteration. The results are averaged over independent Monte Carlo simulations and summarized in Table II. The novel scheme (CAIS) is compared to three state-of-the-art AIS algorithms: N-PMC, DM-PMC, and adaptive population importance sampling (APIS). The results indicate that the novel scheme outperforms all three schemes under their best parameter configurations.
V Conclusion
In this work, we addressed the problem of proposal parameter adaptation in adaptive importance sampling methods for high-dimensional scenarios. The proposed methods adapt location (mean) parameters using the standard importance weights, while the adaptation of scale (covariance matrix) parameters is conditioned on the effective sample size and utilizes transformed importance weights. This robust adaptation improves the performance of AIS and also allows for better reconstruction of the target distribution as a mixture of kernels with different adapted covariances. Simulation results show excellent performance of the novel strategies compared to other state-of-the-art algorithms. Furthermore, the novel adaptation methods have the potential to extend to other adaptive importance sampling methods, improving their performance for higher-dimensional systems.
- [1] C. P. Robert and G. Casella, Monte Carlo Statistical Methods, Springer, 2004.
- [2] E. Veach and L. Guibas, “Optimally combining sampling techniques for Monte Carlo rendering,” in Proceedings of the 22nd Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 419–428, 1995.
- [3] A. Owen and Y. Zhou, “Safe and effective importance sampling,” Journal of the American Statistical Association, vol. 95, no. 449, pp. 135–143, 2000.
- [4] V. Elvira, L. Martino, D. Luengo, and M. F. Bugallo, “Generalized multiple importance sampling,” arXiv preprint arXiv:1511.03095, 2015.
- [5] V. Elvira, L. Martino, D. Luengo, and M. F. Bugallo, “Efficient multiple importance sampling estimators,” IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1757–1761, 2015.
- [6] V. Elvira, L. Martino, D. Luengo, and M. F. Bugallo, “Heretical multiple importance sampling,” IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1474–1478, 2016.
- [7] M. Oh and J. O. Berger, “Adaptive importance sampling in Monte Carlo integration,” Journal of Statistical Computation and Simulation, vol. 41, no. 3–4, pp. 143–168, 1992.
- [8] S. K. Au and J. L. Beck, “A new adaptive importance sampling scheme for reliability calculations,” Structural Safety, vol. 21, no. 2, pp. 135–158, 1999.
- [9] O. Cappé, A. Guillin, J. M. Marin, and C. P. Robert, “Population Monte Carlo,” Journal of Computational and Graphical Statistics, vol. 13, no. 4, pp. 907–929, 2004.
- [10] T. Bengtsson, P. Bickel, and B. Li, “Curse-of-dimensionality revisited: Collapse of the particle filter in very large scale systems,” in Probability and Statistics: Essays in Honor of David A. Freedman, vol. 2, pp. 316–334, 2008.
- [11] V. Elvira, L. Martino, D. Luengo, and M. F. Bugallo, “Improving population Monte Carlo: Alternative weighting and resampling schemes,” Signal Processing, vol. 131, pp. 77–91, 2017.
- [12] E. L. Ionides, “Truncated importance sampling,” Journal of Computational and Graphical Statistics, vol. 17, no. 2, pp. 295–311, 2008.
- [13] E. Koblents and J. Míguez, “A population Monte Carlo scheme with transformed weights and its application to stochastic kinetic models,” Statistics and Computing, vol. 25, no. 2, pp. 407–425, 2015.
- [14] A. Vehtari and A. Gelman, “Pareto smoothed importance sampling,” arXiv preprint arXiv:1507.02646, 2015.
- [15] O. Cappé, R. Douc, A. Guillin, J. M. Marin, and C. P. Robert, “Adaptive importance sampling in general mixture classes,” Statistics and Computing, vol. 18, no. 4, pp. 447–459, 2008.
- [16] J.-M. Cornuet, J.-M. Marin, A. Mira, and C. P. Robert, “Adaptive multiple importance sampling,” Scandinavian Journal of Statistics, vol. 39, no. 4, pp. 798–812, 2012.
- [17] L. Martino, V. Elvira, D. Luengo, and J. Corander, “An adaptive population importance sampler: Learning from uncertainty,” IEEE Transactions on Signal Processing, vol. 63, no. 16, pp. 4422–4437, 2015.
- [18] O. Ledoit and M. Wolf, “Honey, I shrunk the sample covariance matrix,” The Journal of Portfolio Management, vol. 30, no. 4, pp. 110–119, 2004.
- [19] A. Kong, “A note on importance sampling using standardized weights,” University of Chicago, Dept. of Statistics, Tech. Rep. 348, 1992.
- [20] L. Martino, V. Elvira, and F. Louzada, “Effective sample size for importance sampling based on discrepancy measures,” Signal Processing, vol. 131, pp. 386–401, 2017.
- [21] L. Martino, V. Elvira, and G. Camps-Valls, “Group importance sampling for particle filtering and MCMC,” arXiv preprint arXiv:1704.02771, 2017.
- [22] C.-W. Hsu, C.-C. Chang, and C.-J. Lin, “A practical guide to support vector classification,” Technical report, Department of Computer Science, National Taiwan University, 2003.