 # PAWL-Forced Simulated Tempering

In this short note, we show how the parallel adaptive Wang-Landau (PAWL) algorithm of Bornn et al. (2013) can be used to automate and improve simulated tempering algorithms. While Wang-Landau and other stochastic approximation methods have frequently been applied within the simulated tempering framework, this note demonstrates through a simple example the additional improvements brought about by parallelization, adaptive proposals and automated bin splitting.


## 1 A Parallel Adaptive Wang-Landau Algorithm

The central idea underlying Wang-Landau (Wang and Landau, 2001) and related algorithms is that instead of generating samples from a target density $\pi$, it is sometimes more efficient to sample a strategically biased density $\tilde{\pi}$. In the case of Wang-Landau, with the state space partitioned into $d$ sets $\mathcal{X}_1, \dots, \mathcal{X}_d$, the goal is to sample

$$\tilde{\pi}(x) = \pi(x) \times \frac{1}{d} \sum_{i=1}^{d} \frac{\mathbb{I}_{\mathcal{X}_i}(x)}{\int_{\mathcal{X}_i} \pi(y)\,dy} \qquad (1)$$

where $\mathbb{I}_{\mathcal{X}_i}(x)$ equals $1$ if $x \in \mathcal{X}_i$ and $0$ otherwise. Interestingly, this biased target ensures each of the $d$ partitions of the space is visited equally: $\tilde{\pi}(\mathcal{X}_i) = 1/d$. Additionally, the restriction of the modified distribution to each set coincides with the restriction of the target distribution to this set up to a multiplicative constant; namely, for all $i$, $\tilde{\pi}(x) \propto \pi(x)$ for $x \in \mathcal{X}_i$.
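To make Eq. (1) concrete, the following sketch approximates the bin masses numerically and checks that the biased target assigns mass $1/d$ to each set. The bimodal target and the four-bin partition are illustrative choices of ours, not settings from the note:

```python
import math

# unnormalized bimodal target: equal mixture of N(-3, 1) and N(3, 1) (illustrative)
def pi(x):
    return math.exp(-0.5 * (x + 3) ** 2) + math.exp(-0.5 * (x - 3) ** 2)

d = 4                                            # number of partition sets
edges = [-10 + 20 * i / d for i in range(d + 1)]

def bin_index(x):
    i = int((x - edges[0]) / (20 / d))
    return min(max(i, 0), d - 1)

# bin masses Z_i = integral of pi over X_i, via a fine Riemann sum
K = 20000
dx = 20 / K
grid = [-10 + (k + 0.5) * dx for k in range(K)]
Z = [0.0] * d
for x in grid:
    Z[bin_index(x)] += pi(x) * dx

# biased target of Eq. (1), up to normalization: pi(x) / (d * Z_{i(x)})
def pi_tilde(x):
    return pi(x) / (d * Z[bin_index(x)])

# each set carries equal mass 1/d under the biased target
mass = [0.0] * d
for x in grid:
    mass[bin_index(x)] += pi_tilde(x) * dx
props = [m / sum(mass) for m in mass]
print([round(p, 3) for p in props])  # → [0.25, 0.25, 0.25, 0.25]
```

The equal-mass property holds by construction: the weight of bin $i$ under $\tilde{\pi}$ is $Z_i / (d\,Z_i) = 1/d$ regardless of how unevenly $\pi$ spreads over the partition.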

While the biased density $\tilde{\pi}$ has desirable properties, an obvious problem is that calculating the bin masses $\int_{\mathcal{X}_i} \pi(y)\,dy$ is not straightforward. As such, the Wang-Landau algorithm creates estimates $\theta_t(i)$ of these quantities at each step $t$. Algorithm 1 provides pseudo-code for the algorithm.

In the full version of the algorithm, the step size is only reduced when all of the regions have been uniformly explored, as measured by the flat histogram criterion $\max_i |\nu_t(i) - 1/d| < c/d$, where $\nu_t(i)$ is the proportion of samples within $\mathcal{X}_i$ since the last time the flat histogram criterion was met, and $c$ is a user-specified threshold. The reader is referred to Bornn et al. (2013) for a full description and discussion of the algorithm, as well as details on stabilizing the algorithm through parallelization, introducing adaptive proposals, and automating the partitioning of the space. These three improvements, applied to simulated tempering, will be the focus of this work.
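As a rough illustration of the (single-chain) algorithm, here is a minimal Python sketch of a Wang-Landau run with the flat histogram criterion. The target, partition, and tuning values are hypothetical stand-ins, not the settings used in the note:

```python
import math
import random

random.seed(1)

# unnormalized bimodal target (illustrative), worked in log space for stability
def log_pi(x):
    a, b = -0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

d = 4
lo, hi = -8.0, 8.0

def J(x):  # index of the partition set containing x (clamped at the edges)
    i = int((x - lo) / ((hi - lo) / d))
    return min(max(i, 0), d - 1)

log_theta = [0.0] * d   # log-estimates of the bin masses
counts = [0] * d        # occupations since the last flat histogram
totals = [0] * d        # occupations over the whole run
log_gamma = 1.0         # step size, halved at each flat histogram
c = 0.5                 # user-specified flatness threshold
x = 0.0

for t in range(100000):
    # Metropolis step targeting the biased density pi(x) / theta_{J(x)}
    y = x + random.gauss(0.0, 2.0)
    log_a = (log_pi(y) - log_theta[J(y)]) - (log_pi(x) - log_theta[J(x)])
    if random.random() < math.exp(min(0.0, log_a)):
        x = y
    # Wang-Landau update: up-weight the estimate of the occupied bin
    log_theta[J(x)] += log_gamma
    counts[J(x)] += 1
    totals[J(x)] += 1
    # flat histogram criterion: max_i |nu_t(i) - 1/d| < c/d
    n = sum(counts)
    if max(abs(cnt / n - 1 / d) for cnt in counts) < c / d:
        log_gamma /= 2.0
        counts = [0] * d

props = [cnt / sum(totals) for cnt in totals]
print([round(p, 2) for p in props])  # roughly uniform over the d bins
```

Note how the step size is reduced only when the occupancy histogram flattens, rather than on a fixed schedule; this is the distinction between Wang-Landau and deterministic stochastic approximation explored in Section 2.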

## 2 Simulated Tempering

The use of stochastic approximation algorithms, including Wang-Landau, within simulated tempering has been suggested by various authors (see, for example, Geyer and Thompson, 1995, and Atchade and Liu, 2010). In this note, we further examine the improvements proposed in Bornn et al. (2013), namely parallelization, adaptive proposals, and automatic partitioning of the space. The primary idea of simulated tempering is to sample from a tempered distribution $\pi(x)^{1/T}$ for some temperature $T \geq 1$. The algorithm proceeds by setting a temperature ladder $T_1 < T_2 < \dots < T_d$ and running a Markov chain on the pair $(x, T)$. As such, the chain explores the state space while moving up and down the temperature ladder. Readers are referred to Marinari and Parisi (1992) and Geyer and Thompson (1995) for further details. Of note for our purposes, however, is that one is able to specify pseudo-priors on the different steps of the ladder to ensure equal occupation numbers – time spent in each step of the ladder – which is a task well-suited for stochastic approximation.
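A bare-bones simulated tempering chain on the pair $(x, T)$ can be sketched as follows. The bimodal target, four-rung ladder, and uniform pseudo-priors are illustrative assumptions of ours, not the note's exact configuration:

```python
import math
import random

random.seed(0)

# unnormalized bimodal target (illustrative): mixture of N(-5, 1) and N(5, 1)
def log_pi(x):
    a, b = -0.5 * (x + 5) ** 2, -0.5 * (x - 5) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

temps = [1.0, 2.0, 4.0, 8.0]   # temperature ladder (hypothetical)
log_psi = [0.0] * len(temps)   # pseudo-priors; stochastic approximation would adapt these online
x, k = 0.0, 0
cold_samples = []

for t in range(100000):
    # 1. MH move in x at the current temperature: target pi(x)^(1/T_k)
    y = x + random.gauss(0.0, 2.0)
    log_a = (log_pi(y) - log_pi(x)) / temps[k]
    if random.random() < math.exp(min(0.0, log_a)):
        x = y
    # 2. MH move on the ladder: propose a neighbouring temperature
    j = k + random.choice([-1, 1])
    if 0 <= j < len(temps):
        log_a = log_pi(x) / temps[j] - log_pi(x) / temps[k] + log_psi[j] - log_psi[k]
        if random.random() < math.exp(min(0.0, log_a)):
            k = j
    # estimates use only the states at the coldest temperature
    if k == 0:
        cold_samples.append(x)

mean_est = sum(cold_samples) / len(cold_samples)
print(round(mean_est, 2))  # close to the true mean of 0 for this symmetric mixture
```

The hot rungs flatten the valley between the modes, so the chain switches modes at high $T$ and carries that mixing back down to $T = 1$, where the samples are collected.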

To test these (potential) improvements to simulated tempering, we employ a small bimodal density. Specifically, we set $\pi$ to be an equally-weighted mixture of two standard normal distributions centered at two well-separated points. As such, the distribution has two modes with a large low-density valley separating them. As a result, estimating the mean is a natural challenge for any sampler. We run repeated chains of varying length, and calculate the root mean squared error (RMSE) between the posterior mean (calculated from all states at the coldest temperature, $T = 1$) and the true mean.

We compare standard simulated tempering using Metropolis–Hastings with uniform pseudo-priors (using a Gaussian random walk with standard deviation 10 and a fixed temperature ladder) to the version using stochastic approximation, adjusted such that the pseudo-priors ensure equal occupation numbers; see Atchade and Liu (2010) for details. We use standard stochastic approximation with step sizes held fixed for an initial $t_0$ iterations and decreasing thereafter, for several values of $t_0$. We also explore Wang-Landau, which automatically decreases the step size after a flat histogram criterion is met, for several values of the user-specified tuning parameter $c$. Figure 1 displays the RMSE as a function of chain length for each algorithm. We see that all of the stochastic approximation algorithms (including Wang-Landau) perform similarly in this simple example. It has been argued, however, that in more complex situations Wang-Landau will outperform stochastic approximation with deterministically decreasing step size (Atchade and Liu, 2010).

Figure 1: RMSE for estimating the mean in the bimodal density for various simulated tempering configurations. We see that Wang-Landau (provided c is small) and stochastic approximation with deterministic step size decreases (provided t0 is large) both perform well.

In Figure 2 we similarly compare the simple Metropolis–Hastings simulated tempering algorithm to the Wang-Landau version, with and without adapting the proposal standard deviation (set to target a fixed acceptance ratio); see Bornn et al. (2013) for specifics.

Figure 2: RMSE for estimating the mean in the bimodal density for various simulated tempering configurations with and without adaptive proposals.
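One standard way to adapt a proposal standard deviation toward a target acceptance ratio is a Robbins–Monro update on its logarithm. The sketch below uses a standard normal target and a target rate of 0.44, both illustrative choices of ours; see Bornn et al. (2013) for the scheme actually used:

```python
import math
import random

random.seed(2)

def log_pi(x):
    return -0.5 * x * x  # standard normal target (illustrative)

target_rate = 0.44       # common 1-d acceptance target (assumed)
log_sd = math.log(10.0)  # deliberately poor initial scale
x = 0.0
accepted = 0
n = 20000

for t in range(1, n + 1):
    y = x + random.gauss(0.0, math.exp(log_sd))
    acc = math.exp(min(0.0, log_pi(y) - log_pi(x)))
    if random.random() < acc:
        x = y
        accepted += 1
    # Robbins-Monro: nudge the log scale so the acceptance probability
    # drifts toward the target rate, with a decaying gain
    log_sd += (acc - target_rate) / t ** 0.6

print(round(math.exp(log_sd), 2), round(accepted / n, 2))  # adapted scale, realized rate
```

Adapting on the log scale keeps the proposal standard deviation positive, and the decaying gain makes the adaptation diminish, which is the usual route to preserving ergodicity of the adaptive chain.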

It is clear that adaptation in the proposal mechanism provides significant gains to both the standard simulated tempering algorithm and the Wang-Landau version. Further improvements might be made by considering mixture proposals tailored to each step on the temperature ladder, rather than optimized to achieve a given acceptance rate across all temperatures. Figure 2 also displays the adaptive Wang-Landau algorithm run in parallel with multiple interacting particles, demonstrating vastly improved convergence of the algorithm. With $N$ particles, the improvement in RMSE is roughly what one would obtain by running a single chain for $N$ times as many iterations. However, due to vectorization, the parallel version does not take $N$ times as long to run; in our examples, the parallel versions took far less than $N$ times longer than the single chain.
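The interaction in the parallel version amounts to all particles sharing, and jointly updating, the same bin-weight estimates. A pure-Python sketch follows (the inner particle loop is what would be vectorized in practice; the target and tuning values are again illustrative, with a simple deterministic decay standing in for the flat-histogram schedule):

```python
import math
import random

random.seed(3)

def log_pi(x):
    a, b = -0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

d, N = 4, 32                  # bins and interacting particles

def J(x):
    i = int((x + 8) / (16 / d))
    return min(max(i, 0), d - 1)

log_theta = [0.0] * d         # bin-weight estimates SHARED by all particles
xs = [0.0] * N
log_gamma = 0.1

for t in range(2000):
    for n in range(N):        # in PAWL this loop is vectorized
        y = xs[n] + random.gauss(0.0, 2.0)
        log_a = (log_pi(y) - log_theta[J(y)]) - (log_pi(xs[n]) - log_theta[J(xs[n])])
        if random.random() < math.exp(min(0.0, log_a)):
            xs[n] = y
        # every particle contributes to the same estimate: the interaction
        log_theta[J(xs[n])] += log_gamma
    log_gamma *= 0.999        # stand-in for the flat-histogram schedule

occupancy = [0] * d
for xi in xs:
    occupancy[J(xi)] += 1
print(occupancy)  # particles spread across the bins of the flattened target
```

Because every particle updates the shared estimates, information discovered by any one chain (e.g. locating the second mode) immediately reshapes the biased target seen by all the others, which is the source of the improved convergence seen in Figure 2.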

We also explored automatic setting of the temperature ladder using the bin-splitting method proposed in Bornn et al. (2013) (not shown). However, in this small example the advanced binning method performed similarly to simply fixing the temperature ladder to consecutive integers. We suspect that in more complicated settings, where the results are more sensitive to the temperature ladder, the automatic binning approach will bring additional benefit.

## 3 Conclusion

This brief note has employed a simple bimodal example to demonstrate the benefits of embedding adaptive proposals, parallelization, and automatic bin splitting within the simulated tempering framework. Due to space limitations, many pertinent references and ideas have been excluded, though the interested reader might follow the citation trail to further explore these algorithms. If there is a single takeaway, it is that sometimes “stacking” multiple computational techniques can lead to significant improvements in performance. In this case, parallelization and adaptive proposals provide significant improvements to simulated tempering with the Wang-Landau algorithm; additionally, they are straightforward to implement through the R package PAWL, available online.

Ongoing work involves applying these simulated tempering methods to learn latent dimensions in nonstationary spatial models (Bornn et al., 2012), which due to partial identifiability of the parameter space show particular promise for benefiting from the ideas presented herein. Specifically, as this class of models is new and as-yet poorly understood, it is unclear a priori how to determine the scale of the proposal distribution or how to set the temperature ladder.

## Bibliography

• (1) Atchade, Y., Liu, J. The Wang-Landau algorithm for Monte Carlo computation in general state spaces. Statistica Sinica, 20, 209–233 (2010)
• (2) Bornn, L., Shaddick, G., Zidek, J. Modeling nonstationary processes through dimension expansion. Journal of the American Statistical Association, 107(497), 281–289 (2012)
• (3) Bornn, L., Jacob, P.E., Del Moral, P., Doucet, A. An adaptive interacting Wang-Landau algorithm for automatic density exploration. Journal of Computational and Graphical Statistics, to appear (2013)
• (4) Geyer, C., Thompson, E. Annealing Markov chain Monte Carlo with applications to ancestral inference. Journal of the American Statistical Association, 90(431), 909–920 (1995)
• (5) Marinari, E., Parisi, G. Simulated tempering: a new Monte Carlo scheme. Europhysics Letters, 19(6), 451 (1992)
• (6) Wang, F., Landau, D.P. Efficient, multiple-range random walk algorithm to calculate the density of states. Physical Review Letters, 86(10), 2050–2053 (2001)