1 Introduction
Restricted Boltzmann Machines (RBMs) [Freund+Haussler94, Welling05, Hinton06] have become a model of choice for learning unsupervised features for use in deep feed-forward architectures [Hinton06, Bengio2009], as well as for modeling complex, high-dimensional distributions [Welling05, TaylorHintonICML2009, Larochelle+al2010]. Their success can be explained in part by the bipartite structure of their graphical model: units are grouped into a visible layer and a hidden layer, with no connections within a layer. The use of latent variables affords RBMs a rich modeling capacity, while the conditional independence property yields a trivial inference procedure.
RBMs are parametrized by an energy function $E(v,h)$, which is converted to a probability through the Boltzmann distribution, after marginalizing out the hidden units. The probability of a given configuration $v$ is thus given by $p(v) = \frac{1}{Z} \sum_h e^{-E(v,h)}$, where $Z$ is the partition function defined as $Z = \sum_{v,h} e^{-E(v,h)}$.

Despite their popularity, direct learning of these models through maximum likelihood remains problematic. The maximum likelihood gradient with respect to the parameters $\theta$ of the model is:
$$\frac{\partial \log p(v)}{\partial \theta} = -\,\mathbb{E}_{p(h|v)}\!\left[\frac{\partial E(v,h)}{\partial \theta}\right] + \mathbb{E}_{p(v,h)}\!\left[\frac{\partial E(v,h)}{\partial \theta}\right] \qquad (1)$$
The first term is trivial to calculate and is referred to as the positive phase, as it raises the probability of the training data. The second term, or negative phase, is intractable in most applications of interest, as it involves an expectation over the model distribution $p(v,h)$. Many learning algorithms have been proposed in the literature to address this issue:

Contrastive Divergence (CD) [Hinton99, Hinton2002] replaces the expectation with a finite set of negative samples, which are obtained by running a short Markov chain initialized at positive training examples. This yields a biased but low-variance gradient estimate, which has been shown to work well as a feature extractor for deep networks such as the Deep Belief Network [Hinton06].
Stochastic Maximum Likelihood (SML), or Persistent Contrastive Divergence (PCD) [Younes98onthe, Tieleman08], on the other hand relies on a persistent Markov chain to sample the negative particles. The chain is run for a small number of steps between consecutive model updates, under the assumption that the Markov chain will stay close to its equilibrium distribution as the parameters evolve. Learning actually encourages this process, in what is called the "fast-weight effect" [TielemanT2009].

Ratio Matching and Score Matching [Hyvarinen2005, Hyvarinen2007] avoid the issue of the partition function altogether by replacing maximum likelihood with another learning principle, based on matching the change in likelihood to that implied by the empirical distribution.
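To make the SML/PCD recipe concrete, here is a minimal sketch of a binary RBM trained with persistent negative chains. The class name, layer sizes and learning rate are our own illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyRBM:
    """Minimal binary RBM with an SML/PCD-style update (illustrative sketch)."""

    def __init__(self, n_vis, n_hid, n_chains=10):
        self.W = 0.01 * rng.standard_normal((n_hid, n_vis))
        self.b = np.zeros(n_hid)   # hidden biases
        self.c = np.zeros(n_vis)   # visible biases
        # persistent negative-phase chains, kept across updates
        self.v_neg = rng.integers(0, 2, size=(n_chains, n_vis)).astype(float)

    def gibbs_step(self, v):
        # alternate h ~ p(h|v), then v ~ p(v|h)
        h = (sigmoid(v @ self.W.T + self.b) > rng.random((v.shape[0], self.b.size))).astype(float)
        v = (sigmoid(h @ self.W + self.c) > rng.random((h.shape[0], self.c.size))).astype(float)
        return v

    def sml_update(self, v_pos, lr=0.01, k=1):
        # positive phase: E_{p(h|v)}[.] computed analytically from the data
        h_pos = sigmoid(v_pos @ self.W.T + self.b)
        # negative phase: advance the persistent chains by k Gibbs steps
        for _ in range(k):
            self.v_neg = self.gibbs_step(self.v_neg)
        h_neg = sigmoid(self.v_neg @ self.W.T + self.b)
        # stochastic gradient step on the difference of the two phases
        self.W += lr * (h_pos.T @ v_pos / len(v_pos) - h_neg.T @ self.v_neg / len(self.v_neg))
        self.b += lr * (h_pos.mean(0) - h_neg.mean(0))
        self.c += lr * (v_pos.mean(0) - self.v_neg.mean(0))
```

Note that the chains in `v_neg` are never re-initialized from the data: that persistence is precisely what distinguishes SML/PCD from CD.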
[Marlin10InductivePrinciples] recently compared these algorithms on a variety of tasks and found SML to be the most attractive method when taking computational complexity into account. Unfortunately, these results fail to address the main shortcomings of SML. First, it relies on Gibbs sampling to extract negative samples: a poor choice when sampling from multimodal distributions. Second, to guarantee convergence, the learning rate must be annealed throughout learning in order to offset the loss of ergodicity incurred by the Markov chain due to parameter updates [Younes98onthe, Desjardins+al2010]. (We use the term "ergodicity" rather loosely, to reflect the amount of time required for the states sampled by the Markov chain to reflect the true expectation we wish to measure.) Using tempering in the negative phase of SML [Desjardins+al2010, Cho10IJCNN, Salakhutdinov2010, SalakhutdinovICML2010] appears to address these issues to some extent. By performing a random walk in the joint (configuration, temperature) space, negative particles can escape regions of high probability and travel between disconnected modes. Also, since high-temperature chains are inherently more ergodic, the sampler as a whole exhibits better mixing and results in better convergence properties than traditional SML.
Tempering is still no panacea, however. Great care must be taken in selecting the set of temperatures over which to run the simulation. Having too few or incorrectly spaced chains can result in high rejection ratios (tempered transitions), low return rates (simulated tempering) or low swap rates between neighboring chains (parallel tempering), all of which undermine the usefulness of the method. In this work, we show that the choice of temperatures can be automated for parallel tempering, both in terms of optimal temperature spacing and the number of chains to simulate. Our algorithm relies heavily on the work of [Katzgraber06], who were the first to show that optimal temperature spacing can be obtained by minimizing the average return time of particles under simulation.
The paper is organized as follows. We start with a brief review of SML, then explore the details of SML with Parallel Tempering (SML-PT) as described in [Desjardins+al2010]. Following this, we show how the algorithm of Katzgraber et al. can be adapted to the online gradient setting for use with SML-PT, and how chains can be created dynamically so as to maintain a given level of ergodicity throughout training. We then present results on a complex synthetic dataset.
2 SML with Optimized Parallel Tempering
2.1 Parallel Tempered SML (SML-PT)
We start with a very brief review of SML, which will serve mostly to anchor our notation. For details on the actual algorithm, we refer the interested reader to [TielemanT2009, Marlin10InductivePrinciples]. RBMs are parametrized by $\theta = \{W, b, c\}$, where $b_j$ is the $j$-th hidden bias, $c_k$ the $k$-th visible bias, and $W_{jk}$ is the weight connecting hidden unit $h_j$ to visible unit $v_k$. They belong to the family of log-linear models, whose energy function is given by $E(x) = -\sum_i \theta_i f_i(x)$, where the $f_i$ are functions associated with each parameter $\theta_i$. In the case of RBMs, $x = (v,h)$ and $f = \{h_j v_k,\, h_j,\, v_k\,;\ \forall j,k\}$. For this family of models, the gradient of Equation 1 simplifies to:
$$\frac{\partial \log p(v)}{\partial \theta_i} = \mathbb{E}_{p(h|v)}\!\left[f_i(v,h)\right] - \mathbb{E}_{p(v,h)}\!\left[f_i(v,h)\right] \qquad (2)$$
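For a tiny RBM, both expectations in Equation 2 can be checked exactly by brute-force enumeration of all $(v,h)$ configurations — a useful sanity check for any sampler-based estimate. The function name and signature are our own:

```python
import itertools
import numpy as np

def exact_model_stats(W, b, c):
    """Enumerate every (v, h) configuration of a tiny binary RBM with
    energy E(v, h) = -(h^T W v + b^T h + c^T v), and return the exact
    model expectation E_{p(v,h)}[h v^T] along with the partition function Z."""
    n_hid, n_vis = W.shape
    Z = 0.0
    stat = np.zeros_like(W)
    for v in itertools.product([0.0, 1.0], repeat=n_vis):
        v = np.array(v)
        for h in itertools.product([0.0, 1.0], repeat=n_hid):
            h = np.array(h)
            unnorm = np.exp(h @ W @ v + b @ h + c @ v)  # e^{-E(v,h)}
            Z += unnorm
            stat += unnorm * np.outer(h, v)  # accumulate f_i = h_j v_k
    return stat / Z, Z
```

Enumeration costs $O(2^{n_{vis}+n_{hid}})$, which is exactly why the negative phase needs a sampler in any realistic setting.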
As was mentioned previously, SML approximates the gradient by drawing negative phase samples (i.e. to estimate the second expectation) from a persistent Markov chain, which attempts to track changes in the model. If we denote the state of this chain at timestep
as and the ith training example as , then the stochastic gradient update follows , where , and is obtained after steps of alternating Gibbs starting from state and .Training an RBM using SMLPT maintains the positive phase as is. During the negative phase however, we create and sample from an an extended set of persistent chains, . Here each represents a smoothed version of the distribution we wish to sample from, with the inverse temperature controlling the degree of smoothing. Distributions with small values are easier to sample from as they exhibit greater ergodicity.
After performing $k$ Gibbs steps for each of the $M$ intermediate distributions, cross-temperature state swaps are proposed between neighboring chains using a Metropolis-Hastings-based swap acceptance criterion. If we denote by $x_i$ the joint state (visible and hidden) of the $i$-th chain, the swap acceptance ratio $r_i$ for swapping chains $(i, i+1)$ is given by:
$$r_i = \frac{p_i(x_{i+1})\, p_{i+1}(x_i)}{p_i(x_i)\, p_{i+1}(x_{i+1})} \qquad (3)$$
Although one might reduce variance by using free energies to compute swap ratios, we prefer using energies, as the above factorizes nicely into the following expression:
$$r_i = \exp\!\left((\beta_i - \beta_{i+1}) \cdot (E(x_i) - E(x_{i+1}))\right) \qquad (4)$$
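Equation 4 is cheap to evaluate; a minimal sketch of the resulting Metropolis swap probability (the function name is ours):

```python
import numpy as np

def swap_prob(beta_i, beta_j, E_i, E_j):
    """Metropolis acceptance probability for exchanging the states of
    neighboring tempered chains i and j (Equation 4):
        r = exp((beta_i - beta_j) * (E_i - E_j)),
    clipped to 1 as usual for an acceptance probability."""
    return min(1.0, np.exp((beta_i - beta_j) * (E_i - E_j)))
```

Note the sign structure: when the colder chain (larger $\beta$) holds the higher-energy state, the exponent is positive and the swap is always accepted, which is exactly how particles escape downhill into hotter chains.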
While many swapping schedules are possible, we use the Deterministic Even-Odd (DEO) algorithm [Lingenheil200980], described below.
2.2 Return Time and Optimal Temperatures
Conventional wisdom for choosing the optimal set $\{\beta_i\}$ has relied on the "flat histogram" method, which selects the parameters such that the pairwise swap ratio is constant and independent of the index $i$. Under certain conditions (such as when sampling from multivariate Gaussian distributions), this can lead to a geometric spacing of the temperature parameters [Neal94b]. [Behrens2010] has recently shown that geometric spacing is actually optimal for a wider family of distributions, characterized by a particular parametric relationship between the expected energy and the inverse temperature.

Since this is clearly not the case for RBMs, we turn to the work of [Katzgraber06], who propose a novel measure for optimizing $\{\beta_i\}$. Their algorithm directly maximizes the ergodicity of the sampler by minimizing the time taken for a particle to perform a round trip between $\beta_1$ and $\beta_M$. This is defined as the average "return time" $\tau$. The benefit of their method is striking: temperatures automatically pool around phase transitions, causing spikes in local exchange rates and maximizing the "flow" of particles in temperature space.
The algorithm works as follows. For each of a fixed number of sampling updates:

assign a label to each particle: those swapped into $\beta_1$ are labeled as "up" particles. Similarly, any "up" particle swapped into $\beta_M$ becomes a "down" particle.

after each swap proposal, update the histograms $n_u(\beta_i)$ and $n_d(\beta_i)$, counting the number of "up" and "down" particles for the Markov chain associated with $\beta_i$.

define $f(\beta_i) = \frac{n_u(\beta_i)}{n_u(\beta_i) + n_d(\beta_i)}$, the fraction of "up"-moving particles at $\beta_i$. By construction, notice that $f(\beta_1) = 1$ and $f(\beta_M) = 0$. $f$ thus defines a probability distribution of "up" particles in the range $[\beta_M, \beta_1]$.
The new inverse temperature parameters are chosen as the ordered set which assigns equal probability mass to each chain. This yields an $f$ curve which is linear in the chain index.
The above procedure is applied iteratively, each time increasing the number of sampling updates, so as to fine-tune the $\beta_i$'s. To monitor return time, we can simply maintain a counter $\tau^{(j)}$ for each particle $j$, which is (1) incremented at every sampling iteration and (2) reset to 0 whenever particle $j$ has label "down" and is swapped into $\beta_1$. A lower bound on the return time is then given by the average of these counters.
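The bookkeeping above can be sketched as follows: `FlowStats` accumulates the up/down histograms, and `reassign_betas` inverts the measured $f(\beta)$ curve at equally spaced targets so that each chain receives equal probability mass. Both helpers are our own illustrative constructions:

```python
import numpy as np

class FlowStats:
    """Katzgraber-style 'up'/'down' particle-flow histograms over M chains."""

    def __init__(self, M):
        self.n_up = np.zeros(M)
        self.n_down = np.zeros(M)

    def record(self, chain_idx, label):
        # called after each swap proposal, for the particle's current chain
        if label == "up":
            self.n_up[chain_idx] += 1
        else:
            self.n_down[chain_idx] += 1

    def f(self):
        """Fraction of 'up'-moving particles at each temperature."""
        tot = self.n_up + self.n_down
        return np.where(tot > 0, self.n_up / np.maximum(tot, 1), 0.0)

def reassign_betas(betas, f_vals):
    """Pick new inverse temperatures so the measured f(beta) becomes linear
    in the chain index. betas runs from beta_1 = 1 down to beta_M = 0, and
    f_vals is the (monotonically decreasing) measured f at each beta."""
    M = len(betas)
    targets = np.linspace(1.0, 0.0, M)  # desired f value at each index
    # invert the monotone f(beta) curve by linear interpolation
    # (np.interp needs increasing x-coordinates, hence the reversals)
    return np.interp(targets, f_vals[::-1], betas[::-1])
```

A real implementation would smooth `f_vals` before inversion; raw histogram ratios are noisy early in the simulation.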
2.3 Optimizing $\{\beta_i\}$ while Learning
2.3.1 Online Beta Adaptation
While the above algorithm exhibits the right properties, it is not well suited to the context of learning. When training an RBM, the distribution we are sampling from is continuously changing. As such, one would expect the optimal set $\{\beta_i\}$ to evolve over time. We also do not have the luxury of performing many sampling steps after each gradient update.
Our solution is simple: the histograms $n_u$ and $n_d$ are updated using an exponential moving average, whose time constant is on the order of the return time $\tau$. Using $\tau$ as the time constant is crucial, as it allows us to maintain flow statistics at the proper timescale. If an "up" particle reaches the $i$-th chain, we update $n_u(\beta_i)$ as follows:
$$n_u(\beta_i) \leftarrow n_u(\beta_i)\cdot\left(1 - 1/\hat{\tau}_t\right) + 1 \qquad (5)$$
where $\hat{\tau}_t$ is the estimated return time at time $t$.
Using the above, we can estimate the set of optimal inverse temperatures $\{\beta_i^*\}$. Beta values are updated by performing a step in the direction of the optimal value: $\beta_i \leftarrow \beta_i + \mu(\beta_i^* - \beta_i)$, where $\mu$ is a learning rate on $\beta$. The properties of the [Katzgraber06] estimate naturally enforce the ordering constraint on the $\beta_i$'s.
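A minimal sketch of the resulting online update, assuming the exponential-moving-average form described above (the precise form of Equation 5 is our reading, not a verbatim reproduction; names are ours):

```python
import numpy as np

def ema_histogram_update(n_hist, idx, tau_hat):
    """Decay the count for chain idx with time constant ~tau_hat, then
    record the newly arrived particle (our reading of the Eq. 5 update)."""
    n_hist[idx] = n_hist[idx] * (1.0 - 1.0 / tau_hat) + 1.0
    return n_hist

def step_betas(betas, betas_opt, mu):
    """Move each inverse temperature a fraction mu toward its estimated
    optimum: beta_i <- beta_i + mu * (beta_i* - beta_i)."""
    return betas + mu * (betas_opt - betas)
```

Because `step_betas` only interpolates toward an already-ordered target set, the updated betas stay ordered as long as the targets are.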
2.3.2 Choosing $\beta_M$ and $M$
Another important point is that [Katzgraber06] optimizes the set $\{\beta_i\}$ while keeping the bounds $\beta_1$ and $\beta_M$ fixed. While $\beta_1 = 1$ is a natural choice, we expect the optimal $\beta_M$ to vary during learning. For this reason, we err on the side of caution and use $\beta_M = 0$, relying on a chain-spawning process to maintain sufficiently high swap rates between neighboring chains. Spawning chains as required by the sampler should therefore result in increased stability, as well as computational savings.
[Lingenheil200980] performed an interesting study in which they compared the round-trip rate to the average swap rate measured across all chains. They found that the DEO algorithm, which alternates between proposing swaps for all even pairs of neighboring chains and then for all odd pairs, gave rise to a concave function of the average swap rate, with a broad maximum.
Our temperature adaptation therefore works in two phases:

The algorithm of Katzgraber et al. is used to optimize $\{\beta_i\}$, for a fixed $M$.

Periodically, a chain is spawned whenever the average swap rate falls below a given threshold, a hyperparameter of the algorithm.
Empirically, we have observed increased stability when the new chain is inserted at the position where the pairwise swap statistic is lowest. To avoid a long burn-in period, we initialize the new chain with the state of its neighboring chain and choose its inverse temperature as the mean of its neighbors' values. A small but fixed burn-in period allows the system to adapt to the new configuration.
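The spawning rule can be sketched as follows, assuming we monitor pairwise swap rates and insert the new chain at the weakest link. The threshold value and helper name are our own illustrative choices:

```python
import numpy as np

def maybe_spawn_chain(betas, swap_rates, min_rate=0.3):
    """If the lowest pairwise swap rate falls below min_rate (a
    hyperparameter), insert a new chain whose inverse temperature is the
    mean of the two offending neighbors; otherwise return betas unchanged.
    betas has M entries, swap_rates has M-1 (one per neighboring pair)."""
    k = int(np.argmin(swap_rates))      # index of the weakest link
    if swap_rates[k] >= min_rate:
        return betas                    # sampler is healthy, do nothing
    new_beta = 0.5 * (betas[k] + betas[k + 1])
    return np.concatenate([betas[:k + 1], [new_beta], betas[k + 1:]])
```

In a full implementation the new chain would also inherit the state of a neighboring chain and undergo a short burn-in, as described above.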
3 Results and Discussion
We evaluate our adaptive SML-PT algorithm (SML-APT) on a complex, synthetic dataset. This dataset is heavily inspired by the one used in [Desjardins+al2010] and was specifically crafted to push the limits of the algorithm.
It is an online dataset of binary images, where each example is sampled from a mixture model with probability density function $p(x) = \sum_i w_i\, m_i(x)$. Our dataset thus consists of a set of mixture components whose weights $w_i$ are sampled uniformly in the unit interval and normalized to sum to one. Each mixture component $m_i$ is itself a random binary image, whose pixels are independent random variables having a probability $p_i$ of being flipped. From the point of view of a sampler performing a random walk in image space, $p_i$ is inversely proportional to the difficulty of finding the mode in question. The complexity of our synthetic dataset comes from our particular choice of $w_i$ and $p_i$: large $w_i$ combined with small $p_i$ lead to modes which are difficult to sample and in which a Gibbs sampler would tend to get trapped, while large $p_i$ values tend to intercept "down"-moving particles and thus present a challenge for parallel tempering.

Figure 1(a) compares the results of training an RBM using standard SML, SML-PT with varying numbers of parallel chains, and our new SML-APT algorithm. We performed a fixed number of updates (each followed by a few steps of sampling) with mini-batches of size 5, and tested a range of learning rates for both $\theta$ and $\beta$. For each algorithm, we show the results for the best-performing hyperparameters, averaging over several runs. Results are plotted with respect to computation time, to show the relative computational cost of each algorithm.
(Figure 1: comparison of the algorithms in terms of likelihood. Error bars represent standard error on the mean.)
As we can see, standard SML fails to learn anything meaningful: the Gibbs sampler is unable to cope with the loss of ergodicity and the model diverges. SML-PT, on the other hand, performs much better. Using more parallel chains in SML-PT consistently yields a better likelihood score, as well as reduced variance; this seems to confirm that additional parallel chains increase the ergodicity of the sampler. Finally, SML-APT outperforms all other methods, and as we will see in Figure 2, it does so using fewer parallel chains. Unfortunately, its computational cost seems similar to that of 50 parallel chains; we hope to reduce this to the cost of a much smaller SML-PT ensemble in the near future. It is also interesting to note that while the variance of the other methods increases with training time, SML-APT seems immune to this issue.
We now compare the various metrics optimized by our adaptive algorithm. Figure 1(b) shows the average return time for each algorithm. We can see that SML-APT achieves a return time comparable to SML-PT with 10 chains, while achieving a better likelihood score than SML-PT with 50 chains.
We now select the best-performing seeds for SML-PT with 50 chains and for SML-APT, and show in Figure 2 the resulting $f$ curves obtained at the end of training.
The blue curve plots $f$ as a function of the beta index, while the red curve plots $f$ as a function of $\beta$. We can see that SML-APT results in a more or less linear $f$ curve in the chain index, which is not the case for SML-PT. In Figure 3(a), we can see the effect on the pairwise swap statistics. As reported in [Katzgraber06], optimizing $\{\beta_i\}$ to maintain a linear $f$ leads to temperatures pooling around the bottleneck. In comparison, SML-PT fails to capture this phenomenon regardless of whether it uses 10 or 50 parallel chains (Figures 3(b) and 3(c)).
Finally, Figure 4 shows the evolution of the inverse temperature parameters throughout learning. We can see that the position of the bottleneck in temperature space changes as learning progresses; as such, manually tuning the temperatures to achieve optimal return times would be hopeless.
4 Conclusion
We have introduced a new adaptive training algorithm for RBMs, which we call Stochastic Maximum Likelihood with Adaptive Parallel Tempering (SML-APT). It leverages the benefits of PT in the negative phase of SML, but adapts and spawns new temperature chains so as to minimize return time. The resulting negative-phase sampler thus exhibits greater ergodicity. Using a synthetic dataset, we have shown that this can directly translate into a better and more stable likelihood score. In the process, SML-APT also greatly reduces the number of hyperparameters to tune: temperature-set selection is not only automated, but optimized. The end user is left with very few dials: a standard learning rate on $\beta$ and a minimum average swap rate below which to spawn new chains.
Much work still remains. In terms of computational cost, we would like the cost of a model trained with SML-APT and resulting in $M$ chains to always be upper-bounded by that of SML-PT initialized with $M$ chains. Obviously, the above experiments should also be repeated with larger RBMs on natural datasets, such as MNIST or Caltech Silhouettes (http://people.cs.ubc.ca/~bmarlin/data/index.shtml).
Acknowledgments
The authors acknowledge the support of the following agencies for research funding and computing support: NSERC, Compute Canada and CIFAR. We would also like to thank the developers of Theano (http://deeplearning.net/software/theano/) for developing such a powerful tool for scientific computing, especially gradient-based learning.
References
[Behrens et al., 2010] Behrens, G., Friel, N., & Hurn, M. (2010). Tuning Tempered Transitions. ArXiv e-prints.

[Bengio, 2009] Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2, 1–127. Also published as a book, Now Publishers, 2009.

[Cho et al., 2010] Cho, K., Raiko, T., & Ilin, A. (2010). Parallel tempering is efficient for learning restricted Boltzmann machines. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2010). Barcelona, Spain.

[Desjardins et al., 2010] Desjardins, G., Courville, A., & Bengio, Y. (2010). Tempered Markov chain Monte Carlo for training of restricted Boltzmann machines. Proceedings of AISTATS 2010 (pp. 145–152).

[Freund & Haussler, 1994] Freund, Y., & Haussler, D. (1994). Unsupervised learning of distributions on binary vectors using two layer networks (Technical Report UCSC-CRL-94-25). University of California, Santa Cruz.

[Hinton, 1999] Hinton, G. E. (1999). Products of experts. Proceedings of the Ninth International Conference on Artificial Neural Networks (ICANN) (pp. 1–6). Edinburgh, Scotland: IEE.

[Hinton, 2002] Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14, 1771–1800.

[Hinton et al., 2006] Hinton, G. E., Osindero, S., & Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.

[Hyvärinen, 2005] Hyvärinen, A. (2005). Estimation of non-normalized statistical models using score matching. Journal of Machine Learning Research, 6, 695–709.

[Hyvärinen, 2007] Hyvärinen, A. (2007). Some extensions of score matching. Computational Statistics and Data Analysis, 51, 2499–2512.

[Katzgraber et al., 2006] Katzgraber, H. G., Trebst, S., Huse, D. A., & Troyer, M. (2006). Feedback-optimized parallel tempering Monte Carlo. Journal of Statistical Mechanics: Theory and Experiment, 2006, P03018.

[Larochelle et al., 2010] Larochelle, H., Bengio, Y., & Turian, J. (2010). Tractable multivariate binary density estimation and the restricted Boltzmann forest. Neural Computation, 22, 2285–2307.

[Lingenheil et al., 2009] Lingenheil, M., Denschlag, R., Mathias, G., & Tavan, P. (2009). Efficiency of exchange schemes in replica exchange. Chemical Physics Letters, 478, 80–84.

[Marlin et al., 2010] Marlin, B., Swersky, K., Chen, B., & de Freitas, N. (2010). Inductive principles for restricted Boltzmann machine learning. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010).

[Nadler & Hansmann, 2007] Nadler, W., & Hansmann, U. H. E. (2007). Generalized ensemble and tempering simulations: A unified view. Physical Review E, 75, 026109.

[Neal, 1994] Neal, R. M. (1994). Sampling from multimodal distributions using tempered transitions (Technical Report 9421). Dept. of Statistics, University of Toronto.

[Salakhutdinov, 2010a] Salakhutdinov, R. (2010a). Learning deep Boltzmann machines using adaptive MCMC. Proceedings of the Twenty-seventh International Conference on Machine Learning (ICML-10) (pp. 943–950). ACM.

[Salakhutdinov, 2010b] Salakhutdinov, R. (2010b). Learning in Markov random fields using tempered transitions. NIPS'09.

[Taylor & Hinton, 2009] Taylor, G., & Hinton, G. (2009). Factored conditional restricted Boltzmann machines for modeling motion style. Proceedings of the 26th International Conference on Machine Learning (ICML'09) (pp. 1025–1032). Montreal: Omnipress.

[Tieleman, 2008] Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. ICML 2008 (pp. 1064–1071). Helsinki, Finland: ACM.

[Tieleman & Hinton, 2009] Tieleman, T., & Hinton, G. (2009). Using fast weights to improve persistent contrastive divergence. ICML 2009 (pp. 1033–1040). ACM.

[Welling et al., 2005] Welling, M., Rosen-Zvi, M., & Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. NIPS'04. Cambridge, MA: MIT Press.

[Younes, 1998] Younes, L. (1998). On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates. Stochastics and Stochastics Models (pp. 177–228).