Herded Gibbs Sampling

01/17/2013 · Luke Bornn et al.

The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an O(1/T) convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence of herded Gibbs for sparsely connected probabilistic graphical models is still an open problem.




1 Introduction

Over the last 60 years, we have witnessed great progress in the design of randomized sampling algorithms; see for example [16, 9, 3, 22] and the references therein. In contrast, the design of deterministic algorithms for “sampling” from distributions is still in its inception [8, 13, 7, 20]. There are, however, many important reasons for pursuing this line of attack on the problem. From a theoretical perspective, this is a well-defined mathematical challenge whose solution might have important consequences. It also brings us closer to reconciling the fact that we typically use pseudo-random number generators to run Monte Carlo algorithms on classical, von Neumann architecture computers. Moreover, the theory for some of the recently proposed deterministic sampling algorithms has taught us that they can achieve O(1/T) convergence rates [8, 13], which are much faster than the standard O(1/√T) Monte Carlo rates for computing ergodic averages. From a practical perspective, the design of deterministic sampling algorithms creates an opportunity for researchers to apply a great body of knowledge on optimization to the problem of sampling; see [4] for an early example of this.

The domain of application of currently existing deterministic sampling algorithms is still very narrow. Importantly, there do not exist deterministic tools for sampling from unnormalized multivariate probability distributions. This is very limiting because the problem of sampling from unnormalized distributions is at the heart of the field of Bayesian inference and the probabilistic programming approach to artificial intelligence [17, 6, 18, 11]. At the same time, despite great progress in Monte Carlo simulation, the celebrated Gibbs sampler continues to be one of the most widely used algorithms. For example, it is the inference engine behind popular statistics packages [17], several tools for text analysis [21], and Boltzmann machines [2, 12]. The popularity of Gibbs stems from its simplicity of implementation and the fact that it is a very generic algorithm.

Without any doubt, it would be remarkable if we could design generic deterministic Gibbs samplers with fast (theoretical and empirical) rates of convergence. In this paper, we take steps toward achieving this goal by capitalizing on a recent idea for deterministic simulation known as herding. Herding [24, 23, 10] is a deterministic procedure for generating samples x ∈ X such that the empirical moments μ of the data are matched. The herding procedure, at iteration t, is as follows:

x_t = argmax_{x ∈ X} ⟨w_{t−1}, φ(x)⟩,    w_t = w_{t−1} + μ − φ(x_t),    (1)

where φ : X → H is a feature map (statistic) from X to a Hilbert space H with inner product ⟨·,·⟩, w ∈ H is the vector of parameters, and μ = E[φ(x)] is the moment vector (expected value of φ over the data) that we want to match. If we choose normalized features, by making ‖φ(x)‖ constant for all x, then the update to generate samples x_t in Equation 1 is equivalent to greedily minimizing the objective

ε_T = ‖ μ − (1/T) Σ_{t=1}^{T} φ(x_t) ‖²,    (2)

where ‖·‖ is the norm naturally defined by the inner product of the Hilbert space [8, 4].

Herding can be used to produce samples from normalized probability distributions. This is done as follows. Let π denote a discrete, normalized probability distribution over X = {1, 2, …, K}, with π(x) ≥ 0 and Σ_{x=1}^{K} π(x) = 1. A natural feature map in this case is the K-dimensional indicator vector φ(x) that has all entries equal to zero, except for the entry at the position indicated by x. For instance, if K = 3 and x = 2, we have φ(x) = (0, 1, 0). Hence, (1/T) Σ_{t=1}^{T} φ(x_t) is an empirical estimate of the distribution. In this case, one step of the herding algorithm involves finding the largest component of the weight vector (i* = argmax_{i ∈ X} w_{t−1,i}), setting x_t = i*, fixing the i*-th entry of φ(x_t) to one and all other entries to zero, and updating the weight vector: w_t = w_{t−1} + π − φ(x_t). The output is a set of samples {x_1, …, x_T} whose empirical distribution converges on the target distribution π as T → ∞.
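To make the procedure concrete, the following minimal Python sketch (our own illustration, not code from the paper) herds a discrete distribution with indicator features and checks that the empirical frequencies approach π:

  import numpy as np

  def herd_discrete(pi, T):
      # Deterministic herding "samples" from a normalized discrete
      # distribution pi, using indicator features phi(x) = e_x.
      pi = np.asarray(pi, dtype=float)
      w = pi.copy()              # initialize the weights at the moment vector
      samples = np.empty(T, dtype=int)
      for t in range(T):
          x = int(np.argmax(w))  # pick the state with the largest weight
          samples[t] = x
          w += pi                # w <- w + pi - phi(x)
          w[x] -= 1.0
      return samples

  pi = np.array([0.5, 0.3, 0.2])
  s = herd_discrete(pi, 1000)
  print(np.bincount(s, minlength=3) / len(s))   # frequencies approach pi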

The herding method, as described thus far, only applies to normalized distributions or to problems where the objective is not to guarantee that the samples come from the right target, but to ensure that some moments are matched. An interpretation of herding in terms of Bayesian quadrature has recently been put forward in [14].

In this paper, we will show that it is possible to use herding to generate samples from more complex unnormalized probability distributions. In particular, we introduce a deterministic variant of the popular Gibbs sampling algorithm, which we refer to as herded Gibbs. While Gibbs relies on drawing samples from the full-conditionals at random, herded Gibbs generates the samples by matching the full-conditionals. That is, one simply applies herding to all the full-conditional distributions.

The experiments will demonstrate that the new algorithm outperforms Gibbs sampling and mean field methods in the domain of sampling from sparsely connected probabilistic graphical models, such as grid-lattice Markov random fields (MRFs) for image denoising and conditional random fields (CRFs) for natural language processing.

We advance the theory by proving that this deterministic Gibbs algorithm converges at rate O(1/T) for distributions of independent variables and for fully-connected probabilistic graphical models. However, a proof establishing suitable conditions that ensure convergence of herded Gibbs sampling for sparsely connected probabilistic graphical models is still unavailable.

2 Herded Gibbs Sampling

Consider a graph of N discrete nodes, where the nodes are the random variables X = {X_1, X_2, …, X_N}, and let π denote the target distribution defined on X.

Gibbs sampling is one of the most popular methods to draw samples from π. Gibbs alternates (either systematically or randomly) the sampling of each variable X_i given X_{N(i)}, where i is the index of the node and N(i) denotes the neighbors of node i. That is, Gibbs generates each sample from its full-conditional distribution π(X_i | X_{N(i)}).

Herded Gibbs replaces the sampling from full-conditionals with herding at the level of the full-conditionals. That is, it alternates a process of matching the full-conditional distributions π(X_i | X_{N(i)}). To do this, herded Gibbs defines a set of auxiliary weights w_{i, x_{N(i)}}, one for every node i and every assignment x_{N(i)} to its neighbors. For ease of presentation, we assume the domain of each X_i is binary, X_i ∈ {0, 1}, so that a scalar weight per conditioning state suffices. Herded Gibbs can be trivially generalized to the multivariate setting by employing weight vectors instead of scalars.

If the binary variable X_i has four binary neighbors, we must maintain 2⁴ = 16 weights for it. Only the weight corresponding to the current instantiation of the neighbors is updated, as illustrated in Algorithm 1. The memory complexity of herded Gibbs is thus exponential in the maximum node degree. Note that the algorithm is a deterministic Markov process with state (x, w).

  Input: T.
  Step 1: Set t = 0. Initialize x^(0) in the support of π and each weight w^(0)_{i, x_N(i)} in the interval (π(X_i = 1 | x_N(i)) − 1, π(X_i = 1 | x_N(i))].
  for t = 1 to T do
     Step 2: Pick a node i according to some policy. Let w = w^(t−1)_{i, x^(t−1)_N(i)}.
     Step 3: If w > 0, set x_i^(t) = 1; otherwise, set x_i^(t) = 0.
     Step 4: Update the weight: w^(t)_{i, x^(t−1)_N(i)} = w + π(X_i = 1 | x^(t−1)_N(i)) − x_i^(t).
     Step 5: Keep the values of all the other nodes, x_j^(t) = x_j^(t−1) for j ≠ i, and all the other weights unchanged.
  end for
  Output: x^(1), …, x^(T).
Algorithm 1 Herded Gibbs Sampling.

The initialization in Step 1 guarantees that x^(t) always remains in the support of π. For a deterministic scan policy in Step 2, we take the values of the variables at the end of each traversal as a sample sequence. Throughout the paper, all experiments employ a fixed variable traversal for sample generation; we call one such traversal of the N variables a sweep.
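The following Python sketch illustrates Algorithm 1 for binary variables. The graph representation and helper names (nbrs, cond_p1) are our own choices, and the initialization w = π − 1/2 is one valid point in the prescribed interval:

  import numpy as np
  from itertools import product

  def herded_gibbs(nbrs, cond_p1, x0, T):
      # Herded Gibbs (Algorithm 1) for binary variables.
      #   nbrs[i]          : tuple of neighbor indices of node i
      #   cond_p1(i, vals) : pi(X_i = 1 | x_N(i) = vals)
      #   x0               : initial state in the support of pi
      # Returns one sample (a tuple) per sweep; memory is exponential
      # in the maximum node degree, as noted in the text.
      n = len(x0)
      x = list(x0)
      w = {(i, vals): cond_p1(i, vals) - 0.5
           for i in range(n)
           for vals in product((0, 1), repeat=len(nbrs[i]))}
      samples = []
      for _ in range(T):
          for i in range(n):                       # fixed scanning order
              key = (i, tuple(x[j] for j in nbrs[i]))
              x[i] = 1 if w[key] > 0 else 0        # deterministic "draw"
              w[key] += cond_p1(i, key[1]) - x[i]  # herding weight update
          samples.append(tuple(x))
      return samples

  # Demo on an empty graph: two independent biased bits.
  p = [0.3, 0.8]
  s = herded_gibbs([(), ()], lambda i, v: p[i], (0, 0), 1000)
  print(np.mean([si[0] for si in s]), np.mean([si[1] for si in s]))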

3 Analysis

As herded Gibbs sampling is a deterministic algorithm, there is no stationary probability distribution of states. Instead, we examine the average of the sample states over time and hypothesize that it converges to our target distribution, the joint distribution π. To make the treatment precise, we need the following definition:

Definition 1.

For a graph of N discrete nodes X = {X_1, X_2, …, X_N}, P_T^(s) is the empirical estimate of the joint distribution obtained by averaging over T samples acquired from herded Gibbs. P_T^(s) is derived from the T samples collected at the end of every sweep over the N variables, starting from the s-th sweep:

P_T^(s)(x) = (1/T) Σ_{k=s}^{s+T−1} 1(x^(kN) = x).    (3)

Our goal is to prove that the limiting average sample distribution over time converges to the target distribution π. Specifically, we want to show the following:

lim_{T→∞} P_T^(s)(x) = π(x) for all x and any s ≥ 0.    (4)

If this holds, we also want to know what the convergence rate is.

We begin the theoretical analysis with a graph of one binary variable X, writing π = π(X = 1). For this graph, there is only one weight, which we denote w_t for notational simplicity. The sequence of samples x_t is determined by the dynamics of w_t (shown in Figure 1):

if w_{t−1} > 0:  x_t = 1, w_t = w_{t−1} + π − 1;  otherwise:  x_t = 0, w_t = w_{t−1} + π.    (5)

Lemma 3 in the appendix shows that (π − 1, π] is the invariant interval of the dynamics, and that the state X = 1 is visited at a frequency close to π, with an error of at most 1/T:

| (1/T) Σ_{τ=s+1}^{s+T} x_τ − π | ≤ 1/T.    (6)

This is known as the fast moment matching property [24, 23, 10]. We will show in the next two theorems that the fast moment matching property also holds for two special types of graphs, with proofs provided in the appendix.
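A few lines of Python (our own check, not from the paper) make the bound in Equation 6 tangible for a single binary variable:

  def herd_bit(p, T, w0=0.0):
      # Single-variable herding dynamics of Equation 5.
      w, count = w0, 0
      for _ in range(T):
          x = 1 if w > 0 else 0
          count += x
          w += p - x
      return count / T

  p = 0.3
  for T in (10, 100, 1000):
      err = abs(herd_bit(p, T) - p)
      print(T, err, err <= 1.0 / T)   # error is always within the 1/T bound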

Figure 1: Herding dynamics for a single variable.

In an empty graph, all the variables are independent of each other, and herded Gibbs reduces to running N one-variable chains in parallel. Denote the marginal distributions π_i = π(X_i = 1). Examples of failing convergence in the presence of rational ratios between the π_i's were observed in [4], where the need for further theoretical research on this matter was also pointed out. The following theorem provides formal conditions for convergence in the restricted domain of empty graphs.

Theorem 1.

For an empty graph, when herded Gibbs has a fixed scanning order and {1, π_1, …, π_N} are rationally independent, the empirical distribution P_T^(s) converges to the target distribution π as T → ∞, for any s ≥ 0.

A set of real numbers {x_1, …, x_n} is said to be rationally independent if, for any set of rational numbers {a_1, …, a_n}, Σ_i a_i x_i = 0 implies a_i = 0 for all i. The proof of Theorem 1 consists of first formulating the dynamics of the weight vector as a constant translation mapping in a circular unit cube, and then proving that the weights are uniformly distributed by making use of the Kronecker-Weyl theorem [25].
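The following small script (our own illustration) contrasts an irrational rotation of the circle, which equidistributes as the Kronecker-Weyl theorem predicts, with a rational one, which does not:

  import numpy as np

  def orbit_histogram(alpha, T, bins=4):
      # Fraction of the orbit (t * alpha mod 1) landing in each bin.
      pts = (np.arange(T) * alpha) % 1.0
      return np.histogram(pts, bins=bins, range=(0, 1))[0] / T

  print(orbit_histogram(np.sqrt(2) % 1, 10000))  # ~[.25 .25 .25 .25]
  print(orbit_histogram(0.5, 10000))             # mass on two bins only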

For fully-connected (complete) graphs, convergence is guaranteed even with rational ratios. In fact, herded Gibbs converges to the target joint distribution at a rate of O(1/T), after a burn-in period. This statement is formalized in Theorem 2.

Theorem 2.

For a fully-connected graph, when herded Gibbs has a fixed scanning order and the Dobrushin coefficient η of the corresponding Gibbs sampler satisfies η < 1, there exist constants l > 0 and B > 0 such that

d_v(P_T^(s), π) = O(1/T) for every burn-in s beyond a threshold that grows as O(log T),    (7)

where d_v denotes the total variation distance and the constants in the bound depend on N, η, l, and B.

The constants l and B are defined in Equation 31 for Proposition 4 in the appendix. If we ignore the burn-in period and start collecting samples from the very beginning, we achieve a convergence rate of O(log T / T), as stated in Corollary 10 in the appendix. The constant in the convergence rate has an exponential term, with N in the exponent. An exponentially large constant seems to be unavoidable for any sampling algorithm when considering the convergence to a joint distribution with 2^N states. As for the marginal distributions, the convergence rate of herded Gibbs is also O(1/T), because marginal probabilities are linear functions of the joint distribution. However, in practice we observe very rapid convergence of the marginals, so stronger theoretical results about the convergence of the marginal distributions seem plausible.

The proof proceeds by first bounding the discrepancy between the chain of empirical estimates of the joint obtained by averaging over herded Gibbs samples, P_T^(s), and a regular Gibbs chain initialized at P_T^(s). After one sweep, this discrepancy is bounded above by O(1/T). The Gibbs chain converges geometrically to π, and so the geometric convergence dominates the accumulated O(1/T) discrepancies; we thus infer that P_T^(s) converges to π geometrically in s, up to an O(1/T) term. To round off the proof, we must find a limiting value for the accumulated error; the proof concludes with a burn-in of s = O(log T) sweeps.
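In symbols, the heart of the argument is the following recursion, where T denotes the Gibbs sweep kernel and η its Dobrushin coefficient (our paraphrase of the appendix, not a display from the paper):

  d_v(P_T^(s+1), π) ≤ d_v(P_T^(s+1), P_T^(s) T) + d_v(P_T^(s) T, π)
                    ≤ O(1/T) + η · d_v(P_T^(s), π).

Unrolling the recursion yields d_v(P_T^(s), π) ≤ η^s d_v(P_T^(0), π) + O(1/((1 − η) T)), so a burn-in of s = O(log T) sweeps drives the first term below the second.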

However, for a generic graph we have no mathematical guarantees on the convergence rate of herded Gibbs. In fact, one can easily construct synthetic examples for which herded Gibbs does not seem to converge to the true marginals and joint distribution. For the examples covered by our theorems and for examples with real data, herded Gibbs demonstrates good behaviour. The exact conditions under which herded Gibbs converges for sparsely connected graphs are still unknown.

4 Experiments

4.1 Simple Complete Graph

We begin with an illustration of how herded Gibbs substantially outperforms Gibbs on a simple complete graph. In particular, we consider a fully-connected model of two variables, X_1 and X_2, as shown in Figure 2; the joint distribution of these variables is shown in Table 1. Figure 3 shows a marginal distribution approximated by both Gibbs and herded Gibbs for different values of a parameter ε that controls how hard it is to move between states. As ε decreases, both approaches require more iterations to converge, but herded Gibbs clearly outperforms Gibbs. The figure also shows that herding does indeed exhibit the O(1/T) convergence rate.
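For readers who wish to reproduce the flavor of this experiment, the sketch below reuses the herded_gibbs function from the sketch in Section 2 on a hypothetical two-variable joint of this kind; the exact values of Table 1 are not reproduced here, so the joint below is an assumption:

  # Hypothetical two-variable joint: a small epsilon makes leaving
  # the states (0, 0) and (1, 1) difficult.
  eps = 0.01
  joint = {(0, 0): 0.5 - eps, (0, 1): eps, (1, 0): eps, (1, 1): 0.5 - eps}

  def cond_p1(i, nbr_vals):
      # pi(X_i = 1 | X_j = nbr_vals[0]), computed from the joint table.
      xj = nbr_vals[0]
      p1 = joint[(1, xj)] if i == 0 else joint[(xj, 1)]
      p0 = joint[(0, xj)] if i == 0 else joint[(xj, 0)]
      return p1 / (p0 + p1)

  nbrs = [(1,), (0,)]    # fully connected: each variable sees the other
  samples = herded_gibbs(nbrs, cond_p1, (0, 0), 5000)
  print(sum(s[0] for s in samples) / len(samples))  # -> pi(X_1 = 1) = 0.5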

Figure 2: Two-variable model.
Table 1: Joint distribution of the two-variable model.
(a) Approximate marginals obtained via Gibbs (blue) and herded Gibbs (red).
(b) Log-log plot of marginal approximation errors obtained via Gibbs (blue) and herded Gibbs (red).
(c) Inverse of marginal approximation errors obtained via Gibbs (blue) and herded Gibbs (red).
Figure 3: (a) Approximating a marginal distribution with Gibbs (blue) and herded Gibbs (red) for an MRF of two variables, constructed so as to make the moves between states progressively more difficult as ε decreases. The four columns, from left to right, correspond to decreasing values of ε. Table 1 provides the joint distribution for these variables. The error bars for Gibbs correspond to one standard deviation. Rows (b) and (c) illustrate that the empirical convergence rate of herded Gibbs matches the expected theoretical rate. In the plots of rows (b) and (c), the upper bound on the error of herded Gibbs was used to remove the oscillations, so as to illustrate the behaviour of the algorithm more clearly.

4.2 MRF for Image Denoising

Next, we consider the standard setting of a grid-lattice MRF for image denoising. Let us assume that we have a binary image corrupted by noise, and that we want to infer the original clean image. Let x_i ∈ {−1, +1} denote the unknown true value of pixel i, and y_i the observed, noise-corrupted value of this pixel. We take advantage of the fact that neighboring pixels are likely to have the same label by defining an MRF with an Ising prior. That is, we specify a rectangular 2D lattice with the following pairwise clique potentials:

ψ_{ij}(x_i, x_j) = exp(J_ij x_i x_j),    (8)

and joint distribution:

π(x) = (1/Z) Π_{i∼j} ψ_{ij}(x_i, x_j) = (1/Z) exp( Σ_{i∼j} J_ij x_i x_j ),    (9)

where i ∼ j is used to indicate that nodes i and j are connected. The known parameters J_ij establish the coupling strength between nodes i and j. Note that the matrix of couplings is symmetric. If all the J_ij > 0, then neighboring pixels are likely to be in the same state.

The MRF model combines the Ising prior with a likelihood model as follows:

p(x, y) = π(x) Π_i p(y_i | x_i).    (10)

The potentials ψ_{ij} encourage label smoothness. The likelihood terms p(y_i | x_i) are conditionally independent (e.g., Gaussians with known variance σ² and a mean μ_{x_i} centered at each possible value of x_i). In more precise terms,

p(y_i | x_i) = N(y_i; μ_{x_i}, σ²).    (11)

When the coupling parameters are identical, say J_ij = J, the conditional π(X_i | x_{N(i)}) depends on the neighbors only through the sum Σ_{j∈N(i)} x_j. Hence, different neighbor configurations with the same sum result in the same conditional. If we store the conditionals for configurations with the same sum together, we only need to store as many conditionals as there are distinct values of the sum. This enables us to develop a shared version of herded Gibbs that is more memory efficient, where we maintain and update weights only for the distinct states of the Markov blanket of each variable.
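A sketch of the full conditional under this model, together with the shared-weight keying just described; the function names and the encoding of pixels as {0, 1} (mapped internally to spins {−1, +1}, for compatibility with the herded Gibbs sketch of Section 2) are our own choices:

  import math

  def cond_p1_ising(J, sigma, mu0, mu1, y_i, nbr_vals):
      # pi(x_i = 1 | x_N(i), y_i) under the Ising prior (Equation 9)
      # and Gaussian likelihood (Equation 11).
      s = sum(2 * v - 1 for v in nbr_vals)      # sum of neighbor spins
      def logpot(x):                            # x in {0, 1}
          spin = 2 * x - 1
          mu = mu1 if x == 1 else mu0
          return J * spin * s - (y_i - mu) ** 2 / (2 * sigma ** 2)
      return 1.0 / (1.0 + math.exp(logpot(0) - logpot(1)))

  # Shared variant: key the herding weights by (i, sum(nbr_vals)) rather
  # than (i, nbr_vals). For a 4-neighbor grid this means 5 weights per
  # pixel instead of 16.
  def shared_key(i, nbr_vals):
      return (i, sum(nbr_vals))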

In this image denoising experiment, noisy versions of the binary image shown in Figure 4 (left) were created through the addition of Gaussian noise with varying σ. Figure 4 (right) shows a corrupted image. The reconstruction errors as a function of the number of iterations, for this example, are shown in Figure 5. The plot compares herded Gibbs against Gibbs and two versions of mean field with different damping factors [19]. The results demonstrate that the herded Gibbs techniques are among the best methods for solving this task.

Figure 4: Original image (left) and its corrupted version (right).
Figure 5: Reconstruction errors for the image denoising task. The results are averaged across 10 corrupted images with Gaussian noise. The error bars correspond to one standard deviation. Mean field requires the specification of the damping factor D.

A comparison for different noise levels σ is presented in Table 2. As expected, mean field does well in the low-noise scenario, but the performance of the shared version of herded Gibbs is significantly better as the noise increases.

Method                  σ = 2    σ = 4    σ = 6    σ = 8
Herded Gibbs
Herded Gibbs - shared
Gibbs
Mean field (D=0.5)
Mean field (D=1)
Table 2: Errors of the image denoising example after 30 iterations (all measurements have been scaled by a constant factor). We use an Ising prior with identical couplings J and four Gaussian noise models with different σ's. For each σ, we generated 10 corrupted images by adding Gaussian noise. The final results shown here are averages and standard deviations (in parentheses) across the 10 corrupted images. D denotes the damping factor in mean field.

4.3 CRF for Named Entity Recognition

Figure 6: Typical skip-chain CRF model for named entity recognition.

Named Entity Recognition (NER) involves the identification of entities, such as people and locations, within a text sample. A conditional random field (CRF) for NER models the relationship between entity labels and sentences with a conditional probability distribution p(y | x; θ), where x is a sentence, y is a labeling, and θ is a vector of coupling parameters. The parameters θ are feature weights and model relationships between labels y_i and the sentence x, or between pairs of labels y_i and y_j. A chain CRF only employs relationships between adjacent labels, whereas a skip-chain CRF can employ relationships between labels y_i and y_j whose positions i and j are far apart. Skip-chain CRFs are important in language tasks, such as NER and semantic role labeling, because they allow us to model long-range dependencies in a stream of words; see Figure 6.

Once the parameters θ have been learned, the CRF can be used for inference: a labeling for some sentence is found by maximizing the above probability. Inference for CRF models in the NER domain is typically carried out with the Viterbi algorithm. However, if we want to accommodate long-range dependencies, thus resulting in the so-called skip-chain CRFs, Viterbi becomes prohibitively expensive. To surmount this problem, the Stanford named entity recognizer [15] makes use of annealed Gibbs sampling.

To demonstrate herded Gibbs on a practical application of great interest in text mining, we modify the standard inference procedure of the Stanford named entity recognizer by replacing the annealed Gibbs sampler with the herded Gibbs sampler. The herded Gibbs sampler is not annealed. To find the maximum a posteriori sequence, we simply choose the sample with the highest joint probability. In order to be able to compare against Viterbi, we have purposely chosen to use single-chain CRFs. We remind the reader, however, that the herded Gibbs algorithm could be used in cases where Viterbi inference is not possible.
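The selection rule is a simple argmax over the collected samples. A minimal sketch, where crf_score is a hypothetical stand-in for the model's unnormalized log-probability (not a function of the Stanford NER package):

  def map_from_samples(samples, score):
      # Approximate MAP: among the (unannealed) herded Gibbs samples,
      # return the labeling with the highest joint model score.
      return max(set(samples), key=score)

  # usage sketch: y_hat = map_from_samples(collected, crf_score)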

We used the pre-trained 3-class CRF model in the Stanford NER package [15]. This model is a linear-chain CRF with pre-defined features and pre-trained feature weights θ. For the test set, we used the corpus for the NIST 1999 IE-ER Evaluation. Performance is measured in per-entity F1. For all the methods except Viterbi, we show F1 scores after 100, 400, and 800 iterations in Table 3. For Gibbs, the results shown are the averages and standard deviations over 5 random runs, and we used a linear annealing schedule. As the results illustrate, herded Gibbs attains the same accuracy as Viterbi and is faster than annealed Gibbs. Unlike Viterbi, herded Gibbs can be easily applied to skip-chain CRFs. After only 400 iterations (90.5 seconds), herded Gibbs already achieves an F1 score of 84.75, while Gibbs, even after 800 iterations (115.9 seconds), only achieves an F1 score of 84.61. The experiment thus clearly demonstrates that (i) herded Gibbs does no worse than the optimal solution, Viterbi, and (ii) herded Gibbs yields more accurate results for the same amount of computation. Figure 7 provides a representative NER example of the performance of Gibbs, herded Gibbs, and Viterbi (all methods produced the same annotation for this short example).

Method \ Iterations    100    400    800
Annealed Gibbs         [s]    [s]    [s]
Herded Gibbs           [s]    [s]    [s]
Viterbi                [s]

Table 3: Gibbs, herded Gibbs, and Viterbi for the NER task. The average computational time each approach took to do inference for the entire test set is listed in square brackets. After only 400 iterations (90.48 seconds), herded Gibbs already achieves an F1 score of 84.75, while Gibbs, even after 800 iterations (115.92 seconds), only achieves an F1 score of 84.61. For the same computation, herded Gibbs is more accurate than Gibbs.

”Pumpkin” (Tim Roth) and ”Honey Bunny” (Amanda Plummer) are having breakfast in a diner. They decide to rob it after realizing they could make money off the customers as well as the business, as they did during their previous heist. Moments after they initiate the hold-up, the scene breaks off and the title credits roll. As Jules Winnfield (Samuel L. Jackson) drives, Vincent Vega (John Travolta) talks about his experiences in Europe, from where he has just returned: the hash bars in Amsterdam, the French McDonald’s and its ”Royale with Cheese”.

Figure 7: Results of applying the NER CRF to a random Wikipedia sample [1]. Entities are automatically classified as Person, Location, and Organization.

5 Conclusions and Future Work

In this paper, we introduced herded Gibbs, a deterministic variant of the popular Gibbs sampling algorithm. While Gibbs relies on drawing samples from the full-conditionals at random, herded Gibbs generates the samples by matching the full-conditionals. Importantly, the herded Gibbs algorithm is very close to the Gibbs algorithm and hence retains its simplicity of implementation.

The synthetic, denoising, and named entity recognition experiments provided evidence that herded Gibbs outperforms Gibbs sampling. However, as discussed, herded Gibbs requires, in the worst case, storage of the conditional distributions for all instantiations of the neighbors. This storage requirement indicates that it is most suitable for sparse probabilistic graphical models, such as the CRFs used in information extraction. On the theoretical side, the paper advanced the theory of deterministic sampling by showing that herded Gibbs converges with rate O(1/T) for models with independent variables and for fully-connected models. Thus, there is a gap between theory and practice that needs to be narrowed. We do not anticipate that this will be an easy task, but it is certainly a key direction for future work.

We should mention that it is also possible to design parallel versions of herded Gibbs in a Jacobi fashion, in which all variables are updated simultaneously. We have indeed studied these and found them less efficient, per iteration, than the sequential Gauss-Seidel version of herded Gibbs discussed in this paper. However, if many cores are available, we strongly recommend the Jacobi (synchronous) implementation, as it will likely outperform the Gauss-Seidel (sequential) implementation in wall-clock time.

The design of efficient herding algorithms for densely connected probabilistic graphical models remains an important area for future research. Such algorithms, in conjunction with Rao-Blackwellization, would enable us to attack many statistical inference tasks, including Bayesian variable selection and Dirichlet processes.

There are also interesting connections with other algorithms to explore. If, for a fully-connected graphical model, we build a new graph where every state is a node and directed edges connect nodes that can be reached with a single herded Gibbs update, then herded Gibbs becomes equivalent to the Rotor-Router model of Alex Holroyd and Jim Propp [13]. (We thank Art Owen for pointing out this connection.) This deterministic analogue of a random walk has provably superior concentration rates for quantities such as normalized hitting frequencies, hitting times, and occupation frequencies. In line with our own convergence results, it is shown that discrepancies in these quantities decrease as O(1/T) instead of the usual O(1/√T). We expect that many of the results from this literature apply to herded Gibbs as well. The connection with the work of Art Owen and colleagues, see for example [7], also needs to be explored further. Their work uses completely uniformly distributed (CUD) sequences to drive Markov chain Monte Carlo schemes. It is not clear, following discussions with Art Owen, whether CUD sequences can be constructed in a greedy way as in herding.

References

  • [1] Pulp Fiction. Wikipedia, the free encyclopedia. Online; accessed June 2012.
  • [2] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147–169, 1985.
  • [3] C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50(1):5–43, 2003.
  • [4] F. Bach, S. Lacoste-Julien, and G. Obozinski. On the equivalence between herding and conditional gradient algorithms. In International Conference on Machine Learning, 2012.
  • [5] P. Brémaud. Markov chains: Gibbs fields, Monte Carlo simulation, and queues, volume 31. Springer, 1999.
  • [6] P. Carbonetto, J. Kisynski, N. de Freitas, and D. Poole. Nonparametric Bayesian logic. In Uncertainty in Artificial Intelligence, pages 85–93, 2005.
  • [7] S. Chen, J. Dick, and A. B. Owen. Consistency of Markov chain quasi-Monte Carlo on continuous state spaces. Annals of Statistics, 39(2):673–701, 2011.
  • [8] Y. Chen, M. Welling, and A.J. Smola. Supersamples from kernel-herding. In Uncertainty in Artificial Intelligence, pages 109–116, 2010.
  • [9] A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science. Springer, 2001.
  • [10] A. Gelfand, Y. Chen, L. van der Maaten, and M. Welling. On herding and the perceptron cycling theorem. In Advances in Neural Information Processing Systems, pages 694–702, 2010.
  • [11] N. D. Goodman, V. K. Mansinghka, D. M. Roy, K. Bonawitz, and J. B. Tenenbaum. Church: a language for generative models. Uncertainty in Artificial Intelligence, 2008.
  • [12] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
  • [13] A. E. Holroyd and J. Propp. Rotor walks and Markov chains. Algorithmic Probability and Combinatorics, 520:105–126, 2010.
  • [14] F. Huszár and D. Duvenaud. Optimally-weighted herding is Bayesian quadrature. arXiv preprint arXiv:1204.1664, 2012.
  • [15] J. R. Finkel, T. Grenager, and C. Manning. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, ACL '05, pages 363–370, Stroudsburg, PA, USA, 2005. Association for Computational Linguistics.
  • [16] J. S. Liu. Monte Carlo strategies in scientific computing. Springer, 2001.
  • [17] D. J. Lunn, A. Thomas, N. Best, and D. Spiegelhalter. WinBUGS a Bayesian modelling framework: Concepts, structure, and extensibility. Statistics and Computing, 10(4):325–337, 2000.
  • [18] B. Milch and S. Russell. General-purpose MCMC inference over relational structures. In Uncertainty in Artificial Intelligence, pages 349–358, 2006.
  • [19] K. P. Murphy. Machine Learning: a Probabilistic Perspective. MIT Press, 2012.
  • [20] I. Murray and L. T. Elliott. Driving Markov chain Monte Carlo with a dependent random stream. Technical Report arXiv:1204.3187, 2012.
  • [21] I. Porteous, D. Newman, A. Ihler, A. Asuncion, P. Smyth, and M. Welling. Fast collapsed Gibbs sampling for latent Dirichlet allocation. In ACM SIGKDD international conference on Knowledge discovery and data mining, pages 569–577, 2008.
  • [22] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2nd edition, 2004.
  • [23] M. Welling. Herding dynamic weights for partially observed random field models. In Uncertainty in Artificial Intelligence, pages 599–606, 2009.
  • [24] M. Welling. Herding dynamical weights to learn. In International Conference on Machine Learning, pages 1121–1128, 2009.
  • [25] H. Weyl. Über die Gleichverteilung von Zahlen mod. Eins. Mathematische Annalen, 77:313–352, 1916.

Appendix A Proof of Theorem 1

We first show that the weight dynamics of the one-variable herding algorithm are restricted to an invariant interval of length 1.

Lemma 3.

If w_t is the weight of the herding dynamics of a single binary variable with probability π = π(X = 1), and w_s ∈ (π − 1, π] at some step s, then w_t ∈ (π − 1, π] for all t ≥ s. Moreover, for any T ≥ 1, we have:

Σ_{τ=s+1}^{s+T} x_τ = Tπ + w_s − w_{s+T},    (12)
| (1/T) Σ_{τ=s+1}^{s+T} x_τ − π | ≤ 1/T.    (13)
Proof.

We first show that w_t ∈ (π − 1, π] for all t ≥ s. This is easy to observe by induction, as the claim holds at t = s and, if w_{t−1} ∈ (π − 1, π] for some t > s, then, following Equation 5, we have:

w_t = w_{t−1} + π ∈ (2π − 1, π] ⊆ (π − 1, π] if w_{t−1} ≤ 0,  and  w_t = w_{t−1} + π − 1 ∈ (π − 1, 2π − 1] ⊆ (π − 1, π] if w_{t−1} > 0.    (14)

Summing up both sides of Equation 5 over τ = s + 1, …, s + T immediately gives us the result of Equation 12, since:

w_{s+T} = w_s + Tπ − Σ_{τ=s+1}^{s+T} x_τ.    (15)

In addition, Equation 13 follows by observing that |w_s − w_{s+T}| < 1. ∎

When w is initialized outside the invariant interval, it is easy to observe that it moves into the interval monotonically, at a linear speed, during a transient period. So we will always consider an initialization w_0 ∈ (π − 1, π] from now on.

Equivalently, we can take the one-to-one mapping w′ = π − w (so that w′ ∈ [0, 1) when w ∈ (π − 1, π]) and think of w′ as updated by a constant translation in a circular unit interval, as shown in Figure 8. That is,

w′_t = (w′_{t−1} − π) mod 1.    (16)
Figure 8: Equivalent weight dynamics for a single variable.
Figure 9: Dynamics of herding with two independent variables.
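A quick numerical check (our own, not from the paper) that the two descriptions of the dynamics coincide:

  p, w = 0.3, 0.1           # w starts inside the invariant interval (p - 1, p]
  wp = (p - w) % 1.0        # w' = pi - w lives on the circle [0, 1)
  for _ in range(5):
      x = 1 if w > 0 else 0
      w += p - x            # original herding dynamics (Equation 5)
      wp = (wp - p) % 1.0   # equivalent constant rotation (Equation 16)
      print(abs((p - w) % 1.0 - wp) < 1e-9)   # always True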

We are now ready to give the proof of Theorem 1.

Proof of Theorem 1.

For an empty graph of N independent vertices, the dynamics of the weight vector are equivalent to a constant translation mapping in the N-dimensional circular unit space [0, 1)^N, as shown in Figure 9:

w′_t = (w′_{t−1} − π) mod 1,  with π = (π_1, …, π_N).    (17)

The Kronecker-Weyl theorem [25] states that the sequence (w′_t) is equidistributed (or uniformly distributed) on [0, 1)^N if and only if {1, π_1, …, π_N} are rationally independent. Since we can define a one-to-one, volume-preserving transformation between w and w′ as w′ = π − w, the sequence of weights w_t is also uniformly distributed in the product of the invariant intervals.

Define the mapping from a state value x ∈ {0, 1}^N to a subset of the circular unit space as

A(x) = A_1(x_1) × ⋯ × A_N(x_N),  with A_i(1) = [0, π_i) and A_i(0) = [π_i, 1),    (18)

and let λ(A(x)) be its measure. Since x_i = 1 exactly when w′_i < π_i, we obtain the limiting distribution of the joint state as

lim_{T→∞} P_T^(s)(x) = λ(A(x)) = Π_{i=1}^{N} π_i^{x_i} (1 − π_i)^{1−x_i} = π(x).    (19)  ∎
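As an empirical sanity check of Theorem 1 (our own script), two independent herding chains with rationally independent marginals produce a joint empirical distribution that approaches the product distribution:

  import numpy as np

  p = np.array([1 / np.sqrt(2), 1 / np.sqrt(3)])   # irrational marginals
  w = p - 0.5
  T = 100000
  counts = np.zeros((2, 2))
  for _ in range(T):
      x = (w > 0).astype(int)
      w += p - x
      counts[x[0], x[1]] += 1
  target = np.outer([1 - p[0], p[0]], [1 - p[1], p[1]])
  print(np.abs(counts / T - target).max())         # small; shrinks with T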

Appendix B Proof of Theorem 2

In this appendix, we give an upper bound on the convergence rate of the sampling distribution for fully-connected graphs. As herded Gibbs sampling is deterministic, the distribution of a variable's state at every iteration degenerates to a single state. As such, we study here the empirical distribution of a collection of samples.

The structure of the proof is as follows (with notation defined in the next subsection). We study the distance between the invariant distribution π and the empirical distribution P_T^(s) of T samples collected starting from sweep s. We show that the distance decreases as O(1/T), with the help of an auxiliary regular Gibbs sampling Markov chain initialized at P_T^(s), as shown in Figure 10. On the one hand, the distance between the regular Gibbs chain after one sweep and π decreases according to the geometric convergence property of MCMC algorithms on compact state spaces. On the other hand, we show that in one sweep the distance between the herded Gibbs chain and the auxiliary Gibbs chain increases by at most O(1/T). Since the geometrically shrinking term dominates the O(1/T) term, the distance between P_T^(s) and π contracts at every sweep up to an O(1/T) error. Moreover, after a short burn-in period of s = O(log T) sweeps, the empirical distribution has an approximation error of order O(1/T).

Figure 10: Transition kernels and relevant distances for the proof of Theorem 2.

B.1 Notation

Assume without loss of generality that, in the systematic scanning policy, the variables are sampled in the order X_1, X_2, …, X_N.

B.1.1 State Distribution

  • Denote by supp(π) the support of the distribution π, that is, the set of states with positive probability.

  • We use t to denote the time in terms of sweeps over all of the variables, and τ to denote the time in terms of steps, where one step constitutes the updating of one variable. For example, at the end of t sweeps, we have τ = tN.

  • Recall the sample/empirical distribution P_T^(s) presented in Definition 1. Figure 11 provides a visual interpretation of the definition.

    Figure 11: Distribution over time at the end of every sweep.
  • Denote the sample/empirical distribution at the i-th step within a sweep by P_T^(s,i), as shown in Figure 12: this is the distribution of the T samples collected at the i-th step of every sweep, starting from the s-th sweep. Clearly, at the end of a sweep this coincides with the distribution of Definition 1.

    Figure 12: Distribution over time within a sweep.
  • We also track the distribution of a regular Gibbs sampling Markov chain after sweeps of updates over the N variables. For a given time s, we construct a Gibbs Markov chain with initial distribution P_T^(s) and the same scanning order as herded Gibbs, as shown in Figure 10.

B.1.2 Transition Kernel

  • Denote the transition kernel of regular Gibbs for the step of updating variable X_i by T_i, and for a whole sweep by T. By definition, T = T_1 T_2 ⋯ T_N. The transition kernel for a single step can be represented as a matrix:

    T_i(x, y) = π(y_i | x_{−i}) 1(y_{−i} = x_{−i}),    (20)

    where x is the current state vector of the N variables, y is the state of the next step, and x_{−i} denotes all the components of x excluding the i-th component. If π(x_{−i}) = 0, the conditional probability is undefined and we set it with an arbitrary distribution. Consequently, the sweep kernel T can also be represented as a matrix product of the T_i.

  • Denote the Dobrushin ergodic coefficient [5] of the regular Gibbs kernel by η. When η < 1, the regular Gibbs sampler has a geometric rate of convergence:

    d_v(P T^t, π) ≤ η^t d_v(P, π) for any initial distribution P.    (21)

    A common sufficient condition for η < 1 is that π is strictly positive.

  • Consider the sequence of sample distributions in Figures 11 and 12. We define the empirical transition kernel of herded Gibbs for the step of updating variable X_i by T̃_i, and for a whole sweep by T̃. Unlike regular Gibbs, the herded Gibbs transition kernel is not homogeneous: it depends on both the time s and the sample size T. Nevertheless, we can still represent the single-step transition kernel as a matrix:

    T̃_i(x, y) = π̃(y_i | x_{−i}) 1(y_{−i} = x_{−i}),    (22)

    where the empirical conditional π̃ is defined as:

    π̃(y_i | x_{−i}) = N(y) / N(x_{−i}),    (23)

    where N(y) is the number of occurrences of the joint state y, and N(x_{−i}) is the number of occurrences of the conditioning state in the previous step. When N(x_{−i}) = 0, we know that, with a proper initialization of herded Gibbs, such conditioning states are never visited, and we simply set these entries with an arbitrary distribution. It is not hard to verify the following identity by expanding every term with its definition: P_T^(s,i) = P_T^(s,i−1) T̃_i, and consequently, the sweep-level distributions are related by the product kernel T̃ = T̃_1 T̃_2 ⋯ T̃_N.

B.2 Linear Visiting Rate

We prove in this section that every joint state in the support of the target distribution is visited at least at a linear rate. This result will be used to bound the distance between the Gibbs and herded Gibbs transition kernels.

Proposition 4.

If a graph is fully connected, herded Gibbs sampling scans variables in a fixed order, and the corresponding Gibbs sampling Markov chain is irreducible, then for any state x ∈ supp(π) and any index i, the state is visited at least at a linear rate. Specifically, there exist constants l > 0 and B ≥ 0 such that the number of times x is visited at step i during the first T sweeps is at least

lT − B.    (24)

Denote the minimum nonzero conditional probability by π_min = min_{i, x ∈ supp(π)} π(x_i | x_{−i}).

The following lemma, which is needed to prove Proposition 4, gives an inequality between the number of visits of two sets of states in consecutive steps.

Lemma 5.

For any integer i and two sets of states A and B with a mapping f : B → A that satisfies the following condition:

for every y ∈ B, the state f(y) differs from y in at most the variable X_i, and π(f(y)_i | y_{−i}) > 0,    (25)

we have that, for any s and T, the number of times A is visited in the set of steps {s + 1, …, s + T} is lower bounded by a function of the number of times B is visited in the previous T steps. Writing N_{a:b}(S) for the number of steps in {a, …, b} at which the current state lies in S:

N_{s+1:s+T}(A) ≥ π_min N_{s:s+T−1}(B) − |B|.    (26)
Proof.

As a complement to Condition 25, we can define g as the inverse mapping from A to subsets of B, so that for any z ∈ A and y ∈ g(z) we have f(y) = z, and the sets {g(z) : z ∈ A} are disjoint.

Consider any state y ∈ B. When y is visited, the weight w_{i, y_{−i}} is active. Let us denote by T_y the set of all the steps in {s, …, s + T − 1} at which y is visited. Applying Lemma 3, we get

(number of steps in T_y followed by a visit to f(y)) ≥ π(f(y)_i | y_{−i}) |T_y| − 1 ≥ π_min |T_y| − 1.    (27)

Since the variables other than X_i are not changed at steps in T_y, we have

x^(τ+1)_{−i} = x^(τ)_{−i} = y_{−i} for all τ ∈ T_y,    (28)

so each visit counted in Equation 27 indeed yields the state f(y) ∈ A. Combining the fact that the sets g(z) are disjoint and summing up both sides of Equation 27 over y ∈ B proves the lemma:

N_{s+1:s+T}(A) ≥ Σ_{y∈B} (π_min |T_y| − 1) ≥ π_min N_{s:s+T−1}(B) − |B|.    (29)  ∎

Remark 6.

A fully connected graph is a necessary condition for the application of Lemma 3 in the proof. If a graph is not fully connected (), a weight may be shared by multiple full conditioning states. In this case is no longer a consecutive sequence of times when the weight is updated, and Lemma 3 does not apply here.

Now let us prove Proposition 4 by iteratively applying Lemma 5.

Proof of Proposition 4.

Because the corresponding Gibbs sampler is irreducible, and any Gibbs sampler is aperiodic, there exists a constant L such that, for any state x ∈ supp(π) and any step i in a sweep, we can find a path of length L, with positive transition probability, connecting any state y ∈ supp(π) to x, where each step of the path follows the Gibbs updating scheme. For a strictly positive distribution, the minimum value of L is N.

Denote by B_k the set of all states occurring at the k-th position of such paths, for k = 0, 1, …, L. By the definition of these paths, we know that B_L = {x} and B_0 = supp(π), and for each k there exist an integer i_k (the index of the variable to be updated) and a mapping f_k : B_{k−1} → B_k that satisfy the condition in Lemma 5 (the mapping is defined by the transition path). Also notice that any state in B_k can differ from x in at most L − k variables, and therefore |B_k| ≤ 2^{L−k}.

Let us apply Lemma 5 recursively from k = 1 to L:

(number of visits to x) ≥ π_min^L (number of visits to supp(π)) − Σ_{k=1}^{L} π_min^{L−k} |B_k|.    (30)

The proof is concluded by choosing the constants l and B accordingly (Equation 31). ∎