1 Introduction
Monte Carlo methods are based on the idea that one can approximate a complex distribution with a set of stochastically sampled particles. The flexibility and variety of Monte Carlo methods have made them the workhorse of statistical computation (Robert and Casella, 2004). However, their success relies critically on the availability of a good sampler, and designing such a sampler is often challenging.
In this paper, we rethink particle approximations over discrete hypothesis spaces from a different perspective. Suppose we got to pick where to place the particles in the hypothesis space; where would we put them? Intuitively, we would want to distribute them so that they cover the high probability regions of the target distribution, without all of the particles collapsing onto the mode. This problem can be formulated precisely within the framework of variational inference
(Wainwright and Jordan, 2008), which treats probabilistic inference as an optimization problem over a set of distributions. We derive a coordinate ascent update for particle approximations that iteratively minimizes the Kullback-Leibler (KL) divergence between the particle approximation and the target distribution. After introducing our general framework, we describe how it can be applied to filtering and smoothing problems. We then show experimentally that variational particle approximations can overcome a number of problems that are challenging for conventional Monte Carlo methods. In particular, our approach is able to produce a diverse, high probability set of particles in situations where Monte Carlo and mean-field variational methods sometimes degenerate.
2 Background
Consider the problem of approximating a probability distribution P(x)
over discrete latent variables x = (x_1, …, x_N), where the target distribution is known only up to a normalizing constant Z: P(x) = f(x)/Z. We will refer to f(x) as the score of x and Z as the partition function. We further assume that P is a Markov network defined on a graph G, so that f factorizes according to:

f(x) = ∏_{c ∈ C} f_c(x_c),   (1)

where C indexes the maximal cliques of G.
2.1 Importance sampling and sequential Monte Carlo
A general way to approximate P is with a weighted collection of K particles, {x^1, …, x^K}:

Q(x) = ∑_{k=1}^{K} w_k δ[x, x^k],   (2)

where ∑_k w_k = 1 and δ[a, b] = 1 if its arguments are equal and 0 otherwise. Importance sampling is a Monte Carlo method that stochastically generates particles from a proposal distribution, x^k ∼ q(x), and computes the weight according to w_k ∝ f(x^k)/q(x^k). Importance sampling has the property that the particle approximation converges to the target distribution as K → ∞ (Robert and Casella, 2004).
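As a concrete illustration, the importance sampling scheme just described can be sketched in a few lines of Python (the function and variable names are ours, and the toy target is illustrative):

```python
import random

def importance_sample(score, support, proposal_probs, K, rng):
    """Self-normalized importance sampling over a discrete support.

    score: unnormalized target f(x); proposal_probs: q(x) over `support`.
    Returns K particles and weights w_k proportional to f(x^k) / q(x^k).
    """
    particles = rng.choices(support, weights=proposal_probs, k=K)
    q = dict(zip(support, proposal_probs))
    raw = [score(x) / q[x] for x in particles]
    total = sum(raw)
    return particles, [r / total for r in raw]

# Toy target f(x) = x + 1 on {0, 1, 2}, i.e. p = (1/6, 2/6, 3/6); uniform proposal.
rng = random.Random(0)
particles, weights = importance_sample(lambda x: x + 1, [0, 1, 2],
                                       [1/3, 1/3, 1/3], 20000, rng)
estimate = sum(w * x for w, x in zip(weights, particles))  # E[x] = 4/3
```

With enough particles the weighted estimate converges to the target expectation, as the convergence property above suggests.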
Sequential Monte Carlo methods such as particle filtering (Doucet et al., 2001) apply importance sampling to stochastic dynamical systems (where t indexes time) by sequentially sampling the latent variables x_t at each time point using a proposal distribution q(x_t | x_{1:t−1}). This procedure can produce conditionally low probability particles; therefore, most algorithms include a resampling step which replicates high probability particles and kills off low probability particles. The downside of resampling is that it can produce degeneracy: the particles become concentrated on a small number of hypotheses, and consequently the effective number of particles is low.
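The resampling step can likewise be sketched directly (multinomial resampling is shown here; the experiments later also use stratified resampling, which we omit; names are ours):

```python
import random

def resample_multinomial(particles, weights, rng):
    """Multinomial resampling: draw K particles with replacement in
    proportion to their weights, then reset all weights to uniform."""
    K = len(particles)
    survivors = rng.choices(particles, weights=weights, k=K)
    return survivors, [1.0 / K] * K

# A dominant particle ('a') tends to be replicated, illustrating how
# repeated resampling concentrates the particle set (degeneracy).
rng = random.Random(1)
survivors, new_w = resample_multinomial(['a', 'b', 'c', 'd'],
                                        [0.85, 0.05, 0.05, 0.05], rng)
```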
2.2 Variational inference
Variational methods (Wainwright and Jordan, 2008) define a parametrized family Q of probability distributions and then choose the Q ∈ Q that maximizes the negative variational free energy:

L(Q) = ∑_x Q(x) log [ f(x) / Q(x) ].   (3)
The negative variational free energy is related to the partition function and the KL divergence through the following identity:

log Z = L(Q) + KL[Q ∥ P],   (4)

where

KL[Q ∥ P] = ∑_x Q(x) log [ Q(x) / P(x) ].   (5)
Since KL[Q ∥ P] ≥ 0, the negative variational free energy is a lower bound on the log partition function, achieving equality when the KL divergence is minimized to 0. Maximizing L(Q) with respect to Q is thus equivalent to minimizing the KL divergence between Q and P.
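Writing P(x) = f(x)/Z and using the fact that Q sums to 1, the identity in Eq. 4 follows in two lines:

```latex
\begin{aligned}
\mathcal{L}(Q) &= \sum_x Q(x)\,\log\frac{f(x)}{Q(x)}
               = \sum_x Q(x)\,\log\frac{Z\,P(x)}{Q(x)} \\
               &= \log Z \;-\; \sum_x Q(x)\,\log\frac{Q(x)}{P(x)}
               = \log Z - \mathrm{KL}[Q \,\|\, P].
\end{aligned}
```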
Unlike the Monte Carlo methods described in the previous section, variational methods do not in general converge to the target distribution, since typically P is not a member of the family Q. The advantage of variational methods is that they guarantee an improved bound after each iteration, and convergence is easy to monitor (unlike for most Monte Carlo methods). In practice, variational methods are also often more computationally efficient.
We next consider particle approximations from the perspective of variational inference. We then turn to the application of particle approximations to inference in stochastic dynamical systems.
3 Variational particle approximations
Variational inference can be connected to Monte Carlo methods by viewing the particles as a set of variational parameters parameterizing Q. For the particle approximation defined in Eq. 2, the negative variational free energy takes the following form:

L(Q) = ∑_{k=1}^{K} w_k log [ f(x^k) / (c_k w_k) ],   (6)

where c_k is the number of times an identical replica of x^k appears in the particle set. We wish to find the set of particles {x^1, …, x^K} and associated weights w that maximize L(Q), subject to the constraint that ∑_k w_k = 1. This constraint can be implemented by defining a new functional with Lagrange multiplier λ:

L̃(Q) = L(Q) + λ ( ∑_{k=1}^{K} w_k − 1 ).   (7)
Taking the functional derivative of the Lagrangian with respect to w_k and equating to zero, we obtain:

w_k = f(x^k) / (c_k Z_Q),   (8)

where

Z_Q = ∑_{j=1}^{K} f(x^j) / c_j.   (9)
We can plug the above result back into the definition of L(Q):

L(Q) = log ∑_{k=1}^{K} f(x^k) / c_k.   (10)

Thus, L(Q) is maximized by choosing the K values of x with the highest score. The following theorem shows that allowing c_k > 1 (i.e., having replica particles) can never improve the bound.
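For a hypothesis space small enough to enumerate, the optimal particle approximation can therefore be computed directly: keep the K highest-scoring distinct values, weight each by its normalized score, and the bound is the log of the summed scores. A minimal sketch (names and the toy score are ours):

```python
import math

def optimal_particles(score, support, K):
    """Optimal K-particle approximation over an enumerable space:
    keep the K distinct values with the highest score f(x); the optimal
    weights are w_k = f(x^k) / sum_j f(x^j), and the resulting bound
    is L = log sum_k f(x^k)."""
    top = sorted(support, key=score, reverse=True)[:K]
    scores = [score(x) for x in top]
    total = sum(scores)
    weights = [s / total for s in scores]
    return top, weights, math.log(total)

# Toy score over {0,...,4}: f = [1, 5, 2, 8, 4]; top-2 values are 3 and 1.
f = [1.0, 5.0, 2.0, 8.0, 4.0]
top, weights, bound = optimal_particles(lambda x: f[x], range(5), K=2)
```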
Theorem: Let Q_1 and Q_2 denote two particle approximations, where Q_1 consists of K unique particles (c_k = 1 for all k) and Q_2 is identical to Q_1 except that particle x^j is replicated m > 1 times (displacing m − 1 other particles with cumulative score s). For any choice of particles, L(Q_1) ≥ L(Q_2).
Proof: We first apply Jensen’s inequality to obtain an upper bound on L(Q_2):

L(Q_2) = ∑_k w_k log [ f(x^k) / (c_k w_k) ] ≤ log ∑_k f(x^k) / c_k.   (11)

Since L(Q_1) = log ∑_k f(x^k), we wish to show that ∑_{k ∈ Q_1} f(x^k) ≥ ∑_{k ∈ Q_2} f(x^k)/c_k. All the particles in Q_1 and Q_2 are identical except for x^j and the particles in Q_1 that were displaced by replicas of x^j in Q_2; since the m replicas of x^j in Q_2 contribute m · f(x^j)/m = f(x^j), we only need to establish that f(x^j) + s ≥ f(x^j). Since the score can never be negative, s ≥ 0, and the inequality holds for any choice of particles. ∎
The variational bound can be optimized by coordinate ascent, as specified in Algorithm 1, which we refer to as discrete particle variational inference (DPVI). This algorithm takes advantage of the fact that when optimizing the bound with respect to a single variable, only potentials local to that variable need to be computed. In particular, let x′ be a replica of x with a single-variable modification, x′_n ≠ x_n. We can compute the unnormalized probability of this particle efficiently using the following equation:

f(x′) = f(x) ∏_{c ∈ C(n)} f_c(x′_c) / f_c(x_c),   (12)

where C(n) denotes the set of cliques containing variable n. The variational bound for the modified particle can then be computed using Eq. 10. Particles can be initialized arbitrarily. When repeatedly iterated, DPVI will converge to a local maximum of the negative variational free energy. Note that in principle more sophisticated methods can be used to find the top modes (e.g., Flerova et al., 2012; Yanover and Weiss, 2003); however, we have found that this coordinate ascent algorithm is fast, easy to implement, and very effective in practice (as our experiments below demonstrate).
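The coordinate ascent sweep can be sketched on a toy chain MRF. For clarity this sketch rescores each candidate from scratch rather than using the local update of Eq. 12, and all names are ours:

```python
import math

def dpvi_chain(pair_potential, N, V, K, iters=10):
    """Coordinate-ascent sketch of DPVI on a chain MRF with score
    f(x) = prod_t pair_potential(x[t-1], x[t]).  One sweep updates each
    variable in turn: propose every single-variable modification of every
    particle, then keep the K highest-scoring *unique* assignments."""
    def score(x):
        return math.prod(pair_potential(a, b) for a, b in zip(x, x[1:]))

    particles = [tuple([v] * N) for v in range(min(K, V))]  # arbitrary init
    for _ in range(iters):
        for n in range(N):
            candidates = set(particles)
            for p in particles:
                for v in range(V):
                    candidates.add(p[:n] + (v,) + p[n + 1:])
            particles = sorted(candidates, key=score, reverse=True)[:K]
    scores = [score(p) for p in particles]
    total = sum(scores)
    return particles, [s / total for s in scores], math.log(total)
```

On a ferromagnetic chain (pair potential 2 for agreeing neighbors, 1 otherwise) with K = 2, the sweep settles on the two all-equal assignments, one particle per mode.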
An important aspect of this framework is that it maintains one of the same asymptotic guarantees as importance sampling: Q converges to P as K → ∞, since in this limit DPVI is equivalent to exact inference. Thus, DPVI combines advantages of variational methods (a monotonically decreasing KL divergence between Q and P) with the asymptotic correctness of Monte Carlo methods. The asymptotic complexity of DPVI in the sequential setting is O(NKV), where V is the maximum support size of the latent variables. For the iterative update of the particles the complexity is O(TNKVC), where T is the maximum number of iterations until convergence and C is the maximum clique size. In our experiments, we empirically observed that we only need a small number of iterations and particles in order to outperform our baselines.
4 Filtering and smoothing in hidden Markov models
We now describe how variational particle approximations can be applied to filtering and smoothing in hidden Markov models (HMMs). Consider an HMM with observations y_{1:T} generated by the following stochastic process:

x_t ∼ P(x_t | x_{t−1}, θ),   y_t ∼ P(y_t | x_t, θ),   (13)

where θ is a set of transition and emission parameters. We are particularly interested in marginalized HMMs where the parameters are integrated out: P(x_{1:T}, y_{1:T}) = ∫ P(x_{1:T}, y_{1:T} | θ) P(θ) dθ. This induces dependencies between observation y_t and all previous observations, making inference challenging.
Filtering is the problem of computing the posterior over the latent variables at time t given the history, P(x_t | y_{1:t}). To construct the variational particle approximation of the filtering distribution, we need to compute the product of potentials for variable x_t:

f_t(x_{1:t}) = ∏_{c ∈ C(t)} f_c(x_c).   (14)

We can then apply the coordinate ascent update described in the previous section. This update is simplified in the filtering context due to the underlying Markov structure:

f(x_{1:t}) = f(x_{1:t−1}) P(x_t | x_{1:t−1}) P(y_t | x_{1:t}, y_{1:t−1}).   (15)
At each time step, the algorithm selects the continuations (new variable assignments of the current particle set) that maximize the negative variational free energy.
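One filtering step can be sketched as follows. For simplicity we assume a fully known first-order HMM (transition and emission matrices given in log space) rather than the marginalized case the text focuses on; all names are illustrative:

```python
import math

def dpvi_filter_step(particles, log_scores, trans_logp, emit_logp, y, K):
    """One DPVI filtering step: extend every particle with every possible
    value of the new latent variable, score the continuations, and keep
    the K best unique extended particles."""
    candidates = {}
    for p, ls in zip(particles, log_scores):
        for v in range(len(trans_logp[0])):
            cand = p + (v,)
            candidates[cand] = ls + trans_logp[p[-1]][v] + emit_logp[v][y]
    best = sorted(candidates, key=candidates.get, reverse=True)[:K]
    return best, [candidates[b] for b in best]

# Sticky 2-state HMM; after observing y = 0, the best continuation of the
# particle (0,) is to stay in state 0.
T = [[math.log(0.9), math.log(0.1)], [math.log(0.1), math.log(0.9)]]
E = [[math.log(0.8), math.log(0.2)], [math.log(0.2), math.log(0.8)]]
best, best_ls = dpvi_filter_step([(0,)], [0.0], T, E, y=0, K=2)
```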
Smoothing is the problem of computing the posterior over the latent variables at time t given data from both the past and the future, P(x_t | y_{1:T}). The product of potentials is given by:

f_t(x) = P(x_t | x_{∖t}) P(y_t | x, y_{∖t}),   (16)

where x_{∖t} refers to all the latent variables except x_t (and likewise for y_{∖t}). This potential can be plugged into the updates described in the previous section.
To understand DPVI applied to filtering problems, it is helpful to contemplate three possible fates for a particle at time t (illustrated in Figure 1):

Selection: A single continuation of the particle has nonzero weight. This can be seen as a deterministic version of particle filtering, where the sampling operation is replaced with a max operation.

Splitting: Multiple continuations of the particle have nonzero weight. In this case, the particle is split into multiple particles at the next iteration.

Deletion: No continuation of the particle has nonzero weight. In this case, the particle is deleted from the particle set.
Similar to particle filtering with resampling, DPVI deletes and propagates particles based on their probability. However, as we show later, DPVI is able to escape some of the problems associated with resampling.
(Figure 1: (A) DPVI; (B) Particle Filtering.)
5 Related work
DPVI is related to several other ideas in the statistics literature:

DPVI is a special case of a mixture mean-field variational approximation (Jaakkola and Jordan, 1998; Lawrence, 2000):

Q(x) = ∑_{k=1}^{K} w_k ∏_{n=1}^{N} Q_k(x_n).   (17)

In DPVI, each factor is a delta function, Q_k(x_n) = δ[x_n, x^k_n], and the mixture weights are the particle weights w_k. A distinct advantage of DPVI is that its variational updates do not require the additional lower bound used in general mixture mean-field, where it is needed because the mean-field updates are intractable.

When K = 1, DPVI is equivalent to iterated conditional modes (ICM; Besag, 1986), which iteratively maximizes each latent variable conditional on the rest of the variables.

DPVI is conceptually similar to nonparametric variational inference (Gershman et al., 2012), which approximates the posterior over a continuous state space using a set of particles convolved with a Gaussian kernel.

Frank et al. (2009) used particle approximations within a variational message passing algorithm. The resulting approximation is “local” in the sense that the particles are used to approximate messages passed between nodes in a factor graph, in contrast to the “global” approximation produced by DPVI, which attempts to capture the distribution over the entire set of variables.

Ionides (2008) described a truncated version of importance sampling in which weights falling below some threshold are set to the threshold value. This is similar (though not equivalent) to the DPVI setting where latent variables are sampled exhaustively and without replacement.
6 Experiments
In this section, we compare the performance of DPVI to several widely used approximate inference algorithms, including particle filtering and variational methods. We first present a didactic example to illustrate how DPVI can sometimes succeed where particle filtering fails. We then apply DPVI to four probabilistic models: the Dirichlet process mixture model (DPMM; Antoniak, 1974; Escobar and West, 1995), the infinite HMM (iHMM; Beal et al., 2002; Teh et al., 2006), the infinite relational model (IRM; Kemp et al., 2006) and the Ising model.
6.1 Didactic example: binary HMM
As a didactic example, we use a simple HMM with binary hidden states (x_t ∈ {0, 1}) and observations (y_t ∈ {0, 1}):

P(x_t = 1 | x_{t−1} = 0) = θ_0,   P(x_t = 0 | x_{t−1} = 1) = θ_1,
P(y_t = 1 | x_t = 0) = ε_0,   P(y_t = 0 | x_t = 1) = ε_1,   (18)

with θ_0, θ_1, ε_0, and ε_1 all less than 0.5. We will use this model to illustrate how DPVI differs from particle filtering. Figure 2 compares several inference schemes for this model.
For illustration, we use the following parameters: , , , and . Suppose you observe a sequence generated from this model. For a sufficiently long sequence, a particle filter with resampling will eventually delete all conditionally unlikely particles, and thus suffer from degeneracy. On the other hand, without resampling the approximation will degrade over time because conditionally unlikely particles are never replaced by better particles. For this reason, it is sometimes suggested that resampling only be performed when the effective sample size (ESS) falls below some threshold. The ESS is calculated as follows:
ESS = 1 / ∑_{k=1}^{K} w_k².   (19)
A low ESS means that most of the weight is being placed on a small number of particles, and hence the approximation may be degenerate (although in some cases this may mean that the target distribution is peaky). We evaluated particle filtering with multinomial resampling on synthetic data generated from the HMM described above. Approximation accuracy was measured by using the forwardbackward algorithm to compute the hidden state posterior marginals exactly and then comparing these marginals to the particle approximation. Figure 3 shows performance as a function of ESS threshold, demonstrating that there is a fairly narrow range of thresholds for which performance is good. Thus in practice, successful applications of particle filtering may require computationally expensive tuning of this threshold.
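The ESS and the threshold rule it motivates are straightforward to implement; the choice of threshold (e.g. K/2) is exactly the tuning parameter at issue:

```python
def effective_sample_size(weights):
    """ESS = 1 / sum_k w_k^2 for normalized weights: equals K when the
    weights are uniform and approaches 1 when one particle carries all
    of the mass."""
    return 1.0 / sum(w * w for w in weights)

def should_resample(weights, threshold):
    """Resample only when the ESS drops below the threshold."""
    return effective_sample_size(weights) < threshold
```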
In contrast, DPVI achieves performance comparable to the optimal particle filter, but without a tunable threshold. This occurs because DPVI uses an implicit threshold that is automatically tuned to the problem. Instead of resampling particles, DPVI deletes or propagates particles deterministically based on their relative contribution to the variational bound.
6.2 Dirichlet process mixture model
A DPMM generates data from the following process (Antoniak, 1974; Escobar and West, 1995):

G ∼ DP(α, G_0),   θ_n | G ∼ G,   y_n | θ_n ∼ F(θ_n),

where α is a concentration parameter and G_0 is a base distribution over the parameters θ of the observation distribution F. Since the Dirichlet process induces clustering of the parameters into distinct values, we can equivalently express this model in terms of a distribution over cluster assignments, z_{1:N}. The distribution over z_n is given by the Chinese restaurant process (Aldous, 1985):

P(z_n = k | z_{1:n−1}) ∝ N_k for k ≤ K,   P(z_n = K + 1 | z_{1:n−1}) ∝ α,   (20)

where N_k is the number of data points prior to n assigned to cluster k and K is the number of clusters for which N_k > 0.
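For illustration, here is a generative sampler for these cluster assignments (not part of the inference algorithm; names are ours):

```python
import random

def crp_assignments(n, alpha, rng):
    """Sample cluster assignments z_1..z_n from a Chinese restaurant
    process: point i joins existing cluster k with probability
    proportional to N_k, or a new cluster with probability
    proportional to alpha."""
    counts = []  # counts[k] = N_k, the size of cluster k so far
    z = []
    for _ in range(n):
        options = counts + [alpha]  # existing clusters plus a new one
        k = rng.choices(range(len(options)), weights=options)[0]
        if k == len(counts):        # a new cluster was opened
            counts.append(0)
        counts[k] += 1
        z.append(k)
    return z

z = crp_assignments(100, alpha=1.0, rng=random.Random(0))
```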
6.2.1 Synthetic data
We first demonstrate our approach on synthetic datasets drawn from various mixtures of bivariate Gaussians (see Table 1). The model parameters for each simulated dataset were chosen to create a spectrum of increasingly overlapping clusters, constructed from a small set of Gaussian building blocks.
For the DPMM, we used a Normal likelihood with a Normal-Inverse-Gamma prior on the component parameters:

σ²_{k,d} ∼ InvGamma(a, b),   μ_{k,d} ∼ N(m_0, σ²_{k,d} / κ_0),   (21)

where d indexes observation dimensions and InvGamma(a, b) denotes the Inverse Gamma distribution with shape a and scale b. We used the following hyperparameter values: .

Table 1: Clustering accuracy (V-measure, mean ± s.d.) on the synthetic datasets.

Dataset | Particle Filtering (K = 20) | DPVI (K = 1) | DPVI (K = 20)
D1 | 0.97 ± 0.03 | 0.93 ± 0.05 | 0.99 ± 0.02
D2 | 0.89 ± 0.05 | 0.86 ± 0.07 | 0.90 ± 0.03
D3 | 0.58 ± 0.12 | 0.51 ± 0.03 | 0.74 ± 0.16
D4 | 0.50 ± 0.06 | 0.46 ± 0.05 | 0.55 ± 0.07
D5 | 0.05 ± 0.05 | 0.014 ± 0.02 | 0.14 ± 0.10
D6 | 0.15 ± 0.08 | 0.11 ± 0.06 | 0.19 ± 0.07
Clustering accuracy was measured quantitatively using the V-measure (Rosenberg and Hirschberg, 2007). Figure 4 graphically demonstrates the discovery of latent clusters for both DPVI and particle filtering. As shown in Table 1, we observe only marginal improvements when the means are farthest from each other and the variances are small, since these parameters lead to well-separated clusters in the training set. However, the relative accuracy of DPVI increases considerably when the clusters overlap, either because the means are close to each other or because the variances are high.
An interesting special case is when K = 1. In this case, DPVI is equivalent to the greedy algorithm proposed by Daume (2007) and later extended by Wang and Dunson (2011). In fact, this algorithm was independently proposed in cognitive psychology by Anderson (1991). As shown in Table 1, DPVI with 20 particles outperforms both the greedy algorithm and particle filtering with 20 particles.
6.2.2 Spike sorting
Spike sorting is an important problem in experimental neuroscience settings where researchers collect large amounts of electrophysiological data from multichannel tetrodes. The goal is to extract from noisy spike recordings attributes such as the number of neurons, and to cluster spikes belonging to the same neuron. This problem naturally motivates the use of the DPMM, since the number of neurons recorded by a single tetrode is unknown. Previously,
Wood and Black (2008) applied the DPMM to spike sorting using particle filtering and Gibbs sampling. Here we show that DPVI can outperform particle filtering, achieving high accuracy even with a small number of particles.

We used data collected from a multiunit recording from a human epileptic patient (Quiroga et al., 2004). The raw spike recordings were preprocessed following the procedure proposed by Quiroga et al. (2004), though we note that our inference algorithm is agnostic to the choice of preprocessing. The original data consist of input vectors with
dimensions and 9196 data points. Following Wood and Black (2008), we used a Normal likelihood with a Normal-Inverse-Wishart prior on the component parameters:

Σ_k ∼ IW(ν, Λ),   μ_k ∼ N(μ_0, Σ_k / κ_0),   (22)

where IW(ν, Λ) denotes the Inverse Wishart distribution with ν degrees of freedom and scale matrix Λ. We used the following hyperparameter values: . We compared our algorithm to the current best particle filtering baseline, which uses stratified resampling (Wood and Black, 2008; Fearnhead, 2004). The same model parameters were used for all comparisons. Qualitative results, shown in Figure 5, demonstrate that DPVI is better able to separate the spike waveforms into distinct clusters, despite DPVI being run with 10 particles and particle filtering with 100 particles. We also provide quantitative results by calculating the held-out log-likelihood on an independent test set of spike waveforms. The quantitative results (summarized in Table 2) demonstrate that even with only 10 particles DPVI can outperform particle filtering with 100 particles.
Table 2: Held-out log-likelihood for spike sorting.

Method | Held-out log-likelihood
DPVI () | 3.2474 ()
DPVI () | 1.3888 ()
Particle Filtering (Stratified) () | 1.4771 ()
Particle Filtering (Stratified) () | 5.6757 ()
Particle Filtering (Stratified) () | 3.2965 ()
6.3 Infinite HMM
An iHMM generates data from the following process (Teh et al., 2006):

β ∼ GEM(γ),   π_k | β ∼ DP(α, β),   x_t | x_{t−1} ∼ π_{x_{t−1}},   y_t | x_t ∼ F(θ_{x_t}).

Like the DPMM, the iHMM induces a sequence of cluster assignments. The distribution over cluster assignments is given by the Chinese restaurant franchise (Teh et al., 2006). Letting n_{jk} denote the number of times cluster j transitioned to cluster k, x_t is assigned to cluster k with probability proportional to n_{x_{t−1} k}, or to a cluster drawn from the top level of the franchise with probability proportional to α. If the top level is selected, x_t is assigned to cluster k with probability proportional to the number of times k has been selected at the top level, or to a new cluster (i.e., one never visited from any state) with probability proportional to γ.
6.3.1 Synthetic data
We generated 50 sequences with length 500 from 50 different HMMs, each with 10 hidden and 5 observed states. For the rows of the transition and initial probability matrices of the HMMs we used a symmetric Dirichlet prior with concentration parameter 0.1; for the emission probability matrix, we used a symmetric Dirichlet prior with concentration parameter 10.
Figure 6A illustrates the performance of DPVI and particle filtering (with multinomial and stratified resampling) for varying numbers of particles. Error was quantified by computing the Hamming distance between the true hidden sequence and the inferred sequence, using the Munkres algorithm to find the label permutation that maximizes the overlap between the two sequences. The results show that DPVI outperforms particle filtering in all three cases.
When the data consist of long sequences, resampling at every step produces degeneracy in particle filtering; this tends to result in a smaller number of clusters relative to DPVI. The superior accuracy of DPVI suggests that a larger number of clusters is necessary to capture the latent structure of the data. Not surprisingly, this leads to longer run times (Table 4), but it is important to note that particle filtering and DPVI have comparable per-cluster time complexity.
6.3.2 Text analysis
We next analyzed a real-world dataset: text taken from the beginning of “Alice in Wonderland”, with 31 observation symbols (letters). We used the first 1000 characters for training and the subsequent 4000 characters for testing. Performance was measured by calculating the predictive log-likelihood. We fixed the hyperparameters α and γ to 1 for both DPVI and particle filtering.
We ran one pass of DPVI (filtering) and particle filtering over the training sequence. We then sampled 50 datasets from the distribution over sequences. We truncated the number of states and used the learned transition and emission matrices to compute the predictive log-likelihood of the test sequence. To handle unobserved emissions in the test sequence we used “add-δ” smoothing. Finally, we averaged over all 50 datasets.
We also compared DPVI to the beam sampler (Van Gael et al., 2008), a combination of dynamic programming and slice sampling that was previously applied to this dataset. For the beam sampler, we followed the settings of Van Gael et al. (2008): we ran the sampler for 10000 iterations and collected a sample of the hidden state sequence every 200 iterations. Figure 6B shows the predictive log-likelihood for varying numbers of particles. Even with a small number of particles, DPVI can outperform both particle filtering and the beam sampler.
6.4 Infinite relational model (IRM)
The IRM (Kemp et al., 2006) is a nonparametric model of relational systems. The model simultaneously discovers the clusters of entities and the relationships between the clusters. A key assumption of the model is that each entity belongs to exactly one cluster.
Given a relation R involving m types of entities, the goal is to infer a vector of cluster assignments z^t for all the entities of each type t. (The IRM can be defined for multiple relations, but for simplicity we describe only the single-relation case.) Assuming the cluster assignments for each type are independent, the joint density of the relation and the cluster assignment vectors can be written as:

P(R, z^1, …, z^m) = P(R | z^1, …, z^m) ∏_{t=1}^{m} P(z^t).   (23)

The cluster assignment vectors are drawn from a Chinese restaurant process prior. Given the cluster assignment vectors, the relations are drawn from a Bernoulli distribution with a parameter that depends on the clusters involved in that relation. More formally, let us define a binary relation R : T^1 × ⋯ × T^m → {0, 1}, where T^j is the label of the type occupying position j in the relation. Each relational value is generated according to:

R(i_1, …, i_m) ∼ Bernoulli( η(z^{T^1}_{i_1}, …, z^{T^m}_{i_m}) ),   (24)

where i_j denotes the entity (of type T^j) occupying position j. Each entry of η is drawn from a Beta(β, β) distribution. By using a conjugate Beta-Bernoulli model, we can analytically marginalize η (see Kemp et al., 2006), allowing us to directly compute the likelihood of the relational matrix given the cluster assignments, P(R | z^1, …, z^m).
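The collapsed Beta-Bernoulli computation can be sketched as follows, assuming a Beta(a, b) prior on each block parameter and per-block counts of ones and zeros (names are ours):

```python
from math import lgamma

def log_beta(a, b):
    """Log of the Beta function, B(a, b) = Gamma(a) Gamma(b) / Gamma(a+b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def irm_log_marginal(block_counts, a=1.0, b=1.0):
    """Collapsed Beta-Bernoulli marginal likelihood: the Bernoulli
    parameter of each block is integrated out analytically, giving
    sum over blocks of log B(a + n1, b + n0) - log B(a, b),
    where n1/n0 count the ones/zeros observed in that block."""
    return sum(log_beta(a + n1, b + n0) - log_beta(a, b)
               for n1, n0 in block_counts)
```

For example, with a = b = 1 a single observed 1 in a block has marginal probability 1/2, and the sequence (1, 0) has probability 1/6, matching the Pólya-urn computation.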
We compared the performance of DPVI with Gibbs sampling, using predictive log-likelihood on held-out data as the performance metric. Two datasets analyzed in Kemp et al. (2006), “animals” and “Alyawarra”, were used for this task. The animals dataset (Osherson et al., 1991) is a two-type dataset with animals and features as its types; it contains 50 animals and 85 features. The Alyawarra dataset (Denham, 1973) has a ternary relation involving a set of 104 people and a set of 25 kinship terms.
We removed 20% of the relations from each dataset and computed the predictive log-likelihood for the held-out data. We ran DPVI with 1, 10, and 20 particles for 100 iterations. Given the weights of the particles, we computed the weighted log-likelihood. We also ran 20 independent runs of the Gibbs sampler for 100 iterations and computed the average predictive log-likelihood. Every iteration scans all the data points in all the types sequentially. We set the hyperparameters α and β to 1. Figure 7 illustrates the co-clustering discovered by DPVI for the animals dataset, demonstrating intuitively reasonable animal and feature clusters.
Sample animal clusters:  
A1: Hippopotamus, Elephant, Rhinoceros  
A2: Seal, Walrus, Dolphins, Blue Whale,  
Killer Whale, Humpback Whale  
A3: Beaver, Otter, Polar Bear  
Sample feature clusters:  
F1: Hooves, Long neck, Horns  
F2: Inactive, Slow, Bulbous Body, Tough Skin  
F3: Lives in Fields, Lives in Plains, Grazer  
F4: Walks, Quadrupedal, Ground  
F5: Fast, Agility, Active, Tail 
The results after 100 iterations are presented in Table 5. The best performance is achieved by DPVI with 20 particles. Figure 8 shows the predictive log-likelihood at every iteration of DPVI and Gibbs sampling. For the animals dataset, DPVI with 10 and 20 particles converges in 11 and 18 iterations, respectively. The number of iterations required for convergence on the Alyawarra dataset is just 2 and 3 for 10 and 20 particles, respectively. In terms of computation time per iteration, the only difference between DPVI with one particle and Gibbs sampling is the sorting cost. Hence, for multiple particles versus multiple runs of Gibbs sampling, the only additional cost is the sorting cost over multiple particles (e.g., 10 or 20). This small additional cost is compensated for by a faster convergence rate in our experiments.
Table 5: Predictive log-likelihood.

Method | Animals | Alyawarra
DPVI (K = 1) | 418.498 | 8.452
DPVI (K = 10) | 382.543 | 8.450
DPVI (K = 20) | 370.674 | 8.450
Gibbs (avg. of 20 runs) | 374.986 | 8.453
6.5 Ising model
So far, we have been studying inference in directed graphical models, but DPVI can also be applied to undirected graphical models. We illustrate this using the Ising model for binary vectors x ∈ {0, 1}^N:

f(x) = exp( ∑_{(i,j)} J_{ij} 𝟙[x_i = x_j] + ∑_i b_i x_i ),   (25)

where J and b are fixed parameters. In particular, we study a square lattice ferromagnet, where J_{ij} = J > 0 for neighboring nodes (0 otherwise) and b_i = 0 for all nodes. We refer to J as the coupling strength. This model has two global modes: one with all the nodes set to 1, and one with all the nodes set to 0. As the coupling strength increases, the probability mass becomes increasingly concentrated at these two modes.
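The unnormalized log-score can be sketched as follows, assuming the coupling rewards agreeing neighbors (consistent with the stated modes at all-ones and all-zeros); the function names and the 2×2 example are ours:

```python
def ising_log_score(x, edges, J, b=None):
    """Unnormalized log-score of a binary configuration x: the coupling
    J is added for every edge whose endpoints agree, and b (if given)
    contributes b_i * x_i per node.  With b = 0, the two all-equal
    configurations are the global modes."""
    agree = sum(1 for i, j in edges if x[i] == x[j])
    field = 0.0 if b is None else sum(bi * xi for bi, xi in zip(b, x))
    return J * agree + field

# 2x2 lattice: nodes 0-3, the four edges of the square.
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]
mode_score = ising_log_score((1, 1, 1, 1), edges, J=2.0)     # 4 agreeing edges
checker_score = ising_log_score((1, 0, 0, 1), edges, J=2.0)  # 0 agreeing edges
```

The gap between the mode score and the checkerboard score grows with J, which is why the mass concentrates on the two modes at high coupling strength.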
We applied DPVI to this model, varying the number of particles and the coupling strength. To quantify performance, we computed the DPVI variational lower bound on the partition function and compared it to the lower bound furnished by the mean-field approximation (see Wainwright and Jordan, 2008). Figure 9A shows the results of this analysis for low and high coupling strengths. DPVI consistently achieves a better lower bound than mean-field, even with a single particle, and this advantage is especially conspicuous at high coupling strength. Adding more particles improves the results, but using more than 3 particles does not appear to confer any additional improvement at high coupling strength. These results illustrate how DPVI is able to capture multimodal target distributions, where mean-field approximations break down (since they cannot effectively handle multimodality).
To illustrate the performance of DPVI further, we compare several posterior approximations for the Ising model in Figure 10. In addition to the mean-field approximation, we also compared DPVI with two other standard approximations: the Swendsen-Wang Monte Carlo sampler (Swendsen and Wang, 1987) and loopy belief propagation (BP; Murphy et al., 1999). The sampler tended to produce noisy results, whereas mean-field and BP both failed to capture the multimodal structure of the posterior. In contrast, DPVI with two particles perfectly captured the two modes.
7 Conclusions
This paper introduced a variational framework for particle approximations of discrete probability distributions. We described a practical algorithm for optimizing the approximation, and showed empirically that it can outperform widely used Monte Carlo and variational algorithms. The key to the success of this approach is an optimal selection of particles: rather than generating them randomly (as in Monte Carlo algorithms), we deterministically choose a set of unique particles that minimizes the KL divergence between the approximation and the target distribution. Because we select particles optimally, we can achieve good performance with a smaller number of particles than Monte Carlo algorithms, thereby improving computational efficiency. Another advantage of DPVI is that its deterministic nature eliminates the contribution of Monte Carlo variance to estimation error.
A consistent problem vexing sequential Monte Carlo methods like particle filtering is the double-edged sword of resampling: this step is necessary to remove conditionally unlikely particles, but the resulting loss of particle diversity can lead to degeneracy. As we showed in our experiments, tuning an ESS threshold for resampling can improve performance, but requires finding a relatively narrow sweet spot for the threshold. DPVI achieves comparable performance to the best particle filter by using a deterministic strategy for deleting and replacing particles, avoiding finicky tuning parameters. It is also worth noting two other desirable properties of DPVI in this context: (1) the particle set is guaranteed to be diverse because all particles are unique; (2) all the particles have high probability, so the propagation of conditionally unlikely particles (as happens when particle filtering is run without resampling) is avoided. We believe that this combination of properties is key to the superior performance of DPVI relative to particle filtering.
An important task for future work is to consider how DPVI can be efficiently applied to models with combinatorial latent structure (such as the factorial HMM), which may have too many assignments to enumerate completely. In this setting, it is desirable to use a proposal distribution to selectively sample certain assignments. An interesting possibility is to use randomly seeded optimization algorithms to generate high probability proposals. Since the proposal mechanism does not play any role in the score function (unlike in particle filtering, where samples have to be reweighted), we are free to choose any deterministic or stochastic proposal mechanism without needing to evaluate its probability density function.
In summary, DPVI harmoniously combines a number of ideas from Monte Carlo and variational methods. The resulting algorithm can achieve performance superior to widely used particle filtering, MCMC and mean-field methods, though more work is needed to evaluate its performance on a wider range of probabilistic models and to compare it to other inference algorithms.
Acknowledgments
TDK is generously supported by the Leventhal Fellowship. VKM is supported by the Army Research Office Contract Number 0010363131, Office of Naval Research Award N000141310333, and the DARPA PPAML program. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
References
 Aldous (1985) David Aldous. Exchangeability and related topics. École d’Été de Probabilités de SaintFlour XIII 1983, pages 1–198, 1985.
 Anderson (1991) John R Anderson. The adaptive nature of human categorization. Psychological Review, 98:409–429, 1991.
 Antoniak (1974) Charles E Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, pages 1152–1174, 1974.
 Beal et al. (2002) Matthew J Beal, Zoubin Ghahramani, and Carl E Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems, pages 577–584, 2002.
 Besag (1986) Julian Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society. Series B (Methodological), 48:259–302, 1986.
 Daume (2007) Hal Daume. Fast search for Dirichlet process mixture models. In International Conference on Artificial Intelligence and Statistics, pages 83–90, 2007.
 Denham (1973) Woodrow W Denham. The detection of patterns in Alyawara nonverbal behavior. PhD thesis, University of Washington, 1973.
 Doucet et al. (2001) Arnaud Doucet, Nando De Freitas, Neil Gordon, et al. Sequential Monte Carlo methods in practice. Springer New York, 2001.
 Escobar and West (1995) Michael D Escobar and Mike West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577–588, 1995.
 Fearnhead (2004) Paul Fearnhead. Particle filters for mixture models with an unknown number of components. Statistics and Computing, 14:11–21, 2004.
 Flerova et al. (2012) Natalia Flerova, Emma Rollon, and Rina Dechter. Bucket and mini-bucket schemes for m best solutions over graphical models. In Graph Structures for Knowledge Representation and Reasoning, pages 91–118. Springer, 2012.
 Frank et al. (2009) Andrew Frank, Padhraic Smyth, and Alexander T Ihler. Particle-based variational inference for continuous systems. In Advances in Neural Information Processing Systems, 2009.
 Gershman et al. (2012) Samuel Gershman, Matt Hoffman, and David Blei. Nonparametric variational inference. Proceedings of the 29th International Conference on Machine Learning, 2012.
 Ionides (2008) Edward L Ionides. Truncated importance sampling. Journal of Computational and Graphical Statistics, 17:295–311, 2008.
 Jaakkola and Jordan (1998) Tommi S Jaakkola and Michael I Jordan. Improving the mean field approximation via the use of mixture distributions. In MI Jordan, editor, Learning in Graphical Models, pages 163–173. Springer, 1998.
 Kemp et al. (2006) Charles Kemp, Joshua B Tenenbaum, Thomas L Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In AAAI, volume 3, page 5, 2006.
 Lawrence (2000) Neil D Lawrence. Variational inference in probabilistic models. PhD thesis, University of Cambridge, 2000.
 Murphy et al. (1999) Kevin P Murphy, Yair Weiss, and Michael I Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, pages 467–475. Morgan Kaufmann Publishers Inc., 1999.
 Osherson et al. (1991) Daniel N Osherson, Joshua Stern, Ormond Wilkie, Michael Stob, and Edward E Smith. Default probability. Cognitive Science, 15:251–269, 1991.
 Quiroga et al. (2004) R Quian Quiroga, Z Nadasdy, and Y Ben-Shaul. Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Computation, 16:1661–1687, 2004.
 Robert and Casella (2004) Christian P Robert and George Casella. Monte Carlo Statistical Methods. Springer, 2004.
 Rosenberg and Hirschberg (2007) Andrew Rosenberg and Julia Hirschberg. V-measure: A conditional entropy-based external cluster evaluation measure. In EMNLP-CoNLL, volume 7, pages 410–420, 2007.
 Swendsen and Wang (1987) Robert H Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Physical Review Letters, 58:86–88, 1987.
 Teh et al. (2006) Yee Whye Teh, Michael I Jordan, Matthew J Beal, and David M Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581, 2006.
 Van Gael et al. (2008) Jurgen Van Gael, Yunus Saatci, Yee Whye Teh, and Zoubin Ghahramani. Beam sampling for the infinite hidden Markov model. In Proceedings of the 25th international conference on Machine learning, pages 1088–1095. ACM, 2008.
 Wainwright and Jordan (2008) Martin J Wainwright and Michael I Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1:1–305, 2008.
 Wang and Dunson (2011) Lianming Wang and David B Dunson. Fast Bayesian inference in Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 20, 2011.
 Wood and Black (2008) Frank Wood and Michael J Black. A nonparametric Bayesian alternative to spike sorting. Journal of Neuroscience Methods, 173:1–12, 2008.
 Yanover and Weiss (2003) Chen Yanover and Yair Weiss. Finding the M most probable configurations in arbitrary graphical models. In Advances in Neural Information Processing Systems, 2003.