# Probabilistic Duality for Parallel Gibbs Sampling without Graph Coloring

We present a new notion of probabilistic duality for random variables involving mixture distributions. Using this notion, we show how to implement a highly-parallelizable Gibbs sampler for weakly coupled discrete pairwise graphical models with strictly positive factors that requires almost no preprocessing and is easy to implement. Moreover, we show how our method can be combined with blocking to improve mixing. Even though our method leads to inferior mixing times compared to a sequential Gibbs sampler, we argue that our method is still very useful for large dynamic networks, where factors are added and removed on a continuous basis, as it is hard to maintain a graph coloring in this setup. Similarly, our method is useful for parallelizing Gibbs sampling in graphical models that do not allow for graph colorings with a small number of colors such as densely connected graphs.


## 1 Introduction

Inference in general discrete graphical models is hard. Besides variational methods, the main approach for inference in such models is through running a Markov chain that in the limit draws samples from the true posterior distribution.

One such Markov chain is given by the so-called Gibbs sampler, first introduced by Geman and Geman. In each step, one random variable is resampled given all the others, using the conditional probability distributions in the graphical model. Under mild hypotheses, the Gibbs sampler produces an ergodic Markov chain that converges to the target distribution. The main appeal of the Gibbs sampler lies in its simplicity and ease of implementation. Unfortunately, for highly coupled random variables, mixing of this Markov chain can be prohibitively slow. Moreover, in order to achieve ergodicity, we have to sample one variable after another, yielding an inherently sequential algorithm. However, with the advent of affordable parallel computing hardware in the form of GPUs, it is desirable to have a parallel sampling algorithm. Early attempts simply ran all the update steps in parallel; however, this update schedule usually does not converge to the target distribution.

Another common approach to parallel Gibbs sampling is to compute a graph coloring of the underlying graph and then perform Gibbs sampling blockwise. However, it is not always straightforward to find an appropriate graph coloring, and it is hard to maintain such a coloring in a dynamic setup, i.e. when factors are added and removed on a continuous basis.

In this paper, we present a simple method for parallelizing a Gibbs sampler that does not require a graph coloring. This is particularly useful in situations where a graph coloring is hard to obtain or the graph topology changes frequently, which would require maintaining or recomputing the coloring.

## 2 Related Work

An early attempt at parallelization for Ising models was described by Swendsen and Wang. Their method was later generalized to arbitrary probabilistic graphical models. However, while the Swendsen-Wang algorithm mixes quickly for the Ising model without unary potentials, this need not be the case for general probabilistic graphical models [6, 7]. Higdon presents a method for performing partial Swendsen-Wang updates. However, sampling with Higdon's method requires sampling from a coarser graphical model as a subproblem, which Higdon tackles using conventional sampling methods. Our dualization strategy allows us to circumvent this step, so that only standard clusterwise sampling is required.

Gonzalez et al. describe two ways of parallelizing Gibbs sampling in discrete Markov random fields. The first method relies on computing graph colorings, the second on decomposing the graph into blocks (called splashes) consisting of subgraphs with limited tree width. Both methods are complementary in that graph colorings work well for loosely coupled graphical models whereas splash sampling works well in the strongly coupled case. However, computing a minimal graph coloring is an NP-hard problem, and the number of colors necessary depends on the graph. Moreover, it is hard to maintain a graph coloring in a dynamic setting in which the graph topology is no longer constant. Our approach does not suffer from these issues and requires almost no preprocessing. Moreover, our approach can be combined with splash sampling. Whereas their approach requires the splashes to be induced subgraphs of the graphical model, our approach allows us to select arbitrary subgraphs of the graphical model as splashes, making it possible to use splashes containing many variables.

Schmidt et al. show how Gaussian scale mixtures (GSMs) can be used for efficient sampling from fields of experts. Our approach is very similar to theirs, but deals with discrete graphical models. Moreover, whereas Schmidt et al. started with a model in a primal-dual formulation and trained it on data, we focus on decomposing existing graphical models and mainly use duality for inference. In fact, both techniques can be subsumed in the framework of exponential family harmoniums, which makes it possible to deal with models consisting of both discrete and continuous random variables.

Martens and Sutskever show how to sample from a discrete graphical model using auxiliary variables. Their approach is similar to ours, but relies on computing the (sparse) Cholesky decomposition of a large matrix beforehand. Our approach does not require such a precomputation.

A variant of our algorithm that computes expectations instead of performing sampling corresponds to the mean-field algorithm for junction-tree approximations. We can show that our algorithm minimizes an upper bound on the true mean-field objective. However, whereas that mean-field algorithm has to recalibrate the tree for each update of a potential, our algorithm updates all the potentials at once with only one run of the junction-tree algorithm.

Schwing et al.  designed a system that performs belief propagation in a distributed way. Our approach has a similar goal but follows a different strategy: while Schwing et al. achieve parallelism through a convex formulation, we augment the probabilistic model with additional random variables.

## 3 Probabilistic duality

We first define the notion of duality of random variables. Note that this definition is similar in spirit to the Lagrange functional of a convex optimization problem in convex analysis.

###### Definition 1.

Let x and θ denote random variables. We call functions s and r mapping into a common vector space with some bilinear form ⟨·,·⟩ link functions. We say x and θ are dual to each other via (s, r) if the joint distribution can be written as

 p(x,θ) = h(x)g(θ)e⟨s(x),r(θ)⟩

with positive functions h and g.

In the language of Welling et al., a dual pair of random variables is simply an exponential family harmonium. For a pair of link functions s and r and real-valued functions h and g, we now define the concept of an (s, r)-transform.

###### Definition 2.

The (s, r)-transforms of h and g are defined as

 H(θ) := ∑x h(x)e⟨s(x),r(θ)⟩
 G(x) := ∑θ g(θ)e⟨s(x),r(θ)⟩.

Note the resemblance to the notion of convex conjugacy. The following simple lemma is central to the rest of the theory:

###### Lemma 1.

Let x and θ be two dually related random variables with joint probability density as above. Then

 p(x) = h(x)G(x)
 p(θ) = H(θ)g(θ)
 p(x∣θ) = (h(x)/H(θ))e⟨s(x),r(θ)⟩
 p(θ∣x) = (g(θ)/G(x))e⟨s(x),r(θ)⟩.

Formally, this is similar to the notion of duality in convex optimization: we call the problem of sampling from p(x) the primal sampling problem and the corresponding problem of sampling from p(θ) the dual sampling problem. The primal and dual problems are linked to each other via the conditional densities p(x∣θ) and p(θ∣x). Note that both p(x∣θ) and p(θ∣x) are in the exponential family.

Another view is that we represent p(x) as a mixture of probability distributions from a specified exponential family determined by h and s. The density of the mixture parameters is then given by p(θ).

To obtain a dual formulation of a sampling problem, we have to decompose p(x) as

 p(x) = h(x)G(x) = h(x)∑θ g(θ)e⟨s(x),r(θ)⟩

with some functions h, g, s and r. p(x) is then the marginal of

 p(x,θ) = h(x)g(θ)e⟨s(x),r(θ)⟩.

We show how this can be done for the discrete case in Section 4.1.

Further evidence xe can be incorporated by replacing s and h with

 s̃(x) := s(x, xe) and h̃(x) := h(x, xe).

The lemma already suggests a possible strategy for sampling from p(x) using a simple Gibbs sampler: first sample θ from p(θ∣x), then x from p(x∣θ), and so on. As both x and θ are generally high-dimensional, sampling from p(x∣θ) and from p(θ∣x) could potentially cause problems. However, we show that p(x∣θ) and p(θ∣x) both factorize in Markov random fields for appropriate choices of s and r, which yields algorithms that are very easy to parallelize.
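As a minimal illustration of this alternating scheme, the following sketch runs primal-dual Gibbs on a tiny dual pair with x, θ ∈ {0,1}. The link functions s(x) = x, r(θ) = w·θ and the values of w, h, g are made-up illustration parameters, not quantities from the paper; the empirical marginal of x is compared against the exact marginal obtained by enumeration.

```python
import numpy as np

# Toy dual pair: p(x,θ) ∝ h(x)g(θ)e^(x·w·θ) with x, θ ∈ {0,1}.
# w, h, g are assumed illustration values.
w = 1.5
h = np.array([1.0, 0.8])
g = np.array([1.0, 0.5])

joint = np.array([[h[x] * g[t] * np.exp(x * w * t) for t in (0, 1)]
                  for x in (0, 1)])
Z = joint.sum()
p_x = joint.sum(axis=1) / Z          # exact primal marginal

rng = np.random.default_rng(0)
x, t = 0, 0
hits, n, burn = 0, 50_000, 1_000
for it in range(n + burn):
    # primal step: p(x | θ) ∝ h(x) e^(x·w·θ)
    px = h * np.exp(np.arange(2) * w * t)
    x = rng.choice(2, p=px / px.sum())
    # dual step: p(θ | x) ∝ g(θ) e^(x·w·θ)
    pt = g * np.exp(x * w * np.arange(2))
    t = rng.choice(2, p=pt / pt.sum())
    if it >= burn:
        hits += x
est = hits / n                        # Gibbs estimate of P(x = 1)
```

The chain only ever touches the two tractable conditionals; the joint table is built purely for verification.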

## 4 Duality in MRFs

We now take a closer look at sampling in Markov random fields. The Hammersley-Clifford theorem states that the joint probability density of the nodes x in an MRF can be decomposed as

 p(x) = (1/Z) ∏i=1…N pi(x)

with some probability measures pi and normalization constant Z. In general the pi only depend on a small subset of the components of x (e.g. the cliques in the MRF). Assume that for every pi we have a random variable θi, so that x and θi are dual to each other via (s, ri) with respect to pi. We can then write

 pi(x,θi) = hi(x)gi(θi)e⟨s(x),ri(θi)⟩.
###### Theorem 1.

Let h(x) := ∏i hi(x), g(θ) := ∏i gi(θi), and r(θ) := ∑i ri(θi).

Then p(x) is the marginal of

 p(x,θ) ∝ h(x)g(θ)e⟨s(x),r(θ)⟩.

Thus x and θ = (θ1,…,θN) are dual to each other. The marginal distribution of θ is given by

 p(θ) ∝ H(θ)∏i gi(θi).
###### Proof.

We have

 p(x) ∝ ∏i hi(x)Gi(x) = ∏i [hi(x)∑θi gi(θi)e⟨s(x),ri(θi)⟩]
      = ∑θ1…∑θN [∏i hi(x)][∏i gi(θi)]e⟨s(x),∑i ri(θi)⟩
      = ∑θ h(x)g(θ)e⟨s(x),r(θ)⟩. ∎

###### Corollary 1.

p(x∣θ) and p(θ∣x) are given by

 p(x∣θ) ∝ ∏i hi(x)e⟨s(x),∑i ri(θi)⟩
 p(θ∣x) ∝ ∏i gi(θi)e⟨s(x),∑i ri(θi)⟩.

In particular, p(θ∣x) factorizes over the θi, and if the hi factorize and every component of each ri only depends on one component of θi, p(x∣θ) factorizes as well. In particular, this is true for the standard choice s(x) = x.

### 4.1 Binary pairwise MRFs

We now turn to the special case of a pairwise binary MRF. It turns out that finding a dual representation is equivalent to finding an appropriate factorization of the probability table.

###### Theorem 2.

Let P be proportional to the probability table of two binary random variables x1 and x2. Assume we are given a factorization P = BC⊺, where both B and C have strictly positive entries. Let

 α1 = log(B2,1/B1,1)
 α2 = log(C2,1/C1,1)
 q = log(B1,2C1,2/(B1,1C1,1))
 β1 = log(B2,2B1,1/(B1,2B2,1))
 β2 = log(C2,2C1,1/(C1,2C2,1)).

Then

 p(x1,x2) ∝ ∑θ∈{0,1} h(x)g(θ)e⟨x,r(θ)⟩

with

 r(θ) = θ(β1, β2)⊺
 h(x) = e^(α1x1)e^(α2x2)
 g(θ) = e^(qθ).
###### Proof.

This follows from writing P as a sum of outer products of the columns of B and C,

 P = ∑i=1,2 (B1,i, B2,i)⊺(C1,i, C2,i),

after a simple calculation. ∎
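To make Theorem 2 concrete, the following sketch computes the dual parameters from an arbitrary pair of strictly positive factors B, C (made-up illustration values) and checks that the resulting two-component mixture reproduces P = BC⊺ up to a constant:

```python
import numpy as np

# Hypothetical strictly positive factors; P = B @ C.T is the probability table.
B = np.array([[1.0, 0.7],
              [0.4, 1.2]])
C = np.array([[0.9, 0.3],
              [0.6, 1.1]])
P = B @ C.T

# Dual parameters from Theorem 2 (matrix indices shifted to 0-based).
a1 = np.log(B[1, 0] / B[0, 0])
a2 = np.log(C[1, 0] / C[0, 0])
q = np.log(B[0, 1] * C[0, 1] / (B[0, 0] * C[0, 0]))
b1 = np.log(B[1, 1] * B[0, 0] / (B[0, 1] * B[1, 0]))
b2 = np.log(C[1, 1] * C[0, 0] / (C[0, 1] * C[1, 0]))

def mixture(x1, x2):
    # ∑θ h(x) g(θ) e^⟨x, r(θ)⟩ with h, g, r as in Theorem 2
    return sum(np.exp(a1 * x1 + a2 * x2 + q * t + t * (b1 * x1 + b2 * x2))
               for t in (0, 1))

Q = np.array([[mixture(x1, x2) for x2 in (0, 1)] for x1 in (0, 1)])
ratio = P / Q      # should be a constant matrix (the proportionality factor)
```

The constant of proportionality works out to B1,1·C1,1, which is absorbed into the normalization of the MRF.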

We will now show how to find such a factorization:

###### Lemma 2.

If P is symmetric with det(P) > 0, then P can be factored in the form P = BB⊺, where

 B = ( √p11·cos(φ)  √p11·sin(φ)
       √p22·sin(φ)  √p22·cos(φ) )  with φ = π/4 − (1/2)·arccos(p12/√(p11p22)).
###### Proof.

φ is well defined because det(P) > 0 implies p12 < √(p11p22), and positive because p12 > 0. Now let

 B = ( b̃1⊺
       b̃2⊺ ).

We have

 BB⊺ = ( ∥b̃1∥²     ⟨b̃1,b̃2⟩
         ⟨b̃1,b̃2⟩   ∥b̃2∥² ).

Due to trigonometric considerations

 BB⊺ = ( p11          c·√(p11p22)
         c·√(p11p22)  p22 )  with c = cos(π/2 − 2φ) = sin(2φ) = p12/√(p11p22).

This shows BB⊺ = P, as required. ∎

###### Remark 1.

For a = p12/√(p11p22), we have

 cos(φ) = (1/2)(√(1+a) + √(1−a))
 sin(φ) = (1/2)(√(1+a) − √(1−a)).
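The closed forms of Remark 1 can be checked directly in code. The matrix P below is an arbitrary symmetric positive example; the test verifies that the constructed B is strictly positive and that BB⊺ recovers P:

```python
import numpy as np

# Arbitrary symmetric example with positive entries and det(P) > 0.
P = np.array([[2.0, 0.8],
              [0.8, 1.5]])

a = P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])
# cos(φ) and sin(φ) via the closed forms of Remark 1
c = 0.5 * (np.sqrt(1 + a) + np.sqrt(1 - a))
s = 0.5 * (np.sqrt(1 + a) - np.sqrt(1 - a))

B = np.array([[np.sqrt(P[0, 0]) * c, np.sqrt(P[0, 0]) * s],
              [np.sqrt(P[1, 1]) * s, np.sqrt(P[1, 1]) * c]])
```

Note that c² + s² = 1 and 2cs = a, which is exactly what makes the diagonal and off-diagonal entries of BB⊺ come out right.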
###### Lemma 3.

For any strictly positive P, the matrix

 ( 1/p12  0
   0      1/p21 ) P

is symmetric.

###### Lemma 4.

If det(P) < 0, then

 ( 0  1
   1  0 ) P

has positive determinant.

In summary, we have found a strictly positive factorization of any strictly positive 2×2 matrix. Together with Theorem 1, this yields a dual representation for every binary pairwise MRF.

### 4.2 General discrete MRFs

When the variables are allowed to have multiple states, we can convert any discrete pairwise MRF into a binary MRF using 1-of-K encoding and additional hard constraints that ensure that exactly one binary variable belonging to a random variable in the original MRF takes the value 1. All inference algorithms in this paper therefore generalize to this situation.
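As a small sanity check of this reduction, a pairwise factor over K-state variables can be rewritten as binary pairwise log-potentials over the one-hot bits. The table Ψ below is an arbitrary illustration, not a factor from the paper:

```python
import numpy as np

K = 3
Psi = np.array([[1.0, 0.5, 0.2],     # arbitrary strictly positive factor
                [0.5, 2.0, 0.3],
                [0.2, 0.3, 1.5]])

def one_hot(x, K):
    b = np.zeros(K, dtype=int)
    b[x] = 1                          # hard constraint: exactly one bit is 1
    return b

x1, x2 = 1, 2
b1, b2 = one_hot(x1, K), one_hot(x2, K)
# pairwise binary log-potentials log Ψ[k,l] on bit pairs reproduce the factor
val = np.exp((np.log(Psi) * np.outer(b1, b2)).sum())
```

Under the 1-of-K constraint exactly one term of the double sum survives, so the binary model assigns the same weight as the original factor.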

Dualizing a factor in this way introduces auxiliary binary random variables to the model. Note, however, that no new random variables need to be introduced for zero entries in the factor. For example, for a Potts factor, only a small number of auxiliary binary random variables have to be introduced per factor.

Arbitrary discrete MRFs with higher-order factors can be handled as well, as long as we can find an appropriate positive tensor factorization of the probability table. Moreover, it is also possible to perform inference approximately by fitting a mixture of Bernoulli or mixture of Dirichlet distributions to the factors using expectation maximization.

### 4.3 Swendsen-Wang and local constraints

As it turns out, the Swendsen-Wang algorithm can be seen as a degenerate special case of our formalism for a particular choice of s. Moreover, more general local-constraint models can potentially be derived from this formalism.

More explicitly, for the Ising model let

 s(x) := (−I(xe1 = xe2))e∈E,

where E denotes the set of edges and

 I(xe1 = xe2) = { 0 if xe1 = xe2
                  ∞ else.

The Ising factor of the form

 Pi ∝ ( 1        e^(−wi)
        e^(−wi)  1 )
    = ( e^(−wi)  e^(−wi)
        e^(−wi)  e^(−wi) ) + ( 1−e^(−wi)  0
                               0          1−e^(−wi) )

can then be decomposed as

 Pi(xe1, xe2) = ∑θi∈{0,1} g(θi)e^(−θi·I(xe1=xe2)),

where

 g(0) = e^(−wi)
 g(1) = 1 − e^(−wi).

The primal-dual sampling algorithm then proceeds as follows:

 p(θi ∣ x) ∝ g(θi)e^(−θi·I(xe1=xe2))
 p(x ∣ θ) ∝ e^(−∑i θi·I(xe1=xe2)),

which are just the update rules for the Swendsen-Wang algorithm.

The partial Swendsen-Wang method by Higdon can be regarded as a decomposition of the form

 Pi ∝ ( 1−α      e^(−wi)
        e^(−wi)  1−α ) + ( α  0
                           0  α ).

This leads to the method described by Higdon, where we are left with sampling from a coarser Ising model. Applying a factorization as in Section 4.1 to the first term enables us to circumvent this step, so that all clusters can be sampled independently (the latent variables then have different states).

Similarly, the generalization of the Swendsen-Wang algorithm by Barbu and Zhu can be regarded as a multiplicative decomposition of the form

 Pi ∝ ( e^(wi)  1
        1       e^(wi) ) ⋆ P̃i,

where we use the ⋆-operator to indicate componentwise multiplication. Applying the decomposition above to the first factor yields (a variant of) their method. We can use our method to further decompose P̃i, allowing us to update all clusters in parallel.

## 5 Inference

### 5.1 Sampling

Having a primal-dual representation of p(x) of the form

 p(x,θ) ∝ h(x)g(θ)e⟨s(x),r(θ)⟩,

we can sample from p(x,θ) (and thereby from p(x)) by blockwise Gibbs sampling, i.e.

 x(t+1) ∼ p(x∣θ(t)) ∝ h(x)e⟨s(x),r(θ(t))⟩
 θ(t+1) ∼ p(θ∣x(t+1)) ∝ g(θ)e⟨s(x(t+1)),r(θ)⟩.

For discrete pairwise MRFs with the dual representations described above, both distributions factor, so that sampling can be done in parallel, e.g. on the GPU. Effectively, we have converted our model into a restricted Boltzmann machine.
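The RBM view suggests a compact sketch of the blockwise sampler: with s(x) = x and r(θ) = Wθ both conditionals factor over components, so each block is sampled in one vectorized step. The weights and sizes below are illustrative assumptions; the empirical marginal of one variable is checked against exact enumeration of the tiny joint:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 3, 2
# assumed illustration parameters of p(x,θ) ∝ exp(bx·x + bt·θ + xᵀWθ)
W = rng.normal(scale=0.3, size=(nx, nt))
bx = rng.normal(scale=0.1, size=nx)
bt = rng.normal(scale=0.1, size=nt)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# exact P(x_0 = 1) by enumerating all (x, θ) configurations
Z, m = 0.0, 0.0
for xs in itertools.product((0, 1), repeat=nx):
    for ts in itertools.product((0, 1), repeat=nt):
        x_, t_ = np.array(xs), np.array(ts)
        wgt = np.exp(bx @ x_ + bt @ t_ + x_ @ W @ t_)
        Z += wgt
        m += wgt * x_[0]
exact = m / Z

x = np.zeros(nx, dtype=int)
hits, n, burn = 0, 30_000, 2_000
for it in range(n + burn):
    # θ | x factorizes → sample all dual variables in parallel
    th = (rng.random(nt) < sigmoid(bt + x @ W)).astype(int)
    # x | θ factorizes → sample all primal variables in parallel
    x = (rng.random(nx) < sigmoid(bx + W @ th)).astype(int)
    if it >= burn:
        hits += x[0]
```

On a GPU the two inner lines become single batched kernel launches, which is the point of the construction.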

### 5.2 Estimation of the log-partition-function

The logarithm of the normalization constant of an unnormalized probability distribution, the so-called log-partition function, is important for model selection and related tasks. In this section, we provide a simple estimator for this quantity that can be used for any dual pair of random variables.

The following defines an unbiased estimator for the partition function Z:

 V(x,θ) = p̃(x)p̃(θ)/p̃(x,θ) = Z·p(x)p(θ)/p(x,θ),

where p̃(x), p̃(θ) and p̃(x,θ) are the unnormalized probability distributions. Indeed, for (x,θ) drawn from p(x,θ) we have

 E[V(x,θ)] = Z∫ (p(x)p(θ)/p(x,θ))·p(x,θ) dx dθ = Z.

Written in terms of G and H, V can be expressed as

 V(x,θ) = G(x)H(θ)e^(−⟨s(x),r(θ)⟩).

Note that

 E[−log V(x,θ)] − (−log E[V(x,θ)]) = I(x;θ),

where I(x;θ) is the mutual information between x and θ. This is a measure of the uncertainty of V, as it can be interpreted as a generalized variance (a Jensen gap) for the convex function −log.

This also implies that the expectation of log V(x,θ) yields a lower bound on the log-partition function. In practice, V has too much variance to be useful. Therefore, we estimate the expectation of log V instead, which yields a lower bound on the log-partition function.
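The identities above can be verified exactly on a small discrete pair by direct summation (the unnormalized table below is an arbitrary illustration): E[V] = Z, E[log V] ≤ log Z, and the Jensen gap equals the mutual information.

```python
import numpy as np

# Arbitrary unnormalized joint p̃(x, θ) over a 2-state x and 3-state θ.
joint_u = np.array([[2.0, 0.5, 1.0],
                    [0.3, 1.5, 0.7]])
Z = joint_u.sum()
p = joint_u / Z
px, pt = p.sum(1), p.sum(0)

# V(x,θ) = p̃(x)p̃(θ)/p̃(x,θ)
V = np.outer(joint_u.sum(1), joint_u.sum(0)) / joint_u

EV = (p * V).sum()                          # should equal Z exactly
ElogV = (p * np.log(V)).sum()               # lower-bounds log Z
I = (p * np.log(p / np.outer(px, pt))).sum()  # mutual information I(x;θ)
```

In practice one estimates these expectations from Gibbs samples rather than by summation, but the tiny example makes the algebra transparent.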

###### Example 1.

For the Swendsen-Wang duality for the Ising model, we have h(x) ≡ 1, and therefore

 H(θ) = ∑x ∏e e^(−θe·I(xe1=xe2)) = 2^C(θ),

where C(θ) is the number of clusters defined by θ. Therefore

 log V(x,θ) = log 2 · C(θ) + log p̃(x),

where p̃ is the unnormalized distribution of the Ising model.

A natural question to ask is how this estimator relates to the estimate obtained by running naive mean-field on the primal distribution alone. The following negative result shows that in most cases the estimate obtained by mean-field approximation is preferable:

###### Lemma 5.

We have

 I(x;θ) = Eθ KL(p(x∣θ), p(x)) ≥ minξ KL(p(x∣ξ), p(x)), (1)

where we define p(x∣ξ) ∝ h(x)e⟨s(x),ξ⟩.

###### Proof.

The equality in (1) can be obtained by a straightforward calculation. The inequality is a simple application of the fact that the expectation over ξ = r(θ) is always at least the minimum over ξ. ∎

Note, however, that it is not always straightforward to find the ξ that minimizes KL(p(x∣ξ), p(x)). This is for example the case for the Swendsen-Wang representation from Example 1.

### 5.3 MAP- and mean-field inference

The concept of probabilistic duality that we introduced in Section 3 is also useful to derive parallel MAP- and mean-field inference algorithms.

By applying EM to p(x,θ), we can also compute local MAP assignments to x in parallel. The updates read

 x(t+1) = argmaxx h(x)e⟨s(x),ξ(t)⟩
 ξ(t+1) = E(r(θ) ∣ x(t+1)).

Similarly, we can compute mean-field assignments to x using the updates

 η(t+1) = E(s(x) ∣ ξ(t))
 ξ(t+1) = E(r(θ) ∣ η(t+1)),

where the expectations are taken over the distributions

 p(x∣ξ) ∝ h(x)e⟨s(x),ξ⟩
 p(θ∣η) ∝ g(θ)e⟨η,r(θ)⟩.

Note that these updates have the advantage over ICM and standard naive mean field that they can again be run in parallel and still have convergence guarantees.
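The mean-field updates can be sketched on the RBM-style parameterization from Section 5.1 (all sizes and weights below are illustrative assumptions, with s(x) = x and r(θ) = Wθ, so both expectations are component-wise sigmoids):

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 5, 3
# assumed illustration parameters of p(x,θ) ∝ exp(bx·x + bt·θ + xᵀWθ)
W = rng.normal(scale=0.2, size=(nx, nt))
bx = rng.normal(scale=0.1, size=nx)
bt = rng.normal(scale=0.1, size=nt)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

xi = np.zeros(nx)
for _ in range(200):
    eta = sigmoid(bx + xi)               # η = E[s(x) | ξ], s(x) = x
    xi = W @ sigmoid(bt + W.T @ eta)     # ξ = E[r(θ) | η], r(θ) = Wθ

eta_new = sigmoid(bx + xi)               # one more half-step to test convergence
```

Each half-step updates all components of one block at once, which is what makes the scheme parallel; with the weak couplings assumed here the iteration contracts to a fixed point.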

Using this algorithm, we can show that we minimize an upper bound on the true mean-field objective KL(p(x∣ξ), p(x)):

###### Lemma 6.

We have

 minη KL(p(x∣ξ)p(θ∣η), p(x,θ)) ≥ KL(p(x∣ξ), p(x)). (2)
###### Proof.

We have

 KL(p(x∣ξ)p(θ∣η), p(x,θ))
   = Ex∣ξ Eθ∣η(−log [p(x,θ)/(p(x∣ξ)p(θ∣η))])
   ≥ Ex∣ξ(−log Eθ∣η[p(x,θ)/(p(x∣ξ)p(θ∣η))])
   = Ex∣ξ(−log [p(x)/p(x∣ξ)])
   = KL(p(x∣ξ), p(x)). ∎

Lemma 6 implies that traditional mean-field updates are still preferable to the ones from our method. Indeed, in practice we found that our method can lead to poor approximations in the presence of many factors. However, it is still possible to first run our fast parallel algorithm and then fine-tune the result using traditional mean-field updates.

### 5.4 Blocking

Gibbs sampling as described in Section 5.1 can still be prohibitively slow in the presence of strongly correlated random variables. Similarly, EM and mean-field updates tend to get stuck in local optima in this situation. This problem also occurs for standard Gibbs sampling, ICM and naive mean-field updates. A common way out is to introduce blocking, e.g. as done for Gibbs sampling with splashes.

Unfortunately, for traditional algorithms blocking is only possible with respect to induced subgraphs. As it turns out, our primal-dual decomposition allows us to perform blocking with respect to arbitrary subgraphs. This is illustrated in Figure 1. The idea is to decompose the dual variables θ into two subsets θ0 and θ1, so that p(x, θ0 ∣ θ1) is tractable. This is the case when p(x ∣ θ1) is tractable, because then

 p(x, θ0 ∣ θ1) = p(θ0 ∣ x)·p(x ∣ θ1).

Note that p(x ∣ θ1) is tractable if the graph obtained by removing all the factors belonging to θ1 has low tree-width.

For blocked Gibbs sampling, we sample θ1 in each step and then (x, θ0). As a variation of this process, we can vary the decomposition of θ into θ0 and θ1 in each step.

When we perform max-product belief propagation for x and take expectations for θ, we obtain a new algorithm for MAP inference. Note that in each step, we maximize over all variables at once. Similarly, we obtain a probabilistic inference algorithm by replacing the max-product belief propagation step for x with sum-product belief propagation.

Note also that both the EM, as well as the mean-field algorithm are guaranteed to increase the objective function in each step.

In this framework, the standard sequential Gibbs sampler can also be interpreted as a blocked Gibbs sampler, where blocking is performed with respect to one primal and all neighboring dual variables. Unfortunately, as blocking generally improves mixing of a Gibbs chain, this implies that the standard sequential Gibbs sampler has better mixing properties than the parallel primal-dual sampling algorithm. Still, the primal-dual formulation allows for more flexible blocking schemes, potentially making it possible to improve on the mixing properties of the standard sequential Gibbs sampler in some situations.

## 6 Experimental Results

We tested our method on synthetic graphical models. The first model is an Ising grid with varying coupling strengths. Even though the Ising grid is two-colorable, and it is therefore trivial to implement a parallel Gibbs sampler in this setting, this is no longer possible when the graph topology is dynamic, i.e. when we remove and add factors from time to time. Maintaining a coloring in this setting is itself a hard problem. The second model is given by a random graph. Both the unary and pairwise log-potentials were sampled from a normal distribution. The last model consists of a fully connected Ising model whose constant coupling strength is varied across experiments. Note that for such models, there is an algorithm that computes the partition function and the marginals in polynomial time. However, this is no longer the case when the potentials have varying coupling strengths.

For all these models we compute the potential scale reduction factor (PSRF) for both a sequential Gibbs sampler and our primal-dual sampler by running several Markov chains in parallel. From the PSRF we compute an estimate of the mixing time of the Markov chain by taking the first index after which the PSRF remains below a specified threshold.

The result for the Ising grid is shown in Figure 1(a). For both the primal-dual sampler and the sequential Gibbs sampler, we plotted the number of sweeps over the whole grid needed to achieve a PSRF below the threshold. As expected, both the sequential Gibbs sampler and the primal-dual sampler mix more slowly as we increase the coupling strength. Moreover, even though the primal-dual sampler mixes more slowly than the sequential Gibbs sampler, in our experiments the ratio of the mixing times remained within a small constant range for all coupling strengths. Therefore, even though a Gibbs sampler based on a two-coloring is preferable in the static setting, our primal-dual sampler becomes a viable alternative in the dynamic setting.

Similar results were obtained for the random graphs. As expected, mixing of the primal-dual sampler became worse as the number of factors per vertex increased. While our primal-dual sampler can be an interesting alternative when the factor-to-vertex ratio is low, we do not recommend our method for models with many more factors than variables unless these factors are very weak.

The result for the fully connected Ising-model is shown in Figure 1(b). As there is no coloring available for a fully connected graphical model, we compare the number of full sweeps of our primal-dual sampler against the number of single-site updates of the sequential Gibbs sampler. We see that our method leads to improved mixing in this setting.

## 7 Conclusion

We have introduced a new concept of duality for random variables and showed its usefulness for performing inference in probabilistic graphical models. In particular, we demonstrated how to obtain a highly parallelizable Gibbs sampler. Even though this parallel Gibbs sampler has inferior mixing properties compared to the sequential Gibbs sampler, we believe that it can still be very useful in settings where a good graph coloring is hard to obtain or the graph topology changes frequently. Possible extensions of our approach include good algorithms for selecting appropriate subgraphs for blocking. Moreover, as primal-dual representations are not unique, we believe that further progress can be made by deriving new decompositions. Another line of research is to generalize our ideas to higher-order factors, both exactly and approximately. We believe that this is possible and would allow the methods in this paper to be applied to arbitrary discrete graphical models.

## Acknowledgements

This work was supported by Microsoft Research through its PhD Scholarship Programme.

## References

•  Adrian Barbu and Song-Chun Zhu. Generalizing Swendsen-Wang to sampling arbitrary posterior probabilities. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8):1239–1253, 2005.
•  Boris Flach. A class of random fields on complete graphs with tractable partition function. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9):2304–2306, 2013.
•  Michael R. Garey, David S. Johnson, and Larry Stockmeyer. Some simplified NP-complete problems. In Proceedings of the Sixth Annual ACM Symposium on Theory of Computing, pages 47–63. ACM, 1974.
•  Stuart Geman and Donald Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):721–741, 1984.
•  Joseph Gonzalez, Yucheng Low, Arthur Gretton, and Carlos Guestrin. Parallel Gibbs sampling: From colored fields to thin junction trees. In AISTATS, volume 15, pages 324–332, 2011.
•  David M. Higdon. Auxiliary variable methods for Markov chain Monte Carlo with applications. Journal of the American Statistical Association, 93(442):585–595, 1998.
•  James Martens and Ilya Sutskever. Parallelizable sampling of Markov random fields. In AISTATS, pages 517–524, 2010.
•  David Newman, Padhraic Smyth, Max Welling, and Arthur U. Asuncion. Distributed inference for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pages 1081–1088, 2007.
•  Stefan Roth and Michael J. Black. Fields of experts. International Journal of Computer Vision, 82(2):205–229, 2009.
•  Uwe Schmidt, Qi Gao, and Stefan Roth. A generative perspective on MRFs in low-level vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1751–1758. IEEE, 2010.
•  Alexander Schwing, Tamir Hazan, Marc Pollefeys, and Raquel Urtasun. Distributed message passing for large scale graphical models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1833–1840. IEEE, 2011.
•  Robert H. Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Physical Review Letters, 58(2):86, 1987.
•  Max Welling, Michal Rosen-Zvi, and Geoffrey E. Hinton. Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems, pages 1481–1488, 2004.
•  Wim Wiegerinck. Variational approximations between mean field theory and the junction tree algorithm. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 626–633. Morgan Kaufmann Publishers Inc., 2000.