1 Introduction and related work
Bayesian networks (also called belief networks) are probabilistic graphical models where a set of random variables and their conditional dependencies are described by a directed acyclic graph (DAG). Many supervised and unsupervised models can be considered as special cases of Bayesian networks.
In this paper we focus on the problem of efficient inference in Bayesian networks with multiple layers of continuous latent variables, where exact inference is intractable (e.g. because the conditional dependencies between variables are nonlinear) but the joint distribution is differentiable. Algorithms for approximate inference in Bayesian networks can be roughly divided into two categories: sampling approaches and parametric approaches. Parametric approaches include Belief Propagation [Pea82] or the more recent Expectation Propagation (EP) [Min01]. When it is not reasonable or possible to make assumptions about the posterior (which is often the case), one needs to resort to sampling approaches such as Markov Chain Monte Carlo (MCMC) [Nea93]. In high-dimensional spaces, gradient-based samplers such as Hybrid Monte Carlo [DKPR87] are known for their relatively fast mixing properties. When one is just interested in finding a mode of the posterior, vanilla gradient-based optimization methods can be used. Our method uses the concept of conditionally deterministic variables, which has been used earlier in e.g. [CS05], but not as a tool for efficient inference in general Bayesian networks with continuous variables.
In section 3 we propose a method for transforming a Bayesian network with continuous latent variables into an (equivalent) auxiliary model, where the continuous latent variables $\mathbf{z}$ are replaced by latent variables that are conditionally deterministic given auxiliary variables $\boldsymbol{\epsilon}$. We show that this auxiliary model is equivalent to the original model when the auxiliary variables are integrated out. However, it is also possible to integrate out the $\mathbf{z}$, resulting in an auxiliary joint PDF over $\mathbf{x}$ and $\boldsymbol{\epsilon}$. The main advantage is that inference is much faster in this form. This can be explained by the observation that the Markov blankets of variables in the auxiliary form are larger than those in the original form; consequently, gradients of the auxiliary PDF take into account information from more variables, while the computational complexity of each update is equal to that of the original form. In section 4 we show some empirical results that confirm our theoretical results. Gains are especially noticeable in models with many interconnected latent variables, such as dynamic Bayesian networks [Mur02].
The method is applicable to conditional distributions for which it is possible to design an auxiliary form (see section 3.3). This includes distributions for which a differentiable inverse CDF, or a differentiable approximation thereof, exists.
2 Background
Notes about notation.
We write $P_{\boldsymbol{\theta}}(X)$ and $P_{\boldsymbol{\theta}}(X|Y)$ to denote distributions with parameter $\boldsymbol{\theta}$, i.e. $P_{\boldsymbol{\theta}}(X)$ is shorthand for $P(X|\boldsymbol{\theta})$. We write $p_{\boldsymbol{\theta}}(x)$ and $p_{\boldsymbol{\theta}}(x|y)$ to denote the corresponding (conditional) probability density or mass functions (PDFs or PMFs). In some situations $\boldsymbol{\theta}$ is treated as a random variable. Also note that $\boldsymbol{\theta}$ is treated as a global vector, and each distribution typically only uses a small subset of $\boldsymbol{\theta}$'s elements. Variables are capitalized, sets of variables are capitalized and bold, vectors are written in bold and lower case.

2.1 Bayesian networks
A Bayesian network models a set of random variables $\mathcal{V}$ and their conditional dependencies as a directed acyclic graph, where each variable corresponds to a vertex and each edge to a conditional dependency. Let the distribution of each variable $x_j$ be $P_{\boldsymbol{\theta}}(x_j \mid pa_j)$, where we condition on $x_j$'s (possibly empty) set of parents $pa_j$, and $\boldsymbol{\theta}$ is a global parameter vector. The joint distribution over $\mathcal{V}$ can be computed as follows, using the factorization property of Bayesian networks:
$$P_{\boldsymbol{\theta}}(\mathcal{V}) = \prod_j P_{\boldsymbol{\theta}}(x_j \mid pa_j) \qquad (1)$$
The joint PDF of a Bayesian network can be represented as a factor graph where each factor corresponds to an individual conditional PDF: $p_{\boldsymbol{\theta}}(\mathcal{V}) = \prod_j \phi_j(x_j, pa_j)$, where $x_j$ is a value of $X_j$, $pa_j$ is a value of $X_j$'s parents, and $\phi_j$ is the PDF or PMF of the corresponding conditional distribution.^{1} The Markov blanket of some variable $x_j$ is the set of all variables that appear as arguments of factors in which $x_j$ also appears as an argument.

^{1} For brevity we sometimes treat sets of random variables like $pa_j$ as single random variables, and their instantiations as vectors.
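To make the Markov blanket definition concrete, here is a small hypothetical sketch (not from the paper) that computes it from a DAG represented as a child-to-parents mapping; the graph used is illustrative:

```python
def markov_blanket(graph, v):
    """Return the Markov blanket of v: its parents, its children, and the
    other parents of its children, i.e. every variable sharing a factor with v."""
    parents = set(graph.get(v, []))
    children = {c for c, ps in graph.items() if v in ps}
    co_parents = {p for c in children for p in graph[c]} - {v}
    return parents | children | co_parents

# Example DAG: a chain z1 -> z2 -> z3 with one observation x_j attached to each z_j.
g = {"z2": ["z1"], "z3": ["z2"], "x1": ["z1"], "x2": ["z2"], "x3": ["z3"]}
```

Here `markov_blanket(g, "z2")` collects the parent `z1` and the children `z3` and `x2`; there are no additional co-parents in a chain.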
We use conditionally deterministic variables. The value of such a variable $z_j$ with parents $pa_j$ is deterministically computed with a (possibly nonlinear) function $g_j(\cdot)$ from the parents and the parameters of the Bayesian network: $z_j = g_j(pa_j, \boldsymbol{\theta})$. The PDF of a conditionally deterministic variable is a Dirac delta function, which we define as the limit of a Gaussian PDF with infinitesimal $\sigma$:
$$p_{\boldsymbol{\theta}}(z_j \mid pa_j) = \lim_{\sigma \to 0} \mathcal{N}(z_j \mid g_j(pa_j, \boldsymbol{\theta}), \sigma^2) = \delta(z_j - g_j(pa_j, \boldsymbol{\theta})) \qquad (2)$$

which equals $+\infty$ when $z_j = g_j(pa_j, \boldsymbol{\theta})$, equals 0 everywhere else, and integrates to 1.
2.2 Learning problem under consideration
Let $\mathbf{x}$ be some i.i.d. dataset with $N$ datapoints, where $\mathbf{x}$ is a vector with all data concatenated. Likewise, $\mathbf{z}$ is a value of all latent variables for all datapoints. The joint PDF over data and latent variables is $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})$. We focus on the problem of learning the parameters $\boldsymbol{\theta}$ of Bayesian networks with continuous latent variables where the data likelihood $p_{\boldsymbol{\theta}}(\mathbf{x})$ is intractable to compute or differentiate (which is true in general). We also focus on the case where the joint distribution over all variables is (at least) once differentiable, so it is possible to efficiently compute first-order partial derivatives of the joint distribution w.r.t. variables and parameters.
During inference we often treat the parameters $\boldsymbol{\theta}$ as a random variable, where we let $p(\boldsymbol{\theta})$ be its prior distribution. In this case, two well-known modes of learning can be distinguished: (1) maximum a posteriori (MAP) inference, where we are interested in a value of $\boldsymbol{\theta}$ that (approximately) maximizes $p(\boldsymbol{\theta} \mid \mathbf{x})$, and (2) full Bayesian inference, where we are interested in finding (or approximating) the full posterior distribution $p(\boldsymbol{\theta} \mid \mathbf{x})$.

2.3 Gradient-based learning algorithms
For the learning problem outlined in section 2.2, we summarize some well-known gradient-based learning approaches using approximate inference.
MAP inference with EM.
A general algorithm for finding MAP estimates in the presence of latent variables is the Expectation Maximization (EM) algorithm [DLR77]. Since in our learning problem the expectation is intractable in general, approximations are required, such as Monte Carlo EM (MCEM) [WT90], where an MCMC-based numerical expectation is used in the E-step. The likelihood and prior are differentiable in the case under consideration, so gradients are easily obtained and Hybrid Monte Carlo (HMC) [DKPR87] can be used as a sampler, using $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})$ for fixed $\boldsymbol{\theta}$ and $\mathbf{x}$. HMC has fast mixing properties in the high-dimensional and continuous case. A faster method for approximate MAP inference is to treat the latent variables as parameters and maximize $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})$ w.r.t. $\boldsymbol{\theta}$ and $\mathbf{z}$ simultaneously using gradient ascent, which can be shown to optimize a lower bound of the likelihood [NH98]. However, this comes with a relatively large risk of overfitting.

Full Bayesian inference with HMC.
In the case of full Bayesian estimation we are interested in estimating (or approximating) the full posterior distribution $p(\boldsymbol{\theta} \mid \mathbf{x})$. We can do this by sampling $\boldsymbol{\theta}$ and $\mathbf{z}$ simultaneously from the joint density $p(\boldsymbol{\theta}, \mathbf{z} \mid \mathbf{x})$ and then keeping only the samples of $\boldsymbol{\theta}$. Similar to MCEM, HMC can be used to efficiently generate samples.
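As an illustration of the sampler referred to above, the following is a minimal, hedged sketch of a single HMC transition, assuming the log-density and its gradient are available as Python callables; the step size and number of leapfrog steps are assumed values, not the paper's:

```python
import numpy as np

def hmc_step(q, log_p, grad_log_p, eps=0.1, n_leap=10, rng=np.random.default_rng()):
    """One Hybrid Monte Carlo transition on a differentiable log-density log_p."""
    p0 = rng.standard_normal(q.shape)           # resample Gaussian momentum
    q_new, p = q.copy(), p0.copy()
    p += 0.5 * eps * grad_log_p(q_new)          # initial half momentum step
    for i in range(n_leap):
        q_new += eps * p                        # full position step
        if i < n_leap - 1:
            p += eps * grad_log_p(q_new)        # full momentum step
    p += 0.5 * eps * grad_log_p(q_new)          # final half momentum step
    # Metropolis accept/reject with Hamiltonian H = -log_p(q) + |p|^2 / 2
    h_old = -log_p(q) + 0.5 * p0 @ p0
    h_new = -log_p(q_new) + 0.5 * p @ p
    return q_new if rng.random() < np.exp(h_old - h_new) else q
```

The leapfrog integrator keeps the acceptance rate high even for fairly long trajectories, which is what gives HMC its fast mixing in high dimensions.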
2.4 Gradient-based information flow
The gradient-based approaches outlined above generally perform updates of the variables using gradients of the joint PDF in their inner loop, i.e. (1) for the current values of the variables, acquire gradient information by differentiation; (2) update the values using the currently available gradient information. In Bayesian networks, the joint PDF is available in factorized form (eq. (1)). From eq. (1) it is easy to see that the gradient of the joint PDF with respect to some variable $x_j$ only depends on the variables in the Markov blanket of $x_j$, because only the variables in the Markov blanket of $x_j$ share factors in which $x_j$ appears. The gradient of the joint log-PDF with respect to a variable is equal to the sum of the gradients of the logs of the connected factors in the factor graph.
In consequence, when performing a gradient-based value update of the variables (as outlined above), the current value of each variable only influences the new values of the variables in its Markov blanket. The minimum number of factors that separate two variables $x_i$ and $x_j$ in the factor graph determines the number of gradient steps required before information about the value of $x_i$ has reached $x_j$, and vice versa. We will show that it is possible to formulate the problem in an auxiliary form in which gradient-based inference and learning is generally faster.
3 Inference and learning in auxiliary form
We propose a method for transforming the original Bayesian network into an equivalent form in which gradientbased inference and learning is more efficient.
3.1 Method
We will explain our method with a Bayesian network over variables $\mathbf{X} \cup \mathbf{Z}$, where $\mathbf{X}$ is the set of observed variables and $\mathbf{Z}$ is the set of continuous latent variables that have parents in the graph. For the sake of clarity we don't include discrete latent variables in our explanation. Let the variables $x_j$ and $z_j$ be elements of these sets, with corresponding distributions $P_{\boldsymbol{\theta}}(x_j \mid pa_j)$ and $P_{\boldsymbol{\theta}}(z_j \mid pa_j)$, where $\boldsymbol{\theta}$ is again some global set of parameters. For each variable, $pa_j$ is its set of parents; the parents of each node are subdivided into the sets $\mathbf{x}_{pa_j}$ and $\mathbf{z}_{pa_j}$, where $\mathbf{x}_{pa_j} \subseteq \mathbf{X}$ and $\mathbf{z}_{pa_j} \subseteq \mathbf{Z}$. The Bayesian network models the joint PDF over the variables with factors $\phi^{(x)}_j$ and $\phi^{(z)}_j$ as:
$$p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z}) = \prod_j \phi^{(x)}_j \prod_j \phi^{(z)}_j \qquad (3)$$
$$\phi^{(x)}_j = p_{\boldsymbol{\theta}}(x_j \mid \mathbf{x}_{pa_j}, \mathbf{z}_{pa_j}) \qquad (4)$$
$$\phi^{(z)}_j = p_{\boldsymbol{\theta}}(z_j \mid \mathbf{x}_{pa_j}, \mathbf{z}_{pa_j}) \qquad (5)$$
Figure 1: (a) the original Bayesian network; (b) its auxiliary form.
Step 1: Form an auxiliary Bayesian network.
Form an auxiliary Bayesian network over variables $\mathbf{X}$, $\mathbf{Z}$ and $\boldsymbol{\epsilon}$, where each original $z_j$ is replaced by a conditionally deterministic variable (see eq. (2)), and where each $z_j$ has an extra parent $\epsilon_j$ (an auxiliary variable) that is a root node of the auxiliary network. Let $pa_j$ in the auxiliary network denote each node's parents except any $\epsilon_j$, and let $\boldsymbol{\epsilon}$ denote a value of all auxiliary variables.^{2} See figure 1.

^{2} Latent variables that are root nodes of the graph could actually be copied to the auxiliary graph unchanged, since there is no computational advantage in putting them in auxiliary form, but for the sake of simplicity we also convert them in this description.
Step 2: For each $z_j$, define an appropriate generating function $g_j$ and auxiliary latent variable $\epsilon_j$.
For each $\epsilon_j$ we are going to define an appropriate distribution $p(\epsilon_j)$, and for each $z_j$ an appropriate deterministic generating function $g_j$ where:
$$z_j = g_j(\epsilon_j, pa_j, \boldsymbol{\theta}) \qquad (6)$$
such that the new $z_j$ and the original $z_j$ are equal in distribution:

$$g_j(\epsilon_j, pa_j, \boldsymbol{\theta}) \;\overset{d}{=}\; z_j \mid pa_j \qquad (7)$$
where:

$$\epsilon_j \sim p(\epsilon_j) \qquad (8)$$
$$z_j \sim P_{\boldsymbol{\theta}}(z_j \mid pa_j) \qquad (9)$$
In other words, for each continuous latent variable $z_j$ with parents $pa_j$ we define an auxiliary latent variable $\epsilon_j$ with an appropriate distribution $p(\epsilon_j)$, and a deterministic generating function $g_j$, such that the distribution of the new random variable $g_j(\epsilon_j, pa_j, \boldsymbol{\theta})$ equals that of the original $z_j$. Note that the variable $z_j$ is deterministic given $\epsilon_j$ and $pa_j$, but non-deterministic when conditioned on only $pa_j$. Examples of valid choices of $p(\epsilon_j)$ and $g_j$ are discussed in section 3.3.
Under the condition of eq. (7) it can be shown that the joint $(\mathbf{X}, \mathbf{Z})$ in the auxiliary model and the joint $(\mathbf{X}, \mathbf{Z})$ in the original model are equal in distribution:

$$z_j = g_j(\epsilon_j, pa_j, \boldsymbol{\theta}), \quad \epsilon_j \sim p(\epsilon_j) \qquad (10)$$
$$\Rightarrow \quad (\mathbf{X}, \mathbf{Z})_{\text{aux}} \;\overset{d}{=}\; (\mathbf{X}, \mathbf{Z})_{\text{orig}} \qquad (11)$$
Step 3: Define the auxiliary PDF.
Since each $\epsilon_j$ is a root node in the auxiliary network, its factor corresponds to the PDF we defined above: $\phi^{(\epsilon)}_j = p(\epsilon_j)$. Each factor $\phi^{(x)}_j$ corresponding to an observed variable in the auxiliary network is equal to the original factor, except that any $z_j$ in the function arguments is substituted by $g_j(\epsilon_j, pa_j, \boldsymbol{\theta})$. Each factor $\phi^{(z)}_j$ corresponds to a conditionally deterministic variable (see eq. (2)). In effect, the PDF of the joint distribution $p(\mathbf{x}, \mathbf{z}, \boldsymbol{\epsilon})$ is zero almost everywhere. Interestingly, and this is important, the marginal PDF $p(\mathbf{x}, \boldsymbol{\epsilon})$ can quite easily be retrieved by marginalizing out the conditionally deterministic $\mathbf{z}$:
$$p(\mathbf{x}, \boldsymbol{\epsilon}) = \int p(\mathbf{x}, \mathbf{z}, \boldsymbol{\epsilon}) \, d\mathbf{z} \qquad (12)$$
$$= \int \prod_j p(\epsilon_j) \prod_j \delta(z_j - g_j(\epsilon_j, pa_j, \boldsymbol{\theta})) \prod_j p_{\boldsymbol{\theta}}(x_j \mid \mathbf{x}_{pa_j}, \mathbf{z}_{pa_j}) \, d\mathbf{z} \qquad (13)$$
$$= \prod_j p(\epsilon_j) \int \prod_j \delta(z_j - g_j(\epsilon_j, pa_j, \boldsymbol{\theta})) \prod_j p_{\boldsymbol{\theta}}(x_j \mid \mathbf{x}_{pa_j}, \mathbf{z}_{pa_j}) \, d\mathbf{z} \qquad (14)$$
$$= \prod_j p(\epsilon_j) \prod_j p_{\boldsymbol{\theta}}(x_j \mid \mathbf{x}_{pa_j}, \mathbf{z}_{pa_j}) \qquad (15)$$
where the values of $\mathbf{z}$ are computed (recursively) from their ancestors with $z_j = g_j(\epsilon_j, pa_j, \boldsymbol{\theta})$. We call the marginalized PDF of eq. (15) the auxiliary form of the original problem.
Step 4: Fast inference and learning in auxiliary form.
Important to note from eq. (15) is that factors (conditional PDFs) of observed variables with any $z_j$ as an argument can be expressed as a function of $z_j$'s parents, by substituting $z_j$ by $g_j(\epsilon_j, pa_j, \boldsymbol{\theta})$. If $z_j$ in turn has any other $z_i$ among its parents, the factor can again be expressed as a function of $z_i$'s parents, and so on, recursively. As a result, any factor can be expressed as a function with ancestral observed variables and ancestral auxiliary variables as its arguments. Thus, the main advantage of the auxiliary form (eq. (15)) is that the Markov blankets of the observed variables extend (recursively) through the conditionally deterministic variables, reaching (typically) many more variables. This leads to faster spread of information when performing gradient-based inference and learning, especially for Bayesian networks with many layers of interdependent continuous latent variables.
Evaluating and differentiating the auxiliary PDF for certain values of $\mathbf{x}$ and $\boldsymbol{\epsilon}$ is straightforward, which we explain in section 3.2. We can then e.g. apply gradient-based algorithms (section 2.4) to perform fast inference and learning. Given equality (11), the values of $\mathbf{z}$, computed from corresponding values of $\mathbf{x}$ and $\boldsymbol{\epsilon}$, can at any point be treated as samples of the original model's latent variables.
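As a concrete, hypothetical sketch of this evaluation (not the paper's implementation), the auxiliary log-PDF of a small chain model can be computed by producing the $z$ values in topological order and then summing the log-factors of eq. (15); the choice $z_j = \tanh(\text{parent}) + \sigma\,\epsilon_j$ with unit-variance Gaussian observation factors is an assumed example model:

```python
import math

def log_normal(v, mu, sigma=1.0):
    """Log-density of a univariate Gaussian."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (v - mu)**2 / (2 * sigma**2)

def aux_log_pdf(x, eps, sigma=1.0):
    """Auxiliary log-PDF for an assumed chain z1 -> z2 -> z3 with one
    observation x_j per z_j: z_j = tanh(z_{j-1}) + sigma*eps_j (z_0 = 0)."""
    zs, parent = [], 0.0
    for e in eps:                               # topological order: parents first
        z = math.tanh(parent) + sigma * e       # z_j = g_j(eps_j, pa_j, theta)
        zs.append(z)
        parent = z
    log_p = sum(log_normal(e, 0.0, 1.0) for e in eps)                # p(eps_j) terms
    log_p += sum(log_normal(xj, zj, 1.0) for xj, zj in zip(x, zs))   # p(x_j|z_j) terms
    return log_p, zs
```

The returned `zs` can, as noted above, be treated as samples of the latent variables given the current `eps`.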
3.2 Efficient Implementation
The method to efficiently evaluate and differentiate the auxiliary joint PDF given values $\mathbf{x}$ and $\boldsymbol{\epsilon}$ of the variables is straightforward. First compute the corresponding values of each $z_j = g_j(\epsilon_j, pa_j, \boldsymbol{\theta})$ according to their topological order. This order makes sure that for each $z_j$, the values of its parents are computed first. Subsequently compute the values of all factors, and finally the value of the full joint. Computation of gradients of the joint w.r.t. variables and parameters can be done with the backpropagation algorithm [RHW86]. An alternative approach that requires substantially less manual differentiation work is the use of automatic differentiation software, such as Theano [BBB10].

3.3 Choice of $p(\epsilon_j)$ and $g_j$
An example of a valid choice of $p(\epsilon_j)$ and $g_j$ is based on the inverse transformation method (or Smirnov transform) [Dev86], a method for generating samples from a continuous target distribution by first sampling from a uniform distribution and then transforming this sample through the inverse CDF of the target distribution. Indeed, an obvious choice would be to let $g_j = F^{-1}$, where $F^{-1}$ is the inverse CDF of $P_{\boldsymbol{\theta}}(z_j \mid pa_j)$, and where $\epsilon_j \sim \mathcal{U}(0, 1)$. For some distributions the inverse CDF is not available or not differentiable. In these cases the inverse CDF can often be accurately approximated using moderate-degree polynomials, as e.g. used in the software package R for generating samples from a Gaussian.

Besides the inverse CDF transform, there are often other valid options, e.g. in the case where $z_j$'s distribution is univariate Gaussian with some variance $\sigma_j^2$ and a mean determined by its parents through some (e.g. nonlinear) scalar function $f_j(pa_j)$. In this case a valid choice would be to let $\epsilon_j \sim \mathcal{N}(0, 1)$ and then let $g_j = f_j(pa_j) + \sigma_j \epsilon_j$. This is a valid choice (eq. (7)) since:

$$g_j(\epsilon_j, pa_j, \boldsymbol{\theta}) = f_j(pa_j) + \sigma_j \epsilon_j \qquad (16)$$
$$\sim \mathcal{N}(f_j(pa_j), \sigma_j^2) = P_{\boldsymbol{\theta}}(z_j \mid pa_j) \qquad (17)$$

A straightforward extension to the multivariate case (where $\mathbf{z}_j$ is a random vector) is to treat each element as conditionally independent, in which case a valid solution is analogous to eq. (17).
3.4 Illustrative example
Figure 2: (a) the original network of the example; (b) its auxiliary form.
Take a simple Bayesian network as in figure 2, with three continuous latent variables $z_1$, $z_2$ and $z_3$, and three observed variables $x_1$, $x_2$ and $x_3$. The joint PDF factorizes as

$$p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z}) = p(z_1)\, p(z_2 \mid z_1)\, p(z_3 \mid z_2)\, p(x_1 \mid z_1)\, p(x_2 \mid z_2)\, p(x_3 \mid z_3) \qquad (18)$$
Now imagine we are going to perform gradient-based inference as outlined in section 2.3, in which we iteratively perform first-order gradient-based updates of the latent variables given the observations. The gradient of the joint log-PDF w.r.t. $z_2$ looks like:

$$\nabla_{z_2} \log p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z}) = \nabla_{z_2} \left( \log p(z_2 \mid z_1) + \log p(z_3 \mid z_2) + \log p(x_2 \mid z_2) \right) \qquad (19)$$
Observe that $z_3$ and $x_3$ are not in $z_1$'s Markov blanket: at each gradient step, the values of $z_1$ and $z_2$ influence each other's new values, the values of $z_2$ and $z_3$ influence each other, and the values of $z_3$ and $x_3$ influence each other. It therefore takes three such gradient steps for information about the value of $x_3$ to reach $z_1$ in this example.
Now we are going to find an auxiliary form of the log-PDF in eq. (18). As described in section 3.1, we define an auxiliary network where $z_1$, $z_2$ and $z_3$ are replaced by conditionally deterministic variables, each with an auxiliary latent variable $\epsilon_j$ and a function $g_j$ such that $g_j(\epsilon_j, pa_j, \boldsymbol{\theta})$ and $z_j$ are equal in distribution. Note that putting $z_1$ in auxiliary form is trivial and does not bring any advantages (since it doesn't have parents), but we'll do it anyway for the sake of consistency:

$$z_1 = g_1(\epsilon_1, \boldsymbol{\theta}), \quad z_2 = g_2(\epsilon_2, z_1, \boldsymbol{\theta}), \quad z_3 = g_3(\epsilon_3, z_2, \boldsymbol{\theta}) \qquad (20)$$
The auxiliary PDF (eq. (15)) is:

$$p(\mathbf{x}, \boldsymbol{\epsilon}) = p(\epsilon_1)\, p(\epsilon_2)\, p(\epsilon_3)\, p(x_1 \mid z_1)\, p(x_2 \mid z_2)\, p(x_3 \mid z_3) \qquad (21)$$

where the $z_j$ are computed from $\boldsymbol{\epsilon}$ with eq. (20).
Observe that each $z_j$ is recursively a function of all $\epsilon_i$ with $i \le j$; therefore each variable $x_j$ has all $\epsilon_i$ with $i \le j$ in its Markov blanket, while it only has $z_j$ in its Markov blanket in the original network. This is also reflected in the gradients of the auxiliary joint log-PDF, e.g. w.r.t. $\epsilon_1$:

$$\nabla_{\epsilon_1} \log p(\mathbf{x}, \boldsymbol{\epsilon}) = \nabla_{\epsilon_1} \left( \log p(\epsilon_1) + \log p(x_1 \mid z_1) + \log p(x_2 \mid z_2) + \log p(x_3 \mid z_3) \right) \qquad (22)$$
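The difference in gradient locality can be demonstrated numerically with a small, hedged sketch. The chain below uses assumed unit-variance Gaussian factors (not the paper's exact model), with the valid location-scale choice $z_j = z_{j-1} + \epsilon_j$ for the auxiliary form:

```python
import math

LOG2PI = math.log(2 * math.pi)

def ln(v, m):
    """log N(v; m, 1)."""
    return -0.5 * LOG2PI - 0.5 * (v - m) ** 2

def log_p_orig(z, x):
    """Original-form log-PDF of an assumed Gaussian chain z1 -> z2 -> z3, x_j <- z_j."""
    return (ln(z[0], 0.0) + ln(z[1], z[0]) + ln(z[2], z[1])
            + sum(ln(xj, zj) for xj, zj in zip(x, z)))

def log_p_aux(eps, x):
    """Auxiliary-form log-PDF: z1 = eps1, z2 = z1 + eps2, z3 = z2 + eps3."""
    z = [eps[0], eps[0] + eps[1], eps[0] + eps[1] + eps[2]]
    return sum(ln(e, 0.0) for e in eps) + sum(ln(xj, zj) for xj, zj in zip(x, z))

def grad0(f, v, x, h=1e-6):
    """Central finite difference of f w.r.t. the first latent variable."""
    vp, vm = list(v), list(v)
    vp[0] += h
    vm[0] -= h
    return (f(vp, x) - f(vm, x)) / (2 * h)
```

Changing the far-away observation $x_3$ leaves the original-form gradient w.r.t. $z_1$ unchanged, but shifts the auxiliary-form gradient w.r.t. $\epsilon_1$: information from $x_3$ reaches $\epsilon_1$ in a single step.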
4 Experiments
Two experiments were performed to empirically evaluate the relative efficiency of the auxiliary form.
4.1 Generative model of MNIST digits
In our first experiment we trained generative models of handwritten digits from the MNIST dataset [LBBH98], and compared convergence speed for inference in original versus auxiliary form. The first model consists of two layers of continuous latent random vectors of size 64 each, and one layer with an observed binary random vector of size 784 (the digit images), connected like $\mathbf{x} \leftarrow \mathbf{z}_1 \leftarrow \mathbf{z}_2$. All variables are (random) vectors. The joint PDF is $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z}_1, \mathbf{z}_2) = p(\mathbf{z}_2)\, p(\mathbf{z}_1 \mid \mathbf{z}_2)\, p(\mathbf{x} \mid \mathbf{z}_1)$, where $\mathbf{x}$'s conditional is Bernoulli with means given by a pointwise sigmoid nonlinearity applied to a linear transformation of $\mathbf{z}_1$. The latent variables are connected with a conditional Gaussian PDF whose mean is a nonlinear function of the parent layer; the PDF of the first layer is a standard Gaussian. The parameters $\boldsymbol{\theta}$ are the weights and biases of these transformations. Note that the dependencies between layers bear similarities to a neural network architecture. We also trained a network with a third layer of latent variables, like $\mathbf{x} \leftarrow \mathbf{z}_1 \leftarrow \mathbf{z}_2 \leftarrow \mathbf{z}_3$. The parameters follow a zero-centered Gaussian prior distribution.

We used Monte Carlo EM (MCEM) [WT90] for learning the parameters of this generative network, where Hybrid Monte Carlo [DKPR87] was used for the E-step, and 5 steps of Adagrad [DHS10] for the M-step. In each HMC iteration, five leapfrog steps were performed with a stepsize of 0.01. This setting worked well, with a reasonable acceptance rate for all experiments. Adagrad was used with the same initial stepsize for all experiments. Training was performed on a random subset of 10000 digits of the training set. All experiments were performed with the same initial parameters, obtained by sampling from a zero-centered Gaussian. The auxiliary form of eq. (17) was used.
Results
MAP inference progressed quite slowly in original form, but much faster in auxiliary form, in both the 2-layer and the 3-layer network, as expected from theory. Inference with the auxiliary transform converged an order of magnitude faster in both cases. Interestingly, the 3-layer model did not improve upon the data likelihood. See figure 3.
4.2 Dynamic Bayesian network
In our second experiment we learned the parameters (also using MAP) of a dynamic Bayesian network of length 10 with a structure as in section 3.4, with both the latent variables and the observed variables being continuous random vectors of size 10. Each timestep's latent vector depended on the previous timestep's latent vector, and each observed vector depended on the current timestep's latent vector, through conditional Gaussians matching the auxiliary form of eq. (17). The data was generated by sampling parameters from the prior and then forward sampling through the network, generating 100 datapoints, each with an assignment of the observed variables for each timestep. The parameter prior, the initial parameters, the auxiliary form used and the MCEM hyperparameters were the same as in the MNIST experiment.
Results
Similar to the MNIST experiment, MAP inference converged an order of magnitude faster in auxiliary form. See figure 3 (right).
5 Conclusion
The main contribution of this paper is that we show how to transform a Bayesian network with continuous latent variables into an auxiliary form. In this auxiliary form, the latent variables are conditionally deterministic and can be integrated out; instead, we sample the auxiliary variables. Since variables in the auxiliary form have larger Markov blankets, such inference is generally faster. The method improves inference efficiency especially when there are multiple layers of dependent latent variables. The method is applicable to any conditional distribution with a differentiable inverse CDF, but we also show that simpler transforms are often possible. Efficiency is evaluated and confirmed empirically through measurements of inference efficiency with a generative model of the MNIST dataset and with a dynamic Bayesian network.
References
 [BBB10] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), volume 4, 2010.
 [CS05] Barry R Cobb and Prakash P Shenoy. Nonlinear deterministic relationships in Bayesian networks. In Symbolic and Quantitative Approaches to Reasoning with Uncertainty, pages 27–38. Springer, 2005.
 [Dev86] Luc Devroye. Nonuniform Random Variate Generation. Springer, 1986.

 [DHS10] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
 [DKPR87] Simon Duane, Anthony D Kennedy, Brian J Pendleton, and Duncan Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216–222, 1987.
 [DLR77] Arthur P Dempster, Nan M Laird, and Donald B Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), pages 1–38, 1977.
 [LBBH98] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

 [Min01] Thomas P Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 362–369. Morgan Kaufmann Publishers Inc., 2001.
 [Mur02] Kevin Patrick Murphy. Dynamic Bayesian networks: representation, inference and learning. PhD thesis, University of California, Berkeley, 2002.
 [Nea93] Radford M Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993.
 [NH98] Radford M Neal and Geoffrey E Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pages 355–368. Springer, 1998.
 [Pea82] Judea Pearl. Reverend Bayes on inference engines: A distributed hierarchical approach. Cognitive Systems Laboratory, School of Engineering and Applied Science, University of California, Los Angeles, 1982.
 [RHW86] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by backpropagating errors. Nature, 323(6088):533–536, 1986.
 [WT90] Greg CG Wei and Martin A Tanner. A Monte Carlo implementation of the EM algorithm and the poor man’s data augmentation algorithms. Journal of the American Statistical Association, 85(411):699–704, 1990.