A Unified Approach for Learning the Parameters of Sum-Product Networks

01/03/2016 ∙ by Han Zhao, et al. ∙ University of Waterloo, Carnegie Mellon University

We present a unified approach for learning the parameters of Sum-Product networks (SPNs). We prove that any complete and decomposable SPN is equivalent to a mixture of trees where each tree corresponds to a product of univariate distributions. Based on the mixture model perspective, we characterize the objective function when learning SPNs based on the maximum likelihood estimation (MLE) principle and show that the optimization problem can be formulated as a signomial program. We construct two parameter learning algorithms for SPNs by using sequential monomial approximations (SMA) and the concave-convex procedure (CCCP), respectively. The two proposed methods naturally admit multiplicative updates, hence effectively avoiding the projection operation. With the help of the unified framework, we also show that, in the case of SPNs, CCCP leads to the same algorithm as Expectation Maximization (EM) despite the fact that they are different in general.


1 Introduction

Sum-product networks (SPNs) are deep graphical model architectures that admit exact probabilistic inference in time linear in the size of the network (Poon and Domingos, 2011). As with traditional graphical models, there are two main problems when learning SPNs: structure learning and parameter learning. Parameter learning is interesting even if the ground-truth structure is known ahead of time; moreover, structure learning depends on parameter learning, so better parameter learning can often lead to better structure learning. Poon and Domingos (2011) and Gens and Domingos (2012) proposed both generative and discriminative learning algorithms for the parameters of SPNs. At a high level, these approaches view SPNs as deep architectures and apply projected gradient descent (PGD) to optimize the data log-likelihood. There are several drawbacks associated with PGD. For example, the projection step hurts the convergence of the algorithm and often leads to solutions on the boundary of the feasible region. PGD also introduces an additional arbitrary parameter, the projection margin, which can be hard to set well in practice. In (Poon and Domingos, 2011; Gens and Domingos, 2012), the authors also mentioned the possibility of applying EM to train SPNs by viewing sum nodes in SPNs as hidden variables, and presented an EM update formula without details. However, the update formula for EM given in (Poon and Domingos, 2011; Gens and Domingos, 2012) is incorrect, as first pointed out and corrected by Peharz (2015).

In this paper we take a different perspective and present a unified framework, which treats Poon and Domingos (2011) and Gens and Domingos (2012) as special cases, for learning the parameters of SPNs. We prove that any complete and decomposable SPN is equivalent to a mixture of trees where each tree corresponds to a product of univariate distributions. Based on the mixture model perspective, we can precisely characterize the functional form of the objective function based on the network structure. We show that the optimization problem associated with learning the parameters of SPNs based on the MLE principle can be formulated as a signomial program (SP), where both PGD and exponentiated gradient (EG) can be viewed as first-order approximations of the signomial program after suitable transformations of the objective function. We also show that the signomial program formulation can be equivalently transformed into a difference-of-convex-functions (DCP) formulation, where the objective function of the program can be naturally expressed as a difference of two convex functions. The DCP formulation allows us to develop two efficient optimization algorithms for learning the parameters of SPNs based on sequential monomial approximations (SMA) and the concave-convex procedure (CCCP), respectively. Both proposed approaches naturally admit multiplicative updates, hence effectively deal with the positivity constraints of the optimization. Furthermore, under our unified framework, we also show that CCCP leads to the same algorithm as EM despite the fact that these two approaches are different from each other in general. Although we mainly focus on MLE-based parameter learning, the mixture model interpretation of SPNs also helps to develop a Bayesian learning method for SPNs (Zhao et al., 2016).

PGD, EG, SMA and CCCP can all be viewed as different levels of convex relaxation of the original SP. Hence the framework also provides an intuitive way to compare all four approaches. We conduct extensive experiments on 20 benchmark data sets to compare the empirical performance of PGD, EG, SMA and CCCP. Experimental results validate our theoretical analysis that CCCP is the best among the four approaches, showing that it converges consistently faster and with more stability than the other three methods. Furthermore, we use CCCP to boost the performance of LearnSPN (Gens and Domingos, 2013), showing that it can achieve results comparable to state-of-the-art structure learning algorithms using SPNs with much smaller network sizes.

2 Background

2.1 Sum-Product Networks

To simplify the discussion of the main idea of our unified framework, we focus our attention on SPNs over Boolean random variables. However, the framework presented here is general and can be easily extended to other discrete and continuous random variables. We first define the notion of a network polynomial. We use $\mathbb{I}_x$ to denote an indicator variable that takes value 1 when $X = x$ and value 0 otherwise.

Definition 1 (Network Polynomial (Darwiche, 2003)).

Let $p(\mathbf{x})$ be an unnormalized probability distribution over a Boolean random vector $\mathbf{X}_{1:N}$. The network polynomial of $p(\mathbf{x})$ is the multilinear function $\sum_{\mathbf{x}} p(\mathbf{x}) \prod_{n=1}^{N} \mathbb{I}_{x_n}$ of the indicator variables, where the summation is over all possible instantiations of the Boolean random vector $\mathbf{X}_{1:N}$.

A Sum-Product Network (SPN) over Boolean variables $\mathbf{X}_{1:N}$ is a rooted DAG that computes the network polynomial over $\mathbf{X}_{1:N}$. The leaves are univariate indicators of Boolean variables and internal nodes are either sums or products. Each sum node computes a weighted sum of its children and each product node computes the product of its children. The scope of a node in an SPN is defined as the set of variables that have indicators among the node's descendants. For any node $v$ in an SPN, if $v$ is a terminal node, say, an indicator variable over $X$, then $\text{scope}(v) = \{X\}$, else $\text{scope}(v) = \bigcup_{\tilde{v} \in Ch(v)} \text{scope}(\tilde{v})$. An SPN is complete iff each sum node has children with the same scope. An SPN is decomposable iff for every product node $v$, $\text{scope}(v_i) \cap \text{scope}(v_j) = \emptyset$ for all $v_i, v_j \in Ch(v)$ with $i \neq j$. The scope of the root node is $\{X_1, \ldots, X_N\}$.

In this paper, we focus on complete and decomposable SPNs. For a complete and decomposable SPN $\mathcal{S}$, each node $v$ in $\mathcal{S}$ defines a network polynomial which corresponds to the sub-SPN (subgraph) rooted at $v$. The network polynomial of $\mathcal{S}$, denoted by $V_{\mathcal{S}}$, is the network polynomial defined by the root of $\mathcal{S}$, which can be computed recursively from its children. The probability distribution induced by an SPN is defined as $\Pr_{\mathcal{S}}(\mathbf{x}) \triangleq V_{\mathcal{S}}(\mathbf{x}) / \sum_{\mathbf{x}'} V_{\mathcal{S}}(\mathbf{x}')$. The normalization constant can be computed in $O(|\mathcal{S}|)$ by setting the values of all the leaf nodes to 1, i.e., $\sum_{\mathbf{x}'} V_{\mathcal{S}}(\mathbf{x}') = V_{\mathcal{S}}(\mathbf{1})$ (Poon and Domingos, 2011). This leads to efficient joint/marginal/conditional inference in SPNs.
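To make the recursive evaluation concrete, here is a minimal Python sketch (our own illustration, not the authors' code) that evaluates a toy complete and decomposable SPN over two Boolean variables bottom-up and obtains the normalization constant by setting all leaves to 1. The dict-based node encoding and helper names are our own.

```python
# Minimal sketch of bottom-up SPN evaluation on a toy network.

def value(node, x):
    """Value of the network polynomial at input x.
    x[n] = None sets both indicator leaves of X_n to 1 (marginalizes X_n)."""
    if node["type"] == "leaf":                  # indicator I[X_n = v]
        n, v = node["var"], node["val"]
        return 1.0 if x[n] is None or x[n] == v else 0.0
    if node["type"] == "product":
        p = 1.0
        for c in node["children"]:
            p *= value(c, x)
        return p
    # sum node: weighted sum over children
    return sum(w * value(c, x) for w, c in zip(node["weights"], node["children"]))

leaf = lambda n, v: {"type": "leaf", "var": n, "val": v}
prod = lambda *cs: {"type": "product", "children": list(cs)}
def psum(ws, *cs):
    return {"type": "sum", "weights": ws, "children": list(cs)}

# Toy SPN over Boolean X0, X1: a root sum over two product nodes, each a
# product of locally normalized sums over one variable's indicators.
spn = psum([0.3, 0.7],
           prod(psum([0.6, 0.4], leaf(0, 1), leaf(0, 0)),
                psum([0.9, 0.1], leaf(1, 1), leaf(1, 0))),
           prod(psum([0.2, 0.8], leaf(0, 1), leaf(0, 0)),
                psum([0.5, 0.5], leaf(1, 1), leaf(1, 0))))

Z = value(spn, [None, None])          # normalization: all leaves set to 1
p10 = value(spn, [1, 0]) / Z          # Pr(X0 = 1, X1 = 0)
```

Because the toy weights happen to be locally normalized, `Z` evaluates to 1; with arbitrary positive weights, the same single all-ones pass still yields the correct normalizer.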

2.2 Signomial Programming (SP)

Before introducing SP, we first introduce geometric programming (GP), which is a strict subclass of SP. A monomial is defined as a function $f: \mathbb{R}_{++}^n \to \mathbb{R}$ of the form $f(\mathbf{x}) = d\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$, where the domain is restricted to the positive orthant ($\mathbb{R}_{++}^n$), the coefficient $d$ is positive and the exponents $a_i \in \mathbb{R}$. A posynomial is a sum of monomials: $g(\mathbf{x}) = \sum_{k=1}^{K} d_k x_1^{a_{1k}} \cdots x_n^{a_{nk}}$. One of the key properties of posynomials is positivity, which allows us to transform any posynomial into the log domain. A GP in standard form is defined to be an optimization problem where both the objective function and the inequality constraints are posynomials and the equality constraints are monomials. There is also an implicit constraint that $\mathbf{x} \in \mathbb{R}_{++}^n$.

A GP in its standard form is not a convex program since posynomials are not convex functions in general. However, we can effectively transform it into a convex problem by applying the logarithmic transformation trick to $\mathbf{x}$, to the multiplicative coefficient of each monomial, and to each objective/constraint function (Chiang, 2005; Boyd et al., 2007).

An SP has the same form as a GP except that the multiplicative constant inside each monomial is not restricted to be positive, i.e., $d_k$ can take any real value. Although the difference seems small, GP and SP differ greatly from a computational perspective. A negative multiplicative constant in a monomial invalidates the logarithmic transformation trick frequently used in GP. As a result, SPs cannot be reduced to convex programs and are believed to be hard to solve in general (Boyd et al., 2007).
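A tiny numeric sketch (our own illustration, with arbitrary hypothetical coefficients) of why the log transformation works for GP but fails for SP: with a positive coefficient, the log of a monomial is an affine function of $\mathbf{y} = \log \mathbf{x}$, while a negative coefficient makes the log undefined.

```python
import math

# Monomial f(x) = d * x1^a1 * x2^a2 with d > 0 (hypothetical numbers).
d, a = 2.5, [1.5, -0.7]

def monomial(x):
    return d * x[0] ** a[0] * x[1] ** a[1]

def affine_in_log(y):                 # log f = log d + a^T y, affine in y
    return math.log(d) + sum(ai * yi for ai, yi in zip(a, y))

x = [0.8, 3.2]                        # any point in the positive orthant
y = [math.log(v) for v in x]
lhs = math.log(monomial(x))
rhs = affine_in_log(y)
# With d < 0 (the signomial case), monomial(x) is negative on the whole
# positive orthant, so math.log(monomial(x)) is undefined: the trick breaks.
```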

3 Unified Approach for Learning

In this section we will show that the parameter learning problem of SPNs based on the MLE principle can be formulated as an SP. We will use a sequence of optimal monomial approximations combined with backtracking line search and the concave-convex procedure to tackle the SP. Due to space constraints, we refer interested readers to the supplementary material for all the proof details.

3.1 Sum-Product Networks as a Mixture of Trees

We introduce the notion of induced trees from SPNs and use it to show that every complete and decomposable SPN can be interpreted as a mixture of induced trees, where each induced tree corresponds to a product of univariate distributions. From this perspective, an SPN can be understood as a huge mixture model where the effective number of components in the mixture is determined by its network structure. The method we describe here is not the first for interpreting an SPN (or the related arithmetic circuit) as a mixture distribution (Zhao et al., 2015; Dennis and Ventura, 2015; Chan and Darwiche); however, the new method can result in an exponentially smaller mixture; see the end of this section for more details.

Definition 2 (Induced SPN).

Given a complete and decomposable SPN $\mathcal{S}$ over $\mathbf{X}_{1:N}$, let $\mathcal{T} = (\mathcal{T}_V, \mathcal{T}_E)$ be a subgraph of $\mathcal{S}$. $\mathcal{T}$ is called an induced SPN from $\mathcal{S}$ if

  1. $\text{Root}(\mathcal{S}) \in \mathcal{T}_V$.

  2. If $v \in \mathcal{T}_V$ is a sum node, then exactly one child of $v$ in $\mathcal{S}$ is in $\mathcal{T}_V$, and the corresponding edge is in $\mathcal{T}_E$.

  3. If $v \in \mathcal{T}_V$ is a product node, then all the children of $v$ in $\mathcal{S}$ are in $\mathcal{T}_V$, and the corresponding edges are in $\mathcal{T}_E$.

Theorem 1.

If $\mathcal{T}$ is an induced SPN from a complete and decomposable SPN $\mathcal{S}$, then $\mathcal{T}$ is a tree that is complete and decomposable.

As a result of Thm. 1, we will use the terms induced SPN and induced tree interchangeably. With some abuse of notation, we use $\mathcal{T}(\mathbf{x})$ to mean the value of the network polynomial of $\mathcal{T}$ with input vector $\mathbf{x}$.

Theorem 2.

If $\mathcal{T}$ is an induced tree from $\mathcal{S}$ over $\mathbf{X}_{1:N}$, then $\mathcal{T}(\mathbf{x}) = \prod_{(v_i, v_j) \in \mathcal{T}_E} w_{ij} \prod_{n=1}^{N} \mathbb{I}_{x_n}$, where $w_{ij}$ is the edge weight of $(v_i, v_j)$ if $v_i$ is a sum node, and $w_{ij} = 1$ if $v_i$ is a product node.

Remark. Although we focus our attention on Boolean random variables for simplicity of discussion and illustration, Thm. 2 can be extended to the case where the univariate distributions at the leaf nodes are continuous, or are discrete with countably infinitely many values, e.g., Gaussian or Poisson distributions. We can simply replace the product of univariate indicators, $\prod_{n=1}^{N} \mathbb{I}_{x_n}$, in Thm. 2 with the general form $\prod_{n=1}^{N} p_n(X_n)$, where $p_n(X_n)$ is a univariate distribution over $X_n$. Also note that it is possible for two unique induced trees to share the same product of univariate distributions, but in that case their weight terms $\prod_{(v_i, v_j) \in \mathcal{T}_E} w_{ij}$ are guaranteed to be different. As we will see shortly, Thm. 2 implies that the joint distribution over $\mathbf{X}_{1:N}$ represented by an SPN is essentially a mixture model with potentially exponentially many components in the mixture.

Definition 3 (Network cardinality).

The network cardinality $\tau_{\mathcal{S}}$ of an SPN $\mathcal{S}$ is the number of unique induced trees.

Theorem 3.

$\tau_{\mathcal{S}} = V_{\mathcal{S}}(\mathbf{1} \mid \mathbf{1})$, where $V_{\mathcal{S}}(\mathbf{1} \mid \mathbf{1})$ is the value of the network polynomial of $\mathcal{S}$ with input vector $\mathbf{1}$ and all edge weights set to 1.

Theorem 4.

$V_{\mathcal{S}}(\mathbf{x} \mid \mathbf{w}) = \sum_{t=1}^{\tau_{\mathcal{S}}} \mathcal{T}_t(\mathbf{x} \mid \mathbf{w})$, where $\mathcal{T}_t$ is the $t$-th unique induced tree of $\mathcal{S}$.

Remark. The above four theorems establish that an SPN $\mathcal{S}$ is an ensemble or mixture of trees, where each tree computes an unnormalized distribution over $\mathbf{X}_{1:N}$. The total number of unique trees in $\mathcal{S}$ is the network cardinality $\tau_{\mathcal{S}}$, which depends only on the structure of $\mathcal{S}$. Each component is a simple product of univariate distributions. We illustrate the theorems above with a simple example in Fig. 1.

Figure 1: A complete and decomposable SPN is a mixture of induced trees. Double circles indicate univariate distributions over $X_1$ and $X_2$. Different colors are used to highlight unique induced trees; each induced tree is a product of univariate distributions over $X_1$ and $X_2$.

Zhao et al. (2015) show that every complete and decomposable SPN is equivalent to a bipartite Bayesian network with a layer of hidden variables and a layer of observable random variables. The number of hidden variables in the bipartite Bayesian network is equal to the number of sum nodes in $\mathcal{S}$. A naive expansion of such a Bayesian network into a mixture model leads to a huge mixture with a number of components that is exponential in $M$, the number of sum nodes in $\mathcal{S}$. Here we complement their theory and show that each complete and decomposable SPN is essentially a mixture of trees, where the effective number of unique induced trees is given by $\tau_{\mathcal{S}} = V_{\mathcal{S}}(\mathbf{1} \mid \mathbf{1})$. Note that $\tau_{\mathcal{S}}$ depends only on the network structure, and can often be much smaller than the naive count. Without loss of generality, assuming that in $\mathcal{S}$ layers of sum nodes alternate with layers of product nodes, we have $\tau_{\mathcal{S}} = \Omega(2^h)$, where $h$ is the height of $\mathcal{S}$. However, the exponentially many trees are recursively merged and combined in $\mathcal{S}$ so that the overall network size remains tractable.
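The counting identity of Thm. 3 is easy to check numerically. The following Python sketch (our own, with our own dict-based node encoding) enumerates induced trees by recursion on a toy SPN and compares the count against the network value with all leaves and edge weights set to 1.

```python
# Sketch checking tau_S = V_S(1|1) on a toy SPN (our encoding, not the paper's).

def count_trees(node):
    """Number of unique induced trees rooted at this node."""
    if node["type"] == "leaf":
        return 1
    if node["type"] == "product":               # keep all children: counts multiply
        r = 1
        for c in node["children"]:
            r *= count_trees(c)
        return r
    return sum(count_trees(c) for c in node["children"])   # sum node: pick one child

def value_all_ones(node):
    """Network value with every leaf and every edge weight set to 1."""
    if node["type"] == "leaf":
        return 1.0
    if node["type"] == "product":
        r = 1.0
        for c in node["children"]:
            r *= value_all_ones(c)
        return r
    return sum(value_all_ones(c) for c in node["children"])

leaf = lambda n, v: {"type": "leaf", "var": n, "val": v}
prod = lambda *cs: {"type": "product", "children": list(cs)}
def psum(ws, *cs):
    return {"type": "sum", "weights": ws, "children": list(cs)}

# Root sum over two products, each a product of two 2-way sums: 4 + 4 = 8 trees.
spn = psum([0.3, 0.7],
           prod(psum([0.6, 0.4], leaf(0, 1), leaf(0, 0)),
                psum([0.9, 0.1], leaf(1, 1), leaf(1, 0))),
           prod(psum([0.2, 0.8], leaf(0, 1), leaf(0, 0)),
                psum([0.5, 0.5], leaf(1, 1), leaf(1, 0))))

tau = count_trees(spn)
```

This toy network is itself tree-shaped, so the enumeration is trivial; in a DAG-shaped SPN the same recursions share subcircuits, which is exactly why $\tau_{\mathcal{S}}$ can be exponential while evaluating $V_{\mathcal{S}}(\mathbf{1} \mid \mathbf{1})$ stays linear in the network size.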

3.2 Maximum Likelihood Estimation as SP

Let’s consider the likelihood function computed by an SPN $\mathcal{S}$ over $N$ binary random variables with model parameters $\mathbf{w}$ and input vector $\mathbf{x} \in \{0, 1\}^N$. Here the model parameters in $\mathcal{S}$ are the edge weights from every sum node, and we collect them together into a long vector $\mathbf{w} \in \mathbb{R}_{++}^{D}$, where $D$ corresponds to the number of edges emanating from sum nodes in $\mathcal{S}$. By definition, the probability distribution induced by $\mathcal{S}$ can be computed by $\Pr_{\mathcal{S}}(\mathbf{x} \mid \mathbf{w}) = V_{\mathcal{S}}(\mathbf{x} \mid \mathbf{w}) / V_{\mathcal{S}}(\mathbf{1} \mid \mathbf{w})$.

Corollary 5.

Let $\mathcal{S}$ be an SPN with weights $\mathbf{w}$ over input vector $\mathbf{x}$. The network polynomial $V_{\mathcal{S}}(\mathbf{x} \mid \mathbf{w})$ is a posynomial: $V_{\mathcal{S}}(\mathbf{x} \mid \mathbf{w}) = \sum_{t=1}^{\tau_{\mathcal{S}}} \left( \prod_{n=1}^{N} \mathbb{I}^{(t)}_{x_n} \right) \prod_{d=1}^{D} w_d^{\mathbb{I}_{w_d \in \mathcal{T}_t}}$, where $\mathbb{I}^{(t)}_{x_n}$ is the indicator leaf of $X_n$ in the $t$-th induced tree and $\mathbb{I}_{w_d \in \mathcal{T}_t}$ is the indicator of whether $w_d$ is in the $t$-th induced tree or not. Each monomial corresponds exactly to a unique induced tree SPN from $\mathcal{S}$.

The above statement is a direct corollary of Thm. 2, Thm. 3 and Thm. 4. From the definition of the network polynomial, we know that $V_{\mathcal{S}}(\mathbf{x} \mid \mathbf{w})$ is a multilinear function of the indicator variables. Corollary 5 works as a complement to characterize the functional form of the network polynomial in terms of $\mathbf{w}$. It follows that the likelihood function can be expressed as the ratio of two posynomial functions. We now show that the optimization problem based on MLE is an SP. Using the definition of $\Pr_{\mathcal{S}}(\mathbf{x} \mid \mathbf{w})$ and Corollary 5, the MLE problem can be rewritten as

$$\underset{\mathbf{w}}{\text{maximize}} \quad \frac{V_{\mathcal{S}}(\mathbf{x} \mid \mathbf{w})}{V_{\mathcal{S}}(\mathbf{1} \mid \mathbf{w})} \qquad \text{subject to} \quad \mathbf{w} \in \mathbb{R}_{++}^{D} \qquad (1)$$
Proposition 6.

The MLE problem for SPNs is a signomial program.

Being nonconvex in general, SPs are hard to solve from a computational perspective (Boyd et al., 2007; Chiang, 2005). However, despite this general hardness, the objective function in the MLE formulation for SPNs has a special structure, i.e., it is the ratio of two posynomials, which makes the design of efficient optimization algorithms possible.

3.3 Difference of Convex Functions

Both PGD and EG are first-order methods and can be viewed as approximating the SP after applying a logarithmic transformation to the objective function only. Although (1) is a signomial program, its objective function is expressed as the ratio of two posynomials. Hence, we can still apply the logarithmic transformation trick used in geometric programming to its objective function and to the variables to be optimized. More concretely, let $\mathbf{y} = \log \mathbf{w}$ (elementwise) and take the log of the objective function; the problem becomes equivalent to maximizing the following new objective without any constraint on $\mathbf{y}$:

$$\underset{\mathbf{y}}{\text{maximize}} \quad \log V_{\mathcal{S}}(\mathbf{x} \mid \exp(\mathbf{y})) - \log V_{\mathcal{S}}(\mathbf{1} \mid \exp(\mathbf{y})) \qquad (2)$$

Note that in the first term of Eq. 2 the set of monomials with nonzero value depends on the current input $\mathbf{x}$. By transforming into log-space, we naturally guarantee the positivity of the solution at each iteration, hence converting a constrained optimization problem into an unconstrained one without any sacrifice. Both terms in Eq. 2 are convex functions in $\mathbf{y}$ after the transformation. Hence, the transformed objective function is now expressed as the difference of two convex functions, which is called a DC function (Hartman et al., 1959). This helps us to design two efficient algorithms to solve the problem based on the general idea of sequential convex approximation for nonlinear programming.

3.3.1 Sequential Monomial Approximation

Let’s consider the linearization of both terms in Eq. 2 in order to apply first-order methods in the transformed space. To compute the gradient with respect to the different components of $\mathbf{y}$, we view each node of the SPN as an intermediate function of the network polynomial and apply the chain rule to back-propagate the gradient. The derivative of $V_{\mathcal{S}}$ with respect to the root node of the network is set to 1. The derivative of the network polynomial with respect to the partial function at each node can then be computed in two passes of the network: the bottom-up pass evaluates the values of all partial functions given the current input, and the top-down pass differentiates the network polynomial with respect to each partial function. Following the evaluation-differentiation passes, the gradient of the objective function in (2) can be computed in $O(|\mathcal{S}|)$. Furthermore, although the computation is conducted over $\mathbf{y}$, the results are fully expressed in terms of $\mathbf{w}$, which suggests that in practice we do not need to explicitly construct $\mathbf{y}$ from $\mathbf{w}$.
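The two passes can be sketched in a few lines of Python (our own minimal illustration, using a toy dict-based node encoding of our own): the bottom-up pass computes each node value, the top-down pass accumulates $\partial V / \partial V_v$ in reverse topological order, and the derivative with respect to an edge weight out of a sum node, $(\partial V / \partial V_i) \cdot V_j$, is checked against a finite difference.

```python
# Sketch of the evaluation-differentiation passes on a toy SPN (our encoding).

leaf = lambda n, v: {"type": "leaf", "var": n, "val": v}
prod = lambda *cs: {"type": "product", "children": list(cs)}
def psum(ws, *cs):
    return {"type": "sum", "weights": ws, "children": list(cs)}

def topo(node, order, seen):
    if id(node) in seen:
        return
    seen.add(id(node))
    for c in node.get("children", []):
        topo(c, order, seen)
    order.append(node)                  # children appear before parents

def passes(root, x):
    order = []
    topo(root, order, set())
    val = {}
    for n in order:                     # bottom-up: evaluate partial functions
        if n["type"] == "leaf":
            val[id(n)] = 1.0 if x[n["var"]] is None or x[n["var"]] == n["val"] else 0.0
        elif n["type"] == "product":
            v = 1.0
            for c in n["children"]:
                v *= val[id(c)]
            val[id(n)] = v
        else:
            val[id(n)] = sum(w * val[id(c)] for w, c in zip(n["weights"], n["children"]))
    dv = {id(n): 0.0 for n in order}
    dv[id(root)] = 1.0                  # dV/dV_root = 1
    for n in reversed(order):           # top-down: differentiate w.r.t. each node
        if n["type"] == "sum":
            for w, c in zip(n["weights"], n["children"]):
                dv[id(c)] += w * dv[id(n)]
        elif n["type"] == "product":
            for c in n["children"]:
                rest = 1.0
                for o in n["children"]:
                    if o is not c:
                        rest *= val[id(o)]
                dv[id(c)] += dv[id(n)] * rest
    return val, dv

spn = psum([0.3, 0.7],
           prod(leaf(0, 1), leaf(1, 1)),
           prod(leaf(0, 1), leaf(1, 0)))
x = [1, 1]
val, dv = passes(spn, x)
child = spn["children"][0]
grad = dv[id(spn)] * val[id(child)]     # dV/dw for the first root edge

eps = 1e-6                              # finite-difference check
spn["weights"][0] += eps
v_plus, _ = passes(spn, x)
fd = (v_plus[id(spn)] - val[id(spn)]) / eps
```

Since every node's derivative is accumulated from all of its parents before it propagates further down, the same code handles DAG-shaped SPNs, not just trees.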

Let $f(\mathbf{y}) = \log V_{\mathcal{S}}(\mathbf{x} \mid \exp(\mathbf{y})) - \log V_{\mathcal{S}}(\mathbf{1} \mid \exp(\mathbf{y}))$. It follows that approximating $f(\mathbf{y})$ with its best linear approximation is equivalent to using the best monomial approximation of the signomial program (1). This leads to a sequential monomial approximation of the original SP formulation: at each iteration $k$, we linearize both terms in Eq. 2 and form the optimal monomial function in terms of $\mathbf{w}^{(k)}$. The additive update of $\mathbf{y}$ leads to a multiplicative update of $\mathbf{w}$ since $\mathbf{w} = \exp(\mathbf{y})$, and we use a backtracking line search to determine the step size of the update in each iteration.

3.3.2 Concave-convex Procedure

Sequential monomial approximation fails to exploit the structure of the problem when learning SPNs. Here we propose another approach based on the concave-convex procedure (CCCP) (Yuille et al., 2002) that uses the fact that the objective function is expressed as the difference of two convex functions. At a high level, CCCP solves a sequence of concave surrogate optimizations until convergence. In many cases, the maximum of a concave surrogate function can only be found using other convex solvers, and as a result the efficiency of CCCP depends heavily on the choice of convex solver. However, we show that by a suitable transformation of the network we can compute the maximum of the concave surrogate in closed form in time linear in the network size, which leads to a very efficient algorithm for learning the parameters of SPNs. We also prove the convergence properties of our algorithm.

Consider the objective function to be maximized in the DCP: $f(\mathbf{y}) = f_1(\mathbf{y}) + f_2(\mathbf{y})$, where $f_1(\mathbf{y}) = \log V_{\mathcal{S}}(\mathbf{x} \mid \exp(\mathbf{y}))$ is a convex function and $f_2(\mathbf{y}) = -\log V_{\mathcal{S}}(\mathbf{1} \mid \exp(\mathbf{y}))$ is a concave function. We can linearize only the convex part to obtain a surrogate function

$$\hat{f}(\mathbf{y}, \mathbf{y}^{(k)}) = f_1(\mathbf{y}^{(k)}) + \nabla f_1(\mathbf{y}^{(k)})^{\top} (\mathbf{y} - \mathbf{y}^{(k)}) + f_2(\mathbf{y}) \qquad (3)$$

for all $\mathbf{y}$. Now $\hat{f}(\mathbf{y}, \mathbf{y}^{(k)})$ is a concave function in $\mathbf{y}$. Due to the convexity of $f_1$ we have $f_1(\mathbf{y}) \geq f_1(\mathbf{y}^{(k)}) + \nabla f_1(\mathbf{y}^{(k)})^{\top} (\mathbf{y} - \mathbf{y}^{(k)})$, and as a result the following two properties always hold: $\hat{f}(\mathbf{y}, \mathbf{y}^{(k)}) \leq f(\mathbf{y})$ and $\hat{f}(\mathbf{y}^{(k)}, \mathbf{y}^{(k)}) = f(\mathbf{y}^{(k)})$. CCCP updates $\mathbf{y}$ at each iteration by solving $\mathbf{y}^{(k+1)} \in \arg\max_{\mathbf{y}} \hat{f}(\mathbf{y}, \mathbf{y}^{(k)})$ unless we already have $\mathbf{y}^{(k)} \in \arg\max_{\mathbf{y}} \hat{f}(\mathbf{y}, \mathbf{y}^{(k)})$, in which case a generalized fixed point has been found and the algorithm stops.

It is easy to show that at each iteration of CCCP we always have $f(\mathbf{y}^{(k+1)}) \geq \hat{f}(\mathbf{y}^{(k+1)}, \mathbf{y}^{(k)}) \geq \hat{f}(\mathbf{y}^{(k)}, \mathbf{y}^{(k)}) = f(\mathbf{y}^{(k)})$. Note also that $f(\mathbf{y})$ computes the log-likelihood of the input $\mathbf{x}$ and is therefore bounded above by 0. By the monotone convergence theorem, $\lim_{k \to \infty} f(\mathbf{y}^{(k)})$ exists and the sequence $\{f(\mathbf{y}^{(k)})\}$ converges.

We now discuss how to compute a closed-form solution for the maximization of the concave surrogate $\hat{f}(\mathbf{y}, \mathbf{y}^{(k)})$. Since $\hat{f}$ is differentiable and concave for any fixed $\mathbf{y}^{(k)}$, a sufficient and necessary condition to find its maximum is

$$\nabla_{\mathbf{y}} \hat{f}(\mathbf{y}, \mathbf{y}^{(k)}) = \nabla f_1(\mathbf{y}^{(k)}) + \nabla f_2(\mathbf{y}) = 0 \qquad (4)$$

In the above equation, if we consider only the partial derivative with respect to $y_{ij} = \log w_{ij}$, the log of the weight on the edge from sum node $i$ to its $j$-th child, we obtain

$$\frac{w_{ij}^{(k)}}{V(\mathbf{x} \mid \mathbf{w}^{(k)})} \frac{\partial V(\mathbf{x} \mid \mathbf{w}^{(k)})}{\partial v_i} V_j(\mathbf{x} \mid \mathbf{w}^{(k)}) = \frac{w_{ij}}{V(\mathbf{1} \mid \mathbf{w})} \frac{\partial V(\mathbf{1} \mid \mathbf{w})}{\partial v_i} V_j(\mathbf{1} \mid \mathbf{w}) \qquad (5)$$

Eq. 5 leads to a system of nonlinear equations, which is hard to solve in closed form. However, if we do a change of variables by considering locally normalized weights (i.e., $w_{ij} \geq 0$ and $\sum_{j} w_{ij} = 1$ for every sum node $i$), then a solution can be easily computed. As described in (Peharz et al., 2015; Zhao et al., 2015), any SPN can be transformed into an equivalent normal SPN with locally normalized weights in a bottom-up pass as follows:

$$\tilde{w}_{ij} = \frac{w_{ij} V_j(\mathbf{1} \mid \mathbf{w})}{\sum_{j'} w_{ij'} V_{j'}(\mathbf{1} \mid \mathbf{w})} \qquad (6)$$

We can then substitute the locally normalized weights of Eq. 6 into Eq. 5: for a normal SPN, $V(\mathbf{1} \mid \mathbf{w}) = 1$ and $V_j(\mathbf{1} \mid \mathbf{w}) = 1$, so the right-hand side simplifies and solving under the local normalization constraint yields the closed-form solution

$$w_{ij}^{(k+1)} \propto w_{ij}^{(k)} \, \frac{\partial V(\mathbf{x} \mid \mathbf{w}^{(k)})}{\partial v_i} \, \frac{V_j(\mathbf{x} \mid \mathbf{w}^{(k)})}{V(\mathbf{x} \mid \mathbf{w}^{(k)})} \qquad (7)$$

where the proportionality constant is determined by $\sum_j w_{ij}^{(k+1)} = 1$.

Note that in the above derivation both $V(\mathbf{x} \mid \mathbf{w}^{(k)})$ and $\partial V(\mathbf{1} \mid \mathbf{w}) / \partial v_i$ can be treated as constants and hence absorbed (neither depends on the child index $j$), since the $w_{ij}$ are constrained to be locally normalized. In order to obtain a solution to Eq. 5, for each edge weight $w_{ij}$ the sufficient statistics include only three terms, i.e., the evaluation value $V_j(\mathbf{x} \mid \mathbf{w}^{(k)})$, the differentiation value $\partial V(\mathbf{x} \mid \mathbf{w}^{(k)}) / \partial v_i$, and the previous edge weight $w_{ij}^{(k)}$, all of which can be obtained in two passes of the network for each input $\mathbf{x}$. Thus the computational complexity to obtain the maximum of the concave surrogate is $O(|\mathcal{S}|)$. Interestingly, Eq. 7 leads to the same update formula as the EM algorithm (Peharz, 2015), despite the fact that CCCP and EM start from different perspectives. We show that all the limit points of the sequence $\{\mathbf{w}^{(k)}\}$ are guaranteed to be stationary points of the DCP in (2).
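To illustrate the multiplicative nature of the update and its monotonicity, here is a small Python sketch (our own, with hypothetical numbers) for the special case of a single sum node over fixed product distributions, i.e., an explicit mixture of trees. In that case the CCCP/EM update reduces to the familiar mixture-weight update $w_t \propto w_t \sum_{\mathbf{x}} \mathcal{T}_t(\mathbf{x}) / V(\mathbf{x})$: positivity is preserved automatically, so no projection is needed, and the training log-likelihood never decreases.

```python
import math

# Two fixed product distributions over Boolean (X0, X1); entries are P(X_n = 1).
comps = [(0.6, 0.9), (0.2, 0.5)]            # hypothetical components

def comp_prob(c, x):
    p = 1.0
    for pn, xn in zip(c, x):
        p *= pn if xn == 1 else 1.0 - pn
    return p

def loglik(w, data):
    return sum(math.log(sum(wt * comp_prob(c, x) for wt, c in zip(w, comps)))
               for x in data)

data = [(1, 1), (1, 0), (0, 1), (1, 1), (0, 0)]
w = [0.5, 0.5]
ll = [loglik(w, data)]
for _ in range(20):
    # Multiplicative CCCP/EM step: w_t <- w_t * sum_x T_t(x) / V(x), renormalized.
    new = [wt * sum(comp_prob(c, x) /
                    sum(wu * comp_prob(cu, x) for wu, cu in zip(w, comps))
                    for x in data)
           for wt, c in zip(w, comps)]
    z = sum(new)
    w = [v / z for v in new]
    ll.append(loglik(w, data))

monotone = all(b >= a - 1e-12 for a, b in zip(ll, ll[1:]))
```

In a general SPN the same update runs per sum node, with the responsibilities supplied by the evaluation-differentiation passes instead of explicit component enumeration.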

Theorem 7.

Let $\{\mathbf{w}^{(k)}\}_{k=1}^{\infty}$ be any sequence generated using Eq. 7 from any positive initial point; then all the limit points of $\{\mathbf{w}^{(k)}\}_{k=1}^{\infty}$ are stationary points of the DCP in (2). In addition, $\lim_{k \to \infty} f(\mathbf{y}^{(k)}) = f(\mathbf{y}^{*})$, where $\mathbf{y}^{*}$ is some stationary point of (2).

We summarize all four algorithms and highlight their connections and differences in Table 1. Although we mainly discuss the batch version of these algorithms, all four can be easily adapted to work in stochastic and/or parallel settings.

Algo Var. Update Type Update Formula
PGD Additive
EG Multiplicative
SMA Multiplicative
CCCP Multiplicative
Table 1: Summary of PGD, EG, SMA and CCCP. Var. means the optimization variables.

4 Experiments

4.1 Experimental Setting

We conduct experiments on 20 benchmark data sets from various domains to compare and evaluate the convergence performance of the four algorithms: PGD, EG, SMA and CCCP (EM). These 20 data sets are widely used in (Gens and Domingos, 2013; Rooshenas and Lowd, 2014) to assess different SPNs for the task of density estimation. All the features in the 20 data sets are binary features. All the SPNs that are used for comparisons of PGD, EG, SMA and CCCP are trained using LearnSPN (Gens and Domingos, 2013). We discard the weights returned by LearnSPN and use random weights as initial model parameters. The random weights are determined by the same random seed in all four algorithms. Detailed information about these 20 datasets and the SPNs used in the experiments are provided in the supplementary material.

4.2 Parameter Learning

We implement all four algorithms in C++. For each algorithm, we set the maximum number of iterations to 50. If the absolute difference in the training log-likelihood between two consecutive iterations falls below a fixed stopping threshold, the algorithm is stopped. For PGD, EG and SMA, we combine each of them with backtracking line search using a fixed weight-shrinking coefficient, and the learning rates of all three methods are initialized to the same value. For PGD, we set the projection margin to 0.01. There is no learning rate and no backtracking line search in CCCP. We use a small smoothing parameter in CCCP to avoid numerical issues.

We show in Fig. 2 the average log-likelihood scores on the 20 training data sets to evaluate the convergence speed and stability of PGD, EG, SMA and CCCP. Clearly, CCCP wins by a large margin over PGD, EG and SMA, in both convergence speed and solution quality. Furthermore, among the four algorithms, CCCP is the most stable one due to its guarantee that the log-likelihood (on training data) does not decrease after each iteration. As shown in Fig. 2, the training curves of CCCP are smoother than those of the other three methods in almost all cases. These 20 experiments also clearly show that CCCP often converges in a few iterations. On the other hand, PGD, EG and SMA are on par with each other since they are all first-order methods. SMA is more stable than PGD and EG and often achieves better solutions. On large data sets, SMA also converges faster than PGD and EG. Surprisingly, EG performs worse than PGD in some cases and is quite unstable despite the fact that it admits multiplicative updates. The “hook-shaped” curves of PGD on some data sets, e.g., Kosarak and KDD, are due to the projection operations.

Data set CCCP LearnSPN ID-SPN Data set CCCP LearnSPN ID-SPN
NLTCS -6.029 -6.099 -6.050 DNA -84.921 -85.237 -84.693
MSNBC -6.045 -6.113 -6.048 Kosarak -10.880 -11.057 -10.605
KDD 2k -2.134 -2.233 -2.153 MSWeb -9.970 -10.269 -9.800
Plants -12.872 -12.955 -12.554 Book -35.009 -36.247 -34.436
Audio -40.020 -40.510 -39.824 EachMovie -52.557 -52.816 -51.550
Jester -52.880 -53.454 -52.912 WebKB -157.492 -158.542 -153.293
Netflix -56.782 -57.385 -56.554 Reuters-52 -84.628 -85.979 -84.389
Accidents -27.700 -29.907 -27.232 20 Newsgrp -153.205 -156.605 -151.666
Retail -10.919 -11.138 -10.945 BBC -248.602 -249.794 -252.602
Pumsb-star -24.229 -24.577 -22.552 Ad -27.202 -27.409 -40.012
Table 2: Average log-likelihoods on test data. Highest log-likelihoods are highlighted in bold. Statistically significantly better and worse log-likelihoods than CCCP are marked; significance is measured with the Wilcoxon signed-rank test.
Figure 2: Negative log-likelihood values on training data versus number of iterations for PGD, EG, SMA and CCCP.

The computational complexity per update is $O(|\mathcal{S}|)$ for all four algorithms. The constant involved in the $O(|\mathcal{S}|)$ term of CCCP is slightly larger than those of the other three algorithms, as CCCP makes more function calls per update. However, in practice CCCP often takes less time than the other three algorithms because it needs fewer iterations to converge. We list detailed running-time statistics for all four algorithms on the 20 data sets in the supplementary material.

4.3 Fine Tuning

We combine CCCP as a “fine tuning” procedure with the structure learning algorithm LearnSPN and compare it to the state-of-the-art structure learning algorithm ID-SPN (Rooshenas and Lowd, 2014). More concretely, we keep the model parameters learned by LearnSPN and use them to initialize CCCP. We then update the model parameters globally using CCCP as a fine-tuning technique. This normally yields a better generative model, since the original parameters are learned greedily and locally during the structure learning algorithm. We use the validation set log-likelihood score to avoid overfitting, and the algorithm returns the set of parameters that achieves the best validation log-likelihood. For LearnSPN and ID-SPN, we use the publicly available implementations provided by the original authors and the default hyperparameter settings. Experimental results are reported in Table 2.

As shown in Table 2, using CCCP after LearnSPN always improves the model performance. By optimizing model parameters on these 20 data sets, we boost LearnSPN to achieve better results than the state-of-the-art ID-SPN on 7 data sets, whereas the original LearnSPN outperforms ID-SPN on only 1 data set. Note that the sizes of the SPNs returned by LearnSPN are much smaller than those produced by ID-SPN. Hence it is remarkable that, by fine-tuning the parameters with CCCP, we can achieve better performance despite the fact that the models are smaller. For a fair comparison, we also list the sizes of the SPNs returned by ID-SPN in the supplementary material.

5 Conclusion

We show that the network polynomial of an SPN is a posynomial function of the model parameters, and that learning the parameters by maximum likelihood yields a signomial program. We propose two convex relaxations to solve the SP and analyze the convergence properties of CCCP for learning SPNs. Extensive experiments are conducted to evaluate the proposed approaches and current methods. We also recommend combining CCCP with current structure learning algorithms to boost modeling accuracy.

Acknowledgments

HZ and GG gratefully acknowledge support from ONR contract N000141512365. HZ also thanks Ryan Tibshirani for the helpful discussion about CCCP.

References

  • Boyd et al. (2007) S. Boyd, S.-J. Kim, L. Vandenberghe, and A. Hassibi. A tutorial on geometric programming. Optimization and Engineering, 8(1):67–127, 2007.
  • Chan and Darwiche H. Chan and A. Darwiche. On the robustness of most probable explanations. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence.
  • Chiang (2005) M. Chiang. Geometric programming for communication systems. Now Publishers Inc, 2005.
  • Darwiche (2003) A. Darwiche. A differential approach to inference in Bayesian networks. Journal of the ACM (JACM), 50(3):280–305, 2003.
  • Dennis and Ventura (2015) A. Dennis and D. Ventura. Greedy structure search for sum-product networks. In International Joint Conference on Artificial Intelligence, volume 24, 2015.
  • Gens and Domingos (2012) R. Gens and P. Domingos. Discriminative learning of sum-product networks. In Advances in Neural Information Processing Systems, pages 3248–3256, 2012.
  • Gens and Domingos (2013) R. Gens and P. Domingos. Learning the structure of sum-product networks. In Proceedings of The 30th International Conference on Machine Learning, pages 873–880, 2013.
  • Gunawardana and Byrne (2005) A. Gunawardana and W. Byrne. Convergence theorems for generalized alternating minimization procedures. The Journal of Machine Learning Research, 6:2049–2073, 2005.
  • Hartman et al. (1959) P. Hartman et al. On functions representable as a difference of convex functions. Pacific J. Math, 9(3):707–713, 1959.
  • Kivinen and Warmuth (1997) J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
  • Lanckriet and Sriperumbudur (2009) G. R. Lanckriet and B. K. Sriperumbudur. On the convergence of the concave-convex procedure. In Advances in Neural Information Processing Systems, pages 1759–1767, 2009.
  • Peharz (2015) R. Peharz. Foundations of Sum-Product Networks for Probabilistic Modeling. PhD thesis, Graz University of Technology, 2015.
  • Peharz et al. (2015) R. Peharz, S. Tschiatschek, F. Pernkopf, and P. Domingos. On theoretical properties of sum-product networks. In AISTATS, 2015.
  • Poon and Domingos (2011) H. Poon and P. Domingos. Sum-product networks: A new deep architecture. In Proc. 27th Conf. on Uncertainty in Artificial Intelligence, pages 2551–2558, 2011.
  • Rooshenas and Lowd (2014) A. Rooshenas and D. Lowd. Learning sum-product networks with direct and indirect variable interactions. In ICML, 2014.
  • Salakhutdinov et al. (2002) R. Salakhutdinov, S. Roweis, and Z. Ghahramani. On the convergence of bound optimization algorithms. UAI, 2002.
  • Wu (1983) C. J. Wu. On the convergence properties of the EM algorithm. The Annals of Statistics, pages 95–103, 1983.
  • Yuille et al. (2002) A. L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). Advances in Neural Information Processing Systems, 2:1033–1040, 2002.
  • Zangwill (1969) W. I. Zangwill. Nonlinear programming: a unified approach, volume 196. Prentice-Hall Englewood Cliffs, NJ, 1969.
  • Zhao et al. (2015) H. Zhao, M. Melibari, and P. Poupart. On the Relationship between Sum-Product Networks and Bayesian Networks. In ICML, 2015.
  • Zhao et al. (2016) H. Zhao, T. Adel, G. Gordon, and B. Amos. Collapsed variational inference for sum-product networks. In ICML, 2016.

Appendix A Proof of SPNs as Mixture of Trees

Theorem 1 (restated).

Proof.

Argue by contradiction that T is not a tree; then there must exist a node v such that v has more than one parent in T. This means that there exist at least two paths that connect the root of T, which we denote by R, and v. Let t be the last node such that R, ..., t is a common prefix of these two paths. By construction such a t must exist, since the two paths start from the same root node (R is one candidate for t). Also, we claim that t ≠ v, since otherwise the two paths would overlap completely, contradicting the assumption that v has multiple parents. This shows that the two paths can be represented as R, ..., t, p, ..., v and R, ..., t, q, ..., v, where R, ..., t is the common prefix shared by the two paths and p ≠ q since t is the last common node. From the construction process defined in Def. 2, we know that both p and q are children of t in S. Recall that for each sum node in S, Def. 2 takes at most one child, hence t must be a product node, since both p and q are children of t. The subpaths from p to v and from q to v then imply that scope(v) ⊆ scope(p) and scope(v) ⊆ scope(q), so scope(p) ∩ scope(q) ⊇ scope(v) ≠ ∅, contradicting the decomposability of the product node t. Hence, as long as S is complete and decomposable, T must be a tree.

The completeness of T is trivially satisfied because each sum node has only one child in T. It is also straightforward to verify that T satisfies decomposability, as T is an induced subgraph of S, which is decomposable. ∎

Theorem 2 (restated).

Proof.

First, the scope of T is the same as the scope of S because the root of S is also the root of T. This shows that for each variable there is at least one indicator of that variable in the leaves, since otherwise the scope of the root node of T would be a strict subset of the scope of the root node of S. Furthermore, for each variable there is at most one indicator in the leaves. This follows from the fact that at most one child is collected from each sum node into T: if both indicators of a variable appeared simultaneously in the leaves, then their least common ancestor would have to be a product node. Note that the least common ancestor of any two leaves is guaranteed to exist because of the tree structure of T. However, this contradicts the fact that S is decomposable. As a result, there is exactly one indicator for each variable in T. Hence the multiplicative constant of the monomial admits the form of a product of univariate distributions; more specifically, it is a product of indicator variables in the case of Boolean input variables.

We have already shown that T is a tree and that only product nodes in T can have multiple children. It follows that the functional form of the value of T must be a monomial, and only the edge weights that appear in T contribute to the monomial. Combining all of the above, we know that the value of T is the product of the edge weights of the sum edges in T times the product of one indicator per variable. ∎
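To make the mixture-of-trees decomposition concrete, here is a small sketch (illustrative code; the node classes and the toy SPN are ours, not the paper's) that enumerates the induced trees of a tiny SPN by taking one child per sum node and all children per product node, and checks that the network value equals the sum of the resulting tree monomials.

```python
from itertools import product as cartesian

class Leaf:
    def __init__(self, var, val):  # indicator I[X_var = val]
        self.var, self.val = var, val
    def value(self, x):
        return 1.0 if x[self.var] == self.val else 0.0
    def tree_monomials(self, x):
        # a leaf is itself a (trivial) induced tree; its monomial is its value
        return [self.value(x)]

class Product:
    def __init__(self, children):
        self.children = children
    def value(self, x):
        v = 1.0
        for c in self.children:
            v *= c.value(x)
        return v
    def tree_monomials(self, x):
        # a product node keeps ALL children: combine one induced tree per child
        out = []
        for combo in cartesian(*(c.tree_monomials(x) for c in self.children)):
            m = 1.0
            for t in combo:
                m *= t
            out.append(m)
        return out

class Sum:
    def __init__(self, weighted_children):  # list of (weight, child)
        self.weighted_children = weighted_children
    def value(self, x):
        return sum(w * c.value(x) for w, c in self.weighted_children)
    def tree_monomials(self, x):
        # a sum node keeps exactly ONE child; the edge weight enters the monomial
        return [w * m for w, c in self.weighted_children
                      for m in c.tree_monomials(x)]

# toy SPN over two Boolean variables X0, X1 (structure is illustrative)
x0, x1 = Leaf(0, 1), Leaf(0, 0)
y0, y1 = Leaf(1, 1), Leaf(1, 0)
sx = Sum([(0.3, x0), (0.7, x1)])
sy = Sum([(0.6, y0), (0.4, y1)])
root = Sum([(0.5, Product([sx, sy])), (0.5, Product([x0, y0]))])

x = {0: 1, 1: 0}
monomials = root.tree_monomials(x)
# the network value is exactly the sum of the induced-tree monomials
assert abs(root.value(x) - sum(monomials)) < 1e-12
```

Each monomial here is a product of edge weights on the chosen sum edges and one indicator value per variable, matching the theorem.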

Theorem 3 (restated).

Corollary 4 (restated).

Proof.

We prove by induction on the height of S. If the height of S is 2, then depending on the type of the root node, we have two cases:

  1. If the root is a sum node with K children, then there are K different subgraphs that satisfy Def. 2, which is exactly the value of the network obtained by setting all the indicators and the edge weights from the root to 1.

  2. If the root is a product node, then there is only one subgraph, namely the graph itself. Again, this equals the value of S obtained by setting all indicators to 1.

Assume the theorem is true for SPNs with height at most h. Consider an SPN S with height h + 1. Again, depending on the type of the root node, we need to discuss two cases:

  1. If the root is a sum node with K children, where the k-th sub-SPN has τ_k unique induced trees, then by Def. 2 the total number of unique induced trees of S is τ_1 + ... + τ_K, which equals the value of S obtained by setting all indicators and edge weights to 1.

  2. If the root is a product node with K children, then the total number of unique induced trees of S is τ_1 × ... × τ_K, which again equals the value of S obtained by setting all indicators and edge weights to 1.

The second part of the theorem follows by applying the distributive law between multiplication and addition to combine unique trees that share the same prefix in bottom-up order. ∎
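The counting argument above can be checked mechanically: evaluating the network with every indicator and every edge weight set to 1 counts the induced trees, since a sum node then adds its children's counts and a product node multiplies them. A minimal sketch (the tuple-based node encoding is illustrative, not the paper's):

```python
def tau(node):
    """Number of induced trees rooted at `node`, computed as the network
    value with all indicators and edge weights set to 1."""
    kind, children = node
    if kind == 'leaf':
        return 1
    if kind == 'sum':
        # a sum node picks exactly one child: counts add up
        return sum(tau(c) for c in children)
    # a product node keeps all children: counts multiply
    p = 1
    for c in children:
        p *= tau(c)
    return p

leaf = ('leaf', [])
inner = ('sum', [leaf, leaf, leaf])               # 3 choices of child
root = ('prod', [inner, ('sum', [leaf, leaf])])   # 3 * 2 = 6 induced trees
assert tau(root) == 6
```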

Appendix B MLE as Signomial Programming

Theorem 6 (restated).

Proof.

Using the definition of the network polynomial and Corollary 5, the MLE problem can be rewritten as

(8)
subject to

which we claim is equivalent to:

(9)
subject to

It is easy to check that both the objective function and the constraint function in (9) are signomials. To see the equivalence of (8) and (9), let p* be the optimal value of (8), achieved at w*. Choosing w* together with its objective value as a candidate solution of (9), this pair is also optimal for (9); otherwise we could find a feasible pair in (9) with a strictly larger value, and combined with the constraint function in (9) this would yield a feasible point of (8) whose objective value exceeds p*, contradicting the optimality of p*. In the other direction, let t* be the optimal value of (9); then t* is also the optimal value of (8), since otherwise there would exist a feasible w in (8) whose objective value exceeds t*. That w, together with its objective value, would also be feasible in (9) with a strictly larger value, contradicting the optimality of t*. ∎

The transformation from (8) to (9) does not make the problem any easier to solve; rather, it destroys the structure of (8), namely that the objective function of (8) is a ratio of two posynomials. However, the equivalent transformation does reveal insight into the intrinsic complexity of the optimization problem: it indicates that it is hard to solve (8) efficiently with a guarantee of reaching a globally optimal solution.
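As a sanity check on why the ratio-of-posynomials structure is problematic, the following toy objective (our own illustrative example: a single sum node with two children, one active indicator) is concave along one coordinate direction but convex along another, so it is neither convex nor concave and standard convex-programming guarantees do not apply directly:

```python
# Toy ratio-of-posynomials objective for one sum node with two weights,
# where only the first child's indicator is active:
#   f(w1, w2) = V(x|w) / V(1|w) = (w1*1 + w2*0) / (w1 + w2)
def f(w1, w2):
    return w1 / (w1 + w2)

# Along w2 = 1 the function is strictly concave in w1 ...
g = lambda t: f(t, 1.0)
assert g(1.0) > (g(0.0) + g(2.0)) / 2   # midpoint above the chord

# ... but along w1 = 1 it is strictly convex in w2.
h = lambda t: f(1.0, t)
assert h(1.0) < (h(0.0) + h(2.0)) / 2   # midpoint below the chord
```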

Appendix C Convergence of CCCP for SPNs

We discussed before that the sequence of function values converges to a limiting value. However, this fact alone does not necessarily imply that the sequence of iterates converges to a stationary point of the log-likelihood, nor does it imply that the sequence of iterates converges at all. Zangwill's global convergence theory [Zangwill, 1969] has been successfully applied to study the convergence properties of many iterative algorithms frequently used in machine learning, including EM [Wu, 1983], generalized alternating minimization [Gunawardana and Byrne, 2005] and CCCP [Lanckriet and Sriperumbudur, 2009]. Here we also apply Zangwill's theory, combined with the analysis of Lanckriet and Sriperumbudur [2009], to show the following theorem.

Theorem 7 (restated).

Proof.

We will use Zangwill's global convergence theory for iterative algorithms [Zangwill, 1969] to show convergence in our case. Before giving the proof, we first introduce the notion of a "point-to-set mapping", where the output of the mapping is defined to be a set. More formally, a point-to-set map Φ from a set X to a set Y is defined as Φ : X → P(Y), where P(Y) is the power set of Y. Suppose X and Y are equipped with the norms ||·||_X and ||·||_Y, respectively. A point-to-set map Φ is said to be closed at x* ∈ X if x_k → x*, y_k ∈ Φ(x_k) and y_k → y* imply that y* ∈ Φ(x*). A point-to-set map Φ is said to be closed on a subset of X if it is closed at every point of that subset. The concept of closedness in the point-to-set setting reduces to continuity if we restrict the output of Φ to be a singleton set for every possible input, i.e., when Φ is a point-to-point mapping.

Theorem 8 (Global Convergence Theorem [Zangwill, 1969]).

Let the sequence {x_k} be generated by x_{k+1} ∈ Φ(x_k), where Φ is a point-to-set map from X to X. Let a solution set Γ ⊆ X be given, and suppose that:

  1. all points x_k are contained in a compact set S ⊆ X.

  2. Φ is closed over the complement of Γ.

  3. there is a continuous function α on X such that:

    1. if x ∉ Γ, then α(y) > α(x) for every y ∈ Φ(x).

    2. if x ∈ Γ, then α(y) ≥ α(x) for every y ∈ Φ(x).

Then all the limit points of {x_k} are in the solution set Γ, and α(x_k) converges monotonically to α(x*) for some x* ∈ Γ.

Let f(w) denote the log-likelihood objective, and let Φ be the CCCP update map, i.e., Φ(w) is the set of maximizers of the concave surrogate constructed at w. Here we use the weights and their logarithms interchangeably, since the component-wise mapping between them is one-to-one. Note that since the maximum of the surrogate is achievable, Φ is a well-defined point-to-set map.

Specifically, in our case, given the weights from the previous iteration, at each iteration of Eq. 7 we have

i.e., the point-to-set mapping is given by

Let C = [0, 1]^D denote the D-dimensional hypercube, where D is the number of edge weights. Then the above update formula indicates that every iterate lies in C. Furthermore, if we assume the weights of every sum node sum to one, which can be obtained by local normalization before any update, we can guarantee that the iterates remain in C, which is a compact set in R^D.
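Since Eq. 7 is not reproduced here, the following sketch assumes the multiplicative form that the CCCP/EM-style update takes for SPNs: each sum-node weight is scaled by its child's value times the node's partial derivative of the network value, then renormalized locally at the sum node. The node class and the toy tree-structured example are illustrative, not the paper's code:

```python
class Node:
    def __init__(self, kind, children=(), weights=None, leaf_val=1.0):
        self.kind, self.children = kind, list(children)
        self.weights = list(weights) if weights else None
        self.leaf_val = leaf_val   # value of an indicator leaf at the instance
        self.val = self.grad = 0.0

def forward(n):
    # bottom-up evaluation of the network value
    if n.kind == 'leaf':
        n.val = n.leaf_val
    elif n.kind == 'sum':
        for c in n.children: forward(c)
        n.val = sum(w * c.val for w, c in zip(n.weights, n.children))
    else:  # product
        n.val = 1.0
        for c in n.children:
            forward(c); n.val *= c.val
    return n.val

def backward(root):
    # top-down differentiation (tree-structured network assumed):
    # parents appear before children in a pre-order traversal
    order = []
    def visit(n):
        order.append(n)
        for c in n.children: visit(c)
    visit(root)
    for n in order: n.grad = 0.0
    root.grad = 1.0
    for n in order:
        if n.kind == 'sum':
            for w, c in zip(n.weights, n.children):
                c.grad += w * n.grad
        elif n.kind == 'prod':
            for c in n.children:
                others = 1.0
                for d in n.children:
                    if d is not c: others *= d.val
                c.grad += n.grad * others

def cccp_update(root):
    # assumed multiplicative update: w_ij <- w_ij * v_j * (dV/dv_i) / V,
    # followed by local renormalization at each sum node
    V = forward(root); backward(root)
    def upd(n):
        if n.kind == 'sum':
            new = [w * c.val * n.grad / V
                   for w, c in zip(n.weights, n.children)]
            z = sum(new)
            n.weights = [u / z for u in new] if z > 0 else n.weights
        for c in n.children: upd(c)
    upd(root)

# one sum node over two indicator leaves; only the first leaf is active
l1, l2 = Node('leaf', leaf_val=1.0), Node('leaf', leaf_val=0.0)
s = Node('sum', [l1, l2], weights=[0.4, 0.6])
cccp_update(s)
assert abs(s.weights[0] - 1.0) < 1e-12
```

Note that the update is purely multiplicative and renormalizes locally, so no projection step is needed, matching the discussion of PGD's drawbacks in the introduction.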

The solution of the surrogate maximization is not unique; in fact, there are infinitely many solutions to these nonlinear equations. However, as defined above, Φ returns the one solution of this convex program that lies in the D-dimensional hypercube. Hence, in our case, Φ reduces to a point-to-point map, for which the closedness of a point-to-set map reduces to the continuity of a point-to-point map. Take the solution set Γ to be the set of stationary points of the objective. Hence we only need to verify the continuity of Φ outside Γ. To show this, we first characterize the functional form of the partial derivative of the network polynomial with respect to each node, as it is used inside Φ. We claim that for each node this partial derivative is, again, a posynomial function of the weights. A graphical illustration is given in Fig. 3 to explain the process. This can also be derived from the sum and product rules used during top-down differentiation.

Figure 3: Graphical illustration of the partial derivative. The partial derivative of the network polynomial with respect to the node shown in red is a posynomial: a product of the edge weights lying on the path from the root to that node and of the network polynomials of nodes that are children of product nodes on the path (highlighted in blue).

More specifically, if a node is a product node, let its parents in the network, which are assumed to be sum nodes, be given; the derivative of the network polynomial with respect to this node is obtained through the chain rule as a sum, over its parents, of the parent's derivative times the corresponding edge weight. We reach

(10)

Similarly, if the node is a sum node and its parents are assumed to be product nodes, we have

(11)

Since the parent is a product node, the last term in Eq. 11 can be equivalently expressed as

where the index ranges over all the children of the product node except the sum node itself. Combining the fact that the partial derivative of the network polynomial with respect to the root node is 1 and that the value of each node is a posynomial function, it follows by induction in top-down order that the partial derivative with respect to every node is also a posynomial function of the weights.
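The sum and product rules used in the top-down differentiation can be checked numerically on a tiny network. In the sketch below (our own toy example), the network polynomial is V = (w1·a + w2·b)(w3·c + w4·d), and the derivative with respect to the leaf value a is the edge weight w1 times the sibling subnetwork, as the rules predict; we compare against a finite-difference estimate:

```python
# toy weights and leaf (indicator) values
w1, w2, w3, w4 = 0.3, 0.7, 0.6, 0.4
a, b, c, d = 1.0, 0.0, 1.0, 1.0

def V(a_):
    # root product of two sum nodes; only `a` is treated as variable here
    return (w1 * a_ + w2 * b) * (w3 * c + w4 * d)

eps = 1e-6
fd = (V(a + eps) - V(a - eps)) / (2 * eps)          # finite difference
analytic = w1 * (w3 * c + w4 * d)                   # sum rule, then product rule
assert abs(fd - analytic) < 1e-6
```

Note that both factors of the analytic derivative are posynomials in the weights, consistent with the induction above.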

We have shown that both the numerator and the denominator of the update mapping Φ are posynomial functions of the weights. Because posynomial functions are continuous, in order to show that Φ is continuous we need to guarantee that the denominator is not a degenerate posynomial, i.e., that the denominator is strictly positive for every possible input vector. Recall that the weights are initialized in the interior of the D-dimensional hypercube and that every multiplicative update keeps each component strictly positive, so the iterates never reach the boundary of the hypercube; hence each component of the denominator is strictly positive. As a result, Φ is continuous, since it is the ratio of two strictly positive posynomial functions.

We now verify the third property in Zangwill’s global convergence theory. At each iteration of CCCP, we have the following two cases to consider:

  1. If w_k ≠ w_{k-1}, i.e., w_{k-1} is not a stationary point of the log-likelihood f, then the surrogate is strictly increased by the update, so we have f(w_k) > f(w_{k-1}).

  2. If w_k = w_{k-1}, i.e., w_{k-1} is a stationary point of the log-likelihood f, then the update leaves the weights unchanged, so we have f(w_k) = f(w_{k-1}).