 # Sum-Product-Quotient Networks

We present a novel tractable generative model that extends Sum-Product Networks (SPNs) and significantly boosts their power. We call it Sum-Product-Quotient Networks (SPQNs), whose core concept is to incorporate conditional distributions into the model by direct computation using quotient nodes, e.g. P(A|B)=P(A,B)/P(B). We provide sufficient conditions for the tractability of SPQNs that generalize and relax the decomposable and complete tractability conditions of SPNs. These relaxed conditions give rise to an exponential boost to the expressive efficiency of our model, i.e. we prove that there are distributions which SPQNs can compute efficiently but which require SPNs to be of exponential size. Thus, we narrow the gap in expressivity between tractable graphical models and other Neural Network-based generative models.


## 1 Introduction

Sum-Product Networks (SPNs) (Poon and Domingos, 2011) are a class of generative models capable of exact and tractable inference, where the probability function is directly modelled as a simple computational graph composed of just weighted sum and product nodes, also known as Arithmetic Circuits (Shpilka and Yehudayoff, 2010), following a strict set of constraints on its connectivity. SPNs have been applied to solve a wide range of tasks, e.g. image classification (Gens and Domingos, 2012), activity recognition (Amer and Todorovic, 2012, 2016), and missing data (Sharir et al., 2016). While SPNs have certain advantages in some areas, in cases where expressiveness is a limiting factor they fall far behind contemporary generative models such as those leveraging neural networks as their inference engine (Uria et al., 2016; van den Oord et al., 2016b; Dinh et al., 2017).

When SPNs were first introduced, it was hypothesized that perhaps all tractable distributions could be represented efficiently by SPNs. However, this hypothesis was later proven false by Martens and Medabalimi (2014). More specifically, they have shown that the uniform distribution on the spanning trees of a complete graph on $n$ vertices, which is known to be tractable by other methods, cannot be realized by SPNs unless their size is exponential in $n$. The reason behind this limitation is not the simple operations on which SPNs are built, as any efficiently computable function can be approximated arbitrarily well by polynomially-sized arithmetic circuits (Hoover, 1990), but rather the strict structural constraints of SPNs that are required for tractability.

In this paper, we introduce an extension to SPNs, which we call Sum-Product-Quotient Networks (SPQNs for short), that addresses the limited expressivity of SPNs. The underlying concept behind our extension is to incorporate conditional probabilities into the model through direct computation, i.e. by repeatedly applying the formula $P(A,B) = P(A|B) \cdot P(B)$. Specifically, we show that by adding a quotient node, i.e. a node with two inputs that computes their division, we can relax the structural constraints of SPNs and still have a model capable of tractable inference, where each internal node represents a conditional probability over its input variables. Moreover, we prove that while SPQNs can represent any distribution SPNs can, by virtue of being their extension, there exist distributions that are efficient for SPQNs but require SPNs to be of exponential size, proving that SPQNs are exponentially more expressively efficient.

The rest of the article is organized as follows. In sec. 2 we briefly describe the SPNs model and its basic concepts. This is followed by sec. 3 in which we present our SPQNs extension, and prove that the resulting model is indeed tractable. In sec. 4 we analyze the expressive efficiency of SPQNs with respect to SPNs. Finally, we discuss the implications of our model on prior work and our plans for future research in sec. 5.

## 2 Preliminaries

In this section we give a brief description of Sum-Product Networks (SPNs). For simplicity, we limit our description to probability models over binary variables; the extension to higher-dimensional or continuous variables is quite straightforward.

An SPN over binary random variables $X_1, \dots, X_N$ is a rooted computational directed acyclic graph, which computes the unnormalized probability function of the evidence $x \in \{0, 1, \ast\}^N$, denoted by $\Psi(x)$, where $\ast$ denotes missing variables, under which the SPN computes just the unnormalized marginal of the visible variables. The leaves of an SPN are univariate indicators of the binary variables, i.e. $1[X_i = 1]$ and $1[X_i = 0]$, with the special property that for $x_i = \ast$ all respective indicators of $X_i$ equal $1$. The internal nodes of the SPN compute either a positive weighted sum or a product, i.e. an SPN is an Arithmetic Circuit over the indicator variables defined above. We denote by $S$ the set of sum nodes, by $P$ the set of product nodes, by $L$ the set of indicator nodes, and by $V$ the set of all nodes in the SPN. For all $v \in V$, we denote by $\mathrm{ch}(v)$ the set of children nodes pointing to $v$, and define the scope of $v$, denoted by $\mathrm{sc}(v)$, as the index set of all variables such that there exists a path starting at an indicator of a variable which ends at the node $v$. Formally, we define $\mathrm{sc}(v) = \{i\}$ for leaf nodes of the $i$-th variable, and $\mathrm{sc}(v) = \cup_{c \in \mathrm{ch}(v)} \mathrm{sc}(c)$ otherwise. We denote the function induced by the sub-graph rooted at $v$ over the variables in $\mathrm{sc}(v)$ by $\Psi_v$. Last, we define the following structural properties for SPNs:

###### Definition 1.

An SPN is complete if for every sum node $v \in S$ and for every $c_1, c_2 \in \mathrm{ch}(v)$ it holds that $\mathrm{sc}(c_1) = \mathrm{sc}(c_2)$.

###### Definition 2.

An SPN is decomposable if for every product node $v \in P$ and for every $c_1, c_2 \in \mathrm{ch}(v)$, such that $c_1 \neq c_2$, it holds that $\mathrm{sc}(c_1) \cap \mathrm{sc}(c_2) = \emptyset$.

Generally, for an SPN that is not decomposable and complete, $\Psi(x)$ only represents an unnormalized distribution over $X_1, \dots, X_N$, due to the positivity constraints on its weights, while computing its normalization term is not typically tractable. A generative model is said to possess tractable inference if computing its normalized probability function is tractable. Though general SPNs do not possess tractable inference, limiting them to be decomposable and complete (D&C) is a sufficient condition for tractability, under which computing the normalization term $Z = \sum_x \Psi(x)$ is equivalent to evaluating $\Psi(\ast, \dots, \ast)$, and thus the normalized probability is given by $P(x) = \Psi(x) / \Psi(\ast, \dots, \ast)$. Also, not only is $P(x)$ a valid probability function, but for any $v \in V$, $\Psi_v / \Psi_v(\ast, \dots, \ast)$ defines a valid distribution over the variables in $\mathrm{sc}(v)$. As shown by Peharz et al. (2015), simply normalizing the weights of each sum node to sum to one ensures that $\Psi(x)$ is already a normalized probability function, with no need to compute a normalization factor; furthermore, this restriction does not affect the expressiveness of the model, i.e. any SPN with unnormalized sum weights can be converted to an SPN of the same size with normalized sum weights. Hence, for the remainder of the article we simply assume sum nodes have normalized weights.

It is important to understand why D&C leads to tractability. The decomposability condition ensures that the children of a product node do not have shared variables, and because the product of distributions over different sets of variables is also a normalized distribution, then a product node of a decomposable SPN represents a normalized distribution as long as its children represent normalized distributions. Similarly, the completeness condition ensures that the children of sum nodes have the exact same scope, and because a weighted average of distributions over the same set of variables, with normalized sum weights, is also a normalized distribution over these variables, then a sum node represents a normalized distribution if its children do as well. Employing an induction argument, both conditions combined together guarantee that every node in an SPN will represent a valid distribution.

An additional positive outcome of the D&C condition is that not only is it tractable to compute $P(x)$, it is also tractable to compute any of its marginals, e.g. $P(x_A)$ for $A \subset [N]$, by simply replacing the values of the marginalized variables with the special value $\ast$, e.g. $x_i = \ast$. We call this last property tractable marginalization, which is distinct from the weaker property of tractable inference.
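The inference and marginalization mechanics described above can be sketched in a few lines. Below is a minimal illustrative example (the SPN structure, weights, and leaf probabilities are all assumed by us for illustration, not taken from the paper): a D&C SPN over two binary variables realized as a normalized mixture of two fully-factorized components, where a marginal query is answered in a single pass by treating a missing variable as $\ast$, under which both of its indicators evaluate to $1$.

```python
# A minimal sketch (not the paper's implementation) of inference and
# marginalization in a tiny decomposable & complete SPN over two binary
# variables: a weighted mixture of two fully-factorized components.

def leaf(x_i, p1):
    """Indicator pair collapsed into a Bernoulli leaf: returns P(X_i = x_i),
    or 1 when x_i is None (the '*' missing-value symbol), marginalizing X_i."""
    if x_i is None:
        return 1.0          # 1[X_i=1] + 1[X_i=0] both evaluate to 1 under '*'
    return p1 if x_i == 1 else 1.0 - p1

def spn(x1, x2):
    """Root sum node with normalized weights over two decomposable products."""
    comp_a = leaf(x1, 0.9) * leaf(x2, 0.2)   # product node: scope {1} ∪ {2}
    comp_b = leaf(x1, 0.1) * leaf(x2, 0.7)
    return 0.6 * comp_a + 0.4 * comp_b       # complete: children share scope {1,2}

# Tractable inference: with normalized sum weights, the joint sums to 1.
total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))

# Tractable marginalization: P(X1=1) in a single pass with x2 = '*'.
marginal = spn(1, None)
```

Because the network is D&C, the single-pass marginal agrees exactly with the brute-force sum over `x2`.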

Lastly, learning an SPN model of a given structure is typically carried out according to the Maximum Likelihood principle, for which several methods have been proposed, ranging from specialized Expectation-Maximization algorithms to gradient-based methods, e.g. simply performing Stochastic Gradient Ascent.

## 3 Sum-Product-Quotient Networks

As discussed in sec. 1, not all tractable distributions can be represented by an SPN of a reasonable size, a limitation which stems from the D&C connectivity constraints imposed on the computational graphs of SPNs to achieve tractable inference. In this section we describe an extension of SPNs, under which we can relax these constraints and thus dramatically increase their capacity to efficiently represent tractable distributions. At the heart of our model is the introduction of a quotient node, i.e. a node with two inputs, a numerator and a denominator, that outputs their division. Quotient nodes have a natural interpretation as a conditional probability, i.e. $P(A|B) = P(A,B) / P(B)$. Hence, we call our model Sum-Product-Quotient Networks, or SPQNs for short.

As with SPNs, not every computational graph made of sum, product, and quotient nodes results in a model possessing tractable inference. To ensure the tractability of SPQNs, we introduce a set of restrictions generalizing the D&C conditions defined in sec. 2. Formally, and in accordance with the notations of sec. 2, we denote by $Q$ the set of quotient nodes, where $V$ is the set of all nodes, and for all $v \in Q$ we denote its numerator and denominator nodes by $\mathrm{nu}(v)$ and $\mathrm{de}(v)$, respectively. As we will shortly show, each node of an SPQN essentially represents a conditional distribution over the variables in its scope, which gives rise to a natural partition of the scope into two disjoint sets: (i) the conditioning scope, denoted by $\mathrm{cond}(v)$, and (ii) the effective scope, denoted by $\mathrm{eff}(v)$; under this partition, for tractable SPQNs, each node computes the conditional probability $P(\mathrm{eff}(v) \mid \mathrm{cond}(v))$. Formally, the conditioning scope is defined as the complement of the effective scope, i.e. $\mathrm{cond}(v) = \mathrm{sc}(v) \setminus \mathrm{eff}(v)$, while the effective scope is defined the same as the general scope for all nodes except quotient nodes, namely, $\mathrm{eff}(v) = \mathrm{sc}(v)$ for leaf nodes and $\mathrm{eff}(v) = \cup_{c \in \mathrm{ch}(v)} \mathrm{eff}(c)$ for sum and product nodes. For quotient nodes we define $\mathrm{eff}(v) = \mathrm{eff}(\mathrm{nu}(v)) \setminus \mathrm{eff}(\mathrm{de}(v))$, following our intuition of quotient nodes as conditional probabilities, e.g. for $P(A|B) = P(A,B)/P(B)$ it holds that $\mathrm{eff}(v) = A$ and $\mathrm{cond}(v) = B$, because we started with the effective variables of the numerator, $A \cup B$, from which we subtracted the effective variables of the denominator, $B$.
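To make the scope bookkeeping above concrete, here is a hypothetical sketch (the node encoding and names are ours, not the paper's) that computes the scope, effective scope, and conditioning scope over a toy DAG, including the quotient rule that the effective scope of a quotient is the numerator's effective scope minus the denominator's.

```python
# A hypothetical sketch of the scope definitions, over a toy node encoding:
# ('leaf', i), ('sum', children), ('prod', children), or
# ('quot', numerator, denominator). Encoding is illustrative only.

def scopes(node):
    """Return (scope, effective_scope); conditioning scope is their difference."""
    kind = node[0]
    if kind == 'leaf':
        s = frozenset([node[1]])
        return s, s                           # eff(v) = sc(v) for leaves
    if kind in ('sum', 'prod'):
        sc, eff = frozenset(), frozenset()
        for child in node[1]:
            c_sc, c_eff = scopes(child)
            sc, eff = sc | c_sc, eff | c_eff  # unions over children
        return sc, eff
    # quotient: sc(v) = sc(nu) ∪ sc(de), eff(v) = eff(nu) \ eff(de)
    nu_sc, nu_eff = scopes(node[1])
    de_sc, de_eff = scopes(node[2])
    return nu_sc | de_sc, nu_eff - de_eff

# P(X1 | X2) modelled as a quotient of P(X1, X2) and its marginal P(X2):
joint = ('prod', [('leaf', 1), ('leaf', 2)])
q = ('quot', joint, ('leaf', 2))
sc, eff = scopes(q)
cond = sc - eff                               # cond(v) = sc(v) \ eff(v)
```

For the quotient node `q` this yields scope $\{1,2\}$, effective scope $\{1\}$, and conditioning scope $\{2\}$, matching the $P(A|B)$ intuition.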

With the above definitions in place, we are now ready to present our generalization of the D&C conditions for SPQNs. As discussed in sec. 2, the intuition behind the D&C conditions is that they allow for a rather basic way to combine the distributions defined by the children of a given node, each over their respective scope, to form a valid distribution over the scope of their parent node. In broad terms, we simply carry over the same idea to SPQNs, but apply it on conditional distributions instead. For sum and product nodes, this translates in essence to applying the D&C conditions with respect to the effective scope of a node instead of its general scope, which we formalize as:

###### Definition 3.

An SPQN is conditionally complete if it is complete with respect to the effective scope, i.e. for every sum node $v \in S$ and for every $c_1, c_2 \in \mathrm{ch}(v)$, it holds that $\mathrm{eff}(c_1) = \mathrm{eff}(c_2)$.

###### Definition 4.

An SPQN is conditionally decomposable if for every product node $v \in P$:

1. It is decomposable with respect to the effective scope, i.e. for every $c_1, c_2 \in \mathrm{ch}(v)$, such that $c_1 \neq c_2$, it holds that $\mathrm{eff}(c_1) \cap \mathrm{eff}(c_2) = \emptyset$.

2. Its induced dependency graph over its children does not contain a cycle, where the directed graph is defined by the vertices $\mathrm{ch}(v)$ and the edges $\{(c_1, c_2) : \mathrm{eff}(c_1) \cap \mathrm{cond}(c_2) \neq \emptyset\}$.

Under the conditional completeness condition, for every sum node $v$, and for any fixed values of the variables in its conditioning scope $\mathrm{cond}(v)$, we can treat the conditional distributions of its children simply as distributions over the variables in the effective scope. Because $v$ is complete with respect to the effective scope, then following the same arguments as in sec. 2, $v$ represents a distribution as long as its children do as well. The above logic can also be applied to product nodes under a more restrictive form of conditional decomposability, where for every child $c$ it holds that $\mathrm{cond}(c) \subseteq \mathrm{cond}(v)$, under which the variables in the conditioning scope of each child node are fixed. However, under the more general setting of conditional decomposability, there could be shared variables between the conditioning scope of one child $c_2$ and the effective scope of another child $c_1$, in which case we say that $c_2$ depends on $c_1$, as the probability of the effective scope of $c_2$ is conditioned on the variables in the effective scope of $c_1$. By representing all the dependencies between the children of $v$ as a directed graph, then if each child represents a valid conditional distribution over its scope and the graph is acyclic, it effectively defines a Bayesian Network factorization of the conditional probability over the scope of $v$, hence $v$ too represents a valid conditional distribution.

At this point it is important to note how the conditional D&C conditions are actually relaxed versions of their “unconditional” counterparts. First, notice that when the conditioning scope of a node $v$ is empty, i.e. when the sub-graph rooted at $v$ contains only sum and product nodes, or in other words this sub-graph is an SPN, then conditional D&C are equivalent to D&C. Second, and more importantly, notice that when the conditioning scope is nonempty, conditional decomposability allows taking the product of nodes with overlapping scopes, which is forbidden under the stricter decomposability constraint. This entails that conditionally D&C SPQNs allow for a richer set of structures than D&C SPNs.

At last, to ensure the tractability of SPQNs we must also introduce a condition on their quotient nodes, one with no equivalent in classical SPNs. The following condition captures our motivation for a quotient node as a way to compute conditional distributions by direct representation of their definition, i.e. that the denominator is a strictly positive marginal distribution of the numerator:

###### Definition 5.

An SPQN is conditionally sound if for every quotient node $v \in Q$, it holds that $\Psi_{\mathrm{de}(v)}$ is strictly positive, as well as a marginal of $\Psi_{\mathrm{nu}(v)}$, i.e. that $\mathrm{sc}(\mathrm{de}(v)) \subseteq \mathrm{sc}(\mathrm{nu}(v))$, $\mathrm{eff}(\mathrm{de}(v)) \subseteq \mathrm{eff}(\mathrm{nu}(v))$, and for all $a \in \{0, 1, \ast\}^N$ it holds that:

$$\Psi_{\mathrm{de}(v)}(a) \;=\; \sum_{\substack{z \in \{0,1,\ast\}^N \\ \forall i \notin \mathrm{eff}(v):\ z_i = a_i \\ \forall i \in \mathrm{eff}(v):\ z_i \in \{0,1\}}} \Psi_{\mathrm{nu}(v)}(z)$$

An SPQN is strongly conditionally sound if, in addition to the above, for the $z$ such that $z_i = \ast$ if $i \in \mathrm{eff}(v)$ and $z_i = a_i$ otherwise, it holds that $\Psi_{\mathrm{nu}(v)}(z) = \Psi_{\mathrm{de}(v)}(a)$.

The definition of strong conditional soundness above is not required for tractability, only the weaker conditional soundness is, but it does ensure efficient sampling, as discussed in sec. 3.2. We conclude by formally proving that an SPQN that meets the above conditions, which will henceforth be referred to as a tractable SPQN, results in a tractable generative model, as described by the following theorem (see app. A.1 for proof):

###### Theorem 1.

For any conditionally decomposable, conditionally complete, and conditionally sound SPQN over the binary random variables $X_1, \dots, X_N$, for all $v \in V$, and any values of the variables in $\mathrm{cond}(v)$, it holds that $\Psi_v$ is a normalized probability function over the variables in $\mathrm{eff}(v)$ conditioned on $\mathrm{cond}(v)$.
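The claim of theorem 1 can be checked numerically on the smallest possible case: a single quotient node whose denominator is the numerator's marginal, as conditional soundness requires. The tables below are made up for illustration; the point is only that the quotient then sums to one over the effective variable for every value of the conditioning variable.

```python
# A toy numeric check (assumed tables, not from the paper) of conditional
# soundness for one quotient node with eff(v) = {X1} and cond(v) = {X2}:
# the denominator is the numerator's marginal over X1, so the quotient
# is a normalized conditional distribution of X1 for every value of X2.

psi_nu = {(0, 0): 0.3, (0, 1): 0.9, (1, 0): 0.6, (1, 1): 0.2}  # unnormalized
psi_de = {x2: psi_nu[(0, x2)] + psi_nu[(1, x2)] for x2 in (0, 1)}

def quotient(x1, x2):
    return psi_nu[(x1, x2)] / psi_de[x2]   # denominator strictly positive

# For each fixed x2, the quotient should sum to exactly 1 over x1.
col_sums = [quotient(0, x2) + quotient(1, x2) for x2 in (0, 1)]
```

Violating soundness, e.g. using a denominator that is not the numerator's marginal, would break this normalization, which is precisely what the condition rules out.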

Given an SPQN with a fixed structure that meets the tractability conditions of theorem 1, its output is a differentiable probability function of the data, and so we can learn its parameters simply by maximizing the likelihood of the data through gradient ascent methods, as commonly employed by both SPNs and other deep learning methods. Adapting other methods typically used to learn SPNs, e.g. EM-type algorithms for parameter learning and the various suggested structure learning algorithms, is deferred to future work.

Though theorem 1 provides sufficient conditions for SPQNs to be tractable, it is not prescriptive as to how exactly these models should be structured. Specifically, while conditional decomposability and conditional completeness are quite simple to follow, it is generally not clear how to adhere to conditional soundness. We address this in the next section.

### 3.1 Conditional Mixing Operator

As discussed in the previous section, tractable SPQNs must comply with the conditionally sound condition, and verifying that a given model adheres to it is nontrivial. In this section, we suggest instead to follow a stricter restriction that leads to a concrete construction of a tractable SPQN. Specifically, we define a building block operator composed of sum, product, and quotient nodes that guarantees the resulting model to be tractable, which we call the Conditional Mixing Operator:

###### Definition 6.

The Conditional Mixing Operator (CMO) over non-negative matrices $A$ and $B$, where $A \in \mathbb{R}_{\geq 0}^{\gamma \times \alpha}$ and $B \in \mathbb{R}_{\geq 0}^{\gamma \times \beta}$, and parametrized by strictly positive weights $w \in \mathbb{R}_{> 0}^{\gamma}$ such that $\sum_{i=1}^{\gamma} w_i = 1$, is defined as follows:

$$\mathrm{CMO}(A, B; w) \;=\; \frac{\displaystyle\sum_{i=1}^{\gamma} w_i \left( \prod_{j=1}^{\alpha} A_{ij} \right) \cdot \left( \prod_{j=1}^{\beta} B_{ij} \right)}{\displaystyle\sum_{i=1}^{\gamma} w_i \prod_{j=1}^{\alpha} A_{ij}} \qquad (1)$$

In the context of SPQNs, a CMO node $v$ with children $\{a_{ij}\}$ and $\{b_{ij}\}$ outputs $\Psi_v = \mathrm{CMO}(A, B; w)$, where $A_{ij} = \Psi_{a_{ij}}$ and $B_{ij} = \Psi_{b_{ij}}$.

The motivation behind this construction is its connection to the conditional probability of a mixture model. Notice that the numerator of eq. 1 essentially represents a mixture model with decomposable mixture components divided into two sets of factors according to $A$ and $B$, while the denominator represents the marginalization over the variables relating to the $B$-type factors.
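A minimal numeric sketch of eq. 1 (the shapes and values below are assumed by us for illustration): the numerator mixes products of $A$-type and $B$-type factors, while the denominator drops the $B$-type factors, which corresponds to marginalizing out their variables. In particular, when every $B$-entry equals $1$, as happens when all $B$-variables are set to $\ast$, the quotient collapses to $1$.

```python
# A minimal numeric sketch (assumed shapes, not the paper's code) of the
# Conditional Mixing Operator of eq. 1: gamma mixture components, with
# alpha A-type factors and beta B-type factors per component.

def cmo(A, B, w):
    """CMO(A, B; w): numerator mixes products of A- and B-rows;
    denominator keeps only the A-rows, marginalizing out the B factors."""
    def prod(row):
        out = 1.0
        for v in row:
            out *= v
        return out
    numer = sum(wi * prod(Ai) * prod(Bi) for wi, Ai, Bi in zip(w, A, B))
    denom = sum(wi * prod(Ai) for wi, Ai in zip(w, A))
    return numer / denom

# gamma=2 components, alpha=2 A-factors and beta=2 B-factors per component.
A = [[0.9, 0.5], [0.2, 0.4]]
w = [0.3, 0.7]

# Setting every B-entry to 1 (all B-variables at '*') collapses the CMO to 1,
# mirroring the strong conditional soundness property of valid CMOs.
ones = [[1.0, 1.0], [1.0, 1.0]]
```

With $B$-entries in $[0,1]$ the output always lies in $(0, 1]$, consistent with the reading of the CMO as a conditional probability of the $B$-variables given the $A$-variables.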

The tractability of SPQNs composed of CMOs is ensured by the definition of a valid CMO node, as follows:

###### Definition 7.

A CMO node with children is said to be valid if the following conditions are met:

1. The children of a CMO node are either valid CMO nodes themselves, or it holds that $\alpha = \beta = 1$ and its children are exactly the indicator leaves $1[X_i = 1]$ and $1[X_i = 0]$ for some $i$.

2. The internal sum nodes of the CMO are conditionally complete.

3. The internal product nodes of the CMO, i.e. the ones computing $\prod_{j=1}^{\alpha} A_{ij}$, $\prod_{j=1}^{\beta} B_{ij}$, and their product, are conditionally decomposable, and in the dependency graph of the top product node there are no arrows pointing from the $B$-type children to the $A$-type children.

4. ,   .

We proceed to formalize our claim as follows:

###### Proposition 1.

Any SPQN that is composed of valid CMO nodes is tractable. Moreover, it is strongly conditionally sound.

###### Proof Sketch.

Since the internal sum and product nodes of a valid CMO are already conditionally D&C, it is only left to show that it is also conditionally sound. This is achieved by an induction argument on the depth of an SPQN composed of valid CMOs, where we assume all nodes up to a given depth are strictly positive, conditionally sound, and hence also represent valid distributions according to theorem 1. By this assumption, the internal sum and product nodes of a valid CMO node at the next depth also represent valid distributions, as they are already conditionally D&C. Hence we can directly compute its marginalization over the variables in the effective scope of the $B$-type children, which concludes the proof of conditional soundness. Strong conditional soundness follows from conditional soundness and the definition of the CMO, since placing $\ast$'s in all variables of the effective scope of the $B$-type children reduces the numerator to the denominator. See our complete proof in app. A.2. ∎

Unlike conditional soundness, it is practical to validate that all CMO nodes in a given SPQN are valid: simply start at the root and recursively validate that each of the children of a given node is valid, with the base case of CMO nodes connected to the indicator nodes, as governed by the first condition in def. 7. We then proceed to verify that the internal product and sum nodes follow the conditional D&C constraints, by simply testing their effective and conditioning scopes according to def. 3 and def. 4.

Though valid CMOs pave the way to tractable SPQNs, they raise the question of what we have lost in the process. Indeed, conditional soundness allows for a richer set of valid structures than valid CMOs, e.g. it allows the distributions at the denominator and numerator of a quotient node to be defined by completely different sub-graphs, unlike with CMOs that share children. While we have yet to determine if there is a significant expressivity gap between these two cases, an important property of an SPQN composed of valid CMOs is that any D&C SPN can be effectively represented by such a model (an edge case of SPNs which demands unique treatment is when there exists a sum node connected to just one of $1[X_i = 1]$ or $1[X_i = 0]$, but not both, while a valid CMO must have positive weights for both indicator leaves; in this scenario we can instead approximate the SPN arbitrarily well by approaching the zero weight), hence this restriction is at least as expressive as any D&C SPN. In sec. 4 we show that they are in fact significantly more expressive than SPNs.

### 3.2 The Generative Process of SPQNs

In prior sections we have presented our SPQN model, shown that it can be tractable under simple conditions, and, more importantly, that each of its internal nodes represents a conditional distribution over its scope. In this section we leverage these relations to describe the generative process of SPQNs, showing that sampling from an SPQN is just as efficient as inference, under the strong conditional soundness constraint. The ability to efficiently draw samples from a probabilistic model is a highly desirable trait with many applications, e.g. completing missing values and introspection of learned models.

Sampling from a tractable SPQN model follows the same general steps as sampling from a D&C SPN. We begin at the root node of the graph, and then stochastically traverse the nodes according to the parameters of the model, until we reach the indicator nodes, each representing the sampled value for its respective random variable. In SPNs, traversal follows two simple rules: (i) if we encounter a product node, then because it is decomposable, each child is a distribution over a separate set of variables, hence we can recursively sample from each child separately; (ii) if we encounter a sum node, then we sample one of its children according to the categorical distribution defined by their respective weights. Given that SPQNs are extensions of SPNs, their generative process can be seen as a generalization of the SPN traversal rules. However, their distinctive property of having nodes which represent conditional distributions calls for some adjustments. Namely, it is not only required to traverse the graph, but also to keep track of the values that have already been sampled so far in the process, and then pass them along to nodes which depend on them.

The above reasoning brings us to algo. 1, which receives as input a starting root node $v$ and a partial sample $x \in \{0, 1, \ast\}^N$, where $\ast$ denotes values which have yet to be sampled. Typically, the first call to algo. 1 will be with the root and $x = (\ast, \dots, \ast)$, i.e. sampling a complete instance, but it is often useful to also be able to sample from a conditional distribution (exactly sampling from a conditional distribution is possible only if it respects the dependencies induced by the model on the input variables), by calling algo. 1 with the observed values already in place. The inner-workings of algo. 1 follow the traversal workflow of SPNs as described above, with the following adjustments: (i) For quotient nodes, we directly traverse to the numerator child, as the denominator only serves as a normalization factor. (ii) For product nodes, though the effective scopes of the children are disjoint sets and could be processed separately as with SPNs, the dependencies induced by the conditioning scopes of each child require sampling according to the topological order of the dependency graph. Additionally, there is the possibility that the effective scope of some child has already been sampled, in which case we simply skip it. (iii) For sum nodes, the probability of sampling each child is no longer given just by its weights, but also by the marginal probability of the already sampled variables; namely, if $Q$ denotes the set of sampled variables then we can factor the conditional distribution of the sum node $v$, i.e. $P_v(\mathrm{eff}(v) \setminus Q \mid Q, \mathrm{cond}(v))$, as the following expression:

$$\sum_{c \in \mathrm{ch}(v)} P_c\!\left(\mathrm{eff}(c) \setminus Q \mid Q\right) \cdot \frac{w_c \cdot P_c\!\left(\mathrm{eff}(c) \cap Q \mid \mathrm{cond}(c)\right)}{\sum_{c' \in \mathrm{ch}(v)} w_{c'} \cdot P_{c'}\!\left(\mathrm{eff}(c') \cap Q \mid \mathrm{cond}(c')\right)}$$

where $P_c(\mathrm{eff}(c) \cap Q \mid \mathrm{cond}(c))$ can be computed by evaluating $\Psi_c$ with $\ast$'s in place of the yet-unsampled effective variables, according to strong conditional soundness, and thus the probability of sampling the child $c$ is proportional to $w_c \cdot P_c(\mathrm{eff}(c) \cap Q \mid \mathrm{cond}(c))$.

Finally, regarding the complexity of the sampling algorithm: traversing the computational graph is linear in the number of nodes, and while recomputing the marginal terms when sampling from sum nodes could be costly if done naively, in practice we can reuse prior computations to keep the overall cost linear in the size of the network. In this analysis we do not take into account the topological sort applied to the children of the product nodes, as this is a one-time operation that is not required for every sample. In conclusion, sampling from a tractable SPQN that is also strongly conditionally sound, e.g. one composed of valid CMOs, is just as efficient as with SPNs.
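The topological-order sampling described above can be illustrated on a toy two-node model: one node for $P(X_2)$, whose conditioning scope is empty, and a quotient node for $P(X_1 \mid X_2)$, which depends on it. The tables and helper names below are ours, and we drive the sampler with explicit uniform draws so the trace is deterministic.

```python
# A hedged sketch (our toy example, not the paper's algo. 1) of sampling in
# topological order of the dependency graph: X2 first (empty conditioning
# scope), then X1 conditioned on the already-sampled value of X2.

p_x2 = {0: 0.45, 1: 0.55}                                   # assumed tables
p_x1_given_x2 = {0: {0: 0.3, 1: 0.7}, 1: {0: 0.8, 1: 0.2}}

def sample_bit(dist, u):
    """Inverse-CDF draw of one bit from {0: p0, 1: p1}, given u in [0, 1)."""
    return 0 if u < dist[0] else 1

def ancestral_sample(u1, u2):
    # X2 has no dependencies, so it is sampled first; its sampled value is
    # then passed along to the quotient node representing P(X1 | X2).
    x2 = sample_bit(p_x2, u1)
    x1 = sample_bit(p_x1_given_x2[x2], u2)
    return x1, x2

sample = ancestral_sample(0.5, 0.1)
```

In a real SPQN the same bookkeeping happens per product node: children are visited in the topological order of the dependency graph, and sampled values are threaded into the conditioning scopes of later children.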

## 4 Analysis of Expressive Efficiency

In sec. 3, we have shown that tractable SPQNs extend D&C SPNs, and can thus efficiently replicate any tractable distribution that D&C SPNs can realize. In this section, we show a simple tractable distribution which SPQNs can realize, but D&C SPNs cannot, unless their size is exponential in the length of their input, where the size of an SPQN (or SPN) is defined as the number of its internal nodes. More specifically, we show that tractable SPQNs can represent a strictly positive distribution for sampling an undirected triangle-free graph on $n$ vertices, where each edge is represented by a binary random variable, while D&C SPNs of polynomial size cannot represent, or even approximate, such distributions.

First, let us formally define a strictly positive distribution over triangle-free undirected graphs on $n$ vertices. We define the binary random variables $\{X_{ij}\}_{1 \leq i < j \leq n}$, such that if $X_{ij} = 1$ then the edge $(i, j)$ is part of the graph, and it is not otherwise, and denote by $N = \binom{n}{2}$ the number of variables. For a given graph, we say it contains a triangle if and only if there are three vertices in the graph such that between any two of them there is an edge, i.e. there exist $i < j < k$ such that $X_{ij} = X_{jk} = X_{ik} = 1$. Finally, we say that a probability function $P$ on the edges is a strictly positive distribution on triangle-free graphs if it holds that $P(x) > 0$ if and only if $x$ represents a triangle-free graph.

The above definition falsely appears to lead to an efficient realization through SPNs of a strictly positive distribution on triangle-free graphs: simply define a node for each potential triangle, such that it is positive only if the triangle is legal, i.e. at least one of its edges is not part of the graph, and then take the product of all such nodes to guarantee all triangles are legal. More specifically, we can define a sum node for each triplet $i < j < k$, of which there are $\binom{n}{3}$ combinations, such that each sum node is a positive weighted sum of the indicators $1[X_{ij} = 0]$, $1[X_{jk} = 0]$, and $1[X_{ik} = 0]$, and then take the product of all of these sum nodes and modify their weights such that they output a normalized probability function. However, this SPN is not D&C, because each sum node does not meet the completeness condition, as its children have different scopes, e.g. $\mathrm{sc}(1[X_{ij} = 0]) \neq \mathrm{sc}(1[X_{jk} = 0])$, and because the product node over all sum nodes does not meet the decomposability condition, as each edge is present in multiple triplets, i.e. multiple child nodes, resulting in non-disjoint scopes. Because it is not D&C, computing its normalization factor is not tractable in practice. More generally, we can show that any D&C SPN approximating a strictly positive distribution on triangle-free graphs must be exponentially large:
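The naive construction above can be sketched by brute force (our illustration, feasible only for tiny $n$): one factor per triplet, positive iff at least one of the triplet's edges is absent, multiplied over all triplets. Enumerating all $2^N$ edge assignments confirms the product's support is exactly the triangle-free graphs, and also makes the paragraph's point concrete: without D&C, computing the normalization factor here costs an exponential sum.

```python
# A brute-force sketch of the naive product-of-constraints construction.
# For n = 4 there are N = 6 edge variables, so 2^6 = 64 assignments.
from itertools import combinations, product

def triangle_free(edges, n):
    """edges maps each pair (i, j), i < j, to 0/1."""
    return all(edges[(i, j)] * edges[(j, k)] * edges[(i, k)] == 0
               for i, j, k in combinations(range(n), 3))

def unnormalized(edges, n):
    # Product over triplets of a factor that is 1 iff the triplet is 'legal'
    # (at least one of its three edges absent), 0 if the triangle is present.
    out = 1
    for i, j, k in combinations(range(n), 3):
        out *= 1 - edges[(i, j)] * edges[(j, k)] * edges[(i, k)]
    return out

n = 4
pairs = list(combinations(range(n), 2))
support, Z = 0, 0
for bits in product((0, 1), repeat=len(pairs)):   # exponential normalization
    edges = dict(zip(pairs, bits))
    Z += unnormalized(edges, n)
    if triangle_free(edges, n):
        support += 1
```

Here `Z` equals the number of labeled triangle-free graphs on 4 vertices (41), exactly the support of the unnormalized function; a D&C structure would have let us evaluate `Z` in a single network pass instead.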

###### Theorem 2.

Let $P$ be a strictly positive distribution on triangle-free graphs of $n$ vertices. Suppose that $P$ can be approximated arbitrarily well by D&C SPNs of size $s(n)$. Then $s(n)$ must grow exponentially in $n$.

###### Proof Sketch.

We modify the proof of a similar theorem by Martens and Medabalimi (2014), which showed that a D&C SPN that can approximate arbitrarily well the probability function of the uniform distribution on the spanning trees of the complete graph must be of exponential size. See app. A.3 for the complete modification of that proof to our case. ∎

In contrast to D&C SPNs, tractable SPQNs can efficiently realize at least some strictly positive distributions on triangle-free graphs, with size at most polynomial in $n$. In the case of SPQNs built on CMOs, exact realization is replaced by arbitrarily good approximation, without any increase in size. This is formalized by the following theorem:

###### Theorem 3.

There exists a tractable SPQN exactly realizing a probability function $P$, such that $P$ is a strictly positive distribution on triangle-free graphs of $n$ vertices, where the size of the SPQN is polynomial in $n$. In the case of SPQNs composed strictly of CMOs, instead of exact realization, they can approximate said distribution arbitrarily well with size polynomial in $n$.

###### Proof Sketch.

Taking inspiration from the failed attempt to realize such a distribution via D&C SPNs, let us now construct a tractable SPQN which does realize such a distribution efficiently. As before, we begin by examining all potential triangles, but instead of directly modelling the constraints individually, we group them by their largest edge (according to lexical ordering). For each edge and its respective group of triangles, we can define the conditional probability of that edge conditioned on all other edges participating in these triangles, such that the conditional probability is non-zero only if the triangles which include this edge are not all fully part of the graph. For edges that are not the largest edge of any triangle, we simply define a sum node which represents an equal probability of including the edge or not. Finally, we can simply take the product of all conditional distributions of each edge, giving rise to a normalized probability function over all edges, which is non-zero if and only if the edges represent a triangle-free graph. See app. A.4 for our complete proof. ∎

To conclude, we have shown that tractable SPQNs, as well as ones composed of valid CMOs, are exponentially more expressively efficient than D&C SPNs.

## 5 Discussion and Related Works

In this work we address the limited expressive efficiency of SPNs, which Martens and Medabalimi (2014) have proven incapable of approximating even simple tractable distributions, unless their size is exponential in the number of variables. To mitigate this limitation, we have presented a novel extension to SPNs which we call Sum-Product-Quotient Networks, or SPQNs for short. SPQNs introduce a new node type that computes the quotient of its two inputs, which in turn enables us to relax the strict structural conditions commonly used to ensure the tractability of SPNs. By requiring less strict conditions for tractability, we have proven that SPQNs are a strict superset of SPNs, and moreover that SPQNs are exponentially more expressively efficient than SPNs.

There is a vast literature on analyzing the expressivity of arithmetic circuits (ACs) (Shpilka and Yehudayoff, 2010; Cohen et al., 2016; Cohen and Shashua, 2017; Cohen et al., 2017; Levine et al., 2017), and more particularly of SPNs (Delalleau and Bengio, 2011; Martens and Medabalimi, 2014). Notable amongst those is the work of Sharir and Shashua (2017), who compared the expressive efficiency of Convolutional ACs (ConvACs) having no overlapping receptive fields, which are equivalent to a sub-class of SPNs following a tree-structured partitioning of scopes, against ConvACs with overlaps, which have no equivalent D&C SPN. They found that simply introducing overlaps, i.e. breaking the decomposability condition, had the effect of exponentially increasing the expressive efficiency of the model. A closer examination of their overlapping ConvAC reveals that it shares the same construct as the numerator of our CMO nodes, but without the denominator, and thus their results could be trivially adapted to SPQNs following a similar architecture. This entails that not only are there some distributions which SPQNs can represent efficiently that SPNs cannot, as we have shown in sec. 4, but that almost all distributions realized by SPQNs cannot be realized by tree-like SPNs (not to be confused with Sum-Product Trees (Peharz et al., 2015), a far more restricted sub-class of SPNs in which every sum and product node has just a single parent, as opposed to limiting just the sum nodes to a single parent as in non-overlapping ConvACs), also known as Latent Tree Models (Mourad et al., 2013), unless they are of exponential size. Nevertheless, it is important to stress the importance of our own results, which separate SPQNs from D&C SPNs of any conceivable structure, and not just a small sub-class of SPNs.

Recently, Telgarsky (2017) examined the relations between neural networks and rational functions, i.e. quotients of two polynomials, as well as a model he called rational networks, which is a neural network with activation functions limited to rational functions. He found that a neural network with ReLU activations can be approximated arbitrarily well by a similarly sized rational network, and that the reverse is true as well. Though this might seem to suggest that SPQNs could be on par with neural networks, Hoover (1990) proved that any computable function can be realized by ACs – hence the power of SPQNs is not due to quotient nodes per se, but rather to their richer structure.

In the broader literature on ACs (Shpilka and Yehudayoff, 2010), the proposal of introducing a quotient node has been previously considered and deemed unnecessary. The argument is based on a proof that in circuits which compute polynomial functions, all quotient nodes can be replaced by just a single negation node, or in other words, that a quotient node does not add any power to ACs. Despite this negative outcome, it does not apply to our case on two counts: (i) It assumes the output of the circuit is identically a polynomial function rather than a rational function, and since the proof itself relies on the structural properties of the polynomial, namely its degree and homogeneous decomposition, it cannot be adapted to our case. (ii) It does not apply to monotone ACs, where the weights are restricted to be non-negative, as is the case of SPNs, where negation is not allowed. In this context, it was proven that even a single negation gate leads to an exponential separation from monotone circuits, and while quotient nodes could be replaced by negation, the reverse is not generally true, hence this last result does not trivialize our own. Overall, our results suggest that the role of quotient nodes should be reexamined for ACs.

While we prove that our SPQN model is exponentially more expressive than D&C SPNs, this increase in expressive efficiency does not come without a cost. One of the great advantages of SPNs is that they possess not only tractable inference, but also tractable marginalization (see sec. 2). This uncommon ability amongst generative models has many uses, e.g. for missing data (Rubin, 1976; Sharir et al., 2016). However, once we relax decomposability to conditional decomposability, SPQNs effectively induce a partial ordering on the input variables, which limits tractable marginalization to the subsets of variables that agree with the ordering. While there appear to be fewer tasks that benefit significantly from tractable marginalization compared to just tractable inference, in the cases where it is required, SPNs, even with their limited expressivity, still have an advantage over SPQNs. This is a limitation that we aim to address in future work, as detailed below. Additionally, Martens and Medabalimi (2014) have shown that under mild assumptions D&C is not only sufficient but also necessary for tractable marginalization, which entails that any possible relaxation of D&C would result in losing general tractable marginalization; hence this trade-off is not specific to SPQNs.
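A toy chain-rule example (our own illustration; the probability tables below are arbitrary) makes the asymmetry concrete: under the ordering x1 before x2, summing out x2 costs nothing because its conditional factor telescopes to one, while summing out x1 requires explicit summation over its values.

```python
# Toy illustration (not the paper's algorithm): a chain-rule factorization
#   p(x1, x2) = p1(x1) * p2(x2 | x1)
# over binary variables.  Marginalizing the "last" variable in the ordering
# is free, while marginalizing an earlier variable needs explicit summation.

p1 = {0: 0.3, 1: 0.7}                            # p(x1)
p2 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}  # p(x2 | x1), rows sum to 1

def joint(x1, x2):
    return p1[x1] * p2[x1][x2]

# Marginal over x2 (agrees with the ordering): just drop the conditional
# factor, since sum_x2 p2(x2 | x1) = 1.
def marg_x2(x1):
    return p1[x1]

# Marginal over x1 (violates the ordering): must sum explicitly.
def marg_x1(x2):
    return sum(joint(x1, x2) for x1 in (0, 1))
```

In a model with many variables the explicit sum grows with the number of marginalized out-of-order variables, which is the cost alluded to above.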

Other recent works on tractable generative models have mainly focused on the family of autoregressive models based on neural networks, most notable amongst them NADE (Uria et al., 2016), PixelRNN (van den Oord et al., 2016b), and PixelCNN (van den Oord et al., 2016a). Despite the significant differences between the underlying operations of SPQNs and these models, there are also some similarities and shared concepts. Specifically, both our model and theirs are based on inducing a partial ordering on the input variables and modelling the conditional probabilities between subsets of them, with the main difference being how these probabilities are represented. While they employ neural networks as black boxes to model them, we leverage interpretable SPNs to compose conditional distributions in a hierarchy. We conjecture that the embedded hierarchy of conditional distributions used in our model leads to an advantage in terms of expressive capacity, while the interpretable nature of our model's inner workings has many real-world applications.

Lastly, we conducted preliminary experiments demonstrating the practical advantages of SPQNs over SPNs in app. B. Nevertheless, it remains to be verified that their superior expressive power translates to real-world applications – a task we aim to tackle in future work. Beyond that, SPQNs give rise to many straightforward extensions:

1. Generative Classifier: SPQNs could be naturally extended to represent distributions conditioned on a given class, which is especially suitable for semi-supervised learning and for classification under missing data.

2. Tractable Marginalization: although marginalization is not naturally supported by SPQNs, they do induce normalized distributions over any subset of their input variables, which are not generally consistent with each other. Joint training of SPQNs on random subsets of their variables could be sufficient for ensuring the consistency of the induced marginal distributions.

3. Convolutional SPQNs: our model has a natural formulation as a ConvNet-like generative model following the theoretical architecture of ConvACs with overlaps (Sharir and Shashua, 2017). Combining the unparalleled success of ConvNets with the theoretical advantages of SPQNs has the potential to rival tractable generative models based on neural networks.

This paper has demonstrated the theoretical viability of Sum-Product-Quotient Networks, suggesting a promising outlook for the above research directions.

#### Acknowledgments

This work is supported by Intel grant ICRI-CI #9-2012-6133, by ISF Center grant 1790/12 and by the European Research Council (TheoryDL project).

## References

• Amer and Todorovic (2012) Mohamed R Amer and Sinisa Todorovic. Sum-product networks for modeling activities with stochastic structure. Computer Vision and Pattern Recognition, 2012.
• Amer and Todorovic (2016) Mohamed R Amer and Sinisa Todorovic. Sum Product Networks for Activity Recognition. IEEE transactions on pattern analysis and machine intelligence, 38(4):800–813, 2016.
• Cohen and Shashua (2017) Nadav Cohen and Amnon Shashua. Inductive Bias of Deep Convolutional Networks through Pooling Geometry. In International Conference on Learning Representations ICLR, April 2017.
• Cohen et al. (2016) Nadav Cohen, Or Sharir, and Amnon Shashua. On the Expressive Power of Deep Learning: A Tensor Analysis. In Conference on Learning Theory COLT, May 2016.
• Cohen et al. (2017) Nadav Cohen, Ronen Tamari, and Amnon Shashua. Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions. arXiv.org, 2017.
• Delalleau and Bengio (2011) Olivier Delalleau and Yoshua Bengio. Shallow vs. Deep Sum-Product Networks. Advances in Neural Information Processing Systems, pages 666–674, 2011.
• Dinh et al. (2017) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. In International Conference on Learning Representations ICLR, April 2017.
• Gens and Domingos (2012) Robert Gens and Pedro M Domingos. Discriminative Learning of Sum-Product Networks. Advances in Neural Information Processing Systems, 2012.
• Gens and Domingos (2013) Robert Gens and Pedro M Domingos. Learning the Structure of Sum-Product Networks. ICML, 2013.
• Hoover (1990) H James Hoover. Feasible Real Functions and Arithmetic Circuits. SIAM Journal on Computing, 19(1):182–204, February 1990.
• Kingma and Ba (2015) Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015.
• Levine et al. (2017) Yoav Levine, David Yakira, Nadav Cohen, and Amnon Shashua. Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design. arXiv.org, April 2017.
• Martens and Medabalimi (2014) James Martens and Venkatesh Medabalimi. On the Expressive Efficiency of Sum Product Networks. arXiv preprint arXiv:1411.7717, 2014.
• Mourad et al. (2013) Raphaël Mourad, Christine Sinoquet, Nevin Lianwen Zhang, Tengfei Liu, and Philippe Leray. A Survey on Latent Tree Models and Applications. Journal of Artificial Intelligence Research, 47(1):157–203, May 2013.
• Peharz et al. (2015) Robert Peharz, Sebastian Tschiatschek, Franz Pernkopf, and Pedro M Domingos. On Theoretical Properties of Sum-Product Networks. International Conference on Artificial Intelligence and Statistics, pages 744–752, 2015.
• Poon and Domingos (2011) Hoifung Poon and Pedro Domingos. Sum-Product Networks: A New Deep Architecture. In Uncertainty in Artificial Intelligence, 2011.
• Rubin (1976) Donald B Rubin. Inference and missing data. Biometrika, 63(3):581–592, December 1976.
• Sharir and Shashua (2017) Or Sharir and Amnon Shashua. On the Expressive Power of Overlapping Operations of Deep Networks. arXiv preprint arXiv:1703.02065, 2017.
• Sharir et al. (2016) Or Sharir, Ronen Tamari, Nadav Cohen, and Amnon Shashua. Tensorial Mixture Models. arXiv.org, October 2016.
• Shpilka and Yehudayoff (2010) Amir Shpilka and Amir Yehudayoff. Arithmetic Circuits: A survey of recent results and open questions. Foundations and Trends® in Theoretical Computer Science, 5(3–4):207–388, March 2010.
• Telgarsky (2017) Matus Telgarsky. Neural networks and rational functions. In International Conference on Machine Learning ICML, August 2017.
• Uria et al. (2016) Benigno Uria, Marc-Alexandre Cote, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural Autoregressive Distribution Estimation. Journal of Machine Learning Research, 17(205):1–37, 2016.
• van den Oord et al. (2016a) Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Koray Kavukcuoglu, Oriol Vinyals, and Alex Graves. Conditional Image Generation with PixelCNN Decoders. Advances in Neural Information Processing Systems, 2016a.
• van den Oord et al. (2016b) Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel Recurrent Neural Networks. In International Conference on Machine Learning, 2016b.

## Appendix A Deferred Proofs

This section contains proofs deferred from the body of the article.

### a.1 Proof of Theorem 1

We will prove the theorem by induction on the depth of the circuit rooted at a node v, i.e. the maximal length of a path connecting a leaf to v. Given that Ψ_v is a non-negative function, it is sufficient to show it is normalized, i.e. that for any fixed values of the variables outside eff(v), denoted by b where x_i = b_i if i ∉ eff(v), the following holds:

$$\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v),\ x_i\in\{0,1\}}}\Psi_v(x)=1$$

For the base case of the induction, of depth-1 SPQNs, v must be an indicator node, i.e. Ψ_v(x) = 𝟙[x_j = c] for some variable index j and value c, and so summing over all possible values of x_j is equal to 1, meeting the normalization condition. Let us now assume that our induction assumption holds for all circuits of smaller depth, and prove it also holds for the current depth. Since the depth of such an SPQN is greater than 1, the root node must be either a sum, product or quotient node, and not an indicator node. Additionally, because each child of the root can be viewed as a sub-circuit of smaller depth, according to the induction assumption each child represents a normalized probability function over the variables in its effective scope, for any fixed values of the variables outside it. Next we will use this property to show that for any possible node type, Ψ_v represents a normalized probability function.
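As a worked instance of the base case (the variable index j and value c are assumed names, since the original symbols were elided), the normalization amounts to:

```latex
% Base case: the depth-1 circuit is a single indicator leaf,
% \Psi_v(x) = \mathbb{1}[x_j = c], for an assumed variable index j
% and value c \in \{0,1\}.  Summing over the two assignments of x_j
% picks out exactly the one term with x_j = c:
\sum_{x_j \in \{0,1\}} \mathbb{1}[x_j = c]
  = \underbrace{\mathbb{1}[0 = c] + \mathbb{1}[1 = c]}_{\text{exactly one term equals }1}
  = 1
```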

If v is a quotient node, then according to conditional soundness Ψ_{de(v)} is a strictly positive function, and hence the output of the quotient operation is well defined. Additionally, conditional soundness also entails that Ψ_{de(v)} is a marginal conditional distribution of Ψ_{nu(v)}, and specifically, that summing Ψ_{nu(v)} over all possible values of the variables in eff(v) equals Ψ_{de(v)}(b), and thus:

$$\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v),\ x_i\in\{0,1\}}}\Psi_v(x)
=\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v),\ x_i\in\{0,1\}}}\frac{\Psi_{\mathrm{nu}(v)}(x)}{\Psi_{\mathrm{de}(v)}(x)}
=\frac{1}{\Psi_{\mathrm{de}(v)}(b)}\cdot\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v),\ x_i\in\{0,1\}}}\Psi_{\mathrm{nu}(v)}(x)
=\frac{1}{\Psi_{\mathrm{de}(v)}(b)}\cdot\Psi_{\mathrm{de}(v)}(b)=1$$

where we have used the fact that changing the values of the coordinates x_i for i ∈ eff(v) does not affect the value of Ψ_{de(v)}(x), as those variables do not appear in the scope of the denominator, in combination with the relationship between the sum over Ψ_{nu(v)} and Ψ_{de(v)}.

If v is a sum node, then according to conditional completeness the effective scopes of its child nodes are identical to its own effective scope. This also entails that their conditional scopes are identical as well, each being the complement of the effective scope with respect to the scope. We can also assume without loss of generality that the scope of each child equals that of v, as variables outside the scope of a node do not affect its output regardless of their value. Given the last assumption and the induction assumption, all the children of v represent conditional distributions over the same set of variables, and because the weights of v are normalized to sum to one, then:

$$\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v),\ x_i\in\{0,1\}}}\Psi_v(x)
=\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v),\ x_i\in\{0,1\}}}\ \sum_{c\in\mathrm{ch}(v)}w_c\cdot\Psi_c(x)
=\sum_{c\in\mathrm{ch}(v)}w_c\cdot\underbrace{\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v),\ x_i\in\{0,1\}}}\Psi_c(x)}_{=1}
=\sum_{c\in\mathrm{ch}(v)}w_c=1$$

where the inner sum equals one due to the normalization of the child nodes.

Finally, we will consider the case where v is a product node. Recall that conditional decomposability means that the effective scopes of the children of v are disjoint sets, and that the directed dependency graph formed by the children of v is acyclic. To prove this case, we will use a secondary induction over the number of children of v. In the base case of v having just a single child c, it holds that Ψ_v = Ψ_c, and thus it is a normalized probability function due to the primary induction assumption. Let us assume that our secondary induction assumption holds for v with k−1 children, and prove that it also holds for k children. Let c̄ be a child of v that is a sink node in the induced dependency graph, i.e. none of the variables in its effective scope are part of the conditional scope of another child, hence the following holds:

$$\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v),\ x_i\in\{0,1\}}}\Psi_v(x)
=\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v),\ x_i\in\{0,1\}}}\Psi_{\bar c}(x)\Bigg(\prod_{\substack{c\in\mathrm{ch}(v)\\ c\neq\bar c}}\Psi_c(x)\Bigg)
\overset{(1)}{=}\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v)\setminus\mathrm{eff}(\bar c),\ x_i\in\{0,1\}\\ \forall i\in\mathrm{eff}(\bar c),\ x_i=*}}\Bigg(\prod_{\substack{c\in\mathrm{ch}(v)\\ c\neq\bar c}}\Psi_c(x)\Bigg)\underbrace{\sum_{\substack{z\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(\bar c),\ z_i=x_i\\ \forall i\in\mathrm{eff}(\bar c),\ z_i\in\{0,1\}}}\Psi_{\bar c}(z)}_{=1}
\overset{(2)}{=}\sum_{\substack{x\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ x_i=b_i\\ \forall i\in\mathrm{eff}(v)\setminus\mathrm{eff}(\bar c),\ x_i\in\{0,1\}\\ \forall i\in\mathrm{eff}(\bar c),\ x_i=*}}\Bigg(\prod_{\substack{c\in\mathrm{ch}(v)\\ c\neq\bar c}}\Psi_c(x)\Bigg)=1$$

where the equality marked by (1) is due to decomposing the sum into two nested sums: an outer sum over the values of just the coordinates matching the variables in the effective scope of v that are not in the effective scope of c̄, and an inner sum over the remaining coordinates of the effective scope. Because the inner sum affects only the variables in eff(c̄), we can extract all other nodes out of it; this is due to our assumption that c̄ is a sink node, and hence eff(c̄) is not part of the conditional scopes of the other children, in addition to the fact that the effective scopes are disjoint sets. The equality marked by (2) is because Ψ_c̄ is a normalized probability function according to our primary induction assumption, hence the inner sum equals one. The final equality is due to our secondary induction assumption, as there are only k−1 child nodes left and thus that sum also equals one. This concludes the proof of both the secondary and the primary induction.
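The telescoping argument above can be checked numerically on a toy construction (our own, with arbitrary probability tables): three children whose dependency graph is the chain x1 → x2 → x3, where the sink child is summed out first.

```python
# Numeric check (toy construction, not from the paper) of the product-node
# argument: with conditionally decomposable children forming an acyclic
# dependency chain, summing out the sink child first telescopes to 1.

# Children over three binary variables with dependency x1 -> x2 -> x3:
#   c1(x1),  c2(x2 | x1),  c3(x3 | x2);  c3 is the sink node.
c1 = {0: 0.6, 1: 0.4}
c2 = {0: {0: 0.2, 1: 0.8}, 1: {0: 0.5, 1: 0.5}}
c3 = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}

def psi(x1, x2, x3):
    return c1[x1] * c2[x1][x2] * c3[x2][x3]

# Brute-force sum over all 8 assignments; analytically, summing x3 out
# first gives sum_x3 c3(x3 | x2) = 1, then x2, then x1, so the total is 1.
total = sum(psi(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
```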

### a.2 Proof of Proposition 1

By the second and third conditions in def. 7, all product and sum nodes in an SPQN composed of valid CMOs must be conditionally D&C, and thus, according to theorem 1, we only need to prove that it is conditionally sound for it to be tractable. We employ induction on the depth of the SPQN rooted at v, with the assumption that all SPQNs of smaller depth that are composed of valid CMO nodes are strongly conditionally sound, hence also valid distributions and strictly positive functions, and that for all z such that z_i = * for every i in the effective scope it holds that Ψ_v(z) = 1.

We begin with the base case of a CMO node connected to the two indicator leaf nodes of a single binary variable x_j, which according to def. 7 is the only valid CMO node that is connected to the leaves. In this case the output of the CMO node is equal to a single sum node computing w·𝟙[x_j = 0] + (1−w)·𝟙[x_j = 1], where 0 < w < 1 is strictly positive. Since the output is simply a single sum node over indicators of the same variable, it immediately follows that it is conditionally decomposable, complete and sound. Additionally, since the weights are strictly positive, the output of the node is also strictly positive for any value of x_j ∈ {0,1}. Finally, when setting x_j = *, both indicators evaluate to one and the output equals w + (1−w) = 1.
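A quick numeric sanity check of this base case (the weight value below is an arbitrary assumption):

```python
# Toy check of the base case: a sum node over the two indicator leaves of
# one binary variable,
#   psi(x) = w * [x == 0] + (1 - w) * [x == 1],   with assumed 0 < w < 1,
# is strictly positive on both assignments and normalized.
w = 0.35

def psi(x):
    return w * (x == 0) + (1 - w) * (x == 1)

values = [psi(0), psi(1)]  # strictly positive, and sums to one
```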

Let v denote the root CMO node of an SPQN of the current depth. Without loss of generality, we can assume that v has the form given in def. 6, with children Ψ_{a_1}, Ψ_{b_1}, …, Ψ_{a_γ}, Ψ_{b_γ}; otherwise we can substitute each of the products Ψ_{a_i} and Ψ_{b_i} with an auxiliary valid CMO node that computes just the product, i.e. one with no A-type children, which is trivially conditionally sound. Since we assume all the children of v represent strictly positive functions, and since the output of v is composed of products and weighted sums with positive weights, the output of v is also strictly positive. According to def. 7, the internal sum and product nodes of v are conditionally D&C, and thus their respective rooted sub-SPQNs are tractable by the induction assumption, which means they represent valid distributions. Additionally, def. 7 also entails that the effective scopes of the Ψ_{b_i} are all equal to eff(v), and that the variables of eff(v) do not appear in the scopes of the Ψ_{a_i}. Now, for any fixed values a of the variables outside eff(v), the following holds:

$$\sum_{\substack{z\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ z_i=a_i\\ \forall i\in\mathrm{eff}(v),\ z_i\in\{0,1\}}}\Psi_{\mathrm{nu}(v)}(z)
=\sum_{\substack{z\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ z_i=a_i\\ \forall i\in\mathrm{eff}(v),\ z_i\in\{0,1\}}}\sum_{i=1}^{\gamma}\Psi_{a_i}(z)\Psi_{b_i}(z)
\overset{(1)}{=}\sum_{i=1}^{\gamma}\Psi_{a_i}(a)\underbrace{\sum_{\substack{z\in\{0,1,*\}^N\\ \forall i\notin\mathrm{eff}(v),\ z_i=a_i\\ \forall i\in\mathrm{eff}(v),\ z_i\in\{0,1\}}}\Psi_{b_i}(z)}_{=1}
\overset{(2)}{=}\Psi_{\mathrm{de}(v)}(a)$$

where the equality marked by (1) is because the Ψ_{a_i} nodes are not affected by the changing coordinates specified by eff(v), while the equality marked by (2) follows from our induction assumption that the children already represent normalized probability functions, and thus summing over them equals one. This proves that the denominator is a marginal of the numerator, which proves that the SPQN rooted at v is conditionally sound. To prove that it is also strongly conditionally sound, we simply notice that for any z such that z_i = * for every i ∈ eff(v), it holds that Ψ_{b_i}(z) = 1 based on our induction assumption, as the effective scope of each Ψ_{b_i} equals eff(v), and thus:

$$\Psi_{\mathrm{nu}(v)}(z)=\sum_{i=1}^{\gamma}\Psi_{a_i}(z)\underbrace{\Psi_{b_i}(z)}_{=1}=\Psi_{\mathrm{de}(v)}(z)$$

which proves that the SPQN rooted at v is strongly conditionally sound. Additionally, from the conditional soundness property we have just proven, it thus follows that

$$\Psi_v(z)=\frac{\Psi_{\mathrm{nu}(v)}(z)}{\Psi_{\mathrm{de}(v)}(z)}=\frac{\Psi_{\mathrm{de}(v)}(z)}{\Psi_{\mathrm{de}(v)}(z)}=1$$

proving that all of our induction assumptions hold and completing our proof of the proposition.

### a.3 Proof of Theorem 2

We heavily base our proof on Martens and Medabalimi (2014), who have proven a very similar claim on a slightly different distribution on complete graphs, namely, that SPNs cannot approximate the uniform distribution on the spanning trees of a complete graph. Next, we go through the steps of their proof, citing the relevant lemmas, and highlighting the places where our proof diverges.

We begin by citing the following decomposition lemma, paraphrased to match the notations and definition of sec. 2:

###### Lemma 1 (paraphrase of theorem 39 of Martens and Medabalimi (2014)).

Suppose γ_1, γ_2, … are the respective outputs of a sequence of D&C SPNs of bounded size over binary variables, which converges point-wise (considered as functions of the inputs) to some function γ. Then we have that γ can be written as:

$$\gamma=\sum_{i=1}^{k}g_ih_i \tag{2}$$

where for all i it holds that g_i and h_i are real-valued non-negative functions of two respective sub-sets / tuples of the variables that form a partition of them, satisfying the balance conditions of the original lemma.

According to lemma 1, it is sufficient to show that if a function of the form of eq. 2 is equal to a strictly positive distribution over the triangle-free graphs, where the binary variables represent the edges of the graph, then k must be exponentially large in the number of vertices, because k is a lower bound on the size of any SPN approximating that distribution.

Because the functions that comprise γ are non-negative, γ(x) = 0 if and only if for all i it holds that g_i(x)h_i(x) = 0. Thus, if x represents a graph that is not triangle-free, then for every i either g_i or h_i vanishes on x. We will prove the bound on k by showing that each term can be non-zero on at most a small fraction of the triangle-free graphs, and more specifically, that it can be non-zero only on a small fraction of spanning trees, which are only a sub-set of all triangle-free graphs.

Let g and h be functions as above, defined on their respective sides of the partition, such that the presence of a triangle in the represented graph implies g(x) = 0 or h(x) = 0. Examining the possible triangles, we single out all the triangles whose edges are split between the variables of g and those of h. Notice that for such triangles the product g·h must employ a conservative strategy, as each function on its own sees only a part of the possible edges of the triangle and hence cannot decide whether all edges are in the graph or not. Martens and Medabalimi (2014) call such triplets of edges constraint triangles, and prove the following claims:

###### Claim 1 (Paraphrase of proposition 42 of Martens and Medabalimi (2014)).

Let three different edges form a constraint triangle with respect to g and h as above, such that if all three edges are part of the graph then the term vanishes. Additionally, suppose that two of the edges are in the same set of variables with respect to the partition. Then the following properties hold:

• for all values of the inputs such that both of those edges are present, i.e. both are part of the graph being represented.

• for all values of the inputs such that the remaining edge is present, i.e. it is part of the graph being represented.

###### Claim 2 (Paraphrase of lemma 43 of Martens and Medabalimi (2014)).

Given any partition of the edges of into disjoint sets , such that