1 Introduction
There exists a tradeoff between classical probabilistic graphical models and recent tractable probabilistic models. Classical models, such as Bayesian networks (BNs) [pear88], provide high-level interpretability, as conditional independence assumptions are directly reflected in the underlying graphical structure. However, the downside is that performing exact inference in BNs is NP-hard [coop90]. In contrast, modern tractable probabilistic models, like sum-product networks (SPNs) [darwiche09, poon2011sum], allow a wide range of tractable inference, but are harder to interpret. In order to combine the advantages of both BNs and SPNs – which are complementary regarding interpretability and inference efficiency – tools to convert back and forth between these types of models are vital.
The direction of compiling BNs into SPNs is well understood due to Darwiche's seminal work on arithmetic circuit (AC) compilation [darwiche09] (ACs and SPNs are equivalent models: deterministic models are typically referred to as ACs, while non-deterministic models are called SPNs; see Section 2 for further details). Since inference in ACs/SPNs can be performed in time linear in the network size, BN compilation amounts to finding an inference machine with minimal inference cost. ACs can also take advantage of context-specific independence [bout96] in the BN parameters to further reduce the size of the AC.
The converse direction of SPN decompilation into BNs has received limited attention. This lack of attention can be understood historically: since an original purpose of ACs was to serve as an efficient inference machine for a known BN, decompilation would seem like a mere academic exercise. The proposition of SPNs, however, introduced some practical changes to the AC model. First, unlike ACs, SPNs are typically learned directly from data, i.e., a reference BN is not available. Thus, providing a corresponding BN would greatly improve the interpretability of the learned SPN. Second, as already mentioned, SPNs are typically non-deterministic, which naturally introduces an interpretation of SPNs as hierarchical latent variable models [Peharz:2016wl, choi2017relaxing]. A decompilation algorithm for SPNs should account for this fact, and generate BNs with a plausible set of latent variables. Note that BNs which do not adequately account for latent variables tend to be densely connected, see e.g. [elidan2001discovering]. Thus, naively decompiling SPNs with a decompilation algorithm devised for ACs would yield densely connected and rather uninterpretable BNs.
In this paper, we address this gap by formalizing SPN decompilation. We propose SPN2BN, an algorithm that converts a trained SPN into a BN. Our algorithm arguably improves over the only two other approaches in the literature addressing the connection between BNs and SPNs, [zhao2015relationship] and [Peharz:2016wl]. First, while both of these approaches produce a BN for a given SPN, these BNs are, in general, not minimal independence maps (I-maps) [darwiche09], i.e., they introduce unnecessary dependency assumptions. Our algorithm SPN2BN, on the other hand, produces minimal I-maps. Second, both [zhao2015relationship, Peharz:2016wl] introduce an excessive number of latent variables. In fact, both approaches interpret each single sum node in an SPN as a latent variable on its own. In this paper, we devise a more economical approach and identify groups of sum nodes to jointly represent one latent variable. This grouping is based on whether sum nodes are "on the same level of circuit hierarchy" and "responsible" for the same set of observable variables (these notions will be made formal in Section 3).
These design choices for SPN2BN improve over [zhao2015relationship] and [Peharz:2016wl] both in terms of a reduced number of BN nodes (latent variables) and a reduced number of edges (minimal I-mapness). While this design leads to more succinct and perhaps more esthetic BNs, SPN2BN is also justified in a formal way. We show that SPN2BN can be seen as the inverse of the compilation process proposed in [darwiche09]. Consider a BN that was compiled into an AC with variable elimination following a reverse topological order (VErto). Convert the AC into an SPN with an optional marginalization operation (SPNs are closed under marginalization, that is, any sub-marginal of an SPN can again be represented as an SPN [darwiche09]), creating latent variables by rendering random variables unobserved. Then, decompiling this SPN with SPN2BN yields a BN whose set of directed edges is a superset of the original BN's, and which we call the moral closure of the original BN. The SPN2BN algorithm is consistent with respect to a compilation algorithm by always yielding the moral closure of any given BN. Consistency is arguably a desirable property for any decompilation method, since it allows us to exactly characterize the output BN with respect to the input BN. This, in turn, allows us to establish idempotence between compiler and decompiler. In other words, the result of applying the compilation-decompilation process once is the same as the result of applying the process twice. In contrast, [zhao2015relationship] and [Peharz:2016wl] are not consistent with any general-purpose compilation algorithm, and tend to increase the number of variables and edges in the constructed BN. Lastly, even when the input SPN does not stem from an assumed compiler, e.g., when it is learned from data, the VErto compilation assumption within SPN2BN helps us to interpret the result of decompilation. For example, consider a prominent example of a BN in Figure 1
(subfig:bn_hmm), commonly known as a hidden Markov model (HMM). In Figure 1 (subfig:spn_hmm), we see the result of VErto, followed by marginalization of the three hidden-state variables, deeming them latent. This SPN shall be converted back into a BN. In Figure 2 (subfig:zhao_hmm, subfig:peharz_hmm) we see the BNs produced by [zhao2015relationship] and [Peharz:2016wl], respectively. Both BNs introduce more variables than were present originally, and the introduced edges hardly reflect the succinct independence assumptions of the HMM. In Figure 2 (subfig:us_hmm), the decompilation result of our SPN2BN algorithm is depicted. It can be seen that SPN2BN recovers the original HMM structure, where three "new" latent variables have been introduced, in one-to-one correspondence with the original latent variables. Evidently, no decompilation is able to recover the original labels for these variables, since reference to them has been explicitly removed by the previous (optional) marginalization operation. However, we see that SPN2BN successfully detects their signature in the compiled SPN, enabling it to recover an equivalent set of latent variables. Our theoretical results are empirically confirmed on a systematic range of compiled BNs, namely, all possible connected BNs containing up to 7 variables using all possible VErtos, followed by marginalization of all internal variables. Our algorithm SPN2BN recovers the moral closure of the original BN in every case.
2 Sum-Product Networks
Here, we review BNs, ACs, and SPNs, as well as the compilation of BNs into SPNs.
We denote random variables (RVs) by uppercase letters, possibly with subscripts, and their values by corresponding lowercase letters. Sets of RVs are denoted by boldface uppercase letters and their combined values by corresponding boldface lowercase letters. The children of a variable in a directed acyclic graph (DAG) are its immediate descendants, and the parents of a variable are its immediate ancestors. The descendants of a variable are those variables reachable from it by a directed path. A variable forms a v-structure in a DAG if it has two non-adjacent parents.
Before defining a BN, we review two key concepts: d-separation and I-maps. The independence information encoded in a DAG can be read graphically by the d-separation algorithm in linear time [geigerVermaPearl89].
Definition 1.
[pear88] If X, Y, and Z are three disjoint subsets of nodes in a DAG G, then Z is said to d-separate X from Y, denoted I(X, Z, Y), if along every path between a node in X and a node in Y there is a node W satisfying one of the following two conditions: (i) W has converging arrows and none of W or its descendants are in Z, or (ii) W does not have converging arrows and W is in Z.
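Definition 1 can be made concrete in code. The following is a minimal sketch (ours, not the paper's) of a d-separation test on small DAGs: `dag` maps each node to a list of its children, and we deliberately enumerate undirected paths to mirror the "every path is blocked" reading of the definition, rather than using the linear-time algorithm of [geigerVermaPearl89], so it is only suitable for tiny graphs.

```python
from itertools import chain

def d_separated(dag, xs, ys, zs):
    """Check d-separation of node sets xs and ys given zs in a DAG.

    dag: dict mapping each node to a list of its children.
    Mirrors Definition 1: every undirected path between xs and ys
    must contain a blocking node W.
    """
    parents = {n: [] for n in dag}
    for u, chs in dag.items():
        for c in chs:
            parents[c].append(u)

    def descendants(n):
        out, stack = set(), list(dag[n])
        while stack:
            m = stack.pop()
            if m not in out:
                out.add(m)
                stack.extend(dag[m])
        return out

    def paths(src, dst, visited):
        # enumerate all undirected simple paths src..dst
        if src == dst:
            yield [src]
            return
        for nxt in chain(dag[src], parents[src]):
            if nxt not in visited:
                for rest in paths(nxt, dst, visited | {nxt}):
                    yield [src] + rest

    def blocked(path):
        for i in range(1, len(path) - 1):
            a, w, b = path[i - 1], path[i], path[i + 1]
            collider = w in dag[a] and w in dag[b]  # a -> w <- b
            if collider:
                # condition (i): collider with no evidence below it
                if w not in zs and not (descendants(w) & zs):
                    return True
            elif w in zs:
                # condition (ii): non-collider in the conditioning set
                return True
        return False

    return all(blocked(p)
               for x in xs for y in ys
               for p in paths(x, y, {x}))
```

As expected, conditioning on the middle of a chain blocks it, while conditioning on a collider activates the path between its parents.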
The next definition formalizes when a DAG is an Imap of a joint probability distribution (JPD).
Definition 2.
[darwiche09] Let G be a DAG and P a JPD over the same set of variables. G is an I-map of P if and only if every conditional independence read by d-separation on G holds in the distribution P. An I-map G is minimal if it ceases to be an I-map when we delete any edge from G.
BNs are DAGs with nodes representing variables and edges representing variable dependencies, in which the strength of these relationships is quantified by conditional probability tables (CPTs).
Definition 3.
[pear88] Given a JPD P on a set of variables, a DAG G is called a Bayesian network (BN) of P if G is a minimal I-map of P.
A BN has one CPT for each variable, conditioned on that variable's parents. One salient feature is that the product of the BN CPTs yields a JPD over the set of all variables. In a BN, the independencies read by d-separation in the DAG are guaranteed to hold in the JPD. Unfortunately, while BNs have clear interpretability, exact inference in BNs is NP-hard [coop90].
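As a toy illustration of this factorization (the two-variable BN A → B and its CPT numbers are hypothetical, not from the paper), the product of the CPTs sums to one over all joint configurations:

```python
from itertools import product

# Hypothetical BN A -> B over binary values.
p_a = {0: 0.3, 1: 0.7}                    # CPT P(A)
p_b_given_a = {(0, 0): 0.9, (1, 0): 0.1,  # CPT P(B | A), keyed by (b, a)
               (0, 1): 0.2, (1, 1): 0.8}

def joint(a, b):
    """BN factorization: P(a, b) = P(a) * P(b | a)."""
    return p_a[a] * p_b_given_a[(b, a)]

# The product of the CPTs is a proper JPD: it sums to one.
total = sum(joint(a, b) for a, b in product([0, 1], repeat=2))
```

Marginals such as P(B) can then be read off the joint by summing out A.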
BNs can be compiled into Arithmetic Circuits (ACs) [darwiche09] by graphically mapping the operations performed when marginalizing all variables from the BN.
Definition 4.
[darwiche09] An arithmetic circuit (AC) over variables is a rooted DAG whose leaf nodes are labeled with numeric constants, called parameters, or with variables, called indicators, and whose other nodes are labeled with multiplication and addition operations.
Notice that parameter variables are set according to the BN CPTs, while indicator variables are set according to any observed evidence.
SPNs are probabilistic graphical models that can be learned from data using, for instance, the LearnSPN algorithm [gens2013learning].
Definition 5.
A sum-product network (SPN) is a DAG containing three types of nodes: leaf distributions, sums, and products. Leaves are tractable distribution functions over a subset of the RVs. Sum nodes compute weighted sums of their children, where the weights are assumed to be nonnegative and normalized [peharz2015theoretical]. Product nodes compute the product of their children. The value of an SPN is the value of its root.
The scope of a sum or product node is recursively defined as the union of the scopes of its children, while the scope of a leaf distribution is the set of variables over which the distribution is defined. A valid SPN defines a JPD and allows for efficient inference [poon2011sum]. The following two structural constraints on the DAG guarantee validity. An SPN is complete if, for every sum node, its children have the same scope. An SPN is decomposable if, for every product node, the scopes of its children are pairwise disjoint. Valid SPNs are of particular interest because they represent a JPD over the variables in the problem domain. In addition, like ACs, exact inference is linear in the size of the DAG. Unlike ACs, however, SPNs allow for a latent variable (LV) interpretation [Peharz:2016wl].
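The notions of scope, completeness, and decomposability can be sketched as follows; the `Leaf`, `Sum`, and `Product` classes and the `is_valid` check are illustrative stand-ins under our own toy representation, not an implementation from the paper:

```python
class Leaf:
    """Tractable univariate distribution over one variable."""
    def __init__(self, var, dist):          # dist: value -> probability
        self.var, self.dist = var, dist
        self.children = []
    def scope(self):
        return frozenset({self.var})
    def value(self, x):                     # x: dict variable -> value
        return self.dist[x[self.var]]

class Sum:
    """Weighted sum over children (weights nonnegative, normalized)."""
    def __init__(self, children, weights):
        self.children, self.weights = children, weights
    def scope(self):                        # scope = union of child scopes
        return frozenset().union(*(c.scope() for c in self.children))
    def value(self, x):
        return sum(w * c.value(x)
                   for w, c in zip(self.weights, self.children))

class Product(Sum):
    """Product over children; reuses Sum's recursive scope."""
    def __init__(self, children):
        self.children = children
    def value(self, x):
        out = 1.0
        for c in self.children:
            out *= c.value(x)
        return out

def is_valid(node):
    """Completeness: a sum's children share one scope.
    Decomposability: a product's children have disjoint scopes."""
    if isinstance(node, Product):
        seen = set()
        for c in node.children:
            if seen & c.scope():
                return False
            seen |= c.scope()
    elif isinstance(node, Sum):
        if len({c.scope() for c in node.children}) > 1:
            return False
    return all(is_valid(c) for c in node.children)
```

A valid SPN built this way sums to one over all complete assignments, consistent with it representing a JPD.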
In [poon2011sum], it was suggested that SPN sum nodes can be interpreted as LVs by simply connecting indicator variables for each sum node's child. However, this naive augmentation of the model renders the SPN incomplete. [Peharz:2016wl] remedy this problem by augmenting an SPN with twin sum nodes. We next overview the process of augmenting an SPN as suggested in [Peharz:2016wl]. First, each sum node is explicitly represented as an LV by connecting an indicator variable for each of its children via a product node. Second, twin sum nodes are added to fix the completeness problem as follows. The twin of a sum node has the same number of weights as the original and has the indicator variables as children. A sum node is called conditioning for another sum node if one of its children satisfies a scope condition detailed in [Peharz:2016wl]. A twin sum node is added for each sum node that has at least one conditioning sum node, and every child of the twin is connected using the previously added product nodes. (See AugmentSPN in [Peharz:2016wl] for complete details.)
3 SPN Decompilation
In this section, we formalize SPN decompilation into a BN.
The interpretation of SPN latent space is, at first glance, akin to reading tea leaves. For instance, every SPN sum node can be viewed as an LV on its own, as done in [zhao2015relationship, peharz2015theoretical]. In stark contrast, all SPN sum nodes can be interpreted as one single LV [Peharz2015thesis]. This provides a wide spectrum of interpretations based only on those sum nodes appearing in an SPN. In addition, external LVs can be introduced to an SPN, such as the switching parents in [peharz2015theoretical]. Thus, for a given SPN learned from data, there are seemingly countless possible interpretations of its latent space.
However, if we assume that a given SPN was compiled using a well-known procedure, we can make more useful interpretations of the latent space. A compilation assumption is introduced to guide the interpretation of the SPN latent space. Although there are different methods for compiling an SPN, we suggest the use of a compilation assumption that matches the common understanding of an SPN as a hierarchical mixture model [peharz2015theoretical]. More specifically, the compilation assumption used in this paper is that a given SPN was compiled by Algorithm 1 from a BN, where all BN variables were eliminated using the inference algorithm Variable Elimination (VE) [zhan94]
following any reverse topological order (VErto). The intuition behind this assumption is that the recursive marginalization of variables during VE is responsible for forming the hierarchical layers of sum nodes in an SPN. This sum-layer hierarchy is consistent with the natural interpretation of an SPN as a hierarchical Gaussian mixture model, as suggested in [Peharz:2016wl].

In Algorithm 1, we start by converting a given BN into an AC in line 4 using VErto, as described in [darwiche09]. Then, in line 6, we remove the leaf parameters by equivalently redistributing their values as sum weights, yielding an SPN. In line 7, we assume that all internal latent variables are marginalized and, thus, all of their indicator variables are set to 1. Here, any arbitrary subset of the internal latent variables can be considered instead. Next, we recursively simplify the SPN by applying three operations until no further change can be made. In line 11, each sum node with only indicator nodes as children is converted into a terminal node [zhao2015relationship], which is a univariate distribution over the indicator variable. Product nodes with only product node children are simplified into a single product node in line 12. Finally, in line 14, if two or more product nodes have the same set of children, then they are lumped into a single product node. These three operations are repeated until no further change can be made.
On the other hand, by decompilation, we mean the procedure of converting an SPN into a BN. This process involves determining the RVs and a DAG for a BN. We can suggest RVs for the BN by analyzing the compilation assumption. Similarly, a BN DAG can be obtained as an Imap from the SPN DAG. We now formalize these ideas.
Definition 6.
Given an SPN over RVs and a compilation assumption, SPN decompilation is an algorithm that both:

(i) suggests a set of LVs, and

(ii) produces an I-map over the RVs and the suggested LVs.
Task (i) of SPN decompilation is more involved than one might expect. A naive approach is to disregard the compilation assumption and treat each sum node as its own LV. Negative consequences of this approach will be discussed in the next section. We suggest a more elegant approach by interpreting the effect of the compilation assumption on graphical characteristics of the SPN.
Recall that we assume the SPN was compiled using VErto. During compilation of the SPN, marginalizing variables creates groups of sum nodes in the same layer (determined by the length of the longest path from the root). Hence, identifying these groups is a way of suggesting RVs for the decompiled BN.
More formally, given a sum node, its sum-depth is the number of sum nodes on the longest directed path from the root to it, the node itself excluded.
Example 1.
The sum-depth of sum node in the SPN of Figure 1 (subfig:spn_hmm) is 2, since there are 2 sum nodes on the longest path from the root to it. Similarly, the sum-depth of is 1 and of is 0.
A sum-layer is the set of all sum nodes having the same sum-depth.
Example 2.
One sum-layer in the SPN of Figure 1 (subfig:spn_hmm) consists of and , since both and have a sum-depth of 2. Furthermore, and form another sum-layer, as does by itself.
A sum-region is the set of all sum nodes within the same sum-layer having the same scope.
Example 3.
Sum-layer and in the SPN of Figure 1 (subfig:spn_hmm) has only one sum-region, since and have the same scope. For the same reason, sum-layer and also has only one sum-region.
A sum-region is created by marginalizing variables during our compilation assumption. Therefore, to answer task (i) of SPN decompilation, we suggest that the set of LVs consists of one LV per sum-region.
Example 4.
In the SPN of Figure 1 (subfig:spn_hmm), we suggest three LVs, one per sum-region.
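The grouping just illustrated can be sketched as follows, assuming a toy dictionary representation of an SPN (the node names and the 'type'/'children'/'scope' fields are our own illustrative conventions, and the walk re-visits shared nodes, so this is only meant for small circuits):

```python
from collections import defaultdict

def sum_regions(spn, root):
    """Group sum nodes into sum-regions (same sum-depth and scope).

    `spn` maps a node name to a dict with 'type' ('sum', 'prod',
    or 'leaf'), 'children' (list of names) and, for leaves,
    'scope' (an iterable of variable names).
    """
    scope_cache = {}
    def scope(n):
        if n not in scope_cache:
            nd = spn[n]
            scope_cache[n] = (frozenset(nd['scope'])
                              if nd['type'] == 'leaf' else
                              frozenset().union(*(scope(c)
                                                  for c in nd['children'])))
        return scope_cache[n]

    depth = defaultdict(int)          # sum node -> sum-depth (longest path)
    def walk(n, d):
        if spn[n]['type'] == 'sum':
            depth[n] = max(depth[n], d)
            d += 1                    # one more sum ancestor below this node
        for c in spn[n]['children']:
            walk(c, d)
    walk(root, 0)

    regions = defaultdict(list)       # one latent variable per key
    for n, d in depth.items():
        regions[(d, scope(n))].append(n)
    return dict(regions)
```

Each key of the returned dictionary (a depth/scope pair) corresponds to one suggested LV.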
We now turn our attention to task (ii) of SPN decompilation, that is, constructing an I-map over the RVs and LVs. We augment the SPN as done in [Peharz:2016wl]. However, before continuing, we need to correct the notion of a conditioning sum node, for the following reason. Consider sum node in the SPN of Figure 1 (subfig:spn_hmm). [Peharz:2016wl] would not define sum node as a conditioning sum node for , even though would appear as a conditioning variable for in the CPT, as depicted in the constructed I-map in Figure 2 (subfig:peharz_hmm).
Definition 7.
An ancestor sum node of a node in an augmented SPN is called conditioning if at least one child of the ancestor reaches a different set of twins for the node.
Example 5.
Consider sum node in the SPN of Figure 1 (subfig:spn_hmm). Sum node is conditioning w.r.t. since the leftmost child of reaches , but the rightmost child would reach a twin of in the augmented SPN. Node is not conditioning for , since all children of reach the same set of twins for in the augmented SPN.
In Example 5, observe that is not a conditioning sum node for and hence does not appear as a parent of in our constructed I-map in Figure 2 (subfig:us_hmm).
The SPN decompilation techniques described thus far are formalized as Algorithm 2.
4 Theoretical Foundation
In this section, we first establish important properties of both SPN decompilation phases. Later, we show a favorable characteristic of our compilation assumption and Algorithm 2.
4.1 On SPN Decompilation
Our decompilation algorithm is parsimonious with the introduction of LVs: one LV is assigned to each group of sum nodes forming a region, rather than one LV per sum node.
Regarding the I-map construction, we first show the correctness of the I-map, and then establish that it is minimal.
The I-map correctness follows from the CPT construction suggested in [Peharz:2016wl]. Theorem 1 in [Peharz:2016wl] shows conditional independencies among LVs, which allows us to derive an I-map encoding such independencies. More specifically, it describes the CPT values of an LV conditioned on all of its LV ancestors. The I-map we construct in Algorithm 2 implies the same CPT values, except that each LV is conditioned only on its conditioning sum nodes, as defined by Definition 7. Since the conditioning nodes form a subset of the ancestors, the independencies represented in the I-map of [Peharz:2016wl] are a subset of those in the I-map built by Algorithm 2. The proof of these new conditional independencies and, thus, of the correctness of our I-map, is formalized in Lemma 1.
Lemma 1.
Consider an augmented SPN with a sum node. Let A be the set of all of the node's LV ancestors and C ⊆ A its conditioning sum nodes. Then, the node is conditionally independent of A \ C given C.
Proof.
Let N = A \ C be the set of non-conditioning sum node ancestors, where A denotes all ancestors and C the conditioning ones. By Definition 7, for every ancestor in N, all of that ancestor's children reach the same set of twins for the sum node. That is, conditioning on any ancestor in N does not change the value of the sum node's CPT. By definition, the sum node is then independent of N given C. ∎
We next show that our constructed Imaps are minimal.
Theorem 1.
Given an augmented SPN over RVs and LVs, the I-map built by Algorithm 2 is minimal.
Proof.
In a minimal I-map, if an edge is removed, then d-separation [pear88] reads an independence that does not hold in the JPD. Consider the I-map constructed by Algorithm 2 from an augmented SPN. Toward a contradiction, assume that this I-map is not minimal, that is, some edge can be removed. After removing this edge, d-separation would read an independence between its two endpoints. We show next that this conditional independence does not hold in the JPD encoded by the SPN.

By line 2 of Algorithm 2, the tail of the removed edge is a conditioning sum node of its head. By Definition 7, there exist two children of the conditioning node that reach different sets of twins for the head node. Then, by construction of the JPD encoded over the RVs and LVs, the event of selecting the first child yields a different outcome distribution than selecting the second child. Thus, there is no conditional independence between the two endpoints, contradicting the assumption that the I-map is not minimal. Therefore, the I-map must be minimal. ∎
One seeks minimal I-maps because non-minimal I-maps are not necessarily useful in practice [darwiche09, pear88, koll09].
4.2 Compilation and Decompilation
In this section, we first show that the compilation-decompilation algorithm, called BN2SPN2BN, constructs the same unique BN for a whole set of original BNs. A consequence of this is that BN2SPN2BN is idempotent.
Algorithm 3, called BN2SPN2BN, formalizes the process of compiling and decompiling a BN by applying BN2SPN followed by SPN2BN.
The next example shows that the BN output by BN2SPN2BN can differ from the original BN.
Example 7.
Notice that the directed edges of the original BN are a subset of those in the output BN.
In the remainder of this section, we assume a fixed topological ordering of the given BN.
Definition 8.
A directed moralization edge is a directed edge added between two non-adjacent vertices in a given BN whenever they share a common child, directed from the earlier to the later vertex in the topological order.
Example 8.
Definition 9.
A directed ancestral moralization edge is a directed edge added between two non-adjacent vertices X and Y whenever there exists a variable Z with edges X → Z and Y → Z, where these edges may themselves be original BN edges or previously added (ancestral) moralization edges; the new edge is directed from the earlier to the later of X and Y in the topological order.
Example 9.
By definition, directed moralization edges are also directed ancestral moralization edges. We can now introduce the key notion of moral closure.
Definition 10.
Given a BN and a fixed topological order of it, the moral closure of the BN is the unique BN formed by augmenting it with all directed ancestral moralization edges.
Example 10.
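The moral closure can be computed by simulating variable elimination in reverse topological order, marrying the current parents of each eliminated node and directing every fill-in edge from the earlier to the later endpoint. The sketch below is our own (with `parents` mapping each node to its parent set, our assumed representation); it also exhibits the idempotence established later in Theorem 3:

```python
def moral_closure(parents, order):
    """Directed moral closure of a BN under topological order `order`.

    `parents` maps each node to the set of its parents. We eliminate
    nodes in reverse topological order: the parents of each eliminated
    node are pairwise connected ("married"), and every fill-in edge is
    directed from the earlier to the later endpoint in `order`. The
    accumulated fill-ins are the directed (ancestral) moralization
    edges of Definitions 8-10.
    """
    pos = {v: i for i, v in enumerate(order)}
    pa = {v: set(ps) for v, ps in parents.items()}
    for v in reversed(order):
        marry = sorted(pa[v], key=pos.get)   # current parents of v
        for i, a in enumerate(marry):
            for b in marry[i + 1:]:
                pa[b].add(a)                 # fill-in edge a -> b
    return pa
```

On the v-structure A → C ← B this adds the directed moralization edge A → B, while an HMM chain, having no v-structures, is left unchanged.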
We are now ready to present the first main result of our compilationdecompilation process.
Theorem 2.
Given a BN and a fixed topological order of it, the output of the compilation-decompilation algorithm BN2SPN2BN is the moral closure of the BN.
Proof.
(Crux). BN2SPN2BN is formed by BN2SPN followed by SPN2BN. We need to show where the directed ancestral moralization edges are introduced.
In BN2SPN, compiling a BN to an AC in line 4 uses the procedure described in [darwiche09], which pragmatically executes the summing out of variables. As also discussed in [darwiche09], VE induces a graph in which parents of a common child are married; these are the directed moralization edges. Later, during elimination, VE adds fill-in edges among the neighbors of eliminated variables, following the elimination order. These are the directed ancestral moralization edges.
In SPN2BN, the decompilation procedure follows the hierarchical order of sum nodes in the SPN graph when building the BN. Thus, variables that are eliminated first in BN2SPN appear in lower layers of the SPN. The connection of variables in the BN follows this hierarchy, which, by construction, yields all directed ancestral moralization edges from BN2SPN. ∎
Theorem 2 has a couple of important consequences. As Theorem 2 establishes that the output of BN2SPN2BN is the moral closure of the input BN, it immediately follows that the output BN is exactly the input BN whenever no directed ancestral moralization edges are added. One situation where this occurs is when the input BN does not have any v-structures, as in the case of HMMs. Here, the moral closure equals the BN itself, so the output BN of BN2SPN2BN is the same as the input (up to a relabelling of variables). For example, recall the HMM in Figure 1 (subfig:bn_hmm); BN2SPN2BN yielded back the same BN, as illustrated in Figure 2 (subfig:us_hmm). A second important case is when the input is a moral closure itself. This leads to our next result, showing that our compilation-decompilation process is idempotent.
Theorem 3.
BN2SPN2BN is idempotent.
Proof.
Let B be a BN with a fixed topological ordering. By Theorem 2, applying BN2SPN2BN to B yields the moral closure of B. By definition, the moral closure of a moral closure is itself. Thus, applying BN2SPN2BN to the moral closure of B again yields the moral closure of B. Therefore, BN2SPN2BN is idempotent. ∎
The idempotence of BN2SPN2BN is useful in practice, since it bounds the size of the decompiled BN by the size of the moral closure. In contrast, if we change the decompilation method to [zhao2015relationship] or [Peharz:2016wl], then applying BN2SPN2BN over and over will always yield a larger and larger BN.
It is perhaps worth mentioning here that the directed concepts used in this section have undirected counterparts in the literature. The notion of a directed moralization edge corresponds to an undirected edge added during the process of moralizing a BN [jens88]. Furthermore, directed ancestral moralization edges correspond to fill-in edges added between non-adjacent neighbours in an undirected graph when triangulating a BN [darwiche09, pear88, koll09]. In fact, a reverse topological ordering of the moral closure of a BN yields a perfect numbering: eliminating variables adds no fill-in edges, which means that the skeleton of the moral closure is triangulated.
5 Synthetic Experiments
In this section, we first reaffirm our result in Theorem 2 with systematic experiments on a number of synthetic BNs. Next, we show how our decompilation method SPN2BN can enhance interpretability of an SPN learned from data.
5.1 Empirical Analysis on Moral Closure
We empirically tested the result in Theorem 2 by verifying whether the output BN of BN2SPN2BN was the moral closure of the input BN.
For a fixed number of variables, we exhaustively construct all possible connected BNs. That is, we discard disconnected BNs as well as cases not forming a DAG.
Table 1 describes our experiments. The number of variables, given in the first line, ran from 2 through 7 inclusive. The number of BNs is given in the second line. The total number of possible elimination orderings is given in the third line. Finally, the last line reports the total number of trials that were conducted. In each and every case, our compilation-decompilation process returned the moral closure of the input BN.
# variables    2    3    4     5      6      7
# BNs          1    3   21   315   9.8K   615K
# elim ord     1    2    6    24    120    720
# trials       1    6  126  7.6K   1.1M   448M
5.2 SPN Decompilation and Interpretability
Here, we want to demonstrate how the decompilation algorithm can be used to better understand an SPN learned from data. Let us decompile the well-known SPN structure, called a region graph, first suggested in [poon2011sum].
The SPN structure in [poon2011sum] leverages local structure in an image dataset. The idea is to select all rectangular regions, with the smallest regions corresponding to pixels. Subsequently, for each rectangular region, all possible ways to decompose it into two rectangular subregions are considered. This recursive procedure yields a valid SPN structure formed by layers of sum and product nodes known as a region graph [dennis2012learning, peharz2018probabilistic]. By construction, given the height and width of an image, a region graph is unique.
We ran experiments decompiling some fixed-size region graphs. Consider an image of size 3-by-3. The region graph for such an image is depicted in Figure 4 (subfig:spn_pnd). We use SPN2BN to decompile the region graph into the BN in Figure 4 (subfig:bn_pnd).
We can draw independence conclusions for the BN of Figure 4 (subfig:bn_pnd) using d-separation. For instance, observing variables in the second layer (from bottom to top) of the BN renders some variables of the third layer dependent on each other. These dependencies can help the interpretability of SPN applications such as an autoencoder. In [Vergari2018qu], visualization of the latent space shows that the deeper the SPN layer, the higher the level of complexity being learned. For instance, [Vergari2018qu] considers the MNIST dataset of handwritten digits. Visualizing the first layers of an SPN learned on MNIST shows primitive drawings such as circular and linear forms, while later layers show pieces of digits. Thus, a decompiled BN for this SPN encodes independencies between the circular/linear forms and the pieces of digits.

6 Conclusion
There is a tradeoff between the high-level interpretability of classical probabilistic graphical models, such as Bayesian networks (BNs), and the inference performance of recent tractable probabilistic models, such as sum-product networks (SPNs). One way to improve the inference performance of BNs is to compile them into SPNs, a well-understood technique due to Darwiche's seminal work on arithmetic circuit (AC) compilation. The converse direction of SPN decompilation into BNs has received rather limited attention.
In this paper, we formalize SPN decompilation by suggesting SPN2BN, an algorithm that converts an SPN into a BN. SPN2BN is an improvement over the only two other approaches in the literature addressing the connection between BNs and SPNs, namely, the works [zhao2015relationship] and [Peharz:2016wl]. First, SPN2BN produces a minimal I-map, meaning no unnecessary dependency assumptions are made. Second, both [zhao2015relationship] and [Peharz:2016wl] introduce an excessive number of latent variables, whereas SPN2BN suggests one latent variable per sum-region.
Our decompilation method, SPN2BN, assumes that a given SPN was compiled from an unknown BN using the BN inference algorithm variable elimination (VE) with a reverse topological ordering. This compilation procedure is called BN2SPN.
One key result of our SPN decompilation, formalized in Theorem 2, is that it constructs the moral closure of the original BN. This means that in certain cases, such as HMMs, where the moral closure of a BN is the BN itself, our SPN decompilation returns the original BN. Theorem 2 also implies that there is a set of BNs related to the original BN that each yield the same moral closure: any BN formed from the original by adding some directed ancestral moralization edges yields the same output. Finally, Theorem 3 establishes that our compilation-decompilation process is idempotent, since the moral closure of a moral closure is itself. Theorem 3 has practical significance because it limits the maximum size of the decompiled BN to the size of the moral closure of the input BN. In contrast, if we change the decompilation method to [zhao2015relationship] or [Peharz:2016wl], then applying BN2SPN2BN over and over will always yield a larger and larger BN.
Acknowledgement
This research is partially supported by NSERC Discovery Grant 238880 and has also received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie SkłodowskaCurie Grant Agreement No. 797223 — HYBSPN.