1 Introduction
Knowledge compilation studies how problems can be solved by compiling them into classes of Boolean circuits or binary decision diagrams (BDDs) to which general-purpose algorithms can be applied. This field has introduced numerous such classes or compilation targets, defined by various restrictions on the circuits or BDDs, and studied which operations can be solved on them; e.g., the class of dDNNFs requires that negation is only applied at the leaves, that ∧-gates are on disjoint variable subsets, and that ∨-gates have mutually exclusive inputs. However, a different way to define restricted classes is to bound some graph-theoretic width parameters, e.g., treewidth, which measures how the data can be decomposed as a tree, or pathwidth, the special case of treewidth with path-shaped decompositions. Such restrictions have been used in particular in the field of database theory and probabilistic databases [46], in the so-called intensional approach where we compute a lineage circuit [32] that represents the output of a query or the possible worlds that make it true, and where these circuits can sometimes be shown to have bounded treewidth [31, 3, 2].
At first glance, classes such as bounded-treewidth circuits
seem very different from usual knowledge compilation classes such as dDNNF. Yet, for some tasks such as probability computation (computing the probability of the circuit under an independent distribution on the variables), both classes are known to be tractable: the problem can be solved in linear time on dDNNFs by definition of the class
[24], and for bounded-treewidth circuits we can use message passing [34] to solve probability computation in time linear in the circuit and exponential in the treewidth. This hints at the existence of a connection between traditional knowledge compilation classes and bounded-width classes. This paper presents such a connection and shows that the width of circuits is intimately linked to many well-known knowledge compilation classes. Specifically, we show a link between the treewidth of Boolean circuits and the width of their representations in common circuit targets, and a similar link between the pathwidth of Boolean circuits and the width of their representations in BDD targets. We demonstrate this link by showing upper bound results on compilation targets: bounded-width circuits can be compiled to circuit or BDD targets in linear time and with singly exponential complexity in the width parameter. We also show corresponding lower bound results establishing that these compilation targets must be exponential in the width parameters, already for a restricted class of Boolean formulas. We now present our contributions and results in more detail.
The first contribution of this paper (in Section 3) is to give a systematic picture of the 12 knowledge compilation circuit classes that we investigate. We classify them along three independent axes:

Conjunction: we distinguish between BDD classes, such as OBDDs (ordered binary decision diagrams [18]), where logical conjunction is only used to test the value of a variable and where computation follows a path in the structure; and circuit classes which allow decomposable conjunctions and where computation follows a tree.

Structuredness: we distinguish between structured classes, where the circuit or BDD always decomposes the variables along the same order or vtree [38], and unstructured classes where no such restriction is imposed except decomposability (each variable must be read at most once).

Determinism: we distinguish between classes that feature no disjunctions beyond decision on a variable value (OBDDs, FBDDs, and decDNNFs), classes that feature unambiguous or deterministic disjunctions (uOBDDs, uFBDDs, and dDNNFs), and classes that feature arbitrary disjunctions (nOBDDs, nFBDDs, and DNNFs).
This landscape is summarized in Fig. 1, and we review known translations and separation results that describe the relative expressive power of these features.
The second contribution of this paper (in Sections 4 and 5) is to show an upper bound on the compilation of bounded-treewidth classes to dSDNNFs, and of bounded-pathwidth classes to OBDD variants. For pathwidth, existing work had already studied the compilation of bounded-pathwidth circuits to OBDDs [32, Corollary 2.13], which can be made constructive [4, Lemma 6.9]. Specifically, they show that a circuit of pathwidth k can be converted in polynomial time into an OBDD whose width is bounded by a function of k. Our first result is to show that, by using unambiguous OBDDs (uOBDDs), we can do the same but with linear time complexity, and with the size of the uOBDD as well as its width (in the classical knowledge compilation sense) being singly exponential in the pathwidth. Specifically:
Result 1 (see Theorem 4.1).
Given as input a Boolean circuit C of pathwidth k on n variables, we can compute in time O(f(k) · |C|) a complete uOBDD equivalent to C of width f(k) and size O(f(k) · n), where f is singly exponential.
For treewidth, we show that boundedtreewidth circuits can be compiled to the class of dSDNNF circuits:
Result 2 (see Corollary 4.1).
Given as input a Boolean circuit C of treewidth k on n variables, we can compute in time O(f(k) · |C|) a complete dSDNNF equivalent to C of width f(k) and size O(f(k) · n), where f is singly exponential.
The proof of Result 2, and its variant that shows Result 1, is quite challenging: we transform the input circuit bottom-up by considering all possible valuations of the gates in each bag of the tree decomposition, and keeping track of additional information to remember which guessed values have been substantiated by a corresponding input. Result 2 generalizes a recent theorem of Bova and Szeider [16], which we improve in two ways. First, our result is constructive, whereas [16] only shows a bound on the size of the dSDNNF, without bounding the complexity of effectively computing it. Second, our bound is singly exponential in the treewidth, whereas that of [16] is doubly exponential; this allows us to be competitive with message passing (also singly exponential in the treewidth), and we believe it can be useful for practical applications. We also explain how Result 2 implies the tractability of several tasks on bounded-treewidth circuits, e.g., probabilistic query evaluation, enumeration [1], quantification [22], MAP inference [29], etc.
The third contribution of this paper is to show lower bounds on how efficiently we can convert from width-based classes to the compilation targets that we study. Our bounds already apply to a formalism weaker than width-based circuits, namely, monotone formulas in CNF (conjunctive normal form) or DNF (disjunctive normal form). Our first two bounds (in Sections 6 and 7) are shown for structured compilation targets, i.e., OBDDs, where we follow a fixed order on variables, and SDNNFs, where we follow a fixed vtree; they apply to arbitrary monotone CNFs and DNFs. The first lower bound concerns pathwidth and OBDD representations: we show that, up to factors in the formula arity (maximal size of clauses) and degree (maximal number of variable occurrences), any OBDD for a monotone CNF or DNF must have width exponential in the pathwidth of the formula. Formally:
Result 3 (Corollary 7).
For any monotone CNF φ (resp., monotone DNF φ) of constant arity and degree, the size of the smallest nOBDD (resp., uOBDD) computing φ is 2^Ω(pw(φ)), where pw(φ) denotes the pathwidth of φ.
This result generalizes several existing lower bounds in knowledge compilation that exponentially separate CNFs from OBDDs, such as [28] and [15, Theorem 19].
Our second lower bound shows the analogue of Result 3 for treewidth and (d)SDNNFs:
Result 4 (Corollary 7).
For any monotone CNF φ (resp., monotone DNF φ) of constant arity and degree, the size of the smallest SDNNF (resp., dSDNNF) computing φ is 2^Ω(tw(φ)), where tw(φ) denotes the treewidth of φ.
These two lower bounds contribute to a vast landscape of knowledge compilation results giving lower bounds on compiling specific Boolean functions to restricted circuit classes, e.g., [28, 41, 15] for OBDDs, [19] for decSDNNFs, [8] for sentential decision diagrams (SDDs), [39, 14] for dSDNNFs, [14, 20, 21] for dDNNFs and DNNFs. However, all those lower bounds (with the exception of some results in [20, 21]) apply to well-chosen families of Boolean functions (usually CNFs), whereas Results 3 and 4 apply to any monotone CNF and DNF. Together with Result 1, these generic lower bounds point to a strong relationship between width parameters and structured representations, on monotone CNFs and DNFs of constant arity and degree. Specifically, the smallest width of OBDD representations of any such formula is in 2^Θ(pw(φ)), i.e., precisely singly exponential in the pathwidth; and an analogous bound relates dSDNNF size and the treewidth of DNFs.
To prove these two lower bounds, we leverage known results from knowledge compilation and communication complexity [14] (in Section 6), of which we give a unified presentation. Specifically, we show that Boolean functions captured by nOBDDs (resp., uOBDDs) and by SDNNF (resp., dSDNNF) variants can be represented via a small cover (resp., disjoint cover) of so-called rectangles. We also present two Boolean functions (set covering and set intersection) which are known not to have any such small covers. We then bootstrap the lower bounds on these two functions to a general lower bound in Section 7, by rephrasing pathwidth and treewidth into new notions of pathsplitwidth and treesplitwidth, which intuitively measure the performance of a variable ordering or vtree. We then show that, for DNFs and CNFs with high pathsplitwidth (resp., treesplitwidth), we can find the corresponding hard function “within” the CNF or DNF, and establish hardness.
Our last lower bound result is shown in Section 8, where we lift the assumption that the compilation targets are structured:
Result 5 (Corollary 8).
For any monotone CNF φ of constant arity and degree, the size of the smallest nFBDD computing φ is exponential in the pathwidth of φ, and the size of the smallest DNNF computing φ is exponential in the treewidth of φ.
This result generalizes Results 3 and 4 by lifting the structuredness assumption, but it only applies to CNFs (and not to DNFs). The proof reuses the notions of pathsplitwidth and treesplitwidth, along with a more involved combinatorial argument on the size of rectangle covers.
The current article extends the conference article [5] in many ways:

We added Section 3 which gives a systematic presentation of knowledge compilation classes and reviews known results that relate them.

In Section 4, the upper bound result on uOBDDs (Result 1) was added, the results were rephrased in terms of width, and the size of the circuit has been improved to be linear in the number of variables, as in [16]. (This observation is due to Stefan Mengel and is adapted from the recent article [22].)

The bounds on unstructured representations in Section 8 are new.

We include full proofs for all results.
2 Preliminaries
We give preliminaries on trees, hypergraphs, treewidth, and Boolean functions.
Graphs, trees, and DAGs.
We use the standard notions of directed and undirected graphs, of paths in a graph, and of cycles. All graphs considered in the paper are finite.
A tree is an undirected graph that has no cycles and that is connected (i.e., there exists exactly one path between any two different nodes). Its size is its number of edges. A tree T is rooted if it has a distinguished node called the root of T. Given two adjacent nodes n and n′ of a rooted tree T with root r, if n lies on the (unique) path from r to n′, we say that n is the parent of n′ and that n′ is a child of n. A leaf of T is a node that has no child, and an internal node of T is a node that is not a leaf. A node n is a descendant of a node n′ in a rooted tree if n′ lies on the path from n to the root. For a node n of T, we denote by T_n the subtree of T rooted at n, and by Leaves(n) the set of leaves of T_n. A rooted tree is binary if all nodes have at most two children, and it is full if all internal nodes have exactly two children. A rooted full binary tree is called right-linear if the children of each internal node are ordered (we then talk of a left or a right child), and if every right child is a leaf.
A directed acyclic graph (or DAG) is a directed graph that has no cycles. A DAG is rooted if it has a distinguished node r such that there is a path from r to every node of the DAG. A leaf of a DAG is a node that has no child.
Hypergraphs, treewidth, pathwidth.
A hypergraph H = (V, E) consists of a finite set V of nodes (or vertices) and of a set E of hyperedges (or simply edges), which are nonempty subsets of V. We always assume that hypergraphs have at least one edge. For a node v of H, we write E(v) for the set of edges of H that contain v. The arity of H, written arity(H), is the maximal size of an edge of H. The degree of H, written degree(H), is the maximal number of edges to which a vertex belongs, i.e., the maximum of |E(v)| over all vertices v.
A tree decomposition of a hypergraph H is a rooted tree T whose nodes b (called bags) are each labeled by a subset dom(b) of V, and which satisfies:

for every hyperedge e of H, there is a bag b with e ⊆ dom(b);

for every vertex v of H, the set of bags b such that v ∈ dom(b) forms a connected subtree of T.
For brevity, we often identify a bag b with its domain dom(b). The width of T is the maximum of |dom(b)| − 1 over all bags b. The treewidth of H is the minimal width of a tree decomposition of H. Pathwidth is defined similarly but with path decompositions, i.e., tree decompositions where all nodes have at most one child.
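For concreteness, the two conditions and the width can be checked with a short routine. The encoding is hypothetical (bags as sets, the tree given by its edges); the connectivity condition uses the fact that, inside a tree, a set of bags is connected iff the number of tree edges joining two of them equals their count minus one:

```python
def decomposition_width(bags, tree_edges, hyperedges):
    """Check that (bags, tree_edges) is a tree decomposition of the
    hypergraph given by `hyperedges`, and return its width.
    bags: id -> set of vertices; tree_edges: list of (parent, child)."""
    # Condition 1: every hyperedge must be covered by some bag.
    for e in hyperedges:
        assert any(set(e) <= b for b in bags.values()), f"edge {e} not covered"
    # Condition 2: the bags containing each vertex form a connected subtree.
    for v in {x for b in bags.values() for x in b}:
        occ = sum(1 for b in bags.values() if v in b)
        inner = sum(1 for a, c in tree_edges if v in bags[a] and v in bags[c])
        assert inner == occ - 1, f"occurrences of {v} not connected"
    # Width: maximal bag size minus one.
    return max(len(b) for b in bags.values()) - 1
```

For instance, the path with edges {a,b} and {b,c} admits the two-bag decomposition {a,b}, {b,c} of width 1.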
It is NP-hard to determine the treewidth of a hypergraph H, but we can compute a tree decomposition in linear time when parametrizing by the treewidth. This can be done in time 2^O(k³) · |H| with the classical result of [9], or, using a recent algorithm by Bodlaender et al. [10], in time singly exponential in k: [[10]] There exists a constant c such that, given a hypergraph H and an integer k, in time 2^(c·k) · |H| we can either conclude that H has treewidth greater than k, or output a tree decomposition of H of width O(k).
For simplicity, we will often assume that a tree decomposition is friendly for a node v of H, meaning that:

it is a full binary tree, i.e., each node has exactly zero or two children;

for every internal bag b with children b1 and b2, we have dom(b) ⊆ dom(b1) ∪ dom(b2);

for every leaf bag b, we have |dom(b)| ≤ 1;

the root bag of T only contains the node v.
Assuming a tree decomposition to be friendly for a fixed can be done without loss of generality:
Given a tree decomposition T of a hypergraph H of width k and a node v of H, we can compute in linear time a friendly tree decomposition of H for v of width k.
Proof.
We first create a bag containing only the node v, and make this bag the root of T by connecting it to a bag of T that contains v (if there is no such bag, then we connect it to an arbitrary bag of T). Then, we make the tree decomposition binary (but not necessarily full) by replacing each bag b having m > 2 children b1, …, bm by a chain of m − 1 bags with the same label as b, to which we attach the children b1, …, bm. This process takes linear time and does not change the width.
We then ensure the second and third conditions by applying a transformation to leaf bags and to internal bags. We first replace every leaf bag containing more than one vertex by a chain of internal bags with leaf children, where the vertices of the bag are added one after the other. Then, we modify every internal bag b that contains vertices v1, …, vp not present in the union of its children: we replace b by a chain of at most p internal bags containing respectively dom(b), dom(b) \ {v1}, dom(b) \ {v1, v2}, and so on, each bag having a leaf child introducing the corresponding vertex vi. This takes linear time, and again it does not change the width; further, the result of the process is a tree decomposition that satisfies the second, third, and fourth conditions and is still a binary tree.
The only missing part is to ensure that the tree decomposition is full, which we can simply do in linear time by adding bags with an empty label as a second child for internal nodes that have only one child. This is obviously in linear time, does not change the width, and does not affect the other conditions, concluding the proof. ∎
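The binarization step of this proof (replacing a bag having more than two children by a chain of copies of itself) can be sketched as follows; the dictionary encoding is hypothetical:

```python
def binarize(children, label, root):
    """children: id -> list of child ids (empty for leaves);
    label: id -> frozenset (the bag domain). Returns a decomposition
    where every bag has at most 2 children, obtained by delegating
    excess children to fresh copies of the bag (same label, so the
    width is unchanged)."""
    fresh = max(children) + 1
    children = {b: list(cs) for b, cs in children.items()}
    label = dict(label)
    todo = [root]
    while todo:
        b = todo.pop()
        if len(children[b]) > 2:
            # keep the first child, delegate the rest to a copy of b
            first, rest = children[b][0], children[b][1:]
            children[fresh] = rest
            label[fresh] = label[b]
            children[b] = [first, fresh]
            fresh += 1
        todo.extend(children[b])
    return children, label
```

The chain created for a bag repeats its label, so bag sizes, and hence the width, are preserved, matching the proof.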
Boolean functions.
A (Boolean) valuation of a set V is a function ν : V → {0, 1}, which can also be seen as the set of elements of V mapped to 1. A Boolean function φ on a set of variables V is a mapping that associates to each valuation ν of V a Boolean value in {0, 1} called the evaluation of φ according to ν. We write #φ for the number of satisfying valuations of φ. Given two Boolean functions φ and φ′, we write φ ⊨ φ′ when every satisfying valuation of φ also satisfies φ′. We write ⊥ for the Boolean function that maps every valuation to 0.
Let V1 and V2 be two disjoint sets, ν1 a valuation on V1, and ν2 a valuation on V2. We denote by ν1 ∪ ν2 the valuation on V1 ∪ V2 whose value on v is ν1(v) if v ∈ V1 and ν2(v) if v ∈ V2. Let φ be a Boolean function on V, and ν be a valuation on a set V′ ⊆ V. We denote by φ[ν] the Boolean function on variables V \ V′ such that, for any valuation ν′ of V \ V′, we have φ[ν](ν′) = φ(ν ∪ ν′). When ν is a Boolean valuation on V and V′ ⊆ V, we denote by ν|V′ the Boolean valuation on V′ defined by ν|V′(v) = ν(v) for all v in V′.
Two simple formalisms for representing Boolean functions are Boolean circuits and formulas in conjunctive normal form or disjunctive normal form. We will discuss more elaborate formalisms, namely binary decision diagrams and circuits in decomposable negation normal form, in Section 3.
Boolean circuits.
A (Boolean) circuit C is a DAG whose vertices are called gates and whose edges are called wires, with a distinguished output gate g0, and where each gate has a type among var (a variable gate), ¬, ∧, and ∨. The inputs of a gate g are the gates g′ such that there is a wire from g′ to g; the fan-in of g is its number of inputs. We require ¬-gates to have fan-in 1 and variable gates to have fan-in 0. The treewidth of C is that of the hypergraph on the gates of C whose edges are the wires (seen as unordered pairs). Its size |C| is the number of wires. The variable gates of C are those of type var. Given a valuation ν of the variable gates, we extend it to an evaluation of C by mapping each variable gate g to ν(g), and evaluating the other gates according to their type. We recall the convention that ∧-gates (resp., ∨-gates) with no input evaluate to 1 (resp., 0). The Boolean function on the variable gates captured (or computed, or represented) by the circuit is the one that maps ν to the evaluation of g0 under ν. Two circuits are equivalent if they capture the same function.
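The evaluation semantics just described, including the convention for ∧- and ∨-gates with no inputs, can be sketched as follows (a hypothetical dictionary encoding, with gates given in topological order):

```python
def evaluate(gates, order, valuation):
    """gates: id -> ('var', x) | ('not', [g]) | ('and', [gs]) | ('or', [gs]).
    order: topological order of gate ids (inputs before outputs);
    the last gate in `order` is taken as the output gate."""
    val = {}
    for g in order:
        kind, arg = gates[g]
        if kind == 'var':
            val[g] = valuation[arg]
        elif kind == 'not':
            val[g] = 1 - val[arg[0]]                     # fan-in 1
        elif kind == 'and':
            val[g] = int(all(val[h] for h in arg))       # 1 if no input
        else:
            val[g] = int(any(val[h] for h in arg))       # 0 if no input
    return val[order[-1]]
```

For example, the circuit for x ∧ ¬y evaluates to 1 under {x ↦ 1, y ↦ 0} and to 0 under {x ↦ 1, y ↦ 1}.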
DNFs and CNFs.
We also study other representations of Boolean functions, namely, Boolean formulas in conjunctive normal form (CNFs) and in disjunctive normal form (DNFs). A CNF (resp., DNF) on a set of variables is a conjunction (resp., disjunction) of clauses, each of which is a disjunction (resp., conjunction) of literals on , i.e., variables of (a positive literal) or their negation (a negative literal).
A monotone CNF (resp., monotone DNF) is one where all literals are positive, in which case we often identify a clause with the set of variables that it contains. We always assume that monotone CNFs and monotone DNFs are minimized, i.e., no clause is a subset of another. This ensures that every monotone Boolean function has a unique representation as a monotone CNF (the conjunction of its prime implicates), and likewise as a monotone DNF (the disjunction of its prime implicants). In particular, when we consider a valuation of a subset of the variables of a monotone CNF/DNF, we see the restricted function again as a minimized monotone CNF/DNF. We assume that monotone CNFs and DNFs always contain at least one nonempty clause (in particular, they cannot represent constant functions). Monotone CNFs and DNFs are isomorphic to hypergraphs: the vertices are the variables, and the hyperedges are the clauses. We often identify a monotone CNF/DNF φ with its hypergraph. In particular, the pathwidth and treewidth of φ, and its arity and degree, are defined as those of its hypergraph.
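The minimization convention (dropping every clause that is a superset of another) can be sketched as:

```python
def minimize(clauses):
    """Return the minimized form of a monotone CNF/DNF: keep only the
    clauses that are not supersets of another clause (an antichain)."""
    cs = sorted({frozenset(c) for c in clauses}, key=len)
    kept = []
    for c in cs:
        # processing by increasing size: c is subsumed iff some
        # already-kept (hence no larger) clause is contained in it
        if not any(k <= c for k in kept):
            kept.append(c)
    return kept
```

For instance, {x} ∧ (x ∨ y) ∧ (x ∨ z) ∧ (y ∨ z) minimizes to {x} ∧ (y ∨ z).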
3 Knowledge Compilation Classes: BDDs and DNNFs
We now review some representation formalisms for Boolean functions that are used in knowledge compilation, based either on binary decision diagrams (also known as branching programs [48]) or on Boolean circuits in decomposable negation normal form [23]; in the rest of the paper we will study translations between boundedwidth Boolean circuits and these classes. The classes that we consider have all been introduced in the literature (see, in particular, [27] for the main ones) but we sometimes give slightly different (but equivalent) definitions in order to see them in a common framework. An element of a knowledge compilation class is associated with its size (describing how compact it is) and with the Boolean function that it captures.
A summary of the classes considered is provided in Fig. 1. This figure also shows (with arrows) when a class can be compiled into another in polynomial time (i.e., when one can transform an element d of a class C capturing a Boolean function φ into an element d′ of a class C′ capturing φ, in time polynomial in the size of d). All classes shown in Fig. 1 are unconditionally separated, and some (cf. double arrows) are exponentially separated. Specifically, we say that a class C is separated (resp., exponentially separated) from a class C′ if there exists a family of Boolean functions captured by a family of elements of class C such that every family of elements of class C′ capturing these functions has superpolynomial (resp., exponential) size in the size of the elements of C.
We first consider general classes, then structured variants of these classes, and further introduce the notion of width of these structured classes. When introducing classes of interest, we recall or prove nontrivial polynomialtime compilation and separation results related to that class.
3.1 Unstructured Classes
We start by defining general, unstructured classes, i.e., those in the background of Fig. 1, namely (nondeterministic) free binary decision diagrams and circuits in decomposable negation normal form.
3.1.1 Free Binary Decision Diagrams
A nondeterministic binary decision diagram (or nBDD) on a set of variables is a rooted DAG with labels on edges and nodes, verifying:

there are exactly two leaves (also called sinks), one being labeled by 0 (the 0-sink), the other one by 1 (the 1-sink);

internal nodes are labeled either by ∨ or by a variable of V;

each internal node that is labeled by a variable has two outgoing edges, labeled 0 and 1.
The size of an nBDD D is its number of edges. Let ν be a valuation of V, and let π be a path in D from the root to a sink. We say that π is compatible with ν if, for every node n of π that is labeled by a variable x of V, the path goes through the outgoing edge of n labeled by ν(x). An nBDD D captures a Boolean function φ on V defined as follows: for every valuation ν of V, if there exists a path from the root to the 1-sink that is compatible with ν, then φ(ν) = 1, else φ(ν) = 0. An nBDD is unambiguous when, for every valuation ν, there exists at most one path from the root to the 1-sink that is compatible with ν. A BDD is an nBDD that has no ∨-nodes.
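This path-based semantics can be sketched as follows (a hypothetical node encoding; the memoization keeps the walk linear in the DAG):

```python
def nbdd_accepts(nodes, root, valuation):
    """nodes: id -> ('sink', b) | ('or', [ids]) | ('dec', x, lo, hi).
    Returns True iff some path from the root to the 1-sink is
    compatible with the valuation."""
    memo = {}
    def walk(u):
        if u not in memo:
            node = nodes[u]
            if node[0] == 'sink':
                memo[u] = (node[1] == 1)
            elif node[0] == 'or':          # nondeterministic choice
                memo[u] = any(walk(v) for v in node[1])
            else:                          # decision on variable x
                _, x, lo, hi = node
                memo[u] = walk(hi if valuation[x] else lo)
        return memo[u]
    return walk(root)
```

On the two-node BDD for x ∨ y, the valuation {x ↦ 0, y ↦ 1} is accepted while {x ↦ 0, y ↦ 0} is not.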
The most general form of nBDDs that we will consider in this paper are nondeterministic free binary decision diagrams (nFBDDs): they are nBDDs such that, for every path from the root to a leaf, no two nodes of that path are labeled by the same variable. In addition to the nFBDD class, we will also study the class uFBDD of unambiguous nFBDDs, and the class FBDD of nFBDDs having no ∨-nodes.
Proposition .
nFBDDs are exponentially separated from uFBDDs, and uFBDDs are exponentially separated from FBDDs.
Proof.
The exponential separation between nFBDDs and uFBDDs is shown in [14]: Proposition 7 of [14] shows that there exists an nFBDD of polynomial size for the Sauerhoff function [44] over n variables, while Theorem 9 of [14], relying on [44, Theorem 4.10], shows that any representation of this function as a dDNNF (a formalism that generalizes uFBDDs, see our Proposition 3.1.3) necessarily has exponential size.
To separate uFBDDs from FBDDs, we rely on the proof of exponential separation of PBDDs and FBDDs in [12, Theorem 11] (see also [48, Theorem 10.4.7]). Consider the Boolean function on n × n Boolean matrices that tests whether either the number of 1's is odd and there is a row full of 1's, or the number of 1's is even and there is a column full of 1's. As shown in [12, 48], an FBDD for this function necessarily has exponential size. On the other hand, it is easy to construct an FBDD of polynomial size to test if the number of 1's is odd and there is a row full of 1's (enumerating variables in row order), and to construct an FBDD of polynomial size to test if the number of 1's is even and there is a column full of 1's (enumerating variables in column order). A uFBDD for the function is obtained by simply adding a ∨-node joining these two FBDDs, using the fact that at most one of these two functions can evaluate to 1 under a given valuation. ∎

3.1.2 Decomposable Negation Normal Forms
We say that a circuit is in negation normal form (NNF) if the inputs of ¬-gates are always variable gates. For a gate g in a Boolean circuit C, we write vars(g) for the set of variable gates that have a directed path to g in C. An ∧-gate g of C is decomposable if, for every two distinct input gates g1 and g2 of g, we have vars(g1) ∩ vars(g2) = ∅. We call C decomposable if each ∧-gate is. We write DNNF for an NNF that is decomposable. Some of our proofs will use the standard notion of a trace in a DNNF:
Let D be a DNNF and g be a gate of D. A trace of D starting at g is a set T of gates of D that is minimal by inclusion and where:

we have g ∈ T;

if g′ ∈ T and g′ is an ∧-gate, then all inputs of g′ are in T;

if g′ ∈ T and g′ is an ∨-gate, then exactly one input of g′ is in T;

if g′ ∈ T and g′ is a ¬-gate with input variable gate x, then x is in T.
Observe that a gate g is satisfiable (i.e., there exists a valuation ν such that g evaluates to 1 under ν) if and only if there exists a trace of D starting at g. Indeed, given such a trace T, define the valuation that maps to 0 all the variables x such that a ¬-gate with input x is in T, and maps all other variables to 1: this valuation clearly satisfies g, noting in particular that each variable occurs at most once in T thanks to decomposability. Conversely, when g is satisfiable, it is clear that one can obtain a trace starting at g whose literals (variable gates, and negations of the variables that are an input to a ¬-gate) evaluate to 1 under the witnessing valuation. This means that we can check in linear time whether a DNNF is satisfiable, i.e., whether it has an accepting valuation, by computing bottom-up the set of gates at which a trace starts.
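The bottom-up satisfiability check just described can be sketched as follows (a hypothetical circuit encoding; literals are always satisfiable, and decomposability makes the inputs of an ∧-gate independently satisfiable):

```python
def dnnf_satisfiable(gates, order):
    """gates: id -> ('var', x) | ('not', [g]) | ('and', [gs]) | ('or', [gs]).
    order: topological order (inputs first); the last gate is the output.
    Computes bottom-up the set of gates at which a trace starts."""
    sat = {}
    for g in order:
        kind, arg = gates[g]
        if kind in ('var', 'not'):
            sat[g] = True                        # a literal is satisfiable
        elif kind == 'and':
            sat[g] = all(sat[h] for h in arg)    # all inputs, independently
        else:
            sat[g] = any(sat[h] for h in arg)    # some input
    return sat[order[-1]]
```

For instance, an ∧-gate over a variable and an input-free ∨-gate (the constant 0) is unsatisfiable, while x ∧ y is satisfiable.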
As we will later see, the tractability of DNNFs does not extend to some other tasks (e.g., model counting or probability computation). For these tasks, a useful additional requirement on circuits is determinism. An ∨-gate g of C is deterministic if there is no pair of distinct input gates g1 and g2 of g and valuation ν such that g1 and g2 both evaluate to 1 under ν. A Boolean circuit is deterministic if each ∨-gate is. We write dDNNF for an NNF that is both decomposable and deterministic. Model counting and probability computation can be done in linear time on dDNNFs thanks to decomposability and determinism (in fact, this does not even use the restriction of being an NNF).
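As a concrete illustration, here is a minimal sketch (not taken from the paper) of linear-time probability computation on a dDNNF under an independent distribution on the variables, using a hypothetical dictionary encoding: decomposability lets us multiply at ∧-gates, and determinism lets us sum at ∨-gates.

```python
def ddnnf_probability(gates, order, prob):
    """gates: id -> ('var', x) | ('not', [g]) | ('and', [gs]) | ('or', [gs]).
    order: topological order (inputs first); last gate is the output.
    prob: variable -> probability of being set to 1."""
    p = {}
    for g in order:
        kind, arg = gates[g]
        if kind == 'var':
            p[g] = prob[arg]
        elif kind == 'not':            # negation of a variable gate (NNF)
            p[g] = 1.0 - p[arg[0]]
        elif kind == 'and':            # decomposable: independent inputs
            r = 1.0
            for h in arg:
                r *= p[h]
            p[g] = r
        else:                          # deterministic: exclusive inputs
            p[g] = sum(p[h] for h in arg)
    return p[order[-1]]
```

On the dDNNF for x ∨ (¬x ∧ y) with both variables true with probability 0.5, this yields 0.5 + 0.5 · 0.5 = 0.75.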
Observe that, while decomposability is a syntactic restriction that can be checked in linear time, determinism is a semantic property, and it is coNP-complete to check whether a given ∨-gate of a circuit is deterministic: hardness comes from the fact that an arbitrary Boolean circuit C is unsatisfiable iff an ∨-gate having two copies of C as inputs is deterministic. This motivates the notion of decision gates, which gives us a syntactic way to impose determinism. Formally, an ∨-gate is a decision gate if it is of the form (x ∧ C1) ∨ (¬x ∧ C2), for some variable x and (generally non-disjoint) subcircuits C1 and C2. A decDNNF is a DNNF where all ∨-gates are decision gates: it is in particular a dDNNF.
Proposition .
DNNFs are exponentially separated from dDNNFs, and dDNNFs are exponentially separated from decDNNFs.
3.1.3 Connections between FBDDs and DNNFs
We have presented our unstructured classes of decision diagrams (namely FBDDs, uFBDDs, and nFBDDs), and of decomposable NNF circuits (decDNNFs, dDNNFs, and DNNFs). We now discuss the relationship between these various classes. We first observe that nFBDDs (and their subtypes) can be compiled to DNNFs (and their subtypes):
Proposition .
nFBDDs (resp., uFBDDs, FBDDs) can be compiled to DNNFs (resp., dDNNFs, decDNNFs) in linear time.
Proof.
We first describe the linear-time compilation of an nFBDD D into a DNNF that captures the same function: recursively rewrite every internal node n labeled with a variable x into a circuit (x ∧ C1) ∨ (¬x ∧ C0), where C0 and C1 are the (not necessarily disjoint) rewritings of the nodes to which n respectively had a 0-edge and a 1-edge; ∨-nodes are rewritten to ∨-gates, and the sinks to constants. We note that the new ∨-gate is a decision gate and that the two ∧-gates are decomposable. Furthermore:

if D is unambiguous, then all ∨-gates in the rewriting are deterministic, so we obtain a dDNNF;

if D is an FBDD, then the only ∨-gates introduced in the rewriting are decision gates, so we obtain a decDNNF. ∎
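The node-by-node rewriting in this proof can be sketched as follows, under a hypothetical dictionary encoding of BDDs and circuits (sinks become constant gates, and shared BDD nodes are rewritten only once so the output stays linear):

```python
def bdd_to_dnnf(nodes, root):
    """nodes: id -> ('sink', b) | ('or', [ids]) | ('dec', x, lo, hi).
    Returns (gates, output) where gates: id -> ('const', b) | ('var', x)
    | ('not', id) | ('and', [ids]) | ('or', [ids])."""
    gates, memo, counter = {}, {}, [0]
    def new(g):
        i = counter[0]; counter[0] += 1
        gates[i] = g
        return i
    def rewrite(u):
        if u in memo:
            return memo[u]               # DAG sharing: rewrite once
        node = nodes[u]
        if node[0] == 'sink':
            g = new(('const', node[1]))
        elif node[0] == 'or':
            g = new(('or', [rewrite(v) for v in node[1]]))
        else:                            # decision node -> decision gate
            _, x, lo, hi = node
            xv = new(('var', x))
            nx = new(('not', xv))
            g = new(('or', [new(('and', [xv, rewrite(hi)])),
                            new(('and', [nx, rewrite(lo)]))]))
        memo[u] = g
        return g
    return gates, rewrite(root)
```

Applied to the one-decision BDD for x, the output gate is an ∨-gate with the two expected ∧-gate inputs.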
The proof above implies that nFBDDs (resp., uFBDDs, FBDDs) are the restriction of DNNFs (resp., dDNNFs, decDNNFs) to the case where ∨-gates are all decision gates and where ∧-gates, in addition to being decomposable, all appear as inputs of a decision gate.
Unlike previous compilation results, Proposition 3.1.3 does not come with an exponential separation: we can compile in the other direction at a quasipolynomial cost, i.e., in time n^O(log n):
Proposition .
DNNFs (resp., dDNNFs, decDNNFs) can be compiled to nFBDDs (resp., uFBDDs, FBDDs) in quasipolynomial time.
Proof.
3.2 Structured Classes
The classes introduced so far are unstructured: there is no particular order or structure in the way variables appear within an nFBDD or within a DNNF circuit. In this section, we introduce structured variants of these classes, which impose additional constraints on how variables are used. Such additional restrictions often help with the tractability of some operations: for example, given two FBDDs capturing Boolean functions φ1 and φ2, it is NP-hard to decide if φ1 ∧ φ2 is satisfiable [35, Lemma 8.14]. By contrast, with the ordered binary decision diagrams [17] (OBDDs) that we now define, we can perform this task tractably: given two OBDDs D1 and D2 that are ordered in the same way, we can compute in polynomial time an OBDD representing φ1 ∧ φ2, for which we can then decide satisfiability. We first present OBDDs, and we then present SDNNFs, which are the structured analogues of DNNFs.
3.2.1 Ordered Binary Decision Diagrams
A nondeterministic ordered binary decision diagram (nOBDD) is an nFBDD D together with a total order on the variables which structures D, i.e., for every path π from the root of D to a leaf, the sequence of variables labeling the internal nodes of π (ignoring ∨-nodes) is a subsequence of the order. We say that the nOBDD is structured by this order. We also define uOBDDs as the unambiguous nOBDDs, and OBDDs as the nOBDDs without any ∨-node.
Like in the unstructured case (Proposition 3.1.1), these classes are exponentially separated:
Proposition .
nOBDDs are exponentially separated from uOBDDs, and uOBDDs are exponentially separated from OBDDs.
Proof.
The exponential separation between nOBDDs and uOBDDs will follow from our lower bounds on uOBDDs. Indeed, Corollary 7 shows a lower bound on the size of uOBDDs representing bounded-degree and bounded-arity monotone DNFs of high pathwidth. But there exists a family of DNFs of bounded degree and arity whose treewidth (hence pathwidth) is linear in their size: for instance, DNFs built from expander graphs (see [30, Theorem 5 and Proposition 1]). Hence, for such a family, any uOBDD for one of these DNFs has size exponential in the size of that DNF. By contrast, it is easy to see that any DNF can be represented as an nOBDD in linear time. To do so, fix an arbitrary order on the variables of the DNF. Any clause of the DNF can clearly be represented as a small OBDD with this order. Taking the disjunction of all these OBDDs then yields an nOBDD equivalent to the DNF, of linear size.
For the separation between uOBDDs and OBDDs, consider the Hidden Weighted Bit function HWB on variables x1, …, xn, defined for a valuation ν by HWB(ν) = ν(x_k), where k is the number of variables mapped to 1 by ν, and HWB(ν) = 0 when k = 0.
Bryant [17] showed that OBDDs for HWB have exponential size. By contrast, it is not too difficult to construct uOBDDs of polynomial size for HWB. This was observed in [48, Theorem 10.2.1] for nOBDDs, with a note [48, Proof of Corollary 10.2.2] that the constructed nBDDs are unambiguous. See also [13, Theorem 3], which covers the case of sentential decision diagrams instead of uOBDDs. ∎
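The Hidden Weighted Bit function itself is straightforward to state in code (a sketch for concreteness; indices are 1-based as in the definition above):

```python
def hwb(bits):
    """Hidden Weighted Bit: output the bit whose (1-based) index is the
    number of 1's in the input, or 0 if no bit is set."""
    k = sum(bits)
    return bits[k - 1] if k > 0 else 0
```

For example, on input 101 the weight is 2, so the output is the second bit, 0.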
3.2.2 Structured DNNFs
For NNFs, as for BDDs, it is possible to introduce a notion of structuredness that goes beyond decomposability. A vtree [38] over a set V is a rooted full binary tree whose leaves are in bijection with V; we identify each leaf with the associated element of V. An extended vtree [22] over a set V is like a vtree, except that there is only an injection from V to the leaves, i.e., some leaves can correspond to no element of V: we call those leaves unlabeled (and they can intuitively stand for constant gates in the circuit). A structured DNNF (resp., extended structured DNNF), noted SDNNF (resp., extended SDNNF), is a triple consisting of a DNNF C, a vtree (resp., extended vtree) T over the variables of C, and a mapping λ labeling each ∧-gate of C with a node of T, which satisfies the following: for every ∧-gate g of C with inputs g1 and g2, there exist distinct children c1 and c2 of λ(g) such that the vtree structures the gate, i.e., we have vars(gi) ⊆ Leaves(ci) for each i. Note that ∧-gates then have at most two inputs because T is binary. We also define dSDNNFs and decSDNNFs as structured dDNNFs and decDNNFs, and define extended dSDNNFs and extended decSDNNFs in the expected way.
As in the case of FBDDs and DNNFs, observe that an OBDD (resp., uOBDD, nOBDD) is a special type of decSDNNF (resp., dSDNNF, SDNNF). Namely, the transformation described above Proposition 3.1.3, when applied to an OBDD (resp., uOBDD, nOBDD), yields a decSDNNF (resp., dSDNNF, SDNNF) that is structured by a right-linear vtree (recall the definition from Section 2). Hence, we have:
Proposition .
nOBDDs (resp., uOBDDs, OBDDs) can be compiled to SDNNFs (resp., dSDNNFs, decSDNNFs) in linear time.
Proof.
Given the variable order v_1, …, v_n of an nOBDD, we construct our right-linear vtree as a path of internal nodes n_1, …, n_{n−1} starting at the root n_1, where each internal node n_i has the leaf for v_i as one child and n_{i+1} as the other (with the orientation required by right-linearity), and where the remaining child of n_{n−1} is the leaf for v_n. We then apply as-is the translation described in the proof of Proposition 3.1.3. ∎
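A right-linear vtree of this shape can be built as follows; the pair encoding of internal nodes, and the choice of putting each variable leaf on the left, are our own conventions for illustration.

```python
def right_linear_vtree(order):
    """Build a right-linear vtree from a variable order.
    A leaf is a variable name; an internal node is a pair (left, right).
    We put the leaf for each variable on the left and the rest of the
    spine on the right (one common convention for right-linearity)."""
    node = order[-1]                  # the last variable is a lone leaf
    for v in reversed(order[:-1]):
        node = (v, node)              # leaf v_i on the left, spine on the right
    return node
```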
As in the unstructured case (Proposition 3.1.3), there is no exponential separation result: indeed, there exist quasi-polynomial compilations in the other direction:
Proposition .
SDNNFs (resp., dSDNNFs, decSDNNFs) can be compiled to nOBDDs (resp., uOBDDs, OBDDs) in quasi-polynomial time.
Proof.
Quasi-polynomial-time compilation of an SDNNF into an nOBDD is proved in [11, Theorem 2], by adapting the compilation of [8] from DNNFs to nFBDDs. Furthermore, [11, Proposition 2] shows that the resulting nOBDD is unambiguous if the SDNNF is deterministic. But it is easy to see that the same compilation [11, Simulation 2] yields an OBDD if the input is a decSDNNF: indeed, in a decSDNNF every ∨-gate is a decision gate, so no nondeterministic choices are produced in the output. ∎
3.3 Comparing Structured and Unstructured Classes
To obtain all remaining separations in Fig. 1, and to justify that no arrows are missing, we need two final results in which we compare structured and unstructured classes.
The first result describes the power of decomposable gates as opposed to decision gates: it shows that the least powerful class that has arbitrary decomposable gates (decSDNNF) cannot be compiled to the most powerful class with decision gates (nFBDD) without a superpolynomial size increase.
Proposition .
There exists a family of Boolean functions that has polynomial-size decSDNNFs but no nFBDDs of size smaller than n^{Ω(log n)}.
Proof.
In [42], Razgon constructs for every k a family of 2-CNFs F_{n,k} such that F_{n,k} has n variables and treewidth O(k). He proves ([42, Theorem 1]) a lower bound of n^{Ω(k)} on the size of any nFBDD computing F_{n,k} (Razgon refers to nFBDDs as NROBPs in his paper). It is known from [23, Section 3] that one can compile any CNF with n variables and treewidth k into a decSDNNF of size 2^{O(k)} · n. Thus, F_{n,k} can be computed by a decSDNNF of size 2^{O(k)} · n.
Taking k = log n gives the desired separation: F_{n, log n} can be computed by a decSDNNF of polynomial size, but by no nFBDD of size smaller than n^{Ω(log n)}. ∎
Proposition 3.3 implies that no DNNF class in the upper level of Fig. 1 can be polynomially compiled into any BDD class in the lower level of Fig. 1.
The second result describes the power of unstructured formalisms as opposed to structured ones: it shows that the least powerful unstructured class (FBDD) cannot be compiled to the most powerful structured class (SDNNF) in size less than exponential.
Proposition .
FBDDs are exponentially separated from SDNNFs: there exists a family of functions that has FBDDs of polynomial size but no SDNNFs of subexponential size.
Proof.
This separation was proved independently by Pipatsrisawat and Capelli in their PhD theses (see [40, Appendix D.2], and [20, Section 6.3]).
In his work, Pipatsrisawat considers the circular bit shift function CBS: it is defined on a tuple of variables x_1, …, x_n, y_1, …, y_n, z_1, …, z_m with n = 2^m for some m, and it evaluates to 1 on a valuation iff cyclically shifting the bits of x by the number written in binary by the bits of z yields y. Pipatsrisawat shows that the CBS function has FBDDs of polynomial size, but that any SDNNF for CBS has exponential size.
The proof of Capelli uses techniques close to the ones used in Section 7. ∎
Proposition 3.3 implies that no unstructured class (in the background of Fig. 1) can be polynomially compiled into any structured class (in the foreground of Fig. 1).
Looking back at Fig. 1, we see that, indeed, all classes are separated and no arrows are missing. The separations are exponential, except when moving (on the vertical axis in the figure) from BDD-like classes to NNF-like classes, in which case we know (cf. Propositions 3.1.3 and 3.2.2) that quasi-polynomial compilations exist in the other direction.
3.4 Completeness and Width
Two last notions that will be useful for our results are completeness and width for structured classes. Intuitively, completeness further restricts how variables are tested in the circuit or BDD: in addition to the structuredness requirement, we impose that no variables are “skipped”. We will be able to assume completeness without loss of generality: it will be guaranteed by our construction, and it will be useful in our lower-bound proofs.
On complete classes, we will additionally be able to define a notion of width that we will use to show finer lower bounds.
Complete OBDDs.
An nOBDD O on variables V is complete if every path from the root to a sink tests every variable of V. For a variable v ∈ V, the width of a complete nOBDD O at v is the number of nodes of O labeled with v. The width of O is the maximum, over all v ∈ V, of the width of O at v.
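Under a hypothetical encoding of the decision nodes as (id, variable) pairs, this width is straightforward to compute:

```python
from collections import Counter

def obdd_width(nodes):
    """Width of a complete nOBDD.
    nodes: iterable of (node_id, variable) pairs, one per decision node.
    The width at a variable is the number of nodes labeled with it;
    the width of the nOBDD is the maximum over all variables."""
    counts = Counter(var for _, var in nodes)
    return max(counts.values()) if counts else 0
```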
It is immediate that partially evaluating a complete nOBDD does not increase its width: let O be a complete nOBDD (resp., uOBDD) on variables V, with order v and of width w, and let φ be the Boolean function that O captures. Let V′ ⊆ V, and let ν be a valuation of V′. Then there exists a complete nOBDD (resp., uOBDD) O′ on variables V ∖ V′, with the order induced by v and of width at most w, that computes the partial evaluation of φ under ν.
Complete SDNNFs.
The notions of completeness and width of OBDDs extend naturally to SDNNFs. Following [22], we say that a (d)SDNNF (resp., extended (d)SDNNF) is complete if λ labels every gate of D (not just ∧-gates) with a node of T and the following conditions are satisfied:

The output gate of D is an ∨-gate;

For every variable gate g of D, λ(g) is the leaf of T corresponding to the variable of g;

For every ¬-gate g of D, letting g′ be the variable gate that feeds g, we have λ(g) = λ(g′);

For every ∨-gate g of D and any input g′ of g, the gate g′ is not an ∨-gate, and moreover we have λ(g′) = λ(g);

For every ∧-gate g of D and any input g′ of g, the gate g′ is an ∨-gate, and we have that λ(g′) is a child of λ(g);

For every ∧-gate g of D and any two distinct inputs g′ and g″ of g, we have λ(g′) ≠ λ(g″).
For a node n of T, the width of a complete (d)SDNNF (resp., extended complete (d)SDNNF) at n is the number of ∨-gates that are structured by n, i.e., mapped to n by λ. The width of the circuit is the maximal width at a node n of T.
One of the advantages of complete (d)SDNNFs of bounded width is that we can work with extended vtrees and then compress them in linear time, so that the vtree becomes non-extended and the size of the circuit is linear in the number of variables. When doing so, the extended vtree is modified in a way that we call a reduction: let T and T′ be two extended vtrees over variables V. We say that T′ is a reduction of T if, for every internal node n′ of T′, there exists an internal node n of T such that the variables below the left (resp., right) child of n′ are among those below the left (resp., right) child of n.
We can now show how to compress extended complete (d)SDNNFs:
[[22]] Let D be an extended complete (d)SDNNF of width w, structured by an extended vtree T over n variables. We can compute in linear time a complete (d)SDNNF of width at most w, structured by a vtree T′ that is a reduction of T, and whose size is linear in n (for fixed width w).
Proof.
We present a complete proof, inspired by the proof in [22, Lemma 4]. As a first prerequisite, we preprocess D in linear time so that the number of ∧-gates structured by the same node of T is bounded. This can be done, as in [22, Observation 3], by noticing that there can be at most (w+1)² inequivalent ∧-gates that are structured by a node n. Indeed, this is clear if n is a leaf, as such an ∧-gate cannot have an input (so there is at most one inequivalent ∧-gate). If n is an internal node with children n₁ and n₂, any ∧-gate structured by n can have one input among the at most w ∨-gates structured by n₁ or no input among these gates, and likewise it can have one input among the ∨-gates structured by n₂ or no input among these gates, so there are at most (w+1)² possible inequivalent ∧-gates. We can then merge all the ∧-gates that are equivalent, and obtain a complete (d)SDNNF where, for each node of the vtree, at most (w+1)² ∧-gates are structured by it.
The second prerequisite is to eliminate the gates that are not connected to the output of D, and then to propagate the constants in the circuit (i.e., to evaluate it partially). In other words, we eliminate all gates (and their wires) that are not connected to the output of D, and then repeat the following until convergence:

For every constant-0 gate g (i.e., an ∨-gate with no input) and wire (g, g′): if g′ is an ∨-gate then simply remove the wire (g, g′), and if g′ is an ∧-gate then replace g′ by a constant-0 gate; then remove g and all the wires connected to it.

For every constant-1 gate g (i.e., an ∧-gate with no input) and wire (g, g′): if g′ is an ∧-gate then simply remove the wire (g, g′), and if g′ is an ∨-gate then replace g′ by a constant-1 gate; then remove g and all the wires connected to it.
This again can be done in linear time (by a DFS traversal of the circuit, for instance), and it does not change the properties of the circuit or the captured function. Further, it ensures that the ∧- and ∨-gates of the resulting circuit always have at least one input, or that we get to one single constant gate (if the circuit captures a constant Boolean function): as this second case of constant functions is uninteresting, we assume that we are in the first case. We call D′ the resulting circuit. It is clear that D′ is still structured by T (by restricting λ to the gates that have not been removed).
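The constant-propagation step above can be sketched as follows; the dictionary-based circuit encoding and the convention of representing constants as input-free gates are our own illustration.

```python
def propagate_constants(gates):
    """Propagate constants in an and/or circuit, in place.
    gates: dict id -> ("and", set_of_inputs) | ("or", set_of_inputs)
           | ("var", variable_name).
    Convention: an "or"-gate with no inputs is the constant 0, and an
    "and"-gate with no inputs is the constant 1.  A constant feeding a
    gate of the same kind is absorbed (the wire is removed); a constant
    feeding a gate of the other kind forces that gate's value."""
    changed = True
    while changed:
        changed = False
        for g, (kind, inputs) in list(gates.items()):
            if kind == "var" or inputs:
                continue                      # g is not a constant gate
            const = 1 if kind == "and" else 0
            consumers = [h for h, (k2, ins) in gates.items()
                         if k2 in ("and", "or") and g in ins]
            for h in consumers:
                hkind, hins = gates[h]
                if (const == 1) == (hkind == "and"):
                    hins.discard(g)           # weak value: just drop the wire
                else:                         # strong value: h becomes a constant
                    gates[h] = ("and" if const == 1 else "or", set())
                changed = True
    return gates
```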
Having enforced these prerequisites on D, the idea is to eliminate the unlabeled leaves of the vtree one by one, by merging the parent and the sibling of each such leaf. Formally, whenever we can find in T an unlabeled leaf ℓ with parent p and sibling s, we perform these two steps:

Remove from T the leaf ℓ (and the edge to its parent), noticing that no gate of D was structured by ℓ because we propagated the constants in the circuit in our second preprocessing step; then replace the parent p in T by its remaining child s, so that T is again binary and full.

We now need to modify D so that it is an extended complete (d)SDNNF structured by the new vtree. There is nothing to do in the case where s was an unlabeled leaf, because then no gate of D was structured by s, or even by p (since we propagated constants). In the case where s was a variable leaf or an internal node, then, for every ∨-gate g that was structured by p, we compute the set S_g of gates that were structured by s, that are not ∨-gates, and such that there is a path from g to them in D. Thanks to our first preprocessing step, each S_g can be computed in time polynomial in w, as this bounds the number of gates structured by p and by s. Observe that gates in S_g can be either ∧-gates that were structured by s (in case s was an internal node), or ¬-gates or variable gates (in case s was a variable leaf). Now, remove from D all the ∧-gates that were structured by p, all the ∨-gates that were structured by s, and all the edges connected to them. For each ∨-gate g that was structured by p, set its new inputs to be all the gates in S_g. One can check that the resulting circuit captures the same function (this uses the fact that we propagated constants), and that determinism cannot be broken in case the original circuit was a dSDNNF. Moreover, D is now an extended complete (d)SDNNF structured by the new vtree.
By iterating this process, we eventually obtain a vtree that is not extended, and the resulting circuit is an equivalent complete (d)SDNNF of width at most w and of size linear in the number of variables. The total time is linear, since we spend constant time (for fixed w) to eliminate each unlabeled leaf. Moreover, it is clear that the final vtree is a reduction of the original vtree, as this property is preserved by each elimination step. ∎
As for OBDDs (Lemma 3.4), we will use the fact that partially evaluating a complete (d)SDNNF cannot increase the width: let D be a complete (d)SDNNF of width w, structured by a vtree T over variables V, and let φ be the Boolean function that D captures. Let V′ ⊆ V and let ν be a valuation of V′. Then there exists a complete (d)SDNNF of width at most w on variables V ∖ V′ computing the partial evaluation of φ under ν, structured by a vtree T′ that is a reduction of T.
Proof.
We replace every leaf of T that corresponds to a variable of V′ by an unlabeled leaf, replace every variable gate on a variable of V′ in D by the corresponding constant gate, replace every ¬-gate whose input variable is in V′ by the corresponding constant gate, and then propagate constants as in the second prerequisite of the proof of Lemma 3.4. This yields an extended complete (d)SDNNF computing the partial evaluation. We then conclude by applying Lemma 3.4. ∎
Making nOBDDs and SDNNFs complete.
Imposing completeness on nOBDDs or SDNNFs is in fact not too restrictive, as we can assume that OBDDs and SDNNFs are complete up to multiplying the size by the number of variables:
For any nOBDD (resp., SDNNF) on variables V, there exists an equivalent complete nOBDD (resp., SDNNF) whose size is at most the original size multiplied by |V|.
Proof.
The result will follow from a more general completion result on unstructured classes given later in the paper (Lemma 8); it is straightforward to observe that applying the constructions of that lemma yields structured outputs when the input representations are themselves structured. ∎
4 Upper Bound
In this section we study how to compile Boolean circuits to dSDNNFs (resp., uOBDDs), parameterized by the treewidth (resp., pathwidth) of the input circuits. We first present our results in Section 4.1, then show some examples of applications in Section 4.2, before providing full proofs in Section 5.
4.1 Results
To present our upper bounds, we first review the independent result that was recently shown by Bova and Szeider [16] on compiling boundedtreewidth circuits to dSDNNFs:
[[16, Theorem 3 and Equation (22)]] Given a Boolean circuit C of treewidth k, there exists an equivalent dSDNNF of size f(k) · |C|, where f is a doubly exponential function of k.
Their result has two drawbacks: (i) it has a doubly exponential dependency on the width; and (ii) it is non-constructive, because [16] gives no time bound on the computation, leaving open the question of effectively compiling bounded-treewidth circuits to dSDNNFs. The non-constructive aspect can easily be tackled by encoding in linear time the input circuit into a relational instance of the same treewidth, and then using [4, Theorem 6.11] to construct in linear time a dSDNNF representation of the provenance on this instance of a fixed MSO formula describing how to evaluate Boolean circuits (see the conference version of this paper [5] for more details). This “naive” approach computes a dSDNNF in time linear in the circuit, but with a dependency on the treewidth given by a superexponential function, which does not address the first drawback. We show that this dependency can be made singly exponential.
Treewidth bound.
Our main upper bound result addresses both drawbacks and shows that we can compile in time linear in the circuit and singly exponential in the treewidth. Our proof is independent of [16]. Formally, we show:
There exists a function f, singly exponential in its argument, such that the following holds. Given as input a Boolean circuit C and a tree decomposition T of width k of C, we can compute a complete extended dSDNNF equivalent to C, of width f(k), in time O(f(k) · (|C| + |T|)).
This result assumes that the tree decomposition is provided as input, but we can instead use Theorem 2 to obtain it. We can also apply Lemma 3.4 to the resulting circuit to get a proper (non-extended) dSDNNF and reduce its size so that it only depends on the number of variables of the input circuit (rather than on its total size), which allows us to truly generalize Theorem 4.1. Putting all of this together, we get:
There exists a constant c such that the following holds. Given as input a Boolean circuit C of treewidth k, we can compute in time O(2^{ck} · |C|) a complete dSDNNF equivalent to C, of width 2^{ck} and of size 2^{ck} · n, where n is the number of variables of C.
Pathwidth bound.
A byproduct of our construction is that, in the special case where we start with a path decomposition, the dSDNNF computed is in fact a uOBDD. The compilation of bounded-pathwidth Boolean circuits to OBDDs had already been studied in [32, 4]: Corollary 2.13 of [32] shows that a circuit of pathwidth k has an equivalent OBDD of width doubly exponential in k, and [4, Lemma 6.9] justifies that the transformation can be made in polynomial time. Our second upper bound result is that, by using uOBDDs instead of OBDDs, we can get a singly exponential dependency:
There exists a function f, singly exponential in its argument, such that the following holds. Given as input a Boolean circuit C and a path decomposition P of width k of C, we can compute a complete uOBDD equivalent to C, of width f(k), in time O(f(k) · (|C| + |P|)).
4.2 Applications
Theorem 4.1 implies several consequences for bounded-treewidth circuits. The first one deals with probability computation: we are given a probability valuation π mapping each variable x to the probability π(x) that x is true (independently from the other variables), and we wish to compute the probability that C evaluates to true under π, assuming that arithmetic operations (sum and product) take unit time. More formally, we define the probability of a valuation ν as the product of π(x) over the variables x that ν sets to true, multiplied by the product of 1 − π(x) over the variables x that ν sets to false.
The probability of the Boolean circuit C under the probability valuation π, noted Pr_π(C), is then the total probability of the valuations that satisfy C; formally, it is the sum of the probabilities of all valuations ν that satisfy C.
When π(x) = 1/2 for every variable x, the probability computation problem simplifies to the model counting problem, i.e., counting the number of satisfying valuations, noted #C: indeed, in this case we have Pr_π(C) = #C / 2^n, where n is the number of variables. Hence, the probability computation problem is #P-hard for arbitrary circuits. However, it is tractable for deterministic decomposable circuits [24]. Thus, our result implies the following, where |π| denotes the size of writing the probability valuation π:
Let f be the function from Theorem 4.1. Given a Boolean circuit C, a tree decomposition T of width k of C, and a probability valuation π of C, we can compute Pr_π(C) in time O(f(k) · (|C| + |T|) + |π|).
Proof.
This follows from Theorem 4.1 together with the linear-time probability computation on deterministic decomposable circuits [24]. This improves the bound obtained when applying message-passing techniques [34] directly on the bounded-treewidth input circuit (as presented, e.g., in [3, Theorem D.2]). Indeed, message passing applies to moralized representations of the input: for each gate, the tree decomposition must contain a bag containing all inputs of this gate simultaneously, which is problematic for circuits of large fan-in. Indeed, if the original circuit has a tree decomposition of width k, rewriting it to make it moralized will in general result in a tree decomposition of width exponential in k (see [2, Lemmas 53 and 55]), and the bound of [3, Theorem D.2] then yields an overall complexity for message passing that is doubly exponential in k. Our Corollary 4.2 achieves a more favorable bound because Theorem 4.1 directly uses the associativity of ∧ and ∨. We note that the connection between message-passing techniques and structured circuits has also been investigated by Darwiche, but his construction [25, Theorem 6] produces arithmetic circuits rather than dDNNFs, and it also needs the input to be moralized.
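The linear-time probability computation on deterministic decomposable circuits that underlies this corollary can be sketched as follows; the dictionary-based circuit encoding is our own illustration.

```python
def probability(gates, out, prob):
    """Probability of a deterministic decomposable circuit under an
    independent distribution on its variables.
    gates: id -> ("var", name) | ("not", gate_id)
               | ("and", [ids]) | ("or", [ids]);
    prob: variable name -> probability of being true.
    Decomposability lets us multiply at and-gates; determinism (mutually
    exclusive inputs) lets us sum at or-gates; memoization gives a single
    bottom-up pass, hence linear time."""
    memo = {}
    def pr(g):
        if g in memo:
            return memo[g]
        kind, args = gates[g]
        if kind == "var":
            p = prob[args]
        elif kind == "not":
            p = 1.0 - pr(args)
        elif kind == "and":
            p = 1.0
            for h in args:
                p *= pr(h)      # inputs on disjoint variables: independent
        else:  # "or": inputs are mutually exclusive by determinism
            p = sum(pr(h) for h in args)
        memo[g] = p
        return p
    return pr(out)
```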
A second consequence concerns the task of enumerating the accepting valuations of circuits, i.e., producing them one after the other, with small delay between consecutive outputs. The valuations are concisely represented as assignments, i.e., as the set of variables that are set to true, omitting those that are set to false. This task is of course NP-hard on arbitrary circuits (as it implies that we can check whether an accepting valuation exists), but was recently shown in [1] to be feasible on dSDNNFs with linear-time preprocessing and delay linear in the Hamming weight of each produced assignment. Hence, we have:
Let f be the function from Theorem 4.1. Given a Boolean circuit C and a tree decomposition T of width k of C, we can enumerate the accepting assignments of C with preprocessing in O(f(k) · (|C| + |T|)) and delay linear in the size of each produced assignment.
Proof.
Immediate by combining Theorem 4.1 with the enumeration result of [1] stated above. ∎
A third consequence concerns the tractability of quantifying variables in bounded-treewidth circuits. Let φ be a Boolean function on variables V, and let Y_1, …, Y_m be disjoint subsets of V. A quantifier prefix of length m is a prefix of the form Q_1 Y_1 ⋯ Q_m Y_m, where each Q_i is either ∃ or ∀. Then Q_1 Y_1 ⋯ Q_m Y_m φ is a Boolean function on the variables of V outside Y_1 ∪ ⋯ ∪ Y_m, with the obvious semantics. Then [22] shows:
[[22, Theorem 5]] There is an algorithm that, given a complete SDNNF of width w computing φ and a set Y of variables, computes, in time exponential in w and linear in the size of the SDNNF, a complete dSDNNF of width at most 2^w having a designated gate computing ∃Y φ and another designated gate computing ¬∃Y φ.
By iterating the construction of Theorem 4.2 and using the identity ∀Y φ = ¬∃Y ¬φ, one can easily get: let Q_1 Y_1 ⋯ Q_m Y_m be a quantifier prefix of length m. There is an algorithm that, given a complete dSDNNF of width w computing φ, computes a complete structured dSDNNF representing Q_1 Y_1 ⋯ Q_m Y_m φ, whose width (and the running time) is bounded by a tower of exponentials of height m in w.
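The identity used above, together with existential quantification by Shannon cofactors, can be illustrated on Boolean functions given as Python predicates (our own encoding, for illustration only):

```python
def exists(x, f):
    """Existential quantification by cofactors:
    (exists x. f) holds iff f holds with x set to false or with x set to true."""
    return lambda v: f({**v, x: False}) or f({**v, x: True})

def forall(x, f):
    """Universal quantification, via the identity  forall x. f = not exists x. not f."""
    return lambda v: not exists(x, lambda w: not f(w))(v)
```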
5 Proof of the Upper Bound
We first present in Section 5.1 the construction used for Theorem 4.1, then prove in Section 5.2 that this construction is correct and can be performed within the prescribed time bound. We then explain, in Section 5.3, how to specialize the construction to the case of bounded-pathwidth circuits and uOBDDs.
5.1 Construction
Let C be the input circuit, n its number of variables, and T the input tree decomposition of C, of width k. We start with prerequisites.
Prerequisites.
Let g_0 be the output gate of C. Thanks to Lemma 2, we can assume that T is friendly. For every variable gate g, we choose a leaf bag b_g of T such that g occurs in b_g. Such a leaf bag exists because T is friendly (specifically, thanks to the corresponding bullet points in the definition of friendliness). We say that b_g is responsible for the variable gate g. We can obviously choose such a b_g for every variable gate in linear time in |T|.
To abstract away the types of gates and their values in the construction, we will talk of strong and weak values. Intuitively, a value is strong for a gate g if any input of g which carries this value determines the value of g, and weak otherwise. Formally:
Let g be a gate and b ∈ {0, 1}:

If g is an ∨-gate, we say that 1 is strong for g and 0 is weak for g;

If g is an ∧-gate, we say that 0 is strong for g and 1 is weak for g;

If g is a ¬-gate, then 0 and 1 are both strong for g;

For technical convenience, if g is a variable gate, then 0 and 1 are both weak for g.
If we take any valuation of the circuit C and extend it to an evaluation of all gates, then this evaluation will respect the semantics of gates. In particular, it will respect strong values: for any gate g of C, if g has an input g′ whose value is strong for g, then the value of g is determined by it: it is 1 if g is an ∨-gate, 0 if g is an ∧-gate, and the negation of the input value if g is a ¬-gate. In our construction, we will need to guess how gates of the circuit are evaluated, focusing on a subset of the gates (as given by a bag of T); we will then call almost-evaluation an assignment of these gates that respects strong values. Formally: let S be a set of gates of C. We call ν : S → {0, 1} an S-almost-evaluation if it respects strong values, i.e., for every gate g ∈ S, if there is an input g′ of g in S such that ν(g′) is a strong value for g, then ν(g) is determined from ν(g′) in the sense above.
Respecting strong values is a necessary condition for such an assignment to be extensible to a valuation of the entire circuit. However, it is not sufficient: an almost-evaluation may map a gate g to a strong value even though g has no input that can justify this value. This is hard to avoid: when we focus on the set S, we do not know about the other inputs of g. For now, let us call unjustified the gates of S that carry a strong value that is not justified within S:
Let S be a set of gates of a circuit C and ν an S-almost-evaluation. We call g ∈ S unjustified if ν(g) is a strong value for g but, for every input g′ of g in S, the value ν(g′) is weak for g; otherwise, g is justified. The set of unjustified gates of ν is written Unj(ν).
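These definitions translate directly into code; the gate encoding and function names below are our own illustration.

```python
def is_strong(gate_kind, value):
    """A value is strong for a gate if any input carrying it determines
    the gate's output: 1 for or-gates, 0 for and-gates, both values for
    not-gates, and (by convention) neither value for variable gates."""
    if gate_kind == "or":
        return value == 1
    if gate_kind == "and":
        return value == 0
    return gate_kind == "not"

def unjustified(gates, S, nu):
    """Gates g in S whose strong value nu[g] is not witnessed by any
    input of g inside S.  gates: id -> (kind, list_of_inputs)."""
    result = set()
    for g in S:
        kind, inputs = gates[g]
        if not is_strong(kind, nu[g]):
            continue  # weak value: nothing to justify
        if not any(h in S and is_strong(kind, nu[h]) for h in inputs):
            result.add(g)
    return result
```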
Let us start by explaining at a high level how to construct the dSDNNF D equivalent to the input circuit C (we will later describe the construction formally). We do so by traversing T bottom-up, and for each bag b of T we create in D one gate for each pair (ν, S), where ν is a b-almost-evaluation and S is a subset of b which we call the suspicious gates of the pair. We will connect the gates of D created for each internal bag with the gates created for its children in T, in a way that we will explain later. Intuitively, for a gate of D created for the bag b, the suspicious gates in the set S are gates of b whose strong value is not justified by ν, and is not justified either by any of the almost-evaluations at descendant bags of b to which the gate is connected. We call innocent the other gates of b: these are the gates that are justified in ν (in particular, those that carry weak values), and the gates that are unjustified in ν but have been justified by an almost-evaluation at a descendant bag of b. Crucially, in the latter case, the gate justifying the strong value may no longer appear in b; this is why we remember the set S.
We still have to explain how we connect the gates created for a bag b to the gates created for the children b_1 and b_2 of b in T. The first condition is that the almost-evaluations ν_1 and ν_2 at b_1 and b_2 must mutually agree, i.e., coincide on the gates shared by the two bags, and the almost-evaluation ν at b must then be the union of ν_1 and ν_2, restricted to b. We impose a second condition to prohibit suspicious gates from escaping before they have been justified, which we formalize as connectibility of a pair at a bag b to the parent bag of b. Let b be a non-root bag, b′ its parent bag, and ν a b-almost-evaluation. For any set S of suspicious gates, we say that (ν, S) is connectible to b′ if every gate of S still appears in b′. If a gate is such that it is not connectible to the parent bag, then this gate will not be used as input to any other gate, but we do not try to preemptively remove these useless gates in the construction (note that this will be taken care of at the end, when we apply Lemma 3.4). We are now ready to give the formal definition that will be used to explain how gates are connected: let b be an internal bag with children b_1 and b_2, let ν_1 and ν_2 be respectively b_1- and b_2-almost-evaluations that mutually agree, and