The Complexity of Reasoning with FODD and GFODD

Recent work introduced Generalized First Order Decision Diagrams (GFODD) as a knowledge representation that is useful in mechanizing decision theoretic planning in relational domains. GFODDs generalize function-free first order logic and include numerical values and numerical generalizations of existential and universal quantification. Previous work presented heuristic inference algorithms for GFODDs and implemented these heuristics in systems for decision theoretic planning. In this paper, we study the complexity of the computational problems addressed by such implementations. In particular, we study the evaluation problem, the satisfiability problem, and the equivalence problem for GFODDs under the assumption that the size of the intended model is given with the problem, a restriction that guarantees decidability. Our results provide a complete characterization placing these problems within the polynomial hierarchy. The same characterization applies to the corresponding restriction of problems in first order logic, giving an interesting new avenue for efficient inference when the number of objects is bounded. Our results show that for Σ_k formulas, and for corresponding GFODDs, evaluation and satisfiability are Σ_k^p complete, and equivalence is Π_{k+1}^p complete. For Π_k formulas, evaluation is Π_k^p complete, satisfiability is one level higher and is Σ_{k+1}^p complete, and equivalence is Π_{k+1}^p complete.


1 Introduction

The complexity of inference in first order logic has been investigated intensively. It is well known that the problem is undecidable, and that this holds even with strong restrictions on the types and number of predicates allowed in the logical language. For example, the problem is undecidable for the ∀∃∀ quantifier prefix with a signature having a single binary predicate and equality [8]. Unfortunately, the problem remains undecidable if we restrict attention to satisfiability under finite structures [6, 24]. Thus, in either case, one cannot quantify the relative difficulty of problems without further specialization or assumptions. On the other hand, algorithmic progress in AI has made it possible to reason efficiently in some cases. In this paper we study such problems under the additional restriction that an upper bound on the intended model size is given explicitly. This restriction is natural for many applications, where the number of objects is either known in advance or known to be bounded by some quantity. Since the inference problem is decidable under this restriction, we can provide a more detailed complexity analysis.

This paper is motivated by recent work on decision diagrams, known as FODDs and GFODDs, and the computational questions associated with them. Binary decision diagrams [3, 1] are a successful knowledge representation capturing functions over propositional variables, that allows for efficient manipulation and composition of functions, and diagrams have been used in various applications in program verification and AI [3, 1, 11]. Motivated by this success, several authors have attempted generalizations to handle relational structure and first order quantification [9, 33, 30, 16]. In particular FODDs [33] and their generalization GFODDs [16] have been introduced and shown to be useful in the context of decision theoretic planning [2, 20, 12, 13] for problems with relational structure [15, 17].

GFODDs can be seen to generalize the function-free portion of first order logic (i.e., signatures with constants but without higher arity functions) to allow for non-binary numerical values generalizing truth values, and for numerical quantifiers generalizing existential and universal quantification in logic. Efficient heuristic inference algorithms for such diagrams have been developed focusing on the finite model case, and using the notion of “reasoning from examples” [22, 23, 21]. This paper analyzes the complexity of the evaluation, satisfiability, and equivalence problems for such diagrams, focusing on the GFODD subset with max and min aggregation that is defined in the next section. To avoid undecidability and get a more refined classification of complexity, we study a restricted form of the problem where the finite size of the intended model is given as part of the input to the problem. As we argue below, this is natural and relevant in the applications of GFODDs for solving decision theoretic control problems. The same restrictions can be used for the corresponding (evaluation, satisfiability and equivalence) problems in first order logic, but to our knowledge this has not been studied before. We provide a complete characterization of the complexity showing an interesting structure. Our results are developed for the GFODD representation and require detailed arguments about the graphical representation of formulas in that language. The same lines of argument (with simpler proof details) yield similar results for first order logic. To translate our results to the language of logic, consider the quantifier prefix of a first order logic formula, using the standard notation Σ_k, Π_k to denote the alternation depth of quantifiers in the formula. With this translation, our results show that:

(1) Evaluation over finite structures spans the polynomial hierarchy, that is, evaluation of Σ_k formulas is Σ_k^p complete, and evaluation of Π_k formulas is Π_k^p complete.

(2) Satisfiability, with a given bound on model size, follows a different pattern: satisfiability of Σ_k formulas is Σ_k^p complete, and satisfiability of Π_k formulas is Σ_{k+1}^p complete.

(3) Equivalence, over the set of models bounded by a given size, depends only on quantifier depth: both the equivalence of Σ_k formulas and the equivalence of Π_k formulas are Π_{k+1}^p complete.

The positive results allow for constants in the signature, but the hardness results, except for satisfiability of Π_1 formulas, hold even without constants. For signatures without constants, satisfiability of Π_1 formulas is in NP; when constants are allowed, it is Σ_2^p complete as in the general template.

These results are useful in that they clearly characterize the complexity of the problems solved heuristically by implementations of GFODD systems [15, 17] and can be used to partly motivate or justify the use of these heuristics. For example, the “model checking reductions” of [16] that simplify the structure of diagrams replace equivalence tests with model evaluation on a “representative” set of models. When this set is chosen heuristically, as in [15], this leads to inference that is correct with respect to these models but otherwise incomplete. Our results show that this indeed leads to a reduction of the complexity of the inference problem, so that the reduction in accuracy is traded for improved worst case run time. Importantly, it shows that without compromising correctness, the complexity of equivalence tests that are used to compress the representation will be higher. These issues and further questions for future work are discussed in the concluding section of the paper.

The rest of the paper is organized as follows. The next section defines FODDs and GFODDs and provides a more detailed motivation for the technical questions. Section 3 then develops the results for FODDs. We treat the FODD case separately for three reasons. First, this serves for an easy introduction into the results that avoids some of the more involved arguments that are required for GFODDs. Second, as will become clear, for FODDs we do not need the additional assumption on model size, so that the results are in a sense stronger. Finally, some of the proofs for GFODDs require alternation depth of at least two so that separate proofs are needed for FODDs in any case. Section 4 develops the results for GFODDs. The final section concludes with a discussion and directions for future work.

2 FODDs and GFODDs and their Computational Problems

This section introduces the GFODD representation and associated computational problems, and explains how they are motivated by prior work on applying GFODDs in decision theoretic planning. We assume familiarity with basic concepts and notation in predicate logic [25, 29, 4] as well as basic notions from complexity theory [14, 32, 26].

Decision diagrams are similar to expressions in first order logic (FOL). They are defined relative to a relational signature, with a finite set of predicates each with an associated arity (number of arguments), a countable set of variables {x_1, x_2, …}, and a set of constants. We do not allow function symbols other than constants (that is, functions with arity 0). In addition, we assume that the arity of predicates is bounded by some numerical constant. A term is a variable or a constant, and an atom is either an equality between two terms or a predicate with an appropriate list of terms as arguments. Intuitively, a term refers to an object in the world of interest and an atom is a property which is either true or false.

To motivate the diagram representation, consider first a simpler language of generalized expressions, which we illustrate informally by some examples. In FOL we can consider open formulas that have unbound variables. For example, the atom color(x, y) is such a formula and its truth value depends on the assignment of x and y to objects in the world. To simplify the discussion, we assume for this example that arguments are typed, and that x ranges over “objects” and y over “colors”. We can then quantify over these variables to get a sentence which will be evaluated to a truth value in any concrete possible world. For example, we can write ∃y, ∀x, color(x, y), expressing the statement that there is a color associated with all objects. Generalized expressions allow for more general open formulas that evaluate to numerical values. For example, an open expression may return 0.2 when color(x, y) is true and 0.1 otherwise; it is similar to the logical expression but returns non-binary values. Quantifiers from logic are replaced with aggregation operators that combine numerical values and provide a generalization of the logical constructs. In particular, when the open formula is restricted to values 0 and 1, the operators max and min simulate existential and universal quantification. Thus, max_y min_x color(x, y) is equivalent to the sentence above. But we can allow for other types of aggregation. For example, max_y sum_x color(x, y) evaluates to the largest number of objects associated with one color, and sum_x min_y (1 − color(x, y)) evaluates to the number of objects that have no color association. GFODDs are also related to work in statistical relational learning [28, 27, 5]. For example, if an open expression captures the probability of ground facts which are mutually independent, then aggregating it with product over all its variables captures the joint probability of all such facts. Of course, the open formulas in logic can include more than one atom and similarly expressions can be more involved. In this manner, a generalized expression represents a function from possible worlds to numerical values. GFODDs capture the same set of functions but provide an alternative representation for the open formulas through directed graphs. GFODDs were introduced together with a set of operations that can be used to manipulate and combine functions and in this way provide a tool for computation with numerical functions over possible worlds. Prior work includes implementations of the FODD fragment, where the only aggregation operator allowed is max [17, 15], and more recently implementations for GFODDs with max and average aggregation [19, 18]. In this paper we investigate several computational questions for GFODDs with max and min aggregation.

2.1 Syntax

First order decision diagrams (FODD) and their generalization (GFODD) were defined by [33, 16], inspired by previous work in [9]. A GFODD is composed of two parts: the aggregation function and the open formula portion, which is captured by a diagram or graph. The aggregation portion is given by a listing of the variables x_1, …, x_n of the diagram in some arbitrary order and a corresponding list of n aggregation operators, one specifying the aggregation over each x_i. In this paper we restrict the aggregation operator for each variable to be max or min. To reflect the structure of GFODDs, and distinguish between the aggregation list agg and the graph portion D of a diagram, we sometimes denote a GFODD B by (agg, D). However, when clear from the context we use B as a shorthand for its diagram portion D. FODDs are a special case of GFODDs where the aggregation operator is max for all variables. Due to the associativity and commutativity of max, the aggregation function for FODDs does not need to be represented explicitly.

As in propositional decision diagrams [3, 1], the diagram portion is a rooted acyclic graph with directed edges. Each node in the graph is labeled. A non-leaf node is labeled with an atom from the signature and it has exactly two outgoing edges. The directed edges correspond to the truth values of the node’s atom. A leaf is labeled with a non-negative numerical value. We sometimes restrict diagrams to have only binary leaves with values 0 or 1. In this case we can consider the values to be the logical values false and true. An example diagram is shown in Figure 1. In this diagram and all other diagrams in this paper, left going edges denote the true branch out of a node and right going edges represent the false branch.

Similar to the propositional case [3, 1], GFODD syntax is restricted to comply with a predefined total order on atoms. In the propositional case the ordering constraint yields a normal form (a unique minimal representation for each function), which is in turn the main source of efficient reasoning. For GFODDs, a normal form has not been established, but the use of ordering makes for more efficient simplification of diagrams. In particular, following [33], we assume a fixed ordering on predicate names, a fixed ordering on variable names, and a fixed ordering on constants, and require that every variable precedes every constant in the order. The order is extended to atoms by considering them as lists: an atom with a smaller predicate name precedes one with a larger predicate name, and two atoms over the same predicate are compared by the lexicographic ordering over their argument lists. Node labels in the GFODD must obey this order, so that if node n_1 is above node n_2 in the diagram then the label of n_1 precedes the label of n_2. The example of Figure 1 is ordered in this way, with the edge predicate ordered before equality and a lexicographic variable ordering.

The ordering assumption is helpful when constructing systems using GFODDs because it simplifies the computations. Our complexity results hold in general, whether the assumption holds or not, therefore showing that while the assumption is convenient it does not fundamentally change the complexity of the problems. In particular, for the positive results, the algorithms showing membership in various complexity classes hold even in the more general case when the diagrams are not sorted. For the hardness results, the reductions developed hold even in the more restricted case when the diagrams are sorted. A significant amount of detail in our analysis is devoted to handling ordering issues in the hardness results.

Our complexity analysis will use the following classification of GFODDs into subclasses. We say that a GFODD is a max-k-alternating GFODD if its list of aggregation operators consists of k blocks, where the first block includes max aggregation, the second includes min aggregation, and so on, alternating. We similarly define a min-k-alternating GFODD, where the first block has min aggregation operators. A GFODD has aggregation depth k if it is in one of these two classes.

2.2 Semantics

Diagrams, like first order formulas, are evaluated in possible worlds that provide an interpretation of their symbols. (Possible worlds are known in the literature under various names, including first order structures, first order models, and interpretations; in this paper we use the term interpretation.) In particular, a possible world or interpretation I specifies a domain of objects, an assignment of each constant in the signature to an object in the domain, and the truth values of predicates over these objects.

The semantics assigns a value, denoted MAP_B(I), for any diagram B on any interpretation I by considering all possible valuations. A variable valuation ζ is a mapping from the set of variables in B to domain elements in the interpretation I. This mapping assigns each node label a concrete (“ground”) atom in the interpretation, and therefore a truth value, and in this way defines a single path from root to leaf. The value of this leaf is the value of the GFODD on the interpretation I under the variable valuation ζ, and is denoted MAP_B(I, ζ). The final value, MAP_B(I), is defined by aggregating MAP_B(I, ζ) over ζ. In particular, considering the aggregation order op_1 x_1, op_2 x_2, …, op_n x_n, we loop with i taking values from n down to 1, aggregating values over x_i using its aggregation operator. We denote this by MAP_B(I) = op_1 x_1 op_2 x_2 … op_n x_n MAP_B(I, ζ), where for the special case of FODDs this yields MAP_B(I) = max_ζ MAP_B(I, ζ).

Consider evaluating the FODD example in Figure 1 on an interpretation I. One valuation ζ_1 may reach a leaf with value 0 while another valuation ζ_2 reaches the leaf with value 1, so that MAP_B(I, ζ_1) = 0 but MAP_B(I, ζ_2) = 1; since FODDs use max aggregation, this yields MAP_B(I) = 1.
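The semantics just described can be stated directly as a brute-force evaluator. The sketch below is illustrative only (the class and function names are ours, not from the GFODD literature, and equality atoms are omitted); it enumerates all valuations and is exponential in the number of variables, unlike the heuristic algorithms of [22, 23, 21]:

```python
# A minimal GFODD sketch: internal nodes test a ground atom under a
# valuation; leaves hold numerical values.
class Node:
    def __init__(self, atom, true_child, false_child):
        self.atom = atom          # e.g. ("e", "x", "y"): predicate, variable names
        self.true_child = true_child
        self.false_child = false_child

class Leaf:
    def __init__(self, value):
        self.value = value

def map_with_valuation(node, interp, valuation):
    """Follow the single root-to-leaf path fixed by the valuation (MAP_B(I, ζ))."""
    while isinstance(node, Node):
        pred, *args = node.atom
        ground = tuple(valuation[a] for a in args)
        node = node.true_child if ground in interp[pred] else node.false_child
    return node.value

def map_value(aggregation, diagram, domain, interp):
    """Aggregate over all valuations, outermost operator first (MAP_B(I))."""
    def go(agg, valuation):
        if not agg:
            return map_with_valuation(diagram, interp, valuation)
        (op, var), rest = agg[0], agg[1:]
        vals = (go(rest, {**valuation, var: obj}) for obj in domain)
        return max(vals) if op == "max" else min(vals)
    return go(aggregation, {})
```

For example, with a single-node diagram testing e(x, y), max aggregation over both variables returns 1 exactly when some edge exists, while min aggregation over both returns 1 only when all pairs are edges.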

2.3 Computations with GFODDs

The GFODD representation was introduced as a tool for mechanizing and solving decision problems given by structured Markov Decision Processes (MDP), also known as Relational MDPs or First Order MDPs. A detailed exposition is beyond the scope of this paper (see [33, 16]). This section provides some necessary technical details and some background to motivate the computational problems investigated in the paper. In this context, a planning problem world state can be described using an interpretation providing the objects in the world and the relations among them. An action moves the world from one state to another, where in MDPs this transition is non-deterministic. The so-called Q-function Q(a, s) provides a quality estimate of each action a in each state s. Using this function, one can control the MDP by picking the action maximizing Q(a, s) in state s. There are several algorithms to calculate such Q-functions, and previous work has introduced GFODDs as a compact representation for these functions. This is done by implementing a symbolic version of the well known Value Iteration (VI) algorithm, where the symbolic algorithm operates by manipulating GFODDs. Action selection provides our first computational question, that is, evaluating Q(a, s). In our context, this means calculating MAP_B(I), where I captures a and s and B is the GFODD representation of the Q-function. The same computational problem occurs in several other steps in the symbolic VI algorithm. We define this problem below as GFODD Evaluation.

Recall that a GFODD represents a function from interpretations to real values. One of the main operations required for the symbolic VI algorithm is the combination of such functions. In particular, let f_1 and f_2 be functions represented by two GFODDs B_1 and B_2, and let op be any binary operation over real values (e.g., plus). The combination operation returns a GFODD B representing a function f such that for all I we have f(I) = f_1(I) op f_2(I). That is, B is a symbolic representation of the pointwise operation op over the function values of B_1 and B_2. Note that since B_1 and B_2 are closed expressions we can standardize apart their variables before taking this operation.

Figure 2 shows how to combine the diagram portions (i.e., the open expressions) in a semantically coherent manner using the Apply procedure of [33]. The following theorem identifies conditions for the correctness of Apply when used with closed expressions. We say that a binary operation op is safe with respect to an aggregation operator agg if it distributes with respect to it. A list of safe pairs of binary operations and aggregation operators was provided by [16]. For the arguments of this paper we recall that the binary operations plus and times are safe with respect to max and min aggregation. For example, max_x (c + f(x)) = c + max_x f(x). With this definition we have:

Theorem 1 (see Theorem 4 of [16])

Let B_1 = (agg_1, D_1) and B_2 = (agg_2, D_2) be GFODDs that do not share any variables, and assume that op is safe with respect to all operators in agg_1 and agg_2. Let D = apply(D_1, D_2, op). Let agg be any permutation of the combined list of variables in agg_1 and agg_2 (with their operators), so long as the relative order of the operators within agg_1 and within agg_2 remains unchanged, and let B = (agg, D). Then for any interpretation I, MAP_B(I) = MAP_{B_1}(I) op MAP_{B_2}(I).

Therefore, when adding (or taking the logical-and of) functions represented by diagrams that are standardized apart we can use the Apply procedure on the graphical representations of these functions, and at the same time we have some flexibility in putting together their list of aggregation functions. This will be useful in our reductions.
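The diagram combination step can be sketched with the standard recursive Apply scheme known from propositional decision diagrams. This is a simplified illustration under our own naming (no caching or simplification), not the exact procedure of [33]:

```python
class Leaf:
    def __init__(self, value):
        self.value = value

class Node:
    def __init__(self, atom, hi, lo):
        self.atom, self.hi, self.lo = atom, hi, lo  # hi = true branch

def apply_op(d1, d2, op):
    """Combine two ordered diagrams pointwise with binary operation op,
    splitting on the smaller atom label so the result stays ordered."""
    if isinstance(d1, Leaf) and isinstance(d2, Leaf):
        return Leaf(op(d1.value, d2.value))
    a1 = d1.atom if isinstance(d1, Node) else None
    a2 = d2.atom if isinstance(d2, Node) else None
    if a2 is None or (a1 is not None and a1 < a2):
        # d1's atom comes first: branch on it, keep d2 whole on both sides.
        return Node(a1, apply_op(d1.hi, d2, op), apply_op(d1.lo, d2, op))
    if a1 is None or a2 < a1:
        return Node(a2, apply_op(d1, d2.hi, op), apply_op(d1, d2.lo, op))
    # Identical atoms: branch once and recurse on matching children.
    return Node(a1, apply_op(d1.hi, d2.hi, op), apply_op(d1.lo, d2.lo, op))
```

For diagrams that are standardized apart the identical-atom case never fires, and Theorem 1 then licenses interleaving the two aggregation lists over the combined diagram.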

The Apply procedure can introduce redundancy into diagrams. By this we mean that a simpler syntactic form, often a sub-diagram, can represent the same function. To illustrate, consider the diagrams of Figure 2 as FODDs (i.e., with max aggregation) and consider the first marked edge. It is easy to see that this edge can be redirected to a leaf with value zero without changing MAP_B(I) for any I. This is true because if we can reach the leaf with value 6 using some valuation then we can also reach the leaf with value 14 using another valuation, since the relevant variable is not constrained. Therefore, the max aggregation will always ignore valuations reaching the value 6. It is also easy to see that the second marked edge can be redirected to the leaf with value 14 without changing MAP_B(I). Simplification of diagrams by removing unnecessary portions is crucial for the efficiency of GFODD implementations, and a significant amount of previous work was devoted to mechanizing this process. (Simplification was called reduction by [33]; to avoid confusion with the standard complexity theory meaning of the term reduction, we use the term simplification instead.) Note that it is most natural to keep the aggregation portion fixed and simply manipulate the diagram portion. In this paper we abstract this process as testing for GFODD Equivalence, that is, testing whether the diagram is equivalent to a second simpler one. Motivated by the focus in the implementations on algorithms that remove one edge at a time, as illustrated in the example, we also formalize this special case.

2.4 Complexity Theory Notation

Recall that the polynomial hierarchy is defined from P, NP, and co-NP using an inductive construction with reference to computation with oracles [14, 32, 26]. In particular we have that Σ_1^p = NP and Π_1^p = co-NP. An algorithm is in the class C_1^{C_2} if it uses computation in C_1 with a polynomial number of calls to an oracle for a problem in class C_2. Then we have Σ_{k+1}^p = NP^{Σ_k^p} and Π_{k+1}^p = co-NP^{Σ_k^p}. A problem is in Π_k^p iff its complement is in Σ_k^p, and thus (since the oracle always answers deterministically and correctly) either of these can serve as the oracle in the definition.

2.5 Computational Problems

Before defining the computational problems we must define the representation of inputs. We assume that GFODDs are given using a list of aggregation operators and associated variables and a labelled graph representation of the diagram. This representation is clearly polynomially related to the number of variables and the number of nodes in the GFODD. Some of our problems require interpretations as input. Here we assume a finite domain so as to avoid issues of representing the interpretation. Thus an interpretation I is given as a list of objects serving as domain elements, a list specifying the mapping of constants to objects, and the extension of each predicate on these objects. Given that the signature is fixed and the arity of each predicate is constant, this implies that the size of I is polynomially related to the number of objects in I. As illustrated in the example of Figure 1, a graph can be seen as an interpretation whose domain is the set of nodes and with one binary predicate given by the edge relation. We can now define the computational problems of interest. We separate the definitions for FODDs and GFODDs because for GFODDs the unrestricted problems are undecidable and they require further refinement. The simplest problem requires us to evaluate a diagram on a given interpretation.

Definition 2 (FODD Evaluation)

Given a diagram B, an interpretation I with finite domain, and a value v: return Yes iff MAP_B(I) = v. In the special case when the leaves are restricted to 0 and 1, this can be seen as returning Yes iff B evaluates to true on I.

To calculate MAP_B(I) we can “run” a procedure for FODD Evaluation multiple times, once for each leaf value as v, and return the highest achievable result. Thus, if FODD Evaluation is in complexity class C, we can calculate the function value with a polynomial number of oracle calls to C, that is, in P^C. This fact is used several times in our constructions.
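This oracle scheme is trivial to state in code. In the sketch below, `accepts` is our stand-in for a FODD Evaluation decision procedure with B and I fixed (the names are ours, for illustration only):

```python
def map_via_evaluation(leaf_values, accepts):
    """Recover MAP_B(I) from an evaluation oracle: since MAP_B(I) must
    equal some leaf value, try each distinct leaf value from largest to
    smallest and return the first one the oracle accepts."""
    for v in sorted(set(leaf_values), reverse=True):
        if accepts(v):
            return v
    raise ValueError("MAP_B(I) must equal some leaf value")
```

The number of calls is bounded by the number of leaves, which is polynomial in the size of the diagram.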

Since diagrams generalize FOL it is natural to investigate satisfiability:

Definition 3 (FODD Satisfiability)

Given a diagram B with leaves in {0, 1}: return Yes iff there is some I such that MAP_B(I) = 1.

When B has more than two values in its leaves, the satisfiability problem becomes:

Definition 4 (FODD Value)

Given a diagram B and a value v: return Yes iff there is some I such that MAP_B(I) = v.

Notice that FODD Value requires that v is achievable but no value larger than v is achievable on the same I and, as the proofs below show, this extra requirement makes the problem harder. On the other hand, if we replace equality with ≥ in FODD Value, the problem is equivalent to FODD Satisfiability, because we can simply replace leaf values in the diagram with 0 or 1 according to whether they are ≥ v.

Finally, as motivated above, we investigate the simplification problem and its special case with single edge removal.

Definition 5 (FODD Equivalence)

Given diagrams B_1 and B_2: return Yes iff MAP_{B_1}(I) = MAP_{B_2}(I) for all I.

Definition 6 (FODD Edge Removal)

Given diagrams B_1 and B_2, where B_2 can be obtained from B_1 by redirecting one edge to a zero valued leaf: return Yes iff MAP_{B_1}(I) = MAP_{B_2}(I) for all I.

Given the discussion above, GFODDs with binary leaves can be seen to capture the function-free fragment of first order logic with equality. It is well known that satisfiability, and therefore also equivalence, of expressions in this fragment of first order logic is not decidable. In fact, the problem is undecidable even for very restricted forms of quantifier alternation (see the survey and discussion in [8]). For example, the problem is undecidable for the ∀∃∀ quantifier prefix with a single binary predicate and equality. The problem is also undecidable if we restrict attention to satisfiability under finite structures. Therefore, without further restrictions, we cannot expect much by way of classification of the complexity of the problems stated above for GFODDs.

We therefore restrict the problems so that the size of interpretations is given as part of the input. This makes the problems decidable and reveals the structure promised above. There are two motivations for using such a restriction. The first is that in some applications we might know in advance that the number of relevant objects is bounded by some large constant. For example, the main application of GFODDs to date has been for solving decision theoretic planning problems; in this context the number of objects in an instance (e.g., the number of trucks or packages in a logistics transportation problem) might be bounded by some known quantity. The second is that our results show that even under such strong conditions the computational problems are hard, providing some justification for the heuristic approaches used in FODD and GFODD implementations [15, 17, 18].

Definition 7 (GFODD Model Evaluation)

Given a diagram B, an interpretation I with finite domain, and a value v: return Yes iff MAP_B(I) = v. Note that when the leaves are restricted to 0 and 1, this can be seen as returning Yes iff B evaluates to true on I.

Definition 8 (GFODD Satisfiability)

Given a diagram B with leaves in {0, 1} and an integer n in unary: return Yes iff there is some I, with at most n objects, such that MAP_B(I) = 1.

Definition 9 (GFODD Value)

Given a diagram B, an integer n in unary, and a value v: return Yes iff there is some I, with at most n objects, such that MAP_B(I) = v.

Definition 10 (GFODD Equivalence)

Given diagrams B_1 and B_2 (with the same aggregation functions) and an integer n in unary: return Yes iff for all I with at most n objects, MAP_{B_1}(I) = MAP_{B_2}(I).

Definition 11 (GFODD Edge Removal)

Given diagrams B_1 and B_2 (with the same aggregation functions), where B_2 can be obtained from B_1 by redirecting one edge to a zero valued leaf, and given an integer n in unary: return Yes iff for all I with at most n objects, MAP_{B_1}(I) = MAP_{B_2}(I).

Since we are assuming a fixed bound on predicate arity, the assumption that n is given in unary is convenient because it implies that the size of an intended interpretation is polynomial in the size of the input. Therefore, an algorithm for these problems can explicitly represent an interpretation of the required size and test it. Our hardness results use a value of n that is at most linear in the size of the corresponding diagram B.

3 The Complexity of Reasoning with FODD

In this section we develop the complexity results for the special case of FODDs. Evaluation of FODDs is essentially the same as evaluation of conjunctive queries in databases and can be analyzed similarly. We include the argument here for completeness.

Theorem 12

FODD Evaluation is NP-complete.

Proof. Membership in NP is shown by the algorithm that guesses a valuation ζ, calculates MAP_B(I, ζ), and returns Yes iff the leaf reached has value v. Yes is returned iff some valuation yields a value as needed.

For hardness we reduce directed Hamiltonian path to this problem. As illustrated in Figure 1, given the number of nodes n in a graph, we can represent a generic Hamiltonian path verifier as a FODD over variables x_1, …, x_n. To do this we simply produce a left going path which verifies the existence of the edges (x_1, x_2), …, (x_{n−1}, x_n), followed by equality tests to verify that all nodes are distinct. All “failure exits” on this path go to 0 and the success exit of the last test yields 1. Call this diagram B. This diagram is ordered, with the edge predicate before equality and lexicographic ordering over arguments. Now, given any input graph G for Hamiltonian path, we represent it as an interpretation I and produce (B, I, 1) as the input for FODD Evaluation. Clearly, G has a Hamiltonian path iff MAP_B(I) = 1.
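The reduction can be checked on small instances by brute force. The following sketch (our own, for illustration) evaluates the verifier semantics, max over all valuations of x_1, …, x_n of the 0/1 path check, directly, without building the diagram:

```python
from itertools import product

def hamiltonian_via_fodd(num_nodes, edges):
    """Brute-force analogue of evaluating the Hamiltonian path verifier
    FODD: max over all valuations of the 0/1 leaf reached."""
    best = 0
    for valuation in product(range(num_nodes), repeat=num_nodes):
        # Equality tests: all chosen nodes must be distinct.
        ok = all(valuation[i] != valuation[j]
                 for i in range(num_nodes) for j in range(i + 1, num_nodes))
        # Edge tests: consecutive chosen nodes must be connected.
        ok = ok and all((valuation[i], valuation[i + 1]) in edges
                        for i in range(num_nodes - 1))
        best = max(best, 1 if ok else 0)
    return best
```

The returned value is 1 exactly when the directed graph has a Hamiltonian path, matching the claim MAP_B(I) = 1.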

The other results for FODDs rely on the existence of small models:

Lemma 13

For any FODD B with n variables and c constants, if MAP_B(I) = v for some I, then there is an interpretation I′ with at most n + c objects such that MAP_B(I′) = v.

Proof. Let B, I and v be as in the statement. Then there is a valuation ζ such that MAP_B(I, ζ) reaches a leaf valued v in B. Let I′ be an interpretation including only the objects that are used in the path traversed by ζ (and the objects assigned to constants), where the truth value of any predicate over arguments from these objects agrees with I. We have that I′ has at most n + c objects, ζ is a suitable valuation for I′, and MAP_B(I′, ζ) = v. In addition, no other valuation over I′ leads to a value larger than v because, if it did, the same value would be achievable in I. Hence, MAP_B(I′) = v.

Theorem 14

FODD Satisfiability is NP-complete.

Proof. For membership we can guess an interpretation I, which by the previous lemma can be small, and guess a valuation ζ for that interpretation. We return Yes if and only if MAP_B(I, ζ) = 1.

We show hardness with a reduction from 3SAT. Let f be an arbitrary 3CNF formula. We create a new FODD variable v(i, j) for each literal in the CNF, so that v(i, j) corresponds to the jth literal in the ith clause.

Our FODD will have three portions connected in a chain. The first portion checks that a unary predicate p in the interpretation can be used to simulate Boolean assignments. To achieve this, we first ensure that the interpretation has at least two different objects, referred to by variables t and f. We then use a small block that ensures that the truth value of p(t) is not equal to that of p(f). As a result, t and f correspond to the true and false logical values. This is shown in Figure 3.

The second portion ensures that if v(i, j) and v(k, l) correspond to the same Boolean variable then they map to the same object. For every variable x_i we create a shadow FODD variable w_i and equate it to all the v(j, l) that correspond to x_i. We call this sequence of equalities a consistency block. For example, consider the CNF

 (x1 ∨ ¬x2 ∨ x4) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x3 ∨ ¬x4)

where the corresponding FODD variables are

 v(1,1),v(1,2),v(1,3),   v(2,1),v(2,2),v(2,3),   v(3,1),v(3,2),v(3,3).

The first block, corresponding to x1, ensures that w1, v(1,1), v(2,1), and v(3,1) are all assigned the same value. In addition to testing that the values are equal, the block tests that each variable gets bound to the same object as y1 or y2. The only possible way to not get a 0 in these blocks is to ensure that each variable in the block has the same value and that it is equal to either y1 or y2. Figure 4 shows the consistency blocks for our example.

The third portion tracks the structure of f to guarantee the same truth value in the FODD. To follow the structure of f, we build a block for each clause and chain these blocks together. Each block has 3 nodes corresponding to the 3 literals in the clause. In particular, if the jth literal in the ith clause is positive then at the node p(v(i, j)) the true edge (literal satisfied; call this success) continues to the next clause, and the false edge (literal failed) continues to the next literal. For a negative literal the true and false directions are swapped. The fail exit of the 3rd literal is attached to 0. Clause blocks have one entry and one exit and they are chained together. The success exit of the last clause is connected to the leaf 1. The only way to reach a value of 1 is if every clause block was satisfied by the valuation of the v(i, j). Figure 5 illustrates the clause blocks for our example.

Each of the portions, including the clause blocks, has one entry and one exit and we chain them together to get the diagram B_f. For a valuation to be mapped to 1 it must succeed in all three portions. We claim that f is satisfiable if and only if there is some interpretation I such that MAP_{B_f}(I) = 1.

Consider first the case where f is satisfiable. We introduce the interpretation I that has two objects, a and b, where p(a) = true and p(b) = false. Let s be a satisfying assignment for f and let ζ be a valuation for B_f on I where ζ(y1) = a, ζ(y2) = b, and if s maps x_i to 1 then w_i and its block are mapped to a and otherwise the block is mapped to b. Here, ζ succeeds in all blocks, implying that B_f(I, ζ) = 1 and therefore MAP_{B_f}(I) = 1.

Consider next the case where MAP_{B_f}(I) = 1 for some I, and let ζ be such that B_f(I, ζ) = 1. Then we claim that ζ identifies a satisfying assignment. First, since ζ succeeds in the first block we identify two objects that correspond via p to truth values; without loss of generality assume that p(ζ(y1)) is true. Then success in the second portion implies that we can identify an assignment to the Boolean variables: if the ith block is assigned to ζ(y1) we let x_i = 1 and otherwise x_i = 0. Finally, success in the third portion implies that the clauses in f are satisfied by the assignment to the x_i's. This completes the correctness proof.

Finally we address node ordering in the diagram. The only violation of ordering is the use of p in the first block. Otherwise, we have all equalities above p, a variable ordering placing y1 and y2 before the shadow and literal variables, and lexicographic ordering within a group. Now because our diagram forms one chain of blocks leading to a single sink leaf with value 1 we can move the three p nodes of the first portion to the bottom of the diagram in Figure 4. This does not change the map value for any valuation and thus does not affect correctness. We therefore conclude that B_f is consistently sorted and f is satisfiable iff MAP_{B_f}(I) = 1 for some I.
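The correspondence established in the proof can be simulated in a few lines: since the consistency blocks force all occurrences of a Boolean variable to be mapped to the same object, the map value of the constructed diagram on the two-object interpretation is 1 exactly when the CNF is satisfiable. This is an illustrative sketch with our own names, not an implementation of FODDs.

```python
from itertools import product

def reduction_map(clauses):
    """Map value of the reduction diagram on the interpretation with
    objects 'a', 'b' and p(a)=True, p(b)=False.  Clauses are lists of
    nonzero integers; literal k stands for x_k and -k for its negation.
    Thanks to the consistency blocks, it suffices to range over
    assignments of objects to the shadow variables."""
    p = {'a': True, 'b': False}
    variables = sorted({abs(l) for c in clauses for l in c})
    for objs in product('ab', repeat=len(variables)):
        asg = dict(zip(variables, objs))
        # clause blocks: each clause needs one satisfied literal
        if all(any(p[asg[abs(l)]] == (l > 0) for l in c) for c in clauses):
            return 1
    return 0
```

For the satisfiable example above, `reduction_map([[1, -2, 4], [-1, 2, 3], [1, 3, -4]])` yields 1, while a contradictory input such as `[[1, 1, 1], [-1, -1, -1]]` yields 0.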

This proof illustrates the differences in arguments needed for FODDs and GFODDs vs. first order logic. For the latter, the reduction can use the sentence ∃x1, …, xn f_p, where f_p replaces each literal over x_i in f with the corresponding literal over p(x_i), to show the hardness result. However, this cannot be easily represented as a FODD because the literals appearing in the clauses will violate predicate order and, if we try to reorder the nodes from a naive FODD encoding, the result might be exponentially larger. An alternative formulation can use a separate variable for each literal occurrence to avoid the problem with predicate order. However, similar ordering issues now arise for the arguments. Our reduction introduces additional variables as well as the variable consistency gadget to get around these issues. The same structure of reduction from 3SAT instances and their QBF generalizations will be used in the results for GFODD.

Theorem 15

FODD Equivalence and FODD Edge Removal are Π_2^p-complete.

Proof. Since Edge Removal is a special case of Equivalence it suffices to show membership for Equivalence and hardness for Edge Removal. The hardness result is given in two stages; we first present a reduction which does not respect the constraint on node ordering and Edge Removal structure, and then show how to fix the construction to respect these restrictions.

Membership in Π_2^p:

First observe that, by Lemma 13, if the diagrams are not equivalent then there is a small interpretation that serves as a witness for the difference. Using this fact, we can show that non-equivalence is in Σ_2^p. Given (B1, B2), we guess an interpretation I of the appropriate size, and then appeal to an oracle for FODD Evaluation to calculate MAP_{B1}(I) and MAP_{B2}(I). Using these values we return Yes or No accordingly. To calculate the map values, let B be one of these diagrams, and let the leaf values of the diagram be L1 < L2 < … < Lm. We make m calls to FODD Evaluation with (B, I, L_i) as input. MAP_B(I) is the largest value L_i on which the oracle returns Yes. If a witness for non-equivalence exists then this process can discover it and say No, and otherwise it will always say Yes. Therefore non-equivalence is in Σ_2^p, and equivalence is in Π_2^p.
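The oracle protocol can be sketched as follows, assuming an Evaluation oracle that answers whether the map value on the fixed interpretation is at least a given threshold (the function name and oracle interface are illustrative):

```python
def map_via_evaluation_oracle(leaf_values, evaluation_oracle):
    """Recover the exact map value of a diagram on a fixed
    interpretation using only threshold queries: since the map value
    is always one of the leaf values, it equals the largest leaf value
    V for which the oracle answers Yes to 'is the map at least V?'."""
    return max(v for v in leaf_values if evaluation_oracle(v))
```

With m leaf values this uses exactly m oracle calls, matching the argument above.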

Reduction basics:

To show hardness, consider the problem of deciding arrowing from the Ramsey theory of graphs [31]. Given two graphs F and G, we say that F includes an embedding of G if there is a 1-1 mapping η from nodes of G to nodes of F, such that for every edge (u, v) of G, the edge (η(u), η(v)) is in F. We say that F includes an isomorphic embedding of G if, in addition, η satisfies that for every edge (u, v) not in G, the edge (η(u), η(v)) is not in F.

We say that F arrows (G, H), denoted F → (G, H), if for every 2-color edge-coloring of F into colors red and blue, the red subgraph of F includes an embedding of G or the blue subgraph of F includes an embedding of H.

The arrowing problem is as follows: Given graphs F, G, H as input, return Yes iff F arrows (G, H). This problem was shown to be Π_2^p-complete by [31]. We reduce this problem to FODD Equivalence. The signature includes equality and two arity-2 predicates E and C, where E captures the edge relation of the main graph F and C is a coloring of all possible edges such that when C(x, y) is true the edge is colored red and when it is false the edge is colored blue.
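For intuition, arrowing can be decided by brute force over all 2-colorings; this is exponential, but it makes the quantifier structure (for all colorings, there exists an embedding) explicit. The helper names below are ours.

```python
from itertools import combinations, permutations, product

def embeds(sub_edges, g_edges, f_nodes):
    """Does the subgraph given by sub_edges (a set of frozensets over
    f_nodes) include an embedding of g: a 1-1 map of g's nodes into
    f_nodes sending every edge of g into sub_edges?"""
    g_nodes = sorted({u for e in g_edges for u in e})
    return any(all(frozenset((m[u], m[v])) in sub_edges for u, v in g_edges)
               for m in (dict(zip(g_nodes, tgt))
                         for tgt in permutations(f_nodes, len(g_nodes))))

def arrows(f_nodes, f_edges, g_edges, h_edges):
    """F -> (G, H): every red/blue edge-coloring of F contains a red
    embedding of G or a blue embedding of H (brute force)."""
    edges = [frozenset(e) for e in f_edges]
    for bits in product([0, 1], repeat=len(edges)):
        red = {e for e, b in zip(edges, bits) if b}
        blue = {e for e, b in zip(edges, bits) if not b}
        if not (embeds(red, g_edges, f_nodes) or
                embeds(blue, h_edges, f_nodes)):
            return False
    return True
```

For example, K6 → (K3, K3) by Ramsey's theorem while K5 does not (color a 5-cycle red and its complement blue).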

The main construction:

To transform arrowing into an instance of FODD equivalence we build two FODDs with binary leaves. The first FODD B1 is satisfied iff the interpretation includes an isomorphic embedding of F in its edge relation E. The second FODD B2 is satisfied iff the same condition holds and, in addition, the coloring defined by C has a red embedding of G or a blue embedding of H. Note that, due to the 1-1 requirement, the interpretation must have at least as many objects as there are nodes in F. We illustrate the construction using the example input in Figure 6. Here the input graphs F, G, H are a positive instance of arrowing.

To build a FODD which verifies that the interpretation has an isomorphic embedding of F, we map each node of F to a variable in the FODD and test that each node has its correct neighbors. We first build a “node mapping” gadget that makes sure that each variable in the FODD is mapped to a different object in the interpretation. This is done by following a path of inequalities, where off-path edges go to 0 and the final exit continues to the next portion. This gadget, for our example graph F with 5 nodes, is shown in Figure 7. To test isomorphism to F we test the neighbors of each node in sequence to verify that edges exist iff they are in F. The FODD fragment in Figure 8 shows how this can be tested for one vertex in the example. If the edge is present in the graph F we continue left (using the true branch) to the next neighbor and if the edge is not in F we continue to the right child (the false branch). Edges off this path are directed to the zero leaf. The endpoint of the path will connect to the next portion of the FODD. This construction can be done for each node and the fragments can be connected together to yield the F verifier. This is illustrated in Figure 9. Finally, the diagram B1 is built by connecting the F verifier at the bottom of the node mapping gadget, and replacing the bottom node of the verifier with a leaf valued 1. We refer to this diagram as the “complete F verifier” below. This construction can be done in polynomial time for any graph F. It should be clear from the construction that MAP_{B1}(I) = 1 iff I includes an isomorphic embedding of F in its edge relation E. In addition, the verifier diagram is ordered where we have equalities above E, and where variables are ordered lexicographically.
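The condition verified by this first diagram can be stated and checked directly: there is a 1-1 map of F's nodes into the domain under which pairs are E-connected in the interpretation exactly when they are connected in F. A brute-force sketch with our own names:

```python
from itertools import combinations, permutations

def has_isomorphic_embedding(domain, e_rel, f_nodes, f_edges):
    """Is there a 1-1 map m of F's nodes into the domain such that
    (u, v) is an edge of F  <=>  E(m(u), m(v)) holds?  This is the
    condition the complete verifier diagram tests."""
    E = {frozenset(e) for e in e_rel}
    F = {frozenset(e) for e in f_edges}
    for tgt in permutations(domain, len(f_nodes)):
        m = dict(zip(f_nodes, tgt))
        if all((frozenset((u, v)) in F) == (frozenset((m[u], m[v])) in E)
               for u, v in combinations(f_nodes, 2)):
            return True
    return False
```

Note the iff in the test: a triangle embeds isomorphically into K4, but a 3-node path does not embed isomorphically into K3 because the missing edge is present in the host.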

The second diagram B2 includes the complete F verifier and additional FODD fragments that are described next to capture the conditions on G and H respectively. In order to verify the embedding of the colored subgraph G we first define a node mapping capturing the mapping of G nodes into F nodes, and then verify that the required edges exist and that they have the correct color. The FODD fragment in Figure 10 shows how we can select a node mapping for one vertex of G. This fragment returns 0 unless the vertex is mapped to one of the nodes that are identified in the F portion. As depicted in Figure 11, this can be repeated for all the nodes in G, verifying that each node in G is mapped to a node in F. Next we need to verify that the mapping is one to one. This can be done by using a path of inequalities between the variables referring to nodes of G. This FODD fragment is given in Figure 12. For correctness, we need to chain the two tests together, but this will violate node ordering. We therefore interleave the tests, putting the uniqueness tests for a variable exactly after the equalities selecting its value. This change is possible because each such block has exactly one exit point. The resulting diagram, for our running example, is shown in Figure 13.

To complete the embedding test, we need to check that the edges are preserved and that they have the correct color. We do this by first checking that the edges corresponding to edges of G are in E. We can do this using a left going path testing each edge in turn, where we test both E(x, y) and E(y, x) to account for the fact that the graph is undirected. (The test of both directions of the edge is not necessary, because a different portion of the diagram already verifies that the embedding of F is undirected, but we include it here to simplify the argument.) This is illustrated on the left hand side of Figure 14. Note that, because we are testing for an embedding (i.e., not for an isomorphic embedding) we test only for the edges in G and do not need to verify nonexistence of the edges not in G (it just happens here that G is a clique so this is not visible in the example). The same FODD structure is repeated with predicate C replacing E to verify that the edges of G are colored red, as shown on the right of Figure 14.

A similar construction with node mapping, edge verifier, and color verifier can be used for H. The node mapping construction is identical. Figure 15 shows the edge and color verifiers. The only difference in construction is that the color verifier tests that the C atoms are false, to capture the color blue, and therefore has a mirror structure to the one verifying the edges. Note that in this case H is not a complete graph and we are indeed only testing for the edges in H. This construction can be done in polynomial time for any G and H.

Finally we connect the three portions together to obtain B2 as follows. The final output of the complete F verifier is connected to the root of the G verifier. The final output of the G verifier is connected to 1. The zero leaf of the G verifier is removed and its incoming edges are instead connected to the root of the H verifier. The final output of the H verifier is connected to 1. Therefore, there are exactly two edges leading to the 1 leaf in this diagram, corresponding to the positive outputs of the G and H verifiers. Figure 16 shows an overview of the two FODDs, B1 and B2, generated by the reduction.

The diagrams B1 and B2 are not consistent with any sorting order over node labels, and thus we need to modify them to get a consistent ordering. We show below how this can be done with only a linear growth in the size of the diagrams and without changing the semantics of B1 and B2. Before presenting this transformation we show that F → (G, H) iff B1 and B2 are equivalent.

Correctness of the construction:

Consider the case when F → (G, H), that is, for every 2-color edge-coloring of F there is a red G or a blue H. We show that the two FODDs are equivalent by way of contradiction. Assume that B1 and B2 are not equivalent and let I be any witness to this fact. Now, MAP_{B2}(I) = 1 implies MAP_{B1}(I) = 1 because the only paths to 1 in B2 go through a copy of B1. Therefore, for the assumed witness I, it must be the case that MAP_{B1}(I) = 1 and MAP_{B2}(I) = 0.

By construction, MAP_{B1}(I) = 1 implies that I has an isomorphic embedding of F. Because F → (G, H), any coloring of that embedding, including the coloring captured by C in I, has a red G or a blue H. Assume that the embedding in I has a red G. Then we can construct the appropriate node mapping in a valuation to show that MAP_{B2}(I) = 1, contradicting the assumption. The same argument handles the case when the embedding has a blue H.

Consider the case when F does not arrow (G, H). Then there is a valid 2-color edge-coloring of F which does not have a red G and does not have a blue H. Construct the corresponding interpretation I that represents F and this edge-coloring. We claim that MAP_{B1}(I) = 1 and MAP_{B2}(I) = 0. The first fact follows by mapping the nodes in F to the variables that represent them. Now if MAP_{B2}(I) = 1 then B2(I, ζ) = 1 for some valuation ζ and we can trace the path that ζ traverses in B2. This path together with ζ can be used to identify either a red G or a blue H in I and therefore in the corresponding coloring of F. This contradicts the assumption that the coloring is a witness for non-arrowing.

Fixing the construction to handle ordering and edge removal special case:

We next consider the node ordering in B1 and B2. The diagram B1 is sorted, where the predicate order puts equalities above E and arguments are lexicographically ordered. For B2 we consider the sub-block structure of the construction. Expanding each of the sub-blocks of B2 in Figure 16 we observe that B2 has the structure shown in Figure 17. We further observe that each block is internally sorted, but blocks of equalities, E and C are interleaved. By analyzing this structure we see that the blocks can be reordered at the cost of duplicating some portions, yielding the structure in Figure 18. It is easy to see that B2 is satisfied in I if and only if the reordered diagram is satisfied in I. The diagrams yield the same value for any valuation which does not exit to 0 due to a bad node mapping for G or H. Thus the original version might yield 1 (e.g., through the G path) when the reordered diagram yields 0 on such a valuation (e.g., via the H equalities). But in such a case there is another valuation that is identical except that it modifies the bad node mapping (the equalities) and that yields 1 for the new diagram. The final diagram is consistent with the predicate ordering, and with a variable ordering in which the F variables precede the G and H variables.

Finally, we further change B1 by adding the equality blocks of G and H to the construction, so that the modified B1 is as shown in Figure 19. Using the same argument as above one can see that this does not change the semantics of B1. Moreover, with this change B2 can be obtained from B1 by one edge removal (of the edge below the F verifier in B1) so that the reduction holds for this more restricted case.

As mentioned above, FODD Value is defined similarly to FODD Satisfiability but requires a more stringent condition, asking for an interpretation whose map value is exactly V. The next result shows that this difference is important and FODD Value is one level higher in the hierarchy.

Theorem 16

FODD Value is Σ_2^p-complete.

Proof. The algorithm showing membership is as follows. We first observe that by Lemma 13 we can restrict our attention to small interpretations. Given input B and V we guess an interpretation I of the appropriate size. We then make two calls to an oracle for FODD Evaluation. Let V' be the least leaf value greater than V, or one greater than the max leaf value if V is the maximum. We query the oracle for FODD Evaluation on (B, I, V) and (B, I, V') and return Yes iff the oracle returns Yes on the first and No on the second. The algorithm returns Yes iff there is an interpretation with value exactly V.

For hardness we present a reduction from non-Equivalence of FODDs with binary leaves, which was shown to be Σ_2^p-hard in Theorem 15. We are given B1 and B2 as input for FODD non-Equivalence, where B1 and B2 are standardized apart so that they stand for disjoint sets of variables. We construct the diagram B = B1 + B2, where the sum can be calculated directly on the graph representation of B1 and B2 using the apply procedure of [33] (see Figure 2). Because the variable sets of B1 and B2 are disjoint, the diagram B has the following behavior for any interpretation I: if MAP_{B1}(I) = 1 and MAP_{B2}(I) = 1 then MAP_B(I) = 2; otherwise if exactly one of them evaluates to 1 then MAP_B(I) = 1; and otherwise MAP_B(I) = 0. We produce (B, 1) as input for FODD Value.

Now, if B1 and B2 are not equivalent then there is an interpretation I such that their maps are different, and without loss of generality we may assume MAP_{B1}(I) = 1 and MAP_{B2}(I) = 0. As argued above, in this case MAP_B(I) = 1 as needed. For the other direction let I be such that MAP_B(I) = 1. Then, again using the argument above, we have MAP_{B1}(I) = 1 and MAP_{B2}(I) = 0 or vice versa, and the diagrams are not equivalent.
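The key property of the summed diagram is that, with disjoint variable sets, the max over joint valuations decomposes into a sum of maxes. A small numeric sketch (our names, simulating only the value behavior):

```python
from itertools import product

def map_of_sum(values1, values2):
    """values1 and values2 are the values reachable by valuations of
    the two diagrams on a fixed interpretation.  Because the diagrams
    are standardized apart, a valuation for the sum diagram chooses
    independently from each side, so the joint max is the sum of the
    individual maxes."""
    return max(a + b for a, b in product(values1, values2))
```

With binary leaves this returns 2 when both diagrams attain 1, 1 when exactly one does, and 0 otherwise, matching the case analysis in the proof.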

4 The Complexity of Reasoning with GFODD

In this section we analyze the computational problems for GFODD. We start with some observations on a notion of “complements” for GFODDs. Let B be a GFODD associated with the ordered list of variables x1, …, xn and aggregation list op1, …, opn where each op_i is max or min. Let B' (with respect to maximum value r) be the diagram corresponding to B where we change leaf values and aggregation operators as follows: Let r be any value greater than or equal to the max leaf value in B. Any leaf value v is replaced with r − v. Each aggregation operator op_i is replaced with op'_i, where if op_i is max then op'_i is min and vice versa.

Theorem 17

Let B be a GFODD with max and min aggregation and maximum leaf value v_max, and let r ≥ v_max. For any interpretation I, MAP_{B'}(I) = r − MAP_B(I).

Proof. By the construction of B', for any valuation ζ, we have that B'(I, ζ) = r − B(I, ζ). Considering the aggregation process, note that max_ζ (r − B(I, ζ)) = r − min_ζ B(I, ζ), and symmetrically with max and min swapped. Now using this fact, we can argue by induction backward from the innermost (rightmost) aggregation that for any prefix of variables x1, …, xk, valuation ζ1 for these variables, and remaining variables x_{k+1}, …, x_n, aggregating B' over extensions of ζ1 with op'_{k+1}, …, op'_n yields r minus the result of aggregating B over the same extensions with op_{k+1}, …, op_n. When the prefix is empty we get the statement of the theorem.
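The duality used in the induction, max of (r − v) equals r minus the min of v and vice versa, can be checked numerically on a random leaf table for a two-variable max-min diagram (a sketch, not GFODD machinery):

```python
import random

random.seed(1)
n, r = 4, 7                                   # r is at least every leaf value
leaf = [[random.randint(0, r) for _ in range(n)] for _ in range(n)]

# map of B: aggregate max over x, then min over y
map_b = max(min(leaf[x][y] for y in range(n)) for x in range(n))
# map of the complement B': leaves become r - v, operators are swapped
map_b_comp = min(max(r - leaf[x][y] for y in range(n)) for x in range(n))
```

For every such table, `map_b_comp` equals `r - map_b`, which is the statement of the theorem for this aggregation pattern.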

Notice that for diagrams with binary leaves and r = 1 this yields MAP_{B'}(I) = 1 − MAP_B(I), that is, negation. As an immediate application we get the following:

Corollary 18

The complexity of GFODD Equivalence for max-min alternating GFODDs is the same as the complexity of GFODD Equivalence for min-max alternating GFODDs.

Proof. By Theorem 17, two diagrams are equivalent if and only if their complements are equivalent, where we can use the maximum among the leaf values of the two diagrams as r.

Corollary 19

The equivalence problem for min-aggregation GFODDs is Π_2^p-complete.

We can now turn to analysis of the computational problems. Evaluation is similar to the FODD case but the hardness proof is more involved due to the interaction between quantifier order and node ordering in the diagram.

Theorem 20

GFODD Evaluation for max-min alternating GFODDs with k alternating blocks is Σ_k^p-complete. GFODD Evaluation for min-max alternating GFODDs with k alternating blocks is Π_k^p-complete.

Proof. We prove membership by induction on k. Since the inductive step includes diagrams that do not satisfy the sorting order we show that the claim holds in this more general case. Consider the input (B, I, V). For the base case, k = 1, we guess a valuation ζ, calculate B(I, ζ), and return Yes iff B(I, ζ) ≥ V. In the max case, if the true value is at least V then we say Yes for some ζ, and if the true value is less than V then B(I, ζ) < V for all ζ and therefore we always say No. Thus the problem is in NP. In the min case, if the true value is at least V then all ζ yield Yes, and if the true value is less than V then some ζ yields No. Thus the problem is in co-NP.

For the inductive step assume that the claim holds for k − 1 and consider the input with an interpretation I, value bound V and a max-min alternating diagram B with aggregation op1 x1, …, opk xk, where in order to simplify the notation each x_i may be a single variable or a set of variables and we use the boldface notation to denote this fact.

Now for each tuple a of domain objects in I (which is appropriate for the number of variables in x1) let diagram B_a be B with x1 substituted by a and the outermost aggregation operator removed. Clearly B_a is appropriate for evaluation on I and by the inductive hypothesis we can appeal to a Σ_{k−1}^p oracle to solve GFODD Evaluation on (B_a, I, V). Our algorithm guesses a tuple a, calculates B_a, appeals to the oracle, and returns the same answer. Now, if the true value is less than V then by definition any call to the oracle would yield No and we correctly answer No. If the true value is at least V then for some a the oracle would return Yes. Therefore we nondeterministically return Yes and our algorithm is in NP relative to a Σ_{k−1}^p oracle, that is, in Σ_k^p. The argument for the other aggregation prefix is symmetric and argued in the same manner, yielding an algorithm in Π_k^p.
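The recursion in this membership argument is exactly the aggregation semantics: substitute each domain object for the outermost variable and combine the recursively computed values with the outermost operator. A compact sketch (our names, ignoring diagram structure and treating the diagram as a leaf-value function):

```python
def gfodd_map(ops, objects, leaf, prefix=()):
    """Aggregated map value: ops is the list of aggregation operators
    (each max or min, outermost first), objects is the domain, and leaf
    maps a full valuation (tuple of objects) to a leaf value."""
    if not ops:
        return leaf(prefix)                  # all variables are fixed
    return ops[0](gfodd_map(ops[1:], objects, leaf, prefix + (o,))
                  for o in objects)
```

For example, `gfodd_map([max, min], range(3), lambda v: abs(v[0] - v[1]))` computes max over x of min over y of |x − y|.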

To show hardness we give a reduction from the validity problem for quantified 3CNF formulas with k alternating quantifier blocks. Given a quantified 3CNF Boolean formula we transform this into a GFODD B and interpretation I so that the following claim holds:

Claim 1: B evaluates to 1 in I if and only if the quantified Boolean formula is satisfied.

This claim establishes the theorem. The reduction uses a similar structure to the one used for FODD Satisfiability with two main differences. First, because here we consider evaluation and we can control I, we do not need to test for an embedding of a Boolean predicate in I; that is, the first portion in that construction is not needed. On the other hand, the construction and proof are more involved because of the alternation of quantifiers.

The interpretation I has two objects, a and b, where p(a) = true and p(b) = false.

Let the QBF formula be Q1 x1, …, Qm xm f, where each Q_i is a quantifier ∀ or ∃ and the quantifiers come in alternating blocks. As above, we start the construction by creating a set of “shadow variables” w_i corresponding to each QBF variable x_i. The corresponding GFODD variables include w_i and the set of v(j, l) that refer to x_i or ¬x_i in the QBF. We define W_i to be the set of variables in the block corresponding to x_i and associate these variables with an aggregation operator QA_i, where if Q_i is a ∀ then QA_i is min and if Q_i is an ∃ then QA_i is max. Using these variables, we build GFODD fragments we call variable consistency blocks. For each x_i, this gadget ensures that if two literals in the QBF refer to the same variable then the corresponding variables in the GFODD will have the same value. If this holds then a valuation goes through the block and continues to the next block. Otherwise, it exits to a default value, where for max blocks the default value is 0, and for min blocks the default value is 1.

Consider the expression

 ∀x1 ∃x2 ∀x3 ∃x4 (x1 ∨ ¬x2 ∨ x4) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x3 ∨ ¬x4),

which has the same clauses as in the previous proof but where we have changed the quantification. Figure 20 shows the variable consistency blocks for this example. Since w1, v(1,1), v(2,1), and v(3,1) refer to x1 we need to ensure that when they are evaluated they are evaluated consistently, and this is done by the first block. Because x1 is a ∀ variable the default output value is 1. The consistency blocks are chained in the same order as in the quantification of the QBF. Once every consistency block has been checked, we continue to the clause blocks whose construction is exactly the same as in the previous proof (see Figure 5). This yields the diagram B where we set the aggregation function to be QA1 W1, …, QAm Wm. Note that if the QBF has k alternating blocks of quantifiers then B has aggregation depth k. The output of the reduction is the pair (B, I). The diagram is ordered with equalities above p and variables ordered lexicographically.
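The quantifier-to-aggregation correspondence can be sanity-checked by evaluating such formulas directly, reading ∀ as min and ∃ as max over the two truth values (an illustrative sketch with our own encoding):

```python
def eval_qbf(quantifiers, clauses, asg=()):
    """Truth value (1/0) of a prenex QBF.  quantifiers is a string over
    {'A', 'E'} giving the quantifier of x1..xm in order; clauses use
    integer literals (k for x_k, -k for its negation).  'A' aggregates
    with min and 'E' with max, mirroring the GFODD aggregation."""
    if len(asg) == len(quantifiers):
        a = {i + 1: b for i, b in enumerate(asg)}
        return int(all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses))
    op = min if quantifiers[len(asg)] == 'A' else max
    return op(eval_qbf(quantifiers, clauses, asg + (b,))
              for b in (False, True))
```

For instance, `eval_qbf('AEAE', [[1, -2, 4], [-1, 2, 3], [1, 3, -4]])` evaluates the displayed example formula.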

We next show that Claim 1 holds. We start by showing a correspondence between assignments to the Boolean formula and valuations mapping variables to objects. Let s be a Boolean assignment. If s assigns x_i to 1 then the valuation ζ_s maps the entire block W_i to a. Otherwise ζ_s maps the block to b. It is then easy to see that for all s, ζ_s satisfies the consistency blocks and B(I, ζ_s) = 1 if and only if s satisfies f. This, however, does not complete the proof because the aggregation must also consider valuations that do not arise as maps of assignments s.

We divide the set of valuations to the GFODD into two groups. The first group of legal valuations, called Group 1 below, is the set of valuations ζ_s that are consistent with some assignment s.

The second group, Group 2, includes valuations that do not arise as some ζ_s and therefore violate at least one of the consistency blocks. Let ζ be such a valuation and let W_i be the first block from the left whose constraint is violated. By the construction of B, in particular the order of equality blocks along paths in the GFODD, we have that the evaluation of the diagram on ζ “exits” to a default value on the first violation. Therefore, if Q_i is a ∀ then B(I, ζ) = 1 and if Q_i is an ∃ then B(I, ζ) = 0.

We can now show the correspondence in truth values. Consider any partition of the blocks into a prefix x1, …, xj and remainder x_{j+1}, …, x_m, and any Boolean assignment v to the prefix blocks. We claim that for all such partitions

 Q_{j+1} x_{j+1}, …, Q_m x_m  f((x1, …, xj) = v, (x_{j+1}, …, x_m)) = QA_{j+1} w_{j+1}, …, QA_m w_m  MAP_B(I, [