Constraint satisfaction problems are computational problems where, informally, the input consists of a finite set of variables and a finite set of constraints imposed on those variables; the task is to decide whether there is an assignment of values to the variables such that all the constraints are simultaneously satisfied. Set constraint satisfaction problems are special constraint satisfaction problems where the values are sets, and the constraints might, for instance, force that one set includes another set, or that one set is disjoint from another set. The constraints might also be ternary, such as the constraint that the intersection of two sets $X$ and $Y$ is contained in a third set $Z$, in symbols $X \cap Y \subseteq Z$.
To systematically study the computational complexity of constraint satisfaction problems, it has turned out to be a fruitful approach to consider constraint satisfaction problems where the set of allowed constraints is formed from a fixed set of relations over a common domain. This way of parametrizing the constraint satisfaction problem by a constraint language has led to many strong algorithmic results [BD06, IMM10, BK09a, BK07, BK09b], and to many powerful hardness conditions for large classes of constraint satisfaction problems [Sch78, BKJ05, Bul03, Bul06, BK09b].
A set constraint language is a set of relations where the common domain is the set of all subsets of the natural numbers; moreover, we require that each relation can be defined by a Boolean combination of equations over the signature $\cap$, $\cup$, ${}^c$, $0$, and $1$, which are function symbols for intersection, union, and complementation, and constants for the empty and the full set, respectively. Details of the formal definition and many examples of set constraint languages can be found in Section 3. The choice of $\mathbb{N}$ is just for notational convenience; as we will see, we could have selected any infinite set for our purposes. In the following, a set constraint satisfaction problem (set CSP) is a problem of the form $\mathrm{CSP}(\Gamma)$ for a set constraint language $\Gamma$. It has been shown by Marriott and Odersky [MO96] that all set CSPs are contained in NP; they also showed that the largest set constraint language, which consists of all relations that can be defined as described above, has an NP-hard set CSP.
Drakengren and Jonsson [DJ98] initiated the search for set CSPs that can be solved in polynomial time. They showed that $\mathrm{CSP}(\{\subseteq, \parallel, \neq\})$ can be solved in polynomial time, where
$X \subseteq Y$ holds iff $X$ is a subset of or equal to $Y$;
$X \parallel Y$ holds iff $X$ and $Y$ are disjoint sets; and
$X \neq Y$ holds iff $X$ and $Y$ are distinct sets.
They also showed that $\mathrm{CSP}(\Gamma)$ can be solved in polynomial time if all relations in $\Gamma$ can be defined by formulas of the form
$$x \subseteq y \vee z_1 \neq z_1' \vee \dots \vee z_k \neq z_k'$$
or of the form
$$x \parallel y \vee z_1 \neq z_1' \vee \dots \vee z_k \neq z_k',$$
where $x, y, z_1, z_1', \dots, z_k, z_k'$ are not necessarily distinct variables. We will call the set of all relations that can be defined in this way Drakengren and Jonsson’s set constraint language. It is easy to see that the algorithm they present runs in time quadratic in the size of the input.
On the other hand, Drakengren and Jonsson [DJ98] showed that if a set constraint language contains the relations defined by formulas of the form
the problem is NP-hard.
Contributions and Outline.
We present a significant extension of Drakengren and Jonsson’s set constraint language (Section 3) whose CSP can still be solved in time quadratic in the input size (Section 6); we call this set constraint language . Unlike Drakengren and Jonsson’s set constraint language, our language also contains the ternary relation defined by $X \cap Y \subseteq Z$, a relation of particular interest in description logics – we will discuss this below. Moreover, we show that any further extension of our language contains a finite sublanguage with an NP-hard set CSP (Section 7); the proof uses concepts from model theory and universal algebra. In this sense, we present a maximal tractable class of set constraint satisfaction problems.
Our algorithm is based on the concept of independence in constraint languages, which was discovered several times independently in the 1990s [Kou01, JB98, MO96] – see also [BJR02, CJJK00]; however, we apply this concept twice in a novel, nested way, which leads to a two-level resolution procedure that can be implemented to run in quadratic time. The technique we use to prove the correctness of the algorithm is also an important contribution of our paper, and we believe that a similar approach can be applied in many other contexts; our technique is inspired by the already mentioned connection to universal algebra.
Application Areas and Related Literature
Set Constraints for Programming Languages.
Set constraints find applications in program analysis; here, a set constraint is of the form $s \subseteq t$, where $s$ and $t$ are set expressions. Examples of set expressions are the constant denoting the empty set, set-valued variables, and unions and intersections of sets, but also expressions of the form $f(s_1, \dots, s_k)$, where $f$ is a function symbol and $s_1, \dots, s_k$ are again set expressions. Unfortunately, the worst-case complexity of most of the reasoning tasks considered in this setting is very high, often EXPTIME-hard; see [Aik94] for a survey. More recently, it has been shown that the quantifier-free combination of set constraints (without function symbols) and cardinality constraints (quantifier-free Presburger arithmetic) has a satisfiability problem in NP [KR07]. This logic (called QFBAPA) is interesting for program verification [KNR06].
Tractable Description Logics.
Description logics are a family of knowledge representation formalisms that can be used to formalize and reason with concept definitions. The computational complexity of most of the tasks that have been studied for the various formalisms is usually quite high. However, in recent years a series of description logics has been discovered (including various extensions and fragments [KM02, Baa03, BBL05, KRH06]) in which crucial tasks such as entailment, concept satisfiability, and knowledge base satisfiability can be decided in polynomial time.
Two of the basic assertions that can be made in these description logics are $C \sqcap D \sqsubseteq \bot$ (there is no $C$ that is also a $D$) and $C \sqcap D \sqsubseteq E$ (every $C$ that is a $D$ is also an $E$), for concept names $C$, $D$, and $E$. These are set constraints, and the latter has not been treated in the framework of Drakengren and Jonsson. None of the description logics with a tractable knowledge base satisfiability problem contains all set constraints.
Several spatial reasoning formalisms (like RCC-5 and RCC-8) are closely related to set constraint satisfaction problems. These formalisms allow reasoning about relations between regions; in the fundamental formalism RCC-5 (see e.g. [JD97]), one can think of a region as a non-empty set, and the possible (binary) relationships are containment, disjointness, equality, overlap, and disjunctive combinations thereof. Thus, the exclusion of the empty set is the most prominent difference between the set constraint languages studied by Drakengren and Jonsson in [DJ98] (which are contained in the class of set constraint languages considered here) and RCC-5 and its fragments.
2 Constraint Satisfaction Problems
To use existing terminology from logic and model theory, it will be convenient to formalize constraint languages as (relational) structures (see e.g. [Hod93]). A structure $\mathfrak{A}$ is a tuple $(A; f_1, f_2, \dots; R_1, R_2, \dots)$ where $A$ is a set (the domain of $\mathfrak{A}$), each $f_i$ is a function from $A^{k_i}$ to $A$ (where $k_i$ is called the arity of $f_i$), and each $R_j$ is a relation over $A$, i.e., a subset of $A^{m_j}$ (where $m_j$ is called the arity of $R_j$). For each function $f_i$ we assume that there is a function symbol, which we also denote by $f_i$, and for each relation $R_j$ we have a relation symbol, which we also denote by $R_j$. Constant symbols will be treated as $0$-ary function symbols. The set of all relation and function symbols of a structure $\mathfrak{A}$ is called the signature of $\mathfrak{A}$; if the signature is $\tau$, we also say that $\mathfrak{A}$ is a $\tau$-structure. If the signature of $\mathfrak{A}$ contains only relation symbols and no function symbols, we also say that $\mathfrak{A}$ is a relational structure. In the context of constraint satisfaction, relational structures are also called constraint languages, and a constraint language $\Gamma$ is called a sublanguage (or reduct) of a constraint language $\Gamma'$ if the relations of $\Gamma$ are a subset of the relations of $\Gamma'$ (in which case $\Gamma'$ is called an expansion of $\Gamma$).
Let $\Gamma$ be a relational structure with domain $D$ and a finite signature $\tau$. The constraint satisfaction problem for $\Gamma$ is the following computational problem, also denoted by $\mathrm{CSP}(\Gamma)$: given a finite set of variables $V$ and a conjunction $\Phi$ of atomic formulas of the form $R(x_1, \dots, x_k)$, where $R \in \tau$ and $x_1, \dots, x_k \in V$, does there exist an assignment $\alpha \colon V \to D$ such that for every constraint $R(x_1, \dots, x_k)$ in the input we have $(\alpha(x_1), \dots, \alpha(x_k)) \in R$?
The mapping $\alpha$ is also called a solution to the instance $\Phi$ of $\mathrm{CSP}(\Gamma)$, and the conjuncts of $\Phi$ are called constraints. Note that we only introduce constraint satisfaction problems for finite constraint languages, i.e., relational structures with a finite relational signature.
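To make the definition concrete, here is a small brute-force solver (our own illustration, not an algorithm from the paper; note that it only terminates for finite domains, whereas set CSPs have an infinite domain, and the variable names and relations below are ours):

```python
from itertools import product

def solve_csp(variables, domain, constraints):
    """Search for a solution of a CSP instance, directly following the
    definition: a constraint is a pair (R, xs), where R is a set of
    tuples (the relation) and xs is the tuple of variables it is imposed
    on.  Returns a satisfying assignment as a dict, or None."""
    for values in product(domain, repeat=len(variables)):
        alpha = dict(zip(variables, values))
        if all(tuple(alpha[x] for x in xs) in R for R, xs in constraints):
            return alpha
    return None

# Tiny set-CSP-style example over the subsets of {0, 1}:
D = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
subseteq = {(a, b) for a in D for b in D if a <= b}
disjoint = {(a, b) for a in D for b in D if not (a & b)}
solution = solve_csp(["x", "y", "z"], D,
                     [(subseteq, ("x", "y")), (disjoint, ("y", "z"))])
```

The solver only illustrates the semantics of an instance; the rest of the paper is about avoiding exactly this kind of exhaustive search.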
3 Set Constraint Languages
In this section, we give formal definitions of set constraint languages. Consider the structure with domain $\mathcal{P}(\mathbb{N})$, the set of all subsets of the natural numbers, and with the signature $\{\cap, \cup, {}^c, 0, 1\}$, where
$\cap$ is a binary function symbol that denotes intersection, i.e., the function that maps $(X, Y)$ to $X \cap Y = \{n \in \mathbb{N} \mid n \in X \text{ and } n \in Y\}$;
$\cup$ is a binary function symbol for union, i.e., the function that maps $(X, Y)$ to $X \cup Y = \{n \in \mathbb{N} \mid n \in X \text{ or } n \in Y\}$;
${}^c$ is a unary function symbol for complementation, i.e., ${}^c$ denotes the function that maps $X$ to $\mathbb{N} \setminus X$;
$0$ and $1$ are constants (treated as $0$-ary function symbols) denoting the empty set $\emptyset$ and the full set $\mathbb{N}$, respectively.
Sometimes, we simply write $\cap$ for the function denoted by $\cap$, and $\cup$ for the function denoted by $\cup$, i.e., we do not distinguish between a function symbol and the respective function. We use the symbols $0$ and $1$, and not the symbols $\emptyset$ and $\mathbb{N}$, to prevent confusion with meta-mathematical usages of $\emptyset$ and $\mathbb{N}$ in the text.
A set constraint language is a relational structure whose relations each have a quantifier-free first-order definition in this structure. We always allow equality in first-order formulas, and the equality symbol is always interpreted as the true equality relation on the domain of the structure.
The ternary relation $\{(X, Y, Z) \mid X \cap Y \subseteq Z\}$ has the quantifier-free first-order definition $x^c \cup y^c \cup z = 1$ (equivalently, $x \cap y \cap z^c = 0$).
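The containment relation just discussed can be rewritten into a single equation by a routine Boolean-algebra manipulation (the variable names here are ours):
$$X \cap Y \subseteq Z \;\Longleftrightarrow\; X \cap Y \cap Z^c = 0 \;\Longleftrightarrow\; X^c \cup Y^c \cup Z = 1,$$
where the last step takes complements on both sides and applies De Morgan's laws.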
Theorem 2 (follows from Proposition 5.8 in [MO96]).
Let $\Gamma$ be a set constraint language with a finite signature. Then $\mathrm{CSP}(\Gamma)$ is in NP.
It is well-known that this structure is a Boolean algebra, with
$0$ playing the role of false, and $1$ playing the role of true;
${}^c$ playing the role of negation;
and $\cap$ and $\cup$ playing the role of $\wedge$ and $\vee$, respectively.
To not confuse logical connectives with the connectives of Boolean algebras, we always use the symbols $\cap$, $\cup$, and ${}^c$ instead of the function symbols $\wedge$, $\vee$, and $\neg$ that are commonly used for Boolean algebras. To facilitate the notation, we also write $x \cap y$ instead of $\cap(x, y)$, and $x^c$ instead of ${}^c(x)$.
We assume that all terms over the functional signature are written in (inner) conjunctive normal form (CNF), i.e., as $t = C_1 \cap \dots \cap C_m$, where each $C_i$ is of the form $\ell_1 \cup \dots \cup \ell_n$ and each literal $\ell_j$ is either of the form $x$ or of the form $x^c$ for a variable $x$ (every term over this signature can be re-written into an equivalent term of this form, using the usual laws of Boolean algebras [Boo47]). We allow the special case $m = 0$ (in which case $t$ becomes $1$), and the special case $n = 0$ (in which case $C_i$ becomes $0$). We refer to $C_i$ as an (inner) clause of $t$, and to $\ell_j$ as an (inner) literal of $C_i$. We say that a set of inner clauses is satisfiable if there exists an assignment of subsets of $\mathbb{N}$ to the variables such that for all inner clauses, the union of the evaluations of all literals equals $\mathbb{N}$ (this is the case if and only if the formula asserting $C = 1$ for every inner clause $C$ has a satisfying assignment).
We assume that all quantifier-free first-order formulas over the signature are written in (outer) conjunctive normal form (CNF), i.e., as $\phi = \psi_1 \wedge \dots \wedge \psi_p$, where each $\psi_i$ is a disjunction of literals that are either of the form $t = 1$ (a positive (outer) literal) or of the form $t \neq 1$ (a negative (outer) literal) for a term $t$ in inner CNF. Again, it is well-known and easy to see that for every quantifier-free formula we can find a formula in this form which is equivalent to it in every Boolean algebra. We refer to $\psi_i$ as an (outer) clause of $\phi$, and to its disjuncts as (outer) literals of $\psi_i$. Whenever convenient, we identify $\phi$ with its set of clauses.
4 Set Constraints
To define set constraints, we need to introduce a series of important functions defined on the set of subsets of natural numbers.
be the function that maps to the set ;
be the function that maps to the set of finite non-empty subsets of ;
be a bijection between $\mathbb{N}$ and the set of finite non-empty subsets of $\mathbb{N}$ (since both sets are countable, such a bijection exists);
be defined by
be the function defined by .
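Such a bijection between $\mathbb{N}$ and the finite non-empty subsets of $\mathbb{N}$ can be made completely explicit; a minimal sketch (this particular encoding is our own choice, not necessarily the one intended in the text): map $n$ to the set of positions of the 1-bits in the binary expansion of $n + 1$.

```python
def to_set(n):
    """Map a natural number n >= 0 to a finite non-empty subset of N:
    the positions of the 1-bits in the binary expansion of n + 1."""
    m, s, pos = n + 1, set(), 0
    while m:
        if m & 1:
            s.add(pos)
        m >>= 1
        pos += 1
    return frozenset(s)

def from_set(s):
    """Inverse mapping: sum the powers of two indexed by s, minus one."""
    return sum(1 << i for i in s) - 1
```

Every positive integer has a unique non-empty set of 1-bit positions, so this is indeed a bijection.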
Let be a function, and be a relation. Then we say that preserves if the following holds: for all we have that if for all . If does not preserve , we also say that violates . We say that strongly preserves if for all we have that if and only if for all . If is a first-order formula that defines a relation over , and preserves (strongly preserves) , then we also say that preserves (strongly preserves) . Finally, if is a function, we say that preserves (strongly preserves) if it preserves (strongly preserves) the graph of , i.e., the relation .
Note that if an injective function preserves a function, then it must also strongly preserve it.
The set of all relations with a quantifier-free first-order definition over that are preserved by the operation is denoted by .
Proposition 12 shows that this class has a large subclass, called Horn-Horn, which has an intuitive syntactic description. In Section 5 we also present many examples of relations that are in the class and of relations that are not. But first, we establish some properties of the functions introduced above.
The mapping is an isomorphism between and .
The mapping can be inverted by the mapping that sends to . It is straightforward to verify that strongly preserves , , , , . ∎
We write as an abbreviation for .
The function has the following properties.
strongly preserves , , and , and
for such that , not , and not , we have that .
We verify the properties one by one. Since is bijective, if and only if and have the same finite subsets. This is the case if and only if , and hence is injective. Thus, to prove that strongly preserves , , and , it suffices to check that preserves , , and .
Since is bijective, we have that equals the set of all finite subsets of , and hence , which shows that preserves . We also compute .
Next, we verify that for all we have . Let be arbitrary. We have if and only if . By definition of and since is a finite subset of , this is the case if and only if . This is the case if and only if , which concludes the proof that preserves .
We verify that if , not , and not , then . First observe that for all with we have since preserves . This implies that . Since and , there are such that , , , . Then we have that , but . Hence, , but . This shows that . ∎
Note that in particular preserves , , and . Moreover, : this follows from preservation of , since , and therefore , which is equivalent to the inclusion above. Both and strongly preserve , , and , and therefore also strongly preserves , , and .
5 Horn-Horn Set Constraints
A large and important subclass of set constraints is the class of Horn-Horn set constraints.
A quantifier-free first-order formula is called Horn-Horn if
every outer clause is outer Horn, i.e., contains at most one positive outer literal, and
every inner clause of a positive outer literal is inner Horn, i.e., contains at most one positive inner literal.
A relation is called
outer Horn if it can be defined over by a conjunction of outer Horn clauses;
inner Horn if it can be defined over by a formula of the form where each is inner Horn;
Horn-Horn if it can be defined by a Horn-Horn formula over .
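The two conditions above can be phrased as a short syntactic check; a sketch under an encoding of our own choosing (a formula is a list of outer clauses; an outer clause is a list of pairs (sign, term); a term is a list of inner clauses; an inner clause is a list of pairs (variable, is_positive)):

```python
def is_horn_horn(formula):
    """Check the Horn-Horn condition for a formula in outer CNF:
    at most one positive literal per outer clause, and every inner
    clause of a positive outer literal has at most one positive
    inner literal.  (Encoding is ours, chosen for illustration.)"""
    for outer_clause in formula:
        positives = [term for sign, term in outer_clause if sign]
        if len(positives) > 1:
            return False                      # outer clause not Horn
        for term in positives:
            for inner_clause in term:
                if sum(1 for _, pos in inner_clause if pos) > 1:
                    return False              # inner clause not Horn
    return True
```

Note that inner clauses of negative outer literals are not inspected, matching the definition, which only restricts the inner clauses of positive outer literals.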
The following is a direct consequence of the fact that isomorphisms between and preserve Horn formulas over ; since the simple proof is instructive for what follows, we give it here for the special case that is relevant to us.
Outer Horn relations are preserved by .
Let be a conjunction of outer Horn clauses with variables . Let be an outer clause of . Let be two assignments that satisfy this clause. Let be given by . Suppose that satisfies for all . Since is injective we must have that for both and for , and therefore neither assignment satisfies the negative literals. Hence, and must satisfy . Since is an isomorphism between and , it preserves in particular , and hence also satisfies . ∎
Inner Horn relations are strongly preserved by .
Observe that is equivalent to , which is strongly preserved by since strongly preserves . This clearly implies the statement. ∎
Let , where . Then the following are equivalent.
there exists an such that .
there exists an such that .
For , we have that if and only if .
For the implication from to , suppose that there is for every an such that . Let be . Then for each , we have that . To see this, first observe that . Therefore, for all . We conclude that .
Every Horn-Horn relation is preserved by and ; in particular, it is from .
Suppose that has a Horn-Horn definition over with variables . Since is in particular outer Horn, it is preserved by by Fact 9.
Now we verify that is preserved by . Let be an assignment that satisfies . That is, satisfies at least one literal in each outer clause of . It suffices to show that the assignment defined by satisfies the same outer literal. Suppose first that the outer literal is positive; because is Horn-Horn, it is of the form or of the form , which is preserved by by Lemma 11.
Now, suppose that the outer literal is negative, that is, of the form for some . We will treat the case , the other case being similar. Suppose for contradiction that . By Lemma 11, there exists an such that . But then we have in particular that , in contradiction to the assumption that satisfies . ∎
The disjointness relation $\parallel$ is Horn-Horn: it has the definition $x^c \cup y^c = 1$ (equivalently, $x \cap y = 0$).
The inequality relation $\neq$ is inner Horn: it has the definition $(x \cup y^c \neq 1) \vee (x^c \cup y \neq 1)$.
Using the previous example, the relation can easily be seen to be Horn-Horn.
The ternary relation $\{(X, Y, Z) \mid X \cap Y \subseteq Z\}$, which we have encountered above, has the Horn-Horn definition $x^c \cup y^c \cup z = 1$.
Examples of relations that are clearly not Horn-Horn: is violated by , and is violated by .
is clearly not Horn-Horn. However, the relation defined by the formula is from : if and are from that relation, then neither nor . By Proposition 7, satisfies the formula.
There is no equivalent Horn-Horn formula, since the formula is not preserved by .
The formula is not Horn-Horn. However, it is preserved by and by : the reason is that one of its clauses has the negative literal , and the conjuncts and . Therefore, for every tuple the tuple satisfies and is in as well. By Fact 9, is preserved by .
In this case, the authors suspect that there is no equivalent Horn-Horn formula. More generally, it is an open problem whether there exist formulas that are preserved by and , but that are not equivalent to a Horn-Horn formula.
Drakengren and Jonsson’s set constraint language only contains Horn-Horn relations.
For inclusion $\subseteq$, disjointness $\parallel$, and inequality $\neq$, this has been discussed in the examples above. Horn-Horn is preserved under adding additional outer disequality literals to the outer clauses, so all relations considered in Drakengren and Jonsson’s language are Horn-Horn. ∎
We prepare now some results that can be viewed as a partial converse of Proposition 12.
A quantifier-free first-order formula (in the syntactic form described at the end of Section 3) is called reduced if every formula obtained from it by removing an outer literal is not equivalent to it over the structure from Section 3.
Every quantifier-free formula is over equivalent to a reduced formula.
It is clear that every quantifier-free formula can be written as a formula in CNF in the form we have discussed after Theorem 2. We now successively remove outer literals as long as this results in an equivalent formula. ∎
We first prove the converse of Fact 9.
Let be a reduced formula that is preserved by . Then each outer clause of is Horn.
Let be the set of variables of . Assume for contradiction that contains an outer clause with two positive literals, and . If we remove the literal from its clause , the resulting formula is inequivalent to , and hence there is an assignment that satisfies none of the literals of except for . Similarly, there is an assignment that satisfies none of the literals of except for . By injectivity of , and since strongly preserves , and , the assignment defined by does not satisfy the two literals and . Since strongly preserves , , , none of the other literals in is satisfied by those mappings as well, in contradiction to the assumption that is preserved by . ∎
Let be a set of variables, and be a mapping. Then a function from of the form is called a core assignment.
For every quantifier-free formula there exists a formula such that all inner clauses are inner Horn, and such that and have the same satisfying core assignments. If is preserved by , then the set of all satisfying core assignments of is closed under .
Suppose that has an outer clause with a positive outer literal such that contains an inner clause that is not Horn, i.e., . Then we replace the outer literal in by literals where is obtained from by replacing by .
We claim that the resulting formula has the same set of satisfying core assignments. Observe that , and hence implies . An arbitrary satisfying assignment of satisfies either one of the positive outer literals , in which case that observation shows that it also satisfies , or it satisfies one of the other outer literals of , in which case it also satisfies this literal in . Hence, implies . Conversely, let be a satisfying core assignment of . If satisfies a literal from other than , then it also satisfies this literal in , and satisfies . Otherwise, must satisfy , and hence . Since is a core assignment, Lemma 11 implies that there exists an such that . So satisfies .
Suppose that has an outer clause with a negative outer literal such that contains an inner clause that is not Horn, i.e., . Then we replace the clause in by clauses , …, where is obtained from by replacing with .
We claim that the resulting formula has the same set of satisfying core assignments. Observe that implies that , for every . The observation shows that an arbitrary assignment of is also an assignment of . Conversely, let be a satisfying core assignment of . If satisfies one of the other literals of other than , then satisfies . Otherwise, must satisfy for all , and by Lemma 11 we have that also satisfies .
We perform these replacements until we obtain a formula where all inner clauses are Horn; this formula satisfies the requirements of the first statement of the lemma.
To prove the second statement, let be two satisfying core assignments of . Since and have the same satisfying core assignments, and also satisfy . Then the mapping given by is a core assignment, and because preserves , the mapping satisfies . Since and have the same core assignments, is also a satisfying assignment of , which proves the statement. ∎
A quantifier-free first-order formula (in the syntactic form described at the end of Section 3) is called strongly reduced if every formula obtained from it by removing an outer literal does not have the same set of satisfying core assignments.
Let be a strongly reduced formula all of whose inner clauses are Horn. If the set of satisfying core assignments of is closed under , then is Horn-Horn.
Let be the set of variables of . It suffices to show that all clauses of are outer Horn. Assume for contradiction that contains an outer clause with two positive literals, and . If we remove the literal from its clause , the resulting formula has strictly fewer satisfying core assignments; this shows the existence of a core assignment that satisfies none of the literals of except for . Similarly, there exists a core assignment that satisfies none of the literals of except for . By assumption, the inner clauses of and are Horn. We claim that the assignment defined by does not satisfy the clause . Since strongly preserves inner Horn clauses, we have that does not satisfy . For the same reasons does not satisfy any other literals in ; this contradicts the assumption that the satisfying core assignments for are preserved by . ∎
Let be a finite set constraint language from . Then can be reduced in linear time to the problem of finding a satisfying assignment for a given set of Horn-Horn clauses.
Let be an instance of , and let be the set of variables that appear in . For each constraint from , let be the definition of over . By Lemma 18, there exists a formula that has the same satisfying core assignments as and where all inner clauses are Horn; moreover, since is preserved by , the lemma asserts that the set of all satisfying core assignments of is preserved by . We can assume without loss of generality that is strongly reduced; this can be seen similarly to Lemma 15. By Proposition 20, the formula is Horn-Horn.
Let be the set of all Horn-Horn clauses of formulas obtained from constraints in in the described manner. We claim that is a satisfiable instance of if and only if is satisfiable. This follows from the fact that for each constraint in , the formulas and have the same satisfying core assignments, and that both and are preserved by (for this follows from Proposition 12), so in particular by the function . ∎
Note that in Proposition 21 we reduce satisfiability for to satisfiability for a proper subclass of Horn-Horn set constraints: while for general Horn-Horn set constraints we allow that inner clauses of negative outer literals are not Horn, the reduction only produces Horn-Horn clauses where all inner clauses are Horn.
6 Algorithm for Horn-Horn Set Constraints
We present an algorithm that takes as input a set of Horn-Horn clauses and decides satisfiability of over in time quadratic in the length of the input. By Proposition 21, this section will therefore conclude the proof that is tractable when all relations in are from .
We first discuss an important sub-routine of our algorithm, which we call the inner resolution algorithm. As in the case of Boolean positive unit resolution [DG84], one can implement the procedure Inner-Res so that it runs in linear time in the input size.
Let be a finite set of inner Horn clauses. Then the following are equivalent.
is satisfiable over .
Inner-Res from Figure 1 accepts.
has a solution whose image is contained in .
It is obvious that is unsatisfiable when Inner-Res rejects; in fact, for all inner clauses derived by Inner-Res from , the formula is logically implied by . Conversely, if the algorithm accepts then we can set all eliminated variables to and all remaining variables to , which satisfies all clauses: in the removed clauses the positive literal is satisfied, and in the remaining clauses we have at least one negative literal at the final stage of the algorithm, and all clauses with negative literals at the final stage of the algorithm are satisfied. ∎
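Positive unit resolution for inner Horn clauses can be sketched as follows (our own rendition; the paper's Figure 1 with the actual Inner-Res procedure is not reproduced in this text, and the clause encoding is an assumption of ours). A clause is a pair (pos, negs), standing for the inner clause that is the union of the variable pos (if present) and the complements of the variables in negs:

```python
def inner_res(clauses):
    """Positive unit resolution for inner Horn clauses.  Each clause is
    a pair (pos, negs): pos is the unique positive literal (a variable)
    or None, negs a frozenset of complemented variables.  Returns an
    assignment {var: 1 or 0} witnessing satisfiability (1 stands for
    the full set, 0 for the empty set), or None if the clauses are
    unsatisfiable, i.e. the procedure 'rejects'."""
    forced, changed = set(), True       # variables forced to 1
    while changed:
        changed = False
        for pos, negs in clauses:
            if negs <= forced and pos not in forced:
                if pos is None:         # derived the empty clause: reject
                    return None
                forced.add(pos)
                changed = True
    variables = {v for _, negs in clauses for v in negs} | \
                {pos for pos, _ in clauses if pos is not None}
    return {v: (1 if v in forced else 0) for v in variables}
```

The witness assignment maps the variables forced by propagation to the full set and all remaining variables to the empty set, in the spirit of the construction in the proof above. A linear-time implementation would additionally index clauses by variable, as in [DG84].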
The proof of the previous lemma shows that is satisfiable over if and only if is satisfiable over the two-element Boolean algebra. As we will see in the following, this holds more generally (and not only for inner Horn clauses). The following should be well-known, and can be shown with the same proof as given in [Kop89] for the weaker Proposition 2.19 there. We repeat the proof here for the convenience of the reader (for definitions of the notions appearing in the proof, however, we refer to [Kop89]).
Let be terms over . Then the following are equivalent:
is satisfiable over the two-element Boolean algebra;
is satisfiable over all Boolean algebras;
is satisfiable in a Boolean algebra.
Obviously, 1 implies 2, and 2 implies 3. For 3 implies 1, assume that has a satisfying assignment in some Boolean algebra . Let be the element denoted by in under this assignment. It is well-known that every element of a Boolean algebra is contained in an ultrafilter (see e.g. Corollary 2.17 in [Kop89]). So let be an ultrafilter of that contains , and let
be the characteristic function of this ultrafilter. Then is a homomorphism from to the two-element Boolean algebra that maps to ; thus is satisfiable over . ∎
Let be a finite set of inner Horn clauses. Then Inner-Res rejects if and only if implies that over .
implies that if and only if is unsatisfiable over . By Fact 23, this is the case if and only if is unsatisfiable over the 2-element Boolean algebra, which is the case if and only if is unsatisfiable over the two-element Boolean algebra. As we have seen in Lemma 22, this in turn holds if and only if Inner-Res rejects. ∎
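Putting the two levels together, the overall shape of the procedure can be sketched as follows (a reconstruction of ours based on the surrounding discussion, not the paper's Figure 2; the clause encoding and the exact termination conditions are assumptions, and the unit-propagation routine is repeated here for self-containment). A negative outer literal is removed once the positive unit clauses are found, via inner resolution, to imply the corresponding term:

```python
def unit_propagate(clauses):
    """Positive unit resolution on Horn clauses (pos, negs); pos is a
    variable or None, negs a frozenset of variables.  Returns the set
    of variables forced to 1, or None if the empty clause is derived."""
    forced, changed = set(), True
    while changed:
        changed = False
        for pos, negs in clauses:
            if negs <= forced and pos not in forced:
                if pos is None:
                    return None
                forced.add(pos)
                changed = True
    return forced

def entails(units, clause):
    """Do the unit clauses imply the Horn clause?  Refutation test."""
    pos, negs = clause
    assumption = [(n, frozenset()) for n in negs]       # negated vars true
    if pos is not None:
        assumption.append((None, frozenset({pos})))     # positive var false
    return unit_propagate(list(units) + assumption) is None

def outer_res(outer_clauses):
    """Two-level resolution sketch.  An outer clause is (pos, negs):
    pos is a term (tuple of inner Horn clauses) or None, negs a list
    of terms (the negated outer literals).  Returns True iff the
    clause set is satisfiable."""
    clauses = [(p, list(ns)) for p, ns in outer_clauses]
    while True:
        if any(p is None and not ns for p, ns in clauses):
            return False                    # empty outer clause derived
        units = [c for p, ns in clauses if p is not None and not ns
                 for c in p]
        if unit_propagate(units) is None:
            return False                    # positive units inconsistent
        progress = False
        for i, (p, ns) in enumerate(clauses):
            kept = [t for t in ns
                    if not all(entails(units, c) for c in t)]
            if len(kept) < len(ns):         # drop implied negative literals
                clauses[i] = (p, kept)
                progress = True
        if not progress:
            return True
```

For instance, positive units expressing $x \subseteq y$ and $y \subseteq z$ (each a single inner clause) force the removal of a negative literal expressing $\neg(x \subseteq z)$, after which the corresponding outer clause becomes empty and the procedure rejects.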
The algorithm ‘Outer-Res’ in Figure 2 decides satisfiability for sets of Horn-Horn clauses in quadratic time.
We first argue that if the algorithm rejects , then indeed has no solution. First note that during the entire run of the algorithm, the set of clauses has the same satisfying tuples (i.e., the corresponding formulas remain equivalent): observe that only negative literals get removed from clauses, and that a negative literal only gets removed from a clause when Inner-Res rejects for each inner clause of . By Lemma 24, if Inner-Res rejects then implies that . Hence, the positive unit clauses imply that and therefore the literal can be removed from the clause without changing the set of satisfying tuples. The algorithm rejects if either Inner-Res rejects or if it derives the empty clause; in both cases it is clear that is not satisfiable.
Thus, it suffices to construct a solution when the algorithm accepts. Let be the set of all inner clauses of terms from positive unit clauses at the final stage, when the algorithm accepts. For each remaining negative outer literal and each remaining inner clause of there exists an assignment from that satisfies : otherwise, by Lemma 24, the inner resolution algorithm would have rejected , and would have removed the inner clause from . Let be an enumeration of all remaining inner clauses that appear in all remaining negative outer literals.
Write for the -ary operation defined by