A modification of the CSP algorithm for infinite languages

The Constraint Satisfaction Problem over finite sets is known to be NP-complete in general, but certain restrictions on the constraint language can ensure tractability. It was proved that if a constraint language has a weak near-unanimity polymorphism then the corresponding constraint satisfaction problem is tractable; otherwise it is NP-complete. In this paper we present a modification of the algorithm that works in polynomial time even for infinite constraint languages.


1 Introduction

Formally, the Constraint Satisfaction Problem (CSP) is defined as a triple (X, D, C), where

• X = {x1, …, xn} is a set of variables,

• D = {D1, …, Dn} is a set of the respective domains,

• C is a set of constraints,

where each variable xi can take on values in the nonempty domain Di, and every constraint C ∈ C is a pair (s, ρ), where s is a tuple of variables of length k, called the constraint scope, and ρ is a k-ary relation on the corresponding domains, called the constraint relation.

The question is whether there exists a solution to (X, D, C), that is, a mapping that assigns a value from Di to every variable xi such that for each constraint the image of the constraint scope is a member of the constraint relation.
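For concreteness, the definitions above can be written out in a few lines of code. The brute-force solver below only illustrates what an instance and a solution are (names such as `solve_csp` are ours); it is of course exponential and is not the polynomial-time algorithm discussed in this paper.

```python
from itertools import product

def is_solution(assignment, constraints):
    """A mapping is a solution when the image of every constraint scope
    belongs to the constraint relation."""
    return all(tuple(assignment[v] for v in scope) in relation
               for scope, relation in constraints)

def solve_csp(variables, domains, constraints):
    """Brute-force search over all assignments (exponential; illustration
    of the definition only)."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if is_solution(assignment, constraints):
            return assignment
    return None
```

For example, with the disequality relation on {0, 1} as the only constraint relation, a path of two disequalities is satisfiable while a triangle of three is not.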

In this paper we consider only CSP over finite domains. The general CSP is known to be NP-complete [11, 13]; however, certain restrictions on the allowed form of the constraints involved may ensure tractability (solvability in polynomial time) [6, 8, 9, 10, 3, 5]. Below we provide a formalization of this idea.

To simplify the presentation we assume that all the domains are subsets of a finite set A. By RA we denote the set of all finitary relations on A, that is, subsets of A^m for some m. Then all constraint relations can be viewed as relations from RA.

For a set of relations Γ ⊆ RA, by CSP(Γ) we denote the Constraint Satisfaction Problem where all the constraint relations are from Γ. The set Γ is called a constraint language. Another way to formalize the Constraint Satisfaction Problem is via conjunctive formulas. Every m-ary relation on A can be viewed as a predicate, that is, a mapping A^m → {0, 1}. Suppose Γ ⊆ RA; then CSP(Γ) is the following decision problem: given a formula

 ρ1(x1,1,…,x1,n1)∧⋯∧ρs(xs,1,…,xs,ns)

where ρi ∈ Γ for every i, decide whether this formula is satisfiable.

It is well known that many combinatorial problems can be expressed as CSP(Γ) for some constraint language Γ. Moreover, for some sets Γ the corresponding decision problem can be solved in polynomial time, while for others it is NP-complete. It was conjectured that CSP(Γ) is either in P or NP-complete [7].

An operation f : A^m → A is called idempotent if f(x, …, x) = x. An operation f is called a weak near-unanimity operation (WNU) if f(y,x,…,x) = f(x,y,x,…,x) = ⋯ = f(x,…,x,y).
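An operation f is idempotent when f(x, …, x) = x, and a WNU when f(y,x,…,x) = f(x,y,x,…,x) = ⋯ = f(x,…,x,y); over a finite domain both identities can be verified by direct enumeration. A minimal sketch, with illustrative names:

```python
from itertools import product

def is_idempotent(f, arity, domain):
    """Check f(x, ..., x) = x for every x in the domain."""
    return all(f(*([x] * arity)) == x for x in domain)

def is_wnu(f, arity, domain):
    """Check f(y,x,...,x) = f(x,y,x,...,x) = ... = f(x,...,x,y)
    for all x, y in the domain."""
    for x, y in product(domain, repeat=2):
        values = {f(*(y if j == i else x for j in range(arity)))
                  for i in range(arity)}
        if len(values) != 1:
            return False
    return True
```

For instance, the ternary majority operation on {0, 1} is an idempotent WNU, while a projection is idempotent but not a WNU.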

In this paper we present a modification of the algorithm from [18] that works in polynomial time for infinite constraint languages, and thereby prove the CSP dichotomy conjecture for infinite constraint languages.

Theorem 1.1.

Suppose Γ ⊆ RA is a set of relations. Then CSP(Γ) can be solved in polynomial time if there exists a WNU preserving Γ; CSP(Γ) is NP-complete otherwise.

Note that the algorithm presented in [4] also works for infinite constraint languages.

The paper is organized as follows. In Section 2 we give all necessary definitions, in Section 3 we explain the algorithm starting with the new ideas. In Section 4 we prove statements that show the correctness of the algorithm.

2 Definitions

A set of operations is called a clone if it is closed under composition and contains all projections. For a set of operations M, by Clo(M) we denote the clone generated by M.

An idempotent WNU w is called special if x ∘ (x ∘ y) = x ∘ y, where x ∘ y = w(x, …, x, y). It is not hard to show that for any idempotent WNU on a finite set there exists a special WNU (see Lemma 4.7 in [12]).

A relation ρ ⊆ A1 × ⋯ × An is called subdirect if for every i the projection of ρ onto the i-th coordinate is Ai. For a relation ρ, by proj_{i1,…,is}(ρ) we denote the projection of ρ onto the coordinates i1, …, is.

Algebras. An algebra is a pair (A; F), where A is a finite set, called the universe, and F is a family of operations on A, called the basic operations of the algebra. In this paper we always assume that we have a special WNU w preserving all constraint relations. Therefore, every domain Di can be viewed as an algebra (Di; w). By Clo(A) we denote the clone generated by all basic operations of the algebra A.

Congruences. An equivalence relation σ on the universe of an algebra is called a congruence if it is preserved by every operation of the algebra. A congruence (an equivalence relation) on A is called proper if it is not equal to the full relation A × A. We use the standard universal-algebraic notions of term operation, subalgebra, factor algebra, and product of algebras, see [2]. We say that R is a subdirect subalgebra of A1 × ⋯ × An if R is a subdirect relation in A1 × ⋯ × An.

We say that the i-th variable of a relation ρ is compatible with an equivalence relation σ if (a1,…,an) ∈ ρ and (ai, bi) ∈ σ implies (a1,…,ai−1,bi,ai+1,…,an) ∈ ρ. We say that a relation is compatible with σ if every variable of this relation is compatible with σ.

For a relation ρ and a coordinate i, by Con(ρ, i) we denote the binary relation σ(y, y′) defined by

 ∃x1…∃xi−1∃xi+1…∃xn ρ(x1,…,xi−1,y,xi+1,…,xn)∧ρ(x1,…,xi−1,y′,xi+1,…,xn).

For a constraint C = ((xi1,…,xis), ρ), by Con(C, xij) we denote Con(ρ, j).

Essential and critical relations. A relation is called essential if it cannot be represented as a conjunction of relations of smaller arities. It is easy to see that any relation can be represented as a conjunction of essential relations. A relation ρ ⊆ A1 × ⋯ × An is called critical if it cannot be represented as an intersection of other subalgebras of A1 × ⋯ × An and it has no dummy variables.

A tuple (a1,…,an) is called essential for an n-ary relation ρ if (a1,…,an) ∉ ρ and there exist b1,…,bn such that (a1,…,ai−1,bi,ai+1,…,an) ∈ ρ for every i.

It is not hard to check the following lemma.

Lemma 2.1.

[15, 16, 17] Suppose ρ ⊆ A1 × ⋯ × An is a relation preserved by a WNU operation. Then the following conditions are equivalent:

1. ρ is an essential relation;

2. there exists an essential tuple for ρ.
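Lemma 2.1 suggests a direct (exponential-time, illustration-only) essentiality test: search for a tuple outside the relation that enters it whenever any single coordinate is changed. A sketch, with illustrative names:

```python
from itertools import product

def find_essential_tuple(relation, domains):
    """Search for a tuple outside the relation that can enter it by
    changing any single coordinate; by Lemma 2.1 such a tuple exists
    iff the relation is essential. Brute force, for illustration only."""
    n = len(domains)
    for t in product(*domains):
        if t in relation:
            continue
        if all(any(t[:i] + (b,) + t[i + 1:] in relation for b in domains[i])
               for i in range(n)):
            return t
    return None
```

For example, the ternary "not all ones" relation on {0, 1} has the essential tuple (1, 1, 1), while a conjunction of an equality with a free coordinate has none.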

Parallelogram property. We say that a relation ρ has the parallelogram property if any permutation of its variables, followed by grouping the variables into two blocks, gives a relation ρ′ satisfying

 ∀α1,β1,α2,β2:(α1β2,β1α2,β1β2∈ρ′⇒α1α2∈ρ′).

We say that the i-th variable of a relation ρ is rectangular if for every (ai, bi) ∈ Con(ρ, i) and (a1,…,an) ∈ ρ we have (a1,…,ai−1,bi,ai+1,…,an) ∈ ρ. We say that a relation is rectangular if all of its variables are rectangular. The following facts are easy to see: if the i-th variable of ρ is rectangular then Con(ρ, i) is a congruence; if a relation has the parallelogram property then it is rectangular.
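Since the parallelogram property quantifies over all ways of splitting the coordinates into two blocks, it can be checked directly on small relations. A brute-force sketch (names are ours):

```python
from itertools import combinations

def has_parallelogram_property(relation, arity):
    """For every split of the coordinates into two nonempty blocks, check
    that a1b2, b1a2, b1b2 in the relation imply a1a2 in the relation."""
    coords = range(arity)
    for size in range(1, arity):
        for block in combinations(coords, size):
            rest = [i for i in coords if i not in block]
            pairs = {(tuple(t[i] for i in block), tuple(t[i] for i in rest))
                     for t in relation}
            for a1, b2 in pairs:
                for b1, a2 in pairs:
                    if (b1, b2) in pairs and (a1, a2) not in pairs:
                        return False
    return True
```

For instance, the affine relation x + y + z = 0 (mod 2) has the parallelogram property, while the binary OR relation does not.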

Polynomially complete algebras. An algebra (A; F) is called polynomially complete (PC) if the clone generated by F and all constants on A is the clone of all operations on A.

Linear algebra. A finite algebra (A; w) is called linear if it is isomorphic to (Zp1 × ⋯ × Zpk; x1 + ⋯ + xm) for prime numbers p1, …, pk. It is not hard to show that for every algebra (A; w) there exists a minimal congruence σ, called the minimal linear congruence, such that (A; w)/σ is linear.

Absorption. Let B be a subalgebra of A. We say that B absorbs A if there exists t ∈ Clo(A) such that t(B, …, B, A, B, …, B) ⊆ B for any position of A. In this case we also say that B is an absorbing subuniverse of A. If the operation t can be chosen binary or ternary then B is called a binary or ternary absorbing subuniverse of A.

Center. Suppose (A; w) is a finite algebra with a special WNU operation. A subuniverse C ⊆ A is called a center if there exists an algebra (B; w′) with a special WNU operation of the same arity and a subdirect subalgebra R of A × B such that there is no binary absorbing subuniverse in (B; w′) and C = {a ∈ A ∣ ∀b ∈ B : (a, b) ∈ R}.

CSP instance. An instance of the constraint satisfaction problem is called a CSP instance. Sometimes we use the same letter for a CSP instance and for the set of all constraints of this instance. For a variable x, by D(x) we denote the domain of the variable x.

We say that x1 − C1 − x2 − ⋯ − Cl−1 − xl is a path in a CSP instance Θ if xi and xi+1 are in the scope of Ci for every i. We say that a path connects a and b if there exists ai ∈ D(xi) for every i such that a1 = a, al = b, and the projection of Ci onto xi, xi+1 contains the tuple (ai, ai+1).

A CSP instance is called 1-consistent if every constraint of the instance is subdirect. A CSP instance Θ is called cycle-consistent if for every variable x and every a ∈ D(x), any path starting and ending with x in Θ connects a and a. A CSP instance Θ is called linked if for every variable x appearing in Θ and every a, b ∈ D(x) there exists a path starting and ending with x in Θ that connects a and b.

Suppose X′ ⊆ X. Then we can define a projection of Θ onto X′, that is, a CSP instance whose variables are the elements of X′ and whose constraints are the projections of the constraints of Θ onto X′. We say that an instance is fragmented if the set of variables X can be divided into 2 nonempty disjoint sets X1 and X2 such that the constraint scope of any constraint of the instance either has variables only from X1, or only from X2.

A CSP instance Θ is called irreducible if any instance Θ′ such that every constraint of Θ′ is a projection of a constraint from Θ onto some set of variables is fragmented, linked, or has a subdirect solution set.

Weaker constraints. We say that a constraint is weaker than a constraint if , , and, additionally, or . Suppose is a constraint and where is a minimal congruence such that . Then is called a congruence-weakened constraint.

Minimal linear reduction. Suppose the domain set of the instance is D = (D1, …, Dn). The domain set D′ = (D1′, …, Dn′) is called a minimal linear reduction if Di′ is an equivalence class of the minimal linear congruence of Di for every i. The reduction is called 1-consistent if the instance obtained after the reduction of every domain is 1-consistent.

Crucial instances. Let Di′ ⊆ Di for every i. A constraint C of Θ is called crucial in D′ if Θ has no solutions in D′ but the replacement of C by all weaker constraints gives an instance with a solution in D′. A CSP instance is called crucial in D′ if every constraint of the instance is crucial in D′. To simplify, instead of “crucial in D′” we say “crucial”.

3 Algorithm

In this section we present a modified version of the algorithm from [18]. The main obstacle to using the original algorithm for infinite languages lies in Steps 3 and 11, where we replace a constraint by all weaker constraints. If the constraint language is infinite, we do not have a polynomial upper bound on the number of such replacements. To fix this problem, instead of replacing a constraint by all weaker constraints we replace it by all congruence-weakened constraints. Moreover, to restrict the depth of the recursion we additionally transform our instance to ensure that all the constraint relations are essential relations with the parallelogram property.

We start with the new procedures, then we explain auxiliary procedures from [18], and finish with the modified main part of the algorithm.

We have made only the following modifications to the algorithm.

1. We added Step 3 to work only with essential relations. This property is important because, by Lemma 4.1, any essential relation preserved by an idempotent WNU operation has exponentially many tuples.

2. We added Step 4 to ensure that every relation has the parallelogram property.

3. We added Step 5 to ensure that Con(ρ, i) is an irreducible congruence for every constraint relation ρ.

4. We changed Step 6 (Step 3 in [18]). Instead of replacing every constraint by all weaker constraints we replace it by all congruence-weakened constraints, and therefore we increase the congruence for every constraint relation .

5. Similarly, we replaced Step 11 in [18] by Steps 13 and 14. In Step 13, instead of replacing a constraint by all weaker constraints we replace it by all congruence-weakened constraints. In Step 14, we try to replace a constraint by all projections onto all variables but one appearing in the constraint.

6. Instead of considering a center we consider a ternary absorption, thus we replace Steps 4 and 5 in [18] by Step 7.

3.1 New parts

Finding an appropriate projection. Suppose ρ ⊆ A1 × ⋯ × An and α ∉ ρ. Here we explain how to find a minimal subset Ω ⊆ {1, …, n} such that projΩ(α) ∉ projΩ(ρ). Note that projΩ(ρ) is then always an essential relation.

1. Put .

2. Put .

3. While do .

4. Put

5. If , go to step 2.
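Assuming the goal of the procedure above is an inclusion-minimal Ω with projΩ(α) ∉ projΩ(ρ), a greedy coordinate-dropping sketch looks as follows (names are ours, not the paper's):

```python
def proj(relation, coords):
    """Project a set of tuples onto the listed coordinates."""
    return {tuple(t[i] for i in coords) for t in relation}

def minimal_witness_coords(relation, alpha, arity):
    """Greedily drop coordinates while the projection of alpha stays
    outside the projection of the relation. The result Omega is
    inclusion-minimal: removing any remaining coordinate puts the
    projected tuple back into the projected relation, so
    proj_Omega(alpha) is an essential tuple for proj_Omega(relation)."""
    omega = list(range(arity))
    i = 0
    while i < len(omega):
        candidate = omega[:i] + omega[i + 1:]
        if tuple(alpha[j] for j in candidate) not in proj(relation, candidate):
            omega = candidate        # this coordinate is redundant
        else:
            i += 1                   # this coordinate is needed
    return omega
```

For example, for the relation "first two coordinates are equal" on {0, 1}^3 and the outside tuple (0, 1, 0), the procedure keeps exactly the first two coordinates.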

Essential Representation. Suppose ρ ⊆ A1 × ⋯ × An. An essential representation of ρ is the following formula

 ρ(x1,…,xn)=δ1(z1,1,…,z1,n1)∧⋯∧δs(zs,1,…,zs,ns),

where is an essential relation from , , for every and . Below we explain how to find a set , where for every , such that form an essential representation of . To guarantee this property we require that every is essential and for every there exists such that .

1. Put .

2. Choose a tuple such that for some (we have at most such tuples).

3. Find a minimal subset such that . Put .

4. Go to the next tuple in 2).

5. Put .

6. By recursive call we calculate the essential representation corresponding to .

7. Put .

8. Remove from all sets that are not maximal in by inclusion.

Note that the obtained essential representation of an essential relation consists of the original relation. Therefore, the above procedure can also be used to check whether a relation is essential.

Providing the Parallelogram Property. In this section we explain how to find the minimal relation containing a given relation and having the parallelogram property. We say that tuples form a rectangle if there exists a division of the coordinates into two blocks under which the tuples can be written as α1β1, α1β2, α2β1, α2β2. To find the minimal relation with the parallelogram property it is sufficient to close the relation under adding the fourth tuple of a rectangle. This can be done in the following way.

For each .

1. Put , .

2. If then find the fourth tuple of the corresponding rectangle.

3. If , add to .

By Lemma 4.1, every essential relation has exponentially many tuples; therefore, this procedure works in time polynomial in the size of the relation if the relation is essential.
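The closure under adding the fourth tuple of a rectangle is a straightforward fixpoint computation; a sketch (illustrative names; without the essentiality assumption above it can be exponential in the arity):

```python
from itertools import combinations

def parallelogram_closure(relation, arity):
    """Close a relation under adding the fourth tuple of every rectangle:
    for each split of the coordinates into two blocks, whenever the pairs
    (a1,b2), (b1,a2), (b1,b2) occur, add the tuple corresponding to (a1,a2)."""
    rel = set(relation)
    coords = range(arity)
    changed = True
    while changed:
        changed = False
        for size in range(1, arity):
            for block in combinations(coords, size):
                rest = [i for i in coords if i not in block]
                pairs = {(tuple(t[i] for i in block),
                          tuple(t[i] for i in rest)) for t in rel}
                for a1, b2 in pairs:
                    for b1, a2 in pairs:
                        if (b1, b2) in pairs and (a1, a2) not in pairs:
                            merged = [None] * arity
                            for i, v in zip(block, a1):
                                merged[i] = v
                            for i, v in zip(rest, a2):
                                merged[i] = v
                            rel.add(tuple(merged))
                            changed = True
    return rel
```

For example, the closure of the binary OR relation is the full relation, while a relation that already has the parallelogram property is a fixed point.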

3.2 Cycle-consistency, non-linked instances, and irreducibility

Provide cycle-consistency. To provide cycle-consistency it is sufficient to use constraint propagation providing (2,3)-consistency. Formally, it can be done in the following way. First, for every pair of variables x, y we consider the intersection of the projections of all constraints onto these variables. The corresponding relation we denote by σx,y. Then we replace every constraint relation by its strengthening with all the applicable relations σx,y. It is not hard to see that this replacement does not change the solution set.

We repeat this procedure while we can change some σx,y. If at some moment we get a relation σx,y that is not subdirect in D(x) × D(y), then we can either reduce D(x) or D(y), or, if σx,y is empty, state that there are no solutions. If we cannot change any relation and every σx,y is subdirect, then the original CSP instance is cycle-consistent.
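A simplified version of this propagation (pairwise filtering only, not the full (2,3)-consistency procedure; names are ours) can be sketched as:

```python
def propagate_pairs(constraints):
    """For every ordered pair of variables, intersect the projections of
    all constraints onto that pair, then filter every constraint relation
    by the resulting binary relations; repeat until a fixpoint is reached.
    Returns the filtered constraints, or None if some relation empties."""
    constraints = [(scope, set(rel)) for scope, rel in constraints]
    changed = True
    while changed:
        changed = False
        sigma = {}
        for scope, rel in constraints:
            for i, x in enumerate(scope):
                for j, y in enumerate(scope):
                    if x == y:
                        continue
                    p = {(t[i], t[j]) for t in rel}
                    sigma[x, y] = sigma[x, y] & p if (x, y) in sigma else p
        for scope, rel in constraints:
            bad = {t for t in rel
                   for i, x in enumerate(scope)
                   for j, y in enumerate(scope)
                   if x != y and (t[i], t[j]) not in sigma[x, y]}
            if bad:
                rel -= bad
                changed = True
            if not rel:
                return None
    return constraints
```

As in the text, the filtering never changes the solution set, since every solution satisfies every projection of every constraint.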

Solve the instance that is not linked. Suppose the instance is not linked and not fragmented; then it can be solved in the following way. We say that an element a ∈ D(x) and an element b ∈ D(y) are linked if there exists a path that connects a and b. Consider the set of all pairs (x, a) such that x is a variable of the instance and a ∈ D(x). Then this set can be divided into the linked components.

It is easy to see that it is sufficient to solve the problem for every linked component and join the results. Precisely, for a linked component L, by DL(x) we denote the set of all elements a ∈ D(x) such that (x, a) is in the component. Since the instance is not linked, DL(x) ⊊ D(x) for every x. Therefore, the reduction to the domains DL(x) is a CSP instance on smaller domains.
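The linked components can be computed with a union-find structure over (variable, value) pairs; a sketch with illustrative names:

```python
def linked_components(constraints):
    """Union-find over (variable, value) pairs: every tuple of every
    constraint links the pairs it mentions, exactly as a one-step path
    does; components of this relation are the linked components."""
    parent = {}
    def find(p):
        parent.setdefault(p, p)
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p
    def union(p, q):
        parent[find(p)] = find(q)
    for scope, rel in constraints:
        for t in rel:
            pairs = list(zip(scope, t))
            for other in pairs[1:]:
                union(pairs[0], other)
    comps = {}
    for p in parent:
        comps.setdefault(find(p), set()).add(p)
    return list(comps.values())
```

For example, a single disequality constraint on {0, 1} splits into two linked components, so that instance is not linked.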

Check irreducibility. For every and every maximal congruence on we do the following.

1. Put .

2. Choose a constraint having the variable in the scope for some , choose another variable from the scope such that .

3. Denote the projection of onto by .

4. Put . If is a proper equivalence relation, then add to .

5. go to the next , , and in 2.

As a result we get a set and a congruence on for every . Put . It follows from the construction that for every equivalence class of and every there exists a unique equivalence class of such that there can be a solution with and . Thus, for every equivalence class of we have a reduction to the instance on smaller domains. Then for every and we consider the corresponding reduction and check whether there exists a solution with .

Thus, we can check whether the solution set of the projection of the instance onto is subdirect or empty. If it is empty then we state that there are no solutions. If it is not subdirect, then we can reduce the corresponding domain. If it is subdirect, then we go to the next and next maximal congruence on , and repeat the procedure.

3.3 Main part

In this section we provide an algorithm that solves CSP(Γ) in polynomial time for constraint languages Γ (finite or infinite) that are preserved by an idempotent WNU operation. We know that Γ is also preserved by a special WNU operation w. We extend Γ to the set of all relations preserved by w. Let the arity of the WNU w be equal to m. Suppose we have a CSP instance Θ = (X, D, C), where X is a set of variables, D is a set of the respective domains, and C is a set of constraints.

The algorithm is recursive; the list of all possible recursive calls is given at the end of this subsection. One of the recursive calls is the reduction of a domain to a subuniverse such that either the instance has a solution inside the subuniverse, or it has no solutions at all.

Step 1.

Check whether is cycle-consistent. If not then we reduce a domain for some or state that there are no solutions.

Step 2.

Check whether is irreducible. If not then we reduce a domain for some or state that there are no solutions.

Step 3.

Replace every constraint by its essential representation.

By Theorem 4.13, if has no solutions then we cannot get a solution while doing the following step.

Step 4.

Replace every constraint relation by the corresponding constraint relation having the parallelogram property. If one of the obtained constraint relations is not essential, go to Step 3.

By Lemma 4.18, if has no solutions then we cannot get a solution in the following step.

Step 5.

If the congruence Con(ρ, i) is not irreducible for some constraint relation ρ, then replace the constraint by the corresponding congruence-weakened constraints and go to Step 3.

At the moment all constraint relations have the parallelogram property and every Con(ρ, i) is an irreducible congruence; therefore for every constraint there exists a unique congruence-weakened constraint.

Step 6.

Replace every constraint of Θ by the corresponding congruence-weakened constraint, then replace every constraint relation by the corresponding constraint relation having the parallelogram property. Recursively calling the algorithm, check that the obtained instance has a solution with x = b for every variable x and every b ∈ D(x). If not, reduce D(x) to the projection onto x of the solution set of the obtained instance.

By Theorems 4.8 and 4.11 we cannot lose the only solution while doing the following step.

Step 7.

If D(x) has a binary or ternary absorbing subuniverse B for some variable x, then we reduce D(x) to B.

By Theorem 4.10 we can do the following step.

Step 8.

If there exists a congruence σ on D(x) such that the algebra D(x)/σ is polynomially complete, then we reduce D(x) to any equivalence class of σ.

By Theorem 4.5, it remains to consider the case when for every domain D(x) there exists a congruence σ on D(x) such that D(x)/σ is linear, i.e. it is isomorphic to Zp1 × ⋯ × Zpk for prime numbers p1, …, pk. Moreover, σ is proper if |D(x)| > 1.

We denote by . We define a new CSP instance with domains . To every constraint we assign a constraint , where and The constraints of are all constraints that are assigned to the constraints of .

Since every relation on preserved by is known to be a conjunction of linear equations, the instance can be viewed as a system of linear equations in for different .

Our general idea is to add some linear equations to the linear instance so that for any solution of the linear instance there exists the corresponding solution of Θ. We start with the empty set of equations Eq, which is a set of constraints on the linear instance.

Step 9.

Put Eq = ∅.

Step 10.

Solve the system of linear equations Eq and choose independent variables. If it has no solutions then Θ has no solutions. If it has just one solution, then, recursively calling the algorithm, solve the reduction of Θ to this solution. Either we get a solution of Θ, or Θ has no solutions.
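The elimination behind Step 10 is ordinary Gaussian elimination over Zp; a minimal sketch (the return conventions and names are ours), which reports inconsistency, the pivot and free ("independent") columns, and one particular solution:

```python
def row_reduce_mod_p(A, b, p):
    """Gaussian elimination over Z_p (p prime) on the augmented matrix [A|b].
    Returns (pivot_cols, free_cols, particular_solution), where the free
    columns are the independent variables and the particular solution sets
    them to 0; returns None if the system is inconsistent."""
    rows = [list(r) + [v] for r, v in zip(A, b)]
    n = len(A[0])
    pivot_cols, r = [], 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][col], p - 2, p)       # Fermat inverse, p prime
        rows[r] = [x * inv % p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col] % p:
                f = rows[i][col]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[r])]
        pivot_cols.append(col)
        r += 1
    if any(all(x % p == 0 for x in row[:-1]) and row[-1] % p for row in rows):
        return None                             # 0 = nonzero: inconsistent
    sol = [0] * n
    for i, col in enumerate(pivot_cols):
        sol[col] = rows[i][-1] % p
    free_cols = [c for c in range(n) if c not in pivot_cols]
    return pivot_cols, free_cols, sol
```

The free columns parametrize the whole solution set, which matches the linear mapping described below.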

Then there exist and a linear mapping such that any solution of can be obtained as for some .

Note that for any tuple we can check recursively whether has a solution in . To do this, we just need to solve an easier CSP instance (on smaller domains). Similarly, we can check whether has a solution in for every . To do this, we just need to check the existence of a solution in and for any position of .

Step 11.

Check whether has a solution in . If it has then stop the algorithm.

Step 12.

Put . Iteratively remove from all constraints that are weaker than some other constraints of .

In the following two steps we try to weaken the instance so that it still does not have a solution in for some .

Step 13.

For every constraint

1. Let be obtained from by replacing the constraint by the corresponding congruence-weakened constraints.

2. Replace every new constraint of by its essential representation and remove from all constraints that are weaker than some other constraints of .

3. If has no solutions in for some , then put and repeat Step 13.

Step 14.

For every constraint

1. Let be obtained from by replacing the constraint by its projections onto all variables but one appearing in .

2. Replace every new constraint by its essential representation and remove all constraints that are weaker than some other constraints of the instance.

3. If has no solutions in for some , then put and go to Step 13.

At this moment, the CSP instance has the following property. has no solutions in for some but if we replace any constraint by the corresponding congruence-weakened constraints then we get an instance that has a solution in for every . Unlike the original algorithm, we cannot claim that is crucial in . Nevertheless, Theorem 4.19 proves that we can finish the algorithm in the same way as in the original paper.

In the remaining steps we will find a new linear equation that can be added to Eq. Suppose L is an affine subspace of Zp^n of dimension n − 1; thus L is the solution set of a linear equation c1x1 + ⋯ + cnxn = c0. Then the coefficients can be learned (up to a multiplicative constant) by queries of the form “b ∈ L?” as follows. First, we need at most pn queries to find a tuple b ∈ L. Then, to find this equation it is sufficient to check, for every coordinate and every element of Zp, whether suitable modifications of b belong to L.

Step 15.

Suppose is not linked. For each from to

1. Check that for every there exist and a solution of in .

2. If yes, go to the next .

3. If no, then find an equation such that for every satisfying there exist and a solution of in .

4. Add the equation to Eq.

5. Go to Step 10.

It is not hard to see that satisfies the conditions of Theorem 4.19. Then there exists a constraint in and a relation such that , , and is a congruence. We add a new variable with domain and a variable with the same domain as . Then we replace by and add the constraint . We denote the obtained instance by . Let be the set of all tuples such that has a solution with in . By Theorem 4.14, if we replace the constraint relation in by the minimal relation having the parallelogram property then a minimal linear reduction cannot get a solution after the replacement. Therefore, has no tuple . Similarly, if we replace in by defined by

 ρ′′(xi1,xi2,…,xis)=∃x′i1ρ(x′i1,xi2,…,xis)∧σ(xi1,x′i1),

where , then we get a solution in . Otherwise, Theorem 4.14 implies that the replacement of by the minimal relation having the parallelogram property still does not give a solution in . This contradicts the fact that if we replace any constraint by the corresponding congruence-weakened constraints then we get an instance that has a solution in for every . Thus, we know that the projection of onto the first coordinates is a full relation.

Therefore, is defined by one linear equation. If this equation is for some , then both and have no solutions. Otherwise, we put in this equation and get an equation that describes all such that has a solution in . It remains to find this equation.

Step 16.

1. Find an equation such that for every satisfying there exists a solution of in .

2. Add the equation to Eq.

3. Go to Step 10.

Note that every time we reduce our domains, we get constraint relations that are still from .

We have four types of recursive calls of the algorithm:

1. we reduce one domain , for example to a binary absorbing subuniverse (Steps 1, 7, 8).

2. we solve an instance that is not linked. In this case we divide the instance into the linked parts and solve each of them independently (Steps 2, 15).

3. we replace every constraint by the corresponding congruence-weakened constraint and solve an easier CSP instance (Step 6).

4. we reduce every domain such that (Steps 10, 11, 13, 14, 16).

Lemma 4.3 states that the depth of the recursive calls of type 3 is at most . It is easy to see that the depth of the recursive calls of type 2 and 4 is at most .

4 Correctness of the Algorithm

4.1 The size of an essential relation and depth of the recursion

Lemma 4.1.

Suppose is an essential relation preserved by a special WNU of arity Then

Proof.

By Lemma 2.1, there exists an essential tuple for . For every choose such that

 (a1,…,ai−1,bi,ai+1,…,an)∈ρ.

Put for every . Without loss of generality assume that for every and for every . Since preserves , we can show that . Hence, . Consider the projection of onto the last variables, which we denote by . It is not hard to check that and for every .

For any subset put , where if and otherwise. We know that for every . Since is a special WNU, . Then we can check that for any disjoint subsets we have . Thus, contains the tuple for any . Therefore, both and contain at least tuples. ∎

Lemma 4.2.

Suppose is an essential relation with the parallelogram property, is an irreducible congruence, , Then for every .

Proof.

Put Since is an essential relation with the parallelogram property, we have . Since