Guarantees and Limits of Preprocessing in Constraint Satisfaction and Reasoning

by Serge Gaspers, et al.

We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning under structural restrictions. All these problems involve two tasks: (i) identifying the structure in the input as required by the restriction, and (ii) using the identified structure to solve the reasoning task efficiently. We show that for most of the considered problems, task (i) admits a polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, in contrast to task (ii), which does not admit such a reduction to a problem kernel of polynomial size, subject to a complexity-theoretic assumption. As a notable exception we show that the consistency problem for the AtMost-NValue constraint admits a polynomial kernel consisting of a quadratic number of variables and domain values. Our results provide firm worst-case guarantees and theoretical boundaries for the performance of polynomial-time preprocessing algorithms for the considered problems.




1 Introduction

Many important computational problems that arise in various areas of AI are intractable. Nevertheless, AI research has been very successful in developing and implementing heuristic solvers that work well on real-world instances. An important component of virtually every solver is a powerful polynomial-time preprocessing procedure that reduces the problem input. For instance, preprocessing techniques for the propositional satisfiability problem are based on Boolean Constraint Propagation (see, e.g., [27]); CSP solvers make use of various local consistency algorithms that filter the domains of variables (see, e.g., [4]); similar preprocessing methods are used by solvers for Nonmonotonic and Bayesian reasoning problems (see, e.g., [38, 13], respectively). The history of preprocessing, such as applying reduction rules to simplify truth functions, can be traced back to the 1950s [55]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem.

Until recently, no provable performance guarantees for polynomial-time preprocessing methods had been obtained, and so preprocessing was only the subject of empirical studies. A possible reason for the lack of theoretical results is a certain inadequacy of the P vs. NP framework for such an analysis: if we could reduce an instance of an NP-hard problem in polynomial time by just one bit, then we could solve the entire problem in polynomial time by repeating the reduction step a polynomial number of times, and P = NP would follow.

With the advent of parameterized complexity [25], a new theoretical framework became available that provides suitable tools to analyze the power of preprocessing. Parameterized complexity considers a problem in a two-dimensional setting, where in addition to the input size n, a problem parameter k is taken into consideration. This parameter can encode a structural aspect of the problem instance. A problem is called fixed-parameter tractable (FPT) if it can be solved in time f(k)·p(n), where f is a function of the parameter k and p is a polynomial of the input size n. Thus, for FPT problems, the combinatorial explosion can be confined to the parameter and is independent of the input size. It is known that a problem is fixed-parameter tractable if and only if every problem input can be reduced by polynomial-time preprocessing to an equivalent input whose size is bounded by a function of the parameter [24]. The reduced instance is called the problem kernel, and the preprocessing is called kernelization. The power of polynomial-time preprocessing can now be benchmarked in terms of the size of the kernel. Once a small kernel is obtained, we can apply any method of choice to solve the kernel: brute-force search, heuristics, approximation, etc. [42]. Because of this flexibility a small kernel is generally preferable to a less flexible branching-based fixed-parameter algorithm. Thus, small kernels provide an additional value that goes beyond bare fixed-parameter tractability.

Kernelization is an important algorithmic technique that has become the subject of a very active field in state-of-the-art combinatorial optimization (see, e.g., the references in [28, 42, 45, 57]). Kernelization can be seen as preprocessing with a performance guarantee: it reduces a problem instance in polynomial time to an equivalent instance, the kernel, whose size is a function of the parameter [28, 33, 42, 45].

Once a kernel is obtained, the time required to solve the instance is a function of the parameter only and therefore independent of the input size. While, in general, the time needed to solve an instance does not necessarily depend on the size of the instance alone, the kernelization view is that preprocessing handles the easy parts of an instance, leaving a core instance that encodes the hard parts of the problem instance. Naturally one aims at kernels that are as small as possible, in order to guarantee good worst-case running times as a function of the parameter, and the kernel size provides a performance guarantee for the preprocessing. Some NP-hard combinatorial problems such as k-Vertex Cover admit polynomially sized kernels; for others, such as k-Path, an exponential kernel is the best one can hope for [11].

As an example of a polynomial kernel, consider the k-Vertex Cover problem, which, for a graph G = (V, E) and an integer parameter k, is to decide whether there is a set S of at most k vertices such that each edge from E has at least one endpoint in S. Buss' kernelization algorithm for k-Vertex Cover (see [14]) computes the set B of vertices with degree at least k + 1 in G. If |B| > k, then reject the instance, i.e., output a trivial No-instance (e.g., the graph consisting of one edge and the parameter k = 0), since every vertex cover of size at most k contains each vertex from B. Otherwise, if G − B has more than k(k − |B|) edges, then reject the instance, since each vertex from G − B covers at most k edges. Otherwise, output the instance (G − B − D, k − |B|), where D is the set of degree-0 vertices in G − B. This instance has at most 2k² vertices and k² edges. Thus, Buss' kernelization algorithm gives a quadratic kernel for k-Vertex Cover.
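Buss' rules above are easy to implement. The following Python sketch is our own illustration (not code from the paper; the function name and graph encoding are assumptions), applying the two rules to a graph given as a collection of edges:

```python
def buss_kernel(edges, k):
    """Buss' kernelization for k-Vertex Cover (illustrative sketch).

    `edges` is a collection of 2-element tuples/frozensets over hashable
    vertices.  Returns an equivalent instance (edges', k'), or a trivial
    No-instance when one of the reduction rules rejects."""
    no_instance = ({frozenset({"u", "v"})}, 0)  # one edge, budget 0
    edges = {frozenset(e) for e in edges}
    deg = {}
    for e in edges:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    # Rule 1: vertices of degree > k belong to every vertex cover of size <= k.
    high = {v for v, d in deg.items() if d > k}
    if len(high) > k:
        return no_instance
    # Take the high-degree vertices into the cover and delete them.
    rest = {e for e in edges if not (e & high)}
    k2 = k - len(high)
    # Rule 2: each remaining vertex covers at most k edges, so a cover of
    # size k2 covers at most k * k2 edges.
    if len(rest) > k * k2:
        return no_instance
    # Degree-0 vertices vanish automatically, since we only keep edges.
    return (rest, k2)
```

For instance, a star with five leaves and k = 1 reduces to the empty instance, since the centre is forced into the cover.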

In previous research several NP-hard AI problems have been shown to be fixed-parameter tractable. We list some important examples from various areas:

  1. Constraint satisfaction problems (CSP) over a fixed universe of values, parameterized by the induced width [41].

  2. Consistency and generalized arc consistency for intractable global constraints, parameterized by the cardinalities of certain sets of values [5].

  3. Propositional satisfiability (SAT), parameterized by the size of backdoors [50].

  4. Positive inference in Bayesian networks with variables of bounded domain size, parameterized by the size of loop cutsets [52, 9].

  5. Nonmonotonic reasoning with normal logic programs, parameterized by feedback width.


All these problems involve the following two tasks.

  1. Structure Recognition Task: identify the structure in the input as required by the considered parameter.

  2. Reasoning Task: use the identified structure to solve a reasoning task efficiently.

For most of the considered problems we observe the following pattern: the Structure Recognition Task admits a polynomial kernel, in contrast to the Reasoning Task, which does not admit a polynomial kernel, unless the Polynomial Hierarchy collapses to its third level.

A negative exception to this pattern is the recognition problem for CSPs of small induced width, which most likely does not admit a polynomial kernel.

A positive exception to this pattern is the AtMost-NValue global constraint, for which we obtain a polynomial kernel. As in [5], the parameter is the number of holes in the domains of the variables, measuring how close the domains are to being intervals. More specifically, we present a linear-time preprocessing algorithm that reduces an AtMost-NValue constraint C with k holes to a consistency-equivalent AtMost-NValue constraint C' of size polynomial in k. In fact, C' has a quadratic number of variables and domain values. We also give an improved branching algorithm for checking the consistency of C', whose running time is exponential in k only. The combination of kernelization and branching yields efficient algorithms for the consistency and propagation of (AtMost-)NValue constraints.


This article is organized as follows. Parameterized complexity and kernelization are formally introduced in Section 2. Section 3 describes the tools we use to show that certain parameterized problems do not have polynomial kernels. Sections 4–8 prove kernel lower bounds for parameterized problems in the areas of constraint networks, satisfiability, global constraints, Bayesian reasoning, and nonmonotonic reasoning. Each of these sections also gives all necessary definitions, relevant background, and related work for the considered problems. In addition, Section 6 describes a polynomial kernel for the consistency problem for the AtMost-NValue constraint parameterized by the number of holes in the variable domains, and an FPT algorithm that uses this kernel as a first step. The correctness and performance guarantees of the kernelization algorithm are only outlined in Section 6 and proved in detail in Appendix A. The conclusion, Section 9, broadly recapitulates the results and suggests the study of Turing kernels to overcome the shortcomings of (standard) kernels for many fundamental AI and reasoning problems.

2 Formal Background

A parameterized problem P is a subset of Σ* × ℕ for some finite alphabet Σ. For a problem instance (x, k) ∈ Σ* × ℕ we call x the main part and k the parameter. We assume the parameter is represented in unary. For the parameterized problems considered in this paper, the parameter is a function of the main part, i.e., k = π(x) for a function π. We then denote the problem as P(π); e.g., U-CSP(width) denotes the problem U-CSP parameterized by the width of the given tree decomposition.

A parameterized problem P is fixed-parameter tractable if there exists an algorithm that solves any input (x, k) ∈ Σ* × ℕ in time f(k)·p(|x|), where f is an arbitrary computable function of k and p is a polynomial in |x|.

A kernelization for a parameterized problem P ⊆ Σ* × ℕ is an algorithm that, given (x, k) ∈ Σ* × ℕ, outputs in time polynomial in |x| + k a pair (x', k') ∈ Σ* × ℕ such that

  1. (x, k) ∈ P if and only if (x', k') ∈ P, and

  2. |x'| + k' ≤ g(k), where g is an arbitrary computable function, called the size of the kernel.

In particular, for constant k the kernel has constant size g(k). If g is a polynomial then we say that P admits a polynomial kernel.

Every fixed-parameter tractable problem admits a kernel. This can be seen by the following argument due to Downey et al. [24]. Assume we can decide instances of problem P in time f(k)·p(|x|). We kernelize an instance (x, k) as follows. If |x| ≤ f(k), then we already have a kernel of size f(k). Otherwise, if |x| > f(k), then f(k)·p(|x|) ≤ |x|·p(|x|) is polynomial in |x|; hence we can decide the instance in polynomial time and replace it with a small decision-equivalent instance. Thus we always have a kernel of size at most f(k). However, f is super-polynomial for NP-hard problems (unless P = NP), hence this generic construction does not provide polynomial kernels.

We understand preprocessing for an NP-hard problem as a polynomial-time procedure that transforms an instance of the problem into a (possibly smaller) solution-equivalent instance of the same problem. Kernelization is such a preprocessing with a performance guarantee, i.e., we are guaranteed that the preprocessing yields a kernel whose size is bounded in terms of the parameter of the given problem instance. Different forms of preprocessing have also been considered in the literature. An important one is knowledge compilation, a two-phase approach to reasoning problems where in a first phase a given knowledge base is (possibly in exponential time) preprocessed (“compiled”), such that in a second phase various queries can be answered in polynomial time [15].

3 Tools for Kernel Lower Bounds

In the sequel we will use recently developed tools to obtain kernel lower bounds. Our kernel lower bounds are subject to the widely believed complexity-theoretic assumption NP ⊄ coNP/poly. In other words, the tools allow us to show that a parameterized problem does not admit a polynomial kernel unless NP ⊆ coNP/poly. In particular, NP ⊆ coNP/poly would imply the collapse of the Polynomial Hierarchy to its third level: PH = Σ₃ᵖ [51].

A composition algorithm for a parameterized problem P ⊆ Σ* × ℕ is an algorithm that receives as input a sequence (x_1, k), …, (x_t, k) ∈ (Σ* × ℕ)^t, uses time polynomial in Σ_{i=1}^t |x_i| + k, and outputs (y, k') ∈ Σ* × ℕ with (i) (y, k') ∈ P if and only if (x_i, k) ∈ P for some 1 ≤ i ≤ t, and (ii) k' is polynomial in k. A parameterized problem is compositional if it has a composition algorithm. With each parameterized problem P we associate a classical problem

P̃ = { x#1^k : (x, k) ∈ P },

where 1 denotes an arbitrary symbol from Σ and # is a new symbol not in Σ. We call P̃ the unparameterized version of P.

The following result is the basis for our kernel lower bounds.

Theorem 1 ([11, 34]).

Let P be a parameterized problem whose unparameterized version is NP-complete. If P is compositional, then P does not admit a polynomial kernel unless NP ⊆ coNP/poly.

Let P and Q be parameterized problems. We say that P is polynomial parameter reducible to Q if there exists a polynomial-time computable function f : Σ* × ℕ → Σ* × ℕ and a polynomial q such that for all (x, k) ∈ Σ* × ℕ we have (i) (x, k) ∈ P if and only if f(x, k) = (x', k') ∈ Q, and (ii) k' ≤ q(k). The function f is called a polynomial parameter transformation.

The following theorem allows us to transform kernel lower bounds from one problem to another.

Theorem 2 ([12]).

Let P and Q be parameterized problems such that the unparameterized version of P is NP-complete, the unparameterized version of Q is in NP, and there is a polynomial parameter transformation from P to Q. If Q has a polynomial kernel, then P has a polynomial kernel.

4 Constraint Networks

Constraint networks have proven successful in modeling everyday cognitive tasks such as vision, language comprehension, default reasoning, and abduction, as well as in applications such as scheduling, design, diagnosis, and temporal and spatial reasoning [21]. A constraint network N is a triple (X, U, C), where X is a finite set of variables, U is a finite universe of values, and C is a set of constraints. Each constraint c ∈ C is a pair (S, R), where S is a list of variables of length r called the constraint scope, and R is an r-ary relation over U, called the constraint relation. The tuples of R indicate the allowed combinations of simultaneous values for the variables in S. A solution is a mapping τ : X → U such that for each constraint c = (S, R) ∈ C with S = (x_1, …, x_r), we have (τ(x_1), …, τ(x_r)) ∈ R. A constraint network is satisfiable if it has a solution.
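The definitions above can be made concrete with a small sketch of ours (not from the paper; the encoding of scopes as tuples and relations as sets of tuples is an assumption): a network is satisfiable iff some assignment satisfies every constraint.

```python
from itertools import product

def is_solution(assignment, constraints):
    """Check a mapping (dict: variable -> value) against a list of
    constraints, each a pair (scope, relation) with `scope` a tuple of
    variables and `relation` a set of allowed value tuples."""
    return all(tuple(assignment[x] for x in scope) in relation
               for scope, relation in constraints)

def satisfiable(variables, universe, constraints):
    """Brute-force satisfiability test for a constraint network."""
    return any(is_solution(dict(zip(variables, values)), constraints)
               for values in product(universe, repeat=len(variables)))
```

For example, the network with variables a, b over {0, 1} and the single "not-equal" constraint ((a, b), {(0, 1), (1, 0)}) is satisfiable.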

With a constraint network N we associate its constraint graph G(N) = (X, E), where E contains an edge between two variables if and only if they occur together in the scope of a constraint. A width-w tree decomposition of a graph G is a pair (T, λ), where T is a tree and λ is a labeling of the nodes of T with sets of vertices of G such that the following properties are satisfied: (i) every vertex of G belongs to λ(t) for some node t of T; (ii) every edge of G is contained in λ(t) for some node t of T; (iii) for each vertex v of G, the set of all tree nodes t with v ∈ λ(t) induces a connected subtree of T; (iv) |λ(t)| ≤ w + 1 holds for all tree nodes t. The treewidth of G is the smallest w such that G has a width-w tree decomposition. The induced width of a constraint network is the treewidth of its constraint graph [22].

Kernelization fits perfectly into the context of Constraint Processing where preprocessing and data reduction (e.g., in terms of local consistency algorithms, propagation, and domain filtering) are key methods [4, 63].

Let U be a fixed universe containing at least two elements. We consider the following parameterized version of the constraint satisfaction problem (CSP), which we denote by U-CSP(width).

Instance: A constraint network N over the universe U and a width-w tree decomposition of the constraint graph of N.
Parameter: The integer w.
Question: Is N satisfiable?

Associated with this problem is also the task of recognizing instances of small treewidth. We state this task in the form of the following decision problem, Rec-U-CSP(width).

Instance: A constraint network N and an integer w.
Parameter: The integer w.
Question: Does the constraint graph of N admit a tree decomposition of width w?

It is well known that U-CSP(width) is fixed-parameter tractable over any fixed universe U [22, 41] (for generalizations see [61]). We contrast this classical result and show that it is unlikely that U-CSP(width) admits a polynomial kernel, even in the simplest case where U = {0, 1}.

Theorem 3.

U-CSP(width) does not admit a polynomial kernel unless NP ⊆ coNP/poly.


Proof. We show that U-CSP(width) is compositional. Let (N_i, T_i), 1 ≤ i ≤ t, be a given sequence of instances of U-CSP(width), where N_i = (X_i, U, C_i) is a constraint network and T_i is a width-w tree decomposition of the constraint graph of N_i. We may assume, w.l.o.g., that X_i ∩ X_j = ∅ for i ≠ j (otherwise we can simply change the names of variables). We form a new constraint network N as follows. We put X = X_1 ∪ ⋯ ∪ X_t ∪ {p_0, …, p_t, z_1, …, z_t}, where p_0, …, p_t and z_1, …, z_t are new variables. We define the set C of constraints in three groups.

  1. For each 1 ≤ i ≤ t and each constraint c = ((x_1, …, x_r), R) ∈ C_i we add to C a new constraint c' = ((x_1, …, x_r, z_i), R'), where R' = (R × {0}) ∪ {(1, …, 1)}.

  2. We add ternary constraints c_i = ((p_{i−1}, p_i, z_i), {(0, 0, 1), (1, 1, 1), (0, 1, 0)}) for 1 ≤ i ≤ t.

  3. Finally, we add two unary constraints ((p_0), {(0)}) and ((p_t), {(1)}), which force the values of p_0 and p_t to 0 and 1, respectively.

Let G_i = G(N_i) and G = G(N) be the constraint graphs of N_i and N, respectively. Fig. 1 shows an illustration of G.

Figure 1: Constraint graph G.

We observe that z_1, …, z_t are cut vertices of G. Removing these vertices separates G into independent parts, where one part is the path p_0, p_1, …, p_t, and the remaining parts are isomorphic to G_1, …, G_t. By standard techniques (see, e.g., [43]), we can put the given width-w tree decompositions T_i of the G_i and the trivial width-1 tree decomposition of the path together to a tree decomposition T of G of width at most w + 1. Clearly (N, T) can be obtained from (N_i, T_i), 1 ≤ i ≤ t, in polynomial time.

We claim that N is satisfiable if and only if at least one of the N_i is satisfiable. This claim can be verified by means of the following observations. The constraints in groups (2) and (3) provide that for any satisfying assignment there will be some 1 ≤ i ≤ t such that p_0, …, p_{i−1} are all set to 0 and p_i, …, p_t are all set to 1; consequently z_i is set to 0 and all z_j for j ≠ i are set to 1. The constraints in group (1) provide that if we set z_i to 0, then we obtain from c' the original constraint c; if we set z_i to 1, then we obtain a constraint that can be satisfied by setting all remaining variables to 1. We conclude that U-CSP(width) is compositional.

In order to apply Theorem 1, we need to establish that the unparameterized version of U-CSP(width) is NP-complete. Deciding whether a constraint network N over the universe U = {0, 1} is satisfiable is well known to be NP-complete (say, by a reduction from 3-SAT). To a constraint network N on n variables we can always add a trivial width-(n − 1) tree decomposition of its constraint graph (taking a single tree node t with λ(t) containing all variables of N). Hence the unparameterized version of U-CSP(width) is NP-complete. ∎

Let us now turn to the recognition problem Rec-U-CSP(width). By Bodlaender's Theorem [10], the problem is fixed-parameter tractable. However, the problem is unlikely to admit a polynomial kernel. In fact, Bodlaender et al. [11] showed that the related problem of testing whether a graph has treewidth at most w does not have a polynomial kernel (taking w as the parameter), unless a certain “AND-conjecture” fails. In turn, Drucker [26] showed that a failure of the AND-conjecture implies NP ⊆ coNP/poly. The combination of these two results relates directly to Rec-U-CSP(width).

Proposition 1.

Rec-U-CSP(width) does not admit a polynomial kernel unless NP ⊆ coNP/poly.

5 Satisfiability

The propositional satisfiability problem (SAT) was the first problem shown to be NP-hard [18]. Despite its hardness, SAT solvers are increasingly leaving their mark as a general-purpose tool in areas as diverse as software and hardware verification, automatic test pattern generation, planning, scheduling, and even challenging problems from algebra [40]. SAT solvers are capable of exploiting the hidden structure present in real-world problem instances. The concept of backdoors, introduced by Williams et al. [65], provides a means for making the vague notion of a hidden structure explicit. Backdoors are defined with respect to a “sub-solver,” which is a polynomial-time algorithm that correctly decides satisfiability for a class of CNF formulas. More specifically, Gomes et al. [40] define a sub-solver A to be an algorithm that takes as input a CNF formula F and has the following properties:

  1. Trichotomy: A either rejects the input F, or correctly determines F as unsatisfiable or satisfiable;

  2. Efficiency: A runs in polynomial time;

  3. Trivial Solvability: A can determine whether F is trivially satisfiable (has no clauses) or trivially unsatisfiable (contains only the empty clause);

  4. Self-Reducibility: if A determines F, then for any variable x and value ε ∈ {0, 1}, A determines F[x = ε]. Here F[x = ε] denotes the formula obtained from F by applying the partial assignment x = ε, i.e., satisfied clauses are removed and false literals are removed from the remaining clauses.

We identify a sub-solver A with the class C of CNF formulas whose satisfiability can be determined by A. A strong C-backdoor set (or C-backdoor, for short) of a CNF formula F is a set B of variables such that, for each possible truth assignment τ to the variables in B, the satisfiability of F[τ] can be determined by the sub-solver A in polynomial time. The smaller the backdoor B, the more useful it is for satisfiability solving, which makes the size of the backdoor a natural parameter to consider (see [37] for a survey on the parameterized complexity of backdoor problems). If we know a C-backdoor of size k, we can decide the satisfiability of F by running A on the 2^k instances F[τ], yielding a time bound of 2^k·|F|^{O(1)}. Hence the following problem, SAT(C-backdoor), is clearly fixed-parameter tractable for any sub-solver.
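The backdoor-based decision procedure just described can be sketched as follows. This is our own minimal illustration (not code from the paper), with Horn as the sub-solver class, clauses encoded as sets of (variable, polarity) literals, and unit propagation standing in for the linear-time Horn algorithm:

```python
from itertools import product

def reduce_formula(clauses, assignment):
    """Apply a partial assignment (dict: variable -> bool) to a CNF
    formula.  Satisfied clauses are removed; false literals are removed
    from the remaining clauses."""
    out = []
    for clause in clauses:
        kept = []
        satisfied = False
        for var, pol in clause:
            if var in assignment:
                if assignment[var] == pol:
                    satisfied = True
                    break
            else:
                kept.append((var, pol))
        if not satisfied:
            out.append(frozenset(kept))
    return out

def horn_sat(clauses):
    """Decide satisfiability of a Horn formula by unit propagation.
    If no empty clause is ever derived, setting all remaining variables
    to False satisfies the rest (every non-unit Horn clause contains a
    negative literal)."""
    clauses = [frozenset(c) for c in clauses]
    while True:
        if any(not c for c in clauses):
            return False  # empty clause derived: unsatisfiable
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return True
        (var, pol), = unit  # propagate the forced literal
        clauses = reduce_formula(clauses, {var: pol})

def backdoor_sat(clauses, backdoor):
    """Decide SAT by trying all 2^|B| assignments to the backdoor
    variables and running the sub-solver on each reduced formula.
    Correct whenever `backdoor` is a strong Horn-backdoor."""
    backdoor = list(backdoor)
    return any(
        horn_sat(reduce_formula(clauses, dict(zip(backdoor, bits))))
        for bits in product([False, True], repeat=len(backdoor))
    )
```

The 2^k factor comes solely from the `product` over backdoor assignments; each sub-solver call is polynomial, matching the 2^k·|F|^{O(1)} bound above.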

Instance: A CNF formula F and a C-backdoor B of F of size k.
Parameter: The integer k.
Question: Is F satisfiable?

We also consider, for every sub-solver, the associated recognition problem Rec-SAT(C-backdoor).

Instance: A CNF formula F and an integer k.
Parameter: The integer k.
Question: Does F have a C-backdoor of size at most k?

With the problem SAT(C-backdoor) we are concerned with the question of whether, instead of trying all 2^k partial assignments, we can reduce the instance to a polynomial kernel. We will establish a very general result that applies to all possible sub-solvers.

Theorem 4.

SAT(C-backdoor) does not admit a polynomial kernel for any sub-solver unless NP ⊆ coNP/poly.


Proof. We will devise a polynomial parameter transformation from the following parameterized problem, SAT(vars), which is known to be compositional [34] and therefore unlikely to admit a polynomial kernel.

Instance: A propositional formula F in CNF on n variables.
Parameter: The integer n.
Question: Is F satisfiable?

Let F be a CNF formula and B the set of all variables of F. Due to the trivial solvability (Property 3) of a sub-solver, B is a C-backdoor set of F for any sub-solver. Hence, mapping F (as an instance of SAT(vars)) to (F, B) (as an instance of SAT(C-backdoor)) provides a (trivial) polynomial parameter transformation from SAT(vars) to SAT(C-backdoor). Since the unparameterized versions of both problems are clearly NP-complete, the result follows by Theorem 2. ∎

Let us denote by qCNF the class of CNF formulas where each clause has at most q literals, and by Horn the class of CNF formulas where each clause has at most one positive literal. Sub-solvers for Horn and 2CNF follow from [23] and [44], respectively.

Let 3SAT(π) (where π is an arbitrary parameterization) denote the problem SAT(π) restricted to 3CNF formulas. In contrast to SAT(vars), the parameterized problem 3SAT(vars) has a trivial polynomial kernel: if we remove duplicate clauses, then any 3CNF formula on n variables contains O(n³) clauses, and so the formula itself is a polynomial kernel. Hence the easy proof of Theorem 4 does not carry over to 3SAT(C-backdoor). We therefore consider the cases 3SAT(Horn-backdoor) and 3SAT(2CNF-backdoor) separately; these cases are important since the detection of Horn- and 2CNF-backdoors is fixed-parameter tractable [50].
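The counting argument behind this trivial kernel can be spelled out in a short sketch of ours (the helper names are invented): over n variables there are at most Σ_{j=1..3} C(n, j)·2^j distinct clauses with up to three literals and no repeated variable, a polynomial in n.

```python
from math import comb

def dedupe(clauses):
    """Remove duplicate clauses; the deduplicated 3CNF formula is
    itself a polynomial kernel for 3SAT(vars)."""
    return {frozenset(c) for c in clauses}

def max_distinct_3clauses(n):
    """Upper bound on distinct clauses with 1-3 literals over n
    variables: choose the variables, then a polarity for each."""
    return sum(comb(n, j) * 2 ** j for j in (1, 2, 3))
```

For n = 2, say, there are at most 4 unit clauses and 4 two-literal clauses, i.e., 8 distinct clauses in total.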

Theorem 5.

Neither 3SAT(Horn-backdoor) nor 3SAT(2CNF-backdoor) admits a polynomial kernel unless NP ⊆ coNP/poly.


Proof. Let C ∈ {Horn, 2CNF}. We show that 3SAT(C-backdoor) is compositional. Let (F_i, B_i), 1 ≤ i ≤ t, be a given sequence of instances of 3SAT(C-backdoor), where F_i is a 3CNF formula and B_i is a C-backdoor set of F_i of size k. We distinguish two cases.

Case 1: t > 2^k. Whether F_i is satisfiable or not can be decided in time 2^k·|F_i|^{O(1)}, since the satisfiability of a Horn or 2CNF formula can be decided in linear time. We can check whether at least one of the formulas F_1, …, F_t is satisfiable in time t·2^k·max_i |F_i|^{O(1)}, which is polynomial in the input size since 2^k < t. If some F_i is satisfiable, we output a trivial Yes-instance; otherwise we output a trivial No-instance. Hence we have a composition algorithm.

Case 2: t ≤ 2^k. This case is more involved. We construct a new instance (F, B ∪ W) of 3SAT(C-backdoor) as follows.

Let s = ⌈log₂ t⌉. Since t ≤ 2^k, s ≤ k follows.

Let var(F_i) denote the set of variables of F_i. We may assume, w.l.o.g., that B_1 = ⋯ = B_t = B and that var(F_i) ∩ var(F_j) = B for all i ≠ j, since otherwise we can change the names of variables accordingly. In a first step we obtain from every F_i a CNF formula F_i' as follows. For each variable x ∈ B we take new variables x^i_1, …, x^i_{s+1}. We replace each positive occurrence of a variable x ∈ B in F_i with the literal x^i_1 and each negative occurrence of x with the literal ¬x^i_{s+1}.

We add all clauses of the form (¬x^i_j ∨ x^i_{j+1}) for x ∈ B and 1 ≤ j ≤ s; we call these clauses “connection clauses.” Let F_i' be the formula obtained from F_i in this way. We observe that F_i and F_i' are SAT-equivalent, since the connection clauses form an implication chain from x^i_1 to x^i_{s+1}. Since the connection clauses are both Horn and 2CNF, B is also a C-backdoor of F_i'.

For an illustration of this construction see Example 1 below.

We take a set W = {w_1, …, w_s} of s new variables. Let Z_1, …, Z_{2^s} be the sequence of all possible clauses (modulo permutation of literals within a clause) containing each variable from W either positively or negatively. Consequently we can write each Z_i as Z_i = (l_{i,1} ∨ ⋯ ∨ l_{i,s}), where l_{i,j} ∈ {w_j, ¬w_j}; note that Z_i is falsified by exactly one truth assignment to W.

For 1 ≤ i ≤ t we add to each connection clause of F_i' one literal of Z_i: the j-th clause of each implication chain receives the literal l_{i,j}. Let F_i'' denote the 3CNF formula obtained from F_i' in this way.

For t < i ≤ 2^s we define 3CNF formulas Z_i' as follows. If s ≤ 3, then Z_i' consists just of the clause Z_i. If s > 3, then we take new variables u^i_1, …, u^i_{s−3} and let Z_i' consist of the clauses (l_{i,1} ∨ l_{i,2} ∨ u^i_1), (¬u^i_1 ∨ l_{i,3} ∨ u^i_2), …, (¬u^i_{s−3} ∨ l_{i,s−1} ∨ l_{i,s}). Finally, we let F be the 3CNF formula containing all the clauses from F_1'', …, F_t'', Z_{t+1}', …, Z_{2^s}'. Any assignment to W that satisfies Z_i can be extended to an assignment that satisfies F_i'', since such an assignment satisfies at least one connection clause in every chain, and so the chain of implications from x^i_1 to x^i_{s+1} is broken.

It is not difficult to verify the following two claims: (i) F is satisfiable if and only if at least one of the formulas F_1, …, F_t is satisfiable; (ii) B ∪ W is a C-backdoor of F. Hence we also have a composition algorithm in Case 2, and thus 3SAT(C-backdoor) is compositional. Clearly the unparameterized version of 3SAT(C-backdoor) is NP-complete, hence the result follows from Theorem 1. ∎

Example 1.

We illustrate the constructions of this proof with a running example, where we let k = 2, B = {x, y}, t = 4, and s = 2, writing W = {w_1, w_2} and suppressing the instance index i in the variable copies.

Assume that we have, say,

F_i = (x ∨ y ∨ ¬a) ∧ (¬x ∨ a ∨ ¬b) ∧ (¬y ∨ ¬a ∨ b).
From this we obtain the following formula, containing four connection clauses:

F_i' = (x_1 ∨ y_1 ∨ ¬a) ∧ (¬x_3 ∨ a ∨ ¬b) ∧ (¬y_3 ∨ ¬a ∨ b) ∧ (¬x_1 ∨ x_2) ∧ (¬x_2 ∨ x_3) ∧ (¬y_1 ∨ y_2) ∧ (¬y_2 ∨ y_3).
Now assume Z_i = (w_1 ∨ ¬w_2). We add to the connection clauses the literals from Z_i and obtain

F_i'' = (x_1 ∨ y_1 ∨ ¬a) ∧ (¬x_3 ∨ a ∨ ¬b) ∧ (¬y_3 ∨ ¬a ∨ b) ∧ (¬x_1 ∨ x_2 ∨ w_1) ∧ (¬x_2 ∨ x_3 ∨ ¬w_2) ∧ (¬y_1 ∨ y_2 ∨ w_1) ∧ (¬y_2 ∨ y_3 ∨ ¬w_2).
Assigning w_1 to false and w_2 to true reduces F_i'' to F_i'. The other three possibilities of assigning truth values to w_1 and w_2 break the connection clauses and make the formula trivially satisfiable.

We now turn to the recognition problem Rec-SAT(C-backdoor), in particular for C ∈ {Horn, 2CNF}, for which, as mentioned above, the problem is known to be fixed-parameter tractable [50]. Here we are able to obtain positive results.

Proposition 2.

Both Rec-SAT(Horn-backdoor) and Rec-SAT(2CNF-backdoor) admit polynomial kernels, with a linear and a quadratic number of variables, respectively.


Proof. Let (F, k) be an instance of Rec-SAT(Horn-backdoor). We construct a graph G whose vertices are the variables of F and which contains an edge between two variables if and only if both variables appear as positive literals together in a clause. It is well known and easy to see that the vertex covers of G are exactly the Horn-backdoor sets of F [60]. Recall that a vertex cover of a graph is a set of vertices that contains at least one end of each edge of the graph. Now we apply the known kernelization algorithm for vertex covers [17] to (G, k) and obtain in polynomial time an equivalent instance (G', k') where G' has a number of vertices linear in k. It only remains to consider G' as a CNF formula F', where each edge gives rise to a binary clause on two positive literals. Since evidently the Horn-backdoor sets of F' are exactly the vertex covers of G', we conclude that (F', k') constitutes a polynomial kernel for Rec-SAT(Horn-backdoor).

For Rec-SAT(2CNF-backdoor) we proceed similarly. Let (F, k) be an instance of this problem. We construct a 3-uniform hypergraph H whose vertices are the variables of F and which contains a hyperedge on any three variables that appear (positively or negatively) together in a clause of F. Again, it is well known and easy to see that the hitting sets of H are exactly the 2CNF-backdoor sets of F [60]. Recall that a hitting set of a hypergraph is a set of vertices that contains at least one vertex from each hyperedge. Now we apply a known kernelization algorithm for the hitting set problem on 3-uniform hypergraphs (3HS) [1] to (H, k) and obtain in polynomial time an equivalent instance (H', k') where H' has a number of vertices quadratic in k. It remains to consider H' as a CNF formula F', where each hyperedge gives rise to a ternary clause on three positive literals. Since evidently the 2CNF-backdoor sets of F' are exactly the hitting sets of H', we conclude that (F', k') constitutes a polynomial kernel for Rec-SAT(2CNF-backdoor). ∎
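The first construction in this proof is straightforward to implement. The following sketch is ours (literals again encoded as (variable, polarity) pairs):

```python
def horn_backdoor_graph(clauses):
    """Build the graph whose vertex covers are exactly the strong
    Horn-backdoor sets of the formula: vertices are variables, with an
    edge between two variables whenever both occur positively together
    in some clause."""
    edges = set()
    for clause in clauses:
        pos = sorted(var for var, pol in clause if pol)
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                edges.add((pos[i], pos[j]))
    return edges
```

For the formula (a ∨ b ∨ ¬c) ∧ (¬a ∨ c ∨ d) this yields the edges (a, b) and (c, d); covering each edge, e.g. by {a, c}, leaves at most one positive literal per clause under every assignment to the cover.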

6 Global Constraints

Constraint programming (CP) offers a powerful framework for the efficient modeling and solving of a wide range of hard problems [58]. At the heart of efficient CP solvers are so-called global constraints that specify patterns that frequently occur in real-world problems. Efficient propagation algorithms for global constraints help speed up the solver significantly [63]. For instance, a frequently occurring pattern is that certain variables must all take different values (e.g., activities requiring the same resource must all be assigned different times). Therefore most constraint solvers provide a global AllDifferent constraint and algorithms for its propagation. Unfortunately, for several important global constraints complete propagation is NP-hard, and one therefore switches to incomplete propagation such as bound consistency [8].

In their AAAI’08 paper, Bessière et al. [5] showed that complete propagation of several intractable constraints can be done efficiently as long as certain natural problem parameters are small, i.e., the propagation is fixed-parameter tractable [25]. Among others, they showed fixed-parameter tractability of the AtLeast-NValue and Extended Global Cardinality (EGC) constraints parameterized by the number of “holes” in the domains of the variables. If there are no holes, then all domains are intervals and complete propagation is polynomial by classical results; thus the number of holes provides a way of scaling up the nice properties of constraints with interval domains.

In the sequel we bring this approach a significant step forward, picking up a long-term research objective suggested by Bessière et al. [5] in their concluding remarks: whether intractable global constraints admit a reduction to a problem kernel, i.e., a kernelization.

More formally, a global constraint C is defined for a set S of variables, where each variable x ∈ S ranges over a finite domain D(x) of values. For a set S of variables we write D(S) = ⋃_{x∈S} D(x). An instantiation is an assignment α such that α(x) ∈ D(x) for each x ∈ S. A global constraint defines which instantiations are legal and which are not. This definition is usually implicit, as opposed to classical constraints, which list all legal tuples. Examples of global constraints include:

  1. The global constraint NValue is defined over a set of variables $\{x_1, \dots, x_n\}$ and a variable $N$ and requires from a legal instantiation $\alpha$ that $|\{\alpha(x_1), \dots, \alpha(x_n)\}| = \alpha(N)$;

  2. The global constraint AtMost-NValue is defined for a fixed integer $N$ over a set of variables $\{x_1, \dots, x_n\}$ and requires from a legal instantiation $\alpha$ that $|\{\alpha(x_1), \dots, \alpha(x_n)\}| \le N$;

  3. The global constraint Disjoint is specified by two sets of variables $X$ and $Y$ and requires that $\alpha(x) \neq \alpha(y)$ for each pair $x \in X$ and $y \in Y$;

  4. The global constraint Uses is also specified by two sets of variables $X$ and $Y$ and requires that for each $x \in X$ there is some $y \in Y$ such that $\alpha(x) = \alpha(y)$.

  5. The global constraint EGC is specified by a set of variables $X$, a set of values $D(X)$, and a finite set $K(v)$ of natural numbers for each value $v \in D(X)$, and it requires that for each $v \in D(X)$ we have $|\{x \in X : \alpha(x) = v\}| \in K(v)$.
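
To make these definitions concrete, the legality conditions above can be phrased as simple checks. The following sketch (function and argument names are ours, not from the paper) tests a complete instantiation against each constraint:

```python
from collections import Counter

# Sketch (names ours): legality checks for the five global constraints,
# given a complete instantiation `alpha` as a dict from variables to values.

def nvalue_legal(alpha, xs, n_var):
    # NValue: the number of distinct values used by xs equals alpha(N).
    return len({alpha[x] for x in xs}) == alpha[n_var]

def atmost_nvalue_legal(alpha, xs, n):
    # AtMost-NValue: at most n distinct values are used (n a fixed integer).
    return len({alpha[x] for x in xs}) <= n

def disjoint_legal(alpha, xs, ys):
    # Disjoint: no variable in xs shares its value with a variable in ys.
    return not ({alpha[x] for x in xs} & {alpha[y] for y in ys})

def uses_legal(alpha, xs, ys):
    # Uses: every value used by xs is also used by ys.
    return {alpha[x] for x in xs} <= {alpha[y] for y in ys}

def egc_legal(alpha, xs, K):
    # EGC: for each value v of the value set, the number of variables
    # taking v lies in the finite set K[v] of allowed cardinalities.
    counts = Counter(alpha[x] for x in xs)
    return all(counts.get(v, 0) in allowed for v, allowed in K.items())
```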

A global constraint $C$ is consistent if there is a legal instantiation of its variables. The constraint is hyper arc consistent (HAC) if for each variable $x$ and each value $v \in D(x)$, there is a legal instantiation $\alpha$ such that $\alpha(x) = v$ (in that case we say that $\alpha$ supports the value $v$ for $x$). In the literature, HAC is also called domain consistent or generalized arc consistent. The constraint is bound consistent if, whenever a variable is assigned the minimum or maximum value of its domain, there are compatible values between the minimum and maximum domain values for all other variables in $X$. The main algorithmic problems for a global constraint $C$ are the following: Consistency, to decide whether $C$ is consistent, and Enforcing HAC, to remove from all domains the values that are not supported for the respective variable.

It is clear that if HAC can be enforced in polynomial time for a constraint $C$, then the consistency of $C$ can also be decided in polynomial time (we just need to check whether some domain became empty). The reverse is true if, for each variable $x$ and value $v \in D(x)$, the consistency of the constraint obtained from $C$ by requiring $x$ to be assigned the value $v$ can be decided in polynomial time (see [63, Theorem 17]). This is the case for most constraints of practical use, and in particular for all constraints considered below. The same correspondence holds with respect to fixed-parameter tractability. Hence, we will focus mainly on Consistency.
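
The connection between Enforcing HAC and Consistency can be illustrated by a brute-force sketch (exponential, for illustration only; the interface is an assumption of ours): a value survives HAC filtering exactly if some legal instantiation supports it, and the constraint is consistent iff no domain empties. For a single constraint, one filtering pass suffices, since every value of a legal instantiation is supported by that very instantiation.

```python
from itertools import product

# Sketch (interface ours): brute-force HAC for a single constraint given
# by a predicate `legal` over complete instantiations (dicts).

def enforce_hac(domains, legal):
    """Return (consistent, filtered_domains) with unsupported values removed."""
    xs = sorted(domains)
    new_domains = {}
    for x in xs:
        new_domains[x] = {
            v for v in domains[x]
            # v is supported for x if some legal instantiation assigns v to x.
            if any(legal(dict(zip(xs, combo)))
                   for combo in product(*[[v] if y == x else sorted(domains[y])
                                          for y in xs]))
        }
    return all(new_domains[x] for x in xs), new_domains
```

For example, with an AllDifferent-style predicate and domains $D(a)=\{1\}$, $D(b)=\{1,2\}$, the value 1 is removed from $D(b)$.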

For several important types of global constraints, the problem of deciding whether a constraint of that type is consistent is NP-hard. This includes the five global constraints NValue, AtMost-NValue, Disjoint, Uses, and EGC defined above (see [8]).

Each global constraint type and parameter par give rise to a parameterized problem:

Instance: A global constraint $C$ of the given type.
Parameter: The integer par.
Question: Is $C$ consistent?

Bessière et al. [5] considered natural parameters for NValue, Disjoint, and Uses, and showed that consistency checking is fixed-parameter tractable for these constraints under the respective parameterizations.

Bessière et al. [5] also showed that a polynomial-time algorithm for enforcing bound consistency implies that the corresponding consistency problem is fixed-parameter tractable parameterized by the number of holes. This is the case for the global constraints NValue, AtMost-NValue, and EGC.

Definition 1.

When $D(X)$ is totally ordered, a hole in a subset $S \subseteq D(X)$ is a pair $(u, w) \in S \times S$ with $u < w$ such that there is a $v \in D(X) \setminus S$ with $u < v < w$, and there is no $s \in S$ with $u < s < w$.

We denote the number of holes in the domain of a variable $x$ by $holes(x)$. The parameter of the consistency problem for AtMost-NValue constraints is $k = \sum_{x \in X} holes(x)$.
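
Under Definition 1, the number of holes of a domain can be counted directly. A small sketch (the representation is an assumption of ours: $D(X)$ as a sorted list, the domain as a set of its values):

```python
# Sketch (representation ours): D_X is the sorted list of all values in
# D(X); dom is a variable's domain, a set of values from D_X.

def holes(dom, D_X):
    in_dom = [v for v in D_X if v in dom]
    count = 0
    for a, b in zip(in_dom, in_dom[1:]):
        # (a, b) is a hole iff some value of D_X lies strictly between
        # a and b; such a value cannot be in dom, since a and b are
        # consecutive domain values.
        if any(a < v < b for v in D_X):
            count += 1
    return count
```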

6.1 Kernel Lower Bounds

We show that it is unlikely that most of the FPT results of Bessière et al. [5] can be improved to polynomial kernels.

Theorem 6.

Under the parameterizations of Bessière et al. [5], the consistency problems for NValue, Disjoint, and Uses constraints do not admit polynomial kernels unless NP ⊆ coNP/poly.


We devise a polynomial parameter transformation from SAT(vars), using a construction of Bessière et al. [8]. Let $F$ be a CNF formula with variables $v_1, \dots, v_n$ and clauses $c_1, \dots, c_m$. We consider the clauses and variables of $F$ as the variables of a global constraint, with domains $D(v_i) = \{v_i, \bar{v}_i\}$ and $D(c_j) = \{l : l \text{ is a literal of } c_j\}$. Now $F$ can be encoded as an NValue constraint over these variables and an additional variable $N$ with $D(N) = \{n\}$. By the pigeonhole principle, a legal instantiation $\alpha$ uses exactly one value from each of the $n$ disjoint pairs $\{v_i, \bar{v}_i\}$ and no further values. Setting $\alpha(v_i) = v_i$ corresponds to setting the variable $v_i$ of $F$ to 1 and setting $\alpha(v_i) = \bar{v}_i$ corresponds to setting $v_i$ to 0. Now, for each clause $c_j$, $\alpha(c_j) = \alpha(v_i)$ for some $i$, since only $n$ values are available, and the literal corresponding to $\alpha(c_j)$ satisfies the clause $c_j$. Since the parameter of the constructed instance is polynomially bounded in the number of variables of $F$, we have a polynomial parameter reduction from SAT(vars) to the parameterized consistency problem for NValue. Similarly, as observed by Bessière et al. [7], $F$ can be encoded as a Disjoint constraint (with the clause variables on one side and the variables $v_1, \dots, v_n$ on the other, the values chosen for the latter being the literals set to 0), or as a Uses constraint (requiring every clause variable to use a literal chosen by some $v_i$). Since the unparameterized problems are clearly NP-complete, and SAT(vars) is known to have no polynomial kernel unless NP ⊆ coNP/poly (as remarked in the proof of Theorem 4), the result follows by Theorem 2. ∎
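
The reduction in the proof above can be sketched in code as follows (notation and helper names are ours; the exhaustive consistency check is for tiny examples only):

```python
from itertools import product

# Sketch (our notation) of the SAT -> NValue encoding: each CNF variable
# v_i gets a constraint variable with domain {i, -i} (its two literals);
# each clause gets a constraint variable whose domain is the set of its
# literals.  The n disjoint pairs {i, -i} already force n distinct values,
# so with the bound n every clause must reuse a chosen (true) literal.

def encode_sat_as_nvalue(clauses, n):
    """clauses: list of lists of nonzero ints (DIMACS-style literals)."""
    domains = {f"v{i}": {i, -i} for i in range(1, n + 1)}
    for j, clause in enumerate(clauses):
        domains[f"c{j}"] = set(clause)
    return domains, n  # consistency bound: at most n distinct values

def consistent_atmost(domains, bound):
    # Exhaustive check, exponential; for tiny examples only.
    xs = sorted(domains)
    return any(len(set(combo)) <= bound
               for combo in product(*[sorted(domains[x]) for x in xs]))
```

For instance, the satisfiable formula $(v_1 \lor v_2) \land \bar{v}_1$ yields a consistent constraint, while $v_1 \land \bar{v}_1$ does not.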

The Consistency problem for EGC constraints is NP-hard [54]. However, if all sets $K(v)$ are intervals, then consistency can be checked in polynomial time using network flows [56]. By the result of Bessière et al. [5], the Consistency problem for EGC constraints is fixed-parameter tractable, parameterized by the total number of holes in the sets $K(v)$. Thus Régin's result generalizes to instances that are close to the interval case.

However, it is unlikely that EGC constraints admit a polynomial kernel.

Theorem 7.

EGC-Cons(holes) does not admit a polynomial kernel unless NP ⊆ coNP/poly.


We use the following result of Quimper et al. [54]: given a CNF formula $F$ on $n$ variables, one can construct in polynomial time an EGC constraint $C$ such that

  1. for each value $v$ of $C$, $K(v) = \{0, k_v\}$ for an integer $k_v$,

  2. $k_v \ge 2$ for at most $2n$ values $v$, and

  3. $F$ is satisfiable if and only if $C$ is consistent.

Since a set $\{0, k_v\}$ has a hole only if $k_v \ge 2$, and then exactly one, the number of holes in $C$ is at most twice the number of variables of $F$.

We observe that this result provides a polynomial parameter reduction from SAT(vars) to EGC-Cons(holes). As remarked in the proof of Theorem 4, SAT(vars) is known to have no polynomial kernel unless NP ⊆ coNP/poly. Hence the theorem follows. ∎

6.2 A Polynomial Kernel for NValue Constraints

Beldiceanu [3] and Bessière et al. [6] decompose NValue constraints into two other global constraints: AtMost-NValue and AtLeast-NValue, which require that at most or at least $N$ values, respectively, are used by the variables in $X$. The Consistency problem is NP-complete for NValue and AtMost-NValue constraints, and polynomial-time solvable for AtLeast-NValue constraints.

In this subsection, we will present a polynomial kernel for AtMost-NValue-Cons(holes).

Instance: A tuple $(X, D(X), D, N)$, where $X$ is a set of variables, $D(X)$ is a totally ordered set of values, $D$ is a map assigning a non-empty domain $D(x) \subseteq D(X)$ to each variable $x \in X$, and $N$ is an integer.
Parameter: The integer $k = \sum_{x \in X} holes(x)$.
Question: Is there a set $S \subseteq D(X)$ with $|S| \le N$ such that $D(x) \cap S \neq \emptyset$ for every variable $x \in X$?
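
The Question above is a hitting-set condition on the domains. A direct, exponential restatement (names are ours), usable as a reference check on small instances:

```python
from itertools import combinations

# Sketch (names ours): is there a set S of at most N values that
# intersects every variable's domain?  Exponential in |D(X)|.

def atmost_nvalue_consistent(domains, values, N):
    for size in range(min(N, len(values)) + 1):
        for S in combinations(sorted(values), size):
            hit = set(S)
            if all(dom & hit for dom in domains.values()):
                return True
    return False
```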

Theorem 8.

The problem AtMost-NValue-Cons(holes) has a polynomial kernel. In particular, an AtMost-NValue constraint with $k$ holes can be reduced in linear time to a consistency-equivalent AtMost-NValue constraint with $O(k^2)$ variables and $O(k^2)$ domain values.

The proof of the theorem is based on a kernelization algorithm that we will describe in the remaining part of this section.

We say that a subset of $D(X)$ is an interval if it has no hole. An interval of a variable $x$ is an inclusion-wise maximal hole-free subset of its domain $D(x)$. Its left endpoint and right endpoint are its minimum and its maximum, respectively. Fig. 2 gives an example of an instance and its interval representation. We assume that instances are given by a succinct description, in which the domain of a variable is given by the left and right endpoints of each of its intervals. As the number of intervals of the instance is $|X| + k$, its size is $O(|X| + k)$. In case the instance is given by an extensive list of the values in the domain of each variable, a succinct representation can be computed in linear time.

Also, in a variant of AtMost-NValue-Cons(holes) where $D(X)$ is not part of the input, we may construct $D(X)$ by sorting the set of all endpoints of intervals in time $O((|X| + k) \log(|X| + k))$. Since, w.l.o.g., a solution contains only endpoints of intervals, this step does not compromise correctness.

A greedy algorithm by Beldiceanu [3] checks the consistency of an AtMost-NValue constraint in linear time when all domains are intervals (i.e., $k = 0$). Further, Bessière et al. [5] have shown that Consistency (and Enforcing HAC) is fixed-parameter tractable, parameterized by the number of holes, for all constraints for which bound consistency can be enforced in polynomial time. A simple algorithm for checking the consistency of AtMost-NValue goes over all instances obtained by restricting the domain of each variable to one of its intervals, and executes the algorithm of [3] for each of these instances. As the number of such restricted instances is at most $2^k$, the running time of this algorithm is clearly bounded by $O(2^k \cdot (|X| + k))$.
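
The interval case rests on a classic greedy argument: scanning intervals by increasing right endpoint and taking a right endpoint whenever the current interval is not yet hit uses a minimum number of values. A sketch of this idea (our rendering, not the paper's pseudocode):

```python
# Sketch (our rendering): minimum number of values hitting a family of
# integer intervals [lo, hi], via the classic right-endpoint greedy.

def min_values_for_intervals(intervals):
    count, last = 0, None
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if last is None or lo > last:
            # Current interval is not hit by the last chosen value:
            # greedily choose its right endpoint.
            count, last = count + 1, hi
    return count
```

Consistency of a hole-free instance then amounts to checking that this minimum is at most $N$.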

Figure 2: Interval representation of an AtMost-NValue instance.

Let an instance of the consistency problem for AtMost-NValue constraints be given. The algorithm is more intuitively described using the interval representation of the instance. The friends of an interval are the other intervals of the same variable. An interval is optional if it has at least one friend, and required otherwise. For a value $v$, let $X(v)$ denote the set of intervals containing $v$.

A solution is a subset $S \subseteq D(X)$ of at most $N$ values such that there exists an instantiation assigning only values from $S$ to the variables in $X$. The algorithm may detect, for some value $v$, that if the problem has a solution, then it has a solution containing $v$. In this case, the algorithm selects $v$, i.e., it removes all variables whose domain contains $v$, it removes $v$ from $D(X)$, and it decrements $N$ by one. The algorithm may detect, for some value $v$, that if the problem has a solution, then it has a solution not containing $v$. In this case, the algorithm discards $v$, i.e., it removes $v$ from every domain and from $D(X)$. (Note that no new holes are created, since $D(X)$ is replaced by $D(X) \setminus \{v\}$.) The algorithm may detect, for some variable $x$, that every solution for the instance without $x$ contains a value from $D(x)$. In that case, it removes $x$.

The algorithm sorts the intervals by increasing right endpoint (ties are broken arbitrarily). Then, it exhaustively applies the following three reduction rules.

Red-⊆: If there are two intervals $I \subseteq I'$ of distinct variables such that $I$ is required, then remove the variable of $I'$ (and its intervals).

Red-Dom: If there are two distinct values $v$ and $w$ such that every interval containing $v$ also contains $w$ (i.e., $X(v) \subseteq X(w)$), then discard $v$.

Red-Unit: If $|D(x)| = 1$ for some variable $x$, then select the value in $D(x)$.

In the example from Fig. 2, Red-⊆ removes two variables whose intervals contain required intervals, Red-Dom removes two values, Red-Unit selects a value (thereby deleting two more variables), and Red-Dom then removes one further value from $D(X)$. The resulting instance is depicted in Fig. 3.
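
Of the three rules, Red-Dom is the easiest to state operationally: a value is discarded when its set of covering intervals is contained in that of another surviving value. A sketch of our reading of the rule, over intervals given as (lo, hi, variable) triples:

```python
# Sketch (our reading of Red-Dom): discard value v when every interval
# containing v also contains some other surviving value w.

def covering(value, intervals):
    return frozenset(i for i, (lo, hi, _) in enumerate(intervals)
                     if lo <= value <= hi)

def red_dom(values, intervals):
    """Values surviving exhaustive Red-Dom (processed in ascending order)."""
    keep = set(values)
    for v in sorted(values):
        Xv = covering(v, intervals)
        if any(w != v and w in keep and Xv <= covering(w, intervals)
               for w in sorted(values)):
            keep.discard(v)
    return keep
```

With intervals $[1,3]$ of $x$ and $[2,4]$ of $y$, the values 1, 2, and 4 are all dominated and only one hitting value survives.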

After none of the previous rules apply, the algorithm scans the remaining intervals from left to right (i.e., by increasing right endpoint). An interval that has already been scanned is either a leader or a follower of a subset of the leaders. Informally, for a leader: if a solution contains a value of the leader, then there is a solution containing that value and the right endpoint of each of the leader's followers.

Figure 3: Instance obtained from the instance of Fig. 2 by exhaustively applying rules Red-⊆, Red-Dom, and Red-Unit.

The algorithm scans the first intervals up to, and including, the first required interval. All these intervals become leaders.

The algorithm then continues scanning intervals one by one. Let $I$ be the interval that is currently scanned and $I'$ the last interval that was scanned. The active intervals are those that have already been scanned and intersect $I'$. A popular leader is a leader that is either active or has at least one active follower.

  • If $I$ is optional, then $I$ becomes a leader, and the algorithm continues scanning intervals until it scans a required interval; all these intervals become leaders.

  • If $I$ is required, then it becomes a follower of all popular leaders that do not intersect $I$ and that have no follower intersecting $I$. If all popular leaders have at least two followers, then set $N := N - 1$ and merge the second-last follower of each popular leader with the last follower of the corresponding leader; i.e., for every popular leader, the right endpoint of its second-last follower is set to the right endpoint of its last follower, and then the last follower of every popular leader is removed.

After having scanned all the intervals, the algorithm exhaustively applies the reduction rules Red-⊆, Red-Dom, and Red-Unit again.

In the example from Fig. 3, two pairs of follower intervals are merged, and Red-Dom then removes two values, resulting in the instance depicted in Fig. 4.

The correctness and performance guarantee of this kernelization algorithm are proved in Appendix A. In particular, for the correctness, we prove that a solution for an instance can be obtained from a solution for the instance resulting from one merge operation by adding one value that is common to all second-last followers of the popular leaders that were merged. We can easily bound the number of leaders by $O(k)$, and we prove that each leader has at most $O(k)$ followers. Since each interval is a leader or a follower of at least one leader, this bounds the total number of intervals by $O(k^2)$. Using the succinct description of the domains, the size of the kernel is $O(k^2)$. We also give some details for a linear-time implementation of the algorithm.

Remark: Rule Red-Dom can be generalized, at the expense of a higher running time, to discard further values that are dominated by combinations of other values.

The kernel for AtMost-NValue-Cons(holes) can now be used to derive a kernel for NValue-Cons(holes).

Corollary 1.

The problem NValue-Cons(holes) has a polynomial kernel. In particular, an NValue constraint with $k$ holes can be reduced in time $O((|X| + |D(X)|)^{\omega})$ to a consistency-equivalent NValue constraint with $O(k^2)$ variables and $O(k^2)$ domain values, where $\omega$ is the exponent of matrix multiplication.


As in [6], we determine the largest possible value for $\alpha(N)$ if the domain of $N$ were the set of all positive integers. This can be done in time $O((|X| + |D(X)|)^{\omega})$ [48, 64] by computing a maximum matching in the graph whose vertices are $X \cup D(X)$, with an edge between $x \in X$ and $v \in D(X)$ iff $v \in D(x)$. Suppose this largest possible value is $m$. Now, set $D(N) := D(N) \cap \{1, \dots, m\}$, giving a consistency-equivalent NValue constraint. Note that if this constraint has a legal instantiation with