Decomposition of the NVALUE constraint

09/17/2009 · by Christian Bessiere, et al. · CSIRO

We study decompositions of NVALUE, a global constraint that can be used to model a wide range of problems where values need to be counted. Whilst decomposition typically hinders propagation, we identify one decomposition that maintains a global view as enforcing bound consistency on the decomposition achieves bound consistency on the original global NVALUE constraint. Such decompositions offer the prospect for advanced solving techniques like nogood learning and impact based branching heuristics. They may also help SAT and IP solvers take advantage of the propagation of global constraints.


1 Introduction

Global constraints are an important feature of constraint programming. They capture common patterns in real world problems, and provide efficient propagators for pruning the search space. Consider, for example, the NValue constraint, which counts the number of distinct values used by a set of variables [1]. This global constraint can model problems where values represent resources, and it arises in many practical applications such as timetabling and frequency allocation. Whilst enforcing domain consistency on the NValue constraint is NP-hard [2], bound consistency can be achieved in polynomial time. At least four different propagation algorithms for the NValue constraint have been proposed, some of which achieve bound consistency [3, 4, 5].

We have recently proposed simulating propagators for global constraints with decompositions. For instance, we have shown that carefully designed decompositions of the global All-Different and GCC constraints can efficiently simulate the corresponding bound consistency propagators [6]. We turn now to the NValue constraint. We study a number of different decompositions, one of which permits the achievement of bound consistency on the NValue constraint. Such decompositions open up a number of promising directions. For example, they suggest schemata for learning nogoods. As a second example, such decompositions may help construct nogood and impact based branching heuristics. As a third and final example, such decompositions may permit SAT and IP solvers to take advantage of the inferences performed by the propagators of global constraints. We have, for instance, seen this with our decompositions of the All-Different constraint [6].

2 Background

We assume variables X_1, …, X_n take values from the set 1 to d. We write D(X_i) for the domain of possible values for X_i, min(X_i) for the smallest value in D(X_i), max(X_i) for the greatest, and [min(X_i), max(X_i)] for the interval between them. A global constraint is one in which the number of variables is a parameter. For instance, the global constraint NValue([X_1, …, X_n], N) ensures that N = |{X_i | 1 ≤ i ≤ n}| [1]. Constraint solvers typically use backtracking search to explore the space of partial assignments. After each assignment, propagation algorithms prune the search space by enforcing local consistency properties like domain, range or bound consistency. A constraint is domain consistent (DC) iff, when a variable is assigned any of the values in its domain, there exist compatible values in the domains of all the other variables of the constraint. Such an assignment is called a support. A constraint is disentailed iff there is no possible support. A constraint is range consistent (RC) iff, when a variable is assigned any of the values in its domain, there exist compatible values between the minimum and maximum domain value for all the other variables of the constraint. Such an assignment is called a bound support. A constraint is bound consistent (BC) iff the minimum and maximum value of every variable of the constraint belong to a bound support. We will compare local consistency properties applied to sets of constraints c_1 and c_2 which are logically equivalent. As in [7], a local consistency property Φ on c_1 is as strong as a property Ψ on c_2 iff, given any domains, if Φ holds on c_1 then Ψ holds on c_2; Φ on c_1 is stronger than Ψ on c_2 iff Φ on c_1 is as strong as Ψ on c_2 but not vice versa; Φ on c_1 is equivalent to Ψ on c_2 iff each is as strong as the other. Finally, as constraint solvers usually enforce local consistency after each assignment down a branch in the search tree, we will compute the total amortised cost of enforcing a local consistency down an entire branch of the search tree. This captures the incremental cost of propagation.
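The consistency notions above can be made concrete with a small brute-force checker. The sketch below is illustrative and not from the paper: `check` is any predicate over complete assignments, and a bound support is searched for inside the interval relaxation of every other variable's domain.

```python
from itertools import product

def has_bound_support(domains, idx, val, check):
    """True iff assigning `val` to variable `idx` extends to a complete
    assignment satisfying `check`, where every other variable ranges over
    the interval [min, max] of its domain (i.e. a bound support exists)."""
    ranges = [range(min(d), max(d) + 1) for d in domains]
    ranges[idx] = [val]
    return any(check(list(a)) for a in product(*ranges))

def is_bound_consistent(domains, check):
    """A constraint is BC iff the min and the max of every variable's
    domain belong to a bound support."""
    return all(has_bound_support(domains, i, b, check)
               for i, d in enumerate(domains) for b in (min(d), max(d)))

# NValue([X1, ..., Xn], N): the last position plays the role of N.
nvalue = lambda a: len(set(a[:-1])) == a[-1]
```

For instance, with X_1 ∈ {1,2}, X_2 ∈ {3,4} and N ∈ {1}, `is_bound_consistent([{1,2},{3,4},{1}], nvalue)` is False: N = 1 has no bound support because X_1 and X_2 are forced to take different values.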

3 NValue constraint

Pachet and Roy proposed the NValue constraint (called by them the “cardinality on attribute values” constraint) to model a combinatorial problem in selecting musical play-lists [1]. It can also be used to model the number of frequencies used in a frequency allocation problem or the number of rooms needed to timetable a set of exams. It generalizes several other global constraints including All-Different (which ensures a set of variables take all different values) and Not-All-Equal (which ensures a set of variables do not all take the same value). Enforcing domain consistency on the NValue constraint is NP-hard (Theorem 3 in [2]) even when N is fixed (Theorem 2 in [4]). In fact, merely computing the lower bound on N is NP-hard (Theorem 3 in [8]). In addition, enforcing domain consistency on the NValue constraint is not fixed parameter tractable since the associated decision problem is W[2]-complete, along with problems like minimum hitting set (Theorem 2 in [9]). However, a number of polynomial propagation algorithms have been proposed that achieve bound consistency and some closely related levels of local consistency [3, 4, 5].

3.1 Simple decomposition

We can decompose the NValue constraint by introducing 0/1 variables and posting the following constraints:

(1)
(2)
(3)

Unfortunately, this simple decomposition hinders propagation. The decomposition can be BC whereas enforcing BC on the corresponding NValue constraint detects disentailment.

Theorem 1

BC on NValue is stronger than BC on its decomposition into (1) to (3).

Proof:  Clearly BC on NValue is at least as strong as BC on the decomposition. To show strictness, consider X_1 ∈ {1, 2}, X_2 ∈ {3, 4}, X_i ∈ {1, 2, 3, 4} for 2 < i ≤ n, and N = 1. Constraints (1) to (3) are BC. However, the corresponding NValue constraint has no bound support, since X_1 and X_2 must take different values, and thus enforcing BC on it detects disentailment.

We observe that enforcing DC instead of BC on constraints (1) to (3) in the example of the proof above still does not prune any value. To decompose NValue without hindering propagation, we must look to more complex decompositions.
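Theorem 1 can be checked mechanically with a naive BC propagation loop run over the decomposition and over the global constraint. This is an illustrative sketch, not the paper's implementation; the Python encoding of constraints (1)–(3) as predicates, and all names, are our own.

```python
from itertools import product

def enforce_bc(domains, constraints):
    """Naively shrink bounds to a BC fixpoint. `constraints` is a list of
    (scope, predicate) pairs; every domain is treated as [min, max].
    Returns True at a non-empty fixpoint, False on a domain wipe-out."""
    bnd = {v: [min(d), max(d)] for v, d in domains.items()}
    changed = True
    while changed:
        changed = False
        for scope, pred in constraints:
            for pos, v in enumerate(scope):
                for side in (0, 1):
                    while bnd[v][0] <= bnd[v][1]:
                        rs = [range(bnd[u][0], bnd[u][1] + 1) for u in scope]
                        rs[pos] = [bnd[v][side]]
                        if any(pred(a) for a in product(*rs)):
                            break          # this bound has a support
                        bnd[v][side] += 1 if side == 0 else -1
                        changed = True
                    if bnd[v][0] > bnd[v][1]:
                        return False
    return True

# Theorem 1 style example: X1 in {1,2}, X2 in {3,4}, N = 1, values 1..4.
dom = {'X1': {1, 2}, 'X2': {3, 4}, 'N': {1}}
dom.update({f'B{j}': {0, 1} for j in range(1, 5)})
dom.update({f'B{i}{j}': {0, 1} for i in (1, 2) for j in range(1, 5)})

dec = []
for i in (1, 2):                            # (1)  Bij = 1  <->  Xi = j
    for j in range(1, 5):
        dec.append(((f'X{i}', f'B{i}{j}'),
                    lambda a, j=j: (a[1] == 1) == (a[0] == j)))
for j in range(1, 5):                       # (2)  Bj = 1  <->  B1j + B2j >= 1
    dec.append(((f'B{j}', f'B1{j}', f'B2{j}'),
                lambda a: (a[0] == 1) == (a[1] + a[2] >= 1)))
dec.append((('N', 'B1', 'B2', 'B3', 'B4'),  # (3)  N = sum of the Bj
            lambda a: a[0] == sum(a[1:])))

glob = [(('X1', 'X2', 'N'), lambda a: len({a[0], a[1]}) == a[2])]
```

`enforce_bc(dom, dec)` reaches a non-empty fixpoint, while `enforce_bc({'X1': {1, 2}, 'X2': {3, 4}, 'N': {1}}, glob)` wipes out a domain: the decomposition misses the disentailment the global view catches.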

3.2 Decomposition into AtMostNValue and AtLeastNValue

Our first step in decomposing the NValue constraint is to split it into two parts: an AtMostNValue and an AtLeastNValue constraint. AtMostNValue([X_1, …, X_n], N) holds iff |{X_i | 1 ≤ i ≤ n}| ≤ N whilst AtLeastNValue([X_1, …, X_n], N) holds iff |{X_i | 1 ≤ i ≤ n}| ≥ N.

Running Example

Consider a NValue constraint over the following variables and values:

X_1 ∈ {1}, X_2 ∈ {2, 3}, X_3 ∈ {2, 3}, X_4 ∈ {2, 3}, X_5 ∈ {1, 2, 3, 4, 5}, N ∈ {1, 2, 5}

Suppose we decompose this into an AtMostNValue and an AtLeastNValue constraint. Consider the AtLeastNValue constraint. The 5 variables can take at most 4 different values because X_2, X_3 and X_4 can only take values 2 and 3. Hence, there is no bound support for N = 5. Enforcing BC on the AtLeastNValue constraint therefore prunes N = 5, leaving N ∈ {1, 2}. Consider now the AtMostNValue constraint. Since X_1 and X_2 guarantee that we take at least 2 different values, there is no bound support for N = 1. Hence enforcing BC on an AtMostNValue constraint prunes N = 1, leaving N = 2. If X_5 = 4 or X_5 = 5 then any complete assignment uses at least 3 different values. Hence there is also no bound support for these assignments. Pruning these values gives bound consistent domains for the original NValue constraint:

X_1 ∈ {1}, X_2 ∈ {2, 3}, X_3 ∈ {2, 3}, X_4 ∈ {2, 3}, X_5 ∈ {1, 2, 3}, N = 2

To show that decomposing the NValue constraint into these two parts does not hinder propagation in general, we will use the following lemma. Given an assignment S of values, card(S) denotes the number of distinct values in S. Given a vector of variables X = [X_1, …, X_n], card↓(X) = min{card(S) | S ∈ D(X_1) × ⋯ × D(X_n)} and card↑(X) = max{card(S) | S ∈ D(X_1) × ⋯ × D(X_n)}.
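On small instances, these two quantities can be computed by brute force; the sketch below is ours, purely for illustration:

```python
from itertools import product

def card_down(domains):
    """card-down(X): the fewest distinct values over all complete assignments."""
    return min(len(set(a)) for a in product(*domains))

def card_up(domains):
    """card-up(X): the most distinct values over all complete assignments."""
    return max(len(set(a)) for a in product(*domains))
```

For instance, with domains {1}, {2,3}, {2,3}, {2,3}, {1,2,3,4,5} we get `card_down` = 2 and `card_up` = 4, which are exactly the bounds that AtMostNValue and AtLeastNValue propagate onto N.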

Lemma 1 (adapted from [5])

Consider NValue([X_1, …, X_n], N). If card↓(X) ≤ min(N) and max(N) ≤ card↑(X), then the variable N is BC.

Proof:  Let S be an assignment of X in D(X_1) × ⋯ × D(X_n) with card(S) = card↓(X) and S′ be an assignment of X in D(X_1) × ⋯ × D(X_n) with card(S′) = card↑(X). Consider the sequence S = S_0, S_1, …, S_n = S′ where S_j is the same as S_{j−1} except that X_j has been assigned its value in S′ instead of its value in S. |card(S_{j−1}) − card(S_j)| ≤ 1 because they only differ on X_j. Hence, for any k ∈ [card↓(X), card↑(X)], there exists j with card(S_j) = k. Thus, S_j is a bound support for N = k on NValue. Therefore, min(N) and max(N) have a bound support.

We now prove that decomposing the NValue constraint into AtMostNValue and AtLeastNValue constraints does not hinder pruning when enforcing BC.

Theorem 2

BC on NValue([X_1, …, X_n], N) is equivalent to BC on AtMostNValue([X_1, …, X_n], N) and on AtLeastNValue([X_1, …, X_n], N).

Proof:  Suppose the AtMostNValue and AtLeastNValue constraints are BC. The AtMostNValue constraint guarantees that card↓(X) ≤ min(N) and the AtLeastNValue constraint guarantees that max(N) ≤ card↑(X). Therefore, card↓(X) ≤ min(N) ≤ max(N) ≤ card↑(X). By Lemma 1, the variable N is bound consistent.

Consider a variable/bound value pair (X_i, b). Let S_1 be a bound support of (X_i, b) in the AtLeastNValue constraint and S_2 be a bound support of (X_i, b) in the AtMostNValue constraint. We have card(S_1) ≥ min(N) and card(S_2) ≤ max(N) by definition of AtLeastNValue and AtMostNValue. Consider the sequence S_1 = T_0, T_1, …, T_n = S_2 where T_j is the same as T_{j−1} except that X_j has been assigned its value in S_2 instead of its value in S_1. |card(T_{j−1}) − card(T_j)| ≤ 1 because they only differ on X_j. Hence, there exists j with min(N) ≤ card(T_j) ≤ max(N). All values in S_1 and S_2 lie within the bounds of their variables because they belong to bound supports, so the same holds for every T_j. Thus, card(T_j) ∈ [min(N), max(N)] and T_j is a bound support for (X_i, b) on NValue.

When enforcing domain consistency, Bessiere et al. [5] noted that decomposing the NValue constraint into AtMostNValue and AtLeastNValue constraints does hinder propagation, but only when D(N) contains just card↓(X) and card↑(X) and there is a gap in the domain in-between (see Theorem 1 in [5] and the discussion that follows). When enforcing BC, any such gap in the domain of N is ignored.

4 AtMostNValue constraint

We now give a decomposition for the AtMostNValue constraint which does not hinder bound consistency propagation. To decompose the AtMostNValue constraint, we introduce 0/1 variables A_ilu (1 ≤ i ≤ n, 1 ≤ l ≤ u ≤ d) to represent whether X_i uses a value in the interval [l, u], and “pyramid” variables M_lu (1 ≤ l ≤ u ≤ d) with domains [0, u − l + 1] which count the number of values taken inside the interval [l, u]. To constrain these introduced variables, we post the following constraints:

A_ilu = 1 ↔ X_i ∈ [l, u],  1 ≤ i ≤ n, 1 ≤ l ≤ u ≤ d   (4)
A_ilu ≤ M_lu,  1 ≤ i ≤ n, 1 ≤ l ≤ u ≤ d   (5)
M_1u = M_1k + M_(k+1)u,  1 ≤ k < u ≤ d   (6)
M_1d ≤ N   (7)
Running Example

Consider the decomposition of an AtMostNValue constraint over the following variables and values:

X_1 ∈ {1}, X_2 ∈ {2, 3}, X_3 ∈ {2, 3}, X_4 ∈ {2, 3}, X_5 ∈ {1, 2, 3, 4, 5}, N ∈ {1, 2}

Observe that we consider that value 5 for N has already been pruned by AtLeastNValue, as will be shown in the next sections. Bound consistency reasoning on the decomposition will make the following inferences. As D(X_1) = {1}, from (4) we get A_1,1,1 = 1. Hence by (5), min(M_1,1) = 1. Similarly, as D(X_2) ⊆ [2, 3], we get A_2,2,3 = 1 and min(M_2,3) = 1. Now min(M_1,3) = min(M_1,1) + min(M_2,3) = 2 by (6). By (7) and (6), N ≥ M_1,5 = M_1,3 + M_4,5 ≥ 2, so min(N) = 2 and N = 2. Since max(N) = 2, we deduce that max(M_1,5) = 2 and hence max(M_4,5) = 0. This gives M_4,5 = 0. By (5), A_5,4,5 = 0. Finally, from (4), we get X_5 ∉ [4, 5] and max(X_5) = 3. This gives us bound consistent domains for the AtMostNValue constraint.
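The interval reasoning in this example rests on finding a maximum set of pairwise disjoint variable ranges, which the classic earliest-finishing-first greedy computes; for interval domains its size is the lower bound propagated onto min(N). A sketch under that assumption (our own helper, not the paper's code):

```python
def min_distinct_bound(ranges):
    """Greedily count a maximum set of pairwise disjoint variable ranges
    (sort by upper endpoint, keep every range starting after the last
    kept one ends). For interval domains this equals card-down(X), the
    bound AtMostNValue propagates onto min(N)."""
    count, last_end = 0, float('-inf')
    for lo, hi in sorted(ranges, key=lambda r: r[1]):
        if lo > last_end:
            count += 1
            last_end = hi
    return count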

We now prove that this decomposition does not hinder propagation in general.

Theorem 3

BC on constraints (4) to (7) is equivalent to BC on AtMostNValue([X_1, …, X_n], N), and takes O(nd^3) time to enforce down the branch of the search tree.

Proof:  First note that changing the domains of the X variables cannot affect the upper bound of N by the AtMostNValue constraint and, conversely, changing the lower bound of N cannot affect the domains of the X variables.

Let Y be a maximum cardinality subset of variables of X whose ranges are pairwise disjoint (i.e., [min(X_i), max(X_i)] ∩ [min(X_j), max(X_j)] = ∅ for all distinct X_i, X_j ∈ Y). Let I_Y = {[l_1, u_1], …, [l_p, u_p]} be the corresponding ordered set of disjoint ranges of the variables in Y. It has been shown in [4] that |Y| = card↓(X).

Consider an interval [l_j, u_j] ∈ I_Y. Constraints (5) ensure that the variables M_(l_j)(u_j) are greater than or equal to 1, and constraints (6) ensure that the variable M_1d is greater than or equal to the sum of the lower bounds of the variables M_(l_1)(u_1), …, M_(l_p)(u_p), because the intervals are disjoint. Therefore, the variable N is greater than or equal to card↓(X) and it is bound consistent.

We show that when N is BC and card↓(X) < max(N), all X variables are BC. Take any assignment S such that card(S) = card↓(X). Let S′ be the assignment where the value of X_i in S has been replaced by b, one of the bounds of X_i. We know that card(S′) ≤ card(S) + 1 because only one variable has been flipped. Hence, card(S′) ≤ card↓(X) + 1 ≤ max(N) and any such S′ is a bound support. D(N) necessarily contains such a value by assumption.

The only case when pruning might occur is if the variable N is ground and max(N) = card↓(X). Constraints (6) imply that M_1d equals the sum of the variables M_(l_j)(u_j). The lower bound of each variable M_(l_j)(u_j) is greater than or equal to one and there are p = max(N) of these intervals. Therefore, by constraint (7), the upper bounds of the variables M_lu that correspond to intervals outside the set I_Y are forced to zero.

There are O(nd^2) constraints (4) and O(nd^2) constraints (5) that can be woken O(d) times down the branch of the search tree. Each invocation requires O(1) time, for a total of O(nd^3) down the branch. There are O(d^2) constraints (6) which can be woken O(d) times down the branch and each invocation takes O(1) time. This gives a total of O(d^3). The final complexity down the branch of the search tree is therefore O(nd^3).

5 Faster decompositions

We can improve how our solver handles this decomposition of the AtMostNValue constraint by adding implied constraints and by implementing specialized propagators. Our first improvement is to add an implied constraint and enforce BC on it:

N ≥ M_1u + M_(u+1)d,  1 ≤ u < d   (8)

This does not change the asymptotic complexity of reasoning with the decomposition, nor does it improve the level of propagation achieved. However, we have found that the fixed point of propagation is reached quicker in practice with such an implied constraint.

Our second improvement decreases the asymptotic complexity of enforcing BC on the decomposition of Section 4. The complexity is dominated by reasoning with constraints (4), which channel from the X variables to the A variables and thence onto the M variables (through constraints (5)). If constraints (4) were not woken uselessly, enforcing BC should cost O(1) per constraint down the branch. Unfortunately, existing solvers wake up such constraints as soon as a bound is modified, thus a cost in O(d) per constraint. We therefore implemented a specialized propagator to channel between the X and M variables efficiently. To be more precise, we remove the A variables and replace them with Boolean variables Z_il (1 ≤ i ≤ n, 1 ≤ l ≤ d), where Z_il = 1 stands for X_i ≤ l. We then add the following constraints

Z_il = 1 ↔ X_i ≤ l,  1 ≤ i ≤ n, 1 ≤ l ≤ d   (9)
Z_i(l−1) = 0 ∧ Z_iu = 1 → M_lu ≥ 1,  1 ≤ i ≤ n, 1 ≤ l ≤ u ≤ d   (10)

These constraints are enough to channel changes in the bounds of the X variables to the M variables. There are O(nd) constraints (9), each of which can be propagated in O(d) time over a branch, for a total of O(nd^2). There are O(nd^2) clausal constraints (10) and each of them can be made BC in O(1) time down a branch of the search tree, for a total cost of O(nd^2). Since channeling dominates the asymptotic complexity of the entire decomposition of Section 4, this improves the complexity of this decomposition to O(nd^2). This is similar to the technique used in [6] to improve the asymptotic complexity of the decomposition of the All-Different constraint.
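The channeling idea can be illustrated with the standard order (“ladder”) encoding: each Boolean records whether X_i ≤ l is already decided, so every literal is fixed at most once down a branch. A hypothetical sketch (names ours):

```python
def channel_bounds(lo, hi, d):
    """Order-encoded view of a variable with current bounds [lo, hi] over
    values 1..d: entry l-1 is 0 if X <= l is impossible (l < lo), 1 if it
    is entailed (l >= hi), and None while undecided. Tightening a bound
    only ever turns None into 0 or 1, never back, which is why each
    literal costs O(1) amortised down a branch."""
    return [0 if l < lo else (1 if l >= hi else None)
            for l in range(1, d + 1)]
```

For example, `channel_bounds(2, 4, 5)` gives `[0, None, None, 1, 1]`; once the literal for X ≤ u is 0, or the literal for X ≤ l − 1 is 1, the variable cannot intersect [l, u].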

Our third improvement is to enforce stronger pruning by observing that when max(M_lu) = 0, we can remove the interval [l, u] from the domains of all variables, regardless of whether this modifies their bounds. This corresponds to enforcing RC on constraints (4). Interestingly, this is sufficient to achieve RC on the AtMostNValue constraint. Unfortunately, constraints (10) cannot achieve this pruning, and using constraints (4) increases the complexity of the decomposition back to O(nd^3). We do it instead by extending the decomposition with Boolean variables W^k_il (1 ≤ i ≤ n, 0 ≤ k ≤ ⌊log d⌋, 1 ≤ l ≤ d), where W^k_il = 1 stands for X_i possibly using a value in the interval [l, l + 2^k − 1]. The following constraint ensures that W^0_il = 0 implies X_i ≠ l.

W^0_il = 0 → X_i ≠ l,  1 ≤ i ≤ n, 1 ≤ l ≤ d   (11)

Clearly we can enforce RC on this constraint in O(d) time over a branch for each variable, and O(nd) for all variables X_i. We can then use the following clausal constraints to channel from the variables M_lu to these variables and on to the X variables. These constraints are posted for every 1 ≤ i ≤ n, 0 < k ≤ ⌊log d⌋ and integers l, u such that 1 ≤ l ≤ u ≤ d:

W^k_il = 0 → W^(k−1)_il = 0   (12)
W^k_il = 0 → W^(k−1)_i(l+2^(k−1)) = 0   (13)
M_lu = 0 → W^k_il = 0,  where 2^k ≤ u − l + 1 < 2^(k+1)   (14)
M_lu = 0 → W^k_i(u−2^k+1) = 0,  where 2^k ≤ u − l + 1 < 2^(k+1)   (15)

The variable W^k_il, similarly to the variables A_ilu, is true when X_i may use a value in the interval [l, l + 2^k − 1], but instead of having one such variable for every interval, we only have them for intervals whose length is a power of two. When max(M_lu) = 0, with 2^k ≤ u − l + 1 < 2^(k+1), the constraints (14)–(15) set to 0 the W variables that correspond to the two intervals of length 2^k that start at l and finish at u, respectively. In turn, the constraints (12)–(13) set to 0 the variables that correspond to intervals of length 2^(k−1), all the way down to intervals of size 1. These trigger the constraints (11), so all values in the interval [l, u] are removed from the domains of all variables.

Example

Suppose min(X_i) = 3 and max(X_i) = 6. Then, by (9), Z_i2 = 0 and Z_i6 = 1, and by (10), M_3,6 ≥ 1. Conversely, suppose max(M_3,7) = 0. Then, by (14)–(15), we get W^2_i3 = 0 and W^2_i4 = 0. From W^2_i3 = 0 and (12)–(13) we get W^1_i3 = 0, W^1_i5 = 0, W^0_i3 = W^0_i4 = W^0_i5 = W^0_i6 = 0, and by (11), the interval [3, 6] is pruned from D(X_i). Similarly, W^2_i4 = 0 causes the interval [4, 7] to be removed from D(X_i), so X_i ∉ [3, 7].

Note that RC can be enforced on each of these constraints in constant time over a branch. There exist O(nd log d) of the constraints (12)–(13) and O(nd^2) of the constraints (14)–(15), so the total time to propagate them all down a branch is O(nd^2).
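The power-of-two trick is the same interval-covering idea used by sparse tables for range-minimum queries: any interval is the union of two (possibly overlapping) power-of-two intervals, one anchored at each end. A small sketch of our own:

```python
def pow2_cover(l, u):
    """Cover [l, u] with two intervals of length 2**k, where 2**k is the
    largest power of two not exceeding u - l + 1: one starting at l, the
    other ending at u. Their union is exactly [l, u]."""
    k = (u - l + 1).bit_length() - 1    # 2**k <= u - l + 1 < 2**(k + 1)
    return (l, l + 2**k - 1), (u - 2**k + 1, u)
```

For instance, `pow2_cover(3, 7)` returns `((3, 6), (4, 7))`: emptying both length-4 intervals empties [3, 7], and the halving constraints then cascade down to unit intervals.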

6 AtLeastNValue constraint

There is a similar decomposition for the AtLeastNValue constraint. We introduce 0/1 variables A_ilu (1 ≤ i ≤ n, 1 ≤ l ≤ u ≤ d) to represent whether X_i uses a value in the interval [l, u], and integer variables E_lu (1 ≤ l ≤ u ≤ d) with domains [0, n] to count the number of times values in [l, u] are re-used, that is, how much the number of variables taking values in [l, u] exceeds the number of values in [l, u]. To constrain these introduced variables, we post the following constraints:

A_ilu = 1 ↔ X_i ∈ [l, u],  1 ≤ i ≤ n, 1 ≤ l ≤ u ≤ d   (16)
E_lu ≥ (A_1lu + … + A_nlu) − (u − l + 1),  1 ≤ l ≤ u ≤ d   (17)
E_1u = E_1k + E_(k+1)u,  1 ≤ k < u ≤ d   (18)
N ≤ n − E_1d   (19)
Running Example

Consider the decomposition of an AtLeastNValue constraint over the following variables and values:

X_1 ∈ {1}, X_2 ∈ {2, 3}, X_3 ∈ {2, 3}, X_4 ∈ {2, 3}, X_5 ∈ {1, 2, 3, 4, 5}, N ∈ {1, 2, 5}

Bound consistency reasoning on the decomposition will make the following inferences. As D(X_i) ⊆ [2, 3] for i ∈ {2, 3, 4}, from (16) we get A_i,2,3 = 1 for i ∈ {2, 3, 4}. Hence, by (17), min(E_2,3) = 1. By (18), min(E_1,3) = 1 and min(E_1,5) = 1. Since n = 5 we deduce that n − E_1,5 ≤ 4. Finally, from (19) and the fact that the values 3 and 4 are not in D(N), we get max(N) = 2. This gives us bound consistent domains for the AtLeastNValue constraint.

We now prove that this decomposition does not hinder propagation in general.

Theorem 4

BC on the constraints (16) to (19) is equivalent to BC on AtLeastNValue([X_1, …, X_n], N), and takes O(nd^3) time to enforce down the branch of the search tree.

Proof:  First note that changing the domains of the X variables cannot affect the lower bound of N by the AtLeastNValue constraint and, conversely, changing the upper bound of N cannot affect the domains of the X variables.

It is known [3] that card↑(X) is equal to the size of a maximum matching in the value graph of the constraint. Since max(N) ≤ card↑(X), we show that the upper bound of N obtained by propagating (19), namely n − min(E_1d), equals the size of a maximum matching.111We assume here that max(N) is not pruned by other constraints. We first show that we can construct a matching of size n − min(E_1d), then show that it is a maximum matching. The proof uses a partition of the interval [1, d] into a set of maximal saturated intervals, in which every value is taken by some variable whose domain lies inside the interval, and a set of unsaturated intervals that lie between consecutive saturated intervals.

Let I = {I_1, …, I_p} be the ordered set of maximal saturated intervals, I_j = [l_j, u_j]. Note that the intervals in I are disjoint, otherwise the intervals would not be maximal. An interval I_j is smaller than I_k iff u_j < l_k. We denote by D_k the union of the first k intervals, D_k = I_1 ∪ ⋯ ∪ I_k, and by X(D_k) the variables whose domain is inside one of the intervals I_1, …, I_k.

Our construction of a matching uses two sets of variables, X(D_p) and X ∖ X(D_p). First, we identify the cardinality of these two sets. Namely, we show that the size of the set X(D_p) is |D_p| + min(E_1d) and the size of the set X ∖ X(D_p) is n − |D_p| − min(E_1d).

Intervals in I are saturated, therefore each value from these intervals is taken by a variable in X(D_p). Therefore, X(D_p) has size at least |D_p|. Moreover, there exist min(E_1d) additional variables that take values from D_p, because values from intervals between two consecutive intervals in I do not contribute to the lower bound of the variable E_1d by construction of I. Therefore, the number of variables in X(D_p) is at least |D_p| + min(E_1d). Note that constraints (18) imply that E_1d equals the sum of the variables E_(l_j)(u_j). As the intervals in I are disjoint, if |X(D_p)| were greater than |D_p| + min(E_1d) then the lower bound of the variable E_1d would be increased. Hence, |X(D_p)| = |D_p| + min(E_1d).

Since all these intervals are saturated, we can construct a matching M_1 of size |D_p| using the variables in X(D_p). The size of X ∖ X(D_p) is n − |D_p| − min(E_1d). We show by contradiction that we can construct a matching M_2 of size n − |D_p| − min(E_1d) using the variables in X ∖ X(D_p) and the values [1, d] ∖ D_p.

Suppose such a matching does not exist. Then, there exists an interval [l, u] such that, after consuming the values in D_p with variables in X(D_p), we are left with fewer values in [l, u] than variables of X ∖ X(D_p) whose domain is contained in [l, u]. Let r be the number of values inside the interval [l, u] that are taken by variables in X(D_p), and let m be the number of variables of X ∖ X(D_p) whose domain is inside [l, u]. The total number of variables whose domain is inside [l, u] is at least m + r. The number of values in [l, u] that are not taken by variables in X(D_p) is (u − l + 1) − r, so failure of the matching means m > (u − l + 1) − r. Therefore, the number of variables whose domain is inside [l, u] is greater than u − l + 1. By construction of I this is impossible: [l, u] would then itself be saturated, so the intervals in I that are subsets of [l, u] would not be maximal. This leads to a contradiction, so we can construct a matching M_2 of size n − |D_p| − min(E_1d).

Now suppose that M = M_1 ∪ M_2 is not a maximum matching. This means that min(E_1d) is overestimated by propagation on constraints (16)–(19). Since M is not a maximum matching, there exists an augmenting path of M that produces a matching M′, such that |M′| = |M| + 1. This new matching covers all the values that M covers and one additional value v. We show that v cannot belong to the interval [1, d], a contradiction.

The value v cannot be in any interval in I, because all values in these intervals are used by variables whose domain is contained in D_p. In addition, v cannot be in an interval between two consecutive intervals in I, because those intervals do not contribute to the lower bound of E_1d. Thus, M′ cannot cover more values than M and they must have the same size, a contradiction.

We show that when N is BC and min(N) < card↑(X), all X variables are BC. Take any assignment S such that card(S) = card↑(X). Let S′ be the assignment where the value of X_i in S has been replaced by b, one of the bounds of X_i. We know that card(S′) ≥ card(S) − 1 because only one variable has been flipped. Hence, card(S′) ≥ card↑(X) − 1 ≥ min(N) and any such S′ is a bound support. D(N) necessarily contains such a value by assumption.

We now show that if min(N) = card↑(X), enforcing BC on the constraints (16)–(19) makes the X variables BC with respect to the AtLeastNValue constraint. We first observe that in a bound support, variables must take the maximum number of different values because min(N) = card↑(X). Hence, in a bound support, variables that are not included in a saturated interval will take values outside any saturated interval they overlap and they all take different values. We recall that min(N) = n − min(E_1d). Hence, by constraint (19), E_1d is forced to equal min(E_1d). We recall that the size of the set X(D_p) equals |D_p| + min(E_1d). Constraints (18) imply that E_1d equals the sum of the variables E_lu over the saturated and unsaturated intervals. Hence, by constraints (18), the upper bounds of all variables E_(l_j)(u_j) that correspond to the saturated intervals are forced to their lower bounds. Thus, by constraints (16) and (17), all variables in X ∖ X(D_p) have their bounds pruned if they belong to D_p. By constraints (18) again, the upper bounds of all variables E_lu that correspond to the unsaturated intervals are forced to take value 0. Thus, by constraints (16) and (17), all variables in X ∖ X(D_p) have their bounds pruned if they belong to a Hall interval of other variables in X ∖ X(D_p). This is what BC on the All-Different constraint does [6].

There are O(nd^2) constraints (16) that can be woken O(d) times down the branch of the search tree at a cost of O(1) each, so a total of O(nd^3) down the branch. There are O(d^2) constraints (17) which can be propagated in O(n) time down the branch for a total of O(nd^2). There are O(d^2) constraints (18) which can be woken O(n) times each down the branch for a total cost in O(nd^2) time down the branch. The final complexity down the branch of the search tree is therefore O(nd^3).

The complexity of enforcing BC on the AtLeastNValue constraint can be improved to O(nd^2) in a way similar to that described in Section 5 and in [6].
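The value-graph matching that underpins the proof of Theorem 4 (card↑(X) equals its size) can be computed with Kuhn's augmenting-path algorithm; an illustrative sketch of ours:

```python
def card_up_matching(domains):
    """Size of a maximum matching between variables and values in the
    value graph, computed with Kuhn's augmenting-path algorithm.
    This equals card-up(X), the most distinct values the variables can take."""
    match = {}                                   # value -> variable index

    def augment(i, seen):
        for v in domains[i]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = i                 # (re)assign value v to i
                    return True
        return False

    return sum(augment(i, set()) for i in range(len(domains)))
```

For X_1 ∈ {1}, X_2, X_3, X_4 ∈ {2, 3}, X_5 ∈ {1, …, 5}, the result is 4: the three variables confined to {2, 3} can contribute at most two distinct values between them.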

7 Experimental results

To evaluate these decompositions, we performed experiments on two problem domains. We used the same problems as in a previous experimental comparison of propagators for the AtMostNValue constraint [5]. We ran experiments with ILOG Solver 6.2 on an Intel Xeon 4 CPU, 2.0 GHz, with 4 GB of RAM.

7.1 Dominating set of the Queen’s graph

The problem is to put the minimum number of queens on a chessboard, so that each square either contains a queen or is attacked by one. This is equivalent to the dominating set problem of the Queen's graph. Each vertex in the Queen's graph corresponds to a square of the chessboard and there exists an edge between two vertices iff a queen on one square can attack a queen on the other square. To model the problem, we use a variable X_i for each square, with values from 1 to n^2, and post a single AtMostNValue constraint. The value j belongs to D(X_i) iff there exists an edge (i, j) in the Queen's graph or i = j. We use a minimum domain variable ordering and a lexicographical value ordering. For the board sizes we consider, all minimum dominating sets for the Queen's problem are of one of two consecutive sizes [10]. We therefore only solved instances for these two values of N.
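The domain construction described above can be sketched as follows (a hypothetical helper following the text, not the paper's code): square j belongs to D(X_i) iff j = i or squares i and j share a row, column or diagonal.

```python
def queen_domains(n):
    """Domains for the dominating-set model of the n x n Queen's graph,
    squares numbered 0 .. n*n-1: D(X_s) contains s itself and every
    square a queen on s would attack."""
    def attacks(a, b):
        (r1, c1), (r2, c2) = divmod(a, n), divmod(b, n)
        return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)
    return [{t for t in range(n * n) if t == s or attacks(s, t)}
            for s in range(n * n)]
```

An AtMostNValue constraint over these domains with N ≤ k then asks whether k queens can dominate the board: the distinct values used are the squares on which queens stand.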

We compare our decomposition with two simple decompositions of the AtMostNValue constraint. The first decomposition is the one described in Section 3.1 except that in constraint (3), we replace “=” with “≥”. We denote this decomposition Occs. The second decomposition is similar to the first one, but we use the cardinality variables of a GCC constraint to keep track of the used values. We call this decomposition GCC. The final two decompositions are variants of the decomposition described in Section 4, which we call BC and RC depending on whether we enforce BC or RC on our decomposition. As explained in Section 5, we channel the X variables directly to the pyramid variables M_lu to avoid introducing many auxiliary variables, and we add the implied constraint (8) to the decomposition to speed up propagation across the pyramid. For the decomposition that enforces RC, we did not fully implement the decomposition of Section 5, but rather a simple channeling propagator that achieves RC on constraints (4) with better asymptotic constants than posting constraints (4) directly. Finally, we re-implemented the ternary sum constraint in ILOG Solver, which gave us an additional constant-factor speed up.

             Occs                 GCC                  BC                  RC
 n   k   backtracks   time   backtracks     time  backtracks   time  backtracks   time
 5   3           34   0.01           34     0.06           7   0.00           7   0.00
 6   3          540   0.16          540     2.56         118   0.03         118   0.03
 7   4      195,212  84.50      195,212 1,681.21      83,731  15.49      83,731  21.21
 8   5      390,717 255.64      390,717 8,568.35     256,582  58.42     256,582  89.30

Table 1: Backtracks and runtime (in seconds) to solve the dominating set problem for the Queen's graph. Best results for any statistic are shown in bold.

Results are presented in Table 1. Our decomposition performs better than the other two decompositions, both in runtime and in number of backtracks. We observe that BC and RC prune the same amount (i.e., they give the same number of backtracks) on these instances, but BC is faster on the larger problems. It should be pointed out that our results are comparable with the results for the AtMostNValue bound consistency propagator from [5]. Whilst our decomposition is not as efficient as the best results presented in that paper, it was much easier to implement.

7.2 Random binary CSP problems

We also reproduced the set of experiments on random binary CSP problems from [5]. These problems can be described by four parameters: the number of variables n, the domain size d, the n