1 Introduction
One of the important features of constraint programming is global constraints. These capture common modelling patterns (e.g. “these jobs need to be processed on the same machine so must take place at different times”). In addition, efficient propagation algorithms are associated with global constraints for pruning the search space (e.g. “these 5 jobs have only 4 time slots between them so, by a pigeonhole argument, the problem is infeasible”). One of the oldest and most useful global constraints is the AllDifferent constraint [1]. This specifies that a set of variables takes all different values. Several algorithms have been proposed for propagating this constraint (e.g. [2, 3, 4, 5, 6]). Such propagators can have a significant impact on our ability to solve problems (see, for instance, [7]). It is not hard to provide pathological problems on which some of these propagation algorithms provide exponential savings. A number of hybrid frameworks have been proposed to combine the benefits of such propagation algorithms with OR methods like integer linear programming (see, for instance, [8]). In addition, the convex hull of a number of global constraints has been studied in detail (see, for instance, [9]). In this paper, we consider a modelling pattern [10] that occurs in many problems involving AllDifferent constraints. In addition to the constraint that no pair of variables can take the same value, we may also have a constraint that certain pairs of variables are ordered (e.g. “these two jobs need to be processed on the same machine so must take place at different times, but the first job must be processed before the second”). We propose a new global constraint, AllDiffPrec, that captures this pattern. This global constraint is a specialization of the general framework that combines several Cumulative and precedence constraints [11, 12]. Reasoning about such combinations of global constraints may achieve additional pruning. In this work we propose an efficient propagation algorithm for the AllDiffPrec constraint. However, we also prove that propagating the constraint completely is computationally intractable.
2 Formal background
A constraint satisfaction problem (CSP) consists of a set of variables, each with a domain of possible values, and a set of constraints specifying allowed values for subsets of variables. A solution is an assignment of values to the variables satisfying the constraints. We write D(x) for the domain of the variable x. Domains can be ordered (e.g. integers). In this case, we write min(x) and max(x) for the minimum and maximum elements in D(x). The scope of a constraint is the set of variables to which it is applied. A global constraint is one in which the number of variables is not fixed. For instance, the global constraint AllDifferent([x_1, …, x_n]) ensures x_i ≠ x_j for any i < j. By comparison, the binary constraint x_1 ≠ x_2 is not global.
When solving a CSP, we often use propagation algorithms to prune the search space by enforcing properties like domain, bounds or range consistency. A support on a constraint C is an assignment of all variables in the scope of C to values in their domains such that C is satisfied. A variable-value pair is consistent on C iff it belongs to a support of C. A constraint C is domain consistent (DC) iff every value in the domain of every variable in the scope of C is consistent on C. A bound support on C is an assignment of all variables in the scope of C to values between their minimum and maximum values (respectively called lower and upper bound) such that C is satisfied. A variable-value pair is bounds consistent on C iff it belongs to a bound support of C. A constraint is bounds consistent (BC) iff the lower and upper bounds of every variable in its scope are bounds consistent on it. Range consistency is stronger than bounds consistency but weaker than domain consistency. A constraint C is range consistent (RC) iff every value in the domain of every variable in the scope of C is bounds consistent on C. A CSP is DC/RC/BC iff each of its constraints is DC/RC/BC. Generic algorithms exist for enforcing such local consistency properties. For global constraints like AllDifferent, specialized methods have also been developed which offer computational efficiencies. For example, a bounds consistency propagator for AllDifferent is based on the notion of a Hall interval. A Hall interval is an interval of domain values that completely contains the domains of as many variables as it has values. Clearly, the variables whose domains are contained within the Hall interval consume all the values in the Hall interval, whilst any other variable must find its support outside the Hall interval.
We will compare local consistency properties applied to logically equivalent constraints. As in [13], we say that a local consistency property Φ on a set of constraints S is stronger than Ψ on the logically equivalent set T iff, given any domains, Φ on S removes all the values that Ψ on T removes, and sometimes more. For example, domain consistency on AllDifferent([x_1, …, x_n]) is stronger than domain consistency on the binary not-equals constraints x_i ≠ x_j for i < j. In other words, decomposition of the global AllDifferent constraint into binary not-equals constraints hinders propagation.
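To make this comparison concrete, here is a small brute-force sketch (the function names dc_global and ac_binary are ours, not from the literature): with x_1, x_2 ∈ {1,2} and x_3 ∈ {1,2,3}, domain consistency on the global AllDifferent prunes D(x_3) to {3} by the pigeonhole argument, whereas arc consistency on each binary not-equals constraint separately prunes nothing.

```python
from itertools import product

def dc_global(domains, check):
    """Domain consistency on one global constraint by brute force: keep
    value v in D(x_i) iff some full assignment with x_i = v satisfies
    `check`. Iterate to a fixpoint."""
    changed = True
    while changed:
        changed = False
        for i in range(len(domains)):
            supported = {t[i] for t in product(*domains) if check(t)}
            if supported != domains[i]:
                domains[i], changed = supported, True
    return domains

def ac_binary(domains, arcs):
    """Arc consistency on binary constraints (i, j, pred): keep v in
    D(x_i) iff some w in D(x_j) gives pred(v, w), and symmetrically."""
    changed = True
    while changed:
        changed = False
        for i, j, pred in arcs:
            ki = {v for v in domains[i] if any(pred(v, w) for w in domains[j])}
            kj = {w for w in domains[j] if any(pred(v, w) for v in domains[i])}
            if ki != domains[i] or kj != domains[j]:
                domains[i], domains[j], changed = ki, kj, True
    return domains

doms = [{1, 2}, {1, 2}, {1, 2, 3}]
neq = [(i, j, lambda v, w: v != w) for i in range(3) for j in range(i + 1, 3)]
print(dc_global([d.copy() for d in doms], lambda t: len(set(t)) == len(t)))
# -> [{1, 2}, {1, 2}, {3}]
print(ac_binary([d.copy() for d in doms], neq))
# -> [{1, 2}, {1, 2}, {1, 2, 3}]
```

The brute-force check is exponential and only serves to illustrate the pruning gap; the specialized propagators cited above achieve the same filtering in polynomial time.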
3 Some examples
To motivate the introduction of this global constraint, we give some examples of models where we have one or more sets of variables which take all-different values, as well as certain pairs of these variables which are ordered.
3.1 Exam timetabling
Suppose we are timetabling exams. A straightforward model has variables for exams, and values which are the possible times for these exams. In such a model, we may have temporal precedences (e.g. part 1 of the physics exam must be before part 2) as well as AllDifferent constraints on those sets of exams with students in common (e.g. all physics, maths, and chemistry exams must occur at different times since there are students that need to sit all three exams).
3.2 Scheduling
Suppose we are scheduling a single machine with unit-time tasks, subject to precedence constraints and release and due times [14]. A straightforward model has variables for the tasks, and values which are the possible times that we execute each task. In such a model, we have an AllDiffPrec constraint on variables whose domains are the appropriate intervals. For example, consider scheduling instructions in a block (a straight-line sequence of code with a single entry and exit point) on one processor where all instructions take the same time to execute. Such a schedule is subject to a number of different types of precedence constraints. For instance, instruction i must execute before instruction j if:
- Read-after-write dependency: j reads a register written by i;
- Write-after-write dependency: j writes a register also written by i;
- Write-after-read dependency: j writes a register that i reads.
Such dependencies give rise to precedence constraints between the instructions.
3.3 Breaking value symmetry
Many constraint models contain value symmetry. Puget has proposed a general method for breaking any number of value symmetries in polynomial time [15, 16]. This method introduces variables Z_v to represent the index of the first occurrence of each value v.
Value symmetry on the X_i is transformed into variable symmetry on the Z_v. This variable symmetry is easy to break. We simply need to post precedence constraints on the Z_v. Depending on the value symmetry, we need different precedence constraints.
Consider, for example, finding a graceful labelling of a graph. A graceful labelling is a labelling of the vertices of a graph with distinct integers from 0 to e (the number of edges) such that the edges (which are labelled with the absolute differences of the labels of the two connected vertices) are also distinct. Graceful labellings have applications in radio astronomy, communication networks, X-ray crystallography, coding theory and elsewhere. Here is the graceful labelling of the graph:
A straightforward model for gracefully labelling a graph has variables for the vertex labels, and values which are the integers from 0 to e. This model has a simple value symmetry as we can map every value v onto e − v. In [16], Puget breaks this value symmetry with ordering constraints on the first-occurrence variables.
Note that all the first-occurrence variables take different values, as each integer first occurs in the graph at a different index. Hence, we have a sequence of variables on which there are both an AllDifferent constraint and precedence constraints.
4 AllDiffPrec
Motivated by such examples, we propose the global constraint AllDiffPrec([x_1, …, x_n], E), where E is a set containing pairs of variable indices. This ensures x_i ≠ x_j for any i < j and x_i < x_j for any (i, j) ∈ E. Without loss of generality, we assume that E does not contain cycles. If it does, the constraint is trivially unsatisfiable. It is not hard to see that decomposition of this global constraint into separate AllDifferent and binary ordering constraints can hinder propagation.
Lemma 1
Domain consistency on the AllDiffPrec constraint is stronger than domain consistency on the decomposition into AllDifferent and the binary ordering constraints x_i < x_j for (i, j) ∈ E. Bounds consistency on AllDiffPrec is stronger than bounds consistency on the decomposition, whilst range consistency on AllDiffPrec is stronger than range consistency on the decomposition.
Proof: Consider AllDiffPrec([x_1, x_2, x_3], E) with E = {(2, 1), (2, 3)}, D(x_1) = D(x_3) = {2, 3} and D(x_2) = {1, 2}. Then the decomposition into AllDifferent([x_1, x_2, x_3]) and the binary ordering constraints x_2 < x_1 and x_2 < x_3 is domain consistent. Hence, it is also range and bounds consistent. However, enforcing bounds consistency directly on the global AllDiffPrec constraint will prune 2 from the domain of x_2, since the assignment x_2 = 2 forces both x_1 and x_3 to take the value 3 and so has no bound support. Similarly, enforcing range or domain consistency will prune 2 from the domain of x_2.
A simple greedy method will find a bound support for the AllDiffPrec constraint. This method is an adaptation of the greedy method that builds a bound support for the AllDifferent constraint. For simplicity, we suppose that E contains the transitive closure of the precedence constraints. In fact, this step is not required but it makes our argument easier. First, we need to preprocess the variables' domains so that they respect the precedence constraints. We notice that it is sufficient to enforce a weak condition on the bounds of the variables x_i and x_j for each (i, j) ∈ E, namely min(x_j) ≥ min(x_i) + 1 and max(x_i) ≤ max(x_j) − 1. If these conditions on the variables' domains are satisfied then we say that the domains are preprocessed. Second, we construct a satisfying assignment as follows. We process all values in increasing order. When processing a value v, we assign it to the variable with the smallest upper bound that has not yet been assigned and that contains v in its domain. Suppose there exists a set S of variables that share this smallest upper bound. To construct a solution for AllDifferent, we would break these ties arbitrarily. In this case, however, we select a variable that is not a successor of any variable in the set S. Such a variable always exists, as the transitive closure of the precedence graph does not contain cycles. By the correctness of the original algorithm, the resulting assignment is a solution of the AllDifferent constraint. In addition, this solution also satisfies the precedence constraints. Indeed, for a constraint x_i < x_j, the upper bound of x_i is necessarily smaller than or equal to the upper bound of x_j. In the case of equality, we tie-break in favour of x_i. Therefore, a value is assigned to x_i before a value gets assigned to x_j. Since we process values in increasing order, we obtain x_i < x_j as required.
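The greedy construction can be sketched as follows for interval (bounds) domains. This is our own sketch, not the paper's pseudocode: the preprocessing uses a crude fixpoint loop rather than a pass in topological order, and all names are ours.

```python
def bound_support(lb, ub, prec):
    """Greedy bound support for AllDiffPrec on interval domains (sketch).
    lb/ub: lists of lower/upper bounds; prec: set of pairs (i, j) meaning
    x_i < x_j, assumed transitively closed and acyclic.
    Returns an assignment (list of values) or None if none exists."""
    n = len(lb)
    lb, ub = lb[:], ub[:]
    for _ in range(n):                       # preprocess bounds (fixpoint)
        for i, j in prec:
            lb[j] = max(lb[j], lb[i] + 1)    # a successor starts later
            ub[i] = min(ub[i], ub[j] - 1)    # a predecessor ends earlier
    if any(lb[i] > ub[i] for i in range(n)):
        return None
    assign, free = [None] * n, set(range(n))
    for v in range(min(lb), max(ub) + 1):    # values in increasing order
        cands = [i for i in free if lb[i] <= v <= ub[i]]
        if not cands:
            continue
        m = min(ub[i] for i in cands)        # smallest upper bound first
        tied = [i for i in cands if ub[i] == m]
        # tie-break: pick a variable that succeeds no other tied variable
        pick = next(i for i in tied if not any((j, i) in prec for j in tied))
        assign[pick], free = v, free - {pick}
    return assign if not free else None
```

For instance, bound_support([1, 1, 1, 1], [4, 4, 4, 4], {(0, 1), (0, 2), (0, 3)}) returns [1, 2, 3, 4]: preprocessing raises the successors' lower bounds to 2 and caps x_0's upper bound at 3, after which the greedy pass assigns the values in order.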
Example 1
Consider with , and . First, we preprocess domains to ensure that and , , . This gives , . As in the greedy algorithm, we consider the first value . This value is contained in domains of variables , and . As , by tie breaking we select variables that are not successors of any other variables among variables . There are two such variables: and . We break this tie arbitrarily and set to 1. The new domains are , , . The next value we consider is . Again, there exist two variables that contain this value, and they have the same upper bounds. By tiebreaking, we select . Finally, we assign and to 3 and 4 respectively.
We can design a filtering algorithm based on this satisfiability test. By successively reducing a variable's domain in halves with a binary search, we can filter the lower and upper bounds of a variable's domain with O(log d) tests, where d is the cardinality of the domain. Consider, for example, a variable x whose upper bound we want to filter. At the first step we temporarily fix the domain of x to one half and run the bounds disentailment detection algorithm. If this algorithm fails, we have halved the search space and repeat with the other half. If this algorithm does not fail, we know that there is a value in this half that has a bound support. Hence, we continue the binary search within this half. If each test takes t time and there are n variables to prune, the total running time is O(n · t · log d). In the rest of this paper, we improve on this using more sophisticated algorithmic ideas.
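This binary-search filtering can be sketched as follows for the upper bound (the lower bound is symmetric). As a stand-in for the full bounds disentailment test we use the classic greedy feasibility check for plain AllDifferent on interval domains; all names here are ours.

```python
def alldiff_feasible(lb, ub):
    """Greedy satisfiability test for AllDifferent on interval domains:
    scan variables by increasing upper bound, giving each the smallest
    unused value that is at least its lower bound."""
    used = set()
    for i in sorted(range(len(lb)), key=lambda k: ub[k]):
        v = lb[i]
        while v in used:
            v += 1
        if v > ub[i]:
            return False
        used.add(v)
    return True

def filter_ub(i, lb, ub, feasible):
    """Shrink x_i's upper bound to its largest supported value by binary
    search over [lb[i], ub[i]], using O(log d) calls to the feasibility
    test. Assumes feasible(lb, ub) holds on the current domains."""
    lo, hi = lb[i], ub[i]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        trial_lb, trial_ub = lb[:], ub[:]
        trial_lb[i], trial_ub[i] = mid, hi   # restrict x_i to [mid, hi]
        if feasible(trial_lb, trial_ub):
            lo = mid                         # a support lies in [mid, hi]
        else:
            hi = mid - 1                     # the whole top half is dead
    return lo
```

For example, with x_1 ∈ [2, 3] and x_2 ∈ [3, 3], the call filter_ub(0, [2, 3], [3, 3], alldiff_feasible) returns 2: restricting x_1 to [3, 3] fails the feasibility test, so the upper bound of x_1 is pruned to 2.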
5 Bounds consistency
We present an algorithm that enforces bounds consistency on the AllDiffPrec constraint. First, we consider an assignment x = v and the partial filtering that this assignment causes. We call this filtering the direct pruning caused by the assignment or, in short, the direct pruning of x = v. Informally, direct pruning works as follows. If x takes v then the value v becomes unavailable for the other variables due to the AllDifferent constraint. Hence, we remove v from the domains of variables that have v as their lower bound or upper bound. Due to the precedence constraints, we increase the lower bounds of the successors of x to v + 1 and decrease the upper bounds of the predecessors of x to v − 1. Note that direct pruning does not enforce bounds consistency on either AllDiffPrec or the single AllDifferent constraint. However, direct pruning is sufficient to detect bounds inconsistency, as we show below.
Let P(x) and S(x) be the sets of variables that precede and succeed x, respectively. We denote the domains obtained after the direct pruning of x = v as D^v, so that for all x_j other than x:
(1)  min(D^v(x_j)) = max(min(D(x_j)), v + 1) if x_j ∈ S(x)
(2)  max(D^v(x_j)) = min(max(D(x_j)), v − 1) if x_j ∈ P(x)
(3)  min(D^v(x_j)) = v + 1 if x_j ∉ P(x) ∪ S(x) and min(D(x_j)) = v
(4)  max(D^v(x_j)) = v − 1 if x_j ∉ P(x) ∪ S(x) and max(D(x_j)) = v
These bounds could be pruned further but we will first analyze the properties that this simple filtering offers.
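On interval domains, the direct pruning caused by an assignment can be sketched as follows. The succ/pred maps and the function name are ours, and the sketch assumes bounds (interval) representation of the domains.

```python
def direct_pruning(i, v, lb, ub, succ, pred):
    """Direct pruning caused by the assignment x_i = v (sketch).
    succ[i]/pred[i]: sets of indices of the successors/predecessors of
    x_i in the precedence graph. Returns the new (lb, ub) lists."""
    lb, ub = lb[:], ub[:]
    lb[i] = ub[i] = v                    # x_i is fixed to v
    for j in range(len(lb)):
        if j == i:
            continue
        if j in succ[i]:                 # x_i < x_j: successor starts later
            lb[j] = max(lb[j], v + 1)
        elif j in pred[i]:               # x_j < x_i: predecessor ends earlier
            ub[j] = min(ub[j], v - 1)
        else:                            # AllDifferent alone: v unusable,
            if lb[j] == v:               # so bump a bound that equals v
                lb[j] += 1
            if ub[j] == v:
                ub[j] -= 1
    return lb, ub
```

For instance, with three variables all in [1, 3] and x_2 a successor of x_0, the direct pruning of x_0 = 2 fixes x_0's bounds to 2 and raises the lower bound of x_2 to 3, while x_1 (unrelated to x_0, with neither bound equal to 2) is untouched.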
Example 2
Consider constraint with , , . For example, an assignment results in the domains: , and . We point out again that we can continue pruning as values and have to be removed from . However, direct pruning of is sufficient for our purpose. Consider another example. An assignment results in the domains: , and .
Our algorithm is based on the following lemma.
Lemma 2
Let the AllDifferent and precedence constraints be bounds consistent over the variables x_1, …, x_n, let x = v be an assignment of a variable to one of its bounds, and let D^v be the domains after the direct pruning of x = v. Then x = v is bounds consistent iff AllDifferent([x_1, …, x_n]), where the domains of the variables are D^v, has a solution.
Proof: Suppose AllDifferent and the precedence constraints are bounds consistent. As the precedence constraints are bounds consistent, we know that for all (i, j) ∈ E, min(x_i) < min(x_j) and max(x_i) < max(x_j). Consider the direct pruning of x = v. Note that direct pruning preserves the property of the domains being preprocessed. The pruning can only create equality of lower bounds or upper bounds for some precedence constraints. An assignment in Example 2 demonstrates this situation: there, direct pruning forces the lower bounds of two variables that are in the precedence relation to be equal.
As the domains are preprocessed, we know that the greedy algorithm (Section 4) will find a solution of AllDifferent on the domains D^v that also satisfies the precedence constraints, if a solution exists. This solution is a bound support for x = v. ∎
Based on Lemma 2 we prove that we can enforce bounds consistency on the AllDiffPrec constraint in polynomial time. However, we start with a simpler and less efficient algorithm to explain the idea. We show how to improve this algorithm in the next section. Given Lemma 2, the most straightforward algorithm to enforce bounds consistency is, for each variable and each of its bounds, to assign the variable to that bound, perform the direct pruning, run the greedy algorithm and, if it fails, prune the bound. Interestingly enough, to detect bounds disentailment we do not have to run the greedy algorithm for each variable-bound pair. If the AllDifferent constraint and the precedence constraints are bounds consistent, we show that it is sufficient to check that a set of conditions (5)–(10) holds for each interval of values. If these conditions are satisfied then the pair is bounds consistent. Hence, for each variable-bound pair and for each interval of values, we enforce the following conditions:
(5)  
(6)  
(7)  
(8)  
(9)  
(10) 
Note that we actually do not have to consider all possible intervals. For every variable-value pair we consider a restricted family of intervals, together with a parameter that is used to slide between them. Equations (5)–(7) make sure that the number of variables that fall into an interval, after the assignment, is less than or equal to the length of the interval minus 1. Symmetrically, Equations (8)–(10) ensure that the same condition is satisfied for the dual family of intervals. If there exists an interval that violates the condition for a pair then this interval is removed from the corresponding domain.
Example 3
Consider . Domains of the variables are , and . Consider a variable-value pair . By the direct pruning we get the following domains: , , , and . The interval is a violated Hall interval as it contains four variables. We show that Equations (5)–(6) detect that the interval has to be pruned from .
Consider the pair and the interval , where , . We get that and which is greater than . Hence, the interval has to be removed from .
Theorem 5.1
Conditions (5)–(10), together with bounds consistency on the AllDifferent constraint and the precedence constraints, are necessary and sufficient for bounds consistency on the AllDiffPrec constraint.
Proof: Suppose conditions (5)–(10) are fulfilled, AllDifferent and the precedence constraints are bounds consistent, and yet the AllDiffPrec constraint is not bounds consistent. Let an assignment of a variable to its bound, written x = v to simplify notation, be an unsupported bound. Recall that D^v denotes the domains after the direct pruning of x = v. By Lemma 2 the AllDifferent constraint, where the domains of the variables are D^v, fails. Hence, there exists a violated Hall interval.
Note that the direct pruning of x = v does not cause the pruning of the variables in the interval, as all precedence constraints are bounds consistent on the original domains. Next we consider several cases depending on the relative positions of the value and the violated Hall interval on the number line. Note that the interval was not a violated Hall interval before the assignment. However, due to the direct pruning a number of additional variables' domains can be forced inside it. Hence, we analyze these additional variables and show that conditions (5)–(10) prevent the creation of a violated Hall interval.
Case 1. Suppose . As is a violated Hall interval, we have that
Note that the number of additional variables that fall into the interval after setting to consists only of variables that succeed , such that . Hence, , and
Case 2. Suppose . If or , the assignment does not force any extra variables to fall into the interval . Hence, the interval is a violated Hall interval before the assignment. This contradicts that AllDifferent is bounds consistent.
Case 3. Suppose . In this case the assignment does not force any additional variables among successors to fall into , as . Note that there are no successors that are contained in the interval , because precedence constraints are bounds consistent. Therefore, . Hence, the only additional variables that fall into are variables that do not have a precedence relation with and , so . As is a violated Hall interval, we have
This contradicts Equation (10) as the first term equals 0 in the equation by the argument above.
Case 4. Suppose . In this case the set of additional variables that fall into the interval consists of two subsets of variables. The first set contains variables that succeed , such that , and . The second set contains the variables that do not have a precedence relation with and . Consider the interval . As conditions (5)–(7) are satisfied for the interval , we get that
On the other hand, as the interval is violated, we have
We know that and by the construction of the direct pruning. This leads to a contradiction between the last two inequalities.
Therefore, the interval cannot be a violated Hall interval. Similarly, we can prove the same result for the minimum value of .
The reverse direction is trivial. ∎
Theorem 5.1 proves that conditions (5)–(10), together with bounds consistency on the AllDifferent constraint and the precedence constraints, are necessary and sufficient to enforce bounds consistency on the AllDiffPrec constraint. Enforcing these conditions directly is polynomial, as for each variable we check a polynomial number of intervals. This time complexity can be reduced by observing that we do not need to check intervals of length greater than n, as the conditions are trivially satisfied for such intervals; this reduces the complexity.
We make an observation that helps to further reduce the time complexity of enforcing these conditions. We consider the set of all minimum values in the variables' domains and the set of all maximum values in the variables' domains. Let an interval violate the conditions. We denote the amount of violation in this interval its violation cost.
Observation 1
It is sufficient to check intervals whose left endpoint is the minimum of some variable's domain and whose right endpoint is the maximum of some variable's domain.
Proof: Consider a violated interval . In this case . There exists an interval such that . We take the largest interval . Note that such an interval always exists as the interval is contained inside the interval . The interval also violates the conditions, because it contains the same variables. So, we have . We note that as there are no lower bounds in the interval . Similarly, there are no upper bounds in the interval . Hence, . Therefore, . The value is greater than as .
∎
Observation 1 shows that it is sufficient to check intervals whose endpoints are, respectively, the minimum and the maximum of some variables' domains. We can infer all pruning from these intervals. Let an interval violate conditions (5)–(7) for a variable x with some violation cost. Then we remove the corresponding interval from D(x), as any interval between the two is a violated interval. A dual observation holds for conditions (8)–(10). This reduces the time complexity of checking (5)–(10).
6 Faster bounds consistency algorithm
Observation 1 allows us to construct a faster algorithm to enforce conditions (5)–(10). First, we observe that the conditions can be checked for each variable independently. Consider a variable x. We sort all other variables in non-decreasing order of their upper bounds. When processing a variable that is not a successor of x, we assign it the smallest value that has not been taken. When processing a variable that is a successor of x, we store information about the number of successors that we have seen so far. We perform pruning if we find an interval such that the number of available values in this interval equals the number of successors seen. We use a disjoint set data structure to perform these counting operations efficiently.
Algorithm 1 shows the pseudocode of our algorithm. It uses a disjoint set data structure over values. The function find(v) returns the set that contains the value v. The function union(u, v) joins the values u and v into a single set. We use a disjoint set union data structure [22] that allows us to perform find and union in near-constant amortized time.
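A textbook sketch of such a structure is shown below; it is ours, not the specialized interval union-find of [22], which achieves better amortized bounds. We make the representative of each set its minimum element, matching the way the algorithm later jumps to the smallest value of a set.

```python
class ValueSets:
    """Disjoint sets over the values 0..n-1 (sketch). find(v) returns
    the minimum element of v's set; union(a, b) merges two sets. Path
    halving keeps the operations fast in practice."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            if rb < ra:                  # keep the smaller value as root,
                ra, rb = rb, ra          # so find() yields the set minimum
            self.parent[rb] = ra
```

For example, after union(2, 3) and union(3, 4), a call to find(4) returns 2, the minimum of the merged set {2, 3, 4}, letting a pointer jump over a run of taken values in one find.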
Proof: Enforcing conditions (5)–(7) on the th variable corresponds to the th loop (line 1). Hence, we can consider each run independently.
We denote a set of values that are taken by non-successors of after the variable is processed. The algorithm maintains a pointer that stores the minimum value such that the number of available values in the interval is equal to after the variable is processed.
Invariant. We prove the invariant for the pointer by induction. The invariant holds at step . Note that the first variable cannot be a successor of . Indeed, and the interval is empty. Let us assume that the invariant holds after processing the variable .
Suppose the next variable to process is . After we have assigned it a value, we move the pointer forward to capture a possible increase of the upper bound (line 1) and, then, backward if either the variable is a successor of or it is a non-successor that takes a value such that (line 1). Note that when we move the pointer, we ignore values in . To point this out, we call its steps available-value-steps. Thanks to the disjoint set union data structure we can jump over taken values efficiently [22].
Moving forward. We move the pointer a number of available-value-steps forward. We denote the new value of the pointer. Line 1 ensures that the number of available values in the new interval equals the number of available values in the interval before the move. This operation preserves the invariant by the induction hypothesis.
Moving backward. We consider two cases.
Case 1. is a successor of . In this case, we move one available-value-step backward to capture that is a successor (line 1). This preserves the invariant.
Case 2. is not a successor of . Suppose and are in the same set, so that . Then we move to the minimum element in this set. This step does not change the number of available values between the pointer and . However, it makes sure that stores the minimum possible value. This preserves the invariant.
Suppose and are in different sets. If then we move one available-value-step backward, as took one of the available values in . This preserves the invariant. If then the invariant holds by the induction hypothesis. Hence, the new value of preserves the invariant.
Note that the length of the interval equals the sum of and due to the invariant. This means that the interval violates conditions (5)–(7), as the sum has to be less than or equal to the length of the interval minus 1.
Soundness. Suppose we pruned an interval from after the processing of the variable . This pruning is sound because the interval violates conditions (5)–(7).
Completeness. Suppose there exists an interval that violates conditions (5)–(7), so that . However, the algorithm does not prune the upper bound of to . Suppose that , . As the pointer preserves the invariant, there are exactly available values between . Hence points to and .
Suppose that , . We consider the step when the last pruning of the variable occurs. Suppose we processed the variable at this step. The pointer stores . As does not move backward in the following steps, we conclude that neither successors nor non-successors with domains that are contained inside the interval occur. Hence, , , , . Hence is not a violated interval.
Complexity. At each iteration of the loop (line 1) the pointer moves forward and backward a bounded number of times. Due to the disjoint set data structure, the functions find and union take near-constant amortized time [22], which bounds the total cost of the pointer operations. ∎
We can construct an algorithm similar to Algorithm 1 to enforce conditions (8)–(10) and prune lower bounds.
Example 4
Consider the instance from Example 3. We show how our algorithm works on this example.
We represent the values in the disjoint set data structure with circles. We use rectangles to denote sets of joined values. Initially, all values are in disjoint sets. If a variable takes a value, we put its label in the corresponding circle. Figure 1 shows five steps of the algorithm when processing the variable (line 1).
Consider the first step. We set as is 1. We join the values and into a single set (line 1). The pointer is set to . Consider the second step. We process the variable which is a successor of . As we move one available-value-step forward, . However, as it is a successor, we move one available-value-step backward. Hence, . Consider the third step. We process which is a successor of . As we do not move forward. However, as it is a successor, we move one available-value-step backward, and is set to 5. Consider the fourth step. We process which is a non-successor of . The value is 3. Hence, and join