Toward an automaton Constraint for Local Search

10/07/2009, by Jun He, et al.

We explore the idea of using finite automata to implement new constraints for local search (this is already a successful technique in constraint-based global search). We show how it is possible to maintain incrementally the violations of a constraint and of its decision variables from an automaton that describes a ground checker for that constraint. We establish the practicality of our approach on real-life personnel rostering problems, and show that it is competitive with the approach of [Pralong, 2007].


1 Introduction

When a high-level constraint programming (CP) language lacks a (possibly global) constraint that would allow the formulation of a particular model of a combinatorial problem, then the modeller traditionally has the choice of (1) switching to another CP language that has all the required constraints, (2) formulating a different model that does not require the lacking constraints, or (3) implementing the lacking constraint in the low-level implementation language of the chosen CP language. This paper addresses the core question of facilitating the third option, and as a side effect often makes the first two options unnecessary.

The user-level extensibility of CP languages has been an important goal for over a decade. In the traditional global search approach to CP (namely heuristic-based tree search interleaved with propagation), higher-level abstractions for describing new constraints include indexicals [18]; (possibly enriched) deterministic finite automata (DFAs) via the automaton [3] and regular [12] generic constraints; and multi-valued decision diagrams (MDDs) via a generic MDD constraint [6]. Usually, a generic but efficient propagation algorithm achieves a suitable level of local consistency by processing the higher-level description of the new constraint. In the more recent local search approach to CP (called constraint-based local search, CBLS, in [15]), higher-level abstractions for describing new constraints include invariants [10]; a subset of first-order logic with arithmetic via combinators [17] and differentiable invariants [16]; and existential monadic second-order logic for constraints on set decision variables [2]. Usually, a generic but incremental algorithm maintains the constraint and variable violations by processing the higher-level description of the new constraint.

In this paper, we revisit the description of new constraints via automata, already successfully tried within the global search approach to CP [3, 12], and show that it can also be successfully used within the local search approach to CP. The significance of this endeavour can be assessed by noting that a large fraction of the global constraints in the Global Constraint Catalogue [4] are described by DFAs that are possibly enriched with counters and conditional transitions [3] (note that DFA generators can easily be written for other constraints, such as the pattern [5] and stretch [11] constraints, taking the necessarily ground parameters as inputs), so that all these constraints will instantly become available in CBLS once we show how to implement fully the enriched DFAs that are necessary for some of the described global constraints.

The rest of this paper is organised as follows. In Section 2, we present our algorithm for incrementally maintaining both the violation of a constraint described by an automaton, and the violations of each decision variable of that constraint. In Section 3, we present experimental results establishing the practicality of our results, also in comparison to the prior approach of [13]. Finally, in Section 4, we summarise this work and discuss related as well as future work.

2 Incremental Violation Maintenance with Automata

In CBLS, three things are required of an implemented constraint: a method for calculating the violation of the constraint and each of its decision variables for the initial assignment (initialisation); a method for computing the differences of these violations upon a candidate local move (differentiability) to a neighbouring assignment; and a method for incrementally maintaining these violations when an actual move is made (incrementality). Intuitively, the higher the violation of a decision variable, the more can be gained by changing the value of that decision variable. It is essential to maintain incrementally the violations rather than recomputing them from scratch upon each local move, since by its nature a local search procedure will try many local moves to find one that ideally reduces the violation of the constraint or one of its decision variables.
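To make this three-method contract concrete, here is a minimal sketch in Python (the paper's own implementation is in Comet; the class and method names below are our own illustration, not an existing CBLS API):

    # A hypothetical interface capturing the three requirements; names are
    # illustrative only, not Comet's actual API.
    from abc import ABC, abstractmethod

    class CBLSConstraint(ABC):
        @abstractmethod
        def initialise(self, assignment: list) -> None:
            """Compute the violation of the constraint and of each decision
            variable for the initial assignment (initialisation)."""

        @abstractmethod
        def delta(self, var: int, value: object) -> int:
            """Return the change in constraint violation if variable `var`
            were given `value`; the move is only probed (differentiability)."""

        @abstractmethod
        def commit(self, var: int, value: object) -> None:
            """Perform the move and incrementally update the stored
            violations (incrementality)."""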

Our running example is the following simple work scheduling constraint. There are values for two work shifts, day (d) and evening (e), as well as a value for enjoying a day off (x). Work shifts are subject to the following three conditions: one must take at least one day off before a change of work shift; one cannot work for more than two days in a row; and one cannot have more than two days off in a row. A DFA for checking ground instances of this constraint is given in Figure 1. The start state 1 is marked by a transition entering from nowhere, while the success states are marked by double circles. Missing transitions, say from state 5 upon reading value d, are assumed to go to an implicit failure state, with a self-looping transition for every value (so that no success state is reachable from it).

[Figure 1 drawing: states 1 to 6, with the transitions 1→2 on d, 1→3 on x, 1→4 on e, 2→3 on x, 2→5 on d, 3→2 on d, 3→4 on e, 3→6 on x, 4→3 on x, 4→5 on e, 5→3 on x, 6→2 on d, and 6→4 on e.]

Figure 1: An automaton for a simple work scheduling constraint
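For concreteness, the transitions listed above can be written down as data together with a ground checker. This is a minimal Python sketch; the choice of success states {5, 6} is an assumption on our part, consistent with the worked example of Section 2.2:

    # Transition table of the DFA in Figure 1.
    # We assume, for illustration only, that states 5 and 6 are the
    # success states.
    TRANSITIONS = {
        (1, 'd'): 2, (1, 'x'): 3, (1, 'e'): 4,
        (2, 'x'): 3, (2, 'd'): 5,
        (3, 'd'): 2, (3, 'e'): 4, (3, 'x'): 6,
        (4, 'x'): 3, (4, 'e'): 5,
        (5, 'x'): 3,
        (6, 'd'): 2, (6, 'e'): 4,
    }
    START, SUCCESS = 1, {5, 6}   # success states are an assumption

    def check(word: str) -> bool:
        """Ground checker: run the DFA; missing transitions fail."""
        state = START
        for symbol in word:
            state = TRANSITIONS.get((state, symbol))
            if state is None:    # implicit failure state
                return False
        return state in SUCCESS

    # e.g. check('xexexx') is True under the assumed success states,
    # while check('xedexx') is False (no d-transition after xe).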

2.1 Violations of a Constraint

To define and compute the violations of a constraint described by an automaton, we first introduce the notion of a segmentation of an assignment:

Definition 1 (Segmentation)

Given an assignment v_1 ⋯ v_n, a segmentation is a possibly empty sequence ⟨s_1, …, s_p⟩ of non-empty sub-strings (referred to here as segments) of v_1 ⋯ v_n such that for each s_i = v_{a_i} ⋯ v_{b_i} and s_{i+1} = v_{a_{i+1}} ⋯ v_{b_{i+1}} we have that b_i < a_{i+1}.

For example, a possible segmentation of an assignment v_1 v_2 v_3 v_4 v_5 is ⟨v_1 v_2, v_4 v_5⟩; note that the third character of the assignment is not part of any segment. In general, an assignment has multiple possible segmentations. We are interested in segmentations that are accepted by an automaton, in the following sense:

Definition 2 (Acceptance)

Given an automaton and an assignment v_1 ⋯ v_n, a segmentation ⟨s_1, …, s_p⟩ is accepted by the automaton if there exist strings t_0, t_1, …, t_p, where only t_0 and t_p may be empty, such that the concatenated string t_0 s_1 t_1 s_2 ⋯ s_p t_p is accepted by the automaton.

For example, given the automaton in Figure 1, an assignment may have a segmentation ⟨s_1⟩ that is accepted by the automaton via the string t_0 s_1 t_1 with t_0 = ε (the empty string) and some suitable t_1.

Given an assignment, the algorithm presented below initialises and updates a segmentation. The violations of the constraint and its decision variables are calculated relative to the current segmentation:

Definition 3 (Violations)

Given an automaton describing a constraint c and given a segmentation ⟨s_1, …, s_p⟩ of an assignment v_1 ⋯ v_n for a sequence of decision variables V_1, …, V_n:

  • The constraint violation of c is n − (|s_1| + ⋯ + |s_p|), that is, the number of decision variables whose values are not part of any segment.

  • The variable violation of decision variable V_j is 0 if there exists a segment index i in 1, …, p such that v_j is part of s_i, and 1 otherwise.

It can easily be seen that the violation of a constraint is also the sum of the violations of its decision variables, and that it is never an underestimate of the minimal Hamming distance between the current assignment and any satisfying assignment.
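A direct, non-incremental reading of Definition 3, as a Python sketch (the algorithm of Section 2.2 maintains these quantities incrementally instead of recomputing them like this):

    def violations(n, segments):
        """Constraint and variable violations of Definition 3. segments is
        a list of (start, end) index pairs, 0-based and inclusive, ordered
        left to right and pairwise disjoint."""
        covered = set()
        for start, end in segments:
            covered.update(range(start, end + 1))
        var_viol = [0 if i in covered else 1 for i in range(n)]
        return sum(var_viol), var_viol

    # For n = 6 and segments [(0, 1), (3, 5)] (only the third variable is
    # uncovered), this returns (1, [0, 0, 1, 0, 0, 0]).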

Our approach, described in the next three sub-sections, greedily grows a segmentation from left to right across the current assignment relative to a satisfying assignment, and makes stochastic choices whenever greedy growth is impossible.

2.2 Initialisation

A finite automaton is first unrolled for a given length of a sequence of decision variables, as in [12]:

Definition 4 (Layered Graph)

Given a finite automaton with |Q| states, the layered graph over a given number n of decision variables is a graph with (n + 1) · |Q| nodes. Each of the n + 1 vertical layers has a node for each of the states of the automaton. The node for the start state of the automaton in layer 1 is marked as the start node. There is an arc labelled v from node q in layer ℓ to node q′ in layer ℓ + 1 if and only if there is a transition labelled v from q to q′ in the automaton. A node in layer n + 1 is marked as a success node if it corresponds to a success state in the automaton.

The layered graph is further processed by removing all nodes and arcs that do not lead to a success node. The resulting graph, seen as a DFA (or as an ordered MDD), need not be minimised (or reduced) for our approach (although this is a good idea for the global search approaches [3, 12], as argued in [8], and would be a good idea for the local search approach of [13]), as the number of arcs of the graph does not influence the time complexity of our algorithm below. For instance, the minimised unrolled version of the automaton in Figure 1 for 6 decision variables is given in Figure 2. Note that a satisfying assignment v_1 ⋯ v_n corresponds to a path from the start node in layer 1 to a success node in layer n + 1, such that each arc from layer i to layer i + 1 of this path is labelled v_i.
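A sketch of this unrolling and pruning, in the same representation as the checker above; unroll is our own illustrative helper, not code from the paper:

    def unroll(transitions, success, n):
        """Unroll the automaton into n layers of arcs (Definition 4) and
        prune arcs that cannot reach a success node in the last layer.
        Returns layers[0..n-1], where layers[l] maps (state, symbol) to
        the state reached in layer l+1."""
        alive = set(success)          # states alive in the layer to the right
        reversed_layers = []
        for _ in range(n):            # backward sweep from the last layer
            arcs = {(q, a): r for (q, a), r in transitions.items() if r in alive}
            alive = {q for (q, _a) in arcs}
            reversed_layers.append(arcs)
        reversed_layers.reverse()
        return reversed_layers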

[Figure 2 drawing: the layered graph of the automaton of Figure 1 unrolled over 7 layers, with a path count by each node and a red, a green, and a blue path highlighted.]

Figure 2: The minimised unrolled automaton of Figure 1. The number by each node is the number of paths from that node to the success node in the last layer. The colour coding is purely for the convenience of the reader to spot a particular path mentioned in the running text.

Further, we require a number of data structures, where |Q| is the number of states in the given automaton and n is the number of decision variables it was unrolled for:

  • N[ℓ, q] records the number of paths from node q in layer ℓ to a success node in the last layer; for example, see the numbers by each node in Figure 2;

  • p is the number of segments in the current segmentation;

  • segments s_1, …, s_p record the current segmentation;

  • viol[i] records the current violation of decision variable V_i (see Definition 3);

The matrix N can be computed in straightforward fashion by dynamic programming. The other three data structures are initialised (when the starting position is k = 1) and maintained (when decision variable V_k is changed, with k > 1) by the procedure of Algorithm 1. Upon some initialisations (lines 2 and 3), it (re)visits only the decision variables V_k, …, V_n (line 4). If the value of the currently visited decision variable triggers the extension of the currently last segment (lines 6 and 9) or the creation of a new segment (lines 6 to 9), then its violation is 0 (line 10). Otherwise, its violation is 1, and a successor node is picked with a probability weighted according to the number of paths from the current node to a success node (lines 11 to 14). Toward this, we maintain the nodes of the picked path (line 16).
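The right-to-left dynamic programming for the matrix N can be sketched as follows (again an illustration, not the paper's code), taking the layers produced by unroll above:

    from collections import Counter

    def path_counts(layers, success):
        """paths[l][q] = number of paths from state q in layer l to a
        success node in the last layer (0-based layers 0..n)."""
        n = len(layers)
        paths = [Counter() for _ in range(n + 1)]
        for q in success:
            paths[n][q] = 1               # one empty path from each success node
        for l in range(n - 1, -1, -1):    # right-to-left sweep
            for (q, _a), r in layers[l].items():
                paths[l][q] += paths[l + 1][r]
        return paths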

1:  procedure UpdateSegmentation(k)
2:  let p be the number of segments picked for v_1 ⋯ v_{k−1} at the previous run; assume p = 0 at the first run
3:  q := node[k]; extend := (p > 0 and segment s_p ends at position k − 1)
4:  for all  i := k to n  do
5:     if the current value, say v, of V_i is the label of an arc from q to some node q′ then
6:        if  not extend  then
7:           p := p + 1; s_p := ε {create a new segment}
8:        end if
9:        append v to s_p; extend := true
10:        viol[i] := 0
11:     else
12:        viol[i] := 1
13:        extend := false
14:        pick a successor q′ of q with probability N[i + 1, q′] / Σ_r N[i + 1, r]
15:     end if
16:     q := q′; node[i + 1] := q
17:  end for
Algorithm 1 Computation and update of the current segmentation from position k

The time complexity of Algorithm 1 is linear in the number of decision variables, because only one path (from layer k to layer n + 1) is explored, with a constant-time effort at each node. Once the pre-processing is done, the time complexity of Algorithm 1 is thus independent of the number of arcs of the unrolled automaton! Hence the minimisation (or reduction) of the unrolled automaton would be merely for space savings (and for the convenience of human reading), as well as for accelerating the pre-processing computation of the matrix N. In our experiments, these space and time savings are not warranted by the time required for minimisation (or reduction).
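Putting the pieces together, here is a Python sketch of Algorithm 1 under the data structures above; names such as update_segmentation are ours, and the paper's Comet implementation differs in its details:

    import random

    def update_segmentation(values, layers, paths, start, k=0, state=None):
        """A sketch of Algorithm 1. values: the current assignment;
        layers, paths: see unroll() and path_counts() above; state
        carries node[], viol[] and the segment list across calls."""
        n = len(values)
        if state is None:
            state = {'node': [start] + [None] * n,
                     'viol': [0] * n,
                     'segments': []}
        node, viol, segments = state['node'], state['viol'], state['segments']
        # keep only the parts of old segments strictly left of position k
        kept = []
        for a, b in segments:
            if b < k:
                kept.append((a, b))
            elif a < k:
                kept.append((a, k - 1))
        segments[:] = kept
        extend = bool(segments) and segments[-1][1] == k - 1
        for i in range(k, n):
            cur = node[i]
            succ = layers[i].get((cur, values[i]))
            if succ is not None:          # arc exists: extend or open a segment
                if extend:
                    segments[-1] = (segments[-1][0], i)
                else:
                    segments.append((i, i))
                    extend = True
                viol[i] = 0
                node[i + 1] = succ
            else:                         # violated variable: stochastic jump
                viol[i] = 1
                extend = False
                succs = [r for (q, _a), r in layers[i].items() if q == cur]
                weights = [paths[i + 1][r] for r in succs]
                node[i + 1] = random.choices(succs, weights=weights)[0]
        return state

    # Reproducing the running example on the assignment xedexx:
    #   layers = unroll(TRANSITIONS, SUCCESS, 6)
    #   paths  = path_counts(layers, SUCCESS)
    #   st = update_segmentation(list('xedexx'), layers, paths, START)
    # If the higher-weighted successor is picked after the violated V_3
    # (as assumed in the running text), then st['segments'] == [(0, 1),
    # (3, 5)] and sum(st['viol']) == 1.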

Note that this algorithm works without change or loss of performance on non-deterministic finite automata (NFAs). This is potentially interesting since NFAs are often smaller than their equivalent DFAs, but (as just seen) the number of arcs has no influence on the time complexity of Algorithm 1.

For example, in Figure 2, with the initial assignment xedexx and a first call to Algorithm 1 with k = 1, the first segment will be s_1 = xe (the red path). Next, the assignment triggers a violation of 1 for decision variable V_3 (we say that it is a violated variable) because there is no arc labelled d that connects the current node 4 in layer 3 with any node in layer 4. However, node 4 in layer 3 has two out-going arcs, namely to nodes 3 and 5 in layer 4 (in blue). In layer 4, there are 4 paths from node 3 to the last layer, compared to 2 such paths from node 5, so node 3 is picked with probability 4/6 and node 5 is picked with probability 2/6 (where 4 and 2 are the purple numbers by those nodes), and we assume that node 3 in layer 4 is picked. From there, we get the second segment s_2 = exx (the green path), which stops at success node 5 in the last layer. The violation of the constraint is thus 1, because the value of one decision variable does not participate in any segment.

Continuing the example, we assume now that decision variable V_3 is changed to value e, and hence we call Algorithm 1 with k = 3. Only segment s_1 can be kept from the previous segmentation picked for xedexx, namely s_1 = xe (the red path). Since there is an arc labelled e from the current node 4 in layer 3, namely to node 5 in layer 4, segment s_1 is extended (line 9) to xee. However, with decision variable V_4 still having value e, this segment cannot be extended further, since there is no arc labelled e from node 5 in layer 4, and hence V_4 is violated. Similarly, decision variables V_5 and V_6 are violated no matter which successors are picked, so no new segment is ever created. The violation of the constraint is thus 3, because the values of three decision variables do not participate in any segment. Hence changing decision variable V_3 from value d to value e would not be considered a good local move, as the constraint violation increases from 1 to 3. Changing decision variable V_3 to value x instead would be a much better local move, as the first segment is then extended to the entire current assignment xexexx, without detecting any violated variables, so that the violation of the constraint is then 0, meaning that a satisfying assignment was found.

In Section 3, we experiment with a deterministic method [13] for picking the next node and experimentally show that our random pick is computationally quicker at finding solutions.

2.3 Differentiability

At present, the differences of the (constraint and variable) violations upon a candidate local move are calculated naïvely by first making the candidate move and then undoing it.
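A sketch of this probe-and-undo differentiation (an illustrative helper; Algorithm 2 below obtains the same information through Comet's getSwapDelta):

    def probe(values, i, new_value, evaluate):
        """Return the violation change if values[i] were set to new_value.
        evaluate(values) -> current constraint violation; the candidate
        move is made, measured, and undone."""
        old_value = values[i]
        before = evaluate(values)
        values[i] = new_value
        delta = evaluate(values) - before
        values[i] = old_value            # undo the probed move
        return delta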

2.4 Incrementality

Local search proceeds from the current assignment by checking a number of neighbours of that assignment and picking a neighbour that ideally reduces the violation: the exact heuristics are often problem dependent. But in order to make local search computationally efficient, the violations of the constraint and its decision variables have to be computed in an incremental fashion whenever a decision variable changes value. As shown in Subsection 2.2, our initialisation Algorithm 1 can also be invoked with an arbitrary starting position k when decision variable V_k is assigned a new value.

We have implemented this algorithm in Comet [15], an object-oriented CP language with, among others, a CBLS back-end (available at www.dynadec.com).

3 Experiments

We now establish the practicality of the proposed violation maintenance algorithm by experimenting with it. All local search experiments were conducted under Comet (version 2.0 beta) on an Intel 2.4 GHz Linux machine with 512 MB memory, while the constraint programming examples were implemented using SICStus Prolog.

Mon Tue Wed Thu Fri Sat Sun
1
2
3
4
5
Table 1: A five-week rotating schedule with uniform daily workload

Many industries and services need to function around the clock. Rotating schedules such as the one in Table 1 (a real-life example taken from [9]) are a popular way of guaranteeing a maximum of equity to the involved work teams (see [9]). In our first benchmark, there are day (d), evening (e), and night (n) shifts of work, as well as days off (x). Each team works at most one shift per day. The scheduling horizon has as many weeks as there are teams. In the first week, team i is assigned to the schedule in row i. For any next week, each team moves down to the next row, while the team on the last row moves up to the first row. Note how this gives almost full equity to the teams, except, for instance, that one team does not enjoy the six consecutive days off that the other teams have, but rather three consecutive days off at the beginning of the cycle and another three at its end. The daily workload may be uniform: for instance, in Table 1, each day has exactly one team on duty for each work shift, and two teams entirely off duty; assuming the work shifts average 8h, each employee then works 168h over the five-week cycle, or 33.6h per week. Daily workload, whether uniform or not, can be enforced by global cardinality (gcc) constraints [14] on the columns. Further, any number of consecutive workdays must be between two and seven, and any change in work shift can only occur after two to seven days off. This can be enforced by a pattern constraint [5] and a circular stretch constraint [11] on the table flattened row-wise into a sequence V_1, …, V_n.
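The row-rotation convention is simple modular arithmetic; the helper below is our own illustration:

    def row_for(team: int, week: int, n_teams: int) -> int:
        """Row of the rotating table followed by `team` in `week`
        (both 1-based): each team moves down one row per week,
        wrapping from the last row back to the first."""
        return (team + week - 2) % n_teams + 1

    # e.g. with 5 teams, team 1 follows rows 1, 2, 3, 4, 5 over the
    # five weeks, and team 5 follows rows 5, 1, 2, 3, 4.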

Our model posts the pattern and stretch constraints described by automata. The gcc constraints on the columns of the matrix are kept invariant: the first assignment is chosen so as to satisfy them, and then only swap moves inside a column are considered. As a meta-heuristic, we use tabu search with restarting. At each iteration, the search procedure in Algorithm 2 selects a violated variable (line 4; recall that the violation of a decision variable is here at most 1) and another variable of distinct value in the same column so that their swap (line 8) gives the greatest violation change (lines 5 to 7). The length of the tabu list is the maximum between 6 and the sum of the violations of all constraints (lines 9 and 10). The best solution so far is maintained (lines 11 to 14). Restarting is done every restartIter iterations (lines 15 and 16). The expressions for the length of the tabu list and the restart criterion were experimentally determined.

 1: void search(var{int}[] V, ConstraintSystem<LS> S, var{int} violations,
 2:             Solution bestSolution, Counter it, int best, int[,] tabu,
 3:             int restartIter){
 4:   select(x in 1..n : S.violation(V[x]) > 0)
 5:     selectMin(y in 1..n : (x-y) % 7 == 0 && V[x] != V[y],
 6:               nv = S.getSwapDelta(V[x],V[y]) :
 7:               tabu[x,y] <= it || (violations+nv) < best)(nv){
 8:       V[x] :=: V[y];
 9:       tabu[x,y] = it + max(violations,6);
10:       tabu[y,x] = tabu[x,y];
11:       if(best > violations){
12:         best = violations;
13:         bestSolution = new Solution(ls);
14:       }
15:       it++;
16:       if(it % restartIter == 0) restart();
17:     }
18: }
Algorithm 2 The search procedure

Recall that Algorithm 1 computes a greedy random segmentation; hence it might give different segmentations when used for probing a swap and when used for actually performing that swap. Therefore, we record the segmentation of each swap probe, and at the actual swap we just apply its recorded segmentation.

We ran experiments over eight instances with uniform daily workload, where the weekly workload is 33.6h. Each instance is characterised by its uniform daily workload, that is, by the numbers of teams on the day shift, on the evening shift, on the night shift, and off duty on each day. Table 2 gives statistics on the run times and numbers of iterations to find the first solutions over 100 runs from random initial assignments.

                 optimisation time (ms)           number of iterations
instance     min     max     avg     std      min     max     avg     std
6 100 22 20 9 528 115 108
12 692 168 154 32 2484 585 561
32 2588 688 611 44 6612 1726 1571
80 6553 1199 1275 86 11212 2125 2303
60 9373 1417 1545 72 15604 2292 2556
160 5901 1527 1227 161 8051 2051 1681
176 9896 1720 1686 157 11680 1966 1981
216 12472 2620 2309 150 12588 2603 2354
Table 2: Minimum, maximum, average, and standard deviation of optimisation times (in milliseconds) and numbers of iterations to the first solutions of rotating nurse schedules (100 runs) from random initial assignments.

Posting the product of the pattern and stretch automata (accepting the intersection of their two regular languages) has been experimentally determined to be more efficient than posting the two automata individually; hence all experiments in this paper use the product automaton.
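The product automaton is the textbook intersection construction; a sketch in the dictionary representation used earlier:

    def product(trans1, succ1, start1, trans2, succ2, start2, alphabet):
        """Product DFA accepting the intersection of the two languages:
        states are pairs, and a pair is a success state iff both
        components are."""
        start = (start1, start2)
        trans, succ, todo, seen = {}, set(), [start], {start}
        while todo:
            q1, q2 = q = todo.pop()
            if q1 in succ1 and q2 in succ2:
                succ.add(q)
            for a in alphabet:
                r1, r2 = trans1.get((q1, a)), trans2.get((q2, a))
                if r1 is not None and r2 is not None:  # a missing arc fails in either DFA
                    r = (r1, r2)
                    trans[(q, a)] = r
                    if r not in seen:
                        seen.add(r)
                        todo.append(r)
        return trans, succ, start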

Further improvements can be achieved by using a non-random initial assignment. Table 3 gives statistics on the run times and numbers of iterations to find the first solutions over 100 runs, where the initial assignment of an instance consists of stacked copies of Table 4. The results show that this non-random initialisation provides a better starting point. Although much more experimentation is required, these initial results show that even on the largest instances it is possible to find solutions quickly.

                 optimisation time (ms)           number of iterations
instance     min     max     avg     std      min     max     avg     std
1 16 2 3 6 61 11 9
1 64 12 12 13 235 34 46
12 76 25 13 21 173 46 29
28 172 51 27 32 297 72 49
28 200 79 42 34 286 106 61
56 368 135 76 61 487 156 101
84 768 188 123 69 848 189 140
112 764 233 112 72 736 202 113
Table 3: Minimum, maximum, average, and standard deviation of optimisation times (in milliseconds) and numbers of iterations to the first solutions of rotating nurse schedules (100 runs) from non-random initial assignments.
Mon Tue Wed Thu Fri Sat Sun
1
2
3
4
5
6
Table 4: Non-random initial assignment

The only related work we are aware of is a Comet implementation [13] of the regular constraint [12], based on the ideas for the propagator of the soft regular constraint [7]. The difference is that they estimate the violation change compared to the nearest solution (in terms of Hamming distance from the current assignment), whereas we estimate it compared to one randomly picked solution. In our terminology (although it is not implemented that way in [13]), they find a segmentation such that an accepting string for the automaton has the minimal Hamming distance to the current assignment.

Tables 5 and 6 give comparisons between (our re-implementation of) [13], our method, and a SICStus Prolog constraint program (CP) where the product automaton of the pattern and stretch constraints was posted using the built-in propagation-based implementation of the automaton constraint [3]. These experiments show:

  • Compared with  [13], our method has a higher number of iterations, as it is more stochastic. However, our run times are lower, as our cost of one iteration is much smaller (linear in the number of decision variables, instead of linear in the number of arcs of the unrolled automaton).

  • Compared with the CP method, both local search methods need more time to find the first solution when the number of weeks is small. However, when the number of weeks increases, the runtime of CP increases sharply: from the seventh instance onward, the runtime of CP exceeds the average runtime of our method, and on the largest instance it also exceeds the average runtime of [13].

                     optimisation time (ms)                  number of iterations
instance       our method        [13]          CP       our method        [13]
               avg     std    avg     std               avg     std    avg     std
22 20 395 378 10 115 108 98 100
168 154 1584 1187 10 585 561 223 170
688 611 3441 2871 10 1726 1571 333 287
1199 1275 5584 4423 40 2125 2303 399 319
1417 1545 8828 7606 100 2292 2556 514 444
1527 1227 13888 10863 510 2051 1681 672 529
1720 1686 13170 9814 3520 1966 1981 536 485
2620 2309 20202 11530 25820 2603 2354 745 602
St Louis Police 12740 11199 50261 48026 20287 17952 3248 2498
Table 5: Comparison between our method,  [13], and a SICStus Prolog program using the  [3] constraint: average and standard deviation of optimisation times (in milliseconds) and numbers of iterations to the first solutions; rotating nurse schedules (100 runs) and the St Louis Police instance (50 runs), from random initial assignments.
                 optimisation time (ms)             number of iterations
instance       our method        [13]           our method        [13]
               avg     std    avg     std       avg     std    avg     std
2 3 44 26 11 9 10 7
12 12 182 65 34 46 22 9
25 13 478 201 46 29 41 17
51 27 834 254 72 49 56 18
79 42 1546 876 106 61 87 51
135 76 2414 1233 156 101 113 59
188 123 4517 3276 189 140 181 134
233 112 4473 1958 202 113 160 71
St Louis Police 3990 4012 67159 55632 5389 5598 3949 3159
Table 6: Comparison between our method and  [13]: average and standard deviation of optimisation times (in milliseconds) and numbers of iterations to the first solutions; rotating nurse schedules (100 runs) and the St Louis Police instance (50 runs), from non-random initial assignments.

Besides the rotating nurse instances, we ran experiments on another, harder real-life scheduling instance. The St Louis Police problem (described in [13]) has a seventeen-week cycle; however, it has more constraints than the rotating nurse problem. It has non-uniform daily workloads. For example, on Mondays, five teams work during the day, five at night, and four in the evening, while three teams enjoy a day off; on Sundays, three teams work during the day, four at night, and four in the evening, while six teams enjoy a day off. Any number of consecutive workdays must be between three and eight, and any change in work shift can only occur after two to seven days off. The problem has other vertical constraints; for example, no team can work the same shift on four consecutive Mondays. Further, the problem has complex constraints that limit the possible changes of work shift: only six specific shift-change patterns are allowed. For this hard real-life problem, our method still works well: experimental results can also be found in Tables 5 and 6.

It is possible to post (see Figure 3) the constraints of the rotating nurse problem using the differentiable invariants [16] of Comet. This is possible in general for any automaton, by encoding all the paths to a success state using Comet's conjunction and disjunction combinators. As the automata get larger, these expressions can become too large to post, and even when it is possible to post them, our current experiments show that our approach is more efficient.

Figure 3: A model for the unrolled DFA in Figure 2 with Comet combinators
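To see why these expressions grow, one can enumerate the accepting paths of the unrolled DFA: the combinator encoding is precisely the disjunction, over those paths, of the conjunction of one equality per position. A sketch (in Python, not the Comet of Figure 3):

    def accepting_paths(layers, start):
        """Enumerate the label sequences of all paths from the start node
        to a success node in the last layer of the pruned unrolled graph."""
        def extend(l, q):
            if l == len(layers):
                yield []
                return
            for (p, a), r in layers[l].items():
                if p == q:
                    for rest in extend(l + 1, r):
                        yield [a] + rest
        return list(extend(0, start))

    # A satisfying assignment is one where, for some path, V[i] == path[i]
    # for all i; the number of such paths is exactly what makes the posted
    # expression blow up on larger automata.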

4 Conclusion

In summary, we have shown that the idea of describing novel constraints by automata can be successfully imported from classical (global search) constraint programming to constraint-based local search (CBLS). Our violation algorithms take time linear in the number of decision variables, whereas the propagation algorithms take amortised time linear in the number of arcs of the unrolled automaton [3, 12]. We have also experimentally shown that our approach is competitive with the CBLS approach of [13].

There is of course a trade-off between using an automaton to describe a constraint and using a hand-crafted implementation of that constraint. On the one hand, a hand-crafted implementation of a constraint is normally more efficient during search, because properties of the constraint can be exploited, but it may take a lot of time to implement and verify it. On the other hand, the (violation or propagation) algorithm processing the automaton is implemented and verified once and for all, and our assumption is that it takes a lot less time to describe and verify a new constraint by an automaton than to implement and verify its algorithm. We see thus opportunities for rapid prototyping with constraints described by automata: once a sufficiently efficient model, heuristic, and meta-heuristic have been experimentally determined with its help, some extra efficiency may be achieved, if necessary, by hand-crafting implementations of any constraints described by automata.

As witnessed in our experiments, constraint composition (by conjunction) is easy to experiment with under the DFA approach, as there exist standard and efficient algorithms for composing and minimising DFAs, but there is no known systematic way of composing violation (or propagation) algorithms when decomposition is believed to obstruct efficiency.

In the global search approach to CP, the common modelling device of reification can be used to shrink the size of DFAs describing constraints [3]. For instance, consider a constraint requiring that some given value occur among the decision variables. Upon reifying the decision variables into new Boolean decision variables that hold exactly when the corresponding original variable takes that value, it suffices to pose a regular constraint whose regular expression requires that at least one 1 be found in the sequence of the reified decision variables. However, such explicit reification constraints are not necessary in constraint-based local search, as a total assignment of values to all decision variables is maintained at all times: instead of processing the values of the decision variables when computing the segments, one can process their reified values.
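A sketch of that remark: feed the segmentation algorithm the reified 0/1 view of the assignment instead of the raw values (the helper below and the 0/1 automaton it presupposes are hypothetical):

    def reified_view(values, v):
        """Map the assignment to its reified 0/1 view for value v."""
        return [1 if x == v else 0 for x in values]

    # update_segmentation(reified_view(values, 'd'), layers01, paths01, start)
    # would then maintain the violations of a constraint whose DFA reads
    # the alphabet {0, 1}, with no explicit reification constraints posted.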

It has been shown that the use of counters (initialised at the start state and evolving during possibly conditional transitions) to enrich the language of DFAs, and thereby shrink the size of DFAs, can be handled in the global search approach to CP [3], possibly upon some concessions at the level of local consistency that can be achieved. In the Global Constraint Catalogue [4], some of the constraints described by DFAs use lists of counters, and others use arrays of counters. We need to investigate the effects of introducing counters and conditional transitions on our violation maintenance algorithm.

Acknowledgements

The authors are supported by grant 2007-6445 of the Swedish Research Council (VR), and Jun He is also supported by grant 2008-611010 of China Scholarship Council and the National University of Defence Technology of China. Many thanks to Magnus Ågren (SICS) for some useful discussions on this work, and to the anonymous referees of both LSCS’09 and the Doctoral Programme of CP’09, especially for pointing out the existence of [13, 7].

References

  • [2] Magnus Ågren, Pierre Flener & Justin Pearson (2007): Generic incremental algorithms for local search. Constraints 12(3), pp. 293–324. (Collects the results of papers at CP-AI-OR’05, CP’05, and CP’06, published in LNCS 3524, 3709, and 4204).
  • [3] Nicolas Beldiceanu, Mats Carlsson & Thierry Petit (2004): Deriving filtering algorithms from constraint checkers. In: Mark Wallace, editor: Proceedings of CP’04, LNCS 3258. Springer-Verlag, pp. 107–122.
  • [4] Nicolas Beldiceanu, Mats Carlsson & Jean-Xavier Rampon (2007): Global constraint catalogue: Past, present, and future. Constraints 12(1), pp. 21–62. Dynamic on-line version at www.emn.fr/x-info/sdemasse/gccat. Long version as Technical Report T2005:08, Swedish Institute of Computer Science, November 2005.
  • [5] Stéphane Bourdais, Philippe Galinier & Gilles Pesant (2003): HIBISCUS: A constraint programming application to staff scheduling in health care. In: Francesca Rossi, editor: Proceedings of CP’03, LNCS 2833. Springer-Verlag, pp. 153–167.
  • [6] Kenil C. K. Cheng & Roland H. C. Yap (2008): Maintaining generalized arc consistency on ad hoc r-ary constraints. In: Peter J. Stuckey, editor: Proceedings of CP’08, LNCS 5202. Springer-Verlag, pp. 509–523.
  • [7] Willem-Jan van Hoeve, Gilles Pesant & Louis-Martin Rousseau (2006): On global warming: Flow-based soft global constraints. Journal of Heuristics 12(4-5), pp. 347–373.
  • [8] Mikael Z. Lagerkvist (2008): Techniques for Efficient Constraint Propagation. Technical Report, KTH – The Royal Institute of Technology, Stockholm, Sweden. Licentiate Thesis.
  • [9] Gilbert Laporte (1999): The art and science of designing rotating schedules. Journal of the Operational Research Society 50(10), pp. 1011–1017.
  • [10] Laurent Michel & Pascal Van Hentenryck (1997): Localizer: A modeling language for local search. In: Gert Smolka, editor: Proceedings of CP’97, LNCS 1330. Springer-Verlag, pp. 237–251.
  • [11] Gilles Pesant (2001): A filtering algorithm for the stretch constraint. In: Toby Walsh, editor: Proceedings of CP’01, LNCS 2239. Springer-Verlag, pp. 183–195.
  • [12] Gilles Pesant (2004): A regular language membership constraint for finite sequences of variables. In: Mark Wallace, editor: Proceedings of CP’04, LNCS 3258. Springer-Verlag, pp. 482–495.
  • [13] Benoit Pralong (2007): Implémentation de la contrainte Regular en Comet. Master’s thesis, École Polytechnique de Montréal, Canada.
  • [14] Jean-Charles Régin (1996): Generalized arc-consistency for global cardinality constraint. In: Proceedings of AAAI’96. AAAI Press, pp. 209–215.
  • [15] Pascal Van Hentenryck & Laurent Michel (2005): Constraint-Based Local Search. The MIT Press.
  • [16] Pascal Van Hentenryck & Laurent Michel (2006): Differentiable invariants. In: Frédéric Benhamou, editor: Proceedings of CP’06, LNCS 4204. Springer-Verlag, pp. 604–619.
  • [17] Pascal Van Hentenryck, Laurent Michel & Liyuan Liu (2004): Constraint-based combinators for local search. In: Mark Wallace, editor: Proceedings of CP’04, LNCS 3258. Springer-Verlag, pp. 47–61.
  • [18] Pascal Van Hentenryck, Vijay Saraswat & Yves Deville (1993): Design, implementation, and evaluation of the constraint language cc(FD). Technical Report CS-93-02, Brown University, Providence, USA. Revised version in Journal of Logic Programming 37(1–3):293–316, 1998. Based on the unpublished manuscript Constraint Processing in cc(FD), 1991.