1 Introduction
In constraint programming the programming process is limited to the generation of constraints and the solution of the resulting constraint satisfaction problems (CSP's) by general or domain dependent methods.
On the theoretical side several notions of local consistency, notably arc consistency for constraints of arbitrary arity, have been defined and various search methods have been proposed. On the practical side several constraint programming systems were designed and implemented that provide substantial support for constraint programming. This support is usually provided in the form of specific built-in constructs that support search and constraint propagation. For example, arc consistency is built into the ILOG Solver and is present in a library of the most recent version of ECLiPSe.
In this paper we study CSP's that are built out of predefined, explicitly given finite constraints. Such CSP's often arise in practice. Examples include Boolean constraints, constraints dealing with Waltz's language for describing polyhedral scenes, Allen's temporal logic, and constraints in any multivalued logic.
In such situations it is natural to explore the structure of these explicitly given constraints first and to use this information to reduce the considered CSP to a simpler yet equivalent one. This information can be expressed in terms of rules. This leads to a local consistency notion called rule consistency that turns out to be weaker than arc consistency for constraints of arbitrary arity.
When the original domains are all unary or binary, rule consistency coincides with arc consistency. When additionally the predefined constraints are the truth tables of the Boolean connectives, these rules coincide with the well-known rules for Boolean constraints, sometimes called unit propagation rules (see, e.g., [3]). As a side effect, this shows that the unit propagation rules characterize arc consistency. Rule consistency is thus a generalization of unit propagation to non-binary domains.
Next, we show that by generalizing the notion of rules to so-called inclusion rules, we obtain a notion of local consistency that coincides with arc consistency for constraints of arbitrary arity.
The advantage of rule consistency and this rule based characterization of arc consistency is that the algorithms that enforce them can be automatically generated and provided on the constraint programming language level. For example, the rules in question can be generated automatically and represented as rules of the CHR language of [6] that is part of the ECLiPSe system. (For a more recent and more complete overview of CHR see [5].)
Consequently, the implementations of the algorithms that achieve rule consistency and arc consistency for the considered CSP’s are simply these automatically generated CHR programs. When combined with a labeling procedure such CHR programs constitute automatically derived decision procedures for these CSP’s.
The availability of the algorithms that enforce rule consistency and arc consistency on the constraint programming language level further contributes to the automation of the programming process within the constraint programming framework. In fact, in the case of such CSP's built out of predefined, explicitly given finite constraints the user does not need to write their own CHR rules for the considered constraints and can simply adopt all or some of the rules that are automatically generated. In the final example of the paper we also show how, using the rules and the inclusion rules, we can implement more powerful notions of local consistency.
Alternatively, the generated rules and inclusion rules could be fed into any of the generic Chaotic Iteration algorithms of [2] and made available in such systems as the ILOG Solver. This would yield rule consistency and an alternative implementation of arc consistency.
The algorithms that for an explicitly given finite constraint generate the appropriate rules that characterize rule consistency and arc consistency have (unavoidably) a running time that is exponential in the number of constraint variables and consequently are in general impractical.
To test the usefulness of these algorithms for small finite domains we implemented them in ECLiPSe and successfully used them on several examples, including the ones mentioned above. The fact that we could handle these examples shows that this approach is of practical value and can be used to automatically derive practical decision procedures for constraint satisfaction problems defined over small finite domains. It also shows the usefulness of the CHR language for an automatic generation of constraint solvers and of decision procedures.
The rest of the paper is organized as follows. In the next section we formalize the concept that a CSP is built out of predefined constraints. Next, in Section 3 we introduce the notion of a rule, define the notion of rule consistency and discuss an algorithm that can be used to generate the minimal set of rules that characterize this notion of local consistency. Then, in Section 4 we compare rule consistency to arc consistency. In Section 5 we generalize the notion of rules to so-called inclusion rules and discuss an algorithm analogous to the one of Section 3. This entails a notion of local consistency that turns out to be equivalent to arc consistency. Finally, in Section 6 we discuss the implementation of both algorithms. They generate from an explicit representation of a finite constraint a set of CHR rules that characterize respectively rule consistency and arc consistency. We also illustrate the usefulness of these implementations by means of several examples. Due to lack of space all proofs are omitted.
2 CSP’s Built out of Predefined Constraints
Consider a finite sequence X := x1, ..., xn of variables, with respective domains D1, ..., Dn associated with them. So each variable xi ranges over the domain Di. By a constraint C on X we mean a subset of D1 × ... × Dn. In this paper we consider only finite domains.
By a constraint satisfaction problem, in short CSP, we mean a finite sequence of variables X := x1, ..., xn with respective domains D1, ..., Dn, together with a finite set C of constraints, each on a subsequence of X. We write it as ⟨C ; DE⟩, where DE := x1 ∈ D1, ..., xn ∈ Dn.
Consider now an element d := d1, ..., dn of D1 × ... × Dn and a subsequence Y := x_{i1}, ..., x_{im} of X. Then we denote by d[Y] the sequence d_{i1}, ..., d_{im}.
By a solution to ⟨C ; DE⟩ we mean an element d ∈ D1 × ... × Dn such that for each constraint C ∈ C on a sequence of variables Y we have d[Y] ∈ C. We call a CSP consistent if it has a solution.
Consider now a constraint C on a sequence of variables Y. Given a subsequence Z of Y, by the domain of Z we mean the set of all tuples from E1 × ... × Em, where E1, ..., Em are the respective domains of the variables from Z.
In the introduction we informally referred to the notion of a CSP "being built out of predefined, explicitly given finite constraints." Let us now make this concept formal. We need two auxiliary notions first, where in preparation for the next definition we already consider constraints together with the domains over which they are defined.
Definition 1

- Given a constraint C ⊆ D1 × ... × Dn and a permutation π of [1..n] we denote by C^π the relation defined by

  (a1, ..., an) ∈ C^π iff (a_π(1), ..., a_π(n)) ∈ C.

- Given two constraints C ⊆ D1 × ... × Dn and E ⊆ E1 × ... × En we say that C is based on E if

  - Di ⊆ Ei for i ∈ [1..n],
  - C = E ∩ (D1 × ... × Dn).

So the notion of "being based on" involves the domains of both constraints. If C is based on E, then C is the restriction of E to the domains over which C is defined.
Definition 2
We assume that the "predefined constraints" are presented as a given in advance CSP B and the considered CSP P is related to B as follows:

- There is a mapping f that relates each constraint C of P to a constraint f(C) of B.
- Each constraint C of P is based on f(C)^π, where π is a permutation of [1..n] and n is the arity of f(C).

We say then that P is based on B.
In the above definition the "permuted" relations f(C)^π allow us to abstract from the variable ordering used in B. The following example illustrates this notion.
Example 1
Consider the well-known full adder circuit. It is defined by the following formula:

add(i1, i2, i3, o1, o2) ≡ xor(i1, i2, x1) ∧ and(i1, i2, a1) ∧ xor(x1, i3, o2) ∧ and(i3, x1, a2) ∧ or(a1, a2, o1),

where the constraints and, xor and or are defined in the expected way. We can view the original constraints as the following CSP:

B := ⟨and(x, y, z), xor(x, y, z), or(x, y, z) ; x ∈ {0,1}, y ∈ {0,1}, z ∈ {0,1}⟩.

B should be viewed just as an "inventory" of the predefined constraints and not as a CSP to be solved. Now, any query concerning the full adder can be viewed as a CSP based on B. For example, in Section 6 we shall consider the query add(1, x, y, z, 0). It corresponds to the following CSP based on B:

⟨xor(1, x, x1), and(1, x, a1), xor(x1, y, 0), and(y, x1, a2), or(a1, a2, z) ; x ∈ {0,1}, y ∈ {0,1}, z ∈ {0,1}, x1 ∈ {0,1}, a1 ∈ {0,1}, a2 ∈ {0,1}⟩.
3 Rule Consistency
Our considerations crucially rely on the following notion of a rule.
Definition 3
Consider a constraint C on a sequence of variables VAR, a subsequence X := x1, ..., xk of VAR and a variable y of VAR not in X, a tuple s := s1, ..., sk of elements from the domain of X and an element d from the domain of y. We call X = s → y ≠ d a rule (for C).

- We say that X = s → y ≠ d is valid (for C) if for every tuple t ∈ C the equality t[X] = s implies t[y] ≠ d.
- We say that X = s → y ≠ d is feasible (for C) if for some tuple t ∈ C the equality t[X] = s holds.

We say that the constraint C is closed under the rule X = s → y ≠ d if the fact that the domain of each variable xi equals {si} implies that d is not an element of the domain of the variable y.
Further, given a sequence of variables X' that extends X and a tuple s' of elements from the domain of X' that extends s, we say that the rule X' = s' → y ≠ d extends the rule X = s → y ≠ d. We call a rule minimal if it is feasible and it does not properly extend a valid rule.
Note that rules that are not feasible are trivially valid. To illustrate the introduced notions consider the following example.
Example 2
Take as a constraint the ternary relation and that represents the conjunction x ∧ y = z. It can be viewed as the following relation:

x y z
0 0 0
0 1 0
1 0 0
1 1 1

In other words, we assume that each of the variables x, y, z has the domain {0, 1} and view and as the constraint on x, y, z that consists of the above four triples.
It is easy to see that the rule x = 0 → z ≠ 1 is valid for and. Further, the rule x = 0, y = 0 → z ≠ 1 extends the rule x = 0 → z ≠ 1 and is also valid for and. However, out of these two rules only x = 0 → z ≠ 1 is minimal.
Finally, both rules are feasible, while the rules x = 0, z = 1 → y ≠ 0 and x = 0, z = 1 → y ≠ 1 are not feasible.
Note that a rule that extends a valid rule is valid, as well. So validity extends “upwards”.
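To make the notions of validity and feasibility concrete, here is a small Python sketch (ours, not part of the paper's implementation) that checks them against an explicitly given constraint, using the conjunction constraint of Example 2 with tuple positions 0, 1, 2 standing for x, y, z:

```python
# the "and" constraint of Example 2, given explicitly as a set of triples
AND = {(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)}

def valid(constraint, premise, y, d):
    """A rule X = s -> y != d is valid if every tuple matching the
    premise (a dict: variable position -> value) avoids value d at y."""
    return all(t[y] != d
               for t in constraint
               if all(t[x] == v for x, v in premise.items()))

def feasible(constraint, premise):
    """A rule is feasible if some tuple of the constraint matches its premise."""
    return any(all(t[x] == v for x, v in premise.items())
               for t in constraint)

# x = 0 -> z != 1 is valid and feasible,
# while x = 0, z = 1 -> y != 0 is (vacuously) valid but not feasible
```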
Next, we introduce a notion of local consistency that is expressed in terms of rules.
Definition 4
Consider a CSP P that is based on a CSP B. Let C be a constraint of P on the variables x1, ..., xn with respective non-empty domains D1, ..., Dn. For some constraint E of B and a permutation π we have that C is based on E^π.

- We call the constraint C rule consistent (w.r.t. E^π) if it is closed under all rules that are valid for E^π.
- We call a CSP rule consistent (w.r.t. B) if all its constraints are rule consistent.
In what follows we drop the reference to if it is clear from the context.
Example 3
Take as the base CSP

B := ⟨and(x, y, z) ; x ∈ {0,1}, y ∈ {0,1}, z ∈ {0,1}⟩

and consider the following four CSP's based on it:

- ⟨and(x, y, z) ; x ∈ {0,1}, y ∈ {0,1}, z ∈ {0,1}⟩,
- ⟨and(x, y, z) ; x ∈ {1}, y ∈ {1}, z ∈ {1}⟩,
- ⟨and(x, y, z) ; x ∈ {1}, y ∈ {0,1}, z ∈ {0}⟩,
- ⟨and(x, y, z) ; x ∈ {0}, y ∈ D, z ∈ {0,1}⟩,

where D is a subset of {0,1}. We noted in Example 2 that the rule x = 0 → z ≠ 1 is valid for and. In the first three CSP's the only constraint is closed under this rule, while in the fourth one it is not, since 1 is present in the domain of z whereas the domain of x equals {0}. So the fourth CSP is not rule consistent. One can show that the first two CSP's are rule consistent, while the third one is not, since it is not closed under the valid rule x = 1, z = 0 → y ≠ 1.
The following observation is useful.
Note 1
Consider two constraints C and E such that C ⊆ E. Then C is closed under all rules valid for E iff it is closed under all minimal rules valid for E.
This allows us to confine our attention to minimal valid rules. We now introduce an algorithm that, given a constraint, generates the set of all minimal valid rules for it. We collect the generated rules in a list L. We denote below the empty list by empty and the result of inserting an element r into a list L by insert(r, L).
By an assignment to a sequence of variables X we mean here an element s from the domain of X such that t[X] = s for some t ∈ C. Intuitively, if we represent the constraint C as a table with rows corresponding to the elements (tuples) of C and the columns corresponding to the variables of C, then an assignment to X is a tuple of elements that appears in some row in the columns that correspond to the variables of X. The algorithm has the following form, where we assume that the considered constraint C is defined on a sequence of variables VAR of cardinality n.
Rules Generation algorithm
L := empty;
FOR i := 0 TO n-1 DO
  FOR each subset X of VAR of cardinality i DO
    FOR each assignment s to X DO
      FOR each y in VAR - X DO
        FOR each element d from the domain of y DO
          r := X = s → y ≠ d;
          IF r is valid for C and it does not extend an element of L
          THEN insert(r, L)
The following result establishes correctness of this algorithm.
Theorem 3.1
Given a constraint C, the Rules Generation algorithm produces in L the set of all minimal valid rules for C.
Note that because of the minimality property no rule in L extends another.
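For illustration, the algorithm admits the following direct Python rendering (our sketch, not the paper's implementation; it assumes a constraint given as a set of tuples, with variables identified with tuple positions 0..n-1):

```python
from itertools import combinations

def generate_min_rules(constraint, n, domains):
    """All minimal valid rules (X, s, y, d), to be read as X = s -> y != d."""
    def matches(t, X, s):
        return all(t[x] == v for x, v in zip(X, s))
    def extends(X, s, y, d, rule):
        # (X, s, y, d) extends `rule` if it has the same conclusion and its
        # premise contains every equality of the premise of `rule`
        X2, s2, y2, d2 = rule
        prem = dict(zip(X, s))
        return (y, d) == (y2, d2) and all(prem.get(x) == v
                                          for x, v in zip(X2, s2))
    rules = []
    for i in range(n):                         # premises of increasing size
        for X in combinations(range(n), i):
            # assignments: premise tuples occurring in some row (feasibility)
            for s in sorted({tuple(t[x] for x in X) for t in constraint}):
                for y in (v for v in range(n) if v not in X):
                    for d in domains[y]:
                        if all(t[y] != d
                               for t in constraint if matches(t, X, s)) \
                           and not any(extends(X, s, y, d, r) for r in rules):
                            rules.append((X, s, y, d))
    return rules

AND = {(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)}
minimal = generate_min_rules(AND, 3, [{0, 1}] * 3)
# contains, e.g., ((0,), (0,), 2, 1), i.e. x = 0 -> z != 1
```

Since premises are enumerated by increasing size, a rule that properly extends an already recorded valid rule is filtered out, which is exactly the minimality test of the algorithm.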
4 Relating Rule Consistency to Arc Consistency
To clarify the status of rule consistency we compare it now to the notion of arc consistency. This notion was introduced in [8] for binary relations and was extended to arbitrary relations in [9]. Let us recall the definition.
Definition 5

We call a constraint C on a sequence of variables X arc consistent if for every variable x in X and every element a in its domain there exists t ∈ C such that t[x] = a. That is, each element in each domain participates in a solution to C.

We call a CSP arc consistent if all its constraints are arc consistent.
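Definition 5 can be read off directly in code; the following Python fragment (our sketch) checks arc consistency of an explicitly given constraint restricted to the current domains:

```python
def arc_consistent(constraint, domains):
    """A constraint, restricted to the given domains, is arc consistent
    iff every value of every domain occurs in some remaining tuple."""
    rows = [t for t in constraint
            if all(t[i] in dom for i, dom in enumerate(domains))]
    return all(any(t[i] == a for t in rows)
               for i, dom in enumerate(domains)
               for a in dom)

AND = {(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)}
# with x restricted to {0}, the value 1 of z loses all support
```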
The following result relates for constraints of arbitrary arity arc consistency to rule consistency.
Theorem 4.1
Consider a CSP P based on a CSP B. If P is arc consistent then it is rule consistent w.r.t. B.
The converse implication does not hold in general as the following example shows.
Example 4
Take as the base the following CSP:

B := ⟨C ; x ∈ {0,1,2}, y ∈ {0,1,2}⟩,

where the constraint C on x, y equals the set {(0,0), (1,1), (2,2)}. So C can be viewed as the following relation:

x y
0 0
1 1
2 2

Next, take for Dx the set {0,1,2} and for Dy the set {0,1}. Then the CSP P := ⟨C ∩ (Dx × Dy) ; x ∈ Dx, y ∈ Dy⟩ is based on B but is not arc consistent, since the value 2 in the domain of x does not participate in any solution. Yet, it is easy to show that the only constraint of this CSP is closed under all rules that are valid for C.
However, if each domain has at most two elements, then the notions of arc consistency and rule consistency coincide. More precisely, the following result holds.
Theorem 4.2
Let B be a CSP each domain of which is unary or binary. Consider a CSP P based on B. Then P is arc consistent iff it is rule consistent w.r.t. B.
5 Inclusion Rule Consistency
We saw in the previous section that the notion of rule consistency is weaker than that of arc consistency for constraints of arbitrary arity. We now show how by modifying the format of the rules we can achieve arc consistency. To this end we introduce the following notions.
Definition 6
Consider a constraint C on a sequence of variables VAR, a subsequence X := x1, ..., xk of VAR and a variable y of VAR not in X, a sequence S := S1, ..., Sk of respective subsets of the domains of the variables from X, and an element d from the domain of y.
We call X ⊆ S → y ≠ d an inclusion rule (for C). We say that X ⊆ S → y ≠ d is valid (for C) if for every tuple t ∈ C the fact that t[xi] ∈ Si for i ∈ [1..k] implies that t[y] ≠ d, and that it is feasible (for C) if for some tuple t ∈ C we have t[xi] ∈ Si for i ∈ [1..k].
Further, we say that a constraint C is closed under the inclusion rule X ⊆ S → y ≠ d if the fact that the domain of each variable xi is included in Si implies that d is not an element of the domain of the variable y.
By choosing in the above definition singleton sets Si we see that the inclusion rules generalize the rules of Section 3. Note that inclusion rules that are not feasible are trivially valid.
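Validity of an inclusion rule can also be checked mechanically; the following Python fragment (our sketch) does so for an equality-like relation over {0,1,2}, where non-singleton premise sets genuinely add pruning power:

```python
def incl_valid(constraint, premise, y, d):
    """Validity of an inclusion rule X ⊆ S -> y != d: `premise` maps a
    variable (tuple position) to its admissible set S_i of values."""
    return all(t[y] != d
               for t in constraint
               if all(t[x] in S for x, S in premise.items()))

EQ = {(0, 0), (1, 1), (2, 2)}   # x = y over the domain {0, 1, 2}
# the inclusion rule y ⊆ {0,1} -> x != 2 is valid for EQ, whereas no
# rule with a singleton premise fires when the domain of y is {0, 1}
```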
In analogy to Definition 4 we now introduce the following notion.
Definition 7
Consider a CSP P that is based on a CSP B. Let C be a constraint of P on the variables x1, ..., xn with respective non-empty domains D1, ..., Dn. For some constraint E of B and a permutation π we have that C is based on E^π.

- We call the constraint C inclusion rule consistent (w.r.t. E^π) if it is closed under all inclusion rules that are valid for E^π.
- We call a CSP inclusion rule consistent (w.r.t. B) if all its constraints are inclusion rule consistent.
We now have the following result.
Theorem 5.1
Consider a CSP P based on a CSP B. Then P is arc consistent iff it is inclusion rule consistent w.r.t. B.
Example 4 shows that the notions of rule consistency and inclusion rule consistency do not coincide.
In Section 3 we introduced an algorithm that, given a constraint C, generated the set of all minimal rules valid for C. We now modify it to deal with the inclusion rules. First we need to adjust the notions of an extension and of minimality.
Definition 8
Consider a constraint C on a sequence of variables VAR. Let X and X' be two subsequences of VAR such that X' extends X, and let y be a variable of VAR not in X'. Further, let S be a sequence of respective subsets of the domains of the variables from X, S' a sequence of respective subsets of the domains of the variables from X', and d an element from the domain of y.
We say that the inclusion rule X' ⊆ S' → y ≠ d extends X ⊆ S → y ≠ d if for each common variable of X and X' the corresponding element of S' is a subset of the corresponding element of S. We call an inclusion rule minimal if it is feasible and it does not properly extend a valid inclusion rule.
To clarify these notions consider the following example.
Example 5
Consider a constraint fork on the variables x, y, z, each with the domain {+, -, l, r}, that is defined by the following relation:

x y z
+ + +
- - -
l r -
r - l
- l r

This constraint is the so-called fork junction in the language of [10] for describing polyhedral scenes. Note that the following three inclusion rules

r1 := x ⊆ {l, r} → y ≠ +,
r2 := x ⊆ {l} → y ≠ +, and
r3 := x ⊆ {r}, z ⊆ {l} → y ≠ +

are all valid. Then the inclusion rules r2 and r3 extend r1, while the inclusion rule r1 extends neither r2 nor r3. Further, the inclusion rules r2 and r3 are incomparable in the sense that neither extends the other.
The following counterpart of Note 1 holds.
Note 2
Consider two constraints C and E such that C ⊆ E. Then C is closed under all inclusion rules valid for E iff it is closed under all minimal inclusion rules valid for E.
As in Section 3 we now provide an algorithm that, given a constraint C, generates the set of all minimal valid inclusion rules. We assume here that the considered constraint C is defined on a sequence of variables VAR of cardinality n.
Instead of the assignments used in the Rules Generation algorithm we now need a slightly different notion. To define it, for each variable x from VAR we denote the set {t[x] | t ∈ C} by C[x]. By a weak assignment to a sequence of variables X := x1, ..., xk we mean here a sequence S1, ..., Sk of subsets of, respectively, C[x1], ..., C[xk] such that some t ∈ C exists with t[xi] ∈ Si for each i ∈ [1..k].
Intuitively, if we represent the constraint C as a table with rows corresponding to the elements of C and the columns corresponding to the variables of C, and we view each column as a set of elements, then a weak assignment to X is a tuple of subsets of the columns that correspond to the variables of X that "shares" an assignment.
In the algorithm below the weak assignments to a fixed sequence of variables are considered in decreasing order, in the sense that if the weak assignments S := S1, ..., Sk and S' := S1', ..., Sk' are such that Si ⊆ Si' for i ∈ [1..k], then S' is considered first.
Inclusion Rules Generation algorithm
L := empty;
FOR i := 0 TO n-1 DO
  FOR each subset X of VAR of cardinality i DO
    FOR each weak assignment S to X in decreasing order DO
      FOR each y in VAR - X DO
        FOR each element d from the domain of y DO
          r := X ⊆ S → y ≠ d;
          IF r is valid for C and it does not extend an element of L
          THEN insert(r, L)
The following result establishes correctness of this algorithm.
Theorem 5.2
Given a constraint C, the Inclusion Rules Generation algorithm produces in L the set of all minimal valid inclusion rules for C.
6 Applications
In this section we discuss the implementation of the Rules Generation and the Inclusion Rules Generation algorithms and their use on selected domains.
6.1 Constraint Handling Rules (CHR)
In order to validate our approach we have realized in the Prolog platform ECLiPSe a prototype implementation of both the Rules Generation algorithm and the Inclusion Rules Generation algorithm. These implementations generate CHR rules that deal with finite domain variables using an ECLiPSe library.
Constraint Handling Rules (CHR) of [6] is a declarative language that allows one to write guarded rules for rewriting constraints. These rules are repeatedly applied until a fixpoint is reached. The rule applications have a precedence over the usual resolution step of logic programming.
CHR provides two types of rules: simplification rules that replace a constraint by a simpler one, and propagation rules that add new constraints.
Our rules and inclusion rules can be modelled by means of propagation rules. To illustrate this point consider some constraint cons on three variables A, B and C, each with the domain {0, 1, 2}.
The Rules Generation algorithm generates rules such as A = 0, C = 1 → B ≠ 2. This rule is translated into a CHR rule of the form:

cons(0,B,1) ==> B##2.

Now, when a constraint in the program query matches cons(0,B,1), this rule is fired and the value 2 is removed from the domain of the variable B.
In turn, the Inclusion Rules Generation algorithm generates inclusion rules such as A = 0, C ⊆ {1, 2} → B ≠ 2. This rule is translated into the CHR rule

cons(0,B,C) ==> in(C,[1,2]) | B##2

where the in predicate is defined by

in(X,L) :- dom(X,D), subset(D,L).

So in(X,L) holds if the current domain of the variable X (yielded by the built-in dom of ECLiPSe) is included in the list L.
Now, when a constraint matches cons(0,B,C) and the current domain of the variable C is included in [1,2], the value 2 is removed from the domain of B.
So for both types of rules we achieve the desired effect.
In the examples below we combine the rules with the same premise into one rule in an obvious way and present these rules in the CHR syntax.
6.2 Generating the rules
We begin by discussing the generation of rules and inclusion rules for some selected domains. The times given refer to an implementation run on a Silicon Graphics O2 with 64 Mbytes of memory and a 180 MHz processor.
Boolean constraints
As the first example consider the Boolean constraints, for example the conjunction constraint and(X,Y,Z) of Example 2. The Rules Generation algorithm generated in 0.02 seconds the following six rules:
and(1,1,X) ==> X##0.
and(X,0,Y) ==> Y##1.
and(0,X,Y) ==> Y##1.
and(X,Y,1) ==> X##0, Y##0.
and(1,X,0) ==> X##1.
and(X,1,0) ==> X##1.
Because the domains are here binary we can replace the conclusions of the form U ## 0 by U = 1 and U ## 1 by U = 0. These then become the well-known rules that can be found in [5, page 113].
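The pruning behaviour of these rules can be imitated with a small fixpoint loop; the following Python fragment is our sketch (it mimics, but does not implement, the CHR execution model):

```python
def propagate_and(dx, dy, dz):
    """Apply the propagation rules for and(X,Y,Z) to the domains of its
    three arguments until a fixpoint is reached."""
    dx, dy, dz = set(dx), set(dy), set(dz)
    changed = True
    while changed:
        changed = False
        for fires, dom, forbidden in [
            (dx == {1} and dy == {1}, dz, 0),   # and(1,1,X) ==> X##0
            (dy == {0},               dz, 1),   # and(X,0,Y) ==> Y##1
            (dx == {0},               dz, 1),   # and(0,X,Y) ==> Y##1
            (dz == {1},               dx, 0),   # and(X,Y,1) ==> X##0
            (dz == {1},               dy, 0),   # and(X,Y,1) ==> Y##0
            (dx == {1} and dz == {0}, dy, 1),   # and(1,X,0) ==> X##1
            (dy == {1} and dz == {0}, dx, 1),   # and(X,1,0) ==> X##1
        ]:
            if fires and forbidden in dom:
                dom.discard(forbidden)
                changed = True
    return dx, dy, dz

# propagate_and({1}, {0, 1}, {1}) reduces the domain of Y to {1}
```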
In this case, by virtue of Theorem 4.2, the notions of rule and arc consistency coincide, so the above six rules characterize the arc consistency of the and constraint. Our implementations of the Rules Generation and the Inclusion Rules Generation algorithms yield here the same rules.
Three valued logics
Next, consider the three-valued logic of [7, page 334] that consists of three values: t (true), f (false) and u (unknown). We only consider here the crucial equivalence relation equiv, defined by the truth table

equiv | t f u
------+------
  t   | t f u
  f   | f t u
  u   | u u u

that determines a ternary constraint with nine triples. We obtain for it 20 rules and 26 inclusion rules. Typical examples are

equiv(X,Y,f) ==> X##u, Y##u.

and

equiv(t,X,Y) ==> in(Y,[f,u]) | X##t.
Waltz' language for describing polyhedral scenes
Waltz’ language consists of four constraints. One of them, the fork junction was already mentioned in Example 5. The Rules Generation algorithm generated for it 12 rules and the Inclusion Rules Generation algorithm 24 inclusion rules.
Another constraint, the so-called T junction, is defined by the following relation:

x y z
r l +
r l -
r l l
r l r
In this case the Rules Generation algorithm and the Inclusion Rules Generation algorithm both generate the same output that consists of just one rule:
t(X,Y,Z) ==> X##'l', X##'-', X##'+', Y##'r', Y##'-', Y##'+'.
So this rule characterizes both rule consistency and arc consistency for the CSP’s based on the T junction.
For the other two constraints, the L junction and the arrow junction, the generation of the rules and inclusion rules is equally straightforward.
6.3 Using the rules
Next, we show by means of some examples how the generated rules can be used to reduce or to solve specific queries. Also, we show how using compound constraints we can achieve local consistency notions that are stronger than arc consistency for constraints of arbitrary arity.
Waltz’ language for describing polyhedreal scenes
The following predicate describes the impossible object given in Figure 12.18 of [11, page 262]:

imp(AF,AI,AB,IJ,IH,JH,GH,GC,GE,EF,ED,CD,CB) :-
    S1 = [AF,AI,AB,IJ,IH,JH,GH,GC,GE,EF,ED,CD,CB],
    S2 = [FA,IA,BA,JI,HI,HJ,HG,CG,EG,FE,DE,DC,BC],
    append(S1,S2,S),
    S :: [+,-,l,r],
    arrow(AF,AB,AI), l(BC,BA), arrow(CB,CD,CG), l(DE,DC),
    arrow(ED,EG,EF), l(FA,FE), fork(GH,GC,GE), arrow(HG,HI,HJ),
    fork(IA,IJ,IH), l(JH,JI),
    line(AF,FA), line(AB,BA), line(AI,IA), line(IJ,JI), line(IH,HI),
    line(JH,HJ), line(GH,HG), line(FE,EF), line(GE,EG), line(GC,CG),
    line(DC,CD), line(ED,DE), line(BC,CB).

where the supplementary constraint line, which relates the two labellings of a line viewed from its two endpoints, is defined by the following relation:

x y
+ +
- -
l r
r l
When using the rules obtained by the Rules Generation algorithm and associated with the fork, arrow, t, l, and line constraints, the query

imp(AF,AI,AB,IJ,IH,JH,GH,GC,GE,EF,ED,CD,CB)

reduces in 0.009 seconds the variable domains to

AF ∈ [+,-,l], AI ∈ [+,-], AB ∈ [+,-,r], IJ ∈ [+,-,l,r], IH ∈ [+,-,l,r], JH ∈ [+,-,l,r], GH ∈ [+,-,l,r], GC ∈ [+,-,l,r], GE ∈ [+,-,l,r], EF ∈ [+,-], ED ∈ [+,-,l], CD ∈ [+,-,r], and CB ∈ [+,-,l].
But some constraints remain unsolved, so we need to add a labeling mechanism to prove the inconsistency of the problem. On the other hand, when using the inclusion rules, the inconsistency is detected without any labeling in 0.06 seconds.
In the well-known example of the cube given in Figure 12.15 of [11, page 260] the inclusion rules are also more powerful than the rules: both sets of rules reduce the problem, but in both cases labeling is needed to produce all four solutions.
Temporal reasoning
In the approach of [1] to temporal reasoning the entities are intervals and the relations are temporal binary relations between them. [1] found that there are 13 possible temporal relations between a pair of events, namely before, during, overlaps, meets, starts, finishes, the inverses of these six relations, and equal. We denote these 13 relations by b, d, o, m, s, f, their inverses b-, d-, o-, m-, s-, f-, and e, and their set by TEMP.
Consider now three events A, B and C, and suppose that we know the temporal relations between the pairs A and B, and B and C. The question is what is the temporal relation between A and C. To answer it, [1] provided a 13 × 13 table. This table determines a ternary constraint between a triple of events A, B and C that we denote by tr. For example, tr(o, b, b) holds,
since A overlaps B and B is before C implies that A is before C.
Using this table, the Rules Generation algorithm produced for the constraint tr 498 rules in 31.16 seconds.
We tried this set of rules on the following problem from [1]: "John was not in the room when I touched the switch to turn on the light." We have here three events: S, the time of touching the switch; L, the time the light was on; and J, the time that John was in the room. Further, we have two relations: R1 between L and S, and R2 between S and J. This problem is translated into a CSP whose single constraint is the above constraint tr on the variables R1, R2, R3, where R3 is the relation between L and J.
To infer the relation R3 between L and J we can use the following query (since no variable is instantiated, we need to perform labeling to effectively apply the rules):
R1::[o,m], R2::[b,m,b,m], R3::[b,d,o,m,s,f,b,d,o,m,s,f,e], tr(R1,R2,R3), labeling([R1,R2,R3]).
We then obtain the following solutions in 0.06 seconds:
(R1,R2,R3) ∈ {(m,b,b), (m,b,d), (m,b,f), (m,b,m), (m,b,o), (m,b,b), (m,m,e), (m,m,s), (m,m,s), (m,m,b), (o,b,b), (o,b,d), (o,b,f), (o,b,m), (o,b,o), (o,b,b), (o,m,d), (o,m,f), (o,m,o), (o,m,b)}.
To carry on (as in [1]), we now complete the problem with: "But John was in the room later while the light went out." This is translated into: "L overlaps, starts, or is during J", i.e., R3 ∈ [o,s,d].
We now run the following query:
R1::[o,m], R2::[b,m,b,m], R3::[o,s,d], tr(R1,R2,R3), labeling([R1,R2,R3]).
and obtain four solutions in 0.04 seconds:
(R1,R2,R3) ∈ {(m,b,o), (m,m,s), (o,b,o), (o,m,o)}.
Full adder
This final example illustrates how we can use the rules and the inclusion rules to implement more powerful notions of local consistency. The full adder circuit already discussed in Example 1 can be defined by the following constraint logic program (see, e.g., [5]) that uses the Boolean constraints and, xor and or:

add(I1,I2,I3,O1,O2) :-
    [I1,I2,I3,O1,O2,A1,A2,X1] :: 0..1,
    xor(I1,I2,X1), and(I1,I2,A1), xor(X1,I3,O2),
    and(I3,X1,A2), or(A1,A2,O1).
The query add(I1,I2,I3,O1,O2) followed by a labeling mechanism generates the explicit definition (truth table) of the full_adder constraint with eight entries such as full_adder(1,0,1,1,0).
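The same truth table can be computed directly from the circuit equations; the following Python sketch (ours, not the paper's code) enumerates the eight entries:

```python
from itertools import product

def full_adder_table():
    """Enumerate full_adder(I1,I2,I3,O1,O2) via the circuit of add/5."""
    rows = []
    for i1, i2, i3 in product((0, 1), repeat=3):
        x1 = i1 ^ i2          # xor(I1,I2,X1)
        a1 = i1 & i2          # and(I1,I2,A1)
        o2 = x1 ^ i3          # xor(X1,I3,O2): sum bit
        a2 = i3 & x1          # and(I3,X1,A2)
        o1 = a1 | a2          # or(A1,A2,O1): carry bit
        rows.append((i1, i2, i3, o1, o2))
    return rows

# yields eight entries, among them full_adder(1, 0, 1, 1, 0)
```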
We can now generate rules and inclusion rules for the compound constraint (here the full_adder constraint) that is defined by means of some basic constraints (here the and, or and xor constraints). These rules refer to the compound constraint and allow us to reason about it directly instead of by using the rules that deal with the basic constraints.
In the case of the full_adder constraint the Rules Generation algorithm generated 52 rules in 0.27 seconds. The constraint propagation carried out by means of these rules is more powerful than the one carried out by means of the rules generated for the and, or and xor constraints. For example, the query [X,Y,Z]::[0,1], full_adder(1,X,Y,Z,0) reduces Z to 1, whereas the query [X,Y,Z]::[0,1], add(1,X,Y,Z,0) does not reduce Z at all.
This shows that rule consistency for a compound constraint defined by means of basic constraints is in general stronger than rule consistency for the basic constraints treated separately. In fact, in the above case the rules for the full_adder constraint yield the relational (1,5)-consistency notion of [4], whereas by virtue of Theorem 4.2, the rules for the and, or and xor constraints yield the weaker notion of arc consistency.
7 Conclusions
The aim of this paper was to show that constraint satisfaction problems built out of explicitly given constraints defined over small finite domains can be often solved by means of automatically generated constraint propagation algorithms.
We argued that such CSP's often arise in practice and consequently the methods developed here can be of practical use. Currently we are investigating how the approach of this paper can be applied to a study of various decision problems concerning specific multivalued logics and how this in turn could be used for an analysis of digital circuits. Other applications we are now studying involve nonlinear constraints over small finite domains and the analysis of polyhedral scenes in the presence of shadows (see [10]).
The introduced notion of rule consistency is weaker than arc consistency and can in some circumstances be the more appropriate one to use. For example, for the case of temporal reasoning considered in the last section we easily generated all 498 rules that enforce rule consistency, whereas 24 hours turned out not to be enough to generate the inclusion rules that enforce arc consistency.
Finally, the notions of rule consistency and inclusion rule consistency could be parametrized by the desired maximal number of variables used in the rule premises. Such parametrized versions of these notions could be useful when dealing with constraints involving a large number of variables. Both the Rules Generation algorithm and the Inclusion Rules Generation algorithm and their implementations can be trivially adapted to such parametrized notions.
The approach proposed in this paper could be easily integrated into constraint logic programming systems such as ECLiPSe. This could be done by providing automatic constraint propagation by means of the rules or the inclusion rules for flagged predicates that are defined by a list of ground facts, much in the same way as constraint propagation for linear constraints over finite domains is now automatically provided.
Acknowledgements
We would like to thank Thom Frühwirth, Andrea Schaerf and the anonymous referees for useful suggestions concerning this paper.
References
 [1] J.F. Allen. Maintaining knowledge about temporal intervals. Communications of ACM, 26(11):832–843, 1983.

[2]
K. R. Apt.
The essence of constraint propagation.
Theoretical Computer Science, 221(1–2):179–210, 1999.
Available via
http://xxx.lanl.gov/archive/cs/
. 
[3]
M. Dalal.
Efficient Propositional Constraint Propagation.
In Proceedings of the
National Conference on Artificial Intelligence, AAAI’92
, pages 409–414, 1992. San Jose, California.  [4] R. Dechter and P. van Beek. Local and global relational consistency. Theoretical Computer Science, 173(1):283–308, 20 February 1997.
 [5] T. Frühwirth. Theory and practice of constraint handling rules. Journal of Logic Programming, 37(1–3):95–138, October 1998. Special Issue on Constraint Logic Programming (P. Stuckey and K. Marriot, Eds.).
 [6] Thom Frühwirth. Constraint Handling Rules. In Andreas Podelski, editor, Constraint Programming: Basics and Trends, LNCS 910, pages 90–107. SpringerVerlag, 1995. (ChâtillonsurSeine Spring School, France, May 1994).
 [7] S. C. Kleene. Introduction to Metamathematics. van Nostrand, New York, 1952.
 [8] A. Mackworth. Consistency in networks of relations. Artificial Intelligence, 8(1):99–118, 1977.
 [9] R. Mohr and G. Masini. Good old discrete relaxation. In Y. Kodratoff, editor, Proceedings of the 8th European Conference on Artificial Intelligence (ECAI), pages 651–656. Pitman Publishers, 1988.

[10]
D. L. Waltz.
Generating semantic descriptions from drawings of scenes with
shadows.
In P. H. Winston, editor,
The Psychology of Computer Vision
. McGraw Hill, 1975.  [11] P.H. Winston. Artificial Intelligence. AddisonWesley, Reading, Massachusetts, third edition, 1992.