# View-based Propagator Derivation

When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear constraints both with unit and non-unit coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, but will deliver better performance than resorting to constraint decomposition. This paper shows how to use views to derive perfect propagator variants. A model for views and derived propagators is introduced. Derived propagators are proved to be indeed perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and type conversion are developed. The paper introduces an implementation architecture for views that is independent of the underlying constraint programming system. A detailed evaluation of views implemented in Gecode shows that derived propagators are efficient and that views often incur no overhead. Without views, Gecode would either require 180 000 rather than 40 000 lines of propagator code, or would lack many efficient propagator variants. Compared to 8 000 lines of code for views, the reduction in code for propagators yields a 1750% return on investment.


## 1 Introduction

When implementing a propagator for a constraint, one typically must also decide whether to implement some of its variants. When implementing a propagator for the constraint max(x, y) = z, should one also implement min(x, y) = z? The latter can be implemented using the former as min(x, y) = −max(−x, −y). When implementing a propagator for the linear equation ∑i ai·xi = c for integer variables xi and integers ai and c, should one also implement the special case ∑i xi = c for better performance? When implementing a propagator for the reified linear equation (∑i xi = c) ⟺ b, should one also implement (∑i xi ≠ c) ⟺ b? These two constraints only differ by the sign of b, as the latter is equivalent to (∑i xi = c) ⟺ ¬b.

The two straightforward approaches for implementing constraint variants are to either implement dedicated propagators for the variants, or to decompose. In the last example, for instance, the reified constraint could be decomposed into two propagators, one for t = ∑i xi, and one for (t = c) ⟺ b, introducing an additional variable t.

Implementing the variants inflates code and documentation and is error prone. Given the potential code explosion, one may be able to only implement some variants (say, min and max). Other variants important for performance (say, ternary and n-ary variants) may be infeasible due to excessive programming and maintenance effort. Decomposing, on the other hand, massively increases memory consumption and runtime.

This paper introduces a third approach: deriving propagators from already existing propagators using views. This approach combines the efficiency of dedicated propagator implementations with the simplicity and effortlessness of decomposition.

###### Example 1 (Deriving a minimum propagator)

Consider a propagator for the constraint max(x, y) = z. Given three additional propagators for x′ = −x, y′ = −y, and z′ = −z, we could propagate the constraint min(x, y) = z using the propagator for max(x′, y′) = z′. Instead, this paper proposes to derive a propagator using views that perform the simple transformations corresponding to the three additional propagators.

Views transform input and output of a propagator. For example, a minus view on a variable x transforms the variable domain of x by negating each element, passes the transformed domain to the propagator, and performs the inverse transformation on the domain returned by the propagator. With views, the implementation of the maximum propagator can be reused: a propagator for the minimum constraint can be derived from a propagator for the maximum constraint and a minus view for each variable.
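
The minus-view derivation above can be played through on explicit set domains. This is an illustrative Python encoding with names we introduce (`max_prop`, `negate`, `min_prop`), not Gecode's actual implementation:

```python
# Hypothetical sketch: domains are dicts mapping variable names to value sets.
# A max propagator prunes for max(x, y) = z; minus views negate each variable
# domain before and after propagation, deriving a propagator for min(x, y) = z.

def max_prop(d):
    """Prune domains w.r.t. max(x, y) = z by filtering supported values."""
    sols = [(a, b, c) for a in d['x'] for b in d['y'] for c in d['z']
            if max(a, b) == c]
    return {'x': {a for a, _, _ in sols},
            'y': {b for _, b, _ in sols},
            'z': {c for _, _, c in sols}}

def negate(d):
    """Minus view on every variable: negate each domain element."""
    return {v: {-n for n in dom} for v, dom in d.items()}

def min_prop(d):
    # derived propagator: negate domains, run max, negate back
    return negate(max_prop(negate(d)))

d = {'x': {1, 2, 3}, 'y': {2}, 'z': {1}}
print(min_prop(d))  # min(x, 2) = 1 forces x = 1
```

The derived `min_prop` reuses all of `max_prop`'s logic; only the two domain transformations are new.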

This paper contributes an implementation-independent model for views and derived propagators, techniques for deriving propagators, concrete implementation techniques, and an evaluation that shows that views are widely applicable, drastically reduce programming effort, and yield an efficient implementation.

More specifically, we identify the properties of views that are essential for deriving perfect propagators. The paper establishes a formal model that defines a view as a function and a derived propagator as functional composition of views (mapping values to values) with a propagator (mapping domains to domains). This model yields all the desired results: derived propagators are indeed propagators; they faithfully implement the intended constraints; domain consistency carries over to derived propagators; different forms of bounds consistency over integer variables carry over provided that the views satisfy additional yet natural properties.

We introduce techniques for deriving propagators that use views for transformation, generalization, specialization, and type conversion of propagators. We show how to apply these techniques for different variable domains using various views and how views can be used for the derivation of dual scheduling propagators.

We present and evaluate different implementation approaches for views and derived propagators. An implementation using parametric polymorphism (such as templates in C++) is shown to incur no or very low overhead. The architecture is orthogonal to the underlying constraint programming system and has been fully implemented in Gecode [10]. We analyze how successful the use of derived propagators has been for Gecode.

#### Plan of the paper.

Section 2 introduces constraints and propagators. Section 3 establishes views and propagator derivation. Section 4 presents propagator derivation techniques. Section 5 describes an implementation architecture based on parametric propagators and range iterators. Section 6 discusses limitations of views. The implementation is evaluated in Section 7, and Section 8 concludes.

## 2 Preliminaries

This section introduces constraints, propagators, and propagation strength.

#### Variables, constraints, and domains.

Constraint satisfaction problems use a finite set of variables X and a finite set of values V. We typically write variables as x, y, z and values as v, w.

A solution of a constraint satisfaction problem assigns a single value to each variable. A constraint restricts which assignments of values to variables are allowed.

###### Definition 1 (Assignments and constraints)

An assignment a is a function mapping variables to values. The set of all assignments is Asn = X → V. A constraint c is a set of assignments, c ∈ Con = 𝒫(Asn) (we write 𝒫(S) for the power set of S). Any assignment a ∈ c is a solution of c.

Constraints are defined on assignments as total functions on all variables. For a typical constraint, only a subset of the variables is significant; the constraint is the full relation for all other variables. Constraints are either written as sets of assignments (for example, {(x↦1, y↦2)}) or as expressions with the usual meaning, using the notation ⟦·⟧ (for example, ⟦x = y + z⟧).

###### Example 2 (Sum constraint)

Let X = {x, y, z} and V = {1, 2, 3, 4}. The constraint ⟦x = y + z⟧ corresponds to the following set of assignments:

 ⟦x=y+z⟧ = {(x↦a, y↦b, z↦c) | a, b, c ∈ V ∧ a = b + c}
 = {(x↦2, y↦1, z↦1), (x↦3, y↦1, z↦2), (x↦3, y↦2, z↦1), (x↦4, y↦1, z↦3), (x↦4, y↦2, z↦2), (x↦4, y↦3, z↦1)}

###### Definition 2 (Domains)

A domain d is a function mapping variables to sets of values, such that d(x) ⊆ V for all x ∈ X. The set of all domains is Dom. The set of values in d for a particular variable x, d(x), is called the variable domain of x. A domain d represents a set of assignments, a constraint, defined as

 con(d)={a∈Asn|∀x∈X:a(x)∈d(x)}

An assignment a ∈ con(d) is licensed by d.

Domains thus represent Cartesian sets of assignments. In this sense, any domain is also a constraint. For a more uniform representation, we take the liberty to use domains as constraints. In particular, a ∈ d (instead of a ∈ con(d)) denotes an assignment licensed by d, and d ∩ c denotes con(d) ∩ c.

A domain d that maps some variable to the empty value set is failed, written d = ∅, as it represents no valid assignments (con(d) = ∅). A domain d representing a single assignment a, con(d) = {a}, is assigned, and is written as d = {a}.

###### Definition 3 (Constraint satisfaction problems)

A constraint satisfaction problem (CSP) is a pair ⟨d, C⟩ of a domain d and a set of constraints C. The solutions of a CSP are the assignments licensed by d that satisfy all constraints in C, defined as sol(⟨d, C⟩) = {a ∈ d | ∀c ∈ C : a ∈ c}.

#### Propagators.

A propagation-based constraint solver employs propagators to implement constraints. A propagator p for a constraint c takes a domain d as input and removes values from the variable domains in d that are in conflict with c.

A domain d′ is stronger than a domain d, written d′ ⊆ d, if and only if d′(x) ⊆ d(x) for all x ∈ X. A domain d′ is strictly stronger than a domain d, written d′ ⊂ d, if and only if d′ is stronger than d and d′(x) ≠ d(x) for some variable x. The goal of constraint propagation is to prune values from variable domains, thus inferring stronger domains, without removing solutions of the constraints.

A propagator is a function that takes a domain as its argument and returns a stronger domain; it may only prune assignments. If the original domain is an assigned domain {a}, the propagator either accepts it (returning {a}) or rejects it (returning ∅), realizing a decision procedure for its constraint. The pruning and the decision procedure must be consistent: if the decision procedure accepts an assignment, the pruning procedure must never remove this assignment from any domain. This property is enforced by requiring propagators to be monotonic.

###### Definition 4 (Propagators)

A propagator is a function p ∈ Dom → Dom that is

• contracting: p(d) ⊆ d for any domain d;

• monotonic: p(d′) ⊆ p(d) for any domains d′ ⊆ d.

The set of all propagators is Prop. If a propagator p returns a strictly stronger domain (p(d) ⊂ d), we say that p prunes the domain d. The propagator p induces the unique constraint cp defined by the set of assignments accepted by p:

 cp={a∈Asn|p({a})={a}}

Propagators can also be idempotent (p(p(d)) = p(d) for any domain d). Idempotency is not required to make propagation sound or complete, but it can make propagation more efficient [33]. Like idempotency, monotonicity as defined here is not necessary for soundness or completeness of a solver [34]. Most definitions and theorems in this paper are independent of whether propagators are monotonic or not. Non-monotonicity will thus only be discussed where it is relevant.

#### Propagation strength.

Each propagator induces a single constraint, but different propagators can induce the same constraint, differing in strength. Typical examples are propagators for the all-different constraint that perform naive pruning when variables are assigned, or establish bounds consistency [26] or domain consistency [30].

In the literature, propagation strength is usually defined as a property of a domain in relation to a constraint. For example, a domain d is domain-consistent (also known as generalized arc-consistent) with respect to a constraint c if, for each variable x, d(x) only contains values that appear in at least one solution of c licensed by d. As this paper is concerned with propagators, propagation strength is defined with respect to a propagator.

A propagator p is domain-complete for a constraint c if any domain it returns is domain-consistent with respect to c. For any constraint c, there is exactly one domain-complete propagator for c (as domains form a lattice). It is defined as p(d) = dom(c ∩ d), where dom(c) is the domain relaxation of c, the strongest domain that contains all assignments of c: dom(c) = min{d ∈ Dom | c ⊆ con(d)}.
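
For tiny examples, the unique domain-complete propagator dom(c ∩ d) can be computed by brute force. The following is an illustrative Python sketch (the names `con`, `dom`, and `domain_complete` are ours, mirroring the definitions above):

```python
# Toy model: a constraint is a set of value tuples ordered by xs, a domain is
# a dict of value sets. The domain-complete propagator is p(d) = dom(c ∩ d).

from itertools import product

def con(d, xs):
    """All assignments licensed by domain d, as tuples ordered by xs."""
    return set(product(*(d[x] for x in xs)))

def dom(c, xs):
    """Domain relaxation: strongest domain containing all assignments of c."""
    return {x: {a[i] for a in c} for i, x in enumerate(xs)}

def domain_complete(c, xs):
    def p(d):
        return dom(c & con(d, xs), xs)
    return p

xs = ('x', 'y', 'z')
c = {(a, b, a + b) for a in range(5) for b in range(5)}  # x + y = z
p = domain_complete(c, xs)
print(p({'x': {1, 2}, 'y': {1, 3}, 'z': {2, 3}}))
# solutions within d are (1,1,2) and (2,1,3), so y is pruned to {1}
```

Enumerating con(d) is exponential; real propagators implement the same pruning with dedicated algorithms.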

For constraints over integer variables (V ⊆ Z), several weaker notions of propagation strength are known. The most well-known is bounds consistency, which in fact can mean one of four special cases: range, bounds(D), bounds(Z), and bounds(R) consistency (as discussed in [7, 28]).

The first three differ in whether holes are ignored in the original domain, in the resulting domain, or in both, in that order. Holes in a domain are ignored by the function hull, which returns the convex hull of a variable domain in Z. Bounds(R) consistency only requires solutions to be found in the real-valued relaxation of the constraint (written cR), and is defined using the real-valued convex hull and domain relaxation (written hullR and domR). The different notions of bounds consistency give rise to the respective definitions of bounds completeness.

###### Definition 5 (Bounds completeness)

A propagator p is

• range-complete if and only if p(d) ⊆ dom(cp ∩ hull(d)),

• bounds(D)-complete if and only if p(d) ⊆ hull(dom(cp ∩ d)),

• bounds(Z)-complete if and only if p(d) ⊆ hull(dom(cp ∩ hull(d))), and

• bounds(R)-complete if and only if p(d) ⊆ hullR(domR((cp)R ∩ hull(d)))

for any domain d.

## 3 Views

This section defines views and proves properties of view-derived propagators.

### 3.1 Views and Derived Propagators

Given a propagator p, a view is represented by two functions, φ and φ−, that can be composed with p such that φ− ∘ p ∘ φ is the desired derived propagator. The function φ transforms the input domain, and φ− applies the inverse transformation to the propagator’s output domain.

###### Definition 6 (Variable views and views)

A variable view φx for a variable x is an injective function φx : V → V′ mapping values to values. The set V′ may be different from V, and the corresponding sets of assignments, domains, constraints, and propagators are called Asn′, Dom′, Con′, and Prop′, respectively.

Given a family of variable views φx for all x ∈ X, we lift them point-wise to assignments: φAsn(a)(x) = φx(a(x)). A view φ : Con → Con′ is a family of variable views, lifted to constraints: φ(c) = {φAsn(a) | a ∈ c}. The inverse of a view is defined as φ−(c′) = {a ∈ Asn | φAsn(a) ∈ c′}.

###### Definition 7 (Derived propagators and constraints)

Given a propagator p ∈ Prop′ and a view φ, the derived propagator φ̂(p) is defined as φ̂(p) = φ− ∘ p ∘ φ. Similarly, a derived constraint is defined to be φ−(c) for a given c ∈ Con′.

###### Example 3 (Scale views)

Given a propagator p for the constraint c = ⟦x = y⟧, we want to derive a propagator for ⟦x = 2y⟧ using a view φ such that φ−(c) = ⟦x = 2y⟧.

Intuitively, the function φ leaves x as it is and scales y by 2, while φ− does the inverse transformation. We thus define φx(v) = v and φy(v) = 2v. That clarifies the need for different sets V and V′, as V′ must contain all elements of V multiplied by 2.

The derived propagator is φ̂(p) = φ− ∘ p ∘ φ. We say that φ̂(p) “uses a scale view on” y, meaning that φy is the function defined as φy(v) = 2v. Similarly, using an identity view on x amounts to φx being the identity function on V.

Given the assignment a = (x↦2, y↦1), we first apply φAsn and get φAsn(a) = (x↦2, y↦2). This is accepted by p and returned unchanged, so φ− transforms it back to a. Another assignment a′ = (x↦1, y↦2) is transformed to φAsn(a′) = (x↦1, y↦4), rejected (p({φAsn(a′)}) = ∅), and the empty domain is mapped to the empty domain by φ−. The propagator φ̂(p) induces the constraint φ−(⟦x = y⟧) = ⟦x = 2y⟧.
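
The scale-view example can be played through in a toy set-domain encoding (illustrative Python; `p_eq`, `phi`, and `phi_inv` are names we introduce):

```python
# phi scales y by 2, phi_inv divides by 2 (dropping odd values, which have no
# preimage), and the derived propagator for x = 2y is phi_inv . p_eq . phi.

def p_eq(d):
    """Domain-complete propagator for x = y."""
    common = d['x'] & d['y']
    return {'x': set(common), 'y': set(common)}

def phi(d):
    return {'x': d['x'], 'y': {2 * v for v in d['y']}}

def phi_inv(d):
    return {'x': d['x'], 'y': {v // 2 for v in d['y'] if v % 2 == 0}}

def derived(d):
    return phi_inv(p_eq(phi(d)))

print(derived({'x': {2, 3, 4}, 'y': {1, 2, 3}}))
# x = 2y within these domains holds only for (2, 1) and (4, 2)
```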

### 3.2 Correctness of Derived Propagators

Derived propagators are well-defined and correct: a derived propagator φ̂(p) is in fact a propagator, and it induces the desired constraint (cφ̂(p) = φ−(cp)). The proofs of these statements employ the following direct consequences of the definitions of views:

1. (P1) φ and φ− are monotonic by construction (as φAsn and φ− are defined point-wise).

2. (P2) φ− ∘ φ = id (the identity function, as φAsn is injective).

3. (P3) φ({a}) = {φAsn(a)} and φ−({φAsn(a)}) = {a}.

4. (P4) For any view φ and domain d, we have φ(d) ∈ Dom′, and for any d′ ∈ Dom′, we have φ−(d′) ∈ Dom (as views are defined point-wise).

###### Proposition 1 (Correctness)

For a propagator p and view φ, φ̂(p) = φ− ∘ p ∘ φ is a propagator.

###### Proof.

The derived propagator is well-defined because both φ(d) and φ−(p(φ(d))) are domains (see P4 above). We have to show that φ̂(p) is contracting and monotonic.

For contraction, we have p(φ(d)) ⊆ φ(d) as p is contracting. From monotonicity of φ− (with P1), it follows that φ−(p(φ(d))) ⊆ φ−(φ(d)). As φ−(φ(d)) = d (with P2), we have φ−(p(φ(d))) ⊆ d, which proves that φ̂(p) is contracting.

Monotonicity is shown as follows for all domains d, d′ with d′ ⊆ d:

 φ(d′) ⊆ φ(d) (φ monotonic, P1)
 ⟹ p(φ(d′)) ⊆ p(φ(d)) (p monotonic)
 ⟹ φ−(p(φ(d′))) ⊆ φ−(p(φ(d))) (φ− monotonic, P1)

In summary, for any propagator p, φ̂(p) = φ− ∘ p ∘ φ is a propagator.

Non-monotonic propagators as defined in [34] must at least be weakly monotonic, which means that p({a}) ⊆ p(d) for all domains d and assignments a ∈ d. The above proof can be easily adjusted to weakly monotonic propagators by replacing d′ with {a} and using P3 in the second line of the proof.

###### Proposition 2 (Induced constraints)

Let p ∈ Prop′ be a propagator, and let φ be a view. Then φ̂(p) induces the constraint cφ̂(p) = φ−(cp).

###### Proof.

As p induces cp, we know that p({φAsn(a)}) = {φAsn(a)} if and only if φAsn(a) ∈ cp, for all assignments a. With φ({a}) = {φAsn(a)} (P3), we have φ̂(p)({a}) = φ−(p({φAsn(a)})). Furthermore, we know that p({φAsn(a)}) is either {φAsn(a)} or ∅.

• Case p({φAsn(a)}) = ∅: We have φ̂(p)({a}) = φ−(∅) = ∅.

• Case p({φAsn(a)}) = {φAsn(a)}: With P3, we have φ̂(p)({a}) = φ−({φAsn(a)}) = {a} and hence a ∈ cφ̂(p). Furthermore, φAsn(a) ∈ cp, so a ∈ φ−(cp).

Together, this shows that cφ̂(p) = φ−(cp).

Another important property is that views preserve contraction: if a propagator prunes a domain, the pruning will not be lost after the inverse transformation φ−.

###### Proposition 3 (Views preserve contraction)

Let p be a propagator, let φ be a view, and let d be a domain such that p(φ(d)) ⊂ φ(d). Then φ̂(p)(d) ⊂ d.

###### Proof.

The definition of φ is φ(d) = {φAsn(a) | a ∈ d}. Hence, every assignment in φ(d) is the image of some a ∈ d under φAsn. Similarly, we know that φ−(φ(d)) = d (P2). From p(φ(d)) ⊂ φ(d), it follows that there is an a ∈ d with φAsn(a) ∉ p(φ(d)). Together, this yields a ∉ φ−(p(φ(d))) = φ̂(p)(d). We have seen in Proposition 1 that φ̂(p)(d) ⊆ d, so we can conclude that φ̂(p)(d) ⊂ d.

### 3.3 Completeness of Derived Propagators

Ideally, a propagator derived from a domain- or bounds-complete propagator should inherit its completeness. This turns out not to be true in general for all notions of completeness and all views. This section first shows how bounds(Z) completeness is inherited, and then generalizes this result to the other notions.

The key insight is that completeness of propagators derived using a view φ depends on whether φ and φ− commute with the hull operator, as defined below.

###### Definition 8

A constraint c ∈ Con′ is a φ-constraint for a view φ if and only if for all a ∈ c, there is a b ∈ Asn such that a = φAsn(b). A view φ is hull-injective if and only if φ−(hull(dom(c))) = hull(dom(φ−(c))) for all φ-constraints c. It is hull-surjective if and only if hull(φ(d)) = φ(hull(d)) for all domains d. It is hull-bijective if and only if it is hull-injective and hull-surjective.
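
These notions can be probed concretely. The following toy Python check (our encoding of hull on a single variable domain) shows that a scale view φ(v) = 2v is not hull-surjective over the integers: hull(φ(d)) contains odd values that φ(hull(d)) lacks.

```python
# Single-variable sketch: hull returns the integer convex hull of a value set.

def hull(s):
    return set(range(min(s), max(s) + 1))

phi = lambda s: {2 * v for v in s}  # scale view with coefficient 2

d = {1, 3}
print(sorted(hull(phi(d))))  # hull({2, 6}) = {2, 3, 4, 5, 6}
print(sorted(phi(hull(d))))  # phi({1, 2, 3}) = {2, 4, 6}
```

Since the two results differ, hull and φ do not commute, which is exactly what hull-surjectivity would require.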

The proofs rely on the additional fact that views commute with set intersection.

###### Lemma 1

For any view φ, the equation φ−(c1 ∩ c2) = φ−(c1) ∩ φ−(c2) holds.

###### Proof.

By definition of φ−, we have

 φ−(c1∩c2)={a∈Asn|φAsn(a)∈c1∧φAsn(a)∈c2}

As φAsn is a function, this is equal to

 {a∈Asn|φAsn(a)∈c1}∩{a∈Asn|φAsn(a)∈c2}=φ−(c1)∩φ−(c2)

###### Theorem 1 (Bounds(Z) completeness)

Let p be a bounds(Z)-complete propagator. For any hull-bijective view φ, the propagator φ̂(p) is bounds(Z)-complete.

###### Proof.

From Proposition 2, we know that φ̂(p) induces the constraint φ−(cp). By monotonicity of φ− (with P1) and bounds(Z) completeness of p, we know that

 φ−∘p∘φ(d)⊆φ−(hull(dom(cp∩hull(φ(d)))))

We now use the fact that both φ and φ− commute with hull and set intersection:

 φ−(hull(dom(cp∩hull(φ(d)))))
 = φ−(hull(dom(cp∩φ(hull(d))))) (hull-surjective)
 = hull(dom(φ−(cp∩φ(hull(d))))) (hull-injective)
 = hull(dom(φ−(cp)∩φ−(φ(hull(d))))) (commute with ∩)
 = hull(dom(φ−(cp)∩hull(d))) (P2)

The second step uses hull-injectivity, so it requires cp ∩ φ(hull(d)) to be a φ-constraint. All assignments in a φ-constraint have to be the image of some assignment under φAsn. This is the case here, as the intersection with φ(hull(d)) can only contain such assignments. So in summary, we get

 φ−∘p∘φ(d) ⊆ hull(dom(φ−(cp)∩hull(d)))

which is the definition of φ̂(p) being bounds(Z)-complete.

#### Stronger notions of completeness.

Similar theorems hold for domain completeness as well as range and bounds(D) completeness. The theorems directly follow from the fact that any view is domain-injective, meaning that φ−(dom(c)) = dom(φ−(c)) for all constraints c. We split this statement into the following two lemmas.

###### Lemma 2

Given a constraint c, let d = dom(c). Then for all x ∈ X and v ∈ V, we have v ∈ d(x) ⟺ ∃a ∈ c : a(x) = v.

###### Proof.

We prove both directions of the equivalence:

• ⟹: There must be such an assignment because otherwise one could construct a strictly stronger d′ ⊂ d with v ∉ d′(x) such that still c ⊆ con(d′), contradicting that dom(c) is the strongest domain containing c.

• ⟸: Each domain d′ with c ⊆ con(d′) must contain the value v in d′(x), as a ∈ c and a(x) = v. So for the result of the intersection, d = dom(c), we have v ∈ d(x).

###### Lemma 3

Any view is domain-injective.

###### Proof.

We have to show that φ−(dom(c)) = dom(φ−(c)) holds for any constraint c and any view φ. For clarity, we write the equation including the implicit con operations: φ−(con(dom(c))) = con(dom(φ−(c))). By definition of φ− and con, we have

 φ−(con(dom(c)))
 = {a ∈ Asn | ∀x ∈ X : φAsn(a)(x) ∈ dom(c)(x)}
 = {a ∈ Asn | ∀x ∈ X ∃b ∈ c : φAsn(a)(x) = b(x)} (Lemma 2)

As φx is an injective function, we can find such a b that is in the range of φAsn if and only if there is also a b′ ∈ φ−(c) such that a(x) = b′(x). Therefore, we get

 {a ∈ Asn | ∀x ∈ X ∃b′ ∈ φ−(c) : a(x) = b′(x)}
 = {a ∈ Asn | ∀x ∈ X : a(x) ∈ dom(φ−(c))(x)}
 = con(dom(φ−(c)))

The following three theorems express under which conditions the different notions of completeness are preserved when deriving propagators. The proofs for these theorems are analogous to the proof of Theorem 1, using Lemma 3.

###### Theorem 2 (Bounds(D) completeness)

Let p be a bounds(D)-complete propagator. For any hull-injective view φ, the propagator φ̂(p) is bounds(D)-complete.

###### Theorem 3 (Range completeness)

Let p be a range-complete propagator. For any hull-surjective view φ, the propagator φ̂(p) is range-complete.

###### Theorem 4 (Domain completeness)

Let p be a domain-complete propagator, and let φ be a view. Then φ̂(p) is domain-complete.

A propagator derived from a bounds(Z)-complete propagator and a hull-injective but not hull-surjective view is only bounds(R)-complete. This is exactly what we would expect from a propagator for linear equations, as the next example demonstrates.

###### Example 4 (Linear constraints)

A propagator for a linear constraint ∑i xi = c and scale views (see Example 3) yield a propagator for a linear constraint ∑i ai·xi = c with coefficients ai.

The usual propagator for a linear constraint with coefficients achieves bounds(R) consistency in linear time [15]. However, it is bounds(Z)-complete for unit coefficients. Hence, the above-mentioned property applies: The propagator for ∑i xi = c is bounds(Z)-complete, scale views are only hull-injective, so the derived propagator for ∑i ai·xi = c is bounds(R)-complete. Implementing the simpler propagator without coefficients and deriving the variant with coefficients yields propagators with exactly the same runtime complexity and propagation strength as manually implemented propagators.
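
The derivation for linear constraints can be sketched over interval domains. This is a hedged toy model assuming strictly positive coefficients; `sum_eq_bounds` and `scaled_sum_eq` are illustrative names, and the single propagation pass stands in for the real fixed-point computation:

```python
# Bounds propagation for sum(x_i) = c on interval domains (lo, hi), and a
# derived version for sum(a_i * x_i) = c using scale views: multiply bounds
# by a_i, propagate, and apply the inverse transformation with rounding.

import math

def sum_eq_bounds(doms, c):
    """One bounds-propagation pass for sum of variables = c."""
    out = []
    for i, (lo, hi) in enumerate(doms):
        rest_lo = sum(l for j, (l, h) in enumerate(doms) if j != i)
        rest_hi = sum(h for j, (l, h) in enumerate(doms) if j != i)
        out.append((max(lo, c - rest_hi), min(hi, c - rest_lo)))
    return out

def scaled_sum_eq(doms, coeffs, c):
    scaled = [(a * lo, a * hi) for a, (lo, hi) in zip(coeffs, doms)]
    pruned = sum_eq_bounds(scaled, c)
    # inverse view keeps only representable (integral) bounds
    return [(math.ceil(lo / a), math.floor(hi / a))
            for a, (lo, hi) in zip(coeffs, pruned)]

# 2x + 3y = 12 with x, y in [0, 10]
print(scaled_sum_eq([(0, 10), (0, 10)], [2, 3], 12))  # x in [0,6], y in [0,4]
```

The rounding in the inverse transformation is where propagation strength is lost relative to reasoning over the integers directly, matching the discussion above.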

### 3.4 Additional Properties of Derived Propagators

This section discusses how views can be composed, and how derived propagators behave with respect to idempotency and subsumption.

#### View composition.

A derived propagator permits further derivation. Consider a propagator p and two views φ, ψ. Then φ̂(ψ̂(p)) = φ− ∘ ψ− ∘ p ∘ ψ ∘ φ is a perfectly acceptable derived propagator, and properties like correctness and completeness carry over transitively. For instance, we can derive a propagator from a propagator for ⟦x = y⟧, combining an offset view (v ↦ v + o) and a minus view (v ↦ −v) on y. This yields a propagator for ⟦x = o − y⟧.
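
Composition can be played through in a toy set-domain encoding (illustrative Python; the wrapper `wrap_y` is our name, and the offset is fixed to 3 for concreteness):

```python
# A propagator for x = y is wrapped first by an offset view on y (giving
# x = y + 3), and the result is wrapped by a minus view on y, yielding a
# derived propagator for x = 3 - y.

def p_eq(d):
    common = d['x'] & d['y']
    return {'x': set(common), 'y': set(common)}

def wrap_y(p, f, f_inv):
    """Derive phi_inv . p . phi for a view acting on variable y only."""
    def derived(d):
        out = p({'x': d['x'], 'y': {f(v) for v in d['y']}})
        return {'x': out['x'], 'y': {f_inv(v) for v in out['y']}}
    return derived

p_offset = wrap_y(p_eq, lambda v: v + 3, lambda v: v - 3)  # x = y + 3
p_comp = wrap_y(p_offset, lambda v: -v, lambda v: -v)      # x = 3 - y

print(p_comp({'x': {0, 1, 2}, 'y': {0, 1, 5}}))
# only (x, y) = (2, 1) satisfies x = 3 - y within these domains
```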

#### Fixed points.

Schulte and Stuckey [33] show how to optimize the scheduling of propagators that are known to be at a fixed point. Views preserve fixed points of propagators, so the same optimizations apply to derived propagators.

###### Proposition 4

Let p be a propagator, let φ be a view, and let d be a domain. If p(φ(d)) is a fixed point of p, then φ̂(p)(d) is a fixed point of φ̂(p).

###### Proof.

Assume p(p(φ(d))) = p(φ(d)). We have to show φ̂(p)(φ̂(p)(d)) = φ̂(p)(d). With the assumption, we can write φ̂(p)(d) = φ−(p(p(φ(d)))). We know that φ(φ−(c)) = c if all assignments in c are images under φAsn. As we first apply φ, this is the case here, so we can add φ ∘ φ− in the middle, yielding φ−∘p∘φ∘φ−∘p∘φ(d). With function composition being associative, this is equal to φ̂(p)(φ̂(p)(d)).

#### Subsumption.

A propagator p is subsumed (also known as entailed) by a domain d if and only if for all stronger domains d′ ⊆ d, p(d′) = d′. Subsumed propagators cannot do any pruning in the remaining subtree of the search, and can therefore be removed. Deciding subsumption is coNP-complete in general, but for many practically relevant propagators an approximation can be decided easily (such as when a domain becomes assigned). The following proposition states that the approximation is also valid for the derived propagator.

###### Proposition 5

Let p be a propagator and let φ be a view. The propagator φ̂(p) is subsumed by a domain d if and only if p is subsumed by φ(d).

###### Proof.

With P2 we get that φ̂(p)(d′) = d′ is equivalent to φ−(p(φ(d′))) = φ−(φ(d′)). As φ− is a function, and because it preserves contraction (see Proposition 3), this is equivalent to p(φ(d′)) = φ(d′). This can be rewritten to p(d″) = d″ for all d″ = φ(d′) with d′ ⊆ d, because all such d″ are subsets of φ(d).

### 3.5 Related Work

While the idea to systematically derive propagators using views is novel, there are a few related approaches we can point out. Reusing functionality (like a propagator) by wrapping it in an adaptor (like a view) is of course a much more general technique—think of higher-order functions like fold or map in functional programming languages; or chaining command-line tools in Unix operating systems using pipes.

#### Propagator derivation.

Views that perform arithmetic transformations are related to the concept of indexicals (see [5, 36]). An indexical is a propagator that prunes a single variable and is defined in terms of range expressions. In contrast to views, range expressions can involve multiple variables, but on the other hand only operate in one direction. For instance, in an indexical for the constraint x = y + z, the range expression min(y) + min(z) .. max(y) + max(z) would be used to prune the domain of x, but not for pruning the domains of y or z. Views must work in both directions, which is why they are limited in expressiveness.

Unit propagation in SAT solvers performs propagation for Boolean clauses, which are disjunctions of literals, which in turn are positive or negated Boolean variables. In implementations such as MiniSat [9], the Boolean clause propagator is in fact derived from a simple n-ary disjunction propagator and literal views of the variables that perform negation for the negative literals.
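
The literal-view idea can be sketched as follows. This is a toy Python model (not MiniSat code); `or_true` and `clause` are illustrative names:

```python
# Boolean domains are subsets of {0, 1}. or_true propagates a plain n-ary
# disjunction; clause derives propagation for arbitrary literals by negating
# the selected variable domains before and after (literal views).

def or_true(doms):
    """Propagate x1 or ... or xn = true: if all but one are false, force it."""
    if any(d == {1} for d in doms):
        return doms  # already satisfied
    undecided = [i for i, d in enumerate(doms) if d == {0, 1}]
    if len(undecided) == 1 and all(
            d == {0} for i, d in enumerate(doms) if i != undecided[0]):
        doms = list(doms)
        doms[undecided[0]] = {1}  # unit propagation
    return doms

def clause(doms, signs):
    """Clause propagation; signs[i] False means the literal is negated."""
    view = [d if s else {1 - v for v in d} for d, s in zip(doms, signs)]
    out = or_true(view)
    return [d if s else {1 - v for v in d} for d, s in zip(out, signs)]

# (x1 or not x2 or x3) with x1 = 0 and x3 = 0 forces x2 = 0
print(clause([{0}, {0, 1}, {0}], [True, False, True]))
```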

#### Constraint composition.

Instead of regarding a view φ as transforming a constraint c, one can regard φ as additional constraints, implementing the decomposition. Assuming X = {x1, …, xn}, we use additional variables x′1, …, x′n. Instead of φ−(c), we use c′, which is the same relation as c, but on the x′i. Finally, view constraints ci link the original variables to the new variables, each ci being equivalent to the relation x′i = φxi(xi). The solutions of the decomposition model, restricted to the xi, are exactly the solutions of the original view-based model.

Every view constraint ci shares exactly one variable with c′ and no variable with any other cj. Thus, the constraint graph is Berge-acyclic [3], and a fixed point can be computed by first propagating all the ci, then propagating c′, and then again propagating the ci. This is exactly what φ− ∘ p ∘ φ does. Constraint solvers typically do not provide any means of specifying the propagator scheduling in such a fine-grained way (Lagerkvist and Schulte show how to use propagator groups to achieve this [20]). Thus, deriving propagators using views is also a technique for specifying perfect propagator scheduling.

On a more historical level, a derived propagator is related to the notion of path consistency. A domain is path-consistent for a set of constraints if, for any subset {x, y, z} of its variables, v ∈ d(x) and w ∈ d(y) implies that there is a value u ∈ d(z) such that the pair (v, w) satisfies all the (binary) constraints between x and y, the pair (v, u) satisfies all the (binary) constraints between x and z, and the pair (w, u) satisfies all the (binary) constraints between y and z [21]. If p is domain-complete for c′, then φ̂(p) achieves path consistency for the constraint c′ and all the view constraints ci in the decomposition model.

## 4 Propagator Derivation Techniques

This section introduces techniques for deriving propagators using views. The techniques capture the transformation, generalization, specialization, and type conversion of propagators and are shown to be widely applicable across variable domains and application areas.

### 4.1 Transformation

#### Boolean connectives.

For Boolean variables, where V = {0, 1}, the only view apart from identity captures negation. A negation view on x defines φx(0) = 1 and φx(1) = 0. As already noted in Section 3.5, deriving propagators using negation views thus means to propagate using literals rather than variables.

The obvious application of negation views is to derive propagators for all Boolean connectives from just three propagators. A negation view for x in ⟦x = y⟧ yields a propagator for ⟦¬x = y⟧. From disjunction ⟦x ∨ y = z⟧ one can derive conjunction ⟦x ∧ y = z⟧ with negation views on x, y, and z, and implication with a negation view on x. From equivalence one can derive exclusive or with a negation view on x.

As Boolean constraints are widespread, it pays off to optimize frequently occurring cases of propagators for Boolean connectives. One important propagator is for ⟦x1 ∨ ⋯ ∨ xn = z⟧ with arbitrarily many variables. Again, conjunction can be derived with negation views on the xi and on z. Another important propagator implements the constraint ⟦x1 ∨ ⋯ ∨ xn = 1⟧. A dedicated propagator for this constraint is essential as the constraint occurs frequently and can be implemented efficiently using watched literals, see for example [12]. With views all implementation work is readily reused for conjunction. This shows a general advantage of views: effort put into optimizing a single propagator directly pays off for all other propagators derived from it.

#### Boolean cardinality.

Like the constraint ⟦x1 ∨ ⋯ ∨ xn = 1⟧, the Boolean cardinality constraint ⟦∑i xi ≥ c⟧ occurs frequently and can be implemented efficiently using watched literals (requiring c + 1 watched literals; Boolean disjunction corresponds to the case where c = 1). But also a propagator for ⟦∑i xi ≤ c⟧ can be derived using negation views on the xi with the following transformation:

 ∑i xi ≤ c ⟺ −∑i xi ≥ −c ⟺ n − ∑i xi ≥ n − c ⟺ ∑i (1 − xi) ≥ n − c ⟺ ∑i ¬xi ≥ n − c
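
The transformation can be played through on explicit Boolean domains. This is an illustrative Python sketch with assumed names, not an efficient watched-literal implementation:

```python
# Boolean domains are subsets of {0, 1}. at_least propagates sum(x_i) >= c;
# at_most derives sum(x_i) <= c via negation views on every x_i, using
# sum(not x_i) >= n - c.

def at_least(doms, c):
    """If forcing x_i = 0 makes c unreachable, prune 0 from its domain."""
    out = []
    for i, dom in enumerate(doms):
        rest_max = sum(max(d) for j, d in enumerate(doms) if j != i)
        out.append({v for v in dom if v + rest_max >= c})
    return out

def at_most(doms, c):
    n = len(doms)
    neg = [{1 - v for v in d} for d in doms]    # negation views
    pruned = at_least(neg, n - c)               # sum(not x_i) >= n - c
    return [{1 - v for v in d} for d in pruned] # inverse views

# with two variables already 1 and c = 2, the remaining ones must be 0
print(at_most([{0, 1}, {1}, {1}, {0, 1}], 2))
```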

#### Reification.

Many reified constraints (such as ⟦(∑i xi = c) ⟺ b⟧) also exist in a negated version (such as ⟦(∑i xi ≠ c) ⟺ b⟧). Deriving the negated version is trivial by using a negation view on the Boolean control variable b. This contrasts nicely with the effort without views: either the entire code must be duplicated or the parts that perform checking whether the constraint or its negation is subsumed must be factored out and combined differently for the two variants.

#### Transformation using set views.

Set constraints deal with variables whose values are finite sets. Using complement views (analogous to Boolean negation, as sets with their usual operations also form a Boolean algebra) on x and y with a propagator for ⟦x ⊆ y⟧ yields a propagator for ⟦y ⊆ x⟧. A complement view on y alone yields ⟦x ∩ y = ∅⟧.

#### Transformation using integer views.

The obvious integer equivalent to negation views for Boolean variables are minus views: a minus view on an integer variable x is defined as φx(v) = −v. Minus views help to derive propagators following simple transformations: for example, ⟦min(x, y) = z⟧ can be derived from ⟦max(x, y) = z⟧ by using minus views for x, y, and z.

Transformations through minus views can improve performance in subtle ways. Consider a bounds(Z)-complete propagator for multiplication ⟦x · y = z⟧ (for example, [1, Section 6.5] or [32]). Propagation depends on whether zero is still included in the domains of x, y, or z. Testing for inclusion of zero each time the propagator is executed is inefficient and leads to a convoluted implementation. Instead, one would like to rewrite the propagator to special variants where x, y, and z are either strictly positive or strictly negative. These variants can propagate more efficiently, in particular because propagation can easily be made idempotent. Instead of implementing three different propagators (x, y, and z strictly positive; only x or y strictly positive; only z strictly positive), a single propagator assuming that all views are strictly positive is sufficient. The other propagators can be derived using minus views.
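
The sign-variant derivation can be sketched on interval domains. This is a hedged toy model: `mul_ppp` assumes all three intervals strictly positive, and `mul_nnp` derives the variant with x and y negative (z positive) via minus views:

```python
# Interval domains are (lo, hi) pairs. mul_ppp performs one bounds pass for
# x * y = z with x, y, z > 0; mul_nnp negates x and y before and after.

def mul_ppp(dx, dy, dz):
    (xl, xh), (yl, yh), (zl, zh) = dx, dy, dz
    zl, zh = max(zl, xl * yl), min(zh, xh * yh)
    xl, xh = max(xl, -(-zl // yh)), min(xh, zh // yl)  # ceiling / floor div
    yl, yh = max(yl, -(-zl // xh)), min(yh, zh // xl)
    return (xl, xh), (yl, yh), (zl, zh)

def mul_nnp(dx, dy, dz):
    # minus views on x and y: negate intervals, propagate, negate back
    neg = lambda iv: (-iv[1], -iv[0])
    px, py, pz = mul_ppp(neg(dx), neg(dy), dz)
    return neg(px), neg(py), pz

print(mul_nnp((-6, -2), (-4, -3), (2, 8)))
# x is fixed to -2, z is tightened to [6, 8]
```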

Again, with views it becomes realistic to optimize a single implementation of a propagator and derive other, equally optimized, implementations. The effort to implement all required specialized versions without views is typically unrealistic.

#### Scheduling propagators.

An important application area is constraint-based scheduling, see for example [2]. Many propagation algorithms for constraint-based scheduling are based on tasks, where a task is characterized by its start time, processing time (how long does the task take to be executed on a resource), and end time. Scheduling algorithms are typically expressed in terms of earliest start time (est), latest start time (lst), earliest completion time (ect), and latest completion time (lct).

Another particular aspect of scheduling algorithms is that they are often required in two, mutually dual, variants. Let us consider not-first/not-last propagation as an example. Assume a set of tasks T and a task t ∉ T to be scheduled on the same resource. Then t cannot be scheduled before the tasks in T (t is not-first in T) if ect(t) > lst(T), where lst(T) is a conservative estimate of the latest start time of all tasks in T. Hence, est(t) can be adjusted to leave some room for at least one task from T. The dual variant is that t is not-last: if lst(t) < ect(T) (where ect(T), again, estimates the earliest completion time of the tasks in T), then lct(t) can be adjusted.

Running the dual variant of a scheduling algorithm on tasks t is the same as running the original algorithm on the dual tasks t′, which are simply mirrored at the 0-origin of the time scale (see Figure 1):

 est(t′) = −lct(t)   ect(t′) = −lst(t)   lst(t′) = −ect(t)   lct(t′) = −est(t)

The dual variant of a scheduling propagator can be automatically derived using a minus view that transforms the time values. In our example, only a propagator for not-first needs to be implemented, and the propagator for not-last can be derived (or vice versa). This is particularly beneficial if the algorithms use sophisticated data structures such as Θ-trees [37]: the data structure, too, needs to be implemented only once, and the dual data structure for the dual propagator is derived.
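As a small illustration (with hypothetical names), the mirroring can be written as a pure transformation on task bounds; the not-last condition on a task is then exactly the not-first condition on its dual:

```cpp
#include <cassert>

// A task characterized by its four derived time bounds.
struct Task { int est, ect, lst, lct; };

// Mirror a task at the 0-origin of the time scale, as in the text:
// est(t') = -lct(t), ect(t') = -lst(t), lst(t') = -ect(t), lct(t') = -est(t).
Task dual(const Task& t) {
  return Task{ -t.lct, -t.lst, -t.ect, -t.est };
}

// Not-first test for a task u against an estimate lstT of lst(T):
// u cannot be scheduled first if ect(u) > lst(T).
bool not_first(const Task& u, int lstT) { return u.ect > lstT; }

// The dual reading: u is not-last iff dual(u) is not-first with
// respect to the mirrored estimate, i.e. iff lst(u) < ect(T).
bool not_last(const Task& u, int ectT) { return not_first(dual(u), -ectT); }
```

Note that `dual` is an involution: applying it twice gives the original task back, which is why a single implementation suffices for both directions.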

### 4.2 Generalization

Common views for integer variables capture linear transformations of the integer values: an offset view for o ∈ ℤ on x is defined as x + o, and a scale view for a ∈ ℤ on x is defined as a · x.

Offset and scale views are useful for generalizing propagators. Generalization has two key advantages: simplicity and efficiency. A more specialized propagator is often simpler to implement (and simpler to implement correctly) than a generalized version. The specialized version can save memory and runtime during execution.

We can devise an efficient propagation algorithm for the linear equality constraint Σᵢ xᵢ = c for the common case that the linear equation has only unit coefficients. The more general case Σᵢ aᵢ · xᵢ = c can be derived by using scale views for the aᵢ on the xᵢ (the same technique of course applies to linear inequalities and disequations rather than equalities). Similarly, a propagator for alldifferent(x₁, …, xₙ) can be generalized to alldifferent(c₁ + x₁, …, cₙ + xₙ) by using offset views for the cᵢ on the xᵢ. Likewise, from a propagator for the element constraint a[x] = y for an array of integers a and integer variables x and y, we can derive the generalized version a[x + o] = y with an offset view, where o provides a useful offset for the index variable x.
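The following sketch (illustrative names, not Gecode code) shows one round of bounds propagation for a sum-equals-constant propagator written against views; wrapping the variables in scale views is all that is needed to obtain the non-unit-coefficient variant:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Var { int lo, hi; };

// Floor/ceiling division, so that tightening bounds of a*x rounds safely.
int floor_div(int n, int d) {
  int q = n / d;
  return (n % d != 0 && ((n < 0) != (d < 0))) ? q - 1 : q;
}
int ceil_div(int n, int d) { return -floor_div(-n, d); }

struct IntView {                 // identity view: the unit-coefficient case
  Var* x;
  int min() const { return x->lo; }
  int max() const { return x->hi; }
  void le(int n) { x->hi = std::min(x->hi, n); }
  void ge(int n) { x->lo = std::max(x->lo, n); }
};

struct ScaleView {               // a*x for a > 0, as in the text
  int a; Var* x;
  int min() const { return a * x->lo; }
  int max() const { return a * x->hi; }
  void le(int n) { x->hi = std::min(x->hi, floor_div(n, a)); }
  void ge(int n) { x->lo = std::max(x->lo, ceil_div(n, a)); }
};

// One round of bounds propagation for v1 + ... + vn = c over views
// (a real propagator would iterate this to a fixpoint).
template<class View>
void prop_linear_eq(std::vector<View> v, int c) {
  int smin = 0, smax = 0;
  for (auto& vi : v) { smin += vi.min(); smax += vi.max(); }
  for (auto& vi : v) {
    vi.le(c - (smin - vi.min()));   // vi <= c - sum of the other minima
    vi.ge(c - (smax - vi.max()));   // vi >= c - sum of the other maxima
  }
}
```

Instantiating `prop_linear_eq` with `IntView` gives the unit-coefficient propagator; instantiating it with `ScaleView` gives the generalized one, with the coefficient arithmetic compiled in.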

These generalizations can be applied to domain- as well as bounds-complete propagators. While most Boolean propagators are domain-complete, bounds completeness plays an important role for integer propagators. Section 3.3 shows that, given appropriate hull-surjective and/or hull-injective views, the different notions of bounds consistency are preserved when deriving propagators.

The views for integer variables presented in this section have the following properties: minus and offset views are hull-bijective, whereas a scale view for a on x is always hull-injective but only hull-surjective if a = 1 or a = −1 (in which case it coincides with the identity view or a minus view, respectively).

### 4.3 Specialization

We employ constant views to specialize propagators. A constant view behaves like an assigned variable. In practice, specialization has two advantages: fewer variables require less memory, and specialized propagators can be compiled to more efficient code if the constants are known at compile time.

Examples of specialization are

• a propagator for the binary linear inequality x + y ≤ c, derived from a propagator for x + y + z ≤ c by using a constant 0 for z;

• a reified propagator for (x = c) ⇔ b, derived from a propagator for (x = y) ⇔ b and a constant c for y;

• propagators for the counting constraints |{i | xᵢ = c}| = z and |{i | xᵢ = y}| = c, derived from a propagator for |{i | xᵢ = y}| = z;

• a propagator for set disjointness, derived from a propagator for x ∩ y = z and a constant empty set for z; and many more.
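The first example above can be sketched as follows (names are illustrative, not Gecode's API): a constant view answers the bounds interface as a variable assigned to k would, so plugging it into a ternary inequality propagator yields the binary variant:

```cpp
#include <algorithm>
#include <cassert>

struct Var { int lo, hi; };

struct IntView {                 // identity view on a variable
  Var* x;
  int min() const { return x->lo; }
  void le(int n) { x->hi = std::min(x->hi, n); }
};

struct ConstView {               // behaves like a variable assigned to k
  int k;
  int min() const { return k; }
  void le(int) { /* would signal failure if n < k; omitted in this sketch */ }
};

// Bounds propagation for u + v + w <= c, written once over views.
template<class VU, class VV, class VW>
void prop_leq3(VU u, VV v, VW w, int c) {
  u.le(c - v.min() - w.min());
  v.le(c - u.min() - w.min());
  w.le(c - u.min() - v.min());
}

// Binary x + y <= c derived by specializing z to the constant 0.
void prop_leq2(Var& x, Var& y, int c) {
  prop_leq3(IntView{&x}, IntView{&y}, ConstView{0}, c);
}
```

Since the constant is known at compile time, the compiler can fold the `ConstView` calls away entirely, which is exactly the efficiency advantage claimed for specialization.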

For constant views, the model has to be extended in a straightforward way. Propagators may now be defined with respect to a superset X ∪ {z} of the variables X. A constant view for the value k on the additional variable z translates between the two sets of variables:

 φ(c) = {a[k/z] | a ∈ c}   φ⁻(c) = {a|X | a ∈ c}

Here, a[k/z] means augmenting the assignment a so that it maps z to k, and a|X is the functional restriction of a to the set X.

It is important that this definition preserves failure. If a propagator returns a failed domain c that maps to the empty set, then φ⁻(c) is the empty set, too (recall that φ⁻(c) is really {a|X | a ∈ c}, which is ∅ if c = ∅).

### 4.4 Type Conversion

A type conversion view lets propagators for one type of variable work with a different type by translating the underlying representation. Our model already accommodates this, as a view maps elements between two different sets of values.

#### Integer views.

Boolean variables are essentially integer variables restricted to the values {0, 1}. Constraint programming systems may choose a more efficient implementation for Boolean variables, and hence the variable types for integer and Boolean variables differ. By wrapping an efficient Boolean variable in an integer view, all integer propagators can be directly reused with Boolean variables. This can save substantial effort: for example, an implementation of the regular constraint for Boolean variables can be derived, which is actually useful in practice [19].
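A sketch of the idea (the names and the three-state representation are illustrative, not Gecode's actual classes): a compact Boolean variable is wrapped so that an integer propagator written against views sees an integer in {0, 1}:

```cpp
#include <cassert>

// A compact Boolean variable: a single byte with three states.
struct BoolVar {
  enum State : unsigned char { ZERO, ONE, UNKNOWN };
  State state = UNKNOWN;
};

// Type conversion view: presents a BoolVar through the integer
// bounds interface that integer propagators are written against.
struct BoolIntView {
  BoolVar* b;
  int min() const { return b->state == BoolVar::ONE ? 1 : 0; }
  int max() const { return b->state == BoolVar::ZERO ? 0 : 1; }
  void ge(int n) { if (n >= 1) b->state = BoolVar::ONE; }   // n > 1 would be failure
  void le(int n) { if (n <= 0) b->state = BoolVar::ZERO; }  // n < 0 would be failure
};
```

Any integer propagator that is templated over its views can now be instantiated with `BoolIntView` and thus run directly on the compact Boolean representation (failure signaling is omitted here for brevity).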

#### Singleton set views.

A singleton set view on an integer variable x, defined as {x}, presents an integer variable as a set variable. Many constraints involve both integer and set variables, and some of them can be expressed with singleton set views. A simple constraint is x ∈ y, where x is an integer variable and y is a set variable. Singleton set views derive it as {x} ⊆ y. This extends to {x} r y for any other set relation r.
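As an illustrative sketch (names assumed, and set-variable bounds simplified to explicit sets): a singleton view presents an integer variable's domain as the bounds of a set variable, so a subset propagator directly implements x ∈ y:

```cpp
#include <cassert>
#include <set>

// A set variable approximated by bounds: glb ⊆ S ⊆ lub.
struct SetVar { std::set<int> glb, lub; };

struct IntVar { std::set<int> dom; };

// Singleton set view {x}: the lower bound is {x} once x is assigned,
// empty otherwise; the upper bound is x's domain.
struct SingletonView {
  IntVar* x;
  std::set<int> glb() const {
    return x->dom.size() == 1 ? x->dom : std::set<int>{};
  }
  const std::set<int>& lub() const { return x->dom; }
  void restrict_lub(const std::set<int>& s) {  // enforce view ⊆ s
    for (auto it = x->dom.begin(); it != x->dom.end(); )
      it = s.count(*it) ? std::next(it) : x->dom.erase(it);
  }
};

// Subset propagation v ⊆ s on a singleton view implements x ∈ s:
// prune values of x outside lub(s), and force glb(v) into glb(s).
void prop_in(SingletonView v, SetVar& s) {
  v.restrict_lub(s.lub);
  for (int n : v.glb()) s.glb.insert(n);
}
```

The subset propagator itself never knows that one of its "set variables" is really an integer variable behind a view.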

Singleton set views can also be used to derive pure integer constraints from set propagators. For example, the constraint same(x₁, …, xₙ, y₁, …, yₘ) with integer variables states that the variables xᵢ take the same values as the variables yⱼ. With singleton set views, {x₁} ∪ ⋯ ∪ {xₙ} = {y₁} ∪ ⋯ ∪ {yₘ} implements this constraint (albeit with weaker propagation than the algorithm presented in [4]).

#### Set bounds and complete set domain variables.

Most systems approximate set variable domains as set intervals defined by lower and upper bounds [25, 13]. However, [16] introduces a representation for the complete domains of set variables, using ROBDDs. Type conversion views can translate between set interval and ROBDD-based implementations. We can derive a propagator on ROBDD-based variables from a set interval propagator, and thus reuse set interval propagators for which no efficient ROBDD representation exists.