 # Propagators and Violation Functions for Geometric and Workload Constraints Arising in Airspace Sectorisation

Airspace sectorisation provides a partition of a given airspace into sectors, subject to geometric constraints and workload constraints, so that some cost metric is minimised. We make a study of the constraints that arise in airspace sectorisation. For each constraint, we give an analysis of what algorithms and properties are required under systematic search and stochastic local search.


## 1 Introduction

We continue our work in [11, 12] on applying constraint programming (see Section 3) to airspace sectorisation (see Section 2). In that work, we used existing constraints or implemented new ones in an ad hoc fashion just to solve the problem at hand, without much theoretical analysis of the constraints and their underlying algorithms.

We give a complete theoretical analysis of constraints that arise in airspace sectorisation. For each constraint, we analyse what propagation is possible under systematic search. Under stochastic local search, we give efficient algorithms for maintaining constraint and variable violations, as well as efficient algorithms for probing the effect of local search moves; such algorithms are necessary if stochastic local search is to be done efficiently.

The remainder of this report is structured as follows. Towards making it self-contained, we first give in Section 2 a brief overview of airspace sectorisation, and in Section 3 a brief tutorial on constraint programming, especially about its core concept: a constraint is a reusable software component that can be used declaratively, when modelling a combinatorial problem, and procedurally, when solving the problem. Experts on airspace sectorisation or CP may safely skip these sections. Next, in Section 4, we lay the mathematical foundation for specifying, in Section 5, the constraints identified above to occur commonly in airspace sectorisation problems. Finally, in Section 6, we summarise our contributions and identify avenues for future work.

## 2 Background: Airspace Sectorisation

Airspace sectorisation provides a partition of a given airspace into a given (or upper-bounded, or minimal) number of control sectors, subject to geometric constraints and workload constraints, so that some cost metric is minimised. The entire material of this section is a suitably condensed version of Section 2 of our report , where we have surveyed the algorithmic aspects of methods for automatic airspace sectorisation, for an intended readership of experts on air traffic management.

In this report, we focus on region-based models of airspace sectorisation, where the airspace is initially partitioned into some kind of regions that are smaller than the targeted sectors, so that the combinatorial problem of partitioning these regions in principle needs no geometric post-processing step. Sectorisation then starts from regions of (any combination of) the following granularities: a mesh of blocks of the same size and shape; ATC functional blocks (AFBs); elementary sectors, namely the ones of the existing sectorisation; control sectors; areas of specialisation (AOS); and air traffic control centres (ATCC).

We assume in this report that airspace sectorisation is computed in three dimensions; the ideas scale down to two dimensions. To simplify the discussion, we assume without loss of generality in this report that the number of sectors is a given constant, rather than upper-bounded or to be minimised.

Airspace sectorisation aims at satisfying some constraints. The following constraints have been found in the literature, so that a subset thereof is chosen for a given tool:

• Balanced workload: The workload of each sector must be within some given imbalance factor of the average across all sectors.

• Bounded workload: The workload of each sector must not exceed some upper bound.

• Balanced size: The size of each sector must be within some given imbalance factor of the average across all sectors.

• Minimum dwell time: Every flight entering a sector must stay within it for a given minimum amount of time (say two minutes), so that the coordination work pays off and that conflict management is possible.

• Minimum distance: Each existing trajectory must be inside each sector by a minimum distance (say ten nautical miles), so that conflict management is entirely local to sectors.

• Convexity: A sector must be convex, where convexity can be in the usual geometric sense, or trajectory-based (no flight enters the same sector more than once), or more complex.

• Connectedness: A sector must be a contiguous portion of airspace and can thus not be fragmented into a union of unconnected portions of airspace.

• Compactness: A sector must have a geometric shape that is easy to keep in mind.

• Non-jagged boundaries: A sector must have a boundary that is not too jagged.

For each sector, there are three kinds of workload: the monitoring workload, the conflict workload, and the coordination workload; the first two workloads occur inside the sector, and the third one between the sector and an adjacent sector. The quantitative definition of workload varies strongly between papers.

Airspace sectorisation often aims at minimising some cost. The following costs have been found in the literature, so that a subset thereof is combined into the cost function for a given tool, the subset being empty if sectorisation is not seen as an optimisation problem:

• Coordination cost: The cost of the total coordination workload between the sectors must be minimised.

• Transition cost: The cost of switching from the old sectorisation to the new one must be minimised.

• Workload imbalance: The imbalance between the workload of the sectors must be minimised.

• Number of sectors: The number of sectors must be minimised.

• Entry points: The total number of entry points into the sectors must be minimised.

• If any of the constraints above is soft, then there is the additional cost of minimising the number of violations of soft constraints.

Constraints are to be satisfied, hence the existence or not of a cost function whose value is to be optimised does not affect the design of a constraint, in the sense discussed in the following section. We therefore do not discuss cost functions further in this report.

## 3 Background: Constraint Programming

First, we show how to model combinatorial problems at a very high level of abstraction with the help of the declarative notion of constraint (Section 3.1). One distinguishes between satisfaction problems, where there is no objective function, and optimisation problems, where there is an objective function. Then, we describe two methods for solving combinatorial problems by a mixture of inference and search, upon reusing algorithms that implement these constraints (Section 3.2). The entire material of this section is a condensed version of our tutorial in Section 2 of .

### 3.1 Problem Modelling via Constraints

Constraint programming (CP) is a successful approach to the modelling and solving of combinatorial problems. Its core idea is to capture islands of structure, also called combinatorial sub-structures, that are commonly occurring within such problems and to encapsulate declaratively their procedural inference algorithms by designing specialised reusable software components, called constraints.

As a running example, let us consider the Kakuro puzzle, with the caveat that this is not meant to imply that the discussed techniques are limited to (small) puzzles. See the left side of Figure 1: Each word of a Kakuro puzzle is made of non-zero digits, under the two constraints that the letters of each word are pairwise distinct and add up to the number to the left (for horizontal words) or on top (for vertical words) of the word. Hence there is one decision variable for each letter of the puzzle, with the integer set {1, …, 9} as domain. This is a satisfaction problem, as there is no objective function. The solution to this puzzle is given on the right side of Figure 1; note that a Kakuro puzzle is always designed so as to have a unique solution.

Beside the usual binary comparison constraints (=, ≠, <, ≤, >, ≥), with arguments involving the usual binary arithmetic operators (+, −, ×, etc.), a large number of combinatorial constraints (usually called global constraints in the literature), involving more complex structures and a non-fixed number of arguments, have been identified.

For example, the AllDifferent({x1, …, xn}) constraint requires its decision variables x1, …, xn to take pairwise different values. This constraint is useful in many challenging real-life problems, as well as for modelling the first constraint of the Kakuro puzzle, as there is an AllDifferent constraint on each word. Each such constraint wraps a conjunction of binary disequality constraints into a single n-ary constraint; this enables a more global view of the structure of the problem, which is a prerequisite for solving it efficiently (as we shall see in Section 3.2).

As another example, the Linear({x1, …, xn}, RelOp, s) constraint requires the sum of the decision variables x1, …, xn to be in relation RelOp with the constant s. This constraint can be used for modelling the second constraint of the Kakuro puzzle, as there is a Linear({x1, …, xn}, =, s) constraint for each word with required sum s. The generalised Linear constraint requires the weighted sum of the decision variables x1, …, xn to be in relation RelOp with the constant s, where the weights are constants.

Hundreds of useful constraints have been identified and are described in the Global Constraint Catalogue , the most famous ones being element , cumulative , alldifferent , and cardinality .

Constraints can be combined to model a combinatorial problem in a declarative and very high-level fashion. For example, the Kakuro puzzle can essentially be modelled as follows, assuming that a hint for word [x1, …, xn] with required sum s is encoded as the pair ⟨[x1, …, xn], s⟩:

 for each hint ⟨[x1, …, xn], s⟩ : AllDifferent({x1, …, xn}) ∧ Linear({x1, …, xn}, =, s)   (1)
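As an illustration of model (1), a brute-force sketch (no CP solver is assumed; the function name is ours) enumerates the solutions of a single hint, that is, the assignments satisfying both the AllDifferent and the Linear constraint:

```python
from itertools import permutations

def kakuro_hint_solutions(n, s):
    """All ways to fill a word of n cells with pairwise-distinct
    non-zero digits (AllDifferent) summing to s (Linear)."""
    return [p for p in permutations(range(1, 10), n) if sum(p) == s]

# A word of length 2 with required sum 4 admits (1, 3) and (3, 1);
# note that (2, 2) is excluded by the AllDifferent constraint.
print(kakuro_hint_solutions(2, 4))
```

A real CP solver would of course not enumerate, but prune domains as described in Section 3.2.1.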

For combinatorial problems, CP is not limited to decision variables whose domains are finite sets of integers. For instance, domains can be finite sets of finite sets of integers, and we then speak of set decision variables and set constraints. We call universe the union of the sets in the domain of a set decision variable; the set value eventually taken by a set decision variable is then a subset of the universe.

In the following sub-section, we show that constraints are not just a convenience for high-level modelling, but can also be exploited during the solution process.

### 3.2 Problem Solving by Inference and Search

By intelligent search, we mean search where at least some form of inference takes place at every step of the search in order to reduce the cost of brute-force search:

 combinatorial problem solving = search + inference + ⋯

In the following, we discuss two CP ways of solving problems that have been modelled using constraints. The classical approach is to perform systematic search (Section 3.2.1), and a more recent approach is to trade for time the guarantees of systematic search by performing stochastic local search (Section 3.2.2): the two approaches use completely different forms of inference, which is encapsulated in reusable fashion within the constraints (Section 3.2.3).

#### 3.2.1 Systematic Search

Classically, CP solves combinatorial problems by systematic tree search, together with backtracking, and performs at every node of the search tree a particular kind of inference called propagation. For the purpose of this report, we only need to explain the propagation of a single constraint here: we refer to  for how to propagate multiple constraints and how systematic search is conducted.

For each individual constraint, a propagation algorithm (or propagator) prunes the domains of its decision variables by eliminating impossible values according to some desired level of consistency. For example, under domain consistency (DC) every domain value of every decision variable participates in some solution to the constraint that involves domain values of the other decision variables. Also, under bound consistency (BC) every domain bound of every decision variable participates in some solution to the constraint that involves domain values of the other decision variables.

For example, consider the constraint AllDifferent({x, y, z}). Let x and y range over the domain {1, 3} and z over {1, 2, 3, 4}: we denote this state by ⟨{1, 3}, {1, 3}, {1, 2, 3, 4}⟩. From this state, propagation to DC leads to the state ⟨{1, 3}, {1, 3}, {2, 4}⟩ since x and y must split the values 1 and 3 between themselves so that z cannot take any of these two values. From the same start state ⟨{1, 3}, {1, 3}, {1, 2, 3, 4}⟩, propagation to BC leads to the state ⟨{1, 3}, {1, 3}, {2, 3, 4}⟩ since there do exist solutions to the constraint where z takes its new lower bound value 2 or its old (and new) upper bound value 4, so that the unfeasibility of the intermediate value 3 is not even checked. Note that, in this case, the resulting DC state is strictly stronger than the resulting BC state: while the initial state encodes a set of 2 · 2 · 4 = 16 candidate solutions, the BC state encodes a subset thereof with 12 candidate solutions, and the DC state encodes a subset of both with only 8 candidate solutions, including the 4 solutions. From a second start state ⟨{1, 2}, {1, 2}, {1, 2}⟩, propagation to DC or BC leads to the propagator signalling failure, because it is impossible to assign two values to three variables so that the latter are pairwise distinct. From a third start state ⟨{1, 2}, {3, 4}, {5, 6}⟩, propagation to DC or BC leads to no propagation, but the propagator can simultaneously detect that all candidate solutions actually are solutions, so that the propagator can signal subsumption (or entailment).
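As a sketch of DC propagation for AllDifferent (exponential enumeration, for illustration only; real propagators use matching algorithms), with our own naming:

```python
from itertools import product

def propagate_dc_alldifferent(domains):
    """Domain consistency for AllDifferent by enumeration: keep a
    value only if some all-distinct solution uses it; return None
    to signal failure (no solution at all)."""
    solutions = [t for t in product(*domains) if len(set(t)) == len(t)]
    if not solutions:
        return None
    return [sorted({t[i] for t in solutions}) for i in range(len(domains))]

# x and y range over {1, 3} and z over {1, 2, 3, 4}:
# DC prunes z to {2, 4}, as x and y use up the values 1 and 3.
print(propagate_dc_alldifferent([[1, 3], [1, 3], [1, 2, 3, 4]]))
```

Failure is detected on, for example, three variables over the two-value domain {1, 2}, where `None` is returned.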

The propagation of a constraint amounts to reasoning with possible domain values, but there is no obligation to prune all the impossible domain values, as just witnessed when comparing DC and BC. If a constraint has multiple propagators achieving different strengths of consistency (under different time complexities), then there is a default propagator but the modeller may also choose one of them, possibly via experiments to find out which one leads to the best trade-off in search effort; this is the first non-declarative annotation that may be added to an otherwise declarative constraint model (and we will encounter a second one below). Also, the propagator of a constraint only reasons locally, namely about the decision variables of that constraint, rather than globally, about all the decision variables of the entire problem.

#### 3.2.2 Stochastic Local Search

Systematic search (as just described) explores the whole search space, though not by explicitly trying all possible combinations of domain values for the decision variables, but implicitly thanks to the interleaving of search with inference. Suitable values are found one-by-one for the decision variables. Systematic search offers the guarantee of eventually finding a solution (or finding and proving an optimal solution, in the case of an optimisation problem), if one exists, and proving unsatisfiability otherwise. However, this may take too long and it may be more interesting in some situations to find quickly solutions that may violate some constraints (or may be sub-optimal). The idea of stochastic local search (SLS; see , for example) is to trade this guarantee for speed by not exploring the whole search space. Unsatisfiability of the constraints is a priori not detectable by SLS, and optimality of solutions is a priori not provable by SLS.

SLS starts from a possibly random assignment of domain values to all the decision variables, without concern for whether some constraints are violated. It then tries to find a better assignment (in the sense of violating fewer constraints, or violating some constraints less, or yielding a better value of the objective function) by changing the values of a few decision variables, upon probing the impacts of many such small changes, which are called moves, and then actually selecting and making one of these moves. The set of candidate moves is called the neighbourhood. This iterates, under suitable heuristics and meta-heuristics, until a sufficiently good assignment has been found, or until some allocated resource (such as running time or a number of iterations) has been exhausted.

SLS is an area of intensive research on its own, but the CP concept of constraint can be usefully imported into SLS, giving rise to what is known as constraint-based local search (CBLS; see  for example). In principle, the declarative part of a constraint model is thus the same as when solving the problem by classical CP (by systematic tree search interleaved with propagation). The inference counterpart of the propagator of a constraint are its violation functions and its differentiation functions, discussed next. For the purpose of this report, we only need to explain these functions for a single constraint here: we refer to  for how to evaluate them for multiple constraints and how stochastic local search is conducted.

For each individual constraint, the following functions are required in a CBLS system:

• The constraint violation function gives a measure of how much the constraint is violated under the current assignment. It must be zero if and only if the constraint is satisfied, and positive otherwise.

• The variable violation function gives a measure of how much a suitable change of a given decision variable may decrease the constraint violation.

• The assignment delta function gives the exact or approximated increase in constraint violation upon a probed assignment move x := d for a decision variable x and a domain value d.

• The swap delta function gives the exact or approximated increase in constraint violation upon a probed swap move between two decision variables x and y.

A constraint or decision variable with higher violation is a stronger candidate for repair by a move. A negative delta reflects a decrease in constraint violation, hence smaller deltas identify better moves. Differentiation functions for other kinds of moves, such as multiple assignments, can be added. Ideally, violations are updated incrementally in constant time upon the actual making of a move, but this is not always possible. Similarly, deltas are ideally computed differentially in constant time rather than by subtracting the constraint violations after and before the probed move.

For example, consider the constraint AllDifferent({x1, x2, x3, x4, x5}). Under the assignment ⟨1, 1, 1, 1, 2⟩, the constraint violation could be 3, because three variables need to take a suitable new value in order to satisfy the constraint, and the variable violation of x1 could be 1, because the constraint violation would decrease by one if x1 were assigned a suitable new value, such as 3. Upon the assignment moves x1 := 2 and x1 := 3, the constraint violation increases by 0 and −1, respectively, so the latter probed move is better. Upon any swap move, the constraint violation increases by 0. When maintaining for every domain value the number of variables currently taking it, the violations can be updated in constant time upon an actual move, and the deltas can be computed in constant time for a probed move.
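The constant-time bookkeeping described above can be sketched as follows (class and method names are ours), maintaining for every value the number of variables currently taking it:

```python
from collections import Counter

class AllDifferentViolation:
    """Constraint violation = sum over values of (occurrences - 1);
    it is zero iff all variables take pairwise different values."""

    def __init__(self, assignment):
        self.val = list(assignment)
        self.count = Counter(self.val)

    def violation(self):
        return sum(c - 1 for c in self.count.values() if c > 0)

    def assign_delta(self, i, new):
        """O(1) delta of the probed move: variable i takes value new."""
        old = self.val[i]
        if old == new:
            return 0
        # new value gains a duplicate iff already used; old value
        # loses a duplicate iff it was used at least twice.
        return (1 if self.count[new] >= 1 else 0) - (1 if self.count[old] >= 2 else 0)

    def swap_delta(self, i, j):
        """Swapping two variables permutes the multiset of values,
        so the delta is always 0 for AllDifferent."""
        return 0

    def make_assign(self, i, new):
        """O(1) incremental update upon actually making the move."""
        self.count[self.val[i]] -= 1
        self.count[new] += 1
        self.val[i] = new
```

On the assignment ⟨1, 1, 1, 1, 2⟩ above, `violation()` is 3, and the probed moves x1 := 2 and x1 := 3 get deltas 0 and −1, matching the example.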

A neighbourhood can often be designed so that some constraints of the model remain satisfied if they are satisfied under the starting assignment. Such constraints are called implicit constraints, since they need not be given in the constraint model, whereas the constraints to be satisfied through search are called explicit constraints and must be given in the constraint model. Since the explicit constraints can be violated under the current assignment, they are often called soft constraints. Conversely, since the implicit constraints can never be violated, they are often called hard constraints.

For example, in a Sudoku puzzle, there is an AllDifferent constraint on each of the nine rows, columns, and blocks: the row AllDifferent constraints can be made implicit upon using a neighbourhood with swap moves inside rows, since these constraints can be satisfied under the starting assignment (obtained by generating nine random permutations of the sequence ⟨1, …, 9⟩) and remain satisfied upon swap moves.
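As a sketch of this implicit-constraint idea (function names are ours), the starting assignment and the swap moves below preserve every row AllDifferent constraint:

```python
import random

def initial_rows(seed=0):
    """One random permutation of 1..9 per row: every row AllDifferent
    constraint holds from the start and can thus be made implicit."""
    rng = random.Random(seed)
    return [rng.sample(range(1, 10), 9) for _ in range(9)]

def swap_in_row(grid, row, i, j):
    """A swap move inside a row keeps that row a permutation of 1..9,
    so the implicit (hard) row constraints can never become violated."""
    grid[row][i], grid[row][j] = grid[row][j], grid[row][i]

grid = initial_rows()
swap_in_row(grid, 0, 2, 5)
assert all(sorted(row) == list(range(1, 10)) for row in grid)
```

Search then only has to repair the explicit (soft) column and block constraints.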

#### 3.2.3 Conclusion about Search and the Role of Constraints in Search

Both in constraint-based systematic search and in constraint-based local search, a problem solver software (or simply: solver) need only provide the master search algorithm, as well as implementations of the built-in (meta-)heuristics and constraints that are used in the problem model. The modeller is free to design custom (meta-)heuristics and constraints. A constraint fully declaratively encapsulates inference algorithms (propagators or violation functions and delta functions), which have been written once and for all and are invoked by the master search algorithm and the (meta-)heuristics in order to conduct the search for solutions.

The usage of constraints achieves code reusability. It also entails a clean separation between the declarative and non-declarative parts of the problem model (which together form the input to the solver), as well as a clean separation between search and inference within the solver itself. The slogan of constraint programming is:

 constraint program = model + search

because we also have more code modularity.

## 4 Mathematical Formulation

We consider two models of airspace sectorisation: one model uses a decision variable per region, its domain being the set of sectors; the other model uses a set decision variable per sector, its universe being the set of regions. A third approach is to consider both models at the same time, and add channelling constraints between them that ensure the two models are consistent. All work we surveyed in  has been done on the first approach, as the set variable approach might be unrealistic because the sets could get too large for current CP solvers. We refer to the first model as the graph colouring approach, and to the second model as the set covering approach. In both models, there is the same background data:

1. A given set of regions, where each region is uniquely identified.

2. For each region, there is a given workload value, or, if there is more than one type of workload, a tuple of workload values. The set of possible (tuples of) workload values is denoted by W. The workload of each region is given by a function w from regions to W.

3. A function C that takes a set of regions and returns the combined workload for that set considered as a sector.

4. Each flight is given as the sequence of regions it visits, together with entry and exit time stamps.

5. The background geometry is given as a graph.

In most work, the background geometry was handled implicitly in the definitions of the constraints, while workload and flight were given as parameters to the model.

There are three different types of workload: monitoring workload, conflict workload, and coordination workload. If all three types of workload are considered, then the set of possible workload tuples is, without loss of generality, a Cartesian product of three sets of workload values. In our previous work [11, 12], we ignored coordination workload and combined monitoring and conflict workload into a single integer: the workload values were integers.

In general, the function C performs an addition or a weighted sum. It gets complicated when coordination workload is taken into account, because the coordination between two regions of the same sector need not be taken into account, so that coordination workload is not additive.

In what follows, we shall leave the handling of the aggregation of coordination workload as future work. We assume that the workload of a region is given as a single integer, and that the workload combination function C sums up the workloads of the regions in a region set S constituting a sector, that is C(S) = ∑ of w(r) over the regions r in S, where w(r) denotes the given workload of region r.
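Under this assumption, the combination function C can be sketched as a plain sum of per-region workloads (names and data layout are ours):

```python
def combine_workload(sector, workload):
    """C: combined workload of a region set constituting a sector,
    here a plain sum (coordination workload, which is not additive,
    is deliberately left out, as in the text)."""
    return sum(workload[region] for region in sector)

# Hypothetical per-region workloads:
workload = {'r1': 5, 'r2': 7, 'r3': 2}
assert combine_workload({'r1', 'r3'}, workload) == 7
```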

### 4.1 Background Geometry

We assume that the airspace is divided into a countable number of regions. These regions can be of any shape with flat sides, that is, polytopes. What is important is that we have a graph connecting adjacent regions. In the following, for generality of purpose and notation, we rename the set of regions into a vertex set V. Formally, the background geometry induces an undirected graph

 ⟨V,E⊆V×V⟩

where E is a set of undirected edges. Given a vertex v ∈ V, we define Adjacent(v) = {w ∈ V ∣ ⟨v, w⟩ ∈ E} to be the set of vertices that are adjacent to v.

When considering constraints on the shape of sectors, we need access to the shape of its constituent regions. In particular, if we are considering regions that are polytopes, then each region has a number of lower-dimensional facets. In three dimensions, the facets are two-dimensional surfaces, and in two dimensions, the facets are one-dimensional lines. In algebraic topology , there is a very general theory of simplexes, which are representations of geometric objects together with facets, and facets of facets, all the way down until zero-dimensional points are reached. Although we do not need the full machinery of algebraic topology, the formalisation given here is inspired by its standard treatment .

Given a graph G, a facet structure for G is a set F of facets and a function ∂ from vertices to sets of facets. The facet function ∂ needs to satisfy the condition that, given any two adjacent vertices v and w (that is, ⟨v, w⟩ ∈ E), there is exactly one facet f such that f ∈ ∂(v) and f ∈ ∂(w).

A facet f is a border facet if there is exactly one vertex v such that f ∈ ∂(v). A border vertex is a vertex that has a border facet.

Given a graph G, we denote the set of border vertices of G by BG. The enveloped graph of G, denoted by G⊥, is the graph

 ⟨VG∪{⊥},EG∪({⊥}×BG)∪(BG×{⊥})⟩

That is, there is a unique new vertex ⊥ that is connected to all border vertices. The extended facet function ∂⊥ is defined as

 ∂⊥(v) = ∂(v) if v ≠ ⊥, and ∂⊥(v) = {⊥f} otherwise

Given a graph G and a facet structure with facet function ∂, the extended facet function ∂⊥ satisfies the property that for any two adjacent vertices v and w of the enveloped graph G⊥ there is exactly one facet f such that f ∈ ∂⊥(v) and f ∈ ∂⊥(w).
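As an illustration of these definitions, the following sketch (data and names are ours; a hypothetical one-dimensional strip of three regions) derives border facets, border vertices, and the extra edges of the enveloped graph from a facet function ∂:

```python
from collections import Counter

def border_vertices(facet_of):
    """facet_of maps each vertex to its set of facets. A border facet
    belongs to exactly one vertex; a border vertex owns a border facet."""
    owners = Counter(f for facets in facet_of.values() for f in facets)
    return {v for v, facets in facet_of.items()
            if any(owners[f] == 1 for f in facets)}

def enveloped_edges(edges, facet_of, bottom='⊥'):
    """Edges of the enveloped graph: the original edges plus a fresh
    vertex ⊥ connected to every border vertex, in both directions."""
    extra = {(bottom, v) for v in border_vertices(facet_of)}
    return set(edges) | extra | {(v, b) for b, v in extra}

# Three regions in a row; facets f01 and f34 lie on the outer border.
facet_of = {1: {'f01', 'f12'}, 2: {'f12', 'f23'}, 3: {'f23', 'f34'}}
assert border_vertices(facet_of) == {1, 3}
```

The middle region 2 owns no border facet, so ⊥ is connected only to regions 1 and 3.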

Sometimes, we need to see a vertex set as an ordered set:

• Let v ≺ w denote that vertex v is to the left of vertex w in the vertex set V.

• Let v ⪯ w denote that vertex v is to the left of vertex w in V, with possibly v = w.

• Let pred(v) denote the predecessor of vertex v in V; if v is the leftmost vertex in V, then pred(v) = ⊥.

• Similarly, let succ(v) denote the successor of vertex v in V, if any, else ⊥.

The vertex set V is seen as a sequence rather than as a set whenever we use the ≺ or ⪯ relation to specify the semantics of a constraint.

It will sometimes be useful to have information about the volume of a region and the surface area of a facet of a region. We assume that we are given two functions:

• A function Volume, such that Volume(v) returns the volume of region v.

• A function Area, such that Area(f, v) returns the area of facet f of region v, with f ∈ ∂(v). Technically, the region v is a redundant argument, but we keep it for clarity.

We use the terminology of a 3D space. In a 2D space, the facet area would be a side length, and the region volume would be a surface area.

### 4.2 Flight Data

We assume without loss of generality that time stamps are given as integers. A flight plan is a sequence of regions, each with an entry time stamp and an exit time stamp, such that the times are increasing along the sequence. Formally, a flight plan is a member of (V × ℕ × ℕ)* such that if

 p=[⟨v1,t1,t′1⟩,⟨v2,t2,t′2⟩,…,⟨vm,tm,t′m⟩]

then for all 1 ≤ i ≤ m we have ti < t′i, and for all 1 ≤ i < m we have t′i ≤ ti+1. Note that we require a strict inequality between entry and exit time stamps since aircraft have a finite velocity. Let the flight plan of flight f be denoted by Plan(f).
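A flight plan in this representation can be validated by a short sketch (names are ours; we assume the exit time stamp of a region may coincide with the entry time stamp of the next region):

```python
def is_valid_flight_plan(plan):
    """plan: list of (region, entry, exit) triples with integer time
    stamps. Entry must be strictly before exit within each region
    (finite velocity), and time may not go backwards between regions."""
    for i, (_, entry, exit_) in enumerate(plan):
        if not entry < exit_:
            return False
        if i + 1 < len(plan) and plan[i + 1][1] < exit_:
            return False
    return True

assert is_valid_flight_plan([('a', 0, 3), ('b', 3, 5)])
assert not is_valid_flight_plan([('a', 2, 2)])  # zero dwell time
```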

### 4.3 The Graph Colouring Approach

Consider a graph G. Let the given number of sectors be N. For each vertex v in G, we create a decision variable Colour(v) with domain {1, …, N}. The goal is to assign values to each decision variable such that we have a valid sectorisation. A solution to such a constraint problem is a mapping Colour from vertices to colours. We assume that solving starts with the special vertex ⊥ being given a colour not made available to the vertices of V.

For stochastic local search, the current assignment Colour induces an undirected graph GColour with V as vertex set: there is an edge between vertices v and w if and only if they are adjacent in G and have the same colour under Colour. Formally:

 EColour = {⟨v, w⟩ ∈ E ∣ Colour(v) = Colour(w)}   (2)

Note that GColour dynamically changes during search as Colour changes.
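For stochastic local search, the edge set of the induced graph can be recomputed from scratch as a simple filter over the edges (a sketch with our own naming; a CBLS system would maintain it incrementally):

```python
def induced_edges(edges, colour):
    """Edge set of the induced graph G_Colour: adjacent vertices that
    currently take the same colour under the assignment `colour`."""
    return {(v, w) for (v, w) in edges if colour[v] == colour[w]}

edges = {(1, 2), (2, 3), (3, 4)}
colour = {1: 'A', 2: 'A', 3: 'B', 4: 'B'}
assert induced_edges(edges, colour) == {(1, 2), (3, 4)}
```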

For systematic search, the current state induces an undirected graph with V as vertex set: there is an edge between vertices v and w if and only if they are adjacent in G and currently have non-disjoint domains, that is they may eventually become part of the same connected component of GColour. Formula (2) becomes:

 {⟨v, w⟩ ∈ E ∣ dom(Colour(v)) ∩ dom(Colour(w)) ≠ ∅}

In a sequence, we call stretch a maximal sub-sequence whose elements are all equal. For instance, the sequence [1, 1, 2, 2, 2, 1, 3] has four stretches, namely [1, 1], [2, 2, 2], [1], and [3]. A stretch in a sequence is the specialisation for a one-dimensional geometry of the notion of connected component in GColour. In the formalisations of constraints in Section 5, we use the Stretch predicate, which holds if and only if the sequence X has a stretch from index ℓ to index r:

 Stretch(X,ℓ,r)⇔X[pred(ℓ)]≠X[ℓ]=X[r]≠X[succ(r)]∧∀ℓ≺i≺r:X[i]=X[ℓ]

where X is assumed to start and end with a unique value, and ℓ and r are neither the first nor the last indices inside X.
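As a sketch (function name is ours), the stretches of a concrete sequence, and hence the index pairs for which the Stretch predicate holds, can be computed in one left-to-right pass:

```python
def stretches(xs):
    """Maximal runs of equal values, as (value, first_index, last_index)
    triples with 0-based indices."""
    runs, start = [], 0
    for i in range(1, len(xs) + 1):
        if i == len(xs) or xs[i] != xs[start]:
            runs.append((xs[start], start, i - 1))
            start = i
    return runs

# The example sequence has four stretches:
assert stretches([1, 1, 2, 2, 2, 1, 3]) == [(1, 0, 1), (2, 2, 4), (1, 5, 5), (3, 6, 6)]
```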

### 4.4 The Set Covering Approach

Let the given number of sectors be N. We use set decision variables S1, …, SN, which are to be assigned subsets of the universe set V. In addition to the other constraints in the problem that restrict the assignments of S1, …, SN to be valid sectors, we require that all the sets be pairwise disjoint.

## 5 Geometric and Workload Constraints

We now consider the most important and most representative of the airspace sectorisation constraints listed in Section 2. We encapsulate each as a constraint in the constraint programming sense (as seen in Section 3.1), give its semantics, and design the encapsulated inference algorithms, namely propagators for systematic search (as seen in Section 3.2.1), as well as violation and differentiation functions for stochastic local search (as seen in Section 3.2.2). It is important to keep in mind that these algorithms and functions are designed in a problem-independent fashion, so as to make the constraints highly reusable.

We leave the set covering approach as future work, and assume the graph colouring approach for all the constraints:

• We consider a graph G with vertex set V and edge set E.

• We look for a mapping Colour from vertices to colours, that is we create a colour decision variable Colour(v) for every vertex v ∈ V. Let {1, …, N} be the set of available colours.

• Let X be a sequence of integer values.

The notation uses the Iverson bracket (typeset in red for the convenience of those viewing this document in colour) to represent the truth of formula , with truth represented by and falsity by .

### 5.1 Connectedness

The Connected constraint, with RelOp a comparison relation, holds if and only if the number of colours used in the sequence [Colour(v) ∣ v ∈ V] is in relation RelOp with N, and there is a path in the graph between any two vertices of the same colour that only visits vertices of that colour. Formally:

 |{Colour(v)∣v∈V}| RelOp N ∧∀v,w∈V:Colour(v)=Colour(w)⇒⟨v,w⟩∈E∗Colour(v) (3)

where E∗c denotes the transitive closure of Ec, which is the adjacency relation E projected onto adjacency for vertices of colour c:

 Ec={⟨v,w⟩∈E∣Colour(v)=c=Colour(w)}

Hence the total number of connected components in the induced graph GColour must be in relation RelOp with N, and the number of connected components per colour must be at most one.
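A non-incremental check of this semantics can be sketched as follows (names are ours): count the connected components of each colour class in the induced graph via breadth-first search; the per-colour counts must all be at most one, and their total can then be compared against N under RelOp.

```python
from collections import defaultdict, deque

def components_per_colour(vertices, edges, colour):
    """Number of connected components of each colour class in the
    graph induced by same-coloured adjacent vertices."""
    adj = defaultdict(set)
    for v, w in edges:
        if colour[v] == colour[w]:
            adj[v].add(w)
            adj[w].add(v)
    seen, comp = set(), defaultdict(int)
    for v in vertices:
        if v not in seen:
            comp[colour[v]] += 1       # a new component of this colour
            queue = deque([v])
            seen.add(v)
            while queue:
                u = queue.popleft()
                for x in adj[u]:
                    if x not in seen:
                        seen.add(x)
                        queue.append(x)
    return dict(comp)

# Colour 'A' is split into two islands, so Connected is violated:
vertices = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4)]
colour = {1: 'A', 2: 'B', 3: 'B', 4: 'A'}
assert components_per_colour(vertices, edges, colour) == {'A': 2, 'B': 1}
```

The excess components per colour also suggest a natural constraint violation measure for stochastic local search.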

##### Arbitrary Number of Dimensions.

The Connected constraint generalises the main aspect of the constraint of , which takes RelOp to be “=” and considers the graph to be induced by a cuboid divided into same-sized regions; further, there is a special colour for which there is no restriction on the number of connected components. An initial idea for a propagator is outlined in : we flesh it out below and also give violation and differentiation functions.

The constraint Connected is a hard constraint in our prior work on airspace sectorisation using stochastic local search , hence no violation and differentiation functions are given there. A connectedness constraint was accidentally forgotten in our prior work on airspace sectorisation using systematic search , hence no propagator is given there.

##### One Dimension.

The constraint for a graph induced by a one-dimensional geometry (in which connected components are called stretches) generalises the main aspect of a previously published constraint, which itself generalises the Global_Contiguity constraint: the latter constrains only one colour but the former constrains several colours, and both lack the decision variable N and hence RelOp; further, both have a special colour for which there is no restriction on the number of stretches (there are at most two such stretches when there is only one constrained colour). A propagator for Global_Contiguity and a decomposition based on the Automaton constraint are given in the literature: we generalise these ideas below and also give violation and differentiation functions.

The trajectory-based convexity of an airspace sectorisation is achieved by posting for every flight a constraint on the sequence of decision variables denoting the sequence of colours of its one-dimensional visited region sequence, where N is the imposed or maximum number of sectors. The initial domain of each colour decision variable is Colours.

Such a trajectory-based convexity constraint is called the Contiguity constraint in our prior work on airspace sectorisation under systematic search, but lacks the decision variable N and hence RelOp; further, the propagator outlined there prunes less strongly than the one described below (as it lacks both steps numbered 2).

Such a trajectory-based convexity constraint is a soft constraint in our prior work on airspace sectorisation under stochastic local search, but lacks the decision variable N and hence RelOp; further, the constraint violation is defined differently there (in a manner that requires an asymptotically higher runtime to compute than the one we give below), and the variable violation and differentiation functions are not given there (though they are in the unpublished code underlying the experiments).

#### 5.1.1 Violation and Differentiation Functions

The violation and differentiation functions described below have no asymptotically better specialisation to the case of the graph being induced by a one-dimensional space. Hence they apply both to connectedness in a space of an arbitrary number of dimensions and to contiguity in a one-dimensional space.

##### Soft Constraint.

If the Connected constraint is considered explicitly, then we proceed as follows. For representing the induced graph, we show that it suffices to initialise and maintain the following two data structures, which are internal to the constraint:

• Let NCC(c) denote the number of connected components (CCs) of the induced graph whose vertices currently have colour c.

• Let NCC denote the current number of connected components of the induced graph:

 NCC = ∑_{c ∈ Colours} NCC(c) (4)

We can now re-formalise the semantics (3): the constraint is satisfied if and only if

 NCC RelOp N ∧ ∀c ∈ Colours : NCC(c) ≤ 1 (5)

The violation of a colour decision variable, say Colour(v) for vertex v, is the current excess number, if any, of connected components of the induced graph for the colour of v:

 violation(Colour(v)) = NCC(α(Colour(v))) − 1 (6)

This variable violation is zero if v currently has a colour for which there is exactly one connected component in the induced graph, and positive otherwise.

The violation of the counter decision variable N is 0 or 1 depending on whether NCC is in relation RelOp with the current value α(N) of N:

 violation(N) = 1 − [NCC RelOp α(N)] (7)

This variable violation is zero if NCC RelOp α(N) currently holds, and one otherwise.

The violation of the constraint is the sum of the variable violation of N and the current excess number, if any, of connected components for all colours:

 violation = violation(N) + ∑_{c ∈ Colours} max(NCC(c) − 1, 0) (8)

The constraint violation is zero if NCC RelOp α(N) holds and there currently is at most one connected component in the induced graph for each colour.
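Assuming the per-colour counts NCC(c) are available, the violation definitions above translate directly into code; a Python sketch with our own naming:

```python
import operator

def connected_violations(ncc, rel_op, n_value):
    """Compute the violations of Connected from the per-colour component
    counts ncc (a dict mapping colour c to NCC(c)), the relation rel_op,
    and the current value n_value of the counter decision variable N."""
    total = sum(ncc.values())                                 # NCC, formula (4)
    viol_n = 1 - int(rel_op(total, n_value))                  # formula (7)
    viol_var = {c: k - 1 for c, k in ncc.items()}             # formula (6), per colour
    viol = viol_n + sum(max(k - 1, 0) for k in ncc.values())  # formula (8)
    return viol_var, viol_n, viol

# Colour 'a' currently forms two components, 'b' one; N is assigned 3:
print(connected_violations({'a': 2, 'b': 1}, operator.eq, 3))  # ({'a': 1, 'b': 0}, 0, 1)
```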

The impact on the constraint violation of a colour assignment move Colour(v) ≔ c is measured by the following colour assignment delta function:

 Δ(Colour(v) ≔ c) = [NCC RelOp α(N)] − [NCC′ RelOp α(N)] + max(NCC′(c) − 1, 0) − max(NCC(c) − 1, 0) + max(NCC′(c₀) − 1, 0) − max(NCC(c₀) − 1, 0) (9)

where c₀ = α(Colour(v)) is the old colour of v, and the primed quantities denote the values of the internal data structures after the move.

Indeed, only the violation of the unchanged N and the numbers of connected components of the old and new colour of vertex v need to be considered. On the one hand, the number of connected components of colour c increases by one if no vertex adjacent to v currently has its new colour c, so that v forms a new connected component of colour c. On the other hand, the number of connected components of the old colour of v decreases by one if no vertex adjacent to v currently has that original colour, so that v was the last element of some connected component of that colour. To avoid increasing the number of connected components, and to increase the likelihood of decreasing their number, it is advisable to use a neighbourhood where vertices at the border of a connected component are re-coloured using a currently unused colour or the colour of an adjacent connected component. An assignment move on vertex v can be differentially probed in time linear in the degree of v in the graph.
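The probe just described can be sketched as follows; the function inspects only the neighbours of v, hence runs in time linear in its degree, and returns the changes to the counts for the new and old colours (names are ours; only the two cases discussed above are handled, matching the advised border neighbourhood):

```python
def probe_assignment(v, new_colour, colour, adj):
    """Probe the move Colour(v) := new_colour: return the deltas to
    NCC(new_colour) and NCC(old_colour), inspecting only the
    neighbours of v (linear in its degree)."""
    old = colour[v]
    if old == new_colour:
        return 0, 0
    # v forms a new component of new_colour if no neighbour has that colour:
    d_new = 1 if all(colour[w] != new_colour for w in adj[v]) else 0
    # v was the last vertex of a component of old if no neighbour has old:
    d_old = -1 if all(colour[w] != old for w in adj[v]) else 0
    return d_new, d_old

# Path 1-2-3 coloured a,a,b; re-colour vertex 3 to a fresh colour 'c':
adj = {1: [2], 2: [1, 3], 3: [2]}
print(probe_assignment(3, 'c', {1: 'a', 2: 'a', 3: 'b'}, adj))  # (1, -1)
```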

A colour swap move, where vertices v and w exchange their colours, is the sequential composition of the two colour assignment moves Colour(v) ≔ α(Colour(w)) and Colour(w) ≔ α(Colour(v)). The colour swap delta is the sum of the deltas for these two moves (upon incrementally making the first move), and there is no asymptotically faster way to compute this delta, as the complexity of probing an assignment move does not depend on the number of vertices.

The additive impact on the constraint violation of a counter assignment move N ≔ n is measured by the following counter assignment delta function:

 Δ(N ≔ n) = [NCC RelOp α(N)] − [NCC RelOp n]

The violation increases by one if the number of connected components was, but now no longer is, in relation RelOp with the value of N. It decreases by one if the number of connected components now is, but previously was not, in that relation. An assignment move on N can be differentially probed in constant time.

To achieve incrementality, once a move has been picked and made, the internal data structures and the variable and constraint violations must be updated. From the colour assignment delta function (9), the following updating code for a colour assignment move Colour(v) ≔ c follows directly, where the new assignment α′ is the old assignment α, except that α′(Colour(v)) = c:

1:  if ∀⟨v, w⟩ ∈ E : α(Colour(w)) ≠ c then {v forms a new CC of colour c}
2:    NCC(c) ≔ NCC(c) + 1
3:    NCC ≔ NCC + 1
4:    violation(N) ≔ 1 − [NCC RelOp α(N)]
5:  if ∀⟨v, w⟩ ∈ E : α(Colour(w)) ≠ α(Colour(v)) then {v was the last vertex of a CC of α(Colour(v))}
6:    NCC(α(Colour(v))) ≔ NCC(α(Colour(v))) − 1
7:    NCC ≔ NCC − 1
8:    violation(N) ≔ 1 − [NCC RelOp α(N)]
9:  for all u ∈ V with α′(Colour(u)) ∈ {c, α(Colour(v))} do
10:   violation(Colour(u)) ≔ NCC(α′(Colour(u))) − 1
11:  violation ≔ violation(N) + ∑_{c′ ∈ Colours} max(NCC(c′) − 1, 0)

Incremental updating for a colour assignment move takes time linear in the degree of vertex v in the graph and linear in the number of vertices (and colour decision variables). Code follows similarly for the colour swap and counter assignment moves.

For the remaining constraints, we assume that some relationships among internal data structures, such as (4), and the violation functions, such as (6) to (8), are defined as invariants, so that the solver automatically updates these quantities incrementally, without the constraint designer having to write explicit code, such as lines 3, 4, and 7 to 11.

##### Hard Constraint.

If the Connected constraint is considered implicitly, as in our prior work using stochastic local search, then it can be satisfied cheaply in the start assignment, by partitioning V into connected components and setting the colours according to that partition, and maintained as satisfied upon every move, by only considering moves that re-colour a vertex at the border of a connected component to the colour of an adjacent connected component. For instance, if RelOp is equality, then one can partition V into α(N) connected components and set Colour accordingly.
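One way to build such a start assignment, assuming RelOp is equality and the graph itself is connected, is to grow the required number of parts by round-robin breadth-first search from distinct seed vertices; every part stays connected because each added vertex is adjacent to a vertex already in its part. A Python sketch with our own naming:

```python
from collections import deque

def connected_start(vertices, adj, n):
    """Partition a connected graph into n connected parts by growing
    n parts in round-robin fashion from n seed vertices; returns a
    colour (part index) for every vertex. Assumes n <= |V|."""
    seeds = vertices[:n]
    colour = {s: i for i, s in enumerate(seeds)}
    frontiers = [deque([s]) for s in seeds]
    assigned = set(seeds)
    while len(assigned) < len(vertices):
        progress = False
        for i, fr in enumerate(frontiers):
            while fr:
                u = fr[0]
                nxt = next((w for w in adj[u] if w not in assigned), None)
                if nxt is None:
                    fr.popleft()        # u has no free neighbour left
                else:
                    colour[nxt] = i     # grow part i by one adjacent vertex
                    assigned.add(nxt)
                    fr.append(nxt)
                    progress = True
                    break
        if not progress:
            break  # remaining vertices are unreachable
    return colour

# A 6-vertex path split into 2 connected stretches:
path_adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 6], 6: [5]}
print(connected_start([1, 2, 3, 4, 5, 6], path_adj, 2))
```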

#### 5.1.2 Propagator

##### One Dimension.

If the graph is induced by a one-dimensional space, then propagation goes as follows.

If a vertex v is given a colour c (by its colour decision variable obtaining the singleton domain {c} in the current state through propagation of either another constraint or a search decision), then:

• Let ℓ be the rightmost vertex, if any, to the left of v where c is not in the domain of its colour decision variable.

• Let r be the leftmost vertex, if any, to the right of v where c is not in the domain of its colour decision variable.

• Let b be the rightmost vertex already fixed to colour c, that is whose colour decision variable has the singleton domain {c}.

• Let a be the leftmost vertex already fixed to colour c.

All occurrences of colour c must be strictly between vertices ℓ and r. All vertices from a to b, that is between the leftmost and rightmost vertices already fixed to c, must get colour c. Hence we prune as follows:

• If ℓ exists, then:

1. Prune value c from the domain of every vertex to the left of ℓ.

2. If the colour decision variable of ℓ has a singleton domain {d} with d ≠ c, then prune value d from the domain of every vertex to the right of v.

• If r exists, then:

1. Prune value c from the domain of every vertex to the right of r.

2. If the colour decision variable of r has a singleton domain {d} with d ≠ c, then prune value d from the domain of every vertex to the left of v.

• Set Colour(w) ≔ c for every vertex w from a to b, that is between the leftmost and rightmost vertices already fixed to c.

To propagate on N, it is best for the propagator to have an internal data structure: see the multi-dimensional case for details.

This propagator is only worth invoking when the domain of one of its decision variables shrinks to a singleton. It achieves domain consistency. No propagation is possible when the domain of a decision variable loses a value without becoming a singleton.
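The pruning steps above, minus the two steps numbered 2, can be sketched on a sequence of colour domains (indices and names are ours):

```python
def propagate_1d(domains, v, c):
    """After Colour(v) has been fixed to c in a one-dimensional
    geometry, prune so that all occurrences of c form a single
    stretch around v (the step-2 prunings are omitted for brevity).
    domains is a list of sets, mutated in place and returned."""
    n = len(domains)
    # rightmost vertex left of v whose domain excludes c:
    left = max((i for i in range(v) if c not in domains[i]), default=None)
    # leftmost vertex right of v whose domain excludes c:
    right = min((i for i in range(v + 1, n) if c not in domains[i]), default=None)
    if left is not None:
        for i in range(left):              # c cannot occur left of 'left'
            domains[i].discard(c)
    if right is not None:
        for i in range(right + 1, n):      # c cannot occur right of 'right'
            domains[i].discard(c)
    # all vertices between the leftmost and rightmost vertices fixed
    # to c must also get colour c:
    fixed = [i for i in range(n) if domains[i] == {c}]
    for i in range(fixed[0], fixed[-1] + 1):
        domains[i] = {c}
    return domains

doms = [{'a', 'b'}, {'b'}, {'a', 'b'}, {'a'}, {'a', 'b'}]
print(propagate_1d(doms, 3, 'a'))
```

In the example, vertex 1 excludes 'a', so 'a' is also pruned from vertex 0, which becomes fixed to 'b'.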

##### Arbitrary Number of Dimensions.

If the graph is induced by a space of more than one dimension, then the notions of ‘left’ and ‘right’ have to be generalised. We assume that the propagator initialises (at its first invocation) and updates (at subsequent invocations) the following internal data structures, which are strongly related to those we use in Section 5.1.1 for stochastic local search (this strong relationship between the internal data structures needed for the violation and differentiation functions of stochastic local search and those needed for propagators of systematic search was not obvious to the authors at the outset of this research: the issue is worth investigating further, as code generation might be possible):

• The induced graph (which we define but do not store for stochastic local search).

• Let NCC(c) denote the number of connected components of the induced graph whose vertices currently have colour c in their domains.

• Let NCC denote the current sum of all NCC(c), as in formula (4).

The semantics (5) of the constraint gives us a constraint checker and a feasibility test.

Upon pruning of values (by propagation of either another constraint or a search decision) from the domain of a colour decision variable Colour(v), we (possibly incrementally) update the internal data structures and use known graph invariants in order to obtain possible pruning on other colour decision variables, if not on the counter decision variable N. We conjecture that this achieves domain consistency, as in the one-dimensional case.

Upon pruning of values from the domain of the counter decision variable N, we presently do not know whether pruning is possible on the colour decision variables.

##### Discussion.

For many choices of RelOp and N, the Connected constraint is easy to satisfy, namely by colouring all vertices with the same colour, so that there is only one connected component. In other words, the density of solutions to the Connected constraint may be very high within the Cartesian product of the domains of its decision variables. Such a situation is not very conducive to propagation, as discussed next.

Starting from full colour domains for all vertices, there potentially is only one connected component until the underlying space is cut into at least two sub-spaces by colouring an entire swathe of vertices in some colour that has already been eliminated from the domains of other vertices. Only a very specific search procedure would reach this situation, the only one conducive to domain pruning. While this situation often arises naturally in a one-dimensional space, it does not do so in a multi-dimensional space, and systematic search with interleaved propagation essentially degenerates into generate-and-test search, because the propagator of Connected can only be invoked near the leaves of the search tree.

In conclusion, we are pessimistic about the utility of a propagator for the Connected constraint in a multi-dimensional space, at least when following the graph colouring approach. Unless a representation more conducive to propagation can be found, we advocate stochastic local search over systematic search in the presence of this constraint.

### 5.2 Compactness

Consider a graph induced by a space of at least two dimensions. The Compact constraint holds if and only if the sum of the sphericity discrepancies of the connected components of the graph induced by E and the sequence Colour is at most the threshold T, whose value is given.

The sphericity discrepancy of a connected component is defined as follows. Recall that each facet is endowed with a surface area, and each vertex is endowed with a volume. We define the following concepts:

• In Section 4.1, we formalised the notion of border vertices, that is vertices at the edge of the geometry. Here we are dealing with colouring, so we need to formalise the border of coloured regions. A facet F of a vertex v is a border facet under a sequence Colour if v has no adjacent vertex for F or has a different colour than the adjacent vertex that shares F. Formally, a facet F of a vertex v is a border facet if and only if the statement

 Colour(w) ≠ Colour(v)

holds for the vertex w that shares F with v.

To simplify the formalisation, we assume that the given background geometry is extended with a unique special vertex, called the external vertex here, that shares a facet with every vertex where there otherwise is no adjacent vertex for that facet. Now a vertex has exactly one adjacent vertex for each of its facets.

• The border surface area of a set W of vertices that have the same colour under a sequence Colour (such as a connected component of the induced graph), denoted by A_W, is the sum of the surface areas Area(F) of the border facets F of the vertices in W:

 A_W = ∑_{w ∈ W} ∑_{F ∈ BorderFacets(w)} Area(F) (10)

where BorderFacets(w) denotes the set of border facets of w. Recall that two adjacent vertices share exactly one facet.

• The sphere surface area of a set W of vertices that have the same colour under a sequence Colour, denoted by S_W, is the surface area of a sphere that has as volume the total volume V_W of the vertices in W: indeed, such a sphere has radius (3 · V_W / (4 · π))^{1/3} and hence surface area

 S_W = π^{1/3} · (6 · V_W)^{2/3}

where

 V_W = ∑_{w ∈ W} Volume(w)

In case the graph is induced by a 3D cuboid divided into same-sized regions, we can instead define the sphere surface area as the surface area of the smallest collection of regions that contains the sphere that has as volume the total volume of the vertices in W. We omit the mathematical details, as they are specific to the shape of the regions: a space can be tiled by many kinds of polyhedra, such as cubes or beehive cells.

• The sphericity discrepancy of a set W of vertices that have the same colour under a sequence Colour, denoted by δΨ_W, is the difference between the border surface area of W and the sphere surface area of W:

 δΨ_W = A_W − S_W

We derive this concept from the sphericity of a shape, defined in the literature as the ratio between the surface area of a sphere that has the same volume as the shape and the surface area of the shape, that is S_W / A_W in our notation; this ratio is 1 if the shape is a sphere, and less than 1 otherwise, assuming the shape is not empty. Our concept is defined as the subtraction A_W − S_W rather than as that ratio, as we need (for stochastic local search) a non-negative metric that is 0 in the good case, namely when W is (the smallest over-approximation of) a sphere.
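As a numeric check of these definitions (naming is ours): a unit cube has border surface area 6 and volume 1, giving a positive discrepancy, while an exact sphere gives zero:

```python
import math

def sphere_surface_area(volume):
    """S_W = pi^(1/3) * (6 * V_W)^(2/3): the surface area of a sphere
    of the given volume."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3)

def sphericity_discrepancy(border_area, volume):
    """delta-Psi_W = A_W - S_W: zero exactly for a sphere, positive
    for any other shape of the same volume."""
    return border_area - sphere_surface_area(volume)

# Unit cube: border area 6, volume 1
print(round(sphericity_discrepancy(6.0, 1.0), 3))   # 1.164
# Sphere of radius 1: area 4*pi, volume 4*pi/3 -> discrepancy ~0
print(sphericity_discrepancy(4 * math.pi, 4 * math.pi / 3))
```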

Note that the Compact constraint imposes no limit on the number of connected components of the induced graph per colour: if there are several connected components for a colour, then the total sphericity discrepancy may be unnecessarily large.

Such a compactness constraint is a soft constraint in our prior work on airspace sectorisation under stochastic local search, but lacks the threshold T there; we generalise the ideas of its violation and differentiation functions using the concept of sphericity discrepancy, and we describe them in much more detail.

There was no compactness constraint in our prior work on airspace sectorisation using systematic search, and we are not aware of any published propagator for any such constraint.

#### 5.2.1 Violation and Differentiation Functions

##### Soft Constraint.

If the Compact constraint is considered explicitly, then we proceed as follows. We initialise and incrementally maintain the following data structures, which are internal to the constraint:

• Let Border(v) denote the border surface area of vertex v. If every vertex only has facets of unit surface area (for instance, when the graph is induced by a space divided into same-sized cubes or squares), then Border(v) is simply the number of border facets of v:

 Border(v) = |{w ∣ ⟨v, w⟩ ∈ E ∧ Colour(w) ≠ Colour(v)}|

Otherwise, the formula needs to be generalised as follows, using the facet surface areas as in (10):

 Border(v) = ∑_{F ∈ BorderFacets(v)} Area(F)

• Let CCs denote the current set of connected components of the induced graph, each component W encoded by a tuple ⟨A_W, V_W⟩, meaning that it currently has total border surface area A_W and total volume V_W.

The violation of a colour decision variable, say Colour(v) for vertex v, is its current weighted border surface area:

 violation(Colour(v))=f(Border(v))

where the weight function f can be the identity function, but can also suitably penalise larger border surface areas, provided f(0) = 0; in our prior work, we found that a simple such weight function works well enough. The variable violation is zero if no facet of v is a border facet.

The violation of the constraint is the current excess, if any, of the sum of the sphericity discrepancies of the connected components over the threshold T:

 violation = max(∑_{W ∈ CCs} δΨ_W − T, 0) (11)

The constraint violation is zero if the total sphericity discrepancy does not exceed T.
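Putting the pieces together, this constraint violation can be computed from the internal component tuples; a Python sketch (naming is ours) where the components are given as (area, volume) pairs:

```python
import math

def compact_violation(ccs, threshold):
    """Excess, if any, of the total sphericity discrepancy of the
    components over the threshold; ccs is a list of (border surface
    area, volume) pairs, one per connected component."""
    total = sum(a - math.pi ** (1 / 3) * (6 * v) ** (2 / 3) for a, v in ccs)
    return max(total - threshold, 0.0)

# Two unit cubes, each with discrepancy ~1.164, against threshold 2:
print(round(compact_violation([(6.0, 1.0), (6.0, 1.0)], 2.0), 3))  # 0.328
```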