Polygon Area Decomposition Using a Compactness Metric

10/08/2021
by Mariusz Wzorek, et al.

In this paper, we consider the problem of partitioning a polygon into a set of connected disjoint sub-polygons, each of which covers an area of a specific size. The work is motivated by terrain covering applications in robotics, where the goal is to find a set of efficient plans for a team of heterogeneous robots to cover a given area. Within this application, solving a polygon partitioning problem is an essential stepping stone. Unlike previous work, the problem formulation proposed in this paper also considers a compactness metric of the generated sub-polygons, in addition to the area size constraints. Maximizing the compactness of sub-polygons directly influences the optimality of any generated motion plans. Consequently, this increases the efficiency with which robotic tasks can be performed within each sub-region. The proposed problem representation is based on grid cell decomposition and a potential field model that allows for the use of standard optimization techniques. A new algorithm, the AreaDecompose algorithm, is proposed to solve this problem. The algorithm includes a number of existing and new optimization techniques combined with two post-processing methods. The approach has been evaluated on a set of randomly generated polygons which are divided using different criteria, and the results have been compared with a state-of-the-art algorithm. Results show that the proposed algorithm can efficiently divide polygon regions while maximizing the compactness of the resulting partitions, producing sub-polygon regions that are on average up to 73% more compact than those of the state-of-the-art approach.


I Introduction

The problem of partitioning a polygon into a set of non-overlapping primitive shapes whose union is equivalent to the original polygon has been extensively studied. Such problems arise in a vast number of applications. Examples include processes of Very Large Scale Integration (VLSI) circuit design [4, 5], pattern recognition [14], image processing [27], database systems [26], or parallel computing [10], to name a few. The broad application spectrum of polygon partitioning results in a wide range of problem formulations where polygons are decomposed into sets of triangles, rectangles, and trapezoids, with or without additional size, perimeter, or other constraints.

In recent work [18], the area partitioning problem has been formulated with a focus on applications to terrain covering in robotics. In this case, the general goal of approaches designed to solve the area partitioning problem is to calculate a set of sub-regions that can be assigned to individual robots for tasks such as data collection, surveillance or exploration. In order to provide efficient solutions for a team of heterogeneous robots, the sub-region sizes should be defined relative to the individual platform capabilities. As a motivating example, we consider the problem of terrain covering using a fleet of Unmanned Aerial Vehicles (UAVs). In this case, one can assume a number of UAVs collaborating to achieve a global goal, such as gathering image data from the given environment using their camera sensors. Each UAV has its particular capabilities that limit its range, speed and altitude, in addition to constraints imposed by its sensor characteristics, such as field-of-view or image resolution. In order to achieve the common goal of covering an entire environment efficiently, the whole region described as a polygon can be divided into a set of sub-regions, each of whose area size corresponds to a particular UAV's relative capabilities and sensor constraints.

Fig. 1: Terrain coverage examples.

Fig. 1 presents two examples where an environment described as a polygon is divided into two sub-regions of equal size, to be assigned for a data gathering mission among two homogeneous UAVs. The partitioning of the polygon can be done arbitrarily as long as the area sizes are equal. In the examples provided, one division results in a triangular sub-region (left in the figure) and the other in a rectangular sub-region (right in the figure). The flight paths generated by a motion planner that cover these two different sub-regions will differ in efficiency, as the path covering the triangular region is longer and requires more turns, which will result in longer execution times and additional power or fuel usage. Additionally, in the triangular case the UAV covering one sub-region will unnecessarily overlap with the other sub-region. Ideally, one would want to minimize the overlap and provide regions that can be efficiently traversed. In the general case, the shape of the resulting sub-region will influence these two factors, as demonstrated in previous studies, for example in [34]. One of the measures that can be used to differentiate the properties of these shapes is the polygon compactness or fatness metric.

Several compactness measures have been proposed in the literature, most notably with applications to problems related to Geographic Information Systems (GIS) [24]. One example is redistricting (i.e. drawing electoral district boundaries) for political elections in order to avoid the problem of gerrymandering ([29, 33, 31]).

Fig. 2: Examples of compactness scores for different shapes with equal areas.

These measures often compare geometric properties of the polygon (e.g. area, perimeter) to the properties of a base geometric shape (e.g. circle) and take values from 0 to 1. Examples of two compactness scores, the Polsby-Popper [29] and Schwartzberg [33] scores, for different shapes with equal area sizes are presented in Fig. 2. The base geometric shape used in these measures is a circle, which is considered the most compact shape. Thus the score for a circle is equal to 1.0, and the score values decrease as the shape compactness degrades.
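As a concrete illustration, both scores can be computed directly from a polygon's vertex list. The sketch below is plain Python with function names of our choosing (it is not code from the paper); the area comes from the shoelace formula:

```python
import math

def polygon_area(pts):
    """Signed area via the shoelace formula (positive for CCW vertex order)."""
    return 0.5 * sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))

def perimeter(pts):
    """Sum of edge lengths of the closed polygon."""
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def polsby_popper(pts):
    """4*pi*A / per^2: polygon area over the area of a circle whose
    circumference equals the polygon perimeter."""
    return 4.0 * math.pi * abs(polygon_area(pts)) / perimeter(pts) ** 2

def schwartzberg(pts):
    """Circumference of the equal-area circle divided by the perimeter,
    so the score lies in (0, 1] with 1 for a circle."""
    return 2.0 * math.sqrt(math.pi * abs(polygon_area(pts))) / perimeter(pts)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polsby_popper(square))   # ~0.785
print(schwartzberg(square))    # ~0.886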

In this paper we propose a novel area partitioning problem formulation where the goal is to divide a polygon into a set of disjoint connected sub-polygons, with the focus on maximizing the sub-polygons' compactness as well as satisfying the area size constraints. The problem formulation is based on grid cell discretization and a potential field model. Additionally, an algorithm is proposed for solving the new area partitioning problem efficiently. The algorithm includes a number of optimization techniques combined with two post-processing methods.

Although the terrain covering problem in UAV domains is used as a motivating application in this paper, the proposed approach is generic in nature. Other application domains where the shape characteristics are important could benefit from using the new area partitioning problem formulation and the proposed algorithms. Additionally, the proposed representation allows for applying the approach to GIS problems related to zoning or redistricting. An example of such an application is presented in Section IV-A.

I-A Related work

One of the most common forms of polygon decomposition considered in the literature is polygon triangulation, where the goal is to decompose a polygon into a set of triangles under different settings. For example, in [11] a Delaunay triangulation is used. Work presented in [6, 8] considers decomposition using nonobtuse triangles. Other simple shapes have been used in polygon decomposition problems as well. These include problems of polygon partitioning into trapezoids [4, 5], rectangles [23], convex polygons [16, 19, 25] or star-shaped polygons [20]. However, these algorithms typically focus on finding a set of simple shapes that constitute a given polygon without considering additional constraints such as area sizes.

Polygon decomposition with area constraints where sub-polygons have equal area sizes has been proposed in [28, 10, 1]. However, these approaches are limited to finding sub-polygons of equal area sizes, thus restricting their use in the applications considered in this work.

A more general approach is presented in [18], where the area sizes can be defined arbitrarily. The authors, Hert&Lumelsky, propose a polynomial-time algorithm that utilizes a divide-and-conquer strategy combined with a sweep-line approach. The algorithm generates sub-polygons that are guaranteed to satisfy the exact area size constraints. In comparison, the approach presented in this paper builds on a grid cell discretization of the area partitioning problem. This allows for applying standard optimization techniques to find solutions that include compactness metrics in addition to satisfying the area size constraints. Empirical comparisons between this algorithm and our approach are made in Section IV.

Several approaches that include area partitioning algorithms have been proposed in the application domain of terrain covering using robotic systems [34, 7, 2, 3]. In [34, 7], methods for generating flight plans for multiple UAVs are proposed based on the algorithm in [18]. In [34], the compactness of sub-polygons is measured and the results show that using compact sub-polygons yields more efficient area covering missions, as the plans generated for each sub-polygon assigned to a UAV exhibit a lower number of turns and shorter flight paths. In [7], the motion plans generated for each UAV were optimized for flight time using a camera or laser range finder sensor model. In [2, 3], an approach for parallel region coverage using multiple UAVs is presented; the polygon decomposition method proposed is unfortunately limited to rectilinear polygons. Although these approaches consider the application of terrain covering using robotic systems, the underlying methods used for polygon decomposition are either based on the algorithm presented in [18] or are limited to rectilinear polygons, and do not include compactness metrics as the optimization objective. As such, with the exception of [18], these approaches are not directly comparable to the work presented here; rather, the new specification of the area partitioning problem and the new algorithms presented can be incorporated into these solutions.

I-B Contributions

The work described in this paper includes the following contributions:

  • A new area partitioning problem representation using a grid cell discretization and a potential field model [21] that allows for applying standard optimization techniques. Solutions acquired using the new problem formulation trade off exactness in satisfying the area size constraints for the ability to maximize sub-polygon compactness.

  • A new algorithm for solving the area partitioning problem which includes a number of optimization algorithms combined with two post-processing methods. Integrated algorithms include a new heuristic-based method (the Potential Field Heuristic), the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [17], and Random Search optimization techniques.

  • An empirical evaluation of the proposed algorithm on a set of randomly generated non-convex polygons and comparison with an existing state-of-the-art technique [18].

The structure of this paper is as follows. First, a problem definition is provided in Section II. Section III describes the proposed polygon area decomposition algorithm. The results of experimental evaluation are presented in Section IV. Finally, the conclusions with a short description of future work are presented in Section V.

II Problem definition

Formally, the problem considered in this paper can be described as follows. Given a simple polygon $P$ (i.e. a polygon with no self-intersections), decompose the polygon into $n$ disjoint sub-polygons $P_1$ to $P_n$. The area of each sub-polygon $P_i$ is specified by a weight $w_i$ ($0 < w_i \le 1$). Weights define area sizes as a proportion of the total area $A(P)$ of the polygon $P$. Thus, the area partitioning problem is defined as follows:

(1) $\bigcup_{i=1}^{n} P_i = P$
(2) $P_i \cap P_k = \emptyset \quad \forall i \neq k$
(3) $A(P_i) = w_i \cdot A(P), \quad i = 1, \ldots, n$
(4) $\sum_{i=1}^{n} w_i = 1$

Additionally, to take shape characteristics into account, the generated sub-polygons should maximize the compactness score of each sub-polygon. Since sub-polygons are represented as sets of polylines, each defined by its vertices (points), the state space of the problem that needs to be considered is infinite. In the approach presented in this paper, the problem representation is relaxed by discretizing the polygon into a finite set of grid cells. Using such a representation allows the infinite-domain problem to be redefined as a finite-domain constraint optimization problem.

Let $C$ be a finite set of grid cells of size $d$ that cover the polygon $P$. A grid cell $c_j$ is defined as a tuple $(p_j, a_j)$, where $p_j$ is the center of the grid cell and $a_j$ is its contributing area. The area is calculated as the intersection of the grid cell with the polygon $P$. Thus, the area values for the grid cells not fully included in the polygon (cells close to polygon edges) will be bounded by $d^2$. Formally, this is defined as:

(5) $C = \{c_1, c_2, \ldots, c_m\}$
(6) $c_j = (p_j, a_j), \quad j = 1, \ldots, m$
(7) $a_j = A(\mathit{cell}_j \cap P)$
(8) $0 \le a_j \le d^2$
(9) $\sum_{j=1}^{m} a_j = A(P)$
Fig. 3: Illustrative example of a discretized area partitioning problem definition. Division of polygon into a finite set of grid cells (left). Potential fields associated with each sub-polygon represented as partitions (right).

An illustrative example of grid cell decomposition is presented in Fig. 3 (left), where the original polygon has been decomposed into a set of 127 grid cells ($m = 127$). The original problem of polygon area partitioning can be represented as finding disjoint subsets of $C$ subject to area constraints. A potential field model is used to compute an assignment of cells to each subset of $C$, where each sub-polygon is represented as an attractive potential defined by a circle with center position $o_i$ and a radius $r_i$. The radius corresponds to the attractive force value of the potential. Let $q_i$ represent a sub-polygon partition. $q_i$ is defined as a tuple $(G_i, o_i, r_i)$, where $G_i$ is a set of grid cells that belong to the partition, and $o_i$ and $r_i$ define the attractive force associated with it. An example of four sub-polygon partitions for a simple polygon is presented in Fig. 3 (right, using color-coding for each partition).
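The discretization itself is straightforward to prototype. The sketch below assumes the Shapely library; the `Partition` class and the function names are our choices, not the paper's. It covers a polygon with square cells and records each cell's center $p_j$ and contributing area $a_j$:

```python
from dataclasses import dataclass, field
from shapely.geometry import Polygon, box

@dataclass
class Partition:
    """Sub-polygon partition q_i = (G_i, o_i, r_i)."""
    cells: list = field(default_factory=list)  # G_i: indices of assigned cells
    center: tuple = (0.0, 0.0)                 # o_i: potential field center
    radius: float = 0.0                        # r_i: attractive force radius

def divide_into_grids(poly: Polygon, d: float):
    """Cover `poly` with square cells of size d; keep each cell's center p_j
    and its contributing area a_j = area(cell ∩ poly)."""
    minx, miny, maxx, maxy = poly.bounds
    cells = []
    y = miny
    while y < maxy:
        x = minx
        while x < maxx:
            a = box(x, y, x + d, y + d).intersection(poly).area
            if a > 0.0:  # skip cells entirely outside the polygon
                cells.append(((x + d / 2.0, y + d / 2.0), a))
            x += d
        y += d
    return cells
```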

Allocation of grid cells to a particular sub-polygon partition is defined by the following function:

(10) $\mu(j) = \operatorname{arg\,min}_{i \in \{1, \ldots, n\}} \big( d(o_i, p_j) - r_i \big)$

where the parameter $j$ is a grid cell index, $d(o_i, p_j)$ is the Euclidean distance between the potential field center $o_i$ and a grid cell center $p_j$, and $r_i$ is the potential field radius. Thus, using such a representation, a sub-polygon $P_i$ is defined as a union of the grid cells that have been assigned to the polygon partition $q_i$:

(11) $P_i = \bigcup_{c_j \in G_i} \mathit{cell}_j$
(12) $G_i = \{ c_j \in C : \mu(j) = i \}$
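A minimal sketch of this assignment rule, continuing the data structures from the previous sketch (the function name is ours):

```python
import math

def assign_cells(cells, partitions):
    """Assign every grid cell to the partition whose potential field pulls
    strongest, i.e. arg min_i ( dist(o_i, p_j) - r_i ), cf. eq. 10."""
    for q in partitions:
        q.cells.clear()
    for j, (p_j, _a_j) in enumerate(cells):
        i = min(range(len(partitions)),
                key=lambda i: math.dist(partitions[i].center, p_j)
                              - partitions[i].radius)
        partitions[i].cells.append(j)
```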

Given the discretized representation of the original area partitioning problem, the following bi-objective function is used to define it as an optimization problem that captures both the area size constraints and shape characteristics:

(13) $F(Q) = f_{err}(Q) - f_{comp}(Q)$
(14) $f_{err}(Q) = \sum_{i=1}^{n} e_i$
(15) $f_{comp}(Q) = \frac{1}{n} \sum_{i=1}^{n} \mathit{comp}(P_i)$

where $Q = (q_1, \ldots, q_n)$ (each $q_i$ is a tuple $(G_i, o_i, r_i)$), and $\mathit{comp}(P_i)$ is a compactness score for sub-polygon $P_i$. The $f_{err}$ is a function of the area size error $e_i$ for each sub-polygon, defined as:

(16) $e_i = \frac{\left| A(P_i) - w_i \cdot A(P) \right|}{w_i \cdot A(P)}$

Minimizing the objective function $f_{err}$ ensures that each sub-polygon meets its target area size. Maximizing the second objective function $f_{comp}$ ensures that sub-polygon compactness is maximized. Intuitively, when $f_{err}(Q) = 0$ and $f_{comp}(Q) = 1$, the set of sub-polygons meets the exact area size requirements defined by the weights in $W$ and each sub-polygon compactness score is equal to 1.

The optimization problem for solving the area partitioning problem using the discretized representation is defined as:

(17a) $\min_{Q} \; F(Q)$
s.t. (17b) $\bigcup_{i=1}^{n} P_i = P$
(17c) $P_i \cap P_k = \emptyset \quad \forall i \neq k$
(17d) $e_i \le \epsilon, \quad i = 1, \ldots, n$
(17e) $\sum_{i=1}^{n} w_i = 1$

where a sub-partition $P_i$ is the result of the cell assignment of $q_i$ according to eq. 11. Constraints defined in the optimization problem ensure that: (1) the sub-polygons are disjoint (eq. 17c); (2) the sub-polygons cover the whole area defined by the original polygon (eq. 17b); and (3) the weights defined for area sizes are a proportion of the total area of polygon $P$ (eq. 17e). Satisfying the area size constraints for each sub-partition is not exact, since the proposed representation uses a set of grid cells of finite size. Let $\epsilon$ be defined as an upper bound on the area size error for each partition. The constraint that enforces the bounds on the area size errors is defined by eq. 17d.

In order to apply general optimization methods for solving the polygon decomposition problem defined above, the objective function including penalties for constraint violations is defined as follows:

(18) $F_p(Q) = F(Q) + \rho \cdot \phi(Q)$
(19) $\phi(Q) = \sum_{i=1}^{n} \max\big(0, \, e_i - \epsilon\big)$

where $\rho$ is a penalty coefficient chosen empirically.
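Under these definitions, evaluating the penalized objective for a candidate configuration reduces to a few lines. The sketch below continues the earlier data structures; the function names, the parameterized compactness callable, and the default penalty coefficient are illustrative choices, not values from the paper:

```python
def objective(cells, partitions, weights, total_area, eps, compactness,
              rho=100.0):
    """Penalized objective F_p: summed relative area errors minus mean
    compactness, plus a penalty for errors exceeding the tolerance eps."""
    f_err, f_comp, penalty = 0.0, 0.0, 0.0
    for q, w in zip(partitions, weights):
        area = sum(cells[j][1] for j in q.cells)            # sum of a_j in G_i
        e = abs(area - w * total_area) / (w * total_area)   # eq. 16
        f_err += e
        penalty += max(0.0, e - eps)                        # eq. 19
        f_comp += compactness(q)   # e.g. Schwartzberg on the cell union
    return f_err - f_comp / len(partitions) + rho * penalty
```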

Fig. 4: The AreaDecompose algorithm overview.

III Polygon Area Decomposition Algorithm

In this section, a description of the proposed algorithmic framework is presented. Fig. 4 provides an overview of the AreaDecompose algorithm. Given a polygon description and a set of weights, the algorithm starts by dividing the polygon into a set of grid cells with a fixed size (Step 1). In Step 2, one of five alternative methods for optimization is applied to assign grid cells to each sub-polygon. The first alternative, the Potential Field Heuristics (PFH) method, uses a heuristic-based approach for finding optimal values of the attractive force fields for each sub-polygon. The second and third alternatives apply generic optimization techniques, namely the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and Random Search (RS). The final two alternatives in the optimization step combine the heuristic-based PFH algorithm with the generic CMA-ES and RS approaches. In this case, the PFH provides an initial solution that is then refined by applying the CMA-ES or RS optimization techniques. The final step of the AreaDecompose algorithm (Step 3) includes two post-processing procedures. The first procedure deals with the problem of removing potentially disconnected cells causing invalid sub-polygon borders (Section III-D, Fig. 5(a)) in the generated solutions. Additionally, a heuristic-based procedure is applied that tries to improve the values of the area size errors for each partition. Sub-polygons are rebalanced by exchanging grid cells when possible to meet the area size constraints (Section III-D, Fig. 5(b)). The second post-processing step applies a polyline simplification algorithm to smooth out the sub-polygon borders that define the final sub-polygons (Section III-E, Fig. 6).

Input: polygon P, weights W = (w_1, ..., w_n), tolerance ε, flags optAlg
Output: sub-polygons P_1, ..., P_n
1:  d ← cell size computed from ε and W   ▷ eq. 20
2:  C ← divideIntoGrids(P, d)
3:  for i = 1 to n do
4:      G_i ← ∅
5:      o_i ← initOnBoundary(P, n, i)
6:      r_i ← expectedRadius(w_i, P)   ▷ eq. 21
7:  if "PFH" ∈ optAlg then
8:      Q ← PFH(C, Q, ε)
9:  if optAlg ∩ {"CMA-ES", "RandomSearch"} ≠ ∅ then
10:     Q ← runOptimization(optAlg, C, Q)
11: Q ← rebalancePartitions(Q)
12: (P_1, ..., P_n) ← partitionsToPolygon(Q)   ▷ eq. 11
13: return simplifyBorders(P, (P_1, ..., P_n))
Algorithm 1 AreaDecompose

The AreaDecompose algorithm is presented in Algorithm 1. An input to the algorithm is the polygon $P$ and a set of weights $W$ defining the relative area sizes of the generated sub-polygons as proportions of the total area of $P$. The number of sub-polygons that the algorithm will calculate is equal to $n = |W|$. Other parameters used during the calculations are the minimum tolerance value $\epsilon$ for an error on the area size of each sub-polygon, and a set of flags (optAlg) defining which optimization techniques to apply.

The algorithm starts by decomposing the polygon $P$ into a set of grid cells $C$ (see Fig. 3, Lines 1-2). The grid cell size $d$ used for the decomposition is calculated based on the requested upper bound $\epsilon$ on the area size error and the smallest partition size defined by the weights in $W$ as follows:

(20) $d = \sqrt{\epsilon \cdot \min_i(w_i) \cdot A(P)}$

Such a grid cell size selection criterion ensures that the area error requirements can be met.

Next, the partitions are initialized. The list of grid cells $G_i$ assigned to partition $q_i$ is initialized as empty (Line 4). The initial centers $o_i$ of the attractive potential fields for each partition are calculated as equidistant points on the perimeter of the polygon $P$ (Line 5). The radius $r_i$ of each potential field circle is initialized so that the area of the circle is equal to the expected area of the sub-polygon defined by its weight $w_i$:

(21) $r_i = \sqrt{\frac{w_i \cdot A(P)}{\pi}}$

Depending on the algorithm configuration flags (optAlg), the algorithm executes a combination of methods that include the heuristic-based algorithm PFH, or optimization methods based on CMA-ES or RandomSearch (Lines 7-10). These are the main methods that search for potential field configurations that yield minimal area errors and maximize the compactness of each partition, as defined by the optimization problem in eq. 17a-17e.

After the main part of the algorithm is executed, the resulting configurations of sub-polygons are further optimized by a post-processing step that tries to improve the cell assignments (Line 11). The rebalancePartitions function considers two factors. On rare occasions, the disjoint sets of grid cells generated by the algorithms can result in non-continuous partition borders (Fig. 5(a)). Additionally, the assignment of cells is checked to determine whether further improvement of the area size errors can be made by switching grid cells between neighboring partitions (Fig. 5(b)).

Next, the resulting partitions defined by $Q$ are transformed into a set of polylines describing each sub-polygon (Line 12). The final step of the algorithm (Line 13) applies the simplifyBorders function that simplifies the zigzag line segments of the polylines resulting from the grid cell decomposition.

Details describing the main optimization and post-processing methods are presented in the remainder of this section.

Input: C, Q, ε, MaxIter
Output: Q
1:  Finished ← false
2:  Iter ← 0
3:  while ¬Finished do
4:      for i = 1 to n do
5:          r_i ← updateRadiuses(q_i, Iter)   ▷ eq. 22-24
6:      for i = 1 to n do
7:          G_i ← assignCells(C, Q, i)   ▷ eq. 10
8:          q_i ← (G_i, o_i, r_i)   ▷ eq. 12
9:      (o_1, ..., o_n) ← calculateCenters(Q)   ▷ eq. 25
10:     Iter ← Iter + 1
11:     if Iter ≥ MaxIter or f_err(Q) ≤ ε then
12:         Finished ← true
13: return Q
Algorithm 2 PFH

III-A Potential Field Heuristics

The Potential Field Heuristics (PFH) is an iterative method that searches for the optimization parameters $o_i$, $r_i$ associated with each sub-polygon partition using a simple heuristic. The algorithm is presented in Algorithm 2. The constant parameters used by the algorithm are the lower bound $\epsilon$ of the area size errors for sub-polygons and the maximum number of iterations MaxIter.

The general idea behind the PFH algorithm is to use a heuristic function to update the radiuses of the potential field circles ($r_i$) associated with the sub-polygon partitions ($q_i$). These updates directly influence the grid cell assignments for each partition (eq. 10), thus changing the partition configurations. The centers of the potential field circles ($o_i$) are calculated as partition centroids based on the updated grid cell assignments ($G_i$).

The execution of the algorithm starts with the initialization of two auxiliary variables (Lines 1-2). The first one is a boolean flag (Finished) used to determine if a termination condition of the algorithm has been satisfied. The second variable (Iter) is a simple iteration counter used in the main loop of the algorithm, which is executed in Lines 3-12. Conceptually, the main loop consists of four steps. The first step is the heuristic-based update of the radiuses (Lines 4-5). The second step includes updates of the sub-polygon partitions ($q_i$, Lines 6-8). In the next step, the centers of the potential field circles ($o_i$) are calculated (Line 9). Finally, two termination conditions are evaluated (Lines 11-12). The first condition checks whether the area sizes of the resulting sub-polygon partitions are within the specified error $\epsilon$. The second condition checks if the algorithm has reached its maximum number of iterations MaxIter.

The heuristic used for updating the radius associated with each potential field circle, executed by the updateRadiuses function, is defined by the following equations:

(22) $r_i \leftarrow r_i + \delta \cdot \Delta r_i$
(23) $\Delta r_i = \sqrt{\frac{w_i \cdot A(P)}{\pi}} - \sqrt{\frac{A(P_i)}{\pi}}$
(24) $\delta = 1 - \frac{Iter}{MaxIter}$

where $\delta$ is a linear dampening factor dependent on the maximum iteration parameter MaxIter and the iteration counter Iter of the algorithm. Thus, in the initial iterations of the execution, updates to the radius are more radical, taking bigger step values, while towards the end of the algorithm execution the updates become more refined.

Centers of the attractive potential field force for each partition are calculated as partition centroids:

(25) $o_i = \frac{1}{|G_i|} \sum_{c_j \in G_i} p_j$
(a) Reassignment of a cell that causes a non-continuous partition boundary. Partitions before and after the reassignment are shown on the left and right side, respectively.
(b) Reassignment of cells to minimize area size errors. Candidate cells from a partition that has been assigned too many cells, considered for reassignment to a neighboring partition, are shown on the left side. The result of the reassignment is shown on the right side.
Fig. 5: An illustrative example of applying the rebalancing partitions post-processing steps.
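The exact radius update rule is not fully specified in the text reproduced here, so the following sketch shows one plausible realization of the PFH loop under stated assumptions: each radius is stepped toward the radius of a circle with the target area, with a linearly damped step size, followed by cell reassignment (using `assign_cells` from the earlier sketch) and centroid recentering:

```python
import math

def pfh(cells, partitions, weights, total_area, eps, max_iter=200):
    """Iterate: damped radius update -> cell reassignment -> recentering,
    until every partition's relative area error is within eps."""
    for it in range(max_iter):
        damp = 1.0 - it / max_iter                   # linear dampening, eq. 24
        for q, w in zip(partitions, weights):
            area = sum(cells[j][1] for j in q.cells)
            target_r = math.sqrt(w * total_area / math.pi)   # eq. 21
            cur_r = math.sqrt(area / math.pi) if area > 0 else target_r
            q.radius += damp * (target_r - cur_r)    # assumed update rule
        assign_cells(cells, partitions)              # eq. 10 / eq. 12
        for q in partitions:
            pts = [cells[j][0] for j in q.cells]
            if pts:                                  # centroid, eq. 25
                q.center = (sum(x for x, _ in pts) / len(pts),
                            sum(y for _, y in pts) / len(pts))
        errors = [abs(sum(cells[j][1] for j in q.cells) - w * total_area)
                  / (w * total_area)
                  for q, w in zip(partitions, weights)]
        if max(errors) <= eps:
            break
    return partitions
```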

III-B CMA-ES Optimization

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [17] is an evolutionary optimization technique that has gained popularity in recent years. It has been successfully applied to many problems where the gradient of the objective function cannot be computed directly and where numerical computation is expensive. Additionally, the CMA-ES includes methods to mitigate local minima problems. The strategy used in the algorithm can estimate the correlation between the optimization parameters and can detect when the parameters are independent. The CMA-ES searches for parameters $x$ among a set of possible parameters minimizing the value of the objective function $f(x)$ without any specific knowledge or restrictions on the function itself. For example, $f$ can be non-convex, multi-modal, non-smooth, discontinuous, noisy, or ill-conditioned.

For the area partitioning problem, the CMA-ES is used to minimize the objective function defined in eq. 18, where the optimization parameters are $(o_1, r_1, \ldots, o_n, r_n)$ (each $q_i$ is a tuple $(G_i, o_i, r_i)$). The algorithm searches for a set of potential field centers and their radiuses that minimizes the sub-polygon area size errors and maximizes the sub-polygon compactness. During each optimization step, the CMA-ES selects a set of parameters which defines the grid cell assignment (eq. 10, eq. 12) used to evaluate the objective function $F_p$.
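A sketch of how the CMA-ES could be wired to this problem using N. Hansen's pycma package (`pip install cma`) is shown below. Flattening the parameters into a vector and the plugged-in dummy compactness function are our choices; a real score (e.g. Schwartzberg over the cell union) would replace the lambda:

```python
import cma  # N. Hansen's pycma package: pip install cma

def optimize_cmaes(cells, partitions, weights, total_area, eps):
    """Search over the flattened parameter vector (o_1x, o_1y, r_1, ...)
    minimizing the penalized objective from eq. 18."""
    x0 = [v for q in partitions for v in (*q.center, q.radius)]

    def f(x):
        for q, k in zip(partitions, range(0, len(x), 3)):
            q.center, q.radius = (x[k], x[k + 1]), x[k + 2]
        assign_cells(cells, partitions)          # eq. 10 / eq. 12
        return objective(cells, partitions, weights, total_area, eps,
                         compactness=lambda q: 0.0)  # plug in a real score

    es = cma.CMAEvolutionStrategy(x0, sigma0=0.25)
    es.optimize(f)
    f(es.result.xbest)   # apply the best parameters found
    return partitions
```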

III-C Random Search Optimization

A simple alternative to the CMA-ES optimization proposed in this paper uses a stochastic random search approach. In this case, the optimization parameters and the objective function are the same as in the case of the CMA-ES. The algorithm generates a number of random configurations of $(o_i, r_i)$, which are evaluated, and the best one is returned.
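A corresponding random search sketch; sampling the centers inside the bounding box of the cell centers and the radius range are our assumptions:

```python
import random

def random_search(cells, partitions, weights, total_area, eps,
                  n_samples=1000):
    """Evaluate random potential field configurations and keep the best."""
    xs = [p for p, _ in cells]
    lo_x, hi_x = min(x for x, _ in xs), max(x for x, _ in xs)
    lo_y, hi_y = min(y for _, y in xs), max(y for _, y in xs)
    best, best_val = None, float("inf")
    for _ in range(n_samples):
        for q in partitions:
            q.center = (random.uniform(lo_x, hi_x),
                        random.uniform(lo_y, hi_y))
            q.radius = random.uniform(0.0, hi_x - lo_x)
        assign_cells(cells, partitions)
        val = objective(cells, partitions, weights, total_area, eps,
                        compactness=lambda q: 0.0)
        if val < best_val:
            best_val = val
            best = [(q.center, q.radius) for q in partitions]
    for q, (c, r) in zip(partitions, best):
        q.center, q.radius = c, r
    assign_cells(cells, partitions)
    return partitions
```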

III-D Rebalancing Partitions

The sub-polygon partitions generated by the main optimization methods are further improved by applying the RebalancePartitions post-processing step which includes two procedures.

The focus of the first procedure is to deal with non-continuous partition boundaries that may result from the grid cell assignment. A sub-polygon partition boundary is defined by the vertices shared with the original polygon and the edges of its grid cells that are neighbors to other sub-polygon partitions. Sporadically, the assignment of grid cells to sub-polygon partitions based on eq. 10 may create divisions that result in non-continuous partition boundaries. Transforming such boundaries directly into the polyline representation used by the final sub-polygons would create self-intersecting polygons. The concept of disconnected grid cells can be used to identify non-continuous partition boundaries. A grid cell assigned to a partition is considered to be disconnected if all of its neighboring cells (top, bottom, left, right) are assigned to partitions other than the partition in question:

(26) $\mathit{disc}(c_j) \iff \mu(t_j) \neq \mu(j) \,\wedge\, \mu(b_j) \neq \mu(j) \,\wedge\, \mu(l_j) \neq \mu(j) \,\wedge\, \mu(r_j) \neq \mu(j)$

where $t_j$, $b_j$, $l_j$ and $r_j$ are the top, bottom, left and right neighboring cells of $c_j$. An example of a disconnected cell is presented in Fig. 5(a) (left), where one cell assigned to a partition has only cell neighbors that belong to two other partitions.

The procedure used to remedy the problem of non-continuous partition boundaries is defined as follows. For each disconnected grid cell, select a new sub-polygon partition until all grid cells are connected to their respective partitions. The selection of a new partition is based on the original assignment equation (eq. 10), where the second-best partition that is also a neighbor of the cell in question is considered. Fig. 5(a) shows a simple example of the reassignment. In this case, the disconnected cell can be assigned to either of the two neighboring partitions, as both are neighbors of the cell. After applying the assignment equation (eq. 10), one of the sub-polygon partitions is selected.
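A sketch of the disconnected-cell repair; it assumes cell centers lie on an exact d-spaced lattice so that neighbor lookup by coordinates is safe, and a single pass (repeat until no changes for full convergence):

```python
import math

def fix_disconnected_cells(cells, partitions, d):
    """Reassign any cell whose four lattice neighbors all belong to other
    partitions (eq. 26)."""
    owner = {}
    for i, q in enumerate(partitions):
        for j in q.cells:
            owner[j] = i
    index_at = {cells[j][0]: j for j in range(len(cells))}

    def neighbors(j):
        x, y = cells[j][0]
        return [index_at[p] for p in ((x + d, y), (x - d, y),
                                      (x, y + d), (x, y - d))
                if p in index_at]

    for j in list(owner):
        nbs = neighbors(j)
        if nbs and all(owner[k] != owner[j] for k in nbs):
            # pick the best neighboring partition by the eq. 10 criterion
            candidates = {owner[k] for k in nbs}
            new_i = min(candidates,
                        key=lambda i: math.dist(partitions[i].center,
                                                cells[j][0])
                                      - partitions[i].radius)
            partitions[owner[j]].cells.remove(j)
            partitions[new_i].cells.append(j)
            owner[j] = new_i
```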

The second process included in the RebalancePartitions post-processing step uses simple heuristics to improve area size errors for sub-polygon partitions. For example, suppose a partition has received too many grid cells during the optimization. In that case, the redundant cells can potentially be reassigned to a neighbor partition with an insufficient number of grid cells. Thus the area size errors for both partitions are minimized.

In order to ascertain whether a partition has been assigned too many grid cells or an insufficient number of cells, an approximate expected number of cells ($\hat{m}_i$) is calculated according to the following equations:

(27) $\hat{m}_i = \frac{w_i \cdot A(P)}{d^2}$
(28) $\Delta m_i = |G_i| - \hat{m}_i$

where $\Delta m_i$ corresponds to the number of cells that are redundant ($\Delta m_i > 0$) or absent ($\Delta m_i < 0$) in the partition $q_i$.

The post-processing procedure starts with selecting the partition $q_{max}$ with the largest area that has been assigned a redundant number of cells (i.e. $\Delta m_{max} > 0$). All neighbor partitions of $q_{max}$ are evaluated according to eq. 28, and the partition which is missing the most cells is selected ($q_{recv}$) to receive a grid cell from $q_{max}$. The process of selecting the recipient partition is defined by the following equation:

(29) $q_{recv} = \operatorname{arg\,min}_{q_k \in N(q_{max})} \Delta m_k$

where $N(q_{max})$ denotes the neighbor partitions of $q_{max}$. The candidate cell selected for reassignment is the cell that is the furthest from the partition center $o_{max}$. The procedure is repeated until no further cell reassignments can be made.

An illustrative example of the procedure is shown in Fig. 5(b). One partition has the largest area and has been assigned too many cells, while its neighbor partition has an insufficient number of grid cells. The procedure reassigns one grid cell from the former partition to the latter.
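A condensed sketch of this rebalancing loop; for brevity it selects the recipient among all other partitions rather than only the true neighbor partitions, as the paper prescribes:

```python
import math

def rebalance(cells, partitions, weights, total_area, d):
    """Move single cells from the partition with the largest cell surplus to
    the partition missing the most cells (cf. eq. 27-29)."""
    def delta(i):                                       # eq. 28
        expected = weights[i] * total_area / d ** 2     # eq. 27
        return len(partitions[i].cells) - expected

    while True:
        donor = max(range(len(partitions)), key=delta)
        recv = min(range(len(partitions)), key=delta)   # eq. 29
        if delta(donor) <= 0.5 or delta(recv) >= -0.5:
            break                         # no useful whole-cell move remains
        q_d = partitions[donor]
        # reassign the donor cell farthest from the donor's field center
        j = max(q_d.cells, key=lambda j: math.dist(cells[j][0], q_d.center))
        q_d.cells.remove(j)
        partitions[recv].cells.append(j)
```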

III-E Polyline Simplification

The borders between the sub-polygons generated by the grid cell assignment follow the edges of the cells. Because of that, the polylines defining the borders have a zigzag shape and consist of an excessive number of segments. The final post-processing step used in the AreaDecompose algorithm, simplifyBorders, aims at simplifying these borders with k-point polylines where the value of $k$ is minimal. Fig. 6 depicts an example set of sub-polygons with their borders resulting from the grid cell assignment (top left) and their final borders after the simplification (top right). In this example, four borders have been simplified with 3-point polylines, depicted at the bottom of the figure.

Fig. 6: An example of polyline simplification.

The simplifyBorders procedure first identifies the borders that need to be simplified. The grid corners shared between at least three partitions are considered to be fix-points, together with any points on the boundary of the polygon $P$. An example of a set of fix-points computed for a sample polygon is presented in Fig. 6 (black circles). The polylines considered for simplification include all borders between the fix-points that are not part of the boundary of $P$ (black-red circles in the figure). The simplifyBorders procedure applies the k-point polyline simplification algorithm to these borders.

The simplification algorithm applied to each border should fulfill two key requirements. First, the resulting polyline should have minimal influence on the compactness score of each sub-polygon (compactness requirement). Second, the effects of the border simplification on the area size of each sub-polygon should be minimized (area requirement).

In [9], the authors proved that the problem of approximating a polyline by a k-point polyline that fulfills the compactness and area requirements is NP-hard. Additionally, a solution limited to the case of an x-monotone path respecting the area requirement was proposed. Most polyline simplification algorithms, such as variants of the Ramer-Douglas-Peucker algorithm [30, 12], address the compactness requirement by ensuring that the distance between the simplified polyline and the original polyline is within a threshold. Thus, the approximated polyline has a similar shape to the original polyline, and as a consequence, the compactness of the polygons defined by either the approximated or the original polylines remains similar. In this work, the proposed k-point simplification algorithm adopts this idea. The exact relation between the distance measure threshold values and the compactness score is left to be investigated in future work.

The proposed method incrementally tries to find a k-point polyline that fulfills both the area and compactness requirements with an increasing number of points, starting with $k = 3$ (a single intermediate point). For each value of $k$, a gradient descent [32] method is used to find the positions of the intermediate points in the new k-point polyline, where the objective is to satisfy the area requirement. The objective function used in the optimization is based on Green's theorem [15], which allows for a calculation of the areas enclosed by the original border and its k-point simplifying polyline. After the optimization, the algorithm checks if the maximum distance between the new k-point polyline and the original border is within a given threshold, that is, if the compactness requirement is satisfied. If the distance is larger than the threshold, the value of $k$ is incremented, and the procedure is repeated.

Fig. 7: An example of a border simplification by a k-point polyline. The border polyline defined by $v_1, \ldots, v_{13}$ is simplified by a new 3-point polyline satisfying two critical requirements: 1) the same area is maintained on the two sides of the polyline (i.e. the sum of the yellow and the sum of the green areas are equal); 2) the new polyline has a similar shape to the original polyline, which in turn minimizes the influence of the simplification on the compactness of the sub-polygons located on both sides of the border.

Before describing the simplifyBorders algorithm in detail, the following notation is introduced. Let $L$ denote the input polyline defined by a set of points $(v_1, \ldots, v_n)$. A k-point polyline that simplifies $L$ is denoted as $L_k$. Note that the first and the last points in $L_k$ are $v_1$ and $v_n$, respectively. Thus, the goal of the algorithm is to find the positions of the $k-2$ intermediate points to complete the polyline. An example of a single polyline simplification is shown in Fig. 7. The original polyline consists of 13 points and is simplified by a 3-point polyline defined by the two endpoints and a single intermediate point. The area requirement is satisfied, as the areas between the two sides of the polyline enclosed by $L$ and $L_3$ are equal (i.e. the sum of the yellow and the sum of the green areas are equal). The maximum distance between the points of the border $L$ and the simplifying polyline $L_3$ is below a given threshold, satisfying the compactness requirement.

Input: P, (P_1, ..., P_n), d, MaxIter
Output: (P_1, ..., P_n) with simplified borders
1:  B ← getBordersToSimplify(P, (P_1, ..., P_n))
2:  for L in B do
3:      Finished ← false
4:      k ← 3   ▷ one intermediate point
5:      while ¬Finished do
6:          X ← initPoints(L, k)
7:          L_k ← (v_1, X, v_n)
8:          L_k ← gradDesc(L, L_k, MaxIter)   ▷ eq. 30
9:          maxDist ← calcMaxDist(L, L_k)   ▷ eq. 32
10:         if maxDist > d then
11:             k ← k + 1
12:         else
13:             Finished ← true
14: return updateBorders((P_1, ..., P_n), B)
Algorithm 3 SimplifyBorders

The SimplifyBorders algorithm is presented in Algorithm 3. The input to the algorithm includes the original polygon $P$, the set of sub-polygons, and the grid cell size $d$ (eq. 20) used as a threshold for satisfying the compactness requirement. The parameter MaxIter defines the maximum number of iterations of the gradient descent algorithm used to find the intermediate point positions of the simplifying k-point polyline $L_k$.

The SimplifyBorders algorithm starts with identifying the set of borders $B$ that need to be simplified (Line 1). For each border $L \in B$, a k-point simplifying polyline $L_k$ is calculated in Lines 2-13. The number of intermediate points in $L_k$ is initialized to 1 (Line 4). The main optimization loop for finding the positions of the intermediate points of $L_k$ is executed in Lines 5-13.

The optimization loop starts with the initialization of the intermediate points $X$ (Line 6). The initPoints function returns a list of points equally spaced along the original polyline $L$. The initial k-point polyline is constructed by combining the start ($v_1$) and end point ($v_n$) of $L$ with the set of intermediate points $X$ (Line 7).

The gradient descent optimization method (gradDesc, Line 8) is used to calculate the positions of the intermediate points of $L_k$. For a given set of points, the method solves the following optimization problem:

(30) $\min_{L_k} \; G(L, L_k)^2$

where the function $G$ is based on Green's theorem [15] and allows for the calculation of the areas enclosed by $L$ and $L_k$. If the sums of the areas on each side of $L_k$ enclosed by $L$ are equal, then $G(L, L_k) = 0$. The green formula is defined as:

(31) $G(L, L_k) = \frac{1}{2} \sum_{i=1}^{N} \left( x_i \, y_{i+1} - x_{i+1} \, y_i \right)$

where $(x_i, y_i)$, $i = 1, \ldots, N$, are the vertices of the closed polygon formed by concatenating $L$ with the reverse of $L_k$, and $(x_{N+1}, y_{N+1}) = (x_1, y_1)$.

Minimizing the value of the green formula when choosing the positions of the intermediate points of $L_k$ ensures that the area requirement is satisfied.
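The sketch below implements this idea with NumPy and SciPy, using L-BFGS-B in place of plain gradient descent and initializing the intermediate points on the chord between the endpoints rather than along the original polyline (both substitutions are ours):

```python
import numpy as np
from scipy.optimize import minimize

def green_area(loop):
    """Signed area of a closed polygon via Green's theorem (shoelace)."""
    x, y = loop[:, 0], loop[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def fit_kpoint(border, k, max_iter=500):
    """Place k-2 intermediate points so the simplified polyline encloses
    zero net area against the original border (the area requirement)."""
    border = np.asarray(border, dtype=float)
    v1, vn = border[0], border[-1]
    # initial guess: points equally spaced on the chord v1 -> vn
    t = np.linspace(0.0, 1.0, k)[1:-1, None]
    x0 = (v1 + t * (vn - v1)).ravel()

    def g(x):
        mid = x.reshape(-1, 2)
        simplified = np.vstack([v1, mid, vn])
        # closed loop: original border followed by the reversed simplification
        loop = np.vstack([border, simplified[::-1]])
        return green_area(loop) ** 2          # eq. 30

    res = minimize(g, x0, method="L-BFGS-B", options={"maxiter": max_iter})
    return np.vstack([v1, res.x.reshape(-1, 2), vn])
```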

The resulting configuration of the k-point polyline is evaluated using the distance measure to check if it satisfies the compactness requirement (Lines 9-13). The calcMaxDist function computes the maximum distance between the points of $L$ and $L_k$ according to:

(32) $\mathit{maxDist} = \max_{v \in L} \; d(v, L_k)$

where $d(v, L_k)$ denotes the Euclidean distance from a point $v$ of $L$ to the polyline $L_k$. An example result of the calculation for a sample polyline is depicted in Fig. 7. If the value of maxDist is larger than the threshold defined as the grid cell size $d$ (eq. 20), the compactness requirement is not satisfied and the optimization loop (Lines 5-13) is repeated with an increased number of intermediate points. Otherwise, the newly calculated k-point polyline meets both the area and compactness requirements, the main optimization loop is terminated (Line 13), and the algorithm continues its execution for the remaining borders in $B$.

Finally, once the process of finding simplifying k-point polylines for all borders in $B$ is finished, the algorithm assigns the newly calculated polylines to the respective sub-polygons (Line 14).

IV Empirical Evaluation

The proposed AreaDecompose algorithm has been evaluated in a series of runs with different configurations applied to a set of randomly generated polygons. The generated set consists of 200 non-convex polygons (available at https://gitlab.liu.se/lrs/pad_polygons).

Several compactness scores exist in the literature that can be used in the optimization phase of the AreaDecompose algorithm. For the experimental evaluation, a set of the five most commonly used compactness metrics has been chosen. The first metric is the Schwartzberg score [33]. The score is computed as the ratio between the circumference of a circle which has the same area as the input polygon and the perimeter of the polygon:

(33) $C_{Schwartzberg}(P) = \frac{2 \sqrt{\pi \cdot A(P)}}{\mathit{per}(P)}$

where $\mathit{per}(P)$ is the perimeter of the polygon $P$.

The second metric is the Polsby-Popper score [29], which is calculated as the ratio of the polygon area and the area of a circle whose circumference is equal to the perimeter of the polygon:

(34) $C_{PolsbyPopper}(P) = \frac{4 \pi \cdot A(P)}{\mathit{per}(P)^2}$

The third metric is the Reock degree of compactness [31]. It is computed as the ratio between the polygon area and the area of the smallest circle which encloses it:

(35) $C_{Reock}(P) = \frac{A(P)}{A(\mathit{MEC}(P))}$

The fourth compactness metric is the two balls score derived from the definition of the $\alpha$-fat property [13]. The score is computed as the ratio between the circumference of the maximum inscribed circle in the polygon $P$ and the circumference of the minimum enclosing circle of the polygon $P$:

(36) $C_{TwoBalls}(P) = \frac{c_{in}(P)}{c_{enc}(P)}$

where $c_{in}(P)$ and $c_{enc}(P)$ are the circumferences of the inscribed and enclosing circles, respectively.

The final metric used in the evaluation is the length-to-width ratio of the rotated minimum bounding rectangle (RMBR) of the polygon $P$:

(37) $C_{RMBR}(P) = \frac{w_{RMBR}}{l_{RMBR}}$

where $w_{RMBR}$ and $l_{RMBR}$ are the width and the length of the RMBR.
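For reference, the Reock and RMBR scores can be computed with Shapely (version 2.0 or later for `minimum_bounding_circle`); the sketch and its function names are ours:

```python
from shapely.geometry import Polygon
from shapely import minimum_bounding_circle  # Shapely >= 2.0

def reock(poly: Polygon) -> float:
    """Polygon area over the area of its minimum enclosing circle (eq. 35)."""
    return poly.area / minimum_bounding_circle(poly).area

def rmbr_score(poly: Polygon) -> float:
    """Width-to-length ratio of the rotated minimum bounding rectangle."""
    rect = poly.minimum_rotated_rectangle
    (x0, y0), (x1, y1), (x2, y2) = list(rect.exterior.coords)[:3]
    s1 = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    s2 = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    return min(s1, s2) / max(s1, s2)

rect = Polygon([(0, 0), (2, 0), (2, 1), (0, 1)])
print(reock(rect), rmbr_score(rect))   # ~0.51, 0.5
```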

The values for all compactness scores range from 0 to 1, where values closer to 1 indicate more compact polygons. The Schwartzberg score was used in the objective function (eq. 15) whenever the optimization methods in the AreaDecompose algorithm were utilized. Other compactness scores listed above were used for the evaluation of results. A Collective compactness score was calculated as an average of all scores for each generated result.

The evaluation of the algorithms was performed on a set of randomly generated polygons. The problem parameters used for the different runs are summarized in Table I. The algorithms were applied to five different problems (categorized as Cases 1-5), varying the number of sub-polygons and their sizes. In the simplest case (Case 1), the polygons were divided into two equally sized sub-polygons. The most complex case (Case 5) involved dividing polygons into a randomly selected number of sub-polygons (up to 9) with their sizes selected randomly. Cases 1-4 were applied once on the polygon dataset, while Case 5 was applied twice.

Case# #Polygons #Partitions Weights
1 200 2 0.5, 0.5
2 200 3 0.166, 0.333, 0.5
3 200 4 0.1, 0.2, 0.4, 0.5
4 200 5 0.2, 0.2, 0.2, 0.2, 0.2
5 400 RND (2,9) RND
TABLE I: Problem configurations (Cases) used in the experimental evaluation.

Five configurations of the AreaDecompose algorithm were used. The first three configurations included running a single optimization algorithm. These included running either the heuristic-based PFH algorithm (Section III-A), the CMA-ES (Section III-B), or the Random Search (RS, Section III-C). The remaining two configurations involved using a combination of the heuristic-based PFH with one of the other remaining optimization methods (i.e. CMA-ES and RS). The results generated by the heuristic-based algorithm were used as initial solutions for the optimization methods. Additionally, three different tolerance values for the area size errors were used (i.e. $\epsilon \in \{1\%, 5\%, 10\%\}$).

Sub-polygons generated by the AreaDecompose algorithm when solving Cases 1-5 were compared to the polygons generated by the Hert&Lumelsky algorithm [18] using the same input parameters. The experiments were run on a computer equipped with an AMD Ryzen 9 3900XT 12-core processor, utilizing a single core. The Hert&Lumelsky algorithm was implemented in Python, while AreaDecompose was implemented in C++.

The results of the experimental evaluations for different algorithm configurations and tolerance values were computed as an average value over all cases considering three metrics. The metrics included the quality and accuracy of the solutions and the efficiency of the algorithm. The quality of the solutions measured as Collective compactness scores for the generated sub-polygons is presented in Fig. 8. The sub-polygon area errors representing the accuracy of the solutions are depicted in Fig. 9. Finally, the algorithm efficiency measured as the algorithm execution times are depicted in Fig. 10.

Fig. 8: Collective compactness scores for different algorithm configurations, calculated as average values over all cases.

The Hert&Lumelsky algorithm generated sub-polygons with the lowest compactness score (0.38) among all the algorithms considered (Fig. 8). In comparison, the AreaDecompose algorithm with its five different configurations provided solutions with sub-polygon compactness scores between 0.58 and 0.66. The worst performing configurations included running only the CMA-ES or RS optimization methods. These optimization methods do not perform well when searching for an optimal solution given the initial parameters of the potential field configurations; they most likely find sub-optimal solutions at local minima. Additionally, the three different tolerance values for the area size errors ($\epsilon$) did not influence the achieved compactness scores. The heuristic-based PFH and its combinations with CMA-ES and RS generated results with the highest compactness scores, ranging between 0.61 and 0.66. In these configurations, lower tolerance value settings (e.g. $\epsilon = 1\%$) led to solutions with higher compactness scores. This is expected, as lower values result in a higher number of grid cells being used, thus allowing the algorithm to find solutions with more compact sub-polygons.

Fig. 9: Area size errors for different algorithm configurations, calculated as average values over all cases.

The average area size error for solutions generated by the Hert&Lumelsky algorithm was recorded at 3.7e-4% (Fig. 9). This is as expected, since the algorithm guarantees the area sizes of the sub-polygons to be exact. In the case of the AreaDecompose algorithm, the area size errors are bounded by the tolerance parameter $\epsilon$. All configurations of the algorithm generated solutions with an error below the value of the tolerance $\epsilon$, except the RS. The most accurate solutions were generated by the PFH-CMAES configuration, with errors as low as 0.23% for the case of $\epsilon = 1\%$.

The AreaDecompose algorithm efficiency, measured as its execution times, varied depending on the chosen configuration and the tolerance $\epsilon$ of the area size error parameter. This is as expected, since the execution times depend on the number of grid cells used while searching for a solution. Overall, the algorithm execution times varied between 4.1ms (PFH, $\epsilon = 10\%$) and 535ms (RS, $\epsilon = 1\%$). Execution times for the Hert&Lumelsky algorithm were measured at 9.2ms, which was higher than in the case of the PFH run with $\epsilon = 10\%$ (4.1ms) and $\epsilon = 5\%$ (7.3ms). Other AreaDecompose configurations resulted in higher execution times, especially for configurations where the heuristic-based PFH was combined with the CMA-ES or the RS techniques.

Fig. 10: Running times for different algorithm configurations, shown in log-scale, calculated as average values over all cases.

Overall, the PFH and the PFH combined with CMA-ES were the two best AreaDecompose algorithm configurations when considering all three measured aspects. Results comparing these two configurations to the Hert&Lumelsky algorithm are presented in Table II. For the case of the area size error tolerance $\epsilon = 1\%$, the PFH and its combination with CMA-ES generated solutions with a higher collective compactness score (0.66) in comparison to the Hert&Lumelsky algorithm (0.38). The PFH-CMAES configuration provided solutions with a significantly lower area size error (0.23%) when compared to the PFH configuration (0.48%), at the cost of a higher computational time (168ms vs 31ms).

Fig. 11: Dependency between the average collective score and the number of sub-polygons for different algorithm configurations.

An interesting dependency between the number of sub-polygons requested in the problem and the compactness scores has been observed. This is especially visible in problems where the original polygon is non-convex, such as the polygons included in the evaluation dataset. Dividing a non-convex polygon into a small number of partitions (e.g. two) is restricted by the original polygon borders. More compact sub-polygons can be generated when the problem involves dividing the polygon into a larger number of sub-polygons. Fig. 11 presents the average collective compactness scores as a function of the number of sub-polygons for different algorithm configurations. Interestingly, the Hert&Lumelsky algorithm generally generates less compact sub-polygons as the number of sub-partitions grows. This is attributed to the divide-and-conquer technique used in the algorithm, and the results obtained in our evaluation confirm the findings discussed in [18]. In the case of the AreaDecompose algorithm, this trend is reversed. That is, the higher the number of sub-polygons requested in the problem, the higher their compactness. The PFH-CMAES configuration of the AreaDecompose algorithm generated sub-polygons with compactness scores between 0.55 (2 sub-polygons) up to 0.71 (9 sub-polygons). The Hert&Lumelsky algorithm found solutions with compactness scores between 0.27 (9 partitions) and 0.49 (2 sub-polygons). While the PFH-CMAES outperforms the Hert&Lumelsky algorithm when considering the sub-polygon compactness, the difference becomes much more significant as the number of sub-polygons requested in the problem increases. In the extreme case of 9 sub-polygons, the PFH-CMAES generated sub-polygons which are 258% more compact than the results provided by the Hert&Lumelsky algorithm.

Algorithm       Tolerance   Collective Score   Area Error [%]   Time [ms]
PFH-CMAES       1%          0.66               0.23             168.45
PFH-CMAES       5%          0.65               1.03             30.02
PFH-CMAES       10%         0.63               2.29             15.48
PFH             1%          0.66               0.48             31.02
PFH             5%          0.64               2.14             7.26
PFH             10%         0.63               4.24             4.11
Hert&Lumelsky   NA          0.38               0                9.24
TABLE II: Comparison of the results for the two best AreaDecompose algorithm configurations and the Hert&Lumelsky approach.

IV-A Other Application Examples

Fig. 12: The population density in the state of New York is depicted on the left, where the white color represents the lowest density and black the highest. Result of the division of the state of New York based on population density is depicted on the right.

The proposed problem representation that uses a grid cell discretization and a potential field model can be utilized for solving other classes of problems than area partitioning. One such class includes GIS-related problems where the spatial data is represented as rasters. A raster consists of a two-dimensional matrix of cells (or pixels) where each cell contains a value representing information. Rasters are commonly used in GIS systems to represent spatial data due to their simple format and the ability to visualize them in the form of an image or a map. An example is a population density map, where each pixel of the image contains the population size [22]. The grid-based representation used as the basis of AreaDecompose allows for a straightforward association between the value contained in a raster and a cell of the grid used in the process of partitioning. By calculating the population of each grid cell based on the population density map, the AreaDecompose algorithm can be used to solve the problem of dividing a polygon based on the population density instead of the total surface of each partition. In that case, the area measure introduced in Section III would not refer to the total surface, but instead to the total population living in each sub-region.
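A sketch of this substitution: cells are built from a raster so that a cell's weight is the sum of the raster values (e.g. population counts) it covers rather than its surface area; the file name in the usage note and the helper itself are hypothetical:

```python
import numpy as np

def cells_from_raster(density, cell_px, transform=lambda r, c: (c, r)):
    """Build (center, weight) grid cells where a cell's 'area' is the sum of
    raster values inside it rather than its surface."""
    rows, cols = density.shape
    cells = []
    for r0 in range(0, rows, cell_px):
        for c0 in range(0, cols, cell_px):
            block = density[r0:r0 + cell_px, c0:c0 + cell_px]
            pop = float(block.sum())
            if pop > 0.0:
                center = transform(r0 + cell_px / 2.0, c0 + cell_px / 2.0)
                cells.append((center, pop))
    return cells

# e.g. density = np.load("population_density.npy"); the resulting cells feed
# straight into assign_cells / pfh above, with total_area = sum of weights.
```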

To demonstrate this application, AreaDecompose is applied to the following redistricting problem. Given the population density map of the state of New York, calculate four districts with an equal population. The input population density map and the districts generated by the AreaDecompose algorithm are shown in Fig. 12. Table III presents statistics for the resulting sub-regions, including the population error and three compactness scores. Each generated sub-region, with the exception of sub-region 2, maintains high compactness scores. In the case of region 2, the partitioning is constrained by the border of the state. Based on the input population density data, each district should ideally include 5.053 million inhabitants. The districts generated by the AreaDecompose algorithm have a population error of at most 2.77% while maintaining high compactness scores.

Sub-region   Population [M]   Pop. Error [%]   Surface [km²]   Schwartzberg   Reock   Polsby-Popper
1            5.114            1.19             84,052          0.79           0.56    0.62
2            5.078            0.49             7,715           0.56           0.27    0.31
3            5.104            1.00             2,694           0.62           0.35    0.39
4            4.917            2.77             46,849          0.79           0.51    0.62
TABLE III: Results of partitioning the state of New York based on spatial population density data.

V Conclusion and future work

In this paper, we have considered the problem of area partitioning, which deals with the division of non-convex polygons into a set of non-overlapping sub-polygons with defined area sizes.

The area partitioning problem arises in terrain covering applications in collaborative robotics, where the goal is to find a set of efficient plans for a team of heterogeneous robots to cover a given area. In this application context, the shape characteristics of the sub-regions generated as a result of solving the area partitioning problem can improve the efficiency with which robotic tasks can be performed within each sub-region. In fact, maximizing the compactness of sub-polygons directly influences the optimality of any generated motion plan. The state-of-the-art approach used in this application and proposed by Hert&Lumelsky in [18] provides an algorithm that guarantees the satisfaction of area size constraints, but does not consider sub-region shape constraints. The algorithms that have been presented in this paper focus on addressing both the size and shape constraints while solving the area partitioning problem.

In this work, an extended formulation of the problem has been proposed that, in addition to respecting the area size constraints, aims at maximizing the compactness scores of the generated sub-polygons. The proposed problem formulation is based on grid cell discretization and a potential field model and allows for applying general optimization algorithms. An algorithm, AreaDecompose, has been presented for solving the extended area partitioning problem efficiently. The algorithm combines a new heuristic-based method with existing optimization techniques and two post-processing methods.

An empirical evaluation has been presented comparing the error on the area size and the compactness metric when using different configurations of the proposed partitioning algorithm. In addition, a comparative analysis is made between the algorithms proposed in this paper and the state-of-the-art approach proposed by Hert&Lumelsky in [18]. The results have shown that the new methods proposed in this paper can generate on average up to 73% more compact partitions than those from Hert&Lumelsky, while remaining within a specified tolerance on the error of the size of the partitions. Additionally, the best results were obtained when the AreaDecompose algorithm was configured to use a combination of the heuristic-based PFH and the CMA-ES optimization technique. The PFH quickly finds initial solutions, which are then further improved using the CMA-ES optimization, reducing the errors on the area constraints and improving the compactness of each sub-polygon.

While the standard area partitioning problem has been mainly defined to be used in terrain covering applications in robotics, the extended problem definition proposed here can be easily adapted to other GIS-related applications. An example of such an application, also presented in the paper, is the problem of redistricting for the state of New York.

In this work, the polygon simplification step aims to ensure that the areas of the generated sub-polygons do not change and to keep the simplified polygon within bounds of the original polygon to maintain its compactness properties. In future work, it would be interesting to investigate if the simplification step can be used to reduce the error on the area constraint and also improve the compactness property.

References

  • [1] D. Adjiashvili and D. Peleg (2010) Equal-area locus-based convex polygon decomposition. Theoretical Computer Science 411 (14), pp. 1648–1667.
  • [2] A. Agarwal, Lim Meng Hiot, Nguyen Trung Nghia, and Er Meng Joo (2006) Parallel region coverage using multiple UAVs. In 2006 IEEE Aerospace Conference, 8 pp.
  • [3] A. Agarwal, L. M. Hiot, E. M. Joo, and N. T. Nghia (2007) Rectilinear workspace partitioning for parallel coverage using multiple unmanned aerial vehicles. Advanced Robotics 21 (1-2), pp. 105–120.
  • [4] T. Asano and T. Asano (1983) Minimum partition of polygonal regions into trapezoids. In 24th Annual Symposium on Foundations of Computer Science (SFCS 1983), pp. 233–241.
  • [5] T. Asano, T. Asano, and H. Imai (1986) Partitioning a polygonal region into trapezoids. J. ACM 33 (2), pp. 290–312.
  • [6] B. S. Baker, E. Grosse, and C. S. Rafferty (1988) Nonobtuse triangulation of polygons. Discrete & Computational Geometry 3 (2), pp. 147–168.
  • [7] C. Berger, M. Wzorek, J. Kvarnström, G. Conte, P. Doherty, and A. Eriksson (2016) Area coverage with heterogeneous UAVs using scan patterns. In 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 342–349.
  • [8] M. Bern, S. Mitchell, and J. Ruppert (1995) Linear-size nonobtuse triangulation of polygons. Discrete & Computational Geometry 14 (4), pp. 411–428.
  • [9] P. Bose, S. Cabello, O. Cheong, J. Gudmundsson, M. van Kreveld, and B. Speckmann (2006) Area-preserving approximations of polygonal paths. Journal of Discrete Algorithms 4 (4), pp. 554–566.
  • [10] I. T. Christou and R. R. Meyer (1996) Optimal equi-partition of rectangular domains for parallel computation. Journal of Global Optimization 8 (1), pp. 15–34.
  • [11] L. De Floriani and E. Puppo (1992) An on-line algorithm for constrained Delaunay triangulation. CVGIP: Graphical Models and Image Processing 54 (4), pp. 290–300.
  • [12] D. H. Douglas and T. K. Peucker (1973) Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: The International Journal for Geographic Information and Geovisualization 10 (2), pp. 112–122.
  • [13] A. Efrat, M. J. Katz, F. Nielsen, and M. Sharir (2000) Dynamic data structures for fat objects and their applications. Computational Geometry 15 (4), pp. 215–227.
  • [14] H. Feng and T. Pavlidis (1975) Decomposition of polygons into simpler components: feature generation for syntactic pattern recognition. IEEE Transactions on Computers 100 (6), pp. 636–650.
  • [15] G. Green (1828) An essay on the application of mathematical analysis to the theories of electricity and magnetism. Nottingham.
  • [16] D. H. Greene (1983) The decomposition of polygons into convex parts. Computational Geometry 1, pp. 235–259.
  • [17] N. Hansen (2006) The CMA evolution strategy: a comparing review. In Towards a New Evolutionary Computation: Advances in the Estimation of Distribution Algorithms, J. A. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea (Eds.), pp. 75–102.
  • [18] S. Hert and V. J. Lumelsky (1998) Polygon area decomposition for multiple-robot workspace division. Int. J. Comput. Geometry Appl. 8, pp. 437–466.
  • [19] S. Hertel and K. Mehlhorn (1983) Fast triangulation of simple polygons. In International Conference on Fundamentals of Computation Theory, pp. 207–218.
  • [20] J. M. Keil (1985) Decomposing a polygon into simpler components. SIAM Journal on Computing 14 (4), pp. 799–817.
  • [21] O. Khatib (1990) Real-time obstacle avoidance for manipulators and mobile robots. In Autonomous Robot Vehicles, I. J. Cox and G. T. Wilfong (Eds.), pp. 396–404.
  • [22] M. Langford and D. J. Unwin (1994) Generating and mapping population density surfaces within a geographical information system. The Cartographic Journal 31 (1), pp. 21–26.
  • [23] C. Levcopoulos and A. Lingas (1984) Bounds on the length of convex partitions of polygons. In International Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 279–295.
  • [24] W. Li, M. F. Goodchild, and R. Church (2013) An efficient measure of compactness for two-dimensional shapes and its application in regionalization problems. International Journal of Geographical Information Science 27 (6), pp. 1227–1250.
  • [25] J. Lien and N. M. Amato (2006) Approximate convex decomposition of polygons. Computational Geometry 35 (1-2), pp. 100–123.
  • [26] E. Lodi, F. Luccio, C. Mugnai, and L. Pagli (1978) On two-dimensional data organization I. Fundamenta Informaticae 2 (1), pp. 211–226.
  • [27] D. Moitra (1991) Finding a minimal cover for binary images: an optimal parallel algorithm. Algorithmica 6 (1), pp. 624–657.
  • [28] W. Page and K. Sastry (1992) Area-bisecting polygonal paths. The Fibonacci Quarterly 30, pp. 263–273.
  • [29] D. D. Polsby and R. D. Popper (1991) The third criterion: compactness as a procedural safeguard against partisan gerrymandering. Yale L. & Pol'y Rev. 9, pp. 301.
  • [30] U. Ramer (1972) An iterative procedure for the polygonal approximation of plane curves. Computer Graphics and Image Processing 1 (3), pp. 244–256.
  • [31] E. C. Reock (1961) A note: measuring compactness as a requirement of legislative apportionment. Midwest Journal of Political Science 5 (1), pp. 70–74.
  • [32] S. Ruder (2016) An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.
  • [33] J. E. Schwartzberg (1965) Reapportionment, gerrymanders, and the notion of compactness. Minn. L. Rev. 50, pp. 443.
  • [34] G. Skorobogatov, C. Barrado, E. Salamí, and E. Pastor (2021) Flight planning in multi-unmanned aerial vehicle systems: nonconvex polygon area decomposition and trajectory assignment. International Journal of Advanced Robotic Systems 18 (1).