I Introduction
The problem of partitioning a polygon into a set of non-overlapping primitive shapes whose union is equivalent to the original polygon has been extensively studied. Such problems arise in a vast number of applications, including Very Large Scale Integration (VLSI) circuit design [4, 5, 14], image processing [27], database systems [26], and parallel computing [10], to name a few. The broad application spectrum of polygon partitioning results in a wide range of problem formulations where polygons are decomposed into sets of triangles, rectangles, and trapezoids, with or without additional size, perimeter, or other constraints.

In recent work [18], the area partitioning problem has been formulated with a focus on applications to terrain covering in robotics. In this case, the general goal of approaches designed to solve the area partitioning problem is to calculate a set of subregions that can be assigned to individual robots for tasks such as data collection, surveillance, or exploration. In order to provide efficient solutions for a team of heterogeneous robots, the subregion sizes should be defined relative to the capabilities of the individual platforms. As a motivating example, we consider the problem of terrain covering using a fleet of Unmanned Aerial Vehicles (UAVs). In this case, one can assume a number of UAVs collaborating to achieve a global goal, such as gathering image sensor data from a given environment using their camera sensors. Each UAV has its particular capabilities that limit its range, speed, and altitude, in addition to constraints imposed by its sensor characteristics, such as field of view or image resolution. In order to achieve the common goal of covering an entire environment efficiently, the whole region, described as a polygon, can be divided into a set of subregions, each of whose area corresponds to a particular UAV's relative capabilities and sensor constraints.
Fig. 1 presents two examples where an environment described as a polygon is divided into two subregions of equal size to be assigned for a data gathering mission among two homogeneous UAVs. The partitioning of the polygon can be done arbitrarily as long as the area sizes are equal. In the examples provided, one division results in a triangular subregion (left in the figure) and the other in a rectangular subregion (right in the figure). The flight paths generated by a motion planner to cover these two different subregions will differ in efficiency, as the path covering the triangular region is longer and requires more turns, which results in longer execution times and additional power or fuel usage. Additionally, in the triangular case, the flight path of the UAV covering one region will unnecessarily overlap with the other region. Ideally, one would want to minimize the overlap and provide regions that can be efficiently traversed. In the general case, the shape of the resulting subregion influences these two factors, as demonstrated in previous studies, for example in [34]. One of the measures that can be used to differentiate the properties of these shapes is a polygon compactness, or fatness, metric.
Several compactness measures have been proposed in the literature, most notably with applications to problems related to Geographic Information Systems (GIS) [24]. One example is redistricting (i.e., drawing electoral district boundaries) for political elections in order to avoid the problem of gerrymandering [29, 31, 33].
These measures often compare geometric properties of the polygon (e.g., area, perimeter) to the properties of a base geometric shape (e.g., a circle) and take values from 0 to 1. Examples of two compactness scores, the Polsby–Popper [29] and Schwartzberg [33] scores, for different shapes with equal area sizes are presented in Fig. 2. The base geometric shape used in these measures is a circle, which is considered the most compact shape. Thus, the score for a circle is equal to 1.0, and as the shape's compactness degrades, the score values decrease.
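As a concrete illustration, both scores can be computed directly from a polygon's area and perimeter. The sketch below is not taken from the paper; the function names are ours, but the formulas follow the standard definitions of the two measures (both equal 1.0 for a circle and decrease as the shape becomes less compact).

```python
import math

def polsby_popper(area: float, perimeter: float) -> float:
    """Polsby-Popper score: 4*pi*A / P^2 (1.0 for a circle)."""
    return 4.0 * math.pi * area / perimeter ** 2

def schwartzberg(area: float, perimeter: float) -> float:
    """Schwartzberg score: circumference of the equal-area circle
    divided by the polygon perimeter (1.0 for a circle)."""
    return 2.0 * math.sqrt(math.pi * area) / perimeter

# A 1x1 square and a 10x0.1 rectangle have equal area but very
# different compactness scores.
square_score = polsby_popper(1.0, 4.0)    # ~0.785
sliver_score = polsby_popper(1.0, 20.2)   # ~0.031
```

Note that the Schwartzberg score is the square root of the Polsby–Popper score, so the two measures rank shapes identically but penalize elongation differently.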
In this paper, we propose a novel area partitioning problem formulation where the goal is to divide a polygon into a set of disjoint connected subpolygons, with a focus on maximizing the subpolygons' compactness as well as satisfying the area size constraints. The problem formulation is based on a grid cell discretization and a potential field model. Additionally, an algorithm is proposed for solving the new area partitioning problem efficiently. The algorithm combines a number of optimization techniques with two post-processing methods.
Although the terrain covering problem in UAV domains is used as a motivating application in this paper, the proposed approach is generic in nature. Other application domains where shape characteristics are important could benefit from using the new area partitioning problem formulation and the proposed algorithms. Additionally, the proposed representation allows for applying the approach to GIS problems related to zoning or redistricting. An example of such an application is presented in Section IV-A.
I-A Related Work
One of the most common forms of polygon decomposition considered in the literature is polygon triangulation, where the goal is to decompose a polygon into a set of triangles under different settings. For example, in [11] a Delaunay triangulation is used. Work presented in [6, 8] considers decomposition using non-obtuse triangles. Other simple shapes have been used in polygon decomposition problems as well. These include problems of polygon partitioning into trapezoids [4, 5], rectangles [23], convex polygons [16, 19, 25], or star-shaped polygons [20]. However, these algorithms typically focus on finding a set of simple shapes that constitute a given polygon without considering additional constraints such as area sizes.
Polygon decomposition with area constraints, where subpolygons have equal area sizes, has been proposed in [28, 10, 1]. However, these approaches are limited to finding subpolygons of equal area sizes, thus restricting their use in the applications considered in this work.
A more general approach is presented in [18], where the area sizes can be defined arbitrarily. The authors, Hert and Lumelsky, propose a polynomial-time algorithm that utilizes a divide-and-conquer strategy combined with a sweep-line approach. The algorithm generates subpolygons that are guaranteed to satisfy the exact area size constraints. In comparison, the approach presented in this paper is based on a grid cell discretization of the area partitioning problem. This allows for applying standard optimization techniques to find solutions that include compactness metrics in addition to satisfying the area size constraints. Empirical comparisons between this algorithm and our approach are made in Section IV.
Several approaches that include area partitioning algorithms have been proposed in the application domain of terrain covering using robotic systems [34], [7], [2], [3]. In [34] and [7], methods for generating flight plans for multiple UAVs are proposed based on the algorithm in [18]. In [34], the compactness of subpolygons is measured, and the results show that using compact subpolygons yields more efficient area covering missions, as the plans generated for each subpolygon assigned to a UAV exhibit a lower number of turns and shorter flight paths. In [7], the motion plans generated for each UAV are optimized for flight time using a camera or laser range finder sensor model. In [2], [3], an approach for parallel region coverage using multiple UAVs is presented; however, the proposed polygon decomposition method is limited to rectilinear polygons. Although these works consider the application of terrain covering using robotic systems, the underlying methods used for polygon decomposition are either based on the algorithm presented in [18] or are limited to rectilinear polygons, and they do not include compactness metrics as the optimization objective. As such, these approaches are not directly comparable to the work presented here, with the exception of [18]; rather, the new specification of the area partitioning problem and the new algorithms presented here can be incorporated into these solutions.
I-B Contributions
The work described in this paper includes the following contributions:

- A new area partitioning problem representation using a grid cell discretization and a potential field model [21] that allows for applying standard optimization techniques. Solutions acquired using the new problem formulation trade off the accuracy of satisfying area size constraints while allowing for maximization of subpolygon compactness.

- A new algorithm for solving the area partitioning problem which combines a number of optimization algorithms with two post-processing methods. The integrated algorithms include a new heuristic-based method (the Potential Field Heuristic), the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [17], and Random Search optimization techniques.

- An empirical evaluation of the proposed algorithm on a set of randomly generated non-convex polygons, including a comparison with an existing state-of-the-art technique [18].
The structure of this paper is as follows. First, a problem definition is provided in Section II. Section III describes the proposed polygon area decomposition algorithm. The results of experimental evaluation are presented in Section IV. Finally, the conclusions with a short description of future work are presented in Section V.
II Problem Definition
Formally, the problem considered in this paper can be described as follows. Given a simple polygon P (i.e., a polygon with no self-intersections), decompose the polygon into n disjoint subpolygons P_1 to P_n. The area of each subpolygon P_i is specified by a weight w_i. The weights define the area sizes as proportions of the total area of the polygon P. Thus, the area partitioning problem is defined as follows:
(1)  P = P_1 ∪ P_2 ∪ … ∪ P_n
(2)  P_i ∩ P_j = ∅, for all i ≠ j
(3)  A(P_i) = w_i · A(P), for i = 1, …, n
(4)  ∑_{i=1}^{n} w_i = 1
Additionally, to take shape characteristics into account, the generated subpolygons should maximize the compactness score of each subpolygon. Since the subpolygons are represented as sets of polylines, each defined by its vertices (points), the state space of the problem that needs to be considered is infinite. In the approach presented in this paper, the problem representation is relaxed by discretizing the polygon into a finite set of grid cells. Using such a representation allows for redefining the infinite-domain problem as a finite-domain constraint optimization problem.
Let G be a finite set of grid cells of size s that cover the polygon P. A grid cell is defined as a tuple (p, a), where p is the center of the grid cell and a is its contributing area. The contributing area is calculated as the area of the intersection of the grid cell with the polygon P. Thus, the area values for grid cells not fully included in the polygon (cells close to the polygon edges) will be bounded by s². Formally, this is defined as:
(5)  G = {g_1, g_2, …, g_m}
(6)  g_j = (p_j, a_j)
(7)  a_j = A(cell(g_j) ∩ P)
(8)  0 ≤ a_j ≤ s²
(9)  ∑_{j=1}^{m} a_j = A(P)
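The contributing-area computation can be made concrete with a small sketch: clip a square grid cell against the polygon and measure the clipped region with the shoelace formula. This is an illustration under simplifying assumptions (a convex, counter-clockwise polygon and axis-aligned cells, clipped with the Sutherland–Hodgman algorithm), not the paper's implementation, which handles general simple polygons.

```python
def shoelace_area(pts):
    """Signed area of a polygon given as a list of (x, y) vertices."""
    a = 0.0
    n = len(pts)
    for k in range(n):
        x1, y1 = pts[k]
        x2, y2 = pts[(k + 1) % n]
        a += x1 * y2 - x2 * y1
    return a / 2.0

def clip(subject, a, b):
    """Sutherland-Hodgman step: keep the part of `subject` on the left
    of the directed line a -> b (the interior side for a CCW polygon)."""
    def side(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    out = []
    n = len(subject)
    for k in range(n):
        p, q = subject[k], subject[(k + 1) % n]
        sp, sq = side(p), side(q)
        if sp >= 0:
            out.append(p)
        if (sp >= 0) != (sq >= 0):
            # The side value is linear along p-q; interpolate the crossing.
            t = sp / (sp - sq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def cell_contributing_area(cx, cy, s, polygon):
    """Contributing area of a square grid cell (center (cx, cy), side s):
    the area of its intersection with a convex CCW polygon."""
    h = s / 2.0
    cell = [(cx - h, cy - h), (cx + h, cy - h),
            (cx + h, cy + h), (cx - h, cy + h)]
    for k in range(len(polygon)):
        cell = clip(cell, polygon[k], polygon[(k + 1) % len(polygon)])
        if not cell:
            return 0.0  # cell lies entirely outside the polygon
    return abs(shoelace_area(cell))
```

For interior cells the result is the full cell area s²; for boundary cells it is strictly smaller, matching the bound stated above.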
An illustrative example of grid cell decomposition is presented in Fig. 3 (left), where the original polygon has been decomposed into a set of 127 grid cells. The original problem of polygon area partitioning can then be represented as finding n disjoint subsets of G subject to the area constraints. A potential field model is used to compute an assignment of cells to each subset of G, where each subpolygon is represented by an attractive potential defined by a circle with center position c_i and radius r_i. The radius corresponds to the attractive force value of the potential. Let S_i represent a subpolygon partition, defined as a tuple (G_i, c_i, r_i), where G_i is the set of grid cells that belong to the partition, and c_i and r_i define the attractive force associated with it. An example of four subpolygon partitions for a simple polygon is presented in Fig. 3 (right, using color-coding for each partition).
Allocation of grid cells to a particular subpolygon partition is defined by the following function:
(10) 
where j is a grid cell index, d(c_i, p_j) is the Euclidean distance between the potential field center c_i and the grid cell center p_j, and r_i is the potential field radius. Thus, using this representation, a subpolygon is defined as the union of the grid cells that have been assigned to its partition:
(11)  
(12) 
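The cell assignment can be sketched as follows. The exact rule of eq. (10) is not reproduced in this extraction; a plausible variant, used here for illustration only, assigns each cell to the field minimizing the distance to its center offset by the attraction radius, so larger radii capture more cells.

```python
import math

def assign_cells(cells, fields):
    """Assign each grid cell center to one attractive potential field.

    cells  : list of (x, y) cell centers
    fields : list of ((cx, cy), r) potential field circles
    Returns one list of cell indices per field.  The scoring rule
    d(cell, center) - r is our assumption, not the paper's eq. (10).
    """
    partitions = [[] for _ in fields]
    for j, (x, y) in enumerate(cells):
        best = min(
            range(len(fields)),
            key=lambda i: math.hypot(x - fields[i][0][0],
                                     y - fields[i][0][1]) - fields[i][1])
        partitions[best].append(j)
    return partitions
```

With this rule, growing a field's radius monotonically enlarges the region of cells it attracts, which is the behavior the heuristic search described later relies on.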
Given the discretized representation of the original area partitioning problem, the following bi-objective function is used to define it as an optimization problem that captures both the area size constraints and the shape characteristics:
(13)  
(14)  
(15) 
where the optimization parameters are the tuples (c_i, r_i) of all partitions, and the second term is a compactness score for each subpolygon P_i. The first term is a function of the area size error for each subpolygon, defined as:
(16) 
Minimizing the first objective ensures that each subpolygon meets its target area size. Maximizing the second objective ensures that subpolygon compactness is maximized. Intuitively, at the optimum, the set of subpolygons meets the exact area size requirements defined by the weights, and each subpolygon's compactness score is equal to 1.
The optimization problem for solving the area partitioning problem using the discretized representation is defined as:
(17a)  
s.t.  (17b)  
(17c)  
(17d)  
(17e) 
where each subpartition is the result of the cell assignment according to eq. (11). The constraints defined in the optimization problem ensure that: (1) the subpolygons are disjoint (eq. 17c); (2) the subpolygons cover the whole area defined by the original polygon (eq. 17b); and (3) the weights defined for the area sizes are a proportion of the total area of the polygon (eq. 17e). Satisfying the area size constraints exactly for each subpartition is not possible, since the proposed representation uses a set of grid cells of finite size. Let ε be an upper bound on the area size error of each partition. The constraint that enforces the bounds on the area size errors is defined by eq. 17d.
In order to apply general optimization methods for solving the polygon decomposition problem defined above, the objective function including penalties for constraint violations is defined as follows:
(18)  
(19) 
where the penalty coefficient is chosen empirically.
III Polygon Area Decomposition Algorithm
In this section, a description of the proposed algorithmic framework is presented. Fig. 4 provides an overview of the AreaDecompose algorithm. Given a polygon description and a set of weights, the algorithm starts by dividing the polygon into a set of grid cells of a fixed size (Step 1). In Step 2, one of five alternative optimization methods is applied to assign grid cells to each subpolygon. The first alternative, the Potential Field Heuristics (PFH) method, uses a heuristic-based approach for finding optimal values of the attractive force fields for each subpolygon. The second and third alternatives apply generic optimization techniques, namely the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and Random Search (RS). The final two alternatives in the optimization step combine the heuristic-based PFH algorithm with the generic CMA-ES and RS approaches. In this case, the PFH provides an initial solution that is then refined by applying the CMA-ES or RS optimization techniques. The final step of the AreaDecompose algorithm (Step 3) includes two post-processing procedures. The first procedure deals with the problem of removing potential disconnected cells causing invalid subpolygon borders (Section III-D, Fig. 5(a)) in the generated solutions. Additionally, a heuristic-based procedure is applied that tries to improve the area size errors of each partition. Subpolygons are rebalanced by exchanging grid cells, when possible, to meet the area size constraints (Section III-D, Fig. 5(b)). The second post-processing step applies a polyline simplification algorithm to smooth out the subpolygon borders that define the final subpolygons (Section III-E, Fig. 6).
The AreaDecompose algorithm is presented in Algorithm 1. The input to the algorithm is the polygon P and a set of weights W defining the relative area sizes of the generated subpolygons as proportions of the total area of P. The number of subpolygons that the algorithm calculates is equal to the number of weights. Other parameters used during the calculations are the minimum tolerance value for the error on the area size of each subpolygon, and a set of flags defining which optimization techniques to apply.
The algorithm starts by decomposing the polygon P into a set of grid cells (see Fig. 3, Lines 1–2). The grid cell size used for the decomposition is calculated based on the requested upper bound on the area size error and the smallest partition size defined by the weights in W as follows:
(20) 
This grid cell size selection criterion ensures that the area error requirements can be met.
Next, the partitions are initialized. The list of grid cells assigned to each partition is initialized as empty (Line 4). The initial centers of the attractive potential fields for the partitions are calculated as equidistant points on the perimeter of the polygon P (Line 5). The radius of each potential field circle is initialized so that the area of the circle is equal to the expected area of the subpolygon defined by its weight w_i:
(21)  r_i = √(w_i · A(P) / π)
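This initialization can be written directly from the stated condition π·r² = w_i·A(P). A minimal sketch (the function name is ours):

```python
import math

def init_radius(weight: float, total_area: float) -> float:
    """Initial attraction radius: the circle of radius r has area
    equal to the target subpolygon area weight * total_area."""
    return math.sqrt(weight * total_area / math.pi)
```
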
Depending on the algorithm configuration flags, the algorithm executes a combination of methods that include the heuristic-based PFH algorithm, or optimization methods based on CMA-ES or Random Search (Lines 7–10). These are the main methods that search for potential field configurations that yield minimal area errors and maximize the compactness of each partition, as defined by the optimization problem in eqs. 17a–17e.
After the main part of the algorithm is executed, the resulting configurations of subpolygons are further optimized by a post-processing step that tries to improve the cell assignments (Line 11). The rebalancePartitions function considers two factors. On rare occasions, the disjoint sets of grid cells generated by the algorithms can result in non-continuous partition borders (Fig. 5(a)). Additionally, the cell assignment is checked to determine whether the area size errors can be further improved by switching grid cells between neighboring partitions (Fig. 5(b)).
Next, the resulting partitions are transformed into a set of polylines describing each subpolygon (Line 12). The final step of the algorithm (Line 13) applies the simplifyBorders function, which simplifies the zigzag line segments of the polylines resulting from the grid cell decomposition.
Details describing the main optimization and postprocessing methods are presented in the remainder of this section.
III-A Potential Field Heuristics
The Potential Field Heuristics (PFH) is an iterative method that searches for the optimization parameters (c_i, r_i) associated with each subpolygon partition using a simple heuristic. The algorithm is presented in Algorithm 2. The constant parameters used by the algorithm are the lower bound on the area size errors of the subpolygons and the maximum number of iterations.
The general idea behind the PFH algorithm is to use a heuristic function to update the radii of the potential field circles associated with the subpolygon partitions. These updates directly influence the grid cell assignments for each partition (eq. 10), thus changing the partition configurations. The centers of the potential field circles are calculated as partition centroids based on the updated grid cell assignments.
The execution of the algorithm starts with the initialization of two auxiliary variables (Lines 1–2). The first is a boolean flag used to determine whether the termination condition of the algorithm has been satisfied. The second is a simple iteration counter used in the main loop of the algorithm, which is executed in Lines 3–12. Conceptually, the main loop consists of four steps. The first step is the heuristic-based update of the radii (Lines 4–5). The second step includes the updates of the subpolygon partitions (Lines 6–8). In the next step, the centers of the potential field circles are calculated (Line 9). Finally, two termination conditions are evaluated (Lines 11–12). The first condition checks whether the area sizes of the resulting subpolygon partitions are within the specified error bound. The second condition checks whether the algorithm has reached its maximum number of iterations.
The heuristic used for updating the radius associated with each potential field circle is defined by the following equations:
(22)  
(23)  
(24) 
where the linear dampening factor depends on the maximum iteration parameter and the iteration counter of the algorithm. Thus, in the initial iterations of the execution, the updates to the radius are more aggressive, taking bigger steps, while towards the end of the execution the updates become more refined.
Centers of the attractive potential field force for each partition are calculated as partition centroids:
(25) 
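The update loop described above can be sketched as follows. The exact heuristic of eqs. (22)–(24) is not reproduced here; the sketch only illustrates the described behavior under our own assumptions: radii of undersized partitions grow, oversized ones shrink, with a step scaled by a linear dampening factor that decays with the iteration counter, and centers are recomputed as partition centroids (eq. 25).

```python
def pfh_radius_step(areas, targets, radii, t, t_max, gain=0.1):
    """One radius update (a sketch, not the paper's eqs. 22-24).

    areas   : current area of each partition
    targets : target area of each partition (w_i * A(P))
    radii   : current attraction radii
    t, t_max: iteration counter and maximum iteration count
    """
    damp = (t_max - t) / t_max          # linear dampening factor
    new_radii = []
    for area, target, r in zip(areas, targets, radii):
        err = (target - area) / target  # relative area error
        new_radii.append(max(r * (1.0 + gain * damp * err), 1e-9))
    return new_radii

def centroid(points):
    """Potential field center = centroid of the cell centers
    assigned to the partition."""
    n = len(points)
    return (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n)
```

The `gain` parameter is our own knob; with the dampening factor, early updates change the radii the most and later updates refine them, matching the behavior described in the text.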
III-B CMA-ES Optimization
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [17] is an evolutionary optimization technique that has gained popularity in recent years. It has been successfully applied to many problems where the gradient of the objective function cannot be computed directly and where numerical evaluation is expensive. Additionally, CMA-ES includes methods to mitigate local minima problems. The strategy used in the algorithm can estimate the correlation between the optimization parameters and can detect when the parameters are independent. CMA-ES searches for parameters among a set of possible parameters minimizing the value of the objective function without any specific knowledge of, or restrictions on, the function itself. For example, the objective can be non-convex, multimodal, non-smooth, discontinuous, noisy, or ill-conditioned.

For the area partitioning problem, CMA-ES is used to minimize the bi-objective function defined in eq. (18), where each partition is parameterized by a tuple (c_i, r_i). The algorithm searches for a set of potential field centers and their radii that minimizes the subpolygon area size errors and maximizes the subpolygon compactness. During each optimization step, CMA-ES selects a set of parameters which defines the grid cell assignment (eq. 10, eq. 12) used to evaluate the objective function.
III-C Random Search Optimization
A simple alternative to the CMA-ES optimization proposed in this paper uses a stochastic random search approach. In this case, the optimization parameters and the objective function are the same as in the case of CMA-ES. The algorithm generates a number of random configurations of the parameters, evaluates them, and returns the best one.
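Random Search is simple enough to sketch fully. In the paper, the parameter vector would hold the potential field centers and radii and the objective would be the penalized function of eq. (18); here a quadratic stand-in objective is used, and the function signature is our own.

```python
import random

def random_search(objective, dim, bounds, n_samples=500, seed=0):
    """Sample candidate parameter vectors uniformly inside `bounds`
    and keep the one with the lowest objective value."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(bounds[0], bounds[1]) for _ in range(dim)]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

Despite its simplicity, this baseline is useful in the paper's setting as a refinement stage: when seeded near a PFH solution, even blind sampling can improve the objective.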
III-D Rebalancing Partitions
The subpolygon partitions generated by the main optimization methods are further improved by applying the RebalancePartitions post-processing step, which includes two procedures.
The focus of the first procedure is to deal with non-continuous partition boundaries that may result from the grid cell assignment. A subpolygon partition boundary is defined by the vertices shared with the original polygon and the edges of its grid cells that neighbor other subpolygon partitions. Sporadically, the assignment of grid cells to subpolygon partitions based on eq. 10 may create divisions that result in non-continuous partition boundaries. Transforming such boundaries directly into the polyline representation used by the final subpolygons would create self-intersecting polygons. The concept of disconnected grid cells can be used to identify non-continuous partition boundaries. A grid cell assigned to a partition is considered to be disconnected if all of its neighboring cells (top, bottom, left, right) are assigned to partitions other than the partition in question.
(26) 
where the terms denote the top, bottom, left, and right neighboring cells of the cell in question. An example of a disconnected cell is presented in Fig. 5(a) (left), where one cell has only neighbors that belong to other partitions.
The procedure used to remedy the problem of non-continuous partition boundaries is defined as follows: for each disconnected grid cell, select a new subpolygon partition until all grid cells are connected to their respective partitions. The selection of a new partition is based on the original assignment equation (eq. 10), where the second-best partition that is also a neighbor of the cell in question is considered. Fig. 5(a) shows a simple example of the reassignment. In this case, the disconnected cell can be assigned to either of its two neighboring partitions, and applying the assignment equation (eq. 10) selects one of them.
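The disconnected-cell test of eq. (26) can be sketched on an integer grid. The dictionary representation of cell labels is our own; cells outside the polygon are simply absent from the dictionary.

```python
def disconnected_cells(labels):
    """Find cells whose present four-neighbours (top, bottom, left,
    right) all belong to a different partition than the cell itself.

    labels: dict mapping (ix, iy) grid coordinates to a partition id.
    """
    bad = []
    for (ix, iy), lab in labels.items():
        neigh = [labels.get(p) for p in
                 ((ix, iy + 1), (ix, iy - 1), (ix - 1, iy), (ix + 1, iy))]
        present = [l for l in neigh if l is not None]
        if present and all(l != lab for l in present):
            bad.append((ix, iy))
    return bad
```

Each flagged cell would then be reassigned to its second-best neighboring partition, as described above.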
The second procedure included in the RebalancePartitions post-processing step uses simple heuristics to improve the area size errors of the subpolygon partitions. For example, suppose a partition has received too many grid cells during the optimization. In that case, the redundant cells can potentially be reassigned to a neighbor partition with an insufficient number of grid cells. Thus, the area size errors of both partitions are minimized.
In order to ascertain whether a partition has been assigned too many or too few grid cells, an approximate expected number of cells is calculated according to the following equation:
(27)  n_i = (w_i · A(P)) / s²
(28)  δ_i = |G_i| − n_i
where the difference corresponds to the number of cells that are redundant or absent in the partition.
The post-processing procedure starts by selecting the partition with the largest area among those that have been assigned a redundant number of cells. All neighbor partitions of the selected partition are evaluated according to eq. 28, and the partition that is missing the most cells is selected to receive a grid cell from the donor. The process of selecting the recipient partition is defined by the following equation:
(29) 
The candidate cell selected for reassignment is the cell that is furthest from the partition center. The procedure is repeated until no further cell reassignments can be made.
An illustrative example of the procedure is shown in Fig. 5(b). One partition has the largest area and has been assigned too many cells, while its neighbor partition has an insufficient number of grid cells. The procedure reassigns one grid cell from the former to the latter.
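One move of this rebalancing can be sketched as follows. This is a simplification under our own assumptions: the neighborhood check between donor and recipient partitions is omitted, and cells are represented by their centers.

```python
import math

def rebalance_once(parts, centers, expected):
    """One rebalancing move between a surplus and a deficit partition.

    parts    : list of cell-center lists, one per partition (mutated)
    centers  : partition center (cx, cy) per partition
    expected : expected number of cells per partition (eq. 27)
    Returns True if a cell was moved, False when balanced.
    """
    surplus = [len(p) - e for p, e in zip(parts, expected)]
    donor = max(range(len(parts)), key=lambda i: surplus[i])
    recv = min(range(len(parts)), key=lambda i: surplus[i])
    if surplus[donor] <= 0 or surplus[recv] >= 0:
        return False                 # nothing to improve
    cx, cy = centers[donor]
    # Donate the donor's cell furthest from its own center.
    cell = max(parts[donor], key=lambda c: math.hypot(c[0] - cx, c[1] - cy))
    parts[donor].remove(cell)
    parts[recv].append(cell)
    return True
```

Repeating the move until it returns False mirrors the "repeat until no further cell reassignments can be made" loop described above.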
III-E Polyline Simplification
The borders between the subpolygons generated by the grid cell assignment follow the edges of the cells. Because of that, the polylines defining the borders form a zigzag pattern and consist of an excessive number of segments. The final post-processing step, simplifyBorders, used in the AreaDecompose algorithm aims at replacing these borders with k-point polylines where the value of k is minimal. Fig. 6 depicts an example set of subpolygons with their borders resulting from the grid cell assignment (top left) and their final borders after the simplification (top right). In this example, four borders have been simplified with 3-point polylines, depicted at the bottom of the figure.
The simplifyBorders procedure first identifies the borders that need to be simplified. The grid corners shared between at least three partitions are considered to be fixpoints, together with any points on the boundary of the polygon P. An example of a set of fixpoints computed for a sample polygon is presented in Fig. 6 (black circles). The polylines considered for simplification include all borders between the fixpoints that are not part of the boundary of P (also marked in the figure). The simplifyBorders procedure applies the k-point polyline simplification algorithm to these borders.
The simplification algorithm applied to each border should fulfill two key requirements. First, the resulting polyline should have a minimal influence on the compactness score of each subpolygon (the compactness requirement). Second, the effect of the border simplification on the area size of each subpolygon should be minimized (the area requirement).
In [9], the authors proved that the problem of approximating a polyline by a k-point polyline that fulfills the compactness and area requirements is NP-hard. Additionally, a solution limited to the case of an x-monotone path respecting the area requirement was proposed. Most polyline simplification algorithms, such as variants of the Ramer–Douglas–Peucker algorithm [30][12], address the compactness requirement by ensuring that the distance between the simplified polyline and the original polyline is within a threshold. Thus, the approximated polyline has a shape similar to the original polyline and, as a consequence, the compactness of the polygons defined by either the approximated or the original polylines remains similar. In this work, the proposed k-point simplification algorithm adopts this idea. The exact relation between the distance threshold values and the compactness score is left for future work.
The proposed method incrementally tries to find a k-point polyline that fulfills both the area and compactness requirements with an increasing number of points, starting with a single intermediate point. For each value of k, a gradient descent [32] method is used to find the positions of the intermediate points in the new k-point polyline, where the objective is to satisfy the area requirement. The objective function used in the optimization is based on Green's theorem [15], which allows for a calculation of the areas enclosed by the original border and its k-point simplifying polyline. After the optimization, the algorithm checks whether the maximum distance between the new k-point polyline and the original border is within a given threshold, that is, whether the compactness requirement is satisfied. If the distance is larger than the threshold, the value of k is incremented, and the procedure is repeated.
Before describing the simplifyBorders algorithm in detail, the following notation is introduced. Let B denote the input polyline defined by a set of points. A k-point polyline that simplifies B is denoted by Q. Note that the first and last points of Q are the first and last points of B, respectively. Thus, the goal of the algorithm is to find the positions of the intermediate points to complete the polyline. An example of a single polyline simplification is shown in Fig. 7. The original polyline consists of 13 points and is simplified by a 3-point polyline. The area requirement is satisfied as the areas enclosed on the two sides of the simplifying polyline are equal (i.e., the sum of the yellow areas and the sum of the green areas are equal). The maximum distance between the points of the border B and the simplifying polyline Q is below a given threshold, satisfying the compactness requirement.
The SimplifyBorders algorithm is presented in Algorithm 3. The input to the algorithm includes the original polygon P, the set of subpolygons, and the grid cell size (eq. 20) used as a threshold for satisfying the compactness requirement. An additional parameter defines the maximum number of iterations of the gradient descent algorithm used to find the intermediate point positions of the simplifying k-point polyline Q.
The SimplifyBorders algorithm starts by identifying the set of borders that need to be simplified (Line 1). For each border, a k-point simplifying polyline Q is calculated in Lines 2–13. The number of intermediate points in Q is initialized to 1 (Line 4). The main optimization loop for finding the positions of the intermediate points of Q is executed in Lines 5–13.
The optimization loop starts with the initialization of the intermediate points (Line 6). The initialization function returns a list of points equally spaced along the original polyline B. The initial k-point polyline is constructed by combining the start and end points of B with the set of intermediate points (Line 7).
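The initialization step can be sketched as arc-length-equidistant sampling along the original border. This is our guess at what the paper's point-initialization function computes; the exact implementation is not given.

```python
import math

def init_points(polyline, k):
    """Return k intermediate points equally spaced (by arc length)
    along `polyline`, excluding its endpoints."""
    seg = [math.hypot(q[0] - p[0], q[1] - p[1])
           for p, q in zip(polyline, polyline[1:])]
    total = sum(seg)
    points = []
    for j in range(1, k + 1):
        target = total * j / (k + 1)   # arc length of the j-th point
        acc = 0.0
        for (p, q), length in zip(zip(polyline, polyline[1:]), seg):
            if acc + length >= target:
                t = (target - acc) / length
                points.append((p[0] + t * (q[0] - p[0]),
                               p[1] + t * (q[1] - p[1])))
                break
            acc += length
    return points
```

Spreading the initial points evenly gives the subsequent gradient descent a reasonable, symmetric starting configuration.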
The gradient descent optimization method (Line 8) is used to calculate the positions of the intermediate points of Q. For a given set of points, the method solves the following optimization problem:
(30) 
where the function is based on Green's theorem [15] and allows for the calculation of the areas enclosed by B and Q. If the sums of the areas on each side of Q enclosed by B are equal, then the function evaluates to zero. The Green formula is defined as:
(31)  f_G(B, Q) = ½ ∑_k (x_k · y_{k+1} − x_{k+1} · y_k)

where (x_k, y_k) are the vertices of the closed loop formed by traversing B forward and Q backward, with indices taken modulo the length of the loop.
Minimizing the value of the Green formula when choosing the positions of the intermediate points of Q ensures that the area requirement is satisfied.
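A discrete version of this objective can be sketched as follows: close the loop by walking the original border forward and the simplifying polyline backward, and take the absolute shoelace area of that loop. This is an illustration; the paper's exact formula in eq. (31) may differ in form.

```python
def signed_area(loop):
    """Signed area of a closed loop via the shoelace formula,
    a discrete form of Green's theorem."""
    a = 0.0
    n = len(loop)
    for k in range(n):
        x1, y1 = loop[k]
        x2, y2 = loop[(k + 1) % n]
        a += x1 * y2 - x2 * y1
    return a / 2.0

def green_objective(border, simplified):
    """|net area| between a border and its simplification.  The two
    polylines share endpoints, so traversing the border forward and
    the interior points of the simplification backward closes the
    loop.  A value of 0 means the areas on both sides balance."""
    loop = list(border) + list(reversed(simplified[1:-1]))
    return abs(signed_area(loop))
```

Note that areas on opposite sides of the simplifying polyline enter the signed sum with opposite signs, which is exactly why a zero objective means the yellow and green areas of Fig. 7 cancel.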
The resulting configuration of the $k$-point polyline is evaluated using the distance measure $d_{max}$ to check if it satisfies the compactness requirement (Lines 9–13). The function computes the maximum distance between the points of $B$ and $L_k$ according to:
d_{max}(B, L_k) = \max_{b_i \in B} \; \min_{q \in L_k} \lVert b_i - q \rVert \qquad (32)
An example result of the $d_{max}$ calculation for a sample polyline is depicted in Fig. 7. If the value of $d_{max}$ is larger than the threshold defined by the grid-cell size (Eq. 20), the compactness requirement is not satisfied and the optimization loop (Lines 5–13) is repeated with an increased number of intermediate points. Otherwise, the newly calculated $k$-point polyline meets both the area and compactness requirements, the main optimization loop is terminated (Line 13), and the algorithm continues with the remaining borders.
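The distance check amounts to a max-of-min computation over the border vertices and the segments of the simplifying polyline. A minimal Python sketch (function names are assumptions for illustration):

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0.0:  # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Projection parameter clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def max_distance(border, poly_k):
    """Max over border vertices of the distance to the simplifying polyline."""
    return max(
        min(point_segment_dist(p, a, b) for a, b in zip(poly_k, poly_k[1:]))
        for p in border
    )
```

If this value exceeds the grid-cell size, the loop restarts with one more intermediate point.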
Finally, once the process of finding simplifying $k$-point polylines for all borders is finished, the algorithm assigns the newly calculated polylines to the respective subpolygons (Line 14).
IV Empirical Evaluation
The proposed AreaDecompose algorithm has been evaluated in a series of runs with different configurations applied to a set of randomly generated polygons. The generated set consists of 200 non-convex polygons (available at https://gitlab.liu.se/lrs/pad_polygons).
Several compactness scores exist in the literature that can be used in the optimization phase of the AreaDecompose algorithm. For the experimental evaluation, a set of the five most commonly used compactness metrics has been chosen. The first metric is the Schwartzberg score [33], computed as the ratio between the circumference of a circle whose area equals that of the input polygon and the perimeter of the polygon:
C_{Sch}(P) = \frac{2\sqrt{\pi \, area(P)}}{perim(P)} \qquad (33)
where $area(P)$ and $perim(P)$ are the area and the perimeter of the polygon $P$, respectively.
The second metric is the Polsby–Popper score [29], which is calculated as the ratio of the polygon area to the area of a circle whose circumference is equal to the perimeter of the polygon:
C_{PP}(P) = \frac{4\pi \, area(P)}{perim(P)^2} \qquad (34)
The third metric is the Reock degree of compactness [31]. It is computed as the ratio between the polygon area and the area of the smallest circle which encloses it:
C_{R}(P) = \frac{area(P)}{area(MEC(P))} \qquad (35)
where $MEC(P)$ denotes the minimum enclosing circle of $P$.
The fourth compactness metric is the two-balls score derived from the definition of the fat property [13]. The score is computed as the ratio between the circumference of the maximum inscribed circle in the polygon and the circumference of the minimum enclosing circle of the polygon:
C_{TB}(P) = \frac{c_{in}(P)}{c_{enc}(P)} \qquad (36)
where $c_{in}(P)$ and $c_{enc}(P)$ are the circumferences of the inscribed and enclosing circles, respectively.
The final metric used in the evaluation is the length-to-width ratio of the rotated minimum bounding rectangle (RMBR) of the polygon:
C_{RMBR}(P) = \frac{w}{l} \qquad (37)
where $w$ and $l$ are the width and the length of the RMBR, respectively, with $w \le l$.
The values for all compactness scores range from 0 to 1, where values closer to 1 indicate more compact polygons. The Schwartzberg score was used in the objective function (eq. 15) whenever the optimization methods in the AreaDecompose algorithm were utilized. Other compactness scores listed above were used for the evaluation of results. A Collective compactness score was calculated as an average of all scores for each generated result.
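The circle-based scores above reduce to short closed-form expressions once the relevant geometric quantities (area, perimeter, inscribed/enclosing circle measures, RMBR dimensions) are known; computing those quantities themselves is out of scope here. A minimal Python sketch with assumed function names, using a unit square as the worked example:

```python
import math

def schwartzberg(area, perim):
    """Circumference of the equal-area circle divided by the polygon perimeter."""
    return 2.0 * math.sqrt(math.pi * area) / perim

def polsby_popper(area, perim):
    """Polygon area over the area of a circle with the same perimeter."""
    return 4.0 * math.pi * area / perim ** 2

def reock(area, enclosing_circle_area):
    """Polygon area over the area of its minimum enclosing circle."""
    return area / enclosing_circle_area

def two_balls(c_inscribed, c_enclosing):
    """Circumference of the max inscribed circle over that of the min enclosing circle."""
    return c_inscribed / c_enclosing

def rmbr_ratio(width, length):
    """Width-to-length ratio of the rotated minimum bounding rectangle."""
    return min(width, length) / max(width, length)
```

For a unit square (area 1, perimeter 4, inscribed radius 0.5, enclosing radius $\sqrt{2}/2$), the scores come out to roughly 0.886, 0.785, 0.637, 0.707, and 1.0, respectively, all within the stated $[0, 1]$ range.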
The evaluation of the algorithms was performed on a set of randomly generated polygons. The problem parameters used for different runs are summarized in Table I. The algorithms were applied to five different problems (categorized as Cases 1–5), varying the number of subpolygons and their sizes. In the simplest case (Case 1), the polygons were divided into two equally sized subpolygons. The most complex case (Case 5) involved dividing polygons into a randomly selected number of subpolygons (up to 9) with their sizes selected randomly. Cases 1–4 were applied once on the polygon dataset, while Case 5 was applied twice.
Case# | #Polygons | #Partitions | Weights
1 | 200 | 2 | 0.5, 0.5
2 | 200 | 3 | 0.166, 0.333, 0.5
3 | 200 | 4 | 0.1, 0.2, 0.4, 0.5
4 | 200 | 5 | 0.2, 0.2, 0.2, 0.2, 0.2
5 | 400 | RND (2,9) | RND
Five configurations of the AreaDecompose algorithm were used. The first three configurations involved running a single optimization algorithm: either the heuristic-based PFH algorithm (Section III-A), the CMA-ES (Section III-B), or the Random Search (RS, Section III-C). The remaining two configurations combined the heuristic-based PFH with one of the other optimization methods (i.e. CMA-ES or RS). The results generated by the heuristic-based algorithm were used as initial solutions for the optimization methods. Additionally, three different tolerance values for area size errors were used (1%, 5%, and 10%).
Subpolygons generated by the AreaDecompose algorithm when solving Cases 1–5 were compared to the polygons generated by the Hert & Lumelsky algorithm [18] using the same input parameters. The experiments were run on a computer equipped with an AMD Ryzen 9 3900XT 12-core processor, utilizing a single core. The Hert & Lumelsky algorithm was implemented in Python, while AreaDecompose was implemented in C++.
The results of the experimental evaluations for different algorithm configurations and tolerance values were computed as average values over all cases considering three metrics: the quality of the solutions, their accuracy, and the efficiency of the algorithm. The quality of the solutions, measured as the Collective compactness scores of the generated subpolygons, is presented in Fig. 8. The subpolygon area errors, representing the accuracy of the solutions, are depicted in Fig. 9. Finally, the algorithm efficiency, measured as execution time, is depicted in Fig. 10.
The Hert & Lumelsky algorithm generated subpolygons with the lowest compactness score (0.38) among all the algorithms considered (Fig. 8). In comparison, the AreaDecompose algorithm with its five different configurations provided solutions with subpolygon compactness scores between 0.58 and 0.66. The worst performing configurations were those running only the CMA-ES or RS optimization methods. These methods do not perform well when searching for an optimal solution given the initial parameters of the potential field configurations, and most likely converge to suboptimal solutions at local minima. Additionally, for these configurations the three different tolerance values for area size errors did not influence the achieved compactness scores. The heuristic-based PFH and its combinations with CMA-ES and RS generated results with the highest compactness scores, ranging between 0.61 and 0.66. In these configurations, lower tolerance values led to solutions with higher compactness scores. This is expected, as lower tolerance values result in a higher number of grid cells being used, thus allowing the algorithm to find solutions with more compact subpolygons.
The average area size error for solutions generated by the Hert & Lumelsky algorithm was recorded at 3.7e-4% (Fig. 9). This is as expected, since the algorithm guarantees that the area sizes of the subpolygons are exact. In the case of the AreaDecompose algorithm, the area size errors are bounded by the tolerance parameter. All configurations of the algorithm except the RS generated solutions with an error below the tolerance value. The most accurate solutions were generated by the PFH-CMA-ES configuration, with errors as low as 0.23% for the 1% tolerance setting.
The AreaDecompose algorithm efficiency, measured as execution time, varied depending on the chosen configuration and the area size error tolerance parameter. This is as expected, since the execution times depend on the number of grid cells used while searching for a solution. Overall, the algorithm execution times varied between 4.1 ms (PFH, 10% tolerance) and 535 ms (RS). Execution times for the Hert & Lumelsky algorithm were measured at 9.2 ms, which is higher than for the PFH run with 10% (4.1 ms) and 5% (7.3 ms) tolerances. Other AreaDecompose configurations resulted in higher execution times, especially those where the heuristic-based PFH was combined with the CMA-ES or RS techniques.
Overall, the PFH and the PFH combined with CMA-ES were the two best AreaDecompose configurations when considering all three measured aspects. Results comparing these two configurations to the Hert & Lumelsky algorithm are presented in Table II. For the 1% area size error tolerance, the PFH and its combination with CMA-ES generated solutions with a higher collective compactness score in comparison to the Hert & Lumelsky algorithm. The PFH-CMA-ES configuration provided solutions with a significantly lower area size error (0.23%) when compared to the PFH configuration (0.48%), at the cost of a higher computational time (168 ms vs 31 ms).
An interesting dependency between the number of subpolygons requested in the problem and the compactness scores has been observed. This is especially visible in problems where the original polygon is non-convex, such as the polygons included in the evaluation dataset. Dividing a non-convex polygon into a small number of partitions (e.g. two) is restricted by the original polygon borders. More compact subpolygons can be generated when the problem involves dividing the polygon into a larger number of subpolygons. Fig. 11 presents the average collective compactness scores as a function of the number of subpolygons for different algorithm configurations. Interestingly, the Hert & Lumelsky algorithm generally generates less compact subpolygons as the number of subpartitions grows. This is attributed to the divide-and-conquer technique used in the algorithm, and the results obtained in our evaluation confirm the findings discussed in [18]. In the case of the AreaDecompose algorithm, this trend is reversed: the higher the number of subpolygons requested in the problem, the higher their compactness. The PFH-CMA-ES configuration of the AreaDecompose algorithm generated subpolygons with compactness scores between 0.55 (2 subpolygons) and 0.71 (9 subpolygons). The Hert & Lumelsky algorithm found solutions with compactness scores between 0.27 (9 subpolygons) and 0.49 (2 subpolygons). While the PFH-CMA-ES outperforms the Hert & Lumelsky algorithm when considering subpolygon compactness in general, the difference becomes much more significant as the number of subpolygons requested in the problem increases. In the extreme case of 9 subpolygons, the PFH-CMA-ES generated subpolygons that are 258% more compact than the results provided by the Hert & Lumelsky algorithm.
Algorithm | Tolerance | Collective Score | Area Error [%] | Time [ms]
PFH-CMA-ES | 1% | 0.66 | 0.23 | 168.45
PFH-CMA-ES | 5% | 0.65 | 1.03 | 30.02
PFH-CMA-ES | 10% | 0.63 | 2.29 | 15.48
PFH | 1% | 0.66 | 0.48 | 31.02
PFH | 5% | 0.64 | 2.14 | 7.26
PFH | 10% | 0.63 | 4.24 | 4.11
Hert & Lumelsky | NA | 0.38 | 0 | 9.24
IV-A Other Application Examples
The proposed problem representation that uses a grid-cell discretization and a potential field model can be utilized for solving other classes of problems than area partitioning. One such class includes GIS-related problems where spatial data is represented as rasters. A raster consists of a two-dimensional matrix of cells (or pixels), where each cell contains a value representing information. Rasters are commonly used in GIS systems to represent spatial data due to their simple format and the ability to visualize them in the form of an image or a map. An example is a population density map, where each pixel of the image contains the population size [22]. The grid-based representation used as the basis of AreaDecompose allows for a straightforward association between the value contained in a raster and a cell of the grid used in the process of partitioning. By calculating the population of each grid cell based on the population density map, the AreaDecompose algorithm can be used to solve the problem of dividing a polygon based on population density instead of the total surface of each partition. In that case, the area measure introduced in Section III would not refer to the total surface but instead to the total population living in each subregion.
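As a minimal illustration of this idea (the function names and the list-of-lists raster layout are assumptions, not the paper's implementation), per-district population targets follow directly from summing the raster cell values:

```python
def partition_targets(raster, shares):
    """Population each district should receive, given per-district shares.

    `raster` is a 2-D list of per-cell population counts (one value per
    grid cell); `shares` are the requested partition weights summing to 1.
    With equal shares this reproduces the equal-population requirement
    used in the redistricting example.
    """
    total = sum(sum(row) for row in raster)
    return [s * total for s in shares]
```

The partitioner then grows each district over the grid until its accumulated cell populations reach the corresponding target, exactly as area-weighted cells are accumulated in the surface-based formulation.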
To demonstrate this application, the AreaDecompose algorithm is applied to the following redistricting problem: given the population density map of the state of New York, calculate four districts with equal populations. The input population density map and the districts generated by the AreaDecompose algorithm are shown in Fig. 12. Table III presents statistics for the resulting subregions, including the population error and three compactness scores. Each generated subregion, with the exception of subregion 2, maintains high compactness scores. In the case of subregion 2, the partitioning is constrained by the border of the state. Based on the input population density data, each district should ideally include 5.053 million inhabitants. Districts generated by the AreaDecompose algorithm have a population error of at most 2.77% while maintaining high compactness scores.
Subregion | Population [M] | Pop. Error [%] | Surface [km²] | Schwartzberg | Reock | Polsby-Popper
1 | 5.114 | 1.19 | 84,052 | 0.79 | 0.56 | 0.62
2 | 5.078 | 0.49 | 7,715 | 0.56 | 0.27 | 0.31
3 | 5.104 | 1.00 | 2,694 | 0.62 | 0.35 | 0.39
4 | 4.917 | 2.77 | 46,849 | 0.79 | 0.51 | 0.62
V Conclusion and Future Work
In this paper, we have considered the problem of area partitioning, which deals with the division of non-convex polygons into a set of non-overlapping subpolygons with defined area sizes.
The area partitioning problem arises in terrain covering applications in collaborative robotics, where the goal is to find a set of efficient plans for a team of heterogeneous robots to cover a given area. In this application context, the shape characteristics of the subregions generated by solving the area partitioning problem can improve the efficiency with which robotic tasks are performed within each subregion. In fact, maximizing the compactness of subpolygons directly influences the optimality of any generated motion plan. The state-of-the-art approach used in this application, proposed by Hert & Lumelsky in [18], provides an algorithm that guarantees the satisfaction of area size constraints but does not consider subregion shape constraints. The algorithms presented in this paper address both the size and shape constraints while solving the area partitioning problem.
In this work, an extended formulation of the problem has been proposed that, in addition to respecting area size constraints, aims at maximizing the compactness scores of the generated subpolygons. The proposed problem formulation is based on a grid-cell discretization and a potential field model, and allows for applying general optimization algorithms. An algorithm, AreaDecompose, has been presented for solving the extended area partitioning problem efficiently. The algorithm combines a new heuristic-based method with existing optimization techniques and two post-processing methods.
An empirical evaluation has been presented comparing the area size error and the compactness metric when using different configurations of the proposed partitioning algorithm. In addition, a comparative analysis was made between the algorithms proposed in this paper and the state-of-the-art approach proposed by Hert & Lumelsky in [18]. The results have shown that the new methods proposed in this paper can generate on average up to 73% more compact partitions than those of Hert & Lumelsky, while remaining within a specified tolerance on the partition size error. Additionally, the best results were obtained when the AreaDecompose algorithm was configured to use a combination of the heuristic-based PFH and the CMA-ES optimization technique. The PFH quickly finds initial solutions, which are then further improved using the CMA-ES optimization, reducing the errors on the area constraints and improving the compactness of each subpolygon.
While the standard area partitioning problem has mainly been defined for terrain covering applications in robotics, the extended problem definition proposed here can be easily adapted to other GIS-related applications. An example of such an application, also presented in the paper, is the problem of redistricting for the state of New York.
In this work, the polygon simplification step aims to ensure that the areas of the generated subpolygons do not change and to keep the simplified polygon within the bounds of the original polygon to maintain its compactness properties. In future work, it would be interesting to investigate whether the simplification step can be used to reduce the error on the area constraint and also improve the compactness property.
References
 [1] (2010) Equal-area locus-based convex polygon decomposition. Theoretical Computer Science 411 (14), pp. 1648–1667. Note: Structural Information and Communication Complexity (SIROCCO 2008) External Links: ISSN 0304-3975, Document, Link Cited by: §I-A.
 [2] (2006) Parallel region coverage using multiple UAVs. In 2006 IEEE Aerospace Conference, pp. 8. External Links: Document, ISSN Cited by: §I-A.
 [3] (2007) Rectilinear workspace partitioning for parallel coverage using multiple unmanned aerial vehicles. Advanced Robotics 21 (12), pp. 105–120. Cited by: §IA.
 [4] (1983) Minimum partition of polygonal regions into trapezoids. In 24th Annual Symposium on Foundations of Computer Science (SFCS 1983), pp. 233–241. External Links: Document, ISSN Cited by: §I-A, §I.
 [5] (1986) Partitioning a polygonal region into trapezoids. J. ACM 33 (2), pp. 290–312. External Links: ISSN 0004-5411, Link, Document Cited by: §I-A, §I.
 [6] (1988) Nonobtuse triangulation of polygons. Discrete & Computational Geometry 3 (2), pp. 147–168. Cited by: §IA.
 [7] (2016) Area coverage with heterogeneous UAVs using scan patterns. In 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 342–349. External Links: Document, ISSN Cited by: §I-A.
 [8] (1995) Linearsize nonobtuse triangulation of polygons. Discrete & Computational Geometry 14 (4), pp. 411–428. Cited by: §IA.
 [9] (2006) Areapreserving approximations of polygonal paths. Journal of Discrete Algorithms 4 (4), pp. 554 – 566. External Links: ISSN 15708667, Document, Link Cited by: §IIIE.
 [10] (1996) Optimal equipartition of rectangular domains for parallel computation. Journal of Global Optimization 8 (1), pp. 15–34. Cited by: §IA, §I.
 [11] (1992) An on-line algorithm for constrained Delaunay triangulation. CVGIP: Graphical Models and Image Processing 54 (4), pp. 290–300. Cited by: §I-A.
 [12] (1973) Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: The International Journal for Geographic Information and Geovisualization 10 (2), pp. 112–122. External Links: Document, Link Cited by: §III-E.
 [13] (2000) Dynamic data structures for fat objects and their applications. Computational Geometry 15 (4), pp. 215–227. External Links: ISSN 09257721, Document, Link Cited by: §IV.
 [14] (1975) Decomposition of polygons into simpler components: feature generation for syntactic pattern recognition. IEEE Transactions on Computers 100 (6), pp. 636–650. Cited by: §I.
 [15] (1828) An essay on the application of mathematical analysis to the theories of electricity and magnetism. Nottingham. External Links: 0807.0088, Link Cited by: §III-E, §III-E.
 [16] (1983) The decomposition of polygons into convex parts. Computational Geometry 1, pp. 235–259. Cited by: §IA.

 [17] (2006) The CMA evolution strategy: a comparing review. In Towards a New Evolutionary Computation: Advances in the Estimation of Distribution Algorithms, J. A. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea (Eds.), pp. 75–102. External Links: ISBN 978-3-540-32494-2 Cited by: 2nd item, §III-B.
 [18] (1998) Polygon area decomposition for multiple-robot workspace division. Int. J. Comput. Geometry Appl. 8, pp. 437–466. Cited by: 3rd item, §I-A, §I-A, §I, §IV, §IV, §V, §V.
 [19] (1983) Fast triangulation of simple polygons. In International Conference on Fundamentals of Computation Theory, pp. 207–218. Cited by: §IA.
 [20] (1985) Decomposing a polygon into simpler components. SIAM Journal on Computing 14 (4), pp. 799–817. Cited by: §IA.
 [21] (1990) Realtime obstacle avoidance for manipulators and mobile robots. In Autonomous Robot Vehicles, I. J. Cox and G. T. Wilfong (Eds.), pp. 396–404. External Links: Document, Link Cited by: 1st item.
 [22] (1994) Generating and mapping population density surfaces within a geographical information system. The Cartographic Journal 31 (1), pp. 21–26. Cited by: §IVA.
 [23] (1984) Bounds on the length of convex partitions of polygons. In International Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 279–295. Cited by: §IA.
 [24] (2013) An efficient measure of compactness for twodimensional shapes and its application in regionalization problems. International Journal of Geographical Information Science 27 (6), pp. 1227–1250. Cited by: §I.
 [25] (2006) Approximate convex decomposition of polygons. Computational Geometry 35 (12), pp. 100–123. Cited by: §IA.
 [26] (1978) On twodimensional data organization i. Fundamenta Informaticae 2 (1), pp. 211–226. Cited by: §I.
 [27] (1991) Finding a minimal cover for binary images: an optimal parallel algorithm. Algorithmica 6 (1), pp. 624–657. Cited by: §I.
 [28] (1992) Areabisecting polygonal paths. The Fibonacci Quarterly 30, pp. 263–73. Cited by: §IA.
 [29] (1991) The third criterion: compactness as a procedural safeguard against partisan gerrymandering. Yale L. & Pol’y Rev. 9, pp. 301. Cited by: §I, §I, §IV.
 [30] (1972) An iterative procedure for the polygonal approximation of plane curves. Computer Graphics and Image Processing 1 (3), pp. 244 – 256. External Links: ISSN 0146664X, Document, Link Cited by: §IIIE.
 [31] (1961) A note: measuring compactness as a requirement of legislative apportionment. Midwest Journal of Political Science 5 (1), pp. 70–74. External Links: ISSN 00263397, Link Cited by: §I, §IV.
 [32] (2016) An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747. Cited by: §IIIE.
 [33] (1965) Reapportionment, gerrymanders, and the notion of compactness. Minn. L. Rev. 50, pp. 443. Cited by: §I, §I, §IV.
 [34] (2021) Flight planning in multiunmanned aerial vehicle systems: nonconvex polygon area decomposition and trajectory assignment. International Journal of Advanced Robotic Systems 18 (1), pp. 1729881421989551. External Links: Document, Link, https://doi.org/10.1177/1729881421989551 Cited by: §IA, §I.