 # Euclidean TSP, Motorcycle Graphs, and Other New Applications of Nearest-Neighbor Chains

We show new applications of the nearest-neighbor chain algorithm, a technique that originated in agglomerative hierarchical clustering. We apply it to a diverse class of geometric problems: we construct the greedy multi-fragment tour for Euclidean TSP in O(n log n) time in any fixed dimension and for Steiner TSP in planar graphs in O(n√n log n) time; we compute motorcycle graphs (which are a central part of straight skeleton algorithms) in O(n^(4/3+ε)) time for any ε>0; we introduce a narcissistic variant of the k-attribute stable matching model, and solve it in O(n^(2-4/(k(1+ε)+2))) time; we give a linear-time 2-approximation for a 1D geometric set cover problem with applications to radio station placement.


## 1 Introduction

The nearest-neighbor chain (NNC) technique is used for agglomerative hierarchical clustering, and had previously seen only one other use outside that domain. In this paper, we apply it to an assortment of new problems: multi-fragment TSP, straight skeletons, narcissistic k-attribute stable matching, and a server cover problem. These problems share a property with agglomerative hierarchical clustering, which we call global-local equivalence, and which is the key to using the NNC algorithm. First, we review the NNC algorithm in the context of clustering.

### 1.1 Prior work: NNC in hierarchical clustering

Given a set of points, the agglomerative hierarchical clustering problem is defined procedurally as follows: each point starts as a base cluster, and the two closest clusters are repeatedly merged until there is only one cluster left. This creates a hierarchy, where any two clusters are either nested or disjoint. A key component of hierarchical clustering is the function used to measure distances between clusters. Popular metrics include minimum distance (or single-linkage), maximum distance (or complete-linkage), and centroid distance. We call two clusters mutually nearest neighbors (MNN) if they are the nearest neighbor of each other. Consider this alternative, non-deterministic procedure: instead of repeatedly merging the two overall closest clusters, merge any pair of MNNs. Clearly, this may merge clusters in a different order. Nonetheless, if the cluster-distance metric satisfies a property called reducibility, this procedure results in the same hierarchy [Bruynooghe77, bruynooghe1978, Muellner2011]. A cluster-distance metric d is reducible if, for any clusters A, B, and C: if A and B are MNNs, then

 d(A∪B,C)≥min(d(A,C),d(B,C)). (1)

In words, the new cluster resulting from merging A and B is not closer to other clusters than both A and B were. The relevance of this property is that, if, say, C and D are MNNs, merging A and B does not break that relationship. The net effect is that MNNs can be merged in any order and produce the same result. Many commonly used metrics are reducible, including minimum-, maximum-, and average-distance, but others such as centroid and median distance are not. The NNC algorithm exploits this reducibility property, which was originally observed by Bruynooghe [bruynooghe1978]. We briefly review the algorithm for hierarchical clustering, since we discuss it in detail later in the context of the new problems. For extra background on NNC for hierarchical clustering, see [murtagh1983, Muellner2011]. The basic idea is to maintain a stack (called chain) of clusters. The first cluster is arbitrary. The chain is always extended with the nearest neighbor (NN) of the current cluster at the top of the chain. Note that the distance between consecutive clusters in the chain keeps decreasing, so (with an appropriate tie-breaking rule) no repeated clusters or “cycles” occur, and the chain inevitably reaches a pair of MNNs. At this point, the MNNs are merged and removed from the chain. Crucially, after a merge happens, the rest of the chain is not discarded. Due to reducibility, every cluster in the chain still points to its NN, so the chain is still valid. The process continues from the new top of the chain. The algorithm is efficient because each cluster is added to the chain only once, since it stays there until it is merged with another cluster. As we will see in detail for other problems, this bounds the number of iterations to be linear in the input size, with the cost of each iteration dominated by a NN computation.
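The chain-based procedure above can be sketched in a few lines. This is an illustrative implementation, not the paper's efficient version: clusters are tuples of point indices, the cluster distance is single linkage (which is reducible), nearest neighbors are found by brute force, and pairwise distances are assumed distinct.

```python
import math

def nnc_cluster(points, dist=lambda p, q: math.dist(p, q)):
    """Agglomerative clustering via the nearest-neighbor chain.

    Returns the sequence of merges (the hierarchy). Quadratic-time
    sketch: brute-force NN queries stand in for an efficient NN
    structure; single linkage guarantees reducibility.
    """
    def d(a, b):  # single-linkage distance between clusters a and b
        return min(dist(points[i], points[j]) for i in a for j in b)

    clusters = {(i,) for i in range(len(points))}
    chain, merges = [], []
    while len(clusters) > 1:
        if not chain:
            chain.append(next(iter(clusters)))  # start anywhere
        top = chain[-1]
        # nearest neighbor of the cluster at the top of the chain
        nn = min((c for c in clusters if c != top), key=lambda c: d(top, c))
        if len(chain) >= 2 and d(top, nn) >= d(top, chain[-2]):
            # top and its predecessor are mutual nearest neighbors: merge
            a, b = chain.pop(), chain.pop()
            clusters.discard(a); clusters.discard(b)
            clusters.add(tuple(sorted(a + b)))
            merges.append((a, b))
            # the rest of the chain remains valid by reducibility
        else:
            chain.append(nn)
    return merges
```

Note that after a merge only the top two chain entries are discarded; the clusters below them still point to their nearest neighbors, which is exactly where reducibility is used.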

### 1.2 Our contributions

Our key observation is that this equivalence between merging closest pairs and MNNs is not unique to hierarchical clustering. The problems in this paper, even though they are not about clustering, exhibit an analogous phenomenon, for which we coin the term global-local equivalence. The main thesis of this paper is that NNC is an efficient algorithm for problems with global-local equivalence, which includes many more problems than hierarchical clustering. Recently, the NNC algorithm was used for the first time outside of the domain of hierarchical clustering [eppstein2017_2, eppstein2017]. It was used in a stable matching problem where the two sets to be matched are point sets in a metric space, and each agent in one set ranks the agents in the other set by distance, with closer points being preferred. In this setting, there is a form of global-local equivalence: the stable matching is unique, and it can be obtained in two ways: by repeatedly matching the closest pair (from different sets), or by repeatedly matching MNNs. They used the NNC algorithm to solve the problem efficiently. In this paper, we consider global-local equivalence in the context of the new problems, and give NNC-type algorithms for them. We summarize the computational results here. See each section for extended background on the corresponding problems.

###### Multi-fragment TSP.

A classic heuristic for the Euclidean Traveling Salesman Problem is the multi-fragment algorithm. While it does not have strong approximation guarantees, experimental results show that it performs better than other heuristics, particularly on geometric instances [Krari17, JohnMcGe97, misev11, Moscato1994AnAO, Bentley1990, bentley92]. We do not know of any subquadratic algorithm to compute the tour produced by this heuristic, which we call the multi-fragment tour. We give an O(n log n)-time algorithm for computing the multi-fragment tour of a point set in any fixed dimension and under any L_p metric. We also consider the Steiner TSP problem in a graph-theoretic framework [Cornuejols1985], where we give an O(n√n log n)-time algorithm for finding the multi-fragment tour through a subset of nodes in planar graphs and, more generally, graph families with O(√n)-size separators.

###### Straight skeletons and motorcycle graphs.

The fastest algorithms for computing straight skeletons consist of two phases, neither of which dominates the other [Cheng2016]. The first phase is a motorcycle graph computation. The best previously known algorithm for motorcycle graphs runs in O((P(n) + n·Q(n)) log n) time, where P(n) and Q(n) are the preprocessing time and operation time (maximum between query and update) of a dynamic ray-shooting data structure for curtains in R^3 [Vigneron2014]. We improve this to O(P(n) + n·Q(n)). Using the structure from [agarwal93], both algorithms run in O(n^(4/3+ε)) time for any ε>0, but if both use the same ε in the data structure, ours is faster by a log n factor.

###### Narcissistic k-attribute stable matching.

Given that Θ(n^2) time is optimal for general stable matching instances, it is interesting to study restricted models. We introduce a narcissistic variant of the k-attribute model [bhatnagar2008] and give a subquadratic, O(n^(2-4/(k(1+ε)+2)))-time algorithm for it, for any ε>0.

###### Server cover.

We give a linear-time 2-approximation for a one-dimensional version of a server coverage problem: given the locations of clients and servers, which can be seen as houses and telecommunication towers, the goal is to assign a “signal strength” to each tower so that the towers jointly reach all the houses, minimizing the total cost of transmitting the signals. This improves upon the slower algorithm by Alt et al. [carrots], which achieves the same approximation ratio.

##### Paper organization.

Section 2 introduces a new data structure, which we call the soft nearest-neighbor data structure. Section 3 solves multi-fragment Euclidean TSP with a variant of NNC that uses this structure. Sections 4, 5, and 6 are on motorcycle graphs, narcissistic k-attribute stable matching, and server cover, respectively. Thus, Section 3 relies on Section 2, but the other sections are independent and self-contained, and can be read in any order. We give concluding remarks in Section 7.

## 2 The Soft Nearest-Neighbor Data Structure

Throughout this section, we consider points in R^δ, for some fixed dimension δ, and distances measured under any L_p metric. We begin with a formal definition of the structure and the main result of this section. [Dynamic soft nearest-neighbor data structure] Maintain a dynamic set of points, P, subject to insertions, deletions, and soft nearest-neighbor queries: given a query point q, return either of the following:

• The nearest neighbor of q in P, denoted NN(q).

• A pair of points p_1, p_2 in P satisfying d(p_1, p_2) < d(q, NN(q)).

In any fixed dimension, and for any L_p metric, there is a dynamic soft nearest-neighbor data structure that maintains a set of n points with O(n log n) preprocessing time and O(log n) time per operation (queries and updates). We label the two types of answers to soft nearest-neighbor (SNN) queries as hard or soft. A “standard” NN data structure is a special case of a SNN structure that always gives hard answers. However, in light of Theorem 2, a standard NN structure would not be as efficient as a SNN structure. For comparison, the best dynamic NN structures in (R^2, L_2) require polylogarithmic time per operation, with larger exponents [Chan2010, KapMulRod-16]. In our implementation, we use the following data structure. Given a point set P and a point q, let p_i denote the i-th closest point to q in P. [Dynamic k-approximate nearest-neighbor (k-ANN) data structure] Maintain a dynamic set of points, P, subject to insertions, deletions, and k-approximate nearest-neighbor queries: given a query point q and an integer k with 1 ≤ k ≤ n, return k points q_1, …, q_k such that, for each i, d(q, q_i) ≤ (1+ε)·d(q, p_i), where ε > 0 is a constant known at construction time. (Some approximate nearest-neighbor data structures [arya1998optimal] do not need to know ε at construction time and, in fact, allow ε to be part of the query and to be different for each query. Clearly, such data structures are also valid for our needs.) We reduce each SNN query to a single k-ANN query with constant k and ε. Once we show this reduction, Theorem 2 will follow from the following result by Arya et al. [arya1998optimal]: [[arya1998optimal]] In any fixed dimension, and for any L_p metric, there is a dynamic k-approximate nearest-neighbor data structure with O(n log n) preprocessing time and O(log n) time per operation (queries and updates) for constant ε and k.
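The contract of the structure can be stated as an executable check. The following brute-force verifier is a hypothetical helper (not part of the paper) that decides whether a proposed answer is a legal hard or soft answer for a query point q against a point set P:

```python
import math

def is_valid_snn_answer(P, q, answer):
    """Check an SNN answer against the definition by brute force.

    `answer` is either ('hard', p), claiming p = NN(q), or
    ('soft', (p1, p2)), claiming d(p1, p2) < d(q, NN(q)).
    Illustrative only: the point of the real structure is precisely
    to avoid this O(n) scan per query.
    """
    d_nn = min(math.dist(q, p) for p in P)
    kind, val = answer
    if kind == 'hard':
        return val in P and math.dist(q, val) == d_nn
    p1, p2 = val
    return p1 in P and p2 in P and math.dist(p1, p2) < d_nn
```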

### 2.1 Soft nearest-neighbor implementation

We maintain the point set in a dynamic k-ANN structure (the parameters k and ε depend on the metric space, and will be determined later). In what follows, q denotes an arbitrary query point and p_i the i-th closest point to q in the structure. For ease of presentation, we assume throughout the section that d(q, p_1) = 1. This scaling does not affect any result. Queries rely on the following lemma. Consider a k-ANN query from q. If none of the returned points q_1, …, q_k is p_1, then, for each i with 1 ≤ i ≤ k, we have that d(q, q_i) ≤ (1+ε)^i.

###### Proof.

For i = 1, the fact follows immediately from the definition of the k-ANN structure (and the assumption that d(q, p_1) = 1). For i > 1, note that d(q, p_i) ≤ (1+ε)^(i−1). This is because there are at least i points within distance (1+ε)^(i−1) of q: p_1, q_1, …, q_{i−1}. Thus, d(q, q_i) ≤ (1+ε)·d(q, p_i) ≤ (1+ε)^i. The claim follows by induction. ∎

Let S(q, r, R) denote a closed shell centered at q with inner radius r and outer radius R (i.e., S(q, r, R) is the difference between two balls centered at q, the bigger one of radius R and the smaller one of radius r). From Lemma 2.1, we get the following. Consider a k-ANN query from q. If none of the returned points q_1, …, q_k is p_1, then they all lie in S(q, 1, (1+ε)^k). We call a pair (k, ε) valid parameters if, in any set of k points inside a shell with inner radius 1 and outer radius (1+ε)^k, there must exist two points at distance smaller than 1. Suppose that (k, ε) are valid parameters. Initially, we construct the k-ANN structure using ε as the approximation factor. Then we answer queries as in Algorithm 1.
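The query procedure can be sketched as follows. This is a sketch in the spirit of Algorithm 1, not a verbatim transcription: `ann_query(q, k)` is an assumed black box returning k (1+ε)-approximate nearest neighbors of q, and (k, ε) are assumed to be valid parameters. Since d(q, p_1) is not known at query time, the sketch uses the lower bound d(q, p_1) ≥ r_1/(1+ε), where r_1 is the distance to the closest candidate, as a safe soft-answer threshold.

```python
import math

def snn_query(q, ann_query, k, eps):
    """One soft nearest-neighbor query answered by a single k-ANN query.

    Returns ('hard', p) with p the nearest neighbor of q, or
    ('soft', (p1, p2)) with p1, p2 closer to each other than q is to
    its nearest neighbor.
    """
    cands = list(ann_query(q, k))
    r1 = min(math.dist(q, p) for p in cands)
    # d(q, NN(q)) >= r1 / (1 + eps), so any candidate pair below this
    # threshold is a safe soft answer.
    thresh = r1 / (1.0 + eps)
    for i, p in enumerate(cands):
        for s in cands[i + 1:]:
            if math.dist(p, s) < thresh:
                return ('soft', (p, s))
    # With valid (k, eps), the absence of such a pair forces one
    # candidate out of the shell, i.e., the true NN was returned.
    return ('hard', min(cands, key=lambda p: math.dist(q, p)))
```

With an exact NN oracle as `ann_query` (eps = 0), the threshold becomes the exact NN distance and every answer is a legal hard or soft answer by definition.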

If (k, ε) are valid parameters, Algorithm 1 is correct.

###### Proof.

If a pair of the points returned by the k-ANN structure are at distance smaller than 1, they are a valid soft answer to the SNN query (recall that d(q, p_1) = 1). Thus, consider the alternative case: no pair of the returned points is at distance smaller than 1. Then, because (k, ε) are valid, at least one of the returned points must be outside of S(q, 1, (1+ε)^k). By the contrapositive of Corollary 2.1, one of them must be p_1. ∎

As a side note, a SNN structure always returns a hard answer when queried from a point that is part of the closest pair of the set of points it maintains, as there is no closer pair. In this way, a SNN structure can be used to find the closest pair in (R^δ, L_p), for constant δ, in O(n log n) time by querying from every point. This matches the known runtimes in the literature [Bespamyatnikh1998].
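As a concrete illustration of this side note, here is a closest-pair routine built on one SNN query per point. A brute-force exact NN stands in for the SNN structure (an exact NN answer is a special case of a hard answer); a real SNN structure could also return a soft answer, i.e., an even closer pair, which can only improve the best pair found so far.

```python
import math

def closest_pair_via_snn(points):
    """Find the closest pair by querying (a stand-in for) an SNN
    structure from every point. O(n^2) here only because the stand-in
    NN query is brute force."""
    best = None  # (distance, p, q)
    for q in points:
        others = [p for p in points if p is not q]
        nn = min(others, key=lambda p: math.dist(q, p))  # hard answer
        cand = (math.dist(q, nn), q, nn)
        if best is None or cand[0] < best[0]:
            best = cand
    return best[1], best[2]
```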

### 2.2 Choice of parameters

We left open the question of finding valid parameters (k, ε). This question is related to the kissing number of the metric space, which is the maximum number of points that can be placed on the surface of a unit sphere with all pairwise distances at least 1. For instance, it is well known that the kissing number is 6 in (R^2, L_2) and 12 in (R^3, L_2). It follows that, in (R^2, L_2), (7, ε) are valid parameters for a sufficiently small ε > 0. Of course, we are interested in general (R^δ, L_p). Moreover, our question is more general in the sense that our points are not constrained to lie on a sphere, but in a shell (and, to complicate things, the width of the shell depends on the number of points). There are valid parameters (k, ε) in any metric space (R^δ, L_p).

###### Proof.

Consider a shell with inner radius 1 and outer radius c, for some constant c > 1. A set of points in the shell at pairwise distance at least 1 corresponds to a set of disjoint balls of radius 1/2 centered inside the shell. Consider the volume of the intersection of the shell with such a ball. This volume is lower bounded by some constant, V_min, corresponding to the case where the ball is centered along the exterior boundary. Since the volume of the shell, V_S, is itself constant, the maximum number of disjoint balls of radius 1/2 that fit in the shell is a constant no larger than V_S/V_min. This is because, no matter where the balls are placed, at least V_min volume of the shell is inside each one of them, so, if there were more than V_S/V_min balls, there would be some region of the shell inside at least two of them. This corresponds to two points at distance smaller than 1. Set k to be ⌈V_S/V_min⌉ + 1, and ε to be the constant such that (1+ε)^k = c. Then, (k, ε) are valid parameters for (R^δ, L_p). ∎

The dependency of k-ANN structures on ε is typically severe. Thus, for practical purposes, one would like to find a valid pair of parameters with ε as big as possible. The dependency on k is usually negligible in comparison, and, in any case, k cannot be too large because the shell’s width grows exponentially in k. Thus, we narrow the question to optimizing ε: what is the largest ε that is part of a pair of valid parameters? We first address the case of (R^2, L_2), where we derive the optimal value for ε analytically. We then give a heuristic, numerical algorithm for general (R^δ, L_p) spaces.

##### Parameters in (R^2, L_2).

Let ε_φ be the number such that (1+ε_φ)^10 = φ, where φ is the golden ratio. The valid parameters with largest ε for (R^2, L_2) are (10, ε) for any ε < ε_φ (ε can be arbitrarily close to ε_φ, but must be smaller). This follows from the following observations.

• The kissing number of (R^2, L_2) is 6, so there are no valid parameters with k ≤ 6.

• The thinnest annulus (i.e., 2D shell) with inner radius 1 such that 10 points can be placed inside at pairwise distance at least 1 has outer radius φ. Figure 1, top, illustrates this fact. In other words, if the outer radius is any smaller than φ, two of the points would be at distance smaller than 1. Thus, any valid pair with k = 10 requires ε to be smaller than ε_φ, but any value smaller than ε_φ forms a valid pair with k = 10.

• For 7 ≤ k ≤ 9 and for k ≥ 11, it is possible to place k points at pairwise distance at least 1 in an annulus of inner radius 1 and outer radius (1+ε_φ)^k, and they are not packed “tightly”, in the sense that the k points can lie in a thinner annulus. This can be observed easily; Figure 1 (bottom) shows the cases for k = 9 and k = 11. Cases with 7 ≤ k ≤ 9 can be checked one by one; in cases with k ≥ 11, the annulus grows at an increasingly faster rate, so placing the points at pairwise distance at least 1 becomes increasingly “easier”. Thus, for any k ≠ 10, any valid pair with that specific k would require an ε smaller than ε_φ.

Figure 1: Top: The first figure shows two concentric circles of radius 1 and φ with an inscribed pentagon and decagon, respectively, and some proportions of these shapes. The other figures show two different ways to place 10 points at pairwise distance ≥1 inside an annulus of inner radius 1 and outer radius (1+ε_φ)^10 = φ. Disks of radius 1/2 around each point are shown to be non-overlapping. In one case, the points are placed on the vertices of the decagon. In the other, they alternate between vertices of the decagon and the pentagon. In both cases, the distance between adjacent disks is 0. Thus, these packings are “tight”, i.e., if the annulus were any thinner, there would be two of the 10 points at distance <1. Bottom: 9 and 11 points at pairwise distance ≥1 inside annuli of outer radius (1+ε_φ)^9 and (1+ε_φ)^11, respectively. These packings are not tight, meaning that, for k=9 and k=11, a valid value of ε would have to be smaller than ε_φ.
##### Parameters in (R^δ, L_p).

For other spaces, we suggest a numerical approach. We can do a binary search on the values of ε to find one close to optimal. For a given value of ε, we want to know if there is any k such that (k, ε) are valid. We can search for such a k iteratively, trying k = 1, 2, 3, … (the answer will certainly be “no” for any k smaller than the kissing number). Note that, for a fixed k, the shell has constant volume. As in Lemma 2.2, let V_min be the volume of the intersection between the shell and a ball of radius 1/2 centered on the exterior boundary of the shell. As argued before, if k·V_min is bigger than the shell’s volume, then (k, ε) are valid parameters. For the termination condition, note that if, in the iterative search, k reaches a value where the volume of the shell grows by more than V_min in a single iteration, no valid k for that ε will be found, as the shell grows faster than the new points cover it. Besides the volume check, one should also consider a lower bound on how much of the shell’s surface (both inner and outer) is contained inside an arbitrary ball. We can then see if, for a given k, the amount of surface contained inside the k balls is bigger than the total surface of the shell, at which point two balls surely intersect. This check finds better valid parameters than the volume one for relatively thin shells, where the balls “poke” out of the shell on both sides.
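The volume check can be made concrete in the plane, where the ball-shell intersection reduces to circle-circle intersection areas. The sketch below certifies valid pairs (k, ε) in (R^2, L_2); for simplicity it caps k instead of using the growth-based termination condition described above, and it is conservative: failure to certify does not prove that no valid pair exists.

```python
import math

def lens_area(d, r1, r2):
    """Area of the intersection of two disks with radii r1 and r2
    whose centers are at distance d (standard two-circle formula)."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                         * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

def find_valid_k(eps, k_max=500):
    """Smallest k for which (k, eps) are certified valid in (R^2, L2)
    by the volume argument, or None if the check never succeeds."""
    for k in range(2, k_max):
        R = (1.0 + eps) ** k                 # outer radius of the shell
        shell = math.pi * (R * R - 1.0)      # area of the shell
        # area inside the shell of a ball of radius 1/2 centered on the
        # exterior boundary: clip to the outer disk, remove the hole
        v_min = lens_area(R, R, 0.5) - lens_area(R, 1.0, 0.5)
        if k * v_min > shell:                # k disjoint balls cannot fit
            return k
    return None
```

For small ε the certified k lands just above the kissing number, while for large ε the shell area outruns the covered area and no k is certified.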

## 3 Multi-Fragment Euclidean TSP

The Euclidean Travelling Salesperson Problem asks to find, given a set of points, a shortest closed tour (a closed polygonal chain) through all the points. The problem is NP-hard even in this geometric setting, but a polynomial-time approximation scheme is known [arora1998polynomial]. In this section, we consider a classic greedy heuristic for constructing TSP tours, multi-fragment TSP. In this algorithm, each point starts as a single-node path. While there is more than one path, connect the two closest paths. Here, the distance between two paths is measured as the minimum distance between their endpoints, and connecting two paths means adding the edge between their closest endpoints. Once there is a single path left, connect its endpoints. We call the tour resulting from this process the multi-fragment tour. The multi-fragment algorithm was proposed by Bentley [bentley92] specifically in the geometric setting. Its approximation ratio is Θ(log n) [ONG1984273, Brecklinghaus15]. Nonetheless, it is used in practice due to its simplicity and empirical support that it generally performs better than other heuristics [Krari17, JohnMcGe97, misev11, Moscato1994AnAO, Bentley1990]. We are interested in the complexity of computing the multi-fragment tour. A straightforward implementation of the multi-fragment algorithm is similar to Kruskal’s minimum spanning tree algorithm: sort the pairs of points by increasing distance and process them in order: for each pair, if the two points are endpoints of separate paths, connect them. The runtime of this algorithm is O(n^2 log n). Eppstein [eppstein2000fast] uses dynamic closest pair data structures to compute the multi-fragment tour in O(n^2) time (for arbitrary distance matrices). Bentley [bentley92] gives a k-d tree-based implementation and says that it appears to run in O(n log n) time on uniformly distributed points in the plane. We give a NNC-type algorithm that computes the multi-fragment tour in O(n log n) time in any fixed dimension. We do not know of any prior worst-case subquadratic algorithm.
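The straightforward Kruskal-like implementation mentioned above is easy to state in full; this sketch (labels merged by a linear scan, so O(n^2 log n) overall) is useful as a reference to validate faster implementations against.

```python
import math
from itertools import combinations

def multi_fragment_tour(points):
    """Multi-fragment tour, Kruskal-style: sort all point pairs by
    distance and add an edge when both points are still endpoints of
    two distinct paths; finally close the tour."""
    n = len(points)
    degree = [0] * n          # 0 or 1 while a point is an endpoint
    path_id = list(range(n))  # which path each point belongs to
    edges = []
    pairs = sorted(combinations(range(n), 2),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
    for i, j in pairs:
        if degree[i] < 2 and degree[j] < 2 and path_id[i] != path_id[j]:
            edges.append((i, j))
            degree[i] += 1
            degree[j] += 1
            old, new = path_id[i], path_id[j]
            for v in range(n):           # merge the two paths' labels
                if path_id[v] == old:
                    path_id[v] = new
    # close the tour by connecting the two endpoints of the final path
    ends = [v for v in range(n) if degree[v] < 2]
    if len(ends) == 2:
        edges.append((ends[0], ends[1]))
    return edges
```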

### 3.1 Global-local equivalence in multi-fragment TSP

Since the multi-fragment algorithm operates on paths rather than points, it will be convenient to think of the input as a set of paths (a path is an open polygonal chain, although, in the context of the algorithm, only the coordinates of the endpoints are relevant). The input to Euclidean TSP corresponds to a set of paths where all paths are single-point paths. Consider the following two strategies for constructing a tour from a set of paths, where we use A⋅B to denote the path resulting from connecting paths A and B:

• While there is more than one path, connect two paths using one of the following strategies:

1. Connect the closest pair of paths.

2. Connect two mutually nearest-neighbor paths.

• Connect the two endpoints of the final path.

Strategy 1 corresponds to the multi-fragment algorithm. Note that Strategy 2 is non-deterministic, and that Strategy 1 is a special case of Strategy 2. In this section, we show that any execution of Strategy 2 computes the multi-fragment tour. Note the similarity between multi-fragment TSP and hierarchical clustering. We can see that in multi-fragment TSP we have a notion equivalent to reducibility in agglomerative hierarchical clustering (Equation 1). [Reducibility in multi-fragment TSP] Let A, B, and C be paths. Then, d(A⋅B, C) ≥ min(d(A, C), d(B, C)).

###### Proof.

The distance between paths is defined as the minimum distance between their endpoints, and the two endpoints of A⋅B are a subset of the four endpoints of A and B. ∎

[Global-local equivalence in multi-fragment TSP] Assuming that there are no ties in the pairwise distances between paths, Strategies 1 and 2 produce the same tour. We adapt the proof of global-local equivalence for agglomerative hierarchical clustering presented in [Muellner2011]. We note that ties can be broken with a consistent rule, such as by the smallest index in the input.

###### Proof.

Let P be a set of paths, let S_1 denote the sequence of path pairs connected by Strategy 1 starting from P, and let T_1 be the corresponding resulting tour. Similarly, let S_2 denote one of the possible sequences of path pairs connected by an instantiation of Strategy 2 starting from P, and T_2 the corresponding tour. We need to show that T_1 = T_2. Proceed by induction on n = |P|. If n = 1, both tours are the same because no connections happen. Thus, let n > 1. Let (A, B) be the first pair of paths in S_2. Then, consider the set P′ = (P ∖ {A, B}) ∪ {A⋅B}. The tour T_2 can be seen as the tour obtained by starting from the set P′ and connecting the same paths as in S_2 after the first connection (A, B). Note that |P′| = n − 1. Thus, by the inductive hypothesis, T_2 = T′_1, where T′_1 is the tour obtained by Strategy 1 starting from P′. The bulk of the proof is to show that T_1 = T′_1. First, note that (A, B) is in S_1: initially, A and B are MNN paths (since they are the first pair chosen by Strategy 2). Then, they remain so throughout the algorithm until they are connected. This is because (i) MNN paths are not connected with other paths, and (ii) by reducibility (Lemma 3.1), MNN paths stay so even if other paths are connected (i.e., if C and D are connected, C⋅D is not closer to A (or B) than the closest of C and D was). Let (A, B) be the j-th pair in S_1. Next, we show that the first j − 1 pairs in S_1 and in S′_1, the sequence of pairs connected by Strategy 1 starting from P′, are the same and in the same order. Let (C, D) be the first pair of paths in S_1. By Strategy 1, d(C, D) is minimum among all distances between paths in P. By Lemma 3.1, A⋅B is not closer to C or D than A or B was. Thus, in P′, d(C, D) is also minimum, so (C, D) is also the first element in S′_1. The claim for the next pairs follows analogously by induction. Finally, note that after the first j connections in S_1 and the first j − 1 connections in S′_1, the corresponding partial solutions are the same. After that point, all the connections, and the final solution, must be the same in both, so T_1 = T′_1. ∎

We note that Lemma 3.1 holds for arbitrary distance matrices.

### 3.2 Soft nearest-neighbor chain for multi-fragment Euclidean TSP

Given that we have global-local equivalence (Lemma 3.1), we can use the NNC algorithm to compute the multi-fragment tour via Strategy 2. A straightforward adaptation of the NNC algorithm, paired with the dynamic NN structure from [Chan2010, KapMulRod-16], yields a runtime of O(n·polylog n) for (R^2, L_2). However, we skip this result and jump directly to our main result: The multi-fragment tour of a set of n points in any fixed dimension, and under any L_p metric, can be computed in O(n log n) time. We use a variation of the NNC algorithm that uses a SNN structure instead of the usual NN structure, which we call soft nearest-neighbor chain (SNNC). For this, we need a SNN structure for paths instead of points. That is, a structure that maintains a set of (possibly single-node) paths, and, given a query path, returns the closest path to it or two paths that are closer to each other.

###### A soft nearest-neighbor structure for paths.

We simulate a SNN structure for paths with a SNN structure for points. Given a set of paths, we maintain the set of path endpoints in the SNN structure for points. Updates are straightforward: we add or remove both endpoints of the path. Given a query path with endpoints q_1 and q_2, we do a SNN query from each endpoint of the path. If both answers are hard (assuming that the path has two distinct endpoints; otherwise, just the one), then we find the true NN of the path, and we can return it. However, there is a complication with soft answers: the two points returned could be the endpoints of the same path. Thus, it could be the case that we find two closer points, but not two closer paths, as we need. The solution is to modify the specification of the SNN structure for points so that soft answers, instead of returning two points closer to each other than the query point to its NN, return three pairwise closer points. We call this a three-way SNN structure. In the context of using the structure for paths, this guarantees that even if two of the three endpoints belong to the same path, at least two different paths are involved. Lemma 3.2 shows how to obtain a three-way SNN structure for points, Algorithm 2 shows the full algorithm for answering SNN queries about paths using a three-way SNN structure for points, and the subsequent lemma shows its correctness. In any fixed dimension and for any L_p metric, there is a three-way SNN structure with O(n log n) preprocessing time and O(log n) operation time (queries and updates).

###### Proof.

Recall the implementation of the SNN structure from Section 2. To obtain a three-way SNN structure, we need to change the values of ε and k, making ε smaller and k bigger, so that if there are k points in a shell of inner radius 1 and outer radius (1+ε)^k, then there must be at least three points at pairwise distance less than 1. The method described in Section 2.2 for finding valid parameters also works here. It only needs to be modified so that the area (or surface) of the shell is accounted for twice. Since ε and k are still constants, this does not affect the asymptotic runtimes in Theorem 2. ∎

In any fixed dimension, and for any L_p metric, we can maintain a set of n paths in a SNN structure for paths with O(n log n) preprocessing time and O(log n) operation time (queries and updates).

###### Proof.

All the runtimes follow from Lemma 3.2: we maintain the set of path endpoints in a three-way SNN structure D. The structure can be initialized in O(n log n) time. Updates require two insertions or deletions in D, taking O(log n) time each. Algorithm 2 for queries clearly runs in O(log n) time. We argue that it returns a valid answer. Let Q be a query path with endpoints q_1 and q_2, and consider the three possible cases:

• Both answers are hard. In this case, we find the closest path to each endpoint, and, by definition, the closest of the two is the NN of Q.

• One answer is soft and the other is hard. Wlog, let e be the hard answer to q_1 (the closest endpoint to q_1) and let the triple of endpoints be the soft answer to q_2. Let B and C be the two closest paths among the paths with endpoints in the triple. If d(q_1, e) ≤ d(B, C), then the path with e as endpoint must be the NN of Q, because there is no endpoint closer than e to q_1, and every endpoint is farther from q_2 than d(q_1, e). Otherwise, {B, C} is a valid soft answer, as B and C are closer to each other than either endpoint of Q to its closest endpoint.

• Both answers are soft. Assume (wlog) that the NN of Q is closer to q_1 than to q_2. Then, the soft answer to q_1 gives us two paths closer to each other than q_1 is to its closest endpoint, and hence closer than d(Q, NN(Q)), so we return a valid soft answer. ∎

###### The soft nearest-neighbor chain algorithm.

We use a SNN structure for paths. In the context of this algorithm, let us think of a SNN answer, hard or soft, as being a set of two paths. If the answer is hard, then one of the paths returned in the answer is the query path itself, and the remaining path is its NN. Now, we can establish a comparison relationship between SNN answers (independently of their type): given two SNN answers {A, B} and {C, D}, we say that {A, B} is better than {C, D} if and only if d(A, B) < d(C, D). The input is a set of paths, where we again assume unique distances. The algorithm maintains a stack (the chain) of nodes, where each node consists of a pair of paths (with the exception of the first node in the chain, which contains a single path). In particular, each node in the chain is the best SNN answer among the two queries for the two paths in the predecessor node (when querying from a path, we remove it from the structure temporarily, so that the answer is not the path itself). The algorithm starts with an arbitrary path in the chain. If the chain ever becomes empty and there is still more than one path, the chain is restarted with an arbitrary path. If the best answer we get from the SNN structure is precisely the node currently at the top of the chain, we connect both paths contained in it and remove the node and its predecessor from the chain. Otherwise, we append the answer to the top of the chain as a new node. See Algorithm 3 for a full description of the algorithm and Figure 2 for a snapshot of the algorithm.

Figure 2: Left: a set of paths (some of which are single points) and a possible chain, where the nodes are denoted by dashed lines and appear in the chain according to the numbering. Note that the first node is the only one containing a single path, and that all the nodes in the chain are soft answers except the fourth node. Right: the nearest-neighbor graph of the set of paths. For each path, a dashed/red arrow points to its NN. Further, the arrows start and end at the endpoints determining the minimum distance between the paths.
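The main loop can be sketched as follows. This is a simplified sketch of the SNNC idea in which a brute-force exact NN over paths plays the role of the SNN structure (so every answer is hard, and the chain degenerates to a plain NNC over paths); it nevertheless connects MNN paths in exactly the way the algorithm requires.

```python
import math

def snnc_multi_fragment(points):
    """Multi-fragment tour via a nearest-neighbor chain over paths.

    Paths are tuples of point indices; the distance between paths is
    the minimum distance between their endpoints. Returns the final
    path; closing it (first to last point) yields the tour.
    """
    def pdist(a, b):
        return min(math.dist(points[u], points[v])
                   for u in (a[0], a[-1]) for v in (b[0], b[-1]))

    def connect(a, b):  # join a and b at their closest endpoints
        du = {(u, v): math.dist(points[u], points[v])
              for u in (a[0], a[-1]) for v in (b[0], b[-1])}
        u, v = min(du, key=du.get)
        a = a if a[-1] == u else a[::-1]
        b = b if b[0] == v else b[::-1]
        return a + b

    paths = {(i,) for i in range(len(points))}
    chain = []
    while len(paths) > 1:
        if not chain:
            chain.append(next(iter(paths)))   # restart at any path
        top = chain[-1]
        nn = min((p for p in paths if p != top), key=lambda p: pdist(top, p))
        if len(chain) >= 2 and pdist(top, nn) >= pdist(top, chain[-2]):
            a, b = chain.pop(), chain.pop()   # top and predecessor are MNNs
            paths.discard(a); paths.discard(b)
            paths.add(connect(a, b))
        else:
            chain.append(nn)
    return next(iter(paths))
```

By global-local equivalence, connecting MNNs in this chain order yields the same tour as repeatedly connecting the overall closest pair of paths.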

The following invariants hold at the beginning of each iteration of Algorithm 3:

1. The SNN structure contains a set of disjoint paths.

2. If a node N_2 appears after a node N_1 in the chain, then N_2 is better than N_1.

3. Every path appears in at most two nodes in the chain, and, in the case of two appearances, they are in two consecutive nodes.

4. The chain only contains paths currently in the SNN structure.

###### Proof.
1. The claim holds initially. Each time two paths A and B are connected, one endpoint of each becomes an internal point of the new path A⋅B. Since A and B are removed from the structure, no path can be connected to those endpoints.

2. We show it for the specific case where N_2 is immediately after N_1 = {A, B} in the chain, which suffices. Note that N_2 is different from {A, B}, or it would not have been added to the chain. We distinguish between two cases:

• A and B were MNNs when N_2 was added. Then, N_2 had to be a soft answer from A or B, which would have to be better than {A, B}.

• A and B were not MNNs when N_2 was added. Then, at least one of them, A (wlog), had a closer path than B. Thus, whether the answer for A was soft or hard, the answer had to be better than {A, B}.

3. Assume, for a contradiction, that a path X appears in two non-consecutive nodes of the chain. Let N_1 be the earlier one, N_3 = {X, Y} the later one, and N_2 the node immediately after N_1 (with X possibly in N_2 as well). By Invariant 2, N_3 is better than N_2. It is easy to see that if x_1 and x_2 are the two endpoints of X, then x_1 and x_2 have been endpoints of paths since the beginning of the algorithm. Thus, the answer for the query from X when N_1 was at the top of the chain had to be a pair at distance at most d(X, Y), and N_2 is at least as good as that answer. Hence d(N_2) ≤ d(X, Y) = d(N_3), contradicting that N_3 is better than N_2.

4. We show that no node in the chain contains paths that have already been connected to form bigger paths. Whenever the two paths in the node at the top of the chain are connected, we remove the node from the chain. By Invariant 3, their only other possible occurrence is in the predecessor node, which is also removed. In addition, since paths are removed from S when they are connected, newly added nodes only contain paths that have not been connected into bigger paths yet. ∎

Paths connected in Algorithm 3 are MNNs in the set of paths in the SNN structure.

###### Proof.

Let u = {P, Q} be the node at the top of the chain, and let a be the best SNN answer among the queries for P and Q. If P and Q are not MNNs, at least one of them, say P (wlog), has a path closer to it than the other, so the answer for P is better than u, and hence the best answer a cannot be u. By the contrapositive, if the best answer is u, then P and Q are MNNs. In the algorithm, P and Q are connected precisely when a = u. ∎

###### Proof of the theorem of subsection 3.2.

We show that Algorithm 3 computes the multi-fragment tour in O(n log n) time. In particular, it implements Strategy 2: the SNN structure maintains a set of paths, and the algorithm repeatedly connects MNNs (Lemma 3.2). By global-local equivalence (Lemma 3.1), this produces the multi-fragment tour. Note that the chain is acyclic in the sense that each node contains a path from the current set of paths in S (Invariant 4) not found in previous nodes (by Invariant 3). Thus, the chain cannot grow indefinitely, so, eventually, paths get connected. The main loop does not halt until there is a single path. If there are n paths at the beginning, there are 2n−1 different paths throughout the algorithm, because each connection removes two paths and adds one new path. At each iteration, either two paths are connected, which happens n−1 times, or one node is added to the chain. Since there are n−1 connections, each of which triggers the removal of at most two nodes from the chain, the total number of nodes removed from the chain is O(n). Since every node added is eventually removed, the number of nodes added to the chain is also O(n). Thus, the total number of iterations is O(n). Therefore, the total running time is O(T(n) + n·Q(n)), where T(n) and Q(n) are the preprocessing and operation time of a SNN structure for paths. With the SNN structure for paths described earlier, this is O(n log n). ∎

Incidentally, the maximum-weight matching problem has the same type of global-local equivalence: consider the classic greedy algorithm that picks the heaviest edge at each iteration and discards the neighboring edges (edges with a shared endpoint), which achieves a 1/2-approximation [avis83]. An alternative algorithm that picks any edge heavier than all of its neighbors produces the same matching [Hoepman2004SimpleDW], which we call the greedy matching. In the geometric setting, we are interested in matching points so as to minimize distances instead (but the mentioned results still hold). The greedy algorithm then repeatedly matches the closest pair. It is possible to modify Algorithm 3 to find the greedy matching in O(n log n) time in any fixed dimension. The algorithm is, in fact, simpler, because the SNN structure only needs to maintain points instead of paths, and matched points are removed permanently (unlike connected paths, which are re-added to the set of paths). However, this is not a new result, as there is a dynamic closest-pair data structure with O(log n) time per operation [Bespamyatnikh1998], which can be used to find the greedy matching in the same time bound.
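
A minimal sketch of this greedy-matching variant, with a brute-force scan in place of the dynamic closest-pair structure (names are ours):

```python
import math

def mnn_chain_matching(points):
    # Nearest-neighbor chain: follow nearest neighbors until two points are
    # mutually nearest, match them, and remove them permanently.
    unmatched = list(points)
    matching = set()
    chain = []
    while len(unmatched) > 1:
        if not chain:
            chain.append(unmatched[0])
        top = chain[-1]
        nn = min((p for p in unmatched if p != top),
                 key=lambda p: math.dist(top, p))
        if len(chain) >= 2 and nn == chain[-2]:
            matching.add(frozenset((top, nn)))
            unmatched.remove(top)
            unmatched.remove(nn)
            chain.pop()
            chain.pop()
            # the rest of the chain stays valid: the removed pair was closer
            # to each other than to any remaining chain element
        else:
            chain.append(nn)
    return matching
```

With unique pairwise distances, this returns the same matching as repeatedly matching the overall closest pair, which is the global-local equivalence mentioned above.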

### 3.3 Steiner TSP

Consider the Steiner TSP problem [Cornuejols1985]: given a weighted, undirected graph G and a set of node sites S, find a minimum-weight tour (repeated vertices and edges allowed) in G that goes at least once through every site in S. Nodes not in S do not need to be visited. For instance, G could represent a road network, and the sites could represent the daily drop-off locations of a delivery truck. See [DEKOSTER2007481, ZHANG201530] for more applications. We give a NNC algorithm to construct a multi-fragment tour in this setting. This is the first use of NNC in a graph-theoretical setting, but the fact of the matter is that the NNC algorithm can be used in any setting where a dynamic NN structure is available. Recently, Eppstein et al. [Eppstein17Latin] gave a NN structure for graphs from graph families with sublinear separators, which is the same as the class of graphs with polynomial expansion [dvorak2016]. For instance, planar graphs have separators of size O(√n). (Other important families of sparse graphs with sublinear separators include k-planar graphs [DujEppWoo-SJDM-17], bounded-genus graphs [gilbert1984], minor-closed graph families [kawarabayashi2010], and graphs that model road networks better than, e.g., k-planar graphs do [eppstein2017crossing].) This data structure maintains a subset of nodes of a graph G, and, given a query node in G, returns the node in the subset closest to it. It allows insertions and deletions to and from the subset. We cite their result below. [[Eppstein17Latin]] Given an n-node weighted graph from a graph family with separators of size O(n^c), with a constant 1/2 ≤ c < 1, in which such separators can be computed efficiently, there is a dynamic nearest-neighbor data structure requiring O(n^{1+c}) space and preprocessing time and that answers queries and updates in O(n^c log n) time. (They use the term reactive for the data structure instead of dynamic, to distinguish from other types of updates, e.g., edge insertions and deletions.)
As mentioned, one way to implement the multi-fragment heuristic is to sort the pairs of sites by increasing distance, and process them in order: for each pair, if the two sites are endpoints of separate paths, connect them. The bottleneck is computing the distances. Running Dijkstra's algorithm from each of the k sites in a sparse graph, this takes O(kn log n) time (or O(kn) in planar graphs [HENZINGER19973]). When k is Θ(n), this becomes quadratic. We do not know of any prior faster algorithm to compute the multi-fragment tour for Steiner TSP. Since we have global-local equivalence (Lemma 3.1), we can use the NNC algorithm to construct the multi-fragment tour in O(T(n) + n·Q(n)) time, where T(n) and Q(n) are the preprocessing and operation time of a nearest-neighbor structure. Thus, using the structure from [Eppstein17Latin], we get: The multi-fragment tour for the Steiner TSP problem can be computed in O(n^{1+c} log n) time in weighted graphs from a graph family with separators of size O(n^c), with 1/2 ≤ c < 1; in particular, this is O(n√n log n) for planar graphs. We omit the details of the proof, as NNC is a simpler version of the SNNC algorithm (Algorithm 3) that only has to handle hard answers. Finally, in graphs of bounded treewidth, which have separators of constant size, the data structure from [Eppstein17Latin] achieves near-linear preprocessing and polylogarithmic operation time, so we can construct a multi-fragment tour in near-linear time.
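
The baseline implementation described above (sort site pairs by shortest-path distance, then connect endpoints of distinct paths) can be sketched as follows; the endpoint bookkeeping and all names are ours:

```python
import heapq
from itertools import combinations

def dijkstra(graph, src):
    # graph: {node: [(neighbor, weight), ...]}
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def steiner_multifragment(graph, sites):
    # Sort site pairs by shortest-path distance; connect a pair when both
    # sites are endpoints of distinct paths. Returns the chosen connections;
    # closing the final path yields the multi-fragment tour.
    d = {s: dijkstra(graph, s) for s in sites}
    pairs = sorted(combinations(sites, 2), key=lambda ab: d[ab[0]][ab[1]])
    other_end = {s: s for s in sites}   # maps each path endpoint to the other end
    connections = []
    for a, b in pairs:
        if a in other_end and b in other_end and other_end[a] != b:
            oa, ob = other_end[a], other_end[b]
            for x in (a, b):            # joined endpoints become interior...
                if other_end[x] != x:   # ...unless the path was a single site
                    del other_end[x]
            other_end[oa], other_end[ob] = ob, oa
            connections.append((a, b))
            if len(connections) == len(sites) - 1:
                break
    return connections
```

This is exactly the quadratic-bottleneck baseline: the NNC algorithm with the dynamic NN structure of [Eppstein17Latin] avoids computing all pairwise site distances.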

## 4 Motorcycle Graphs

An important concept in geometric computing is the straight skeleton [aichholzer1996novel]. It is a tree-like structure similar to the medial axis of a polygon, but which consists of straight segments only. Given a polygon, consider a shrinking process where each edge moves inward, at the same speed, in a direction perpendicular to itself. The straight skeleton of the polygon is the trace of the vertices through this process. Some of its applications include computing offset polygons [Eppstein1999], medical imaging [cloppet2000], polyhedral surface reconstruction [oliva1996, Barequet2003], and computational origami [DEMAINE20003]. It is a standard tool in geometric computing software [cacciola04]. The current fastest algorithms for computing straight skeletons consist of two main steps [Cheng2007, Huber2011, huber12]. The first step is to construct a motorcycle graph induced by the reflex vertices of the polygon. The second step is a lower-envelope computation. With current algorithms, the first step is more expensive, but it only depends on the number of reflex vertices, r, which might be much smaller than the total number of vertices, n. Thus, neither step dominates the other in every instance. In this section, we discuss the first step. Efficient algorithms for the second step are known for simple polygons [bowers2014faster], for arbitrary polygons [Cheng2016], and for planar straight-line graphs with multiple connected components [bowers2014faster]. The motorcycle graph problem can be described as follows (see Figure 3, top) [Eppstein1999]. The input consists of n points in the plane, with associated directions and speeds (the motorcycles). Consider the process where all the motorcycles start moving at the same time, in their respective directions and at their respective speeds. Motorcycles leave a trace behind that acts as a "wall", such that other motorcycles crash and stop if they reach it. Some motorcycles escape to infinity while others crash against the traces of other motorcycles.
The motorcycle graph is the set of traces. Most existing algorithms rely on three-dimensional ray-shooting queries. Indeed, if time is seen as the third dimension, the position over time of a motorcycle starting to move from point p, at speed s, in direction d, forms a ray (if it escapes) or a segment (if it crashes) in three dimensions, starting at (p, 0) and with direction (s·d, 1). In particular, the impassable traces left behind by the motorcycles correspond to infinite vertical "curtains": wedges or trapezoidal slabs, depending on whether they are bounded below by a ray or a segment. Thus, ray-shooting queries help determine which trace a motorcycle would reach first, if any. Of course, the complication is that, as motorcycles crash, their potential traces change. Early algorithms handle this issue by computing the crashes in chronological order [Eppstein1999, Cheng2007]. The best previously known algorithm, by Vigneron and Yan [Vigneron2014], is the first that computes the crashes in non-chronological order. Our NNC-based algorithm improves upon it by reducing the number of ray-shooting queries needed from O(n log n) to O(n), and it simplifies the required data structures significantly. It is also non-chronological, but follows a completely new approach.

Figure 3: Top: an input instance with uniform speeds and its corresponding motorcycle graph. Bottom: snapshots of the NNC algorithm before and after determining all the motorcycles in a NN cycle found by the chain: the NN of the motorcycle at the top, m, is m′, which is already in the chain. Note that some motorcycles in the chain have as NNs motorcycles against the traces of which they do not crash in the final output. That is expected, because these motorcycles are still undetermined (e.g., as a result of clipping the curtain of m′, the NN of its predecessor in the chain changes).
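
As a point of reference for the problem definition, here is a brute-force chronological simulation: it repeatedly commits the earliest remaining candidate crash, which is always a genuine crash because the blocking motorcycle cannot itself have crashed earlier. Speeds are folded into the velocity vectors, general position is assumed, and all names are ours:

```python
import math

def crash_time(pi, vi, pj, vj, Tj):
    # Earliest t with pi + t*vi on the trace {pj + s*vj : 0 <= s <= min(t, Tj)};
    # None if motorcycle i never reaches j's trace. Assumes general position
    # (no simultaneous arrivals; parallel trajectories ignored).
    det = vj[0] * vi[1] - vj[1] * vi[0]          # cross(vj, vi)
    if det == 0:
        return None
    dx, dy = pj[0] - pi[0], pj[1] - pi[1]
    t = (vj[0] * dy - vj[1] * dx) / det          # cross(vj, pj - pi) / det
    s = (vi[0] * dy - vi[1] * dx) / det          # cross(vi, pj - pi) / det
    if t > 0 and 0 <= s < t and s <= Tj:
        return t
    return None

def motorcycle_graph(motorcycles):
    # motorcycles: list of (position, velocity) pairs. Returns each
    # motorcycle's crash time (math.inf if it escapes).
    n = len(motorcycles)
    T = [math.inf] * n                 # inf = undetermined / escaping
    undetermined = set(range(n))
    while undetermined:
        best = None
        for i in undetermined:
            pi, vi = motorcycles[i]
            for j in range(n):
                if j != i:
                    t = crash_time(pi, vi, *motorcycles[j], T[j])
                    if t is not None and (best is None or t < best[0]):
                        best = (t, i)
        if best is None:
            break                      # every remaining motorcycle escapes
        t, i = best
        T[i] = t
        undetermined.remove(i)
    return T
```

This cubic-time simulation is only a correctness reference; the point of the NNC algorithm is to replace the exhaustive scans by O(n) ray-shooting queries.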

### 4.1 Algorithm description

In the algorithm, we distinguish between undetermined motorcycles, for which the final location is still unknown, and determined motorcycles, for which the final location is already known. We use a dynamic three-dimensional ray-shooting data structure. In the data structure, determined motorcycles have wedges or slabs as curtains, depending on whether they escape or not. Undetermined motorcycles have wedge curtains, as if they were to escape. Thus, curtains of undetermined motorcycles may reach points that the corresponding motorcycles never get to. For an undetermined motorcycle m, we define its nearest neighbor to be the motorcycle, determined or not, against whose curtain m would crash next according to the current set of curtains in the data structure. Motorcycles that escape may have no NN. Finding the NN of a motorcycle corresponds to one ray-shooting query. Note that m may not actually crash against the trace of its NN, m′, if m′ is undetermined and happens to crash early. On the other hand, if m′ is determined, then m definitely crashes into its trace. We begin with all motorcycles undetermined. Our main structure is a chain (a stack) of undetermined motorcycles such that each motorcycle is the NN of the previous one. In contrast to typical applications of the NNC algorithm, here "proximity" is not symmetric: there may be no "mutually nearest neighbors". In fact, the only case where two motorcycles are MNNs is the degenerate case where two motorcycles reach the same point simultaneously. That said, mutually nearest neighbors have an appropriate analogue in the asymmetric setting: nearest-neighbor cycles. Our algorithm relies on the following key observation: if we find a nearest-neighbor cycle of undetermined motorcycles, then each motorcycle in the cycle crashes into the next motorcycle's trace. This is easy to see from the definition of nearest neighbors, as it means that no motorcycle outside the cycle would "interrupt" the cycle by making one of them crash early.
Thus, if we find such a cycle, we can determine all the motorcycles in the cycle at once (this can be seen as a type of chronological global-local equivalence). Starting from an undetermined motorcycle, following a chain of nearest neighbors inevitably leads to (a) a motorcycle that escapes, (b) a motorcycle that is determined, or (c) a nearest-neighbor cycle. In all three cases, this allows us to determine the motorcycle at the top of the chain or, in Case (c), all the motorcycles in the cycle. See Figure 3, bottom. Further, note that we only modify the curtain of the newly determined motorcycle(s). Thus, if we determine the motorcycle at the top of the chain, only the NN of the second-to-last motorcycle in the chain may have changed, and similarly in the case of the cycle. Consequently, the rest of the chain remains consistent. Algorithm 4 shows the full pseudocode.

### 4.2 Analysis

Clearly, every motorcycle eventually becomes determined, and we have already argued in the algorithm description that, irrespective of whether it becomes determined through Case (a), (b), or (c), its final position is correct. Thus, we move on to the complexity analysis. Each "clipping" update can be seen as an update to the ray-shooting data structure: we remove the wedge and add the slab. Algorithm 4 computes the motorcycle graph in O(T(n) + n·Q(n)) time, where T(n) and Q(n) are the preprocessing time and operation time (the maximum of the query and update times) of a dynamic, three-dimensional ray-shooting data structure.

###### Proof.

Each iteration of the algorithm makes one ray-shooting query. At each iteration, either a motorcycle is added to the chain (Case (d)), or at least one motorcycle is determined (Cases (a)–(c)). Motorcycles begin as undetermined and, once they become determined, they remain so. This bounds the number of iterations of Cases (a)–(c) by n. In Cases (b) and (c), one undetermined motorcycle may additionally be removed from the chain. Thus, the number of undetermined motorcycles removed from the chain is at most 2n. It follows that Case (d) happens O(n) times, since every motorcycle added to the chain is eventually removed from it. Overall, the algorithm takes O(n) iterations, so it needs no more than O(n) ray-shooting queries and at most n "clipping" updates, where we change a triangular curtain into a slab. It follows that the runtime is O(T(n) + n·Q(n)). ∎

In terms of space, we only need a linear amount besides the space required by the data structure. The previous best known algorithm also runs in O(n^{4/3+ε}) time [Vigneron2014]. Besides ray-shooting queries, it also uses range-searching data structures, which do not increase the asymptotic runtime but make the algorithm more complex. Agarwal and Matoušek [agarwal93] give a ray-shooting data structure for curtains in three dimensions which achieves T(n) = O(n^{1+ε}) and Q(n) = O(n^{1/3+ε}) for any ε > 0. Using this structure, both our algorithm and the algorithm of Vigneron and Yan [Vigneron2014] run in O(n^{4/3+ε}) time for any ε > 0. If both algorithms use the same ε in the ray-shooting data structure, then our algorithm is asymptotically faster by a logarithmic factor.

### 4.3 Special cases and remarks

Consider the case where all motorcycles start from the boundary of a simple polygon, move through the inside of the polygon, and also crash against the edges of the polygon. In this setting, the motorcycle trajectories form a connected planar subdivision. There are dynamic ray-shooting data structures for connected planar subdivisions that achieve O(log² n) time per operation [Goodrich1993]. Vigneron and Yan used this data structure in their algorithm for this case [Vigneron2014]; our algorithm improves the resulting runtime by a logarithmic factor. Furthermore, their other data structures require that coordinates have O(log n) bits, while we do not have this requirement. Vigneron and Yan also consider the case where motorcycles can only go in a constant number of different directions. They show how to reduce the operation time of the data structures in this setting; using the same data structures, the NNC algorithm again improves the runtime by a logarithmic factor. A remark on the use of our algorithm for computing straight skeletons: a degenerate polygon where two shrinking reflex vertices collide gives rise to a motorcycle graph problem where two motorcycles collide head on. To compute the straight skeleton, a new motorcycle should emerge from the collision. Our algorithm does not work if new motorcycles are added dynamically (such a motorcycle could, e.g., disrupt a NN cycle already determined), so it cannot be used in the computation of straight skeletons of degenerate polygons. As a side note, the NNC algorithm for motorcycle graphs is reminiscent of Gale's top trading cycle algorithm [shapley1974cores] from the field of economics. That algorithm also works by finding "first-choice" cycles. We are not aware of whether NNC-type algorithms are used to find such cycles; if they are not, they certainly could be; if they are, then at least our use is new in the context of motorcycle graphs.

## 5 Stable Matching Problems

We introduce the narcissistic k-attribute stable matching problem, a special case of k-attribute stable matching, and show that it belongs to the class of symmetric stable matching problems. We use this fact to give an efficient NNC-type algorithm for it.

###### Stable matching.

The stable matching problem studies how to match two sets of agents in a market where each agent has its own preferences about the agents of the other set, in a "stable" manner. Some of its applications include matching hospitals and residents [thematch] and on-line advertisement auctions [Aggarwal2009]. It was originally formulated by Gale and Shapley [gale62] in the context of establishing marriages between n men and n women, where each man ranks the women and each woman ranks the men. A matching between the men and women is stable if there is no blocking pair: a man and a woman who prefer each other over their assigned partners. Gale and Shapley [gale62] showed that a stable solution exists for any set of preferences (though it might not be unique), and presented the deferred-acceptance algorithm, which finds a stable matching in O(n²) time.
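
The deferred-acceptance algorithm itself is short; a standard rendering in Python (names are ours):

```python
def deferred_acceptance(men_prefs, women_prefs):
    # men_prefs / women_prefs: each agent mapped to a preference-ordered list
    # of the agents on the other side. Returns the man-optimal stable matching.
    rank = {w: {m: r for r, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}
    engaged_to = {}                        # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]  # best woman not yet proposed to
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free.append(engaged_to[w])      # w trades up; her partner is free
            engaged_to[w] = m
        else:
            free.append(m)                  # rejected; m proposes again later
    return {m: w for w, m in engaged_to.items()}
```

Each man proposes to each woman at most once, which gives the quadratic running time.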

### 5.1 Restricted models

For arbitrary preference lists, Gale–Shapley's deferred-acceptance algorithm is worst-case optimal, as storing all the preferences already requires Ω(n²) space (quadratic lower bounds are known even for "simpler" questions, like verifying the stability of a given matching [Gonczarowski2015]). This inspired work on finding subquadratic algorithms in restricted settings where preferences can be specified in subquadratic space. Such models are collectively called succinct stable matching [moeller2016]. We introduce a new model which is a special case of the following three models (none of which is a special case of another):

k-attribute model [bhatnagar2008].

Each agent a has a vector v_a of k numerical attributes, and a vector w_a of k weights according to how much a values each attribute in a match. Then, each agent a ranks the agents b in the other set according to the objective function f_a(b) = w_a · v_b, the linear combination of the attributes of b according to the weights of a.

Narcissistic stable matching.

This term is used to describe models where the preferences of each agent reflect their own qualities in some way (e.g., in [chennumber, moeller2016]).

Symmetric stable matching [eppstein2017_2].

Consider the setting where each agent a has an arbitrary objective function f_a, and ranks the agents b in the other set according to f_a(b) (note that any set of preference lists can be modeled in this way). The preferences are called symmetric if, for any two agents a and b in different sets, f_a(b) = f_b(a).

In this paper, we consider the natural narcissistic interpretation of the k-attribute model, where w_a = v_a. That is, each agent weighs each attribute according to its own value in that attribute. To illustrate this model, consider a centralized dating service where attributes are known for each person, such as income, intelligence, and so on. In the general k-attribute model, each person assigns weights to the attributes according to their preferences. The narcissistic assumption that w_a = v_a implies that someone with, say, a high income values income more than someone with a relatively smaller income. We make a general-position assumption that there are no ties in the preference list of each agent. In addition, in this model each agent is uniquely determined by its attribute vector, so we do not distinguish between the agents themselves and their k-dimensional vectors. We obtain the following formal problem. [Narcissistic k-attribute stable matching problem] Find a stable matching between two sets of n vectors in R^k, where a vector u prefers v over v′ if and only if u·v > u·v′. We give an O(n^{2−4/(k(1+ε)+2)})-time algorithm for the problem, for any ε > 0. Without the narcissistic assumption, the k-attribute model becomes less tractable: Künnemann et al. [moeller2016] showed that no strongly subquadratic-time algorithm exists assuming the Strong Exponential Time Hypothesis, even if the weights and attributes take Boolean values. (Similarly to us, [moeller2016] also studied some restricted cases, presenting subquadratic algorithms for the case where attributes and weights may take only a constant number of different values and for the asymmetric case where one of the sets has a single attribute.) It is easy to see that our setting is symmetric: since w_a = v_a, for any two agents a and b we have f_a(b) = v_a · v_b = v_b · v_a = f_b(a). Eppstein et al. [eppstein2017_2] showed that in symmetric models, the NNC algorithm can be used.
Specifically, they introduced symmetric stable matching as an abstraction of the case where the agents are points in a metric space and they rank the agents in the other set by proximity [arkin2009geometric]. They showed that if preferences are symmetric, the problem has special properties: there is a unique stable matching, and it can be found by repeatedly matching the two unmatched agents with the highest objective function value. In addition, they showed the global-local equivalence: it suffices to repeatedly match any two agents who have each other as first choice, called soul mates, which are mutually nearest neighbors if the preferences are distance-based. The algorithm of Eppstein et al. [eppstein2017, eppstein2017_2] for symmetric stable matching was the first use of NNC outside of hierarchical clustering. The algorithm is a bichromatic version of the NNC algorithm, where each agent in the chain is followed by its first choice among the unmatched agents in the other set. Following such a chain inevitably leads to soul mates, which are then matched and removed permanently (here, the symmetry assumption is the key to avoiding cycles in the chain). The execution relies on a dynamic first-choice data structure, which maintains the agents in one set, and, given a query agent q from the other set, returns the first choice of q among the agents in the structure. The final result is as follows: [[eppstein2017_2]] Given a first-choice data structure with preprocessing time T(n) and operation time Q(n) (the maximum of the query and update times), a symmetric stable matching problem can be solved in O(T(n) + n·Q(n)) time.
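
The two sides of this global-local equivalence can be sketched for the narcissistic dot-product objective, with a brute-force scan in place of the first-choice data structure (names and the toy instance are ours):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def greedy_symmetric_matching(A, B):
    # global rule: repeatedly match the unmatched pair with the highest value
    A, B = list(A), list(B)
    matching = []
    while A:
        a, b = max(((a, b) for a in A for b in B), key=lambda ab: dot(*ab))
        matching.append((a, b))
        A.remove(a)
        B.remove(b)
    return sorted(matching)

def soulmate_chain_matching(A, B):
    # local rule via a bichromatic chain: follow first choices until two
    # agents are each other's first choice (soul mates), then match them
    A, B = list(A), list(B)
    matching = []
    chain = []
    while A:
        if not chain:
            chain.append(A[0])
        top = chain[-1]
        other = B if top in A else A
        choice = max(other, key=lambda x: dot(top, x))
        if len(chain) >= 2 and choice == chain[-2]:
            a, b = (top, choice) if top in A else (choice, top)
            matching.append((a, b))
            A.remove(a)
            B.remove(b)
            chain.pop()
            chain.pop()
        else:
            chain.append(choice)
    return sorted(matching)
```

With symmetric preferences and no ties, the objective value strictly increases along the chain, so the chain never cycles and both procedures return the same (unique) stable matching.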

### 5.2 Our algorithm for narcissistic k-attribute stable matching

Adapting the NNC algorithm from [eppstein2017_2] to our model simply requires using an appropriate structure for first-choice queries. In our case, the first-choice data structure should maintain a set of vectors and, given a query vector, return the vector maximizing the dot product with the query vector. In the dual, this becomes ray shooting: each vector becomes a hyperplane, and a query asks for the first hyperplane hit by a vertical ray from the query point. We use the data structure from [Matousek1992], the runtime of which is captured in the following lemma (see [pankaj99range] for a summary of ray-shooting data structures). ([Matousek1992, Theorem 1.5]) Let ε > 0 be a constant, d a fixed dimension, and m a parameter with n ≤ m ≤ n^{⌊d/2⌋}. Then, there is a dynamic data structure for ray-shooting queries with O(m^{1+ε}) space and preprocessing time, O(m^{1+ε}/n) update time, and O((n/m^{1/⌊d/2⌋}) log n) query time. For any ε > 0, the narcissistic k-attribute stable matching problem can be solved in O(n^{2−4/(k(1+ε)+2)}) time for k ≥ 4, in the corresponding bound with k = 4 for k = 3 (by lifting the problem to four dimensions), and in O(n log n) time for k = 2.

###### Proof.

Since the problem is symmetric, it can be solved in O(T(n) + n·Q(n)) time, given a dynamic data structure for ray-shooting queries with preprocessing time T(n) and operation time Q(n) (Lemma 5). For k ≥ 4, using the data structure for ray-shooting queries from [Matousek1992] (Lemma 5.2) results in a runtime of O(m^{1+ε} + n·(n/m^{1/⌊k/2⌋}) log n) for any ε > 0. The optimal runtime is achieved when the parameter m is chosen to balance the two terms, i.e., so that m^{1+ε} = n²·m^{−1/⌊k/2⌋}. This gives m = n^{2⌊k/2⌋/((1+ε)⌊k/2⌋+1)}. For the sake of obtaining a simple asymptotic expression, we replace ⌊k/2⌋ by k/2 (which is the same for even k, and bigger for odd k), obtaining m = n^{2k/(k(1+ε)+2)}. Then, the term m^{1+ε} dominates. Also note that if k ≥ 4, this value of m is between n and n^{⌊k/2⌋}, so the condition in Lemma 5.2 is satisfied. Thus, the problem can be solved in O(m^{1+ε}) time, which simplifies to the claimed runtime of O(n^{2−4/(k(1+ε)+2)}). For k = 3, we use the same data structure, but raising the problem to four dimensions, so that Lemma 5.2 applies. For k = 2, see Lemma 5.3. ∎

Incidentally, the value of m used in Theorem 5.2 also improves the algorithm by Künnemann et al. [moeller2016] for the one-sided k-attribute stable matching problem [moeller2016, Theorem 2], which also relies on this data structure. A similar balancing of preprocessing and query times in [Mato1992trees, Corollary 5.2] also improves the time to verify the stability of a given matching in the (two-sided) k-attribute stable matching model [moeller2016, Section 5.1] for constant k.

### 5.3 The 2-attribute case

In this special case, we can design a simple first-choice data structure with O(n log n) preprocessing time and O(log n) operation time. Note that, for a vector →q in R², all the points along a line perpendicular to →q are equally preferred, i.e., have the same dot product with →q (because their projections onto the supporting line of →q are the same). In fact, the preference list for →q corresponds to the order in which a line perpendicular to →q encounters the vectors in the other set as it moves in the direction opposite to →q (see Figure 4, left). We get the following lemma (where the vectors in one set are interpreted as points).

Figure 4: Left: an instance of narcissistic 2-attribute stable matching. The two sets of vectors are represented as red dots and blue crosses, respectively, in a plane where the axes correspond to the two attributes. For a specific red vector, →q, its first choice in the other set (the vector maximizing the dot product), p∗, is shown. The dashed line passing through p∗ is perpendicular to →q. Right: the point p is the point among the black points maximizing q⋅p for all the points q in the gray wedge. The wedge is delimited by two rays starting at the origin and perpendicular to the two edges of the convex hull incident to p.

Given a point set P and a vector →q in R², the point p∗ in P maximizing →q⋅p∗ is on the convex hull of P.

###### Proof.

Consider a line perpendicular to →q. Move this line in the direction of →q until all points in P lie on the same side of it (behind it). Note that any line orthogonal to →q has the property that all points lying on it have the same dot product with →q. The point p∗ is the last point in P to touch the line, since moving the line in the direction opposite to →q decreases the dot product of →q with any point on the line (and, by the general-position assumption, p∗ is unique). Clearly, p∗ is on the convex hull. ∎

Our first-choice data structure is a semi-dynamic convex-hull data structure, where deletions are allowed but not insertions [Hershberger1992]. We handle queries as in the following lemma. Given the ordered list of points along the convex hull of a point set P, and a query vector →q, we can find the point p∗ in P maximizing →q⋅p∗ in O(log h) time, where h is the number of points in the convex hull.

###### Proof.

By Lemma 5.3, the point p∗ is on the convex hull. For ease of exposition, assume that all the points in P and the vector →q have positive coordinates (the other cases are similar). Then, p∗ lies in the top-right section of the convex hull (the section from the highest point to the rightmost point, in clockwise order). Note that points along the top-right convex hull are ordered by their y-coordinate, so we say above and below to describe the relative positions of points on it. Each point p in the top-right convex hull is the point in P maximizing q⋅p for all the vectors q in an infinite wedge, as depicted in Figure 4, right. The wedge contains all the vectors whose perpendicular line touches p last when moving in the direction of the vector, so the edges of the wedge are perpendicular to the edges of the convex hull incident to p. Thus, by looking at the neighbors of p along the convex hull, we can compute this wedge and determine whether →q is in the wedge of p, below it, or above it. Based on this, we discern whether the first choice of →q is p itself or above or below it. Thus, we can binary search for p∗ in O(log h) time. ∎
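
A sketch of this query, with the top-right chain extracted by a monotone-chain hull computation, and the wedge test replaced by the equivalent check of where the dot product stops increasing along the chain (assumes positive coordinates and no ties, as in the lemma; names are ours):

```python
def cross(o, a, b):
    return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

def top_right_chain(points):
    # upper hull via Andrew's monotone chain, then keep the part from the
    # highest point to the rightmost point (clockwise)
    pts = sorted(set(points))
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    top = max(range(len(hull)), key=lambda i: hull[i][1])
    return hull[top:]

def first_choice(chain, q):
    # q . p is unimodal along the chain: binary search for the first edge
    # along which the dot product starts decreasing
    lo, hi = 0, len(chain) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        ex = chain[mid + 1][0] - chain[mid][0]
        ey = chain[mid + 1][1] - chain[mid][1]
        if q[0] * ex + q[1] * ey > 0:
            lo = mid + 1
        else:
            hi = mid
    return chain[lo]
```

In the actual data structure, the hull is maintained under deletions [Hershberger1992] rather than recomputed, which is what gives the O(n log n) total bound.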

The narcissistic 2-attribute stable matching problem can be solved in O(n log n) time.

###### Proof.

We can use the NNC algorithm from [eppstein2017_2] coupled with a first-choice data structure consisting of a semi-dynamic convex-hull data structure. Updating the convex hull under the deletions can be done in O(n log n) total time throughout the algorithm [Hershberger1992]. Queries are answered in O(log n) time (Lemma 5.3). Thus, the total running time is O(n log n). ∎

## 6 Server Cover

Geometric coverage problems deal with finding optimal configurations of a set of geometric shapes that contain or "cover" another set of objects (for instance, see [agarwal2014near, bronnimann1995almost, PedersenWang18]). In this section, we propose an NNC-type algorithm for a problem in this category. We use NNC to speed up a greedy algorithm for a one-dimensional version of a server cover problem: given the locations of clients and servers, which can be seen as houses and telecommunication towers, the goal is to assign a "signal strength" to each tower so that the towers reach all the houses, minimizing the cost of transmitting the signals. Formally, we are given two sets of points, S (servers) and C (clients). The problem is to assign a radius r_s to a disk centered at each server s in S, so that every client is contained in at least one disk. The objective function to minimize is the sum of r_s^α over all servers, for some parameter α ≥ 1. The values α = 1 and α = 2 are of special interest, as they correspond to minimizing the sum of radii and the sum of areas (in 2D), respectively.

### 6.1 Related work

Table 1 gives an overview of exact and approximation algorithms for the server cover problem. It shows that when either the dimension or α is larger than 1, there is a steep increase in complexity. We focus on the one-dimensional case with α = 1, which has received significant attention because it gives insight into the problem in higher dimensions. Server coverage was first considered in the one-dimensional setting by Lev-Tov and Peleg [LevTov05]. They gave a polynomial-time dynamic-programming algorithm for the α = 1 case, where n is the number of clients and m is the number of servers. They also gave a linear-time 4-approximation (assuming a sorted input). The runtime of the exact algorithm was later improved by Biniaz et al. [BiniazApprox]. In the approximation setting, Alt et al. [carrots] gave a linear-time 3-approximation and an O((n+m) log(n+m))-time 2-approximation (also assuming a sorted input). Using NNC, we improve this to a linear-time 2-approximation algorithm under the same assumption that the input is sorted.

### 6.2 Global-local equivalence in server cover

The 2-approximation by Alt et al. [carrots] can be described as follows: start with all disks (which, in 1D, are intervals) of radius 0 and, at each step, make the smallest disk growth that covers a new client. If we define the distance between a client c and a server s with a disk of radius r as the distance between c and the closest boundary point of the server's disk, the process can be described as repeatedly finding the closest uncovered client–server pair and growing that server's disk up to the client. Under this view, there is a natural notion of MNNs: an uncovered client c and a server s such that their distance is the smallest among all the distances involving c and all those involving s. However, Figure 5 illustrates that this problem does not satisfy global-local equivalence: matching MNNs does not yield the same result as matching the closest pair. Furthermore, it shows that matching MNNs loses the 2-approximation guarantee. We nevertheless use NNC to achieve a 2-approximation, which requires enhancing the algorithm so that it does not simply match MNNs. This shows that NNC may be useful even in problems where global-local equivalence does not hold.

Figure 5: An instance where choosing MNNs in a specific order does not result in the same solution as choosing closest pairs (servers are crosses, clients are dots). Furthermore, note that the cost of the solution choosing MNNs, 2.1, is not within a factor 2 of the optimal cost, 1.
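
The disk-growing process described above, implemented naively (quadratic rather than O((n+m) log(n+m)); assumes unique distances; names are ours):

```python
def grow_disks(servers, clients):
    # Greedy: repeatedly pick the uncovered client / server pair whose disk
    # needs the smallest growth, and grow that disk to cover the client.
    radius = {s: 0.0 for s in servers}
    uncovered = set(clients)
    while uncovered:
        c, s = min(((c, s) for c in uncovered for s in servers),
                   key=lambda cs: abs(cs[0] - cs[1]) - radius[cs[1]])
        radius[s] = abs(c - s)
        # only the grown disk can newly cover clients
        uncovered = {c2 for c2 in uncovered if abs(c2 - s) > radius[s]}
    return radius
```

The NNC-based algorithm in the next subsections replaces the repeated closest-pair searches by cluster merges while retaining the 2-approximation guarantee.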

### 6.3 Linear-time 2-approximation in 1D

The algorithm takes a list of clients and servers ordered left-to-right, and outputs a radius for each server (which might be zero). In the algorithm, we group clients and servers into clusters. Each element starts as a base cluster, and we repeatedly merge them until there is a single cluster left. We distinguish between client clusters, consisting of a set of still uncovered clients, and server clusters, consisting of servers and covered clients. Clusters span intervals (as defined below). The distance between clusters is defined as the distance between the closest endpoints of the clusters’ intervals. We begin by describing the merging operation based on the cluster types.

• We merge client clusters into larger client clusters. All the clients in a cluster are eventually covered together, so we only need to keep track of the left-most one, l, and the right-most one, r; thus, we represent the client cluster by the interval [l, r]. Each client c starts as a cluster [c, c]. Two client clusters [l, r] and [l′, r′] (which in the algorithm never overlap), with r < l′, are merged into a client cluster [l, r′].

• We merge server clusters into larger server clusters. Of all the servers in a cluster, only the ones with disks reaching furthest to the left and to the right may cover new clients. Let these servers be p and q, respectively (they might be the same server), let l be the left-most point covered by p’s disk, and r the right-most point covered by q’s disk. Then, all the information we need about a server cluster is the tuple (p, q, l, r). Note that l ≤ r. Each server s starts as a cluster (s, s, s, s), since its disk of radius zero covers only the point s itself. To merge two server clusters (p, q, l, r) and (p′, q′, l′, r′) (which may overlap), we replace both by the server cluster (p″, q″, min(l, l′), max(r, r′)), retaining the identities only of the two servers whose disk boundaries extend furthest left and right: p″ is the server among p and p′ whose disk extends furthest to the left, and q″ is the server among q and q′ whose disk extends furthest to the right.

• Merging a client cluster [l_C, r_C] and a server cluster (p, q, l, r) (which may overlap) into a new server cluster involves covering all the clients in the client cluster by p or by q, whichever is cheaper. Let the chosen server’s disk grow until it covers every client in [l_C, r_C]; we merge the client cluster and the server cluster into a server cluster (p″, q″, l″, r″), where l″ (resp. r″) is the left-most (resp. right-most) point covered by the resulting disks, and p″ (resp. q″) is the server among p and q with the leftmost (resp. rightmost) extending disk after the growth.
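The three merge operations above can be sketched as follows. This is a hypothetical encoding, not the paper’s Algorithm 5: a client cluster is its interval `[l, r]`, and a server cluster is a tuple `(p, q, l, r)` where the extreme servers `p` and `q` are stored as `(position, radius)` pairs, so `l` and `r` can be recomputed from them.

```python
# Sketch of the three cluster-merge operations (hypothetical representation).

def merge_clients(a, b):
    # a = [l, r] and b = [l2, r2] with r < l2 (client clusters never overlap).
    return [a[0], b[1]]

def merge_servers(a, b):
    # Keep only the two servers whose disks extend furthest left and right.
    p, q, l, r = a
    p2, q2, l2, r2 = b
    left = p if l <= l2 else p2
    right = q if r >= r2 else q2
    return (left, right, min(l, l2), max(r, r2))

def merge_client_server(ci, sc):
    # Cover all clients in [lc, rc] by growing p's or q's disk, whichever
    # needs the smaller growth; then recompute the extreme servers.
    (lc, rc), (p, q, _l, _r) = ci, sc

    def growth(s):
        pos, rad = s
        # Extra radius needed so s's disk covers both lc and rc (>= 0).
        return max(abs(pos - lc), abs(pos - rc), rad) - rad

    gp, gq = growth(p), growth(q)
    if gp <= gq:
        p = (p[0], p[1] + gp)
    else:
        q = (q[0], q[1] + gq)
    left = min((p, q), key=lambda s: s[0] - s[1])    # furthest-left disk
    right = max((p, q), key=lambda s: s[0] + s[1])   # furthest-right disk
    return (left, right, left[0] - left[1], right[0] + right[1])
```

For instance, merging the client cluster [1, 2] with the singleton server cluster of a server at 0 (radius 0) grows that server’s disk to radius 2, yielding the server cluster with interval [-2, 2].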

The algorithm works by building a chain (a stack) of clusters ordered from left to right. The following invariant holds at the beginning of each iteration: no two clusters overlap, the chain contains a prefix of the list of clusters, and the distance between successive clusters in the chain decreases. In the pseudocode (Algorithm 5) we use