## 1 Introduction

Nearest neighbor searching is a classic problem in computational geometry in which we are given a set of point *sites* $S$, and we wish to preprocess these points such that for a query point $q$, we can efficiently find the site $s \in S$ closest to $q$. We consider the case where $S$ is a *dynamic* set of points inside a simple polygon $P$. That is, we may insert a new site into $S$ or delete an existing one. We measure the distance between two points $p$ and $q$ by their *geodesic distance* $\pi(p, q)$: the length of the *geodesic* $\Pi(p, q)$. The geodesic is the shortest path connecting $p$ and $q$ that is completely contained in $P$.

#### Motivation.

Our motivation for studying dynamic geodesic nearest neighbor searching originates from a problem in Ecology [4, 18]. We are given a threshold $\varepsilon$, and two sets of points in $\mathbb{R}^2$: a set of “red” points $R$, representing the locations at which an animal or plant species lived many years ago, and a set of “blue” points $B$, representing locations where the species could occur today. Each point $p$ also has a real value $h(p)$, representing an environmental value such as temperature. The problem is to find, for every species (red point) $p$, the closest current location (blue point) where it can migrate to, provided that the environmental value (temperature) is similar to its original location, i.e. differs by at most $\varepsilon$.

In the setting described above, it is easy to solve the problem in $O(N \log^2 N)$ time, where $N$ is the total size of $R$ and $B$. Simply build a balanced binary search tree that stores the blue points in its leaves, ordered by their $h$-values, and associate each internal node with the Voronoi diagram of its descendants. For each red point $p$ we can then find the closest blue point in $O(\log^2 N)$ time by selecting the $O(\log N)$ nodes that together represent the interval $[h(p) - \varepsilon, h(p) + \varepsilon]$, and locating the closest point in each associated Voronoi diagram. However, the geographical environment may limit migration. For example, if the species considered is a land-based animal like a deer then it cannot cross a large water body. Hence, we would like to consider the problem in a more realistic environment. We restrict the movement of the species to a simple polygon modeling the land, and measure distances using the geodesic distance.
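The Euclidean version of this tree can be sketched as follows. This is a toy illustration, not the paper's code: each node stores its blue points in a plain list, and the per-node nearest neighbor query is a linear scan standing in for point location in the Voronoi diagram that the text associates with the node. The names (`Node`, `canonical`, `query`) and the tuple format `(value, x, y)` are our own.

```python
import math

class Node:
    def __init__(self, pts):
        self.pts = pts                       # points of this subtree, sorted by value;
        self.left = self.right = None        # a stand-in for the node's Voronoi diagram
        if len(pts) > 1:
            mid = len(pts) // 2
            self.left, self.right = Node(pts[:mid]), Node(pts[mid:])

def build(blue):
    """blue: list of (value, x, y) tuples for the blue points."""
    return Node(sorted(blue))

def canonical(node, lo, hi, out):
    """Collect the O(log N) nodes whose subtrees exactly cover the value range [lo, hi]."""
    if node is None or node.pts[-1][0] < lo or node.pts[0][0] > hi:
        return
    if lo <= node.pts[0][0] and node.pts[-1][0] <= hi:
        out.append(node)
        return
    canonical(node.left, lo, hi, out)
    canonical(node.right, lo, hi, out)

def nearest(node, q):
    # linear scan; the paper instead performs an O(log N) point location
    # query in the Voronoi diagram stored at the node
    return min(node.pts, key=lambda p: math.dist((p[1], p[2]), q))

def query(root, q, h, eps):
    """Closest blue point to q whose value lies in [h - eps, h + eps]."""
    out = []
    canonical(root, h - eps, h + eps, out)
    cands = [nearest(nd, q) for nd in out]
    return min(cands, key=lambda p: math.dist((p[1], p[2]), q)) if cands else None
```

Replacing the per-node scan by point location in a precomputed Voronoi diagram is what yields the $O(N \log^2 N)$ bound discussed above.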

Directly applying the previous approach in this new setting, this time building geodesic Voronoi diagrams, incurs a cost proportional to the size of the polygon, $m$, in every node of the tree. Thus this approach has a running time of roughly $O(Nm\log N)$. If, instead, we sweep a window of width $2\varepsilon$ over the range of $h$-values, while maintaining our (offline) geodesic nearest neighbor data structure storing the set of blue points whose value lies in the window, we can solve the problem in only $O((N\sqrt{N} + m)\operatorname{polylog}(N, m))$ time. This is a significant improvement over the previous method.

#### Related Work.

A well known solution for nearest neighbor searching
with a fixed set of $n$ sites in $\mathbb{R}^2$ is to build the Voronoi diagram and
preprocess it for planar point location. This yields an optimal solution that
allows for $O(\log n)$ query time using $O(n)$ space and $O(n\log n)$
preprocessing time [6]. Voronoi diagrams have also been studied in case the set of sites is restricted
to lie in a polygon $P$, and we measure the distance between two points $p$ and $q$
by their geodesic distance $\pi(p, q)$. Aronov [2] shows that when $P$ is simple
and has $m$ vertices, the geodesic Voronoi diagram has complexity $O(n + m)$ and
can be computed in $O((n + m)\log(n + m)\log n)$ time. Papadopoulou and
Lee [20] present an improved algorithm that runs in $O((n + m)\log(n + m))$
time. Hershberger and Suri [12] give
an $O(m\log m)$ time implementation of the *continuous Dijkstra*
technique for the construction of a *shortest path map*. The shortest path
map supports $O(\log m)$ time geodesic distance queries between a fixed source
point $s$ and an arbitrary query point $q$, even in a polygon with holes. When
running their algorithm “simultaneously” on all source points (sites) in $S$,
their algorithm constructs the geodesic Voronoi diagram, even in a polygon with
holes, in $O((n + m)\log(n + m))$ time. These results all allow for $O(\log(n + m))$
time nearest neighbor queries. Unfortunately, these results are efficient only
when the set of sites is fixed, as inserting or deleting even a single site
may cause a linear number of changes in the Voronoi diagram.

To support nearest neighbor queries, it is, however, not necessary to
explicitly maintain the (geodesic) Voronoi diagram. Bentley and
Saxe [3] show that nearest neighbor searching is a
*decomposable search problem*. That is, we can find the answer to a query
by splitting the set of sites $S$ into groups, computing the solution for each group
individually, and taking the solution that is best over all groups. This
observation has been used in several other approaches for nearest neighbor
searching with the Euclidean
distance [1, 7]; indeed, it
is also used in our initial solution to the migration problem. However, even
with this observation, it is hard to get both polylogarithmic update and query
time. Only recently, Chan [5] managed to achieve
such results by maintaining the convex hull of a set of points in $\mathbb{R}^3$. Via a
well-known lifting transformation this also allows (Euclidean) nearest neighbor
queries for points in $\mathbb{R}^2$. Chan’s solution uses $O(n)$ space, and allows for
$O(\log^2 n)$ time queries, while supporting insertions and deletions in
$O(\log^3 n)$ and $O(\log^6 n)$ amortized time, respectively. Very recently,
Kaplan et al. [13] managed to reduce
the deletion time to $O(\log^5 n)$. In addition, they obtain polylogarithmic
update and query times for more general, constant complexity, distance
functions. Note however that the function describing the geodesic distance may
have complexity $\Theta(m)$, and thus these results do not transfer easily to
our setting.
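The logarithmic method of Bentley and Saxe is simple enough to sketch. The toy version below (our own naming) maintains Euclidean sites in power-of-two-sized static groups, merged like binary-counter carries, and answers a nearest neighbor query by querying every group — exactly the decomposability idea described above. The per-group brute-force scan stands in for a query to a static structure such as a preprocessed Voronoi diagram.

```python
import math

class LogMethod:
    """Bentley-Saxe logarithmic method for a decomposable search problem:
    static structures of power-of-two sizes, merged like binary-counter carries."""

    def __init__(self):
        self.buckets = []          # buckets[i] is None or a list of 2**i sites

    def insert(self, p):
        carry = [p]
        i = 0
        while i < len(self.buckets) and self.buckets[i] is not None:
            carry += self.buckets[i]       # merge: rebuild the static structure
            self.buckets[i] = None
            i += 1
        if i == len(self.buckets):
            self.buckets.append(None)
        self.buckets[i] = carry

    def nearest(self, q):
        best = None
        for b in self.buckets:
            if b:
                for p in b:                # stand-in for a static Voronoi query
                    if best is None or math.dist(p, q) < math.dist(best, q):
                        best = p
        return best
```

Each site takes part in $O(\log n)$ rebuilds overall, so insertions are cheap in the amortized sense even though a single insertion may trigger a large merge; deletions are what make the dynamic problem genuinely hard.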

In the geodesic case, directly combining the decomposable search problem approach with the static geodesic Voronoi diagrams described above does not lead to an efficient solution. As in our migration problem, this leads to an $O(m)$ cost, corresponding to the complexity of the polygon, on every update. Simultaneously and independently from us, Oh and Ahn [17] developed an approach that also achieves $O(\sqrt{n}\operatorname{polylog}(n, m))$ query and update times. Some of their ideas are similar to ours.

#### Our Results.

We develop a dynamic data structure to support nearest neighbor queries for a set of sites $S$ inside a (static) simple polygon $P$. Our data structure allows us to locate the site in $S$ closest to a query point $q \in P$, to insert a new site into $S$, and to delete a site from $S$. Our data structure supports queries in $O(\sqrt{n}\log n\log^2 m)$ time, and updates in $O(\sqrt{n}\log^3 m)$ time, where $n$ is the number of sites currently in $S$ and $m$ is the number of vertices of $P$. The space usage is $O(n\log m + m)$.

As with other decomposable search problems [3], we can adapt our data structure to improve the query and update time if there are no deletions. In this insertion-only setting, queries take $O(\sqrt{n}\log n\log m)$ worst-case time, and insertions take $O(\sqrt{n}\log^2 m)$ amortized time. Furthermore, we show that we can achieve the same running times in case there are both insertions and deletions, but the order of these operations is known in advance. The space usage of this version is $O(n + m)$.

#### The Global Approach.

The general idea in our approach is to
recursively partition the polygon into two roughly equal size sub-polygons
$P_\ell$ and $P_r$ that are separated by a diagonal. Additionally, we partition
the sites that lie in $P_\ell$ into a small number of subsets. The Voronoi
diagram that such a subset $S_i$ induces in the *other* half of the
polygon, $P_r$, is a forest $F_i$. See Fig. 1 for an
illustration. We show that we can efficiently construct a compact
representation of this forest that supports planar point location queries. We handle the sites that lie in $P_r$ analogously. When we get a query point
$q$, say in $P_r$, our forests allow us to find the site in $P_\ell$ closest to $q$
efficiently. To find the site in $P_r$ closest to $q$, we recursively query in
sub-polygon $P_r$. When we add or remove a site $s$ we have to rebuild the
forests associated with only the few subsets containing $s$. We show that we can
recompute each forest efficiently.

We give a more detailed description of the approach in Section 2. The core of our solution is that we can represent, and construct, the Voronoi diagram that a set of sites $S_i$ in $P_\ell$ induces in $P_r$ in time roughly proportional to the size of the subset $S_i$. In particular, our representation has size $O(|S_i|)$ and can be built in $O(|S_i|\log^2 m)$ time. We show this in Section 4. The key to achieving this result is a representation of the bisector $b_{pq}$ of two sites $p$ and $q$. Our representation of $b_{pq}$, presented in Section 3, allows us to find the intersection of $b_{pq}$ with another bisector in $P_r$ efficiently, and can be obtained from the preprocessed input polygon in $O(\log^2 m)$ time. We combine all of the components into a fully dynamic data structure in Section 5. Furthermore, we show that we can get improved query and update times in the insertion-only and offline cases.

## 2 An Overview of the Data Structure

As in previous work on geodesic Voronoi diagrams [2, 20], we assume that $P$ and $S$ are in general position. That is, (i) no two sites $p$ and $q$ in $S$ (ever) have the same geodesic distance to a vertex of $P$, and (ii) no three points (either sites or vertices) are collinear. Note that (i) implies that no bisector $b_{pq}$ between sites $p$ and $q$ contains a vertex of $P$.

We start by preprocessing $P$ for two-point shortest path queries using the
data structure by Guibas and Hershberger [10] (see also the
follow up note of Hershberger [11]). This takes $O(m)$
time and allows us to compute the geodesic distance between any
pair of query points in $O(\log m)$ time. We then construct a
balanced decomposition of $P$ into sub-polygons [9]. A
balanced decomposition is a binary tree in which each node represents a
sub-polygon $P'$ of $P$, together with a diagonal $d$ of $P'$ that splits $P'$
into two sub-polygons $P_\ell$ and $P_r$ that have roughly the same number of
vertices. As a result, the height of the tree, and thus the number of
*levels* in the decomposition, is $O(\log m)$. The root of the tree
represents the polygon $P$ itself.

Consider a diagonal $d$ of $P'$ that splits $P'$ into $P_\ell$ and $P_r$, and let $S_\ell$ denote the set of sites in $P_\ell$. The Voronoi diagram of $S_\ell$ in $P_r$ is a forest with $O(k)$ nodes of degree one or three, where $k = |S_\ell|$, and $O(k + m)$ nodes of degree two [2]. The degree three nodes correspond to intersection points of two bisectors, and the degree one nodes correspond to intersection points of a bisector with the polygon boundary. See Fig. 1 for an illustration. We refer to the topological structure of the forest, i.e. the forest with only the degree one and three nodes, as $F$.

At every level of the decomposition, we partition the sites in $S_\ell$ into $O(\sqrt{n})$ subsets, each of size $O(\sqrt{n})$. For each subset $S_i$ we explicitly build the embedded forest $F_i$ that represents its Voronoi diagram in $P_r$ using an algorithm for constructing a Hamiltonian abstract Voronoi diagram [15]. More specifically, for every degree one or three node, we compute the location of the intersection point that it is representing, and to which other nodes (of degree one or three) it is connected. Furthermore, we preprocess $F_i$ for planar point location. Note that the edges in this planar subdivision correspond to pieces of bisectors, and thus are actually chains of hyperbolic arcs, each of which may have a high internal complexity. We do not explicitly construct these chains, but show that there is an oracle that can decide if a query point lies above or below a chain (edge of the planar subdivision) in $O(\log m)$ time. It follows that our representation of $F_i$ has size $O(k)$, where $k = |S_i|$, and supports planar point location queries in $O(\log k\log m)$ time. We handle the sites in $P_r$ analogously.

Once $P'$ is a triangle, corresponding to a leaf in the balanced decomposition, the geodesic between any pair of points in $P'$ is a single line segment, and thus the geodesic distance equals the Euclidean distance. In this case, we maintain the sites in $P'$ in a dynamic Euclidean nearest neighbor data structure such as the one of Chan [5] or the much simpler data structure of Bentley and Saxe [3].

Since every site is stored in exactly one subset at every level of the decomposition, and the Voronoi diagram for each such subset has linear size, the data structure uses $O(n\log m + m)$ space.

#### Handling a Query.

Consider a nearest neighbor query with point $q$ at a node of the balanced decomposition corresponding to sub-polygon $P'$, and let $P_\ell$ and $P_r$ be the sub-polygons into which $P'$ is split. In case that $q \in P_r$, we find the site in $P_\ell$ closest to $q$, and recursively query the data structure for the nearest neighbor of $q$ in sub-polygon $P_r$. We handle the case that $q \in P_\ell$ analogously. Once $P'$ is a triangle we find the site closest to $q$ by querying the Euclidean nearest neighbor searching data structure associated with $P'$. For each of the $O(\log m)$ levels of the balanced decomposition this gives us a candidate closest site, and we return the one that is closest over all.

Since we partitioned the set of sites that lie in $P_\ell$ into $O(\sqrt{n})$ subsets, we can find the site $s \in P_\ell$ closest to $q$ by a point location query in the Voronoi diagram associated with each of these subsets. Each such query takes $O(\log n\log m)$ time, and thus we can find $s$ in $O(\sqrt{n}\log n\log m)$ time. The final query in the Euclidean nearest neighbor data structure can easily be handled in $O(\log^2 n)$ time [3]. Since we have $O(\log m)$ levels in the balanced decomposition, the total query time is $O(\sqrt{n}\log n\log^2 m)$.
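To make the query procedure concrete, here is a one-dimensional toy analogue, entirely our own construction: the "polygon" is an interval split recursively at its midpoint, the sites of each half are grouped (group size 2 instead of $O(\sqrt{n})$), and a query scans the groups of the far half at every node of its root-to-leaf search path. Scanning a group stands in for a point location query in that group's Voronoi forest.

```python
import math

GROUP = 2      # toy group size; the paper uses groups of size O(sqrt(n))

class DecompNode:
    def __init__(self, sites, lo, hi):
        self.sites, self.mid = sites, (lo + hi) / 2
        left  = [p for p in sites if p[0] <  self.mid]
        right = [p for p in sites if p[0] >= self.mid]
        # each group of one half would store a Voronoi forest of the *other* half
        self.lgroups = [left[i:i + GROUP]  for i in range(0, len(left),  GROUP)]
        self.rgroups = [right[i:i + GROUP] for i in range(0, len(right), GROUP)]
        self.left  = DecompNode(left,  lo, self.mid) if len(left)  > 1 else None
        self.right = DecompNode(right, self.mid, hi) if len(right) > 1 else None

    def nearest(self, q):
        best, node = None, self
        while node is not None:
            on_right = q[0] >= node.mid
            far_groups = node.lgroups if on_right else node.rgroups
            for g in far_groups:                  # query the far side's groups
                for p in g:                       # (stand-in for point location)
                    if best is None or math.dist(p, q) < math.dist(best, q):
                        best = p
            child = node.right if on_right else node.left
            if child is None:                     # leaf: brute force, like the
                for p in node.sites:              # Euclidean base case
                    if (p[0] >= node.mid) == on_right:
                        if best is None or math.dist(p, q) < math.dist(best, q):
                            best = p
            node = child
        return best
```

Every site is examined through exactly one group per level, which mirrors the accounting in the paragraphs above.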

#### Handling Updates.

Consider inserting a new site $s$ into the data structure, or removing $s$ from $S$. The site needs to be, or is, stored in exactly one subset at every level of the decomposition. Suppose that $s$ needs to be, or is, in the subset $S_i$ at some level of the decomposition. We then simply rebuild the forest $F_i$ associated with $S_i$. In Section 4 we will show that we can do this in $O(|S_i|\log^2 m)$ time. Since the subset containing $s$, and thus its corresponding forest $F_i$, has size $O(\sqrt{n})$, the cost per level is $O(\sqrt{n}\log^2 m)$. Inserting $s$ into or deleting $s$ from the final Euclidean nearest neighbor data structure can be done in $O(\log^2 n)$ amortized time [3]. It follows that the total update time is $O(\sqrt{n}\log^3 m)$.

## 3 Representing a Bisector

Assume without loss of generality that the diagonal $d$ that splits $P$ into $P_\ell$ and $P_r$ is a vertical line-segment, and let $p$ and $q$ be two sites in $P_\ell$. In this section we show that there is a representation of $b_{pq} \cap P_r$, the part of the bisector $b_{pq}$ that lies in $P_r$, that allows efficient random access to the bisector vertices. Moreover, we can obtain such a representation using a slightly modified version of the two-point shortest path data structure of Guibas and Hershberger [10].

Let $s$ be a site in $P$, and consider the shortest path tree $T_s$ rooted
at $s$. Let $\overline{uv}$ be an edge of $T_s$ for which $v$ is further away from $s$
than $u$. The half-line starting at $v$ that is collinear with, and extending,
$\overline{uv}$ has its first intersection with the boundary of $P$ in a point $w$. We refer to
the segment $\overline{vw}$ as the *extension segment* of $v$
[2]. Let $E(s)$ denote the set of all extension
segments of all vertices in $T_s$.

Consider two sites $p, q \in S$, and their bisector $b_{pq}$. We then have

###### Lemma 1 (Lemma 3.22 of Aronov [2]).

The bisector $b_{pq}$ is a smooth curve connecting two points on $\partial P$ and having no other points in common with $\partial P$. It is the concatenation of straight and hyperbolic arcs. The points along $b_{pq}$ where adjacent pairs of these arcs meet, i.e., the vertices of $b_{pq}$, are exactly the intersections of $b_{pq}$ with the segments of $E(p)$ or $E(q)$.

###### Lemma 2 (Lemma 3.28 of Aronov [2]).

For any point $x \in P$, the bisector $b_{pq}$ intersects the shortest path $\Pi(p, x)$ in at most a single point.

Consider a point $r$ on $b_{pq}$ and let $\triangle pqr$ be the polygon defined by the shortest paths $\Pi(p, q)$, $\Pi(p, r)$, and $\Pi(q, r)$. This polygon is a pseudo-triangle whose corners $c_p$, $c_q$, and $c_r$ are connected to $p$, $q$, and $r$, respectively, by arbitrary polylines.

Let $u$ and $v$ be the intersection points between $d$ and the geodesics
$\Pi(p, r)$ and $\Pi(q, r)$, respectively, and assume without loss of generality that $u$ lies below $v$. The restriction of $\triangle pqr$ to $P_r$ is a
*funnel* $\mathcal{F}$, bounded by $\overline{uv} \subseteq d$, $\Pi(u, r)$, and
$\Pi(v, r)$. See Fig. 2(a). Note that $\mathcal{F}$ is contained in $P_r$.

Clearly, if $b_{pq}$ intersects $d$ then it intersects $\overline{uv}$. There is at most one such intersection point:

###### Lemma 3.

The bisector $b_{pq}$ intersects $d$ in at most one point $q^*$.

###### Proof.

Assume, by contradiction, that $b_{pq}$ intersects $d$ in two points $a$ and $b$, with $a$ above $b$. See Fig. 3(b). Note that by Lemma 1, $b_{pq}$ cannot intersect $\partial P$, and thus the endpoints of $d$, in more than two points. Thus, the part of $b_{pq}$ that lies in $P_\ell$ between $a$ and $b$ does not intersect $\partial P$. Observe that this implies that the region $R$ enclosed by this part of the curve, and the part of the diagonal from $a$ to $b$ (i.e. $\overline{ab}$), is empty. Moreover, since the shortest paths from $p$ to $a$ and to $b$ intersect $b_{pq}$ only once (Lemma 2), region $R$ contains no point of the shortest paths $\Pi(p, a)$ and $\Pi(p, b)$ other than $a$ and $b$ themselves.

Since $q$ has the same geodesic distance to $a$ and to $b$ as $p$, $q$ must lie in the intersection of the disks with radius $\pi(p, x)$ centered at $x$, for $x \in \{a, b\}$. It now follows that $q$ lies in one of the connected sets, or “pockets”, of this intersection minus $R$. Assume without loss of generality that it lies in a pocket above $a$. See Fig. 3. We now again use Lemma 2, and get that $b_{pq}$ intersects $\Pi(q, b)$ only once, namely in $b$. It follows that the shortest path from $q$ to $b$ has to go around $R$, and thus has length strictly larger than $\pi(p, b)$. Contradiction. ∎

Since $b_{pq}$ intersects $d$ only once (Lemma 3), and both endpoints of $b_{pq}$ lie on $\partial P$, it follows that there is at most one point $z$ where $b_{pq}$ intersects the outer boundary of $P_r$, i.e. $\partial P_r \setminus d$. Observe that $z$ is therefore a corner of the pseudo-triangle $\triangle pqz$ (i.e. we may take $r = z$), and that $b_{pq} \cap P_r$ connects $q^*$ and $z$. Let $b_{pq} \cap P_r$ be oriented from $q^*$ to $z$. We assign $b_{pq}$ the same orientation.

###### Lemma 4.

(i) The bisector $b_{pq}$ does not intersect $\Pi(p, z)$ or $\Pi(q, z)$ in any point other than $z$. (ii) The part of the bisector $b_{pq}$ that lies in $P_r$ is contained in $\triangle pqz$.

###### Proof.

By Lemma 2 the shortest path from $p$ to any point, so in particular to $z$, intersects $b_{pq}$ in at most one point. Since, by definition, $z$ lies on $b_{pq}$, the shortest path $\Pi(p, z)$ does not intersect $b_{pq}$ in any other point. The same applies for $\Pi(q, z)$, thus proving (i). For (ii) we observe that any internal point of $\Pi(p, z)$ is closer to $p$ than to $q$, and any internal point of $\Pi(q, z)$ is closer to $q$ than to $p$. Thus, $\Pi(p, z)$ and $\Pi(q, z)$ must be separated by $b_{pq}$. It follows that $b_{pq} \cap P_r$ lies inside $\triangle pqz$. ∎

###### Lemma 5.

All vertices of $b_{pq} \cap P_r$ lie on extension segments of the vertices of the pseudo-triangle $\triangle pqz$.

###### Proof.

Assume by contradiction that $x$ is a vertex of $b_{pq} \cap P_r$ that is not defined by an extension segment of a vertex in $\triangle pqz$. Instead, let $\overline{vw}$ be the extension segment containing $x$, say of the shortest path tree $T_p$, and let $v$ be the starting vertex of $\overline{vw}$. So $\Pi(p, x)$ has $v$ as its last internal vertex.

By Lemma 4, $x$ is contained in $P_r$ and thus in $\triangle pqz$. Hence, $x \in \triangle pqz$. Since $v \not\in \triangle pqz$, the shortest path from $p$ to $x$ intersects the boundary of $\triangle pqz$ in some point $y$. See Fig. 4(a). We then distinguish two cases: either $y$ lies on $\Pi(p, q) \cup \Pi(p, z)$, or $y$ lies on $\Pi(q, z)$.

In the former case this means there are two distinct shortest paths between $p$ and $y$, that bound a region that is non-empty, that is, it has positive area. Note that this region exists, even if $v$ lies on the shortest path from $p$ to its corresponding corner $c_p$ but not on $\triangle pqz$ itself (i.e. $v \not\in \triangle pqz$). Since $P$ is a simple polygon, this region is empty of obstacles, and we can shortcut one of the paths to $y$. This contradicts that such a path is a shortest path.

In the latter case the point $y$ lies on $\Pi(q, z)$, which means that it is at least as close to $q$ as it is to $p$. Since $v$ is clearly closer to $p$ than to $q$, this means that the shortest path from $p$ to $x$ (that visits $y$ and $v$) intersects $b_{pq}$ somewhere between $y$ and $v$. Since it again intersects $b_{pq}$ at $x$, we now have a contradiction: by Lemma 2, any shortest path from $p$ intersects $b_{pq}$ at most once. The lemma follows. ∎

Let $E_p$ denote the extension segments of the vertices of $\Pi(p, q)$ and $\Pi(p, z)$, ordered as encountered when walking from $q$ via $p$ to $z$, and clipped to $\triangle pqz$. See Fig. 2(b). We define $E_q$ analogously.

###### Lemma 6.

All vertices of $b_{pq} \cap P_r$ lie on clipped extension segments in $E_p \cup E_q$.

###### Proof.

By Lemma 5 all vertices of $b_{pq}$ in $P_r$ lie on extension segments of vertices of $\triangle pqz$. Furthermore, by Lemma 4 all these vertices lie in $\triangle pqz$. Hence, it suffices to clip all extension segments to $\triangle pqz$ (or even $\triangle pqz \cap P_r$). For all vertices on $\Pi(q, z)$ the extension segments (with respect to $T_p$) are disjoint from the interior of $\triangle pqz$. It follows that for site $p$, only the clipped extension segments from vertices on $\Pi(p, q)$ and $\Pi(p, z)$ are relevant. Analogously, for site $q$, only the clipped extension segments from vertices on $\Pi(q, p)$ and $\Pi(q, z)$ are relevant. ∎

###### Observation 7.

The extension segments in $E_p$ are all pairwise disjoint, start on $\Pi(p, q)$ or $\Pi(p, z)$, and end on $\Pi(q, z)$.

By Corollary 3.29 of Aronov [2] every (clipped) extension segment in $E_p$ intersects $b_{pq}$ (and thus $b_{pq} \cap P_r$) at most once. Therefore, every such extension segment splits the bisector in two. Together with Lemma 6 and Observation 7 this now gives us sufficient information to efficiently binary search among the vertices of $b_{pq} \cap P_r$ when we have (efficient) access to $E_p$ and $E_q$.

###### Lemma 8.

Consider extension segments $e_i$ and $e_j$, with $i < j$, in $E_p$. If $e_i$ intersects $b_{pq}$ then so does $e_j$.

###### Proof.

It follows from Lemma 4 that $b_{pq}$ intersects the boundary of $\triangle pqz$ only in $z$ and in a point on $\Pi(p, q)$. Thus, $b_{pq}$ partitions $\triangle pqz$ into a $p$-side, containing $c_p$, and a $q$-side, containing $c_q$. Since the extension segments also partition $\triangle pqz$, it then follows that an extension segment in $E_p$ intersects $b_{pq}$ if and only if its starting point lies in the $p$-side and its ending point lies in the $q$-side. By Observation 7 all segments in $E_p$ end on $\Pi(q, z)$. Hence, they end on the $q$-side. We finish the proof by showing that if $e_i$ starts on the $p$-side, so must $e_j$, with $j > i$.

The extension segments of vertices in $\Pi(p, z)$ trivially have their start point on the $p$-side. It thus follows that they all intersect $b_{pq}$. For the extension segments of vertices in $\Pi(p, q)$ the ordering is such that the distance to $p$ is monotonically decreasing. Since $b_{pq}$ intersects $\Pi(p, q)$ exactly once, the vertices on the $p$-side form a suffix of this order. Hence, if $e_i$ intersects $b_{pq}$, and thus starts on the $p$-side, so does $e_j$, with $j > i$. ∎

###### Lemma 9.

Consider extension segments $e_i$ and $e_j$, with $i < j$, in $E_p$. If $e_i$ intersects $b_{pq}$ in $P_r$ then so does $e_j$.

###### Proof.

From Lemma 8 it follows that if $e_i$ intersects $b_{pq}$ then so does $e_j$, with $j > i$. So, we only have to show that if $e_i$ intersects $b_{pq}$ in $P_r$ then so does $e_j$. Since the extension segments in $E_p$ are pairwise disjoint, it follows that if $e_i$ intersects $b_{pq}$, say in a point $x$, then $e_j$, with $j > i$, must intersect $b_{pq}$ on the subcurve between $x$ and $z$. Since $b_{pq}$ intersects $d$ at most once (Lemma 3), and $z$ lies in $P_r$, it follows that this part of the curve, and thus its intersection with $e_j$, also lies in $P_r$. ∎

###### Corollary 10.

The segments in $E_p$ that define a vertex of $b_{pq} \cap P_r$ form a suffix of $E_p$. That is, there is an index $\ell$ such that $e_\ell, \ldots, e_{|E_p|}$ is exactly the set of extension segments in $E_p$ that define a vertex of $b_{pq} \cap P_r$.

When we have $E_p$ and the point $q^*$, we can find the value $\ell$ from Corollary 10 in $O(\log^2 m)$ time as follows. We binary search along $\Pi(q, p)$ and $\Pi(p, z)$ to find the first vertex $v$ that is closer to $p$ than to $q$. For all vertices after $v$, the corresponding extension segment intersects $b_{pq}$. To find the first segment that intersects $b_{pq}$ in $P_r$, we find the first index $\ell$ for which the extension segment intersects $d$ below $q^*$. In total this takes $O(\log^2 m)$ time.
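By Corollary 10 the intersecting segments form a suffix, so finding $\ell$ amounts to locating the flip point of a monotone predicate. A generic version of that search (our own helper, with a toy numeric predicate in place of the geodesic-distance comparisons, each of which costs $O(\log m)$ in our setting):

```python
def first_true(n, pred):
    # binary search for the first index i in [0, n) with pred(i) True,
    # assuming pred is monotone (False ... False True ... True);
    # returns n if pred is False everywhere
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# toy stand-in: segment i "intersects the diagonal below q*" iff the height
# of its clipped endpoint drops below the threshold; heights are made up
heights = [5.0, 4.2, 3.1, 2.0, 0.5]
qstar = 2.5
ell = first_true(len(heights), lambda i: heights[i] < qstar)
```

With $O(\log m)$ work per predicate evaluation and $O(\log m)$ evaluations, this matches the $O(\log^2 m)$ bound claimed above.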

Let $E'_p = e_\ell, \ldots, e_{|E_p|}$ be the ordered set of extension segments from $E_p$ that define a vertex of $b_{pq} \cap P_r$. Similarly, let $E'_q$ be the suffix of extension segments from $E_q$ that define a vertex of $b_{pq} \cap P_r$.

###### Observation 11.

Let $e$ be an extension segment in $E'_p$, and let $x$ be the vertex of $b_{pq}$ on $e$. Let $e'$ be the last extension segment in $E'_q$ such that $e'$ intersects $e$ in a point closer to the starting point of $e$ than $x$. See Fig. 6. The vertex $x'$ of $b_{pq}$ corresponding to $e'$ occurs before $x$, that is, $x' \prec x$ along the oriented bisector.

###### Proof.

By the definition of $e'$ it follows that the intersection point of $e'$ and $e$ lies between the starting point of $e$ and the intersection of $e$ with $b_{pq}$. See Fig. 6. Thus, the intersection point of $e'$ with $b_{pq}$ lies on the part of the bisector that precedes $x$, and hence $x' \prec x$. ∎

###### Lemma 12.

Let $e_j$ be the $j$-th segment of $E'_p$, let $x$ be the vertex of $b_{pq}$ on $e_j$, and let $c$ be the number of extension segments in $E'_q$ that intersect $e_j$ in a point closer to the starting point of $e_j$ than $x$. Then $e_j$ contains vertex $j + c$ of $b_{pq} \cap P_r$.

###### Proof.

It follows from Corollary 10 and the definition of $E'_p$ and $E'_q$ that all vertices of $b_{pq} \cap P_r$ lie on extension segments in $E'_p \cup E'_q$. Together with Corollary 3.29 of Aronov [2] we get that every such extension segment defines exactly one vertex of $b_{pq} \cap P_r$. Since the bisector intersects the segments in $E'_p$ in order, there are exactly $j - 1$ vertices of $b_{pq} \cap P_r$ before $x$ defined by the extension segments in $E'_p$. Let $e'$ be the last extension segment in $E'_q$ that intersects $e_j$ in a point closer to the starting point of $e_j$ than $x$. Observation 11 gives us that this extension segment defines a vertex $x'$ of $b_{pq}$ with $x' \prec x$. We then again use that $b_{pq}$ intersects the segments in $E'_q$ in order, and thus there are exactly $c$ vertices of $b_{pq} \cap P_r$ before $x$ defined by segments in $E'_q$. Hence, $x$ is vertex $j + c$ of $b_{pq} \cap P_r$. ∎

It follows from Lemma 12 that if we have $E'_p$ and $E'_q$, and we have efficient random access to their segments, we also have efficient access to the vertices of the bisector $b_{pq} \cap P_r$. Next, we argue that, with some minor augmentations, the preprocessing of $P$ into a two-point query data structure by Guibas and Hershberger gives us such access.

#### Accessing $E_p$ and $E_q$.

The data structure of Guibas and Hershberger can return the shortest path between two query points $s$ and $t$, represented as a balanced tree [10, 11]. This tree is essentially a persistent balanced search tree on the edges of the path. Every node of the tree can access an edge of the path in constant time, and the edges are stored in order along the path. The tree is balanced, and supports concatenating two paths efficiently. To support random access to the vertices of $E_p$ and $E_q$ we need two more operations: we need to be able to access the $k$-th edge or vertex in a path, and we need to be able to find the longest prefix (or suffix) of a shortest path that forms a convex chain. This last operation will allow us to find the corners $c_p$ and $c_q$ of $\triangle pqz$. The data structure as presented by Guibas and Hershberger does not support these operations directly. However, with two simple augmentations we can support them in $O(\log m)$ time. In the following, we use the terminology of Guibas and Hershberger [10].

The geodesic between $s$ and $t$ is returned as a balanced tree. The leaves of
this tree correspond to, what Guibas and Hershberger call, *fundamental
strings*: two convex chains joined by a tangent. The individual convex chains
are stored as balanced binary search trees. The internal nodes have two
or three children, and represent *derived strings*: the concatenation of
the fundamental strings stored in their descendant leaves. See
Fig. 7 for an illustration.

To make sure that we can access the $k$-th vertex or edge on a shortest path in $O(\log m)$ time, we augment the trees to store subtree sizes. It is easy to see that we can maintain these subtree sizes without affecting the running time of the other operations.
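Rank access via subtree sizes is the standard order-statistics augmentation. A stripped-down sketch, with our own classes mimicking the leaf-chain/concatenation shape of the Guibas–Hershberger trees:

```python
class Leaf:
    """A leaf stores a chunk of consecutive path vertices (cf. a convex chain)."""
    def __init__(self, verts):
        self.verts = verts
        self.size = len(verts)

class Concat:
    """An internal node represents the concatenation of its children's paths."""
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.size = left.size + right.size   # subtree size enables rank access

def kth(node, k):
    # return the k-th vertex (0-indexed) of the represented path,
    # descending by subtree sizes in time proportional to the tree height
    if isinstance(node, Leaf):
        return node.verts[k]
    if k < node.left.size:
        return kth(node.left, k)
    return kth(node.right, k - node.left.size)
```

Since the trees are balanced, the descent visits $O(\log m)$ nodes.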

To make sure that we can find the longest prefix (suffix) of a shortest path that is convex we do the following. With each node in the tree we store a boolean flag $\mathit{convex}$ that is true if and only if the sub-path it represents forms a convex chain. It is easy to maintain this flag without affecting the running time of the other operations. For leaves of the tree (fundamental strings) we can test this by checking the orientation of the tangent with its two adjacent edges of its convex chains. These edges can be accessed in constant time. Similarly, for internal nodes (derived strings) we can determine if the concatenation of the shortest paths represented by its children is convex by inspecting the $\mathit{convex}$ field of its children, and checking the orientation of only the first and last edges of these shortest paths. We can access these edges in constant time. This augmentation allows us to find the last vertex $v$ of a shortest path $\Pi(s, t)$ such that $\Pi(s, v)$ is a convex chain in $O(\log m)$ time. We can then obtain $\Pi(s, v)$ itself (represented by a balanced tree) in $O(\log m)$ time by simply querying the data structure with points $s$ and $v$. Hence, we can compute the longest prefix (or suffix) on which a shortest path forms a convex chain in $O(\log m)$ time.
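The orientation test underlying the convexity flag, and the resulting "longest convex prefix" computation, can be illustrated on a flat polyline. This is our own helper and it scans linearly; the text instead maintains the flag bottom-up in the balanced tree so that the prefix is found in $O(\log m)$ time. For simplicity the sketch treats "convex" as "only left turns".

```python
def orient(a, b, c):
    # twice the signed area of triangle abc: > 0 for a left turn,
    # < 0 for a right turn, 0 for collinear points
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def longest_convex_prefix(path):
    # largest index k such that path[0..k] makes only left turns
    for i in range(2, len(path)):
        if orient(path[i - 2], path[i - 1], path[i]) < 0:  # convexity broken
            return i - 1
    return len(path) - 1
```

In the tree-based version, exactly this orientation test is applied to the boundary edges of two sibling subtrees when their $\mathit{convex}$ flags are combined.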

Given point $z$, the above augmentations allow us to access $E_p$ and $E_q$ in $O(\log m)$ time. We query the data structure to get the tree representing $\Pi(p, z)$, and, using our augmentations, find its longest convex suffix $C_p$. Similarly, we find the longest convex suffix $C_q$ of $\Pi(q, z)$. Observe that the corners $c_p$ and $c_q$ of $\triangle pqz$ lie on $C_p$ and $C_q$, respectively (otherwise $C_p$ and $C_q$ would not be convex chains). Unfortunately, we cannot directly use the same approach to find the part of $\Pi(p, q)$ that is convex, as it may both start and end with a piece that is non-convex (with respect to $\triangle pqz$). However, consider the extension segment of the first edge of the shortest path from $z$ to $p$ (see Fig. 8). This extension segment intersects the shortest path $\Pi(p, q)$ exactly once, in a point $w$. By construction, this point must lie in the pseudo-triangle $\triangle pqz$. Thus, we can decompose $\Pi(p, q)$ into two sub-paths, one of which starts with a convex chain and the other ends with a convex chain. Hence, for those chains we can use the $\mathit{convex}$ fields to find the vertices $a$ and $b$ such that $\Pi(a, w)$ and $\Pi(w, b)$ are convex, and thus $\Pi(a, b)$ is convex. Finally, observe that $w$ must occur on the side of $\triangle pqz$ connecting $c_p$ and $c_q$, otherwise we could shortcut $\Pi(p, q)$. See Fig. 8. Hence, $a = c_p$ and $b = c_q$ are the two corners of the pseudo-triangle $\triangle pqz$. We can find $w$ in $O(\log m)$ time by a binary search on $\Pi(p, q)$. Finding the longest convex chains starting and ending in $w$ also takes $O(\log m)$ time, as does computing the shortest path $\Pi(p, q)$. It follows that given $z$, we can compute (a representation of) $E_p$ and $E_q$ in $O(\log m)$ time.

With the above augmentations, and using Lemma 12, we then obtain the following result.

###### Lemma 13.

Given the points $q^*$ and $z$ where $b_{pq}$ intersects $d$ and the outer boundary of $P_r$, respectively, we can access the $j$-th vertex of $b_{pq} \cap P_r$ in $O(\log m)$ time.

###### Proof.

Recall that the data structure of Guibas and Hershberger [10] reports the shortest path between query points $s$ and $t$ as a balanced tree. We augment these trees such that each node knows the size of its subtree. It is easy to do this using only constant extra time and space, and without affecting the other operations. We can then simply binary search on the subtree sizes, using Lemma 12 to guide the search. ∎

#### Finding $q^*$ and $z$.

We first show that we can find the point $q^*$ where $b_{pq}$ enters $P_r$ (if it exists), and then show how to find $z$, the other point where $b_{pq}$ intersects $\partial P_r$.

###### Lemma 14.

We can find $q^*$ in $O(\log^2 m)$ time.

###### Proof.

Consider the geodesic distance of $p$ to the diagonal $d$ as a function $f_p(t)$, parameterized by a value $t$ along $d$. Similarly, let $f_q(t)$ be the distance function from $q$ to $d$. Since $b_{pq}$ intersects $d$ exactly once –namely in $q^*$– the predicate $f_p(t) \le f_q(t)$ changes from True to False (or vice versa) exactly once. Query the data structure of Guibas and Hershberger [10] to get the funnel representing the shortest paths from $p$ to the points in $d$. Let $a_1, \ldots, a_h$, with $h = O(m)$, be the intersection points of the extension segments of vertices in the funnel with $d$. Similarly, compute the funnel representing the shortest paths from $q$ to $d$. The extension segments in this funnel intersect $d$ in points $b_1, \ldots, b_{h'}$, with $h' = O(m)$. We can now simultaneously binary search among the $a_i$ and the $b_i$ to find the smallest interval $I$ bounded by points in $\{a_1, \ldots, a_h\} \cup \{b_1, \ldots, b_{h'}\}$ in which the predicate flips from True to False. Hence, $I$ contains $q^*$. Computing the distance from $p$ (or $q$) to some $a_i$ (or $b_i$) takes $O(\log m)$ time, and thus we can find $I$ in $O(\log^2 m)$ time. On the interval $I$ both $f_p$ and $f_q$ are simple hyperbolic functions consisting of a single piece, and thus we can compute $q^*$ in constant time. ∎

Consider the vertices $v_1, \ldots, v_h$ of $P_r$ in clockwise order, where $\overline{v_h v_1} = d$ is the diagonal. Since the bisector intersects the outer boundary of $P_r$ in only one point $z$, there is a vertex $v_j$ such that $v_1, \ldots, v_j$ are all closer to $p$ than to $q$, and $v_{j+1}, \ldots, v_h$ are all closer to $q$ than to $p$. We can thus find this vertex $v_j$ using a binary search. This takes $O(\log^2 m)$ time, as we can compute $\pi(p, v_i)$ and $\pi(q, v_i)$ in $O(\log m)$ time. It then follows that $z$ lies on the edge $\overline{v_j v_{j+1}}$. We can find the exact location of $z$ using a similar approach as in Lemma 14. This takes $O(\log^2 m)$ time. Thus, we can find $z$ in $O(\log^2 m)$ time. We summarize our results from this section in the following theorem.

###### Theorem 15.

Let $P$ be a simple polygon with $m$ vertices that is split into $P_\ell$ and $P_r$ by a diagonal $d$. In $O(m)$ time we can preprocess $P$, such that for any pair of points $p$ and $q$ in $P_\ell$, we can compute a representation of $b_{pq} \cap P_r$ in $O(\log^2 m)$ time. This representation supports accessing any of its vertices in $O(\log m)$ time.

## 4 Rebuilding the Forest

Consider a level in the balanced decomposition of $P$ at which a diagonal $d$
splits a subpolygon $P'$ of $P$ into $P_\ell$ and $P_r$, and assume without loss of
generality that $d$ is vertical and that $P_r$ lies right of $d$. Recall that
we partition the sites in $P_\ell$ into $O(\sqrt{n})$ groups (subsets). When we
insert a new site into a group $S_i$ of size $k$, or delete a site from $S_i$,
we rebuild the forest $F_i$, representing the topology of the Voronoi
diagram of $S_i$ in $P_r$, from scratch. We will now show that we can compute
$F_i$ efficiently by considering it as an abstract Voronoi
diagram [16]. Assuming that certain geometric primitives like
computing the intersections between “related” bisectors take constant time, we
can construct an abstract Voronoi diagram in expected $O(k \log k)$
time [16]. We will show that $F_i$ is actually a
*Hamiltonian* abstract Voronoi diagram, which means that it can be
constructed in $O(k)$ such primitive operations [15]. We show this in
Section 4.1. In Section 4.2 we
discuss the geometric primitives used by the algorithm of Klein and
Lingas [15]; essentially computing (a representation
of) the concrete Voronoi diagram of five sites. We show that we can implement
these primitives in $O(\log^2 m)$ time by computing the intersection point
between two “related” bisectors $b_{pq}$ and $b_{qs}$. This then gives us
an $O(k \log^2 m)$ time algorithm for constructing $F_i$. Finally, in
Section 4.3 we argue that having only the topological structure $F_i$ is sufficient to find the site in $S_i$ closest to a query point
$q \in P_r$.

### 4.1 Hamiltonian abstract Voronoi Diagrams

In this section we show that we can consider $F_i$ as a Hamiltonian abstract
Voronoi diagram. A Voronoi diagram is *Hamiltonian* if there is a curve
–in our case the diagonal $d$– that intersects all regions exactly once, and
furthermore this holds for all subsets of the
sites [15]. Let $S_i$ be the set of sites in $P_\ell$
that we consider, and let $S'$ be the subset of sites from $S_i$ whose Voronoi
regions intersect $d$, and thus occur in $F_i$.

###### Lemma 16.

The Voronoi diagram of $S'$ in $P_r$ is a Hamiltonian abstract Voronoi diagram.

###### Proof.

By Lemma 3 any bisector $b_{pq}$ intersects the diagonal $d$ at most once. This implies that for any subset of the sites, so in particular for $S'$, the diagonal $d$ intersects each Voronoi region at most once. By definition, $d$ intersects all Voronoi regions of the sites in $S'$ at least once. What remains is to show that this also holds for any subset of $S'$. This follows since the Voronoi region of a site $s$ with respect to $S'$ is contained in the Voronoi region of $s$ with respect to any subset $A \subseteq S'$ containing $s$. ∎

#### Computing the Order Along $d$.

We will use the algorithm of Klein and Lingas [15] to construct $F_i$. To this end, we need the set of sites $S'$ whose Voronoi regions intersect $d$, and the order in which they do so. Next, we show that we can maintain the sites in $S_i$ so that we can compute this information in $O(k \log m)$ time.

###### Lemma 17.

Let $s_1, \ldots, s_k$ denote the sites in $S_i$ by increasing distance from the bottom-endpoint of $d$, and let $S' = s_{i_1}, \ldots, s_{i_g}$ be the subset of sites whose Voronoi regions intersect $d$, ordered along $d$ from bottom to top. For any pair of sites $s_{i_a}$ and $s_{i_b}$, with $a < b$, we have that $i_a < i_b$.

###### Proof.

Since $s_{i_a}$ and $s_{i_b}$ both contribute Voronoi regions intersecting $d$, their bisector must intersect $d$ in some point $x$ in between these two regions. Since the region of $s_{i_a}$ lies below that of $s_{i_b}$, it then follows that all points on $d$ below $x$, so in particular the bottom endpoint of $d$, are closer to $s_{i_a}$ than to $s_{i_b}$. Thus, $i_a < i_b$. ∎

Lemma 17 suggests a simple iterative algorithm for extracting $S'$ from $S_i$.

###### Lemma 18.

Given $s_1, \ldots, s_k$, we can compute $S'$ from $S_i$ in $O(k \log m)$ time.

###### Proof.

We consider the sites in $S_i$ in increasing order, while maintaining $S'$ as a stack. More specifically, we maintain the invariant that when we start to process $s_i$, the stack contains exactly those sites among $s_1, \ldots, s_{i-1}$ whose Voronoi region intersects $d$, in order along $d$ from bottom to top.

Let $s_i$ be the next site that we consider, and let $s_j$, for some $j < i$, be the site currently at the top of the stack. We now compute the distance between $s_i$ and the topmost endpoint of $d$. If this distance is larger than $\pi(s_j, \cdot)$ to that endpoint, it follows that the Voronoi region of $s_i$ does not intersect $d$: since the bottom endpoint of $d$ is also closer to $s_j$ than to $s_i$, all points on $d$ are closer to $s_j$ than to $s_i$.
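A Euclidean stand-in for this stack procedure, with the diagonal taken as the vertical segment from $(0, 0)$ to $(0, H)$. All names are ours; the sketch assumes sites in general position (distinct $y$-coordinates, and each site compared against stack tops with smaller $y$, as in the example) so that each bisector crosses the segment's supporting line once, with the later-processed site winning above the crossing. In the paper every distance comparison is a geodesic computation costing $O(\log m)$.

```python
def cross_y(a, b):
    # height on the line x = 0 at which a and b are equidistant;
    # requires a[1] != b[1]; for a[1] < b[1], b is closer above this height
    num = (a[0] ** 2 + a[1] ** 2) - (b[0] ** 2 + b[1] ** 2)
    return num / (2 * (a[1] - b[1]))

def restricted_voronoi_on_segment(sites, H):
    """sites must be sorted by distance to the bottom endpoint (0, 0);
    returns those sites whose Voronoi region meets the segment {0} x [0, H],
    ordered along the segment from bottom to top (cf. Lemmas 17 and 18)."""
    stack = []                       # (site, start): site owns the segment from `start` up
    for s in sites:
        start = 0.0
        while stack:
            t, t_start = stack[-1]
            c = cross_y(t, s)        # above height c, s is closer than t
            if c <= t_start:         # t's interval on the segment vanishes: pop
                stack.pop()
            else:
                start = c
                break
        if start < H:                # s owns a nonempty piece of the segment
            stack.append((s, start))
    return [t for t, _ in stack]
```

Each site is pushed and popped at most once, so the scan performs $O(k)$ distance comparisons in total, matching the bound of Lemma 18.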
