1 Introduction
Nearest-neighbor classification is a widely used supervised learning technique in which, from training data consisting of geometric points with discrete classifications, one infers the classification of new points as equal to the classification of the nearest training point
[18]. This technique has motivated many important developments in exact and approximate nearest-neighbor searching, including the construction of Voronoi diagrams [6, 13, 21, 27, 42], the development of point-location data structures for performing queries in these diagrams [27, 38], quadtree-based data structures for approximate nearest neighbors in spaces of moderate dimension [5, 4, 11, 24], and locality-sensitive hashing for higher-dimensional data [29, 25, 19, 3].

Voronoi diagrams have high worst-case complexity even for spaces of moderate dimension: for n points in d dimensions, the Voronoi diagram can have complexity Θ(n^⌈d/2⌉) [20, 32, 39]. Approximate nearest-neighbor searching data structures are better behaved, but high-dimensional classification problems using them still have a complexity only a small factor faster than a naive scan of all training data for each new input point. To speed up these methods, it is natural to consider a preprocessing stage that reduces the training set to a smaller subset, its relevant points, before building these data structures [15, 7]. Here, we define a point to be relevant if it is needed for correct nearest-neighbor classification: removing it would change the nearest-neighbor classification of some points of R^d. Removing all points that are not relevant leaves all nearest-neighbor classifications unchanged (see Theorem 1). Although this has no benefit in the worst case, it is reasonable to expect that, on average, for smooth-enough input distributions and smooth-enough decision boundaries (we leave those terms deliberately vague and non-rigorous), a training set of n points in d dimensions may be reduced to a much smaller subset of relevant points, which could in some cases be a significant savings.
In this work we investigate a simple algorithm for quickly identifying which points of a training set are relevant and which are not. Our algorithm is output-sensitive: it is faster when there are few relevant points, and slower when there are many. It is based on the solutions to two previously studied geometric problems: the construction of Euclidean minimum spanning trees (spanning trees for the complete graph on a given set of points, weighted by the Euclidean distances between pairs of points), and the identification of extreme points, the vertices of the convex hull of a given set of points.
1.1 New results
Given a set of n training points with discrete classifications (not assumed to be binary nor in general position), our algorithm performs the following steps, using a single Euclidean minimum spanning tree construction followed by one extreme-point computation per relevant point. As we prove, it finds exactly the set of relevant points.
Algorithm 1 (relevant points of a training set).

Find a Euclidean minimum spanning tree of the training set.

Find the edges of the tree whose two endpoints have different classifications, and initialize the set R of relevant points to consist of the endpoints of these edges.

For each relevant point q added to R (either initially or within this loop), perform the following steps:

Invert through a unit sphere centered at q all of the training points whose classification differs from that of q, producing a point set I_q, including also q itself in I_q.

Identify the extreme points of I_q.

Add to R the training points corresponding to extreme points of I_q.
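As a concrete illustration, the following minimal sketch (our own, with hypothetical function names) specializes the steps above to the plane, where the extreme-point subproblem can be solved by an ordinary convex hull computation (Andrew's monotone chain) instead of linear programming, and the minimum spanning tree is built with the naive Jarník/Prim method. It aims at clarity on small inputs rather than the output-sensitive bounds analyzed later.

```python
def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def emst_edges(pts):
    # Naive Jarnik/Prim on the complete graph, O(n^2).
    best = {i: (dist2(pts[0], pts[i]), 0) for i in range(1, len(pts))}
    edges = []
    while best:
        i = min(best, key=lambda j: best[j][0])
        edges.append((best[i][1], i))
        del best[i]
        for j in best:
            d = dist2(pts[i], pts[j])
            if d < best[j][0]:
                best[j] = (d, i)
    return edges

def hull_vertices(pts):
    # Andrew's monotone chain; returns the indices of the hull *vertices*
    # (collinear non-vertex points are excluded, matching "extreme").
    idx = sorted(range(len(pts)), key=lambda i: pts[i])
    def chain(order):
        h = []
        for i in order:
            while len(h) >= 2:
                (ox, oy), (ax, ay), (bx, by) = pts[h[-2]], pts[h[-1]], pts[i]
                if (ax - ox) * (by - oy) - (ay - oy) * (bx - ox) <= 0:
                    h.pop()
                else:
                    break
            h.append(i)
        return h[:-1]
    return set(chain(idx) + chain(idx[::-1]))

def relevant_points(pts, labels):
    # Phase 1: endpoints of bichromatic minimum spanning tree edges.
    R = [i for e in emst_edges(pts) if labels[e[0]] != labels[e[1]] for i in e]
    R = list(dict.fromkeys(R))
    queue = list(R)
    # Phase 2: for each relevant point q, invert the differently classified
    # points through a unit circle centered at q; hull vertices of the
    # inverted set (plus q, mapped to the center) give new relevant points.
    while queue:
        q = queue.pop()
        others = [i for i in range(len(pts)) if labels[i] != labels[q]]
        inv = []
        for i in others:
            dx, dy = pts[i][0] - pts[q][0], pts[i][1] - pts[q][1]
            d2 = dx * dx + dy * dy
            inv.append((dx / d2, dy / d2))
        inv.append((0.0, 0.0))  # q itself maps to the center
        for h in hull_vertices(inv):
            if h < len(others) and others[h] not in R:
                R.append(others[h])
                queue.append(others[h])
    return set(R)
```

The sketch returns the indices of the relevant points; by Theorem 1 below, nearest-neighbor classification restricted to these indices agrees with classification by the full training set.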

The steps of the algorithm are illustrated in Fig. 1. The intuition behind Algorithm 1 is that the minimum spanning tree phase of the algorithm finds a piece of each component of the decision boundary, a wall between the Voronoi cells of two relevant points. The extreme-point phase finds the neighbors of each relevant point in the Voronoi diagram, including the relevant points that define neighboring walls of the decision boundary. In this way, it expands each piece of boundary to the full component of the boundary, from each wall to its adjacent walls in the component, without finding any false positives, and without needing to know anything about the topology of the boundary. The algorithm's efficiency comes from output-sensitivity both in the number of calls to the extreme-point subproblem and within the algorithms for this subproblem. Both the Euclidean minimum spanning tree and the extreme-point subproblem admit either simple and dimension-independent algorithms or asymptotically faster but more dimension-specific and more complex algorithms; unfortunately, we do not know of algorithms that are both simple and optimally efficient. We analyze the same overall algorithm both ways, using both kinds of algorithm for the subproblems. This analysis gives us the following results:

Using simple algorithms, for an input of bounded dimension with k relevant points out of n training points, we can identify the relevant points in time O(n² + nk²). For an input of unbounded dimension, we can identify the relevant points using O(nk) linear programs.

Using more complex subroutines for the Euclidean minimum spanning tree and extreme points, we can identify the relevant points in randomized expected time O(n^{4/3} log^{4/3} n + nk log k) for three-dimensional points, and in time
O(n^{2−2/(⌈d/2⌉+1)+ε} + k(nk)^{1−1/(⌊d/2⌋+1)+ε})
for d-dimensional points, for any constant ε > 0. For instance, for d = 4 this bound is O(n^{4/3+ε} + k(nk)^{2/3+ε}), and for d = 6 it is O(n^{3/2+ε} + k(nk)^{3/4+ε}).
The nearest-neighbor decision boundary, and not just the set of relevant points, can be constructed as a subcomplex of the Voronoi diagram of the relevant points, consisting of the (d − 1)-dimensional faces of the diagram that separate cells of opposite classifications. This construction can be performed using standard Voronoi diagram construction algorithms [13] in an additional time of O(k log k + k^⌈d/2⌉), independently of n. For all dimensions greater than two, these time bounds are significantly faster than the worst-case n^⌈d/2⌉ time that could be obtained by constructing the Voronoi diagram of the whole training set directly and then using its faces to identify the classification decision boundary.
We note that the set of relevant points found by Algorithm 1 is not necessarily the smallest set of points that would have the same decision boundary, or the smallest subset of the given points that would have the same decision boundary. It is, rather, the set of all training points whose omission from the whole data set (or from the resulting subset of relevant points) would change the decision boundary. Finding the smallest set of points with the same decision boundary, in high dimensions, seems likely to be a much more difficult task.
1.2 Related work
The problem of constructing nearest-neighbor decision boundaries, in an output-sensitive way, was considered by Bremner et al. [7], for the special case of two-dimensional training data. They showed that, for training sets of this dimension, the k relevant points and their resulting nearest-neighbor decision boundary can be found in time O(n log k).
In higher dimensions, the only work we are aware of for this problem is that of Clarkson [15], who (in our terms) gave a simple algorithm for finding the relevant points in polynomial time whenever the dimension is bounded. Our time bounds are significantly faster than Clarkson's in the cases for which this sort of thinning is particularly useful, when the number k of relevant points is much smaller than n.
We will survey algorithms for the two subroutines we use, for Euclidean minimum spanning trees and extreme points, in our discussion of these problems in the next section.
2 Preliminaries
2.1 Voronoi diagrams and Delaunay graphs
The Voronoi diagram of a finite set of points in R^d (called sites in this context) is a collection of convex polyhedra (possibly unbounded), one for each site s, consisting of the points in R^d for which s is a nearest site, one with minimum Euclidean distance to the point [6]. We call such a polyhedron the cell of s. It is an intersection of a finite system of halfspaces, the halfspaces that contain s and have as their boundaries the hyperplanes halfway between s and each other site. These hyperplanes are the perpendicular bisectors of the line segments connecting s to each other site. For our purposes it is convenient to think of the cells as closed sets, containing their boundaries, and intersecting each other at shared boundary points. Their interiors, however, are disjoint. The union of all the cells equals R^d.

The Voronoi diagram has finitely many faces, the intersections of finite sets of cells. The dimension of a face is the dimension of its affine hull. We define a wall of the Voronoi diagram to be a face of dimension d − 1, and a junction of the diagram to be a face of dimension d − 2. The affine hull of a junction, a (d − 2)-dimensional subspace of R^d, is perpendicular to a family of two-dimensional planes. If we choose a plane in this family that intersects the junction in its relative interior, at a point that is not part of any lower-dimensional face, then the junction will appear in this intersection as a point, and in a neighborhood of this point the walls that include the junction will appear as rays and the cells that include the junction will appear as convex wedges between these rays. The geometry of this structure of rays and wedges does not depend on the choice of intersecting plane. It is called the link of the junction. If the sites are in general position (no d + 2 of them belonging to a common sphere) then the link of a junction will have only three rays and three wedges, but we do not wish to assume general position.
We define the Delaunay graph of a set of sites to be a graph having the sites as vertices, with edges connecting pairs of sites when their cells intersect in a wall. We do not include edges for pairs of sites whose cells have a lower-dimensional or empty intersection. For sites in general position, this graph forms the set of edges of a simplicial complex, the Delaunay triangulation, but again we do not wish to make this general-position assumption.
2.2 Euclidean minimum spanning trees
A Euclidean minimum spanning tree of a set of point sites is just a minimum spanning tree of a complete graph, having the sites as its vertices, with edges weighted by Euclidean distance. We allow equal distances, in which case there may be more than one possible minimum spanning tree. If there are n sites, a minimum spanning tree can be constructed easily by naive algorithms in time O(n²) [23]: simply construct the weighted complete graph, and apply either Borůvka's algorithm or Jarník's algorithm (with an unordered list as priority queue), both of which take time O(n²) for dense graphs.
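The naive construction can be sketched directly; the following is our own minimal implementation of Borůvka's algorithm on the implicit complete graph, assuming distinct inter-point distances so that no tie-breaking rule is needed.

```python
import math

def boruvka_emst(pts):
    # Borůvka's algorithm on the implicit complete graph of the sites.
    # Each round attaches, for every component, its cheapest outgoing edge;
    # the number of components at least halves per round.
    n = len(pts)
    parent = list(range(n))
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = []
    while len(edges) < n - 1:
        best = {}  # component root -> (weight, i, j) of cheapest outgoing edge
        for i in range(n):
            for j in range(i + 1, n):
                ci, cj = find(i), find(j)
                if ci == cj:
                    continue
                w = math.dist(pts[i], pts[j])
                for c in (ci, cj):
                    if c not in best or w < best[c][0]:
                        best[c] = (w, i, j)
        for w, i, j in best.values():
            if find(i) != find(j):  # guard against merging twice in one round
                parent[find(i)] = find(j)
                edges.append((i, j))
    return edges
```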
For points in the plane, a standard and more efficient method of constructing a Euclidean minimum spanning tree is to construct the Delaunay triangulation of the points (perturbed if necessary to be in general position), which is guaranteed to contain a minimum spanning tree as a subgraph [41], and then apply a planar graph minimum spanning tree algorithm to the resulting graph. This method provides no advantage for worst-case analysis in higher dimensions, as the Delaunay graph can be complete [42], but we need the following related lemma for the correctness of Algorithm 1:
Lemma 1.
For an arbitrary finite set of sites in R^d, any Euclidean minimum spanning tree of the sites is a subgraph of the Delaunay graph of the sites.
Proof.
Consider any minimum spanning tree edge uv, and let m be the midpoint of the edge between sites u and v. Then m is equidistant from u and v. If any other site w were not strictly farther from m, then (by the triangle inequality) uw and vw would both be shorter than uv, allowing the tree to be made shorter by replacing edge uv by uw or vw (whichever is not already in the tree). Because we are assuming the tree to be minimum, shortening it is not possible, so all other sites must be strictly farther from m.
Let m′ be obtained by perturbing m by a small amount in any direction perpendicular to uv; any sufficiently small such perturbation preserves the property that m′ is equidistant from u and v and farther from all other sites. Therefore, the common boundary of the cells for u and v contains a (d − 1)-dimensional neighborhood of m, so m lies on a wall between these cells, and edge uv is also an edge of the Delaunay graph. ∎
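The proof's witness is easy to check numerically: for every minimum spanning tree edge, the midpoint should be no closer to any other site than to the edge's two endpoints. A small sketch, using our own naive Jarník/Prim construction:

```python
import math
import random

def prim_emst(pts):
    # Naive Jarnik/Prim on the complete graph, O(n^2) with an unordered "queue".
    best = {i: (math.dist(pts[0], pts[i]), 0) for i in range(1, len(pts))}
    edges = []
    while best:
        i = min(best, key=lambda j: best[j][0])
        edges.append((best[i][1], i))
        del best[i]
        for j in best:
            d = math.dist(pts[i], pts[j])
            if d < best[j][0]:
                best[j] = (d, i)
    return edges

random.seed(7)
sites = [(random.random(), random.random()) for _ in range(40)]
for u, v in prim_emst(sites):
    mid = ((sites[u][0] + sites[v][0]) / 2, (sites[u][1] + sites[v][1]) / 2)
    r = math.dist(mid, sites[u])  # distance to both endpoints
    # Lemma 1's witness: no other site is closer to the midpoint (up to
    # floating-point tolerance), so the midpoint lies on a Voronoi wall
    # between u and v, making uv a Delaunay edge.
    assert all(math.dist(mid, sites[w]) > r - 1e-9
               for w in range(len(sites)) if w not in (u, v))
```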
Instead, specialized higher-dimensional Euclidean minimum spanning tree algorithms proceed by reducing the problem to a collection of bichromatic closest pair problems, in which one must find the closest red–blue pair among a collection of red and blue points, combining the resulting pairs into a graph, and applying a graph minimum spanning tree algorithm to this graph. Several reductions from Euclidean minimum spanning trees to bichromatic closest pairs have been given, but for the known time bounds for bichromatic closest pairs these reductions all have the same efficiency [1, 8, 33]. Based on this approach, the following results are known [1]. (Agarwal et al. [1] state the time bound for high-dimensional Euclidean minimum spanning trees as a randomized expected time bound, but in later related work such as [2] they observe that the need for randomness can be eliminated using techniques from [34].)
Lemma 2.
A 3-dimensional Euclidean minimum spanning tree of n points can be computed in randomized expected time O(n^{4/3} log^{4/3} n). A d-dimensional Euclidean minimum spanning tree can be computed by a deterministic algorithm in time O(n^{2−2/(⌈d/2⌉+1)+ε}) for any ε > 0.
When the dimension is not constant, these methods fail to improve on the quadratic time of the naive algorithms. More strongly, for any ε > 0, the strong exponential time hypothesis implies that closest pairs, and therefore also Euclidean minimum spanning trees, of dimension polylogarithmic in n (with the polylogarithm depending on ε) cannot be found in time O(n^{2−ε}) [30].
2.3 Extreme points
The extreme points of a finite set S of points are the vertices of its convex hull, or equivalently the points of S that are on the boundary of a halfspace in which all other points of S are interior. (See, for instance, [26], p. 35, where this equivalence is stated in the form that the extreme points and exposed points of a convex polytope coincide.) Testing whether a given point p is extreme can be formulated as a linear programming feasibility problem in which we seek a vector w for which w · p > w · x for all other points x in S. The point p is extreme if and only if such a vector exists.

The dimension d equals the number of variables in this linear program. Therefore, when d is bounded, we can apply algorithms for low-dimensional linear programming, which are strongly polynomial and take linear time when the dimension is bounded, with exponential or subexponential dependence on the dimension [36, 22, 40, 16, 35, 12]. One particularly simple choice here is Seidel's algorithm, which considers the constraints of the given program in a random order, maintaining the optimal solution for the constraints seen so far, and when finding a violated constraint recurses within the subspace of one lower dimension in which that constraint is tight [40].
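In the plane, this feasibility problem has a direct geometric solution that avoids a general LP solver: a suitable w exists exactly when the vectors p − x all fit in an open halfplane through the origin. The following sketch is our own 2D specialization (not one of the cited algorithms); it returns a feasible witness vector or None.

```python
import math

def is_extreme_2d(p, others):
    # Seek w with w . (p - x) >= 1 for every other point x; in 2D this is
    # feasible exactly when the direction angles of the vectors p - x leave
    # a circular gap wider than pi.  Returns a witness w, or None.
    vecs = [(p[0] - x, p[1] - y) for (x, y) in others]
    if any(v == (0.0, 0.0) for v in vecs):
        return None  # duplicate of p: not a hull vertex
    angles = sorted(math.atan2(vy, vx) for vx, vy in vecs)
    if len(angles) == 1:
        gaps = [2 * math.pi]
    else:
        gaps = [(angles[(i + 1) % len(angles)] - angles[i]) % (2 * math.pi)
                for i in range(len(angles))]
    widest, i = max((g, i) for i, g in enumerate(gaps))
    if widest <= math.pi:
        return None  # p lies in the convex hull of the others
    # The direction opposite the middle of the widest gap has positive dot
    # product with every vector p - x; scale it so each product is >= 1.
    theta = angles[i] + widest / 2 + math.pi
    w = (math.cos(theta), math.sin(theta))
    margin = min(w[0] * vx + w[1] * vy for vx, vy in vecs)
    return (w[0] / margin, w[1] / margin)
```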
Thus, one simple way of finding all of the extreme points is to apply this linear programming approach to each point individually. However, this can be improved by the following method:
Algorithm 2 (simple algorithm for extreme points [15, 9, 37]).

Initialize the set S of extreme points to the empty set.

For each input point p, in an arbitrary order, perform the following steps:

(a) Use the linear program outlined above to test whether p is extreme for S ∪ {p}, and if so to find a vector w whose dot product with p exceeds its dot product with any point in S.

(b) If p is not extreme for S ∪ {p} (w does not exist), go on to the next point in the outer loop.

(c) Otherwise, find the input point p′ that maximizes w · p′, breaking ties lexicographically by the coordinates of the points, add p′ to S, and repeat from step (a) with the same point p.

Each iteration of the outer loop that reaches step (c) identifies a new extreme point. Thus, if there are h extreme points, all of them are identified using O(n + h) linear programs of size at most h, plus h extreme-point searches of size n, in total time O(nh) in bounded dimensions. We omit some details and any proof of correctness, for which see the references for this algorithm [15, 9, 37]. This time bound can be further improved at the cost of greater algorithmic complexity. When the dimension is two or three, and h of the given points are extreme, one can find all the extreme points in time O(n log h) using an output-sensitive algorithm for the convex hull, as (by Euler's polyhedral formula) the number of extreme points and the convex hull complexity are always within a constant factor of each other [31, 17, 14, 10]. In higher dimensions, Chan [10] gives a time bound for this problem of O(n log h + (nh)^{1−1/(⌊d/2⌋+1)} log^{O(1)} n).
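The structure of Algorithm 2 can be sketched in the plane, using the angular-gap characterization of the witness vector as a stand-in for the low-dimensional linear program. This is our own reading of the algorithm, in which a successful test repeats for the same point until its test fails or the point itself is confirmed; the function names are ours.

```python
import math

def witness_2d(p, confirmed):
    # Stand-in for the LP: w with w . p > w . x for all x in confirmed, or None.
    vecs = [(p[0] - x, p[1] - y) for (x, y) in confirmed]
    if not vecs:
        return (1.0, 0.0)
    angles = sorted(math.atan2(vy, vx) for vx, vy in vecs)
    if len(angles) == 1:
        gaps = [2 * math.pi]
    else:
        gaps = [(angles[(i + 1) % len(angles)] - angles[i]) % (2 * math.pi)
                for i in range(len(angles))]
    g, i = max((g, i) for i, g in enumerate(gaps))
    if g <= math.pi:
        return None
    t = angles[i] + g / 2 + math.pi
    return (math.cos(t), math.sin(t))

def extreme_points_2d(pts):
    S = []  # confirmed extreme points, in discovery order
    for p in pts:
        while p not in S:
            w = witness_2d(p, S)
            if w is None:
                break  # p is inside the hull of the confirmed vertices
            # The input point maximizing w (ties broken lexicographically)
            # is a new, confirmed extreme point.
            S.append(max(pts, key=lambda x: (w[0] * x[0] + w[1] * x[1], x)))
    return S
```

Each successful test confirms a new vertex, never a previously confirmed one, since the maximizer's dot product with w strictly exceeds that of every point already in S.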
2.4 Inversion and polar duality
An inversion through a sphere in Euclidean space is a transformation that maps points (other than the center of the sphere) to other points. Each point and its transformed image lie on a common ray from the center of the sphere, and the product of their distances from the center equals the squared radius of the sphere. These transformations preserve cosphericity of the transformed points, but not other properties such as distances. Polarity is a different kind of transformation, again defined by a sphere in R^d, that associates points (other than the center of the sphere) with hyperplanes (not through the center of the sphere) and vice versa. The point associated with a hyperplane h can be found by finding the point that belongs to h and is nearest to the center of the sphere, and then inverting it through the sphere. Reversing these steps, the hyperplane associated with a point p is the hyperplane through the inverted image of p, perpendicular to the line through p and the center of the sphere. Both inversion and polarity are illustrated in Fig. 2.
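Both transformations are easy to verify numerically. The sketch below (our own helper, for the unit sphere centered at the origin) checks the defining product-of-distances property of inversion, and the incidence property of polarity described next.

```python
import math

def invert(p, center=(0.0, 0.0), r=1.0):
    # Inversion through the sphere of radius r about center: the image lies
    # on the same ray, and the product of distances from the center is r^2.
    d2 = sum((pi - ci) ** 2 for pi, ci in zip(p, center))
    return tuple(ci + r * r * (pi - ci) / d2 for pi, ci in zip(p, center))

# Product of distances from the center equals the squared radius.
p = (3.0, 4.0)
q = invert(p)
assert abs(math.dist((0, 0), p) * math.dist((0, 0), q) - 1.0) < 1e-12

# Polarity for the unit sphere: the nearest point of the line
# x0 + x1 = 2 to the origin is (1, 1); inverting it gives the polar
# point y, and every point x on the line satisfies x . y = 1.
y = invert((1.0, 1.0))
for x in [(2.0, 0.0), (0.0, 2.0), (1.5, 0.5)]:
    assert abs(x[0] * y[0] + x[1] * y[1] - 1.0) < 1e-12
```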
We can choose a Cartesian coordinate system for which the given sphere is the unit sphere centered at the origin. Under these coordinates, if x is a point and y is the polar image of a hyperplane h, then x lies on h if and only if x · y = 1. If the dot product is less than one, x lies on the same side of h as the origin, and if the dot product is greater than one, x lies on the far side of h from the origin. Because the dot product is a commutative operation, the operation of polarity (taking x to a hyperplane and h to the point y) preserves incidence and sidedness.

The inversions performed in our algorithm for finding relevant points can alternatively be thought of as polarities, with the inverted image of each site representing the polar image of the hyperplane equidistant between it and the chosen relevant point q. We can formalize this intuition in the following lemma:
Lemma 3.
Let S = {p_1, …, p_m} be a set of points, let q be a point not belonging to S, and let T = {t_1, …, t_m} be the set of points obtained by inverting S through a sphere centered at q, with corresponding indexes. Then t_i is extreme in T ∪ {q} if and only if the cells for q and p_i share a wall in the Voronoi diagram of S ∪ {q}.
Proof.
Consider any particular point p_i, for which we wish to prove the statement of the lemma. Scaling the radius of the sphere of inversion scales T but does not change the property of being extreme, so the choice of radius is irrelevant to the truth of the lemma. Therefore, without loss of generality, we may assume that the radius is such that the polar hyperplane of t_i (for a different sphere of unit radius centered at q) is exactly the wall of the two-point Voronoi diagram of {q, p_i}.
If t_i is extreme, there is a hyperplane h passing through it, with all of the other points in T ∪ {q} on a single side of h. Let y be the polar of h; then y lies on the polar hyperplane of t_i, so it is equidistant from q and p_i. By the preservation of sidedness of polarity, y lies on the same side as q of each bisector hyperplane between q and another point p_j, so it is farther from all the other points than it is from q and from p_i. Because it is equidistant from q and p_i and farther from all the other sites of the Voronoi diagram, it witnesses the existence of a Voronoi wall between q and p_i in the full Voronoi diagram.
In the opposite direction, if the cells for q and p_i share a wall in the Voronoi diagram of S ∪ {q}, let y be a point in the relative interior of that wall, and let h be the hyperplane polar to y. Because y is equidistant from q and p_i, and farther from all of the other points p_j, y is on the same side as q of each bisector hyperplane between q and another point p_j. By the preservation of sidedness of polarity, point t_i lies on h, with all other points of T ∪ {q} on the same side of h as q, so h witnesses the fact that t_i is extreme in T ∪ {q}. ∎
2.5 Binary homology
Although Algorithm 1 itself is ignorant of topology, we need some topological basics for its correctness proof. Specifically, we use mod-2 homology, as described e.g. by [28]. This theory applies to a wide class of cell complexes, but we do not need to define this class carefully, because we will only apply this form of homology to the finite convex subdivisions of R^d obtained as Voronoi diagrams of finite point sets.
If U is any set, the family of all subsets of U forms a vector space over the two-element field, with the symmetric difference of sets as its vector addition operation. Subsets of the i-dimensional faces of a polyhedral decomposition of space (such as a Voronoi diagram) are called i-chains. For each i, the i-chains can be mapped to (i − 1)-chains by the boundary map ∂, which takes a single face to the set of faces on its boundary, and acts linearly on chains: if C is a chain, ∂C is the chain consisting of the (i − 1)-dimensional faces that occur an odd number of times on the boundary of faces in C. It follows directly from its action on convex polytopes that, for any C, ∂∂C = ∅: the boundary of a boundary is empty. Conversely, if C is a chain with empty boundary – that is, if ∂C = ∅ – then C is itself a boundary: there exists a chain B such that C = ∂B. In other spaces than R^d, there can exist chains with empty boundary that are not themselves boundaries, corresponding to nontrivial elements of the homology groups of the space, but R^d has trivial homology, so such chains do not exist.

For a set of d-dimensional training points with classifications, we consider the subgraph of the Delaunay graph consisting of the edges connecting cells with the same classification as each other, and we define a decision component to be a connected component of this subgraph; see Fig. 3. As a set of d-dimensional Voronoi cells, it can be considered as a chain. If K is a decision component, form a graph G_K whose vertices are the Voronoi walls of ∂K, with two vertices adjacent when the two walls they represent meet in a junction. We define a decision boundary component to be a connected component of G_K for any decision component K. Because it represents a set of Voronoi walls, it can be considered as a chain.
Lemma 4.
Every decision boundary component is the boundary of a set of Voronoi cells.
Proof.
Because ∂∂K = ∅, and a set of walls has an empty boundary if and only if the walls touch each junction an even number of times, each junction is touched an even number of times by the walls of ∂K. All of the walls touching a given junction are mutually adjacent in G_K, and so belong to a single decision boundary component; therefore, the same thing is true in every decision boundary component. Thus, if C is a decision boundary component, ∂C = ∅. It follows from the triviality of the homology of R^d that there exists a chain B of cells such that C = ∂B. ∎
If C is a decision boundary component, we define a side of C to be a set of Voronoi cells having C as its boundary. By Lemma 4, every decision boundary component has at least one side. (Actually, at least two, because the complement of a side is another side.)
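The chain arithmetic used in this subsection is concrete enough to demonstrate on a toy complex. The sketch below is our own encoding of a 3 × 3 grid of unit squares, with each face named by its doubled midpoint coordinates; it checks that the boundary of a boundary is always empty.

```python
from itertools import product

# Chains over the two-element field are sets of faces; chain addition is
# symmetric difference.  Cells are unit squares (i, j); edges have exactly
# one odd doubled coordinate, vertices have two even ones.
cells = [(i, j) for i, j in product(range(3), repeat=2)]

def boundary_cell(c):
    # The four edges of square c, named by their doubled midpoints.
    i, j = c
    return {(2*i + 1, 2*j), (2*i + 1, 2*j + 2), (2*i, 2*j + 1), (2*i + 2, 2*j + 1)}

def boundary_edge(e):
    # The two endpoints of edge e.
    x, y = e
    return {(x - 1, y), (x + 1, y)} if x % 2 else {(x, y - 1), (x, y + 1)}

def boundary(chain, face_boundary):
    out = set()
    for f in chain:
        out ^= face_boundary(f)  # symmetric difference = mod-2 sum
    return out

# The boundary of a boundary is empty for every 2-chain tried here.
for C in [set(cells), {(0, 0), (1, 1)}, {(0, 0), (0, 1), (1, 1)}]:
    walls = boundary(C, boundary_cell)
    assert boundary(walls, boundary_edge) == set()
```

Shared walls of adjacent cells cancel in the symmetric difference, which is exactly why the boundary of a union of same-classification cells consists only of its outer walls.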
3 Correctness
3.1 Only relevant points are found
Algorithm 1 adds points to its set of relevant points in two ways: by finding endpoints of minimum spanning tree edges and by finding extreme points of inverted point sets. We prove in this section that in both cases the points that are found are truly relevant: there is a nearestneighbor classification query that would be answered incorrectly if any one of these points were omitted.
Lemma 5.
If q is an endpoint of an edge qr of a minimum spanning tree for which the other endpoint r has a different classification, then q is relevant with respect to the whole data set, and also relevant with respect to any subset of the data set that includes both q and r.
Proof.
By Lemma 1, there is a Voronoi wall between the cells for q and r. Let y be any point interior to that wall, and perturb y to a point y′ closer to q than to r, by a perturbation sufficiently small that it does not touch or cross any other Voronoi walls. Then the correct nearest-neighbor classification of y′ is the same as that for q, but if q were omitted then the classification would become the same as for r, a different value. Therefore q is relevant. ∎
Lemma 6.
If q is relevant for the whole data set, r has a different classification than q, and r corresponds to one of the extreme points of the set I_q constructed by Algorithm 1, then r is relevant with respect to the whole data set, and also relevant with respect to any subset of the data set that includes both q and r.
Proof.
By Lemma 3, q and r share a wall in the Voronoi diagram of the subset of the training set consisting of q and the points whose classification differs from that of q. Let y be a point in the relative interior of this wall, chosen in general position so that, in the Voronoi diagram of the whole training set, line segment ry does not cross any faces of lower dimension than a wall. With this choice, y is equidistant from q and r, and farther from all other points of different classification to q.
Within line segment ry, each point of the line segment has r as the nearest neighbor among the points with a different classification to q; however, some points of the line segment may have an even closer neighbor that has the same classification as q. (For instance, this happens for the leftmost red Voronoi neighbor of q in Fig. 1.) Let z be a point of line segment ry that is equidistant from r and from the nearest training point to z with the same classification as q. The existence of z can be seen from the intermediate value theorem, noting that at one endpoint of this segment, r, r itself is closer than any other training point, while at the other endpoint, y, the nearest training point with the same classification as q is at equal or closer distance than r. Because of our choice of y as being in general position, no other training point can be as close to z as these two points. Then z lies on a Voronoi wall between r and a point with the same classification as q, so r is relevant by the same perturbation argument as in the proof of Lemma 5. ∎
Lemma 7.
All points identified as relevant by Algorithm 1 are relevant, both with respect to the whole data set and with respect to the subset of points identified by the algorithm.
Proof.
This follows by induction on the number of iterations of the outer loop of Algorithm 1, using Lemma 5 as the base case and Lemma 6 for the induction step. ∎
3.2 All relevant points are found
If C is a decision boundary component, and w is a wall in C, between the Voronoi cells for points p and r, we say that p and r are defining points of w, and also defining points of C.
Lemma 8.
For every decision boundary component C, Algorithm 1 identifies at least one defining point of C as relevant.
Proof.
By Lemma 4, C has a side A. Because the minimum spanning tree connects all the Voronoi cells, it includes at least one edge uv that connects a cell in A to a cell not in A. The wall between these two cells is part of the boundary of A, so it belongs to C, and its endpoints u and v are defining points of C. Because this wall belongs to a decision boundary component, it separates two cells of different classifications, so u and v will be identified as relevant by the phase of Algorithm 1 that finds endpoints of minimum spanning tree edges whose endpoints have different classifications. ∎
Once we have identified at least one defining point of a decision boundary component, the second phase of Algorithm 1 finds all of them, as the following lemmas show.
Lemma 9.
If Algorithm 1 identifies one of the two defining points of a wall of a decision boundary component as relevant, it identifies the other defining point of the same wall.
Proof.
Let q be the first of the two defining points to be identified, and let r be the other defining point for the same wall. Because they are separated by a decision boundary component, they have different classifications. Then r is a neighbor of q in the Delaunay graph of all of the training points, and therefore also in the Delaunay graph of the subset consisting of q and the training points with different classifications to q (removing sites can only grow the remaining cells, so their shared wall persists). Therefore, by Lemma 3, r will be found as the inverted image of one of the extreme points in I_q. ∎
Lemma 10.
Let w_1 and w_2 be walls sharing a junction, such that both w_1 and w_2 separate cells with different classifications, and such that all cells between them (within one of the two angles that they form at their shared junction) have a single classification. Suppose also that Algorithm 1 identifies one of the defining points of w_1 as relevant. Then it also identifies one of the defining points of w_2 as relevant.
Proof.
By Lemma 9, Algorithm 1 identifies the defining point q of w_1 that lies within the angle between w_1 and w_2. Let r be the defining point of w_2 that lies outside this angle, let j be the shared junction of w_1 and w_2, and consider the link of this junction, within a plane perpendicular to j. Within this link, the Voronoi cells incident to j divide the plane into wedges, meeting at the point where j crosses the plane of the link. The defining sites of these cells are all closer to this point than any other training points. For any subset of the training points that includes at least one of the defining sites of a cell incident to j, the cross-section of the Voronoi diagram in the plane of the link will still have the structure of a set of wedges for the remaining sites, in the same circular ordering. In particular, in the Voronoi diagram of the subset consisting of q and the training points with different classifications to q, the wedges of q and r will be consecutive, because the sites of the wedges between them all have the same classification as q and are absent from the subset. Therefore, these two cells are separated by a wall in the Voronoi diagram of this subset, and by Lemma 3, r will be found as the inverted image of one of the extreme points in I_q. ∎
Lemma 11.
Algorithm 1 identifies all relevant points with respect to the whole training set, and with respect to the set of points that it identifies.
Proof.
Let r be a training point that is relevant with respect to the whole training set, and let y be a query point that would get the wrong classification if r were removed from the training set (witnessing the relevance of r). Then r must be the nearest training point to y, and the second-nearest training point s to y must have a different classification than r. Because r and s are the nearest and second-nearest points to y, they must share a Voronoi wall w, which belongs to a decision boundary component C of the decision component of r. By Lemma 8, Algorithm 1 identifies at least one defining point of C, and by Lemma 10 the defining points that it identifies extend from any wall to any other wall of C that is consecutive at a junction, and therefore also to any other wall of C that is adjacent at a junction, and to any other wall that is connected through a sequence of adjacencies at junctions. But C was defined as a set of walls that are adjacent in this way, so Algorithm 1 identifies at least one defining point of w. By Lemma 9 it identifies both defining points of w, and therefore it identifies r.
Removing an irrelevant point does not change the decision boundary components or the defining points of their walls, so it does not create new relevant points. Therefore, the relevant points with respect to the whole training set identified by Algorithm 1 are also all of the relevant points with respect to the set of points that it identifies. ∎
Theorem 1.
The nearestneighbor classifications obtained from the set of relevant points identified by Algorithm 1 equal the nearestneighbor classifications obtained from the whole training set.
Proof.
The classification of any point of R^d, for a given set of labeled points, may be obtained from the label of its nearest relevant point for the given set: all Voronoi walls that separate the point from its nearest relevant point must have the same label on both sides, for otherwise the two points defining such a wall would be relevant and at least one of them would be nearer to the given point than its nearest relevant point; from this it follows that the nearest neighbor of the point has the same label as its nearest relevant point. By Lemma 11 the identity of the relevant points does not change between the whole training set and the subset of points identified by the algorithm, and therefore the identity of the nearest relevant point cannot change. ∎
4 Analysis
We are now ready to prove our main results on the time bounds for finding relevant points.
Theorem 2.
Algorithm 1, implemented using Jarník's or Borůvka's algorithm for its minimum spanning trees, Algorithm 2 for extreme points, and Seidel's algorithm [40] for the linear programming steps of Algorithm 2, finds all relevant points of a given training set of size n, having k relevant points, in any constant dimension d, in total time O(n² + nk²).
Proof.
Theorem 3.
Algorithm 1, implemented using the algorithm of Agarwal, Edelsbrunner, Schwarzkopf, and Welzl [1] for its minimum spanning trees, and the algorithm of Chan [10] for its extreme points, finds all relevant points of a given training set of size n, having k relevant points, in randomized expected time for , and in time for all constant dimensions .
The proof is the same as for Theorem 2.
Theorem 4.
Algorithm 1, implemented using Jarník’s or Borůvka’s algorithm for its minimum spanning trees, Algorithm 2 for extreme points, and an arbitrary linear programming algorithm for the linear programming steps of Algorithm 2, finds all relevant points of a given training set of size n, having k relevant points, in any constant dimension d, in time , where LP(d, n) denotes the time to solve a linear program of dimension d (number of variables) and size n (number of constraints).
Again, the proof is the same.
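The linear programming subroutine in these bounds can be as simple as Seidel's randomized incremental algorithm [40]: insert the constraints in random order, and whenever the current optimum becomes infeasible, re-solve on the boundary of the violating constraint, one dimension lower. The following planar sketch is our own compact illustration, not the paper's implementation; the bounding box M, the tolerances, and all names are assumptions, and degenerate all-zero constraints are not handled.

```python
# Sketch of Seidel's randomized incremental LP in two dimensions:
# maximize c . x subject to halfplanes ax*x + ay*y <= b, inside [-M, M]^2.
import random

def solve_1d(c, lo, hi, constraints, eps=1e-12):
    # maximize c*t subject to lo <= t <= hi and a*t <= b for each (a, b)
    for a, b in constraints:
        if a > eps:
            hi = min(hi, b / a)
        elif a < -eps:
            lo = max(lo, b / a)
        elif b < -eps:
            return None                  # 0*t <= b < 0: infeasible
    if lo > hi:
        return None
    return hi if c >= 0 else lo

def seidel_lp_2d(c, halfplanes, M=1e6, eps=1e-9):
    cx, cy = c
    x = (M if cx >= 0 else -M, M if cy >= 0 else -M)   # optimum of box alone
    hs = list(halfplanes)
    random.shuffle(hs)                   # random insertion order
    for i, (ax, ay, b) in enumerate(hs):
        if ax * x[0] + ay * x[1] <= b + eps:
            continue                     # current optimum still feasible
        # New optimum lies on the line ax*x + ay*y = b: substitute it into
        # the objective and the earlier constraints, giving a 1D LP.
        if abs(ay) > abs(ax):            # parameterize by x; y = (b - ax*t)/ay
            t = solve_1d(cx - cy * ax / ay, -M, M,
                         [(pax - pay * ax / ay, pb - pay * b / ay)
                          for pax, pay, pb in hs[:i]])
            if t is None:
                return None
            x = (t, (b - ax * t) / ay)
        else:                            # parameterize by y; x = (b - ay*t)/ax
            t = solve_1d(cy - cx * ay / ax, -M, M,
                         [(pay - pax * ay / ax, pb - pax * b / ax)
                          for pax, pay, pb in hs[:i]])
            if t is None:
                return None
            x = ((b - ay * t) / ax, t)
    return x
```

Because each inserted constraint is violated with probability at most d/i in general position, this runs in expected linear time for any fixed dimension, which is why Seidel's algorithm is a natural choice for the linear programming steps of Algorithm 2.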
References
 [1] Pankaj K. Agarwal, Herbert Edelsbrunner, Otfried Schwarzkopf, and Emo Welzl. Euclidean minimum spanning trees and bichromatic closest pairs. Discrete Comput. Geom., 6(5):407–422, 1991. doi:10.1007/BF02574698.
 [2] Pankaj K. Agarwal and Jiří Matoušek. Dynamic halfspace range reporting and its applications. Algorithmica, 13(4):325–345, 1995. doi:10.1007/BF01293483.
 [3] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun. ACM, 51(1):117–122, 2008. doi:10.1145/1327452.1327494.
 [4] Sunil Arya, Theocharis Malamatos, and David M. Mount. Space-time tradeoffs for approximate nearest neighbor searching. J. ACM, 57(1):A1:1–A1:54, 2010. doi:10.1145/1613676.1613677.
 [5] Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, and Angela Y. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. J. ACM, 45(6):891–923, 1998. doi:10.1145/293347.293348.
 [6] Franz Aurenhammer, Rolf Klein, and Der-Tsai Lee. Voronoi Diagrams and Delaunay Triangulations. World Scientific, 2013. doi:10.1142/8685.
 [7] David Bremner, Erik Demaine, Jeff Erickson, John Iacono, Stefan Langerman, Pat Morin, and Godfried Toussaint. Output-sensitive algorithms for computing nearest-neighbour decision boundaries. Discrete Comput. Geom., 33(4):593–604, 2005. doi:10.1007/s00454-004-1152-0.
 [8] Paul B. Callahan and S. Rao Kosaraju. Faster algorithms for some geometric graph problems in higher dimensions. In Proc. 4th Symp. Discrete Algorithms (SODA 1993), pages 291–300. ACM, 1993.
 [9] Timothy M. Chan. Optimal output-sensitive convex hull algorithms in two and three dimensions. Discrete Comput. Geom., 16(4):361–368, 1996. doi:10.1007/BF02712873.
 [10] Timothy M. Chan. Output-sensitive results on convex hulls, extreme points, and related problems. Discrete Comput. Geom., 16(4):369–387, 1996. doi:10.1007/BF02712874.
 [11] Timothy M. Chan. Approximate nearest neighbor queries revisited. Discrete Comput. Geom., 20(3):359–373, 1998. doi:10.1007/PL00009390.
 [12] Timothy M. Chan. Improved deterministic algorithms for linear programming in low dimensions. ACM Trans. Algorithms, 14(3):A30:1–A30:10, 2018. doi:10.1145/3155312.
 [13] Bernard Chazelle. An optimal convex hull algorithm in any fixed dimension. Discrete Comput. Geom., 10(4):377–409, 1993. doi:10.1007/BF02573985.
 [14] Bernard Chazelle and Jiří Matoušek. Derandomizing an output-sensitive convex hull algorithm in three dimensions. Comput. Geom., 5(1):27–32, 1995. doi:10.1016/0925-7721(94)00018-Q.
 [15] Kenneth L. Clarkson. More output-sensitive geometric algorithms. In Proc. 35th Symp. Foundations of Computer Science (FOCS 1994), pages 695–702. IEEE Computer Society, 1994. doi:10.1109/SFCS.1994.365723.
 [16] Kenneth L. Clarkson. Las Vegas algorithms for linear and integer programming when the dimension is small. J. ACM, 42(2):488–499, 1995. doi:10.1145/201019.201036.
 [17] Kenneth L. Clarkson and Peter W. Shor. Applications of random sampling in computational geometry. II. Discrete Comput. Geom., 4(5):387–421, 1989. doi:10.1007/BF02187740.
 [18] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21–27, 1967. doi:10.1109/tit.1967.1053964.
 [19] Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S. Mirrokni. Locality-sensitive hashing scheme based on stable distributions. In Jack Snoeyink and Jean-Daniel Boissonnat, editors, Proc. 20th Symp. Computational Geometry (SoCG 2004), pages 253–262. ACM, 2004. doi:10.1145/997817.997857.
 [20] A. K. Dewdney and J. K. Vranch. A convex partition of R^3 with applications to Crum’s problem and Knuth’s post-office problem. Utilitas Math., 12:193–199, 1977.
 [21] Rex A. Dwyer. Higher-dimensional Voronoi diagrams in linear expected time. Discrete Comput. Geom., 6(4):343–367, 1991. doi:10.1007/BF02574694.
 [22] Martin E. Dyer and Alan M. Frieze. A randomized algorithm for fixed-dimensional linear programming. Math. Programming, 44(2, (Ser. A)):203–212, 1989. doi:10.1007/BF01587088.
 [23] David Eppstein. Spanning trees and spanners. In Jörg-Rüdiger Sack and Jorge Urrutia, editors, Handbook of Computational Geometry, pages 425–461. Elsevier, 2000. doi:10.1016/B978-044482537-7/50010-3.
 [24] David Eppstein, Michael T. Goodrich, and Jonathan Z. Sun. Skip quadtrees: dynamic data structures for multidimensional point sets. Internat. J. Comput. Geom. Appl., 18(1–2):131–160, 2008. doi:10.1142/S0218195908002568.
 [25] Aristides Gionis, Piotr Indyk, and Rajeev Motwani. Similarity search in high dimensions via hashing. In Malcolm P. Atkinson, Maria E. Orłowska, Patrick Valduriez, Stanley B. Zdonik, and Michael L. Brodie, editors, Proc. 25th Int. Conf. Very Large Data Bases (VLDB 1999), pages 518–529. Morgan Kaufmann, 1999. URL: https://www.vldb.org/conf/1999/P49.pdf.
 [26] Branko Grünbaum. Convex Polytopes, volume 221 of Graduate Texts in Mathematics. Springer, 2nd edition, 2003. See in particular Grünbaum’s discussion of the Perles configuration on pp. 93–94.
 [27] Leonidas J. Guibas, Donald E. Knuth, and Micha Sharir. Randomized incremental construction of Delaunay and Voronoi diagrams. Algorithmica, 7(4):381–413, 1992. doi:10.1007/BF01758770.
 [28] Michael Henle. A Combinatorial Introduction to Topology. Dover Publications, 1994.
 [29] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Jeffrey Scott Vitter, editor, Proc. 30th Symp. Theory of Computing (STOC 1998), pages 604–613. ACM, 1998. doi:10.1145/276698.276876.
 [30] C. S. Karthik and Pasin Manurangsi. On closest pair in Euclidean metric: monochromatic is as hard as bichromatic. Combinatorica, 40(4):539–573, 2020. doi:10.1007/s00493-019-4113-1.
 [31] David G. Kirkpatrick and Raimund Seidel. The ultimate planar convex hull algorithm? SIAM J. Comput., 15(1):287–299, 1986. doi:10.1137/0215021.
 [32] Victor Klee. On the complexity of d-dimensional Voronoi diagrams. Arch. Math., 34(1):75–80, 1980. doi:10.1007/BF01224932.
 [33] Drago Krznaric, Christos Levcopoulos, and Bengt J. Nilsson. Minimum spanning trees in d dimensions. Nordic J. Comput., 6(4):446–461, 1999.
 [34] Jiří Matoušek. Approximations and optimal geometric divide-and-conquer. J. Comput. System Sci., 50(2):203–208, 1995. doi:10.1006/jcss.1995.1018.
 [35] Jiří Matoušek, Micha Sharir, and Emo Welzl. A subexponential bound for linear programming. Algorithmica, 16(4–5):498–516, 1996. doi:10.1007/BF01940877.
 [36] Nimrod Megiddo. Linear programming in linear time when the dimension is fixed. J. ACM, 31(1):114–127, 1984. doi:10.1145/2422.322418.
 [37] Thomas Ottmann, Sven Schuierer, and Subbiah Soundaralakshmi. Enumerating extreme points in higher dimensions. Nordic J. Comput., 8(2):179–192, 2001.
 [38] Franco P. Preparata and Roberto Tamassia. Efficient point location in a convex spatial cell-complex. SIAM J. Comput., 21(2):267–280, 1992. doi:10.1137/0221020.
 [39] Raimund Seidel. Exact upper bounds for the number of faces in d-dimensional Voronoi diagrams. In Applied Geometry and Discrete Mathematics, volume 4 of DIMACS Ser. Discrete Math. Theoret. Comput. Sci., pages 517–529. Amer. Math. Soc., 1991.
 [40] Raimund Seidel. Small-dimensional linear programming and convex hulls made easy. Discrete Comput. Geom., 6(5):423–434, 1991. doi:10.1007/BF02574699.
 [41] Michael Ian Shamos and Dan Hoey. Closest-point problems. In Proc. 16th Symp. Foundations of Computer Science (FOCS 1975), pages 151–162. IEEE Computer Society, 1975. doi:10.1109/SFCS.1975.8.
 [42] D. F. Watson. Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes. Comput. J., 24(2):167–172, 1981. doi:10.1093/comjnl/24.2.167.