1 Introduction
Approximation problems involving a single convex body in d-dimensional space have received a great deal of attention in the computational geometry community [4, 9, 10, 11, 12, 18, 19, 45]. Recent results include near-optimal algorithms for approximating the convex hull of a set of points [9, 19], as well as an optimal data structure for answering approximate polytope membership queries [11]. In contrast, works involving multiple convex bodies are generally limited to low dimensions and/or do not consider approximation [2, 13, 29, 30, 44]. In this paper we present new approximation algorithms for natural problems that either involve multiple convex polytopes or result from analyzing such a problem:

Determining whether two convex polytopes P and Q intersect

Computing the Minkowski sum P ⊕ Q of two convex polytopes P and Q

Computing the width of a convex polytope P (which results from an analysis of the Minkowski sum P ⊕ (−P))
Throughout we assume that the input polytopes reside in ℝ^d and are full-dimensional, where the dimension d is a fixed constant. Polytopes may be represented either as the convex hull of a set of points (point representation) or as the intersection of a set of halfspaces (halfspace representation). In either case, n denotes the size of the polytope.
1.1 Convex Intersection
Detecting whether two geometric objects intersect and computing the region of intersection are fundamental problems in computational geometry. Geometric intersection problems arise naturally in a number of applications. Examples include geometric packing and covering, wire and component layout in VLSI, map overlay in geographic information systems, motion planning, and collision detection. Several surveys present the topics of collision detection and geometric intersection [33, 36, 37].
The special case of detecting the intersection of convex objects has received a lot of attention in computational geometry. The static version of the problem has been considered in the plane [39, 42] and in three dimensions [20, 38]. The data structure version, in which each convex object is preprocessed independently, has been considered in the plane [13, 21, 22, 25] and in three dimensions [13, 22, 25, 26].
Recently, Barba and Langerman [13] considered the problem in higher dimensions. They showed how to preprocess convex polytopes in ℝ^d so that, given two such polytopes that have been subject to affine transformations, it can be determined whether they intersect each other in logarithmic time. However, the preprocessing time and storage grow as the combinatorial complexity of the polytope raised to the power ⌊d/2⌋. Since the combinatorial complexity of a polytope with n vertices can be as high as Θ(n^{⌊d/2⌋}), the storage upper bound is roughly O(n^{d²/4}). This high complexity motivates the study of approximations to the problem.
We define approximation in a manner that is sensitive to direction. Consider any convex body K in ℝ^d and any ε > 0. Given a nonzero vector v, define slab_v(K) to be the minimum slab defined by two hyperplanes that enclose K and are orthogonal to v. Define the directional width of K with respect to v, width_v(K), to be the perpendicular distance between these hyperplanes. Let (1+ε)·slab_v(K) be the central expansion of slab_v(K) by a factor of 1+ε, and define K_ε to be the intersection of these expanded slabs over all unit vectors v. It can be shown that for any unit vector v, width_v(K_ε) ≤ (1+ε)·width_v(K). An ε-approximation of K is any set K′ (which need not be convex) such that K ⊆ K′ ⊆ K_ε. This defines an outer approximation. It is also possible to define an analogous notion of inner approximation, in which each directional width is no smaller than 1−ε times the true width. Our results can be extended to either type of approximation.

A related notion studied extensively in the literature is that of kernels. Given a discrete point set S in ℝ^d, an ε-kernel of S is any subset Q ⊆ S such that conv(Q) is an inner ε-approximation of conv(S) [4]. It is well known that O(1/ε^{(d−1)/2}) points are sufficient and sometimes necessary in an ε-kernel. Kernels efficiently approximate the convex hull, and as such they have been used to obtain fast approximation algorithms for several problems, such as diameter, minimum width, convex hull volume, minimum enclosing cylinder, minimum enclosing annulus, and minimum-width cylindrical shell [4, 5].
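To make the directional-width definitions concrete, here is a small numerical sketch (our own illustration, not part of the paper's algorithms; the function names and the Monte-Carlo sampling of directions are assumptions of this sketch):

```python
import numpy as np

def directional_width(points, v):
    """Perpendicular distance between the two supporting hyperplanes of
    `points` that are orthogonal to direction v (the directional width)."""
    v = np.asarray(v, dtype=float)
    proj = points @ (v / np.linalg.norm(v))
    return proj.max() - proj.min()

def is_inner_approximation(subset, points, eps, trials=1000, seed=0):
    """Monte-Carlo check that `subset` has directional width at least
    (1 - eps) times that of `points` in every sampled direction."""
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    for _ in range(trials):
        v = rng.normal(size=d)
        if directional_width(subset, v) < (1 - eps) * directional_width(points, v):
            return False
    return True
```

For the unit square, the directional width along a coordinate axis is 1 and along the diagonal is √2, matching the definition above.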
In the approximate version of convex intersection, we are given two convex bodies P and Q and a parameter ε > 0. If P ∩ Q ≠ ∅, then the answer is “yes.” If P_ε ∩ Q_ε = ∅, then the answer is “no.” Otherwise, either answer is acceptable. The approximate polytope intersection problem is defined as follows. A collection of two or more convex polytopes in ℝ^d are individually preprocessed (with knowledge of ε). Given any two preprocessed polytopes, P and Q, the query determines whether P and Q intersect approximately. In general, the query algorithm can be applied to any affine transformation of the preprocessed polytopes.
Theorem 1.
Given a parameter ε > 0 and two polytopes P and Q, each of size n (given either using a point or halfspace representation), we can independently preprocess each polytope into a data structure in order to answer approximate polytope intersection queries with query time polylogarithmic in 1/ε, storage O(1/ε^{(d−1)/2}), and preprocessing time O(n log(1/ε) + 1/ε^{(d−1)/2+α}), where α is an arbitrarily small positive constant.
The space is nearly optimal in the worst case because there is a lower bound of Ω(1/ε^{(d−1)/2}) on the worst-case bit complexity of representing an ε-approximation of a polytope [11].
1.2 Minkowski Sum
Given two convex bodies P, Q ⊆ ℝ^d, the Minkowski sum is defined as P ⊕ Q = {p + q : p ∈ P, q ∈ Q} (see Figure 1(a)). Minkowski sums have found numerous applications in motion planning [7, 31], computer-aided design [44], computational biology [40], satellite layout [15], and image processing [35]. Minkowski sums have also been well studied in the context of discrete and computational geometry [1, 3, 29, 32, 43].
It is well known that, already in dimension 3, the number of vertices in the Minkowski sum of two polytopes can grow as rapidly as the product of the number of vertices in the two polytopes [7]. This has led to the study of algorithms to compute approximations of Minkowski sums in 3-space [2, 30, 44]. In this paper, we show how to approximate the Minkowski sum of two convex polytopes in ℝ^d in near-optimal time.
Theorem 2.
Given a parameter ε > 0 and two polytopes P and Q, each of size n (given either using a point or halfspace representation), it is possible to construct an ε-approximation of P ⊕ Q of size O(1/ε^{(d−1)/2}) in O(n log(1/ε) + 1/ε^{(d−1)/2+α}) time, where α is an arbitrarily small positive constant.
The output representation can be either point-based or halfspace-based, irrespective of the input representations.
1.3 Width
Define the directional width of a set S of points to be the directional width of conv(S). The width of S is the minimum over all directional widths. The maximum over all directional widths is equal to the diameter of S. Both problems can be approximated using an ε-kernel of S. After successive improvements [4, 6, 8, 14, 18], algorithms to compute ε-kernels and to approximate the diameter in roughly O(n log(1/ε) + 1/ε^{(d−1)/2}) time have been independently discovered by Chan [19] and the authors [9]. Somewhat surprisingly, these works offer no improvement to the running time to approximate the width [4, 17, 18, 28, 45], which Chan [19] posed as an open problem. The fastest known algorithms date from over a decade ago and take roughly O(n + 1/ε^{d−1}) time [17, 18].
Agarwal et al. [2] showed that the width of a convex body P is equal to the minimum distance from the origin to the boundary of the convex body P ⊕ (−P) (see Figure 1(b)). Using Theorem 2, we can approximate the width by computing an approximation of P ⊕ (−P) represented as the intersection of halfspaces and then determining the closest point to the origin among all bounding hyperplanes. The following theorem presents this result.
Theorem 3.
Given a set S of n points in ℝ^d and an approximation parameter ε > 0, it is possible to compute an ε-approximation to the width of S in O(n log(1/ε) + 1/ε^{(d−1)/2+α}) time, where α is an arbitrarily small positive constant.
1.4 Techniques
Our algorithms and data structure are based on a data structure defined by a hierarchy of Macbeath regions [9, 11], which answers approximate directional width queries in polylogarithmic time. First, we show how to use this data structure as a black box to answer approximate polytope intersection queries by transforming the problem to a dual setting and performing a multidimensional convex minimization. Next, we show how to use approximate polytope intersection queries to compute approximations of the Minkowski sum. The approximation to the width follows directly.
Since we only access the input polytopes through a data structure for approximate directional width queries, our results apply in much more general settings. For example, we could answer in polylogarithmic time whether the Minkowski sum of two polytopes (preprocessed independently) approximately intersects a third polytope. Our techniques are also amenable to other polytope operations such as intersection and convex hull of the union, as long as the model of approximation is defined accordingly.
The preprocessing time of the approximate directional width data structure we use is O(n log(1/ε) + 1/ε^{(d−1)/2+α}), for arbitrarily small α > 0. If this preprocessing time is reduced in the future, the complexity of our algorithms becomes equal to the preprocessing time plus O(1/ε^{(d−1)/2}), up to polylogarithmic factors.
2 Preliminaries
In this section we present a number of results, which will be used throughout the paper. The first provides three basic properties of Minkowski sums. The proof can be found in standard sources on Minkowski sums (see, e.g., [41]).
Lemma 4.
Let X, Y ⊆ ℝ^d be two (possibly infinite) sets of points. Then:

(a) X ∩ Y ≠ ∅ if and only if O ∈ X ⊕ (−Y), where O is the origin.

(b) conv(X ⊕ Y) = conv(X) ⊕ conv(Y).

(c) For all nonzero vectors v, width_v(X ⊕ Y) = width_v(X) + width_v(Y).
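These properties are easy to sanity-check numerically on finite point sets. The following sketch (our own illustration; the item labels refer to the lemma above, and widths are computed by projection) verifies items (a) and (c) on small examples:

```python
import numpy as np

def minkowski_sum(X, Y):
    """All pairwise sums of two finite point sets (their vertex sets)."""
    return (X[:, None, :] + Y[None, :, :]).reshape(-1, X.shape[1])

def width(points, v):
    """Directional width of a finite point set with respect to v."""
    u = np.asarray(v, float)
    proj = points @ (u / np.linalg.norm(u))
    return proj.max() - proj.min()

X = np.array([[0., 0.], [2., 0.], [0., 1.]])
Y = np.array([[0., 0.], [1., 1.]])
v = np.array([1., 3.])

# Item (c): directional widths add under Minkowski sums.
assert np.isclose(width(minkowski_sum(X, Y), v), width(X, v) + width(Y, v))

# Item (a): X and Y intersect iff the origin lies in X ⊕ (−Y); for finite
# sets it suffices to check whether some pairwise difference is zero.
assert any(np.allclose(p, 0) for p in minkowski_sum(X, -Y))
```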
Next, we recall a recent result of ours on answering directional width queries approximately [9], which we will use as a black box later in this paper. Given a set S of n points in a constant dimension d and an approximation parameter ε > 0, the answer to an approximate directional width query for a nonzero query vector v consists of a pair of points p, q ∈ S such that width_v({p, q}) ≥ (1 − ε) · width_v(S).
Lemma 5.
Given a set S of n points in ℝ^d and an approximation parameter ε > 0, there is a data structure that can answer approximate directional width queries with query time O(log(1/ε)), space O(1/ε^{(d−1)/2}), and preprocessing time O(n log(1/ε) + 1/ε^{(d−1)/2+α}), where α is an arbitrarily small positive constant.
2.1 Fattening
Existing algorithms and data structures for convex approximation often assume that the bodies have been fattened through an appropriate affine transformation. In the context of multiple bodies, this is complicated by the fact that different fattening transformations may be needed for the two bodies or their Minkowski sum. In this section we explore this issue.
Consider a convex body K in d-dimensional space ℝ^d. Given a parameter 0 < γ ≤ 1, we say that K is γ-fat if there exist concentric Euclidean balls B and B′ such that B ⊆ K ⊆ B′ and radius(B)/radius(B′) ≥ γ. We say that K is fat if it is γ-fat for a constant γ (possibly depending on d, but not on ε or K). For a centrally symmetric convex body S and λ > 0, the body λS obtained by scaling S about its center by a factor of λ is called the λ-expansion of S.
Let K be a convex body. We say that a convex body S is a λ-sandwiching body for K if S is centrally symmetric and S ⊆ K ⊆ λS, where λS is the λ-expansion of S. John [34] proved tight bounds for the constant λ of a sandwiching ellipsoid. This ellipsoid is referred to as the John ellipsoid.
Lemma 6.
For every convex body K in ℝ^d, there exists a d-sandwiching ellipsoid. Furthermore, if K is centrally symmetric, there exists a √d-sandwiching ellipsoid.
It is an immediate consequence of this lemma that for any convex body K there exists an affine transformation T such that T(K) is fat. Any affine transformation that maps the John ellipsoid into a Euclidean ball will do. The following lemma generalizes this to hyperrectangles (see also Barequet and Har-Peled [14]).
Lemma 7.
For every convex body K in ℝ^d, there exists a d^{3/2}-sandwiching hyperrectangle.
Proof.
Let E denote the d-sandwiching ellipsoid for K described in Lemma 6. By elementary geometry, there exists a √d-sandwiching hyperrectangle R for E. We claim that R is a d^{3/2}-sandwiching hyperrectangle for K. To prove this claim, observe that E ⊆ K ⊆ dE and R ⊆ E ⊆ √d·R, where dE is the d-expansion of E and √d·R is the √d-expansion of R. Letting d^{3/2}·R denote the d^{3/2}-expansion of R, it is easy to see that dE ⊆ d^{3/2}·R. It follows that R ⊆ K ⊆ d^{3/2}·R. This completes the proof. ∎
Next, let us consider fattening in the context of multiple bodies. The next two lemmas follow from elementary geometry and properties of Minkowski sums.
Lemma 8.
Let S₁ and S₂ be λ-sandwiching bodies for convex bodies K₁ and K₂, respectively. Then S₁ ⊕ S₂ is a λ-sandwiching body for K₁ ⊕ K₂.
Lemma 9.
Let K be a convex body. Given a λ-sandwiching polytope S for K of constant complexity, we can compute in constant time an affine transformation T such that T(K) is γ-fat, where γ = 1/(λ√d).
Proof.
Let S denote the given λ-sandwiching polytope for K. Recalling that sandwiching polytopes are centrally symmetric, by Lemma 6 we can find a √d-sandwiching ellipsoid E for S. As S has constant complexity, we can determine E in constant time. In constant time, we can also find the affine transformation T that converts E into a Euclidean ball. We claim that T(K) is γ-fat for γ = 1/(λ√d). To prove this claim, observe that E ⊆ S ⊆ √d·E and S ⊆ K ⊆ λS, where √d·E is the √d-expansion of E and λS is the λ-expansion of S. Letting λ√d·E denote the λ√d-expansion of E, it is easy to see that E ⊆ K ⊆ λ√d·E. Thus T(K) is contained between the Euclidean balls T(E) and λ√d·T(E), whose radii differ by a factor of λ√d, which proves the lemma. ∎
We conclude by showing that we can maintain a small amount of auxiliary information for any collection of convex bodies in order to determine the fattening transformation for the Minkowski sum of any two members of this library. We refer to the data structure for approximate directional width queries from Lemma 5 together with the additional information to determine the fattening transformation as the augmented data structure for approximate directional width queries.
Lemma 10.
Consider any finite collection of convex polytopes in ℝ^d. It is possible to store information of constant size with each polytope such that, in constant time, we can compute a fattening affine transformation for the Minkowski sum of any two polytopes from the collection. This information can be computed in time proportional to the size of the input polytope.
Proof.
At preprocessing time, we store a d^{3/2}-sandwiching hyperrectangle R_i for each polytope K_i in the collection. By Lemma 7, such hyperrectangles exist, and they can be computed in time proportional to the size of the input polytope [23].
Suppose we want to compute a fattening affine transformation for K₁′ ⊕ K₂′, where K₁′ and K₂′ are the result of applying (possibly different) affine transformations to K₁ and K₂, respectively. Let R₁′ and R₂′ be the polytopes of constant complexity obtained by applying the corresponding affine transformations to R₁ and R₂, respectively. Clearly, R₁′ and R₂′ are d^{3/2}-sandwiching polytopes for K₁′ and K₂′, respectively. Thus, by Lemma 8, R₁′ ⊕ R₂′ is a d^{3/2}-sandwiching polytope for K₁′ ⊕ K₂′. Note that this polytope has constant complexity and can be computed in constant time. Applying Lemma 9, we can use this polytope to compute a fattening affine transformation for K₁′ ⊕ K₂′ in constant time. ∎
The previous lemma holds more generally, even when each of the polytopes is subject to an arbitrary nonsingular affine transformation, and it extends to the Minkowski sum of any constant number of polytopes.
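As a toy special case of the lemma above, if the stored sandwiching boxes happen to be axis-aligned, the Minkowski sum of the two boxes is the box with summed side lengths, and the fattening transformation can be taken to be a diagonal scaling. This sketch (the general lemma allows arbitrary affine transformations; the function name and the axis-aligned restriction are assumptions of this illustration) shows the idea:

```python
import numpy as np

def fattening_transform(extents1, extents2):
    """Toy axis-aligned case of Lemma 10: given the side lengths of
    axis-aligned sandwiching boxes for two bodies, the Minkowski sum of
    the boxes has the summed side lengths; scaling each axis to unit
    length fattens the sum (up to the sandwiching constants)."""
    total = np.asarray(extents1, float) + np.asarray(extents2, float)
    return np.diag(1.0 / total)
```

Applying the returned transformation to the summed box yields the unit cube, so the transformed Minkowski sum is sandwiched between balls of constant ratio.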
2.2 Projective Duality and Width
Our algorithm for approximating the directional width of a point set is based on a projective dual transformation, which maps points into hyperplanes and vice versa. Each primal point p = (p₁, …, p_d) is mapped to the dual hyperplane p*: x_d = p₁x₁ + ⋯ + p_{d−1}x_{d−1} − p_d. Each primal hyperplane is mapped to a dual point in the same manner. This dual transformation has several well-known properties [24]. For example, the points in the lower convex hull of a point set map to the hyperplanes in the upper envelope of the dual set.
Let H be a set of hyperplanes in ℝ^d. Given a point x ∈ ℝ^{d−1}, the thickness of H at x, denoted thickness_x(H), is defined as follows. Given x ∈ ℝ^{d−1} and y ∈ ℝ, let (x, y) denote the point in ℝ^d resulting from concatenating x and y. For the sake of illustration, we think of the d-th coordinate axis as being the vertical axis. Let y⁺ = max{y : (x, y) ∈ h for some h ∈ H} and y⁻ = min{y : (x, y) ∈ h for some h ∈ H}. We define thickness_x(H) = y⁺ − y⁻, the maximum difference in y-values for points (x, y) in the hyperplanes of H. In other words, the thickness is the vertical distance between the intersections of the vertical line defined by x with the upper and lower envelopes of H. The following lemma relates width and thickness.
Lemma 11.
Consider two points p, q ∈ ℝ^d and a vector v = (v₁, …, v_d) with v_d ≠ 0. Let p* and q* denote the dual hyperplanes, and let x = −(v₁/v_d, …, v_{d−1}/v_d). We have thickness_x({p*, q*}) = (‖v‖ / |v_d|) · width_v({p, q}).
Proof.
Given vectors u and w, let ⟨u, w⟩ denote the standard inner product. Assume without loss of generality that v_d > 0. Let p = (p₁, …, p_d) and q = (q₁, …, q_d), and let p′ and q′ denote their first d − 1 coordinates. The dual hyperplanes are

p*: x_d = p₁x₁ + ⋯ + p_{d−1}x_{d−1} − p_d  and  q*: x_d = q₁x₁ + ⋯ + q_{d−1}x_{d−1} − q_d.

If we set x = −(v₁/v_d, …, v_{d−1}/v_d), we have thickness_x({p*, q*}) = |⟨x, p′ − q′⟩ − (p_d − q_d)| = |⟨v, p − q⟩| / v_d. Therefore

thickness_x({p*, q*}) = (‖v‖ / v_d) · (|⟨v, p − q⟩| / ‖v‖) = (‖v‖ / v_d) · width_v({p, q}). ∎
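Under the sign conventions above, the width–thickness relation can be checked numerically on random inputs (our own sketch; the helper names are assumptions of this illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
p, q = rng.normal(size=d), rng.normal(size=d)
v = rng.normal(size=d)  # v[-1] is nonzero for this seed

def height(pt, x):
    """x_d-coordinate of the dual hyperplane pt* above x in R^{d-1}."""
    return x @ pt[:-1] - pt[-1]

def width_v(pts, v):
    """Directional width of a finite point set with respect to v."""
    u = v / np.linalg.norm(v)
    proj = pts @ u
    return proj.max() - proj.min()

x = -v[:-1] / v[-1]
thickness = abs(height(p, x) - height(q, x))
expected = (np.linalg.norm(v) / abs(v[-1])) * width_v(np.array([p, q]), v)
assert np.isclose(thickness, expected)
```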
3 Approximate Convex Intersection
In this section, we will prove Theorem 1 for the case when the input polytopes are represented by points. Assume that we are given two polytopes P and Q in the point representation. The objective is to preprocess P and Q individually such that we can efficiently answer approximate intersection queries for P and Q (or, more generally, for affine transformations of P and Q).
Given a convex body K, a parameter ε > 0, and a point x, an approximate polytope membership query is defined as follows. If x ∈ K, the answer is “yes”; if x ∉ K_ε, the answer is “no”; and otherwise, either answer is acceptable. Our strategy to answer approximate intersection queries is based on reducing them to approximate polytope membership queries. This reduction is presented in the following lemma, which is a straightforward generalization of Lemma 4(a) to an approximate context. The proof follows from standard algebraic properties of Minkowski sums and the observation that P ∩ Q ≠ ∅ can be expressed as O ∈ P ⊕ (−Q).
Lemma 12.
Let P, Q be two polytopes and ε > 0. Determining the approximate intersection of P and Q is equivalent to determining the approximate membership of the origin O in P ⊕ (−Q).
Proof.
We begin by establishing the useful identity (P ⊕ (−Q))_ε = P_ε ⊕ (−Q_ε). For any nonzero vector v, the minimum slab of P ⊕ (−Q) orthogonal to v is the Minkowski sum of the minimum slabs of P and −Q, and the same holds for the (1+ε)-expansions of these slabs. By basic properties of Minkowski sums (commutativity and distributivity), intersecting over all unit vectors v yields

(P ⊕ (−Q))_ε = P_ε ⊕ (−Q)_ε = P_ε ⊕ (−Q_ε).

By Lemma 4(a), P ∩ Q ≠ ∅ if and only if O ∈ P ⊕ (−Q), and P_ε ∩ Q_ε ≠ ∅ if and only if O ∈ P_ε ⊕ (−Q_ε) = (P ⊕ (−Q))_ε, as desired. ∎
The previous lemma relates approximate polytope intersection to approximate membership of the origin in the polytope P ⊕ (−Q) (Figure 2(a)). Determining whether the origin lies within the convex hull of a set of points is a classic problem in computational geometry, which can be solved by linear programming. However, we are interested in a faster approximate solution that does not compute P ⊕ (−Q) explicitly. We cannot afford to preprocess an approximate polytope membership data structure for P ⊕ (−Q) for each pair P and Q, since the number of such pairs is quadratic in the number of input polytopes. Instead, we preprocess each input polytope individually, and we show next how to efficiently answer approximate polytope membership queries for P ⊕ (−Q) by using augmented data structures for approximate directional width queries for P and Q as black boxes.

Lemma 13.
Given augmented data structures for answering approximate directional width queries for polytopes P and Q, we can answer approximate membership queries for P ⊕ (−Q) using a number of queries to these data structures that is polylogarithmic in 1/ε.
Proof.
Without loss of generality, we may translate space so that the query point coincides with the origin O. Let K = P ⊕ (−Q), and let S be K’s vertex set. (Note that K and S are not explicitly computed.)
The problem of determining whether O ∈ K is invariant to scaling and rotation about the origin. It will be helpful to perform some affine transformations that will guarantee certain properties for K. First, we apply Lemma 10 to fatten K and then apply a uniform scaling about the origin so that K’s diameter is a suitable constant. By fatness, K is then sandwiched between two concentric balls of constant radii. If the origin either lies within the inner ball or outside the outer ball, then the answer is trivial. Otherwise, let Δ denote the diameter of the outer ball. We may apply a rotation about the origin so that the center of this ball lies on the positive x_d-axis at a point c = (0, …, 0, c_d). Again, this scaling and rotation can be computed in constant time using the augmented information. It follows that the coordinates of the points of S have absolute values at most O(Δ) = O(1).
In summary, there exists an affine transformation, computable in constant time, such that after applying this transformation the query point lies at the origin, K is sandwiched between two concentric balls of constant radii centered at a point c = (0, …, 0, c_d) with c_d = Θ(1), and K’s vertex set S is contained within a ball of constant radius about the origin. It is an immediate consequence that width_v(K) = Θ(1) for all directions v, and hence it suffices to answer the membership query to an absolute error of Θ(ε).
Lemma 4(c) implies that we can answer approximate width queries for K as the sum of two approximate width queries, one to P and one to −Q. Therefore, our goal is to determine approximately whether O ∈ K using only approximate width queries to P and Q. In order to do this, we look at the projective dual problem, in which each point p ∈ S is mapped to the hyperplane p*. Let S* denote the corresponding set of hyperplanes. The primal problem is equivalent to the dual problem of determining whether the horizontal hyperplane x_d = 0 is sandwiched between the upper and lower envelopes of S* (Figure 2(b)). Since the point c lies vertically above the origin and within K’s interior, it follows that x_d = 0 cannot intersect the lower envelope. Therefore, it suffices to test whether x_d = 0 intersects the upper envelope.
The dual problem can be solved exactly by computing the minimum value y* of the x_d-coordinate in the upper envelope and testing whether y* ≤ 0. In the primal, the value of y* corresponds to the negated x_d-coordinate of the intersection of a facet f of the lower convex hull of S with the vertical line passing through the origin (see Figure 2). Let f’s supporting hyperplane be denoted by h. Since K is sandwiched between two concentric balls of constant radii whose common center lies on this vertical line, it follows from simple geometry that this supporting hyperplane cannot be very steep. In particular, there exists a constant c₀ such that the absolute values of the first d − 1 coefficients of h are at most c₀. In the dual, this means that the minimum value y* is attained at a point whose first d − 1 coordinates all lie within [−c₀, c₀]. In approximating y*, we will apply directional width queries only for direction vectors v whose first d − 1 coordinates lie within [−c₀, c₀] and whose last coordinate is v_d = 1. Thus, ‖v‖ = O(1).
By Lemma 11, the duals of the two points returned by an exact directional width query in the primal for a vector v correspond to the two dual hyperplanes in the upper and lower envelopes of S* that intersect the vertical line defined by x = −(v₁/v_d, …, v_{d−1}/v_d). Since queries are only applied to directions where ‖v‖/v_d = O(1), and since width_v(K) = Θ(1) for all directions v, it follows from Lemma 11 that a relative error of ε in the directional width implies an absolute error of O(ε) in the corresponding thickness. We can think of the upper envelope of S* as defining the graph of a convex function over the domain [−c₀, c₀]^{d−1}. Since the points of S have coordinates of absolute value O(1), the slopes of the hyperplanes in S* are similarly bounded, and therefore this function has bounded slope. It follows that, for an appropriate δ = Θ(ε), we can compute this function to an absolute error of δ at any point of the domain by performing an approximate directional width query on the corresponding direction v. To complete the proof, it suffices to show that with a polylogarithmic number of such queries, it is possible to compute an absolute O(δ)-approximation to the minimum y*. We do this in the next section. ∎
3.1 Convex Minimization
The following lemma shows how to use binary search to solve a one-dimensional convex minimization problem approximately (see Figure 3(a)).
Lemma 14.
Let δ > 0 and a < b be real parameters. Let f: [a, b] → ℝ be a convex function with bounded slope, and let f̃ be a function with |f̃(x) − f(x)| ≤ δ for all x ∈ [a, b]. Let x* be the value of x that minimizes f(x). It is possible to determine a value x̃ with f(x̃) ≤ f(x*) + O(δ) after O(log((b − a)/δ)) evaluations of f̃ and no evaluation of f.
Proof.
First, we present the recursive algorithm used to determine the value x̃. If b − a ≤ δ, then since the function f has bounded slope, we simply return x̃ = a as a valid answer.
Otherwise, we start by trisecting the interval [a, b] into three equal-length subintervals and evaluate f̃ at the four endpoints a = x₁ < x₂ < x₃ < x₄ = b of the subintervals (see Figure 3(a)). Let x_j denote the endpoint that minimizes f̃, breaking ties arbitrarily. To simplify the boundary cases, let x₀ = x₁ and x₅ = x₄. We then invoke our algorithm recursively on the interval [x_{j−1}, x_{j+1}] and store the value returned as x̃′. We return the value among the two values x_j and x̃′ that minimizes f̃.
Since the length of the interval reduces by at least one third at each iteration, the number of recursive calls, and therefore the number of evaluations of f̃, is O(log((b − a)/δ)). Next, we show that f(x̃) ≤ f(x*) + O(δ). Suppose that x* lies outside the interval [x_{j−1}, x_{j+1}]; say x* < x_{j−1} (the case x* > x_{j+1} is symmetric). Writing x_{j−1} = λx* + (1 − λ)x_j, by the convexity of f we have

f(x_{j−1}) ≤ λ f(x*) + (1 − λ) f(x_j).

Using that f̃(x_j) ≤ f̃(x_{j−1}), we have

f(x_j) ≤ f(x_{j−1}) + 2δ.

Since x_j − x_{j−1} ≥ (b − a)/3 and x_j − x* ≤ b − a, we have λ = (x_j − x_{j−1})/(x_j − x*) ≥ 1/3, and therefore

f(x_j) ≤ f(x*) + 2δ/λ ≤ f(x*) + 6δ.

Either the minimum x* of f is inside the interval [x_{j−1}, x_{j+1}] or not. If it is not, then the previous inequality shows that x_j provides a good approximation, regardless of the value returned by the recursive call. If the minimum is inside the interval [x_{j−1}, x_{j+1}], then the recursive call provides a valid result by an inductive argument. ∎
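The recursion above can be sketched directly in code (our own transcription; the function names and the artificial noisy objective are assumptions of this illustration):

```python
import math

def approx_min_1d(f_noisy, a, b, delta):
    """Sketch of the trisection search: f_noisy is within delta of a convex
    function with bounded slope on [a, b]. Returns a point whose true value
    is within O(delta) of the minimum, using O(log((b - a) / delta))
    evaluations of f_noisy only."""
    if b - a <= delta:
        return a
    xs = [a, a + (b - a) / 3, a + 2 * (b - a) / 3, b]   # trisection endpoints
    j = min(range(4), key=lambda i: f_noisy(xs[i]))     # best endpoint
    lo, hi = xs[max(j - 1, 0)], xs[min(j + 1, 3)]       # keep its neighbors
    cand = approx_min_1d(f_noisy, lo, hi, delta)
    return min(xs[j], cand, key=f_noisy)

# artificial noisy evaluation of a convex function
delta = 1e-4
f = lambda x: (x - 0.3) ** 2
f_noisy = lambda x: f(x) + delta * math.sin(57 * x)
x_tilde = approx_min_1d(f_noisy, 0.0, 1.0, delta)
```

Note that the true function f is never evaluated by the search; it is used only to verify the quality of the answer.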
We are now ready to extend the result to arbitrary dimensions.
Lemma 15.
Let δ > 0 and a < b be real parameters. Let f: [a, b]^d → ℝ, for a constant dimension d, be a convex function with bounded slope, and let f̃ be a function with |f̃(x) − f(x)| ≤ δ for all x ∈ [a, b]^d. Let x* be the value of x that minimizes f(x). It is possible to determine a value x̃ with f(x̃) ≤ f(x*) + O(δ) after O(log^d((b − a)/δ)) evaluations of f̃ and no evaluation of f.
Proof.
The minimum can be written as

min_{x ∈ [a,b]^d} f(x) = min_{x₁ ∈ [a,b]} ( min_{(x₂, …, x_d) ∈ [a,b]^{d−1}} f(x₁, x₂, …, x_d) ).
Note that if f is a convex function with bounded slope, then so is the function g (see Figure 3(b)) defined as

g(x₁) = min_{(x₂, …, x_d) ∈ [a,b]^{d−1}} f(x₁, x₂, …, x_d).
The proof is based on induction on the dimension d. Since d is a constant, the number of induction steps is also a constant. The base case of d = 1 follows from Lemma 14. By the induction hypothesis, for each x₁ we can solve the (d−1)-dimensional instance to obtain a value g̃(x₁) such that

|g̃(x₁) − g(x₁)| ≤ O(δ).
Using Lemma 14 for the function g with its approximation g̃, we obtain a value x̃₁ with g(x̃₁) ≤ g(x₁*) + O(δ), where x₁* denotes the first coordinate of x*.
For the number of function evaluations E(d) for a given dimension d, we have

E(1) = O(log((b − a)/δ))  and  E(d) = O(log((b − a)/δ)) · E(d − 1).

The recurrence easily solves to the desired E(d) = O(log^d((b − a)/δ)). ∎
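The induction can be sketched as nested one-dimensional searches (our own illustration; for self-containment the sketch re-implements the 1-D trisection routine of Lemma 14, and the test objective is an assumption of this example):

```python
def trisect(f, a, b, delta):
    """1-D noisy convex minimization (the routine of Lemma 14)."""
    if b - a <= delta:
        return a
    xs = [a, a + (b - a) / 3, a + 2 * (b - a) / 3, b]
    j = min(range(4), key=lambda i: f(xs[i]))
    cand = trisect(f, xs[max(j - 1, 0)], xs[min(j + 1, 3)], delta)
    return min(xs[j], cand, key=f)

def approx_min_nd(f, lo, hi, delta, dim):
    """Sketch of Lemma 15: reduce d-dimensional minimization to nested
    1-D searches, one per coordinate. Returns a point whose f-value is
    within O(delta) of the minimum (constants compound with dim)."""
    if dim == 1:
        return [trisect(lambda t: f([t]), lo, hi, delta)]
    def g(t):  # noisy evaluation of the inner minimum over the rest
        rest = approx_min_nd(lambda r: f([t] + r), lo, hi, delta, dim - 1)
        return f([t] + rest)
    x1 = trisect(g, lo, hi, delta)
    return [x1] + approx_min_nd(lambda r: f([x1] + r), lo, hi, delta, dim - 1)
```

Each evaluation of g is itself a full (d−1)-dimensional search, which is exactly why the evaluation count multiplies across dimensions.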
By applying Lemma 15 to the dual problem defined in the proof of Lemma 13 (where f is the function whose graph is the upper envelope of S* and δ = Θ(ε)), with the augmented data structure from Lemma 5 providing the approximate evaluations, we obtain Theorem 1 for the case when the input polytopes are represented by points. We will consider the case when the input polytopes are represented by halfspaces at the end of the next section.
4 Minkowski Sum Approximation
In this section, we will prove Theorems 2 and 3, as well as Theorem 1 for the case when the input polytopes are represented by halfspaces. Assume that we are given two polytopes P and Q in the point representation, and that we have computed the augmented approximate directional width data structures from Lemma 5 for each polytope. The objective is to obtain an ε-approximation of the Minkowski sum P ⊕ Q of size O(1/ε^{(d−1)/2}) using these data structures. Our approach is to fatten P ⊕ Q using Lemma 10 and then apply Dudley’s construction [27] in order to obtain an approximation with O(1/ε^{(d−1)/2}) halfspaces. For completeness, we start by describing Dudley’s algorithm.
Let K be a fat polytope of constant diameter. Dudley’s algorithm obtains an ε-approximation of K represented by O(1/ε^{(d−1)/2}) halfspaces as follows. Let B be a ball of constant radius centered at the origin. (Note that K ⊂ B.) Place a set W of O(1/ε^{(d−1)/2}) points on the surface of B such that every point on the surface of B is within distance O(√ε) of some point in W. For each point w ∈ W, let w′ be its nearest point on the boundary of K. We call these points w′ samples. For each sample point w′, take the supporting halfspace passing through w′ that is orthogonal to the vector from w′ to w. The ε-approximation is defined as the intersection of these halfspaces (see Figure 4(a)).
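In the plane, Dudley’s construction can be sketched directly (our own illustration; here the nearest-boundary-point computation is exact, whereas the paper’s algorithm only approximates it, and the function names are assumptions of this sketch):

```python
import numpy as np

def closest_on_segment(a, b, w):
    """Nearest point to w on segment ab."""
    t = np.clip((w - a) @ (b - a) / ((b - a) @ (b - a)), 0.0, 1.0)
    return a + t * (b - a)

def closest_on_boundary(poly, w):
    """Nearest point to w on the boundary of a convex polygon
    (vertices given in order)."""
    n = len(poly)
    cands = [closest_on_segment(poly[i], poly[(i + 1) % n], w) for i in range(n)]
    return min(cands, key=lambda p: np.linalg.norm(p - w))

def dudley(poly, eps, R=2.0):
    """Dudley's construction in the plane: place ~1/sqrt(eps) points on an
    enclosing circle, project each onto the body, and take the supporting
    halfspace orthogonal to the projection direction. Returns (u, b) pairs
    meaning u . x <= b. Assumes poly is fat, of constant diameter, and
    contained in the radius-R ball about the origin."""
    m = int(np.ceil(2 * np.pi / np.sqrt(eps)))
    out = []
    for k in range(m):
        w = R * np.array([np.cos(2 * np.pi * k / m), np.sin(2 * np.pi * k / m)])
        wp = closest_on_boundary(poly, w)
        u = (w - wp) / np.linalg.norm(w - wp)   # outward normal at wp
        out.append((u, float(u @ wp)))
    return out
```

Since each normal points from the nearest boundary point toward an exterior point, every returned halfspace contains the polygon, so their intersection is an outer approximation.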
Bronshteyn and Ivanov [16] presented a similar construction. Instead of approximating K by halfspaces, Bronshteyn and Ivanov’s construction approximates K as the convex hull of the aforementioned set of samples (see Figure 4(b)).¹ In both constructions it is possible to tune the constant factors so that closest point queries need only be computed to within an absolute error of Θ(ε).

¹ Dudley’s construction yields an outer approximation and Bronshteyn and Ivanov’s yields an inner approximation, but it is possible to convert each to the other type through standard techniques. For details, see Lemma 2.8 of the full version of [9].
An approximate closest point query between a polytope K and a point w within constant distance of K can be reduced to computing an approximation of the smallest radius ball centered at w that intersects K. This can be solved through binary search on the radius of this ball, where each probe involves determining whether K intersects a ball of a given radius centered at w. Notice that the data structure for approximate polytope intersection from Section 3 only accesses the bodies through approximate directional width queries, besides the initial fattening transformation. By Lemma 4(c), given two preprocessed bodies P and Q, we can answer directional width queries on P ⊕ Q through directional width queries on P and Q individually. (In the case of a ball, no data structure is required.) Therefore, we can test intersection with a Minkowski sum P ⊕ Q, as long as we have augmented approximate directional width data structures for both P and Q.
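The binary search on the radius can be sketched as follows (our own illustration; `intersects_ball` stands in for the approximate intersection primitive of Section 3, and is an assumption of this sketch):

```python
def approx_smallest_radius(intersects_ball, lo, hi, eps):
    """Binary search on the radius of a ball centered at the query point.
    intersects_ball(r) reports whether the radius-r ball meets the body.
    Assuming the body meets the radius-hi ball but not the radius-lo ball,
    returns a radius within an additive eps of the smallest intersecting
    radius, using O(log((hi - lo) / eps)) probes."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if intersects_ball(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

In the actual algorithm each probe is itself approximate, which is why the constant factors of the construction must be tuned as noted above.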
In order to establish Theorem 2 for the case when the input polytopes are represented by points, we apply the aforementioned binary search to simulate Dudley’s construction on P ⊕ Q. Each sample is obtained after O(log(1/ε)) approximate polytope intersection queries. The total running time is dominated by the preprocessing time of Lemma 5. Note that the output polytope may be represented by either halfspaces or points, according to whether we use Dudley’s or Bronshteyn and Ivanov’s algorithm. To show that the input polytopes may be represented by halfspaces, we show how to efficiently convert between the two representations.
Lemma 16.
Given an approximation parameter ε > 0 and a polytope P of size n (given either using a point or halfspace representation), we can obtain an ε-approximation of P of size O(1/ε^{(d−1)/2}) (in either representation, independent of the input representation) in O(n log(1/ε) + 1/ε^{(d−1)/2+α}) time, where α is an arbitrarily small positive constant.
Proof.
We remind the reader that Agarwal et al. [2] showed that the width of a convex body P is equal to the minimum distance from the origin to the boundary of the convex body P ⊕ (−P). To obtain Theorem 3, we compute Dudley’s approximation of P ⊕ (−P) and then determine the closest point to the origin among the bounding hyperplanes of the approximation.
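The final step amounts to a minimum over hyperplane distances. Assuming the approximation of P ⊕ (−P) is given as halfspaces (u, b) with unit normals, meaning u · x ≤ b, the computation is a one-liner (our own sketch; the representation and names are assumptions of this illustration):

```python
def approx_width(halfspaces):
    """Minimum distance from the origin to a bounding hyperplane of an
    approximation of P + (-P); each halfspace is (u, b) with |u| = 1,
    meaning u . x <= b. Since P + (-P) is centered at the origin and
    contains it, this distance approximates the width of P."""
    return min(abs(b) for _, b in halfspaces)

# Example: P = unit square centered at the origin, so P + (-P) = [-1, 1]^2,
# whose four bounding halfspaces all lie at distance 1 from the origin.
square_sum = [((1.0, 0.0), 1.0), ((-1.0, 0.0), 1.0),
              ((0.0, 1.0), 1.0), ((0.0, -1.0), 1.0)]
```

For this example the approximate width is 1, matching the width of the unit square.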
References
 [1] P. K. Agarwal, E. Flato, and D. Halperin. Polygon decomposition for efficient construction of Minkowski sums. Comput. Geom. Theory Appl., 21(1):39 – 61, 2002.
 [2] P. K. Agarwal, L. J. Guibas, S. HarPeled, A. Rabinovitch, and M. Sharir. Penetration depth of two convex polytopes in 3D. Nordic J. of Computing, 7(3):227–240, 2000.
 [3] P. K. Agarwal, S. HarPeled, H. Kaplan, and M. Sharir. Union of random Minkowski sums and network vulnerability analysis. Discrete Comput. Geom., 52(3):551–582, 2014.
 [4] P. K. Agarwal, S. HarPeled, and K. R. Varadarajan. Approximating extent measures of points. J. Assoc. Comput. Mach., 51:606–635, 2004.
 [5] P. K. Agarwal, S. HarPeled, and K. R. Varadarajan. Geometric approximation via coresets. In J. E. Goodman, J. Pach, and E. Welzl, editors, Combinatorial and Computational Geometry. MSRI Publications, 2005.
 [6] P. K. Agarwal, J. Matoušek, and S. Suri. Farthest neighbors, maximum spanning trees and related problems in higher dimensions. Comput. Geom. Theory Appl., 1(4):189–201, 1992.
 [7] B. Aronov and M. Sharir. On translational motion planning of a convex polyhedron in 3space. SIAM J. Comput., 26(6):1785–1803, 1997.
 [8] S. Arya and T. M. Chan. Better dependencies for offline approximate nearest neighbor search, Euclidean minimum spanning trees, and kernels. In Proc. 30th Annu. Sympos. Comput. Geom., pages 416–425, 2014.
 [9] S. Arya, G. D. da Fonseca, and D. M. Mount. Nearoptimal kernel construction and related problems. In Proc. 33rd Internat. Sympos. Comput. Geom., pages 10:1–15, 2017.
 [10] S. Arya, G. D. da Fonseca, and D. M. Mount. On the combinatorial complexity of approximating polytopes. Discrete Comput. Geom., 58(4):849–870, 2017.
 [11] S. Arya, G. D. da Fonseca, and D. M. Mount. Optimal approximate polytope membership. In Proc. 28th Annu. ACMSIAM Sympos. Discrete Algorithms, pages 270–288, 2017.
 [12] S. Arya, G. D. da Fonseca, and D. M. Mount. Approximate polytope membership queries. SIAM J. Comput., 47(1):1–51, 2018.
 [13] L. Barba and S. Langerman. Optimal detection of intersections between convex polyhedra. In Proc. 26th Annu. ACMSIAM Sympos. Discrete Algorithms, pages 1641–1654, 2015.
 [14] G. Barequet and S. HarPeled. Efficiently approximating the minimumvolume bounding box of a point set in three dimensions. J. Algorithms, 38(1):91–109, 2001.
 [15] J.D. Boissonnat, E. D. Lange, and M. Teillaud. Minkowski operations for satellite antenna layout. In Proc. 13th Annu. Sympos. Comput. Geom., pages 67–76, 1997.
 [16] E. M. Bronshteyn and L. D. Ivanov. The approximation of convex sets by polyhedra. Siberian Math. J., 16:852–853, 1976.
 [17] T. M. Chan. Approximating the diameter, width, smallest enclosing cylinder, and minimum-width annulus. Internat. J. Comput. Geom. Appl., 12:67–85, 2002.
 [18] T. M. Chan. Faster core-set constructions and data-stream algorithms in fixed dimensions. Comput. Geom. Theory Appl., 35(1):20–35, 2006.
 [19] T. M. Chan. Applications of Chebyshev polynomials to low-dimensional computational geometry. In Proc. 33rd Internat. Sympos. Comput. Geom., pages 26:1–15, 2017.
 [20] B. Chazelle. An optimal algorithm for intersecting three-dimensional convex polyhedra. SIAM J. Comput., 21(4):671–696, 1992.
 [21] B. Chazelle and D. P. Dobkin. Detection is easier than computation. In Proc. 12th Annu. ACM Sympos. Theory Comput., pages 146–153, 1980.
 [22] B. Chazelle and D. P. Dobkin. Intersection of convex objects in two and three dimensions. J. Assoc. Comput. Mach., 34:1–27, 1987.
 [23] B. Chazelle and J. Matoušek. On lineartime deterministic algorithms for optimization problems in fixed dimension. J. Algorithms, 21:579–597, 1996.
 [24] M. de Berg, O. Cheong, M. van Kreveld, and M. Overmars. Computational Geometry: Algorithms and Applications. Springer, 3rd edition, 2010.
 [25] D. P. Dobkin and D. G. Kirkpatrick. Fast detection of polyhedral intersection. Theo. Comp. Sci., 27(3):241–253, 1983.
 [26] D. P. Dobkin and D. G. Kirkpatrick. Determining the separation of preprocessed polyhedra—A unified approach. In Proc. Internat. Colloq. Automata Lang. Prog., pages 400–413, 1990.
 [27] R. M. Dudley. Metric entropy of some classes of sets with differentiable boundaries. J. Approx. Theory, 10(3):227–236, 1974.
 [28] C. A. Duncan, M. T. Goodrich, and E. A. Ramos. Efficient approximation and optimization algorithms for computational metrology. In Proc. Eighth Annu. ACM-SIAM Sympos. Discrete Algorithms, pages 121–130, 1997.
 [29] E. Fogel, D. Halperin, and C. Weibel. On the exact maximum complexity of Minkowski sums of polytopes. Discrete Comput. Geom., 42(4):654–669, 2009.
 [30] X. Guo, L. Xie, and Y. Gao. Optimal accurate Minkowski sum approximation of polyhedral models. In Advanced Intelligent Computing Theories and Applications. With Aspects of Theoretical and Methodological Issues, pages 179–188, 2008.
 [31] D. Halperin, O. Salzman, and M. Sharir. Algorithmic motion planning. In J. E. Goodman, J. O’Rourke, and C. D. Tóth, editors, Handbook of Discrete and Computational Geometry, Discrete Mathematics and its Applications. CRC Press, 2017.
 [32] S. Har-Peled, T. M. Chan, B. Aronov, D. Halperin, and J. Snoeyink. The complexity of a single face of a Minkowski sum. In Proc. Seventh Canad. Conf. Comput. Geom., pages 91–96, 1995.
 [33] P. Jiménez, F. Thomas, and C. Torras. 3D collision detection: A survey. Computers & Graphics, 25(2):269–285, 2001.
 [34] F. John. Extremum problems with inequalities as subsidiary conditions. In Studies and Essays Presented to R. Courant on his 60th Birthday, pages 187–204. Interscience Publishers, Inc., New York, 1948.
 [35] A. Kaul and J. Rossignac. Solid-interpolating deformations: Construction and animation of PIPs. Computers & Graphics, 16(1):107–115, 1992.
 [36] M. Lin and S. Gottschalk. Collision detection between geometric models: A survey. In Proc. of IMA Conference on Mathematics of Surfaces, volume 1, pages 602–608, 1998.
 [37] D. M. Mount. Geometric intersection. In J. E. Goodman, J. O’Rourke, and C. D. Tóth, editors, Handbook of Discrete and Computational Geometry, Discrete Mathematics and its Applications. CRC Press, 2017.
 [38] D. E. Muller and F. P. Preparata. Finding the intersection of two convex polyhedra. Theo. Comp. Sci., 7(2):217–236, 1978.
 [39] J. O’Rourke. Computational Geometry in C. Cambridge University Press, 1998.
 [40] L. Pachter and B. Sturmfels. Algebraic Statistics for Computational Biology, volume 13. Cambridge University Press, 2005.
 [41] R. Schneider. Convex Bodies: The Brunn-Minkowski Theory. Cambridge University Press, 1993.
 [42] M. I. Shamos. Geometric complexity. In Proc. Seventh Annu. ACM Sympos. Theory Comput., pages 224–233, 1975.
 [43] H. R. Tiwary. On the hardness of computing intersection, union and Minkowski sum of polytopes. Discrete Comput. Geom., 40(3):469–479, 2008.
 [44] G. Varadhan and D. Manocha. Accurate Minkowski sum approximation of polyhedral models. Graphical Models, 68(4):343–355, 2006.
 [45] H. Yu, P. K. Agarwal, R. Poreddy, and K. R. Varadarajan. Practical methods for shape fitting and kinetic data structures using coresets. Algorithmica, 52(3):378–402, 2008.