# A PTAS for Euclidean TSP with Hyperplane Neighborhoods

In the Traveling Salesperson Problem with Neighborhoods (TSPN), we are given a collection of geometric regions in some space. The goal is to output a tour of minimum length that visits at least one point in each region. Even in the Euclidean plane, TSPN is known to be APX-hard, which gives rise to studying more tractable special cases of the problem. In this paper, we focus on the fundamental special case of regions that are hyperplanes in the d-dimensional Euclidean space. This case contrasts the much-better understood case of so-called fat regions. While for d=2 an exact algorithm with running time O(n^5) is known, settling the exact approximability of the problem for d=3 has been repeatedly posed as an open question. To date, only an approximation algorithm with guarantee exponential in d is known, and NP-hardness remains open. For arbitrary fixed d, we develop a Polynomial Time Approximation Scheme (PTAS) that works for both the tour and path version of the problem. Our algorithm is based on approximating the convex hull of the optimal tour by a convex polytope of bounded complexity. Such polytopes are represented as solutions of a sophisticated LP formulation, which we combine with the enumeration of crucial properties of the tour. As the approximation guarantee approaches 1, our scheme adjusts the complexity of the considered polytopes accordingly. In the analysis of our approximation scheme, we show that our search space includes a sufficiently good approximation of the optimum. To do so, we develop a novel and general sparsification technique to transform an arbitrary convex polytope into one with a constant number of vertices and, in turn, into one of bounded complexity in the above sense. Hereby, we maintain important properties of the polytope.


## 1 Introduction

The Traveling Salesperson Problem (TSP) is commonly regarded as one of the most important problems in combinatorial optimization. In TSP, a salesperson wishes to find a tour that visits a set of clients in the shortest way possible. It is very natural to consider metric TSP, that is, to assume that the clients are located in a metric space. While Christofides’ famous algorithm [11] then attains an approximation factor of 3/2, the problem is APX-hard even under this assumption [26]. This lower bound and the paramount importance of the problem have motivated the study of more specialized cases, in particular Euclidean TSP (ETSP), that is, metric TSP where the metric space is Euclidean. ETSP admits a Polynomial Time Approximation Scheme (PTAS), that is, a polynomial-time (1+ε)-approximation algorithm for any fixed ε > 0, which we know from the celebrated results of Arora [2] and Mitchell [23]. These results have subsequently been improved and generalized [4, 5, 27].

A very natural generalization of metric TSP is motivated by clients that are not static (as in TSP) but willing to move in order to meet the salesperson. In the Traveling Salesperson Problem with Neighborhoods (TSPN), first studied by Arkin and Hassin in the Euclidean setting [1], we are given a set of reasonably represented (possibly disconnected) regions. The task is to compute a minimum-length tour that visits these regions, that is, a tour that contains at least one point from every region. In contrast to regular TSP, the problem is already APX-hard in the Euclidean plane, even for neighborhoods of relatively low complexity [12, 17]. While the problem has received considerable attention and a common focus has been on identifying natural conditions on the input that admit a PTAS, the answers found so far are arguably not yet satisfactory. For instance, it is not known whether the special case of disjoint connected neighborhoods in the plane is APX-hard [24, 30]. On the other hand, there has been a line of work [7, 9, 10, 15, 24] leading up to a PTAS for “fat” regions in the plane [24] and for a restricted version of such regions (“weakly disjoint”) in general doubling metrics [10]. Here, a region is called fat if the radii of the largest ball included in the region and the smallest ball containing the region are within a constant factor of each other.

In this paper, we focus on the fundamental case in which all regions are hyperplanes (in Euclidean space of fixed dimension d) and give a PTAS, improving upon the best known approximation algorithm, whose guarantee is exponential in d [16]. Not only is the problem itself considered “particularly intriguing” [15], with its complexity status repeatedly posed as an open problem over the past 15 years [15, 16, 24, 30]; it also seems plausible that studying this problem, which is somewhat complementary to the much-better understood case of fat regions, will add techniques to the toolbox for TSPN that may lead towards understanding which cases are tractable in an approximation sense. Indeed, our techniques are novel and quite general: We define a certain type of polytope of bounded complexity and, using a new sparsification technique, we show that one of these polytopes represents the optimal solution well enough. To compute a close approximation to that polytope, we boost the computational power of an LP by enumerating certain crucial properties of the polytope. It is conceivable that especially our sparsification technique has applications beyond TSPN, e.g., in data compression.

#### Further Related Work.

In contrast to regular TSP, TSPN is already APX-hard in the Euclidean plane [6]. For some cases in the Euclidean plane, there is even no polynomial-time constant-factor approximation (unless P = NP), for instance, the case where each region is an arbitrary set of (disconnected) points [28] (Group TSP). The problem remains APX-hard when there are exactly two points in each region [12] or when the regions are line segments of similar lengths [17].

Positive results for TSPN in the Euclidean plane were obtained in the seminal paper of Arkin and Hassin [1], who gave constant-factor approximation algorithms for various cases of bounded neighborhoods, including translates of convex regions and parallel unit segments. The only known approximation algorithm for general bounded neighborhoods (polygons) in the plane is an O(log n)-approximation [21]. Partly in more general metrics, constant-factor approximation algorithms and approximation schemes were obtained for other special cases of bounded regions that are disjoint, fat, or of comparable sizes [6, 7, 10, 15, 24, 25].

We now review results for the case of unbounded neighborhoods, such as lines or planes. For n lines in the plane, the problem can be solved exactly in O(n^5) time by a reduction to the watchman route problem [19] combined with existing algorithms for the latter problem [8, 13, 20, 29]. A constant-factor approximation is possible in linear time [14]. This result uses the fact that the smallest rectangle enclosing the optimal tour is already a good approximation. By a straightforward reduction from ETSP in the plane, the problem becomes NP-hard if we consider lines in three dimensions. For lines in three dimensions, only a recent polylogarithmic-factor approximation algorithm by Dumitrescu and Tóth [16] is known. They tackle the problem by reducing it to a group Steiner tree instance on a geometric graph, a solution to which already yields an approximation of the original instance; then they apply a known approximation algorithm for group Steiner tree. If the neighborhoods are planes in 3D, or hyperplanes in higher constant dimensions, it is even open whether the problem is NP-hard. Only one approximation result has been obtained so far: The linear-time algorithm of Dumitrescu and Tóth [16] finds, for any constant dimension d and any constant ε > 0, an approximation of the optimal tour whose guarantee is exponential in d. Their algorithm generalizes the ideas used for the two-dimensional case [14]. Via a low-dimensional LP, they find a (1+ε)-approximation of the smallest box enclosing the optimal tour. Then they output a Hamiltonian cycle on the vertices of the box as a solution. They observe that any tour visiting all the vertices of the box is a feasible solution and that the size of the box is comparable to the length of the optimal tour. This allows them to relate the length of their solution to the length of the optimal tour. For the three-dimensional case and a sufficiently small ε, their algorithm gives a constant-factor approximation.
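The box-based approach can be illustrated with a small computation: an axis-aligned box intersects a hyperplane exactly when the linear function given by the hyperplane's normal takes values on both sides of the offset somewhere over the box, and the extreme values of a linear function over a box are attained coordinatewise at corners. The following Python sketch is our own illustration (not code from [14, 16]); the function name is hypothetical.

```python
# Illustrative feasibility test behind the box-based algorithms:
# the box [lo, hi] intersects the hyperplane <n, x> = b iff
#   min_{x in box} <n, x>  <=  b  <=  max_{x in box} <n, x>.
# The extremes of a linear function over a box are computed coordinatewise.

def box_intersects_hyperplane(lo, hi, n, b):
    """True iff the box {x : lo[i] <= x[i] <= hi[i]} meets {x : <n, x> = b}."""
    min_val = sum(ni * (li if ni > 0 else ui)
                  for ni, li, ui in zip(n, lo, hi))
    max_val = sum(ni * (ui if ni > 0 else li)
                  for ni, li, ui in zip(n, lo, hi))
    return min_val <= b <= max_val
```

If a box passes this test for every input hyperplane, then by convexity any tour through all of its vertices is feasible, which is why the LP of [16] may restrict attention to boxes intersecting all hyperplanes.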

Observe that all of the above approximation results carry over, with a loss of a factor of 2, to the TSP path problem, where the goal is to find a shortest path visiting all regions (with arbitrary start and end points). For the case of lines in the plane, there is a linear-time constant-factor approximation algorithm [14].

For improving the results on hyperplane neighborhoods, a repeatedly expressed belief is the following: If we identify the smallest convex region intersecting all hyperplanes and scale it up by a polynomial factor to a region R, then R contains the optimal tour. Interestingly, Dumitrescu and Tóth [16] refute this belief by giving an example where no (1+δ)-approximate tour exists within such a region R, for a small enough constant δ > 0. This result makes it unlikely that first narrowing down the search space to a bounded region (such as the box computed by the approximation algorithm of Dumitrescu and Tóth [16]) and then applying local methods is a viable approach to obtaining a PTAS. Indeed, the technique that we present in this paper is much more global.

#### Our Contribution and Techniques.

The main result of this paper is a PTAS for TSP with hyperplane neighborhoods in fixed dimensions. This is a significant step towards settling the complexity status of the problem, which had been posed as an open problem several times over the past 15 years [15, 16, 24, 30].

###### Theorem 1.

For every fixed d ∈ ℕ and every ε > 0, there is a (1+ε)-approximation algorithm for TSP with hyperplane neighborhoods in ℝ^d that runs in strongly polynomial time.

Our technique is based on the observation that the optimal tour Opt can be viewed as the shortest tour visiting all the vertices of a certain polytope, namely conv(Opt), the convex hull of Opt. So, in order to approximate the optimal tour, one may also think about finding a convex polytope with a short and feasible tour on its vertices. In this light, the approximation by Dumitrescu and Tóth [16], which, by using an LP, finds a cuboid of minimum perimeter intersecting all input hyperplanes, can be viewed as a very crude approximation of conv(Opt). Note that forcing the polytope to intersect all hyperplanes makes every tour on its vertices feasible.

The approach we take here can be viewed as an extension of this idea. Namely, we also use an LP (with constantly many variables) to find bounded-complexity polytopes intersecting all input hyperplanes. However, the extension to a (1+ε)-approximation raises three main challenges:

1. In order to get a (1+ε)-approximation, the complexity of the polytope has to increase beyond any bound as ε → 0. We need to come up with a suitable definition of complexity.

2. As the complexity of the polytope increases, we need to handle more and more complicated combinatorics, which makes writing an LP significantly more difficult. For instance, expressing the objective of the LP becomes more challenging.

3. More careful arguments are necessary when comparing the solution of the LP to the optimum solution.

In this paper, we overcome all three challenges by introducing several novel ideas, many of which may be of independent interest. First, we impose the bounded complexity of the considered polytopes by only allowing facets that are parallel to one of constantly many fixed hyperplanes. We define these hyperplanes as those passing through the points of a grid of a certain granularity, which is connected to how we overcome the third challenge.

The idea of the LP that finds a polytope of bounded complexity in this sense is to keep a variable for each of the half-spaces corresponding to these hyperplanes; the variable shifts the half-space along its normal vector. The polytope is then the intersection of all shifted half-spaces. When the polytope is a cuboid, as is the case in [16], one has the advantage that, independently of the values of the shift variables, the combinatorics of the cuboid stay the same (as long as the cuboid has strictly positive volume). This makes it easy to write the vertex coordinates as LP variables and, for each input hyperplane, to select a separated pair of vertices that are on different sides of the hyperplane if and only if the cuboid intersects the hyperplane.

We note that these properties do not hold if we try to obtain a closer approximation of conv(Opt). Indeed, consider a regular pyramid (i.e., a pyramid whose base is a square) and imagine a parallel shift towards the outside of any lateral face of it. This will introduce an extra vertex to the pyramid and turn it into a more general polytope. We note that, in general and in contrast to the cuboid case, the vertex-facet incidence graph changes depending on the values of the shift variables. This makes it impossible to write the vertex coordinates as variables of a single LP. Our idea is to guess which vertices are relevant and which facets they are incident to, that is, we enumerate all such configurations and write an LP with respective vertex variables for each such configuration. Now, since the facets are parallel to fixed hyperplanes, we can use the configuration to compute a separated pair of vertices and write as LP constraints that the polytope intersects all input hyperplanes. Strictly speaking, we also include configurations that do not correspond to convex polytopes, but they do no harm as they only widen our search space.

As the objective of the LP, ideally, we would use the length of the shortest tour on the vertices. Clearly, this objective is highly non-linear. To approximate this function, we make use of more guessing. First, we guess the order in which the vertices are visited, which makes it easy to write the length of the tour in the ℓ1-norm or the ℓ∞-norm. Since we do not want to lose a factor of up to √d in comparison to the ℓ2-norm, we additionally guess the rough direction the tour takes between consecutive vertices. This allows us to write the approximate ℓ2-distance as a linear function.

The third challenge is overcome by turning conv(Opt), the convex hull of an optimum tour Opt, into one of the polytopes in our search space without increasing the length of the shortest tour on the vertices by more than a factor of 1+ε. One way of obtaining a polytope in our search space that is “similar” to conv(Opt) is the following: Take a hypercube that includes conv(Opt) and subdivide it into a grid. Now, for each vertex of conv(Opt), take all vertices of the grid cell containing it, and take the convex hull of all these points. The resulting polytope lies in our search space by the way we choose the hyperplanes.
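As a concrete illustration of this snapping step (a sketch with our own naming, not the paper's code): for granularity g, each vertex is replaced by the 2^d corners of its grid cell, and the convex hull of all replacement corners contains the original polytope because each cell contains its vertex.

```python
import itertools
import math

def cell_corners(v, g):
    """Corners of the axis-aligned grid cell of side length g containing v."""
    lo = [math.floor(x / g) * g for x in v]
    return [tuple(li + ei * g for li, ei in zip(lo, e))
            for e in itertools.product((0, 1), repeat=len(v))]

def snapped_vertex_set(vertices, g):
    """All corner points replacing the given vertices; their convex hull
    contains the convex hull of the original vertices, since every vertex
    lies inside its own grid cell."""
    return {corner for v in vertices for corner in cell_corners(v, g)}
```

Every facet of the hull of the snapped points is spanned by grid points, which is what makes the resulting polytope expressible over the fixed set of grid-induced hyperplanes.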

However, in order to satisfactorily bound the length of the shortest tour on the vertices of the resulting polytope with respect to the optimal tour, the polytope we start from needs to have only few vertices. For instance, if the convex hull of the optimal tour had only constantly many vertices, we could choose the granularity of the grid small enough to transform the optimal tour into a tour of the vertices of the resulting polytope by lengthening it only by a small additive term at each vertex. Since in general we cannot bound the number of vertices of this convex hull, we first transform it into an intermediate polytope that has constantly many vertices, and only then do we apply the above construction. This is where the following structural result, which is likely to have more general applications, is used.

For a general polytope, we show how to select constantly many of its vertices such that, if we scale the convex hull of these selected vertices by a factor of 1+ε with respect to some carefully chosen center, the scaled convex hull contains the original polytope. The proof utilizes properties of the maximum inscribed hyper-ellipsoid, due to John [18] (see also the refinement due to Ball [3] that we use in this paper). This result comes in handy because we can scale the optimal tour in the same way to obtain a tour of the vertices of the intermediate polytope that is at most a factor of 1+ε longer.

We note that our techniques easily extend to the path version of the problem in which the tour need not be closed.

#### Overview of this Paper.

In Section 2, we introduce some notation that we use throughout the paper and make some preliminary observations. In Section 3, we describe an algorithm that computes a (1+ε)-approximation of the shortest TSP tour that satisfies certain conditions. Then, in Section 4, we show that the shortest TSP tour satisfying these conditions is in turn a (1+ε)-approximation of the overall shortest TSP tour. Finally, in Section 5, we discuss remaining open problems and the implications of our work for the TSPN path problem with hyperplanes.

We give a more detailed overview in the following:

• Lemma 2 and Corollary 3 in Section 2 show that we can focus on tours of the vertices of polytopes. We also introduce in Section 2 a notion of constructing polytopes from a fixed set of hyperplanes, by translating the half-spaces associated with those hyperplanes and taking their intersection.

• Section 3 describes the polynomial-time algorithm for finding a (1+ε)-approximation among the set of solutions that are tours of vertices of polytopes constructed from some constant-size set of hyperplanes H_0.

• In the algorithm, we enumerate configurations, permutations, and directional vectors. The configuration describes the structure of the polytope, the permutation describes the order in which the vertices of the polytope are visited, and the directional vectors are used to compute the length of the tour of the vertices of the polytope. We give a more detailed overview in Subsection 3.1.

• For each combination of enumerated values, we construct an LP whose solution is a feasible tour (if a solution exists). To this end, Lemmata 6, 7, and 8 in Subsection 3.2 show that each feasible LP solution is a feasible tour and that each feasible tour corresponds to a feasible solution of some LP.

• In Subsection 3.3, we prove that the tour length can be described by a linear objective function that is at most a factor of 1+ε away from the true tour length. Therefore, the solutions found by the LPs can be compared to find the shortest one.

• In Section 4, we prove that the convex hull of the optimal tour can be approximated by a polytope that can be constructed from a constant-size (for constant d and ε) set of hyperplanes H_0, defined in Definition 10. That is, Lemma 11 says that a tour of the vertices of this polytope is a (1+ε)-approximation of the optimal tour.

• Theorem 12 shows that, for any given convex polytope P, we can pick a center and constantly many vertices of P such that, if we expand these vertices from the center by a factor of 1+ε, the convex hull of the result contains P. The proof of Theorem 12 uses geometric properties that hold for any convex polytope, which we establish in Lemma 14 and Lemma 15.

• The proof of Lemma 11 finally uses a grid onto which the vertices of the polytope resulting from Theorem 12 are snapped, such that the facets of the resulting polytope are parallel to hyperplanes in H_0.

• Combining the results from Sections 3 and 4 shows that we can find a constant-size (for constant d and ε) set of hyperplanes such that the shortest tour of the vertices of a polytope constructed from that set is a (1+ε)-approximation of the optimal tour, and that we can approximate such a tour within a factor of 1+ε in polynomial time (for constant d and ε). This implies Theorem 1.

## 2 Preliminaries

#### Problem Definition.

Throughout this paper, we fix a dimension d and restrict ourselves to the Euclidean space ℝ^d. The input of TSPN for hyperplanes consists of a set H of n hyperplanes. Every hyperplane is given by d integers a_1, …, a_d, not all of which are 0, and an integer b, and it contains all points (x_1, …, x_d) that satisfy a_1x_1 + ⋯ + a_dx_d = b. A tour is a closed polyline; it is called feasible (or a feasible solution) if it visits every hyperplane of H, that is, if it contains a point of every hyperplane of H. A tour is optimal (or an optimal solution) if it is a feasible tour of minimum length. The goal is to find an optimal tour. Given H, we call any such optimal tour Opt. The length of a tour T is its total Euclidean length and is denoted by |T|.

#### Notation.

Let H be a set of hyperplanes. We denote by H_0 the set of hyperplanes that contains precisely, for each hyperplane in H, the parallel hyperplane that goes through the origin. Further, ±H_0 is the set of half-spaces that contains precisely, for each hyperplane h in H_0, the two half-spaces +h and −h bordered by h. Conversely, for any half-space ±h, we denote by h the hyperplane bordering it. Consider a set of half-spaces that contains precisely, for each half-space in ±H_0, a parallel translate of that half-space. The intersection of all these half-spaces is a (possibly unbounded) polyhedron. We say that every bounded polyhedron (that is, every polytope) arising in this way is constructed from H_0. For a polyhedron P, V(P) denotes the set of its vertices. A tour of a point set V is a closed polyline that contains every point of V. Throughout this paper, Opt(V) denotes any shortest tour of V, and conv(V) denotes the convex hull of V. For a tour T, conv(T) denotes the convex hull of the tour. An expansion or scaling of a point set X centered at a point c with scaling factor λ is the set of points {c + λ(x − c) : x ∈ X}. A fully-dimensional polytope is a bounded and fully-dimensional polyhedron, i.e., a polyhedron that is bounded and contains a d-dimensional ball of strictly positive radius. Similarly, a set of hyperplanes is non-trivial if a fully-dimensional polytope can be constructed from it. Unless otherwise specified, we use hyperplane, hypercube, etc. to refer to the corresponding objects in d-dimensional space.
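The expansion just defined is easy to state in code. The following minimal Python sketch (our own illustration; the function name is hypothetical) applies the map x ↦ c + λ(x − c) to a point set.

```python
def expand(points, c, lam):
    """Expansion of a point set centered at c with scaling factor lam:
    every point x is mapped to c + lam * (x - c)."""
    return [tuple(ci + lam * (xi - ci) for ci, xi in zip(c, x)) for x in points]
```

For lam > 1 this moves every point away from the center c, which is how Theorem 12 later blows up the convex hull of the selected vertices until it contains the whole polytope.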

#### Preliminary Observations.

Let Opt be an optimal tour for H and suppose that we know the polytope conv(Opt). By the following lemma, it suffices to find an optimal tour of the vertices of conv(Opt).

###### Lemma 2.

Let P be any convex polytope. Every tour of the vertices of P is a feasible solution to H if and only if P intersects every hyperplane of H.

###### Proof.

Let P be a convex polytope that intersects every hyperplane of H, and let T be any tour on the vertices of P. Consider any hyperplane h of H. If h contains a vertex of P, it is visited by T. If it does not contain any vertex of P, it must intersect the interior of P and thus separate at least one pair of vertices of P. Hence, any path connecting that pair must intersect h; thus, the tour T visits h in this case as well.

For the only-if part, assume that there exists a hyperplane h ∈ H that P does not intersect. Since, by convexity, any tour on the vertices of P is contained in P, no such tour can visit h. ∎
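The separation argument in this proof can be phrased as a one-line test (a sketch with a hypothetical name, not part of the paper): a hyperplane ⟨n, x⟩ = b meets the convex hull of a finite vertex set if and only if the signed values ⟨n, v⟩ − b are neither all strictly positive nor all strictly negative.

```python
def hull_meets_hyperplane(vertices, n, b):
    """True iff {x : <n, x> = b} intersects the convex hull of `vertices`."""
    vals = [sum(ni * vi for ni, vi in zip(n, v)) - b for v in vertices]
    return min(vals) <= 0 <= max(vals)
```

This is exactly the condition the feasibility constraints of the later LP enforce, one separated pair of vertices per input hyperplane.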

###### Corollary 3.

Any optimal tour of the vertices of conv(Opt) is also an optimal tour for H.

## 3 Our Algorithm

In this section, we show the following lemma.

###### Lemma 4.

Let ε > 0. Given a non-trivial set H_0 of hyperplanes and the set H of n input hyperplanes, there is an algorithm that computes, in time strongly polynomial in the size of the input, a feasible tour T with length

 |T| ≤ (1+ε)⋅|T′|

for all feasible tours T′ for which there exists a polytope P constructed from H_0 such that T′ is a tour of the vertices of P.

### 3.1 Overview of the algorithm

Consider now a non-trivial set of hyperplanes H_0. In order to approximate the shortest feasible tour on the vertices of any polytope constructed from H_0, we first enumerate a few objects that correspond to properties of this tour and polytope (see Lemma 2). The first of these objects is a configuration:

###### Definition 5.

A configuration with base set ±H_0 is a set C of subsets of ±H_0 such that, for every c ∈ C, the intersection of the hyperplanes bordering the half-spaces in c is a single point. Moreover, there is no pair c, c′ ∈ C with c ≠ c′ such that c ⊆ c′ or c′ ⊆ c.

Note that, since all the hyperplanes in H_0 pass through the origin, this single point is actually the origin for each c ∈ C.

We enumerate the following objects:

1. A configuration C with the base set ±H_0.

2. A permutation of C (or of the corresponding set of vertices).

3. A sequence of |C| unit vectors that express the rough direction of the edge connecting the corresponding consecutive vertices. The vectors are chosen from a fixed finite set of directions, so that there are only constantly many choices for each of them.

Since |±H_0| is constant, only constantly many combinations of these objects need to be enumerated. Note that each vertex of a polytope P constructed from H_0 is, by definition, the intersection of at least d hyperplanes parallel to those in H_0. Hence, each vertex v of P corresponds to one subset c ⊆ ±H_0, namely the half-spaces whose translates define the facets incident to v; the hyperplanes bordering such a subset intersect in a single point. Further, none of these sets is contained in another. Consequently, P corresponds to at least one configuration C that we enumerate. Further, there is some order of the elements in C that corresponds to the order in which the shortest tour visits the vertices of P, and there is a sequence of unit vectors such that each of them is roughly the direction of the segment between the vertices corresponding to consecutive elements of that order. We then say that P respects this order under this sequence of vectors.

For each of the constantly many enumerated combinations, we write an LP that has size polynomial in n and constantly many variables. Each convex polytope constructed from H_0 that corresponds to the configuration C and respects the enumerated order under the enumerated vectors, if such a polytope exists, corresponds to some solution of the LP. Such a solution is a feasible solution for the TSPN problem with input instance H if and only if the convex polytope intersects all input hyperplanes in H (see also Lemma 2). The objective value is up to a factor of 1+ε times the length of the tour that visits the vertices in the enumerated order. We make sure that, even though the search space includes more solutions than those corresponding to the aforementioned convex polytopes, the tour found by any of the LPs is feasible (note that some LPs may not return any tour). Thus, in order to find the desired solution, we can simply take the shortest tour output over all of the many LPs. Note that, since our objective function is only approximate, our solution may (strictly speaking) not come from the LP that corresponds to the optimum.

Next, we describe the construction of the LPs in more detail. The LP maintains shift variables, each of which shifts a different half-space in ±H_0 along its respective normal vector. We also write the coordinates of the vertices that correspond to the different elements of C as LP variables. We refer to these variables as vertex variables. Further, in a polytope solution, the vertex variables are exactly the vertices of the convex polytope that is the intersection of the half-spaces corresponding to the values of the shift variables in that solution.

The tour found by the LP is the one that visits the vertices in the enumerated order. By Lemma 2, this tour is feasible if the convex hull of the vertices intersects each input hyperplane. To ensure this, we use an idea similar to that of Dumitrescu and Tóth [16]: For each input hyperplane, we select two vertices (the separated pair) and write a feasibility constraint requiring the two vertices to be on different sides of the hyperplane. This ensures that the convex hull of the vertices intersects each hyperplane and thus any tour that visits all its vertices is feasible (Lemma 2).

Note that feasible non-polytope solutions also yield feasible tours. However, as we will see later, we can restrict the further discussion to polytope solutions. Consider a polytope solution and denote by P the convex polytope bounded by the accordingly shifted half-spaces in ±H_0. If we choose the separated pair in an arbitrary way, P may intersect all hyperplanes while the LP solution that corresponds to P is, contrary to what is required, still infeasible. We fix this by choosing the separated pair more carefully. Let n⃗ be the normal vector of some input hyperplane h. Note that the vertices v_min and v_max that minimize and maximize the dot product ⟨n⃗, v⟩ over the vertices v of P, respectively, are on different sides of h if and only if P intersects h. This suffices as a definition of the separated pair: Using that the configuration is fixed, we can show that, independently of the values of the shift variables, v_min and v_max always correspond to the same elements of C, and they can be computed efficiently.

Finally, we need to express the length of the tour that visits the vertices in the enumerated order as a linear objective function. This is straightforward in the ℓ1-norm, but using the ℓ1-norm instead of the ℓ2-norm would result in losing a factor of up to √d. In the LP corresponding to a given sequence of direction vectors, however, we only have to consider convex polytopes that respect the enumerated order under that sequence. Therefore, we first add angle constraints that make sure that, for each pair of consecutive vertices, the direction between them roughly matches the corresponding enumerated vector. Now, knowing the rough direction the tour takes between consecutive vertices, we can write the approximate traveled distances as linear functions of the coordinates of the involved vertices.
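To make this linearization concrete, here is a small numerical sketch (our own illustration, with assumed names): if the segment from p to q is known to have direction within a small angle δ of a guessed unit vector u, then the linear function ⟨u, q − p⟩ underestimates the Euclidean length |q − p| by a factor of at most cos δ, so summing these linear terms over consecutive tour vertices gives an objective within the desired accuracy.

```python
import math

def linear_length_estimate(p, q, u):
    """Linear-in-(p, q) proxy <u, q - p> for the Euclidean length |q - p|.
    Exact when u equals the direction of q - p; a cos(delta)-underestimate
    when the true direction deviates from u by an angle of at most delta."""
    return sum(ui * (qi - pi) for ui, pi, qi in zip(u, p, q))

p, q = (0.0, 0.0), (3.0, 4.0)        # true length 5, true direction (0.6, 0.8)
exact = linear_length_estimate(p, q, (0.6, 0.8))   # guessed direction is exact
s = 1 / math.sqrt(2)
rough = linear_length_estimate(p, q, (s, s))       # direction off by ~8 degrees
```

The finer the discretization of directions, the closer cos δ is to 1, which is how the approximation guarantee of the objective is tuned to 1+ε.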

We proceed in this section by first showing that every possible convex polytope constructed from H_0 that intersects all input hyperplanes is a feasible solution to at least one of the constructed LPs. Moreover, we prove that any feasible solution to any of the constructed LPs indeed corresponds to a feasible tour of the input hyperplanes. Then, we show that we can use a linear function to approximate the tour length of an LP solution within a factor of 1+ε. Finally, we show how to use these results to prove Lemma 4.

### 3.2 LP Variables and Feasibility Constraints

Now, we describe the LP more formally. In this subsection, we introduce the variables of the LP and focus on feasibility constraints. This part solely depends on the enumerated configuration C, the given hyperplanes H_0, and the input hyperplanes H. In Lemmata 7 and 8, we show that the emerging search space is not too small and not too large, respectively.

For each of the half-spaces ±h ∈ ±H_0, there is an unconstrained shift variable ρ_{±h} in the LP. Additionally, for each c ∈ C, there are d unconstrained vertex variables that correspond to the coordinates of a point x_c.

 ρ_{±h} ∈ ℝ ∀ ±h ∈ ±H_0, (1)
 x_c ∈ ℝ^d ∀ c ∈ C. (2)

These are the only variables of the linear program.

For ±h ∈ ±H_0, let n⃗_{±h} denote the normal vector of ±h (by convention, pointing from the bordering hyperplane into the half-space). The following type of constraint relates the two types of variables:

 ⟨x⃗_c − ρ_{±h} n⃗_{±h}, n⃗_{±h}⟩ = 0 ∀ c ∈ C, ±h ∈ c. (3)

Note that here we use the fact that the hyperplanes bordering the half-spaces in ±H_0 pass through the origin.

Now, we wish to compute for each input hyperplane h ∈ H a separated pair of vertices in order to write feasibility constraints. To do so, we define a directed graph G in which each arc a is equipped with a direction vector d⃗_a. We let the vertex set of G be C, so the vertices of G can be thought of as the vertices of the polytope that correspond to the elements of C.

Consider any c ∈ C; we define the arcs whose tail is c. Consider any c′ ∈ C with c′ ≠ c. We first check whether the intersection of the hyperplanes bordering the half-spaces in c ∩ c′ is one-dimensional, that is, a line. If it is not, we do not add the arc (c, c′). If it is, let d⃗ be a direction vector of that line. We distinguish three cases. Note that c ∖ c′ is nonempty.

1. For all ±h ∈ c ∖ c′, we have ⟨d⃗, n⃗_{±h}⟩ ≥ 0, and there is a ±h ∈ c ∖ c′ with ⟨d⃗, n⃗_{±h}⟩ > 0. In this case, add the arc (c, c′) and set d⃗_{(c,c′)} = d⃗.

2. For all ±h ∈ c ∖ c′, we have ⟨d⃗, n⃗_{±h}⟩ ≤ 0, and there is a ±h ∈ c ∖ c′ with ⟨d⃗, n⃗_{±h}⟩ < 0. In this case, add the arc (c, c′) and set d⃗_{(c,c′)} = −d⃗.

3. We either have ⟨d⃗, n⃗_{±h}⟩ = 0 for all ±h ∈ c ∖ c′, or there are ±h, ±h′ ∈ c ∖ c′ such that ⟨d⃗, n⃗_{±h}⟩ > 0 and ⟨d⃗, n⃗_{±h′}⟩ < 0. In this case, do not add the arc (or skip the configuration altogether, because it is not relevant).

Now consider some hyperplane $h_i$, $i \in I$, and let $\vec{n}_i$ be the normal vector of $h_i$. At least for the “relevant” configurations, we would like to select vertices $\vec{s}_i^{\,+}$ and $\vec{s}_i^{\,-}$ among the $\vec{x}_c$, $c \in C$, such that $\langle \vec{s}_i^{\,+}, \vec{n}_i \rangle$ is maximized and $\langle \vec{s}_i^{\,-}, \vec{n}_i \rangle$ is minimized. We describe how to compute $\vec{s}_i^{\,+}$ by a simplex-like method; the computation of $\vec{s}_i^{\,-}$ is symmetric. Start with a token in an arbitrary vertex $c \in C$. Whenever the token is in a vertex $c$ such that there is an arc $(c, c')$ with $\langle \vec{d}_{(c,c')}, \vec{n}_i \rangle > 0$, move the token along an arbitrary such arc. Whenever there is no such arc, output the current vertex. If the token ever visits a vertex twice, output an arbitrary vertex (or skip the configuration altogether, because it is irrelevant).
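The token walk can be sketched as follows. The graph is a hypothetical stand-in for $G$ (here: the 1-skeleton of the unit square, with each arc labeled by the direction of the corresponding edge); `token_walk` and `inner` are illustrative names, not the paper's notation.

```python
# Sketch of the simplex-like token walk: from an arbitrary start vertex,
# follow any outgoing arc whose direction vector has positive inner product
# with n until no such arc exists.
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def token_walk(arcs, start, n, max_steps=1000):
    """arcs: vertex -> list of (neighbor, direction vector of the arc)."""
    v = start
    for _ in range(max_steps):
        improving = [(w, d) for (w, d) in arcs[v] if inner(d, n) > 0]
        if not improving:
            return v            # no improving arc: output the current vertex
        v = improving[0][0]     # move along an arbitrary improving arc
    raise RuntimeError("token cycled; configuration is irrelevant")

# Hypothetical example graph: 1-skeleton of the unit square.
square = {
    (0, 0): [((1, 0), (1, 0)), ((0, 1), (0, 1))],
    (1, 0): [((0, 0), (-1, 0)), ((1, 1), (0, 1))],
    (1, 1): [((1, 0), (0, -1)), ((0, 1), (-1, 0))],
    (0, 1): [((0, 0), (0, -1)), ((1, 1), (1, 0))],
}
s_plus = token_walk(square, (0, 0), (1, 1))   # -> (1, 1), maximizing <x, n>
```

On a convex polytope, a vertex with no improving arc is a global maximizer of $\langle \cdot, \vec{n} \rangle$, which is why this local stopping rule suffices (this is exactly what Lemma 6 below establishes).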

We prove that this procedure fulfills its purpose. Towards this, for a polytope $P$, define $\mathcal{C}(P)$ to be the set that, for each vertex $v$ of $P$, contains the set of half-spaces corresponding to the facets incident to $v$.

###### Lemma 6.

Consider some polytope $P$ such that $\mathcal{C}(P) = C$ holds. Then $\langle \vec{s}_i^{\,+}, \vec{n}_i \rangle = \max_{\vec{p} \in P} \langle \vec{p}, \vec{n}_i \rangle$ and $\langle \vec{s}_i^{\,-}, \vec{n}_i \rangle = \min_{\vec{p} \in P} \langle \vec{p}, \vec{n}_i \rangle$.

###### Proof.

Since there is a natural correspondence between the vertices in $C$ and those in $P$, we refer to any vertex in $G$ and the corresponding one in $P$ by the same name. We first show that, for any $c, c' \in C$, there is an arc from $c$ to $c'$ in $G$ if and only if there is an edge between $c$ and $c'$ in $P$, and that this arc is labeled with the direction vector from $c$ to $c'$ in $P$. To see this, first assume that there is an arc from $c$ to $c'$ in $G$. This means that the intersection of the hyperplanes bordering the half-spaces in $c \cap c'$ is one-dimensional, implying that the intersection of the facets incident to $c$ and $c'$ in $P$ (which are translates of these hyperplanes) is one-dimensional as well, in turn implying that there is an edge between $c$ and $c'$ in $P$. Similarly, if there is an edge between $c$ and $c'$ in $P$, the intersection of the hyperplanes bordering the half-spaces in $c \cap c'$ is one-dimensional, so let $\vec{d}$ be a direction vector of this line again. Thus, when constructing $G$, we distinguish Cases 1, 2, and 3 to determine whether the arc $(c, c')$ exists in $G$. First assume that, for all $\pm h \in c \setminus c'$, $\langle \vec{d}, \vec{n}_{\pm h} \rangle = 0$. Since also $\langle \vec{d}, \vec{n}_{\pm h} \rangle = 0$ for all $\pm h \in c \cap c'$, that would however imply that $\vec{d}$ is orthogonal to the normals of all half-spaces in $c$, a contradiction to the vertex $c$ being zero-dimensional. Now note that $\langle \vec{d}, \vec{n}_{\pm h} \rangle \ge 0$ for all $\pm h \in c \setminus c'$ if $\vec{d}$ points from $c$ towards the polytope, that is, from $c$ to $c'$; similarly, $\langle \vec{d}, \vec{n}_{\pm h} \rangle \le 0$ for all $\pm h \in c \setminus c'$ if $\vec{d}$ points from $c$ away from the polytope. So indeed either $\langle \vec{d}, \vec{n}_{\pm h} \rangle \ge 0$ for all $\pm h \in c \setminus c'$, or $\langle \vec{d}, \vec{n}_{\pm h} \rangle \le 0$ for all $\pm h \in c \setminus c'$, and there is an arc from $c$ to $c'$ in $G$ with the label as claimed.

Having established this close relationship between $G$ and $P$, the claim essentially follows from the correctness of the simplex method (in the non-degenerate case): If the maximization objective (similarly for minimization) can be improved from a vertex $c$ to another point of $P$, it is a standard fact from convex geometry that there is another vertex $c'$ adjacent to $c$ such that $\langle \vec{x}_{c'}, \vec{n}_i \rangle > \langle \vec{x}_c, \vec{n}_i \rangle$, or equivalently $\langle \vec{x}_{c'} - \vec{x}_c, \vec{n}_i \rangle > 0$. Further, $\langle \vec{x}_{c'} - \vec{x}_c, \vec{n}_i \rangle > 0$ if and only if $\langle \vec{d}_{(c,c')}, \vec{n}_i \rangle > 0$ by the above correspondence. Hence we move the token to improve the objective if and only if it is possible. It is impossible to cycle, because $\langle \vec{x}_c, \vec{n}_i \rangle$ naturally serves as a (strict) potential; the procedure terminates, because there is only a finite number of vertices. ∎

Now, for all $i \in I$, let $\vec{\gamma}_i$ denote a fixed point on the hyperplane $h_i$. We write LP constraints that force $\vec{s}_i^{\,+}$ and $\vec{s}_i^{\,-}$ to be on different sides of $h_i$:

$$\langle \vec{s}_i^{\,+} - \vec{\gamma}_i,\ \vec{n}_i \rangle \ge 0 \qquad \forall i \in I, \tag{4}$$
$$\langle \vec{s}_i^{\,-} - \vec{\gamma}_i,\ \vec{n}_i \rangle \le 0 \qquad \forall i \in I. \tag{5}$$

We now show that these constraints fulfill their purpose and start with a lemma that, informally speaking, says that the constraints are not too restrictive.

###### Lemma 7.

For every polytope $P$ with $\mathcal{C}(P) = C$ that intersects each hyperplane in $I$, there exist $\vec{s}_i^{\,+}$ and $\vec{s}_i^{\,-}$ that satisfy (4) and (5).

###### Proof.

Let $P$ with $\mathcal{C}(P) = C$ intersect each hyperplane in $I$. By Lemma 6, the vertices $\vec{s}_i^{\,+}$ and $\vec{s}_i^{\,-}$ respectively maximize and minimize $\langle \cdot, \vec{n}_i \rangle$ over all points $\vec{p} \in P$.

Now let $\vec{p}^\star$ be some point in the intersection of $P$ and $h_i$, i.e., $\vec{p}^\star \in P \cap h_i$; then we have

$$\langle \vec{p}^\star - \vec{\gamma}_i,\ \vec{n}_i \rangle = 0.$$

Thus,

$$\langle \vec{s}_i^{\,+} - \vec{\gamma}_i,\ \vec{n}_i \rangle = \langle \vec{s}_i^{\,+} - \vec{\gamma}_i - \vec{p}^\star + \vec{\gamma}_i,\ \vec{n}_i \rangle = \langle \vec{s}_i^{\,+} - \vec{p}^\star,\ \vec{n}_i \rangle \ge 0,$$

where the inequality comes from the fact that $\vec{s}_i^{\,+}$ maximizes $\langle \cdot, \vec{n}_i \rangle$ over $P$. Similarly,

$$\langle \vec{s}_i^{\,-} - \vec{\gamma}_i,\ \vec{n}_i \rangle = \langle \vec{s}_i^{\,-} - \vec{\gamma}_i - \vec{p}^\star + \vec{\gamma}_i,\ \vec{n}_i \rangle = \langle \vec{s}_i^{\,-} - \vec{p}^\star,\ \vec{n}_i \rangle \le 0,$$

where the inequality comes from the fact that $\vec{s}_i^{\,-}$ minimizes $\langle \cdot, \vec{n}_i \rangle$ over $P$. ∎

We next show that, informally, our constraints are not too general either. Each feasible solution of an LP will later be associated with a tour that visits all vertices $\vec{x}_c$ for $c \in C$.

###### Lemma 8.

Let $(\rho, \vec{x})$ be a solution that satisfies (1)–(5). Then any tour $T$ that visits all vertices $\vec{x}_c$ for $c \in C$ visits all the hyperplanes in $I$.

###### Proof.

Consider some hyperplane $h_i$ and note that (4) and (5) hold for the points $\vec{s}_i^{\,+}$ and $\vec{s}_i^{\,-}$, which are visited by $T$. Thus, by the linearity of the dot product and the Intermediate Value Theorem, there is a point $\vec{p}_i$ on $T$ such that

$$\langle \vec{p}_i - \vec{\gamma}_i,\ \vec{n}_i \rangle = 0,$$

meaning that $h_i$ is visited by $T$. ∎
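The Intermediate Value argument is constructive: from the two signed quantities in (4) and (5), linear interpolation yields the crossing point explicitly. A small sketch with hypothetical inputs (`signed_dist` and `crossing_point` are illustrative names):

```python
def signed_dist(p, gamma, n):
    """Signed distance (up to |n|) of p to the hyperplane <x - gamma, n> = 0."""
    return sum((pi - gi) * ni for pi, gi, ni in zip(p, gamma, n))

def crossing_point(s_plus, s_minus, gamma, n):
    """Point on the segment [s_plus, s_minus] lying on the hyperplane,
    assuming (4) and (5): signed_dist(s_plus) >= 0 >= signed_dist(s_minus)."""
    a, b = signed_dist(s_plus, gamma, n), signed_dist(s_minus, gamma, n)
    assert a >= 0 >= b
    if a == b:                      # both points already on the hyperplane
        return s_plus
    t = a / (a - b)                 # Intermediate Value Theorem interpolation
    return tuple(p + t * (q - p) for p, q in zip(s_plus, s_minus))

# Hyperplane x + y = 1, taking gamma = (1, 0) and n = (1, 1).
p = crossing_point((2.0, 2.0), (0.0, 0.0), (1.0, 0.0), (1.0, 1.0))  # -> (0.5, 0.5)
```

For the segment from $(2,2)$ to $(0,0)$ the signed distances are $3$ and $-1$, so $t = 3/4$ and the crossing point is $(0.5, 0.5)$.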

### 3.3 Objective Function

Now we show that, for any set of vertices that satisfy (1)–(3), we can approximate the length of a TSP tour of those vertices within a factor of $(1+\varepsilon)$. Since we can only write linear functions as objective functions of our LP, we make use of the enumeration of the order $\sigma$ in which the vertices are visited, and of the approximate direction vectors. Recall that we construct an LP for each such enumerated combination.

We make the enumeration of the direction vectors more precise. Given the order $\sigma$ in which the vertices are visited, we wish to approximate the distance between consecutive pairs of vertices by a linear expression. To do so, consider some vector $\vec{v}$ whose length we would like to approximate. We find a vector whose length can be approximated by a linear function and that has approximately the same direction as $\vec{v}$. First, we enumerate which of the $d$ coordinates of $\vec{v}$ is largest in absolute value. We indicate this coordinate by $\ell_{\max}$. Now, for each other coordinate $\ell$, we guess the ratio $r_\ell \approx |\vec{v}_\ell| / |\vec{v}_{\ell_{\max}}|$ by enumeration. Based on these guessed ratios, we then express the distance between the two consecutive vertices by a linear expression (denoted by the function $\mathrm{len}$ in the following) that is at most a factor of $(1+\varepsilon)$ away from the true distance (denoted by $|\vec{v}|$ in the following). To guess the coordinates of the direction vector, we finally try both possible signs of $\vec{v}_\ell$ as well; denoting this sign by $s_\ell$, the guessed $\ell$-th coordinate of the direction vector is equal to $s_\ell r_\ell$.

Let

$$\delta \le \min\left\{\sqrt{\frac{(1+\varepsilon)^2 - 1}{d}},\ \sqrt{\frac{1-(1+\varepsilon)^{-2}}{d}},\ \varepsilon\right\}.$$

For ratios between $\delta$ and $1$, we enumerate in multiplicative steps of $(1+\delta)$, starting at $\delta$ and ending at the first value $(1+\delta)^k \delta$ such that $(1+\delta)^k \delta \ge 1$. For ratios in $[0, \delta)$, we guess $r_\ell$ equal to $\delta/2$. Note that there are $O(\log_{1+\delta}(1/\delta))$ many values in this enumeration. Now, we obtain that for one of the enumerated $r_\ell$, we have that $|\vec{v}_\ell|/|\vec{v}_{\ell_{\max}}| \in [(1+\delta)^{-1} r_\ell, (1+\delta) r_\ell]$, if $|\vec{v}_\ell|/|\vec{v}_{\ell_{\max}}| \ge \delta$, or $|\vec{v}_\ell|/|\vec{v}_{\ell_{\max}}| \in [r_\ell - \tfrac{1}{2}\delta, r_\ell + \tfrac{1}{2}\delta]$, if $|\vec{v}_\ell|/|\vec{v}_{\ell_{\max}}| < \delta$.
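The enumeration can be sketched as follows; `ratio_grid` and `guess` are hypothetical helper names. The grid contains $\delta/2$ (covering all ratios below $\delta$) and the multiplicative sequence $\delta, (1+\delta)\delta, (1+\delta)^2\delta, \dots$ up to the first value at least $1$.

```python
def ratio_grid(delta):
    """Candidate values for a ratio r_l as described above."""
    grid = [delta / 2]        # single guess covering ratios in [0, delta)
    r = delta
    while r < 1:
        grid.append(r)
        r *= 1 + delta
    grid.append(r)            # first value (1 + delta)^k * delta >= 1
    return grid

def guess(ratio, delta, grid):
    """Return a grid value satisfying condition (6) for the given ratio."""
    for r in grid:
        if ratio >= delta and r / (1 + delta) <= ratio <= r * (1 + delta):
            return r
        if ratio < delta and abs(ratio - r) <= delta / 2:
            return r
    return None
```

Every ratio in $[0, 1]$ is covered by some grid value, and the grid has $O(\log_{1+\delta}(1/\delta))$ entries.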

###### Lemma 9.

Let $\varepsilon > 0$ and let $\delta$ be chosen as above. Furthermore, let $\vec{v} \in \mathbb{R}^d$ be a nonzero vector with maximal absolute coordinate $\vec{v}_{\ell_{\max}}$, and assume that for all $\ell \in \{1, \dots, d\}$ we have

$$\frac{|\vec{v}_\ell|}{|\vec{v}_{\ell_{\max}}|} \in \begin{cases}\left[(1+\delta)^{-1} r_\ell,\ (1+\delta)\, r_\ell\right] & \text{if } |\vec{v}_\ell|/|\vec{v}_{\ell_{\max}}| \ge \delta, \\ \left[r_\ell - \tfrac{1}{2}\delta,\ r_\ell + \tfrac{1}{2}\delta\right] & \text{if } |\vec{v}_\ell|/|\vec{v}_{\ell_{\max}}| < \delta. \end{cases} \tag{6}$$

Then it holds that

$$\frac{1}{1+\varepsilon}\,|\vec{v}| \le \mathrm{len}(\vec{v}) \le (1+\varepsilon)\,|\vec{v}|,$$

where

$$\mathrm{len}(\vec{v}) = |\vec{v}_{\ell_{\max}}| \sqrt{\sum_{\ell=1}^{d} r_\ell^2}.$$
###### Proof.

Denote by $D_\ge$ the coordinates $\ell$ that are not the maximal coordinate (i.e., $\ell \ne \ell_{\max}$) and for which $|\vec{v}_\ell| / |\vec{v}_{\ell_{\max}}| \ge \delta$ holds. By $D_<$ denote the coordinates $\ell \ne \ell_{\max}$ such that $|\vec{v}_\ell| / |\vec{v}_{\ell_{\max}}| < \delta$ holds. The length of $\vec{v}$ is at least

 |→v| = ⎷d∑ℓ=1→vℓ2=√→vℓmax2+∑ℓ∈D≥→vℓ2+∑ℓ∈D<→vℓ2 ≥ ⎷→vℓmax2+∑ℓ∈D≥(rℓ1+δ→vℓmax)2+∑ℓ∈D<0 ≥ ⎷→vℓmax2+∑ℓ∈D≥→vℓmax2(1+δ)2rℓ2+∑ℓ∈D<→vℓmax2(rℓ2−δ2) ≥ ⎷(1−dδ2)→vℓmax2+→vℓmax2(1+δ)2∑ℓ∈D≥rℓ2+→vℓmax2∑ℓ∈D

where the last inequality holds by the choice of $\delta$. Moreover, the length of $\vec{v}$ is at most

 |→v| = ⎷d∑ℓ=1→vℓ2=√→vℓmax2+∑ℓ∈D≥→vℓ2+∑ℓ∈D<→vℓ2 ≤√→vℓmax2+∑ℓ∈D≥((1+δ)rℓ→vℓmax)2+∑ℓ∈D<δ2→vℓmax2 ≤√(1+dδ2)→vℓmax2+(1+δ)2→vℓmax2∑ℓ∈D≥rℓ2 ≤(1+ε)|→vℓmax| ⎷d∑ℓ=1rℓ2=(1+ε)len(→v),

where the last inequality holds by the choice of $\delta$. ∎
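The bounds of Lemma 9 are easy to check numerically. The sketch below uses arbitrary example values $\varepsilon = 0.5$ and $d = 3$ (not from the paper), evaluates $\mathrm{len}$ with ratios that satisfy (6) by construction — the exact ratio for large coordinates and $\delta/2$ for small ones — and verifies the two-sided bound on random vectors:

```python
import math
import random

# Example parameters; delta is chosen as in the bound preceding Lemma 9.
eps, d = 0.5, 3
delta = min(math.sqrt(((1 + eps) ** 2 - 1) / d),
            math.sqrt((1 - (1 + eps) ** -2) / d),
            eps)

def len_approx(v):
    """Evaluate len(v) = |v_lmax| * sqrt(sum r_l^2) with ratios satisfying (6):
    the exact ratio when it is at least delta, and delta / 2 otherwise."""
    vmax = max(abs(x) for x in v)
    ratios = [abs(x) / vmax if abs(x) / vmax >= delta else delta / 2 for x in v]
    return vmax * math.sqrt(sum(r * r for r in ratios))

random.seed(0)
for _ in range(1000):
    v = [random.uniform(-1, 1) for _ in range(d)]
    true_len = math.sqrt(sum(x * x for x in v))
    # the two-sided bound of Lemma 9
    assert true_len / (1 + eps) <= len_approx(v) <= (1 + eps) * true_len
```

The ratio of the maximal coordinate is exactly $1 \ge \delta$, so it is always taken exactly, matching the convention $r_{\ell_{\max}} = 1$ in the definition of $\mathrm{len}$.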

Now we are ready to complete the LP. Recall that, for each $c$ in the enumerated configuration $C$, $\vec{x}_c$ is the position of the vertex corresponding to $c$. Furthermore, $\sigma$ is the enumerated order of the vertices corresponding to the elements of $C$. We would like to write the following objective function:

$$\min \sum_{k=1}^{|C|} \mathrm{len}\left(\vec{x}_{\sigma_k} - \vec{x}_{\sigma_{k+1}}\right), \tag{7}$$

where $\sigma_{|C|+1} = \sigma_1$ and $\mathrm{len}$ is a linear function with coefficients given by the enumerated direction vector.

For this to be a sufficient approximation of the actual length of the tour, however, $\vec{x}_{\sigma_{k+1}} - \vec{x}_{\sigma_k}$ needs to point in a certain direction. In particular, according to Lemma 9, it needs to fulfill (6) according to the enumeration. Towards this, for $\ell \in \{1, \dots, d\}$ and $k \in \{1, \dots, |C|\}$, let $\vec{a}_k^{\,\ell-}$ and $\vec{a}_k^{\,\ell+}$ denote the $\ell$-th pair of normal vectors that correspond to the half-spaces bounding the direction of $\vec{x}_{\sigma_{k+1}} - \vec{x}_{\sigma_k}$, i.e., the half-spaces corresponding to (6) and the guessed signs of the coordinates:

$$\langle \vec{x}_{\sigma_{k+1}} - \vec{x}_{\sigma_k},\ \vec{a}_k^{\,\ell-} \rangle \ge 0 \qquad \forall \ell \in \{1,\dots,d\},\ \forall k \in \{1,\dots,|C|\},$$
$$\langle \vec{x}_{\sigma_{k+1}} - \vec{x}_{\sigma_k},\ \vec{a}_k^{\,\ell+} \rangle \ge 0 \qquad \forall \ell \in \{1,\dots,d\},\ \forall k \in \{1,\dots,|C|\}.$$

Note that the conditional definition of (6) can be conditioned on the guessed ratios as well.

### The complete LP

For completeness, we include a concise formulation of the complete linear program.

$$\min \sum_{k=1}^{|C|} \mathrm{len}\left(\vec{x}_{\sigma_k} - \vec{x}_{\sigma_{k+1}}\right)$$

s.t.

$$\begin{aligned} \langle \vec{x}_c - \rho_{\pm h}\vec{n}_{\pm h},\ \vec{n}_{\pm h} \rangle &= 0 && \forall c \in C,\ \pm h \in c \\ \langle \vec{s}_i^{\,+} - \vec{\gamma}_i,\ \vec{n}_i \rangle &\ge 0 && \forall i \in I \\ \langle \vec{s}_i^{\,-} - \vec{\gamma}_i,\ \vec{n}_i \rangle &\le 0 && \forall i \in I \\ \langle \vec{x}_{\sigma_{k+1}} - \vec{x}_{\sigma_k},\ \vec{a}_k^{\,\ell-} \rangle &\ge 0 && \forall \ell \in \{1,\dots,d\},\ \forall k \in \{1,\dots,|C|\} \\ \langle \vec{x}_{\sigma_{k+1}} - \vec{x}_{\sigma_k},\ \vec{a}_k^{\,\ell+} \rangle &\ge 0 && \forall \ell \in \{1,\dots,d\},\ \forall k \in \{1,\dots,|C|\} \\ \rho_{\pm h} &\in \mathbb{R} && \forall \pm h \in \pm H_0 \\ \vec{x}_c &\in \mathbb{R}^d && \forall c \in C \end{aligned}$$

### 3.4 Proof of Lemma 4

In this subsection, we put the proofs of the previous lemmata together to show Lemma 4.

###### Proof of Lemma 4.

The number of LPs that we solve, each of size polynomial in the input, equals the total number of enumerated combinations of configurations, orders, and direction vectors. Since we enumerate polynomially many such combinations and each LP has polynomially many variables, the running time of our algorithm is strongly polynomial [22] in the input. Let $P^\star$ be a polytope minimizing the length of a shortest tour of its vertices over all considered polytopes, and let $T^\star$ be such a shortest tour. By Lemma 7 and by the enumeration of the configurations, $P^\star$ is considered in at least one LP. By Lemma 8, the output of any of the LPs is a feasible tour, and by Corollary 3 this is also a tour of the vertices of some polytope; thus no outcome of any of the LPs is shorter than $|T^\star|$. By Lemma 9, the approximate objective function in any of the LPs is at most a factor of $(1+\varepsilon')$ away from the true length of the shortest tour of the vertices, for some appropriately chosen $\varepsilon'$. Hence the tour output by the algorithm has length at most $(1+\varepsilon')^2\,|T^\star|$.

By the appropriate choice of the internal approximation parameter, we obtain the desired result. ∎

## 4 Structural Results

Before describing the main result of this section, we need to introduce the concept of a base set of hyperplanes. The definition is constructive.

###### Definition 10 (Base set of hyperplanes).

We define a base set of hyperplanes $B$ as follows: Take a unit hypercube. Overlay it with a $d$-dimensional Cartesian grid of granularity (i.e., the side length of each grid cell) depending only on $d$ and $\varepsilon$. Now consider any $d$-tuple of points in this grid. For any such tuple that uniquely defines a hyperplane $b$, add $b$ to $B$.

Note that the number of grid points depends only on $d$ and $\varepsilon$. Thus, the number of possible $d$-tuples over the grid points depends only on $d$ and $\varepsilon$ as well, and therefore so does the size of the set $B$. Also note that a hypercube with any side length could be used for defining the set $B$ (as long as it has the same orientation). In that case, we could just adapt the granularity of the grid so that we obtain the same number of cells.
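To make Definition 10 concrete in the plane ($d = 2$), where a tuple of two distinct grid points defines a line, the sketch below builds such a base set over a grid with a hypothetical number of cells per side; `base_lines` is an illustrative name, and each line is stored in a canonical form $(a, b, c)$ with $ax + by = c$ to deduplicate:

```python
from fractions import Fraction
from itertools import combinations

def base_lines(cells):
    """Distinct lines through pairs of grid points of a unit square
    overlaid with a grid of granularity 1 / cells."""
    g = Fraction(1, cells)
    pts = [(g * i, g * j) for i in range(cells + 1) for j in range(cells + 1)]
    lines = set()
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        a, b = y2 - y1, x1 - x2          # normal vector of the line
        c = a * x1 + b * y1
        scale = next(v for v in (a, b) if v != 0)
        lines.add((a / scale, b / scale, c / scale))   # canonical form
    return lines
```

With one cell per side (the four corners of the square) this yields the four sides and two diagonals, i.e., six lines; a $2{\times}2$ grid of cells yields the classical 20 lines through a $3{\times}3$ point grid.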

The goal of this section is to show the following lemma.

###### Lemma 11.

For any input set of hyperplanes $I$ and any fixed $\varepsilon > 0$, there is a polytope $P'$ such that the tour $T'$ along the vertices of $P'$ is feasible and

$$|T'| \le (1+\varepsilon) \cdot \textsc{Opt}(I).$$

In other words, Lemma 11 shows that, in order to obtain a $(1+\varepsilon)$-approximate solution, it suffices to find the polytope of optimal tour length among the considered polytopes. Together with Lemma 4, this immediately implies Theorem 1.