The Unbounded Integrality Gap of a Semidefinite Relaxation of the Traveling Salesman Problem

We study a semidefinite programming relaxation of the traveling salesman problem introduced by de Klerk, Pasechnik, and Sotirov [8] and show that their relaxation has an unbounded integrality gap. In particular, we give a family of instances such that the gap increases linearly with n. To obtain this result, we search for feasible solutions within a highly structured class of matrices; the problem of finding such solutions reduces to finding feasible solutions for a related linear program, which we do analytically. The solutions we find imply the unbounded integrality gap. Further, they imply several corollaries that help us better understand the semidefinite program and its relationship to other TSP relaxations. Using the same technique, we show that a more general semidefinite program introduced by de Klerk, de Oliveira Filho, and Pasechnik [7] for the k-cycle cover problem also has an unbounded integrality gap.


1 Introduction

The traveling salesman problem (TSP) is one of the most famous problems in combinatorial optimization. An input to the TSP consists of a set of $n$ cities and edge costs $c_{ij}$ for each pair of distinct cities $i, j$, representing the cost of traveling from city $i$ to city $j$. Given this information, the TSP is to find a minimum-cost tour visiting every city exactly once. Throughout this paper, we implicitly assume that the edge costs are symmetric (so that $c_{ij} = c_{ji}$ for all distinct $i, j$) and metric (so that $c_{ij} \le c_{ik} + c_{kj}$ for all distinct $i, j, k$). Hence, we interpret the cities as vertices of the complete undirected graph $K_n$, with edge costs $c_e = c_{ij}$ for edge $e = \{i, j\}$. In this setting, the TSP is to find a minimum-cost Hamiltonian cycle on $K_n$.

The TSP is well known to be NP-hard. It is NP-hard to approximate TSP solutions in polynomial time to within any constant factor in general, and hardness-of-approximation results are known even in the metric case (see Karpinski, Lampis, and Schmied [19]). For the general TSP (without any assumptions beyond metric and symmetric edge costs), the state-of-the-art approximation algorithm remains Christofides' 1976 algorithm [4]. The output of Christofides' algorithm is at most a factor of $\frac{3}{2}$ away from the optimal solution to any TSP instance.

A broad class of approximation algorithms begins by relaxing the set of Hamiltonian cycles. The prototypical example is the subtour elimination linear program (also referred to as the Dantzig-Fulkerson-Johnson relaxation [6] and the Held-Karp bound [16], and which we will refer to as the subtour LP). Let $V$ denote the set of vertices in $K_n$ and let $E$ denote the set of edges in $K_n$. For $S \subseteq V$, denote the set of edges with exactly one endpoint in $S$ by $\delta(S)$, and let $\delta(v) = \delta(\{v\})$. The subtour elimination linear programming relaxation of the TSP is:

$$\begin{aligned} \min \quad & \sum_{e \in E} c_e x_e \\ \text{subject to} \quad & \sum_{e \in \delta(v)} x_e = 2, && v \in V, \\ & \sum_{e \in \delta(S)} x_e \ge 2, && S \subset V : S \ne \emptyset, S \ne V, \\ & 0 \le x_e \le 1, && e \in E. \end{aligned}$$

The constraints $\sum_{e \in \delta(v)} x_e = 2$ are known as the degree constraints, while the constraints $\sum_{e \in \delta(S)} x_e \ge 2$ are known as the subtour elimination constraints. Wolsey [30] and Shmoys and Williamson [27] show that solutions to this linear program are also within a factor of $\frac{3}{2}$ of the optimal, integer solution to the TSP.

Instead of linear programming relaxations, another approach is to consider relaxations that are semidefinite programs (SDPs). This avenue is considered by Cvetković, Čangalović, and Kovačević-Vujčić [5]. They introduce an SDP relaxation that searches for solutions that meet the degree constraints and that are at least as connected as a cycle with respect to algebraic connectivity (see Section 4.4). Goemans and Rendl [11], however, show that the SDP relaxation of Cvetković et al. [5] is weaker than the subtour LP in the following sense: any solution to the subtour LP implies an equivalent feasible solution for the SDP of Cvetković et al. of the same cost. Since both optimization problems are minimization problems, the optimal value for the SDP of Cvetković et al. cannot be closer than the optimal solution of the subtour LP to the optimal solution to the TSP.

More recently, de Klerk, Pasechnik, and Sotirov [8] introduced another SDP relaxation of the TSP. This SDP can be motivated and derived through a general framework for SDP relaxations based on the theory of association schemes (see de Klerk, de Oliveira Filho, and Pasechnik [7]). Moreover, de Klerk et al. [8] show computationally that this new SDP is incomparable to the subtour LP: there are cases for which their SDP provides a closer approximation to the TSP than the subtour LP and vice versa! Moreover, de Klerk et al. [8] show that their SDP is stronger than the earlier SDP of Cvetković et al. [5]: any feasible solution for the SDP of de Klerk et al. [8] implies a feasible solution for the SDP of Cvetković et al. [5] of the same cost.

We analyze the SDP relaxation of de Klerk et al. [8]; our main result is that the integrality gap of this SDP is unbounded. To show this result, we introduce a family of instances corresponding to a cut semimetric: costs derived from a subset $S \subseteq V$ such that $c_{ij} = 1$ if exactly one of $i$ and $j$ is in $S$, and $c_{ij} = 0$ otherwise. We will take $|S| = \frac{n}{2}$. Equivalently, $\frac{n}{2}$ of the cities are located at the point $0 \in \mathbb{R}$, the remaining $\frac{n}{2}$ cities are located at $1 \in \mathbb{R}$, and the cost $c_{ij}$ is the Euclidean distance between the locations of city $i$ and city $j$. We show that for these instances the integrality gap grows linearly in $n$. The feasible solutions we introduce to bound the integrality gap, moreover, have the same algebraic connectivity as a Hamiltonian cycle on $n$ vertices, even though their cost becomes arbitrarily far from that of a Hamiltonian cycle (see Section 4.4) as $n$ grows.

We introduce the SDP of de Klerk et al. [8] in Section 2. In Section 3 we motivate and prove our result. The crux of our argument involves exploiting the symmetry of the instances we introduce. We consider a candidate class of solutions to the SDP respecting this symmetry and show that members of this class are feasible solutions to the SDP if and only if they are feasible solutions for a simpler linear program, whose constraints enforce certain positive semidefinite inequalities. We then analytically find solutions to this linear program, and show that these solutions imply the unbounded integrality gap. Next, in Section 4, we discuss several corollaries of our main result. These corollaries shed new light on how the SDP relates to the subtour LP as well as to the earlier SDP of Cvetković et al. [5]. In Section 5, we apply our technique for showing that the integrality gap is unbounded to a generalization of the SDP of de Klerk et al. [8] for the minimum-cost $k$-cycle cover problem; when $k = 1$, this problem is exactly the same as the TSP. This more general SDP was introduced in de Klerk et al. [7], and we show that it also has an unbounded integrality gap.

This work is related in spirit to Goemans and Rendl [12], who study how to solve SDPs arising from association schemes using a linear program. Specifically, they show that an SDP of the form

$$\max \ \langle M_0, X \rangle \quad \text{ s.t. } \quad \langle M_j, X \rangle = b_j \ \text{ for } j = 1,\dots,m, \quad X \succeq 0,$$

where the $M_j$ are fixed, input matrices forming an association scheme, can be solved using a linear program. Like Goemans and Rendl [12], the SDP we study is related to an association scheme and we obtain a result using a linear program. In contrast, however, to having input matrices that form an association scheme, the SDP we analyze seeks solutions that satisfy many properties of a certain, fixed association scheme (in particular, de Klerk et al. [7] show that the constraints of the SDP are satisfied by the association scheme corresponding to cycles; see Section 2). Moreover, we only use a linear program to find feasible solutions to this SDP that are sufficient to imply an unbounded integrality gap: this SDP does not in general reduce to the LP we use.

2 A Semidefinite Programming Relaxation of the TSP

2.1 Notation and Preliminaries

Throughout this paper we will use standard notation from linear algebra. We use $J_n$ and $I_n$ to denote the all-ones and identity matrices in $\mathbb{R}^{n \times n}$, respectively. When clear from context, we suppress the dependency on the dimension and just write $J$ and $I$. We denote by $e$ the column vector of all ones, so that $J = ee^T$. Also, we use $\mathcal{S}_n$ for the set of real, symmetric matrices in $\mathbb{R}^{n \times n}$ and $\otimes$ to denote the Kronecker product of matrices. $M \succeq 0$ denotes that $M$ is a positive semidefinite (PSD) matrix (we will generally have $M$ symmetric, in which case positive semidefiniteness is equivalent to all eigenvalues of $M$ being nonnegative). The trace of a matrix $M$, denoted $\operatorname{trace}(M)$, is the sum of its diagonal entries, so that for $M \in \mathbb{R}^{n \times n}$, $\operatorname{trace}(M) = \sum_{i=1}^n M_{ii}$. Finally, $M \ge 0$ means that each entry of the matrix $M$ is nonnegative.

Our main result addresses the integrality gap of a relaxation, which represents the worst-case ratio of the original problem's optimal value to the relaxation's optimal value. We are specifically interested in the gap of the SDP of de Klerk et al. [8]; we will refer to this SDP as simply "the SDP" throughout. Let $C$ denote a matrix of edge costs, so that $C_{ij} = c_{ij}$ and $C \in \mathcal{S}_n$. Let $\mathrm{OPT}_{\mathrm{SDP}}(C)$ and $\mathrm{OPT}_{\mathrm{TSP}}(C)$ respectively denote the optimal values of the SDP and of the TSP for a given matrix of costs $C$. The integrality gap is then

$$\sup_C \frac{\mathrm{OPT}_{\mathrm{TSP}}(C)}{\mathrm{OPT}_{\mathrm{SDP}}(C)},$$

where we take the supremum over all valid cost matrices $C$ (those whose constituent costs are metric and symmetric). This ratio is bounded below by $1$, since the SDP is a relaxation of the TSP; we re-derive this fact in Section 2.2. We will show that the ratio cannot be bounded above by any constant. In contrast, the results we noted previously about the subtour LP imply that its integrality gap is bounded above by $\frac{3}{2}$.

Throughout the remainder of this paper we will take $n$ to be even and let $d = \frac{n}{2}$. We use $\setminus$ for set-minus notation, so that $S_1 \setminus S_2 = \{x \in S_1 : x \notin S_2\}$. We take $x \in \mathbb{R}^E$ to mean that $x$ is a vector whose entries are indexed by the edges of $K_n$.

The SDP introduced by de Klerk et al. [8] uses matrix variables $X^{(1)}, \dots, X^{(d)}$, with the cost of a solution depending only on $X^{(1)}$. It is:

$$\begin{aligned} \min \quad & \tfrac{1}{2}\operatorname{trace}\left(C X^{(1)}\right) \\ \text{subject to} \quad & X^{(k)} \ge 0, && k = 1,\dots,d, \\ & \textstyle\sum_{j=1}^d X^{(j)} = J - I, \\ & I + \textstyle\sum_{j=1}^d \cos\left(\frac{2\pi jk}{n}\right) X^{(j)} \succeq 0, && k = 1,\dots,d, \\ & X^{(k)} \in \mathcal{S}_n, && k = 1,\dots,d. \end{aligned} \tag{1}$$

Both de Klerk et al. [8] and de Klerk et al. [7] show that this is a relaxation of the TSP by showing that the following solution is feasible: for a simple, undirected graph $G$, let $A_j(G)$ be the $j$-th distance matrix: the matrix with $(u, v)$-th entry equal to $1$ if and only if the shortest path between vertices $u$ and $v$ in $G$ is of distance $j$, and equal to $0$ otherwise. Let $C_n$ be a cycle of length $n$ (i.e., any Hamiltonian cycle on $K_n$). The solution where $X^{(j)} = A_j(C_n)$ for $j = 1,\dots,d$ is feasible for the SDP (see Proposition 2.1). Hence, the optimal integer solution to the TSP has a corresponding feasible solution to the SDP. That SDP solution has the same value as the optimal integer solution to the TSP: each edge $\{u, v\}$ is represented twice in $X^{(1)}$, as both $X^{(1)}_{uv}$ and $X^{(1)}_{vu}$, but this is accounted for by the factor $\frac{1}{2}$ in the objective function.

These solutions are shown to be feasible in de Klerk et al. [8] by noting that the $A_j(C_n)$ form an association scheme and are therefore simultaneously diagonalizable. This allows the positive semidefinite inequalities to be verified after computing the eigenvalues of each $A_j(C_n)$. A more systematic approach is taken in de Klerk et al. [7], where they introduce general results about association schemes. The constraints of the SDP then represent an application of these results to a specific association scheme: that of the distance matrices $A_1(C_n), \dots, A_d(C_n)$. We begin by providing a new, direct proof that the SDP is a relaxation of the TSP.

Proposition 2.1 (de Klerk et al. [8]).

Setting $X^{(j)} = A_j(C_n)$ for $j = 1,\dots,d$ yields a feasible solution to the SDP (1).

We will use two lemmas in our proof. First, the main work in our proof involves showing that the positive semidefinite inequalities from (1) hold. We do so by noticing that $I + \sum_{j=1}^d \cos\left(\frac{2\pi jk}{n}\right) A_j(C_n)$ has a very specific structure: that of a circulant matrix. A circulant matrix is a matrix of the form

$$M = \begin{pmatrix} m_0 & m_1 & m_2 & m_3 & \cdots & m_{n-1} \\ m_{n-1} & m_0 & m_1 & m_2 & \cdots & m_{n-2} \\ m_{n-2} & m_{n-1} & m_0 & m_1 & \ddots & m_{n-3} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ m_1 & m_2 & m_3 & m_4 & \cdots & m_0 \end{pmatrix} = \left(m_{(s-t) \bmod n}\right)_{s,t=1}^n.$$

The eigenvalues of circulant matrices are well understood, which will allow us to show that $I + \sum_{j=1}^d \cos\left(\frac{2\pi jk}{n}\right) A_j(C_n)$ is a positive semidefinite matrix for each $k$ by computing the eigenvalues of that linear combination. In particular:

Lemma 2.2 (Gray [14]).

The circulant matrix $M = \left(m_{(s-t) \bmod n}\right)_{s,t=1}^n$ has eigenvalues

$$\lambda_t(M) = \begin{cases} \sum_{s=0}^{n-1} m_s e^{-\frac{2\pi st\sqrt{-1}}{n}}, & \text{if } t = 1,\dots,n-1, \\[2pt] \sum_{s=0}^{n-1} m_s, & \text{if } t = n. \end{cases}$$

This is the only section where we will work with imaginary numbers; to avoid ambiguity with index variables, we explicitly write $\sqrt{-1}$ and reserve $i$ and $j$ as index variables.
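As a quick sanity check of Lemma 2.2 (our own pure-Python sketch, not part of the paper), we can verify that each Fourier vector is an eigenvector of a circulant matrix with the stated eigenvalue:

```python
import cmath

def circulant(m):
    """Build the circulant matrix M with M[s][t] = m[(s - t) mod n]."""
    n = len(m)
    return [[m[(s - t) % n] for t in range(n)] for s in range(n)]

def matvec(M, v):
    return [sum(row[c] * v[c] for c in range(len(v))) for row in M]

n = 7
m = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0]  # arbitrary first row
M = circulant(m)
for t in range(1, n + 1):
    # Claimed eigenvalue from Lemma 2.2 (the t = n case reduces to sum(m)).
    lam = sum(m[s] * cmath.exp(-2j * cmath.pi * s * t / n) for s in range(n))
    # Corresponding Fourier eigenvector.
    v = [cmath.exp(2j * cmath.pi * s * t / n) for s in range(n)]
    Mv = matvec(M, v)
    assert all(abs(Mv[s] - lam * v[s]) < 1e-9 for s in range(n))
```

Here `circulant` and `matvec` are our own helper names; the check confirms $Mv = \lambda_t(M)\,v$ for every $t$.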

Our second lemma is a trigonometric identity that we will use repeatedly in later proofs:

Lemma 2.3.

Let $n = 2d$ be even and let $k$ be an integer that is not a multiple of $n$. Then

$$\sum_{j=1}^{d} \cos\left(\frac{2\pi jk}{n}\right) = \frac{-1 + (-1)^k}{2}.$$
Proof.

Our identity is a consequence of Lagrange's trigonometric identity (see, e.g., Identity 14 in Section 2.4.1.6 of Jeffrey and Dai [18]), which states, for $\theta$ not an integer multiple of $2\pi$, that

$$\sum_{j=1}^m \cos(j\theta) = -\frac{1}{2} + \frac{\sin\left(\left(m + \frac{1}{2}\right)\theta\right)}{2\sin\left(\frac{\theta}{2}\right)}.$$

Taking $m = d$ and $\theta = \frac{2\pi k}{n}$, and using $n = 2d$, we obtain:

$$\sum_{j=1}^d \cos\left(\frac{2\pi k}{n} j\right) = -\frac{1}{2} + \frac{\sin\left(\pi k + \frac{\pi k}{n}\right)}{2\sin\left(\frac{\pi k}{n}\right)} = -\frac{1}{2} + (-1)^k \frac{1}{2},$$

where we recall that $\sin(\pi k + x) = (-1)^k \sin(x)$.

Notice that when $k = 0$ or $k = n$ (or any multiple of $n$), each cosine equals $1$ and the sum is instead $d$.
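A short numerical check of Lemma 2.3 (our own sketch; the function name `lhs` is ours):

```python
import math

def lhs(n, k):
    """Left-hand side of Lemma 2.3: sum_{j=1}^{d} cos(2*pi*j*k/n) with d = n/2."""
    d = n // 2
    return sum(math.cos(2 * math.pi * j * k / n) for j in range(1, d + 1))

# The identity holds for even n and any k that is not a multiple of n.
for n in (6, 10, 14):
    for k in range(1, n):
        assert abs(lhs(n, k) - (-1 + (-1) ** k) / 2) < 1e-9

# Edge case noted above: k = 0 (or any multiple of n) gives d instead.
assert abs(lhs(10, 0) - 5) < 1e-9
```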

Proof (of Proposition 2.1).

We first remark that each $A_j(C_n)$ is a nonnegative symmetric matrix. Moreover, $\sum_{j=1}^d A_j(C_n) = J - I$. This follows because, in $C_n$, the shortest path between any pair of distinct vertices has length that is a unique element of the set $\{1,\dots,d\}$. Hence, exactly one of the terms in the sum has a one in its $(u, v)$ entry, and all other terms have a zero. The diagonals of each $A_j(C_n)$ consist of all zeros, since the shortest path from vertex $u$ to itself has length $0$.

Now for any fixed $k \in \{1,\dots,d\}$ we compute the eigenvalues of the matrix

$$M := I + \sum_{j=1}^{d} \cos\left(\frac{2\pi jk}{n}\right) A_j(C_n).$$

First, suppose the vertices are labeled so that the cycle $C_n$ is $1, 2, 3, \dots, n, 1$. We will later note why this is without loss of generality.

Then $M$ is circulant with entries given exactly by the coefficients in the sum: for $s = 1,\dots,n-1$, we have $m_s = m_{n-s}$. Namely:

$$m_0 = 1, \qquad m_s = m_{n-s} = \cos\left(\frac{2\pi sk}{n}\right) \ \text{ for } s = 1,\dots,d.$$

We can directly compute the $t$-th eigenvalue of $M$ using Lemma 2.2. Our later proofs will include similar computations, so we pay particular attention to the details of our algebraic manipulation. For $t = 1,\dots,n-1$, the $t$-th eigenvalue of $M$ is:

$$\lambda_t(M) = \sum_{s=0}^{n-1} m_s e^{-\frac{2\pi st\sqrt{-1}}{n}} = 1 + \cos\left(\frac{2\pi kd}{n}\right)e^{-\frac{2\pi dt\sqrt{-1}}{n}} + \sum_{s=1}^{d-1}\cos\left(\frac{2\pi sk}{n}\right)\left(e^{-\frac{2\pi st\sqrt{-1}}{n}} + e^{-\frac{2\pi(n-s)t\sqrt{-1}}{n}}\right),$$

where we have first written out the terms for $s = 0$ and $s = d$. We rewrite terms so that our sum runs to $d$ and simplify exponentials, using $e^{-\frac{2\pi(n-s)t\sqrt{-1}}{n}} = e^{\frac{2\pi st\sqrt{-1}}{n}}$:

$$\lambda_t(M) = 1 - \cos\left(\frac{2\pi kd}{n}\right)e^{\frac{2\pi dt\sqrt{-1}}{n}} + \sum_{s=1}^{d}\cos\left(\frac{2\pi sk}{n}\right)\left(e^{-\frac{2\pi st\sqrt{-1}}{n}} + e^{\frac{2\pi st\sqrt{-1}}{n}}\right) = 1 - (-1)^k(-1)^t + 2\sum_{s=1}^{d}\cos\left(\frac{2\pi sk}{n}\right)\cos\left(\frac{2\pi st}{n}\right).$$

Recalling the product-to-sum identity for cosines (that $2\cos(\theta)\cos(\phi) = \cos(\theta+\phi) + \cos(\theta-\phi)$), we get

$$\lambda_t(M) = 1 - (-1)^{k+t} + \sum_{s=1}^{d}\cos\left(\frac{2\pi s(k+t)}{n}\right) + \sum_{s=1}^{d}\cos\left(\frac{2\pi s(k-t)}{n}\right).$$

Using Lemma 2.3 and $(-1)^{k+t} = (-1)^{k-t}$:

$$\lambda_t(M) = \begin{cases} 1 - (-1)^{2d} + 2d, & \text{if } k = t = d \\[2pt] 1 - (-1)^{k+t} + \frac{-1+(-1)^{k+t}}{2} + d, & \text{if } k \ne d,\ t \in \{k, n-k\} \\[2pt] 1 - (-1)^{k+t} + \frac{-1+(-1)^{k+t}}{2} + \frac{-1+(-1)^{k-t}}{2}, & \text{else} \end{cases} \quad = \quad \begin{cases} 2d, & \text{if } k = t = d \\ d, & \text{if } k \ne d,\ t \in \{k, n-k\} \\ 0, & \text{else.} \end{cases}$$

The eigenvalue $\lambda_n(M)$ is:

$$\lambda_n(M) = \sum_{s=0}^{n-1} m_s = 1 - \cos\left(\frac{2\pi kd}{n}\right) + 2\sum_{s=1}^{d}\cos\left(\frac{2\pi sk}{n}\right) = 1 - (-1)^k - 1 + (-1)^k = 0.$$

The matrix $M$ thus has all nonnegative eigenvalues, so the positive semidefinite constraints hold for each $k = 1,\dots,d$.

Finally, we note that our assumption that the cycle is $1, 2, \dots, n, 1$ was without loss of generality: we can replace the $A_j(C_n)$ with $P A_j(C_n) P^T$, where $P$ is a permutation matrix that permutes the labels of the vertices so that the cycle is $1, 2, \dots, n, 1$. Then $I + \sum_{j=1}^d \cos\left(\frac{2\pi jk}{n}\right) P A_j(C_n) P^T = P M P^T$ and $M$ are similar matrices and share the same spectrum. Thus $P M P^T$ is positive semidefinite if and only if $M$ is positive semidefinite; $M$ is the circulant matrix above, and thus both $M$ and $P M P^T$ are positive semidefinite.

We briefly remark that de Klerk et al. [8] also use the eigenvalue properties of circulant matrices in proving that the SDP is a relaxation of the TSP. They use the fact that each individual $A_j(C_n)$ is circulant to compute the eigenvalues of each, while we use the fact that the linear combination of those matrices denoted above by $M$ is circulant.

3 The Unbounded Integrality Gap

To show that the SDP has an arbitrarily bad integrality gap, we demonstrate a family of instances of edge costs for which we can upper bound the SDP's objective value. We consider an instance with two groups of vertices. The costs associated with intergroup edges will be expensive ($1$), while the costs of intragroup edges will be negligible ($0$). As noted in the introduction, this instance is equivalent to both a cut semimetric and an instance where the costs are given by Euclidean distances in $\mathbb{R}$. Explicitly, we will use the cost matrix

$$\hat{C} := \begin{pmatrix} 0 & \cdots & 0 & 1 & \cdots & 1 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & \cdots & 1 \\ 1 & \cdots & 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 1 & \cdots & 1 & 0 & \cdots & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \otimes J_d.$$

Notice that the edge costs embedded in this matrix are metric.

Throughout this paper, we reserve $V_1$ and $V_2$ to refer to the two groups of vertices, so that $V_1 = \{1,\dots,d\}$ and $V_2 = \{d+1,\dots,n\}$. In a Hamiltonian cycle, every nontrivial cut is crossed at least twice, so that any feasible solution to the TSP must use the expensive intergroup edges at least twice. We can achieve a tour costing exactly $2$ with a tour that starts in $V_1$, goes through all the vertices in $V_1$, crosses to $V_2$, goes through the vertices in $V_2$, and then returns to $V_1$. Hence $\mathrm{OPT}_{\mathrm{TSP}}(\hat{C}) = 2$.
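The instance and the cost-$2$ tour can be checked in a few lines (our own sketch; the names `C`, `tour`, and `cost` are ours):

```python
d = 5
n = 2 * d
# C_hat = [[0, 1], [1, 0]] ⊗ J_d: intragroup edges cost 0, intergroup edges cost 1.
C = [[0 if (u < d) == (v < d) else 1 for v in range(n)] for u in range(n)]

# Visit all of V1 = {0,...,d-1}, cross to V2 = {d,...,n-1}, and return.
tour = list(range(n))
cost = sum(C[tour[i]][tour[(i + 1) % n]] for i in range(n))
assert cost == 2  # only the two crossings between groups are paid
```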

We state our main result:

Theorem 3.1.
$$\mathrm{OPT}_{\mathrm{SDP}}(\hat{C}) \le \frac{\pi^2}{2n}\,\mathrm{OPT}_{\mathrm{TSP}}(\hat{C}).$$

As a consequence:

Corollary 3.2.

The SDP has an unbounded integrality gap. That is, there exists no constant $\alpha$ such that

$$\frac{\mathrm{OPT}_{\mathrm{TSP}}(C)}{\mathrm{OPT}_{\mathrm{SDP}}(C)} \le \alpha$$

for all cost matrices $C$.

To prove this theorem, we construct a family of feasible SDP solutions whose cost becomes arbitrarily small as $n$ grows. We will specifically search for solutions respecting the symmetry of $\hat{C}$: matrices $X^{(j)}$ that place a weight of $a_j$ on each intragroup edge and a weight of $b_j$ on each intergroup edge. Moreover, we choose the $b_j$ so as to enforce that the row sums of the $X^{(j)}$ match those of the distance matrices $A_j(C_n)$ introduced earlier: $2$ for $j = 1,\dots,d-1$ and $1$ for $j = d$. (Note that de Klerk et al. [8] actually show that every feasible solution must satisfy these row sums when $n$ is even. The fact that every feasible solution matches these row sums is not something we will need, though we implicitly use it to inform the solutions we search for. We provide an alternative, direct proof that all feasible solutions must satisfy these row sums in the appendix, in Theorem A.1.) Since every vertex is incident to $d-1$ edges in its group (with weight $a_j$) and $d$ edges in the other group (with weight $b_j$), we have

$$(d-1)a_j + d b_j = \begin{cases} 2, & \text{if } j = 1,\dots,d-1, \\ 1, & \text{if } j = d. \end{cases}$$

Rearranging for the $b_j$ lets us express the $j$-th solution matrix of this form as

$$X^{(j)} = \left(\begin{pmatrix} a_j & b_j \\ b_j & a_j \end{pmatrix} \otimes J_d\right) - a_j I_n, \qquad b_j = \begin{cases} \frac{4}{n} - \left(1 - \frac{2}{n}\right)a_j, & \text{if } j = 1,\dots,d-1, \\[2pt] \frac{2}{n} - \left(1 - \frac{2}{n}\right)a_j, & \text{if } j = d, \end{cases} \tag{2}$$

where we subtract $a_j I_n$ so that the diagonal is zero. The cost of such a solution is entirely determined by the intergroup edges, each of cost $1$ and weight $b_1$ in $X^{(1)}$. Each edge is accounted for twice in $\operatorname{trace}(C X^{(1)})$, but the objective scales by $\frac{1}{2}$, so the cost of this solution is

$$\left(\frac{n}{2}\right)^2 b_1.$$
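Both the row-sum property and this cost formula hold for any choice of the $a_j$; a quick pure-Python check (ours, with placeholder weights; the specific $a_j$ come later):

```python
d = 6
n = 2 * d

# Placeholder weights a_j (any values work for these identities);
# the b_j are then forced by Equation (2).
a = [1 / d] * d
b = [(4 / n if j < d else 2 / n) - (1 - 2 / n) * a[j - 1] for j in range(1, d + 1)]

def X(j):
    """X^(j) = ((a_j, b_j; b_j, a_j) ⊗ J_d) - a_j I_n."""
    return [[(0 if u == v else a[j - 1]) if (u < d) == (v < d) else b[j - 1]
             for v in range(n)] for u in range(n)]

# Row sums match those of the distance matrices A_j(C_n): 2 for j < d, 1 for j = d.
for j in range(1, d + 1):
    for row in X(j):
        assert abs(sum(row) - (2 if j < d else 1)) < 1e-9

# The cost (1/2) trace(C_hat X^(1)) equals (n/2)^2 b_1.
C = [[0 if (u < d) == (v < d) else 1 for v in range(n)] for u in range(n)]
X1 = X(1)
cost = 0.5 * sum(C[u][v] * X1[v][u] for u in range(n) for v in range(n))
assert abs(cost - (n / 2) ** 2 * b[0]) < 1e-9
```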

Theorem 3.1 then will follow from the claim below.

Claim 3.3.

Choosing the parameters $a_1, \dots, a_d$ so that

$$a_i = \frac{2}{n-2}\left(\cos\left(\frac{\pi i}{d}\right) + 1\right), \qquad i = 1,\dots,d,$$

equivalently,

$$b_i = \begin{cases} \frac{2}{n}\left(1 - \cos\left(\frac{\pi i}{d}\right)\right), & \text{if } i = 1,\dots,d-1, \\[2pt] \frac{2}{n}, & \text{if } i = d, \end{cases}$$

leads to a feasible SDP solution with matrices as given in Equation (2).

In particular, $b_1 = \frac{2}{n}\left(1 - \cos\left(\frac{\pi}{d}\right)\right)$. Basic facts from calculus show that this is roughly $\frac{4\pi^2}{n^3}$, so that the cost $\left(\frac{n}{2}\right)^2 b_1$ of our solution is roughly $\frac{\pi^2}{n}$, which gets arbitrarily small as $n$ grows.

The main work in proving this claim involves showing that the $X^{(j)}$ satisfy the PSD constraints. We first characterize the choices of the $a_j$ that lead to feasible SDP solutions of the form in Equation (2); this is done in Section 3.1. There we exploit the structure of matrices in the form of Equation (2) to write the PSD constraints on the $X^{(j)}$ as linear constraints on the $a_j$; these linear constraints will imply that all eigenvalues of the term $I + \sum_{j=1}^d \cos\left(\frac{2\pi jk}{n}\right) X^{(j)}$ are nonnegative. To finish proving the claim, in Section 3.2 we show that the claimed $a_j$ are indeed feasible.

3.1 Finding Structured Solutions to the SDP via Linear Programming

In this section we prove the following:

Proposition 3.4.

For the SDP, finding a minimum-cost feasible solution of the form

$$X^{(j)} = \left(\begin{pmatrix} a_j & b_j \\ b_j & a_j \end{pmatrix} \otimes J_d\right) - a_j I_n, \qquad \text{where } b_j = \begin{cases} \frac{4}{n} - \left(1 - \frac{2}{n}\right)a_j, & \text{if } j = 1,\dots,d-1, \\[2pt] \frac{2}{n} - \left(1 - \frac{2}{n}\right)a_j, & \text{if } j = d, \end{cases}$$

for $j = 1,\dots,d$, is equivalent to solving the following linear program:

$$\begin{aligned} \max \quad & a_1 \\ \text{subject to} \quad & \textstyle\sum_{i=1}^d a_i = 1, \\ & -\frac{2}{n-2} \le \textstyle\sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right) a_i \le 1, && k = 1,\dots,d, \\ & 0 \le a_i \le \frac{4}{n-2}, && i = 1,\dots,d-1, \\ & 0 \le a_d \le \frac{2}{n-2}. \end{aligned}$$

Proof.

First we notice that maximizing $a_1$ is equivalent to minimizing $b_1$, which is in turn equivalent to minimizing the cost $\left(\frac{n}{2}\right)^2 b_1$ of the SDP solution. The $X^{(j)}$ are nonnegative if and only if $a_j, b_j \ge 0$ for $j = 1,\dots,d$. The constraints $a_j \ge 0$ are explicit in the linear program, and $b_j \ge 0$ is equivalent to $a_j \le \frac{4}{n-2}$ for $j = 1,\dots,d-1$ and $a_d \le \frac{2}{n-2}$. Finally, the constraint that the $X^{(j)}$ sum to $J - I$ is equivalent to $\sum_{i=1}^d a_i = 1$ and $\sum_{i=1}^d b_i = 1$. However, $\sum_{i=1}^d b_i = 1$ follows from requiring $\sum_{i=1}^d a_i = 1$:

$$\sum_{i=1}^d b_i = \sum_{i=1}^{d-1}\left(\frac{4}{n} - \left(1 - \frac{2}{n}\right)a_i\right) + \left(\frac{2}{n} - \left(1 - \frac{2}{n}\right)a_d\right) = (d-1)\frac{4}{n} + \frac{2}{n} - \left(1 - \frac{2}{n}\right)\sum_{i=1}^d a_i = 2 - \frac{2}{n} - \left(1 - \frac{2}{n}\right) = 1.$$

It remains to show that the $k$-th SDP constraint is equivalent to

$$-\frac{2}{n-2} \le \sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right) a_i \le 1, \qquad k = 1,\dots,d.$$

The $k$-th SDP constraint is:

$$I + \sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right) X^{(i)} \succeq 0.$$

Using properties of the Kronecker product (see Chapter 4 of Horn and Johnson [17]) and the structure of our $X^{(i)}$, we simplify this:

$$I_n + \sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right) X^{(i)} = I_n + \sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right)\left(\left(\begin{pmatrix} a_i & b_i \\ b_i & a_i \end{pmatrix} \otimes J_d\right) - a_i I_n\right) = \left(1 - a^{(k)}\right) I_n + \begin{pmatrix} a^{(k)} & b^{(k)} \\ b^{(k)} & a^{(k)} \end{pmatrix} \otimes J_d,$$

where

$$a^{(k)} := \sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right) a_i \qquad \text{and} \qquad b^{(k)} := \sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right) b_i$$

depend on the full sequences $a_1,\dots,a_d$ and $b_1,\dots,b_d$ and on $k$.

To explicitly write the eigenvalues of the matrix in the $k$-th SDP constraint, we use several helpful facts from linear algebra.

Fact 3.5.
• The eigenvalues of $A \otimes B$, with $A \in \mathbb{R}^{m \times m}$ having eigenvalues $\lambda_1,\dots,\lambda_m$ and $B \in \mathbb{R}^{p \times p}$ having eigenvalues $\mu_1,\dots,\mu_p$, are $\lambda_i \mu_j$ for $i = 1,\dots,m$ and $j = 1,\dots,p$. See Theorem 4.2.12 in Chapter 4 of Horn and Johnson [17].

• The rank-one matrix $vv^T$, with $v$ of dimension $p$, has one eigenvalue $v^T v$ corresponding to eigenvector $v$, and all other eigenvalues are zero. (Choose, e.g., any $p - 1$ linearly independent vectors that are orthogonal to $v$.) In particular, $J_d = ee^T$ has eigenvalue $d$ with multiplicity one and eigenvalue $0$ with multiplicity $d - 1$.

• $\lambda$ is an eigenvalue of $M$ with eigenvector $v$ if and only if $\lambda + c$ is an eigenvalue of $M + cI$ with eigenvector $v$. This follows by direct computation.

• The eigenvalues of $\begin{pmatrix} a & b \\ b & a \end{pmatrix}$ are $a + b$ and $a - b$, with respective eigenvectors $(1, 1)^T$ and $(1, -1)^T$.

From these facts, we obtain that the eigenvalues of $\left(1 - a^{(k)}\right)I_n + \begin{pmatrix} a^{(k)} & b^{(k)} \\ b^{(k)} & a^{(k)} \end{pmatrix} \otimes J_d$ are:

$$1 - a^{(k)}, \qquad 1 - a^{(k)} + \frac{n}{2}\left(a^{(k)} + b^{(k)}\right), \qquad \text{and} \qquad 1 - a^{(k)} + \frac{n}{2}\left(a^{(k)} - b^{(k)}\right).$$

For example, $1 - a^{(k)}$ has multiplicity $2d - 2$: it corresponds to the $d - 1$ zero eigenvalues of $J_d$, each of which gives rise to two zero eigenvalues of $\begin{pmatrix} a^{(k)} & b^{(k)} \\ b^{(k)} & a^{(k)} \end{pmatrix} \otimes J_d$.

Therefore, for the $k$-th PSD constraint to hold, it suffices that the following three linear inequalities hold:

$$1 - a^{(k)} \ge 0, \qquad 1 - a^{(k)} + \frac{n}{2}\left(a^{(k)} + b^{(k)}\right) \ge 0, \qquad 1 - a^{(k)} + \frac{n}{2}\left(a^{(k)} - b^{(k)}\right) \ge 0. \tag{3}$$

We thus far have derived a system of inequalities on the $a^{(k)}$ and $b^{(k)}$ that, if satisfied, implies a set of feasible solutions to the SDP. We can further simplify these by writing the $b^{(k)}$ in terms of the $a^{(k)}$. As in Proposition 2.1, we begin by writing the sum so that we can use Lemma 2.3. We compute

$$\begin{aligned} b^{(k)} &= \sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right) b_i = \left(\sum_{i=1}^{d-1} \cos\left(\frac{2\pi ik}{n}\right)\left(\frac{4}{n} - \left(1 - \frac{2}{n}\right)a_i\right)\right) + \cos\left(\frac{2\pi dk}{n}\right)\left(\frac{2}{n} - \left(1 - \frac{2}{n}\right)a_d\right) \\ &= \frac{4}{n}\left(\frac{-1+(-1)^k}{2}\right) - \left(1 - \frac{2}{n}\right)a^{(k)} - (-1)^k\frac{2}{n} && \text{(using Lemma 2.3)} \\ &= -\left(1 - \frac{2}{n}\right)a^{(k)} - \frac{2}{n}. \end{aligned}$$
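The relationship $b^{(k)} = -\left(1 - \frac{2}{n}\right)a^{(k)} - \frac{2}{n}$ is linear and holds for any choice of the $a_i$; a quick check with arbitrary weights (our own sketch):

```python
import math
import random

random.seed(0)
d = 7
n = 2 * d
a = [random.random() for _ in range(d)]  # arbitrary weights a_i
# b_i forced by Equation (2).
b = [(4 / n if i < d else 2 / n) - (1 - 2 / n) * a[i - 1] for i in range(1, d + 1)]

for k in range(1, d + 1):
    ak = sum(math.cos(2 * math.pi * i * k / n) * a[i - 1] for i in range(1, d + 1))
    bk = sum(math.cos(2 * math.pi * i * k / n) * b[i - 1] for i in range(1, d + 1))
    # b^(k) = -(1 - 2/n) a^(k) - 2/n
    assert abs(bk - (-(1 - 2 / n) * ak - 2 / n)) < 1e-9
```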

We use this relationship to simplify the second and third inequalities in Equation (3) by writing them only in terms of $a^{(k)}$. We obtain

$$1 - a^{(k)} + \frac{n}{2}\left(a^{(k)} + b^{(k)}\right) = 1 - a^{(k)} + \frac{n}{2}\left(a^{(k)} - \left(1 - \frac{2}{n}\right)a^{(k)} - \frac{2}{n}\right) = 0,$$

and

$$1 - a^{(k)} + \frac{n}{2}\left(a^{(k)} - b^{(k)}\right) = 1 - a^{(k)} + \frac{n}{2}\left(a^{(k)} + \left(1 - \frac{2}{n}\right)a^{(k)} + \frac{2}{n}\right) = 2 + (n-2)a^{(k)}.$$

Hence, the second inequality in Equation (3) always holds with equality, and the three inequalities in Equation (3) become

$$-\frac{2}{n-2} \le a^{(k)} \le 1,$$

and these inequalities are equivalent to ensuring that the $k$-th PSD constraint of the SDP in (1) holds.

Corollary 3.6.

Consider a possible solution to the SDP of the form

$$X^{(j)} = \left(\begin{pmatrix} a_j & b_j \\ b_j & a_j \end{pmatrix} \otimes J_d\right) - a_j I_n, \qquad \text{where } b_j = \begin{cases} \frac{4}{n} - \left(1 - \frac{2}{n}\right)a_j, & \text{if } j = 1,\dots,d-1, \\[2pt] \frac{2}{n} - \left(1 - \frac{2}{n}\right)a_j, & \text{if } j = d. \end{cases}$$

The $k$-th PSD constraint is equivalent to

$$-\frac{2}{n-2} \le a^{(k)} \le 1, \qquad \text{where } a^{(k)} = \sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right) a_i.$$

3.2 Analytically Finding Solutions to the Linear Program

We now show that the following choice of the $a_i$ leads to $X^{(j)}$ that are feasible for the SDP (1):

$$a_i = \frac{2}{n-2}\left(\cos\left(\frac{\pi i}{d}\right) + 1\right), \qquad i = 1,\dots,d.$$

As argued above, to show feasibility we need only verify that the constraints of the linear program in Proposition 3.4 hold. Notice that $0 \le \cos\left(\frac{\pi i}{d}\right) + 1 \le 2$, so that for $i = 1,\dots,d-1$ we have $0 \le a_i \le \frac{4}{n-2}$. Moreover, $a_d = 0$. Hence we need only show that $\sum_{i=1}^d a_i = 1$ and that the $a^{(k)}$ lie in the appropriate range.

Claim 3.7.

For $a_i = \frac{2}{n-2}\left(\cos\left(\frac{\pi i}{d}\right) + 1\right)$,

$$\sum_{i=1}^d a_i = 1.$$
Proof.

We directly compute, using Lemma 2.3 with $k = 1$ (since $\cos\left(\frac{\pi i}{d}\right) = \cos\left(\frac{2\pi i}{n}\right)$, the lemma gives $\sum_{i=1}^d \cos\left(\frac{\pi i}{d}\right) = -1$). Then:

$$\sum_{i=1}^d a_i = \frac{2}{n-2}\left(-1 + d\right) = 1.$$

Claim 3.8.

For $a_i = \frac{2}{n-2}\left(\cos\left(\frac{\pi i}{d}\right) + 1\right)$,

$$a^{(k)} = \begin{cases} \frac{d-2}{n-2}, & \text{if } k = 1, \\[2pt] -\frac{2}{n-2}, & \text{otherwise.} \end{cases}$$
Proof.

As in Proposition 2.1, we use the product-to-sum identity for cosines and then do casework using Lemma 2.3. We have:

$$a^{(k)} = \sum_{i=1}^d \cos\left(\frac{2\pi ik}{n}\right) a_i = \frac{2}{n-2}\sum_{i=1}^d \left(\cos\left(\frac{2\pi ik}{n}\right) + \cos\left(\frac{2\pi ik}{n}\right)\cos\left(\frac{\pi i}{d}\right)\right).$$

By the product-to-sum identity, $\cos\left(\frac{2\pi ik}{n}\right)\cos\left(\frac{\pi i}{d}\right) = \frac{1}{2}\left(\cos\left(\frac{2\pi i(k+1)}{n}\right) + \cos\left(\frac{2\pi i(k-1)}{n}\right)\right)$. We cannot apply Lagrange's trigonometric identity (via Lemma 2.3) only when $k = 1$, where the $k - 1$ term is identically $1$ and sums to $d$, so that

$$a^{(k)} = \begin{cases} \frac{2}{n-2}\left(\frac{-1+(-1)^k}{2} + \frac{-1+(-1)^{k+1}}{4} + \frac{-1+(-1)^{k-1}}{4}\right), & \text{if } k > 1, \\[4pt] \frac{2}{n-2}\left(-1 + 0 + \frac{d}{2}\right), & \text{if } k = 1 \end{cases} \quad = \quad \begin{cases} -\frac{2}{n-2}, & \text{if } k > 1, \\[4pt] \frac{d-2}{n-2}, & \text{if } k = 1. \end{cases}$$
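Claims 3.7 and 3.8 can both be confirmed numerically for several values of $d$ (our own sanity check, not part of the proof):

```python
import math

for d in (4, 6, 9):
    n = 2 * d
    a = [2 / (n - 2) * (math.cos(math.pi * i / d) + 1) for i in range(1, d + 1)]
    assert abs(sum(a) - 1) < 1e-9  # Claim 3.7: the a_i sum to 1
    for k in range(1, d + 1):
        ak = sum(math.cos(2 * math.pi * i * k / n) * a[i - 1]
                 for i in range(1, d + 1))
        # Claim 3.8: a^(1) = (d-2)/(n-2) and a^(k) = -2/(n-2) for k > 1.
        expected = (d - 2) / (n - 2) if k == 1 else -2 / (n - 2)
        assert abs(ak - expected) < 1e-9
```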

Claim 3.8 and Corollary 3.6 now show that the claimed $a_i$ yield matrices satisfying the PSD constraints (note that $\frac{d-2}{n-2} \le 1$, while $-\frac{2}{n-2}$ sits exactly at the lower bound). Taken with Claim 3.7 and Proposition 3.4, we have that

$$a_i = \frac{2}{n-2}\left(\cos\left(\frac{\pi i}{d}\right) + 1\right), \qquad i = 1,\dots,d,$$

is feasible for the linear program in Proposition 3.4 and therefore implies feasible solutions for the SDP (1) of the form

$$X^{(j)} = \left(\begin{pmatrix} a_j & b_j \\ b_j & a_j \end{pmatrix} \otimes J_d\right) - a_j I_n, \qquad \text{where } b_j = \begin{cases} \frac{4}{n} - \left(1 - \frac{2}{n}\right)a_j, & \text{if } j = 1,\dots,d-1, \\[2pt] \frac{2}{n} - \left(1 - \frac{2}{n}\right)a_j, & \text{if } j = d. \end{cases}$$

3.3 The Unbounded Integrality Gap

We are now able to prove our main theorem:

Theorem 3.1

$$\mathrm{OPT}_{\mathrm{SDP}}(\hat{C}) \le \frac{\pi^2}{2n}\,\mathrm{OPT}_{\mathrm{TSP}}(\hat{C}).$$
Proof.

Earlier we saw that a feasible solution of the form in Equation (2) had cost $\left(\frac{n}{2}\right)^2 b_1$, and Claim 3.3 gives $b_1 = \frac{2}{n}\left(1 - \cos\left(\frac{\pi}{d}\right)\right)$. Hence, using $\mathrm{OPT}_{\mathrm{TSP}}(\hat{C}) = 2$, we can bound

$$\frac{\mathrm{OPT}_{\mathrm{SDP}}(\hat{C})}{\mathrm{OPT}_{\mathrm{TSP}}(\hat{C})} \le \frac{\left(\frac{n}{2}\right)^2 b_1}{2} = \frac{n}{4}\left(1 - \cos\left(\frac{2\pi}{n}\right)\right) \le \frac{n}{4} \cdot \frac{1}{2}\left(\frac{2\pi}{n}\right)^2 = \frac{\pi^2}{2n},$$

where the last inequality uses $1 - \cos(x) \le \frac{x^2}{2}$ for all $x$.
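Putting the pieces together, a short script (our own check, not part of the paper) confirms feasibility of the claimed $a_i$ via Corollary 3.6 and verifies that the resulting cost ratio is at most $\frac{\pi^2}{2n}$ for several values of $n$:

```python
import math

for d in (10, 100, 1000):
    n = 2 * d
    a = [2 / (n - 2) * (math.cos(math.pi * i / d) + 1) for i in range(1, d + 1)]
    b1 = 4 / n - (1 - 2 / n) * a[0]
    # Feasibility via Corollary 3.6: -2/(n-2) <= a^(k) <= 1 for k = 1,...,d.
    for k in range(1, d + 1):
        ak = sum(math.cos(2 * math.pi * i * k / n) * a[i - 1]
                 for i in range(1, d + 1))
        assert -2 / (n - 2) - 1e-9 <= ak <= 1 + 1e-9
    # Solution cost (n/2)^2 b_1 divided by OPT_TSP = 2 is at most pi^2 / (2n).
    ratio = ((n / 2) ** 2) * b1 / 2
    assert ratio <= math.pi ** 2 / (2 * n) + 1e-9
```

The bound shrinks like $\frac{1}{n}$, so the integrality gap $\frac{\mathrm{OPT}_{\mathrm{TSP}}}{\mathrm{OPT}_{\mathrm{SDP}}}$ grows at least linearly in $n$, as claimed.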