 # Fractional Decomposition Tree Algorithm: A tool for studying the integrality gap of Integer Programs

We present a new algorithm, Fractional Decomposition Tree (FDT) for finding a feasible solution for an integer program (IP) where all variables are binary. FDT runs in polynomial time and is guaranteed to find a feasible integer solution provided the integrality gap is bounded. The algorithm gives a construction for Carr and Vempala's theorem that any feasible solution to the IP's linear-programming relaxation, when scaled by the instance integrality gap, dominates a convex combination of feasible solutions. FDT is also a tool for studying the integrality gap of IP formulations. We demonstrate that with experiments studying the integrality gap of two problems: optimally augmenting a tree to a 2-edge-connected graph and finding a minimum-cost 2-edge-connected multi-subgraph (2EC). We also give a simplified algorithm, Dom2IP, that more quickly determines if an instance has an unbounded integrality gap. We show that FDT's speed and approximation quality compare well to that of feasibility pump on moderate-sized instances of the vertex cover problem. For a particular set of hard-to-decompose fractional 2EC solutions, FDT always gave a better integer solution than the best previous approximation algorithm (Christofides).


## 1 Introduction

In this paper we focus on finding feasible solutions to binary Integer Linear Programs (IPs). Informally, an integer program is the optimization of a linear objective function subject to linear constraints, where the variables must take integer values. Binary variables represent yes/no decisions. Integer Programming (and more generally Mixed Integer Linear Programming (MILP)) can model many practical optimization problems including scheduling, logistics, and resource allocation. It is NP-hard even to determine if an IP instance has a feasible solution [GJ90]. However, there is substantial research into finding feasible, provably good approximate, and even (computationally) provably optimal solutions to specific IP instances.

A major tool for finding feasible solutions is the linear-programming (LP) relaxation for the instance. This is a new problem created by relaxing the integrality constraints for an IP instance, allowing the variables to take continuous (rational) values. Linear programs can be solved in polynomial time. The objective value of the linear programming relaxation provides a bound (lower bound for a minimization problem and upper bound for a maximization problem) on the optimal solution to the IP instance. The solutions can also provide some useful global structure, even though the fractional values might not be directly meaningful.

LP-based approximation algorithms use LP relaxations to find provably good approximate feasible solutions to IP problems in polynomial time. At the highest level, they involve solving the LP relaxation, using special structure from the problem to find a feasible solution, and proving that the objective value of the solution is no more than a factor α worse than the bound from the LP relaxation. The approximation factor α can be a constant or can depend on the input parameters of the IP, e.g. on n, the number of variables in the formulation of the IP (the dimension of the problem).

There is an inherent limit to how small α can be for a given IP. The integrality gap for an IP instance is the ratio of the value of the best integer solution to the value of the best solution of the LP relaxation. Any LP-based approximation algorithm cannot have an approximation factor smaller than the integrality gap, because there is no feasible integer solution whose objective value is better than the integrality gap times the optimal value of the LP relaxation.
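As a small illustration (ours, not from the paper) of the instance-specific integrality gap, consider vertex cover on a triangle: the LP relaxation attains value 3/2 at x = (1/2, 1/2, 1/2), while any integral cover needs two vertices, so the gap on this instance is 4/3. A quick computation, assuming SciPy is available:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Vertex cover on a triangle: minimize x_0 + x_1 + x_2
# subject to x_u + x_v >= 1 for each of the three edges.
edges = [(0, 1), (0, 2), (1, 2)]
n = 3
A = np.zeros((len(edges), n))
for row, (u, v) in enumerate(edges):
    A[row, u] = A[row, v] = 1.0
c = np.ones(n)

# LP relaxation: x in [0, 1]^3; optimum is x = (1/2, 1/2, 1/2), value 3/2.
lp = linprog(c, A_ub=-A, b_ub=-np.ones(len(edges)), bounds=[(0, 1)] * n,
             method="highs")
z_lp = lp.fun

# Integer optimum by brute force over {0, 1}^3: any cover needs 2 vertices.
z_ip = min(c @ np.array(x) for x in product([0, 1], repeat=n)
           if all(x[u] + x[v] >= 1 for u, v in edges))

gap = z_ip / z_lp  # instance-specific integrality gap: 2 / 1.5 = 4/3
```

The brute-force step is only sensible at this toy size; it stands in for the (NP-hard) integer optimization that the gap is defined against.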

If the integrality gap for an IP formulation is large, it is sometimes possible to add families of constraints to the formulation to reduce the integrality gap. These constraints are redundant for the integer problem, but can make some fractional solutions no longer feasible for the LP. These families of constraints (cuts) can have exponential size as long as we can provide a polynomial-time separation algorithm.

Reducing the integrality gap of an IP formulation has two advantages. It can lead to better LP-based approximation algorithm bounds as described above. It can also help exact solvers run faster or solve instances they could not solve before. Exact IP solvers are based on intelligent branch-and-bound strategies. Commercial and open-source MILP solvers can find exact solutions (or near-optimal solutions with a provable bound) to many specific instances of NP-hard combinatorial optimization problems. These solvers use the LP relaxation to get lower bounds (for minimization problems). The worst-case exponential search is practically feasible when the solver can prune large amounts of the search space. This happens when the lower bound for a subproblem is worse than the value of a known feasible solution. This requires a way to find good heuristic solutions, and it requires good lower bounds that are as close to the actual optimal value of an IP subproblem as possible.

In this paper, we give a method to find feasible solutions for IPs if the integrality gap is bounded. The method is also a tool for evaluating the integrality gap for a formulation. Researchers can use it to determine whether they should expend effort to find new classes of cuts. They can also use it to help guide theory for finding tighter bounds on the integrality gap for classic problems like the traveling salesman problem.

For some problems such as the Minimum Cost Spanning Tree Problem there are linear programming relaxations whose basic feasible solutions coincide with integral solutions, i.e. spanning trees.

We now describe IPs and our methods more formally. The set of feasible points for a pure IP (henceforth IP) is the set

 S(A, b) = {x ∈ ℤ^n : Ax ≥ b}. (1)

If we drop the integrality constraints, we have the linear relaxation of the set S(A, b),

 P(A, b) = {x ∈ ℝ^n : Ax ≥ b}. (2)

Let I = (A, b) denote an instance. Then S(I) and P(I) denote S(A, b) and P(A, b), respectively. Given a linear objective function c, an IP is min{cx : x ∈ S(I)}.

Relaxing the integrality constraints gives the polynomial-time-solvable linear programming relaxation min{cx : x ∈ P(I)}. The optimal value of this linear program (LP), denoted z_LP(I, c), is a lower bound on the optimal value for the IP, denoted z_IP(I, c).

Many researchers (see [WS11, Vaz01]) have developed polynomial-time LP-based approximation algorithms that find solutions for special classes of IPs whose cost is provably within a factor α of z_LP(I, c). However, for many combinatorial optimization problems there is a limit to such techniques based on LP relaxations, represented by the integrality gap of the IP formulation. The integrality gap for instance I is defined to be g(I) = max_{c ≥ 0} z_IP(I, c)/z_LP(I, c). For example, consider the minimum-cost 2-edge-connected multi-subgraph problem (2EC): given a graph G = (V, E) and c ∈ ℚ^E_{≥0}, 2EC asks for the minimum-cost 2-edge-connected multi-subgraph of G. A linear programming relaxation for this problem, known as the subtour elimination relaxation, is

 min{cx : Σ_{e ∈ δ(U)} x_e ≥ 2 for ∅ ⊊ U ⊊ V, x ∈ [0, 2]^E}. (3)

In this case the instance-specific integrality gap g(G) is the integrality gap of the subtour-elimination relaxation for 2EC on a fixed graph G, maximized over cost functions. Alexander et al. [ABE06] showed the instance-specific integrality gap of the subtour elimination relaxation for 2EC for instances of the problem with at most 10 vertices is at most 6/5.

The value of g(I) depends on the constraints in (1). We cannot hope to find solutions for the IP with objective value better than g(I) times the LP bound. More generally, we can define the integrality gap for a class of instances 𝓘 as follows.

 g(𝓘) = max_{c ≥ 0, I ∈ 𝓘} z_IP(I, c) / z_LP(I, c). (4)

For example, the aforementioned integrality gap of the subtour elimination relaxation for 2EC is at most 3/2 [Wol80] and at least 6/5 [ABE06]. Therefore, we cannot hope to obtain an LP-based approximation algorithm with a factor better than 6/5 for this problem using this LP relaxation.

Our methods apply theory connecting integrality gaps to sets of feasible solutions. Instances with g(I) = 1 have P(I) = conv(S(I)), the convex hull of the lattice of feasible points. In this case, P(I) is an integral polyhedron. The spanning tree polytope of a graph G and the perfect-matching polytope of a graph G have this property [Edm70, Edm65]. For such problems there is an algorithm to express any vector x ∈ P(I) as a convex combination of points in S(I) in polynomial time [GLS93].

###### Proposition 1.

If g(I) = 1, then for x ∈ P(I) there exist λ_1, …, λ_k, where λ_i ≥ 0 and Σ_i λ_i = 1, and x^1, …, x^k ∈ S(I) such that x = Σ_i λ_i x^i. Moreover, we can find such a convex combination in polynomial time.

An equivalent way of describing Proposition 1 is the following Theorem of Carr and Vempala [CV04].

###### Theorem 2 (Carr, Vempala [CV04]).

We have g(I) ≤ C if and only if for every x ∈ P(I) there exist λ_1, …, λ_k ≥ 0 with Σ_i λ_i = 1 and x^1, …, x^k ∈ S(I) such that C · x ≥ Σ_i λ_i x^i.

We denote by dom(P) the set of points y such that there exists a point x ∈ P with y ≥ x; dom(P) is also known as the dominant of P. A polyhedron is of blocking type if it is equal to its dominant. Theorem 2 was first introduced by Goemans [Goe95] for blocking-type polyhedra. While there is an exact algorithm for problems with gap 1 as stated in Proposition 1, Theorem 2 is existential, with no construction. To study integrality gaps, we wish to find such a solution constructively: under reasonable complexity assumptions, for a specific problem with g(I) ≤ C and a point x ∈ P(I), can we find, for some C′ ≥ C, multipliers λ_1, …, λ_k ≥ 0 with Σ_i λ_i = 1 and x^1, …, x^k ∈ S(I) such that C′ · x ≥ Σ_i λ_i x^i, in polynomial time? We wish to find the smallest factor C′ possible.

### 1.1 Algorithms and Theory Contributions

We give a general approximation framework for solving binary IPs. Consider the sets of points S and P described by the sets S(A, b) and P(A, b) as in (1) and (2), respectively. Assume in addition that S ⊆ {0, 1}^n. For a vector y with y ≥ 0, let ⌈y⌉ be the vector obtained from y by rounding every coordinate up. For an integer ℓ ∈ [n], let y_{[ℓ]} be the vector with coordinates y_j for j ∈ [ℓ].

We introduce the Fractional Decomposition Tree algorithm (FDT), a polynomial-time algorithm that, given a point x ∈ P, produces a convex combination of feasible points in S that is dominated by a "factor" C of x. If C = g, this factor would be optimal. However, we can only guarantee C ≤ g^n. FDT relies on iteratively solving linear programs that are about the same size as the description of P.

thmbinaryFDT Assume g < ∞. The Fractional Decomposition Tree (FDT) algorithm, given x ∈ P, produces in polynomial time λ_1, …, λ_k ≥ 0 and x^1, …, x^k ∈ S such that Σ_i λ_i = 1, Σ_i λ_i x^i ≤ C · x, and k ≤ n. Moreover, C ≤ g^n.

A subroutine of FDT, called the DomToIP algorithm, finds feasible solutions to any IP with finite gap. This can be of independent interest, especially in proving that a model has unbounded gap. thmDomToIP Assume g < ∞. The DomToIP algorithm finds x̂ ∈ S in polynomial time.

For a generic IP instance it is NP-hard even to decide if the set of feasible solutions is empty or not. There are a number of heuristics for this purpose, such as the feasibility pump algorithm [FGL05, FS09]. These heuristics are often very effective and fast in practice; however, they can sometimes fail to find a feasible solution. Moreover, these heuristics do not provide any bounds on the quality of the solution they find.

Here is how the FDT algorithm works at a high level: in iteration ℓ the algorithm maintains a convex combination of vectors in dom(P) that have a 0 or 1 value for the coordinates indexed 1, …, ℓ. Let x′ be a vector in the convex combination in iteration ℓ of the algorithm. We solve a linear program that gives us vectors x^0, x^1 and multipliers λ_0, λ_1 such that x^0_{ℓ+1} = 0, x^1_{ℓ+1} = 1, and λ_0 x^0 + λ_1 x^1 ≤ x′. We then replace x′ in the convex combination with x^0 and x^1. Repeating this for every vector in the convex combination from the previous iteration yields a convex combination of points that is "more" integral. If in any iteration there are too many points in the convex combination, we solve a linear program that "prunes" the convex combination. At the end we find a convex combination of integer solutions x^1, …, x^k. For each such solution x^i we invoke the DomToIP algorithm (see Section 2) to find x̂^i ∈ S where x̂^i ≤ x^i.

One can extend the FDT algorithm for binary IPs to covering IPs by losing an extra factor of 2 on top of the loss for FDT. In order to avoid this extra factor, we need to treat the coordinates with value larger than 1 differently. We focus on the 2-edge-connected multi-subgraph problem (2EC): given a graph G = (V, E) and c ∈ ℚ^E_{≥0}, find a 2-edge-connected multi-subgraph of G with minimum cost. The natural linear programming relaxation for this problem is

 min{cx : x(δ(U)) ≥ 2 for ∅ ⊊ U ⊊ V, x ∈ [0, 2]^E}. (5)

We denote the feasible region of this LP by Subtour(G). Let 2ECec(G) be the convex hull of incidence vectors of 2-edge-connected multi-subgraphs of the graph G. Following the definition in (4) we have

 g(2ECec) = max_{c ≥ 0, G} (min_{x ∈ 2ECec(G)} cx) / (min_{x ∈ Subtour(G)} cx). (6)

thmFDTEC Let G = (V, E) and let x be an extreme point of Subtour(G). The FDT algorithm for 2EC produces, in polynomial time, λ_1, …, λ_k ≥ 0 and 2-edge-connected multi-subgraphs F_1, …, F_k of G such that Σ_i λ_i = 1 and Σ_i λ_i χ^{F_i} ≤ C · x. Moreover, k ≤ |E|.
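As an illustration (ours, not from the paper), the LP (5) can be solved directly on a small graph by enumerating all proper vertex subsets; on larger graphs one would instead separate the cut constraints with a minimum-cut routine. For the 4-cycle with unit costs the optimum is 4, attained by the cycle itself, so the relaxation is tight on this instance. Assuming SciPy:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

# Subtour/cut LP (5) for the 4-cycle C4 with unit costs, built by
# enumerating all proper vertex subsets U (fine at this size).
V = range(4)
E = [(0, 1), (1, 2), (2, 3), (0, 3)]

rows, rhs = [], []
for k in range(1, 4):
    for U in combinations(V, k):
        # x(delta(U)) >= 2, written as -x(delta(U)) <= -2 for linprog.
        cut = [1.0 if (u in U) != (v in U) else 0.0 for (u, v) in E]
        rows.append([-c for c in cut])
        rhs.append(-2.0)

res = linprog(np.ones(len(E)), A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(0, 2)] * len(E), method="highs")
```

Each singleton cut already forces x(δ(v)) ≥ 2; summing these over the four vertices shows the optimum cannot be below 4.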

### 1.2 Experiments.

Although the bounds guaranteed in the theorems above are very large, we show that, in practice, the algorithm works very well for the network design problems described above. We show how one might use FDT to investigate the integrality gap for such well-studied problems.

#### 1.2.1 Minimum vertex cover problem

In the minimum vertex cover problem (VC) we are given a graph G = (V, E) and c ∈ ℚ^V_{≥0}. A subset U of V is a vertex cover if, for each e ∈ E, at least one endpoint of e is in U. The goal in VC is to find a minimum-cost vertex cover. The linear programming relaxation for VC is

 min{cx : x_u + x_v ≥ 1 for e = uv ∈ E, x ∈ [0, 1]^V}. (7)

The integrality gap of this formulation is exactly 2 [WS11]. It is known to be UG-hard to approximate VC within any factor strictly better than 2 [AKS11]. We compare FDT and the feasibility pump heuristic [FGL05] on the small instances of the PACE 2019 (Parameterized Algorithms and Computational Experiments Challenge 2019: https://pacechallenge.org/2019/) challenge test cases [DFH19].

#### 1.2.2 Tree augmentation problem

In the Tree Augmentation Problem (TAP) we are given a graph G = (V, E) and a spanning tree T of G. We also have a cost vector c ∈ ℚ^{E∖T}_{≥0}. A subset F of E ∖ T is called a feasible augmentation if (V, T ∪ F) is a 2-edge-connected graph. In TAP we seek a minimum-cost feasible augmentation. The natural linear programming relaxation for TAP is

 min{cx : Σ_{ℓ ∈ cov(e)} x_ℓ ≥ 1 for e ∈ T, x ∈ [0, 1]^{E∖T}}. (8)

where cov(e) is the set of edges ℓ ∈ E ∖ T such that e is in the unique cycle of T ∪ {ℓ}. We call the LP above the cut-LP. The integrality gap of the cut-LP is known to be between 3/2 [CKKK08] and 2 [FJ81]. We create random fractional extreme points of the cut-LP and round them using FDT. For the instances that we create, the blow-up factor stays small, providing an upper bound on the integrality gap for such instances.
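To make the cut-LP concrete, here is a toy instance (ours, not from the paper's test set): a path tree on four vertices with three candidate links. Assuming SciPy; cov is computed directly, which is valid only because the tree is a path:

```python
import numpy as np
from scipy.optimize import linprog

# Tree: path 0-1-2-3 with tree edges (0,1), (1,2), (2,3).
# Links (non-tree edges) to choose from, with unit costs.
tree_edges = [(0, 1), (1, 2), (2, 3)]
links = [(0, 2), (1, 3), (0, 3)]

# In a path tree, link (u, v) covers exactly the tree edges lying
# between u and v on the path.
def covers(tree_edge, link):
    (a, b), (u, v) = tree_edge, link
    return min(u, v) <= a and b <= max(u, v)

# Cut-LP: one covering constraint per tree edge, x(cov^{-1}(e)) >= 1.
A = np.array([[1.0 if covers(e, l) else 0.0 for l in links]
              for e in tree_edges])
res = linprog(np.ones(len(links)), A_ub=-A, b_ub=-np.ones(len(tree_edges)),
              bounds=[(0, 1)] * len(links), method="highs")
```

Here the single link (0, 3) covers every tree edge, so the LP optimum is 1 and the instance is integral; the fractional extreme points used in the experiments require less trivial instances.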

#### 1.2.3 2-edge-connected multi-subgraph problem

Known polyhedral structure makes it easier to study integrality gaps for such problems. We use the idea of fundamental extreme points [CR98, BC11, CV04] to create the "hardest" LP solutions to decompose.

There are fairly good bounds on the integrality gap for TSP and 2EC. Benoit and Boyd [BB08] used a quadratic program to show that the integrality gap of the subtour elimination relaxation for the TSP is at most 4/3 for graphs with at most 10 vertices. Alexander et al. [ABE06] used the same ideas to provide an upper bound of 6/5 for 2EC on graphs with at most 10 vertices.

Consider a graph G = (V, E). A Carr–Vempala point is a fractional point x in Subtour(G) where the edges e with 0 < x_e < 1 form a single cycle in G and the vertices on the cycle are connected via vertex-disjoint paths of edges with x_e = 1. Carr and Vempala [CV04] showed that g(2ECec) is achieved for instances where the optimal solution to the subtour elimination LP is a Carr–Vempala point. We show that the integrality gap is at most 6/5 for Carr–Vempala points with at most 12 vertices on the cycle formed by the fractional edges. Note that the number of vertices in these instances can be arbitrarily high, since the paths of edges with x-value 1 can be arbitrarily long.

## 2 Finding a Feasible Solution

Consider an instance (A, b) of the IP formulation. Define the sets S = S(A, b) and P = P(A, b) as in (1) and (2), respectively. Assume S ⊆ {0, 1}^n and S ≠ ∅. For simplicity of notation we write S, P, and g for this section and the next section. Also, for both sections we assume g < ∞. Without loss of generality we can assume the constraints x_i ≤ 1 for i ∈ [n] appear in Ax ≥ b.

In this section we prove Theorem 1.1. In fact, we prove a stronger result.

###### Lemma 3.

Given g < ∞ and x ∈ dom(P) ∩ {0, 1}^n, there is an algorithm (the DomToIP algorithm) that finds x̂ ∈ S in polynomial time, such that x̂ ≤ x.

Notice that Lemma 3 implies Theorem 1.1, since it is easy to obtain an integer point in dom(P): rounding up any fractional point in P gives us a point in dom(P) ∩ {0, 1}^n.

### 2.1 Proof of Lemma 3: The DomToIP Algorithm

We start by introducing an algorithm that "fixes" the variables iteratively, starting from the first coordinate and ending at the n-th coordinate. Suppose we run the algorithm for ℓ iterations, and in each iteration i we find a vector x^(i) such that x^(i)_j ∈ {0, 1} for j ∈ [i]. Notice that we can set x^(0) = x. Now consider the following linear program, whose variables are the z variables.

 DomToIP(x^(ℓ)):   min  z_{ℓ+1}                             (9)
                   s.t. Az ≥ b                              (10)
                        z_j = x^(ℓ)_j   for j = 1, …, ℓ     (11)
                        z_j ≤ x^(ℓ)_j   for j = ℓ+1, …, n   (12)
                        z ≥ 0                               (13)

If the optimal value of DomToIP(x^(ℓ)) is 0, then let x^(ℓ+1)_{ℓ+1} = 0. Otherwise, if the optimal value is strictly positive, let x^(ℓ+1)_{ℓ+1} = 1. Let x^(ℓ+1)_j = x^(ℓ)_j for j ∈ [n] ∖ {ℓ+1} (see Algorithm 1).

The above procedure shows how to find x^(ℓ+1) from x^(ℓ). The DomToIP algorithm initializes with x^(0) = x and iteratively calls this procedure in order to obtain x^(n).
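The iteration can be sketched as follows (a simplification of Algorithm 1, ours, assuming SciPy; for clarity we keep the remaining upper bounds fixed at x rather than updating them, which suffices on this toy instance):

```python
import numpy as np
from scipy.optimize import linprog

def dom_to_ip(A, b, x, tol=1e-7):
    """Sketch of DomToIP: given a 0/1 point x that dominates a point of
    P = {z : Az >= b, 0 <= z <= 1}, fix coordinates one at a time while
    keeping the linear system feasible."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.array(x, dtype=float)
    n = len(x)
    for l in range(n):
        c = np.zeros(n)
        c[l] = 1.0                                    # objective: min z_l
        bounds = [(x[j], x[j]) for j in range(l)]     # z_j = x_j, fixed coords
        bounds += [(0.0, x[j]) for j in range(l, n)]  # z_j <= x_j otherwise
        res = linprog(c, A_ub=-A, b_ub=-b, bounds=bounds, method="highs")
        assert res.status == 0, "infeasible: x does not dominate a point of P"
        x[l] = 0.0 if res.fun < tol else 1.0          # fix the l-th coordinate
    return x

# Vertex cover on a triangle: (1, 1, 1) is the round-up of the fractional
# LP point (1/2, 1/2, 1/2), hence an integral point of the dominant.
A_vc = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
b_vc = [1, 1, 1]
x_hat = dom_to_ip(A_vc, b_vc, [1, 1, 1])
```

On this instance the first LP attains value 0, so the first coordinate is fixed to 0; the remaining two LPs are then forced to 1, yielding the vertex cover (0, 1, 1) dominated by (1, 1, 1).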

We prove that indeed x^(n) ∈ S. First, we need to show that the linear program solved in each iteration of DomToIP is feasible. We show something stronger. For ℓ ∈ {0, 1, …, n} let

 LP(ℓ) = {z ∈ P : z ≤ x^(ℓ) and z_j = x^(ℓ)_j for j ∈ [ℓ]}, and
 IP(ℓ) = {z ∈ LP(ℓ) : z ∈ {0, 1}^n}.

Notice that if LP(ℓ) is a non-empty set then DomToIP(x^(ℓ)) is feasible. We show by induction on ℓ that LP(ℓ) and IP(ℓ) are not empty for ℓ ∈ {0, 1, …, n}. First notice that LP(0) is clearly non-empty, since by definition x ∈ dom(P), meaning there exists y ∈ P such that y ≤ x. By Theorem 2, there exist θ_i ≥ 0 and x̃^i ∈ S for i ∈ [k] such that Σ_i θ_i = 1 and g · y ≥ Σ_i θ_i x̃^i. So if x_j = 0, then y_j = 0, which implies that x̃^i_j = 0 for all i with θ_i > 0. Since x is binary, this gives x̃^i ≤ x for every such i. Therefore x̃^i ∈ IP(0), which implies that LP(0) and IP(0) are non-empty.

Now assume LP(ℓ) is non-empty for some ℓ ∈ {0, …, n−1}. Since LP(ℓ) ≠ ∅, the linear program DomToIP(x^(ℓ)) is feasible and hence has an optimal solution z*.

We consider two cases. In the first case, the optimal value of DomToIP(x^(ℓ)) is 0, so z*_{ℓ+1} = 0 and x^(ℓ+1)_{ℓ+1} = 0. Since z* ≤ x^(ℓ) and z*_j = x^(ℓ)_j for j ∈ [ℓ], we have z* ∈ LP(ℓ+1), so LP(ℓ+1) ≠ ∅. Also, z* ∈ P. By Theorem 2 there exist θ_i ≥ 0 and x̃^i ∈ S for i ∈ [k] such that Σ_i θ_i = 1 and g · z* ≥ Σ_i θ_i x̃^i. So for every coordinate j with z*_j = 0, we have x̃^i_j = 0 for all i where θ_i > 0. This implies x̃^i ≤ x^(ℓ+1) for all such i. Hence, there exists x̃^i with θ_i > 0 such that x̃^i ≤ x^(ℓ+1). We claim that x̃^i ∈ IP(ℓ+1). If not, we must have some j ∈ [ℓ+1] such that x^(ℓ+1)_j = 1 and x̃^i_j = 0. Without loss of generality assume j is the minimum index satisfying this. Consider iteration j−1 of the DomToIP algorithm. Notice that x^(j)_j = 1. We have x^(j)_j = 1, which implies that when we solved DomToIP(x^(j−1)) the optimal value was strictly larger than zero. However, x̃^i is a feasible solution to DomToIP(x^(j−1)) and gives an objective value of 0. This is a contradiction, so x̃^i ∈ IP(ℓ+1).

Now for the second case, assume the optimal value of DomToIP(x^(ℓ)) is strictly positive. We have x^(ℓ+1)_{ℓ+1} = 1. Notice that for each point w ∈ IP(ℓ) we have w ∈ LP(ℓ), so w is a feasible solution to DomToIP(x^(ℓ)); hence for each w ∈ IP(ℓ) we have w_{ℓ+1} > 0, i.e. w_{ℓ+1} = 1. This means that w ∈ IP(ℓ+1), and LP(ℓ+1) and IP(ℓ+1) are non-empty.

Now consider ℓ = n. Every point z ∈ IP(n) satisfies z_j = x^(n)_j for all j ∈ [n], so IP(n) ⊆ {x^(n)}. Since IP(n) is non-empty, we have x^(n) ∈ IP(n) ⊆ P, and since x^(n) is integral, x^(n) ∈ S. Moreover, coordinates never increase throughout the algorithm: a coordinate is set to 1 only when the optimal value is positive, which forces that coordinate of x to already be 1. Hence x^(n) ≤ x^(0) = x. This concludes the proof of Lemma 3.

## 3 FDT on Binary IPs

Assume we are given a point x* ∈ P. For instance, x* can be the optimal solution when minimizing a cost function c over P, which provides a lower bound on the integer optimum. In this section, we prove Theorem 1.1 by describing the Fractional Decomposition Tree (FDT) algorithm. We also remark that if g = 1, then the algorithm will give an exact decomposition of any feasible solution.

The FDT algorithm grows a tree similar to the classic branch-and-bound search tree for integer programs. Each node represents a partially integral vector in dom(P) together with a multiplier λ. The solutions contained in the nodes of the tree become progressively more integral at each level. In each level of the tree, the algorithm maintains a conic combination of the points with the properties mentioned above. Leaves of the FDT tree contain solutions with integer values for all the variables that dominate a point in P. In Lemma 3 we saw how to turn these into points in S.

##### Branching on a node.

We begin with the following lemmas that show how the FDT algorithm branches on a variable.

###### Lemma 4.

Given ℓ ∈ [n] and x′ ∈ dom(P), we can find in polynomial time vectors x^0, x^1 and scalars λ_0, λ_1 ≥ 0 such that: (i) λ_0 + λ_1 ≥ 1/g, (ii) x^0 and x^1 are in P, (iii) x^0_ℓ = 0 and x^1_ℓ = 1, (iv) λ_0 x^0 + λ_1 x^1 ≤ x′.

###### Proof.

Consider the following linear program, which we denote by LPC(ℓ, x′). The variables of LPC(ℓ, x′) are the vectors x^0 and x^1 and the scalars λ_0 and λ_1.

 LPC(ℓ, x′):   max  λ_0 + λ_1                           (14)
               s.t. Ax^j ≥ bλ_j       for j = 0, 1      (15)
                    0 ≤ x^j ≤ λ_j·1   for j = 0, 1      (16)
                    x^0_ℓ = 0,  x^1_ℓ = λ_1             (17)
                    x^0 + x^1 ≤ x′                      (18)
                    λ_0, λ_1 ≥ 0                        (19)

Let x*^0, x*^1, λ*_0, and λ*_1 be an optimal solution to the LP above. Let x^j = x*^j/λ*_j and λ_j = λ*_j for j = 0, 1 (if λ*_j = 0 the pair (x^j, λ_j) can be discarded). This choice satisfies (ii), (iii), and (iv). To show that (i) is also satisfied we prove the following claim.

We have λ*_0 + λ*_1 ≥ 1/g.

###### Proof.

We show that there is a feasible solution that achieves an objective value of 1/g. By Theorem 2 there exist θ_i ≥ 0 with Σ_i θ_i = 1 and x̃^i ∈ S for i ∈ [k] such that g · x′ ≥ Σ_i θ_i x̃^i. So

 x′ ≥ Σ_{i=1}^{k} (θ_i/g) x̃^i = Σ_{i ∈ [k]: x̃^i_ℓ = 0} (θ_i/g) x̃^i + Σ_{i ∈ [k]: x̃^i_ℓ = 1} (θ_i/g) x̃^i. (20)

For j ∈ {0, 1}, let x^j = Σ_{i ∈ [k]: x̃^i_ℓ = j} (θ_i/g) x̃^i. Also let λ_j = Σ_{i ∈ [k]: x̃^i_ℓ = j} θ_i/g. Note that λ_0 + λ_1 = 1/g. Constraint (18) is satisfied by Inequality (20). Also, for j = 0, 1 we have

 Ax^j = Σ_{i ∈ [k]: x̃^i_ℓ = j} (θ_i/g) A x̃^i ≥ b Σ_{i ∈ [k]: x̃^i_ℓ = j} θ_i/g = bλ_j. (21)

Hence, Constraint (15) holds. Constraint (17) also holds, since x^0_ℓ is obviously 0 and x^1_ℓ = Σ_{i ∈ [k]: x̃^i_ℓ = 1} θ_i/g = λ_1. The rest of the constraints trivially hold.

This concludes the proof of Lemma 4. ∎
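A matrix-form sketch of LPC(ℓ, x′) (ours, assuming SciPy). On the triangle vertex-cover system with x′ = (1, 1, 1) and ℓ the first coordinate, the optimum is λ_0 + λ_1 = 3/2, comfortably above the 1/g = 1/2 guaranteed by the claim (the gap of that relaxation is g = 2):

```python
import numpy as np
from scipy.optimize import linprog

def branch_lp(A, b, x_prime, l):
    """Sketch of LPC(l, x'): variables are x^0, x^1 (n entries each)
    and lambda_0, lambda_1; maximize lambda_0 + lambda_1."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    # Variable vector v = [x0 (n), x1 (n), lam0, lam1].
    c = np.zeros(2 * n + 2)
    c[-2:] = -1.0                                  # maximize lam0 + lam1
    A_ub, b_ub = [], []
    for j in (0, 1):
        for i in range(m):                         # A x^j >= b * lam_j
            row = np.zeros(2 * n + 2)
            row[j * n:(j + 1) * n] = -A[i]
            row[2 * n + j] = b[i]
            A_ub.append(row); b_ub.append(0.0)
        for i in range(n):                         # x^j_i <= lam_j
            row = np.zeros(2 * n + 2)
            row[j * n + i] = 1.0
            row[2 * n + j] = -1.0
            A_ub.append(row); b_ub.append(0.0)
    for i in range(n):                             # x^0 + x^1 <= x'
        row = np.zeros(2 * n + 2)
        row[i] = 1.0
        row[n + i] = 1.0
        A_ub.append(row); b_ub.append(x_prime[i])
    A_eq = np.zeros((2, 2 * n + 2))                # x^0_l = 0, x^1_l = lam1
    A_eq[0, l] = 1.0
    A_eq[1, n + l] = 1.0
    A_eq[1, 2 * n + 1] = -1.0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=np.zeros(2),
                  bounds=[(0, None)] * (2 * n + 2), method="highs")
    return res.x[:n], res.x[n:2 * n], res.x[-2], res.x[-1]

# Triangle vertex-cover system, branching on coordinate 0 of x' = (1, 1, 1).
A_vc = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
x0, x1, lam0, lam1 = branch_lp(A_vc, [1, 1, 1], [1.0, 1.0, 1.0], 0)
```

Dividing x^1 by λ_1 then gives a point of P with its first coordinate equal to 1, as in the normalization step of Lemma 4.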

We now show that if x′ in the statement of Lemma 4 is partially integral, we can find solutions with more integral components.

###### Lemma 5.

Given x′ ∈ dom(P) where x′_j ∈ {0, 1} for j ∈ [ℓ−1], for some ℓ ∈ [n], we can find in polynomial time vectors x^0, x^1 and scalars λ_0, λ_1 ≥ 0 such that: (i) λ_0 + λ_1 ≥ 1/g, (ii) x^0 and x^1 are in dom(P), (iii) x^0_ℓ = 0 and x^1_ℓ = 1, (iv) λ_0 x^0 + λ_1 x^1 ≤ x′, (v) x^0_i = x^1_i = x′_i for i ∈ [ℓ−1].

###### Proof.

By Lemma 4 we can find x̄^0, x̄^1, λ_0, and λ_1 that satisfy (i), (ii), (iii), and (iv). We define x^0 and x^1 as follows. For j ∈ {0, 1}: for i ∈ [ℓ−1], let x^j_i = x′_i; for i ∈ {ℓ, …, n}, let x^j_i = x̄^j_i.

We now show that x^0, x^1, λ_0, and λ_1 satisfy all the conditions. Note that conditions (i), (iii), and (v) are trivially satisfied, and (ii) holds because overwriting the first ℓ−1 coordinates can only increase them: if x′_i = 0 then Constraint (18) forces x̄^j_i = 0, and if x′_i = 1 then x̄^j_i ≤ 1 since P ⊆ [0, 1]^n. Thus we only need to show (iv) holds, i.e. λ_0 x^0 + λ_1 x^1 ≤ x′. For coordinates i ≥ ℓ this follows from property (iv) for x̄^0 and x̄^1. Consider i ∈ [ℓ−1]. If x′_i = 0, then x^0_i = x^1_i = 0 and the inequality clearly holds. Hence assume x′_i = 1, so λ_0 x^0_i + λ_1 x^1_i = λ_0 + λ_1. If λ_0 + λ_1 ≤ 1, we are done. Otherwise, we may scale λ_0 and λ_1 down by λ_0 + λ_1; this preserves (i), (ii), (iii), and (v), and gives λ_0 + λ_1 = 1, so (iv) holds. ∎

##### Growing and Pruning FDT tree.

The FDT algorithm maintains a set of nodes L_ℓ in iteration ℓ of the algorithm. The nodes in L_ℓ correspond to the nodes in level ℓ of the FDT tree. The points in the leaves of the FDT tree, L_n, are points in dom(P) that are integral in all coordinates.

###### Lemma 6.

There is a polynomial-time algorithm that produces sets L_0, …, L_n of points in dom(P) together with multipliers λ_x ≥ 0 with the following properties for ℓ ∈ {0, …, n}: (a) If x ∈ L_ℓ, then x_j ∈ {0, 1} for j ∈ [ℓ], i.e. the first ℓ coordinates of a solution in level ℓ are integral, (b) Σ_{x ∈ L_ℓ} λ_x ≥ g^{−ℓ}, (c) Σ_{x ∈ L_ℓ} λ_x · x ≤ x*, (d) |L_ℓ| ≤ n.

###### Proof.

We prove this lemma using induction, but one can clearly see how to turn this proof into a polynomial-time algorithm. Let L_0 be the set that contains a single node (the root of the FDT tree) with the point x* and multiplier 1. It is easy to check all the requirements in the lemma are satisfied for this choice.

Suppose by induction that we have constructed sets L_0, …, L_ℓ. Let the solutions in L_ℓ be x^1, …, x^m with multipliers λ_1, …, λ_m, respectively. For each i ∈ [m], if x^i_{ℓ+1} ∈ {0, 1} we add the pair (x^i, λ_i) to L′. Otherwise, applying Lemma 5 (setting x′ = x^i and branching on coordinate ℓ+1) we can find x^{i0}, x^{i1}, λ_{i0}, and λ_{i1} with the properties (i) to (v) in Lemma 5. Add the pairs (x^{i0}, λ_i λ_{i0}) and (x^{i1}, λ_i λ_{i1}) to L′. It is easy to check that the set L′ is a suitable candidate for L_{ℓ+1}, i.e. L′ satisfies (a), (b) and (c). However, we can only ensure that |L′| ≤ 2m, and we might have |L′| > n. We call the following linear program Pruning(L′). The variables of Pruning(L′) are scalar variables θ_j for each node in L′.

 Pruning(L′):   max { Σ_{j=1}^{|L′|} θ_j : Σ_{j=1}^{|L′|} θ_j x^j_i ≤ x*_i for i ∈ [n], θ ≥ 0 }. (22)

Notice that the multipliers of the nodes in L′ form a feasible solution to Pruning(L′). Let θ* be an optimal vertex solution to this LP. Since the problem has |L′| variables, θ* has to satisfy |L′| linearly independent constraints at equality. However, there are only n constraints of the type Σ_j θ_j x^j_i ≤ x*_i. Therefore, at most n coordinates of θ* are non-zero. The set L_{ℓ+1}, which consists of the points x^j with θ*_j > 0 together with the multipliers θ*_j, satisfies the properties in the statement of the lemma. Notice that we can discard the nodes that have θ*_j = 0, so |L_{ℓ+1}| ≤ n. Also, since θ* is optimal and the original multipliers are feasible for Pruning(L′), we have Σ_j θ*_j ≥ g^{−(ℓ+1)}. ∎
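The pruning step can be sketched as follows (ours, assuming SciPy, whose HiGHS backend returns a basic, i.e. vertex, optimal solution). With three candidate points in dimension n = 2, an optimal solution has at most two nonzero multipliers:

```python
import numpy as np
from scipy.optimize import linprog

# Pruning sketch: given candidate points x^1, ..., x^m (rows of X) and the
# target x*, maximize the total multiplier subject to the conic combination
# staying dominated by x*.  A vertex optimum has at most n nonzero entries.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # m = 3 candidate points, dimension n = 2
x_star = np.array([1.0, 1.0])
m, n = X.shape

res = linprog(-np.ones(m),          # maximize sum of theta
              A_ub=X.T, b_ub=x_star,
              bounds=[(0, None)] * m, method="highs")
theta = res.x
support = int(np.count_nonzero(theta > 1e-9))
```

Here the unique optimum puts multiplier 1 on each of the first two points and drops the third, so the total multiplier is 2 and the support size is 2 ≤ n.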

##### From leaves of FDT to feasible solutions.

For the leaves of the FDT tree, L_n, we have that every solution x in L_n is integral and lies in dom(P). By applying Lemma 3 we can obtain a point x̂ ∈ S such that x̂ ≤ x. This concludes the description of the FDT algorithm and proves Theorem 1.1. See Algorithm 2 for a summary of the FDT algorithm.