An Improved Approximation Algorithm for the Minimum k-Edge Connected Multi-Subgraph Problem

01/15/2021 ∙ by Anna R. Karlin, et al. ∙ University of Washington

We give a randomized $\big(1+\sqrt{8\ln k/k}\big)$-approximation algorithm for the minimum $k$-edge connected spanning multi-subgraph problem, $k$-ECSM.


1 Introduction

In an instance of the minimum $k$-edge connected subgraph problem, or $k$-ECSS, we are given an (undirected) graph $G = (V, E)$ with $n$ vertices and a cost function $c: E \to \mathbb{R}_{\ge 0}$, and we want to choose a minimum cost set of edges $F \subseteq E$ such that the subgraph $(V, F)$ is $k$-edge connected. In its most general form, $k$-ECSS generalizes several extensively-studied problems in network design such as tree augmentation or cactus augmentation. The $k$-edge-connected multi-subgraph problem, $k$-ECSM, is a close variant of $k$-ECSS in which we want to choose a $k$-edge-connected multi-subgraph of $G$ of minimum cost, i.e., we can choose an edge multiple times. It turns out that one can assume without loss of generality that the cost function in $k$-ECSM is a metric, i.e., for any three vertices $u, v, w$, we have $c(u, w) \le c(u, v) + c(v, w)$.
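To see why the metric assumption is without loss of generality (a standard argument, sketched here for completeness rather than quoted from the paper): replace the cost of every pair $u, v$ by the shortest-path distance $\tilde{c}(u, v)$ under $c$. Then $\tilde{c}$ is a metric with $\tilde{c} \le c$, so the optimum can only decrease; conversely, any $k$-edge-connected multi-subgraph under $\tilde{c}$ can be converted, at no greater cost under $c$, by replacing each edge with the edges of a corresponding shortest path, and every cut still has at least $k$ crossing edges because each replaced edge contributes at least one crossing edge to every cut it used to cross.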

Around four decades ago, Frederickson and JáJá [FJ81, FJ82] designed a 2-approximation algorithm for $k$-ECSS and a 3/2-approximation algorithm for $k$-ECSM. The latter essentially follows by a reduction to the well-known Christofides-Serdyukov approximation algorithm for the traveling salesperson problem (TSP). Over the last four decades, despite a number of papers on the problem [JT00, KR96, Kar99, Gab05, GG08, GGTW09, Pri11, LOS12], the aforementioned approximation factors were only improved in the cases where the underlying graph is unweighted or $k = 2$. Most notably, Gabow, Goemans, Tardos and Williamson [GGTW09] showed that if the graph is unweighted then $k$-ECSS and $k$-ECSM admit $\big(1 + O(1/k)\big)$-approximation algorithms, i.e., as $k \to \infty$ the approximation factor approaches 1. The special case of $k$-ECSM where $k = 2$ received significant attention, and better than 3/2-approximation algorithms were designed for special cases [CR98, BFS16, SV14, BCCGISW20].

Motivated by [GGTW09], Pritchard posed the following conjecture:

Conjecture 1.1 ([Pri11]).

The $k$-ECSM problem admits a $\big(1 + O(1/k)\big)$-approximation algorithm.

In other words, if true, the above conjecture implies that the classical 3/2 factor is not optimal for sufficiently large $k$, and moreover that it is possible to design an approximation algorithm whose factor gets arbitrarily close to 1 as $k \to \infty$. In this paper, we prove a weaker version of the above conjecture.

Theorem 1.2 (Main).

There is a randomized algorithm for (weighted) $k$-ECSM with approximation factor (at most) $1 + \sqrt{8\ln k/k}$.

We remark that our main theorem improves upon the classical 3/2-approximation algorithm for $k$-ECSM only when $k$ is sufficiently large (although one can use the more precise expression given in the proof to, for example, improve upon 3/2 for even values of $k$).
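As a quick arithmetic check on where the stated bound beats 3/2 (the threshold below is derived from the expression in Theorem 1.2, not quoted from the paper): $1 + \sqrt{8\ln k/k} < 3/2$ exactly when $8\ln k/k < 1/4$, i.e., when $k > 32 \ln k$, which holds once $k$ is larger than roughly 163.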

For a set $S \subseteq V$, let $\delta(S)$ denote the set of edges leaving $S$. The following is the natural linear programming relaxation for $k$-ECSM.

(1)      $\min \ \sum_{e \in E} c(e)\, x_e$
s.t.     $x(\delta(S)) \ge k$,   for all $\emptyset \neq S \subsetneq V$,
         $x_e \ge 0$,            for all $e \in E$.

Note that while in an optimum solution of $k$-ECSM the degree of each vertex is not necessarily equal to $k$, since the cost function satisfies the triangle inequality we may assume that in any optimum fractional solution each vertex has (fractional) degree exactly $k$. This follows from the parsimonious property [GB93].
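As an aside (not from the paper), LP (1) has exponentially many cut constraints but can be solved by a simple cutting-plane loop whose separation oracle is a global minimum cut with respect to the current fractional solution. The sketch below illustrates this idea using NetworkX and SciPy; the function name, the "cost" attribute, and the tolerances are our own illustrative choices, and the input graph is assumed connected.

import networkx as nx
import numpy as np
from scipy.optimize import linprog

def solve_kecsm_lp(G, k, cost="cost", tol=1e-7, max_rounds=500):
    edges = list(G.edges())
    c = np.array([G[u][v].get(cost, 1.0) for u, v in edges])
    rows = []                       # accumulated cut constraints, each encoding -x(delta(S)) <= -k
    x = np.zeros(len(edges))
    for _ in range(max_rounds):
        # Separation oracle: a global minimum cut of G weighted by the current x.
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        for (u, v), xe in zip(edges, x):
            H.add_edge(u, v, weight=float(max(xe, 0.0)))
        cut_value, (S, _) = nx.stoer_wagner(H, weight="weight")
        if cut_value >= k - tol:
            return dict(zip(edges, x))      # every cut has x-value >= k: x is (near-)feasible, hence optimal
        S = set(S)
        rows.append([-1.0 if (u in S) != (v in S) else 0.0 for u, v in edges])
        res = linprog(c, A_ub=np.array(rows), b_ub=-float(k) * np.ones(len(rows)),
                      bounds=[(0, None)] * len(edges), method="highs")
        x = res.x
    raise RuntimeError("cutting-plane loop did not converge")

In each round the loop either certifies feasibility (the minimum cut already carries $x$-value at least $k$) or adds the violated constraint $x(\delta(S)) \ge k$ and re-solves the restricted LP.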

We prove Theorem 1.2 by rounding an optimum solution to the above linear program; as a corollary, we also upper-bound its integrality gap.

Corollary 1.3.

The integrality gap of LP (1) is at most $1 + \sqrt{8\ln k/k}$.

1.1 Proof Overview

Before explaining our algorithm, we recall a randomized rounding approach of Karger [Kar99]. Karger showed that if we choose every edge $e$ independently with probability $x_e$ (where $x$ is a solution to LP (1)), then the sample is $\big(1 - O(\sqrt{\log n/k})\big)k$-edge connected with high probability. He then fixes the connectivity of the sample by adding copies of the minimum spanning tree of $G$. This gives a $1 + O(\sqrt{\log n/k})$ approximation algorithm for the problem.

First, we observe that where $x$ is a solution to the LP (1), the vector $\frac{2x}{k}$ is in the spanning tree polytope (after modifying $G$ slightly; see Fact 2.1 for more details). Following a recent line of works on the traveling salesperson problem [OSS11, KKO20b], we write $\frac{2x}{k}$ as a $\lambda$-uniform spanning tree distribution, $\mu_\lambda$. Then, we independently sample $k/2$ spanning trees $T_1, \dots, T_{k/2}$ from $\mu_\lambda$. (If $k$ is odd, we sample $\lceil k/2 \rceil$ trees; the bound remains unchanged relative to the analysis we give below, as the expected cost of one extra tree is $O(c(x)/k)$.) It follows that the multi-set union $D = T_1 \cup \dots \cup T_{k/2}$ has the same expected number of edges across every cut as $x$, and due to properties of $\lambda$-uniform spanning tree distributions it is concentrated around its mean. Unlike the independent rounding procedure, $D$ has at least $k/2$ edges across each cut with probability 1. This implies that the number of "bad" cuts of $D$, i.e. those of size strictly less than $k$, is at most $\frac{k}{2}(n-1)$ (with probability 1). This is because any tree $T_i$ has strictly less than 2 edges in exactly $n-1$ "tree cuts," and a cut lying on no tree cut must have at least $2 \cdot \frac{k}{2} = k$ edges in $D$.

We divide these potentially bad cuts into two types: (i) cuts $\delta(S)$ such that $|D \cap \delta(S)| \ge (1-\alpha)k$, and (ii) cuts $\delta(S)$ where $|D \cap \delta(S)| < (1-\alpha)k$, for some parameter $\alpha > 0$ chosen later. We fix all cuts of type (i) by adding $\lceil \alpha k \rceil$ copies of the minimum spanning tree of $G$. To fix cuts of type (ii), we employ the following procedure: for any tree $T_i$ where $|T_i \cap \delta(S)| = 1$ and $\delta(S)$ is of type (ii), we add one extra copy of the unique edge of $T_i$ in $\delta(S)$. To bound the expected cost of our rounded solution, we use the concentration property of $\lambda$-uniform trees on edges of $\delta(S)$ to show that the probability any fixed cut is of type (ii) is exponentially small in $k$ (of the order $e^{-\alpha^2 k/2}$), even if we condition on $e \in T_i$ for a single tree $T_i$.
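To make the concentration claim concrete (this calculation is ours, using Theorem 2.6 below and ignoring the small loss in the marginals from Theorem 2.2): for any fixed cut $C$ of $G$ we have $x(C) \ge k$, so $\mathbb{E}[|D \cap C|] = \frac{k}{2} \cdot \frac{2}{k}\, x(C) = x(C) \ge k$; since $|D \cap C|$ is a sum of $k/2$ independent Bernoulli-Sum random variables (Lemma 2.5 and Fact 2.4), Theorem 2.6 gives $\Pr[|D \cap C| < (1-\alpha)k] \le e^{-\alpha^2 k/2}$.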

1.2 Algorithm

For two (multi-)sets of edges $A$ and $B$, we write $A \cup B$ to denote the multi-set union of $A$ and $B$, allowing multiple edges. Note that we always have $|A \cup B| = |A| + |B|$.

Let $x$ be an optimal solution of LP (1). We expand the graph $G$ to a graph $G' = (V', E')$ by picking an arbitrary vertex $v \in V$, splitting it into two nodes $v_1$ and $v_2$, and then, for every edge $e = \{u, v\}$ incident to $v$, assigning fraction $x_e/2$ to each of the two edges $\{u, v_1\}$ and $\{u, v_2\}$ in $G'$. Call this expanded graph $G'$, its edge set $E'$, and the resulting fractional solution $x'$, where $x$ and $x'$ are identical on all other edges. (Note that each of $v_1$ and $v_2$ now has fractional degree $k/2$ in $x'$.) In Fact 2.1 below, we show that $\frac{2x'}{k}$ is in the spanning tree polytope for the graph $G'$. For ease of exposition, the algorithm is described as running on $G'$ (and spanning trees of $G'$, each of which corresponds to a 1-tree in $G$, that is, a tree plus an edge), which has the same edge set as $G$ (when $v_1$ and $v_2$ are identified).
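As a sanity check on why the split is needed (our own remark): before splitting, the vector $\frac{2x}{k}$ on $G$ has total mass $\sum_{e \in E} \frac{2x_e}{k} = \frac{2}{k} \cdot \frac{1}{2}\sum_{u \in V} x(\delta(u)) = \frac{2}{k} \cdot \frac{nk}{2} = n$, one more than the value $|V| - 1 = n - 1$ forced by the spanning tree polytope of $G$; after the split, the same mass equals $|V'| - 1 = n$, which is exactly what Fact 2.1 verifies.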

Our algorithm is as follows:

1:Let $x'$ be an optimum solution of (1) extended to the graph $G'$ as described above.
2:Find weights $\lambda: E' \to \mathbb{R}_{\ge 0}$ such that for any $e \in E'$, $\Pr_{T \sim \mu_\lambda}[e \in T] \simeq \frac{2x'_e}{k}$, which exist by Theorem 2.2.
3:Sample $k/2$ spanning trees $T_1, \dots, T_{k/2} \sim \mu_\lambda$ (in $G'$) independently and let $D = T_1 \cup \dots \cup T_{k/2}$.
4:Let $D'$ be $D$ plus $\lceil \alpha k \rceil$ copies of the MST of $G'$. $\alpha$ is a parameter we choose later.
5:for $i = 1, \dots, k/2$ and $e \in T_i$ do
6:     if $v_1$ and $v_2$ are on the same side of the cut $C_{T_i,e}$ and $|D \cap C_{T_i,e}| < (1-\alpha)k$ then
7:         $D' \leftarrow D' \cup \{e\}$.
8:     end if
9:end for
10:Return $D'$.
Algorithm 1 An Approximation Algorithm for $k$-ECSM
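For concreteness, here is an illustrative sketch (ours, not the authors' code) of the combinatorial steps in lines 4-10, taking the $k/2$ sampled spanning trees of $G'$ as input; sampling from the $\lambda$-uniform distribution in lines 2-3 is treated as a black box, and the function name, the "cost" attribute, and the data layout are hypothetical.

import math
from collections import Counter
import networkx as nx

def round_trees(Gp, trees, k, alpha, v1, v2, cost="cost"):
    # D is the multi-set union of the sampled trees (line 3); edges are keyed as frozensets.
    D = Counter(frozenset(e) for T in trees for e in T.edges())

    # Line 4: D' = D plus ceil(alpha*k) copies of a minimum spanning tree of G'.
    Dp = Counter(D)
    mst = nx.minimum_spanning_tree(Gp, weight=cost)
    for e in mst.edges():
        Dp[frozenset(e)] += math.ceil(alpha * k)

    # Lines 5-9: for every tree edge e, examine its tree cut C_{T,e}.
    for T in trees:
        for e in T.edges():
            # One side S of the cut obtained by deleting e from T.
            S = nx.node_connected_component(nx.restricted_view(T, [], [e]), e[0])
            if (v1 in S) != (v2 in S):
                continue                      # cut separates v1 and v2: not a cut of G, skip
            crossing = sum(m for edge, m in D.items() if len(edge & S) == 1)
            if crossing < (1 - alpha) * k:    # line 6: the cut is deficient in D
                Dp[frozenset(e)] += 1         # line 7: add one extra copy of e
    return Dp                                 # line 10: D' as an edge -> multiplicity map

Here D and Dp play the roles of $D$ and $D'$ above, and the returned Counter maps each edge of $G'$ to its multiplicity in the output.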

2 Preliminaries

For any (multi-)set of edges $A$ and a set of edges $B \subseteq E$, we write $A \cap B$ for the multi-set of edges of $A$ that lie in $B$ (counted with multiplicity), and for a vector $x \in \mathbb{R}^E$ we write $x(B) := \sum_{e \in B} x_e$. Also, for any edge weight function $c: E \to \mathbb{R}_{\ge 0}$, we write $c(A) := \sum_{e \in A} c(e)$ (again with multiplicity) and $c(x) := \sum_{e \in E} c(e)\, x_e$.

For any spanning tree $T$ of $G'$ and any edge $e \in T$, we write $C_{T,e}$ to denote the set of edges in the unique cut obtained by deleting $e$ from $T$ (i.e., $C_{T,e} = \delta(S)$ where $S$ is one of the two components of $T \setminus \{e\}$). Of particular interest to us below will be $C_{T_i,e}$ where $e$ is an edge in $T_i$.

2.1 Random Spanning Trees

Edmonds [Edm70] gave the following description for the convex hull of the spanning trees of any graph $G = (V, E)$, known as the spanning tree polytope; below, $E(S)$ denotes the set of edges with both endpoints in $S$.

(2)      $z(E) = |V| - 1$,
         $z(E(S)) \le |S| - 1$,   for all $\emptyset \neq S \subsetneq V$,
         $z_e \ge 0$,             for all $e \in E$.

Edmonds [Edm70] also proved that the extreme point solutions of this polytope are the characteristic vectors of the spanning trees of $G$.
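For a small concrete example (ours, for intuition): in the triangle $K_3$, the point with $z_e = 2/3$ on every edge satisfies $z(E) = 2 = |V| - 1$ and $z(E(S)) = 2/3 \le |S| - 1$ for every two-vertex set $S$, so it lies in the spanning tree polytope; indeed it is the uniform mixture of the three spanning trees.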

Fact 2.1 ([KKO20b]).

Let $x$ be the optimal solution of LP (1) and $x'$ its extension to $G' = (V', E')$ as described above. Then $\frac{2x'}{k}$ is in the spanning tree polytope (2) of $G'$.

Proof.

Let $z = \frac{2x'}{k}$, and recall that $x'(\delta(u)) = k$ for every $u \in V' \setminus \{v_1, v_2\}$ while $x'(\delta(v_1)) = x'(\delta(v_2)) = \frac{k}{2}$. For any set $\emptyset \neq S \subsetneq V'$, we have $z(E'(S)) = \frac{1}{k}\big(\sum_{u \in S} x'(\delta(u)) - x'(\delta(S))\big)$. If $S$ contains both of $v_1, v_2$ or neither of them, then $\delta(S)$ corresponds to a cut of $G$, so $x'(\delta(S)) \ge k$ and $\sum_{u \in S} x'(\delta(u)) \le k|S|$, so $z(E'(S)) \le |S| - 1$. Finally, if $S$ contains exactly one of $v_1, v_2$, say $v_1$, then $\sum_{u \in S} x'(\delta(u)) = k(|S| - 1) + \frac{k}{2}$, and $x'(\delta(S)) \ge x'(\delta(S \cup \{v_2\})) - x'(\delta(v_2)) \ge \frac{k}{2}$ (this also holds when $S \cup \{v_2\} = V'$, since then $\delta(S) = \delta(v_2)$). Thus, $z(E'(S)) \le |S| - 1$ in this case as well. The claim follows because $z(E') = \frac{1}{k}\sum_{u \in V'} x'(\delta(u)) = \frac{1}{k}\big((|V'| - 2)k + k\big) = |V'| - 1$. ∎

Given nonnegative edge weights $\lambda: E \to \mathbb{R}_{\ge 0}$, we say a distribution $\mu_\lambda$ over spanning trees of $G$ is $\lambda$-uniform if for any spanning tree $T$,
$\Pr_{\mu_\lambda}[T] = \frac{\prod_{e \in T} \lambda_e}{\sum_{T'} \prod_{e \in T'} \lambda_e},$
where the sum in the denominator is over all spanning trees $T'$ of $G$.
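For instance (a toy example of ours): in the triangle with edges $e_1, e_2, e_3$ and weights $\lambda_{e_1} = \lambda_{e_2} = 1$ and $\lambda_{e_3} = 2$, the three spanning trees $\{e_1, e_2\}, \{e_1, e_3\}, \{e_2, e_3\}$ get probabilities proportional to $1, 2, 2$, i.e., $1/5, 2/5, 2/5$, so the marginals are $\Pr[e_1 \in T] = \Pr[e_2 \in T] = 3/5$ and $\Pr[e_3 \in T] = 4/5$.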

Theorem 2.2 ([AGMOS17]).

There is a polynomial-time algorithm that, given a connected graph $G = (V, E)$, a point $z$ in the spanning tree polytope (2) of $G$, and $\varepsilon > 0$, returns $\lambda: E \to \mathbb{R}_{\ge 0}$ such that the corresponding $\lambda$-uniform spanning tree distribution $\mu_\lambda$ satisfies

$\sum_{T \in \mathcal{T}: T \ni e} \Pr_{\mu_\lambda}[T] \le (1 + \varepsilon)\, z_e, \qquad \forall e \in E,$

i.e., the marginals are approximately preserved. In the above, $\mathcal{T}$ is the set of all spanning trees of $G$.

2.2 Bernoulli-Sum Random Variables

Definition 2.3 (Bernoulli-Sum Random Variable).

We say $X$ is a Bernoulli-Sum random variable, and write $X \sim BS(q)$, if it has the law of a sum of independent Bernoullis, say $X = \sum_{i=1}^{m} B_i$ with $B_i \sim \mathrm{Bernoulli}(p_i)$ for some $p_1, \dots, p_m \in [0, 1]$, with $\sum_{i=1}^{m} p_i = q$.

Fact 2.4.

If $X \sim BS(q_X)$ and $Y \sim BS(q_Y)$ are two independent Bernoulli-Sum random variables, then $X + Y \sim BS(q_X + q_Y)$.

Lemma 2.5 ([BBL09, Pit97]).

Given $G = (V, E)$ and $\lambda: E \to \mathbb{R}_{\ge 0}$, let $\mu_\lambda$ be the $\lambda$-uniform spanning tree distribution of $G$. Let $T$ be a sample from $\mu_\lambda$. Then for any fixed $F \subseteq E$, the random variable $|T \cap F|$ is distributed as $BS\big(\mathbb{E}[|T \cap F|]\big)$.
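Continuing the toy triangle example above (ours): for $F = \{e_1, e_2\}$ we have $|T \cap F| = 2$ with probability $1/5$ and $|T \cap F| = 1$ with probability $4/5$, which is exactly the law of $B(1) + B(1/5)$, a $BS(6/5)$ random variable with $6/5 = \mathbb{E}[|T \cap F|]$.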

Theorem 2.6 (Multiplicative Chernoff-Hoeffding Bound for BS Random Variables).

Let $X \sim BS(q)$ be a Bernoulli-Sum random variable. Then, for any $\delta \in (0, 1)$,
$\Pr[X \le (1 - \delta)q] \le e^{-\delta^2 q/2} \qquad \text{and} \qquad \Pr[X \ge (1 + \delta)q] \le e^{-\delta^2 q/3}.$

3 Analysis of the Algorithm

In this section we prove Theorem 1.2. We first observe that the cuts of $G$ are precisely the cuts of $G'$ that have $v_1$ and $v_2$ on the same side of the cut, and for any such cut the set of edges crossing the cut in $G$ and in $G'$ is the same (once $v_1$ and $v_2$ are identified). We begin by showing that the output of Algorithm 1 is $k$-edge connected (in $G$) with probability 1.

Lemma 3.1 (-Connectivity of the Output).

For any $\alpha > 0$, the output $D'$ of Algorithm 1 is a $k$-edge connected multi-subgraph of $G$.

Proof.

Fix spanning trees $T_1, \dots, T_{k/2}$ in $G'$ and let $C = \delta(S)$ for some $\emptyset \neq S \subsetneq V'$, where $v_1$ and $v_2$ are on the same side of the cut. We show that $|D' \cap C| \ge k$. If $|D \cap C| \ge (1-\alpha)k$, then since $D'$ has $\lceil \alpha k \rceil$ copies of the minimum spanning tree, $|D' \cap C| \ge (1-\alpha)k + \alpha k = k$ and we are done. Otherwise $|D \cap C| < (1-\alpha)k$. Then, we know that for any tree $T_i$, either $|T_i \cap C| \ge 2$ or $|T_i \cap C| = 1$. If $|T_i \cap C| = 1$, then $C = C_{T_i,e}$ for the unique edge $e \in T_i \cap C$; since $|D \cap C| < (1-\alpha)k$ and $v_1, v_2$ lie on the same side of $C$, $D'$ has one extra copy of the unique edge of $T_i$ in $C$ (added in line 7 of Algorithm 1). Therefore, including those cases where an extra copy of the edge is added, each $T_i$ contributes at least two edges to $D' \cap C$, so $|D' \cap C| \ge 2 \cdot \frac{k}{2} = k$ as desired. ∎

Lemma 3.2.

For any $1 \le i \le k/2$, any $e \in E'$, and any $\alpha$ with $\frac{1}{k} < \alpha < 1$,

$\Pr\big[\, e \in T_i,\ v_1, v_2 \text{ are on the same side of } C_{T_i,e},\ \text{and } |D \cap C_{T_i,e}| < (1-\alpha)k \,\big] \;\le\; \Pr[e \in T_i] \cdot e^{-\frac{(\alpha k - 1)^2}{2(k-2)}},$

where the randomness is over the $k/2$ spanning trees $T_1, \dots, T_{k/2}$ independently sampled from $\mu_\lambda$.

Proof.

Condition on a tree $T_i$ such that $e \in T_i$ and $C_{T_i,e} = C$ (for a fixed cut $C$ that has $v_1$ and $v_2$ on the same side, so that $x'(C) \ge k$).

By Lemma 2.5, for any $j \neq i$, $|T_j \cap C|$ is a Bernoulli-Sum random variable, with $\mathbb{E}[|T_j \cap C|] = \frac{2}{k} x'(C) \ge 2$ (ignoring the small loss in the marginals from Theorem 2.2). Also, by definition, $|T_j \cap C| \ge 1$ (with probability 1). Since $T_1, \dots, T_{k/2}$ are independently chosen, by Fact 2.4 the random variable $X = \sum_{j \neq i} |T_j \cap C|$ is distributed as $BS(q)$ for $q = \mathbb{E}[X] \ge 2\big(\frac{k}{2} - 1\big) = k - 2$. Since each $T_j$ has at least one edge in $C$, $X \ge \frac{k}{2} - 1$ with probability 1. Note that $|D \cap C| = X + 1$, since $e$ is the unique edge of $T_i$ in $C$. So, by Theorem 2.6, with $\delta = 1 - \frac{(1-\alpha)k - 1}{q}$, when $\alpha k > 1$,

$\Pr\big[\,|D \cap C| < (1-\alpha)k\,\big] = \Pr\big[X < (1-\alpha)k - 1\big] \le e^{-\delta^2 q/2} \le e^{-\frac{(\alpha k - 1)^2}{2(k-2)}},$

where the last inequality uses $q \ge k - 2$.

Averaging over all realizations of $T_i$ satisfying the required conditions proves the lemma. ∎

Proof of Theorem 1.2.

Let $x$ be an optimum solution of LP (1) and $x'$ its extension to $G'$ (note that $c(x') = c(x)$). Since the output of the algorithm is always $k$-edge connected, we just need to show that $\mathbb{E}[c(D')] \le \big(1 + \sqrt{8\ln k/k}\big)\, c(x)$. By linearity of expectation,

$\mathbb{E}[c(D)] = \sum_{i=1}^{k/2} \mathbb{E}[c(T_i)] = \frac{k}{2} \cdot c\Big(\frac{2x'}{k}\Big) = c(x),$

where for simplicity we ignored the loss in the marginals. On the other hand, since by Fact 2.1, $\frac{2x'}{k}$ is in the spanning tree polytope of $G'$, the minimum spanning tree has cost at most $c\big(\frac{2x'}{k}\big) = \frac{2}{k} c(x)$, so the $\lceil \alpha k \rceil$ copies of the MST cost at most (roughly) $2\alpha\, c(x)$. It remains to bound the expected cost of the extra edges added in the for loop. By Lemma 3.2, we have

$\sum_{i=1}^{k/2} \sum_{e \in E'} c(e) \cdot \Pr\big[\text{an extra copy of } e \text{ is added for } T_i\big] \;\le\; \sum_{i=1}^{k/2} \sum_{e \in E'} c(e)\, \Pr[e \in T_i]\, e^{-\frac{(\alpha k - 1)^2}{2(k-2)}} \;=\; c(x)\, e^{-\frac{(\alpha k - 1)^2}{2(k-2)}}.$

Putting these together we get $\mathbb{E}[c(D')] \le c(x)\Big(1 + 2\alpha + e^{-\frac{(\alpha k - 1)^2}{2(k-2)}}\Big)$, up to lower-order rounding terms. Setting $\alpha = \sqrt{2\ln k/k}$, so that $2\alpha = \sqrt{8\ln k/k}$ while the exponential term is $O(1/k)$, finishes the proof. ∎
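To spell out the arithmetic behind this choice of $\alpha$ (a rough accounting under the bounds above): with $\alpha = \sqrt{2\ln k/k}$, the MST copies contribute about $2\alpha\, c(x) = \sqrt{8\ln k/k}\; c(x)$, while the exponent satisfies $\frac{(\alpha k - 1)^2}{2(k-2)} \ge \frac{\alpha^2 k}{2} - \alpha = \ln k - \alpha$, so the for-loop term is at most $e^{\alpha}/k = O(1/k)$, a lower-order contribution.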