An Optimal Monotone Contention Resolution Scheme for Uniform and Partition Matroids

05/25/2021 ∙ by Danish Kashaev, et al. ∙ ETH Zurich

A common approach to solve a combinatorial optimization problem is to first solve a continuous relaxation and then round the fractional solution. For the latter, the framework of contention resolution schemes (or CR schemes), introduced by Chekuri, Vondrak, and Zenklusen, has become a general and successful tool. A CR scheme takes a fractional point x in a relaxation polytope, rounds each coordinate x_i independently to get a possibly non-feasible set, and then drops some elements in order to satisfy the independence constraints. Intuitively, a CR scheme is c-balanced if every element i is selected with probability at least c · x_i. It is known that general matroids admit a (1-1/e)-balanced CR scheme, and that this is (asymptotically) optimal. This is in particular true for the special case of uniform matroids of rank one. In this work, we provide a simple monotone CR scheme with a balancedness factor of 1 - e^(-k) k^k/k! for uniform matroids of rank k, and show that this is optimal. This result generalizes the 1-1/e optimal factor for the rank one (i.e. k=1) case, and improves it for any k>1. Moreover, this scheme generalizes to an optimal CR scheme for partition matroids.


1 Introduction

The relaxation and rounding framework is a broad and powerful approach to solving combinatorial optimization problems. It consists of first obtaining a suitable continuous relaxation of the discrete problem, solving it approximately, and then rounding the obtained fractional solution into an integral one while preserving the objective value as much as possible. Contention resolution schemes (or CR schemes) were proposed by Chekuri, Vondrak, and Zenklusen [3] as a general technique to round a fractional solution. While initially designed to handle submodular maximization problems under different types of constraints, CR schemes have become a powerful tool to tackle a broad class of problems. At a high level, given a fractional point $x$, the procedure first generates a random set $R(x)$ by independently including each element $i$ with probability $x_i$. It then removes some elements from $R(x)$ so that the obtained set is feasible. More formally, given a ground set $N$ and an independence family $\mathcal{I} \subseteq 2^N$, we let $P_{\mathcal{I}} \subseteq [0,1]^N$ be a relaxation polytope associated to $\mathcal{I}$. A CR scheme is then defined as follows. [CR scheme] A scheme $\pi$ is a $c$-balanced contention resolution scheme for the polytope $P_{\mathcal{I}}$ if, for every $x \in P_{\mathcal{I}}$, $\pi_x$ is an algorithm that takes as input a set $A \subseteq N$ and outputs an independent set $\pi_x(A) \in \mathcal{I}$ contained in $A$ such that

$\Pr\big[i \in \pi_x(R(x)) \mid i \in R(x)\big] \geq c$ for every $i \in N$.

Moreover, a contention resolution scheme is monotone if for any $A \subseteq B \subseteq N$ and any $i \in A$:

$\Pr[i \in \pi_x(A)] \geq \Pr[i \in \pi_x(B)]$.

Monotonicity is a desirable property for a $c$-balanced CR scheme to have, since one can then get approximation guarantees for constrained submodular maximization problems via the relaxation and rounding approach (see [3] for more details). By presenting a variety of CR schemes for different constraints, the work in [3] gives improved approximation algorithms for linear and submodular maximization problems under matroid, knapsack, and matchoid constraints, as well as their intersections. While contention resolution schemes have been applied to different types of independence families, we restrict our attention in this work to matroid constraints. A CR scheme with a balancedness of $1 - \left(1-\frac{1}{n}\right)^n$ for the uniform matroid of rank one is given in [4, 5], where it is also shown that this is optimal. That is, there is no $c$-balanced CR scheme for the uniform matroid of rank one with $c > 1 - \left(1-\frac{1}{n}\right)^n$. The work of [3] extends this result by providing a $\left(1-\frac{1}{e}\right)$-balanced CR scheme for any matroid. This existence proof can then be turned into an efficient algorithm whose balancedness is asymptotically optimal. However, this algorithm uses random sampling and lacks simplicity, which is why another simpler CR scheme for a general matroid with a worse balancedness is also presented there. There has also been work on CR schemes for different types of independence families [2, 7], and on settings where the elements of the random set arrive in an online fashion [1, 6, 8]. To the best of our knowledge, however, not much work has been done in the direction of finding classes of matroids for which the balancedness factor can be improved. This is the main question studied in this work.
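To make the rounding framework concrete, the following minimal Python sketch shows the two steps described above: independent rounding of $x$, followed by pruning via a scheme. The names (`round_independently`, `estimate_selection_probabilities`, the oracle `pi`) are our own illustrations and not from [3]; any concrete scheme can be plugged in as `pi`.

```python
import random

def round_independently(x):
    """Sample R(x): include each element i independently with probability x[i]."""
    return {i for i, xi in enumerate(x) if random.random() < xi}

def estimate_selection_probabilities(x, pi, num_samples=100_000):
    """Monte Carlo estimate of Pr[i in pi_x(R(x))] for every element i.

    A c-balanced scheme guarantees that this probability is at least c * x[i].
    `pi` is any map (x, A) -> feasible subset of A (a placeholder oracle here).
    """
    counts = [0] * len(x)
    for _ in range(num_samples):
        for i in pi(x, round_independently(x)):
            counts[i] += 1
    return [count / num_samples for count in counts]
```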

1.1 Our contributions

Our main result is an optimal monotone CR scheme for the uniform matroid of rank $k$ on $n$ elements, with a balancedness of $1 - \binom{n-1}{k}\left(\frac{k}{n}\right)^k\left(1-\frac{k}{n}\right)^{n-k}$. This result is encapsulated in Theorem 2 (balancedness), Theorem 2.5 (optimality), and Theorem 2.6 (monotonicity). This generalizes the balancedness factor of $1 - \left(1-\frac{1}{n}\right)^n$ given in [4, 5] for the rank one (i.e. $k=1$) case. Moreover, for a fixed value of $k$, we have $\lim_{n \to \infty}\left(1 - \binom{n-1}{k}\left(\frac{k}{n}\right)^k\left(1-\frac{k}{n}\right)^{n-k}\right) = 1 - e^{-k}\frac{k^k}{k!}$. The latter term is strictly better than $1 - \frac{1}{e}$ for any value of $k > 1$. In addition, our scheme is in a sense simpler than the one provided in [4, 5], since our algorithm simply consists of assigning probabilities to each base contained in the input set, and then picking a base according to that probability distribution. We also discuss how the above CR scheme for uniform matroids naturally generalizes to partition matroids. If we denote by $c_k$ the optimal balancedness for uniform matroids of rank $k$, then the balancedness we get for partition matroids with blocks $N_1, \ldots, N_m$ and capacities $k_1, \ldots, k_m$ is $\min_{j \in [m]} c_{k_j}$, and this is optimal.
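As a small numerical illustration of the last statement, the sketch below evaluates $c_k = 1 - e^{-k}k^k/k!$ per block and takes the minimum over the blocks' capacities; the helper names are ours, and the min-over-blocks combination simply mirrors the sentence above.

```python
from math import exp, factorial

def uniform_balancedness(k: int) -> float:
    """Optimal balancedness c_k = 1 - e^{-k} k^k / k! for the rank-k uniform matroid."""
    return 1.0 - exp(-k) * k**k / factorial(k)

def partition_balancedness(capacities) -> float:
    """Balancedness for a partition matroid: the block with the smallest capacity dominates."""
    return min(uniform_balancedness(k) for k in capacities)

print(uniform_balancedness(1))            # ~0.632, i.e. 1 - 1/e
print(partition_balancedness([1, 2, 3]))  # ~0.632, limited by the rank-1 block
```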

1.2 Preliminaries on matroids

We provide in this section a brief background on matroids. A matroid is a pair $M = (N, \mathcal{I})$ consisting of a ground set $N$ and a non-empty family $\mathcal{I} \subseteq 2^N$ of independent sets which satisfy:

  • If $A \in \mathcal{I}$ and $B \subseteq A$, then $B \in \mathcal{I}$.

  • If $A, B \in \mathcal{I}$ with $|A| < |B|$, then there exists $b \in B \setminus A$ such that $A \cup \{b\} \in \mathcal{I}$.

Given a matroid $M = (N, \mathcal{I})$, its rank function is defined as $r(S) = \max\{|A| : A \subseteq S,\ A \in \mathcal{I}\}$ for $S \subseteq N$. Its matroid polytope is given by $P_M = \{x \in \mathbb{R}^N_{\geq 0} : x(S) \leq r(S)\ \forall S \subseteq N\}$, where $x(S) = \sum_{i \in S} x_i$. The next two classes of matroids are of special interest for this work. [Uniform matroid] The uniform matroid of rank $k$ on $n$ elements is the matroid whose independent sets are all the subsets of the ground set of cardinality at most $k$. That is, $\mathcal{I} = \{A \subseteq N : |A| \leq k\}$. [Partition matroid] Partition matroids are a generalization of uniform matroids. Suppose the ground set is partitioned into $m$ blocks $N = N_1 \cup \dots \cup N_m$ and each block $N_j$ has a certain capacity $k_j$. Then $\mathcal{I} = \{A \subseteq N : |A \cap N_j| \leq k_j\ \forall j \in [m]\}$. The uniform matroid is simply a partition matroid with one block $N_1 = N$ and one capacity $k_1 = k$. Moreover, the restriction of a partition matroid to each block $N_j$ is a uniform matroid of rank $k_j$ on the ground set $N_j$.
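These definitions translate directly into independence oracles. The short Python sketch below (our own illustration, not from the paper) checks membership in $\mathcal{I}$ for a uniform and a partition matroid.

```python
def is_independent_uniform(A: set, k: int) -> bool:
    """Uniform matroid of rank k: A is independent iff |A| <= k."""
    return len(A) <= k

def is_independent_partition(A: set, blocks, capacities) -> bool:
    """Partition matroid: A may contain at most capacities[j] elements of block N_j."""
    return all(len(A & set(N_j)) <= k_j for N_j, k_j in zip(blocks, capacities))

# A uniform matroid is the special case of a single block with one capacity.
blocks, capacities = [range(6)], [2]
assert is_independent_partition({0, 3}, blocks, capacities)
assert not is_independent_partition({0, 3, 5}, blocks, capacities)
```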

2 An optimal monotone contention resolution scheme for uniform matroids

We assume throughout this whole section that $M = (N, \mathcal{I})$ is the uniform matroid of rank $k$ on $n$ elements and that $k < n$. For any point $x \in P_M$, we let $R(x)$ be the random set satisfying $\Pr[i \in R(x)] = x_i$ independently for each coordinate. If the size of $R(x)$ is at most $k$, then $R(x)$ is already an independent set and the CR scheme returns it. If however $|R(x)| > k$, then the CR scheme returns a random subset of $k$ elements of $R(x)$ by making the probabilities of each subset of $k$ elements depend linearly on the coordinates of the original point $x$. More precisely, given an arbitrary $x \in P_M$, for any set $A \subseteq N$ with $|A| > k$ and any subset $B \subseteq A$ of size $k$, we define

(1)

where we use the following notation: . We then define a randomized CR scheme for $M$ as follows. [CR scheme for the uniform matroid] We are given a point $x \in P_M$ and a set $A \subseteq N$.

  • If $|A| \leq k$, then $\pi_x(A) = A$.

  • If $|A| > k$, then, for every $B \subseteq A$ with $|B| = k$, we set $\pi_x(A) = B$ with the probability defined in (1).
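The structure of this scheme is short enough to state in code. In the sketch below (our own illustration), `subset_distribution` stands in for the distribution defined in (1), which is not reproduced here; the uniform placeholder makes the code run, but only the distribution (1) yields the balancedness claimed in this paper.

```python
import random
from itertools import combinations

def cr_scheme_uniform(x, A, k, subset_distribution):
    """CR scheme for the rank-k uniform matroid.

    If |A| <= k, the set A is already independent and is returned unchanged.
    Otherwise a k-subset B of A is sampled from `subset_distribution(x, A, k)`,
    which must return a dict mapping each k-subset (frozenset) to its probability.
    """
    if len(A) <= k:
        return set(A)
    dist = subset_distribution(x, A, k)
    bases, probs = zip(*dist.items())
    return set(random.choices(bases, weights=probs)[0])

def uniform_over_bases(x, A, k):
    """Placeholder distribution (NOT definition (1)): uniform over all k-subsets of A."""
    bases = [frozenset(B) for B in combinations(sorted(A), k)]
    return {B: 1.0 / len(bases) for B in bases}
```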

One can check that this CR scheme is well-defined, i.e. that (1) defines a valid probability distribution. The above procedure is a well-defined CR scheme. That is, for every $x \in P_M$ and every $A \subseteq N$ with $|A| > k$, the probabilities defined in (1) are non-negative and sum to one.

Proof.

Since and , it directly follows from the definition (1) that . In order to prove the second claim, we need the equality

(2)

that we derive the following way:

Hence,

We now state our main result. Algorithm 1 is a $c$-balanced CR scheme for the uniform matroid of rank $k$ on $n$ elements, where $c = 1 - \binom{n-1}{k}\left(\frac{k}{n}\right)^k\left(1-\frac{k}{n}\right)^{n-k}$. Since we use the above expression often throughout this section, we denote it by

$\beta_{k,n} := 1 - \binom{n-1}{k}\left(\frac{k}{n}\right)^k\left(1-\frac{k}{n}\right)^{n-k}.$

We note that when $k = 1$ we get $\beta_{1,n} = 1 - \left(1 - \frac{1}{n}\right)^n$, which matches the optimal balancedness for $k = 1$ provided in [4, 5]. This converges to $1 - \frac{1}{e}$ when $n$ gets large. In addition, the balancedness factor improves as $k$ grows. Indeed, our next result shows that for any $k > 1$, the balancedness is asymptotically (as $n$ grows) strictly better than $1 - \frac{1}{e}$.

 n \ k |   1      2      3      4      9     99    999
     2 | 0.75
     3 | 0.704  0.852
     4 | 0.684  0.813  0.895
     5 | 0.672  0.793  0.862  0.918
    10 | 0.651  0.759  0.813  0.850  0.961
   100 | 0.633  0.732  0.779  0.810  0.874  0.996
  1000 | 0.632  0.730  0.776  0.809  0.869  0.962  0.999
Table 1: Numerical values of the balancedness $\beta_{k,n}$ of Theorem 2, for several ranks $k$ (columns) and ground set sizes $n$ (rows)
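The entries of Table 1 can be reproduced directly from the expression in Theorem 2, together with its limit $1 - e^{-k}k^k/k!$; the short script below is our own numerical check.

```python
from math import comb, exp, factorial

def balancedness(n: int, k: int) -> float:
    """Balancedness of Theorem 2 for the rank-k uniform matroid on n elements."""
    return 1.0 - comb(n - 1, k) * (k / n) ** k * (1 - k / n) ** (n - k)

def limit_balancedness(k: int) -> float:
    """Limit as n tends to infinity: 1 - e^{-k} k^k / k!."""
    return 1.0 - exp(-k) * k**k / factorial(k)

print(f"{balancedness(2, 1):.3f}")      # 0.750, first entry of Table 1
print(f"{balancedness(10, 4):.3f}")     # 0.850, cf. Table 1
print(f"{balancedness(1000, 1):.3f}")   # 0.632, close to 1 - 1/e
print(f"{limit_balancedness(2):.3f}")   # 0.729, the n -> infinity value for k = 2
```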

For a fixed $k$, the limit of the balancedness $\beta_{k,n}$ as $n$ tends to infinity is

$1 - e^{-k}\,\frac{k^k}{k!} \;\geq\; 1 - \frac{1}{\sqrt{2\pi k}}.$

Proof.

We use Stirling’s approximation, which states that:

$k! \;\sim\; \sqrt{2\pi k}\left(\frac{k}{e}\right)^k,$   (3)

which means that these two quantities are asymptotic, i.e. their ratio tends to 1 as $k$ tends to infinity; moreover, $k! \geq \sqrt{2\pi k}\left(\frac{k}{e}\right)^k$ holds for every $k \geq 1$. By (3), we get

$e^{-k}\,\frac{k^k}{k!} \;\leq\; \frac{1}{\sqrt{2\pi k}}.$   (4)

Hence,

$\lim_{n \to \infty} \beta_{k,n} \;=\; 1 - \lim_{n \to \infty} \binom{n-1}{k}\left(\frac{k}{n}\right)^k\left(1-\frac{k}{n}\right)^{n-k} \;=\; 1 - e^{-k}\,\frac{k^k}{k!} \;\geq\; 1 - \frac{1}{\sqrt{2\pi k}},$

where the second equality follows from $\binom{n-1}{k}\left(\frac{k}{n}\right)^k \to \frac{k^k}{k!}$ and $\left(1-\frac{k}{n}\right)^{n-k} \to e^{-k}$ as $n \to \infty$, and the final inequality from (4). ∎
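A quick numerical sanity check of this limit (again our own): for a fixed $k$, the balancedness of Theorem 2 decreases towards $1 - e^{-k}k^k/k!$ as $n$ grows, and for $k \geq 2$ it stays strictly above $1 - 1/e$.

```python
from math import comb, exp, factorial

k = 3
limit = 1.0 - exp(-k) * k**k / factorial(k)   # 1 - e^{-k} k^k / k!, about 0.776 for k = 3
for n in (10, 100, 1000, 10_000):
    beta = 1.0 - comb(n - 1, k) * (k / n) ** k * (1 - k / n) ** (n - k)
    print(n, f"{beta:.4f}")                   # decreases towards `limit`
print(f"limit {limit:.4f} > 1 - 1/e = {1 - 1/exp(1):.4f}")
```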

2.1 Outline of the proof of Theorem 2

Throughout this whole section on uniform matroids, we fix an arbitrary element $i \in N$. In order to prove Theorem 2, we need to show that for every $x \in P_M$ with $x_i > 0$ we have $\Pr[i \in \pi_x(R(x))] \geq \beta_{k,n}\, x_i$. This is equivalent to showing that for every $x \in P_M$ with $x_i > 0$ we have

$\Pr\big[i \in \pi_x(R(x)) \mid i \in R(x)\big] \;\geq\; \beta_{k,n}.$   (5)

We now introduce some definitions and notation that will be needed. For any , let , where is the random set obtained by rounding each coordinate of in the reduced ground set to one independently with probability . Note that . We do not write the dependence on for simplicity of notation. We mainly work on the set . For this reason, we define . Note that ; we use this often in our arguments. With the above notation we can rewrite the probability in (5) in a more convenient form. For any satisfying , we get

The obtained expression is a multivariable function of the variables , since and depend on those variables as well. We denote it as follows.

(6)

In order to prove Theorem 2, it is then enough to show the following. Let and be as defined above. Then . Moreover, the maximum is attained at the point . Indeed, Theorem 6 implies that for every we have , with equality holding if . In particular, for any satisfying , we get

which proves Theorem 2 by (5). Notice that for the conditional probability to be well defined, we need the assumption that $x_i > 0$. However, in our case the expression is simply a multivariable polynomial function of the variables and is thus also defined when $x_i = 0$. We may therefore drop the conditional probability and simply treat Theorem 6 as a multivariable maximization problem over a bounded domain. We now outline the proof of Theorem 6. We first maximize over the variable , and get an expression depending only on the -variables in . This is done in Section 2.2. We then maximize the above expression over the unit hypercube (see Section 2.3). Finally, we combine the above two results to show that the maximum in Theorem 6 is attained at the point for every ; this is done in Section 2.4.

2.2 Maximizing over the variable

The matroid polytope of is given by . We define a new polytope by removing the constraint from :

Clearly, . We now present the main result of this section, where we consider the maximization problem and maximize over the variable while keeping all the other variables ( for every ) fixed to get an expression depending only on the -variables in . For every ,

(7)

Moreover, equality holds when .

Proof.
(8)

We now maximize this expression with respect to the variable over while keeping all the other variables fixed. Since this is a linear function of and the coefficient of is positive, the maximal value is attained at in order to satisfy the constraint . Note that this was the reason for the definition of , since might not necessarily be smaller than 1. We thus plug this value into (2.2) and write an inequality to emphasize that the derivation holds for any .

(9)

Notice the only part which depends on in the last summation is . By using Equation (2) and noticing that , we get

(10)

Now, note that by definition of the term , we have

(11)

We compute the middle term in (10) by plugging in (11) and the change of variable .

(12)

We finally plug (2.2) into (10) and use to get

Notice that the only place where we used an inequality was from (2.2) to (9). Hence equality holds when . ∎

2.3 Maximizing

In this section, we turn our attention to maximizing the right-hand side expression in (7) over the unit hypercube . In fact, we work with the following function instead:

Note that the above function is simply the right-hand side of (7) multiplied by . Hence, maximizing either one is equivalent. Let , so that and . Then the function attains its maximum over the unit hypercube at the point with value

For simplicity, we denote this maximum value by:

Notice that for any . Hence Theorem 2.3 holds for and as well. Moreover, the function also satisfies an interesting duality property: . In order to prove Theorem 2.3, we first show that has a unique extremum (in particular a local maximum) in the interior of at the point — see Proposition 2.3. We then use induction on to show that any point in the boundary of has a lower function value than . Since our function is continuous over a compact domain, it attains a maximum. By the two arguments above, that maximum then has to be attained at . That is, the unique extremum cannot be a local minimum or a saddle point: otherwise, since there are no other extrema in the interior and the function is continuous, the function would increase in some direction, leading to a boundary point with a higher value. For completeness, we present in the appendix another proof of local maximality that relies on the Hessian matrix. For any has a unique extremum in the interior of the unit hypercube at the point .

Figure 1: Plot of the function for two instances, (a) and (b), with the point attaining the maximum indicated in each.

For proving Proposition 2.3 we need the following lemma, the proof of which we leave to the appendix. The following holds for any :

where

The above formula actually holds for with any . We use this in Section 2.5 with . We are now able to prove Proposition 2.3.

Proof of Proposition 2.3.

Let . To find the extrema of , we want to solve . We thus first need to find

(13)

Notice that:

  • For a set such that ,

    (14)
  • For a set such that ,

    (15)

We are now able to compute (13):