# An Optimal Monotone Contention Resolution Scheme for Uniform and Partition Matroids

A common approach to solve a combinatorial optimization problem is to first solve a continuous relaxation and then round the fractional solution. For the latter, the framework of contention resolution schemes (or CR schemes), introduced by Chekuri, Vondrak, and Zenklusen, has become a general and successful tool. A CR scheme takes a fractional point x in a relaxation polytope, rounds each coordinate x_i independently to get a possibly non-feasible set, and then drops some elements in order to satisfy the independence constraints. Intuitively, a CR scheme is c-balanced if every element i is selected with probability at least c · x_i. It is known that general matroids admit a (1 - 1/e)-balanced CR scheme, and that this is (asymptotically) optimal. This is in particular true for the special case of uniform matroids of rank one. In this work, we provide a simple monotone CR scheme with a balancedness factor of 1 - e^{-k} k^k / k! for uniform matroids of rank k, and show that this is optimal. This result generalizes the 1 - 1/e optimal factor for the rank one (i.e. k = 1) case, and improves it for any k > 1. Moreover, this scheme generalizes into an optimal CR scheme for partition matroids.


## 1 Introduction

The relaxation and rounding framework is a broad and powerful approach to solve combinatorial optimization problems. It consists of first getting a suitable continuous relaxation of the discrete problem, solving it approximately, and then rounding the obtained fractional solution into an integral one while preserving the objective value as much as possible. Contention resolution schemes (or CR schemes) were proposed by Chekuri, Vondrak, and Zenklusen [3] as a general technique to round a fractional solution. While initially designed to handle submodular maximization problems under different types of constraints, CR schemes have become a powerful tool to tackle a broad class of problems. At a high level, given a fractional point x, the procedure first generates a random set R(x) by independently including each element i with probability x_i. It then removes some elements from the set so that the obtained set is feasible. More formally, given a ground set N and an independence family I ⊆ 2^N, we let P ⊆ [0,1]^N be a relaxation polytope associated to I. A CR scheme is then defined as follows. [CR scheme] A scheme π is a c-balanced contention resolution scheme for the polytope P if, for every x ∈ P, π_x is an algorithm that takes as input a set R ⊆ supp(x) and outputs an independent set π_x(R) ∈ I contained in R such that

 P[i \in \pi_x(R(x)) \mid i \in R(x)] \ge c \quad \forall i \in supp(x).

Moreover, a contention resolution scheme is monotone if:

 P[i \in \pi_x(A)] \ge P[i \in \pi_x(B)] \quad \text{for any } i \in A \subseteq B \subseteq supp(x).

Monotonicity is a desirable property for a c-balanced CR scheme to have, since one can then get approximation guarantees for constrained submodular maximization problems via the relaxation and rounding approach (see [3] for more details). By presenting a variety of CR schemes for different constraints, the work in [3] gives improved approximation algorithms for linear and submodular maximization problems under matroid, knapsack, and matchoid constraints, as well as their intersections. While contention resolution schemes have been applied to different types of independence families, we restrict our attention in this work to matroid constraints. A CR scheme with a balancedness of 1 - 1/e for the uniform matroid of rank one is given in [4, 5], where it is also shown that this is optimal. That is, there is no c-balanced CR scheme for the uniform matroid of rank one with c > 1 - 1/e. The work of [3] extends this result by providing a (1 - 1/e)-balanced CR scheme for any matroid. This existence proof can then be turned into an efficient algorithm whose balancedness is asymptotically optimal. However, this algorithm uses random sampling and lacks simplicity, which is why another, simpler CR scheme for a general matroid with a worse balancedness is also presented there. There has also been work on CR schemes for different types of independence families [2, 7], or with the elements of the random set arriving in an online fashion [1, 6, 8]. To the best of our knowledge, however, not much work has been done in the direction of finding classes of matroids where the balancedness factor can be improved. This is the main question studied in this work.

### 1.1 Our contributions

Our main result is an optimal monotone CR scheme for the uniform matroid of rank k on n elements, with a balancedness of c(k,n) := 1 - \binom{n}{k}(1 - k/n)^{n+1-k}(k/n)^k. This result is encapsulated in Theorem 2 (balancedness), Theorem 2.5 (optimality), and Theorem 2.6 (monotonicity). This generalizes the balancedness factor of 1 - 1/e given in [4, 5] for the rank one (i.e. k = 1) case. Moreover, for a fixed value of k, we have lim_{n→∞} c(k,n) = 1 - e^{-k} k^k / k!. The latter term is strictly better than 1 - 1/e for any value of k > 1. In addition, our scheme is in a sense simpler than the one provided in [4, 5], since our algorithm simply consists of assigning probabilities to each base contained in the input set, and then picking a base according to that probability distribution. We also discuss how the above CR scheme for uniform matroids naturally generalizes to partition matroids. If we denote by c(k, n) the optimal balancedness for uniform matroids of rank k on n elements, then the balancedness we get for partition matroids with blocks N_1, …, N_m and capacities k_1, …, k_m is min_{j ∈ [m]} c(k_j, |N_j|), and this is optimal.

### 1.2 Preliminaries on matroids

We provide in this section a brief background on matroids. A matroid M = (N, I) is a pair consisting of a ground set N and a non-empty family I ⊆ 2^N of independent sets which satisfy:

• If A ∈ I and B ⊆ A, then B ∈ I.

• If A, B ∈ I and |A| < |B|, then there exists i ∈ B ∖ A such that A ∪ {i} ∈ I.

Given a matroid M = (N, I), its rank function r : 2^N → Z_{≥0} is defined as r(A) := max{|B| : B ⊆ A, B ∈ I}. Its matroid polytope is given by P_I := {x ∈ R^N_{≥0} : x(A) ≤ r(A) ∀ A ⊆ N}, where x(A) := Σ_{i ∈ A} x_i. The next two classes of matroids are of special interest for this work. [Uniform matroid] The uniform matroid of rank k on n elements is the matroid whose independent sets are all the subsets of the ground set N, with |N| = n, of cardinality at most k. That is, I = {A ⊆ N : |A| ≤ k}. [Partition matroid] Partition matroids are a generalization of uniform matroids. Suppose the ground set is partitioned into m blocks, N = N_1 ∪ ⋯ ∪ N_m, and each block N_j has a certain capacity k_j. Then I = {A ⊆ N : |A ∩ N_j| ≤ k_j ∀ j ∈ [m]}. The uniform matroid of rank k is simply a partition matroid with one block N_1 = N and one capacity k_1 = k. Moreover, the restriction of a partition matroid to each block N_j is a uniform matroid of rank k_j on the ground set N_j.

## 2 An optimal monotone contention resolution scheme for uniform matroids

We assume throughout this whole section that M = (N, I) is the uniform matroid of rank k on a ground set N of n elements, and that x ∈ P_I. For any point x ∈ P_I, we let R(x) be the random set satisfying P[i ∈ R(x)] = x_i independently for each coordinate. If the size of R(x) is at most k, then R(x) is already an independent set and the CR scheme returns it. If however |R(x)| > k, then the CR scheme returns a random subset of k elements by making the probabilities of each subset of k elements depend linearly on the coordinates of the original point x. More precisely, given an arbitrary x ∈ P_I, for any set A ⊆ N with |A| > k and any subset B ⊆ A of size k, we define

 q_A(B) := \frac{1}{\binom{|A|}{|B|}} \Big( 1 + \bar{x}(A \setminus B) - \bar{x}(B) \Big), (1)

where we use the following notation: \bar{x}(T) := x(T)/|T| for any non-empty set T. We then define a randomized CR scheme for the uniform matroid as follows. [CR scheme for the uniform matroid of rank k] We are given a point x ∈ P_I and a set A ⊆ N.

• If |A| ≤ k, then π_x(A) = A.

• If |A| > k, then π_x(A) = B for every B ⊆ A with |B| = k, with probability q_A(B).
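The scheme above can be sketched in a few lines of Python (the helper names `q` and `cr_scheme` are ours, not from the paper; a dictionary maps each element to its coordinate of x):

```python
import itertools
import math
import random

def q(A, B, x):
    """q_A(B) from equation (1): the probability of keeping the subset B of A,
    where xbar(T) = x(T) / |T|."""
    xbar = lambda T: sum(x[i] for i in T) / len(T)
    return (1 + xbar(A - B) - xbar(B)) / math.comb(len(A), len(B))

def cr_scheme(A, x, k):
    """Round a (possibly too large) set A to an independent set of the
    rank-k uniform matroid by sampling a k-subset B with probability q_A(B)."""
    A = set(A)
    if len(A) <= k:
        return A
    bases = [set(B) for B in itertools.combinations(sorted(A), k)]
    weights = [q(A, B, x) for B in bases]
    return random.choices(bases, weights=weights)[0]

# a point in P_I for k = 2 (x(N) = 1.4 <= 2, all coordinates <= 1)
x = {0: 0.3, 1: 0.5, 2: 0.4, 3: 0.2}
k, A = 2, {0, 1, 2, 3}
picked = cr_scheme(A, x, k)
```

The well-definedness claim below (that q_A is a probability distribution) can be checked by summing the weights over all k-subsets of A.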

One can check that this CR scheme is well-defined, i.e. that q_A is a valid probability distribution over the k-subsets of A. The above procedure is a well-defined CR scheme. That is, for all A ⊆ N with |A| > k and all B ⊆ A with |B| = k, we have q_A(B) ≥ 0 and Σ_{B ⊆ A, |B| = k} q_A(B) = 1.

###### Proof.

Since \bar{x}(B) ≤ 1 and \bar{x}(A ∖ B) ≥ 0, it directly follows from the definition (1) that q_A(B) ≥ 0. In order to prove the second claim, we need the equality

 \sum_{B \subseteq A, |B|=k} x(B) = \binom{|A|-1}{k-1} x(A) (2)

that we derive the following way:

 \sum_{B \subseteq A, |B|=k} x(B) = \sum_{B \subseteq A, |B|=k} \sum_{i \in A} x_i \mathbf{1}\{i \in B\} = \sum_{i \in A} x_i \sum_{B \subseteq A, |B|=k} \mathbf{1}\{i \in B\}
 = \sum_{i \in A} x_i \, |\{B \subseteq A : |B| = k,\, i \in B\}| = \binom{|A|-1}{k-1} x(A).

Hence,

 \sum_{B \subseteq A, |B|=k} q_A(B) = \sum_{B \subseteq A, |B|=k} \frac{1}{\binom{|A|}{k}} \left( 1 + \frac{x(A \setminus B)}{|A|-k} - \frac{x(B)}{k} \right)
 = 1 + \frac{1}{\binom{|A|}{k}} \sum_{B \subseteq A, |B|=k} \left( \frac{x(A)}{|A|-k} - \frac{x(B)}{|A|-k} - \frac{x(B)}{k} \right)
 = 1 + \frac{1}{\binom{|A|}{k}} \sum_{B \subseteq A, |B|=k} \left( \frac{x(A)}{|A|-k} - \frac{|A| \, x(B)}{k(|A|-k)} \right)
 = 1 + \frac{1}{\binom{|A|}{k}(|A|-k)} \sum_{B \subseteq A, |B|=k} \left( x(A) - \frac{|A|}{k} x(B) \right)
 = 1 + \frac{1}{\binom{|A|}{k}(|A|-k)} \left( \binom{|A|}{k} x(A) - \binom{|A|}{k} x(A) \right) = 1. ∎
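The counting identity (2) behind this proof (each element of A lies in exactly \binom{|A|-1}{k-1} of the k-subsets) is easy to check numerically; a small sketch with arbitrary weights, variable names ours:

```python
import itertools
import math

# identity (2): summing x(B) over all k-subsets B of A counts every x_i
# exactly binom(|A|-1, k-1) times.
x = [0.3, 0.7, 0.1, 0.9, 0.4]          # arbitrary weights on A = {0, ..., 4}
A, k = range(len(x)), 3
lhs = sum(sum(x[i] for i in B) for B in itertools.combinations(A, k))
rhs = math.comb(len(x) - 1, k - 1) * sum(x)
```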

We now state our main result. Algorithm 1 is a c(k,n)-balanced CR scheme for the uniform matroid of rank k on n elements, where c(k,n) = 1 - \binom{n}{k}(1 - k/n)^{n+1-k}(k/n)^k. Since we use the above expression often throughout this section, we denote it by

 c(k,n) := 1 - \binom{n}{k} \left( 1 - \frac{k}{n} \right)^{n+1-k} \left( \frac{k}{n} \right)^k.

We note that when k = 1 we get c(1, n) = 1 - (1 - 1/n)^n, which matches the optimal balancedness for rank one provided in [4, 5]. This converges to 1 - 1/e when n gets large. In addition, the balancedness factor improves as k grows. Indeed, our next result shows that for any k > 1, the balancedness is asymptotically (as n grows) strictly better than 1 - 1/e.

For a fixed k, the limit of c(k,n) as n tends to infinity is

 \lim_{n \to \infty} c(k,n) = 1 - e^{-k} \frac{k^k}{k!}.
###### Proof.

We use Stirling’s approximation, which states that:

 n! \sim \sqrt{2 \pi n} \left( \frac{n}{e} \right)^n (3)

which means that these two quantities are asymptotic, i.e. their ratio tends to 1 as n tends to infinity. By (3), we get

 \frac{n!}{(n-k)!} \sim \sqrt{2 \pi n} \left( \frac{n}{e} \right)^n \frac{1}{\sqrt{2 \pi (n-k)}} \left( \frac{e}{n-k} \right)^{n-k} = e^{-k} \frac{n^n}{(n-k)^{n-k}} \sqrt{\frac{n}{n-k}}. (4)

Hence,

 1 - c(k,n) = \binom{n}{k} \left( 1 - \frac{k}{n} \right)^{n+1-k} \left( \frac{k}{n} \right)^k
 = \frac{k^k}{k!} \frac{n!}{(n-k)!} \frac{(n-k)^{n+1-k}}{n^{n+1}}
 \sim \frac{e^{-k} k^k}{k!} \frac{n-k}{n} \sqrt{\frac{n}{n-k}}
 = \frac{e^{-k} k^k}{k!} \sqrt{\frac{n-k}{n}}
 \sim \frac{e^{-k} k^k}{k!},

where we have used (4) from the second to the third line. ∎
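Both c(k,n) and its limit are one-liners to evaluate. The following sketch (function names `c` and `limit` are ours) checks the k = 1 special case and the convergence numerically:

```python
import math

def c(k, n):
    """The balancedness c(k, n) = 1 - binom(n, k) (1 - k/n)^(n+1-k) (k/n)^k."""
    return 1 - math.comb(n, k) * (1 - k / n) ** (n + 1 - k) * (k / n) ** k

def limit(k):
    """Its n -> infinity limit, 1 - e^{-k} k^k / k!."""
    return 1 - math.exp(-k) * k ** k / math.factorial(k)

# k = 1 recovers the classical rank-one bound 1 - (1 - 1/n)^n -> 1 - 1/e
assert abs(c(1, 100) - (1 - (1 - 1 / 100) ** 100)) < 1e-12
# for fixed n the balancedness improves with k, and c(k, n) approaches its limit
assert c(2, 1000) > c(1, 1000)
assert all(abs(c(k, 10_000) - limit(k)) < 5e-3 for k in (1, 2, 3, 5))
```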

### 2.1 Outline of the proof of Theorem 2

Throughout this whole section on uniform matroids, we fix an arbitrary element e ∈ N. In order to prove Theorem 2, we need to show that for every x ∈ P_I with x_e > 0 we have P[e ∈ π_x(R(x)) | e ∈ R(x)] ≥ c(k,n). This is equivalent to showing that for every x ∈ P_I with x_e > 0 we have

 P[e \notin \pi_x(R(x)) \mid e \in R(x)] \le 1 - c(k,n). (5)

We now introduce some definitions and notation that will be needed. We mainly work on the reduced ground set N ∖ e. For this reason, we define S := N ∖ e; note that |S| = n - 1, a fact we use often in our arguments. For any A ⊆ S, let p_S(A) := P[R_S(x) = A], where R_S(x) is the random set obtained by rounding each coordinate x_i of x in the reduced ground set S to one independently with probability x_i. Note that p_S(A) = ∏_{i ∈ A} x_i ∏_{i ∈ S ∖ A} (1 - x_i). We do not write the dependence on x for simplicity of notation. With the above notation we can rewrite the probability in (5) in a more convenient form. For any x satisfying x_e > 0, we get

 P[e \notin \pi_x(R(x)) \mid e \in R(x)] = \sum_{A \subseteq S} P[e \notin \pi_x(R(x)) \mid R_S(x) = A, e \in R(x)] \cdot P[R_S(x) = A \mid e \in R(x)]
 = \sum_{A \subseteq S, |A| \ge k} P[e \notin \pi_x(R(x)) \mid R(x) = A \cup e] \cdot p_S(A)
 = \sum_{A \subseteq S, |A| \ge k} p_S(A) \sum_{B \subseteq A, |B|=k} q_{A \cup e}(B).

The obtained expression is a multivariable function of the variables (x_i)_{i ∈ N}, since p_S(A) and q_{A∪e}(B) depend on those variables as well. We denote it as follows.

 G(x) := \sum_{A \subseteq S, |A| \ge k} p_S(A) \sum_{B \subseteq A, |B|=k} q_{A \cup e}(B). (6)

One then has that for proving Theorem 2 it is enough to show the following. Let G and P_I be as defined above. Then max_{x ∈ P_I} G(x) ≤ 1 - c(k,n). Moreover, the maximum is attained at the point x = (k/n, …, k/n). Indeed, Theorem 6 implies that for every x ∈ P_I we have G(x) ≤ 1 - c(k,n), with equality holding if x = (k/n, …, k/n). In particular, for any x ∈ P_I satisfying x_e > 0, we get

 G(x) = P[e \notin \pi_x(R(x)) \mid e \in R(x)] \le 1 - c(k,n),

which proves Theorem 2 by (5). Notice that for the conditional probability to be well defined, we need the assumption that x_e > 0. However, in our case G is simply a multivariable polynomial function of the variables (x_i)_{i ∈ N}, and is thus also defined when x_e = 0. We may therefore forget the conditional probability and simply treat Theorem 6 as a multivariable maximization problem over a bounded domain. We now state the outline of the proof for Theorem 6. We first maximize over the variable x_e, and get an expression depending only on the x-variables in S. This is done in Section 2.2. We then maximize the above expression over the unit hypercube [0,1]^S (see Section 2.3). Finally, we combine the above two results to show that the maximum in Theorem 6 is attained at the point x_i = k/n for every i ∈ N; this is done in Section 2.4.
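Since G(x) is a polynomial, it can be evaluated exactly by enumerating subsets. The sketch below (helper names `p`, `q`, `G` are ours; the element e is represented by a string label) checks the equality claim at x = (k/n, …, k/n) for a small instance:

```python
import itertools
import math

def p(S, A, x):
    """p_S(A): probability that rounding the coordinates in S yields exactly A."""
    out = 1.0
    for i in S:
        out *= x[i] if i in A else 1 - x[i]
    return out

def q(A, B, x):
    """q_A(B) from equation (1)."""
    xbar = lambda T: sum(x[i] for i in T) / len(T)
    return (1 + xbar(A - B) - xbar(B)) / math.comb(len(A), len(B))

def G(S, k, x, e="e"):
    """Equation (6): the exact probability that e is dropped, as a polynomial in x."""
    total = 0.0
    for r in range(k, len(S) + 1):
        for A in itertools.combinations(S, r):
            A = set(A)
            total += p(S, A, x) * sum(
                q(A | {e}, set(B), x) for B in itertools.combinations(sorted(A), k)
            )
    return total

n, k = 5, 2
S = list(range(n - 1))
x = {i: k / n for i in S}
x["e"] = k / n                      # the claimed maximizer (k/n, ..., k/n)
bound = math.comb(n, k) * (1 - k / n) ** (n + 1 - k) * (k / n) ** k  # 1 - c(k, n)
```

At this point `G(S, k, x)` matches `bound`, in line with the equality case of Theorem 6.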

### 2.2 Maximizing over the variable x_e

The matroid polytope of the uniform matroid of rank k is given by P_I = {x ∈ R^N_{≥0} : x(N) ≤ k and x_i ≤ 1 ∀ i ∈ N}. We define a new polytope \tilde{P}_I by removing the constraint x_e ≤ 1 from P_I:

 \tilde{P}_I := \{ x \in R^N_{\ge 0} : x(N) \le k \text{ and } x_i \le 1 \ \forall i \in S \}.

Clearly, P_I ⊆ \tilde{P}_I. We now present the main result of this section, where we consider the maximization problem max_{x ∈ \tilde{P}_I} G(x) and maximize over the variable x_e while keeping all the other variables (x_i for every i ∈ S) fixed, to get an expression depending only on the x-variables in S. For every x ∈ \tilde{P}_I,

 G(x) \le \sum_{A \subseteq S, |A| = k} p_S(A) \left( 1 - \bar{x}(A) \right). (7)

Moreover, equality holds when x_e = k - x(S).

###### Proof.
 G(x) = \sum_{A \subseteq S, |A| \ge k} p_S(A) \sum_{B \subseteq A, |B|=k} q_{A \cup e}(B)
 = \sum_{A \subseteq S, |A| \ge k} p_S(A) \sum_{B \subseteq A, |B|=k} \frac{1}{\binom{|A|+1}{k}} \left( 1 + \bar{x}((A \setminus B) \cup e) - \bar{x}(B) \right)
 = \sum_{A \subseteq S, |A| \ge k} p_S(A) \frac{1}{\binom{|A|+1}{k}} \sum_{B \subseteq A, |B|=k} \left( 1 + \frac{x(A \setminus B) + x_e}{|A|-k+1} - \frac{x(B)}{k} \right). (8)

We now maximize this expression with respect to the variable x_e over \tilde{P}_I while keeping all the other variables fixed. Since (8) is a linear function of x_e and the coefficient of x_e is positive, the maximal value is attained at x_e = k - x(S) in order to satisfy the constraint x(N) ≤ k. Note that this was the reason for the definition of \tilde{P}_I, since k - x(S) might not necessarily be smaller than 1. We thus plug in x_e = k - x(S) in (8) and write an inequality to emphasize that the derivation holds for any x ∈ \tilde{P}_I.

 G(x) \le \sum_{A \subseteq S, |A| \ge k} p_S(A) \frac{1}{\binom{|A|+1}{k}} \sum_{B \subseteq A, |B|=k} \left( 1 + \frac{x(A \setminus B) + k - x(S)}{|A|-k+1} - \frac{x(B)}{k} \right)
 = \sum_{A \subseteq S, |A| \ge k} p_S(A) \frac{1}{\binom{|A|+1}{k}} \sum_{B \subseteq A, |B|=k} \left( 1 + \frac{k - x(S \setminus A) - x(B)}{|A|-k+1} - \frac{x(B)}{k} \right)
 = \sum_{A \subseteq S, |A| \ge k} p_S(A) \frac{1}{\binom{|A|+1}{k}} \sum_{B \subseteq A, |B|=k} \left( 1 + \frac{k}{|A|-k+1} - \frac{x(S \setminus A)}{|A|-k+1} - \left( \frac{1}{|A|-k+1} + \frac{1}{k} \right) x(B) \right)
 = \sum_{A \subseteq S, |A| \ge k} p_S(A) \frac{1}{\binom{|A|+1}{k}} \sum_{B \subseteq A, |B|=k} \left( \frac{|A|+1}{|A|-k+1} - \frac{x(S \setminus A)}{|A|-k+1} - \frac{|A|+1}{k(|A|-k+1)} x(B) \right). (9)

Notice that the only part which depends on B in the last summation is x(B). By using Equation (2) and noticing that the number of subsets B ⊆ A with |B| = k is \binom{|A|}{k}, we get

 (9) = \sum_{A \subseteq S, |A| \ge k} p_S(A) \frac{1}{\binom{|A|+1}{k}} \frac{1}{|A|-k+1} \left( \binom{|A|}{k}(|A|+1) - \binom{|A|}{k} x(S \setminus A) - \frac{|A|+1}{k} \binom{|A|-1}{k-1} x(A) \right)
 = \sum_{A \subseteq S, |A| \ge k} p_S(A) \frac{1}{\binom{|A|+1}{k}} \frac{1}{|A|-k+1} \binom{|A|}{k} \left( |A|+1 - x(S \setminus A) - \frac{|A|+1}{|A|} x(A) \right)
 = \sum_{A \subseteq S, |A| \ge k} p_S(A) \frac{|A|-k+1}{|A|+1} \frac{1}{|A|-k+1} \left( |A|+1 - x(S \setminus A) - \frac{|A|+1}{|A|} x(A) \right)
 = \sum_{A \subseteq S, |A| \ge k} \frac{p_S(A)}{|A|+1} \left( |A|+1 - x(S \setminus A) - \frac{|A|+1}{|A|} x(A) \right)
 = \sum_{A \subseteq S, |A| \ge k} p_S(A) \left( 1 - \frac{x(S \setminus A)}{|A|+1} - \frac{x(A)}{|A|} \right). (10)

Now, note that by definition of the term p_S(A), we have

 x_i \, p_S(A) = (1 - x_i) \, p_S(A \cup i) \quad \text{for any } i \in S \setminus A. (11)

We compute the middle term in (10) by plugging in (11) and using the change of variable B = A ∪ i.

 \sum_{A \subseteq S, |A| \ge k} \frac{1}{|A|+1} p_S(A) \, x(S \setminus A) = \sum_{A \subseteq S, |A| \ge k} \sum_{i \in S} \frac{1}{|A|+1} x_i \, p_S(A) \, \mathbf{1}\{i \notin A\}
 = \sum_{i \in S} \sum_{A \subseteq S, |A| \ge k} \frac{1}{|A|+1} (1 - x_i) \, p_S(A \cup i) \, \mathbf{1}\{i \notin A\}
 = \sum_{i \in S} \sum_{B \subseteq S, |B| \ge k+1} \frac{1}{|B|} (1 - x_i) \, p_S(B) \, \mathbf{1}\{i \in B\}
 = \sum_{B \subseteq S, |B| \ge k+1} \frac{1}{|B|} p_S(B) \sum_{i \in S} \mathbf{1}\{i \in B\} - \sum_{B \subseteq S, |B| \ge k+1} \frac{1}{|B|} p_S(B) \sum_{i \in S} x_i \, \mathbf{1}\{i \in B\}
 = \sum_{B \subseteq S, |B| \ge k+1} p_S(B) - \sum_{B \subseteq S, |B| \ge k+1} \frac{p_S(B)}{|B|} x(B)
 = \sum_{B \subseteq S, |B| \ge k+1} p_S(B) \left( 1 - \frac{x(B)}{|B|} \right)
 = \sum_{A \subseteq S, |A| \ge k+1} p_S(A) \left( 1 - \frac{x(A)}{|A|} \right). (12)

We finally plug (12) into (10) and use \bar{x}(A) = x(A)/|A| to get

 (10) = \sum_{A \subseteq S, |A| = k} p_S(A) \left( 1 - \frac{x(A)}{|A|} \right) = \sum_{A \subseteq S, |A| = k} p_S(A) \left( 1 - \bar{x}(A) \right).

Notice that the only place where we used an inequality was from (8) to (9). Hence equality holds when x_e = k - x(S). ∎
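The equality case of (7) can also be confirmed numerically. The following sketch (helper names `p`, `q`, `G`, `rhs7` are ours) evaluates both sides of (7) at points of \tilde{P}_I with x_e = k - x(S):

```python
import itertools
import math
import random

def p(S, A, x):
    """p_S(A): probability that rounding the coordinates in S yields exactly A."""
    out = 1.0
    for i in S:
        out *= x[i] if i in A else 1 - x[i]
    return out

def q(A, B, x):
    """q_A(B) from equation (1)."""
    xbar = lambda T: sum(x[i] for i in T) / len(T)
    return (1 + xbar(A - B) - xbar(B)) / math.comb(len(A), len(B))

def G(S, k, x, e="e"):
    """Equation (6), evaluated exactly by enumeration."""
    total = 0.0
    for r in range(k, len(S) + 1):
        for A in itertools.combinations(S, r):
            A = set(A)
            total += p(S, A, x) * sum(
                q(A | {e}, set(B), x) for B in itertools.combinations(sorted(A), k)
            )
    return total

def rhs7(S, k, x):
    """Right-hand side of (7): sum over k-subsets A of p_S(A) (1 - xbar(A))."""
    return sum(
        p(S, set(A), x) * (1 - sum(x[i] for i in A) / k)
        for A in itertools.combinations(S, k)
    )

random.seed(0)
n, k = 5, 2
S = list(range(n - 1))
for _ in range(50):
    x = {i: random.uniform(0, 0.4) for i in S}   # keep x(S) < k
    x["e"] = k - sum(x[i] for i in S)            # the maximizing choice of x_e
    assert abs(G(S, k, x) - rhs7(S, k, x)) < 1e-9   # equality case of (7)
```

Note that x_e = k - x(S) may exceed 1 here, which is exactly why the relaxed polytope \tilde{P}_I drops the constraint x_e ≤ 1.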

### 2.3 Maximizing h^k_S : [0,1]^S → R

In this section, we turn our attention to maximizing the right-hand side expression in (7) over the unit hypercube [0,1]^S. In fact, we work with the following function instead:

 h^k_S(x) := \sum_{A \subseteq S, |A| = k} p_S(A) \left( k - x(A) \right).

Note that the above function is simply the right-hand side of (7) multiplied by k; hence, maximizing one or the other is equivalent. Recall that S = N ∖ e, so that |S| = n - 1 and the point (k/n, …, k/n) lies in the interior of [0,1]^S. Then the function h^k_S attains its maximum over the unit hypercube [0,1]^S at the point (k/n, …, k/n), with value

 h^k_S(k/n, \dots, k/n) = k \binom{n}{k} \left( 1 - \frac{k}{n} \right)^{n+1-k} \left( \frac{k}{n} \right)^k = k \left( 1 - c(k,n) \right).

For simplicity, we denote this maximum value by:

 \alpha(k,n) := k \binom{n}{k} \left( 1 - \frac{k}{n} \right)^{n+1-k} \left( \frac{k}{n} \right)^k.

Notice that by definition α(k,n) = k(1 - c(k,n)), so the maximum of the right-hand side of (7) over [0,1]^S is exactly 1 - c(k,n). Moreover, the function h^k_S also satisfies an interesting duality property. In order to prove Theorem 2.3, we first show that h^k_S has a unique extremum (in particular a local maximum) in the interior of [0,1]^S at the point (k/n, …, k/n); see Proposition 2.3. We then use induction on |S| to show that any point in the boundary of [0,1]^S has a lower function value than α(k,n). Since our function is continuous over a compact domain, it attains a maximum. That maximum then has to be attained at (k/n, …, k/n) by the two arguments above. That is, the unique extremum cannot be a local minimum or a saddle point: otherwise, since there are no more extrema in the interior and the function is continuous, the function would increase in some direction, leading to a point in the boundary having higher value. For completeness, we present in the appendix another proof of local maximality that relies on the Hessian matrix. For any k ≥ 1, the function h^k_S has a unique extremum in the interior of the unit hypercube at the point (k/n, …, k/n).
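A quick numerical sanity check of this claim for a small instance (helper names `p` and `h` are ours): random points of the hypercube never beat the value at (k/n, …, k/n).

```python
import itertools
import math
import random

def p(S, A, x):
    """p_S(A): probability that rounding the coordinates in S yields exactly A."""
    out = 1.0
    for i in S:
        out *= x[i] if i in A else 1 - x[i]
    return out

def h(S, k, x):
    """h^k_S(x) = sum over k-subsets A of S of p_S(A) (k - x(A))."""
    return sum(
        p(S, set(A), x) * (k - sum(x[i] for i in A))
        for A in itertools.combinations(S, k)
    )

n, k = 5, 2
S = list(range(n - 1))                  # |S| = n - 1
alpha = h(S, k, {i: k / n for i in S})  # claimed maximum alpha(k, n)
alpha_formula = k * math.comb(n, k) * (1 - k / n) ** (n + 1 - k) * (k / n) ** k

random.seed(0)
for _ in range(500):
    x = {i: random.random() for i in S}
    assert h(S, k, x) <= alpha + 1e-12
```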

For proving Proposition 2.3, we need the following lemma, the proof of which we leave to the appendix. The following holds for any x ∈ [0,1]^S:

 h^k_S(x) = \sum_{i=0}^{k-1} Q^i_S(x) \left( x(S) - i \right),

where

 Q^i_S(x) := \sum_{A \subseteq S, |A| = i} p_S(A).
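This identity is easy to test numerically (helper names `p`, `h`, `Q` are ours):

```python
import itertools
import random

def p(S, A, x):
    """p_S(A): probability that rounding the coordinates in S yields exactly A."""
    out = 1.0
    for i in S:
        out *= x[i] if i in A else 1 - x[i]
    return out

def h(S, k, x):
    """h^k_S(x) = sum over k-subsets A of S of p_S(A) (k - x(A))."""
    return sum(
        p(S, set(A), x) * (k - sum(x[i] for i in A))
        for A in itertools.combinations(S, k)
    )

def Q(S, i, x):
    """Q^i_S(x): probability that the rounded subset of S has size exactly i."""
    return sum(p(S, set(A), x) for A in itertools.combinations(S, i))

random.seed(1)
S = list(range(6))
x = {i: random.random() for i in S}
for k in (1, 2, 3):
    lhs = h(S, k, x)
    rhs = sum(Q(S, i, x) * (sum(x.values()) - i) for i in range(k))
    assert abs(lhs - rhs) < 1e-9
```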

The above formula actually holds for h^k_S with any k ≥ 1; we use this again in Section 2.5. We are now able to prove Proposition 2.3.

###### Proof of Proposition 2.3.

Let x be a point in the interior of [0,1]^S. To find the extrema of h^k_S, we want to solve ∇h^k_S(x) = 0. We thus first need to find

 \frac{\partial h^k_S(x)}{\partial x_i} \quad \text{for every } i \in S. (13)

Notice that:

• For a set A ⊆ S with |A| = k such that i ∈ A,

 \frac{\partial}{\partial x_i} \, p_S(A)(k - x(A)) = (k - x(A)) \prod_{j \in A \setminus i} x_j \prod_{j \in S \setminus A} (1 - x_j) - p_S(A)
 = (k - x(A)) \prod_{j \in A \setminus i} x_j \prod_{j \in S \setminus A} (1 - x_j) - x_i \prod_{j \in A \setminus i} x_j \prod_{j \in S \setminus A} (1 - x_j)
 = (k - x(A)) \, p_{S \setminus i}(A \setminus i) - x_i \, p_{S \setminus i}(A \setminus i)
 = p_{S \setminus i}(A \setminus i) \left( k - x(A \setminus i) - 2 x_i \right). (14)
• For a set A ⊆ S with |A| = k such that i ∉ A,

 \frac{\partial}{\partial x_i} \, p_S(A)(k - x(A)) = -(k - x(A)) \prod_{j \in A} x_j \prod_{j \in (S \setminus A) \setminus i} (1 - x_j) = -p_{S \setminus i}(A) \left( k - x(A) \right). (15)

We are now able to compute (13):

 \frac{\partial h^k_S(x)}{\partial x_i} = \frac{\partial}{\partial x_i} \sum_{A \subseteq S, |A|=k} p_S(A)(k - x(A))
 = \sum_{A \subseteq S, |A|=k, i \in A} \frac{\partial}{\partial x_i} \, p_S(A)(k - x(A)) + \sum_{A \subseteq S, |A|=k, i \notin A} \frac{\partial}{\partial x_i} \, p_S(A)(k - x(A))
 = \sum_{A \subseteq S, |A|=k, i \in A} p_{S \setminus i}(A \setminus i)(k - x(A \setminus i) - 2x_i) - \sum_{A \subseteq S, |A|=k, i \notin A} p_{S \setminus i}(A)(k - x(A))
 = \sum_{B \subseteq S \setminus i, |B|=k-1} p_{S \setminus i}(B)(k - x(B) - 2x_i) - \sum_{A \subseteq S \setminus i, |A|=k} p_{S \setminus i}(A)(k - x(A))
 = \sum_{A \subseteq S \setminus i, |A|=k-1} p_{S \setminus i}(A)(k - 1 - x(A) + 1 - 2x_i) - h^k_{S \setminus i}(x)
 = \sum_{A \subseteq S \setminus i, |A|=k-1} p_{S \setminus i}(A)(k - 1 - x(A)) + (1 - 2x_i) \sum_{A \subseteq S \setminus i, |A|=k-1} p_{S \setminus i}(A) - h^k_{S \setminus i}(x)
 = (1 - 2x_i) \, Q^{k-1}_{S \setminus i}(x) - \left( h^k_{S \setminus i}(x) - h^{k-1}_{S \setminus i}(x) \right)