The relaxation and rounding framework is a broad and powerful approach to solving combinatorial optimization problems. It consists of first formulating a suitable continuous relaxation of the discrete problem, solving it approximately, and then rounding the resulting fractional solution to an integral one while preserving the objective value as much as possible. Contention resolution schemes (or CR schemes) were proposed by Chekuri, Vondrák, and Zenklusen as a general technique for rounding a fractional solution. While initially designed to handle submodular maximization problems under different types of constraints, CR schemes have become a powerful tool for tackling a broad class of problems. At a high level, given a fractional point $x$, the procedure first generates a random set $R(x)$ by independently including each element $i$ with probability $x_i$. It then removes some elements from this set so that the resulting set is feasible. More formally, given a ground set $N$ and an independence family $\mathcal{I} \subseteq 2^N$, we let $P \subseteq [0,1]^N$ be a relaxation polytope associated to $\mathcal{I}$. A CR scheme is then defined as follows. [CR scheme] $\pi$ is a $c$-balanced contention resolution scheme for the polytope $P$ if for every $x \in P$, $\pi_x$ is an algorithm that takes as input a set $A \subseteq N$ and outputs an independent set $\pi_x(A) \in \mathcal{I}$ contained in $A$ such that $\Pr[i \in \pi_x(R(x)) \mid i \in R(x)] \ge c$ for every $i \in N$.
Moreover, a contention resolution scheme is monotone if for any $x \in P$ and any $i \in A \subseteq B \subseteq N$: $\Pr[i \in \pi_x(A)] \ge \Pr[i \in \pi_x(B)]$.
Monotonicity is a desirable property for a $c$-balanced CR scheme to have, since one can then obtain approximation guarantees for constrained submodular maximization problems via the relaxation and rounding approach (see for more details). By presenting a variety of CR schemes for different constraints, the work in gives improved approximation algorithms for linear and submodular maximization problems under matroid, knapsack, and matchoid constraints, as well as their intersections. While contention resolution schemes have been applied to different types of independence families, we restrict our attention in this work to matroid constraints. A CR scheme with a balancedness of $1 - (1 - 1/n)^n$ for the uniform matroid of rank one is given in [4, 5], where it is also shown that this is optimal. That is, there is no $c$-balanced CR scheme for the uniform matroid of rank one with $c > 1 - (1 - 1/n)^n$. The work of extends this result by providing a $(1 - 1/e)$-balanced CR scheme for any matroid. This existence proof can then be turned into an efficient algorithm with a balancedness of $1 - 1/e - \varepsilon$, which is asymptotically optimal. However, this algorithm uses random sampling and lacks simplicity, which is why a simpler CR scheme for general matroids, with a worse balancedness, is also presented there. There has also been work on obtaining CR schemes for other types of independence families [2, 7], or on settings where the elements of the random set arrive in an online fashion [1, 6, 8]. To the best of our knowledge, however, not much work has been done in the direction of finding classes of matroids for which the balancedness factor can be improved. This is the main question studied in this work.
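To make the rank-one case concrete, here is a small simulation sketch (all names and parameters are ours, not from the paper): the classical rank-one scheme rounds $x$ independently and, upon contention, keeps a uniformly random element of $R(x)$. At the symmetric point $x = (1/n, \dots, 1/n)$, the estimated balancedness matches $1 - (1 - 1/n)^n$:

```python
import random

def empirical_balancedness(x, trials=200_000, seed=0):
    """Estimate min_i Pr[i in pi_x(R(x)) | i in R(x)] for the scheme that
    rounds x independently and returns one uniformly random element of R(x)."""
    rng = random.Random(seed)
    n = len(x)
    appeared = [0] * n   # number of trials in which i landed in R(x)
    survived = [0] * n   # number of trials in which i was the kept element
    for _ in range(trials):
        R = [i for i in range(n) if rng.random() < x[i]]
        for i in R:
            appeared[i] += 1
        if R:
            survived[rng.choice(R)] += 1
    return min(survived[i] / appeared[i] for i in range(n) if appeared[i])

n = 5
x = [1 / n] * n                   # the symmetric worst-case point
exact = 1 - (1 - 1 / n) ** n      # = 0.67232 for n = 5
print(abs(empirical_balancedness(x) - exact) < 0.02)
```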
1.1 Our contributions
Our main result is an optimal monotone CR scheme for the uniform matroid of rank $k$ on $n$ elements, with a balancedness of $c_{k,n}$, whose explicit expression is given in Theorem 2. This result is encapsulated in Theorem 2 (balancedness), Theorem 2.5 (optimality), and Theorem 2.6 (monotonicity). This generalizes the balancedness factor of $1 - (1 - 1/n)^n$ given in [4, 5] for the rank one (i.e. $k = 1$) case. Moreover, for a fixed value of $k$, we have $\lim_{n \to \infty} c_{k,n} = 1 - e^{-k} \frac{k^k}{k!}$. The latter term is strictly better than $1 - 1/e$ for any value of $k \ge 2$. In addition, our scheme is in a sense simpler than the one provided in [4, 5]
, since our algorithm simply consists of assigning probabilities to each base contained in the input set, and then picking a base according to that probability distribution. We also discuss how the above CR scheme for uniform matroids naturally generalizes to partition matroids. If we denote by $c_{k,n}$ the optimal balancedness for uniform matroids of rank $k$ on $n$ elements, then the balancedness we get for partition matroids with blocks $N_1, \dots, N_\ell$ and capacities $k_1, \dots, k_\ell$ is $\min_{j \in [\ell]} c_{k_j, |N_j|}$, and this is optimal.
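The reduction to partition matroids can be sketched as follows: contention is resolved independently inside each block, so the uniform-matroid scheme is simply run once per block. (The per-block choice below is a uniform stand-in for the actual scheme, which depends on $x$; all names are ours.)

```python
import random

def crs_partition(x, R, blocks, caps, rng):
    """Resolve contention block by block: within block j keep at most caps[j]
    of the arrived elements R. The real scheme would pick the kept k-subset
    with probabilities depending on x; a uniform choice stands in here."""
    out = set()
    for B, k in zip(blocks, caps):
        arrived = sorted(R & B)
        if len(arrived) <= k:
            out |= set(arrived)        # no contention inside this block
        else:
            out |= set(rng.sample(arrived, k))
    return out

rng = random.Random(0)
blocks = [{0, 1, 2, 3}, {4, 5}]
caps = [2, 1]
R = {0, 1, 3, 4, 5}                    # a possible realization of R(x)
S = crs_partition(None, R, blocks, caps, rng)
print(all(len(S & B) <= k for B, k in zip(blocks, caps)))   # independence holds
```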
1.2 Preliminaries on matroids
We provide in this section a brief background on matroids. A matroid $\mathcal{M} = (N, \mathcal{I})$ is a pair consisting of a ground set $N$ and a non-empty family $\mathcal{I} \subseteq 2^N$ of independent sets which satisfies:
If $A \in \mathcal{I}$ and $B \subseteq A$, then $B \in \mathcal{I}$.
If $A, B \in \mathcal{I}$ with $|A| < |B|$, then there exists $e \in B \setminus A$ such that $A \cup \{e\} \in \mathcal{I}$.
Given a matroid $\mathcal{M} = (N, \mathcal{I})$, its rank function $r : 2^N \to \mathbb{Z}_{\ge 0}$ is defined as $r(S) = \max\{|I| : I \subseteq S,\ I \in \mathcal{I}\}$. Its matroid polytope is given by $P_{\mathcal{M}} = \{x \in [0,1]^N : x(S) \le r(S)\ \forall S \subseteq N\}$, where $x(S) = \sum_{i \in S} x_i$. The next two classes of matroids are of special interest for this work. [Uniform matroid] The uniform matroid of rank $k$ on $n$ elements is the matroid whose independent sets are all the subsets of the ground set $N$ of cardinality at most $k$. That is, $\mathcal{I} = \{S \subseteq N : |S| \le k\}$. [Partition matroid] Partition matroids are a generalization of uniform matroids. Suppose the ground set $N$ is partitioned into $\ell$ blocks $N_1, \dots, N_\ell$, and each block $N_j$ has a certain capacity $k_j$. Then $\mathcal{I} = \{S \subseteq N : |S \cap N_j| \le k_j\ \forall j \in [\ell]\}$. The uniform matroid is simply a partition matroid with one block $N_1 = N$ and one capacity $k_1 = k$. Moreover, the restriction of a partition matroid to each block $N_j$ is a uniform matroid of rank $k_j$ on the ground set $N_j$.
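These definitions translate directly into code; the following sketch (helper names are ours) implements the two independence oracles and a brute-force rank function, which is fine for small ground sets:

```python
from itertools import combinations

def uniform_independent(S, k):
    """Independence oracle for the uniform matroid of rank k: |S| <= k."""
    return len(S) <= k

def partition_independent(S, blocks, caps):
    """Independence oracle for a partition matroid: at most caps[j] elements
    of S may come from blocks[j]."""
    return all(len(S & B) <= c for B, c in zip(blocks, caps))

def rank(S, independent):
    """Matroid rank of S: the size of a largest independent subset of S,
    found by brute force over subsets."""
    S = set(S)
    for r in range(len(S), -1, -1):
        if any(independent(set(T)) for T in combinations(S, r)):
            return r
    return 0

blocks = [{0, 1, 2}, {3, 4}]
caps = [2, 1]
S = {0, 1, 2, 3}
print(partition_independent(S, blocks, caps))                     # False: block 0 exceeded
print(rank(S, lambda T: partition_independent(T, blocks, caps)))  # 3 = min(3,2) + min(1,1)
```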
2 An optimal monotone contention resolution scheme for uniform matroids
We assume throughout this whole section that $k \ge 1$ and that $k < n$. For any point $x \in P_{\mathcal{M}}$, we let $R(x)$ be the random set satisfying $\Pr[i \in R(x)] = x_i$ independently for each coordinate $i$. If the size of $R(x)$ is at most $k$, then $R(x)$ is already an independent set and the CR scheme returns it. If however $|R(x)| > k$, then the CR scheme returns a random subset of $k$ elements by making the probabilities of each subset of $k$ elements depend linearly on the coordinates of the original point $x$. More precisely, given an arbitrary $x \in P_{\mathcal{M}}$, for any set $A$ with $|A| > k$ and any subset $B \subseteq A$ of size $k$, we define
where we use the following notation: $x(S) = \sum_{i \in S} x_i$ for any $S \subseteq N$. We then define a randomized CR scheme for $P_{\mathcal{M}}$ as follows. [CR scheme for $P_{\mathcal{M}}$] We are given a point $x \in P_{\mathcal{M}}$ and a set $A \subseteq N$.
If $|A| \le k$, then $\pi_x(A) = A$.
If $|A| > k$, then for every $B \subseteq A$ with $|B| = k$, set $\pi_x(A) = B$ with the probability defined in (1).
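The two cases above can be sketched as follows. Since it is precisely the distribution (1) that makes the scheme optimal and monotone, we leave it as a pluggable `weight` argument and use a uniform placeholder here, which is explicitly not the paper's choice:

```python
import random
from itertools import combinations

def crs_uniform(x, R, k, weight, rng):
    """Skeleton of the scheme: if |R| <= k return R unchanged; otherwise pick
    a k-subset (base) B of R with probability proportional to weight(B, R, x).
    `weight` stands in for the linear-in-x probabilities (1)."""
    if len(R) <= k:
        return set(R)
    bases = [set(B) for B in combinations(sorted(R), k)]
    w = [weight(B, R, x) for B in bases]
    return rng.choices(bases, weights=w, k=1)[0]

# Placeholder: the uniform distribution over bases (NOT the paper's choice).
uniform_weight = lambda B, R, x: 1.0

rng = random.Random(0)
x = [0.3] * 6
R = {0, 2, 3, 5}                        # a possible realization of R(x)
picked = crs_uniform(x, R, k=2, weight=uniform_weight, rng=rng)
print(len(picked) <= 2, picked <= R)    # always an independent subset of R
```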
One can check that this CR scheme is well-defined, i.e. that (1) defines a valid probability distribution over the $k$-subsets of $A$. The above procedure is a well-defined CR scheme. That is, for every $x \in P_{\mathcal{M}}$ and every $A$ with $|A| > k$, the probabilities in (1) are nonnegative and sum to one.
Since $x \in [0,1]^N$, it directly follows from the definition (1) that these probabilities are nonnegative. In order to prove the second claim, we need the equality
which we derive in the following way:
We now state our main result. Algorithm 1 is a $c$-balanced CR scheme for the uniform matroid of rank $k$ on $n$ elements, where $c = 1 - \binom{n}{k}\left(\frac{k}{n}\right)^{k}\left(1 - \frac{k}{n}\right)^{n-k+1}$. Since we use the above expression often throughout this section, we denote it by $c_{k,n}$.
We note that when $k = 1$ we get $c_{1,n} = 1 - (1 - 1/n)^n$, which matches the optimal balancedness for $k = 1$ provided in [4, 5]. This converges to $1 - 1/e$ as $n$ gets large. In addition, the balancedness factor improves as $k$ grows. Indeed, our next result shows that for any $k \ge 2$, the balancedness is asymptotically (as $n$ grows) strictly better than $1 - 1/e$.
For a fixed $k$, the limit of $c_{k,n}$ as $n$ tends to infinity is $1 - e^{-k}\dfrac{k^k}{k!}$.
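This limit has a natural probabilistic reading: at the symmetric point $x = (k/n, \dots, k/n)$, $|R(x)|$ converges to a Poisson($k$) random variable, and $\mathbb{E}[\min(\mathrm{Pois}(k), k)]/k$ equals $1 - e^{-k}k^k/k!$. The identity is quickly checked numerically (this is a verification sketch of ours, not part of the paper's proof):

```python
from math import exp, factorial

def poisson_balancedness(k, terms=400):
    """E[min(X, k)] / k for X ~ Poisson(k), via a truncated sum.
    The pmf is updated iteratively to avoid huge factorials."""
    p = exp(-k)               # P[X = 0]
    total = 0.0
    for j in range(terms):
        total += min(j, k) * p
        p *= k / (j + 1)      # P[X = j + 1] from P[X = j]
    return total / k

for k in (1, 2, 5, 20):
    closed_form = 1 - exp(-k) * k**k / factorial(k)
    assert abs(poisson_balancedness(k) - closed_form) < 1e-9
    print(k, round(closed_form, 4))   # increases with k; equals 1 - 1/e at k = 1
```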
2.1 Outline of the proof of Theorem 2
Throughout this whole section on uniform matroids, we fix an arbitrary element $i \in N$. In order to prove Theorem 2, we need to show that for every $x \in P_{\mathcal{M}}$ with $x_i > 0$ we have $\Pr[i \in \pi_x(R(x)) \mid i \in R(x)] \ge c_{k,n}$. This is equivalent to showing that for every $x \in P_{\mathcal{M}}$ with $x_i > 0$ we have
We now introduce some definitions and notation that will be needed. For any , let , where is the random set obtained by rounding each coordinate of in the reduced ground set to one independently with probability . Note that . We do not write the dependence on for simplicity of notation. We mainly work on the set . For this reason, we define . Note that ; we use this often in our arguments. With the above notation we can rewrite the probability in (5) in a more convenient form. For any satisfying , we get
The obtained expression is a multivariable function of the variables , since and depend on those variables as well. We denote it as follows.
Thus, to prove Theorem 2, it suffices to show the following. Let and be as defined above. Then . Moreover, the maximum is attained at the point . Indeed, Theorem 6 implies that for every we have , with equality holding if . In particular, for any satisfying , we get
which proves Theorem 2 by (5). Notice that for the conditional probability to be well defined, we need the assumption that . However, in our case is simply a multivariable polynomial function of the variables and is thus also defined when . We may therefore drop the conditional probability and simply treat Theorem 6 as a multivariable maximization problem over a bounded domain. We now outline the proof of Theorem 6. We first maximize over the variable , obtaining an expression depending only on the -variables in . This is done in Section 2.2. We then maximize that expression over the unit hypercube (see Section 2.3). Finally, we combine the above two results to show that the maximum in Theorem 6 is attained at the point for every ; this is done in Section 2.4.
2.2 Maximizing over the variable
The matroid polytope of is given by . We define a new polytope by removing the constraint from :
Clearly, . We now present the main result of this section, where we consider the maximization problem and maximize over the variable while keeping all the other variables ( for every ) fixed to get an expression depending only on the -variables in . For every ,
Moreover, equality holds when .
We now maximize this expression with respect to the variable over while keeping all the other variables fixed. Since this is a linear function of with a positive coefficient, the maximum is attained at the largest value allowed by the constraint . Note that this was the reason for the definition of , since might not necessarily be smaller than 1. We thus plug in in (2.2) and write an inequality to emphasize that the derivation holds for any .
Notice the only part which depends on in the last summation is . By using Equation (2) and noticing that , we get
Now, note that by definition of the term , we have
2.3 Maximizing over the unit hypercube
In this section, we turn our attention to maximizing the right-hand side expression in (7) over the unit hypercube . In fact, we work with the following function instead:
Note that the above function is simply the right-hand side of (7) multiplied by . Hence, maximizing one is equivalent to maximizing the other. Let , so that and . Then the function attains its maximum over the unit hypercube at the point with value
For simplicity, we denote this maximum value by:
Notice that for any . Hence Theorem 2.3 holds for and as well. Moreover, the function also satisfies an interesting duality property: . In order to prove Theorem 2.3, we first show that has a unique extremum (in particular a local maximum) in the interior of at the point — see Proposition 2.3. We then use induction on to show that any point in the boundary of has a lower function value than . Since our function is continuous over a compact domain, it attains a maximum, which by the two arguments above must be attained at . That is, the unique extremum cannot be a local minimum or a saddle point: otherwise, since there are no other extrema in the interior and the function is continuous, the function would increase in some direction, leading to a point in the boundary with higher value. For completeness, we present in the appendix an alternative proof of local maximality that relies on the Hessian matrix. For any , the function has a unique extremum in the interior of the unit hypercube at the point .
For proving Proposition 2.3 we need the following lemma, the proof of which we leave to the appendix. The following holds for any :