Matchings are fundamental structures in graph theory that play an indispensable role in combinatorial optimization. For decades, there have been tremendous and ongoing efforts to design more efficient algorithms for finding maximum matchings in terms of their cardinality and, more generally, their total weight. In particular, matchings in bipartite graphs have found countless applications in settings where it is desirable to assign entities from one set to those in another set (e.g., matching students to schools, physicians to hospitals, computing tasks to servers, and impressions in online media to advertisers). Due to the enormous growth of matching markets in digital domains, efficient online matching algorithms have become increasingly important. In particular, search engine companies have created opportunities for online matching algorithms to have a massive impact in multibillion-dollar advertising markets. Motivated by these applications, we consider the problem of matching a set of impressions that arrive one by one to a set of advertisers that are known in advance. When an impression arrives, its edges to the advertisers are revealed and an irrevocable decision has to be made about which advertiser the impression should be assigned to. Karp, Vazirani, and Vazirani [KVV90] gave an elegant online algorithm called Ranking to find matchings in unweighted bipartite graphs with a competitive ratio of 1 - 1/e. They also proved that this is the best achievable competitive ratio. Further, Aggarwal et al. [AGKM11] generalized their algorithm to the vertex-weighted online bipartite matching problem and showed that the same 1 - 1/e competitive ratio is attainable.
The edge-weighted case, however, is much more nebulous. This is partly because no competitive algorithm exists without an additional assumption. To see this, consider two instances of the edge-weighted problem, each with one advertiser and two impressions. The edge weight of the first impression is 1 in both instances, and the weight of the second impression is 0 in the first instance and W in the second instance, for some arbitrarily large W > 1. An online algorithm cannot distinguish between the two instances when the first impression arrives, but it has to decide whether or not to assign this impression to the advertiser. Not assigning it gives a competitive ratio of 0 in the first instance, and assigning it gives an arbitrarily small competitive ratio of 1/W in the second. This problem cannot be tackled unless assigning both impressions to the advertiser is an option.
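To make the calculation concrete, here is a minimal sketch of the two instances in Python. The function and variable names are ours, and the weights (1, then 0 or W) follow the construction above; the advertiser can receive at most one impression since there is no free disposal yet.

```python
# Sanity check of the two-instance argument: a deterministic online algorithm
# must decide on the first impression (weight 1) before learning whether the
# second impression has weight 0 or W.

def ratio(assign_first, second_weight, W=10**6):
    """Competitive ratio on one instance with a single advertiser and no free
    disposal: the advertiser's single slot is used up by the first assignment."""
    opt = max(1.0, second_weight)
    alg = 1.0 if assign_first else second_weight
    return alg / opt

W = 10**6
for assign_first in (True, False):
    # Worst case over the two indistinguishable instances.
    worst = min(ratio(assign_first, 0.0, W), ratio(assign_first, W, W))
    print(assign_first, worst)
```

Either decision is doomed on one of the two instances: assigning yields ratio 1/W on the second instance, and not assigning yields ratio 0 on the first.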
In display advertising, assigning more impressions to an advertiser than they paid for only makes them happier. In other words, we can assign multiple impressions to any given advertiser. However, instead of collecting the weights of all the edges assigned to an advertiser, we only count the maximum weight (i.e., the objective equals the sum, over the advertisers, of the heaviest edge weight assigned to each of them). This is equivalent to allowing the advertiser to dispose of previously matched edges for free to make room for new, heavier edges. This assumption is commonly known as the free disposal model. In the display advertising literature [FKM09, KMZ13], the free-disposal assumption is well received and widely applied because of its natural economic interpretation. Finally, edge-weighted online bipartite matching with free disposal is a special case of the monotone submodular welfare maximization problem, to which known 1/2-competitive greedy algorithms apply [FNW78, LLN06].
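The free-disposal objective is simple to state in code. The following is a minimal sketch (the function name and input format are ours): keeping only the heaviest edge per advertiser is exactly the "dispose for free" accounting.

```python
def free_disposal_value(assignments):
    """assignments: list of (advertiser, edge_weight) pairs, in arrival order.
    Returns the free-disposal objective: the sum over advertisers of the
    heaviest edge weight assigned to each."""
    best = {}
    for advertiser, weight in assignments:
        # Keeping only the maximum is equivalent to disposing of lighter
        # previously matched edges for free.
        best[advertiser] = max(best.get(advertiser, 0.0), weight)
    return sum(best.values())

# Advertiser "a" contributes 3.0 (its heaviest edge), "b" contributes 2.0.
print(free_disposal_value([("a", 1.0), ("a", 3.0), ("b", 2.0), ("a", 0.5)]))
```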
1.1 Our Contributions
Despite thirty years of research in online matching since the seminal work of Karp et al. [KVV90], finding an algorithm for edge-weighted online bipartite matching that achieves a competitive ratio greater than 1/2 has remained a tantalizing open problem. This paper gives a new online algorithm and answers the question affirmatively, breaking the long-standing 1/2 barrier (under free disposal).
There is a 0.5086-competitive algorithm for edge-weighted online bipartite matching.
Given the hardness result of Kapralov, Post, and Vondrák [KPV13], which rules out any competitive ratio better than 1/2 for monotone submodular welfare maximization (unless NP = RP), our algorithm shows that edge-weighted bipartite matching is strictly easier than submodular welfare maximization in the online setting.
From now on, we will use the more formal terminology of offline and online vertices in a bipartite graph instead of advertisers and impressions. One of our main technical contributions is a novel algorithmic ingredient called online correlated selection (OCS), an online subroutine that takes a sequence of pairs of vertices as input and selects one vertex from each pair. Instead of using a fresh random bit for each of its decisions, the OCS asks to what extent the decisions across different pairs can be negatively correlated, and ultimately guarantees that a vertex appearing in k pairs is selected at least once with probability strictly greater than 1 - 2^{-k}. See Section 3 for a short introduction and Section 5 for the full details.
Given an OCS, we can achieve a better-than-1/2 competitive ratio for unweighted online bipartite matching with the following (barely) randomized algorithm. For each online vertex, either pick a pair of offline neighbors and let the OCS select one of them, or choose one offline neighbor deterministically. More concretely, among the neighbors that have not been matched deterministically, find the least-matched ones (i.e., those that have appeared in the fewest pairs). Pick two of them if there are at least two; otherwise, choose the remaining one deterministically. We analyze this algorithm in Appendix A.
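The algorithm just described can be sketched as follows. This is an illustrative implementation with names of our choosing; the `select` callback stands in for the OCS, and defaults to a fresh independent random bit per pair, which forgoes the negative correlation that an actual OCS provides.

```python
import random

def two_choice_greedy(offline, online_neighbors, select=None):
    """Sketch of the two-choice greedy.  `offline` is a list of offline
    vertices; `online_neighbors` is a list of (online_vertex, neighbors) in
    arrival order; `select` plays the role of the OCS, receiving a pair of
    offline vertices and returning one of them."""
    if select is None:
        select = lambda pair: random.choice(pair)  # independent random bits
    times_paired = {u: 0 for u in offline}  # appearances in randomized rounds
    matched_det = set()                     # matched in a deterministic round
    matching = {}
    for v, neighbors in online_neighbors:
        candidates = [u for u in neighbors if u not in matched_det]
        if not candidates:
            continue
        fewest = min(times_paired[u] for u in candidates)
        least = [u for u in candidates if times_paired[u] == fewest]
        if len(least) >= 2:
            pair = tuple(sorted(least)[:2])  # e.g., lexicographic tie-breaking
            for u in pair:
                times_paired[u] += 1
            matching[v] = select(pair)       # randomized round
        else:
            matching[v] = least[0]           # deterministic round
            matched_det.add(least[0])
    return matching
```

With `select` backed by an actual OCS, an offline vertex appearing in k pairs is selected with probability strictly greater than 1 - 2^{-k}.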
Although the competitive ratio of the algorithm above is far worse than the optimal 1 - 1/e ratio of Karp et al. [KVV90], it generalizes more readily. To extend this algorithm to the edge-weighted problem, we need a reasonable notion of "least-matched" offline neighbors. Suppose one neighbor's heaviest edge weight so far is either small or large, each with probability 1/2, while another neighbor's heaviest edge weight takes some intermediate value with certainty, and their edge weights with the current online vertex are equal. Which one is less matched? To remedy this, we use the online primal-dual framework for matching problems by Devanur, Jain, and Kleinberg [DJK13], along with an alternative formulation of the edge-weighted online bipartite matching problem by Devanur et al. [DHK16]. In short, we account for the contribution of each offline vertex by weight-levels, and at each weight-level we consider the probability that the heaviest edge matched to the vertex has weight at least this level. This is the complementary cumulative distribution function (CCDF) of the heaviest edge weight, and hence we call this the CCDF viewpoint. Then, for each offline neighbor, we use the dual variables to compute an offer at each weight-level, should the current online vertex be matched to it. The neighbor with the largest net offer, aggregated over all weight-levels, is considered the "least matched". We introduce the online primal-dual framework and the CCDF viewpoint in Section 2. Then we formally present our edge-weighted matching algorithm in Section 4, followed by its analysis. Lastly, Appendix B includes hard instances showing that the competitive ratio of our algorithm is nearly tight.
1.2 Related Works
The literature on online weighted bipartite matching algorithms is extensive, but most of these works achieve competitive ratios greater than 1/2 by assuming that offline vertices have large capacities or that some stochastic information about the online vertices is known in advance. Below we list the most relevant works and refer interested readers to the excellent survey of Mehta [Meh13]. We note that there have recently been several significant advances in more general settings, including different arrival models and general (non-bipartite) graphs [HKT18, GKS19, GKM19, HPT19].
The capacity of an offline vertex is the number of online vertices that can be assigned to it. Exploiting the large-capacity assumption to beat 1/2 dates back two decades to Kalyanasundaram and Pruhs [KP00]. Feldman et al. [FKM09] gave a (1 - 1/e)-competitive algorithm for Display Ads, which is equivalent to edge-weighted online bipartite matching assuming large capacities. Under similar assumptions, the same competitive ratio was obtained for AdWords [MSVV05, BJN07], in which offline vertices have a budget constraint on the total weight that can be assigned to them rather than on the number of impressions. From a theoretical point of view, one of the primary goals in the online matching literature is to provide algorithms with a competitive ratio greater than 1/2 without making any assumption on the capacities of offline vertices.
If we have knowledge about the arrival patterns of online vertices, we can often leverage this information to design better algorithms. Typical stochastic assumptions include that the online vertices are drawn from some known or unknown distribution [FMMM09, KMT11, DJSW11, HMZ11, MGS12, MP12, JL13], or that they arrive in a random order [GM08, DH09, FHK10, MY11, MGZ12, MWZ15, HTWZ19]. These works achieve a (1 - ε)-competitive ratio if the large-capacity assumption holds in addition to the stochastic assumptions, or a ratio of at least 1 - 1/e for arbitrary capacities. Korula, Mirrokni, and Zadimoghaddam [KMZ18] showed that the greedy algorithm is 0.505-competitive for the more general problem of submodular welfare maximization if the online vertices arrive in a random order, without any assumption on the capacities. The random order assumption is particularly justified because Kapralov et al. [KPV13] proved that beating 1/2 for submodular welfare maximization in the oblivious adversary model implies NP = RP.
The edge-weighted online matching problem considers a bipartite graph G = (L, R, E), where L and R are the sets of vertices on the left-hand side (LHS) and right-hand side (RHS), respectively, and E ⊆ L × R is the set of edges. Every edge (u, v) is associated with a nonnegative weight w_{u,v} ≥ 0, and we can assume without loss of generality that the graph is complete bipartite, i.e., E = L × R, by assigning zero weights to the missing edges.
The vertices on the LHS are offline, in that they are all known to the algorithm in advance. The vertices on the RHS, however, arrive online one at a time. When an online vertex v ∈ R arrives, its incident edges and their weights are revealed, and the algorithm must immediately and irrevocably match v to an offline neighbor. Each offline vertex can be matched any number of times, but only the weight of its heaviest edge counts towards the objective. This is equivalent to allowing an offline vertex u that is matched, say, to v, to be rematched to a new online vertex v' with a larger edge weight w_{u,v'}, disposing of the edge (u, v) for free. This assumption is known as the free disposal model.
The goal is to maximize the total weight of the matching. A randomized algorithm is Γ-competitive if its expected objective value is at least Γ times the offline optimal in hindsight, for any instance of edge-weighted online matching. We refer to Γ as the competitive ratio of the algorithm.
2.1 Complementary Cumulative Distribution Function Viewpoint
Next we describe an alternative formulation of the edge-weighted online matching problem due to Devanur et al. [DHK16] that captures the contribution of each offline vertex u to the objective in terms of the complementary cumulative distribution function (CCDF) of the heaviest edge weight matched to u. We refer to this approach as the CCDF viewpoint.
For any offline vertex u and any weight-level w ≥ 0, let y_u(w) be the CCDF of the weight of the heaviest edge matched to u, i.e., the probability that u is matched to at least one online vertex v such that w_{u,v} ≥ w. Then, y_u(w) is a non-increasing function of w that takes values between 0 and 1. Observe that y_u(w) is a step function with polynomially many pieces, because the number of pieces is at most the number of incident edges. Hence, we will be able to maintain it in polynomial time.
The expected weight of the heaviest edge matched to u then equals the area under y_u(w), i.e.: E[heaviest edge weight matched to u] = ∫_0^∞ y_u(w) dw.
This is the standard identity that writes the expectation of a nonnegative random variable as the integral of its complementary cumulative distribution function.
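For a discrete distribution, the integral reduces to a finite sum over the pieces of the step function, which is easy to check numerically. The function name and the example distribution below are ours.

```python
def expectation_via_ccdf(dist):
    """dist: dict mapping value -> probability for a nonnegative discrete
    random variable X.  Integrates Pr[X >= w] over w, piece by piece."""
    total, prev = 0.0, 0.0
    for w in sorted(dist):
        tail = sum(p for x, p in dist.items() if x >= w)  # Pr[X >= w]
        total += (w - prev) * tail  # the CCDF is constant on (prev, w]
        prev = w
    return total

dist = {0.0: 0.25, 1.0: 0.5, 3.0: 0.25}
direct = sum(x * p for x, p in dist.items())  # E[X] computed directly
assert abs(expectation_via_ccdf(dist) - direct) < 1e-12
```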
We illustrate this idea with an example in Figure 1. Suppose an offline vertex u has four online neighbors v_1, v_2, v_3, and v_4 with increasing edge weights. Further, suppose that v_1 is matched to u with certainty, while v_2, v_3, and v_4 each have some probability of being matched to u. (The latter events may be correlated.) Next, suppose a new neighbor arrives. The values of y_u(w) are then increased accordingly for the weight-levels w up to its edge weight, and the total area of the shaded regions is the increment in the expected weight of the heaviest edge matched to vertex u.
2.2 Online Primal-Dual Framework
We analyze our algorithms using a linear program (LP) for edge-weighted matching under the online primal-dual framework. Consider the standard matching LP and its dual below:

maximize Σ_{(u,v) ∈ E} w_{u,v} x_{u,v}, subject to Σ_{v ∈ R} x_{u,v} ≤ 1 for every u ∈ L, Σ_{u ∈ L} x_{u,v} ≤ 1 for every v ∈ R, and x_{u,v} ≥ 0;

minimize Σ_{u ∈ L} α_u + Σ_{v ∈ R} β_v, subject to α_u + β_v ≥ w_{u,v} for every (u, v) ∈ E, and α_u, β_v ≥ 0.

We interpret the primal variable x_{u,v} as the probability that (u, v) is the heaviest edge matched to vertex u.
Let P = Σ_{(u,v) ∈ E} w_{u,v} x_{u,v} denote the primal objective. If x_{u,v} is the probability that (u, v) is the heaviest edge matched to u, then P equals the expected objective of the algorithm. Let D = Σ_{u ∈ L} α_u + Σ_{v ∈ R} β_v denote the dual objective.
Online algorithms under the online primal-dual framework maintain not only a matching but also a dual assignment (not necessarily feasible) at all times subject to the conditions summarized below.
Suppose an online algorithm simultaneously maintains primal and dual assignments such that, for some constant Γ ∈ (0, 1], the following conditions hold at all times:
Approximate dual feasibility: For any u ∈ L and any v ∈ R, we have α_u + β_v ≥ Γ · w_{u,v}.
Reverse weak duality: The objectives of the primal and dual assignments satisfy P ≥ D.
Then, the algorithm is Γ-competitive.
By the first condition, the values α_u/Γ and β_v/Γ form a feasible dual assignment whose objective equals D/Γ. By weak duality of linear programming, the objective of any feasible dual assignment upper bounds the optimal primal objective (i.e., D is at least Γ times the optimal). Applying the second condition now proves the lemma. ∎
Online Primal-Dual in the CCDF Viewpoint.
In light of the CCDF viewpoint, for any offline vertex u and any weight-level w ≥ 0, we introduce and maintain new variables α_u(w) that satisfy: α_u = ∫_0^∞ α_u(w) dw.
Accordingly, we rephrase approximate dual feasibility in Lemma 2 in the CCDF viewpoint as: ∫_0^{w_{u,v}} α_u(w) dw + β_v ≥ Γ · w_{u,v}.
Concretely, at each step of our primal-dual algorithm, α_u(w) is a piecewise constant function of w with possible discontinuities at the weight-levels of u's incident edges. Initially, all of the α_u(w)'s are the zero function. Then, as each online vertex v arrives, if v is potentially matched to an offline candidate u, the function values of α_u(w) are systematically increased according to the dual update rules in Section 4.1. In contrast, each dual variable β_v is a scalar that is initialized to zero and increased only once during the algorithm, at the time when v arrives.
3 Online Correlated Selection: An Introduction
This section introduces our novel ingredient for online algorithms, which we believe to be widely applicable and of independent interest. To motivate this technique, consider the following thought experiment in the case of unweighted online matching, i.e., w_{u,v} ∈ {0, 1} for any u and any v.
We first recall why all deterministic greedy algorithms that match each online vertex to an unmatched offline neighbor are at most 1/2-competitive. Consider an instance with two offline and two online vertices. The first online vertex is adjacent to both offline vertices, and the algorithm deterministically chooses one of them. The second online vertex, however, is only adjacent to the previously matched vertex, so the algorithm finds a matching of size 1 while the optimal matching has size 2.
Two-Choice Greedy with Independent Random Bits.
We can avoid the problem above by matching the first online vertex randomly, which improves the expected matching size from 1 to 3/2. In this spirit, consider the following two-choice greedy algorithm. When an online vertex arrives, identify its neighbors that are least likely to be matched (over the randomness in previous rounds). If there is more than one such neighbor, choose any two, e.g., lexicographically, and match the online vertex to one of them with a fresh random bit. Otherwise, match it to the least-matched neighbor deterministically. We refer to the former as a randomized round and the latter as a deterministic round. Since each randomized round uses a fresh random bit, this is equivalent to matching to neighbors that have been chosen in the fewest randomized rounds and in no deterministic round. Unfortunately, this algorithm is also 1/2-competitive, due to upper triangular graphs. We defer this standard example to Appendix B.
Two-Choice Greedy with Perfect Negative Correlation.
The last algorithm in this thought experiment is an imaginary variant of two-choice greedy that perfectly and negatively correlates the randomized rounds, so that each offline vertex is matched with certainty after being a candidate in two randomized rounds. This is infeasible in general. Nevertheless, if we assume feasibility, then this algorithm is 5/9-competitive [HT19]. In fact, it is effectively the b-matching algorithm of Kalyanasundaram and Pruhs [KP00] with b = 2, obtained by making two copies of each online vertex and allowing each offline vertex to be matched twice.
Can we use partial negative correlation to retain feasibility and still break the 1/2 barrier?
We answer this question affirmatively by introducing an algorithmic ingredient called online correlated selection (OCS), which allows us to quantify the negative correlation among randomized rounds. Appendix A provides an analysis of the two-choice greedy algorithm powered by an OCS in the unweighted case. Furthermore, Section 4 generalizes this approach to edge-weighted online matching, achieving the first algorithm with a competitive ratio that is provably greater than 1/2.
Definition 1 (γ-semi-OCS).
Consider a set of ground elements. For any γ ∈ [0, 1), a γ-semi-OCS is an online algorithm that takes as input a sequence of pairs of elements and selects one element per pair, such that if an element appears in k pairs, it is selected at least once with probability at least: 1 - 2^{-k} (1 - γ)^{k-1}.
Using independent random bits gives a 0-semi-OCS, and the perfect negative correlation in the thought experiment corresponds to a 1-semi-OCS, which is typically infeasible. Our algorithms satisfy a stronger definition, which considers an arbitrary collection of the pairs containing an element rather than all of them. This stronger definition is useful for generalizing to the edge-weighted bipartite matching problem.
In the following definition, a subsequence (not necessarily contiguous) of the pairs containing an element is consecutive if it includes all the pairs that contain the element between the first and last pairs of the subsequence. Further, two subsequences of pairs are disjoint if no pair belongs to both of them. For example, consider the sequence (a, b), (a, c), (b, c), (a, d). For the element a, the subsequences (a, b), (a, c) and (a, d) are consecutive and disjoint, but the subsequence (a, b), (a, d) is not consecutive because it does not include the pair (a, c).
Definition 2 (γ-OCS).
Consider a set of ground elements. For any γ ∈ [0, 1), a γ-OCS is an online algorithm that takes as input a sequence of pairs of elements and selects one element per pair, such that for any element e and any m disjoint subsequences of consecutive pairs containing e, of lengths k_1, k_2, ..., k_m, the element e is selected in at least one of these pairs with probability at least: 1 - Π_{i=1}^{m} 2^{-k_i} (1 - γ)^{k_i - 1}.
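The two guarantees are easy to compare numerically. The helper names below are ours; note that γ = 0 recovers the bound for independent random bits, and that for a single subsequence of length k the γ-OCS bound coincides with the γ-semi-OCS bound.

```python
from math import prod

def semi_ocs_bound(k, gamma):
    """Lower bound on Pr[element selected at least once] for a gamma-semi-OCS,
    when the element appears in k pairs."""
    return 1 - 2 ** (-k) * (1 - gamma) ** (k - 1)

def ocs_bound(lengths, gamma):
    """gamma-OCS bound for disjoint consecutive subsequences of the given
    lengths k_1, ..., k_m."""
    return 1 - prod(2 ** (-k) * (1 - gamma) ** (k - 1) for k in lengths)

print(semi_ocs_bound(3, 0.0))     # independent bits: 1 - 2^{-3} = 0.875
print(semi_ocs_bound(3, 1 / 16))  # strictly better with negative correlation
```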
There exists a 13/64-OCS.
We defer the design and analysis of this OCS to Section 5, and instead describe a weaker 1/16-OCS, which is already sufficient for breaking the 1/2 barrier in edge-weighted online bipartite matching.
Proof Sketch of a 1/16-OCS.
Consider two sequences of random bits. The first is used to construct a random matching among the pairs in which any two consecutive pairs (with respect to some element) are matched with probability 1/16. Concretely, each pair is consecutive to at most four other pairs: one before it and one after it for each of its two elements. Each pair chooses one of its consecutive pairs, each with probability 1/4. Two consecutive pairs are matched if they choose each other, which happens with probability 1/16.
The second sequence of random bits is used to select elements from the pairs. For any unmatched pair, choose one of its two elements with a fresh random bit. For any two matched pairs, use a fresh random bit to choose an element of the earlier pair, and then make the opposite selection in the later one (i.e., select the common element in the later pair if and only if it is not selected in the earlier pair). Observe that even if two matched pairs are identical, there is no ambiguity in the opposite selection.
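The construction can be simulated directly. The sketch below is an offline simulation for illustration only (the actual OCS makes the same choices online, and all variable names are ours): it builds the random matching from the first source of randomness, then makes anti-correlated selections with the second.

```python
import random

def ocs_select(pairs, rng=random):
    """Illustrative (offline) simulation of the 1/16-OCS sketch.  `pairs` is a
    list of 2-tuples of hashable elements; returns one selected element per pair."""
    n = len(pairs)
    # Adjacency: each pair is consecutive to at most four pairs, the previous
    # and next pair containing each of its two elements.
    last_seen = {}
    neighbors = [[] for _ in range(n)]
    for i, pair in enumerate(pairs):
        for e in pair:
            if e in last_seen:
                neighbors[last_seen[e]].append(i)
                neighbors[i].append(last_seen[e])
            last_seen[e] = i
    # First randomness: each pair points to one consecutive pair w.p. 1/4 each
    # (and to nothing otherwise); pairs are matched iff they point at each other.
    pointer = []
    for i in range(n):
        options = neighbors[i] + [None] * (4 - len(neighbors[i]))
        pointer.append(rng.choice(options))
    mate = [None] * n
    for i in range(n):
        j = pointer[i]
        if j is not None and pointer[j] == i:
            mate[i] = j
    # Second randomness: a fresh bit for each unmatched pair and for the earlier
    # pair of a matched couple; the later pair selects oppositely w.r.t. the
    # shared element.
    selection = [None] * n
    for i, pair in enumerate(pairs):
        j = mate[i]
        if j is not None and j < i:
            common = (set(pair) & set(pairs[j])).pop()
            other = pair[0] if pair[1] == common else pair[1]
            selection[i] = other if selection[j] == common else common
        else:
            selection[i] = pair[0] if rng.random() < 0.5 else pair[1]
    return selection
```

Structurally, every returned element belongs to its pair, and two matched pairs never both deselect their shared element.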
Next, fix any element e and any m disjoint subsequences of consecutive pairs containing e, of lengths k_1, ..., k_m. We bound the probability that e is never selected. If any two of these pairs are matched to each other, e is selected in exactly one of them. Otherwise, the selections in these pairs are independent, and the probability that e is never selected is 2^{-(k_1 + ... + k_m)}. Applying the law of total probability to the event that no two of these pairs are matched to each other, it remains to upper bound the probability of this event by (1 - 1/16)^{Σ_i (k_i - 1)}. Intuitively, this holds because there are k_i - 1 choices of two consecutive pairs within the i-th subsequence, each of which is matched with probability 1/16; further, these events are negatively dependent, so the probability that none of them happens is at most what it would be if they were independent. The formal analysis in Section 5 substantiates this claim. ∎
4 Edge-Weighted Online Matching
This section presents an online primal-dual algorithm for the edge-weighted online bipartite matching problem. The algorithm uses a γ-OCS as a black box, and its competitive ratio depends on the value of γ. For γ = 1/16 (as sketched in Section 3), it is already strictly better than 1/2-competitive, and for γ = 13/64 (as in Theorem 3), it is 0.5086-competitive, proving our main result about edge-weighted online matching.
4.1 Online Primal-Dual Algorithm
The algorithm is similar to the two-choice greedy in the previous section. It maintains an OCS with the offline vertices as the ground elements. For each online vertex, the algorithm either (1) matches it deterministically to one offline neighbor, (2) chooses a pair of offline neighbors and matches to the one selected by the OCS, or (3) leaves it unmatched. We refer to the first case as a deterministic round, the second as a randomized round, and the third as an unmatched round.
How does the algorithm decide whether a round is randomized, deterministic, or unmatched, and how does it choose the candidate offline vertices? We leverage the online primal-dual framework. When an online vertex v arrives, the algorithm calculates for every offline vertex u how much the dual variable β_v would gain if v were matched to u in a deterministic round, denoted by Δβ_v^D(u), and similarly Δβ_v^R(u) for a randomized round. Then it finds the vertex u* maximizing Δβ_v^D(u), and the pair u_1, u_2 maximizing Δβ_v^R(u_1) + Δβ_v^R(u_2). If both of these quantities are negative, it leaves v unmatched. If Δβ_v^R(u_1) + Δβ_v^R(u_2) is nonnegative and greater than Δβ_v^D(u*), it matches v in a randomized round with u_1 and u_2 as the candidates, using its OCS. Finally, if Δβ_v^D(u*) is nonnegative and at least Δβ_v^R(u_1) + Δβ_v^R(u_2), it matches v to u* in a deterministic round. See Algorithm 1 for the formal definition of the algorithm.
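The decision rule can be summarized in a few lines. In this sketch, `delta_D(u)` and `delta_R(u)` are callables standing for the deterministic and randomized gains to the online vertex's dual variable; the names are ours, and the actual gain values come from the gain-sharing rules described next.

```python
def choose_round(offline, delta_D, delta_R):
    """Decide the round type for an arriving online vertex.  Returns
    ('unmatched',), ('deterministic', u), or ('randomized', u1, u2).
    Assumes at least two offline vertices."""
    u_det = max(offline, key=delta_D)
    best_det = delta_D(u_det)
    u1, u2 = sorted(offline, key=delta_R, reverse=True)[:2]
    best_rand = delta_R(u1) + delta_R(u2)
    if best_det < 0 and best_rand < 0:
        return ("unmatched",)
    if best_rand >= 0 and best_rand > best_det:
        return ("randomized", u1, u2)  # hand (u1, u2) to the OCS
    return ("deterministic", u_det)
```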
It remains to explain how Δβ_v^D(u) and Δβ_v^R(u) are calculated. For any offline vertex u and any weight-level w ≥ 0, let k_u(w) be the number of randomized rounds in which u has been chosen as a candidate with edge weight at least w. The values of k_u(w) may change over time, so we consider their values at the beginning of each online round. The increments to the dual variables α_u and β_v depend on the values of k_u(w) via the following gain-sharing parameters, which we determine later using a factor-revealing LP to optimize the competitive ratio. The gain-sharing values are listed at the end of this section in Table 1.
a_k: Amortized increment in the dual variable α_u(w) if u is chosen as one of the two candidates in a randomized round in which its edge weight is at least w and k_u(w) = k.
b_k: Increment in the dual variable β_v due to an offline vertex u at weight-level w, if v is matched in a randomized round with u as one of the two candidates and k_u(w) = k.
Note that the gain-sharing values a_k and b_k are instance-independent (i.e., they do not depend on the underlying graph) and are defined for all integers k ≥ 0. We interpret these parameters according to a gain-splitting rule. If u is one of the two candidates to be matched to v in a randomized round, the increase in the expected weight of the heaviest edge matched to u equals the integral of the increments of y_u(w) for w between 0 and w_{u,v}, which can be related to the values of the k_u(w)'s. We then lower bound the gain due to the increment of y_u(w) using the definition of a γ-OCS, and split this gain into two parts, a_{k_u(w)} and b_{k_u(w)}. The former is assigned to α_u(w) and the latter goes to β_v.
In fact, we prove at the end of this subsection the following invariant about how the dual variables α_u(w) are incremented:
Next, define Δβ_v^R(u), the offer made to v for choosing u in a randomized round, to be:
We should think of Δβ_v^R(u) as the increase in the dual variable β_v due to offline vertex u, if u is chosen as one of the two candidates for v in a randomized round. The first term in Eqn. (5) follows from the interpretation of the b_k's above (and would be the only term in the unweighted case). The second term is designed to cancel out the extra help we get from the a_k's at weight-levels above w_{u,v}, in order to satisfy approximate dual feasibility for the edge (u, v). Concretely, if u is matched in randomized rounds with candidates at least as good as v, our choice of these parameters ensures approximate dual feasibility between u and v (i.e., the following inequality holds):
Finally, for some constant κ ∈ (0, 1), define the value of Δβ_v^D(u) to be:
The competitive ratio is insensitive to the exact choice of κ, as long as it is neither too close to 0 nor to 1. On the one hand, κ > 0 ensures that if the algorithm chooses a randomized round with offline vertices u and u' as the candidates, the contribution from u' to β_v must be at least a κ fraction of what u offers; otherwise, the algorithm would have preferred a deterministic round with u alone. On the other hand, we need κ < 1 because otherwise a randomized round would always be inferior to a deterministic round. We further explain the definitions of Δβ_v^R(u) and Δβ_v^D(u) in Subsection 4.3, where we demonstrate how their terms interact when proving that the dual assignments always satisfy approximate dual feasibility.
We have defined the primal algorithm and, implicitly, how the dual algorithm updates the β_v's. It remains to define the updates to the α_u(w)'s. Before that, we first characterize the increment of the primal objective, since the dual updates are driven by it. Recall that by the CCDF viewpoint: P = Σ_{u ∈ L} ∫_0^∞ y_u(w) dw.
Since it is difficult to account for the exact CCDF due to complicated correlations in the selections, we instead consider a lower bound for it given by the γ-OCS. A critical observation here is that the decisions made by the primal-dual algorithm are deterministic, except for the randomness inside the OCS. In particular, its choices of the candidate offline vertices and its decisions about whether a round is unmatched, randomized, or deterministic are independent of the selections made by the OCS, and are therefore deterministic quantities governed solely by the input graph and the arrival order of the online vertices. Hence, we may view the sequence of pairs of candidates sent to the OCS as fixed.
For any offline vertex u and any weight-level w ≥ 0, consider the randomized rounds in which u is a candidate with edge weight at least w. Decompose these rounds into m disjoint collections of, say, k_1, ..., k_m consecutive rounds. By Definition 2, vertex u is selected by the γ-OCS in at least one of these rounds with probability at least: 1 - Π_{i=1}^{m} 2^{-k_i} (1 - γ)^{k_i - 1}.
Accordingly, let ȳ_u(w) denote this lower bound on y_u(w) (with ȳ_u(w) = 1 if u has been matched in a deterministic round with edge weight at least w), and use the following surrogate primal objective: P̃ = Σ_{u ∈ L} ∫_0^∞ ȳ_u(w) dw.
The primal objective is lower bounded by the surrogate, i.e., P ≥ P̃.
It will often be more convenient to consider the following characterization of ȳ_u(w), in terms of the gap 1 - ȳ_u(w):
Initially, let ȳ_u(w) = 0.
If u is matched in a deterministic round in which its edge weight is at least w, let ȳ_u(w) = 1.
If u is chosen in a randomized round in which its edge weight is at least w, further consider w', its edge weight in the previous randomized round involving u; let w' = 0 if this is the first randomized round involving u. Then, decrease the gap 1 - ȳ_u(w) by a factor of (1 - γ)/2 if w' ≥ w, i.e., if this is the second or later pair of a collection of consecutive pairs containing u with edge weight at least w; otherwise, decrease the gap by a factor of 1/2, to account for the exponent k_i - 1 of (1 - γ) in Eqn. (7).
For any offline vertex u and any weight-level w ≥ 0 such that u has not been matched in a deterministic round with edge weight at least w, we have: 1 - ȳ_u(w) ≥ 2^{-k_u(w)} (1 - γ)^{k_u(w) - 1}, where the right-hand side is interpreted as 1 when k_u(w) = 0.
Initially, 1 - ȳ_u(w) equals 1. It is then multiplied by 1/2 in the first randomized round involving u with edge weight at least w, and by at least (1 - γ)/2 in each of the subsequent rounds. ∎
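As a sanity check of this characterization, the following sketch (the class and method names are ours, with `gamma` the OCS parameter) tracks the gap at one fixed weight-level under the update rules above; after k consecutive randomized rounds with edge weight at least w, the gap is exactly 2^{-k} (1 - γ)^{k-1}.

```python
class LevelBound:
    """Bookkeeping for the lower bound at a single weight-level w.
    `gap` stores 1 - ybar_u(w)."""

    def __init__(self, gamma):
        self.gamma = gamma
        self.gap = 1.0          # initially ybar_u(w) = 0
        self.prev_weight = 0.0  # u's edge weight in its previous randomized round

    def deterministic_round(self, weight, w):
        # A deterministic match with edge weight >= w drives ybar_u(w) up to 1.
        if weight >= w:
            self.gap = 0.0

    def randomized_round(self, weight, w):
        if weight >= w:
            if self.prev_weight >= w:
                # Second or later pair of a collection of consecutive pairs.
                self.gap *= (1 - self.gamma) / 2
            else:
                # First pair of a collection: a plain halving.
                self.gap *= 1 / 2
        self.prev_weight = weight

def ybar(level):
    return 1.0 - level.gap
```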
This characterization immediately yields a lower bound for the increment in ȳ_u(w) in a deterministic round.
For any offline vertex u and any weight-level w ≥ 0, if u is matched in a deterministic round in which its edge weight is at least w, the increment in ȳ_u(w) is at least: 2^{-k_u(w)} (1 - γ)^{k_u(w) - 1}.
For any offline vertex u and any weight-level w ≥ 0, if u is chosen as a candidate in a randomized round in which its edge weight is at least w, the increment in ȳ_u(w) is at least: 2^{-k_u(w) - 1} (1 - γ)^{k_u(w) - 1}.
Suppose further that vertex u's edge weight was also at least w in the last randomized round involving u. Then, k_u(w) ≥ 1 and the increment in ȳ_u(w) is at least: (1 + γ) · 2^{-k_u(w) - 1} (1 - γ)^{k_u(w) - 1}.
By definition, the gap 1 - ȳ_u(w) shrinks by a factor of either 1/2 or (1 - γ)/2 in a randomized round, depending on whether u's edge weight was at least w the last time it was chosen in a randomized round. Therefore, the increment in ȳ_u(w) is either a 1/2 fraction of the gap, or a (1 + γ)/2 fraction. Putting this together with the lower bound for the gap in Lemma 5 proves the lemma. ∎
Dual Updates to Online Vertices.
Consider any online vertex v at the time of its arrival. The dual variable β_v will only increase at the end of this round, depending on the type of the round. If v is left unmatched, β_v remains zero. If v is matched in a randomized round with candidates u_1 and u_2, set β_v = Δβ_v^R(u_1) + Δβ_v^R(u_2). Lastly, if v is matched to some u in a deterministic round, set β_v = Δβ_v^D(u).
Dual Updates to Offline Vertices: Proof of Eqn. (4).
Fix any offline vertex u. Suppose that u is matched in a deterministic round in which its edge weight is w_{u,v}. Then, for any weight-level w > w_{u,v}, the value of k_u(w) stays the same, so we leave α_u(w) unchanged. On the other hand, for any weight-level w ≤ w_{u,v}, the value of ȳ_u(w) becomes 1 by definition. Therefore, to maintain the invariant in Eqn. (4), we increase α_u(w) for each weight-level w ≤ w_{u,v} by:
The updates in randomized rounds are more subtle. Suppose u is one of the two candidates in a randomized round in which its edge weight is w_{u,v}. Further, consider u's edge weight the last time it was chosen in a randomized round, denoted by w'; let w' = 0 if this is the first randomized round involving u. Then, w_{u,v} and w' partition the weight-levels into up to three subsets, each of which requires a different update rule for α_u(w). Concretely, the algorithm increases α_u(w) by:
The first case is straightforward: we simply increase α_u(w) by a_{k_u(w)} to maintain the invariant in Eqn. (4). Observe that this is the only case in the unweighted problem.
For a weight-level w that falls into the second case (if there is any), the increment in α_u(w) is smaller than in the first case by the difference between the two lower bounds for the increment in ȳ_u(w) in Lemma 7, which depend on whether u's edge weight was at least w the last time it was chosen in a randomized round. Since the increase in the surrogate primal objective due to vertex u and weight-level w is smaller than in the first case of Eqn. (9), we subtract this difference from the increment in α_u(w) so that the update to β_v is unaffected.
How can we still maintain the invariant in Eqn. (4) given the subtraction in the second case? Observe that if the second case happens at some weight-level, the same weight-level must have fallen into the third case in the previous randomized round involving u. Thus, an equal amount was prepaid to α_u(w) in that previous round. This give-and-take in the offline dual updates becomes clear when we prove reverse weak duality in the next subsection.
4.2 Online Primal-Dual Analysis: Reverse Weak Duality
This subsection derives a set of sufficient conditions under which the increment in the surrogate primal P̃ is at least the increment in the dual objective D in every round. Reverse weak duality then follows from P ≥ P̃ ≥ D.
Suppose v is matched to u in a deterministic round. Using the lower bound for the increment of ȳ_u(w) in Lemma 6, the increase of the α_u(w)'s in Eqn. (8), and a lower bound for Δβ_v^D(u) obtained by dropping the second term in Eqn. (6), we need:
We will ensure the inequality locally at every weight-level, so it suffices to have:
Now suppose v is matched in a randomized round with candidates u and u'. We show that the increment in P̃ due to u is at least the increase in the α_u(w)'s plus u's contribution to β_v (i.e., Δβ_v^R(u)). The same holds for u' by symmetry, and together they prove reverse weak duality.
Let $w$ be the candidate's edge weight in this round, and let $w'$ be its edge weight the last time it was chosen in a randomized round; set $w' = 0$ if this has not happened. Then, $w$ and $w'$ partition the weight-levels into three subsets, corresponding to the three cases for incrementing the dual variables in a randomized round as in Eqn. (9).
The first case is when the weight-level lies below both the current edge weight $w$ and the previous one $w'$. By Lemma 7, the increase in P due to this candidate at such a weight-level is at least:
The second case is when the weight-level lies below the current edge weight $w$ but not below the previous one $w'$. By Lemma 7, the increment in P due to the candidate at such a weight-level is at least the weaker of the two lower bounds. The second case of Eqn. (9) specifies the corresponding increase in the offline dual variable, and there is also a contribution to the first term of the online dual variable at this weight-level. Hence, we need:
Rearranging the second term to the RHS gives us the same conditions as the second part of Eqn. (11).
The third case is when the weight-level lies below the previous edge weight $w'$ but not below the current one $w$. The increment in P due to the candidate at such a weight-level is 0. The last case of Eqn. (9) specifies the increase in the offline dual variable, and there is a negative contribution from the second term of the online dual variable at this weight-level. Hence, we need:
The first term is decreasing and the second is increasing in the quantity being varied, so it suffices to consider its extreme value:
4.3 Online Primal-Dual Analysis: Approximate Dual Feasibility
This subsection derives a set of conditions that are sufficient for approximate dual feasibility, i.e., Eqn. (3). Start by fixing an online vertex, an offline neighbor of it, and the values of the offline dual variables at the moment the online vertex arrives.
Boundary Condition at the Limit.
First, it may be the case that the online vertex remains unmatched, so that its dual variable is zero in this round; the contribution from the offline dual variables alone must then ensure approximate dual feasibility. To do so, we will ensure that the offline dual value at each weight-level is sufficiently large in the limit. By the invariant in Eqn. (4), it suffices to have:
Next, we consider five different cases, depending on whether the round of the online vertex is randomized or deterministic, or leaves it unmatched, and on whether the offline vertex is chosen as a candidate. We first analyze the cases in which the round is randomized, and then show that the other cases only require weaker conditions.
Case 1: The round of the online vertex is randomized, and the offline vertex is not chosen.
We will again ensure this inequality at every weight-level. Therefore, it suffices to have:
Case 2: The round of the online vertex is randomized, and the offline vertex is chosen.
By symmetry, suppose WLOG that the offline vertex under consideration is the first of the two candidates. Next, we derive a lower bound only in terms of its own values. Since the algorithm does not choose a deterministic round with this vertex alone, its selection rule yields one inequality, and Eqn. (6) yields another. Combining these with the definition in Eqn. (5), the online dual variable is at least:
Lower bounding the offline dual variables is more subtle. Recall that we use their values at the beginning of the round in which the online vertex arrives. In this round, the value at every weight-level below the relevant edge weight increases, while the value at every other weight-level stays the same. Therefore, the contribution of the offline dual variables to approximate dual feasibility is at least:
Finally, since the corresponding increments are nonnegative, the net contribution from the remaining weight-levels is nonnegative, so we can drop them. Then approximate dual feasibility as in Eqn. (3) becomes:
Thus, it suffices to ensure the inequality locally at every weight-level:
Case 3: The round of the online vertex is deterministic, and the offline vertex is not chosen.
By definition, the online dual variable takes its value from a deterministic round. Next, we derive a lower bound in terms of the unchosen offline vertex's values. Since the algorithm does not choose a randomized round with these two offline vertices as the candidates, one inequality follows; by Eqn. (6), together with the fact that the other vertex is chosen in a deterministic round, another follows. Putting these together gives a bound identical to the lower bound in the first case. Therefore, approximate dual feasibility is guaranteed by Eqn. (14).
Case 4: The round of the online vertex is deterministic, and the offline vertex is chosen.
Case 5: The round leaves the online vertex unmatched.
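The five-way case analysis can be summarized as a small dispatch. The function name, argument names, and the string encoding of round types below are illustrative assumptions; only the case numbering, which matches Cases 1 through 5 above, comes from the text.

```python
def dual_feasibility_case(round_type: str, offline_chosen: bool) -> int:
    """Map a round's outcome for a fixed edge to the case number used in
    the approximate dual feasibility analysis.

    round_type: 'randomized', 'deterministic', or 'unmatched'
    offline_chosen: whether the offline vertex is chosen as a candidate.
    """
    if round_type == 'randomized':
        return 2 if offline_chosen else 1
    if round_type == 'deterministic':
        return 4 if offline_chosen else 3
    if round_type == 'unmatched':
        return 5
    raise ValueError(f"unknown round type: {round_type}")
```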
4.4 Optimizing the Gain-Sharing Parameters
To optimize the competitive ratio obtained from the above online primal-dual analysis, it remains to solve for the gain-sharing parameters using the following LP:
We obtain a lower bound on the competitive ratio by solving a more restricted LP, which is finite. In particular, we require the gain-sharing parameters to be constant on each of $k$ equal-length intervals, for some sufficiently large integer $k$, so that the LP becomes:
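To illustrate the discretization technique on a well-known special case (not this section's actual LP), consider the classical gain-sharing LP for the unweighted problem, whose value tends to $1 - 1/e$. Restricting the gain-sharing function $g : [0,1] \to [0,1]$ to be constant on $k$ equal intervals with values $g_1, \dots, g_k$, we maximize $\Gamma$ subject to $1 - g_i + \frac{1}{k}\sum_{j<i} g_j \ge \Gamma$ for every $i$, together with the boundary condition $\frac{1}{k}\sum_j g_j \ge \Gamma$ (the analogue of the boundary condition in Section 4.3). All names here are assumptions for illustration; the point is only how restricting to piecewise-constant parameters turns an infinite LP into a finite one that still yields a valid lower bound.

```python
def finite_lp_value(k: int, iters: int = 60) -> float:
    """Largest ratio G such that some g_1..g_k in [0,1] satisfies
        1 - g_i + (1/k) * sum_{j<i} g_j >= G   for all i, and
        (1/k) * sum_j g_j >= G                 (boundary condition).

    Solved by binary search on G: for fixed G, choosing each g_i as
    large as allowed maximizes every prefix sum, so the greedy choice
    certifies (in)feasibility without a general LP solver.
    """
    def feasible(G: float) -> bool:
        s = 0.0  # running sum g_1 + ... + g_{i-1}
        for _ in range(k):
            s += min(1.0, 1.0 - G + s / k)  # greedy maximal g_i
        return s / k >= G  # boundary condition

    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For $k = 200$ this evaluates to roughly $0.6312$, just below $1 - 1/e \approx 0.6321$, and the gap closes as $k$ grows, mirroring how the restricted finite LP here lower bounds the true competitive ratio.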
We present an approximately optimal solution to the finite LP in Table 0(a), which gives our claimed competitive ratio. We also tried different values of the discretization parameter: the resulting bound degrades slightly at extreme values and is essentially unchanged otherwise. Hence, the analysis is robust to this choice, so long as the parameter is not too close to either extreme.