Trading the System Efficiency for the Income Equality of Drivers in Rideshare

12/12/2020 ∙ by Yifan Xu, et al. ∙ New Jersey Institute of Technology

Several scientific studies have reported the existence of an income gap among rideshare drivers based on demographic factors such as gender, age, and race. In this paper, we study the income inequality among rideshare drivers caused by discriminative cancellations from riders, and the tradeoff between this income inequality (called the fairness objective) and the system efficiency (called the profit objective). We propose an online bipartite-matching model where riders are assumed to arrive sequentially following a distribution known in advance. The highlight of our model is the concept of an acceptance rate between any pair of driver and rider types, where types are defined based on demographic factors. Specifically, we assume each rider either accepts or cancels the driver assigned to her, each with a certain probability reflecting the acceptance degree of the rider type toward the driver type. We construct a bi-objective linear program as a valid benchmark and propose two LP-based parameterized online algorithms. Rigorous online competitive-ratio analysis is offered to demonstrate the flexibility and efficiency of our online algorithms in balancing the two conflicting goals, the promotion of fairness and of profit. Experimental results on a real-world dataset are provided as well, and they confirm our theoretical predictions.


1 Introduction

Rideshare platforms such as Uber and Lyft have received significant attention in the research communities of computer science, operations research, and business, to name a few. One main research topic is the design of matching policies for pairing drivers and riders; see, e.g., [danassis2019, curry2019mix, ashlagi2019edge, Patrick-18-JAI, BeiZ18, dickerson2018assigning, xu-aaai-19]. Most existing work focuses on promoting system efficiency, user satisfaction, or both.

In this paper, we study fairness among rideshare drivers. Several reports document an earnings gap among drivers based on demographic factors such as age, gender, and race; see [cook2018, rosenblat2016]. In particular, [wage-gap] reported that “Black Uber and Lyft drivers earned $13.96 an hour compared to the $16.08 average for all other drivers” and “Women drivers reported earning an average of $14.26 per hour, compared to $16.61 for men”. The wage gap among drivers from different demographic groups is partially due to discriminative cancellations from riders, which are especially visible during off-peak hours, when the number of riders is comparable to or even smaller than the number of drivers. Note that in rideshare platforms like Uber and Lyft, after a driver accepts a rider, (1) sensitive information about the driver, such as name and photo, becomes accessible to the rider, and (2) the rider can cancel within the first two minutes free of charge [cancel-policy]. This makes discriminative cancellations from riders technically possible and economically worry-free.

We aim to address the income disparity among drivers due to discriminative cancellations from riders and its tradeoff with system efficiency. Note that the two goals, promoting group-level income equality among drivers and promoting system efficiency, are somewhat conflicting. Consider off-peak hours, for example, when riders are a scarce resource. To maximize system efficiency, a rideshare platform like Uber should please riders by assigning them to their “favorite” drivers. This effectively reduces possible cancellations from riders and thus minimizes the risk of driving riders away to rivals like Lyft. This measure, however, gives drivers who are “popular” among riders far more chances of receiving orders than others and, as a result, greatly harms group-level income equality.

In this paper, we propose two parameterized matching policies that smoothly trade off the above two goals with provable performance guarantees. We adopt an online-matching-based model to capture the dynamics of rideshare, as commonly done before [dickerson2018assigning, xu-aaai-19]. Assume a bipartite graph G = (I, J, E), where I and J represent the sets of types of offline drivers and online requests, respectively. Each driver type i ∈ I represents a specific demographic group (defined by gender, age, race, etc.) with a given location, while each request type j ∈ J represents a specific demographic group with a given starting and ending location. There is an edge e = (i, j) if a driver of type i is capable of serving a request of type j (i.e., the distance between them is below a given threshold). For simplicity, we refer to a driver of type i and a request of type j directly as driver i and request j when the context is clear. The online phase consists of T rounds, and in each round a request arrives dynamically. Upon its arrival, an immediate and irrevocable decision is required: either reject it or assign it to a neighboring driver in I. We assume each driver type i has a matching capacity c_i, which captures the number of drivers belonging to type i. Additionally, we make the following key assumptions in the model.

Arrivals of online requests. We consider a finite time horizon of T rounds (known to the algorithm). In each time step (or round) t, a request of type j is sampled (i.e., arrives) from a known distribution {p_j} over request types. The sampling process is independent and identical across the online rounds. For each j, let r_j = p_j · T, which is called the arrival rate of request j, i.e., the expected number of arrivals of j over the horizon. Our arrival assumption is commonly called known identical independent distributions (KIID). It is mainly inspired by the fact that we can often learn the arrival distribution from historical logs [Yao2018deep, DBLP:conf/kdd/LiFWSYL18]. KIID is widely used in many practical applications of online matching markets, including rideshare and crowdsourcing [xu-aaai-19, dickerson2018assigning, singer2013pricing, singla2013truthful].
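To make the KIID arrival process concrete, here is a minimal Python sketch; the request-type names and rates are placeholders, not values from the paper.

import random

# Hypothetical request types with arrival rates r_j over a horizon of T rounds.
# Under KIID, each round draws one request type i.i.d. with probability p_j = r_j / T.
T = 1000
arrival_rate = {"j1": 300.0, "j2": 500.0, "j3": 200.0}   # sums to T

types = list(arrival_rate)
weights = [arrival_rate[j] / T for j in types]
arrival_sequence = random.choices(types, weights=weights, k=T)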

Edge existence probabilities. Each edge e = (i, j) is associated with an existence probability p_e, which captures the statistical acceptance rate of a request of type j toward a driver of type i. The random process is as follows: once we assign j to i, we observe an immediate random outcome of the existence, which is present (j accepts i) with probability p_e and absent (j cancels i) otherwise. We assume that (1) the randomness associated with edge existence is independent across all edges, and (2) the values {p_e} are given as part of the input. The first assumption is motivated by individual choice, and the second by the fact that historical logs can be used to compute such statistics with high precision.

Patience of requests. Each request j is associated with a patience Δ_j, which captures an upper bound on the number of unsuccessful assignments the request can tolerate before leaving the platform. Under patience constraints, we can dispatch each request j to at most Δ_j different drivers. Observe that we cannot broadcast j to a set of at most Δ_j different drivers simultaneously. Instead, we must assign j to at most Δ_j distinct drivers (possibly of the same type) in a sequential manner until either j accepts one or leaves the system after running out of patience. We refer to this as the online probing process (OPP). Note that OPP starts immediately after a request arrives, if it is not rejected by the algorithm, and ends within a single round, before the next request arrives.
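For concreteness, the following Python sketch simulates the OPP for a single arriving request under the assumptions above; the probe order is supplied by the matching policy, and all identifiers are ours.

import random

def online_probing_process(j, candidate_drivers, p, patience, available):
    # Sequentially assign request j to candidate driver types until one accepts
    # or j runs out of patience.  p[(i, j)] is the existence (acceptance)
    # probability of edge (i, j); available[i] is the remaining number of
    # drivers of type i.
    probes = 0
    for i in candidate_drivers:              # order chosen by the matching policy
        if probes >= patience:
            return None                      # j leaves the platform
        if available.get(i, 0) <= 0:
            continue                         # no driver of this type left; skip
        probes += 1
        if random.random() < p[(i, j)]:      # rider accepts with probability p_e
            available[i] -= 1
            return i                         # successful match
    return None                              # candidates exhausted, j unmatched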

We say an assignment is successful if j is assigned to i and j accepts i, which occurs with probability p_e. Assume that the platform gains a profit w_e from a successful assignment of edge e (we then call e a match). For a given policy, let M be the (possibly random) set of successful assignments; we interchangeably use the term matching to denote this set M. Inspired by the work of [nanda2019, lesmana2019], we define two objectives, namely profit and fairness, which capture the system efficiency and the group-level income equality among drivers, respectively.

Profit:

The expected total profit over all matches obtained by the platform, i.e., the expectation of the sum of w_e over all edges e in the matching M.

Fairness:

Let M_i be the set of edges in M incident to driver type i. Define the fairness achieved by the policy over all driver types as the minimum, over all i ∈ I, of the expected number of matches received by type i, i.e., the expected size of M_i.
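As a quick illustration of how the two objectives could be evaluated on one realized run (in practice one would average over many simulated runs to estimate the expectations); the data structures are ours, and a capacity-normalized variant of the fairness count is analogous.

def profit(matches, w):
    # Total profit of a realized matching; matches is a list of edges (i, j).
    return sum(w[e] for e in matches)

def fairness(matches, driver_types):
    # Minimum, over all driver types, of the number of matches that type received.
    per_type = {i: 0 for i in driver_types}
    for (i, _) in matches:
        per_type[i] += 1
    return min(per_type.values())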

1.1 Preliminaries and Main Contributions

Competitive ratio. The competitive ratio is a commonly used metric for evaluating the performance of online algorithms. Consider an online maximization problem, for example. Let ALG denote the expected performance of an online algorithm on a given instance, where the expectation is taken over the random arrival sequence, and let OPT denote the expected offline optimal on that instance, i.e., the expected optimal value after the full arrival sequence is observed. The competitive ratio is then defined as the minimum, over all instances, of the ratio ALG/OPT. It is a common technique to use a linear program to upper bound OPT (the LP is then called the benchmark) and hence obtain a valid lower bound on the target competitive ratio. In our paper, we conduct online competitive-ratio analysis on both objectives.

Main contributions. Our contributions can be summarized in three aspects. First, we propose a new online-matching-based model to address the income inequality among drivers from different demographic groups and its trade-off with system efficiency in rideshare. Second, we present a robust theoretical analysis of our model. We first construct a bi-objective linear program (LP-(1) and LP-(2)), which is proved to offer valid upper bounds on the respective maximum profit and fairness of the offline optimal. Then, we propose two LP-based parameterized online algorithms with provable performance guarantees on both objectives. We say an online algorithm achieves an (x, y)-competitive ratio if it achieves competitive ratios x and y on the profit and fairness against benchmarks LP-(1) and LP-(2), respectively. The theorems below suggest that the refined algorithm can achieve a nearly optimal ratio on each single objective, fairness or profit, though there is some room for improvement left for the sum of the two ratios.

The warm-up algorithm achieves, simultaneously, competitive ratios on the profit and the fairness for any parameters α, β ≥ 0 with α + β ≤ 1; the exact bounds are functions of α and β.

The refined algorithm (with attenuation) achieves, simultaneously, competitive ratios on the profit and the fairness for any parameters α, β ≥ 0 with α + β ≤ 1; its bounds improve upon those of the warm-up algorithm.

No algorithm can achieve an (x, y)-competitive ratio simultaneously on the profit and fairness with x, y, or x + y exceeding certain thresholds when LP-(1) and LP-(2) are used as benchmarks.

Last, we test our model and algorithms on a real dataset collected from a large on-demand taxi dispatching platform. The experimental results confirm our theoretical predictions and demonstrate both the flexibility of our algorithms in trading off the two conflicting objectives and their efficiency compared to natural heuristics.

2 Related Work

Fairness in operations is an interesting topic with a large body of work [Bertsimas2011ThePO, Bertsimas2012OnTE, Chen2018WhyAF, Lyu2019MultiObjectiveOR, Cohen2019PriceDW, Ma2020GrouplevelFM, Chen2020SameDayDW]. Here are a few recent works addressing fairness issues in rideshare. [suhr2019] proposed two notions of amortized fairness for the fair distribution of income among rideshare drivers: one related to absolute income equality, the other to income equality averaged over active time. [lesmana2019] considered nearly the same two objectives as proposed in this paper. Note that both of the aforementioned works consider an essentially offline setting, in which all arrivals of online requests are known in advance (by restricting attention to a short time window). Additionally, both ignore potential cancellations from riders and assume each rider will surely accept the assigned driver (all existence probabilities equal one). The recent work [nanda2019] studied an interesting “dual” setting to ours: they focused on peak hours and examined fairness on the rider side due to discriminative cancellations from drivers.

Our model technically belongs to a more general optimization paradigm called multi-objective optimization. A few theoretical works have studied the design of approximation or online algorithms achieving bi-criteria approximation and/or online competitive ratios; see [ravi1993many, grandoni2009, korula2013bicriteria, aggarwal2014, esfandiari2016bi]. The works [bansal2012lp, BSSX17, fata2019multi] have the closest setting to ours: each edge has an independent existence probability, and each vertex on the offline and/or online side has a patience constraint. However, all of them investigate a single objective: maximizing the total profit over all matched edges.

3 Valid Benchmarks for Profit and Fairness

We first present our benchmark LPs and then an LP-based parameterized algorithm. For each edge e = (i, j), let x_e be the expected number of probes on edge e (assignments of j to i, not necessarily matches) in the offline optimal. For each i ∈ I (j ∈ J), let E_i (E_j) be the set of neighboring edges incident to i (j). Consider the following bi-objective LP.

max (1)  [expected total profit over all edges, weighted by the existence probabilities]
max (2)  [minimum, over all driver types i, of the expected number of matches incident to i]
s.t. (3)  [expected matches incident to each driver type i at most its capacity c_i]
(4)  [expected probes incident to each request type j at most r_j Δ_j]
(5)  [expected matches incident to each request type j at most r_j]
(6)  [0 ≤ x_e ≤ r_j for every edge e = (i, j)]

Let LP-(1) and LP-(2) denote the two LPs with the respective objectives (1) and (2), each subject to Constraints (3), (4), (5), and (6). Note that the max–min Objective (2) can be rewritten as a linear objective by introducing an auxiliary variable and one additional linear constraint for each driver type; for presentation convenience, we keep the current compact version. The validity of LP-(1) and LP-(2) as benchmarks for our two objectives is established in the lemma below.
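For readers who want the formulation spelled out, the following LaTeX sketch gives one bi-objective LP consistent with the constraint descriptions in the proof of the lemma below; it is our reconstruction under the notation defined above, not a verbatim copy of the original program.

\begin{align}
\max \quad & \textstyle\sum_{e \in E} w_e \, p_e \, x_e            && \text{(1): profit} \\
\max \quad & \min_{i \in I} \textstyle\sum_{e \in E_i} p_e \, x_e   && \text{(2): fairness} \\
\text{s.t.} \quad & \textstyle\sum_{e \in E_i} p_e \, x_e \le c_i   && \forall i \in I \quad \text{(3)} \\
& \textstyle\sum_{e \in E_j} x_e \le r_j \Delta_j                   && \forall j \in J \quad \text{(4)} \\
& \textstyle\sum_{e \in E_j} p_e \, x_e \le r_j                     && \forall j \in J \quad \text{(5)} \\
& 0 \le x_e \le r_j                                                 && \forall e = (i,j) \in E \quad \text{(6)}
\end{align}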

LP-(1) and LP-(2) are valid benchmarks for the two respective objectives, profit and fairness. In other words, the optimal values of LP-(1) and LP-(2) are valid upper bounds on the expected profit and fairness achieved by the offline optimal, respectively.

Proof.

We can verify that objective functions (1) and (2) capture exactly the expected profit and fairness achieved by the offline optimal, by the linearity of expectation. To prove the validity of the benchmark for each objective, it suffices to show that all constraints are feasible for any given offline optimal. Recall that for each edge e = (i, j), x_e denotes the expected number of probes on e (assignments of j to i, not necessarily matches) in the offline optimal. Constraint (3) is valid since each driver type i has a matching capacity of c_i. Note that the expected number of arrivals of j during the whole online phase is r_j, and j can be probed at most Δ_j times upon each online arrival. Thus, the expected numbers of total probes and matches over all edges incident to j should be no more than r_j Δ_j and r_j, respectively. This rationalizes Constraints (4) and (5). The last constraint is valid since, for each edge, the expected number of probes should be no more than the expected number of arrivals of the corresponding request. Therefore, all constraints are feasible for any given offline optimal. ∎

4 LP-based Parameterized Algorithms

The following lemma suggests that for any online algorithm, the worst-case scenario (the instance on which the algorithm achieves its lowest competitive ratio) arises when each driver type has a unit matching capacity.

Let ALG be an online algorithm achieving a given pair of competitive ratios on instances with unit matching capacities (all c_i = 1). We can modify ALG into an algorithm that achieves at least the same pair of competitive ratios on instances with general integral matching capacities.

Proof.

Let ALG be an online algorithm achieving a given pair of competitive ratios when all driver types have unit matching capacity. Consider a given instance B with general matching capacities. We can create a corresponding instance B' with unit capacities by replacing each driver type i with a set of c_i identical unit-capacity copies. Note that for any feasible solution to LP-(1) and LP-(2) on the instance B, we can create another feasible solution on B' by splitting each edge variable evenly among the copies of the corresponding driver type. We can verify that the objective values of LP-(1) and LP-(2) each remain the same on B and B'. Similarly, given a feasible solution of LP-(1) and LP-(2) on the instance B', we can create a feasible solution on B by summing the edge variables over the copies of each driver type. We can verify that the objective value of LP-(1) remains the same on B and B'. Let an optimal solution to LP-(2) on the instance B' be given, and assume w.l.o.g. that all copies of the same driver type receive the same value (otherwise we can decrease some values until they are all equal). Thus, the objective value of LP-(2) remains the same on B and B'. From the above analysis, we conclude that the optimal LP values of LP-(1) and LP-(2) each remain the same on B and B'.

Now let ALG' be the online algorithm on B that first replaces B with B' and then applies ALG to B'. We can verify that ALG' is a valid online algorithm on B, since (1) each driver type i will be matched at most c_i times, because each of its copies is matched at most once when ALG is applied to B'; and (2) each request will be probed at most its patience many times upon arrival. Consider the profit and fairness achieved by ALG' on B and by ALG on B', and, for each driver type i in B, the expected number of matches received by each of its copies when ALG is applied to B'. Observe that the profit achieved by ALG' on B equals that achieved by ALG on B', and the fairness achieved by ALG' on B is at least that achieved by ALG on B'.

Note that the benchmark LP values of LP-(1) and LP-(2) each remain the same on B and B'. Thus, we get our claim. ∎

From Lemma 4, we assume unit capacity for all driver types throughout the rest of this paper, w.l.o.g. In the following, we present a warm-up algorithm and then a refined algorithm, which can be viewed as a polished version of the former with simulation-based attenuation techniques. The main idea of the refined algorithm is primarily inspired by the work [BSSX17]. Both algorithms invoke the following dependent-rounding technique (denoted GKPS) introduced by [gandhi2006dependent]. For simplicity, we state a simplified version of GKPS tailored to star graphs, which suffices for our purposes.

Recall that E_j is the set of edges incident to j in the compatibility graph. GKPS is a dependent-rounding technique that takes as input a fractional vector x on E_j and outputs a random binary vector Y on E_j satisfying the following properties. (1) Marginal distribution: Pr[Y_e = 1] = x_e for all e ∈ E_j. (2) Degree preservation: the number of ones in Y is either the floor or the ceiling of the sum of the entries of x. (3) Negative correlation: for any pair of edges e, e' ∈ E_j, Pr[Y_e = 1 and Y_{e'} = 1] ≤ x_e · x_{e'}.
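As an illustration of the star-graph case, here is a minimal Python sketch of a GKPS-style (pipage-style) dependent rounding; it preserves the marginals and rounds the number of ones to the floor or ceiling of the fractional sum. This is our illustrative implementation, not the authors' code.

import random

EPS = 1e-12

def gkps_round_star(x):
    # Dependently round a fractional vector x (entries in [0, 1]) on the edge
    # set of a star graph.  Marginals are preserved (P[Y_e = 1] = x_e) and the
    # number of ones equals floor(sum x) or ceil(sum x).
    y = list(x)
    frac = [k for k, v in enumerate(y) if EPS < v < 1.0 - EPS]
    while len(frac) >= 2:
        a, b = frac[0], frac[1]
        up_a   = min(1.0 - y[a], y[b])        # amount to push y[a] up, y[b] down
        down_a = min(y[a], 1.0 - y[b])        # amount to push y[a] down, y[b] up
        if random.random() < down_a / (up_a + down_a):
            y[a] += up_a
            y[b] -= up_a
        else:
            y[a] -= down_a
            y[b] += down_a
        frac = [k for k in frac if EPS < y[k] < 1.0 - EPS]
    for k in frac:                            # at most one fractional entry left
        y[k] = 1.0 if random.random() < y[k] else 0.0
    return [1 if v > 0.5 else 0 for v in y]

Each pairing step moves probability mass between two fractional coordinates so that at least one becomes integral while expectations are unchanged; this pairing is also the source of the negative correlation.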

Throughout this section, we assume that (1) x^P and x^F are optimal solutions to LP-(1) and LP-(2), respectively; (2) α and β are two given parameters with α + β ≤ 1; (3) c_i = 1 for all i, from Lemma 4; and (4) for each j, x^P_j and x^F_j are the solutions x^P and x^F restricted to E_j and scaled by 1/r_j. Note that, from Constraints (4), (5), and (6), x^P_j and x^F_j are two fractional vectors on E_j (entries in [0, 1]), each with a total sum of at most Δ_j.

The first algorithm. Suppose an online request of type j arrives at time t. Our job is to probe at most Δ_j edges in E_j until j is matched. Let x_j be a given fractional vector on E_j. The warm-up algorithm invokes the following procedure as a subroutine during each online round: it first selects a set of at most Δ_j edges from E_j in a random way guided by the fractional vector x_j, and then follows a random order to process the selected edges one by one. The details of the subroutine are stated in Algorithm 1.

1       Apply GKPS to the fractional vector x_j and let Y be the random binary vector output.
2       Choose a random permutation over E_j. Follow that order to process each edge e until j is matched:
3            if Y_e = 1 and the driver type of e is available then probe the edge e (assign j to its driver type);
4            else skip to the next edge.
Algorithm 1 Sub-Routine: dependent rounding combined with a random permutation
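A Python sketch of this subroutine, reusing gkps_round_star from the previous sketch; the function and variable names are ours, and the availability lookup is a stand-in for the platform's driver state.

import random

def probe_subroutine(j, edges, x_j, p, available):
    # edges: list of edges (i, j) incident to the arriving request type j,
    # x_j:   fractional vector aligned with `edges` (total sum at most Δ_j),
    # p[e]:  existence probability of edge e,
    # available[i]: whether driver type i is still unmatched (unit capacities).
    y = gkps_round_star(x_j)                 # dependent rounding (defined above)
    order = list(range(len(edges)))
    random.shuffle(order)                    # random permutation over E_j
    for k in order:
        i = edges[k][0]
        if y[k] == 1 and available.get(i, False):
            # Probe edge (i, j): assign j to i; the rider accepts w.p. p[(i, j)].
            if random.random() < p[edges[k]]:
                available[i] = False
                return i                     # j is matched; stop probing
    return None                              # j leaves unmatched this round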

Based on this subroutine, the main idea of the warm-up algorithm is simple: in each round, when an online request of type j arrives, invoke the subroutine with x^P_j with probability α and with x^F_j with probability β. Recall that x^P_j and x^F_j are the scaled optimal solutions to LP-(1) and LP-(2) restricted to E_j, each with a total sum of at most Δ_j. Thus, when we run the subroutine after j arrives online, we probe at most Δ_j edges incident to j, since the final rounded binary vector has at most Δ_j ones due to the degree-preservation property of dependent rounding. The details of the warm-up algorithm are as follows.

Let a request of type j arrive at time t. With probability α, run the subroutine with x^P_j. With probability β, run the subroutine with x^F_j. With the remaining probability 1 − α − β, reject j.
Algorithm 2 An LP-based warm-up algorithm
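Putting the pieces together, here is a sketch of one simulated run of the warm-up algorithm under the notation above (the 1/r_j scaling of the LP solutions follows the assumptions preceding Algorithm 1); it reuses the two helpers defined earlier, and all identifiers are ours.

import random

def warmup_algorithm(T, sample_request, neighbors, x_P, x_F, r, p, available,
                     alpha, beta):
    # sample_request() draws one request type per round (KIID),
    # neighbors[j] lists the edges incident to j,
    # x_P[e], x_F[e] are the optimal solutions of LP-(1) and LP-(2),
    # r[j] is the arrival rate of j; alpha + beta <= 1.
    matches = []
    for _ in range(T):
        j = sample_request()
        edges = neighbors[j]
        u = random.random()
        if u < alpha:
            vec = [x_P[e] / r[j] for e in edges]    # scaled solution x^P_j
        elif u < alpha + beta:
            vec = [x_F[e] / r[j] for e in edges]    # scaled solution x^F_j
        else:
            continue                                # reject j this round
        i = probe_subroutine(j, edges, vec, p, available)
        if i is not None:
            matches.append((i, j))
    return matches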

We conduct an edge-by-edge analysis. It suffices to show that each edge e is probed with the claimed probabilities under the warm-up algorithm; then, by linearity of expectation, we obtain Theorem 1.1. Focus on a given edge e = (i, j) and a time t, and let A_{i,t} be the event that driver type i is available at (the beginning of) round t.

For any given and , we have .

Proof.

Recall that we assume w.l.o.g. that each c_i = 1, due to Lemma 4. For each given edge e = (i, j) and round t, let one indicator denote whether j arrives at time t, another whether e is probed during round t, and a third whether e is present when probed. Note that in each call of the subroutine after j arrives, e will be probed only when the final rounded vector has entry one on e. Thus, by the marginal-distribution property of dependent rounding and the statement of the subroutine, the probability that e is probed in round t can be bounded accordingly.

Now assume A_{i,t} occurs (i is available at t). Consider a given round t, and let an indicator denote whether e is probed during round t. Notice that this happens only if (1) j arrives at time t and (2) e is probed in the subroutine invoked with either x^P_j or x^F_j.

.

Proof.

We focus on the first inequality and show that e is probed at time t in the subroutine with x^P_j with at least the claimed probability (including the probability of its online arrival). Observe that the event that j arrives at time t and the event that the algorithm runs the subroutine with x^P_j occur independently, with probabilities r_j/T and α, respectively. Let Y be the rounded binary vector from GKPS and let Y_e denote its entry on e. Let E'_j be the set of edges in E_j excluding e. For each e' ∈ E'_j, let one indicator denote whether e' falls before e in the random order and another whether e' is present when probed. Thus we have

(7)–(13): chain of inequalities lower-bounding the probability that e is probed at time t.
Inequality (10) follows from Markov’s inequality. Inequality (12) is due to two observations: (1) the negative-correlation property of dependent rounding, and (2) the upper bounds on the scaled fractional values. Inequality (13) follows from Constraint (5). Following a similar analysis, we can prove the second part. ∎

Now we have all ingredients to prove the main Theorem 1.1.

Proof.

Consider a given edge e = (i, j), and consider the expected numbers of successful probes of e under the subroutine invoked with x^P_j and with x^F_j, respectively. Here a probe of e is successful iff i is available when we assign j to i (which does not necessarily mean e is present).

The last term is obtained after taking . Similarly, we can show that .

Consider the expected total profit obtained by the warm-up algorithm. By linearity of expectation, it equals the sum of the per-edge expected contributions. From Lemma 3, we know that the expected profit of the offline optimal is upper bounded by the optimal value of LP-(1). Thus, the warm-up algorithm achieves the claimed ratio on the profit. Similarly, we can argue that it achieves the claimed ratio on the fairness. ∎

The second algorithm. Inspired by [BSSX17], we can improve at least the theoretical performance of the warm-up algorithm with attenuation techniques applied to edges and (offline) vertices. The motivation is simple: the edges in E_j compete with each other, since we stop probing as soon as j is matched. Thus, attenuating those edges that win a higher chance of being probed than others can potentially boost the worst-case performance.

Define a sequence of target availability probabilities, one for each round, which the attenuation steps below will enforce.

Let the set of available edges at time t be those edges in E_j whose driver type is available at t. The formal description of the refined algorithm is stated in Algorithm 3. We defer the proof of Theorem 1.1 to the Appendix.

1       for each round t do
2            Apply vertex-attenuation such that each driver type i is available at t with probability exactly equal to its target value.
3            Let a request of type j arrive at time t.
4            With probability α, run the subroutine with x^P_j, applying edge-attenuation such that each available edge is probed with probability exactly equal to its target value.
5            With probability β, run the subroutine with x^F_j, applying edge-attenuation in the same way.
6            With the remaining probability 1 − α − β, reject j.
Algorithm 3 An LP-based algorithm after attenuation
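The attenuation steps can be implemented by Monte Carlo simulation in the spirit of [BSSX17]: estimate the probability that each driver type is still available at round t under the current algorithm, then thin the availability (and, analogously, the probes) down to the target value. Below is a heavily simplified Python sketch of the vertex-attenuation idea; all names and interfaces are ours.

import random

def estimate_availability(run_once, num_sims, T, driver_types):
    # Estimate P[driver type i is available at the start of round t] by
    # simulating the algorithm num_sims times.  run_once() must return, for
    # each round t, the set of driver types still available at the start of t.
    counts = {(i, t): 0 for i in driver_types for t in range(T)}
    for _ in range(num_sims):
        avail_per_round = run_once()
        for t in range(T):
            for i in avail_per_round[t]:
                counts[(i, t)] += 1
    return {key: c / num_sims for key, c in counts.items()}

def attenuate_vertex(is_available, estimated_prob, target_prob):
    # If i is available with (estimated) probability estimated_prob >= target_prob,
    # keep it available only with probability target_prob / estimated_prob, so
    # that the overall availability probability becomes target_prob.
    if not is_available or estimated_prob <= 0.0:
        return False
    return random.random() < min(1.0, target_prob / estimated_prob)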

5 Hardness Results

We prove Theorem 1.1 in this section. Consider the example below.

Consider a graph consisting of many identical units, where each unit is a star graph with a center and two other neighbors, and the two edges of each unit carry specified existence probabilities. Assume (1) unit edge weight on all edges; (2) unit arrival rate for all request types; (3) unit matching capacity for all driver types (all c_i = 1); and (4) unit patience for all request types.

Consider the optimal LP values of LP-(1) and LP-(2) on the above example, respectively. One can verify the value of each and that each LP has a unique optimal solution. Now, based on Example 5, we prove the lemma below.

Consider Example 5 and take LP-(1) and LP-(2) as benchmarks. Then (1) no algorithm can achieve a competitive ratio on the profit larger than a certain threshold; and (2) no algorithm can achieve competitive ratios on the profit and fairness whose sum exceeds a certain threshold.

Proof.

Consider a given online algorithm , in which the expected number of probes for and are and for each , respectively. Let and be the profit and fairness achieved by . We have that . Set and . Note that (1) , and (2) . The latter inequality is due to each . Thus, the sum of competitive ratios on profit and fairness should be

As for profit, we see that . ∎

Based on the example presented in Lemma 5 of Section 3.1 of [fata2019multi], we can obtain a stronger version of statement (2) in Lemma 5: no online algorithm can achieve an online ratio better than the stated bound for either the profit or the fairness when LP-(1) and LP-(2) are used as benchmarks. Summarizing the above analysis, we prove Theorem 1.1.

Figure 1: Competitive ratios for profit and fairness under different values of α and β (panels (a)–(d)).
Figure 2: Performance comparisons with Greedy-P and Greedy-F (panels (a)–(d)).

6 Experiments

In this section, we describe our experimental results on a real dataset: the New York City yellow cab dataset (http://www.andresmh.com/nyctaxitrips/), which contains trip histories for thousands of taxis across Manhattan, Brooklyn, and Queens.

Data preprocessing. The dataset was collected during the year 2013. Each trip record includes the (desensitized) driver’s license, the pick-up and drop-off locations of the passenger, the duration and distance of the trip, the starting and ending times of the trip, and other information such as the number of passengers. Although the demographics of the drivers and riders are not recorded in the original dataset, we synthesize racial demographics for riders and drivers in a manner similar to [nanda2019]. To simplify the demonstration, we consider a single demographic factor, race, which takes two possible values, “disadvantaged” (D) and “advantaged” (A). We set the ratio of D to A among riders to roughly match the racial demographics of NYC [ridersref], and we set the ratio of D to A among drivers analogously [driversref]. The acceptance rates for the four possible driver–rider pairs (based on race only), (A,A), (A,D), (D,A), and (D,D), are set accordingly and then scaled up by a common factor. Note that our model applies straightforwardly when the real-world distribution of acceptance rates is known or can be learned. We collect records during the off-peak period of 4–5 PM, when many drivers are on the road while the number of requests is relatively lower than during peak hours; on a sample day in January 2013, substantially fewer trips were completed in the off-peak hour (16:00–17:00) than in the peak hour (19:00–20:00). We restrict attention to a bounding box of longitudes and latitudes and partition the area into equally sized grid cells, each indexed by a unique number to represent a specific pick-up or drop-off location.

We construct the compatibility graph as follows. Each driver type has attributes of starting location and race, while each request type has attributes of starting location, ending location, and race. We downsample the driver and request types to keep the instance size manageable. For each driver type i, we assign its capacity c_i a random integer sampled uniformly from a range whose upper end we vary. For each request type j, we sample a random patience value uniformly from a small range and a random arrival rate from a normal distribution, normalizing the rates so that they sum to T. We add an edge (i, j) if the Manhattan distance between the starting location of request type j and the location of driver type i is not larger than a given threshold. The profit w_e of each edge is defined as the normalized trip length of the corresponding request type, so that all edge profits are at most one.
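A Python sketch of the preprocessing step that maps coordinates to grid cells and builds the compatibility graph; the bounding box, grid size, and distance threshold are placeholder values, not the ones used in our experiments.

def to_cell(lon, lat, lon_range, lat_range, n_cells):
    # Map a (longitude, latitude) pair to an integer grid index.
    col = int((lon - lon_range[0]) / (lon_range[1] - lon_range[0]) * n_cells)
    row = int((lat - lat_range[0]) / (lat_range[1] - lat_range[0]) * n_cells)
    col = min(max(col, 0), n_cells - 1)
    row = min(max(row, 0), n_cells - 1)
    return row * n_cells + col

def build_compatibility_graph(driver_types, request_types, n_cells, threshold):
    # driver_types[i]["cell"] is the driver location cell;
    # request_types[j]["start_cell"] is the pick-up cell.  Add edge (i, j) when
    # the Manhattan distance between the two cells (in grid units) is at most
    # `threshold`.
    edges = []
    for i, d in driver_types.items():
        for j, r in request_types.items():
            ri, ci = divmod(d["cell"], n_cells)
            rj, cj = divmod(r["start_cell"], n_cells)
            if abs(ri - rj) + abs(ci - cj) <= threshold:
                edges.append((i, j))
    return edges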

Algorithms. We test our parameterized algorithm against two natural heuristic baselines, namely Greedy-P (short for Greedy-Profit) and Greedy-F (short for Greedy-Fairness). (A future direction is to consider a hybrid version of Greedy-P and Greedy-F that optimizes the two objectives simultaneously.) Suppose a request of type j arrives at time t. Recall that E_j is the set of neighboring edges incident to j (the set of assignments feasible for j). Let the available assignments be those edges (i, j) ∈ E_j such that at least one driver of type i remains at t. Greedy-P repeatedly selects an available assignment with the maximum weight (breaking ties arbitrarily) until j either accepts a driver or runs out of patience. In contrast, Greedy-F repeatedly selects an available assignment whose driver type has the lowest matching rate so far, until j either accepts a driver or leaves the system. We run all algorithms for a number of independent trials and take the averages as estimates of the expectations; Greedy-P and Greedy-F are likewise averaged over multiple instances to obtain their final performance. Note that we use LP-(1) and LP-(2) as the default benchmarks for profit and fairness, respectively.
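For reference, here is a Python sketch of one probing round of the two baselines as described above; the data structures, tie-breaking, and the use of raw match counts as the "matching rate" are our choices.

import random

def greedy_probe(j, edges, w, p, available, matched_count, patience, mode):
    # mode == "P": repeatedly probe the available edge of maximum weight w[e];
    # mode == "F": repeatedly probe the available edge whose driver type has the
    # fewest matches so far.  Stop after a match or `patience` probes.
    probes = 0
    while probes < patience:
        avail = [e for e in edges if available.get(e[0], 0) > 0]
        if not avail:
            return None
        if mode == "P":
            e = max(avail, key=lambda edge: w[edge])
        else:
            e = min(avail, key=lambda edge: matched_count.get(edge[0], 0))
        probes += 1
        if random.random() < p[e]:              # rider accepts this driver type
            i = e[0]
            available[i] -= 1
            matched_count[i] = matched_count.get(i, 0) + 1
            return i
    return None                                 # j runs out of patience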

Results and discussion. Figure 1 shows the competitive ratios of the proposed algorithm for different values of α and β. We observe that the profit and fairness competitive ratios always stay above the theoretical lower bounds (dotted lines), as predicted by Theorem 1.1. The gaps between the observed performance and the lower bounds suggest that the theoretical worst-case scenarios rarely occur in the real world. Note that in the setting of Figure 1(d), the lower bound is tight and matches the fairness performance.

Figure 2 shows the profit and fairness performance of our algorithm compared to Greedy-P and Greedy-F. A few observations are worth noting. (1) As for profit, Greedy-P always beats Greedy-F, but not necessarily our algorithm. The advantage of Greedy-P over our algorithm becomes more apparent when the driver capacities are large and less apparent when they are small. Note that in our experiments the expected total number of rider arrivals is fixed; therefore, the capacity range directly controls the degree of imbalance between drivers and riders. When capacities are larger, there are more available drivers relative to riders, and Greedy-P outperforms all the rest on profit. When capacities are small, however, the matching policy must be designed carefully to boost profit, which is why our algorithm becomes dominant. (2) As for fairness, Greedy-F appears to dominate the rest, though our algorithm shows high flexibility in its fairness performance. Our algorithm exhibits relatively low sensitivity of the profit to the first parameter α and high sensitivity of the fairness to the second parameter β; the latter becomes particularly apparent when the capacity range is large.

7 Conclusion

In this paper, we present a flexible approach for matching requests to drivers that balances two conflicting goals: maximizing income equality among rideshare drivers and maximizing the total revenue earned by the system. Our approach allows the policy designer to specify how fair and how profitable they want the system to be via two separate parameters. Extensive experimental results on a real-world dataset show that our algorithms not only perform far above their theoretical lower bounds but also smoothly trade off the two objectives between the two natural heuristics. Our work opens a few directions for future research. The most direct one is to close the gap on the sum of the profit and fairness ratios achievable by our algorithms. It would be interesting to give a tighter online analysis than the one presented here, or to offer a sharper hardness result showing that the sum of the two ratios must be lower.

References

8 Appendix

8.1 Proof of Theorem 1.1

Theorem 1.1 (restated).

Here are the two main ingredients of the proof of Theorem 1.1. Assume that at time t each driver type i is available with its target probability, and that a request of type j arrives at t. Recall that the available edges in E_j at time t are those whose driver type is available at t.

Each available edge will be probed with probability at least the claimed bound if the subroutine is invoked with x^P_j, and at least the corresponding bound if it is invoked with x^F_j, before attenuation.

Proof.

We present a proof similar to, but more refined than, that of Lemma 4. Consider a given available edge e, and consider the probability that e is probed in the subroutine invoked with x^P_j. As before, let Y be the rounded binary vector from GKPS and Y_e its entry on e. Let E'_j be the set of edges in E_j excluding e. For each e' ∈ E'_j, let one indicator denote whether e' falls before e in the random order and another whether e' is present when probed. We introduce an additional indicator for whether e' is available at t. Note that here we assume not only that i is available at t, but also that j arrives at t and the subroutine with x^P_j is invoked. Thus, we have

(14)–(20): chain of inequalities lower-bounding the probability that e is probed, conditioned on the events above.

Note that Inequality (19) is due to two facts: a bound from Lemma 3.1 in [BSSX17], and the negative-correlation property of dependent rounding. Similarly, we can prove the bound for the case of x^F_j. ∎

Lemma 8.1 justifies the edge-attenuation steps in Algorithm 3. The lemma below instead justifies the vertex-attenuation step in Algorithm 3.

Consider a given and assume that is available at with probability equal to