A fundamental and widely studied problem in both the computational sciences and economics is the following allocation setting: given a set of items and agents,
each with a specified valuation function, how should items be distributed among the agents to maximize the sum of their valuations? Such scenarios arise in a wide variety of applications, capturing settings in internet advertising, social choice, and machine learning (such as word alignment for document summarization). For such problems, we often face two key challenges when designing allocation algorithms. One challenge is submodularity: agents typically exhibit some form of diminishing returns in their valuation functions. The other is indivisibility: in many settings, items cannot be split fractionally among agents. From an optimization perspective, good or optimal fractional assignments are often much easier to obtain, while understanding the complexity of integral settings continues to garner much focus in the literature.
A classic example of a problem that illustrates the algorithmic challenges arising from the interaction of these two facets is the Maximum Budgeted Allocation problem (MBA), where each agent's valuation is given as a budget function. A series of results [16, 2, 3, 6, 17] improved the best approximation for the problem to 3/4 + c for a small constant c > 0, while the best known hardness upper bound currently sits at 15/16. Meanwhile, the complexity of the problem is much better understood in fractional settings (e.g., small bids), both in the offline and online input models [18, 23, 11, 6]. Another example of such a problem, which has received increasing attention, is Nash Social Welfare maximization (NSW). In its canonical form, the objective of NSW is to maximize the product of agent valuations. However, this can also be converted to a sum-of-logs objective, becoming a submodular variant of the allocation problem. It has long been known that the optimal fractional solution to NSW can be obtained by solving the famous Eisenberg-Gale convex program, but the indivisible case was less understood until a recent breakthrough by Cole and Gkatzelis that obtained the first constant approximation for the product objective. Since then, there has been an explosion of work on NSW for indivisible assignments, both in terms of its approximability [7, 4, 1, 21] and its appealing fairness properties [10, 13, 4, 25, 5].
Note that both MBA and NSW (for smoothed log objectives) are special cases of Submodular Welfare Maximization, where each agent's valuation is given as a general submodular set function. (A set function f is said to be submodular if f(A ∪ {j}) − f(A) ≥ f(B ∪ {j}) − f(B) whenever A ⊆ B.) The optimal approximation one can achieve in this general setting is 1 − 1/e [29, 20]. However, this hardness barrier is partly a consequence of defining the problem over a general class of functions, and thus in special settings (such as MBA and NSW, as discussed above) we can still achieve approximations that improve upon this bound. Ideally, we would like to define a general class of submodular functions that captures many natural allocation scenarios and permits improved approximations beyond the general submodular barrier.
One such class of functions that has received recent attention is submodular functions with bounded curvature [9, 27, 30]. In this line of work, the total curvature c of a monotone submodular function f over a ground set N is given by:

c = 1 − min_{j ∈ N} [f(N) − f(N ∖ {j})] / f({j}).
Intuitively, c measures the multiplicative gap between the marginal gain from receiving an item j after first receiving all other items, versus the marginal gain from receiving item j alone. Thus, c = 0 for linear functions (no curvature), and as more curvature is introduced into the function, c tends towards 1 as these margins begin to diverge. For the problem of submodular maximization with bounded curvature c (SMBC), Sviridenko et al. showed an approximation of 1 − c/e, which is optimal. Thus, as the function tends towards a linear function (c → 0), the approximation ratio approaches 1.
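As a concrete aid, the total-curvature definition above can be computed by brute force for small ground sets. The following sketch is ours (function names and the example functions are illustrative, not from the paper), assuming f is monotone with f({j}) > 0 for every singleton:

```python
def total_curvature(f, ground_set):
    """Total curvature c = 1 - min_j [f(N) - f(N \\ {j})] / f({j})
    of a monotone submodular set function f given on frozensets."""
    N = frozenset(ground_set)
    return 1 - min(
        (f(N) - f(N - {j})) / f(frozenset({j}))
        for j in N
    )

# Budget-like coverage f(S) = min(|S|, 2): once |N| > 2, the last item
# added after all others gains nothing, so the total curvature is 1.
budget_like = lambda S: min(len(S), 2)
print(total_curvature(budget_like, {1, 2, 3}))   # -> 1.0

# A linear (modular) function has no curvature.
linear = lambda S: float(len(S))
print(total_curvature(linear, {1, 2, 3}))        # -> 0.0
```

The budget-like example already hints at the issue discussed next: a function can be linear almost everywhere and still have total curvature 1.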
However, note that since c is a measure of the total curvature of the function, this algorithm does not necessarily yield an improved approximation for many natural allocation objectives. For example, for any non-trivial instance of MBA, c will equal 1 (since an agent earns linear utility for their first item but no marginal gain once their budget has been saturated), and thus for MBA this algorithm does not improve the approximation ratio beyond the general 1 − 1/e bound. This is also true for sufficiently large instances of log NSW or any “budget-like” objective that initially starts off close to linear and then eventually flattens out. This raises the question: for the allocation problem, can we design approximation algorithms for a general yet natural class of submodular functions whose performance scales according to more local notions of curvature?
In this paper, we make progress towards this end by studying what we call the indivisible concave allocation problem, which is an integral variant of the fractional online problem studied by Devanur and Jain. This problem is formally defined as follows: as input, we are given a set of agents and a set of indivisible items. Each agent i has a specified valuation function f_i, which is a non-decreasing, continuous concave function with a well-defined first derivative. Each agent i also has a specified bid value b_ij for each item j. The algorithm must assign each item to a unique agent, where the objective is to maximize the following:

∑_i f_i( ∑_{j ∈ S_i} b_ij ),
where S_i denotes the set of items assigned to agent i by the algorithm.
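For concreteness, the objective can be evaluated as follows; this is a minimal sketch with our own variable names (the bids, valuations, and assignment map are illustrative):

```python
def allocation_value(bids, valuations, assignment):
    """Objective: sum over agents i of f_i(sum of b_ij over items j
    assigned to i).  bids[i][j] is agent i's bid for item j,
    valuations[i] is agent i's concave function f_i, and
    assignment[j] names the agent holding item j."""
    spend = [0.0] * len(valuations)
    for item, agent in assignment.items():
        spend[agent] += bids[agent][item]
    return sum(f(u) for f, u in zip(valuations, spend))

# Two agents with budget-like valuation min(x, 1) and three unit bids.
f = lambda x: min(x, 1.0)
bids = [{0: 1.0, 1: 1.0, 2: 1.0}, {0: 1.0, 1: 1.0, 2: 1.0}]
print(allocation_value(bids, [f, f], {0: 0, 1: 1, 2: 1}))  # -> 2.0
print(allocation_value(bids, [f, f], {0: 0, 1: 0, 2: 0}))  # -> 1.0
```

The gap between the two allocations above is exactly the diminishing-returns effect that makes the integral problem hard: spend past an agent's saturation point is wasted.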
As noted in prior work, this problem is a strict generalization of MBA (technically, a true budget function does not have a well-defined first derivative at its inflection point; however, our results converge to those of MBA for any continuous differentiable approximation of a budget function) and captures many settings that arise in practice, such as ad display systems with under-delivery penalties and soft budgets. Additionally, this setting captures a variant of NSW known as the Smooth Nash Welfare problem examined in [13, 15], which reduces the extent to which the objective penalizes under-allocations by adding a smoothing constant ℓ to agent utilities; i.e., as ℓ increases, the penalty for allocating very little to a particular agent is softened. (We describe this problem more formally below in our results section.) Thus, our results contribute to the growing body of work that aims to better understand the Nash Social Welfare problem for indivisible assignments.
As mentioned above, one goal in studying this problem will be to obtain approximation guarantees that depend on a more local notion of function curvature, which we call the local curvature bound (LCB), denoted λ. We formally define λ for an instance as follows. Denote
s(x, w) = (f(x + w) − f(x)) / w   (1)

to be the slope of the lower-bounding secant line that intersects f at the points (x, f(x)) and (x + w, f(x + w)). Define the LCB of a function f at point x with width w to be:

λ(x, w) = min_{y ∈ [x, x+w]} [f(x) + s(x, w) · (y − x)] / f(y).   (2)

(Note that for some functions, a point attaining the min in (2) may not exist; in such cases the min in the definition should be replaced by an infimum. Such cases can be handled in our analysis by introducing limits when necessary.)
Informally, λ(x, w) measures the largest (inverse) multiplicative gap between a point on the lower-bounding secant line and the function evaluated at the same point. The definition of λ(x, w) is illustrated in Figure 1. Given this point-wise definition, we then also define the overall LCB for a particular agent valuation function f_i to be the minimum of λ(x, w) over the relevant points x and widths w, and then the overall LCB of an instance to be the minimum over all agents. (For our local curvature parameter λ, lower values of λ correspond to higher degrees of curvature, whereas the opposite is true for the total curvature parameter c of SMBC. We elect for this discrepancy between the definitions in order to avoid frequently taking reciprocals throughout.)
Since λ is defined according to the marginal change that can occur over all points of the function (in some sense, one can think of λ as a bound on the multiplicative gap between the function and a “low-resolution” derivative), intuitively it should capture a more accurate measure of a valuation function's curvature. For example, for MBA (where f(x) = min(x, B) for a budget of B and maximum bid value b), λ = 1 − b/(4B), capturing the fact that portions of the function do not exhibit diminishing returns. Figure 2 illustrates the value of λ for functions with varying degrees of curvature and maximum bid values.
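The LCB can also be estimated numerically by sampling points along a secant window. The sketch below is our own: the sampling grid and the min-based formula follow the informal description above, not the paper's exact notation.

```python
def lcb(f, x, w, samples=1000):
    """Approximate the local curvature bound at point x with width w:
    the smallest ratio between the lower-bounding secant line through
    (x, f(x)) and (x + w, f(x + w)) and f itself over [x, x + w]."""
    slope = (f(x + w) - f(x)) / w
    best = float("inf")
    for k in range(samples + 1):
        y = x + w * k / samples
        if f(y) > 0:
            best = min(best, (f(x) + slope * (y - x)) / f(y))
    return best

# Budget function min(x, B) with B = 1.  A window of width b straddling
# the kink realizes the worst ratio, 1 - b/(4B).
budget = lambda x: min(x, 1.0)
print(round(lcb(budget, 0.5, 1.0), 3))  # -> 0.75  (max bid b = B)
print(round(lcb(budget, 0.9, 0.2), 3))  # -> 0.95  (small bid b = 0.2)
```

The second call illustrates the small-bids behavior: as the window width (the maximum bid) shrinks relative to the budget, the bound tends to 1.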
As our main contribution, we give an efficient polynomial-time algorithm that achieves an approximation of (essentially) λ. This result is stated formally in the following theorem.
For an instance of the problem, let n denote the number of agents, let m denote the number of items, and fix ε > 0. Let λ denote the LCB of the instance. Then there exists an algorithm that achieves an approximation of λ − ε and runs in time polynomial in n, m, and 1/ε.
Our algorithm is primal-dual in nature and combines the approach of the primal-dual algorithm for MBA with the convex-programming duality techniques leveraged for the online fractional version of the problem. Furthermore, we show an integrality gap of λ for the problem, establishing that our algorithm is optimal among those that utilize the natural assignment CP. This result is formally stated as follows.
For any monotone concave function f and maximum bid value b with LCB λ, there exists an instance of the problem such that the integrality gap of the convex program is λ.
In relation to prior work, we would like to emphasize the following:
As mentioned earlier, λ = 3/4 for MBA, which is also the integrality gap of the assignment LP formulation of the problem. Thus, our results essentially (i.e., barring the (3/4 + c)-approximation via the configuration LP) generalize the state of the art of MBA to instances with general monotone concave functions.
Observe that as the maximum bid value becomes small relative to the scale of the valuation functions, λ tends towards 1. Thus, for “small bids” instances, our algorithm produces close-to-optimal allocations. To the best of our knowledge, the best previous algorithm for this setting is the online fractional algorithm of Devanur and Jain, which for many natural concave functions achieves a ratio bounded away from 1.
We believe the analyses of our algorithm and integrality gap also provide a clearer connection as to why the 3/4 approximation ratio of the primal-dual algorithm for MBA matches the 3/4 integrality gap of the MBA assignment LP. Specifically, our integrality gap generalizes that of MBA and is established by relating the cost of the optimal fractional solution to that of a dual solution of equivalent cost. The structure of this dual solution then points to why the “local search via defection” approach used by the primal-dual algorithm can achieve an approximation of λ for a generic instance. (In fact, we derived our algorithm from the integrality gap.) We hope this connection, made from our general perspective, may help shed light on how primal-dual algorithms can be derived from integrality gaps for special settings, such as MBA, NSW, or other related problems.
For our final result, we examine the particular application of Smooth Nash Social Welfare. In this setting, the instance specifies a smoothing parameter ℓ. The objective is to maximize the following product objective:
where u_i denotes the total spend of agent i. Note that since this is a product objective, wlog we can scale agent bid values without affecting the approximation ratio. Thus, after this scaling, one can think of the smoothing parameter as giving each agent an initial spend of ℓ at the outset of the allocation.
As discussed earlier, by taking logs this objective can be converted to a sum of logs objective:
producing an instance of our problem, to which the algorithm from Theorem 1.1 applies directly. Interestingly, however, we also show that for the log objective given in (4), we can extend our techniques to obtain (roughly) an additive guarantee on the log objective, which then translates into a multiplicative guarantee for the product objective. Thus, we obtain the following result.
There exists a polynomial-time algorithm for Smooth NSW whose approximation ratio for the product objective improves as the smoothing parameter ℓ grows.
In particular, the approximation ratio of the algorithm approaches 1 as ℓ grows large. Therefore, for the smoothed version of NSW, we obtain ratios that significantly improve upon the current state of the art for the standard version of NSW, which is currently e^{1/e} ≈ 1.45. (Our algorithm is an improvement over this bound for all sufficiently large ℓ.) Furthermore, our algorithm is more time efficient than the known constant-approximation algorithms for NSW, since the algorithms in [7, 8] rely on solving CP relaxations, and the known combinatorial algorithm requires an intricate run-time analysis (the exact dependence on n and m is not specified).
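The product-to-log conversion underlying these results can be sketched numerically; the exact normalization below (a geometric-mean product and a (1/n)-scaled sum of logs) is our illustrative choice:

```python
import math

def smooth_nsw(utilities, ell):
    """Smooth Nash welfare: geometric mean of (ell + u_i)."""
    prod = 1.0
    for u in utilities:
        prod *= ell + u
    return prod ** (1.0 / len(utilities))

def log_objective(utilities, ell):
    """Equivalent sum-of-logs form: (1/n) * sum_i log(ell + u_i)."""
    return sum(math.log(ell + u) for u in utilities) / len(utilities)

utils, ell = [0.0, 2.0, 5.0], 1.0
# The two objectives are monotone transforms of one another, so they
# share the same optimal allocation:
assert abs(math.exp(log_objective(utils, ell)) - smooth_nsw(utils, ell)) < 1e-9
```

Note how the smoothing constant ℓ keeps the log objective finite even when an agent receives nothing, which is exactly the softening of the under-allocation penalty discussed above.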
1.2 Related Work
The first approximation algorithm for MBA was given by Garg et al., who obtained an approximation ratio of 2/(1 + √5) ≈ 0.618. Later, Andelman and Mansour gave a (1 − 1/e)-approximation, and an improved approximation for the special case when all budgets are equal. The approximation ratio was subsequently improved by Azar et al., who gave a 2/3-approximation, and Srinivasan, who gave a 3/4-approximation. Concurrently, Chakrabarty and Goel also achieved an approximation ratio of 3/4, and they showed that it is NP-hard to approximate MBA to a factor better than 15/16. These 3/4-approximation algorithms consider the standard LP relaxation known as the assignment LP. Most recently, Kalaitzis et al. analyzed a stronger relaxation known as the configuration LP. For two special cases, which they call graph MBA and restricted MBA, they give (3/4 + c)-approximation algorithms for some constant c > 0.
The online version of budgeted allocation, also known as AdWords, has also received significant attention due to its applicability to Internet advertising. In this setting, each item arrives in an online fashion and must be assigned to a bidder upon arrival. Note that online matching is the special case where each bidder makes unit bids and has unit budget. For this problem, Karp et al. gave a (1 − 1/e)-competitive algorithm and proved that no randomized algorithm can do better. Kalyanasundaram and Pruhs considered the case where each bidder has a budget of B and all bids are unit. For this setting, they gave an algorithm whose competitive ratio tends to 1 − 1/e as B tends to infinity. Under the assumption that bids are small compared to budgets, Mehta et al. give a (1 − 1/e)-competitive algorithm and prove that no randomized algorithm can do better, even under the small-bids assumption.
Maximizing the Nash Social Welfare has also received notable attention, especially in recent years. For this problem, Cole and Gkatzelis gave the first constant-factor approximation algorithm. Later, Cole et al. gave a tight factor-2 analysis of this algorithm. For additive utilities, Barman et al. gave an e^{1/e} ≈ 1.45 approximation, matching the lower bound given by Cole et al. One motivation for maximizing NSW is that it provides fairness guarantees. Caragiannis et al. show that the maximum Nash welfare solution satisfies a property known as envy-freeness up to one good, and is also Pareto optimal. Conitzer et al. gave three relaxations of the proportionality notion of fairness and showed that the maximum NSW solution satisfies or approximates all three. The smooth version of NSW, in which we add ℓ to every agent's utility, was considered by Fain et al. Fluschnik et al. showed that this smooth objective is NP-hard to maximize.
For submodular maximization subject to a cardinality constraint (i.e., a uniform matroid), Nemhauser et al. showed that the greedy algorithm is a (1 − 1/e)-approximation. For the submodular welfare maximization problem, Feige and Vondrák gave a (1 − 1/e + ε)-approximation algorithm for some constant ε > 0, and they showed that it is NP-hard to achieve an approximation ratio better than some constant strictly less than 1. For submodular maximization with a single matroid constraint and bounded total curvature c, Conforti and Cornuéjols achieve an approximation ratio of (1 − e^{−c})/c, which tends to 1 as c → 0 (i.e., as the function becomes linear). Sviridenko et al. improved the ratio to 1 − c/e, and they also extend the notion of curvature to general monotone set functions to obtain a matching approximation algorithm. Finally, Vondrák introduced a weaker notion of curvature (taken with respect to the optimum) and gave an algorithm with approximation ratio (1 − e^{−c})/c for arbitrary matroid constraints.
2 A λ-Approximation Algorithm
In this section we define our approximation algorithm. For simplicity, throughout the section we assume the algorithm has knowledge of the value of λ for any given set of valuation functions. At the end of the section, we briefly discuss how the algorithm can be redefined so that it does not need knowledge of λ.
2.1 Algorithm Definition
Our algorithm utilizes the natural assignment convex program for the problem, which we will refer to as the assignment CP:
Our algorithm is primal-dual in nature and will utilize the dual program, which was defined for the online variant of the problem:
where the function appearing in the dual objective is defined as the y-intercept of the tangent line to the valuation function at the corresponding point. Thus we have the following lemma.
Lemma 2.1 (shown in prior work).
The above convex programs form a primal-dual pair. That is, any feasible solution to the dual has objective at least that of any feasible solution to the primal.
As was done for the primal-dual algorithm for the budgeted allocation problem, it will be useful to partition the cost of the dual solution according to the algorithm's current assignments. In particular, recall that S_i denotes the current set of items assigned to agent i by the algorithm, and let α_i be the current dual variable maintained by the algorithm for agent i. Define
to be the spend of the algorithm, but evaluated according to the tangent line in the dual objective taken at the point determined by α_i. The algorithm will maintain that, at any point, an item is assigned to the bidder that maximizes the corresponding dual term (and will reassign an item if this no longer holds). We call such an allocation a proper allocation.
Given a dual solution, an item j is said to be properly allocated if j is assigned to an agent attaining the maximum in the corresponding dual constraint. Otherwise, item j is said to be improperly allocated.
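The proper-allocation check can be sketched as follows. Since the exact dual constraint is defined by the programs above, we abstract it behind a `score` function; the MBA-style score in the example (a bid discounted by the agent's dual variable) is our own assumption for illustration:

```python
def is_properly_allocated(item, assignment, agents, score, tol=1e-12):
    """An item is properly allocated if its current holder attains the
    maximum dual score for that item (up to floating-point tolerance)."""
    best = max(score(i, item) for i in agents)
    return score(assignment[item], item) >= best - tol

# Hypothetical MBA-style score: bid * (1 - alpha_i).
bids = [{0: 0.8}, {0: 1.0}]
alpha = [0.0, 0.5]
score = lambda i, j: bids[i][j] * (1 - alpha[i])
print(is_properly_allocated(0, {0: 0}, [0, 1], score))  # -> True
print(is_properly_allocated(0, {0: 1}, [0, 1], score))  # -> False
```

Raising an agent's dual variable lowers its scores, which is exactly how the algorithm below triggers defections of improperly allocated items.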
In an allocation where all items are properly allocated, we can obtain the following characterization of the dual objective.
Fix a point in the algorithm's execution with primal and dual variables for each agent i. If all items are properly allocated, then (i) these variables form a feasible dual solution, and (ii) the objective of the dual can be expressed as a sum of per-agent terms.
Note that (i) follows immediately from the dual constraints. To see (ii), observe that
Adding the remaining dual terms to both sides of (6), on the LHS we obtain the dual objective, and on the RHS we obtain the claimed expression, as desired. ∎
We are now ready to define our algorithm, which is outlined in Algorithm 1. At any point, the algorithm maintains a setting of dual variables α_i for all agents i and variables p_j for all items j. Each variable is initialized to 0, and items are then properly allocated accordingly. The algorithm proceeds by continuously increasing α values (thus decreasing the corresponding dual terms), allowing items to defect if they are no longer properly allocated. The goal of the algorithm is to eventually obtain an allocation in which the required spend inequality holds for all agents. As long as items remain (close to) properly allocated upon the algorithm's termination, Lemmas 2.1 and 2.3 imply that the approximation ratio of the algorithm is at least λ (up to the ε loss).
2.2 Algorithm Analysis
As discussed in the introduction, our algorithm can be viewed as a generalization of the primal-dual algorithm for MBA based on LP duality, and thus we will use a similar approach to analyze our algorithm. In particular, throughout a run of the algorithm, we define the following terms which characterize the spend of the algorithm in relation to the current dual objective:
Call an agent paid for if its spend covers its share of the dual objective. There are two types of agents who are not paid for, depending on the direction in which this inequality fails. Call such agents underspent and overspent, respectively.
To establish the approximation ratio of the algorithm, it suffices to argue that (i) no agent ever becomes underspent, (ii) up to a scaling of the dual variables, the dual remains feasible (which is necessitated by the continuous nature of the algorithm), and (iii) once α_i becomes large enough, agent i must be paid for.
The main technical hurdle in our general setting will be arguing that no agent ever becomes underspent. In particular, since prior work uses the LP primal-dual formulation of the problem, one can interpret its dual update as adjusting a supergradient that passes through a fixed point determined by the agent's budget. The maximum ratio that guarantees no agent becomes underspent is derived via a set of algebraically obtained equations that express the ratio between both the linear part and the budgeted part of the primal objective, versus the respective locations along the supergradient defined by the dual. Clearly, it would be difficult or impossible to carry out such a calculation for a general concave function that may not even have a closed form. Thus, as our main technical insight, we show this algebraic approach can be bypassed via a more elegant geometric argument that coincides directly with the definition of λ (given in Lemma 2.5).
With this intuition, we are now ready to analyze the performance of the algorithm. Lemmas 2.5 and 2.6 establish the approximation ratio of the algorithm, and Lemma 2.7 bounds its run time.
Throughout the algorithm, an agent never becomes underspent. In particular, whenever an agent's total spend is sufficiently large, that agent is paid for.
At the start of the algorithm, α_i = 0 for all agents i, so no agent can be underspent at the outset. Therefore, the only point at which an agent i could potentially become underspent is when some item j is reassigned to another agent on Line 6 of the algorithm. Fix such a point in the algorithm's execution.
Let L_s denote the equation of the secant line that passes through f at the two relevant points. More formally, let s denote the slope of this secant, where the definition of s is given by Equation (1). Then the equation for L_s is given as:
Let x* be the value given in Equation (2) that determines λ. Based on the definition of λ, observe that if we scale L_s up by a factor of 1/λ, we obtain the equation of a line that is tangent to f at the point x*. Denote the equation of this line by L_t. It then follows that
where the first inequality holds by the definition of λ, and the last inequality holds because agent i entered the loop on Line 2.
Given Inequality (8), we can now argue the desired bound. In particular, Inequality (8) implies that the point at which L_t is tangent to f must have a greater x-coordinate than the point at which agent i's corresponding tangent touches f. This follows directly from the fact that f is concave and non-decreasing. Therefore, by the same reasoning, we can then conclude the analogous pointwise inequality between the two tangent lines. Since agent i is underspent, we have
Finally, since we want to show that agent i is paid for after the reassignment of item j, we can establish the lemma as follows:
where the first equality follows from the definition of L_t, and the first inequality follows from the bound just established. ∎
Throughout the algorithm, the primal and dual variables always form a feasible dual solution.
Fix an item j. If j is properly allocated, then by definition p_j satisfies the dual constraints for all agents. Now suppose j is improperly allocated. Observe this only happens if j was properly allocated to some bidder i, and then (i) the algorithm increases α_i on Line 8 (thereby decreasing the corresponding dual term), (ii) as a result, i is no longer the maximizer for j, and (iii) the algorithm exits the while loop on Line 4 before reassigning j on Line 6.
However, in this case, because j was properly allocated before the change in α_i, we have:
where the second equality follows from the update step on Line 8. Thus p_j satisfies the dual inequalities. ∎
Next, we establish the run time of the algorithm. For simplicity, we will assume that ε is selected such that λ + ε < 1, which is possible as long as λ < 1. (If λ = 1, then the instance is trivial, since this implies all valuation functions are linear, and therefore the optimal solution is obtained by assigning each item to the agent with the highest bid.)
Let u_i^max denote the maximum possible spend for a fixed agent i, and let
denote the maximum ratio (over all agents i) between the total spend evaluated along the tangent line, versus the total spend itself. The algorithm then terminates in time bounded by a polynomial in this ratio and the input size, where we account for the time needed to perform the update of α_i on Line 8.
Once an item is reassigned on Line 6, the algorithm cannot reassign it again until it increases α_i for some agent i on Line 8. Thus, the algorithm can perform at most m reassignments on Line 6 before an update on Line 8 must occur for some agent. Therefore, to establish the desired run-time bound, it suffices to show that once the update on Line 8 has occurred sufficiently many times for a fixed agent i, agent i remains paid for during the remainder of the algorithm's execution (i.e., the algorithm does not again enter the while loop on Line 3 for agent i).
In particular, define the following quantity:
After the stated number of Line 8 updates to agent i, we can bound the dual variable as follows:
where the inequality follows from our assumption on ε. Rearranging (9), we obtain
Recall that by Lemma 2.5, an agent can never be underspent. Therefore we may assume the corresponding lower bound on agent i's spend. We can then upper bound the relevant quantity as follows:
implying agent i must be paid for. The first inequality follows from the preceding fact, and the last inequality follows from (10).
To complete the proof of the lemma, observe that negating both the numerator and the denominator yields
We are now ready to prove Theorem 1.1.
Proof of Theorem 1.1.
Lemma 2.7 ensures the algorithm will eventually terminate, and upon termination, no agent can be overspent. Furthermore, by Lemma 2.5, no agent can be underspent. Therefore, every agent is paid for. Combining these inequalities yields
which proves the theorem. ∎
We conclude the section by discussing how Algorithm 1 can be executed without knowledge of λ. This can be done by instead having the algorithm repeatedly guess values of λ. In particular, we can start with an overestimate (i.e., start with a guess of 1 and repeatedly lower the guess in increments of ε), and then check after all reassignments whether or not an agent is underspent. If no agent ever becomes underspent, then the algorithm achieves an approximation at its current guess, and by Lemma 2.5, the algorithm is guaranteed to have no agent become underspent once the guess reaches the true value of λ. This modification comes at a multiplicative O(1/ε) factor in the run time of the algorithm and an additional additive ε error in the approximation factor.
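The guessing procedure just described can be sketched as a wrapper; `run_algorithm` is a hypothetical callback standing in for one full execution of Algorithm 1 at a fixed guess, returning the allocation and whether any agent became underspent:

```python
def run_with_guessed_lcb(run_algorithm, epsilon):
    """Start from the overestimate guess = 1 and lower it by epsilon
    until a run finishes with no underspent agent; the returned guess
    is then within epsilon of the true LCB."""
    guess = 1.0
    while guess > 0:
        allocation, underspent_found = run_algorithm(guess)
        if not underspent_found:
            return allocation, guess
        guess -= epsilon
    raise RuntimeError("no valid guess found")

# Toy stand-in: pretend the true LCB is 0.75, so any guess <= 0.75
# finishes without an underspent agent.
fake_run = lambda g: ("alloc", g > 0.75 + 1e-12)
print(round(run_with_guessed_lcb(fake_run, 0.1)[1], 2))  # -> 0.7
```

Since the guess decreases from 1 in steps of ε, at most 1/ε runs are needed, which is the source of the multiplicative O(1/ε) overhead mentioned above.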
3 Integrality Gap
In this section, we prove Theorem 1.2. In particular, we show that for a fixed monotone concave function f and maximum bid value b with LCB λ, we can construct an instance such that the optimal fractional solution to the assignment CP has objective 1/λ times that of any integral assignment.
Let x* be the point that defines λ for the function f, and let w be the width that defines λ in Equation (2). Without loss of generality, assume the relevant ratio can be expressed as a rational number, since any irrational number has an arbitrarily close rational approximation (in which case our instance construction can be taken to such a limit to obtain the desired bound).
We construct our instance as follows. All agents share the same valuation function f. A subset of the items are “public,” i.e., every agent places the same bid value on each of them. Call this subset of items P. The remaining items are partitioned among the agents so that each agent i receives a set of private items P_i, with bid values chosen so that the total private bid value of each agent is the same fixed amount; for all other agents, the bids on these items are 0. Intuitively, the private items are defined so that each agent can achieve a fixed total bid value “for free” before receiving any allocation of the public items, while still maintaining that the maximum bid value in the instance is b. This completes the construction, which is illustrated in Figure 3.
Analysis of Integrality Gap.
Let the fractional and integral solutions under consideration denote the optimal fractional and integral solutions, respectively, to the convex program for the above instance. We first show that the optimal fractional solution is obtained by evenly splitting the spend of the agents among the public items. To do this, we construct a feasible dual solution to the program that has objective equal to that of this fractional solution; by Lemma 2.1, it then follows that it is an optimal fractional solution. Constructing this dual solution will also be useful for showing the desired integrality gap, as it will be easier to relate the objective of the integral solution to it.
Observe that in the solution specified above, each agent spends the same total amount on public items. Each agent also receives a fixed total spend from their private items. Therefore, summing over all agents, the objective of the fractional solution equals:
To construct the dual solution, we set the same dual value for all agents. Since each agent has an identical valuation function and the construction is symmetric, setting the item variables accordingly for public items and for private items in each P_i produces a feasible solution to the dual.
To show that the objectives of the fractional and dual solutions are the same, first observe that by the definitions of the tangent and secant lines, we have the following identity:
Intuitively, Equation (12) gives two equivalent ways of expressing the value of the line tangent to f at x*, evaluated at a shifted x-coordinate. On the LHS, we start at the y-intercept of the line and follow the slope of the tangent for the total x-width. On the RHS, we instead start at the tangent point and follow the tangent line over the remaining x-width.
By similar reasoning, the following identity also holds, which instead characterizes the tangent line evaluated at the other endpoint:
From the construction of the instance and the definition of the dual, the objective of the dual solution can be characterized as follows:
which can be simplified as follows:
where the second equality follows from the construction, and the last equality follows from (11). Thus the fractional and dual solutions have equivalent objectives.
Now consider the integral solution, which is obtained by assigning a unique public item to a subset of the agents. (By a simple exchange argument, the objective cannot increase by assigning multiple public items to the same agent, since f is monotone and concave.) Each agent that receives a public item spends the corresponding additional amount. The remaining agents spend only the total from their private items. Thus, the objective of the optimal integral solution is:
4 Extension to Smooth Nash Social Welfare
In this section, we apply our techniques to the problem of Smooth Nash Social Welfare (SNSW). In this problem, the goal is to find an allocation that maximizes
for a value ℓ, which we call the smoothing parameter for the instance. Since this is a product objective, we can scale the objective of each agent without changing the approximation factor of the algorithm. Therefore, wlog for the rest of the section we will assume the bid values are normalized for all agents. After this scaling, we can think of the smoothing parameter as essentially giving each agent an initial spend of ℓ at the beginning of the instance.
As discussed in the introduction, by taking the logarithm and normalizing, we can reduce this problem to maximizing:
When ℓ ≥ 1, each agent's term is a monotone concave function, and thus optimizing (4) gives us an instance of our problem. Therefore, we can use our algorithm from Section 2 to obtain a multiplicative guarantee for this log objective. However, note that this transformation is not approximation preserving. In particular, a multiplicative approximation for the log objective translates, after exponentiating, into a much weaker guarantee for the product objective. Therefore, to obtain a multiplicative guarantee for the product objective, we instead need an additive guarantee for the log objective.
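The additive-to-multiplicative translation can be verified numerically: if the algorithm loses an additive delta on the (1/n)-scaled log objective, exponentiating shows it is an exp(-delta) multiplicative approximation for the (geometric-mean) product objective. The normalization and all utility values below are our own illustrative choices:

```python
import math

ell = 1.0

def mean_log(utilities):
    """(1/n)-scaled smooth log objective."""
    return sum(math.log(ell + u) for u in utilities) / len(utilities)

opt_utils = [3.0, 3.0, 3.0, 3.0]   # hypothetical optimal spends
alg_utils = [2.5, 3.0, 3.2, 2.8]   # hypothetical algorithm spends

delta = mean_log(opt_utils) - mean_log(alg_utils)       # additive loss
ratio = math.exp(mean_log(alg_utils)) / math.exp(mean_log(opt_utils))
assert delta > 0
assert abs(ratio - math.exp(-delta)) < 1e-9             # exp(-delta) approx
```

This is why an additive guarantee on (4) is the right target: a small additive loss on the log scale stays a constant-factor loss on the product scale, independent of the magnitude of the optimum.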
Thus, in this section, we define a similar notion of a local curvature bound that instead measures the largest additive gap between the function and any lower-bounding secant line. We then characterize this additive curvature for the objective as a function of the smoothing parameter ℓ. Given this characterization, we then show our techniques from Section 2 translate to this additive setting, giving us an approximation based on the additive curvature. At a high level, the key reasons why this extension works are: (i) the geometric arguments used in Section 2 are completely relational, and therefore ratios can be replaced by differences while preserving the logic of the argument, and (ii) the algorithm's objective is bounded against the cost of the dual on a per-agent basis, and therefore an additive per-agent guarantee easily translates to an overall additive guarantee. (One can imagine that an argument establishing the multiplicative guarantee via a more global approach might be less amenable to such a translation.)
4.1 Algorithm Definition
The corresponding primal and dual programs (again given by CP duality) for this objective are as follows:
To define the analogous notion of additive curvature, we again let s denote the slope of the lower-bounding secant line through the corresponding pair of points, given as:
We then define the additive local curvature bound at a point x with width w to be:
Note that Lemma 2.3 still applies in this setting, so we can still partition the objective of the dual into per-agent contributions, given by:
Given these definitions specific to SNSW, our adaptation of the algorithm to obtain an additive guarantee is given in Algorithm 2.