Approximating Nash Social Welfare under Submodular Valuations through (Un)Matchings

12/28/2019 ∙ by Jugal Garg, et al. ∙ University of Illinois at Urbana-Champaign

We study the problem of approximating maximum Nash social welfare (NSW) when allocating m indivisible items among n asymmetric agents with submodular valuations. The NSW is a well-established notion of fairness and efficiency, defined as the weighted geometric mean of agents' valuations. For special cases of the problem with symmetric agents and additive(-like) valuation functions, approximation algorithms have been designed using approaches customized for these specific settings, and they fail to extend to more general settings. Hence, no approximation algorithm with a factor independent of m is known either for asymmetric agents with additive valuations or for symmetric agents beyond additive(-like) valuations. In this paper, we extend our understanding of the NSW problem to far more general settings. Our main contribution is two approximation algorithms for asymmetric agents with additive and submodular valuations respectively. Both algorithms are simple to understand and involve non-trivial modifications of a greedy repeated matchings approach. High valued items are allocated separately, by un-matching certain items and re-matching them through processes that differ between the two algorithms. We show that these approaches achieve approximation factors of O(n) and O(n log n) for the additive and submodular cases respectively, both independent of the number of items. For additive valuations, our algorithm also outputs an allocation that achieves the fairness property of envy-freeness up to one item (EF1). Furthermore, we show that the NSW problem under submodular valuations is strictly harder than all currently known settings, with a hardness-of-approximation factor of e/(e-1) even for constantly many agents. For this case, we provide an approximation algorithm that achieves a matching factor of e/(e-1), hence resolving it completely.


1 Introduction

We study the problem of approximating the maximum Nash social welfare (NSW) when allocating a set $\mathcal{G}$ of $m$ indivisible items among a set $\mathcal{A}$ of $n$ agents with non-negative monotone submodular valuations $v_i : 2^{\mathcal{G}} \to \mathbb{R}_{\ge 0}$, and unequal or asymmetric entitlements called agent weights. Let $\mathcal{X}$ denote the set of all allocations, i.e., all partitions of $\mathcal{G}$ into bundles $x = (x_1, \dots, x_n)$, one for each agent. The problem is to find an allocation $x \in \mathcal{X}$ maximizing the following weighted geometric mean of valuations,

$\mathrm{NSW}(x) = \Big(\prod_{i \in \mathcal{A}} v_i(x_i)^{\eta_i}\Big)^{1/\sum_{i \in \mathcal{A}} \eta_i}$,  (1)

where $\eta_i$ is the weight of agent $i$. We call this the Asymmetric Submodular NSW problem. (In the rest of this paper, we refer to various special cases of the problem as the P Q NSW problem, where P is the nature of the agents, symmetric or asymmetric, and Q is the type of the agents' valuation functions; we skip one or both qualifiers when they are clear from the context.) When agents are symmetric, all weights $\eta_i$ are equal.
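For concreteness, the objective in (1) can be evaluated as in the following minimal Python sketch; the function name `nsw` and the log-space evaluation are our own illustrative choices, not notation from the paper.

```python
import math

def nsw(valuations, weights):
    """Weighted geometric mean of agents' bundle values, as in objective (1):
    valuations[i] is v_i(x_i) and weights[i] is eta_i."""
    total_weight = sum(weights)
    # evaluate in log space to avoid overflow with many agents or large values
    log_nsw = sum(w * math.log(v) for v, w in zip(valuations, weights)) / total_weight
    return math.exp(log_nsw)

print(nsw([4.0, 9.0], [1.0, 1.0]))  # symmetric agents: sqrt(4 * 9) = 6.0
```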

Fair and efficient allocation of resources is a central problem in economic theory. The NSW objective provides an interesting trade-off between the two extremal objectives of social welfare (i.e., sum of valuations) and max-min fairness, and in contrast to both it is invariant to individual scaling of each agent's valuations (see [Mou03] for additional characteristics). It was independently discovered by three different communities: as a solution of the bargaining problem in classic game theory [Nas50], as the well-studied notion of proportional fairness in networking [Kel97], and as coinciding with the celebrated notion of competitive equilibrium with equal incomes (CEEI) in economics [Var74]. While Nash [Nas50] only considered the symmetric case, [HS72, Kal77] proposed the asymmetric case, which has also been extensively studied and used in many different applications, e.g., bargaining theory [LV07, CM10, Tho86], water allocation [HdLGY14, DWL18], climate agreements [YvIWZ17], and many more.

The NSW problem is known to be notoriously hard: it is NP-hard even for two agents with identical additive valuations, and APX-hard in general [Lee17]. (Observe that the partition problem reduces to the NSW problem with two identical agents.) Efforts have therefore been directed at developing efficient approximation algorithms. A series of remarkable works [CG18, CDG17, AGSS17, AMGV18, BKV18, GHM19, CCG18] provides good approximation guarantees for the special subclasses of this problem where agents are symmetric and have additive(-like) valuation functions (slight generalizations of additive valuations have been studied: budget additive [GHM19], separable piecewise linear concave (SPLC) [AMGV18], and their combination [CCG18]), via ingenious and quite different approaches. All these approaches exploit the symmetry of agents and the characteristics of additive-like valuation functions (for instance, the notion of a maximum bang-per-buck (MBB) item is critically used in most of them, and there is no equivalent notion in the submodular case), which makes them hard to extend to the asymmetric case and to more general valuation functions. As a consequence, no approximation algorithm with a factor independent of the number of items $m$ [NR14] is known either for asymmetric agents with additive valuations or for symmetric agents beyond additive(-like) valuations. These questions are also raised in [CDG17, BKV18].

The NSW objective also serves as a major focal point in fair division. For the case of symmetric agents with additive valuations, Caragiannis et al. [CKM16] present a compelling argument in favor of the 'unreasonable' fairness of maximum NSW by showing that such an allocation has outstanding properties: it is envy-free up to one item (EF1), a popular fairness property, as well as Pareto optimal (PO), a standard notion of economic efficiency. Even though computing a maximum NSW allocation is hard, approximating it recovers most of these fairness and efficiency guarantees; see e.g., [BKV18, CCG18, GM19].

In this paper, we extend our understanding of the NSW problem to far more general settings. Our main contribution is two approximation algorithms (Algorithms 1 and 2) for asymmetric agents with additive and submodular valuations respectively. Both algorithms are simple to understand and involve non-trivial modifications of a greedy repeated matchings approach. High valued items are allocated separately, by un-matching certain items and re-matching them through processes that differ between the two algorithms. We show that these approaches achieve approximation factors of $O(n)$ and $O(n \log n)$ for the additive and submodular cases respectively, independent of the number of items. For additive valuations, our algorithm outputs an allocation that is also EF1.

1.1 Model

We formally define the valuation functions we consider in this paper, and their relations to other popular functions. For convenience, we also write $v_{ij}$ instead of $v_i(\{j\})$ to denote the valuation of agent $i$ for item $j$.

  1. Additive: Given the valuation $v_{ij}$ of each agent $i$ for every item $j$, the valuation for a set of items is the sum of the individual valuations, that is, $v_i(S) = \sum_{j \in S} v_{ij}$. In the restricted additive case, $v_{ij} \in \{0, v_j\}$, i.e., each item $j$ has a fixed value $v_j$ and every agent values it either at $v_j$ or at $0$.

  2. Budget additive (BA): Every agent $i$ has an upper cap $c_i$ on the maximum valuation she can receive from any allocation. For any set of items, the agent's total valuation is the minimum of the additive value of this set and the cap, i.e., $v_i(S) = \min\{\sum_{j \in S} v_{ij},\, c_i\}$, where $c_i$ denotes agent $i$'s cap.

  3. Separable piecewise linear concave (SPLC): In this case, there are multiple copies of each item. The valuation of an agent is piecewise linear concave in the number of copies she receives of each item, and it is additively separable across items. Let $v_{ijk}$ denote agent $i$'s value for receiving the $k$-th copy of item $j$; concavity implies that $v_{ijk} \ge v_{ij(k+1)}$. The valuation of agent $i$ for a set of items containing $t_j$ copies of each item $j$ is $\sum_j \sum_{k \le t_j} v_{ijk}$.

  4. Monotone Submodular: Let $v_i(T \mid S) = v_i(S \cup T) - v_i(S)$ denote the marginal utility of agent $i$ for a set of items $T$ over a set $S$, where $S, T \subseteq \mathcal{G}$. Then, the valuation function of every agent $i$ is a monotonically non-decreasing function that satisfies the submodularity condition $v_i(j \mid S) \ge v_i(j \mid T)$ for all $S \subseteq T$ and $j \notin T$.

Other popular valuation functions are XOS, gross substitutes (GS), and subadditive [NTRV07]. These function classes are related as follows: additive valuations are a subclass of GS, which is a subclass of submodular, which in turn is contained in XOS and then in subadditive valuations; budget additive and SPLC valuations are further (non-additive) subclasses of submodular valuations.
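The following self-contained Python sketch illustrates some of the classes above on hypothetical data: an additive valuation, a budget additive valuation obtained by capping it, and a coverage function as a standard example of a monotone submodular valuation, together with a brute-force submodularity check. All names and numbers are illustrative assumptions, not examples from the paper.

```python
from itertools import combinations

# Hypothetical per-item values of one agent over items {0, 1, 2, 3}.
item_value = [5.0, 3.0, 2.0, 1.0]

def additive(S):
    return sum(item_value[j] for j in S)

def budget_additive(S, cap=7.0):
    # additive value truncated at the agent's cap
    return min(additive(S), cap)

# A coverage function is a standard example of a monotone submodular valuation:
# each item covers some elements and a bundle is worth the size of the union.
covers = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}, 3: {"d"}}

def coverage(S):
    return float(len(set().union(*(covers[j] for j in S)))) if S else 0.0

def is_submodular(v, items):
    # brute-force check of v(S + j) - v(S) >= v(T + j) - v(T) for all S ⊆ T, j ∉ T
    items = list(items)
    subsets = [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]
    for S in subsets:
        for T in subsets:
            if S <= T:
                for j in items:
                    if j not in T and v(S | {j}) - v(S) < v(T | {j}) - v(T) - 1e-9:
                        return False
    return True

print(is_submodular(coverage, range(4)))         # True
print(is_submodular(budget_additive, range(4)))  # True: budget additive is submodular
```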

1.2 Results

Table 1 summarizes the approximation guarantees of Algorithms 1 and 2 under the popular valuation functions formally defined in Section 1.1. Here, the approximation guarantee of an algorithm is defined as $\alpha$, for an $\alpha \ge 1$, if it outputs an allocation whose (weighted) geometric mean is at least a $1/\alpha$ fraction of the maximum (optimal) geometric mean. The current best known results are also stated in the table for reference.

Valuations          | Symmetric Agents          | Asymmetric Agents
                    | Hardness   | Algorithm    | Hardness   | Algorithm
Restricted Additive | [GHM19]    | [BKV18]      | [GHM19]    |
Additive            | —"—        | [BKV18]      | —"—        | —"—
Budget additive     | —"—        | [CCG18]      | —"—        | —"—
SPLC                | —"—        | —"—          |            |
Gross substitutes   |            |              |            |
Submodular          | [Thm 4.1]  | —"—          | [Thm 4.1]  | —"—
XOS                 | —"—        | [NR14]       | —"—        | [NR14]
Subadditive         |            |              |            |
Table 1: Summary of results. Every entry contains the best known approximation guarantee for the setting, followed by the reference, from this paper or otherwise, that establishes it. The entries established in this paper are achieved by Algorithms 1 and 2.

To complement these results, we also provide an $\frac{e}{e-1}$-factor hardness of approximation result for the submodular NSW problem in Section 4. This hardness applies even when the number of agents is constant. It shows that the general problem is strictly harder than the settings studied so far, for which better approximation factors are known.

For the special case of the submodular NSW problem where the number of agents is constant, we describe another algorithm with a matching $\frac{e}{e-1}$ approximation factor in Section 5, hence resolving this case completely. In the same section, we show that for the symmetric additive NSW problem, the allocation of items returned by Algorithm 1 also satisfies EF1. Finally, for the further special case of restricted additive valuations, a factor guarantee matching the current best known approximation factor for this case can be shown, via a fairness property of the allocation returned by the algorithm in this case.

1.3 Techniques

We describe the techniques used in this work in a pedagogical manner: we start with a naive algorithm and build progressively more sophisticated algorithms by fixing the main issues that cause their poor approximation factors, finally arriving at our algorithms.

All approaches compute maximum weight matchings of weighted bipartite graphs, sometimes several of them. These graphs have the agents and the items in separate parts, and the weight assigned to the edge matching item $j$ to agent $i$ is the logarithm of the agent's valuation for the item, scaled by the agent's weight, i.e., $\eta_i \log v_{ij}$. Observe that, by taking the logarithm of the objective (1), we get an equivalent problem where the objective is to maximize the weighted sum of logarithms of agents' valuations.
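A single such matching can be computed with an off-the-shelf assignment routine. The sketch below is our own helper (assuming SciPy is available and all values are strictly positive); it builds the weight matrix $\eta_i \log v_{ij}$ and solves one assignment. A true maximum weight matching may additionally leave an agent unmatched when all her edge weights are negative, which this simplification ignores.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_matching(values, weights):
    """One round of matching agents to distinct items; values[i][j] > 0 is agent
    i's value for item j and weights[i] is eta_i. Edge weight is eta_i * log v_ij."""
    V = np.asarray(values, dtype=float)
    W = np.asarray(weights, dtype=float)[:, None] * np.log(V)
    rows, cols = linear_sum_assignment(-W)  # SciPy minimizes, so negate
    return dict(zip(rows.tolist(), cols.tolist()))

print(one_matching([[4.0, 1.0, 1.0], [2.0, 1.0, 1.0]], [1.0, 1.0]))
```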

Let us first consider the additive NSW problem, and see what a single such maximum weight matching guarantees. If the number of agents, say $n$, equals the number of items, say $m$, then the allocation obtained by matching items to agents according to such a matching achieves the maximum objective. [NR14] extend this algorithm to the general case by allocating items according to one matching and then allocating the remaining items arbitrarily; they prove that this gives an $O(m)$-factor approximation algorithm.

A natural extension of this algorithm is to compute more matchings instead of allocating arbitrarily after a single matching: compute one maximum weight matching, allocate items according to it, and repeat this process until all items are allocated. This repeated matching algorithm still does not remove the dependence on $m$ from the approximation factor. To see why, consider the following example.

Example 1.1.

Consider two agents, A and B, with equal weights, and $m$ items. Both agents place a high value on the first item. Agent A also values each of the remaining $m-1$ items, while B values only the last of these and values the other remaining items at 0. An allocation that optimizes the NSW of the agents allocates the first item to B and all remaining items to A. A repeated matching algorithm, in its first iteration, instead allocates the first item to A and the last item to B; after this, no matching can give B any additional value. The maximum geometric mean that such an algorithm can generate is therefore smaller than the optimal geometric mean by a factor that grows with $m$.

The above example shows the critical reason why a vanilla repeated matching algorithm may fail. In the initial matchings, the algorithm has no knowledge of how the agents value the entire set of items; hence during these matchings it might allocate the high valued items to the wrong agents, thereby reducing the NSW by a large factor. To get around this problem, our algorithm needs some knowledge of an agent's valuation for the as-yet unallocated (low valued) items while deciding how to allocate the high valued items. It can then allocate the high valued items correctly with this foresight.

It turns out that there is a simple way to provide this foresight when the valuation functions are additive(-like). Effectively, we keep aside the high valued items of each agent and assign the other items via a repeated matching algorithm. We then assign the items that were set aside to the agents via matchings that locally maximize the resulting NSW objective. The collective set of items put aside by all agents contains, as a subset, all the high valued items whose correct allocation required foresight. Because these items are allocated after the low valued items, the algorithm allocates them more intelligently. In Section 2, we describe this algorithm (Algorithm 1) and show that it gives an $O(n)$-factor approximation for the NSW objective.
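One concrete way to realize this "set aside, then match" idea is sketched below. It is our illustrative reading of the approach, not the paper's Algorithm 1 (which implements the same foresight through the edge weights of a single augmented first matching): here each agent nominates her $n$ highest valued items for the reserve, the remaining items are allocated by repeated matchings, and the reserved items are matched afterwards.

```python
import math
import numpy as np
from scipy.optimize import linear_sum_assignment

def set_aside_then_match(values, weights):
    """Illustrative two-stage allocation for additive valuations: reserve each
    agent's highest valued items, allocate the rest by repeated matchings,
    then match the reserved items last."""
    n, m = len(values), len(values[0])
    bundles = [[] for _ in range(n)]

    def bundle_value(i):
        return sum(values[i][j] for j in bundles[i])

    def repeated_matching(items):
        # repeatedly match agents to distinct items so that the weighted sum of
        # log(bundle value after receiving the item) is maximized in each round
        items = list(items)
        while items:
            W = np.array([[weights[i] * math.log(bundle_value(i) + values[i][j] + 1e-12)
                           for j in items] for i in range(n)])
            rows, cols = linear_sum_assignment(-W)
            for i, c in zip(rows.tolist(), cols.tolist()):
                bundles[i].append(items[c])
            for c in sorted(cols.tolist(), reverse=True):
                items.pop(c)

    # each agent nominates her n highest valued items for the reserve
    reserved = set()
    for i in range(n):
        reserved.update(sorted(range(m), key=lambda j: -values[i][j])[:n])

    repeated_matching([j for j in range(m) if j not in reserved])  # low valued items first
    repeated_matching(sorted(reserved))                            # then the set-aside items
    return bundles

# hypothetical instance: two equal-weight agents, four items
print(set_aside_then_match([[10.0, 1.0, 1.0, 1.0], [9.0, 0.0, 0.0, 1.0]], [1.0, 1.0]))
```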

The above idea, however, does not work for submodular valuation functions. The main, subtle reason is as follows. Even in the additive case, the idea actually requires keeping aside not the set of items with the highest valuation, but the set of items whose removal leaves a set of lowest valuation. For additive valuations, these two sets coincide. However, it is known from [SF11] that, for monotone submodular functions, finding a set of items of minimum valuation with lower bounded cardinality is hard to approximate within a factor polynomial in the number of items $m$.

We get around this issue and obtain the foresight for assigning high valued items in a different way. Interestingly, we use the technique of repeated matchings itself for this. In Algorithm 2, we allocate items via repeated matchings, then release some of the initial matchings and re-match the items of these initial matchings.

The idea is that the initial matchings will allocate all high valued items, even if incorrectly, and thereby identify the set of items that must be allocated correctly. If the total number of high valued items depends only on $n$, then the problem of maximizing the NSW objective when allocating this set of items is solved up to a factor depending only on $n$ by applying a repeated matching algorithm. In Lemma 3.3 we prove such an upper bound on the number of initial matchings to be released.

Thus far, we have argued that one set of items, the high valued items, can be allocated approximately optimally. However, submodular valuations do not allow us to simply add the valuations incurred in separate matchings to compute the total valuation of an agent. Getting such a cumulative repeated matchings approach to yield a high total valuation requires the following natural modification: we redefine the edge weights used for computing matchings, using marginal valuations over the items already allocated in previous matchings rather than the values of individual items.
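As a tiny illustration (the names here are ours), the edge weight of an agent-item pair once the agent already holds a bundle can be computed from her valuation of the augmented bundle, so that successive matchings account for what she already holds rather than summing standalone item values.

```python
import math

def edge_weight(v_i, eta_i, bundle, item):
    # weighted log of agent i's value for her current bundle together with the
    # candidate item, so the weight reflects what she already holds
    return eta_i * math.log(v_i(set(bundle) | {item}))

# coverage-style submodular valuation (hypothetical): item 0 covers {a, b}, item 1 covers {b}
coverage = lambda S: len(set().union(*({"a", "b"} if j == 0 else {"b"} for j in S))) if S else 0
print(edge_weight(coverage, 1.0, [0], 1))  # item 1 adds nothing beyond item 0: log(2)
```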

There are several challenges in proving that this approach gives an allocation of high NSW overall. First, bounding the valuation received by a particular agent as a fraction of her valuation in the optimal allocation is difficult, because the subset of items allocated by the algorithm might be completely different from her optimal bundle. We can, however, relate these two values, and this is done in Lemma 3.2.

Second, since we release and reallocate the items of the initial matchings, the set of items allocated to an agent can end up completely different from the set she held before, changing all marginal utilities. It is thus non-trivial to combine the valuations from these stages as well. This is done in the proof of Theorem 3.1.

Apart from these, the paper contains the following results, which use different techniques.

Submodular NSW with a constant number of agents. We completely resolve this case using a different approach that builds on techniques for maximizing submodular functions over matroids developed in [CVZ10] and a reduction from [Von08]. At a high level, we first maximize continuous relaxations of the agents' valuation functions, and then round the fractional solution using a randomized algorithm to obtain an integral allocation of items. The two key results used in designing the algorithm are Theorems 5.2 and 5.3.
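As background for the continuous relaxation step, continuous extensions of submodular valuations are typically evaluated by random sampling. The sketch below is a generic Monte Carlo estimator of the multilinear extension on hypothetical data; it is a standard ingredient behind such relaxations, not the paper's exact procedure or the [CVZ10] rounding scheme.

```python
import random

def multilinear_extension(f, x, samples=5000, seed=0):
    """Monte Carlo estimate of the multilinear extension F(x) = E[f(R)], where R
    contains item j independently with probability x[j]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        R = {j for j, p in enumerate(x) if rng.random() < p}
        total += f(R)
    return total / samples

# hypothetical coverage valuation on two items
coverage = lambda S: len(set().union(*({"a", "b"} if j == 0 else {"b", "c"} for j in S))) if S else 0
print(multilinear_extension(coverage, [0.5, 0.5]))  # close to 1.75
```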

Hardness of approximation. The submodular social welfare (SW) problem is to maximize the sum of the agents' valuations over integral allocations of items. [KLMM08] describe a reduction, from a problem that is NP-hard to approximate within a certain constant factor, to the submodular SW problem, establishing its $\frac{e}{e-1}$ hardness. We prove that this reduction also establishes the same hardness for the submodular NSW problem.

1.4 Further Related Work

Extensive work has been done on special cases of the NSW problem. For the symmetric additive NSW problem, several constant-factor approximation algorithms have been obtained. The first such algorithm used an approach based on a variant of Fisher markets [CG18] to achieve an approximation factor of $2e^{1/e} \approx 2.89$. Later, the analysis of this algorithm was improved to a factor of $2$ [CDG17]. Another approach, based on the theory of real stable polynomials, gave an $e$-factor guarantee [AGSS17]. Recently, [BKV18] obtained the current best approximation factor of $e^{1/e} \approx 1.45$ using an approach based on approximate EF1 and PO allocations. These approaches have also been extended to provide constant-factor approximation algorithms for slight generalizations of additive valuations, namely budget additive [GHM19], SPLC [AMGV18], and a common generalization of these two valuation classes [CCG18].

All these approaches exploit the symmetry of agents and the characteristics of additive-like valuation functions. For instance, the notion of a maximum bang-per-buck (MBB) item is critically used in most of these approaches. There is no such equivalent notion for the submodular case. This makes them hard to extend to the asymmetric case and to more general valuation functions.

Fair and efficient division of items among asymmetric agents with submodular valuations is an important problem, also raised in [CDG17]. However, the only known result for this general problem is an $O(m)$-factor algorithm [NR14], where $m$ is the number of items.

Two other popular welfare objectives are social welfare and max-min. In social welfare, the goal is to maximize the sum of the valuations of all agents; in the max-min objective, the goal is to maximize the value of the lowest-valued agent. The latter objective is also termed the Santa Claus problem for restricted additive valuations [BS06].

The social welfare problem under submodular valuations has been completely resolved, with an $\frac{e}{e-1}$-factor algorithm [Von08] and a matching hardness result [KLMM08]. Note that the additive case of this problem has a trivial linear time algorithm, hence it is perhaps unsurprising that a constant factor algorithm exists for the submodular case.

For the max-min objective, extensive work has been done on the restricted additive valuations case, resulting in constant factor algorithms [AKS15, DRZ18]. However, for unrestricted additive valuations the best known approximation factor is $\tilde{O}(\sqrt{n})$ [AS10]. For the submodular Santa Claus problem, there is an $O(n)$-factor algorithm [KP07]. On the other hand, a hardness factor of $2$ is the best known lower bound for both settings [BD05].

Organization of the paper: In Section 2, we describe Algorithm 1 and its analysis for the additive NSW problem. In Section 3, we present the algorithm for submodular valuations. Section 4 contains the hardness proof for the submodular setting. The results for the special cases of submodular NSW with a constant number of agents, EF1 for the symmetric additive case, and restricted additive valuations are presented in Section 5. In Section 6, we present examples proving the tightness of the analysis of Algorithms 1 and 2. The final Section 7 discusses possible further directions.

2 Additive Valuations

In this section, we present our algorithm for the asymmetric additive NSW problem, described in Algorithm 1, and prove the following approximation result.

Theorem 2.1.

The NSW objective of the allocation $x$ output by Algorithm 1 for the asymmetric additive NSW problem is at least a $1/O(n)$ fraction of the optimal NSW, denoted $\mathrm{OPT}$, i.e., $\mathrm{NSW}(x) \ge \mathrm{OPT}/O(n)$.

Algorithm 1 is a single pass algorithm that allocates up to one item to every agent per iteration such that the NSW objective is locally maximized. An issue with a naive single pass, locally optimizing greedy approach is that the initial iterations work with highly limited information. As shown in Example 1.1, such algorithms can result in outcomes with very low NSW even for symmetric agents with additive valuation functions: although an agent can be allocated items of high valuation later, the algorithm does not know this initially. Algorithm 1 resolves this issue by pre-computing an approximate value that each agent will receive in later iterations, and uses this information in the edge weight definitions when allocating the first item to every agent. We now discuss the details of Algorithm 1.

2.1 Algorithm

Algorithm 1 works in a single pass. For every agent, the algorithm first computes the value of her $m - n$ least valued items and stores this quantity. It then defines a weighted complete bipartite graph $G_1$ between agents and items, whose edge weights combine an item's value with this stored estimate of the value the agent can collect in later iterations, and allocates one item to each agent along the edges of a maximum weight matching of $G_1$. It then continues allocating items via repeated matchings: until all items are allocated, the algorithm iteratively defines graphs $G_t$ on the agents and the set of unallocated items, with edge weights $\eta_i \log\big(v_i(x_i) + v_{ij}\big)$, where $v_i(x_i)$ is the valuation of agent $i$ for the items already allocated to her, and allocates at most one item to each agent according to a maximum weight matching of $G_t$.

Input: A set of $n$ agents with weights $\eta_i$, a set of $m$ indivisible items, and additive valuations $v_i$, where $v_i(S)$ is the valuation of agent $i$ for a set of items $S$.
Output: An allocation $x$ that approximately optimizes the NSW.
Compute, for every agent, the value she can collect in later iterations   // defined in Section 2.2
Define the weighted complete bipartite graph $G_1$ with the first-iteration edge weights; compute a maximum weight matching $M_1$ of $G_1$   // allocate items according to $M_1$
Update the set of unallocated items
while unallocated items remain do
      Define the weighted complete bipartite graph $G_t$ on agents and unallocated items with edge weights $\eta_i \log\big(v_i(x_i) + v_{ij}\big)$; compute a maximum weight matching $M_t$ of $G_t$   // allocate items according to $M_t$
      Remove the allocated items from the set of unallocated items
end while
Return $x$
Algorithm 1 for the Asymmetric Additive NSW problem

2.2 Notation

In the following discussion, we use $x_i$ to denote the set of items received by agent $i$ in the allocation $x$ returned by the algorithm, and $x_i^*$ to denote the set of items in $i$'s bundle in an optimal allocation. For every agent $i$, the items in $x_i$ and in $x_i^*$ are ranked in order of decreasing value according to $v_i$, and we use $[k]$ as a shorthand for the set $\{1, \dots, k\}$. We also refer to the items an agent ranks between two given positions, to the total allocation an agent has received from the first several matching iterations, and to an agent's $k$-th ranked item from the entire set of items. Finally, for every agent and integer $k$, we consider the minimum value of the set that remains after removing at most $k$ items from a given set of items; as the valuation functions are monotone, this minimum is attained by removing exactly $k$ items, and the 'at most' accounts for the case when the set contains fewer than $k$ items.

2.3 Analysis

To establish the guarantee of Theorem 2.1, we first prove a couple of lemmas.

Lemma 2.1.

Proof.

Since every iteration of the algorithm allocates at most $n$ items, at the start of iteration $t$ at most $(t-1)\,n$ items are allocated. Thus at least $n$ items that agent $i$ ranks among her top $t\,n$ are still unallocated. In the $t$-th iteration the agent will thus get an item with value at least that of her $t\,n$-th ranked item, and the lemma follows. ∎

Lemma 2.2.

Proof.

Using Lemma 2.1, and summing over the iterations,

Thus,

As at most $n$ items are allocated in every iteration, agent $i$ receives items for at least $\lfloor m/n \rfloor$ iterations. (Here we assume that the agents have non-zero valuations for every item; if not, the other case is also straightforward and the lemma continues to hold.) This implies the required bound, and hence,

The second inequality follows directly from the definitions. ∎

We now prove the main theorem.

Proof of Theorem 2.1.

where the last inequality follows from Lemma 2.2. During the allocation of the first item to each agent, all items are still available. Thus, allocating to each agent the highest valued item of her own optimal bundle is a feasible first matching, and we get

Combining the two bounds above and using the definition of the NSW objective, the claimed $O(n)$ factor follows. ∎

Remark 2.1.

When Algorithm 1 is applied to the instance of Example 1.1, it produces a better allocation than a naive repeated matching approach. The pre-computation stage of Algorithm 1 computes the value each agent can collect from later iterations, which is large for A and small for B. When this value is included in the edge weights of the first bipartite graph $G_1$, the resulting matching gives B the first item and A some other item. Subsequently A receives all remaining items, resulting in an allocation with the optimal NSW.

The algorithm easily extends to budget additive (BA) and separable piecewise linear concave (SPLC) valuations with the following small changes. For BA valuations, the pre-computed value must respect the utility cap of each agent; for SPLC valuations, it needs to be calculated while considering each copy of an item as a separate item. In both cases, the edge weights in the bipartite graphs use marginal utilities (as in the submodular valuations case in Section 3). Lemma 2.2 and the subsequent proofs can be easily extended to these cases by combining ideas from Lemma 3.2 and the proof of Theorem 3.1. Thus, we obtain the following theorem.

Theorem 2.2.

The NSW objective of the allocation $x$ output by (the suitably modified) Algorithm 1 for the asymmetric BA (and SPLC) NSW problem is at least a $1/O(n)$ fraction of the optimal NSW, denoted $\mathrm{OPT}$, i.e., $\mathrm{NSW}(x) \ge \mathrm{OPT}/O(n)$.
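For concreteness, the marginal utility under a budget additive valuation, which serves as the edge-weight basis in the extension described before Theorem 2.2, can be computed as in the following hypothetical helper (our own names and numbers).

```python
def ba_marginal(item_values, cap, bundle, j):
    """Marginal utility of item j under a budget additive valuation
    v(S) = min(sum of item values in S, cap)."""
    before = min(sum(item_values[k] for k in bundle), cap)
    after = min(sum(item_values[k] for k in bundle) + item_values[j], cap)
    return after - before

print(ba_marginal({0: 4.0, 1: 5.0}, 7.0, [0], 1))  # 3.0: the cap truncates the gain
```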

3 Submodular Valuations

In this section we present Algorithm 2 for approximating the NSW objective under submodular valuations. We prove the following relation between the NSW of the allocation returned by Algorithm 2 and the optimal NSW.

Theorem 3.1.

The NSW objective of the allocation $x$ output by Algorithm 2 for the asymmetric submodular NSW problem is at least a $1/O(n \log n)$ fraction of the optimal NSW, denoted $\mathrm{OPT}$, i.e., $\mathrm{NSW}(x) \ge \mathrm{OPT}/O(n \log n)$.

3.1 Algorithm

Algorithm 2 takes as input an instance of the problem, denoted by $(\mathcal{A}, \mathcal{G}, \{v_i\})$, where $\mathcal{A}$ is the set of agents, $\mathcal{G}$ is the set of items, and $\{v_i\}_{i \in \mathcal{A}}$ is the set of the agents' monotone submodular valuation functions, and generates an allocation vector $x = (x_1, \dots, x_n)$. Each agent $i$ is associated with a positive weight $\eta_i$.

Algorithm 2 runs in three phases. In the first phase, in every iteration we define a weighted complete bipartite graph as follows. One side consists of the agents and the other of the items that are still unallocated (all of $\mathcal{G}$ initially). The weight of the edge between agent $i$ and item $j$, denoted $w(i,j)$, is defined as the logarithm of the agent's valuation for the singleton set containing this item, scaled by the agent's weight; that is, $w(i,j) = \eta_i \log v_i(\{j\})$. We then compute a maximum weight matching in this graph, and allocate to the agents the items they were matched to (if any). This process is repeated for a prescribed number of iterations (the bound of Lemma 3.3).

We perform a similar repeated matching process in the second phase, with different edge weight definitions for the graphs. We start this phase by assigning empty bundles to all agents. Here, the weight of the edge between agent $i$ and item $j$ is the logarithm of agent $i$'s valuation for the set of items currently allocated to her in Phase 2 together with item $j$, scaled by her weight. That is, in iteration $t$ of Phase 2 the edge weight is $\eta_i \log v_i\big(S_i \cup \{j\}\big)$, where $S_i$ denotes the set of items allocated to agent $i$ in the first $t-1$ iterations of this phase.

In the final phase, we re-match the items allocated in Phase 1. We release these items from their agents and define the set of items to be matched as their union. We define the bipartite graph by letting the edge weights reflect the total valuation of the agent upon receiving the corresponding item, i.e., $\eta_i \log v_i\big(x_i^2 \cup \{j\}\big)$, where $x_i^2$ is the final set of items allocated to agent $i$ in Phase 2. We compute one maximum weight matching of the graph so defined and allocate all items along the matched edges. All remaining items are then allocated arbitrarily. The final allocation, denoted $x = (x_1, \dots, x_n)$, is the output of Algorithm 2.

Input: A set of $n$ agents with weights $\eta_i$, a set of $m$ indivisible items, and submodular valuations $v_i$, where $v_i(S)$ is the valuation of agent $i$ for a set of items $S$.
Output: An allocation $x$ that approximately optimizes the NSW objective.
Phase 1:   // the bundles allocated in this phase are stored separately
Initialize the set of unallocated items to the full item set and the iteration counter to 1
while unallocated items remain and the Phase 1 iteration bound has not been reached do
      Define the weighted complete bipartite graph on agents and unallocated items with edge weights $\eta_i \log v_i(\{j\})$; compute a maximum weight matching; allocate items to agents according to it
      Remove the allocated items from the set of unallocated items
end while
Phase 2: For all $i$, set $x_i \leftarrow \emptyset$   // the $x_i$'s are the bundles built in Phase 2
while unallocated items remain do
      Define the weighted complete bipartite graph on agents and unallocated items with edge weights $\eta_i \log v_i(x_i \cup \{j\})$; compute a maximum weight matching; allocate items to agents according to it
      Remove the allocated items from the set of unallocated items
end while
Phase 3: Release the items allocated in Phase 1
Define the weighted complete bipartite graph on agents and released items with edge weights $\eta_i \log v_i(x_i \cup \{j\})$; compute a maximum weight matching; allocate items to agents according to it
Arbitrarily allocate the rest of the items to agents; let $x$ denote the final allocation; return $x$
Algorithm 2 for the Asymmetric Submodular NSW problem
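The following self-contained Python sketch mirrors the three-phase structure of the pseudocode above. It is an illustrative reading rather than a faithful implementation: the number of Phase 1 iterations is left as a parameter (the paper fixes it via Lemma 3.3), leftovers are assigned to an arbitrary agent, and an assignment routine stands in for a maximum weight matching.

```python
import math
import numpy as np
from scipy.optimize import linear_sum_assignment

def repre_match_sketch(value, weights, m, phase1_iters):
    """Sketch of the three-phase structure: value(i, S) is agent i's monotone
    submodular value for a set S of items, weights[i] is eta_i, and phase1_iters
    stands in for the number of Phase 1 matchings."""
    n = len(weights)
    EPS = 1e-12  # keeps the logarithm defined when a value is zero

    def match(items, weight_of):
        # one assignment of agents to distinct items, used in place of a
        # maximum weight matching of the complete bipartite graph
        W = np.array([[weight_of(i, j) for j in items] for i in range(n)])
        rows, cols = linear_sum_assignment(-W)
        return [(i, items[c]) for i, c in zip(rows.tolist(), cols.tolist())]

    # Phase 1: repeated matchings with singleton-value edge weights
    unallocated = list(range(m))
    phase1 = [set() for _ in range(n)]
    for _ in range(phase1_iters):
        if not unallocated:
            break
        for i, j in match(unallocated, lambda i, j: weights[i] * math.log(value(i, {j}) + EPS)):
            phase1[i].add(j)
            unallocated.remove(j)

    # Phase 2: fresh bundles; edge weights use the value of the current bundle
    # together with the candidate item
    bundles = [set() for _ in range(n)]
    while unallocated:
        for i, j in match(unallocated,
                          lambda i, j: weights[i] * math.log(value(i, bundles[i] | {j}) + EPS)):
            bundles[i].add(j)
            unallocated.remove(j)

    # Phase 3: release the Phase 1 items and re-match them once; any leftovers
    # are allocated arbitrarily (here: to agent 0)
    released = sorted(set().union(*phase1))
    if released:
        matched = match(released,
                        lambda i, j: weights[i] * math.log(value(i, bundles[i] | {j}) + EPS))
        for i, j in matched:
            bundles[i].add(j)
        for j in set(released) - {j for _, j in matched}:
            bundles[0].add(j)
    return bundles

# tiny demo with coverage-style submodular valuations (hypothetical data)
covers = [[{"a", "b"}, {"b"}, {"c"}, {"d"}], [{"a"}, {"c", "d"}, {"d"}, {"b"}]]
val = lambda i, S: float(len(set().union(*(covers[i][j] for j in S)))) if S else 0.0
print(repre_match_sketch(val, [1.0, 1.0], m=4, phase1_iters=1))
```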

3.2 Notation

There are three phases in Algorithm 2. We denote the set of items received by agent $i$ in Phase 1 by $x_i^1$, and its size by $|x_i^1|$. Similarly, $x_i$ and $|x_i|$ respectively denote the final set of items received by agent $i$ and the size of this set. Note that Phase 3 releases and re-allocates selected items of Phase 1, thus $x_i$ is not necessarily a superset of $x_i^1$. The items allocated to agent $i$ in Phase 2 are denoted by $x_i^2$, and we also refer to the complete set of items received by agent $i$ in iterations 1 to $t$ of Phase 2, for any $t \ge 1$.

For the analysis, the marginal utility of an agent $i$ for an item $j$ over a set of items $S$ is denoted by $v_i(j \mid S)$. Similarly, we denote by $v_i(T \mid S)$ the marginal utility of a set $T$ over a set $S$, where $v_i(T \mid S) = v_i(S \cup T) - v_i(S)$. We use $x^* = (x_1^*, \dots, x_n^*)$ to denote an optimal allocation of all items that maximizes the NSW, and $\mathrm{OPT}$ for its NSW value. For every agent $i$, the items in $x_i^*$ are ranked so that each item gives the highest marginal utility over the set of all higher ranked items; that is, for every $k$, the $k$-th ranked item is the item that gives the highest marginal utility over the first $k-1$ ranked items. (Since the valuations are monotone submodular, this ensures that the marginal contributions, counted in this order, are non-increasing. This implies that in any subset of items of the optimal bundle, the highest ranked item's marginal contribution is at least a $1/k$ fraction of that of the subset, where $k$ is the size of the subset and marginal contributions are counted in this way.)
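The ranking of an optimal bundle described above is a greedy ordering by marginal gains; the following sketch (hypothetical names, with a coverage valuation as the example) computes such an ordering.

```python
def greedy_marginal_order(f, items):
    """Order items so that each one maximizes the marginal gain over the set of
    items ranked before it; f is a monotone submodular set function."""
    order, chosen, remaining = [], set(), set(items)
    while remaining:
        j = max(remaining, key=lambda k: f(chosen | {k}) - f(chosen))
        order.append(j)
        chosen.add(j)
        remaining.remove(j)
    return order

covers = {0: {"a"}, 1: {"a", "b"}, 2: {"c"}}
coverage = lambda S: len(set().union(*(covers[j] for j in S))) if S else 0
print(greedy_marginal_order(coverage, [0, 1, 2]))  # e.g. [1, 2, 0]
```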

We also consider, for each agent $i$, the set of items from her optimal bundle that are not allocated (to any agent) at the end of Phase 1, together with the total valuation and the number of these items. For convenience, we use a lighter shorthand for the valuation of a set of items and, similarly, for the marginal utility of one set over another.

3.3 Analysis

We will prove Theorem 3.1 using a series of supporting lemmas. We first prove that in Phase 2, the minimum marginal utility of an item allocated to an agent, over her current allocation from the previous iterations of Phase 2, is not too small. This is the main result that allows us to lower bound the valuation of the set of items allocated in Phase 2.

In the $t$-th iteration of Phase 2, Algorithm 2 finds a maximum weight matching. Here, the algorithm tries to assign to each agent an item that gives her the maximum marginal utility over her currently allocated set of items. However, every agent is competing with the other agents for these items, so instead of receiving her best item she might lose a few highly ranked items to other agents. Consider the intersection of the set of items that agent $i$ loses to other agents in the $t$-th iteration with the set of items left from her optimal bundle at the beginning of this iteration; we refer to this set as the items agent $i$ loses in iteration $t$, and we will also refer to its cardinality.

For the analysis of Algorithm 2 we also introduce the notion of attainable items for every iteration. The lost items defined above are an agent's preferred items that she lost to other agents; the items that are left are referred to as the agent's attainable items. Note that in any of the computed matchings, every agent gets an item equivalent to her best attainable item, that is, an item for which her marginal valuation (over her current allocation) is at least that for her highest marginally valued attainable item.

For every iteration $t$, we consider the intersection of the set of attainable items in the $t$-th iteration with agent $i$'s optimal bundle, and also the total valuation of her attainable items at the first iteration of Phase 2. In the following lemma, we prove a lower bound on the marginal valuation of the set of attainable items over the set of items that the algorithm has already allocated to the agent.

Lemma 3.1.

For any ,

Proof.

We prove this lemma using induction on the number of iterations. Consider the base case of the first iteration. The agent has already been allocated her earlier items, and now has a bounded number of items left from her optimal bundle that are not yet allocated. In the next iteration the agent loses some of these items to other agents and receives one item. Each of the remaining items has marginal utility over her current allocation at most that of the received item; thus, the marginal utility of this set of items over her current allocation is bounded as well. We bound the total marginal valuation of the attainable items over her current allocation by considering two cases.

Case 1: By monotonicity of the valuation function,

Case 2: Here,

In both cases, the submodularity of the valuations, together with the facts above, implies