“How should a central agency incentivize agents to create high value, and then distribute this value among them in a fair manner?” – this question forms the central theme of this paper. Formally, we model a set of selfish agents in a combinatorial setting consisting of a set of projects. Each project is characterized by a valuation function; specifies the welfare generated by a set of agents working on project . The problem that we study is the following: compute an assignment of agents to projects to maximize social welfare, and provide rewards or payments to each agent so that no group of agents deviate from the centrally prescribed solution.
For example, consider a firm dividing its employees into teams to tackle different projects. If these employees are not provided sufficient remuneration, then some group could break off, and form their own startup to tackle a niche task. Alternatively, one could imagine a funding agency incentivizing researchers to tackle specific problems. More generally, a designer’s goal in such a setting is to delicately balance the twin objectives of optimality and stability: forming a high-quality solution while making sure this solution is stable. A common requirement that binds the two objectives together is budget-balancedness: the payments provided to the agents must add up to the total value of the given solution.
Cooperative Coalition Formation The question of how a group of agents should divide the value they generate has inspired an extensive body of research spanning many fields [7, 28, 32, 38, 40]. The notion of a ‘fair division’ is perhaps best captured by the Core: a set of payments so that no group of agents would be better off forming a coalition by themselves. Although the Core is well understood, implicit in the papers that study this notion is the underlying belief that there are infinite copies of one single project [5, 12], which is often not realistic. For example, a tacit assumption is that if the payments provided are ‘not enough’, then every agent can break off, and simultaneously generate a value of by working alone; such a solution does not make sense when the number of projects or possible coalitions is limited. Indeed, models featuring selfish agents choosing from a finite set of distinct strategies are the norm in many real-life phenomena: social or technological coordination [1, 2], opinion formation [13, 21], and party affiliation [6, 8] to name a few.
The fundamental premise of this paper is that many coalition formation settings feature multiple non-identical projects, each with its own (subadditive) valuation . Although our model allows for duplicate projects, the inherently combinatorial nature of our problem makes it significantly different from the classic problem with infinite copies of a single project. For example, in the classic setting with a single valuation , the welfare maximization problem is often trivial (complete partition when is subadditive), and the stabilizing core payments are exactly the dual variables to the allocation LP . This is not the case in our setting where even the welfare maximization problem is NP-Hard, and known approximation algorithms for this problem use LP-rounding mechanisms, which are hard to reconcile with stability. Given this, our main contribution is a poly-time approximation algorithm that achieves stability without sacrificing too much welfare.
1.1 The Core
Given an instance with agents () and projects, a solution is an allocation
of agents to projects along with a vector of payments. With unlimited copies of a project, core stability refers to the inability of any set of agents to form a group on their own and obtain more value than the payments they receive. The stability requirement that we consider is a natural extension of core stability to settings with a finite number of fixed projects. That is, when a set of agents deviate to project , they cannot displace the agents already working on that project . Therefore, the payments of the newly deviated agents (along with the payments of everyone else on that project) must come from the total value generated, . One could also take the Myersonian view  that ‘communication is required for negotiation’ and imagine that all the agents choosing project together collaborate to improve their payments. Formally, we define a solution to be core stable if the following two conditions are satisfied,
No set of agents can deviate to a project and obtain more total value for everyone in that project than their payments, i.e., for every set of agents and project , .
The total payments sum up to the social welfare (i.e., total value) of the solution: .
Observe that Stability for together with budget-balancedness imply that the value created from a project will go to the agents on that project only. Finally, we consider a full-information setting as it is reasonable to expect the central authority to be capable of predicting the value generated when agents work on a project.
(Example 1) We begin our work with an impossibility result: even for simple instances with two projects and four agents, a core stable solution need not exist. Consider , and define and otherwise; for all . If all agents are assigned to project , then in a budget-balanced solution at least one agent has to have a payment of at most ; such an agent would deviate to project . Instead, if some agents are assigned to project , then it is not hard to see that they can deviate to project and the total utility goes from to .
Approximating the Core Our goal is to compute solutions that guarantee a high degree of stability. Motivated by this, we view core stability under the lens of approximation. Specifically, as is standard in cost-sharing literature [29, 39], we consider relaxing one of the two requirements for core stability while retaining the other one. First, suppose that we generalize the Stability criterion as follows:
(-Stability) For every set of agents and every project ,
-stability captures the notion of a ‘switching cost’ and is analogous to an Approximate Equilibrium; in our example, one can imagine that employees do not wish to quit the firm unless the rewards are at least a factor larger. In the identical projects literature, the solution having the smallest value of is known as the Multiplicative Least-Core . Next, suppose that we only relax the budget-balance constraint,
(-Budget Balance) The payments are at most a factor larger than the welfare of the solution.
This generalization offers a natural interpretation: the central authority can subsidize the agents to ensure high welfare, as is often needed in other settings such as public projects or academic funding . In the literature, this parameter has been referred to as the Cost of Stability [4, 35].
We do not argue which of these two relaxations is the more natural one: clearly that depends on the setting. Fortunately, it is not difficult to see that these two notions of approximation are equivalent. In other words, every approximately core stable solution with -stability can be transformed into a solution with -budget balancedness by scaling the payments of every player by a factor . Therefore, in the rest of this paper, we will use the term -core stable without loss of generality to refer to either of the two relaxations. All our results can be interpreted either as forming fully budget-balanced payments which are -stable, or equivalently as fully stable payments which are -budget balanced. Finally, the problem that we tackle in this paper can be summarized as follows:
(Problem Statement) Given an instance with subadditive valuation functions, compute an -core stable solution having as small a value of as possible, that approximately maximizes social welfare.
1.2 Our Contributions
The problem that we face is one of bi-criteria approximation: to simultaneously optimize both social welfare and the stability factor ( refers to a core stable solution). For the rest of this paper, we will use the notation -Core stable solution to denote an -Core solution that is also a -Approximation to the optimum welfare. The bounds that we derive are quite strong: we are able to approximate both and simultaneously to be close to the individually best-possible lower bounds. In a purely algorithmic sense, our problem can be viewed as one of designing approximation algorithms that require the additional property of stabilizability.
Main Result Our main result is the following black-box reduction that reduces the problem of finding an approximately core stable solution to the purely algorithmic problem of welfare maximization,
For any instance where the projects have subadditive valuations, any LP-based -approximation to the optimum social welfare can be transformed in poly-time to a -core stable solution.
The strength of this result lies in its versatility: our algorithm can stabilize any input allocation at the cost of half the welfare. The class of subadditive valuations is extremely general, and includes many well-studied special classes all of which use LP-based algorithms for welfare maximization; one can simply plug-in the value of for the corresponding class to derive an approximately core stable solution. In particular, for general subadditive valuations, one can use the -approximation algorithm of Feige  and obtain a -Core. As is standard in the literature , we assume that our subadditive functions are specified in terms of a demand oracle (see Section 2 for more details). However, even in the absence of a demand oracle, one can obtain a poly-time reduction as long as we are provided an allocation and the optimum dual prices as input.
For various sub-classes of subadditive valuations, we obtain stronger results by exploiting special structural properties of those functions. These results are summarized in Table 1. The classes that we study are extremely common and have been the subject of widespread interest in many different domains.
|Valuation Function Class||Our Results: -Core||Lower Bound for|
|Fractionally Subadditive (XoS)|||
Lower Bounds. All of our results are ‘almost tight’ with respect to the theoretical lower bounds for both welfare maximization and stability. Even with anonymous functions, a core may not exist; thus our -approximation for this class is tight. For general subadditive functions, one cannot compute better than a -approximation to the optimum welfare efficiently, and so our result has only a gap of in both criteria. Finally, for XoS and Submodular functions, we get almost stable solutions (-Core) that match the lower bounds for welfare maximization.
A Fast Algorithm for Anonymous Subadditive Functions We devise a greedy -approximation algorithm for anonymous functions that may be of independent algorithmic interest. The only known -approximation algorithm even for this special class is the rather complex LP rounding mechanism for general subadditive functions. In contrast, we provide an intuitive greedy algorithm that obtains the same factor, and use the structural properties of our algorithm to prove improved bi-criteria bounds ( as opposed to ).
Ties to Combinatorial Auctions with Item Bidding We conclude by pointing out a close relationship between our setting and simultaneous auctions where buyers bid on each item separately [9, 14, 17]. Consider ‘flipping’ an instance of our problem to obtain the following combinatorial auction: every project is a buyer with valuation , and every is an item in the market. We prove an equivalence between Core stable solutions in our setting and Pure Nash equilibrium for the corresponding flipped simultaneous second price auction. Adapting our lower bounds to the auction setting, we make a case for Approximate Nash Equilibrium by constructing instances where every exact Nash equilibrium requires buyers to overbid by a factor of , when they have anonymous subadditive valuations. Finally, we apply our earlier algorithms to efficiently compute approximate equilibria with small over-bidding for two settings, namely, a -optimal, -approximate equilibrium when buyers have anonymous subadditive valuations, and a -optimal, -approximate equilibrium for submodular buyers.
1.3 Related Work
The core has formed the basis for a staggering body of research in a myriad of domains, and one cannot hope to do justice to this vast literature. Therefore, we only review the work most pertinent to our model. The non-existence of the core in many important settings has prompted researchers to devise several natural relaxations [3, 4, 43, 41]: of these, the Cost of Stability [4, 7, 35], and the Multiplicative Least-Core [10, 42, 43] are the solution concepts that are directly analogous to our notion of an -core. That said, there are a few overarching differences between our model, and almost all of the papers studying the core and its relatives; Duplicate Projects: In the classic setting, it is assumed that there are infinite copies of one identical project so that different subsets of agents (say , ) working independently on the same project can each generate their full value , and Superaddivity: In order to stabilize the grand coalition, most papers assume that the valuation is superadditive, which inherently favors cooperation. On the contrary, our setting models multiple dissimilar projects where each project is a fixed resource with a subadditive valuation.
Although cooperative games traditionally do not involve any optimization, a number of papers have studied well-motivated games where the valuation or cost function (
) is derived from an underlying combinatorial optimization problem[15, 26, 28, 34]. For example, in the vertex cover game [15, 19] where each edge is an agent, is the size of the minimum cover for the edges in . Such settings are fundamentally different from ours because the hardness arises from the fact that the value of the cost function cannot be obtained precisely. For many such problems, core payments can be computed almost directly using LP Duality [26, 28, 34].
In the cooperative game theory literature, our setting is perhaps closest to the work studying coalitional structures where instead of forming the grand coalition, agents are allowed to arbitrarily partition themselves [4, 27] or form overlapping coalitions . This work has yielded some well-motivated extensions of the Core, albeit for settings with duplicate projects. Our work is similar in spirit to games where agents form coalitions to tackle specific tasks, e.g., threshold task games  or coalitional skill games . In these games, there is still a single valuation function which depends on the (set of) task(s) that the agents in can complete. Once again, the tacit assumption is that there are an infinite number of copies of each task.
Recently, there has been a lot of interest in designing cost-sharing mechanisms that satisfy strategy-proofness in settings where a service is to be provided to a group of agents who hold private values for the same [37, 16]
. In contrast, we look at a full information game where the central agency can exactly estimate the output due to a set of agents working on a project. A powerful relationship between our work and the body of strategy-proof mechanisms was discovered by Moulin who showed that a natural class of ‘cross-monotonic cost sharing schemes’ can be used to design mechanisms that are both core-stable (CS) and strategy-proof (SP). This has led to the design of beautiful SP+CS mechanisms for several combinatorially motivated problems with a single identical project or service [25, 39]. Finally, we briefly touch upon the large body of literature in non-transferable utility games that (like us) study coalition formation with a finite number of asymmetric projects [2, 11, 13, 23]. However, these papers use fixed reward-sharing schemes, and thus do not model the bargaining power of agents that is a key aspect of coalition formation.
2 Model and Preliminaries
We consider a transferable-utility coalition formation game with a set of projects and a set of agents. Each project is specified by a monotone non-decreasing valuation function . A solution consists of an allocation of agents to projects , and a payment scheme and is said to be -core stable for , if
The payments are fully budget-balanced, and for every project , and set of agents, . An equivalent condition is that the payments are at most a factor times the social welfare of the solution, and we have full stability, i.e., .
The allocation is a -approximation to the optimum allocation, i.e., the welfare of the solution is at least times the optimum welfare.
Throughout this paper, we will use to denote the welfare maximizing allocation as long as the instance is clear. Given an allocation , we use to denote the social welfare of this allocation, and to be the set of projects that are empty under , i.e., if .
Comparison to Traditional Models
We digress briefly to highlight the key differences between our model as defined above and traditional utility-sharing settings found in the literature. Traditionally, a transferable-utility coalition formation game consists of a single valuation function . The objective there is to provide a vector of payments ( to user ) in order to stabilize some desired solution 111Usually, this is the grand coalition but it can also refer to other solutions, for example, the social welfare maximizing solution, where the number of coalitions can be any positive integer. Here, core stability means that for any group of agents , . Notice from the above definition that (unlike our setting), the same core payments are applicable for every single solution , i.e., the payments are completely independent of the solution formed.
A stark contrast to our notion of a stable solution is the implicit assumption that there are an infinite number of copies of a single project (specified by ) available for the agents to deviate to. For instance, a necessary condition for core stability is that for every agent ; this implies that in theory, each of the agents could work independently on the same project and generate a total value of and not . As mentioned in the introduction, such assumptions do not always make sense, and it is reasonable to assume that the value generated depends on which project the agents deviate to, and how many other agents are currently working on that project or resource. Finally, in the traditional model, the minimum core payments (irrespective of the solution) can be obtained directly using the dual of the allocation LP. In contrast, this is not so in our setting due to the presence of slack variables (See Section 3).
Our main focus in this paper will be on the class of monotone subadditive valuation functions. A valuation function is said to be subadditive if for any two sets , , and monotone if . The class of subadditive valuations encompasses a number of popular and well-studied classes of valuations, but at the same time is significantly more general than all of these classes. It is worth noting that when there are an unlimited number of allowed groups, subadditive functions are almost trivial to deal with: both the maximum welfare solution and the stabilizing payments are easily computable. For our setting, however, computing OPT becomes NP-Hard, and a fully core-stable solution need not exist. Due to the importance and the natural interpretation of subadditive functions, we believe it is very desirable to understand utility sharing under such valuations; our paper presents the first known results on utility sharing for general subadditive functions. In addition, we are able to show stronger results for the following two sub-classes that are extremely common in the literature.
- Submodular Valuations
For any two sets with , and any agent , .
- Fractionally Subadditive (also called ‘XoS’) Valuations
a set of additive functions such that for any , . These additive functions are referred to as clauses.
Recall that an additive function has a single value for each so that for a set of agents, . The reader is asked to refer to [18, 20, 31, 44] for alternative definitions of the XoS class and an exposition on how both these classes arise naturally in many interesting applications.
Anonymous Subadditive Functions In project assignment settings in the literature modeling a number of interesting applications [30, 35], it is reasonable to assume that the value from a project depends only on the number of users working on that project. Mathematically, this idea is captured by anonymous functions: a valuation function is said to be anonymous if for any two subsets with , we have . One of our main contributions in this paper is a fast algorithm for the computation of Core stable solutions when the projects have anonymous subadditive functions. We remark here that anonymous subadditive functions form an interesting sub-class of subadditive functions that are quite different from submodular and XoS functions.
The standard approach in the literature while dealing with set functions (where the input representation is often exponential in size) is to assume the presence of an oracle that allows indirect access to the valuation by answering specific types of queries. In particular, when dealing with a subadditive function , it is typical to assume that we are provided with a demand oracle that when queried with a vector of payments , returns a set that maximizes the quantity . Demand oracles have natural economic interpretations, e.g., if represents the vector of potential payments by a firm to its employees, then denotes the assignment that maximizes the firm’s revenue or surplus.
In this paper, we do not explicitly assume the presence of a demand oracle; our algorithmic constructions are quite robust in that they do not make any demand queries. However, any application of our black-box mechanism requires as input an allocation which approximates OPT, and the optimum dual prices, both of which cannot be computed without demand oracles. For example, it is well-known  that one cannot obtain any reasonable approximation algorithm for subadditive functions (better than ) in the absence of demand queries. That said, for several interesting valuations, these oracles can be constructed efficiently. For example in the case of XoS functions, a demand oracle can be simulated in time polynomial in the number of input clauses. We conclude this discussion by reiterating that demand oracles are an extremely standard tool used in the literature to study combinatorial valuations; almost all of the papers [18, 25, 20] studying Subadditive or XoS functions take the presence of a demand oracle for granted.
2.1 Warm-up Result: -Core for Submodular Valuations
We begin with an easy result: an algorithm that computes a core stable solution when all projects have submodular valuations, and also retains half the optimum welfare. Although this result is not particularly challenging, it serves as a useful baseline to highlight the challenges involved in computing stable solutions for more general valuations. Later, we show that by sacrificing an amount of stability, one can compute for submodular functions, a solution with a much better social welfare -approximation to OPT).
We can compute in poly-time a -Core stable solution for any instance with submodular project valuations.
The above claim also implies that for every instance with submodular project valuations, there exists a Core stable solution. In contrast, for subadditive valuations, even simple instances (Example 1) do not admit a Core stable solution.
The proof uses the popular greedy half-approximation algorithm for submodular welfare maximization due to . Initialize the allocation to be empty. At every stage, add an agent to project so that the value is maximized. Set ’s final payment to be exactly the above marginal value. Let the final allocation once the algorithm terminates be , so . Consider any group of agents , and some project : by the definition of the greedy algorithm, and by submodularity, it is not hard to see that , . Therefore, we have that , and since the payments are clearly budget-balanced, the solution is core-stable.
Challenges and Techniques for Subadditive Valuations At the heart of finding a Core allocation lies the problem of estimating ‘how much is an agent worth to a coalition’. Unfortunately, the idea used in Claim 2.1 does not extend to more general valuations as the marginal value is no longer representative of an agent’s worth. One alternative approach is to use the dual variables to tackle this problem: for example, in the classic setting with duplicate projects, every solution along with the dual prices as payments yields an -budget balanced core. Therefore, the challenge there is to bound the factor using the integrality gap. However, this is no longer true in our combinatorial setting as the payments are closely linked to the actual solution formed, and moreover, there is no clear way of dividing the dual variables due to the presence of slack (see LP 1).
Our Approach. We attempt to approximately resolve the question of finding each agent’s worth by identifying (for each project) a set of “heavy users” who contribute to half the project’s value. We provide large payments to each heavy user based on her best outside option which are determined using Greedy Matchings. Finally, the dual variables are used only as a ‘guide’ to ensure that , the payment given to the users on that project is at least a good fraction of the value they generate.
3 Computing Approximately Core Stable Solutions
In this section, we show our main algorithmic result, namely a black-box mechanism that reduces the problem of finding a core stable solution to the algorithmic problem of subadditive welfare maximization. We use this black-box in conjunction with the algorithm of Feige  to obtain a -Core stable solution, i.e., a -approximate core that extracts one-fourth of the optimum welfare. Using somewhat different techniques, we form stronger bounds (-Core) for the class of anonymous subadditive functions. Our results for the class of anonymous functions are tight: there are instances where no -core stable solution exists. This indicates that our result for general subadditive valuations is close to tight (up to a factor of two).
We begin by stating the following standard linear program relaxation for the problem of computing the welfare maximizing allocation. Although the primal LP contains an exponential number of variables, the dual LP can be solved using the Ellipsoid method where the demand oracle serves as a separation oracle. The best-known approximation algorithms for many popular classes of valuations use LP-based rounding techniques; of particular interest to us is the -approximation for Subadditive valuations , and -approximation for XoS valuations .
As long as the instance is clear from the context, we will use to denote the optimum solution to the Dual LP, referring to as the dual prices, and as the slack.
Main Result We are now in a position to show the central result of this paper. The following black-box mechanism assumes as input an LP-based -approximate allocation, i.e., an allocation whose social welfare is at most a factor smaller than the value of the LP optimum for that instance. LP-based approximation factors are a staple requirement for black-box mechanisms that explicitly make use of the optimum LP solution . Along these lines, we make the assumption that the optimum dual variables (for the given instance) are available to the algorithm along with an input allocation.
Given any -approximate solution to the LP optimum, we can construct a -Core Stable Solution in polynomial time as long as the projects have subadditive valuations.
For general subadditive functions, the only known poly-time constant-factor approximation is the rather intricate randomized LP rounding scheme proposed in . Using this -approximation, we get the following corollary.
We can compute in poly-time a -Core stable solution for any instance with subadditive projects.
We now prove Theorem 3.1.
We provide an algorithm that takes as input an allocation that is an -approximation to the LP Optimum and returns a core stable solution along with payments whose welfare is at least half that of , and such that the total payments are at most times the welfare of .
Recall that in a core stable solution , it is necessary that for every project and set of agents, . A naive approach is to consider whether the dual payments (price plus the slack divided equally among ) would suffice to enforce core stability to the solution . Unfortunately, this naive strategy fails because the payments are not enough to prevent the deviation of agents to empty projects. To remedy this, we take the following approach: we implement a matching-based routine that allows us to identify the ‘light’ and ‘heavy’ users at each project so that when the light users deviate to the empty projects, there is not much welfare loss (and vice-versa for the heavy users). We assign the light users to these projects, and provide the heavy users with payments that depend on both ‘the best outside option’ available to them and their contribution to social welfare in order to stabilize them.
We begin by defining a simple Greedy Matching with Reserve Prices procedure that will serve as a building block for our main algorithm. The procedure is straightforward so we state it in words here and formally define it in Appendix 0.A.
Algorithm 2: “Begin with an input allocation and initial payments. During every iteration, assign an agent to a currently empty project , as long as her current payment , and update her payment to . Terminate when for each agent and empty project .”
We begin our analysis of the above procedure with a simple observation: during the course of the algorithm, the payments of the agents are non-decreasing (in fact, in every iteration, the payment of at least one agent strictly increases). Specifically, we are interested in analyzing the solution returned by the algorithm when the input allocation is , and the input payments are the naive dual payments discussed above. We first describe some notation and then prove some lemmas regarding the solution returned by the algorithm for this input. Recall that for any given solution , denotes the set of empty projects under .
We denote by the marginal contributions given by the optimal dual payment plus the slack divided equally as per , i.e., if agent , then . Suppose we run the algorithm on the input ; let the corresponding output allocation be , and the payments be . Also define for every , to be the agents who remained on project , to be the agents who left project , and to be the set of projects that the agents in switched to in allocation . Note that all the projects in will only have one agent each in due to the definition of the algorithm. We now divide the non-empty projects in into two categories based on the welfare lost after running the algorithm. Specifically, consider any project in . We refer to as a good project if the welfare in due to the agents originally in is at least half their original welfare, and refer to as a bad project otherwise. That is, is a good project iff,
The following lemma which we prove in the Appendix establishes the crucial fact that although bad projects may result in heavy welfare losses, they surprisingly retain at least half the agents originally assigned to them under . Later, we use this to infer that the agents who deviated from bad projects are ‘heavy’ users who contribute significantly to the project’s welfare.
For every bad project , , i.e., more than half the agents in still remain in project .
Our next lemma relates the output payments to the optimal dual variables.
For every project , and every agent who is allocated to in , her payment under is not larger than .
We prove the lemma in two cases. First consider any agent whose allocation remains the same (say project ) during the entire course of Algorithm 2 for the input . Clearly, this agent’s final payment returned by the algorithm is exactly the same as her initial payment . However, we know that . Therefore, .
Next, consider an agent whose allocation changed at some point during the course of the algorithm. This means that . Then, by definition, her final payment is exactly , which is not larger than by dual feasibility. ∎
Main Algorithm: Phase I While the returned solution is indeed core stable, its welfare may be poor due to the presence of one or more bad projects. Instead of using solution , we use its structure as a guide for how to form a high-welfare solution. For good projects, we can put the agents in onto and the agents in onto ; since these are good projects this is guaranteed to get us half of the welfare , as desired. For bad projects, on the other hand, more than half of the welfare disappeared when we moved agents on away; due to sub-additivity this means that . So instead we will assign agents in to project (which is the opposite of what happens in solution ), and put some agents from onto projects . This is Phase I of our main algorithm, defined formally in Appendix 0.A.
Payments at the end of Phase I: Suppose that the allocation at the end of the above procedure is ; let us define the following payment vector . For every good project : for each agent assigned to , her payment is For every bad project , define to be the set of dummy agents belonging to that project. Each dummy agent receives exactly as payment; every non-dummy agent assigned to a bad project receives plus the left over slack from that project. For each bad project , every agent assigned to some receives a payment of .
We break the flow of our algorithm and show some properties satisfied by the solution returned by Phase I of our algorithm . Mainly we show that this solution is almost core-stable and has desired welfare properties. In Phase II, we once again invoke our Greedy Matching Procedure to ensure core-stability. Recall that every bad project contains at least one dummy agent; all the agents other than the dummy agents will be referred to as non-dummy agents.
For every agent that does not belong to the set of dummy agents, her payment at the end of the first phase () is at least her payment returned by the call to the Greedy Matching Procedure .
Specifically, the above lemma implies that with respect to the non-dummy agents, our solution retains the ‘nice’ stability properties guaranteed by the greedy matching procedure.
For every empty project in (i.e., ), and every non-dummy agent , her payment at the end of the first phase is at least her individual valuation for project , i.e.,
Now that we have a lower bound on the payments returned by the first phase of our algorithm, we show a stricter lemma giving an exact handle on the payments.
For every non-empty project , the total payment to agents in at the end of Phase I is exactly .
Our final lemma shows that the total welfare at the end of the first phase is at least half the welfare of the original input allocation .
For every good project , the welfare due to the agents in is at least half of . For every bad project , the welfare due to the non-dummy agents in , i.e., is at least half of .
The first half of the lemma is trivially true because of the definition of good projects and the fact that the allocation of agents to the projects in is the same as the allocation returned by the call to Algorithm 2.
Moving on to bad projects, we know that and the agents in are exactly the non-dummy agents in project . Therefore, we have
Observe that by virtue of this lemma, we can immediately obtain that the solution returned by the first phase of our algorithm has half the social welfare of the allocation . ∎
Main Algorithm - Phase II
From the above lemmas, it is not hard to conclude that the solution at the end of Phase I has good social welfare and is resilient against deviations to empty projects as long as we only consider non-dummy agents222We can also show that the solution is resilient against deviations to non-empty projects, although this is not needed at this time.. In the second phase of our algorithm, we fix this issue by allowing dummy agents to deviate to empty projects using our Greedy Matching procedure and lower bounding their final payments using the dual variables. We formalize the algorithm for Phase II in the Appendix. Suppose that is the solution returned by the Greedy Matching Algorithm with input , and is the payment vector where all the agents who deviated from receive as much as their dual variables, and the rest of the agents receive their payment under . We now state some simple properties that compare the output of Phase II with its input, and formally prove them in Appendix 0.A.
The following properties are true:
The set of empty projects in is a subset of the set of empty projects in , i.e., .
For all non-dummy agents, their strategies in and coincide.
For every agent , her payment at the end of Phase II () is at least her payment at the end of Phase I.
Our final lemma before showing the main theorem tells us that for every project, the total payment made to agents of that project coincides with the dual payments. The final payments to agents, therefore, are simply a redistribution of the dual payments. We defer its proof to the Appendix.
For every non-empty project , the total payments made to agents in is exactly . Moreover, the payment made to any agent is at least her dual price .
The rest of the theorem follows almost immediately. We begin by showing that the solution is core stable. Consider any project , and a deviation by some set of agents to this project. We only have to show that the total payment made to the agents in is at least . We proceed in two cases.
|By Lemma 6|
|By dual feasibility|
We now establish that the social welfare of our solution is at least half the social welfare of the original allocation . Recall that every non-empty project in
was classified as agood or bad project. For every good project and its associated projects , the fact that follows from Lemma 5 since , and .
Consider any bad project . We know that for every non-dummy agent in , her strategy in is still project . Therefore, the welfare due to any bad project is at least the welfare due to the non-dummy agents in that project which by Lemma 5 is at least half of . Finally, all that remains is to show that the total payments made in are at most a factor larger than the welfare of the solution.
From Lemma 6, we know that the total payments made to agents at the end of Phase I is at most the value of the Dual Optimum of LP 1, which by Strong Duality is equal to the value of the Primal Optimum. However, we know that the welfare of is at least half the welfare of , which by definition is at most a factor away from the LP Optimum. This completes the proof.
3.1 Anonymous Functions
Our other main result in this paper is a -Core stable solution for the class of subadditive functions that are anonymous. Recall that for an anonymous valuation , for any . Such functions are frequently assumed in coalition formation and project assignment settings . We begin with some existential lower bounds for approximating the core. From Example (1), we already know that the core may not exist even in simple instances. Extending this example, we show a much stronger set of results.
(Lower Bounds) There exist instances having only two projects with anonymous subadditive functions such that
For any , no -core stable solution exists for any value .
For any , no -core stable solution exists for any value .
(Proof Sketch) (Part 1) For ease of notation, we show that no -budget-balanced core stable solution exists for a given . Consider an instance with buyers. The valuations for the two projects are , and ; . Assume by contradiction that there is a -core stable solution, then this cannot be achieved when all of the agents are assigned to project because they would each require a payment of to prevent them from deviating to project . On the other hand, suppose that some agents are assigned to project , then the social welfare of the solution is at most . If these agents cannot deviate to project , then, the total payments would have to be at least . For a sufficiently large , we get that the budget-balance is . The example for Part 2 is provided in the Appendix.
We now describe an intuitive -approximation algorithm (Algorithm 1) for maximizing welfare that may be of independent interest. To the best of our knowledge, the only previously known approach that achieves a -approximation for anonymous subadditive functions is the LP-based rounding algorithm for general subadditive functions . Our result shows that for the special class of anonymous functions, the same approximation factor can be achieved by a much faster, greedy algorithm. In addition, our greedy algorithm also possesses other ‘nice structural properties’ that may be of use in other settings such as mechanism design .
Recall that the quantity refers to .
Although Algorithm 1 is only an approximation algorithm, the following theorem shows that we can utilize the greedy structure of the allocation and devise payments that ensure core stability. In particular, the solution that we use to construct a (yet to be proved) -core is , where is the allocation returned by the algorithm, and is the payment provided to agent . We remark that the ‘marginal contributions’ are defined only for the sake of convenience. They do not serve any other purpose. We make the following simple observation regarding the algorithm: the total social welfare of the solution , is exactly equal to the sum of the marginal contributions .
For any instance with anonymous subadditive projects, the allocation returned by Algorithm 1 along with a payment of for every constitutes a )-core stable solution.
We begin with some basic notation and simple lemmas highlighting the structural properties of our algorithm leading up to the main result. First, note that the total payments are exactly equal to the twice the aggregate marginal contribution, which in turn is equal to twice the social welfare. Therefore, our solution is indeed -budget balanced. Now, let us divide the execution of the greedy algorithm into iterations from to such that in every iteration, the algorithm chooses a set of unallocated agents maximizing the marginal contribution (average increase in welfare). We define to be the set of agents assigned to some project during iteration . Clearly, all the agents in are allocated to the same project, and have the exact same marginal contribution, and therefore payment. Let us use to refer to the marginal contribution of the agents in .
Note that in order to characterize the state of the algorithm during iteration , it is enough if we express the set of agents assigned to each project, and the set of unallocated agents. Define to be the set of agents allocated to project at the beginning of iteration (before the agents in are assigned), and to be the set of unallocated agents during that instant. Suppose that the agents in are assigned to project , then by definition the following equation must be true,
Finally, given any , we denote by , the ordered set of elements in in the decreasing order of their payment. We begin with a simple property that links the prices to the welfare of every project.
In the final allocation , the social welfare due to every project equals the total marginal contributions to the agents assigned to that project, i.e.,
The proof follows directly from the definition of the algorithm. Now, we establish that as the algorithm proceeds, the marginal contributions of the agents cannot increase.
For every with , the marginal of the agents in is not smaller than the marginal of the agents in , i.e., .
(Proof Sketch) Since, the assignment of agents to any one project does not affect the marginal contribution (or average increase in welfare) in other projects, it suffices to prove the lemma for the case when and are assigned to the same project. The rest of the proof involves showing that if the lemma does not hold, then adding instead of in iteration would have lead to larger average welfare. The full proof is in the Appendix.
Recall that Proposition 1 equates the marginal contributions to the welfare for every set . The following lemma establishes a relationship between payments and welfare for subsets of . Note that since the payments to the agents are exactly twice their marginal, we can use the payments and marginal contributions interchangeably. Once again, its proof is in Appendix 0.B.
For every project , and any given positive integer , the total marginal contribution of the highest paid agents in is at least the value derived due to any set of agents, i.e., if denotes the set of -highest paid agents in , then
We now move on to the most important component of our theorem, which we call the Doubling Lemma. This lemma will serve as the fundamental block required to prove both core stability and the necessary welfare bound. The essence of the lemma is rather simple; it says that if we take some project and add any arbitrary set of elements on top of , then the total resulting welfare is no larger than the final payments to the agents in . We first state the Doubling Lemmma here and then prove that using this lemma as a black-box, we can obtain both our welfare and stability result. The proof of the lemma is deferred to the Appendix.
(Doubling Lemma) Consider any project and the set of elements assigned to in our solution (). Let be some set of agents such that and . Then, the total payment to the agents in is at least , i.e.,
Proof of Core stability
We need to show that our solution is core stable, i.e., for every project and set of agents , . Assume by contradiction that some project and some set that does not satisfy the inequality for stability. We claim that .
If , then there are strictly more agents in the set than in project under , i.e., .
We know that
Let be some arbitrary set of size . Applying Proposition 3, we get,
By monotonicity, it must be that , and so giving us the desired lemma. ∎
So, and satisfy the conditions required for the doubling lemma. Applying the lemma, we get , which is a contradiction. Therefore, our solution is indeed core stable.
Suppose that the optimum solution has a social welfare of . We need to show that the social welfare of our solution is at least half of . Recall that our social welfare is exactly equal to half the payments . Therefore, it suffices if we prove that the welfare of the optimal solution is not larger than the sum of the payments. Our approach is as follows: we will map every project to a proxy set so that . If we ensure that the sets are mutually disjoint, we can sum these inequalities up to get our desired welfare bound.
We begin by dividing the projects into three categories based on the number of agents assigned to these projects in our solution and how this compares to ,
All projects satisfying, ,
All projects satisfying ,
All projects satisfying , i.e., in the optimum solution has more than double the number of agents assigned to in our solution.
We define the sets as follows: for every project , . For every project , is defined as the set of agents in with the highest payments as per our solution. Notice that for every , there are some ‘left over’ agents who are not yet assigned to any . Let be the union of such leftover agents over all projects in .
Finally, for every project , we define to be plus some arbitrarily chosen agents from the set . It is not hard to see that we can choose ’s for the projects in in such a manner that these sets are all mutually disjoint. Indeed, this is true because
The above inequality comes from the fact that summed over all is a non-negative number. Now all that remains is for us to show that for every .
First, look at the projects in . We can show that , where the last inequality comes from Proposition 3 since has at least half as many agents as . Now, for the projects in , we can directly apply Lemma 8 to get .
Finally, look at the projects in . Fix some , and define . Since , we immediately get from the definition of . Therefore, we can apply the important Doubling Lemma and get the desired result. This completes the proof of our final welfare bound.
Envy-Free Payments One interpretation for projects having anonymous valuations is that all the agents possess the same level of skill, and therefore, the value generated from a project depends only on the number of agents assigned to it. In such scenarios, it may be desirable that the payments given to the different agents are ‘fair’ or envy-free, i.e., all agents assigned to a certain project must receive the same payment. The following theorem (which we formally prove in the Appendix) shows that Algorithm 1 can be used to compute a -approximate core that also satisfies this additional constraint of envy-freeness.
For any instance where the projects have anonymous subadditive valuations, there exists a -core stable solution such that the payments are envy-free, i.e., all the agents assigned to a single project receive the same payment.
3.2 Submodular and Fractionally Subadditive (XoS) Valuations
Submodular and Fractionally Subadditive valuations are arguably the most popular classes of subadditive functions, and we show several interesting and improved results for these sub-classes. For instance, for XoS valuations, we can compute a -core using Demand and XoS oracles (see  for a treatment of XoS oracles), whereas without these oracles, we can still compute a -core. For submodular valuations, we provide an algorithm to compute a -core even without a Demand oracle. All of these solutions retain at least a fraction of the optimum welfare, which matches the computational lower bound for both of these classes. We begin with a simple existence result for XoS valuations, that the optimum solution along with payments obtained using a XoS oracle form an exact core stable solution. All the results that we state in this Section, and Section 4 are proved in the Appendix.
There exists a -core stable solution for every instance where the projects have XoS valuations.
Since , this result extends to Submodular valuations as well. Unfortunately, it is known that the optimum solution cannot be computed efficiently for either of these classes unless P=NP . However, we show that one can efficiently compute approximately optimal solutions that are almost-(core)-stable.
For any instance where the projects have XoS valuations, we can compute -core stable solution using Demand and XoS oracles, and a -core stable solution without these oracles.
For submodular valuations, we can compute a -core stable solution using only a Value oracle.
Note that for both the classes, a -core can be computed in time polynomial in the input, and . We conclude by pointing out that the results above are much better than what could have been obtained by plugging in in Theorem 3.1 for Submodular or XoS valuations.
4 Relationship to Combinatorial Auctions
We now change gears and consider the seemingly unrelated problem of Item Bidding Auctions, and establish a surprising equivalence between Core stable solutions and pure Nash equilibrium in Simultaneous Second Price Auctions. Following this, we adapt some of our results specifically for the auction setting and show how to efficiently compute Approximate Nash equilibrium when buyers have anonymous or submodular functions.
In recent years, the field of Auction Design has been marked by a paradigm shift towards ‘simple auctions’; one of the best examples of this is the growing popularity of Simultaneous Combinatorial Auctions [9, 14, 17], where the buyers submit a single bid for each item. The auction mechanism is simple: every buyer submits one bid for each of the items, the auctioneer then proceeds to run -parallel single-item auctions (usually first-price or second-price). In the case of Second Price Auctions, each item is awarded to the highest bidder (for that item) who is then charged the bid of the second highest bidder. Each buyer’s utility is her valuation for the bundle she receives minus her total payment.
We begin by establishing that for every instance of our utility sharing problem, there is a corresponding combinatorial auction, and vice-versa. Formally, given an instance , we define the following ‘flipped auction’: there is a set of items, and a set of buyers. Every buyer has a valuation function for the items. In the simultaneous auction, the strategy of every buyer is a bid vector ; denotes buyer ’s bid for item . A profile of bid vectors along with an allocation is said to be a pure Nash equilibrium of the simultaneous auction if no buyer can unilaterally change her bids and improve her utility at the new allocation.
Nash equilibrium in Simultaneous Auctions is often accompanied by a rather strong no-overbidding condition that a player’s aggregate bid for every set of items is at most her valuation for that set. In this paper, we also study the slightly less stringent weak no-overbidding assumption considered in  and  which states that ‘a player’s total bid for her winning set is at most her valuation for that set’. The set of equilibrium with no-overbidding is strictly contained in the set of equilibrium with weak no-overbidding. Finally, to model buyers who overbid by small amounts, we focus on the following natural relaxation of no-overbidding known as -conservativeness that was defined by Bhawalkar and Roughgarden .
(Conservative Bids)  For a given buyer , a bid vector is said to be -conservative if for all , we have
We now state our main equivalence result that is based on a simple black-box transformation to convert a Core stable solution to a profile of bids that form a Nash Equilibrium: if , and otherwise.
Every Core stable solution for a given instance of our game can be transformed into a Pure Nash Equilibrium (with weak no-overbidding) of the corresponding ‘flipped’ simultaneous second price auction, and vice-versa.
Existence and Computation of Equilibrium Although simultaneous auctions enjoy several desirable properties like good Price of Anarchy [9, 14], their applicability is limited by both existential and computational barriers. Particularly, while a no over-bidding Nash equilibrium always exists for simple valuations like XoS, it may not be possible to actually compute one . For more general subadditive (and even anonymous) valuations, Nash equilibria without over-bidding may not even exist , and whether or not they exist cannot be determined without exponential communication .
A case for Approximate Equilibrium The exciting connection between Core stable solutions and Nash Equilibrium unfortunately extends to negative results as well. One can extend our lower bound examples (See Appendix) to show that even when all buyers have anonymous subadditive functions, there exist instances where every Nash equilibrium requires -conservative bids. The expectation that buyers will overbid by such a large amount appears to be unreasonable. In light of these impossibility results and the known barriers to actually compute a (no-overbidding) equilibrium , we argue that in many auctions, it seems reasonable to consider -approximate Nash equilibrium that guarantee that buyers’ utilities cannot improve by more than a factor when they change their bids. In the following result, we adapt our previous algorithms to compute approximate equilibria with high social welfare for two useful settings. Moreover, these solutions require small over-bidding, and can be obtained via simple mechanisms, so it seems likely that they would actually arise in practice when pure equilibria either do not exist or require a large amount of overbidding.
Given a Second Price Simultaneous Combinatorial Auction, we can compute in time polynomial in the input (and for a given )
A -approximate Nash equilibrium that extracts half the optimal social welfare as long as the buyers have anonymous subadditive valuations.
A -approximate Nash equilibrium that is a -approximation to the optimum welfare when the buyers have submodular valuations.
The first solution involves -conservative bids, and the second solution involves -conservative bids.
Given a submodular valuation , define . Also, define such that . That is is the smallest non-zero increment in utility. Then, the algorithm for Submodular Functions converges in Poly( time. One can contrast this result to an algorithm by  that computes an exact Nash equilibrium in pseudo-polynomial time, i.e., . On the contrary, we show that we can compute an approximate Nash equilibrium in poly-time (using a PTAS).
We conclude by remarking that despite the large body of work in Simultaneous Auctions, our main results do not follow from any known results in that area, and we hope that our techniques lead to new insights for computing auction equilibria.
Elliot Anshelevich and Shreyas Sekar.
Approximate equilibrium and incentivizing social coordination.
Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27 -31, 2014, Québec City, Québec, Canada., pages 508–514, 2014.
-  John Augustine, Ning Chen, Edith Elkind, Angelo Fanelli, Nick Gravin, and Dmitry Shiryaev. Dynamics of profit-sharing games. Internet Mathematics, 11(1):1–22, 2015.
-  Robert J Aumann and Michael Maschler. The bargaining set for cooperative games. Advances in game theory, 52:443–476, 1964.
-  Yoram Bachrach, Edith Elkind, Reshef Meir, Dmitrii V. Pasechnik, Michael Zuckerman, Jörg Rothe, and Jeffrey S. Rosenschein. The cost of stability in coalitional games. In Algorithmic Game Theory, Second International Symposium, SAGT 2009, Paphos, Cyprus, October 18-20, 2009. Proceedings, pages 122–134, 2009.
-  Yoram Bachrach, David C. Parkes, and Jeffrey S. Rosenschein. Computing cooperative solution concepts in coalitional skill games. Artif. Intell., 204:1–21, 2013.
-  Maria-Florina Balcan, Avrim Blum, and Yishay Mansour. Improved equilibria via public service advertising. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2009, New York, NY, USA, January 4-6, 2009, pages 728–737, 2009.
-  Camelia Bejan and Juan Camilo Gómez. Core extensions for non-balanced tu-games. Int. J. Game Theory, 38(1):3–16, 2009.
-  Anand Bhalgat, Tanmoy Chakraborty, and Sanjeev Khanna. Approximating pure nash equilibrium in cut, party affiliation, and satisfiability games. In Proceedings 11th ACM Conference on Electronic Commerce (EC-2010), Cambridge, Massachusetts, USA, June 7-11, 2010, pages 73–82, 2010.
-  Kshipra Bhawalkar and Tim Roughgarden. Welfare guarantees for combinatorial auctions with item bidding. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 700–709, 2011.
-  Nicolas Bousquet, Zhentao Li, and Adrian Vetta. Coalition games on interaction graphs: A horticultural perspective. In Proceedings of the Sixteenth ACM Conference on Economics and Computation, EC ’15, pages 95–112, New York, NY, USA, 2015. ACM.
-  Simina Brânzei and Kate Larson. Coalitional affinity games and the stability gap. In IJCAI 2009, Proceedings of the 21st International Joint Conference on Artificial Intelligence, Pasadena, California, USA, July 11-17, 2009, pages 79–84, 2009.
-  Georgios Chalkiadakis, Edith Elkind, Evangelos Markakis, Maria Polukarov, and Nick R. Jennings. Cooperative games with overlapping coalitions. J. Artif. Intell. Res. (JAIR), 39:179–216, 2010.
-  Flavio Chierichetti, Jon M. Kleinberg, and Sigal Oren. On discrete preferences and coordination. In ACM Conference on Electronic Commerce, EC ’13, Philadelphia, PA, USA, June 16-20, 2013, pages 233–250, 2013.
-  George Christodoulou, Annamária Kovács, and Michael Schapira. Bayesian combinatorial auctions. In Automata, Languages and Programming, 35th International Colloquium, ICALP 2008, Reykjavik, Iceland, July 7-11, 2008, Proceedings, Part I: Tack A: Algorithms, Automata, Complexity, and Games, pages 820–832, 2008.
-  Xiaotie Deng, Toshihide Ibaraki, and Hiroshi Nagamochi. Algorithmic aspects of the core of combinatorial optimization games. Mathematics of Operations Research, 24(3):751–766, 1999.
-  Nikhil R. Devanur, Milena Mihail, and Vijay V. Vazirani. Strategyproof cost-sharing mechanisms for set cover and facility location games. Decision Support Systems, 39(1):11–22, 2005.
-  Shahar Dobzinski, Hu Fu, and Robert D. Kleinberg. On the complexity of computing an equilibrium in combinatorial auctions. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015, pages 110–122, 2015.
-  Shahar Dobzinski, Noam Nisan, and Michael Schapira. Approximation algorithms for combinatorial auctions with complement-free bidders. Math. Oper. Res., 35(1):1–13, 2010.
-  Qizhi Fang, Liang Kong, and Jia Zhao. Core stability of vertex cover games. Internet Mathematics, 5(4):383–394, 2008.
-  Uriel Feige. On maximizing welfare when utility functions are subadditive. SIAM J. Comput., 39(1):122–142, 2009.
-  Michal Feldman and Ophir Friedler. A unified framework for strong price of anarchy in clustering games. In Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part II, pages 601–613, 2015.
Michal Feldman, Hu Fu, Nick Gravin, and Brendan Lucier.
Simultaneous auctions are (almost) efficient.
Symposium on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1-4, 2013, pages 201–210, 2013.
-  Moran Feldman, Liane Lewin-Eytan, and Joseph Naor. Hedonic clustering games. In 24th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA ’12, Pittsburgh, PA, USA, June 25-27, 2012, pages 267–276, 2012.
-  Hu Fu, Robert Kleinberg, and Ron Lavi. Conditional equilibrium outcomes via ascending price processes with applications to combinatorial auctions with item bidding. In ACM Conference on Electronic Commerce, EC ’12, Valencia, Spain, June 4-8, 2012, page 586, 2012.
-  Konstantinos Georgiou and Chaitanya Swamy. Black-box reductions for cost-sharing mechanism design. Games and Economic Behavior, 2013.
-  Michel X Goemans and Martin Skutella. Cooperative facility location games. Journal of Algorithms, 50(2):194–214, 2004.
-  Gianluigi Greco, Enrico Malizia, Luigi Palopoli, and Francesco Scarcello. On the complexity of the core over coalition structures. In IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, July 16-22, 2011, pages 216–221, 2011.
-  Martin Hoefer. Strategic cooperation in cost sharing games. Int. J. Game Theory, 42(1):29–53, 2013.
-  Nicole Immorlica, Mohammad Mahdian, and Vahab S. Mirrokni. Limitations of cross-monotonic cost-sharing schemes. ACM Transactions on Algorithms, 4(2), 2008.
-  Jon M. Kleinberg and Sigal Oren. Mechanisms for (mis)allocating scientific credit. In Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011, San Jose, CA, USA, 6-8 June 2011, pages 529–538, 2011.
-  Benny Lehmann, Daniel J. Lehmann, and Noam Nisan. Combinatorial auctions with decreasing marginal utilities. Games and Economic Behavior, 55(2):270–296, 2006.
-  Yoad Lewenberg, Yoram Bachrach, Yonatan Sompolinsky, Aviv Zohar, and Jeffrey S. Rosenschein. Bitcoin mining pools: A cooperative game theoretic analysis. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2015, Istanbul, Turkey, May 4-8, 2015, pages 919–927, 2015.
-  Brendan Lucier and Allan Borodin. Price of anarchy for greedy auctions. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin, Texas, USA, January 17-19, 2010, pages 537–553, 2010.
-  Evangelos Markakis and Amin Saberi. On the core of the multicommodity flow game. Decision support systems, 39(1):3–10, 2005.
-  Reshef Meir, Yoram Bachrach, and Jeffrey S. Rosenschein. Minimal subsidies in expense sharing games. In Algorithmic Game Theory - Third International Symposium, SAGT 2010, Athens, Greece, October 18-20, 2010. Proceedings, pages 347–358, 2010.
-  Hervé Moulin. Incremental cost sharing: Characterization by coalition strategy-proofness. Social Choice and Welfare, 16(2):279–320, 1999.
-  Hervé Moulin and Scott Shenker. Strategyproof sharing of submodular costs: budget balance versus efficiency. Economic Theory, 18(3):511–533, 2001.
-  Roger B Myerson. Graphs and cooperation in games. Mathematics of operations research, 2(3):225–229, 1977.
-  Tim Roughgarden and Mukund Sundararajan. Quantifying inefficiency in cost-sharing mechanisms. J. ACM, 56(4), 2009.
-  Walid Saad, Zhu Han, Mérouane Debbah, Are Hjørungnes, and Tamer Başar. Coalitional game theory for communication networks. Signal Processing Magazine, IEEE, 26(5):77–97, 2009.
The nucleolus of a characteristic function game.SIAM Journal on applied mathematics, 17(6):1163–1170, 1969.
-  Andreas S. Schulz and Nelson A. Uhan. Approximating the least core value and least core of cooperative games with supermodular costs. Discrete Optimization, 10(2):163–180, 2013.
-  Lloyd S Shapley and Martin Shubik. Quasi-cores in a monetary economy with nonconvex preferences. Econometrica: Journal of the Econometric Society, pages 805–827, 1966.
-  Jan Vondrák. Submodularity in combinatorial optimization. PhD thesis, Citeseer, 2007.
-  Jan Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, pages 67–74, 2008.
Appendix 0.A Appendix: Proofs for Subadditive Valuations
We begin by formally defining the Greedy Matching Procedure procedure that is the building block of our main algorithm.
Lemma 1. For every bad project , , i.e., more than half the agents in still remain in project .
We prove this by contradiction. Suppose that for some such ,