
Computing the Shapley Value in Allocation Problems: Approximations and Bounds, with an Application to the Italian VQR Research Assessment Program

In allocation problems, a given set of goods are assigned to agents in such a way that the social welfare is maximised, that is, the largest possible global worth is achieved. When goods are indivisible, it is possible to use money compensation to perform a fair allocation taking into account the actual contribution of all agents to the social welfare. Coalitional games provide a formal mathematical framework to model such problems, in particular the Shapley value is a solution concept widely used for assigning worths to agents in a fair way. Unfortunately, computing this value is a # P-hard problem, so that applying this good theoretical notion is often quite difficult in real-world problems. We describe useful properties that allow us to greatly simplify the instances of allocation problems, without affecting the Shapley value of any player. Moreover, we propose algorithms for computing lower bounds and upper bounds of the Shapley value, which in some cases provide the exact result and that can be combined with approximation algorithms. The proposed techniques have been implemented and tested on a real-world application of allocation problems, namely, the Italian research assessment program, known as VQR. For the large university considered in the experiments, the problem involves thousands of agents and goods (here, researchers and their research products). The algorithms described in the paper are able to compute the Shapley value for most of those agents, and to get a good approximation of the Shapley value for all of them.



1 Introduction

1.1 Coalitional Game Theory

Coalitional games provide a rich mathematical framework to analyze interactions between intelligent agents. We consider coalitional games of the form $\langle N, v \rangle$, consisting of a set $N$ of agents and a characteristic function $v: 2^N \rightarrow \mathbb{R}^+$. The latter maps each coalition $S \subseteq N$ to the worth $v(S)$ that the agents in $S$ can obtain by collaborating with each other. In this context, the crucial problem is to find a mechanism to allocate the worth $v(N)$, i.e., the value of the grand-coalition $N$, in a way that is fair for all players and that additionally satisfies some further important properties such as efficiency: we distribute precisely the available budget to players (not more and not less). Moreover, for fairness and stability reasons, it is usually required that every group of agents gets at least the worth that it can guarantee to the game.

Several solution concepts have been considered in the literature as “fair allocation” schemes and, among them, a prominent one is the Shapley value Shapley (1953). According to this notion, the worth of any agent $i$ is determined by considering its actual contribution to all the possible coalitions of agents. More precisely, one considers the so-called marginal contribution of $i$ to any coalition $S \subseteq N \setminus \{i\}$, that is, the difference $v(S \cup \{i\}) - v(S)$ between what can be obtained when $i$ collaborates with the agents in $S$ and what can be obtained without the contribution of $i$. More formally, the Shapley value of a player $i$ is defined by the following weighted average of all such marginal contributions:

$$\phi_i(\langle N, v \rangle) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).$$
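To make the definition concrete, the following minimal Java sketch (ours, not part of the paper; the method name and the toy characteristic function are illustrative assumptions) evaluates the formula above by enumerating all coalitions. It is exponential in the number of agents and is only meant to fix ideas.

import java.util.function.ToDoubleFunction;

public class ShapleyBruteForce {
    // Shapley value of every player of an n-player game whose characteristic
    // function v is given over coalitions encoded as bitmasks.
    static double[] shapley(int n, ToDoubleFunction<Integer> v) {
        double[] phi = new double[n];
        long[] fact = new long[n + 1];
        fact[0] = 1;
        for (int k = 1; k <= n; k++) fact[k] = fact[k - 1] * k;
        for (int i = 0; i < n; i++) {
            for (int S = 0; S < (1 << n); S++) {
                if ((S & (1 << i)) != 0) continue;            // S must not contain i
                int s = Integer.bitCount(S);
                double weight = (double) fact[s] * fact[n - s - 1] / fact[n];
                phi[i] += weight * (v.applyAsDouble(S | (1 << i)) - v.applyAsDouble(S));
            }
        }
        return phi;
    }

    public static void main(String[] args) {
        // Toy 3-player game: worth 10 if players 0 and 1 are both in the coalition, else 0.
        double[] phi = shapley(3, S -> ((S & 0b011) == 0b011) ? 10.0 : 0.0);
        for (double p : phi) System.out.println(p);           // prints 5.0, 5.0, 0.0
    }
}

In the toy game, players 0 and 1 are symmetric and split the worth evenly, while the third agent is a null player and gets 0, as the formula prescribes.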

1.2 Allocation Games

Among the various classes of coalitional games, we focus in this paper on allocation games, which is a setting for analyzing fair division problems where monetary compensations are allowed and utilities are quasi-linear Moulin (1992). Allocation games naturally arise in various application domains, ranging from house allocation to room assignment-rent division, to (cooperative) scheduling and task allocation, to protocols for wireless communication networks, and to queuing problems (see, e.g., Greco & Scarcello (2014b); Iera et al. (2011); Maniquet (2003); Mishra & Rangarajan (2007); Moulin (1992) and the references therein).

Computing the Shapley value of such games is a difficult problem: indeed, it is #P-hard even if goods can only have two different possible values Greco et al. (2015). In this paper we focus on large instances of this problem, involving thousands of agents and goods, for which no algorithm described in the literature is able to provide an exact solution. There are, however, some promising recent advances that identify islands of tractability for allocation problems where at most one good is allocated to each agent: it has recently been shown that the instances where the treewidth of the agents' interaction graph is bounded by some constant (i.e., instances having a low degree of cyclicity) can be solved in polynomial time Greco et al. (2015). The result is based on recent advances on counting the solutions of conjunctive queries with existential variables Greco & Scarcello (2014a). Unfortunately, if the structure is quite cyclic, this technique cannot be applied to large instances, because its computational complexity has an exponential dependency on the treewidth.

In some applications, one can be satisfied with approximations of the Shapley value. In this respect, things are quite good in principle, since we know there exists a fully polynomial-time randomized approximation scheme to compute the Shapley value in supermodular games Liben-Nowell et al. (2012). The algorithm can thus be tuned to obtain the desired maximum expected error, as a percentage of the correct Shapley value. However, not very surprisingly, for very large instances one has to consider a huge number of samples in order to stay below a reasonable expected error. Maleki et al. (2013) provide bounds for the estimation error (as an absolute number rather than a percentage of the correct value) if the variance or the range of the samples is known. They also introduce stratified sampling as a method to further reduce the number of required samples.

1.3 Contribution

In order to attack large instances of allocation problems, we start by proving some useful properties of these problems that allow us to decompose instances into smaller pieces, which can be solved independently. Moreover, some of these properties identify cases where the worth function can be computed very efficiently.

With these properties, we are able to use the randomized approximation algorithm of Liben-Nowell et al. (2012) even on instances that (when not decomposed) are very large.

Furthermore, we note that in some applications one may prefer to determine a guaranteed interval for the Shapley value, rather than a single point estimate that is only probably good. Therefore, we propose algorithms for computing a lower bound and an upper bound of the Shapley value for allocation problems. In many cases the distance between the two bounds is quite small, and sometimes they even coincide, which means that we have actually computed the exact value. We also used these algorithms together with the approximation algorithm of Liben-Nowell et al. (2012), to provide a more accurate evaluation of the maximum error of this randomized solution for the considered instances.

Moreover, by plugging the computed lower bound values into the randomized sampling algorithm proposed by Maleki et al. (2013), we were able to express their error bound as a percentage of the correct Shapley value, rather than as an absolute number, at least for our test instances. This allowed us to compute approximate Shapley values for our largest test case (namely, the 2011-2014 research assessment exercise of Sapienza University of Rome), within 5% of the correct value with 99% probability, in a matter of hours.

1.4 The Case Study

We have tested the proposed techniques on large real-world instances of the VQR2011-2014 Italian research assessment exercise. This exercise requires every Italian research structure to select some research products and submit them to an evaluation agency called ANVUR. While doing so, the structure is in competition with all other Italian research structures, as the outcome of the evaluation will be used to proportionally transfer the funds allocated by the Ministry to support research activities in the next years (until the subsequent evaluation process). Every structure is therefore interested in selecting and submitting its best research products. For the sake of simplicity, we next simply speak of publications instead of research products (which can also be patents, books, etc.), and of universities and departments instead of structures and substructures (which can be other research subjects). The programme is articulated in two phases: (1) Based on authors' self-evaluations and on ANVUR guidelines, the university selects and submits to ANVUR (at most) two publications for each one of its authors (there are exceptions to this rule: in specific circumstances, fewer than two publications are expected for some authors; to our ends, this detail is immaterial), in such a way that any product is formally associated with at most one author. (2) ANVUR formulates its independent quality judgment about the submitted publications (the score assigned to each publication is currently made known only to its authors), and the sum of the scores resulting from ANVUR's evaluation is then the VQR score of the university. Eventually, the university will receive funds in subsequent years proportional to this score. Furthermore, ANVUR also published an evaluation of all departments, based on the product scores (the score of each department was computed as the sum of the scores of the products formally assigned to the authors in that department). Finally, the scores were also used for evaluating individual researchers that had been recently hired by the university (this also greatly influenced the university's funds in subsequent years), as well as those researchers that were members of PhD committees. Scores for recently hired researchers were computed as the sum of the scores of the products formally assigned to them; data in this respect were published by ANVUR in aggregated form only, for each department and for each scientific disciplinary sector. Evaluations for researchers that were members of PhD committees were computed as the sum of the scores of the best publications each one of them had coauthored, among all the publications submitted for the VQR (for this evaluation, the formal assignment of publications to authors was irrelevant); data in this respect were published by ANVUR in aggregated form only, for each PhD committee.

The way ANVUR currently uses product scores, for the purposes described above, yields evaluations that do not satisfy the desirable properties outlined in Section 3. In order to deal with this issue, we have modeled the problem as an allocation game Greco & Scarcello (2013), with a fair way to divide the total score of the university among researchers, groups, and departments based on the Shapley value. The proposed division rule enjoys many desirable properties, such as independence of the specific allocation of research products, independence of the preliminary (optimal) products selection, the guarantee of the actual (marginal) contribution, and so on.

2 Preliminaries

In the setting considered in this paper, a game is defined by an allocation scenario comprising a set $N$ of agents and a set $G$ of goods, whose values are given by the function $val$ mapping each good to a non-negative real number. The function $int$ associates each agent $i \in N$ with the set $int(i) \subseteq G$ of goods he/she is interested in. Moreover, a natural number $k$ provides the maximum number of goods that can be assigned to each agent. Each good is indivisible and can be assigned to at most one player.

For a coalition $S \subseteq N$, a (feasible) allocation $\pi$ is a mapping from $S$ to sets of goods from $G$ such that each agent $i \in S$ gets a set of goods $\pi(i) \subseteq int(i)$ with $|\pi(i)| \le k$, and $\pi(i) \cap \pi(j) = \emptyset$ for any other agent $j \in S$ (each good can be assigned to one agent at most).

We denote by $goods(\pi)$ the set of all goods in the image of $\pi$, that is, $goods(\pi) = \bigcup_{i \in S} \pi(i)$. With a slight abuse of notation, we denote by $val(G')$ the sum of the values of a set of goods $G' \subseteq G$, and by $val(\pi)$ the value $val(goods(\pi))$. An allocation $\pi$ for $S$ is optimal if there exists no allocation $\pi'$ for $S$ with $val(\pi') > val(\pi)$. The total value of an optimal allocation for the coalition $S$ is denoted by $opt(S)$. The budget available for $N$, also called the (maximum) social welfare, is $opt(N)$, that is, the value of any optimal allocation for the whole set of agents (the grand-coalition). The coalitional game defined by the scenario is the pair $\langle N, v \rangle$, that is, the game where the worth of any coalition is given by the value of any of its optimal allocations, i.e., $v(S) = opt(S)$. Note that $v(S) \ge 0$ holds for each $S \subseteq N$, since the allocation where no agent receives any good is a feasible one (the value of an empty set of goods is $0$). The definition trivializes for $S = \emptyset$, with $v(\emptyset) = 0$.
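To illustrate the worth function concretely, the following deliberately naive Java sketch (ours; all identifiers are illustrative assumptions) computes $v(S)$ for a small coalition in the $k = 1$ case by trying every assignment of at most one good per agent. A real implementation would instead solve the underlying weighted matching problem in polynomial time, as done later in the paper.

import java.util.*;

public class CoalitionWorth {
    // v(S) for k = 1: the maximum total value of an assignment of distinct goods
    // to the agents of S, each agent receiving at most one good it is interested in.
    static double worth(List<String> coalition,
                        Map<String, Set<String>> interest,
                        Map<String, Double> value) {
        return best(0, coalition, interest, value, new HashSet<>());
    }

    private static double best(int idx, List<String> coalition,
                               Map<String, Set<String>> interest,
                               Map<String, Double> value, Set<String> used) {
        if (idx == coalition.size()) return 0.0;
        // Option 1: the current agent receives no good.
        double bestVal = best(idx + 1, coalition, interest, value, used);
        // Option 2: the current agent receives one of its still-unassigned goods.
        for (String g : interest.get(coalition.get(idx))) {
            if (used.contains(g)) continue;
            used.add(g);
            bestVal = Math.max(bestVal,
                    value.get(g) + best(idx + 1, coalition, interest, value, used));
            used.remove(g);
        }
        return bestVal;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> interest = Map.of("a1", Set.of("p1", "p2"), "a2", Set.of("p1"));
        Map<String, Double> value = Map.of("p1", 1.0, "p2", 0.4);
        System.out.println(worth(List.of("a1", "a2"), interest, value));  // 1.4: p2 -> a1, p1 -> a2
    }
}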

Figure 1: Allocation scenario in Example 1.
Example 1

Consider the allocation scenario depicted in a graphical way in Figure 1, where each edge connects an agent to a good she is interested in, and it is possible to allocate just one good to each agent ($k = 1$). The figure shows on the left an allocation for all the agents, with the edges in bold identifying the allocation of goods to agents. Note that this is an optimal allocation, i.e., a feasible allocation whose sum of values of the allocated goods is the maximum possible one; its value is the worth of the grand-coalition.

The coalitional game associated with this scenario is the pair $\langle N, v \rangle$, where the worth function $v$ is defined as above. In particular, we have just seen the value of the grand-coalition. For each smaller coalition, an optimal allocation restricted to its agents is also reported in Figure 1, from which the remaining values of the worth function can be read off.

For any allocation scenario, we define the agents graph as the undirected graph whose vertices are the agents and where an edge connects two agents $i$ and $j$ whenever there is a good they are both interested in, that is, whenever $int(i) \cap int(j) \neq \emptyset$.

3 The VQR Allocation Game

Note that the VQR research assessment exercise can be naturally modeled as an allocation scenario where $N$ is the set of researchers affiliated with a certain university $U$, $G$ is the set of publications selected by $U$ for the assessment exercise, $int$ maps authors to the set of publications they have written, and $val$ assigns a value to each publication. In the current VQR programme (covering years 2011-2014), $val$ ranges over a small set of admissible scores, with the highest value reserved for excellent products.

In the submission phase, the values are estimated by the universities according to authors' self-evaluations, and to the reference tables published by ANVUR (not available for some research areas). At the end of the program, $U$ will receive an amount of funds proportional to the total score of its submitted products, that is, to the considered measure of the quality of the research produced by the university. The first combinatorial problem, which is easily seen to be a weighted matching problem, is to identify the best allocation scenario for the university, that is, to select a set of publications to be submitted having the maximum possible total value among all those authored by the researchers in $N$ in the considered period.

The final result may sometimes be different from the preliminary estimate, in particular because of those publications that undergo a peer-review process by experts selected by ANVUR, which clearly introduces a subjective factor in the evaluation. We assume that the values used by $U$ in the preliminary phase coincide with the final ANVUR evaluation for all products. This is actually immaterial for the purpose of this paper, because we are interested here in the final division, where only the final (ANVUR) evaluation matters. However, we recall for the sake of completeness that, by adopting the fair division rule used in this paper, the best choice for all researchers is to provide their most accurate evaluation, so that $U$ is able to submit an optimal selection of products to ANVUR. In particular, any strategically incorrect self-evaluation by any researcher is useless, in that it cannot lead to any improvement in her/his personal evaluation, while it can lead to a worse evaluation if the best total value for $U$ is missed Greco & Scarcello (2013).

Figure 2: Authors and products in Example 2.
Example 2

Let us consider the weighted bipartite graph in Figure 2, whose vertices are the researchers of a university and all the publications they have written. Edges encode the authorship relation $int$, and weights encode the mapping $val$ providing the values of the publications. Consider the optimal allocation encoded by the solid lines in the figure, which assigns one of the selected publications to each researcher. Based on this allocation, an optimal selection of publications to be submitted for the evaluation is obtained; the publications that are not submitted are shown in black in the figure. Note that some of the selected publications are co-authored by two or more of the researchers. Thus, the allocation scenario to be considered is the one restricted to the selected publications, and the associated coalitional game is the pair $\langle N, v \rangle$ defined as in Section 2. In particular, the total value of the grand-coalition is the sum of the values of the selected publications.

The problem that we face is how to compute, from the total value obtained by $U$, a fair score for individual researchers, or groups, or departments, and so on. As mentioned above, product scores are currently used for evaluating the hiring policy of universities and the PhD committees, and from this year such scores contribute to evaluating the quality of courses of study, too. Unfortunately, this is currently done in a way that fails to satisfy the properties that we outline below. Instead, following Greco & Scarcello (2013), we propose to use the Shapley value of the allocation game defined by the scenario selected by the given structure as the division rule to distribute the available total value (or budget) to all the participating agents. Notice that the Shapley value is not a percentage assignment of publications to authors, but takes into account all possible coalitions of agents. In particular, for the allocation scenario in Example 2, a researcher is not penalized by the fact that her best publication is formally assigned to a co-author in the submission phase determined by the optimal allocation depicted in Figure 2; similarly, a researcher is not penalized by the fact that the worst publication is assigned to her/him instead of being assigned to a co-author.

Another important property is that the value assigned to each researcher is independent of the specific selection of products to be submitted, as long as the submission is an optimal one. For instance, in Example 2 an equivalent selection of products exists, corresponding to a different optimal allocation. It can be checked that no Shapley value changes for any researcher when the alternative allocation scenario based on this second selection of products is considered. On the other hand, this nice property does not hold for many division rules. For instance, assume that the value of each researcher is determined by the average score of all the products evaluated by ANVUR of which she is a (co-)author (the products that were not submitted cannot be used, because they lack a certified evaluation by ANVUR). Then, a researcher may get a higher value in the former allocation scenario and a lower one in the latter, and symmetrically for one of her co-authors, even though both scenarios are equally optimal for the university.

We will now recall the main desirable properties enjoyed by the division rule based on the Shapley value used in this paper. We refer the interested reader to Greco & Scarcello (2013) for a more detailed description and discussion of these properties.

Budget-balance. The division rule precisely distributes the VQR score of $U$ over all its members, i.e., $\sum_{i \in N} \phi_i = v(N)$.

Fairness. The division rule is indifferent w.r.t. the specific optimal allocation used to submit the products to ANVUR. In particular, the score of each researcher is independent of the particular products assigned to him in the submission phase; moreover, it is independent of the specific set of products selected by the university, as long as the choice is optimal (i.e., with the same maximum value $v(N)$).

Marginality. For any group of researchers $C \subseteq N$, $\sum_{i \in C} \phi_i \ge v(N) - v(N \setminus C)$. That is, every group is granted at least its marginal contribution to the performance of the grand-coalition $N$.

We remark on the importance of the fairness property: the choice of a specific optimal set of products is immaterial for $U$, but it may lead to quite different scores for individuals (and for their aggregations; assume, e.g., that two of the researchers of Example 2 belong to different departments). As a matter of fact, this property does not hold for the division rules adopted by ANVUR for the evaluation of both departments and newly hired researchers (see Section 1.4). The budget-balance property, on the other hand, is violated by the division rule for evaluating researchers who are members of PhD committees.

4 Useful Properties for Dealing with Large Instances

Recall that computing the Shapley value is #P-hard for many classes of games (see, e.g., Aziz & de Keijzer (2014); Bachrach & Rosenschein (2009); Deng & Papadimitriou (1994); Nagamochi et al. (1997)), including allocation games, even if goods may have only two possible values Greco & Scarcello (2014b).

For large instances, a brute-force approach is unfeasible, because computing the value of each agent requires solving a number of optimization problems that is exponential in $n$, the number of agents. This is particularly true in our case study, where $n$ is in the order of thousands.

In order to mitigate the complexity of this problem, in this section we will describe some useful properties of the Shapley value, in particular for allocation problems, which allow us to simplify the instances in a preprocessing phase.

Let us consider in this section an allocation scenario with associated game $\langle N, v \rangle$ and agents graph as defined in Section 2. For such a scenario we show the following properties, which allow us to simplify the game at hand without altering the Shapley value of any player: Modularity, Null goods, Separability, Disconnected agent.

Theorem 4.1 (Modularity)

Let $\{N_1, N_2\}$ be a partition of the agents of $N$ such that $int(i) \cap int(j) = \emptyset$ for every pair of agents $i$ and $j$ with $i \in N_1$ and $j \in N_2$. Let $\langle N_1, v_1 \rangle$ (resp., $\langle N_2, v_2 \rangle$) be the coalitional game restricted to the agents in $N_1$ (resp., $N_2$). Then, for each agent $i \in N_h$ (with $h \in \{1, 2\}$), $\phi_i(\langle N, v \rangle) = \phi_i(\langle N_h, v_h \rangle)$.

Proof

Let $\langle N, \hat v_1 \rangle$ and $\langle N, \hat v_2 \rangle$ be two coalitional games such that, for each $S \subseteq N$, $\hat v_1(S) = v(S \cap N_1)$ and $\hat v_2(S) = v(S \cap N_2)$. Contrasted with the games in the statement, these games are defined over the full set of agents $N$.

Since there are no interactions between agents in $N_1$ and agents in $N_2$, the total value of the optimal allocation for any coalition $S$ is given by the sum of the values of the goods in the optimal allocations restricted to the two sets of agents $S \cap N_1$ and $S \cap N_2$. Therefore, we have $v(S) = \hat v_1(S) + \hat v_2(S)$. Then, from the additivity property of the Shapley value, for each agent $i$, $\phi_i(\langle N, v \rangle) = \phi_i(\langle N, \hat v_1 \rangle) + \phi_i(\langle N, \hat v_2 \rangle)$.

Consider now the games $\langle N_1, v_1 \rangle$ and $\langle N_2, v_2 \rangle$ restricted to the agents in $N_1$ and in $N_2$, respectively. Note that each player of $N_2$ is dummy with respect to the game $\langle N, \hat v_1 \rangle$, so that her Shapley value in that game is null, and her presence has no actual impact on any other player. In particular, such dummy agents could be removed from the game without changing the Shapley value of the other agents, so that, for every $i \in N_1$, we have $\phi_i(\langle N, \hat v_1 \rangle) = \phi_i(\langle N_1, v_1 \rangle)$ and $\phi_i(\langle N, \hat v_2 \rangle) = 0$, and the result immediately follows (by using the same reasoning for $N_2$).

From the above fact, it follows immediately that each connected component of the agents graph can be treated as a separate coalitional game.

Corollary 1

Let $C$ be any connected component of the agents graph. The coalitional game associated with the allocation scenario obtained by restricting the original scenario to the players in $C$ is such that the Shapley value of each player in $C$ is the same as in the full game.
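As an illustration of how Corollary 1 can be exploited in a preprocessing step, the following Java sketch (ours, not the paper's implementation; the representation of the interest function as a Map is an assumption) splits the agents into the connected components of the agents graph, so that each component can then be solved as an independent game.

import java.util.*;

public class AgentsGraphComponents {
    // Splits the agents into the connected components of the agents graph:
    // two agents are adjacent iff they share at least one good of interest.
    static List<List<String>> components(Map<String, Set<String>> interest) {
        Map<String, List<String>> agentsByGood = new HashMap<>();
        for (Map.Entry<String, Set<String>> e : interest.entrySet())
            for (String good : e.getValue())
                agentsByGood.computeIfAbsent(good, g -> new ArrayList<>()).add(e.getKey());

        Set<String> visited = new HashSet<>();
        List<List<String>> result = new ArrayList<>();
        for (String start : interest.keySet()) {
            if (!visited.add(start)) continue;
            List<String> component = new ArrayList<>();
            Deque<String> stack = new ArrayDeque<>();
            stack.push(start);
            while (!stack.isEmpty()) {                  // plain DFS through shared goods
                String agent = stack.pop();
                component.add(agent);
                for (String good : interest.get(agent))
                    for (String other : agentsByGood.get(good))
                        if (visited.add(other))
                            stack.push(other);
            }
            result.add(component);
        }
        return result;                                   // each component is a separate game
    }

    public static void main(String[] args) {
        Map<String, Set<String>> interest = Map.of(
                "a1", Set.of("p1", "p2"), "a2", Set.of("p2"), "a3", Set.of("p3"));
        System.out.println(components(interest));        // e.g. [[a1, a2], [a3]] (order may vary)
    }
}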

It is easy to see that goods having value $0$ do not impact the computation of the optimal allocation. However, the existence of null goods shared between multiple agents induces connections (among agents) which complicate the structure of the graph.

For instance, consider an allocation scenario comprising three agents having a joint interest only in one good, say $g$, whose value is $0$. Any other good has just a single agent interested in it. In such a scenario, Corollary 1 cannot be used, since the agents graph associated with the scenario consists of one connected component. On the other hand, without $g$, the agents graph would be completely disconnected and thus it would be possible to compute the Shapley values immediately, by using Corollary 1. The following fact states that, indeed, we can get rid of such null goods.

Fact 4.2 (No shared null goods)

By removing all goods having value $0$ from the scenario, we get an allocation scenario with the same associated allocation game.

Proof

Just observe that, in the computation of the marginal contribution of any agent $i$ to a coalition $S$, there is no advantage for the agents in $S \cup \{i\}$ in using a good having value $0$.

If it is useful in the algorithms, we can also use Fact 4.2 in the opposite way, and add null-value goods. Let $g$ be a good with $val(g) = 0$ and let $A_g$ be the set of agents that are interested in $g$. Then, the original game is the same as the game associated with the allocation scenario where $g$ is replaced by $|A_g|$ fresh null-value goods, each of which is of interest to just one agent in $A_g$ (hence, there are no connections in the graph because of such goods).

The following property provides us with a powerful simplification method for allocation games. Intuitively, the property states that any set of agents that does not exhibit an effective synergy with the rest of the agents can be removed from the game and solved separately.

Theorem 4.3 (Separability)

Let $C \subseteq N$ be any coalition such that $v(C) = v(N) - v(N \setminus C)$. Then, we can define from the allocation scenario two disjoint allocation scenarios, restricted to the agents in $C$ and in $N \setminus C$, respectively, that can be solved separately. For each player, we can compute its Shapley value in the game associated with the original scenario by considering only the game associated with the restricted scenario where it occurs.

Proof

Denote by $\bar C = N \setminus C$, and consider the allocation games restricted to the agents in $C$ and in $\bar C$, respectively.

Preliminarily observe that, for each pair of disjoint coalitions $S_1$ and $S_2$, $v(S_1 \cup S_2) \le v(S_1) + v(S_2)$ holds. Indeed, given any optimal allocation for the agents in $S_1 \cup S_2$, its restriction to $S_1$ is a feasible allocation for $S_1$, as well as its restriction to $S_2$ is a feasible allocation for $S_2$. This observation, combined with the hypothesis about the considered coalition $C$, entails that $v(N) = v(C) + v(\bar C)$. This means that the value of the goods not used in any optimal allocation for $C$ is equal to the sum of the values of the best goods for the agents in $\bar C$.

We shall show that, for each optimal allocation for $N$, the set of goods allocated to the agents in $C$ has value $v(C)$, and the analogous property holds for $\bar C$; therefore, these agents get the best goods they can obtain. To prove this claim, consider the value obtained by $C$ and the value obtained by $\bar C$ in such an allocation. Each of them is at most the optimum of the corresponding coalition and, by the optimality of the allocation, their sum is $v(N) = v(C) + v(\bar C)$, so that both of them must actually reach that optimum.

Consider now any coalition $S \subseteq N$, and let $S_1 = S \cap C$ and $S_2 = S \cap \bar C$. Let $\pi$ be an optimal allocation for $S$. We claim that there is an optimal allocation mapping the agents in $S_1$ only to goods used by $C$ in an optimal allocation for $N$, and an optimal allocation mapping the agents in $S_2$ only to the remaining goods, whose values sum to $val(\pi)$. Assume by contradiction that this is not the case. Then at least one of those allocations leads to a value smaller than the value obtained by the corresponding agents in $\pi$ (note that the combination of the two restricted allocations cannot be worse than $\pi$, because their union is a valid candidate allocation for $S$). Assume that $S_1$ gets a smaller total value (the other case is symmetrical). Then, there exists some agent in $S_1$ and some good shared with agents outside $C$ whose use improves the value obtained by the agents in $C$. By using Theorem 4.4 in Greco & Scarcello (2014b), we can show that this would contradict the fact that $v(N) = v(C) + v(\bar C)$: indeed, goods that are shared with agents outside $C$ and that allow the agents in $C$ to get a better value could be used to improve the choice of the available goods for the full set $N$.

Now, given that it suffices to use only the goods obtained by $C$ in an optimal allocation for $N$ for the agents in $C$, and the remaining goods for the agents in $\bar C$, we can define an equivalent game in which the former goods are of interest to agents in $C$ only and the remaining ones to agents in $\bar C$ only. In the new game, $C$ and $\bar C$ are in fact sets of agents with no shared connections, and the theorem follows immediately from Theorem 4.1.

A very frequent and important case in applications, which falls within this latter property, occurs when $C$ is a singleton $\{i\}$ and the value of the optimal allocation for this coalition coincides with the marginal contribution of $i$ to the grand-coalition, that is, $v(\{i\}) = v(N) - v(N \setminus \{i\})$. By using the property described above, the agent $i$ can be removed from the game and solved separately, so that we immediately get $\phi_i = v(\{i\})$.
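The singleton test can be implemented directly on top of any routine that computes the worth of a coalition. A possible Java sketch (ours; the worth oracle and all names are hypothetical) is shown below; agents passing the test are assigned their standalone value and removed before any expensive computation starts.

import java.util.*;
import java.util.function.ToDoubleFunction;

public class SingletonSeparation {
    // Removes every agent whose standalone optimum equals its marginal contribution
    // to the grand-coalition; by Theorem 4.3 with C = {i}, such agents get exactly
    // that value. The 'agents' set is modified in place; 'worth' returns v(S).
    static Map<String, Double> separateSingletons(Set<String> agents,
                                                  ToDoubleFunction<Set<String>> worth) {
        Map<String, Double> fixedShapley = new LinkedHashMap<>();
        double grand = worth.applyAsDouble(agents);
        for (String i : new ArrayList<>(agents)) {
            Set<String> rest = new HashSet<>(agents);
            rest.remove(i);
            double standalone = worth.applyAsDouble(Set.of(i));
            double marginal = grand - worth.applyAsDouble(rest);
            if (Math.abs(standalone - marginal) < 1e-9) {
                fixedShapley.put(i, standalone);          // phi_i = v({i})
                agents.remove(i);
                grand = worth.applyAsDouble(agents);      // worth of the reduced game
            }
        }
        return fixedShapley;                               // remaining agents stay in 'agents'
    }
}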

The following property identifies goods that are useless for some agent $i$ and thus can be safely removed from her set $int(i)$ of relevant goods. Note that this operation does not affect other agents possibly interested in such goods.

Fact 4.4 (Useless goods)

Let $i$ be an agent, and let $g \in int(i)$ be a good that can never improve the value of an allocation for $i$. Then, the modified allocation scenario where $g$ is removed from $int(i)$ is equivalent to the original one, that is, the two scenarios have the same associated game.

We conclude this section with a simple property that does not help to simplify the game, but allows us to avoid the computation of unnecessary optimal allocations, during the computation of marginal contributions.

Fact 4.5 (Disconnected agent)

Let $i$ be an agent and let $S$ be a coalition disconnected from $i$, that is, such that $int(i) \cap int(j) = \emptyset$ for each $j \in S$. Then, $v(S \cup \{i\}) = v(S) + v(\{i\})$ holds, and the marginal contribution of $i$ to $S$ is $v(\{i\})$.

5 Lower and Upper Bounds for the Shapley Value

In this section we describe the computation of a lower bound and an upper bound for the Shapley value of any given allocation game. The availability of such bounds can be helpful to provide a more accurate estimation of the approximation error in randomized algorithms. Moreover, whenever the two bounds coincide for some agent, we clearly get the precise Shapley value for that agent. We shall see that this often occurs in practice, in our case study.

Preliminarily observe that in allocation games we get a simple pair of bounds for free. Indeed, recall that the anti-monotonicity property of marginal contributions holds, so that, for each pair of coalitions $S \subseteq T \subseteq N \setminus \{i\}$, $v(T \cup \{i\}) - v(T) \le v(S \cup \{i\}) - v(S)$. Then, for each player $i$ and for every coalition $S \subseteq N \setminus \{i\}$, we have $v(N) - v(N \setminus \{i\}) \le v(S \cup \{i\}) - v(S) \le v(\{i\})$. It immediately follows that

$$v(N) - v(N \setminus \{i\}) \;\le\; \phi_i \;\le\; v(\{i\}).$$

To obtain tighter bounds, we observe that the neighbors of $i$ occurring in a coalition $S$ are the agents having the highest influence on the marginal contribution of $i$ to $S$. Indeed, they are precisely those agents interested in using the goods of $i$ when he/she does not belong to the coalition. We already observed that, in the extreme case where no neighbors are present, $i$ contributes with all her/his best goods. The idea is to consider the power-set of $NB(i)$, the set of neighbors of $i$ in the agents graph, as the only relevant family of sets of agents.

Let $P \subseteq NB(i)$ be a set of neighbors of $i$, called a profile, and let $R_i = N \setminus (NB(i) \cup \{i\})$ denote the non-neighbors of $i$. For the computation of the lower bound in Algorithm 1, for such a profile we compute the marginal contribution of $i$ to $P \cup R_i$, but use this same value for the marginal contributions of $i$ to every coalition $S$ such that $S \cap NB(i) = P$, that is, for every coalition with the same configuration of neighbors of $i$. Furthermore, we use a suitable factor to weigh this value, in order to simulate that every such coalition gets that same marginal contribution from $i$.

The upper bound is obtained in the dual way, by using instead the most favorable case: the marginal contribution of $i$ to $P$ is used in place of the marginal contribution of $i$ to any coalition $S$ with $S \cap NB(i) = P$.

Input: An allocation game $\langle N, v \rangle$;
Output: A pair of vectors $(L, U)$ encoding, respectively, a lower bound and an upper bound of the Shapley value of each agent; here $c(P)$ denotes the weighting factor of Equation (1) below.

1: for all $i \in N$ do
2:     $L_i \leftarrow 0$;
3:     $U_i \leftarrow 0$;
4:     $R_i \leftarrow N \setminus (NB(i) \cup \{i\})$;
5:     for all $P \subseteq NB(i)$ do
6:         $w_L \leftarrow v(P \cup R_i \cup \{i\}) - v(P \cup R_i)$;
7:         $w_U \leftarrow v(P \cup \{i\}) - v(P)$;
8:         $L_i \leftarrow L_i + c(P) \cdot w_L$;
9:         $U_i \leftarrow U_i + c(P) \cdot w_U$;
10:     end for
11: end for
12: return $(L, U)$;
Algorithm 1 Computing Bounds for the Shapley Value in Allocation Games
Theorem 5.1

Let $(L, U)$ be the output of Algorithm 1. For each agent $i$, $L_i \le \phi_i \le U_i$ holds, and the computation of these values requires solving, for each agent $i$, $O(2^{|NB(i)|})$ optimal allocation (weighted matching) problems.

Proof

Let $i$ be an agent of the game. The algorithm is based on the enumeration of every possible combination of the neighbors of $i$. Regarding the computation of the lower bound, for each such profile $P \subseteq NB(i)$, the algorithm considers the coalition $P \cup R_i$ obtained by completing $P$ with all the other agents in $N$ that are neither neighbors of $i$ nor $i$ itself.

The algorithm uses the value of the marginal contribution of $i$ to such a coalition, that is, the value $w_L = v(P \cup R_i \cup \{i\}) - v(P \cup R_i)$, in place of the marginal contributions of $i$ to each coalition $S$ such that $S \cap NB(i) = P$. Now, because every such coalition satisfies $S \subseteq P \cup R_i$, by exploiting the anti-monotonicity property of the marginal contributions in allocation games we immediately get $w_L \le v(S \cup \{i\}) - v(S)$. Then, the algorithm weighs $w_L$ in a suitable way, so that this value is used in place of the correct marginal contribution (which is not lower than $w_L$) of $i$ to each coalition of the form described above. A simple combinatorial argument shows that this can be achieved by multiplying $w_L$ by the following factor:

$$c(P) \;=\; \sum_{j=0}^{m} \binom{m}{j}\, \frac{(p+j)!\,(n-p-j-1)!}{n!}, \qquad (1)$$

where $p = |P|$, $m = |R_i| = n - |NB(i)| - 1$, and $n = |N|$.

Regarding the computation of the upper bound of the Shapley value of $i$, we proceed in a similar way, but using the marginal contribution $w_U = v(P \cup \{i\}) - v(P)$ of $i$ to the profile $P$ containing only its neighbors, instead of the marginal contributions to the various coalitions $S$ such that $S \cap NB(i) = P$. Indeed, in this case we have $P \subseteq S$ and therefore $w_U \ge v(S \cup \{i\}) - v(S)$. Again, we need to multiply this value by a factor that takes into account all the possible ways of completing $P$ to a coalition with the same profile of $i$'s neighbors. It is easy to see that we can again use the factor $c(P)$ described above, by exploiting the fact that the coalitions with neighbor profile $P$ are the same in both cases.

Concerning the computational complexity, just observe that, for each element of the power set of $NB(i)$, we have to solve a constant number of optimal allocation problems. Each of these problems requires the computation of an optimal weighted matching, which can be solved in polynomial time (e.g., via the Hungarian algorithm).
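As a concrete check of the combinatorial factor, the following small Java method (ours, written against the reconstruction of Equation (1) given above) evaluates $c(P)$ exactly with BigInteger factorials; the sanity check in main verifies that the factors of all profiles of an agent sum to 1, as they must, since every coalition not containing the agent has exactly one neighbor profile.

import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.MathContext;

public class ProfileWeight {
    // c(P) = sum_{j=0..m} C(m,j) * (p+j)! * (n-p-j-1)! / n!
    // n = number of agents, p = |P|, m = n - |NB(i)| - 1 (non-neighbors of i).
    static double factor(int n, int p, int m) {
        BigInteger[] fact = new BigInteger[n + 1];
        fact[0] = BigInteger.ONE;
        for (int k = 1; k <= n; k++) fact[k] = fact[k - 1].multiply(BigInteger.valueOf(k));

        BigInteger numerator = BigInteger.ZERO;
        for (int j = 0; j <= m; j++) {
            BigInteger binom = fact[m].divide(fact[j].multiply(fact[m - j]));
            numerator = numerator.add(binom.multiply(fact[p + j]).multiply(fact[n - p - j - 1]));
        }
        return new BigDecimal(numerator)
                .divide(new BigDecimal(fact[n]), MathContext.DECIMAL64)
                .doubleValue();
    }

    public static void main(String[] args) {
        // Sanity check: summing c(P) over all 2^{|NB(i)|} profiles of an agent gives 1.
        int n = 6, neighbors = 2, m = n - neighbors - 1;
        double sum = 0;
        for (int p = 0; p <= neighbors; p++)
            sum += binomial(neighbors, p) * factor(n, p, m);
        System.out.println(sum);                         // prints 1.0 (up to rounding)
    }

    static long binomial(int a, int b) {
        long r = 1;
        for (int k = 1; k <= b; k++) r = r * (a - k + 1) / k;
        return r;
    }
}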

6 Approximating the Shapley Value

6.1 FPRAS for Supermodular and Monotone Coalitional Games

In order to approximate the Shapley value, one possibility is to use the Fully Polynomial-time Randomized Approximation Scheme (FPRAS) proposed in Liben-Nowell et al. (2012): for any $\varepsilon > 0$ and $\delta > 0$, it is possible to compute in polynomial time an $\varepsilon$-approximation of the Shapley value with probability of failure at most $\delta$. The technique works for supermodular and monotone coalitional games, and it can be shown that our allocation games indeed meet these properties Greco & Scarcello (2014b).

The method is based on generating a certain number of permutations (of all agents) and computing the marginal contribution of each agent to the coalition of agents occurring before her (him) in the considered permutation. Then the Shapley value of each player is computed as the average of all such marginal contributions. The above procedure is repeated in a number of independent runs, with the result for each agent consisting of the median of all values computed for her (him). Finally, the obtained values are scaled (i.e., they are all multiplied by a common numerical factor) to ensure that the budget-balance property is not violated.

Clearly enough, the more permutations are considered, the closer to the Shapley value the result will be. We next report a slightly modified version of the basic procedure of this algorithm, where we avoid the computation of some marginal contributions, if we can obtain the result by using Fact 4.5.

Input: An allocation game $\langle N, v \rangle$;
Parameters: Real numbers $\varepsilon > 0$ and $\delta > 0$;
Output: A vector $\tilde\phi$ that is an $\varepsilon$-approximation of the Shapley value of each agent, with probability at least $1 - \delta$;

1: $m \leftarrow$ the number of permutations required for the $(\varepsilon, \delta)$ guarantee;
2: $\tilde\phi_i \leftarrow 0$, for all $i \in N$;
3: while fewer than $m$ permutations have been processed do
4:     $\pi \leftarrow$ a random permutation of $N$;
5:     $w \leftarrow 0$; (value of the coalition of agents scanned so far)
6:     for all $i \in N$, in the order given by $\pi$ do
7:         if some agent of $NB(i)$ precedes $i$ in $\pi$ then
8:             $w' \leftarrow$ the value of an optimal allocation for the agents preceding $i$, together with $i$;
9:         else
10:             $w' \leftarrow w + v(\{i\})$;
11:         end if
12:         $\tilde\phi_i \leftarrow \tilde\phi_i + (w' - w)$;
13:         $w \leftarrow w'$;
14:     end for
15: end while
16: for all $i \in N$ do
17:     $\tilde\phi_i \leftarrow \tilde\phi_i / m$;
18: end for
19: return $\tilde\phi$;
Algorithm 2 Shapley value approximation in allocation games

As a preliminary step, we compute the number $m$ of permutations required to meet the error guarantee. In each of the $m$ iterations, the algorithm generates a random permutation $\pi$ of the set of agents $N$. We then iterate through this permutation and compute the marginal contribution of each agent $i$ to the set $S$ of agents occurring before $i$ in the permutation at hand. If some neighbor of $i$ (in the agents graph) occurs in $S$, the algorithm proceeds as usual, by computing the value of an optimal allocation for $S \cup \{i\}$. Note indeed that this one computation is sufficient to get the marginal contribution, because the value of the coalition of preceding agents (for the permutation at hand) is known from the previous step. Moreover, by Fact 4.5, we know that for those permutations in which all the players in $NB(i)$ follow $i$, the marginal contribution of $i$ is just $v(\{i\})$ (see step 10). Finally, at steps 16–18, for each agent the algorithm divides the sum of her contributions by the number $m$ of performed iterations. The correctness of the whole algorithm follows from Theorem 4 in Liben-Nowell et al. (2012).
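A minimal sequential Java sketch of this permutation-sampling estimator (ours; the worth oracle and the neighbor map are assumed to be provided by the surrounding code, and the shortcut of Fact 4.5 is included) could look as follows.

import java.util.*;
import java.util.function.ToDoubleFunction;

public class PermutationSampling {
    // One-run estimator: averages, over m random permutations, the marginal
    // contribution of each agent to the agents preceding it (cf. Algorithm 2).
    static Map<String, Double> estimate(List<String> agents,
                                        Map<String, Set<String>> neighbors,
                                        ToDoubleFunction<Set<String>> worth,
                                        int m, long seed) {
        Random rnd = new Random(seed);
        Map<String, Double> phi = new HashMap<>();
        Map<String, Double> single = new HashMap<>();           // cache of v({i}) for Fact 4.5
        for (String i : agents) {
            phi.put(i, 0.0);
            single.put(i, worth.applyAsDouble(Set.of(i)));
        }
        for (int t = 0; t < m; t++) {
            List<String> pi = new ArrayList<>(agents);
            Collections.shuffle(pi, rnd);
            Set<String> before = new HashSet<>();
            double w = 0.0;                                      // value of the preceding coalition
            for (String i : pi) {
                double wNext;
                if (Collections.disjoint(neighbors.get(i), before)) {
                    wNext = w + single.get(i);                   // no preceding neighbor: Fact 4.5
                } else {
                    Set<String> withI = new HashSet<>(before);
                    withI.add(i);
                    wNext = worth.applyAsDouble(withI);          // one optimal allocation per agent
                }
                phi.merge(i, wNext - w, Double::sum);
                before.add(i);
                w = wNext;
            }
        }
        phi.replaceAll((i, s) -> s / m);                         // average over the m permutations
        return phi;
    }
}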

Computation Time Analysis. Let $n$ be the number of agents, and let $m$ be the required number of iterations. The cost of the algorithm is $O(m \cdot n \cdot C)$, where $C$ denotes the cost of computing each marginal contribution (steps 7–11). This requires the computation of an optimal weighted matching in a bipartite graph, which is feasible in polynomial time via the classical Hungarian algorithm. However, if the current agent is disconnected from the rest of the coalition, the cost is given by a simple lookup in the cache where the best allocation for each single agent is stored.

6.2 Sampling Algorithm When the Range of Marginal Contributions Is Known

Maleki et al. (2013) propose a bound on the number of samples (over the population of marginal contributions) required to estimate an agent's Shapley value, when the range of his/her contributions is known. Their bound is based on Hoeffding's inequality Hoeffding (1963), and it states that, in order to approximate the Shapley value of agent $i$ within an absolute error $\varepsilon$, with failure probability at most $\delta$, that is, in order to get

$$\Pr\bigl(|\hat\phi_i - \phi_i| \le \varepsilon\bigr) \;\ge\; 1 - \delta, \qquad (2)$$

at least $m_i$ samples are required, where:

$$m_i \;=\; \left\lceil \frac{\ln(2/\delta)\, r_i^2}{2\,\varepsilon^2} \right\rceil. \qquad (3)$$

In the above expression, $r_i$ denotes the range of $i$'s marginal contributions (i.e., the difference between the largest and the smallest marginal contribution of $i$ over the coalitions of agents in $N$, where $N$ is the set of all agents that participate in the allocation game). This bound allows us to determine the number of required random samples for each agent $i$, once $\varepsilon$ and $\delta$ are fixed. Assuming we want an overall failure probability $\delta$, each agent could be assigned a failure probability $\delta / |N|$. In principle, a higher failure probability could be tolerated for agents with larger ranges, at the expense of a lower failure probability for agents with smaller ranges. However, our experimental tests performed with this variant exhibited only marginal gains.
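Under the reconstruction of Equation (3) given above (a standard Hoeffding-style bound; all helper names are ours), the per-agent sample budget can be computed as follows; the percentage-error variant simply replaces the absolute error by a fraction of a known lower bound on the agent's Shapley value.

import java.util.Map;
import java.util.stream.Collectors;

public class SampleBudget {
    // Samples needed to estimate the mean of values lying in a range of width
    // 'range', within +/- eps, with failure probability delta (cf. Equation (3)).
    static long samplesFor(double range, double eps, double delta) {
        if (range == 0) return 0;                        // constant contributions: nothing to sample
        return (long) Math.ceil(Math.log(2.0 / delta) * range * range / (2.0 * eps * eps));
    }

    // Per-agent budgets for a relative error: eps_i = relErr * lowerBound_i,
    // where lowerBound_i is any known positive lower bound on agent i's Shapley value.
    static Map<String, Long> budgets(Map<String, Double> ranges,
                                     Map<String, Double> lowerBounds,
                                     double relErr, double deltaPerAgent) {
        return ranges.entrySet().stream().collect(Collectors.toMap(
                Map.Entry::getKey,
                e -> samplesFor(e.getValue(), relErr * lowerBounds.get(e.getKey()), deltaPerAgent)));
    }

    public static void main(String[] args) {
        // Example: range 1.0, lower bound 0.5, 5% relative error, 1% failure probability.
        System.out.println(samplesFor(1.0, 0.05 * 0.5, 0.01));   // about 4,239 samples
    }
}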

Once the number of required samples for each agent is determined, the approximate Shapley value, with the desired guarantees on the absolute error, can easily be computed by a randomized algorithm evaluating the required samples of coalitions for each player (see Section 7.1 for a brief description of our parallel implementation).

In order to consider the classical percentage expression for the approximation error, we should replace the absolute error $\varepsilon$ in (2) by a fraction of $\phi_i$. First observe that $\phi_i > 0$ for all agents that are considered by the algorithms, because our simplification techniques preliminarily identify and remove from the game those agents having a null Shapley value (these agents must be interested only in goods with a null value). In fact, the value of $\phi_i$ that would appear in (3) may be replaced by any known (non-null) lower bound $L_i \le \phi_i$, at the expense of taking more samples than strictly necessary. On our largest test instance (namely, the researchers of Sapienza University of Rome who participated in the research assessment exercise VQR2011-2014), the technique described in Section 5 yields strictly positive lower bounds for all agents. It turns out that, in a matter of hours, we are able to get approximate Shapley values within 5% of the correct values.

It should be noted that the bound presented by Maleki et al., due to the exponential relation it establishes between the number of samples and the failure probability, allows us to compute good approximate Shapley values efficiently, at least on our test instances, where the range of the marginal contributions is fairly limited. For comparison, the FPRAS approach described in Section 6.1 would have taken a few years (instead of the few hours required by the approach presented here) to process our largest input instance with the same error guarantee (see Section 7 for details on our experiments).

7 Implementation Details and Experimental Evaluation

7.1 Parallel Implementation of Shapley Value Algorithms

All the algorithms considered in this paper are amenable to parallel implementation. We engineered our parallel implementations as follows.

FPRAS algorithm Liben-Nowell et al. (2012). Besides the input allocation game and the two parameters $\varepsilon$ and $\delta$, we added a third parameter, the thread pool size. During the execution of the algorithm, each thread (there are as many threads as the thread pool size dictates) is responsible for generating a certain number of permutations according to the requested approximation factor and, for each permutation, it computes the marginal contributions of all authors to that permutation, and saves them to a local cache. Whenever a thread has generated its assigned number of permutations, it delivers its local cache of computed scores to a synchronized output acceptor (which increments the overall score of each author accordingly), and then shuts itself down as its work is completed. When all threads have shut down, each entry of the acceptor's output vector is averaged over the total number of permutations, yielding the final approximate Shapley vector for that run. The above procedure is repeated for each independent run. When all runs are done, the component-wise median of all final approximate Shapley vectors is computed, and the resulting vector is scaled (i.e., all entries are multiplied by a number such that the budget-balance property is enforced), yielding the desired approximation with the desired probability.
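A compact sketch of this kind of parallelization (ours, using a standard ExecutorService rather than the paper's hand-rolled thread management, and relying on the hypothetical PermutationSampling.estimate sketched in Section 6.1) is shown below; each task processes an equal share of the permutations and the per-thread averages are merged at the end.

import java.util.*;
import java.util.concurrent.*;
import java.util.function.ToDoubleFunction;

public class ParallelSampling {
    // Splits the permutation budget across a fixed thread pool and merges the
    // per-thread averages into a single estimate (equal shares per worker).
    static Map<String, Double> estimateParallel(List<String> agents,
                                                Map<String, Set<String>> neighbors,
                                                ToDoubleFunction<Set<String>> worth,
                                                int totalPermutations, int threads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            int share = Math.max(1, totalPermutations / threads);   // permutations per worker
            List<Future<Map<String, Double>>> futures = new ArrayList<>();
            for (int t = 0; t < threads; t++) {
                long seed = 1000L + t;                              // distinct seed per worker
                Callable<Map<String, Double>> task =
                        () -> PermutationSampling.estimate(agents, neighbors, worth, share, seed);
                futures.add(pool.submit(task));
            }
            Map<String, Double> merged = new HashMap<>();
            for (Future<Map<String, Double>> f : futures)
                for (Map.Entry<String, Double> e : f.get().entrySet())
                    merged.merge(e.getKey(), e.getValue() / threads, Double::sum);
            return merged;                                          // average of the thread averages
        } finally {
            pool.shutdown();
        }
    }
}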

Algorithm based on the ranges of samples Maleki et al. (2013). As a preliminary step, the number of required samples for each author is determined by a sequential routine (as this computation is very fast), based on the approximation parameters $\varepsilon$ and $\delta$ and on the precomputed range of the marginal contributions of each author. The algorithm also receives two extra parameters: the thread pool size and a maximum job size. Subsequently, each thread (the total number of threads is determined by the thread pool size) asks a synchronized producer for a job, i.e., a pair consisting of an author and a number of samples to draw for that author. The synchronized producer either provides a job for the requesting thread, or it returns null, if enough jobs have already been distributed to satisfy the approximation requirements. Upon receiving a job, a thread produces uniformly distributed random subsets of the other authors and, for each such subset, computes the marginal contribution of the author to it. The sum of these contributions is delivered to a synchronized output acceptor, which stores, for each author, the sum of all marginal contributions computed so far by the various threads. Notice that the job provider will always distribute jobs whose number of samples does not exceed the maximum job size. This is done to ensure, with proper tuning of this parameter, load balancing between the threads. Finally, when a thread receives null from the synchronized job provider, it simply shuts itself down, as there is no more work to do. When all threads have shut down, the output acceptor will average the sum of all marginal contributions of each author over the number of required samples for that author, yielding the approximate Shapley value.

Exact algorithm. In our exact algorithm implementation, each thread (the total number of threads is specified by an input parameter) asks a synchronized producer for a subset of authors to work with. The synchronized subset producer either provides an $n$-bit integer (where $n$ is the number of authors) for the requesting thread, or it returns null if all subsets have already been delivered for elaboration. Upon receiving an $n$-bit integer from the subset provider, a thread turns it into a subset of authors (if a bit is set to 1, then the corresponding author is included in the subset), and computes partial scores for all authors in the subset, storing the values obtained in a local cache. When a thread receives null from the subset provider, it delivers its local cache of computed scores to a synchronized output acceptor (which increments the overall score of each author accordingly), and then shuts itself down, as it has no more work to do. When all threads have shut down, the output vector will contain the exact Shapley values for all authors.

7.2 Experimental Results

Hardware and software configuration. Experiments have been performed on two dedicated machines. In particular, sequential implementations were run on a machine with an Intel Core i7-3770k 3.5 GHz processor, 12 GB (DDR3 1600 MHz) of RAM, and operating system Linux Debian Jessie. We tested the parallel implementations on a machine equipped with two Intel Xeon E5-4610 v2 @ 2.30GHz with 8 cores and 16 logical processors each, for a total of 32 logical processors, 128 GB of RAM, and operating system Linux Debian Wheezy. Algorithms were implemented in Java, and the code was executed on the JDK 1.8.0 05-b13, for the Intel Core i7 machine, and on the OpenJDK Runtime Environment (IcedTea 2.6.7) (7u111-2.6.7-1 deb7u1), for the Intel Xeon machine.

Dataset description. We applied the algorithms to the computation of a fair division of the scores for the researchers of Sapienza University of Rome who participated in the research assessment exercise VQR2011-2014. There were 3562 Sapienza contributors to the exercise, and almost all of them were required to submit 2 publications for review. We computed the score of each publication by applying, when available, the bibliographic assessment tables provided by ANVUR.

Preprocessing. The analysis was carried out by preliminarily simplifying the input using the properties discussed in Section 4, as explained next.

Starting from a setting with 3562 researchers and 5909 publications, we first removed each researcher having no publications for review; after this step, a total of 370 authors were removed. Then, by exploiting the simplification described in Fact 4.2, we removed 2323 publications. By using Theorem 4.3, the graph was subsequently filtered by removing each author whose marginal contribution to the grand coalition coincides with the value of the optimal allocation restricted to the author himself; after this step, 2427 researchers out of 3562 were removed. Then we divided the resulting agents graph into connected components, obtaining a total of 156 connected components, only two of which consist of more than 10 agents; the sizes of these two components are 691 and 15. Eventually, the components were further simplified by using Fact 4.4. After the whole preprocessing phase, we obtained a total of 159 connected components, with the largest one having 685 nodes; the size of the second largest component is just 15, while all the others remain very small (fewer than 10 nodes). In the rest of the section, we shall illustrate the results of the experimental activity conducted with the various methods. To this end, we fixed a common value for the confidence parameter of the randomized algorithms. This value was chosen heuristically, based on a series of tests conducted on various CUN Areas of Sapienza, where CUN Areas are (large) scientific disciplines such as Math and Computer Science (Area 01) or Physics (Area 02).

Tests with components of variable size. As already pointed out, after the preprocessing step we obtained very small connected components (fewer than 10 nodes) except for the largest two (685 and 15 nodes, respectively). For all components with fewer than 10 nodes, the exact algorithm, of which we used a sequential implementation for these tests, performs very well (a few milliseconds), therefore we omit the analysis here. In order to test all the other algorithms, besides the two largest components, we randomly extracted samples of (distinct) nodes out of the original graph, to produce different subgraphs of increasing size.

Figure 3: Methods comparison.
Figure 4: Methods comparison.

For the considered cases, we do not find significant differences among the values obtained by using the two approximation algorithms and the exact ones (see, e.g., Figures 3 and 4, in which the approximation algorithms were required to produce results within 5% of the exact value; in these two figures, the values obtained by the FPRAS are not visible because they coincide with the exact values). Notably, with the exception of a small number of cases, our bounds (especially the lower bounds) are always very close to the exact value. In particular, for one of the considered instances we were able to immediately get the Shapley value for all agents, since upper and lower bounds coincide for all of them.

We also evaluated how many computations of optimal allocations were avoided in the FPRAS of Liben-Nowell et al. by exploiting Fact 4.5 (and hence executing, in that case, step 10 rather than step 8 of Algorithm 2). By fixing the approximation error, for the different instance sizes we obtain savings of 28%, 18%, 29%, 30%, and 21% of the optimal allocation computations, respectively.

As already pointed out, the FPRAS method performed much better than its theoretical guarantee on the maximum approximation error. We measured the real maximum and average approximation errors of our implementation w.r.t. the exact algorithm for each considered instance size. In all cases, the maximum approximation error was about 1% (or less), and therefore considerably below the theoretical guarantee (30%). The algorithm based on the bound of Maleki et al. also performs better than its theoretical guarantee, though not by as wide a margin as the FPRAS method (it is, however, much faster, as we will see in the next paragraph): for the considered instance sizes, we measured maximum errors of 0.093, 0.098, and 0.097, with average errors of 0.046, 0.011, and 0.019, respectively. In all cases, the maximum approximation error was below 10%, and therefore smaller than the required threshold.

Running Times. Figures 5, 6, and 7 report the computation times of the various algorithms. In particular, Figure 5 focuses on the sequential implementations of the brute-force algorithm for computing the exact values, and of the algorithms for computing the upper and lower bounds. For the experiments, we computed the two bounds separately, in order to point out that the computation of the lower bound requires, in general, more time, because it considers allocations over larger coalitions than those considered for the computation of the upper bound. Moreover, as discussed in Section 5, the running times for computing the bounds heavily depend on the cardinality of the agents' neighborhoods. This explains why the running times do not grow uniformly with the instance size.

Figure 5: Sequential implementations: running times for the computation of the exact value by using the brute-force algorithm (green), and of the upper and lower bounds (blue) vs instance size.
Figure 6: Parallel implementation of the FPRAS method: running times vs. the allowed approximation error.
Figure 7: Parallel implementation of the Maleki-based algorithm: running times vs. the allowed absolute error.

Figure 6 shows the running time of the parallel implementation of the FPRAS method, using 24 threads, for different values of the allowed approximation error. In particular, we performed five trials over the different (sub)games described above, and report averaged measures. We can see that, for games of reasonable size, we can afford a strong theoretical guarantee on the approximation error. For instance, for the largest considered (sub)game we were able to compute the approximate Shapley value with a tight error guarantee in less than 90 minutes. There is a big gap between the performances of the FPRAS method at the two extreme values we considered for the allowed approximation error. However, as already pointed out, even when we used a poor theoretical guarantee on the approximation error, we still obtained quite reasonable accuracy.

In spite of its excellent accuracy, and its high efficiency when compared to the exact algorithm, we estimated that our parallel implementation of the FPRAS method would take, with the same error guarantee and 24 threads, a few years to fully analyze the largest component of our Sapienza test case, comprising 685 authors. By contrast, the parallel implementation of the algorithm based on the bound proposed by Maleki et al., with the same settings, takes only 11.75 hours. The bound on the number of samples proposed by Maleki et al. requires the knowledge of the range of the marginal contributions, which was computed in less than 3 minutes. Moreover, in order to guarantee that the results are within a certain percentage of the correct values, the lower bounds for the Shapley value are also required. For the biggest component of our test instance, we computed the lower bounds for the 681 authors with neighborhood size up to 19; for the few remaining authors with more neighbors (just 4 authors), we used as lower bound the marginal contribution to the grand coalition. Multithreaded computation of the lower bounds took approximately 160 hours.

It should be noted that the bound by Maleki et al. could be applied directly to the largest connected component of the unsimplified Sapienza VQR graph, which comprises 1176 authors. In this case, a straightforward application of the bound for all authors requires, on our server with 24 threads and a moderate absolute error, roughly 20.5 hours; with a tighter absolute error, the computation time increases to approximately 31 days. Figure 7 shows the running times of the parallel implementation of the Maleki-based algorithm on the two largest connected components of our test instances, with varying values of the absolute error.

8 Conclusions and Future Work

In this paper, we have identified useful properties that allow us to decompose large instances of allocation problems into smaller and simpler ones, in order to be able to compute the Shapley value. The proposed techniques greatly improve the applicability to real-world problems of the approximation algorithms described in the literature. Furthermore, we described an algorithm for the computation of an upper bound and a lower bound for the Shapley value. These bounds provide a more accurate estimate of approximation errors, and (often, in our case study) yield the exact Shapley value for those agents where upper and lower bounds coincide.

We have engineered parallel implementations of the considered algorithms, and we have tested them on a real-world problem, namely, the 2011-2014 Italian research assessment program (known as VQR), modeled as an allocation game. With the proposed tools, we have been able to compute, either exactly, or within a fairly good approximation (5% of the correct value with 99% probability) the Shapley value for all agents in our largest test instance, namely, Sapienza University of Rome, comprising 3562 researchers and 5909 research products.

As future work, we would like to extend the structure-based technique described in Greco et al. (2015) to the more general class of games where more than one good can be allocated to each agent (as is the case in VQR allocations). This way, we could efficiently compute the exact Shapley value for large games, provided that the treewidth of the agents graph is small. In this respect, we note that this is not the case for the large Sapienza VQR instance, because after the simplification performed with the tools described in the paper we are left with a large component whose estimated treewidth is 64. This is too much for structure-based decomposition techniques. However, for the sake of completeness, we note that all other components have a low treewidth. For instance, the component with 50 agents used in our tests has treewidth 5.

Finally, we would like to obtain tighter lower and upper bounds, possibly with a computational effort that can be tuned to meet given time constraints.

References

  • Aziz, H., & de Keijzer, B. (2014). Shapley meets Shapley. In Proceedings of the 31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014), Lyon, France, March 5-8, 2014, pp. 99–111. http://dx.doi.org/10.4230/LIPIcs.STACS.2014.99
  • Bachrach, Y., & Rosenschein, J. S. (2009). Power in threshold network flow games. Autonomous Agents and Multi-Agent Systems, 18(1), 106–132. http://dx.doi.org/10.1007/s10458-008-9057-6
  • Deng, X., & Papadimitriou, C. H. (1994). On the complexity of cooperative solution concepts. Mathematics of Operations Research, 19(2), 257–266. http://dl.acm.org/citation.cfm?id=183315.183317
  • Greco, G., Lupia, F., & Scarcello, F. (2015). Structural tractability of Shapley and Banzhaf values in allocation games. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), Buenos Aires, Argentina, July 25-31, 2015, pp. 547–553. http://ijcai.org/papers15/Abstracts/IJCAI15-083.html
  • Greco, G., & Scarcello, F. (2013). Fair division rules for funds distribution: The case of the Italian research assessment program (VQR 2004-2010). Intelligenza Artificiale, 7(1), 45–56. http://content.iospress.com/articles/intelligenza-artificiale/ia042
  • Greco, G., & Scarcello, F. (2014a). Counting solutions to conjunctive queries: structural and hybrid tractability. In Proceedings of the 33rd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS'14), Snowbird, UT, USA, June 22-27, 2014, pp. 132–143. http://doi.acm.org/10.1145/2594538.2594559
  • Greco, G., & Scarcello, F. (2014b). Mechanisms for fair allocation problems: No-punishment payment rules in verifiable settings. Journal of Artificial Intelligence Research (JAIR), 49, 403–449. http://dx.doi.org/10.1613/jair.4224
  • Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301), 13–30. http://www.tandfonline.com/doi/abs/10.1080/01621459.1963.10500830
  • Iera, A., Militano, L., Romeo, L., & Scarcello, F. (2011). Fair cost allocation in cellular-Bluetooth cooperation scenarios. IEEE Transactions on Wireless Communications, 10(8), 2566–2576.
  • Liben-Nowell, D., Sharp, A., Wexler, T., & Woods, K. (2012). Computing Shapley value in supermodular coalitional games. In J. Gudmundsson, J. Mestre, & T. Viglas (Eds.), Computing and Combinatorics: 18th Annual International Conference (COCOON 2012), Sydney, Australia, August 20-22, 2012, Proceedings, pp. 568–579. Berlin, Heidelberg: Springer. http://dx.doi.org/10.1007/978-3-642-32241-9_48
  • Maleki, S., Tran-Thanh, L., Hines, G., Rahwan, T., & Rogers, A. (2013). Bounding the estimation error of sampling-based Shapley value approximation with/without stratifying. CoRR, abs/1306.4265. http://arxiv.org/abs/1306.4265
  • Maniquet, F. (2003). A characterization of the Shapley value in queueing problems. Journal of Economic Theory, 109(1), 90–103. http://www.sciencedirect.com/science/article/pii/S0022053102000364
  • Mishra, D., & Rangarajan, B. (2007). Cost sharing in a job scheduling problem. Social Choice and Welfare, 29(3), 369–382. http://ideas.repec.org/a/spr/sochwe/v29y2007i3p369-382.html
  • Moulin, H. (1992). An application of the Shapley value to fair division with money. Econometrica, 60(6), 1331–1349. http://ideas.repec.org/a/ecm/emetrp/v60y1992i6p1331-49.html
  • Nagamochi, H., Zeng, D.-Z., Kabutoya, N., & Ibaraki, T. (1997). Complexity of the minimum base game on matroids. Mathematics of Operations Research, 22(1), 146–164. http://dl.acm.org/citation.cfm?id=265654.265660
  • Shapley, L. S. (1953). A value for n-person games. In Contributions to the Theory of Games, volume 2, pp. 307–317.