This paper considers the classic “house allocation” problem of Hylland and Zeckhauser (1979). A set of $n$ agents are to be matched, one-to-one, to a set of $n$ items, without monetary transfers. Each agent $i$ has a value $v_{ij}$ for each item $j$, and a randomized matching corresponds to a doubly-stochastic matrix $x$, providing the probability $x_{ij}$ that $i$ will be matched to $j$; the expected utility of $i$ in $x$ is $u_i(x) = \sum_j v_{ij} x_{ij}$. Our main result is a mechanism that incentivizes the agents to always truthfully report their cardinal preferences, i.e., their values, and yields an outcome that is approximately both fair and efficient.¹ We measure the performance of our mechanism using the canonical benchmark defined by the Nash bargaining solution and show that our mechanism outperforms the standard mechanisms with the same, or weaker, incentive properties.

¹Zhou (1990) shows that there is no mechanism that is truthful, anonymous, and Pareto efficient; thus, some notion of approximation is necessary.
The literature on one-sided matching has considered three main approaches, none of which gives rise to mechanisms that are both truthful and obtain a non-trivial approximation of the aforementioned benchmark. Hylland and Zeckhauser (1979) propose the competitive equilibrium from equal incomes (CEEI). Just like our approach, this is a cardinal mechanism, i.e., it elicits all the values from the agents. Agents are endowed with equal incomes and the outcome is the one that results from market clearing prices. CEEI provides a natural notion of efficiency and fairness but, among other issues, it is not truthful (except under a large market limit assumption; Budish, 2011). The random serial dictatorship (RSD), or random priority, mechanism is an important mechanism with a long history in practice. This mechanism randomly orders the agents and, following this order, gives to each agent her favorite item among the ones that are still available. RSD is an ordinal mechanism: it requires only the ordinal preferences of each agent, i.e., only her ranking of the items from most to least preferred. It elicits this information truthfully, but its outcomes can be very inefficient. The probabilistic serial (PS) mechanism of Bogomolnaia and Moulin (2001) is another ordinal mechanism, and its outcome is computed by continuously allocating to each agent portions of her most preferred item that has not already been totally allocated. PS satisfies an ordinal notion of efficiency, but it achieves only a trivial approximation of our much stronger benchmark and it is not truthful (except under a large market limit assumption; Kojima and Manea, 2010). We provide a more detailed discussion regarding these mechanisms and other related work in Section 7.
Seeking to provide stronger efficiency and fairness guarantees compared to known mechanisms, we consider the approximation of a cardinal benchmark: the well-studied Nash bargaining solution, proposed by Nash (1950). Given a disagreement point, i.e., the “status quo” that would arise if negotiations among the agents were to break down, the Nash bargaining solution is the outcome that maximizes the product of the agents’ marginal utilities relative to their utility for the disagreement point. This outcome indicates the utility that each agent “deserves” to get, so we use this utility as the benchmark for that agent. The choice of disagreement point can depend on the application at hand: if a buyer and a seller are negotiating a transaction, the disagreement point could be that the seller keeps the goods and the buyer keeps her money. In one-sided matching markets the disagreement point needs to be a matching because leaving an agent without a house is infeasible. Since all agents have symmetric claims on the items when entering the market, we let the disagreement point be a matching chosen uniformly at random, thus ensuring that each agent is equally likely to be matched to each item. The Nash bargaining solution therefore corresponds to the doubly-stochastic matrix $x$ that maximizes $\prod_i \bigl(u_i(x) - d_i\bigr)$, where $d_i = \frac{1}{n}\sum_j v_{ij}$ is the expected utility of agent $i$ for an item chosen uniformly at random. Note that, once the valuations of each agent are adjusted by subtracting $d_i$, our objective corresponds to the Nash social welfare (NSW), which has recently received a lot of attention in the fair division literature (e.g., Cole and Gkatzelis, 2018; Brânzei et al., 2017; Caragiannis et al., 2016; Cole et al., 2013). The NSW maximizing outcome is proportionally fair in that it satisfies a multiplicative version of Pareto efficiency: the utility of an agent cannot be increased by a multiplicative factor without decreasing the product of the utilities of the other agents by a greater multiplicative factor.
Since truthful mechanisms are unable to guarantee Pareto efficiency (Zhou, 1990), it is clearly impossible for any such mechanism to implement the Nash bargaining solution, which is a refinement of Pareto efficiency. Thus, we consider the problem of approximating this solution. With fairness in mind, rather than considering an aggregate notion of approximation, our goal is to ensure that every agent’s utility is as close as possible to the utility she would obtain in the Nash bargaining solution. Formally, a mechanism is an $\alpha$-approximation if the utility of each agent is at least a $1/\alpha$ fraction of her utility in the Nash bargaining solution. Since we consider randomized mechanisms, our setting is equivalent to allocating divisible items with the constraint that each agent receives exactly one item in expectation. Our mechanism leverages the partial allocation (PA) mechanism of Cole et al. (2013), which considers the fair allocation of divisible items and guarantees each agent at least a constant fraction of her utility in the NSW maximizing outcome. To incentivize truthful reports, the PA mechanism fractionally reduces an agent’s allocation in a way that, from the perspective of the agent’s utility, is equivalent to a form of payment. However, in the context of the house allocation problem, such fractional reductions imply that some agents are likely to be left without a house, which is an infeasible outcome. We therefore introduce a random sampling approach that enables the use of fractional reductions in a way that maintains incentives but addresses feasibility.
It has long been known that, unlike the Kalai-Smorodinsky solution, the Nash bargaining solution can violate population monotonicity for some instances of the bargaining problem (Thomson, 1983; Thomson and Lensberg, 1989). That is, there exist instances where removing some of the agents and computing the updated Nash bargaining solution can decrease the utility of some of the remaining agents. When allocating items among competing agents, this lack of monotonicity is somewhat counter-intuitive. Why would the decreased competition from agents departing the market not lead to (weakly) increased utility for the agents remaining in the market? Indeed, we show that population monotonicity can be violated in the Nash bargaining solution for matching markets. Effectively, the constraint that the allocation is a distribution over perfect matchings introduces positive externalities between agents.
In order to quantify the extent to which one of the remaining agents’ utility can drop after such a change in the agent population, the bargaining literature in economics introduced the opportunity structure notion (e.g., see the book by Thomson and Lensberg, 1989, and references therein). This structure identifies the largest factor by which a remaining agent’s utility can drop after some subset of agents is removed. In fact, resembling the standard computer science approach, the opportunity structure is defined as the worst-case factor over all instances, all removed subsets of agents, and all remaining agents. Prior work has provided upper and lower bounds for this measure in general instances but, to the best of our knowledge, no such bounds were previously known for matching markets. In this paper we provide such bounds, showing that this factor can grow logarithmically in the number of agents but strictly slower than any polynomial. Apart from the broader interest in understanding this measure in matching markets, we show that the upper bound on the population non-monotonicity provides, up to constant factors, an upper bound on the approximation factor of the truthful matching mechanism that we define.
In this paper we introduce a novel use of random sampling which enables us to translate non-trivial truthful one-sided matching mechanisms that may produce partial matchings (i.e., possibly leaving some agents unmatched) into ones where (i) every agent is always assigned an item, and (ii) the incentives for truthful reporting of preferences are maintained. For example, the PA mechanism’s truthfulness guarantee depends heavily on its ability to penalize the agents that cause inconvenience to others; it thereby ensures that none of these agents are misreporting their preferences. Since monetary payments are prohibited, this mechanism penalizes the agents by assigning positive probability to the possibility of leaving them unmatched. Such a partial matching, however, is unacceptable in the house allocation problem. Every agent, no matter what values she reports, needs to be guaranteed an item, and this constraint significantly restricts our ability to introduce penalties. Nevertheless, we show that we can still recreate such penalties by using random sampling. Using the PA mechanism as a sub-routine, we define the randomized partial improvement (RPI) mechanism, which significantly outperforms all the standard matching mechanisms according to our fairness benchmark.
The RPI mechanism endows each agent with a baseline allocation given by a uniformly random item and uses the PA mechanism to improve the agents’ utility relative to this baseline. In fact, it is not possible to simultaneously maintain the baseline and offer improvements, so RPI circumvents this impossibility by imposing these two conditions on a sample of just half the agents. With half the agents (but all of the items) there is sufficient flexibility to faithfully implement the PA mechanism with the outside option of a uniformly random house. With these agents properly allocated half the items, RPI recursively allocates the remaining half of the items to the remaining half of the agents.
As an intermediate step toward the theoretical analysis of RPI’s approximation factor, we study the extent to which population monotonicity may be violated in a one-sided matching market instance. We refer to an instance as $\rho$-utility monotone if removing a subset of its agents can decrease a remaining agent’s utility in the new Nash bargaining solution by a factor of no more than $\rho$. We show that, for a very carefully constructed family of instances, $\rho$ can be as high as $\Omega(\log n)$, and we complement this bound by proving that $\rho$ for any one-sided matching instance is no more than $n^\epsilon$ for any constant $\epsilon > 0$.
Apart from the broader interest in understanding the extent to which the Nash bargaining solution may violate population monotonicity, our upper bound on $\rho$ also directly implies an upper bound on the approximation factor of RPI. Specifically, we prove that RPI guarantees to every agent an $O(\rho)$ approximation of the utility that she gets in the Nash bargaining benchmark. Therefore, as a corollary, we conclude that RPI approximates the Nash bargaining benchmark within $O(n^\epsilon)$ for any constant $\epsilon > 0$, even with the worst-case choice of $\rho$. In stark contrast to this upper bound, which is strictly better than any polynomial, we show that the approximation factor of all ordinal mechanisms (even ones that are not truthful, such as probabilistic serial) grows linearly with the number of agents. Therefore, our mechanism significantly outperforms all ordinal mechanisms while at the same time satisfying truthfulness.
Section 2 provides some preliminary definitions and Section 3 formally introduces the benchmark and approximation measure used throughout the paper. Our results showing that ordinal mechanisms fail to achieve any non-trivial approximation are in Section 4, and Section 5 includes the description of our mechanism and the proofs regarding its truthfulness and fairness guarantees. Finally, in Section 6 we study the population monotonicity of the Nash bargaining solution and provide both upper and lower bounds for it.
Given a set $N$ of $n$ agents and a set of $n$ items, a randomized matching can be represented by a doubly-stochastic matrix $x$ of marginal probabilities, where $x_{ij}$ denotes the marginal probability that agent $i$ is allocated item $j$. Clearly, any probability distribution over matchings implies a doubly-stochastic matrix, and the Birkhoff-von Neumann theorem shows that any doubly-stochastic matrix can be implemented as a probability distribution over matchings. Denote by $v$ a matrix of agent values, where $v_{ij}$ is the value of agent $i$ for item $j$. The expected utility of agent $i$ for random matching $x$ is $u_i(x) = \sum_j v_{ij} x_{ij}$. The random matching that a mechanism outputs when the agents’ reported values are $v$ is denoted by $x(v)$.
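As an illustration, the Birkhoff-von Neumann decomposition can be computed explicitly for small instances. The following sketch (function names are ours, and the brute-force permutation search is only meant for tiny $n$) recovers a distribution over matchings from a doubly-stochastic matrix:

```python
import itertools

def birkhoff_decompose(x, tol=1e-9):
    """Decompose a doubly-stochastic matrix into a convex combination of
    permutation matrices (Birkhoff-von Neumann).  Brute-force search over
    permutations, so this is an illustrative sketch for small n only."""
    n = len(x)
    x = [row[:] for row in x]  # work on a copy
    decomposition = []
    while max(max(row) for row in x) > tol:
        # find a permutation supported on the remaining positive entries
        perm = next(p for p in itertools.permutations(range(n))
                    if all(x[i][p[i]] > tol for i in range(n)))
        weight = min(x[i][perm[i]] for i in range(n))
        decomposition.append((weight, perm))
        for i in range(n):
            x[i][perm[i]] -= weight
    return decomposition  # weights sum to 1 for a doubly-stochastic input

# Example: sample a matching by drawing a permutation with these weights.
x = [[0.5, 0.3, 0.2],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]
decomposition = birkhoff_decompose(x)
```

Each iteration zeroes out at least one positive entry, so at most $n^2$ permutations appear in the decomposition.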
For each agent $i$, her values $v_i$ are private and a matching mechanism must be designed to properly elicit them. A mechanism is truthful if it is a dominant strategy for each agent to report her true values. If we let $x(v_i', v_{-i})$ denote the outcome of the mechanism when agent $i$ reports values $v_i'$ and all the other agents report values $v_{-i}$, then a mechanism is truthful if, for every agent $i$, any matrix of values $v$, and any misreport $v_i'$:
$$u_i\bigl(x(v_i, v_{-i})\bigr) \;\ge\; u_i\bigl(x(v_i', v_{-i})\bigr).$$
Our benchmark, formally defined in the following section, uses the Nash social welfare
(NSW) objective on appropriately adjusted agent valuations. The NSW maximizing outcome is known to provide a balance between fairness and efficiency by maximizing the geometric mean (or, equivalently, the product) of the agents’ expected utilities, i.e., $\bigl(\prod_i u_i(x)\bigr)^{1/n}$. The partial allocation mechanism from Cole et al. (2013) provides a truthful approximation of that outcome and can be easily adapted to randomized matchings by interpreting fractional allocations as probabilities.
The partial allocation (PA) mechanism on values $v$ works as follows:
Compute the doubly-stochastic matrix $x^*$ that maximizes the Nash social welfare.
For each agent $i$, compute the fraction $f_i$ to allocate as follows:
Let $u_j(x^*)$ be agent $j$’s utility in $x^*$.
Let $u_j(x^*_{-i})$ be agent $j$’s utility in $x^*_{-i}$, the NSW maximizing allocation with agent $i$ absent and all other agents restricted to one unit, i.e., $\sum_j x_{kj} \le 1$ for all $k \ne i$.
The fraction $f_i$ is defined as:
$$f_i = \frac{\prod_{j \ne i} u_j(x^*)}{\prod_{j \ne i} u_j(x^*_{-i})}.$$
Allocate each item $j$ to each agent $i$ with probability $f_i \cdot x^*_{ij}$.
Notice that the fraction $f_i$ of the NSW maximizing assignment allocated to agent $i$ is equal to the relative loss in utility that $i$’s presence imposes on the other agents. Note that the denominator of $f_i$ is independent of $i$’s declared valuations, and so in maximizing $f_i \cdot u_i(x^*)$, which would be agent $i$’s goal, she is maximizing the NSW objective $\prod_j u_j(x^*)$, which she achieves when she reports truthfully. Cole et al. (2013) prove a constant lower bound on $f_i$ without the unit constraint on allocations, but the same argument holds with the unit constraint.
Theorem 2 (Cole et al., 2013).
The partial allocation mechanism is truthful, feasible, and allocates each agent a fraction $f_i$ of the NSW maximizing assignment, where $f_i$ is at least a universal constant.
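To make the PA mechanism concrete, the following sketch works through a tiny two-agent, two-item instance in which both agents value item 1 at 3 and item 2 at 1. The instance and all function names are our own illustration; on a $2 \times 2$ doubly-stochastic matrix the NSW maximization reduces to a one-parameter search.

```python
# p = probability that agent 1 receives item 1 (agent 2 then gets it w.p. 1-p).

def utilities(v, p):
    """Expected utilities of the two agents under the 2x2 matching."""
    u1 = v[0][0] * p + v[0][1] * (1 - p)
    u2 = v[1][0] * (1 - p) + v[1][1] * p
    return u1, u2

def nsw_max(v, grid=10**5):
    """Grid search for the NSW-maximizing p (fine for this 1-D family)."""
    return max((k / grid for k in range(grid + 1)),
               key=lambda p: utilities(v, p)[0] * utilities(v, p)[1])

# Both agents value item 1 at 3 and item 2 at 1: a symmetric, contested instance.
v = [[3, 1], [3, 1]]
p = nsw_max(v)            # symmetry gives p = 1/2 and utilities (2, 2)
u1, u2 = utilities(v, p)

# With agent 1 removed, agent 2 (restricted to one unit) takes item 1 fully.
u2_without_1 = 3.0
f1 = u2 / u2_without_1    # PA fraction for agent 1: 2/3
```

Here PA allocates agent 1 a fraction $f_1 = 2/3$ of her NSW-maximizing assignment, exactly the relative utility loss her presence imposes on agent 2.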
3 The Nash Bargaining Benchmark
In this section, we define our cardinal benchmark as well as an approximation measure for evaluating mechanisms for the one-sided matching problem. Our benchmark is the Nash bargaining solution with a uniformly random matching as the disagreement point. Each agent $i$’s expected utility for this disagreement point is $d_i = \frac{1}{n}\sum_j v_{ij}$, and the Nash bargaining solution is the outcome that maximizes the Nash social welfare objective with respect to the marginal valuations $v_{ij} - d_i$. In other words, the Nash bargaining solution distributes the additional value, beyond each agent’s outside option, in a fair and efficient manner.
The Nash bargaining solution with disagreement point $d$ is
$$x^{NB} = \arg\max_x \prod_i \bigl(u_i(x) - d_i\bigr),$$
where every agent is constrained to have non-negative marginal utility, $u_i(x) \ge d_i$.
Apart from its fairness properties, this benchmark is also appealing because of its invariance to additive shifts and multiplicative scalings of any agent’s values for the items. Shifting all the values of an agent by adding some constant does not affect the marginal values after the outside option is subtracted. Also, scaling all of the values of an agent by some constant does not have any impact on what the Nash bargaining solution, $x^{NB}$, is; the product value of every outcome is multiplied by the same constant, and hence the optimum is unaffected. As a result, we do not need to assume that the values reported by the agents are scaled in any particular way. One thing to note about the benchmark being invariant to these changes is that, on instances where the agents’ values are identical up to shifts and scales, the benchmark assignment is the uniform random assignment.²

²The combined property of shift and scale invariance has some counterintuitive implications. Consider an example instance where each agent $i$ has some value $a_i$ for item 1 and a common lower value $b_i$ for every other item. In the Nash bargaining solution, all agents receive a uniform random item and, in particular, a $1/n$ fraction of the preferred item 1. This outcome may seem surprising, as it does not account for the possibility that some agents may prefer item 1 much more than other agents. This uniform outcome results because the agents’ preferences are equivalent up to additive and multiplicative shifts.
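The invariance claims can be verified directly on the bargaining objective: if agent $i$'s values are transformed as $v'_{ij} = a_i v_{ij} + b_i$ with $a_i > 0$, then, using $\sum_j x_{ij} = 1$,

```latex
u'_i(x) = a_i\, u_i(x) + b_i, \qquad d'_i = a_i\, d_i + b_i,
\qquad\text{so}\qquad
\prod_i \bigl(u'_i(x) - d'_i\bigr)
  = \Bigl(\prod_i a_i\Bigr) \prod_i \bigl(u_i(x) - d_i\bigr),
```

and the maximizer $x^{NB}$ is unchanged.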
Our goal is to approximate $x^{NB}$, the Nash bargaining solution with disagreement point given by a uniform random matching, with the following per-agent guarantee.
The per-agent approximation of mechanism $x(\cdot)$ with benchmark assignment $x^{NB}$ is the worst-case ratio of the utility of any agent in $x^{NB}$ and $x(v)$,
$$\max_{v} \max_i \frac{u_i(x^{NB})}{u_i(x(v))}.$$
4 Inapproximability by Ordinal Mechanisms
Ordinal mechanisms are popular in the literature on matching. Rather than asking agents for cardinal values for each item, an ordinal mechanism need only solicit an agent’s preference order over the items. Two prevalent ordinal mechanisms are the random serial dictatorship (RSD) and probabilistic serial (PS) mechanisms. One of our main motivations for studying cardinal mechanisms in this paper is that ordinal mechanisms are bound to generate unfair allocations for some instances, due to the fact that they disregard the intensity of the agents’ preferences; when the agents agree, or mostly agree, on their preference order, they may still disagree on preference intensities. Such correlated ordinal preferences can be common in many settings. For instance, when allocating courses to students, the ordinal preferences of the students are likely to be similar if they are in the same department or program. On the other hand, they may have stronger or weaker preferences for courses due to needing to take required courses before graduation, personal taste, or other factors.
Our first lower bound shows that the random serial dictatorship mechanism can be very unfair to some agent, leading to an approximation factor as bad as $n$ (the number of agents).
The worst-case approximation ratio of the random serial dictatorship (RSD) mechanism to the Nash bargaining benchmark is $\Omega(n)$.
Consider the example where agent 1 has value 1 for item 1 and no value for any other item, and each agent $i \ne 1$ has value 1 for item 1, value $\epsilon$ for item $i$, and no value for the other items.
In RSD, an ordering of the agents is generated uniformly at random, and then each agent is allocated her favorite available item in that order. In this instance, the first agent in the random ordering will always select item 1, and every agent has the same probability, $1/n$, of being ordered first. Since agent 1 has no value for any other item, the expected utility of agent 1 in RSD is $1/n$.
On the other hand, the Nash bargaining solution, as $\epsilon$ approaches zero, assigns each agent $i$ to item $i$ with probability that approaches 1. To verify this fact, note that for $\epsilon = 0$ the Nash bargaining solution would assign agent 1 to item 1 with probability 1, and observe that the Nash bargaining solution is continuous in $\epsilon$. Thus, the utility of each agent in the Nash bargaining solution – and specifically of agent 1 – approaches 1. As a result, the RSD mechanism is unfair to agent 1, leading to an approximation factor of $\Omega(n)$. ∎
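The RSD calculation above can be checked by brute force. The following sketch (our own illustration) enumerates all orderings on a small instance with $n = 4$ and confirms that agent 1 (index 0 in the code) obtains expected utility exactly $1/n$:

```python
import itertools

def rsd_expected_utilities(v):
    """Exact expected utilities under random serial dictatorship by
    enumerating all orderings (feasible only for small n)."""
    n = len(v)
    totals = [0.0] * n
    perms = list(itertools.permutations(range(n)))
    for order in perms:
        available = set(range(n))
        for agent in order:
            # each agent picks her highest-valued available item
            item = max(available, key=lambda j: v[agent][j])
            totals[agent] += v[agent][item]
            available.remove(item)
    return [t / len(perms) for t in totals]

# Lower-bound instance with n = 4 and eps = 0.01: agent 0 values only item 0;
# agent i >= 1 values item 0 at 1 and item i at eps.
eps = 0.01
n = 4
v = [[0.0] * n for _ in range(n)]
v[0][0] = 1.0
for i in range(1, n):
    v[i][0] = 1.0
    v[i][i] = eps

u = rsd_expected_utilities(v)
# Agent 0 receives item 0 only when ordered first, so u[0] = 1/n.
```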
In fact, with a small modification of the instance used to verify how unfair the RSD mechanism can be, the following theorem shows that every ordinal mechanism is susceptible to this issue.
The worst-case approximation ratio of any ordinal mechanism to the Nash bargaining benchmark is $\Omega(n)$.
Consider the following instance on $n$ agents and $n$ items, parameterized by $\epsilon > 0$, where agents correspond to rows of the valuation matrix and items to columns.
A key property of this instance is that the agents are ordinally indistinguishable: each ranks item 1 first, one of the items $2, \ldots, n$ second, and all other items last. On the other hand, the items $2, \ldots, n$ are also ordinally indistinguishable: each is ranked second by exactly one agent and ranked equivalently by every other agent.
Fix an ordinal mechanism. The ordinal indistinguishability of the agents implies, without loss of generality up to agent relabeling, that agent 1 receives item 1 with probability at most $1/n$. Thus, in the limit of $\epsilon$ going to $0$, agent 1 obtains a utility of at most $1/n$ in this ordinal mechanism.
The Nash bargaining solution is continuous in $\epsilon$ and with $\epsilon = 0$ gives each agent the maximum utility of 1 by allocating item 1 to agent 1, item 2 to agent $n$, and item $i$ to agent $i-1$ for $i \ge 3$. Thus, in the limit as $\epsilon$ goes to zero, the Nash bargaining solution gives agent 1 a utility of 1. Combining the two analyses, the per-agent approximation of the ordinal mechanism with respect to the Nash bargaining benchmark is $\Omega(n)$. ∎
5 The Randomized Partial Improvement Mechanism
In this section, we define the randomized partial improvement matching mechanism. This mechanism truthfully elicits the agents’ cardinal preferences and uses them in a non-trivial manner to select an outcome. We prove that the per-agent approximation of this mechanism with respect to the Nash bargaining benchmark is proportional to the population monotonicity of the benchmark, and its worst case is better than the worst case of any ordinal mechanism. The approach of the mechanism is to run the PA mechanism, with outside option given by the uniform random assignment, on a large sample of the agents and a large fraction of the supply. The resulting mechanism will inherit the truthfulness of the PA mechanism.
There are two key difficulties with this approach. First, in order to faithfully implement the outside option, some of the supply needs to be kept aside in the same proportion as the original supply. To enable this set aside, we need to reduce the allocation consumed by the PA mechanism; we achieve this with a novel use of random sampling (cf. Goldberg et al., 2006). Second, it is non-trivial to compare an agent’s utility across Nash social welfare maximizing assignments for the original market and a sample of the market. A major endeavor of our analysis shows that per-agent utility is approximately monotone, i.e., the fraction of an agent’s utility that is lost in the Nash social welfare as the competition from other agents decreases is non-trivially bounded. (Note, competition from other agents decreases as they are removed from the market.) Our mechanism, then, is structured to take advantage of this approximate monotonicity.
The mechanism is defined by a sequence of steps that gradually construct a doubly-stochastic matrix. By the Birkhoff-von Neumann theorem, this matrix can be viewed as a probability distribution over matchings. The high-level steps and intuition are as follows: the mechanism samples half the agents and runs, at half scale (i.e., with half-unit-demand agents and half-unit-supply items), the PA mechanism with outside option given by the uniform random assignment. Note, the total demand of half the agents (roughly $n/2$) with half-unit demand is a quarter of the total supply (roughly $n$ units), so there is a leftover one quarter of the total supply from the half-units on which the PA mechanism was run. A further one quarter of each item’s supply is used to provide a half-unit of the outside option to each of the (roughly $n/2$) agents in the sample. The final quarter is used to replace, as necessary, the fractions of items withheld due to the fractional reduction in the PA mechanism. Necessarily, the one-unit allocation to these agents uses up exactly half the supply. The remaining half of the supply is then allocated recursively to the remaining half of the agents. A formal description of this mechanism is below.
The randomized partial improvement (RPI) mechanism on $n$ agents with values $v$ and items with supplies $s$ with total $\sum_j s_j = n$ works as follows:
Randomly sample a subset $S$ of half the agents.
On the sampled agents run the PA mechanism with outside option given by the uniform random assignment (from the given supplies). Denote the allocation of item $j$ to agent $i \in S$ by $y_{ij}$; denote the total amount allocated to agent $i$ by $y_i = \sum_j y_{ij}$.
To each sampled agent, allocate half of their assignment from PA and “pad” with the outside option. As a result, the total allocation of item $j$ to agent $i \in S$ is $x_{ij} = \frac{1}{2} y_{ij} + \bigl(1 - \frac{1}{2} y_i\bigr)\frac{s_j}{n}$.
Recursively run RPI on the remaining agents with supplies $s'_j = s_j - \sum_{i \in S} x_{ij}$ with $\sum_j s'_j = n/2$ to obtain assignment $x'$.
Return the assignment that combines assignments $x$ and $x'$.
On $n \le 3$ agents, allocate the items uniformly at random, i.e., $x_{ij} = s_j / n$ for all $i$ and $j$.
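The recursion above can be sketched as follows. The padding rule and all names are our reconstruction of the formal description, and the PA subroutine is replaced by a placeholder satisfying the same feasibility constraints:

```python
import random

def rpi(agents, supplies, pa_mechanism):
    """Sketch of the RPI recursion.  `supplies` maps item -> remaining supply;
    the invariant sum(supplies.values()) == len(agents) holds at every level.
    `pa_mechanism(sampled, supplies)` stands in for the PA mechanism run at
    half scale: it must return fractions y[i][j] with sum_j y[i][j] <= 1 and
    sum_i y[i][j] <= supplies[j] / 2."""
    n = len(agents)
    if n <= 3:
        # base case: uniform random assignment
        return {i: {j: s / n for j, s in supplies.items()} for i in agents}
    sampled = set(random.sample(sorted(agents), n // 2))
    y = pa_mechanism(sampled, supplies)
    x = {}
    for i in sampled:
        y_i = sum(y[i].values())
        # half the PA assignment, padded to one unit with the outside option
        x[i] = {j: y[i][j] / 2 + (1 - y_i / 2) * s / n
                for j, s in supplies.items()}
    # the unsampled agents recursively share the leftover supply
    leftover = {j: s - sum(x[i][j] for i in sampled)
                for j, s in supplies.items()}
    x.update(rpi(agents - sampled, leftover, pa_mechanism))
    return x

# Placeholder PA: uniform fractions satisfying the two constraints above.
def pa_stub(sampled, supplies):
    m = len(sampled)
    return {i: {j: s / (2 * m) for j, s in supplies.items()} for i in sampled}

agents = set(range(8))
supplies = {j: 1.0 for j in range(8)}
assignment = rpi(agents, supplies, pa_stub)
```

With the uniform placeholder the output is simply the uniform assignment; the point of the sketch is the supply bookkeeping: every level maintains the invariant that the total remaining supply equals the number of remaining agents, so each agent ends up with exactly one unit.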
The following proof of correctness (feasibility and truthfulness) formalizes the intuition preceding the definition of the mechanism. The ideas of the proof are more transparent in the case where $n$ is even and, in particular, when all supplies are one unit.
The randomized partial improvement mechanism on $n$ unit-demand agents and $n$ unit-supply items is feasible, i.e., it gives fractional allocations that produce a doubly-stochastic matrix, and truthful, i.e., it is a dominant strategy for each agent to truthfully report her value for each item.
Feasibility is proved by induction on the recursive definition of the mechanism. The inductive hypothesis is that the fractional allocation on $n$ agents with supplies $s$ with sum $\sum_j s_j = n$ has a total fractional allocation to each agent of one, i.e., $\sum_j x_{ij} = 1$ for each agent $i$, and a total fractional allocation of each item equal to its supply, i.e., $\sum_i x_{ij} = s_j$ for each item $j$. The base case clearly satisfies the inductive hypothesis. For the inductive step, the key point to argue is that the supply of each item is sufficient to cover the allocation to the sampled agents.
This can be seen as follows. The sampled agents are allocated half of their PA assignment from half the supply and at most one unit each from the uniform random assignment on the other half of the supply (recall that $y_i \le 1$ for all $i$); exactly $n/2$ units are allocated in total. For the second half of the supply to be sufficient to implement the outside option in Step 3 of the mechanism, it suffices to observe that $\sum_{i \in S} \bigl(1 - \frac{1}{2} y_i\bigr)\frac{s_j}{n}$ is at most $\frac{s_j}{2}$, since $|S| \le n/2$.
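Under the natural reading of the padding step (each sampled agent receives $\frac{1}{2}y_{ij}$ of item $j$ from PA plus uniform padding $(1-\frac{1}{2}y_i)\frac{s_j}{n}$), the per-item accounting is:

```latex
\sum_{i \in S} x_{ij}
  \;=\; \underbrace{\sum_{i \in S} \tfrac{1}{2}\, y_{ij}}_{\le\, s_j/2}
  \;+\; \sum_{i \in S} \Bigl(1 - \tfrac{1}{2}\, y_i\Bigr) \frac{s_j}{n}
  \;\le\; \frac{s_j}{2} + \frac{n}{2} \cdot \frac{s_j}{n}
  \;=\; s_j ,
```

so the supply of each item indeed covers the allocation to the sampled agents.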
Truthfulness follows by considering each agent conditioned on the state of the mechanism during the recursive step where that agent is selected in the sampled set $S$. The agent’s report plays no role in determining the state at this point. Given the state, the outcome for this agent is fully determined by the PA mechanism, which is truthful. Thus, the mechanism is truthful. ∎
To bound the per-agent utility of the random partial improvement mechanism, we analyze the contribution to the utility of an agent who is sampled in the outermost recursive call of the mechanism. An agent is sampled as such with probability at least one half, and otherwise the agent’s utility is at least zero. The utility of these sampled agents is easily compared to the utility of the PA mechanism (without the agents that are not sampled). An issue significantly complicating the analysis of the approximation is the fact that we need to compare the utility of an agent sampled in this invocation of the PA mechanism with their utility in the Nash social welfare maximizing allocation on the full set of agents. Counter-intuitively, it is not true that these agents are always better off without the competition from the agents that are not sampled: there are instances where removing some of the competition, in fact, lowers the utility of an agent.
In Section 6 we define the $\rho$-utility monotonicity for NSW to be the maximum non-monotonicity of the utility of any agent $i$ over sets of agents $N$ and subsets $S \subseteq N$ with $i \in S$ and NSW solutions $x^N$ and $x^S$, respectively:
$$\rho = \max_{S \subseteq N} \max_{i \in S} \frac{u_i(x^N)}{u_i(x^S)}.$$
This parameter quantifies the extent to which some agent may be worse off in the NSW solution after the removal of some subset of agents. Defining the worst-case value of $\rho$ across instances and subsets as $\rho(n)$, Section 6 bounds $\rho(n)$ from below by $\Omega(\log n)$ and from above by a function that is $O(n^\epsilon)$ for any constant $\epsilon > 0$.
The randomized partial improvement mechanism on $n$ unit-demand agents and $n$ unit-supply items is an $O(\rho(n))$ per-agent approximation to the Nash bargaining solution with disagreement point given by the uniform random assignment.
If $n \le 3$ then the base case of RPI is invoked. In this case, a uniform random assignment is a constant approximation, as each agent obtains at least $1/3$ of each item.
Otherwise, we analyze the contribution to the utility of an agent conditioned on the agent being sampled in the first recursive call of the algorithm; otherwise the agent’s utility is at least zero. This event happens with probability at least $1/2$. When this happens, the utility of the agent is half the utility of PA on the sampled agents plus half the utility from the outside option. The $\rho$-utility monotonicity property implies that the utility of an agent in the NSW solution on the sample is a $\rho(n)$ approximation to the same agent’s utility in the NSW solution on the full set of agents. Running PA guarantees a constant fraction of this utility. Combining these steps, we obtain an $O(\rho(n))$ approximation. ∎
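Schematically, the chain of losses in this argument multiplies as follows, where $c_{PA}$ denotes the constant per-agent guarantee of the PA mechanism (the exact constants are our reading of the sketch above):

```latex
u_i(\mathrm{RPI})
  \;\ge\; \underbrace{\tfrac{1}{2}}_{\text{sampled}} \cdot
          \underbrace{\tfrac{1}{2}}_{\text{half scale}} \cdot
          \underbrace{c_{PA}}_{\text{PA loss}} \cdot
          \underbrace{\tfrac{1}{\rho(n)}}_{\text{monotonicity}} \cdot
          u_i\bigl(x^{NB}\bigr)
  \;=\; \Omega\!\left(\frac{u_i(x^{NB})}{\rho(n)}\right).
```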
The randomized partial improvement mechanism on $n$ unit-demand agents and $n$ unit-supply items guarantees a per-agent approximation of the Nash bargaining solution with uniform outside option with approximation factor $O(n^\epsilon)$ for any constant $\epsilon > 0$.
6 Approximate Utility Monotonicity of the Nash Bargaining Solution
A factor significantly complicating the analysis of the approximation of the randomized partial improvement mechanism is the fact that the benchmark is computed based on the NSW maximizing solution when all agents are present, while the mechanism’s performance is directly related only to the solution for the sampled agents $S$. The NSW maximizing solutions for $N$ and $S$ can generally be quite different. Moreover, as it turns out, there are instances where the utilities of some agents in the NSW maximizing solution are non-monotone with respect to the removal of other agents, i.e., instances that exhibit positive externalities between agents. Table 1 gives a simple example of such an instance, and the remainder of this section develops upper and lower bounds on the worst-case non-monotonicity of utility.
Table 1: (i) agent valuations; (ii) initial solution; (iii) final solution.
A matching environment on agents $N$ is $\rho$-utility monotone if for any subset $S$ of $N$ and any $i \in S$, the utility of $i$ in the NSW maximizing assignment, $x^S$, for $S$ is at least a $1/\rho$ approximation to her utility in the NSW maximizing assignment, $x^N$, for $N$:
$$u_i(x^S) \ge \frac{u_i(x^N)}{\rho}.$$
This parameter quantifies the extent to which some agent may be worse off in the NSW solution after the removal of some subset of agents. We let $\rho(n)$ denote the worst-case value of $\rho$ across instances; this value is known as the opportunity structure of the Nash bargaining solution for this class of instances (Thomson and Lensberg, 1989). In Section 6.1 we prove an upper bound, which is $O(n^\epsilon)$ for any constant $\epsilon > 0$, for the value of $\rho(n)$ over all one-sided matching instances, and in Section 6.2 we complement this result by proving a lower bound of $\Omega(\log n)$ for this value.
6.1 Upper Bound on Population Non-monotonicity
Given a valuation matrix $v$ and a random matching $x$, we henceforth use $u_i(x)$ to denote the expected utility of agent $i$ for $x$ given $v$, i.e., $u_i(x) = \sum_j v_{ij} x_{ij}$ (similarly, we use $u'_i(x)$ for valuation matrix $v'$). In order to prove the upper bound on $\rho(n)$, we first prove the following very useful lemmata.
Let $x$ be a NSW maximizing solution, and let $v$ be the valuations normalized so that for every agent $i$, $u_i(x) = 1$. Then, if some agent $i$ is allocated an item $j$ with positive probability, i.e., $x_{ij} > 0$, every other agent $k$ must have $v_{kj} \le v_{ij}$. Equivalently, $v_{ij} = \max_k v_{kj}$.
For contradiction, assume that there exists some agent $k$ with $v_{kj} > v_{ij}$ for some item $j$ with $x_{ij} > 0$. Since the expected utility of agent $k$ is 1, there must exist some item $j'$ with $x_{kj'} > 0$ and $v_{kj'} \le 1$ (otherwise her expected utility would be greater than 1). Let $x'$ be the probability distribution which is identical to $x$, except that $x'_{ij} = x_{ij} - \epsilon$, $x'_{kj} = x_{kj} + \epsilon$, $x'_{ij'} = x_{ij'} + \epsilon$, and $x'_{kj'} = x_{kj'} - \epsilon$, for some positive $\epsilon$ whose exact value we will choose later on. In other words, $x'$ swaps probability $\epsilon$ between agents $i, k$ and items $j, j'$. The new expected utility of agent $i$ is $u_i(x') = 1 - \epsilon\,(v_{ij} - v_{ij'})$,
and the new expected utility of agent $k$ is $u_k(x') = 1 + \epsilon\,(v_{kj} - v_{kj'})$.
Since every other agent's expected utility is the same in $x$ and $x'$ (equal to 1), the NSW of $x'$ is the product $u_i(x') \cdot u_k(x')$.
Therefore, if we let $\epsilon$ be a sufficiently small positive value, the NSW of $x'$ is greater than 1, which is the NSW of $x$, thus contradicting the fact that $x$ is a NSW maximizing solution. ∎
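The perturbation used in this proof can be checked numerically. The sketch below is our own illustration with made-up numbers, not the paper's code: it starts from a suboptimal doubly stochastic matrix and applies exactly this kind of $\epsilon$-swap of probability between two agents and two items, confirming that the product of utilities strictly increases.

```python
# Two agents, two items; agent i prefers item i (hypothetical values).
V = [[2.0, 1.0], [1.0, 2.0]]
x = [[0.5, 0.5], [0.5, 0.5]]      # feasible but not NSW maximizing

def utilities(V, x):
    return [sum(V[i][j] * x[i][j] for j in range(2)) for i in range(2)]

def product(us):
    p = 1.0
    for u in us:
        p *= u
    return p

eps = 0.1
# Swap probability eps between agents 0, 1 and items 0, 1:
# agent 0 gets eps more of item 0 and eps less of item 1, and vice versa.
y = [[x[0][0] + eps, x[0][1] - eps],
     [x[1][0] - eps, x[1][1] + eps]]

before = product(utilities(V, x))  # 1.5 * 1.5 = 2.25
after = product(utilities(V, y))   # 1.6 * 1.6 = 2.56
```

Note that `y` is still doubly stochastic, so the swap stays within the feasible region while raising the Nash product.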
For a given problem instance, let $x$ and $y$ be the NSW maximizing outcomes before and after (respectively) some subset of the agents has been removed. If, among the remaining agents, there exists a set $T$ of agents each of whose utility in $y$ is smaller than her utility in $x$ by at least some given factor, then there also exists a set $T'$ of remaining agents that is larger than $T$ by a constant factor and such that every agent in $T'$ suffers a correspondingly smaller, but still comparable, utility drop.
Without loss of generality, let $v$ be the agent valuations normalized so that $u_i(x) = 1$ for every agent $i$, and $v'$ be the valuations normalized so that $u'_i(y) = 1$ for every remaining agent $i$. Given the values $v$, we can get the values $v'$ using the simple formula $v'_{ij} = v_{ij}/u_i(y)$. In other words, for each agent $i$ who is worse off in $y$ compared to $x$, i.e., $u_i(y) < u_i(x) = 1$, we scale all of that agent's item values up by the same factor, $1/u_i(y)$. In particular, for each agent $i \in T$ this means that $v'_{ij}$ exceeds $v_{ij}$ by at least the factor of that agent's utility drop, for every item $j$.
For every $i \in T$ we know that the drop in that agent's value with respect to the original valuations is $1 - u_i(y)$. In order to account for that drop, we partition the set of items of which $i$ is allocated more in $x$ compared to $y$ into two sets, a low-value set $L_i$ and a high-value set $H_i$, depending on whether or not the value of the item to $i$ exceeds an appropriate threshold. We first show that only a bounded portion of the aforementioned drop in value could be due to the items in $L_i$.
Therefore, at least a constant fraction of this drop in value for each agent $i \in T$ is due to items in $H_i$. Summing this up over all the agents in $T$, we get an aggregate lower bound on the value, with respect to $v$, of the item fractions in the sets $H_i$ that the agents in $T$ lost.
Let $T'$ be the set of agents that are allocated, with positive probability in $y$, an item from $H_i$ for some $i \in T$. Using Lemma 12 we get that, for every item $j \in H_i$, if $y_{kj} > 0$ for some agent $k$, then $v'_{kj} \ge v'_{ij}$ and, since $x_{ij} > 0$, also $v_{kj} \le v_{ij}$. Using the fact that $v'_{ij}$ exceeds $v_{ij}$ by at least the drop factor of agent $i$, shown above, the latter inequalities also imply that $v'_{kj}$ exceeds $v_{kj}$ by a comparable factor. Since $v'_{kj} = v_{kj}/u_k(y)$, this implies that for every $k \in T'$ the utility $u_k(y)$ is bounded away from $u_k(x)$, which equals 1 according to our normalization; that is, every agent in $T'$ also suffers a utility drop.
Since we have shown that all agents in $T'$ suffer the required utility drop, it now suffices to show that the size of $T'$ is large enough relative to that of $T$. Since, for any item $j \in H_i$, any agent $k$ with $y_{kj} > 0$ values that item comparably to agent $i$, the total value, with respect to valuations $v$, generated by the item fractions of the items “lost” by the agents in $T$ is at least proportional to the aggregate bound established above. But, since the total value of each agent in $T'$ with respect to valuations $v$ is exactly 1, there need to be sufficiently many agents in $T'$ sharing this value, as otherwise there would exist some agent $k \in T'$ whose value would exceed 1. ∎
For any problem instance, the value of the utility monotonicity parameter satisfies the claimed upper bound for any constant $\epsilon > 0$.
In order to prove this bound, we will repeatedly apply the result of Lemma 13. Let $x$ and $y$ be the NSW maximizing outcomes in a problem instance before and after some subset of the agents has been removed and, without loss of generality, let $v$ be the agent valuations normalized so that $u_i(x) = 1$ for every agent $i$, and $v'$ be the valuations normalized so that $u'_i(y) = 1$ for every remaining agent $i$.
By definition of our approximation measure, in an instance with a given approximation factor there exists at least one agent whose utility in $y$ is smaller than her utility in $x$ by that factor. Lemma 13 then implies that there also exists a larger set of agents, each of whom suffers a correspondingly smaller utility drop. Lemma 13, combined with the existence of this set, in turn implies the existence of an even larger group of agents, each of whom also suffers a utility drop. Applying Lemma 13 a total of $k$ times thus implies the existence of a set of agents, of size growing geometrically in $k$, such that each such agent suffers a utility drop. Assume, for contradiction, that there exists some instance whose approximation factor exceeds the claimed bound. For an appropriate choice of $k$, this would imply that all the agents have a value less than 1 in $y$, which contradicts the fact that $y$ is a NSW maximizing solution, because the restriction of $x$ to the remaining agents is a feasible outcome whose product of utilities is equal to 1. ∎
6.2 Lower Bound on Population Non-monotonicity
We conclude with a lower bound showing that, for a very carefully designed (and somewhat artificial) family of instances, the utility monotonicity parameter is not constant and can grow with the number of agents.
There exists a family of problem instances for which the utility monotonicity parameter grows with the number of agents.
Due to space limitations and the complexity of the construction that yields Theorem 15, we defer its description to Appendix A. To exhibit how we use the KKT conditions to prove that this elaborate construction implies the desired bound, we spend the rest of this section applying this approach to the much simpler construction of the example in Table 1, which yields a weaker, constant bound.
Our lower bound construction in the appendix proceeds by building a family of instances, parameterized by the number of agents, and in each instance we define an “initial” setting, in which all agents are present, and a “final” setting, in which some agents have been removed. For each setting, we identify the Nash bargaining solution, called the initial and the final solution, respectively. We focus on a particular agent, called the loser, who is present in both settings. We show that the loser's utility drops by a multiplicative factor in going from the initial to the final solution and, consequently, the utility monotonicity parameter is at least that factor, for that market and overall.
To prove a lower bound, we need to be able to verify that a given doubly stochastic matrix is indeed the Nash bargaining solution of the instance at hand. We do so using the KKT conditions, which allow us to interpret these solutions as a form of market equilibrium. The optimization problem which yields the Nash bargaining solution in one-sided matching markets is shown below:
$$\max \ \sum_i \log\Big(\sum_j v_{ij} x_{ij}\Big) \quad \text{s.t.} \quad \sum_j x_{ij} \le 1 \ \ \forall i, \qquad \sum_i x_{ij} \le 1 \ \ \forall j, \qquad x_{ij} \ge 0 \ \ \forall i, j.$$
If $p_j$ is the dual variable related to each item $j$ and $q_i$ is the dual variable related to each agent $i$ in the above program, then the KKT conditions state that: (1) for every agent $i$ and item $j$, $v_{ij}/u_i(x) \le p_j + q_i$, with equality whenever $x_{ij} > 0$; (2) $x$ is feasible for the program above; (3) $p_j > 0$ implies $\sum_i x_{ij} = 1$; and (4) $q_i > 0$ implies $\sum_j x_{ij} = 1$.
The KKT conditions are necessary and sufficient for the optimal solution when the constraints are linear and the objective is concave, as is the case here. To check whether a given candidate solution $x$ is a Nash bargaining solution for some instance, we first normalize the valuations so that $u_i(x) = 1$ for all $i$. Then, at a solution satisfying the KKT conditions we have $v_{ij} = p_j + q_i$ if $x_{ij} > 0$ and $v_{ij} \le p_j + q_i$ if $x_{ij} = 0$. Thus a solution that satisfies these two conditions plus conditions (2)–(3) is a Nash bargaining solution. For the market equilibrium interpretation of a solution, we can think of the values of $p_j$ and $q_i$ as item-specific and agent-specific prices, respectively. To illustrate the usefulness of these variables, which are used extensively in the appendix, we revisit the instance of Table 1, with its three items and three bidders, whose unscaled valuations appear in Table 2(i).
First, we observe that in the initial equilibrium (with all agents present), each of the three bidders receives a distinct item with probability 1. In Table 2(ii) we show the normalized values of the agents in this equilibrium, and we also provide the dual variables $p_j$ for each item $j$ and $q_i$ for each agent $i$. It is easy to verify that the aforementioned KKT conditions are satisfied in this case, and hence this is indeed the Nash bargaining solution when all agents are present. If one of the agents is removed, then in the final equilibrium each of the two remaining bidders splits her probability between two items. Table 2(iii) provides the scaled valuations and dual variable values for this outcome, and it is again easy to verify that the KKT conditions are satisfied. In this example, one of the remaining bidders is the loser: using the valuations from Table 2(i), her value in the initial equilibrium was 1 and it dropped by a constant factor in the final equilibrium, yielding the claimed constant lower bound for this example.
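The verification carried out above for Table 2 can be mechanized. The following sketch is our own rendering of the checks (using the stationarity form $v_{ij} \le p_j + q_i$ described above, and a hypothetical 2×2 instance in place of the table's values): it normalizes the valuations so that every agent's utility is 1 and then tests the KKT conditions for a candidate solution and candidate duals.

```python
def is_nsw_solution(V, x, p, q, tol=1e-9):
    """Check the KKT conditions for a candidate NSW solution x
    with item prices p and agent prices q."""
    n, m = len(V), len(V[0])
    # Normalize so that every agent's utility under x equals 1.
    u = [sum(V[i][j] * x[i][j] for j in range(m)) for i in range(n)]
    W = [[V[i][j] / u[i] for j in range(m)] for i in range(n)]
    for i in range(n):
        for j in range(m):
            if x[i][j] > tol:  # stationarity holds with equality on the support
                if abs(W[i][j] - (p[j] + q[i])) > tol:
                    return False
            elif W[i][j] > p[j] + q[i] + tol:  # inequality off the support
                return False
    # Complementary slackness: a positively priced item is fully allocated.
    for j in range(m):
        if p[j] > tol and abs(sum(x[i][j] for i in range(n)) - 1) > tol:
            return False
    return True

V = [[2.0, 1.0], [1.0, 2.0]]      # hypothetical valuations
good = [[1.0, 0.0], [0.0, 1.0]]   # each agent takes her favorite item
bad = [[0.0, 1.0], [1.0, 0.0]]    # each agent takes her least favorite
ok = is_nsw_solution(V, good, p=[1.0, 1.0], q=[0.0, 0.0])
not_ok = is_nsw_solution(V, bad, p=[1.0, 1.0], q=[0.0, 0.0])
```

For brevity the sketch omits the analogous slackness check on the agent prices $q_i$, which is vacuous here since both agents are fully matched and $q = 0$.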
Table 2: (i) agent valuations, (ii) initial solution, (iii) final solution.
7 Further Related Work
Hylland and Zeckhauser (1979) study the problem of matching with cardinal preferences and the solution of competitive equilibrium from equal incomes (CEEI). CEEI gives both a natural cardinal notion of efficiency and of fairness. Recently, Alaei et al. (2017) give a polynomial time algorithm for computing the CEEI in matching markets when there are a constant number of distinct agent preferences. To our knowledge, the complexity of computing CEEI in general matching problems is unknown. With linear preferences, but without the unit-demand constraint, CEEI and Nash social welfare coincide and can be computed in polynomial time. Devanur and Kannan (2008) generalize this computational result to piecewise linear concave utilities when the number of goods (or alternatively the number of agents) is constant.
Recently, Budish (2011) considers the generalization from matching to a combinatorial assignment problem where agents may have non-linear preferences over bundles of goods, and shows that an approximate version of CEEI exists. This work also shows that, in large markets, the mechanism that outputs this approximate CEEI is asymptotically truthful. Heuristics for computing the CEEI outcome are given by Othman et al. (2010), and these heuristics have been deployed for the course assignment problem by Budish et al. (2016). On the other hand, Othman et al. (2016) show that the computation of CEEI in these combinatorial assignment problems is PPAD-hard.
The Nash social welfare objective of our work relates to the competitive equilibrium from equal incomes of the aforementioned works as follows: the two objectives coincide for linear preferences without the matching constraint (Vazirani, 2007), but with the matching constraint the concepts are not equivalent. Both NSW and CEEI outcomes are Pareto efficient and envy-free; to our knowledge, in matching markets, the agents' utilities under the two criteria have not been directly compared. In contrast with CEEI, for stochastic matchings the NSW outcome can be calculated by a convex program, i.e., a program that optimizes the product of utilities over the marginal probabilities given by a doubly-stochastic matrix, and is therefore computationally tractable.
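Since the NSW outcome solves a convex program, even a generic first-order method recovers it for small instances. The sketch below is illustrative only; the multiplicative gradient step and the Sinkhorn-style projection are our choices, not a method from the works cited above.

```python
import math

def nsw_matching(V, iters=300, eta=0.2):
    """Heuristic mirror ascent for max sum_i log(sum_j V[i][j] * x[i][j])
    over (approximately) doubly stochastic x, for a square instance."""
    n = len(V)
    x = [[1.0 / n] * n for _ in range(n)]
    for _ in range(iters):
        u = [sum(V[i][j] * x[i][j] for j in range(n)) for i in range(n)]
        # The gradient of the objective in x_ij is V[i][j] / u_i;
        # take a multiplicative (exponentiated) step in that direction.
        x = [[x[i][j] * math.exp(eta * V[i][j] / u[i]) for j in range(n)]
             for i in range(n)]
        for _ in range(20):  # Sinkhorn-style renormalization of rows/columns
            for i in range(n):
                s = sum(x[i])
                x[i] = [v / s for v in x[i]]
            for j in range(n):
                s = sum(x[i][j] for i in range(n))
                for i in range(n):
                    x[i][j] /= s
    return x

V = [[2.0, 1.0], [1.0, 2.0]]      # hypothetical valuations
x = nsw_matching(V)               # converges toward the identity matching
u = [sum(V[i][j] * x[i][j] for j in range(2)) for i in range(2)]
```

On this symmetric instance the iterates concentrate on the diagonal, where the Nash product is maximized; a production implementation would instead hand the convex program to a dedicated solver.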
A second line of literature considers ordinal mechanisms for one-sided matching. The random serial dictatorship (RSD) mechanism has a long history of practical application. Recently it has been used in applications such as housing and course allocation. Pathak and Sethuraman (2011) study the use of RSD for school choice in New York City. RSD is truthful, ex post Pareto efficient, and easy to implement (e.g., Abdulkadiroglu and Sonmez, 1998). On the other hand, RSD is neither ex ante Pareto efficient nor envy-free. To remedy this deficiency of RSD, Bogomolnaia and Moulin (2001) developed the probabilistic serial (PS) mechanism which, while not truthful, is ordinally efficient, envy-free, and easy to implement. PS has been studied in various contexts ranging from school assignments to kidney matching and it is often contrasted with RSD. For example, Pathak and Sethuraman (2011) show that students often obtain a more desirable random assignment from PS than from RSD. Nonetheless, under a large market assumption PS and RSD converge and the desirable properties of both are attained (Kojima and Manea, 2010; Che and Kojima, 2010).
Several recent papers have considered approximation in one-sided matching markets without money when agents have cardinal preferences. With cardinal preferences, it is possible to consider the aggregate welfare of an allocation as the sum of the expected utilities of the agents. For an aggregate notion of welfare to make sense, the values of the agents need to be normalized. Two common normalizations are unit-sum, which scales each agent's values so that their sum is one, and unit-range, which scales and shifts each agent's values so that the minimum value is zero and the maximum value is one. Under either of these normalizations, Filos-Ratsikas et al. (2014) show that random serial dictatorship achieves a nontrivial approximation and that no algorithm for mapping ordinal preferences to allocations is asymptotically better. Christodoulou et al. (2016) consider the unit-sum normalization and bound the price of anarchy of PS, showing that no mechanism, ordinal or cardinal, is asymptotically better. Important comparisons of the above results to ours are as follows: our guarantees do not require a normalization of values, and our approximation guarantees are on per-agent utilities rather than on the aggregate welfare, which allows some agents to be harmed if other agents benefit. We show that our random partial improvement mechanism is asymptotically better than RSD in our per-agent analysis framework.
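For concreteness, the two normalizations can be written as the following small helpers (our own illustrative code; the function names are not from the literature cited above).

```python
def unit_sum(values):
    """Scale values so that they sum to one."""
    s = sum(values)
    return [v / s for v in values]

def unit_range(values):
    """Affinely rescale values so the minimum is zero and the maximum is one."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

a = unit_sum([2.0, 1.0, 1.0])     # [0.5, 0.25, 0.25]
b = unit_range([3.0, 1.0, 2.0])   # [1.0, 0.0, 0.5]
```

Note that unit-sum is scale-invariant only, while unit-range is invariant to both scaling and shifting of an agent's values.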
More recently, Immorlica et al. (2017) introduce a novel cardinal notion of approximate Pareto efficiency and use this notion to analyze the raffles mechanism in one-sided matching markets. In this mechanism, agents express strength of preferences for items using tickets they can allocate to items and items are allocated proportionally to the tickets. The main result of their analysis is to show that this mechanism is -approximately Pareto efficient in equilibrium, i.e., that there is no outcome where each agent’s utility is increased by more than an factor. Like the welfare objective, (approximate) Pareto efficiency allows harming agents if it benefits other agents, while our benchmark requires good per-agent utility and enforces approximate fairness as given by the Nash social welfare.
Our mechanism is based on the partial allocation (PA) mechanism of Cole et al. (2013) that truthfully and approximately solves the fair division of heterogeneous goods. A novel feature of the PA mechanism is that a fraction of the fair allocation is withheld from individual agents in a way that acts, in the agents' utilities, like payments that align the incentives of the agents with the Nash social welfare objective. The fair division problem is closely tied to the cake-cutting literature, which originated in the social sciences but has garnered interest from computer scientists and mathematicians alike (Brams and Taylor, 1996; Moulin, 2003; Robertson and Webb, 1998; Young, 1995). The cake, a heterogeneous divisible good, is represented by the interval $[0,1]$, and the agents have valuation functions assigning a non-negative value to each subinterval; these valuations are also assumed to be additive. Algorithmic challenges in cake cutting have recently attracted the attention of computer scientists; a historical overview as well as notable results in cake cutting can be found in surveys by Procaccia (2013) and Procaccia and Moulin (2016). While the setting considered by Cole et al. (2013) is more general than the cake-cutting problem, the cardinal matching problem we consider is closely related to cake cutting with piecewise uniform valuations, since our agents have linear preferences over items.
Random sampling techniques are now common in the literature on mechanism design. They have been primarily developed for revenue maximization problems where the seller lacks prior information on the agents’ preferences (Hartline and Karlin, 2007). Our use of random sampling more closely resembles the literature on redistribution mechanisms, where the designer aims to maximize the consumer surplus and monetary transfers between agents are allowed (Cavallo, 2006; Guo and Conitzer, 2007). An approach by Moulin (2009) is to single out a random agent as the residual claimant, run an efficient mechanism on the remaining agents, and pay the revenue generated by the mechanism to the residual claimant. Similarly, our mechanism randomly partitions the agents into two groups and attempts to implement the PA mechanism on the first group while using the items that would be reserved for the second group to implement the first group’s outside option. Further connections between our approach and redistribution mechanisms may be possible.
8 Conclusion and Future Work
We defined the random partial improvement (RPI) mechanism for one-sided matching markets without monetary transfers. RPI both truthfully elicits the cardinal preferences of the agents and outputs a distribution over matchings that approximates every agent’s utility in the Nash bargaining solution.
Our analysis suggests several open questions and directions for future work. For matching markets, how do the utilities of agents compare between competitive equilibrium from equal incomes and the optimal outcome of the Nash bargaining solution? (Recall that CEEI and the Nash bargaining solution are equivalent in linear markets without the matching constraint; Vazirani, 2007.) Can the approximation bound of RPI for utility-monotone markets be improved upon? Specifically, is it possible to get a good bound on the utility of agents that are not sampled in the first iteration of the RPI mechanism? Can we get tight upper and lower bounds for the value of the utility monotonicity parameter in the class of one-sided matching instances, possibly implying an improved approximation bound for RPI? Alternatively, the analysis of the approximation factor of RPI could be improved via tighter bounds on population monotonicity on average, when the removed agents are drawn uniformly at random.
- Abdulkadiroglu and Sonmez (1998) Abdulkadiroglu, A. and Sonmez, T. (1998). Random serial dictatorship and the core from random endowments in house allocation problems. Econometrica, 66:689–701.
- Alaei et al. (2017) Alaei, S., Jalaly Khalilabadi, P., and Tardos, E. (2017). Computing equilibrium in matching markets. In Proceedings of the 2017 ACM Conference on Economics and Computation, pages 245–261. ACM.
- Bogomolnaia and Moulin (2001) Bogomolnaia, A. and Moulin, H. (2001). A new solution to the random assignment problem. Journal of Economic Theory, 100(2):295–328.
- Brams and Taylor (1996) Brams, S. and Taylor, A. (1996). Fair Division: From Cake Cutting to Dispute Resolution.
- Brânzei et al. (2017) Brânzei, S., Gkatzelis, V., and Mehta, R. (2017). Nash social welfare approximation for strategic agents. In Proceedings of the 2017 ACM Conference on Economics and Computation, EC ’17, Cambridge, MA, USA, June 26-30, 2017, pages 611–628.
- Budish (2011) Budish, E. (2011). The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes. Journal of Political Economy, 119(6):1061–1103.
- Budish et al. (2016) Budish, E., Cachon, G. P., Kessler, J. B., and Othman, A. (2016). Course match: A large-scale implementation of approximate competitive equilibrium from equal incomes for combinatorial allocation. Operations Research, 65(2):314–336.
- Caragiannis et al. (2016) Caragiannis, I., Kurokawa, D., Moulin, H., Procaccia, A. D., Shah, N., and Wang, J. (2016). The unreasonable fairness of maximum nash welfare. In Proceedings of the 2016 ACM Conference on Economics and Computation, pages 305–322.
- Cavallo (2006) Cavallo, R. (2006). Optimal decision-making with minimal waste: Strategyproof redistribution of VCG payments. In Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, pages 882–889. ACM.
- Che and Kojima (2010) Che, Y.-K. and Kojima, F. (2010). Asymptotic equivalence of probabilistic serial and random priority mechanisms. Econometrica, 78(5):1625–1672.
- Christodoulou et al. (2016) Christodoulou, G., Filos-Ratsikas, A., Frederiksen, S. K. S., Goldberg, P. W., Zhang, J., and Zhang, J. (2016). Social welfare in one-sided matching mechanisms. In International conference on autonomous agents and multiagent systems, pages 30–50. Springer.
- Cole and Gkatzelis (2018) Cole, R. and Gkatzelis, V. (2018). Approximating the Nash social welfare with indivisible items. SIAM J. Comput., 47(3):1211–1236.
- Cole et al. (2013) Cole, R., Gkatzelis, V., and Goel, G. (2013). Mechanism design for fair division: Allocating divisible items without payments. In Proceedings of the Fourteenth ACM Conference on Electronic Commerce, EC ’13, pages 251–268, New York, NY, USA. ACM.
- Devanur and Kannan (2008) Devanur, N. R. and Kannan, R. (2008). Market equilibria in polynomial time for fixed number of goods or agents. In Foundations of Computer Science, 2008. FOCS’08. IEEE 49th Annual IEEE Symposium on, pages 45–53. IEEE.
- Filos-Ratsikas et al. (2014) Filos-Ratsikas, A., Frederiksen, S. K. S., and Zhang, J. (2014). Social welfare in one-sided matchings: Random priority and beyond. In International Symposium on Algorithmic Game Theory, pages 1–12. Springer.
- Goldberg et al. (2006) Goldberg, A. V., Hartline, J. D., Karlin, A. R., Saks, M., and Wright, A. (2006). Competitive auctions. Games and Economic Behavior, 55(2):242–269.
- Guo and Conitzer (2007) Guo, M. and Conitzer, V. (2007). Worst-case optimal redistribution of VCG payments. In Proceedings of the 8th ACM conference on Electronic commerce, pages 30–39. ACM.
- Hartline and Karlin (2007) Hartline, J. D. and Karlin, A. R. (2007). Profit maximization in mechanism design. In Nisan, N., Roughgarden, T., Tardos, E., and Vazirani, V. V., editors, Algorithmic game theory, chapter 13. Cambridge University Press Cambridge.
- Hylland and Zeckhauser (1979) Hylland, A. and Zeckhauser, R. (1979). The efficient allocation of individuals to positions. Journal of Political economy, 87(2):293–314.
- Immorlica et al. (2017) Immorlica, N., Lucier, B., Weyl, G., and Mollner, J. (2017). Approximate efficiency in matching markets. In International Conference on Web and Internet Economics, pages 252–265. Springer.
- Kojima and Manea (2010) Kojima, F. and Manea, M. (2010). Incentives in the probabilistic serial mechanism. Journal of Economic Theory, 145(1):106–123.
- Moulin (2003) Moulin, H. (2003). Fair Division and Collective Welfare.
- Moulin (2009) Moulin, H. (2009). Almost budget-balanced VCG mechanisms to assign multiple objects. Journal of Economic theory, 144(1):96–119.
- Nash (1950) Nash, J. (1950). The bargaining problem. Econometrica, 18(2):155–162.
- Othman et al. (2016) Othman, A., Papadimitriou, C., and Rubinstein, A. (2016). The complexity of fairness through equilibrium. ACM Transactions on Economics and Computation (TEAC), 4(4):20.
- Othman et al. (2010) Othman, A., Sandholm, T., and Budish, E. (2010). Finding approximate competitive equilibria: Efficient and fair course allocation. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, pages 873–880. International Foundation for Autonomous Agents and Multiagent Systems.
- Pathak and Sethuraman (2011) Pathak, P. A. and Sethuraman, J. (2011). Lotteries in student assignment: An equivalence result. Theoretical Economics, 6(1):1–17.
- Procaccia (2013) Procaccia, A. D. (2013). Cake cutting: Not just child’s play. Commun. ACM, 56(7):78–87.
- Procaccia and Moulin (2016) Procaccia, A. D. and Moulin, H. (2016). Cake Cutting Algorithms, page 311–330. Cambridge University Press.
- Robertson and Webb (1998) Robertson, J. and Webb, W. (1998). Cake Cutting Algorithms: Be Fair If You Can.
- Thomson (1983) Thomson, W. (1983). The Fair Division of a Fixed Supply Among a Growing Population. Mathematics of Operations Research, 8(3):319–326.
- Thomson and Lensberg (1989) Thomson, W. and Lensberg, T. (1989). Axiomatic Theory of Bargaining with a Variable Number of Agents. Cambridge University Press.
- Vazirani (2007) Vazirani, V. V. (2007). Combinatorial algorithms for market equilibrium. In Nisan, N., Roughgarden, T., Tardos, E., and Vazirani, V. V., editors, Algorithmic game theory, chapter 5, pages 103–134. Cambridge University Press Cambridge.
- Young (1995) Young, H. P. (1995). Equity: In Theory and Practice.
- Zhou (1990) Zhou, L. (1990). On a conjecture by Gale about one-sided matching problems. Journal of Economic Theory, 52(1):123–135.
Appendix A The Lower Bound Construction
The construction uses a collection of overlapping submarkets named . Associated with these markets are integer parameters , for . There will be copies of in the construction. Note that there is exactly one copy of .
Next, in Figure 1 and Table 3, we show the form of the submarkets. The nodes in this figure correspond to the items, and a directed edge, labeled by the name of an agent, indicates that this agent was allocated portions of the edge's source item in the initial solution and portions of its target item in the final solution. Items and bidders may now occur with multiplicity greater than 1; we call this multiplicity their size. In the initial equilibrium, every item is fully allocated, as the total size of the bidders and the total size of the items are the same; in the final equilibrium, one designated item is the only item that is not fully allocated. Note that in each equilibrium, the conditions (2)–(4) from Section 6.2 are satisfied.
Table 3: the initial equilibrium; the final equilibrium.