One-sided matching mechanisms have been extensively adopted in many resource allocation settings, such as assigning dormitory rooms or offices to students, students to public schools, college courses to students, organs and medical resources to patients, and members to subcommittees [roth2004kidney; ashlagi2013mix; budish2012multi; manlove2013algorithmics]. Two prominent (randomized) matching mechanisms that elicit only ordinal preferences from agents are Random Serial Dictatorship (RSD) [abdulkadirouglu1998random] and the Probabilistic Serial rule (PS) [bogomolnaia2001new]. Both mechanisms have important economic properties and are practical to implement. RSD has strong truthful incentives but guarantees neither efficiency nor envyfreeness. PS satisfies efficiency and envyfreeness; however, it is susceptible to manipulation. Therefore, there are subtle points to consider when deciding which mechanism to use. For example, given a particular preference profile, the mechanisms often produce random assignments that are simply incomparable, and thus, without additional knowledge of the agents' underlying utility models, it is difficult to determine which is the “better” outcome. Furthermore, properties such as efficiency, truthfulness, and envyfreeness can depend on whether there is underlying structure in the preferences, and even in general preference models it is valuable to understand under what conditions a mechanism is likely to be efficient, truthful, or envyfree, as this can guide a designer's choices.
In this paper, we study the comparability of PS and RSD when there is only one copy of each object, and analyze the space of all preference profiles for different numbers of agents and objects. Working in the space of general ordinal preferences, we provide empirical results on the (in)comparability of RSD and PS and analyze their respective economic properties. We show that despite the inefficiency of RSD, the fraction of random assignments at which PS stochastically dominates RSD vanishes, especially when the number of agents is less than or equal to the number of available objects. We also investigate the manipulability of PS and show that PS is manipulable at almost 99% of profiles for any combination of agents and objects, and that the fraction of strongly manipulable profiles goes to one as the ratio of objects to agents increases. We show similar trends under lexicographic preferences, and further present results on envy under RSD. Our results show that although the fraction of envious agents grows with the number of agents, there is a sudden drop in the fraction of envious agents when there are equal numbers of agents and objects.
In Section 5, we instantiate utility functions for the agents to gain deeper insight into the manipulability, social welfare, and envyfreeness of the two mechanisms under different risk attitudes. Our main result is that under risk aversion, the social welfare of RSD is as good as that of PS, but RSD does create envy among the agents (though the fraction of envious profiles and the total envy are small). Moreover, when the numbers of agents and objects are equal, RSD assignments are less likely to be dominated by PS, and overall RSD assignments create negligible envy among agents. We also show that PS is highly susceptible to manipulation in almost all combinations of agents and objects. The fraction of manipulable profiles and the gain from manipulation increase rapidly, particularly as agents become more risk averse. In Section 7, we consider two statistical preference distribution models, namely Mallows models and Polya-Eggenberger urn models, and show that the same patterns and trends hold for various combinations of agents and objects when varying risk parameters and utility functions.
In this section, we describe the basic one-sided matching problem and introduce the two mechanisms we study in detail, Random Serial Dictatorship (RSD) [abdulkadirouglu1998random] and the Probabilistic Serial rule (PS) [bogomolnaia2001new]. We then introduce a number of properties and criteria used to evaluate these mechanisms.
A one-sided matching problem consists of a set of agents, $N$, and a set of indivisible objects, $M$, with $|N| = n$ and $|M| = m$. (This problem is sometimes called the assignment problem or house allocation problem in the literature.) Each agent $i$ has a private strict preference ordering, $\succ_i$, over $M$, where $a \succ_i b$ indicates that agent $i$ prefers to receive object $a$ over object $b$. We represent the preference ordering of agent $i$ by the ordered list of objects $\succ_i = (a, b, c)$ or $abc$, for short. We let $\Pi$ denote the set of all complete and strict preference orderings over $M$. A preference profile $\succ = (\succ_1, \ldots, \succ_n)$ specifies a preference ordering for each agent, and we use the standard notation $\succ_{-i}$ to denote the preference orderings of all agents except $i$, and thus $\succ = (\succ_i, \succ_{-i})$.
The goal in a one-sided matching problem is to assign the objects in $M$ to the agents in $N$ according to the preference profile, under the constraint that no object can be assigned to more than one agent. If $n = m$ then each agent will receive exactly one object; if $n > m$ then some agents will receive no object; and if $n < m$ then some agents may receive multiple objects. An assignment is represented as a matrix $p = [p_{i,o}]_{n \times m}$, where $p_{i,o}$ is the probability that agent $i$ is assigned object $o$. We let $\mathcal{P}$ denote the set of all feasible assignments, where an assignment is feasible if and only if $p_{i,o} \in [0,1]$ for all $i, o$, and $\sum_{i \in N} p_{i,o} \le 1$ for each object $o$. If $p$ is such that $p_{i,o} \in \{0, 1\}$ for all $i, o$, then we say that $p$ is a deterministic assignment; otherwise, $p$ is a random assignment. Every random assignment can be represented as a convex combination of deterministic assignments [von1953certain], and thus we view random assignments as probability distributions over sets of deterministic assignments.
2.1 Matching Mechanisms
In general, a matching mechanism, $f$, is a mapping from the set of preference profiles, $\Pi^n$, to the set of feasible assignments, $\mathcal{P}$. That is, $f : \Pi^n \rightarrow \mathcal{P}$. We focus our attention on two widely studied mechanisms for one-sided matching: Random Serial Dictatorship (RSD) [abdulkadirouglu1998random] and the Probabilistic Serial rule (PS) [bogomolnaia2012probabilistic].
RSD relies on the concept of priority orderings over agents. Such an ordering is an ordered list of agents in which the first agent selects its most preferred object from the set of objects, the second agent then selects its most preferred object from the set of remaining objects, and so on until no objects remain. (When $m > n$ and agents can receive more than one object, RSD requires a careful method for the picking sequence at each priority ordering to ensure strategyproofness. This picking sequence should be based on an arbitrary serial dictatorship quota mechanism, which directly affects the efficiency and envy of the assignments [hosseini2015strategyproof; bouveret2014manipulating]. For simplicity, we use the variant of RSD based on a quasi-dictatorial mechanism [papai2000strategyproof] in which the first agent selects its $m - n + 1$ most preferred objects, and the remaining agents choose one object each.) Given a preference profile $\succ$, RSD returns an assignment
$RSD(\succ)$, which is the uniform mixture over the deterministic assignments induced by all possible priority orderings of the agents. RSD has been widely adopted for fair and strategyproof assignment in school choice, course assignment, house allocation, and room assignment [abdulkadiroglu2009strategy; sonmez2010course; abdulkadirouglu1998random; abdulkadirouglu1999house].
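The uniform mixture over priority orderings can be computed directly, if expensively, by enumerating all $n!$ orderings. The sketch below (our own illustration; the function name `rsd` and the encoding of objects as indices are assumptions, and for simplicity each agent receives at most one object rather than the quasi-dictatorial quota) uses exact rational arithmetic:

```python
from itertools import permutations
from fractions import Fraction

def rsd(preferences):
    """Random Serial Dictatorship by brute force: average the serial-dictatorship
    outcomes over all n! priority orderings of the agents.

    preferences: one list per agent ranking object indices, best first.
    Returns an n x m matrix of assignment probabilities (exact Fractions)."""
    n, m = len(preferences), len(preferences[0])
    prob = [[Fraction(0)] * m for _ in range(n)]
    orderings = list(permutations(range(n)))
    for order in orderings:
        remaining = set(range(m))
        for agent in order:
            if not remaining:
                break
            # the current dictator picks its best object still available
            pick = next(o for o in preferences[agent] if o in remaining)
            remaining.discard(pick)
            prob[agent][pick] += Fraction(1, len(orderings))
    return prob
```

The factorial enumeration is only viable for small instances, consistent with the hardness of computing RSD probabilities noted later in the paper.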
PS treats objects as a set of divisible goods of equal size and simulates a simultaneous eating algorithm. Each agent starts “eating” its most preferred object, all at the same rate. Once an object is gone (eaten away), the agent starts eating its next preferred object among the remaining objects. This process terminates when all objects have been “eaten”. Given a preference profile $\succ$, $PS(\succ)$ is a random assignment where $p_{i,o}$ is the probability (fraction) of object $o$ that is assigned to (or “eaten by”) agent $i$.
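The eating process can be simulated event by event: between events every agent eats its best remaining object at unit speed, and an event occurs when some object is exhausted. A minimal sketch (our own illustration; the name `probabilistic_serial` is an assumption):

```python
from fractions import Fraction

def probabilistic_serial(preferences):
    """Simultaneous eating algorithm with unit eating speeds.

    preferences: one list per agent ranking object indices, best first.
    Returns an n x m matrix: eaten[i][o] is the share of object o eaten by i."""
    n, m = len(preferences), len(preferences[0])
    supply = [Fraction(1)] * m                    # remaining amount of each object
    eaten = [[Fraction(0)] * m for _ in range(n)]
    pointer = [0] * n                             # position in each agent's list
    while True:
        # each agent targets its most preferred object that is not exhausted
        targets = {}
        for i in range(n):
            while pointer[i] < m and supply[preferences[i][pointer[i]]] == 0:
                pointer[i] += 1
            if pointer[i] < m:
                targets[i] = preferences[i][pointer[i]]
        if not targets:
            break
        eaters = {}
        for i, o in targets.items():
            eaters.setdefault(o, []).append(i)
        # advance time until the first targeted object is fully eaten
        dt = min(supply[o] / len(ag) for o, ag in eaters.items())
        for o, ag in eaters.items():
            for i in ag:
                eaten[i][o] += dt
            supply[o] -= dt * len(ag)
    return eaten
```

Each loop iteration exhausts at least one object, so the simulation runs at most $m$ rounds.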
2.2 General Properties
In this section we define key properties for matching mechanisms. To evaluate the quality of a random assignment, we use first-order stochastic dominance [hadar1969rules; bogomolnaia2001new]. Given a random assignment $p$, the probability that agent $i$ is assigned an object that is at least as good as object $o$ is defined as follows: $w(\succ_i, o) = \sum_{o' \succeq_i o} p_{i,o'}$.
We say an agent always prefers assignment $p$ to $q$ if, for each object $o$, the probability of being assigned an object at least as good as $o$ under $p$ is greater than or equal to that under $q$, and strictly greater for some object.
Definition 1 (Stochastic Dominance)
Given a preference ordering $\succ_i$, random assignment $p$ stochastically dominates (sd) assignment $q$ if for every object $o \in M$, $\sum_{o' \succeq_i o} p_{i,o'} \ge \sum_{o' \succeq_i o} q_{i,o'}$, with strict inequality for at least one object.
A matching mechanism is sd-efficient if at all preference profiles $\succ$ the prescribed assignment is not stochastically dominated, for any agent, by any other assignment.
Definition 2 (sd-Efficiency)
A random assignment is sd-efficient if for all agents, it is not stochastically dominated by any other random assignment.
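Checking the sd relation for a single agent reduces to comparing upper-contour probability sums. A small sketch under the same index-based encoding as before (function names are our own):

```python
def upper_contour(pref, row, obj):
    """Probability that allocation `row` gives an object at least as good as
    `obj` under the preference list `pref` (best first)."""
    cutoff = pref.index(obj)
    return sum(row[o] for o in pref[:cutoff + 1])

def sd_dominates(pref, row_p, row_q):
    """True iff row_p stochastically dominates row_q for this agent: every
    upper-contour sum is at least as large, and at least one is strictly larger."""
    sums = [(upper_contour(pref, row_p, o), upper_contour(pref, row_q, o))
            for o in pref]
    return all(a >= b for a, b in sums) and any(a > b for a, b in sums)
```

Note that `sd_dominates` returns `False` for identical allocations and for incomparable pairs alike; incomparability is exactly the situation explored in Section 3.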
An important desirable property of matching mechanisms is strategyproofness; that is, the mechanism is designed so that no agent has an incentive to misreport its preferences.
Definition 3 (sd-Strategyproofness)
Mechanism $f$ is sd-strategyproof if at all preference profiles $\succ$, for all agents $i$, and for any misreport $\succ_i'$ such that $\succ_i' \neq \succ_i$ and $\succ' = (\succ_i', \succ_{-i})$, we have: for all $o \in M$, $\sum_{o' \succeq_i o} f(\succ)_{i,o'} \ge \sum_{o' \succeq_i o} f(\succ')_{i,o'}$.
Sd-strategyproofness is a strict requirement. It implies that under any utility model consistent with the preference orderings, no agent can improve her expected utility by misreporting. We say that a mechanism is weakly sd-strategyproof if at all preference profiles there is no misreport $\succ_i'$ such that $f(\succ_i', \succ_{-i})$ stochastically dominates $f(\succ)$ with respect to agent $i$'s true ordering $\succ_i$. Clearly, sd-strategyproofness implies weak sd-strategyproofness, but the converse does not hold.
A mechanism is manipulable at a profile if it is not sd-strategyproof there. If there exists some agent who strictly benefits from the manipulation (i.e., the mechanism is not even weakly sd-strategyproof), then we say the profile is sd-manipulable (or strictly manipulable).
We are also interested in whether mechanisms are fair, and use the notion of envyfreeness to this end. An assignment is sd-envyfree if each agent weakly prefers her random allocation, in the stochastic dominance sense, to any other agent's allocation.
Definition 4 (sd-Envyfreeness)
Given agent $i$'s preference $\succ_i$, assignment $p$ is sd-envyfree if for all agents $j \neq i$ and all objects $o \in M$, $\sum_{o' \succeq_i o} p_{i,o'} \ge \sum_{o' \succeq_i o} p_{j,o'}$. (4)
We say an assignment is weakly sd-envyfree if the inequality in Equation 4 is strict for some objects but there exists at least one object for which it does not hold, so that no other agent's allocation stochastically dominates agent $i$'s. A matching mechanism satisfies sd-envyfreeness if at all preference profiles $\succ$ it induces sd-envyfree assignments for all agents.
Lastly, we are interested in investigating the efficiency, manipulability, and envy of the random mechanisms when preferences are lexicographic. Under lexicographic preferences, given two allocations, an agent prefers the one that gives a higher probability of receiving the most-preferred object.
Definition 5 (Lexicographic Dominance)
Given a preference ordering $\succ_i$, random assignment $p$ lexicographically dominates (ld) assignment $q$ if there exists an object $o$ such that $p_{i,o} > q_{i,o}$ and $p_{i,o'} = q_{i,o'}$ for all $o' \succ_i o$.
We say that allocation $p$ lexicographically dominates another allocation $q$ if no agent lexicographically prefers $q$ to $p$ and at least one agent lexicographically prefers $p$ to $q$. Thus, an allocation mechanism is lexicographically efficient (ld-efficient) if for all preference profiles its induced allocation is not lexicographically dominated by any other random allocation.
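For a single agent, the ld comparison is a first-difference scan down the preference list. A minimal sketch (our own illustration, same index-based encoding as before):

```python
def ld_prefers(pref, row_p, row_q):
    """True iff an agent with preference list `pref` (best first) lexicographically
    prefers allocation row_p to row_q: compare probabilities object by object,
    from most to least preferred, deciding at the first difference."""
    for o in pref:
        if row_p[o] != row_q[o]:
            return row_p[o] > row_q[o]
    return False  # identical allocations: no strict preference
```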
2.3 Properties of RSD and PS
The theoretical properties of PS and RSD have been well studied in the economics literature [bogomolnaia2001new], and we summarize the results in Table 1. Both mechanisms are ex post efficient; that is, their realized outcomes cannot be improved without making at least one agent worse off. PS has been shown to be both sd-envyfree and sd-efficient. However, it is not even weakly sd-strategyproof when $m > n$ [kojima2010incentives] and is only weakly sd-strategyproof when $m \le n$. On the other hand, RSD is always sd-strategyproof, but it is only weakly sd-envyfree and is not sd-efficient. Example 1 illustrates the sd-inefficiency of RSD.
Example 1. Suppose there are four agents and four objects. Consider the following preference profile $\succ$. Table 2 shows the outcomes $RSD(\succ)$ and $PS(\succ)$. In this example, all agents strictly prefer the assignment induced by PS to the RSD assignment. Thus, RSD is inefficient at this preference profile.
3 Incomparability of RSD and PS
We argue that the theoretical findings on RSD and PS do not necessarily provide enough guidance to a market designer trying to select the correct mechanism for a specific setting. For example, while we know that PS is sd-efficient and RSD is not, this does not mean that PS assignments always stochastically dominate the assignments prescribed by RSD.
Example 2. Suppose there are three agents and three objects. Consider the following preference profile $\succ$. Table 3 shows $RSD(\succ)$ and $PS(\succ)$. Neither assignment dominates the other, since agent 1 is indifferent between the two assignments, while agent 2 prefers one assignment and agent 3 prefers the other.
If we knew the utility functions of the agents, consistent with their ordinal preferences, then we might be able to use the notion of (utilitarian) social welfare to help determine the better assignment. (Given utility functions $u_i$ for the agents, where $u_i(o)$ is the utility agent $i$ derives from being assigned object $o$, the (utilitarian) social welfare of an assignment $p$ is $SW(p) = \sum_{i \in N} \sum_{o \in M} p_{i,o} \, u_i(o)$.) However, it is easy to construct different utility functions for the agents in Example 2 under which both $RSD(\succ)$ and $PS(\succ)$ maximize social welfare. Similarly, the envy of RSD and the manipulability of PS both depend on the structure of preference profiles, and thus a compelling question, which justifies studying the practical implications of deploying a matching mechanism, is to analyze the space of preference profiles to find the likelihood of inefficient, manipulable, or envious assignments under these mechanisms. In Example 2, for instance, if the utilities of agents 1 and 2 are 10, 9, and 1, and agent 3's utilities are 10, 6, and 4 for the first, second, and third objects respectively, then the PS assignment outperforms that of RSD with respect to social welfare. However, if the utility functions change so that all agents have the same utilities of 10, 9, and 1 for the first, second, and third objects respectively, then the social welfare under RSD outperforms that of PS.
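The social welfare comparison in the footnote is a simple double sum. The sketch below (our own illustration; since Table 3's exact assignment matrices are not reproduced here, the matrices in the usage example are purely illustrative) computes it from an assignment matrix and a profile of utility vectors indexed by object:

```python
def expected_utility(row, utilities):
    """Expected utility of one agent's random allocation: sum_o p[o] * u[o]."""
    return sum(p * u for p, u in zip(row, utilities))

def social_welfare(assignment, utility_profile):
    """Utilitarian social welfare: the sum of all agents' expected utilities."""
    return sum(expected_utility(row, u)
               for row, u in zip(assignment, utility_profile))
```

For example, with the utilities 10, 9, 1 (agents 1 and 2) and 10, 6, 4 (agent 3) above, any two candidate assignment matrices can be ranked by comparing their `social_welfare` values.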
4 General and Lexicographic Preferences
The theoretical properties of PS and RSD provide only limited insight into their practical applications. In particular, when deciding which mechanism to use in different settings, the incomparability of PS and RSD leaves us with an ambiguous choice in terms of efficiency, manipulability, and envyfreeness. Thus, we examine the properties of RSD and PS in the space of all possible preference profiles as well as under lexicographic preferences. Lexicographic preferences are present in various applications and have been extensively studied in artificial intelligence and multiagent systems as a means of assessing allocations based on ordinal preferences [domshlak2011preferences; saban2013note; fishburn1974lexicographic]. Under lexicographic preferences, an allocation that assigns a higher probability to the top-ranked object is always preferred to any other allocation, regardless of the probabilities assigned to objects in later positions. Only when two allocations assign equal probability to the top-ranked object is the probability of the next preferred object considered. In the rest of this paper, we denote lexicographic efficiency, strategyproofness, manipulability, and envyfreeness with the ld- (lexicographic dominance) prefix.
The number of all possible preference profiles is super-exponential: $(m!)^n$. For small combinations of agents and objects, we performed a brute-force sweep of all possible preference profiles; thus, for all subsequent figures, each data point shows the fraction of all possible preference profiles. For the larger cases, we randomly generated 1,000 instances by sampling from a uniform preference profile distribution. For each preference profile, we ran both PS and RSD and compared their outcomes in terms of the stochastic dominance relation. Appendix A presents our numerical results. Note that not only is computing RSD probabilities #P-complete (and thus intractable) [aziz2013computational; saban2015complexity], but checking desired properties such as envyfreeness, efficiency, and manipulability of random allocations is NP-hard in general settings [aziz2015fair; aziz2015manipulating]. Thus, for larger settings, even if we randomly sample preference profiles, it is not easy to verify the aforementioned properties.
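The brute-force sweep amounts to iterating over the Cartesian product of the $m!$ orderings, once per agent. A sketch of the enumeration (our own illustration; `all_profiles` is an assumed name):

```python
from itertools import permutations, product

def all_profiles(n, m):
    """Yield every strict preference profile: each of the n agents independently
    holds one of the m! orderings of the m objects, (m!)**n profiles in total."""
    orderings = [list(p) for p in permutations(range(m))]
    return product(orderings, repeat=n)
```

For instance, $n = m = 3$ already gives $(3!)^3 = 216$ profiles, which is why the counts in the text grow super-exponentially and sampling becomes necessary.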
4.1 Preliminary Results
Our experiments reveal several intriguing observations, confirming theoretical results and providing additional insights into matching markets. A preliminary look at our empirical results illustrates the following: when $n = m = 2$, PS coincides exactly with RSD, which yields the best of the two mechanisms, i.e., both mechanisms are sd-efficient, sd-strategyproof, and sd-envyfree. Another interesting observation is that with two agents, PS is sd-strategyproof (although the PS assignments are not necessarily equivalent to the assignments induced by RSD), RSD is sd-envyfree, and in most such instances PS stochastically dominates RSD.
Our first finding is that the fraction of preference profiles at which RSD and PS prescribe identical random assignments goes to zero as the number of agents and objects grows. There are two conclusions one can draw. First, this result is consistent with Manea's theoretical results on the asymptotic inefficiency of RSD [manea2009asymptotic], in that, in most instances, the assignments induced by RSD are not identical to the PS assignments. Second, this result suggests that the incomparability of outcomes is significant; that is, the social welfare of the random outcomes is highly dependent on the underlying utility models.
The fraction of preference profiles for which RSD is stochastically dominated by PS at $m = n$ converges to zero as $n$ grows. Figure 1(a) shows that as the number of agents grows, due to the incomparability of RSD and PS with respect to the stochastic dominance relation, the RSD assignments are rarely stochastically dominated by the sd-efficient assignments prescribed by PS.
We also see similar results when we restrict ourselves to lexicographic preferences (Figure 1(b)): the fraction of preference profiles for which RSD is lexicographically dominated by PS at $m = n$ converges to zero as $n$ grows.
For lexicographic preferences, we also observe that the fraction of preference profiles for which PS assignments strictly dominate RSD-induced allocations goes to 1 when the numbers of agents and objects diverge: the fraction of preference profiles for which RSD is lexicographically dominated by PS at $m \neq n$ converges to 1 as $|m - n|$ grows. Intuitively, when some agents can receive more than one object ($m > n$) or when there are not sufficient objects for all agents ($m < n$), in each realized ordering of agents by RSD, those with higher priority are treated very differently from those with lower priority. Thus, the RSD outcome tends to be unfair and undesirable for most agents.
One immediate conclusion is that although RSD guarantees neither sd-efficiency nor ld-efficiency, in most settings with $m = n$ (and also for sd-efficiency, according to Figure 1(a)), neither of the two mechanisms is preferred in terms of efficiency. Hence, one cannot simply rule out the RSD mechanism.
4.3 Manipulability of PS
One critical issue with deploying PS is that it does not provide incentives for honest reporting of preferences. Although PS is weakly sd-strategyproof [bogomolnaia2001new] and ld-strategyproof [schulman2012allocation] when $m \le n$, it no longer satisfies these two properties when $m > n$. (A recent experimental study of the incentive properties of PS shows that human subjects are less likely to manipulate the mechanism when misreporting is a Nash equilibrium. However, subjects' tendency to misreport is still significant even when it does not improve their allocations [hugh2013experimental].) The real concern is that, in the absence of strategyproofness, PS allocations are only efficient (or envyfree) with respect to the reported preferences. Thus, if an agent decides to manipulate the outcome by misreporting its preferences, PS no longer guarantees efficiency or envyfreeness with respect to the true underlying preferences. We are therefore interested in understanding the degree to which PS allocations are manipulable.
Figure 2 shows that the fraction of manipulable profiles goes to 1 as $n$ or $m$ grows; PS is manipulable at almost 99% of profiles. Another interesting observation is that, for all $n$, the fraction of sd-manipulable preference profiles goes to 1 as $m$ grows (Figure 2(b)). These results imply that when agents are permitted to receive more than a single object, agents can strictly benefit from misreporting their preferences.
Moreover, in those instances where PS is sd-strategyproof, the assignment prescribed by PS most often coincides with the RSD-induced assignment. For example, in one of our settings, PS is sd-strategyproof at only a small fraction of preference profiles, most of which are identical to the assignments induced by RSD. This insight further confirms the vulnerability of PS to misreporting (see Table 6 for detailed numerical results).
As illustrated in Figure 3, the manipulability of PS under lexicographic preferences follows a similar trend when there are more objects than agents ($m > n$), and the fraction of ld-manipulable preference profiles converges to 1 even more rapidly as $m$ grows.
4.4 Envy in RSD
The PS mechanism has a desirable fairness property and is guaranteed to satisfy sd-envyfreeness, whereas RSD is not sd-envyfree. To further investigate the envy among agents under RSD, we measured the fraction of agents that are weakly sd-envious of at least one other agent.
Figure 4 shows that for RSD, the percentage of agents that are weakly envious increases with the number of agents. Figure 4(a) reveals an interesting observation: fixing any $n$, the percentage of agents that are (weakly) envious grows with the number of objects; however, there is a sudden drop in the percentage of envious agents when there are equal numbers of agents and objects.
To better understand the population of agents who feel (weakly) envious under RSD, we illustrate the various envy profiles based on the percentage of envious agents across instances of the problem (Figure 4(b)). One observation is that there are few distinct envy profiles at each $n$, each representing a particular class of preference profiles, and as $n$ increases, the fraction of agents that are envious of at least one other agent increases.
5 Utility Models
Given a utility model consistent with an agent's preference ordering, we can compute the agent's expected utility for a random assignment. Let $u_i$ denote agent $i$'s von Neumann-Morgenstern (VNM) utility function consistent with its preference ordering $\succ_i$; that is, $u_i(a) > u_i(b)$ if and only if $a \succ_i b$. Then, agent $i$'s expected utility for random assignment $p$ is $E[u_i(p)] = \sum_{o \in M} p_{i,o} \, u_i(o)$.
We say that agent $i$ (strictly) prefers assignment $p$ to $q$ if and only if $E[u_i(p)] > E[u_i(q)]$. A mechanism is strategyproof if no agent can improve its expected utility by misreporting its preference ordering.
Definition 6 (Strategyproof)
Mechanism $f$ is strategyproof if for all agents $i$, and for any misreport $\succ_i'$ such that $\succ_i' \neq \succ_i$ and $\succ' = (\succ_i', \succ_{-i})$, given a utility model $u_i$ consistent with $\succ_i$, we have $E[u_i(f(\succ))] \ge E[u_i(f(\succ'))]$.
A matching mechanism is envyfree if for all preference profiles it prescribes an envyfree assignment.
Definition 7 (Envyfreeness)
Assignment $p$ is envyfree if for all agents $i, j \in N$, given utility model $u_i$ consistent with $\succ_i$, we have $\sum_{o \in M} p_{i,o} \, u_i(o) \ge \sum_{o \in M} p_{j,o} \, u_i(o)$.
Given utility functions for the agents, the (utilitarian) social welfare of an assignment $p$ is $SW(p) = \sum_{i \in N} \sum_{o \in M} p_{i,o} \, u_i(o)$. A random assignment is sd-efficient if and only if there exists a profile of utility values consistent with $\succ$ for which it maximizes the social welfare ex ante [bogomolnaia2001new; mclennan2002ordinal]. This existence result does not shed light on social welfare when comparing two random assignments, since an assignment can be sd-efficient yet fail to have desirable ex ante social welfare. Consider two random assignments: assignment $p$, which is sd-efficient, and assignment $q$, which is not stochastically dominated by $p$. Given a preference profile, $p$ is guaranteed to maximize the social welfare for at least one profile of consistent utilities. However, there may be other profiles of utilities consistent with the preferences at which $q$ maximizes the sum of utilities (social welfare).
Consider the problem introduced in Example 2, with the assignments illustrated in Table 3. Assume that all agents have the same utility model, with utilities of 10, 9, and 1 for the first, second, and third objects respectively, as in Section 3. Comparing the sums of expected utilities under the two assignments, it is easy to see that for this profile the ex ante social welfare under RSD is larger than that of PS.
Thus, given a profile of utilities we investigate the (ex ante) social welfare of the assignments under PS and RSD.
5.1 Instantiating Utility Functions
To deepen our understanding of the performance of the two mechanisms, we investigate different utility models. In particular, we look at the performance of the mechanisms when agents are risk neutral (i.e., have linear utility functions), when agents are risk seeking, and when agents are risk averse.
Our first utility model is the well-studied linear utility model. Given an agent $i$'s preference ordering $\succ_i$, we let $rank_i(o)$ denote the rank of object $o$; for example, given the preference ordering $\succ_i = (a, b, c)$, then $rank_i(a) = 1$, $rank_i(b) = 2$, and $rank_i(c) = 3$. The utility of agent $i$ for object $o$ is then linear in rank: $u_i(o) = m - rank_i(o) + 1$.
We use an exponential utility model to capture risk attitudes beyond risk neutrality; exponential utility has been shown to provide an appropriate model of individuals' risk attitudes [arrow1974essays]. In particular, we define the exponential utility as follows: $u_i(o) = \frac{1 - e^{-\alpha (m - rank_i(o) + 1)}}{\alpha}$.
The parameter $\alpha$ represents the agent's risk attitude. If $\alpha > 0$ then the agent is risk averse, while if $\alpha < 0$ then the agent is risk seeking. As $\alpha \to 0$ the agent is risk neutral and we recover the linear utility model. The magnitude $|\alpha|$ represents the intensity of the attitude: given two agents with $\alpha_1 > \alpha_2 > 0$, we say that agent 1 is more risk averse than agent 2; similarly, if $\alpha_1 < \alpha_2 < 0$, then agent 1 is more risk seeking than agent 2. Figure 5 illustrates the utility curvature for various risk-seeking and risk-averse parameters.
Table 4 shows sample utility values for various risk-seeking, risk-neutral, and risk-averse utility profiles. These values show how the utility of an object at a given ranking position changes with the risk attitude. Note that we normalize the utilities so that each agent's utilities sum to 1.
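The two utility models above can be instantiated in a few lines. The sketch below assumes the exponential form $u(x) = (1 - e^{-\alpha x})/\alpha$ applied to the linear rank value $x = m - rank + 1$, which matches the qualitative behavior described above (the paper's exact parameterization and normalization details are not fully reproduced here, so treat this as an illustration):

```python
import math

def risk_utilities(pref, alpha):
    """Utilities over the objects in `pref` (best first), normalized to sum to 1.

    alpha > 0 models risk aversion (flatter across ranks), alpha < 0 risk
    seeking (utility concentrated on top ranks), alpha = 0 the linear model."""
    m = len(pref)
    def u(x):
        return x if alpha == 0 else (1 - math.exp(-alpha * x)) / alpha
    raw = [u(m - rank) for rank in range(m)]  # rank 0 is the most preferred
    total = sum(raw)
    return {obj: v / total for obj, v in zip(pref, raw)}
```

For three objects, `alpha = 0` gives the normalized linear utilities 1/2, 1/3, 1/6; negative `alpha` shifts mass toward the top-ranked object, positive `alpha` flattens the profile.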
For our experiments, we vary three parameters: the number of agents $n$, the number of objects $m$, and the risk-attitude factor $\alpha$. Each data point in the graphs shows the average over all possible preference profiles. We study the same settings as in Section 4. For each utility function, we look at homogeneous populations of agents, where agents have the same risk attitudes but may have different ordinal preferences.
To compare the social welfare, we investigate the percentage change (or improvement) in the social welfare of PS relative to RSD under various utility models. That is, $\Delta SW(\succ) = \frac{SW(PS(\succ)) - SW(RSD(\succ))}{SW(RSD(\succ))} \times 100$.
To measure the manipulability of PS, we are interested in answering two key questions: i) in what fraction of profiles is PS manipulable by at least one agent? and ii) if manipulation is possible, what is the average percentage of maximum gain? That is, for a manipulating agent $i$, $gain_i(\succ) = \frac{\max_{\succ_i'} E[u_i(PS(\succ_i', \succ_{-i}))] - E[u_i(PS(\succ))]}{E[u_i(PS(\succ))]} \times 100$.
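The maximum gain can be found by brute force over an agent's $m!$ possible misreports. The sketch below (our own illustration; the checker takes any mechanism as a function, and `fixed_dictatorship` is a toy strategyproof mechanism included only to exercise it) returns the absolute gain, from which the percentage above follows by dividing by the truthful expected utility:

```python
from itertools import permutations

def fixed_dictatorship(profile):
    """Serial dictatorship with fixed priority 0, 1, ... (strategyproof);
    included here only as a simple mechanism to test the checker against."""
    n, m = len(profile), len(profile[0])
    p = [[0.0] * m for _ in range(n)]
    remaining = set(range(m))
    for i in range(n):
        if not remaining:
            break
        pick = next(o for o in profile[i] if o in remaining)
        p[i][pick] = 1.0
        remaining.discard(pick)
    return p

def best_manipulation_gain(mechanism, profile, agent, utilities):
    """Maximum expected-utility gain `agent` can obtain by misreporting.

    mechanism: maps a profile (lists of object indices, best first) to an
    n x m probability matrix. utilities[o]: the agent's utility for object o,
    consistent with its true ordering. Returns 0 if truth-telling is optimal."""
    def eu(row):
        return sum(row[o] * utilities[o] for o in range(len(row)))
    truthful = eu(mechanism(profile)[agent])
    best = truthful
    for report in permutations(profile[agent]):
        trial = list(profile)
        trial[agent] = list(report)
        best = max(best, eu(mechanism(trial)[agent]))
    return best - truthful
```

Plugging in a PS implementation instead of `fixed_dictatorship` reproduces the manipulability measure used in the experiments, at the cost of $m!$ mechanism evaluations per agent.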
To study the envy under the RSD mechanism, we consider two measures: i) the fraction of envious agents, and ii) the total envy felt by all agents.
6.1 Linear Utility Model
We first look at how RSD and PS perform under the assumption that the utility models are linear (Figure 6). In most cases, the social welfare under PS is higher than under RSD; however, the social welfare of PS is very close to that of RSD when $m = n$ (only a small overall improvement in all cases). Interestingly, under RSD the fraction of envious agents drops sharply when $m = n$. With regard to strategyproofness, PS is manipulable for most combinations of $n$ and $m$, and the fraction of manipulable profiles and the utility gain from manipulation increase as the number of objects grows relative to the number of agents.
6.2 Risk Seeking
Figure 7 presents our results in terms of the percentage change in social welfare between PS and RSD. Positive values show the percentage improvement in social welfare of PS over RSD; negative values represent cases where RSD has higher social welfare than PS.
Social welfare: Fixing $n$, as $m$ grows PS improves the social welfare compared to RSD in more instances of the problem, and the percentage of improvement also increases. A similar trend holds when varying the risk intensity for fixed $n$ and $m$ with $m > n$: as the risk intensity grows, the fraction of profiles at which PS has higher social welfare than RSD rapidly increases, and the percentage change is also noticeably larger, quickly approaching 90% improvement (Figures 7(a), 7(c), and 7(e)). This social welfare gap between PS and RSD grows as the risk intensity increases. Surprisingly, the trend changes for equal numbers of agents and objects ($m = n$): the more risk seeking agents are, the more desirable RSD becomes relative to PS, and in fact RSD improves the social welfare in more instances.
Envy: Figure 8 shows that for $m \le n$, the fraction of envious agents vanishes across all profiles and RSD becomes envyfree. This is more evident when agents are more risk seeking. Intuitively, these observations confirm the theoretical findings on the envyfreeness of RSD under lexicographic preferences [hosseini2015strategyproof], since one can view lexicographic preferences as extreme risk-seeking preferences in which an object at a higher rank is infinitely preferred to all objects ranked below it [hosseini2015strategyproof]. When $m > n$, our quasi-dictatorial extension of RSD creates some envy among the agents, because the agent with the highest priority receives $m - n + 1$ objects while all other agents receive at most one object. An interesting result is that the envy created by RSD starts to fade as the risk intensity increases.
Manipulability: Figure 9 shows the manipulability of the PS assignments when agents are risk seeking. We see that the possibility of manipulation (and any gain) decreases as the risk intensity increases. When $m \le n$, the fraction of manipulable profiles goes to zero the more risk seeking agents become. However, when $m > n$, even though the fraction of manipulable profiles (and the manipulation gain) decreases with risk intensity, the fraction of manipulable profiles still goes to 1 as $m$ grows.
6.3 Risk Aversion
Social welfare: Figures 7(b), 7(d), and 7(f) show that, fixing the risk factor $\alpha$, as $m$ grows PS assignments are superior to those of RSD in terms of social welfare in more instances, and the percentage change in social welfare increases. Fixing the risk factor and letting the number of agents grow, RSD is more likely to have the same social welfare as PS, and in some instances the social welfare under RSD is better than that under PS. Fixing $n$ and $m$, as the risk intensity increases RSD is more likely to have the same social welfare as PS; that is, the welfare gap between PS and RSD closes as agents become more risk averse ($\alpha$ increases). This result is insightful: it states that under risk aversion, the random allocations prescribed by RSD are either as good as those of PS or, in some cases, even superior, due to the underlying shape of the utility models. Figure 17 illustrates the percentage change in social welfare as a function of the difference between available objects and agents ($m - n$) for risk-seeking, linear, and risk-averse utilities with different risk intensities.
Envy: In Figure 10 we observe that when $m > n$, the fraction of envious agents and the total envy grow as $m$ grows. Increasing the risk intensity ($\alpha$), the fraction of envious agents increases; however, the total envy among the agents remains considerably low. For $m > n$, the fraction of envious agents and the total envy also grow as the risk intensity increases. An interesting observation is that envy is maximized when $m$ first exceeds $n$ and decreases as the market grows. This is mostly due to the choice of the randomized quasi-dictatorial mechanism for implementing RSD, in which the first dictator receives $m - n + 1$ objects while all other agents receive only a single object. Lastly, we noticed that in all instances where RSD creates envy among the agents, around 25% of the agents bear more than 50% of the total envy; that is, a few agents feel extremely envious while all other agents are either envyfree or feel only a minimal amount of envy.
Manipulability: Figure 11 illustrates the manipulability of the PS assignments when agents have risk averse preferences. The fraction of manipulable profiles rapidly goes to 1 as the number of objects grows. Similarly, as agents become more risk averse, the fraction of manipulable profiles goes to 1 and the manipulation gain increases.
7 Other Ranking Distribution Models
In this section, we use variations of two statistical models that are commonly used to capture realistic preference distributions in a population of agents. Considerable work in computational social choice and machine learning has exploited these statistical models to capture the distribution of ranking preferences in a population of agents lu2011learning ; lu2011robust ; Aziz:2015:EUP:2832249.2832402 . We will focus on Mallows Models and Polya-Eggenberger Urn Models (Urn) mallows1957non ; marden1996analyzing ; berg1985paradox .
In Mallows models the population is distributed around a reference ranking, with the probability of each ranking decreasing in its Kendall-Tau (KT) distance from the reference kendall1938new ; kendall1948rank . Hence, preferences closer to the reference ranking are more likely to appear in the population: agents' preferences deviate from the reference ranking with decreasing probability as rankings move away from it. Mallows models are parametrized by a reference ranking and a dispersion parameter. Formally, given a reference ranking σ and a dispersion parameter φ ∈ [0, 1], the probability of observing a ranking r is
P(r) = (1/Z) φ^d(r, σ),
where d(r, σ) is the KT distance between r and σ, and Z = Σ_r' φ^d(r', σ) is a normalization constant over all rankings r'. When φ = 1 the Mallows model is equivalent to the uniform distribution, and when φ → 0 the distribution mass is entirely on the reference ranking. It is also possible for an agent population to have multiple references. In these cases, Mallows mixture models are parametrized by a set of reference rankings with their corresponding dispersion parameters.
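For small numbers of objects, Mallows probabilities can be computed by brute-force enumeration over all rankings; a minimal sketch, where the function names are ours and the dispersion parameter is written `phi`:

```python
import itertools

def kendall_tau_distance(r1, r2):
    """Number of discordant pairs between two rankings (lists of items)."""
    pos = {item: i for i, item in enumerate(r2)}
    d = 0
    for i, j in itertools.combinations(range(len(r1)), 2):
        if pos[r1[i]] > pos[r1[j]]:
            d += 1
    return d

def mallows_pmf(reference, phi):
    """Probability of every ranking under a Mallows model:
    P(r) proportional to phi ** d(r, reference)."""
    rankings = list(itertools.permutations(reference))
    weights = [phi ** kendall_tau_distance(list(r), list(reference)) for r in rankings]
    z = sum(weights)  # normalization constant Z
    return {r: w / z for r, w in zip(rankings, weights)}

pmf = mallows_pmf(['a', 'b', 'c'], phi=0.5)
```

With `phi = 1` every ranking gets equal probability, and as `phi` shrinks toward 0 the mass concentrates on the reference ranking, matching the two limiting cases described above.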
In the Urn distribution model, each time a preference order is randomly drawn, the probability of that order being drawn in subsequent samples increases. Intuitively, we can think of a collection of preference orderings where, every time an ordering is sampled uniformly from the collection, it is replaced by two copies of the same ordering.
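A minimal sampling sketch of this urn process, with an assumed function name and a reinforcement parameter `alpha` (the process described above, replacing a drawn ordering by two copies, corresponds to `alpha = 1`):

```python
import itertools
import random

def sample_urn_profile(objects, num_agents, alpha=1, rng=None):
    """Draw a preference profile from a Polya-Eggenberger urn.
    The urn starts with one copy of every ranking of `objects`; each time a
    ranking is drawn for an agent it is returned together with `alpha` extra
    copies, so already-drawn rankings become more likely in later draws."""
    rng = rng or random.Random()
    urn = [list(p) for p in itertools.permutations(objects)]
    profile = []
    for _ in range(num_agents):
        ranking = rng.choice(urn)
        profile.append(ranking)
        urn.extend([ranking] * alpha)  # reinforcement step
    return profile
```

Larger `alpha` yields less diverse populations, since early draws dominate the urn more quickly.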
We use the PrefLib Toolkit MaWa13a to generate the Mallows and Urn distribution models. In our experiments, we used a Mallows model with one reference ranking as well as Mallows mixture models with five reference rankings. Every data point in the figures is averaged over 1,000 samples.
In general, the patterns observed under the uniform preference distribution continue to hold for various numbers of agents and objects and when varying the risk parameter and utility functions.
Social welfare: Figures 12 and 13 show the results of our simulation for social welfare when agents' preferences are drawn from Mallows models. These results are consistent across pure Mallows models with a single reference ranking (Figure 12) and Mallows mixture models with five references (Figure 13). The percentage change in social welfare is negligible when the numbers of agents and objects are equal, for both risk averse and risk seeking populations. Under risk aversion, this percentage change in social welfare remains small as the number of objects grows. Similar to the uniform populations, the negative values show that in some cases, particularly when there are more objects than agents, RSD assignments outperform PS assignments.
Manipulability: The PS assignments remain very susceptible to manipulation even under more natural assumptions on how the preferences are distributed. Figure 14 illustrates the manipulability of the PS assignments and the average gain from manipulation when agents are drawn from Mallows mixture models under risk averse and risk seeking attitudes.
The fraction of manipulable profiles and the manipulation gain go to 0 when agents are risk seeking. Under risk aversion, the fraction of manipulable profiles and the manipulation gain rapidly increase as the number of objects grows (Figure 14). These results hold under the pure Mallows distribution (Figure 15) as well as under the Polya-Urn model (Figure 16), but with slightly slower growth. This is consistent with the fact that, in less diverse populations, agents' preferences are more similar and conflicting, and thus manipulation is less likely (though still considerable) than in more diverse and uniform sets of profiles.
8 Related Literature
Assignment problems with ordinal preferences have attracted interest from many researchers. Svensson showed that serial dictatorship is the only deterministic mechanism that is strategyproof, nonbossy, and neutral svensson1999strategy . Random Serial Dictatorship (RSD) (uniform randomization over all serial dictatorship assignments) satisfies strategyproofness, proportionality, and ex post efficiency abdulkadirouglu1998random . Bogomolnaia and Moulin noted the inefficiency of RSD from the ex ante perspective, and characterized matching mechanisms based on first-order stochastic dominance bogomolnaia2001new . They proposed the probabilistic serial mechanism as an efficient and envyfree mechanism with regard to ordinal preferences. While PS is not strategyproof, it satisfies weak strategyproofness for problems with an equal number of agents and objects. However, PS is strictly manipulable (not weakly strategyproof) when there are more objects than agents kojima2009random . Kojima and Manea showed that in large assignment problems with sufficiently many copies of each object, truth-telling is a weakly dominant strategy in PS kojima2010incentives . In fact, the PS and RSD mechanisms become equivalent che2010asymptotic ; that is, the inefficiency of RSD and the manipulability of PS vanish as the number of copies of each object approaches infinity.
The practical implications of deploying RSD and PS have been the center of attention in many one-sided matching problems abdulkadiroglu2009strategy ; mennle2014hybrid . In the school choice setting with multi-capacity alternatives, Pathak observed that many students obtained a more desirable random assignment through PS in the public schools of New York City pathak2006lotteries ; however, the efficiency difference was quite small. These equivalence results, and their extensions to all random mechanisms liu2013ordinal , do not hold when the quantity of each object is limited to one.
Other interesting aspects of PS and RSD such as computational complexity and best-response strategies have also been explored ekici2012equilibrium ; aziz2015manipulating ; Aziz:2015:EUP:2832249.2832402 . In this vein, Aziz et al. proved the existence of pure Nash equilibria, but showed that computing an equilibrium is NP-hard Aziz:2015:EUP:2832249.2832402 . Nevertheless, Mennle et al. Mennle:2015:PLM:2832249.2832261 showed that agents can easily find near-optimal strategies by simple local and greedy search. In the absence of truthful incentives, the outcome of PS is no longer guaranteed to be efficient or envyfree with respect to agents' true underlying preferences, and this inefficiency may result in outcomes that are worse than RSD, especially in 'small' markets ekici2012equilibrium . The utilitarian and egalitarian welfare guarantees of RSD have been studied under ordinal and linear utility assumptions bhalgat2011social ; aziz2015egalitarianism . For arbitrary utilities, RSD provides the best approximation ratio for utilitarian social welfare among all mechanisms that rely only on ordinal preferences filos2014social .
We studied the space of general preferences and provided empirical results on the incomparability of RSD and PS. It is worth mentioning that at preference profiles where PS and RSD induce identical assignments, RSD is sd-efficient, sd-envyfree, and sd-strategyproof. However, PS is still highly manipulable. We further strengthen this argument by providing an observation in Example 4:
Consider the preference profile of Example 4; Table 5 shows the random assignment prescribed by PS. In this example, with PS as the matching mechanism, agent 1 can misreport her preference and thereby manipulate the assignment in her favor.
It is easy to see that agent 1's misreport improves her expected outcome for a wide range of utility models consistent with her ordinal preference.
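Such manipulations can be checked computationally by running the simultaneous-eating algorithm that defines PS on both the sincere and the misreported profile. A minimal sketch (the function name and the small profile below are illustrative, not the omitted profile of Example 4):

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    """Simultaneous-eating algorithm. `prefs` maps agent -> preference list of
    objects (most preferred first). Returns agent -> {object: probability}."""
    agents = list(prefs)
    objects = {o for p in prefs.values() for o in p}
    remaining = {o: Fraction(1) for o in objects}  # leftover supply of each object
    assignment = {a: {o: Fraction(0) for o in objects} for a in agents}
    clock = Fraction(0)
    while clock < 1 and any(remaining.values()):
        # each agent eats her most preferred object with supply left
        eating = {}
        for a in agents:
            for o in prefs[a]:
                if remaining[o] > 0:
                    eating[a] = o
                    break
        if not eating:
            break
        # advance time until some object is exhausted (or total eating time 1)
        rates = {o: sum(1 for a in eating if eating[a] == o) for o in set(eating.values())}
        step = min(remaining[o] / rates[o] for o in rates)
        step = min(step, 1 - clock)
        for a, o in eating.items():
            assignment[a][o] += step
            remaining[o] -= step
        clock += step
    return assignment

prefs = {1: ['a', 'b', 'c'], 2: ['a', 'b', 'c'], 3: ['b', 'a', 'c']}
ps = probabilistic_serial(prefs)
```

Rerunning `probabilistic_serial` with one agent's preference list altered, and comparing her two rows, exhibits exactly the kind of profitable misreport discussed above.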
We investigated various utility models according to different risk attitudes. Our findings hold under various assumptions on the population of agents and preference profile distributions. Our main results are:
In terms of efficiency, the fraction of preference profiles for which PS stochastically (or lexicographically) dominates RSD converges to zero as the number of agents grows. When instantiating the preferences with actual utility functions, PS allocations are only slightly better than RSD allocations in terms of social welfare when varying the numbers of agents and objects, particularly under risk averse utilities. In fact, in some cases RSD allocations are superior in terms of social welfare (see Figure 17).
PS is almost 99% manipulable for most combinations of agents and objects, and the fraction of sd- and ld-manipulable profiles rapidly goes to 1 as the number of objects grows. When instantiating the preferences with utility functions, the manipulability of PS increases as agents become more risk averse. Moreover, an agent's utility gain from manipulation also grows when the risk intensity increases.
For risk seeking utilities, as the risk intensity grows the fraction of envious agents across all profiles vanishes and RSD becomes envyfree. For risk averse utilities, the fraction of envious agents increases as agents become more risk averse. However, the total amount of envy grows only slightly, and surprisingly, only a few agents feel extremely envious while all other agents are either envyfree or feel only a minimal amount of envy.
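The stochastic-dominance comparisons underlying the first result reduce to a cumulative-probability test over prefixes of an agent's preference order; a minimal sketch (the function name and tolerance are our own):

```python
def sd_dominates(pref, p, q):
    """First-order stochastic dominance test: random allocation p sd-dominates q
    for an agent with ordinal preference list `pref` iff, for every prefix of
    the preference order, p assigns at least as much cumulative probability."""
    cum_p = cum_q = 0.0
    for obj in pref:
        cum_p += p.get(obj, 0.0)
        cum_q += q.get(obj, 0.0)
        if cum_p < cum_q - 1e-12:  # q strictly ahead on some prefix
            return False
    return True
```

When `sd_dominates(pref, p, q)` and `sd_dominates(pref, q, p)` are both false for some agent, the two random assignments are incomparable under stochastic dominance, which is the typical case reported in our experiments.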
An interesting future direction is to study the egalitarian social welfare of the matching mechanisms in single- and multi-unit assignment problems as well as in the full preference domain. Another open direction is to provide a parametric analysis of the matching mechanisms according to the risk aversion factor.
10 Design Recommendations for Multiagent Systems Practitioners
Our work in this paper can be used to help guide designers of multiagent systems who need to solve allocation problems. If a designer strongly requires sd-efficiency then the theoretical results of PS indicate that it is better than RSD. However, our results show that PS is highly prone to manipulation for various combinations of agents and objects. This manipulation and the possible gain from manipulation become more severe particularly when agents are risk averse, and designers need to take this into consideration. On the other hand, while RSD does not theoretically guarantee sd-efficiency, our results show that it tends to do quite well – sometimes even outperforming PS in terms of social welfare. RSD also has the added advantage of being sd-strategyproof and thus is not prone to the manipulation problems of PS.
Although computing RSD probabilities (fractional assignments) is #P-hard aziz2013computational ; saban2015complexity , RSD is easy to implement in practice. However, the welfare cost of adopting manipulable mechanisms such as PS raises concern and has real consequences pathak2011lotteries ; budish2012multi . Even though computing optimal manipulation strategies for the PS mechanism is computationally hard, evidently individuals can easily figure out how to manipulate such mechanisms using simple greedy heuristics budish2012multi ; Mennle:2015:PLM:2832249.2832261 . Our investigations show that in many instances RSD performs as desirably as PS in terms of social welfare, while PS assignments are highly susceptible to manipulation, especially when agents are risk averse.
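For small instances, the RSD assignment matrix can still be computed exactly by enumerating all agent orders; the factorial cost of this enumeration is precisely why the general problem is #P-hard. A sketch with assumed names:

```python
import itertools
from fractions import Fraction

def rsd_probabilities(prefs):
    """Exact RSD assignment matrix by enumerating all n! agent orders; in each
    order, every agent takes her most preferred object still available.
    Exponential in the number of agents, hence only usable for small instances."""
    agents = list(prefs)
    counts = {a: {} for a in agents}
    orders = list(itertools.permutations(agents))
    for order in orders:
        available = {o for p in prefs.values() for o in p}
        for a in order:
            for o in prefs[a]:
                if o in available:
                    counts[a][o] = counts[a].get(o, 0) + 1
                    available.remove(o)
                    break
    total = Fraction(len(orders))
    return {a: {o: c / total for o, c in row.items()} for a, row in counts.items()}

# Two agents with identical preferences a > b: by symmetry, each receives each
# object with probability 1/2.
matrix = rsd_probabilities({1: ['a', 'b'], 2: ['a', 'b']})
```

For larger instances one would instead sample random orders (Monte Carlo), trading exactness for tractability.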
These findings suggest that in multiagent settings where mechanism designers are unsure whether agents will report their preferences sincerely, or where agents are mostly risk averse, the use of RSD is more desirable, as it ensures truthful reporting while providing reasonable social welfare. However, PS remains a desirable allocation mechanism for its fairness and efficiency properties, particularly in settings where agents are sincere.
- (1) Abdulkadiroğlu, A., Pathak, P.A., Roth, A.E.: Strategy-proofness versus efficiency in matching with indifferences: redesigning the new york city high school match. Tech. rep., National Bureau of Economic Research (2009)
- (2) Abdulkadiroğlu, A., Sönmez, T.: Random serial dictatorship and the core from random endowments in house allocation problems. Econometrica 66(3), 689–701 (1998)
- (3) Abdulkadiroğlu, A., Sönmez, T.: House allocation with existing tenants. Journal of Economic Theory 88(2), 233–260 (1999)
- (4) Arrow, K.J.: Essays in the theory of risk-bearing (1974)
- (5) Ashlagi, I., Fischer, F., Kash, I.A., Procaccia, A.D.: Mix and match: A strategyproof mechanism for multi-hospital kidney exchange. Games and Economic Behavior (2013)
- (6) Aziz, H., Brandt, F., Brill, M.: The computational complexity of random serial dictatorship. Economics Letters 121(3), 341–345 (2013)
- (7) Aziz, H., Chen, J., Filos-Ratsikas, A., Mackenzie, S., Mattei, N.: Egalitarianism of random assignment mechanisms. arXiv preprint arXiv:1507.06827 (2015)
- (8) Aziz, H., Gaspers, S., Mackenzie, S., Mattei, N., Narodytska, N., Walsh, T.: Equilibria under the probabilistic serial rule. In: Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI 2015, pp. 1105–1112. AAAI Press (2015). URL http://dl.acm.org/citation.cfm?id=2832249.2832402
- (9) Aziz, H., Gaspers, S., Mackenzie, S., Mattei, N., Narodytska, N., Walsh, T.: Manipulating the probabilistic serial rule. In: Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), pp. 1451–1459. International Foundation for Autonomous Agents and Multiagent Systems (2015)
- (10) Aziz, H., Gaspers, S., Mackenzie, S., Walsh, T.: Fair assignment of indivisible objects under ordinal preferences. Artificial Intelligence 227, 71–92 (2015)
- (11) Berg, S.: Paradox of voting under an urn model: the effect of homogeneity. Public Choice 47(2), 377–387 (1985)
- (12) Bhalgat, A., Chakrabarty, D., Khanna, S.: Social welfare in one-sided matching markets without money. In: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 87–98. Springer (2011)
- (13) Bogomolnaia, A., Heo, E.J.: Probabilistic assignment of objects: Characterizing the serial rule. Journal of Economic Theory 147(5), 2072–2082 (2012)
- (14) Bogomolnaia, A., Moulin, H.: A new solution to the random assignment problem. Journal of Economic Theory 100(2), 295–328 (2001)
- (15) Bouveret, S., Lang, J.: Manipulating picking sequences. In: Proceedings of the 21st European Conference on Artificial Intelligence (ECAI’14), pp. 141–146. IOS Press, Prague, Czech Republic (2014). URL http://recherche.noiraudes.net/resources/papers/ECAI14.pdf
- (16) Budish, E., Cantillon, E.: The multi-unit assignment problem: Theory and evidence from course allocation at Harvard. The American Economic Review 102(5), 2237–71 (2012)
- (17) Che, Y.K., Kojima, F.: Asymptotic equivalence of probabilistic serial and random priority mechanisms. Econometrica 78(5), 1625–1672 (2010)
- (18) Domshlak, C., Hüllermeier, E., Kaci, S., Prade, H.: Preferences in AI: An overview. Artificial Intelligence 175(7), 1037–1052 (2011)
- (19) Ekici, Ö., Kesten, O.: An equilibrium analysis of the probabilistic serial mechanism. International Journal of Game Theory pp. 1–20 (2015). DOI 10.1007/s00182-015-0475-9. URL http://dx.doi.org/10.1007/s00182-015-0475-9
- (20) Filos-Ratsikas, A., Frederiksen, S.K.S., Zhang, J.: Social welfare in one-sided matchings: Random priority and beyond. In: Algorithmic Game Theory, pp. 1–12. Springer (2014)
- (21) Fishburn, P.C.: Lexicographic orders, utilities and decision rules: A survey. Management Science pp. 1442–1471 (1974)
- (22) Hadar, J., Russell, W.R.: Rules for ordering uncertain prospects. The American Economic Review pp. 25–34 (1969)
- (23) Hosseini, H., Larson, K.: Strategyproof quota mechanisms for multiple assignment problems. arXiv preprint arXiv:1507.07064 (2015)
- (24) Hugh-Jones, D., Kurino, M., Vanberg, C.: An experimental study on the incentives of the probabilistic serial mechanism. Tech. rep., Discussion Paper, Social Science Research Center Berlin (WZB), Research Area Markets and Politics, Research Unit Market Behavior (2013)
- (25) Kendall, M.G.: A new measure of rank correlation. Biometrika 30(1/2), 81–93 (1938)
- (26) Kendall, M.G.: Rank correlation methods (1948)
- (27) Kojima, F.: Random assignment of multiple indivisible objects. Mathematical Social Sciences 57(1), 134–142 (2009)
- (28) Kojima, F., Manea, M.: Incentives in the probabilistic serial mechanism. Journal of Economic Theory 145(1), 106–123 (2010)
- (29) Liu, Q., Pycia, M.: Ordinal efficiency, fairness, and incentives in large markets. Unpublished mimeo (2013)
- (30) Lu, T., Boutilier, C.: Learning mallows models with pairwise preferences. In: Proceedings of the 28th international conference on machine learning (ICML-11), pp. 145–152 (2011)
- (31) Lu, T., Boutilier, C.: Robust approximation and incremental elicitation in voting protocols. In: Proceedings of the 22nd International Joint Conference on Artificial Intelligence, IJCAI 2011, vol. 22, p. 287 (2011)
- (32) Mallows, C.L.: Non-null ranking models. Biometrika 44(1/2), 114–130 (1957)
- (33) Manea, M.: Asymptotic ordinal inefficiency of random serial dictatorship. Theoretical Economics 4(2), 165–197 (2009)
- (34) Manlove, D.: Algorithmics of matching under preferences. World Scientific Publishing (2013)
- (35) Marden, J.I.: Analyzing and modeling rank data. Chapman & Hall/CRC Monographs on Statistics & Applied Probability (1996)
- (36) Mattei, N., Walsh, T.: Preflib: A library of preference data http://preflib.org. In: Proceedings of the 3rd International Conference on Algorithmic Decision Theory (ADT 2013), Lecture Notes in Artificial Intelligence. Springer (2013)
- (37) McLennan, A.: Ordinal efficiency and the polyhedral separating hyperplane theorem. Journal of Economic Theory 105(2), 435–449 (2002)
- (38) Mennle, T., Seuken, S.: Hybrid mechanisms: Trading off strategyproofness and efficiency in one-sided matching. Tech. rep. (2014). Working Paper
- (39) Mennle, T., Weiss, M., Philipp, B., Seuken, S.: The power of local manipulation strategies in assignment mechanisms. In: Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI’15, pp. 82–89. AAAI Press (2015). URL http://dl.acm.org/citation.cfm?id=2832249.2832261
- (40) Pápai, S.: Strategyproof multiple assignment using quotas. Review of Economic Design 5(1), 91–105 (2000)
- (41) Pathak, P.A.: Lotteries in student assignment. Unpublished mimeo, Harvard University (2006)
- (42) Pathak, P.A., Sethuraman, J.: Lotteries in student assignment: An equivalence result. Theoretical Economics 6(1), 1–17 (2011)
- (43) Roth, A.E., Sönmez, T., Ünver, M.U.: Kidney exchange. The Quarterly Journal of Economics 119(2), 457–488 (2004)
- (44) Saban, D., Sethuraman, J.: A note on object allocation under lexicographic preferences. Journal of Mathematical Economics (2013)
- (45) Saban, D., Sethuraman, J.: The complexity of computing the random priority allocation matrix. Mathematics of Operations Research 40(4), 1005–1014 (2015)
- (46) Schulman, L.J., Vazirani, V.V.: Allocation of divisible goods under lexicographic preferences. arXiv preprint arXiv:1206.4366 (2012)
- (47) Sönmez, T., Ünver, M.U.: Course bidding at business schools. International Economic Review 51(1), 99–123 (2010)
- (48) Svensson, L.G.: Strategy-proof allocation of indivisible goods. Social Choice and Welfare 16(4), 557–567 (1999)
- (49) Von Neumann, J.: A certain zero-sum two-person game equivalent to the optimal assignment problem. Contributions to the Theory of Games 2, 5–12 (1953)
Appendix A Numerical Results
The following table shows the results of comparing RSD and PS under ordinal preferences for various combinations of agents and objects. Note that in most instances, RSD and PS do not induce the same random allocation.