The problem of assigning a number of indivisible objects to a set of agents, each with a private preference ordering over the objects, in the absence of monetary transfers, is fundamental in many multiagent resource allocation applications, and has been the center of attention amongst researchers at the interface of artificial intelligence, economics, and mechanism design. Assigning dormitory rooms or offices to students, students to public schools, college courses to students, teaching load to faculty, organs and medical resources to patients, and members to subcommittees are some of the myriad examples of one-sided matching problems [25, 28, 29, 3, 9, 22].
Two important (randomized) matching mechanisms are Random Serial Dictatorship (RSD) and the Probabilistic Serial rule (PS). Both mechanisms have important economic properties and are practical to implement. The RSD mechanism has strong truthful incentives but guarantees neither efficiency nor envyfreeness. PS satisfies efficiency and envyfreeness; however, it is susceptible to manipulation.
Therefore, since both RSD and PS induce random assignments, there are subtle points to consider when deciding which mechanism to use. For example, given a particular preference profile, the mechanisms often produce random assignments that are simply incomparable; thus, without additional knowledge of the agents' underlying utility models, it is difficult to determine which is the "better" outcome. Furthermore, properties like non-manipulability and envyfreeness can depend on whether there is underlying structure in the preferences, and even in general preference models it is valuable to understand under what conditions a mechanism is likely to be non-manipulable or envyfree, as this can guide designers' choices.
In this paper, we consider the random assignment of objects to agents, where each agent reports her complete preferences as a strict ordering over objects. We first look at the space of lexicographic preferences and define two axioms for fairness based on partial preference orderings. We show that, as opposed to the general preference domain, RSD satisfies envyfreeness. Moreover, we show that although under lexicographic preferences PS is strategyproof when the number of objects is less than or equal to the number of agents, it is susceptible to manipulation when there are more objects than agents.
We empirically study the space of all possible preference profiles for various matching problems, and provide insights into the comparability of RSD and PS under various agent-object combinations, with the aim of offering practical guidance on the mechanisms' differing properties.
2 Relation to the literature
In the house allocation problem with ordinal preferences, Svensson showed that serial dictatorship is the only deterministic mechanism that is strategyproof, nonbossy, and neutral. Abdulkadiroglu and Sonmez showed that uniform randomization over all serial dictatorship assignments, a.k.a. RSD, is equivalent to the core of the top trading cycles mechanism from random initial endowments, and thus satisfies strategyproofness, proportionality, and ex post efficiency. Although RSD is strategyproof and proportionally fair, it does not guarantee envyfreeness.
Bogomolnaia and Moulin noted the inefficiency of the RSD mechanism from the ex ante perspective, and characterized random assignment mechanisms based on first-order stochastic dominance (sd). They further proposed the probabilistic serial mechanism as an efficient and envyfree mechanism. While PS is not strategyproof, it satisfies weak strategyproofness for problems with an equal number of agents and objects. Kojima studied the random assignment of multiple indivisible objects and showed that PS is strictly manipulable (not weakly strategyproof) even when there are only two agents. Kojima and Manea later considered a setting with multiple copies of each object and showed that in large assignment problems with sufficiently many copies of each object, truthtelling is a weakly dominant strategy in PS. To give a rationale for the practical employment of RSD and PS in real-life applications, Che and Kojima further analyzed the economic properties of PS and RSD in large markets with multiple copies of each object and concluded that these mechanisms become equivalent when the market size becomes large. In particular, they showed that the inefficiency of RSD and the manipulability of PS vanish as the number of copies of each object approaches infinity.
The practical implications of deploying RSD and PS have been the center of attention in many one-sided matching problems. In the school choice setting with multi-capacity alternatives, Pathak compared RSD and PS using data from the assignment of public schools in New York City and observed that many students obtained a more desirable random assignment through PS. However, the efficiency difference was quite small. These observations further confirm Che and Kojima's equivalence result when there are multiple copies of each object (i.e., school seats) available. Despite these findings for arbitrarily large markets, the equivalence result of Che and Kojima, and its extension to all random mechanisms by Liu and Pycia, do not hold when the quantity of each object is limited to one.
This paper sets out to study the comparability of PS and RSD when there is only one copy of each object, and to analyze the space of all preference profiles for different numbers of agents and objects. We make several intriguing observations about the manipulability of PS and the fairness properties of RSD. Following Manea's work on the asymptotic inefficiency of RSD, we show that despite this inefficiency result, the fraction of random assignments at which PS stochastically dominates RSD vanishes when the number of agents is less than or equal to the number of available objects. Moreover, we show that this result strongly holds for lexicographic preferences when the numbers of agents and objects are equal.
3 Formal Representation
A one-sided matching problem consists of a set of agents N = {1, ..., n}, a set of distinct indivisible objects M with |M| = m, and a preference profile denoting the set of strict and complete preference orderings of agents over the objects. Let Π denote the set of all complete and strict preferences over M. Each agent i has a private preference ordering denoted by ≻_i, where a ≻_i b indicates that agent i prefers object a over object b. Thus, a preference profile is ≻ = (≻_1, ..., ≻_n) ∈ Π^n. We represent the preference ordering of agent i by the ordered list of objects (a ≻_i b ≻_i c) or (a, b, c), for short.
A random assignment is an n × m stochastic matrix P = [p_{ij}] that specifies the probability p_{ij} of assigning each object j to each agent i. The probability vector P_i = (p_{i1}, ..., p_{im}) denotes the random allocation of agent i. Let 𝒫 refer to the set of possible assignments. An assignment P ∈ 𝒫 is said to be feasible if and only if p_{ij} ≥ 0 for all i, j and Σ_{i ∈ N} p_{ij} = 1 for every object j; that is, the probability distribution is valid for each object.
Given a random assignment P, the probability that agent i is assigned an object that is at least as good as object a is defined as W(≻_i, a, P_i) = Σ_{b ⪰_i a} p_{ib}.
A deterministic assignment is simply a binary matrix of degenerate lotteries over objects that allocates each object to exactly one agent with certainty. Every random assignment is a convex combination of deterministic assignments, and is induced by a lottery over deterministic assignments.
A matching mechanism is a mapping from the set of possible preference profiles to the set of random assignments.
In this section we define key properties that matching mechanisms should have. In particular, we formally define efficiency, strategyproofness and envyfreeness for (randomized) matching mechanisms.
In the context of deterministic assignments, an assignment Q Pareto dominates another assignment P at profile ≻ if every agent weakly prefers her allocation under Q and some agent strictly prefers it; that is, there exists an agent i with Q_i ≻_i P_i and no agent j with P_j ≻_j Q_j, where Q_i ≻_i P_i denotes that agent i strictly prefers assignment Q over P. An assignment is Pareto efficient at ≻ if no other assignment exists that Pareto dominates it at ≻. Extending the Pareto efficiency requirement to random mechanisms, we focus only on mechanisms that always guarantee a Pareto efficient solution ex post. A random assignment is called ex post efficient if it can be represented as a probability distribution over deterministic Pareto efficient assignments.
To evaluate the quality of a random assignment, we follow the convention proposed by Bogomolnaia and Moulin based on first-order stochastic dominance.
Given a preference ordering ≻_i, random assignment P stochastically dominates (sd) assignment Q, written P_i ⪰_i^sd Q_i, if for every object a, Σ_{b ⪰_i a} p_{ib} ≥ Σ_{b ⪰_i a} q_{ib}.
Stochastic dominance is a strong requirement: it implies that for the entire space of utility functions consistent with the ordinal preferences, an agent's expected utility under P is at least as great as her expected utility under Q.
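As a concrete illustration (our own sketch, with hypothetical function and variable names, not code from the paper), the cumulative-sum test above can be coded directly; exact rational arithmetic avoids floating-point ties:

```python
from fractions import Fraction

def sd_dominates(pref, p, q):
    """True iff allocation p stochastically dominates allocation q with
    respect to the strict ordering pref (most preferred object first):
    every cumulative probability over a top segment of pref under p is
    at least the corresponding cumulative probability under q."""
    cum_p = cum_q = Fraction(0)
    for obj in pref:
        cum_p += p[obj]
        cum_q += q[obj]
        if cum_p < cum_q:
            return False
    return True

pref = ["a", "b", "c"]
uniform   = {"a": Fraction(1, 3), "b": Fraction(1, 3), "c": Fraction(1, 3)}
top_heavy = {"a": Fraction(1, 2), "b": Fraction(1, 2), "c": Fraction(0)}
polarized = {"a": Fraction(1, 2), "b": Fraction(0),    "c": Fraction(1, 2)}

print(sd_dominates(pref, top_heavy, uniform))   # True
# 'polarized' and 'uniform' are incomparable: neither dominates the other
print(sd_dominates(pref, polarized, uniform))   # False
print(sd_dominates(pref, uniform, polarized))   # False
```

The incomparable pair at the end is exactly the situation discussed throughout the paper: neither allocation is better for every utility function consistent with the ordering.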
A random assignment is sd-efficient if for all agents, it is not stochastically dominated by any other random assignment.
A matching mechanism is sd-efficient if at all preference profiles ≻, for all agents, the induced random assignment is not stochastically dominated by any other assignment. Intuitively, there exists no other assignment that every agent weakly prefers, and some agent strictly prefers, to her current random assignment.
A matching mechanism is sd-strategyproof if there exists no non-truthful preference ordering that improves an agent's random assignment. More formally,
Mechanism M is sd-strategyproof if at all preference profiles ≻, for every agent i and any misreport ≻'_i ∈ Π such that ≻'_i ≠ ≻_i and ≻' = (≻'_i, ≻_{-i}), we have: M_i(≻) ⪰_i^sd M_i(≻').
In the context of random mechanisms, sd-strategyproofness is a strict requirement. It states that under any utility model consistent with the preference orderings, no agent can improve her expected outcome by misreporting.
We can also define a milder version of strategyproofness. A mechanism is weakly sd-strategyproof if at all preference profiles, no agent can misreport her preferences and obtain a random assignment that strictly improves her assignment for the entire space of utility models consistent with her true preference ordering.
Mechanism M is weakly sd-strategyproof if for all preference profiles ≻, for any agent i with misreport ≻'_i ≠ ≻_i, where ≻' = (≻'_i, ≻_{-i}), the misreported assignment does not strictly stochastically dominate the truthful one: ¬(M_i(≻') ≻_i^sd M_i(≻)).
We say that a random assignment induced by mechanism M is manipulable if M is not sd-strategyproof; that is, there exists an agent whose assignment under a truthful report does not stochastically dominate her assignment under some non-truthful report. Formally, the assignment induced by M is manipulable at preference profile ≻ if there exists an agent i with misreport ≻'_i ≠ ≻_i such that M_i(≻) does not stochastically dominate M_i(≻'_i, ≻_{-i}).
Intuitively, an assignment is manipulable if for some utility function consistent with the ordinal preferences, an agent’s expected utility under a non-truthful report improves.
An assignment induced by a matching mechanism is strictly manipulable if the mechanism does not satisfy weak sd-strategyproofness, i.e., the induced random assignment is sd-manipulable. Thus, the assignment induced by mechanism M is sd-manipulable at preference profile ≻ if there exists an agent i with preference ≻_i and a misreport ≻'_i such that M_i(≻'_i, ≻_{-i}) stochastically dominates M_i(≻), that is, M_i(≻'_i, ≻_{-i}) ≻_i^sd M_i(≻).
An sd-manipulable assignment indicates that there exists an agent that can strictly benefit from misreporting. Clearly, a weakly sd-strategyproof assignment is not sd-manipulable.
To analyze the fairness properties of matching mechanisms, we study the ordinal notion of envyfreeness. An assignment is sd-envyfree if each agent weakly prefers her own random allocation to any other agent's; that is, each agent's random assignment stochastically dominates every other agent's assignment.
Given agent i's preference ≻_i, assignment P is sd-envyfree if for all agents j ≠ i, P_i ⪰_i^sd P_j.
A matching mechanism satisfies sd-envyfreeness if at all preference profiles it induces sd-envyfree assignments for all agents.
An assignment is weakly sd-envyfree if no agent strictly prefers another agent's random assignment to her own. Formally, for all agents i and j with j ≠ i, we have ¬(P_j ≻_i^sd P_i).
4 Two Random Mechanisms
Random Serial Dictatorship (RSD) is a uniform distribution over all possible (priority) orderings of agents; for each realization of the ordering, the first agent receives her most preferred object, the next agent receives her most preferred object among the set of remaining objects, and so on until no object remains unassigned. [Footnote: For m > n, RSD requires a careful method for picking sequences at each realized priority ordering, which directly affects the efficiency and envy properties of the assignments [8, 17]. For simplicity, we assume that when m > n, at each priority ordering the first agent chooses m - n + 1 objects, and the rest of the agents choose one object each.]
Given a preference profile, the Random Serial Dictatorship (RSD) mechanism is a uniform distribution over the assignments induced by the set of all possible priority orderings over agents.
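This definition can be turned into a direct, if brute-force, computation by enumerating all n! priority orderings and averaging the resulting deterministic assignments. The sketch below is our own (hypothetical names, not the authors' code) and assumes at least as many objects as agents, with each agent taking a single object:

```python
from fractions import Fraction
from itertools import permutations

def rsd(prefs):
    """Expected random assignment of RSD. prefs[i] is agent i's strict
    ordering over object indices. Enumerates every priority ordering,
    lets each dictator take her favourite remaining object, and averages
    the resulting deterministic assignments."""
    n, m = len(prefs), len(prefs[0])
    alloc = [[Fraction(0)] * m for _ in range(n)]
    orders = list(permutations(range(n)))
    share = Fraction(1, len(orders))     # each ordering has weight 1/n!
    for order in orders:
        taken = set()
        for agent in order:
            pick = next(o for o in prefs[agent] if o not in taken)
            taken.add(pick)
            alloc[agent][pick] += share
    return alloc

# two agents who both rank object 0 first: each wins it half the time
print(rsd([[0, 1], [0, 1]]))   # each entry is Fraction(1, 2)
```

Enumeration is only feasible for small n; sampling priority orderings gives an unbiased estimate for larger problems.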
Abdulkadiroğlu and Sönmez showed the equivalence of random serial dictatorship and the core from random endowments in the house allocation problem, and argued that this equivalence justifies the wide use of the RSD mechanism in many practical applications such as student housing and course allocation.
Bogomolnaia and Moulin proposed the Probabilistic Serial rule (PS). Given a preference profile, the PS mechanism treats the objects as divisible and simulates a Simultaneous Eating Algorithm (SEA): at every point in time, each agent eats from her top choice among the remaining objects at unit speed. When an object is completely exhausted (eaten away), each agent who was eating it moves on to her most preferred object among the remaining objects. The eating algorithm terminates when all objects are exhausted.
The random assignment of agent i, when SEA terminates, is the fraction of each object that has been eaten away by agent i. We adopt the following definition from Bogomolnaia and Moulin:
Given a preference profile, the Probabilistic Serial rule (PS) is the random probability assignment by simulating the Simultaneous Eating Algorithm with uniform speed.
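A sketch of the SEA simulation (ours, with hypothetical names, not the authors' code): between "exhaustion events" every agent eats her best remaining object at rate 1, so it suffices to repeatedly advance time to the next exhaustion:

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    """Probabilistic Serial via the Simultaneous Eating Algorithm with
    uniform unit speeds. prefs[i] is agent i's strict ordering over
    object indices; runs until every object is exhausted, so for m > n
    agents accumulate a total probability mass of m/n each."""
    n, m = len(prefs), len(prefs[0])
    supply = [Fraction(1)] * m
    alloc = [[Fraction(0)] * m for _ in range(n)]
    while any(supply):
        # each agent targets her most preferred non-exhausted object
        target = [next(o for o in p if supply[o] > 0) for p in prefs]
        rate = {o: target.count(o) for o in set(target)}
        # advance time until the first targeted object runs out
        dt = min(supply[o] / rate[o] for o in rate)
        for agent, o in enumerate(target):
            alloc[agent][o] += dt
            supply[o] -= dt
    return alloc

# agents eat their distinct favourites whole, then split object 2 evenly
print(probabilistic_serial([[0, 1, 2], [1, 0, 2]]))
```

Because at least one object is exhausted in every iteration of the loop, the simulation terminates after at most m events, and rational arithmetic keeps the allocation exact.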
For a given preference profile ≻, we let PS(≻) and RSD(≻) denote the outcomes of the PS and RSD mechanisms, respectively.
In their seminal work, Bogomolnaia and Moulin characterized the economic properties of these two mechanisms and showed that RSD does not guarantee sd-efficiency. The following example, adapted from Bogomolnaia and Moulin (2001), illustrates the inefficiency of the RSD mechanism.
Suppose there are four agents and four objects. Consider the following preference profile:
In this example, all agents strictly prefer the assignment induced by PS over the RSD assignment. Thus, RSD is inefficient at this preference profile.
Table 2 summarizes the theoretical results from the literature for both RSD and PS. For n = m, PS satisfies sd-efficiency, sd-envyfreeness, and weak sd-strategyproofness, while RSD is sd-strategyproof and weakly sd-envyfree but neither sd-efficient nor sd-envyfree. However, for m > n, PS is sd-manipulable, i.e., PS is not even weakly sd-strategyproof.
4.1 The Incomparability of RSD and PS
The theoretical properties of RSD and PS do not provide proper insight into the head-to-head comparison and applicability of these two mechanisms.
RSD does not guarantee sd-efficiency, meaning that the assignments induced by RSD are not always equivalent to the sd-efficient assignments induced by PS. However, in many instances of preference profiles the random assignments induced by PS and RSD are simply incomparable, in that neither assignment stochastically dominates the other. The following example illustrates this subtle distinction even for n = m.
Suppose there are three agents and three objects. Consider the following preference profile:
As shown in Table 3, neither mechanism provides an assignment that stochastically dominates the other: agent 1 receives the same allocation under both RSD and PS, agent 2 strictly prefers PS over RSD, and agent 3 strictly prefers RSD over PS. Thus, the two assignments are incomparable in terms of sd-efficiency.
In such instances, the efficiency of the assignments is ambiguous with respect to ordinal preferences. Thus, the (ex ante) efficiency of the induced assignments is contingent on the underlying von Neumann-Morgenstern (vNM) utility functions.
Similarly, the envy of RSD and the manipulability of PS depend on the structure of preference profiles. Thus, a compelling question, one that bears directly on the practical implications of deploying a matching mechanism, is to analyze the space of preference profiles to find the likelihood of inefficient, manipulable, or envious assignments under these mechanisms.
In the next sections, we focus our attention on lexicographic preferences and discuss the properties of RSD and PS in this domain. Then, we empirically study the space of all preference profiles for various matching problems.
5 Lexicographic Preferences
In many real-life scenarios, players have preferences that are lexicographic over alternatives or objects, for example in political negotiations, voting problems, team standings in sports such as hockey, and sequential screening processes for hiring candidates or choosing alternatives [14, 32, 13]. Lexicographic preferences arise in various applications and have been extensively studied in artificial intelligence and multiagent systems as a means of assessing allocations based on ordinal preferences [11, 26].
Given two assignments, an agent prefers the one that gives her a higher probability of getting her most preferred object. Formally, given a preference ordering ≻_i, agent i prefers any allocation that assigns a higher probability to her top-ranked object, regardless of the probabilities assigned to all other objects. Only when two assignments allocate the same probability to the top object will the agent consider the next-ranked object.
Given agent i's preference ordering ≻_i, assignment P lexicographically dominates (ld) assignment Q if there exists an object a such that p_{ia} > q_{ia} and p_{ib} = q_{ib} for every object b ≻_i a.
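The lexicographic comparison is even simpler to code than stochastic dominance (again a hypothetical sketch of ours): scan the objects in preference order and decide at the first disagreement:

```python
from fractions import Fraction

def ld_dominates(pref, p, q):
    """True iff allocation p lexicographically dominates allocation q
    w.r.t. the strict ordering pref: at the most preferred object where
    the two allocations differ, p assigns strictly more probability."""
    for obj in pref:
        if p[obj] != q[obj]:
            return p[obj] > q[obj]
    return False  # identical allocations: no strict dominance

pref = ["a", "b", "c"]
p = {"a": Fraction(1, 2), "b": Fraction(0), "c": Fraction(1, 2)}
q = {"a": Fraction(1, 3), "b": Fraction(1, 3), "c": Fraction(1, 3)}
# p and q are sd-incomparable, yet p ld-dominates q (more of top object a)
print(ld_dominates(pref, p, q))   # True
print(ld_dominates(pref, q, p))   # False
```

The example pair shows how lexicographic dominance resolves instances that stochastic dominance leaves incomparable, which is exactly the role it plays in this section.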
A matching mechanism is lexicographically efficient (ld-efficient) if for all preference profiles its induced assignment is not lexicographically dominated by any other random assignment. By definition, sd-efficiency yields ld-efficiency; however, an ld-efficient assignment may not be sd-efficient. The following example illustrates this:
Consider four agents and four objects with the following preference profile:
Table 4 shows the assignments induced by PS and RSD. Here, PS lexicographically dominates RSD, since every agent receives a higher probability of her more preferred objects under PS. However, PS does not stochastically dominate RSD, because agent 2 (and similarly agent 4) weakly prefers the RSD assignment: her cumulative probability up to some object is greater under RSD. Thus, the two random assignments are in fact incomparable with respect to stochastic dominance.
PS is ld-efficient, RSD is not.
The ld-efficiency of PS is directly derived from its sd-efficiency. Example 1 illustrates that RSD is not ld-efficient. ∎
Sd-efficiency implies ld-efficiency. However, as with the stochastic dominance relation, it is unclear whether PS and RSD are comparable in terms of ld-efficiency, in particular at those preference profiles where they are incomparable under stochastic dominance.
In many practical situations, such as political negotiations, hiring candidates, and sports standings [14, 13], players most often place the greatest weight on their top choices, and assess the fairness of random outcomes according to ld-envyfreeness.
Given agent i's preference ≻_i, assignment P is ld-envyfree if there exists no agent j and object a such that p_{ja} > p_{ia} while p_{jb} = p_{ib} for all objects b ≻_i a.
A matching mechanism satisfies ld-envyfreeness if at all preference profiles it induces ld-envyfree assignments for all agents. Sd-envyfreeness leads to ld-envyfreeness; however, ld-envyfree allocations may not satisfy sd-envyfreeness.
In Example 2, under RSD no agent is ld-envious of any other agent. However, agent 2 is weakly envious of agent 3's assignment, so the assignment is not sd-envyfree: writing agent 2's expected utility for objects at least as good as some object under her own assignment and under agent 3's assignment, there exist utility functions consistent with her ordinal preferences for which agent 2 prefers agent 3's random assignment.
Based on the definition of stochastic dominance, it is easy to see that every sd-envyfree mechanism satisfies ld-envyfreeness. PS satisfies sd-envyfreeness, hence, it is ld-envyfree for all preference profiles. However, ld-envyfreeness is stronger than weak sd-envyfreeness.
Every ld-envyfree allocation is weakly sd-envyfree; that is, ld-envyfree ⊆ weakly sd-envyfree.
The inclusion in Proposition 2 is strict: for two agents one can construct random assignments that are weakly sd-envyfree but do not satisfy ld-envyfreeness. The relationship between the different notions of fairness is thus: sd-envyfree ⊆ ld-envyfree ⊆ weakly sd-envyfree. [Footnote: For a complete discussion of the computational complexity of finding or verifying fairness concepts with respect to envyfreeness and proportionality, see the related literature.]
Before analyzing the envyfreeness of RSD in the lexicographic domain, we need to define a simple axiom called downward partial symmetry (DPS). Intuitively, DPS extends the fairness notion of equity (a.k.a. proportionality): starting from the first-ranked objects downwards, all agents with identical partial preferences receive exactly the same random assignment of the objects in the partial ordering.
Let ≻_i^k denote the partial preference of agent i restricted to her k most preferred objects. A random assignment P satisfies downward partial symmetry (DPS) if for all agents i and j with ≻_i^k = ≻_j^k, we have p_{ia} = p_{ja} for every object a in the shared partial ordering.
Similarly, a random assignment satisfies upward partial symmetry (UPS) if starting from the least favorite objects, agents with identical partial preferences receive exactly the same random assignment of the objects in the partial ordering. It is easy to see that PS satisfies both UPS and DPS. However, RSD only satisfies DPS.
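One way to make the DPS condition operational (a sketch under our reading of the definition, with hypothetical names) is to compare every pair of agents on the longest common prefix of their reported orderings:

```python
from fractions import Fraction

def satisfies_dps(prefs, alloc):
    """Check downward partial symmetry: whenever two agents' orderings
    agree on their k most preferred objects (same objects, same order),
    their allocations must coincide on those k objects."""
    n = len(prefs)
    for i in range(n):
        for j in range(i + 1, n):
            # length of the common prefix of the two orderings
            k = 0
            while k < len(prefs[i]) and prefs[i][k] == prefs[j][k]:
                k += 1
            if any(alloc[i][o] != alloc[j][o] for o in prefs[i][:k]):
                return False
    return True

prefs = [[0, 1, 2], [0, 2, 1]]   # common prefix: object 0 only
ok  = [[Fraction(1, 2), Fraction(1, 2), 0], [Fraction(1, 2), 0, Fraction(1, 2)]]
bad = [[Fraction(1), 0, 0],                 [0, Fraction(1, 2), Fraction(1, 2)]]
print(satisfies_dps(prefs, ok))    # True
print(satisfies_dps(prefs, bad))   # False (unequal shares of object 0)
```

The UPS check is symmetric: run the same prefix comparison on the reversed orderings.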
RSD satisfies DPS but does not guarantee UPS.
In Example 3, agents 1 and 4's assignments illustrate the failure of UPS.
RSD is a uniform distribution over the set of priority orderings, and thus satisfies equal treatment of equals for full preference orderings: agents who report identical orderings receive identical random assignments. Assume for contradiction that there exist two agents i and j with identical partial preferences ≻_i^k = ≻_j^k whose assignments differ on some object in the shared partial ordering, say with agent i receiving the higher probability. For any serial dictatorship in which i precedes j in the priority ordering, i chooses an object she ranks at least as highly as the one j would choose in her place, which implies that the difference must already appear at some more preferred object in the shared prefix. Continuing by induction backwards up to the first-ranked item, there would exist a priority ordering at which an agent chooses a less preferred object while a more preferred one is still available, contradicting the definition of serial dictatorship. A symmetric argument rules out agent j receiving the higher probability. ∎
The next theorem shows that although RSD is not sd-envyfree, it satisfies the weaker notion of ld-envyfreeness.
RSD is ld-envyfree, for any n and m.
Assume for contradiction that there exists an agent j whose random assignment agent i lexicographically prefers to her own. By the definition of lexicographic dominance, there then exists an object a such that RSD assigns j a higher probability of a, i.e., p_{ja} > p_{ia}, while p_{jb} = p_{ib} for all objects b ≻_i a. According to Proposition 3, such a configuration cannot arise under RSD, implying that RSD is ld-envyfree. ∎
The ld-envyfreeness of RSD is noteworthy: it shows that although RSD is not sd-envyfree, it satisfies envyfreeness with respect to lexicographic preferences.
When multiple objects can be allocated to each agent, manipulating the PS outcome may result in a stochastically dominant assignment. In fact, even with only two agents, the random assignment induced by PS is sd-efficient and sd-envyfree but not weakly sd-strategyproof (i.e., it is sd-manipulable).
Consider two agents and four objects. Assume that agent 1 misreports her preference ordering.
Agent 1's allocation under the misreport stochastically dominates her assignment when reporting truthfully, because for every object the cumulative probability of receiving something at least as good is weakly greater under the misreport, and strictly greater for some object.
The main intuition for the strong manipulability of PS comes from the eating sequence of the agents. Since agents can be allocated more than one object, the sequence in which agents choose to eat away the objects becomes crucial (for a discussion of the game-theoretic properties and complexity of manipulation in picking sequences, see [17, 8, 7]).
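This phenomenon can be reproduced with a concrete instance of our own (a hypothetical example in the spirit of Example 5, not necessarily the paper's exact numbers): two agents, four objects, eating until exhaustion, where a misreport by agent 1 strictly sd-improves her allocation:

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    # Simultaneous Eating Algorithm, unit speeds, run until exhaustion
    n, m = len(prefs), len(prefs[0])
    supply = [Fraction(1)] * m
    alloc = [[Fraction(0)] * m for _ in range(n)]
    while any(supply):
        target = [next(o for o in p if supply[o] > 0) for p in prefs]
        rate = {o: target.count(o) for o in set(target)}
        dt = min(supply[o] / rate[o] for o in rate)
        for agent, o in enumerate(target):
            alloc[agent][o] += dt
            supply[o] -= dt
    return alloc

def sd_dominates(pref, p, q):
    cum_p = cum_q = Fraction(0)
    for o in pref:
        cum_p, cum_q = cum_p + p[o], cum_q + q[o]
        if cum_p < cum_q:
            return False
    return True

# objects 0..3 stand for a..d
truth_1 = [0, 1, 2, 3]   # agent 1's true ordering: a > b > c > d
pref_2  = [1, 3, 0, 2]   # agent 2:                 b > d > a > c
lie_1   = [1, 0, 2, 3]   # agent 1 pretends:        b > a > c > d

honest = probabilistic_serial([truth_1, pref_2])[0]  # a and c in full
misrep = probabilistic_serial([lie_1,  pref_2])[0]   # a, half of b, half of c

# w.r.t. agent 1's TRUE ordering, lying strictly sd-improves her allocation
print(sd_dominates(truth_1, misrep, honest))  # True
print(sd_dominates(truth_1, honest, misrep))  # False
```

By grabbing the contested object b first, agent 1 still gets all of a afterwards (agent 2 has moved on to d), converting half a unit of c into half a unit of b: a strict improvement at every upper cumulative sum.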
It is apparent that sd-manipulability implies ld-manipulability; however, there are random assignments under PS that are ld-manipulable but not sd-manipulable, as captured by the following inclusion relation: sd-manipulable ⊂ ld-manipulable ⊂ manipulable.
In the domain of divisible objects, PS is the only stochastic mechanism that is sd-efficient, sd-envyfree, and ld-strategyproof on the lexicographic preference domain. However, this characterization is only valid for m ≤ n.
For m > n, PS is neither ld-strategyproof nor sd-strategyproof.
Proof follows from Example 5. ∎
Ld-strategyproofness is a direct consequence of a simple axiom called the bounded invariance property: no agent is able to change her preference order over the objects she likes less than some object a in such a way that her (probabilistic) allocation of a improves. This axiom was first characterized by Bogomolnaia and Heo, who showed that PS satisfies bounded invariance when m ≤ n. However, since for m > n each agent has an eating capacity of more than one object, PS no longer satisfies the bounded invariance property.
For m > n, PS does not satisfy the bounded invariance property.
6 Experimental Results
The theoretical properties of PS and RSD, even in restricted domains such as lexicographic preferences, only provide limited insight into their practical applications. In particular, when deciding which mechanism to use in different settings, the incomparability of PS and RSD leaves us with an ambiguous choice. Thus, we examine the properties of RSD and PS in the space of all possible preference profiles.
The number of all possible preference profiles, (m!)^n, is super-exponential. For each combination of agents and objects, we therefore considered 1,000 randomly generated instances, sampled from the uniform distribution over preference profiles; each data point is thus the average of 1,000 replications. For each preference profile, we ran both the PS and RSD mechanisms and compared their outcomes (in terms of random assignments).
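The sampling step is straightforward to reproduce (a sketch with hypothetical names; the authors' exact seeding is not specified): drawing each agent's ordering as an independent uniformly random permutation yields a uniform draw over the (m!)^n profiles.

```python
import random

def random_profile(n, m, rng):
    """Draw one profile uniformly at random: each agent's ordering is an
    independent uniformly random permutation of the m objects."""
    return [rng.sample(range(m), m) for _ in range(n)]

rng = random.Random(0)  # fixed seed so a run is reproducible
profiles = [random_profile(3, 4, rng) for _ in range(1000)]
# every sampled ordering is a permutation of the 4 objects
assert all(sorted(p) == [0, 1, 2, 3] for prof in profiles for p in prof)
```

Each sampled profile can then be fed to implementations of PS and RSD and the resulting assignment matrices compared under the sd and ld relations.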
A preliminary look at our empirical results illustrates the following: when n = m ≤ 3, PS coincides exactly with RSD, which yields the best of the two mechanisms, i.e., both are sd-efficient, sd-strategyproof, and sd-envyfree. [Footnote: This was first noted by Bogomolnaia and Moulin.] Moreover, when m = 2, for all n, PS is sd-strategyproof (although the PS assignments are not necessarily equivalent to the assignments induced by RSD), RSD is sd-envyfree, and for most instances PS stochastically dominates RSD.
Our first finding is that the fraction of preference profiles at which RSD and PS induce equivalent random assignments goes to 0 as n grows. There are two conclusions to draw from this observation. First, it confirms Manea's theoretical result on the asymptotic inefficiency of RSD, in that, in most instances, the assignments induced by RSD are not equivalent to the PS assignments. Second, it also suggests that the incomparability of outcomes is significant, that is, the (ex ante) efficiency of the random outcomes is highly dependent on the underlying utility models. We propose the following conjecture.
The fraction of preference profiles for which RSD is stochastically dominated by PS at n = m converges to zero as n → ∞.
Our empirical results support Conjecture 1 on the comparability of RSD and PS. Figure 1(a) shows that although RSD is inefficient as n grows, due to the incomparability of RSD and PS with respect to the stochastic dominance relation, the RSD-induced assignments are mostly not stochastically dominated by the sd-efficient assignments induced by PS.
We also see similar results when we restrict ourselves to lexicographic preferences (Figure 1(b)).
The fraction of preference profiles for which RSD is lexicographically dominated by PS at n = m converges to zero as n → ∞.
For lexicographic preferences, we also observe that the fraction of preference profiles for which PS assignments strictly dominate the RSD-induced allocations goes to 1 when the numbers of agents and objects differ.
The fraction of preference profiles for which RSD is lexicographically dominated by PS at n ≠ m converges to 1 as the market grows.
One immediate conclusion is that although RSD guarantees neither sd-efficiency nor ld-efficiency, in most settings, especially when n = m, neither of the two mechanisms is preferred in terms of efficiency. Hence, one cannot simply rule out the RSD mechanism.
6.2 Envy in RSD
In our next experiment we measure the fraction of agents that are weakly envious of at least one other agent. Each data point represents the fraction of (weakly) envious agents.
Figure 2 shows that for RSD, the percentage of agents that are weakly envious increases with the number of agents. Figure 2(a) reveals an interesting observation: fixing any n, the percentage of agents that are (weakly) envious grows with the number of objects; however, there is a sudden drop in the percentage of envious agents when the numbers of agents and objects are equal.
To better understand the population of agents who feel (weakly) envious under RSD, we illustrate the envy distribution over the set of preference profiles for each n (Figure 2(b)). One observation is that there are few distinct envy profiles at each n, each representing a particular class of preference profiles, and as n increases, the fraction of agents that are envious of at least one other agent increases.
6.3 Manipulability of PS
In Section 5, we characterized the incentive properties of the PS mechanism with regard to the stochastic dominance and lexicographic dominance relations, arguing that although PS is weakly sd-strategyproof and ld-strategyproof for m ≤ n, it no longer satisfies these two properties when m > n. [Footnote: A recent experimental study on the incentive properties of PS shows that human subjects are less likely to manipulate the mechanism when misreporting is a Nash equilibrium; however, subjects' tendency to misreport is still significant even when it does not improve their allocations.] Hence, the PS mechanism suffers from poor incentive properties. In fact, other mechanisms, such as the Draft mechanism, have been shown to be highly susceptible to manipulation in many real-life markets, such as course allocation at business schools. Nonetheless, we are interested in gaining insight into the fraction of preference profiles at which PS is manipulable.
Figure 3(a) shows that the fraction of manipulable preference profiles goes to 1 as n or m grows. In fact, PS is almost 100% manipulable for larger n and m (obviously, PS is sd-strategyproof wherever it coincides with RSD). Another compelling observation is that for all n, the fraction of sd-manipulable preference profiles goes to 1 as m grows (Figure 3(b)). This indicates that in problems where the number of objects is larger than the number of agents, an agent can often strictly benefit from misreporting her preferences. Figure 4 shows that under lexicographic preferences, the fraction of ld-manipulable preference profiles converges to 1 even more rapidly.
In this paper we conducted a comparison study of RSD and PS in order to gain further insight into their respective properties. We argued that the outcomes generated by the two mechanisms can be incomparable, making it difficult to determine which mechanism is better in different settings. We then established some theoretical properties of the mechanisms under the assumption that agents have lexicographic preferences. Finally, we conducted an empirical comparison of the mechanisms.
Earlier work had shown that RSD is sd-strategyproof, ex post efficient, and weakly sd-envyfree. We furthered the theoretical understanding of the mechanism by showing that it is also ld-envyfree. Furthermore, our empirical work showed that the fraction of outcomes where PS stochastically dominates RSD goes to 0 as n = m grows, raising the question of whether the lack of sd-efficiency guarantees for RSD is a significant concern in practice. In contrast, PS is sd-efficient, sd-envyfree, and ld-strategyproof for m ≤ n. However, when m > n, the fraction of sd- and ld-manipulable assignments approaches 1, limiting the effectiveness of PS.
Given the theoretical and empirical results, we suggest that in many cases RSD is, perhaps, the more suitable matching mechanism, especially when there are more objects than agents. This confirms previous studies on settings with an equal number of agents and objects [23, 12, 15]. We further strengthen this argument with an observation: at preference profiles where PS and RSD induce identical assignments, RSD is sd-efficient, sd-envyfree, and sd-strategyproof; PS, however, may still be manipulable.
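Observations of this kind are straightforward to check computationally on small instances. As a minimal sketch (the function name and the exact-enumeration approach are ours, not the paper's), RSD's random assignment can be computed by averaging the serial-dictatorship outcome over all priority orders:

```python
from fractions import Fraction
from itertools import permutations

def random_serial_dictatorship(prefs):
    """RSD: a priority order over agents is drawn uniformly at random and
    agents pick their favourite remaining object in turn. Computed exactly
    here by averaging over all n! orders; prefs[i] lists agent i's objects
    in decreasing order of preference."""
    n = len(prefs)
    m = max(max(p) for p in prefs) + 1
    orders = list(permutations(range(n)))
    shares = [[Fraction(0)] * m for _ in range(n)]
    for order in orders:
        taken = [False] * m
        for i in order:
            for o in prefs[i]:          # agent i takes her best available object
                if not taken[o]:
                    taken[o] = True
                    shares[i][o] += Fraction(1, len(orders))
                    break
    return shares

# With two agents sharing the ordering 0 > 1, each priority order gives one
# agent her first choice, so every agent gets each object with probability 1/2.
print(random_serial_dictatorship([[0, 1], [0, 1]]))
```

Enumerating all n! orders is only feasible for small n, which suffices for profile-by-profile comparisons of the kind reported in our experiments; larger instances would require sampling priority orders instead.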
Consider the following preference profile. Table 6 shows the induced random assignment.
In this case, with PS as the matching mechanism, agent 1 can misreport her preferences and thereby manipulate her assignment. It is easy to see that agent 1's misreport improves her expected outcome for a range of utility models consistent with her preference ordering.
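Manipulations of this kind can be explored by running the simultaneous-eating procedure directly. The sketch below is ours (the function name and the example profile are hypothetical stand-ins, not the profile of Table 6), and it assumes the one-object-per-agent model, so each agent eats a total mass of one:

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    """Simultaneous-eating (PS) algorithm. prefs[i] lists agent i's objects
    in decreasing order of preference. Every agent eats her favourite
    remaining object at unit speed until the clock reaches 1, i.e. until
    each agent has consumed one object's worth of probability mass."""
    n = len(prefs)
    m = max(max(p) for p in prefs) + 1
    supply = [Fraction(1)] * m                    # remaining mass per object
    shares = [[Fraction(0)] * m for _ in range(n)]
    t = Fraction(0)                               # global clock, stops at 1
    while t < 1:
        eaters = {}                               # object -> agents eating it now
        for i in range(n):
            for o in prefs[i]:
                if supply[o] > 0:
                    eaters.setdefault(o, []).append(i)
                    break
        if not eaters:
            break                                 # every object is consumed
        # advance until some object runs out or the clock reaches 1
        dt = min(min(supply[o] / len(ag) for o, ag in eaters.items()), 1 - t)
        for o, ag in eaters.items():
            supply[o] -= dt * len(ag)
            for i in ag:
                shares[i][o] += dt
        t += dt
    return shares

# Two agents, three objects, identical orderings 0 > 1 > 2: the agents split
# objects 0 and 1 evenly and object 2 is never eaten.
print(probabilistic_serial([[0, 1, 2], [0, 1, 2]]))
```

Comparing the shares returned for a truthful report against those for each possible misreport gives a brute-force check of whether a given profile is manipulable under PS.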
This work raises several interesting questions. From the theoretical perspective, providing proofs for the efficiency relations in Conjectures 1, 2, and 3 is crucial and nontrivial. From the practical perspective, an interesting open problem is whether there exists a randomized ld-efficient mechanism that satisfies strategyproofness and some notion of fairness (such as proportionality) when objects outnumber agents. Lastly, it would be interesting to investigate whether there is an extension of the PS mechanism (or a variant based on different eating speeds) that is weakly sd-strategyproof when assigning multiple objects to each agent.
-  Atila Abdulkadiroglu, Parag A Pathak, and Alvin E Roth. Strategy-proofness versus efficiency in matching with indifferences: redesigning the New York City high school match. Technical report, National Bureau of Economic Research, 2009.
-  Atila Abdulkadiroğlu and Tayfun Sönmez. Random serial dictatorship and the core from random endowments in house allocation problems. Econometrica, 66(3):689–701, 1998.
-  Itai Ashlagi, Felix Fischer, Ian A Kash, and Ariel D Procaccia. Mix and match: A strategyproof mechanism for multi-hospital kidney exchange. Games and Economic Behavior, 2013.
-  Haris Aziz, Serge Gaspers, Simon Mackenzie, and Toby Walsh. Fair assignment of indivisible objects under ordinal preferences. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, AAMAS ’14, pages 1305–1312, Richland, SC, 2014. International Foundation for Autonomous Agents and Multiagent Systems.
-  Anna Bogomolnaia and Eun Jeong Heo. Probabilistic assignment of objects: Characterizing the serial rule. Journal of Economic Theory, 147(5):2072–2082, 2012.
-  Anna Bogomolnaia and Hervé Moulin. A new solution to the random assignment problem. Journal of Economic Theory, 100(2):295–328, 2001.
-  Sylvain Bouveret and Jérôme Lang. A general elicitation-free protocol for allocating indivisible goods. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Volume One, pages 73–78. AAAI Press, 2011.
-  Sylvain Bouveret and Jérôme Lang. Manipulating picking sequences. In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI’14), Prague, Czech Republic, August 2014. IOS Press.
-  Eric Budish and Estelle Cantillon. The multi-unit assignment problem: Theory and evidence from course allocation at Harvard. American Economic Review, 102(5):2237–71, 2012.
-  Yeon-Koo Che and Fuhito Kojima. Asymptotic equivalence of probabilistic serial and random priority mechanisms. Econometrica, 78(5):1625–1672, 2010.
-  Carmel Domshlak, Eyke Hüllermeier, Souhila Kaci, and Henri Prade. Preferences in ai: An overview. Artificial Intelligence, 175(7):1037–1052, 2011.
-  Lars Ehlers and Bettina Klaus. Coalitional strategy-proof and resource-monotonic solutions for multiple assignment problems. Social Choice and Welfare, 21(2):265–280, 2003.
-  Peter C Fishburn. Lexicographic orders, utilities and decision rules: A survey. Management Science, pages 1442–1471, 1974.
-  Niall M Fraser. Ordinal preference representations. Theory and Decision, 36(1):45–67, 1994.
-  John William Hatfield. Strategy-proof, efficient, and nonbossy quota allocations. Social Choice and Welfare, 33(3):505–515, 2009.
-  David Hugh-Jones, Morimitsu Kurino, and Christoph Vanberg. An experimental study on the incentives of the probabilistic serial mechanism. Technical report, Discussion Paper, Social Science Research Center Berlin (WZB), Research Area Markets and Politics, Research Unit Market Behavior, 2013.
-  Thomas Kalinowski, Nina Narodytska, Toby Walsh, and Lirong Xia. Strategic behavior when allocating indivisible goods sequentially. In Twenty-Seventh AAAI Conference on Artificial Intelligence (AAAI-13), 2013.
-  Fuhito Kojima. Random assignment of multiple indivisible objects. Mathematical Social Sciences, 57(1):134–142, 2009.
-  Fuhito Kojima and Mihai Manea. Incentives in the probabilistic serial mechanism. Journal of Economic Theory, 145(1):106–123, 2010.
-  Qingmin Liu and Marek Pycia. Ordinal efficiency, fairness, and incentives in large markets. Unpublished mimeo, 2013.
-  Mihai Manea. Asymptotic ordinal inefficiency of random serial dictatorship. Theoretical Economics, 4(2):165–197, 2009.
-  David Manlove. Algorithmics of matching under preferences. World Scientific Publishing, 2013.
-  Szilvia Pápai. Strategyproof and nonbossy multiple assignments. Journal of Public Economic Theory, 3(3):257–271, 2001.
-  Parag A Pathak. Lotteries in student assignment. Unpublished mimeo, Harvard University, 2006.
-  Alvin E Roth, Tayfun Sönmez, and M Utku Ünver. Kidney exchange. The Quarterly Journal of Economics, 119(2):457–488, 2004.
-  Daniela Saban and Jay Sethuraman. A note on object allocation under lexicographic preferences. Journal of Mathematical Economics, 2013.
-  Leonard J Schulman and Vijay V Vazirani. Allocation of divisible goods under lexicographic preferences. arXiv preprint arXiv:1206.4366, 2012.
-  Tayfun Sönmez and M Utku Ünver. Course bidding at business schools. International Economic Review, 51(1):99–123, 2010.
-  Tayfun Sönmez and M Utku Ünver. House allocation with existing tenants: A characterization. Games and Economic Behavior, 69(2):425–445, 2010.
-  Lars-Gunnar Svensson. Strategy-proof allocation of indivisible goods. Social Choice and Welfare, 16(4):557–567, 1999.
-  John Von Neumann. A certain zero-sum two-person game equivalent to the optimal assignment problem. Contributions to the Theory of Games, 2:5–12, 1953.
-  Fusun Yaman, Thomas J Walsh, Michael L Littman, and Marie Desjardins. Democratic approximation of lexicographic preference models. In Proceedings of the 25th International Conference on Machine Learning, pages 1200–1207. ACM, 2008.