1 Introduction
Efficient and fair allocation of resources is a pressing problem within society today. One important and challenging case is the fair allocation of indivisible items (Chevaleyre et al., 2006; Bouveret and Lang, 2008; Bouveret et al., 2010; Aziz et al., 2014b; Aziz, 2014). This covers a wide range of problems including the allocation of classes to students, landing slots to airlines, players to teams, and houses to people. A simple but popular mechanism to allocate indivisible items is sequential allocation (Bouveret and Lang, 2011; Brams and Taylor, 1996; Kohler and Chandrasekaran, 1971; Levine and Stange, 2012). In sequential allocation, agents simply take turns to pick the most preferred item that has not yet been taken. Besides its simplicity, it has a number of advantages including the fact that the mechanism can be implemented in a distributed manner and that agents do not need to submit cardinal utilities. Well-known mechanisms like serial dictatorship (Svensson, 1999) fall under the umbrella of sequential mechanisms.
The sequential allocation mechanism leaves open the particular order of turns (the so-called “policy”) (Kalinowski et al., 2013a; Bouveret and Lang, 2014). Should it be a balanced policy, i.e., one in which each agent gets the same total number of turns? Or should it be recursively balanced, so that turns occur in rounds and each agent gets one turn per round? Or perhaps it would be fairer to alternate but reverse the order of the agents in successive rounds, so that with three agents the policy is 123321 and agent 1 takes the first and sixth turns? This particular type of policy is used, for example, by the Harvard Business School to allocate courses to students (Budish and Cantillion, 2012) and is referred to as a balanced alternation policy. Another class of policies is strict alternation, in which the same ordering is used in each round, such as 123123. The sets of balanced alternation and strict alternation policies are subsets of the set of recursively balanced policies, which itself is a subset of the set of balanced policies (see Figure 1).
We consider here the situation where a policy is chosen from a family of such policies. For example, at the Harvard Business School, a policy is chosen at random from the space of all balanced alternation policies. As a second example, the policy might be left to the discretion of the chair but, for fairness, it is restricted to one of the recursively balanced policies. Despite uncertainty in the policy, we might be interested in the possible or necessary outcomes. For example, can I get my three most preferred courses? Do I necessarily get my two most preferred courses? We examine the complexity of checking such questions. There are several high-stakes applications for these results. For example, sequential allocation is used in professional sports ‘drafts’ (Brams and Straffin, 1979). The precise policy chosen from among the set of admissible policies can critically affect which teams (read agents) get which players (read items).
The problems of checking whether an agent can get some item or set of items in a policy, or in all policies, are closely related to the problem of ‘control’ by the central organizer. For example, if an agent gets an item in all feasible policies, then the chair cannot ensure that the agent does not get the item. Apart from this strategic motivation, the problems we consider also have a design motivation. The central designer may want to consider all feasible policies uniformly at random (as is the case in random serial dictatorship (Aziz et al., 2013; Saban and Sethuraman, 2013)) and use them to find the probability that a certain item or set of items is given to an agent. The probability can be interpreted as a suggested time sharing of an item. The problem of checking whether an agent gets a certain item or set of items in some policy is equivalent to checking whether the agent gets it with nonzero probability. Similarly, the problem of checking whether an agent gets a certain item or set of items in all policies is equivalent to checking whether the agent gets it with probability one.
We let N = {1, …, n} denote a set of agents and O = {o_1, …, o_m} the set of items, where m = kn for some integer k (this is without loss of generality, since we can add dummy items of no utility to any agent). ≻ = (≻_1, …, ≻_n) is the profile of agents’ preferences, where each ≻_i is a linear order over O. Let M denote an assignment of all items to agents, that is, a partition of O into bundles M(1), …, M(n). We will denote a class of policies by C. Any policy π, a sequence of m agent turns, specifies the turns of the agents. When an agent takes her turn, she picks her most preferred item that has not yet been allocated. We leave it to future work to consider agents picking strategically. Sincere picking is a reasonable starting point since, when the policy is uncertain, a risk-averse agent is likely to pick sincerely.
Example 1.
Consider the setting in which N = {1, 2} and O = {o_1, o_2, o_3, o_4}, the preferences of agent 1 are o_1 ≻ o_2 ≻ o_3 ≻ o_4, and the preferences of agent 2 are o_2 ≻ o_1 ≻ o_3 ≻ o_4. Then for the policy 1212, agent 1 gets {o_1, o_3} whilst 2 gets {o_2, o_4}.
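Sincere picking is easy to simulate directly. The following sketch is ours, not the paper's (the function name and data layout are our own choices); it runs an arbitrary policy, given as a sequence of agent turns, on a preference profile:

```python
def sequential_allocation(preferences, policy):
    """Simulate sincere sequential allocation.

    preferences: dict mapping each agent to her preference list,
        most preferred item first.
    policy: sequence of agents giving the order of turns.
    Returns a dict mapping each agent to the list of items she picked,
    in the order she picked them.
    """
    taken = set()
    allocation = {agent: [] for agent in preferences}
    for agent in policy:
        # The agent sincerely picks her most preferred item still available.
        pick = next(item for item in preferences[agent] if item not in taken)
        taken.add(pick)
        allocation[agent].append(pick)
    return allocation
```

For instance, on a two-agent profile where agent 1 ranks o1 ≻ o2 ≻ o3 ≻ o4 and agent 2 ranks o2 ≻ o1 ≻ o3 ≻ o4, the policy 1212 gives agent 1 the items o1 and o3.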
We consider the following natural computational problems.

PossibleAssignment: Given (N, O, ≻), an assignment M, and a policy class C, does there exist a policy in C which results in M?

NecessaryAssignment: Given (N, O, ≻), an assignment M, and a policy class C, is M the result of all policies in C?

PossibleItem: Given (N, O, ≻), where i ∈ N and o ∈ O, and a policy class C, does there exist a policy in C such that agent i gets item o?

NecessaryItem: Given (N, O, ≻), where i ∈ N and o ∈ O, and a policy class C, does agent i get item o for all policies in C?

PossibleSet: Given (N, O, ≻), where i ∈ N and S ⊆ O, and a policy class C, does there exist a policy in C such that agent i gets exactly S?

NecessarySet: Given (N, O, ≻), where i ∈ N and S ⊆ O, and a policy class C, does agent i get exactly S for all policies in C?

PossibleSubset: Given (N, O, ≻), where i ∈ N and S ⊆ O, and a policy class C, does there exist a policy in C such that agent i gets at least S?

NecessarySubset: Given (N, O, ≻), where i ∈ N and S ⊆ O, and a policy class C, does agent i get at least S for all policies in C?
We will also consider the problems top PossibleSet and top NecessarySet, which are the restrictions of PossibleSet and NecessarySet in which the set S is the set of top k items of the distinguished agent. When policies are chosen at random, the possible and necessary allocation problems we consider are also fundamental to understanding more complex problems of computing the probability of certain allocations.
Contributions.
Our contributions are twofold. First, we provide necessary and sufficient conditions for an allocation to be the outcome of balanced policies, recursively balanced policies, and balanced alternation policies, respectively. Previously, Brams and King (2005) characterized the outcomes of arbitrary policies. In a similar vein, we provide necessary and sufficient conditions for more interesting classes of policies such as recursively balanced and balanced alternation policies. Second, we provide a detailed analysis of the computational complexity of possible and necessary allocations under sequential policies. Table 1 summarizes our complexity results. Our NP-/coNP-completeness results also imply that there exists no polynomial-time algorithm that can approximate within any factor the number of admissible policies which do or do not satisfy the target goals.
Table 1: Summary of our complexity results for each class of sequential policy (NPC = NP-complete, coNPC = coNP-complete).

| Problems | Any | Balanced | Recursively Balanced | Strict Alternation | Balanced Alternation |
|---|---|---|---|---|---|
| PossibleItem | in P | NPC (Thm. 3) | NPC (Thm. 3) | NPC (Thm. 3) | NPC (Thm. 3) |
| NecessaryItem | in P | in P for constant k (Thm. 7); coNPC otherwise (Thm. 9) | coNPC (Thm. 12) | coNPC (Thm. 19) | coNPC (Thm. 22) |
| PossibleSet | in P | NPC (Thm. 3) | NPC (Thm. 3) | NPC (Thm. 3) | NPC (Thm. 3) |
| NecessarySet | in P | in P (Thm. 10) | coNPC (Thm. 12) | coNPC (Thm. 19) | coNPC (Thm. 23) |
| Top PossibleSet | in P | in P (trivial) | in P (Thm. 13); NPC (Thm. 14) | in P (Thm. 17); NPC (Thm. 18) | NPC (Thm. 22) |
| Top NecessarySet | in P | in P (Thm. 10) | coNPC (Thm. 12) | coNPC (Thm. 19) | coNPC (Thm. 23) |
| PossibleSubset | in P | NPC (Thm. 3) | NPC (Thm. 3) | NPC (Thm. 3) | NPC (Thm. 3) |
| NecessarySubset | in P | in P for constant k (Thm. 8); coNPC otherwise (Thm. 9) | coNPC (Thm. 12) | coNPC (Thm. 19) | coNPC (Thm. 22) |
| PossibleAssignment | in P | in P (Coro. 1) | in P (Coro. 2) | in P (Coro. 3) | in P (Coro. 4) |
| NecessaryAssignment | in P | in P (Thm. 6) | in P (Thm. 11) | in P (Thm. 16) | in P (Thm. 21) |
Related Work.
Sequential allocation has been considered in the operations research and fair division literature (e.g. (Kohler and Chandrasekaran, 1971; Brams and Taylor, 1996)). It was popularized within the AI literature as a simple yet effective distributed mechanism (Bouveret and Lang, 2011) and has been studied in more detail subsequently (Kalinowski et al., 2013a, b; Bouveret and Lang, 2014). In particular, the complexity of manipulating an agent’s preferences has been studied (Bouveret and Lang, 2011, 2014) supposing that one agent knows the preferences of the other agents as well as the policy. Similarly in the problems we consider, the central authority knows beforehand the preferences of all agents.
The problems considered in the paper are similar in spirit to a class of control problems studied in voting theory: if it is possible to select a voting rule from the set of voting rules, can one be selected to obtain a certain outcome (Erdélyi and Elkind, 2012). They are also related to a class of control problems in knockout tournaments: does there exist a draw of a tournament for which a given player wins the tournament (Vu et al., 2009; Aziz et al., 2014a). Possible and necessary winners have also been considered in voting theory for settings in which the preferences of the agents are not fully specified (Konczak and Lang, 2005; Betzler and Dorn, 2010; Baumeister and Rothe, 2010; Bachrach et al., 2010; Xia and Conitzer, 2011; Aziz et al., 2012).
When k = 1, serial dictatorship is a well-known mechanism in which there is an ordering of the agents and, with respect to that ordering, each agent picks her most preferred unallocated item on her turn (Svensson, 1999). We note that serial dictatorship for k = 1 is a balanced, recursively balanced, and balanced alternation policy.
2 Characterizations of Outcomes of Sequential Allocation
In this section we provide necessary and sufficient conditions for a given allocation to be the outcome of a balanced policy, a recursively balanced policy, or a balanced alternation policy. We first define conditions on an allocation M. An allocation is Pareto optimal if there is no other allocation in which each item of each agent is replaced by an item that is at least as preferred, and at least one item of some agent is replaced by a strictly more preferred item.
Condition 1.
M is Pareto optimal.
Condition 2.
M is balanced, i.e., each agent gets exactly k items.
It is well-known that Condition 1 characterizes the outcomes of all sequential allocation mechanisms (without constraints). Brams and King (2005) proved that an assignment is achievable via sequential allocation iff it satisfies Condition 1. The theorem of Brams and King (2005) generalized the characterization of Abdulkadiroğlu and Sönmez (1998) of Pareto optimal assignments as outcomes of serial dictatorships when m = n. We first observe the following simple adaptation of the characterization of Brams and King (2005) to characterize the possible outcomes of balanced policies:
Remark 1.
A balanced assignment M is achievable via a balanced policy if and only if it satisfies Conditions 1 and 2.
Given a balanced allocation M, for each agent i and each j ∈ {1, …, k}, let o_{i,j} denote the item that is ranked at the jth position by agent i among all items allocated to agent i by M. The third condition requires that, for all j, no agent prefers the (j+1)th ranked item allocated to any other agent to the jth ranked item allocated to her.
Condition 3.
For all j ∈ {1, …, k−1} and all pairs of agents i and i', agent i prefers o_{i,j} to o_{i',j+1}.
The next theorem states that Conditions 1 through 3 characterize outcomes of recursively balanced policies.
Theorem 1.
An assignment M is the outcome of a recursively balanced policy if and only if it satisfies Conditions 1, 2, and 3.
Proof.
To prove the “only if” direction: clearly, if M is the outcome of a recursively balanced policy π then Conditions 1 and 2 are satisfied. If Condition 3 is not satisfied, then there exist j and a pair of agents i and i' such that agent i prefers o_{i',j+1} to o_{i,j}. We note that in the round when agent i is about to choose o_{i,j} according to π, item o_{i',j+1} is still available, because it is allocated by π in a later round. However, in this case agent i will not choose o_{i,j}, because it is not her top-ranked available item, which is a contradiction.
To prove the “if” direction: for any allocation M that satisfies the three conditions, we construct a recursively balanced policy π. For each j ∈ {1, …, k}, let phase j denote the ((j−1)n+1)th through the (jn)th turns, that is, the jth round. It follows that, for all i, the items o_{i,j} are allocated in phase j. Because of Condition 3 (together with Condition 1), once all items allocated in earlier phases are removed, (o_{1,j}, …, o_{n,j}) is a Pareto optimal allocation of one item per agent. Therefore there exists an order σ_j over the agents under which sincere picking gives exactly this allocation. Let π = σ_1 σ_2 ⋯ σ_k. It is not hard to verify that π is recursively balanced and that M is the outcome of π.∎
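The conditions above are straightforward to verify for a given allocation. As an illustrative sketch (the function name and data layout are ours, not the paper's), Condition 3 can be checked directly from the ranks of the items o_{i,j}:

```python
def satisfies_condition3(preferences, allocation):
    """Check Condition 3: for every j, no agent prefers the (j+1)-th ranked
    item allocated to any agent to the j-th ranked item allocated to her."""
    rank = {a: {item: r for r, item in enumerate(preferences[a])}
            for a in preferences}
    # ordered[a][j] is o_{a,j+1}: the item ranked (j+1)-th by agent a
    # among the items allocated to her.
    ordered = {a: sorted(allocation[a], key=lambda item: rank[a][item])
               for a in allocation}
    k = len(next(iter(ordered.values())))
    for a in ordered:
        for b in ordered:
            for j in range(k - 1):
                # Violation: agent a prefers b's (j+1)-th item to her j-th item.
                if rank[a][ordered[b][j + 1]] < rank[a][ordered[a][j]]:
                    return False
    return True
```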
Given a profile and an allocation M that is the outcome of a recursively balanced policy, that is, one satisfying the three conditions as proved in Theorem 1, we construct a directed graph G_M whose vertices are the agents and whose edges are added in the following way. For each odd j, we add a directed edge (i', i) if and only if agent i prefers o_{i',j} to o_{i,j} and the edge is not already in G_M; for each even j, we add a directed edge (i, i') if and only if agent i prefers o_{i',j} to o_{i,j} and the edge is not already in G_M.
Condition 4.
Suppose M is the outcome of a recursively balanced policy. There is no cycle in G_M.
Theorem 2.
An assignment M is the outcome of a balanced alternation policy if and only if it satisfies Conditions 1, 2, 3, and 4.
Proof.
The “only if” direction: Suppose M is achievable by a balanced alternation policy π, and let σ denote the order over the agents used in the odd rounds of π. Let K_σ denote the directed graph whose vertices are the agents and which has an edge (i, i') if and only if i precedes i' in σ. It is easy to see that K_σ is acyclic and complete. We claim that G_M is a subgraph of K_σ. For the sake of contradiction suppose there is an edge (i, i') in G_M but not in K_σ. If (i, i') was added to G_M in an odd round j, then agent i' prefers o_{i,j} to o_{i',j}. Because (i, i') is not in K_σ, agent i' precedes i in σ and hence picks before i in round j. This means that right before i' chooses o_{i',j} in π, item o_{i,j} is still available, which contradicts the assumption that i' chooses o_{i',j} in π. If (i, i') was added to G_M in an even round, then following a similar argument we can also derive a contradiction. Therefore, G_M is a subgraph of K_σ, which means that G_M is acyclic.
The “if” direction: Suppose the four conditions are satisfied. Because G_M has no cycle, we can find a linear order σ over the agents such that G_M is a subgraph of K_σ. We next prove that M is achievable by the balanced alternation policy π that uses σ in odd rounds and the reverse of σ in even rounds. For the sake of contradiction suppose this is not true, and let t denote the earliest turn at which the allocation in π differs from M. Let i denote the agent who picks at turn t of π, in some round j; let o denote the item she gets at turn t in π, and o_{i,j} the item that she is supposed to get according to M. Due to Condition 3, o must be an item o_{i',j} allocated by M in round j. If i' = i then o = o_{i,j}, which contradicts the selection of t. Therefore i' ≠ i, and agent i prefers o_{i',j} to o_{i,j}. If j is odd, then there is an edge (i', i) in G_M, which means that i' precedes i in σ. This means that i' would have chosen o_{i',j} at an earlier turn of round j, so o_{i',j} would no longer be available at turn t, which is a contradiction. If j is even, a similar contradiction can be derived. Therefore M is achievable by π. ∎
Given a profile and an allocation M that is the outcome of a recursively balanced policy, that is, one satisfying the three conditions as proved in Theorem 1, we construct a second directed graph G'_M whose vertices are the agents and whose edges are added in the following way. As before, for each i and j, let o_{i,j} denote the item that is ranked at the jth position among all items allocated to agent i. For each j ∈ {1, …, k}, we add a directed edge (i', i) if agent i prefers o_{i',j} to o_{i,j} and the edge is not already there.
Condition 5.
Suppose M is the outcome of a recursively balanced policy. There is no cycle in G'_M.
Theorem 3.
An assignment M is the outcome of a strict alternation policy if and only if it satisfies Conditions 1, 2, 3, and 5.
Proof.
The “only if” direction: If M is an outcome of a strict alternation policy but does not satisfy Condition 5, then there is a cycle in G'_M. Let agents i and i' be two agents on the cycle. The cycle forces i' to pick before i in one round and i to pick before i' in some other round, which contradicts the fact that the same order is used in every round.
The “if” direction: Now assume that M is an outcome of a recursively balanced policy but of no strict alternation policy. This means that in any recursively balanced policy achieving M, there exist at least two agents i and i' such that i comes before i' in one round and i' comes before i in some other round. But this means that there is a cycle in the graph G'_M. ∎
3 General Complexity Results
Before we delve into the complexity results, we observe the following reductions between various problems.
Lemma 1.
Fixing the policy class to be one of {all, balanced policies, recursively balanced policies, balanced alternation policies}, there exist polynomial-time many-one reductions between the following problems: PossibleSet to PossibleSubset; PossibleItem to PossibleSubset; Top PossibleSet to PossibleSet; NecessarySet to NecessarySubset; NecessaryItem to NecessarySubset; and Top NecessarySet to NecessarySet.
A polynomial-time many-one reduction from problem A to problem B means that if A is NP-hard (coNP-hard) then B is also NP-hard (coNP-hard), and if B is in P then A is also in P. We also note the following.
Remark 2.
For n = 2, PossibleAssignment and PossibleSet are equivalent for any type of policy: since n = 2, the allocation of one agent completely determines the overall assignment.
For k = 1, checking whether there is a serial dictatorship under which each agent gets exactly one item and a designated agent gets a designated item is NP-complete (Theorem 2, Saban and Sethuraman, 2013). They also proved that for k = 1, checking whether, for all serial dictatorships, a designated agent gets a designated item is polynomial-time solvable. Hence, we get the following statements.
Remark 3.
PossibleItem and PossibleSet are NP-complete for balanced, recursively balanced, as well as balanced alternation policies.
Remark 4.
For k = 1, NecessaryItem and NecessarySet are polynomial-time solvable for balanced, recursively balanced, and balanced alternation policies.
These hardness results do not necessarily hold if we consider the top item or the top k items. Therefore, we will especially consider top PossibleSet.
4 Arbitrary Policies
We first observe that for arbitrary policies, PossibleItem, NecessaryItem and NecessarySet are trivial: PossibleItem always has a yes answer (just give all the turns to that agent) and NecessaryItem and NecessarySet always have a no answer (just don’t give the agent any turn). Similarly, NecessaryAssignment always has a no answer.
Remark 5.
PossibleItem, NecessaryItem, NecessarySet, and NecessaryAssignment are polynomialtime solvable for arbitrary policies.
Theorem 4.
PossibleAssignment is polynomialtime solvable for arbitrary policies.
Proof.
By the characterization of Brams and King (2005), an assignment is achievable via some sequential policy if and only if it is Pareto optimal (Condition 1), and Pareto optimality of a given assignment can be checked in polynomial time (see, e.g., Abraham et al., 2005). ∎
There is also a polynomialtime algorithm for PossibleSet for arbitrary policies.
Theorem 5.
PossibleSet is polynomialtime solvable for arbitrary policies.
Proof.
The following algorithm works for PossibleSet. Let the target set of the distinguished agent i be S. If there is any agent j ≠ i whose most preferred unallocated item is not in S, let j pick it. If no agent in N ∖ {i} wants to pick an item outside S, and i does not want to pick an item from S, return no. If no agent in N ∖ {i} wants to pick an item outside S, and i wants to pick an item o ∈ S, let i pick o. If some agent in N ∖ {i} wants to pick an item o ∈ S, and i also wants to pick o, then we let i pick o. Repeat the process until all the items are allocated (in which case return yes) or we return no at some point. ∎
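The greedy procedure above can be sketched as follows (the function name and data layout are our own; `preferences` maps each agent to her preference list). At each step it prefers giving the turn to an agent other than i whenever that agent would pick outside S:

```python
def possible_set(preferences, i, S):
    """Greedy check whether agent i can receive exactly the set S under
    some (unrestricted) policy, following the algorithm above."""
    S = set(S)
    taken = set()
    m = len(preferences[i])

    def top(agent):
        # Most preferred item of `agent` that is still unallocated.
        return next(x for x in preferences[agent] if x not in taken)

    while len(taken) < m:
        others = [a for a in preferences if a != i]
        j = next((a for a in others if top(a) not in S), None)
        if j is not None:
            taken.add(top(j))      # another agent picks an item outside S
        elif top(i) in S:
            taken.add(top(i))      # agent i picks her next item of S
        else:
            return False           # every possible move spoils the target set
    return True
```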
5 Balanced Policies
In contrast to arbitrary policies, PossibleItem, NecessaryItem, NecessarySet, and NecessaryAssignment are more interesting for balanced policies since we may be restricted in allocating items to a given agent to ensure balance. Before we consider them, we get the following corollary of Remark 1.
Corollary 1.
PossibleAssignment for balanced policies is in P.
Note that an assignment is achieved via all balanced policies iff it is the unique balanced assignment that is Pareto optimal. This is only possible if each agent gets her top k items. Hence, we obtain the following.
Theorem 6.
NecessaryAssignment for balanced policies is in P.
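Following the observation above, the test is immediate. A sketch under our own data layout (names are ours, not the paper's):

```python
def necessary_assignment_balanced(preferences, allocation):
    """'Yes' iff every balanced policy results in `allocation`; by the
    observation above this holds iff each agent's bundle is exactly her
    top k items (so no two agents compete for any of these items)."""
    n = len(allocation)
    m = sum(len(bundle) for bundle in allocation.values())
    k = m // n
    return all(set(allocation[a]) == set(preferences[a][:k])
               for a in allocation)
```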
Compared to NecessaryAssignment, the other ‘necessary’ problems are more challenging.
Theorem 7.
For any constant k, NecessaryItem for balanced policies is in P.
Proof.
Given a NecessaryItem instance (N, O, ≻, i, o), if o is ranked below the kth position by agent i then we can return “No”, because by giving agent i the first k turns she picks her top k items and does not get item o.
Suppose instead that o is ranked at the ℓth position by agent i with ℓ ≤ k. The next claim provides an equivalent condition to check whether the NecessaryItem instance is a “No” instance.
Claim.
Suppose o is ranked at the ℓth position by agent i with ℓ ≤ k. The NecessaryItem instance is a “No” instance if and only if there exists a balanced policy π such that (i) agent i picks items in the first ℓ−1 turns and the last k−ℓ+1 turns, and (ii) agent i does not get o.
Let T denote agent i’s top ℓ−1 items. In light of the claim, to check whether the instance is a “No” instance, it suffices to check, for every set E of k−ℓ+1 items ranked below the ℓth position by agent i, whether it is possible for agent i to get T and E by a balanced policy where agent i picks items in the first ℓ−1 turns and the last k−ℓ+1 turns. To this end, for each such E with |E| = k−ℓ+1, we construct the following maximum flow problem F_E, which can be solved in polynomial time by, e.g., the Ford–Fulkerson algorithm.

Vertices: a source s, a sink t, one vertex for each agent in N ∖ {i}, and one vertex for each item in O ∖ (T ∪ E).

Edges and weights: for each j ∈ N ∖ {i}, there is an edge (s, j) with weight k; for each j ∈ N ∖ {i} and each item o' ∈ O ∖ (T ∪ E) such that agent j ranks o' above all items in E, there is an edge (j, o') with weight 1; for each o' ∈ O ∖ (T ∪ E), there is an edge (o', t) with weight 1.

We are asked whether the maximum amount of flow from s to t is (n−1)k (the maximum possible flow from s to t).
Claim.
The NecessaryItem instance is a “No” instance if and only if there exists E with |E| = k−ℓ+1 such that F_E has a solution.
Because k is a constant, the number of sets E that we need to check is polynomial, and so we obtain a polynomial-time algorithm for NecessaryItem with balanced policies. ∎
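Since all capacities on the item side are one, the flow problem F_E can equally be solved as a capacitated bipartite matching. The sketch below (all names are ours, following our reading of the construction) checks feasibility for one candidate set E by augmenting paths, where each agent other than i has capacity k:

```python
def feasible_F_E(preferences, i, T, E, k):
    """Feasibility of F_E: can every item outside T∪E be assigned to an
    agent other than i who ranks it above all items of E, with at most k
    items per agent?  Equivalent to the max flow having value (n-1)k."""
    blocked = set(T) | set(E)
    items = [o for o in preferences[i] if o not in blocked]
    agents = [a for a in preferences if a != i]
    rank = {a: {o: r for r, o in enumerate(preferences[a])}
            for a in preferences}
    eligible = {a: {o for o in items
                    if all(rank[a][o] < rank[a][e] for e in E)}
                for a in agents}
    assigned = {a: [] for a in agents}  # current partial assignment

    def place(o, seen):
        # Try to give item o to some eligible agent, reassigning items
        # along an augmenting path when every eligible agent is full.
        for a in agents:
            if o in eligible[a] and a not in seen:
                seen.add(a)
                if len(assigned[a]) < k:
                    assigned[a].append(o)
                    return True
                for o2 in list(assigned[a]):
                    assigned[a].remove(o2)
                    if place(o2, seen):
                        assigned[a].append(o)
                        return True
                    assigned[a].append(o2)
        return False

    return all(place(o, set()) for o in items)
```

For example, with two agents and k = 2, agent i's top item o can be denied exactly when the other agent can absorb the remaining items while ranking them above all of E.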
Theorem 8.
For any constant k, NecessarySet and NecessarySubset for balanced policies are in P.
Proof.
Given a NecessarySet instance (N, O, ≻, i, S) with |S| = k, if S is not the set of top k items of agent i then it is a “No” instance, because we can simply let agent i choose items in the first k turns. When S is the set of top k items of agent i, the instance is a “No” instance if and only if the NecessaryItem instance for some item o ∈ S is a “No” instance, which can be checked in polynomial time by Theorem 7. A similar algorithm works for NecessarySubset.∎
Theorem 9.
NecessaryItem and NecessarySubset for balanced policies are coNP-complete when k is not fixed.
Proof.
Membership in coNP is obvious. By Lemma 1 it suffices to prove that NecessaryItem is coNP-hard, which we prove by a reduction from PossibleItem for k = 1, which is NP-complete (Saban and Sethuraman, 2013). Let I denote an instance of PossibleItem for k = 1, where we are asked whether it is possible for a distinguished agent to get a distinguished item o in some sequential allocation. Given I, we construct a NecessaryItem instance that adds one further agent together with enough dummy items to make the instance balanced: each original agent’s preferences are obtained from her preferences in I by interleaving her dummy items and placing o at the bottom position, the additional agent ranks o below her dummy items, and we ask whether the additional agent always gets item o.
If I has a solution, then the corresponding balanced policy first allocates the dummy items to the original agents and then runs the solution, so that o goes to the distinguished agent rather than to the additional agent; hence the NecessaryItem instance is a “No” instance. Conversely, suppose the NecessaryItem instance is a “No” instance, so that the additional agent does not get o under some balanced policy π; then o must be allocated to one of the original agents. Moving the turns of agents that pick only dummy items after o is allocated to the end of π does not change the outcome up to the point at which o is picked, and the resulting order, restricted to the original agents and items, yields a solution to I.∎
Theorem 10.
NecessarySet and top NecessarySet for balanced policies are in P even when k is not fixed.
Proof.
Given an instance of NecessarySet, if the target set S is not agent i’s set of top k items then the answer is “No”, because we can simply let the agent choose items in the first k turns. It remains to show that top NecessarySet for balanced policies is in P. That is, given (N, O, ≻, i), we can check in polynomial time whether there is a balanced policy under which agent i does not get exactly her top k items.
Suppose agent i does not get her top k items under some balanced policy π. Let π' denote the order obtained from π by moving all of agent i’s turns to the end while keeping the other turns unchanged. It is easy to see that agent i does not get her top k items under π' either. Therefore, top NecessarySet is equivalent to checking whether there exists an order in which agent i picks in the last k turns and does not get at least one of her top k items.
We consider an equivalent, reduced allocation instance in which the agents are N, and the items are (O ∖ W) ∪ {w}, where W is the set of agent i’s top k items and w is a new item representing W. Each agent’s preferences over these items are obtained from her original preferences by replacing the first occurrence of an item in W by w, and then removing all remaining items of W while keeping the order of the other items the same. We are asked whether there exists an order in which agent i is the last to pick and picks a single item, each other agent picks k times, and agent i does not get item w. This problem can be solved by a polynomial-time algorithm based on maximum flows that is similar to the algorithm for NecessaryItem for balanced policies in Theorem 7. ∎
6 Recursively Balanced Policies
In this section, we consider recursively balanced policies. From Theorem 1, we get the following corollary.
Corollary 2.
PossibleAssignment for recursively balanced policies is in P.
We also report computational results for problems other than PossibleAssignment.
Theorem 11.
NecessaryAssignment for recursively balanced policies is in P.
Proof Sketch.
We initialize r to 1, i.e., we focus on the first round. We check whether there is an agent whose turn has not yet come in round r and whose most preferred unallocated item is not the item assigned to her by M for this round. In this case return “No”. Otherwise, we complete the round in any order. If all the items are allocated, we return “Yes”. If r < k, we increment r by one and repeat. ∎
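The proof sketch translates into a short round-by-round test. The following is a sketch under our own names and data layout: in each round, every agent's most preferred unallocated item must already be the item that the target assignment M gives her for that round.

```python
def necessary_assignment_rb(preferences, allocation):
    """Round-by-round test from the proof sketch: return True iff every
    recursively balanced policy yields `allocation`."""
    rank = {a: {o: r for r, o in enumerate(preferences[a])}
            for a in preferences}
    # ordered[a][j] is the item that `allocation` gives agent a in round j+1
    # (her (j+1)-th ranked item among the items allocated to her).
    ordered = {a: sorted(allocation[a], key=lambda o: rank[a][o])
               for a in allocation}
    k = len(next(iter(ordered.values())))
    taken = set()
    for rnd in range(k):
        for a in preferences:
            top = next(o for o in preferences[a] if o not in taken)
            if top != ordered[a][rnd]:
                return False  # some order of this round deviates from M
        taken.update(ordered[b][rnd] for b in preferences)
    return True
```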
The other ‘necessary problems’ turn out to be computationally intractable.
Theorem 12.
For k ≥ 2, NecessaryItem, NecessarySet, top NecessarySet, and NecessarySubset for recursively balanced policies are coNP-complete.
Theorem 13.
Top PossibleSet for recursively balanced policies is in P for k = 2.
Proof Sketch.
Let the agent in question be i, and let a_1 and a_2 denote i’s top two items. We give agent i the first turn in each round, so agent i is guaranteed to get a_1 in the first round. We now construct a bipartite graph G between the agents N ∖ {i} and the items O ∖ {a_1, a_2}, in which (j, o) is an edge iff agent j prefers o to a_2. We check whether G admits a matching that perfectly matches the agent nodes. If G does not, we return no. Otherwise, there exists a recursively balanced policy for which agent i gets a_1 and a_2. ∎
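A sketch of this matching test for k = 2 (all names are ours; the bipartite matching is found by standard augmenting paths):

```python
def top_possible_set_rb_k2(preferences, i):
    """Can agent i get exactly her top two items under some recursively
    balanced policy with k = 2?  Agent i takes the first turn of each
    round; the matching hands every other agent a first-round item that
    she prefers to i's second item a2, so nobody grabs a2."""
    a1, a2 = preferences[i][0], preferences[i][1]
    agents = [a for a in preferences if a != i]
    rank = {a: {o: r for r, o in enumerate(preferences[a])}
            for a in preferences}
    eligible = {j: [o for o in preferences[j]
                    if o not in (a1, a2) and rank[j][o] < rank[j][a2]]
                for j in agents}
    match = {}  # item -> agent currently matched to it

    def augment(j, seen):
        # Standard augmenting-path search for bipartite matching.
        for o in eligible[j]:
            if o not in seen:
                seen.add(o)
                if o not in match or augment(match[o], seen):
                    match[o] = j
                    return True
        return False

    return all(augment(j, set()) for j in agents)
```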
Finally, top PossibleSet is NP-complete iff k ≥ 3.
Theorem 14.
For all k ≥ 3, top PossibleSet for recursively balanced policies is NP-complete.
The proof is given in the appendix.
7 Strict Alternation Policies
As for balanced alternation policies, there are n! possible strict alternation policies, so if n is constant, then all problems can be solved in polynomial time by brute-force search.
Theorem 15.
If the number of agents is constant, then PossibleItem, PossibleSet, NecessaryItem, NecessarySet, PossibleAssignment, and NecessaryAssignment are polynomial-time solvable for strict alternation policies.
As a result of our characterization of strict alternation outcomes (Theorem 3), we get the following.
Corollary 3.
PossibleAssignment for strict alternation policies is in P.
We also present other computational results.
Theorem 16.
NecessaryAssignment for strict alternation policies is in P.
Theorem 17.
Top PossibleSet for strict alternation policies is in P for k = 2.
For Theorem 17, the polynomialtime algorithm is similar to the algorithm for Theorem 13. The next theorems state that the remaining problems are hard to compute. Both theorems are proved by reductions from the PossibleItem problem.
Theorem 18.
For all k ≥ 3, top PossibleSet is NP-complete for strict alternation policies.
Theorem 19.
For all k ≥ 2, NecessaryItem, NecessarySet, top NecessarySet, and NecessarySubset are coNP-complete for strict alternation policies.
8 Balanced Alternation Policies
Balanced alternation policies and strict alternation policies are the most constrained classes among all the policy classes we study. There are n! possible balanced alternation policies, so if n is constant, then all problems can be solved in polynomial time by brute-force search. Note that such an argument does not apply to recursively balanced policies.
Theorem 20.
If the number of agents is constant, then PossibleItem, PossibleSet, NecessaryItem, NecessarySet, PossibleAssignment, and NecessaryAssignment are polynomial-time solvable for balanced alternation policies.
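For a constant number of agents the brute force is tiny. A sketch (names and data layout ours) that enumerates all n! base orders of a balanced alternation policy and tests NecessaryAssignment:

```python
from itertools import permutations

def balanced_alternation_outcome(preferences, sigma, k):
    """Run the balanced alternation policy generated by the base order
    sigma: odd rounds use sigma, even rounds its reverse, for k rounds."""
    taken = set()
    alloc = {a: set() for a in preferences}
    for rnd in range(k):
        order = sigma if rnd % 2 == 0 else tuple(reversed(sigma))
        for a in order:
            pick = next(o for o in preferences[a] if o not in taken)
            taken.add(pick)
            alloc[a].add(pick)
    return alloc

def necessary_assignment_ba(preferences, allocation):
    """Brute force over all n! base orders; polynomial for constant n."""
    k = len(next(iter(allocation.values())))
    target = {a: set(items) for a, items in allocation.items()}
    return all(balanced_alternation_outcome(preferences, sigma, k) == target
               for sigma in permutations(preferences))
```

The same enumeration answers the possible and necessary item, set, and assignment questions for constant n.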
As a result of our characterization of balanced alternation outcomes (Theorem 2), we get the following.
Corollary 4.
PossibleAssignment for balanced alternation policies is in P.
NecessaryAssignment can be solved efficiently as well:
Theorem 21.
NecessaryAssignment for balanced alternation policies is in P.
Proof.
We first check whether it is possible to find an order σ over N such that, after running the first pair of rounds (σ followed by its reverse), there exists an agent that does not get the items assigned to her by M. If so then we return “No”. Otherwise, we remove all items allocated in these rounds and check whether it is possible to find an order over N such that, after running the next pair of rounds on the reduced instance, there exists an agent that does not get the items assigned to her by M. If so then we return “No”. Otherwise, we iterate until all items are removed, in which case we return “Yes”. ∎
We already know that for k ≤ 2, the top possible and necessary problems can be solved in polynomial time. The next theorems state that for any k ≥ 3, they are NP-complete or coNP-complete for balanced alternation policies. Theorem 22 is proved by a reduction from the exact 3-cover problem and Theorem 23 is proved by a reduction from the PossibleItem problem.
Theorem 22.
For all k ≥ 3, top PossibleSet is NP-complete, NecessaryItem is coNP-complete, and NecessarySubset is coNP-complete for balanced alternation policies.
Theorem 23.
For all k ≥ 3, top NecessarySet for balanced alternation policies is coNP-complete.
9 Conclusions
We have studied sequential allocation mechanisms, like the course allocation mechanism at Harvard Business School, where the policy has not been fixed or has been fixed but not announced. We have characterized the allocations achievable with three common classes of policies: recursively balanced, strict alternation, and balanced alternation policies. We have also determined the computational complexity of identifying the possible or necessary items, sets, or subsets of items allocated to an agent when using one of these three policy classes, as well as the class of all policies. There are several interesting future directions, including considering other common classes of policies, as well as other properties of the outcome such as the possible or necessary welfare.
References
 Abdulkadiroğlu and Sönmez [1998] A. Abdulkadiroğlu and T. Sönmez. Random serial dictatorship and the core from random endowments in house allocation problems. Econometrica, 66(3):689–702, 1998.
 Abraham et al. [2005] D. J. Abraham, K. Cechlárová, D. Manlove, and K. Mehlhorn. Pareto optimality in house allocation problems. In Proc. of the 16th International Symposium on Algorithms and Computation (ISAAC), volume 3341 of LNCS, pages 1163–1175, 2005.
 Aziz [2014] H. Aziz. A note on the undercut procedure. In Proc. of the 13th AAMAS Conference, pages 1361–1362, 2014.
 Aziz et al. [2012] H. Aziz, M. Brill, F. Fischer, P. Harrenstein, J. Lang, and H. G. Seedig. Possible and necessary winners of partial tournaments. In Proc. of the 11th AAMAS Conference, pages 585–592. IFAAMAS, 2012.
 Aziz et al. [2013] H. Aziz, F. Brandt, and M. Brill. The computational complexity of random serial dictatorship. Economics Letters, 121(3):341–345, 2013.
 Aziz et al. [2014a] H. Aziz, S. Gaspers, S. Mackenzie, N. Mattei, P. Stursberg, and T. Walsh. Fixing a balanced knockout tournament. In Proc. of the 28th AAAI Conference, pages 552–558, 2014a.
 Aziz et al. [2014b] H. Aziz, S. Gaspers, S. Mackenzie, and T. Walsh. Fair assignment of indivisible objects under ordinal preferences. In Proc. of the 13th AAMAS Conference, pages 1305–1312, 2014b.
 Bachrach et al. [2010] Y. Bachrach, N. Betzler, and P. Faliszewski. Probabilistic possible winner determination. In Proc. of the 24th AAAI Conference, pages 697–702, 2010.
 Baumeister and Rothe [2010] D. Baumeister and J. Rothe. Taking the final step to a full dichotomy of the possible winner problem in pure scoring rules. In Proc. of the 19th European Conference on Artificial Intelligence (ECAI), 2010.
 Betzler and Dorn [2010] N. Betzler and B. Dorn. Towards a dichotomy for the possible winner problem in elections based on scoring rules. Journal of Computer and System Sciences, 76(8):812–836, 2010.
 Bouveret and Lang [2008] S. Bouveret and J. Lang. Efficiency and envy-freeness in fair division of indivisible goods: logical representation and complexity. Journal of Artificial Intelligence Research, 32(1):525–564, 2008.
 Bouveret and Lang [2011] S. Bouveret and J. Lang. A general elicitation-free protocol for allocating indivisible goods. In Proc. of the 22nd IJCAI, pages 73–78, 2011.
 Bouveret and Lang [2014] S. Bouveret and J. Lang. Manipulating picking sequences. In Proc. of the 21st European Conference on Artificial Intelligence (ECAI), pages 141–146, 2014.
 Bouveret et al. [2010] S. Bouveret, U. Endriss, and J. Lang. Fair division under ordinal preferences: Computing envy-free allocations of indivisible goods. In Proc. of the 19th European Conference on Artificial Intelligence (ECAI), pages 387–392, 2010.
 Brams and King [2005] S. J. Brams and D. L. King. Efficient fair division: Help the worst off or avoid envy? Rationality and Society, 17(4):387–421, 2005.
 Brams and Straffin [1979] S. J. Brams and P. D. Straffin. Prisoners’ dilemma and professional sports drafts. The American Mathematical Monthly, 86(2):80–88, 1979.
 Brams and Taylor [1996] S. J. Brams and A. D. Taylor. Fair Division: From Cake-Cutting to Dispute Resolution. Cambridge University Press, 1996.
 Budish and Cantillion [2012] E. Budish and E. Cantillion. The multi-unit assignment problem: Theory and evidence from course allocation at Harvard. American Economic Review, 102(5):2237–2271, 2012.
 Chevaleyre et al. [2006] Y. Chevaleyre, P. E. Dunne, U. Endriss, J. Lang, M. Lemaître, N. Maudet, J. Padget, S. Phelps, J. A. Rodríguez-Aguilar, and P. Sousa. Issues in multiagent resource allocation. Informatica, 30:3–31, 2006.
 Erdélyi and Elkind [2012] G. Erdélyi and E. Elkind. Manipulation under voting rule uncertainty. In Proc. of the 11th AAMAS Conference, pages 627–634, 2012.
 Kalinowski et al. [2013a] T. Kalinowski, N. Narodytska, and T. Walsh. A social welfare optimal sequential allocation procedure. In Proc. of the 23rd IJCAI, pages 227–233, 2013a.
 Kalinowski et al. [2013b] T. Kalinowski, N. Narodytska, T. Walsh, and L. Xia. Strategic behavior when allocating indivisible goods sequentially. In Proc. of the 27th AAAI Conference, pages 452–458, 2013b.
 Kohler and Chandrasekaran [1971] D. A. Kohler and R. Chandrasekaran. A class of sequential games. Operations Research, 19(2):270–277, 1971.
 Konczak and Lang [2005] K. Konczak and J. Lang. Voting procedures with incomplete preferences. In Multidisciplinary Workshop on Advances in Preference Handling, 2005.
 Levine and Stange [2012] L. Levine and K. E. Stange. How to make the most of a shared meal: Plan the last bite first. The American Mathematical Monthly, 119(7):550–565, 2012.
 Saban and Sethuraman [2013] D. Saban and J. Sethuraman. The complexity of computing the random priority allocation matrix. In Y. Chen and N. Immorlica, editors, Proc. of the 9th WINE, LNCS, 2013.
 Svensson [1999] L.-G. Svensson. Strategy-proof allocation of indivisible goods. Social Choice and Welfare, 16(4):557–567, 1999.
 Vu et al. [2009] T. Vu, A. Altman, and Y. Shoham. On the complexity of schedule control problems for knockout tournaments. In Proc. of the 8th AAMAS Conference, pages 225–232, 2009.
 Xia and Conitzer [2011] L. Xia and V. Conitzer. Determining possible and necessary winners under common voting rules given partial orders. Journal of Artificial Intelligence Research, 41:25–67, 2011.
Testing Pareto optimality
Lemma 2.
It can be checked in polynomial time whether a given assignment is Pareto optimal.
The set of assignments achieved via arbitrary policies is characterized by the Pareto optimal assignments. For any given assignment setting and an assignment, the corresponding cloned setting is one in which, for each item o owned by agent i, we make a copy of agent i, so that each copy owns exactly one item. Each copy has exactly the same preferences as agent i. The assignment in which the copies of the agents each get a single item is called the cloned transformation of the original assignment.
Claim.
An assignment is Pareto optimal iff its cloned transformation is Pareto optimal for the cloned setting.
Proof.
If an assignment is not Pareto optimal for the cloned setting, then there exists another assignment in which each of the cloned agents gets at least as preferred an item and at least one agent gets a strictly more preferred item. But if the new assignment for the cloned setting is transformed back to an assignment for the original setting, then it Pareto dominates the original assignment. Conversely, if an assignment is not Pareto optimal (with respect to responsive preferences), then there exists another assignment that Pareto dominates it. But this implies that the corresponding assignment also Pareto dominates the old assignment in the cloned setting. ∎
We are now ready to prove Lemma 2.
Proof.
By the claim above, the problem is equivalent to checking whether the cloned transformation of the assignment is Pareto optimal in the cloned setting. Pareto optimality of an assignment in which each agent has one item can be checked in time polynomial in the number m of items [see, e.g., Abraham et al., 2005]. Firstly, for each item o owned by agent i, we make a copy of agent i, so that each copy owns exactly one item; each copy has exactly the same preferences as agent i. Based on the ownership information and the preferences of the agent copies, we construct a trading graph in which each copy points to each of the items it strictly prefers to its own item, and each item points to its owner. Then the assignment in the cloned transformation is Pareto optimal iff the trading graph is acyclic [see, e.g., Abraham et al., 2005]. Acyclicity of a graph can be checked in time linear in the size of the graph via depth-first search. ∎
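The trading-graph test can be sketched as follows. This is a minimal illustration, not the paper's implementation: `prefs` maps each agent to a strict preference list, `alloc` maps each agent to her bundle (assumed to cover all listed items), the function clones one vertex per owned item, adds an edge from each clone to the holder of every item the clone strictly prefers to its own, and reports Pareto optimality iff the graph is acyclic.

```python
def is_pareto_optimal(prefs, alloc):
    """Pareto optimality via acyclicity of the trading graph."""
    owner = {o: a for a, items in alloc.items() for o in items}
    rank = {a: {o: r for r, o in enumerate(p)} for a, p in prefs.items()}
    clones = [(a, o) for a, items in alloc.items() for o in items]
    succ = {c: [] for c in clones}
    for a, o in clones:
        for o2 in prefs[a]:
            if rank[a][o2] < rank[a][o]:
                # clone (a, o) would trade for o2, held by owner[o2]
                succ[(a, o)].append((owner[o2], o2))
    color = {c: 0 for c in clones}  # 0 = new, 1 = on stack, 2 = done
    for start in clones:
        if color[start]:
            continue
        stack = [(start, iter(succ[start]))]
        color[start] = 1
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                color[node] = 2
                stack.pop()
            elif color[nxt] == 1:
                return False  # trading cycle: a Pareto improvement exists
            elif color[nxt] == 0:
                color[nxt] = 1
                stack.append((nxt, iter(succ[nxt])))
    return True
```

An iterative depth-first search is used so the sketch also works on long trading chains without hitting Python's recursion limit.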
Proof of Theorem 5
Proof.
Let the target allocation of agent 1 be B. If there is any other agent who wants to pick an item o ∉ B, let him pick it. If no other agent wants to pick such an item and agent 1 does not want to pick an item from B, return no. If no other agent wants to pick such an item and agent 1 wants to pick an item o ∈ B, let agent 1 pick o. If some other agent wants to pick an item o ∈ B and agent 1 also wants to pick o, then we let agent 1 pick o. Repeat the process until all the items are allocated or we return no at some point.
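The greedy procedure above can be sketched in code. This is a simplified reading under the assumption that "agent i wants to pick o" means o is i's most preferred unallocated item; the function name and data layout are illustrative, not from the paper.

```python
def achievable_target_set(prefs, agent, target):
    """Greedy feasibility check (sketch): can `agent` end up with exactly
    the items in `target` under some picking sequence?"""
    target = set(target)
    remaining = set(prefs[agent])  # assume all agents rank all items
    got = set()
    while remaining:
        picked_elsewhere = False
        for a, p in prefs.items():
            if a == agent:
                continue
            top = next(o for o in p if o in remaining)
            if top not in target:
                remaining.discard(top)  # let this agent take it
                picked_elsewhere = True
                break
        if picked_elsewhere:
            continue
        # every other agent now wants a target item, so `agent` must pick
        top = next(o for o in prefs[agent] if o in remaining)
        if top not in target:
            return False  # agent is forced onto an item outside the target
        remaining.discard(top)
        got.add(top)
    return got == target
```

Each iteration allocates one item, so the loop runs at most m times and the whole check is polynomial.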
We now argue for the correctness of the algorithm. Observe that the order in which agent 1 picks the items in B is exactly according to his preferences.
Claim.
Consider the first pick in the algorithm. If agent 1 picks an item o, and there exists a policy π in which agent 1 gets B, then there also exists a policy π′ in which agent 1 first picks o and still gets B overall.
Proof.
In π, by the time agent 1 picks o, all items he prefers to o have already been allocated. Hence, if agent 1 gets B in π, we can obtain π′ by bringing agent 1's turn in which he picks o to the first place and keeping all the other turns in the same order. Note that in π′, for any agent's turn, the set of available items is either the same as in π or only the item o is missing. However, since o was not chosen by the later agents anyway, the picking outcomes of π and π′ are identical. ∎
Claim.
Consider the first pick in the algorithm. If some agent i picks an item o in the algorithm, and there exists a policy π in which agent 1 gets B, then there also exists a policy π′ in which agent i first picks o and agent 1 still gets B overall.
Proof.
If agent i gets o in π, then we can obtain π′ by bringing that turn to the first place and keeping all the other turns in the same order. If agent i does not get o in π, then when we construct π′ we additionally delete the turn of the agent who got o. Note that in π′, for any agent's turn, the set of available items is either the same as in π or only the item o is missing. However, since o was not chosen by the later agents anyway, the picking outcomes of π and π′ are identical. ∎
Proof of Theorem 7
Proof.
In a NecessaryItem instance we can assume that the distinguished agent is agent 1. Let k = m/n denote the number of items each agent receives under a balanced policy. Given the item o, if o is ranked below the k-th position by agent 1, then we can return “No”, because by letting agent 1 choose in the first k rounds she gets her top k items and does not get item o.
Suppose o is ranked at the j-th position by agent 1 with j ≤ k. The next claim provides an equivalent condition for checking whether the NecessaryItem instance is a “No” instance.
Claim.
Suppose o is ranked at the j-th position by agent 1 with j ≤ k. The NecessaryItem instance is a “No” instance if and only if there exists a balanced policy π such that (i) agent 1 picks items in the first j − 1 rounds and the last k − j + 1 rounds, and (ii) agent 1 does not get o.
Proof.
Suppose there exists a balanced policy π such that agent 1 does not get item o. We obtain π′ from π by moving the first j − 1 occurrences of agent 1 to the beginning of the sequence while keeping the other positions unchanged. When performing π′, in the first j − 1 rounds agent 1 gets her top j − 1 items.
By the next time agent 1 picks an item in π′, o must have been chosen by another agent. To see why this is true, for each agent's turn from the j-th round until agent 1's next turn, we compare side by side the items allocated before this turn by π and by π′. It is not hard to see by induction that the set of items allocated by π′ before agent 1's next turn is a superset of the set of items allocated by π before agent 1's j-th pick. Because the latter set contains o, agent 1 does not get o in π′.
Then, we obtain π″ from π′ by moving the j-th through the k-th occurrences of agent 1 to the end of the sequence while keeping the other positions unchanged. It is easy to see that agent 1 still does not get o in π″. This completes the proof. ∎
Let T denote agent 1's top j − 1 items. In light of the claim above, to check whether the instance is a “No” instance, it suffices to check, for every set D of k − j + 1 items ranked below the j-th position by agent 1, whether it is possible for agent 1 to get T and D by a balanced policy in which agent 1 picks items in the first j − 1 rounds and the last k − j + 1 rounds. To this end, for each such D with |D| = k − j + 1, we construct the following maximum flow problem F(D), which can be solved in polynomial time by, e.g., the Ford-Fulkerson algorithm.

Vertices: a source s, one vertex for each agent in {2, …, n}, one vertex for each item in O \ (T ∪ D), and a sink t.
Edges and weights: for each agent i ∈ {2, …, n}, there is an edge (s, i) with weight k; for each agent i and each item o′ ∈ O \ (T ∪ D) such that agent i ranks o′ above all items in D, there is an edge (i, o′) with weight 1; for each item o′ ∈ O \ (T ∪ D), there is an edge (o′, t) with weight 1.
We are asked whether the maximum amount of flow from s to t is m − k (the maximum possible flow from s to t).
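The flow computation itself is standard. The sketch below (illustrative names, not the paper's code) uses the Edmonds-Karp refinement of Ford-Fulkerson, taking the network as (u, v, capacity) triples; the proof's network would be wired s → agents → items → t.

```python
from collections import defaultdict, deque

def max_flow(edges, s, t):
    """Edmonds-Karp: repeatedly push flow along a shortest augmenting
    path until none remains.  `edges` is a list of (u, v, capacity)."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v in list(cap[u]):
                if cap[u][v] > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                       # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:                     # update residual capacities
            cap[u][v] -= push
            cap[v][u] += push
        flow += push
```

Checking whether F(D) saturates then amounts to comparing the returned value with m − k.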
Claim.
The NecessaryItem instance is a “No” instance if and only if there exists D with |D| = k − j + 1 such that F(D) has a solution.
Proof.
If the instance is a “No” instance, then by the claim above there exists a balanced policy π such that agent 1 picks items in the first j − 1 rounds and the last k − j + 1 rounds, and agent 1 gets T ∪ D for some D. For each agent i with i ≥ 2, let there be a flow of amount k from s to i and a flow of amount 1 from i to each of the items allocated to her in π. Moreover, let there be a flow of amount 1 from each such item to t. It is easy to check that the total amount of flow is m − k.
If F(D) has a solution, then it has an integer solution because all weights are integers. This means that there exists an assignment of all items in O \ (T ∪ D) to agents 2 through n such that no agent gets an item that she ranks below any item in D. Starting from this allocation, after implementing all trading cycles we obtain a Pareto optimal allocation in which the items in O \ (T ∪ D) are allocated to agents 2 through n, and still no agent gets an item that she ranks below any item in D. By Proposition 1 of Brams and King [2005], there exists a balanced policy that gives this allocation. It follows that agent 1 does not get o under the resulting balanced policy. ∎
Because k is a constant, the number of sets D we need to check is polynomial. The polynomial-time algorithm for NecessaryItem for balanced policies is presented as Algorithm 1. ∎
Proof of Theorem 11
Proof.
In the target allocation, let o_i^r be the r-th most preferred item of agent i among his set of allocated items.
Claim.
If there exists a recursively balanced policy achieving the target allocation, then in any such policy, in each r-th round each agent i gets the item o_i^r.
We initialize r to 1, i.e., we focus on the first round. We check whether there is an agent i whose turn has not yet come in the round and whose most preferred unallocated item is not o_i^r. In this case we return “no”. Otherwise, we complete the round in an arbitrary order. If all the items are allocated, we return “yes”. If r < m/n, we increment r by one and repeat the process.
We now argue for correctness. If the algorithm returns no, then we know that there is a recursively balanced policy that does not achieve the allocation: this policy was partially built during the algorithm and can be completed in an arbitrary way to obtain an allocation different from the target allocation. Now assume for contradiction that there is a policy that does not achieve the allocation but the algorithm incorrectly returns yes. Consider the first round in which the algorithm makes a mistake. In each round, however, each agent had a unique, mutually exclusive most preferred unallocated item. Hence, no matter in which order the round is implemented, the allocation and the set of unallocated items after the round stay the same, a contradiction. ∎
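The round-by-round check can be simulated as follows. This is a sketch under the assumption that within a round we may serve, in any order, each agent whose most preferred unallocated item is her designated item o_i^r for that round; all names are illustrative.

```python
def achievable_recursively_balanced(prefs, alloc):
    """Can the target bundles in `alloc` (one list per agent, all of
    equal size) be realised by some recursively balanced policy?"""
    rank = {a: {o: r for r, o in enumerate(p)} for a, p in prefs.items()}
    # o_i^r is the r-th most preferred item of agent i within her bundle
    bundles = {a: sorted(items, key=lambda o, a=a: rank[a][o])
               for a, items in alloc.items()}
    k = len(next(iter(bundles.values())))
    remaining = {o for items in alloc.values() for o in items}
    for r in range(k):
        waiting = set(bundles)
        while waiting:
            # serve any waiting agent whose top unallocated item is o_i^r
            ready = [a for a in waiting
                     if min(remaining, key=lambda o, a=a: rank[a][o])
                     == bundles[a][r]]
            if not ready:
                return False  # someone would grab another agent's item
            a = ready[0]
            remaining.discard(bundles[a][r])
            waiting.discard(a)
    return True
```

Every inner iteration allocates one item, so the simulation is polynomial in the number of items.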
Proof of Theorem 12
Proof Sketch.
Membership in coNP is obvious. By Lemma 1 it suffices to show coNP-hardness for NecessaryItem and top-k NecessarySet. We prove coNP-hardness for both by the same reduction from PossibleItem, which is NP-complete [Saban and Sethuraman, 2013]. The proof for other values of k can be done similarly by constructing preferences so that the distinguished agent always gets her top k items. Consider an instance of PossibleItem given by a set of agents, a set of items, the preference profile of the agents, and a designated item o, where we are asked whether it is possible for agent 1 to get item o in some sequential allocation. Given this instance, we construct the following necessary allocation instance.
Agents: the original agents together with one new agent b.
Items: the original items together with new items, one of which is a designated item o′.
Preferences:
The preferences of agent 1 are obtained from her original preferences by inserting o′ right before o and appending the remaining new items at the bottom.
For any other original agent, her preferences are obtained from her original preferences by replacing o with o′ and then appending the remaining items so that o and the new items are at the bottom.
The preferences of b rank o′ and o at the top.
For NecessaryItem, we are asked whether agent b always gets item o; for top-k NecessarySet, we are asked whether agent b always gets her top two items.
Suppose the PossibleItem instance has a solution, denoted by σ. We claim that σ yields a “No” answer to the NecessaryItem and top-k NecessarySet instances. Following σ, the first rounds proceed as in σ, and eventually agent 1 gets item o, which means that b does not get item o after all items are allocated.
We note that agent b always gets item o′ under any recursively balanced policy. We next show that if the NecessaryItem or top-k NecessarySet instance is a “No” instance, then the PossibleItem instance is a “Yes” instance. Suppose π is a recursively balanced policy such that b does not get o. We let phase 1 denote the initial rounds of π and phase 2 the remaining rounds.
Because o is the least preferred item for all agents except agent 1 and b, if b does not get o in the second phase, then o must be allocated to agent 1. To see this, suppose for the sake of contradiction that o is allocated to some agent i other than agent 1 and b; then i must be the last agent in the round and o is not chosen in any previous round. However, when it is b's turn in the second phase, o is still available, which means that b would have chosen o, a contradiction to the assumption that i gets o.
Claim.
If agent 1 gets o under π, then she gets o in the first phase.
Proof.
For the sake of contradiction, suppose that agent 1 does not get o in the first phase. Then either she gets an item ranked before o′, or she gets o′, because it is impossible for agent 1 to get an item ranked after o: otherwise another agent must get o in the first phase, which is impossible as we just argued above.
If agent 1 gets an item ranked before o′ in the first phase, then in order for agent 1 to get o in the second phase, o′ must be chosen by another agent. o′ cannot be chosen by b before agent 1 gets o unless b is the last agent in the round. If o′ is chosen by some other agent i, then because o′ and o are the bottom two items in i's preferences, the last two agents in the round must be i and b. Therefore, when b chooses an item in the second phase, o is still available, which means that b gets o in π, a contradiction to the assumption that b does not get o.
If agent 1 gets o′ in the first phase, then another agent must get o in the first phase, which is impossible because all other agents rank o within their bottom two positions, which means that the earliest round in which any of them could get o is the last one.
∎
Let σ denote the order over the original agents obtained from the first phase of π by removing b and then moving all agents who get a new item in π after agent 1. We claim that σ is a solution to the PossibleItem instance: when it is agent 1's turn, all items she ranks before o must have been chosen, while o itself has not been chosen (if another agent got o before agent 1 in σ, then the same agent must have gotten an item ranked above o in the first phase of π, which contradicts the construction of σ). This proves the coNP-completeness of the allocation problems mentioned in the theorem. ∎
Proof of Theorem 13
Proof.
We give agent 1 the first turn in each round, so he is guaranteed to get o_1, his most preferred item, in the first round. We now construct a bipartite graph G between the other agents and the remaining items in which there is an edge {i, o} iff item o is strictly more preferred by agent i than o_2. We check whether G admits a perfect matching. If it does not, we return no. Otherwise, there exists a recursively balanced policy for which agent 1 gets o_1 and o_2.
Claim.
G admits a perfect matching if and only if there is a recursively balanced policy for which agent 1 gets both o_1 and o_2.
Proof.
If G admits a perfect matching, then each of the other agents can get an item she prefers to o_2 in the first round. If this particular allocation is not Pareto optimal for the other agents and the items matched to them, we can easily compute a Pareto optimal improvement over it by implementing trading cycles, as in the setting of house allocation with existing tenants; this takes polynomial time. Hence, we can compute an allocation in which each of the other agents gets an item strictly preferred to o_2 and which is Pareto optimal for these agents. Since the allocation is Pareto optimal, we can build a policy that achieves it via the characterization of Brams and King [2005]. In the second round, agent 1 gets o_2, and subsequently we do not care who gets what, because agent 1 has already got o_1 and o_2.
If G does not admit a perfect matching, then there is no allocation in which each of the other agents gets an item strictly better than o_2 in the first round. Hence, under every policy, some other agent gets o_2 in the first round. ∎
∎
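The perfect-matching test can be carried out with Kuhn's augmenting-path algorithm. A minimal sketch (illustrative names, not the paper's code), where `adj[u]` lists the items agent u strictly prefers to agent 1's second target item:

```python
def has_perfect_matching(adj, left):
    """Kuhn's augmenting-path algorithm: adj[u] lists the right-side
    vertices u may be matched to.  True iff every left vertex is matched."""
    match = {}  # right vertex -> left vertex currently matched to it

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be rematched elsewhere
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return all(augment(u, set()) for u in left)
```

Each left vertex triggers one augmenting-path search, so the test runs in polynomial time, as the proof requires.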
Proof of Theorem 14
Proof.
Membership in NP is obvious. We prove that top-k PossibleSet is NP-hard by a reduction from PossibleItem, which is NP-complete [Saban and Sethuraman, 2013]. Hardness for other values of k can be proved similarly by constructing preferences so that the distinguished agent always gets her top k items. Let an instance of PossibleItem be given as before, where