Algorithmic Stability in Fair Allocation of Indivisible Goods Among Two Agents

07/30/2020 · Vijay Menon et al. · University of Waterloo

We propose a notion of algorithmic stability for scenarios where cardinal preferences are elicited. Informally, our definition captures the idea that an agent should not experience a large change in their utility as long as they make "small" or "innocuous" mistakes while reporting their preferences. We study this notion in the context of fair and efficient allocations of indivisible goods among two agents, and show that it is impossible to achieve exact stability along with even a weak notion of fairness and even approximate efficiency. As a result, we propose two relaxations to stability, namely, approximate-stability and weak-approximate-stability, and show how existing algorithms in the fair division literature that guarantee fair and efficient outcomes perform poorly with respect to these relaxations. This leads us to explore the possibility of designing new algorithms that are more stable. Towards this end we present a general characterization result for pairwise maximin share allocations, and in turn use it to design an algorithm that is approximately-stable and guarantees a pairwise maximin share and Pareto optimal allocation for two agents. Finally, we present a simple framework that can be used to modify existing fair and efficient algorithms in order to ensure that they also achieve weak-approximate-stability.


1 Introduction

The starting point of our work is Spliddit (www.spliddit.org), a not-for-profit fair division website which has attracted thousands of users and aims to provide provably fair solutions for a number of different settings [gold15]. One of the settings considered on Spliddit is the well-studied problem of allocating a set of indivisible goods among $n$ agents. For this setting, Spliddit comes up with a fair allocation in the following way. First, it asks the agents to divide 1000 points among the goods, thus in turn making them indicate their value for each good. Given these values, Spliddit computes a fair and efficient allocation, in particular guaranteeing to the users that the computed solution is Pareto optimal (PO) and satisfies a fairness notion called EF1. Informally, in an allocation that satisfies EF1 an agent $i$ does not envy an agent $j$ after she removes some good from $j$'s bundle, whereas Pareto optimality of an allocation implies that there is no other allocation where every agent receives at least as much utility and at least one of the agents strictly more. Given our current state of knowledge on fair and efficient allocations, Spliddit essentially provides the best-known guarantees, in particular ensuring an EF1 and PO allocation by computing a maximum Nash welfare solution (MNW)—i.e., an allocation that maximizes the product of the utilities of the agents [car16].

Although Spliddit does a great job of making fair division algorithms more accessible, it is our view that there is at least one aspect with respect to which it is lacking. The issue we are going to discuss is not specific to Spliddit, but, as we hope to convince the reader, is something that can be raised with respect to many situations in which cardinal preferences are elicited. Nevertheless, we use Spliddit here since our motivation to look at this issue stemmed from observing an example on it that is similar to the one we will talk about next. To illustrate our concern, consider the following example. There are two agents (A and B) and four goods ($g_1, g_2, g_3, g_4$) that need to be fairly divided among them. The agents have additive valuation functions and the values that they attribute to each of these goods are in Table 1(a). Spliddit computes an EF1 and PO solution for this instance using the MNW solution, and the final allocation has agent A getting goods $g_2$ and $g_4$, and agent B getting goods $g_1$ and $g_3$. So far so good. However, what if agent A made a minor ‘mistake’ while reporting the values and instead reported the values given in Table 1(b)? Note that the two valuations (i.e., A’s original one and the ‘mistake’ in Tables 1(a) and 1(b), respectively) are almost identical, with the value of each good being off by at most 2. Therefore, intuitively, it looks like we would ideally like to have similar outputs—and more so given the fact that the MNW allocation for the original instance satisfies EF1 and PO even with respect to this new instance. However, does the MNW solution do this? No, and in fact, the allocation in this case is agent A getting good $g_4$ alone and agent B getting the other three. This in turn implies that agent A loses roughly 38% of their utility for the ‘mistake’ (since according to their true values, agent A gets a total utility of 437 when making the ‘mistake’, whereas she gets 437+273 = 710 when reporting correctly), which seems highly undesirable.

       A     B
g1    104   162
g2    273   250
g3    186   240
g4    437   348
(a) Original instance

       A     B
g1    105   162
g2    271   250
g3    186   240
g4    438   348
(b) Instance where agent A makes minor mistakes

Table 1: Agents' values for the goods in the original instance and in the instance where agent A makes minor mistakes.
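To make the instability concrete, the following sketch recomputes the MNW allocation for both instances by brute force (fine for four goods; Spliddit's actual solver is more sophisticated, but any exact MNW maximizer selects the same allocations on this instance):

```python
from itertools import combinations

GOODS = ["g1", "g2", "g3", "g4"]
B_VALS = {"g1": 162, "g2": 250, "g3": 240, "g4": 348}
A_TRUE = {"g1": 104, "g2": 273, "g3": 186, "g4": 437}
A_MIST = {"g1": 105, "g2": 271, "g3": 186, "g4": 438}

def mnw(vA, vB):
    """Brute-force maximum Nash welfare: the split of GOODS between the two
    agents that maximizes the product of their (reported) utilities."""
    best, best_bundle = -1, None
    for k in range(len(GOODS) + 1):
        for bundle_a in combinations(GOODS, k):
            bundle_b = [g for g in GOODS if g not in bundle_a]
            nash = sum(vA[g] for g in bundle_a) * sum(vB[g] for g in bundle_b)
            if nash > best:
                best, best_bundle = nash, set(bundle_a)
    return best_bundle

truthful = mnw(A_TRUE, B_VALS)                  # {'g2', 'g4'}
mistaken = mnw(A_MIST, B_VALS)                  # {'g4'}
utility = lambda S: sum(A_TRUE[g] for g in S)
print(utility(truthful), utility(mistaken))     # 710 vs 437: a ~38% drop in true utility
```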

The example described above is certainly not a one-off, and in fact we will show later that there are far worse examples for different algorithms from the literature on fair allocations. More broadly, we believe that this is an issue that can arise in many problems where the inputs are assumed to be cardinal preferences. After all, it is not hard to imagine scenarios from our own lives where most of us would find it hard to attribute exact numerical values to our preferences. Now, of course, it is easy to see that if we insist on resolving this issue completely—meaning, if we insist that the agent making the ‘mistake’ should not experience any change in their outcome—then it cannot be done in any interesting way as long as we allow the ‘mistakes’ to be arbitrary and also insist that the algorithm be deterministic. (Although there is work on randomized fair allocations (e.g., [bogo01, budi13]), as pointed out by car16, randomization is not appropriate for many practical fair division settings where the outcomes are used just once.) However, what if we make some assumption on the type of ‘mistakes’ the agents might commit? For instance, what if we make a mild assumption that any ‘mistake’ reported by the agent at least maintains the underlying ordinal information? That is, being more specific in the context of our example, what if we assume that if an agent considers a good $g$ to be the $k$-th highest valued good according to their true preference, then in the ‘mistake’ too this information is maintained? Note that this is indeed the case in the example mentioned above, and so, more broadly, these are the kind of scenarios that we consider in this paper. Our goal is to try and address the issue that we observe in the example, and intuitively we want to design algorithms where an agent does not experience a large change in their utility as long as their report is only off by a little (and as long as the report maintains the ordinal information in their true preference).

Before we make this more concrete, a reader who is familiar with the algorithmic game theory (AGT) literature might have the following question: “Why not just consider ordinal algorithms?” After all, these algorithms will have the property mentioned above under the assumption of maintaining the underlying ordinal information, and moreover there is quite a lot of work in the AGT literature that focuses on designing algorithms that only use the underlying ordinal information and still provide good guarantees with respect to different objectives in several scenarios (e.g., see [bout15, ansh16, ansh17, goel17, abra18]). Additionally, and more specifically in the context of fair allocations, there is also a line of work that considers ordinal algorithms [bouv10, brams14, aziz15b, segal17]. While this is certainly a reasonable approach, there are a few reasons why it is inadequate: i) Constraining algorithms to use only the underlying ordinal information, thus ignoring the cardinal values, might be too restrictive in certain settings. In fact, as we will show later, this is indeed the case for the problem of designing fair and efficient algorithms. In particular, we show that there are no ordinal algorithms that achieve EF1 and even approximate Pareto optimality. Additionally, we also believe that assuming that the agents only have ordinal rankings might be too pessimistic in certain situations. ii) Even if we ignore the previous concern, there are systems like Spliddit that are used in practice and which explicitly elicit cardinal preferences, and so we believe that our approach will be useful in such settings.

Given this, we believe that there is a need for a new notion to address this issue. We term this stability, and informally our notion of stability captures the idea that the utility experienced by an agent should not change much as long as they make “small” or “innocuous” mistakes when reporting their preferences. (In the economics and computation literature, the term stability is usually used in the context of stable algorithms in two-sided matching [gale62]. Additionally, the same term is used in many different contexts in the computer science literature broadly (e.g., in learning theory, or when talking about, say, stable sorting algorithms). Our choice of the term stability here stems from the usage of this term in learning theory (see Section 3 for an extended discussion) and therefore should not be confused with the notion of stability in two-sided matching.) Although the general idea of algorithmic stability is certainly not new (see Section 3 for a discussion), to the best of our knowledge, the notion of stability we introduce here (formally defined in Section 2.1) has not been previously considered. Therefore, we introduce this notion in the context of problems where cardinal preferences are elicited, and explicitly advocate for it to be considered during algorithm or mechanism design. This in turn constitutes what we consider as our main contribution. We believe that if the algorithms for fair division—and in fact any problem where cardinal preferences are elicited—are to be truly useful in practice they need to have some guarantees on stability, and so towards this end we consider the problem of designing stable algorithms in the context of fair allocation of indivisible goods among two agents.

We begin by formally defining the notion of stability and show how in general one cannot hope for completely stable algorithms that are EF1 and even approximately Pareto optimal. This in turn implies that one has to relax the strong requirement of stability, and towards this end we propose two relaxations, namely, approximate-stability and weak-approximate-stability, where the latter notion, as the name suggests, is strictly weaker than the former. Following this, we show how existing algorithms from the literature on fair allocations that are fair and efficient perform quite poorly even in terms of these relaxed notions of stability. This implies that one has to design new algorithms, and towards this end we present a simple, albeit exponential, algorithm for two-agent fair division that is approximately-stable and that is guaranteed to return an allocation that is pairwise maximin share (PMMS) and Pareto optimal. The algorithm is based on a general characterization result for PMMS allocations (i.e., one that holds for the general $n$-agent case) which we believe might potentially be of independent interest. Finally, we end by considering the weaker relaxation of stability, and here we show how a small change to the existing two-agent fair division algorithms can get us weak-approximate-stability along with the properties that these algorithms otherwise satisfy.

2 Preliminaries

Let $[n] = \{1, \dots, n\}$ denote the set of agents and $M$, with $|M| = m$, denote the set of indivisible goods that needs to be divided among these agents. Throughout, we assume that every agent $i$ has an additive valuation function $v_i \in V$ (we assume valuations are integral in order to model practical deployments of fair division algorithms, e.g., the adjusted winner protocol and Spliddit; all the results hold even if we assume that the valuations are non-negative real numbers), where $V$ denotes the set of all additive valuation functions, $v_i : 2^M \to \mathbb{Z}_{\ge 0}$, and additivity implies that for an $S \subseteq M$ (which we often refer to as a bundle), $v_i(S) = \sum_{g \in S} v_i(\{g\})$. For ease of notation, we often omit the braces around a singleton $\{g\}$ and instead just write $v_i(g)$. We also assume throughout that $v_i(M) = T$, for some $T \in \mathbb{Z}_{>0}$, and that $v_i(g) > 0$, for all $i \in [n]$ and $g \in M$. Although the assumption that agents have positive value for a good may not be valid in certain situations, in Appendix A.1 we argue why this is essential in order to obtain anything interesting in the context of our problem. Finally, for $k \ge 1$ and $S \subseteq M$, we use $\Pi_k(S)$ to denote the set of ordered partitions of $S$ into $k$ bundles, and for an allocation $A = (A_1, \dots, A_n) \in \Pi_n(M)$, we use $A_i$ to denote the bundle allocated to agent $i$.
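For concreteness, here is a minimal encoding of these preliminaries, using agent A's valuation from Table 1 (additive valuations as maps from goods to positive integers, with the normalization $v_i(M) = T$; on Spliddit, $T = 1000$):

```python
# An additive valuation: a bundle's value is the sum of its goods' values.
v_A = {"g1": 104, "g2": 273, "g3": 186, "g4": 437}

def value(v, bundle):
    return sum(v[g] for g in bundle)

assert value(v_A, v_A.keys()) == 1000   # normalization v_i(M) = T, here T = 1000
```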

In this paper we are interested in deterministic algorithms that produce fair and efficient (i.e., Pareto optimal) allocations. For an algorithm $\mathcal{A}$ and a profile of valuation functions $P = (v_1, \dots, v_n)$, we use $u_i(\mathcal{A}, P)$ to denote the utility that agent $i$ obtains from the allocation returned by $\mathcal{A}$. Pareto optimality and the fairness notions considered are defined below.

Definition 1 ($\alpha$-Pareto optimality ($\alpha$-PO)).

Given an $\alpha \ge 1$, an allocation $A$ is $\alpha$-Pareto optimal if there is no allocation $B \in \Pi_n(M)$ such that $v_i(B_i) \ge v_i(A_i)$ for all $i \in [n]$ and $v_j(B_j) > \alpha \cdot v_j(A_j)$ for some $j \in [n]$.

In words, an allocation is $\alpha$-Pareto-optimal if no agent can be made strictly better-off by a factor of $\alpha$ without making another agent strictly worse-off; Pareto optimality refers to the special case when $\alpha = 1$. For allocations $A, B \in \Pi_n(M)$, we say that $B$ $\alpha$-Pareto dominates $A$ if every agent receives at least as much utility in $B$ as in $A$ and at least one agent is strictly better-off by a factor of $\alpha$ in $B$.
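As a concrete reading of Definition 1, the following sketch tests $\alpha$-Pareto dominance; the exact placement of the factor $\alpha$ follows the in-words description above and is our reconstruction:

```python
def alpha_pareto_dominates(B, A, valuations, alpha=1.0):
    """Does allocation B alpha-Pareto dominate allocation A?  Every agent must
    be at least as well off in B, and some agent better off by more than a
    factor alpha.  Allocations map agent -> bundle (set of goods); valuations
    map agent -> {good: value}."""
    def u(i, bundle):
        return sum(valuations[i][g] for g in bundle)
    agents = list(valuations)
    weakly_better = all(u(i, B[i]) >= u(i, A[i]) for i in agents)
    factor_better = any(u(i, B[i]) > alpha * u(i, A[i]) for i in agents)
    return weakly_better and factor_better
```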

We consider several notions of fairness, namely, pairwise maximin share (PMMS), envy-freeness up to any positively valued good (EFX), and envy-freeness up to one good (EF1). Among these, the main notion we talk about here (and design algorithms for) is PMMS, which is the strongest and which implies the other two notions. Informally, an allocation is said to be a PMMS allocation if every agent $i$ is guaranteed to get a bundle that she values at least as much as the bundle she receives when she plays cut-and-choose with any other agent $j$ (i.e., she partitions the combined bundle of her allocation and the allocation of $j$ into two, and receives the one she values less).

Definition 2 (pairwise maximin share (PMMS)).

An allocation $A$ is a pairwise maximin share (PMMS) allocation if, for all $i, j \in [n]$,

  $v_i(A_i) \ge \max_{(B_1, B_2) \in \Pi_2(A_i \cup A_j)} \min(v_i(B_1), v_i(B_2))$.

Next, we define EFX and EF1, which as mentioned above are weaker than PMMS.

Definition 3 (envy-free up to any positively valued good (EFX)).

An allocation $A$ is envy-free up to any positively valued good (EFX) if, for all $i, j \in [n]$ and every good $g \in A_j$ with $v_i(g) > 0$,

  $v_i(A_i) \ge v_i(A_j \setminus \{g\})$.

In words, an allocation is said to be EFX if an agent $i$ is no longer envious of an agent $j$ after removing any positively valued good from agent $j$'s bundle.

Definition 4 (envy-free up to one good (EF1)).

An allocation $A$ is envy-free up to one good (EF1) if, for all $i, j \in [n]$ with $A_j \neq \emptyset$, there exists a good $g \in A_j$ such that

  $v_i(A_i) \ge v_i(A_j \setminus \{g\})$.

In words, an allocation is said to be EF1 if an agent $i$ is no longer envious of an agent $j$ after removing some good from agent $j$'s bundle.
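Each of these definitions reduces to a check for an ordered pair $(i, j)$; an allocation satisfies the notion if the check passes for every ordered pair. A minimal sketch of the three checks:

```python
from itertools import chain, combinations

def value(v, S):
    return sum(v[g] for g in S)

def subsets(G):
    G = list(G)
    return chain.from_iterable(combinations(G, k) for k in range(len(G) + 1))

def is_ef1(v_i, A_i, A_j):
    """EF1 for the pair (i, j): i does not envy j after removing *some* good."""
    return not A_j or any(value(v_i, A_i) >= value(v_i, A_j - {g}) for g in A_j)

def is_efx(v_i, A_i, A_j):
    """EFX for the pair (i, j): i does not envy j after removing *any*
    positively valued good from j's bundle."""
    return all(value(v_i, A_i) >= value(v_i, A_j - {g}) for g in A_j if v_i[g] > 0)

def is_pmms_pair(v_i, A_i, A_j):
    """PMMS for the pair (i, j): i's bundle reaches her cut-and-choose value
    over the combined bundle A_i u A_j."""
    G = A_i | A_j
    mu = max(min(value(v_i, set(B)), value(v_i, G - set(B))) for B in subsets(G))
    return value(v_i, A_i) >= mu
```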

In addition to the requirement that the algorithms be fair and efficient, we would also ideally like them to be stable. We define the notion of stability (and its relaxations) in the next section.

2.1 Stability

At a high level, our notion of stability captures the idea that an agent should not experience a large change in utility as long as they make “small” or “innocuous” mistakes while reporting their preferences. Naturally, a more formal definition requires us to first define what constitutes a ‘mistake’ and what we mean by “small” or “innocuous” mistakes in the context of fair division. So, below, for an $i \in [n]$, a $v_i \in V$, and an $\epsilon \ge 0$, we define what we refer to as $\epsilon$-neighbours of $v_i$. According to this definition, the closer $\epsilon$ is to zero, the “smaller” is the ‘mistake’, since smaller values of $\epsilon$ indicate that agent $i$'s report $v'_i$ (which in turn is the ‘mistake’) is closer to their true valuation function $v_i$.

Definition 5 ($\epsilon$-neighbours of $v_i$ ($N_\epsilon(v_i)$)).

For an $i \in [n]$, a $v_i \in V$, and an $\epsilon \ge 0$, define $N_\epsilon(v_i)$, the set of $\epsilon$-neighbouring valuations of $v_i$, to be the set of all $v'_i \in V$ such that

  • the total value is unchanged, i.e., $v'_i(M) = v_i(M)$,

  • for all $g, h \in M$, $v_i(g) \ge v_i(h)$ implies $v'_i(g) \ge v'_i(h)$ (i.e., the ordinal information over the singletons is maintained), and

  • $\|v_i - v'_i\|_\infty \le \epsilon$. (We present our results using the $\ell_\infty$-norm, although qualitatively they do not change if we use, say, the $\ell_1$-norm.)
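Under this reading of Definition 5, membership in $N_\epsilon(v_i)$ is a mechanical check. A minimal sketch (the treatment of ties in the ordinal condition is our assumption):

```python
def is_eps_neighbour(v, v_prime, eps):
    """Membership test for N_eps(v) as reconstructed above: same total value,
    singleton order maintained (ties stay ties -- our assumption), and every
    per-good change at most eps (the l-infinity condition)."""
    if sum(v_prime.values()) != sum(v.values()):
        return False
    for g in v:
        for h in v:
            if v[g] >= v[h] and v_prime[g] < v_prime[h]:
                return False
    return max(abs(v[g] - v_prime[g]) for g in v) <= eps

# Agent A's mistake from Table 1 is a 2-neighbour of her true valuation:
v_true = {"g1": 104, "g2": 273, "g3": 186, "g4": 437}
v_mist = {"g1": 105, "g2": 271, "g3": 186, "g4": 438}
assert is_eps_neighbour(v_true, v_mist, eps=2)
```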

Throughout, we often refer to the valuation function $v_i$ of agent $i$ as its true valuation function or true report and, for some $\epsilon \ge 0$, a $v'_i \in N_\epsilon(v_i)$ as its mistake or misreport. Note that although true valuation function, true report, and misreport are terms one often finds in the mechanism design literature, which considers strategic agents, we emphasize that here we are not talking about strategic agents, but about agents who are just unsophisticated in the sense that they are unable to accurately convert their preferences into cardinal values. Also, although we write $\epsilon \in \mathbb{R}_{\ge 0}$, it should be understood that in the context of our paper, since the valuation functions of the agents are integral, the only valid values of $\epsilon$ are when it is an integer. We use this notation for ease of exposition and because all our results hold even if we instead use real-valued valuations.

Now that we know what constitutes a mistake, we can define our notion of stability.

Definition 6 ($\epsilon$-stable algorithm).

For an $\epsilon \ge 0$, an algorithm $\mathcal{A}$ is said to be $\epsilon$-stable if for all $i \in [n]$, all profiles $(v_i, v_{-i})$, and all $v'_i \in N_\epsilon(v_i)$,

  $u_i(\mathcal{A}, (v_i, v_{-i})) = u_i(\mathcal{A}, (v'_i, v_{-i}))$.   (1)

In words, for an $\epsilon \ge 0$, an algorithm is said to be $\epsilon$-stable if for every agent $i$ with valuation function $v_i$ and all possible reports of the other agents, the utility agent $i$ obtains is the same when reporting $v_i$ and any $v'_i \in N_\epsilon(v_i)$. (At first glance, the definition for stable algorithms (or its relaxations defined in Section 2.1.1) might seem very similar to the definition for (approximately) strategyproof algorithms (i.e., algorithms where truthful reporting is an (approximately) weakly-dominant strategy for all the agents). Although there is indeed some similarity in these definitions, it is important to note that these are different notions and neither one implies the other. For instance, there is a stable algorithm that is EF1 (one can easily see that the well-known EF1 draft algorithm [car16, Sec. 3] is stable), but there is no strategyproof algorithm that is EF1 [aman17, App. 4.6].) Given the definition above, we have the following for what it means for an algorithm to be stable.

Definition 7 (stable algorithm).

An algorithm is said to be stable if it is $\epsilon$-stable for all $\epsilon \ge 0$.

Although the “for all” in the definition above might seem like a strong requirement at first glance, it is not, for one can easily show that the following observation holds. The proof appears in Appendix B.1.

Observation 1.

An algorithm is stable if and only if there exists an $\epsilon > 0$ such that it is $\epsilon$-stable.

It is important to note that the definition of an $\epsilon$-stable algorithm only says that the utility agent $i$ obtains (and not the allocation itself) when moving from $v_i$ to $v'_i$ is the same. Additionally, although the notion of stability in general may look too strong, it is important to note that there are several algorithms (e.g., the well-known EF1 draft mechanism [car16]) that satisfy this definition. In particular, one can immediately see from the definition of stability that the following observation holds in the context of ordinal algorithms—i.e., algorithms that produce the same output for any two input profiles in which every agent's ordinal ranking over the goods is the same.

Observation 2.

Every ordinal fair division algorithm is stable.
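Observation 2's reasoning can be made concrete with a sketch of the draft algorithm mentioned above: it consults only each agent's ordinal ranking over the remaining goods (plus a fixed tie-breaking rule), so any misreport in $N_\epsilon(v_i)$, which by definition preserves that ranking, leaves every pick, and hence every utility, unchanged. Names and the tie-breaking rule below are our choices:

```python
def draft(valuations, order):
    """Round-robin draft: agents take turns (in the given order) picking their
    highest-valued remaining good.  The outcome depends only on each agent's
    ordinal ranking of goods plus deterministic tie-breaking, so it is stable."""
    remaining = set(next(iter(valuations.values())).keys())
    bundles = {a: set() for a in valuations}
    turn = 0
    while remaining:
        agent = order[turn % len(order)]
        # max by (value, good-id) gives a fixed, report-order-free tie-break
        pick = max(remaining, key=lambda g: (valuations[agent][g], g))
        bundles[agent].add(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

print(draft({"A": {"g1": 3, "g2": 2, "g3": 1},
             "B": {"g1": 1, "g2": 3, "g3": 2}}, ["A", "B"]))
# -> {'A': {'g1', 'g3'}, 'B': {'g2'}}
```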

In general, and as we will see in Section 4.2, the equality in (1) can be too strong a requirement. Therefore, in the next section we propose two relaxations to the strong requirement of stability.

2.1.1 Approximate notions of stability

We first introduce the weaker relaxation, which we refer to as weak-approximate-stability. Informally, weak-approximate-stability basically says that the utility that an agent experiences as a result of reporting a neighbouring instance is not too far away from what would have been achieved if the reports were exact.

Definition 8 (($\epsilon$, $\beta$)-weakly-approximately-stable algorithm).

For an $\epsilon \ge 0$ and a $\beta \ge 1$, an algorithm $\mathcal{A}$ is said to be ($\epsilon$, $\beta$)-weakly-approximately-stable if for all $i \in [n]$, all profiles $(v_i, v_{-i})$, and all $v'_i \in N_\epsilon(v_i)$,

  $\frac{1}{\beta} \le \frac{u_i(\mathcal{A}, (v_i, v_{-i}))}{u_i(\mathcal{A}, (v'_i, v_{-i}))} \le \beta$.   (2)

Although the definition above might seem like a natural relaxation of the notion of stability, as will become clear soon, it is a bit weak. Therefore, below we introduce the stronger notion, which we refer to as approximate-stability. However, before this we introduce the following, which, for a given valuation function $v$, defines the set, equiv($v$), of valuation functions such that the ordinal information over the bundles is the same in both.

Definition 9 (equiv($v$)).

For a valuation function $v \in V$, equiv($v$) refers to the set of all valuation functions $v' \in V$ such that for all $S_1, S_2 \subseteq M$, $v(S_1) \le v(S_2) \Leftrightarrow v'(S_1) \le v'(S_2)$.

In words, equiv($v$) refers to the set of all valuation functions $v'$ such that $v$ and $v'$ induce the same weak order over the set of all bundles (i.e., over the set $2^M$). Throughout, for an $i \in [n]$, we say that two instances (or profiles) $(v_i, v_{-i})$ and $(v'_i, v_{-i})$ are equivalent if $v'_i \in$ equiv($v_i$). Also, we say that $v$ and $v'$ are ordinally equivalent if $v' \in$ equiv($v$).

Equipped with this notion, we can now define approximate-stability. Informally, an algorithm is approximately-stable if it is weakly-approximately-stable and if with respect to every instance that is equivalent to the true reports, it is stable.

Definition 10 (($\epsilon$, $\beta$)-approximately-stable algorithm).

For an $\epsilon \ge 0$ and a $\beta \ge 1$, an algorithm $\mathcal{A}$ is said to be ($\epsilon$, $\beta$)-approximately-stable if, for all $i \in [n]$,

  • for all $v'_i \in$ equiv($v_i$), $u_i(\mathcal{A}, (v_i, v_{-i})) = u_i(\mathcal{A}, (v'_i, v_{-i}))$, and

  • $\mathcal{A}$ is ($\epsilon$, $\beta$)-weakly-approximately-stable.

Note that when $\beta = 1$ the definitions of both relaxations (i.e., weak-approximate-stability and approximate-stability) collapse to the one for $\epsilon$-stable algorithms. Also, throughout, we say that an algorithm is $\beta$-approximately-stable if it is ($\epsilon$, $\beta$)-approximately-stable for all $\epsilon \ge 0$ (and similarly for $\beta$-weakly-approximately-stable).
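Since Definitions 8 and 10 bound the ratio in (2), a small probe makes them easy to test empirically for a concrete algorithm and misreport; this assumes the two-sided-ratio reading of (2) given above, and the helper is ours:

```python
def stability_ratio(algo, v_true, v_other, v_mistake):
    """Empirical probe for (eps, beta)-weak-approximate-stability: the smallest
    beta for which inequality (2) can hold for this particular misreport.
    `algo(v1, v2)` returns (bundle_1, bundle_2) as sets of goods."""
    u_truthful = sum(v_true[g] for g in algo(v_true, v_other)[0])
    u_mistaken = sum(v_true[g] for g in algo(v_mistake, v_other)[0])
    lo, hi = sorted((u_truthful, u_mistaken))
    return hi / lo if lo > 0 else float("inf")
```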

Although the definition of approximate-stability might seem a bit contrived at first glance, it is important to note that this is not the case. The requirement that the algorithm be stable on equivalent instances is quite natural because of the following observation, which states that, with respect to all the notions that we talk about in this paper, any algorithm that satisfies such a notion can always output the same allocation for two instances that are equivalent. The proof of this statement appears in Appendix B.1.

Observation 3.

Let $X$ be a property that is one of EF1, EFX, PMMS, or PO. For all $i \in [n]$, if an allocation $A$ satisfies property $X$ with respect to the profile $(v_i, v_{-i})$, then $A$ also satisfies property $X$ with respect to the profile $(v'_i, v_{-i})$, where $v'_i \in$ equiv($v_i$).

3 Related work

Now that we have defined our notion, we can better discuss related work. There are several lines of research that are related to the topic of this paper. Some of the connections we discuss are in very different contexts and are by themselves very active areas of research. In such cases we only provide some pointers to the relevant literature, citing the seminal works in these areas.

Connections to algorithmic stability, differential privacy, and algorithmic fairness.

Algorithmic stability captures the idea of stability by employing the principle that the output of an algorithm should not “change much” when a “small change” is made to the input. To the best of our knowledge, notions of stability have not been considered in the AGT literature. However, the computational learning theory literature has considered various notions of stability and has, for instance, used them to draw connections between the stability of a learning algorithm and its ability to generalize (e.g., [bous02, shalev10]). Although the notion of stability we employ here is based on the same principle, it is defined differently from the ones in this literature. Here we are concerned about the change in utility that an agent experiences when she perturbs her input, and we deem an algorithm to be approximately-stable if this change is small.

Algorithmic stability can, in turn, be connected to differential privacy [dwork06a, dwork06b]. Informally, differential privacy (DP) requires that the probability of an outcome does not “change much” on “small changes” to the input. Therefore, essentially, DP can be considered as a notion of algorithmic stability, albeit a very strong one as compared to the ones studied in the learning theory literature (see the discussion in [dwork15, Sec. 1.4]) and the one we consider. In particular, if we were considering randomized algorithms, it is indeed the case that a differentially private algorithm is approximately stable, just like how a differentially private algorithm is an approximately-dominant-strategy mechanism [mcsh07]. Nevertheless, we believe that the notion we introduce here is independently useful, and is different from DP in a few ways. First, the motivation is completely different. We believe that our notion may be important even in situations where privacy is not a concern. Second, in this paper we are only concerned with deterministic algorithms, and one can easily see that DP is too strong a notion for this case as there are no deterministic and differentially private algorithms that have a range of at least two.

Finally, the literature on algorithmic (individual-based) fairness captures the idea of fairness by employing the principle that “similar agents” should be “treated similarly” [dwo12]. Although this notion is employed in contexts where one is talking about two different individuals, note that one way to think of our stability requirement is to think of it as a fairness requirement where two agents are considered similar if and only if they have similar inputs (i.e., say, if one’s input is a perturbation of the other’s). Therefore, thinking this way, algorithmic fairness can be considered as a generalization of stability, and just like DP it is much stronger and only applicable in randomized settings. In fact, algorithmic fairness can be seen as a generalization of DP (see the discussion in [dwo12, Sec. 2.3]) and so our argument above as to why our notion is useful is relevant even in this case.

Connections to robust algorithm and mechanism design. Informally, an algorithm is said to be robust if it “performs well” even under “slightly different” inputs or if the underlying model is different from the one the designer has access to. This notion has received a considerable amount of attention in the algorithmic game theory and social choice literature. For instance—and although this line of work does not explicitly term its algorithms as “(approximately) robust”—the flurry of work that takes the implicit utilitarian view considers scenarios where the agents have underlying cardinal preferences but only provide ordinal preferences to the designer. The goal of the designer in these settings is to then use these ordinal preferences in order to obtain an algorithm or mechanism that “performs well” (in the approximation sense, with respect to some objective function) with respect to all the possible underlying cardinal preferences (e.g., [bout15, ansh16, ansh17, goel17, abra18]). Additionally, and more explicitly, robust algorithm design has been considered, for instance, in the context of voting (e.g., [shir13, bred17]) and the stable marriage problem [menon18, mai18, chen19], and robust mechanism design has been considered in the context of auctions [chiesa12, chiesa14, chiesa15] and facility location [menon19]. Although, intuitively, the concepts of robustness and stability might seem quite similar, it is important to note that they are different. Stability requires that the outcome of an algorithm does not “change much” if one of the agents slightly modifies its input. Therefore, the emphasis here is to make sure that the outcomes are not very different as long as there is a small change to the input associated with one of the agents. Robustness, on the other hand, requires the outcome of an algorithm to remain “good” (in the approximation sense) even if the underlying inputs are different from what the algorithm had access to. Therefore, in this case the emphasis is on making sure that the same output (i.e., one that is computed with the input the algorithm has access to) is not too bad with respect to a set of possible underlying true inputs (ones the algorithm does not have access to). More broadly, one can think of robustness as a feature that a designer aspires to in order to ensure that the outcome of their algorithm is not too bad even if the model assumed, or the input they have access to, is slightly inaccurate, whereas stability in the context that we use here is more of a feature that is in service of unsophisticated agents who are prone to making mistakes when converting their preferences to cardinal values.

Related work on fair division of indivisible goods. The problem of fairly allocating indivisible goods has received considerable attention, with several works proposing different notions of fairness [lip04, bud11, car16, aman18] and computing allocations that satisfy these notions, sometimes along with some efficiency requirements [car16, bar18, plaut18, car19]. This paper also studies the problem of computing fair and efficient allocations, but in contrast to previous work our focus is on coming up with algorithms that are also (approximately) stable. While many of these papers address the general case of $n$ agents, our work focuses on the case of two agents.

Although a restricted case, the two-agent case is an important one and has been explicitly considered in several previous works [brams12, rama13, brams14, vet14, aziz15a, plaut18]. Among these, the work that is most relevant to our results here is that of rama13. In particular, and although the results here were derived independently, that paper contains two results that are similar to ones we have here—first, a slightly weaker version of the two-agent case of Theorem 2, and second, a slightly weaker version of Theorem 3. The exact differences are outlined in Sections 4.3 and 4.4, since we need to introduce a few more notions to make them clear.

In addition to the papers mentioned above—all of which adopt the model as in this paper where the assumption is that the agents have cardinal preferences—there is also work that considers the case when agents have ordinal preferences [bouv10, brams14, aziz15b, segal17]. Although this line of work is related in that ordinal algorithms are stable, it is also quite different since usually the goal in these papers is to compute fair allocations if they exist or study the complexity of computing notions like possibly-fair or necessarily-fair allocations.

4 Approximate-stability in fair division of indivisible goods

Our aim is to design (approximately) stable algorithms for allocating a set of indivisible goods among two agents that guarantee pairwise maximin share (PMMS) and Pareto optimal (PO) allocations. However, before we try to design new algorithms, the first question that arises is “How do the existing algorithms fare? How stable are they?” We address this below.

4.1 How (approximately) stable are the existing algorithms?

We consider the following well-studied algorithms that guarantee PO and at least EF1.

  1. Adjusted winner protocol [brams96a, brams96b]: returns an EF1 and PO allocation for two agents.

  2. Leximin solution [plaut18]: returns a PMMS and PO allocation for two agents.

  3. Maximum Nash welfare solution [car16]: returns an EF1 and PO allocation for any number of agents.

  4. Fisher-market based algorithm [bar18]: returns an EF1 and PO allocation for any number of agents.

All the algorithms mentioned above perform poorly even in terms of the weaker relaxation of stability, i.e., weak-approximate-stability. To see this, consider the instance in Table 2(a), and note that the allocation that is output by any of these algorithms is agent A getting $g_1$ and agent B getting $g_2$. Next, consider the instance in Table 2(b). In this case, if we use any of these algorithms, then the allocation that is output is agent A getting $g_2$ and agent B getting $g_1$.

       A      B
g1    T−1    T−2
g2     1      2
(a) Original instance

       A      B
g1    T−3    T−2
g2     3      2
(b) Instance where agent A makes a mistake

Table 2: A family of instances (parameterized by the total value $T$) on which all four algorithms above are highly unstable.

Given this, let the values mentioned in Table 2(a) constitute the true valuation function $v$ of agent A. When reporting these, she receives the good $g_1$. Table 2(b) shows agent A's misreport $v'$, in which case she receives the good $g_2$. Recall from the definition of $\epsilon$-neighbours of $v$ (Definition 5) that $v' \in N_\epsilon(v)$, where $\epsilon = 2$. Therefore, we now have $\frac{u_A(\mathcal{A}, (v, v_B))}{u_A(\mathcal{A}, (v', v_B))} = \frac{T-1}{1}$, or in other words, all the four algorithms mentioned above are ($\epsilon$, $\beta$)-weakly-approximately-stable only for $\beta \ge T - 1$, even for such a small $\epsilon$.
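Instantiating Table 2 with, say, $T = 100$ makes the failure easy to reproduce; the sketch below uses the leximin solution, though by the discussion above the other three algorithms produce the same flip on this instance:

```python
from itertools import combinations

T = 100
GOODS = ["g1", "g2"]
vB = {"g1": T - 2, "g2": 2}
vA_true, vA_mist = {"g1": T - 1, "g2": 1}, {"g1": T - 3, "g2": 3}

def leximin(v1, v2):
    """Return agent 1's bundle under the leximin solution: maximize the sorted
    utility vector lexicographically (minimum utility first), using reports."""
    best, best_key = None, None
    for k in range(len(GOODS) + 1):
        for A1 in combinations(GOODS, k):
            A2 = [g for g in GOODS if g not in A1]
            key = tuple(sorted((sum(v1[g] for g in A1), sum(v2[g] for g in A2))))
            if best_key is None or key > best_key:
                best_key, best = key, set(A1)
    return best

u = lambda S: sum(vA_true[g] for g in S)
print(u(leximin(vA_true, vB)), u(leximin(vA_mist, vB)))   # 99 vs 1: a factor T-1 gap
```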

Remark: We can think of a question that a skeptical reader might have at this point, and we try to preemptively address it here. Seeing the instances and associated outcomes, one might wonder: “Is the outcome w.r.t. the valuations in Table 2(b) really that bad? After all, this is indeed the outcome one would expect if social welfare is a concern (and the issue becomes more pronounced if we replace the 2 in Table 2(b) with, say, a 20).” Our response to this is: recall that the setting under consideration is one where there is no money involved. Therefore, we believe that social welfare, or for that matter any measure which involves interpersonal comparison of utilities, is not a particularly good objective to be concerned about in such settings, since the way agents map their preferences to numerical values can be imprecise.

4.2 Are there fair and efficient algorithms that are stable?

The observation that previously studied algorithms perform poorly even in terms of the weaker relaxation of stability implies that we need to look for new algorithms. So, now, a natural question that arises here is: “Is there any hope at all for algorithms that are fair, PO, and stable?” Note that without the requirement of PO the answer to this question is a “Yes”—at least when the fairness notion that is being considered is EF1, since it is easy to observe that the well-known draft algorithm (where agents take turns picking their favourite good among the remaining goods), which is EF1 [car16, Sec. 3], is stable. However, if we require PO, then we show that the answer to the question above is a “No,” in that there are no stable algorithms that always return an EF1 and even approximately-PO allocation.

Theorem 1.

Let $\mathcal{A}$ be an algorithm that is stable and always returns an EF1 allocation. Then, for any $\alpha \ge 1$, $\mathcal{A}$ cannot be $\alpha$-PO.

Proof. Consider the instance in Table 3(a), which represents the agents' true valuation functions. For simplicity we assume that . Next, since the agents are symmetric, and since one can easily verify that in every EF1 allocation each of the agents has to get exactly two goods, let us assume w.l.o.g. that agent A receives . Given this, now consider the instance in Table 3(b), where agent B makes a mistake. Let us denote agent B's true utility function as , the misreport as , and and as the outcomes for the instances in Tables 3(a) and 3(b), respectively. From our discussion above, we know that , since agent B gets two goods from the set .

A B
(a) Original instance

A B
(b) Instance where agent B makes a mistake

Table 3: Instances used in the proof of Theorem 1.

Since is stable, we know that . Now, it is easy to see that this is only possible if the set has exactly two goods from . So, let us assume w.l.o.g. that . If this is the case, then note that the allocation Pareto dominates the allocation by a factor of , which in turn proves our theorem. ∎

Given this result and our observation in Section 4.1 that previously studied algorithms perform poorly even in terms of weak-approximate-stability, it is clear that the best one can hope for is to design new algorithms that are fair, efficient, and approximately-stable. In Section 4.4 we show that this is possible. However, before we do that, in the next section we first present a necessary and sufficient condition for PMMS allocations for the general case of $n$ agents.

4.3 A necessary and sufficient condition for existence of PMMS allocations

The general characterization result for PMMS allocations presented here will be useful in the next section to design an approximately-stable algorithm that produces a PMMS and PO allocation for the case of two agents. Additionally, we also believe that the result might potentially be of independent interest.

For an agent $i$, a set $G \subseteq M$, and a set $S \subseteq G$, the result uses a notion of the rank of $S$, denoted by $\mathrm{rank}_i(S, G)$, and defined as the number of subsets of $G$ that have value at most $v_i(S)$. More formally,

  $\mathrm{rank}_i(S, G) = |\{S' \subseteq G \mid v_i(S') \le v_i(S)\}|$.   (3)

Although we were not initially aware of the use of rank in the fair division literature, it turns out that it has been considered previously. rama13 considered it (in particular, according to our notation, they talk about $\mathrm{rank}_i(S, M)$) in the context of fair division among two agents who have strict preference orders over the subsets of $M$, and one of their results is a slightly weaker version (slightly weaker since they assume a strict order over subsets of $M$, which is not assumed in this paper) of the two-agent case of the theorem below [rama13, Thm. 1(2)].

Theorem 2.

Given an instance with $m$ indivisible goods and $n$ agents with additive valuation functions, an allocation $A$ is a pairwise maximin share allocation if and only if, for all $i, j \in [n]$,

  $\mathrm{rank}_i(A_i, A_i \cup A_j) \ge 2^{|A_i \cup A_j| - 1}$.

Before we present a formal argument to prove this theorem, we present a brief overview. Overall, the proof uses some observations about the ranking function. In particular, for a set $S \subseteq G$, one key observation is that the rank of $S$ is high enough (more precisely, greater than $2^{|G|-1}$) if and only if the value of this set is at least half that of $G$. Once we have this, the proof essentially follows by combining it with a few other observations about the ranking function and some simple counting arguments.

More formally, we start by making the following claims about the ranking function. The proofs of these claims appear in Appendix B.2.

Claim 1.

Let $G \subseteq M$ and $i$ be some agent. Then,

  1) for $S \subseteq G$, $\mathrm{rank}_i(S, G) > 2^{|G|-1}$ if and only if $v_i(S) \ge v_i(G)/2$,

  2) for $S, S' \subseteq G$, $v_i(S) \le v_i(S')$ if and only if $\mathrm{rank}_i(S, G) \le \mathrm{rank}_i(S', G)$.

Claim 2.

Let $G \subseteq M$, $i$ be some agent, $S \subseteq G$, and . Then,

  1) for , if , then

  2) .

Claim 3.

Let $A$ be an allocation, and $i, j$ be some agents. If , then

Equipped with the claims above, we are now ready to prove our theorem.

Proof. ($\Rightarrow$) Let us assume for the sake of contradiction that $A$ is a PMMS allocation and that there exist agents $i, j$ such that $\mathrm{rank}_i(A_i, A_i \cup A_j) < 2^{|A_i \cup A_j|-1}$. W.l.o.g., let us assume that . Next, consider the set . Since , we know from Claim 2(2) that (since there are $2^{|G|}$ subsets of $G$). This implies that there is a set $S$ and its complement such that , which in turn using Claim 1(1) implies that . However, note that this contradicts the fact that has an MMS partition w.r.t. in .

($\Leftarrow$) Let us assume for the sake of contradiction that there exists an agent such that does not have an MMS partition w.r.t. , but . This implies that , which in turn, using Claim 3 and the fact that , implies that . Next, since does not perceive to be an MMS partition w.r.t. , there must exist a partition such that and . This implies that, using Claim 1(1), we have that and , or in other words that a set (i.e., ) and its complement (i.e., ) both have rank greater than . Now, if this is the case, then one can see that, since , this implies there exists a set and its complement such that and . However, this is impossible because we can now use Claim 3 to see that both and have value less than , which in turn contradicts the fact that is an additive valuation function. ∎
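Both directions of Theorem 2 are easy to cross-check by brute force on small instances. The following sketch implements the rank function from (3) and compares, for the two-agent case (where $G = M$), the direct cut-and-choose condition of Definition 2 against the rank threshold; the helper names are ours:

```python
from itertools import chain, combinations
import random

def subsets(G):
    G = list(G)
    return chain.from_iterable(combinations(G, k) for k in range(len(G) + 1))

def value(v, S):
    return sum(v[g] for g in S)

def rank(v, S, G):
    """rank_i(S, G): number of subsets of G worth at most v(S), as in (3)."""
    return sum(1 for S2 in subsets(G) if value(v, S2) <= value(v, S))

def pmms_ok_by_definition(v, bundle, G):
    """Direct check: does `bundle` reach the agent's cut-and-choose value over G?"""
    mu = max(min(value(v, B), value(v, G) - value(v, B)) for B in subsets(G))
    return value(v, bundle) >= mu

def pmms_ok_by_rank(v, bundle, G):
    """Theorem 2's condition: the bundle's rank is at least 2^(|G| - 1)."""
    return rank(v, bundle, G) >= 2 ** (len(G) - 1)

# Cross-check the two conditions on random small instances (ties included).
random.seed(0)
for _ in range(100):
    v = {"g%d" % k: random.randint(1, 9) for k in range(4)}
    for B in subsets(v):
        assert pmms_ok_by_definition(v, set(B), set(v)) == pmms_ok_by_rank(v, set(B), set(v))
```

The assertion also exercises instances with ties, where a strict-order assumption like that of rama13 would not apply.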

In the next section we use this result to show an approximately-stable algorithm that is PMMS and PO when there are two agents.

4.4 rank-leximin: An approximately-stable PMMS and PO algorithm for two agents

The idea of our algorithm is simple. Instead of the well-known leximin algorithm, where one aims to maximize the minimum utility any agent gets, then the second minimum utility, and so on, our approach, which we refer to as the rank-leximin algorithm (Algorithm 1), is to do a leximin-like allocation, but based on the ranks of the bundles that the agents receive. Here, for an agent $i$ and a bundle $S$, by rank we mean $\mathrm{rank}_i(S, M)$, as defined in (3). That is, the rank-leximin algorithm maximizes the minimum rank of the bundle that any agent gets, then it maximizes the second minimum rank, then the third minimum rank, and so on. Note that this in turn induces a comparison operator between two partitions, and this is formally specified as rank-leximinCMP in Algorithm 1. Although the original leximin solution also returns a PMMS and PO allocation for two agents, recall that we observed in Section 4.1 that it does not provide any guarantee even in terms of weak-approximate-stability, even for small $\epsilon$. Rank-leximin, on the other hand, is PMMS and PO for two agents and, as we will show, is also approximately-stable for all $\epsilon \ge 0$. Additionally, it also returns an allocation that is PMMS and PO for any number of agents as long as they report ordinally equivalent valuation functions (see Definition 9).

Remark: Given the characterization result for PMMS allocations, one can come up with several algorithms that satisfy PMMS and PO. However, we consider rank-leximin here since it is a natural counterpart to the well-known leximin algorithm [plaut18]. Additionally, it turns out that, contrary to our initial belief, the idea of rank-leximin is not new. rama13 considered it in the context of algorithms for two-agent fair division that satisfy Pareto optimality, anonymity, the unanimity bound, and preference monotonicity (see [rama13, Sec. 3] for definitions), where the unanimity bound is a notion that one can show is equivalent to the notion of maximin share (MMS) that is used in the computational fair division literature. So, with the caveat that rama13 assume that the agents have strict preference orders over the subsets of $M$ (which is not assumed in this paper), the result of [rama13, Thm. 2] already proves that rank-leximin is PMMS and PO for two agents (MMS is equivalent to PMMS in the case of two agents), which is the result we show in Theorem 3. However, we still include our proof because of the indifference issue mentioned above, and since it almost follows directly from Theorem 2.

1: procedure rank-leximinCMP(P, Q)   ▷ input: two partitions P, Q ∈ Π_n(M)
2:   ▷ returns true if P ≺ Q, i.e., if P is before Q in the rank-leximin sorted order
3:   X ← agents sorted in non-decreasing order of the rank of their bundles in P, with ties broken in some arbitrary but consistent way throughout
4:   Y ← similar ordering as in X above, but based on the ranks of the bundles in Q
5:   for each k do
6:     i ← k-th agent in X
7:     j ← k-th agent in Y
8:     if the rank of i's bundle in P differs from the rank of j's bundle in Q then
9:       return true if the former is smaller
10:    end if
11:  end for
12:  return false
13: end procedure
14:
15: Input: for each agent i, their valuation function v_i
16: Output: an allocation that is PMMS and PO
17: perform a rank-leximin sort on Π_n(M) based on the rank-leximinCMP operator defined above
18: return the allocation that is the last element in the sorted order

Algorithm 1: rank-leximin algorithm
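The following is a brute-force sketch of Algorithm 1 for two agents, reusing value(), subsets(), and rank() from the sketch in Section 4.3; the fixed enumeration order plays the role of the arbitrary-but-consistent tie-breaking, and the exhaustive search is consistent with the exponential-time remark below:

```python
def rank_leximin(v1, v2, goods):
    """Among all 2-partitions of `goods`, return the partition whose sorted
    vector of bundle ranks is lexicographically largest: maximize the minimum
    rank first, then the remaining rank, as in rank-leximinCMP."""
    goods = sorted(goods)                    # fixed order = consistent tie-breaking
    best, best_key = None, None
    for A1 in subsets(goods):
        A1, A2 = set(A1), set(goods) - set(A1)
        key = tuple(sorted((rank(v1, A1, goods), rank(v2, A2, goods))))
        if best_key is None or key > best_key:
            best_key, best = key, (A1, A2)
    return best

# On Table 1's original instance this returns A = {g1, g4}, B = {g2, g3}
# (one of two rank-(8, 9) optima under this enumeration order); both bundles
# have rank at least 2^{m-1} = 8, so the allocation is PMMS by Theorem 2.
vA = {"g1": 104, "g2": 273, "g3": 186, "g4": 437}
vB = {"g1": 162, "g2": 250, "g3": 240, "g4": 348}
print(rank_leximin(vA, vB, vA))
```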

Below we first show that the rank-leximin algorithm always returns an allocation that is PMMS and PO for the case of two agents. The fact that it is PO can be seen by using some of the properties that we proved about the ranking function in the previous section, while the other property follows from combining Theorem 2 with a simple pigeonhole argument. Following this, we also show that rank-leximin always returns a PMMS and PO allocation when there are $n$ agents with ordinally equivalent valuation functions. The proof of this is slightly more involved, and it proceeds by first showing how rank-leximin returns such an allocation when all the agents are identical. Once we have this, the theorem follows by repeated application of Observation 3. However, since this result is not directly related to the core focus of this paper, the proof is deferred to Appendix B.3.

Theorem 3.

Given an instance with $m$ indivisible goods and two agents with additive valuation functions, the rank-leximin algorithm (Algorithm 1) returns an allocation that is PMMS and PO.

Proof. Let $A$ be the allocation that is returned by the rank-leximin algorithm (Algorithm 1). Below we will first show that $A$ is PO and subsequently argue why it is PMMS.

Suppose $A$ is not Pareto optimal. Then there exists another allocation $B$ such that for all $i$, $v_i(B_i) \ge v_i(A_i)$, and the inequality is strict for at least one of the agents, say $j$. Using Claim 1, this in turn implies that for all $i$, $\mathrm{rank}_i(B_i, M) \ge \mathrm{rank}_i(A_i, M)$, and $\mathrm{rank}_j(B_j, M) > \mathrm{rank}_j(A_j, M)$. However, this implies, from the procedure rank-leximinCMP in Algorithm 1, that $A$ precedes $B$ in the rank-leximin order, and this in turn directly contradicts the fact that $A$ was the allocation that was returned.

To show that $A$ is PMMS, consider agent $i$ and all the sets $S \subseteq M$ such that $\mathrm{rank}_i(S, M) \ge 2^{m-1}$. From Claim 2 we know that there are at least $2^{m-1} + 1$ such sets. Therefore, since the same holds for the other agent, by the pigeonhole principle there is at least one $S$ such that $\mathrm{rank}_1(S, M) \ge 2^{m-1}$ and $\mathrm{rank}_2(M \setminus S, M) \ge 2^{m-1}$. This implies that $(S, M \setminus S)$ is an allocation in which every agent's bundle has rank at least $2^{m-1}$. Now, since rank-leximin maximizes the minimum rank that any agent receives, we have that $\min_i \mathrm{rank}_i(A_i, M) \ge 2^{m-1}$, and so now we can use Theorem 2 to see that $A$ is a PMMS allocation. ∎

Theorem 4.

Given an instance with $m$ indivisible goods and $n$ agents with additive valuation functions, the rank-leximin algorithm (Algorithm 1) returns an allocation that is PMMS and PO if all the agents report ordinally equivalent valuation functions.

Now that we know rank-leximin produces a PMMS and PO allocation, we will move on to see how approximately-stable it is in the next section. However, before that we make the following observation.

Remark: The rank-leximin algorithm takes exponential time. Note that this is not surprising since finding PMMS allocations is NP-hard even for two identical agents—one can see this by a straightforward reduction from the well-known Partition problem.

4.4.1 rank-leximin is approximately-stable

Before we show how approximately-stable rank-leximin is, we state the following claims which will be useful in order to prove our result. The proofs of these claims appear in Appendix B.4.

Claim 4.

Let $v$ be an additive utility function and, for some $\epsilon \ge 0$, let $v' \in N_\epsilon(v)$. Then, for any ,

  1) if and , then .

Claim 5.

Given an instance with $m$ indivisible goods and two agents with additive valuation functions, let $A$ be a PMMS allocation. If for an agent $i$, , where , then

  1) , where  is the maximum valued good of agent $i$ in the bundle ,

  2) , where  is the minimum valued good of agent $i$ in the bundle ,

  3) .

Claim 6.

Given an instance with $m$ indivisible goods and two agents with additive valuation functions, let $A$ be the allocation that is returned by the rank-leximin algorithm for this instance. If and , then and .

Equipped with the claims above, we can now prove the approximate-stability bound for rank-leximin. The proof here proceeds by first arguing about how rank-leximin is stable with respect to equivalent instances. This is followed by showing upper and lower bounds on how weakly-approximately-stable it is, which in turn involves looking at several different cases and using several properties (some of them proved above and some that we will introduce as we go along) about the allocation returned by the rank-leximin algorithm.

Theorem 5.

Given an instance with $m$ indivisible goods and two agents with additive valuation functions, the rank-leximin algorithm is approximately-stable for all $\epsilon \ge 0$.

Proof. Let us consider two agents with valuation functions $v_1$ and $v_2$. Note that w.l.o.g. we can assume that agent 1 is the one that is making a mistake. Throughout, for some $\epsilon \ge 0$, let us denote this misreport by $v'_1 \in N_\epsilon(v_1)$. Also, let $A = \text{rank-leximin}(v_1, v_2)$ and $A' = \text{rank-leximin}(v'_1, v_2)$. Next, let us introduce the following notation that we will use throughout. For some arbitrary valuation functions $w_1, w_2$, if an allocation $P$ precedes an allocation $Q$ in the rank-leximin order with respect to $(w_1, w_2)$ (i.e., according to the rank-leximinCMP function in Algorithm 1), then we denote this by $P \prec Q$. In the case they are equivalent according to the rank-leximin operator, we denote it by $P \sim Q$. Additionally, we use $P \preceq Q$ to denote that $P$ either precedes or is equivalent to $Q$.

Equipped with the notation above, let us now prove our theorem. To do this, first recall from the definition of approximate-stability (see Definition 10) that in order to prove our theorem, we first need to show that if $v'_1 \in$ equiv($v_1$), then agent 1's utility is unchanged. To see why this is true, consider the rank-leximinCMP function in Algorithm 1 and observe that since $v_1$ and $v'_1$ induce the same bundle rankings, for any two partitions $P, Q$, $P \prec Q$ (i.e., $P$ appears before $Q$ in the rank-leximin order) with respect to $(v_1, v_2)$ if and only if $P \prec Q$ with respect to $(v'_1, v_2)$. This, along with the fact that rank-leximin uses a deterministic tie-breaking rule, implies that $\text{rank-leximin}(v_1, v_2) = \text{rank-leximin}(v'_1, v_2)$, thus showing that it is stable with respect to equivalent instances.

Having shown the above, let us move to the second part, where we show how weakly-approximately-stable rank-leximin is. For the rest of this proof, let $v'_1 \in N_\epsilon(v_1)$, for some $\epsilon \ge 0$. Also, let the ranking functions associated with the valuation functions $v_1$, $v_2$, and $v'_1$ be denoted $\mathrm{rank}_1$, $\mathrm{rank}_2$, and $\mathrm{rank}'_1$, respectively, and in order to keep the notation simple we use these abbreviations throughout.

In order to prove the bound on weak-approximate-stability, we show two lemmas. The first one shows an upper bound of 2 for the ratio $v_1(A_1)/v_1(A'_1)$ and the second one shows a lower bound. Since both proofs involve a lot of case-by-case analysis, we only present a brief sketch here and defer the complete proofs to Appendices B.4.1 and B.4.2.

Lemma 6.

For $\epsilon \ge 0$, if $v'_1 \in N_\epsilon(v_1)$, $A = \text{rank-leximin}(v_1, v_2)$, and $A' = \text{rank-leximin}(v'_1, v_2)$, then $\frac{v_1(A_1)}{v_1(A'_1)} \le 2$.

We proceed by considering three cases. Note that since rank-leximin returns a PMMS allocation (Theorem 3) we know from Theorem 2 that these are the only cases. Also, below we directly consider the case when , since otherwise the bound trivially holds.

Case 1. : In this case the bound follows by using Claim 3.

Case 2.  and : In this case we proceed by first observing that since and , we have that . This in turn implies , and so we have that . Now, in order to upper-bound this, we further consider two cases. The first is when , and here the required bound follows by using Claim 5 from which we know that . The second case is when , and here, since , we have that . This implies rank-leximin and rank-leximin. Given this, one can make a series of observations to conclude that and . However, this is a contradiction because rank-leximin breaks ties deterministically.

Case 3.  and : Using the facts that , , and , it is not hard to see that for all , . This observation in turn can be used to show that .

Finally, combining all the three cases above gives us our lemma. As mentioned previously, the complete proof appears in Appendix B.4.1. ∎

Next, we show a lower bound for .

Lemma 7.