Fair and Efficient Online Allocations with Normalized Valuations

09/25/2020 ∙ by Vasilis Gkatzelis, et al.

A set of divisible resources becomes available over a sequence of rounds and needs to be allocated immediately and irrevocably. Our goal is to distribute these resources to maximize fairness and efficiency. Achieving any non-trivial guarantees in an adversarial setting is impossible. However, we show that normalizing the agent values, a very common assumption in fair division, allows us to escape this impossibility. Our main result is an online algorithm for the case of two agents that ensures the outcome is envy-free while guaranteeing 91.6% of the optimal social welfare. We complement this with an impossibility result: there is no envy-free algorithm that guarantees more than 93.3% of the optimal social welfare.




1 Introduction

We consider a basic problem in online fair division: a set of divisible items becomes available over a sequence of rounds (one item per round), and in each round we need to make an irrevocable decision regarding how to distribute the corresponding item among a set of agents. The value of each agent for the item of round $t$ is revealed at the beginning of that round, and our goal is to ensure that the overall allocation at the end of the $T$ rounds is fair and efficient, despite the information limitations that we face.

Prior work on online resource allocation problems such as the one above has mostly focused on maximizing efficiency. In our setting, this could easily be achieved by fully allocating the item of each round to the agent with the largest value. However, this approach can often lead to outcomes that are patently unfair, which is unacceptable in many important real-world applications. For example, ensuring that the outcome is fair is crucial for food banks that allocate food each day to soup kitchens and other local charities depending on the demand Prendergast (2017), or software engineering companies that distribute shared computational resources among their employees Gorokh et al. (2020).

Achieving fairness in such an online setting can be significantly more complicated than just maximizing efficiency. This is mostly due to the fact that reaching a fair outcome may require a more holistic view of the instance at hand. For example, the fair-share property (also referred to as proportionality in some contexts), one of the classic notions of fairness, requires that each of the $n$ agents should eventually receive at least a $1/n$ fraction of their total value for all the items. But, agents who only value highly demanded items are harder to satisfy than agents who value items of low demand, and online algorithms may be unable to distinguish between these two types of agents soon enough. As a result, designing efficient online algorithms that also satisfy the fair-share property is an important, yet non-trivial, task.

In fact, it is easy to show that without imposing any normalization on the agent values, essentially the only algorithm that guarantees the fair-share property is the naive one that equally splits every item among all agents (see Appendix A for a proof). This yields an outcome that is inefficient, unless all agents happen to have the same values. But, the standard approach in fair division is to normalize the agents’ values so that they add up to the same constant (that constant is usually $1$). As we show in this paper, this normalization is sufficient for us to escape the strong impossibility result and achieve non-trivial efficiency guarantees while satisfying the fair-share property.

1.1 Our results and techniques

With the exception of a few results in Section 6, all of our results focus on instances involving two agents, which already pose several non-trivial obstacles.

We first consider the performance of non-adaptive online algorithms, i.e., algorithms whose allocation decision in each round $t$ depends only on the agents’ values for item $t$. A major benefit of these algorithms is that they need not keep track of any additional information, making them easy to implement. We focus on the interesting family of poly-proportional algorithms that are parameterized by a value $p \geq 0$, and in each round allocate to each agent $i$ a fraction of the item equal to $v_{it}^p / \sum_j v_{jt}^p$. For $p = 0$, we recover the algorithm that splits each item equally among the agents (which satisfies fair-share but can be inefficient), while for $p \to \infty$ we get the algorithm that allocates each item to the agent with the highest value (which is efficient but violates fair-share). Another well-studied algorithm from this family, that is used widely in practice, is the proportional allocation (or just proportional) algorithm, which corresponds to the case $p = 1$. We show that this algorithm satisfies fair-share and is a significant improvement in terms of efficiency: it guarantees $2(\sqrt{2}-1) \approx 0.828$ of the optimal social welfare (Theorem 1).

As the value of the parameter $p$ grows, the corresponding poly-proportional algorithm allocates each item more “aggressively”, i.e., a larger fraction goes to the agents with the highest values. As a result, higher values of $p$ lead to increased efficiency, but may also lead to the violation of the fair-share property. We precisely quantify this intuition by first showing that for all $p > 2$ the corresponding poly-proportional algorithm does not satisfy fair-share (Lemma 1). Then, we show that the poly-proportional algorithm with parameter $p = 2$, the quadratic-proportional algorithm, satisfies fair-share and guarantees approximately $0.894$ of the optimal social welfare (Theorem 2). As a result, we conclude that this is the optimal approximation achievable by a poly-proportional algorithm that satisfies fair-share.

Moving beyond non-adaptive algorithms, we proceed to study the extent to which adaptivity could lead to even better approximation guarantees. With that goal in mind, we propose the family of guarded poly-proportional algorithms, which are a slight modification of the poly-proportional algorithms, also parameterized by $p$. We show that every algorithm in this family satisfies fair-share, and our main result is that the guarded poly-proportional algorithm with an appropriately chosen parameter guarantees $0.916$ of the optimal social welfare (Theorem 3). On the other hand, we prove that no fair-share algorithm (adaptive or non-adaptive) can achieve an approximation to the optimal welfare better than $0.933$ (Theorem 4), thus establishing that our positive result is near optimal.

To prove our results, we leverage the fact that our algorithms have a closed-form expression for the agents’ allocations and utilities. Using this fact, we can write a mathematical program that computes the worst-case approximation to the optimal welfare over all instances, with variables for the value of each agent for each item and for the ratio between the agents’ values. Even though this program is not itself convex (so, at first glance, it is unclear how useful it is), we show that under a suitable choice of variables and constraints, fixing some of the variables (i.e., treating them as constants) gives a linear program with respect to the remaining variables. The majority of the constraints in this LP are non-negativity constraints, so, using the fundamental theorem of linear programming, we conclude that the worst-case instance has only a few (two or three, depending on the algorithm) items with positive valuations. Once we have such small instances, we can analyze the approximation using simple calculus. See the proofs of Theorems 1, 2, and 3 for details.

We conclude with a brief discussion regarding instances with $n > 2$ agents. We already know from the work of Caragiannis et al. (2012) on the price of fairness that even offline algorithms cannot achieve an approximation better than $O(1/\sqrt{n})$; we complement this result by showing that the non-adaptive proportional algorithm matches this bound. Finally, we provide an interesting local characterization of all online algorithms that satisfy the fair-share property.

2 Related Work

The same model that we consider in this paper, i.e., online allocation of divisible items with normalized agent valuations, was very recently studied by Gorokh et al. (2020). But, rather than introducing fairness as a hard constraint, like we do here, they (approximately) maximize the Nash social welfare objective. On the other hand, Bogomolnaia et al. (2019) maximize efficiency subject to fair-share constraints, like we do, but not in an adversarial setting. The agent values are stochastically generated and fairness is guaranteed only in expectation.

An additional motivation behind our assumption that the agents’ values are normalized comes from systems where the users are asked to express their value using a budget of some artificial currency in the form of tokens. If a user has a high value for a good then she can use more tokens to convey this information to the algorithm. Since all users have the same budget, their values are normalized by design. A natural, and very well-studied algorithm in these systems is the proportional algorithm, which distributes each item in proportion to the expressed value (see, e.g., Zhang (2005); Feldman et al. (2009); Christodoulou et al. (2016); Brânzei et al. (2017)). We provide an analysis of this algorithm, but we also achieve improved results using alternative algorithms.

Zeng and Psomas (2020) considered the trade-off between fairness and efficiency under a variety of adversaries, but in a setting with indivisible items and non-normalized valuations. Against the strong adversary studied here, their results are negative: no algorithm with non-trivial fairness guarantees can Pareto-dominate a uniformly random allocation.

More broadly, our paper is part of the growing literature on online, or dynamic, fair division. Much of this prior work analyzes settings where the agents are static and the resources arrive over time, like we do Walsh (2011); Benade et al. (2018); He et al. (2019). Another line of work studies the allocation of static resources among dynamically arriving and departing agents Kash et al. (2014); Friedman et al. (2015, 2017); Im et al. (2020).

3 Preliminaries

We consider the problem of allocating $T$ divisible items among a set of $n$ agents. A fractional allocation $x$ defines for each agent $i$ and item $t$ the fraction $x_{it}$ of that item that the agent will receive. A feasible allocation satisfies $\sum_i x_{it} \leq 1$ for all items $t$.

We assume the valuations of the agents are additive: each agent $i$ has valuation $v_{it}$ for item $t$, and utility $u_i(x) = \sum_t v_{it} x_{it}$ for an allocation $x$ (where $x_{it}$ denotes the fraction of item $t$ that agent $i$ receives). We also assume that the agents’ valuations are normalized so that $\sum_t v_{it} = 1$ for every agent $i$. We evaluate the efficiency of an allocation using the social welfare (SW), i.e., the sum of all agents’ utilities, $SW(x) = \sum_i u_i(x)$.

An allocation satisfies fair-share if $u_i(x) \geq \frac{1}{n} \sum_t v_{it} = \frac{1}{n}$ for every agent $i$. We say that an algorithm satisfies fair-share if it always outputs an allocation that satisfies fair-share. Another popular definition of fairness is envy-freeness, which dictates that no agent values the allocation of some other agent more than her own. It is well known that if every item is fully allocated, i.e., $\sum_i x_{it} = 1$ for all $t$, then envy-freeness implies fair-share, and for two-agent instances (which is the main focus of this paper) the two notions coincide.
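For completeness, the implication from envy-freeness to fair-share can be written in one line (the notation $A_j$ for agent $j$'s fractional bundle is ours):

```latex
u_i(x) \;=\; v_i(A_i) \;\ge\; \frac{1}{n}\sum_{j=1}^{n} v_i(A_j)
       \;=\; \frac{1}{n}\, v_i\Big(\bigcup_{j=1}^{n} A_j\Big)
       \;=\; \frac{1}{n}\sum_{t=1}^{T} v_{it} \;=\; \frac{1}{n},
```

where the inequality averages the envy-freeness conditions $v_i(A_i) \geq v_i(A_j)$ over all agents $j$, and the middle equality uses the assumption that every item is fully allocated.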

The item valuations are not available to us up-front; instead, the items arrive online (one per round) and the agent values for the item of round $t$ are revealed when the item arrives. The algorithm then makes an irrevocable decision about how to allocate the item before moving on to the next round. We evaluate our algorithms using worst-case analysis, so one can think of the values being chosen by an adaptive adversary aiming to hurt the algorithm’s performance. Throughout the paper our algorithms do not need to know the total number of rounds $T$, but all our negative results apply even to algorithms that have this information.

We say an algorithm is non-adaptive if its allocation decision for round $t$ solely depends on the valuations at round $t$, whereas an adaptive algorithm can use the valuations and allocations of all the previous rounds. An interesting family of non-adaptive algorithms, parametrized by a value $p \geq 0$, are ones that we call poly-proportional algorithms, whose allocation in each round is proportional to $v_{it}^p$, i.e., each agent $i$ is allocated a fraction $v_{it}^p / \sum_j v_{jt}^p$. For $p = 0$ this becomes the equal-split algorithm, for $p = 1$ the proportional algorithm, and for $p \to \infty$ the greedy one.
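To make the allocation rule concrete, here is a minimal sketch of the poly-proportional family (the function names are ours, not from the paper):

```python
def poly_proportional_share(values, p):
    """Fraction of the current item given to each agent under the
    poly-proportional rule with parameter p (p=0: equal split,
    p=1: proportional, p -> infinity approaches greedy)."""
    powered = [v ** p for v in values]
    total = sum(powered)
    if total == 0:  # nobody values the item: split it equally
        return [1.0 / len(values)] * len(values)
    return [w / total for w in powered]

def run_online(instance, p):
    """Utility of each agent after allocating every round's item online.
    `instance` is a list of rounds; each round lists the agents' values."""
    n = len(instance[0])
    utility = [0.0] * n
    for round_values in instance:
        shares = poly_proportional_share(round_values, p)
        for i in range(n):
            utility[i] += shares[i] * round_values[i]
    return utility
```

For example, on the two-round instance where agent 1 values the items $(0.6, 0.4)$ and agent 2 values them $(0.4, 0.6)$, the proportional algorithm ($p=1$) gives each agent utility $0.52 \geq 1/2$.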

Given some algorithm $\mathcal{A}$, let $\mathcal{A}(v)$ denote the overall allocation that it outputs on an instance with agent values $v$, and let $x^*(v)$ be the social welfare maximizing allocation. $\mathcal{A}$ is an $\alpha$-approximation to the optimal social welfare if $SW(\mathcal{A}(v)) \geq \alpha \cdot SW(x^*(v))$ for every instance $v$.

Note that our algorithms are constrained to be online and to always output fair-share outcomes, while the welfare maximizing benchmark is restricted by neither one of the two.

4 Non-Adaptive Algorithms

Non-adaptive algorithms have the important benefit that they need not keep track of historical information regarding the agents’ allocation or preferences. A naive example of such an algorithm is equal-split, i.e., the poly-proportional algorithm with $p = 0$. Since this algorithm splits every item equally among the two agents, they both always receive value exactly $1/2$, and hence the outcome is fair-share. However, this outcome can be very inefficient, leading to a $1/2$ approximation to the optimal welfare (e.g., consider an instance where agent 1 only values the first item and agent 2 only values the second: equal-split achieves welfare $1$, while the optimal welfare is $2$).

Our first result analyzes the widely-used proportional algorithm ($p = 1$) and shows that it guarantees $2(\sqrt{2}-1) \approx 0.828$ of the optimal social welfare. This is already a big improvement compared to $1/2$, but we then also provide a fair-share algorithm that improves this further, to approximately $0.894$. Proofs missing from this section can be found in Appendix B.

Theorem 1.

The proportional algorithm satisfies fair-share and gives a $2(\sqrt{2}-1) \approx 0.828$ approximation to the optimal welfare.


First we prove the envy-freeness of the proportional algorithm. We will use Milne’s inequality Milne (1925), which states that for all $a_1, \dots, a_T, b_1, \dots, b_T > 0$:
$$\sum_{t=1}^{T} \frac{a_t b_t}{a_t + b_t} \;\leq\; \frac{\left(\sum_{t} a_t\right)\left(\sum_{t} b_t\right)}{\sum_{t} (a_t + b_t)}.$$

Plugging in $a_t = v_{1t}$ and $b_t = v_{2t}$, the LHS is exactly the value of agent 1 for agent 2’s allocation under the proportional algorithm, while the RHS is equal to $\frac{1 \cdot 1}{1 + 1} = \frac{1}{2}$. Hence each agent’s value for the other’s allocation is at most $1/2$; since her value for her own allocation is $1$ minus that quantity, the proportional algorithm is envy-free.

We now prove the efficiency guarantee of the proportional algorithm. Given an instance, for each round $t$ let $a_t$ denote the value of agent 1 and let $r_t$ denote the ratio between the two agents’ values in that round, so that agent 2’s value is $r_t a_t$. Let $\mathrm{ALG}$ denote the welfare of the proportional algorithm.

Now, consider the following mathematical program:

subject to (1)

The objective is to minimize the approximation to welfare we receive from the algorithm. In this program, we don’t enforce that the agents’ values add up to $1$; we simply require them to be equal to each other (constraint 1). Instead, we ask that the optimal welfare is equal to $1$ (constraint 2).

First, we argue that solving this program would give us the worst-case approximation to welfare. Consider an arbitrary feasible solution to this program; by dividing each agent’s values by their common total value we get a feasible instance for the original problem. Furthermore, the approximation to welfare in this instance is equal to the value of the objective: the social welfare of the proportional algorithm and the optimal social welfare are the program’s objective and $1$, respectively, each divided by the normalization term. Showing that an arbitrary online instance gives a feasible solution to this program with the approximation to welfare unchanged is equally straightforward.

Second, notice that for any fixed choice of the ratio variables, the remaining program, whose only variables are agent 1’s values, is a linear program with $T$ variables. By the fundamental theorem of linear programming, a minimizer occurs at a corner of the feasible region, i.e., there is a minimizer at which $T$ constraints are tight. Since the first two constraints account for only two of these, at least $T - 2$ of the tight constraints are non-negativity constraints. So the worst-case approximation occurs when there are exactly two variables/rounds with positive value for agent 1. Without loss of generality (the proportional algorithm is memoryless) these are the first two items.

Third, for every instance where agent 1 values only the first two items, the approximation to optimal welfare is minimized when agent 2 also values only the first two items.

Now, consider the two-round instance, in the original notation, where agent 1 has value $a$ for item 1 and $1-a$ for item 2, while agent 2 has values $b$ and $1-b$. Without loss of generality $a \geq b$, which implies $1-b \geq 1-a$. Therefore, the optimal welfare is $a + (1-b)$.

Then, overloading notation, we have that the approximation to the welfare is
$$f(a, b) \;=\; \frac{\frac{a^2 + b^2}{a + b} + \frac{(1-a)^2 + (1-b)^2}{(1-a) + (1-b)}}{a + (1-b)}.$$

We analyze this function by taking partial derivatives and analyzing all critical points. We find that the worst approximation to the optimal welfare is achieved for $a = 1/\sqrt{2}$, $b = 1 - 1/\sqrt{2}$, and has value $2(\sqrt{2}-1) \approx 0.828$. See Appendix B for the missing details. ∎
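This closed form can be sanity-checked numerically. The sketch below (our own, relying on the reduction to two-item instances) grid-searches the proportional algorithm's approximation ratio:

```python
from math import sqrt

def proportional_ratio(a, b):
    """Welfare of the proportional algorithm divided by the optimal
    welfare on the two-item instance where agent 1 values the items
    (a, 1-a) and agent 2 values them (b, 1-b)."""
    alg = (a * a + b * b) / (a + b) \
        + ((1 - a) ** 2 + (1 - b) ** 2) / (2 - a - b)
    opt = max(a, b) + max(1 - a, 1 - b)
    return alg / opt

# Grid search over two-item instances (the LP argument reduces the
# worst case to such instances).
worst = min(
    proportional_ratio(i / 400, j / 400)
    for i in range(1, 400)
    for j in range(1, 400)
)
```

The grid minimum lands very close to $2(\sqrt{2}-1) \approx 0.828$, attained near $a = 1/\sqrt{2}$, $b = 1 - 1/\sqrt{2}$.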

4.1 Performance of poly-proportional algorithms

We now study the family of poly-proportional algorithms more broadly. As we mentioned in the introduction, poly-proportional algorithms with higher values of $p$ may lead to increased social welfare, but they also make it increasingly likely that the fair-share property will be violated. We first show that we cannot increase $p$ by too much before losing fair-share: for any $p > 2$ the corresponding poly-proportional algorithm does not satisfy fair-share.

Lemma 1.

The poly-proportional algorithm with parameter $p$ does not satisfy fair-share for any $p > 2$.


Consider the following two-item instance, for a parameter $a < 1$ to be chosen later. The first round has values $a$ and $1$ for agents 1 and 2, respectively, while the second round has values $1-a$ and $0$. Agent 2 has utility $\frac{1}{a^p + 1} \geq 1/2$. Agent 1 gets utility $\frac{a^{p+1}}{a^p + 1} + (1-a)$. At $a = 1$ this expression equals exactly $1/2$, while its derivative with respect to $a$ at $a = 1$ equals $\frac{p+2}{4} - 1$, which is strictly positive for all $p > 2$. Hence, for $a$ slightly below $1$, the utility of agent 1 is strictly less than $1/2$. ∎
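The following self-contained check (our own illustrative instance, not necessarily the one used in the paper) exhibits the violation concretely for $p = 2.5$, and shows that $p = 2$ is safe on the same instance:

```python
def two_agent_poly_utility(instance, p):
    """Utilities of both agents when each round's item is split in
    proportion to the p-th power of the agents' values."""
    u = [0.0, 0.0]
    for v1, v2 in instance:
        s = v1 ** p + v2 ** p
        if s == 0:  # nobody values this item
            continue
        u[0] += v1 * (v1 ** p / s)
        u[1] += v2 * (v2 ** p / s)
    return u

# Agent 1 values the items (0.9, 0.1); agent 2 values them (1, 0).
instance = [(0.9, 1.0), (0.1, 0.0)]
```

With $p = 2.5$, agent 1's utility is roughly $0.491 < 1/2$, violating fair-share; with $p = 2$ it is roughly $0.503 \geq 1/2$.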

Our main result in this section is for the poly-proportional algorithm with parameter $p = 2$: we call this the quadratic-proportional algorithm. We show that this algorithm satisfies fair-share and achieves an approximately $0.894$ approximation to the optimal welfare, a significant improvement over the proportional algorithm. By Lemma 1, the quadratic-proportional algorithm guarantees the optimal social welfare within the class of fair-share poly-proportional algorithms.

Theorem 2.

The quadratic-proportional algorithm satisfies fair-share and achieves an approximately $0.894$ approximation to the optimal social welfare.

Theorem 2 follows from Lemmas 2 and 3.

Lemma 2.

The quadratic-proportional algorithm satisfies fair-share.


It suffices to show that agent 1 gets utility at least $1/2$ in all instances: if this holds, then the same holds for agent 2, by symmetry. Given any instance, we first show that merging and splitting certain items (rounds) results in a new instance where agent 1 is worse off.

Merging a set $S$ of items creates a new item in which each agent’s value is the sum of her values over the items of $S$. A split operation on an item with values $(v_1, v_2)$, $v_1 > v_2 > 0$, creates two items, with values $(v_2, v_2)$ and $(v_1 - v_2, 0)$.

Claim 1.

Let $I$ be any instance, and let $I'$ be the instance where we split all items with $v_{1t} > v_{2t} > 0$. Then the utility of agent 1, in the quadratic-proportional algorithm, in instance $I'$ is at most her utility in instance $I$.


It suffices to show that the utility of agent 1 weakly decreases after splitting a single item with values $(v_1, v_2)$ such that $v_1 > v_2 > 0$. Let $u = \frac{v_1 \cdot v_1^2}{v_1^2 + v_2^2}$ be the utility of agent 1 (for this item) before splitting, and $u' = \frac{v_2}{2} + (v_1 - v_2)$ her utility after splitting.

It suffices to show that $u - u'$ is non-negative for all $v_1 > v_2 > 0$. A direct calculation gives $u - u' = \frac{v_2 (v_1 - v_2)^2}{2(v_1^2 + v_2^2)} \geq 0$. Note that we used the fact that $v_1 > v_2$ only to ensure that splitting was a valid operation. ∎

Claim 2.

Let $I$ be any instance, and let $I'$ be the instance where we take two arbitrary items of $I$ that satisfy $v_{1t} \leq v_{2t}$ and merge them. Then the utility of agent 1, in the quadratic-proportional algorithm, in instance $I'$ is at most her utility in instance $I$.


Let $(a_1, b_1)$ and $(a_2, b_2)$ be the two items we want to merge, with $a_i \leq b_i$ for $i \in \{1, 2\}$. We show that
$$\frac{(a_1 + a_2)^3}{(a_1 + a_2)^2 + (b_1 + b_2)^2} \;\leq\; \frac{a_1^3}{a_1^2 + b_1^2} + \frac{a_2^3}{a_2^2 + b_2^2}.$$

We can simplify this expression by expanding both sides; if the resulting inequality holds trivially, we are done, so assume that this is not the case. We drop the second term of the resulting sum and, since $a_i \leq b_i$, we can lower bound the third term; it then remains to verify a single inequality between the remaining terms, which again follows from $a_i \leq b_i$. ∎

We repeatedly apply Claims 1 and 2, until no splitting or merging is possible, to get a worst-case instance for agent 1. This instance may have multiple items with zero value for agent 2, which we can simply combine into a single item. Since splitting is no longer possible, there are no items with $v_{1t} > v_{2t} > 0$. Since merging is not possible, there is at most one item with $v_{1t} \leq v_{2t}$. Therefore, we have an instance with two items, one with both values positive (that we cannot merge) and one with zero value for agent 2. Let $a$ be the value of agent 1 for item 1, and $1-a$ her value for item 2. Agent 2’s values are $1$ and $0$.

Agent 1 has utility $\frac{a \cdot a^2}{a^2 + 1} + (1-a)$. It is easy to confirm that this function is minimized at $a = 1$, where it takes the value $1/2$. ∎

Lemma 3.

The quadratic-proportional algorithm achieves an approximately $0.894$ approximation to the optimal social welfare.

We start by showing that two item instances are the worst case. This is, in fact, true for all algorithms in the poly-proportional family.

Claim 3.

For any $p \geq 0$, the worst-case instance (in terms of approximation) for the poly-proportional algorithm with parameter $p$ has at most two items.


Similarly to Theorem 1, one can write a mathematical program, with variables for the agents’ values and their ratios, that computes the worst-case approximation to welfare, and then observe that for every fixed choice of the ratios the remaining program is in fact linear. Applying the fundamental theorem of linear programming, we conclude that at most two variables are non-zero. We defer the details to Appendix B. ∎

Proof of Lemma 3.

Given Claim 3 we only need to consider two-item instances. Let $a$ and $b$ be the agents’ values for item 1, and $1-a$ and $1-b$ their values for item 2.

Without loss of generality, assume that $a \geq b$ (and therefore $1-b \geq 1-a$). The optimal welfare becomes $a + (1-b)$. The approximation to welfare of the quadratic-proportional algorithm is
$$\frac{\frac{a^3 + b^3}{a^2 + b^2} + \frac{(1-a)^3 + (1-b)^3}{(1-a)^2 + (1-b)^2}}{a + (1-b)}.$$

In the remainder of the proof we take partial derivatives with respect to $a$ and $b$ and analyze the critical points, using numerical solvers for part of the proof. The worst critical point gives an approximation of approximately $0.894$. See Appendix B for details. ∎

5 Adaptive Algorithms

Moving beyond non-adaptive algorithms, in this section we consider the benefits of being adaptive. In deciding how to allocate the item of each round , adaptive algorithms can take into consideration, e.g., the utility of each agent so far, or what portion of their total value is yet to be realized. But, what would be a useful way to leverage this information in order to achieve improved approximation guarantees?

We propose a natural way to modify the family of poly-proportional mechanisms studied in the previous section. Specifically, we use the additional information to “guard” against the violation of the fair-share property. To motivate this modification, assume that at the end of some round $t$ during the execution of a poly-proportional algorithm with parameter $p$, the utility that one of the agents, say agent $i$, has received so far plus her value for all remaining items is exactly half her total value, i.e.,
$$u_i^{(t)} + \sum_{t' > t} v_{it'} = \frac{1}{2},$$
where $u_i^{(t)}$ denotes agent $i$’s utility from the first $t$ rounds.

This would mean that, unless that agent receives all of the remaining items that she has positive value for in full, she would not receive her fair share. We refer to this as a critical point and use it to define the family of guarded poly-proportional algorithms parametrized by $p$: while no agent has reached a critical point, the algorithm is identical to the corresponding non-adaptive poly-proportional one; but, if some agent reaches a critical point, then all the remaining items are fully allocated to that agent. It therefore leverages adaptivity in a simple way, by checking for critical points.

Note that a critical point may not necessarily arise only at the beginning or the end of a round. However, it is easy to show that we can assume this is the case without loss of generality. Roughly speaking, if a critical point is reached during the execution of some round $t$ while a fraction $\lambda$ of that item has been allocated, then we can divide that item into two pieces (of sizes $\lambda$ and $1 - \lambda$), creating an instance with $T + 1$ items where the critical point is reached at the end of a round, and without affecting the outcome of the algorithm. We discuss this in more detail in Appendix C.
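A minimal two-agent sketch of the guarded rule (ours, and merely illustrative): it allocates each item at the poly-proportional rate, solves for the fraction of the current item at which an agent's "utility so far plus remaining value" would hit her fair share, and from that moment on gives everything to that agent.

```python
def guarded_poly_proportional(instance, p):
    """Two-agent guarded poly-proportional sketch.  `instance[t]` is the
    pair of values (v_1t, v_2t); mid-round critical points are handled
    by allocating each item continuously."""
    total = [sum(r[0] for r in instance), sum(r[1] for r in instance)]
    fair = [total[0] / 2, total[1] / 2]
    rem_after = total[:]            # value in rounds after the current one
    utility = [0.0, 0.0]
    guarded = None                  # agent past her critical point, if any
    for v in instance:
        rem_after = [rem_after[0] - v[0], rem_after[1] - v[1]]
        if guarded is not None:
            utility[guarded] += v[guarded]
            continue
        powered = [v[0] ** p, v[1] ** p]
        s = powered[0] + powered[1]
        shares = [w / s for w in powered] if s > 0 else [0.0, 0.0]
        # earliest fraction x of this item at which some agent's
        # remaining guaranteed value drops to her fair share
        x_star, who = 1.0, None
        for i in (0, 1):
            slope = (1 - shares[i]) * v[i]   # rate at which potential falls
            head = utility[i] + v[i] + rem_after[i] - fair[i]
            if slope > 1e-12 and 0 <= head / slope < x_star:
                x_star, who = head / slope, i
        for i in (0, 1):
            utility[i] += x_star * shares[i] * v[i]
        if who is not None:                   # critical point reached
            guarded = who
            utility[who] += (1 - x_star) * v[who]
    return utility
```

On the instance where the plain $p = 2.5$ algorithm violates fair-share (agent 1 values $(0.9, 0.1)$, agent 2 values $(1, 0)$), the guarded version brings agent 1 back up to exactly her fair share of $1/2$.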

If some agent reaches a critical point then, clearly, these algorithms ensure that the agent will receive her fair share. But, this does not immediately imply that the other agent will also receive her fair share. For this to be true, the other agent should have received her fair share before that critical point, because she will receive no more items.

Our next result shows that, in fact, this family of algorithms always satisfies fair-share.

Lemma 4.

The guarded poly-proportional algorithm with parameter $p$ satisfies fair-share for every $p \geq 0$.


If there is no critical point the statement trivially holds, so assume, without loss of generality, that agent 1 reaches a critical point at round $t^*$. By definition, we have that $u_1^{(t^*)} + \sum_{t > t^*} v_{1t} = \frac{1}{2}$. Since agent 1 receives all the remaining items in full, her final utility is exactly $1/2$, so she receives her fair share. That is, it remains to show that fair-share is satisfied for agent 2.

Similarly to the proofs of Theorem 1 and Lemma 2, we will write a mathematical program with variables for the agents’ values $v_{1t}$ and $v_{2t}$, for all rounds $t$. The goal of the program this time will be to find a worst-case instance with respect to agent 2, given that $t^*$ is a critical point for agent 1.

Agent 2’s utility from the rounds up to $t^*$ can be expressed in closed form using the poly-proportional shares, while she receives nothing from the rounds after $t^*$. Consider the corresponding program.

Notice that given a feasible solution to this program one can always construct a valid online allocation instance, where the guarded poly-proportional algorithm with parameter $p$ will reach a critical point for agent 1 at round $t^*$ and agent 2’s utility is exactly the objective function, and vice versa. Proving the lemma is therefore equivalent to showing that the optimal value of this program is at least $1/2$.

Consider any fixed choice for the ratio variables: the remaining program is linear, and therefore, by the fundamental theorem of linear programming, we know that there exists an optimal solution with as many tight constraints as variables. The first constraint is already tight, and all but a few of the remaining tight constraints are non-negativity constraints, so we have at most 3 positive variables. In the remainder of the proof we consider all the cases; details are deferred to Appendix C. ∎

For non-adaptive algorithms we observed that efficiency increases with $p$ but, unfortunately, the largest value that yields a fair-share algorithm is $p = 2$. For the guarded poly-proportional family we can get a fair-share algorithm for every $p$, but how does the efficiency depend on this value? For larger values of $p$, the algorithm is trying to maximize social welfare more aggressively, but this means that it is more likely to reach a critical point, after which it is forced to be inefficient.

Based on a class of instances provided in Appendix C, Figure 1 provides approximation upper bounds quantifying precisely this trade-off: if for each $p$ we restrict our attention to instances where the corresponding poly-proportional algorithm does not reach a critical point, then the performance increases with $p$. But, as $p$ increases, the set of instances with a critical point keeps growing and the greediness of the algorithm gradually hurts its efficiency.

For each value of $p$, the points in the plot upper bound the algorithm’s approximation, so the most promising choice is the value of $p$ where the two curves meet. Our main result is that the guarded poly-proportional algorithm with this parameter achieves a $0.916$ approximation to the optimal social welfare which, as the figure indicates, is essentially optimal within the family of guarded poly-proportional algorithms.

Figure 1: Approximation to the optimal welfare by guarded poly-proportional algorithms for different values of , depending on whether the instance has a critical point or not
Theorem 3.

The guarded poly-proportional algorithm with an appropriately chosen parameter achieves a $0.916$ approximation to the optimal social welfare.


Let $\alpha$ be the approximation to the optimal welfare of the algorithm. We encode an instance with variables for the agents’ values in each round. Let $t^*$ be the critical point (if any) and, without loss of generality, assume that agent 1 is the one who reaches it. Agent 1’s utility is then exactly half her total value, while agent 2’s utility comes from the rounds before $t^*$. Similarly to Theorem 1 and Lemma 3, we write a mathematical program for the optimal approximation ratio:

The first constraint encodes the fact that $t^*$ is a critical point: the LHS is the total value of agent 1, while the RHS is twice the utility of agent 1. These should be equal since $t^*$ is a critical point for agent 1. The second constraint equalizes the agents’ values (instead of normalizing them to $1$), while the third constraint normalizes the optimal welfare to $1$. One can go from an arbitrary feasible solution of this program to a valid instance by dividing each value by the agents’ common total value, and vice versa, while the approximation to the optimal welfare (which is equal to the algorithm’s welfare when the optimal welfare is $1$) is exactly the objective of this program.

Now observe that for every fixed choice of the ratio variables we get a linear program with respect to the value variables, whose coefficients are determined by the fixed ratios.

By the fundamental theorem of linear programming, an optimal solution has as many tight constraints as variables; with the first three constraints tight, the rest of the tight constraints are non-negativity constraints, so any optimal solution should have exactly three strictly positive variables.

We take cases depending on the position of the strictly positive variables relative to $t^*$. Specifically, our three strictly positive variables are either all three after the critical point, two after and one before, one after and two before, or all three before the critical point. The first case is, of course, impossible (since the first constraint cannot be satisfied), so we consider each of the other ones.

For each of the cases considered, we write a closed form for the approximation to the welfare as a function of the values, which we then minimize. For the case with one item before and two items after the critical point we get a worst-case approximation of $0.916$. The case corresponding to no critical point also gives a worst-case approximation of $0.916$. This corresponds to the intuition from Figure 1. Details can be found in Appendix C. ∎

We complement our positive result by showing that no fair-share adaptive algorithm, even with full knowledge of the number of items $T$, can achieve an approximation to the welfare much better than the guarded poly-proportional family.

Theorem 4.

There is no fair-share algorithm that achieves an approximation to the optimal welfare better than $0.933$.


Assume that there exists an online algorithm that achieves an approximation better than $0.933$, and consider the following two instances. Both instances have the same agent values in the first round, and the same value for agent 1 in the second round; in the second instance, however, part of agent 2’s value is deferred from the second round to a third round. In what follows, we show that no online algorithm can simultaneously satisfy the fair-share property and guarantee an approximation better than $0.933$ in both of these two instances. This argument takes advantage of the fact that, prior to the second round, no online algorithm can distinguish between these two instances.

Case 1.

Assume the algorithm allocates less than of item 1 to agent 1 in the first round, i.e., , and consider instance 1. Fair-share for agent 1 implies that

The algorithm’s welfare is therefore

while the optimal welfare is , so

Case 2.

Now, suppose instead that agent 1 receives a large fraction of item 1, and consider the amount of the item that the algorithm would allocate to agent 2 in round 2 if the second instance’s values were realized. If this amount is too small, the fair-share property will be violated for agent 2, because her utility is

Case 3.

Finally, suppose neither of the previous cases applies, and consider the second instance. The agents’ utilities are

This leads to a social welfare of

while the optimal welfare , so

6 Instances Involving Multiple Agents

We now briefly turn to instances with $n > 2$ agents. Caragiannis et al. (2012) prove that even if we knew all the values in advance, the price of fairness, i.e., the worst-case ratio of the optimal social welfare of a fair-share outcome over the social welfare of the optimal outcome, is $\Theta(1/\sqrt{n})$. Our next result shows that the proportional algorithm matches this bound in an online manner, and therefore achieves the optimal approximation.

Theorem 5.

The proportional algorithm guarantees a $\frac{1}{2\sqrt{n}}$, i.e., $\Omega(1/\sqrt{n})$, approximation to the optimal social welfare.


Consider any round, let be the highest value in this round, and let be an agent with this value. Let be the set of agents with and let be the set of all remaining agents. If the portion of the item that the proportional algorithm allocates to the agents in is at least half of the item, then the social welfare in this round is at least .

On the other hand, if the agents in are allocated more than half of the item, this means that . But, for all and thus

which implies that , so the allocation to agent is at least , and thus in this case as well the social welfare is at least .

Since the optimal welfare in is and the proportional algorithm guarantees a welfare of at least , summing over all rounds concludes the proof. ∎
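As a concrete sketch of this rule (a hypothetical implementation, assuming the proportional algorithm splits each round's item among the agents in proportion to their values for it, and that each agent's values are normalized to sum to 1 across the rounds):

```python
def proportional_allocate(values):
    """Split each round's divisible item in proportion to the agents' values.

    values: list of rounds; each round is a list with one value per agent
            (each agent's values are assumed to sum to 1 across all rounds).
    Returns each agent's total utility at the end of the sequence.
    """
    n = len(values[0])
    utility = [0.0] * n
    for round_values in values:
        total = sum(round_values)
        if total == 0:
            continue  # nobody values this item; any split is harmless
        for i, v in enumerate(round_values):
            utility[i] += v * (v / total)  # fraction v/total of an item worth v
    return utility

# Hypothetical two-agent, two-round instance (each agent's values sum to 1).
u = proportional_allocate([[0.5, 0.25], [0.5, 0.75]])
print(u)  # both agents end with utility 8/15 ≈ 0.533
```

On this hypothetical instance both agents end above their fair share of $1/2$, consistent with the fair-share guarantee of the proportional algorithm discussed above.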

The next result shows that even if we were to restrict the benchmark to the optimal social welfare subject to the fair-share constraint, still no online algorithm could achieve an approximation better than . Therefore, the proportional algorithm is also optimal with respect to the competitive ratio measure, which quantifies the worst-case loss of welfare due to the online aspect of the problem alone.

Theorem 6.

No online fair-share algorithm can achieve a approximation to the optimal offline fair-share algorithm. That is, the best feasible approximation is .


Consider an instance with agents and rounds. In the first rounds, for the first agent, we have , and , . For the remaining agents, we have , for all . Then, in the last rounds, we have , and elsewhere, for all .

In the offline problem, each agent gets from the last rounds. Therefore the optimal offline fair-share welfare is

We now focus our attention on round . Note that each agent has remaining value at this round. An online fair-share algorithm needs to plan for the event that the remaining values are all realized in the next round, . In order to satisfy fair-share in this scenario, each agent must have utility at least at the end of round .

Consider an agent with . Since her value for all the items before round is , in order to give this agent utility at least , her total allocation must be . This is true for all , so there is of the resources, in the first rounds, to be allocated among the first agents. No matter how this is split, the contribution to the welfare is the same. Let be the social welfare at the end of round . We have

For the last rounds, the algorithm can make an optimal choice: . Therefore, we have . ∎

6.1 Characterization of fair-share algorithms

Our final result provides an interesting characterization of fair-share algorithms that could enable the design of novel algorithms in this setting. This characterization uses a very simple condition, which we refer to as doomsday compatibility, and we show that this myopic condition is necessary, but also sufficient, for guaranteeing that the final outcome will satisfy fair-share.

Definition 1 (Doomsday Compatibility).

We say an allocation at day is doomsday compatible if there exists some allocation that would make the overall outcome satisfy fair-share if were the last round, i.e., if all of the agents' remaining value were realized in round .

Proposition 1.

An online algorithm satisfies the fair-share property if and only if its allocation in every round is doomsday compatible.


First, it is easy to show that doomsday compatibility in every round is sufficient for an online algorithm to satisfy fair-share. If this condition is satisfied for all , then it is also satisfied for and , and thus the final outcome is guaranteed to satisfy fair-share.

Now, we show that this condition is also necessary for the algorithm to satisfy fair-share. Assume that there exists a round such that the online algorithm’s allocation in this round is not doomsday compatible. Then, clearly this algorithm would not be fair-share for the instance where is indeed the last round, i.e., where all of the agents’ remaining value is realized in round . ∎

Theorem 7.

If an algorithm is doomsday compatible in some round , then there always exists an allocation such that it is also doomsday compatible in round .


Consider any round where the algorithm’s allocation is doomsday compatible. This means that there exists some allocation that would achieve fair-share if was the last round. In order to show that we can always maintain doomsday compatibility in round , it suffices to show that there always exists some allocation for that round and an allocation for the next round such that the algorithm would satisfy fair-share if were the last round. We show that, in fact, using for both rounds and would satisfy this condition.

To verify this fact, let be the remaining value for each agent after round , and let be the total utility each agent received up to round . Since would make the outcome fair-share if was the last round, for any agent we have . Now, if on the other hand was the last round, let and . Then, for any agent we would have

Therefore, for , there exists a such that the algorithm is doomsday compatible in round . ∎
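This characterization also suggests a simple feasibility test. The sketch below is a hypothetical implementation, assuming normalized values (so fair-share means a final utility of at least $1/n$) and that, if the current round were the last, each agent's value for its item would be her entire remaining value:

```python
def doomsday_compatible(utility, remaining, eps=1e-12):
    """Check whether the allocation so far is doomsday compatible.

    utility:   utility each agent has accumulated through the current round
    remaining: each agent's value yet to be realized (1 minus value seen so far)

    If all remaining value arrived as a single last item, agent i would value
    that item at remaining[i]; giving her a fraction y_i yields final utility
    utility[i] + y_i * remaining[i].  Fair-share requires at least 1/n, so
    compatibility holds iff the minimal fractions needed sum to at most 1.
    """
    n = len(utility)
    need = 0.0
    for u, r in zip(utility, remaining):
        deficit = 1.0 / n - u
        if deficit <= eps:
            continue          # agent already has her fair share
        if r <= eps:
            return False      # deficit left, but no value left to cover it
        need += deficit / r   # minimal fraction of the last item she needs
    return need <= 1.0 + eps

# Hypothetical two-agent states after one round:
print(doomsday_compatible([0.3, 0.1], [0.5, 0.8]))  # True: 0.2/0.5 + 0.4/0.8 = 0.9
print(doomsday_compatible([0.0, 0.0], [0.3, 0.3]))  # False: deficits exceed the item
```

An online algorithm could use such a check as a filter, discarding any per-round allocation that would leave the state doomsday incompatible.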


This work was done in part while Alexandros Psomas was visiting the Simons Institute for the Theory of Computing. Work was done in part while Alexandros Psomas was at Google Research, MTV. This work was partially supported by NSF grant CCF-1755955.


  • G. Benade, A. M. Kazachkov, A. D. Procaccia, and C. Psomas (2018) How to make envy vanish over time. In Proceedings of the 2018 ACM Conference on Economics and Computation, pp. 593–610. Cited by: §2.
  • A. Bogomolnaia, H. Moulin, and F. Sandomirskiy (2019) A simple online fair division problem. CoRR abs/1903.10361. Cited by: §2.
  • S. Brânzei, V. Gkatzelis, and R. Mehta (2017) Nash social welfare approximation for strategic agents. In Proceedings of the 2017 ACM Conference on Economics and Computation, EC ’17, Cambridge, MA, USA, June 26-30, 2017, C. Daskalakis, M. Babaioff, and H. Moulin (Eds.), pp. 611–628. Cited by: §2.
  • I. Caragiannis, C. Kaklamanis, P. Kanellopoulos, and M. Kyropoulou (2012) The efficiency of fair division. Theory Comput. Syst. 50 (4), pp. 589–610. Cited by: §1.1, §6.
  • G. Christodoulou, A. Sgouritsa, and B. Tang (2016) On the efficiency of the proportional allocation mechanism for divisible resources. Theory Comput. Syst. 59 (4), pp. 600–618. Cited by: §2.
  • M. Feldman, K. Lai, and L. Zhang (2009) The proportional-share allocation market for computational resources. IEEE Transactions on Parallel and Distributed Systems. Cited by: §2.
  • E. Friedman, C. Psomas, and S. Vardi (2015) Dynamic fair division with minimal disruptions. In Proceedings of the sixteenth ACM conference on Economics and Computation, pp. 697–713. Cited by: §2.
  • E. Friedman, C. Psomas, and S. Vardi (2017) Controlled dynamic fair division. In Proceedings of the 2017 ACM Conference on Economics and Computation, pp. 461–478. Cited by: §2.
  • A. Gorokh, S. Banerjee, B. Jin, and V. Gkatzelis (2020) Online Nash Social Welfare via Promised Utilities. arXiv e-prints. Cited by: §1, §2.
  • J. He, A. D. Procaccia, A. Psomas, and D. Zeng (2019) Achieving a fairer future by changing the past. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 343–349. Cited by: §2.
  • S. Im, B. Moseley, K. Munagala, and K. Pruhs (2020) Dynamic weighted fairness with minimal disruptions. Proceedings of the ACM on Measurement and Analysis of Computing Systems 4 (1), pp. 1–18. Cited by: §2.
  • I. A. Kash, A. D. Procaccia, and N. Shah (2014) No agent left behind: dynamic fair division of multiple resources. J. Artif. Intell. Res. 51, pp. 579–603. Cited by: §2.
  • E. Milne (1925) Note on Rosseland's integral for the stellar absorption coefficient. Monthly Notices of the Royal Astronomical Society 85, pp. 979–984. Cited by: §4.
  • C. Prendergast (2017) How food banks use markets to feed the poor. Journal of Economic Perspectives 31 (4). Cited by: §1.
  • T. Walsh (2011) Online cake cutting. In International Conference on Algorithmic Decision Theory, pp. 292–305. Cited by: §2.
  • D. Zeng and A. Psomas (2020) Fairness-efficiency tradeoffs in dynamic fair division. In EC ’20: The 21st ACM Conference on Economics and Computation, Virtual Event, Hungary, July 13-17, 2020, P. Biró, J. Hartline, M. Ostrovsky, and A. D. Procaccia (Eds.), pp. 911–912. Cited by: §2.
  • L. Zhang (2005) The efficiency and fairness of a fixed budget resource allocation game. In Automata, Languages and Programming, 32nd International Colloquium, ICALP 2005, Lisbon, Portugal, July 11-15, 2005, Proceedings, L. Caires, G. F. Italiano, L. Monteiro, C. Palamidessi, and M. Yung (Eds.), Lecture Notes in Computer Science, Vol. 3580, pp. 485–496. Cited by: §2.

Appendix A Limitations without Normalization

Here, we observe that if values are not normalized, the only fair-share algorithm is equal-split. Consider any algorithm that does not always split equally. Let be the first round in which there exists an agent that gets . Since is the first such round, we have for all and . Therefore,

But then, if all agents have zero value in every subsequent round, i.e., for all (or, alternatively, if round were the last round), algorithm would fail to satisfy fair-share for agent .
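This observation can be checked numerically. The sketch below is hypothetical: without normalization, fair-share requires each agent's utility to be at least a $1/n$ fraction of her total value across all rounds, and the adversary zeroes every value after the first unequal split:

```python
def fair_share_violated(allocations, values):
    """Fair-share without normalization: each agent's utility must be at least
    1/n of her total value for the whole item sequence.

    allocations: per round, the fraction of the item given to each agent
    values:      per round, each agent's value for that round's item
    """
    n = len(values[0])
    for i in range(n):
        utility = sum(a[i] * v[i] for a, v in zip(allocations, values))
        total = sum(v[i] for v in values)
        if utility < total / n - 1e-12:
            return True
    return False

# Two agents: the algorithm deviates from equal split in round 1 (agent 0 gets
# 0.6), and the adversary makes every later value zero for all agents.
values = [[1.0, 1.0], [0.0, 0.0]]
allocations = [[0.6, 0.4], [0.5, 0.5]]
print(fair_share_violated(allocations, values))  # True: agent 1 got 0.4 < 1/2
```

With the equal split $[0.5, 0.5]$ in the first round instead, no zeroing of future values can create a violation, matching the observation that equal-split is the only fair-share algorithm here.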

Appendix B Proofs missing from Section 4

Missing from Theorem 1

Analysis of .

Recall that

Taking a partial derivative with respect to we have: