1 Introduction
Resource allocation is a fundamental problem in computer systems. For example, companies like Google [borg] and Microsoft [DBLP:conf/sigcomm/GrandlAKRA14] use schedulers in private clouds to allocate a limited and divisible amount of resources (e.g., CPU, memory, servers, etc.) among a number of selfish and strategic users who want to maximize their allocation; the goal of the scheduler is to maximize resource utilization while achieving fairness in resource allocation. The de facto allocation policy used in many of these systems is the classic maxmin fairness policy. For instance, maxmin fairness is used in most schedulers deployed in private clouds [yarn, mesos, borg, DBLP:conf/sigcomm/GrandlAKRA14, carbyene, apollo, DBLP:conf/nsdi/GhodsiZHKSS10, DBLP:conf/osdi/ShueFS12, DBLP:conf/osdi/GrandlKRAK16]; maxmin fairness is deeply entrenched in congestion control protocols like TCP and its variants [chiu1989analysis, dctcp]; and maxmin fairness is the default policy for resource allocation in most operating systems and hypervisors [kvm, esxi]. The prevalence of the maxmin fairness policy is rooted in the properties it guarantees: Pareto-efficiency, envy-freeness, and incentive compatibility. To guarantee these properties, maxmin fairness assumes that user demands do not change over time. This assumption does not hold in many scenarios—several recent studies in the systems community have shown that user demands have become highly dynamic, that is, they vary significantly over time [vuppalapati2020building, cheng2018analyzing, reiss2012heterogeneity, yang2020large]. For such dynamic user demands, naïvely using the maxmin fairness policy (e.g., computing a new instantaneous maxmin fair allocation every unit of time) maintains Pareto-optimality but loses fairness—intuitively, since the maxmin fairness policy does not take past allocations into account, dynamic user demands can result in an increasingly large disparity between users’ allocations over time.
We study a natural generalization of maxmin fair allocation for dynamic demands over divisible resources (referred to as dynamic maxmin fairness; also see [DBLP:journals/pomacs/FreemanZCL18]). Every round, each user has a demand, which is the maximum amount of resources that is useful to her. Users want to maximize the sum of overall useful resources they get (over rounds). In every round, dynamic maxmin fairness allocates resources so as to maintain Pareto-efficiency while being as fair as possible given the past allocations: first the minimum total allocation of any user is maximized, then the second minimum, etc. By construction, dynamic maxmin fairness is Pareto-efficient: every round, it maintains the invariant that either all the resources are used or every user’s demand is satisfied. However, dynamic maxmin fairness is not incentive compatible, i.e. it is possible for a user to misreport her demand in one round to increase her total useful allocation in the future by a small amount [DBLP:journals/pomacs/FreemanZCL18] (also see Theorem 3.1 for a stronger lower bound). Nevertheless, studying dynamic maxmin fairness is both important and interesting. First, similar to the widely-used classical maxmin fairness, it is simple and easy to understand; thus, it has the potential for real-world adoption (similar to many other non-incentive-compatible mechanisms used in practice, e.g., the non-incentive-compatible auctions used by the U.S. Treasury to sell treasury bills since 1929 and by the U.K. to sell electricity [krishna2009auction, harada2018, parkin2018]).
Second, our results show that dynamic maxmin fairness is approximately incentive compatible: strategic users can increase their allocation by misreporting their demands, but this increase is bounded by a small factor and requires knowledge of future demands; moreover, misreporting demands in dynamic maxmin fairness can lead to a significant decrease in overall useful allocation, suggesting that misreporting is unlikely to be a useful strategy for any user.
Our Contribution.
Our goal is to show that dynamic maxmin fair allocation is close to incentive compatible. A popular relaxation of incentive compatibility is approximate incentive compatibility [DBLP:conf/soda/ArcherPTT03, DBLP:conf/sigecom/KotharPS03, DBLP:conf/soda/DekelFP08, DBLP:conf/sigecom/DuttingFJLLP12, DBLP:conf/sigecom/MennleS14, 10.1093/restud/rdy042, DBLP:conf/ec/BalcanSV19], which requires that the possible increase in utility from untruthful reporting be bounded by a small factor. Using this notion we show that users have very low incentive to be untruthful, especially when the number of users becomes large and demands are independent random variables:

Our main result in Section 3 shows that when users’ demands are independent random variables, by increasing the total amount of resources by a small factor compared to the sum of expected instantaneous user demands, the algorithm becomes approximately incentive compatible, with the approximation factor improving as the number of users grows. For example, when the expected demand of each user is identical and the standard deviations of the demands are at most proportional to their expectations, we get the claimed result. We also extend this result to independent random demands with different means, but with a similar bound on the standard deviations. For the case of adversarial demands, we show that dynamic maxmin fairness is approximately incentive compatible (Theorem 3.5), and give a lower bound (Theorem 3.1) improving the lower bound of [DBLP:journals/pomacs/FreemanZCL18]. We also show that there is no incentive to overreport demand (Theorem 3.4), ensuring that all allocated resources are used.

In Section 4 we study the generalization of maxmin fairness to the case of multiple resources, assuming that each user has a fixed ratio in which she uses the resources, and only the total amount the user needs is variable. We study the natural extension of maxmin fairness for this setting, dominant resource fairness [DBLP:conf/nsdi/GhodsiZHKSS10], in repeated settings. When users only need subsets of the resources, one cannot bound the incentive compatibility ratio: a user can overreport her demand and gain added resources proportional to the number of users. In contrast, when all users need all the resources (even with very different ratios), we show results similar to the single-resource case, but this time depending on a parameter measuring the similarity of the ratios of different users (taking its best value when the ratios are identical). Again, when users’ demands are independent random variables with not too large variance, increasing the total amount of resources by a small factor makes dynamic maxmin fairness approximately incentive compatible (Theorem 4.6). For adversarial demands, dynamic maxmin fairness is approximately incentive compatible (Theorem 4.5), and overreporting is not beneficial (Theorem 4.3).
In Section 5 we study the effect of collusion on dynamic weighted maxmin fairness (the generalization of maxmin fairness where users have different priorities). We again show that similar results hold: if users’ demands are random variables, dynamic weighted maxmin fairness, even with collusion among the users, tends to incentive compatibility at similar rates as before (Theorem 5.3); in adversarial settings it is approximately incentive compatible (Theorem 5.2); and demand overreporting does not increase utility (Theorem 5.1).

Finally, in Section 6 we study how often a user can be allocated resources significantly above the share she would get by reporting truthfully. Assuming that the total resources allocated to users increase approximately linearly over time, we prove that users cannot maintain a given factor more resources by misreporting for long periods (Theorem 6.1), and that the time between the intervals when a user has that many more resources scales exponentially with the factor (also Theorem 6.1).
Related Work.
The simplest algorithm for resource allocation is strict partitioning [DBLP:conf/sigmod/VerbitskiGSBGMK17, DBLP:conf/nsdi/VuppalapatiMATM20], which allocates a fixed amount of resources to each user independent of their demands. While incentive compatible, strict partitioning can have arbitrarily bad efficiency. Static maxmin fairness [DBLP:conf/nsdi/GhodsiZHKSS10, DBLP:conf/osdi/ShueFS12, DBLP:conf/sigcomm/GrandlAKRA14, DBLP:conf/osdi/GrandlKRAK16, DBLP:journals/pomacs/FreemanZCL18] is Pareto-efficient, incentive compatible, and fair, but only when user demands are static. However, as shown in several recent results [DBLP:journals/pomacs/FreemanZCL18, DBLP:journals/tc/SadokCC21, DBLP:conf/atal/Hossain19], naïvely using static maxmin fairness in the case of dynamic demands can result in a large disparity between resources allocated to users over time, since past allocations are not taken into account.
[DBLP:journals/pomacs/FreemanZCL18, DBLP:conf/atal/Hossain19] study resource allocation for the case of dynamic demands, but the allocation model they consider is closer to applying maxmin fairness separately in each epoch, with less emphasis on being fair overall. Under this model, they present two mechanisms that are incentive compatible but either offer only 0.5 times the utility of static allocation, or are efficient only under strong assumptions: user demands being i.i.d. random variables, and the number of rounds growing large.
[DBLP:journals/tc/SadokCC21] presents minor improvements over static maxmin fairness for dynamic demands. Their mechanism allocates resources in an incentive compatible way according to maxmin fairness while marginally penalizing users with larger past allocations, controlled by a parameter. At both extremes of this parameter, the penalty vanishes for every past allocation and the mechanism becomes identical to static maxmin fairness; for intermediate values, the penalty is at most a fraction of the past allocation surplus, and it decreases exponentially with time (users who were allocated a large amount of resources further in the past receive an even smaller penalty). Thus, for all values of the parameter, their mechanism suffers from the same problems as static maxmin fairness. Several other papers study resource allocation where user demands can be dynamic, but in settings significantly different from ours. [DBLP:conf/pricai/AleksandrovW19, DBLP:conf/sigecom/ZengP20] examine the setting where indivisible goods arrive over time and have to be allocated to users whose utilities are random; however, [DBLP:conf/pricai/AleksandrovW19] studies a much weaker version of incentive compatibility, in which a mechanism is incentive compatible if misreporting cannot increase a user’s utility in the current round, and [DBLP:conf/sigecom/ZengP20] does not consider strategic agents. [DBLP:journals/corr/abs201208648] assumes that users do not know their exact demands every round and need to provide feedback to the mechanism after each round of allocation to allow the mechanism to learn. The goal of that paper is to offer an (approximately) truthful and (approximately) efficient version of maxmin fair allocation in each iteration, despite the lack of information, but it does not consider the dynamic notion of fairness that is the focus of our paper.
Approximate incentive compatibility has seen many recent applications. [DBLP:conf/soda/ArcherPTT03, DBLP:conf/sigecom/KotharPS03, DBLP:conf/sigecom/DuttingFJLLP12, DBLP:conf/sigecom/MennleS14] study combinatorial auctions that are almost incentive compatible. [DBLP:conf/soda/DekelFP08] studies approximate incentive compatibility in machine learning, where users are asked to label data. [10.1093/restud/rdy042] examines approximate incentive compatibility in large markets, where the number of users grows to infinity. [DBLP:conf/ec/BalcanSV19] develops algorithms that can estimate how incentive compatible mechanisms for buying, matching, and voting are. To the best of our knowledge, we are the first to apply approximate incentive compatibility to the problem of resource sharing.
2 Notation
We use $[m]$ to denote the set $\{1, 2, \dots, m\}$ for any natural number $m$. Additionally, we define $(x)^+ = \max(x, 0)$.
There are $n$ users, where $n \ge 2$. The set of users is denoted with $[n]$. The game is divided into epochs $t = 1, 2, \dots$. Every epoch there is a fixed amount of a divisible resource shared amongst the users. We denote the total amount of resources with $R$, which w.l.o.g. we usually assume to be $1$, unless stated otherwise.
We denote with $a_i^t$ the allocation of user $i$ in epoch $t$. We also denote with $A_i^t$ the cumulative allocation of user $i$ up to round $t$, i.e. $A_i^t = \sum_{\tau \le t} a_i^\tau$. By definition, $\sum_{i \in [n]} a_i^t \le R$.
Every epoch $t$, each user $i$ has a demand, denoted with $d_i^t$. This represents the maximum allocation that is useful for that user, i.e. a user is indifferent between getting an allocation equal to her demand and an allocation higher than her demand. So the utility of user $i$ on epoch $t$ is $u_i^t = \min(a_i^t, d_i^t)$. The total utility of user $i$ after epoch $t$ equals the sum of her utilities up to that round, i.e. $U_i^t = \sum_{\tau \le t} u_i^\tau$.
Dynamic Maxmin Fairness.
In maxmin fairness the resources are allocated such that the minimum allocation is maximized, then the second minimum is maximized, etc., as long as no user gains an amount of resources that exceeds her demand. If for example the total resource is $1$ and we have three users with demands $0.2$, $0.5$, and $0.6$, then the first user gets $0.2$ resources and the other two get $0.4$ resources each.
In dynamic maxmin fairness, every epoch the maxmin fairness algorithm is applied to the users’ cumulative allocations, constrained by what they have already been allocated in previous iterations, i.e. given an epoch $t$ and that every user $i$ has cumulative allocation $A_i^{t-1}$:
choose $a_1^t, \dots, a_n^t$
applying maxmin fairness on $A_1^{t-1} + a_1^t, \dots, A_n^{t-1} + a_n^t$
given the constraints $0 \le a_i^t \le d_i^t$ for every $i \in [n]$ and $\sum_{i \in [n]} a_i^t \le R$
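The per-epoch rule above can be implemented by water-filling on the cumulative totals. The sketch below is our own illustration (the function name, the variable names, and the normalization $R = 1$ are ours, not from the paper): it raises the lowest cumulative allocations in lockstep until a demand cap, the next cumulative level, or the budget stops the raise.

```python
def maxmin_round(cum, dem, R=1.0, eps=1e-9):
    """One epoch of dynamic maxmin fairness: choose per-epoch allocations
    a_i with 0 <= a_i <= dem[i] and sum(a) <= R so that the cumulative
    totals cum[i] + a_i are lexicographically max-min (water-filling)."""
    n = len(cum)
    alloc = [0.0] * n
    budget = R
    while budget > eps:
        # users whose demand is not yet met
        active = [i for i in range(n) if dem[i] - alloc[i] > eps]
        if not active:
            break  # every demand is satisfied; allocation is Pareto-efficient
        lo = min(cum[i] + alloc[i] for i in active)
        low = [i for i in active if cum[i] + alloc[i] <= lo + eps]
        # raise the lowest group until a demand cap, the next cumulative
        # level, or the budget stops it
        caps = [cum[i] + dem[i] for i in low]
        above = [cum[i] + alloc[i] for i in active if cum[i] + alloc[i] > lo + eps]
        target = min(caps + above + [lo + budget / len(low)])
        for i in low:
            step = target - (cum[i] + alloc[i])
            alloc[i] += step
            budget -= step
    return alloc
```

Called with zero cumulative allocations, this reduces to static maxmin fairness: for total resource 1 and demands 0.2, 0.5, 0.6 it returns allocations 0.2, 0.4, 0.4.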
Incentives in Dynamic Maxmin Fairness.
It is known from [DBLP:conf/nsdi/GhodsiZHKSS10] that applying maxmin fairness when there is a single epoch is incentive compatible, i.e. users can never increase their allocation by misreporting their demand. In dynamic settings, however, this is not the case. As was shown by [DBLP:journals/pomacs/FreemanZCL18], a user can increase her allocation by misreporting. See also the improved lower bound (Theorem 3.1).
In this work we are interested in how far dynamic maxmin fairness is from incentive compatibility. W.l.o.g. we are usually going to study the possible deviations of user $1$, i.e. how much user $1$ can increase her allocation by lying about her demand. We use the primed symbols $d'$, $a'$, $A'$, $u'$, $U'$ to denote the claimed demands and the resulting outcome of some deviation of user $1$. We want to prove that for some factor $\rho \ge 1$, dynamic maxmin fairness is always $\rho$-incentive compatible, i.e. for every users’ true demands, for every deviation of user $1$, and for every epoch $t$, to prove that
$U_1'^t \le \rho \cdot U_1^t$.
The factor $\rho$ is often referred to as the incentive compatibility ratio.
3 Approximate incentive compatibility ratio in single resource settings
Bad Example.
As mentioned before, dynamic maxmin fairness does not guarantee incentive compatibility. This is demonstrated in the following theorem, where a user can misreport her demand to increase her utility by a constant factor.
Theorem 3.1.
There is an instance with $n$ users, in which a user can misreport her demand to increase her utility by a constant factor.
We defer the proof of the theorem to Appendix A. To provide intuition about how a user can increase her utility by misreporting, we include here the example of [DBLP:journals/pomacs/FreemanZCL18], where user $1$ increases her utility by a small constant factor.
Example 1 ([DBLP:journals/pomacs/FreemanZCL18]).
There are 3 users and 3 epochs. The real demands of the users are shown in Table 1, as well as their allocations when user $1$ is truthful and when she misreports.
epoch  1  2  3
user 1
user 2
user 3
Because user $1$ underreports her demand on the first epoch, on the second epoch she manages to “steal” some of user $2$’s resources. Then, on the third epoch the allocation mechanism equalizes the users’ total allocations, compensating some of the resources lost in earlier epochs. This results in user $1$ ending with more total resources than under truthful reporting, i.e. her allocation increases by a small constant factor. ∎
In both Theorem 3.1 and Example 1, it is important to note that user $1$ can increase her utility only by a small constant factor. Additionally, this is done by user $1$ underreporting her demand, not overreporting; this is important because it implies that the resources allocated are always used by the users. As we will show next, both of these facts are true in general.
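To make the dynamics concrete, the following self-contained simulation runs dynamic maxmin fairness for three epochs on a toy instance of our own (the demands below are made up for illustration and are not the stripped values of Table 1) and compares user 1's truthful utility with what she gets by underreporting in epoch 1.

```python
def maxmin_round(cum, dem, R=1.0, eps=1e-9):
    # Water-filling: repeatedly raise the lowest cumulative totals among
    # users with unmet demand, until the per-epoch budget R runs out.
    alloc, budget = [0.0] * len(cum), R
    while budget > eps:
        active = [i for i in range(len(cum)) if dem[i] - alloc[i] > eps]
        if not active:
            break
        lo = min(cum[i] + alloc[i] for i in active)
        low = [i for i in active if cum[i] + alloc[i] <= lo + eps]
        above = [cum[i] + alloc[i] for i in active if cum[i] + alloc[i] > lo + eps]
        target = min([cum[i] + dem[i] for i in low] + above + [lo + budget / len(low)])
        for i in low:
            step = target - (cum[i] + alloc[i])
            alloc[i] += step
            budget -= step
    return alloc

def total_utility(reports, true_dem, user=0):
    # Run the dynamic mechanism on `reports`; score `user` against her
    # true demands (resources beyond the true demand are useless).
    cum = [0.0] * len(true_dem[0])
    util = 0.0
    for rep, true in zip(reports, true_dem):
        a = maxmin_round(cum, rep)
        cum = [c + x for c, x in zip(cum, a)]
        util += min(a[user], true[user])
    return util

# Hypothetical true demands: rows are epochs, columns are users 1, 2, 3.
true_dem = [[0.2, 1.0, 1.0],
            [1.0, 1.0, 0.0],
            [1.0, 0.0, 1.0]]
# User 1 (index 0) underreports her epoch-1 demand as 0, then is truthful.
lie = [[0.0, 1.0, 1.0]] + true_dem[1:]

honest = total_utility(true_dem, true_dem)
sneaky = total_utility(lie, true_dem)
```

On this instance `honest` is 1.1 while `sneaky` is 1.125: user 1 forgoes 0.2 in epoch 1, is handed 0.15 extra in epoch 2 at user 2's expense, and a further 0.075 in epoch 3, for a small net gain, mirroring the structure of Example 1.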
Bounding allocations while one user deviates.
To prove the above, we first show a lemma offering a simple condition on when a pair of users can gain overall allocation from one another. When users’ demands are not satisfied, for a user to get more resources someone else needs to get less. The lemma will allow us to reason about how a deviation by user $1$ can lead to a user $i$ (possibly $i = 1$) getting more resources and another user $j$ getting less.
Lemma 3.2.
Fix an epoch $t$ and let $i \ne j$ be two different users. If the following conditions hold

$a_i'^t > a_i^t$ and $d_i^t > a_i^t$, i.e. user $i$ gets more resources on epoch $t$ when user $1$ deviates and user $i$ could have gotten more resources when user $1$ does not deviate,

$a_j'^t < a_j^t$ and $d_j'^t > a_j'^t$, i.e. user $j$ gets less resources on epoch $t$ when user $1$ deviates and user $j$ could have gotten more resources when user $1$ deviates,

then $A_i^{t-1} + a_i^t \ge A_j^{t-1} + a_j^t$ and $A_i'^{t-1} + a_i'^t \le A_j'^{t-1} + a_j'^t$, implying

(1) $A_i'^t - A_i^t \le A_j'^t - A_j^t$
It should be noted that the conditions for $i$ (and similarly for $j$) can be simplified if $i$ has the same demand in both outcomes (which is trivially true if $i \ne 1$): if $a_i'^t > a_i^t$, the other inequality is implied, as $d_i^t = d_i'^t \ge a_i'^t$ and $a_i'^t > a_i^t$.
Proof.
Because of the conditions, we notice that $a_i'^t > a_i^t \ge 0$ and $d_j'^t > a_j'^t$, which implies that it would have been feasible to increase $a_j'^t$ by decreasing $a_i'^t$. This implies that $A_i'^{t-1} + a_i'^t \le A_j'^{t-1} + a_j'^t$; otherwise it would have been more fair to give some of the resources user $i$ got to user $j$. With the analogous inverse argument (we can increase $a_i^t$ by decreasing $a_j^t$) we can prove that $A_i^{t-1} + a_i^t \ge A_j^{t-1} + a_j^t$. This completes the proof. ∎
The main technical tool in our work is the following lemma, bounding the total amount all the users have “won” because of user $1$ deviating, i.e. $\sum_{i \in [n]} (A_i'^t - A_i^t)^+$. So rather than bounding the deviating user’s gain directly, it is better to consider the overall increase over all users combined. More specifically, the lemma upper bounds the increase of that amount after any epoch, given that user $1$ does not overreport her demand (which, as we are going to show later in Theorem 3.4, users have no benefit in doing). The bound on the total overallocation then follows by summing over the time periods. The bound on the increase after any epoch differs according to three cases:

If all users’ demands are satisfied, then the increase is at most zero.

If user $1$ is truthful, the increase is again at most zero, so in these steps overallocation can move between users but cannot increase. This is the reason working with the total overallocation is so helpful.

If user $1$ underreports, the increase is bounded by the amount of resources she receives when she is truthful.
Lemma 3.3.
Fix any epoch $t$. Let $A^t$ and $A'^t$ be the cumulative allocations up to epoch $t$. Assume that $d$ are some users’ demands and that $d'$ are the same demands except user $1$’s, who deviates but does not overreport, i.e. $d_1'^\tau \le d_1^\tau$ for every $\tau$. Then it holds that

(2) $\sum_{i \in [n]} (A_i'^t - A_i^t)^+ \le \sum_{i \in [n]} (A_i'^{t-1} - A_i^{t-1})^+ + a_1^t$
When all demands are satisfied, user $1$ clearly cannot change other users’ allocations by underreporting. We will use Lemma 3.2 to show that if user $1$ is truthful on epoch $t$, then the total overallocation does not increase at all, as maxmin fairness allocates resources such that the large $A_i'^t - A_i^t$ are decreased and the small ones are increased. Finally, if user $1$ underreports her demand, then the (at most $a_1^t$) resources user $1$ does not get might increase the total overallocation by the same amount.
Proof.
We first focus on the case when all the users’ demands are satisfied. In this case, because user $1$ can only underreport her demand, she cannot alter other users’ allocations and can only decrease her own allocation. This entails that $a_i'^t \le a_i^t$ for all $i$, so $(A_i'^t - A_i^t)^+ \le (A_i'^{t-1} - A_i^{t-1})^+$, proving this case of the lemma.
To prove the other case of the lemma, define $\Delta_i^t = A_i'^t - A_i^t$ for all $i$ and $t$. Suppose by contradiction:
Because , the above inequality implies
(3) 
Because user $1$ does not overreport her demand, it holds that $\sum_{i \in [n]} a_i'^t \le \sum_{i \in [n]} a_i^t$, i.e. the total amount of resources allocated to the users does not increase when user $1$ deviates. Combining this fact with (3) we get that
(4) 
We notice that because of (3), there exists a user for whom ; because of (4), there exists a user for whom . Additionally for that we can assume that because:

If user does not deviate then for all , .

If , then (4) implies , i.e. and we assumed that only user deviates.
Thus we have (since no user overreports), , , and . Now Lemma 3.2 proves that . This leads to a contradiction, because and , i.e. . ∎
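The lemma can be sanity-checked numerically. The script below (our own water-filling implementation of the mechanism and randomly generated instances; all names are ours) verifies over random underreporting deviations that the total overallocation $\sum_i (A_i'^t - A_i^t)^+$ never exceeds the total amount user 1 receives when truthful, which is one reading of the bound obtained by summing (2) over epochs.

```python
import random

def maxmin_round(cum, dem, R=1.0, eps=1e-9):
    # Water-filling: raise the lowest cumulative totals first, subject to
    # demand caps and the per-epoch budget R.
    alloc, budget = [0.0] * len(cum), R
    while budget > eps:
        active = [i for i in range(len(cum)) if dem[i] - alloc[i] > eps]
        if not active:
            break
        lo = min(cum[i] + alloc[i] for i in active)
        low = [i for i in active if cum[i] + alloc[i] <= lo + eps]
        above = [cum[i] + alloc[i] for i in active if cum[i] + alloc[i] > lo + eps]
        target = min([cum[i] + dem[i] for i in low] + above + [lo + budget / len(low)])
        for i in low:
            step = target - (cum[i] + alloc[i])
            alloc[i] += step
            budget -= step
    return alloc

def run(demands):
    # Returns final cumulative allocations and user 1's total received.
    cum = [0.0] * len(demands[0])
    got1 = 0.0
    for dem in demands:
        a = maxmin_round(cum, dem)
        got1 += a[0]
        cum = [c + x for c, x in zip(cum, a)]
    return cum, got1

random.seed(0)
worst = 0.0  # largest violation of the claimed bound observed
for _ in range(50):
    true = [[random.uniform(0, 0.6) for _ in range(4)] for _ in range(6)]
    # user 1 (index 0) underreports by a random factor, others are truthful
    lie = [[row[0] * random.random()] + row[1:] for row in true]
    A, bound = run(true)   # truthful outcome; bound = user 1's resources
    Ad, _ = run(lie)       # outcome under the deviation
    over = sum(max(0.0, x - y) for x, y in zip(Ad, A))
    worst = max(worst, over - bound)
```

After the loop, `worst` stays nonpositive (up to floating-point noise) on every trial we generated, consistent with the lemma.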
Adversarial Demands.
In this section we will prove upper bounds on the incentive compatibility ratio when users’ demands are picked adversarially. We will prove that users have no incentive to overreport their demand. The immediate effect of overreporting is allocating resources to user $1$ in excess of her demand, which do not contribute to her utility. Intuitively, this suggests that user $1$ is put into a disadvantageous position: other users get fewer resources, which makes them favored by the allocation algorithm in the future, while user $1$ becomes less favored. However, a small change in the users’ resources causes a cascading change in future allocations, making the proof of this theorem hard. We will see in Section 4 that this is no longer the case with multiple resources when users only use a subset of them. We defer the proof of the theorem to the end of this section.
Theorem 3.4.
Users have nothing to gain by declaring a demand higher than their actual demand.
First we show that using this theorem and Lemma 3.3 allows us to bound the incentive to deviate. A first bound is easy to get by summing (2) for all epochs up to some certain epoch. We give an improved incentive compatibility bound by using the same lemma, but arguing, using Lemma 3.2, that some other user must also share the same increased allocation of resources.
Theorem 3.5.
No user can misreport her demand to increase her utility by more than a small constant factor, i.e. the incentive compatibility ratio of dynamic maxmin fairness is bounded by a constant for every epoch $t$.
Proof.
Theorem 3.4 implies that it is no loss of generality to assume that user $1$ does not overreport her demand, since any benefit gained by overreporting can be gained by changing every overreport to a truthful report. Towards a contradiction, let $t$ be the first epoch when user $1$ gets more than the claimed amount of additional resources by some deviation of demands. This entails that there exists a user $j$ who gets less, since the total resources allocated when user $1$ is underreporting cannot be less than those when user $1$ is truthful. Applying Lemma 3.2 to users $1$ and $j$, and then Lemma 3.3 by summing (2) for every epoch up to $t$, implies
The above inequality leads to , a contradiction. ∎
Next we prove Theorem 3.4, that users have no incentive to overreport.
Proof of Theorem 3.4.
Fix an epoch $t$ and let $d'$ be any demands (that possibly involve user $1$ both over- and underreporting). We are going to show that if user $1$ changes every overreport to a truthful report, then her utility on epoch $t$ is not going to decrease. Let $\tau$ be the last epoch where user $1$ overreported, and consider the demands that agree with $d'$ everywhere except on epoch $\tau$, where user $1$ reports her actual demand. We will show that user $1$ does not prefer the original demand sequence over the modified one. If we apply this inductively for every epoch before $t$ where user $1$ overreports, we get that overreporting is not a desirable strategy.
Up to the last overreporting epoch, all users’ demands in the two sequences are the same and thus so are the allocations and utilities. Because user $1$’s report on that epoch is higher in the original sequence, user $1$ may earn some additional resources on it, while other users get less resources. We first note that the additional resources that user $1$ gets are in excess of her true demand, meaning they do not contribute towards her utility:
(5) 
Additionally, because user $1$ does not overreport in the epochs after the last overreporting epoch and before $t$, it holds that in those epochs user $1$’s utility is the same as the resources she receives. This fact, combined with (5), proves that
(6) 
Thus, in order for this overreporting to be a strictly better strategy, it must hold that user $1$’s subsequent utility strictly increases. We will complete the proof by proving that the opposite holds. Since there is no overreporting in the periods in between, we can use Lemma 3.3, summing (2) over these epochs, and we get
The above, because , if , and , proves that . This completes the proof. ∎
Random Demands.
In this section we study the case where the users’ demands are random variables. Note that Theorem 3.4 still holds, i.e. it is no loss of generality to assume no user overreports her demands. Using Lemma 3.3 we are going to show that by increasing the amount of total available resources a little, we can lower the expected additional amount user $1$ can gain by deviating. The setup is the following:

$d_i^t$ is the demand of user $i$ at round $t$, drawn from some distribution. Note that we do not require $d_i^t$ and $d_i^{t'}$ to be independent or to come from the same distribution.

For $i \ne j$, $d_i^t$ and $d_j^t$ are distributed independently.

We assume that the expected sum of demands does not increase over time. More specifically, w.l.o.g. we assume that $\sum_{i \in [n]} \mathbb{E}[d_i^t]$ is non-increasing in $t$.

For every , .

For all and for all , .
Recall that by Lemma 3.3, the maximum amount of resources user $1$ can gain by misreporting in a single round is bounded by her truthful allocation in that round. For adversarial demands this quantity can always be large. However, when demands are randomized, their sum exceeds the available resources (which are provisioned above the sum of expectations) only with small probability. This allows us to bound the expectation of the per-round gain, which will imply that the expected benefit of misreporting is a small fraction of the truthful utility. In the next lemma we bound this expectation by a quantity that can then be bounded using any concentration inequality.
Lemma 3.6.
Proof.
The allocation is a complicated function of the reported demands and the random variables. This means that the two terms are not independent, so we cannot bound the expectation of their product by the product of their expectations, which makes the proof more involved. For this reason we need to bound them by quantities that are independent. We first note that the following holds for any realization of the random variables.
This makes the two terms on the right-hand side “less dependent”, but they are still not independent: the same realization affects both terms. We then take the expectation of the above
We can express which makes the above inequality
where in the last inequality we used the fact that . ∎
Using the above lemma and Lemma 3.3, we can use Chebyshev’s inequality to show that as we make the total resources larger than the sum of expected demands, the additional amount of resources that user $1$ can get by deviating diminishes.
Theorem 3.7.
With the assumptions in the theorem, we can use Chebyshev’s inequality to upper bound the probability in (7). If we combine this with Lemma 3.3, we get that every epoch the expected additional resources user $1$ can get by deviating are small. The proof is included in Appendix A.
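The role of the concentration step can be illustrated with a quick Monte Carlo experiment (the uniform demand distribution, the value of $n$, and the provisioning rule below are our own choices, purely for illustration): when capacity is provisioned $k$ standard deviations above the expected total demand, Chebyshev's inequality caps the probability of excess demand at $1/k^2$, and the empirical tail is far smaller.

```python
import random

random.seed(1)
n, trials = 100, 10000
mu = n * 0.5             # expected total demand, demands ~ Uniform(0, 1)
sigma = (n / 12) ** 0.5  # standard deviation of the total (independence)

tail = {}
for k in (2, 3, 4):
    R = mu + k * sigma   # provision k standard deviations above the mean
    hits = sum(
        sum(random.random() for _ in range(n)) > R for _ in range(trials)
    )
    tail[k] = hits / trials  # estimate of P[total demand > R]
```

Chebyshev guarantees `tail[k] <= 1/k**2`; the simulated values are orders of magnitude smaller, so provisioning even a few standard deviations above the expected demand makes the event that drives the misreporting benefit rare.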
With a suitable choice of parameters in Theorem 3.7, we get the following corollary, in which both the incentive compatibility ratio and the required resource augmentation tend to their optimal values as $n \to \infty$:
Corollary 3.8.
If the total available resources are , then .
Remark 3.9.
We extend our results to dynamic weighted maxmin fairness, where every user has a weight and the mechanism applies maxmin fairness on the weighted cumulative allocations. The same holds for the case where a group of users forms a coalition to increase their total allocation. In Section 5 we show that dynamic weighted maxmin fairness with coalitions satisfies a similar incentive compatibility upper bound (Theorem 5.2) and a result similar to Theorem 3.7 for randomized demands (Theorem 5.3).
4 Multiple Resources
In this section we are going to explore the generalization of dynamic maxmin fairness to multiple resources, dynamic dominant resource fairness. There are $m$ different resources, denoted with $[m]$. W.l.o.g., we assume that the amount available of every resource is $1$, unless stated otherwise.
Every user $i$ has $m$ nonnegative values $r_{i,1}, \dots, r_{i,m}$, called ratios: the proportions of the different resources that the user needs for the application she runs, i.e. every epoch, for an allocation of $x$ units, user $i$ uses $r_{i,k} \cdot x$ of every resource $k$. W.l.o.g. we assume that the ratios are normalized with $\max_k r_{i,k} \le 1$.
We assume that the users’ ratios do not change over time. The ratios depend on the type of application run by the user (e.g., Spark tasks [zaharia2010spark] or web caches [berg2020cachelib]), so we will assume they are fixed given the type of application, and also publicly known, as the application being run is public. The total amount of resources that the user needs each epoch changes, depending on the current traffic. We denote with $x_i^t$ the allocation of user $i$ in epoch $t$, i.e. on epoch $t$ user $i$ receives $r_{i,k} \cdot x_i^t$ of every resource $k$. Let $X_i^t$ be the cumulative allocation of user $i$ up to epoch $t$, i.e. $X_i^t = \sum_{\tau \le t} x_i^\tau$. By definition, for every resource $k$, $\sum_{i \in [n]} r_{i,k} \, x_i^t \le 1$. As in the single resource case, we assume user $i$’s utility on epoch $t$ is $\min(x_i^t, d_i^t)$ and her total utility up to epoch $t$ is the sum of these.
Dynamic Dominant Resource Fairness.
Dominant resource fairness (DRF) is the generalization of maxmin fairness for the case of multiple resources, where the fairness criterion is applied to the dominant-resource allocations. Using our notation, dynamic DRF is easy to describe. For a given epoch $t$, assuming that every user $i$ has cumulative allocation $X_i^{t-1}$:
choose $x_1^t, \dots, x_n^t$
applying maxmin fairness on $X_1^{t-1} + x_1^t, \dots, X_n^{t-1} + x_n^t$
given the constraints $0 \le x_i^t \le d_i^t$ for every $i \in [n]$ and $\sum_{i \in [n]} r_{i,k} \, x_i^t \le 1$ for every $k \in [m]$
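The rule above can again be sketched as water-filling, now with per-resource capacity limits (this is our own illustrative code, not the paper's; `ratios[i][k]` stands for user i's per-unit use of resource k, and all names are ours): cumulative totals are raised in lockstep until a demand is met, a higher cumulative level is reached, or some resource saturates.

```python
def drf_round(cum, dem, ratios, caps, eps=1e-9):
    """One epoch of dynamic DRF: raise x_i (capped by dem[i]) so cumulative
    totals cum[i] + x_i are max-min, while for every resource k the usage
    sum_i ratios[i][k] * x_i stays within caps[k]."""
    n, m = len(cum), len(caps)
    x = [0.0] * n
    rem = list(caps)  # remaining capacity of each resource
    while True:
        active = [i for i in range(n) if dem[i] - x[i] > eps]
        # a user is blocked if some resource she needs is exhausted
        active = [i for i in active
                  if all(rem[k] > eps or ratios[i][k] <= eps for k in range(m))]
        if not active:
            break
        lo = min(cum[i] + x[i] for i in active)
        low = [i for i in active if cum[i] + x[i] <= lo + eps]
        above = [cum[i] + x[i] for i in active if cum[i] + x[i] > lo + eps]
        # how far the low group can rise before a resource saturates
        res_lim = []
        for k in range(m):
            rate = sum(ratios[i][k] for i in low)
            if rate > eps:
                res_lim.append(lo + rem[k] / rate)
        target = min([cum[i] + dem[i] for i in low] + above + res_lim)
        if target - lo <= eps:
            break  # no further progress possible
        for i in low:
            step = target - (cum[i] + x[i])
            x[i] += step
            for k in range(m):
                rem[k] -= ratios[i][k] * step
    return x
```

For two users with ratios (1, 0.5) and (0.5, 1) on two unit-capacity resources and ample demands, both end at 2/3; capping the first user's demand at 0.2 lets the second rise to 0.9 before the second resource saturates.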
It is nice to observe a dissimilarity: if for every user $i$ it holds that $\max_k r_{i,k} = 1$, then every user is treated similarly by the allocation algorithm. In contrast, if for some user $i$, $\max_k r_{i,k} < 1$, then user $i$ is less favored: if for example $\max_k r_{i,k} = 1/2$, then user $i$ needs double the allocation to get the same amount of her dominant resource. This means that the allocation algorithm that we use is the dynamic version of a generalization of DRF, weighted DRF, which uses these maxima as weights to favor some users more than others.
Potential Value of Overreporting.
We first show that when some resources are only used by a subset of users, a user can overreport her demand to increase her allocation by an arbitrary amount. Next we will show that this is no longer possible once all resources are used by all users, even if the ratios are very different.
Theorem 4.1.
There is an instance with $n$ users where a user can overreport her demand to increase her utility by a factor proportional to $n$.
We sketch the idea of the example here, with details presented in Appendix B, see Table 3. Consider an example with two resources. User has and , some users have and , and the rest of the users have and . If the demands of users in and are , and the demand of user is 0, then for and , .
However, if user $1$ overreports her demand, she earns resources beyond her true demand. This puts user $1$ at a disadvantage, because she has earned useless resources, but it has also increased the allocation of every user in the first group, for a total increase proportional to the number of users; this will allow user $1$ to increase her future allocation because users in that group also have a larger allocation.
Approximate Incentive Compatibility of Dynamic Dominant Resource Fairness.
We will see that a benefit from overreporting can only arise if users use only a subset of the resources. The main results of the rest of this section extend the results of Section 3 assuming users use every resource: overreporting is no longer beneficial (Theorem 4.3), and the approximate incentive compatibility ratio is bounded (Theorem 4.5 and Theorem 4.6), with the bound depending on how similar the users’ ratios are.
Throughout the remainder of this section, we will assume that all users use each resource, that is, $r_{i,k} > 0$ for all $i \in [n]$ and $k \in [m]$. This is often the case in practice when resource sharing is applied in computer systems: most tasks run by a user require a nonzero amount of each resource (e.g. CPU, memory, and storage).
Bounding allocations while one user deviates.
We start by pointing out why the example of the incentive to overreport in the previous section no longer works if $r_{i,k} > 0$ for all $i$ and $k$: in order for the resulting allocation to be fair when user $1$ overreports, all the users’ allocations must be the same. This is true because decreasing the allocation of any user always makes feasible the increase of other users’ allocations (up to their demands). This boils down to the fact that if every $r_{i,k}$ is positive, then when users’ demands are not met it is because of a single saturated resource. In contrast, when some users’ ratios are zero, different users can be limited by different saturated resources.
The assumption that all users use each resource allows us to prove a lemma analogous to Lemma 3.2: with this assumption, we can increase any user’s allocation by decreasing another user’s allocation, allowing the proof to be identical to the proof of Lemma 3.2. Without the assumption this fails: for example, if user $j$ only uses resources for which user $i$’s ratios are zero, then decreasing the allocation of user $j$ does not free any amount of resource for user $i$. Proving this lemma will lead to results similar to Theorems 3.7, 3.5, and 3.4.
Lemma 4.2.
Fix an epoch $t$ and assume that all users’ ratios for all resources are positive. Let $i \ne j$ be two different users. If the following conditions hold

For , and .

For , and .
then and , implying
Similarly to Theorem 3.4, we can now prove that there is no benefit to overreporting. As mentioned previously, this is a very important property because it guarantees that every resource allocated is utilized.
Theorem 4.3.
Assume that for all users $i$ and resources $k$ it holds that $r_{i,k} > 0$. Then the users have nothing to gain by declaring a demand higher than their actual demand.
Because of Lemma 4.2, the proof of this theorem is quite similar to that of Theorem 5.1: if user $1$ overreports her demand to get additional useless resources, we can prove that user $1$ will not get additional useful resources. The full proof can be found in Appendix B.
Next we present an auxiliary lemma, similar to Lemma 3.3, but this time bounding the increase of any single user’s allocation when user $1$ deviates. Unfortunately, it is a weaker version of Lemma 3.3, involving a parameter that measures the similarity of the users’ ratios. If the users’ ratios are the same, i.e. $r_{i,k} = r_{j,k}$ for every $i, j$ and $k$, then the following lemma would allow us to prove an incentive compatibility ratio upper bound matching the single-resource case. In general, however, the incentive compatibility ratio depends on this parameter and becomes larger the more dissimilar the ratios are.
Lemma 4.4.
Assume that for all users $i$ and resources $k$ it holds that $r_{i,k} > 0$. Then, for every user $i$ and every epoch $t$, if user $1$ does not overreport her demand, it holds that
(8) 
The lemma’s proof is similar to that of Lemma 3.3. If the users’ demands are satisfied, then user $1$ cannot decrease the allocation of other users. If users’ demands are not satisfied, then user $1$ can free only a bounded amount of every resource, which can increase another user’s allocation by a correspondingly bounded amount. The full proof can be found in Appendix B.
Adversarial Demands.
Using Lemma 4.4 it is easy to derive an upper bound on how much user can increase her allocation when deviating.
Theorem 4.5.
Assume that for all users and resources it holds that . Then for any