1 Introduction
The α-fair resource sharing model, first studied in [mo2000fair], along with its weighted variants, has been investigated in numerous application domains. The weighted
α-fair resource allocation problem is to find a vector x*
such that 1) the α-fair utility is maximized at x*, and 2) x* lies in a feasible set defined by linear constraints of the form A x ≤ c, where c is a capacity vector for a number of resources and A is the binary user-resource incidence constraint matrix for a number of users, the users being weighted by a positive vector w. The family of α-fair metrics is general and includes popular fairness concepts such as max-throughput (α = 0), proportional fairness, also called the Nash Bargaining Solution (α = 1), min-delay (α = 2), and arbitrarily close approximations of max-min fairness (α → ∞).
In this paper, we study the general weighted α-fair resource allocation problem under linear constraints and we propose a novel lower bound on its optimal solution. A lower bound is a positive vector x respecting feasibility (that is, A x ≤ c) and such that x ≤ x* componentwise. Finding a lower bound in the context of fair resource sharing is of great interest: it permits one to automatically define a minimal share attributable to each resource user, to initialize any exact computation that could otherwise take time, and it may be helpful in the design phase of a system. We seek to derive user-centric formulas, in the sense that their value for a specific user depends only on the resources within a localized problem (and not on the global topology) and only on the users that compete over the same resources. We then evaluate the formulas under different instance regimes and compare them to the literature in order to appreciate the improvements they provide.
Remarkably, we also show how our lower bound can enhance the performance of a distributed algorithm based on the Alternating Directions Method of Multipliers (ADMM) (see [boyd2011distributed]) that can be invoked to solve the fair resource allocation problem optimally. The ADMM is well known for its fast convergence to modest accuracy; however, its performance is highly conditioned by the initialization of its so-called penalty parameter, which can, when badly tuned, induce an extremely poor convergence rate. Thus, correctly tuning the penalty parameter is a task that one should not neglect when using the ADMM. In light of recent studies (in particular, we exploit the results proven in [deng2016global]), we demonstrate how our lower bound permits one to accomplish this task for our particular problem.
A well-known lower bound on the proportionally fair (α = 1) resource allocation was introduced as a building block of the axiomatization of the Nash Bargaining Solution and is commonly referred to as the midpoint domination axiom [deClippel2007]. It states that each of the n users is given at least a fraction 1/n of their dictatorial allocation, that is, the resources they would receive if the other users accepted to receive nothing. We refer to the bound given by the midpoint domination axiom as the midpoint allocation. One can imagine that the midpoint allocation becomes arbitrarily poor as the total number of users becomes large, and its utility as a first estimation of the optimum allocation, negligible. Indeed, the formula includes the weights of the whole set of users and is independent of the problem's local structure. Similarly, the general lower bound found in [marasevic2015fast] may suffer from these dependencies. Concerning proportional fairness (α = 1), we give a more precise midpoint domination result, and provide a lower bound that we call the local midpoint. Our lower bound on the proportionally fair allocation can be interpreted as a localized version of the midpoint domination axiom: now, each user r is proportionally fairly attributable at least a fraction of their dictatorial allocation determined by N_r, which is not the total set of users, but the set of users in competition with user r for some resource. Few works have attempted to provide lower bounds for the general α-fair resource allocation. In fact, the most recent available bound is shown in [marasevic2015fast], and used by the authors as an initialization of their
α-fair heuristic. To the best of our knowledge, this is the best bound available in the literature for the α-fair resource allocation problem, and we refer to it as the State-of-the-Art (SoA) bound.
The remainder of the paper is organized as follows: Section 2 is dedicated to the model definition and problem statement. Our lower bound is presented in Section 3. In Section 4, we recall the key features of the ADMM-based fair distributed algorithm used for our illustration. The performance of the latter is shown in Section 5 and, finally, Section 6 concludes the paper.
2 Model Definition
Let us start by formalizing the weighted α-fair resource allocation problem. In this work, we adopt the terminology of rate control in fixed communication networks. Thus, a resource will be referred to as a link and a user will be called a connection request (or, in short, a request) from a source node to a destination node over a route formed of several links.
Let L be the set of network links, each link l ∈ L having a capacity c_l > 0. Let R be the set of requests. Each request r ∈ R has a predefined route that identifies with a subset of links of the network. In turn, for each link l, R_l is the set of all requests whose route contains the link l. We define the link-route incidence matrix A as A_{lr} = 1 if link l belongs to the route of request r, and A_{lr} = 0 otherwise.
For each request r, x_r denotes the bandwidth allocated to r along its route. We say that an allocation x belongs to the feasibility set (or is feasible) if it satisfies the capacity constraint (1) below:

A x ≤ c,  x ≥ 0,  (1)

where c = (c_l)_{l ∈ L}. Each request r is associated with a weight w_r > 0. The weight vector w accounts for a degree of relative importance of each request that can be defined at the discretion of the network. Weighted α-fairness is formalized in Definition 1 below.
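As a concrete sketch of the model above (a toy instance with hypothetical link names, routes, and capacities chosen for illustration only), the incidence matrix and the feasibility test (1) can be written as:

```python
import numpy as np

# Toy instance (hypothetical names): 3 links with capacities, 3 requests with routes.
links = ["l1", "l2", "l3"]
c = np.array([10.0, 5.0, 8.0])            # capacity vector
routes = {"r1": ["l1", "l2"],             # each route is a subset of links
          "r2": ["l2", "l3"],
          "r3": ["l1"]}
requests = list(routes)

# Link-route incidence matrix: A[l, r] = 1 iff link l belongs to the route of r.
A = np.array([[1.0 if l in routes[r] else 0.0 for r in requests]
              for l in links])

def is_feasible(x, A, c):
    """Capacity constraint (1): x >= 0 and A @ x <= c (componentwise)."""
    return bool(np.all(x >= 0) and np.all(A @ x <= c + 1e-12))

print(is_feasible(np.array([3.0, 2.0, 4.0]), A, c))  # True: link loads (7, 5, 2) <= (10, 5, 8)
```

The matrix-vector product A x simply accumulates, per link, the bandwidth of the requests whose route traverses it.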
Definition 1 (α-fairness)
Let X be a feasibility set defined as in (1), X being a strict superset of {0}. Let α > 0 and w > 0. We say that x* ∈ X is (w, α)-fair (or simply α-fair when there is no confusion on w) if x* > 0 and, for all x ∈ X, the following holds:

Σ_{r ∈ R} w_r (x_r − x*_r) / (x*_r)^α ≤ 0.
Equivalently, x* is α-fair if, and only if, it maximizes the α-fair utility function f_α defined over X:

f_α(x) = Σ_{r ∈ R} w_r g_α(x_r),  (P_α)

where g_α(t) = t^{1−α}/(1−α) if α ≠ 1, and g_α(t) = log(t) if α = 1. We refer to the maximization of f_α over X as Problem (P_α).
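For concreteness, the utility above can be evaluated with a short helper (a minimal sketch; the function and variable names are ours):

```python
import math

def alpha_utility(x, w, alpha):
    """Weighted alpha-fair utility: sum_r w_r * g_alpha(x_r), where
    g_alpha(t) = t**(1 - alpha) / (1 - alpha) if alpha != 1, and log(t) if alpha == 1."""
    if alpha == 1.0:
        return sum(wr * math.log(xr) for wr, xr in zip(w, x))
    return sum(wr * xr ** (1.0 - alpha) / (1.0 - alpha) for wr, xr in zip(w, x))

# alpha = 0 recovers weighted max-throughput; alpha = 2 gives the min-delay objective.
print(alpha_utility([1.0, 2.0], [1.0, 1.0], 0.0))  # 3.0 (total throughput)
print(alpha_utility([2.0], [1.0], 2.0))            # -0.5 (i.e., -1/x)
```

Note that for every α > 0 the summands are strictly concave and increasing, which is what makes the maximizer of (P_α) unique.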
3 Alpha-fairness – a lower bound
In this section, we derive an explicit lower bound for the general α-fair resource allocation problem. Our lower bound only depends on the weight vector w, the capacity vector c, and the link-route incidence matrix A. Moreover, the bound exploits the local structure of the problem, which prevents it from deteriorating systematically with the problem size. We compare it to the SoA bound, which one can formulate as follows:
Proposition 1 ([marasevic2015fast])
Let the vector x* be the optimal solution to the α-fair resource allocation problem. Then, for all r ∈ R:

if

if
where , , and .
We seek to improve the above bound by removing its global dependencies, those parameters being the major degradation factor when the size or the congestion of the problem increases.
For each request r, let x^u_r = min_{l ∈ r} c_l. The so-called utopia point x^u is the (infeasible, when the problem is non-trivial) allocation representing the value each request would receive if it were alone in the network, that is, its dictatorial allocation. Our bound for the α-fair allocation only depends on the utopia point (hence on the capacity vector c), the matrix A, and the weight vector w. For r ∈ R, let N_r denote the set of requests sharing at least one resource with r, i.e., the set of requests whose route intersects that of r.
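The utopia point and the local competition sets can be computed directly from the routes and capacities. The dictionary-based sketch below uses our own names, and the convention that N_r contains r itself is our assumption:

```python
# Toy instance (hypothetical names); routes are sets of links, c maps link -> capacity.
routes = {"r1": {"l1", "l2"}, "r2": {"l2", "l3"}, "r3": {"l4"}}
c = {"l1": 10.0, "l2": 5.0, "l3": 8.0, "l4": 2.0}

# Utopia (dictatorial) point: alone in the network, a request saturates the
# tightest link on its route.
utopia = {r: min(c[l] for l in route) for r, route in routes.items()}

# N_r: requests sharing at least one link with r (r itself included, by our convention).
N = {r: {s for s, rs in routes.items() if rs & route}
     for r, route in routes.items()}

print(utopia)           # {'r1': 5.0, 'r2': 5.0, 'r3': 2.0}
print(sorted(N["r1"]))  # ['r1', 'r2']; r3 shares no link with r1
```

The point of N_r is exactly the locality argument of this section: r3, which competes with nobody, is decoupled from r1 and r2.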
First of all, we use the separability of the objective function of Problem (P_α) to better estimate our lower bound on a restricted problem. Specifically, we prove a restriction lemma (see Lemma 1) that permits one to avoid unnecessary dependencies between requests that do not share resources. Then, we prove our general lower bound on the corresponding restricted problems. Thanks to Lemma 1, the bound remains unchanged in the original problem.
3.1 A restriction lemma
In this paragraph, we show that instead of evaluating our bound on Problem (P_α), one can use a smaller request-centric problem. Specifically, let x* denote the optimal solution of Problem (P_α) and let r be an arbitrary request. We define the restricted problem at r as the following:

maximize Σ_{s ∈ N_r} w_s g_α(x_s)
subject to Σ_{s ∈ R_l ∩ N_r} x_s ≤ c_l − Σ_{s ∈ R_l \ N_r} x*_s, for all l ∈ L,
x_s ≥ 0, for all s ∈ N_r.  (P_α^r)

Intuitively, Problem (P_α^r) arises when the allocations of all the requests that do not share any link with r are fixed to their optimal α-fair value (that is, following the vector x*), and one needs to compute the α-fair allocation of the remaining requests, that is, the requests within N_r that share at least one resource with r. The capacity constraints are thus updated taking into account the amounts of resources that are already allocated, as shown by the right-hand side of the constraints. Note in passing that all the links in L that do not serve any of the requests within N_r yield trivial constraints in (P_α^r) and can hence be removed without any loss.
We then have the following result:
Lemma 1
Consider the problem:

maximize f_α(x) subject to x ∈ X and x_s = x*_s for all s ∈ R \ N_r.  (2)

Then, Problem (2) is equivalent to the restricted problem (P_α^r), and its optimal solution is x*.
Proof. It suffices to show that the problems (2) and (P_α^r) are equivalent. Then, the uniqueness of the solutions permits one to conclude.
We know that Problem (2) is feasible, as x* is a feasible point. Denote its optimal solution by y. We remark that both x* and y are feasible for both Problem (P_α) and Problem (2). Hence, by optimality, we necessarily have f_α(y) = f_α(x*). Moreover, Problem (P_α) has a unique optimal solution. Thus, y = x*.
In particular, y_s = x*_s for all s ∈ R \ N_r. Thus, we can fix those values without changing the optimum, and Problem (2) is equivalent to the restricted problem (P_α^r). ∎
Thanks to Lemma 1, we are now ready to present our lower bound on the α-fair allocation, based on the structure of the restricted problems.
3.2 Lower bound
We now show the main result of this paper. We define the local midpoint of a request as the fraction of its dictatorial (utopia) allocation given by its weight relative to the total weight of the requests in its local competition set N_r:
Theorem 1
Proof. We first prove the proposition for α = 1. Let us define r_min as a request with the least optimal allocation: x*_{r_min} = min_{s ∈ R} x*_s. By definition of r_min, we have:
(3) 
Let r ∈ R. By Lemma 1, it suffices to show the inequality in the restricted problem associated with r. Let X_r denote its feasible set. Thus, for all x ∈ X_r, we have:
This inequality holds for all feasible x. Thus, we evaluate it at the dictatorial allocation of r, that is, at the point x^d defined as x^d_r = x^u_r and x^d_s = 0 for all s ≠ r. Let us note in passing that x^d is feasible for the restricted problem. Thus,
where we remind that and . Rearranging the terms, one gets:
which yields:
(4) 
In particular, applying equation (4) to , we get:
Next, we show the bound for α ≠ 1. In the same fashion, we look at the restricted problem. Let r ∈ R and consider its restricted problem. Then, one has:
Rearranging the terms finally provides the desired bound. For any value of α, one can remark that the bound only depends on the capacity vector c, the weight vector w, and the link-route incidence matrix A. ∎
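As an illustration of the proportionally fair case (α = 1), the sketch below computes a local midpoint value per request. The exact expression used here, the fraction w_r / Σ_{s ∈ N_r} w_s of the utopia allocation, is our reading of the bound, stated as an assumption rather than a verbatim transcription of Theorem 1:

```python
def local_midpoint(routes, c, w):
    """Assumed local midpoint for proportional fairness (alpha = 1):
    m_r = w_r / (sum of w_s over s in N_r) * utopia_r,
    where N_r gathers the requests sharing at least one link with r (r included).
    This expression is our reconstruction, not a verbatim copy of Theorem 1."""
    utopia = {r: min(c[l] for l in rt) for r, rt in routes.items()}
    return {r: w[r] / sum(w[s] for s, rs in routes.items() if rs & rt) * utopia[r]
            for r, rt in routes.items()}

# Two requests share link l1 (capacity 6); a third is alone on l2 (capacity 4).
routes = {"r1": {"l1"}, "r2": {"l1"}, "r3": {"l2"}}
c = {"l1": 6.0, "l2": 4.0}
w = {"r1": 1.0, "r2": 2.0, "r3": 1.0}
print(local_midpoint(routes, c, w))  # {'r1': 2.0, 'r2': 4.0, 'r3': 4.0}
```

On this toy instance the bound is tight: the proportionally fair allocation itself is (2, 4, 4), and an isolated request (r3) keeps its full utopia value, consistent with the locality discussion above.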
3.3 Illustration
To conclude this section, we illustrate a comparison of the two lower bounds presented in Proposition 1 and Theorem 1, respectively, under different regimes. Given the formulas, one can remark that the sensitivity of our bound to arbitrary problem sizes should be lessened, as it now focuses on local structures. For α = 1, we obtain request-centric formulas. For general α, this elimination comes at the price of a dependency on the global minimum local midpoint value. Intuitively, one can remark that the two bounds may react differently to a fluctuating asymmetry of the weight vector w or of the capacity vector c, namely, to a variation of the weight spread and of the capacity spread. For a better view, we illustrate this behavior in Figure 1.
The two bounds were compared on instances with 1000 requests over the same graph of type barabasi(100,4) (see [barabasi]). The routes were generated at random by taking the shortest path between pairs of sources and destinations drawn uniformly at random. The weights (resp. link capacities) were also drawn uniformly at random within bounded intervals. For each instance and each α, we define the score of our bound as the proportion of requests for which it beats the SoA bound for that particular α. In Figure 1(a), the weight spread was fixed to its minimum (which means all the weights are equal) and we plotted the score versus α for different values of the capacity spread. Figure 1(c) shows the score in the other extreme situation (all the link capacities are equal) for different values of the weight spread.
In order to appreciate the quality of the bound improvement, if any, we plot, in Figures 1(b) and 1(d), the corresponding bound improvements, measured as the ratio of our bound to the SoA bound. To preserve the readability of the plots, we represent only the extreme situations, corresponding to the smallest (dashed lines) and largest (solid lines) values of the varying spread parameter of Figures 1(b) and 1(d), respectively. Figures 1(b) and 1(d) show the best, worst, and average improvements encountered in the same problem instance. All the points represented in Figure 1 correspond to an average over 10 instances generated under identical conditions. In Figures 1(a) and 1(c), we also included the specific points as translucent scattered markers.
According to Figure 1, our bound is an absolute improvement for values of α in the interval [0, 2] (thus including the popular max-throughput, proportional fairness, and min-delay concepts) in all situations. In particular, for proportional fairness, the simulations show that we improve the bound by two orders of magnitude in all situations. For min-delay fairness, the bound is generally improved, on average, by a multiplicative factor between 1 and several tens. For greater values of α, it is interesting to see that each bound is more adapted to certain problem structures. For instance, our bound is of greater interest when the network link capacities are more heterogeneous (which may correspond to situations where the network is asymmetrically congested), whereas the SoA bound is more adapted to asymmetrically weighted problems. One can thus conclude that the two available bounds complement each other for general α.
After presenting our bound, we now demonstrate how it permits one to boost the performance of an algorithm that solves the α-fair resource allocation problem.
The next section is dedicated to the presentation of the algorithm, based on ADMM.
4 Fast and Distributed ADMM (FDADMM)
Several approaches may be used to tackle the α-fair resource allocation problem (see, e.g., [kelly1998rate], and [palomar2006tutorial] for a tutorial). One of them is the Alternating Directions Method of Multipliers (ADMM) (see, e.g., [boyd2011distributed]). The ADMM is well known for its distributivity properties, which permit one to decouple constraints that are handled in parallel and then plugged together by means of consensus constraints. In [allybokus2017real], these properties are used to design a fully distributed algorithm that solves the problem optimally in the context of traffic rate control in distributed Software-Defined Networks. For a description of the general ADMM framework, the reader may refer to [boyd2011distributed], and for a more detailed construction of the presented algorithm, to [allybokus2017real]. In this section, we briefly describe the design of the distributed algorithm used in the latter.
4.1 Algorithm overview
Assume the network links are split into a number of domains, each domain d corresponding to a subset of links L_d, the domains forming a covering of the whole set L:

L = L_1 ∪ … ∪ L_D.

For each domain d, let R_d be the set of requests that traverse domain d, and, for each request r, let D_r be the set of domains that r traverses. Problem (P_α) can thus be rewritten as:
(6) 
where ι_d denotes the indicator function of the capacity subset associated with domain d:
Further, we artificially separate the problem by creating a private variable for each domain d, and by enforcing agreement upon their values between domains through consensus constraints. Problem formulation (6) now reads:
s.t  (7)  
Finally, we decompose the problem by separating the private objective of each domain. For each domain d and each link l ∈ L_d, a vector of variable copies is created for link l and reserved for the component function of domain d. We can write Problem (7) in the following form:
s.t  (8)  
Let denote the indicator function of the feasible set (8). Then, the formulation takes the compact 2block form:
(9)  
s.t. 
Applied to the last formulation (9), the distributed ADMM is described in Algorithm 1. In lines 5 and 10, the variables are the dual variables associated with the two blocks of constraints in (9), respectively. Also, P is the Euclidean projection onto the simplex, λ is a scalar reciprocal penalty parameter, and a lower bound on the fair solution is computed from the input parameters.
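Algorithm 1 is not reproduced here, but the flavor of the consensus scheme can be conveyed on a toy instance: a single link of capacity C shared by weighted proportionally fair requests. The sketch below is a generic 2-block consensus ADMM (closed-form x-update, projection z-update, dual update), not the full FDADMM; all names are ours:

```python
import numpy as np

def pf_admm_single_link(w, C, rho=1.0, iters=200):
    """Toy consensus ADMM for: max sum_r w_r*log(x_r) s.t. sum_r x_r <= C, x >= 0.
    x-block: separable prox of -w_r*log(.), with a closed-form positive root;
    z-block: Euclidean projection onto {z >= 0, sum(z) <= C};
    u: scaled dual variable for the consensus constraint x = z."""
    w = np.asarray(w, dtype=float)
    n = len(w)
    z, u = np.full(n, C / n), np.zeros(n)
    for _ in range(iters):
        v = z - u
        # argmin_x [ -w*log(x) + (rho/2)*(x - v)^2 ] = (v + sqrt(v^2 + 4w/rho)) / 2
        x = (v + np.sqrt(v ** 2 + 4.0 * w / rho)) / 2.0
        # Project t = x + u onto the capped simplex; a uniform shift followed by
        # clipping is exact here because no component of t gets clipped to zero.
        t = x + u
        z = np.maximum(t - max(0.0, (t.sum() - C) / n), 0.0)
        u += x - z
    return z

# Proportional fairness splits the link capacity proportionally to the weights:
print(pf_admm_single_link([1.0, 2.0, 1.0], C=4.0))  # close to [1., 2., 1.]
```

With weights (1, 2, 1) and C = 4, the proportionally fair allocation is (1, 2, 1), which the iteration approaches; the same two-block structure underlies formulation (9), with one private copy per domain instead of a single consensus variable.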
4.2 Performance
The convergence of ADMM has been established since the 1990s (see [Eckstein1992]), and its convergence rate has been widely studied. Today, the most general convergence rate of ADMM is known to be O(1/k) (k being the iteration count), and linear convergence rates are provably obtained for strongly convex problems. Nevertheless, the performance of the ADMM remains highly sensitive to the initialization and the update of the penalty parameter. In [deng2016global], the linear convergence rate of ADMM for strongly convex problems is quantified and optimized with respect to the penalty parameter, which yields an optimal tuning of it. Its value depends on the (global) strong convexity and Lipschitz gradient moduli of the objective function, when these are finite. In [allybokus2017real], this result is applied to a strongly convex equivalent centralized formulation of our problem to derive an approximate adaptive tuning of the distributed version of the algorithm. The adaptive penalty parameter is computed as the optimal parameter of the centralized formulation, given according to the formula
(10) 
where m_f is the strong convexity modulus of the objective and L_f is the Lipschitz modulus of its gradient. In fact, the α-fairness functions have singularities near 0, which make the Lipschitz modulus not globally defined, unless the feasible set is reduced from below by means of a positive lower bound on the optimal solution. Thus, Equation (10) is applied with the Lipschitz modulus of the gradient of the objective over the set of feasible points dominating this lower bound.

Adaptive penalty parameter schemes have been proposed to tackle this issue and provably bring consistent improvements to the convergence behavior of ADMM. One remarkable adaptive scheme can be found in [he2000alternating], in which the authors introduce the residual balancing (RB) principle, which consists of shrinking or expanding the penalty parameter whenever the primal and dual residuals are unbalanced. For a definition of RB, we refer the interested reader to [he2000alternating]. Although this scheme helps make the ADMM less dependent on its initialization, the empirical behavior of the algorithm suggests that there is still room and interest for better initialization. To demonstrate this, we adopt residual balancing as the default adaptive scheme for the penalties in all the algorithms of the present paper.
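As commonly stated (e.g., in [boyd2011distributed]), the residual balancing rule can be sketched as follows; the constants μ = 10 and τ = 2 are typical choices from the literature, not values prescribed by the present paper:

```python
def residual_balancing(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    """Residual balancing (RB): expand the penalty when the primal residual
    dominates, shrink it when the dual residual dominates, else keep it."""
    if primal_res > mu * dual_res:
        return rho * tau   # penalize constraint violation more strongly
    if dual_res > mu * primal_res:
        return rho / tau   # relax the penalty
    return rho

print(residual_balancing(1.0, primal_res=25.0, dual_res=1.0))  # 2.0
```

The rule only reacts to imbalances larger than the factor μ, which keeps the penalty stable once the two residuals decrease at comparable rates.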
In Section 3, we introduced a lower bound (Theorem 1) on the α-fair optimal allocation. Next, we demonstrate how this bound permits one to enhance the performance of the ADMM for the α-fair resource allocation problem, and we compare it with the performance brought by the SoA bound (Proposition 1). Although the lower bound permits one to quickly set a minimal individual resource allocation that is never violated during the running time of the algorithm, the major feature of its introduction is that it permits one to define an initialization of the penalty parameter that can enhance the algorithm's performance. Indeed, this initialization can provide spectacular convergence acceleration, whereas reducing the feasible set at the projection of line 6 of Algorithm 1 does not seem to matter, illustrating the fast convergence of FDADMM to modest accuracy. These observations are illustrated in the next section.
5 Execution
In the present simulations, we dedicate our performance evaluation to proportionally fair resource allocation problems (α = 1). In this section, we demonstrate the gains achievable by tuning only the initial penalty parameter of the FDADMM, by comparing several initialization schemes. Indeed, the only difference between the algorithms that we compare is that the initial reciprocal penalty parameter λ is chosen either arbitrarily (FDADMM(λ)), or according to Equation (10) applied to the SoA bound (FDADMMMB) or to our local midpoint bound (FDADMMLB).
The problem instances were generated under the same conditions as in Section 3.3. As it appears that a small value of the minimum local midpoint can importantly deteriorate the quality of our bound, we execute the simulations under two different situations: 1) a balanced weight vector, and 2) a highly unbalanced weight vector.
Performance results
In Figures 2 and 3, we plot the iteration count of the algorithms under situations 1 and 2, respectively. The algorithms stop when the primal and dual residuals of the ADMM (see, e.g., [boyd2011distributed]) fall below a tolerance corresponding to relatively modest accuracy. For each problem size, in terms of the number of distinct requests, we generated 10 random instances of the corresponding size and plotted the average performance. The specific points are also represented with scattered translucent markers to account for the exact performance of each algorithm. For each situation, we observed the performance of FDADMMLB, in particular its average initial reciprocal penalty parameter given by Equation (10), and chose several initialization values below and above this average to account for the effect of this initialization on the algorithm's performance.
Situation 1 (Figure 2). We observe a spectacular improvement of the FDADMM algorithm from the scheme FDADMMMB to the scheme FDADMMLB, corresponding to a reduction of the iteration count by two orders of magnitude. When the initial reciprocal penalty parameter is chosen larger than that of FDADMMLB, although the performance seems satisfactory, one can observe that FDADMMLB still executes faster on average.
Situation 2 (Figure 3). The same improvement in performance, related to the introduction of our lower bound, is observed. It seems that for lower initialization values of the reciprocal penalty parameter, the algorithms demonstrate poorer performance. Nevertheless, one can observe that higher values can provide, although not consistently, better performance than FDADMMLB on average. Although this phenomenon may seem surprising after a look at Situation 1, it can be explained by the fact that when the vector w is highly unbalanced (as is the case when its values are uniformly drawn at random within a wide interval), the objective function has a highly asymmetric structure. Indeed, the computation of the strong convexity modulus in [allybokus2017real], carried out to obtain a desirable initialization, shows that the corresponding factor in Equation (10) in fact equals the smallest strong convexity modulus of the per-request component functions, which is proportional to the corresponding weight. Not surprisingly, then, this evaluation becomes poorer as the vector w becomes unbalanced. Thus, it is worth considering that accurate penalty parameter tuning becomes more difficult when the symmetry of the weighted fairness function is poor. Nevertheless, our simulations suggest that initializing the reciprocal penalty parameter according to Equation (10) applied to our lower bound permits one to obtain a satisfactory performance of the FDADMM. We believe this scheme can be improved in order to tackle a potential performance issue under highly asymmetric realizations of the α-fair resource allocation problem, characterized by a very low value of the minimum local midpoint.
6 Conclusion
We studied the structure of weighted α-fair allocation problems and proposed a lower bound that permits one to better understand the problem's features. The α-fair allocation can be lower bounded individually and locally (that is, each user, or request, has a minimal guaranteed allocation that depends on its individual weight and on those of a locally reduced subset of users). We compared our bound experimentally with the best bound available in the literature, and showed that we can provide consistent improvement in the case of high asymmetry of the capacity vector (which may describe congested network situations) or in the case of a suitable symmetry of the fairness measures (which may cover situations where the requests have balanced relative priorities). We believe that the bound derived in the present paper for general fairness concepts can be further improved, and we intend to soften its dependency on the global minimum local midpoint value. Our intuition suggests this would considerably improve the quality of our general bound. To demonstrate the utility of our derivation, we showed as an illustration how the introduction of this lower bound can remarkably improve the performance of an iterative distributed algorithm, the FDADMM, which solves the problem optimally, by a simple initialization of a penalty parameter. We also observed that the initialization scheme allows a remarkably satisfactory tuning of the FDADMM, and that this accuracy may deteriorate as the asymmetry of the weighted problem strengthens. In the future, we envision studying this situation and strengthening our bound in order to possibly empower the initialization scheme, providing more robustness to the technique with respect to asymmetry.