1 Introduction
Software Defined Networking (SDN) technologies are radically transforming network architectures by offloading the control plane (e.g., routing, resource allocation) to powerful remote platforms that gather and maintain a local or global view of the network status in real time and push consistent configuration updates to the network equipment. The computation power of SDN controllers fosters the development of a new generation of control plane architectures that use compute-intensive operations. Initial designs of SDN architectures
[20] envisioned the use of one central controller. However, for obvious scalability and resiliency reasons, large networks already in production are considering a distributed SDN control plane [10]. Thus, a logically centralized network control plane may consist of multiple controllers, each in charge of an SDN domain of the network and operating together, in a flat [18] or hierarchical [6] architecture, to achieve global tasks.

In this paper, we study the problem of finding a global fair resource allocation when the control plane is distributed over several domain controllers. More specifically, we consider the case where the size of flows evolves over time and bandwidth allocations have to be quickly adjusted towards the new fair solution (in the sense of fairness defined by Mo et al. in [13]). In distributed SDN architectures, controllers operate with full information in their domain and communicate (e.g., system states or optimization variables) with adjacent domain controllers or a central gathering entity. Exchanges between controllers are expensive in terms of communication delay and overhead. In this setting, due to the scale of networks, a distributed algorithm may not have enough time to converge to the optimum before it has to provide a feasible answer. Therefore, the main challenge is to produce good quality feasible solutions very quickly. Local mechanisms such as AutoBandwidth [15] have been proposed to greedily and distributedly adjust the allocated bandwidth to support time-varying IP traffic in Multi-Protocol Label Switching (MPLS) networks. However, they do not ensure fairness and do not optimize resources globally. On the other hand, centralized algorithms have been proposed to solve the problem but fail at quickly providing good feasible solutions in a distributed setting [12].
We propose a distributed algorithm that performs in real time for the fair resource allocation problem in distributed SDN control planes. It is based on the Alternating Direction Method of Multipliers (ADMM) [2], which has recently attracted a lot of attention for its separability and fast convergence properties. Our algorithm, called Fast Distributed ADMM (FD-ADMM), is designed to be fully deployed over a distributed SDN control plane and permits controllers to handle their domains simultaneously while operating together, in the fashion of a general distributed consensus problem, to achieve a global optimal value. We show that this algorithm can function in real time by iteratively producing a feasible (global) resource allocation strategy that converges to the fair optimal allocation. It very quickly (in fact, from the first iterations) produces good quality feasible solutions that permit adjusting bandwidth within milliseconds for flows that evolve quickly and need immediate response, a property that standard dual decomposition methods such as the one in [12] do not have.
We argue that this property is crucial, as the network state (e.g., flow size, flow arrival/departure, link/node congestion) may be affected by abrupt changes. Thus, we claim that it is often preferable to have quick access to a good quality solution rather than a provably asymptotically optimal solution with a poor convergence rate. Therefore, we show that our algorithm is a good candidate for real-time fair resource allocation. To this aim, we compare the performance of the algorithm to the Lagrangian dual splitting method in [21, 12], a standard decomposition method that violates feasibility and demonstrates a poor convergence rate, and hence responds more slowly in real time.
Moreover, we provide an explicit and adaptive tuning of the penalty parameter in FD-ADMM so that an optimal convergence rate can be approached on any instance execution. We also show that projections can be massively parallelized link by link, yielding a convergence rate that does not depend on the way the network is partitioned into domains.
The remainder of this paper is organized as follows. Section 2 surveys the related work on the real-time fair resource allocation problem. Sections 3 and 4 explicitly reformulate the fair resource allocation problem in the fashion of a general distributed consensus problem, addressed with the terminology of proximal algorithms. Section 5 discusses an optimal tuning of the penalty parameter in FD-ADMM. Section 6 provides simulations that validate our approach and, finally, Section 7 concludes the paper.
2 Related work
The concept of fair resource allocation has been a central topic in networking. In particular, max-min fairness (a resource allocation strategy is said to be max-min fair if no route can increase its allocation, while remaining feasible, without penalizing another route that has a smaller or equal allocation) has been the classic resource sharing principle [1] and has been studied extensively. The concept of proportional fairness and its weighted variants were introduced in [9]. Later, a spectrum of fairness metrics including the two former ones was introduced by Mo et al. in [13] as the family of $\alpha$-fair utility functions. Some early notable work on max-min fairness includes [3], where the authors propose an asynchronous distributed algorithm that communicates explicitly with the sources and pays some overhead in exchange for more robustness and faster convergence. Later, in [17], a distributed algorithm is defined for the weighted variant of the max-min fair resource allocation problem in MPLS networks, based on the well-known property that an allocation is max-min fair if and only if each Label-Switched Path (LSP) either admits a bottleneck link amongst its used links or meets its maximal bandwidth requirement (see Definition 4 therein of a bottleneck link). The problem of Network Utility Maximization (NUM) was also addressed with standard decomposition methods that give efficient and very simple algorithms based on gradient ascent schemes performing their update rules in parallel. In this context, Voice [21], then McCormick et al. [12], tackle the fair resource allocation problem with a gradient descent applied to the dual of the fair resource allocation problem.
In these works, no mention is made of the potential (in fact, systematic) feasibility violation of the sequences generated by those algorithms, which we believe is a matter that deserves attention in real-time setups. Motivated by this, the authors of [19] recently provide a feasibility preserving version of Kelly's methodology in [9]. Their algorithm introduces a slave process that gives at each (master) iteration an optimal solution of a weighted proportionally fair resource allocation problem, which is explicitly addressed in the case of polymatroidal and flow aggregating networks only. As a matter of fact, we contribute to this problem with an efficient real-time version of the slave process, for any topology, preserving feasibility at each (slave) iteration. Amongst approximative approaches, one can quote the very recent work [11], where a multiplicative (respectively, additive, depending on the fairness parameter) approximation is provably obtained in time polylogarithmic in the problem parameters. Moreover, starting from any point, that algorithm reaches feasibility within polylogarithmic time and remains feasible forever after. The algorithm described in our paper solves the problem optimally and reaches feasibility from the first iteration from any starting point.
The work around ADMM is currently flourishing. The best known convergence rate of ADMM [7] failed to explain its empirically fast convergence until very recently, for instance in [5], where global linear convergence rates are established in four scenarios of the strongly convex case. ADMM is also well known for having a performance that highly depends on the parameter tuning, namely, the penalty parameter in the augmented Lagrangian formulation (see Section 3 below). An effective use of this class of algorithms cannot be decoupled from an accurate parameter tuning, as convergence can be extremely slow otherwise. In the same paper [5], the authors provide a linear convergence proof that yields a convergence rate in closed form that can be optimized with respect to the objective's parameters. Therefore, thanks to these works, an optimal tuning of ADMM for fair resource allocation is now available. Several papers use the distributivity of ADMM to design efficient algorithms solving consensus problems in, e.g., model predictive control and congestion control, without however addressing this fundamental detail. In the simulations of [14], for instance, the authors try several choices of the penalty parameter and plot the best result found for each point.
Our contribution: In this paper, we reformulate the fair resource allocation problem and design a distributed algorithm based on ADMM, called FD-ADMM. We show that this algorithm outputs at each iteration a feasible resource allocation strategy that converges to the unique optimum of the problem. We also provide an adaptive strategy to correctly tune the FD-ADMM penalty parameter, and we show that projections can be massively parallelized on a link-by-link basis. Finally, we show how our algorithm outperforms the dual methods mentioned above in terms of feasibility preservation and responsiveness in dynamic scenarios.
3 Fair resource allocation problem
In this section, we reformulate the fair resource allocation problem as a convex optimization problem. Then, we start off with our algorithm design by presenting C-ADMM, an algorithm that solves our problem in a centralized fashion and that will be helpful in designing our distributed algorithm.
3.1 Problem reformulation
Let $\mathcal{R}$ be a set of connection requests over a network with a set $\mathcal{L}$ of capacitated links. Each link $l \in \mathcal{L}$ has a total capacity of $c_l$. Each request $r \in \mathcal{R}$ is represented by a route containing a subset of $\mathcal{L}$ that, without any confusion, we still denote $r$. With some abuse of notation, we write $l \in r$ or $r \ni l$ to say that link $l$ belongs to the route $r$, or route $r$ goes through link $l$, respectively. Given the set of requests and their corresponding utility functions $U_r$, the network allocates bandwidth to all the requests in order to maximize the overall utility $\sum_{r} U_r(x_r)$, while satisfying feasibility, i.e., the link capacity constraints. Denote by $x_r$ the capacity allocated to route $r$, and let $x = (x_r)_{r \in \mathcal{R}}$. Then, we have the classic capacity constraint in matrix form:
$$A x \le c, \qquad x \ge 0, \qquad (1)$$
where $A = (A_{lr})$ is the link-route incidence binary matrix: $A_{lr} = 1$ if $l \in r$ and $A_{lr} = 0$ otherwise.
Our aim is to compute an $\alpha$-fair capacity allocation $x^{\star}$:
$$x^{\star} = \operatorname{arg\,max} \Big\{ \textstyle\sum_{r \in \mathcal{R}} w_r U^{\alpha}(x_r) \; : \; A x \le c, \; x \ge 0 \Big\}, \qquad (2)$$
where the $\alpha$-fair utility function is defined according to Mo and Walrand's classic characterization in [13], which we report below.

Definition 1 ($\alpha$-fairness, [13]).
Let $C \subset \mathbb{R}_{+}^{\mathcal{R}}$ be a nonempty feasible set not reduced to $\{0\}$. Let $\alpha \ge 0$ and $w = (w_r)_{r \in \mathcal{R}} > 0$. We say that $x^{\star} \in C$ is $(w,\alpha)$-fair (or simply $\alpha$-fair when there is no confusion on $w$) if $x^{\star} > 0$ and, for each $x \in C$,
$$\sum_{r \in \mathcal{R}} w_r \, \frac{x_r - x^{\star}_r}{(x^{\star}_r)^{\alpha}} \;\le\; 0.$$
Equivalently, $x^{\star}$ is $\alpha$-fair if, and only if, $x^{\star}$ maximizes the $\alpha$-fair utility function $f_{\alpha}(x) = \sum_{r \in \mathcal{R}} w_r U^{\alpha}(x_r)$ defined over $C$,
where
$$U^{\alpha}(t) = \frac{t^{1-\alpha}}{1-\alpha} \;\text{ if } \alpha \ne 1, \qquad U^{\alpha}(t) = \log(t) \;\text{ if } \alpha = 1.$$
The success of $\alpha$-fairness is due to its generality: in fact, for $\alpha = 0, 1, 2$ and $\alpha \to \infty$ it is equivalent to max-throughput, proportional fairness, min-delay, and max-min fairness, respectively. We observe that the $\alpha$-fair utility functions are non-decreasing, strictly concave, not identically equal to $-\infty$, and upper semi-continuous. It is well known that under these conditions the function $f_{\alpha}$ admits a unique maximizer over any convex closed nonempty set.
From now on, we adopt the convex optimization terminology. Define for each $r \in \mathcal{R}$ the convex cost function $g_r = -w_r U^{\alpha}$, extended with $g_r(t) = +\infty$ for $t < 0$. Then, $g_r$ is a convex closed proper (closed stands for lower semi-continuous and proper means not identically equal to $+\infty$) function over $\mathbb{R}$. Let us introduce $\iota_{\mathcal{C}}$ as the indicator function of the convex closed set $\mathcal{C} = \{z \in \mathbb{R}^{\mathcal{R}} : A z \le c, \; z \ge 0\}$:
$$\iota_{\mathcal{C}}(z) = 0 \;\text{ if } z \in \mathcal{C}, \qquad \iota_{\mathcal{C}}(z) = +\infty \;\text{ otherwise.}$$
Then our $\alpha$-fair problem can equivalently be formulated as the following convex program:
$$\min_{x,\,z} \;\; g(x) + \iota_{\mathcal{C}}(z), \qquad \text{where } g(x) = \sum_{r \in \mathcal{R}} g_r(x_r), \qquad (3)$$
$$\text{s.t.} \;\; x - z = 0. \qquad (4)$$
3.2 ADMM as an augmented Lagrangian splitting
Let us begin by recalling to the reader the basic principles of the Alternating Direction Method of Multipliers (ADMM), applied to our $\alpha$-fair problem. To this aim, the augmented Lagrangian with penalty $\rho > 0$ for problem (3)-(4) writes ($\langle \cdot, \cdot \rangle$ denotes the Euclidean product and $\|\cdot\|$ the Euclidean norm):
$$L_{\rho}(x, z, \mu) = g(x) + \iota_{\mathcal{C}}(z) + \langle \mu, x - z \rangle + \frac{\rho}{2}\, \|x - z\|^{2}, \qquad (5)$$
where $\mu$ is the vector of Lagrange multipliers. The method of multipliers consists in the following update rules, where the superscript $k$
denotes an iteration count:
$$(x^{k+1}, z^{k+1}) = \operatorname{arg\,min}_{x,\,z} \; L_{\rho}(x, z, \mu^{k}), \qquad \text{(M1)}$$
$$\mu^{k+1} = \mu^{k} + \rho \, (x^{k+1} - z^{k+1}). \qquad \text{(M2)}$$
The main idea in alternating directions is to decouple the variables in the optimization stage (M1): instead of a global optimization over $(x, z)$, we first optimize with respect to the variable $x$; then, given the new update of $x$, we optimize with respect to $z$. Before stating the corresponding update rules of ADMM, let us first recall the following fact.
Fact 1 ([2]).
Let $f : \mathbb{R}^{n} \to \mathbb{R} \cup \{+\infty\}$ be a closed proper convex function. The set $\operatorname{dom} f$ denotes the domain of $f$, that is, the set upon which $f$ takes real values. Assume $\operatorname{dom} f \ne \emptyset$. Then, the following facts hold:

(i) For $\lambda > 0$ and $v \in \mathbb{R}^{n}$, the minimization problem
$$\min_{u} \; f(u) + \frac{1}{2\lambda}\, \|u - v\|^{2}$$
admits a unique solution. The (scaled) proximal operator of $f$ is the well-defined map $\operatorname{prox}_{\lambda f}(v) = \operatorname{arg\,min}_{u} \, f(u) + \frac{1}{2\lambda}\|u - v\|^{2}$.

(ii) Assume that $f$ takes the separable form $f(u) = \sum_{i} f_i(u_i)$ for $u = (u_1, \ldots, u_n)$ (write $f = \bigoplus_i f_i$), where the $f_i$ are closed, proper and convex. Then, for all $v$, $\operatorname{prox}_{\lambda f}(v) = \big(\operatorname{prox}_{\lambda f_i}(v_i)\big)_{i}$.

(iii) Assume that $f$ is the indicator function of a closed convex nonempty set $C$. Then $\operatorname{prox}_{\lambda f}$ is the Euclidean projection onto $C$.
The definition of a proximal operator being set, a straightforward calculus shows that the alternating minimization of (M1) yields, with $\lambda = 1/\rho$:
$$x^{k+1} = \operatorname{prox}_{\lambda g}(z^{k} - u^{k}), \qquad z^{k+1} = \Pi_{\mathcal{C}}(x^{k+1} + u^{k}), \qquad u^{k+1} = u^{k} + x^{k+1} - z^{k+1}.$$
ADMM can thus be expressed in the proximal (scaled) form, which we refer to as Centralized ADMM (C-ADMM).
In Algorithm 1, $\Pi_{\mathcal{C}}$ is the projection on $\mathcal{C}$, and $u = \lambda \mu$ the scaled dual variable. Now, the first step of Algorithm 1 (line 3) can be separated thanks to the separability property of the objective function, see Fact 1. In fact, $g$ is fully separable, as $g = \bigoplus_{r} g_r$. Thus, the proximal update of line 3 takes the trivially parallelized form:
$$x^{k+1}_r = \operatorname{prox}_{\lambda g_r}(z^{k}_r - u^{k}_r), \qquad r \in \mathcal{R}, \qquad (6)$$
such that each local variable can be computed separately.
Through expression (6), we are thus able to provide an efficient update rule for $x$, provided that the separate proximal computations are inexpensive. However, two main issues arise.
Main issues with C-ADMM: a) First, an update of the variable $z$ in line 4 of Algorithm 1 requires full knowledge of the projection mapping $\Pi_{\mathcal{C}}$, which in turn requires full information on the capacity set of the network. Thus, this global update rule represents an important limiting factor to the design of a fully distributed algorithm, which is our main design interest here, in order to follow the distribution of SDN control planes.
b) Moreover, although the convergence of C-ADMM may only require some tens of iterations (see Section 6 for further details), it may be slow in terms of computation time due to the successive applications of a projection algorithm that does not scale with the problem size. This also gives rise to a double loop algorithm where each iteration requires the convergence of an inner process that can be time-consuming. Indeed, computing the projection of a generic point onto a closed convex nonempty polyhedron is in general non-trivial. Hence, for general polyhedra, one has to operate alternate projections, summon quadratic programming solvers, or use iterative algorithms such as the one in [8].
We address issues a) and b) in the next section, where we propose FD-ADMM, a distributed version of C-ADMM.
4 The general consensus form of ADMM: an efficient distributed algorithm design
In this section, we show how to alleviate the cost of the global projection subroutine in C-ADMM (line 4) by decomposing the formulation with respect to the network links of each SDN domain in the fashion of a consensus problem, and we present FD-ADMM. As stated at the end of Section 3, the global knowledge of the topology and the computational effort required by the projection step (line 4) of C-ADMM are not affordable in the distributed SDN control plane. The decomposition thus permits respecting the locality of the different domain controllers, which now handle the projections link by link, efficiently and in parallel. The decomposition into domains can be orchestrated by the SDN architect without any constraint. Unavoidably though, domains will need to exchange information, as routes may traverse different domains.
4.1 Preliminaries
We organize the network into several domains $(\mathcal{L}_d)_{d = 1, \ldots, D}$ such that $\mathcal{L}_1, \ldots, \mathcal{L}_D$ forms a partition of the set of links $\mathcal{L}$. Let $\mathcal{R}_d$ be the set of routes traversing the domain $d$ via some link $l \in \mathcal{L}_d$. More formally, $\mathcal{R}_d = \{r \in \mathcal{R} : \exists\, l \in \mathcal{L}_d, \; l \in r\}$. Hence, $(\mathcal{R}_d)_d$ forms a covering of $\mathcal{R}$. Let $\iota_l$ denote the indicator function for link $l$, i.e.,
$$\iota_l(z) = 0 \;\text{ if } \sum_{r \ni l} z_r \le c_l \text{ and } z \ge 0, \qquad \iota_l(z) = +\infty \;\text{ otherwise.} \qquad (7)$$
Also, let us define $\mathcal{C}_l = \{z \ge 0 : \sum_{r \ni l} z_r \le c_l\}$. Thus, for each $l \in \mathcal{L}$, $\mathcal{C}_l$ is the (convex, closed) capacity set of the link $l$. Finally, for $l \in \mathcal{L}$ and a point $v$, $\Pi_l(v)$ denotes the Euclidean projection of $v$ onto $\mathcal{C}_l$.
4.2 Consensus form
We can now reformulate our objective in a fully separable form. For ease of notation, the variable $x$ will be written $x^{0}$, and we define $g_0(x^{0}) = \sum_{r} g_r(x^{0}_r)$. We also define an additional variable, $\bar{z}$, that will represent the consensus value of $x$ found for each route over all the domains handling the route. We write $D(r)$ to designate the set of domain indices (including index 0) to which $r$ belongs. In the same fashion as in Section 3.2, we plug the feasibility constraints into the objective. Each link constraint being now handled separately, we can formulate Problem (3)-(4) as follows:
$$\min_{x} \;\; g_0(x) + \sum_{l \in \mathcal{L}} \iota_l(x). \qquad (8)$$
In order to obtain a separable objective and fully benefit from the separability property in Fact 1, we artificially create a copy of the variable $x$ for each link $l$. This variable will be handled by the unique domain containing $l$. For each $l \in \mathcal{L}$, let $x^{l}$ be the copy of $x$ for link $l$.
Creating a complete copy of all the variables for each domain is, nevertheless, of no use. Each domain indeed only needs information on, and manipulation of, the variables associated with the routes that it handles completely (the route is included in the domain's links) or partially (the route meets other domains). Now, $\iota_l$ actually depends only on the sub-variable $(x_r)_{r \ni l}$. We thus erase all the information that is irrelevant to link $l$: $x^{l} = (x^{l}_r)_{r \ni l}$. We can then write the objective as follows:
$$g_0(x^{0}) + \sum_{l \in \mathcal{L}} \iota_l(x^{l}). \qquad (9)$$
To sum up, we have artificially separated the objective function by creating a minimal number of copies of the primal variable in order to fully distribute the problem. Now, instead of a global resource allocation variable, several copies of the variable account for how its value is perceived by each link of each domain. To enforce an intra- (local) and inter- (global) domain consistent value of the appropriate allocation, consensus constraints are added to the problem. This new formulation can be interpreted as a multi-agent consensus problem where route $r$ has cost $g_r$, and link $l$ has cost $\iota_l$. As we separated the global objective on purpose, the separability property of the proximal operator thus gives the following:
$$\operatorname{prox}_{\lambda g_0}(v) = \big(\operatorname{prox}_{\lambda g_r}(v_r)\big)_{r \in \mathcal{R}}, \qquad \operatorname{prox}_{\lambda \iota_l} = \Pi_l.$$
These considerations permit us next to write our final distributed consensus model, where each agent only has access to local information.
4.3 Fast Distributed ADMM
We can finally distribute ADMM by putting into practice the tricks described in the previous section. The general consensus form of the problem can be expressed as follows:
$$\min \;\; g_0(x^{0}) + \sum_{l \in \mathcal{L}} \iota_l(x^{l}) \qquad (10)$$
$$\text{s.t.} \;\; x^{0}_r = x^{l}_r = \bar{z}_r, \qquad \forall r \in \mathcal{R}, \; \forall l \in r, \qquad (11)$$
where $\bar{z} = (\bar{z}_r)_{r \in \mathcal{R}}$ is the consensus variable. By applying ADMM to this formulation and using again Fact 1, we obtain, after some simplification, Algorithm 2 (Fast Distributed ADMM, FD-ADMM).
To update the consensus variables $\bar{z}_r$, we exploit the fact that the Euclidean projection of a point $(v_1, \ldots, v_n)$ onto the diagonal $\{(t, \ldots, t) : t \in \mathbb{R}\}$ is simply its average $\frac{1}{n}\sum_i v_i$. Hence, if $\iota_{\Delta}$ denotes the indicator function of the feasible set (11), $\operatorname{prox}_{\lambda \iota_{\Delta}}$ averages, for each route $r$, all the copies of $r$ (shifted by their scaled duals).
This yields the simple update rules at lines 5 and 11. (These update rules are further simplified using the straightforward fact that, for each route, the sum of the scaled dual variables over its copies is constant across iterations; it can thus be fixed to $0$ by initialization, so that $\bar{z}_r$ is simply the average of the copies of $x_r$.)
Notably, even in the distributed case, each domain can compute at each iteration a globally feasible allocation for each of the routes it handles (see Proposition 1).
Communication among domain controllers: In FD-ADMM, only domains that share a route have to communicate. The communication procedures among the domain controllers are described at lines 3 and 11. In these steps, the domains gather from, and broadcast to, adjacent domains the sole information related to the routes that they share. In particular, domains are blind to routes that do not traverse them, and can keep their internal routes secret from others. In detail, after each iteration of the algorithm, each domain receives the minimal information from other domains such that it is still able to compute, for each route it handles, a local value and a locally feasible value, and it sends these back to the neighboring domains that the route traverses.
Communication overhead: In terms of overhead, we can evaluate the number of floats transmitted between domains at each iteration. At each communication, domain $d$ must transmit its local and locally feasible values for each shared route $r \in \mathcal{R}_d$ to each other domain that $r$ traverses. The consensus variable $\bar{z}$ does not need to be centralized or transmitted between controllers: each domain controller may actually keep a copy and perform the (low-cost) computation of its update rule (see line 10 in Algorithm 2) locally. Hence, the total number of floats domain $d$ transmits to its peers is proportional to the number of its route-domain adjacencies, $\sum_{r \in \mathcal{R}_d} (|D(r)| - 1)$. This is comparable to a distributed implementation of the algorithm given in [12] and stated in Section 6, in which each domain also transmits a per-shared-route amount of information to the set of its peers.
Feasibility preservation: A potential drawback of the distributed approach is the potential feasibility violation by the iterate $x^{0}$. However, we have the following positive result.
Proposition 1.
FD-ADMM provides a sequence of feasible points that converges to the optimum.
Proof.
Consider iteration $k$ and drop the superscript for lightness. For any link $l$, we have by line 8 of Algorithm 2 that $x^{l}$ is feasible in link $l$, that is, $x^{l} \in \mathcal{C}_l$. Define $\bar{x}_r = \min_{l \in r} x^{l}_r$. Then, for each link $l$:
$$\sum_{r \ni l} \bar{x}_r \;\le\; \sum_{r \ni l} x^{l}_r \;\le\; c_l. \qquad (12)$$
Thus, no capacity is violated by the allocation $\bar{x}$. At the optimum, the consensus is reached and all copies coincide. Thus $(\bar{x}^{k})_k$ is a feasible sequence that converges to the optimum. ∎
The point $\bar{x}$ introduced in Proposition 1 above in fact corresponds to the variable of the same name in FD-ADMM. Thus, in a certain way, for sufficiently loaded and communicating domains (i.e., when the $|D(r)|$ are large enough), we sacrifice some overhead (counted on a per-iteration basis) compared to standard dual methods, but in exchange for anytime feasibility, a major feature that dual methods do not generically provide.
5 Implementation and algorithm tuning
In this section, we discuss two major points in the design of FD-ADMM. First, we justify the choice of the procedure Projection, in line 7 of FD-ADMM (Algorithm 2). Next, we derive an explicit adaptive update of the reciprocal penalty parameter $\lambda$ that permits accelerating the convergence of FD-ADMM on any instance.
5.1 Projection procedure: A discussion
In Section 4, we advocated a link-wise separation of the formulation because it is non-trivial to project an arbitrary point onto an arbitrary closed convex polyhedron. However, the projection onto the sets $\mathcal{C}_l$ (see Section 4.1) can be done with an exact method whose complexity is dominated by that of sorting a list of the size of its dimension; on average, sorting a list of length $n$ is done in $O(n \log n)$. Hence, by operating a link-by-link projection instead, the controllers save a large amount of time: they provide a (generically infeasible) approximate projection point and derive a locally feasible allocation from it (see Algorithm 2, line 11). Although the quality of the global iterate may be altered by further distribution of the projection, the point is quickly generated. Paradoxically enough, FD-ADMM therefore fully adapts to any network distribution into domains, because it functions link by link, regardless of the network partition into domains. The algorithm we use for Projection in FD-ADMM is presented, for instance, in [4], in which the authors also give a correctness proof and a performance demonstration. It permits providing an efficient update for each domain.
5.2 Estimating the optimal parameter
It is well known that the reciprocal penalty parameter $\lambda$ highly conditions the convergence speed of ADMM; an inaccurate tuning can indeed lead to very slow convergence. For appropriate problems, it is possible to use a result proven in [5] to compute an optimal reciprocal penalty parameter, which we report here. It will help us tune FD-ADMM to optimize its convergence performance. (We recall that a differentiable function $f$ is strongly convex with modulus $m$ if $f - \frac{m}{2}\|\cdot\|^{2}$ is convex, and that $\nabla f$ is Lipschitz with modulus $M$ if $\|\nabla f(x) - \nabla f(y)\| \le M\|x - y\|$ for all $x, y$.)
Theorem 1 ([5]).
Assume that the following problem
$$\min_{x_1,\, x_2} \;\; f_1(x_1) + f_2(x_2) \quad \text{s.t.} \quad A_1 x_1 + A_2 x_2 = b \qquad \text{(M)}$$
has a saddle point, and that both objective functions are convex. Assume that $A_1$ has full row rank, and that $f_1$ is strongly convex with modulus $m$ and has a Lipschitz gradient with modulus $M$. Then, the sequence of iterates (primal and dual concatenated) of ADMM converges linearly with a rate $1/(1+\delta)$, where $\delta > 0$ is given in closed form in [5] as a function of $m$, $M$, the smallest eigenvalue of the positive matrix $A_1 A_1^{T}$, the operator norm of $A_2$, and the penalty parameter $\rho$ in the augmented Lagrangian form (see Section 3.2).
The following result directly follows.
Corollary 1.
The optimal reciprocal penalty parameter is
$$\lambda^{\star} = \frac{1}{\sqrt{m M}},$$
where $m$ and $M$ are the strong convexity and Lipschitz gradient moduli of the objective.
In order to be able to apply Corollary 1, we still need to express the moduli of interest, $m$ and $M$.
Fact 2.
The function $g_r = -w_r U^{\alpha}$ is strongly convex and has a Lipschitz gradient, with moduli
$$m_r = \alpha\, w_r\, C^{-\alpha-1}, \qquad M_r = \alpha\, w_r\, d_r^{-\alpha-1},$$
on any compact subset of $\mathbb{R}_{+}$ of the form $[d_r, C]$, for $0 < d_r < C$.
Proof.
On $(0, +\infty)$, $g_r$ is twice differentiable with $g_r''(t) = \alpha w_r t^{-\alpha-1} > 0$, a decreasing function of $t$ (this covers both $\alpha = 1$, where $g_r = -w_r \log$, and $\alpha \ne 1$). Let $d_r \le s < t \le C$. By the mean value theorem, there exists $\theta \in (s, t)$ such that $g_r'(t) - g_r'(s) = g_r''(\theta)(t - s)$, and therefore
$$\alpha w_r C^{-\alpha-1}\,(t - s) \;\le\; g_r'(t) - g_r'(s) \;\le\; \alpha w_r d_r^{-\alpha-1}\,(t - s).$$
The left inequality is exactly strong convexity with modulus $m_r = \alpha w_r C^{-\alpha-1}$, and the right one is the Lipschitz property of $g_r'$ with modulus $M_r = \alpha w_r d_r^{-\alpha-1}$ on $[d_r, C]$. Both bounds are tight, as can be seen by letting $s, t$ tend to $C$ and to $d_r$, respectively. ∎
Unfortunately, Corollary 1 cannot be directly applied to our general consensus formulation: its matrix formulation does not provide a full-row-rank matrix $A_1$. The problem to which Theorem 1 applies is actually the original, centralized one in (3)-(4). Therefore, we derive a reciprocal penalty parameter selection for the centralized problem, and use it as a tool to estimate a satisfactory parameter for FD-ADMM.
However, the last difficulty we encounter in choosing the optimal reciprocal penalty parameter is to correctly evaluate the Lipschitz modulus. Unfortunately, $\nabla g$ is not Lipschitz on the feasibility set, because of the singularity of each $g_r'$ at $0$. In order to circumvent this problem, we introduce the classic concept of a disagreement point $d$, according to bargaining theory terminology. A disagreement point represents the minimal values for an allocation of each route. This allows reducing the feasibility set to a compact subset of the form $[d, C]$, on which $\nabla g$ is now Lipschitz. The disagreement point can naturally be defined as the feasible point computed at the first iteration. Generically, there is no a priori guarantee that this set contains the optimum, but we remark that, at least in the first iterations, the use of $d$ provides a good approximation of the best reciprocal penalty parameter. The analytical evaluation of this phenomenon goes beyond the scope of this paper and we keep it for future work.
Thus, finally, we update $\lambda$ in an adaptive fashion in the beginning of the algorithm with the help of those points. We found empirically that operating such an update only at the initial steps of FD-ADMM and then fixing $\lambda$ for the rest of the execution provides good performance in terms of convergence speed. In the next section, we describe this typical phenomenon in Figure 2. In all our simulations, we use the following simple update scheme to estimate the optimal penalty parameter at each execution of the algorithm.
Scheme (Reciprocal Penalty Adaptation).
Set a threshold number of iterations $K$. At all iterations $k < K$, denote by $\bar{x}^{k}$ the last output feasible point, used as disagreement point. Then, choose the new reciprocal penalty parameter as
$$\lambda^{k} = \frac{1}{\sqrt{m(\bar{x}^{k})\, M(\bar{x}^{k})}},$$
where $m(\cdot)$ and $M(\cdot)$ are evaluated as in Fact 2. After $K$ iterations, do not update $\lambda$ anymore.
In our numerical evaluations, we fix the threshold $K$ once and for all. FD-ADMM is thus now fully tuned, and we are ready to demonstrate its performance in the next section, in terms of convergence speed in real-time scenarios.
6 Performance analysis
We now evaluate FD-ADMM numerically in terms of its convergence properties. More specifically, in Section 6.1 we compare the performance of FD-ADMM and C-ADMM in offline scenarios where the optimum is desired. In Section 6.2 we evaluate FD-ADMM in real-time scenarios, where good and feasible solutions are needed on the fly as weights vary over time. In order to benchmark the transient properties of FD-ADMM, we use the standard Lagrangian dual decomposition approach (LAGR) for single-path routing in [21, 12, 16], which we recall in Algorithm 3. We assume here that domain controllers operate in synchronous mode. In this case, the decomposition into domains has no impact on FD-ADMM performance, as projection is on a link basis. All simulations are made for the proportional fairness objective functions ($\alpha = 1$). We used the proximal operation formulas found in [2]. The algorithms under investigation were evaluated using BT's 21 CN network topology (we would like to thank the authors of [12] for their willingness to share the BT 21 CN topology dataset), containing 106 nodes and 474 links. The requests were generated by computing the shortest path between randomly chosen sources and destinations.
6.1 Algorithm design
Evaluating the alleviation of the compute-intensive parts of C-ADMM was a key concern to motivate and validate the distribution into FD-ADMM. To this aim, we show in Figure 1 the computation time and iteration count of those two algorithms on small instances, for a number of requests ranging from 1 to 200. The centralized projection in C-ADMM is executed using the variation of Hildreth's projection algorithm on general polyhedra in [8]. When convergence is desired, a precise stopping criterion for FD-ADMM is available, as the optimality gap can be upper-bounded by the primal and dual residuals, see [2]. In our case, evaluating those residuals amounts to computing the absolute variation of two consecutive iterates and the consensus accuracy (any other norm can be chosen for this evaluation). This is a first advantage of the FD-ADMM implementation, as no robust stopping criterion is available for standard gradient descent. When an optimality gap is computed, we thus consider a high-accuracy approximation by FD-ADMM as the reference for all tested algorithms. In Figure 2, we illustrate, on a small instance with 200 requests, the number of iterations FD-ADMM needs to reach convergence for various values of the reciprocal penalty parameter, in order to evaluate our adaptive scheme's accuracy with respect to the empirically best parameter found. It shows that our approximation of $\lambda^{\star}$ is fairly satisfactory. Figure 1 shows that distributing the consensus over the links trades several more iterations for a reduction of the computation time by two orders of magnitude on small instances. Hence, the distribution does not seem to cost too much in convergence rate. Not surprisingly, the use of a central projection subroutine makes C-ADMM impossible to scale. The convergence criterion used in Figures 1 and 3 is modest. Finally, we plot a notable behavior of FD-ADMM in Figure 3: one can imagine a link between the convergence rate and the mean link load.
This conjecture requires further investigation that we keep for future work.
6.2 Comparison against Lagrangian method
We now compare the proposed FD-ADMM algorithm against the classic LAGR Algorithm 3, see [21, 12, 16]. To this aim, we perform two experiments, in real-time and static scenarios, respectively.
We start by evaluating the real-time responsiveness of FDADMM by considering a small scenario where 200 routes are established and the weights vary over discrete time t, following the formula:

w_r(t+1) = w_r(t)(1 + u),  u ∈ [−A, A],

where at each event t, u is chosen uniformly within the above interval, in which A determines the amplitude of the weight variation. In Figure 4 we illustrate the average optimality gap achieved by the two algorithms over 20 events, with 10 iterations between each event. We observe that FDADMM outperforms LAGR in terms of optimality gap, although the performance of both algorithms is fairly acceptable. However, remarkably, FDADMM always remains feasible, whereas LAGR constantly violates the constraints as weights change in real time. Figure 5 shows the percentage of constraints of the problem that are violated for each value of the amplitude A. In fact, LAGR iteratively approaches the fair resource allocation from outside the feasible set. This drawback is commonly amended by projecting the solution onto the feasible set. However, this is not doable in our distributed setting, as the projection is a costly on-the-fly operation that requires full topological information. For such reasons, we claim that the standard LAGR algorithm is not well suited to computing real-time fair allocations in a distributed SDN setting.
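The fact that dual iterates approach the optimum from outside the feasible set can be reproduced on a toy example. The sketch below applies a LAGR-style dual gradient method to a single-link weighted proportional-fairness problem; the weights, capacity, and step size are illustrative, and the single-link model is a deliberate simplification:

```python
import numpy as np

# Toy single-link proportional-fairness problem solved LAGR-style:
#   maximize sum_i w_i * log(x_i)  subject to  sum_i x_i <= C.
# The primal iterate maximizes the Lagrangian in closed form
# (x_i = w_i / price), so it satisfies the capacity constraint
# only once the link price has converged.
w = np.array([1.0, 2.0, 3.0])  # request weights (hypothetical)
C = 10.0                       # link capacity
price = 0.1                    # initial dual variable: link priced too cheap
step = 0.05                    # dual gradient step size
violations = 0
for _ in range(200):
    x = w / price                      # primal update: argmax of Lagrangian
    if x.sum() > C:                    # intermediate iterate is infeasible
        violations += 1
    price = max(price + step * (x.sum() - C), 1e-6)  # projected dual ascent

# Early iterates over-allocate the link; the final one is near-feasible.
```

Until the price converges to its optimal value, the closed-form primal iterate overshoots the capacity; a projection step would restore feasibility, but it requires full topological information.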
In our last experiment we test the two algorithms in a static scenario, where the weights do not vary over time and LAGR has enough time to find at least one feasible solution. In Figure 6 we compare the optimality gap of the best feasible solutions found after 5 seconds of runtime by FDADMM and LAGR, for different instance sizes over the BT topology. We observe that FDADMM obtains a close-to-optimal feasible solution for all instance sizes (from 100 to 6000 requests), while LAGR remains far from the optimum, especially when the instance becomes large.
To recap, in this section we have demonstrated experimentally that FDADMM reacts quickly to unpredictable network variations while preserving the feasibility of the iteratively computed solutions. We therefore claim that FDADMM is a good candidate for real-time fair resource allocation in distributed SDN scenarios.
7 Conclusions and future work
In this paper we addressed the real-time fair resource allocation problem in the context of a distributed SDN control plane architecture. Our main contribution is the design of a distributed algorithm that continuously generates a sequence of feasible solutions and adapts to any partitioning of the network into domains. We reformulated the fair resource allocation problem as a general consensus problem to derive the FDADMM algorithm. This algorithm can be massively parallelized on several processors that manage different regions of the network, hence fully benefiting from the computing resources of SDN controllers in distributed architectures. We also provided a strategy for a near-optimal estimation of the penalty parameter of FDADMM that boosts its convergence. Finally, we compared FDADMM to a standard dual Lagrangian decomposition method (LAGR) and demonstrated that the former is better suited to a real-time situation where bandwidth has to be adjusted on the fly. In fact, FDADMM ensures a smaller optimality gap from the very first iterations and, most importantly, produces a feasible fair allocation at every iteration.
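The general-consensus pattern underlying this reformulation can be sketched on a toy problem. Below, each agent i holds a quadratic local objective f_i(x) = (x − a_i)², an illustrative stand-in for the per-domain fairness objectives; this is the generic global-consensus ADMM iteration, not the exact per-link FDADMM updates:

```python
import numpy as np

# Global-consensus ADMM (scaled form) for min_x sum_i (x - a_i)^2:
# each agent keeps a local copy x_i, and all copies are driven to
# agree on the consensus variable z. The x-update solves in closed
# form: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2.
a = np.array([1.0, 2.0, 6.0])   # illustrative per-agent data
rho = 1.0                       # penalty parameter
x = np.zeros_like(a)            # local variables (one per agent)
u = np.zeros_like(a)            # scaled dual variables
z = 0.0                         # consensus variable

for _ in range(300):
    x = (2 * a + rho * (z - u)) / (2 + rho)  # parallel local updates
    z = np.mean(x + u)                        # gather: consensus update
    u = u + x - z                             # dual update, per agent

print(z)  # prints 3.0: mean(a), the global minimizer
```

The x-update runs in parallel, one agent at a time step, which is what allows the computation to be distributed across domain controllers; the z- and u-updates only require the local variables to be gathered or exchanged.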
As a next step, we plan to adapt our formulation to the case where multiple candidate paths are available for each request. Moreover, we plan to run FDADMM asynchronously while still guaranteeing a near-optimal convergence rate and anytime feasibility.
References
 [1] Dimitri P Bertsekas, Robert G Gallager, and Pierre Humblet. Data networks, volume 2. Prentice-Hall International Series, 1992.

 [2] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
 [3] Anna Charny, Raj Jain, and David Clark. Congestion control with explicit rate indication. In Proc. of IEEE ICC, 1995.
 [4] Yunmei Chen and Xiaojing Ye. Projection onto a simplex. arXiv preprint arXiv:1101.6081, 2011.
 [5] Wei Deng and Wotao Yin. On the global and linear convergence of the generalized alternating direction method of multipliers. Journal of Scientific Computing, 66(3):889–916, 2016.
 [6] Soheil Hassas Yeganeh and Yashar Ganjali. Kandoo: a framework for efficient and scalable offloading of control applications. In Proc. of ACM HotSDN, 2012.
 [7] Bingsheng He and Xiaoming Yuan. On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method. SIAM J. Numer. Anal., 50(2):700–709, April 2012.
 [8] Alfredo N Iusem and Alvaro R De Pierro. A simultaneous iterative method for computing projections on polyhedra. SIAM Journal on Control and Optimization, 25(1):231–243, 1987.
 [9] Frank P Kelly, Aman K Maulloo, and David KH Tan. Rate control for communication networks: shadow prices, proportional fairness and stability. Journal of the Operational Research Society, 49(3):237–252, 1998.
 [10] Diego Kreutz, Fernando MV Ramos, Paulo Esteves Verissimo, Christian Esteve Rothenberg, Siamak Azodolmolky, and Steve Uhlig. Software-defined networking: A comprehensive survey. Proc. of the IEEE, 103(1):14–76, 2015.
 [11] Jelena Marasevic, Clifford Stein, and Gil Zussman. A fast distributed stateless algorithm for alpha-fair packing problems. In Proc. of ICALP, volume 55, pages 54–1, 2016.
 [12] Bill McCormick, Frank Kelly, Patrice Plante, Paul Gunning, and Peter Ashwood-Smith. Real time alpha-fairness based traffic engineering. In Proc. of ACM HotSDN, pages 199–200, 2014.
 [13] Jeonghoon Mo and Jean Walrand. Fair end-to-end window-based congestion control. IEEE/ACM Transactions on Networking (ToN), 8(5):556–567, 2000.
 [14] João FC Mota, João MF Xavier, Pedro MQ Aguiar, and Markus Püschel. Distributed ADMM for model predictive control and congestion control. In Proc. of IEEE CDC, 2012.
 [15] Udayasree Palle, Dhruv Dhody, Ravi Singh, Luyuan Fang, and Rakesh Gandhi. PCEP Extensions for MPLS-TE LSP Automatic Bandwidth Adjustment with Stateful PCE. Internet-Draft draft-dhody-pce-stateful-pce-auto-bandwidth-09, Internet Engineering Task Force, November 2016. Work in Progress.
 [16] Daniel Pérez Palomar and Mung Chiang. A tutorial on decomposition methods for network utility maximization. IEEE Journal on Selected Areas in Communications, 24(8):1439–1451, 2006.
 [17] Fabian Skivée and Guy Leduc. A distributed algorithm for weighted max-min fairness in MPLS networks. In International Conference on Telecommunications, pages 644–653. Springer, 2004.
 [18] William Stallings. Software-defined networks and OpenFlow. The Internet Protocol Journal, 16(1):2–14, 2013.
 [19] Rajesh Sundaresan et al. An iterative interior point network utility maximization algorithm. arXiv preprint arXiv:1609.03194, 2016.
 [20] Steven J Vaughan-Nichols. OpenFlow: The next generation of the network? Computer, 44(8):13–15, 2011.
 [21] Thomas Voice. Stability of multipath dual congestion control algorithms. In Proc. of Valuetools, page 56. ACM, 2006.