Distributed Multi-resource Allocation with Little Communication Overhead

We propose a distributed algorithm to solve a special distributed multi-resource allocation problem with no direct inter-agent communication. We do so by extending a recently introduced additive-increase multiplicative-decrease (AIMD) algorithm, which requires very little communication between the system and the agents. Namely, a control unit broadcasts a one-bit signal to the agents whenever one of the allocated resources exceeds its capacity, and the agents respond to this signal in a probabilistic manner. In the proposed algorithm, each agent is unaware of the resource allocations of the other agents. We also propose a version of the AIMD algorithm for multiple binary resources (e.g., parking spaces). Binary resources are indivisible unit-demand resources, of which each agent is allocated either one unit or none. In empirical results, we observe that in both cases the average allocations converge over time to the optimal allocations.




1 Introduction

Distributed optimization has numerous applications in many different areas, including sensor networks, the Internet of Things, the smart grid, and smart transportation. In many instances, networks of agents achieve an optimal allocation of resources through regular communication with each other and/or with a control unit. Details of such distributed optimization problems can be found in (among others) [18], [10], [25], [15], [4], [6], [11], [22], [24] and the papers cited therein.

In some applications, groups of multiple coupled resources must be allocated among competing agents. Generally speaking, such problems are more difficult to solve in a distributed manner than those with a single resource. This is particularly true when communication between agents is constrained, either through limitations of the communication infrastructure or due to privacy considerations. Motivated by such applications, we wish to find an algorithm that is tailored to such scenarios and that requires no inter-agent communication. Our starting point is [25], [23]. The authors of these papers demonstrate that simple algorithms from Internet congestion control can be used to solve certain optimization problems. Our contribution here is to demonstrate that the ideas therein extend to a much broader (and more useful) class of optimization problems.

Roughly speaking, the iterative distributed optimization algorithm in [25] works as follows. Agents continuously acquire an increasing share of the shared resource. When the aggregate agent demand exceeds the total capacity of the resource, the control unit sends a one-bit capacity event notification to all competing agents, and the agents respond in a probabilistic manner to reduce demand. By judiciously selecting the probabilistic manner in which agents respond, a portfolio of optimization problems can be solved in a stochastic and distributed manner. Our proposed algorithm extends this previous work. It builds on the choice of probabilistic response strategies described therein, but generalizes the approach to deal with multiple resource constraints. The communication overhead of our proposed solutions is quite low, and the communication complexity is independent of the number of agents competing for resources.

To be precise, suppose $n$ agents compete for $m$ divisible resources with capacities $\mathcal{C}^1, \dots, \mathcal{C}^m$, respectively. We use $i \in \{1, \dots, n\}$ as an index for agents and $j \in \{1, \dots, m\}$ to index the resources. Each agent $i$ has a cost function $f_i : \mathbb{R}^m \to \mathbb{R}$, which associates a cost to a certain allotment of resources and which may depend on the agent. We assume that $f_i$ is twice continuously differentiable, convex, and increasing in all variables, for all $i$. For all $i$ and $j$, we denote by $x_i^j$ the amount of resource $j$ allocated to agent $i$. We are interested in the following optimization problem of multi-resource allocation:

$$\min_{x} \sum_{i=1}^{n} f_i(x_i^1, \dots, x_i^m) \quad \text{subject to} \quad \sum_{i=1}^{n} x_i^j = \mathcal{C}^j, \quad x_i^j \ge 0, \quad j = 1, \dots, m. \tag{1}$$
Note that there are $nm$ decision variables in this optimization problem. We denote the solution to the minimization problem by $x^* = (x_i^{j*})$, where $i = 1, \dots, n$ and $j = 1, \dots, m$. By compactness of the constraint set, optimal solutions exist. We also assume strict convexity of the cost functions $f_i$, so that the optimal solution is unique.

We propose iterative schemes that achieve optimality for the long-term average of allocations. Let $\mathbb{N}$ denote the set of natural numbers and $k \in \mathbb{N}$ the time steps. To this end, we denote by $x_i^j(k)$ the amount of resource $j$ allocated to agent $i$ at the (discrete) time step $k$. The average allocation is (for $i = 1, \dots, n$ and $j = 1, \dots, m$)

$$\overline{x}_i^j(k) = \frac{1}{k+1} \sum_{t=0}^{k} x_i^j(t). \tag{2}$$
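The average in (2) can be maintained incrementally rather than recomputed from the full history at each step. The following minimal sketch (the function name is ours, not the paper's) shows the standard recursive form:

```python
# Running long-term average, as in Eq. (2): mean after k + 1 samples.
def update_average(avg: float, x_new: float, k: int) -> float:
    """avg: mean of the first k allocations (k >= 0); returns the
    mean after also observing the (k+1)-th allocation x_new."""
    return avg + (x_new - avg) / (k + 1)

# The incremental form agrees with the direct sum in (2):
xs = [4.0, 2.0, 6.0]
avg = 0.0
for k, x in enumerate(xs):
    avg = update_average(avg, x, k)
assert abs(avg - sum(xs) / len(xs)) < 1e-12
```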
In the sequel, two main problems are addressed. A common feature of both is that the agents do not communicate with each other; the only information transmitted by the control unit is the occurrence of a capacity event. The problems are:

  • (Divisible Resources) In this setting, treated in Section 4, it is assumed that resources are divisible and agents can obtain any amount of resource $j$ in $[0, \mathcal{C}^j]$. The problem is to derive an iterative scheme such that

$$\lim_{k \to \infty} \overline{x}_i^j(k) = x_i^{j*}, \quad \text{for all } i, j. \tag{3}$$
  • (Unit-Demand Resources) In this setting, treated in Section 5, only $0$ or $1$ unit of the resource can be allocated to an agent at any given time step. In the long-term average, which is defined as in (2), we may still achieve a non-integer optimal point, but the allocation at each particular time step becomes more involved. The aim is still to achieve the optimal point in the long-term average, as defined in (3).

In both settings, we use the consensus of the derivatives of the cost functions of all agents competing for a particular resource to show optimality. Suppose $x_i^* = (x_i^{1*}, \dots, x_i^{m*})$ denotes the optimal allocations of agent $i$ for all resources in the system. We say that the derivatives with respect to resource $j$ are in consensus if

$$\frac{\partial f_1}{\partial x^j}(x_1^*) = \frac{\partial f_2}{\partial x^j}(x_2^*) = \dots = \frac{\partial f_n}{\partial x^j}(x_n^*). \tag{4}$$
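This consensus condition lends itself to a simple numerical check on simulation output: the spread of the derivatives across agents should shrink toward zero. A small illustrative helper (the name and tolerance are ours, not the paper's):

```python
# Numeric check of the consensus condition (4): at optimality, the
# partial derivatives w.r.t. a given resource agree across all agents.
def derivatives_in_consensus(grads, tol=1e-6):
    """grads: list of d_j f_i evaluated at each agent's average
    allocation; returns True when their spread is within tol."""
    return max(grads) - min(grads) <= tol

assert derivatives_in_consensus([2.0, 2.0, 2.0])
assert not derivatives_in_consensus([2.0, 2.5])
```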
As brief background, the allocation of indivisible unit-demand resources is an active area of research, which goes back to the work of Koopmans and Beckmann [17]. For details of recent work on allocation techniques for indivisible unit-demand resources, the interested reader may consult [2] and the papers cited therein. The allocation of unit-demand resources in Section 5 is a generalization of [12], but differs in the sense that the cost functions and constraints used here depend on the allocation of multiple indivisible unit-demand resources and are proposed for general application settings. The proposed algorithm works as follows: the control unit broadcasts a normalization signal to all agents in the system from time to time, updating this signal using the utilization of the resources at the previous time step. After receiving this signal, the algorithm of each agent responds in a probabilistic manner, either demanding the resource or not. This process repeats over time.

2 Motivation

As in [25], we use a modified additive-increase multiplicative-decrease (AIMD) algorithm. By way of background, the AIMD algorithm was proposed in the context of congestion avoidance in the transmission control protocol (TCP) [7]. It involves two phases: the first is the additive-increase phase, and the second is the multiplicative-decrease phase. In the additive-increase phase, a congestion window size (resource allocation) increases linearly until an agent is informed that there is no more resource available. We call this a capacity event. Upon notification of a capacity event, the window size is reduced abruptly. This is called the multiplicative-decrease phase. Motivated by this basic algorithm, we propose a modified algorithm for solving a class of optimization problems. Here, as in AIMD, agents keep demanding the resources of different types until a capacity event notification is sent to them by the control unit. However, after receiving a capacity event notification, the agents toss a coin to determine whether or not to reduce their resource demand abruptly. In our context, the probability of responding is selected in a manner that ensures that our algorithm asymptotically solves an optimization problem. The precise manner in which these probabilities are selected is explained in Section 4.


The AIMD algorithm has been explored and used in many application areas. See, for example, the recent book by Corless et al. [8] for an overview of some applications; the paper [25] for distributed optimization applications; [9] for microgrid applications; and [5] for multimedia applications. The recent literature is also rich with algorithms designed for distributed control and optimization applications. Significant contributions have been made in many communities, including networking, applied mathematics, and control engineering. While this body of work is too numerous to enumerate, we point the interested reader to the works of Nedic [20, 19]; Cortes [16]; Jadbabaie and Morse [14]; Bullo [21]; Pappas [13]; Bertsekas [3]; and Tsitsiklis [4] for recent contributions. From a technical perspective, much of the recent attention has focused on distributed primal-dual methods and the application of techniques based on the alternating direction method of multipliers (ADMM). A survey of some of this related work is given in [25]. Our proposed algorithm is motivated by the fact that we are interested in allocating multiple resource types. Such a need arises in several areas, some of which are described below.

Example 2.1 (Cloud computing).

In cloud computing, on-demand access is provided to distributed facilities such as computing resources [1]. Resources are shared among users, and each user gets a fraction of these resources over time. For example, companies may compete for both memory and CPU cycles in such applications. In practice, users of cloud services do not interact with each other, and their demands are known only to themselves (due to privacy concerns when different, perhaps competing, companies use the same shared resources). In such a setting, our proposed algorithm can be useful for allocating resources in a way that requires little communication with the control unit and no inter-user communication.

Example 2.2 (Car sharing).

Consider now a situation where a city sets aside a number of free (no monetary cost) parking spaces and charge points to service the needs of car-sharing clubs. An example of a city that implements such a policy is Dublin, Ireland. Now suppose that a number of clubs compete for such resources via contracts. The city must decide which spaces, and which charge points, to allocate to each club. Clearly, in such a situation, resources should be allocated in a distributed manner that preserves the privacy of the individual companies, but which also maximizes the benefit to the municipality.

3 Preliminaries

In addition to the notation already introduced, we let $\mathbb{R}$ denote the set of real numbers and $\mathbb{R}_+$ the set of non-negative real numbers. For a sufficiently smooth function $f : \mathbb{R}^m \to \mathbb{R}$, we denote by $\partial_j f$ the $j$th partial derivative of $f$, for $j = 1, \dots, m$, and the Hessian of $f$ is denoted by $\nabla^2 f$.

3.1 A primer on AIMD

The AIMD algorithm is of interest because it can be tuned to achieve an optimal distribution of a single resource among a group of agents. To this end, no inter-agent communication is necessary. The agents merely receive capacity signals from a central unit and respond to them in a stochastic manner. This response can be tuned so that the long-term average optimality criterion (cf. (3)) is achieved.

In AIMD, each agent follows two rules of action at each time step: either it increases its share of the resource by adding a fixed amount while total demand is less than the available capacity, or it reduces its share in a multiplicative manner when notified that global capacity has been reached. In the additive-increase (AI) phase of the algorithm, agents probe the available capacity by continually increasing their share of the resource. The multiplicative-decrease (MD) phase occurs when agents are notified that the capacity limit has been reached; they respond by reducing their shares, thereby freeing up the resource for further distribution. This pattern is repeated by every agent as long as it is competing for the resource. The only information given to the agents about the availability of the resource is a notification when the collective utilization of the resource reaches some capacity constraint. At such times, so-called capacity events, some or all agents are instantaneously informed that capacity has been reached. The mathematical description of the basic continuous-time AIMD model is as follows. Assume $n$ agents, and denote the share of the collective resource obtained by agent $i$ at time $t$ by $x_i(t)$. Denote by $\mathcal{C}$ the total capacity of the resource available to the entire system (which need not be known by the agents). The capacity constraint requires that $\sum_{i=1}^{n} x_i(t) \le \mathcal{C}$ for all $t$. As all agents are continuously increasing their shares, this capacity constraint is eventually reached. We denote the times at which this happens by $t_k$, $k \in \mathbb{N}$. At time $t_k$ the global utilization of the resource reaches capacity, thus

$$\sum_{i=1}^{n} x_i(t_k) = \mathcal{C}.$$
When capacity is achieved, some agents decrease their share of the resource. The instantaneous decrease of the share of agent $i$ is defined by

$$x_i(t_k^+) = \beta_i \, x_i(t_k),$$

where $\beta_i$ is a constant satisfying $0 < \beta_i < 1$. In the simplest version of the algorithm, agents are assumed to increase their shares at a constant rate in the AI phase:

$$\dot{x}_i(t) = \alpha_i,$$

where $\alpha_i > 0$ is a positive constant, which may be different for different agents; $\alpha_i$ is known as the growth rate for agent $i$. Writing $x_i(k) := x_i(t_k)$ for the $i$th agent's share at the $k$th capacity event, we have

$$x_i(k+1) = \beta_i \, x_i(k) + \alpha_i \, T(k),$$

where $T(k)$ is the time between events $k$ and $k+1$. There are situations where not all agents respond to every capacity event. Indeed, this is precisely the case considered in this paper. In this case agents respond asynchronously to a congestion notification, and the AIMD model is easily extended within our previous formalism by changing the multiplicative factor $\beta_i$ to $1$ at the $k$th capacity event if agent $i$ does not decrease.
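The discrete-event recursion above can be simulated in a few lines. The following is an illustrative sketch, not the paper's implementation: each agent adds a fixed increment per step, and at a capacity event it backs off with an individual probability, covering the asynchronous case just described. All names (`alpha`, `beta`, `p`) are ours.

```python
import random

# Minimal discrete-time AIMD sketch for one divisible resource:
# AI phase adds alpha[i] per step; at a capacity event (total demand
# >= C, signalled by a one-bit broadcast) agent i backs off to
# beta[i] * x[i] with probability p[i], otherwise it keeps its share.
def aimd_step(x, alpha, beta, p, C, rng):
    if sum(x) >= C:          # capacity event
        return [beta[i] * xi if rng.random() < p[i] else xi
                for i, xi in enumerate(x)]
    return [xi + alpha[i] for i, xi in enumerate(x)]  # additive increase

rng = random.Random(0)
x = [0.0, 0.0, 0.0]
alpha, beta, p = [0.1, 0.2, 0.3], [0.5, 0.5, 0.5], [1.0, 1.0, 1.0]
for _ in range(5_000):
    x = aimd_step(x, alpha, beta, p, 10.0, rng)
# Total demand oscillates around (never far above) the capacity C = 10.
assert 0.0 < sum(x) <= 10.0 + sum(alpha)
```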

4 Divisible multi-resource allocation

Let $\gamma > 0$ be a fixed constant, and denote by $\mathcal{F}$ the corresponding set of twice-continuously differentiable functions that are convex and increasing in each coordinate, with derivative bounds determined by $\gamma$. We consider the problem of allocating $m$ divisible resources with capacities $\mathcal{C}^j$, for $j = 1, \dots, m$, among $n$ competing agents whose cost functions belong to the set $\mathcal{F}$. Each cost function is private and should be kept private. However, we assume that the set $\mathcal{F}$ is common knowledge: the control unit needs knowledge of $\mathcal{F}$, and the agents need to have cost functions from this set.

Recall that $x^*$ is the solution of (1). We propose a distributed algorithm that determines instantaneous allocations $x_i^j(k)$, for all $i$, $j$, and $k$. We also show empirically that for every agent $i$ and resource $j$, the long-term average allocation converges to the optimal allocation, i.e., $\overline{x}_i^j(k) \to x_i^{j*}$ as $k \to \infty$ (cf. (3)), achieving the minimum social cost.

4.1 Algorithm

Each agent runs a distinct distributed AIMD algorithm. We use $\alpha^j$ to represent the additive-increase factor, or growth rate, and $\beta^j$ to represent the multiplicative-decrease factor, both corresponding to resource $j$, for $j = 1, \dots, m$; these parameters are uniform among all agents. Every algorithm is initialized with the same set of parameters $\alpha^j$, $\beta^j$, $\Gamma^j$ received from the control unit of the system. The constant $\Gamma^j$ is chosen based on knowledge of the fixed constant $\gamma$ according to (10) to scale probabilities. We represent the one-bit capacity constraint event signal for resource $j$ at time step $k$ by $S^j(k) \in \{0, 1\}$, for all $j$ and $k$. At the start, the control unit initializes the capacity constraint event signals with $S^j(0) = 0$, and updates $S^j(k) = 1$ when the total allocation exceeds the capacity of resource $j$ at time step $k$. After each update, the control unit broadcasts $S^j(k)$ to the agents in the system, signaling that the total demand has exceeded the capacity of resource $j$. We describe the algorithm of the control unit in Algorithm 1.

Input: $\mathcal{C}^j$, for $j = 1, \dots, m$. Output: $S^j(k)$, for $j = 1, \dots, m$, $k \in \mathbb{N}$. Initialization: $S^j(0) \leftarrow 0$, for $j = 1, \dots, m$; broadcast $S^j(0)$; foreach $k \in \mathbb{N}$ do
       foreach $j = 1, \dots, m$ do
             if $\sum_{i=1}^{n} x_i^j(k) > \mathcal{C}^j$ then
                    $S^j(k) \leftarrow 1$; broadcast $S^j(k)$;
             end if
       end foreach
end foreach
Algorithm 1 Algorithm of control unit
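The control unit amounts to one threshold check per resource per step. A hypothetical sketch of a single step (the names `control_unit_step` and `demands` are ours, not the paper's):

```python
# Sketch of one step of the control unit (Algorithm 1): observe total
# demand per resource and emit the one-bit signals S^j for this step.
def control_unit_step(demands, capacities):
    """demands[i][j]: demand of agent i for resource j at this step.
    Returns the broadcast bits S = (S^1, ..., S^m)."""
    m = len(capacities)
    totals = [sum(d[j] for d in demands) for j in range(m)]
    return [1 if totals[j] > capacities[j] else 0 for j in range(m)]

S = control_unit_step([[3.0, 1.0], [2.5, 0.5]], capacities=[5.0, 2.0])
assert S == [1, 0]  # resource 1 over capacity (5.5 > 5.0), resource 2 not
```

Note that the broadcast is the bit vector alone; no per-agent state ever travels back from the control unit, which is what keeps the overhead independent of $n$.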

The algorithm of each agent works as follows. At every time step, the algorithm updates its demand for resource $j$ in one of the following ways: an additive-increase (AI) or a multiplicative-decrease (MD) phase. In the additive-increase phase, the algorithm increases its demand for resource $j$ linearly by the constant $\alpha^j$ until it receives a capacity constraint event signal $S^j(k) = 1$ from the control unit at time step $k$; that is,

$$x_i^j(k+1) = x_i^j(k) + \alpha^j, \quad \text{if } S^j(k) = 0. \tag{6}$$
After receiving the capacity constraint event signal (in the multiplicative-decrease phase), each agent, based on the probability $\lambda_i^j(k)$, either responds to the capacity event by reducing its demand for the resource or does not respond at the next time step, with the goal that the resulting average allocation profile converges to the optimal allocations $x_i^{j*}$, for all agents $i$ and resources $j$ in the system. If $S^j(k) = 1$, we thus have

$$x_i^j(k+1) = \begin{cases} \beta^j \, x_i^j(k) & \text{with probability } \lambda_i^j(k), \\ x_i^j(k) & \text{with probability } 1 - \lambda_i^j(k). \end{cases} \tag{7}$$
The probability $\lambda_i^j(k)$ depends on the average allocation $\overline{x}_i^j(k)$ and the derivative of the cost function of agent $i$ with respect to $x^j$, for all $i$ and $j$. It is calculated as follows:

$$\lambda_i^j(k) = \Gamma^j \, \frac{\overline{x}_i^j(k)}{\partial_j f_i(\overline{x}_i(k))}, \tag{8}$$

for all $i$, $j$, and $k$. This process repeats: after the reduction of consumption, all agents can again start to increase their consumption until the next capacity event occurs. It is obviously required that $\lambda_i^j(k) \in (0, 1]$ always. To this end, the normalization factor $\Gamma^j$ is needed, which is based on the set $\mathcal{F}$. The fixed constant $\gamma$ is chosen such that $\Gamma^j$ satisfies

$$0 < \Gamma^j \, \frac{\overline{x}_i^j(k)}{\partial_j f_i(\overline{x}_i(k))} \le 1, \quad \text{for all } i, j, k. \tag{9}$$

At the beginning of the algorithm, the normalization factor $\Gamma^j$ for resource $j$ is calculated explicitly according to (10) and broadcast to all agents in the system.
To capture the stochastic nature of the response to the capacity signal, we define the Bernoulli random variables

$$B_i^j(k) \sim \mathrm{Bernoulli}\big(\lambda_i^j(k)\big),$$

for all $i$, $j$, and $k$. It is assumed that this set of random variables is independent.

Theorem 4.1.

For a given $\gamma > 0$, if the normalization factor $\Gamma^j$ satisfies (9) and the cost function $f_i$ of agent $i$ belongs to $\mathcal{F}$, then for all $i$, $j$, and $k$, the probability $\lambda_i^j(k)$ satisfies $\lambda_i^j(k) \in (0, 1]$.


Proof. Consider that $\overline{x}_i^j(k) > 0$ and $\partial_j f_i(\overline{x}_i(k)) > 0$, for all $i$, $j$, and $k$; then from (7) we obtain (12). Given that $\overline{x}_i^j(k) > 0$, dividing (12) by $\overline{x}_i^j(k)$ yields (13). Recall that for the fixed constant $\gamma$, the normalization factor $\Gamma^j$ satisfies (9), for all $j$. Therefore, using this in (13), we obtain (14). Since, for all $i$ and $j$, an agent decides to respond to the capacity event of resource $j$ with the probability $\lambda_i^j(k)$ given in (8), substituting this in (14) we deduce that $\lambda_i^j(k) \in (0, 1]$, as claimed. ∎

Figure 1: Block diagram of the proposed AIMD model

The system is described as a block diagram in Figure 1, and the proposed distributed multi-resource allocation algorithm for each agent is described in Algorithm 2.

Input: $\alpha^j$, $\beta^j$, $\Gamma^j$, for $j = 1, \dots, m$, and $S^j(k)$, for $j = 1, \dots, m$, $k \in \mathbb{N}$. Output: $x_i^j(k)$, for $j = 1, \dots, m$, $k \in \mathbb{N}$. Initialization: $x_i^j(0) \leftarrow 0$ and $\overline{x}_i^j(0) \leftarrow 0$, for $j = 1, \dots, m$; foreach $k \in \mathbb{N}$ do
       foreach $j = 1, \dots, m$ do
             if $S^j(k) = 1$ then
                    $\lambda_i^j(k) \leftarrow \Gamma^j \, \overline{x}_i^j(k) / \partial_j f_i(\overline{x}_i(k))$; generate an independent Bernoulli random variable $B_i^j(k)$ with parameter $\lambda_i^j(k)$; if $B_i^j(k) = 1$ then
                          $x_i^j(k+1) \leftarrow \beta^j \, x_i^j(k)$;
                    end if
             else
                    $x_i^j(k+1) \leftarrow x_i^j(k) + \alpha^j$;
             end if
       end foreach
end foreach
Algorithm 2 Algorithm of agent $i$ (AIMD)
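One step of the agent-side update can be sketched as follows, assuming the response probability of (8). The helper `grad` is our illustrative stand-in for the agent's private partial derivative $\partial_j f_i$; all other names are also ours.

```python
import random

# Illustrative sketch of one step of Algorithm 2 over m resources.
def agent_step(x, avg, k, S, alpha, beta, Gamma, grad, rng):
    """x, avg: current allocation and long-term average per resource;
    S: one-bit capacity signals; grad(avg, j): stand-in for d_j f_i."""
    m = len(x)
    x_new = list(x)
    for j in range(m):
        if S[j] == 1:                              # capacity event on j
            lam = Gamma[j] * avg[j] / grad(avg, j)  # response prob. (8)
            if rng.random() < lam:                 # respond: back off
                x_new[j] = beta[j] * x[j]
        else:                                      # additive increase
            x_new[j] = x[j] + alpha[j]
    # incremental update of the long-term average allocation, cf. (2)
    avg_new = [avg[j] + (x_new[j] - avg[j]) / (k + 1) for j in range(m)]
    return x_new, avg_new
```

Note the privacy property: the only quantities entering the update are the agent's own state and the broadcast bits `S`; the agent never observes the allocations of others.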

We observe in the experiments of Section 4.2 that the average allocation of resource $j$ to agent $i$ converges over time to the optimal value $x_i^{j*}$, for all $i$ and $j$.

Remark 4.2.

Suppose there are $m$ resources in the system; then the communication overhead is at most $m$ bits per time step, for all $k$. In the worst-case scenario this is $m$ bits per time unit, which is quite low. Furthermore, the communication complexity does not depend on the number of agents in the system.

4.2 Experiments

For convenience we use only two resources, $R_1$ and $R_2$, in the experiment. We denote by $x_i^1(k)$ the allocation of $R_1$ and by $x_i^2(k)$ the allocation of $R_2$ to agent $i$ at time step $k$, for all agents competing for the resources in the system. The initial states of all agents for resources $R_1$ and $R_2$ are initialized to zero. For resource $R_1$ we chose an additive-increase factor $\alpha^1$ and a multiplicative-decrease factor $\beta^1$, and for resource $R_2$ we chose $\alpha^2$ and $\beta^2$, respectively; the normalization factors are $\Gamma^1$ and $\Gamma^2$, and the capacities of $R_1$ and $R_2$ are $\mathcal{C}^1$ and $\mathcal{C}^2$, respectively. Uniformly distributed random variables are used in the cost function to generate random costs for each agent at different time steps.

Figure 2: Evolution of the average allocation of resources. Figure 3: Instantaneous allocation of resources over the last time steps.

The following are some of the results obtained from the experiment. We select a few agents randomly to plot the figures, and mention the legend wherever necessary. It is observed in Figure 2 that the average allocations $\overline{x}_i^1(k)$ and $\overline{x}_i^2(k)$ converge over time to their respective optimal values $x_i^{1*}$ and $x_i^{2*}$, for all $i$. The allocation phases (AI and MD) are demonstrated in Figure 3, which shows the instantaneous allocations $x_i^1(k)$ and $x_i^2(k)$ over the last time steps.

As we know, to achieve optimality the derivatives of the cost functions of the agents for a particular resource should reach a consensus. Figure 4 shows the error bars of the derivatives of the cost functions with respect to each resource for a single simulation, calculated across all agents. It illustrates that the derivatives of the cost functions of all agents with respect to a particular resource concentrate more and more over time around the same value. Hence, the long-term average allocation of resources for the stated optimization problem is optimal.

Figure 4: Evolution of the profile of derivatives of the cost functions of all agents. Figure 5: Evolution of the absolute difference between the average allocation and the (calculated) optimal allocation.

To validate the results obtained from the algorithm, we calculate the optimal values $x_i^{1*}$ and $x_i^{2*}$ using the interior-point method for the same optimization problem, for all $i$. Let $K$ be the largest time step used in the simulation; Figure 5 then illustrates that the long-term average allocation $\overline{x}_i^1(K)$ is approximately the same as the calculated optimal value $x_i^{1*}$, and similarly $\overline{x}_i^2(K) \approx x_i^{2*}$. It can further be seen in Figure 6 that the ratio of $\overline{x}_i^j(K)$ to $x_i^{j*}$ is also close to $1$.

Figure 6: Evolution of the ratio of the average allocation to the calculated optimal allocation. Figure 7: Evolution of the sum of average allocations of resources against the respective capacities.
Figure 8: Total allocation of resources over the last time steps.

Figure 7 illustrates the sum of average allocations over time; we observe that it is approximately equal to the respective capacity, for all resources $j$. Figure 8 shows the sum of instantaneous allocations of resources $R_1$ and $R_2$ over the last time steps; we observe that the sums of instantaneous allocations are concentrated around the respective capacities. To overcome overshoots of the total allocation of a resource, we modify the algorithm of the control unit to broadcast the capacity constraint event signal at a slightly reduced capacity threshold, for all $j$ and $k$.

5 Binary multi-resource allocation

In contrast to divisible resources, indivisible unit-demand resources, or binary resources, are either allocated as one unit to an agent or not allocated at all; therefore the algorithms proposed in Section 4 do not work in this setting. Hence, we propose different distributed algorithms suited to such resource allocation problems. We name these distributed algorithms binary multi-resource allocation algorithms; they are loose variants of the algorithms of Section 4. There are many applications in which a binary multi-resource allocation optimization problem must be solved in a distributed fashion, e.g., the allocation of parking spaces for different types of cars, say electric cars near charging points and conventional cars in ordinary parking spaces. A parking space is either allocated to a user or not. In this section, we use the same notation as introduced earlier unless stated otherwise.

Suppose $n$ agents compete for $m$ indivisible unit-demand resources with capacities $\mathcal{C}^1, \dots, \mathcal{C}^m$, respectively. Let $f_1, \dots, f_n$ be the cost functions of the agents; we assume that agents do not share their cost functions or allocation information with other agents. For fixed $i$, $j$, and $k$, let $u_i^j(k) \in \{0, 1\}$ be the Bernoulli random variable which denotes whether agent $i$ receives one unit of resource $j$ at time step $k$ or not. Further, let $\overline{u}_i^j(k)$ be the average allocation of indivisible unit-demand resource $j$ to agent $i$, which is calculated as follows:

$$\overline{u}_i^j(k) = \frac{1}{k+1} \sum_{t=0}^{k} u_i^j(t), \tag{15}$$

for $i = 1, \dots, n$ and $j = 1, \dots, m$. Notice that (15) differs from (2) in the sense that it calculates the average using the Bernoulli random variable $u_i^j(t)$ for the indivisible unit-demand resource $j$, whereas (2) calculates the average using the real-valued allocation $x_i^j(t)$ for a divisible resource, for all $i$ and $j$. Here, we consider that all resources are fully utilized on average.

When an agent receives the normalization signal $\Omega^j(k)$ from the control unit at time step $k$, it calculates the probability $\sigma_i^j(k)$ in the following manner to make a decision about its demand for resource $j$ at the next time step:

$$\sigma_i^j(k) = \min\left\{ \tau, \ \Omega^j(k) \, \frac{\overline{u}_i^j(k)}{\partial_j f_i(\overline{u}_i(k))} \right\}.$$

Here, $\tau \in (0, 1)$ is used to bound the probability $\sigma_i^j(k)$, for all $i$, $j$, and $k$.

Suppose that the normalization signal takes floating-point values represented by $b$ bits. If there are $m$ indivisible unit-demand resources in the system, then the communication overhead in the system will be $bm$ bits per time unit. This is in contrast to the divisible resource allocation problem, where in the worst-case scenario only $m$ bits are required per time unit. The communication complexity in this case is also independent of the number of agents in the system.
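The agent-side decision in the unit-demand case (demand one unit of resource $j$ with a bounded probability derived from the broadcast normalization signal and the agent's private derivative) can be sketched as follows. All identifiers (`Omega`, `tau`, `grad`) are illustrative stand-ins, not the paper's notation:

```python
import random

# Hedged sketch of the unit-demand agent decision: demand one unit of
# each resource j with a probability built from a broadcast signal
# Omega[j], the agent's average allocation avg[j], and its private
# derivative grad(avg, j), capped at tau to keep it a valid probability.
def binary_demand(avg, Omega, grad, tau, rng):
    """Return a 0/1 demand vector for the m resources."""
    demands = []
    for j in range(len(avg)):
        p = min(tau, Omega[j] * avg[j] / grad(avg, j))
        demands.append(1 if rng.random() < p else 0)
    return demands
```

With `tau = 0.0` the agent never demands; as the broadcast signal grows, the demand probability saturates at `tau`, which is how the control unit steers average utilization toward the capacities.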

5.1 Experiments

In this experiment, for convenience, we used two resources, $R_1$ and $R_2$, and three cost functions. Each cost function represents a class, and a set of agents belongs to each class. The cost depends on the average allocation of the indivisible unit-demand resources. For agent $i$, we consider the following cost functions:

We consider $n$ agents competing for the indivisible unit-demand resources in the system, with capacity $\mathcal{C}^1$ for resource $R_1$ and $\mathcal{C}^2$ for resource $R_2$. The remaining parameters are initialized with fixed values at the start. In the experiment, for the sake of simplicity, we assume that all agents join the system at the start of the algorithm. The agents are classified so that each of the three classes contains a contiguous range of agent indices. In the experiment we observed that the total allocation occasionally overshoots the capacity $\mathcal{C}^j$; to overcome this we use a slightly reduced capacity threshold in the control unit, for all $j$ and $k$.

Figure 9: Evolution of average allocation of resources

We illustrate some of the results of the experiment here. Figure 9 shows that the average allocations $\overline{u}_i^1(k)$ and $\overline{u}_i^2(k)$ converge over time to their respective optimal values for agent $i$. As mentioned earlier in (4), to show the optimality of a solution, the derivatives of the cost functions of all agents with respect to a particular resource should reach a consensus. Since the derivatives with respect to $R_1$ and $R_2$ are the same for all $i$, we show just one of them here. Figure 10 illustrates the profile of the derivatives of the cost functions of all agents for a single simulation with respect to resource $R_1$; we observe that they converge over time and hence reach a consensus. The empirical results thus obtained show the convergence of the long-term average allocation of resources to their respective optimal values via the consensus of the derivatives of the cost functions.

Figure 10: Evolution of the profile of derivatives of the cost functions of all agents. Figure 11: Total allocation of resources over the last time steps.

We see the total allocation of resources $R_1$ and $R_2$ over the last time steps in Figure 11. It is observed that most of the allocations are concentrated around their respective resource capacities. To reduce the overshoot of a resource, we consider a constant margin and modify the algorithm of the control unit to calculate the normalization signal using a slightly reduced capacity, for all $j$ and $k$.

6 Conclusion

We proposed algorithms, extending a variant of the AIMD algorithm, for solving multi-variate optimization problems with capacity constraints in a distributed manner, for divisible as well as indivisible unit-demand resources. The features of the proposed algorithms are that they involve little communication overhead, no agent-to-agent communication is needed, and each agent keeps its cost function private. We observed that the long-term average allocations of resources reach the optimal values in both settings. It would be interesting to solve the following open problems: the first is to provide a theoretical proof of convergence, and the second is to find bounds on the rate of convergence and its relationship with the different parameters or the number of occurrences of capacity events. The work can also be extended to several application areas such as smart grids, smart transportation, or, more broadly, the Internet of Things, where sensors have very limited processing power and battery life.


  • [1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, A view of cloud computing, Commun. ACM 53 (2010), no. 4, 50–58.
  • [2] H. Aziz, S. Gaspers, S. Mackenzie, and T. Walsh, Fair assignment of indivisible objects under ordinal preferences, Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, AAMAS’14, 2014, pp. 1305–1312.
  • [3] D. P. Bertsekas, Incremental proximal methods for large scale convex optimization, Math. Program. 129 (2011), no. 2, 163–195.
  • [4] V. D. Blondel, J. M. Hendrickx, A. Olshevsky, and J. N. Tsitsiklis, Convergence in multiagent coordination, consensus, and flocking, 44th IEEE Conference on Decision and Control, European Control Conference. (2005), 2996–3000.
  • [5] L. Cai, X. Shen, J. Pan, and J. W. Mark, Performance analysis of TCP-friendly AIMD algorithms for multimedia applications, IEEE Transaction on Multimedia 7 (2005), no. 2, 339–355.
  • [6] T. H. Chang, A. Nedic, and A. Scaglione, Distributed constrained optimization by consensus-based primal-dual perturbation method, IEEE Transactions on Automatic Control 59 (2014), no. 6, 1524–1538.
  • [7] D. Chiu and R. Jain, Analysis of the increase and decrease algorithms for congestion avoidance in computer networks, Computer Networks and ISDN Systems 17 (1989), no. 1, 1–14.
  • [8] M. Corless, C. King, R. Shorten, and F. Wirth, AIMD dynamics and distributed resource allocation, Advances in Design and Control, no. 29, SIAM, Philadelphia, PA, 2016.
  • [9] E. Crisostomi, M. Liu, M. Raugi, and R. Shorten, Plug-and-play distributed algorithms for optimized power generation in a microgrid, IEEE Transactions on Smart Grid 5 (2014), no. 4, 2145–2154.
  • [10] S. Deilami, A. S. Masoum, P. S. Moses, and M. A. S. Masoum, Real-time coordination of plug-in electric vehicle charging in smart grids to minimize power losses and improve voltage profile, IEEE Transactions on Smart Grid 2 (2011), no. 3, 456–467.
  • [11] J. C. Duchi, A. Agarwal, and M. J. Wainwright, Dual averaging for distributed optimization: Convergence analysis and network scaling, IEEE Transactions on Automatic Control 57 (2012), no. 3, 592–606.
  • [12] W. M. Griggs, J. Y. Yu, F. R. Wirth, F. Hausler, and R. Shorten, On the design of campus parking systems with QoS guarantees, IEEE Trans. Intelligent Transportation Systems 17 (2016), no. 5, 1428–1437.
  • [13] S. Han, U. Topcu, and G. J. Pappas, Differentially private distributed constrained optimization, IEEE Transactions on Automatic Control 62 (2017), no. 1, 50–64.
  • [14] A. Jadbabaie, J. Lin, and A. S. Morse, Coordination of groups of mobile autonomous agents using nearest neighbor rules, IEEE Transactions on Automatic Control 48 (2003), no. 6, 988–1001.
  • [15] B. Johansson, T. Keviczky, M. Johansson, and K. H. Johansson, Subgradient methods and consensus algorithms for solving convex optimization problems, 47th IEEE Conference on Decision and Control., Dec. 2008, pp. 4185–4190.
  • [16] S. S. Kia, J. Cortes, and S. Martinez, Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication, Automatica 55 (2015), 254 – 264.
  • [17] T. C. Koopmans and M. Beckmann, Assignment problems and the location of economic activities, Econometrica 25 (1957), no. 1, 53–76.
  • [18] P. Lin, W. Ren, Y. Song, and J. A. Farrell, Distributed optimization with the consideration of adaptivity and finite-time convergence, American Control Conference, June 2014, pp. 3177–3182.
  • [19] A. Nedic, Asynchronous broadcast-based convex optimization over a network, IEEE Transactions on Automatic Control 56 (2011), no. 6, 1337–1351.
  • [20] A. Nedic and A. Ozdaglar, Distributed subgradient methods for multi-agent optimization, IEEE Transactions on Automatic Control 54 (2009), no. 1, 48–61.
  • [21] G. Notarstefano and F. Bullo, Distributed abstract optimization via constraints consensus: Theory and applications, IEEE Trans. Automat. Contr. 56 (2011), no. 10, 2247–2261.
  • [22] G. Shi, K. H. Johansson, and Y. Hong, Reaching an optimal consensus: Dynamical systems that compute intersections of convex sets, IEEE Transactions on Automatic Control 58 (2013), no. 3, 610–622.
  • [23] S. Stuedli, R. H. Middleton, J. H. Braslavsky, and R. Shorten, AIMD in a discrete time implementation or with a non-constant shared resource, 2015 5th Australian Control Conference (AUCC), Nov. 2015, pp. 230–235.
  • [24] J. Wang and N. Elia, Control approach to distributed optimization, 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Sept. 2010, pp. 557–561.
  • [25] F. Wirth, S. Stuedli, J. Y. Yu, M. Corless, and R. Shorten, Nonhomogeneous place-dependent Markov chains, unsynchronised AIMD, and network utility maximization, ArXiv e-prints (2014).