Optimal Resource Scheduling and Allocation in Distributed Computing Systems

A central question in distributed computing systems is how to schedule incoming requests and how to allocate the computing nodes so as to minimize both time and computation costs. In this paper, we propose a cost-aware optimal scheduling and allocation strategy for distributed computing systems that minimizes a cost function combining response time and service cost. First, based on the proposed cost function, we derive the optimal request scheduling policy and the optimal resource allocation policy synchronously. Second, considering the effects of incoming requests on the scheduling policy, the additive increase multiplicative decrease (AIMD) mechanism is implemented to model the relation between the request arrival and scheduling. In particular, the AIMD parameters can be designed such that the derived optimal strategy remains valid. Finally, a numerical example is presented to illustrate the derived results.


I Introduction

With the advances in networking and hardware technology, distributed computing is increasingly demanded in many applications like multi-agent systems [1], Internet multimedia traffic [2] and network systems [3]. For this purpose, distributed computing systems have been proposed to tie together the power of a large number of resources distributed across networks [4]. In particular, scalability, reliability, information sharing and information exchange from remote sources are the main motivations for the users of distributed systems [5]. A distributed computing system is composed of a set of computing/processing nodes among which communication links are established. From different perspectives of computing systems, many research topics have been investigated [6, 7, 8, 9], such as power management, request scheduling, resource allocation and system reliability, thereby resulting in different models, algorithms and software tools.

Among these research topics, resource allocation is widely recognized to be an important research problem [4, 6]. The resource management mechanism determines the efficiency of the used resources and guarantees the Quality of Service (QoS) provided to the users. In this respect, resource allocation mechanisms include scheduling and allocation techniques to assign resources (sub-)optimally while considering the task characteristics and QoS requirements. The scheduling technique determines how to distribute (or even allocate) requests among various nodes, whereas the resource allocation pertains to the control of available resources that are provisioned to requests entering a computing system. To deal with the resource allocation problems in different environments like cluster, cloud, and grid computing systems [4], many resource allocation models and mechanisms have been proposed, such as proportional-share scheduling, market-based and auction-based mechanisms [10, 11]. Among these mechanisms, complete tandem queuing models address the request arrival process and make it possible to analyze the QoS performance [12]. In particular, for the two-station model, the first station is a buffer to store the incoming requests before being allocated to all computing nodes in the second station. Therefore, how to allocate the requests from the first station into the second station and how to assign all computing nodes to deal with the allocated requests are two essential problems interacting with each other. However, many existing works consider these two problems separately [13, 12] or do not involve the QoS performance analysis [14]. For instance, different QoS performances, including loss rates and average delays, were considered in [12], whereas the system properties, such as station lengths and structure properties, were studied in [13, 14].
Although a simultaneous scheduling and resource allocation solution with AIMD dynamics is designed in [15], the proposed scheme is not tuned via an optimality criterion. In many existing works, the arrival and processing of the requests are generally modeled as stochastic processes like the Poisson process, which is also adopted in this paper.

Motivated by the above discussion, in this paper we investigate how to synchronously and dynamically select request scheduling and resource allocation of distributed computing systems so that a cost function associated with QoS metrics is minimized. That is, we aim to balance the request scheduling rate, defined as the number of requests allocated by a dispatcher per unit time, and the service capacity, defined as the number of requests processed per unit time. To be specific, based on the response time that each request experiences and the power consumption of each computing node [16], we first propose the cost function to evaluate the QoS performance. Second, the optimization problem is constructed by combining the cost function and the constraints from the system setup. Finally, using optimization theory, the optimal request scheduling and resource allocation strategy is derived synchronously for all computing nodes. Since the price of each computing node is introduced to determine which computing nodes are allocated to the incoming requests, the derived optimal strategy is dynamic.

Since the request scheduling is not arbitrary and needs to balance the request arrival and processing, we next investigate the mechanism of the request scheduling and how to guarantee the validity of the derived optimal strategy in this case. In particular, we apply the well-known AIMD-based mechanism [17, 18, 19] to model the effects of the request arrival on the scheduling rate. The AIMD-based mechanism was initially proposed to prevent congestion in computer networks and has subsequently been applied in many fields [20, 19], including resource allocation. The advantage of the AIMD-based mechanism lies in providing a decentralized strategy for the scenario where the communication is limited and privacy preservation is required [21]. However, there are few works on AIMD-based mechanisms for scheduling problems. In this paper, we establish the convergence of the scheduling rates under the AIMD-based mechanism. To validate the derived optimal strategy under the AIMD-based mechanism, the AIMD parameters are designed such that the scheduling rates converge to the derived optimal scheduling rates. To the best of our knowledge, the AIMD parameters are designed and optimized explicitly for the first time in this paper, whereas only the properties of the AIMD parameters/matrix are applied in existing works [21, 20, 19], where the AIMD parameters are given a priori or can be chosen randomly. In conclusion, we combine the optimization problem with the AIMD-based mechanism to derive an optimal scheduling and allocation strategy for distributed computing systems.

The remainder of this paper is structured as follows. Section II introduces the system setup and the problem to be studied. The cost-aware optimal strategy for the request scheduling and resource allocation is proposed in Section III. The AIMD-based scheduling policy is developed in Section IV. A numerical example is given in Section V. Conclusion and future work are presented in Section VI.

Notation. R^n denotes the n-dimensional Euclidean space. Given a differentiable function, we denote its derivative (gradient) vector in the usual way.

II System Setup and Problem Formulation

In this section, we describe our model and system setup, define some notations, and state our assumptions.

II-A System Description

We consider the problem of scheduling an arrival request process at a dispatcher or load balancer in distributed computing systems such as cloud-centric networks, which tends to experience Poisson bursts of user traffic [22]. The request scheduling and resource allocation of distributed computing systems are illustrated in Fig. 1. A request is an individual demand for computing resources provided by a computing node. The request arrival is modeled as a Poisson process with an arrival rate ; see also [14, 13, 12]. Also, we assume that the arrival request process consists of a stream of transactions, each of which is to be allocated to a single computing node defined as a physical or virtual machine and environment. This reflects practical architectures like MapReduce [23], in which multiple computing nodes are allocated via the map function at the dispatcher.

After receiving the incoming requests, the dispatcher schedules these requests into multiple computing nodes. The desired case is that the dispatcher schedules all received requests; otherwise, some requests are stored in the dispatcher and queued for scheduling. To describe the number of all queued requests, we introduce the variable to denote the backlog of the dispatcher. Let the number of all computing nodes be . Each request is scheduled to a computing node, which scales its service capacity to minimize the overall processing time and cost. For the -th computing node with , is its scheduling rate from the dispatcher, and is its service rate, i.e., the number of arrival requests that can be serviced by the -th computing node per unit time. Both and are design parameters. It is clear that . If the scheduling rate is larger than the service rate , the -th computing node may store some requests that cannot be served in time. In this respect, similar to the dispatcher, the variable is introduced to denote the backlog of the -th computing node. Note that not all computing nodes need to be implemented, which implies that both and can be zero.

Fig. 1: Illustration of the model for request scheduling and resource allocation.

For the request scheduling and resource allocation as in Fig. 1, the dynamics of and is derived below [13, 15]:

(1)

If all incoming requests can be scheduled well, then can be zero at some time instant. In addition, can be lower/upper bounded in practice. These cases may affect the scheduling rates such that all the scheduling rates experience a jump, which can be modeled as an event-triggered system. A well-known model for this case is AIMD dynamics [21], which will be introduced in Section IV. On the other hand, since the scheduling and service rates can be designed, we can require to be smaller than such that does not need to be considered. This requirement is also reasonable in terms of the service cost [2]. To be specific, the storage of all requests in each computing node incurs memory cost and increases the service response time, thereby rendering the resource allocation mechanism inefficient. From the above discussion, the following assumption is made.

Assumption 1

For the request scheduling and allocation system, the following holds.

  1. For each , is upper bounded by a constant .

  2. For each and all , .

In Assumption 1, the first item comes from the nature of each computing node, whose computation capacity is inherently limited. The second item can be imposed via the scheduling and allocation strategy to be designed.

II-B Cost Function for the Scheduling and Allocation

In distributed computing systems, the request scheduling problem involves how to redirect incoming requests to all computing nodes, whereas the resource allocation problem pertains to how to rearrange the resource of each computing node such that the scheduled requests can be served quickly with a minimal cost. Therefore, our goal is to propose a cost-aware optimal request scheduling and resource allocation strategy for distributed computing systems.

To evaluate the scheduling and allocation, we first consider the QoS performance. Here, we consider the following average mean response time [2] as the QoS performance:

(2)

which is the average mean of all response times with . From (2), we can see the rationale behind item 2) in Assumption 1. Since holds from Assumption 1, we can check that is convex with respect to each satisfying , and convex with respect to each satisfying . Here, we point out that besides response times, both reliability and security can be treated as QoS performance metrics [16, 10].
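As an illustration of such a response-time metric, assuming each activated node behaves as an M/M/1 queue (a standard model consistent with the Poisson arrivals above, but an assumption here rather than the paper's exact expression), an average mean response time can be computed as follows:

```python
def avg_response_time(lam, mu):
    """Average mean response time over the activated computing nodes.

    Hedged sketch: assumes each node i behaves as an M/M/1 queue, so a
    request scheduled to node i experiences a mean response time of
    1/(mu[i] - lam[i]); terms are weighted by the traffic fraction
    lam[i] / sum(lam). This queueing form is an assumption, not the
    paper's printed expression. Requires lam[i] < mu[i] (Assumption 1).
    """
    total = sum(lam)
    return sum((l / total) * (1.0 / (m - l))
               for l, m in zip(lam, mu) if l > 0)
```

For two identical nodes with scheduling rates [1, 1] and service rates [2, 2], this gives an average response time of 1. The convexity of each term in the scheduling and service rates, noted above, is what makes the later optimization tractable.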

In addition to the QoS performance, we need to consider the cost of all computing nodes. Here, we investigate the power consumption via the total service cost defined below:

(3)

where is the service cost for each computing node. The service cost includes the routing cost and the computing cost. The routing cost is assumed to be a constant, whereas the computing cost is related to and non-decreasing in the service rate . That is, the computing cost will increase if more computing resources are requested by each computing node. The computing cost includes the cost of the power used by each computing node and the memory costs required by each computing node. In this respect, for each computing node, its service cost is defined as follows:

(4)

where and . It is easy to check that is convex. In addition, the derivative of with respect to , that is, , is positive and increasing with respect to . In (4), the first item is the power cost, which is monomial in the service rate [24]; the second item is the cost of the processor and storage memory; the last item is the routing cost. Since the service rate is upper bounded in practical systems, we can define as if ; see, e.g., [2] and some physical applications like Amazon elastic compute cloud (EC2) [25].
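A minimal sketch of such a convex service-cost function, with hypothetical coefficients standing in for the stripped symbols (a monomial power cost, a linear processor/memory cost, and a constant routing cost), could look like:

```python
def service_cost(mu, a=1.0, nu=2.0, b=0.5, r=0.1, mu_max=10.0):
    """Convex service cost of one computing node (hypothetical coefficients).

    a * mu**nu : power cost, monomial in the service rate (nu > 1)
    b * mu     : processor and storage-memory cost, linear in mu
    r          : constant routing cost
    Beyond the practical capacity mu_max, the cost is taken as +inf,
    mirroring the convention in the text.
    """
    if mu > mu_max:
        return float("inf")
    return a * mu ** nu + b * mu + r
```

With these illustrative coefficients, the cost is convex and its derivative is positive and increasing in the service rate, as required by the analysis.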

Combining the QoS performance (2) and the power consumption (3), we obtain the following cost function:

(5)

where is a fixed weight and is defined in (4).

II-C Problem Formulation

Under Assumption 1, our objective is to find the scheduling rate vector and the service rate vector of all computing nodes such that the cost function in (5) is minimized. To this end, the following optimization problem is formulated.

(6a)
s.t. (6b)
(6c)
(6d)

In this optimization problem, we aim to minimize the cost function (6a), which equals (5). The constraint (6b) naturally comes from the system setup, and the ideal case is . The constraints (6c)-(6d) come from Assumption 1.

Note that the backlogs of the dispatcher and the computing nodes are not involved in the optimization problem (6). Therefore, to deal with the optimization problem (6), we first consider the backlog-free case, whose optimal strategy is derived in Section III. Due to the constraint (6d) and the discussion in Section II-A, the backlog of the computing nodes does not need to be considered. To address the backlog of the dispatcher, we further investigate the mechanism of the request scheduling in Section IV. In particular, the AIMD mechanism is implemented and the AIMD parameters are designed explicitly to guarantee that the derived optimal strategy remains valid.

III Optimal Scheduling and Allocation

In this section, we establish the optimal scheduling and service rates for (6). Since not all computing nodes need to be activated, we first focus on how to choose the computing nodes to be activated. Based on the cost function (6a), we introduce the following variable for each computing node:

(7)

which can be treated as the price of each computing node [2]. To show this, if the dispatcher pays one dollar per unit service time and dollars per unit cost (routing and computing), then the price of each computing node is the expected total price that the dispatcher pays to this computing node. In this way, the dispatcher should choose the computing nodes with the lowest prices to ensure the minimization of the cost function. Hence, there exists a threshold for each computing node to decide whether this computing node is activated such that the requests can be scheduled and processed via this computing node. That is, if the price of the computing node is smaller than this threshold, then this computing node will be activated and provide service capacities; otherwise, this computing node will not be activated. For each and given any , let be the solution to the equation below:

(8)

By detailed computation, is larger than and upper bounded due to . Since and are positive and increasing, is an increasing function. In particular, if exceeds a certain bound, which is related to and denoted by , then there exists no solution to the equation (8). In this case, we define . The following lemma shows the relation between the variable (7) and the equation (8).
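Since the relevant side of (8) is increasing in the service rate, the solution (when it exists) can be found by a scalar root search. A generic bisection sketch, where the function g, the target value, and the bracket are placeholders for the stripped quantities:

```python
def solve_threshold(g, target, lo, hi, tol=1e-10):
    """Bisection sketch for a scalar equation like (8).

    g is assumed continuous and increasing on [lo, hi]; returns x with
    g(x) close to target, or None when target lies outside the range of
    g on the bracket (the case where no solution to (8) exists and the
    text falls back to the capacity bound).
    """
    if not (g(lo) <= target <= g(hi)):
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with g(x) = x² on [0, 10] and target 4, the routine returns approximately 2; with a target outside the bracket's range it returns None, matching the no-solution case described above.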

Lemma 1

For each , there exists such that and .

Proof:

First, since is convex and is bounded, we conclude the existence of such that and minimizes the right-hand side of (7). Second, from the derivative of (7) and the detailed computation, . From (8), , which implies that is the solution to (8) with . Hence, we conclude that .

Theorem 1

Consider the optimization problem (6). Let Assumption 1 hold and with the inverse function . The optimal scheduling rates and service capacities are

(9)
(10)

where is such that and

(11)
Proof:

For the optimization problem (6), all constraints are linear and convex, whereas the cost function is not convex with respect to any pair . The Lagrangian for (6) is defined as follows:

(12)

where are Lagrange multipliers.

Since not all computing nodes need to be activated, we denote by the set of all inactivated computing nodes, and for all . In this respect, the optimization problem (6) is rewritten equivalently as

(13a)
s.t. (13b)
(13c)
(13d)
(13e)

Next, we need to establish the set of all activated computing nodes with optimal scheduling and service rates. From (13), is only related to and linearly independent, whereas is only related to and linearly independent. Hence, we can easily check that and with are linearly independent. From [26, Definition 4.1] and [26, Theorem 4.3], the Guignard constraint qualification (GCQ) holds at , and further from [27, Theorem 6.1.4], the optimal solution (if it exists) satisfies the following KKT conditions:

(14a)
(14b)
(14c)
(14d)

It is easy to find that holds from . From (14a),

(15)

where is the optimal service rate. If , then from (14d), and further from (14b),

(16)

If , we have from (14c) that , and from (15)-(16),

(17)

which shows that . If for some , then the optimal value of has no effect on the cost function in (6a), and can be chosen arbitrarily from . In this case, we can still choose , whereas needs to be chosen appropriately to ensure the KKT conditions (14a)-(14d). If , then can be chosen arbitrarily. In this case, we can follow the above argument to derive the same optimal values.

As stated above, if , then and from (15),

(18)

where is defined in (7). If , then we have from (15) and (17) that

(19)

From , (III) holds if either or . In particular, implies that , where is from Lemma 1. Since is increasing, implies . Hence, in the derived optimal strategy, the computing nodes are chosen in increasing order of . In addition, needs to be satisfied, thereby resulting in (1).

Finally, we show the uniqueness of the derived optimal solution by contradiction. Let and be two different optimal solutions to (6). Since , we have for all , which further implies from (10)-(1) that for all . However, for the following equation

(20)

we have from [28] that the solution to (20) is unique, which implies that for all and results in a contradiction. Therefore, we conclude that the optimal solution to the problem (6) (if it exists) is unique.

From Theorem 1, the thresholds are determined first to rearrange all computing nodes and then to decide which computing nodes are to be activated. After determining the activated computing nodes, the optimal scheduling and allocation can be established. How to achieve this optimal strategy is summarized in Algorithm 1. On the other hand, we emphasize that the derived optimal strategy remains available when the request arrival is modeled as a random process such as a Poisson process, in which case the arrival rate is the rate parameter of the Poisson process.

Input:
Output: the optimal scheduling and service rate
  1. Sort in a non-decreasing order.
  2. From (1), choose a set of computing nodes to be activated.
  3. Compute the threshold.
  4. Set the optimal scheduling using (10).
  5. Set the optimal allocation using (9).
Algorithm 1 Optimal Scheduling and Allocation
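A simplified code sketch of Algorithm 1, where the price ordering and activation logic follow the text, but the final rate assignment is a proportional placeholder for the KKT-based optimal rates (9)-(10), whose exact expressions depend on symbols not reproduced here:

```python
def algorithm1(arrival_rate, prices, capacities):
    """Simplified sketch of Algorithm 1.

    Nodes are activated in increasing order of price until the activated
    capacity strictly exceeds the arrival rate (so each scheduling rate
    can stay below the service rate, per Assumption 1). The proportional
    load split below is a placeholder for the KKT-based optimal rates
    (9)-(10). Assumes total capacity exceeds the arrival rate.
    """
    order = sorted(range(len(prices)), key=lambda i: prices[i])
    active, cap = [], 0.0
    for i in order:
        active.append(i)
        cap += capacities[i]
        if cap > arrival_rate:
            break
    lam = [0.0] * len(prices)
    for i in active:
        lam[i] = arrival_rate * capacities[i] / cap
    return active, lam
```

For three nodes with prices [1, 2, 3], capacities [2, 2, 2] each, and an arrival rate of 3, the two cheapest nodes are activated and the load is split evenly between them; the expensive node stays idle, mirroring the threshold behavior described above.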

IV AIMD-based Scheduling

Due to the constraint (6d) in the optimization problem (6), the backlog of each computing node does not need to be considered. However, the backlog of the dispatcher needs to be investigated. In this section, we first apply the AIMD-based strategy to reconsider the request scheduling problem in Section IV-A, and then propose an approach to determine the AIMD parameters such that the derived optimal strategy is still valid in Section IV-B.

IV-A AIMD-based Strategy

In the request scheduling, the over-scheduling phenomenon may occur. Therefore, is imposed in (6) to avoid the over-scheduling of the limited requests. On the other hand, if there exists no backlog in the dispatcher (i.e., ), then there are not enough requests to be scheduled. These two cases may affect the derived optimal strategy. In this respect, the scheduling problem needs to be reconsidered. In particular, the scheduling rate may experience a jump in these two cases, which may further affect the service rate. An appropriate method is to model these two cases as an AIMD-based event-triggered system.

In the AIMD strategy, there exist two phases: the additive increase (AI) phase and the multiplicative decrease (MD) phase. In the AI phase, if the scheduling rate of each computing node does not reach its maximum (which is upper bounded via the service rate), then it increases linearly with an additive rate . Once the maximum is reached, it saturates at this value until there exists no backlog or . If there exists no backlog in the dispatcher or , then the scheduling rate needs to be adjusted and experiences an instantaneous decrease with a multiplicative factor , which occurs in the MD phase. Of these two phases, the MD phase is only activated at certain time instants and results in a jump, after which the AI phase is activated.
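The AI and MD phases can be sketched as a discrete-time simulation. The saturation at the service rate and the backlog-triggered multiplicative decrease follow the description above, while the step size, horizon, and initial backlog are arbitrary illustration values:

```python
def simulate_aimd(mu, alpha, beta, lam_in, T=200.0, dt=0.01):
    """Discrete-time sketch of the AIMD scheduling dynamics.

    AI phase: each node's scheduling rate grows linearly at rate
    alpha[i] and saturates at its service rate mu[i]. MD phase: when
    the dispatcher backlog empties, every rate is scaled by beta[i].
    The step size dt and the initial backlog are illustration values.
    """
    rate = [0.0] * len(mu)
    backlog = 10.0
    for _ in range(int(T / dt)):
        # AI phase: linear increase, saturated at the service rate
        rate = [min(r + a * dt, m) for r, a, m in zip(rate, alpha, mu)]
        backlog += (lam_in - sum(rate)) * dt
        if backlog <= 0.0:          # event: backlog hits zero -> MD phase
            backlog = 0.0
            rate = [b * r for b, r in zip(beta, rate)]
    return rate, backlog
```

The simulation reproduces the sawtooth behavior typical of AIMD: rates climb to their caps, the backlog drains, and the multiplicative decrease resets the cycle.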

In the AIMD mechanism, the occurrence of the MD phase is determined via an event-triggered mechanism (ETM), which depends on the variable and the condition . Let be the event-triggering time with , and for the initial time . Hence, for each computing node, the behavior of its scheduling rate can be mathematically modeled below:

(21)

where and

(22)

From the above discussion, we define with . From (1) and (21)-(22), we have that equals to

(23)

which is similar to the one in [15]. Comparing (23) and the condition with (21), we find that (23) will not be activated for the case where is fixed. In this case, the ETM (22) can be written equivalently as

(24)

which implies that the backlog would not be zero. However, if switches among different modes or additional constraints are imposed on the backlog , then the condition (23) or its variants should be included into the ETM (22), which deserves further study. For the request scheduling with the AIMD-based strategy, we have the following proposition to ensure the convergence of scheduling rates.

Proposition 1

If the request scheduling follows the AIMD-based strategy, then for all , the scheduling rate of each computing node converges to a point given below:

(25)

where is sufficiently small, is the service rate, and the constant is such that

(26)

The proof of Proposition 1 follows a similar argument to Lemma IV.2 of [29], and is omitted here. The difference lies in the fact that the lower bound of the scheduling rate is not considered here, and the upper bound is not a constant given a priori but the service rate derived from Theorem 1. Therefore, in (25) is not fixed but related to the service rate, and the introduction of comes from in (6d). In this respect, can be computed via an iterative method summarized in Algorithm 2. To be specific, given an initial value of as in line 1 of Algorithm 2, we compute the scheduling rate as in line 2. If with a sufficiently small threshold , then both and are computed iteratively as in lines 3-4. The iteration terminates once reaches the threshold . On the other hand, if in (25), then

(27)
Input: the constants for and the threshold
Output: and
1 Set with . For all , compute as in (25).
2 While  do: compute , and for all , recompute as in (25).
Algorithm 2 Computation of
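Algorithm 2 is an instance of a fixed-point iteration: the unknown constant in (25)-(26) and the scheduling rates are recomputed alternately until successive values differ by less than the threshold. A generic sketch of such an iteration, where the update map is a placeholder for the stripped expressions:

```python
def fixed_point(update, x0, tol=1e-8, max_iter=10000):
    """Generic fixed-point iteration in the spirit of Algorithm 2.

    update is a placeholder for the map that recomputes the unknown
    constant from the current scheduling rates via (25)-(26); the
    iteration stops once successive iterates differ by less than tol.
    """
    x = x0
    for _ in range(max_iter):
        x_new = update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

As a sanity check, iterating the contraction x → (x + 2/x)/2 from x = 1 converges to √2, illustrating the stopping rule Algorithm 2 relies on.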

IV-B AIMD Parameter Design

For each scheduling rate obtained from the AIMD-based strategy, the service rate determines the convergent point of that scheduling rate. From Theorem 1, both the scheduling rate and the service rate are optimized. A direct way is to combine the solutions in Theorem 1 and Proposition 1 such that all scheduling rates converge to the optimal scheduling rates, thereby balancing the resource allocation mechanism and the AIMD-based strategy.

From Theorem 1, not all computing nodes are activated. For these inactivated computing nodes, both and are zero. Hence, we can set simply. In the following, we only investigate all activated computing nodes, whose scheduling rates and service capacities are given below with :

(28)

In terms of scheduling rates, and should be equivalent. In this case, from (25) and (28), we have

(29)

If , then from (29),

Since is arbitrarily small, we have from (8) and the derivative of that needs to be sufficiently large, which contradicts the boundedness of . Therefore, , and from (26),

(30)

which is required to be satisfied by the parameters and for all activated computing nodes. From (30), the choices of for all activated computing nodes affect each other. In particular, if is treated as a single variable, then we can derive the relation between and . Once and are determined, we can further derive and explicitly.

IV-C Further Discussion

Section IV-A discusses the backlog in the dispatcher, but the backlog is not involved in the ETM (24). Therefore, under the applied AIMD-based mechanism, the backlog in the dispatcher may be unbounded. In this respect, we discuss the backlog in the dispatcher further. From [13, Section 4.3], where the arrival rate is modeled to be time-varying, the upper bound of the backlog in the dispatcher can be approximated by some function, and the convergence of this function implies the convergence of the backlog in the dispatcher. For the fixed arrival rate in this paper, this function is not convergent, which calls for a reconsideration of the arrival rate.

In the following, we propose an alternative method to adjust the arrival rate without affecting the subsequent request scheduling and resource allocation. Let and be two thresholds for the variable . If , then the backlog in the dispatcher reaches its minimum such that the subsequent scheduling and allocation are affected. If , then the backlog in the dispatcher reaches its maximum such that the arrival rate should be decreased. In these two cases, the dispatcher sends a warning signal to the request arrival such that the arrival rate switches between two modes, as summarized below.

(31)

where is constant, is the real arrival rate, and is the desired arrival rate for the scheduling and allocation. Here, the constraint on is to ensure the decrease of the backlog in the dispatcher; see also (1). From (31), if , then the arrival rate is in the first mode until increases to its maximum due to the limited capacity of the dispatcher; if , then the arrival rate is in the second mode until decreases to its minimum due to the optimal strategy in Section III and (26). Therefore, the backlog in the dispatcher is bounded in . By choosing and appropriately, the scheduling and allocation strategies in Sections III and IV are not affected. In particular, in the second mode, due to the backlog in the dispatcher, the decrease of the arrival rate has no effect on the scheduling and allocation strategies. Hence, such a switching mechanism guarantees both the derived strategy and the boundedness of the backlog in the dispatcher. A more general case, in which the arrival rate switches among different modes that do affect the scheduling and allocation strategies, deserves further study.
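The two-mode switching rule (31) behaves like a hysteresis controller on the dispatcher backlog. A sketch with hypothetical threshold names, mode labels, and rate values:

```python
def admitted_rate(backlog, mode, q_min, q_max, lam_high, lam_low):
    """Hysteresis sketch of the two-mode switching rule (31).

    The threshold names and mode labels are hypothetical. When the
    backlog drops to q_min, the full desired arrival rate lam_high is
    admitted; when it grows to q_max, the rate is throttled to lam_low;
    in between, the previous mode is kept, so the backlog stays within
    [q_min, q_max].
    """
    if backlog <= q_min:
        mode = "high"
    elif backlog >= q_max:
        mode = "low"
    return mode, (lam_high if mode == "high" else lam_low)
```

Because the mode only flips at the two thresholds, small fluctuations of the backlog inside the band do not cause rate chattering, which is the property the text uses to keep the scheduling and allocation strategies unaffected.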

Fig. 2: Evolution of the optimal number in (1) and the average mean response time in (2) with the arrival rate .

V Numerical Example

In this section, a numerical example is presented to illustrate the derived results. We consider a distributed computing system with 3 computing nodes with different service capacities. These heterogeneous computing nodes have different power consumption and processing capacities. Typically, there are three types: a lightweight computing node (Node 1) with low service rate and power consumption, a middleweight computing node (Node 2) with medium service rate and power consumption, and a heavyweight computing node (Node 3) with high service rate and power consumption. That is, in (4), we set . In addition, and the weight . Therefore, from (7) we can compute that . Hence, when the requests come, the lightweight computing node is activated first. Due to the boundedness of with , we have the upper bound of , that is, . Here, we choose and , and from Theorem 1, we can derive the optimal scheduling rates and service rate . Note that the number of all activated computing nodes depends on the arrival rate. With different arrival rates, the number of the activated computing nodes is shown in Fig. 2. In addition, the evolution of the average mean response time in (2) is also presented in Fig. 2. From Fig. 2, as the arrival rate increases, the number of the activated computing nodes increases, whereas the average mean response time decreases.

Fig. 3: Evolution of the scheduling rates with the time, . The dashed line is the scheduling rates obtained from (25).

Next, we consider the AIMD-based strategy. Let and . We can compute from (29) that , and further from Proposition 1 that . It is easy to check that . In addition, we can see that given the AIMD parameters, the scheduling rates from (25) are different from those from Theorem 1. In particular, for Nodes 1 and 2, whereas for Node 3. Under the AIMD-based strategy, the evolution of the scheduling rates is presented in Fig. 3. Note that the scheduling rates of all computing nodes reach their upper bounds during the AI phase and do not increase further.

Fig. 4: The relation and from (29).

Finally, if the AIMD parameters are unknown a priori, then we can design these parameters via the relation in (29). In particular, if we consider as a single variable, we can derive the relation between and ; see Fig. 4. That is, if we choose a priori, then from Fig. 4 the corresponding value of is determined. The value of can be designed such that (30) and (26) are satisfied.

VI Conclusion

In this paper, we investigated cost-aware request scheduling and resource allocation for distributed computing systems. By introducing a cost function combining the response time and the service cost, we derived an optimal scheduling and allocation strategy that minimizes this cost. We further adopted the AIMD mechanism to model the relation between the incoming requests and the request scheduling, and combined it with the derived optimal strategy to determine the AIMD parameters. Finally, the derived results were illustrated via a numerical example. Future research will address more general cases involving switching arrival rates and different request priorities.
