I Introduction
Processing the enormous volumes of data generated by social networks such as Facebook [1] and Twitter [2], financial institutions, the healthcare industry, etc. has become a major motivation for data-parallel processing. In data-parallel processing applications, an incoming task needs a specific piece of data, which is physically replicated on different servers. A task receives service faster from a server that is close to the physical location of the stored data than from a server far from it. For instance, a task is served much faster by a server that has the data stored in its memory or on its local disk than by a server that does not have the data, since the latter must fetch the data from another server in the same rack, or even from a remote rack. Although the speed of data center networks has increased, the difference between these service times is still considerable [3], [4], [5], [6]. When assigning tasks, it is therefore critical to schedule a task on a local server [6], [7], [8], [9]. Scheduling in this setting is called the near-data scheduling problem, or scheduling with data locality. Assigning tasks to suitable servers keeps the system stable throughout the capacity region and also reduces the mean delay experienced by tasks. Accordingly, to evaluate different algorithms for scheduling tasks on servers, two optimality criteria are defined as follows:

Throughput Optimality: A throughput optimal algorithm stabilizes the system in the whole capacity region. That is, the algorithm is robust to changes in the arrival rate, as long as the arrival rate remains strictly inside the capacity region of the system.

Heavy-traffic Optimality: A heavy-traffic optimal algorithm minimizes the mean task completion time as the arrival rate vector approaches the boundary of the capacity region (note that task completion time consists of both the waiting time of the task to be assigned to a server and the service time). As a result, a heavy-traffic optimal algorithm assigns tasks to servers efficiently when the system operates at peak loads close to the capacity region boundary. This not only stabilizes the system, but also minimizes the mean delay under stressed load conditions.
Although there are various heuristic algorithms that take near-data scheduling into consideration for data centers with multiple levels of data locality [10], [6], [11], their throughput and heavy-traffic optimality properties have not been studied. In this paper, we discuss different real-time scheduling algorithms for systems with multiple levels of data locality that come with theoretical guarantees for throughput and heavy-traffic optimality. We also compare these algorithms against each other and evaluate their mean task completion time across the capacity region, in both high and low loads.
In the following, we first explain common data center structures and the problem definition. In Section III, we discuss pioneering algorithms such as Fluid Model Planning and the Generalized cμ-Rule. We then discuss the problem for two and three levels of data locality in Section IV and Section V, respectively. The paper is concluded in Section VI.
II System Model
Consider a system consisting of M servers indexed by m ∈ {1, 2, …, M}. Each server belongs to a specific rack. Without loss of generality, assume that the servers in a rack are indexed sequentially; that is, servers 1, 2, …, M₁ are in the first rack, servers M₁+1, …, M₂ are in the second rack, and so on. Let K(m) denote the rack of the m-th server. Each data chunk is stored on a set of servers denoted by L̄. In real-world applications, L̄ consists of three servers. The reason for storing the data on different servers is to keep the data accessible through other servers if one server disconnects from the network or fails. The larger the set L̄ is, the more secure the data would be; however, as server storage is limited, the data is usually stored on no more than three servers. Hence |L̄| = 3, and the set of all task types is denoted by 𝓛. Different tasks can therefore be labeled by the location of their associated data chunk. For example, all tasks whose data is stored on servers m₁, m₂, and m₃ are of type L̄ = {m₁, m₂, m₃}. All servers in the set L̄ are called local servers for this task type, since they hold the data needed for the task to be processed. The servers sharing a rack with a server in L̄ (but not in L̄ themselves) form the set of rack-local servers, denoted L̄_k, and all other servers, denoted L̄_r, are remote servers for type-L̄ tasks.

The system is assumed to be discrete-time. If a task of type L̄ is scheduled on a local (rack-local, or remote) server, the probability that the task receives service in a time slot and departs the system at the end of the time slot is α (β, or γ), where α > β > γ. In other words, local, rack-local, and remote service times follow geometric distributions with parameters α, β, and γ, respectively. On average, it therefore takes a task less time to receive service from a local server (1/α time slots) than from a rack-local server (1/β) or a remote server (1/γ). On the other hand, the number of type-L̄ arrivals at the beginning of time slot t is denoted by A_L̄(t), which is bounded and has mean arrival rate λ_L̄. The arrivals of type-L̄ tasks are assumed to be i.i.d. across time slots. Under this service model, when a new task arrives, there may be no local, rack-local, or remote server available to serve it immediately. Therefore, data centers maintain multiple queues in which tasks wait to receive service. Depending on the structure of the data center and the scheduling algorithm, the number of queues can be smaller than, equal to, or larger than the number of servers. For example, a FIFO algorithm needs only a single queue regardless of the number of servers. In the rest of the paper, we point out the number of queues needed by each algorithm.
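The discrete-time service model above can be sketched in a few lines of Python. The rates used here (α = 0.9, β = 0.5, γ = 0.2) are illustrative values chosen for the sketch, not taken from the text:

```python
import random

rng = random.Random(0)
alpha, beta, gamma = 0.9, 0.5, 0.2   # illustrative local / rack-local / remote rates

def service_slots(rate):
    """Number of slots a task occupies: geometric with success probability `rate`."""
    slots = 1
    while rng.random() >= rate:      # task fails to depart at the end of this slot
        slots += 1
    return slots

# empirical means approach 1/alpha, 1/beta, 1/gamma, matching the model
means = [sum(service_slots(r) for _ in range(10_000)) / 10_000
         for r in (alpha, beta, gamma)]
print([round(m, 2) for m in means])  # roughly [1/alpha, 1/beta, 1/gamma]
```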
A question that might be raised here is: "At which queue should a newly arriving task wait so as to eventually receive service from a server?" This is handled by a routing algorithm, which takes care of routing new tasks to appropriate queues. On the other hand, when a server finishes processing a task and becomes idle, it must decide which task to pick among all tasks queued in the system. The act of assigning tasks to idle servers is called scheduling, with a slight abuse of terminology. Therefore, an algorithm is fully defined by both its routing and scheduling policies.
As a new piece of terminology, three levels of data locality refers to the case in which local, rack-local, and remote services all exist in the system. The number of data locality levels depends directly on the structure of the system. For example, if tasks receive service only locally or remotely (no rack structure exists), then only two levels of data locality exist.
II-A Capacity Region Realization
Let λ_{L̄,m} denote the arrival rate of type-L̄ tasks that receive service from the m-th server; the rates {λ_{L̄,m}} are thus a decomposition of λ_L̄. Assuming that a server can afford a total load of at most 1 across all local, rack-local, and remote tasks, the capacity region can be characterized as follows [12, 13]:

(1)  Λ = { λ = (λ_L̄ : L̄ ∈ 𝓛) : ∃ λ_{L̄,m} ≥ 0 such that λ_L̄ = Σ_{m=1}^{M} λ_{L̄,m} for all L̄ ∈ 𝓛, and Σ_{L̄: m ∈ L̄} λ_{L̄,m}/α + Σ_{L̄: m ∈ L̄_k} λ_{L̄,m}/β + Σ_{L̄: m ∈ L̄_r} λ_{L̄,m}/γ < 1 for all m }
A closer look at the definition of the capacity region Λ shows that finding the capacity region of the system described in Section II amounts to solving a linear program.
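As a concrete illustration of this linear program, the following sketch tests whether an arrival-rate vector lies in the capacity region by minimizing the maximum server load over all decompositions of the per-type rates. The rates (α = 0.9, β = 0.5, γ = 0.2) and the toy 3-server, 2-rack layout are hypothetical:

```python
# Sketch: capacity-region membership via the load-balancing LP, using SciPy.
import numpy as np
from scipy.optimize import linprog

alpha, beta, gamma = 0.9, 0.5, 0.2   # local / rack-local / remote service rates
rack = [0, 0, 1]                      # rack index of each of the M = 3 servers
M = len(rack)

def rate(task_type, m):
    """Service rate server m offers a task whose data lives on `task_type`."""
    if m in task_type:
        return alpha                  # local
    if any(rack[m] == rack[s] for s in task_type):
        return beta                   # rack-local
    return gamma                      # remote

def in_capacity_region(lams):
    """lams: dict mapping task type (tuple of servers) -> arrival rate."""
    types = list(lams)
    n = len(types) * M                # one variable per (type, server) pair
    c = np.zeros(n + 1); c[-1] = 1.0  # minimize t = maximum server load
    # equality: each type's rate is split across all servers
    A_eq = np.zeros((len(types), n + 1)); b_eq = np.zeros(len(types))
    for i, T in enumerate(types):
        A_eq[i, i * M:(i + 1) * M] = 1.0
        b_eq[i] = lams[T]
    # inequality: load of each server minus t is <= 0
    A_ub = np.zeros((M, n + 1)); b_ub = np.zeros(M)
    for m in range(M):
        for i, T in enumerate(types):
            A_ub[m, i * M + m] = 1.0 / rate(T, m)
        A_ub[m, -1] = -1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    return res.status == 0 and res.fun < 1.0

print(in_capacity_region({(0,): 0.5, (2,): 0.5}))  # light load -> True
print(in_capacity_region({(0,): 2.0}))             # overload   -> False
```

The optimum of the LP is the smallest achievable maximum load; the rate vector is in the capacity region exactly when that optimum is strictly below one.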
III Affinity Scheduling
The near-data scheduling problem is a special case of affinity scheduling [14], [15], [16], [17], [18]. In this section, two algorithms, Fluid Model Planning and the Generalized cμ-Rule, are presented, which are among the pioneering works on this scheduling problem. However, they are not practical for use in data centers, as discussed in the following two subsections.
III-A Fluid Model Planning
For routing tasks and scheduling servers, the fluid model planning algorithm was proposed by Harrison and Lopez [19], [16]; it is both throughput and heavy-traffic optimal, not only for three levels of data locality but for the general affinity scheduling problem. To implement this algorithm, a distinct queue is needed for each type of task, and each new incoming task is routed to the queue of its own type. In the model described above, there are on the order of M³ types of tasks (the data associated with each task type is located on 3 servers, so there are at most C(M, 3) different task types). To find the scheduling policy, the arrival rate of each task type must be known in advance. By solving a linear program, the algorithm identifies the basic activities, and based on the basic activities derived from the linear programming step, tasks are assigned to idle servers. There are two main objections to this algorithm. First, it requires one queue per task type, i.e. on the order of M³ queues; in practice, it is not feasible to maintain a number of queues cubic in the number of servers, as it excessively complicates the system and its underlying network. Second, the algorithm assumes the arrival rates of the different task types to be known; however, the load is not known in real applications, and moreover it changes over time. Therefore, even though the algorithm is throughput and heavy-traffic optimal, it cannot be used in real applications.
III-B Generalized cμ-Rule
Stolyar [20] and Mandelbaum and Stolyar [15] proposed the generalized cμ-rule. In contrast to fluid model planning, which uses knowledge of the arrival rate of each task type, the generalized cμ-rule uses the MaxWeight notion, which removes the need to know the arrival rates. Similar to fluid model planning, however, the algorithm needs one queue per task type. For routing, each incoming task joins the queue of its own type. Assume that the cost rate incurred by the queued type-L̄ tasks is C_L̄(Q_L̄), where Q_L̄ is the queue length of type-L̄ tasks, and the cost may generally depend on the task type. The cost functions are required to have fairly natural properties: C_L̄(·) is convex and continuous with C_L̄(0) = 0, and its derivative C′_L̄(·) is strictly increasing and continuous with C′_L̄(0) = 0. Given the cost functions of the different task types, a server m that becomes idle is scheduled to a task type in the set below:
(2)  L̄*(t) ∈ argmax_{L̄ ∈ 𝓛} C′_L̄(Q_L̄(t)) · μ_{L̄,m}
where μ_{L̄,m} is α, β, or γ if the task type L̄ is local, rack-local, or remote to the idle server m, respectively. For instance, if the holding cost for type-L̄ tasks is C_L̄(Q_L̄) = c_L̄ Q_L̄^{1+ε} with ε > 0, which satisfies all the conditions of a valid cost function, the generalized cμ-rule is proved by Stolyar to asymptotically minimize the holding cost below [20]:

(3)  Σ_{L̄ ∈ 𝓛} c_L̄ Q_L̄^{1+ε}(t)
As the constant ε must be strictly positive for the cost function to satisfy these conditions, the algorithm is not heavy-traffic optimal in the sense defined in Section I. Besides, the generalized cμ-rule still needs one queue per task type, i.e. on the order of M³ queues (where M is the number of servers). Maintaining such a large number of queues is impractical, as the system becomes overly complicated.
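A minimal sketch of the scheduling decision in the generalized cμ-rule, using the holding cost c_L̄ q^{1+ε} discussed above. The task types, cost coefficients, rates, and locality table below are hypothetical:

```python
alpha, beta, gamma = 0.9, 0.5, 0.2   # illustrative service rates
eps = 0.1                            # strictly positive exponent in the cost
c = {"A": 1.0, "B": 2.0}             # hypothetical per-type cost coefficients

def marginal_cost(k, q):
    # derivative of C_k(q) = c_k * q**(1 + eps)
    return c[k] * (1 + eps) * q ** eps

def mu(k, server, locality):
    # service rate the server offers type k, given a locality table
    return {"local": alpha, "rack": beta, "remote": gamma}[locality[k, server]]

def schedule(server, queues, locality):
    # generalized c-mu rule: argmax over types of C'_k(Q_k) * mu_{k,server}
    return max(queues,
               key=lambda k: marginal_cost(k, queues[k]) * mu(k, server, locality))

locality = {("A", 0): "local", ("B", 0): "remote"}
print(schedule(0, {"A": 5, "B": 6}, locality))   # -> A
```

Even though type B's queue is longer and its cost coefficient larger, server 0 picks type A because the remote rate γ discounts type B's weight heavily.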
None of the algorithms presented in the following sections employs knowledge of the arrival rates, and all of them assume the system has the same number of queues as servers, which is a more realistic structure.
IV Two Levels of Data Locality
The model described in Section II covers three levels of data locality, as a task can be served at three different rates α, β, and γ. However, most previous theoretical work, except the recent work by Xie, Yekkehkhany, and Lu [12], addresses two levels of data locality, which is the basis for three or more levels. The model for two levels of data locality is similar to the one described in Section II, except that there is no notion of rack or rack-local service. With two levels of data locality, a task either receives service from a server in the set L̄ locally at rate α, or from any other server remotely at rate γ. The capacity region is revised accordingly by dropping the rack-local term in (1). For two levels of data locality, Wang et al. [21] and Xie and Lu [22] proposed the JSQ-MaxWeight and Pandas algorithms, respectively, which are discussed in the next two subsections.
IV-A Join-the-Shortest-Queue-MaxWeight (JSQ-MW)
For two levels of data locality, JSQ-MW has been proven to be throughput optimal, but heavy-traffic optimal only in a specific traffic scenario [21]. Wang et al. [21] assume one queue per server, where the length of the m-th queue at time t is denoted by Q_m(t). A central scheduler maintains the lengths of all queues to decide the routing of new incoming tasks and the scheduling of idle servers. As for the routing policy, when a new task of type L̄ arrives at the system, the central scheduler routes the task to the shortest queue among the servers in the set L̄ (all ties are broken randomly throughout this paper). In other words, the new task is routed to the shortest local queue. As for the scheduling policy, when server m becomes idle, the central scheduler assigns it a task from a queue in the set below:

(4)  n*(t) ∈ argmax_n μ_{n,m} Q_n(t),  where μ_{n,m} = α if n = m and μ_{n,m} = γ otherwise.
Therefore, the idle server gives its next service to the queue with the maximum weight as defined above. As stated before, JSQ-MW is not heavy-traffic optimal in all loads; the specific traffic scenario in which it is heavy-traffic optimal is characterized in [21]. For more details, refer to [12, 13], and [21].
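The two-level JSQ-MW routing and scheduling decisions can be sketched as follows. Ties are broken deterministically here (the algorithm breaks them randomly), and the rates are illustrative:

```python
alpha, gamma = 0.9, 0.2   # illustrative local / remote rates

def route(task_local_servers, Q):
    # JSQ part: the task joins the shortest of its local queues
    return min(task_local_servers, key=lambda m: Q[m])

def schedule(n, Q):
    # MaxWeight part: idle server n compares alpha*Q_n (serving its own queue
    # locally) against gamma*Q_m (serving any other queue remotely)
    weights = {m: (alpha if m == n else gamma) * Q[m] for m in range(len(Q))}
    return max(weights, key=weights.get)

Q = [3, 0, 20]
print(route({0, 1}, Q))   # -> 1, the shortest of the task's local queues
print(schedule(0, Q))     # alpha*3 = 2.7 < gamma*20 = 4.0 -> serves queue 2
```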
Next, the Pandas algorithm proposed by Xie and Lu [22], which is both throughput and heavy-traffic optimal, is presented.
IV-B Priority Algorithm for Near-Data Scheduling (Pandas)
The Pandas algorithm is both throughput optimal and heavy-traffic optimal in all loads [22]. Again assuming one queue per server, Pandas routes each new incoming task to the shortest local queue (the same routing as JSQ-MW). For scheduling, an idle server always gives its next service to a local task queued at its own queue, unless its queue is empty. If the idle server's queue has no tasks, the central scheduler assigns it a task from the longest queue in the system (a task that is remote to the idle server). This remote assignment from the longest queue occurs only when that queue is long enough (roughly, when its length exceeds α/γ), which ensures that the remote task experiences less delay being served remotely than it would waiting for and receiving service at its local server.
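A sketch of the Pandas scheduling decision. The threshold α/γ used below is an illustrative reading of the condition described above (remote service should beat the expected wait for local service), not a quote of the exact constant from [22]:

```python
alpha, gamma = 0.9, 0.2   # illustrative local / remote rates

def pandas_schedule(n, Q):
    if Q[n] > 0:
        return n                                   # local work always comes first
    m = max(range(len(Q)), key=lambda i: Q[i])     # longest queue in the system
    if Q[m] > alpha / gamma:                       # remote help only past a threshold:
        return m                                   # 1/gamma < Q_max/alpha
    return None                                    # otherwise the server stays idle

print(pandas_schedule(1, [3, 0, 20]))  # -> 2 (20 > 0.9/0.2 = 4.5, so help remotely)
print(pandas_schedule(1, [3, 0, 2]))   # -> None (no queue past the threshold)
```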
To conclude the prior work on two levels of data locality, the Pandas algorithm proposed by Xie and Lu [22] is the most promising algorithm: it both stabilizes the system in the whole capacity region and minimizes the mean delay of tasks in the heavy-traffic regime. However, in real applications there are usually more than two levels of data locality. The reason is that a server may have the data stored in memory or on its local disk, or it may not have the data at all, in which case it has to fetch the data from another server in the same rack, or even from a server in another rack; this gives rise to multiple levels of data locality. It is therefore of interest to design a throughput and heavy-traffic optimal algorithm for a system with more than two levels of data locality. The model presented in the system model section has three levels of locality, as a task can receive service locally, rack-locally, or remotely. Designing a throughput and heavy-traffic optimal algorithm is more challenging for three levels of data locality than for two, as a trade-off between throughput optimality and delay optimality emerges. Indeed, the Pandas algorithm [22], which is both throughput and heavy-traffic optimal for two levels of data locality, is not even throughput optimal for three levels. Xie, Yekkehkhany, and Lu [12] proposed two algorithms for three levels of data locality, which are discussed in the next section.
V Three Levels of Data Locality
For three levels of data locality, with the system model described in Section II, Xie, Yekkehkhany, and Lu [12] extended the JSQ-MaxWeight algorithm and proved it to be throughput optimal. However, the extension of JSQ-MaxWeight remains heavy-traffic optimal only in a specific traffic scenario, not in all loads (note that JSQ-MW is likewise heavy-traffic optimal for two levels of data locality only in a specific load). Xie et al. [12] also proposed a new algorithm, the weighted-workload routing and priority scheduling algorithm, which is throughput optimal for any α > β > γ, and is heavy-traffic optimal when β² > αγ, a condition that usually holds in real data centers. Intuitively, this condition states that rack-local service is much faster than remote service. In the next two subsections, JSQ-MaxWeight and the weighted-workload routing and priority scheduling algorithm are discussed in more detail.
V-A Extension of JSQ-MaxWeight
Assuming one queue per server, the extended JSQ-MW is as follows:
Routing: An arriving task is routed to the shortest queue among the servers in the set L̄ (the shortest local queue).
Scheduling: An idle server m is scheduled to a queue in the set

(5)  n*(t) ∈ argmax_n μ_{n,m} Q_n(t),  where μ_{n,m} = α if n = m, μ_{n,m} = β if K(n) = K(m) and n ≠ m, and μ_{n,m} = γ otherwise.
Theorem 1
JSQ-MaxWeight stabilizes the system under any arrival rate vector strictly inside the capacity region Λ. Therefore, JSQ-MaxWeight is throughput optimal.
Proof outline. If the arrival rate vector is strictly inside Λ, then using a quadratic function of the queue lengths as the Lyapunov function, the one-time-slot drift is bounded within a finite subset of the state space of the system and is negative outside this subset, which yields stability of the system by the Foster-Lyapunov theorem [12, 13] (see [21, 22, 23] for other uses of the Foster-Lyapunov theorem in stability proofs).
Theorem 2
The extended JSQ-MaxWeight is heavy-traffic optimal in a specific traffic scenario, but not in all traffic scenarios [12].
V-B Weighted-Workload Routing and Priority Scheduling
Although one queue per server suffices to implement the weighted-workload routing and priority scheduling algorithm in a data center with three levels of data locality, it is easier to describe the algorithm assuming three queues per server. Therefore, assume each server has three queues, in which the tasks that are local, rack-local, and remote to that server are queued separately. The central scheduler maintains the vector of queue lengths (Q_m^l, Q_m^k, Q_m^r) for m = 1, …, M. That is, the first queue of the m-th server holds the tasks that are routed to server m and are local to it, the second queue holds the tasks routed to this server that are rack-local to it, and the remote tasks routed to this server are queued in the third queue.
Having defined the workload on a server, we are ready to give the routing and scheduling policies. The workload on the m-th server is defined as the expected time the server needs to serve all local, rack-local, and remote tasks queued at it:

(6)  W_m(t) = Q_m^l(t)/α + Q_m^k(t)/β + Q_m^r(t)/γ
Weighted-Workload Routing: When a new task arrives, it joins the server with the least weighted workload. More precisely, a new type-L̄ task joins one of the servers in the following set, where ties are broken randomly:

(7)  m*(t) ∈ argmin_m W̃_m(t),  where W̃_m(t) = W_m(t)/α if m ∈ L̄, W_m(t)/β if m ∈ L̄_k, and W_m(t)/γ if m ∈ L̄_r.
If the task is local (rack-local, or remote) to the chosen server m*, it joins the first (second, or third) queue, that is, Q_{m*}^l (Q_{m*}^k, or Q_{m*}^r).
Priority Scheduling: When a server, say m, becomes idle, it gives its next service to the local tasks queued at Q_m^l. If there is no local task available for the idle server, that is Q_m^l = 0, the next service is assigned to the rack-local tasks queued at Q_m^k. Finally, if both the local and rack-local queues are empty, the next service goes to Q_m^r. In summary, the idle server gives the highest priority to local, then rack-local, and finally remote tasks. If all three sub-queues of server m are empty, the server remains idle until a new task joins any of them.
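Assuming the weighted workload scales W_m by the inverse of the rate at which server m would serve the task (an illustrative reading of the rule, with hypothetical rates and a toy 2-rack layout), the routing and priority scheduling steps can be sketched as:

```python
alpha, beta, gamma = 0.9, 0.5, 0.2   # illustrative local / rack-local / remote rates
rack = [0, 0, 1]                     # rack index of each server

def workload(q):                     # expected drain time of a server's sub-queues
    ql, qk, qr = q
    return ql / alpha + qk / beta + qr / gamma

def locality(task_type, m):
    # returns (sub-queue index, service rate) of server m for this task type
    if m in task_type:
        return 0, alpha
    if any(rack[m] == rack[s] for s in task_type):
        return 1, beta
    return 2, gamma

def route(task_type, Q):
    # join the server minimizing workload weighted by the inverse service rate;
    # ties broken by lowest index here (the algorithm breaks them randomly)
    best = min(range(len(Q)), key=lambda m: workload(Q[m]) / locality(task_type, m)[1])
    return best, locality(task_type, best)[0]     # (server, sub-queue index)

def schedule(n, Q):
    # strict priority: local, then rack-local, then remote sub-queue
    for j in range(3):
        if Q[n][j] > 0:
            return j
    return None                      # all three sub-queues empty: stay idle

Q = [[0, 0, 0], [4, 0, 0], [1, 2, 0]]
print(route((0,), Q))    # idle local server 0 wins -> (0, 0)
print(schedule(2, Q))    # server 2 serves its local sub-queue first -> 0
```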
Theorem 3
The weighted-workload routing and priority scheduling algorithm is throughput optimal, as it stabilizes the system under any arrival rate vector in the capacity region.
Proof outline. The one-time-slot drift of the following Lyapunov function is bounded within a finite subset of the state space and is negative outside this subset as long as the arrival rate vector is strictly inside the capacity region (λ ∈ Λ), which yields stability of the system [12, 13]:

(8)  V(t) = Σ_{m=1}^{M} W_m²(t)
Theorem 4
The weighted-workload routing and priority scheduling algorithm is heavy-traffic optimal when β² > αγ [12].
VI Conclusion
In this paper, we first discussed the history of task routing and affinity scheduling in data centers. Two algorithms were then presented for a system with two levels of data locality, JSQ-MaxWeight and the Priority Algorithm for Near-Data Scheduling (Pandas), both of which are throughput optimal. However, Pandas is the only one of the two that is heavy-traffic optimal in all loads. Taking a further step to three levels of data locality, we noted that the Pandas algorithm, known to be heavy-traffic optimal for two levels of data locality, is not even throughput optimal for three levels. The weighted-workload routing and priority scheduling algorithm as well as an extension of JSQ-MaxWeight were then discussed for three levels of data locality. Of these two algorithms, only the weighted-workload routing and priority scheduling algorithm is heavy-traffic optimal.
References
 [1] “Facebook.com,” http://facebook.com.
 [2] “Twitter.com,” http://twitter.com.

 [3] G. Ananthanarayanan, S. Agarwal, S. Kandula, A. Greenberg, I. Stoica, D. Harlan, and E. Harris, “Scarlett: Coping with skewed content popularity in mapreduce clusters,” EuroSys, pp. 287–300, 2011.
 [4] G. Ananthanarayanan, A. Ghodsi, A. Wang, D. Borthakur, S. Kandula, S. Shenker, and I. Stoica, “Pacman: Coordinated memory caching for parallel jobs,” NSDI, 2012.
 [5] M. Pundir, Q. Xie, Y. Lu, C. L. Abad, and R. H. Campbell, “Pandas: Robust locality-aware scheduling with stochastic delay optimality.”
 [6] M. Zaharia, D. Borthakur, J. Sen Sarma, K. Elmeleegy, S. Shenker, and I. Stoica, “Delay scheduling: A simple technique for achieving locality and fairness in cluster scheduling,” in Proceedings of the 5th European Conference on Computer Systems, ser. EuroSys ’10. New York, NY, USA: ACM, 2010, pp. 265–278. [Online]. Available: http://doi.acm.org/10.1145/1755913.1755940
 [7] M. Isard, V. Prabhakaran, J. Currey, U. Wieder, K. Talwar, and A. Goldberg, “Quincy: Fair scheduling for distributed computing clusters,” in Proceedings of the ACM SIGOPS 22Nd Symposium on Operating Systems Principles, ser. SOSP ’09. New York, NY, USA: ACM, 2009, pp. 261–276. [Online]. Available: http://doi.acm.org/10.1145/1629575.1629601
 [8] G. Ananthanarayanan, S. Agarwal, S. Kandula, A. Greenberg, I. Stoica, D. Harlan, and E. Harris, “Scarlett: Coping with skewed content popularity in mapreduce clusters,” in Proceedings of the Sixth Conference on Computer Systems, ser. EuroSys ’11. New York, NY, USA: ACM, 2011, pp. 287–300. [Online]. Available: http://doi.acm.org/10.1145/1966445.1966472
 [9] C. L. Abad, Y. Lu, and R. H. Campbell, “Dare: Adaptive data replication for efficient cluster scheduling,” in Proceedings of the 2011 IEEE International Conference on Cluster Computing, ser. CLUSTER ’11. Washington, DC, USA: IEEE Computer Society, 2011, pp. 159–168. [Online]. Available: http://dx.doi.org/10.1109/CLUSTER.2011.26
 [10] “Apache hadoop,” June 2011.
 [11] C. He, Y. Lu, and D. Swanson, “Matchmaking: A new mapreduce scheduling technique,” CloudCom, 2011.
 [12] Q. Xie, A. Yekkehkhany, and Y. Lu, “Scheduling with multi-level data locality: Throughput and heavy-traffic optimality,” INFOCOM, 2016.
 [13] A. Yekkehkhany, “Near data scheduling for data centers with multi levels of data locality,” arXiv preprint arXiv:1702.07802, 2017.
 [14] M. S. Squillante, C. H. Xia, D. D. Yao, and L. Zhang, “Threshold-based priority policies for parallel-server systems with affinity scheduling,” in Proceedings of the 2001 American Control Conference, vol. 4, 2001, pp. 2992–2999.
 [15] A. Mandelbaum and A. L. Stolyar, “Scheduling flexible servers with convex delay costs: Heavy-traffic optimality of the generalized cμ-rule,” Operations Research, vol. 52, no. 6, pp. 836–855, 2004.
 [16] J. M. Harrison and M. J. Lopez, “Heavy traffic resource pooling in parallelserver systems,” Queueing Systems, vol. 33, pp. 339–368, 1999.
 [17] J. M. Harrison, “Heavy traffic analysis of a system with parallel servers: asymptotic optimality of discrete-review policies,” Ann. Appl. Probab., vol. 8, no. 3, pp. 822–848, 1998. [Online]. Available: http://dx.doi.org/10.1214/aoap/1028903452
 [18] S. L. Bell and R. J. Williams, “Dynamic scheduling of a system with two parallel servers in heavy traffic with resource pooling: asymptotic optimality of a threshold policy,” Ann. Appl. Probab., vol. 11, no. 3, pp. 608–649, 2001. [Online]. Available: http://dx.doi.org/10.1214/aoap/1015345343
 [19] J. M. Harrison, “Heavy traffic analysis of a system with parallel servers: Asymptotic optimality of discretereview policies,” Annals of Applied Probability, vol. 8, no. 3, pp. 822–848, 1998.
 [20] A. L. Stolyar, “Maxweight scheduling in a generalized switch: State space collapse and workload minimization in heavy traffic,” The Annals of Applied Probability, vol. 14, no. 1, pp. 1–53, 2004.
 [21] W. Wang, K. Zhu, L. Ying, J. Tan, and L. Zhang, “Map task scheduling in mapreduce with data locality: Throughput and heavy-traffic optimality,” INFOCOM, 2013.
 [22] Q. Xie and Y. Lu, “Priority algorithm for near-data scheduling: Throughput and heavy-traffic optimality,” INFOCOM, pp. 963–972, 2015.
 [23] A. Ghassami, A. Yekkehkhany, N. Kiyavash, and Y. Lu, “A covert queueing channel in round robin schedulers,” arXiv preprint arXiv:1701.08883, 2017.