1 Background and Problem Statement
We begin with a brief review of the coflow abstraction and the need for non-clairvoyant coflow scheduling, and describe the network model. We then give an overview of existing online coflow schedulers and formally state the problem.
In data-parallel applications such as Hadoop and Spark, the job completion time heavily depends on the completion time of the communication stage [13, 21]. The coflow abstraction was proposed to speed up the communication stage and thereby improve application performance. A coflow is defined as a set of flows between several nodes that accomplish a common task. For example, in map-reduce jobs, the set of all flows from all map tasks to all reduce tasks in a single job forms a typical coflow. The coflow completion time (CCT) is defined as the duration between when the first flow arrives and when the last flow completes. In such applications, improving the CCT matters more for application performance than improving individual flows' completion times (FCT) [19, 22, 25, 32, 33].
Data-parallel directed acyclic graphs (DAGs) typically have multiple stages, which are represented as multiple coflows with dependencies between them. Recent systems (e.g., [23, 3, 31, 41]) employ optimizations that pipeline consecutive computation stages, which removes the barrier at the end of each coflow and makes it difficult to know each coflow's flow sizes beforehand. A recent study further shows various other reasons why it is not very plausible to learn flow sizes from applications; for example, doing so requires changing either the network stack or the applications. Thus, in this paper we focus on non-clairvoyant coflow scheduling, which does not assume knowledge of coflow characteristics such as flow sizes upon coflow arrival.
Non-blocking network fabric
We assume the same non-blocking network fabric model used in recent network designs for coflows [22, 19, 32, 33, 8], where the datacenter network fabric is abstracted as a single non-blocking switch that interconnects all the servers, and each server (computing node) is abstracted as a network port that sends and receives flows. In this model, the ports, i.e., server uplinks and downlinks, are the only sources of contention, as the network core is assumed to be able to sustain all traffic injected into the network. We note that this abstraction simplifies our description and analysis, and is not required or enforced in our evaluation.
1.1 Prior-art on non-clairvoyant coflow scheduling
A classic approach to reducing average CCT is Shortest Coflow First (SCF) (derived from the classic SJF), where the coflow size is loosely defined as the total bytes of the coflow, i.e., the sum of the lengths of all its flows. However, using SCF online is not practical, as it requires prior knowledge of coflow sizes. This is further complicated as coflows arrive and exit dynamically, and by other cluster dynamics such as failures and stragglers.
Aalo was proposed to schedule coflows online without any prior knowledge. The key idea in Aalo is to approximate SCF by learning coflow length using discrete priority queues. In particular, it starts a newly arrived coflow in the highest-priority queue and gradually moves it to lower-priority queues as the total data sent by the coflow exceeds the per-queue thresholds.
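The queue-transition logic can be sketched as follows (a minimal illustrative sketch; the thresholds and function names are our own, not Aalo's actual implementation):

```python
# Sketch of priority-queue-based coflow size learning: a coflow starts
# in the highest-priority queue (index 0) and is demoted once the total
# bytes it has sent crosses the current queue's threshold.

# Exponentially spaced per-queue thresholds in bytes (illustrative).
THRESHOLDS = [10e6, 100e6, 1e9, float("inf")]

def queue_for(bytes_sent):
    """Return the priority-queue index for a coflow given bytes sent so far."""
    for q, limit in enumerate(THRESHOLDS):
        if bytes_sent < limit:
            return q
    return len(THRESHOLDS) - 1

# A newly arrived coflow (0 bytes sent) lands in the highest-priority queue.
assert queue_for(0) == 0
# After sending 50 MB it has crossed the first threshold and is demoted.
assert queue_for(50e6) == 1
```

Large coflows thus sink toward low-priority queues over time, which is how Aalo approximates the shortest-first order without knowing sizes in advance.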
The above idea of learning the order of jobs using priority queues was originally applied to scheduling jobs on a single server. To apply it to scheduling coflows with many constituent flows over many network ports, i.e., in a distributed setting, Aalo uses a global coordinator to assign coflows to logical priority queues, and uses the total bytes sent by all flows of a coflow as its logical “length” when moving coflows across the queues. The logical priority queues are mapped to local priority queues at each port, and the individual local ports act independently in scheduling flows in their local priority queues, e.g., by enumerating flows from the highest- to the lowest-priority queue and using FIFO to order the flows within each queue.
Generally speaking, using multiple priority queues in this way has three effects: (1) Coflow segregation: it segregates long coflows (which will move to low-priority queues) from short coflows (which will finish while in high-priority queues). (2) Finishing short coflows sooner: since high-priority queues receive more bandwidth allocation, short coflows finish sooner than longer ones. (3) Starvation avoidance: using the FIFO policy for intra-queue scheduling avoids starvation, since at every scheduling slot each queue at each port receives a fixed bandwidth allocation, and FIFO ensures that no coflow (i.e., its flows) in a queue is ever moved back.
Similar to Aalo, Graviton also uses a logical priority-queue structure. Unlike Aalo, Graviton uses more sophisticated policies to sort coflows within a queue based on their width (the total number of ports a coflow is present on). Saath is another priority-queue-based online coflow scheduler that improves over Aalo with three high-level design principles: (1) it schedules the flows of a coflow in an all-or-none fashion to prevent them from going out of sync; (2) it incorporates contention, i.e., how many other coflows a coflow shares ports with, into the metric for sorting coflows within a queue; (3) instead of the total coflow size, it uses the length of a coflow's longest flow to determine transitions across priority queues, which helps it decide the correct priority queue of a coflow faster.
1.2 Problem statement
Our goal is to develop an efficient non-clairvoyant coflow scheduler that optimizes the communication performance, in particular the average CCT, of data-intensive applications without prior knowledge, while guaranteeing starvation freedom and work conservation and remaining resilient to network dynamics. Non-clairvoyant coflow scheduling is NP-hard because coflow scheduling is already NP-hard even when all coflows arrive at time 0 and their sizes are known in advance. Thus, practical non-clairvoyant coflow schedulers are approximation algorithms. Our approach is to dynamically prioritize coflows by efficiently learning their flow sizes online.
2 Key Idea
Our new non-clairvoyant coflow scheduler design, Philae, is based on a key observation: a coflow has a spatial dimension, i.e., it typically consists of many flows. We thus propose to explicitly learn coflow sizes online using sampling, a highly effective technique used in large-scale surveys. In particular, Philae preschedules sampled flows, called pilot flows, of each coflow and uses their measured sizes to estimate the coflow size. It then applies SJF or its variants using the estimated coflow sizes.
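The core sampling step can be sketched as follows (a minimal sketch with illustrative function names and sizes; Philae's actual estimator and pilot-selection policy are described later):

```python
# Sketch of sampling-based coflow size estimation: sample a few pilot
# flows per coflow, extrapolate their mean size to all flows, and order
# coflows shortest-first (SJF) by the resulting estimates.
import random

def estimate_coflow_size(flow_sizes, num_pilots, rng=random.Random(0)):
    pilots = rng.sample(range(len(flow_sizes)), num_pilots)  # pick pilot flows
    mean_pilot = sum(flow_sizes[i] for i in pilots) / num_pilots
    return mean_pilot * len(flow_sizes)  # extrapolate the mean to all flows

coflows = {"A": [10] * 100, "B": [50] * 100}  # flow sizes (MB), illustrative
order = sorted(coflows, key=lambda c: estimate_coflow_size(coflows[c], 3))
assert order == ["A", "B"]  # the shorter coflow is scheduled first
```

Note that only the pilot flows must finish before the estimate is available; the estimate is then fixed, unlike the continuous re-learning in priority-queue schemes.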
Developing a complete non-clairvoyant coflow scheduler based on the simple sampling idea raises three questions:
(1) Why is sampling more efficient than priority-queue-based coflow size learning? Would scheduling the remaining flows only after the sampled pilot flows complete adversely affect the coflow completion time?
(2) Will sampling be effective in the presence of skew of flow sizes?
(3) How do we design the complete scheduler architecture?

We answer the first two questions below, and present the complete architecture design in §LABEL:sec:design.
2.1 Why is sampling-based learning more efficient than priority-queue-based learning?
Scheduling pilot flows before the rest of the flows can potentially incur two sources of overhead. First, scheduling pilot flows of a newly arriving coflow consumes port bandwidth, which can delay other coflows (with already-estimated sizes). However, compared to the multi-queue-based approach, the overhead is much smaller for two reasons: (1) Philae schedules only a small subset of the flows (e.g., fewer than 1% for coflows with many flows). (2) Since the CCT of a coflow depends on the completion of its last flow, some of its earlier-finishing flows can be delayed without affecting the CCT. Philae exploits this observation and schedules pilot flows on the least-busy ports to increase the odds that it only affects earlier-finishing flows of other coflows.
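The least-busy-port heuristic amounts to a simple selection over per-port backlogs (a sketch with illustrative data structures, not Philae's actual code):

```python
# Place pilot flows on the ports with the smallest queued backlog, so
# that any delay they cause most likely hits flows that would not have
# determined another coflow's CCT anyway.
def pick_pilot_ports(port_backlog, k):
    """Return the k ports with the smallest queued bytes."""
    return sorted(port_backlog, key=port_backlog.get)[:k]

backlog = {"p0": 500, "p1": 20, "p2": 300, "p3": 80}  # queued bytes per port
assert pick_pilot_ports(backlog, 2) == ["p1", "p3"]
```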
Second, scheduling pilot flows first may elongate the CCT of the newly arriving coflow itself, whose other flows cannot start until the pilot flows finish. This is again typically insignificant for two reasons: (1) A coflow (e.g., from a MapReduce job) typically consists of flows from all sending ports to all receiving ports. Conceptually, pre-scheduling one out of multiple flows at each sender may not delay the coflow's progress at that port, because all flows at that port have to be sent anyway. (2) Coflow scheduling matters most in a busy cluster (when there is a backlog of coflows in the network), in which case the CCT of a coflow is expected to be much higher than if it were the only coflow in the network; hence the piloting overhead is further dwarfed by a coflow's actual CCT.
2.2 Why is sampling effective in the presence of skew?
The flow sizes within a coflow may vary (skew). In this paper we measure skew as a ratio of flow sizes within a coflow; other papers, e.g., Varys, have used metrics such as the coefficient of variation. We use the ratio because it allows us to analyze the learning error without assuming a specific distribution of flow sizes.
Intuitively, if the skew across flow sizes is small, sampling even a small number of pilot flows will be sufficient to yield an accurate estimate. Interestingly, even when the skew across flow sizes is large, our experiments indicate that sampling remains highly effective. In the following, we give both the intuition and the theoretical underpinning for why sampling is effective.
Consider, for example, two coflows and the simple setting where both coflows share the same set of ports. In order to improve the average CCT, we wish to schedule the shorter coflow ahead of the longer coflow. If the total sizes of the two coflows are very different, then even a moderate amount of estimation error of the coflow sizes will not alter their ordering. On the other hand, if the total sizes of the two coflows are close to each other, then indeed the estimation errors will likely alter their ordering. However, in this case since their sizes are not very different anyway, switching the order of these two coflows will not significantly affect the average CCT.
To illustrate the above effect, we show that the gap between the CCT based on sampling and the CCT assuming perfect knowledge is small, even under general flow-size distributions. Specifically, suppose coflows $C_1$ and $C_2$ have $n_1 = \alpha_1 n$ and $n_2 = \alpha_2 n$ flows, respectively, where $\alpha_1$ and $\alpha_2$ are fixed constants. Thus, by taking $n$ to be larger, we will be able to consider wider coflows.
Assume that each flow of $C_1$ (correspondingly, $C_2$) has a size that is i.i.d. across flows and distributed within a bounded interval $[a_1, b_1]$ ($[a_2, b_2]$) with mean $\mu_1$ ($\mu_2$). Let $T^*$ be the total completion time when the exact flow sizes are known in advance. Let $\hat{T}$ be the average CCT obtained by sampling $s_1$ and $s_2$ pilot flows from $C_1$ and $C_2$, respectively. Without loss of generality, we assume that $\alpha_1 \mu_1 \le \alpha_2 \mu_2$, i.e., $C_1$ is the shorter coflow in expectation.
Then, using Hoeffding's inequality, we can bound the gap between $\mathbb{E}[\hat{T}]$ and $T^*$, stated as Equation (1) (see §LABEL:sec:appendix for the detailed proof).
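The bound builds on Hoeffding's inequality in its standard form, stated here for reference (the constants in Equation (1) follow from applying it to the sampled pilot flows of each coflow):

```latex
% Hoeffding's inequality: for i.i.d. samples X_1, \dots, X_s in [a, b]
% with mean \mu and sample mean \bar{X}_s,
P\left( \left| \bar{X}_s - \mu \right| \ge t \right)
  \le 2 \exp\!\left( - \frac{2 s t^2}{(b-a)^2} \right).
```

The deviation probability thus decays exponentially in the number of samples $s$ and in $t^2/(b-a)^2$, i.e., faster when the flow-size range (the skew) is narrower.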
(Note that here we have also used the fact that, since both coflows share the same set of ports and $n$ is large, the CCT is asymptotically proportional to the coflow size.)
Equation (1) can be interpreted as follows. First, due to the first exponential term, the relative gap between $\mathbb{E}[\hat{T}]$ and $T^*$ decreases as $(b_1 - a_1)$ and $(b_2 - a_2)$ decrease. In other words, as the skew of each coflow decreases, sampling becomes more effective. Second, when $s_1$ and $s_2$ are fixed, let $\delta = \alpha_2 \mu_2 - \alpha_1 \mu_1$ denote the gap between the expected (normalized) coflow sizes. If $\delta$ is large (i.e., the two coflow sizes are very different), the value of the exponential function will be small. On the other hand, if $\delta$ is close to zero (i.e., the two coflow sizes are close to each other), the numerator of the second term on the right-hand side will be small. In both cases, the relative gap between $\mathbb{E}[\hat{T}]$ and $T^*$ will also be small, which is consistent with the intuition explained earlier. The largest gap occurs when $\delta$ is on the same order as the sampling error, i.e., $O(1/\sqrt{s_1} + 1/\sqrt{s_2})$. Finally, although these analytical results assume that both coflows share the same set of ports, similar conclusions on the impact of estimation errors due to sampling also apply under more general settings.
The above analytical results suggest that, when $n$ is large, the relative performance gap for CCT is a function of the number of pilot flows sampled for each coflow, but is independent of the total number of flows in each coflow. In practice, large coflows will dominate the total CCT in the system. Thus, these results partly explain why, while in our experiments the number of pilot flows is never more than a small fraction of the total number of flows, the performance of our proposed approach is already very good.
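The width-independence claim can be checked with a quick Monte Carlo experiment (all parameters are illustrative): the relative error of the sampled estimate depends on the pilot count $s$, not on the coflow width $n$.

```python
# Monte Carlo check that the relative error of the sampled coflow-size
# estimate depends on the number of pilot flows s, not on the coflow
# width n: compare a narrow and a much wider coflow whose flow sizes
# are drawn from the same bounded range.
import random

def rel_error(n, s, trials=500, rng=random.Random(42)):
    total_err = 0.0
    for _ in range(trials):
        flows = [rng.uniform(1, 10) for _ in range(n)]   # bounded flow sizes
        est = sum(rng.sample(flows, s)) / s * n          # sampled estimate
        total_err += abs(est - sum(flows)) / sum(flows)
    return total_err / trials

narrow, wide = rel_error(n=50, s=10), rel_error(n=2000, s=10)
# Both widths see a similar relative error for the same pilot count s.
assert abs(narrow - wide) < 0.05
```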
Finally, the above analytical results do not directly tell us how to choose the number of pilot flows, which likely depends on the probability distribution of the flow sizes. In practice, we do not know such a distribution ahead of time. Further, while choosing a larger number of pilot flows reduces the estimation errors, it also incurs higher overhead and delay. Therefore, our design (§LABEL:sec:design) needs practical solutions that carefully address these issues.
Error-correction. Readers familiar with the online learning and multi-armed bandit (MAB) literature [28, 16, 36, 14] will notice that our key idea above does not attempt to correct the errors in the initial sampling step. In particular, we do not use an iterative procedure to refine the initial estimates based on additional samples, e.g., as in the classical UCB (upper-confidence-bound) algorithm. The reason is that, from our preliminary investigation (details are available in our online technical report), we have found that straightforward ways of applying UCB-type algorithms do not work well for minimizing the total completion time. For instance, consider two coflows whose sizes are nearly identical. In order to identify which coflow is smaller, UCB-type algorithms tend to alternately sample both coflows, which leads to nearly round-robin scheduling. While this is desirable for maximizing payoff (as in typical MAB problems), for minimizing completion time it is sub-optimal: we should instead let either one of the coflows run to completion first, before the other coflow starts. Consistent with this intuition, our preliminary simulation results show that adding UCB-type iterative error-correction actually degrades the performance of Philae: the average CCT improvement over Aalo drops from 1.51× to 0.95×. (Further details are available in our online technical report.) While these preliminary results do not preclude the possibility that other iterative sampling algorithms may outperform Philae, they do illustrate that straightforward extensions of UCB-type ideas may not work well. Finding iterative sampling algorithms that outperform Philae remains an interesting direction for future work.
2.3 Scalability analysis
Compared to learning coflow sizes using priority queues (PQ-based) [19, 33], learning coflow sizes by sampling in Philae not only reduces the learning overhead, as discussed in §2.1 and shown in §LABEL:sec:sim:probeOverhead, but also significantly reduces the amount of interaction between the coordinator and the local agents, making the coordinator highly scalable, as summarized in Table 1.
First, PQ-based learning requires much more frequent updates from local agents. PQ-based learning estimates coflow sizes by incrementally moving coflows across priority queues according to the data they have sent so far. As such, the scheduler needs frequent updates (every scheduling interval) of the data sent per coflow from the local agents. In contrast, Philae directly estimates a coflow's size upon the completion of all its pilot flows. The only updates Philae needs from the local agents concern flow completions, which are needed for updating contentions and removing flows from active consideration.
Second, PQ-based learning results in much more frequent rate allocation. In the sampling-based approach, since coflow sizes are estimated only once, coflows are re-ordered only upon coflow completion or arrival events, or, in the case of contention-based policies, when contention changes, which is triggered by the completion of all the flows of a coflow at a port. In contrast, in PQ-based learning, at every interval the coflow data sent are updated and coflow priorities may change, which triggers new rate assignments.
Our scalability experiments in §4.3 confirm that Philae achieves much higher scalability than Aalo.
| | Updates of data sent | Updates of flow completion | Rate calculation |
| Aalo | Periodic (every interval) | Yes | Periodic (every interval) |
| Philae | None | Yes | Event-triggered |
Table 1: Coordinator–local-agent interactions under PQ-based learning (Aalo) and sampling-based learning (Philae).
3 Implementation
We implemented both the Philae and Aalo scheduling policies in the same framework consisting of a global coordinator and local agents (Fig. LABEL:fig:design:arch), in 5.2 KLoC of C++.
Coordinator: The coordinator schedules the coflows based on the operations received from the registering framework. The key implementation challenge for the coordinator is that it needs to be fast in computing and updating the schedules. The Philae coordinator is optimized for speed using a variety of techniques including pipelining, process affinity, and concurrency whenever possible.
Local agents: The local agents update the global coordinator only upon the completion of a flow, along with its length if it is a pilot flow. Local agents schedule the coflows based on the last schedule received from the coordinator, and comply with that schedule until a new one is received. To intercept the packets of the flows, local agents require the compute framework to replace the datasend() and datarecv() APIs with the corresponding Philae APIs, which incurs very small overhead.
Coflow operations: The global coordinator runs independently from, and is not coupled to, any compute framework, which makes it general enough to be used with any framework. It provides RESTful APIs to the frameworks for coflow operations: (a) register() for registering a new coflow when it enters, (b) deregister() for removing a coflow when it exits, and (c) update() for updating coflow status whenever there is a change in the coflow structure, e.g., during task migration and restarts after node failures.
4 Testbed Evaluation
Next, we deployed Philae in a 150-machine Azure cluster and a 900-machine cluster to evaluate its performance and scalability.
Testbed setup: We rerun the FB trace on a Spark-like framework on a 150-node cluster in Microsoft Azure. The coordinator runs on a Standard DS15 v2 server with a 20-core 2.4 GHz Intel Xeon E5-2673 v3 (Haswell) processor and 140 GB memory. The local agents run on D2v2 servers with the same processor, 2 cores, and 7 GB memory. The machines on which the local agents run have 1 Gbps network bandwidth. As in the simulations, our testbed evaluation keeps the same flow lengths and flow ports in trace replay. All experiments use the default parameters and the default pilot-flow selection policy.
4.1 CCT Improvement
In this experiment, we measure the CCT improvement of Philae compared to Aalo. Fig. LABEL:fig:eval:jct-cct-speedUp shows the CDF of the CCT speedup of individual coflows under Philae compared to under Aalo. The average CCT improvement is 1.50×, similar to the results in the simulation experiments. We also observe a 1.63× P50 speedup and an 8.00× P90 speedup.
We also evaluated Philae using the Wide-coflow-only trace. Table 2 shows that Philae achieves a 1.52× improvement in average CCT over Aalo, similar to that using the full FB trace. This is because the improvement in average CCT is dominated by large coflows, which Philae speeds up, and the Wide-coflow-only trace consists of mostly large coflows.
4.2 Job Completion Time
Next, we evaluate how the improvement in CCT affects the job completion time (JCT). In data clusters, different jobs spend different fractions of their total job time in data shuffle. In this experiment, we used 526 jobs, each corresponding to one coflow in the FB trace. The fraction of time that the jobs spent in the shuffle phase follows the same distribution used in Aalo, i.e., 61% of jobs spent less than 25% of their total time in shuffle, 13% spent 25–49%, another 14% spent 50–74%, and the remaining jobs spent over 75% of their total time in shuffle. Fig. LABEL:fig:eval:jct-cct-speedUp shows the CDF of individual speedups in JCT. Across all jobs, Philae reduces the job completion time by 1.16× in the median case and by 7.87× at the 90th percentile. This shows that improved CCT translates into better job completion time. As expected, the improvement in JCT is smaller than the improvement in CCT because job completion time depends on the time spent in both the compute and shuffle (communication) stages, and Philae improves only the communication stage.
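The expected end-to-end effect follows an Amdahl-style relation (a back-of-the-envelope sketch; the CCT speedup value is illustrative):

```python
# Amdahl-style back-of-the-envelope: if a job spends a fraction f of
# its time in shuffle and the shuffle (CCT) speeds up by a factor s,
# the job completion time speeds up by 1 / ((1 - f) + f / s).
def jct_speedup(f, s):
    return 1.0 / ((1.0 - f) + f / s)

s = 1.5  # illustrative CCT speedup factor
# Jobs that barely shuffle see little benefit; shuffle-heavy jobs see
# nearly the full CCT speedup.
assert round(jct_speedup(0.10, s), 3) == 1.034
assert round(jct_speedup(0.90, s), 3) == 1.429
```

This is why the JCT CDF is a compressed version of the CCT CDF, with the compression determined by each job's shuffle fraction.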
| | Rate Calc. | New Rate Send | Update Recv. | Total |
| Philae | 2.99 (5.35) | 4.90 (11.25) | 6.89 (17.78) | 14.80 (28.84) |
| Aalo | 4.28 (4.14) | 17.65 (20.90) | 10.97 (19.98) | 32.90 (34.09) |
Table 3: [Testbed] Average (standard deviation) coordinator CPU time (ms) per scheduling interval in 900-port runs. Philae did not have to calculate and send new rates in 66% of intervals, which contributes to its low average.
4.3 Scalability
Finally, we evaluate the scalability of Philae by comparing its performance with Aalo on a 900-node cluster. To drive the evaluation, we derive a 900-port trace by replicating the FB trace 6 times across ports, i.e., we replicated each job 6 times, keeping the arrival time of each copy the same but assigning sending and receiving ports in increments of 150 (the cluster size of the original trace). We also increased the scheduling interval to 6× the original interval.
Philae achieved a 2.72× (9.78×) speedup in average (P90) CCT over Aalo. The higher speedup compared to the 150-node runs (1.50×) comes from the higher scalability of Philae. In the 900-node runs, Aalo was not able to finish receiving updates, calculating new rates, and updating local agents with the new rates within the scheduling interval in 37% of the intervals, whereas Philae missed the deadline in only 10% of the intervals. For the 150-node runs these values are 16% for Aalo and 1% for Philae. The 21-percentage-point increase in missed scheduling intervals in the 900-node runs under Aalo resulted in local agents executing more frequently with outdated rates. As a result, Philae achieved an even higher speedup in the 900-node runs.
As discussed in §2.3, Aalo's poorer coordinator scalability comes from more frequent updates from local agents and more frequent rate allocation, which result in longer coordinator CPU time in each scheduling interval. Table 3 shows the average coordinator CPU usage per interval and its breakdown. We see that (1) on average Philae spends much less time than Aalo receiving updates from local agents, because Philae does not need updates from local agents at every interval: on average, in every scheduling interval Philae receives updates from 49 local agents whereas Aalo receives them from 429 local agents; and (2) on average Philae spends much less time calculating and sending new rates. This is because rate calculation in Philae is triggered by events, and Philae did not have to flush rates in 66% of the intervals.
4.4 Robustness to network error
As discussed in §3, unlike Aalo, Philae's coordinator does not need constant updates from local agents to sort coflows in priority queues. This simplifies Philae's design and makes it robust to network error. To evaluate this property, we ran Aalo and Philae 5 times each with the same configuration using the FB trace. Table 5 shows the mean-normalized standard deviation of the 10th, 50th, and 90th percentile and the average CCT across the 5 runs. The lower values for Philae indicate that it is more robust to network dynamics than Aalo.
4.5 Resource Utilization
Finally, we evaluate, for both Philae and Aalo, the resource utilization at the coordinator and the local agents (Table 6) in terms of CPU and memory usage. We measure the overheads in two cases: (1) Overall: the average during the entire execution of the trace; (2) Busy: the 90th-percentile utilization, indicating the performance during busy periods when a large number of coflows arrive. As shown in Table 6, Philae agents have utilization similar to Aalo's at the local nodes, where the CPU and memory utilization are minimal even during busy times. The global coordinator of Philae consumes far fewer server resources than Aalo's: its CPU utilization is 3.4× lower than Aalo's on average, and 2.6× lower during busy periods. This is due to Philae's event-triggered communication and sampling-based learning, which significantly lower its communication frequency with local agents compared to Aalo. The lower resource utilization of the global coordinator enables Philae to scale to a larger cluster size than Aalo.
5 Related Work
Coflow scheduling: In this paper, we have shown that Philae outperforms the prior-art non-clairvoyant coflow scheduler Aalo through more efficient online learning of coflow sizes. Aalo was previously shown to outperform earlier non-clairvoyant coflow schedulers: Baraat, by using global coordination, and Orchestra, by avoiding head-of-line blocking.
Clairvoyant coflow schedulers such as Varys and Sincronia assume prior knowledge of coflows upon arrival. Varys runs a shortest-effective-bottleneck-first heuristic for inter-coflow scheduling and performs per-flow rate allocation at the coordinator. Sincronia improves the scalability of the centralized coordinator of Varys by only calculating the coflow ordering at the coordinator (by solving an LP) and offloading flow-rate allocation to individual local agents. Sincronia is orthogonal to Philae; once coflow sizes are learned through sampling, ideas from Sincronia can be adopted in Philae to order coflows and offload rate allocation to local ports. Prioritized work conservation in Sincronia helps mitigate starvation, which arises as a result of ordering coflows in the optimal order; however, it still does not guarantee complete freedom from starvation. CODA tackles the orthogonal problem of identifying the flows of individual coflows online.
However, recent studies [26, 19] have shown various reasons why it is not very plausible to learn flow sizes from applications beforehand. For example, many applications stream data as soon as data are generated and thus the application does not know the flow sizes until flow completion, and learning flow sizes from applications requires changing either the network stack or the applications.
Flow scheduling: There exists a rich body of prior work on flow scheduling. Efforts to minimize flow completion time (FCT), both with prior information (e.g., pFabric) and without prior information (e.g., Fastpass), fall short in minimizing CCTs, which depend on the completion of the last flow. Similarly, Hedera and MicroTE schedule flows with the goal of reducing the overall FCT, which again differs from reducing the overall CCT of coflows.
Job scheduling: There has been much work on scheduling in analytics systems and storage at scale by improving speculative tasks [47, 13, 12], improving locality [45, 11], and exploiting end-point flexibility [18, 43]. The coflow abstraction is complementary to these works and can benefit from them; combining coflows with them remains future work.
6 Conclusion
State-of-the-art online coflow schedulers approximate the classic SJF by implicitly learning coflow sizes and pay a high penalty for large coflows. We propose the novel idea of sampling in the spatial dimension of coflows to explicitly and efficiently learn coflow sizes online, enabling efficient online SJF scheduling. Our extensive simulation and testbed experiments show that the new design offers significant performance improvement over the prior art. Further, the sampling-in-the-spatial-dimension technique can be generalized to other distributed scheduling problems such as cluster job scheduling. We have made our simulator publicly available at https://github.com/coflowPhilae/simulator.
-  Apache hadoop. http://hadoop.apache.org.
-  Apache spark. http://spark.apache.org.
-  Apache tez. http://tez.apache.org.
-  Coflow trace from facebook datacenter. https://github.com/coflow/coflow-benchmark.
-  Microsoft azure. http://azure.microsoft.com.
-  Philae simulator. https://github.com/coflowPhilae/simulator.
-  Philae tech report. https://github.com/coflowPhilae/techreport-ton.
-  Saksham Agarwal, Shijin Rajakrishnan, Akshay Narayan, Rachit Agarwal, David Shmoys, and Amin Vahdat. Sincronia: Near-optimal network design for coflows. SIGCOMM ’18, pages 16–29. ACM, 2018.
-  Mohammad Al-Fares, Sivasankar Radhakrishnan, Barath Raghavan, Nelson Huang, and Amin Vahdat. Hedera: Dynamic flow scheduling for data center networks. NSDI’10, pages 19–19. USENIX, 2010.
-  Mohammad Alizadeh, Shuang Yang, Milad Sharif, Sachin Katti, Nick McKeown, Balaji Prabhakar, and Scott Shenker. pfabric: Minimal near-optimal datacenter transport. SIGCOMM ’13, pages 435–446. ACM.
-  Ganesh Ananthanarayanan, Sameer Agarwal, Srikanth Kandula, Albert Greenberg, Ion Stoica, Duke Harlan, and Ed Harris. Scarlett: Coping with skewed content popularity in mapreduce clusters. EuroSys ’11, pages 287–300, New York, NY, USA, 2011. ACM.
-  Ganesh Ananthanarayanan, Ali Ghodsi, Scott Shenker, and Ion Stoica. Effective straggler mitigation: Attack of the clones. nsdi’13, pages 185–198, Berkeley, CA, USA, 2013. USENIX.
-  Ganesh Ananthanarayanan, Srikanth Kandula, Albert Greenberg, Ion Stoica, Yi Lu, Bikas Saha, and Edward Harris. Reining in the outliers in map-reduce clusters using mantri. OSDI’10, pages 265–278, Berkeley, CA, USA, 2010. USENIX Association.
-  Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235–256, 2002.
-  Theophilus Benson, Ashok Anand, Aditya Akella, and Ming Zhang. Microte: Fine grained traffic engineering for data centers. CoNEXT ’11, pages 8:1–8:12, New York, NY, USA, 2011.
-  Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends® in Machine Learning, 5(1):1–122, 2012.
-  Inho Cho, Keon Jang, and Dongsu Han. Credit-scheduled delay-bounded congestion control for datacenters. SIGCOMM ’17, pages 239–252, New York, NY, USA, 2017. ACM.
-  Mosharaf Chowdhury, Srikanth Kandula, and Ion Stoica. Leveraging endpoint flexibility in data-intensive clusters. SIGCOMM ’13, pages 231–242, New York, NY, USA, 2013. ACM.
-  Mosharaf Chowdhury and Ion Stoica. Efficient coflow scheduling without prior knowledge. SIGCOMM ’15, pages 393–406. ACM.
-  Mosharaf Chowdhury and Ion Stoica. Coflow: A networking abstraction for cluster applications. HotNets-XI, pages 31–36, New York, 2012.
-  Mosharaf Chowdhury, Matei Zaharia, Justin Ma, Michael I. Jordan, and Ion Stoica. Managing data transfers in computer clusters with orchestra. SIGCOMM ’11, pages 98–109. ACM, 2011.
-  Mosharaf Chowdhury, Yuan Zhong, and Ion Stoica. Efficient coflow scheduling with varys. SIGCOMM ’14, pages 443–454, 2014.
-  Tyson Condie, Neil Conway, Peter Alvaro, Joseph M. Hellerstein, Khaled Elmeleegy, and Russell Sears. Mapreduce online. NSDI’10, pages 21–21, Berkeley, CA, USA, 2010. USENIX.
-  Jeffrey Dean and Sanjay Ghemawat. Mapreduce: Simplified data processing on large clusters. In Commun. ACM, volume 51, pages 107–113. ACM, January 2008.
-  Fahad R. Dogar, Thomas Karagiannis, Hitesh Ballani, and Antony Rowstron. Decentralized task-aware scheduling for data center networks. SIGCOMM ’14, pages 431–442, New York, NY, USA, 2014. ACM.
-  Vojislav Dukic, Sangeetha Abdu Jyothi, Bojan Karlas, Muhsen Owaida, Ce Zhang, and Ankit Singla. Is advance knowledge of flow sizes a plausible assumption? NSDI, pages 565–580, Boston, MA, 2019. USENIX.
-  Dror G. Feitelson and Morris A. Jette. Improved utilization and responsiveness with gang scheduling. IPPS ’97, pages 238–261, London, UK, UK, 1997. Springer-Verlag.
-  John Gittins, Kevin Glazebrook, and Richard Weber. Multi-armed bandit allocation indices. John Wiley & Sons, 2011.
-  Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
-  Xin Sunny Huang, Xiaoye Steven Sun, and T.S. Eugene Ng. Sunflow: Efficient optical circuit scheduling for coflows. CoNEXT ’16. ACM.
-  Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, and Dennis Fetterly. Dryad: Distributed data-parallel programs from sequential building blocks. EuroSys ’07, pages 59–72. ACM, 2007.
-  Akshay Jajoo, Rohan Gandhi, and Y. Charlie Hu. Graviton: Twisting space and time to speed-up coflows. In 8th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 16), Denver, CO, 2016. USENIX.
-  Akshay Jajoo, Rohan Gandhi, Y. Charlie Hu, and Cheng-Kok Koh. Saath: Speeding up coflows by exploiting the spatial dimension. In Proceedings of the 13th International Conference on Emerging Networking EXperiments and Technologies, CoNEXT ’17, pages 439–450. ACM.
-  Akshay Jajoo, Y. Charlie Hu, and Xiaojun Lin. Your coflow has many flows: Sampling them for fun and speed. In 2019 USENIX Annual Technical Conference, pages 833–848, Renton, WA.
-  Virajith Jalaparti, Peter Bodik, Ishai Menache, Sriram Rao, Konstantin Makarychev, and Matthew Caesar. Network-aware scheduling for data-parallel jobs: Plan when you can. SIGCOMM ’15, pages 407–420. ACM.
-  Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
-  David A. Lifka. The anl/ibm sp scheduling system. IPPS ’95, pages 295–303, London, UK, UK, 1995. Springer-Verlag.
-  Masoud Moshref, Minlan Yu, Ramesh Govindan, and Amin Vahdat. Trumpet: Timely and precise triggers in data centers. SIGCOMM ’16, pages 129–143, New York, NY, USA, 2016. ACM.
-  Paul S. Levy and Stanley Lemeshow. Sampling of Populations: Methods and Applications. Wiley, 4 edition, Jun 2012.
-  Jonathan Perry, Amy Ousterhout, Hari Balakrishnan, Devavrat Shah, and Hans Fugal. Fastpass: A centralized ”zero-queue” datacenter network. SIGCOMM ’14, pages 307–318. ACM, 2014.
-  Christopher J. Rossbach, Yuan Yu, Jon Currey, Jean-Philippe Martin, and Dennis Fetterly. Dandelion: A compiler and runtime for heterogeneous systems. SOSP ’13, pages 49–68, New York, 2013.
-  Uwe Schwiegelshohn and Ramin Yahyapour. Analysis of first-come-first-serve parallel job scheduling. SODA ’98, pages 629–638, Philadelphia, PA, USA, 1998. Society for Industrial and Applied Mathematics.
-  David Shue, Michael J. Freedman, and Anees Shaikh. Performance isolation and fairness for multi-tenant cloud storage. OSDI’12, pages 349–362, Berkeley, CA, USA, 2012.
-  Abraham Silberschatz, Peter B. Galvin, and Greg Gagne. Process Scheduling. Operating System Concepts. John Wiley & Sons, 8 edition.
-  Matei Zaharia, Dhruba Borthakur, Joydeep Sen Sarma, Khaled Elmeleegy, Scott Shenker, and Ion Stoica. Delay scheduling: A simple technique for achieving locality and fairness in cluster scheduling. EuroSys ’10, pages 265–278, New York, NY, USA, 2010. ACM.
-  Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, and Ion Stoica. Spark: Cluster computing with working sets. HotCloud’10, pages 10–10, Berkeley, CA, USA, 2010.
-  Matei Zaharia, Andy Konwinski, Anthony D. Joseph, Randy Katz, and Ion Stoica. Improving mapreduce performance in heterogeneous environments. OSDI’08, pages 29–42, Berkeley, CA, USA, 2008. USENIX.
-  Hong Zhang, Li Chen, Bairen Yi, Kai Chen, Mosharaf Chowdhury, and Yanhui Geng. Coda: Toward automatically identifying and scheduling coflows in the dark. SIGCOMM ’16, pages 160–173. ACM.