1 Introduction
Work stealing is an efficient and popular paradigm for scheduling multithreaded computations. While its practical benefits have been known for decades [4, 8] and several researchers have found applications of the paradigm [2, 5, 9, 10], Blumofe and Leiserson [3] were the first to give a theoretical analysis of work stealing. Their scheduler executes a fully strict (i.e., well-structured) multithreaded computation on $P$ processors within an expected time of $T_1/P + O(T_\infty)$, where $T_1$ is the minimum serial execution time of the multithreaded computation (the work of the computation) and $T_\infty$ is the minimum execution time with an infinite number of processors (the span of the computation).
In multithreaded computations, it sometimes occurs that a processor performs some computations and stores the results in its cache. Therefore, a work-stealing algorithm could potentially benefit from exploiting locality, i.e., having processors work on their own work as much as possible. Indeed, an experiment by Acar et al. [1] demonstrates that exploiting locality can improve the performance of the work-stealing algorithm by up to 80%. Similarly, Guo et al. [6] found that locality-aware scheduling can achieve up to 2.6x speedup over locality-oblivious scheduling. In addition, work-stealing strategies that exploit locality have been proposed. Hierarchical work stealing, considered by Min et al. [11] and Quintin and Wagner [12], contains mechanisms that find the nearest victim thread to preserve locality and determine the amount of work to steal based on the locality of the victim thread. More recently, Paudel et al. [13] explored a selection of tasks based on application-level task locality rather than hardware memory topology.
In this paper, we investigate a variant of the work-stealing algorithm that we call the localized work-stealing algorithm. In the localized work-stealing algorithm, when a processor is free, it makes a steal attempt to get back its own work. We call this type of steal a steal-back. We show that the expected running time of the algorithm is $T_1/P + O(T_\infty P)$, and that under the "even distribution of free agents assumption", the expected running time of the algorithm is $T_1/P + O(T_\infty \lg P)$. In addition, we obtain another running-time bound based on ratios between the sizes of serial tasks in the computation. If $M$ denotes the maximum ratio between the largest and the smallest serial tasks of a processor after removing a total of $k$ serial tasks across all processors from consideration, then the expected running time of the algorithm is $T_1/P + O(T_\infty M + hk/P)$, where $h \le T_\infty$ denotes the height of a single processor's task tree.
This paper is organized as follows. Section 2 introduces the setting that we consider throughout the paper. Section 3 analyzes the localized work-stealing algorithm using the delay-sequence argument. Section 4 analyzes the algorithm using amortization arguments. Section 5 considers variants of the localized work-stealing algorithm. Finally, Section 6 concludes and suggests directions for future work.
2 Localized Work-Stealing Algorithm
Consider a setting with $P$ processors. Each processor owns some pieces of work, which we call serial tasks. Each serial task takes a positive integer amount of time to complete, which we define as the size of the serial task. We assume that different serial tasks can be done in parallel and model the work of each processor as a binary tree whose leaves are the serial tasks of that processor. The trees are balanced in terms of the number of serial tasks on each branch, but the order in which the tasks occur in the binary tree is assumed to be given to us. We then connect the roots as a binary tree of height $\lg P$, so that we obtain a larger binary tree whose leaves are the serial tasks of all $P$ processors.
As usual, we define $T_1$ as the work of the computation, and $T_\infty$ as the span of the computation. The span corresponds to the height of the aforementioned larger binary tree plus the size of the largest serial task. In addition, we define $h$ as the height of the tree not including the part connecting the processors (of height $\lg P$) at the top or the serial tasks at the bottom. Since $h$ corresponds to a smaller part of the tree than $T_\infty$, we have $h \le T_\infty$.
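To make the model concrete, the following sketch computes the work and span of a computation in this tree model. The function name, the list-of-lists encoding, and the convention that each tree edge contributes one unit of depth are assumptions of this sketch, not part of the paper:

```python
import math

def work_and_span(task_sizes):
    """T1 and T_infty for the tree model above.

    task_sizes[i] holds the sizes of processor i's serial tasks.
    (Encoding and names are illustrative assumptions.)
    """
    P = len(task_sizes)
    work = sum(sum(ts) for ts in task_sizes)          # T1: total task size
    top = math.ceil(math.log2(P)) if P > 1 else 0     # tree joining the P roots
    inner = max((math.ceil(math.log2(len(ts))) if len(ts) > 1 else 0)
                for ts in task_sizes)                 # tallest per-processor tree
    largest = max(s for ts in task_sizes for s in ts)
    # Span: height of the full tree plus the size of the largest serial task.
    return work, top + inner + largest
```

For instance, two processors owning tasks of sizes [1, 1, 1, 1] and [2, 2] give $T_1 = 8$ and $T_\infty = 1 + 2 + 2 = 5$.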
The randomized work-stealing algorithm [3] suggests that whenever a processor is free, it should "steal" randomly from a processor that still has work left to do. In our model, stealing means taking away one of the two main branches of the tree corresponding to a particular processor, in particular, the branch that the processor is not working on. The randomized work-stealing algorithm performs $O(P(T_\infty + \lg(1/\epsilon)))$ steal attempts with probability at least $1 - \epsilon$, and the execution time is $T_1/P + O(T_\infty + \lg P + \lg(1/\epsilon))$ with probability at least $1 - \epsilon$.

This paper investigates a localized variant of the work-stealing algorithm. In this variant, whenever a processor is free, it first checks whether some other processors are working on its work. If so, it "steals back" randomly only from these processors. Otherwise, it steals randomly as usual. We call the two types of steal a general steal and a steal-back. The intuition behind this variant is that sometimes a processor performs some computations and stores the results in its cache. Therefore, a work-stealing algorithm could potentially benefit from exploiting locality, i.e., having processors work on their own work as much as possible.
We make a simplifying assumption that each processor maintains a list of the other processors that are working on its work. When a general steal occurs, the stealer adds its name to the list of the owner of the serial task that it has just stolen (not necessarily the same as the processor from which it has just stolen). For example, if processor $A$ steals a serial task owned by processor $B$ from processor $C$, then $A$ adds its name to $B$'s list (and not $C$'s list). When a steal-back is unsuccessful, the owner removes the name of the target processor from its list, since the target processor has finished the owner's work.
An example of an execution of the localized work-stealing algorithm can be found in [14]. We assume that the overhead for maintaining the list and dealing with contention for steal-backs is constant. This assumption is reasonable because adding (and later removing) the name of a processor to a list is done when a general steal occurs, and hence can be amortized against general steals. Choosing a random processor from the list to steal back from takes constant time. When multiple processors attempt to steal back from the same processor simultaneously, we allow an arbitrary processor to succeed and the remaining processors to fail, and hence do not require extra processing time.
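A minimal sketch of the steal decision just described might look as follows. The data-structure names and the encoding of the owner lists are hypothetical; a real implementation would need concurrent deques and synchronization:

```python
import random

def choose_victim(free_proc, owner_lists, busy):
    """One steal decision in the localized algorithm.

    owner_lists[p]: processors currently working on p's work (p's list).
    busy: set of processors that still have stealable work.
    """
    thieves = owner_lists[free_proc]
    if thieves:
        # Steal-back: target a random processor holding our own work.
        return random.choice(thieves), "steal-back"
    candidates = [p for p in busy if p != free_proc]
    if not candidates:
        return None, "idle"
    # General steal: uniform over the other processors with work left.
    return random.choice(candidates), "general"
```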
3 Delay-Sequence Argument
In this section, we apply the delay-sequence argument to establish an upper bound on the running time of the localized work-stealing algorithm. The delay-sequence argument is used in [3] to show that the randomized work-stealing algorithm performs $O(P(T_\infty + \lg(1/\epsilon)))$ steal attempts with probability at least $1 - \epsilon$. We show that under the "even distribution of free agents assumption", the expected running time of the algorithm is $T_1/P + O(T_\infty \lg P)$. We also show a weaker bound that holds without the assumption: the expected running time of the algorithm is $T_1/P + O(T_\infty P)$.
Since the amount of work done in a computation is always given by $T_1$, independent of the sequence of steals, we focus on estimating the number of steals. We start with the following definition.
Definition 1
The even distribution of free agents assumption is the assumption that when there are $r$ owners left (and thus $P - r$ free agents), the free agents are evenly distributed working on the work of the owners. That is, each owner has $(P - r)/r$ free agents working on its work.
While this assumption might not hold in the localized workstealing algorithm as presented here, it is intuitively more likely to hold under the hashing modification presented in Section 5. When the assumption does not hold, we obtain a weaker bound as given in Theorem 3.3.
Before we begin the proof of our theorem, we briefly summarize the delay-sequence argument as used by Blumofe and Leiserson [3]. The intuition behind the delay-sequence argument is that in a random process in which multiple paths of the process occur simultaneously, such as work stealing, there exists some path that finishes last. We call this path the critical path. The goal of the delay-sequence argument is to show that it is unlikely that the process takes a long time to finish by showing that it is unlikely that the critical path takes a long time to finish. To this end, we break down the process into rounds. We define a round so that in each round, there is a constant probability that the critical path is shortened. (In the case of work stealing, this means there exists a steal on the critical path.) This will allow us to conclude that there are not too many rounds, and consequently not too many steals in the process.
Theorem 3.1
With the even distribution of free agents assumption, the number of steal attempts is $O(P(T_\infty + \lg(1/\epsilon)) \lg P)$ with probability at least $1 - \epsilon$, and the expected number of steal attempts is $O(P T_\infty \lg P)$.
Proof
Consider any processor. At timestep $t$, let $s_t$ denote the number of general steals occurring at that timestep, let $c_t$ denote the number of processors working on the considered processor's work at that timestep, and let $X_t$ be the random variable with $X_t = 1$ if the considered processor performs a steal-back at timestep $t$, and $X_t = 0$ otherwise.

We define a round to be a consecutive number of timesteps such that

$\sum_{t \in \text{round}} \left( \frac{s_t}{P} + \frac{X_t}{c_t} \right) \ge 1,$

and such that this inequality is not satisfied if we remove the last timestep from the round. Note that this condition is analogous to the condition of a round in [3], where the number of steals in a round is between $3P$ and $4P$. Here we have the term $s_t/P$ corresponding to general steals and the term $X_t/c_t$ corresponding to steal-backs.
We define the critical path of the processor to be the path from the top of its binary tree to the serial task of the processor whose execution finishes last. We show that any round has a probability of at least $1 - 1/e$ of reducing the length of the critical path.
We compute the probability that a round does not reduce the length of the critical path. Each general steal has a probability of at least $1/P$ of stealing off the critical path and thus reducing its length. Each steal-back by the processor has a probability of $1/c_t$ of reducing the length of the critical path, since the steal-back targets a processor chosen uniformly at random among the $c_t$ processors working on its work. At timestep $t$, the probability of not reducing the length of the critical path is therefore at most

$\left(1 - \frac{1}{P}\right)^{s_t} \left(1 - \frac{1}{c_t}\right)^{X_t} \le e^{-s_t/P} \cdot e^{-X_t/c_t},$

where we used the inequality $1 + x \le e^x$ for all real numbers $x$. Therefore, the probability of not reducing the length of the critical path during the whole round is at most

$\exp\left(-\sum_{t \in \text{round}} \left(\frac{s_t}{P} + \frac{X_t}{c_t}\right)\right) \le e^{-1}.$
Note that this bound remains true even when there are concurrent thieves, since we are concerned with the probability that in a given round the length of the critical path is not reduced. If there are concurrent thieves trying to make a steal on the critical path, one of them will be successful, and the other unsuccessful thieves do not play a role in our analysis.
With this definition of a round, we can now apply the delay-sequence argument as in [3]. Note that in a single timestep $t$, we have $s_t/P \le 1$ and $X_t/c_t \le 1$. Consequently, in every round, we have

$1 \le \sum_{t \in \text{round}} \left(\frac{s_t}{P} + \frac{X_t}{c_t}\right) \le 3.$

Suppose that over the course of the whole execution, we have $\sum_t (s_t/P + X_t/c_t) \ge R$, where $R = c(T_\infty + \lg(1/\epsilon))$ for some sufficiently large constant $c$. Then there must be at least $R/3$ rounds. Since each round has a probability of at most $e^{-1}$ of not reducing the length of the critical path, and the critical path has length at most $T_\infty$, the delay-sequence argument yields that the probability that $\sum_t (s_t/P + X_t/c_t) \ge R$ is at most $\epsilon$.
We apply the same argument to every processor. Suppose without loss of generality that processor 1's work is completed first, then processor 2's work, and so on, up to processor $P$'s work, and let $t_i$ denote the timestep at which processor $i$'s work is completed. Let $S_i$ denote the number of general steals up to timestep $t_i$, and let $R_i$ denote the value of the random variable $\sum_{t \le t_i} (s_t/P + X_t/c_t)$ corresponding to processor $i$ (recall that $c_t$ denotes the number of processors working on processor $i$'s work at timestep $t$). In particular, $S_P$ is the total number of general steals during the execution, which we also denote by $S$. By the argument above, with probability at least $1 - \epsilon$ we have

$R_i = O(T_\infty + \lg(1/\epsilon)).$

Now we use our even distribution of free agents assumption. When processor $i$ still has work left, there are at least $P - i + 1$ owners left, so the assumption means that when processor $i$ steals back, there are at most $P/(P - i + 1)$ processors working on its work. Hence $c_t \le P/(P - i + 1)$ whenever $t \le t_i$. Letting $B_i$ be the number of steal-backs performed by processor $i$, we have

$\frac{S_i}{P} + \frac{(P - i + 1) B_i}{P} \le R_i = O(T_\infty + \lg(1/\epsilon)).$

For processor $P$, this says

$\frac{S}{P} + \frac{B_P}{P} = O(T_\infty + \lg(1/\epsilon)).$

In particular, we have

$S = O(P(T_\infty + \lg(1/\epsilon))).$

For processor $i$, we have

$\frac{(P - i + 1) B_i}{P} = O(T_\infty + \lg(1/\epsilon)).$

Since this holds for every $i$, we have

$B_i = O\left(\frac{P}{P - i + 1} (T_\infty + \lg(1/\epsilon))\right).$

Since $\sum_{i=1}^{P} \frac{1}{P - i + 1} = \sum_{j=1}^{P} \frac{1}{j}$ grows as $O(\lg P)$, adding up the estimates for each of the $P$ processors and using the union bound, we find that with probability at least $1 - P\epsilon$, the total number of steal attempts is

$S + \sum_{i=1}^{P} B_i = O(P(T_\infty + \lg(1/\epsilon)) \lg P).$

Substituting $\epsilon$ with $\epsilon/P$ yields the desired bound, since $\lg(P/\epsilon) = \lg P + \lg(1/\epsilon) = O(T_\infty + \lg(1/\epsilon))$ (recall that $T_\infty \ge \lg P$).
Since the tail of the distribution decreases exponentially, the expectation bound follows.
The bound on the execution time follows from Theorem 3.1.
Theorem 3.2
With the even distribution of free agents assumption, the expected running time, including scheduling overhead, is $T_1/P + O(T_\infty \lg P)$. Moreover, for any $\epsilon > 0$, with probability at least $1 - \epsilon$, the execution time on $P$ processors is $T_1/P + O((T_\infty + \lg(1/\epsilon)) \lg P)$.
Proof
The amount of work is $T_1$, and Theorem 3.1 gives a bound on the number of steal attempts. We add up the two quantities and divide by $P$ to complete the proof.
Without the even distribution of free agents assumption, we obtain a weaker bound, as the following theorem shows.
Theorem 3.3
The number of steal attempts is $O(P^2(T_\infty + \lg(1/\epsilon)))$ with probability at least $1 - \epsilon$.
Proof
The proof proceeds as in Theorem 3.1, except that without the even distribution of free agents assumption, we can only bound the number of processors working on processor $i$'s work by $P$, so that $c_t \le P$ whenever $t \le t_i$. This gives $B_i = O(P(T_\infty + \lg(1/\epsilon)))$ for each $i$, and summing over the $P$ processors replaces the factor of $\lg P$ by a factor of $P$.

Again, the bound on the execution time follows from Theorem 3.3.
Theorem 3.4
The expected running time of the localized work-stealing algorithm, including scheduling overhead, is $T_1/P + O(T_\infty P)$. Moreover, for any $\epsilon > 0$, with probability at least $1 - \epsilon$, the execution time on $P$ processors is $T_1/P + O(P(T_\infty + \lg(1/\epsilon)))$.
Proof
The amount of work is $T_1$, and Theorem 3.3 gives a bound on the number of steal attempts. We add up the two quantities and divide by $P$ to complete the proof.
Remark 1
In the delay-sequence argument, it is not sufficient to consider the critical path of only one processor (e.g., the processor that finishes last).
For example, suppose that there are 3 processors, $A$, $B$, and $C$. $A$ owns 50 serial tasks of size 1 and 1 serial task of size 100, $B$ owns 1 serial task of size 1 and 1 serial task of size 1000, and $C$ owns no serial task. At the beginning of the execution, $C$ has a probability of 1/2 of stealing from $A$. If it steals from $A$ and gets stuck with the serial task of size 100, $A$ will perform several steal-backs from $C$, while the critical path is in $B$'s subtree.
Hence, the steal-backs by $A$ do not contribute toward reducing the length of the critical path.
We briefly discuss the scalability of our localized work-stealing strategy. The bound $T_1/P + O(T_\infty)$ provided by Blumofe and Leiserson [3] means that when $P = O(T_1/T_\infty)$, we achieve linear speedup, i.e., a running time of $O(T_1/P)$. Indeed, when $P = O(T_1/T_\infty)$, we have that $T_\infty = O(T_1/P)$, which implies that the term $T_1/P$ is the dominant term in the sum $T_1/P + O(T_\infty)$. On the other hand, for our bound of $T_1/P + O(T_\infty P)$, when $P = O(\sqrt{T_1/T_\infty})$, we have that $T_\infty P = O(T_1/P)$, and hence the term $T_1/P$ dominates in the sum $T_1/P + O(T_\infty P)$. As a result, we achieve linear speedup in localized work stealing when $P = O(\sqrt{T_1/T_\infty})$. In other words, we have square-rooted the effective parallelism. Thus the application scales, but not as readily as in vanilla randomized work stealing.
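As a numeric illustration of this square-root effect (the function below is ours, not from the paper), the largest processor counts that still give linear speedup under the two bounds can be compared directly:

```python
import math

def max_linear_speedup(T1, Tinf):
    """Roughly the largest P giving linear speedup under each bound:
    vanilla work stealing needs P = O(T1/Tinf), while the localized
    bound T1/P + O(Tinf * P) needs P = O(sqrt(T1/Tinf))."""
    parallelism = T1 // Tinf
    return parallelism, math.isqrt(parallelism)
```

For $T_1 = 10^6$ and $T_\infty = 10$, vanilla work stealing scales to about 100,000 processors, while the localized bound guarantees linear speedup only up to about 316.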
4 Amortization Analysis
In this section, we apply amortization arguments to obtain bounds on the running time of the localized work-stealing algorithm. We show that if $M$ denotes the maximum ratio between the largest and the smallest serial tasks of a processor after removing a total of $k$ serial tasks across all processors from consideration, then the expected running time of the algorithm is $T_1/P + O(T_\infty M + hk/P)$, with $h$ as defined in Theorem 4.2.
We begin with a simple bound on the number of stealbacks.
Theorem 4.1
The number of steal-backs is at most $T_1 + O(P T_\infty)$ with high probability.
Proof
Every successful steal-back can be amortized against the work done by the stealer in the timestep following the steal-back. Every unsuccessful steal-back can be amortized against a general steal. Indeed, recall our assumption that after each unsuccessful steal-back, the target processor is removed from the owner's list. Hence each general steal can generate at most one unsuccessful steal-back. Since there are at most $O(P T_\infty)$ general steals with high probability, we obtain the desired bound.
The next theorem amortizes each steal-back against general steals, using the height of the tree to bound the number of steal-backs that each general steal can generate.
Theorem 4.2
Let $S$ denote the number of general steals in the computation, and let $h$ denote the height of the tree not including the part connecting the processors (of height $\lg P$) at the top or the serial tasks at the bottom. (In particular, $h \le T_\infty$.) Then there are at most $(h+1)S$ steal-back attempts.
Proof
Suppose that a processor $A$ steals back from another processor $B$. This means that earlier, $B$ performed a general steal which resulted in $B$ working on $A$'s work, and hence in this steal-back. We amortize the steal-back against that general steal. Each steal-back takes away half of the stolen subtree that $B$ is holding, and this subtree has height at most $h$, so each general steal generates at most $h$ steal-backs (or $h + 1$, to be more precise, since there can be one additional unsuccessful steal-back after $B$ has completed all of $A$'s work and before $A$ erases $B$'s name from its list). Since there are $S$ general steals in our computation, there are at most $(h+1)S$ steal-back attempts.
After $B$ performed the general steal, it is possible that some other processor $C$ makes a general steal on $B$ and takes away part of $A$'s work. This does not hurt our analysis. When $A$ steals back from $C$, we amortize the steal-back against the general steal that $C$ made on $B$, not the general steal that $B$ made earlier.
Since there are at most $O(P T_\infty)$ general steals with high probability, Theorem 4.2 shows that there are at most $O(h P T_\infty)$ steals in total with high probability.
The next theorem again amortizes each steal-back against general steals, but this time also using the sizes of the serial tasks to bound the number of steal-backs that each general steal can generate.
Theorem 4.3
Define $S$ and $h$ as in Theorem 4.2, and let $k$ be any positive integer. Remove a total of at most $k$ serial tasks from consideration. (For example, it is a good idea to exclude the $k$ largest or the $k$ smallest serial tasks.) For each processor $i$, let $M_i$ denote the ratio between its largest and smallest serial tasks after the removal. Let $M = \max_{1 \le i \le P} M_i$. Then the total number of steal-back attempts is $O(SM + hk)$.
Proof
There can be at most $hk$ steal-backs performed on subtrees that include one of the $k$ removed serial tasks, since each such subtree has height at most $h$.
Consider any other steal-back that processor $A$ performs on processor $B$. It is performed against a subtree that does not include one of the $k$ removed serial tasks, so the sizes of the serial tasks in the subtree differ by a factor of at most $M$. Since the steal-back takes away half of the serial tasks of the subtree, it obtains at least a $\frac{1}{M+1}$ fraction of the total work in that subtree, leaving at most a $\frac{M}{M+1}$ fraction of the total work in $B$'s subtree. We amortize the steal-back against the general steal that $B$ performed earlier to obtain $A$'s work.
How many steal-backs can that general steal generate? We first assume that there are no general steals performed on $A$ or $B$ during the steal-backs. Then, $A$ can only steal back at most half of $B$'s work (since $B$ is working all the time, and thus will finish half of its work by the time $A$ steals half of its work). To obtain the estimate, we solve for $n$ such that

$\left(\frac{M}{M+1}\right)^n \le \frac{1}{2},$

and we obtain

$n = \left\lceil \frac{\ln 2}{\ln \frac{M+1}{M}} \right\rceil.$

By integration, we have

$\frac{1}{M+1} \le \ln \frac{M+1}{M} \le \frac{1}{M},$

so that

$n \le (M+1) \ln 2 + 1,$

or

$n = O(M).$

Since $\ln \frac{M+1}{M}$ and $\frac{1}{M}$ are off each other by only a constant factor, $n$ grows as $\Theta(M)$. This means that one general steal will be amortized against at most $O(M)$ steal-backs. Combined with the estimate involving $h$ from Theorem 4.2, we have the desired bound, assuming that there are no general steals performed on $A$ or $B$ during these steal-backs.
Now we show that this last assumption is in fact unnecessary. That is, if there are general steals performed on $A$ or $B$ during these steal-backs, our estimate still holds. If a general steal is performed on $B$ after $A$ steals back from $B$, we amortize $A$'s next steal-back against this general steal instead of against the general steal that $B$ made earlier. Since each general steal can be amortized against in this way by at most one steal-back, our estimate holds.
On the other hand, if a general steal is performed on $A$, then the steal-backs that $A$ has performed on $B$ become an even higher proportion of $A$'s remaining work, and the remaining steal-backs proceed as usual. So our estimate also holds in this case.
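The per-steal count in the argument above can be checked numerically. The sketch below computes the smallest $n$ with $(M/(M+1))^n \le 1/2$, i.e., the number of steal-backs needed to reclaim half of the stolen work when each steal-back recovers at least a $1/(M+1)$ fraction (the exact constants are our reconstruction of the argument):

```python
import math

def stealbacks_per_steal(M):
    """Smallest n with (M/(M+1))**n <= 1/2: the number of steal-backs
    that one general steal can generate when task sizes differ by at
    most a factor of M.  Grows linearly in M: n <= (M+1)*ln 2 + 1."""
    return math.ceil(math.log(2) / math.log((M + 1) / M))
```

For $M = 1, 10, 100$ this gives $n = 1, 8, 70$, in line with the $O(M)$ bound.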
In applying Theorem 4.3, we may choose up to $S/h$ serial tasks to exclude from the computation of $M$ without paying any extra "penalty", since the penalty $hk$ is then at most the number $S$ of general steals. After we have excluded these serial tasks, if $M$ turns out to be constant, we obtain a bound of $O(S)$ on the number of steal-backs. The next theorem formalizes this fact.
Theorem 4.4
Define $S$ and $h$ as in Theorem 4.2, and remove any $k \le S/h$ serial tasks from consideration. For each processor $i$, let $M_i$ denote the ratio between its largest and smallest serial tasks after the removal. Let $M = \max_{1 \le i \le P} M_i$. Then the expected execution time on $P$ processors is $T_1/P + O(T_\infty M)$.
Proof
The amount of work is $T_1$, and Theorem 4.3 gives a bound on the number of steal-back attempts in terms of the number of general steals. Since the expected number of general steals is $O(P T_\infty)$ and $hk \le S$, the expected number of steal-back attempts is $O(P T_\infty M)$. We add this to the amount of work and the number of general steals and divide by $P$ to complete the proof.
Remark 2
In the general case, it is not sufficient to amortize the steal-backs against the general steals. That is, there can be (asymptotically) more steal-backs than general steals, as is shown by the following example.
Suppose that the adversary has control over the general steals. When there are $r$ owners left, the adversary picks one of them, say $A$. The other $r - 1$ owners are stuck on a large serial task while $A$'s current task is being completed. The free agents perform general steals so that $A$'s tree is split evenly (in terms of the number of serial tasks, not the actual amount of work) among them. Then $A$ finishes its task, while the other processors are stuck on a large serial task. $A$ performs repeated steal-backs on the free agents until each of them is down to only a single large serial task of $A$'s. Then they finish, and we are down to $r - 1$ owners. In this case, the number of steal-backs performed is proportional to the number of $A$'s serial tasks, while the number of general steals is only proportional to the number of free agents.
In particular, it is not sufficient to use the bound on the number of general steals as a "black box" to bound the number of steal-backs. We still need to use the fact that the general steals are random.
5 Other Strategies
In this section, we consider two variants of the localized work-stealing algorithm. The first variant, hashing, is designed to alleviate the problem of pile-up in the localized work-stealing algorithm. It assigns to each owner that has work left an equal probability of being the target of a general steal. In the second variant, mugging, a steal-back takes all or almost all of the work of the processor being stolen from. A simple amortization argument yields an expected number of steals of $O(P T_\infty)$.
Hashing
Intuitively, the way in which the general steals are set up in the localized work-stealing algorithm encourages pile-up on certain processors' work. Indeed, if there are several processors working on processor $A$'s work, the next general steal is more likely to obtain $A$'s work, in turn further increasing the number of processors working on $A$'s work.
A possible modification of the general steal, which we call hashing, operates as follows: first choose an owner uniformly at random among the owners that still have work left, then choose uniformly at random a processor that is working on that owner's work.
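A sketch of this two-stage victim selection (the structure names are illustrative):

```python
import random

def hashed_steal(owner_lists, busy_owners):
    """Hashing variant of a general steal: first pick an owner with
    work left uniformly at random, then pick uniformly among the
    processors working on that owner's work (the owner itself if
    nobody has stolen from it)."""
    owner = random.choice(sorted(busy_owners))
    workers = owner_lists[owner] or [owner]
    return owner, random.choice(workers)
```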
Loosely speaking, this modification helps in the critical-path analysis both with regard to the general steals and to the steal-backs. Previously, if there are $r$ owners left, a general steal has a probability of roughly $r/P$ of hitting one of the $r$ remaining critical paths. Now, suppose there are $q_j$ processors working on owner $j$'s work for $j = 1, \ldots, r$, with $\sum_{j=1}^{r} q_j = w \le P$. The probability of hitting one of the critical paths is

$\frac{1}{r} \sum_{j=1}^{r} \frac{1}{q_j} \ge \frac{1}{r} \cdot \frac{r^2}{\sum_{j=1}^{r} q_j} = \frac{r}{w} \ge \frac{r}{P}$

by the arithmetic-harmonic mean inequality [7]. Also, the modified algorithm chooses the owner randomly, giving each owner an equal probability of being stolen from.

Mugging
A possible modification of the steal-back, which we call mugging, operates as follows: instead of taking only the top thread from $B$'s deque during a steal-back (i.e., half the tree), $A$ takes either (1) the whole deque, except for the thread that $B$ is working on; or (2) the whole deque, including the thread that $B$ is working on (in effect preempting $B$). Figure 1 shows the deque of $B$ in each of the cases.
Figure 1(a) corresponds to the unmodified case, Figure 1(b) to case (1), and Figure 1(c) to case (2). The yellow threads are the ones that $A$ steals from $B$, while the white threads are the ones that remain with $B$. In Figure 1(c), the bottom thread is preempted by $A$'s steal.
In both modifications here, each general steal can generate at most one steal-back. Therefore, the expected number of steal-backs is $O(P T_\infty)$, and the expected number of total steals is also $O(P T_\infty)$.
Figure 1: The deque of $B$ under (a) ordinary work stealing, (b) variant (1) of mugging, and (c) variant (2) of mugging.
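The three steal-back policies can be sketched with a deque per processor, the leftmost entry being the top (stealable) end and the rightmost entry the thread the victim is executing; the policy names here are ours, not from the paper:

```python
from collections import deque

def steal_back(victim_deque, policy):
    """Apply one steal-back to the victim's deque and return the
    stolen threads.  Policies: "half" (unmodified: top thread only),
    "mug-keep" (variant (1): everything but the running thread),
    "mug-preempt" (variant (2): the whole deque, preempting the victim)."""
    if policy == "half":
        return [victim_deque.popleft()] if victim_deque else []
    if policy == "mug-keep":
        taken = list(victim_deque)[:-1]
        for _ in taken:
            victim_deque.popleft()
        return taken
    if policy == "mug-preempt":
        taken = list(victim_deque)
        victim_deque.clear()
        return taken
    raise ValueError(policy)
```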
6 Conclusion and Future Work
In this paper, we have established running-time bounds on the localized work-stealing algorithm based on the delay-sequence argument and on amortization analysis. Here we suggest two possible directions for future work:

1. This paper focuses on the setting in which the computation is modeled by binary trees. Can we achieve similar bounds for more general computational settings, e.g., one in which the computation is modeled by directed acyclic graphs (DAGs)?

2. The hashing variant of the localized work-stealing algorithm (Section 5) is designed to counter the effect of pile-up on certain processors' work. What guarantees can we prove on its running time or number of steals?
References
 [1] Umut A. Acar, Guy E. Blelloch, and Robert D. Blumofe. The data locality of work stealing. In Proceedings of the Twelfth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA), pages 1–12, July 2000.
 [2] Nimar S. Arora, Robert D. Blumofe, and C. Greg Plaxton. Thread scheduling for multiprogrammed multiprocessors. In Proceedings of the Tenth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA), pages 119–129, June 1998.
 [3] Robert D. Blumofe and Charles E. Leiserson. Scheduling multithreaded computations by work stealing. Journal of the ACM, 46(5):720–748, 1999.
 [4] F. Warren Burton and M. Ronan Sleep. Executing functional programs on a virtual tree of processors. In Proceedings of the 1981 Conference on Functional Programming Languages and Computer Architecture, pages 187–194, 1981.
 [5] James Dinan, D. Brian Larkins, P. Sadayappan, Sriram Krishnamoorthy, and Jarek Nieplocha. Scalable work stealing. In Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis (SC), November 2009.
 [6] Yi Guo, Jisheng Zhao, Vincent Cave, and Vivek Sarkar. SLAW: A scalable locality-aware adaptive work-stealing scheduler. In IEEE International Symposium on Parallel & Distributed Processing (IPDPS), April 2010.
 [7] Philip Wagala Gwanyama. The HM-GM-AM-QM inequalities. The College Mathematics Journal, 35(1):47–50, January 2004.
 [8] Robert H. Halstead, Jr. Implementation of Multilisp: Lisp on a multiprocessor. In Proceedings of the 1984 ACM Symposium on LISP and Functional Programming, pages 9–17, 1984.
 [9] Richard M. Karp and Yanjun Zhang. Randomized parallel algorithms for backtrack search and branch-and-bound computation. Journal of the ACM, 40(3):765–789, July 1993.
 [10] Charles E. Leiserson, Tao B. Schardl, and Warut Suksompong. Upper bounds on number of steals in rooted trees. Theory of Computing Systems, forthcoming.
 [11] Seung-Jai Min, Costin Iancu, and Katherine Yelick. Hierarchical work stealing on manycore clusters. In Fifth Conference on Partitioned Global Address Space Programming Models (PGAS), October 2011.
 [12] Jean-Noël Quintin and Frédéric Wagner. Hierarchical work-stealing. In Euro-Par 2010 - Parallel Processing, pages 217–229, 2010.
 [13] Jeeva Paudel, Olivier Tardieu, and José Nelson Amaral. On the merits of distributed work-stealing on selective locality-aware tasks. In 42nd International Conference on Parallel Processing (ICPP), pages 100–109, 2013.
 [14] Warut Suksompong. Bounds on multithreaded computations by work stealing. Master's thesis, Massachusetts Institute of Technology, 2014.