On the analysis of scheduling algorithms for structured parallel computations

10/24/2018
by Guilherme Rito, et al.

Algorithms for scheduling structured parallel computations have been widely studied in the literature. Work Stealing has long been one of the most popular algorithms for scheduling such computations, and its performance has been studied both in theory and in practice. Although it delivers provably good performance, the effectiveness of its underlying load balancing strategy is known to be limited for certain classes of computations, particularly those exhibiting irregular parallelism (e.g. depth-first searches). Many studies have addressed this limitation from a pure load balancing perspective, viewing computations as sets of independent tasks and analyzing the expected amount of work attached to each processor as the execution progresses. However, these studies make strong assumptions about work generation that, while standard from a queuing theory perspective (where work generation can be assumed to follow some random distribution), do not match the reality of structured parallel computations, where work generation is not random and depends only on the structure of the computation. In this paper, we introduce a formal framework for studying the performance of structured computation schedulers, define a criterion that is appropriate for measuring their performance, and present a methodology for analyzing the performance of randomized schedulers. We demonstrate the convenience of this methodology by using it to prove that the performance of Work Stealing is limited, and to analyze the performance of a Work Stealing and Spreading algorithm that overcomes Work Stealing's limitation.
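As context for the abstract, the sketch below illustrates the classic work-stealing discipline it refers to: each worker keeps a private deque of tasks, runs tasks from one end, and, when it runs dry, steals from the opposite end of a randomly chosen victim. This is a minimal illustration only, written in Java with assumed names (WorkStealingSketch, Worker, NUM_WORKERS); it is not the paper's formal framework, nor the Work Stealing and Spreading algorithm it analyzes.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.concurrent.ThreadLocalRandom;

    // Minimal, illustrative sketch of a work-stealing scheduler (assumed names,
    // not the paper's algorithm): each worker owns a deque, runs tasks from its
    // own end, and steals from the opposite end of a random victim when idle.
    public class WorkStealingSketch {
        static final int NUM_WORKERS = 4;

        static class Worker extends Thread {
            final Deque<Runnable> deque = new ArrayDeque<>();
            final Worker[] all;
            volatile boolean done = false;

            Worker(Worker[] all) { this.all = all; }

            // Owner pushes/pops at the bottom (LIFO) to keep recent tasks local.
            synchronized void push(Runnable t) { deque.addLast(t); }
            synchronized Runnable popBottom() { return deque.pollLast(); }
            // Thieves take from the top, tending to grab older, larger subtasks.
            synchronized Runnable stealTop() { return deque.pollFirst(); }

            @Override public void run() {
                while (!done) {
                    Runnable t = popBottom();
                    if (t == null) {  // out of local work: pick a random victim
                        Worker victim = all[ThreadLocalRandom.current().nextInt(all.length)];
                        if (victim != this) t = victim.stealTop();
                    }
                    if (t != null) t.run(); else Thread.onSpinWait();
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Worker[] workers = new Worker[NUM_WORKERS];
            for (int i = 0; i < NUM_WORKERS; i++) workers[i] = new Worker(workers);
            for (Worker w : workers) w.start();

            // Seed all work on worker 0; other workers obtain tasks only by stealing.
            for (int i = 0; i < 16; i++) {
                final int n = i;
                workers[0].push(() ->
                    System.out.println("task " + n + " ran on " + Thread.currentThread().getName()));
            }

            Thread.sleep(500);  // let the pool drain for this small demo
            for (Worker w : workers) w.done = true;
            for (Worker w : workers) w.join();
        }
    }

Seeding all tasks on a single worker mirrors the irregular case the abstract mentions: when a computation (e.g. a depth-first search) spawns work unevenly, idle workers can spend much of their time stealing small amounts of work, which is, intuitively, the load-balancing limitation that motivates the Work Stealing and Spreading variant.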

Related research

Scheduling computations with provably low synchronization overheads (10/24/2018)
  Work Stealing has been a very successful algorithm for scheduling parall...

Atos: A Task-Parallel GPU Dynamic Scheduling Framework for Dynamic Irregular Computations (11/30/2021)
  We present Atos, a task-parallel GPU dynamic scheduling framework that i...

LB4OMP: A Dynamic Load Balancing Library for Multithreaded Applications (06/09/2021)
  Exascale computing systems will exhibit high degrees of hierarchical par...

Load balancing with heterogeneous schedulers (10/17/2018)
  Load balancing is a common approach in web server farms or inventory rou...

Efficient Parallel Self-Adjusting Computation (05/14/2021)
  Self-adjusting computation is an approach for automatically producing dy...

Streaming Computations with Region-Based State on SIMD Architectures (06/12/2020)
  Streaming computations on massive data sets are an attractive candidate ...

Recursion, Probability, Convolution and Classification for Computations (07/22/2019)
  The main motivation of this work was practical, to offer computationally...
