Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers

03/20/2020
by Dan Alistarh, et al.

Several classic problems in graph processing and computational geometry are solved via incremental algorithms, which split the computation into a series of small tasks that act on shared state, updating it progressively. While the sequential variant of such algorithms usually specifies a fixed (but sometimes random) order in which the tasks should be performed, a standard approach to parallelizing them is to relax this constraint and allow out-of-order parallel execution. This is the case for parallel implementations of Dijkstra's single-source shortest-paths (SSSP) algorithm and for parallel Delaunay mesh triangulation. Although many software frameworks parallelize incremental computation in this way, it is not well understood whether this relaxed ordering approach can still provide any complexity guarantees. In this paper, we address this question and analyze the efficiency guarantees provided by a range of incremental algorithms when parallelized via relaxed schedulers. We show that, for algorithms such as Delaunay mesh triangulation and sorting by insertion, a scheduler with relaxation factor k, that is, one whose maximum priority inversion is bounded by k, introduces at most O(log(n) · poly(k)) wasted work, where n is the number of tasks to be executed. For SSSP, we show that the additional work is O(poly(k) · d_max / w_min), where d_max is the maximum distance between two nodes and w_min is the minimum such distance. In practical settings where n ≫ k, this suggests that the overhead of relaxation is outweighed by the improved scalability of the relaxed scheduler. On the negative side, we provide lower bounds showing that certain algorithms inherently incur a non-trivial amount of wasted work due to scheduler relaxation, even under relatively benign relaxed schedulers.
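To make the relaxed-scheduling model concrete, here is a minimal Python sketch (not taken from the paper) of a toy k-relaxed priority queue that, on each pop, returns a uniformly random element among the k smallest keys, modelling a scheduler whose maximum priority inversion is bounded by k. It drives a simplified SSSP loop that counts "wasted" executions of stale tasks, the quantity the abstract's O(poly(k) · d_max / w_min) bound refers to. The names KRelaxedPQ and relaxed_sssp, and the uniform-among-top-k rule, are illustrative assumptions rather than the paper's construction.

```python
import heapq
import random

class KRelaxedPQ:
    """Toy k-relaxed priority queue: pop() returns one of the k smallest keys."""
    def __init__(self, k):
        self.k = k
        self.items = []  # plain list; adequate for a simulation, not for performance

    def push(self, key, value):
        self.items.append((key, value))

    def pop(self):
        # Choose uniformly among the k smallest entries (models bounded priority inversion).
        candidates = heapq.nsmallest(self.k, self.items)
        choice = random.choice(candidates)
        self.items.remove(choice)
        return choice

    def __bool__(self):
        return bool(self.items)


def relaxed_sssp(graph, source, k):
    """Sequential simulation of SSSP driven by a k-relaxed scheduler.

    graph: dict node -> list of (neighbor, weight), weights assumed positive.
    Returns (distances, wasted), where `wasted` counts tasks whose stored
    distance was already stale when they were scheduled.
    """
    dist = {source: 0}
    pq = KRelaxedPQ(k)
    pq.push(0, source)
    wasted = 0
    while pq:
        d, u = pq.pop()
        if d > dist.get(u, float("inf")):
            wasted += 1  # stale task: a shorter distance to u was already found
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                pq.push(d + w, v)
    return dist, wasted
```

Running relaxed_sssp on a weighted graph with increasing k is one way to observe the trade-off the paper analyzes: one would expect the wasted-work counter to grow with k while remaining small relative to n whenever n ≫ k.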

