In the Congested Clique model of distributed computing, we are given a graph with n nodes, where in each round every node can send a (possibly different) message of O(log n) bits to every other node in the graph. In the context of distributed graph algorithms, the input graph is a subgraph of the communication graph. In addition to theoretical interest in this model, it has recently gained a lot of attention due to its connections to practical distributed and big-data platforms such as MapReduce (e.g. ) and related platforms such as Spark and Hadoop (e.g. ).
Distance problems, such as single-source shortest path (SSSP) and multi-source shortest path (MSSP), have been widely studied in different models. A fundamental structure that has been used for solving these problems is a hopset. Given a graph G, a (β, ε)-hopset with hopbound β is a set of edges H added to G such that for any pair of nodes u and v in G, there is a path with at most β hops in G ∪ H whose length is within a (1 + ε) factor of the shortest path between u and v in G. The approximation ratio is also referred to as the distortion or stretch. We generally want sparse hopsets with a small hopbound. Intuitively, a hopset can be seen as a set of "shortcut edges" that reduce the effective graph diameter at the expense of a small loss of accuracy. Once a hopset is preprocessed, we can use it as many times as needed for distance queries, and the query time will be proportional to the hopbound.
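To make the definition concrete, the following toy example (our own illustration, not from the paper) shows how a few shortcut edges, here weighted by exact distances so there is no stretch at all, reduce the number of hops needed to realize a shortest path.

```python
def hop_limited_dist(edges, n, s, t, max_hops):
    """Bellman-Ford restricted to paths with at most `max_hops` edges."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(max_hops):
        new = dist[:]  # relax from the previous round only: one extra hop
        for u, v, w in edges:
            new[v] = min(new[v], dist[u] + w)
            new[u] = min(new[u], dist[v] + w)
        dist = new
    return dist[t]

# Path graph 0-1-...-7 with unit weights: d(0,7)=7, but it needs 7 hops.
n = 8
path = [(i, i + 1, 1) for i in range(n - 1)]
assert hop_limited_dist(path, n, 0, 7, 7) == 7
assert hop_limited_dist(path, n, 0, 7, 3) == float("inf")

# Two shortcut edges weighted by true distances: now 2 hops suffice.
hopset = [(0, 4, 4), (4, 7, 3)]
assert hop_limited_dist(path + hopset, n, 0, 7, 2) == 7
```

Here the hopset has size 2 and hopbound 2 for the pair (0, 7), with stretch exactly 1 because the shortcut weights are exact distances.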
There is a natural tradeoff between the size and the hopbound (or the query time) of a hopset. In one extreme, one could store all pairwise distances (or equivalently add an edge between every pair of nodes) and then query distances in constant time. Other than the fact that computing all-pairs shortest paths is generally slow, we often do not have enough space to store such a structure for large-scale graphs. There is a line of work that focuses on designing data structures with small size, say , in which distances can be estimated up to a given stretch in small query time. Examples of such structures are Thorup-Zwick distance oracles  or spanners. Hopsets offer a different tradeoff: a hopset gives an accuracy of 1 + ε (rather than a larger constant stretch) at the expense of a larger query time (polylogarithmic instead of a small constant). It is therefore crucial to keep the hopbound as small as possible, since the hopbound essentially determines the query time and is more important than the preprocessing time. However, even in centralized settings there are existential limitations in this tradeoff: a lower bound argument of  states that there are graphs for which we cannot have both a hopbound of and size , for arbitrary .
In a recent result, Censor-Hillel et al.  gave a fast Congested Clique algorithm that constructs a hopset with hopbound and size . While we can use their hopsets to compute distances efficiently, one shortcoming of such a construction is the large space. In particular, if the original graph has size , we would be storing more edges than the initial input. This is undesirable due to the large-scale nature of data in modern distributed platforms. It is therefore natural to look for algorithms that use less space, possibly in exchange for a slightly weaker (but still polylogarithmic) hopbound. This is our main goal in this paper. We extend the result of  by constructing sparse hopsets of size for a constant , with polylogarithmic hopbound, in polylogarithmic time in Congested Clique. This is the first Congested Clique construction of sparse hopsets with polylogarithmic hopbound that uses only a polylogarithmic number of rounds. It implies that we can store a sparse auxiliary data structure that can later be used to query distances (from multiple sources) in polylogarithmic time.
Our hopset construction is based on a combination of techniques used in Cohen  (with some modifications) and the centralized construction of Huang and Pettie . We also use another result of  that allows us to efficiently compute -approximate multi-source shortest path distances from sources.
One tool that we use in our construction is a hop-limited neighborhood cover construction, which may be of independent interest. Roughly speaking, a -neighborhood cover is a collection of clusters such that for each node there is a cluster containing its neighborhood of radius , and each node overlaps with at most clusters. In an -limited -neighborhood cover, only balls of radius that are restricted to paths with at most hops are guaranteed to be contained in a cluster.
We note that many of the techniques we use in our construction are borrowed from the PRAM literature. We hope that this paper provides some insight into connections between these different but relevant models.
1.1 Our contribution
The state-of-the-art constructions of sparse hopsets in Congested Clique are the results of Elkin and Neiman  (and similar results in [10, 11]), but these algorithms require a polynomial number of rounds for constructing hopsets with polylogarithmic hopbound. The construction of Censor-Hillel et al.  is a special case of the hopsets of . They construct hopsets of size with hopbound , in rounds of Congested Clique, using sparse matrix multiplication techniques. However,  does not give any explicit results for constructing sparser hopsets. It is possible that their techniques would also lead to faster Congested Clique algorithms for constructing general (sparse) hopsets of [10, 12]. But here we use a new hopset construction that has a very different structure than the hopsets of , and with improved guarantees. Not only does our hopset construction run in polylogarithmic rounds, it also yields a better size and hopbound tradeoff than the state-of-the-art Congested Clique construction of . Prior to  and , the hopsets proposed for Congested Clique had a superpolylogarithmic hopbound of  or a polynomial hopbound . More formally, we provide an algorithm with the following guarantees: Given a weighted graph  (for simplicity, we assume that the weights are polynomial; this assumption can be relaxed using standard reductions that introduce extra polylogarithmic factors in time and hopbound, e.g. see , , ), for any , there is a Congested Clique algorithm that computes a -hopset of size with hopbound
with high probability in rounds. To compare this with the efficient variant of the Congested Clique hopsets of , we note that for a hopset of size , we get a hopbound of , whereas  gets a hopbound of . Thus our hopbound guarantee is a factor of improvement over the construction of . Also, their more efficient algorithm runs in rounds of Congested Clique, where is a parameter that impacts the hopbound ( is a constant when their hopbound is polylogarithmic). They have another algorithm that uses an extra polynomial factor in running time to obtain a constant hopbound. (If we allowed extra polynomial factors in the running time, we could also get a constant hopbound: we would need to change the parameters of the neighborhood cover construction, and change how we iteratively use smaller scales. However, this is inconsistent with our main motivation of getting polylogarithmic round complexity.) We note that the construction of  has guarantees similar to , with two differences: it eliminates a factor (or more generally the dependence on the aspect ratio) in the hopset size, but has a slightly worse hopbound in its fastest regime.
Our construction is mainly based on the ideas of , with a few key differences that take advantage of the power of Congested Clique. While the hopsets of  significantly improve over the hopsets of  in centralized settings, the construction of  has certain properties that make it adaptable for a better Congested Clique algorithm. In particular,  uses a notion of small and big clusters, and we can utilize this separation in Congested Clique. We change the algorithm of  in a way that leads to adding fewer edges for small clusters. This leads to sparser hopsets and improves the overall round complexity. The key idea is that, with the right parameter settings, in Congested Clique we can send the whole topology of a small cluster to a single node, the cluster center, and then compute the best known hopset locally. These operations are possible specifically in Congested Clique due to a well-known routing algorithm by Lenzen . We can then combine Theorem 1.1 with a source detection algorithm by  (formally stated in Lemma 3) to get the following result for multi-source shortest path queries: Given a weighted graph , there is a Congested Clique algorithm that constructs a data structure of size in rounds, where , such that after construction we can query -stretch distances from sources to all nodes in in rounds with high probability.
In a related result, the problem of single-source shortest path (SSSP) in Congested Clique was also studied in , where continuous optimization techniques are used to solve transshipment. However, their algorithm has a large polylogarithmic round complexity and a high dependence on . In contrast, our running time can be significantly smaller depending on the hopset size. In other words, for hopsets with a reasonable density (e.g. of size , where ) we get a much smaller polylogarithmic factor for computing -SSSP. This can be further reduced if we allow denser hopsets.
More importantly, an approach such as  is mainly suited for SSSP. One limitation with their approach is that for computing multiple distance queries we need to repeat the algorithm for each query. For example, for computing the shortest path from sources to all nodes, we have to repeat the whole algorithm times. But constructing a hopset will let us run many such queries in parallel in rounds, where is the hopbound. Moreover, we can compute multi-source shortest path from sources in parallel for all the sources using the source detection algorithm of .
Neighborhood and Pairwise Covers.
In Section 4, we focus on an efficient construction of a limited pairwise cover (or neighborhood cover) in the CONGEST model, which is a tool that we use in our hopset construction. Given a weighted graph , a -pairwise cover, as defined by , is a collection of subsets of with the following properties: 1) the diameter of each cluster is ; 2) ,  (in other words, the sum of the sizes of all clusters is , and the sum of all edge occurrences in the clusters is ); 3) for every path with (weighted) length at most , there exists a cluster where .
Pairwise covers are similar to neighborhood covers of Awerbuch and Peleg  with two differences: in a -neighborhood cover, there must be a cluster that contains the neighborhood of radius around each node rather than only paths of length . Neighborhood covers also need an additional property that each node is in at most clusters. While for our purposes the path covering property is enough, in distributed settings we need the property that each node overlaps with few clusters to ensure that there is no congestion bottleneck. The main subtlety in constructing a general -pairwise (or neighborhood) cover is that we may need to explore paths of hops, and thus it is not clear how this can be done in polylogarithmic time. To resolve this,  proposed a relaxed construction called -limited -pairwise cover. This structure has all the above properties but only for paths with at most -hops. More formally, the third property will be relaxed to require that for every path of weight at most with at most hops there exists a cluster where . We can define an -limited -neighborhood cover similarly.
A randomized algorithm that constructs -limited pairwise covers with high probability in depth in the PRAM model was given by . The ideas used in  for constructing work-efficient PRAM hopsets can also be used to construct -limited pairwise and neighborhood covers in PRAM. However, they do not explicitly construct limited pairwise covers. In distributed settings, a recent construction of sparse neighborhood covers for unweighted graphs in the CONGEST model was given by . However, even after generalizing their result to weighted graphs, in order to cover distances for large values of the algorithm would take rounds, for the reasons described above.
To the best of our knowledge, an efficient algorithm for constructing -limited pairwise covers (or limited neighborhood covers) has not been directly studied in the CONGEST and Congested Clique literature. Our first contribution is such an algorithm: we use the low-diameter decomposition construction of Miller et al.  for weighted graphs, combined with a rounding technique due to , to construct -limited -pairwise covers in rounds in the CONGEST model. Importantly, is a parameter independent of , which we will set to a polylogarithmic value throughout our hopset construction. Our algorithm is similar to the algorithm of , but with some adaptations needed for implementation in the CONGEST model. Formally, we get the following result: Given a weighted graph , there is an algorithm that constructs an -limited -pairwise cover in rounds in the CONGEST model, with high probability. Moreover, a pairwise cover for paths with -hops with length in  (the algorithm and analysis easily extend to paths with length for any constant ) can be constructed in rounds with high probability.
As a side result (our MPC results can be seen as a straightforward combination of the results of , ,  and a simulation of ; but since both the construction and the model are closely relevant to our Congested Clique algorithms, we find it useful to include this discussion), we note that pairwise covers can also be constructed efficiently in the Massively Parallel Computation (MPC) model, even when memory per machine is strictly sublinear. This in turn leads to a better running time for -MSSP from sources (and consequently SSSP), in rounds, in a variant of the model where we assume overall memory of  (equivalently, more machines than in the standard MPC model). We consider this variant since in practice it is plausible that more machines are available, while due to the large-scale nature of data in these settings, using less memory per machine is often more crucial.
Dinitz and Nazari  construct -hopsets (based on the hopsets of ) with polylogarithmic hopbound when the overall memory is , but they argue that using the existing hopset constructions, this would take a polynomial number of rounds in MPC. They further show that if the overall memory is larger by a polynomial factor (i.e. if the overall memory is for a constant ), then hopsets with polylogarithmic hopbound can be constructed in polylogarithmic time. We can also use this extra-memory idea: first, we argue that by using the hopsets of  instead of the hopsets of  we can get a smaller hopbound when the overall memory is . Then we observe that if we use a faster -limited pairwise cover algorithm (based on the construction of ) instead of the pairwise covers that  uses, we can shave off a polylogarithmic factor in the construction time. This -limited -pairwise cover construction may also be of independent interest in MPC. More formally, we get faster algorithms for -MSSP: Given an undirected weighted graph , we can compute -MSSP from sources in rounds of MPC, when memory per machine is and the overall memory is  (i.e. there are machines). The difference between this result and  is that they give a more general result where the overall memory is for a parameter ; but in the special case of , we get a hopbound of , whereas in this case they get a hopbound of . We also note that the main focus of  is constructing Thorup-Zwick distance sketches. As explained earlier, these structures offer a different tradeoff: much weaker accuracy (-stretch), but better query time (constant rounds rather than polylogarithmic) and less space after preprocessing ( instead of in the case of hopsets). More details on the MPC algorithm can be found in Section 6.
1.2 Overview of techniques.
Our hopset has a similar structure to hopsets of , but with some changes both in construction and the analysis. We also take advantage of multiple primitives that are specific to Congested Clique such as Lenzen’s routing and a recent result of . First, we explain the -limited -neighborhood cover construction and then we explain the hopset algorithm.
-limited neighborhood covers.
As described earlier, our algorithm for constructing a -neighborhood cover is based on a combination of the low-diameter decomposition of  and a rounding technique originally proposed by . At a high level, in the low-diameter decomposition algorithm of [21, 20], each node chooses a radius based on an exponential random variable. Then each node joins the cluster of the node that minimizes the shifted distance from , which is defined as . This leads to a partition of the graph, and we can show that by repeating this process we get a -neighborhood cover. Since constructing a -neighborhood cover directly from such partitions is slow for large values of , we focus on -limited -pairwise covers. To construct these, in each iteration we consider all pairs of nodes within distance . We round up the weight of each edge in the graph based on the values and . We then construct a low-diameter decomposition based on the new weights, such that the diameter of each cluster is (rather than based on the original weights). The rounding scheme is such that the -limited paths with length in the original graph will be explored. Intuitively, this means that on the rounded graph we need to explore a neighborhood with fewer hops, which leads to a faster construction. We then repeat this process times for different distance intervals. The details of this rounding scheme can be found in Section 4.
First we describe the sequential hopset construction; we will then choose the parameters appropriately for our distributed construction. Let be a parameter that we will set later. The (sequential) structure of the hopset is as follows: In each iteration we consider pairs of nodes such that , and we call the interval a distance scale. Then for distance scales we set and construct a -pairwise cover. We call a cluster big if it has size at least and small if it has size less than , where is a constant parameter. We then construct a hopset with small hopbound on each of the small clusters. This is the main structural difference from the construction of , which adds a clique for the small clusters. We then add a star from the center node of each big cluster to every other node in that cluster, and add a complete graph on the centers of big clusters. Whenever we add an edge, we set its weight to the distance between the two endpoints. In the distributed construction, the weight will be an estimate of this distance, as we describe later.
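The per-scale step above can be sketched schematically. The snippet below is our own simplified illustration (the threshold `tau`, the arbitrary center choice, and the naive clique standing in for the local small-cluster hopset are all placeholders, not the paper's exact parameters or construction):

```python
def per_scale_hopset_edges(clusters, dist, tau):
    """clusters: list of node lists from one pairwise cover;
    dist: exact-distance oracle (an estimate in the distributed version);
    tau: size threshold separating small from big clusters."""
    hopset, big_centers = [], []
    for cluster in clusters:
        if len(cluster) >= tau:
            # big cluster: star from an (arbitrarily chosen) center
            center = cluster[0]
            big_centers.append(center)
            hopset += [(center, v, dist(center, v))
                       for v in cluster if v != center]
        else:
            # small cluster: local hopset; here a naive clique stands in
            # for the best known centralized construction run at the center
            hopset += [(u, v, dist(u, v))
                       for i, u in enumerate(cluster)
                       for v in cluster[i + 1:]]
    # the paper additionally interconnects big-cluster centers using
    # approximate multi-source shortest paths (omitted in this sketch)
    return hopset, big_centers

# toy instance: nodes on a line, threshold tau = 3
line_dist = lambda u, v: abs(u - v)
edges, centers = per_scale_hopset_edges([[0, 1, 2, 3], [7, 8]], line_dist, 3)
assert centers == [0]          # one big cluster, centered (arbitrarily) at 0
assert (0, 3, 3) in edges      # star edge of the big cluster
assert (7, 8, 1) in edges      # local edge of the small cluster
```

The point of the small/big separation is visible even here: the big cluster contributes only a linear number of star edges, while expensive all-pairs work is confined to clusters below the threshold.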
Roughly speaking, constructing a hopset on small clusters, rather than constructing a clique as  does, allows us to set the size threshold for small clusters larger while keeping the number of added edges small. This in turn reduces the number of big cluster centers we have to deal with. Such a modification is very well suited to the Congested Clique model. By setting , each small cluster will have at most edges. A well-known routing algorithm by Lenzen  can then be used to send all these edges to the cluster center, which computes a hopset locally. For this we use the current best-known centralized construction of . The other challenge is that we need to compute pairwise distances between all big cluster centers. In  this step is done by running Bellman-Ford instances from different sources in parallel, but directly implementing this in distributed settings would need rounds due to congestion. This is where we use a recent result by  stating that we can compute -approximate distances from sources in rounds. We point out that in , in order to get sparse hopsets, a recursive construction is used for small clusters. Such a recursion would introduce significant overhead in the hopbound guarantee. Here we show that in Congested Clique, using the tools described above, we can avoid the recursive construction and still compute sparse hopsets.
We briefly explain why the constructed hopset has the size and hopbound properties stated in Theorem 1.1. We use arguments similar to those in : for a distance scale , consider a shortest path of length at most , and consider segments of length on this path. By definition of a -pairwise cover, each such segment is contained in a cluster. If this segment is in a small cluster, there is a corresponding path with at most edges, where is the hopbound of the local construction. For big clusters, we either add a single edge, or, if there is more than one big cluster, the whole segment between these clusters has a corresponding edge in the hopset. By similar considerations and by the triangle inequality, we can show that the stretch of the replaced path is . We need a tighter size analysis than the one in  to prove the desired sparsity. We use a straightforward bucketing argument: for each cluster of size , edges will be added. By noting that there are at most clusters of this size, we can bound the overall size.
Bounding the exploration depth.
For large values of , the shortest path explorations up to distance could take rounds in distributed settings. To keep the round complexity small, we use the following idea from  (also used in  and ): we can use the hopset edges constructed for smaller distance scales for constructing hopset edges for larger distance scales more efficiently. The intuition behind this idea is that any path with length can be divided into two segments, such that for each of these segments we already have a -stretch path with hops using the edges added for smaller distance scales. This allows us to limit the explorations only to paths with hops in each iteration. This process will impact the accuracy, and so in order to keep the error small we have to construct the hopsets for a fixed scale at a higher accuracy. This is where a factor polylogarithmic in will be introduced in the hopbound, which can generally be avoided in the centralized constructions (e.g. see [8, 11]). This idea is formalized in Lemma 3.
2 Models and Notation
Given a weighted undirected graph , and a pair , we denote the (weighted) shortest path distance by . We denote by the length of the shortest path between and among the paths that use at most hops, and call this the -hop limited distance between and . For each node , we denote the (weighted) radius- neighborhood around by , and we let be the set of all nodes such that there is a path of (weighted) length at most between and that has at most hops.
For parameters , a graph is called a -hopset for the graph if, in the graph obtained by adding the edges of , we have for every pair of vertices. The parameter is called the hopbound of the hopset.
We construct limited neighborhood covers in the more classical CONGEST model, in which we are given an undirected graph , and in each round nodes can send a message of O(log n) bits to each of their neighbors in (different messages can be sent along different edges). In the Congested Clique model, we are given a graph with n nodes, where all nodes can send a message of O(log n) bits to every other node in the graph in each round. In other words, this is a stronger variant of the CONGEST model in which all nodes can communicate with each other directly.
We also consider the Massively Parallel Computation (MPC) model . In this model, an input of size is arbitrarily distributed over machines, each of which has memory for some . In each round, every machine can communicate with every other machine, subject to its local memory bound. Generally, for graph problems the total memory is words. Here, however, we consider a variant of the model in which the total memory can be larger, while the memory per machine is strictly sublinear in . In other words, each machine has memory, where .
Even though we do not give any new PRAM results, we use multiple tools from the PRAM literature. In the PRAM model, a set of processors performs computations by reading and writing to a shared memory in parallel. (We use a simple abstraction without the details of the exact parallel model, such as EREW or CRCW, since PRAM is not our focus and there are reductions with small overhead between these variants.) The total amount of computation performed by all processors is called the work, and the number of parallel rounds is called the depth.
3 Algorithmic Tools.
In this section we describe several algorithmic tools from previous work that we will be using.
Bounding the shortest path exploration.
As explained earlier, for an efficient hopset construction, we first compute hopsets for smaller distance scales and then use the new edges when computing distances at larger scales. This lets us limit the shortest path explorations to a logarithmic number of hops in each round. More formally, [8, 11]: Let be the hopset edges for distance scale with hopbound . Then for any pair where , there is a path with hops in whose length is a -approximation of the shortest path distance between and . Roughly speaking, the above lemma implies that we can use previously added edges and only run Bellman-Ford for rounds in each iteration of our algorithm.
Given a set of messages such that each node is the source and destination of at most messages, all of these messages can be routed to their destinations in rounds in Congested Clique .
Multi-source shortest path and source detection in Congested Clique.
We use the following two results by . The first is a multi-source shortest path algorithm that we use as a subroutine in our hopset construction: [MSSP, ] Given a weighted and undirected graph, there is an algorithm that computes -approximate distances from a set of sources in rounds in the Congested Clique model. The second result solves a special case of the so-called source detection problem, which we use to prove Corollary 1.1: [Source detection, ] Given a fixed set of sources , we can compute -hop limited distances from all nodes to each of the nodes in in rounds in the Congested Clique model.
4 Neighborhood covers using low-diameter decomposition
In this section, we describe an algorithm for constructing pairwise covers in the CONGEST model. We first give an algorithm for -pairwise covers in weighted graphs that runs in rounds. We then provide an -limited -pairwise cover construction that runs in rounds. Clearly, the CONGEST algorithm can also be used in Congested Clique with the same guarantees. We will use the low-diameter decomposition algorithm that was proposed in  and extended to weighted graphs in  for computing pairwise covers in PRAM. First we state their PRAM result:
[MPX [20, 21]] Given a weighted and undirected graph , there is a randomized parallel algorithm that partitions into clusters such that w.h.p. the (strong) diameter of each cluster is at most . This algorithm has depth (depending on the exact PRAM model considered, the depth may incur a small extra factor of ) w.h.p. and work.
We denote the algorithm of  for a parameter by LDD(), which is as follows: each node first chooses a random radius based on an exponential distribution. Each node then joins the cluster of node . Ties can be broken arbitrarily. It is easy to see from simple properties of exponential random variables that the weak diameter of each cluster is with high probability. But it can be shown that the clusters also have strong diameter bounded by (as argued in [21, 20]). This means that the diameter of the subgraph induced by each cluster is , as opposed to the weak diameter guarantee, which only bounds the distance between each pair of nodes in a cluster by based on distances in . The second property is that we can lower bound, by a constant, the probability that the neighborhood around each node is fully contained in one cluster. This was shown in , but we give a proof sketch for completeness.
[Padding property, ] Let be a partition in the support of the LDD algorithm. For each node , the probability that there exists such that is at least . (Lemma 2.2 in  upper bounds the probability that a ball overlaps with or more clusters; Lemma 8 is a straightforward corollary of this claim.)
For each node we consider the subgraph induced by . For each node , consider the random variable . Let denote the largest over , and let denote the second largest value. We argue that the probability that intersects more than one cluster is at most . This event occurs only when and are within of each other, so we only need to bound the probability that . This follows from Lemma 4.4 of , which states the following: given a sequence of exponential random variables and arbitrary values , the probability that the largest and second largest values are within of each other is at most . This implies that the probability that intersects more than one cluster is at most , which proves the claim. For more details see , . ∎
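As a sanity check, the partition and its padding property are easy to simulate sequentially. The sketch below is our own illustration on a toy cycle (the shift parameter `beta`, ball radius `rho`, and the instance are arbitrary choices, not the paper's parameters): each node draws an exponential radius, every node joins the center maximizing the shifted distance, and we count how often a small ball around a fixed node lands inside a single cluster.

```python
import random

def ldd(dist, n, beta, rng):
    """One exponential-shift LDD partition: node v joins the center u
    maximizing r_u - d(u, v). Returns the center of each node."""
    r = [rng.expovariate(beta) for _ in range(n)]
    return [max(range(n), key=lambda u: r[u] - dist[u][v]) for v in range(n)]

# exact distances on a cycle with 12 unit-weight edges
n = 12
dist = [[min(abs(u - v), n - abs(u - v)) for v in range(n)] for u in range(n)]

rng = random.Random(0)
beta, rho = 0.25, 1.0   # illustrative shift parameter and ball radius
trials, padded = 2000, 0
for _ in range(trials):
    cluster = ldd(dist, n, beta, rng)
    ball = [v for v in range(n) if dist[0][v] <= rho]
    padded += len({cluster[v] for v in ball}) == 1
# theory: the ball is padded except with probability O(beta * rho)
assert padded / trials > 0.5
```

Repeating such partitions a logarithmic number of times, as in the text, boosts the constant padding probability to a high-probability cover guarantee.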
In order to compute -neighborhood covers sequentially, we can use the above theorem by setting and repeating the partition algorithm times. It follows from a standard Chernoff bound that the desired properties hold with high probability. Implementing this algorithm in distributed settings may take rounds for large values of . To resolve this, we use a relaxed notion similar to the -limited -pairwise cover proposed in . This structure has all the properties of a -pairwise cover, but the path covering property only holds for paths with at most hops. More formally, the third property is relaxed to require that for every path of weight at most with at most hops, there exists a cluster where . We define an -limited -neighborhood cover similarly: for each node , there is a cluster such that . In , Cohen shows that we can construct -limited -pairwise covers in parallel depth, independent of . We will show that this concept can also be utilized to limit the number of rounds for LDD() partitions to .
-limited -pairwise cover.
Since running LDD() with would require many rounds, we cannot directly use the weighted variant of LDD. Instead we use a rounding idea that allows us to run LDD() on a graph with rounded weights, only for , at the cost of a small loss in accuracy. This idea was proposed by  and is used widely in the PRAM literature (e.g. , ). In the context of distributed algorithms, a similar approach was used by  in CONGEST, but directly applying the result of  to our setting would require polynomial running time, since we would need to run the algorithm from polynomially many sources. The idea is based on the following observation: consider a path with at most hops such that for a fixed . By increasing the weight of each edge by a small additive amount, so that the new weight satisfies for an arbitrary , we get . This can be achieved by setting , where .
[] Given a weighted graph , and a parameter , there is a rounding scheme that constructs another graph such that any path with at most hops and weight in , has in . Moreover, , where .
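A minimal sketch of this rounding step, assuming the standard choice of rounding each weight up to a multiple of rho = eps * R / h (our notation; the paper's exact parameters are elided above). Each edge gains at most rho, so any path with at most h hops gains at most eps * R in total:

```python
import math

def round_weights(edges, R, h, eps):
    """Round each weight up to a multiple of rho = eps*R/h."""
    rho = eps * R / h
    return [(u, v, math.ceil(w / rho) * rho) for u, v, w in edges]

# any h-hop path with length in [R, 2R] is stretched additively by <= eps*R
R, h, eps = 100.0, 5, 0.1
path = [(i, i + 1, w) for i, w in enumerate([30.0, 25.5, 20.2, 15.1, 10.0])]
orig = sum(w for _, _, w in path)        # length in [R, 2R]
rounded = sum(w for _, _, w in round_weights(path, R, h, eps))
assert orig <= rounded <= orig + eps * R
```

Since all rounded weights are multiples of rho, the rounded graph can be rescaled to integer weights, so exploring distance 2R there takes roughly (1 + 1/eps) * h weighted steps rather than a number of steps depending on R. This is what makes the limited LDD exploration shallow.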
We can now run LDD for on , and each path with at most hops will be fully contained in some cluster with probability at least . We can then recover an estimate of the original length by setting , and we have . As before, by repeating the LDD algorithm times we get an -limited -neighborhood cover. We first argue that this algorithm can be implemented in rounds of the CONGEST model. A similar construction was used in  for -neighborhood covers in CONGEST, but the result of  only covers unweighted graphs, and would take rounds.
Given a weighted graph , there is an algorithm that constructs an -limited -pairwise cover in rounds in the CONGEST model, with high probability. Moreover, a pairwise cover for paths with -hops with length in can be constructed in rounds with high probability.
As we argued, by using the rounding technique of , for any pair of nodes such that , we can restrict our attention to another graph with rounded weights. We construct a pairwise cover on by running the LDD algorithm times independently.
We argue that each run of LDD takes rounds in the CONGEST model. First, we observe that each node only needs to broadcast the value to the nodes within its neighborhood, since a node will not join the cluster of if . We can now use a simple induction to prove the claim. In each round, each node forwards the radius and distances corresponding to the node that maximizes over all the messages that has received. We now argue that each node receives the message from the node within rounds. Consider any path , where . If , then in a single round sends to . Assume now that receives the message in round . Then computes (after receiving distance estimates from all neighbors) and forwards to all neighbors, including . Therefore, in round , has received the message and can compute . Hence this algorithm terminates after rounds. Since is an exponential random variable with parameter , the maximum of these exponential random variables is with high probability. We then repeat the partition algorithm times and pipeline the broadcasts for the different runs. Clearly, each node is in at most clusters. A standard Chernoff bound in combination with Lemma 8 implies that, with high probability, after repetitions of the -limited LDD algorithm, for each path with at most hops and length there will be a cluster such that . We then repeat this process for distance scales to get an -limited -pairwise cover. ∎
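The clustering step argued above can be sketched as a round-by-round simulation of low-diameter decomposition via exponential random shifts (in the style of Miller-Peng-Xu). This is a simplified, centralized sketch of one LDD run, not the paper's CONGEST implementation: each node `v` joins the cluster of the center `u` maximizing `delta_u - dist(u, v)`, with `delta_u` drawn from an exponential distribution, and `num_rounds` plays the role of the hop limit in the induction above.

```python
import random
from collections import defaultdict

def ldd_exponential_shifts(adj, beta, num_rounds, rng=random.Random(0)):
    """One run of LDD via exponential random shifts, simulated synchronously.

    adj maps node -> list of (neighbor, weight). Each node draws a shift
    delta ~ Exp(beta); in each round every node forwards its best "offer"
    (shifted distance, center) to its neighbors, discounted by the edge
    weight, mirroring the message-forwarding induction in the proof above.
    """
    nodes = list(adj)
    delta = {u: rng.expovariate(beta) for u in nodes}
    # best[v] = (delta_u - dist(u, v), u) for the best offer v has seen so far
    best = {v: (delta[v], v) for v in nodes}
    for _ in range(num_rounds):
        new_best = dict(best)
        for v in nodes:
            for w, weight in adj[v]:
                offer = (best[v][0] - weight, best[v][1])
                if offer > new_best[w]:
                    new_best[w] = offer
        best = new_best
    clusters = defaultdict(set)
    for v, (_, center) in best.items():
        clusters[center].add(v)
    return clusters
```

With `num_rounds` at least the (hop) radius of interest, this converges to the standard shifted-distance clustering; larger `beta` yields smaller shifts and hence more, smaller clusters.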
As we will see, since our hopset construction considers different distance scales and needs to compute a pairwise cover only for a fixed scale, this step takes only rounds.
For constructing pairwise covers, we need the diameter guarantee of for all clusters. While running LDD gives a diameter guarantee of on , we note that the construction ensures that clusters have diameter on . Since we argued that every -hop path with length falls into a cluster with high probability, the diameter guarantee of on implies that the corresponding cluster in has length . More formally, for any pair of nodes there is a path of length in . Let denote the cluster that contains this path. Each segment of length in is contained in and has length in (by Lemma 4), and thus there is a path of length in , based on the weights in .
Extension to neighborhood covers.
While Cohen shows that pairwise covers suffice for the parallel construction of hopsets, for the distributed implementation we need one more property: each vertex should overlap with at most clusters. Moreover, the algorithm used in Theorem 4 provides the stronger guarantee that, with high probability, there is a cluster that contains the neighborhood of (weighted) radius around each vertex, rather than only containing paths of length . In other words, a similar analysis shows that with high probability an -limited neighborhood cover can be constructed in rounds of the CONGEST model. That is, for each node , the -limited -neighborhood of is fully contained in a cluster with high probability. However, for our purposes the path-covering property suffices.
5 Congested Clique Hopset Construction.
In this section we describe our main algorithm. Similar to the sequential construction described earlier, we consider different distance scales and handle each scale separately. In each iteration, we construct a sparse -limited -neighborhood cover as described in Section 4. The clusters are then divided into small and big clusters, and each case is handled differently. So far the construction is similar to . The key new idea is that in the Congested Clique, by setting the parameters carefully, we can send the topology of a small cluster to the cluster center and build a hopset locally. Here we use the fact that each node is in at most clusters, a property that we get from our neighborhood cover construction. We also need to compute pairwise distances between big cluster centers. For this step, we use the algorithm of that computes -multi-source shortest paths from sources (Lemma 3). We note that while our construction builds the denser hopsets of as an auxiliary structure, these extra edges are removed at the end of each distance scale.
Finally, we use Lemma 3 to exploit the hopset edges added for smaller distance scales when constructing the larger distance scales. For this to give us a for an arbitrary , we first let be the error parameter. Since we use paths with error for each scale to compute distances for the next scale, a multiplicative factor is added to the stretch in each iteration. This means that after iterations the error is . We can simply rescale the error parameter by setting to get an arbitrary overall error of .
Throughout our analysis, w.l.o.g., we assume the minimum edge weight is one; otherwise, we can scale all edge weights by the minimum edge weight. We also assume the aspect ratio is polynomial; otherwise, we can use reductions from previous work to reduce the aspect ratio in exchange for polylogarithmic depth (this is a preprocessing step and will not dominate the overall running time).
An overview of the algorithm is presented in Algorithm 1. By defining small clusters to have size at most , the number of edges in each small cluster is , and hence all nodes can send their incident edges to the cluster center in a constant number of rounds using Lenzen's routing . The cluster center then locally computes a hopset of size and hopbound using the centralized construction of Huang and Pettie . The center of a small cluster then sends back to each node in the cluster the hopset edges incident to it. Since the size of the hopset on small clusters is always , this can also be done in constant time using Lenzen's routing.
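The local computation at a small-cluster center can be illustrated as follows. This is a toy stand-in, not the Huang-Pettie construction the paper actually uses: for simplicity it adds an exact-distance shortcut edge between every pair of cluster nodes, achieving hopbound 1 at the cost of a quadratic-size hopset, whereas the real construction trades size for hopbound.

```python
import heapq

def dijkstra(adj, src):
    """Standard Dijkstra over an adjacency dict {u: [(v, w), ...]}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def local_hopset_for_small_cluster(cluster_edges):
    """Toy local hopset for a small cluster, computed at the center after it
    has received the whole cluster topology: add a shortcut edge carrying the
    exact in-cluster distance between every pair of nodes."""
    adj = {}
    for (u, v), w in cluster_edges.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    hopset = {}
    for u in adj:
        for v, d in dijkstra(adj, u).items():
            if u < v:
                hopset[(u, v)] = d
    return hopset
```

After this local step, the center would distribute to each cluster node the shortcut edges incident to it, mirroring the routing step described above.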
As explained in Lemma 3, using the hopset edges added for smaller scales we can limit all shortest-path explorations to . Thus we can add the star edges by running rounds of Bellman-Ford. For adding a clique between centers of large clusters, we use the -MSSP algorithm of Censor-Hillel et al. 2019 (using Lemma 3). This is possible since there are at most such nodes. When computing these distances, we disregard all other edges added in this step. We now analyze the algorithm and show that it has the properties stated in Theorem 1.1.
The hopbound of the hopsets constructed for small clusters, which are based on Huang-Pettie hopsets, is . The path between a pair of nodes in each distance scale has segments. The properties of a neighborhood cover imply that each of these segments is w.h.p. contained in one cluster. Each such segment has a corresponding path with hopbound either (if it is contained in a small cluster) or a single edge (if there is only one big cluster). If there is more than one big cluster center, then there is a single edge between the two furthest big cluster centers on the path. Hence, in the worst case, all segments correspond to small clusters and have a corresponding path of length . Therefore the overall hopbound is .
Recall that large clusters have size at least . The stars added for the big clusters contribute edges overall, since they consist of unions of forests for each scale. The (clique) edges added between centers of big clusters contribute edges overall. For small clusters of size , we added a hopset of size (this is the guarantee of the Huang-Pettie hopsets), for a parameter . On the other hand, there are at most clusters of size within . Therefore, we can bound the overall number of edges added for small clusters in each scale by summing over the different values of for small clusters as follows:
Therefore, the overall size for all scales is .
Fix a distance scale and consider a pair of nodes where . If , then since we assumed the minimum edge weight is one, the shortest path has at most hops and no more edges are needed for this pair. Otherwise, let be the shortest path between and . We consider three different cases and show .
First, consider the case where all the clusters on the shortest path between and are small clusters. In this case, we have replaced each segment of length with a path of stretch . By the triangle inequality, overall we get a -stretch. Next, consider the case where there is a single large cluster on this path. The segment corresponding to this single cluster adds only a single additive cost to our distance estimate.
The final case is when there is more than one large cluster. Consider the two furthest large clusters (based on their centers) on , and let their centers be and . We have added a single edge within -stretch of that covers the whole segment between these two centers. Therefore, we have shown that all segments of have a corresponding path within -stretch. As argued, this implies that each scale incurs a multiplicative factor of in the stretch, and thus by setting , and since we assumed that the weights are polynomial, we get a -stretch over all scales.
For each of the distance scales, it takes rounds to compute -limited neighborhood covers (Lemma 4). Once the covers are constructed, we need to run a Bellman-Ford with hops from the center of each big cluster; since each node may overlap with at most clusters, this phase takes rounds (each node can pipeline the computation over the clusters it overlaps with). For small clusters, we argued that in rounds (using Lenzen's routing) the whole small-cluster topology can be sent to the cluster center, and after local computation another rounds suffice for the cluster center to send the new hopset edges back to the destination nodes. Finally, using the result of , we can compute a -approximation from the big cluster centers ( sources) in time. Therefore the overall running time is .
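The hop-limited Bellman-Ford used to grow the stars can be sketched as follows; running several sources in one pass is the sequential analogue of the pipelining over overlapping clusters described above (the function and parameter names are illustrative).

```python
def limited_bellman_ford(adj, sources, max_hops):
    """Hop-limited Bellman-Ford from several sources at once.

    adj maps node -> list of (neighbor, weight). dist[(s, v)] is the best
    s-to-v distance over paths with at most max_hops edges; paths needing
    more hops are deliberately not found, which is exactly the restriction
    that the hopset edges for smaller scales make harmless.
    """
    INF = float('inf')
    dist = {(s, s): 0 for s in sources}
    for _ in range(max_hops):
        updated = {}
        for (s, u), d in dist.items():
            for v, w in adj[u]:
                key = (s, v)
                nd = d + w
                if nd < min(dist.get(key, INF), updated.get(key, INF)):
                    updated[key] = nd
        if not updated:
            break
        dist.update(updated)
    return dist
```

Each iteration of the outer loop corresponds to one synchronous round: after round k, all best distances over at most k hops have been found.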
Application to multi-source queries.
6 Massively Parallel Hopsets and MSSP
In this section, we argue that in a variation of the MPC model where the overall memory is , we can construct hopsets with small hopbound efficiently, which in turn gives a fast algorithm for multi-source shortest paths in this setting. This result relies on an observation made in , stating that the PRAM hopset constructions (e.g. , ) that use processors with depth can be implemented in MPC in rounds, even when the memory per machine is strictly sublinear, assuming that the overall memory available is . Once a -hopset is constructed, the Bellman-Ford subroutine described in can be used to compute -stretch distances from nodes to all other nodes in .
The results of are based on the hopsets of ; their constructions may use less overall memory in general, but they obtain a worse hopbound than ours in the special case where the total memory is . In this case, we get an improved hopbound of , whereas their result gives a hopbound of . In particular, we use the PRAM hopset construction of (instead of ), which can be simulated in the MPC model with strictly sublinear memory per machine (using a reduction of ) to construct hopsets with hopbound . The only difference between our construction and is that we use a faster algorithm for constructing -limited pairwise covers, based on the algorithm of . First, we note that our -limited -neighborhood cover can be constructed in MPC with an algorithm and analysis very similar to Section 4. This step can be done using only overall memory (or memory for a single scale) in rounds. Observe that the constructions of -neighborhood covers for the different scales can all be done in parallel with an extra logarithmic overhead in the total memory. Similarly, since each of the low-diameter partitions is independent, the repetitions of the LDD algorithm can also be parallelized. This result is also implied by the results of combined with the simulation of Goodrich et al. . We have the following: There is an algorithm that runs in rounds of MPC and w.h.p. computes an -limited -neighborhood cover, where the memory per machine is and the overall memory is .
Given a pairwise cover, and assuming total memory in MPC, we can construct a -hopset of size . This hopset is a special case of the hopsets of : we add a clique for small clusters, a star centered at each big cluster, and a clique between big cluster centers. As stated, the main difference in our algorithm is that we use the algorithm of Lemma 6 for constructing pairwise covers, rather than the algorithm of . This leads to a construction time of , whereas a direct reduction from would have construction time , which is how long it takes to construct their limited pairwise covers. Hence, combining Lemma 6 with a simulation of the PRAM construction of and the Bellman-Ford primitives described in , we can construct a hopset of size in time with hopbound . Given an undirected weighted graph and parameters , we can w.h.p. construct an -hopset of size in rounds of MPC, using memory per machine and overall memory (i.e. there are machines), where the hopbound is . The analysis is very similar to the arguments in previous sections and previous work. Similarly, for -MSSP we get the following: Given an undirected weighted graph , after a preprocessing step of rounds, we can w.h.p. answer -multi-source shortest path queries from sources in rounds of MPC, when the memory per machine is and the overall memory required for preprocessing is . At a high level, since we have overall memory , we can assign to each node a block of memory of size . Then, using aggregation primitives (e.g. see ), we can store and update the distances from up to sources. Therefore, given a hopset with hopbound , we can compute distances from sources by running parallel Bellman-Ford.
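The final query step can be sketched as follows: given a hopset, run Bellman-Ford on the graph together with the hopset edges, where each node keeps a vector of per-source distances, mirroring the per-node memory blocks described above. This is a sequential sketch with illustrative names, not the MPC implementation; with a (beta, eps)-hopset, `hopbound` relaxation rounds suffice for (1 + eps)-approximate distances.

```python
def mssp_with_hopset(adj, hopset, sources, hopbound):
    """Multi-source Bellman-Ford on G plus hopset shortcut edges.

    adj maps node -> list of (neighbor, weight); hopset maps (u, v) -> w.
    dist[v][i] holds the best distance found from sources[i] to v over
    paths with at most `hopbound` edges in the augmented graph.
    """
    INF = float('inf')
    # merged adjacency: original graph plus hopset shortcut edges
    merged = {u: list(nbrs) for u, nbrs in adj.items()}
    for (u, v), w in hopset.items():
        merged.setdefault(u, []).append((v, w))
        merged.setdefault(v, []).append((u, w))
    dist = {v: [INF] * len(sources) for v in merged}
    for i, s in enumerate(sources):
        dist[s][i] = 0
    for _ in range(hopbound):
        changed = False
        snapshot = {v: list(d) for v, d in dist.items()}  # synchronous round
        for u in merged:
            for v, w in merged[u]:
                for i in range(len(sources)):
                    if snapshot[u][i] + w < dist[v][i]:
                        dist[v][i] = snapshot[u][i] + w
                        changed = True
        if not changed:
            break
    return dist
```

The point of the hopset is visible here: shortcut edges let a few relaxation rounds reach distances that would otherwise need many hops.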
-  Amir Abboud, Greg Bodwin, and Seth Pettie. A hierarchy of lower bounds for sublinear additive spanners. SIAM Journal on Computing, 2018.
-  Baruch Awerbuch and David Peleg. Sparse partitions. In Proceedings of the Symposium on Foundations of Computer Science. IEEE, 1990.
-  Paul Beame, Paraschos Koutris, and Dan Suciu. Communication steps for parallel query processing. In Proceedings of the Symposium on Principles of Database Systems. ACM, 2013.
-  Ruben Becker, Andreas Karrenbauer, Sebastian Krinninger, and Christoph Lenzen. Near-optimal approximate shortest paths and transshipment in distributed and streaming models. In 31st International Symposium on Distributed Computing, 2017.
-  Soheil Behnezhad, Mahsa Derakhshan, and MohammadTaghi Hajiaghayi. Brief announcement: Semi-mapreduce meets congested clique. arXiv preprint arXiv:1802.10297, 2018.
-  Keren Censor-Hillel, Michal Dory, Janne H Korhonen, and Dean Leitersdorf. Fast approximate shortest paths in the congested clique. In Proceedings of the Symposium on Principles of Distributed Computing. ACM, 2019.
-  Edith Cohen. Fast algorithms for constructing t-spanners and paths with stretch t. SIAM Journal on Computing, 1998.
-  Edith Cohen. Polylog-time and near-linear work approximation scheme for undirected shortest paths. Journal of the ACM (JACM), 2000.
-  Michael Dinitz and Yasamin Nazari. Massively parallel approximate distance sketches. OPODIS, 2019.
-  Michael Elkin and Ofer Neiman. Near-optimal distributed routing with low memory. In Proceedings of the ACM Symposium on Principles of Distributed Computing. ACM, 2018.
-  Michael Elkin and Ofer Neiman. Hopsets with constant hopbound, and applications to approximate shortest paths. SIAM Journal on Computing, 2019.
-  Michael Elkin and Ofer Neiman. Linear-size hopsets with small hopbound, and constant-hopbound hopsets in RNC. In Proceedings of the ACM Symposium on Parallelism in Algorithms and Architectures. ACM, 2019.
-  Michael T Goodrich, Nodari Sitchinava, and Qin Zhang. Sorting, searching, and simulation in the mapreduce framework. In Proceedings of the International Symposium on Algorithms and Computation. Springer, 2011.
-  James W Hegeman and Sriram V Pemmaraju. Lessons from the congested clique applied to mapreduce. Theoretical Computer Science, 2015.
-  Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. A deterministic almost-tight distributed algorithm for approximating single-source shortest paths. In Proceedings of the Symposium on Theory of Computing. ACM, 2016.
-  Shang-En Huang and Seth Pettie. Thorup–Zwick emulators are universally optimal hopsets. Information Processing Letters, 2019.
-  Philip N Klein and Sairam Subramanian. A randomized parallel algorithm for single-source shortest paths. Journal of Algorithms, 1997.
-  Christoph Lenzen. Optimal deterministic routing and sorting on the congested clique. In Proceedings of the ACM Symposium on Principles of Distributed Computing. ACM, 2013.
-  Zvi Lotker, Boaz Patt-Shamir, Elan Pavlov, and David Peleg. Minimum-weight spanning tree construction in communication rounds. SIAM Journal on Computing, 2005.
-  Gary L Miller, Richard Peng, Adrian Vladu, and Shen Chen Xu. Improved parallel algorithms for spanners and hopsets. In Proceedings of the Symposium on Parallelism in Algorithms and Architectures. ACM, 2015.
-  Gary L Miller, Richard Peng, and Shen Chen Xu. Parallel graph decompositions using random shifts. In Proceedings of the ACM Symposium on Parallelism in Algorithms and Architectures. ACM, 2013.
-  Danupon Nanongkai. Distributed approximation algorithms for weighted shortest paths. In Proceedings of the ACM Symposium on Theory of Computing. ACM, 2014.
-  Merav Parter and Eylon Yogev. Low congestion cycle covers and their applications. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 2019.
-  Mikkel Thorup and Uri Zwick. Approximate distance oracles. Journal of the ACM (JACM), 2005.