    # How fast can you update your MST? (Dynamic algorithms for cluster computing)

Imagine a large graph that is being processed by a cluster of computers, e.g., described by the k-machine model or the Massively Parallel Computation model. The graph, however, is not static; instead it is receiving a constant stream of updates. How fast can the cluster process the stream of updates? The fundamental question we ask in this paper is whether we can update the graph fast enough to keep up with the stream. We focus specifically on the problem of maintaining a minimum spanning tree (MST), and we give an algorithm for the k-machine model that can process O(k) graph updates per O(1) rounds with high probability. (These results carry over to the Massively Parallel Computation (MPC) model.) We also show a matching lower bound, i.e., it is impossible to process k^(1+ϵ) updates in O(1) rounds. Thus we provide a nearly tight answer to the question of how fast a cluster can respond to a stream of graph modifications while maintaining an MST.


## 1. Introduction

There are two different approaches for dealing with very large graphs depending on whether the graph is static or dynamic.

If the graph is static, it can be distributed on a cluster of machines that can then process the graph. In the distributed algorithms world, this might be represented using the k-machine model (Klauck et al., 2015), where the graph is randomly distributed among k servers, each of which can send a limited number of bits to each of the other machines in each communication round. Alternatively, this might be represented using the Massively Parallel Computation (MPC) model (Karloff et al., 2010), which is designed to capture the performance of Map-Reduce systems. In both cases, you can efficiently find the minimum spanning tree (MST) of a graph (Ghaffari and Choo; Klauck et al., 2015).

If the graph is dynamic, a different approach is used. Most of the research has focused on storing as little information about the graph as possible, and treating the updates to the graph as a stream of updates. In this case, graph sketches can be used to store an approximate minimum spanning tree in small (near-linear in the number of vertices) space, processing updates to the edges as they arrive, one at a time (Ahn et al., 2012).

The question we ask in this paper is whether these two approaches can be combined. How fast can a distributed cluster, e.g., a k-machine system, process a stream of updates to a minimum spanning tree? Recently, Italiano et al. (Italiano et al., 2019) gave a first answer to this question: they showed how to maintain an approximate MST, handling each individual update in O(1) rounds. This raised the natural question: can we maintain an exact MST, and if so, what is the fastest rate of updates that we can handle?

The main result of this paper, then, is an algorithm for maintaining an exact MST, where the graph is distributed among k servers and the cluster receives a stream of updates to the graph, adding and deleting edges. Moreover, our algorithm can handle up to O(k) requests in O(1) rounds (with high probability), allowing for significant churn in the graph if k is large. The same basic approach can be used in the MPC model.

To maintain the MST, we build on the nice idea proposed by Italiano et al. (Italiano et al., 2019) of using Euler tours to represent the MST. When many edges are being added to a graph, the MST may change significantly, and we develop a graph structural property that allows us to quickly determine which edges need to be added to and removed from the MST. When many edges are deleted from a graph, a different approach is needed. We reduce the problem to an MST instance in the CONGESTED-CLIQUE model, carefully simulating the algorithm of (Jurdzinski and Nowicki, 2017). (A naive simulation would be too slow, and more care is needed in the reduction to achieve O(1) rounds.) Thus we can handle O(k) edge insertions and deletions in only O(1) rounds with high probability.

A natural follow-up question is whether it is possible to do better. We show a matching lower bound: it is impossible to handle k^(1+ϵ) requests in O(1) rounds. Thus, if the cluster needs to keep up with the incoming stream, then it can only handle O(k) updates per round without falling behind the stream of updates.

## 2. Background and Related Work

Large scale graphs have recently become a topic of increasing interest. Since these large scale graphs do not fit on a single machine, the graph is stored in a distributed setting, and the algorithms are distributed in nature. However, distributed graph algorithms of the past, e.g., in the CONGEST model (Peleg, 2011), tend to treat each vertex as a single machine, while these new graph algorithms store many vertices on a single machine, changing the nature of distributed graph algorithms. Two models that attempt to model these large scale graphs are the k-machine model (Klauck et al., 2015) and the popular Massively Parallel Computation (MPC) model (Karloff et al., 2010). These models have been relatively well studied (Pandurangan et al., 2015, 2018; Ghaffari et al., 2019; Goodrich et al., 2011; Lattanzi et al., 2011), and similar techniques are involved in both models.

The connectivity and MST problems have been extensively studied in the MPC and k-machine models. More recently, building on work in (Ghaffari and Parter, 2016; Hegeman et al., 2015), Jurdzinski and Nowicki (Jurdzinski and Nowicki, 2017) described an algorithm that constructs an MST in O(1) rounds in the MPC model. (In fact, the algorithm is presented for the CONGESTED-CLIQUE, but it implies an MPC algorithm.) It is generally believed that no o(log n) round connectivity algorithm exists in the MPC model for the sublinear regime, where each machine has strongly sublinear space. Assadi et al. (Assadi et al., 2019), however, demonstrate how to obtain such round complexity when the graphs involved are sparse.

Dynamic graph algorithms in this setting have recently come up as a topic of interest. To the best of our knowledge, Italiano et al. (Italiano et al., 2019) was the first paper to look at dynamic updates in the MPC model. They introduce the dynamic MPC model and several dynamic problems in the MPC model, as well as their solutions to them. In particular, they described how Euler tours can be used to solve the dynamic connectivity and dynamic approximate MST problems in O(1) communication rounds. The natural extension, batch-dynamic algorithms, where more than one update arrives per round, has also been studied very recently. Dhulipala et al. (Dhulipala et al., ) build on work on the Euler tour tree data structure in the parallel setting (Acar et al., 2019; Tseng et al., ), and demonstrate a batch-dynamic connectivity algorithm in the MPC model using sketching techniques. As of yet, we know of no existing work on the dynamic exact MST problem or the batch-dynamic MST problem.

Our work has more in common with the work by Italiano et al. (Italiano et al., 2019): we generalise their results and demonstrate how Euler tours can be used to solve the dynamic exact MST problem in O(1) communication rounds. We also demonstrate how it is, in fact, possible to resolve O(k) queries in O(1) rounds with high probability, where k is the number of machines. Lastly, we show that it is not possible to do much better than O(k) queries in O(1) rounds, by proving a lower bound: there can be no algorithm that resolves k^(1+ϵ) queries in O(1) rounds.

While they worked in the MPC model, we will primarily describe our results for the k-machine model, and then describe how our approach carries over to the MPC model.

## 3. Models and Problems

In this section, we describe the main models for cluster computing, and define the MST and Dynamic MST problems.

### k-Machine Model

We focus on the k-machine model described by Klauck et al. (Klauck et al., 2015), and highlight some of the differences between the k-machine model and the MPC model. We are given a graph G = (V, E), with n vertices and m edges.

Graph distribution: In the k-machine model, we assume that the graph in question is distributed across k machines in the random vertex partition model. The vertices of the graph are distributed uniformly at random across these machines, so that each vertex has a 1/k chance of being on any one machine. If a vertex is assigned to a machine, so are all of its incident edges.

Communication: Communications occur in synchronous rounds. The communication topology of these machines is a clique, with bidirectional links between any two machines that can only send a bounded number of bits (typically polylogarithmic in n) a round.

Space restrictions: Due to the large sizes of graphs involved, we impose a space restriction on each machine. At any point in time, each machine can only use a constant factor more space than the space required to store its edges. Since each machine receives from up to k − 1 input communication channels each round, we also assume that each machine can use Θ(k) additional space with no problems. Hence each machine uses Õ((m + n)/k + k) space.

### MPC Model

The MPC model, described by Karloff et al. (Karloff et al., 2010), is usually phrased with the amount of space S per machine being an input parameter instead of the number of machines k. However, since these algorithms usually use only a constant factor more space than the problem size, we can in fact also think of the MPC model as having the number of machines k as an input parameter instead of the amount of space S.

Space restrictions: The MPC model has k machines, each with up to S space, with k · S = Õ(m + n), where Õ here hides additional log factors.

Communication: Communications also occur in synchronous rounds. Machines can communicate as much as they like with any other machine, as long as, for each machine, the total communication in each round is O(S). Contrast this with the k-machine model, where each machine can communicate a total of Õ(k) bits each round, and notice that the two models scale in opposite directions: for the k-machine model, more machines allows for more inter-machine bandwidth; for the MPC model, more machines means less inter-machine bandwidth.

Graph distribution: Since machines can exchange all of their data in one round, the graph data can be distributed arbitrarily after a single round at the beginning of the algorithm.

For both the k-machine model and the MPC model, we are primarily focused on minimizing the round complexity, while respecting the space constraints.

We highlight the primary differences between the k-machine and the MPC models here. As we will see, Lenzen’s routing lemma (Lenzen, 2013) ensures that the methods of communication are not a real difference between the two models, and that the primary difference is in the scaling of the bandwidth with respect to the number of machines. We will find that, in general, algorithms in the k-machine model tend to work on the premise that there is a small number of machines, so that the vertex partitioning model makes more sense. Algorithms in the MPC model tend to restrict the amount of space on each machine, and most works focus on the amount of space required on each machine for each algorithm to work. Most work (Assadi et al., 2019; Ghaffari and Uitto, 2018) on the connectivity and MST problems generally operates in the regime where each machine has space that is strongly sublinear in n.

Here, we also highlight the CONGESTED CLIQUE model (Lotker et al., 2003). While not a model intended for the study of large scale graphs, we find that results in this model are particularly illuminating. The CONGESTED CLIQUE model can be thought of as the special case of the k-machine model where k = n. Instead of a random vertex partition model, we instead have a bijection between machines and vertices, and each machine contains its vertex’s edge information. The communication topology is a clique, with bidirectional links between any two machines that can only send O(log n) bits a round.

### MST and Dynamic MST

The MST problem is as follows. Given a weighted undirected graph G = (V, E), find a spanning tree such that the total sum of the weights of the edges in this spanning tree is minimised. In the context of the k-machine model, we ask only that each machine knows whether the edges that live on it are in the MST or not, since storing the actual MST itself on each machine would require too much space.

The dynamic MST problem introduces edge additions and edge deletions. Whenever an edge is added or deleted, only the two machines this edge lives on know about the update. We ask that each machine knows whether the edges that live on it are in the MST or not, just as in the static case.

## 4. Preliminaries

In this section, we discuss some of the basic communication primitives in the k-machine and MPC models.

### Lenzen Routing

We begin by recalling Lenzen’s routing lemma (Lenzen, 2013).

###### Theorem 4.1 ().

The following problems can be solved in O(1) communication rounds in a fully connected system of n nodes:

1. Routing: Each node is the source or the destination of up to n messages of size O(log n). Only the sources know the destinations of the messages and the contents.

2. Sorting: Each node is given up to n comparable keys of size O(log n). Node i needs to learn the keys with indices (i − 1)n + 1 to in.

Lenzen’s routing lemma tells us that the MPC communication model and the k-machine communication model are only different up to constant factors from each other, and that the only real restriction is the total bandwidth during each communication round. In the k-machine model, this bandwidth scales with the number of machines, while in the MPC model, this bandwidth scales inversely with the number of machines.

A machine performs a broadcast if it sends the same bits through all of its communication links during that communication round. The following lemma is in the spirit of the “Conversion Theorem” (Theorem 4.1 of (Klauck et al., 2015)). While they used a randomized routing approach to obtain bounds that hold with high probability, we demonstrate that a deterministic approach gives us worst-case bounds.

###### Lemma 4.2 ().

Any algorithm in the k-machine model that performs a total of b broadcasts and/or max computations, grouped into s sets such that the broadcasts and computations within each set have no dependencies, can be completed in a total of O(b/k + s) rounds.

This lemma is relatively straightforward, and its proof is available in the appendix.
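As a back-of-the-envelope illustration (our own sketch, not part of the paper), the round accounting behaves as follows, assuming each dependency-free set of broadcasts can be pipelined k at a time, and that consecutive sets must run one after another:

```python
import math

def broadcast_rounds(set_sizes, k):
    """Hypothetical round count for grouped broadcasts in the k-machine model.

    Within one dependency-free set of b_i broadcasts, the k machines can
    relay k broadcasts per round (each machine forwards one message on all
    of its links), so the set finishes in ceil(b_i / k) rounds.
    """
    return sum(math.ceil(b / k) for b in set_sizes)
```

For example, two independent sets of 10 broadcasts each, on 5 machines, complete in 4 rounds under this accounting.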

## 5. Dynamic MST: One at a Time

Before going into the batch dynamic MST algorithm, we first describe, in this section, the dynamic MST algorithm that can handle one update at a time. In the following section, we show how to generalize this approach to O(k) updates at a time. The main goal of this section is to prove the following theorem:

###### Theorem 5.1 ().

There is an algorithm that maintains a dynamic MST in O(1) communication rounds for each update. If the graph is initially not empty, then initialization of the data structure after the static MST instance has been solved takes O(n/k) rounds.

We split updates into edge additions and edge deletions, handled separately. When an edge is added, we perform cycle deletion to restore the MST. When an edge is deleted, we add back the minimal weight edge across the induced cut. As in Italiano et al. (Italiano et al., 2019), where Euler tours were used to solve the dynamic connectivity and dynamic approximate MST problems, we make use of the same basic approach. Euler tours were first used for the dynamic MST problem by Henzinger and King (Henzinger and King, 1995).

### 5.1. Euler tours

An Euler tour in a general graph is a path that visits each edge exactly once. In the context of an MST, we treat each edge as a bidirectional edge, and an Euler tour refers to a cycle that visits each edge exactly twice, once in each direction. An Euler tour is the same as a depth-first-search edge visit order, but it is generally more useful to think of an Euler tour as a cycle. An example of an MST with an Euler tour over it can be seen in figure 1.

We call the start of the Euler tour the root of the Euler tour. In general, when we refer to the root of an MST with an Euler tour structure over it, we refer to the start of the Euler tour.

There are several different ways in which an Euler tour can be described. In Henzinger and King (Henzinger and King, 1995), as well as in the approach used by Italiano et al. (Italiano et al., 2019), the Euler tour was described by keeping track of the order in which the vertices are visited. We employ a slightly different approach, and label the edges in the order in which they are traversed.
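To make the labelling concrete, here is a small sketch (ours, not the paper's code) that computes the two traversal labels of each edge by a depth-first traversal of the tree:

```python
def euler_tour_labels(adj, root):
    """Return {frozenset({u, v}): (e_min, e_max)}: the two times the Euler
    tour crosses the (bidirectional) tree edge {u, v}."""
    labels, t = {}, 0

    def visit(u, parent):
        nonlocal t
        for v in adj[u]:
            if v == parent:
                continue
            down = t          # crossing u -> v
            t += 1
            visit(v, u)
            labels[frozenset((u, v))] = (down, t)  # crossing v -> u
            t += 1

    visit(root, None)
    return labels

# A path 1 - 2 - 3 rooted at 1: the tour is 1->2, 2->3, 3->2, 2->1, so the
# interval (1, 2) of edge {2, 3} nests inside the interval (0, 3) of {1, 2}.
labels = euler_tour_labels({1: [2], 2: [1, 3], 3: [2]}, root=1)
```

Note how the label intervals of deeper edges nest inside those of their ancestors; this nesting is exactly what the lemmas below exploit.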

We augment each edge with these two integer values, calling the smaller one e_min and the larger one e_max. We now have three important lemmas:

###### Lemma 5.2 ().

Consider an MST M, rooted at r, and some cut edge e with labeled values e_min and e_max. In the graph M − e, an edge f is not in the same component as the vertex r iff e_min < f_min and f_max < e_max.

###### Proof.

Let C be the component separated from r. In the Eulerian cycle, notice that e_min denotes the time the Eulerian cycle enters the component C, and e_max is the time it leaves the component C. As such, all the edges that are in the component C will be visited between times e_min and e_max, and will hence have values between e_min and e_max. ∎

###### Lemma 5.3 ().

Consider an MST rooted at r. Consider any arbitrary vertex v that is not r. The edge with the highest labelling with one endpoint touching v and the edge with the smallest labelling with one endpoint touching v are the same edge e_p.

###### Proof.

Let e_p be the first edge that the Euler tour crosses to enter v. This is the desired edge, since its two labels mark the first and the last time the Euler tour visits the vertex v. ∎

Let r be the root of the Euler cycle, and let v be any vertex; we call the edge e_p, as in lemma 5.3, the parent edge of v with respect to r. Figure 1 illustrates a parent edge with respect to the root.

###### Lemma 5.4 ().

Consider an MST M, and some Euler tour. Let r be the root of this Euler tour. Consider any arbitrary vertex v that is not r. Let e_p be the parent edge of v with respect to r, with labels p_min and p_max. An edge f is on the path from r to v iff f_min ≤ p_min and p_max ≤ f_max.

###### Proof.

Notice that an edge is on the path from r to v iff it is a cut edge that, when removed, partitions r and v into two separate halves.

(⇐) Suppose f_min ≤ p_min and p_max ≤ f_max. If the inequalities are strict, then applying lemma 5.2 to the cut edge f tells us that the edge e_p is not in the same partition as r under the cut edge f, so f is a cut edge that separates r and v and we are done. If f_min = p_min and f_max = p_max, then the edge f is precisely the parent edge of v with respect to r, whose removal separates v from r, and f is hence also a cut edge.

(⇒) For the other direction, suppose f is a cut edge separating r and v. If f does not touch v, then by lemma 5.2, the parent edge of v satisfies f_min < p_min and p_max < f_max. If f does indeed touch v, then f must be the parent edge of v with respect to r, and we have f_min = p_min and f_max = p_max. ∎

Importantly, lemmas 5.2 and 5.4 give us a way to determine where edges lie in the MST from just the two values e_min and e_max of any edge.
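As a self-contained sketch (ours, not the paper's code) of how lemmas 5.3 and 5.4 turn into a purely local test on the labels:

```python
def euler_tour_labels(adj, root):
    # DFS labelling: each edge keeps the two times the tour crosses it
    labels, t = {}, 0

    def visit(u, parent):
        nonlocal t
        for v in adj[u]:
            if v != parent:
                down = t
                t += 1
                visit(v, u)
                labels[frozenset((u, v))] = (down, t)
                t += 1

    visit(root, None)
    return labels

def path_edges(labels, v):
    """Edges on the root-to-v path, via lemmas 5.3 and 5.4."""
    # Lemma 5.3: the parent edge of v carries both the smallest and the
    # largest label among the edges touching v (tuple min finds it).
    p_min, p_max = min(lab for e, lab in labels.items() if v in e)
    # Lemma 5.4: f is on the path iff f_min <= p_min and p_max <= f_max.
    return {e for e, (f_min, f_max) in labels.items()
            if f_min <= p_min and p_max <= f_max}

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}      # a path 1-2-3-4
labels = euler_tour_labels(adj, root=1)
```

For the path graph above, `path_edges(labels, 4)` returns all three tree edges, matching the unique 1-to-4 path.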

### 5.2. Data structures

To represent our Euler tour, we augment each edge in the MST with:

1. The two integer values from our Euler tour, and the direction.

2. The size of the Euler tour this edge is in.

This additional edge information requires a constant factor more space over the original edge information. For each machine, we also store:

1. For each neighbouring vertex, the Euler tour information of a single arbitrary edge of that neighbour.

This requires an amount of space equal to the number of neighbours, which is bounded by the number of edges on each machine, and is hence again a constant factor more space over the original edge information.

### 5.3. Maintaining the data structures

We begin first by demonstrating several transformations that can be made in the Euler tour structures, and the number of rounds of communications required for each of them.

###### Lemma 5.5 ().

Euler tours can be re-rooted after O(1) broadcasts.

###### Proof.

Suppose we wish to reroot the Euler tour to some vertex v. To do so, vertex v broadcasts the label a of any outgoing edge. Each machine now subtracts a from all edge labels on its machine, taken modulo the size of the Euler tour; the smaller of an edge's two shifted labels becomes its new e_min. This maintains the Euler tour structure, since Euler tours are cycles. ∎
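The label arithmetic of the re-rooting can be sketched as follows (our illustration; a is the broadcast label and s the tour size):

```python
def reroot(labels, s, a):
    """Shift every label down by a, modulo the tour size s; the smaller of
    an edge's two shifted labels becomes its new minimum (Lemma 5.5)."""
    return {e: tuple(sorted(((w - a) % s) for w in pair))
            for e, pair in labels.items()}

# Path 1-2-3 rooted at 1: labels {1,2}: (0,3), {2,3}: (1,2), tour size 4.
# Reroot to vertex 2 using its outgoing crossing 2 -> 3, which has label 1.
labels = {frozenset((1, 2)): (0, 3), frozenset((2, 3)): (1, 2)}
new = reroot(labels, s=4, a=1)
```

After the shift, the tour reads 2->3, 3->2, 2->1, 1->2, i.e., it now starts at vertex 2 as intended.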

###### Lemma 5.6 ().

Consider an MST with an Euler tour structure over it. Given an edge in the MST whose removal disconnects the MST into two separate trees, we can delete it and maintain the two separate Euler tours after O(1) broadcasts.

###### Proof.

The edge e being deleted broadcasts its two values e_min and e_max. To restore the Euler tour property in both disconnected trees, we simply apply the following function to the labels globally:

    f(w) = w, for w < e_min
    f(w) = w - e_min - 1, for e_min < w < e_max
    f(w) = w - (e_max - e_min + 1), for w > e_max

Notice that this results in two Euler tours. The labels in the component connected to the root have to be made contiguous again, with the larger values shifted down by the number of labels removed, e_max − e_min + 1, while the labels in the component disconnected from the root have to be shifted down to start from 0.

We also have to update the sizes of the Euler tours. We apply the following function to each edge with label w in a tour of size s, globally:

    g(w, s) = s - (e_max - e_min + 1), for w < e_min or w > e_max
    g(w, s) = e_max - e_min - 1, for e_min < w < e_max

The only remaining thing to maintain is the additional Euler tour edge stored for each neighbour. In the event that machines used the deleted edge as the chosen edge, a new replacement edge is required. Here we just have both endpoints broadcast a new edge of theirs, and we are done. ∎
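Combining the two relabelling functions, the split can be sketched as follows (our centralized illustration of the computation each machine applies to its local labels):

```python
def delete_tree_edge(labels, s, e):
    """Split one Euler tour into two after deleting tree edge e (Lemma 5.6).

    Returns the label maps of the root-side and detached tours, plus the
    two new tour sizes.
    """
    e_min, e_max = labels[e]
    root_side, detached = {}, {}
    for f, pair in labels.items():
        if f == e:
            continue
        f_min, f_max = pair
        if e_min < f_min and f_max < e_max:
            # strictly inside the interval: f is in the detached subtree
            detached[f] = (f_min - e_min - 1, f_max - e_min - 1)
        else:
            # stays with the root; close the gap left by the removed labels
            shift = e_max - e_min + 1
            root_side[f] = tuple(w - shift if w > e_max else w for w in pair)
    return root_side, detached, (s - (e_max - e_min + 1), e_max - e_min - 1)

# Path 1-2-3-4 with labels {1,2}: (0,5), {2,3}: (1,4), {3,4}: (2,3), size 6;
# deleting {2,3} leaves two single-edge tours, each relabelled (0, 1).
labels = {frozenset((1, 2)): (0, 5), frozenset((2, 3)): (1, 4),
          frozenset((3, 4)): (2, 3)}
root_side, detached, sizes = delete_tree_edge(labels, 6, frozenset((2, 3)))
```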

###### Lemma 5.7 ().

Consider two MSTs M1 and M2, both with an Euler tour structure over them. Given an edge e = (u, v) that connects the two MSTs, we can combine the two MSTs and maintain the Euler tour after O(1) broadcasts.

###### Proof.

The machines hosting u and v both broadcast the sizes of their individual Euler tours, s1 and s2 respectively, as well as the label of an outgoing edge of u and of v, say a and b respectively. The new size of the Euler tour is then s1 + s2 + 2, and the Euler tour labels are updated by the functions f_M1 and f_M2 for the two Euler trees respectively:

    f_M1(w) = w, for w < a
    f_M1(w) = w + s2 + 2, for w >= a
    f_M2(w) = a + 1 + ((w - b) mod s2)

The new edge e has the values a and a + s2 + 1. This describes the Euler tour starting from the root of M1, passing through e into M2 at step a, and then passing back through e at step a + s2 + 1 to continue the Euler tour in M1.

Notice that no additional work is required for the additional Euler tour edge value chosen for each neighbour.∎
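The join can be sketched as follows (our reading of the label arithmetic; a and b are the broadcast labels of outgoing crossings of u and v):

```python
def join_tours(labels1, s1, labels2, s2, a, b, new_edge):
    """Splice M2's Euler tour into M1's at label a (Lemma 5.7)."""
    out = {}
    for f, pair in labels1.items():
        # labels at or after a make room for M2's tour plus the new edge
        out[f] = tuple(w if w < a else w + s2 + 2 for w in pair)
    for f, pair in labels2.items():
        # rotate M2's tour to start at b, then shift it to sit after label a
        out[f] = tuple(sorted(a + 1 + ((w - b) % s2) for w in pair))
    out[new_edge] = (a, a + s2 + 1)    # the two crossings of the new edge
    return out, s1 + s2 + 2

# Two single-edge trees {1,2} and {3,4}, joined by the new edge {2,3}:
m1 = {frozenset((1, 2)): (0, 1)}   # tour 1->2, 2->1; outgoing of 2 is a=1
m2 = {frozenset((3, 4)): (0, 1)}   # tour 3->4, 4->3; outgoing of 3 is b=0
joined, size = join_tours(m1, 2, m2, 2, a=1, b=0, new_edge=frozenset((2, 3)))
```

The resulting tour reads 1->2, 2->3, 3->4, 4->3, 3->2, 2->1, a valid Euler tour of the combined tree.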

Lemmas 5.6 and 5.7 allow us to update the MST by deleting and then adding edges into the MST as required. All that remains is for us to demonstrate that the Euler tour structure allows us to determine the edges to be deleted and added when an update occurs.

### 5.4. Resolving Updates

In this section, we describe how the edges to be added or deleted when an update arrives can be determined in O(1) communication rounds using the Euler tour structure.

#### 5.4.1. Edge Additions

We perform edge additions as follows. When an edge (u, v) is added, to maintain the MST we add that edge to the MST, find the unique cycle that is created, and remove the largest weight edge in that cycle from the MST. To do so, on any input (u, v), each machine determines which of its edges in the current MST are on the path from u to v, and then a leader node finds the global maximum among the largest values from each machine. Each machine can determine if an edge is on the path from u to v as follows:

1. Reroot the tree to u using lemma 5.5.

2. v determines its parent edge and broadcasts p_min and p_max.

3. An edge f is on the path from u to v iff f_min ≤ p_min and p_max ≤ f_max.

4. A max query is run on the edges in this set.

By lemma 5.4, the edges on the path from u to v are labeled with values such that f_min ≤ p_min and p_max ≤ f_max. Now, each machine can figure out which of its edges in the MST have labels that satisfy this property, and a leader node can figure out the global maximum. The leader node compares the current largest weight edge with the new edge, and makes the graph changes as required.
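The per-machine filtering step can be sketched as follows (our illustration; the tuple layout of mst_edges is hypothetical):

```python
def local_heaviest_on_path(mst_edges, p_min, p_max):
    """One machine's local step: among its MST edges, find the heaviest one
    on the u-v path, using the broadcast parent-edge labels (p_min, p_max).

    mst_edges: iterable of ((f_min, f_max), weight, edge_id).
    Returns (weight, edge_id) or None if no local edge is on the path.
    """
    on_path = [(w, eid) for (f_min, f_max), w, eid in mst_edges
               if f_min <= p_min and p_max <= f_max]
    return max(on_path, default=None)

# Hypothetical machine holding three MST edges; v's parent edge is (2, 3).
candidates = local_heaviest_on_path(
    [((0, 5), 7, "a"), ((1, 4), 9, "b"), ((4, 5), 3, "c")], p_min=2, p_max=3)
```

The leader then takes the maximum over the answers returned by all machines, which costs one more broadcast.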

#### 5.4.2. Edge Deletions

To complete edge deletions, recall that to maintain the MST after an edge in the MST is deleted, we can find the minimum weight edge across the induced cut and add it back into the MST.

Lemma 5.2 states that given an edge e in the MST that bipartitions the graph, an edge f is not in the same component as the root iff e_min < f_min and f_max < e_max.

Let V_i be the vertices that live on machine i, and let N(V_i) be the neighbouring vertices of V_i in the graph G. We determine the minimum edge across the cut as follows:

1. The edge e being deleted broadcasts the values e_min and e_max.

2. For each vertex v in V_i ∪ N(V_i): pick an arbitrary edge f connected to v, with Euler tour values f_min and f_max.

• If f lies outside the interval (f_max < e_min, or f_min > e_max, or f_min < e_min and e_max < f_max), or f = e and e is pointing away from v, label vertex v “with root”.

• If f lies strictly inside the interval (e_min < f_min and f_max < e_max), or f = e and e is pointing towards v, label vertex v “away from root”.

3. A min query is run on the edges that have endpoints with different labels.

This is why we store an additional Euler tour edge value for all neighbours: it allows the machines to determine whether edges fall on different sides of the cut.

Since each step only requires O(1) broadcasts, we are done.
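The case analysis in step 2 can be sketched as follows (our reading of the direction rule; the flags f_is_e and e_points_to_v stand in for the stored direction information):

```python
def side_of_cut(f_min, f_max, e_min, e_max, f_is_e=False, e_points_to_v=False):
    """Label a vertex v from one incident tree edge f after deleting e.

    Our reading of the rule: v is in the detached subtree exactly when f's
    label interval nests strictly inside e's, or f is e itself and e's
    first traversal points towards v.
    """
    if f_is_e:
        return "away from root" if e_points_to_v else "with root"
    if e_min < f_min and f_max < e_max:
        return "away from root"
    return "with root"

# Path 1-2-3-4 with labels {1,2}: (0,5), {2,3}: (1,4), {3,4}: (2,3);
# delete e = {2,3} with (e_min, e_max) = (1, 4), first traversed 2 -> 3.
```

For that example, vertex 4 (via its edge (2, 3)) is labelled away from the root, while vertex 1 (via its edge (0, 5)) stays with the root.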

### 5.5. Initialisation

In the work by Klauck et al. (Klauck et al., 2015), to demonstrate the power of their conversion theorem, they described how a Boruvka-style component merging approach allows us to construct an MST in Õ(n/k) rounds. Using our Rerouting Lemma, the same approach yields the following:

###### Theorem 5.8 ().

We can construct an MST in the k-machine model in Õ(n/k) communication rounds.

The proof of this is a straightforward simulation of the Boruvka style MST algorithm using our rerouting lemma. A full proof is available in the appendix.

To complete the usage of Euler tours in our dynamic MST problem, we demonstrate that the Euler tour structure can be initialised within the same round complexity.

However, notice that a naive implementation, merging the Euler tour data structures as the components are merged, is not sufficient. Our merge procedure only allows us to complete merges in pairs, but the merges required after a phase of the component merging algorithm could involve an arbitrary number of trees. The dependencies that might result would not guarantee the required round complexity. In a round where we would have to merge three components in a line, our previous approach would not allow us to complete this in a single round, since we would have to first merge the first two, then merge the resulting component with the third. (An alternate approach is to find a maximal matching of components to merge, but this is, perhaps, simpler. And it is useful later to be able to update multiple edges in the MST at once.)

We demonstrate that we are able to merge Euler tours in O(1) communication rounds. Specifically, we prove the following lemma about k-way merging:

###### Lemma 5.9 ().

Consider any forest with an Euler tour structure over each individual tree. Given a set of O(k) MST edge additions or MST edge deletions that do not create cycles, we can complete all said updates in O(1) communication rounds.

###### Proof.

Suppose these updates are ordered. (If they are not ordered, order them lexicographically.) To complete all updates at once, we do the following:

1. For each update, broadcast:

• An outgoing edge’s Euler tour values from each endpoint.

• The size of the Euler tour of each endpoint.

• The Euler tour values of the edge if it is a deleted edge.

2. Each machine performs the updates in order, updating the above three values as necessary.

Notice that at any point in time, combining two Euler tours or separating two Euler tours only requires the above three values to be broadcast. Each machine can keep track of these values, and update them as necessary throughout the process to ensure that they are still correct after merges and separations. Notice that outgoing edges are only involved in the edge addition case, and as such will never be deleted.

Additional work to update the Euler tour information of neighbours only has to be completed if edges are deleted. Since at most O(k) such vertices are affected, we can just broadcast them all at the end of the process.

Since each step only requires O(k) broadcasts, by our rerouting lemma A.2, this can be achieved in O(1) communication rounds. ∎

This establishes the procedure for initialising the Euler tour trees, and our algorithm is complete. As a result, we have proven Theorem 5.1, the main result of this section.

Notice that this k-way merging lemma allows us to initialise the Euler tour structure regardless of how the MST is built. As such, independent of how the MST is determined, this process always takes O(n/k) communication rounds.

We notice here that this problem seems to lend itself well to batch updates. Updates to the tree, as well as the broadcasts done to determine which edges are to be deleted or added to restore the MST, only require O(1) broadcasts each. This suggests the possibility of resolving O(k) updates in O(1) communication rounds if dependencies can be avoided.

## 6. Batch Dynamic MST

In this section, we present our main contribution: the batch dynamic minimum spanning tree algorithm. For the batch dynamic MST problem, we have O(k) updates arrive, with each update only arriving at the two machines where the updated edge resides. The algorithm is required to determine the MST after these updates are resolved, where each machine knows which of its edges is in the MST. The main goal of this section is to prove the following theorem:

###### Theorem 6.1 ().

There is a dynamic MST algorithm in the k-machine model that can satisfy O(k) dynamic edge updates in O(1) communication rounds, with initialisation in O(n/k) rounds (if the graph is initially non-empty), while using at most a constant factor more space than the original space necessary to store the graph G. The algorithm is deterministic with worst-case bounds in the edge addition case, and is a Las Vegas randomized algorithm in the edge deletion case, completing in O(1) rounds with high probability for each attempt.

For both edge additions and deletions, our k-way updating algorithm described in Section 5.5 allows us to reconstruct the trees as necessary once we know which edges to add or remove. As such, we only have to describe the procedure for determining which edges those are.

We now prove the following lemma:

###### Lemma 6.2 ().

Given a set of O(k) edge updates, we can determine the new MST in O(1) communication rounds. Each machine will know whether each of its edges is in the MST or not.

When O(k) edges are added, it is not immediately clear how we can simultaneously find a set of edges to delete so that the remaining graph is both cycle-free and connected. For example, if we were to pick the cycles induced by each single new edge together with the existing MST edges, the maximal weight edges in all these cycles might be the same edge. In figure 2, where the bold lines represent edges in the original MST and the dotted lines represent new edges being added, a single edge is in all three cycles, and might be the only edge deleted if it were the heaviest weight edge in the graph.

It is also difficult to describe the cycles on which to run max queries. Cycles could be described through the series of added edges they pass through, but such descriptions could be of length Ω(k).

Figure 2. Example 2: Bold edges are edges in the MST, solid edges are edges in the graph G, dotted edges are new edges being added.

The main insight is to notice that there are essentially only O(k) edges that matter. We first begin with some intuition as to what this means. Consider again Figure 2. We first remove edges that are not in any cycles, since they are irrelevant, and can never be considered for removal. We look at the graph as if it were the original MST, with additional edges attached. In Figure 3, we can see this process in action.

Figure 3. Example 2, removing irrelevant edges to obtain M′, then contracting to obtain M′′. The shaded vertex is the sole vertex in B.

Crucially, we wish to decompose the original MST into non-intersecting paths such that at most one edge can be removed from each of the paths. As an example, refer to the decomposition of example 2 into the 5 paths described by the third image in figure 3. Notice, for example, that amongst the three edges in path 1, only one of the three edges can be deleted; if not, the graph will become disconnected. After this, we can consider the contracted graph to the right, and solve the MST problem on that graph instead.
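To make the decomposition concrete, here is a centralized sketch (ours; the paper's procedure is distributed, and the helper name is hypothetical):

```python
from collections import defaultdict

def decompose_paths(mst_edges, new_edges):
    """Centralized sketch of the path decomposition: strip edges not on any
    cycle into one set, then cut the rest into paths at the vertices of A
    (endpoints of new edges) and B (degree > 2 vertices)."""
    adj = defaultdict(set)
    for u, v in mst_edges:
        adj[u].add(v)
        adj[v].add(u)
    A = {v for e in new_edges for v in e}

    # repeatedly peel leaves not in A: surviving edges are those on a cycle
    stripped = set()
    stack = [v for v in list(adj) if len(adj[v]) == 1 and v not in A]
    while stack:
        u = stack.pop()
        if len(adj[u]) != 1 or u in A:
            continue
        (w,) = adj[u]
        adj[u].clear()
        adj[w].discard(u)
        stripped.add(frozenset((u, w)))
        if len(adj[w]) == 1 and w not in A:
            stack.append(w)

    B = {v for v in adj if len(adj[v]) > 2}

    # walk the remaining forest, cutting a new path at every vertex of A | B
    cut, paths, seen = A | B, [], set()
    for s in cut:
        for t in list(adj[s]):
            if frozenset((s, t)) in seen:
                continue
            path, u, v = [], s, t
            while True:
                path.append(frozenset((u, v)))
                seen.add(frozenset((u, v)))
                if v in cut:
                    break
                u, v = v, (adj[v] - {u}).pop()
            paths.append(path)
    return paths + ([list(stripped)] if stripped else [])
```

For instance, the path MST 1-2-3-4-5 with new edges (1, 3) and (1, 5) decomposes into the two MST paths 1-2-3 and 3-4-5, each of which can lose at most one edge without disconnecting the graph.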

We now prove the key claim of this section:

###### Lemma 6.3 ().

Given any MST, and any set of k edges added to connect vertices in the MST, we can decompose the edges of the MST into O(k) disjoint sets such that:

• At most one edge from each set can be removed while maintaining connectedness in the MST and the new edges.

• Each edge is in some set.

###### Proof.

To perform this decomposition, we first remove all edges that are not part of cycles, and place them all in one set. Call the remaining forest M′. We split M′ into paths at the following set of vertices:

• Vertices that are an endpoint of one of the edges being added; call this set A.

• Vertices that have degree more than 2 in M′, for example the shaded vertex in Example 2; call this set B.

M′ consists of all edges that are part of a cycle, which are exactly the edges that lie on the shortest path between some two vertices in A. Any leaf in this forest must be an element of A; if not, the edge connecting to that leaf cannot be on the shortest path between some two vertices in A, and hence cannot be involved in a cycle.

Now, since A is of size at most 2k, the number of leaves in M′ is at most 2k. Since B consists of the vertices that have degree more than 2 in M′, the number of elements in B is bounded by the number of leaves of M′ by a degree double count. Hence, |A ∪ B| = O(k). Trivially, the resulting edge sets are disjoint.

We can now think of the induced tree M′′, with the vertices being the elements of A and B, and the edges being the paths connecting elements of A and B in M′. Since this new graph is a forest, the number of edges it can have is bounded by the number of vertices, and is hence O(k). Hence the number of sets constructed is O(k). In Figure 3, the graph M′′ consists of the solid edges of M′ and the dotted edges that are the newly added edges.

Next, notice that each of these sets is a path from some element of A ∪ B to some element of A ∪ B. We wish to show that at most one edge can be removed from any such set. Suppose otherwise, and let the two edges that can be removed be e₁ and e₂, appearing in that order on the path. Since the remaining graph is still connected, there must be some path P from one endpoint of e₁ to the other that does not pass through e₁. Consider any such path P, and let u be the last vertex of our MST path that P visits before it first leaves the MST path. Consider the partitioning of the MST induced by the edge e₂, and let v be the last vertex that P visits in the component containing u before it first leaves that component.

Now, P cuts across this partition, and the edge with which it crosses the partition is not part of the MST, so v is one endpoint of one of the added edges, and is in A. As such, either v = u, so u is in A, and we obtain a contradiction; or the path from v to u consists of edges that are part of cycles, so the degree of u in M′ is greater than 2, and u is in B, also a contradiction. Either way, u would have been a split vertex, and could not lie in the interior of the path.

As such, each of these sets can have at most one edge removed, whilst maintaining connectivity in the original MST edges and the new edges.

Lastly, since each edge that is part of some cycle is in one of these sets, and all the other edges are in the first set, every edge is part of some set and we are done. ∎
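The decomposition from the proof can be sketched sequentially. The following Python sketch is ours, not the paper's distributed algorithm (function names `tree_path` and `decompose` are hypothetical): it computes M′ as the union of tree paths between endpoints of added edges, collects A and B, and splits M′ into paths at the vertices of A ∪ B.

```python
from collections import defaultdict

def tree_path(adj, u, v):
    """Return the edges on the unique u-v path in a tree (DFS + parent walk)."""
    parent = {u: None}
    stack = [u]
    while stack:
        x = stack.pop()
        if x == v:
            break
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    path, x = [], v
    while parent[x] is not None:
        path.append(tuple(sorted((x, parent[x]))))
        x = parent[x]
    return path

def decompose(tree_edges, added_edges):
    adj = defaultdict(list)
    for a, b in tree_edges:
        adj[a].append(b); adj[b].append(a)
    # M': tree edges on a path between endpoints of some added edge (cycle edges)
    m_prime = set()
    for u, v in added_edges:
        m_prime.update(tree_path(adj, u, v))
    adj2 = defaultdict(list)
    for a, b in m_prime:
        adj2[a].append(b); adj2[b].append(a)
    A = {x for e in added_edges for x in e}
    B = {x for x in adj2 if len(adj2[x]) > 2}
    cut = A | B
    # Split M' into paths between consecutive vertices of A ∪ B.
    # Every leaf of M' is in A (per the lemma), so each walk ends at a cut vertex.
    paths, seen = [], set()
    for s in cut:
        for t in adj2[s]:
            e0 = tuple(sorted((s, t)))
            if e0 in seen:
                continue
            path, prev, cur = [e0], s, t
            seen.add(e0)
            while cur not in cut:
                nxt = next(y for y in adj2[cur] if y != prev)
                e = tuple(sorted((cur, nxt)))
                path.append(e); seen.add(e)
                prev, cur = cur, nxt
            paths.append(path)
    return m_prime, A, B, paths
```

On a 5-vertex path closed into a cycle by one added edge, the whole tree forms a single path between the two endpoints in A, matching the "at most one removable edge" structure.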

Our strategy is to run a max query on each of these sets. Afterwards, we only have to consider the resulting edges, together with the new edges being added, and solve a contracted MST instance of size O(k) that can fit on a single machine. All these edges, together with their endpoints, can be sent to all machines, and each machine can then resolve the MST on its own simultaneously. We have reduced the problem to an MST problem on a contracted graph with only O(k) edges.

The algorithm goes in rounds as follows:

1. All the new edges being added are broadcast to all machines, so that all machines know the set of added edges.

2. Each vertex in A broadcasts the Euler tour values of one of its edges.

3. Vertices in A determine if their edges are part of shortest paths between any elements of A, and broadcast all such edges.

4. All vertices determine if they are in B.

5. All vertices in A and B broadcast the Euler tour values of all edges connected to them that are part of a shortest path.

6. All machines build a picture of the induced tree, and conduct max queries over the sets.

7. The maximums of all the sets are broadcast, and each machine determines the new MST and the edges to be deleted.

8. Euler tours are updated.

We now describe how steps 3 and 4 work in detail. In step 3, vertices in A determine if their edges are part of shortest paths between any two vertices in A. Notice that all such edges are on the shortest path from the vertex itself to some other element of A. Hence, to determine if their edges are shortest-path edges, they simulate the rerooting process, rerooting the tree to each of the other possible roots in A, and checking whether the edges they host are indeed parent edges with respect to some other member of A. The edges that are parent edges after some reroot to some element of A are the edges that are on shortest paths.

Notice too that there are only O(k) such edges, since there are only O(k) paths in M′. Hence broadcasting all these edges takes O(1) communication rounds.

To determine if a vertex is in B, it has to check that it has degree larger than 2 in the graph induced only by shortest paths. To do so, it has to check that it has at least 3 edges connecting to it that are on shortest paths between elements of A.

Recall Lemma 5.4, which states that if r is the root of the Euler tour, then an edge e is on the path between r and v iff the Euler tour values of v fall within the interval spanned by the Euler tour values of e, where e is taken as a parent edge. However, we cannot directly apply this result, since these Euler tour values are only known to the machine hosting the edge after rerooting the tree. To obtain all the required values would take up to O(k) broadcasts for each edge, one for each possible root in A, for a total of O(k²) broadcasts.

What is important is that the edge that is broadcast after the rerooting process is always a parent edge. As such, to avoid this problem, each machine simulates the rerooting process, and determines what the Euler tour values would be, since it has been given all parent edges in step 3.
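The interval test behind Lemma 5.4 can be illustrated with a standard first-visit/last-visit Euler tour numbering (our own simplified encoding; the paper's Euler tour values may be encoded differently): the parent edge of a vertex w lies on the root-to-v path exactly when v's visit time falls inside w's subtree interval.

```python
def euler_intervals(adj, root):
    """Compute first/last visit times of each vertex in a DFS Euler tour."""
    first, last, t = {}, {}, 0
    stack = [(root, None, False)]
    while stack:
        v, p, done = stack.pop()
        if done:
            last[v] = t; t += 1
            continue
        first[v] = t; t += 1
        stack.append((v, p, True))  # revisit v after its subtree
        for w in adj[v]:
            if w != p:
                stack.append((w, v, False))
    return first, last

def edge_on_root_path(first, last, w, v):
    """True iff the parent edge of w lies on the path from the root to v,
    i.e. v's interval is nested inside w's subtree interval."""
    return first[w] <= first[v] and last[v] <= last[w]
```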

To complete step 4, for each vertex v and each edge connected to it, the machine checks all pairs of values in A, simulates the tree rerooting process, and checks whether the edge is indeed on the shortest path between the two vertices. Each vertex then knows its degree in the induced graph, and can determine if it is in B.

After the sets A and B are determined, all vertices broadcast all the parent edges of any member of B with respect to any member of A. Again, since there are only O(k) intervals, there can only be O(k) such values.

In step 6, given the values of the Euler tour, each machine can independently build a picture of the induced tree by placing the edges and vertices in the correct order. Then, for each of the sets, it determines membership using Lemma 5.4, since it has all parent edges. It then finds the maximum-weight edge in each set, and sends it to some machine for collation to find the global maximum of each set. Notice that we can assign the collating machine deterministically: we simply order the paths by the order in which they appear in the Euler tour, and take the path index mod k. This results in O(k) max queries that can be completed in O(1) communication rounds.

Afterwards, each machine knows precisely which edges are relevant, and can obtain the new MST. We simply delete the correct edges, with ties broken by lexicographical order, and maintain the Euler tour structure.

### 6.2. Edge Deletions

We now focus on edge deletions:

###### Lemma 6.4 ().

Given a set of k edge deletions, after O(1) communication rounds with high probability, we can determine the new MST. Each machine will know whether each of its edges is in the MST.

Edge deletions only affect the MST if the deleted edges were originally in the MST. After edge deletions occur, our MST is decomposed into components defined by the deleted edges, and we have to find minimum-weight edges that reconnect our components. This can be reduced to solving a new MST instance on k machines and a graph with O(k) vertices. This is the same as solving the MST problem in the CONGESTED CLIQUE model (with the exception that we allow polylogarithmically many bits of communication per link, rather than O(log n) bits). This problem was solved very recently by Jurdziński and Nowicki (Jurdzinski and Nowicki, 2017) using a randomized approach; their algorithm does not use more than linear space.

To complete the reduction, we have to demonstrate how to convert our setting to the CONGESTED CLIQUE setting, where each machine has all the edges of its vertex. Notice why this is not trivial: from our Euler tour data structure, we can tell, for each edge, the two components it bridges. However, we cannot tell what the minimum-weight edge bridging two given components is without first conducting a min query, which takes a broadcast. Doing a min query for each pair of components to determine all the edges of our new graph would take too many rounds.

Here, we circumvent this problem by noticing the following fact: on each machine, there can only be O(k) edges that can possibly be in the new MST. If there are more candidate edges than components, there must be a cycle among them, and the machine knows that the largest-weight edge on this cycle cannot be in the MST. Now, there are at most O(k) candidate edges on each machine, instead of all of the machine's edges, and we can apply Lenzen’s routing theorem.

Notice also that we cannot apply Lenzen’s routing theorem directly, since there might be more than k edges that connect to a single component: multiple machines may have candidate edges that bridge the same two components. The algorithm is as follows:

1. Broadcast the Euler tour values of all edges being deleted, and label the disconnected components in Euler tour order.

2. Determine which components each edge lies across.

3. Each machine performs cycle deletion to obtain at most O(k) candidate edges.

4. Apply Lenzen’s routing theorem to sort the candidate edges lexicographically.

5. Each machine keeps only the smallest weight edge across any two components.

6. Each machine communicates with its two neighbouring machines (by index), to ensure that there are no duplicates.

7. Use Lenzen’s routing theorem to send all edges touching component i to machine i.

8. Run Jurdziński and Nowicki’s MST algorithm.

Steps 1 and 2 can be completed by applying ideas similar to the single-edge-deletion case described in Section 5.4.2. We construct equivalence classes as follows. Each machine first receives the Euler tour values of the edges being deleted, and lists them out in order. The smaller value of each pair is then represented with an open bracket, and the larger value with a close bracket. All values that are contained in the same pair of brackets, at the same nesting depth, are in the same equivalence class. Each equivalence class then corresponds to the Euler tour values of a connected component. Components are labeled in order. Figure 4 illustrates this process.
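The bracket-matching labeling of steps 1 and 2 can be sketched sequentially as follows (names are ours): each deleted tree edge contributes an interval of Euler tour values, and two Euler tour positions belong to the same component exactly when the innermost deleted-edge interval containing them is the same.

```python
def component_labels(deleted_intervals, positions):
    """Label each Euler tour position by the innermost deleted-edge interval
    containing it; positions sharing the same innermost interval (same
    nesting) form one component.  Intervals are (open, close) pairs with
    open < close, properly nested as Euler tour intervals of tree edges are.
    Labels are assigned in Euler tour order, as in the text."""
    def innermost(p):
        best = None  # None = outside every interval (the outermost component)
        for o, c in deleted_intervals:
            if o < p < c and (best is None or best[0] < o):
                best = (o, c)
        return best

    labels, result = {}, {}
    for p in sorted(positions):
        key = innermost(p)
        if key not in labels:
            labels[key] = len(labels)  # next label in Euler tour order
        result[p] = labels[key]
    return result
```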

Just as in Section 5.4.2, we can determine which components each edge lies across from its endpoints' neighbouring Euler tour values. Given such an Euler tour value, we can determine which component the endpoint is in by looking at where this value lies in the set of brackets determined above. In the event that the value chosen is one of the boundary values (e.g., 13 in Figure 4), the direction of the edge is used to determine the side of the component it lies on.

Once all the edges are labeled with the components they cut across, each machine can perform cycle deletion on all of the edges that it holds, to determine at most O(k) candidate edges that could possibly be in the new MST.
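This local cycle deletion is, in effect, one pass of Kruskal's algorithm over the machine's edges labeled by the components they bridge: keep an edge only if it joins two components not yet connected by kept edges; any discarded edge is the heaviest on some cycle of candidates. A hedged sequential sketch (assuming distinct weights; the name `prune_cycles` is ours):

```python
def prune_cycles(edges):
    """edges: list of (weight, comp_u, comp_v).  Keep a minimum spanning
    forest over component labels; any discarded edge closes a cycle on
    which it is the heaviest edge, so it cannot be in the new MST."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    kept = []
    for w, a, b in sorted(edges):  # increasing weight
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            kept.append((w, a, b))
    return kept
```

Since the kept edges form a forest on the component labels, there are at most (#components − 1) of them, which is what makes Lenzen’s routing applicable.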

## 7. Lower Bounds

For a lower bound, we demonstrate that it is not possible to complete much more than k updates in O(1) rounds: processing batches of k^{1+ε} updates cannot be done in O(1) rounds per batch.

###### Theorem 7.1 ().

For any constant ε > 0, there is a sequence of batch updates, each of size O(k^{1+ε}), such that each batch update requires ω(1) communication rounds to complete.

In (Klauck et al., 2015), it was proven that the lower bound for the MST problem in the k-machine model is Ω̃(n/k), and that a class of graphs requiring this time complexity is the following class Γ(x, y), where x and y are two b-bit binary strings. The graph consists of the vertices s, t, w₁, …, w_b. There is an edge from s to t, and for each i, there is an edge from s to w_i iff x_i = 1, and there is an edge from t to w_i iff y_i = 1. There is also a guarantee that the graph is connected, i.e., that for each i, x_i ∨ y_i = 1.

Importantly, this class of graphs has a number of edges linear in the number of vertices.

The series of batch updates is then as follows. We pick Θ(k^{1+ε}) vertices, and use the first batch updates to delete all edges that have both endpoints in this set of vertices, giving us an edgeless vertex set of size Θ(k^{1+ε}). The next updates occur in pairs: we add a random instance of the above kind, then delete it. When we add the instance, we add it with weights that are a global minimum. Since these new edges are all globally minimum, at the end of this batch of updates the MST instance has to be included in the global MST, and each of these batch updates has to take Ω̃(k^ε) communication rounds, by the result in (Klauck et al., 2015) applied with n = k^{1+ε}. This series of additions and deletions then requires at least Ω̃(k^ε) communication rounds per batch, which is ω(1).
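The hard instances are straightforward to generate. The sketch below uses hypothetical vertex names (s, t, w_i) for the two b-bit strings x, y with the promise x_i ∨ y_i = 1:

```python
def hard_graph(x, y):
    """Build a lower-bound instance G(x, y): an edge (s, t), an edge
    (s, w_i) iff x_i = 1, and an edge (t, w_i) iff y_i = 1.  The promise
    x_i or y_i = 1 keeps the graph connected."""
    assert len(x) == len(y) and all(a == '1' or b == '1' for a, b in zip(x, y))
    edges = [('s', 't')]
    for i, (a, b) in enumerate(zip(x, y)):
        if a == '1':
            edges.append(('s', f'w{i}'))
        if b == '1':
            edges.append(('t', f'w{i}'))
    return edges
```

Note that the number of edges is linear in the number of vertices, which is what lets a batch of k^{1+ε} updates insert and remove a whole instance.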

While in the proof of (Klauck et al., 2015), k was treated as a constant and the result held with high probability in n, it is easy to verify that the entire proof still holds with high probability in k when we set n = k^{1+ε}. We include a copy of the proof in the appendix.

## 8. MPC Model

The above algorithm in the k-machine model maps over almost exactly to the MPC model. The key issue to focus on is space usage: in the k-machine model, we need Õ((m + n)/k) space on each machine. In the sublinear regime of the MPC model, we are not able to store all the edges of a high-degree vertex on a single machine. To adjust from the k-machine model to the MPC model, we have to shift from a vertex-partitioning model to an edge-partitioning model, which we can do since the MPC model allows information to be arbitrarily reorganized. The number of queries we can resolve also scales differently, as the communication bandwidths of the k-machine model and the MPC model scale differently.

###### Theorem 8.1 ().

In the MPC model with M machines and S space on each machine, where S = Θ(n^α) for some constant α > 0 and M · S is large enough to store the graph, there is a dynamic MST algorithm that can process batches of dynamic edge updates in O(1) communication rounds, using at most a constant factor more space than the space necessary to store the graph G. The algorithm is deterministic worst case in the edge addition case, and a Las Vegas algorithm in the edge deletion case, succeeding with high probability in each attempt. The data structure required can be initialised in O(log n) rounds.

We modify some parts of the k-machine algorithm to guarantee that the space requirements are satisfied, and follow an edge-partitioning model: each machine stores a set of edges of the graph. We do not, however, completely disregard the vertex-partitioning model. To make it easier to complete certain vertex operations, we duplicate all edges and store the edges on the machines lexicographically, so that any vertex's edges lie on a contiguous set of machines.

Some adjustments to the data structure have to be made to satisfy the edge-partitioning model. Instead of storing the Euler tour information of a single arbitrary edge for each neighbour, we move this information onto each edge instead. For any edge (u, v), we additionally store an arbitrary Euler tour edge of u and an arbitrary Euler tour edge of v.

A crucial difference here is the initialisation process. Applying the initialisation argument from the k-machine model gives an initialisation time that is too slow: MSTs in the MPC model can in general be solved in O(log n) rounds, much faster than this initialisation time.

To initialise the Euler tour data structure in O(log n) rounds, we use a modified version of the Borůvka-style component-merging algorithm. The primary obstacle is to ensure that the edges we choose to merge do not create dependencies. Merging two components is the same as in the k-machine case, but in the MPC model we can also merge stars: for any component C, and an arbitrary number of components connected to C, we can merge them all in O(1) rounds. We describe this merging process later.

To determine the stars (these do not have to include all neighbours of the central component) that are to be merged, we do the following. In one iteration of Borůvka's algorithm, we determine the minimum outgoing edge from each component. This set of edges forms a forest F. We orient the edges of F along the minimum-outgoing-edge directions, with pairs of edges pointing towards each other resolved by vertex id. Then, we apply the Cole-Vishkin coloring algorithm (Cole and Vishkin, 1986) on this oriented forest to get a 3-coloring of F.

Now, WLOG, let c be the most frequently appearing color in this coloring of F. Each component colored with c picks its minimum outgoing edge, and merges through this edge.

The resulting set of chosen edges cannot contain any path of length 3, and is hence a collection of stars. Moreover, since the most frequent color class contains at least a third of the components, the number of components drops by a constant factor per step, resulting in O(log n) Borůvka steps in total. What remains is to demonstrate that each round of merging can be completed in O(1) rounds.

We sort the components lexicographically. For each component, we call the first machine that holds that component the leader machine for that component. From the previous step, we have obtained some collection of stars, with the centre of each star being a component C. Now, C is unaware of the components it is supposed to merge with, but its leader machine can obtain the vertices it is supposed to merge with through a converge-cast.

Importantly, what allows us to complete the converge-cast successfully for an arbitrary number of components, despite the leader machine having only limited bandwidth, is the nature of the Euler tour values required. We illustrate this process. Suppose some component C′ wishes to merge with C through the edge (u, v), with u ∈ C′ and v ∈ C. Notice that to complete the merge, each component merging with C only needs to know its displacement in the Euler tour. The machine hosting the edge on the C′ side sends the size of the component on the C′ side to the machine hosting the edge on the C side. The machines sum these sizes during the converge-cast towards the leader machine. After receiving the total sizes, the leader machine can calculate the required displacements in the Euler tour, and send back the correct values.

In the k-machine model algorithm, we extensively use broadcast and converge-cast steps. In the MPC model, it is easy to see that broadcasts and converge-casts can be completed in O(1) rounds using broadcast and converge-cast trees (Ghaffari and Choo). This is because S = Θ(n^α) for some constant α > 0, and these trees grow by a factor of S each round, so these broadcasts take O(1/α) = O(1) rounds.

There are only two places in our algorithm where we make use of the fact that a single machine holds all the information about a vertex. We check that both cases remain fine:

• For the merging step, broadcasting an outgoing edge's Euler tour value from each endpoint of an added edge can be done by the leader machine for that vertex.

• For the edge addition case, a vertex verifying that it is indeed in B is a simple degree check, which can be completed in a single round by sending the leader machine for that vertex the number of its edges that lie on shortest paths.

We highlight a section of interest. In the edge deletion case, we reduce to solving an MST instance whose size matches the batch size. While solving an MST instance in O(1) rounds in the sublinear regime is currently an open problem, notice that our batch size scaling with our bandwidth guarantees that we are always in the linear regime, which has been solved.

## 9. Conclusion

In this paper, we have explored how fast a cluster computing environment can maintain a minimum spanning tree subject to a sequence of updates. Essentially, it comes down to the communication bandwidth. In the k-machine model, we can handle O(k) edge updates in O(1) rounds. In the MPC model where each machine has space S, we can handle a batch of updates proportional to the bandwidth in O(1) rounds. (Of note, our contributions do not involve sketching techniques, as is common in earlier approaches, although the MST subroutine we use for the deletion case does.) We also demonstrate a lower bound for the k-machine model, showing that it is not possible for an algorithm to resolve k^{1+ε} updates in O(1) communication rounds. One observation is that the Euler tour data structure is especially useful in the context of dynamic MST in this distributed setting.

Future directions include extending the approach to the problem of Steiner trees in the k-machine model, a structure very similar to minimum spanning trees. Alternatively, under a different set of restrictions (Pandurangan et al., 2018), it is possible to construct an MST in the k-machine model faster, in Õ(n/k²) communication rounds. We wonder if it is also possible to achieve this in the dynamic situation, obtaining correspondingly faster updates. (In that setting, only one endpoint knows that an edge is in the MST. Surprisingly, this allows (Pandurangan et al., 2018) to beat the lower bound.) We also would like to explore whether the approaches described here translate well to other distributed models.

#### Acknowledgments

Thanks to Michael Bender and Martin Farach-Colton for conversations about data stream processing. Thanks to Faith Ellen for useful feedback.

## References

• U. A. Acar, D. Anderson, G. E. Blelloch, and L. Dhulipala (2019) Parallel batch-dynamic graph connectivity. CoRR abs/1903.08794. External Links: Link, 1903.08794 Cited by: §2.
• K. J. Ahn, S. Guha, and A. McGregor (2012) Graph sketches: sparsification, spanners, and subgraphs. In Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS ’12, New York, NY, USA, pp. 5–14. External Links: ISBN 9781450312486, Link, Document Cited by: §1.
• S. Assadi, X. Sun, and O. Weinstein (2019) Massively parallel algorithms for finding well-connected components in sparse graphs. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, PODC ’19, New York, NY, USA, pp. 461–470. External Links: ISBN 9781450362177, Link, Document Cited by: §2, §3.
• R. Cole and U. Vishkin (1986) Deterministic coin tossing with applications to optimal parallel list ranking. Inf. Control 70 (1), pp. 32–53. External Links: ISSN 0019-9958, Link, Document Cited by: §8.
• L. Dhulipala, D. Durfee, J. Kulkarni, R. Peng, S. Sawlani, and X. Sun (2020) Parallel batch-dynamic graphs: algorithms and lower bounds. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, pp. 1300–1319. Cited by: §2.
• M. Ghaffari, F. Kuhn, and J. Uitto (2019) Conditional hardness results for massively parallel computation from distributed lower bounds. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), Vol. , pp. 1650–1663. External Links: Document, ISSN 1523-8288 Cited by: §2.
•  M. Ghaffari and D. Choo Massively parallel algorithms. External Links: Link Cited by: §1, §8.
• M. Ghaffari and M. Parter (2016) MST in log-star rounds of congested clique. In Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing, PODC ’16, New York, NY, USA, pp. 19–28. External Links: ISBN 9781450339643, Link, Document Cited by: §2.
• M. Ghaffari and J. Uitto (2018) Sparsifying distributed algorithms with ramifications in massively parallel computation and centralized local computation. CoRR abs/1807.06251. External Links: Link, 1807.06251 Cited by: §3.
• M. T. Goodrich, N. Sitchinava, and Q. Zhang (2011) Sorting, searching, and simulation in the mapreduce framework. In Algorithms and Computation, T. Asano, S. Nakano, Y. Okamoto, and O. Watanabe (Eds.), Berlin, Heidelberg, pp. 374–383. External Links: ISBN 978-3-642-25591-5 Cited by: §2.
• J. W. Hegeman, G. Pandurangan, S. V. Pemmaraju, V. B. Sardeshmukh, and M. Scquizzato (2015) Toward optimal bounds in the congested clique: graph connectivity and mst. In Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing, PODC ’15, New York, NY, USA, pp. 91–100. External Links: ISBN 9781450336178, Link, Document Cited by: §2.
• M. R. Henzinger and V. King (1995) Randomized dynamic graph algorithms with polylogarithmic time per operation. In Proceedings of the Twenty-Seventh Annual ACM Symposium on Theory of Computing, STOC ’95, New York, NY, USA, pp. 519–527. External Links: ISBN 0897917189, Link, Document Cited by: §5.1, §5.
• G. F. Italiano, S. Lattanzi, V. S. Mirrokni, and N. Parotsidis (2019) Dynamic algorithms for the massively parallel computation model. CoRR abs/1905.09175. External Links: Link, 1905.09175 Cited by: §1, §1, §2, §2, §5.1, §5.
• T. Jurdzinski and K. Nowicki (2017) MST in O(1) rounds of the congested clique. CoRR abs/1707.08484. External Links: Link, 1707.08484 Cited by: §1, §2, §6.2.
• H. Karloff, S. Suri, and S. Vassilvitskii (2010) A model of computation for mapreduce. pp. 938–948. External Links: Document Cited by: §1, §2, §3.
• H. Klauck, D. Nanongkai, G. Pandurangan, and P. Robinson (2015) Distributed computation of large-scale graph problems. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’15, USA, pp. 391–410. Cited by: §A.1, §A.3, §1, §2, §3, §4, §5.5, §7, §7, §7.
• S. Lattanzi, B. Moseley, S. Suri, and S. Vassilvitskii (2011) Filtering: a method for solving graph problems in mapreduce. In Proceedings of the Twenty-Third Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA ’11, New York, NY, USA, pp. 85–94. External Links: ISBN 9781450307437, Link, Document Cited by: §2.
• C. Lenzen (2013) Optimal deterministic routing and sorting on the congested clique. Proceedings of the 2013 ACM symposium on Principles of distributed computing - PODC ’13. External Links: ISBN 9781450320658, Link, Document Cited by: §3, §4.
• Z. Lotker, E. Pavlov, B. Patt-Shamir, and D. Peleg (2003) MST construction in o(log log n) communication rounds. In Proceedings of the Fifteenth Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA ’03, New York, NY, USA, pp. 94–100. External Links: ISBN 1581136617, Link, Document Cited by: §3.
• G. Pandurangan, P. Robinson, and M. Scquizzato (2015) Almost optimal distributed algorithms for large-scale graph problems. CoRR abs/1503.02353. External Links: Link, 1503.02353 Cited by: §2.
• G. Pandurangan, P. Robinson, and M. Scquizzato (2018) Fast distributed algorithms for connectivity and mst in large graphs. ACM Trans. Parallel Comput. 5 (1). External Links: ISSN 2329-4949, Link, Document Cited by: §2, §9.
• D. Peleg (2011) Distributed computing 25th international symposium, disc 2011, rome, italy, september 20-22, 2011: proceedings. Springer. Cited by: §2.
• T. Tseng, L. Dhulipala, and G. Blelloch (2019) Batch-parallel euler tour trees. In Proceedings of the Twenty-First Workshop on Algorithm Engineering and Experiments (ALENEX), pp. 92–106. Cited by: §2.

## Appendix A Omitted proofs

### a.1. Rerouting Lemma

###### Lemma A.1 ().

Any algorithm in the k-machine model that performs a total of T broadcasts in s sets, with the broadcasts within each set having no dependencies, can be completed in a total of O(T/k + s) rounds.

###### Proof.

We can use a re-routing strategy to resolve this problem. Suppose machine i has to complete nᵢ broadcasts during a given set. If we naively complete all the broadcasts, we will require a total of maxᵢ nᵢ communication rounds.

Notice that in the event where one machine has to complete significantly more broadcasts than the other machines, a rerouting strategy is useful. Instead of machine i broadcasting its information itself, it sends sets of different information to each of the other machines, and those machines broadcast the information instead. If T total broadcasts are to be completed in a set, we show that this can in fact be achieved in O(T/k + 1) rounds. The algorithm proceeds as follows:

1. Each machine broadcasts the number of broadcasts it has to do in this set to each other machine.

2. The messages to be broadcast are globally ordered, by machine number, then by message number. Repeat the following two-round procedure ⌈T/k⌉ times. During iteration i:

a. The message with global rank i·k + j is sent to machine j from its source machine.

b. Machine j broadcasts the message it received.

Notice that step 1 is essential to the success of this algorithm, since it guarantees that no two messages will be sent to the same machine in step 2a. The ordering within each machine does not need to be known by all machines, but the number of messages on other machines that have priority over a given message does. ∎
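The schedule in the proof can be sketched as follows (a sequential simulation with hypothetical names): globally rank all messages, relay the message with rank r through machine r mod k, and observe that within each iteration of k consecutive ranks, no relay machine receives two messages.

```python
def relay_schedule(counts, k):
    """counts[m] = number of broadcasts machine m must perform in this set.
    Messages are globally ordered by (machine number, message number); in
    iteration i, the message with rank i*k + j is relayed via machine j,
    which then broadcasts it.  Returns (iterations, assignment) where
    assignment[rank] = relay machine."""
    ranks = [(m, t) for m in range(k) for t in range(counts[m])]  # global order
    assignment = {r: r % k for r in range(len(ranks))}
    iterations = -(-len(ranks) // k)  # ceil(total / k)
    # sanity check: each relay machine receives at most one message per iteration
    for i in range(iterations):
        batch = [assignment[r] for r in range(i * k, min((i + 1) * k, len(ranks)))]
        assert len(batch) == len(set(batch))
    return iterations, assignment
```

With 6 messages on 3 machines, the schedule finishes in 2 iterations regardless of how unevenly the messages are distributed, which is the point of the rebalancing.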

Importantly, this strategy also applies to converge-casts. In particular, this rerouting and re-balancing strategy also works for subroutines such as a max computation, where each machine produces a value, and a global maximum is desired.

Suppose machine i needs to know the maximum of these values, but is also caught up with doing several broadcasts of its own. It can reroute this max computation to any other machine j, and have all machines send the information to j instead. The process occurs as follows:

1. Machine i tells machine j that it requires the max computation.

2. Machine j broadcasts to all other machines, requesting this information, using up the communication edges of j for one round.

3. All machines send this information to machine j, using up the communication edges of j for another round.

4. Machine j then sends the maximum back to machine i.

This completes the converge-cast. This gives us the stronger lemma:

###### Lemma A.2 ().

Any algorithm in the k-machine model that performs a total of T broadcasts and/or max computations in s sets, with the broadcasts and computations within each set having no dependencies, can be completed in a total of O(T/k + s) rounds.

This lemma also implies that the MST construction problem can be solved in O(n/k + log n) rounds by simulating the Borůvka-style component-merging algorithm, instead of the Õ(n/k) rounds described in (Klauck et al., 2015).

### a.2. MST algorithm

###### Theorem A.3 ().

We can construct an MST in the k-machine model in O(n/k + log n) communication rounds.

###### Proof.

We begin with each vertex being its own component. In each phase, we take each component, find its minimum outgoing edge, add it to the MST, and merge the two components it connects. Finding the minimum outgoing edge is essentially a single min-query, and the merging of two components can be done in a single broadcast, to update the component names.

After each phase, the number of components decreases by at least a factor of two, so there are at most O(log n) phases. The total number of min-queries across these phases is O(n), since it is bounded by the number of minimum-outgoing-edge queries, which is at most twice the number of merges. The total number of merges is at most n − 1, so the algorithm requires a total of O(n) broadcasts and min-queries, and can be completed in O(n/k + log n) rounds by applying our rerouting Lemma A.2. ∎
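The phase structure of this proof can be illustrated with a sequential Borůvka simulation (ours, not the distributed algorithm): each phase computes one minimum outgoing edge per component (the min-query) and then merges along the chosen edges (the broadcasts). Distinct edge weights are assumed so the minimum outgoing edges are unique.

```python
def boruvka_mst(n, edges):
    """edges: list of (weight, u, v) with distinct weights.
    Returns the MST edges, simulating one min-query per component per phase."""
    comp = list(range(n))

    def find(x):
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x

    mst = []
    while True:
        best = {}  # component -> minimum outgoing edge (the min-query result)
        for w, u, v in edges:
            cu, cv = find(u), find(v)
            if cu == cv:
                continue
            for c in (cu, cv):
                if c not in best or best[c][0] > w:
                    best[c] = (w, u, v)
        if not best:
            break
        for w, u, v in best.values():  # merge along each chosen edge (one broadcast)
            cu, cv = find(u), find(v)
            if cu != cv:
                comp[cu] = cv
                mst.append((w, u, v))
    return sorted(mst)
```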

### a.3. Lower bound theorem proof

Here, we replicate the proof from (Klauck et al., 2015) that at least Ω̃(n/k) communication rounds are required to determine an MST of a graph with n vertices.

###### Theorem A.4 ().

Every public-coin ε-error randomized protocol on a k-machine network, sending polylog(n) bits per link per round, that computes a spanning tree of an n-node input graph has an expected round complexity of Ω̃(n/k).

###### Proof.

Let b = Θ(n). The class of graphs is Γ(x, y), where x and y are two b-bit binary strings. The graph consists of the vertices s, t, w₁, …, w_b. There is an edge from s to t, and for each i, there is an edge from s to w_i iff x_i = 1, and there is an edge from t to w_i iff y_i = 1. There is also a guarantee that the graph is connected, so that for each i, x_i ∨ y_i = 1. The total number of graphs in this class is 3^b.

With probability 1 − 1/k, the vertices s and t are on different machines, say p₁ and p₂. To guarantee that the output is a spanning tree, the machines hosting s and t have to figure out which of the edges to use in the spanning tree. The proof will demonstrate that to accomplish this, there has to be a large amount of information flow. Specifically, the proof demonstrates that the conditional entropy H(Y | X) has to change by a large amount.

Before any communication occurs, the conditional entropy is:

$$H(Y\mid X)=\sum_{x}\Pr(X=x)\cdot H(Y\mid X=x)=3^{-b}\sum_{l=0}^{b}\binom{b}{l}2^{l}\log 2^{l}=3^{-b}\sum_{l=0}^{b-1}b\binom{b-1}{l}2^{l+1}=\frac{2b}{3}$$

Since the k-machine model employs the random vertex partition model, the machine hosting s, say p₁, knows not only x, but also some of the vertices w₁, …, w_b and their edges, giving it some bits of y. Let Z be the random variable denoting the amount of information about y that p₁ has. Employing a Chernoff bound, we can see that p₁ knows at most ζb bits of y, for some small constant ζ, with probability exponentially close to 1 in b. In the error situation, at most b bits of entropy can be lost, giving a total reduction in entropy of o(b). Hence, we have that H(Y | X, Z) ≥ 2b/3 − o(b).

We now calculate the entropy at the end of the algorithm. With probability 1 − ε, the algorithm succeeds in producing a spanning tree. One of p₁ or p₂ will output at most half of the spanning tree edges, i.e., at most b/2 edges, after the algorithm ends; WLOG, let this be p₂. Let E be the random variable of edges in the output of p₂, and let T be the transcript of all messages to p₂. Now, we have that H(Y | X, T) ≤ H(Y | X, E), since we can simulate the algorithm and calculate E from X and T.

We now estimate H(Y | X, E). Again, we can use a Chernoff bound to obtain that |Y| < 2b/3 + ζb with error probability exponentially small in b. Now, since p₂ outputs at most b/2 edges, the edges of Y that are unknown from E have to correspond to edges in X that have been chosen to be in the spanning tree, and there are at most (2b/3 + ζb) − b/2 = b/6 + ζb such free edges. This gives us at most

$$\sum_{l < b/6+\zeta b}\binom{b/2}{l}$$

possibilities for Y. This gives us the remaining entropy:

$$H(Y\mid X,E)\le \Pr\big(|Y|<2b/3+\zeta b\big)\left(\log\binom{b/2}{b/6+\zeta b}+\log b\right)+o(b)\le H\!\left(\tfrac{1}{3}+2\zeta\right)\frac{b}{2}+o(b)$$

Now, this gives us that H(Y | X) − H(Y | X, E) = Ω(b), since H(1/3 + 2ζ) · (b/2) < 2b/3 for small enough ζ, where H(·) denotes the binary entropy function. Hence p₂ has to have received messages of total size Ω(b). Given that there are k channels into p₂, with polylog(n) bits per channel per round, this gives us the desired result.

Finally, s and t are on the same machine only with probability 1/k, which affects the expected round complexity by at most a constant factor, and we are done. ∎