Parallel computation frameworks and storage systems, such as MapReduce, Hadoop and Spark, have proven to be highly effective for representing and analyzing the massive datasets that appear in the world today. Due to the importance of this new class of systems, models of parallel computation capturing their power have been increasingly studied in recent years, with the Massively Parallel Computation (MPC) model [KSV10] now serving as the canonical model. In recent years the MPC model has seen the development of algorithms for fundamental problems, including clustering [EIM11, BMV+12b, BBL+14, YV18], connectivity problems [RMC+13, KLM+14, ASS+18, ASW19d, ASZ19], optimization [MKS+13, EN15, BEN+16], and dynamic programming [IMS17, BBD+18], as well as many other fundamental graph and optimization problems [BKV12a, ANO+14, KMV+15, AG18, AK17, ASS17, ASS+18, BFU18, CŁM+18, ŁMW18, ONA18, ABB+19a, ACK19b, AKZ19c, BDE+19a, BHH19b, GKM+18, GKU19, GU19, HSS19]. Perhaps the main goal of these algorithms has been to solve the problems in a constant number of communication rounds while minimizing the total communication per round. Obtaining low round-complexity is well motivated due to the high cost of a communication round in practice, which is often between minutes and hours [KSV10]. Furthermore, since communication between processors tends to be much more costly than local computation, ensuring low communication per round is also an important criterion for evaluating algorithms in the model [SAS+13, BKS13].
Perhaps surprisingly, many natural problems such as dynamic programming [IMS17] and submodular maximization [BEN+16] can in fact be solved or approximated in a constant number of communication rounds in the MPC model. However, despite considerable effort, we are still far from obtaining constant-round algorithms for many natural problems in the setting where the space per machine is restricted to be sublinear in the number of vertices in the graph (arguably the most reasonable modeling choice, since real-world graphs can have trillions of vertices). For example, no constant-round algorithms are known for a problem as simple as connectivity in an undirected graph, where the current best bound is $O(\log n)$ rounds in general [KSV10, RMC+13, KLM+14, ASS+18, ŁMW18, ASW19d]. Other examples include sublogarithmic-round algorithms for approximate graph matching [ONA18, GU19] and for vertex coloring [CFG+19]. Even distinguishing between a single cycle of size $n$ and two cycles of size $n/2$ has been conjectured to require $\Omega(\log n)$ rounds [KSV10, RMC+13, KLM+14, RVW18, YV18, GKU19, IM19]. Based on this conjecture, recent studies have shown that several other graph problems, such as maximum matching, vertex cover, maximum independent set and single-linkage clustering, cannot be solved in a constant number of rounds [YV18, GKU19].
On the other hand, most large-scale databases are not formed in a single atomic snapshot, but rather form gradually through an accretion of updates. Real-world examples of this include the construction of social networks [LZY10], the accumulation of log files [HI15], or even the gradual change of the Internet itself [DG08, KTF09, MAB+10]. In each of these examples, the database is formed over a period of months, if not years, of updates, each of which is significantly smaller than the whole database. It is often the case that the updates are grouped together and periodically processed by the database as a batch. Furthermore, it is not uncommon to periodically re-index the data structure between sets of updates in order to handle a large number of queries.
In this paper, motivated by the gradual change of real-world datasets through batches of updates, we consider the problem of maintaining graph properties in dynamically changing graphs in the MPC model. Our objective is to maintain the graph property under batches of updates, using a constant number of rounds of computation while also minimizing the total communication between machines in a given round.
Specifically, we initiate the study of parallel batch-dynamic graph problems in MPC, in which an update contains a number of mixed edge insertions and deletions. We believe that batch-dynamic algorithms in MPC capture the aforementioned real-world examples of gradually changing databases, and provide a more efficient distributed solution than single-update dynamic algorithms when the updates are large. We note that a similar model for dynamic graph problems in MPC was recently studied by Italiano et al. [ILM+19]. However, they focus on the scenario where every update contains only a single edge insertion or deletion. Parallel batch-dynamic algorithms were also recently studied in the shared-memory setting by Tseng et al. [TDB19] for the forest-connectivity problem and Acar et al. [AAB+19] for dynamic graph connectivity. However, the depth of these algorithms is at least logarithmic, and it is not immediately clear whether these results can be extended to low (constant) round-complexity batch-dynamic algorithms in the MPC setting.
We also study the power of dynamic algorithms in the MPC setting by considering a natural “semi-online” version of the connectivity problem which we call adaptive connectivity. We show that the adaptive connectivity problem is $\mathsf{P}$-complete, and therefore in some sense inherently sequential, at least in the centralized setting. In contrast to this lower bound, we show that in the MPC model there is a batch-dynamic algorithm that can process adaptive batches of size proportional to the space per machine in a constant number of rounds. Note that such an algorithm in the centralized setting (even one that ran in slightly sublinear depth per batch) would imply an algorithm for the Circuit Value Problem with polynomial speedup, thus solving a longstanding open problem in the parallel complexity landscape.
1.1 Our Results
Since graph connectivity proves to be an effective representative of the aforementioned difficulty of graph problems in the MPC model, the focus of this paper is studying graph connectivity and adaptive graph connectivity in the batch-dynamic MPC model.
The dynamic connectivity problem is to determine whether a given pair of vertices belongs to the same connected component of the graph as the graph undergoes (batches of) edge insertions and deletions. The dynamic connectivity algorithm developed in this paper is based on a hierarchical partitioning scheme that requires an intricate incorporation of sketching-based data structures from the sequential setting. Not only does our scheme achieve a constant number of rounds, but it also achieves a total communication bound that is linear in the batch size, up to an additional polylogarithmic factor.
In the MPC model with $s$ memory per machine, we can maintain a dynamic undirected graph on $m$ edges which can handle the following operations with high probability:
A batch of up to $s$ edge insertions/deletions, using $O(1)$ rounds.
Query up to $s$ pairs of vertices for 1-edge-connectivity, using $O(1)$ rounds.
Furthermore, the total communication for handling a batch of $k$ operations is $\tilde{O}(k)$, and the total space used across all machines is $\tilde{O}(m)$.
Adaptive Connectivity and Lower-Bounds in the Batch-Dynamic Model
In the adaptive connectivity problem, we are given a sequence of query/update pairs. The problem is to process each query/update pair in order, where each query determines whether a given pair of vertices belongs to the same connected component of the graph, and the corresponding dynamic update is applied to the graph only if the query succeeds. We obtain the following corollary by applying our batch-dynamic connectivity algorithm, Theorem 1.1.
In the MPC model with $s$ memory per machine, we can maintain a dynamic undirected graph on $m$ edges which can handle the following operation with high probability:
An adaptive batch of up to $s$ (query, edge insertion/deletion) pairs, using $O(1)$ rounds.
Furthermore, the total communication for handling a batch of $k$ operations is $\tilde{O}(k)$, and the total space used across all machines is $\tilde{O}(m)$.
We also provide a lower bound for the adaptive connectivity problem in the centralized setting, showing that the problem is $\mathsf{P}$-complete under $\mathsf{NC}$ reduction. $\mathsf{P}$-completeness is a standard notion of parallel hardness [KRS90, GHR+95, BM96]. As a consequence of our reduction, we show that the adaptive connectivity problem does not admit a parallel algorithm in the centralized setting with polynomial speedup, unless the (Topologically-Ordered) Circuit Value Problem admits a parallel algorithm with polynomial speedup, which is a long-standing open problem in the parallel complexity literature.
The adaptive connectivity problem is $\mathsf{P}$-complete under $\mathsf{NC}$ reductions.
By observing that our reduction, and the reductions proving hardness for the Circuit Value Problem, can be carried out in $O(1)$ rounds of MPC, we have the following corollary in the MPC setting.
In the MPC model with $O(n^{\epsilon})$ memory per machine for some constant $\epsilon < 1$, if adaptive connectivity on a sequence of polynomial size can be solved in $T$ rounds, then every problem in $\mathsf{P}$ can be solved in $O(T)$ rounds.
1.2 Batch-Dynamic Model
In this section, we first introduce the Massively Parallel Computation (MPC) model, followed by the batch-dynamic MPC model which is the main focus of this paper.
Massively Parallel Computation (MPC) Model.
The Massively Parallel Computation (MPC) model is a widely accepted theoretical model for parallel computation [KSV10]. Here, the input graph has $n$ vertices and at most $m$ edges at any given instant. We are given $P$ processors/machines, each with local memory $s$ for storage. (Throughout this paper, $\tilde{O}(\cdot)$ and $\tilde{\Omega}(\cdot)$ hide polylogarithmic factors in the size of the input.) Note that we usually assume that $P$ and $s$ are significantly smaller than the input size $N$, e.g., $O(N^{1-\delta})$ for some constant $\delta > 0$. This is because the MPC model is relevant only when the number of machines and the local memory per machine are significantly smaller than the size of the input.
The computation in the MPC model proceeds in rounds. Initially, the input data is distributed across the processors arbitrarily. During each round, each processor runs a polynomial-time algorithm on the data it holds locally. Between rounds, each machine receives at most $s$ data from other machines. The total data received by all machines between two rounds is termed the communication cost. Note that no computation can occur between rounds, and, equivalently, no communication can occur during a round.
The aim of our algorithms in this model is twofold. First and most importantly, we want to minimize the number of rounds required by the algorithm, since this cost is the major bottleneck of massively parallel algorithms in practice; ideally, we would like this number to be as low as $O(1)$. Second, we want to minimize the maximum communication cost over all rounds, since the cost of communication between processors in practice is massive in comparison to local computation.
At a high level, our model works as follows. Similar to recent works by Acar et al. [AAB+19] and Tseng et al. [TDB19], we assume that the graph undergoes batches of insertions and deletions, and in the initial round of each computation, an update or query batch is distributed to an arbitrary machine. The underlying computational model is the MPC model, and we assume that the space per machine is strongly sublinear in the number of vertices of the graph, that is, $s = O(n^{\epsilon})$ for some constant $\epsilon < 1$.
More formally, we assume there are two kinds of operations in a batch:
Update: A set of edge insertions/deletions of size up to $s$.
Query: A set of graph property queries of size up to $s$.
For every batch of updates, the algorithm needs to properly maintain the graph according to the edge insertions/deletions so that it can accurately answer a batch of queries at any instant. We believe that considering batches of updates and queries most closely matches practice, where multiple updates often occur in the examined network before another query is made. Furthermore, in the MPC model there is a distinction between a batch of updates and a single update, unlike in the standard sequential model, because a batch update can be applied in parallel, and handling a batch of updates or queries is as efficient as handling a single update or query, especially in terms of the number of communication rounds.
We use two criteria to measure the efficiency of parallel dynamic algorithms: the number of communication rounds and the total communication between different machines. Note that massively parallel algorithms for static problems are often concerned primarily with communication rounds. In contrast, we also optimize the total communication in the dynamic setting, since total communication becomes a bottleneck in practice when the overall data size is very large, especially when the update is much smaller than the total information of the graph. Ideally, we want to handle batches of updates and queries in a constant number of communication rounds and with total communication sublinear in the number of vertices in the graph.
The key algorithmic difference between the dynamic model we introduce here and the standard MPC model is that we can decide how to partition the input across machines as updates occur to the graph.
Dynamic problems in the MPC model were studied in the very recent paper by Italiano et al. [ILM+19]. Their result only explicitly considers the single-update case. The result of [ILM+19] generalizes to the batch-dynamic scenario, but with higher dependencies on the batch size in both the number of rounds and the total communication. Our incorporation of graph sketching, fast contraction, and batch search trees is critical for obtaining our optimized dependencies on batch sizes.
1.3 Our Techniques
In this section we give an in-depth discussion of the primary techniques used to achieve the results presented in the previous section.
Without loss of generality, we assume that a batch of updates consists of either only edge insertions or only edge deletions. For a mixed batch containing both, we can simply handle the edge deletions first, and then the edge insertions. If the same edge is both inserted and deleted in a batch, we simply eliminate both operations.
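As a concrete illustration, the normalization step above can be sketched as follows (a purely illustrative helper, not from the paper; the function name and edge encoding are our own assumptions):

```python
def normalize_batch(batch):
    """Split a mixed batch into deletions and insertions, cancelling any
    edge that is both inserted and deleted within the same batch.

    `batch` is a list of ('ins' | 'del', u, v) tuples; edges are
    undirected, so (u, v) is canonicalized to (min, max)."""
    ins, dels = set(), set()
    for op, u, v in batch:
        e = (min(u, v), max(u, v))
        (ins if op == 'ins' else dels).add(e)
    cancelled = ins & dels
    # deletions are processed first, then insertions, per the convention above
    return sorted(dels - cancelled), sorted(ins - cancelled)
```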
Similar to previous results on dynamic connectivity [FRE85, GI92, HK99, HDT01, AGM12, KKM13, GKK+15, NS17, WUL17, NSW17], we maintain a maximal spanning forest $F$. This forest encodes the connectivity information of the graph and, more importantly, undergoes few changes per update to the graph. Specifically:
An insert can cause at most two trees in $F$ to be joined into a single tree.
A delete may split a tree into two, but if there exists another edge between the two resulting trees, they must then be reconnected to ensure that the forest remains maximal.
Our dynamic trees data structure adapts the recently developed parallel batch-dynamic data structure of Tseng et al. [TDB19] for maintaining a maximal spanning forest in the shared-memory setting to the MPC model. Specifically, [TDB19] give a parallel batch-dynamic algorithm that, w.h.p. in logarithmic depth, inserts new edges into the spanning forest, removes existing edges from the spanning forest, or queries the IDs of the spanning trees containing given vertices. We show that the data structure can be modified to achieve constant round-complexity, with communication proportional to the batch size up to polylogarithmic factors, in the MPC setting. In addition, if we associate with each vertex a short key, then we can query and update a batch of key values within the same round-complexity and communication bounds.
With a parallel batch-dynamic data structure maintaining a maximal spanning forest, a batch of edge insertions or connectivity queries for the dynamic connectivity problem can be handled in a constant number of rounds with communication near-linear in the batch size. Our strategy for insertions and queries is similar to the dynamic connectivity algorithm of Italiano et al. [ILM+19]: a batch of connectivity queries can be handled by querying the spanning tree IDs of all the vertices involved; two vertices are in the same connected component if and only if their IDs are equal. To process a batch of edge insertions, we maintain the maximal spanning forest by first using ID queries to identify a subset of the given batch that joins different spanning trees without creating cycles, and then inserting these edges into the spanning forest by linking their respective trees.
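In a sequential mock-up, the insertion and query logic above reduces to comparing component IDs. The sketch below substitutes a simple union-find for the batch-dynamic forest (all names are illustrative; the real structure processes each batch in parallel in $O(1)$ MPC rounds):

```python
class ComponentIndex:
    """Toy stand-in for spanning-forest ID queries: `find` plays the role
    of querying a vertex's spanning tree ID."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:   # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def batch_insert(self, edges):
        """Keep only edges whose endpoints have different tree IDs,
        linking the corresponding trees; returns the forest edges."""
        links = []
        for u, v in edges:
            ru, rv = self.find(u), self.find(v)
            if ru != rv:                # joins two different trees
                self.parent[ru] = rv
                links.append((u, v))
        return links

    def batch_query(self, pairs):
        """Two vertices are connected iff their tree IDs are equal."""
        return [self.find(u) == self.find(v) for (u, v) in pairs]
```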
Handling a set of edge deletions, however, is more complex, because if some spanning forest edges are removed, we need to find replacement edges, i.e., edges of the graph that were previously not in the spanning forest and can be added to it without creating cycles. To facilitate this, we incorporate developments in sketching-based sequential data structures for dynamic connectivity [AGM12, KKM13].
To construct a sketch with parameter $p$ for a graph, we first sample every edge of the graph independently with probability $p$, and then set the sketch of each vertex to be the XOR of the IDs of all the sampled edges incident to it. A sketch has the property that, for any subset of vertices, the XOR of the sketches of these vertices equals the XOR of the IDs of all the sampled edges leaving the vertex subset. In particular, if there is only a single sampled edge leaving the vertex subset, then the XOR of the sketches of these vertices equals the ID of that edge.
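The XOR-cancellation property is easy to see in code. Below is a minimal illustration (the edge-ID encoding and function names are our own; a real sketch uses many sampling levels with different parameters $p$):

```python
import random
from functools import reduce

def edge_id(u, v):
    # a canonical integer ID for an undirected edge (assumed encoding)
    a, b = min(u, v), max(u, v)
    return (a << 20) | b

def build_sketch(vertices, edges, p, rng):
    """One sketch: sample each edge independently with probability p, then
    XOR the sampled incident edge IDs into each endpoint's sketch."""
    sketch = {v: 0 for v in vertices}
    for (u, v) in edges:
        if rng.random() < p:
            sketch[u] ^= edge_id(u, v)
            sketch[v] ^= edge_id(u, v)
    return sketch

def cut_xor(sketch, S):
    """XOR of sketches over S: sampled edges internal to S appear twice
    and cancel, leaving the XOR of sampled edges crossing the cut."""
    return reduce(lambda x, y: x ^ y, (sketch[v] for v in S), 0)
```

For example, on two triangles joined by the single edge $(2,3)$, the XOR over one triangle recovers exactly that crossing edge.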
The high-level idea of [AGM12, KKM13] is to use sketches with different parameters to sample, for each current connected component, non-tree edges going out of the component, and to use these edges to merge components that become separated after deleting tree edges. We visualize this process by representing each connected component as a vertex in a multigraph, so that finding a replacement non-tree edge between two components corresponds to merging the two vertices. At first glance, it seems we can translate this approach to the MPC model by storing all the sketches for each connected component on a single machine. However, directly translating such a data structure leads to either $\Omega(\log n)$ communication rounds or $\Omega(m)$ total communication per update batch. To see this, let us look at some intuitive ideas for adapting this data structure to the MPC model, and provide some insight into why they have certain limitations:
Sketch on the original graph: In this case, once we use the sketch to sample an edge going out of a given connected component, we only know the IDs of the two endpoints of the edge, but not the two connected components the edge connects. Obtaining the information about which connected components the endpoints belong to requires communication, because a single machine cannot store the connected component ID of every vertex in the graph. Hence, each iteration of contracting connected components using sampled edges costs one round of communication. Since we may need to reconnect as many as $k$ connected components (where $k$ is the number of deletions, i.e., the batch size), this approach can require $\Omega(\log k)$ communication rounds.
Sketch on the contracted graph, where every connected component is contracted to a single vertex: To do this, each edge needs to know which connected components its endpoints belong to. If a connected component splits into several new components after deleting some tree edges, edges whose endpoints previously belonged to the same component may now connect different components. To let each edge know which components its endpoints belong to, we need to broadcast the mapping between vertices and components to all the affected edges. Hence, the total communication can be as large as $\Omega(m)$. To further illustrate this difficulty via an example, consider the scenario where the current maximal spanning forest is a path on $n$ vertices, and a batch of edge deletions breaks the path into short paths. In this case, almost all the vertices change their connected component IDs. In order to find edges previously not in the maximal spanning forest to link these paths, every edge needs to know whether its two endpoints belong to the same component or not, and updating the component IDs of the endpoints of every edge requires $\Omega(m)$ communication.
The high-level idea of our solution is to speed up the “contraction” process so that a constant number of iterations suffices to shrink every connected component to a single vertex. To achieve this, sampling one edge leaving each connected component in each iteration (as in previous work) is not enough, because of the existence of low-conductance graphs. Hence, we need to sample a much larger number of edges leaving each connected component. Following this intuition, we prove a fast contraction lemma which shows that picking sufficiently many edges out of each component finds all connecting non-tree edges between components within a constant number of iterations.
However, a complication that arises with the aforementioned fast contraction lemma is that it requires the edges leaving a component to be independently sampled. But the edges sampled by a single sketch are correlated. This correlation comes from the fact that a sketch outputs an edge leaving a connected component if and only if there is only one sampled edge leaving that connected component. To address this issue, we construct an independent sample extractor to identify enough edges that are eventually sampled independently based on the sketches and show that these edges are enough to simulate the independent sampling process required by the fast contraction lemma.
We discuss these two ideas in depth below. In the rest of this section, we assume without loss of generality that every current connected component is contracted to a single vertex, since sampled edges within a component cancel under the XOR operation of the sketches.
Fast Contraction Lemma.
We first define a random process for edge sampling (which we term ContractionSampling) in Definition 1.5. The underlying motivation for this definition is that the edges obtained from the sketch are not independently sampled. So, we tweak the sampling process via an independent sample extractor, which produces edges that obey the random process ContractionSampling. Before discussing this independent sample extractor, we first outline why edges sampled using ContractionSampling suffice for fast contraction.
Definition 1.5 (ContractionSampling process).
The random process ContractionSampling for a multigraph $G$ and an integer $k$ is defined as follows: each vertex $v$ independently draws $k$ samples $X_{v,1}, \dots, X_{v,k}$ such that
the outcome of each $X_{v,i}$ is either an edge incident to $v$ or $\bot$;
for every edge $e$ incident to vertex $v$, each sample returns $e$ with probability at least proportional to the multiplicity of $e$ divided by the degree of $v$.
We show that if, in each connected component, we contract the edges sampled by the ContractionSampling process, then the number of remaining edges reduces by a polynomial (in $k$) factor with high probability, for a suitable choice of $k$.
Consider the following contraction scheme starting with a multigraph $G$ on $n$ vertices and $m$ (multi) edges: for a fixed integer $k$,
let $E'$ be the set of edges sampled by the ContractionSampling process;
contract the vertices belonging to the same connected component of the sampled graph $(V, E')$ to form a new graph $G'$ as follows: each vertex of $G'$ represents a connected component of the sampled graph $(V, E')$, and there is an edge between two vertices of $G'$ iff there is an edge in $G$ between the corresponding components, with multiplicity equal to the total multiplicity of the edges of $G$ between the corresponding components.
Then the resulting graph $G'$ has at most $m / k^{\Omega(1)}$ (multi) edges with high probability.
Based on Lemma 1.6, if we iteratively apply the ContractionSampling process with a sufficiently large parameter $k$ and shrink the connected components spanned by sampled edges into single vertices, then every connected component of the multigraph becomes a singleton vertex within a constant number of rounds with high probability.
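To make the contraction scheme concrete, here is a toy simulation of a single round (sampling uniformly from incident edges as a simple stand-in for the ContractionSampling distribution; all names are illustrative and not from the paper):

```python
import random
from collections import defaultdict

def find(parent, x):
    # union-find with path halving
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def contraction_round(n, edges, k, rng):
    """One round: every vertex draws k samples from its incident (multi)
    edges, the sampled edges are contracted, and only inter-component
    edges survive into the next multigraph (relabelled by component)."""
    incident = defaultdict(list)
    for u, v in edges:
        incident[u].append((u, v))
        incident[v].append((u, v))
    parent = list(range(n))
    for x in range(n):
        for _ in range(k):
            if incident[x]:
                u, v = rng.choice(incident[x])
                parent[find(parent, u)] = find(parent, v)
    return [(find(parent, u), find(parent, v))
            for u, v in edges if find(parent, u) != find(parent, v)]
```

On a 64-cycle, a single round with $k = 2$ already removes a large fraction of the edges, since every vertex contracts at least one incident edge.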
Lemma 1.6 can be shown using a straightforward argument for simple graphs. However, in the case of multigraphs (our graphs are multigraphs because there can be more than one edge between two components), the argument is not as easy. It is possible that for a connected component $C$, a large number of the edges leaving $C$ go to a single other connected component $C'$. Hence, in one round, the sampled edges leaving $C$ may all go to $C'$. For this reason, we cannot use a simple degree-based counting argument to show that every connected component merges with many other connected components whenever it is connected to many other connected components.
To deal with parallel edges, and to prove that the contraction completes in a constant, rather than logarithmic, number of rounds, we make use of a more combinatorial analysis. Before giving some intuition for this proof, we define some useful terminology.
Definition 1.7 (Conductance).
Given a graph $G = (V, E)$ and a subset of vertices $S \subseteq V$, the conductance of $S$ w.r.t. $G$ is defined as
$$\phi_G(S) = \frac{|E(S, V \setminus S)|}{\min\{\mathrm{vol}(S), \mathrm{vol}(V \setminus S)\}},$$
where $E(S, V \setminus S)$ is the set of edges crossing the cut $(S, V \setminus S)$ and $\mathrm{vol}(\cdot)$ denotes the sum of the degrees of the vertices in a set.
The conductance of a graph measures how “well-knit” the graph is. Such graphs matter to us because the more well-knit a graph is, the faster it contracts to a singleton vertex. We use the expander decomposition lemma of [ST11], which says that any connected multigraph can be partitioned into such well-knit (high-conductance) subgraphs.
Lemma 1.8 ([ST11], Section 7.1).
Given a parameter $\phi$, any graph $G$ with $n$ vertices and $m$ edges can be partitioned into groups of vertices $V_1, V_2, \dots$ such that
the conductance of each induced subgraph $G[V_i]$ is at least $\phi$;
the number of edges between the $V_i$'s is at most $\tilde{O}(\phi m)$.
For each such “well-knit” subgraph $G[V_i]$ to collapse in one round of sampling, the sampled edges in $G[V_i]$ must form a spanning subgraph of $G[V_i]$. One way to achieve this is to generate a spectral sparsifier of $G[V_i]$ [SS11], which can be obtained by sampling each edge with probability at least $\Omega(\log n)$ times its effective resistance. The effective resistance of an edge is the amount of current that would pass through it when a unit voltage difference is applied across its endpoints, and it measures how important the edge is to the subgraph being well-knit.
As the last piece of the puzzle, we show that the edges sampled by the ContractionSampling process do satisfy the sampling condition required to produce a spectral sparsifier of each $G[V_i]$. Since each such subgraph collapses, Lemma 1.8 also tells us that only a small fraction of the edges are left over, as claimed in Lemma 1.6.
It is important to note that although we introduce sophisticated tools such as expander partitioning and spectral sparsifiers, these tools are used only in the proof, and not in the actual algorithm for finding replacement edges.
From Sketches to Independent Samples.
At a high level, our idea for achieving fast contraction is based on using many independent sketches. However, we cannot directly claim that these sketches simulate a ContractionSampling procedure, as required by the fast contraction lemma (Lemma 1.6), because ContractionSampling requires the edges to be sampled independently. Instead, each sketch as given by [AGM12, KKM13] yields a set of edges constructed as follows:
Pick each edge independently with probability $p$, where $p$ is the parameter of the sketch.
For each vertex which has exactly one sampled edge incident to it, output that sampled edge.
The second step means that the samples picked at two vertices are correlated. Given a vertex $v$, let $Y_v$ be the random variable for the edge picked at $v$ in Step 2 of the above sketch construction process. Consider an example with two adjacent vertices $u$ and $v$. If the outcome of $Y_u$ is the edge $(u, v)$, then the outcome of $Y_v$ cannot be an edge other than $(u, v)$. Hence the two random variables $Y_u$ and $Y_v$ are correlated.
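The correlation can be verified by brute-force enumeration on a path $u - v - w$ with $p = 1/2$ (illustrative code, not part of the algorithm; the helper name is our own):

```python
from itertools import product

def sketch_outputs(sampled_edges, vertices):
    """Step 2 of the sketch: a vertex outputs its incident sampled edge
    iff exactly one sampled edge touches it (None otherwise)."""
    out = {}
    for x in vertices:
        inc = [e for e in sampled_edges if x in e]
        out[x] = inc[0] if len(inc) == 1 else None
    return out

# enumerate all 4 equally likely sampling outcomes (p = 1/2) of the
# path edges (u,v) and (v,w)
edges = [('u', 'v'), ('v', 'w')]
joint = []
for keep in product([False, True], repeat=2):
    sampled = [e for e, k in zip(edges, keep) if k]
    joint.append(sketch_outputs(sampled, ['u', 'v', 'w']))

# P[Y_v = (v,w)] = 1/4, yet P[Y_v = (v,w) | Y_u = (u,v)] = 0
p_v = sum(o['v'] == ('v', 'w') for o in joint) / 4
cond = [o for o in joint if o['u'] == ('u', 'v')]
p_v_given_u = sum(o['v'] == ('v', 'w') for o in cond) / len(cond)
```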
This issue is a direct side effect of the faster contraction procedure. Previous uses of sketching only need to find one edge leaving each component, which suffices for a logarithmic-round algorithm. However, our goal is to terminate in a constant number of rounds, which means we need to exhibit much larger connected components among the sampled edges. For this purpose we need independence, because the concentration arguments we rely on do not tolerate correlation between the edges picked.
Instead, we show that each sketch still generates a large number of independent edge samples. That is, while the samples generated by a copy of the sketch are dependent on each other, a sufficiently large subset of them is, in fact, independent. Furthermore, observe that contraction can only make more progress when more edges are considered, so it suffices to show that this particular subset makes enough progress. Formally, we prove the following lemma.
Given an integer $k$ and a multigraph on $n$ vertices, a sufficiently large collection of independent sketches simulates a ContractionSampling process with parameter $k$. Furthermore, for every edge sampled by the ContractionSampling process, there exists a sketch and a vertex such that the value of the sketch at that vertex is exactly the ID of that edge.
Our starting observation is that for a bipartite graph, the sketching process gives independent edge samples at vertices on the same side: for a bipartite graph $G = (A \cup B, E)$, the process of sampling edges independently and then picking all sampled edges incident to degree-one (with respect to the sampled edges) vertices of $A$ has the property that all the edges picked are independent.
To extend this observation to general graphs, we consider a bipartition $(A, B)$ of the vertices, and view the random sampling of edges in the sketch as a two-step process:
First, we sample the edges with both endpoints inside $A$ and those with both endpoints inside $B$.
Then we sample the edges crossing the bipartition, independently.
After the first step, we remove the vertices of $A$ that have sampled edges incident to them. The second step gives a set of crossing edges, from which we keep those incident to degree-one vertices among the remaining vertices of $A$. By the bipartite observation above, the edges kept in the second step are independent (conditioned on the outcome of the first step).
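The two-step process can be sketched as follows (a purely illustrative simulation with assumed names; the independence of the kept edges shows up structurally as their $A$-side endpoints being distinct):

```python
import random

def extract_independent(A, B, edges, p, rng):
    """Two-step view of one sketch restricted to a bipartition (A, B).

    Step 1: sample the edges internal to A and internal to B, and drop
    every A-vertex touched by a sampled internal edge.
    Step 2: sample the crossing edges independently and keep those
    incident to a degree-one surviving A-vertex."""
    A, B = set(A), set(B)
    internal = [e for e in edges if (e[0] in A) == (e[1] in A)]
    crossing = [e for e in edges if (e[0] in A) != (e[1] in A)]

    sampled_internal = [e for e in internal if rng.random() < p]
    alive = A - {x for e in sampled_internal for x in e}

    sampled_crossing = [e for e in crossing if rng.random() < p]
    kept = []
    for a in alive:
        inc = [e for e in sampled_crossing if a in e]
        if len(inc) == 1:           # degree-one surviving A-vertex
            kept.append(inc[0])
    return kept
```

Because each kept edge is charged to a distinct surviving $A$-vertex, and the incident crossing edge sets of distinct $A$-vertices are disjoint, the kept edges are independent conditioned on Step 1.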
To bound the probability of picking an edge crossing the bipartition, we first lower bound the probability that its endpoint in $A$ remains after the first step, and then check that the second step on the bipartite graph is equivalent to an independent process on the involved edges. The overall lower bound on the probability that an edge is picked then follows by combining the probability of the edge being picked in the second step with the probability that the corresponding vertices survive the first step and the initial pruning of vertices. With this probability estimate, we show that sufficiently many independent sketches boost the probability of picking each edge to the lower bound required by the ContractionSampling process.
Finally, we show that $O(\log n)$ random bipartitions of the graph suffice to ensure that every edge crosses at least one of the bipartitions, and Lemma 1.9 follows.
Adaptive Connectivity and Lower-Bounds in the Batch-Dynamic Model
The adaptive connectivity problem is the “semi-online” version of the connectivity problem in which the entire adaptive batch of operations is given to the algorithm in advance, but the algorithm must apply the query/update pairs in the batch in order; that is, each pair is applied to the graph defined by the prefix of updates before it. We note that the problem is closely related to offline dynamic problems, for example offline dynamic minimum spanning tree and connectivity [EPP94]. The main difference is that in the offline problem the updates (edge insertions/deletions) are not adaptive, and are therefore not conditionally applied based on the queries. We also note that every problem admitting a static $\mathsf{NC}$ algorithm also admits an $\mathsf{NC}$ algorithm for the offline variant of the problem: the idea is to run, in parallel for each query, the static algorithm on the input graph unioned with the prefix of the updates occurring before the query. Assuming the static algorithm is in $\mathsf{NC}$, this gives an $\mathsf{NC}$ offline algorithm (note that obtaining work-efficient parallel offline algorithms for problems like minimum spanning tree and connectivity is an interesting problem for which we are not aware of any results).
Compared to this positive result in the setting without adaptivity, the situation is very different once the updates are allowed to depend adaptively on the results of previous queries, since the simple black-box reduction given for the offline setting is no longer possible. In particular, we show the following lower bound for the adaptive connectivity problem, which holds in the centralized setting: the adaptive connectivity problem is $\mathsf{P}$-complete; that is, unless $\mathsf{P} = \mathsf{NC}$, there is no $\mathsf{NC}$ algorithm for the problem. The adaptive connectivity problem is clearly in $\mathsf{P}$, since we can just run a sequential dynamic connectivity algorithm to solve it. To prove the hardness result, we give a low-depth reduction from the Circuit Value Problem (CVP), one of the canonical $\mathsf{P}$-complete problems. The idea is to take the gates of the circuit in some topological order (note that the version of CVP where the gates are topologically ordered is also $\mathsf{P}$-complete), and transform the evaluation of the circuit into the execution of an adaptive sequence of connectivity queries. We give an $\mathsf{NC}$ reduction which evaluates a circuit using adaptive connectivity queries as follows. The reduction maintains the invariant that all gates that evaluate to true are contained in a single connected component containing some root vertex $r$. Then, to determine whether the next gate in the topological order, say an and gate $g$ with inputs $u$ and $v$, evaluates to true, the reduction runs a connectivity query testing whether the vertices corresponding to $u$ and $v$ are connected in the current graph, and adds an edge $(g, u)$, thereby including $g$ in the connected component of true gates, if the query is true. Similarly, we reduce evaluating or gates to two queries, which check whether $u$ (resp. $v$) is connected to $r$, and add an edge from $g$ in either case if so. A not gate is handled almost similarly, except that the query checks whether its input $u$ is disconnected from $r$. Given the topological ordering of the circuit, generating the sequence of adaptive queries can be done in low depth, and therefore the reduction works in $\mathsf{NC}$.
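A small simulation of this reduction is given below. Since the reduction only inserts edges, a plain union-find suffices here; the gate encoding and all names are our own, and gate vertices are assumed distinct (an illustrative sketch, not the paper's formal construction):

```python
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)
    def connected(self, x, y):
        return self.find(x) == self.find(y)

def run_reduction(inputs, gates):
    """Evaluate a topologically ordered circuit via adaptive connectivity
    queries: true gates join the component of the root 'r', false gates
    stay isolated (so vertices must be distinct for the AND query)."""
    uf = UnionFind()
    for name, val in inputs:
        if val:
            uf.union(name, 'r')   # true inputs start in r's component
        else:
            uf.find(name)         # false inputs are isolated vertices
    for name, op, args in gates:
        if op == 'AND':           # u ~ v iff both lie in r's component
            u, v = args
            if uf.connected(u, v):
                uf.union(name, u)
        elif op == 'OR':          # two queries against r
            if any(uf.connected(u, 'r') for u in args):
                uf.union(name, 'r')
        elif op == 'NOT':         # query for *dis*connectivity from r
            (u,) = args
            if not uf.connected(u, 'r'):
                uf.union(name, 'r')
    return {g[0]: uf.connected(g[0], 'r') for g in gates}
```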
In contrast, in the MPC setting, we show that adaptive batches whose size is proportional to the space per machine can be processed in a constant number of rounds. Our algorithm for adaptive connectivity follows naturally from our batch-dynamic connectivity algorithm, based on the following idea: we assume that every edge deletion in the batch actually occurs, and compute a set of replacement edges for the (speculatively) deleted edges. Computing the replacement edges can be done in the same round complexity and communication cost as a static batch of deletions, using Theorem 1.1. Since the number of replacement edges is at most the batch size, all of the replacements can be sent to a single machine, which then simulates the sequential adaptive algorithm on the graph induced by the vertices affected by the batch in a single round. We note that this MPC upper bound does not contradict the P-completeness result, although achieving a similar result for the depth of adaptive connectivity in the centralized setting for such batch sizes would be extremely surprising, since it would imply a polynomial-time algorithm for the (Topologically Ordered) Circuit Value Problem with sub-linear depth, and therefore polynomial speedup.
Section 2 gives a high-level overview of our approach for graph connectivity. Section 3 discusses the data structure we use to handle batch updates in a constant number of rounds. Section 4 gives the proof of our fast contraction lemma. Section 5 gives the proof of our independent sample extractor from sketches. Section 6 presents the algorithm for graph connectivity together with its correctness proof. Lastly, we present our lower and upper bounds for the adaptive connectivity problem in Section 7.
In this section we prove our result for 1-edge-connectivity, restated here:
Parallel Batch-Dynamic Data Structure.
Similar to previous results on dynamic connectivity [FRE85, GI92, HK99, HDT01, AGM12, KKM13, GKK+15, NS17, WUL17, NSW17], our data structure is based on maintaining a maximal spanning forest, which we denote by $F$. Formally, we define it as follows.
Definition 2.1 (Maximal spanning forest).
Given a graph $G$, we call $F$ a maximal spanning forest of $G$ if $F$ is a subgraph of $G$ consisting of a spanning tree in every connected component of $G$.
Note that this is more specific than a spanning forest, which is simply an acyclic spanning subgraph of $G$. This forest encodes the connectivity information of the graph and, more importantly, undergoes few changes per update to the graph. Specifically:
An insertion can cause at most two trees in $F$ to be joined to form a single tree.
A deletion may split a tree into two, but if there exists another edge between the two resulting trees, they must then be reconnected to ensure that the forest is maximal.
Note that aside from identifying an edge between the two trees formed when deleting an edge from some tree, all other operations are tree operations. Specifically, in the static case, these operations can be entirely encapsulated via tree data structures such as dynamic trees [ST83] or top trees [AHL+05]. We start by ensuring that such building blocks also exist in the MPC setting. In Section 3, we show that a forest can also be maintained efficiently, in a constant number of rounds and with low communication, in the MPC model (Theorem 2.2). In this section, we build upon this data structure and show how to process updates and 1-edge-connectivity queries while maintaining a maximal spanning forest of the graph.
Let $T_v$ denote the tree (component) in $F$ to which a vertex $v$ belongs. We define the component ID of $v$ as the ID of this tree. We represent the trees in the forest using the following data structure, which we describe in more detail in Section 3.
In the MPC model with $O(n^{\epsilon})$ memory per machine for some constant $\epsilon$, and for a key length $\lambda$ such that keys fit within the memory of a machine, we can maintain a dynamic forest with each vertex augmented with a key of length $\lambda$ (keys are summable elements of a semi-group), supporting the following operations:
Link: Insert a batch of edges into $F$.
Cut: Delete a batch of edges from $F$.
ID: Given a batch of vertices, return their component IDs in $F$.
UpdateKey: For each (vertex, key) pair in the batch, update the key of the vertex to the given key.
GetKey: For each vertex in the batch, return its key.
ComponentSum: Given a set of vertices, compute for each vertex $v$ in the set the sum of the keys of the vertices in $v$'s component, under the provided semi-group operation.
Moreover, all operations can be performed in a constant number of rounds, and:
Link and Cut operations can be performed in communication per round,
ID can be performed in communication per round,
UpdateKey, GetKey and ComponentSum operations can be performed in communication per round.
Edge insertions and queries can be handled by the above dynamic data structure: for a set of edge queries, we use the ID operation to query the IDs of all the involved vertices. Two vertices are in the same connected component if and only if their IDs are the same. For a batch of edge insertions, we maintain the spanning forest by first identifying, using the ID operation, all the inserted edges that join different connected components, and then using the Link operation to put these edges into the forest.
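The query and insertion logic above can be sketched as follows, with a naive single-machine forest standing in for the batch-dynamic tree structure (the class and helper names are hypothetical, and component IDs are recomputed by graph search rather than maintained):

```python
# Illustrative only: batch queries answered by comparing component IDs, and
# batch insertions filtered so that only edges joining distinct components
# are linked into the spanning forest.

class NaiveForest:
    """Stand-in for the batch-dynamic tree structure: component_id(v)
    returns the minimum vertex in v's tree, recomputed by graph search."""
    def __init__(self, n):
        self.adj = {v: set() for v in range(n)}
    def link(self, u, v):
        self.adj[u].add(v); self.adj[v].add(u)
    def component_id(self, v):
        seen, stack = {v}, [v]
        while stack:
            x = stack.pop()
            for y in self.adj[x]:
                if y not in seen:
                    seen.add(y); stack.append(y)
        return min(seen)

def batch_query(forest, queries):
    ids = {v: forest.component_id(v) for q in queries for v in q}
    return [ids[u] == ids[v] for (u, v) in queries]

def batch_insert(forest, edges):
    for (u, v) in edges:
        if forest.component_id(u) != forest.component_id(v):
            forest.link(u, v)   # joins two trees; otherwise a non-tree edge

forest = NaiveForest(6)
batch_insert(forest, [(0, 1), (1, 2), (0, 2), (3, 4)])
# (0, 2) closes a cycle and is correctly kept out of the forest
```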
The process of handling a set of edge deletions is more complex. This is because, if some spanning forest edges are removed, we need to find replacement edges in the graph which were previously not in the spanning forest, but can be added to maintain the desired spanning forest. To do this, we augment the tree nodes with keys and use the ComponentSum operation, so that each vertex effectively stores “sketches” that can be combined to find replacement edges upon deletions.
Sketching Based Approach Overview.
At the core of the Delete operation is an adaptation of the sketching-based approach for finding replacement edges by Ahn et al. [AGM12] and Kapron et al. [KKM13]. Since we rely on these sketches heavily, we go into some detail about the approach here. Without loss of generality, we assume every edge has a unique $O(\log n)$-bit ID, which is generated by a random function on the two vertices involved.
For a vertex $v$, this scheme sets the sketch value $s(v)$ to the XOR of the edge IDs of all the edges incident to $v$ (with the IDs viewed as integers): $s(v) = \bigoplus_{e \in E : v \in e} \mathrm{id}(e)$.
For a subset of vertices $S$, we define $\partial(S)$ as the set of edges with exactly one endpoint in $S$. Then, taking the total XOR over all the vertices in $S$ gives (by associativity of XOR, since the ID of every edge with both endpoints in $S$ appears twice and cancels) $\bigoplus_{v \in S} s(v) = \bigoplus_{e \in \partial(S)} \mathrm{id}(e)$.
So if there is only one edge leaving $S$, this XOR over all vertices in $S$ returns precisely the ID of this edge. To address the case with multiple edges crossing the cut, Ahn et al. [AGM12] and Kapron et al. [KKM13] sample subsets of edges at multiple different rates to ensure that, no matter how many edges actually cross, with high probability some sample picks exactly one of them. This redundancy does not cause issues because the edge-query procedure also serves as a way to remove false positives.
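The cancellation identity above is easy to verify directly. The following minimal demonstration (with hypothetical random edge IDs) XORs the per-vertex sketch values over a set $S$: every edge inside $S$ contributes twice and cancels, leaving exactly the ID of the single crossing edge.

```python
# XOR-sketch cancellation demo: XOR-ing per-vertex values over S leaves
# only the IDs of edges crossing the cut (S, V \ S).
import random
from functools import reduce

def make_edge_ids(edges, bits=20, seed=1):
    rng = random.Random(seed)
    return {e: rng.getrandbits(bits) for e in edges}

def vertex_sketch(v, edges, edge_id):
    """XOR of the IDs of all edges incident to v."""
    return reduce(lambda a, b: a ^ b,
                  (edge_id[e] for e in edges if v in e), 0)

# triangle {0, 1, 2} plus a single edge (2, 3) crossing the cut S = {0, 1, 2}
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
edge_id = make_edge_ids(edges)
S = {0, 1, 2}
total = 0
for v in S:
    total ^= vertex_sketch(v, edges, edge_id)
# internal edges cancel; only id((2, 3)) survives
```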
We formally define the sketch as follows:
A sketch with sampling probability $p$ of a graph $G$ is defined as follows:
Every edge is sampled independently with probability $p$. Let $E_p$ be the set of sampled edges.
For every vertex $v$, let $s_p(v) = \bigoplus_{e \in E_p : v \in e} \mathrm{id}(e)$.
We say a sketch generates edge $e$ if there exists a vertex $v$ such that $s_p(v) = \mathrm{id}(e)$. The variant of this sketching result that we will use is stated as follows in Lemma 2.4.
Assume we maintain a sketch for each sampling probability $p = 2^{-i}$, $i = 0, 1, \ldots, O(\log n)$, and let $s(v)$ denote the vector of sketch values at vertex $v$. Then:
upon insertion/deletion of an edge, we can maintain all the $s(v)$'s with $O(\log n)$ XOR operations per update;
for any subset of vertices $S$, from the value $\bigoplus_{v \in S} s(v)$
we can compute edge IDs so that, for any edge $e \in \partial(S)$, the probability that one of these IDs is $\mathrm{id}(e)$ is at least a constant.
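A back-of-the-envelope check of the multi-rate idea (the geometric grid of rates $p = 2^{-i}$ is the illustrative choice here): for a cut crossed by $k$ edges, the rate nearest $1/k$ isolates exactly one crossing edge with constant probability, independently of $k$.

```python
# Probability that exactly one of k edges survives when each is kept
# independently with probability p; maximized over the rates p = 2^{-i}.

def prob_exactly_one(k, p):
    return k * p * (1 - p) ** (k - 1)

# for any k, some rate on the geometric grid succeeds with probability
# bounded below by a constant
best = {k: max(prob_exactly_one(k, 2.0 ** -i) for i in range(60))
        for k in [1, 2, 10, 1000, 10**6]}
```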
Fast Contraction Lemma.
As XOR is a semi-group operation, we can use these sketches in conjunction with the dynamic forest data structure given in Theorem 2.2 to check whether a tree resulting from an edge deletion has any outgoing edges. In particular, $O(\log n)$ copies of this sketch structure allow us to find a replacement edge with high probability after deleting a single edge, in a constant number of rounds and with low total communication. Our algorithm then essentially “contracts” the edges found, reconnecting the temporarily disconnected trees in $F$.
However, a straightforward generalization of the above method to deleting a batch of edges incurs a logarithmic-factor overhead, because this random contraction process may take up to $O(\log n)$ rounds. Consider, for example, a long path: if we pick a constant number of random incident edges at each vertex, then each edge on the path is omitted by both of its endpoints with constant probability. So in the case of a path, we only reduce the number of remaining edges by a constant factor in expectation, leading to a total of roughly $\log n$ rounds. With our assumption of $O(n^{\epsilon})$ space per machine and queries arriving in batches of size $O(n^{\epsilon})$, this would lead to a round count of up to $O(\log n)$.
We address this with a natural modification motivated by the path example: instead of keeping $O(\log n)$ independent copies of the sketching data structures, we keep $O(n^{\delta})$ copies for some small constant $\delta > 0$, which enables us to sample $O(n^{\delta})$ random edges leaving each connected component at any point. As this process only deals with edges leaving connected components, we can also view these connected components as individual vertices. The overall algorithm then becomes a repeated contraction process on a multi-graph: in each round, each vertex picks several random edges incident to it, and the graph is contracted along all picked edges. Our key structural result is a lemma showing that this process terminates in a constant number of rounds with high probability. To formally state the lemma, we first define a random process of sampling edges in a graph.
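The contraction process can be simulated directly. The toy simulation below (our own illustration, not the paper's algorithm) lets each current component pick up to $s$ random incident inter-component edges per round and contracts along all of them; on a path, every component has at most two outgoing edges, so any $s \ge 2$ finishes in a single round, while $s = 1$ exhibits the constant-factor-per-round behavior described above.

```python
# Simulation of repeated contraction: each round, every current component
# picks up to s random incident (inter-component) edges; all picked edges
# are contracted. Returns the number of rounds until nothing is left.
import random

def contraction_rounds(edges, n, s, seed=0):
    rng = random.Random(seed)
    comp = list(range(n))

    def find(x):                      # union-find with path halving
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x

    rounds = 0
    while True:
        live = [(u, v) for (u, v) in edges if find(u) != find(v)]
        if not live:
            return rounds
        rounds += 1
        incident = {}
        for (u, v) in live:           # group surviving edges by component
            incident.setdefault(find(u), []).append((u, v))
            incident.setdefault(find(v), []).append((u, v))
        for es in incident.values():
            for (u, v) in rng.sample(es, min(s, len(es))):
                comp[find(u)] = find(v)     # contract along the picked edge

path = [(i, i + 1) for i in range(255)]     # a path on 256 vertices
r1 = contraction_rounds(path, 256, 1)       # one sample per component
r8 = contraction_rounds(path, 256, 8)       # covers both outgoing edges
```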
Below is our structural lemma, which we prove in Section 4.
Independent Sample Extractor From Sketches.
On a high level, our idea is to use independent sketches to simulate the required ContractionSampling process, and then apply Lemma 1.6. However, we cannot do this naively, because ContractionSampling requires the edges to be sampled independently, whereas the sketch from Lemma 2.4 does not satisfy this property. Recall that the sketch value at a vertex $u$ yields an edge $e$ incident to $u$ only if no other edge incident to $u$ was sampled in the same sketch. Consider an example where two edges $e_1$ and $e_2$ are both recovered from the same sketch. This means that no other edge sharing an endpoint with $e_1$ or $e_2$ can have been sampled in that sketch, implying that the sampling process is not independent.
We would like to remark that this is not an issue for previous sketching-based connectivity algorithms (e.g., [AGM12, KKM13]), because there, in each iteration, every current connected component only needs to find an arbitrary edge leaving it. If most current connected components find such an edge, then after contracting the components along the sampled edges, the total number of connected components decreases by at least a constant factor, so after $O(\log n)$ iterations each connected component shrinks into a single vertex. But in our case the contraction lemma requires the edges to be sampled independently. Hence, we cannot directly apply Lemma 1.6 to the sketches.
To get around this issue, we construct an independent edge sample extractor from the sketches and show that, with high probability, this extractor produces a set of independent edge samples equivalent to a sample from a ContractionSampling random process, as required by Lemma 1.6. One key observation is that if the graph is bipartite, then the sketch values on the vertices of one side of the bipartite graph are independent, because each sampled edge contributes to only one sketch value on that side. The high-level idea of our extractor is then to extract bipartite graphs from sketches such that each edge appears in many bipartite graphs with high probability. For each sketch, consider the following random process:
For each vertex of the graph, randomly assign a color of red or yellow. Then we can construct a bipartite graph with red vertices on one side, yellow vertices on the other side, and an edge is in the bipartite graph if and only if the color of one endpoint is red, and the other endpoint is yellow. Note that this step is not related to the process of sketch construction.
Independently sample every edge not in the bipartite graph with the same probability as the sampling probability used in the sketch.
For each red vertex none of whose incident edges were sampled in Step 2, independently sample every edge incident to the vertex in the bipartite graph with the same probability as that used in the sketch.
Choose all the edges sampled in Step 3 which do not share a red vertex with any other sampled edge.
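The four steps above can be sketched directly. The implementation below is our own illustration (the sampling probability, coloring, and parameter names are hypothetical); it returns the coloring and the Step-4 edges, which by construction have pairwise-distinct red endpoints.

```python
# Illustrative implementation of the four-step independent-sample extractor.
import random

def extract_independent_samples(vertices, edges, p, rng):
    color = {v: rng.choice(('red', 'yellow')) for v in vertices}    # Step 1
    bip = [e for e in edges if color[e[0]] != color[e[1]]]
    rest = [e for e in edges if color[e[0]] == color[e[1]]]

    sampled_rest = [e for e in rest if rng.random() < p]            # Step 2
    blocked = {v for e in sampled_rest for v in e}

    sampled_bip = []                                                # Step 3
    for (u, v) in bip:
        red = u if color[u] == 'red' else v
        if red not in blocked and rng.random() < p:
            sampled_bip.append((u, v))

    red_of = lambda e: e[0] if color[e[0]] == 'red' else e[1]       # Step 4
    counts = {}
    for e in sampled_bip:
        counts[red_of(e)] = counts.get(red_of(e), 0) + 1
    kept = [e for e in sampled_bip if counts[red_of(e)] == 1]
    return color, kept

rng = random.Random(7)
vertices = list(range(8))
edges = [(u, v) for u in vertices for v in vertices if u < v]
color, kept = extract_independent_samples(vertices, edges, 0.3, rng)
```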
We show that the edges obtained in Step 4 are sampled independently (conditioned on the outcome of Step 2). Another way to see this independence is to partition all the independent random variables used in generating the sketches into two random processes $R_1$ and $R_2$ (based on the bipartite graph generated for each sketch) such that $R_1$ and $R_2$ are independent and together simulate a required ContractionSampling process, in the following sense:
After realizing the random process $R_1$, and based on its outcome, define a ContractionSampling process as required by Lemma 1.6.
The random process $R_2$ simulates the defined ContractionSampling process in the following sense: there is a partition of the independent random variables of $R_2$ into groups satisfying the following conditions:
There is a bijection between groups and random variables of the ContractionSampling process.
For each group, there exists a function of the random variables in the group such that the function is equivalent to the corresponding random variable of the ContractionSampling process.
Furthermore, all the edges sampled by the defined ContractionSampling process are generated by the sketches (meaning that there exist a vertex and a sketch such that the sketch value at the vertex is the ID of the sampled edge). In this way, we argue that the edges generated by all the sketches contain a set of edges generated by a ContractionSampling process, so that we can apply Lemma 1.6.
More formally, we define the simulation between two random processes as follows.
We say a set of independent random variables $X = \{x_1, \ldots, x_m\}$ simulates another set of independent random variables $Y = \{y_1, \ldots, y_t\}$ if there exists a subset $Z \subseteq X$ of random variables such that, with constant probability, after fixing all the random variables of $Z$, there are subsets $X_1, \ldots, X_t \subseteq X \setminus Z$ (depending on the outcome of the random process for $Z$) satisfying:
$X_1, \ldots, X_t$ are mutually disjoint.
For every $i$, there exists a random variable which is a function of the random variables in $X_i$, denoted $f_i(X_i)$, such that $f_i(X_i)$ has the same distribution as $y_i$.
We then show that the process of generating sketches simulates the random process in the contraction lemma.
3 Batch-Dynamic Trees in MPC
In this section we describe a simple batch-dynamic tree data structure in the MPC setting. Our data structure is based on a recently developed parallel batch-dynamic data structure in the shared-memory setting [TDB19]. Specifically, Tseng et al. give a parallel batch-dynamic tree that supports batches of $k$ links, cuts, and queries for the representative of a vertex in $O(k \log(1 + n/k))$ expected work and $O(\log n)$ depth w.h.p. Their batch-dynamic trees data structure represents each tree in the forest using an Euler-tour tree (ETT) structure [HK99], in which each tree is represented as the cyclic sequence of its Euler tour, broken at an arbitrary point. The underlying sequence representation is a concurrent skip list implementation that supports batch join and split operations. Augmented trees are obtained by augmenting the underlying sequence representation.
We show that the structure can be modified to achieve low round complexity and communication in the MPC setting. We now define the batch-dynamic trees interface and describe how to extend the data structure to the MPC setting. The main difficulty is that in the shared-memory setting, nodes are stored in separate memory locations and refer to each other via pointers; therefore, traversing the skip list at some level $i$ to find a node's ancestor at level $i+1$ requires traversing all nodes that occur before (or after) it at level $i$. We show that by lowering the sampling probability used to promote nodes to the next level, we can ensure that each block (a contiguous run of same-level nodes) has sublinear size, so each block can be stored within a single machine and this search can be done within a single round. The new sampling probability also ensures that the number of levels is $O(1)$ w.h.p., which is important for achieving our bounds.
Batch-Dynamic Trees Interface. A batch-parallel dynamic trees data structure represents a forest as it undergoes batches of links, cuts, and connectivity queries. A Link links two trees in the forest. A Cut deletes an edge from the forest, breaking one tree into two trees. An ID query returns a unique representative for the tree containing a vertex. Formally, the data structure supports the following operations:
Link takes an array of edges and adds them to the graph $G$. The input edges must not create a cycle in $G$.
Cut takes an array of edges and removes them from the graph $G$.
ID takes an array of vertex IDs and returns an array containing the representative of each queried vertex. The representative of a node $u$ is a unique value $r(u)$ s.t. $r(u) = r(v)$ iff $u$ and $v$ are in the same tree.
Furthermore, the trees can be augmented with values ranging over a domain $D$, and a commutative function $f : D \times D \to D$. The trees can be made to support queries for the sum according to $f$ on arbitrary subtrees, but for the purposes of this paper queries over the entire tree suffice. The interface is extended with the following primitives:
UpdateKey takes an array of (vertex ID, value) pairs and updates the value of each given vertex to the paired value.
GetKey takes an array of vertex IDs and returns an array containing the value of each queried vertex.
ComponentSum takes an array of vertex IDs and returns an array containing, for each queried vertex $v$, $\bigoplus_{u \in T_v} \mathrm{val}(u)$, where $T_v$ is the tree containing $v$, $\mathrm{val}(u)$ is the value of node $u$, and the sum $\bigoplus$ is computed according to $f$.
We show the following theorem in this section. Let $\lambda$ be a parameter controlling the size of the keys stored at each node, and let $\beta$ be a parameter controlling the size of the blocks stored internally within a single machine.
Let $\lambda$ be a parameter controlling the keysize and $\beta$ be a constant controlling the blocksize, suitably bounded so that blocks fit within a machine's memory. Then, in the MPC model with $O(n^{\epsilon})$ memory per machine, there is an augmented batch-dynamic tree data structure that supports batches of Link, Cut, ID, UpdateKey, GetKey, and ComponentSum operations in $O(1)$ rounds per batch w.h.p.
Furthermore, the batch operations cost
communication per round w.h.p. for UpdateKey, GetKey, and ComponentSum
communication per round w.h.p. for Link and Cut and
communication per round for ID.
3.1 Augmented Batch-Dynamic Sequences in
In order to obtain Theorem 3.1, we first show how to implement augmented batch-dynamic sequences in few rounds of MPC. In particular, we will show the following lemma. Note that achieving a similar bound on the round complexity for much larger batches would disprove the two-cycle conjecture. We refer to [TDB19] for the precise definition of the sequence interface.
Let $\lambda$ be a parameter controlling the keysize and $\beta$ be a constant controlling the blocksize, suitably bounded so that blocks fit within a machine's memory. Then, in the MPC model with $O(n^{\epsilon})$ memory per machine, there is an augmented batch-dynamic sequence data structure that supports batches of Split, Join, ID, UpdateKey, GetKey, and SequenceSum operations in $O(1)$ rounds per batch w.h.p.
Furthermore, the batch operations cost
communication per round w.h.p. for UpdateKey, GetKey, and SequenceSum
communication per round w.h.p. for Split and Join and
communication per round for ID.
For the sake of simplicity we discuss the case where values have constant size (i.e., values that fit within a constant number of machine words), and describe how to generalize the idea to larger values at the end of the subsection.
Sequence Data Structure. As in Tseng et al. [TDB19] we use a skip list as the underlying sequence data structure. Instead of sampling nodes with constant probability to join the next level, we sample them with probability $n^{-\beta}$ for a constant $\beta > 0$. It is easy to see that this ensures that the number of levels in the list is $O(1/\beta) = O(1)$ w.h.p. Furthermore, the largest number of nodes at some level that “see” a node at the next level as their left or right ancestor is $O(n^{\beta} \log n)$ w.h.p. We say that the left (right) block of a node belonging to a level consists of all of its siblings to the left (right) before the next node promoted to the level above. As previously discussed, in the MPC setting we should intuitively exploit the locality afforded by the model to store the blocks (contiguous segments of a level) on a single machine. Since each block fits within a single machine w.h.p., operations within a block can be done in one round, and since there are $O(1)$ levels, the total round complexity will be $O(1)$ as desired. Since the ideas and data structure are similar to Tseng et al. [TDB19], we only provide the high-level details and refer the reader to their paper for pseudocode.
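The level structure described above is easy to visualize with a toy build (the concrete parameters $n = 4096$ and promotion probability $1/16$ are our own illustrative choices): with promotion probability $1/b$ the list has about $\log_b n$ levels, and blocks (gaps between consecutive promoted elements) have size around $b$, with a logarithmic-factor tail.

```python
# Toy skip-list level construction: promote each element to the next level
# independently with probability 1/b.
import random

def build_levels(n, b, rng):
    levels = [list(range(n))]       # level 0 holds all n elements
    while len(levels[-1]) > 1:
        nxt = [x for x in levels[-1] if rng.random() < 1.0 / b]
        if not nxt:
            break
        levels.append(nxt)
    return levels

rng = random.Random(42)
levels = build_levels(4096, 16, rng)    # b = 16, i.e. 4096^(1/3)
# block sizes at level 1 = gaps between consecutive promoted elements
blocks = [j - i for i, j in zip(levels[1], levels[1][1:])]
```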
Join. The join operation takes a batch of pairs of sequence elements to join, where each pair contains the rightmost element of one sequence and the leftmost element of another. We process the levels one by one. Consider a join of $(u, v)$. We scan the blocks for $u$ and $v$ to find their left and right ancestors, and join them. In the subsequent round, these ancestors take the place of $u$ and $v$, and we continue recursively until all levels are processed. Observe that at each level, each join we process may create a new block of $O(n^{\beta} \log n)$ elements by merging two existing blocks. In summary, the overall round complexity of the operation is $O(1)$ w.h.p., and the communication is bounded by the total size of the blocks touched w.h.p.
Split. The split operation takes a batch of sequence elements at which to split the sequences they belong to, by deleting the edge to the right of each element. We process the levels one by one. Consider a split at a node $u$. At each level, we first find the left and right ancestors, as in the case of join. We then send all nodes splitting a given block to the machine storing that block, and split it in a single round. Then we recurse on the next level: if the left and right ancestors of $u$ were connected, we call split on the left ancestor at the next level. The overall round complexity is $O(1)$ w.h.p., and the communication is similarly bounded by the sizes of the blocks touched w.h.p.
Augmentation and Other Operations. Each node in the skip list stores an augmented value representing the sum of the augmented values of all elements in the block for which it is a left ancestor. These values are affected by the splits and joins above, but are easily updated within the same round complexity by recomputing the sum within any block that was modified and updating its left ancestor. SetKey operations, which take a batch of sequence elements and update the augmented values at these nodes, can be handled similarly in the same round complexity as join and split above. Note that this structure supports efficient range queries over the augmented values, but for the purposes of this paper returning the augmented value of an entire sequence (SequenceSum) is sufficient, and this can clearly be done in $O(1)$ rounds with low communication. Similarly, returning a representative node (ID) for the sequence can be done in $O(1)$ rounds w.h.p. by finding the top-most level of the sequence containing the queried node and returning the lexicographically first element in this block.
Handling Large Values. If the values have super-constant size $\lambda = \omega(1)$, we can recover similar bounds as follows. Since the blocks have $O(n^{\beta} \log n)$ entries and each value has size $\lambda$, the overall size of a block is $O(n^{\beta} \lambda \log n)$. Therefore blocks can still be stored within a single machine without changing the sampling parameter, provided $\lambda$ is small enough that a block fits in memory. Storing large values affects the bounds as follows. First, the communication cost of performing splits and joins grows by a factor of $\lambda$ due to the increased block size. Second, the cost of getting, setting, and performing a component sum grows by a factor of $\lambda$ as well, since the values returned each have size $\lambda$. Therefore the communication costs of all operations other than finding a representative increase by a multiplicative $\lambda$ factor. Finally, note that the bounds on round complexity are not affected, since nodes are still sampled with probability $n^{-\beta}$.
3.2 Augmented Batch-Dynamic Trees in MPC
We now show how to implement augmented batch-dynamic trees in MPC, finishing the proof of Theorem 3.1. We focus on the case where values have constant size, and explain how the bounds are affected for larger values.
Forest Data Structure. We represent trees in the forest by storing the Euler tour of the tree in a sequence data structure. If the forest is augmented under some domain and commutative function , we apply this augmentation to the underlying sequences.
Link. Given a batch of link operations (which are guaranteed to be acyclic), we update the forest structure as follows. Consider a link $(u, v)$. We first perform a batch split operation on the underlying sequences at the endpoints of all the links, which splits the Euler tours of the underlying trees at the nodes incident to a link. Next, we send all of the updates to a single machine to establish the order in which joins incident to a single vertex are carried out. Finally, we perform a batch join operation using the order found in the previous round to link together multiple joins incident to a single vertex. Since we perform a constant number of batch sequence operations on batches of the same size, the overall round complexity is $O(1)$ w.h.p. by our bounds on sequences, and the communication bounds follow from those for sequences w.h.p.
Cut. Given a batch of cut operations, we update the forest structure as follows. Consider a cut $(u, v)$. The idea is to splice this edge out of the Euler tour by splitting immediately before and after its two traversals in the tour. The tour is then repaired by joining the neighbors of these nodes appropriately. In the case of batch cuts, we perform a batch split for the step above. Notice that many edges incident to a node could be deleted, and therefore we may need to traverse a sequence of deleted edges before finding the next neighbor to join. We handle this by sending all deleted edges and their neighbors to a single machine, which determines which nodes should be joined together to repair the tour. Finally, we repair the tours by performing a batch join operation. Since we perform a constant number of batch sequence operations on batches of the same size, the overall round complexity is $O(1)$ w.h.p. by our bounds on sequences, and the communication bounds follow from those for sequences w.h.p.
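The splice-in/splice-out manipulations on Euler tours can be seen in miniature with plain Python lists standing in for the skip-list sequences (a caricature we add for illustration; the real structure performs these splits and joins on the batch-dynamic skip list):

```python
# List-based caricature of Euler-tour trees: each tree is stored as its
# Euler tour; link and cut become splits and joins of these sequences.

def euler_link(tour_u, tour_v, u, v):
    """Join the tours of two trees by the new edge (u, v)."""
    iu = tour_u.index(u)                 # rotate tour_u to start/end at u
    ru = tour_u[iu:] + tour_u[1:iu + 1]
    iv = tour_v.index(v)                 # rotate tour_v to start/end at v
    rv = tour_v[iv:] + tour_v[1:iv + 1]
    return ru + rv + [u]                 # ... u, v-tour, back to u ...

def euler_cut(tour, u, v):
    """Split a tour at edge (u, v): returns (tour of the side not containing
    tour[0], tour of the side containing tour[0])."""
    pos = [i for i in range(len(tour) - 1)
           if {tour[i], tour[i + 1]} == {u, v}]   # the edge's two traversals
    i, j = pos[0], pos[1]
    inner = tour[i + 1:j + 1]                     # subtree's tour
    outer = tour[:i + 1] + tour[j + 2:]           # splice; drop duplicate node
    return inner, outer

t = euler_link([0], [1], 0, 1)     # tour [0, 1, 0]
t = euler_link(t, [2], 1, 2)       # tour [1, 0, 1, 2, 1]
inner, outer = euler_cut(t, 0, 1)  # cut edge (0, 1): {0} splits off
```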
Augmentation, Other Operations and Large Values. Note that the underlying sequences handle updating the augmented values, and that updating the augmented values at a set of nodes trivially maps to a SetKey call on the underlying sequences. Therefore the bounds for GetKey and SetKey are identical to those for sequences. Similarly, the bounds for ID are identical to those of the sequence structure. For super-constant size values, the bounds are affected exactly as in the case of augmented sequences with large values: the communication costs of all operations other than ID grow by a $\lambda$ factor and the round complexity is unchanged. This completes the proof of Theorem 3.1.
4 Fast Contraction
Lemma 1.6 is important in proving that our algorithm can find replacement edges in the spanning forest quickly in the event of a batch of edges being deleted. The proof idea is as follows. We first show that there exists a partitioning of the vertices such that the edges within the partitions collapse in a single iteration.
To do this, we first need to define a few terms relating to the expansion of a graph. Let $\deg_G(v)$ denote the degree of a vertex $v$ in graph $G$. For the edges in a part to collapse in a single iteration, we need each part to be sufficiently “well-knit”. This property can be quantified using the notion of conductance.
The following lemma proves the existence of a partitioning such that each partition has high conductance.
Now that we have a suitable partitioning, we want a strategy for picking edges in a decentralized fashion such that all edges within a part collapse with high probability. One way to do this is to pick edges which form a spectral sparsifier of the graph. The following lemma by Spielman and Srivastava [SS11] helps in this regard; we use more recent interpretations of it that take sampling dependencies into account.
On a graph $G$, let $D_1, \ldots, D_t$ be independent random distributions over edges such that the total probability of each edge being picked is at least $\Omega(\log n)$ times its effective resistance; then the union of samples drawn from $D_1, \ldots, D_t$ is connected with high probability.
Now we want to show that the ContractionSampling random process (Definition 1.5), in which each vertex draws several samples, actually satisfies the property required by Lemma 4.1, i.e., all edges are picked with probability at least $\Omega(\log n)$ times their effective resistance. To show this, we first need the following Cheeger-type inequality.
Lemma 4.2 ([Am85]).
Given a graph $G$, for any subset of vertices $S$ such that $G[S]$ has conductance $\phi$, we have $x^{\top} L_{G[S]}\, x \ge \frac{\phi^2}{2}\, x^{\top} D_S\, x$ for every vector $x$ with $x^{\top} D_S \mathbf{1} = 0$, where $L_{G[S]}$ is the Laplacian matrix of the subgraph of $G$ induced by $S$, and $D_S$ is the diagonal matrix with the degrees of the vertices in $G[S]$.
Let $S$ be a subset of vertices of $G$ such that $G[S]$ has conductance at least $\phi$ for some $\phi$. For an edge $e = (u, v)$ with $u, v \in S$, the effective resistance of $e$ measured in $G[S]$, denoted $R^{G[S]}_{\mathrm{eff}}(e)$, satisfies
From Lemma 4.2, we get that
Using this, along with the definition of effective resistance, gives us that
For any such subset $S$, we have:
Furthermore, for every vertex $u \in S$, we get
which when substituted into Equation 1 gives
Substituting this bound in completes the proof. ∎
Now, we have enough ammunition to prove Lemma 1.6.
Proof of Lemma 1.6.
From Lemma 1.8, we know that our graph can be partitioned into expanders with conductance at least $\phi$. Now, let $S$ be one such part and let $e = (u, v)$ be an edge contained in $G[S]$. From the definition of the random process in Definition 1.5, we know that the probability that $e$ is sampled by either $u$ or $v$ is at least
where the inequality follows from Lemma 4.3. Since each such edge is chosen with probability at least $\Omega(\log n)$ times its effective resistance w.r.t. $G[S]$, from Lemma 4.1 we know that the edges chosen within $S$ are connected with high probability.
Thus, we are left only with the edges between the parts of the partition, the number of which is suitably bounded. ∎
5 Independent Sample Extractor From Sketches
In this section, we prove Lemma 1.9, which shows how to extract independent edge samples from the sketches, which are inherently dependent.
We start with the definition of an induced bipartite multigraph. Given a multigraph $G$ on a set of vertices, we say $H$ is an induced bipartite multigraph of $G$ if the vertices are partitioned into two disjoint sets $R$ and $Y$, and an edge of $G$ belongs to $H$ if and only if the edge has one endpoint in $R$ and one endpoint in $Y$.
For a fixed multigraph $G$ and an induced bipartite multigraph $H$ of $G$, we conceptually divide the process of generating a sketch with parameter $p$ into two phases:
Phase 1. Independently sample each edge not in the bipartite graph with probability $p$.
Phase 2. Independently sample each edge in the bipartite graph with probability $p$.
Given a multigraph $G$ and an induced bipartite multigraph $H$ of $G$, with high probability, independent sketches simulate the following random process: every vertex $v$ is associated with independent random variables, for some number of variables satisfying the conditions below.
The outcome of each such random variable is either an edge incident to $v$, or empty.
For every edge incident to vertex ,
Furthermore, for every edge sampled by the above random process, there exist a sketch and a vertex such that the value of the sketch at the vertex equals the sampled edge's ID.
Recall that there are multiple independent sketches corresponding to each sampling probability. Let $p_j$ denote the parameter of the $j$-th sketch.
Let $m$ denote the number of edges in $G$. We use $x_{j,1}, \ldots, x_{j,m}$ to denote the random variables indicating which edges are present in the $j$-th sketch. Hence, the random process of generating all the sketches corresponds to sampling the random variables $x_{j,i}$ for all $j$ and $i$.
Let $Z$ be the set of random variables corresponding to Phase 1 of all the sketches. We define another random process based on the outcome of $Z$ as follows. For the $j$-th sketch and any vertex $v$, if no edge incident to vertex $v$ was sampled in Phase 1 of the $j$-th sketch, then we define a new independent random variable $y_{j,v}$ such that
$y_{j,v}$ takes value $e$ with the appropriate probability if $e$ is an edge of $H$ incident to vertex $v$, and is empty otherwise.
If at least one edge incident to vertex $v$ was sampled in Phase 1 of the $j$-th sketch, then we do not define the random variable $y_{j,v}$.
Now, for an arbitrary , let
For a single sketch with parameter $p_j$, the probability that no edge incident to $v$ was sampled in Phase 1 is
Applying a Chernoff bound, with high probability a constant fraction of the random variables $y_{j,v}$ are defined, such that each defined $y_{j,v}$ equals edge $e$ with the required probability for every edge $e$ of $H$ incident to $v$. Hence, for any edge incident to $v$ in graph $H$, we have
By union bound, with probability , all the defined random variables ’s form the required random process.
In the rest of this proof, we show that Phase 2 of each sketch simulates the generation of the defined random variables . For every defined random variable , we let
denote the random variable for the -th sketch which corresponds to edges incident to vertex in graph . It is easy to verify that . Furthermore, all the ’s are mutually disjoint. We define a function
Since all the random variables in are independent, we have
for any edge incident to in , and
Then the lemma follows. ∎
Using the above lemma, we can now prove Lemma 1.9.
Proof of Lemma 1.9.
We repeat the following process times:
Every vertex is independently assigned the color red with probability 1/2, and is assigned yellow otherwise.
Let be the vertices with red color and be all the vertices with yellow color. Construct the induced bipartite multigraph , where contains all the edges of with one red vertex and one yellow vertex.
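The coloring step above can be sketched as follows. The function name and sequential style are illustrative, not the paper's routine; one call corresponds to one iteration of the repeated process.

```python
import random

def sample_bipartite_multigraph(vertices, edges, rng=None):
    """One round of the repeated process: color each vertex red with
    probability 1/2 (yellow otherwise), and keep the edges with one
    endpoint of each color as the induced bipartite multigraph."""
    rng = rng or random.Random()
    red = {v for v in vertices if rng.random() < 0.5}
    yellow = set(vertices) - red
    # An edge survives iff exactly one endpoint is red.
    bipartite_edges = [(u, v) for (u, v) in edges
                       if (u in red) != (v in red)]
    return red, yellow, bipartite_edges
```

Each particular edge survives a given round with probability 1/2, which is what drives the Chernoff-plus-union-bound argument below.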
By a Chernoff bound and a union bound, with probability at least , for every edge and every vertex contained in the edge , there is a sampled bipartite multigraph such that and .
Assume every edge belongs to at least one sampled bipartite multigraph. For each sampled bipartite multigraph, we assign sketches. The lemma follows by applying the above lemma to every bipartite multigraph and its assigned sketches. ∎
6 Connectivity Algorithms and Correctness
We give the algorithms for batch edge queries, batch edge insertions, and batch edge deletions and prove the correctness in Section 6.1, Section 6.2 and Section 6.3 respectively. Putting together Lemmas 6.1, 6.3 and 6.2 then gives the overall result as stated in Theorem 1.1.
Throughout this section, we will use the batch-dynamic tree data structure discussed in Section 3 to maintain
a maximal spanning forest of the graph,
a key for every vertex , where
is a vector of sketch values on vertex ,
an edge list data structure which can be used to check if an edge is in the graph given an edge ID.
6.1 Algorithm for Batch Edge Queries
Since is a maximal spanning forest, the query operations are directly provided by calling ID on all involved vertices. Pseudocode of this routine is in Algorithm 1.
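A sequential sketch of what the batch query computes is below. The paper's ID operation (provided by the batch-dynamic tree structure) is stood in for by a hypothetical canonical-root labelling found by DFS over the spanning forest; two vertices are connected exactly when their labels agree.

```python
def batch_query(forest_adj, queries):
    """Answer connectivity queries against a maintained spanning forest.

    `forest_adj` maps each vertex to its neighbors in the forest F.
    Illustrative stand-in for the ID operation: label every vertex with
    a canonical root of its tree, then compare labels per query."""
    comp = {}
    for root in forest_adj:
        if root in comp:
            continue
        comp[root] = root
        stack = [root]
        while stack:
            u = stack.pop()
            for w in forest_adj[u]:
                if w not in comp:
                    comp[w] = root
                    stack.append(w)
    # Two vertices are connected iff they carry the same tree label.
    return [comp[u] == comp[v] for (u, v) in queries]
```

Since F is a maximal spanning forest of the graph, equal labels in F coincide with connectivity in the full graph.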
The algorithm Query (Algorithm 1) correctly answers connectivity queries and takes rounds, each with total communication at most .
The correctness and performance bounds follow from the fact that is a maximal spanning forest of and from Theorem 2.2. ∎
6.2 Algorithm for Batch Edge Insertions
Given a batch of edge insertions, we want to identify the subset of edges from the batch that we will add to so as to maintain the invariant that is a maximal spanning forest. To do this, we use the ID operation to find the IDs of all vertices involved in the edge insertion batch. We then construct a graph which initially contains all the edges in the edge insertion batch, and contract vertices from the same connected component of into a single vertex. Since this graph contains edges, we can place it on a single machine and compute a spanning forest of . We maintain the maximal spanning forest by adding the edges of to . We also maintain the edge list data structure by adding the inserted edges to the list, and maintain the sketches for the involved vertices via the UpdateKey operation. Pseudocode of the batched insertion routine is in Algorithm 2.
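The contraction-and-spanning-forest step can be sketched as follows. This is a sequential, single-machine illustration: `comp_id` stands in for the ID operation, and a union-find over component representatives plays the role of computing a spanning forest of the contracted graph; none of these names come from the paper.

```python
def batch_insert(comp_id, new_edges):
    """Select which inserted edges should join the spanning forest F.

    `comp_id` maps each vertex to the representative (ID) of its current
    component in F. Each endpoint is contracted to its component ID, and
    a spanning forest of the contracted graph is computed via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    forest_edges = []
    for (u, v) in new_edges:
        ru, rv = find(comp_id[u]), find(comp_id[v])
        if ru != rv:            # edge joins two distinct components
            parent[ru] = rv     # contract them
            forest_edges.append((u, v))
    return forest_edges        # edges to add to F
```

Edges whose endpoints are already in the same component are discarded, so adding the returned edges to F keeps it a forest while restoring maximality.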