
On Distributed Listing of Cliques

07/10/2020
by   Keren Censor-Hillel, et al.

We show an Õ(n^{p/(p+2)})-round algorithm in the CONGEST model for listing of K_p (a clique with p nodes), for all p = 4, p ≥ 6. For p = 5, we show an Õ(n^{3/4})-round algorithm. For p = 4 and p = 5, our results improve upon the previous state-of-the-art of O(n^{5/6+o(1)}) and O(n^{21/22+o(1)}), respectively, by Eden et al. [DISC 2019]. For all p ≥ 6, ours is the first sub-linear round algorithm for K_p listing. We leverage the recent expander decomposition algorithm of Chang et al. [SODA 2019] to create clusters with a good mixing time. Three key novelties in our algorithm are: (1) we carefully iterate our listing process with coupled values of min-degree within the clusters and arboricity outside the clusters, (2) all the listing is done within the cluster, which necessitates new techniques for bringing into the cluster the information about all edges that can potentially form K_p instances with the cluster edges, and (3) within each cluster we use a sparsity-aware listing algorithm, which is faster than a general listing algorithm and which we can allow the cluster to use since we make sure to sparsify the graph as the iterations proceed. As a byproduct of our algorithm, we show an optimal sparsity-aware algorithm for K_p listing, which runs in Θ̃(1 + m/n^{1+2/p}) rounds in the CONGESTED CLIQUE model. Previously, Pandurangan et al. [SPAA 2018], Chang et al. [SODA 2019], and Censor-Hillel et al. [TCS 2020] showed sparsity-aware algorithms for the case of p = 3, yet ours is the first such sparsity-aware algorithm for p ≥ 4.



1 Introduction

The problem of listing cliques of size p, as well as many additional subgraph-related problems, is a fundamental problem that has been extensively studied in many computational settings. Given a subgraph H and a graph G, the problem of H-listing (also referred to as enumeration) requires that every node outputs a set of instances of H, such that the union of all outputs is the list of all instances of H in G.

We achieve K_p listing in the CONGEST model in a sub-linear number of rounds for all p ≥ 4, and in Õ(n^{p/(p+2)}) rounds for all p ≠ 5. (In the CONGEST model, the n-node graph is the communication graph and messages of O(log n) bits can be sent in synchronous rounds. We use the Õ notation to hide polylogarithmic multiplicative factors; all logarithms in the paper are in base 2.)

The first breakthrough in this area was the sub-linear Õ(n^{3/4})-round algorithm for K_3 listing of Izumi and Le Gall [15], which was followed by the insightful algorithms of Chang et al. [4] and Chang and Saranurak [5], who brought the complexity down to a tight Õ(n^{1/3}) rounds. When p ≥ 4, many additional challenges arise for K_p listing, with some obstacles already appearing at p = 4, and others at p = 5. Recently, Eden et al. [8] presented the first sub-linear algorithms for K_4 and K_5 listing, running in O(n^{5/6+o(1)}) and O(n^{21/22+o(1)}) rounds, respectively, overcoming some significant obstacles.

For p ≥ 6, no sub-linear round algorithms were known for K_p listing prior to our work.

Our algorithm relies on a new set of techniques which simultaneously solve K_p listing in a sub-linear number of rounds, for all p ≥ 4. We leverage the recent expander decomposition algorithm of Chang et al. [4] to create clusters with a good mixing time. Three key novelties in our algorithm are: (1) we carefully iterate our listing process with coupled values of min-degree within the clusters and arboricity outside the clusters, (2) all the listing is done within the cluster, which necessitates new techniques for bringing into the cluster the information about all edges that can potentially form K_p instances with the cluster edges, and (3) within each cluster we use a sparsity-aware listing algorithm, which is faster than a general listing algorithm and which we can allow the cluster to use since we make sure to sparsify the graph as the iterations proceed.

The following is the formal statement of our main contribution.

Theorem 1.1.

For all p ≥ 4, there exists an algorithm for K_p-listing in the CONGEST model which completes in Õ(n^{3/4} + n^{p/(p+2)}) rounds, w.h.p.

Notice that for all p ≥ 6, the n^{p/(p+2)} term dominates. For the case of p = 4, we are able to remove the first term and achieve an even faster algorithm which takes Õ(n^{p/(p+2)}) = Õ(n^{2/3}) rounds, giving us the following.

Theorem 1.2.

There exists an algorithm for K_4-listing in the CONGEST model which completes in Õ(n^{2/3}) rounds, w.h.p.

Nonetheless, for the lone case of p = 5, the n^{3/4} term remains and dominates the second. Most of the paper is devoted to proving Theorem 1.1, and in Section 3 we show the modifications required in order to get rid of the first term for the case of p = 4 and prove Theorem 1.2.

Notice that our results get closer to the lower bound of Ω̃(n^{1-2/p}) shown in Fischer et al. [10].

Lastly, we also present the following result in the CONGESTED CLIQUE model. (In the CONGESTED CLIQUE model, the n-node graph is the input graph and messages of O(log n) bits can be sent in synchronous rounds between any two nodes.)

Theorem 1.3.

For all p ≥ 4, there exists an algorithm for K_p-listing in the CONGESTED CLIQUE model which completes in Θ̃(1 + m/n^{1+2/p}) rounds, w.h.p.

Here, m is the number of edges in the input graph. This algorithm is a byproduct of our sparsity-aware algorithm used in proving Theorem 1.1, and so its formal proof is deferred to Section 4.

1.1 The challenges

The ingenious K_3 listing algorithms of [4, 5] construct and apply expander decompositions which break up the input graph into dense clusters with good mixing times. Then, each cluster lists all the K_3 instances which have at least one edge within the cluster itself. When moving to K_p listing with p ≥ 4, a critical dissimilarity arises: a K_p instance with a single edge in a specific cluster can also have edges which are not incident to any of the cluster nodes, unlike in the K_3 case. This difference raises two main challenges which we address throughout the paper:

Challenge 1. After applying the expander decomposition, for each cluster we need to ensure that any edge e which participates in a K_p instance involving some edge inside the cluster, such that e is not incident to any of the cluster nodes, is known to some node in the cluster.

Challenge 2. We need to perform the listing process efficiently within each cluster, despite the fact that after bringing edges into a cluster, the amount of information the cluster has to process can be substantially larger than the bandwidth available within the cluster.

In Eden et al. [8], the first challenge is tackled for the K_4 case. This is done by splitting the nodes outside a cluster into heavy and light nodes, where heavy nodes have the required bandwidth in order to send their entire neighborhood into the cluster, while light nodes do not have many neighbors inside the cluster and thus can, with few queries to the cluster nodes, list all the K_4 instances which they share with the cluster nodes. This novel technique resolves Challenge 1. However, overcoming the second challenge is necessary for further improving the runtime.

In the cases of p ≥ 5, both challenges remain, since unlike in the case of p = 4, there can be three or more nodes outside a cluster involved in a K_p instance with a cluster edge. Thus, now a light node would also have to learn about edges outside the cluster in order to determine if it is in a K_p instance, incurring an overhead of too many rounds. For this reason, the algorithm for K_5 in [8] takes a very different approach than the one they present for K_4.

1.2 Our approach

The key ingredients of our approach for solving these challenges are controlling the sparsity of the problem assigned to each cluster, and creating a sparsity-aware algorithm based on a wide array of critical observations. Our result presents a unified algorithm which solves Challenge 1 in Õ(n^{3/4}) rounds, regardless of the value of p, and then solves Challenge 2 in Õ(n^{p/(p+2)}) rounds. The guiding principles utilized in solving these challenges may prove useful for other subgraph-related problems in the CONGEST model.

We first present how to overcome Challenge 2, since the solution for Challenge 1 relies on it.


Coping with Challenge 2: Controlling the bandwidth vs. problem size ratio. A necessary (though insufficient) requirement for speeding up the round complexity in the CONGEST model is ensuring that the bandwidth available to each cluster is proportional to the size of the problem assigned to it, that is, to the number of edges for which it must perform listing.

To see this, consider the case of K_3 in the CONGEST and CONGESTED CLIQUE models. The round complexity of K_3 listing in the CONGEST model is Õ(n^{1/3}) rounds, as mentioned above. Nonetheless, as shown by Pandurangan et al. [18] and by Censor-Hillel et al. [3], if the input graph is sparse, it is possible to perform K_3 listing in Õ(1 + m/n^{5/3}) rounds in the CONGESTED CLIQUE model, where m is the number of edges in G. Intuitively, for similar reasons, it should hold that using a CONGESTED CLIQUE algorithm in a cluster with x nodes in order to list all K_p instances in an input graph with x nodes and m edges should incur a round complexity of roughly Õ(1 + m/x^{1+2/p}).
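To make this concrete, the following back-of-the-envelope sketch (ours, for illustration; the function names are not from the paper) evaluates the sparsity-aware bound Õ(1 + m/n^{1+2/p}) against the dense bound n^{1-2/p}, ignoring constant and polylogarithmic factors.

```python
# Illustrative arithmetic only: compares the dense K_p listing bound
# n^(1 - 2/p) with the sparsity-aware bound 1 + m/n^(1 + 2/p),
# ignoring constant and polylogarithmic factors.

def dense_rounds(n: int, p: int) -> float:
    # Bandwidth bound: with ~n^2 total bandwidth per round, listing all
    # K_p instances can need ~n^(1 - 2/p) rounds in the worst case.
    return n ** (1 - 2 / p)

def sparsity_aware_rounds(n: int, m: int, p: int) -> float:
    # Sparsity-aware bound: the round count scales with the edge count m.
    return 1 + m / n ** (1 + 2 / p)

if __name__ == "__main__":
    n = 10**6
    for m in (n, 100 * n, n * n // 10):
        print(f"p=3, m={m:.1e}: dense ~{dense_rounds(n, 3):.0f} rounds, "
              f"sparsity-aware ~{sparsity_aware_rounds(n, m, 3):.1f} rounds")
```

For m close to n^2 the two bounds coincide, matching the intuition that sparsity-awareness only helps below the densest regime.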

This intuition carries over to all p ≥ 4 and, as such, when using the expander decomposition, we should assign each cluster a listing problem where the number of input edges and the bandwidth available are closely related – we ensure that the ratio between these values never exceeds the round complexity we aim for.

Assigning a not-too-large listing problem to clusters was first done in [4] in order to get the Õ(n^{1/2})-round algorithm for K_3 listing in the CONGEST model, and we ensure this in the significantly more challenging case of p ≥ 4. The reason this case is drastically more difficult is due to Challenge 1, which applies only for p ≥ 4 and not for p = 3.

It is therefore paramount to control the size of the problem given to each cluster. Each cluster is assigned a single task: to list all the K_p instances which contain at least one edge inside the cluster. Each such K_p can have three types of edges: edges inside the cluster, edges crossing the cluster boundary (one node inside the cluster and one outside), and edges entirely outside the cluster, which touch two neighbors of the cluster. We achieve this control using the following strategies.
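For illustration, here is a minimal centralized sketch (our own; not part of the paper's distributed algorithm) that classifies the edges of a clique into these three types relative to a cluster. Node labels are assumed to be comparable (e.g., integers).

```python
from itertools import combinations

def classify_edges(clique_nodes, cluster, adj):
    """Split the pairs of a clique into the three edge types w.r.t. a cluster.

    adj: dict mapping each node to its set of neighbors.
    Assumes the clique has at least one edge inside the cluster, in which
    case every outside edge runs between two neighbors of the cluster.
    """
    inside, crossing, outside = [], [], []
    cluster_nbrs = {u for c in cluster for u in adj[c]} - cluster
    for u, v in combinations(sorted(clique_nodes), 2):
        if u in cluster and v in cluster:
            inside.append((u, v))
        elif (u in cluster) or (v in cluster):
            crossing.append((u, v))
        else:
            # Both endpoints are adjacent to the inside edge's endpoints,
            # hence both are neighbors of the cluster.
            assert u in cluster_nbrs and v in cluster_nbrs
            outside.append((u, v))
    return inside, crossing, outside
```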


Coping with Challenge 2: Keeping minimal degree and arboricity close together. Our key approach for ensuring that the number of edges of the first, second, and third types is proportional to the bandwidth used inside the clusters is to make sure that the minimal degree inside the clusters is always very close to the arboricity of the entire graph.

We do this by employing two nested iterative processes. The outer process decreases the arboricity, and the inner process decreases the average degree in the graph. These new iterative procedures are the key concepts of our algorithm which control the ratio between the communication bandwidth and the problem size.

We get two major advantages by having these iterative processes. First, we guarantee that the ratio between the number of edges brought into the cluster and the number of edges inside the cluster is as small as required. Second, we guarantee that the number of edges inside the cluster is very close to the bandwidth that we actually use for routing, which is the product of the number of nodes in the cluster and the minimal degree within the cluster. This allows us to avoid the partitioning of vertices into degree classes that is done in [4, 5].


Coping with Challenge 2: Sparsity-aware listing. As stated, controlling the ratio between bandwidth and problem size is a necessary condition for fast listing, yet this condition is insufficient on its own. Therefore, we leverage our approach of decreasing arboricity to argue that the graph becomes sparse as the algorithm progresses, which enables us to utilize an efficient sparsity-aware algorithm. To this end, we create a novel CONGESTED-CLIQUE-style sparsity-aware K_p listing algorithm for all p ≥ 4. Notice that previously, [18, 3, 4] showed algorithms with similar properties, yet only for p = 3. Further, in Section 4, we prove that this algorithm can also be used in the CONGESTED CLIQUE model itself as a general sparsity-aware algorithm.


Coping with Challenge 1: Delaying treatment of bad edges to future iterations. Finally, we need to ensure that all the edges outside the cluster which could possibly generate a K_p instance with some edge in the cluster become known in the cluster. This property has not been previously achieved, and it is the key to what allows our algorithm to work for all p ≥ 4 simultaneously. To this end, we enhance the technique of considering heavy and light nodes, as first defined by Eden et al. [8]. Nodes outside the cluster are classified as either heavy or light, depending on how many neighbors they have within the cluster.

In [8], heavy nodes send their neighborhoods into the cluster, while light nodes list K_4 instances themselves.

Our algorithm brings all neighboring edges into the cluster itself. The huge challenge with light nodes is that they may have much information to send into the cluster, but only a small bandwidth into the cluster to use for sending this information.

Here, we observe that since light nodes have few cluster neighbors, then, on average, most of the cluster nodes should have few light neighbors outside the cluster. Thus, we detect problematic nodes within the clusters (those which have too many light neighbors) and defer the edges inside the cluster which are connected to them to the next iterations of the algorithm, as sketched below. This ensures that each remaining cluster node has few enough light neighbors, so that the cluster does not need to learn many edges involving light nodes, and thus all those edges can be sent efficiently into the cluster.
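The following centralized sketch (ours) summarizes this classification; t_heavy and t_bad are illustrative placeholder thresholds standing in for the values the paper couples to the arboricity and the cluster degrees.

```python
# Centralized sketch of the heavy/light/bad classification around a single
# cluster. The thresholds t_heavy and t_bad are placeholders for the values
# fixed in the paper's analysis.

def classify_around_cluster(cluster, adj, t_heavy, t_bad):
    """cluster: set of nodes; adj: dict node -> set of neighbors."""
    outside = {u for v in cluster for u in adj[v]} - cluster
    # A node outside the cluster is heavy if it has many cluster neighbors.
    heavy = {u for u in outside if len(adj[u] & cluster) >= t_heavy}
    light = outside - heavy
    # A cluster node is bad if it has too many light neighbors outside;
    # cluster edges between two bad nodes are deferred to later iterations.
    bad = {v for v in cluster if len(adj[v] & light) >= t_bad}
    deferred = {(v, w) for v in bad for w in adj[v] & bad if v < w}
    return heavy, light, bad, deferred
```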

We mention that the triangle listing algorithm of [4] also delays treatment of some edges to future iterations. However, these are different edges, and this is done for different reasons than ours. In the triangle listing algorithm, the edges are moved in order to bound the number of edges crossing the cluster boundary that need to be processed because they are a part of the input for the cluster (but they are already known to the cluster). In our algorithm, the reason for moving edges is to bound the number of light neighbors that a cluster node has, so that we bound the amount of information it has to learn.

Lastly, we must also ensure that after sending the information from outside the cluster into it, no single node in the cluster becomes responsible for too many edges from outside the cluster, since otherwise it would not be possible to perform the sparsity-aware algorithm efficiently. Therefore, we leverage the guarantees we maintain regarding the arboricity of the graph during our iterations in order to be able to generate a load-balanced partition of the edges from outside the cluster.

1.3 Related Work

As mentioned, the first sublinear algorithm for clique listing in the CONGEST model is due to Izumi and Le Gall [15], who showed an Õ(n^{3/4})-round algorithm for listing triangles. This was followed by an Õ(n^{1/2})-round algorithm of Chang et al. [4] and an Õ(n^{1/3})-round algorithm of Chang and Saranurak [5]. The latter is tight up to polylogarithmic factors, due to a matching Ω̃(n^{1/3}) lower bound by Pandurangan et al. [18] and Izumi and Le Gall [15]. This is also the current state-of-the-art for triangle detection, which requires that some node indicate the existence of a triangle if there is one; for detection it is only known that a single round does not suffice, by either deterministic or randomized algorithms, due to Abboud et al. [1] and Fischer et al. [10], respectively.

Recently, a result by Huang et al. [14] showed that it is possible to solve triangle listing in Õ(Δ/log n + log log Δ) rounds in the CONGEST model, where Δ denotes the maximal degree in the graph. This is the first algorithm for this problem whose complexity is sub-linear in Δ. In fact, their solution also holds for the more difficult version of triangle listing, known as local triangle listing, where each triangle needs to be reported by at least one of its three member nodes. This problem is known to take Ω(Δ/log n) rounds due to [15].

For cliques of size p ≥ 4, the first sublinear algorithms were given by Eden et al. [8], who showed that K_4 can be listed in O(n^{5/6+o(1)}) rounds and that K_5 can be listed in O(n^{21/22+o(1)}) rounds.

Fischer et al. [10] show a lower bound of Ω̃(n^{1-2/p}) for K_p listing. For the detection version of cliques, the only lower bounds known are due to Czumaj and Konrad [6], who show that Ω̃(n^{1/2}) rounds are needed for K_p detection for all 4 ≤ p ≤ n^{1/2}, and that Ω̃(n/p) rounds are needed for K_p detection for larger values of p.

The core method of using an expander decomposition has been widely used before, but was first given for the CONGEST model by Chang et al. [4]. A different decomposition was given in [5], both for listing triangles. Eden et al. [8] use this decomposition to create another type of layered decomposition, which they use for K_4 and K_5 listing, as well as for showing how to list arbitrary p-node subgraphs in n^{2-Θ(1/p)} rounds, for constant p.

For cycles, Drucker et al. [7] showed that for any fixed k ≥ 4, C_k detection requires Ω̃(ex(n, C_k)/n) rounds, where ex(n, H) is the Turán number, which counts the maximum number of edges that an n-node graph can have without containing a subgraph isomorphic to H. For odd values of k this implies a lower bound of Ω̃(n), while for k = 4 it implies a lower bound of Ω̃(n^{1/2}). The latter was then extended by Korhonen and Rybicki [17], who make the Ω̃(n^{1/2}) lower bound apply to any even value of k. They also show an algorithm for C_k detection that completes within a linear number of rounds for any constant k, implying that for constant odd values of k the complexity is Θ̃(n). For even values of k, Fischer et al. [10] showed that C_k detection can be solved in a sub-linear number of rounds, which was later improved by Eden et al. [8], who give somewhat different bounds for the odd and even cases.

Even et al. [9] and Korhonen and Rybicki [17] also show algorithms for detection of trees and additional subgraphs. Additional lower bounds for subgraph detection are given in [10], showing a lower bound of Ω̃(n^{2-Θ(1/p)}) rounds for a family of graphs with p nodes. Additional lower bounds are given by Gonen and Oshman in [13].

2 Sub-linear K_p-listing, for p ≥ 4

2.1 Preliminaries

Throughout the algorithm, we use the expander decomposition of [4], and therefore we define here notation which relates to this. (We note that our algorithmic techniques are fundamentally incompatible with the improved expander decomposition seen in [5], due to the fact that we heavily rely on a result related to the arboricity of parts of the decomposition – a notion which is central to [4], but which exhibits an obstacle towards triangle listing and hence is successfully removed in [5].) We begin by defining the notion of clusters, which are components that have a lower bound on the degrees of their vertices as well as a small mixing time, where the mixing time roughly denotes the number of rounds required for a random walk to reach the stationary distribution.

Definition 2.1 (Clusters [4]).

Given a graph G = (V, E) and a subset of its edges E' ⊆ E, a set C ⊆ V is an n^δ-cluster w.r.t. E' if it is a maximal connected component in the graph (V, E') and it has the following properties: (1) each node v ∈ C has at least n^δ neighbors in C w.r.t. E', and (2) the mixing time of C in (V, E') is polylogarithmic in n.

Our algorithm relies on having a decomposition of the graph into such clusters, defined as follows.

Definition 2.2 ((ε, δ)-Expander Decomposition [4]).

Given a graph G = (V, E) and parameters ε, δ, an (ε, δ)-decomposition of G is a partition of its edge set into E_m ∪ E_s ∪ E_r, such that the following hold:

  • E_m is such that each maximal connected component w.r.t. E_m that includes more than one node is an n^δ-cluster. Further, for each cluster in E_m, there is a unique identifier known to all nodes of the cluster, and each node knows which of its edges are in E_m and to which cluster it belongs.

  • The arboricity of the subgraph induced by E_s is at most O(n^δ). Further, there exists an orientation of the edges of E_s such that |E_s(v)| = O(n^δ) for every node v, where E_s(v) is the set of edges of E_s oriented away from v. Each node knows which of its edges are in E_s.

  • |E_r| ≤ ε·|E|.

An (ε, δ)-expander decomposition has been constructed by Chang et al. [4], giving the following.

Theorem 2.3 ((ε, δ)-Decomposition Construction [4]).

There exists an algorithm for constructing an (ε, δ)-expander decomposition in the CONGEST model which completes in Õ(n^{1-δ/2}) rounds.

The algorithm given in [4] also promises that each cluster has an ID that is known to all cluster nodes.

Our algorithms rely on the ability to perform quick routing within the clusters in the expander decomposition. We use the following theorem which follows from the routing algorithms of [11] and [12]. This theorem appears as Theorem 4.1 in [4] and is discussed more in-depth in Section 3 of [5].

Theorem 2.4.

Intra-Component Routing. Let G = (V, E) be a graph and let C be an n^δ-cluster in G. If every node in C has at most K·n^δ messages it needs to send and receive, then there exists an algorithm in the CONGEST model that routes all messages within C in K·2^{O(√log n)} rounds. (The constant factors used in the exponents are different (personal communication with the authors of [5]); that is, the statement holds if each node wants to send and receive K·n^δ·2^{c_1·√log n} messages in a total of K·2^{c_2·√log n} rounds, for some constants c_1, c_2. Thus, direct usage of this theorem would negatively impact our final results and would add a factor of 2^{O(√log n)} to the round complexities of the listing algorithms we show. However, similarly to the discussion found in Section 3 of [5], in our case it is also possible to overcome this extra term due to a trade-off present in the routing algorithm, since our final round complexities are n^{Ω(1)}.)

We emphasize that Theorem 2.4 only uses the edges of C for routing; thus, one can route in multiple clusters in parallel. Further, Lemma 4.1 in [4] also provides us with the following Lemma 2.5, which is used in the final part of our algorithm.

Lemma 2.5.

Intra-Component ID Assignment. Let G = (V, E) be a graph, let δ > 0, and let C_1, …, C_k be the n^δ-clusters in the above expander decomposition of G w.r.t. δ. Then it is possible, in 2^{O(√log n)} rounds in the CONGEST model, to compute new ID assignments 1, …, |C_i| for each C_i out of C_1, …, C_k, in parallel.

We note the following remark, which splits K_p listing into two cases: p < log n and p ≥ log n.

Remark 2.6.

Notice that for p ≥ log n, the lower bound for K_p listing, Ω̃(n^{1-2/p}), is Ω̃(n), and, therefore, for these values of p, one can trivially list all K_p instances in O(n) rounds by having each node broadcast its neighborhood. Thus, we can assume for the rest of our algorithm that p < log n.
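As an illustration, the trivial strategy of Remark 2.6 can be simulated centrally as follows (our sketch; node labels are assumed comparable).

```python
# Sketch of the trivial strategy: once every node has broadcast its
# neighborhood, each node knows all edges among its neighbors and can
# locally list every K_p it belongs to.
from itertools import combinations

def local_clique_lists(adj, p):
    """adj: dict node -> set of neighbors. Returns a dict mapping each node
    to the K_p instances containing it (each clique appears at all of its
    p member nodes, so the union of outputs lists every K_p)."""
    out = {}
    for v, nbrs in adj.items():
        found = []
        for combo in combinations(sorted(nbrs), p - 1):
            # All pairs inside `combo` must be edges; v is adjacent to all.
            if all(b in adj[a] for a, b in combinations(combo, 2)):
                found.append((v,) + combo)
        out[v] = found
    return out
```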

Lastly, we require the following input partitioning lemma, which appears as Lemma 4.2 in [4].

Lemma 2.7.

[4, Lemma 4.2]

Given a graph G = (V, E) with m edges and n vertices, generate a subset S ⊆ V by letting each node join S independently with probability q. Suppose that the maximum degree is at most Δ and that Δ ≤ qm/log n. Then, with probability at least 1 − 1/poly(n), the number of edges in the subgraph induced by S is at most O(q^2·m).

We are now ready to prove our main contribution.

  • For all p ≥ 4, there exists an algorithm for K_p-listing in the CONGEST model which completes in Õ(n^{3/4} + n^{p/(p+2)}) rounds, w.h.p.

2.2 Iteratively decreasing the arboricity

One of the main ingredients in proving Theorem 1.1 is an algorithm which removes edges from the graph in order to decrease its arboricity, while listing all K_p instances that contain at least one of the removed edges. This is formally given as follows.

Theorem 2.8.

For all p ≥ 4, there exists an algorithm denoted LIST, which, given a graph G = (V, E) with arboricity at most a, along with an orientation of its edges with a maximum out-degree of at most a, splits E into two edge sets E = E_1 ∪ E_2, such that the arboricity of the subgraph induced by E_2 is at most a/2, the edges of E_2 are oriented with a maximum out-degree of at most a/2, and LIST lists all K_p instances in G which have at least one edge in E_1. The algorithm completes in Õ(n^{3/4} + n^{p/(p+2)}) rounds.

For the following discussion, we assume that a = n^α, for some value of α, and denote by δ the parameter with which we invoke the expander decomposition, coupled to α. Notice that a/2 = n^α/2, and thus we can restate the theorem as having to ensure that the arboricity of the subgraph induced by E_2 is at most n^α/2. Our algorithm runs in a number of rounds which, due to the choice of δ w.r.t. α, is equivalent to Õ(n^{3/4} + n^{p/(p+2)}).

We use Theorem 2.8 iteratively on G to prove Theorem 1.1, as follows.

  • The high-level approach of this proof is to use Theorem 2.8 iteratively on a sequence of graphs with decreasing arboricity. Notice that all these graphs have the same node set, and thus the value of n, the number of nodes in the graph, is well defined and does not change throughout the algorithm.

    We denote G_1 = G and set a_1 = n, which clearly gives that the arboricity of G_1 is at most a_1 and allows us to run Algorithm LIST using a = a_1. This creates a partition E = E_1 ∪ E_2 and lists all K_p instances which have at least one edge in E_1. This finishes within Õ(n^{3/4} + n^{p/(p+2)}) rounds.

    We are now left with the task of listing all K_p instances in G that have no edge in E_1. In other words, we need to list all K_p instances which are fully contained in E_2. We define G_2 = (V, E_2) and notice that the arboricity of G_2 is at most a_1/2. Therefore, we set a_2 = a_1/2 and run Algorithm LIST on G_2, which completes in Õ(n^{3/4} + n^{p/(p+2)}) rounds. Notice that this number of rounds is exactly the same as for the first invocation of Algorithm LIST, since the complexity in Theorem 2.8 does not grow with a.

    We continue iteratively applying Algorithm LIST with a_{i+1} = a_i/2 and G_{i+1} = (V, E_2), where E_2 is the second edge set produced in iteration i. We do this for at most O(log n) iterations, as long as a_i > n^{p/(p+2)}. Once a_i ≤ n^{p/(p+2)}, we stop, and every node broadcasts its outgoing edges to all its neighbors in O(n^{p/(p+2)}) rounds of communication, which ends the algorithm by listing all remaining K_p instances (those that are contained in G_{i+1}).

    To summarize the number of rounds, note that we iterate O(log n) times, and in each iteration we run Algorithm LIST in Õ(n^{3/4} + n^{p/(p+2)}) rounds. Lastly, during the final step of the algorithm, the nodes broadcast whatever is left of their outgoing edges to their remaining neighbors, taking O(n^{p/(p+2)}) rounds. Overall, the total number of rounds is Õ(n^{3/4} + n^{p/(p+2)}), completing the proof. ∎
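The outer iteration in the proof above can be summarized by the following schematic sketch (ours; `list_step` is a stand-in for Algorithm LIST of Theorem 2.8, which we do not implement here).

```python
# Schematic driver for the proof of Theorem 1.1. `list_step(edges, a)` is a
# stand-in for Algorithm LIST: it must list all K_p instances that have an
# edge in the part it consumes and return a residual edge set whose
# arboricity is at most a/2.

def iterative_listing(edges, n, list_step, threshold):
    """Halve the arboricity bound until it drops below `threshold`, at which
    point broadcasting all remaining out-edges is cheap enough."""
    a = n  # the arboricity of any n-node graph is trivially at most n
    remaining = edges
    while a > threshold and remaining:
        remaining = list_step(remaining, a)  # lists cliques on removed edges
        a //= 2
    # Final step (not simulated): with out-degree <= threshold, every node
    # broadcasts its remaining out-edges to its neighbors and lists locally.
    return remaining
```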

2.3 Iterative arboricity-listing while decreasing the number of edges

We now show Algorithm LIST from Theorem 2.8. We rely on the following procedure, which is the core of Algorithm LIST.

Theorem 2.9.

For all p ≥ 4, there exists an algorithm denoted ARB-LIST, which, given a graph G = (V, E) with arboricity at most a whose edge set is split into two sets E = E_1 ∪ E_2, such that the subgraph induced by E_2 has arboricity at most a/2, along with an orientation of the edges of E_2 with a maximum out-degree of at most a/2, splits the graph into three edge sets E = E_0 ∪ E_1' ∪ E_2', such that the arboricity of the subgraph induced by E_2' is at most a/2, the edges of E_2' are oriented with a maximum out-degree of at most a/2, the size of E_1' is bounded by |E_1|/2, and ARB-LIST lists all K_p instances in G which have at least one edge in E_0. The algorithm completes in Õ(n^{3/4} + n^{p/(p+2)}) rounds.

Before proving Theorem 2.9, we show how it completes the proof of Theorem 2.8, as follows.

  • The high-level approach of this proof is to use Theorem 2.9 iteratively on a sequence of graphs with a decreasing number of edges.

    We begin with the graph G = (V, E), and denote E_1 = E and E_2 = ∅. We apply Algorithm ARB-LIST on this partition, and get a new partition E = E_0 ∪ E_1' ∪ E_2', such that the arboricity of the subgraph induced by E_2' is at most a/2, the edges of E_2' are oriented with a maximum out-degree of at most a/2, the size of E_1' is bounded by |E_1|/2, and ARB-LIST lists all K_p instances which have at least one edge in E_0. This finishes within Õ(n^{3/4} + n^{p/(p+2)}) rounds.

    We are now left with the task of listing all K_p instances in G that have no edge in E_0. In other words, we need to list all K_p instances which are contained in E_1' ∪ E_2'. We apply Algorithm ARB-LIST again with E_1 = E_1' and E_2 = E_2', getting the new sets E_0', E_1'', E_2''. Notice that ARB-LIST now lists all K_p in G which have at least one edge in E_0'. Thus, so far, ARB-LIST listed all K_p in G with at least one edge in E_0 ∪ E_0', since if any such K_p has an edge in E_0 then it would have already been listed by the first invocation of ARB-LIST. Thus, we can remove E_0 ∪ E_0' from the graph and continue with E_1'', E_2''. These two sets maintain that the subgraph induced by E_2'' has arboricity at most a/2 (with a known corresponding orientation) and |E_1''| ≤ |E_1'|/2.

    We continue iteratively applying Algorithm ARB-LIST, obtaining in each iteration that the arboricity of the deferred set remains at most a/2 and that the size of the unprocessed set halves. We do this for O(log n) iterations, until E_1 = ∅, which implies that the set of remaining edges is E_2, and E_2 has an arboricity that is bounded by a/2, as needed. During this iterative process, Algorithm ARB-LIST lists all the K_p instances which have at least one edge in E \ E_2.

    To summarize the number of rounds, note that we iterate O(log n) times and in each iteration we run Algorithm ARB-LIST in Õ(n^{3/4} + n^{p/(p+2)}) rounds, giving the claimed complexity. ∎

2.4 Algorithm Arb-List

This subsection contains the proof of Theorem 2.9.

The high-level idea of Algorithm ARB-LIST is running the expander decomposition with the given parameter δ on the graph (V, E_1), producing E_m ∪ E_s ∪ E_r. Then, we set E_s aside into E_2' (together with E_2), keep E_m as candidate edges for listing, and move E_r, together with some further edges of E_m chosen below, to E_1'. The choice of which edges to move is made so that it is easier to list all the instances of K_p with at least one edge in the retained part of E_m, compared with listing all instances with at least one edge in E_1. To make this precise, we say that an edge is a goal edge if the algorithm promises to list all instances of K_p which contain it. Using this terminology, ARB-LIST sets the retained edges of E_m as goal edges, while edges that are moved from E_m to E_1' are not goal edges (we call them bad edges).

However, if we simply remove edges from clusters in E_m, we no longer guarantee the properties of the cluster, such as an efficient mixing time. Thus, a crucial point for our algorithm to work is that we consider the moved edges as not being goal edges, but we still use them for communication in the clusters.

We now show how to choose which edges to move, and then how to list all the K_p instances with at least one goal edge. Both of these tasks are completed in Õ(n^{3/4} + n^{p/(p+2)}) rounds. Notice that the initial expander decomposition is also constructed within this number of rounds. Thus, we achieve the required round complexity for Algorithm ARB-LIST.

2.4.1 Choosing bad edges and learning edges from outside the cluster


Primarily, since we run the expander decomposition on E_1, we get that |E_r| ≤ ε·|E_1|. Thus, in order to maintain the required guarantee that |E_1'| ≤ |E_1|/2, we can move at most (1/2 − ε)·|E_1| edges from E_m to E_1'. This is thus the bound we strive to achieve on the number of edges moved. Nonetheless, since we do not focus on optimizing constant factors, we will show only that the fraction of edges moved is a sufficiently small constant.

Consider a single cluster C, and let x be the number of nodes in C. Notice that C has at least x·n^δ/2 edges inside it due to the decomposition, yet at most x·a edges, since the arboricity of the graph is at most a.

We now show how all edges that are not incident to C, and could potentially form K_p instances with remaining goal edges in C, become known to nodes of C. These are edges between two nodes that are neighbors of the cluster. This process moves some edges from E_m to E_1', in order to ensure that not too many edges from outside the cluster are brought into it.


Bad edges and learning edges from outside the cluster: At this stage, we wish to bound the amount of information which needs to enter the cluster, by removing edges in C which require too many edges from outside to be brought in. Every node v ∈ C broadcasts to its neighbors outside C a message that indicates that it is in cluster C (recall that every node knows the ID of its cluster). Each neighbor u of C counts how many neighbors in C it has, and denotes this value by c_u. If c_u is at least a threshold t, whose exact value is set in the full analysis, then u is called a C-heavy node, and otherwise it is called C-light.

Each C-heavy node u has at most a outgoing edges due to the arboricity of the graph, and thus sends these edges into the cluster C by sending each of its neighbors in C a chunk of at most ⌈a/c_u⌉ ≤ ⌈a/t⌉ of its outgoing edges. Note that this implies that each edge between two C-heavy nodes is thus known to some node of C.

For handling the edges of C-light nodes, we first need to account for nodes in C which have too many C-light neighbors. For each node v ∈ C, we denote by ℓ_v the number of C-light neighbors it has. If ℓ_v is at least a threshold b, again set in the full analysis, then we say that v is a bad node. Every edge in C that connects two bad nodes is called a bad edge, and is moved from E_m to E_1', and thus is no longer a goal edge. We claim that at most a small constant fraction of the cluster edges are bad edges. To see why, note that the total number of edges between nodes in C and C-light nodes is less than n·t, since there are at most n C-light nodes, and each has fewer than t neighbors in C. Therefore, there are at most n·t/b bad nodes. To now bound the number of edges removed, recall that the arboricity of the graph is at most a, and so there are at most a·n·t/b edges between bad nodes. On the other hand, the cluster has at least x·n^δ/2 edges inside it, and the coupling of n^δ with the arboricity a, together with the choice of t and b, guarantees that a·n·t/b is at most a small constant fraction of x·n^δ/2. Therefore, we removed at most a small constant fraction of the cluster edges, and thus, summing across all clusters, we removed at most a small constant fraction of the edges of E_m, as claimed.
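The counting argument can be packaged as a simple bound (our sketch; t and b are the placeholder thresholds from the text, and delta stands for the minimal cluster degree n^δ).

```python
# Plugs the counting argument into one formula: an upper bound on the
# fraction of cluster edges deleted by the bad-edge rule. Parameters mirror
# the text: n nodes overall, x cluster nodes, arboricity a, minimal cluster
# degree delta, heavy threshold t, and bad threshold b.

def bad_edge_fraction_bound(n, x, a, delta, t, b):
    light_to_cluster = n * t          # <= (#C-light nodes) * (heavy threshold)
    bad_nodes = light_to_cluster / b  # each bad node absorbs >= b such edges
    bad_edges = a * bad_nodes         # arboricity bounds edges between them
    cluster_edges = x * delta / 2     # min-degree bound inside the cluster
    return bad_edges / cluster_edges
```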

At this point, each good node v ∈ C has at most b C-light neighbors. Each such node v broadcasts the list of its C-light neighbors to every neighbor u that v has outside C, and receives from u a list in which each item indicates whether a C-light neighbor of v is also connected to u. Note that this implies that each edge between two neighbors of a good node of C, where one endpoint is C-light, is thus known to some node of C. In Section 2.4.2, we use this to show that C knows all the graph edges which can potentially form a K_p instance with at least one remaining goal edge in C.

We now bound the number of rounds we used so far, and the number of edges held by each node v ∈ C. Notice that each node v receives at most ⌈a/t⌉ + b edges from each of its neighbors outside C. This is because if a neighbor u is C-heavy, then u sends at most ⌈a/t⌉ edges to v when sending all its outgoing edges into the cluster, and, if v is a good node, u sends at most b additional edges when responding to v after v tells u about all of its C-light neighbors (if v is a bad node, no messages of the second type are sent). Thus, since all of these messages traverse distinct edges in parallel, our runtime is bounded by O(⌈a/t⌉ + b) rounds for this step, which is within our budget. Further, every node v receives at most (⌈a/t⌉ + b)·deg(v) edges from outside the cluster.

Remark 2.10.

We showed that each node v ∈ C learns at most (⌈a/t⌉ + b)·deg(v) edges that are completely outside the cluster. This is our desired bound, since we know that v can send and receive at least Ω(n^δ) messages quickly inside the cluster by Theorem 2.4, and thus, within our round budget, we later redistribute these edges inside the cluster in a load-balanced way.

2.4.2 Proving that all required edges are known to C


In this section we show that each edge outside of C which can potentially form a K_p instance with at least one goal edge is known to some node in C. Let Q be some K_p instance which contains at least one goal edge in C. Notice that all the other edges in Q can be either: inside C (goal or non-goal edges), crossing the boundary of C, or entirely outside C. Each edge of the first two types is obviously known to some node in C, and thus it remains to show that all the edges of Q entirely outside C are known to some node or nodes in C.

Notice that it suffices to show that any edge e outside of C which can form a K_4 with a goal edge of C is known to some node in C, since if e is in a K_p instance with a goal edge e', then the endpoints of e and e' also form a K_4. Thus, let Q be a K_4 instance such that e = (u_1, u_2) ∈ Q and e' ∈ Q is a goal edge in C. We show that e is known to some node in C.

Case 1: heavy-to-heavy edges

If both u_1 and u_2 are C-heavy, then the edge e is directed away from one of them, and so that node sent e to one of its cluster neighbors.

Case 2: an edge with a C-light endpoint

Assume w.l.o.g. that u_1 is C-light. Since e' is a goal edge of Q, at least one of its endpoints, which we w.l.o.g. assume is v, is a good node. Thus, node v sent the name of its C-light neighbor u_1 to u_2, and u_2 responded to v that the edge (u_1, u_2) exists, and so node v knows about e.

2.4.3 Simulating a sparsity-aware CONGESTED-CLIQUE-style K_p-listing algorithm

What remains is to show our new sparsity-aware algorithm for K_p-listing, and to prove that it can be executed efficiently within each cluster. Let C be a cluster with x nodes denoted by v_1, …, v_x. Consider the set of edges that form an instance of K_p with at least one goal edge in C; we have shown that each such edge is known to some node in C. We begin by running the algorithm from Lemma 2.5 for assigning new IDs 1, …, x to the nodes of C, and from now on the nodes use these new IDs.

The main algorithmic ideas presented in this section are as follows. Prior to this step, every cluster C reached a stage where the nodes of C know all the information required in order to list all K_p instances involving at least one goal edge in C. This was done by ensuring that each edge outside of C which forms a K_p involving at least one edge in C is now known to at least one node in C. Now, the nodes of C must efficiently communicate this information within the cluster in order to actually list all such K_p instances. Primarily, we reshuffle the edges known to the nodes of C such that each node assumes responsibility for roughly the same number of edges. Next, we create a randomized partition of the nodes of the entire graph and show that the numbers of edges between any two parts of the partition are roughly the same. By doing so, we exploit the sparsity of the graph which we developed throughout the algorithm. Finally, each node in the cluster selects p parts from the generated, randomized partition, and learns all the edges between these parts. By ensuring that every selection of p parts is chosen by some node in the cluster, we guarantee that every K_p with at least one goal edge inside C is listed.


Reshuffling the edges: In order to ensure a load-balanced and efficient execution of our sparsity-aware algorithm later, we need all edges which are known to nodes in C – whether they are edges in C, crossing the cluster boundary of C, or completely outside C – to be grouped according to the node from which they are directed away. Concretely, for each node u in the graph (whether u ∈ C or u ∉ C), we want to have a single node in C which knows all of the edges directed away from u. Recall that since the graph has arboricity at most a, and we know a corresponding orientation of the edges, there are at most a edges directed away from u. Therefore, each node in C takes responsibility for n/x nodes in the graph. Precisely, the node with new ID i is responsible for the nodes whose (original) ID is in the range [(i−1)·n/x + 1, i·n/x]. Using the routing algorithm of Theorem 2.4, each node routes any edge which it originally received from outside the cluster, and any edge which is directed away from itself, to the node inside the cluster which is now responsible for the node from which that edge is outgoing. By Remark 2.10, each node v learns at most (⌈a/t⌉ + b)·deg(v) edges from outside the cluster that must be routed. Further, since the arboricity of the graph is at most a, every node v also has at most a additional edges which are directed away from it and that must also be routed by v. At the end of the reshuffling, each node in C is responsible for at most (n/x)·a edges, since each of the n/x nodes it simulates has at most a outgoing edges. Therefore, by Theorem 2.4, the reshuffling procedure completes within our round budget.
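A minimal sketch of the responsibility mapping (ours; original IDs are 0-based here, new cluster IDs are 1-based):

```python
# Reshuffling: the cluster node with new ID i (1-based) simulates a
# contiguous block of ceil(n/x) original node IDs, so all edges oriented
# away from a given node end up at a single cluster node.

def responsible_cluster_node(orig_id: int, n: int, x: int) -> int:
    """Map an original node ID in [0, n) to the new cluster ID in [1, x]."""
    block = -(-n // x)  # ceil(n / x) original IDs per cluster node
    return orig_id // block + 1

def route_oriented_edge(tail: int, head: int, n: int, x: int) -> int:
    # An edge oriented tail -> head is routed to the simulator of its tail.
    return responsible_cluster_node(tail, n, x)
```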


Partitioning the graph: We create a partition P of the nodes of the entire graph into x^{1/p} roughly equally-sized parts. To do so, every node v_i ∈ C, for each node u out of the n/x nodes which v_i simulates, chooses uniformly at random which part in P the node u joins. All in all, node v_i makes n/x choices and broadcasts them to all the nodes of C. This means that node v_i sends and receives O(n) messages, and thus this completes within our round budget, using the algorithm from Theorem 2.4.

Denoting by m' the total number of edges currently in the graph, using a union bound over all pairs of parts together with Lemma 2.7 gives that, with high probability, the number of edges between any two parts in P is O(m'/x^{2/p}). Note that the conditions needed in Lemma 2.7 are satisfied, since for our choice of parameters the degree bound required by the lemma exceeds n, and the maximal degree in the graph is obviously below this value.


Listing by learning graph edges: Each node in C is assigned, in a predetermined, balanced manner, p parts in P. The new IDs of the nodes are used to decide which parts they get, and since the nodes of C have new IDs 1, …, x, each node can locally compute which parts were assigned to which node. Precisely, node v_i views the x^{1/p}-radix representation of its new ID and uses the p digits in the representation in order to determine the p parts assigned to it. Node v_i then needs to learn all the edges between the parts that are assigned to it and list all instances of K_p that it observes. Since the assignment is predetermined, any node in the cluster which holds an edge which node v_i needs to learn can send that edge to v_i. In order to do so in a load-balanced way, a node sends such an edge to node v_i only if, in the orientation of the graph, the edge is oriented away from one of the nodes which it simulates.
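The part assignment can be sketched as follows (ours; we use 0-based new IDs and assume x = k^p for the base k = x^{1/p}):

```python
# Part assignment via radix digits: writing a cluster node's new ID in base
# k = x^(1/p) yields p digits, and digit j names the j-th part assigned to
# that node. Every p-tuple of digits occurs, so every selection of parts
# (with repetition) is covered by some node.

def assigned_parts(new_id: int, k: int, p: int) -> list:
    """The p base-k digits of new_id (most significant first); new_id < k**p."""
    digits = []
    for _ in range(p):
        digits.append(new_id % k)
        new_id //= k
    return digits[::-1]

def recipients_of_edge(part_u: int, part_v: int, k: int, p: int) -> list:
    """New IDs of all cluster nodes that must learn an edge whose endpoints
    lie in parts part_u and part_v (about p^2 * x^(1-2/p) nodes in total)."""
    return [i for i in range(k ** p)
            if {part_u, part_v} <= set(assigned_parts(i, k, p))]
```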

The number of messages each node receives is O(p^2·m'/x^{2/p}), since it must learn the edges between at most p^2 pairs of parts, each containing O(m'/x^{2/p}) edges. By the guarantees we maintain on the ratio between the number of edges and the bandwidth of the cluster, this is within our round budget. It remains to show that each node also sends at most Õ(p^2·m'/x^{2/p}) messages, and then by Theorem 2.4, this part completes within the required number of rounds. Notice that due to Remark 2.6, we can hide the p^2 term with the Õ notation.

To show that node v_i sends at most Õ(p^2·m'/x^{2/p}) messages, recall that v_i is responsible for at most (n/x)·a edges in the graph. Each such edge needs to be sent to every node which selected the parts which contain both endpoints of that edge, and thus each edge is sent to at most p^2·x^{1−2/p} nodes. (As stated above, the part assignment is by the x^{1/p}-radix representation of the ID of a node; we refer to the part given by the j-th digit of this representation as the j-th part assigned to that node. Let P_1 and P_2 be the two parts in the partition which hold the endpoints of a given edge. There are x^{1−1/p} nodes which were assigned P_1 as their first part, since each of the x^{1/p} possible first digits is used by a 1/x^{1/p} fraction of the nodes, and out of those nodes, a 1/x^{1/p} fraction are assigned P_2 as their second part. We then complete the bound by multiplying by p^2, since we need to deliver the edge to all nodes which are assigned P_1 and P_2 in any two digit positions, and not just to those assigned these parts as their first and second parts, respectively.) Thus, v_i sends at most (n/x)·a·p^2·x^{1−2/p} = Õ(p^2·m'/x^{2/p}) messages, as claimed.

3 Faster K_4 Listing: in Õ(n^{2/3}) rounds

We now present an additional improvement which overcomes the additive Õ(n^{3/4}) complexity in the previous algorithm for the case of p = 4. We manage to completely overcome this challenge by not sending edges incident to C-light nodes into the cluster C, and thus we solve K_4 listing in Õ(n^{2/3}) rounds.

  • There exists an algorithm for K_4-listing in the CONGEST model which completes in Õ(n^{2/3}) rounds, w.h.p.

Proof.

In order to get the improved runtime for