Low Congestion Cycle Covers and their Applications

12/09/2018
by Merav Parter, et al.

A cycle cover of a bridgeless graph G is a collection of simple cycles in G such that each edge e appears on at least one cycle. The common objective in cycle cover computation is to minimize the total length of all cycles. Motivated by applications to distributed computation, we introduce the notion of low-congestion cycle covers, in which all cycles in the cycle collection are both short and nearly edge-disjoint. Formally, a (d,c)-cycle cover of a graph G is a collection of cycles in G in which each cycle is of length at most d and each edge participates in at least one cycle and at most c cycles. A priori, it is not clear that cycle covers that enjoy both a small overlap and a short cycle length even exist, nor whether it is possible to efficiently find them. Perhaps quite surprisingly, we prove the following: Every bridgeless graph of diameter D admits a (d,c)-cycle cover where d = Õ(D) and c = Õ(1). These parameters are existentially tight up to polylogarithmic terms. Furthermore, we show how to extend our result to achieve universally optimal cycle covers. Let C_e be the length of the shortest cycle that covers e, and let OPT(G) = max_{e ∈ G} C_e. We show that every bridgeless graph admits a (d,c)-cycle cover where d = Õ(OPT(G)) and c = Õ(1). We demonstrate the usefulness of low-congestion cycle covers in different settings of resilient computation. For instance, we consider a Byzantine fault model where in each round, the adversary chooses a single message and corrupts it in an arbitrary manner. We provide a compiler that turns any r-round distributed algorithm for a graph G with diameter D into an equivalent fault-tolerant algorithm with r · poly(D) rounds.


1 Introduction

A cycle cover of a graph G is a collection of cycles such that each edge of G appears in at least one of the cycles. Cycle covers were introduced by Itai and Rodeh [IR78] in 1978 with the objective of covering all edges of a bridgeless graph (a graph is bridgeless if any single edge removal keeps it connected) with cycles of minimum total length. This objective finds applications in centralized routing, robot navigation and fault-tolerant optical networks [HO01]. For instance, in the related Chinese Postman Problem, introduced in 1962 by Guan [Gua62, EJ73], the objective is to compute the shortest tour that covers each edge by a cycle. Szekeres [Sze73] and Seymour [Sey79] independently conjectured that every bridgeless graph has a cycle cover in which each edge is covered by exactly two cycles; this is known as the double cycle cover conjecture. Many variants of cycle covers have been studied throughout the years from the combinatorial and optimization points of view [Fan92, Tho97, HO01, IMM05, BM05, KNY05, Man09, KN16].

1.1 Low Congestion Cycle Covers

Motivated by various applications for resilient distributed computing, we introduce a new notion of low-congestion cycle covers: a collection of cycles that cover all graph edges by cycles that are both short and almost edge-disjoint. The efficiency of our low-congestion cover is measured by the key parameters of packet routing [LMR94]: dilation (length of the largest cycle) and congestion (maximum edge overlap of cycles). Formally, a (d,c)-cycle cover of a graph G is a collection of cycles in G in which each cycle is of length at most d, and each edge participates in at least one cycle and at most c cycles. Using the beautiful result of Leighton, Maggs and Rao [LMR94] and the follow-up of [Gha15b], a (d,c)-cycle cover allows one to route information on all cycles simultaneously in Õ(c + d) time.
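To make the definition concrete, the following minimal Python sketch (our illustration, not code from the paper) checks whether a given collection of cycles is a (d,c)-cycle cover; cycles and the graph are represented as edge lists.

```python
from collections import Counter

def is_dc_cycle_cover(graph_edges, cycles, d, c):
    """Check the (d,c)-cycle cover conditions: every cycle has length at
    most d, and every edge lies on at least 1 and at most c cycles."""
    norm = lambda e: tuple(sorted(e))   # treat edges as undirected
    load = Counter()                    # edge -> number of covering cycles
    for cycle in cycles:                # each cycle is a list of edges
        if len(cycle) > d:
            return False
        for e in cycle:
            load[norm(e)] += 1
    return all(1 <= load[norm(e)] <= c for e in graph_edges)

# Toy example: the 4-cycle graph, covered by itself, is a (4,1)-cycle cover.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_dc_cycle_cover(square, [square], d=4, c=1))  # True
```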

Since n-vertex graphs with at least 2n edges have girth O(log n), one can cover all but 2n of the edges in G by edge-disjoint cycles of length O(log n) (e.g., by repeatedly removing short cycles from G). For a bridgeless graph G with diameter D, it is easy to cover the remaining edges with cycles of length O(D), which is optimal (e.g., the cycle graph). This can be done by covering each edge e = (u, v) using the alternative u-v shortest path in G ∖ {e}. Although providing short cycles, such an approach might create cycles with a large overlap, i.e., a single edge may appear on many (e.g., Ω(n)) of the cycles. Indeed, a priori, it is not clear that cycle covers that enjoy both low congestion and short lengths, say d = Õ(D) and c = Õ(1), even exist, nor whether it is possible to efficiently find them. Perhaps surprisingly, our main result shows that such covers exist; in particular, one can enjoy a dilation of Õ(D) while incurring only a polylogarithmic congestion.

Theorem 1 (Low Congestion Cycle Cover).

Every bridgeless graph G with diameter D has a (d,c)-cycle cover where d = Õ(D) and c = Õ(1). That is, the edges of G can be covered by cycles such that each cycle is of length at most Õ(D) and each edge participates in at most Õ(1) cycles.

Theorem 1 is existentially optimal up to polylogarithmic factors, as shown, e.g., by the cycle graph. We also study cycle covers that are universally optimal with respect to the input graph (up to logarithmic factors). By using neighborhood covers [ABCP98], we show how to convert the existentially optimal construction into a universally optimal one:

Theorem 2 (Optimal Cycle Cover, Informal).

There exists a construction of (nearly) universally optimal (d,c)-cycle covers with d = Õ(OPT(G)) and c = Õ(1), where OPT(G) is the best possible cycle length (i.e., even without the congestion constraint).

In fact, our algorithm can be made nearly optimal with respect to each individual edge. That is, we can construct a cycle cover that covers each edge e by a cycle whose length is Õ(C_e), where C_e is the length of the shortest cycle in G that goes through e. The congestion for any edge remains Õ(1).

Turning to the distributed setting, we also provide a construction of cycle covers for the family of minor-closed graphs. Our construction is (nearly) optimal both in its running time and in the parameters of the cycle cover. Minor-closed graphs have recently attracted a lot of attention in the setting of distributed network optimization [GH16, HIZ16a, GP17, HLZ18, LMR18].

Theorem 3 (Optimal Cycle Cover Construction for Minor-Closed Graphs, Informal).

For the family of minor-closed graphs, there exists an Õ(D)-round algorithm that constructs a (d,c)-cycle cover with d = Õ(OPT(G)) and c = Õ(1), where OPT(G) is the best possible cycle length (i.e., even without the constraint on the congestion).

Natural Generalizations of Low-Congestion Cycle Covers.

Interestingly, our cycle cover constructions are quite flexible and naturally generalize to other related graph structures. For example, a (d,c)-two-edge-disjoint cycle cover of a 3-edge connected graph G is a collection of cycles 𝒞 such that each edge is covered by at least two edge-disjoint cycles in 𝒞, each edge appears on at most c cycles, and each cycle is of length at most d. In other words, such a cycle cover provides three edge-disjoint paths between any two neighboring nodes (the edge itself plus the two cycle detours); these paths are short and nearly edge-disjoint across different pairs.

As we will describe next, we use this notion of two-edge-disjoint cycle covers in the context of fault-tolerant algorithms. Towards this end, we show:

Theorem 4.

[Two-edge-Disjoint Cycle Covers, Informal] Every 3-edge connected n-vertex graph G with diameter D has a (d,c)-two-edge-disjoint cycle cover with d = Õ(D) and c = Õ(1).

It is also quite straightforward to adapt the construction of Theorem 4 to yield k-edge-disjoint covers, which cover every edge by k edge-disjoint cycles. These variants are also related to the notions of length-bounded cuts and flows [BEH10], and find applications in fault-tolerant computation.

Cycle covers can be extended even further; one interesting example is path covers, where it is required to cover all paths of a given bounded length in G by simple cycles. The cost of such an extension is an overhead in the dilation and congestion that depends on the maximum degree of G. This variant might find applications in secure computation.

Finally, in a companion work [PY17], cycle covers are used to construct a new graph structure called private neighborhood trees, which serves as the basis of a compiler for secure distributed algorithms.

Low-Congestion Covers as a Backbone in Distributed Algorithms.

Many of the underlying graph algorithms in the CONGEST model are based (either directly or indirectly) on low-congestion communication backbones. Ghaffari and Haeupler introduced the notion of low-congestion shortcuts for planar graphs [GH16]. These shortcuts have been shown to be useful for a wide range of problems, including MST, Min-Cut [HHW18], shortest path computation [HL18] and other problems [GP17, Li18]. Low-congestion shortcuts have been studied also for bounded-genus graphs [HIZ16a], bounded-treewidth graphs [HIZ16b] and recently also for general graphs [HHW18]. Ghaffari considered shallow-tree packings, collections of small-depth and nearly edge-disjoint trees, for the purpose of distributed broadcast [Gha15a].

Our low-congestion cycle covers join this wide family of low-congestion covers – the graph-theoretical infrastructures that underlie efficient algorithms in the CONGEST model. It is noteworthy that our cycle cover constructions are based on novel and independent ideas and are technically not related to any of the existing low-congestion graph structures.

1.2 Distributed Compiler for Resilient Computation

Our motivation for defining low-congestion cycle covers is rooted in the setting of distributed computation in a faulty or non-trusted environment. In this work, we consider two types of malicious adversaries, a Byzantine adversary that can corrupt messages and an eavesdropper adversary that listens on graph edges, and show how to compile an algorithm to be resilient to such adversaries.

We present a new general framework for resilient computation in the CONGEST model [Pel00] of distributed computing. In this model, execution proceeds in synchronous rounds and in each round, each node can send a message of size O(log n) to each of its neighbors.

The low-congestion cycle covers give rise to a simulation methodology (in the spirit of synchronizers [Awe85]) that can take any r-round distributed algorithm and compile it into a resilient one, while incurring a blowup in the round complexity that depends on the network's diameter. At a high level, omitting many technicalities, our applications use the fact that the cycle cover provides, for each edge e = (u, v), two edge-disjoint u-v paths: a direct one using the edge e and an indirect one using the cycle that covers e. Our low-congestion covers allow one to send information on all cycles in essentially the same round complexity as sending a message on a single cycle.

Compiler for Byzantine Adversary.

Fault-tolerant computation [BOGW88, Gär99] concerns efficient information exchange in communication networks whose nodes or edges are subject to Byzantine faults. The three cornerstone problems in the area of fault-tolerant computation are: consensus [DPPU88, Fis83, FLP85, KR01, LZKS13], broadcasting (i.e., one to all) [PS89, Pel96, BDP97, KKP01, PP05] and gossiping (i.e., all to all) [BP93, BH94, CHT17]. A plentiful list of fault-tolerant algorithms has been devised for these problems, and various fault and communication models have been considered; see [Pel96] for a survey on this topic.

In the area of interactive coding, the common model considers an adversary that can corrupt at most a fixed fraction, known as the error rate, of the messages sent throughout the entire protocol. Hoza and Schulman [HS16] showed a general compiler for synchronous distributed algorithms that handles a bounded adversarial error rate while incurring a constant communication overhead. [CHGH18] extended this result to the asynchronous setting; see [Gel17] for additional error models in interactive coding.

In our applications, we consider a Byzantine adversary that can corrupt a single message in each round, regardless of the number of messages sent overall. This is different from, and in some sense incomparable to, the adversarial model in interactive coding, where the adversary is limited to corrupting only a bounded fraction of all messages. On the one hand, the latter adversary is stronger than ours, as it may corrupt potentially many messages in a given round. On the other hand, in the case where the original protocol sends a linear number (in the number of vertices) of messages in a given round, our adversary is stronger, since the interactive coding adversary, whose budget is a bounded fraction of all messages, cannot afford to corrupt a message in each and every round. As will be elaborated in the technical sections, this adversarial setting calls for the stronger variant of cycle covers in which each edge is covered by two edge-disjoint cycles (as discussed in Theorem 4).

Theorem 5.

(Compiler for Byzantine Adversary, Informal) Assume that a cycle cover and a two-edge-disjoint cycle cover are computed in a (fault-free) preprocessing phase. Then any distributed algorithm can be compiled into an equivalent algorithm that is resilient to a Byzantine adversary while incurring an overhead of poly(D) in the number of rounds.

Compiler Against Eavesdropping.

Our second application considers an eavesdropper adversary that in each round can listen on one graph edge of his choice. The goal is to take an algorithm A and compile it into an equivalent algorithm A′ with the guarantee that the adversary learns nothing (in the information-theoretic sense) regarding the messages of A. This application perfectly fits the cycle cover infrastructure. We show:

Theorem 6 (Compiler for Eavesdropping, Informal).

Assume that a (d,c)-cycle cover is computed in a preprocessing phase. Then any distributed algorithm can be compiled into an algorithm that is resilient to an eavesdropping adversary while incurring an overhead of Õ(d + c) in the number of rounds.

In a companion work [PY17], low-congestion cycle covers are used to build a more massive infrastructure that provides much stronger security guarantees. In the setting of [PY17], the adversary takes over a single node in the network, and the goal is for every node to learn nothing about the inputs and outputs of other nodes. This calls for combining the graph theory with cryptographic tools to get a compiler that is both efficient and secure.

Our Focus.

We note that the main focus of this paper is to study low-congestion cycle covers from an algorithmic and combinatorial perspective, as well as to demonstrate their applications for resilient computation. In these distributed applications, it is assumed that the cycle covers are constructed in a preprocessing phase and are given in a distributed manner (e.g., each edge knows the cycles that go through it). Such preprocessing should be done only once per graph.

Though our focus is not on the distributed implementation of constructing these cycle covers, we do address this setting to some extent by: (1) providing a preprocessing algorithm with Õ(n) rounds that constructs the covers for general graphs; (2) providing a (nearly) optimal construction for the family of minor-closed graphs. A sublinear distributed construction of cycle covers for general graphs requires considerable extra work and appears in a follow-up work [PY17].

1.3 Preliminaries

Graph Notations.

For a rooted tree T and u, v ∈ T, let T(u) be the subtree of T rooted at u, and let π(u, v, T) be the tree path between u and v; when T is clear from the context, we may omit it and simply write π(u, v). For a vertex v, let p(v) be the parent of v in T. Let P₁ be a u-v path (possibly with u = v) and P₂ be a v-w path; we denote by P₁ ∘ P₂ the concatenation of the two paths.

The fundamental cycle of a non-tree edge e = (u, v) is the cycle formed by taking e and the tree path between u and v in T, i.e., C_e = e ∘ π(u, v, T). For u, v ∈ G, let dist_G(u, v) be the length (in edges) of the shortest u-v path in G.

For every integer k ≥ 1, let [k] = {1, …, k}. Let deg(v, G) be the degree of v in G; when G is clear from the context, we simply write deg(v). For a subset of edges E′ ⊆ E, let deg(v, E′) be the number of edges of E′ incident to v. For a subset of nodes V′ ⊆ V, let deg(v, V′) be the number of neighbors of v in V′. For a subset of vertices S ⊆ V, let G[S] be the induced subgraph of G on S.
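The tree notation above can be made concrete with a small parent-pointer sketch (our illustration; `tree_path` realizes π(u, v, T) and `fundamental_cycle` the cycle C_e of a non-tree edge):

```python
def tree_path(parent, u, v):
    """Return the u-v path in a rooted tree given by parent pointers
    (parent[root] is None), as a list of vertices."""
    anc_u, x = [], u
    while x is not None:                 # climb from u to the root
        anc_u.append(x)
        x = parent[x]
    index = {y: i for i, y in enumerate(anc_u)}
    path_v, x = [], v
    while x not in index:                # climb from v until hitting the LCA
        path_v.append(x)
        x = parent[x]
    return anc_u[:index[x] + 1] + path_v[::-1]

def fundamental_cycle(parent, e):
    """Closed walk: the non-tree edge e=(u,v) plus the v-u tree path."""
    u, v = e
    return [u] + tree_path(parent, v, u)

parent = {0: None, 1: 0, 2: 0, 3: 1}
print(tree_path(parent, 3, 2))            # [3, 1, 0, 2]
print(fundamental_cycle(parent, (3, 2)))  # [3, 2, 0, 1, 3]
```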

Fact 1.

[Moore Bound, [Bol04]] Every n-vertex graph with at least n^{1+1/k} edges has a cycle of length at most 2k.

The Communication Model.

We use a standard message-passing model, the CONGEST model [Pel00], where the execution proceeds in synchronous rounds and in each round, each node can send a message of size O(log n) to each of its neighbors. In this model, local computation at each node is free and the primary complexity measure is the number of communication rounds. Each node holds a processor with a unique and arbitrary ID of O(log n) bits.

Definition 1 (Secret Sharing).

Let m ∈ {0,1}^k be a message. The message m is secret shared into s shares m₁, …, m_s by choosing s random strings conditioned on m₁ ⊕ ⋯ ⊕ m_s = m. Each m_i is called a share, and notice that the joint distribution of any s − 1 shares is uniform over ({0,1}^k)^{s−1}.
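A minimal sketch of this XOR secret sharing over byte strings (our illustration of Definition 1):

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(m: bytes, s: int) -> list:
    """Split m into s shares whose XOR equals m; any s-1 shares are
    jointly uniform and hence reveal nothing about m."""
    rand = [os.urandom(len(m)) for _ in range(s - 1)]
    return rand + [reduce(xor, rand, m)]   # last share completes the XOR

def reconstruct(shares: list) -> bytes:
    return reduce(xor, shares)

msg = b"attack at dawn"
assert reconstruct(share(msg, 5)) == msg
```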

2 Technical Overview

2.1 Low Congestion Cycle Covers

We next give an overview of the construction of low-congestion cycle covers of Theorem 1. The full proof appears in Section 3.

Let G be a bridgeless n-vertex graph with diameter D. Our approach is based on constructing a BFS tree T rooted at an arbitrary vertex in the graph and covering the edges by two procedures: the first constructs a low-congestion cycle cover for the non-tree edges and the second covers the tree edges.

Covering the Non-Tree Edges.

Let E′ = E ∖ E(T) be the set of non-tree edges. Since the diameter of G ∖ T might be large (e.g., Ω(n); if G ∖ T is disconnected, we refer to the maximum diameter of its connected components), to cover the edges of E′ by short cycles (i.e., of length Õ(D)), one must use the edges of T. A naïve approach is to cover every edge (u, v) ∈ E′ by taking its fundamental cycle in T (i.e., using the u-v path in T). Although this yields short cycles, the congestion on the tree edges might become Ω(n). The key challenge is to use the edges of T (as we indeed have to) in a way that keeps the output cycles short without overloading any tree edge more than Õ(1) times.

Our approach is based on using the edges of the tree only for the purpose of connecting nodes that are somewhat close to each other (under a definition of closeness described later), in a way that balances the overload on each tree edge. To realize this approach, we define a specific way of partitioning the nodes of the tree into blocks according to E′. In a very rough manner, a block consists of a set of nodes that have few incident edges in E′. To define these blocks, we number the nodes based on a post-order traversal of T and partition them into blocks containing nodes with consecutive numbering. The density of a block is the number of edges in E′ with an endpoint in the block. Letting b be some constant threshold on the density, the blocks are partitioned such that every block either (1) is a singleton block consisting of one node with at least b edges in E′, or (2) consists of at least two nodes but has density at most b. As a result, the number of blocks is not too large (say, O(|E′|/b)).

To cover the edges of E′ by cycles, the algorithm considers the contracted graph obtained by contracting all nodes in a given block into one supernode and connecting two supernodes B and B′ if there is an edge in E′ with one endpoint in B and the other endpoint in B′. This graph is in fact a multigraph, as it might contain self-loops or multi-edges. We now use the fact that any n-vertex graph with at least 2n edges has girth O(log n). Hence, as long as the contracted graph has sufficiently many edges relative to its number of supernodes, its girth is O(log n). The algorithm then repeatedly finds (edge-disjoint) short cycles of length O(log n) in this contracted graph (that is, it computes a short cycle, omits its edges from the contracted graph, and repeats), until only a small fraction of the edges remains uncovered. The cycles computed in the contracted graph are then translated to cycles in the original graph by using the tree paths between nodes belonging to the same supernode (block). We note that this translation might result in cycles that are non-simple, and this is handled later on.

Our key insight is that even though the tree paths connecting two nodes in a given block might be long, i.e., of length O(D), every tree edge is “used” by at most two blocks. That is, for each edge e of the tree, there are at most 2 blocks such that a tree path connecting nodes of the block passes through e. (If a block has only a single node, then it uses no tree edges.) Since the (non-singleton) blocks have constant density, we are able to bound the congestion on each tree edge. The translation of cycles in the contracted graph to cycles in the original graph thus yields cycles of length Õ(D) in the original graph, where every edge belongs to Õ(1) cycles.

The above step already covers a constant fraction of the edges in E′. We continue this process for O(log n) iterations until all edges of E′ are covered, and thus incur an extra O(log n) factor in the congestion.

Finally, to make the output cycles simple, we have an additional “cleanup” procedure which takes the output collection of non-simple cycles and produces a collection of simple ones. In this process, some of the edges in the non-simple cycles might be omitted; however, we prove that only tree edges might get omitted, and all non-tree edges remain covered by the simple cycles. This concludes the high-level idea of covering the non-tree edges. We note that our block definition is quite useful also for distributed implementations. The reason is that although the blocks are not independent, in the sense that the tree path connecting two nodes in a given block may pass through other blocks, this dependence is very limited. The fact that each tree edge is used in the tree paths of only two blocks allows us to work distributively on all blocks simultaneously (see Section B.1).

Covering the Tree Edges.

Covering the tree edges turns out to be the harder case, where new ideas are required. Specifically, whereas for the non-tree edges our goal is to find cycles that use each tree edge as rarely as possible, here we aim to find cycles that cover all tree edges, but still avoid using any particular tree edge in too many cycles.

The construction is based on the notion of swap edges. For every tree edge e ∈ T, define the swap edge of e, denoted s(e), to be an arbitrary edge in G ∖ {e} that restores the connectivity of T ∖ {e}. Since the graph is 2-edge connected, such an edge is guaranteed to exist for every e ∈ T. Write e = (u, v) with v the child endpoint (i.e., u = p(v)), and let s(e) = (x, y), where y is the endpoint of s(e) that does not belong to T(v) (i.e., the subtree rooted at v); thus x ∈ T(v).

The algorithm for covering the tree edges is recursive: in each step we split the tree into two edge-disjoint subtrees T₁ and T₂ that are balanced in terms of the number of edges. To perform a recursive step, we would like to break the problem into two independent subproblems, one that covers the edges of T₁ and the other that covers the edges of T₂. However, observe that there might be edges in T₁ for which the only cycle that covers them (recall that the graph is 2-edge connected) passes through T₂ (and vice versa).

Specifically, we consider all tree edges in T₁ whose swap edge has its second endpoint in T₂. To cover these tree edges, we employ two procedures, one on T₁ and the other on T₂, that together form the desired cycles (for an illustration, see Figures 9 and 7). First, we mark all nodes in T₁ whose swap edge endpoint lies in T₂. Then, we use an algorithm from [KR95] (see also Lemma 4.3.2 of [Pel00]) which solves the following problem: given a rooted tree and a set of marked nodes, find a matching of these vertices into pairs such that the tree paths connecting the matched pairs are edge-disjoint.

We employ this matching algorithm on T₁ with the marked nodes described above. Then, for every pair of marked nodes that got matched, we add a virtual edge between the corresponding swap-edge endpoints in T₂. Since this virtual edge is a non-tree edge with both endpoints in T₂, we have translated the dependency between T₁ and T₂ into covering a non-tree edge in T₂. At this point, we can simply apply the non-tree covering procedure to the tree T₂ and the virtual edges. This computes a cycle collection which covers all virtual edges. In the final step, we replace each virtual edge with the corresponding path through T₁, consisting of the tree path between the matched pair together with their swap edges.

The above description is simplified and avoids many details and complications that we had to address in the full algorithm. For instance, in our algorithm, a given tree edge might be responsible for the covering of many other tree edges. This prevents us from using the edge-disjoint paths of the matching algorithm in a naïve manner. In particular, our algorithm has to avoid multiple appearances of a given tree edge on the same cycle, since in such a case, when making the cycle simple, that tree edge might get omitted and would no longer be covered. See Section 3 for the precise details of the proof, and see Figure 1 for a summary of our algorithm.
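The matching step can be illustrated by the following bottom-up sketch (our reconstruction of the guarantee of [KR95] and Lemma 4.3.2 of [Pel00], not the authors' code): each subtree forwards at most one unmatched marked node upward, so each tree edge is used by at most one matched pair's path.

```python
def match_marked(children, root, marked):
    """Pair up marked nodes so that the tree paths connecting matched
    pairs are edge-disjoint; at most one marked node stays unmatched."""
    pairs = []

    def visit(v):
        # Collect v (if marked) plus at most one unmatched marked node
        # per child subtree, and match the collected nodes in pairs.
        pending = [v] if v in marked else []
        for u in children.get(v, []):
            leftover = visit(u)
            if leftover is not None:
                pending.append(leftover)
        while len(pending) >= 2:
            pairs.append((pending.pop(), pending.pop()))
        return pending[0] if pending else None

    unmatched = visit(root)
    return pairs, unmatched

children = {0: [1, 2], 1: [3, 4], 2: [5]}
print(match_marked(children, 0, marked={3, 4, 5}))
# ([(4, 3)], 5): the path 3-1-4 shares no tree edge with a later pairing of 5.
```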

Algorithm (covering the non-tree edges)

  1. Construct a BFS tree T of G.

  2. Let E′ be the set of non-tree edges.

  3. Repeat O(log n) times:

    1. Partition the nodes of T into blocks of density b with respect to the (uncovered) edges E′.

    2. While there are edges e₁ = (u₁, v₁), …, e_ℓ = (u_ℓ, v_ℓ) ∈ E′ such that for all i ∈ {1, …, ℓ − 1}, v_i and u_{i+1} are in the same block, and v_ℓ and u₁ are in the same block (with respect to the partitioning):

      • Add the cycle e₁ ∘ π(v₁, u₂) ∘ e₂ ∘ ⋯ ∘ e_ℓ ∘ π(v_ℓ, u₁) to the collection.

      • Remove the covered edges e₁, …, e_ℓ from E′.

  4. Make all cycles simple (see Figure 10).

  5. Output the resulting cycle collection.

Figure 1: Procedure for constructing low-congestion cycle covers.

2.2 Universally Optimal Cycle Covers

In this section we describe how to transform the construction of Section 2.1 into a universally optimal one: covering each edge in G by almost the shortest possible cycle while having almost no overlap between cycles. Let C_e be the shortest cycle covering e in G and OPT(G) = max_{e ∈ G} |C_e|. Clearly, there are graphs with diameter D and OPT(G) ≪ D. We show:

Theorem 2 (Rephrased).

For any bridgeless graph G, one can construct an (Õ(OPT(G)), Õ(1))-cycle cover 𝒞. Moreover, each edge e has a cycle C ∈ 𝒞 containing e such that |C| = Õ(|C_e|).

We will use the fact that our cycle cover algorithm of Section 2.1 does not require G to be bridgeless, but rather covers every edge that appears on some cycle in G. We call such a cycle cover algorithm nice.

Our approach is based on the notion of neighborhood covers (also known as ball carving). The r-neighborhood cover [ABCP96] of a graph is a collection of clusters (subgraphs) such that (i) every vertex has a cluster that contains its entire r-neighborhood, (ii) the diameter of each cluster is Õ(r), and (iii) every vertex belongs to Õ(1) clusters. The key observation is that if an edge appears on a cycle of length at most r, then there must be a (small-diameter) cluster that fully contains this cycle.

The algorithm starts by computing an r-neighborhood cover with r = OPT(G), which decomposes G into almost-disjoint subgraphs G₁, …, G_k, each with diameter Õ(OPT(G)). Next, a cycle cover 𝒞_i is constructed in each subgraph G_i by applying the algorithm of Section 2.1 with d = Õ(OPT(G)) and c = Õ(1). The final cycle cover is the union of all these covers 𝒞₁ ∪ ⋯ ∪ 𝒞_k. Since the diameter of every cluster is Õ(OPT(G)), the length of all cycles is Õ(OPT(G)). Turning to congestion, since each vertex appears in Õ(1) many subgraphs, taking the union of all cycles increases the total congestion by only an Õ(1) factor. Finally, it remains to show that all edges are covered. Since each edge e appears on a cycle in G of length at most OPT(G), there exists a cluster, say G_i, that contains all the vertices of this cycle. We then have that e appears on a cycle in G_i, and hence it is covered by the cycles of 𝒞_i. To provide a cycle cover that is almost optimal with respect to each individual edge, we repeat the above procedure O(log n) times: in the i-th application, the algorithm constructs a 2^i-neighborhood cover and applies the algorithm in each of the resulting clusters, thereby covering all edges e with C_e ≤ 2^i. The detailed analysis and pseudocode are in Section 3.3.
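A high-level sketch of this doubling strategy follows (hypothetical helpers `neighborhood_cover` and `nice_cycle_cover` stand in for the constructions referenced above):

```python
def universal_cycle_cover(n, neighborhood_cover, nice_cycle_cover):
    """Sketch of the doubling strategy. Assumed helper semantics:
    - neighborhood_cover(r): clusters of diameter ~Õ(r), each vertex in
      Õ(1) clusters, and every r-neighborhood inside some cluster;
    - nice_cycle_cover(cluster): covers every edge of the cluster that
      lies on some cycle of the cluster (a 'nice' algorithm)."""
    cover = []
    r = 1
    while r <= 2 * n:               # O(log n) doublings suffice
        for cluster in neighborhood_cover(r):
            # An edge e with C_e <= r has its shortest covering cycle
            # inside some cluster, so it receives a cycle of length Õ(r).
            cover.extend(nice_cycle_cover(cluster))
        r *= 2
    return cover
```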

2.3 Application to Resilient Distributed Computation

Our study of low-congestion cycle covers is motivated by applications in distributed computing. We give an overview of our two applications to resilient distributed computation that use the framework of our cycle covers. Both applications are compilers for distributed algorithms in the standard CONGEST model. In this model, each node can send a message of size O(log n) to each of its neighbors in each round (the full definition of the model appears in Section 1.3). The full details of the compilers appear in Section 4.

Byzantine Faults.

In this setting, there is an adversary that can maliciously modify messages sent over the edges of the graph. In each round, he picks a single message passed on some edge and corrupts it in an arbitrary manner (i.e., modifying the sent message, or even completely dropping it). The recipient of the corrupted message is not notified of the corruption. The adversary is assumed to know the inputs of all the nodes and the entire history of communication up to the present, and picks which edge to corrupt adaptively using this information.

Our goal is to compile any distributed algorithm A into a resilient one while incurring a small blowup in the number of rounds. The compiled algorithm has the exact same output as A for all nodes, even in the presence of such an adversary. Our compiler assumes a fault-free preprocessing phase in which the cycle covers are computed: a (Õ(D), Õ(1))-cycle cover and its two-edge-disjoint variant using Theorem 4 (see Section 3.4 for details regarding two-edge-disjoint cycle covers).

For the simplicity of this overview, we describe our compiler assuming a large bandwidth on each edge. This is the basis for the final compiler that uses the standard bandwidth of O(log n). We note that this last modification is straightforward in a model without an adversary, e.g., by blowing up the round complexity proportionally to the bandwidth, or by using more efficient scheduling techniques such as [LMR94, Gha15b]. However, such transformations fail in the presence of the adversary, since two messages that are sent in the same round might be sent in different rounds after the transformation. This allows the adversary to modify both of the messages, something that was not possible before the transformation, i.e., in the large-bandwidth protocol.

The key idea is to use the three edge-disjoint, low-congestion paths between any neighboring pair u, v provided by the two-edge-disjoint cycle covers. Let τ be an upper bound on the length of these paths. Consider round i of algorithm A. For every edge (u, v), let m_{u,v} be the message that u sends to v in round i of algorithm A. Each of these messages is going to be sent over Θ(τ) rounds, on the three edge-disjoint routes. The messages are sent repeatedly on the edge-disjoint paths, in a pipelined manner, throughout these rounds. That is, in each of the rounds, node u repeatedly sends the message along the three edge-disjoint paths to v. Each intermediate node forwards any message received on a path to its successor on that path. The endpoint v recovers the message by taking the majority of the messages received in these rounds. Letting ℓ₁ and ℓ₂ be the lengths of the two edge-disjoint paths connecting u and v (in addition to the edge (u, v)), we prove that the fraction of uncorrupted messages received by v is strictly greater than 1/2.

Thus, regardless of the adversary's strategy, the majority of the messages received by v are correct, allowing v to recover the message.
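A toy sketch of the receiver's recovery rule (our illustration, under the simplifying assumption that a copy arrives on every route in every round):

```python
from collections import Counter

def recover(received):
    """received: all copies of one message collected over the phase,
    across the three edge-disjoint routes. With one corruption per
    round, fewer than half of the copies can be wrong."""
    value, count = Counter(received).most_common(1)[0]
    assert count > len(received) / 2, "majority assumption violated"
    return value

# 3 routes x 4 rounds = 12 copies; the adversary corrupted 4 of them.
copies = ["m"] * 8 + ["x", "y", "x", "z"]
print(recover(copies))  # 'm'
```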

Our final compiler, which works in the CONGEST model with bandwidth O(log n), is more complex. As explained above, using scheduling to reduce congestion might be risky. Our approach compiles each round of algorithm A in two phases. The first phase uses the standard cycle cover to reduce the number of “risky receivers” from potentially all receivers down to a small subset. The second phase restricts attention to the remaining messages, which are re-sent along the three edge-disjoint paths in a manner similar to the description above. The fact that we do not know in advance which messages will be handled in the second phase poses some obstacles and calls for a very careful scheduling scheme. See Section 4 for the detailed compiler and its analysis.

Eavesdropping.

In this setting, an adversary eavesdrops on a single (adversarially chosen) edge in each round. The goal is to prevent the adversary from learning anything, in the information-theoretic sense, about any of the messages sent throughout the protocol. Here we use the two edge-disjoint paths between neighbors that the cycle cover provides in a different way: instead of repeating the message, we “secret share” it.

Consider an edge (u, v) and let m be the message sent on it. The sender secret shares the message into s random shares m₁, …, m_s such that m = m₁ ⊕ ⋯ ⊕ m_s (recall Definition 1: the joint distribution of any s − 1 shares is uniform and thus provides no information on the message m). The first s − 1 shares, namely m₁, …, m_{s−1}, are sent on the direct edge, one in each round of the phase, while the last share m_s is sent via the indirect u-v path of the covering cycle. At the end of these rounds, v has received all s shares. Since the adversary can learn at most s − 1 of the s shares, he learns nothing (in the information-theoretic sense) about the message m. See the full details in Section 4.
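A self-contained toy sketch of one phase for a single edge (u, v) (our illustration; `xor` re-implements the sharing of Definition 1):

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def send_over_edge(m: bytes, s: int):
    """Split m into s shares: s-1 go on the direct edge (one per round)
    and the last travels along the covering cycle's u-v path, so an
    eavesdropper on any single edge per round misses at least one share."""
    shares = [os.urandom(len(m)) for _ in range(s - 1)]
    shares.append(reduce(xor, shares, m))   # XOR of all s shares is m
    return shares[:-1], shares[-1]          # (direct-edge rounds, detour)

direct, detour = send_over_edge(b"hello", s=4)
print(reduce(xor, direct + [detour]) == b"hello")  # True: v reconstructs m
```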

2.4 Distributed Algorithm for Minor-Closed Graphs

We next turn to consider the distributed construction of low-congestion covers for the family of minor-closed graphs. We highlight here the main ideas for constructing cycle covers with d = Õ(D) and c = Õ(1) within Õ(D) rounds. Similarly to Section 2.2, applying the construction below in each component of the neighborhood cover yields a nearly optimal cycle cover with d = Õ(OPT(G)) and c = Õ(1), where OPT(G) is the best dilation of any cycle cover of G, regardless of the congestion constraint.

The distributed output of the cycle cover construction is as follows: each edge e knows the IDs of all the edges whose covering cycles pass through e. Let σ be the universal density constant of the minor-closed family of G (i.e., every n-vertex graph in the family has at most σ · n edges).

The algorithm begins by constructing a BFS tree T in O(D) rounds. Here we focus on the procedure covering the non-tree edges; covering the tree edges is done by a reduction to the non-tree case, just like in the centralized construction.

The algorithm consists of O(log n) phases, each taking Õ(D) rounds. In each phase i, we are given a subset E_i of edges that remain to be covered, and the algorithm constructs a cycle collection 𝒞_i that is shown to cover most of these edges, as follows:

Step (S1): Tree Decomposition into Subtree Blocks.

The tree T is decomposed into vertex-disjoint subtrees, which we call blocks. These blocks have different properties compared to those of the algorithm in Section 2.1. The density of a block is the number of edges in E_i that are incident to nodes of the block. Ideally, we would want the densities of the blocks to be bounded by b. Unfortunately, this cannot be achieved while requiring the blocks to be vertex-disjoint subtrees. Our blocks might have an arbitrarily large density, and this is handled in the analysis.

The tree decomposition works layer by layer, from the bottom of the tree up to the root. The weight w(v) of a node v is the number of uncovered non-tree edges incident to v. Each node v of the current layer sends to its parent in T the residual weight of its subtree, namely, the total weight of all the vertices in its subtree that are not yet assigned to blocks. A parent u that receives the residual weights of its children does the following. Let W be the sum of the residual weights of its children plus w(u). If W ≥ b, then u declares a block and down-casts its ID to all relevant descendants in its subtree (this ID serves as the block-ID). Otherwise, it passes W on to its parent. A sequential sketch of this decomposition appears below.
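This sketch is our illustration; in the distributed implementation, each level of the recursion corresponds to one round of parent-child communication.

```python
def decompose_blocks(children, root, weight, b):
    """Partition tree nodes into subtree blocks: a node whose residual
    subtree weight reaches the threshold b declares a block."""
    block_of = {}

    def visit(v):
        # Residual weight: own weight plus unassigned weight below v.
        residual = weight.get(v, 0) + sum(visit(u) for u in children.get(v, []))
        if residual >= b:
            assign(v, v)         # v declares a block carrying its own ID
            return 0             # nothing is passed up to the parent
        return residual

    def assign(v, block_id):     # "down-cast" the block ID
        block_of[v] = block_id
        for u in children.get(v, []):
            if u not in block_of:
                assign(u, block_id)

    visit(root)
    if root not in block_of:     # the root absorbs any leftover nodes
        assign(root, root)
    return block_of

children = {0: [1, 2], 1: [3, 4], 2: []}
print(decompose_blocks(children, 0, {3: 2, 4: 1, 2: 3}, b=3))
# {1: 1, 3: 1, 4: 1, 2: 2, 0: 0}
```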

Step (S2): Covering Half of the Edges.

The algorithm constructs a cycle collection that covers two types of E_i-edges: (i) edges with both endpoints in the same block, and (ii) pairs of edges whose endpoints connect the same pair of blocks B, B′. That is, the only edges in E_i that are not covered are those connecting a pair of blocks B, B′ such that no other edge of E_i connects this pair of blocks.

The root of each block is responsible for computing these edges in its block and their cycles, as follows. All nodes exchange their block-ID with their neighbors. Then, each node sends to the root of its block the block-IDs of its neighbors in G. This allows each root to identify the relevant edges incident to its block. The analysis shows that despite the fact that the density of a block might be large, this step can be done in Õ(D) rounds. Edges with both endpoints in the same block are covered by taking their fundamental cycle (the cycle formed by the edge together with the tree path connecting its endpoints) into 𝒞_i. For the second type, the root arbitrarily matches pairs of edges that connect vertices in the same pair of blocks. For each matched pair of edges e, e′ with endpoints in blocks B and B′, the cycle covering these edges consists of e, e′ and the tree paths connecting their endpoints inside each block. Thus, the cycles have length O(D) (see Figure 13 for an illustration). This completes the description of phase i.

Covering Argument via Super-Graph.

We show that most of the E_i-edges belong to the two types of edges covered by the algorithm. This statement does not hold for general graphs, and exploits the properties of minor-closed families. Let E_{i+1} be the subset of edges that are not covered in phase i. We consider the super-graph of G obtained by contracting the tree edges inside each block. Since the blocks are vertex-disjoint, the resulting super-graph has one super-node per block, with the edges of E_{i+1} connecting these super-nodes. By the properties of phase i, the super-graph contains no multiple edges and no self-loops: every self-loop corresponds to an edge of E_i that connects two nodes inside one block, and multiple edges between two blocks correspond to two E_i-edges that connect endpoints in the same pair of blocks; both of these types are covered in phase i. Since the density of each block with respect to E_i is at least b, the super-graph contains at most O(|E_i|/b) super-nodes. As the super-graph belongs to the minor-closed family as well, it has at most σ · O(|E_i|/b) edges, and thus |E_{i+1}| ≤ |E_i|/2 for a suitable choice of b = Θ(σ), as required. The key observation for bounding the congestion on the edges is:

Observation 1.

Let e = (v, p(v)) be a tree edge (where p(v) is closer to the root) and let B be the block of v and p(v). Letting W be the number of E_i-edges incident to the vertices of B that lie in the subtree T(v), it holds that W = O(b).

This observation essentially implies that blocks can be treated as if they had bounded densities; hence taking the tree paths inside blocks into the cycles keeps the congestion bounded. The distributed algorithm for covering the tree edges essentially mimics the centralized construction of Section 3. For the distributed computation of the swap edges, we use the algorithm of Section 4.1 in [GP16]. The full analysis of the algorithm, as well as the procedure that covers the tree edges, appears in Section 5.

3 Low Congestion Cycle Cover

We give the formal definition of a cycle cover and prove our main theorem regarding low-congestion cycle covers.

Definition 2 (Low-Congestion Cycle Cover).

For a given graph G = (V, E), a (d, c) low-congestion cycle cover of G is a collection of cycles 𝒞 that cover all edges of G, such that each cycle is of length at most d and each edge appears in at most c cycles of 𝒞. That is, for every e ∈ E it holds that 1 ≤ |{C ∈ 𝒞 : e ∈ C}| ≤ c.

We also consider partial covers that cover only a subset of edges E′ ⊆ E. We say that 𝒞 is a (d, c)-cycle cover for E′ if all cycles are of length at most d, each edge of E′ appears in at least one of the cycles of 𝒞, and no edge of E appears in more than c cycles of 𝒞. That is, in this restricted definition, the covering requirement is with respect to the subset of edges E′, while the congestion limitation is with respect to all graph edges.

The main contribution of this section is an existential result regarding cycle covers with low congestion. Namely, we show that any 2-edge connected graph has a cycle cover where each cycle length is at most the diameter of the graph (up to polylogarithmic factors) and each edge is covered by Õ(1) cycles. Moreover, the proof is constructive, and yields a polynomial-time algorithm that computes such a cycle cover.

Theorem 1 (Rephrased).

For every bridgeless n-vertex graph G with diameter D, there exists a (d, c)-cycle cover with d = Õ(D) and c = Õ(1).

The construction of a (d, c)-cycle cover starts by constructing a BFS tree T. The algorithm has two sub-procedures: the first computes a cycle collection covering the non-tree edges, and the second computes a cycle collection covering the tree edges. We describe each cover separately. The pseudo-code for the algorithm is given in Figure 2; the two covering procedures are given in Section 3.1 and Section 3.2, respectively.

Algorithm (cycle cover construction)

  1. Construct a BFS tree T of G (with respect to the edge set E).

  2. Let E′ be the set of all non-tree edges, and let E_T be the set of all tree edges.

  3. Compute a cycle collection covering E′ (Section 3.1) and a cycle collection covering E_T (Section 3.2).

  4. Output the union of the two collections.

Figure 2: Centralized algorithm for finding a cycle cover of a graph G.

3.1 Covering Non-Tree Edges

Covering the non-tree edges mainly uses the fact that when the graph has many edges, its girth is small. Specifically, using Fact 1 with k = log n, we get that the girth of a graph with at least 2n edges is at most 2 log n. Hence, as long as the graph has at least 2n edges, a cycle of length O(log n) can be found. We therefore get that all but O(n) of the edges can be covered by edge-disjoint cycles of length O(log n).
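For intuition, here is a quadratic-time sketch (our illustration, not the paper's procedure) of finding a short cycle through a given edge: BFS for the alternative path avoiding the edge, then close the cycle.

```python
from collections import deque

def cycle_through_edge(adj, u, v):
    """Shortest cycle containing the edge (u, v): BFS from u to v in
    the graph without that edge, then close the cycle with (v, u)."""
    parent, q = {u: None}, deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if (x, y) in ((u, v), (v, u)) or y in parent:
                continue
            parent[y] = x
            q.append(y)
            if y == v:
                q.clear()       # early exit: shortest path found
                break
    if v not in parent:
        return None             # (u, v) is a bridge: no cycle covers it
    path = [v]
    while path[-1] != u:
        path.append(parent[path[-1]])
    return path[::-1]           # u -> ... -> v, closed by the edge (v, u)

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(cycle_through_edge(adj, 0, 1))  # [0, 3, 2, 1]
```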

In this subsection, we show that the set of non-tree edges E′ = E ∖ E(T) can be covered by a (d, c)-cycle cover denoted 𝒞₁. Actually, what we show is slightly more general: if the tree T is of depth k, the length of the cycles is at most Õ(k). Lemma 1 will be useful for covering the tree edges as well in Section 3.2.

Lemma 1.

Let G be an n-vertex graph and let T be a spanning tree of depth k. Then, there exists an (Õ(k), Õ(1))-cycle cover for the non-tree edges of G.

An additional useful property of the cover 𝒞₁ is that despite the fact that the length of the cycles in 𝒞₁ is Õ(k), each cycle is used to cover many edges (proportionally to its length).

Lemma 2.

Each cycle C ∈ 𝒞₁ of length ℓ is used to cover Ω(ℓ/k) edges of E′.

The rest of this subsection is devoted to the proof of Lemma 1. A key component in the proof is a partitioning of the nodes of the tree into blocks. The partitioning is based on a numbering of the nodes from 1 to n and grouping nodes with consecutive numbers into blocks under certain restrictions. We define a numbering π : V → {1, …, n} of the nodes by traversing the nodes of the tree in post-order. That is, we let π(v) = i if v is the i-th node traversed. Using this mapping, we proceed to define a partitioning of the nodes into blocks and show some of their useful properties.

For a block of nodes B and a subset of non-tree edges E′′, the notation deg(B, E′′) denotes the number of edges in E′′ that have an endpoint in the set B (counting multiplicities). We call this the density of block B with respect to E′′. For a subset of edges E′′ and a density bound b (which will be set to a constant), an (E′′, b)-partitioning is a partitioning of the nodes of the graph into blocks that satisfies the following properties:

  1. Every block consists of a consecutive subset of nodes (w.r.t. their numbering).

  2. If deg(B, E′′) > b, then B consists of a single node.

  3. The total number of blocks is O(|E′′|/b + 1).

Claim 1.

For any b and E′′, there exists an (E′′, b)-partitioning of the nodes of G satisfying the above properties.

Proof.

This partitioning can be constructed by a greedy algorithm that traverses the nodes of T in increasing order of their numbering and groups them into blocks as long as the density of the block does not exceed b (see Figure 3 for the precise procedure).

Algorithm (block partitioning)

  1. Let 𝒫 be an empty partition, and let B be an empty block.

  2. Traverse the nodes of T in post-order, and for each node v do:

    1. If deg(B ∪ {v}, E′′) ≤ b, add v to B.

    2. Otherwise, add the block B to 𝒫 and initialize a new block B = {v}.

  3. Output 𝒫 (including the last block B).

Figure 3: Partitioning procedure.

Indeed, properties 1 and 2 are satisfied directly by the construction. For property 3, note that by the greedy rule, whenever a new block is initialized, the previous block B and the new node v together satisfy deg(B ∪ {v}, E′′) > b. Hence, grouping the blocks of the final partitioning 𝒫 into disjoint consecutive pairs, each pair has density greater than b. Formally, letting ℓ be the number of blocks, we have Σ_{B ∈ 𝒫} deg(B, E′′) ≥ ⌊ℓ/2⌋ · b. On the other hand, since 𝒫 is a partitioning of V, we have Σ_{B ∈ 𝒫} deg(B, E′′) ≤ 2|E′′|. Thus, we get that ⌊ℓ/2⌋ · b ≤ 2|E′′|, and therefore ℓ = O(|E′′|/b + 1), as required. ∎
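A direct transcription of the greedy procedure of Figure 3 (a sketch; `deg` maps each node to its number of incident E′′-edges):

```python
def partition_blocks(postorder, deg, b):
    """Greedy (E'', b)-partitioning: group consecutively numbered nodes
    while the block density stays at most b; a node that cannot join
    the current block starts a new one (possibly a dense singleton)."""
    blocks, block, density = [], [], 0
    for v in postorder:
        if density + deg.get(v, 0) <= b:
            block.append(v)
            density += deg.get(v, 0)
        else:
            if block:
                blocks.append(block)
            block, density = [v], deg.get(v, 0)
    if block:
        blocks.append(block)
    return blocks

print(partition_blocks([3, 4, 1, 5, 2, 0], {3: 2, 4: 1, 5: 9}, b=3))
# [[3, 4, 1], [5], [2, 0]] -- node 5 becomes a singleton of density 9 > b
```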

Our algorithm for covering the edges of E′ makes use of this block partitioning with a constant density bound b. The algorithm begins with an empty collection 𝒞₁ and then performs O(log n) iterations, where each iteration works as follows. Let Ê be the set of currently uncovered edges (initially Ê = E′). We partition the nodes of T with respect to Ê and density parameter b. Then we search for short cycles of length O(log n) between the blocks. If such a cycle exists, we map it to a cycle in G by connecting nodes within a block by their path in the tree T. This way, a cycle of length O(log n) between the blocks translates to a cycle of length O(k log n) in the original graph G. Denote the resulting collection by 𝒞₁.

We note that the resulting cycles might not be simple. This happens if and only if the tree paths connecting pairs of block nodes intersect. Notice that if an edge appears more than once in a cycle, then it must be a tree edge. Thus, we can transform any non-simple cycle C into a collection of simple cycles that cover all edges that appeared exactly once in C (the formal procedure is given in Figure 5). Since these cycles are constructed to cover only non-tree edges, this transformation does not damage the covering of the non-tree edges. The formal description of the algorithm is given in Figure 4.

Algorithm (covering the non-tree edges)

  1. Initialize a cover 𝒞₁ as an empty set.

  2. Repeat O(log n) times:

    1. Let Ê be the subset of all uncovered edges.

    2. Construct an (Ê, b)-partitioning 𝒫 of the nodes of G.

    3. While there are edges e₁ = (u₁, v₁), …, e_ℓ = (u_ℓ, v_ℓ) ∈ Ê such that for all i ∈ {1, …, ℓ − 1}, v_i and u_{i+1} are in the same block, and v_ℓ and u₁ are in the same block (with respect to the partitioning 𝒫):

      • Add the cycle e₁ ∘ π(v₁, u₂) ∘ e₂ ∘ ⋯ ∘ e_ℓ ∘ π(v_ℓ, u₁) to 𝒞₁.

      • Remove e₁, …, e_ℓ from Ê.

  3. Output 𝒞₁.

Figure 4: Procedure for covering non-tree edges.
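The translation from a cycle over blocks back to a closed walk in G can be sketched as follows (our illustration; `tree_path` is a hypothetical helper returning π(a, b, T) as a vertex list, and the edges are assumed oriented so that v_i and u_{i+1} share a block):

```python
def expand_cycle(block_cycle, tree_path):
    """Translate a cycle over blocks into a closed walk in G: traverse
    each E'-edge (u_i, v_i), then walk inside the tree from v_i to the
    next edge's endpoint u_{i+1}. The result may be non-simple."""
    walk, L = [], len(block_cycle)
    for i, (u, v) in enumerate(block_cycle):
        u_next = block_cycle[(i + 1) % L][0]
        walk += [u, v]                        # the non-tree edge itself
        walk += tree_path(v, u_next)[1:-1]    # interior of the tree path
    return walk

# Toy usage: tree paths inside blocks happen to be direct edges here.
path = lambda a, b: [a, b]                    # stands in for pi(a, b, T)
print(expand_cycle([(1, 2), (3, 0)], path))   # [1, 2, 3, 0]
```

Any tree edges repeated by intersecting tree paths are exactly what the cleanup procedure of Figure 5 removes.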

Algorithm (making cycles simple)

  1. While there is a cycle C ∈ 𝒞 with a vertex v that appears more than once:

    1. Remove C from 𝒞.

    2. Split C at two appearances of v into two shorter closed walks C₁ and C₂.

    3. For j ∈ {1, 2}: if C_j is not a degenerate walk traversing a single edge back and forth, add C_j to 𝒞.

  2. Output 𝒞.

Figure 5: Procedure making all cycles in 𝒞 simple.
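A sketch matching our reading of Figure 5 (split a closed walk at a repeated vertex until all pieces are simple; a degenerate piece that walks a single edge back and forth, necessarily a tree edge here, is dropped):

```python
def simplify(walk):
    """Split a closed walk (vertex list, implicitly closed) into simple
    cycles, discarding degenerate two-vertex pieces."""
    stack, out = [walk], []
    while stack:
        c = stack.pop()
        seen = {}
        for i, v in enumerate(c):
            if v in seen:                    # v repeats: cut c at v
                j = seen[v]
                for piece in (c[j:i], c[:j] + c[i:]):
                    if len(piece) > 2:       # keep non-degenerate pieces
                        stack.append(piece)
                break
            seen[v] = i
        else:                                # no repeats: c is simple
            if len(c) > 2:
                out.append(c)
    return out

print(simplify([0, 1, 2, 1, 3]))
# [[0, 1, 3]] -- the piece [1, 2] (edge (1,2) walked twice) is dropped
```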

We proceed with the analysis of the algorithm and show that it yields the desired cycle cover. That is, we show three things: that every cycle has length at most Õ(k), that each edge is covered by at most Õ(1) cycles, and that each edge of E′ has at least one cycle covering it.

Cycle Length.

The bound on the cycle length follows directly from the construction. The cycles added to the collection are of the form e₁ ∘ P₁ ∘ e₂ ∘ ⋯ ∘ e_ℓ ∘ P_ℓ, where each P_i is a path in the tree T and thus of length at most 2k. Notice that the simplification process can only make the cycles shorter. Since ℓ = O(log n), we get that the cycle lengths are bounded by O(k log n).

Congestion.

To bound the congestion of the cycle cover we exploit the structure of the partitioning and the fact that each block in the partition has low density. We begin by showing that under the post-order numbering, all nodes in a given subtree have a contiguous range of numbers. For every v, let low(v) be the minimal number of a node in the subtree of T rooted at v, that is, low(v) = min{π(u) : u ∈ T(v)}, and similarly let high(v) = max{π(u) : u ∈ T(v)}.

Claim 2.

For every v ∈ V it holds that (1) high(v) = π(v), and (2) for every u ∈ V, u ∈ T(v) iff π(u) ∈ [low(v), π(v)].

Proof.

The proof is by induction on the height of v. For the base case, we consider leaf nodes v, for which T(v) = {v}, and hence the claim holds vacuously. Assume that the claim holds for all nodes of height less than h, and consider a node v of height h. Let u₁, …, u_r be the children of v, ordered by the traversal order. By the post-order traversal, the root v is the last vertex visited in T(v), and hence π(v) = high(v). Since the traversal of T(u_{j+1}) starts right after finishing the traversal of T(u_j) for every j, it holds that low(u_{j+1}) = high(u_j) + 1. Using the induction assumption for the children, we get that all the nodes in T(v) have numbering in the range [low(u₁), π(v)], and any node not in T(v) is not in this range. Finally, low(v) = low(u₁), and so the claim holds. ∎
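A small sketch verifying Claim 2 on an example tree (post-order numbering gives every subtree the contiguous range [low(v), π(v)]):

```python
def postorder_ranges(children, root):
    """Return (num, low): num[v] is the post-order index of v and low[v]
    the smallest index in T(v); T(v) occupies exactly [low[v], num[v]]."""
    num, low, counter = {}, {}, [1]

    def visit(v):
        first = counter[0]          # next index to be assigned in T(v)
        for u in children.get(v, []):
            visit(u)
        num[v], low[v] = counter[0], first   # root of T(v) is numbered last
        counter[0] += 1

    visit(root)
    return num, low

children = {0: [1, 2], 1: [3, 4]}
num, low = postorder_ranges(children, 0)
print(num)  # {3: 1, 4: 2, 1: 3, 2: 4, 0: 5}
print(low)  # {3: 1, 4: 2, 1: 1, 2: 4, 0: 1}
```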

The cycles that we compute contain tree paths that connect two nodes u and w in the same block. Thus, to bound the congestion on a tree edge e, we need to bound the number of blocks that contain a pair u, w such that π(u, w, T) passes through e. The next claim shows that every edge in the tree is affected by at most 2 blocks.

Claim 3.

Let e be a tree edge and define B(e) = {B ∈ 𝒫 : e ∈ π(u, w, T) for some u, w ∈ B}. Then |B(e)| ≤ 2.

Proof.

Let e = (v, p(v)), where p(v) is closer to the root of T, and let u, w be two nodes in the same block B such that e ∈ π(u, w, T). Let z be the least common ancestor of u and w in T (it might be that z ∈ {u, w}); then the tree path between u and w can be written as π(u, z) ∘ π(z, w). Without loss of generality, assume that e ∈ π(u, z). This implies that u ∈ T(v) but z ∉ T(v). Hence, the block of u and w contains a node of T(v) and a node outside T(v). Each block consists of a consecutive set of nodes, and by Claim 2, T(v) also consists of a consecutive set of nodes with numbering in the range [low(v), π(v)]; thus there are at most two blocks that contain both a vertex y with π(y) ∈ [low(v), π(v)] and a vertex y′ with π(y′) ∉ [low(v), π(v)], and the claim follows. ∎

Finally, we use the above claims to bound the congestion. Consider any tree edge e = (v, p(v)), where p(v) is closer to the root than v. Recall that T(v) is the subtree of T rooted at v. Fix an iteration of the algorithm. We characterize all cycles in 𝒞₁ that go through this edge.

For any cycle that passes through e, there must be a block B and two nodes u, w ∈ B such that e ∈ π(u, w, T). By Claim 3, we know that at each iteration of the algorithm, there are at most two such blocks that can affect the congestion of e. Moreover, we claim that each such block has density at most b: otherwise, it would be a singleton block containing one node, say B = {u}, in which case the tree path connecting block nodes is empty and cannot contain the edge e. For each edge in the block we construct at most a single cycle per iteration, and thus for each of the two blocks that affect e, the number of pairs u, w with e ∈ π(u, w) is bounded by b/2 (each pair contributes two edges incident to the block, and we know that the total number of such edges is bounded by b).

To summarize the above, we get that for each iteration, there are at most 2 blocks that can contribute to the congestion of an edge e = (v, p(v)): one block that intersects T(v) but also contains nodes with numbering smaller than low(v), and one block that intersects T(v) but also contains nodes with numbering larger than π(v).