Random k-out subgraph leaves only O(n/k) inter-component edges

09/24/2019 · Jacob Holm, et al. · Københavns Universitet, University of Victoria, Tel Aviv University

Each vertex of an arbitrary simple graph on n vertices chooses k random incident edges. What is the expected number of edges in the original graph that connect different connected components of the sampled subgraph? We prove that the answer is O(n/k), when k ≥ c log n, for some large enough c. We conjecture that the same holds for smaller values of k, possibly for any k ≥ 2. Such a result is best possible for any k ≥ 2. As an application, we use this sampling result to obtain a one-way communication protocol with private randomness for finding a spanning forest of a graph in which each vertex sends only O(√n log n) bits to a referee.


1 Introduction

Sampling edges is a natural way of trying to infer properties of a graph when accessing the whole graph is either not possible or too expensive. We consider a scenario in which each vertex is resource constrained and can only sample k of its incident edges, or all of its edges if its degree is at most k. This corresponds to the k-out model that was mainly studied in the context of the complete graph. (See references and discussion in Section 1.2.) Here we are interested in properties of this sampling model when applied to arbitrary simple graphs.

Let G = (V, E) be an arbitrary simple graph on n vertices. Each vertex independently picks k random adjacent edges. Let H be the resulting subgraph. How many edges of G connect different connected components of H? (These edges are referred to as inter-component edges.) We prove that for k ≥ c log n, for a sufficiently large constant c, the expected number of such edges is O(n/k). We conjecture that the same result also holds for much smaller values of k, possibly even for every k ≥ 2. The statement is false for k = 1. No such result was obtained or conjectured before, for any value of k. Simple examples show that this result is best possible for any k ≥ 2. The proof we provide is fairly intricate. Our result also sheds light on other sampling models.

Given its generality, we hope that our new sampling theorem will find many applications. As a first such application, we show how the sampling theorem, together with other ideas, can be used to obtain a one-way communication protocol with private randomness for finding a spanning forest of an input graph in which each vertex sends only O(√n log n) bits to a referee. No private-randomness protocol in which each vertex sends only o(n) bits was known before.

1.1 Our results

We begin with a formal definition of the k-out model.

Definition 1.1 (Random k-out subgraphs)

Let G = (V, E) be a simple undirected graph. Sample a subset S ⊆ E of the edges by the following process: Each vertex independently chooses min(k, d(v)) of its incident edges, where each subset of this size is equally likely. An edge is included in S if and only if it was chosen by at least one of its endpoints. The subgraph H = (V, S) is said to be a random k-out subgraph of G.

In the above definition, we treat each undirected edge (u, v) as two directed edges, (u, v) and (v, u). Each one of these directed edges is sampled independently. At the end, the direction of the sampled edges is ignored and duplicate edges are removed.

Although each vertex chooses only k adjacent edges, the resulting subgraph is not necessarily of maximum degree k, as an edge may be chosen by either of its endpoints. In particular, if G is a star and k = 1, then H is always the original graph, as each leaf must choose the edge connecting it to the center. The choices made by the center are irrelevant. However, for any graph G, the resulting subgraph H is always 2k-degenerate: it can be oriented so that the outdegree of each vertex is at most k. (We just keep the orientation of the sampled directed edges.) As a consequence, the arboricity of H is also at most 2k.
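To make the sampling process concrete, here is a minimal Python sketch (ours, not from the paper; all function names are our own) that draws a random k-out subgraph per Definition 1.1 and counts the resulting inter-component edges with a small union-find:

```python
import random
from collections import defaultdict

def k_out_sample(edges, k, rng=random):
    """Random k-out subgraph (Definition 1.1): every vertex picks
    min(k, deg(v)) of its incident edges uniformly at random; an edge
    is kept if at least one endpoint picked it."""
    incident = defaultdict(list)
    for u, v in edges:
        incident[u].append((u, v))
        incident[v].append((u, v))
    sampled = set()
    for inc in incident.values():
        sampled.update(rng.sample(inc, min(k, len(inc))))
    return sampled

def count_inter_component(n, edges, sampled):
    """Number of edges of the base graph whose endpoints lie in
    different connected components of the sampled subgraph."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in sampled:
        parent[find(u)] = find(v)
    return sum(1 for u, v in edges if find(u) != find(v))
```

Vertices are assumed to be numbered 0, …, n−1, and each undirected edge appears once in `edges`; both conventions are ours, for simplicity.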

The main result of this paper is:

Theorem 1.2 (Main Theorem for k-out)

Let G be an arbitrary undirected n-vertex graph and let k ≥ c log n, where c is a large enough constant. Let H be a random k-out subgraph of G. Then the expected number of edges in G that connect different connected components of H is O(n/k).

It is easy to see that the theorem is best possible for any k ≥ 2 and k = o(n). Let G be a graph composed of two cliques of size n/2 each, connected by a matching of size n/k. With probability (1 − 2k/n)^(2n/k) ≥ e^(−5), for n large enough, no edge from the matching is chosen, in which case all n/k edges of the matching are inter-component, i.e., connect different connected components of H. The expected number of inter-component edges is thus Ω(n/k).

Another example, of a very different nature, that shows that Theorem 1.2 is best possible is the following. Let T be an arbitrary tree on n/(2k+1) vertices. Form G by connecting each vertex of T to 2k new leaves, making the total number of vertices n. Each original tree edge (u, v) has probability at least 1/2 of not being chosen by u, and (independent) probability at least 1/2 of not being chosen by v. Thus the probability of not being in S is at least 1/4. The expected number of edges of G that connect different connected components of H is therefore at least (n/(2k+1) − 1)/4, which is Ω(n/k).

It follows immediately from Theorem 1.2, by Markov's inequality, that there is a constant c′ such that the probability that the number of inter-component edges is greater than t·n/k is at most c′/t, for every t > 0, and this tail bound is tight. (See Corollary 2.22.)

We conjecture that Theorem 1.2 holds whenever k ≥ c, for some large enough constant c, and possibly even for every k ≥ 2.

Conjecture 1.3 (Conjecture for k-out)

Let G be an arbitrary undirected n-vertex graph and let k ≥ c, where c is a large enough constant. Let H be a random k-out subgraph of G. Then, the expected number of edges in G that connect different connected components of H is O(n/k).

A closely related sampling model, in which we do most of the work, is the following:

Definition 1.4 (Random expected k-out subgraphs)

Let G = (V, E) be a simple undirected graph. Sample a subset S ⊆ E of the edges by the following process: Each vertex v samples each one of its incident edges independently with probability min(1, k/d(v)). Thus, each vertex of degree at least k samples an expected number of k edges. An edge is included in S if and only if it was sampled by at least one of its endpoints. The subgraph H = (V, S) is said to be a random expected k-out subgraph of G.

Let e = (u, v) be an edge. If either u or v is of degree at most k, then e is always sampled. Otherwise, e is sampled with probability 1 − (1 − k/d(u))(1 − k/d(v)). Equivalently, if we view e as two directed edges (u, v) and (v, u), then (u, v) is sampled with probability k/d(u) and (v, u) is sampled with probability k/d(v). (Recall that the directions of the sampled directed edges are ignored.) Choices made for different edges are completely independent.
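The directed-edge view translates directly into code. A minimal sketch (our own naming; it assumes each undirected edge is listed once):

```python
import random
from collections import Counter

def expected_k_out_sample(edges, k, rng=random):
    """Random expected k-out subgraph (Definition 1.4): the directed
    edge (u,v) is sampled with probability min(1, k/d(u)), (v,u) with
    probability min(1, k/d(v)); the undirected edge survives if either
    directed copy was sampled."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    sampled = set()
    for u, v in edges:
        if (rng.random() < min(1.0, k / deg[u])
                or rng.random() < min(1.0, k / deg[v])):
            sampled.add((u, v))
    return sampled
```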

We show below (Lemma 2.1) that when k ≥ c log n, for a sufficiently large constant c, the expected number of inter-component edges with respect to an expected k-out subgraph is essentially sandwiched between the corresponding expectations for (exact) 2k-out and (exact) k/2-out subgraphs.

To prove Theorem 1.2 for k ≥ c log n, for a sufficiently large c, it is thus sufficient to prove the following theorem, which we do in Section 2.

Theorem 1.5 (Main Theorem for expected k-out)

Let G be an arbitrary undirected n-vertex graph and let k ≥ c log n, where c is a large enough constant. Let H be a random expected k-out subgraph of G. Then, the expected number of edges in G that connect different connected components of H is O(n/k).

The requirement k ≥ c log n in Theorem 1.5 is essential, and thus the bound in the theorem is best possible for the expected k-out model. Let G be the complete graph on n vertices. The expected k-out model is then equivalent to the classical G(n, p) model with p = 1 − (1 − k/(n−1))² ≈ 2k/n. The probability that a given vertex is isolated is then (1 − p)^(n−1) ≈ e^(−2k), and the expected number of edges connecting different connected components is at least roughly n·e^(−2k)·(n−1)/2. Thus, the claim of the theorem is false when k ≤ (1/2 − ε) ln n, for any fixed ε > 0.

This also explains the difficulty of extending Theorem 1.2 to the k = o(log n) regime. It is conceivable that Theorem 1.2 holds for any k ≥ 2.

As an interesting corollary of Theorems 1.2 and 1.5 we get:

Corollary 1.6 (Random k-out subgraph of a λ-edge-connected graph)

Let G be a λ-edge-connected n-vertex graph and let k ≥ c log n, where c is a large enough constant. Let H be a k-out or an expected k-out random subgraph of G. Then H is connected with probability at least 1 − O(n/(kλ)).

Proof:  Let p be the probability that H is not connected. As G is λ-edge-connected, if H is not connected, then the number of inter-component edges is at least λ, and the expected number of inter-component edges is therefore at least pλ. By Theorem 1.2 or 1.5 this expectation is at most c′n/k, for some constant c′. Thus, p ≤ c′n/(kλ) and the result follows. We also need k to be large enough for Theorem 1.2 or 1.5 to hold.

Both the k-out and expected k-out models favor the selection of edges incident to low-degree vertices. In Section 1.3 we compare the k-out sampling model to the standard model of picking each edge independently with some fixed probability p and explain why the k-out model gives much better results, in certain cases, using the same total number of sampled edges.

On regular or almost regular graphs, the k-out sampling model is essentially identical to the model of sampling each edge independently with probability Θ(k/d), where d is the common degree. Surprisingly, Theorem 1.5 implies a new result in this model for almost regular graphs; see Theorem 1.9 below.

An appealing feature of the k-out model is that the sampling can be implemented in a distributed manner, as the choices of different vertices are independent. Requiring each vertex to choose only k edges is a natural constraint in many settings, e.g., if a vertex has to communicate the edges it selected to other vertices or to a referee. It is exactly such a setting (see Section 1.4) that motivated us. However, we believe that the new sampling theorems have importance beyond the concrete applications we give here.

In many settings, including the application described in Section 1.4, the number of inter-component edges is a measure of the “work” that still needs to be done after “processing” the sampled subgraph.

1.2 Previous results in the k-out model

The k-out model is first mentioned in a question of Ulam in “The Scottish Book” [18].¹

¹ “The Scottish Book” was a notebook used in the 1930s and 1940s by mathematicians of the Lwów School of Mathematics in Poland to collect problems. The notebook was named after the “Scottish Café” where it was kept. Among the contributors to the book were Stefan Banach, John von Neumann and Stanislaw Ulam.

PROBLEM 38: ULAM
Let there be given N elements (persons). To each element we attach k others among the given N at random (these are friends of a given person). What is the probability P_{k,N} that from every element one can get to every other element through a chain of mutual friends? (The relation of friendship is not necessarily symmetric!) Find lim_{N→∞} P_{k,N} (0 or 1?).

While this explicitly defines a directed model, most answers in the literature are for the corresponding undirected model. It is not difficult to prove, see [18], that for k = 1 the resulting undirected graph is connected with probability tending to 0, while for k ≥ 2 the graph is connected with probability tending to 1.

Let H be a random k-out subgraph of the complete graph on n vertices, as in Definition 1.1. Fenner and Frieze [5] prove that for k ≥ 2, H is k-vertex- and k-edge-connected with probability tending to 1, as n tends to ∞. Frieze [7] proved that when n is even, H has a perfect matching with probability tending to 1, if k ≥ 2, and tending to 0, if k = 1. Bohman and Frieze [3] prove that H has a Hamiltonian cycle with probability tending to 1, if k ≥ 3, and tending to 0, if k ≤ 2. All these results can also be found in a chapter on random k-out graphs in the book of Frieze and Karoński [6].

Frieze et al. [8] consider a random subgraph obtained by taking an arbitrary spanning forest of a graph G, plus k random outgoing edges from each vertex. They prove that the resulting random subgraph has some desirable expansion properties with probability tending to 1.

Frieze and Johansson [9] consider random k-out subgraphs of graphs of minimum degree at least (1/2 + ε)n, for some ε > 0. They show that if k is a large enough constant (depending on ε), then the random k-out subgraph is k-connected with probability tending to 1. Thus they generalize the earlier results of Fenner and Frieze [5] for a complete base graph to arbitrary base graphs with sufficiently high minimum degree. Frieze and Johansson [9] point out that the generalization fails for lower degrees: there are connected graphs with minimum degree close to n/2 where a random k-out subgraph is not even expected to be connected.

Our results are quite different from all the results cited above. We consider random k-out subgraphs of an arbitrary base graph G. As the graph G is arbitrary, we cannot expect the random k-out subgraph to be connected with high probability. We focus instead on the question of how closely a random k-out subgraph of G captures the connectivity of G. We do that by bounding the expected number of inter-component edges, i.e., the number of edges of G that connect different connected components of the sampled subgraph.

To the best of our knowledge, no result similar to our Corollary 1.6 was known before. It replaces the requirement of a very high minimum degree made by Frieze and Johansson [9] with a much weaker connectivity requirement. However, the resulting random k-out subgraph is only guaranteed to be connected with probability 1 − O(n/(kλ)), not with a probability tending to 1. This is best possible.

In a very recent paper [10], Ghaffari et al. used 2-out sampling to get faster randomized algorithms for edge connectivity. One of their lemmas is that, with high probability, the number of components in a random 2-out subgraph is O(n/δ), where δ is the smallest degree. The same bound on the number of components is tight for k-out for any fixed k ≥ 2. This result complements our bound on the number of inter-component edges, and may inspire further investigations into the properties of random k-out subgraphs.

1.3 Sampling each edge independently with probability p

The most widely studied random graph model is, of course, G(n, p), in which each edge of the complete graph on n vertices is sampled, independently, with probability p. There are literally thousands of papers written on such random graphs.

The G(n, p) model can also be used to construct a random subgraph G_p of a general base graph G, by sampling each edge of G independently with probability p. The most relevant result to our study is the following theorem:

Theorem 1.7 (Karger, Klein and Tarjan [15])

Let G be an arbitrary simple graph on n vertices and let 0 < p ≤ 1. Let G_p be a random subgraph of G obtained by selecting each edge of G independently with probability p. Then, the expected number of edges of G that connect different connected components of G_p is at most n/p.

Theorem 1.7 is a special case of a theorem of Karger et al. [15] that deals with weighted graphs. The more general theorem states that if F is a minimum spanning forest of G_p, then the expected number of edges in G that are F-light, i.e., that can be used to improve F, is at most n/p. When the graph is unweighted, i.e., all edge weights are equal, an edge is F-light if and only if it connects different connected components of G_p. An alternative proof of the weighted version, using backward analysis, was obtained by Chan [4]. The weighted theorem was used by Karger et al. [15] to obtain a linear expected-time algorithm for finding minimum spanning trees.

Theorem 1.7, as stated, was used by Karger et al. [16] and Halperin and Zwick [12] to obtain almost optimal and then optimal randomized EREW PRAM algorithms for finding connected components and spanning trees.

1.3.1 Comparing k-out sampling and independent sampling

Let us compare our new Theorem 1.2 with Theorem 1.7. Let G be a general n-vertex m-edge graph, and suppose that both schemes are allowed a sample of expected size s = nk. In the k-out model, the expected number of edges connecting different connected components is then O(n/k) = O(n²/s). To obtain a random subgraph with the same expected number of sampled edges using independent sampling, we must choose p = nk/m, and Theorem 1.7 then only gives a bound of n/p = m/k = nm/s. Note that the k-out bound is a huge improvement when the graph is dense, i.e., when m = ω(n). Alternatively, expressed in terms of the sample size s, the bound for k-out is O(n²/s), while the bound for independent sampling is only O(nm/s).

To highlight the difference between the two sampling schemes, and to show that the gap between n²/s and nm/s can actually occur, consider the following situation. Let G be a graph composed of a clique of size n/2 and of 4k cliques of size n/(8k). All cliques are disjoint. The number of edges is m = Θ(n²). To get a sample of expected size nk we choose p = nk/m = Θ(k/n). Consider a vertex in one of the small cliques. With a constant probability none of its incident edges is sampled, in which case it contributes Θ(n/k) inter-component edges to the expectation. The expected number of edges connecting different connected components is thus Ω(n·(n/k)) = Ω(n²/k) = Ω(n/p). Theorem 1.7 is thus asymptotically tight in this case. The corresponding bound for k-out is O(n/k), a factor of Θ(n) smaller. This example shows that it is much wiser, in certain situations, to sample edges incident to low-degree vertices with higher probabilities.
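The gap is easy to observe empirically. The following self-contained experiment is our own illustration (the parameters n = 1024 and k = 16 are arbitrary); it builds the clique-plus-small-cliques graph above and compares the two schemes at equal expected sample size:

```python
import random
from collections import defaultdict
from itertools import combinations

def count_inter_component(n, edges, sampled):
    """Edges of the base graph crossing components of the sample."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in sampled:
        parent[find(u)] = find(v)
    return sum(1 for u, v in edges if find(u) != find(v))

n, k = 1024, 16
r = n // (8 * k)                                  # small-clique size, about 1/p
edges = list(combinations(range(n // 2), 2))      # one big clique on n/2 vertices
for i in range(n // 2, n, r):                     # 4k disjoint small cliques
    edges += combinations(range(i, i + r), 2)

m = len(edges)
p = n * k / m                                     # equal expected sample sizes: pm = nk

independent = [e for e in edges if random.random() < p]

incident = defaultdict(list)
for u, v in edges:
    incident[u].append((u, v))
    incident[v].append((u, v))
k_out = set()
for inc in incident.values():
    k_out.update(random.sample(inc, min(k, len(inc))))

print("independent:", count_inter_component(n, edges, independent))  # Theta(n^2/k)
print("k-out:      ", count_inter_component(n, edges, k_out))        # O(n/k)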

1.3.2 Improved result for independent sampling for almost regular graphs

While Theorem 1.7 is best possible for general graphs, we show that it can be improved for almost regular graphs.

Definition 1.8 (Almost regular graphs)

A graph G is said to be almost d-regular if d/2 ≤ d(v) ≤ 2d, for every v ∈ V. A graph is almost regular if it is almost d-regular for some d.

If G is d-regular, then expected k-out sampling is equivalent to sampling each edge independently with probability p = 1 − (1 − k/d)². The models are very closely related if G is almost d-regular. Namely, expected k-out sampling then produces a subsample of the sample obtained by sampling each edge independently with probability 4k/d. Thus, Theorem 1.5 immediately implies the following new result for independent sampling.

Theorem 1.9 (Independent sampling of almost regular graphs)

Let G be an almost d-regular n-vertex graph and let p ≥ (c log n)/d, where c is a large enough constant. Let G_p be a random subgraph of G obtained by selecting each edge of G independently with probability p. Then, the expected number of edges of G that connect different connected components of G_p is O(n/(pd)).

We note that Theorem 1.9 cannot be extended to the weighted case. Consider a complete weighted graph on n vertices in which the weights of all edges incident on each vertex are distinct. It is easy to see that the expected number of F-light edges is Ω(n/p), and not O(n/(pd)) = O(1/p).

1.4 Applications in distributed computing

The problem that led us to consider the sampling model discussed in this paper is the following. Each vertex in an undirected graph only knows its neighbors. It can send a single message to a referee, who should then determine, with high probability, a spanning forest of the graph. How many bits does each vertex need to send to the referee?

If the vertices have access to public randomness, the answer is Θ(log³ n). The upper bound follows easily from Ahn et al. [2]. A matching lower bound was recently obtained by Nelson and Yu [19].

In Section 3 we show, using the sampling theorem (Theorem 1.2) with k = √n, and a few other ideas, that O(√n log n) bits are sufficient when the vertices only have access to private randomness. Nothing better than the trivial O(n) bits was previously known with private randomness.

The best known deterministic protocol uses O(n) bits and it is open whether this is optimal.

In the MapReduce-like model (see [17]), messages are passed in the form of ⟨key, value⟩ pairs of O(w) bits, where w is the word size. We assume there are many machines and each can send, receive, and compute with no more than s words in any round. In each round, if the words with the same key can fit on one machine, then one machine will receive them all, process them, and emit a new set of messages for the next round. Here we assume each edge of the input graph appears twice, as (u, v) and (v, u). As the machines are stateless, these will be recirculated in every round. It is easily seen that if s = Ω(n log² n), the one-way communication algorithm with public randomness cited above leads to a one-round Monte Carlo algorithm in the MapReduce-like model. Each machine which receives the edges incident to a particular vertex will compute the corresponding O(log³ n)-bit message for the referee and send it using O(log² n) one-word messages, all tagged with a special key. One machine will receive all these messages from all machines and can then act as the referee to compute a spanning forest. If no public randomness is available, then this must be preceded by a round in which a random string is created by one machine and a copy is sent with a key for each vertex.

A consequence of Theorem 1.2 is an almost equally simple four-round algorithm to compute a spanning forest which requires space of only s = O(n log n) words. Note that for k = Θ(log n), this is a Θ(log n) factor less memory than above. As before, for each vertex, there is a machine which receives the edges incident to the vertex, and it sends out, in round 1, its up to k sampled edges to a referee. This is done by giving all these messages a special key so that they are received by one machine that can act as the referee. The referee computes a spanning forest of the k-out subgraph. The spanning forest can be distributed to the machines in two rounds (see Jurdziński and Nowicki [13]). We assume again that the input graph is recirculated, and for each vertex there is a machine which receives all its incident edges and the spanning forest of the k-out subgraph. This machine can determine which of its incident edges connect different components of the k-out subgraph, and use a special key to send them out. The spanning forest of the k-out subgraph is also recirculated with the special key. Since, with probability 1 − o(1), there are no more than n such edges, a single machine will receive all these inter-component edges and the edges of the k-out spanning forest and compute a spanning forest of the original graph. If Theorem 1.2 holds with k = O(1), this would be an extremely simple four-round algorithm with s = O(n).

Jurdziński and Nowicki [13] obtained an O(1)-round algorithm with s = O(n), but it is much more complicated.

2 Proof of the sampling theorem

2.1 Relation between the k-out and expected k-out models

The following simple lemma shows that for sufficiently large k, the expected k-out and (exact) k-out models have essentially the same expected number of inter-component edges.

Lemma 2.1

Let G be an arbitrary undirected n-vertex graph. For any k, let I_k be the number of inter-component edges in G with respect to a k-out subgraph, and let Î_k be the number of inter-component edges in G with respect to an expected k-out subgraph. Then, for k ≥ c log n, where c is a large enough constant, E[I_{2k}] − o(1) ≤ E[Î_k] ≤ E[I_{k/2}] + o(1).

Proof:  Consider the directed edges in a random expected k-out subgraph. For any u ∈ V, let S_u be the set of outgoing edges of u in this subgraph, and let s_u = |S_u|. Note that, given s_u, S_u is a uniformly random subset of the outgoing edges of u of size s_u. If d(u) ≤ k, then S_u includes all edges incident to u. Assume, therefore, that d(u) > k, and let E_u be the set of outgoing edges of u in G. If s_u ≥ k/2, then a random subset of S_u of size k/2 is a uniformly random subset of E_u of size k/2. Thus, it has the same distribution as the subset of edges chosen by u in the exact (k/2)-out model. Similarly, if s_u ≤ 2k, then a random subset of E_u of size min(2k, d(u)) that contains S_u has exactly the same distribution as the set of edges chosen by u in the exact 2k-out model.

Let A be the event that k/2 ≤ s_u ≤ 2k for every vertex u with d(u) > k. Then

E[I_{2k} | A] ≤ E[Î_k | A] ≤ E[I_{k/2} | A].    (1)

The first inequality holds because, assuming A, we can add random edges to the expected k-out subgraph to get an exact 2k-out subgraph, as explained above. Adding random edges to a sampled subgraph can only decrease the number of inter-component edges. Similarly, the second inequality holds because, assuming A, we can randomly remove edges from the expected k-out subgraph to get an exact (k/2)-out subgraph, and this can only increase the number of inter-component edges. It is easy to check that the required containments hold also for vertices of degree at most k.

Let p̄ be the probability that s_u ∉ [k/2, 2k] for at least one u with d(u) > k. As all three quantities always lie between 0 and n², conditioning on A changes each of the three expectations by at most p̄·n². Combining with (1) gives

E[I_{2k}] − 2p̄n² ≤ E[Î_k] ≤ E[I_{k/2}] + 2p̄n².

By definition, s_u is a sum of independent indicator variables, so E[s_u] = d(u)·min(1, k/d(u)). For any vertex u with d(u) > k, we have E[s_u] = k, so by Chernoff bounds Pr[s_u > 2k] ≤ e^(−k/3) and Pr[s_u < k/2] ≤ e^(−k/8). A union bound then gives p̄ ≤ 2n·e^(−k/8). For k ≥ c log n, where c is a large enough constant, we then have p̄n² = o(1).

The first inequality, E[I_{2k}] − o(1) ≤ E[Î_k], shows that Theorem 1.5 implies Theorem 1.2.
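The two couplings used in the proof are easy to state algorithmically. A minimal sketch (ours, not the paper's; it assumes k/2 and 2k are integers and that the event A holds):

```python
import random

def extend_to_exact(S_u, E_u, t, rng=random):
    """Given S_u, a uniformly random |S_u|-subset of u's edge set E_u
    with |S_u| <= t, return a uniformly random t-subset containing S_u.
    Used with t = min(2k, d(u)) to couple the expected k-out choice
    with an exact 2k-out choice."""
    chosen = set(S_u)
    rest = [e for e in E_u if e not in chosen]
    return list(S_u) + rng.sample(rest, t - len(S_u))

def shrink_to_exact(S_u, t, rng=random):
    """Given S_u with |S_u| >= t, return a uniformly random t-subset.
    Used with t = k/2 to couple the expected k-out choice with an
    exact (k/2)-out choice."""
    return rng.sample(list(S_u), t)
```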

2.2 Overview of the proof of the sampling theorem for expected k-out

We prove Theorem 1.5 by sampling edges gradually. We construct two sequences of random subgraphs H_0 ⊆ H_1 ⊆ ⋯ ⊆ H_N and G_0 ⊇ G_1 ⊇ ⋯ ⊇ G_N, for some N, all subgraphs of G. We also have H_i ⊆ G_i, for 0 ≤ i ≤ N. Each subgraph H_{i+1} is obtained by sampling more edges from G_i and adding them to H_i. Each subgraph G_{i+1} is obtained from G_i by removing some edges that are inter-component edges of H_{i+1}, and possibly some additional edges not contained in H_{i+1}. Initially, H_0 contains all edges that touch a vertex of degree at most k. Note that all these edges are contained in every expected k-out sample. We let G_0 = G, the original graph. At the end of the process, H_N and G_N have the same connected components, viewed as sets of vertices.

Edges are sampled and added during the process in a way that ensures that H_N can be extended into an expected k-out sample of G. Adding more edges to H_N can only reduce the number of inter-component edges. All the edges of G that are inter-component with respect to H_N, and therefore also with respect to the final sample, must be in E(G) ∖ E(G_N). Thus, to prove the theorem it is enough to prove that E[|E(G) ∖ E(G_N)|] = O(n/k).

The round sampling model

A convenient way to view expected k-out sampling is that for every (directed) edge e = (u, v), where d(u) > k, we independently generate a uniform random number x_e ∈ [0, 1], and include (the undirected version of) e in the sampled subgraph if and only if x_e ≤ k/d(u). (In what follows, we let e stand for both the undirected edges of the original graph and their corresponding directed edges.)

Edges are sampled in rounds. For every edge e = (u, v), we generate a sequence of numbers 0 = p_{e,0} ≤ p_{e,1} ≤ ⋯ ≤ p_{e,N} = k/d(u), where N is the number of rounds. Edge e is sampled in round i if and only if p_{e,i−1} < x_e ≤ p_{e,i}. An edge is thus sampled in one of the rounds if and only if x_e ≤ k/d(u). The last round is special and is just used to ensure that p_{e,N} = k/d(u) for every e.

In the beginning of round i, where 1 ≤ i ≤ N, we set p_{e,i} for every (directed) edge e. The values p_{e,i} may depend on the sampling outcomes in previous rounds. That is, when setting p_{e,i} we know, for every edge e′, the values p_{e′,1}, …, p_{e′,i−1}, and in which round, if any, e′ was sampled. In particular, we know if e has already been sampled, i.e., if x_e ≤ p_{e,i−1}.

We do not choose the number of rounds N in advance. At the end of round i, we may decide to end the round sampling with one last special round, by setting N = i + 1 and p_{e,N} = k/d(u) for all edges e = (u, v).

Given that e was not sampled in rounds 1, …, i−1, the probability that it is sampled in round i is q_{e,i} = (p_{e,i} − p_{e,i−1})/(1 − p_{e,i−1}). In the analysis, we adopt the opposite view: we choose the conditional probabilities q_{e,i} and then set p_{e,i} = p_{e,i−1} + (1 − p_{e,i−1})·q_{e,i}. We can pick any sequence of q_{e,i}'s, as long as the resulting p_{e,i} never exceed k/d(u).
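One possible way to realize the round model in code is to keep, per directed edge, the cumulative probability assigned so far and flip an independent coin with the conditional probability q_{e,i} in each round. The class below is our own illustration, not the paper's machinery:

```python
import random

class RoundSampler:
    """Incremental sampling of one directed edge with budget k/d(u).

    Raising the cumulative probability p by conditional increments q
    is equivalent to drawing x_e uniform in [0,1] once and testing
    x_e <= p; the budget `limit` is never exceeded."""
    def __init__(self, limit):
        self.limit = limit   # k / d(u) for the directed edge (u, v)
        self.p = 0.0         # cumulative probability assigned so far
        self.sampled = False

    def round(self, q):
        """One sampling round with conditional probability q."""
        if self.sampled:
            return True
        new_p = self.p + (1.0 - self.p) * q
        assert new_p <= self.limit + 1e-12, "sampling budget exceeded"
        self.p = new_p
        self.sampled = random.random() < q
        return self.sampled

    def finish(self):
        """Final special round: raise p to exactly the budget."""
        q = (self.limit - self.p) / (1.0 - self.p) if self.p < 1 else 0.0
        return self.round(q)
```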

General sampling strategy

More specifically, to move from H_i to H_{i+1}, we look at a smallest connected component C of H_i that is not a complete connected component of G_i. We assign the edges that emanate from C some carefully chosen non-zero sampling probabilities, being careful not to exceed the overall sampling probability of each edge. We add edges to H_i according to these probabilities. We hope that the addition of new edges, if any, connects C to at least one other component of H_i. If this happens, i.e., at least one of the edges of the cut defined by C in G_i was sampled, we let G_{i+1} = G_i. Otherwise, we let G_{i+1} be G_i with the edges of the cut removed. (Note that this ensures that C is also a connected component of G_{i+1}.) We are essentially deciding to ‘give up’ on the edges of the cut and assume that they will end up as inter-component edges of H_N. If G_{i+1} ≠ G_i, we perform an additional trimming operation that removes some additional edges from G_{i+1}, while still maintaining H_{i+1} ⊆ G_{i+1}. Trimming will be explained later.

Each round reduces the number of connected components of H_i that are not connected components of G_i. Thus, for some N, all the connected components of H_N are also connected components of G_N and the process stops.

We let D_i = |E(G_{i−1})| − |E(G_i)| be the number of edges that were removed from G_{i−1} to form G_i. Note that N is a random variable. Our goal is to bound E[Σ_i D_i]. To bound this sum, we consider conditional expectations of the form E[D_i | T_{i−1}], where T_{i−1} is a transcript of everything that happened during the construction of H_0, …, H_{i−1} and G_0, …, G_{i−1}. (Actually, the sequences H_0, …, H_{i−1} and G_0, …, G_{i−1} give a full account of what happened. The last two subgraphs H_{i−1} and G_{i−1} on their own do not tell the whole story, as they do not specify when each edge of H_{i−1} was added.)

Let P be the probability distribution induced on the full transcripts of the process described above, and let P_i be the probability distribution on the transcripts of the first i rounds. Given a full transcript T, we let T_i be its restriction to the first i rounds. Clearly, if T is chosen according to P, then T_i is distributed according to P_i. Then,

E[Σ_i D_i] = Σ_i E_{T_{i−1} ∼ P_{i−1}} [ E[D_i | T_{i−1}] ].

We prove that Σ_i E[D_i | T_{i−1}] = O(n/k) for every transcript T. This is done using judiciously chosen sub-sampling probabilities and an intricate amortized analysis that shows that charging each vertex a cost of O(1/k) is enough to cover the expected total number of inter-component edges.

For a given transcript T, the expression E[D_i | T_{i−1}] has the following meaning. The transcript T gives a full description of what happens during the whole process. No randomness is left. However, in the i-th round we compute the expectation of D_i given the past T_{i−1}, without peeping into the future.

(The above discussion ignores the fact that N is also a random variable. This can be easily fixed by adding dummy rounds that ensure that the number of rounds is always the same.)

We are thus left with the task of bounding E[D_i | T_{i−1}] for a specific transcript T. Let C be the component we try to connect in the i-th round. The expectation is the size of the cut defined by C multiplied by the probability that no edge from this cut is sampled. (This ignores trimming, which will be explained later.) We need to bound this quantity and then decide how to split it among the vertices of C. As mentioned, the total cost of each vertex, during all iterations, should be O(1/k). In deciding how to split the cost, we take the full history T_{i−1} into account.

Having explained the general picture, we need to get down to the details. In what follows we use H, G and C instead of H_{i−1}, G_{i−1} (or G_i) and C_i.

2.3 Growing and trimming

Our proof repeatedly uses growth and trimming steps:

  • Growth Step: Pick a smallest connected component C of H which is not a complete connected component of G and try to connect it to other components of H by some limited sampling of edges adjacent to C (as shall be described later). If the step fails to connect C to other components, remove all of the edges in the cut defined by C from G.

  • Trimming Step: If v ∈ V is such that d_G(v) < d(v)/3, remove from G all edges that connect v to vertices not in the connected component of v in H.

A schematic description of the process is given in Figure 1. The red components on the left are components that will not grow anymore, as none of the edges going out of them is in G. The green components on the right are components that may still grow. Solid edges are edges that were already sampled, i.e., are in H. Dashed edges are edges in G ∖ H, i.e., edges that were not sampled yet, but may still be sampled. Finally, dotted edges are edges that were removed from G and thus will not be sampled. Unfilled vertices represent vertices that were trimmed.

Figure 1: A schematic description of the repeated growing and trimming process.
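For intuition, the overall loop can be summarized in a runnable sketch. This is our own simplification, not the paper's algorithm: it implements only the low-degree sampling rule (described in Section 2.6), omits the per-edge budget bookkeeping of the round model and the special treatment of high-degree components, and hard-codes the illustrative constants c = 1/6 and α = 1/3 used in the analysis below:

```python
import random
from collections import defaultdict

def find_root(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]          # path halving
        x = parent[x]
    return x

def grow_and_trim(n, edges, k, c=1.0 / 6, alpha=1.0 / 3, rng=random):
    deg = defaultdict(int)                     # degrees in the original graph
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    H = {e for e in edges if deg[e[0]] <= k or deg[e[1]] <= k}  # always sampled
    G = set(edges)
    while True:
        parent = list(range(n))                # components of H via union-find
        for u, v in H:
            parent[find_root(parent, u)] = find_root(parent, v)
        comp = [find_root(parent, x) for x in range(n)]
        size = defaultdict(int)
        for x in range(n):
            size[comp[x]] += 1
        cut = defaultdict(list)                # G-edges leaving each component
        for u, v in G:
            if comp[u] != comp[v]:
                cut[comp[u]].append((u, v))
                cut[comp[v]].append((u, v))
        if not cut:                            # every H-component complete in G
            return H, G
        C = min(cut, key=lambda cid: size[cid])     # smallest growable component
        q = min(1.0, c * k / (2 * size[C]))         # low-degree sampling rule
        hit = [e for e in cut[C] if rng.random() < q]
        if hit:
            H.update(hit)                      # growth step succeeded
        else:
            G -= set(cut[C])                   # give up on this cut
            while True:                        # trimming cascade
                degG = defaultdict(int)
                for u, v in G:
                    degG[u] += 1
                    degG[v] += 1
                doomed = [(u, v) for u, v in G if comp[u] != comp[v]
                          and (degG[u] < alpha * deg[u] or degG[v] < alpha * deg[v])]
                if not doomed:
                    break
                G -= set(doomed)
```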

By choosing a smallest component at each growth step we ensure that the size of the component is at least doubled, if the step is successful. Each vertex can therefore participate in at most log₂ n such growth steps. (Components may also grow as a result of other components growing into them.)

A trimming step may trigger a cascade of additional trimming steps. When the process ends, for every vertex v which has edges in G that connect it to vertices not in the connected component of v in H, we have d_G(v) ≥ d(v)/3. This is important in the proofs below, as it shows that we can sample edges with probabilities proportional to k/d(v), up to a constant factor. The following lemma claims that trimming does not remove too many additional edges.

Lemma 2.2

The number of edges removed from G during Trimming steps is at most the number of edges removed from G during (unsuccessful) Growth steps.

Proof:  We use a simple amortization argument. We say that a vertex v is active if v is part of a component in H that can still grow and if it was not trimmed yet. Suppose that a vertex is trimmed when d_G(v) < α·d(v), for some constant α. (We use α = 1/3.) We define the potential of an active vertex v to be β·(d(v) − d_G(v)), i.e., β times the number of edges of v that were deleted, for some β to be chosen later. The amortized cost of deleting an edge (u, v) following a failed growth step, where u belongs to the component that failed to grow, is at most 1 + β, where 1 is the actual cost of deleting the edge, and β is the increase of the potential of v, if it remains active. (The vertices of the component that failed to grow are no longer active.) When a vertex v is trimmed, its potential is at least β(1 − α)d(v). We want this potential to pay for the amortized cost of deleting the at most α·d(v) remaining edges of v leaving its component, i.e., β(1 − α)d(v) ≥ (1 + β)α·d(v), which is satisfied if we choose β = 1, as α = 1/3. For β = 1 the amortized cost of each deletion following a failed growth step is 2. It follows that the total number of edges deleted is at most twice the number of edges deleted following failed growth steps.

Thus, it is enough to bound the number of edges removed during Growth steps. Moreover, during Growth steps we can always assume that the degrees of the relevant vertices in the current graph G are the same as their degrees in the original graph, up to a multiplicative constant. Up to a constant factor, we can therefore assume that an edge (u, v) is sampled by u with probability proportional to k/d(u).

2.4 High level details

During growth steps, we distinguish between two types of components.

  • High-degree: A component C that contains a vertex v such that d_G(v) ≥ 2|C|.

  • Low-degree: A component C such that for every vertex v ∈ C we have d_G(v) < 2|C|.

High-degree components are easier to deal with when k ≥ c log n. Note that if d_G(v) ≥ 2|C|, then at least half of the edges of v leave the component C. Thus, a random edge of v leaves C with probability at least 1/2.

To grow a high-degree component C we pick some vertex v ∈ C with d_G(v) ≥ 2|C| and repeatedly sample its edges until an edge connecting it to a different connected component is found. This is a slight deviation from the framework of Section 2.2, as we might exceed the sampling probabilities of certain edges. We show, however, that this only happens with a very small probability, and it is therefore not problematic. (See the formal argument after Lemma 2.3.)

The main challenge, where novel ideas need to be used, is in the growth of low-degree components.

To grow a low-degree component C, we sample each edge adjacent to a vertex v ∈ C with probability ck/(2|C|), where c is a constant to be chosen later. As the size of the component containing a vertex v at least doubles between two consecutive growth steps in which the component of v participates, it is easy to check that the total sampling probability of each edge is not exceeded.
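Because the component of v at least doubles between consecutive participations, the total probability spent on any edge of v is dominated by a geometric series. A quick numeric sanity check (our own, using the constants above and the extreme case |C_1| ≈ d(v)/6 justified in the proof of Lemma 2.4 below):

```python
# v participates in low-degree growth steps with component sizes
# |C_1|, |C_2|, ... where |C_{i+1}| >= 2|C_i|; each step samples each of
# v's edges with probability q_i = c*k / (2*|C_i|).  For a never-trimmed
# vertex, |C_1| > d(v)/6 (proof of Lemma 2.4), so the total stays within
# the budget k/d(v) as long as c <= 1/6.
c, k, d_v = 1.0 / 6, 64, 100
C1 = d_v // 6 + 1                       # smallest possible first component
sizes = [C1 * 2 ** i for i in range(20)]
total = sum(c * k / (2 * s) for s in sizes)
assert total <= k / d_v
print(f"spent {total:.4f} of the budget {k / d_v:.4f}")
```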

2.5 Growth of high-degree components

When k ≥ c log n, high-degree components can be easily grown using a simple technique.

To grow a high-degree component C we pick a vertex v ∈ C with d_G(v) ≥ 2|C| and perform rounds in which we sample each edge of v with probability 1/d_G(v). We do that until an edge connecting C to another component is sampled. The next lemma shows that with high probability no sampling probability is exceeded, assuming that k ≥ c log n.

Lemma 2.3

With high probability, the overall probability with which an edge touching a vertex of degree d is sampled during all growth steps of high-degree components is O((log n)/d).

Proof:  Let v be a vertex of degree d = d_G(v). In each sampling round in the growth of a high-degree component in which v is the chosen high-degree vertex, we sample each edge adjacent to v with probability 1/d. The probability that at least one edge is sampled in such a round is at least 1 − (1 − 1/d)^d ≥ 1 − 1/e. If an edge is sampled, then with probability at least 1/2 it leaves the component and the process stops. Thus, each round is successful with probability at least (1 − 1/e)/2 ≥ 1/4.

Each time v belongs to a component that undergoes a successful growth step, the size of the component of v at least doubles. Thus, v participates in at most log₂ n such steps. The total number of sampling rounds in which v participates, over all growth steps, is thus stochastically dominated by a negative binomial random variable NB(log₂ n, 1/4): the number of times a coin that comes up “heads” with probability 1/4 needs to be flipped until seeing log₂ n “heads”. We want to bound Pr[NB(log₂ n, 1/4) > m], for m = c₂ log₂ n, where c₂ is a suitable constant.

Let B(m, p) be a binomial random variable that gives the number of “heads” in m throws of a coin that comes up “heads” with probability p. Clearly Pr[NB(r, p) > m] = Pr[B(m, p) < r].

By Chernoff’s bound, if X is a binomial random variable with μ = E[X] and 0 < δ < 1, then Pr[X ≤ (1 − δ)μ] ≤ e^(−δ²μ/2). In our case μ = (c₂/4) log₂ n and δ = 1 − 4/c₂. Thus, Pr[B(m, 1/4) < log₂ n] ≤ n^(−3), when c₂ is large enough. (Note that (1 − δ)μ = log₂ n.)

Thus, by a simple union bound, the probability that any vertex, and hence any directed edge, participates in more than c₂ log₂ n rounds is at most n·n^(−3) = n^(−2).

When k ≥ c log n, with c large enough, we have c₂ log₂ n · (1/d) ≤ k/d, so the probability of exceeding any sampling probability during the growth of high-degree components is at most n^(−2). This is much stronger than what we need.

With a very low probability, our scheme might exceed the allowed sampling probability of certain edges. This is a slight deviation from the framework of Section 2.2. It is justified as follows. Let A be the event that no sampling probability is exceeded. By Lemma 2.3, we have Pr[Ā] ≤ n^(−2). We prove below that E[I | A] = O(n/k), where I is the number of inter-component edges. To prove Theorem 1.5, we need to show that E[I] = O(n/k). However, as I ≤ n² always, we have E[I] ≤ E[I | A] + Pr[Ā]·n² ≤ O(n/k) + 1 = O(n/k), as required.

2.6 Main challenge: Growth of low-degree components

To grow a low-degree component C we sample each edge adjacent to a non-trimmed vertex v ∈ C with probability ck/(2|C|), where c is a constant to be chosen later.

Lemma 2.4

The total probability with which an edge incident to a vertex v is sampled by v during growth steps of low-degree components is O(k/d(v)).

Proof:  Let C_1 ⊊ C_2 ⊊ ⋯ ⊊ C_r be all the low-degree components containing v throughout the entire process, until v is trimmed. Let G_i be the graph G at the time at which the component C_i is grown. Note that |C_{i+1}| ≥ 2|C_i|, for every 1 ≤ i < r. The probability with which e is sampled by v when growing C_i is ck/(2|C_i|). We also have d_{G_i}(v) ≥ d(v)/3, as v was not trimmed yet. As C_i is a low-degree component, we have d_{G_i}(v) < 2|C_i|, and hence |C_i| > d(v)/6. Thus, the total probability with which e is sampled by v is

Σ_{i=1}^{r} ck/(2|C_i|) ≤ (ck/2)·Σ_{i=1}^{r} 2^(−(i−1))/|C_1| ≤ ck/|C_1| < 6ck/d(v) = O(k/d(v)).

For a component C, denote by cut(C) the number of edges of G in the cut (C, V ∖ C) defined by C.

We need to show that E[Σ_i D_i] = O(n/k). As a warm-up, we show that E[Σ_i D_i] = O((n log n)/k).

Proof: [That E[Σ_i D_i] = O((n log n)/k).] Let C be the low-degree component that we are currently growing, and let q = ck/(2|C|). We are sampling each edge, and in particular each edge of the cut, with probability at least q. If none of the edges of the cut is sampled, then all the edges in the cut are removed and counted in D_i. The expected number of edges removed, divided by |C|, is:

(cut(C)/|C|)·(1 − q)^cut(C) ≤ (cut(C)/|C|)·e^(−q·cut(C)).

The function f(x) = x·e^(−qx) is maximized at x = 1/q and its maximum value is 1/(eq). Thus, the maximum value of the expression above is 1/(eq|C|) = 2/(eck) = O(1/k).

Thus, to cover this expected cost, each vertex only needs to pay O(1/k) each time it participates in a growing component. As this happens at most log₂ n times, the total cost per vertex is at most O((log n)/k), and the total cost for all vertices is O((n log n)/k).

To prove that E[Σ_i D_i] = O(n/k), we need a much more elaborate argument. We start with several definitions.

Definition 2.5 (Maximum degree)

The maximum degree of a component C is defined to be Δ(C) = max_{v ∈ C} d(v). We also let v_C be a vertex of maximum degree in C. (Ties are broken arbitrarily.)

Definition 2.6 (Density)

The density of a component C is defined to be ρ(C) = Δ(C)/|C|.

Definition 2.7 (Density level of a component)

The density level of a component C is defined to be the unique integer ℓ(C) such that 2^ℓ(C) ≤ ρ(C) < 2^(ℓ(C)+1).

Definition 2.8 (Density level of a vertex)

Let v ∈ V and let C_1, C_2, …, C_r be all the low-degree components to which v belonged so far. The level ℓ(v) is defined to be

ℓ(v) = min_{1 ≤ i ≤ r} ℓ(C_i),

where ℓ(C_i) is the level of C_i when it participated in a growth step. If v did not participate yet in a low-degree component, then ℓ(v) = +∞. Note that ℓ(v) cannot increase.

Definition 2.9 (Cost of a component)

Let C be a low-degree component currently participating in a growth step. We define the cost of C, denoted cost(C), to be

The following simple technical lemma is used several times in what follows.

Lemma 2.10

The function f(x) = x·e^(−qx), where q > 0, attains its maximum value, 1/(eq), at the point x = 1/q, and is decreasing for x ≥ 1/q.

Proof:  The claim is immediate as f′(x) = (1 − qx)·e^(−qx).

Lemma 2.11

Let C be a low-degree component. Then, the expected number of edges removed from G during the growth step of C is O(cost(C)).

Proof:  Let t = cut(C) and let q = ck/(2|C|). Every non-trimmed vertex of C samples each of its incident edges with probability at least q. Thus, the probability of missing all the cut edges is at most (1 − q)^t ≤ e^(−qt).