
Cutting resilient networks -- complete binary trees

by Xing Shi Cai, et al.
Uppsala universitet

In our previous work, we introduced the random k-cut number for rooted graphs. In this paper, we show that the distribution of the k-cut number in complete binary trees of size $n$, after rescaling, is asymptotically a periodic function of $\lg n - \lfloor \lg n \rfloor$, where $\lg$ denotes the binary logarithm. Thus there are different limit distributions along different subsequences, where these limits are similar to weakly $1$-stable distributions. This generalizes the result of Janson for the case $k = 1$, i.e., the traditional cutting model.





1 Introduction

1.1 The model and the motivation

In our previous work [Cai010], we introduced the $k$-cut number for rooted graphs. Let $k \geq 1$ be a fixed integer. Let $G$ be a connected graph of $n$ nodes with exactly one node labeled as the root. We remove nodes from the graph using the following random procedure:

  1. Initially set every node’s cut-counter to zero, i.e., no node has ever been cut.

  2. Choose one node uniformly at random from the component containing the root and increase its cut-counter by one, i.e., we cut the selected node once.

  3. If this node’s cut-counter hits $k$, i.e., it has been cut $k$ times, then remove it from the graph.

  4. If the root has been removed, then stop. Otherwise, go to step 2.

We call the (random) total number of cuts needed for this procedure to end the $k$-cut number of $G$. Note that in our model nodes are only removed after having been cut $k$ times. The traditional cutting model corresponds to the case $k = 1$.
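The four-step procedure above can be simulated directly. The following is a minimal sketch (our own illustration, not code from the paper); the tree is encoded by a parent array, and the component containing the root is recomputed after each step:

```python
import random

def k_cut_number(parent, k, rng):
    """Simulate the k-cut procedure on a rooted tree (a sketch, not the
    paper's code). parent[v] is the parent of node v; parent[0] is None,
    so node 0 is the root. Returns the total number of cuts performed."""
    n = len(parent)
    counter = [0] * n            # step 1: every cut-counter starts at zero
    removed = [False] * n
    cuts = 0
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    while not removed[0]:        # step 4: stop once the root is removed
        # Component containing the root: nodes with no removed node on
        # their path to the root.
        component, stack = [], [0]
        while stack:
            v = stack.pop()
            component.append(v)
            stack.extend(c for c in children[v] if not removed[c])
        v = rng.choice(component)  # step 2: uniform node in the component
        counter[v] += 1
        cuts += 1
        if counter[v] == k:        # step 3: remove after the k-th cut
            removed[v] = True
    return cuts
```

With `parent = [None] + [(v - 1) // 2 for v in range(1, n)]` this runs on the complete binary tree in heap order; for a single node the answer is exactly $k$, and in general the $k$-cut number lies between $k$ and $kn$.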

We can also cut and remove edges instead of nodes using the same process, with the modification that we stop when the root has been isolated. The total number of cuts needed for this edge-removal process defines the edge version of the $k$-cut number.

The $k$-cut number can be seen as a measure of how difficult it is to destroy a resilient network. For example, in a botnet, a bot-master controls a large number of compromised computers (bots) for various cybercrimes. To counter a botnet means to reduce the number of bots reachable from the bot-master by fixing compromised computers [4413000]. We can view a botnet as a graph and fixing a computer as removing a node from the graph. If we assume that each compromised computer takes $k$ attempts to clean, and each attempt aims at a computer chosen uniformly at random, then the $k$-cut number is precisely the number of cleaning attempts needed to completely destroy the botnet.

The case $k = 1$, i.e., the traditional cutting model, has been well studied. It was first introduced in [meir70] for uniform random Cayley trees. One-cuts in binary trees and in conditioned Galton-Watson trees were studied in [janson04, janson06]. The proof of the limit distribution of one-cuts in conditioned Galton-Watson trees was later simplified in [ab14]. The cutting model has also been studied in random recursive trees; see [meir74, iksanov07, drmota09]. For binary search trees and split trees, see [holmgren10, holmgren11].

In our previous work [Cai010], we mainly analyzed the $k$-cut number of a path of length $n$, which generalizes the record number of a uniform random permutation. In this paper, we continue our investigation in complete binary trees, i.e., binary trees in which every level is full except possibly the last, and the nodes at the last level occupy the leftmost positions. If the last level is also full, then we call the tree a full binary tree.

1.2 An equivalent model

Let $T_n$ be a complete binary tree of size $n$, with the root of the tree as the root of the graph. There is an equivalent way to define the $k$-cut number of $T_n$. For each node $v$, let $E_{v,1}, E_{v,2}, \dots$ be i.i.d. exponential random variables with mean one, and let $G_{v,r} := \sum_{i=1}^{r} E_{v,i}$. Imagine that each node $v$ in $T_n$ has an alarm clock and that $v$'s clock fires at the times $(G_{v,r})_{r \geq 1}$. If we cut a node when its alarm clock fires, then due to the memoryless property of exponential random variables, we are actually choosing a node uniformly at random to cut.

However, this also means that we are cutting nodes that have already been removed from the tree. Thus for a cut on node $v$ at time $G_{v,r}$ (for some $r \leq k$) to be counted in the $k$-cut number, none of the ancestors of $v$ can have already been cut $k$ times by time $G_{v,r}$, i.e.,

$$G_{u,k} > G_{v,r} \quad \text{for all } u \prec v, \qquad (1.1)$$

where $u \prec v$ denotes that $u$ is an ancestor of $v$. When the event in (1.1) happens, we say that $G_{v,r}$ (or simply $v$) is an $r$-record, and we let $I_{v,r}$ be the indicator random variable of this event. Let $K_n := \sum_{v \in T_n} \sum_{r=1}^{k} I_{v,r}$ be the total number of records. Then obviously $K_n$ equals the $k$-cut number of $T_n$. We use this equivalence for the rest of the paper.

By assigning alarm clocks to edges instead of nodes, we can define the edge version of records and obtain the analogous identity for the edge $k$-cut number.
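The record reformulation is also easy to simulate; below is a sketch under the same assumptions (the notation $G_{v,r}$ is as above; function and variable names are ours):

```python
import random

def count_records(parent, k, rng):
    """Count records in the exponential-clock reformulation (a sketch,
    not the paper's code). parent[0] is None; node 0 is the root."""
    n = len(parent)
    # G[v][r-1] = time of the r-th firing of node v's clock: partial sums
    # of i.i.d. mean-one exponentials, so G[v][r-1] is Gamma(r) distributed.
    G = []
    for _ in range(n):
        t, times = 0.0, []
        for _ in range(k):
            t += rng.expovariate(1.0)
            times.append(t)
        G.append(times)
    records = 0
    for v in range(n):
        # A firing of v counts iff no ancestor's clock has fired k times yet.
        u, block = parent[v], float("inf")
        while u is not None:
            block = min(block, G[u][k - 1])
            u = parent[u]
        records += sum(1 for t in G[v] if t < block)
    return records
```

The root has no ancestors, so it always contributes exactly $k$ records, and the total is at most $kn$, matching the bounds on the $k$-cut number.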

1.3 The main results

To introduce the main results, we need some notation. Let $\{x\}$ denote the fractional part of $x$, i.e., $\{x\} := x - \lfloor x \rfloor$. Let $\Gamma$ be the Gamma function [DLMF, 5.2.1], and let $\Gamma(a,x)$ be the upper incomplete Gamma function [DLMF, 8.2.2]. The remaining functions and constants used below are collected in section 2.

Theorem 1.1.

Assume that as . Then




where the constants are defined in Proposition 4.1, and

has an infinitely divisible distribution with the characteristic function


where the constant is defined later in (5.38) and the Lévy measure is supported on the positive half-line with density

Theorem 1.2.

Assume the same conditions as in Theorem 1.1. Then


The same holds true for the edge version.

Remark 1.1.

Another way of formulating Theorem 1.2 is by saying that the distance, e.g., in the Lévy metric, between the distribution of the left-hand side of (1.6) and the limit distribution tends to zero as $n \to \infty$.

Remark 1.2.

We do not have closed forms for these constants, but for specific $k$ they are easy to compute with computer algebra systems. When $k = 1$, i.e., in the traditional cutting model, (1.6) reduces to


and since , (1.5) reduces to


In other words, we recover the result for the traditional cutting model in complete binary trees of [janson04, Theorem 1.1]. When $k = 2$, (1.6) reduces to

Remark 1.3.

In [janson04, Remark 1.5], Janson mentioned that when $k = 1$, if $W'$ and $W''$ are independent copies of the limit $W$, then $W' + W''$ has, up to an additive constant, the same distribution as $2W$, but the corresponding statement for three copies of $W$ is false. In other words, $W$ is roughly similar to a $1$-stable distribution. This extends to general $k$ in the sense that


with . This follows by computing the characteristic functions of both sides using (1.4) and by noticing that


In the rest of the paper, we first compute the expectation and the variance of the number of records, conditioning on the time at which the root is removed. Then we show that the fluctuation of the total number of records around its mean is essentially the sum of the corresponding fluctuations in the subtrees rooted at a fixed height, conditioning on what happens below that height. This sum can be further approximated by a sum of independent random variables. Finally, we apply a classic theorem on convergence to infinitely divisible distributions [kallenberg02, Theorem 15.23] to prove Theorem 1.1. Theorem 1.2 then follows immediately; see section 6.

The proof follows a path similar to that of [janson04] for the case $k = 1$. However, the analysis for $k \geq 2$ is significantly more complicated.

It was shown in [holmgren10, holmgren11] that when $k = 1$, the cut number has similar behavior in binary search trees and split trees as in complete binary trees. We are currently trying to prove the same for $k \geq 2$.

2 Some more notations

We collect some notation that is used frequently in this paper.

Let $\Gamma$ be the Gamma function [DLMF, 5.2.1], i.e.,

$$\Gamma(z) := \int_0^{\infty} t^{z-1} e^{-t} \, \mathrm{d}t, \qquad \Re(z) > 0.$$

Note that $\Gamma(k) = (k-1)!$ for integers $k \geq 1$. Let $\Gamma(a,x)$ and $\gamma(a,x)$ be the upper and lower incomplete Gamma functions respectively [DLMF, 8.2], i.e.,

$$\Gamma(a,x) := \int_x^{\infty} t^{a-1} e^{-t} \, \mathrm{d}t, \qquad \gamma(a,x) := \int_0^{x} t^{a-1} e^{-t} \, \mathrm{d}t.$$

Thus $\Gamma(a,x) + \gamma(a,x) = \Gamma(a)$.

Let . Let be the inverse of . Note that and .
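Although closed forms are unavailable, the functions above are easy to evaluate numerically. For integer $a = k$, the upper incomplete Gamma function reduces to a finite sum, $\Gamma(k,x) = (k-1)!\, e^{-x} \sum_{j=0}^{k-1} x^j/j!$ [DLMF, 8.4.8], and its inverse can be found by bisection since $\Gamma(k,\cdot)$ is decreasing. A minimal sketch (function names are ours):

```python
import math

def upper_gamma(k, x):
    """Upper incomplete Gamma function Γ(k, x) for integer k >= 1,
    via the finite-sum identity DLMF 8.4.8."""
    return math.factorial(k - 1) * math.exp(-x) * sum(
        x**j / math.factorial(j) for j in range(k)
    )

def upper_gamma_inverse(k, y, lo=0.0, hi=1e6):
    """Solve Γ(k, x) = y for x by bisection; Γ(k, ·) is decreasing,
    so the root lies to the right whenever Γ(k, mid) > y."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if upper_gamma(k, mid) > y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, `upper_gamma(1, x)` equals `exp(-x)` and `upper_gamma(k, 0)` equals $(k-1)!$, matching $\Gamma(a,x) + \gamma(a,x) = \Gamma(a)$ at $x = 0$.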

Let $h_n$ be the height of a complete binary tree of $n$ nodes, i.e., $h_n := \lfloor \lg n \rfloor$.

For a node $v$, let $h(v)$ be the height of $v$, i.e., the distance (number of edges) between $v$ and the root, which we denote by $o$.

We also consider the number of records, excluding the root, conditioned on the event that the root is removed (i.e., cut for the $k$-th time) at a given time $t$.

For functions $f$ and $g$, we write $f = O(g)$ uniformly on a set $S$ to indicate that there exists a constant $C$ such that $|f(x)| \leq C |g(x)|$ for all $x \in S$. The word uniformly stresses that $C$ does not depend on $x$.

We use the notations $o_p(\cdot)$ and $O_p(\cdot)$ in the usual sense; see [janson11].

The notations $c_1, c_2, \dots$ denote constants that depend on $k$ and other parameters but do not depend on $n$.

3 The expectation and the variance

Lemma 3.1.

There exist constants such that


uniformly for all , where .

Remark 3.1.

We do not have a closed form for the constants. However, for fixed $k$, they are easy to find with computer algebra systems. For example, when $k = 1$, (3.1) reduces to


which is trivially true. When $k = 2$, (3.1) reduces to


Using the series expansion of given by [DLMF, 8.7.3], it is easy to verify that


uniformly for . Taking the binomial expansion of the right-hand side and ignoring lower-order terms gives (3.3). ∎

Lemma 3.2.

In the case that the tree is full, i.e., $n = 2^{h_n + 1} - 1$, we have




where the implicit constants are defined in the proof.

Remark 3.2.

Again, although we do not have closed forms for the constants, they are not difficult to compute with computer algebra systems. For example, for $k = 1$, we have a simple closed expression. For $k = 2$, we have


Let $v$ be a node of height $h(v)$. For $v$ to be a record, conditioning on the removal time of the root, we need the cut on $v$ to occur before every ancestor of $v$ has been cut $k$ times. Recall that $G_{u,k} = \sum_{i=1}^{k} E_{u,i}$, where the $E_{u,i}$ are i.i.d. exponential random variables with mean one. Thus the $G_{u,k}$ are i.i.d. $\mathrm{Gamma}(k)$ random variables, independent of everything else. (See [durrett10, Theorem 2.1.12] for the relation between exponential and Gamma distributions.)
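The tail identity used here, namely that a sum of $k$ mean-one exponentials is $\mathrm{Gamma}(k)$ with $\mathbb{P}(\mathrm{Gamma}(k) > t) = \Gamma(k,t)/\Gamma(k)$, can be sanity-checked by Monte Carlo; a sketch (names are ours):

```python
import math
import random

def gamma_tail(k, t):
    """P(Gamma(k) > t) = Γ(k, t)/Γ(k) for integer k, via DLMF 8.4.8."""
    return math.exp(-t) * sum(t**j / math.factorial(j) for j in range(k))

rng = random.Random(42)
k, t, trials = 2, 1.0, 200_000
# A Gamma(k) variable is a sum of k i.i.d. mean-one exponentials.
hits = sum(
    sum(rng.expovariate(1.0) for _ in range(k)) > t for _ in range(trials)
)
print(hits / trials, gamma_tail(k, t))  # both close to 2/e ≈ 0.7358
```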

The $\mathrm{Gamma}(k)$ distribution has the density function

$$f(t) = \frac{t^{k-1} e^{-t}}{\Gamma(k)}, \qquad t > 0,$$

which implies $\mathbb{P}(G_{u,k} > t) = \Gamma(k,t)/\Gamma(k)$. Thus,


When the tree is full, level $\ell$ contains $2^{\ell}$ nodes for every $0 \leq \ell \leq h_n$. Thus in this case




as by [DLMF, 8.7.3]. Thus uniformly for with ,


for some constants, where we expand the left-hand side using (3.11) and Lemma 3.1 and then omit lower-order terms.

Note that for and ,


Thus if , by putting the expansion (3.12) into (3.10) and integrating term by term, we get (3.5).

For , it is not difficult to verify that the part of the integral in (3.10) over and the difference are both exponentially small and can be absorbed by the error term. ∎

Lemma 3.3.

If , then




When is a node of height , by (3.9),


where the notation is as above. Expanding by [DLMF, 8.7.3] and using Lemma 3.1, we have, uniformly,


Note that this differs from (3.12) only by the constant in the term . Thus the first equality in (3.14) follows as in Lemma 3.2. The second equality follows by keeping only the main term of . ∎

The next lemma computes when the tree is not full. The reason why it is formulated in terms of will be clear in the proof of Lemma 4.2.

Lemma 3.4.

Let . Let


If , then


Assume first that the tree is not necessarily full. In this case, the estimate of the expectation in (3.5) overcounts the number of nodes at height $h_n$. The contribution of the overcounted nodes in (3.5) can be estimated using (3.14). Subtracting this part from (3.5) gives (3.19).

The only other possible case is that the tree is full. The result then follows easily by adding an extra node at height $h_n + 1$, computing the total expectation of records for this tree by the case already studied, and subtracting the extra contribution using (3.14). ∎

Corollary 3.1.

We have


where is defined in (1.3).


Lemma 3.4 gives an asymptotic expansion of . To get rid of this conditioning, first consider a full binary tree of height , i.e., a tree of size . It is easy to see that is exactly twice of for . This solves the case when the tree is full.

The general case can be solved similarly. Consider a binary tree with the right subtree of the root being possibly not full, and the left subtree of the root being a full binary tree of the appropriate height. The expected number of records in this tree is the expected number of records in the right subtree plus the expected number of records in the left subtree, which is known by the previous paragraph. Thus


which implies (3.20) by Lemma 3.4. ∎

Remark 3.3.

Comparing (3.20) and (1.2) in Theorem 1.1, we see that the number of records is concentrated well above its mean. See also [janson04, Remark 1.4].

Remark 3.4.

The simplest case, when $k = 1$ and the tree is full, can also be computed directly by noticing that


where denotes Hurwitz-Lerch zeta function [DLMF, 25.14], denotes the polylogarithm function [DLMF, 25.12], and the last step uses an asymptotic expansion of given in [Cai011].

Lemma 3.5.

We have


Consider two nodes $u$ and $v$, of heights $h(u)$ and $h(v)$ respectively. Let $w$ be the node that is furthest away from the root among the common ancestors of $u$ and $v$.

We call the pair good if and . Otherwise we call it bad. Assume for now that is good.

Let be the path from the root to . Let .

Note that, after appropriate conditioning, the events that $u$ is a record and that $v$ is a record are independent. Thus by Lemma 3.3 and the assumption that the pair is good,


where .

Since is increasing in , (3.24) implies that, after averaging over ,


On the other hand, again by Lemma 3.3 and the assumption that is good,


Therefore, by the definition in (3.15), the first-order term of the above is


For and ,


since is decreasing and . Thus when is good,


Canceling other terms in (3.27) in a similar way shows that


Given , there are at most choices of . Thus


The number of bad pairs is at most


Using the fact that , it follows from (3.31) and (3.32) that


as the lemma claims. ∎

Remark 3.5.

When , i.e., when , by Lemma 3.4,


and by Lemma 3.5. Thus we recover [janson04, Lemma 2.2].

Let $v_1, v_2, \dots$ be the nodes of a given height, and for each such node $v_i$ let $M_i$ be the minimum of $G_{u,k}$ over all nodes $u$ on the path between the root and $v_i$.

Lemma 3.6.

We have


The proof uses the estimate of the variance in Lemma 3.5 and exactly the same argument of Lemma 2.3 in [janson04]. We omit the details. ∎

4 Transformation into a triangular array

In this section, we prove Proposition 4.1, which shows that the number of records, properly rescaled and shifted, can be written as a sum of independent random variables. Three technical lemmas (Lemmas 4.1, 4.2 and 4.3) are needed first.

Proposition 4.1.

Let and . Then





Lemma 4.1.

Recall that each $M_i$ has the distribution of the minimum of independent $\mathrm{Gamma}(k)$ random variables. Let $C$ be a constant. Then




the density of is


by the derivative formula


see [DLMF, 8.8.13]. For and , by the inequality [DLMF, 8.10.11],




For and , also by [DLMF, 8.10.11],


where the last inequality follows from that for and . Therefore, similar to (4.11),


Thus we have (4.4).

For (4.5), first by (4.8),


Since is decreasing when , for




Substituting the above inequality into (4.14) and integrating gives (4.5).

For (4.6), note that


where the last quantity is defined accordingly. The result follows easily from (4.4) and (4.5). ∎
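The tail of the minimum in Lemma 4.1 factorizes: the minimum of $m$ independent $\mathrm{Gamma}(k)$ variables exceeds $t$ with probability $(\Gamma(k,t)/\Gamma(k))^m$. A quick Monte Carlo check of this formula (our own sketch; names assumed):

```python
import math
import random

def min_gamma_tail(k, m, t):
    """P(min of m i.i.d. Gamma(k) variables > t) = (Γ(k, t)/Γ(k))^m."""
    single = math.exp(-t) * sum(t**j / math.factorial(j) for j in range(k))
    return single ** m

rng = random.Random(1)
k, m, t, trials = 2, 3, 0.5, 100_000
# Each Gamma(k) sample is a sum of k i.i.d. mean-one exponentials.
hits = sum(
    min(sum(rng.expovariate(1.0) for _ in range(k)) for _ in range(m)) > t
    for _ in range(trials)
)
```

The empirical frequency `hits / trials` should agree with `min_gamma_tail(k, m, t)` to within Monte Carlo error.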

The next two lemmas first remove the quantity from Lemma 3.4 that is hidden in the representation (3.35), and then transform it into a sum of independent random variables.

Lemma 4.2.

Let be the size of the subtree rooted at . Then


By Lemma 3.4, we have


where the parameter is as in Lemma 3.4. (This is why we need to formulate Lemma 3.4 in terms of this parameter: here it is either the height of the subtree rooted at the given node, or that height plus one when the subtree is full.)

We now convert this into an expression in . Let


Then using the identity ,


The first term of the above expression is


since . The terms which do not contain can be bounded similarly. For terms involving , we can use Lemma 4.1. For example, by (4.6), the second term is


In the end, it follows from Lemma 4.1 and simple asymptotic computations that


Since ,


Thus by (3.35), we have


from which (4.18) follows immediately. ∎

Lemma 4.3.

Let be the size of the subtree rooted at the node . Then


Recall that this quantity is the minimum of independent $\mathrm{Gamma}(k)$ random variables along the path from the root to the given node. The probability that at least two of these variables are less than the given threshold is


where we use the approximation in (3.1) and the series expansion in [DLMF, 8.7.3]. Thus the probability that this happens on some path tends to zero.

With probability goes to , there is at most one that is less than on each path . When this happens, by the inequality (4.10),




where in the last step we use . Thus


The lemma follows by putting this into (4.18). ∎

Proof of Proposition 4.1.

Expanding (4.27) and dividing both sides by the scaling factor shows that




Subtracting the same quantity from both sides of (4.32) gives (4.1). ∎

5 Convergence of the triangular array

By taking subsequences, we can assume that as , and . Thus , , where . Moreover, and


which implies .

Lemma 5.1.

Let . Assume that and . Then as :

  1. For all fixed ,

  2. For all fixed , where is defined in (1.5).

  3. We have


    where is a constant defined later in (5.38).

  4. We have


Note that the centering terms are deterministic. It follows from Lemma 5.1 that we can apply Theorem 15.28 in [kallenberg02] to show that the triangular array converges in distribution to the limit defined in Theorem 1.1. Thus, by Proposition 4.1, Theorem 1.1 follows immediately.

5.1 Proof of Lemma 5.1 (i)

Recall that in (4.2) we define


where are i.i.d. random variables. Thus , where , see (3.8). Assume for now that . Then, for all fixed .


The function is only defined for . However, we can extend its domain to by letting for . This extension makes (5.5) also valid for , since in this case every expression in (5.5) equals .

By [DLMF, 8.10.11], for ,


Letting , (5.6) implies that


where . Similarly, it follows from (5.6) that


Note that (5.7) and (5.8) also hold for by our extension of . Thus uniformly for all with ,


where the last step uses that . Thus we can apply the series expansion of near in [DLMF, 8.7.3] to (5.5) to get


Therefore this probability tends to zero for all fixed .

5.2 Proof of Lemma 5.1 (ii)

We reuse the notion of good and bad nodes defined in [janson04, pp. 250]. A good node has for some with . All other nodes with height at most are called bad. It was shown in [janson04, eq. 20] that


and that the number of bad nodes is .

We have shown in (5.10) that this probability tends to zero. By the same argument as in [janson04, eq. 21, 22], the bad nodes can be ignored in the proofs of (ii), (iii) and (iv) of Lemma 5.1.

Note that for , for large enough, which implies by our extension of . Thus, it follows from (5.10) and (5.11) that


(By the inequality (5.7), the function is well-defined on .)

Let . Then . In other words for . Thus


where the last step uses (5.1). Note that is continuous and decreasing on , with as . By the derivative formula (4.9), the derivative of is




Comparing with (1.5), we see that


and , where the inequality follows from (5.7). Thus Lemma 5.1 (ii) is proved.

5.3 Proof of Lemma 5.1 (iii)

Assume for now that . Let , i.e., . By the upper bound of in (5.16), . Thus we are allowed to write this integral as


For , by the definition of in (5.15),


Using the derivative formula (4.9), one can verify that






Summing (5.21) over and as in (5.17) and simplifying through [DLMF, 8.8.2]


we have




By a similar argument, (5.23) also holds when . (When , (5.23) reduces to , as in [janson04].)

We next compute the remaining sum. By definition, if a node is good, then its height has the form stated above. Let $f$ be the probability density function of the corresponding random variable. Differentiating (5.10) shows that, uniformly for all relevant arguments,




Using again the derivative formula (4.9), one can verify that




Note also that if . Thus


where we use which follows from the inequalities (5.6), (5.7) and (5.8).

If , then


for large. Thus (5.29) reduces to


If , then


and (5.29) reduces to


So we distinguish three cases, , , and , which we refer to as the low part, the high part, and the last part.

The number of good nodes with , is given by (5.11). Thus for the low part, i.e., when is a good node with and ,


where the result has been simplified using (5.22). (The convergence of this sum follows from (5.7) and (5.8).) For the high part, i.e., when is a good node with and ,


And for the last part, i.e., when is a good node with ,


Together with (5.23),




(The fact that follows from the inequalities (5.6), (5.7), (5.8).) When , the above is simply , as in Theorem 1.1 of [janson04].

5.4 Proof of Lemma 5.1 (iv)

By the upper bound of in (5.16), . Thus we are allowed to write this integral as