Cutting resilient networks -- complete binary trees

11/14/2018 ∙ by Xing Shi Cai, et al. ∙ Uppsala universitet

In our previous work, we introduced the random k-cut number for rooted graphs. In this paper, we show that the distribution of the k-cut number in complete binary trees of size n, after rescaling, is asymptotically a periodic function of log₂ n − ⌊log₂ n⌋. Thus there are different limit distributions along different subsequences, and these limits are similar to weakly 1-stable distributions. This generalizes the result for the case k = 1, i.e., the traditional cutting model, by Janson.


1 Introduction

1.1 The model and the motivation

In our previous work [Cai010], we introduced the k-cut number for rooted graphs. Let k ≥ 1 be an integer. Let be a connected graph of nodes with exactly one node labeled as the root. We remove nodes from the graph using this random procedure:

  1. Initially set every node’s cut-counter to zero, i.e., no node has ever been cut.

  2. Choose one node uniformly at random from the component containing the root and increase its cut-counter by one, i.e., we cut the selected node once.

  3. If this node’s cut-counter hits k, i.e., it has been cut k times, then remove it from the graph.

  4. If the root has been removed, then stop. Otherwise, go to step 2.

We call the (random) total number of cuts needed for this procedure to end the k-cut number and denote it by . Note that in our model nodes are only removed after having been cut k times. The traditional cutting model corresponds to the case k = 1.

We can also cut and remove edges instead of nodes using the same process, with the modification that we stop when the root has been isolated. We denote the total number of cuts needed for this edge-removal process to end by .
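The node-cutting procedure above is straightforward to simulate. The following sketch is not from the paper; the helper names (k_cut_number, any_removed_on_path) and the parent-array tree representation are illustrative choices. It draws one realization of the node k-cut number of a rooted tree.

```python
import random

def any_removed_on_path(v, parent, removed):
    """Return True if v or any ancestor of v has already been removed."""
    while v is not None:
        if removed[v]:
            return True
        v = parent[v]
    return False

def k_cut_number(parent, k, rng=None):
    """One realization of the node k-cut number of a rooted tree.

    parent[v] is the parent of node v, with parent[root] = None.
    """
    rng = rng or random.Random()
    n = len(parent)
    counter = [0] * n       # how many times each node has been cut so far
    removed = [False] * n   # whether each node has been removed
    cuts = 0
    while True:
        # nodes still in the component containing the root
        component = [v for v in range(n)
                     if not any_removed_on_path(v, parent, removed)]
        v = rng.choice(component)   # cut a uniformly random node of it
        counter[v] += 1
        cuts += 1
        if counter[v] == k:         # removed after being cut k times
            removed[v] = True
            if parent[v] is None:   # the root is gone: the procedure ends
                return cuts
```

For a complete binary tree on n nodes in heap order one can take parent = [None] + [(v - 1) // 2 for v in range(1, n)] and average k_cut_number(parent, k) over many independent runs to estimate the expected k-cut number.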

The k-cut number can be seen as a measure of how difficult it is to destroy a resilient network. For example, in a botnet, a bot-master controls a large number of compromised computers (bots) for various cybercrimes. To counter-attack a botnet means to reduce the number of bots reachable from the bot-master by fixing compromised computers [4413000]. We can view a botnet as a graph and fixing a computer as removing a node from the graph. If we assume that each compromised computer takes k attempts to clean, and each attempt aims at a computer chosen uniformly at random, then the k-cut number is precisely the number of cleaning attempts needed to completely destroy the botnet.

The case k = 1, i.e., the traditional cutting model, has been well studied. It was first introduced by [meir70] for uniform random Cayley trees. [janson04, janson06] studied one-cuts in binary trees and in conditioned Galton-Watson trees. [ab14] simplified the proof of the limit distribution of one-cuts in conditioned Galton-Watson trees. The cutting model has also been studied in random recursive trees; see [meir74], [iksanov07], and [drmota09]. For binary search trees and split trees, see [holmgren10, holmgren11].

In our previous work [Cai010], we mainly analyzed , the k-cut number of a path of length , which generalizes the number of records in a uniform random permutation. In this paper, we continue our investigation with complete binary trees, i.e., binary trees in which every level is full except possibly the last, and the nodes at the last level occupy the leftmost positions. If the last level is also full, we call the tree a full binary tree.

1.2 An equivalent model

Let be a complete binary tree of size . Let and with the root of the tree as the root of the graph. There is an equivalent way to define . Let be i.i.d. exponential random variables with mean . Let . Imagine each node in has an alarm clock, and node ’s clock fires at times . If we cut a node when its alarm clock fires, then, due to the memoryless property of exponential random variables, we are actually choosing a node uniformly at random to cut.
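The memoryless property invoked here is easy to check numerically. The following minimal sketch (an illustration, with arbitrary values of s and t) compares P(X > s + t | X > s) with P(X > t) = e^(-t) for an Exp(1) variable.

```python
import math
import random

rng = random.Random(1)
s, t, trials = 0.7, 1.3, 200_000
samples = [rng.expovariate(1.0) for _ in range(trials)]

# For Exp(1): P(X > s + t | X > s) = P(X > t) = exp(-t)
survived_s = [x for x in samples if x > s]
cond = sum(x > s + t for x in survived_s) / len(survived_s)
print(cond, math.exp(-t))   # the two numbers agree up to Monte Carlo error
```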

However, this also means that we sometimes cut nodes that have already been removed from the tree. Thus, for a cut on node at time (for some ) to be counted in , none of its ancestors can already have been cut k times, i.e.,

(1.1)

where denotes that is an ancestor of . When the event in (1.1) happens, we say that (or simply ) is an -record and let

be the indicator random variable for this event. Let

be the total number of -records, i.e., . Then obviously . We use this equivalence for the rest of the paper.

By assigning alarm clocks to edges instead of nodes, we can define the edge version of -records and have .
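The record representation can also be simulated directly. The sketch below is illustrative only: it reuses the hypothetical parent-array representation from the earlier snippet and takes each node's first k firing times to be partial sums of i.i.d. Exp(1) variables. It counts, for each node, the cuts that happen before any strict ancestor has been cut k times, which is the event in (1.1).

```python
import random

def k_cut_via_records(parent, k, rng=None):
    """Count the cuts that are records in the clock representation:
    a cut on v at its i-th firing time (i <= k) is counted if no strict
    ancestor of v has fired k times yet."""
    rng = rng or random.Random()
    n = len(parent)
    # firing times of node v: partial sums of k i.i.d. Exp(1) variables
    T = []
    for _ in range(n):
        t, times = 0.0, []
        for _ in range(k):
            t += rng.expovariate(1.0)
            times.append(t)
        T.append(times)
    records = 0
    for v in range(n):
        # earliest time at which a strict ancestor of v is removed
        u, barrier = parent[v], float("inf")
        while u is not None:
            barrier = min(barrier, T[u][k - 1])
            u = parent[u]
        records += sum(time < barrier for time in T[v])
    return records
```

Averaged over many runs, this estimator and the sequential simulation given earlier should produce matching distributions, which is exactly the equivalence used in the rest of the paper.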

1.3 The main results

To introduce the main results, we need some notations. Let denote the fractional part of , i.e., . Let be the Gamma function [DLMF, 5.2.1]. Let be the upper incomplete Gamma function [DLMF, 8.2.2]. Let . Let be the inverse of . Let .

Theorem 1.1.

Assume that as . Then

(1.2)

where

(1.3)

, , and are constants defined in Proposition 4.1, and

has an infinitely divisible distribution with the characteristic function

(1.4)

where is a constant defined later in (5.38) and the Lévy measure has support on with density

(1.5)
Theorem 1.2.

Assume the same conditions as in Theorem 1.1. Then

(1.6)

The same holds true for .

Remark 1.1.

Let denote the left-hand side of (1.6). Another way of formulating Theorem 1.2 is by saying that the distance, e.g., in the Lévy metric, between the distribution of and the distribution of tends to zero as .

Remark 1.2.

We do not have a closed form for . But for specific they are easy to compute with computer algebra systems. When , i.e., when , (1.6) reduces to

(1.7)

and since , (1.5) reduces to

(1.8)

In other words, we recover the result for the traditional cutting model in complete binary trees by [janson04, Theorem 1.1]. When , (1.6) reduces to

(1.9)
Remark 1.3.

In Remark 1.5 of [janson04], Janson mentioned that when , if and are independent copies of , then , but the corresponding statement for three copies of is false. In other words, is roughly similar to a -stable distribution. This extends to general in the sense that

(1.10)

with . This follows by computing the characteristic functions of both sides using (1.4) and by noticing that

(1.11)

In the rest of the paper, we will first compute the expected number and variance of -records, conditioning on , where denotes the root. Then we show that the fluctuation of the total number of -records from its mean is more or less the same as the sum of such fluctuations in each subtree rooted at height , conditioning on what happens below height . This sum can be further approximated by a sum of independent random variables. Finally, we apply a classic theorem regarding convergence to infinitely divisible distributions [kallenberg02, Theorem 15.23] to prove Theorem 1.1. Then Theorem 1.2 follows immediately; see Section 6.

The proof follows a path similar to the one taken by [janson04] for the case k = 1. However, the analysis for k ≥ 2 is significantly more complicated.

[holmgren10, holmgren11] showed that when k = 1, the cut number has similar behavior in binary search trees and split trees as in complete binary trees. We are currently trying to prove this for k ≥ 2.

2 Some more notations

We collect some of the notations which are used frequently in this paper.

Let be the Gamma function [DLMF, 5.2.1], i.e.,

(2.1)

Note that for . Let and be the upper and lower incomplete Gamma functions respectively [DLMF, 8.2], i.e.,

(2.2)

Thus . Let . We also define .

Let . Let be the inverse of . Note that and .
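For concreteness, the incomplete Gamma functions and the inverse used in this notation can be evaluated numerically. The sketch below is an illustration, not part of the paper; note that SciPy's gammainc and gammaincc are the regularized versions, so they are multiplied by Γ(a) to obtain γ(a, x) and Γ(a, x).

```python
from scipy.optimize import brentq
from scipy.special import gamma, gammainc, gammaincc

a, x = 3.0, 1.7
lower = gammainc(a, x) * gamma(a)    # lower incomplete Gamma  γ(a, x)
upper = gammaincc(a, x) * gamma(a)   # upper incomplete Gamma  Γ(a, x)
assert abs(lower + upper - gamma(a)) < 1e-10   # γ(a, x) + Γ(a, x) = Γ(a)

def inverse_upper(a, y, hi=1e3):
    """Solve Γ(a, x) = y for x > 0; Γ(a, ·) is strictly decreasing in x."""
    return brentq(lambda x: gammaincc(a, x) * gamma(a) - y, 1e-12, hi)

print(x, inverse_upper(a, upper))    # the inverse recovers x
```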

Let be the height of a complete binary tree of nodes, i.e., . Let . Let .

For node , let be the height of , i.e., the distance (number of edges) between and the root, which we denote by .

Let be conditioned on , i.e., the number of -records, excluding the root, conditioned on the root being removed (cut for the -th time) at time .

For functions and , we write uniformly on to indicate that there exists a constant such that for all . The word uniformly stresses that does not depend on .

We use the notation and in the usual sense, see [janson11].

The notations denote constants that depend on and other parameters but do not depend on .

3 The expectation and the variance

Lemma 3.1.

There exist constants such that

(3.1)

uniformly for all , where .

Remark 3.1.

We do not have a closed form for the constants . However, for fixed , they are easy to find with computer algebra systems. For example, when , (3.1) reduces to

(3.2)

which is trivially true since . When , (3.1) reduces to

(3.3)
Proof.

Using the series expansion of given by [DLMF, 8.7.3], it is easy to verify that

(3.4)

uniformly for . Taking the binomial expansion of the right-hand side and ignoring lower-order terms gives (3.3). ∎
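The series expansion invoked in the proof can be sanity-checked numerically. The sketch below uses the standard small-x series γ(a, x) = Σ_{j≥0} (−1)^j x^(a+j) / (j! (a+j)), an equivalent form of the expansion cited from [DLMF, 8.7.3], chosen here only for simplicity.

```python
from math import factorial
from scipy.special import gamma, gammaincc

def lower_gamma_series(a, x, terms=20):
    """Small-x series: γ(a, x) = sum_{j>=0} (-1)^j x^(a+j) / (j! (a+j))."""
    return sum((-1) ** j * x ** (a + j) / (factorial(j) * (a + j))
               for j in range(terms))

a, x = 2.0, 0.05
upper_from_scipy = gammaincc(a, x) * gamma(a)          # Γ(a, x)
upper_from_series = gamma(a) - lower_gamma_series(a, x)  # Γ(a) - γ(a, x)
print(upper_from_scipy, upper_from_series)   # agree to many digits for small x
```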

Lemma 3.2.

If the tree is full, i.e., , then

(3.5)

where

(3.6)

where the implicit constants are defined in the proof.

Remark 3.2.

Again, although we do not have closed forms for the constants , they are not difficult to compute with computer algebra systems. For example, for , we have . For , we have

(3.7)
Proof.

Let be a node of height . For to be an -record, conditioning on , we need and for every that is an ancestor of . Recall that , where are i.i.d. exponential random variables. Thus are i.i.d. random variables and is a random variable, which are independent of everything else. (See Theorem 2.1.12 of [durrett10] for the relation between exponential distributions and Gamma distributions.)
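The Gamma/exponential relation used here is easy to verify by simulation. The following sketch is illustrative, with k playing the role of the number of cuts; it compares sums of k independent Exp(1) variables against the Gamma(k, 1) law.

```python
import random
from scipy.stats import kstest

k, trials = 4, 50_000
rng = random.Random(2)
# each sample is a sum of k independent Exp(1) variables
sums = [sum(rng.expovariate(1.0) for _ in range(k)) for _ in range(trials)]

# compare against the Gamma(k, 1) law, whose density is t^(k-1) e^(-t)/(k-1)!
stat, pvalue = kstest(sums, "gamma", args=(k,))
print(stat, pvalue)   # a large p-value is consistent with the Gamma(k, 1) law
```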

The Gamma distribution has the density function

(3.8)

which implies . Thus,

(3.9)

When the tree is full, each level has nodes. Thus in this case

(3.10)

where

(3.11)

as by [DLMF, 8.7.3]. Thus uniformly for with ,

(3.12)

for some constants , where we expand the left-hand side using (3.11) and Lemma 3.1, and then omit lower-order terms.

Note that for and ,

(3.13)

Thus if , by putting the expansion (3.12) into (3.10) and integrating term by term, we get (3.5).

For , it is not difficult to verify that the part of the integral in (3.10) over and the difference are both exponentially small and can be absorbed by the error term. ∎

Lemma 3.3.

If , then

(3.14)

where

(3.15)
Proof.

When is a node of height , by (3.9),

(3.16)

where Expanding by [DLMF, 8.7.3] and using Lemma 3.1, we have, uniformly for with

(3.17)

Note that this differs from (3.12) only by the constant in the term . Thus the first equality in (3.14) follows as in Lemma 3.2. The second equality follows by keeping only the main term of . ∎

The next lemma computes when the tree is not full. The reason why it is formulated in terms of will be clear in the proof of Lemma 4.2.

Lemma 3.4.

Let . Let

(3.18)

If , then

(3.19)
Proof.

Assume first that . When the tree is not necessarily full, the estimate of in (3.5) overcounts the number of nodes at height by . The contribution of the overcounted nodes in (3.5) can be estimated using (3.14). Subtracting this part from (3.5) gives (3.19).

The only other possible case is that and the tree is full. The result follows easily by adding an extra node at height , computing the total expectation of -records for this tree by the case already studied, and subtracting from (3.14). ∎

Corollary 3.1.

We have

(3.20)

where is defined in (1.3).

Proof.

Lemma 3.4 gives an asymptotic expansion of . To get rid of this conditioning, first consider a full binary tree of height , i.e., a tree of size . It is easy to see that is exactly twice that of for . This solves the case when the tree is full.

The general case can be solved similarly. Consider a binary tree with the right subtree of the root being (possibly not full) and the left subtree of the root being , i.e., a full binary tree of height . This tree has size . Thus is the expected number of -records in , plus the expected number of -records in , which is by the previous paragraph. Thus

(3.21)

which implies (3.20) by Lemma 3.4. ∎

Remark 3.3.

Comparing (3.20) and (1.2) in Theorem 1.1, we see that is concentrated well above their means (at a distance of about ). Thus . See also Remark 1.4 of [janson04].

Remark 3.4.

The simplest case that and the tree is full can also be computed directly by noticing that

(3.22)

where denotes the Hurwitz-Lerch zeta function [DLMF, 25.14], denotes the polylogarithm function [DLMF, 25.12], and the last step uses an asymptotic expansion of given in [Cai011].
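Although the exact identity in (3.22) is not reproduced here, the special functions it involves can be evaluated with mpmath, for example (an illustration only):

```python
import mpmath as mp

mp.mp.dps = 30   # working precision in decimal digits

# Hurwitz-Lerch zeta  Phi(z, s, a) = sum_{n>=0} z^n / (n + a)^s
print(mp.lerchphi(0.5, 2, 1))

# polylogarithm  Li_s(z) = sum_{n>=1} z^n / n^s
print(mp.polylog(2, 0.5))

# consistency check: Li_s(z) = z * Phi(z, s, 1)
assert mp.almosteq(mp.polylog(2, 0.5), mp.mpf("0.5") * mp.lerchphi(0.5, 2, 1))
```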

Lemma 3.5.

We have

(3.23)
Proof.

Consider two nodes, and , of heights and respectively. Let be the node that is furthest away from the root among the common ancestors of and . Let .

We call the pair good if and . Otherwise we call it bad. Assume for now that is good.

Let be the path from the root to . Let .

Note that conditioning on and , the events that is an -record and that is an -record are independent. Thus by Lemma 3.3 and the assumption that is good,

(3.24)

where .

Since is increasing in , (3.24) implies that, after averaging over ,

(3.25)

On the other hand, again by Lemma 3.3 and the assumption that is good,

(3.26)

Therefore, by the definition of in (3.15), the first order term of the above is

(3.27)

For and ,

(3.28)

since is decreasing and . Thus when is good,

(3.29)

Canceling other terms in (3.27) in a similar way shows that

(3.30)

Given , there are at most choices of . Thus

(3.31)

The number of bad pairs is at most

(3.32)

Using the fact that , it follows from (3.31) and (3.32) that

(3.33)

as the lemma claims. ∎

Remark 3.5.

When , i.e., when , by Lemma 3.4,

(3.34)

and by Lemma 3.5. Thus we recover Lemma 2.2 of [janson04].

Recall that . Let be the nodes of height . Let be the minimum of the for all nodes on the path between the root and .

Lemma 3.6.

We have

(3.35)
Proof.

The proof uses the estimate of the variance in Lemma 3.5 and exactly the same argument as in Lemma 2.3 of [janson04]. We omit the details. ∎

4 Transformation into a triangular array

In this section, we prove Proposition 4.1, which shows that , properly rescaled and shifted, can be written as a sum of independent random variables. Three technical lemmas, Lemma 4.1, Lemma 4.2 and Lemma 4.3, are needed first.

Proposition 4.1.

Let and . Then

(4.1)

where

(4.2)

and

(4.3)
Lemma 4.1.

Recall that has the distribution of the minimum of independent random variables. Let . Let be a constant. Then

(4.4)
(4.5)
(4.6)
Proof.

Since

(4.7)

the density of is

(4.8)

by the derivative formula

(4.9)

see [DLMF, 8.8.13]. For and , by the inequality [DLMF, 8.10.11],

(4.10)

Therefore,

(4.11)

For and , also by [DLMF, 8.10.11],

(4.12)

where the last inequality follows from the fact that for and . Therefore, similar to (4.11),

(4.13)

Thus we have (4.4).

For (4.5), first by (4.8),

(4.14)

Since is decreasing when , for

(4.15)

Therefore,

(4.16)

Substituting the above inequality into (4.14) and integrating gives (4.5).

For (4.6), note that

(4.17)

where The result follows easily from (4.4) and (4.5). ∎
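As a numerical illustration of the quantities handled in Lemma 4.1, the sketch below (with hypothetical parameters k, m and x) checks that the minimum of m independent Gamma(k, 1) variables has survival function (Γ(k, x)/Γ(k))^m, which is the identity behind (4.7) and (4.8).

```python
import random
from scipy.special import gammaincc

k, m, x, trials = 3, 5, 2.0, 100_000
rng = random.Random(3)

def gamma_k(rng, k):
    """A Gamma(k, 1) variable as a sum of k independent Exp(1) variables."""
    return sum(rng.expovariate(1.0) for _ in range(k))

mins = [min(gamma_k(rng, k) for _ in range(m)) for _ in range(trials)]
empirical = sum(v > x for v in mins) / trials

# P(min > x) = (Γ(k, x) / Γ(k))^m; gammaincc is exactly this regularized ratio
theoretical = gammaincc(k, x) ** m
print(empirical, theoretical)   # agree up to Monte Carlo error
```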

The next two lemmas first remove the (see Lemma 3.4) hidden in the representation (3.35), and then transform it into a sum of independent random variables.

Lemma 4.2.

Let be the size of the subtree rooted at . Then

(4.18)
Proof.

By Lemma 3.4, we have

(4.19)

where . (This is why we needed to formulate Lemma 3.4 in terms of : here is either the height of the subtree rooted at , or the height of the subtree plus one when the subtree is full.)

We now convert this into an expression in . Let

(4.20)

Then using the identity ,

(4.21)

The first term of the above expression is

(4.22)

since . The terms which do not contain can be bounded similarly. For terms involving , we can use Lemma 4.1. For example, by (4.6), the second term is

(4.23)

In the end, it follows from Lemma 4.1 and simple asymptotic computations that

(4.24)

Since ,

(4.25)

Thus by (3.35), we have

(4.26)

from which (4.18) follows immediately. ∎

Lemma 4.3.

Let be the size of the subtree rooted at the node . Then

(4.27)
Proof.

Recall that is the minimum of independent random variables , where denotes the path from the root to . Let . The probability that at least two are less than is

(4.28)

where we use the approximation of in (3.1) and the series expansion of in [DLMF, 8.7.3]. Thus the probability that this happens for some is .

With probability tending to , there is at most one that is less than on each path . When this happens, by the inequality (4.10),

(4.29)

Therefore,

(4.30)

where in the last step we use . Thus

(4.31)

The lemma follows by putting this into (4.18). ∎

Proof of Proposition 4.1.

Expanding (4.27) and dividing both sides by shows that

(4.32)

Subtracting

(4.33)

from both sides of (4.32) gives (4.1). ∎

5 Convergence of the triangular array

By taking subsequences, we can assume that as , and . Thus , , where . Moreover, and

(5.1)

which implies .

Lemma 5.1.

Let . Assume that and . Then as :

  1. For all fixed ,

  2. For all fixed , where is defined in (1.5).

  3. We have

    (5.2)

    where is a constant defined later in (5.38).

  4. We have

    (5.3)

Let , which are deterministic. It follows from Lemma 5.1 that we can apply Theorem 15.28 in [kallenberg02] with , to show that the triangular array converges in distribution to (defined in Theorem 1.1). Thus by Proposition 4.1, Theorem 1.1 follows immediately.

5.1 Proof of Lemma 5.1 (i)

Recall that in (4.2) we define

(5.4)

where are i.i.d. random variables. Thus , where , see (3.8). Assume for now that . Then, for all fixed ,

(5.5)

The function is only defined for . However, we can extend its domain to by letting for . This extension makes (5.5) also valid for , since in this case every expression in (5.5) equals .

By [DLMF, 8.10.11], for ,

(5.6)

Letting , (5.6) implies that

(5.7)

where . Similarly, it follows from (5.6) that

(5.8)

Note that (5.7) and (5.8) also hold for by our extension of . Thus uniformly for all with ,

(5.9)

where the last step uses that . Thus we can apply the series expansion of near in [DLMF, 8.7.3] to (5.5) to get

(5.10)

Therefore this probability tends to zero for all fixed .

5.2 Proof of Lemma 5.1 (ii)

We reuse the notion of good and bad nodes defined in [janson04, p. 250]. A good node has for some with . All other nodes with height at most are called bad. It is shown in [janson04, eq. 20] that

(5.11)

and that the number of bad nodes is .

We have shown in (5.10) that . By the same argument as in [janson04, Eq. 21, 22], the bad nodes can be ignored in the proofs of (ii), (iii) and (iv) of Lemma 5.1.

Note that for , for large enough, which implies by our extension of . Thus, it follows from (5.10) and (5.11) that

(5.12)

(By the inequality (5.7), the function is well-defined on .)

Let . Then . In other words for . Thus

(5.13)

where the last step uses (5.1). Note that is continuous and decreasing on , with as . By the derivative formula (4.9), the derivative of is

(5.14)

where

(5.15)

Comparing with (1.5), we see that

(5.16)

and , where the inequality follows from (5.7). Thus Lemma 5.1 (ii) is proved.

5.3 Proof of Lemma 5.1 (iii)

Assume for now that . Let , i.e., . By the upper bound of in (5.16), . Thus we are allowed to write this integral as

(5.17)

For , by the definition of in (5.15),

(5.18)

Using the derivative formula (4.9), one can verify that

(5.19)

where

(5.20)

Therefore,

(5.21)

Summing (5.21) over and as in (5.17) and simplifying through [DLMF, 8.8.2]

(5.22)

we have

(5.23)

where

(5.24)

By a similar argument, (5.23) also holds when . (When , (5.23) reduces to , as in [janson04].)
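The simplification via [DLMF, 8.8.2] uses a recurrence for the upper incomplete Gamma function. Assuming the standard form Γ(a + 1, x) = a Γ(a, x) + x^a e^(−x), it can be checked numerically as follows (illustration only):

```python
import math
from scipy.special import gamma, gammaincc

def upper_gamma(a, x):
    """Unregularized upper incomplete Gamma  Γ(a, x)."""
    return gammaincc(a, x) * gamma(a)

a, x = 2.5, 1.3
lhs = upper_gamma(a + 1, x)
rhs = a * upper_gamma(a, x) + x ** a * math.exp(-x)
print(lhs, rhs)                 # Γ(a+1, x) = a Γ(a, x) + x^a e^(-x)
assert abs(lhs - rhs) < 1e-10
```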

We next compute . By definition, if is good, then with . Let be the probability density function of . Differentiating (5.10) shows that uniformly for all and ,

(5.25)

where

(5.26)

Using again the derivative formula (4.9), one can verify that

(5.27)

where

(5.28)

Note also that if . Thus

(5.29)

where we use which follows from the inequalities (5.6), (5.7) and (5.8).

If , then

(5.30)

for large. Thus (5.29) reduces to

(5.31)

If , then

(5.32)

and (5.29) reduces to

(5.33)

So we distinguish three cases, , , and , which we refer to as the low part, the high part, and the last part.

The number of good nodes with , is given by (5.11). Thus for the low part, i.e., when is a good node with and ,

(5.34)

where the result has been simplified using (5.22). (The convergence of this sum follows from (5.7) and (5.8).) For the high part, i.e., when is a good node with and ,

(5.35)

And for the last part, i.e., when is a good node with ,

(5.36)

Together with (5.23),

(5.37)

where

(5.38)

(The fact that follows from the inequalities (5.6), (5.7), (5.8).) When , the above is simply , as in Theorem 1.1 of [janson04].

5.4 Proof of Lemma 5.1 (iv)

By the upper bound of in (5.16), . Thus we are allowed to write this integral as