The giant component of the directed configuration model revisited

04/10/2020 ∙ by Xing Shi Cai, et al. ∙ Universitat Politècnica de Catalunya ∙ Uppsala universitet

We prove a law of large numbers for the order and size of the largest strongly connected component in the directed configuration model. Our result extends previous work by Cooper and Frieze.


1 Introduction and notations

An scc (strongly connected component) in a digraph (directed graph) is a maximal sub-digraph in which there exists a directed path from every node to every other node. In this short note, we analyse the size of the giant component, i.e., the largest scc, in the directed configuration model. This is a continuation of our previous work [4], which studied the diameter of the model.

We briefly introduce the model and our assumptions. For further discussions and references, see [4]. Let $[n] := \{1, \dots, n\}$ be a set of nodes. Let $\mathbf{d}_n := (d_i^-, d_i^+)_{i \in [n]}$ be a bi-degree sequence with $\sum_{i \in [n]} d_i^- = \sum_{i \in [n]} d_i^+$. The directed configuration model, $\vec{G}_n := \vec{G}(\mathbf{d}_n)$, is the random directed multigraph on $[n]$ generated by giving $d_i^-$ in half-edges (heads) and $d_i^+$ out half-edges (tails) to node $i$, and then pairing the heads and tails uniformly at random.
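To make the pairing construction concrete, here is a minimal Python sketch of it (the function and variable names are ours, not notation from the paper); it returns the sampled multigraph as a list of directed edges.

```python
import random

def directed_configuration_model(d_in, d_out, seed=None):
    """Pair heads and tails uniformly at random and return the directed edges.

    d_in[i] and d_out[i] are the in- and out-degrees of node i; the two lists
    must have equal sums for a complete pairing to exist.
    """
    assert sum(d_in) == sum(d_out), "bi-degree sequence must balance"
    rng = random.Random(seed)
    # One head per unit of in-degree and one tail per unit of out-degree,
    # each labelled by the node it is attached to.
    heads = [v for v, d in enumerate(d_in) for _ in range(d)]
    tails = [v for v, d in enumerate(d_out) for _ in range(d)]
    rng.shuffle(heads)               # a uniformly random pairing of tails with heads
    return list(zip(tails, heads))   # edge (u, v): a tail of u paired with a head of v

# Example: four nodes, each with in- and out-degree 2.
print(directed_configuration_model([2, 2, 2, 2], [2, 2, 2, 2], seed=1))
```

Self-loops and multiple edges may appear, which is why the model is a multigraph.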

Let $(D_n^-, D_n^+)$ be the degrees (number of heads and tails) of a uniform random node. Let $n_{j,k}$ be the number of nodes with degree $(j, k)$ in $\mathbf{d}_n$. Let $m_n := \sum_{i \in [n]} d_i^+$. Consider a sequence of bi-degree sequences $(\mathbf{d}_n)_{n \ge 1}$. Throughout the paper, we will assume the following condition is satisfied:

Condition 1.1.

There exists a discrete probability distribution $(D^-, D^+)$ on $\mathbb{Z}_{\ge 0}^{2}$ with $\lambda := \mathbb{E}[D^-] = \mathbb{E}[D^+] \in (0, \infty)$ such that

  (i) $(D_n^-, D_n^+)$ converges to $(D^-, D^+)$ in distribution: for every $(j, k) \in \mathbb{Z}_{\ge 0}^{2}$, $\lim_{n \to \infty} \mathbb{P}\{(D_n^-, D_n^+) = (j, k)\} = \mathbb{P}\{(D^-, D^+) = (j, k)\}$.

  (ii) $(D_n^-, D_n^+)$ converges to $(D^-, D^+)$ in expectation and the expectation is finite:

    $\lim_{n \to \infty} \mathbb{E}[D_n^{\pm}] = \mathbb{E}[D^{\pm}] = \lambda \in (0, \infty).$    (1.1)

  (iii) $(D_n^-, D_n^+)$ converges to $(D^-, D^+)$ in second moment and the second moments are finite: for $\alpha, \beta \in \{-, +\}$,

    $\lim_{n \to \infty} \mathbb{E}[D_n^{\alpha} D_n^{\beta}] = \mathbb{E}[D^{\alpha} D^{\beta}] < \infty.$    (1.2)
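As a small illustration of conditions (ii) and (iii) (a sketch only; the helper name `empirical_moments` is ours), the moments appearing in (1.1) and (1.2) can be computed directly from a bi-degree sequence and compared with those of the limiting distribution.

```python
def empirical_moments(d_in, d_out):
    """First and second moments of the degrees (D_n^-, D_n^+) of a uniform node."""
    n = len(d_in)
    mean_in = sum(d_in) / n                                  # E[D_n^-]
    mean_out = sum(d_out) / n                                # E[D_n^+]
    mixed = sum(j * k for j, k in zip(d_in, d_out)) / n      # E[D_n^- D_n^+]
    second_in = sum(j * j for j in d_in) / n                 # E[(D_n^-)^2]
    second_out = sum(k * k for k in d_out) / n               # E[(D_n^+)^2]
    return mean_in, mean_out, mixed, second_in, second_out

# Half of the nodes have degree (1, 1) and half have degree (3, 3).
d_in, d_out = [1, 3] * 500, [1, 3] * 500
print(empirical_moments(d_in, d_out))   # (2.0, 2.0, 5.0, 5.0, 5.0)
```

Under Condition 1.1 these quantities converge to the corresponding moments of $(D^-, D^+)$.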

To state the main result, some parameters of $(D^-, D^+)$ are needed. Let

$\nu := \frac{\mathbb{E}[D^- D^+]}{\lambda} < \infty,$    (1.3)

where the inequality follows from conditions (ii) and (iii). Let $f(x, y) := \mathbb{E}[x^{D^-} y^{D^+}]$ be the bivariate generating function of $(D^-, D^+)$. Let $\zeta^-$ and $\zeta^+$ be the survival probabilities of the branching processes with offspring distributions which have generating functions $\frac{1}{\lambda} \frac{\partial f}{\partial y}(x, 1)$ and $\frac{1}{\lambda} \frac{\partial f}{\partial x}(1, y)$ respectively. In other words, $\zeta^-$ and $\zeta^+$ are, respectively, the smallest positive solutions to the equations

$1 - \zeta^- = \frac{1}{\lambda} \frac{\partial f}{\partial y}(1 - \zeta^-, 1), \qquad 1 - \zeta^+ = \frac{1}{\lambda} \frac{\partial f}{\partial x}(1, 1 - \zeta^+).$    (1.4)
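The equations above determine $\zeta^-$ and $\zeta^+$ only implicitly. The following Python sketch (function names ours, and relying on the fixed-point characterisation as reconstructed in (1.4) and (1.7)) computes them by iterating the corresponding extinction probabilities from below, which converges to the smallest fixed point.

```python
def survival_probabilities(pmf, tol=1e-12, max_iter=10_000):
    """pmf maps (j, k) to P{(D^-, D^+) = (j, k)}.

    Returns (zeta_minus, zeta_plus, zeta), computed by fixed-point iteration on
    the extinction probabilities of the two size-biased branching processes.
    """
    lam = sum(k * p for (j, k), p in pmf.items())            # E[D^+] (= E[D^-])

    def smallest_fixed_point(pgf):
        q = 0.0
        for _ in range(max_iter):
            q_next = pgf(q)
            if abs(q_next - q) < tol:
                break
            q = q_next
        return q

    # Backward process: offspring pgf E[D^+ s^{D^-}] / lambda.
    q_minus = smallest_fixed_point(
        lambda s: sum(k * p * s ** j for (j, k), p in pmf.items()) / lam)
    # Forward process: offspring pgf E[D^- s^{D^+}] / lambda.
    q_plus = smallest_fixed_point(
        lambda s: sum(j * p * s ** k for (j, k), p in pmf.items()) / lam)

    # Fraction of nodes whose backward and forward explorations both survive.
    zeta = sum(p * (1 - q_minus ** j) * (1 - q_plus ** k) for (j, k), p in pmf.items())
    return 1 - q_minus, 1 - q_plus, zeta

# Example: degrees (1,0), (0,1), (1,1), (2,2) with probabilities 0.2, 0.2, 0.3, 0.3.
print(survival_probabilities({(1, 0): 0.2, (0, 1): 0.2, (1, 1): 0.3, (2, 2): 0.3}))
# roughly (0.667, 0.667, 0.370)
```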

Let $\mathcal{C}_1$ be the largest scc in $\vec{G}_n$. (If there is more than one such scc, we choose an arbitrary one among them as $\mathcal{C}_1$.) Let $v(\mathcal{C}_1)$ be the number of nodes in $\mathcal{C}_1$. Let $e(\mathcal{C}_1)$ be the number of edges in $\mathcal{C}_1$. Our main result is the following theorem on $\mathcal{C}_1$:

Theorem 1.2.

Suppose that $\mathbf{d}_n$ satisfies Condition 1.1. If $\nu > 1$, then

$\frac{v(\mathcal{C}_1)}{n} \to \zeta,$    (1.5)
$\frac{e(\mathcal{C}_1)}{n} \to \lambda \zeta^- \zeta^+,$    (1.6)

in expectation, in second moment and in probability, where

$\zeta := 1 - f(1 - \zeta^-, 1) - f(1, 1 - \zeta^+) + f(1 - \zeta^-, 1 - \zeta^+).$    (1.7)

If $\nu < 1$, then for all sequences $\omega_n$ with $\omega_n \to \infty$,

$\frac{v(\mathcal{C}_1)}{\omega_n} \to 0$    (1.8)

in expectation and in probability.

Remark 1.3.

Under Condition 1.1, the probability that $\vec{G}_n$ is simple is bounded away from $0$, see [2, 10]. Thus Theorem 1.2 holds for a uniform random simple digraph with degree sequence $\mathbf{d}_n$.

The two cases $\nu < 1$ and $\nu > 1$ are often referred to as the subcritical and supercritical regimes. As shown in [4], in the supercritical case, $\zeta^- > 0$ and $\zeta^+ > 0$. In other words, whp (with high probability), the size of the largest scc is bounded in the first case and linear in $n$ in the second one.
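As a concrete illustration (our own example, using the criticality parameter $\nu = \mathbb{E}[D^- D^+]/\lambda$ as reconstructed in (1.3)), note how the threshold specialises in two simple cases:

```latex
% If D^- and D^+ are independent, then E[D^- D^+] = E[D^-] E[D^+] = \lambda^2, so
\nu = \frac{\mathbb{E}[D^- D^+]}{\lambda} = \lambda,
% and the model is supercritical exactly when the mean degree exceeds one.
% For the correlated two-point distribution with mass 1/2 on (1,1) and 1/2 on (3,3),
\lambda = 2, \qquad \mathbb{E}[D^- D^+] = \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 9 = 5,
\qquad \nu = \tfrac{5}{2} > 1,
% so this sequence is supercritical as well.
```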

Equation (1.5) in Theorem 1.2 was first proved by Cooper and Frieze [5] under stronger conditions, including a bound on the maximum degree. Graf [9, Theorem 4.1] extended the existence of an scc of linear order, assuming uniform convergence of the degree distribution together with a further condition. Condition 1.1 only implies weaker control of the maximum degree, see [4, Corollary 2.4]. In the subcritical case, the results in [5, 9] only show that whp the largest scc has sublinear order, rather than the stronger bound in (1.8).

The paper is organized as follows: In Section 2, we study the probability of certain events for branching processes. In Section 3, we recall a graph exploration process defined in [4] and extend it. Section 4 studies the probability that a set of half-edges reaches a large number of other half-edges. Section 5 shows that the number of nodes which can reach, and can be reached from, many nodes is concentrated around its mean. Then in Section 6 we show that these nodes form the giant. Finally, in Section 7, we give an application of Theorem 1.2 to binomial random digraphs.

2 Branching processes

Let $\xi$ be a random variable on $\mathbb{Z}_{\ge 0}$ and let $\xi_1, \xi_2, \dots$ be iid (independent and identically distributed) copies of $\xi$. Let $g$ be the generating function of $\xi$ and $\mu := \mathbb{E}[\xi]$. Let $(Z_t)_{t \ge 0}$ be a branching process with offspring distribution $\xi$. If $Z_t > 0$ for all $t \ge 0$, then the branching process is said to survive; otherwise, it is said to become extinct. The following are well-known in branching process theory (see, e.g., [14, Theorem 3.1] and [1, Theorem I.10.3], respectively):

Lemma 2.1.

Let $q$ be the smallest nonnegative solution of $g(s) = s$ in $[0, 1]$. The survival probability is

$\mathbb{P}\{Z_t > 0 \text{ for all } t \ge 0\} = 1 - q.$    (2.1)

Moreover, $q < 1$ if and only if $\mu > 1$.
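For a concrete instance of Lemma 2.1 (our own illustration, not from the paper), consider Poisson offspring, where the fixed-point equation can be written explicitly:

```latex
% Poisson(\mu) offspring: g(s) = e^{\mu(s-1)}, so q is the smallest root in [0,1] of
q = e^{\mu(q - 1)} .
% For \mu \le 1 the only root is q = 1, so the process dies out almost surely;
% for \mu = 2 one finds numerically q \approx 0.2032, i.e. survival probability
1 - q \approx 0.7968 .
```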

Lemma 2.2.

Assume that $1 < \mu < \infty$. Then there exists a sequence $(c_t)_{t \ge 0}$ for which $c_{t+1}/c_t \to \mu$, such that $Z_t / c_t \to W$ almost surely, where $W$ is a non-negative random variable for which $\{W > 0\}$ coincides with the survival event and which is continuously distributed on $(0, \infty)$.
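The geometric growth behind Lemma 2.2 is easy to observe in simulation. The sketch below (our own illustration with Poisson(2) offspring, for which one may take $c_t = \mu^t$) prints the generation sizes and their normalised values; on survival the last column stabilises around a random limit.

```python
import math
import random

def poisson(rng, lam):
    """Sample from Poisson(lam) using Knuth's multiplication algorithm."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def galton_watson_generations(mean_offspring, generations, seed=None):
    """Simulate Z_0, ..., Z_T for a branching process with Poisson offspring."""
    rng = random.Random(seed)
    sizes = [1]
    for _ in range(generations):
        sizes.append(sum(poisson(rng, mean_offspring) for _ in range(sizes[-1])))
        if sizes[-1] == 0:          # extinction: all later generations are empty
            break
    return sizes

mu = 2.0
for t, z in enumerate(galton_watson_generations(mu, 12, seed=3)):
    print(t, z, z / mu ** t)        # Z_t / mu^t approaches a random limit W
```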

The main result of this section is the following:

Lemma 2.3.

Let $(Z_t)_{t \ge 0}$ be a branching process with offspring distribution $\xi$ with $\mu > 1$. Let

(2.2)

Then for all and as ,

(2.3)
Proof.

Let . It suffices to show that . We split this probability into

(2.4)

By Theorem 3.4 of [4], there exist constants and (both depending only on ) such that for all ,

(2.5)

Let . Let denote the event that becomes extinct, i.e., for some . If , then and we are done. Thus we can assume that . Then

(2.6)

since a branching process conditioned on becoming extinct has a finite total progeny.

For a lower bound of , note that implies . Thus,

(2.7)

Note that

(2.8)

By Theorem 6 of [12], there exists a sequence with such that for all ,

(2.9)

where is a non-negative random variable for which and which has continuous distribution on . Therefore, for all ,

(2.10)

as . Since is arbitrary, we have

(2.11)

Putting (2.11) and (2.8) into (2.7) gives the desired lower bound. ∎

Lemma 2.3 can be generalized to multiple iid branching processes as follows:

Corollary 2.4.

Let be independent branching processes with offspring distribution . Assume that . Let

(2.12)

Then for all and as ,

(2.13)
Proof.

Let . Let . By 2.3

(2.14)

and

(2.15)

3 Exploring the graph

We extend the Breadth First Search (BFS) graph exploration process of $\vec{G}_n$ defined in [4].

For , let be the set of heads/tails incident to the nodes in . Let . For , let be the set of nodes incident to . Let be a partial pairing of half-edges in . Let be the set of heads/tails which are paired in . Let . Let be the unpaired heads/tails which are incident to . Let denote the event that is part of . We will explore the graph conditional on .

We start from an arbitrary set of unpaired tails. In this process, we create random pairings of half-edges one by one and keep each half-edge in exactly one of four states: active, paired, fatal or undiscovered. Let $\mathcal{A}_t$, $\mathcal{P}_t$, $\mathcal{F}_t$ and $\mathcal{U}_t$ denote the sets of heads/tails in the four states, respectively, after the $t$-th pairing of half-edges. Initially, let

(3.1)

Then set $t := 0$ and proceed as follows (a schematic code sketch of this exploration is given after the list):

  1. Let be one of the tails which became active earliest in .

  2. Pair with a head chosen uniformly at random from . Let .

  3. If , then terminate; if , then ; and if , then where .

  4. If , terminate; otherwise, , , and go to step 1.
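The following is a minimal sketch in code of the forward exploration just described (our own simplification: it ignores the fatal state and the conditioning on a partial pairing, and simply pairs active tails with uniformly random unpaired heads level by level).

```python
import random

def bfs_exploration(d_in, d_out, start_nodes, max_depth, seed=None):
    """Explore the directed configuration model forward from the tails of start_nodes.

    Returns the number of active tails at each distance from the starting set.
    """
    rng = random.Random(seed)
    heads = [v for v, d in enumerate(d_in) for _ in range(d)]
    rng.shuffle(heads)                   # popping a shuffled list gives a uniform unpaired head
    active = [v for v in start_nodes for _ in range(d_out[v])]
    discovered = set(start_nodes)
    generation_sizes = [len(active)]
    for _ in range(max_depth):
        next_active = []
        for _tail in active:
            if not heads:
                break
            v = heads.pop()              # pair the tail with a uniformly random head
            if v not in discovered:      # a newly discovered node activates all its tails
                discovered.add(v)
                next_active.extend([v] * d_out[v])
        active = next_active
        generation_sizes.append(len(active))
        if not active:
            break
    return generation_sizes

d_in, d_out = [1, 2] * 500, [2, 1] * 500
print(bfs_exploration(d_in, d_out, start_nodes=[0], max_depth=8, seed=7))
```

In the process of [4] additional bookkeeping is needed (the paired and fatal half-edges, and the conditioning on the partial pairing); the sketch only illustrates the level-by-level structure used to define the forests below.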

Let be a forest with isolated nodes corresponding to . Given , is constructed as follows: if , then construct from by adding child nodes to the node representing , each representing a tail in ; otherwise, let . While is an unlabelled forest, its nodes correspond to the tails in . So we can assign a label paired or active to each node of .

Given half-edges and , the distance is the length of the shortest path from to which starts with the edge containing and ends with the edge containing .

If is the last step where a tail at distance from is paired, then satisfies: (i) the height is ; (ii) the set of active nodes is the -th level. We call a rooted forest incomplete if it satisfies (i)-(ii). We let be the number of paired nodes in .

3.1 Size-biased distributions

We recall some notation from [4]. The in- and out-size-biased distributions of $(D_n^-, D_n^+)$ and $(D^-, D^+)$ are defined by

(3.2)
(3.3)

Then, by (i) of 1.1, and , and by (iii) of 1.1,

(3.4)

Let , , and be the survival probabilities of the branching processes with distribution , , and respectively. Then as we have shown in [4], .
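The displays (3.2) and (3.3) are not reproduced above. As a reminder of the standard construction behind them (the notation $\hat{D}^+$ and the exact form below are our reconstruction, not the paper's), when a tail is paired with a uniformly random head, the node receiving the head is chosen with probability proportional to its in-degree, so the forward offspring law is the in-degree-biased distribution of the out-degree:

```latex
% Forward (out-)exploration offspring law, size-biased by in-degree:
\mathbb{P}\{\hat{D}^{+} = k\}
  = \sum_{j \ge 0} \frac{j \, \mathbb{P}\{(D^-, D^+) = (j, k)\}}{\lambda},
  \qquad k \ge 0 ,
% and symmetrically for the backward exploration, with the roles of j and k exchanged.
```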

3.2 Coupling with branching processes

Consider the probability distribution which satisfies for all ,

(3.5)

In [4, Section 3], it has been shown that in distribution and in expectation. In particular, by (3.4) . Also in [4], we showed that the exploration process starting from one tail can be approximated by a branching process with offspring distribution . Similarly, the extended exploration process starting from can be approximated by independent branching processes with offspring distribution .
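To make the coupling heuristic concrete, the following sketch (function name ours) builds the empirical forward offspring distribution from a bi-degree sequence: the probability that a uniformly chosen head belongs to a node of out-degree $k$ is proportional to the total in-degree of such nodes.

```python
from collections import Counter

def forward_offspring_pmf(d_in, d_out):
    """Empirical offspring law of the forward exploration: pick a head uniformly
    at random and report the out-degree of the node that owns it."""
    weight = Counter()
    for j, k in zip(d_in, d_out):
        weight[k] += j                 # a node contributes one head per unit of in-degree
    total_heads = sum(d_in)
    return {k: w / total_heads for k, w in sorted(weight.items())}

# Half of the nodes have degree (1, 1) and half have degree (3, 3).
print(forward_offspring_pmf([1, 3] * 500, [1, 3] * 500))
# {1: 0.25, 3: 0.75}: heads are three times more likely to land on high-degree nodes
```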

For , consider the distributions and defined by

(3.6)
(3.7)

where and are normalising constants.

Let be independent Galton-Watson trees with offspring distribution . Let be an incomplete forest. Let denote that for every , is a root subtree of and all paired nodes of have the same degree in .

The following lemma is a straightforward extension of [4, Lemma 5.3] and we omit its proof:

Lemma 3.1.

Let and let be a partial pairing with . Let with . For every incomplete forest with , we have

(3.8)

4 Expansion probability

Let and be the sets of heads/tails at distance and at most from respectively. From now on, let

(4.1)

Let be the expansion time of defined as

(4.2)

For brevity, we write .

Given a partial pairing of and , we consider the following two events:

(4.3)

The first lemma in this section shows that the probability that both these events happen is close to the survival probability of a branching process.

Lemma 4.1.

Assume that . Fix , and . Then uniformly for all choices of partial pairing and with , , as ,

(4.4)
Proof.

Let be the class of incomplete forests with trees, height and such that only the last level has at least nodes. Let . For and , we have . Let . Let be the sizes of the -th generation of iid branching processes with offspring distribution and let be the survival probability of each one. Since in distribution, we have .

Let . By 2.4 and 3.1, the LHS of (4.4) is

(4.5)

where we used that implies . The lower bound follows from a similar argument. ∎

Our next lemma shows that when is small, and are unlikely to be too close. We omit the proof since it follows from an easy adaptation of the proof in [4, Proposition 7.2].

Lemma 4.2.

Assume that . Fix and . Then uniformly for all choices of partial pairing and with and , we have

(4.6)

The previous lemma allows us to remove one of the two events in Lemma 4.1.

Lemma 4.3.

Assume that . Fix and . Then uniformly for all choices of partial pairing and with , , we have, as ,

(4.7)
(4.8)
Proof.

We will prove it for ; a similar argument works for . Let

(4.9)

Note that the event happens if and only if .

By 4.1, the LHS of (4.7) equals

(4.10)

Since , by [4, Lemma 2.2] we have . By 4.2, for ,

(4.11)

Let be the set of all possible partial pairings in such that happens. Then implies that . Using 4.1 again, we have

(4.12)
(4.13)

Unsurprisingly, Lemma 4.3 can be extended to a fixed number of pairs of head-sets and tail-sets:

Lemma 4.4.

Assume that . Fix and . Then uniformly for all disjoint sets of tails and disjoint sets of heads with for , we have, as ,