# Pull and Push&Pull in Random Evolving Graphs

The Push, the Pull and the Push&Pull algorithms are well-studied rumor spreading protocols. In all three, in the beginning one node of a graph is informed. In the Push setting, in every round every informed node chooses a neighbor uniformly at random and, if it is not already informed, informs it. In the Pull setting, in each round each uninformed node chooses a neighbor uniformly at random and asks it for the rumor; if the asked neighbor is informed, the asking node becomes informed as well. Push&Pull is a combination of Push and Pull: In each round, each node picks a neighbor uniformly at random. If at least one of the two knows the rumor, after this round both know the rumor. Clementi et al. have considered Push in settings where the underlying graph changes each round. In one setting they investigated, in each round the underlying graph is a newly sampled Erdős-Rényi random graph G(n,p). They show that if p ≥ 1/n then with probability 1−o(1) (as n→∞) the number of rounds needed until all nodes are informed is O(log n). Doerr and Kostrygin introduced a general framework to analyze rumor spreading algorithms; using this framework, for a > 0 and p = a/n they improved the previous results in the described setting: The expected number of rounds needed by Push was determined to be log_{2−e^{−a}}(n) + (1/(1−e^{−a}))·ln(n) + O(1); also large deviation bounds were obtained. Using their framework, we investigate Pull and Push&Pull in that setting: We prove that the expected number of rounds needed by Pull to inform all nodes is log_{2−e^{−a}}(n) + (1/a)·ln(n) + O(1). Let γ := 2(1−e^{−a}) − (1−e^{−a})²/a; we prove that the expected number of rounds needed by Push&Pull is log_{1+γ}(n) + (1/a)·ln(n) + O(1); as a byproduct, we obtain large deviation bounds, too.


## 1 Introduction

The Push, the Pull and the Push&Pull algorithms are important and well-studied rumor spreading protocols [6, 10, 3, 5, 7, 8, 9, 2]. In all three, in the beginning one node of a graph is informed. In the Push setting, in every round every informed node chooses a neighbor uniformly at random and, if it is not already informed, informs it. In the Pull setting, in each round each uninformed node chooses a neighbor uniformly at random and asks it for the rumor; if the asked neighbor is informed, the asking node becomes informed as well. Push&Pull is a combination of Push and Pull: In each round, each node picks a neighbor uniformly at random. If at least one of the two knows the rumor, then, after this round, both know the rumor.
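To make the three protocols concrete, here is a minimal simulation sketch in Python (the function and variable names are ours, not from the literature; a static graph is used for simplicity):

```python
import random

def spread(adj, protocol, start=0, rng=random):
    """Simulate one run of Push, Pull or Push&Pull on a static graph.

    adj: adjacency list, adj[v] = list of neighbors of v.
    protocol: "push", "pull" or "pushpull".
    Returns the number of rounds until all nodes are informed.
    """
    n = len(adj)
    informed = {start}
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for v in range(n):
            if not adj[v]:
                continue  # an isolated node cannot communicate this round
            partner = rng.choice(adj[v])
            if protocol in ("push", "pushpull") and v in informed:
                newly.add(partner)  # v pushes the rumor to its chosen partner
            if protocol in ("pull", "pushpull") and v not in informed and partner in informed:
                newly.add(v)        # v pulls the rumor from an informed partner
        informed |= newly
    return rounds

# Example: complete graph on 64 nodes.
K = [[u for u in range(64) if u != v] for v in range(64)]
print(spread(K, "pushpull"))
```

Replacing `K` by a freshly sampled G(n, a/n) in every iteration of the while loop yields the random evolving setting studied in this paper.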

Recently Clementi et al. have investigated Push on random evolving graphs ([1]), i.e. in a setting where the underlying graph is not fixed but changes over time. One such setting treated in [1] is the following: Each round the underlying graph is a newly (and independently of the previous graphs) sampled Erdős-Rényi random graph G(n,p). We are interested in large values of n, thus all asymptotic notation is with respect to n → ∞ if not explicitly stated differently. Among other results, in [1] it is shown that if p ≥ 1/n then whp (with high probability, i.e. with probability 1 − o(1)) the number of rounds needed by Push is O(log n). Let a > 0 and let n be a natural number. For p = a/n Doerr and Kostrygin have improved this bound ([4]). They have shown that the expected number of rounds needed is log_{2−e^{−a}}(n) + (1/(1−e^{−a}))·ln(n) + O(1); moreover, it is shown that constants A, α > 0 exist such that, if T (or short T_n) denotes the needed number of rounds, then for all r ∈ N we have P[|T − E[T]| ≥ r] ≤ A·exp(−αr). This was shown by applying a general framework developed in [4]. This framework exploits that many rumor spreading algorithms are sufficiently characterized by the probability p_k of a node to become informed in a round that starts with k informed nodes and by a bound on the covariances between the indicator variables each indicating whether an uninformed node becomes informed in that round. By bounding p_k and the mentioned covariances, the framework allows to obtain the expected number of rounds needed up to constant additive terms as well as large deviation bounds.

We use this framework to investigate Pull and Push&Pull in random evolving graphs. We show that the expected number of rounds needed by the Pull algorithm in the setting described above (i.e. each round a new G(n, a/n) is sampled independently of what happened before) is log_{2−e^{−a}}(n) + (1/a)·ln(n) + O(1). Let γ := 2(1−e^{−a}) − (1−e^{−a})²/a; then the expected number of rounds needed by Push&Pull is log_{1+γ}(n) + (1/a)·ln(n) + O(1). As a byproduct, we also obtain large deviation bounds.
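For intuition, the leading-order terms of these results can be evaluated numerically; the following sketch (our own illustration, with a = 1 and n = 10^6 chosen arbitrarily) compares the predictions for the three protocols while ignoring the O(1) terms:

```python
import math

def push_rounds(n, a):
    # log_{2-e^{-a}}(n) + ln(n)/(1-e^{-a}), ignoring the O(1) term
    return math.log(n, 2 - math.exp(-a)) + math.log(n) / (1 - math.exp(-a))

def pull_rounds(n, a):
    # log_{2-e^{-a}}(n) + ln(n)/a, ignoring the O(1) term
    return math.log(n, 2 - math.exp(-a)) + math.log(n) / a

def pushpull_rounds(n, a):
    g = 2 * (1 - math.exp(-a)) - (1 - math.exp(-a)) ** 2 / a
    # log_{1+γ}(n) + ln(n)/a, ignoring the O(1) term
    return math.log(n, 1 + g) + math.log(n) / a

n, a = 10**6, 1.0
print(round(push_rounds(n, a)), round(pull_rounds(n, a)), round(pushpull_rounds(n, a)))
```

For a = 1 the Push&Pull prediction is the smallest of the three, since 1 + γ > 2 − e^{−a} while the shrinking term ln(n)/a of Pull is retained.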

Particularly the results for Push&Pull are interesting. While both Push and Pull need logarithmic time for the last phase of the rumor spreading, when they are combined in Push&Pull, Push becomes useless in the last phase, which might be unexpected. Another interesting aspect is that Push&Pull in the investigated setting is an example where, in the first phase, when almost no nodes are informed, Push and Pull get in each other's way, in the sense that even at the very beginning they inform significantly fewer nodes than the sum of the numbers of nodes they would have informed individually; in other words, even in the beginning many nodes are informed by Push as well as by Pull.

The remainder of this paper is structured as follows: In Section 2 the needed preliminaries are covered; in particular, this includes the framework introduced in [4]. In Section 3 the result for Pull is proven, and in Section 4 the result for Push&Pull is proven.

## 2 Preliminaries

We start with stating the framework from [4]. Therefore we consider only homogeneous rumor spreading processes, characterized as follows: We consider graphs with n nodes; in the beginning one node is informed, the other n−1 nodes are uninformed. Once a node is informed it remains informed. The process is partitioned into rounds; in each round each uninformed node can become informed. Whenever a round starts with k informed nodes, we assume that there is a p_k (depending only on k and n) such that each uninformed node becomes informed in that round with probability p_k; hence p_k is called the success probability. A rumor spreading process as described is called homogeneous ([4]). By suitably bounding the success probabilities and the covariance numbers defined as follows, bounds on the rumor spreading time (see Definition 2) can be obtained.

###### Definition 1 (Covariance numbers, [4]).

For a given homogeneous process and k ∈ {1, …, n−1} let c_k be the smallest number such that, whenever a round starts with k informed nodes, for any two distinct uninformed nodes x and y the events X and Y that these nodes become informed in this round satisfy Cov(1_X, 1_Y) ≤ c_k.

###### Definition 2 (Rumor spreading times, [4]).

Consider a homogeneous rumor spreading process. For all t ∈ N denote the number of informed nodes at the end of the t-th round by I_t; let I_0 := 1. For 1 ≤ a < b ≤ n let T(a,b) (or short T) denote the time it takes to increase the number of informed nodes from a to b or more, that is, T(a,b) := min{t : I_t ≥ b} − min{t : I_t ≥ a}. We call T(1,n) the rumor spreading time of the process.

If the following exponential growth condition is fulfilled, then Theorem 4 states that there is an exponential growing phase, i.e. if few enough nodes are informed, then the number of informed nodes essentially increases by a constant factor each round and the rumor spreading time can be bounded respectively.

###### Definition 3 (Exponential growth conditions, [4]).

Let γ_n be bounded between two positive constants. Let f ∈ (0,1) and c > 0. We say that a homogeneous rumor spreading process satisfies the upper (respectively lower) exponential growth conditions in [1, fn] if for any big enough n the following properties are satisfied for any k ≤ fn.

• p_k ≥ γ_n·(k/n)·(1 − c·k/n) (respectively p_k ≤ γ_n·(k/n)·(1 + c·k/n)).

• c_k = O(k/n²).

In the case of the upper exponential growth conditions, we also require p_k = O(k/n).

###### Theorem 4 ([4]).

If a homogeneous rumor spreading process satisfies the upper (lower) exponential growth conditions in [1, fn], then there are constants A, α > 0 such that

 P[T(1,fn) ≥(≤) log_{1+γ_n}(n) +(−) r] ≤ A·exp(−αr) for all r,n ∈ N.

When the lower exponential growth conditions are satisfied, then there is also a constant c > 0 such that with high probability at most fn nodes are informed at the end of round log_{1+γ_n}(n) − c.

If the following exponential shrinking condition is fulfilled, then Theorem 6 states that there is an exponential shrinking phase, i.e. if enough nodes are informed, then the number of uninformed nodes essentially decreases by a constant factor each round and the rumor spreading time can be bounded respectively.

###### Definition 5 (Exponential shrinking conditions, [4]).

Let ρ_n be bounded between two positive constants. Let g ∈ (0,1) and c > 0, and let u := n − k denote the number of uninformed nodes at the start of a round. We say that a homogeneous rumor spreading process satisfies the upper (respectively lower) exponential shrinking conditions if for any big enough n the following property is satisfied for all u ≤ gn.

• 1 − p_{n−u} ≤ e^{−ρ_n} + c·u/n (respectively 1 − p_{n−u} ≥ e^{−ρ_n} − c·u/n).

For the upper exponential shrinking conditions, we also assume that c_{n−u} = O(1/n²).

###### Theorem 6 ([4]).

If a homogeneous rumor spreading process satisfies the upper (lower) exponential shrinking conditions, then there are constants A′, α′ > 0 such that

 E[T(n−⌊gn⌋, n)] ≤(≥) (1/ρ_n)·ln(n) + O(1),
 P[T(n−⌊gn⌋, n) ≥(≤) (1/ρ_n)·ln(n) +(−) r] ≤ A′·exp(−α′r) for all r,n ∈ N.
###### Remark 7.

It suffices to compute γ_n and ρ_n from Theorems 4 and 6 respectively up to additive O(1/ln(n)) terms.

We will use the following two well-known facts in our proofs; Fact 9 is a simple consequence of Fact 8.

###### Fact 8.

Let (a_n)_{n∈N} be a sequence of real numbers such that for each n we have |a_n| ≤ C for some constant C. Then (1 + a_n/n)^n = e^{a_n} + O(1/n).

###### Fact 9.

Let a > 0; consider an Erdős-Rényi random graph G(n, a/n). Let v be a node. The probability that v is isolated is (1 − a/n)^{n−1} = e^{−a} + O(1/n).
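Fact 9 is easy to check empirically; the following Monte Carlo sketch (parameters chosen by us) estimates the isolation probability of a fixed node:

```python
import math
import random

def isolated_probability(n, a, trials=20000, seed=0):
    """Estimate the probability that a fixed node of G(n, a/n) is isolated."""
    rng = random.Random(seed)
    p = a / n
    hits = 0
    for _ in range(trials):
        # The node is isolated iff none of its n-1 potential edges is present.
        if all(rng.random() >= p for _ in range(n - 1)):
            hits += 1
    return hits / trials

print(isolated_probability(n=500, a=1.0), math.exp(-1.0))  # both close to e^{-1}
```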

Theorem 10 considers the number of rounds Push needs in the described setting. While we do not need this theorem for the proofs of our results, we state it for completeness.

###### Theorem 10 ([4]).

Let a > 0 and let T_n be the time the push protocol needs to inform all nodes when in each round a newly sampled Erdős-Rényi random graph G(n, a/n) is the underlying graph. Then

 E[T_n] = log_{2−e^{−a}}(n) + (1/(1−e^{−a}))·ln(n) + O(1)

and there are constants A, α > 0 such that for all r ∈ N

 P[|T_n − E[T_n]| ≥ r] ≤ A·exp(−αr).

It is observed that the obtained rumor spreading time is the same (up to constant terms) as if the underlying graph were a complete graph in which message transmissions fail independently with probability e^{−a}, which, up to additive O(1/n) terms, is the probability that a vertex is isolated. We will see that this also holds for Pull. Interestingly, it does not hold for Push&Pull; we provide an explanation in Remark 15.

## 3 Pull in Random Evolving Graphs

###### Theorem 11.

Let a > 0 and assume that each round a newly sampled Erdős-Rényi random graph G(n, a/n) is the underlying graph. Then for the rumor spreading time T_n of Pull we have

 E[T_n] = log_{2−e^{−a}}(n) + (1/a)·ln(n) + O(1)

and there are constants A, α > 0 such that for all r ∈ N

 P[|T_n − E[T_n]| ≥ r] ≤ A·exp(−αr).
###### Proof.

We want to apply the framework from [4]. We can assume that at the start of each round, the edges of the random graph are not yet sampled. Before the graph is sampled, each uninformed node has the same probability of becoming informed; hence the rumor spreading algorithm is homogeneous. First we consider the covariance numbers. To do this, consider two uninformed nodes x and y and let X and Y denote the events that x or y, respectively, gets informed in this round. Note that, as the edges are not yet sampled, there is some positive correlation between X and Y: if we condition on the event that x becomes informed, then it is slightly less likely that x and the uninformed node y are neighbors, which increases the probability that y has a higher fraction of informed neighbors and therefore pulls the information with higher probability. However, the framework from [4] allows for some positive correlation. We will bound the covariance accordingly. Let k denote the number of informed nodes at the start of the round, and let E denote the event that {x,y} is an edge of the random graph for the current round; let ¬E denote the complementary event. We have

 Cov(1_X, 1_Y) = P[X∩Y] − P[X]·P[Y] = P[X]·P[Y∣X] − P[X]·P[Y] = P[X]·(P[Y∣X] − P[Y]).

Now consider P[Y∣X]. We have

 P[Y∣X] ≤ P[Y∣¬E] = P[Y∩¬E]/P[¬E] ≤ P[Y]/P[¬E] = P[Y]/(1 − a/n) = P[Y] + O(1/n).

Hence we obtain

 P[X]·(P[Y∣X] − P[Y]) ≤ P[X]·O(1/n) ≤ (k/n)·O(1/n).

Therefore the covariance conditions are fulfilled for both the exponential growth and the exponential shrinking phase.

Now we have to estimate the probability p_k for an uninformed node to become informed in a round starting with k informed nodes. If an uninformed node has a neighbor, i.e. if it is not isolated, then it becomes informed with probability k/(n−1), since by symmetry its uniformly chosen neighbor is uniform among the other n−1 nodes. However, if it is isolated, which according to Fact 9 is the case with probability e^{−a} + O(1/n), the node deterministically does not become informed in this round. Thus p_k = (1 − e^{−a} + O(1/n))·k/(n−1). Hence both the upper and the lower exponential growth conditions are fulfilled for arbitrary f ∈ (0,1) with γ_n = 1 − e^{−a}. Recall that according to Remark 7, the O(1/n) term is negligible.
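This success probability can be sanity-checked by simulating a single Pull round for a fixed uninformed node (a sketch of ours; only the edges incident to that node need to be sampled):

```python
import math
import random

def pull_round_success(n, a, k, trials=20000, seed=0):
    """Empirical probability that a fixed uninformed node becomes informed
    by Pull in one round on a fresh G(n, a/n) with k informed nodes."""
    rng = random.Random(seed)
    p = a / n
    hits = 0
    for _ in range(trials):
        # Nodes 0..k-1 are informed; sample the neighbors of the fixed node.
        neighbors = [u for u in range(n - 1) if rng.random() < p]
        # If not isolated, the node pulls from a uniformly chosen neighbor.
        if neighbors and rng.choice(neighbors) < k:
            hits += 1
    return hits / trials

n, a, k = 400, 1.0, 100
print(pull_round_success(n, a, k), (1 - math.exp(-a)) * k / (n - 1))
```

The two printed values agree up to the O(1/n) correction and Monte Carlo error.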

Theorem 4 therefore yields

 E[T_n(1,fn)] = log_{2−e^{−a}}(n) + O(1)

and that there are constants A_1, α_1 > 0 such that for all r ∈ N

 P[T_n(1,fn) ≥(≤) log_{2−e^{−a}}(n) +(−) r] ≤ A_1·exp(−α_1 r).

Next, for the exponential shrinking conditions, we consider a round that starts with u ≤ gn uninformed nodes. We have

 1 − p_{n−u} = 1 − ((n−u)/(n−1))·(1 − e^{−a} + O(1/n)) = e^{−a} + (1 − e^{−a})·u/n + O(1/n).

The upper and lower exponential shrinking conditions are fulfilled with ρ_n = a (because e^{−ρ_n} = e^{−a}) for an arbitrary g ∈ (0,1). Note that according to Remark 7, the O(1/n) term is negligible. Theorem 6 therefore yields

 E[T_n(n−⌊gn⌋, n)] = (1/a)·ln(n) + O(1)

and that there are constants A_2, α_2 > 0 such that for all r ∈ N

 P[T_n(n−⌊gn⌋, n) ≥(≤) (1/a)·ln(n) +(−) r] ≤ A_2·exp(−α_2 r).

Thus, considering the exponential growth phase and the exponential shrinking phase together, we obtain the claim. ∎

## 4 Push&Pull in Random Evolving Graphs

###### Theorem 12.

Let a > 0 and let γ := 2(1−e^{−a}) − (1−e^{−a})²/a. Assume that each round a newly sampled Erdős-Rényi random graph G(n, a/n) is the underlying graph. Then for the rumor spreading time T_n of Push&Pull we have

 E[T_n] = log_{1+γ}(n) + (1/a)·ln(n) + O(1)

and there are constants A, α > 0 such that for all r ∈ N

 P[|T_n − E[T_n]| ≥ r] ≤ A·exp(−αr).

Before we prove Theorem 12 we introduce some notation. Consider an uninformed node y at the beginning of a round that starts with k informed nodes and let μ := k/n; we will refer to this round as the current round. Let PH_y denote the event that y is pushed by an informed node in the current round. Analogously, let PL_y denote the event that y pulls the rumour in the current round from an informed node. Further set PP_y := PH_y ∪ PL_y, i.e. PP_y denotes the event that y is pushed or pulls the rumour in the current round. For j ∈ N let INF_y(j) denote the event that y has exactly j informed neighbours x_1, …, x_j. When we write INF_y(j) this implicitly defines x_1, …, x_j. Let x be an informed node; let PH_y(x) denote the event that y is pushed by x in the current round. Similarly, let PL_y(x) denote the event that y pulls the information from x in the current round. When an index is clear from the context, it may be omitted. Asymptotic notation is with respect to n → ∞ or μ → 0 respectively. We will use Lemma 13 to prove Theorem 12; it quantifies the probability that an uninformed node pulls the information in the current round conditioned on the event that it also gets pushed by an informed node.

###### Lemma 13.

Let a > 0 and assume that each round a newly sampled Erdős-Rényi random graph G(n, a/n) is the underlying graph. Consider a round that starts with k informed nodes and set μ := k/n; assume that the edges are not yet sampled. Let y be an uninformed node. Then

 P[PL_y ∣ PH_y] = (1 − e^{−a})/a + O(μ) for μ → 0.
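Lemma 13 can also be checked by Monte Carlo simulation; the sketch below (our own, with arbitrarily chosen parameters) samples, for a fixed uninformed node y, exactly the randomness that determines PH_y and PL_y:

```python
import math
import random

def pull_given_push(n, a, k, trials=40000, seed=0):
    """Estimate P[PL_y | PH_y]: the probability that the fixed uninformed
    node y also pulls the rumor, conditioned on being pushed, in one round
    on a fresh G(n, a/n) with k informed nodes (nodes 0..k-1)."""
    rng = random.Random(seed)
    p = a / n
    pushed_cnt = both_cnt = 0
    for _ in range(trials):
        informed_nbrs = 0
        pushed = False
        for u in range(k):
            if rng.random() < p:           # edge {u, y} is present
                informed_nbrs += 1
                # degree of u besides y, sampled as Binomial(n-2, p)
                d_rest = sum(rng.random() < p for _ in range(n - 2))
                if rng.random() < 1.0 / (1 + d_rest):
                    pushed = True          # u chose y among its neighbors
        if not pushed:
            continue
        pushed_cnt += 1
        # y's uninformed neighbors, sampled as Binomial(n-1-k, p)
        d_uninf = sum(rng.random() < p for _ in range(n - 1 - k))
        if rng.random() < informed_nbrs / (informed_nbrs + d_uninf):
            both_cnt += 1                  # y's uniform choice hit an informed neighbor
    return both_cnt / pushed_cnt

n, a, k = 150, 1.0, 9
print(pull_given_push(n, a, k), (1 - math.exp(-a)) / a)
```

For a = 1 the lemma predicts (1 − e^{−1})/1 ≈ 0.632 up to O(μ).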

In order to prove Lemma 13 we will use Lemma 14 that provides a closed form for a certain sum.

###### Lemma 14.

Let a > 0, n ∈ N and μ ∈ (0,1) with (1−μ)n ∈ N. Then

 ∑_{i=0}^{(1−μ)n−1} ((1−μ)n−1 choose i)·(a/n)^i·(1−a/n)^{(1−μ)n−1−i}·1/(i+1) = (1 − (1−a/n)^{(1−μ)n})/(a(1−μ)).
###### Proof.

It is

 ∑_{i=0}^{(1−μ)n−1} ((1−μ)n−1 choose i)·(a/n)^i·(1−a/n)^{(1−μ)n−1−i}·1/(i+1)
 = ∑_{i=0}^{(1−μ)n−1} ((1−μ)n−1)!/((i+1)!·((1−μ)n−1−i)!)·(a/n)^i·(1−a/n)^{(1−μ)n−1−i}
 = ∑_{i=1}^{(1−μ)n} ((1−μ)n−1)!/(i!·((1−μ)n−i)!)·(a/n)^{i−1}·(1−a/n)^{(1−μ)n−i}
 = (n/a)·( −(1−a/n)^{(1−μ)n}/((1−μ)n) + (1/((1−μ)n))·∑_{i=0}^{(1−μ)n} ((1−μ)n)!/(i!·((1−μ)n−i)!)·(a/n)^i·(1−a/n)^{(1−μ)n−i} ).

Let X be binomially distributed with parameters (1−μ)n and a/n. It is

 1 = ∑_{i=0}^{(1−μ)n} P[X=i] = ∑_{i=0}^{(1−μ)n} ((1−μ)n)!/(i!·((1−μ)n−i)!)·(a/n)^i·(1−a/n)^{(1−μ)n−i}.

Hence we arrive at

 ∑_{i=0}^{(1−μ)n−1} ((1−μ)n−1 choose i)·(a/n)^i·(1−a/n)^{(1−μ)n−1−i}·1/(i+1) = (n/a)·(−(1−a/n)^{(1−μ)n}/((1−μ)n) + 1/((1−μ)n)) = (1 − (1−a/n)^{(1−μ)n})/(a(1−μ)). ∎
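The identity of Lemma 14 is the well-known closed form for E[1/(X+1)] with X binomially distributed; it can be verified numerically (a sketch of ours with arbitrarily chosen parameters):

```python
import math

def lemma14_lhs(n, a, mu):
    """Left-hand side: sum over i = 0 .. (1-mu)n - 1."""
    m = round((1 - mu) * n)  # (1-mu)n, assumed to be an integer
    p = a / n
    return sum(math.comb(m - 1, i) * p**i * (1 - p)**(m - 1 - i) / (i + 1)
               for i in range(m))

def lemma14_rhs(n, a, mu):
    """Right-hand side: (1 - (1-a/n)^{(1-mu)n}) / (a(1-mu))."""
    m = round((1 - mu) * n)
    return (1 - (1 - a / n)**m) / (a * (1 - mu))

print(lemma14_lhs(100, 1.0, 0.2), lemma14_rhs(100, 1.0, 0.2))  # equal up to float error
```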
###### Proof of Lemma 13.

We will omit y as an index in this proof, i.e. we will write PH instead of PH_y and so on. First we verify that for all j ∈ N

 P[PH∣INF(j)]≤jP[PH∣INF(1)]. (4.1)

It is

 P[PH∣INF(j)]=P[PH(x1)∪⋯∪PH(xj)∣INF(j)].

Hence by applying the union bound

 P[PH∣INF(j)]≤jP[PH(x1)∣INF(j)]=jP[PH(x1)∣INF(1)]=jP[PH∣INF(1)]

which implies (4.1). Similarly we verify that for all j ∈ N

 P[PH∣INF(j)]≥P[PH∣INF(1)]. (4.2)

It is

 P[PH∣INF(j)] =P[PH(x1)∪(PH(x2)∪⋯∪PH(xj))∣INF(j)]≥P[PH(x1)∣INF(j)] =P[PH(x1)∣INF(1)]=P[PH∣INF(1)]

which implies (4.2). Next we verify that for all j ∈ N

 P[INF(j)∣PH]P[INF(1)∣PH]≤jP[INF(j)]P[INF(1)]. (4.3)

Using Bayes’ Theorem and (4.1) we obtain

 P[INF(j)∣PH] = P[PH∣INF(j)]·P[INF(j)]/P[PH] ≤ j·P[PH∣INF(1)]·P[INF(j)]/P[PH].

Hence, by again applying Bayes’ Theorem we arrive at

 P[INF(j)∣PH]≤jP[INF(1)∣PH]P[INF(j)]P[INF(1)].

This implies (4.3). Analogously (using (4.2) instead of (4.1)) one verifies

 P[INF(j)∣PH]P[INF(1)∣PH]≥P[INF(j)]P[INF(1)]. (4.4)

Next we show that for any j ∈ N

 P[INF(j)]P[INF(1)]=O(μj−1) for μ→0. (4.5)

We have

 P[INF(j)]/P[INF(1)] = [(μn choose j)·(a/n)^j·(1−a/n)^{μn−j}] / [(μn choose 1)·(a/n)·(1−a/n)^{μn−1}] = ((μn)!/(j!·(μn−j)!·μn))·(a/n)^{j−1}·(1−a/n)^{−j+1} = ((μn−1)/n)·((μn−2)/n)·⋯·((μn−j+1)/n)·(a^{j−1}/j!)·(1−a/n)^{−j+1} = O(μ^{j−1})

which shows (4.5). Using (4.3), (4.4) and (4.5) we can infer that for all j ∈ N

 P[INF(j)∣PH]P[INF(1)∣PH]=O(μj−1). (4.6)

Now we prove

 P[INF(1)∣PH]=1+O(μ). (4.7)

Using (4.6) we get

 P[INF(2)∣PH]+P[INF(3)∣PH]+⋯+P[INF(μn)∣PH]P[INF(1)∣PH]=O(μ).

Therefore

 P[INF(2)∣PH]+P[INF(3)∣PH]+⋯+P[INF(μn)∣PH]=P[INF(1)∣PH]⋅O(μ)

and thus

 P[INF(1)∣PH]+P[INF(2)∣PH]+⋯+P[INF(μn)∣PH]=1=P[INF(1)∣PH]⋅(1+O(μ)).

Hence

 (1+O(μ))P[INF(1)∣PH]=1

which implies (4.7). Using (4.7) we obtain

 P[PL∣PH]=P[PL∣INF(1)]+O(μ). (4.8)

Thus, to finish the proof, it suffices to show

 P[PL∣INF(1)]=1−e−aa+O(μ). (4.9)

For each i let UNF(i) denote the event that y has exactly i uninformed neighbours in the current round. Note that at the beginning of the round there is a fixed number of informed nodes, namely μn, and a fixed number of uninformed nodes, namely (1−μ)n. In particular, for any i, the events UNF(i) and INF(1) are independent, as they are determined by disjoint sets of potential edges. Hence we have

 P[PL∣INF(1)] = ∑_{i=0}^{(1−μ)n−1} P[UNF(i)]·1/(i+1) = ∑_{i=0}^{(1−μ)n−1} ((1−μ)n−1 choose i)·(a/n)^i·(1−a/n)^{(1−μ)n−1−i}·1/(i+1).

Thus, using Lemma 14, we can infer

 P[PL∣INF(1)] = (1 − (1−a/n)^{(1−μ)n})/(a(1−μ)).

Using Fact 8, this gives

 P[PL∣INF(1)] = (1 − e^{a(μ−1)})/((1−μ)a) + O(1/n) = (1 − e^{a(μ−1)})/((1−μ)a) + O(μ).

Thus, using the series representation of the exponential function at zero, we obtain

 P[PL∣INF(1)] = (1 − e^{−a})/a + O(μ)

which shows (4.9) and hence completes the proof. ∎

###### Proof of Theorem 12.

We want to use the framework from [4]. To do this, consider a round of the rumour spreading process that starts with k informed and n−k uninformed nodes; we will refer to this round as the current round. Let μ := k/n. We can assume that at the start of the round, the edges of the random graph are not yet sampled. We start with showing that the covariance conditions are fulfilled. Therefore consider two uninformed nodes x and y. As before, E denotes the event that x and y become neighbours in the current round. We have

 Cov(1PPx,1PPy)=P[PPx∩PPy]−P[PPx]P[PPy]=P[PPx](P[PPy∣PPx]−P[PPy]).

It is

 P[PPy∣PPx] ≤P[PPy∣¬E]=P[PPy∩¬E]P[¬E]≤P[PPy]P[¬E]=P[PPy]1−a/n=P[PPy]+O(1/n).

Hence

 Cov(1PPx,1PPy)=P[PPx]⋅O(1/n). (4.10)

From [4] it is known that

 P[PHx]≤k/n(1−e−a+O(1/n))

and therefore

 P[PP_x] ≤ P[PH_x] + P[PL_x] ≤ (1−e^{−a}+O(1/n))·k/n + (1+O(1/n))·k/n = (2−e^{−a}+O(1/n))·k/n.

This, together with (4.10), yields

 Cov(1PPx,1PPy)≤(2−e−a)k/n⋅O(1/n).

Hence the covariance conditions are fulfilled for the exponential growth and shrinking conditions.

For the exponential growth phase, we have to estimate the success probability p_k that an uninformed node y becomes informed in the current round that starts with k informed nodes. In the following, we write PH and PL instead of PH_y and PL_y respectively. Note that PH and PL are not independent (as the edges are not sampled yet at the beginning of the round).

It is

 P[PP]=P[PH∪PL]=P[PH]+P[PL]−P[PH∩PL]. (4.11)

To compute P[PP] we consider the three summands of (4.11) individually:

Term 1, P[PH]: From [4] it is known that

 μ(1−e^{−a})·(1 − (k+O(1))/(2n(1−e^{−a}))) ≤ P[PH] ≤ μ(1−e^{−a} + O(1/n)). (4.12)

Term 2, P[PL]: According to Fact 9, y is isolated with probability e^{−a} + O(1/n). Thus we have

 P[PL]=(1−e−a+O(1/n))μ. (4.13)

Term 3, P[PL∩PH]: We have

 P[PL∩PH]=P[PL∣PH]P[PH]. (4.14)

Thus, using Lemma 13 and (4.12) we obtain

 P[PL∩PH] = μ(1−e^{−a})²/a + O(μ²) for μ → 0.

Combining the three terms in (4.11), where asymptotic notation is with respect to μ → 0, we obtain

 P[PP] = (2(1−e^{−a}) − (1−e^{−a})²/a + O(μ))·μ = (2(1−e^{−a}) − (1−e^{−a})²/a)·(1 + O(μ))·μ.
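The leading constant 2(1−e^{−a}) − (1−e^{−a})²/a can be checked by simulating one Push&Pull round for a fixed uninformed node (our own sketch; the edge between an informed node u and y and the remaining degree of u are sampled separately so that push and pull use the same edge):

```python
import math
import random

def pushpull_round_success(n, a, k, trials=20000, seed=0):
    """Empirical probability that a fixed uninformed node y becomes informed
    (pushed or pulls) in one Push&Pull round on a fresh G(n, a/n)
    with k informed nodes (nodes 0..k-1)."""
    rng = random.Random(seed)
    p = a / n
    hits = 0
    for _ in range(trials):
        informed_nbrs = 0
        pushed = False
        for u in range(k):
            if rng.random() < p:           # edge {u, y} is present
                informed_nbrs += 1
                # degree of u besides y, sampled as Binomial(n-2, p)
                d_rest = sum(rng.random() < p for _ in range(n - 2))
                if rng.random() < 1.0 / (1 + d_rest):
                    pushed = True          # u picked y among its neighbors
        # y's uninformed neighbors, sampled as Binomial(n-1-k, p)
        d_uninf = sum(rng.random() < p for _ in range(n - 1 - k))
        deg = informed_nbrs + d_uninf
        pulled = deg > 0 and rng.random() < informed_nbrs / deg
        if pushed or pulled:
            hits += 1
    return hits / trials

n, a, k = 200, 1.0, 8
gamma = 2 * (1 - math.exp(-a)) - (1 - math.exp(-a))**2 / a
print(pushpull_round_success(n, a, k), gamma * k / n)
```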

In particular there is an a* > 0 such that

 p_k = P[PP] ≥(≤) (2(1−e^{−a}) − (1−e^{−a})²/a)·(k/n)·(1 −(+) a*·k/n).

Hence there is a constant f ∈ (0,1) such that the exponential growth conditions are fulfilled with γ_n = γ. Thus Theorem 4 yields

 E[T_n(1,fn)] = log_{1+γ}(n) + O(1)

and that there are constants A_1, α_1 > 0 such that for all r ∈ N

 P[T_n(1,fn) ≥(≤) log_{1+γ}(n) +(−) r] ≤ A_1·exp(−α_1 r).

Let g ∈ (0,1) be an arbitrary constant. To complete the proof we show that (1/a)·ln(n) + O(1) is a lower bound for the number of rounds needed to inform all remaining nodes, starting with n − ⌊gn⌋ informed nodes; then the claim follows as Pull provides a matching upper bound for the exponential shrinking phase. Consider an uninformed node y. According to Fact 9, y is isolated in the current round with probability e^{−a} + O(1/n). If y is isolated, then it cannot be informed in the current round. Therefore

 1−P[PP]≥e−a+O(1/n).

Thus the lower exponential shrinking conditions are fulfilled for ρ_n = a, and g can indeed be chosen arbitrarily. Therefore Theorem 6 yields

 E[T_n(n−⌊gn⌋, n)] ≥ (1/a)·ln(n) + O(1)

and that there are constants A_2, α_2 > 0 such that for all r ∈ N

 P[T_n(n−⌊gn⌋, n) ≤ (1/a)·ln(n) − r] ≤ A_2·exp(−α_2 r).

Together with the upper bounds that we obtain by considering Pull, this completes the proof. ∎

###### Remark 15.

It is interesting that Push&Pull, unlike Push and Pull, behaves differently on random evolving graphs than on the complete graph with message transmission success probability 1 − e^{−a} (where e^{−a} is, up to an additive O(1/n) term, the probability that a node is isolated). The reason for this is that push and pull operations get in each other's way, i.e. it has a relevant impact that some nodes get informed in the same round by a push as well as by a pull, which makes one of those operations useless. This is in contrast to the situation in the complete graph with message transmission success probability 1 − e^{−a}: There, in the beginning, Push and Pull essentially do not get in each other's way, i.e. only very few nodes get informed by Push as well as by Pull in the beginning of the rumor spreading process. The reason for this difference is that in the complete graph each node has n−1 neighbors, while in the setting of this paper the expected number of neighbors of a node is roughly a in each round; therefore here it is much more likely that a relevant fraction of edges is used by Push as well as by Pull.

Another interesting aspect is the behavior of Push&Pull in the last phase of the process: As in the investigated setting both Push and Pull need logarithmic time for the exponential shrinking phase, one might conjecture that in the Push&Pull setting Push as well as Pull contribute substantially to the last phase. The reason why this is not the case, i.e. why only Pull contributes to the last phase, is the following: Consider an uninformed node y in the last phase, i.e. when most nodes are informed already. In each round we first sample whether y is isolated, which, according to Fact 9, is the case with probability e^{−a} + O(1/n). If y is isolated it cannot be informed in that round, neither by a push nor by a pull. However, if y is not isolated it is extremely probable that it becomes informed by a pull attempt. In particular, the case that it becomes informed by a push but not simultaneously also by a pull is very unlikely. So the problem essentially is that both pull and push attempts have to clear the same hurdle, i.e. wait for a round where y is not isolated; but after taking this hurdle it is extremely unlikely that a pull does not succeed while a push attempt does.

## References

• [1] A. Clementi, P. Crescenzi, C. Doerr, P. Fraigniaud, F. Pasquale, and R. Silvestri. Rumor spreading in random evolving graphs. Random Structures & Algorithms, 48(2):290–312, 2016.
• [2] S. Daum, F. Kuhn, and Y. Maus. Rumor Spreading with Bounded In-Degree. In Structural Information and Communication Complexity - 23rd International Colloquium, SIROCCO 2016, Helsinki, Finland, July 19-21, 2016, Revised Selected Papers, pages 323–339, 2016.
• [3] A. Demers, D. Greene, C. Hauser, W. Irish, J. Larson, S. Shenker, H. Sturgis, D. Swinehart, and D. Terry. Epidemic algorithms for replicated database maintenance. In Proceedings of the sixth annual ACM Symposium on Principles of distributed computing, pages 1–12. ACM, 1987.
• [4] B. Doerr and A. Kostrygin. Randomized Rumor Spreading Revisited. In LIPIcs-Leibniz International Proceedings in Informatics, volume 80. Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2017.
• [5] U. Feige, D. Peleg, P. Raghavan, and E. Upfal. Randomized broadcast in networks. Random Structures & Algorithms, 1(4):447–460, 1990.
• [6] A. M. Frieze and G. R. Grimmett. The shortest-path problem for graphs with random arc-lengths. Discrete Applied Mathematics, 10(1):57–77, 1985.
• [7] G. Giakkoupis. Tight bounds for rumor spreading in graphs of a given conductance. In 28th International Symposium on Theoretical Aspects of Computer Science, STACS 2011, March 10-12, 2011, Dortmund, Germany, pages 57–68, 2011.
• [8] G. Giakkoupis. Tight Bounds for Rumor Spreading with Vertex Expansion. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 801–815, 2014.
• [9] K. Panagiotou, X. Pérez-Giménez, T. Sauerwald, and H. Sun. Randomized Rumour Spreading: The Effect of the Network Topology. Combinatorics, Probability and Computing, 24(2):457–479, 2015.
• [10] B. Pittel. On Spreading a Rumor. SIAM J. Appl. Math., 47(1):213–223, Mar. 1987.