# Random walks on graphs: new bounds on hitting, meeting, coalescing and returning

We prove new results on lazy random walks on finite graphs. To start, we obtain new estimates on return probabilities $P^t(x,x)$ and the maximum expected hitting time $t_{\mathrm{hit}}$, both in terms of the relaxation time. We also prove a discrete-time version of the first-named author's "Meeting time lemma", which bounds the probability that a random walk hits a deterministic trajectory in terms of hitting times of static vertices. The meeting time result is then used to bound the expected full coalescence time of multiple random walks over a graph. This last theorem is a discrete-time version of a result by the first-named author, which had been previously conjectured by Aldous and Fill. Our bounds improve on recent results by Lyons and Oveis Gharan; Kanade et al.; and (in certain regimes) Cooper et al.

03/04/2019


## 1 Introduction

Random walks on graphs and other reversible Markov chains are fundamental mathematical objects, with applications in algorithms and beyond. In this paper, we derive new bounds on return probabilities, hitting times, and the probability of meeting a deterministic trajectory for these processes. We also bound the expected full coalescence time of multiple random walks on the same graph.

We state our main results in terms of lazy random walks on graphs, although some of them hold for more general chains (cf. Section 2). Throughout the paper, $G = (V,E)$ is always a finite, unoriented, connected graph with vertex set $V$ and edge set $E$, with no loops or parallel edges and $n$ vertices. The degree of $x \in V$ is denoted by $d_x$. Minimum, maximum, and average degrees in $G$ are denoted by $d_{\min}$, $d_{\max}$ and $d_{\mathrm{avg}}$, respectively. The parameter $t$ always denotes discrete time and takes values in $\mathbb{N} = \{0,1,2,\dots\}$.

###### Definition 1

Lazy random walk (LRW) on $G$ is the Markov chain on $V$ with transition matrix $P$ given by:

$$P(x,y) := \begin{cases} \dfrac{1}{2}, & y = x;\\[4pt] \dfrac{1}{2\,d_x}, & xy \in E;\\[4pt] 0, & \text{otherwise}\end{cases} \qquad (x,y \in V).$$

We let $(X_t)_{t\ge 0}$ denote trajectories of the chain and $\mathbb{P}_x$, $\mathbb{E}_x$, $\mathbb{P}_\mu$, $\mathbb{E}_\mu$ denote probabilities and expectations when $X_0 = x$ or $X_0$ has law $\mu$ (respectively). Finally, we denote by $\pi$ the stationary distribution:

$$\pi(x) := \frac{d_x}{d_{\mathrm{avg}}\,n} \qquad (x \in V).$$
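Definition 1 and the formula for $\pi$ are easy to check numerically. The sketch below is our own illustration (the helper names are ours, not the paper's): it builds $P$ for a small graph and verifies that $\pi$ is indeed stationary.

```python
import numpy as np

def lazy_walk_matrix(adj):
    """Transition matrix of Definition 1: P(x,x) = 1/2 and
    P(x,y) = 1/(2 d_x) whenever xy is an edge."""
    deg = adj.sum(axis=1)
    return adj / (2.0 * deg[:, None]) + np.eye(adj.shape[0]) / 2.0

def stationary(adj):
    """pi(x) = d_x / (d_avg * n), i.e. degree over total degree."""
    deg = adj.sum(axis=1)
    return deg / deg.sum()

# A path on four vertices (degrees 1, 2, 2, 1).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = lazy_walk_matrix(A)
pi = stationary(A)
print(np.allclose(P.sum(axis=1), 1.0))  # rows are stochastic
print(np.allclose(pi @ P, pi))          # pi P = pi
```

Stationarity follows from reversibility: $\pi(x)P(x,y) = 1/(2 d_{\mathrm{avg}} n)$ is symmetric in $x$ and $y$.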

Recall that the hitting time of $A \subset V$ is

$$\tau_A := \inf\{t \ge 0 : X_t \in A\}.$$

We write $\tau_x$ for $\tau_{\{x\}}$. The maximum expected hitting time of LRW on $G$,

$$t_{\mathrm{hit}} := \max_{y,x\in V}\mathbb{E}_y[\tau_x],$$

is a natural parameter for a Markov chain, with connections to electrical network theory, cover times, mixing times and many other aspects of such processes. See [5] (especially Chapter 10) for more information.
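For small graphs, $t_{\mathrm{hit}}$ can be computed exactly: for each target $x$, the vector of expected hitting times solves a linear system. A minimal sketch (our own code, with a hypothetical function name):

```python
import numpy as np

def max_hitting_time(P):
    """t_hit = max_{y,x} E_y[tau_x] for a finite chain with matrix P.

    For each target x, the vector h(y) = E_y[tau_x] solves
    (I - Q) h = 1, where Q restricts P to the states other than x."""
    n = P.shape[0]
    worst = 0.0
    for x in range(n):
        keep = [y for y in range(n) if y != x]
        Q = P[np.ix_(keep, keep)]
        h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        worst = max(worst, h.max())
    return worst

# Lazy walk on a single edge: each step crosses with probability 1/2,
# so the hitting time is geometric with mean 2.
P = np.full((2, 2), 0.5)
print(max_hitting_time(P))  # → 2.0
```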

### 1.1 Hitting and returning

Our first two results estimate $t_{\mathrm{hit}}$ and return probabilities in terms of the relaxation time $t_{\mathrm{rel}}$ of the walk (cf. Section 2). These quantities are related by the formula [5, Proposition 10.26]:

$$\pi(x)\,\mathbb{E}_\pi[\tau_x] = \sum_{s=0}^{+\infty}\big(P^s(x,x)-\pi(x)\big). \tag{1}$$
###### Theorem 1 (Proof in Section 3)

Under Definition 1,

$$t_{\mathrm{hit}} \le 20\,\frac{d_{\mathrm{avg}}}{d_{\min}}\,n\,\sqrt{t_{\mathrm{rel}}+1}.$$
###### Theorem 2 (Proof in Section 3)

Under Definition 1, for any $x \in V$ and $t \ge 0$:

$$P^t(x,x) - \pi(x) \le 10\,\frac{d_x}{d_{\min}}\left(\frac{1}{\sqrt{t+1}} \wedge \frac{\sqrt{t_{\mathrm{rel}}+1}}{t+1}\right).$$

These theorems are specific to random walks on graphs. As we explain in Section 2.1, the following bounds are essentially best possible for general reversible chains with nonnegative spectrum:

$$t_{\mathrm{hit}} \le 2\,\max_{x\in V}\left(\frac{1-\pi(x)}{\pi(x)}\right)t_{\mathrm{rel}} \quad\text{(which is of order } \tfrac{d_{\mathrm{avg}}\,n}{d_{\min}}\,t_{\mathrm{rel}} \text{ for a graph);} \tag{2}$$

$$P^t(x,x)-\pi(x) \le \left(1-t_{\mathrm{rel}}^{-1}\right)^{t}\big(1-\pi(x)\big). \tag{3}$$

Kanade et al. [4] obtained the following improvement to (2) in the graph setting:

$$t_{\mathrm{hit}} \le K\,\min\left\{\left(\frac{d_{\mathrm{avg}}}{d_{\min}}\right)^{2}\sqrt{t_{\mathrm{rel}}}\,\log t_{\mathrm{rel}},\ \left(\frac{d_{\mathrm{avg}}}{d_{\min}}\right)^{5/2} t_{\mathrm{mix}}\right\}\,n, \tag{4}$$

where $t_{\mathrm{mix}}$ is the mixing time [5, Theorem 12.5] and $K > 0$ is universal. Theorem 1 has better dependence on $t_{\mathrm{rel}}$ and on the ratio $d_{\mathrm{avg}}/d_{\min}$. Examples in Section 2.2 show that our bound is sharp in both of these parameters.

The best previous result for return probabilities on graphs is due to Lyons and Oveis Gharan [6, Theorem 4.9] (that result is stated for regular graphs, but the authors explain right after the proof that it can be extended to all graphs in the form presented here):

$$P^t(x,x) - \pi(x) < 13\,\frac{d_{\max}}{d_{\min}}\,\frac{1}{\sqrt{t+1}}. \tag{5}$$

Theorem 2 improves the constant, replaces $d_{\max}$ with $d_x$, and improves both (5) and (3) when $t \gg t_{\mathrm{rel}}$.

###### Remark 1

Anna Ben-Hamou (personal communication) improved our original constants in Theorems 1 and 2. We thank her for her permission to use these improvements.

### 1.2 Results on meeting and coalescing

Our next result bounds the probability that LRW hits a moving target in terms of $t_{\mathrm{hit}}$.

###### Theorem 3 (Proof in Section 4)

Under Definition 1, for any $t \ge 0$ and any sequence $h_0, h_1, \dots, h_t \in V$,

$$\mathbb{P}_\pi\big(\forall\,0\le s\le t:\ X_s \ne h_s\big) \le \left(1-\frac{1}{t_{\mathrm{hit}}}\right)^{t}.$$

This is a discrete-time version of the “Meeting Time Lemma” in [7]. Notice that this Theorem fails for non-lazy random walk on a bipartite graph: for instance, on a single edge $\{u,v\}$, the non-lazy walk started from $v$ never meets the alternating target $h_s = u, v, u, v, \dots$, so the avoidance probability does not decay below $1/2$. On the other hand, Theorem 3 can be extended to all reversible Markov chains with nonnegative spectrum (cf. Section 2 below).

The original application of the continuous-time version of Theorem 3 was to the study of coalescing random walks. Following the argument in [7], we may obtain a discrete-time version of the main result of that paper, which had been previously conjectured by Aldous and Fill [1].

###### Theorem 4 (Definitions in Section 5, proof sketch in Appendix A.2)

Under Definition 1, consider $n$ coalescing random walks, one started from each vertex, that evolve according to LRW on $G$. Let $t_{\mathrm{coal}}$ denote the expected time until these walks have all coalesced into one. Then $t_{\mathrm{coal}} \le K\,t_{\mathrm{hit}}$ with $K > 0$ a universal constant.

Kanade et al. [4, Theorem 1.3] obtained a partial form of Theorem 4 in which the constant $K$ grows slowly with the ratio $d_{\mathrm{avg}}/d_{\min}$. In particular, their result implies Theorem 4 when $G$ is regular. They also obtained sharp results relating coalescing and meeting times in general [4, Theorems 1.1 and 1.2] that do not follow from our methods.

We finish the introduction with additional comments on our results. Theorem 1 gives a bound on $t_{\mathrm{hit}}$ that may be viewed through the lens of electrical network theory. The relation between effective resistances and commute times [5, Chapter 9] gives the following immediate corollary.

###### Corollary 1 (Proof omitted)

Let $G$ be a finite connected graph. Place unit resistances on each edge of $G$. Then the effective resistance between any two vertices is bounded by $K\,\sqrt{t_{\mathrm{rel}}+1}/d_{\min}$, where $K > 0$ is universal.

Intriguingly, we do not know of a direct proof of this result via the Dirichlet form or a network reduction.
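Effective resistances themselves are straightforward to compute via the Moore–Penrose pseudoinverse of the graph Laplacian, which is one way to experiment with Corollary 1 on examples. A small sketch (our own illustration; the function name is ours):

```python
import numpy as np

def effective_resistance(adj, x, y):
    """R_eff(x <-> y) for unit resistances on the edges, computed as
    L^+[x,x] + L^+[y,y] - 2 L^+[x,y], with L the graph Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    return Lp[x, x] + Lp[y, y] - 2.0 * Lp[x, y]

# Two unit resistors in series: a path on three vertices.
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(effective_resistance(path, 0, 2))  # series rule gives 2

# Triangle: one edge in parallel with two in series gives 2/3.
tri = np.ones((3, 3)) - np.eye(3)
print(round(effective_resistance(tri, 0, 1), 6))  # → 0.666667
```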

Theorem 2 on return probabilities may be used to tighten results on random walk intersections by Peres et al. [8]. Let $t_I$ be the maximum expected intersection time of two independent lazy random walks over $G$ from worst-case initial states. Proposition 1.8 in [8] gives the following bound for regular graphs:

$$t_I \le K\,\sqrt{n}\,\min\big\{t_{\mathrm{unif}},\ t_{\mathrm{rel}}\log(t_{\mathrm{rel}}+1)\big\}^{3/4}, \quad\text{with } K>0 \text{ universal},$$

where $t_{\mathrm{unif}}$ is the uniform mixing time. With Anna Ben-Hamou, we have used Theorem 2 to obtain an improved bound that holds for all graphs:

$$t_I \le K\,\sqrt{\frac{d_{\mathrm{avg}}\,n}{d_{\min}}}\,\big(t_{\mathrm{rel}}\big)^{3/4}, \quad\text{with } K>0 \text{ universal},$$

which can be shown to be sharp. This result will appear in the full version of [2].

Finally, the combination of Theorems 4 and 1 leads to the following corollary.

###### Corollary 2 (Proof omitted)

Under Definition 1, let $t_{\mathrm{coal}}$ be as in Theorem 4. Then:

$$t_{\mathrm{coal}} \le K\,\frac{d_{\mathrm{avg}}}{d_{\min}}\,n\,\sqrt{t_{\mathrm{rel}}}, \quad\text{with } K>0 \text{ universal}.$$

This may be compared to [3, Theorem 1] by Cooper et al.:

$$t_{\mathrm{coal}} \le K\,t_{\mathrm{rel}}\log^4 n + K\left(\frac{d_{\mathrm{avg}}^2}{\tfrac{1}{n}\sum_{v\in V} d_v^2}\right) n\,t_{\mathrm{rel}}.$$

Our bound is better when $G$ is regular or (more generally) when

$$\frac{1}{n}\sum_{v\in V} d_v^2 \ll d_{\mathrm{avg}}\,d_{\min}\,\sqrt{t_{\mathrm{rel}}}.$$

On the other hand, Cooper et al.'s bound is stronger when degrees are not balanced (e.g. in a star graph). See [3, Section 5] for even stronger bounds when degrees are imbalanced.

## 2 Preliminaries

Before we prove our results, we discuss basic properties of LRW. The books [1, 5] contain much more material on reversible chains and random walks on graphs.

LRW as in Definition 1 is an irreducible chain because $G$ is connected. LRW is also reversible: $\pi(x)\,P(x,y) = \pi(y)\,P(y,x)$ for all $x,y \in V$. This means that $P$, as an operator over $\mathbb{R}^V$, is self-adjoint with respect to the inner product:

$$\langle f,g\rangle := \sum_{x\in V}\pi(x)\,f(x)\,g(x) \qquad (f,g \in \mathbb{R}^V).$$

So $P$ has real spectrum

$$\lambda_1 = 1 > \lambda_2 \ge \lambda_3 \ge \dots \ge \lambda_n \ge -1,$$

and the fact that $P(x,x) \ge 1/2$ for all $x \in V$ implies $\lambda_n \ge 0$. That is, LRW on a finite connected graph is a reversible and irreducible finite Markov chain with nonnegative spectrum. In particular, $P$ is positive semidefinite and has a positive semidefinite square root $\sqrt{P}$. The relaxation time of such a chain is $t_{\mathrm{rel}} := (1-\lambda_2)^{-1}$.

Let $\mathbf{1}$ denote the function that is equal to $1$ everywhere. To each $\lambda_i$ we may associate an eigenfunction $\Psi_i$ (with $\Psi_1 = \mathbf{1}$) so that $\Psi_1, \dots, \Psi_n$ is an orthonormal basis of $\mathbb{R}^V$. We use $\|\cdot\|$ to denote the norm corresponding to the inner product $\langle\cdot,\cdot\rangle$.
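These spectral facts can be checked numerically by symmetrizing $P$: the matrix $S := D_\pi^{1/2} P D_\pi^{-1/2}$ is symmetric (by reversibility) and has the same eigenvalues as $P$. A sketch, with names of our own choosing:

```python
import numpy as np

def lazy_spectrum(adj):
    """Eigenvalues of the lazy walk on a graph, sorted decreasingly,
    obtained from the symmetrization S = D^{1/2} P D^{-1/2}."""
    deg = adj.sum(axis=1)
    P = adj / (2.0 * deg[:, None]) + np.eye(len(deg)) / 2.0
    pi = deg / deg.sum()
    S = np.sqrt(pi)[:, None] * P / np.sqrt(pi)[None, :]
    return np.sort(np.linalg.eigvalsh(S))[::-1]

# A small connected graph: a triangle with a pendant vertex.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lam = lazy_spectrum(A)
t_rel = 1.0 / (1.0 - lam[1])
print(abs(lam[0] - 1.0) < 1e-9)   # lambda_1 = 1
print(lam[-1] >= -1e-9)           # laziness forces a nonnegative spectrum
```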

### 2.1 General upper bounds for hitting and returning

Using the above notation, we explain why the bounds on hitting times (2) and return probabilities (3) are nearly optimal for a reversible chain with nonnegative spectrum.

Set $\gamma := t_{\mathrm{rel}}^{-1} = 1-\lambda_2$. Let $\Pi$ be the matrix with entries $\Pi(x,y) := \pi(y)$ ($x,y \in V$), and let $I$ denote the identity matrix. The Markov chain that, at each step, stays put with probability $1-\gamma$ and jumps to a point picked from $\pi$ with probability $\gamma$, has transition matrix:

$$P_* := (1-\gamma)\,I + \gamma\,\Pi.$$

$P_*$ has the same eigenbasis as $\Pi$, with eigenvalues $1$ (with multiplicity $1$) and $1-\gamma = \lambda_2$ (multiplicity $n-1$). Thus $P$ and $P_*$ have nonnegative spectrum, the same relaxation time, and moreover $\lambda_i \le 1-\gamma$ for all $2 \le i \le n$. In particular:

$$\forall x\in V,\ \forall t\ge 0:\quad P^t(x,x)-\pi(x) \le P_*^t(x,x)-\pi(x) = \left(1-t_{\mathrm{rel}}^{-1}\right)^{t}\big(1-\pi(x)\big)$$

and (with $\mathbb{E}_{*,\pi}$ denoting expectation for the chain $P_*$)

$$\forall x\in V:\quad \pi(x)\,\mathbb{E}_\pi[\tau_x] \le \pi(x)\,\mathbb{E}_{*,\pi}[\tau_x] = \big(1-\pi(x)\big)\,t_{\mathrm{rel}}.$$

In particular, (3) is exact for $P_*$, whereas (2) is sharp up to a constant factor.
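The computations for $P_*$ are easy to confirm numerically. The sketch below (our own check, with an arbitrary choice of $\pi$ and $\gamma$) verifies that the return probabilities of $P_*$ match $(1-\gamma)^t(1-\pi(x))$:

```python
import numpy as np

# P_* = (1 - gamma) I + gamma * Pi: stay put with probability 1 - gamma,
# otherwise jump to a fresh pi-distributed state.
pi = np.array([0.1, 0.2, 0.3, 0.4])
gamma = 0.25          # so t_rel = 1 / gamma = 4
n = len(pi)
Pstar = (1.0 - gamma) * np.eye(n) + gamma * np.tile(pi, (n, 1))

t = 7
Pt = np.linalg.matrix_power(Pstar, t)
for x in range(n):
    lhs = Pt[x, x] - pi[x]
    rhs = (1.0 - gamma) ** t * (1.0 - pi[x])
    assert abs(lhs - rhs) < 1e-12   # equality in (3) for the chain P_*
print("equality in (3) verified")
```

The identity holds exactly because $\Pi$ is idempotent, so $P_*^t = (1-\gamma)^t (I - \Pi) + \Pi$.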

### 2.2 Lower bounds for hitting on graphs

We showed above that, in the graph setting, our Theorem 1 improves upon the best-possible results for chains with nonnegative spectrum. The next two examples show that the dependence of Theorem 1 on $t_{\mathrm{rel}}$ and on the ratio $d_{\mathrm{avg}}/d_{\min}$ is best possible.

1. A stretched expander. Take a $d$-regular expander on $m$ vertices and subdivide each edge into a path of length $\ell$. The resulting graph has $n \asymp m\,d\,\ell$ vertices, relaxation time $t_{\mathrm{rel}} \asymp \ell^2$, and $t_{\mathrm{hit}} \asymp n\,\ell \asymp n\,\sqrt{t_{\mathrm{rel}}}$.

2. A lollipop. Take a $d$-regular graph $H$ on $m$ vertices which has constant relaxation time. Connect a path of length $m$ to one of the vertices of $H$. The resulting graph has $d_{\mathrm{avg}}/d_{\min} \asymp d$, and maximum hitting time of order $(d_{\mathrm{avg}}/d_{\min})\,n\,\sqrt{t_{\mathrm{rel}}}$.

## 3 Return probabilities and hitting times

In this section we prove Theorem 1 on $t_{\mathrm{hit}}$ and Theorem 2 on $P^t(x,x)$. We will need two lemmas on the “Green's function”:

$$g_t(x,x) := \sum_{s=0}^{t} P^s(x,x). \tag{6}$$
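The Green's function (6) is cheap to compute by propagating a point mass through the chain, which is convenient for experimenting with the lemmas below. A minimal sketch (our own code):

```python
import numpy as np

def green(P, x, t):
    """g_t(x,x) = sum_{s=0}^t P^s(x,x), computed by pushing the
    point mass at x through the chain one step at a time."""
    v = np.zeros(P.shape[0])
    v[x] = 1.0
    total = 0.0
    for _ in range(t + 1):
        total += v[x]   # current term P^s(x,x)
        v = v @ P       # advance one step
    return total

# Lazy walk on a 6-cycle.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
P = A / (2.0 * A.sum(axis=1)[:, None]) + np.eye(n) / 2.0
print(green(P, 0, 0))                    # → 1.0  (only the s = 0 term)
print(green(P, 0, 10) > green(P, 0, 5))  # partial sums increase
```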
###### Lemma 1 (Proof in Section 3.1)

Consider a reversible and irreducible finite Markov chain with nonnegative spectrum, stationary distribution $\pi$ and state space $V$. Fix $x \in V$ and $t \ge 0$. Then:

$$0 \le P^t(x,x)-\pi(x) \le \frac{g_t(x,x)-(t+1)\,\pi(x)}{t+1}, \qquad g_t(x,x)-(t+1)\,\pi(x) \le \frac{e}{e-1}\Big(g_{\lceil t_{\mathrm{rel}}\rceil-1}(x,x)-\lceil t_{\mathrm{rel}}\rceil\,\pi(x)\Big).$$
###### Lemma 2 (Proof in Section 3.2)

Under Definition 1, for $x \in V$ and $0 \le t \le \lceil t_{\mathrm{rel}}\rceil$:

$$\frac{g_t(x,x)}{\pi(x)} \le 6\,\frac{d_{\mathrm{avg}}\,n}{d_{\min}}\,\sqrt{t+1}.$$

Let us see how Theorem 1 and Theorem 2 follow from these two lemmas.

Proof: [of Theorem 1] For $x \in V$, formula (1) may be rewritten as:

$$\mathbb{E}_\pi[\tau_x] = \lim_{t\to+\infty}\frac{g_t(x,x)-(t+1)\,\pi(x)}{\pi(x)}.$$

Lemma 1, Lemma 2 and the formula give:

$$\mathbb{E}_\pi[\tau_x] \le \frac{e}{e-1}\,\frac{g_{\lceil t_{\mathrm{rel}}\rceil-1}(x,x)}{\pi(x)} \le \left(\frac{6e}{e-1}\right)\frac{d_{\mathrm{avg}}\,n}{d_{\min}}\,\sqrt{t_{\mathrm{rel}}+1}.$$

To finish, we bound $\frac{6e}{e-1} \le 10$ and use [5, Lemma 10.2], which gives $t_{\mathrm{hit}} \le 2\,\max_{x\in V}\mathbb{E}_\pi[\tau_x]$.

Proof: [of Theorem 2] Using the same results as in the previous proof,

$$(t+1)\big(P^t(x,x)-\pi(x)\big) \le g_t(x,x)-(t+1)\,\pi(x) \le \left(\frac{6e}{e-1}\right)\frac{d_x}{d_{\min}}\,\sqrt{(t+1)\wedge(t_{\mathrm{rel}}+1)}.$$

The result follows from estimating the constant $\frac{6e}{e-1}$ by $10$.

### 3.1 Sum up to the relaxation time and stop

Proof: [of Lemma 1] We use the notation in Section 2. Note that there exists $f \in \mathbb{R}^V$ (namely, $f := \mathbb{1}_{\{x\}}/\sqrt{\pi(x)}$) such that, for all $t \ge 0$:

$$P^t(x,x)-\pi(x) = \langle f, P^t f\rangle - \langle f,\mathbf{1}\rangle^2 = \sum_{i=2}^{n}\lambda_i^t\,\langle\Psi_i,f\rangle^2. \tag{7}$$

Since $0 \le \lambda_i < 1$ for each $i \ge 2$, $P^t(x,x)-\pi(x)$ decreases with $t$. In particular,

$$0 \le P^t(x,x)-\pi(x) \le \frac{\sum_{s=0}^{t}\big(P^s(x,x)-\pi(x)\big)}{t+1} = \frac{g_t(x,x)-(t+1)\,\pi(x)}{t+1}.$$

Now note that, by (7),

$$g_t(x,x)-(t+1)\,\pi(x) = \sum_{i=2}^{n}\left(\frac{1-\lambda_i^{t+1}}{1-\lambda_i}\right)\langle\Psi_i,f\rangle^2 \le \sum_{i=2}^{n}\left(\frac{1}{1-\lambda_i}\right)\langle\Psi_i,f\rangle^2,$$

whereas for $t+1 = \lceil t_{\mathrm{rel}}\rceil$ we have $\lambda_i^{t+1} \le \lambda_2^{\lceil t_{\mathrm{rel}}\rceil} \le e^{-1}$, and

$$g_{\lceil t_{\mathrm{rel}}\rceil-1}(x,x) - \lceil t_{\mathrm{rel}}\rceil\,\pi(x) \ge \sum_{i=2}^{n}\left(\frac{1-e^{-1}}{1-\lambda_i}\right)\langle\Psi_i,f\rangle^2.$$

The proof finishes by combining the last two displays.

### 3.2 Estimate on Green’s function

We prove Lemma 2 by adapting [1, Proposition 6.16] to non-regular graphs. We need two Propositions that we prove in Appendix A.1 via electrical network theory.

###### Proposition 1 (Proof in Appendix A.1)

Under Definition 1,

$$t_{\mathrm{rel}} \le \max_{x,y\in V}\big(\mathbb{E}_x[\tau_y]+\mathbb{E}_y[\tau_x]\big) \le 6\left(\frac{d_{\mathrm{avg}}}{d_{\min}}\right)n^2.$$
###### Proposition 2 (Proof in Appendix A.1)

Under Definition 1, let $A \subset V$ be nonempty. Take $x \in V\setminus A$ and consider the number of returns to $x$ up to time $\tau_A$:

$$g_{\tau_A-1}(x,x) := \mathbb{E}_x\!\left[\sum_{s=0}^{\tau_A-1}\mathbb{1}\{X_s=x\}\right].$$

Then:

$$\frac{g_{\tau_A-1}(x,x)}{\pi(x)} \le 9\left(\frac{d_{\mathrm{avg}}\,n}{d_{\min}}\right)^2\big(1-\pi(A)\big).$$

Proof: [of Lemma 2] Fix $\alpha > 1$ and define the set:

$$A_\alpha := \{y \in V : g_t(y,x) \le \alpha\,\pi(x)\,(t+1)\},$$

where $g_t(y,x) := \sum_{s=0}^{t} P^s(y,x)$.

We bound the measure of $V\setminus A_\alpha$ via Markov's inequality and reversibility:

$$1-\pi(A_\alpha) \le \sum_{y\in V}\frac{\pi(y)\,g_t(y,x)}{\alpha\,(t+1)\,\pi(x)} = \sum_{y\in V}\frac{g_t(x,y)}{\alpha\,(t+1)} = \frac{1}{\alpha}.$$

In particular, $A_\alpha \ne \emptyset$ when $\alpha > 1$. We now claim that:

$$\frac{g_t(x,x)}{\pi(x)} \le 9\left(\frac{d_{\mathrm{avg}}\,n}{d_{\min}}\right)^2\frac{1}{\alpha} + \alpha\,(t+1). \tag{8}$$

In fact, assuming this we obtain the Lemma by setting

$$\alpha = \frac{3\,d_{\mathrm{avg}}\,n}{d_{\min}\,\sqrt{t+1}},$$

which is $>1$ when $t \le \lceil t_{\mathrm{rel}}\rceil$ (cf. Proposition 1).

To prove (8), we may assume $x \notin A_\alpha$ (otherwise the definition of $A_\alpha$ gives (8) directly). Note that the number of returns to $x$ up to time $t$ is at most the number of returns up to time $\tau_{A_\alpha}-1$ plus those occurring at times $\tau_{A_\alpha} \le s \le t$. The strong Markov property gives:

$$\frac{g_t(x,x)}{\pi(x)} \le \frac{g_{\tau_{A_\alpha}-1}(x,x)}{\pi(x)} + \mathbb{E}_x\!\left[\frac{g_t(X_{\tau_{A_\alpha}},x)}{\pi(x)}\right],$$

and inequality (8) follows from Proposition 2 (applied to the first term in the RHS, together with $1-\pi(A_\alpha) \le 1/\alpha$) and the definition of $A_\alpha$ (applied to the second term in the RHS).

## 4 The Meeting Time Theorem in general form

We now present a more general version of the Meeting Time Theorem (Theorem 3 above). We use the same notation and definitions for trajectories, hitting times, etc. that we defined for LRW. We also take the material and notation from Section 2 for granted.

###### Theorem 5

Consider a reversible and irreducible finite Markov chain with nonnegative spectrum which is defined over a finite set $V$ and has stationary measure $\pi$. Then for any $t \ge 0$ and any choice of $h_0, h_1, \dots, h_t \in V$:

$$\mathbb{P}_\pi\big(\forall\,0\le s\le t:\ X_s \ne h_s\big) \le \left(1-\frac{1}{t_{\mathrm{hit}}}\right)^{t}.$$

Proof: Given a linear operator $A$ over $\mathbb{R}^V$, we denote its operator norm by:

$$\|A\|_{\mathrm{op}} := \sup\{\|Af\| : f\in\mathbb{R}^V,\ \|f\|=1\} = \sup\{\langle g, Af\rangle : f,g\in\mathbb{R}^V,\ \|f\|=\|g\|=1\}.$$

For $h \in V$, let $D_h$ be the diagonal matrix that has $1$'s at the entries $(x,x)$ with $x \ne h$ and a zero at the entry $(h,h)$. Each $D_h$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle$. Defining

$$M_t := D_{h_0}\,P\,D_{h_1}\,P\,D_{h_2}\cdots D_{h_{t-1}}\,P\,D_{h_t},$$

we see that

$$\mathbb{P}_\pi\big(\forall\,0\le s\le t:\ X_s\ne h_s\big) = \langle \mathbf{1}, M_t\,\mathbf{1}\rangle \le \|M_t\|_{\mathrm{op}}.$$

Thus it suffices to estimate the norm of $M_t$. We do this in two steps.

• Step 1: $\|M_t\|_{\mathrm{op}} \le \left(\max_{h\in V}\|D_h\,P\,D_h\|_{\mathrm{op}}\right)^{t}$;

• Step 2: For any $h \in V$, $\|D_h\,P\,D_h\|_{\mathrm{op}} \le 1-\frac{1}{t_{\mathrm{hit}}}$.

For step 1 we employ $\sqrt{P}$ (defined in Section 2) and the fact that $D_{h_s}^2 = D_{h_s}$ for each $s$. The norm of $M_t$ may be bounded by:

$$\|M_t\|_{\mathrm{op}} = \left\|\prod_{s=0}^{t-1}\big(D_{h_s}\sqrt{P}\,\sqrt{P}\,D_{h_{s+1}}\big)\right\|_{\mathrm{op}} \le \prod_{s=0}^{t-1}\Big(\|D_{h_s}\sqrt{P}\|_{\mathrm{op}}\,\|\sqrt{P}\,D_{h_{s+1}}\|_{\mathrm{op}}\Big), \tag{9}$$

using that $\|\cdot\|_{\mathrm{op}}$ is submultiplicative.

This last product of norms contains terms of the form $\|D_h\sqrt{P}\|_{\mathrm{op}}$ or $\|\sqrt{P}\,D_h\|_{\mathrm{op}}$ for elements $h \in V$. Now, for any linear operator $A$ with adjoint $A^*$, the quantities $\|A\|_{\mathrm{op}}^2$ and $\|A^*\|_{\mathrm{op}}^2$ are both equal to $\|A^*A\|_{\mathrm{op}}$. Apply this to $A = \sqrt{P}\,D_h$ and obtain:

$$\|D_h\sqrt{P}\|_{\mathrm{op}} = \|\sqrt{P}\,D_h\|_{\mathrm{op}} = \sqrt{\|D_h\,P\,D_h\|_{\mathrm{op}}}.$$

Thus step 1 follows from (9). For step 2, we fix $h \in V$. Notice that $D_h\,P\,D_h$ is self-adjoint and positive semidefinite (because $P$ is), so $\|D_h\,P\,D_h\|_{\mathrm{op}}$ equals its largest eigenvalue. According to [1, Section 3.6.5 and Theorem 3.33], the largest eigenvalue is equal to $1 - \frac{1}{\mathbb{E}_\mu[\tau_h]}$ for some quasistationary distribution $\mu$ over $V\setminus\{h\}$. Since $\mathbb{E}_\mu[\tau_h] \le t_{\mathrm{hit}}$, the result follows.
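The proof translates directly into a numerical check of the theorem: multiply out $\pi D_{h_0} P D_{h_1} \cdots$ for an arbitrary target trajectory and compare with $(1-1/t_{\mathrm{hit}})^t$. A sketch of such a check (graph and trajectory chosen arbitrarily by us):

```python
import numpy as np

# Lazy walk on a path with n = 4 vertices.
n = 4
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
deg = A.sum(axis=1)
P = A / (2.0 * deg[:, None]) + np.eye(n) / 2.0
pi = deg / deg.sum()

# t_hit by solving (I - Q) h = 1 for every target x.
t_hit = 0.0
for x in range(n):
    keep = [y for y in range(n) if y != x]
    Q = P[np.ix_(keep, keep)]
    t_hit = max(t_hit, np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1)).max())

# Survival probability P_pi(X_s != h_s for all s): apply D_{h_s} then P.
h = [0, 1, 2, 3, 2, 1, 0, 1]         # an arbitrary moving target
t = len(h) - 1
surv = pi * (np.arange(n) != h[0])   # pi D_{h_0}
for s in range(1, t + 1):
    surv = (surv @ P) * (np.arange(n) != h[s])
print(surv.sum() <= (1.0 - 1.0 / t_hit) ** t)  # the bound of Theorem 5
```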

## 5 Coalescing random walks

In this section we present a generalization of Theorem 4.

###### Theorem 6 (Proof sketch in Appendix A.2)

Consider a reversible and irreducible finite Markov chain $P$ with nonnegative spectrum, defined over a finite set $V$. Let $\tau_{\mathrm{coal}}$ denote the full coalescence time of a system of coalescing random walks that evolve according to $P$, with one walk started from each state. Let $t_{\mathrm{coal}} := \mathbb{E}[\tau_{\mathrm{coal}}]$. Then:

$$t_{\mathrm{coal}} \le K\,t_{\mathrm{hit}}, \quad\text{where } K>0 \text{ is universal}.$$

Let us define the process, postponing its analysis to Appendix A.2. Following [7, Section 3.3], we use a definition in terms of "killed particles": when several particles meet on the same vertex at the same time, only the particle with the lowest index survives (one can alternatively think that the particles coalesce). To make this formal, consider a Markov chain $P$ on a finite set $V$. Write $V = \{1,2,\dots,n\}$ and let

 (Xt(a))t≥0:a=1,2,3,…,n

be independent trajectories on $V$ evolving according to $P$, each with initial state $X_0(a) = a$. Now define killed trajectories as follows. Let $\partial \notin V$ be a "coffin state". We set $Y_t(1) := X_t(1)$ for all $t \ge 0$. Given $1 < a \le n$, assume $Y_\cdot(1), \dots, Y_\cdot(a-1)$ have been defined. Now let (cf. [7]):

$$\kappa_a := \inf\{t \ge 0 : X_t(a) = Y_t(b)\ \text{for some}\ b < a\}.$$

For $t \ge 0$, we set

$$Y_t(a) := \begin{cases} X_t(a), & t < \kappa_a;\\ \partial, & t \ge \kappa_a.\end{cases}$$

One can check that the set-valued process

$$S_t := \{\,Y_t(a) : 1 \le a \le n,\ Y_t(a) \ne \partial\,\}$$

is a time-homogeneous Markov chain on the nonempty subsets of $V$. It is this process that we call "coalescing random walks evolving according to $P$". The full coalescence time (cf. [7]) is:

$$\tau_{\mathrm{coal}} := \inf\{t \ge 0 : |S_t| = 1\}.$$
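The killed-particle definition translates directly into a simulation. The sketch below is our own code (with a lazy walk on a cycle as the driving chain, chosen only for illustration): at each step it removes every particle that lands on a vertex already occupied by a lower-indexed one.

```python
import random

def coalescence_time(step, n, rng):
    """Simulate n coalescing walks started from 0, ..., n-1.

    step(x, rng) draws one transition of the driving chain from x.
    When particles occupy the same vertex, only the one with the
    lowest index survives; returns tau_coal = first time |S_t| = 1."""
    alive = {a: a for a in range(n)}   # particle index -> position
    t = 0
    while len(alive) > 1:
        for a in list(alive):
            alive[a] = step(alive[a], rng)
        occupied = set()
        for a in sorted(alive):        # lower index wins the vertex
            if alive[a] in occupied:
                del alive[a]           # killed (sent to the coffin state)
            else:
                occupied.add(alive[a])
        t += 1
    return t

def lazy_cycle_step(x, rng, n=8):
    """One step of the lazy walk on an n-cycle."""
    u = rng.random()
    if u < 0.5:
        return x
    return (x + (1 if u < 0.75 else -1)) % n

rng = random.Random(0)
tau = coalescence_time(lazy_cycle_step, 8, rng)
print(tau >= 1)   # all eight walks eventually merge
```

Removing a killed particle outright is equivalent, for the surviving set $S_t$, to letting its trajectory continue unobserved.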

## Appendix A

### A.1 Auxiliary results via electrical network theory

We prove here Propositions 1 and 2. As noted in the main text, these results adapt the reasoning of [1, Proposition 6.16] to non-regular graphs.

We work under Definition 1. We will apply the theory of electrical networks [5, Chapter 9] by assigning conductance $c(x,y) := \pi(x)\,P(x,y)$ to each pair $x,y \in V$. In particular, $c(x,y) = \frac{1}{2\,d_{\mathrm{avg}}\,n}$, and the corresponding resistance is $r(x,y) = 2\,d_{\mathrm{avg}}\,n$, for each pair $x,y$ with $xy \in E$. We will also use a standard path fact that follows from the argument in the end of the proof of [5, Proposition 10.16, part (b)].

###### Fact 1 (Path fact)

If $x_0, x_1, \dots, x_k$ is a geodesic path in $G$, then $k \le 3n/d_{\min}$.

Proof: [of Proposition 1] The first inequality follows from [5, Lemma 12.17]. To bound the maximum commute time, we note that, by a simple network reduction argument,

$$\max_{x,y\in V}\big(\mathbb{E}_x[\tau_y]+\mathbb{E}_y[\tau_x]\big)$$

is at most the diameter of $G$ times the resistance of a single edge. The latter quantity is $2\,d_{\mathrm{avg}}\,n$, whereas the diameter is at most $3n/d_{\min}$ by the Path Fact. Since $(2\,d_{\mathrm{avg}}\,n)\times(3n/d_{\min}) = 6\,(d_{\mathrm{avg}}/d_{\min})\,n^2$, the Proposition follows.

Proof: [of Proposition 2] Thanks to [5, Lemma 9.6] and a standard network reduction:

$$\frac{g_{\tau_A-1}(x,x)}{\pi(x)} = R_{\mathrm{eff}}(x \leftrightarrow A),$$

where $R_{\mathrm{eff}}$ denotes effective resistance. Letting $B := V\setminus A$, our main goal will be to show:

$$\textbf{Goal:}\quad R_{\mathrm{eff}}(x \leftrightarrow A) \le 9\,\frac{d_{\mathrm{avg}}\,n}{d_{\min}}\,|B|,$$

as the result then follows from the fact that:

$$|B| \le \sum_{x\in B}\frac{d_x}{d_{\min}} = \frac{d_{\mathrm{avg}}\,n}{d_{\min}}\,\pi(B).$$

To bound the effective resistance, consider first the case $|B| \le 2\,d_x/3$. In that case, $x$ has at least $d_x/3$ neighbors in $A$, and (using $d_x \ge d_{\min}$ and $|B| \ge 1$)

$$R_{\mathrm{eff}}(x\leftrightarrow A) \le \frac{1}{\sum_{a\in A} c(x,a)} \le \frac{2\,d_{\mathrm{avg}}\,n}{d_x/3} = \frac{6\,d_{\mathrm{avg}}\,n}{d_x} < \frac{9\,d_{\mathrm{avg}}\,n}{d_{\min}}\,|B|.$$

We now consider the case $|B| > 2\,d_x/3$. Let $y \in A$ be as close as possible to $x$ (in the graph distance). Since each edge has resistance $2\,d_{\mathrm{avg}}\,n$, we have:

$$R_{\mathrm{eff}}(x\leftrightarrow A) \le R_{\mathrm{eff}}(x\leftrightarrow y) \le (2\,d_{\mathrm{avg}}\,n)\,\mathrm{dist}(x,y). \tag{10}$$

We now bound the graph distance $k := \mathrm{dist}(x,y)$. If $x = x_0, x_1, \dots, x_k = y$ is a shortest path from $x$ to $y$, then each of the vertices $x_0, \dots, x_{k-2}$ has all of its neighbors in $B$, because all points of $A$ are at distance $\ge 2$ from these vertices. We apply the Path Fact to the geodesic path $x_0, \dots, x_{k-1}$ in the subgraph induced by $B$ and obtain:

$$k \le \frac{3\,|B|}{d_{\min}} + 1 < \left(3+\frac{3}{2}\right)\frac{|B|}{d_{\min}},$$

because we are assuming $|B| > 2\,d_x/3 \ge 2\,d_{\min}/3$. We deduce from (10) that:

$$R_{\mathrm{eff}}(x\leftrightarrow A) \le 9\,\frac{d_{\mathrm{avg}}\,n}{d_{\min}}\,|B|,$$

as desired.

### A.2 Proof sketch for coalescing random walks

We sketch here the proof of Theorem 6 on coalescing random walks. This Theorem generalizes Theorem 4 from the Introduction.

Proof: [Sketch of Theorem 6] We explain how the proof of [7] can be modified to work in the discrete-time setting. The notation and definitions from Section 5 are taken for granted.

As in [7, Proposition 4.1], it suffices to prove that:

$$\mathbb{P}\big(\tau_{\mathrm{coal}} \ge c\,(t_{\mathrm{mix}}+t_{\mathrm{hit}})\big) \le 1-\gamma$$

for some universal $c > 0$ and $\gamma \in (0,1)$. We do this by considering a process where fewer particles die. That is, assume that for each time $t \ge 0$ we have a set $A_t$ of "allowed killings". Define a new killed process $Y^A_t(a)$, $1 \le a \le n$, by setting $Y^A_t(1) := X_t(1)$ for all $t \ge 0$; redefining $\kappa_a$ as

$$\kappa^A_a := \inf\{t \ge 0 : X_t(a) = Y^A_t(b)\ \text{for some } b < a \text{ with } (a,b) \in A_t\};$$

and defining $Y^A_t(a)$ accordingly. As in [7, Proposition 3.4], we may observe that the full coalescence time $\tau^A_{\mathrm{coal}}$ of this modified process dominates $\tau_{\mathrm{coal}}$, in the sense that $\mathbb{P}(\tau^A_{\mathrm{coal}} \ge t) \ge \mathbb{P}(\tau_{\mathrm{coal}} \ge t)$ for all $t \ge 0$.

We then apply the same strategy as in [7, Section 4.2]. We partition the particles into consecutive blocks whose sizes decay geometrically, so that for each $j$ the elements of the $j$-th block all precede those of the $(j+1)$-st, and the number of blocks $m$ is of order $\log n$. We define a set of epochs whose lengths $t_m, t_{m-1}, \dots, t_0$ are given by:

$$t_m := 1 + \lceil 2\,t_{\mathrm{mix}}\rceil, \qquad t_j := 1 + \left\lceil \frac{24\,\ln 5}{2^{\,j}}\,t_{\mathrm{hit}}\right\rceil \quad (0 \le j < m).$$

This follows the definitions in that paper, except for the "$1+$" terms. The sets of allowed killings during each epoch are defined in exactly the same way as in that paper.

Letting $c_0, c > 0$ denote universal constants, we have

$$\sum_{i=0}^{m} t_i \le 2\,t_{\mathrm{mix}} + 2\,(m+1) + c_0\,t_{\mathrm{hit}} \le c\,(t_{\mathrm{mix}}+t_{\mathrm{hit}}),$$

where we used that $m$ is of order $\log n$, whereas $t_{\mathrm{hit}}$ is at least linear in $n$. The proofs of [7, Propositions 4.2–4.4] go through as in that paper when one uses our version of the Meeting Time Lemma, Theorem 5. One can then finish the proof as in that article.

## References

• [1] David Aldous, James Allen Fill. Reversible Markov Chains and Random Walks on Graphs. Book draft available from https://www.stat.berkeley.edu/~aldous/RWG/book.html (1994).
• [2] Anna Ben-Hamou, Roberto I. Oliveira, Yuval Peres. “Estimating graph parameters via random walks with restarts.” In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (2018).
• [3] Colin Cooper, Robert Elsässer, Hirotaka Ono, and Tomasz Radzik. “Coalescing random walks and voting on connected graphs.” SIAM Journal on Discrete Mathematics, 27(4), 1748–1758 (2013).
• [4] Varun Kanade, Frederik Mallmann-Trenn, Thomas Sauerwald. “On coalescence time in graphs: when is coalescing as fast as meeting?” arXiv:1611.02460.
• [5] David Levin and Yuval Peres. Markov chains and mixing times (2nd edition). American Mathematical Society (2017).
• [6] Russell Lyons, Shayan Oveis Gharan. “Sharp Bounds on Random Walk Eigenvalues via Spectral Embedding.” International Mathematics Research Notices, rnx082, https://doi.org/10.1093/imrn/rnx082 (2017).
• [7] Roberto I. Oliveira. “On the coalescence time of reversible random walks.” Transactions of the American Mathematical Society, 364, 2109 - 2128 (2012).
• [8] Yuval Peres, Thomas Sauerwald, Perla Sousi, and Alexandre Stauffer. “Intersection and mixing times for reversible chains.” Electronic Journal of Probability, 22, paper no. 12 (2017).