1 Introduction
Random walks on graphs and other reversible Markov chains are fundamental mathematical objects, with applications in algorithms and beyond. In this paper we derive new bounds on return probabilities, hitting times, and the time to meet a deterministic trajectory for these processes. We also bound the expected full coalescence time of multiple random walks on the same graph.
We state our main results in terms of lazy random walks on graphs, although some of them hold for more general chains (cf. Section 2). In this paper, is always a finite, unoriented, connected graph with vertex set , with no loops or parallel edges and with vertices. The degree of is denoted by . Minimum, maximum, and average degrees in are denoted by , , and , respectively. The parameter always denotes discrete time and takes values in .
Definition 1
Lazy random walk (LRW) on is the Markov chain on with transition matrix given by:
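As an illustration (our addition, not part of the paper), the transition matrix of Definition 1 can be assembled numerically: stay put with probability 1/2, otherwise move to a uniformly random neighbour. A minimal sketch, assuming the graph is given by a 0/1 adjacency matrix:

```python
import numpy as np

def lazy_walk_matrix(adj):
    """Lazy random walk: with probability 1/2 stay put, otherwise
    step to a uniformly random neighbour of the current vertex."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)                 # vertex degrees
    n = adj.shape[0]
    return 0.5 * np.eye(n) + 0.5 * adj / deg[:, None]

# 4-cycle: rows are probability distributions, diagonal is the laziness 1/2
P = lazy_walk_matrix([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]])
```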
We let denote trajectories of the chain and , , , denote probabilities and expectations when or has law (respectively). Finally, we denote by the stationary distribution:
Recall that the hitting time of is
We write for . The maximum expected hitting time of LRW on ,
is a natural parameter for a Markov chain, with connections to electrical network theory, cover times, mixing times and many other aspects of such processes. See [5] (especially Chapter 10) for more information.
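On small examples the maximum expected hitting time can be computed exactly (our illustration, not a method from the paper): for each fixed target, the vector of expected absorption times solves a linear system. A sketch, assuming a row-stochastic transition matrix:

```python
import numpy as np

def max_hitting_time(P):
    """t_hit = max over (x, y) of E_x[time to hit y].
    For fixed target y, h(x) = E_x[tau_y] solves (I - Q) h = 1,
    where Q is P with row and column y removed."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    best = 0.0
    for y in range(n):
        keep = [i for i in range(n) if i != y]
        Q = P[np.ix_(keep, keep)]
        h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        best = max(best, h.max())
    return best

# Lazy walk on a single edge: the other endpoint is hit after a
# Geometric(1/2) number of steps, so the expectation is 2.
t_hit = max_hitting_time([[0.5, 0.5], [0.5, 0.5]])
```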
1.1 Hitting and returning
Our first two results estimate and return probabilities in terms of the relaxation time of (cf. Section 2). These quantities are related by the formula [5, Proposition 10.26]:
(1) 
These Theorems are specific to random walks on graphs. As we explain in Section 2.1, the following bounds are essentially best possible for general reversible chains.
(3) 
Kanade et al. [4] obtained the following improvement to (1.1) in the graph setting:
(4) 
where is the mixing time [5, Theorem 12.5] and is universal. Theorem 1 has better dependence on and on the ratio . Examples in Section 2.2 show that our bound is sharp in both of these parameters.
The best previous result for return probabilities on graphs is due to Lyons and Oveis Gharan [6, Theorem 4.9] (that result is for regular graphs, but the authors explain right after the proof that it can be extended to all graphs in the form presented here):
(5) 
Theorem 2 improves the constant, replaces with , and improves both (5) and (3) when .
1.2 Results on meeting and coalescing
Our next result bounds the probability that LRW hits a moving target in terms of .
This is a discrete-time version of the “Meeting Time Lemma” in [7]. Notice that this Theorem fails for non-lazy random walk on a bipartite graph. On the other hand, Theorem 3 can be extended to all reversible Markov chains with nonnegative spectrum (cf. Section 2 below).
The original application of the continuous-time version of Theorem 3 was to the study of coalescing random walks. Following the argument in [7], we may obtain a discrete-time version of the main result of that paper, which had been previously conjectured by Aldous and Fill [1].
Theorem 4 (Definitions in Section 5, proof sketch in Appendix A.2)
Under Definition 1, consider coalescing random walks that evolve according to LRW on . Let denote the expected time until these walks have all coalesced into one. Then with a universal constant.
Kanade et al. [4, Theorem 1.3] obtained a partial form of Theorem 4 where grows slowly with the ratio . In particular, their result implies Theorem 4 when is regular. They also obtained sharp results relating coalescing and meeting times in general [4, Theorems 1.1 and 1.2] that do not follow from our methods.
1.3 Additional comments
We finish the introduction with additional comments on our results. Theorem 1 gives a bound on that may be viewed through the lens of electrical network theory. The relation between effective resistances and commute times [5, Chapter 9] gives the following immediate corollary.
Corollary 1 (Proof omitted)
Let be a finite connected graph. Place unit resistances on each edge of . Then the effective resistance between any two vertices is bounded by , where is universal.
Intriguingly, we do not know of a direct proof of this result via the Dirichlet form or a network reduction.
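Corollary 1 can at least be sanity-checked numerically on small graphs (our illustration, not part of the paper): with unit resistances, the effective resistance between two vertices can be read off from the Moore-Penrose pseudoinverse of the graph Laplacian.

```python
import numpy as np

def effective_resistance(adj, x, y):
    """R_eff(x, y) with unit resistance on each edge, via the
    pseudoinverse of the Laplacian L = D - A."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    return Lp[x, x] + Lp[y, y] - 2.0 * Lp[x, y]

# 4-cycle: adjacent vertices are joined by a unit resistor in parallel
# with a series chain of three, so R_eff = (1 * 3) / (1 + 3) = 3/4
cycle = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
r = effective_resistance(cycle, 0, 1)
```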
Theorem 2 on return probabilities may be used to tighten results on random walk intersections by Peres et al. [8]. Let be the maximum expected intersection time of two independent lazy random walks over from worst-case initial states. Proposition 1.8 in [8] gives the following bound for regular graphs:
and the uniform mixing time. With Anna Ben-Hamou, we have used Theorem 2 to obtain an improved bound that holds for all graphs:
which can be shown to be sharp. This result will appear in the full version of [2].
2 Preliminaries
Before we prove our results, we discuss basic properties of LRW. The books [1, 5] contain much more material on reversible chains and random walks on graphs.
LRW as in Definition 1 is an irreducible chain because is connected. LRW is also reversible: for all . This means that , as an operator over , is self-adjoint with respect to the inner product:
So has real spectrum
and the fact that for all implies . That is, LRW on a finite connected graph is a reversible and irreducible finite Markov chain with nonnegative spectrum. In particular is positive semidefinite and has a positive semidefinite square root . The relaxation time of such a chain is .
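These spectral facts are easy to verify numerically (our illustration): conjugating by the square roots of the stationary weights turns a reversible transition matrix into a symmetric matrix with the same real spectrum, from which the relaxation time can be read off. A sketch for the lazy walk on the 4-cycle, whose spectrum should be nonnegative:

```python
import numpy as np

# Lazy walk on the 4-cycle; the stationary distribution is uniform.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
                [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
P = 0.5 * np.eye(4) + 0.5 * adj / adj.sum(axis=1)[:, None]
pi = np.full(4, 0.25)

# D^{1/2} P D^{-1/2} is symmetric for a reversible chain and shares P's spectrum.
s = np.sqrt(pi)
S = (s[:, None] * P) / s[None, :]
evals = np.sort(np.linalg.eigvalsh((S + S.T) / 2))[::-1]  # descending order
t_rel = 1.0 / (1.0 - evals[1])    # relaxation time 1 / (1 - lambda_2)
```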
Let denote the function that is equal to everywhere. To each
we may associate an eigenfunction
(with ) so that is an orthonormal basis of . We use to denote the norm corresponding to the inner product in .
2.1 General upper bounds for hitting and returning
Using the above notation, we explain why the bounds on hitting times (1.1) and return probabilities (3) are nearly optimal for a reversible chain with nonnegative spectrum.
Set , so that . Let be the matrix with entries (). Let
denote the identity matrix. The Markov chain that, at each step, stays put with probability
and jumps to a point picked from with probability , has transition matrix:
It has the same eigenbasis as
, with eigenvalues
(with multiplicity ) and (multiplicity ). Thus and have nonnegative spectrum, the same relaxation time, and moreover for all and . In particular:
and (with denoting expectation for )
In particular, (3) is exact for , whereas (1.1) is sharp up to a constant factor.
2.2 Lower bounds for hitting on graphs
We showed above that, in the graph setting, our Theorem 1 improves upon the best-possible results for chains with nonnegative spectrum. The next two examples show that the dependence of Theorem 1 on and is best possible.

A stretched expander. Take a regular expander on vertices. Subdivide each edge into a path of length . The resulting graph has vertices, relaxation time , and .

A lollipop. Take a regular graph on vertices which has constant relaxation time. Connect a path of length to one of the vertices of . The resulting graph has , and maximum hitting time .
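The lollipop example can be probed numerically at small scale. The sketch below (our illustration, with a small clique standing in for the regular graph) solves the absorption linear system for the expected time to hit the far endpoint of the attached path, which is already large relative to the number of vertices:

```python
import numpy as np

def lollipop_adjacency(m, L):
    """Clique on m vertices with a path of length L attached to vertex 0.
    Path vertices are m, m+1, ..., m+L-1; the far endpoint is the last one."""
    n = m + L
    A = np.zeros((n, n))
    A[:m, :m] = 1 - np.eye(m)            # the clique
    chain = [0] + list(range(m, n))      # vertex 0 followed by the path
    for u, v in zip(chain, chain[1:]):
        A[u, v] = A[v, u] = 1
    return A

A = lollipop_adjacency(8, 8)
P = 0.5 * np.eye(len(A)) + 0.5 * A / A.sum(axis=1)[:, None]

# Expected time to hit the far endpoint y, started from each other vertex:
y = len(A) - 1
keep = list(range(y))
Q = P[np.ix_(keep, keep)]
h = np.linalg.solve(np.eye(y) - Q, np.ones(y))
```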
3 Return probabilities and hitting times
In this section we prove Theorem 1 on and Theorem 2 on . We will need two lemmas on “Green’s function”:
(6) 
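As a numerical companion (our illustration, not part of the proofs), a truncated Green's function of this type, understood as the expected number of visits to a state up to a finite time horizon, can be evaluated by iterating the transition matrix on a point mass:

```python
import numpy as np

def green_truncated(P, x, y, T):
    """Sum over t = 0, ..., T of P^t(x, y): the expected number of
    visits to y by time T, for the chain started at x."""
    P = np.asarray(P, dtype=float)
    row = np.zeros(P.shape[0])
    row[x] = 1.0                      # distribution of X_0
    total = 0.0
    for _ in range(T + 1):
        total += row[y]
        row = row @ P                 # advance one step
    return total

# Lazy walk on a single edge: P^t(0, 0) = 1/2 for every t >= 1,
# so the sum up to time T is 1 + T/2.
g = green_truncated([[0.5, 0.5], [0.5, 0.5]], 0, 0, 4)
```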
Lemma 1 (Proof in Section 3.1)
Consider , a reversible and irreducible finite Markov chain with nonnegative spectrum, stationary distribution and state space . Fix and . Then:
Proof: [of Theorem 1] For , formula (1) may be rewritten as:
Lemma 1, Lemma 2 and the formula give:
To finish, we bound and use [5, Lemma 10.2].
Proof: [of Theorem 2] Using the same results as in the previous proof,
The result follows from estimating the constant by .
3.1 Sum up to the relaxation time and stop
3.2 Estimate on Green’s function
We prove Lemma 2 by adapting [1, Proposition 6.16] to non-regular graphs. We need two Propositions that we prove in Appendix A.1 via electrical network theory.
Proposition 2 (Proof in Section A.1)
Proof: [of Lemma 2] Fix and define the set:
We bound the measure of via Markov’s inequality and reversibility:
In particular, when . We now claim that:
(8) 
In fact, assuming this we obtain the Lemma by setting
To prove (8), we may assume . Note that the number of returns to up to time is at most the sum of returns up to time with those occurring at times . The strong Markov property gives:
and inequality (8) follows from Proposition 2 (applied to the first term in the RHS) and the definition of (applied to the second term in the RHS).
4 The Meeting Time Theorem in general form
We now present a more general version of the Meeting Time Theorem (Theorem 3 above). We use the same notation and definitions for trajectories, hitting times, etc. that we defined for LRW. We also take the material and notation from Section 2 for granted.
Theorem 5
Consider a reversible and irreducible finite Markov chain with nonnegative spectrum which is defined over a finite set and has stationary measure . Then for any and any choice of :
Proof: Given a linear operator , we denote its operator norm by:
For , let be a diagonal matrix that has ’s at entries with and a zero at the entry . is self-adjoint with respect to . Defining
we see that
Thus it suffices to estimate the norm of . We do this in two steps.

;

For any , .
For step 1 we employ (defined in Section 2) and the fact that for each . The norm of may be bounded by:
( submultiplicative)  (9) 
This last product of norms contains terms of the form or for elements . Now, for any linear operator with adjoint , and are both equal to . Apply this to and obtain:
Thus step 1 follows from (9). For step 2, we fix . Notice that is self-adjoint and positive semidefinite (because is), so equals the largest eigenvalue of . According to [1, Section 3.6.5 and Theorem 3.33], the largest eigenvalue is equal to for some quasi-stationary distribution over . Since , the result follows.
5 Coalescing random walks
In this section we present a generalization of Theorem 4.
Theorem 6 (Proof sketch in Appendix A.2)
Consider a reversible and irreducible finite Markov chain with nonnegative spectrum which is defined over a finite set . Let denote the full coalescence time of a system of coalescing random walks that evolve according to . Let . Then:
Let us define the process, postponing its analysis to Appendix A.2. Following [7, Section 3.3], we use a definition in terms of “killed particles”. That is, when several particles meet on the same vertex at the same time, only the particle with the lowest index survives (one can alternatively think of the particles as coalescing). To make this formal, consider a Markov chain on a finite set . Write and let
be independent trajectories on evolving according to , each with initial state . Now define killed trajectories as follows. Let be a “coffin state”. We set for all . Given , assume have been defined. Now let (this time is called in [7]):
For , we set
One can check that the set-valued process
is a time-homogeneous Markov chain on . It is this process that we call “coalescing random walks evolving according to ”. The full coalescence time (called in [7]) is:
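The killed-particle construction above is straightforward to simulate. The sketch below (our illustration, for LRW on a graph given by adjacency lists) moves each surviving particle lazily and then keeps, on each occupied vertex, only the particle of lowest index:

```python
import random

def full_coalescence_time(neighbors, seed=0):
    """Simulate coalescing lazy random walks, one particle per vertex.
    When several particles share a vertex, only the one with the lowest
    index survives. Returns the first time a single particle remains."""
    rng = random.Random(seed)
    pos = {i: i for i in range(len(neighbors))}   # particle index -> vertex
    t = 0
    while len(pos) > 1:
        t += 1
        for i in list(pos):
            if rng.random() < 0.5:                # lazy step: stay put
                continue
            pos[i] = rng.choice(neighbors[pos[i]])
        occupied = {}
        for i in sorted(pos):                     # lowest index wins
            v = pos[i]
            if v in occupied:
                del pos[i]
            else:
                occupied[v] = i
    return t

# Complete graph on 4 vertices
tau = full_coalescence_time([[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]])
```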
Appendix A Appendix
A.1 Auxiliary results via electrical network theory
We prove here Propositions 1 and 2. As noted in the main text, these results adapt the reasoning of [1, Proposition 6.16] to non-regular graphs.
We work under Definition 1. We will apply the theory of electrical networks [5, Chapter 9] by assigning conductance to each pair . In particular, and for each pair with . We will also use a standard path fact that follows from the argument at the end of the proof of [5, Proposition 10.16, part (b)].
Fact 1 (Path fact)
If is a geodesic path in , then .
Proof: [of Proposition 1] The first inequality follows from [5, Lemma 12.17]. To bound the maximum commute time, we note that, by a simple network reduction argument,
is at most the diameter of times the resistance of a single edge. The latter quantity is , whereas the diameter is by the Path Fact. Since , the Proposition follows.
Proof: [of Proposition 2] Thanks to [5, Lemma 9.6] and a standard network reduction:
where denotes effective resistance. Letting , our main goal will be to show:
as the result then follows from the fact that:
To bound the effective resistance, consider first the case . In that case, has at least neighbors in , and (using )
We now consider the case . Let be as close as possible to (in the graph distance). Since each edge has resistance , we have:
(10) 
We now bound the graph distance . Let . If is a shortest path from to , then each of the vertices has all of its neighbors in , because all points of are at distance from . We apply the Path Fact to the geodesic path in the induced subgraph and obtain:
because we are assuming . We deduce from (10) that:
as desired.
A.2 Proof sketch for coalescing random walks
We sketch here the proof of Theorem 6 on coalescing random walks. This Theorem generalizes Theorem 4 from the Introduction.
Proof: [Sketch of Theorem 6] We explain how the proof of [7] can be modified to work in the discrete time setting. The notation and definitions from Section 5 are taken for granted.
As in [7, Proposition 4.1], it suffices to prove that:
for some universal . We do this by considering a process where fewer particles die. That is, assume that for each we have a set of “allowed killings”. Define new killed process , , by setting for all ; redefining as
and defining accordingly. As in [7, Proposition 3.4], we may observe that the full coalescence time of this modified process dominates in the sense that for all .
We then apply the same strategy as in [7, Section 4.2]. We partition so that for each the elements of all preceded all those of (so ) and for (so is of order
). We define a set of epochs by setting
and defining , , , , as:
This follows the definition in [7] except for the “” terms. The sets of allowed killings are defined in the same way as in that paper:

Epoch # : for ;

Epochs # through # : for .

Epoch # : set
Letting denote universal constants, we have
where we have whereas is at least linear in . The proofs of [7, Propositions 4.2–4.4] go through as in that paper when one uses our version of the Meeting Time Lemma, Theorem 5. One can then finish the proof as in that article.
References
 [1] David Aldous and James Allen Fill. Reversible Markov Chains and Random Walks on Graphs. Book draft, available from https://www.stat.berkeley.edu/~aldous/RWG/book.html (1994).
 [2] Anna Ben-Hamou, Roberto I. Oliveira, and Yuval Peres. “Estimating graph parameters via random walks with restarts.” In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (2018).
 [3] Colin Cooper, Robert Elsässer, Hirotaka Ono, and Tomasz Radzik. “Coalescing random walks and voting on connected graphs.” SIAM Journal on Discrete Mathematics, 27(4), 1748–1758 (2013).
 [4] Varun Kanade, Frederik Mallmann-Trenn, and Thomas Sauerwald. “On coalescence time in graphs: when is coalescing as fast as meeting?” arXiv:1611.02460.
 [5] David Levin and Yuval Peres. Markov Chains and Mixing Times (2nd edition). American Mathematical Society (2017).
 [6] Russell Lyons and Shayan Oveis Gharan. “Sharp bounds on random walk eigenvalues via spectral embedding.” International Mathematics Research Notices, rnx082, https://doi.org/10.1093/imrn/rnx082 (2017).
 [7] Roberto I. Oliveira. “On the coalescence time of reversible random walks.” Transactions of the American Mathematical Society, 364, 2109–2128 (2012).
 [8] Yuval Peres, Thomas Sauerwald, Perla Sousi, and Alexandre Stauffer. “Intersection and mixing times for reversible chains.” Electronic Journal of Probability, 22, paper no. 12 (2017).