1 Introduction
Consider a network that consists of many mobile nodes moving around randomly in some large area. Suppose that each node moves with constant speed, changing its direction of travel at random times, and that one of the nodes carries a message that she wants to transmit to a faraway destination in some specific, fixed direction. The message stays with its current carrier until the first time she comes within a certain distance of some other node moving in a better direction, i.e., in a direction closer to that of the intended recipient. In that case, she transmits her message to the other node, and the new carrier then proceeds in the same fashion. What is the long-term average speed with which the message travels towards its destination, as a function of, say, the nodes’ individual speeds and their density? How often, on average, does the message get transmitted from one node to another?
Networks of this type, where messages propagate via a combination of physical transport (moving with their carrier) and wireless transmissions (being sent from one node to another), belong to the wide class of delay-tolerant networks (DTNs) [29]. Examples of DTNs arising in applications include space [2], vehicular [4], sensor [26], and pocket-switched networks [17]. In earlier work by some of the authors [8, 7, 19], the questions of the previous paragraph were considered under very general assumptions on the movement of the nodes and on the protocol under which the message gets transmitted between nodes. In that line of work, as in much of the related earlier work in this area, e.g., [16, 12, 18, 27], the complexity of the models involved prohibits the derivation of exact, explicit answers. For that reason, typical results are in the form of asymptotics, approximations, performance bounds, or estimates based on simulation experiments.
In this work we examine two variants of a simple, idealized model, where it is possible to derive explicit, closed-form expressions for the performance metrics of interest. We first consider a collection of nodes moving independently on a discrete circle consisting of
locations, in discrete time. Each node maintains its current direction of travel for a geometrically distributed amount of time with parameter
, and one of the nodes carries a message intended to travel as far as possible in the clockwise direction. The message stays with its current carrier unless, while moving counterclockwise, it finds itself in the same location as a different node moving clockwise. In that case the message gets transmitted to the other node, and the same process is repeated. For the case of nodes, in Section 3.1, Theorem 3.1, we show that the long-term average clockwise speed of the message is . The proof, given in Section 4, involves the construction of a martingale that solves an associated (discrete) boundary value problem. Similar techniques allow us to compute the average transmission cost , measured as the long-term average number of message transmissions per unit time. In Theorem 3.3 we show that , for all and . Therefore, the message travels a clockwise distance of units between successive jumps (on average), regardless of . The tradeoff between speed and cost for different values of the parameters and is also discussed in Section 3.1.
Section 3.2 contains continuous-time analogs of Theorems 3.1 and 3.3. Here we consider nodes moving with constant speed on a continuous circle of circumference , changing directions at independent exponential times with rate . The corresponding expressions for the long-term average speed and cost are established in Theorems 3.4 and 3.6, respectively. Again, it turns out that the speed and cost satisfy a simple scale-free relationship: . In other words, the message travels a (clockwise) distance of units between successive jumps, on average.
The proofs of Theorems 3.4 and 3.6, given in Section 4, involve shorter and somewhat cleaner arguments than their discrete-time counterparts. In the continuous-time case, it is more straightforward to construct appropriate solutions to the relevant boundary value problems, which are stated in terms of the infinitesimal generator of the underlying Markov process. What is somewhat cumbersome is the proof that this Markov process is exponentially ergodic, uniformly in its initial state. The relevant ergodic properties are stated in Proposition 2.1 and Theorem 2.2, both proved in the Appendix.
Although perhaps the most restrictive of our assumptions is that nodes are assumed to move along the circumference of a circle, we note that there has been much recent interest in one-dimensional models of DTNs, particularly in connection with the important class of vehicular networks (VANETs); see [4, 5, 30]
and the references therein. Finally, a somewhat less closely related but quite extensively studied problem, in terms of a Markov chain describing the movement of a finite collection of nodes on a circle, is the
server problem introduced in [21]; see, e.g., [11] or [6] for more recent developments.

2 Models and Problem Statement
2.1 Random walk on the discrete circle
Let denote the discrete
circle, for a fixed odd
. We place independent random walkers on , located at at time , and with each walker we associate a random direction at time , where is either (clockwise motion) or (counterclockwise motion). The initial positions and directions are arbitrary. The Markov chain evolves on the state space as follows. Let
be a sequence of independent Bernoulli random variables with parameter
. Given the current state , each walker takes a step in the direction given by ,and then decides to either continue moving in the same direction with probability
, or to switch to the opposite direction, with probability :We also define an index process evolving on , with chosen arbitrarily and trying to track walkers that move clockwise: Given , let be defined as above. If and there is at least one more walker, , say, at the same location, , but its direction , then (or a uniformly chosen such if there are multiple candidates). In all other cases, .
It is easy to see from the above construction that is an irreducible and aperiodic chain on the state space consisting of all configurations of the form,
except those where and there is a such that and . Moreover, under the unique invariant distribution of , the distribution of is uniform: The positions
are independent of each other and uniformly distributed on
, and the directions are independent of the positions and each with probability , independently of the others. We are primarily interested in the following three quantities, as functions of and : Direction: What is the limiting distribution of the direction of the message at time ? Speed: What is the long-term average speed of the message? Cost: What is the long-term average number of jumps per unit time? We are also interested in the relationship between the speed and cost: Do higher speeds always imply an increase in cost? Or is there a range of parameter values that improves the speed and cost simultaneously?
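To make the discrete dynamics concrete, the following is a minimal sketch of one transition of the chain, including the uniform choice among multiple candidate carriers. All names (`pos`, `dirs`, `carrier`, `delta` for the switching probability, `n` for the circle size) are our own notation, not the paper's.

```python
import random

def step(pos, dirs, carrier, n, delta, rng=random):
    """One transition of the chain: every walker moves one step in its
    current direction on the discrete circle Z_n, then reverses
    direction with probability delta; afterwards, if the message's
    carrier moves counterclockwise (-1) and shares its location with at
    least one clockwise (+1) walker, the message jumps to one such
    walker chosen uniformly at random.  Returns the new carrier index.
    All names here are our own notation."""
    for i in range(len(pos)):
        pos[i] = (pos[i] + dirs[i]) % n
        if rng.random() < delta:
            dirs[i] = -dirs[i]
    if dirs[carrier] == -1:
        candidates = [j for j in range(len(pos))
                      if j != carrier and pos[j] == pos[carrier] and dirs[j] == 1]
        if candidates:
            carrier = rng.choice(candidates)
    return carrier
```

Iterating this map while accumulating the carrier's displacement gives straightforward Monte Carlo estimates of all three quantities above.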
2.2 Continuous motion on the circle
Let denote the one-dimensional circle of circumference , where is not necessarily an integer. We place independent random walkers on , and with each walker we associate a random direction at time , where is either (clockwise motion) or (counterclockwise motion). The initial positions and directions are arbitrary. The continuous-time Markov process evolves on the state space as follows. The th walker continues moving at constant speed in its present direction,
, say, for an exponentially distributed amount of time with mean
, for some ; during that time its direction remains constant, and afterwards it switches to . The process continues in the same fashion, by choosing a new, independent exponential time for the th walker, and with the different walkers moving independently of one another. We assume that the transitions between directions are such that the sample paths of the process are right-continuous, and observe that is strong Markov and, therefore, a Borel right process [28, 13]. Since is compact, is also nonexplosive [24]. The following simple proposition is proved in the Appendix.
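The continuous-time dynamics lend themselves to a simple event-driven simulation, alternating between exponential direction flips and deterministic meeting events; the sketch below handles the two-walker case. All names (`L` for the circumference, `v` for the walker speed, `lam` for the reversal rate) are our own, and the handover rule is our reading of the description above.

```python
import math
import random

def simulate_continuous(L=5.0, v=1.0, lam=0.7, T=2000.0, seed=1):
    """Event-driven simulation of two walkers on a circle of
    circumference L, moving at speed v and reversing direction at rate
    lam.  Returns (speed, cost): the long-run clockwise speed of the
    message and the number of handovers per unit time.  All parameter
    names are our own notation."""
    rng = random.Random(seed)
    pos = [0.0, L / 2.0]
    dirs = [1, -1]                 # +1 clockwise, -1 counterclockwise
    carrier, t = 0, 0.0
    disp, jumps = 0.0, 0
    nxt = [rng.expovariate(lam), rng.expovariate(lam)]  # next flip times
    while t < T:
        # next meeting time (inf while the walkers move in parallel)
        w = v * (dirs[0] - dirs[1])            # relative velocity
        rel = (pos[0] - pos[1]) % L
        if w > 0:
            t_meet = t + ((L - rel) if rel > 0.0 else L) / w
        elif w < 0:
            t_meet = t + (rel if rel > 0.0 else L) / (-w)
        else:
            t_meet = math.inf
        t_next = min(nxt[0], nxt[1], t_meet, T)
        dt = t_next - t
        disp += dirs[carrier] * v * dt         # message displacement
        for i in range(2):
            pos[i] = (pos[i] + dirs[i] * v * dt) % L
        t = t_next
        if t == t_meet:
            pos[1] = pos[0]                    # snap to avoid float drift
            other = 1 - carrier
            if dirs[carrier] == -1 and dirs[other] == 1:
                carrier = other                # handover to clockwise walker
                jumps += 1
        elif t == nxt[0] or t == nxt[1]:
            i = 0 if t == nxt[0] else 1
            dirs[i] = -dirs[i]                 # direction reversal
            nxt[i] = t + rng.expovariate(lam)
    return disp / T, jumps / T
```

Because the walkers' motion between events is deterministic, the simulation is exact up to floating-point error; no time discretization is involved.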
Proposition 2.1.

The Markov process is irreducible and aperiodic on , with respect to , where denotes the Lebesgue measure on and the counting measure on .

The process is positive Harris recurrent.

The uniform distribution is the unique invariant probability measure of .
We also define an index process evolving on , with chosen arbitrarily and trying to track walkers that move clockwise. Specifically, stays constant most of the time, and its value only changes when for some , and the direction of the th walker at the time is +1 while . In that case, the value of switches to (or to a uniformly chosen such if there are multiple candidates) and remains there at least until the first time walker encounters a different walker.
Next we show that the Markov process is uniformly ergodic on the state space , which consists of all elements ,
where we identify pairs of states and of the following form: The message is with a different walker in each state, i.e., , all positions and directions are identical, and , the th and th walkers are in the same position , and the two walkers move in opposite directions, i.e., .
As with , we assume that the transitions between directions and between successive values of the process are such that the sample paths of are right-continuous, so that is a nonexplosive, Borel right process [28, 13]. Its ergodicity properties are summarized in Theorem 2.2, proved in the Appendix. Here, and throughout the paper, for an arbitrary measure and function we write for , whenever the integral exists.
Theorem 2.2.

is irreducible and aperiodic with respect to , where, as before, and denote the Lebesgue and counting measures on and , respectively, and denotes the counting measure on .

is uniformly ergodic, with a unique invariant probability measure .

converges to equilibrium uniformly exponentially fast: There are constants such that,
for all , all measurable , and all .

The following ergodic theorem holds for : For any bounded (measurable) function and any initial state ,
Finally we note that the dynamics of can be described by its infinitesimal generator . Let denote the collection of all continuous functions , such that is continuously differentiable in for each . This is a dense subset of , and the infinitesimal generator of acts on each as,
(1) 
where, for any tuple of directions , is the same as but with its th coordinate having the opposite sign from that of , . The first term in the sum on the right-hand side above corresponds to the motion of the th walker at constant velocity , while the second one corresponds to its change of direction at rate . It is easy to see that , defined on , is a closed operator, and that there is small enough so that ; cf. [14, Proposition 1.3.5]. Therefore is the domain of .
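In one natural notation (positions $x = (x_1,\dots,x_m)$, directions $s = (s_1,\dots,s_m) \in \{-1,+1\}^m$, walker speed $v$, reversal rate $\lambda$; all symbols here are our own, chosen to match the verbal description above), a generator with exactly the two terms just described takes the form:

```latex
(\mathcal{A}f)(x,s) \;=\; \sum_{i=1}^{m}
  \left( s_i\, v\, \frac{\partial f}{\partial x_i}(x,s)
  \;+\; \lambda\, \bigl[\, f(x, \sigma_i s) - f(x, s)\, \bigr] \right),
```

where $\sigma_i s$ denotes the direction tuple $s$ with the sign of its $i$th coordinate flipped.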
Once again, we are interested in the following three quantities, as functions of and : The limiting distribution of the direction of the message at time ; The longterm average speed of the message; The longterm average number of jumps per unit time. Also, we wish to examine the nature of the tradeoff between the speed and cost.
3 Results: Speed and Cost with m=2 walkers
Here we state and discuss our main results for both the discrete and the continuous case. The proofs are given in Section 4. We adopt the following standard notation: For the probabilities of events depending on an underlying Markov process we write for the measure describing the distribution of the process conditional on , and when for some probability measure . Similarly, and denote the corresponding expectation operators.
3.1 The discrete circle
Consider the problem of walkers on the circle, changing directions with rate , as described in Section 2.1.
Theorem 3.1 (Message speed).
In the case of walkers, for any initial state, the longterm average speed of the message is:
Note that the speed is always less than or equal to , and it is decreasing in both and ; see Figure 1.
In the boundary case , the speed is either 1 or −1, depending on the initial directions of the two walkers. Therefore, is discontinuous at , since as , for any . Figure 2 shows the results of two simulation experiments, illustrating the convergence of the speed of the message to the corresponding value computed in Theorem 3.1.
Theorem 3.1 answers question of Section 2.1. The answer to question is a simple consequence of the theorem, given in Corollary 3.2 below.
Corollary 3.2 (Message direction).
In the case of walkers, for any initial state , the steady state probability that the message moves in the clockwise direction is:
Next we examine the asymptotic cost of message transmissions. Theorem 3.3 describes the longterm average number of jumps of the message, per unit time.
Theorem 3.3 (Transmission cost).
In the case of walkers, for any initial state, the longterm average cost of message transmissions is:
We observe that the cost is decreasing in , and for each fixed it is a concave function of ; see Figure 1. Also, unlike the speed , the cost is continuous and equal to zero at .
Speed vs. cost. It is interesting to observe the following simple, scale-free relationship between the asymptotic speed and cost: , for all and . Therefore, on average, the message travels a (clockwise) distance of units between successive jumps, regardless of the value of .
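The scale-free speed/cost relationship can be probed numerically. The sketch below (our own notation: `n` sites, switching probability `delta`) estimates both quantities from one long run of the two-walker chain; comparing the ratio speed/cost across different values of `delta` should reproduce, up to simulation noise, the constancy asserted above.

```python
import random

def speed_and_cost(n=25, delta=0.2, steps=300_000, seed=0):
    """Estimate the long-run clockwise speed of the message and the
    handover rate (jumps per step) for two walkers on the discrete
    circle Z_n.  All names are our own notation."""
    rng = random.Random(seed)
    pos, dirs = [0, n // 2], [1, -1]   # +1 clockwise, -1 counterclockwise
    carrier, disp, jumps = 0, 0, 0
    for _ in range(steps):
        disp += dirs[carrier]          # displacement of this step
        for i in range(2):
            pos[i] = (pos[i] + dirs[i]) % n
            if rng.random() < delta:
                dirs[i] = -dirs[i]
        other = 1 - carrier
        # handover: carrier moving counterclockwise meets a clockwise walker
        if dirs[carrier] == -1 and pos[other] == pos[carrier] and dirs[other] == 1:
            carrier, jumps = other, jumps + 1
    return disp / steps, jumps / steps
```

For instance, computing `s1 / c1` and `s2 / c2` from `speed_and_cost(delta=0.1)` and `speed_and_cost(delta=0.3)` and comparing the two ratios gives an empirical check of the scale-free relation.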
In terms of the speed/cost tradeoff, note that for each there is an below which the speed increases and the cost decreases as . This suggests that, if such a protocol were to be implemented in practice, it is the relatively smaller values of that would be most effective in the long run.
3.2 The continuous circle
Now we turn to the problem of walkers on the continuous circle of circumference , moving with constant speed and changing directions at rate . Theorem 3.4 gives the natural continuous analog of the discrete-time result in Theorem 3.1.
Theorem 3.4 (Message speed).
In the case of walkers, for any initial state, the longterm average speed of the message is:
Note that the speed is always no greater than , as in the discrete case. Also observe that, as would be expected, is decreasing in the circumference length and increasing in the walker speed . Moreover, is also decreasing in the reversal rate .
Theorem 3.4 answers question of Section 2.2. The answer to question , given below, is an immediate consequence of Theorem 3.4.
Corollary 3.5 (Message direction).
In the case of walkers, for any initial state , the steady state probability that the message moves in the clockwise direction is:
In our final result we determine the longterm average number of jumps per unit time.
Theorem 3.6 (Transmission cost).
In the case of walkers, for each time let denote the (random) number of times the message jumps from one walker to the other up to time . Then, for any initial state, the longterm average cost of message transmissions is:
Observe that the cost is naturally increasing in and decreasing in . But, unlike in the discrete case, is monotonically increasing in . Again we observe that there is a simple, scale-free relationship between the asymptotic speed and cost, : In the long run, the message travels a (clockwise) distance of units between successive jumps.
A comparison of the results of Theorems 3.4 and 3.6 with their discrete-time analogs is perhaps informative. Consider a large discrete circle of size and a small rate of direction updates . Then, noting that the circumference of the continuous circle corresponds to in the discrete case (since, because is odd, that is the number of steps required for two walkers starting in the same location and moving in opposite directions to meet again), we have the following scaling limit. Taking the speed in the continuous case, and the rate in the discrete case to be such that , passing to the continuous limit we obtain,
Finally we note that the parameters of the problem define the dimensionless quantity , and dimensional analysis alone (meaning, speeding up time by a constant factor, or dilating space by a constant factor) shows that and , for suitable functions . In this light, our results can be interpreted as showing that , for all .
4 Proofs
4.1 The discrete circle
Proof of Theorem 3.1. First, consider the reduced chain,
where the differences are taken modulo . Clearly is irreducible and aperiodic on the corresponding reduced state space consisting of all configurations, of the form,
except and . Let denote the unique invariant measure of . The limit in the theorem exists a.s. by ergodicity; in order to compute its actual value, we define the following regeneration time,
(2) 
and we consider two special states of : and . Let denote the probability measure on given by,
(3) 
Then is indeed a regeneration time for in the sense that, with , we also have . We will use the following general version of Kac’s formula; cf. [1, Corollary 2.24].
Lemma 4.1.
For any function and any regeneration time for :
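In our notation (a chain $X$ with invariant measure $\pi$, and a regeneration time $\tau$ under which the chain has the same law $\nu$ at times $0$ and $\tau$; symbols ours), one standard form of this identity reads:

```latex
\mathbb{E}_{\nu}\!\left[ \sum_{t=0}^{\tau - 1} f(X_t) \right]
  \;=\; \mathbb{E}_{\nu}[\tau] \,\cdot\, \pi(f).
```

This reduces the computation of stationary averages $\pi(f)$ to expectations over a single regeneration cycle.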
To apply Lemma 4.1, we first compute :
Lemma 4.2.
Proof. Consider the (further restricted) chain on the state space , and note that its unique invariant measure is uniform. Let denote the measure conditioned on , and let,
so that, in fact, . Then, by Kac’s formula [1], we have,
as claimed. ∎
The central step in the proof of the theorem is an application of Lemma 4.1 with , which, combined with Lemma 4.2 gives us that can be expressed as,
where the sums above (and in what follows) correspond to addition over (as opposed to modulo addition over ), and where the second equality follows from the fact that, by the definition of , the message is with walker 1 up to time . Therefore, writing , for , , we have,
where we noted that is zero by symmetry, since the two walkers start off in opposite directions.
Now write, and let denote the first time when the two walkers meet,
so that can be expressed in terms of either or . We observe that, at time , either the two walkers decide to go in opposite directions, in which case , or they continue moving together until they choose opposite directions, in which case the difference of their locations stays constant; therefore,
Since the last expectation above is conditioned on the two walkers starting from the same position, in opposite directions, and with the first one moving in the positive (clockwise) direction, there are exactly two possible scenarios for their first meeting time : In the first scenario, at time walker 1 is two steps “ahead” in the clockwise direction of walker 2 (as they are, e.g., at time ). In this case, we will necessarily have . We call this event . In the second scenario, the relative positions of the two walkers at time will be reversed, which necessarily means that the first walker travelled a whole circle “around” the second one before they met, so that (since is odd) on , we must have . Therefore,
(4) 
Finally we compute the probability of the event :
Lemma 4.3.
Proof. Here we consider the chain on , where . Note that, for the state , the initial condition corresponds to .
We will only need to examine the evolution of until time , which, since is odd, can equivalently be expressed as,
and the same argument as in the last paragraph before the statement of the lemma shows that, given , the only two possible values of are 0 and , on and on , respectively. Therefore, letting,
we have that and ; cf. Figure 3.
In fact, for this computation it will suffice to consider the trace of on the set,
cf. [25, Example 1.4.4]. The evolution of this Markov chain is fairly simple and its transition probabilities are easy to compute; e.g., the probability of the transition from to is equal to,
The first term above corresponds to the case when the two walkers both maintain their original directions after their first step; the second term corresponds to the case when only the first walker changes direction, after which they keep moving at a distance two apart, until one of them changes direction again and they either reach the state or the state , each having probability 1/2 by symmetry; and the third term corresponds to the case when only the second walker changes direction after their first step, and its value is the same as the second term again by symmetry. The remaining transition probabilities can be similarly computed; see Figure 3.
Finally, for every state we define , so that . Writing and for the states and , respectively, we have , , and in fact it is easy to see that the one-step conditional expectation of given any state or is equal to . This relationship can be expressed as a simple recursion: Letting and , we have,
Adding the first two equations above shows that is a constant, say , independent of , and substituting this in the recursion for gives . Similarly solving for we obtain, , and from the boundary values we can solve for to get, . Therefore,
as claimed. ∎
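The probability computed in Lemma 4.3 can be cross-checked by direct Monte Carlo. The sketch below simulates the two walkers from the initial condition of the lemma and classifies each first meeting by the walkers' relative position one step earlier; this classification of the event, and all names (`n`, `delta`), are our own reading of the two scenarios described before (4), not the paper's notation.

```python
import random

def prob_A(n=11, delta=0.3, trials=20_000, seed=2):
    """Monte Carlo estimate of P(A): both walkers start at 0, walker 1
    clockwise (+1), walker 2 counterclockwise (-1).  A is read here as
    the event that, one step before the walkers first re-meet, walker 1
    sits two sites clockwise of walker 2, i.e. (pos1 - pos2) mod n == 2
    (as at time 1); the complementary meeting has walker 2 ahead
    instead.  This reading of A is our own."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pos, dirs = [0, 0], [1, -1]
        while True:
            prev_diff = (pos[0] - pos[1]) % n
            for i in range(2):
                pos[i] = (pos[i] + dirs[i]) % n
                if rng.random() < delta:
                    dirs[i] = -dirs[i]
            if pos[0] == pos[1]:        # first meeting
                hits += (prev_diff == 2)
                break
    return hits / trials
```

Comparing such estimates against the closed-form value of the lemma, for several choices of `n` and `delta`, provides a useful sanity check on the recursion argument above.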
Proof of Theorem 3.3. Recall the ergodic chain defined at the beginning of the proof of Theorem 3.1. Write for its state space, for its unique invariant measure, and let denote its transition kernel, , . Now consider the bivariate chain . Then is also ergodic, with unique invariant measure,
for every state of . Therefore, the limit in the statement indeed exists a.s., and it equals,
where consists of the following 8 states,
and where, with a slight abuse of notation, the negative values of the variables above are again interpreted modulo .
In order to compute the actual value of , we first observe that,
where we used the fact that the invariant distribution of is uniform, which implies that , for any and . Simplifying,
(5) 
To compute the difference of the two probabilities in (5), recall the definition of the regeneration time and the measure in (2) and (3), respectively. By Lemma 4.1, for any state of the form , we have,
where the chain was defined in the proof of Lemma 4.3 and as before. Therefore, substituting this twice in (5) and recalling the discussion of the evolution of until time from the proof of Lemma 4.3, we have,