1. Introduction
1.1. Background and undesirable ergodicity assumption
Since the emergence of wireless networks and throughout their continual development, the impact of user mobility on network performance has attracted significant attention. In [9], the authors showed that mobility creates a multiuser diversity leading to a significant improvement in per-user throughput. Since this seminal work, the observation that mobility increases throughput has been confirmed in a wide variety of situations captured by various stochastic models, see [2, 3, 4, 5, 6, 7, 16, 22]. Interestingly, to the best of our knowledge, the first paper to show that mobility could, under certain circumstances, actually degrade delay performance only appeared recently [1].
In all these models, user mobility is represented by an ergodic process on a finite region of the plane. For instance, in [9] users follow a stationary and ergodic trajectory on the unit disk; in [3, 4, 5, 6], users follow an irreducible Markovian trajectory in a network consisting of a finite number of cells. In our view, one of the limitations of such a modeling assumption is the highly unrealistic behavior it displays under congestion. Indeed, in the congestion regime, users stay in the network for a long time, so that if their trajectory is ergodic, they necessarily visit the same places a large number of times, as if they were walking in circles.
1.2. High-level model description and motivation
In the present paper, we pursue the modeling approach started in [17, 23]. The main idea to alleviate the aforementioned drawback resulting from the ergodic trajectory assumption is to focus on a single cell and abstract the rest of the network as a single state. By doing so, we only keep track of the precise location of users when they are located in the considered cell: when located elsewhere (either outside the network or in the rest of the network), we do not track them precisely. This simple model could be generalized by focusing on several cells rather than a single one (see the discussion in Section 5 below). Users can thus be in one of three “places”, as pictured in Figure 1:

outside the network, meaning that they do not require service (the left fluffy shape);

in the considered cell (the middle hexagon);

in the network but not in the considered cell, i.e., in the rest of the network (the right fluffy shape).
Moreover, our work is motivated by future LTE networks, where cells can be small in range (pico and femto cells). In this case, users experience similar radio conditions, and we will therefore assume below that they receive the same transmission capacity, independently of their location within the cell. While focusing on the spatial mobility of users, the present study consequently ignores the spatial variations of transmission capacity inevitably present in larger cells. In the following, this equal capacity is denoted by C and, without loss of generality, normalized to C = 1.
1.3. Mathematical model and results
Our mathematical model is introduced in two steps. At this stage, we only give a high-level description of the model in order to convey the big picture: details are provided in Section 2.
We first introduce a “free” model X^f = (X^f_1, X^f_2), which is simply a two-class Processor-Sharing queue with one impatient class: from the mobile network perspective, non-impatient users correspond to static users who do not move, and impatient users to mobile users who move and can thus leave the cell to the rest of the network. The nonzero transition rates of the Markov process X^f are given by

(1.1)  (x1, x2) → (x1 + 1, x2) at rate λ1,    (x1, x2) → (x1 − 1, x2) at rate μ x1/(x1 + x2),
       (x1, x2) → (x1, x2 + 1) at rate λ2 + λ,  (x1, x2) → (x1, x2 − 1) at rate μ x2/(x1 + x2) + ν x2,

with (x1, x2) ∈ ℕ² and λ1, λ2, λ, μ, ν ≥ 0 (see Section 2.1 for a detailed interpretation of these parameters); as specified below, ν represents the impatience/mobility rate.
In a second step, we introduce our full model X, which is obtained from the free model (1.1) by enforcing a balance condition in the form of the fixed-point equation (FP) detailed below. This fixed-point equation means that the flows of mobile users to and from the rest of the network must balance out. The condition thus expresses that the considered cell is “typical”, in that it imposes a load on the rest of the network equal to the reciprocal load imposed by the rest of the network on the considered cell.
If ρ1 = λ1/μ denotes the load of static (i.e., non-impatient) users and ρ2 = λ2/μ the load of mobile (i.e., impatient) users, the stability condition without enforcing this balance equation is ρ1 < 1, since class-2 users are impatient and thus cannot accumulate (see Lemma 2.1). From the mobile network perspective, the interpretation is that mobile users can always escape to the rest of the network, where they are not tracked. The stability condition ρ1 < 1 is therefore clearly fictitious: even if we do not keep track of the precise location of mobile users in the rest of the network, they still impose a load on the network, which should be accounted for. When enforcing the balance equation (FP), the stability condition becomes ρ1 + ρ2 < 1, which is the natural expected stability condition since, considering the cell as a representative cell of a larger network, ρ1 + ρ2 is the normalized load per cell (see Lemma 2.3).
The study of this model is driven by the desire to understand the impact of mobility on performance. In particular, we wish to address questions such as: given the total load ρ = ρ1 + ρ2, does the network perform better if the proportion of mobile users increases? Since answering such questions is generally difficult, we resort here to the approximation obtained in the heavy traffic regime ρ ↑ 1. In addition to providing useful insight into the impact of mobility on performance, this model turns out to exhibit a highly original heavy traffic behavior, whereby the number of users in the system scales like log(1/(1 − ρ)) as ρ ↑ 1. If all users were static we would have the usual 1/(1 − ρ) scaling; our model therefore suggests that not only throughput but also delay is improved by mobility.
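To appreciate the gap between the two orders of magnitude, one can tabulate them directly; this is plain arithmetic, independent of the model (the chosen values of ρ are illustrative):

```python
import math

# Compare the classical heavy-traffic order 1/(1 - rho) with the
# order log(1/(1 - rho)) suggested by the present model.
for rho in (0.9, 0.99, 0.999):
    usual = 1.0 / (1.0 - rho)
    mobile = math.log(1.0 / (1.0 - rho))
    print(f"rho={rho}: 1/(1-rho)={usual:.1f}, log(1/(1-rho))={mobile:.2f}")
```

Already at ρ = 0.99 the two orders differ by a factor of more than twenty.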
To the best of our knowledge, this unusual heavy traffic scaling has only appeared earlier in [14], in the case of the Shortest-Remaining-Processing-Time (SRPT) service discipline with a heavy-tailed service distribution. In that case, such an improvement is conceivable: indeed, since the service distribution is heavy-tailed, very long jobs are not so rare. If the service discipline is FIFO, these jobs impose a very large delay on the numerous smaller jobs that arrive after them. With SRPT, in contrast, only the large jobs spend a long time in the system, essentially because of their large service requirement. As regards the impact of mobility in wireless networks, it has already been observed [23], through an approximate analysis and extensive simulations, that the performance gain due to mobility can be related to an “opportunistic” displacement of mobile users within the network: any local increase of traffic in one given cell induces the displacement of the moving users to a neighboring cell in order to complete their transmission, hence alleviating the traffic for the remaining (static or moving) users in the original cell. Our contribution in this paper is to theoretically justify this statistical behavior in the heavy traffic regime.
1.4. Organization of the paper
2. Model description and main result
We now introduce our model in detail: as above, we first consider a “free” model, simply represented by a two-class Processor-Sharing queue with one impatient class; we then introduce the full model, which derives from the free model by enforcing a balance condition in the form of a fixed-point equation (FP). We finally state our main result and explain the main steps of the proof.
2.1. Free model
In the free model, represented by the Markov process X^f = (X^f_1, X^f_2) with nonzero transition rates (1.1), we consider two classes of users:
1) class-1 users are static: they arrive to the cell from the outside at rate λ1, require a service which is exponentially distributed with parameter μ and are served according to the Processor-Sharing service discipline. They consequently leave the network (to the outside) at an aggregate rate μ X1/(X1 + X2), with Xi the number of class-i users;

2) class-2 users are mobile: they arrive to the cell from the outside at rate λ2, require a service which is exponentially distributed with parameter μ and are served according to the Processor-Sharing service discipline. As for class-1 users, they leave the network to the outside upon completing service, at an aggregate rate μ X2/(X1 + X2); the difference with class-1 users is that they are mobile and can thus leave the cell (now, to the rest of the network and not to the outside) before completing service. We assume that each mobile user leaves the cell at rate ν, so that class-2 users leave the cell to the rest of the network at an aggregate rate ν X2. Finally, mobility can also make users enter the cell from the rest of the network, and we assume that this happens at rate λ.
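To make these dynamics concrete, the following sketch simulates the free model by stepping through its transitions. The function name and the numerical values are illustrative; the rates simply transcribe the verbal description above:

```python
import random

def simulate_free_model(lam1, lam2, lam, mu, nu, t_max, seed=0):
    """Simulate the free two-class Processor-Sharing cell.

    Class-1 (static) users arrive at rate lam1; class-2 (mobile) users
    arrive at rate lam2 (from the outside) plus lam (from the rest of
    the network).  With x1 + x2 users in the cell, class-i users
    complete service at aggregate rate mu * x_i / (x1 + x2), and each
    mobile user additionally leaves the cell at rate nu.  Returns the
    time-averaged number of users of each class.
    """
    rng = random.Random(seed)
    t, x1, x2 = 0.0, 0, 0
    area1 = area2 = 0.0
    while t < t_max:
        n = x1 + x2
        rates = [
            lam1,                       # class-1 arrival from the outside
            lam2 + lam,                 # class-2 arrival (outside + rest of network)
            mu * x1 / n if n else 0.0,  # class-1 service completion
            mu * x2 / n if n else 0.0,  # class-2 service completion
            nu * x2,                    # class-2 departure to the rest of the network
        ]
        total = sum(rates)
        dt = rng.expovariate(total)     # time to the next transition
        area1 += x1 * dt
        area2 += x2 * dt
        t += dt
        u, acc = rng.random() * total, 0.0
        for event, r in enumerate(rates):
            acc += r
            if u < acc:
                break
        if event == 0:
            x1 += 1
        elif event == 1:
            x2 += 1
        elif event == 2:
            x1 -= 1
        else:
            x2 -= 1
    return area1 / t, area2 / t
```

For instance, with ρ1 = λ1/μ < 1 the time averages remain moderate even over long horizons, in line with the stability discussion below.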
At this stage, it is apparent from the rates (1.1) that distinguishing the outside from the rest of the network is artificial and bears no consequence on the distribution of this Markov process. All that matters is the total arrival rate λ2 + λ and the total departure rate of class-2 users. This distinction, however, will become crucial later.
The distribution of the Markov process X^f with nonzero transition rates (1.1) thus depends on the five parameters λ1, λ2, λ, μ and ν (and more precisely, on λ2 and λ only through their sum λ2 + λ). The superscript f refers to “free”, as the “full” process X in which we will be mainly interested belongs to this class, but with λ chosen as a function of the other four parameters λ1, λ2, μ and ν.
In the rest of the paper, we write ρ1 = λ1/μ and ρ2 = λ2/μ. The following result describes the stability region of X^f, which depends on whether ν = 0 or ν > 0. Whenever X^f is positive recurrent, we denote by X^f(∞) its stationary distribution. Here and throughout the paper, vector inequalities are understood componentwise, so for instance x ≤ y means that xi ≤ yi for i = 1, 2.

Lemma 2.1.
Stability of X^f depends on whether ν = 0 or ν > 0 in the following way:

if ν = 0, then X^f is positive recurrent if λ1 + λ2 + λ < μ, null recurrent if λ1 + λ2 + λ = μ and transient if λ1 + λ2 + λ > μ;

if ν > 0, then X^f is positive recurrent if λ1 < μ, null recurrent if λ1 = μ and transient if λ1 > μ.
In either case, when the process is positive recurrent, we have 𝔼[X^f_2(∞)] < ∞.
These results can be proved with Lyapunov-type arguments and comparisons with suitable queues. Such arguments are standard, and the proof is therefore omitted.
2.2. Constrained model
The previous result formalizes the behavior pointed out in the introduction, namely that in the presence of mobility (i.e., when ν > 0), mobile users do not matter as regards stability. Indeed, if they accumulate, they can escape to the rest of the network, where they are not tracked. However, this is only an artifact of our modeling approach, since mobile users that escape to the rest of the network should somehow be accounted for. The constrained model that we now introduce aims at doing precisely this; it is obtained by choosing λ as a function of the other four parameters λ1, λ2, μ and ν through a fixed-point equation.
2.2.1. The fixedpoint equation
In the free model, the three parameters λ1, λ2 and μ govern the transitions involving the outside, while the two parameters ν and λ govern transitions within the network. Out of these five parameters, all but λ can be considered exogenous and dictated by the users’ behavior: how often they arrive, how fast they move, etc. In contrast, λ is hard to tie directly to users’ behavior and is more an artifact of our modeling approach.
In order to fix the value of λ in an exogenous way, the idea is to impose a balance condition. Roughly speaking, we assume that the cell is in equilibrium (see Section 5 for a discussion of this assumption) and that the flows of mobile users to and from the rest of the network balance each other. Provided that X^f is positive recurrent, we thus want to impose the balance equation

(FP)  λ = ν 𝔼[X^f_2(∞)].
We note that (FP) is a fixed-point equation, as its right-hand side is a function of λ, the other four parameters being kept fixed. Provided that there exists a unique solution to (FP) with the four parameters λ1, λ2, μ and ν given (necessary and sufficient conditions for this will be stated below), this unique solution is denoted by λ̄. We then consider the process X with the same transition rates (1.1) as the free process, but where the value of the parameter λ has been set to λ̄, chosen as a function of λ1, λ2, μ and ν via (FP). The process X will be the main object of investigation in this paper.
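For illustration only, such a fixed point can be approximated numerically on a truncated state space. The sketch below estimates the stationary distribution of the free model by power iteration on the uniformized jump chain and then solves the balance equation ν 𝔼[X2] = λ by bisection; the truncation level, iteration counts and parameter values in the test are ad hoc choices, not part of the paper's argument:

```python
def stationary(lam1, lam2_tot, mu, nu, N=8, steps=800):
    """Stationary distribution of the free model truncated to
    {0, ..., N}^2, via power iteration on the uniformized jump chain.
    lam2_tot is the total class-2 arrival rate (outside + rest of the
    network); state (x1, x2) is indexed as x1 * (N + 1) + x2.
    """
    W = N + 1
    U = lam1 + lam2_tot + mu + nu * N  # uniformization constant
    pi = [0.0] * (W * W)
    pi[0] = 1.0
    for _ in range(steps):
        nxt = [0.0] * (W * W)
        for x1 in range(W):
            for x2 in range(W):
                p = pi[x1 * W + x2]
                if p == 0.0:
                    continue
                n, out = x1 + x2, 0.0
                if x1 < N:                       # class-1 arrival
                    nxt[(x1 + 1) * W + x2] += p * lam1 / U
                    out += lam1
                if x2 < N:                       # class-2 arrival
                    nxt[x1 * W + x2 + 1] += p * lam2_tot / U
                    out += lam2_tot
                if x1 > 0:                       # class-1 service completion
                    r = mu * x1 / n
                    nxt[(x1 - 1) * W + x2] += p * r / U
                    out += r
                if x2 > 0:                       # class-2 completion or move
                    r = mu * x2 / n + nu * x2
                    nxt[x1 * W + x2 - 1] += p * r / U
                    out += r
                nxt[x1 * W + x2] += p * (1.0 - out / U)  # lazy self-loop
        pi = nxt
    return pi

def solve_fixed_point(lam1, lam2, mu, nu, N=8):
    """Bisection on lam for the balance equation nu * E[X2] = lam."""
    W = N + 1
    def excess(lam):
        pi = stationary(lam1, lam2 + lam, mu, nu, N=N)
        ex2 = sum(p * (i % W) for i, p in enumerate(pi))
        return nu * ex2 - lam
    lo, hi = 0.0, nu * N + 1.0  # E[X2] <= N forces excess(hi) < 0
    for _ in range(18):
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At the returned value, the outflow ν 𝔼[X2] and the inflow λ̄ agree up to truncation and iteration error.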
Definition 2.1.
Our main result is that even a slight amount of mobility (i.e., ν very small instead of ν = 0) dramatically improves the performance of the network and leads to an unusual heavy traffic scaling. To explain this, we first discuss the case with no mobility.
2.2.2. Heavy traffic regime
When we say ρ ↑ 1, we mean that we consider a sequence of systems indexed by n, where the parameters λ1^n, λ2^n, μ and ν of the nth system satisfy ρ^n < 1 (where ρ^n = ρ1^n + ρ2^n and ρi^n = λi^n/μ) and, as n → ∞, we have λ1^n → λ1 and λ2^n → λ2 with ρ = ρ1 + ρ2 = 1, where ρ1 = λ1/μ and ρ2 = λ2/μ. We then use the notation ⇒ to mean weak convergence as n → ∞.
We will also consider convergence as other parameters vary. We use, in particular, the notation ⇒ to mean weak convergence as λ → ∞, and we also introduce another parameter ε and use the corresponding notation to mean weak convergence first as λ → ∞ and then as ε → 0. To be more precise, the latter means that the expectation of any continuous and bounded function converges under the iterated limit, first as λ → ∞ and then as ε → 0.
2.2.3. The case ν = 0
Consider now the case ν = 0, for which (FP) imposes λ = 0. We distinguish two cases:

if ρ1 + ρ2 ≥ 1, then the free process with λ = 0 is transient or null recurrent, and so (FP) is not defined;

if ρ1 + ρ2 < 1, then the free process with λ = 0 is positive recurrent, and λ = 0 is the unique solution to (FP).

Thus, the constrained model is only defined for ρ1 + ρ2 < 1; in this case, it corresponds to the free process with λ = 0 and is in particular positive recurrent. The following result, taken from [19], states that its heavy traffic behavior obeys the usual 1/(1 − ρ) scaling.
Lemma 2.2.
2.2.4. The case ν > 0
We now show that, whatever the value of ν > 0, the behavior changes dramatically and leads to an unusual scaling. We first investigate existence and uniqueness for the fixed-point equation (FP). The proof relies on monotonicity and continuity arguments detailed in [17] and is therefore only briefly recalled here.
Lemma 2.3.
This result is comforting: indeed, ρ1 + ρ2 < 1 is the “natural” stability condition. Comparing Lemmas 2.1 and 2.3, we see that imposing (FP) changes the stability condition from ρ1 < 1 (mobile users do not matter) to ρ1 + ρ2 < 1 (mobile users matter). Moreover, we observe the peculiar feature that, whenever the stability condition is violated, the Markov process X is not defined at all, and not simply transient as is usually the case. This is due to the fact that we seek to impose a long-term balance equation through (FP), which cannot be sustained for a system out of equilibrium.
For completeness, and since the key equation (2.2) below will be useful later, we provide a short sketch of the proof of Lemma 2.3. So consider ν > 0 and assume λ1 < μ, since otherwise X^f(∞) is not defined. Let

θ(λ) = ℙ(X^f(∞) = (0, 0)),

the other four parameters being fixed. The balance of flow for the free system entails λ1 = μ 𝔼[X^f_1(∞)/(X^f_1(∞) + X^f_2(∞))] and λ2 + λ = μ 𝔼[X^f_2(∞)/(X^f_1(∞) + X^f_2(∞))] + ν 𝔼[X^f_2(∞)] (with the convention 0/0 = 0), or equivalently,

(2.1)  θ(λ) = 1 − ρ1 − (λ2 + λ − ν 𝔼[X^f_2(∞)])/μ.

In particular, (FP) is equivalent to

(2.2)  θ(λ) = 1 − ρ, where ρ = ρ1 + ρ2.

Since θ(λ) > 0, this relation shows that no solution can exist for ρ ≥ 1. Assume now that ρ < 1. It is intuitively clear that θ is continuous and strictly decreasing to 0: as class-2 users arrive at a higher rate, the probability of the system being empty decreases strictly and continuously to 0. As θ(0) ≥ 1 − ρ by (2.1), this entails the existence and uniqueness of a solution to (FP). We recall that this unique solution is written λ̄, and we define λ̄2 = λ2 + λ̄ as the total arrival rate of class-2 users in the constrained model.
According to Lemma 2.3, the heavy traffic regime consists in letting ρ ↑ 1. The following result is the main result of the paper. Extensions of this result are discussed in Section 5.
Theorem 2.4.
Assume that ν > 0. As ρ ↑ 1, the sequence X(∞)/log(1/(1 − ρ)) is tight, and any of its accumulation points is almost surely smaller than an explicit deterministic point κ.
This result shows that adding even a slight amount of mobility, i.e., going from ν = 0 to ν > 0, dramatically changes the heavy traffic behavior, making X(∞) scale like log(1/(1 − ρ)) instead of 1/(1 − ρ). We could actually show that log(1/(1 − ρ)) is indeed the right order, i.e., that accumulation points are bounded away from 0 (see Section 5).
Remark 2.5.
It is surprising that this upper bound does not depend on ν. Indeed, when ν = 0, Lemma 2.2 implies that the correct scaling is 1/(1 − ρ), and so interchanging limits suggests that the rescaled process should converge to a limit that blows up as ν → 0. This is not the case, however, and we actually conjecture that it converges to a limit independent of ν (see Section 5). That limits cannot be interchanged testifies to the subtlety of the result, which, we believe, is due to the fact that we need an unusual large deviations result for a two-timescale system, see Section 5.2.
Let us now explain where this unusual scaling comes from: the idea is to reduce the problem to questions on the free process by writing

(2.3)  X(∞)/log(1/(1 − ρ)) = (X(∞)/λ̄) × (λ̄/log(1/(1 − ρ))).
It is easy to see that λ̄ → ∞ as ρ ↑ 1. Thus, as X is a particular case of X^f (with λ = λ̄), understanding the asymptotic behavior of X(∞)/λ̄ as ρ ↑ 1 amounts to understanding the asymptotic behavior of X^f(∞)/λ as λ → ∞. The following result specifies this behavior.
Lemma 2.6.
Assume that ν > 0 and λ1 < μ. Then as λ → ∞, the sequence X^f(∞)/λ is tight, and any accumulation point is almost surely smaller than a constant vector κ′ directly related to the point κ of Theorem 2.4.
In particular, the sequence ‖X^f(∞)‖/λ is tight, and any accumulation point is almost surely smaller than the constant ‖κ′‖.
Next, (2.2) shows that 1 − ρ = ℙ(X(∞) = (0, 0)), and so, for the same reason as above, understanding the asymptotic behavior of λ̄/log(1/(1 − ρ)) as ρ ↑ 1 amounts to understanding the asymptotic behavior of ℙ(X^f(∞) = (0, 0)) as λ → ∞.
Lemma 2.7.
Assume that ν > 0 and λ1 < μ. Then ℙ(X^f(∞) = (0, 0)) decays exponentially fast as λ → ∞.
In particular, λ̄ grows at most like log(1/(1 − ρ)) as ρ ↑ 1.
In view of (2.3), the two previous lemmas directly imply Theorem 2.4. In other words, the log(1/(1 − ρ)) scaling of X(∞) arises for the two following reasons:

the (at most) linear increase of X^f(∞) as λ → ∞;

the exponential decay of ℙ(X^f(∞) = (0, 0)) as λ → ∞.
Remark 2.8.
In Section 5, we discuss refinements of these upper bounds: in particular, we show how to prove that , and we conjecture that
with constant
Remark 2.9.
The linear increase in λ of 𝔼[X^f_2(∞)] is natural in the setting of single-server queues. Moreover, the refinement suggests that X^f_2(∞) is of the order of λ/ν. This makes the state (0, 0) far from the typical value of X^f(∞), so that the exponential decay of the stationary probability of being at (0, 0) is expected in view of large deviations theory. The link with large deviations theory is discussed in more detail in Section 5.
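This intuition can be checked on the simplest caricature of a population with per-individual departures, an M/M/∞ queue with arrival rate λ and per-customer departure rate ν: its stationary distribution is Poisson(λ/ν), so the typical state λ/ν grows linearly in λ while the probability e^{−λ/ν} of being empty decays exponentially. A small simulation sketch (illustrative values, not part of the paper's argument):

```python
import math
import random

def mminf_empty_fraction(lam, nu, t_max, seed=0):
    """Fraction of time an M/M/infinity queue (arrival rate lam,
    per-customer departure rate nu) spends empty.  Its stationary
    distribution is Poisson(lam / nu), so in the long run this
    fraction approaches exp(-lam / nu): the typical state lam / nu is
    linear in lam, while the empty probability is exponentially small.
    """
    rng = random.Random(seed)
    t, x, empty_time = 0.0, 0, 0.0
    while t < t_max:
        total = lam + nu * x
        dt = rng.expovariate(total)  # time to the next transition
        if x == 0:
            empty_time += dt
        t += dt
        if rng.random() * total < lam:
            x += 1   # arrival
        else:
            x -= 1   # one of the x customers departs
    return empty_time / t
```

With lam = 2 and nu = 1, the empirical fraction comes close to exp(−2) ≈ 0.135 over a long horizon.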
3. Proof of Lemma 2.6
In the rest of the paper, we use several couplings. We use the notation X ≤ Y to mean that the processes (or random variables) X and Y can be coupled in such a way that X ≤ Y almost surely. If X and Y are random processes, this is to be understood as X(t) ≤ Y(t) for all t, and vector inequalities are understood componentwise.
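As an elementary illustration of this notion (not one of the specific couplings used below), the following sketch couples two M/M/1 queues sharing the same arrival stream, the second queue obtaining its smaller service rate by thinning the service events of the first; the pathwise inequality can then be read off the common trajectory. All names and values are illustrative:

```python
import random

def coupled_mm1(lam, mu_fast, mu_slow, n_events, seed=0):
    """Couple two M/M/1 queues sharing the same arrival stream (rate
    lam), with service rates mu_fast >= mu_slow: service events are
    generated at rate mu_fast, and the slow queue only uses each of
    them with probability mu_slow / mu_fast (thinning).  Returns the
    joint trajectory; the slow queue dominates the fast one pathwise.
    """
    assert mu_fast >= mu_slow > 0.0
    rng = random.Random(seed)
    q_fast = q_slow = 0
    traj = [(q_fast, q_slow)]
    total = lam + mu_fast
    for _ in range(n_events):
        if rng.random() * total < lam:
            # common arrival: both queues grow together
            q_fast += 1
            q_slow += 1
        else:
            # potential service completion
            if q_fast > 0:
                q_fast -= 1
            # thinning: the slow queue completes service less often
            if rng.random() < mu_slow / mu_fast and q_slow > 0:
                q_slow -= 1
        traj.append((q_fast, q_slow))
    return traj
```

The invariant q_slow ≥ q_fast is preserved by every transition (arrivals act on both queues, and a service event never removes a customer from the slow queue without the fast queue being at least as small afterwards), which is the pathwise meaning of the stochastic inequality.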
In order to prove Lemma 2.6, we first exhibit a family of processes Z = (Z1, Z2), indexed by some additional parameter, such that X^f ≤ Z. We build this coupling in two steps, and then analyze the process Z. In order to control Z, we then exhibit another family of processes, coupled with Z, whose stationary behavior is easier to analyze.
3.1. First coupling: X^f ≤ Y
Starting from (1.1), the first step consists in neglecting the Processor-Sharing term in the departure rate of the second coordinate, lower bounding this departure rate by ν x2. Doing so makes the departure rate of the second coordinate smaller, which makes this coordinate larger; this in turn makes the departure rate of the first coordinate smaller, and hence the first coordinate larger. Thus if Y = (Y1, Y2) is the ℕ²-valued Markov process with nonzero transition rates

(y1, y2) → (y1 + 1, y2) at rate λ1,    (y1, y2) → (y1 − 1, y2) at rate μ y1/(y1 + y2),
(y1, y2) → (y1, y2 + 1) at rate λ2 + λ,  (y1, y2) → (y1, y2 − 1) at rate ν y2,

then we have X^f ≤ Y. For completeness, we provide a proof of this result.
Proof of X^f ≤ Y.
Let the current state of our coupling be (x, y) with x ≤ y. We see x as the “small” system and we index its customers by (c, k), with c ∈ {1, 2} the user class and 1 ≤ k ≤ xc. The “big” system has the same customers and also additional ones, which we label (c, k) with c ∈ {1, 2} and xc < k ≤ yc. The next transition is built as follows:

at rate λ1, add a class-1 customer to both systems;

at rate λ2 + λ, add a class-2 customer to both systems;

each customer (2, k) has an exponential clock with parameter ν and leaves the system when it rings: note that if k > x2 this only affects the big system, while if k ≤ x2 this affects both systems;

at rate μ, do the following:

choose a customer (c, k) from the big system uniformly at random, i.e., each of the y1 + y2 customers is chosen with probability 1/(y1 + y2);

if (c, k) is in the small system, let (c′, k′) = (c, k);

else, let (c′, k′) be chosen uniformly at random in the small system, independently of everything else;
Then remove the customer (c′, k′) from the small system, and remove the customer (c, k) from the big system if it is of class 1.

This construction is such that

if a customer arrives in the small system, it also arrives in the big system;

if a customer leaves the big system but not the small one, then this customer was an “additional” customer, which was in the big system but not in the small one.
In particular, this construction leads to a state (x′, y′) with x′ ≤ y′. Moreover, the small system has the same dynamics as X^f, because (c′, k′) is chosen uniformly at random in the small system, and the big system has the same dynamics as Y. Thus, this indeed builds a coupling of X^f and Y with X^f ≤ Y, as desired. ∎
3.2. Second coupling: Y ≤ Z
Starting from Y, we build Z by lowering the service rate of the first coordinate: when Y2 is larger than some threshold K, we set the service rate to 0, and when Y2 ≤ K, we use μ y1/(y1 + K) instead of μ y1/(y1 + y2), the former being indeed smaller than the latter when y2 ≤ K. More precisely, we fix K (which is omitted from the notation for convenience) and define Z = (Z1, Z2) as the ℕ²-valued Markov process with nonzero transition rates

(z1, z2) → (z1 + 1, z2) at rate λ1,    (z1, z2) → (z1 − 1, z2) at rate μ z1/(z1 + K) if z2 ≤ K,
(z1, z2) → (z1, z2 + 1) at rate λ2 + λ,  (z1, z2) → (z1, z2 − 1) at rate ν z2,

so that Y ≤ Z (in contrast to the inequality X^f ≤ Y, the proof bears no difficulty and is thus omitted). Since X^f ≤ Y, this gives X^f ≤ Z, as desired.
Note that Z2 is an M/M/∞ queue, so that Z2(∞) follows a Poisson distribution with parameter (λ2 + λ)/ν. In particular, we obtain the convergence Z2(∞)/λ → 1/ν as λ → ∞, and so we only have to control Z1(∞) in order to prove Lemma 2.6. To do so, we resort to another coupling and compare Z1 to a birth-and-death process B.

3.3. Third coupling
As K is larger than the equilibrium point (λ2 + λ)/ν of Z2, excursions of Z2 above level K are rare, and so the service of Z1 is only rarely turned off. For this reason, it is natural to compare Z1 with the birth-and-death process B obtained by removing the condition z2 ≤ K from the service rate of the first coordinate. To do so, we directly build the coupling that we need and consider the Markov process (B, Z1, Z2) with the following nonzero transition rates:
(3.1) 
with
In words, what this process does is the following:

B and Z2 are independent Markov processes;

B is a state-dependent single-server queue with arrival rate λ1 and instantaneous service rate μ b/(b + K) when in state b;

Z2 is an M/M/∞ queue with arrival rate λ2 + λ and per-customer departure rate ν;

Z1 has the same arrivals as B, but its departures are different: when Z2 ≤ K there are additional departures, at a rate compensating for the fact that Z1 may exceed B, and there is no departure at all when Z2 > K.
Since the function x ↦ x/(x + K) is increasing, these rates are well defined, and we see that the third coordinate of this coupling is a Markov process with the same transition matrix as Z1; we will therefore simply write Z1 for it, and we have B ≤ Z1.
In particular, this coupling defines several Markov processes, such as B, Z2, Z1 and (B, Z1, Z2). For ease of notation, we will write ℙ_z for the law of any of these Markov processes started at z, where the dimension of z depends on the process considered.
Recall that the goal is to control Z1(∞): what we will do is first prove the result for B, which is much simpler since B is a birth-and-death process (whereas Z1 on its own is not Markov), and then transfer this result to Z1.
3.4. Control of B
Let us now prove the desired result for B. Let first a > 0 be such that ρ1 < a/(1 + a). Since the function x ↦ x/(x + K) is increasing, when B is above level aK its departure rate is at least μ a/(1 + a). Thus, if Q is an M/M/1 queue with arrival rate λ1 and departure rate μ a/(1 + a), we have B ≤ aK + Q. Likewise, if Q′ is an M/M/1 queue with arrival rate λ2 + λ and departure rate ν K, we have Z2 ≤ K + Q′.
Recall that ρ1 = λ1/μ < 1 and that K is larger than (λ2 + λ)/ν: the load of Q is λ1(1 + a)/(μ a) < 1, and the load of Q′ is (λ2 + λ)/(ν K) < 1.
We thus deduce that Q and Q′ are subcritical M/M/1 queues (uniformly in λ, with a fixed), so that their stationary distributions have geometrically decaying tails.
Since B ≤ aK + Q and Z2 ≤ K + Q′, we obtain the corresponding tail bounds for B(∞) and Z2(∞), and in view of the fact that K grows linearly with λ, we finally get the desired result for B, namely that B(∞)/λ is tight.
3.5. Transfer to Z1
We now transfer the result for B to Z1 thanks to their coupling (3.1). Recall that B and Z1 obey the same dynamics, except that service in Z1 is interrupted while Z2 makes excursions above K. To compare their stationary distributions, we consider their trajectories over cycles of Z2, where a cycle starts when Z2 = K and ends when Z2 returns to K from above: there is thus a long period, corresponding to Z2 ≤ K, where B and Z1 have the same dynamics and get closer (because the departure rate from Z1 is larger), and then a short period, when Z2 > K, where departures from Z1 are turned off and Z1 and B drift further apart (whenever there is a departure from B). Considering such cycles makes the comparison between B and Z1 tractable.
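The cycle decomposition used here is an instance of the renewal-reward principle: a time average equals the expected reward accumulated over one cycle divided by the expected cycle length. The toy sketch below illustrates this on an M/M/1 queue regenerating at arrivals to an empty system (all names and numerical values are illustrative):

```python
import random

def mm1_cycle_estimate(lam, mu, n_cycles, seed=0):
    """Estimate the stationary empty probability of an M/M/1 queue by
    renewal-reward over regeneration cycles (a cycle runs between
    successive arrivals to an empty system).  The estimate
    sum(rewards) / sum(lengths) approaches 1 - lam/mu.
    """
    rng = random.Random(seed)
    total_reward = total_length = 0.0
    for _ in range(n_cycles):
        # a cycle starts with an arrival to an empty system
        x, length, reward = 1, 0.0, 0.0
        while True:
            rate = lam + (mu if x > 0 else 0.0)
            dt = rng.expovariate(rate)
            if x == 0:
                reward += dt   # time spent empty counts as reward
            length += dt
            if rng.random() * rate < lam:
                if x == 0:
                    break      # next arrival to an empty system: cycle ends
                x += 1
            else:
                x -= 1
        total_reward += reward
        total_length += length
    return total_reward / total_length
```

With lam = 0.5 and mu = 1, the estimate approaches the stationary empty probability 1 − lam/mu = 0.5 as the number of cycles grows.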
To formalize this idea, define recursively the stopping times at which Z2 returns to K from above, and let (Bk) and (Z1,k) be the chains obtained by sampling B and Z1 at these times. Note that (Bk) and (Z1,k) are ergodic Markov chains. Let π_B and π_1 be their respective stationary distributions and note, since B and Z2 are independent, that π_B coincides with the distribution of B(∞). For any function f, define the functions f̄ and f̂ as suitable expected rewards accumulated over one cycle (for B and Z1, respectively).
The following result then relates the stationary distributions of B and Z1 to those of (Bk) and (Z1,k), respectively. In the sequel, we write ‖f‖ for the supremum norm of a function f.
Lemma 3.1.
For any bounded function f we have
Proof.
We present the argument only for B, as the same argument applies to Z1. In this proof, → denotes almost sure convergence as t → ∞. Since the sequence of cycle-start times is a (possibly delayed) renewal process, by the strong Markov property we have
The rest of the proof is devoted to showing that we also have
Recall the structure of the kth cycle of Z2: Z2 starts at K, then exceeds K and returns to K at the end of the cycle. For each cycle, B starts in a random location b: call the (b, j)th cycle the jth cycle of Z2 such that B starts in b, and denote its corresponding time interval accordingly. If R(b, j) represents the “reward” accumulated along the (b, j)th cycle, then, writing N(t) for the number of cycles starting before time t and partitioning the cycles depending on their starting point, we obtain