I Introduction and Literature Review
A multihop wireless network consists of nodes transmitting and receiving information over the wireless medium, with data from a source node having to pass through multiple hops before it can reach its destination. The control of wireless networks, involving scheduling, routing and power control, is a complex and challenging problem. Applications often require distributed control as opposed to centralized control. Distributed control algorithms may not always match centralized algorithms in terms of performance. However, they offer ease of implementation from a practical perspective in scenarios where it may not be feasible to collect information from all over the network before arriving at a control decision.
Different flows in a network, arising from different applications, may require Quality of Service (QoS) guarantees. The QoS requirement varies with the nature of the application. Some applications require an end-to-end mean delay guarantee on the packets being transmitted. Others, such as live streaming video, may require all packets to satisfy a hard delay deadline. In some cases, the QoS constraint is a bandwidth requirement for the user. Services such as VoIP (Voice over IP) are sensitive to delay and delay variability in the network, and require preferential treatment over other traffic [1]. Another service that requires QoS is remote healthcare, which involves collecting data about a patient at a remote location and transmitting it elsewhere to be analysed [2]. Such applications in the context of the Internet of Things (IoT) [3] will bring together different kinds of traffic with various QoS requirements and different levels of sensitivity [4].
While directly solving problems that involve QoS requirements may not be straightforward, one can look for appropriate asymptotic solutions. One approach is to study the network in the large queue length regime, and translate mean delay requirements of flows into effective bandwidth and effective delay as given by large deviations theory, and formulate these as physical layer requirements [5]. In the case of multihop networks, however, owing to the complex coupling between queues, such a formulation is not easy to obtain [6].
Backpressure based methods are common in network control; they are closely connected to control via Lyapunov optimization [7]. However, backpressure based algorithms may not provide good delay performance, especially in lightly loaded conditions [8], [9]. Techniques based on Markov Decision Processes (MDPs) are also popular [10]. In [11], [12], the problem of minimizing power while providing mean and hard delay guarantees is studied; however, the algorithm requires knowledge of system statistics and is not throughput optimal. Fluid limits [13] are an asymptotic technique used to study networks and obtain control solutions. The network parameters and variables, under a suitable scaling, are shown to converge to deterministic processes; this limit is called the fluid limit. The stability of the fluid limit has a direct bearing upon the stability of the original stochastic system [14], [15]. The technique of discrete review is used in [16]: the network is reviewed at certain time instants, and control decisions made using the observed state are applied till the next review instant. In [17] the authors use fluid limit based techniques to establish the stability of a per-queue based scheduling algorithm. A robust fluid model, obtained by adding stochastic variability to the conventional fluid model, is discussed in [18]. Another algorithm using per-hop queue length information, with a low complexity approximation that stabilizes a fraction of the capacity region, is given in [19]. However, that algorithm does not address delay QoS.
Our main contributions in this paper are summarised below.


We propose an algorithm that solves a weighted optimization problem in order to address the mean delay requirements of different flows. The weights are time-varying and state dependent, as opposed to fixed-weight schemes; this assigns dynamic priorities to the different flows.

Our policy uses the technique of discrete review, taking network control decisions only at certain review instants, thus reducing the overall control overhead compared to schemes that require computations in every slot, such as [20]. Discrete review schemes have been used in queueing networks [16]; however, that implementation is centralized and does not consider delay deadlines. The use of discrete review separates our policy from works such as [17] or [21], which involve decision making in each slot. The policies in [17], [19] are throughput optimal but do not provide other QoS.

Iterative gradient ascent is used to solve the optimization problem in a distributed manner, similar to what is done in [22]. This can be implemented easily in a cyclic manner, with message passing between the nodes after each step. The gradient calculation requires only local information, and the projection step requires knowledge of links with which a particular link interferes.

The algorithm operates on queue length information, which acts as a proxy for delay. It thus differs from [22], which uses delay information to obtain delay guarantees and hence maximizes a different objective. In addition, our algorithm is analysed extensively and shown to be throughput optimal, while simultaneously providing for mean delay QoS.
The rest of this paper is organized as follows. In Section II, we describe the system model, and formulate an optimization problem to address our requirements. In Section III, we describe the algorithm and its distributed implementation in detail. In Section IV, we obtain the fluid limit of the system under our algorithm, and show its throughput optimality. In Section V we detail the simulation results, followed by conclusions in Section VI.
II System Model and Problem Formulation
We consider a multihop wireless network (Fig. 1). The network is a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{L})$, with $\mathcal{V}$ being the set of vertices (nodes) and $\mathcal{L}$ the set of links. The system evolves in discrete time, denoted by $t$. The links are directional, with the link $(i,j)$ from node $i$ to node $j$ having a time varying channel gain $H_{ij}(t)$ at time $t$. At each node $i$, $A_i^f(t)$ denotes the cumulative process of exogenous arrivals of packets destined to node $f$, up to time $t$. The packets arrive as an i.i.d. sequence across slots, with mean arrival rate $\lambda_i^f$. All traffic in the network with the same destination $f$ is called flow $f$; the set of all flows is denoted by $\mathcal{F}$. Each flow has a fixed route to follow to its destination. At each node $i$ there are $|\mathcal{F}|$ queues, with $Q_i^f(t)$ denoting the queue length at node $i$ corresponding to flow $f$ at time $t$. The queues evolve as

$$Q_i^f(t) = Q_i^f(0) + A_i^f(t) + \sum_{j} S_{ji}^f(t) - \sum_{j} S_{ij}^f(t), \qquad (1)$$

where $S_{ij}^f(t)$ denotes the cumulative number of packets of flow $f$ that are transmitted from node $i$ to node $j$ till time $t$. The vector of queues at time $t$ is denoted by $Q(t)$. Similarly we have the arrival vector $A(t)$ and the service vector $S(t)$. We will assume that the links are sorted into interference sets $\mathcal{I}_1, \dots, \mathcal{I}_M$. At any time, only one link from an interference set can be active, and a link may belong to multiple interference sets. In this work we will assume that any two links which share a common node fall in the same interference set. We assume that each node transmits at unit power. Then, the rate of transmission from node $i$ to node $j$ can be written as $R_{ij}(t) = r(H_{ij}(t), s(t))$, where $r$ is some achievable rate function and $s(t)$ is the schedule at time $t$. We want to develop scheduling policies such that the different flows obtain their end-to-end mean delay guarantees. Our network control policy, Queue Weighted Discrete Review (QWDR), operates as follows. We have a sequence of review times $T_k$, chosen as

$$T_{k+1} = T_k + \tau_k, \qquad T_0 = 0, \qquad (2)$$

where the $\tau_k$ are called review periods. At each $T_k$, we solve the optimization problem
$$\max_{\alpha \ge 0} \; \sum_{(i,j) \in \mathcal{L}} \sum_{f \in \mathcal{F}} g_f\big(Q_i^f(T_k)\big)\, Q_i^f(T_k)\, R_{ij}\, \alpha_{ij}^f, \qquad (3)$$

$$\text{subject to} \quad \sum_{f \in \mathcal{F}} \alpha_{ij}^f \le 1 \quad \forall (i,j) \in \mathcal{L}, \qquad (4)$$

$$\sum_{(i,j) \in \mathcal{I}_m} \sum_{f \in \mathcal{F}} \alpha_{ij}^f \le 1 \quad \text{for each interference set } \mathcal{I}_m, \qquad (5)$$

assuming $Q_i^f(T_k) > 0$ for at least one link-flow pair. If all the $Q_i^f(T_k)$ are zero, we define the solution to be $\alpha_{ij}^f = 0$ for all $(i,j)$ and $f$. The first constraint corresponds to the fact that multiple flows cannot simultaneously be scheduled on a link, and the second corresponds to the interference constraints. In (3), we optimize the sum of rates weighted by the function $g_f$ as well as the queue lengths. More weight is thereby given to flows with larger backlogs, while the function $g_f$ captures the delay requirement of the flow. The $g_f$ are chosen such that flows requiring a lower mean delay have a higher weight than flows that can tolerate a higher mean delay. Also, flows whose mean delay requirements are not met should get priority over flows whose requirements have been met. The weights are therefore functions of the state, and $Q_f^{th}$ denotes a desired value for the queue length of flow $f$. We use the function
$$g_f(q) = \frac{1}{1 + \exp\big(-\beta_f\, (q - Q_f^{th})\big)}. \qquad (6)$$

Thus $g_f(q)$ is close to $1$ when $q$ is larger than $Q_f^{th}$, and reduces towards $0$ as $q$ decreases below the threshold. Thus, queue lengths (and hence delays) above certain thresholds obtain higher weights in the optimization objective. We seek to regulate the queue lengths, with a careful selection of $Q_f^{th}$, and thereby control the delays. For any flow, the threshold $Q_f^{th}$ is chosen in the following manner. If the required end-to-end mean delay of the flow with arrival rate $\lambda_f$ is $d_f$, we choose $Q_f^{th} = \lambda_f d_f$. In some sense, we are taking the queue length equivalent of the required delay using Little's Law and using it as a threshold that determines the scheduling process.
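The weight computation above can be sketched as follows. The logistic form and the sharpness parameter `beta` are illustrative assumptions (the description only requires a weight near 1 above the threshold and near 0 below it); the threshold follows the Little's Law choice just described.

```python
import math

def qos_weight(q_len, q_thresh, beta=1.0):
    """Logistic weight: close to 1 when q_len >> q_thresh, close to 0 when
    q_len << q_thresh. The logistic form and beta are illustrative choices."""
    return 1.0 / (1.0 + math.exp(-beta * (q_len - q_thresh)))

def delay_threshold(arrival_rate, target_delay):
    """Little's Law: queue-length threshold equivalent to a mean-delay target."""
    return arrival_rate * target_delay

# A flow with rate 2 packets/slot and a 10-slot mean-delay target gets a
# threshold of 20 packets; backlogs above it receive weight near 1.
q_th = delay_threshold(2.0, 10.0)
print(qos_weight(40.0, q_th), qos_weight(5.0, q_th))
```

A flow whose backlog sits well above its threshold thus dominates the objective (3), which is how unmet delay requirements gain priority.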
The network control variables $\alpha_{ij}^f$ correspond to the fraction of time in one review period for which link $(i,j)$ transmits flow $f$. Within a review period, we assume that the channel gains are fixed (slow fading), drawn as an i.i.d. sequence across review periods from a bounded distribution. Each node transmits at unit power, and the rate over link $(i,j)$ in channel state $h$ under schedule $s$ is $r_{ij}(h,s)$. Let $N(t, h)$ be the number of time slots till time $t$ in which the channel was in state $h$. Let $N(t, h, s, f)$ be the number of slots till time $t$ in which the channel state was $h$, the schedule was $s$ and flow $f$ was scheduled. Clearly, for any $t$, $(i,j)$ and $f$,

$$S_{ij}^f(t) = \sum_{h} \sum_{s} N(t, h, s, f)\, r_{ij}(h, s). \qquad (7)$$
III Gradient Ascent and Distributed Implementation
The optimization problem is separable into link-flow elements, each link-flow element being a unique (link, flow) pair in the network. Let $\mathcal{E}$ be the set of all link-flow elements. Each variable $\alpha_{ij}^f$ corresponds one-to-one with a link-flow element $e \in \mathcal{E}$; call this mapping $e = \sigma((i,j), f)$. Consider the optimization problem

$$\max_{x \in \Pi} \; F(x) = \sum_{e \in \mathcal{E}} c_e x_e, \qquad (8)$$

where $x_e$ stands for the corresponding $\alpha_{ij}^f$, $c_e = g_f(Q_i^f)\, Q_i^f\, R_{ij}$, and $\Pi$ is the set of $x$ that satisfy constraints (4) and (5); however, we remove the assumption that the variables are positive. This is equivalent to the optimization problem (3). This is a linear optimization problem with linear constraints. One can then define a sequential iteration
$$x^{(k+1)} = \mathcal{P}_{\Pi}\Big(x^{(k)} + \gamma\, \nabla F_{e_k}\big(x^{(k)}\big)\Big), \qquad e_k = k \bmod |\mathcal{E}|, \qquad (9)$$

with $x^{(0)}$ being an arbitrary initial point, $F_e(x) = c_e x_e$, and $\mathcal{P}_{\Pi}$ denoting projection onto the set $\Pi$. This is cyclic incremental gradient ascent. From Proposition 3.2 of [23], the following holds.
Lemma 1.
Thus, given the optimization problem to be solved at a particular review instant, we can use the gradient ascent method (9) to arrive at an optimal point in a distributed, sequential fashion: first take the gradient step, and then project onto $\Pi$. Since $\nabla F_e(x) = c_e \mathbf{1}_e$, where $\mathbf{1}_e$ is the unit vector along coordinate $e$, the first step is clear. The projection step is described below.
III-A Distributed Projection
Two links that share a node are assumed to interfere with each other. Therefore, an update of the optimization variables at a link will affect only those links which share one of its end points. The set $\Pi$ is defined by the intersection of half-spaces $\mathcal{H}_p$, where each $\mathcal{H}_p$ is characterized by an inequality $a_p \cdot x \le b_p$, with $a_p$ being the unit normal vector. Due to the nature of our constraints, $a_p$ is nonnegative. Each half-space corresponds to one interference constraint. During an update step, a point may violate at most two interference constraints. This is because each link has two sets of constraints, one for each end. If one constraint is violated, one step of projection suffices. If both constraints are violated, we can project iteratively, first onto one hyperplane, then the other, and so on alternately. It can be shown [24, Theorem 13.7] that this scheme converges to the projection onto the intersection of the hyperplanes. We will now obtain the analytical expression for projecting a point onto a hyperplane. Let the hyperplane boundary of $\mathcal{H}$ be defined by $a \cdot x = b$. Let the point $y$ lie outside $\mathcal{H}$, i.e., $a \cdot y > b$. Define $y' = y - \big((a \cdot y - b)/\|a\|^2\big)\, a$. It is easy to see that $y'$ satisfies $a \cdot y' = b$. Since $y - y'$ is normal to the plane boundary of $\mathcal{H}$, it follows that $y'$ is the perpendicular projection of $y$ onto $\mathcal{H}$. It can also be shown that $y'$ satisfies all the other hyperplane constraints that $y$ does.
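The projection step can be sketched as follows. The function names and the 0/1 form of the interference constraint normals are assumptions for illustration; only the perpendicular-projection formula and the alternating scheme come from the text above.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto {y : a.y <= b}; no-op if x already satisfies it."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a  # perpendicular projection onto a.y = b

def project_two_constraints(x, a1, b1, a2, b2, iters=50):
    """Alternating projections onto the two constraints an update can
    violate; the scheme converges to the intersection (cf. [24, Thm 13.7])."""
    for _ in range(iters):
        x = project_halfspace(x, a1, b1)
        x = project_halfspace(x, a2, b2)
    return x

# Two interference constraints of the form "sum of time fractions <= 1":
a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([0.0, 1.0, 1.0])
x = project_two_constraints(np.array([0.9, 0.8, 0.7]), a1, 1.0, a2, 1.0)
print(a1 @ x <= 1 + 1e-9, a2 @ x <= 1 + 1e-9)
```

Because each half-space touches only the variables in one interference set, a node can carry out this computation knowing just the links it interferes with.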
Now we will describe the complete algorithm.
III-B Algorithm Description
At each review instant $T_k$, the problem (3)-(5) is solved in a distributed fashion. The nodes calculate $\alpha_{ij}^f$ for all $(i,j)$ and $f$, and use these till the end of the review cycle. We will now describe how the $\alpha_{ij}^f$ are calculated at each node.
We assume that there is some convenient ordering of the link-flow elements, and computation proceeds in that order. Let this order be $e_1, e_2, \dots, e_{|\mathcal{E}|}$, and assume that the components of the vector $x$ are arranged in the same order. At link-flow element $e_1$, we update the first component of $x$ as

$$x_1 \leftarrow x_1 + \gamma\, c_1. \qquad (10)$$

Here $c_1 = g_f(Q_i^f)\, Q_i^f\, R_{ij}$, where $e_1 = \sigma((i,j), f)$. The node then calculates the inner products $a_{p_1} \cdot x$ and $a_{p_2} \cdot x$, where $p_1$, $p_2$ correspond to the two interference constraints that the update step may violate. These correspond to constraints at the two nodes on which link $(i,j)$ is incident. If exactly one constraint, say $p_1$, is violated, we can project the point back to the constraint set by calculating $\delta = (a_{p_1} \cdot x - b_{p_1})/m$, where $m$ is the number of links in that interference set. The element communicates $\delta$ to all elements in its interference set, and all these elements do the update $x_e \leftarrow x_e - \delta$. If both constraints are violated, the above projection step has to be repeated alternately, first for the elements corresponding to constraint $p_1$, then for $p_2$, again for $p_1$ and so on. After projection, the node passes $x$ to the node corresponding to the next component of $x$, and the process is repeated cyclically, i.e., we repeat step (10) with the index 1 replaced by 2, then by 3 and so on, across the nodes till a stopping time. At the stopping time, any negative components of $x$ are set to zero. For each interference set, we then check its constraint; if the constraint is not met, we scale the corresponding components down appropriately. This ensures compliance with the constraints.
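The cyclic update, projection, clipping and final scaling described above can be sketched as follows. The step size `gamma`, the fixed-round stopping rule and the equal-split projection are illustrative assumptions; in the actual distributed algorithm each link-flow element performs its own update and communicates only with its interference neighbours.

```python
import numpy as np

def qwdr_iterate(c, constraints, gamma=0.05, rounds=200):
    """Cyclic incremental gradient ascent for the linear objective c.x.
    `constraints` is a list of (index_set, bound) interference constraints
    of the form sum_{e in index_set} x_e <= bound. Illustrative sketch."""
    x = np.zeros(len(c))
    for _ in range(rounds):
        for e in range(len(c)):           # visit link-flow elements cyclically
            x[e] += gamma * c[e]          # gradient of c.x w.r.t. x_e is c_e
            for idx, bound in constraints:
                excess = x[idx].sum() - bound
                if excess > 0:            # share the excess equally in the set
                    x[idx] -= excess / len(idx)
    x = np.maximum(x, 0.0)                # clip negatives at the stopping time
    for idx, bound in constraints:        # final scaling to restore feasibility
        s = x[idx].sum()
        if s > bound:
            x[idx] *= bound / s
    return x

# Three link-flow elements; elements (0,1) and (1,2) share interference sets.
c = np.array([3.0, 1.0, 2.0])             # queue-weighted rate coefficients
cons = [(np.array([0, 1]), 1.0), (np.array([1, 2]), 1.0)]
print(qwdr_iterate(c, cons).round(2))
```

With these coefficients the iteration drives the heavily weighted elements toward full time fractions and starves the lightly weighted middle element, as the weighted objective (3) intends.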
The complete algorithm is given below, as Algorithm 1, QWDR (Queue Weighted Discrete Review), which uses in turn, Algorithms 2, 3 and 4. The last algorithm schedules flows on a link for a fraction of time equal to the corresponding .
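The last step, turning the fractions into a slot schedule for one review period, might look as follows. The largest-remainder rounding is an assumption for illustration; the rounding in Algorithm 4 may differ, and the analysis later only needs the round-off error to vanish in the fluid limit.

```python
import math

def fraction_to_slots(alpha, period_len):
    """Convert per-(link,flow) time fractions into integer slot counts for
    one review period of length period_len slots. Fractions may sum to less
    than 1; genuinely unallocated time is left idle. Illustrative sketch."""
    raw = {e: a * period_len for e, a in alpha.items()}
    slots = {e: math.floor(v) for e, v in raw.items()}
    # slots lost to flooring, handed back by largest fractional remainder
    deficit = round(sum(raw.values())) - sum(slots.values())
    for e in sorted(raw, key=lambda e: raw[e] - slots[e], reverse=True):
        if deficit <= 0:
            break
        slots[e] += 1
        deficit -= 1
    return slots

# Two flows sharing one link: 50% and 30% of a 10-slot review period.
print(fraction_to_slots({"f1": 0.5, "f2": 0.3}, 10))
```

The per-period round-off is at most one slot per element, which is what makes the actual and ideal allocations coincide in the fluid limit of Section IV.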
IV Fluid Limit
Define the system state to be $X(t) = (Q(T_k), U(t))$, where $Q(T_k)$ represents the queue values at the last review instant $T_k \le t$, and $U(t)$ represents the cumulative allocation vector from the last review instant to the current time. From the queue evolution (1) and the allocation, it is clear that the system evolves as a discrete time countable Markov chain, since at any time the next state may be computed by solving the optimization (3) with the queue values at the last review instant, and using the cumulative allocation process to determine how allocation must be done in the next slot to satisfy the solution of (3). The associated norm $\|X\|$ is taken to be the sum of the components of $X$. Positive recurrence of this Markov chain would imply stability; we will show positive recurrence of this Markov chain via its fluid limit. Consider a real valued process $x(t)$ evolving over (discrete) time, with $x(0)$ being its initial state. Consider a sequence of these processes indexed by $r = \|x(0)\| \to \infty$. Define the corresponding scaled (continuous time) process

$$\bar{x}^r(t) = \frac{x(\lfloor r t \rfloor)}{r}, \qquad t \ge 0.$$

Define the scaled queue, arrival, service and allocation processes as above, and for the corresponding vector processes define the scaled vector processes componentwise. The term fluid limit denotes the limits obtained as $r \to \infty$ for these processes. We assume that the rates satisfy $r_{ij} \le R_{\max}$ for some finite $R_{\max}$; this holds since the channel gains are assumed bounded and the transmit power is fixed. Consider the scaled service process $\bar{S}^r$.
We use the following definition.
Definition 1.
A sequence of functions $f_n$ is said to converge uniformly on compact sets (u.o.c.) to $f$ if $f_n \to f$ uniformly on every compact subset of the domain.
We obtain the following result for the components of .
Theorem 1.
For almost every sample path, there exists a subsequence $r_n \to \infty$ such that,
(11)  
(12)  
(13)  
(14) 
uniformly on compact sets, for all $i$, $j$ and $f$. The limiting functions are also Lipschitz continuous, and hence almost everywhere differentiable; the points at which they are differentiable are called regular points. In addition, the limiting functions satisfy the following properties.
(15) 
(16) 
(17) 
(18) 
(19) 
where satisfies
(20) 
where the dot indicates the derivative, at regular points $t$.
Proof.
The Strong Law of Large Numbers implies
for any subsequence with $r \to \infty$. Thus we obtain the first parts of (11) and (15). Since the rates are bounded, the increments of the scaled service process are uniformly bounded. Therefore, for any two time instants, we have
Thus, the family of functions is uniformly bounded and equicontinuous. By the Arzelà-Ascoli theorem [25], for any sequence with $r \to \infty$ there exists a subsequence along which convergence holds w.p. 1, u.o.c. This implies the second part of (11). The resulting limit is Lipschitz, being the uniform limit of a sequence of Lipschitz functions. The first part of (12) and the second part of (15) follow from the Strong Law of Large Numbers applied to the channel process. From equation (1), we can see that the terms on the right hand side converge under this scaling. Consequently,
w.p. 1, u.o.c., as $r \to \infty$. Like the arrival limit, both $\bar{Q}$ and $\bar{S}$ will be Lipschitz. Equation (16) follows by observing that the scaled queue process satisfies the queueing equation (1), and applying the appropriate limit in that equation. Since the fluid variables $\bar{Q}$ and $\bar{S}$ are Lipschitz, they are differentiable almost everywhere. At the points where they are differentiable, we obtain (17) by differentiating (16). The first part of (18) follows from (7).
To see the second part of (12), observe that, by definition,
(21) 
for all $t$. Applying the Arzelà-Ascoli theorem, we obtain the subsequence that satisfies the second part of (12). Since the limit is almost everywhere differentiable, (19) follows.
In obtaining the fluid limit of the allocation process, we will not distinguish between the actual and the ideal allocation, since they converge to the same limit. The actual allocation differs from the ideal allocation due to round-off errors. Bounding the possible errors in each review period, and summing the round-off errors over the review periods up to time $t$, the accumulated error is $o(r)$ under the fluid scaling. Hence, the fluid limits of the actual and the ideal allocation are equal.
To show (20), observe that
Hence we have
Multiplying the LHS and RHS by the appropriate weights, summing over $i$, $j$, $f$, and taking the limit, the LHS becomes
(22) 
where and . Since the allocation satisfies
where was the previous review point with . Since , we can write as
(23) 
The RHS can therefore be written as
Using (18) and (15), this becomes
(24)  
(25) 
Dividing (22) and (25) by $r$, equating, and taking $r \to \infty$, we obtain (20). The second part of (13) follows from (23). To obtain (14), observe the corresponding bound over a single review period; taking $r \to \infty$, (14) follows. The second part of (18) follows as well. ∎
Denote the vector of all the fluid queue components by $\bar{Q}$. We will use the following result to establish the stability of the network.
Theorem 2.
(Theorem 4 of [15]) Let $X(t)$ be a Markov process with $\|X\|$ denoting its norm. If there exist an $\epsilon > 0$ and a time $T$ such that, for a scaled sequence of processes,

$$\limsup_{r \to \infty} \mathbb{E}\big[\|\bar{X}^r(T)\|\big] \le 1 - \epsilon,$$

then the process is stable (positive recurrent).
Using this result, we will establish stability of the network under our algorithm and show that it is throughput optimal. We first define the capacity region of the network. A schedule $s$ is a mapping from $\mathcal{E}$, the set of all link-flow elements, to $\{0, 1\}$. Let the set of all feasible schedules be denoted by $\mathcal{S}$, and let $conv(\mathcal{S})$ be its convex hull. We define the capacity region as follows.
Definition 2.
The capacity region $\Lambda$ is given by the set of all arrival rate vectors $\lambda$ for which there exists a $\phi(h) \in conv(\mathcal{S})$ for each channel state $h$ such that

$$\lambda_i^f + \sum_{j} \sum_{h} \pi_h\, \phi_{ji}^f(h)\, R_{ji}(h) \;\le\; \sum_{j} \sum_{h} \pi_h\, \phi_{ij}^f(h)\, R_{ij}(h) \qquad \forall\, i, f, \qquad (26)$$

where $\phi_{ij}^f(h)$ is the component of $\phi(h)$ corresponding to link $(i,j)$ and flow $f$, $\pi_h$ is the stationary probability that the channels are in state $h$, and $R_{ij}(h)$ is the rate across link $(i,j)$ when the channels are in state $h$. Now we establish the throughput optimality of our policy.
Theorem 3.
The policy QWDR stabilizes the process $X(t)$ for all arrival rates in the interior of $\Lambda$.
Proof.
To prove this, we will first pick a suitable Lyapunov function, whose drift will be shown to be negative.
Pick an arrival rate matrix $\lambda$ in the interior of $\Lambda$. This implies that there exist rates and a slack $\epsilon > 0$ that satisfy
(27)
for each $i$ and $f$. These rates correspond to the terms in (26). Consider the Lyapunov function
where the dot indicates the derivative. This is a continuous function of the fluid queue vector, equal to zero at the origin. We can write the derivative,
where the inequality follows from (27). Observing that
and that a similar equation holds with the corresponding quantities interchanged, it follows that if we show
(28) 
it will imply the desired negative drift. We have
where the second inequality follows from (20). Now, if we show that whenever , (28) will follow. To see this, assume that at some ,