Sleeping Multi-Armed Bandit Learning for Fast Uplink Grant Allocation in Machine Type Communications

10/30/2018 ∙ by Samad Ali, et al.

Scheduling fast uplink grant transmissions for machine type communications (MTCs) is one of the main challenges of future wireless systems. In this paper, a novel fast uplink grant scheduling method based on the theory of multi-armed bandits (MABs) is proposed. First, a single quality-of-service metric is defined as a combination of the value of data packets, maximum tolerable access delay, and data rate. Since full knowledge of these metrics for all machine type devices (MTDs) cannot be known in advance at the base station (BS) and the set of active MTDs changes over time, the problem is modeled as a sleeping MAB with stochastic availability and a stochastic reward function. In particular, given that, at each time step, the knowledge on the set of active MTDs is probabilistic, a novel probabilistic sleeping MAB algorithm is proposed to maximize the defined metric. Analysis of the regret is presented and the effect of the prediction error of the source traffic prediction algorithm on the performance of the proposed sleeping MAB algorithm is investigated. Moreover, to enable fast uplink allocation for multiple MTDs at each time, a novel method is proposed based on the concept of best arms ordering in the MAB setting. Simulation results show that the proposed framework yields a three-fold reduction in latency compared to a random scheduling policy since it prioritizes the scheduling of MTDs that have stricter latency requirements. Moreover, by properly balancing the exploration versus exploitation tradeoff, the proposed algorithm can provide system fairness by allowing the most important MTDs to be scheduled more often while also allowing the less important MTDs to be selected enough times to ensure the accuracy of estimation of their importance.


I Introduction

The fifth generation (5G) of cellular communication networks is expected to support Internet of Things (IoT) [2] services and applications such as virtual reality [3], autonomous vehicles [4], and unmanned aerial vehicles [5]. To enable such emerging IoT applications, 5G systems must have native support for machine type communications (MTCs). In contrast to enhanced mobile broadband (eMBB) services that require high data rates for large data packets, in MTC, a large number of machine-type devices (MTDs) must communicate small data packets [6]. Due to the heterogeneous nature of IoT applications, MTC data packets have fundamentally novel requirements in terms of latency, reliability, and security [7, 8]. These requirements bring forward new cellular networking challenges that include random access channel congestion, signaling overhead management, and the need to satisfy various quality-of-service (QoS) requirements for different IoT applications [9]. Moreover, optimizing the wireless system in terms of throughput and spectral efficiency is challenging since it is hard to acquire the uplink channel state information (CSI) of transmitting MTDs at the base station (BS). Therefore, reducing the signaling overhead and latency while avoiding random access channel congestion are important open problems in MTCs.

MTC can be categorized into two groups depending on whether or not scheduling requests are sent by MTDs. The first MTC group is coordinated transmission, in which MTDs perform a random access process and the BS schedules MTDs, similar to conventional cellular systems. Clearly, this method is inefficient since the data packets are small and, hence, the signaling-to-data ratio is large. In the second method, known as uncoordinated transmission, to reduce the signaling overhead, MTDs choose a random uplink radio resource and transmit their data without sending any scheduling request. Both approaches can suffer from severe collisions among transmissions because the number of MTDs is often much larger than the number of available resources. In coordinated transmission, collisions can occur during random access, while in the uncoordinated method they occur during packet transmission. In a massive MTC [10] scenario, such problems become even more challenging to address. The authors in [11] and [12] provide an extensive overview of several proposed solutions for such problems. One possible solution is known as access class barring (ACB) [13], in which different access classes are assigned to MTDs and, in a massive access scenario, MTDs with lower classes are barred from transmission. The authors in [14] leverage ideas from non-orthogonal multiple access and apply them to the random access process so as to identify random access requests from multiple MTDs with the same preamble. In [15], the authors propose a method to reduce the signaling overhead of random access using signatures and Bloom filtering. Correlation between the transmission patterns of different MTDs is exploited in [16] to optimize the random access process by reducing collisions. To avoid wasting radio resources in random access collisions, the authors in [17] propose to attach the MTD identity information to the physical random access channel, which prevents the BS from allocating uplink resources to devices that collided. Clearly, most of the research in this area [13, 14, 15, 16, 17] focuses on optimizing the random access process for MTC and solving problems associated with collisions.

For uncoordinated transmission, in [18], the authors present a resource allocation approach for a massive number of devices with reliability and latency guarantees. Meanwhile, the work in [19] presents a game-theoretic model for optimizing the coexistence of MTDs with cellular users in the uplink period. Throughput and outage analyses of uncoordinated non-orthogonal multiple access (NOMA) for massive MTC are presented in [20], in which the authors compare the performance of successive joint decoding (SJD) to successive interference cancellation (SIC). A dynamic compressed sensing (CS) multi-user detection scheme is used in [21] to exploit user correlation in uncoordinated uplink NOMA for joint user activity detection and decoding, which performs much better than conventional CS-based schemes. Even though these prior solutions can improve the performance of MTCs, coordinated access still suffers from heavy signaling overhead and collisions [10, 11, 13, 15, 16, 17]. Moreover, uncoordinated transmissions also experience non-negligible collisions, particularly in massive access scenarios [18, 19, 20, 21]. The main drawback of this prior art is that it relies solely on the random access process (for sending scheduling requests in coordinated transmission or sending data packets in the uncoordinated scheme), whose performance is optimal only when the number of competing devices is equal to the number of available resources. This clearly does not hold in massive MTC cases since the number of radio resources is limited, and hence, novel solutions are needed to address the uplink resource allocation problem for massive MTCs.

To address the challenges of random access congestion, collisions, and high signaling overhead, a middle ground between a) fully scheduled transmission based on scheduling requests and b) uncoordinated transmission can be achieved by using the concept of a fast uplink grant [22, 23]. With the fast uplink grant, the BS sends an uplink grant to MTDs without the MTDs having sent scheduling requests. If an MTD has data to transmit, it proceeds with the transmission; otherwise, the radio resource is wasted [23]. An overview of the challenges and opportunities of the fast uplink grant is provided in [24], where two main challenges are outlined. The first challenge is that the set of MTDs that have data to transmit should be known to the BS. The second challenge is the optimal selection of MTDs for fast uplink grant allocation: when the number of active MTDs is larger than the number of fast uplink grants that can be allocated, an optimal allocation policy must be developed. Therefore, to realize fast uplink grant allocation, the BS must have a mechanism to predict the set of active MTDs at each time. This is a type of traffic modeling known as source traffic modeling [25], which is inherently different from aggregate traffic modeling [26] at the BS. Source traffic prediction can be categorized into two groups: periodic traffic and event-driven traffic. Clearly, periodic traffic prediction is easier than event-driven traffic prediction. For periodic traffic, one can adopt calendar-based pattern mining techniques [27]. For event-driven traffic, event detection and source traffic prediction mechanisms must be developed. In [28], an MTD traffic prediction method based on the so-called directed information is presented for source traffic prediction. Using the method proposed in [28], upon detection of an irregular transmission, the set of MTDs that will become active due to the same event can be detected. The authors in [29] propose a predictive resource allocation scheme for event-driven MTC in which MTDs are physically located along a line so that their traffic pattern can be predicted.

The second step after predicting the source traffic is the optimal allocation of the fast uplink grants. If the BS has full knowledge of the QoS requirements of all the MTDs, this task is rather trivial. However, in practice, due to privacy and security issues, as well as the financial value of the data for the application, the MTDs might not reveal the nature of the application to the BS. Moreover, the QoS requirements of the MTDs might change over time, due to changes in the channel quality between the MTDs and the BS and the presence of various applications that must send data through a single MTD. Therefore, the BS must perform fast uplink grant allocation with limited or no prior knowledge about the QoS requirements of the MTDs, and use the information revealed to the BS after each transmission for future fast uplink grant allocation. One natural tool for such a task is multi-armed bandit (MAB) theory, which deals with a class of reinforcement learning problems [30]. MABs have been previously used for other wireless communications problems (e.g., see [31] for a review of applications of MABs in small cells). The authors in [32] use MABs for channel selection in device-to-device (D2D) communications, and in [33], MABs are used for distributed user association in energy harvesting small cell networks. MABs are also proposed for multi-user channel allocation in cognitive radio networks in [34]. All of the aforementioned works use MABs in problems where full information on the state of the system is not available and resource allocation must be performed with no prior knowledge. Optimal fast uplink grant allocation is a similar problem, in which resource allocation is needed with no prior knowledge of the QoS requirements of the MTDs in the system. Therefore, MAB theory is a natural tool for our problem.

The main contribution of this paper is to address the problem of optimal fast uplink grant allocation with no prior information about the QoS requirements of the MTDs by using MAB theory. We consider that the BS is not able to perfectly predict the set of active MTDs and, hence, a probability of activity is associated with each MTD at any given time. Therefore, the BS has probabilistic knowledge of the set of active MTDs, and we propose a novel MAB algorithm for allocating the fast uplink grant under these conditions. The contributions of this paper can, therefore, be summarized as follows:

  • In order to capture a diverse set of QoS metrics during scheduling, we introduce a compound QoS metric that is a combination of three MTD-specific metrics: a) the value of the data packets, b) the maximum tolerable access delay, and c) the data rate. We concretely define this metric by proposing a novel method to model the access delay, mapping it to a value between zero and one using an asymmetric sigmoid known as the Gompertz function.

  • To find the optimal MTD that the BS must schedule at each time slot, a novel probabilistic sleeping MAB algorithm is proposed. Sleeping MABs are appropriate for problems where the set of active MTDs changes over time. Our probabilistic version of the algorithm takes the probabilistic knowledge of the activity of each MTD into account. The proposed algorithm combines the probability of activation of each arm with the concept of the upper confidence bound (UCB) in the context of sleeping MABs.

  • We rigorously analyze the regret of the proposed MAB algorithm and decouple the effect of the MTC source traffic prediction errors and the learning process on the regret. We analytically derive the conditions under which the errors in MTC source traffic prediction lead to selecting an MTD with lower utility value thereby increasing the regret of the proposed MAB algorithm.

  • Simulation results show that for any source traffic prediction algorithm with good accuracy, the proposed algorithm is optimal since it achieves logarithmic regret. For example, the proposed framework achieves up to three-fold improvement in the access delay compared to a baseline random scheduling policy.

  • We extend the proposed probabilistic sleeping MAB from single MTD selection to multiple MTD selection by using the concept of best ordering of bandits, and we provide an algorithm for scenarios where multiple MTDs can be scheduled at any given time. In this method, the MTDs with the highest UCB values are selected for transmission, which achieves much better performance in terms of delay and throughput than the baseline random allocation policy; our simulation results show a two-fold latency improvement.

The rest of the paper is organized as follows. Section II presents the system model and problem formulation. In Section III, we introduce the proposed probabilistic sleeping MAB solution and its extension to multiple MTDs and provide the regret analysis and study the effect of the source traffic prediction accuracy on the performance of the MAB algorithm. Numerical results are presented in Section IV and conclusions are drawn in Section V.

II System Model and Problem Formulation

Consider the uplink of a cellular system composed of one BS and a set of MTDs that use the fast uplink grant. Scheduling is done at the BS, and a fast uplink grant is sent to each scheduled MTD. We assume that the total available bandwidth is divided into resource blocks of a given bandwidth and time duration. Without loss of generality, we consider the problem of selecting one MTD for the fast uplink grant in each time slot. Hereinafter, we use $m$ to index MTDs and $t$ to index time slots. Due to the heterogeneous nature of IoT applications, packets are assumed to have different QoS requirements. The system model is presented in Fig. 1.

Fig. 1: Illustration of system model. First, the set of active MTDs are predicted. Next, selected MTDs receive the fast uplink grants and transmit their data.

II-A Performance Metrics

We now define three performance metrics that are combined to build a single metric that is used in the problem formulation.

II-A1 Value of information

At time $t$, for each MTD $m$, we define the value of information as the assessment of the utility of an information product in a specific usage context [35]. Hence, each packet that arrives at the queue of an MTD has an associated value $v_m(t)$. According to [35], this value can be determined by a relative pairwise comparison of all IoT applications and the use of the so-called analytic hierarchy process (AHP) to calculate an importance weight for each packet. This normalized value is derived in the form of a percentage of importance, and hence we choose $v_m(t) \in [0,1]$.

II-A2 Maximum tolerable access delay

Delay in a wireless communication network consists of several components: the processing delay $\delta^{\mathrm{P}}_m$, which is a function of the hardware and software used by the MTD, the queuing delay $\delta^{\mathrm{Q}}_m$, and the transmission delay $\delta^{\mathrm{T}}_m$, which pertains to the transmission of the data packet through the physical medium. Once the data is transmitted and received at the BS, the time needed for the packet to travel to its final destination through a network of wireless, wired, or fiber links is called the routing delay $\delta^{\mathrm{R}}_m$. Finally, the access delay, which is the main focus of this work, is the time duration from the moment that the packet is ready for transmission until the MTD receives the uplink resource blocks to transmit it. For each data packet of MTD $m$, we consider a maximum tolerable access delay $\delta_m(t)$, defined as the total delay that can be tolerated from the time instant at which the data packet is ready to be transmitted at the MTD queue until it is scheduled to be sent. To calculate the maximum tolerable access delay for each MTD, we first assume that all the other delay components are modeled and subtracted from the total tolerable delay $\delta^{\mathrm{tot}}_m$ of the packet. We assume $\delta^{\mathrm{P}}_m$ and $\delta^{\mathrm{T}}_m$ to be constant since the packets are small and always generated by the same devices, and the MTDs are either stationary or have low mobility. Since most MTDs have sparse packet transmissions, the service time is considerably shorter than the packet inter-arrival times and, hence, the queuing time resulting from other packets in the system is considered negligible. Once all the delay components are modeled, we can calculate the maximum tolerable access delay as follows:

\delta_m(t) = \delta^{\mathrm{tot}}_m - \delta^{\mathrm{P}}_m - \delta^{\mathrm{Q}}_m - \delta^{\mathrm{T}}_m - \delta^{\mathrm{R}}_m. \qquad (1)

Since these delay components are constant and each application transmitting through the MTD might have different QoS requirements, the maximum tolerable access delay will differ across MTDs at any given time. Moreover, once a packet is in the MTD queue and waits to access the channel, its remaining tolerable access delay shrinks after each time step of waiting. Therefore, the packets of each MTD might have different tolerable access delay requirements at different times.
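As a simple illustration of this delay budgeting, the short Python sketch below subtracts assumed processing, queuing, transmission, and routing components from a packet's total delay budget; all numbers and names are hypothetical and only meant to show the arithmetic behind (1).

```python
# Illustrative sketch of the delay decomposition in (1); every value below is
# hypothetical and only shows how the access-delay budget is obtained.

def max_tolerable_access_delay(total_budget_ms, processing_ms, queuing_ms,
                               transmission_ms, routing_ms):
    """Access-delay budget left after subtracting the other delay components."""
    return total_budget_ms - (processing_ms + queuing_ms
                              + transmission_ms + routing_ms)

# Example: a packet with a 100 ms end-to-end budget and small fixed components.
budget = max_tolerable_access_delay(total_budget_ms=100.0, processing_ms=2.0,
                                    queuing_ms=0.0, transmission_ms=1.0,
                                    routing_ms=5.0)
print(f"Maximum tolerable access delay: {budget} ms")  # prints 92.0 ms
```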

II-A3 Throughput

Once the signal of MTD $m$ is received at the BS, the signal-to-noise ratio (SNR) is given by:

\gamma_m(t) = \frac{P_m |h_m(t)|^2}{N_0 W}, \qquad (2)

where $h_m(t)$ represents the channel between MTD $m$ and the BS, $N_0$ is the power spectral density of the noise, $W$ is the bandwidth of the transmission channel, and $P_m$ is the transmit power of MTD $m$. The channel is modeled as $h_m(t) = g_m(t)\sqrt{\beta_m}$, where $g_m(t)$ represents the small-scale Rayleigh fading, assumed to be independent at different times [36]. Large-scale fading is captured by $\beta_m$, which combines the path loss and the log-normal shadowing. We use the 3GPP path loss model from the BS to the MTDs [37]. Subsequently, the rate is given by:

R_m(t) = W \log_2\!\left(1 + \gamma_m(t)\right). \qquad (3)
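To make this channel and rate model concrete, the following Python sketch draws one realization of Rayleigh small-scale fading and log-normal shadowing and then evaluates (2) and (3). The specific path loss expression, the shadowing standard deviation, and all numeric parameters are illustrative assumptions, not the 3GPP values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative parameters (not the values used in the paper).
P_tx_dbm = 10.0        # MTD transmit power
W_hz = 180e3           # channel bandwidth
N0_dbm_hz = -174.0     # noise power spectral density
d_km = 0.2             # BS-MTD distance in kilometers
shadow_std_db = 8.0    # log-normal shadowing standard deviation

# Placeholder distance-based path loss (stands in for the 3GPP model of [37]).
path_loss_db = 128.1 + 37.6 * np.log10(d_km)
shadowing_db = rng.normal(0.0, shadow_std_db)

# Rayleigh small-scale fading: |g|^2 is exponentially distributed with mean 1.
g_squared = rng.exponential(1.0)

# SNR as in (2): received power divided by the noise power in the band.
rx_dbm = P_tx_dbm - path_loss_db - shadowing_db + 10 * np.log10(g_squared)
noise_dbm = N0_dbm_hz + 10 * np.log10(W_hz)
snr_linear = 10 ** ((rx_dbm - noise_dbm) / 10)

# Rate as in (3): Shannon capacity over the channel bandwidth.
rate_bps = W_hz * np.log2(1 + snr_linear)
print(f"SNR = {10 * np.log10(snr_linear):.1f} dB, rate = {rate_bps / 1e3:.1f} kbps")
```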

II-B Problem Formulation

We first normalize the rate as well as the maximum tolerable access delay to values within the range $[0,1]$. For the rate, we simply divide the achieved rate $R_m(t)$ by the maximum rate $R_{\max}$ that can be achieved by the node having the best channel to the BS. We fix $R_{\max}$ for the entire period by using the knowledge of the set of all the MTDs that are registered in the network. Thus, we use the normalized rate $\hat{R}_m(t) = R_m(t)/R_{\max}$.

To normalize the maximum tolerable access delay, we map it to a number in $[0,1]$ using a function $f(\cdot)$. To do this, we use the Gompertz function [38] with slight modifications; this is an asymmetric sigmoid function that is widely used in growth modeling. The rationale behind using this function is that it allows controlling both the point at which the value of the function starts to decrease and the steepness of the curve. The Gompertz function [38] is given by $g(x) = a e^{-b e^{-c x}}$, where the parameter $a$ defines the asymptote of the function, $b$ sets the displacement along the time axis, and $c$ determines the growth rate, i.e., the steepness of the function. The Gompertz function is increasing in time. Moreover, since smaller values of the maximum tolerable access delay mean that the MTD has delay-sensitive data to transmit, and hence should have a higher value in the utility function, we modify the Gompertz function to create a new function $f(\cdot)$ that is decreasing with time, as follows:

(4)

Fig. 2 shows the modified Gompertz function for different values of the control parameters. Any scheduling algorithm performs better in terms of delay if it selects the MTDs with the smallest maximum tolerable access delay, i.e., the ones that maximize the function $f(\cdot)$.

Fig. 2: Modified Gompertz function for modeling latency for different values of the control parameters.
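Because the exact modified form in (4) is not reproduced here, the Python sketch below uses one plausible decreasing variant of the Gompertz curve, f(d) = a(1 - exp(-b exp(-c d))), purely to illustrate how the control parameters set the knee and steepness of the delay-to-utility mapping; this is an assumption, not necessarily the exact modification used by the authors.

```python
import numpy as np

def delay_utility(delay_ms, a=1.0, b=10.0, c=0.05):
    """Assumed decreasing Gompertz-style mapping from the maximum tolerable
    access delay to a value in [0, a]: short deadlines map close to a, long
    deadlines decay toward 0. Parameters a, b, c play the roles of the
    asymptote, displacement, and steepness controls described in the text."""
    d = np.asarray(delay_ms, dtype=float)
    return a * (1.0 - np.exp(-b * np.exp(-c * d)))

delays_ms = np.array([1, 10, 50, 100, 200])
print(dict(zip(delays_ms.tolist(), np.round(delay_utility(delays_ms), 3).tolist())))
```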

For each MTD $m$, we can now define a utility function $U_m(t)$ that combines all of the QoS metrics:

U_m(t) = w_1 v_m(t) + w_2 f\!\big(\delta_m(t)\big) + w_3 \hat{R}_m(t). \qquad (5)

In (5), $w_1$, $w_2$, and $w_3$ are weight parameters used to adjust the importance of each metric, with $w_1 + w_2 + w_3 = 1$. The best performance at time $t$ is achieved if an MTD is selected such that:

\max_{m \in \mathcal{A}(t)} \; U_m(t), \qquad (6)
\text{s.t.} \quad R_m(t) \geq R_0,

where $\mathcal{A}(t)$ is the set of active MTDs and $R_0$ is the rate threshold required for data transmission. If $v_m(t)$, $\delta_m(t)$, $R_m(t)$, and the set of active MTDs were available at the BS, solving (6) would be straightforward. However, in real-world networks, having such information at the BS is impractical for the following reasons. First, MTDs would need to send scheduling requests to the BS using periodically available random access slots. Sending scheduling requests in MTC is not desirable since it: a) will most likely fail in a massive access scenario, b) requires large signaling overhead compared to the small data packet size, and c) increases the latency. This motivates the development of a predictive resource allocation scheme, where the set of active MTDs is predicted at the BS. Second, for optimal performance, the BS must know the channel state information (CSI) of the MTDs, their data values, and their exact latency requirements. Clearly, in practical MTD networks, the BS does not have full knowledge of the parameters of the metric defined in (5). For example, since the data packets are small, having instantaneous CSI at the BS requires signaling overhead that is almost equal to the data size, which is naturally inefficient. Moreover, as discussed earlier, the tolerable access delay and the value of the data packets can be different each time. Therefore, it is appropriate to solve problem (6) using online learning methods with limited or no information [30] at the BS. In this case, the learning algorithm can learn the statistical properties of the CSI, the tolerable access delay, and the value of the data packets over time. Next, we propose a novel online algorithm based on MAB theory [30] to solve (6).
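To make the combined metric in (5) and the selection rule in (6) concrete, the sketch below forms a weighted sum of the three normalized metrics and picks the best active MTD under full information, i.e., the offline benchmark that the learning algorithm tries to approach. The weights and per-MTD numbers are hypothetical.

```python
import numpy as np

def utility(value, delay_metric, norm_rate, w=(0.4, 0.4, 0.2)):
    """Weighted combination of the three normalized QoS metrics, as in (5).
    The weights are assumed to sum to one; the values here are illustrative."""
    return w[0] * value + w[1] * delay_metric + w[2] * norm_rate

# Hypothetical snapshot of four active MTDs: (value, delay metric, normalized rate).
metrics = np.array([[0.9, 0.2, 0.5],
                    [0.4, 0.9, 0.7],
                    [0.6, 0.6, 0.1],   # this one is below the rate threshold
                    [0.7, 0.5, 0.9]])
rate_threshold = 0.2

# Full-information benchmark of (6): maximize utility subject to the rate constraint.
feasible = metrics[:, 2] >= rate_threshold
utilities = np.where(feasible,
                     utility(metrics[:, 0], metrics[:, 1], metrics[:, 2]),
                     -np.inf)
print("utilities:", np.round(utilities, 3), "-> schedule MTD", int(np.argmax(utilities)))
```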

III Proposed Multi-Armed Bandit Framework and Algorithm

III-A MAB theory and MAB problem formulation

In a multi-armed bandit problem, a player (decision maker) pulls an arm from a set of available arms, i.e., selects an action from a set of available actions. Each arm, after being played, generates a reward drawn from a distribution that is not known to the decision maker; only the reward of the selected arm is observed. The aim of the player is to maximize a cumulative reward or, equivalently, to minimize a cumulative regret. Regret is defined as the difference between the reward of the best possible arm at each game instant and the reward of the arm that is actually played.

Let $r_i(t)$ be the reward of playing arm $i$ from the set of arms at time $t$, and let $r^*(t)$ be the highest possible reward that could be achieved at time $t$ from the set of all arms. The regret up to time $T$ is defined as [30]:

\mathcal{R}(T) = \mathbb{E}\left[\sum_{t=1}^{T} \big(r^*(t) - r_{I(t)}(t)\big)\right], \qquad (7)

where $I(t)$ denotes the arm played at time $t$, and the expectation is taken over the random choices of the algorithm as well as the randomness in the reward generation. In our problem, each MTD is seen as an arm in the MAB setting, and the BS is the player that selects the best arm at each time; after playing that arm, it receives a reward generated according to the metric defined in (5). Hence, the reward generated by MTD $m$ is:

r_m(t) = U_m(t)\,\mathbb{1}\!\left\{R_m(t) \geq R_0\right\}\mathbb{1}\!\left\{t^{\mathrm{s}}_m - t^{\mathrm{a}}_m \leq \delta_m(t)\right\}, \qquad (8)

where $\mathbb{1}\{\cdot\}$ is an indicator function that is equal to $1$ when its argument holds and $0$ otherwise. The indicator functions capture the fact that the reward at time step $t$ for selecting MTD $m$ is zero under the following conditions (a short illustrative sketch follows this list):

  • $R_m(t) < R_0$, i.e., the achieved throughput falls below the defined threshold and the packet cannot be transmitted successfully. This often happens when the channel quality between MTD $m$ and the BS is below a certain level.

  • $t^{\mathrm{s}}_m - t^{\mathrm{a}}_m > \delta_m(t)$. Here, $t^{\mathrm{s}}_m$ is the time at which MTD $m$ is selected for transmission and $t^{\mathrm{a}}_m$ is the time at which MTD $m$ had a packet ready for transmission. Hence, $t^{\mathrm{s}}_m - t^{\mathrm{a}}_m$ is the number of time steps that MTD $m$ has waited to receive the fast uplink grant. Naturally, if this waiting time exceeds the maximum tolerable access delay, the MTD packet will be dropped and the reward at the BS for selecting MTD $m$ will be zero.
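A minimal sketch of this reward structure, using the notation above: the utility is returned only if the rate clears the threshold and the packet has not exceeded its access-delay budget; otherwise the BS observes zero reward. The function signature is illustrative.

```python
def bandit_reward(utility, rate, rate_threshold, wait_slots, max_tolerable_delay):
    """Reward observed by the BS for scheduling an MTD, following (8): zero if
    the rate is below the threshold or the packet has waited longer than its
    maximum tolerable access delay, otherwise the MTD's current utility."""
    if rate < rate_threshold:
        return 0.0          # transmission fails: channel too weak
    if wait_slots > max_tolerable_delay:
        return 0.0          # packet dropped: access-delay budget exceeded
    return utility

print(bandit_reward(utility=0.8, rate=1.2, rate_threshold=0.5,
                    wait_slots=3, max_tolerable_delay=10))   # 0.8
print(bandit_reward(utility=0.8, rate=0.3, rate_threshold=0.5,
                    wait_slots=3, max_tolerable_delay=10))   # 0.0
```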

The goal of the BS is to maximize its cumulative reward over time. To solve such a problem, the natural solution is to find the best possible arm and play it all the time. This requires playing all of the available arms many times to estimate their expected rewards. However, randomly selecting arms during the learning process is highly suboptimal. Hence, an MAB algorithm finds the arms with higher rewards and chooses them more often, which is known as exploitation of those arms. At the same time, an MAB algorithm should explore all the other arms enough times to estimate their expected values more precisely. This is known as the exploration versus exploitation tradeoff. Several methods exist to handle this tradeoff. One of the most popular solution approaches for the MAB problem is based on the concept of the upper confidence bound (UCB). In this method, at each time $t$, the MAB algorithm plays an arm $i(t)$ such that:

i(t) = \arg\max_{i} \left( \frac{X_i(t)}{n_i(t)} + \sqrt{\frac{\alpha \ln t}{n_i(t)}} \right), \qquad (9)

where $t$ is the time step, $n_i(t)$ is the number of times that arm $i$ was played in the previous time steps up to $t$, $X_i(t)$ is the sum of the rewards obtained from playing arm $i$ up to time $t$, and $\alpha$ is a parameter that provides a tradeoff between exploration and exploitation. Larger values of $\alpha$ lead to a higher amount of exploration. We will next use the UCB concept in our proposed probabilistic sleeping MAB algorithm to provide a tradeoff between exploration and exploitation. In the UCB method, a confidence interval is defined around the average of the received rewards of each arm. This confidence interval depends on the number of times that the arm has been played and the total number of times that the algorithm has been running. The more an arm is played, the smaller its confidence bound becomes, meaning that its empirical mean is closer to its true expected value. When using confidence intervals in MABs, the principle of optimism in the face of uncertainty is applied: it favors selecting the arm with the highest UCB value.
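The sketch below implements the UCB index of (9) for a standard (non-sleeping) bandit, with the exploration parameter exposed. The Bernoulli arms and their means are synthetic and chosen only to illustrate that the arm with the highest mean ends up being played most often.

```python
import math
import random

def run_ucb(horizon, means, alpha=2.0, seed=0):
    """Classic UCB selection as in (9): play each arm once, then pick the arm
    maximizing empirical mean + sqrt(alpha * ln t / n_i)."""
    rng = random.Random(seed)
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:                        # initialization: play each arm once
            arm = t - 1
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(alpha * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0   # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Hypothetical arm means; the last (best) arm should dominate the play counts.
print(run_ucb(horizon=5000, means=[0.2, 0.5, 0.8]))
```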

III-B Sleeping Bandits and Proposed Algorithm

In classical MAB problems, it is assumed that all of the arms are available to be played at all time instants. However, for the MTC fast uplink grant scheduling problem, this assumption is not valid since MTDs have a small number of packets and usually become idle for some time after each transmission. Hence, we consider a scenario in which the set of available arms varies over time. This type of problem is called a sleeping MAB problem [39]. In our problem, since the availability of the MTDs follows the distribution of their traffic and the reward is described by (8), we have sleeping bandits with stochastic action availability and stochastic rewards. The authors in [39] provide an algorithm named AUER that addresses such problems and achieves optimal regret. However, AUER is only applicable to sleeping MAB problems in which the set of available arms is perfectly known to the decision maker in advance. In our problem formulation, such an assumption does not hold. Therefore, we propose a novel solution, summarized in Algorithm 1. Here, we consider that the BS uses a prediction algorithm (e.g., such as those proposed in [25], [28], and [40]) to determine the set of active MTDs at each given time. This algorithm provides the set of active MTDs with a certain probability, i.e., each MTD $m$ has a probability $p_m(t)$ of being active at time $t$. Since the availability of the MTDs is probabilistic, the selected MTD might not be active, which leads to zero reward and a waste of resources. Therefore, to solve the optimization problem in (6), we propose an MAB algorithm that takes this probability of being active into account. In this algorithm, at each time $t$, the BS selects an MTD $m(t)$ such that:

m(t) = \arg\max_{m \in \mathcal{A}(t)} \; p_m(t)\left(\frac{X_m(t)}{n_m(t)} + \sqrt{\frac{\alpha \ln N(t)}{n_m(t)}}\right), \qquad (10)

where $X_m(t)$ is the sum of the rewards of MTD $m$, $n_m(t)$ is the number of times that MTD $m$ was selected and was active, $N(t)$ is the total number of times that the selected MTD was active, and $\mathcal{A}(t)$ is the set of active MTDs at time $t$. In contrast to the original UCB method, we only count the times at which the selected MTD was active, which ensures that the statistical average and the UCB values are computed correctly. Since the MTDs in the set $\mathcal{A}(t)$ are only available with certain probabilities, the prediction error at the BS propagates to the MAB: the performance of the sleeping MAB suffers because some MTDs selected for the fast uplink grant might not be active. A smaller error in the prediction algorithm leads to better performance of the probabilistic sleeping MAB. This algorithm selects MTDs with higher utility values and higher probabilities of being active, while balancing the tradeoff between exploration and exploitation.

Initialize $X_m \leftarrow 0$ and $n_m \leftarrow 0$ for all $m$; initialize $N \leftarrow 1$
for $t = 1$ to $T$ do
if there exists $m \in \mathcal{A}(t)$ such that $n_m = 0$ then
  Play arm $m$
else
  Play arm $m(t)$ given by (10)
end
if the played arm is available then
 Observe payoff $r_m(t)$ and update $X_m \leftarrow X_m + r_m(t)$, $n_m \leftarrow n_m + 1$, $N \leftarrow N + 1$
else
 Receive zero reward
end
Algorithm 1 The Probabilistic Sleeping MAB Algorithm.
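Below is a compact Python sketch of Algorithm 1 under our reading of the index in (10): the UCB term of each predicted-active MTD is weighted by its activity probability, and the counters are updated only when the played MTD turns out to be active. The traffic model, the reward distributions, and the assumption of an ideal activity predictor are illustrative stand-ins for the components described in the paper.

```python
import math
import random

rng = random.Random(1)
M, T, ALPHA = 20, 20000, 2.0                 # MTDs, horizon, exploration parameter
true_util = [rng.uniform(0.2, 1.0) for _ in range(M)]   # unknown expected rewards
act_prob = [rng.uniform(0.3, 0.9) for _ in range(M)]    # true activity probabilities

sums = [0.0] * M      # X_m: accumulated reward of MTD m
plays = [0] * M       # n_m: times MTD m was selected while active
N = 1                 # number of times the selected MTD was active

for t in range(1, T + 1):
    active = [m for m in range(M) if rng.random() < act_prob[m]]
    if not active:
        continue
    # Predicted activity probabilities; an ideal predictor is assumed here, so
    # the candidate set equals the truly active set (no second-type errors).
    p = {m: act_prob[m] for m in active}
    unexplored = [m for m in active if plays[m] == 0]
    if unexplored:
        chosen = unexplored[0]               # initialization phase
    else:
        chosen = max(active, key=lambda m: p[m] * (sums[m] / plays[m]
                     + math.sqrt(ALPHA * math.log(N) / plays[m])))
    # The chosen MTD is active here by construction; with an imperfect predictor
    # it could be inactive and the observed reward would simply be zero.
    reward = min(1.0, max(0.0, rng.gauss(true_util[chosen], 0.1)))
    sums[chosen] += reward
    plays[chosen] += 1
    N += 1

best = max(range(M), key=lambda m: true_util[m])
print("most played MTD:", plays.index(max(plays)), "| MTD with highest utility:", best)
```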

III-C Prediction error

Any error in the source traffic prediction algorithm that provides the set of active MTDs will affect the performance of the proposed sleeping MAB algorithm. Here, we first define the prediction errors that will later be used in the analysis of Section III-D. We distinguish two types of prediction errors (a small numerical illustration follows this list):

  1. For any MTD $m$ that is active at time $t$ and to which the source traffic prediction algorithm assigns a probability $p_m(t)$ of being available, the prediction error is $1 - p_m(t)$. If an optimal MTD is active but has a high prediction error, that MTD might not be scheduled and some suboptimal MTD will be scheduled instead, which leads to regret. We refer to this as the first type of prediction error event.

  2. For any non-active MTD that is included in the predicted set of active MTDs, the prediction error is its assigned activity probability $p_m(t)$. If such a non-active MTD is improperly selected, due to a high prediction error, instead of an optimal MTD, the returned reward is zero and, hence, the incurred regret equals the reward of the optimal MTD. This is the highest amount of regret that can occur at any given time. We refer to this as the second type of prediction error event.
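As a small numerical illustration of these two error types (all probabilities hypothetical), consider three MTDs and the error that the predictor incurs for each:

```python
# Illustrative computation of the two prediction-error types defined above.
# "active" is the ground truth; "p" is the predicted activity probability.
mtds = {"MTD-1": {"active": True,  "p": 0.95},   # active and well predicted
        "MTD-2": {"active": True,  "p": 0.40},   # active, may be missed (first type)
        "MTD-3": {"active": False, "p": 0.70}}   # inactive, may be picked (second type)

for name, info in mtds.items():
    if info["active"]:
        err, kind = 1.0 - info["p"], "first-type"    # active MTD under-weighted
    else:
        err, kind = info["p"], "second-type"         # inactive MTD over-weighted
    print(f"{name}: {kind} prediction error = {err:.2f}")
```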

III-D Regret Analysis of the Proposed Algorithm

Next, we provide the analytical regret analysis of the proposed probabilistic sleeping MAB. We derive an upper bound on the regret and establish the relation between the accuracy of the source traffic prediction method and the regret of our proposed algorithm. Throughout this section, we use the following setup. Consider an MAB scenario with a set of arms, each having a given expected reward. We count the number of times a suboptimal arm was played while some better arm could have been played, and we denote the corresponding gap between the expected rewards, which is always positive. The expected value of the regret can be expressed as:

(11)

and [39]. In the following, we use the number of times that arm $i$ is played until time $t$, the number of times that arm $i$ was played while it was available, and the number of times that the played arm was available; $t$ is the time step in the algorithm and $T$ is the total running time of the algorithm. Moreover, we use the average received reward of arm $i$ up to time $t$. Next, we derive the number of times that the prediction error event occurs.

Lemma 1.

Given the above definitions, the following holds:

(12)
Proof.

We start from the Chernoff-Hoeffding inequality, where the rewards are strictly bounded within known intervals; considering the confidence bound used in the algorithm, the inequality can be written as

(13)

After simplifications, we prove the lemma. ∎

This lemma is used in Theorem 1, where we analyze the regret bound of the proposed probabilistic sleeping MAB solution presented in Algorithm 1. In our proposed MAB algorithm, a suboptimal arm is selected instead of the optimal arm in the following cases: a) the MAB algorithm does not have an accurate estimate of the rewards of each arm, which mostly happens during the initial learning phase; b) a suboptimal arm is selected because of the first type of prediction error; or c) zero reward is returned due to the second type of prediction error. Clearly, the regret contributions of cases b) and c) are a function of the accuracy of the prediction algorithm. We decouple the effect of the prediction errors of the source traffic prediction algorithm from the uncertainty of the MAB algorithm about the expected rewards of each MTD. We show that prediction errors can lead to regret that is linear in the total running time of the algorithm, with a coefficient that is a function of the prediction error. However, such a coefficient becomes very small for a source traffic prediction algorithm with high accuracy, and therefore the linear term becomes very small.

Theorem 1.

The regret of the probabilistic sleeping MAB algorithm is at most:

(14)

where is the average activity probability of the source traffic prediction.

Proof.

To derive the regret bounds for our algorithm, we need to bound the regret arm by arm. We need to find the expected value of the number of the times that each arm was played, when that arm was suboptimal. That is, what is the expected number of times that arm was played while some other arm could have been played. Assume that arm was already played times while some other arm was available. The expected number of times that arm was played can be written as:

(15)

To analyze this, we define two events and as follows:

(16)

and, for all :

(17)

The first event means that the average received reward of each arm deviates from its true expected reward by no more than the confidence bound. We define this event since it helps us evaluate the accuracy of our estimate of the reward of each arm. After conditioning on it, we have:

(18)
(19)

From Lemma 1, the probability of occurrence of for each arm is , and, thus, for all we have:

(20)

Now we can evaluate the selection of a suboptimal arm after conditioning on the first event. Such a selection happens only if at least one of the following conditions holds [41]:

I) We are grossly overestimating the value of arm :

(21)

By carefully evaluating events and , we can observe that . Note that this overestimation is evaluated considering the worst case scenario with .

II) We are grossly underestimating the values of all of the arms in , which can be captured by the following event:

(22)

For , this term never holds when conditioned on , i.e, . However, for , arm will be grossly underestimated under the following condition:

(23)

This means that, for all arms in the optimal set, the probability of being active (while the arm is actually active) is so low that the probabilistic UCB value is lower than the real expected value of the arm. However, this alone is not sufficient for incurring regret; another condition must hold for the suboptimal arm: its probability of being active must be high enough that its probabilistic UCB value is within the confidence interval around its real expected value, i.e., we must have:

(24)

which leads to:

(25)

Since for selecting the suboptimal arm both and must hold, we define the event:

(26)

and, thus, if this event occurs, a suboptimal arm might be played, which leads to an increase in the accumulated regret. We note that this event is independent of the reward-estimation event defined earlier.

III) The expected values of the optimal and suboptimal arms are nearly equal. When the expected values of two arms are close to each other, the following two conditions can lead to choosing a suboptimal arm: a) the confidence interval of the suboptimal arm is large and, hence, the suboptimal arm has a higher UCB value than the optimal arm, or b) the UCB value of the optimal arm is larger than that of the suboptimal arm, but the optimal arm has a lower activity probability and, therefore, the suboptimal arm is selected. These two conditions can be expressed as:

(27)

After rearranging (27), to choose the optimal arm, the following condition is needed for the confidence interval:

(28)

Now, in order for the condition in (28) to hold, we must play arm enough times to have an exact estimate of its value:

(29)

This means that, conditioned on , after playing arm for times, will never happen since the confidence intervals are small enough. Therefore we have .

Now, we can write (15) as:

(30)

We have already bounded each of the above terms. Moreover, the probability of an MTD being active is independent of the reward-estimation event. Using Lemma 1, and conditioning on the reward-estimation event, (30) simplifies to:

(31)
(32)
(33)

It is impossible to derive a closed-form expression for the number of times that the prediction-error event happens, since the confidence interval and the accuracy of the estimated average of each arm change at each time step. However, we can conclude that the number of times this event happens is a linear function of time multiplied by a coefficient that is a function of the prediction error; clearly, this coefficient vanishes as the prediction error goes to zero. Therefore, we have:

(34)

We can now calculate the expected number of plays exactly. To this end, we need to count the total number of times that a given arm was selected while it was available, which is equal to the total number of times that the arm was selected multiplied by the probability of its availability, i.e.,

(35)

Therefore, we can derive as:

(36)

By plugging this in (11), we can conclude:

(37)

This completes the proof. ∎

This theorem shows that the performance of the proposed sleeping MAB algorithm is a function of the accuracy of the predictions made in the previous step. For a source traffic prediction algorithm with good accuracy, after the learning period, the sleeping MAB is able to select the most important MTD and can achieve logarithmic regret. In MAB problems, logarithmic regret, as opposed to linear regret, shows that the algorithm has learned the arms with higher rewards and that the gap between the selected arm and the best arm has become small [30]. In our MTC setting, this means that, out of the set of active MTDs, the one with the best combination of latency requirements, wireless channel quality, and data value will be selected. Clearly, we can change the weights of the reward defined in (5) to give higher priority to the QoS metric of interest.
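Since the exact bound in (14) is not reproduced above, the sketch below only illustrates the qualitative behavior the theorem describes: a logarithmic learning term whose constant grows as the average activity probability or the suboptimality gaps shrink, plus a linear term whose slope scales with the prediction error. The functional form and all constants are assumptions for illustration, not the derived bound itself.

```python
import math

def illustrative_regret(T, gaps, alpha=2.0, avg_activity=0.8, pred_error=0.02):
    """Qualitative shape of the regret behavior discussed above (not the exact
    expression in (14)): one logarithmic term per suboptimal arm, scaled by the
    inverse of the average activity probability and of the gap, plus a linear
    term driven by the source traffic prediction error."""
    log_term = sum(alpha * math.log(T) / (avg_activity * gap) for gap in gaps)
    linear_term = pred_error * T
    return log_term + linear_term

gaps = [0.1, 0.2, 0.4]                 # hypothetical suboptimality gaps
for T in (10**3, 10**4, 10**5):
    print(T, round(illustrative_regret(T, gaps), 1))
```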

III-E Multiple MTD selection

In the previous sections, we have studied the sleeping MAB algorithm for the fast uplink grant allocation problem. Most MAB algorithms are developed for selecting one arm at a time. However, in practical wireless systems, at any given time, there are multiple radio resource blocks that could be allocated to the MTDs, and hence, the network may need to select more than one MTD for resource allocation. Here, we extend the proposed sleeping MAB algorithm to multiple arms. We assume that there are $K$ radio resource blocks in the frequency domain that can be allocated to MTDs. Since the criterion for selecting the best MTD in the probabilistic sleeping MAB algorithm was the highest UCB value, we extend our method by selecting the $K$ highest UCB values at each time step. This method follows the concept of best ordering of arms in MAB theory, in which the arms are ordered based on their selection priority [30]. If we assume that the arms are selected one by one, then after selecting the best MTD, the next selection is the arm with the next highest UCB value. Hence, ordering the UCB values and selecting the best MTDs is a natural extension of the proposed probabilistic sleeping MAB. We note that, at each time step, all of the MTDs that are active for the first time are selected first, and the remaining MTDs are then sorted based on their UCB values. This method of multiple MTD selection is summarized in Algorithm 2.

Initialize $X_m \leftarrow 0$ and $n_m \leftarrow 0$ for all $m$; initialize $N \leftarrow 1$
for $t = 1$ to $T$ do
if there exist arms $m \in \mathcal{A}(t)$ with $n_m = 0$ then
  Play all such arms if there are at most $K$ of them; otherwise play $K$ of them
else
  Order the arms in $\mathcal{A}(t)$ in descending order of their index in (10) and select the first $K$ arms
end
if a played arm $m$ is available then
 For all available played arms, observe payoff $r_m(t)$ and update $X_m \leftarrow X_m + r_m(t)$, $n_m \leftarrow n_m + 1$, $N \leftarrow N + 1$
else
 Receive zero reward for all non-available played arms
end
Algorithm 2 Multiple MTD Selection
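A short Python sketch of the multi-MTD extension, again under the assumed probabilistic UCB index: MTDs from the predicted-active set that have never been scheduled are served first, and the remaining grants go to the active MTDs with the largest indices, mirroring the ordering step of Algorithm 2. The state passed in is hypothetical.

```python
import math

def select_k_mtds(active, p, sums, plays, N, K, alpha=2.0):
    """Pick up to K MTDs from the predicted-active set, following the logic of
    Algorithm 2: unexplored active MTDs first (infinite index), then the MTDs
    with the highest probability-weighted UCB index (our reading of (10))."""
    def index(m):
        if plays[m] == 0:
            return float("inf")                      # force initial exploration
        return p[m] * (sums[m] / plays[m]
                       + math.sqrt(alpha * math.log(max(N, 2)) / plays[m]))
    return sorted(active, key=index, reverse=True)[:K]

# Tiny usage example with hypothetical state for five active MTDs.
active = [0, 1, 2, 3, 4]
p = dict(zip(active, (0.9, 0.6, 0.8, 0.5, 0.7)))
sums = {0: 3.0, 1: 0.5, 2: 4.0, 3: 0.0, 4: 2.0}
plays = {0: 5, 1: 2, 2: 6, 3: 0, 4: 4}
print(select_k_mtds(active, p, sums, plays, N=17, K=3))   # MTD 3 is explored first
```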

IV Simulation Results

IV-A Single MTD selection

We consider a single circular cell of fixed radius (in meters) consisting of a set of MTDs, a subset of which is active at each time. The noise power spectral density (in dBm/Hz), the bandwidth (in kHz), and the standard deviation of the log-normal shadow fading (in dB) are fixed across all simulations. All statistical results are averaged over a large number of independent runs. Each MTD has a reward distribution with a given expected value. The value of the reward function changes for the following reasons. First, the achieved rate changes over time due to changes in the channel quality. Second, the maximum tolerable access delay might change at different times since the packets at the MTD might face various delays. Moreover, each MTD can send packets from various applications with different data values. In the utility function, fixed initial values of the weights $w_1$, $w_2$, and $w_3$ are used. As needed, we change the parameters of the modified Gompertz function of Fig. 2 based on the maximum access delay required in the system so as to accurately model the latency.

In Fig. 3, we show the regret resulting from the proposed sleeping MAB algorithm for a fixed exploration parameter and fixed utility weights. Two intervals for the activity probabilities provided by the source traffic prediction are considered. The result is compared to: a) a random scheduling policy, b) the case in which the availability of the MTDs is not taken into account in the selection rule (10) and only the UCB values are used, and c) a scenario in which the prediction is error free. Fig. 3 clearly shows that the random allocation of radio resources yields linear regret, which is much worse than the logarithmic regret achieved by the proposed solution. Fig. 3 also shows that the proposed enhancement, obtained by including the activity probability in (10), improves the performance compared to using the sleeping MAB without this modification, for both probability intervals. Moreover, Fig. 3 shows that perfect prediction achieves the best performance.

Fig. 3: Regret resulting from the proposed probabilistic sleeping MAB compared to sleeping MAB with prediction, sleeping MAB with perfect prediction, and random allocation.

In Fig. 4, we study the performance in terms of latency. The maximum tolerable access delay of each MTD takes a value in a given range (in ms), and the parameters of the modified Gompertz function are set accordingly for the considered time horizon. For every value of the maximum tolerable access delay in the system, the average maximum tolerable access delay under a random allocation policy is compared to that of the sleeping MAB algorithm. From Fig. 4, we can see that the random allocation of the fast uplink grant achieves a delay that is equal to the average delay of the network. In contrast, the proposed algorithm is able to select MTDs with stricter latency requirements. The maximum tolerable access delay of the MTD selected by the proposed algorithm is almost three times smaller than that of the randomly selected MTD. Note that this scheduling policy not only decreases the average latency of the system but is also able to satisfy the individual latency requirements of each MTD by prioritizing the scheduling of MTDs with strict requirements.

Fig. 4: Average maximum tolerable access delay of the selected MTDs.

The scatter plot of the latency of the selected MTD at each time is presented in Figs. 5(a), 5(b), and 5(c) for the proposed sleeping MAB with three different values of the exploration parameter, and in Fig. 5(d) for the random allocation case. We set the maximum tolerable access delay range (in ms) and the parameters of the modified Gompertz function accordingly. Each dot in these figures corresponds to the maximum tolerable access delay of the selected MTD. Figs. 5(a), 5(b), and 5(c) show the effectiveness of the sleeping MAB algorithm in optimizing the latency while providing fairness in the system; they also show the effect of the exploration-exploitation control parameter. From Figs. 5(a), 5(b), and 5(c), we can see that, initially, the dots are uniformly distributed, which means that the MTDs are selected randomly. After learning, the density of the dots for MTDs with stricter latency requirements is much higher than that of the MTDs with looser delay requirements, which means that delay-sensitive MTDs are scheduled more often. However, even after the learning period, the algorithm keeps occasionally scheduling MTDs with larger latency requirements. This increases the accuracy of the information at the BS about the latency requirements of all MTDs and also provides fairness. Moreover, if the latency requirements of an MTD change over time, the algorithm can discover this and start scheduling that MTD accordingly. Such a behavior shows how the proposed algorithm can balance exploration and exploitation using the exploration parameter. From Fig. 5(d), we can see that a random scheduling algorithm selects MTDs with completely random latencies at all times, and the performance of the system is much worse than with the proposed sleeping MAB.

(a) Probabilistic sleeping MAB with
(b) Probabilistic sleeping MAB with
(c) Probabilistic sleeping MAB with
(d) Random allocation
Fig. 5: Required access delay of the selected MTD at each time during the entire learning period. Effect of exploration-exploitation parameter is shown. This figure shows how our proposed method can optimize the system while providing fairness.

In Fig. 6, we present the scatter plot of the achieved throughput of the system at each time step for three values of the exploration-exploitation parameter and for a random allocation policy, with fixed utility weights. The bandwidth (in kHz) and the transmit power of all the MTDs (in dBm) are fixed. It is clear from Fig. 6 that the random allocation policy, on average, achieves a lower rate, while the proposed method shows a much better average performance.

(a) Probabilistic sleeping MAB with
(b) Random allocation
Fig. 6: Scatter plot of the achieved throughput of the system at each time step for three different values of the explore/exploit parameter compared to a random allocation policy.

In Fig. 7, we present the average sum-rate of the system over the entire learning period for different values of the exploration-exploitation parameter. Clearly, our proposed method outperforms the random allocation policy. Fig. 7 also shows that increasing the exploration parameter decreases the sum-rate of the system. This means that scheduling the MTDs with higher average throughput can lead to doubling the rate in the system; increasing fairness is possible, but it comes at the cost of a lower achieved rate.

Fig. 7: Average sum-rate of the system under different values of exploration-exploitation parameter .

IV-B Multiple Resource Blocks

In this section, we provide the results for selecting multiple MTDs by using Algorithm 2. Here, we consider that all the devices require the same amount of resources and one resource block is enough for transmitting the packet of each MTD. For each failed transmission, we consider the device to be available in the next time step.

First, we study the regret of the algorithm. We fix the exploration parameter and the utility function weights, and we assume that there are a given number of MTDs in the system, out of which a subset is active at each time, with multiple MTDs scheduled per time step. We consider two intervals for the probability of being active provided by the source traffic prediction algorithm. The regret of the proposed probabilistic sleeping MAB is compared to the random baseline scenario and to perfect prediction. Fig. 8 shows that the proposed method achieves logarithmic regret. We observe that, compared to the random allocation policy, the regret achieved by our proposed probabilistic sleeping MAB is nearly three and four times lower for the two probability intervals considered. Fig. 8 naturally confirms that the perfect prediction scheme achieves the best performance.

Fig. 8: Regret resulting from the proposed probabilistic sleeping MAB compared to sleeping MAB for multiple resource blocks with prediction, sleeping MAB with perfect prediction, and random allocation.

In Fig. 9, we present the average delay of the selected MTDs. The maximum tolerable access delay takes a value in a given range (in ms), and we set the parameters of the modified Gompertz function accordingly for the considered time horizon. It is clear that the proposed probabilistic sleeping MAB algorithm provides a much better average achieved access delay in the system. One should note that the achieved average access delay is almost constant for any value of the maximum tolerable delay, since the delay is averaged over the selected MTDs. This shows that, in real-time systems, by increasing the number of MTDs, our proposed solution achieves almost a three-fold performance improvement compared to the baseline method. This is an interesting result since it shows that, in a massive access scenario, our proposed method is able to achieve a very low access delay. In contrast, conventional random access based systems experience excessive delays due to collisions.

Fig. 9: Average achieved access delay in the system for all the selected MTDs.
(a) Probabilistic sleeping MAB
(b) Random allocation
Fig. 10: Scatter plot of the average access delay of the system.

In Fig. 10, a scatter plot of the average access delay of the selected MTDs is presented for our proposed solution and for a random allocation policy. We set the maximum tolerable access delay range (in ms) and the parameters of the modified Gompertz function accordingly. Each dot in Fig. 10 captures the average of the maximum tolerable access delays of the selected MTDs. From Fig. 10, we can clearly observe that the proposed sleeping MAB achieves a better performance and can improve the average latency in the system. Moreover, there is a balance between selecting the MTDs with the most strict access delay requirements and exploring other MTDs so as to provide fairness, which can be tuned by changing the exploration parameter.

V Conclusions

In this paper, we have introduced a novel sleeping MAB framework for the optimal scheduling of MTDs using the fast uplink grant. First, we have devised a mixed QoS metric based on a combination of the value of the data, the rate of the link, and the maximum tolerable access delay of each MTD. Second, we have used that metric as the reward function in an MAB framework whose goal is to find the best MTD to schedule at each time. Moreover, we have considered an imperfect source traffic prediction in which each MTD in the set of active MTDs has a probability of being active, and we have proposed a probabilistic sleeping MAB framework to solve the fast uplink grant allocation problem. We have analytically studied the regret of the proposed probabilistic sleeping MAB and shown how errors in the source traffic prediction algorithm impact its performance compared to the case of perfect source traffic prediction. Moreover, we have extended the sleeping MAB algorithm to select multiple arms at each time, so that it can be used in scenarios where more than one MTD is scheduled per time step. Simulation results have shown that the proposed algorithm performs much better than a random allocation policy and can achieve almost a three-fold performance gain in terms of latency and throughput. To the best of our knowledge, this is the first paper to address the optimal allocation of the fast uplink grant for MTC.

References

  • [1] S. Ali, A. Ferdowsi, W. Saad, and N. Rajatheva, “Sleeping multi-armed bandits for fast uplink grant allocation in machine type communications,” in Proc. IEEE Global Communications Conference (GLOBECOM), Workshop on Ultra-High Speed, Low Latency and Massive Connectivity Communication for 5G/B5G, Abu Dhabi, UAE, Dec 2018, pp. 1–6.
  • [2] M. R. Palattella, M. Dohler, A. Grieco, G. Rizzo, J. Torsner, T. Engel, and L. Ladid, “Internet of things in the 5G era: Enablers, architecture, and business models,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 3, pp. 510–527, March 2016.
  • [3] M. Chen, W. Saad, and C. Yin, “Virtual reality over wireless networks: Quality-of-service model and learning-based resource management,” IEEE Transactions on Communications, vol. to appear, 2018.
  • [4] A. Ferdowsi, U. Challita, and W. Saad, “Deep learning for reliable mobile edge analytics in intelligent transportation systems,” CoRR, vol. abs/1712.04135, 2017. [Online]. Available: http://arxiv.org/abs/1712.04135
  • [5] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Unmanned aerial vehicle with underlaid device-to-device communications: Performance and tradeoffs,” IEEE Transactions on Wireless Communications, vol. 15, no. 6, pp. 3949–3963, June 2016.
  • [6] Z. Dawy, W. Saad, A. Ghosh, J. G. Andrews, and E. Yaacoub, “Toward massive machine type cellular communications,” IEEE Wireless Communications, vol. 24, no. 1, pp. 120–128, February 2017.
  • [7] A. Ferdowsi and W. Saad, “Deep learning for signal authentication and security in massive Internet of Things systems,” IEEE Transactions on Communications, vol. to appear, pp. 1–1, 2018.
  • [8] A. Ferdowsi and W. Saad, “Deep learning-based dynamic watermarking for secure signal authentication in the Internet of Things,” in Proc. of 2018 IEEE International Conference on Communications (ICC), Kansas City, USA, May 2018.
  • [9] P. Schulz, M. Matthe, H. Klessig, M. Simsek, G. Fettweis, J. Ansari, S. A. Ashraf, B. Almeroth, J. Voigt, I. Riedel, A. Puschmann, A. Mitschele-Thiel, M. Muller, T. Elste, and M. Windisch, “Latency critical IoT applications in 5G: Perspective on the design of radio interface and network architecture,” IEEE Communications Magazine, vol. 55, no. 2, pp. 70–78, February 2017.
  • [10] C. Bockelmann, N. Pratas, H. Nikopour, K. Au, T. Svensson, C. Stefanovic, P. Popovski, and A. Dekorsy, “Massive machine-type communications in 5G: physical and MAC-layer solutions,” IEEE Communications Magazine, vol. 54, no. 9, pp. 59–65, September 2016.
  • [11] M. T. Islam, A. e. M. Taha, and S. Akl, “A survey of access management techniques in machine type communications,” IEEE Communications Magazine, vol. 52, no. 4, pp. 74–81, April 2014.
  • [12] A. Laya, L. Alonso, and J. Alonso-Zarate, “Is the random access channel of LTE and LTE-A suitable for M2M communications? a survey of alternatives,” IEEE Communications Surveys Tutorials, vol. 16, no. 1, pp. 4–16, December 2013.
  • [13] Z. Wang and V. W. S. Wong, “Optimal access class barring for stationary machine type communication devices with timing advance information,” IEEE Transactions on Wireless Communications, vol. 14, no. 10, pp. 5374–5387, Oct 2015.
  • [14] Y. Liang, X. Li, J. Zhang, and Z. Ding, “Non-orthogonal random access for 5G networks,” IEEE Transactions on Wireless Communications, vol. 16, no. 7, pp. 4817–4831, July 2017.
  • [15] N. K. Pratas, C. Stefanovic, G. C. Madueno, and P. Popovski, “Random access for machine-type communication based on Bloom filtering,” in Proc. of 2016 IEEE Global Communications Conference (GLOBECOM), Dec 2016, pp. 1–7.
  • [16] A. E. Kalor, O. A. Hanna, and P. Popovski, “Random access schemes in wireless systems with correlated user activity,” in Proc. of IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), June 2018, pp. 1–5.
  • [17] N. Zhang, G. Kang, J. Wang, Y. Guo, and F. Labeau, “Resource allocation in a new random access for M2M communications,” IEEE Communications Letters, vol. 19, no. 5, pp. 843–846, May 2015.
  • [18] G. C. Madueño, Č. Stefanović, and P. Popovski, “Reliable reporting for massive M2M communications with periodic resource pooling,” IEEE Wireless Communications Letters, vol. 3, no. 4, pp. 429–432, Aug 2014.
  • [19] N. Abuzainab, W. Saad, C. S. Hong, and H. V. Poor, “Cognitive hierarchy theory for distributed resource allocation in the Internet of Things,” IEEE Transactions on Wireless Communications, vol. 16, no. 12, pp. 7687–7702, Dec 2017.
  • [20] R. Abbas, M. Shirvanimoghaddam, Y. Li, and B. Vucetic, “Grant-free massive NOMA: Outage probability and throughput,” arXiv preprint arXiv:1707.07401, 2017.
  • [21] B. Wang, L. Dai, Y. Zhang, T. Mir, and J. Li, “Dynamic compressive sensing-based multi-user detection for uplink grant-free NOMA,” IEEE Communications Letters, vol. 20, no. 11, pp. 2320–2323, Nov 2016.
  • [22] 3GPP, “Study on latency reduction techniques for LTE,” 3rd Generation Partnership Project (3GPP), Technical Specification (TS) 36.881.
  • [23] C. Hoymann, D. Astely, M. Stattin, G. Wikstrom, J. F. Cheng, A. Hoglund, M. Frenne, R. Blasco, J. Huschke, and F. Gunnarsson, “LTE release 14 outlook,” IEEE Communications Magazine, vol. 54, no. 6, pp. 44–49, June 2016.
  • [24] S. Ali, N. Rajatheva, and W. Saad, “Fast uplink grant for machine type communications: Challenges and opportunities,” arXiv preprint arXiv:1801.04953, 2018.
  • [25] M. Laner, P. Svoboda, N. Nikaein, and M. Rupp, “Traffic models for machine type communications,” in Proc. of the Tenth International Symposium on Wireless Communication Systems, Ilmenau, Germany, Aug 2013, pp. 1–5.
  • [26] M. Centenaro and L. Vangelista, “A study on M2M traffic and its impact on cellular networks,” in 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT), Dec 2015, pp. 154–159.
  • [27] J. Adhikari and P. Rao, “Identifying calendar-based periodic patterns,” Emerging Paradigms in Machine Learning, pp. 329–357, 2013.
  • [28] S. Ali, W. Saad, and N. Rajatheva, “A directed information learning framework for event-driven M2M traffic prediction,” IEEE Communications Letters, pp. 1–1, 2018.
  • [29] J. Brown and J. Y. Khan, “A predictive resource allocation algorithm in the LTE uplink for event based M2M applications,” IEEE Transactions on Mobile Computing, vol. 14, no. 12, pp. 2433–2446, Dec 2015.
  • [30] R. S. Sutton, A. G. Barto et al., Reinforcement learning: An introduction.   MIT press, 1998.
  • [31] S. Maghsudi and E. Hossain, “Multi-armed bandits with application to 5G small cells,” IEEE Wireless Communications, vol. 23, no. 3, pp. 64–73, June 2016.
  • [32] S. Maghsudi and S. Stańczak, “Channel selection for network-assisted D2D communication via no-regret bandit learning with calibrated forecasting,” IEEE Transactions on Wireless Communications, vol. 14, no. 3, pp. 1309–1322, March 2015.
  • [33] S. Maghsudi and E. Hossain, “Distributed user association in energy harvesting small cell networks: A probabilistic bandit model,” IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1549–1563, March 2017.
  • [34] Y. Gai, B. Krishnamachari, and R. Jain, “Learning multiuser channel allocations in cognitive radio networks: A combinatorial multi-armed bandit formulation,” in Proc. of IEEE Symposium on New Frontiers in Dynamic Spectrum (DySPAN), Singapore, Singapore, April 2010, pp. 1–9.
  • [35] C. Bisdikian, L. M. Kaplan, and M. B. Srivastava, “On the quality and value of information in sensor networks,” ACM Trans. Sen. Netw., vol. 9, no. 4, pp. 48:1–48:26, Jul. 2013. [Online]. Available: http://doi.acm.org/10.1145/2489253.2489265
  • [36] Y. S. Cho, J. Kim, W. Y. Yang, and C. G. Kang, MIMO-OFDM wireless communications with MATLAB.   John Wiley & Sons, 2010.
  • [37] 3GPP, “Radio frequency (RF) requirements for LTE pico node B,” 3rd Generation Partnership Project (3GPP), Technical Specification (TS) 36.931.
  • [38] D. Jukić, G. Kralik, and R. Scitovski, “Least-squares fitting Gompertz curve,” Journal of Computational and Applied Mathematics, vol. 169, no. 2, pp. 359–375, 2004.
  • [39] R. Kleinberg, A. Niculescu-Mizil, and Y. Sharma, “Regret bounds for sleeping experts and bandits,” Machine learning, vol. 80, no. 2-3, pp. 245–272, 2010.
  • [40] M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Machine learning for wireless networks with artificial intelligence: A tutorial on neural networks,” CoRR, vol. abs/1710.02913, 2017. [Online]. Available: http://arxiv.org/abs/1710.02913
  • [41] P. Auer, N. Cesa-Bianchi, and P. Fischer, “Finite-time analysis of the multiarmed bandit problem,” Machine learning, vol. 47, no. 2-3, pp. 235–256, 2002.