Deep Reinforcement Learning for Random Access in Machine-Type Communication

01/24/2022
by Muhammad Awais Jadoon, et al. (ETH Zurich, CTTC)

Random access (RA) schemes are a topic of high interest in machine-type communication (MTC). In RA protocols, backoff techniques such as exponential backoff (EB) are used to stabilize the system and to avoid low throughput and excessive delays. However, these backoff techniques show varying performance under different underlying assumptions and analytical models, and finding a better transmission policy for slotted ALOHA RA therefore remains a challenge. In this paper, we show the potential of deep reinforcement learning (DRL) for RA. We learn a transmission policy that balances throughput and fairness. The proposed algorithm learns transmission probabilities from the previous action and a binary feedback signal, and it is adaptive to different traffic arrival rates. Moreover, we propose the average age of packet (AoP) as a metric to measure fairness among users. Our results show that the proposed policy outperforms the baseline EB transmission schemes in terms of both throughput and fairness.


I Introduction

Random access (RA) protocols such as slotted ALOHA are highly relevant in designing multiple access schemes for massive machine-type communication (MTC) in future wireless networks. In the slotted ALOHA RA protocol [1], retransmission strategies are usually employed to resolve collisions and to optimize metrics such as throughput, fairness or delay. There exist several widely used strategies that exploit feedback signals from the receiver to implement a transmission control mechanism [7, 14, 11]. One of the most widely used transmission strategies is exponential backoff (EB). In EB, the probability of (re)transmission is multiplicatively decreased by a backoff factor b each time a collision occurs. Optimal values for the backoff factor and different flavours of the backoff policy have been addressed in the literature. Binary exponential backoff (BEB), i.e., the case b = 2, has been used in standards such as Ethernet LAN, IEEE 802.3 and IEEE 802.11 WLAN, and it has been considered in [9, 5] for theoretical analyses. Recently, it has been shown that a backoff factor of about 1.35 performs better in terms of delay compared to BEB [2]. An algorithm similar to BEB is used in [7], which considers past feedback to adjust the transmission probability, but such an algorithm requires constant sensing of the channel.

We divide EB schemes operating with packet collision feedback into non-symmetric and symmetric exponential backoff (nSEB and SEB), respectively. In nSEB, only the users that suffer the collision back off, whereas in SEB, all users back off when a collision occurs, regardless of whether they attempted a transmission or not. Different assumptions, such as the packet arrival traffic model and the type of feedback (e.g., binary or ternary feedback signaling), influence the performance of these schemes. For instance, nSEB leads to the well-known capture effect, where a single user or a reduced set of users occupies all channel resources. Analytical closed-form solutions for such well-studied schemes are still an open research problem, depending on system model assumptions such as stability and queue assumptions. When considering multiple access for MTC in 5G and beyond communication technologies, these protocols cannot be directly applied, and therefore machine learning tools appear attractive for modelling multiple access problems.
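To illustrate the distinction between the two EB variants, the following minimal sketch (not taken from the paper) updates a single user's transmit probability after a slot; the function name, the reset-to-p_max rule and the default backoff factor are assumptions made for this example only.

    def eb_update(p, num_transmitters, transmitted, b=2.0, p_max=1.0, symmetric=False):
        """Update one user's (re)transmission probability after a slot.

        symmetric=False -> nSEB: only users that suffered the collision back off.
        symmetric=True  -> SEB : every user backs off whenever a collision occurs.
        """
        collision = num_transmitters >= 2            # two or more simultaneous transmissions
        if collision and (symmetric or transmitted):
            return p / b                              # multiplicative decrease by the backoff factor
        if transmitted and num_transmitters == 1:
            return p_max                              # successful transmission: reset the probability
        return p

    # Example: after a collision, a bystander backs off only under SEB.
    print(eb_update(0.5, num_transmitters=3, transmitted=False, symmetric=False))  # 0.5  (nSEB)
    print(eb_update(0.5, num_transmitters=3, transmitted=False, symmetric=True))   # 0.25 (SEB)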

In recent years, deep reinforcement learning (DRL) has attracted much attention in wireless communication research as a potential tool to model multiple access problems. The main motivation for using RL is its ability to learn near-optimal strategies by interacting with the environment through trial and error. The application of RL to channel access goes back to 2010, when Q-learning was used in a multi-agent RL setting [10]. In [3], the ALOHA-Q protocol is proposed for a single-channel slotted ALOHA scheme that uses an expert-based approach in RL. The goal in that work is for nodes to learn in which time slots the likelihood of packet collisions is reduced. However, ALOHA-Q depends on the frame structure, and each user keeps and updates a policy for each time slot in the frame. In [4], ALOHA-Q is enhanced by removing the frame structure; however, every user still has to keep a number of policies equal to the time-slot window it is going to transmit in. Other works such as [13, 19, 17] consider RL-based multiple access for multiple channels. In [13] and [17], the deep Q-network (DQN) algorithm is used for multi-user, multi-channel wireless networks. As opposed to [13], we use a different and smaller set of state parameters, i.e., we consider only one previous action and feedback. In [19], another DRL algorithm, known as actor-critic DRL, is used for dynamic channel access. Furthermore, in [18], a heterogeneous environment is considered in which an RL agent learns an access scheme in co-existence with slotted ALOHA and time division multiple access (TDMA) schemes.

In this work, we leverage DRL to design a channel access and transmission strategy for RA, aiming for reduced signaling and no user coordination. More specifically, the system model considers a binary broadcast feedback that is common to all users. We learn a policy that can be made identical for all users through centralised training, or that can be optimised or adapted individually through online training; in this work, we consider the former approach. We consider a Poisson process for packet arrivals; however, as opposed to the works mentioned above, our proposed scheme and results are not constrained to a specific arrival process. Furthermore, unlike all the above-mentioned works, we do not assume the system to be in the saturation state (in which users always have a packet in their buffer). The non-saturated regime is of particular interest in MTC systems, where not all users will always have a full buffer because of the sporadic traffic arrivals.

The rest of the paper is organised as follows: Section II introduces the system model and defines the performance metrics, Section III describes the RL environment and DQN architecture, simulation results are provided in Section IV and finally conclusions are drawn in Section V.

II System Model and Problem Formulation

We consider a synchronous slotted RA system with a set of K active users, a central receiver such as a base station, and an error-free broadcast channel. The physical time is divided into slots, with slot index t, and the duration of each slot is normalized to 1. We assume that every packet spans exactly one time slot and that all users are perfectly synchronized. Each user is equipped with a buffer, which we assume can store one packet. The buffer state of user k at time t is defined as b_t^k ∈ {0, 1}, where b_t^k = 1 if there is a packet in the buffer and b_t^k = 0 otherwise. If the buffer is full, new packets arriving at user k are discarded and are considered lost. At each time slot t, user k takes an action a_t^k ∈ {0, 1}, where a_t^k = 0 corresponds to the event that user k chooses not to transmit and a_t^k = 1 corresponds to the event that user k transmits a single packet on the channel. If only one user transmits on the channel in a given time slot, the transmission is successful, whereas a collision event happens if two or more users transmit in the same time slot. The collided packets are discarded and need to be retransmitted until they are successfully received. We define the binary feedback signal f_t ∈ {0, 1}, which is broadcast to all users, as

f_t = \begin{cases} 1, & \text{if } \sum_{k=1}^{K} a_t^k = 1 \text{ (successful transmission)} \\ 0, & \text{otherwise (idle slot or collision).} \end{cases}    (1)
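For concreteness, a minimal sketch of this feedback rule, under the reconstruction of (1) above (f_t = 1 exactly when a single user transmits), could look as follows; the function name is illustrative.

    def feedback(actions):
        """actions: list of a_t^k in {0, 1}, one entry per user; returns f_t."""
        return 1 if sum(actions) == 1 else 0

    # Example: users 0 and 2 transmit simultaneously -> collision -> f_t = 0
    assert feedback([1, 0, 1]) == 0
    assert feedback([0, 1, 0]) == 1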

Moreover, we assume that each user keeps a record of its previous action a_{t-1}^k, the previous feedback f_{t-1} from the receiver, and its current buffer state b_t^k. We refer to the tuple

h_t^k = (a_{t-1}^k, f_{t-1}, b_t^k)    (2)

as the history or state of user k at time t, and to h_t = (h_t^1, ..., h_t^K) as the global history of the system.

II-A Slotted Events

Within time slot t, the events happen in the following order at user k:

  1. The buffer state of user k is b_t^k.

  2. A number of new packets arrive. We assume that packet arrivals follow mutually independent Poisson processes with average arrival rate λ/K per user, where K is the number of users and λ is the total arrival rate.

  3. The buffer state is updated to an intermediary buffer state \tilde{b}_t^k, which equals 1 if b_t^k = 1 or at least one new packet has arrived, and 0 otherwise, to account for the newly arrived packet.

  4. If there is at least one packet in the buffer (\tilde{b}_t^k = 1), the action a_t^k is drawn at random from the distribution π_t^k(· | h_t^k). Otherwise, a_t^k = 0.

  5. The feedback signal f_t is broadcast from the receiver to all users.

  6. Based on the feedback signal observed by each user, if a packet has been transmitted successfully, i.e., a_t^k = 1 and f_t = 1, then the packet is deleted from the buffer. The buffer state is updated as b_{t+1}^k = \tilde{b}_t^k − Z_t^k, where Z_t^k is the random variable that indicates whether a packet from user k has been successfully delivered to the receiver during time slot t, and it is defined as

     Z_t^k = \begin{cases} 1, & \text{if } a_t^k = 1 \text{ and } f_t = 1 \\ 0, & \text{otherwise.} \end{cases}    (3)

  7. The values a_t^k, f_t and b_{t+1}^k are used to update the history for the next time slot, h_{t+1}^k = (a_t^k, f_t, b_{t+1}^k).
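To make the per-slot order of events concrete, the following is a compact simulation sketch of steps 1-7 above under the stated assumptions (single-packet buffers, Poisson arrivals at rate λ/K per user). The policy is a placeholder and the function and variable names are illustrative, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def slot_step(b, h, policy, lam, K):
        """One time slot of the slotted-events loop (steps 1-7) for all K users.

        b: buffer states b_t^k (0/1 array of length K)
        h: histories h_t^k = (a_{t-1}^k, f_{t-1}, b_t^k), shape (K, 3)
        policy: maps a history to a transmit probability (placeholder for pi)
        lam: total arrival rate; the per-user rate is lam / K
        """
        arrivals = rng.poisson(lam / K, size=K) > 0       # step 2: new packet arrivals
        b_tilde = np.maximum(b, arrivals.astype(int))     # step 3: intermediary buffer state
        p_tx = np.array([policy(hk) for hk in h])
        a = (rng.random(K) < p_tx).astype(int) * b_tilde  # step 4: transmit only if buffer is full
        f = int(a.sum() == 1)                             # step 5: binary feedback, eq. (1)
        z = a * f                                         # success indicators Z_t^k, eq. (3)
        b_next = b_tilde - z                              # step 6: remove the delivered packet
        h_next = np.stack([a, np.full(K, f), b_next], axis=1)  # step 7: update histories
        return b_next, h_next, f, z

    # Usage with a trivial fixed-probability policy (illustrative only):
    K = 10
    b = np.zeros(K, dtype=int)
    h = np.zeros((K, 3), dtype=int)
    for _ in range(100):
        b, h, f, z = slot_step(b, h, lambda hk: 0.2, lam=1.0, K=K)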

Definition 1.

A policy or access scheme π_t^k of user k at time slot t is a mapping from the history h_t^k to a conditional probability mass function over the action space {0, 1}. We consider a distributed setting in which there is no coordination or message exchange between users for the channel access. Each new action is drawn at random from π_t^k as follows:

a_t^k ∼ π_t^k(· | h_t^k).    (4)

We are interested in developing a distributed transmission policy for slotted RA that can effectively adapt to changes in the traffic arrivals and provide better performance than the baseline reference EB schemes.

II-B Performance Metrics

The objective is to evaluate a scheme that efficiently utilizes the channel resources while also accounting for fairness between users. We consider throughput and propose a new metric, age of packet (AoP), to measure fairness.¹

¹ Jain's index [8], which is often used as a fairness metric in other publications [4], is not adequate here: while our setting is perfectly fair in the long run (because the policies and the channel model are perfectly symmetric among users), we are interested in quantifying the amount of capture effect as an indicator of short-term imbalances, which we interpret as a lack of fairness.

II-B1 Throughput

The channel throughput is defined as the average number of packets that are successfully transmitted across all users. For a finite time horizon T and a given total arrival rate λ, the throughput is computed as

R(λ) = \frac{1}{T} \sum_{t=1}^{T} \sum_{k=1}^{K} Z_t^k.    (5)
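A short sketch of how (5) can be evaluated from the success indicators produced by the simulation loop above; the T x K array layout is an assumption for the example.

    import numpy as np

    def throughput(Z):
        """Z: T x K array of success indicators Z_t^k; returns packets per slot, eq. (5)."""
        T = Z.shape[0]
        return Z.sum() / T

    Z = np.array([[0, 1, 0], [0, 0, 0], [1, 0, 0], [0, 0, 1]])  # toy example, T = 4, K = 3
    print(throughput(Z))  # 3 successes over 4 slots -> 0.75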

II-B2 Age of Packet (AoP)

Certain policies like nSEB notoriously tend to be low in fairness, in that they suffer from the so-called capture effect, by which one node occupies the channel for a large stretch of time, during which other nodes maintain a very low transmit probability and accumulate large delays. We quantify this effect via the average AoP.² A low average AoP is thus indicative of fairness. The AoP of user k, denoted as Δ_t^k, grows linearly with time if a packet stays in the user buffer, and it is reset to 0 if the packet is transmitted successfully. Specifically, we assume that Δ_0^k = 0, and the AoP evolves over time as follows:

Δ_{t+1}^k = \begin{cases} 0, & \text{if } Z_t^k = 1 \\ Δ_t^k + 1, & \text{if } \tilde{b}_t^k = 1 \text{ and } Z_t^k = 0 \\ Δ_t^k, & \text{otherwise.} \end{cases}    (6)

The average AoP for user k after a time span of T time slots is given by

\bar{Δ}^k = \frac{1}{T} \sum_{t=1}^{T} Δ_t^k,    (7)

and the average AoP of the overall system by \bar{Δ} = \frac{1}{K} \sum_{k=1}^{K} \bar{Δ}^k. For systems with a finite buffer size like ours, another relevant metric is the packet discard rate (PDR). Due to space constraints, we do not focus on it in this paper, and we do not attach any penalty to packets being discarded. However, note that if the per-user throughputs are all equal to R(λ)/K (due to symmetry), then the PDR is simply the difference between the per-user arrival rate and throughput, λ/K − R(λ)/K.

² This metric has a different connotation from age of information (AoI).
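A vectorized sketch of the AoP recursion in (6) and the per-user average in (7), under the reconstruction above; array names are illustrative.

    import numpy as np

    def update_aop(aop, buffer_full, success):
        """One step of the AoP recursion in (6) for all users (vectorized).

        aop:         current Delta_t^k values
        buffer_full: whether user k holds a packet this slot (b~_t^k = 1)
        success:     Z_t^k = 1 if user k delivered its packet this slot
        """
        grown = np.where(buffer_full, aop + 1, aop)  # a waiting packet ages by one slot
        return np.where(success, 0, grown)           # successful delivery resets the AoP to 0

    def average_aop(aop_trace):
        """aop_trace: T x K array of Delta_t^k; returns the per-user averages of (7)."""
        return aop_trace.mean(axis=0)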

III RL Environment and DQN Architecture

We resort to the tool-set of DRL and use DQN to tackle the problem of RA in MTC networks.

III-A Environment

In this case, the environment is the available physical resource, i.e., the channel, as shown in Fig. 1. Every user interacts with the environment by taking an action and receiving a reward signal.

III-B State and Actions

By state we mean the memory content at user k at time t. In this context, we define the state as the local history h_t^k of each user.³ The environment is partially observable to each user, i.e., each user is unaware of the histories of the other users. The action of user k is to transmit (a_t^k = 1) or to wait (a_t^k = 0).

³ We will use the terms history and state interchangeably in the rest of the paper.
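As a small sketch, the local history could be fed to the Q-network as a length-3 vector; the raw 0/1 encoding below is an assumption, since the paper does not spell out the exact input representation.

    import numpy as np

    def encode_history(prev_action, prev_feedback, buffer_state):
        """Encode h_t^k = (a_{t-1}^k, f_{t-1}, b_t^k) as the DQN input vector."""
        return np.array([prev_action, prev_feedback, buffer_state], dtype=np.float32)

    print(encode_history(1, 0, 1))  # user transmitted, no success, packet still buffered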

III-C Reward

Let r_t^k be the immediate reward that user k obtains at the end of time slot t. The reward depends on the user's own action a_t^k and on the other users' actions a_t^j, j ≠ k. The accumulated discounted reward for user k is defined as

G_t^k = \sum_{i=0}^{\infty} γ^i r_{t+i}^k,    (8)

where γ ∈ [0, 1) is a discount factor.

We consider the reward function to be the number of successful transmissions. The reward at time slot t is calculated as

r_t = \sum_{k=1}^{K} Z_t^k.    (9)

The reward is global, i.e., all users receive the same reward, which indicates that the agents are fully cooperative.

Note that in this work we are not interested in optimizing the AoP, nor do we incorporate the AoP into the reward function, although doing so could be considered in order to optimize the AoP. We merely use the average AoP to assess the fairness achieved by different policies.
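As a small illustration of (8) and (9), assuming the global reward in a slot equals the number of delivered packets (at most one on this collision channel) and is shared by all users:

    def global_reward(actions):
        """Reward of eq. (9): number of successful transmissions in the slot."""
        return 1 if sum(actions) == 1 else 0   # identical to the feedback f_t

    def discounted_return(rewards, gamma=0.99):
        """Finite-horizon evaluation of the discounted return in eq. (8)."""
        g = 0.0
        for r in reversed(rewards):
            g = r + gamma * g
        return g

    print(discounted_return([0, 1, 0, 1], gamma=0.9))  # 0.9*1 + 0.9**3*1 = 1.629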

Fig. 1: Interaction of agents with the environment. At the beginning of each time slot t, each agent first performs an action a_t^k and then receives the feedback signal f_t from the receiver at the end of the time slot. Users update their buffers depending on the feedback signal and on new packet arrivals.

III-D Deep Q-Network (DQN)

In many RL algorithms, the basic idea is to estimate the action-value function (Q-function) Q(s, a) by using the Bellman equation and iteratively updating the Q-values at each time step in the following way:

Q(s_t, a_t) ← (1 − α) Q(s_t, a_t) + α ( r_t + γ \max_{a} Q(s_{t+1}, a) ),

where Q(s_t, a_t) is the old value and r_t + γ \max_{a} Q(s_{t+1}, a) is the learned value, obtained by receiving the reward r_t after taking the action a_t in state s_t, moving to the next state s_{t+1} and then taking the action that maximizes Q(s_{t+1}, ·); α is the learning rate. Commonly, a function approximator is used to estimate Q(s, a). In the DQN [12] algorithm, a neural network is used to estimate Q(s, a).

We use multi-agent DRL and incorporate the parameter-sharing method [6] to perform training in a centralized way for a common policy for all agents, using the experiences of all agents/users simultaneously. In this way, the Q-function does not depend on the user index k, which is why we may omit the subscript k in the notation. We use the experience replay technique by storing the experience samples of each user in a memory buffer D, and sampling them uniformly as mini-batches from D for training. Moreover, we use two neural networks as in [16]: the Q-network with parameters θ, which is used to evaluate and update the policy, and the target network with parameters θ^-. The parameters of the Q-network are frequently copied to the target network. This process is also depicted in Fig. 2. At every time step, the current parameters θ are updated by minimizing the Q-loss function

L(θ) = E [ ( r + γ \max_{a'} Q(h', a'; θ^-) − Q(h, a; θ) )^2 ],

where the expectation is over mini-batches of transitions (h, a, r, h') drawn from D. The learned common policy is deployed identically over the set of users, who take actions without coordination.
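The following simplified PyTorch sketch illustrates the ingredients described above: a single Q-network shared by all users (parameter sharing), a common replay buffer fed by every user's transitions, and a periodically synchronized target network. The hidden-layer sizes follow Section IV (30 and 20 units); the optimizer, learning rate, buffer size and batch size are illustrative assumptions, not the paper's settings.

    import random
    from collections import deque

    import torch
    import torch.nn as nn

    class QNet(nn.Module):
        """Shared Q-network: input h = (a_{t-1}, f_{t-1}, b_t), output Q-values for {wait, transmit}."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(3, 30), nn.ReLU(),
                                     nn.Linear(30, 20), nn.ReLU(),
                                     nn.Linear(20, 2))
        def forward(self, h):
            return self.net(h)

    q_net, target_net = QNet(), QNet()
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)   # learning rate is illustrative
    replay = deque(maxlen=50_000)                               # shared buffer D, size is illustrative
    gamma, batch_size = 0.99, 64                                # illustrative hyperparameters

    def store(h, a, r, h_next):
        """Every user pushes its own transition into the common buffer (parameter sharing)."""
        replay.append((h, a, r, h_next))

    def train_step():
        if len(replay) < batch_size:
            return
        batch = random.sample(replay, batch_size)
        h, a, r, h2 = map(lambda x: torch.tensor(x, dtype=torch.float32), zip(*batch))
        q = q_net(h).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * target_net(h2).max(dim=1).values
        loss = nn.functional.mse_loss(q, target)                # the Q-loss above
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    def sync_target():
        """Periodically copy the Q-network weights into the target network."""
        target_net.load_state_dict(q_net.state_dict())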

At each time slot t, each user k obtains the observation (a_{t-1}^k, f_{t-1}, b_t^k), updates its history h_t^k with it, and then feeds h_t^k to the DQN, whose outputs are the Q-values of all the available actions. User k follows the policy by drawing an action from the following distribution, calculated using the softmax policy [15]:

π(a | h_t^k) = \frac{\exp( Q(h_t^k, a) / τ )}{\sum_{a' ∈ \{0,1\}} \exp( Q(h_t^k, a') / τ )},    (10)

where τ is the temperature parameter, which is used to adjust the balance between exploration and exploitation.
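A small numerical sketch of (10), under the standard temperature convention assumed in the reconstruction above (a larger τ spreads probability mass over both actions, a smaller τ approaches the greedy policy):

    import numpy as np

    def softmax_policy(q_values, tau=1.0):
        """Softmax action distribution of eq. (10) over the Q-values of {wait, transmit}."""
        z = np.asarray(q_values, dtype=float) / tau
        z -= z.max()                        # subtract the max for numerical stability
        p = np.exp(z)
        return p / p.sum()

    def sample_action(q_values, tau=1.0, rng=np.random.default_rng()):
        return rng.choice(len(q_values), p=softmax_policy(q_values, tau))

    print(softmax_policy([0.2, 1.5], tau=0.5))  # strongly favours a = 1 (transmit)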

Fig. 2: DQN training schematic showing policy network, target network and experience replay.

IV Experiments

We perform simulations for K = 10 users. For DQN training, we use a fully connected feedforward neural network with two hidden layers of 30 and 20 neurons, respectively.

IV-A DQN Training Setup

Transfer Learning

We have found that the DQN trains well if the total arrival rate λ is neither too large nor too small; in the extreme cases, the DQN may not be able to observe and explore all possible states well enough. Therefore, a transfer learning approach is considered. More specifically, once the DQN is trained for a given value of λ, the weights are transferred to train for lower or higher values of λ. In the simulations, we start training the DQN for an intermediate value of λ and use the resulting weights to train for higher values of λ.
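A sketch of this transfer-learning sweep: the Q-network trained at one arrival rate initializes training at the next rate, instead of training each rate from scratch. Here q_net is assumed to be a PyTorch module and train_for_rate is a hypothetical routine wrapping the DQN training loop sketched in Section III-D.

    import copy

    def train_over_rates(q_net, rates, train_for_rate):
        """Train sequentially over a sweep of arrival rates, reusing the learned weights."""
        trained = {}
        for lam in rates:                       # e.g., sweep lambda from moderate to high
            train_for_rate(q_net, lam)          # continue from the previously learned weights
            trained[lam] = copy.deepcopy(q_net.state_dict())
        return trained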

Adaptive learning rate

We consider an adaptive learning rate: starting from its initial value, it is decayed after each iteration (step) until it reaches its minimum value, and it is not reset when training for each new value of λ. To allow exploration, the softmax policy in (10) is kept explorative at the beginning of training for the initial value of λ and is gradually made greedier; the resulting setting is then kept fixed when training for the subsequent values of λ. The DQN is trained for a fixed number of time slots for each value of λ and evaluated over a separate window of time slots. The learning rate is updated periodically, while the weights of the target DQN network are updated at a fixed interval of time slots. We use the same discount factor γ in all experiments.
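An illustrative sketch of the learning-rate and temperature schedules described above; the actual initial values, decay factors, floors and update intervals are not reproduced in this text, so the numbers below are placeholders only.

    def decay_learning_rate(lr, decay=0.999, lr_min=1e-4):
        """Multiplicative decay applied after each training step, floored at lr_min."""
        return max(lr * decay, lr_min)

    def anneal_temperature(tau, decay=0.999, tau_min=0.05):
        """Gradually make the softmax policy in (10) greedier as training progresses."""
        return max(tau * decay, tau_min)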

IV-B Simulation Results and Discussion

Understanding the behaviour of the transmit probabilities for the different states helps to understand how the DQN learns an efficient policy. The history as defined in (2) can take eight different values. However, we shall only focus on the states in which the buffer is full, b_t^k = 1, i.e., the four states (a_{t-1}^k, f_{t-1}, b_t^k) given by (0,0,1), (0,1,1), (1,0,1) and (1,1,1). For the other four states, where b_t^k = 0, the action is naturally a_t^k = 0. Fig. 3 depicts how the learned transmit probabilities (policy) vary with the arrival rate λ. When the state of the user is (1,1,1), i.e., the last transmission of user k was successful and there is another packet in its buffer, it is evident that for the lower arrival rates the DQN learns to transmit immediately after a success, which resembles the immediate-first-transmission (IFT) policy [7, 14]. However, for higher arrival rates, it may not be reasonable to transmit as soon as the packet arrives. This is reflected in the transmit probability for (1,1,1), which starts decreasing beyond a certain arrival rate. Moreover, if user k did not transmit in the last time slot and the slot was not successful, i.e., state (0,0,1), then the transmit probability drops almost to 0. The most interesting states are (0,1,1) and (1,0,1). In (0,1,1), when the last transmission on the channel was successful but user k did not transmit, the transmission probability is high for lower arrival rates, as expected, and it gradually decreases as the arrival rate increases. Moreover, the transmit probability in state (1,0,1) remains almost constant across arrival rates. These two states provide each user more degrees of freedom to adjust the transmit probability.

Fig. 3: DQN-RA policy transition for each arrival rate λ. Each legend entry denotes the state (a_{t-1}^k, f_{t-1}, b_t^k) of user k with b_t^k = 1.

The main results of this work are illustrated in Fig. 4 and Fig. 5, where it is clear that the proposed DQN-based RA scheme (DQN-RA) learns a policy that outperforms both the SEB and nSEB schemes in terms of both throughput and fairness, even though DQN-RA was not optimized specifically for fairness. One reason is that the users have symmetric arrival rates, and cooperation among them during centralized training allows them to share the channel in a fair way. The proposed DQN-RA scheme is compared to SEB and nSEB for two values of the backoff factor, 1.35 (1.35-nSEB, 1.35-SEB) and 2 (B-nSEB, B-SEB). The results show how the throughput-fairness tradeoff varies with the backoff factor. As expected, the nSEB scheme performs better in terms of throughput than the SEB scheme. On the other hand, SEB exhibits a lower average AoP than nSEB. This also shows how different conditions and models can affect the performance of slotted ALOHA.

Furthermore, we have also observed that, in terms of AoP, the nSEB scheme depends on the total number of time slots T over which an experiment is evaluated. The AoP of nSEB increases with T because of the capture effect: the average AoP becomes higher if a single user occupies the channel for some time. In contrast, the average AoP takes moderate values for SEB as well as for the proposed DQN-RA scheme, provided that T is large enough. Note that the same value of T was used to produce all results for the EB schemes.

Fig. 4: Average Throughput.
Fig. 5: Average AoP in log-scale.

In Fig. 6, we show standard boxplots of the AoP of all users over time T. The comparison is shown only between the binary EB schemes and the proposed DQN-RA scheme, for the arrival rate that yields the highest AoP for B-nSEB. In Fig. 6(a) we show the mean values for the 10 users, while in Fig. 6(b)-6(d) we can observe the AoP distribution of each user for B-nSEB, B-SEB and DQN-RA, respectively. The unfairness of B-nSEB can be clearly observed from the results: some users, like users 4 and 5, transmit almost constantly, while user 2 seldom gets access to the channel, which also leads to a significantly larger mean AoP. The results for B-SEB and DQN-RA are more interesting: both have similar medians (about five time slots) for all users. The 25th percentile is 1.0 time slots for B-SEB and 2.5 for DQN-RA, while the 75th percentile is about 20 time slots for B-SEB and eight for DQN-RA. This spread indicates that DQN-RA is significantly fairer than B-SEB: DQN-RA users wait less than eight time slots 75% of the time, while B-SEB users can wait up to 20 time slots. This means that, in short bursts, some B-SEB users take over the channel and wait only one time slot, while making the other users wait longer. This happens frequently enough that every user eventually gets a turn taking over the channel, which ensures overall fairness. This channel capture by individual B-SEB users also explains why B-SEB has the worst throughput of all the methods. The proposed policy not only achieves the highest throughput, but it is also significantly fairer than all the other methods.

(a) Average AoP of users
(b) B-nSEB
(c) B-SEB
(d) DQN-RA
Fig. 6: AoP distribution of all users for the proposed DQN-based scheme and the baseline binary EB schemes.

V Conclusion and Future Work

In this work, we showed the potential of RL for RA in wireless networks. We proposed a DRL-based transmission policy for RA, DQN-RA, for different arrival rates, and we proposed to use the AoP as a metric to measure the fairness of the proposed scheme. Our results showed that the proposed solution, using only the previous action and feedback, learns a policy that outperforms standard baseline EB schemes. Moreover, we analyzed how the DQN-RA transmission policy changes and adapts to different traffic arrival rates. However, we have not addressed the scalability of this approach to a larger number of users, which we leave for future work; for this purpose, we will employ more past actions and feedback signals for learning.

Acknowledgment

This work was supported by the European Union H2020 Research and Innovation Programme through Marie Skłodowska Curie action (MSCA-ITN-ETN 813999 WINDMILL) and the Spanish Ministry of Economy and Competitiveness under Project RTI2018-099722-B-I00 (ARISTIDES).

References

  • [1] N. Abramson (1977) The throughput of packet broadcasting channels. IEEE Transactions on Communications 25 (1), pp. 117–128. Cited by: §I.
  • [2] L. Barletta, F. Borgonovo, and I. Filippini (2018) The throughput and access delay of slotted-aloha with exponential backoff. IEEE/ACM Transactions on Networking 26 (1), pp. 451–464. External Links: Document Cited by: §I.
  • [3] Y. Chu, S. Kosunalp, P. D. Mitchell, D. Grace, and T. Clarke (2015) Application of reinforcement learning to medium access control for wireless sensor networks.

    Engineering Applications of Artificial Intelligence

    46, pp. 23–32.
    External Links: ISSN 0952-1976, Document Cited by: §I.
  • [4] L. de Alfaro, M. Zhang, and J. J. Garcia-Luna-Aceves (2020) Approaching fair collision-free channel access with slotted aloha using collaborative policy-based reinforcement learning. In 2020 IFIP Networking Conference (Networking), Vol. , pp. 262–270. External Links: Document Cited by: §I, footnote 1.
  • [5] J. Goodman, A. G. Greenberg, N. Madras, and P. March (1988-06) Stability of binary exponential backoff. J. ACM 35 (3), pp. 579–602. External Links: ISSN 0004-5411, Link, Document Cited by: §I.
  • [6] J. K. Gupta, M. Egorov, and M. Kochenderfer (2017) Cooperative multi-agent control using deep reinforcement learning. In Autonomous Agents and Multiagent Systems, G. Sukthankar and J. A. Rodriguez-Aguilar (Eds.), Cham, pp. 66–83. Cited by: §III-D.
  • [7] B. Hajek and T. van Loon (1982) Decentralized dynamic control of a multiaccess broadcast channel. IEEE Transactions on Automatic Control 27 (3), pp. 559–569. External Links: Document Cited by: §I, §IV-B.
  • [8] R. Jain, D. Chiu, and W. Hawe (1998) A quantitative measure of fairness and discrimination for resource allocation in shared computer systems. ArXiv cs.NI/9809099. Cited by: footnote 1.
  • [9] B. Kwak, N. Song, and L.E. Miller (2005) Performance analysis of exponential backoff. IEEE/ACM Transactions on Networking 13 (2), pp. 343–355. External Links: Document Cited by: §I.
  • [10] H. Li (2010) Multi-agent Q-learning for competitive spectrum access in cognitive radio systems. In 2010 Fifth IEEE Workshop on Networking Technologies for Software Defined Radio Networks (SDR), Vol. , pp. 1–6. Cited by: §I.
  • [11] J. L. Massey (1987) Some new approaches to random-access communication. In Proceedings of the 12th IFIP WG 7.3 International Symposium on Computer Performance Modelling, Measurement and Evaluation, Performance ’87, NLD, pp. 551–569. External Links: ISBN 0444703470 Cited by: §I.
  • [12] V. Mnih et al. (2015-02) Human-level control through deep reinforcement learning. Nature 518 (7540), pp. 529–533. External Links: ISSN 00280836, Link Cited by: §III-D, §III.
  • [13] O. Naparstek and K. Cohen (2019) Deep multi-user reinforcement learning for distributed dynamic spectrum access. IEEE Transactions on Wireless Communications 18 (1), pp. 310–323. Cited by: §I.
  • [14] P. R. Srikanta Kumar and L. Merakos (1984) Distributed control of broadcast channels with acknowledgement feedback: stability and performance. In The 23rd IEEE Conference on Decision and Control, Vol. , pp. 1143–1148. External Links: Document Cited by: §I, §IV-B.
  • [15] R. S. Sutton and A. G. Barto (2018) Reinforcement learning: an introduction. (second edition). A Bradford Book, Cambridge, MA, USA. External Links: ISBN 0262039249 Cited by: §III-D.
  • [16] H. van Hasselt, A. Guez, and D. Silver (2015) Deep reinforcement learning with double Q-learning. External Links: 1509.06461 Cited by: §III-D.
  • [17] S. Wang, H. Liu, P. H. Gomes, and B. Krishnamachari (2018) Deep reinforcement learning for dynamic multichannel access in wireless networks. IEEE Transactions on Cognitive Communications and Networking 4 (2), pp. 257–265. External Links: Document Cited by: §I.
  • [18] Y. Yu, T. Wang, and S. C. Liew (2019) Deep-reinforcement learning multiple access for heterogeneous wireless networks. IEEE Journal on Selected Areas in Communications 37 (6), pp. 1277–1290. External Links: Document Cited by: §I.
  • [19] C. Zhong, Z. Lu, M. C. Gursoy, and S. Velipasalar (2018) Actor-critic deep reinforcement learning for dynamic multichannel access. In 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Vol. , pp. 599–603. External Links: Document Cited by: §I.