Intelligent Power Control for Spectrum Sharing: A Deep Reinforcement Learning Approach

12/20/2017
by   Jun Fang, et al.

We consider the problem of spectrum sharing in a cognitive radio system consisting of a primary user and a secondary user. The primary user and the secondary user work in a non-cooperative manner and independently adjust their respective transmit powers. Specifically, the primary user is assumed to update its transmit power based on a pre-defined power control policy. The secondary user does not have any knowledge of the primary user's transmit power, nor of its power control strategy. The objective of this paper is to develop a learning-based power control method for the secondary user in order to share the common spectrum with the primary user. To assist the secondary user, a set of sensor nodes is spatially deployed to collect the received signal strength information at different locations in the wireless environment. We develop a deep reinforcement learning-based method with which the secondary user can intelligently adjust its transmit power such that, after a few rounds of interaction with the primary user, both the primary user and the secondary user can transmit their own data successfully with the required qualities of service. Our experimental results show that the secondary user can interact with the primary user efficiently to reach a goal state (defined as a state in which both the primary and the secondary users can successfully transmit their own data) from any initial state within a small number of iterations.

I Introduction

The dramatically increasing demand for spectrum resources calls for new intelligent methods to enhance spectrum efficiency. According to the Federal Communications Commission (FCC) [1], the spectrum is in general severely underutilized, with some bands exhibiting very low utilization rates. To improve spectrum efficiency, the notion of spectrum sharing with secondary users through cognitive radios has attracted considerable attention [2]. Specifically, users from a secondary network are allowed to access the spectrum owned by licensed users (also called primary users) without causing harmful interference.

According to the role of the primary user, the operation of spectrum sharing or dynamic spectrum access can be classified into a passive primary user model and an active primary user model [3]. In many spectrum sharing studies, e.g. [4, 5, 6, 7], it is assumed that the operations of secondary users are transparent to the primary user, so that the primary user does not need to adapt its transmission parameters. This transparency can be accomplished by letting the secondary user perform spectrum sensing to explore idle spectrum [4], or by strictly controlling its transmit power such that the interference to the primary network stays below a desired threshold [5, 6, 7]. However, some works in the literature, e.g. [3, 8, 9, 10], also considered an active model in which some (cooperative or non-cooperative) interaction between the primary user and the secondary user is allowed to obtain improved transmission performance or economic compensation. For example, in [3], the spectrum sharing task is formulated as a Nash bargaining game which requires interaction between the primary user and the secondary user to reach a desired equilibrium. Also, in [10], to achieve spectrum sharing, the primary user and the secondary user are allowed to interact with each other to update their respective transmit powers. For the active model, a dynamic power control strategy is necessary for all users in the network such that a minimum quality of service (QoS) for successful data transmission is satisfied for both the primary and the secondary users.

Most existing works address this dynamic power control problem from an optimization perspective. In [11], a distributed constrained power control (DCPC) algorithm was proposed. Given the measured signal-to-interference-plus-noise ratio (SINR) and the required SINR threshold, the DCPC algorithm iteratively adjusts the transmit power of each transmitter such that all receivers are provided with their desired QoS. Based on [11], modified approaches for different constraints or scenarios were developed [12, 13, 14, 10, 15, 16], and other optimization-based methods were also proposed in recent years [17, 18, 19]. Besides optimization-based methods, power allocation from a game-theoretic point of view has also been studied [20, 21, 22, 23]. In [21], the power allocation problem was formulated as a noncooperative game with selfish users, a sufficient condition for the existence of a Nash equilibrium was provided, and a stochastic power adaptation scheme based on conjecture-based multi-agent Q-learning was developed. However, the proposed approach requires each user to know the channel state information of every transmitter-receiver pair in the network, which may be infeasible in practice.

Reinforcement learning [24], of which Q-learning is a classical instance, has been explored for cognitive radio applications such as dynamic spectrum access [25, 26, 27, 28, 29, 30, 31]. Using experience and rewards obtained from the environment, users iteratively optimize their strategies to achieve their goals. Recently, deep reinforcement learning was introduced and has proven its competence on challenging tasks such as Go and Atari games [32, 33, 34]. Unlike conventional reinforcement learning, which is limited to domains with handcrafted features or low-dimensional observations, agents trained with deep reinforcement learning are able to learn their action-value policies directly from high-dimensional raw data such as images or videos [34]. Also, as will be shown by our experimental results, deep reinforcement learning can learn an effective action-value policy even when the state observations are corrupted by random noise or measurement errors, whereas the conventional Q-learning approach is impractical for such problems due to the infinite number of states in the presence of random noise. This characteristic makes deep reinforcement learning suitable for wireless communication applications, whose state measurements are generally random in nature.

In this paper, we consider a simple cognitive radio scenario consisting of a primary user and a secondary user. The primary user and the secondary user work in a non-cooperative manner, where the primary user adjusts its transmit power based on its own pre-defined power control policy. The objective is to let the secondary user learn an intelligent power control policy through its interaction with the primary user. We assume that the secondary user does not have any knowledge of the primary user's transmit power or its power control strategy. To assist the secondary user, a number of sensors are spatially deployed to collect the received signal strength (RSS) information at different locations in the wireless environment. We develop an intelligent power control policy for the secondary user by resorting to the deep reinforcement learning approach. Specifically, deep reinforcement learning, rather than conventional reinforcement learning, is used to overcome the difficulty caused by random variations in the RSS measurements. Our experimental results show that, with the aid of the learned power control policy, the secondary user can intelligently adjust its transmit power such that a goal state can be reached from any initial state within a small number of transition steps.

The rest of the paper is organized as follows. Table I specifies the frequently-used symbols in this paper. The system model and the problem formulation are discussed in Section II. In Section III, we develop a deep reinforcement learning algorithm for power control for the secondary user. Experimental results are provided in Section IV, followed by concluding remarks in Section V.

$p_1(t)$: transmit power of the primary user
$p_2(t)$: transmit power of the secondary user
$h_{ij}$: channel gain from transmitter $i$ to receiver $j$
$\sigma_j^2$: noise power of receiver $j$
$\gamma_j$: signal-to-interference-plus-noise ratio at receiver $j$
$\eta_j$: minimum SINR requirement for receiver $j$
$N$: number of sensor nodes
$\mathrm{S}_n$: the $n$th sensor node
$\rho_n(t)$: receive power at sensor node $\mathrm{S}_n$
$g_{in}$: path loss between transmitter $i$ and sensor $\mathrm{S}_n$
$\sigma_w^2$: variance of the Gaussian random variable
$s(t)$: state of the Markov decision process
$a(t)$: action of the Markov decision process
$r(t)$: reward of the Markov decision process
TABLE I: Table of Symbols
Fig. 1: A schematic for spectrum sharing in cognitive radio networks.

II System Model

Consider a cognitive radio network consisting of a primary user and a secondary user, where the secondary user aims to share a common spectrum resource with the primary user without causing harmful interference to the primary user. The primary user consists of a primary transmitter (PT) and a primary receiver (PR), and the secondary user consists of a secondary transmitter (ST) and a secondary receiver (SR); see Fig. 1. In our setup, we assume that the primary user and the secondary user work in a non-cooperative way, in which the primary user is unaware of the existence of the secondary user and adjusts its transmit power based on its own power control policy. Nevertheless, it should be noted that, since the power control policy of the primary user depends on the environment (cf. (2) and (4)), the action taken by the secondary user at the current time affects the primary user's next move in an implicit way. There is also no communication between the primary network and the secondary network; thus the secondary user has no knowledge of the primary user's transmit power or its power control policy. For simplicity, we assume at this point that the primary user and the secondary user synchronously update their respective transmit powers, and that the transmit power is adjusted on a time-frame basis. We will show later that our proposed scheme also works when this synchronization assumption does not hold.

The objective here is to help the secondary user learn an efficient power control policy such that, after a few rounds of power adjustment, both the primary user and the secondary user are able to transmit their data successfully with the required QoS. Clearly, this task cannot be accomplished if the secondary user only knows its own transmit power. To assist the secondary user, a set of sensor nodes is employed to measure the received signal strength (RSS) at different locations in the wireless environment. The RSS measurements are related to both users' transmit powers and thus reveal the state information of the system. We assume that the RSS information is accessible to the secondary user. Note that collecting RSS information from spatially distributed sensor nodes is a basic requirement for many applications, e.g. source localization [35]. For our problem, each node only needs to report the RSS information once per time frame, which requires only a low data rate. Therefore, conventional technologies such as ZigBee [36], which delivers low-latency communication for wireless mesh networks, can be employed to provide timely feedback of the RSS information from the sensor nodes to the secondary user.

For both the primary user and the secondary user, the QoS is measured in terms of the SINR. Let $p_1$ and $p_2$ denote the transmit power of the primary user and the secondary user, respectively. The SINR at the $j$th receiver is given as

$$\gamma_j = \frac{p_j h_{jj}}{p_i h_{ij} + \sigma_j^2}, \qquad i, j \in \{1, 2\},\ i \neq j, \tag{1}$$

where $h_{ij}$ denotes the channel gain from transmitter $i$ to receiver $j$, $\sigma_j^2$ is the noise power at receiver $j$, and the index 1 (2) refers to the primary (secondary) user. We assume that the primary receiver and the secondary receiver have to satisfy minimum SINR requirements for successful reception, i.e. $\gamma_1 \ge \eta_1$ and $\gamma_2 \ge \eta_2$.
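
As a concrete numerical illustration of (1) and the QoS check, the following sketch computes the two SINRs for a candidate power pair; the channel gains, noise powers, and SINR thresholds used here are illustrative assumptions rather than the settings used in our experiments.

```python
import numpy as np

def sinr(p_own, p_other, g_own, g_cross, noise_power):
    """SINR at a receiver: desired power over interference plus noise, cf. (1)."""
    return (p_own * g_own) / (p_other * g_cross + noise_power)

# Illustrative (assumed) parameters: direct gains h_jj, cross gains h_ij, noise powers, thresholds.
h11, h22 = 1.0, 1.0          # direct-link gains (primary, secondary)
h21, h12 = 0.2, 0.2          # cross-link (interference) gains
sigma1, sigma2 = 1e-2, 1e-2  # noise powers at PR and SR (Watts)
eta1, eta2 = 2.0, 1.5        # minimum SINR requirements

p1, p2 = 0.5, 0.2            # candidate transmit powers (Watts)
gamma1 = sinr(p1, p2, h11, h21, sigma1)
gamma2 = sinr(p2, p1, h22, h12, sigma2)
print(f"gamma1={gamma1:.2f}, gamma2={gamma2:.2f}, goal state: {gamma1 >= eta1 and gamma2 >= eta2}")
```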

To meet the QoS requirement, the primary user is supposed to adaptively adjust its transmit power based on its own power control policy. In this paper, two different power control strategies are considered for the primary user; note that our proposed method also works if the primary user adopts other power control policies. For the first strategy, the transmit power of the primary user is updated according to the classical power control algorithm [11]

$$p_1(t+1) = \mathcal{D}\!\left(\frac{\eta_1}{\gamma_1(t)}\, p_1(t)\right), \tag{2}$$

where $\gamma_1(t)$ denotes the SINR measured at the primary receiver at the $t$th time frame, $p_1(t)$ denotes the transmit power at the $t$th time frame (recall that the transmit power is adjusted on a time-frame basis), and $\mathcal{D}(\cdot)$ is a discretization operation which maps its continuous-valued argument onto a set of discrete power levels

$$\mathcal{P} \triangleq \{\bar{p}_1, \bar{p}_2, \ldots, \bar{p}_L\}, \tag{3}$$

where $\bar{p}_1 < \bar{p}_2 < \cdots < \bar{p}_L$. More precisely, we let $\mathcal{D}(x)$ equal the nearest discrete level that is no less than $x$, and let $\mathcal{D}(x) = \bar{p}_L$ if $x > \bar{p}_L$. For the second power control strategy, suppose the transmit power at the $t$th time frame is $p_1(t) = \bar{p}_k$ for some $k \in \{1, \ldots, L\}$. The transmit power of the primary user is updated according to

$$p_1(t+1) = \begin{cases} \bar{p}_{k+1}, & \text{if } \hat{\gamma}_1(t+1) < \eta_1 \text{ and } k < L, \\[4pt] \bar{p}_{k-1}, & \text{if } \dfrac{\bar{p}_{k-1}}{\bar{p}_k}\, \hat{\gamma}_1(t+1) \ge \eta_1 \text{ and } k > 1, \\[4pt] \bar{p}_k, & \text{otherwise}, \end{cases} \tag{4}$$

where $\hat{\gamma}_1(t+1)$ denotes the 'predicted' SINR at the $(t+1)$th time frame, i.e. the SINR the primary receiver would experience if both users kept their current transmit powers. We see that, compared to (2), the power control policy (4) has a more conservative behavior: it updates the transmit power in a stepwise manner. Specifically, it increases the power by one step when the predicted SINR falls below the requirement and the maximum level has not been reached, and decreases the power by one step when the requirement would still be met at the next lower level and the minimum level has not been reached; otherwise it stays at the current power level.
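
A minimal sketch of the first update rule (2) is given below, assuming an illustrative discrete power set; the discretization maps the continuous-valued target onto the nearest level that is no less than it and saturates at the largest level.

```python
import numpy as np

P_LEVELS = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 2.0])  # assumed discrete power set (Watts)

def discretize(x, levels=P_LEVELS):
    """Map x to the smallest level >= x; saturate at the largest level."""
    idx = np.searchsorted(levels, x, side="left")
    return levels[min(idx, len(levels) - 1)]

def primary_update_dcpc(p1, gamma1, eta1=2.0):
    """Policy (2): scale the current power by the SINR deficit, then discretize."""
    return discretize(eta1 / gamma1 * p1)

# Example: the measured SINR is below the requirement, so the primary user raises its power.
print(primary_update_dcpc(p1=0.1, gamma1=1.0, eta1=2.0))  # -> 0.5
```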

Suppose $N$ sensors are deployed to spatially sample the RSS information. Let $\mathrm{S}_n$ denote the $n$th sensor node, and $\rho_n(t)$ denote the receive power at sensor $\mathrm{S}_n$ at the $t$th frame. In our paper, the following model is used to simulate the state (i.e. RSS) observations

$$\rho_n(t) = g_{1n}\, p_1(t) + g_{2n}\, p_2(t) + w_n(t), \qquad n = 1, \ldots, N, \tag{5}$$

where $p_1(t)$ and $p_2(t)$ represent the transmit power of the primary user and the secondary user, respectively, $g_{1n}$ denotes the path loss between the primary transmitter and sensor $\mathrm{S}_n$, $g_{2n}$ denotes the path loss between the secondary transmitter and sensor $\mathrm{S}_n$, and $w_n(t)$, a zero-mean Gaussian random variable with variance $\sigma_w^2$, is used to account for the random variation caused by shadowing effects and estimation errors. For free-space propagation, according to the Friis law [37], $g_{1n}$ and $g_{2n}$ are respectively given by

$$g_{1n} = \left(\frac{\lambda}{4\pi d_{1n}}\right)^2, \qquad g_{2n} = \left(\frac{\lambda}{4\pi d_{2n}}\right)^2, \tag{6}$$

where $\lambda$ is the signal wavelength and $d_{1n}$ ($d_{2n}$) denotes the distance between the primary (secondary) transmitter and node $\mathrm{S}_n$.
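
The observation model (5)-(6) can be simulated as in the following sketch; the wavelength, sensor distances, and noise standard deviation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def friis_gain(wavelength, d):
    """Free-space path gain (Friis law), cf. (6)."""
    return (wavelength / (4.0 * np.pi * d)) ** 2

def rss_observations(p1, p2, d1, d2, wavelength=0.125, sigma_w=1e-9):
    """Noisy RSS vector at the N sensors, cf. (5)."""
    g1 = friis_gain(wavelength, d1)      # path gains: primary transmitter -> sensors
    g2 = friis_gain(wavelength, d2)      # path gains: secondary transmitter -> sensors
    noise = rng.normal(0.0, sigma_w, size=d1.shape)
    return g1 * p1 + g2 * p2 + noise

N = 10
d1 = rng.uniform(50.0, 300.0, size=N)    # distances to the primary transmitter (m)
d2 = rng.uniform(50.0, 300.0, size=N)    # distances to the secondary transmitter (m)
state = rss_observations(p1=0.5, p2=0.1, d1=d1, d2=d2)
print(state.shape)  # (10,) -- this RSS vector serves as the state fed back to the secondary user
```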

We also assume that the transmit power of the secondary user is chosen from a finite set

$$\mathcal{A} \triangleq \{a_1, a_2, \ldots, a_M\}, \tag{7}$$

where $a_1 < a_2 < \cdots < a_M$. The objective of the secondary user is to learn how to adjust its transmit power based on the RSS information collected at each time frame, such that after a few rounds of power adjustment both the primary user and the secondary user can meet their respective QoS requirements for successful data transmission. Note that we suppose there exists at least one pair of transmit powers $(p_1, p_2) \in \mathcal{P} \times \mathcal{A}$ for which the primary receiver and the secondary receiver satisfy their respective QoS (SINR) requirements, i.e. $\gamma_1 \ge \eta_1$ and $\gamma_2 \ge \eta_2$.

Fig. 2: Interaction between the secondary user and the environment (i.e. the primary user).

III A Deep Reinforcement Learning Approach for Power Control

We see that the secondary user, at each time frame, has to take an action (i.e. choose a transmit power from the pre-specified power set $\mathcal{A}$) based on its current state

$$s(t) \triangleq [\rho_1(t),\, \rho_2(t),\, \ldots,\, \rho_N(t)]^T. \tag{8}$$

This power control process is essentially a Markov decision process (MDP) because, after the decision maker (i.e. the secondary user) chooses an action $a(t)$ in state $s(t)$, the process moves into a new state $s(t+1)$ which depends only on the current state $s(t)$ and the decision maker's action $a(t)$; given $s(t)$ and $a(t)$, the next state is conditionally independent of all previous states and actions. Also, after moving into the new state, the decision maker receives a corresponding reward $r(t)$, which is defined to reflect whether the QoS requirements of both the primary and the secondary users are satisfied.

The interaction between the secondary user and the environment is shown in Fig. 2. Note that here the decision maker (secondary user) is assumed to know whether the transmission between the primary transmitter and the primary receiver is successful or not. In practice, such knowledge may be obtained by monitoring an acknowledgment signal sent by the primary receiver to indicate successful receipt of a transmission from the primary transmitter.
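
Although the exact reward values are a design choice, a simple reward consistent with the above description can be sketched as follows; the specific numbers are assumptions for illustration only.

```python
def reward(gamma1, gamma2, eta1, eta2, goal_reward=10.0, penalty=-1.0):
    """Reward for the MDP: positive only in a goal state (both SINR requirements met).
    The values 10.0 and -1.0 are illustrative assumptions, not the paper's settings."""
    if gamma1 >= eta1 and gamma2 >= eta2:
        return goal_reward
    return penalty
```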

The core problem of MDPs is to learn a "policy" for the decision maker: a function $\pi$ that specifies the action that the decision maker will choose when in state $s$. More precisely, the goal of the secondary user is to learn a policy for selecting its action based on the current state in a way that maximizes a discounted cumulative reward, which is defined as [24]

$$R_t \triangleq \sum_{\tau = t}^{T} \beta^{\tau - t}\, r(\tau), \tag{9}$$

where $\beta \in [0, 1)$ is the discount factor and $T$ denotes the time frame at which the goal state is reached. For our problem, the goal state is defined as a state in which $\gamma_1 \ge \eta_1$ and $\gamma_2 \ge \eta_2$. Thus, the task becomes learning an optimal policy $\pi^\star$ that maximizes the expected discounted cumulative reward, i.e.

$$\pi^\star = \arg\max_{\pi}\ \mathbb{E}\left[R_t \mid \pi\right]. \tag{10}$$

Directly learning $\pi^\star$ is difficult. In reinforcement learning, Q-learning provides an alternative approach to solve (10) [38]. Instead of learning $\pi^\star$ directly, an action-value (also known as Q) function $Q(s, a)$ is introduced to evaluate the expected discounted cumulative reward after taking action $a$ in a given state $s$. Once such an action-value function is learned, the optimal policy can be constructed by simply selecting the action with the highest value in each state. The basic idea behind Q-learning and many other reinforcement learning algorithms is to iteratively update the action-value function according to a simple value iteration update rule

$$Q(s, a) \leftarrow r + \beta \max_{a'} Q(s', a'). \tag{11}$$

The above update rule is also known as the Bellman equation [39], in which $s'$ is the state resulting from applying action $a$ to the current state $s$. It has been proved that the value iteration algorithm (11) converges to the optimal action-value function, defined as the maximum expected discounted cumulative reward achievable by any policy after taking action $a$ in state $s$. For Q-learning, the number of states is finite and the action-value function is estimated separately for each state, thus leading to a Q-table (or Q-matrix), with its rows representing the states and its columns representing the possible actions. After the Q-table converges, one can select the action with the largest value of $Q(s, a)$ as the optimal action in state $s$.
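
For a finite, quantized state space, the update (11) can be implemented with a simple Q-table; the sketch below uses a standard learning-rate variant of (11) and assumed state/action space sizes. As discussed next, this tabular approach breaks down when the observations are continuous.

```python
import numpy as np

n_states, n_actions = 64, 6          # assumed sizes of a quantized state space and the action set
Q = np.zeros((n_states, n_actions))  # Q-table: rows are states, columns are actions
alpha, beta = 0.1, 0.9               # learning rate and discount factor

def q_update(s, a, r, s_next):
    """One Q-learning step: move Q[s, a] toward the Bellman target r + beta * max_a' Q[s', a']."""
    target = r + beta * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

q_update(s=0, a=2, r=1.0, s_next=5)  # example transition
```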

Unfortunately, due to the random variations in the RSS measurements, the state vector $s(t)$ is continuous-valued. As a result, the Q-learning approach is impractical for our problem since we could have an infinite number of states. To overcome this issue, we resort to the deep Q-network (DQN) proposed in [33]. Unlike the conventional Q-learning method that generates a finite action-value table, the DQN replaces the table with a deep neural network $Q(s, a; \theta)$ that approximates the action-value function, where $\theta$ denotes the weights of the Q-network. Specifically, given an input state $s$, the deep neural network yields an $M$-dimensional vector, with its $m$th entry representing the estimated value of choosing the action $a_m$ from $\mathcal{A}$.

The training data used to train the Q-network are generated as follows. Given the state $s(t)$, at iteration $i$ we either explore a randomly selected action with probability $\epsilon_i$, or select the action with the largest output $Q(s(t), a; \theta_i)$, where $\theta_i$ denotes the Q-network parameters at the current iteration. After taking the action $a(t)$, the secondary user receives a reward $r(t)$ and observes a new state $s(t+1)$. This transition $(s(t), a(t), r(t), s(t+1))$ is stored in the replay memory $\mathcal{M}$. The training of the Q-network begins when $\mathcal{M}$ has collected a sufficient number of transitions. Specifically, we randomly select a minibatch of transitions from $\mathcal{M}$, and the Q-network is trained by adjusting the parameters $\theta$ such that the following loss function is minimized

$$L_i(\theta_i) = \sum_{j \in \mathcal{B}_i} \big(y_j - Q(s_j, a_j; \theta_i)\big)^2, \tag{12}$$

in which $\mathcal{B}_i$ is the index set of the random minibatch used at the $i$th iteration, and $y_j$ is a target value estimated via the Bellman equation using the parameters from the current iteration, i.e.

$$y_j = r_j + \beta \max_{a'} Q(s'_j, a'; \theta_i). \tag{13}$$

Note that, unlike in traditional supervised learning, the targets for DQN training are updated as the weights $\theta$ are refined. For clarity, we summarize our proposed DQN training algorithm in Algorithm 1.

Fig. 3: Asynchronous update of the transmit power for the primary user and the secondary user.
   Initialize the replay memory $\mathcal{M}$ with a given buffer capacity
   Initialize the Q-network $Q(s, a; \theta)$ with random weights, and set the training iteration index $i = 0$
   Initialize $p_1(0)$ and $p_2(0)$, then obtain $s(0)$ via the observation model (5)
  for $t = 0, 1, 2, \ldots$ do
     Update $p_1(t+1)$ via the primary user's power control strategy (2) or (4)
     With probability $\epsilon_i$ select a random action $a(t) \in \mathcal{A}$, otherwise select $a(t) = \arg\max_{a} Q(s(t), a; \theta_i)$
     Obtain $s(t+1)$ via the random observation model (5) and observe the reward $r(t)$
     Store the transition $(s(t), a(t), r(t), s(t+1))$ in $\mathcal{M}$
     if $\mathcal{M}$ contains a sufficient number of transitions then
        Sample a random minibatch of transitions from $\mathcal{M}$, where the indexes of the minibatch are uniformly chosen at random
        Update $\theta_i$ by minimizing the loss function (12), where the targets are given by (13)
        Set $i \leftarrow i + 1$
     end if
     if $s(t+1)$ is a goal state then
        Reinitialize $p_1$ and $p_2$, then obtain a new initial state
     end if
  end for
Algorithm 1 DQN Training for Power Control
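
To make the procedure concrete, the following sketch mirrors Algorithm 1 with a small PyTorch Q-network and a uniform replay memory; the environment step is a stand-in for the interaction described above (primary power update, RSS observation (5), and QoS-based reward), and the network here is a generic two-hidden-layer MLP rather than the architecture used in Section IV. All layer sizes, reward values, and hyper-parameters are illustrative assumptions.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_SENSORS, N_ACTIONS = 10, 6          # state dimension (RSS vector) and |A|
BETA, BATCH, MIN_REPLAY = 0.9, 32, 500

q_net = nn.Sequential(                 # a small Q-network mapping s(t) to M action values
    nn.Linear(N_SENSORS, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)          # replay memory M

def env_step(action):
    """Placeholder for one interaction of Algorithm 1: the primary user updates its power via
    (2) or (4), the sensors report the RSS vector (5), and the reward reflects the QoS check."""
    next_state = np.random.randn(N_SENSORS).astype(np.float32)  # stand-in for (5)
    reward = float(np.random.rand() > 0.5)                       # stand-in for the QoS reward
    return next_state, reward

state = np.random.randn(N_SENSORS).astype(np.float32)
epsilon = 1.0
for it in range(2000):
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.from_numpy(state)).argmax())
    next_state, reward = env_step(action)
    replay.append((state, action, reward, next_state))
    state = next_state
    epsilon = max(0.05, epsilon - 1.0 / 2000)   # linear exploration decay, cf. (14)

    if len(replay) >= MIN_REPLAY:
        batch = random.sample(list(replay), BATCH)
        s = torch.tensor(np.stack([b[0] for b in batch]))
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        s2 = torch.tensor(np.stack([b[3] for b in batch]))
        with torch.no_grad():
            y = r + BETA * q_net(s2).max(dim=1).values        # Bellman targets (13)
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s_j, a_j; theta)
        loss = ((y - q_sa) ** 2).mean()                       # loss (12), averaged over the minibatch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```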

After training, the secondary user can choose the action which yields the largest estimated value $Q(s, a; \theta)$. For clarity, the proposed DQN-based power control scheme for the secondary user is summarized in Algorithm 2. We would like to point out that, during the DQN training process, the secondary user requires knowledge of whether the QoS requirements of the primary user and the secondary user are satisfied. Nevertheless, after the DQN is trained, the secondary user only needs the feedback from the sensors to decide its next transmit power.

We now discuss the convergence of the proposed power control policy. Suppose $s(t)$ is a goal state. If the transmit power of the secondary user remains unchanged, then it is easy to show that the next state is also a goal state, whichever of (2) and (4) is chosen by the primary user to update its transmit power. On the other hand, the secondary user will eventually learn to choose a transmit power such that the next state remains a goal state. Therefore, we can conclude that once the system reaches a goal state, it will stay in the goal state until the data transmission is over. Suppose the goal state is lost due to a discontinuity in data transmission, and the secondary user wants to restart a new transmission. In this case, learning is no longer required: the secondary user can simply select its transmit power according to the learned power control policy.

In our previous discussion, we assumed that the primary user and the secondary user synchronously update their respective transmit powers. Nevertheless, we would like to point out that this synchronization assumption is not strictly required by our proposed scheme. Suppose the time frames of the primary user and the secondary user are not strictly synchronized (see Fig. 3). Both the primary user and the secondary user update their transmit powers at the beginning of their respective time frames; that is, the primary user adjusts its transmit power at the start of its own frame, and the secondary user updates its transmit power a fixed offset later, where $T_f$ denotes the duration of each frame. Without loss of generality, we assume the offset is smaller than $T_f$. Clearly, our intelligent power control scheme would function the same as in the synchronous case if both the primary user and the secondary user gather the necessary information (the measured SINR for the primary user, and the sensors' RSS feedback for the secondary user) and make their decisions within the interval between the two consecutive update instants.

   Initialize $p_2(0)$, then obtain the initial state $s(0)$ from the sensors' RSS feedback
  for $t = 0, 1, 2, \ldots$ do
     Select $a(t) = \arg\max_{a} Q(s(t), a; \theta)$
     Obtain $s(t+1)$ from the sensors' RSS feedback
  end for
Algorithm 2 DQN-based Power Control Strategy
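
At run time, Algorithm 2 amounts to a single greedy forward pass through the trained network; a minimal sketch, where q_net denotes the trained Q-network and power_set the discrete set $\mathcal{A}$:

```python
import torch

def select_power(q_net, rss_vector, power_set):
    """Greedy action selection for Algorithm 2: pick the power level whose
    estimated action value Q(s, a; theta) is largest."""
    state = torch.as_tensor(rss_vector, dtype=torch.float32)
    with torch.no_grad():
        q_values = q_net(state)
    return power_set[int(q_values.argmax())]
```
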
Fig. 4: Loss function (a), success rate (b), and average number of transition steps (c) vs. the number of iterations used for training.
Fig. 5: Loss function (a), success rate (b), and average number of transition steps (c) vs. the number of iterations used for training.
Fig. 6: Loss function (a), success rate (b), and average number of transition steps (c) vs. the number of iterations used for training.

IV Experimental Results

We now carry out experiments to illustrate the performance of our proposed DQN-based power control algorithm (codes are available at http://www.junfang-uestc.net/codes/DQN-power-control.rar). In our experiments, the transmit power (in Watts) of both the primary user and the secondary user is chosen from a pre-defined discrete set, and the noise power at the primary and secondary receivers is set to a common value. For simplicity, the channel gains from the primary/secondary transmitter to the primary/secondary receiver are assumed to be fixed. The minimum SINR requirements $\eta_1$ and $\eta_2$ for successful reception at the primary and secondary receivers are set such that there exists a pair of transmit powers ensuring that the QoS requirements of both the primary user and the secondary user are satisfied. A total of $N$ sensors are employed to collect the RSS information to assist the secondary user in learning a power control policy. The distance between each transmitter and each sensor node is drawn uniformly at random from a fixed interval (in meters).

Fig. 7: Loss function (a), success rate (b), and average number of transition steps (c) vs. the number of iterations used for training.
Fig. 8: Loss function (a), success rate (b), and average number of transition steps (c) vs. the number of iterations used for training.
Fig. 9: Loss function (a), success rate (b), and average number of transition steps (c) vs. the number of iterations used for training.

In our experiments, the deep neural network (DNN) used to approximate the action-value function consists of three fully-connected feedforward hidden layers. Rectified linear units (ReLUs) are employed as the activation function for the first and the second hidden layers; a ReLU outputs 0 if its input is negative and passes the input through unchanged otherwise. For the last hidden layer, the tanh function is used as the activation function. The Adam algorithm [40] is adopted for updating the weights $\theta$, with a fixed minibatch size. The replay memory $\mathcal{M}$ contains the most recent transitions, and in each iteration the training of the Q-network begins only when $\mathcal{M}$ stores a sufficient number of transitions. The total number of training iterations is denoted by $I$. The probability $\epsilon_i$ of exploring new actions linearly decreases with the number of iterations from an initial value $\epsilon_{\max}$ to a final value $\epsilon_{\min}$. Specifically, at iteration $i$, we let

$$\epsilon_i = \epsilon_{\max} - \frac{i}{I}\left(\epsilon_{\max} - \epsilon_{\min}\right). \tag{14}$$

We use Algorithm 1 to train the network, and use Algorithm 2 to check its performance.
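
A sketch of the described architecture in PyTorch is given below; the layer widths, learning rate, and input/output dimensions are illustrative assumptions, since the exact sizes are not reproduced here.

```python
import torch.nn as nn
import torch.optim as optim

class QNetwork(nn.Module):
    """Three fully-connected hidden layers with ReLU, ReLU, and tanh activations,
    followed by a linear output layer producing one estimated value per candidate power."""
    def __init__(self, n_sensors, n_actions, h1=128, h2=64, h3=32):  # widths are assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors, h1), nn.ReLU(),
            nn.Linear(h1, h2), nn.ReLU(),
            nn.Linear(h2, h3), nn.Tanh(),
            nn.Linear(h3, n_actions),
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork(n_sensors=10, n_actions=6)
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)  # Adam, as in [40]; the lr is an assumption
```

The tanh on the last hidden layer bounds its outputs to [-1, 1] before the final linear layer produces the action values.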

The performance is evaluated via two metrics, namely the success rate and the average number of transition steps. The success rate is computed as the ratio of the number of successful trials to the total number of independent runs, where a trial is considered successful if the system reaches a goal state within 20 time frames. The average number of transition steps is defined as the average number of time frames required to reach a goal state, computed over the successful trials.
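
The two metrics can be computed from a batch of independent test runs as in the following sketch, where run_episode is a hypothetical helper that returns the number of time frames needed to reach a goal state in one run, or None if no goal state is reached within the allowed 20 frames.

```python
def evaluate(run_episode, n_runs=500, max_frames=20):
    """Success rate and average number of transition steps over independent runs."""
    steps = []
    for _ in range(n_runs):
        n = run_episode(max_frames)   # hypothetical: frames needed to reach a goal state, else None
        if n is not None:
            steps.append(n)
    success_rate = len(steps) / n_runs
    avg_steps = sum(steps) / len(steps) if steps else float("nan")
    return success_rate, avg_steps
```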

We now study the performance of the deep reinforcement learning approach. Specifically, we examine the loss function, the success rate, and the average number of transition steps as a function of the number of iterations used for training. During training, the loss function is calculated according to (12). After a given number of training iterations, the secondary user uses the trained network to interact with the primary user, and the success rate and the average number of transition steps are used to evaluate how well the network is trained. Results are averaged over independent runs, with a random initial state selected for each run. Fig. 4 plots the loss function, the success rate, and the average number of transition steps vs. the number of training iterations, where the primary user employs (2) to update its transmit power. We see that the secondary user, after a moderate number of training iterations, learns an efficient power control policy which ensures that a goal state can be reached quickly from any initial state with probability one. Fig. 5 and Fig. 6 depict the loss function, the success rate, and the average number of transition steps for different choices of the observation noise variance $\sigma_w^2$ and the number of sensors $N$. We see that the value of the loss function becomes larger when we increase the variance or decrease the number of sensors. Nevertheless, the learned policy is still very efficient and effective, attaining a success rate and an average number of transition steps similar to those in Fig. 4. This result demonstrates the robustness of the deep reinforcement learning approach.

Next, we examine the performance of the DQN-based power control method when the primary user employs the second power control policy (4) to update its transmit power. Since the policy (4) is more conservative, the task of learning an optimal power control strategy is more challenging. Fig. 7 depicts the loss function, the success rate, and the average number of transition steps as a function of the number of training iterations. We observe that, for this example, more training iterations are required to reach a success rate of one. Moreover, the learned policy requires a larger average number of transition steps to reach a goal state. The increased number of transition steps arises because the second policy used by the primary user only allows its transmit power to increase or decrease by a single level at each step, so more steps are needed to reach the goal state. Fig. 8 and Fig. 9 plot the loss function, the success rate, and the average number of transition steps for different choices of the observation noise variance $\sigma_w^2$ and the number of sensors $N$. For this example, we see that a large variance in the state observations and an insufficient number of sensors lead to performance degradation. In particular, the proposed method incurs a considerable performance loss when fewer sensors are deployed. This is because the random variation in the state observations makes different states less distinguishable from each other and prevents the agent from learning an effective policy, whereas using more sensors helps neutralize the effect of random variations.

Lastly, we compare the DQN-based power control method with the DCPC algorithm [11], which addresses power control within an optimization framework. For the DCPC algorithm, the primary user and the secondary user use the following power control policies to update their respective transmit powers:

$$p_1(t+1) = \min\!\left(p_{\max},\ \frac{\eta_1}{\gamma_1(t)}\, p_1(t)\right), \tag{15}$$

$$p_2(t+1) = \min\!\left(p_{\max},\ \frac{\eta_2}{\gamma_2(t)}\, p_2(t)\right), \tag{16}$$

where $p_{\max}$ denotes the maximum allowable transmit power.

For the DQN-based method, the primary user uses the policy (2) to update its transmit power, and the number of sensor nodes and the state observation noise variance are fixed. In Fig. 10, we examine the QoSs (i.e. SINRs) of the primary and secondary users as the iterative process evolves. We see that, although both schemes converge from the given initial point, our proposed DQN-based method requires only a few transition steps to reach a goal state, while the DCPC algorithm takes tens of steps to converge. We also observe that the DQN-based scheme converges to a solution that is close to the optimal solution obtained by the DCPC algorithm, which further corroborates the effectiveness of the proposed DQN-based scheme. Note that optimization-based techniques such as the DCPC algorithm require global coordination among all users in the cognitive network so that the primary user and the secondary user can interact in a cooperative way. In contrast, in our proposed scheme, the primary user follows its own rule to react to the environment; the interaction between the primary user and the secondary user is not planned out in advance and needs to be learned in real time. Although training the DQN involves a high computational complexity, once the training is completed, the operation of the power control has a very low computational complexity: given an input state $s$, the secondary user can make a decision using simple calculations (a single forward pass through the Q-network).
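
For reference, one synchronous DCPC iteration per (15)-(16) can be sketched as follows; the channel gains, noise powers, SINR targets, and maximum power are illustrative assumptions.

```python
import numpy as np

def dcpc_step(p1, p2, h, noise, eta, p_max=2.0):
    """One synchronous DCPC iteration, cf. (15)-(16): each user scales its power by its
    SINR deficit, clipped to the maximum allowable power."""
    gamma1 = p1 * h[0, 0] / (p2 * h[1, 0] + noise[0])
    gamma2 = p2 * h[1, 1] / (p1 * h[0, 1] + noise[1])
    p1_next = min(p_max, eta[0] / gamma1 * p1)
    p2_next = min(p_max, eta[1] / gamma2 * p2)
    return p1_next, p2_next

# Illustrative setup: h[i, j] is the gain from transmitter i to receiver j (assumed values).
h = np.array([[1.0, 0.2], [0.2, 1.0]])
noise, eta = (1e-2, 1e-2), (2.0, 1.5)
p1, p2 = 0.05, 0.05
for _ in range(30):
    p1, p2 = dcpc_step(p1, p2, h, noise, eta)
print(round(p1, 3), round(p2, 3))  # the iteration settles once both SINR targets are met
```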

Fig. 10: SINRs of the primary and secondary users vs. the number of transition steps.

V Conclusions

We studied the problem of spectrum sharing in a cognitive radio system consisting of a primary user and a secondary user. We assumed that the primary user and the secondary user work in a non-cooperative way, with the primary user adjusting its transmit power based on its own pre-defined power control policy. We developed a deep reinforcement learning-based method for the secondary user to learn how to adjust its transmit power such that eventually both the primary user and the secondary user are able to transmit their respective data successfully with the required qualities of service. Experimental results show that the proposed learning method is robust against random variations in the state observations, and that a goal state can be reached from any initial state within only a small number of steps.

References

  • [1] P. Kolodzy and I. Avoidance, “Spectrum policy task force,” Federal Commun. Commission, Washington, DC, USA, Tech. Rep. 02-135, 2002.
  • [2] S. Haykin, “Cognitive radio: brain-empowered wireless communications,” IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 201–220, Feb. 2005.
  • [3] Y. Wu, Q. Zhu, J. Huang, and D. H. K. Tsang, “Revenue sharing based resource allocation for dynamic spectrum access networks,” IEEE J. Sel. Areas Commun., vol. 32, no. 11, pp. 2280–2296, Nov. 2014.
  • [4] P. Wang, J. Fang, N. Han, and H. Li, “Multiantenna-assisted spectrum sensing for cognitive radio,” IEEE Transactions on Vehicular Technology, vol. 59, no. 4, pp. 1791–1800, May 2010.
  • [5] I. Mitliagkas, N. D. Sidiropoulos, and A. Swami, “Joint power and admission control for ad-hoc and cognitive underlay networks: Convex approximation and distributed implementation,” IEEE Transactions on Wireless Communications, vol. 10, no. 12, pp. 4110–4121, December 2011.
  • [6] D. I. Kim, L. B. Le, and E. Hossain, “Joint rate and power allocation for cognitive radios in dynamic spectrum access environment,” IEEE Transactions on Wireless Communications, vol. 7, no. 12, pp. 5517–5527, December 2008.
  • [7] J. Tadrous, A. Sultan, and M. Nafie, “Admission and power control for spectrum sharing cognitive radio networks,” IEEE Transactions on Wireless Communications, vol. 10, no. 6, pp. 1945–1955, June 2011.
  • [8] W. Su, J. D. Matyjas, and S. Batalama, “Active cooperation between primary users and cognitive radio users in heterogeneous ad-hoc networks,” IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1796–1805, April 2012.
  • [9] Q. Zhu, Y. Wu, D. H. K. Tsang, and H. Peng, “Cooperative spectrum sharing in cognitive radio networks with proactive primary system,” in 2013 IEEE/CIC International Conference on Communications in China - Workshops (CIC/ICCC), Xi’an, China, Aug. 12-14, 2013, pp. 82–87.
  • [10] M. H. Islam, Y. C. Liang, and A. T. Hoang, “Distributed power and admission control for cognitive radio networks using antenna arrays,” in 2007 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, Dublin, Ireland, Apr. 17-20, 2007, pp. 250–253.
  • [11] S. A. Grandhi, J. Zander, and R. Yates, “Constrained power control,” Wireless Personal Communications, vol. 1, no. 4, pp. 257–270, December 1994.
  • [12] M. Xiao, N. B. Shroff, and E. K. P. Chong, “A utility-based power-control scheme in wireless cellular systems,” IEEE/ACM Transactions on Networking, vol. 11, no. 2, pp. 210–221, Apr. 2003.
  • [13] T. ElBatt and A. Ephremides, “Joint scheduling and power control for wireless ad hoc networks,” IEEE Transactions on Wireless Communications, vol. 3, no. 1, pp. 74–85, Jan. 2004.
  • [14] J. Tadrous, A. Sultan, M. Nafie, and A. El-Keyi, “Power control for constrained throughput maximization in spectrum shared networks,” in 2010 IEEE Global Telecommunications Conference GLOBECOM 2010, Miami, FL, USA, Dec. 6-10 2010, pp. 1–6.
  • [15] Y. Xing, C. N. Mathur, M. A. Haleem, R. Chandramouli, and K. P. Subbalakshmi, “Dynamic spectrum access with QoS and interference temperature constraints,” IEEE Transactions on Mobile Computing, vol. 6, no. 4, pp. 423–433, April 2007.
  • [16] S. Lee, Y. Zeng, and R. Zhang, “Retrodirective multi-user wireless power transfer with massive MIMO,” IEEE Wireless Communications Letters, vol. 7, no. 1, pp. 54–57, Feb. 2018.
  • [17] Y. F. Liu, Y. H. Dai, and S. Ma, “Joint power and admission control: Non-convex approximation and an effective polynomial time deflation approach,” IEEE Transactions on Signal Processing, vol. 63, no. 14, pp. 3641–3656, July 2015.
  • [18] K. Senel and S. Tekinay, “Optimal power allocation in NOMA systems with imperfect channel estimation,” in 2017 IEEE Global Communications Conference GLOBECOM 2017, Singapore, Dec. 4-8, 2017, pp. 1–7.
  • [19] Y. F. Liu, M. Hong, and E. Song, “Sample approximation-based deflation approaches for chance SINR-constrained joint power and admission control,” IEEE Transactions on Wireless Communications, vol. 15, no. 7, pp. 4535–4547, July 2016.
  • [20] T. Heikkinen, “A potential game approach to distributed power control and scheduling,” Computer Networks, vol. 50, no. 13, pp. 2295 – 2311, Sep. 2006.
  • [21] X. Chen, Z. Zhao, and H. Zhang, “Stochastic power adaptation with multiagent reinforcement learning for cognitive wireless mesh networks,” IEEE Transactions on Mobile Computing, vol. 12, no. 11, pp. 2155–2166, Nov. 2013.
  • [22] G. Yang, B. Li, X. Tan, and X. Wang, “Adaptive power control algorithm in cognitive radio based on game theory,” IET Communications, vol. 9, no. 15, pp. 1807–1811, Oct. 2015.
  • [23] L. Gao, L. Duan, and J. Huang, “Two-sided matching based cooperative spectrum sharing,” IEEE Transactions on Mobile Computing, vol. 16, no. 2, pp. 538–551, Feb. 2017.
  • [24] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction.   Cambridge: MIT press, 1998.
  • [25] M. Bennis and D. Niyato, “A Q-learning based approach to interference avoidance in self-organized femtocell networks,” in 2010 IEEE Globecom Workshops, Miami, FL, USA, Dec. 6-10, 2010, pp. 706–710.
  • [26] H. Li, “Multiagent Q-learning for aloha-like spectrum access in cognitive radio systems,” EURASIP Journal on Wireless Communications and Networking, vol. 2010, May 2010.
  • [27] O. Naparstek and K. Cohen, “Deep multi-user reinforcement learning for dynamic spectrum access in multichannel wireless networks,” arXiv preprint arXiv:1704.02613, 2017.
  • [28] F. Fu and M. van der Schaar, “Learning to compete for resources in wireless stochastic games,” IEEE Transactions on Vehicular Technology, vol. 58, no. 4, pp. 1904–1919, May 2009.
  • [29] J. Lundén, V. Koivunen, S. R. Kulkarni, and H. V. Poor, “Reinforcement learning based distributed multiagent sensing policy for cognitive radio networks,” in 2011 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Aachen, Germany, May 3-6, 2011, pp. 642–646.
  • [30] A. Alsarhan and A. Agarwal, “Spectrum sharing in multi-service cognitive network using reinforcement learning,” in 2009 First UK-India International Workshop on Cognitive Wireless Systems (UKIWCWS), New Delhi, India, Dec. 10-12, 2009, pp. 1–5.
  • [31] T. Wang, C. K. Wen, H. Wang, F. Gao, T. Jiang, and S. Jin, “Deep learning for wireless physical layer: Opportunities and challenges,” China Communications, vol. 14, no. 11, pp. 92–111, Nov. 2017.
  • [32] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” arXiv preprint arXiv:1312.5602, 2013.
  • [33] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, pp. 529–533, Feb. 2015.
  • [34] D. Silver et al., “Mastering the game of go with deep neural networks and tree search,” Nature, vol. 529, pp. 484–489, Jan. 2016.
  • [35] S. Tomic, M. Beko, and R. Dinis, “RSS-based localization in wireless sensor networks using convex relaxation: Noncooperative and cooperative schemes,” IEEE Trans. Veh. Technol., vol. 64, no. 5, pp. 2037–2050, May 2015.
  • [36] J. Yick, B. Mukherjee, and D. Ghosal, “Wireless sensor network survey,” Computer Networks, vol. 52, no. 12, pp. 2292–2330, Aug. 2008.
  • [37] T. S. Rappaport, Wireless communications: principles and practice.   NJ, USA: Prentice Hall, 2002.
  • [38] C. J. C. H. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, no. 3, pp. 279–292, May 1992.
  • [39] R. Bellman, Dynamic programming.   Princeton, NJ: Princeton University Press, 2003.
  • [40] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.