D2D communication allows two nearby users to form a D2D pair and communicate with each other directly, improving transmission quality significantly thanks to the short transmission distance. D2D underlay communication reuses the spectrum of the cellular network to potentially increase spectral efficiency. However, D2D communication generates interference to the cellular network if the radio resources are not properly allocated. It is therefore necessary to study resource allocation so as to ensure the reliability of cellular communication and increase the network capacity.
There are many resource allocation methods in the existing literature, which can be divided into centralized and distributed schemes. In the centralized schemes [3, 4, 5], the BS is responsible for allocating resources to the CUEs and D2D pairs, and for monitoring information such as the Signal-to-Interference-plus-Noise Ratio (SINR) and channel state information (CSI). However, as the number of users increases, acquiring global CSI causes severe signaling overhead, and a centralized scheme for spectrum allocation becomes unrealistic. Moreover, the complexity of centralized algorithms increases with the number of users, placing enormous computational pressure on the BS.
In order to reduce the signaling overhead and alleviate the computational pressure on the BS, a series of distributed methods have been proposed. In a distributed approach, without a central controller, the D2D pairs opportunistically reuse the spectrum of the cellular users (CUEs). Distributed schemes can scale well to larger networks, but require frequent exchange of information between adjacent D2D users. Quite a few distributed algorithms are based on game theory; one line of work models D2D pairs sharing spectrum with CUEs as an auction mechanism. A large signaling overhead is incurred, since the CSI and prices have to be shared among the D2D pairs. Moreover, this type of method requires many iterations to converge.
In addition to game theory, machine learning has been considered an effective tool for solving different network problems in 5G. Reinforcement learning (RL) is one of the most powerful machine learning tools for policy control. Recently, many works have applied reinforcement learning to solve the intelligent resource management problem in D2D underlay networks [9, 10, 11]. Each D2D pair is supported by an agent, which automatically selects a reasonable spectrum based on the policy learned by reinforcement learning. A Q-learning based resource allocation has been proposed; since Q-learning is not suitable for continuous-valued state and action spaces, an actor-critic (AC) approach has also been proposed, as has a decentralized mechanism based on deep reinforcement learning. These previous works model the policy search process in reinforcement learning as a Markov decision process (MDP). However, in the decentralized setting of the spectrum allocation problem, all agents (D2D pairs) independently update their policies, which makes it a multi-agent environment: the environment appears non-stationary from the view of any one agent. In addition, the above decentralized methods focus only on improving the performance of each individual user, ignoring cooperation between users.
In this paper, we propose a distributed spectrum allocation framework based on multi-agent deep reinforcement learning, named Neighbor-Agent Actor-Critic (NAAC). NAAC is a framework of centralized training with decentralized execution, which uses information from neighbor users during training to effectively overcome the instability of the multi-agent environment, while making full use of cooperation between users to further improve system performance. NAAC requires no information interaction when it is executed, so it significantly reduces signaling overhead. The main contributions of this paper are summarized as follows: 1. The multi-agent environment is modeled as a Markov game, which is more accurate than an MDP and more helpful for the study of subsequent reinforcement learning algorithms. 2. We propose a multi-agent reinforcement learning framework of centralized training with decentralized execution, which solves the problem of an unstable multi-agent environment and leverages the partnership between users to further increase system performance.
The rest of this paper is organized as follows. Section II presents the system model and formulates the optimization problem. In Section III, we model the resource management problem as a partially observable Markov game and adopt the NAAC framework to address it. The simulation results and analysis are presented in Section IV. Finally, Section V concludes the paper.
II System Model and Problem Formulation
II-A System Model
A downlink scenario in a single-cell system is considered. A set of CUEs, denoted as $\mathcal{C}$, and a set of active D2D pairs, denoted as $\mathcal{D}$, are located in the coverage area of one BS. We denote a CUE in the system by $c$, $c \in \mathcal{C}$, a D2D pair by $d$, $d \in \mathcal{D}$, and the transmitter and the receiver of D2D pair $d$ by $d_T$ and $d_R$, respectively. Orthogonal Frequency Division Multiple Access (OFDMA) is employed to support multiple access for both the cellular and D2D communications, where a set of resource blocks (RBs), denoted as $\mathcal{K}$, is available for spectrum allocation. In this system, the D2D pairs share the same spectrum with the CUEs.
We assume that the BS and the transmitter of a D2D pair transmit with power $P_B$ and $P_d$, respectively. The channel gains of the cellular communication link from the BS to CUE $c$, the D2D communication link from D2D transmitter $d_T$ to D2D receiver $d_R$, the interference link from D2D transmitter $d_T$ to CUE $c$, the interference link from the BS to D2D receiver $d_R$, and the interference link from another D2D transmitter $d'_T$ to D2D receiver $d_R$ when they share the same spectrum for data transmission, are represented by $g_{B,c}$, $g_d$, $g_{d,c}$, $g_{B,d}$ and $g_{d',d}$, respectively. The power of the additive white Gaussian noise (AWGN) at a receiver is denoted by $\sigma^2$.
The instantaneous SINR of the received signal at CUE $c$ from the BS in RB $k$ can be written as

$$\xi_c^k = \frac{P_B\, g_{B,c}}{\sigma^2 + \sum_{d \in \mathcal{D}_k} P_d\, g_{d,c}}, \quad\quad (1)$$

where $\mathcal{D}_k$ represents the set of D2D pairs to which RB $k$ is allocated. The instantaneous SINR of the received signal at the D2D receiver $d_R$ from $d_T$ in RB $k$ can be written as

$$\xi_d^k = \frac{P_d\, g_d}{\sigma^2 + P_B\, g_{B,d} + \sum_{d' \in \mathcal{D}_k,\, d' \neq d} P_{d'}\, g_{d',d}}. \quad\quad (2)$$
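As a rough numerical illustration (not part of the paper), the two SINR expressions just described can be sketched in Python. The function names `cue_sinr` and `d2d_sinr` and their arguments are hypothetical, and all quantities are assumed to be in linear units:

```python
def cue_sinr(p_bs, g_bs_cue, interferers, noise):
    """SINR at a CUE on one RB: the BS signal over noise plus the
    interference from every D2D transmitter sharing the RB.
    `interferers` is a list of (tx_power, gain_to_cue) pairs."""
    interference = sum(p * g for p, g in interferers)
    return (p_bs * g_bs_cue) / (noise + interference)

def d2d_sinr(p_d2d, g_d2d, p_bs, g_bs_rx, interferers, noise):
    """SINR at a D2D receiver on one RB: its own link over noise,
    BS downlink interference, and co-channel D2D interference.
    `interferers` lists (tx_power, gain_to_rx) of other D2D pairs."""
    interference = p_bs * g_bs_rx + sum(p * g for p, g in interferers)
    return (p_d2d * g_d2d) / (noise + interference)
```

With no co-channel D2D pairs the CUE expression reduces to the plain SNR of the downlink, which is a quick sanity check for the helper.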
II-B Problem Formulation
We assume that each CUE has been assigned a RB and that a RB can be allocated to multiple D2D pairs. We define a RB allocation matrix $X = [x_{d,k}]$ for the D2D pairs, where $x_{d,k} = 1$ when RB $k$ is allocated to D2D pair $d$, and $x_{d,k} = 0$ otherwise.
Our objective is to find a RB allocation matrix for all D2D pairs that maximizes the D2D sum rate, which can be formulated as

$$\max_{X} \;\; \sum_{d \in \mathcal{D}} \sum_{k \in \mathcal{K}} x_{d,k} \log_2\!\big(1 + \xi_d^k\big) \quad\quad (3)$$

$$\text{s.t.} \quad \sum_{k \in \mathcal{K}} x_{d,k} \leq 1, \;\; \forall d \in \mathcal{D}, \quad\quad (4)$$

$$\xi_c^k \geq \xi_{\min}, \;\; \forall c \in \mathcal{C},\; \forall k \in \mathcal{K}. \quad\quad (5)$$
Constraints (4) imply that at most one RB can be allocated to each D2D pair. Since the CUEs are the primary users of the frequency band, their transmission quality should be ensured: constraints (5) imply that the SINR of each CUE should satisfy a predefined constraint, where $\xi_{\min}$ represents the SINR threshold of the CUE.
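To make the optimization concrete, a candidate allocation can be checked against the objective and both constraints with a small sketch. The function `d2d_sum_rate` and its callback arguments `rate` and `feasible` are hypothetical helpers, not the paper's notation:

```python
def d2d_sum_rate(alloc, rate, feasible):
    """Objective (3) under constraints (4)-(5).
    alloc[d][k] is the 0/1 RB-allocation matrix, rate(d, k) gives
    the D2D link rate of pair d on RB k, and feasible(k) checks the
    CUE SINR threshold on RB k. Returns the sum rate, or None if
    the allocation is infeasible."""
    # Constraint (4): at most one RB per D2D pair.
    if any(sum(row) > 1 for row in alloc):
        return None
    used_rbs = {k for row in alloc for k, x in enumerate(row) if x}
    # Constraint (5): every shared RB must keep its CUE above threshold.
    if not all(feasible(k) for k in used_rbs):
        return None
    return sum(rate(d, k)
               for d, row in enumerate(alloc)
               for k, x in enumerate(row) if x)
```

Enumerating all such matrices is exactly what makes the problem combinatorial: the number of candidate allocations grows exponentially with the number of D2D pairs.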
III Neighbor-Agent Deep Reinforcement Learning based Spectrum Allocation
The optimization problem in (3) is difficult to solve as it is an NP-hard combinatorial optimization problem. In addition, we assume that each D2D pair can only obtain its own CSI, and there is no information interaction between D2D users. Solving this optimization problem with a distributed method requires each D2D pair to choose its RB autonomously. Reinforcement learning is an effective method for such problems. Hence, in this section, we first model the multi-agent environment, and then a distributed framework based on multi-agent reinforcement learning is proposed to address the spectrum allocation problem.
III-A Modeling of Multi-Agent Environments
Reinforcement learning is an area of machine learning concerned with taking suitable actions to maximize reward in a particular situation. In reinforcement learning for spectrum allocation in D2D underlay communications, an agent, corresponding to a D2D pair, interacts with the environment. In this scenario, the environment is considered to be everything outside the D2D link. At each time $t$, the D2D pair, as the agent, observes a state $s_t$ from the state space $\mathcal{S}$, and accordingly takes an action $a_t$ from the action space $\mathcal{A}$, selecting a RB based on the policy $\pi$. Following the action, the state of the environment transits to a new state $s_{t+1}$ and the agent receives a reward $r_t$.
Most existing works model the policy search process in reinforcement learning as an MDP. An MDP can be defined as a tuple $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $P(s_{t+1}|s_t, a_t)$ is the transition probability when the agent takes action $a_t$ to move from the current state $s_t$ to a new state $s_{t+1}$, and $\gamma$ is a discount factor. The return from a state is defined as the sum of discounted future rewards, $R_t = \sum_{i=t}^{T} \gamma^{i-t} r_i$, where $T$ is the time horizon. However, in the decentralized setting of the spectrum allocation problem, all D2D pairs, as agents, independently update their policies as learning progresses. This is a multi-agent environment: the environment appears non-stationary from the view of any one agent, violating the Markov assumptions required for the convergence of reinforcement learning.
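The discounted return described above, the sum of future rewards weighted by powers of the discount factor, can be illustrated with a short self-contained snippet (a generic RL identity, not specific to this paper):

```python
def discounted_return(rewards, gamma):
    """Return R_t = sum over i of gamma^(i-t) * r_i for a finite
    horizon, computed backwards so each step is one multiply-add."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

For example, three unit rewards with a discount factor of 0.5 give 1 + 0.5 + 0.25 = 1.75.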
To make up for the shortcomings of the MDP model, in this work we consider a multi-agent extension of MDPs, called partially observable Markov games, to model the multi-agent reinforcement learning. At each time $t$, the D2D pair $n$, as agent $n$, observes a state $s_t^n$ from the state space $\mathcal{S}_n$, and accordingly takes an action $a_t^n$ from the action space $\mathcal{A}_n$, selecting a RB based on the policy $\pi_n$. Following the action, the state of the environment observed by agent $n$ transits to a new state $s_{t+1}^n$ and the agent receives a reward $r_t^n$.
An $N$-agent Markov game is formalized by a tuple $(\mathcal{S}_1, \dots, \mathcal{S}_N, \mathcal{A}_1, \dots, \mathcal{A}_N, P, r_1, \dots, r_N, \gamma)$, where $P$ is the transition probability when all agents take their actions to move from the current states to new states, and the constant $\gamma$ represents the reward discount factor across time. The return of agent $n$ from a state is defined as the sum of discounted future rewards, $R_t^n = \sum_{i=t}^{T} \gamma^{i-t} r_i^n$, where $T$ is the time horizon.
In our system, the state space $\mathcal{S}_n$, the action space $\mathcal{A}_n$, and the reward function $r_n$ are defined as follows:
State space: The state $s_t^n$ observed by the $n$-th D2D link (agent $n$) for characterizing the environment consists of several parts: the instant channel information of the corresponding D2D link, the channel information of the cellular link (e.g., from the BS to the D2D transmitter), the interference suffered by the link in the previous time slot, and the RB selected by the D2D link in the previous time slot. Hence the state can be expressed as $s_t^n = \{g_n(t),\, g_{B,n}(t),\, I_{t-1},\, k_{t-1}\}$.
Action space: At each time $t$, the agent takes an action $a_t^n$, which represents the RB selected by the agent, according to the current state $s_t^n$ and based on the decision policy $\pi_n$. The dimension of the action space is $K$ if there are $K$ RBs.
Reward function: The learning process is driven by the reward function. Each agent makes decisions by maximizing its reward through interactions with the environment. In order to maximize the D2D sum rate while guaranteeing the transmission quality of the CUEs, we design a reward function with two parts: the D2D link rate and the SINR constraints of the CUEs. In our settings, the reward remains positive if the SINR constraints are satisfied; if the constraints are violated, the agent receives a negative reward $-\beta$. The positive reward is proportional to the D2D link rate $C_n(t)$. We use the Shannon capacity model to evaluate it, $C_n(t) = \log_2(1 + \xi_n(t))$, where $\xi_n(t)$ is the instantaneous SINR of the received signal at the D2D receiver at time slot $t$.
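The two-part reward just described can be sketched as a small Python function. The helper name `reward`, its arguments, and the default penalty are illustrative assumptions; SINR values and the threshold are taken to be in linear units:

```python
import math

def reward(own_d2d_sinr, cue_sinrs, sinr_min, penalty=-1.0):
    """Reward for one agent: the Shannon rate of its own D2D link if
    every affected CUE meets its SINR threshold, otherwise a fixed
    negative penalty (the -beta of the text)."""
    if any(s < sinr_min for s in cue_sinrs):
        return penalty
    return math.log2(1.0 + own_d2d_sinr)
```

The penalty couples each agent's objective to the protection of the cellular layer: choosing an RB that drives a CUE below threshold is strictly worse than choosing no good RB at all.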
III-B Neighbor-Agent Actor-Critic for Spectrum Allocation
The spectrum allocation problem formulated in Section II-B can be solved by applying Q-learning, Deep Q Network and Actor-Critic (AC) methods. However, all of the above methods model the reinforcement learning problem as an MDP. In addition, they focus only on maximizing the expected cumulative discounted reward of each agent, ignoring cooperation between agents. In order to overcome the inherent non-stationarity of the multi-agent environment, and to utilize the cooperation between the agents, a Neighbor-Agent Actor-Critic (NAAC) framework is adopted, which optimizes the policy by modeling the multi-agent environment as a Markov game and considering the action policies of other agents, so as to successfully learn policies that require complex multi-agent coordination.
The NAAC framework for spectrum allocation in D2D underlay communications is shown in Fig. 1; it is a framework of centralized training with decentralized execution. Each D2D pair is supported by an autonomous agent. NAAC is an extension of AC in which each agent is divided into two parts, a critic and an actor. The actor selects the action according to the observed state, and the critic is augmented with the extra state and action information of the neighbor agents to evaluate the quality of the action. We allow the policies to use this extra information to ease training, as long as it is not used at execution time. The centralized training process is performed at the BS. In the distributed execution process, a D2D pair (agent $n$) downloads the trained weights of its actor from the BS and loads them into its own actor $\mu_n$. The actor selects an action (RB) based on the state observed by the agent from the environment. When the agent takes the action, the environment returns a reward. When the communication is in good condition, the D2D pair can upload the historical information collected at execution time to the BS for subsequent training.
A primary motivation behind NAAC is that, if we know the actions taken by all agents, the environment is stationary even as the policies change, since

$$P(s'|s, a_1, \dots, a_N, \pi_1, \dots, \pi_N) = P(s'|s, a_1, \dots, a_N) = P(s'|s, a_1, \dots, a_N, \pi'_1, \dots, \pi'_N)$$

for any $\pi_i \neq \pi'_i$. The constant transition probability satisfies the Markov assumption required for the convergence of reinforcement learning. In a wireless communication environment, inter-user interference is mainly caused by neighbor users: when the transmit power is fixed, the dominant factor in inter-user interference strength is the large-scale fading, which depends mainly on the distance between users. Therefore, the information of all users is not needed to keep the environment stationary; the information of the neighbor users is enough. For a D2D pair $n$, we denote the set of the $m$ D2D pairs closest to $n$, plus $n$ itself, as $\mathcal{N}_n$. We can use the information of the neighbor agents instead of all agents to ensure the stability of the multi-agent environment, since

$$P(s'_n|s_n, \mathbf{a}^{\mathcal{N}_n}) \approx P(s'_n|s_n, a_1, \dots, a_N),$$

where $\mathbf{a}^{\mathcal{N}_n}$ contains the actions of the neighbors of agent $n$.
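Because large-scale fading is dominated by distance, selecting the neighbor set reduces to picking the nearest D2D pairs. A minimal sketch (the function name and the use of plain Euclidean distance between transmitter positions are assumptions, not the paper's exact procedure):

```python
def neighbor_set(positions, n, m):
    """Indices of the m D2D pairs closest to pair n, plus n itself.
    `positions` maps a pair index to the (x, y) coordinates of its
    transmitter; squared Euclidean distance is enough for ranking."""
    px, py = positions[n]
    others = sorted(
        (i for i in positions if i != n),
        key=lambda i: (positions[i][0] - px) ** 2 + (positions[i][1] - py) ** 2,
    )
    return [n] + others[:m]
```

In a real deployment the ranking could equally be done on measured path loss rather than geometric distance, since distance is only a proxy for large-scale fading.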
The goal in reinforcement learning is to learn a policy which maximizes the expected return $J = \mathbb{E}[R_1]$ from the start distribution. To simplify the notation, the state $s_t$, action $a_t$ and return $R_t$ at the current moment are simply denoted as $s$, $a$ and $R$, and the state and action at the next moment as $s'$ and $a'$. The action-value function $Q(s, a)$, which corresponds to the critic, is used in many reinforcement learning algorithms. In a single-agent environment, according to AC, consider a function approximator $Q(s, a|\theta^Q)$ parameterized by $\theta^Q$; the critic can be optimized by minimizing the loss:

$$L(\theta^Q) = \mathbb{E}\big[\big(Q(s, a|\theta^Q) - y\big)^2\big],$$

where $y = r + \gamma Q(s', \mu(s')|\theta^Q)$ and $\mu: \mathcal{S} \to \mathcal{A}$ is the deterministic policy of the actor.
Based on the DPG algorithm, a parameterized actor function $\mu(s|\theta^\mu)$ can be used to specify the current policy by deterministically mapping states to a specific action. The actor is updated by applying the chain rule to the expected return from the start distribution:

$$\nabla_{\theta^\mu} J = \mathbb{E}\big[\nabla_a Q(s, a|\theta^Q)\big|_{a=\mu(s)}\, \nabla_{\theta^\mu} \mu(s|\theta^\mu)\big].$$
We can extend the above idea to the multi-agent environment. Consider a Markov game with $N$ agents, and let $\boldsymbol{\pi} = \{\pi_1, \dots, \pi_N\}$ be the set of all agent policies. We use the states and actions of the neighbor agents of agent $n$ to evaluate its action-value function, which can be written as $Q_n(\mathbf{s}^{\mathcal{N}_n}, \mathbf{a}^{\mathcal{N}_n})$, where $\mathbf{s}^{\mathcal{N}_n}$ contains the states of the neighbors of agent $n$. Here $Q_n$ is a centralized action-value function that takes the states and actions of the agent and its neighbor agents as input, and outputs the Q-value for agent $n$.
Let the extended idea work with deterministic policies. If we now consider deterministic policies and let $\boldsymbol{\mu} = \{\mu_1, \dots, \mu_N\}$ be the set of all policies, with the function approximator of the centralized action-value function of agent $n$ parameterized by $\theta_n^Q$, we optimize the critic by minimizing the loss:

$$L(\theta_n^Q) = \mathbb{E}\big[\big(Q_n(\mathbf{s}^{\mathcal{N}_n}, \mathbf{a}^{\mathcal{N}_n}|\theta_n^Q) - y_n\big)^2\big], \quad y_n = r_n + \gamma\, Q_n(\mathbf{s}'^{\mathcal{N}_n}, \mathbf{a}'^{\mathcal{N}_n}|\theta_n^Q)\big|_{a'_j = \mu_j(s'_j)}.$$

Consider the deterministic policy $\mu_n$ of agent $n$ parameterized by $\theta_n^\mu$. We can write the gradient of the expected return for agent $n$ as:

$$\nabla_{\theta_n^\mu} J = \mathbb{E}_{\mathcal{B}}\big[\nabla_{a_n} Q_n(\mathbf{s}^{\mathcal{N}_n}, \mathbf{a}^{\mathcal{N}_n}|\theta_n^Q)\big|_{a_n=\mu_n(s_n)}\, \nabla_{\theta_n^\mu} \mu_n(s_n|\theta_n^\mu)\big].$$
Here $\mathcal{B}$ is the replay buffer containing the tuples $(\mathbf{s}, \mathbf{a}, \mathbf{r}, \mathbf{s}')$, which record the experiences of all agents. The replay buffer is a finite-sized cache. At each timestep, the actor and critic are updated by sampling a minibatch uniformly from the buffer.
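The finite-sized, uniformly sampled replay buffer just described can be sketched in a few lines of Python (class and method names are illustrative):

```python
import random
from collections import deque

class ReplayBuffer:
    """Finite-sized cache of joint experience tuples
    (states, actions, rewards, next_states); the oldest entries are
    evicted first, and minibatches are sampled uniformly at random."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, states, actions, rewards, next_states):
        self.buffer.append((states, actions, rewards, next_states))

    def sample(self, batch_size):
        # Uniform sampling without replacement; never asks for more
        # tuples than the buffer currently holds.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

Storing joint tuples (rather than per-agent ones) is what lets each critic look up the neighbor states and actions recorded at the same timestep.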
The mapping from states to actions in the actor, and the action-value function in the critic, need to be approximated by function approximators. Q-learning cannot work well when the state-action space is very large: many states may be rarely visited, so the corresponding Q-values are seldom updated, leading to a much longer time to converge. Hence, deep neural networks (DNNs) are used as function approximators.
In NAAC, we denote the sets of actor networks and critic networks of all agents as $\{\mu_n\}$ and $\{Q_n\}$ with the weights $\{\theta_n^\mu\}$ and $\{\theta_n^Q\}$, respectively. The input of the actor network is the state observed by the agent, and the output is the selected action. The hidden layers of the actor network are all fully connected layers. The critic network first takes the states $\mathbf{s}^{\mathcal{N}_n}$ as input; after one fully connected layer, the actions $\mathbf{a}^{\mathcal{N}_n}$ are fed in, followed by several more fully connected layers, and finally the Q-value is output.
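The critic's late-fusion structure (states through a first layer, actions concatenated afterwards) can be sketched numerically with NumPy. The layer widths, random weights, and helper names here are arbitrary toy values, not the paper's architecture details:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """One fully connected layer with ReLU activation."""
    return np.maximum(w @ x + b, 0.0)

def critic_forward(states, actions, params):
    """Late-fusion critic: the neighbor states pass through a first
    fully connected layer, the neighbor actions are concatenated to
    its output, and further layers map down to a scalar Q-value.
    `params` holds hypothetical weights (w1, b1, w2, b2, w3, b3)."""
    w1, b1, w2, b2, w3, b3 = params
    h1 = dense(states, w1, b1)
    h2 = dense(np.concatenate([h1, actions]), w2, b2)
    return float(w3 @ h2 + b3)  # scalar Q(s, a)

# Toy dimensions: 4 state features, 2 action features, hidden width 8.
params = (rng.standard_normal((8, 4)), np.zeros(8),
          rng.standard_normal((8, 10)), np.zeros(8),
          rng.standard_normal(8), 0.0)
q = critic_forward(rng.standard_normal(4), rng.standard_normal(2), params)
```

Feeding actions in after the first layer is a common design for Q-networks over continuous or concatenated inputs; it lets the early layer specialize on state features alone.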
III-C Training Algorithm
Since the BS has more computing power than the mobile devices, the centralized training process is performed at the BS, as shown in the green part of Fig. 1. The training algorithm is shown in Algorithm 1. The NAAC framework uses historical information to train the DNNs of the actors and critics, and returns the weights of the actor networks.
IV Performance Evaluation
In this section, we present the simulation results of NAAC in comparison with four distributed approaches: 1. the most classic reinforcement learning method, Q-learning; 2. a reinforcement learning method with better convergence performance, Actor-Critic (denoted as AC); 3. the most classic deep reinforcement learning method, Deep Q Network (denoted as DQN); 4. a game theory approach, the uncoupled stochastic learning algorithm (denoted as SLA). Since we assume that each D2D pair can only obtain its own CSI and there is no information interaction between D2D users, centralized approaches with global information are not included in the performance comparison.
For the simulations, we consider a single-cell scenario with a radius of 500 m. The CUEs and D2D pairs are distributed randomly in the cell, where the communication distance of each D2D pair cannot exceed a given maximum distance of 30 m. The detailed parameters can be found in Table I.
IV-A Simulation Results
Table I: Simulation parameters

| Parameter | Value |
|---|---|
| RB bandwidth | 180 kHz |
| Number of CUEs | 10 |
| Number of RBs | 10 |
| BS transmission power ($P_B$) | 46 dBm |
| D2D transmission power ($P_d$) | 13 dBm |
| Cellular link path loss | |
| D2D link path loss exponent | 4 |
| UE thermal noise density | -174 dBm/Hz |
| CUE target SINR threshold ($\xi_{\min}$) | 0 dB |
| Negative reward ($-\beta$) | -1 |
In order to compare the convergence of these algorithms, Fig. 2 shows the training process of the four approaches in terms of total reward when the number of D2D pairs is 10. The total reward is the sum of the rewards obtained by all agents. Since SLA is an online learning algorithm, it does not participate in the discussion of this metric. We can see that NAAC achieves the greatest total reward, and its convergence is the most stable. The total reward and convergence of Q-learning are the worst, since Q-learning cannot work well when the state-action space is very large. DQN solves the mapping problem of the high-dimensional space by using a DNN, and improves on Q-learning in both total reward and convergence. AC performs better than both Q-learning and DQN, since it optimizes the policy by combining policy learning and value learning, with good convergence properties. However, none of these algorithms considers the impact of the multi-agent environment on the stability of the training process, or the effect of cooperation between the agents (D2D pairs) on system performance. NAAC introduces the state and action information of neighbor D2D pairs to assist the training process, greatly improving the stability of training and achieving a higher total reward.
The outage probability reflects the reliability of the communication links. In Fig. 3, we show the outage probability of the cellular links as a function of the number of D2D pairs, with NAAC using $m$ neighbor users. The outage probability of the cellular links increases as the number of D2D links grows, since more D2D pairs share the spectrum with the CUEs, causing the CUEs to suffer more severe cross-layer interference. NAAC achieves the best performance: while the other four methods update their policies independently during training, NAAC can learn cooperative policies by introducing the states and actions of neighbor D2D pairs into the training. The policies of different D2D pairs can therefore be coordinated with each other, preventing multiple D2D pairs from simultaneously selecting the same RB and causing severe cumulative interference to a CUE.
Fig. 4 shows the D2D sum rate as a function of the number of D2D pairs. The D2D sum rate increases as the number of D2D links grows, since more D2D pairs are allocated RBs. It can be seen that NAAC is significantly better than the other four algorithms, and its advantage grows with the number of D2D links. The other four algorithms can only achieve individual optimization, so the quality of the global solution cannot be guaranteed, whereas NAAC, with centralized training and decentralized execution, can cooperatively optimize the sum rate of the D2D links.
The proposed framework combines the strengths of centralized and distributed schemes. Compared with centralized methods, our method is executed without global information, which significantly reduces the signaling overhead and alleviates the computational pressure on the BS; it can therefore scale well to larger networks. Compared with distributed methods, our method uses the historical information of neighbor users to learn policies of mutual cooperation, avoiding frequent real-time information exchange between users and making it more suitable for user-dense communication scenarios. In addition, our method can transfer the complex training process to the cloud (BS), which significantly reduces the complexity of algorithm execution.
V Conclusion

This paper has studied resource management in D2D underlay communications and formulated the intelligent spectrum allocation problem as a decentralized multi-agent deep reinforcement learning problem. To achieve full cooperation between users, the NAAC framework of centralized training with decentralized execution is adopted, which requires no signaling interaction during execution. The simulation results show that the proposed approach can effectively reduce the outage probability of cellular links, improve the sum rate of D2D links, and has good convergence.
-  Y. Kai, J. Wang, H. Zhu, and J. Wang, "Resource allocation and performance analysis of cellular-assisted OFDMA device-to-device communications," IEEE Transactions on Wireless Communications, vol. 18, no. 1, pp. 416–431, 2019.
-  D. Feng, L. Lu, Y. Yuan-Wu, G. Y. Li, G. Feng, and S. Li, “Device-to-device communications underlaying cellular networks,” IEEE Transactions on Communications, vol. 61, no. 8, pp. 3541–3551, 2013.
-  D. Wu and N. Ansari, "High capacity spectrum allocation for multiple D2D users reusing downlink spectrum in LTE," in 2018 IEEE International Conference on Communications (ICC), pp. 1–6, IEEE, 2018.
-  A. Köse and B. Özbek, “Resource allocation for underlaying device-to-device communications using maximal independent sets and knapsack algorithm,” in 2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pp. 1–5, IEEE, 2018.
-  Z. Kuang, G. Liu, G. Li, and X. Deng, “Energy efficient resource allocation algorithm in energy harvesting-based d2d heterogeneous networks,” IEEE Internet of Things Journal, vol. 6, no. 1, pp. 557–567, 2019.
-  F. Zaki, S. Kishk, and N. Almofari, “Distributed resource allocation for D2D communication networks using auction,” in 2017 34th National Radio Science Conference (NRSC), pp. 284–293, IEEE, 2017.
-  C. Jiang, H. Zhang, Y. Ren, Z. Han, K.-C. Chen, and L. Hanzo, “Machine learning paradigms for next-generation wireless networks,” IEEE Wireless Communications, vol. 24, no. 2, pp. 98–105, 2017.
-  R. S. Sutton, A. G. Barto, et al., Introduction to reinforcement learning, vol. 135. MIT press Cambridge, 1998.
-  K. Zia, N. Javed, M. N. Sial, S. Ahmed, A. A. Pirzada, and F. Pervez, “A distributed multi-agent RL-based autonomous spectrum allocation scheme in D2D enabled multi-tier HetNets,” IEEE Access, vol. 7, pp. 6733–6745, 2019.
-  H. Yang, X. Xie, and M. Kadoch, “Intelligent resource management based on efficient transfer actor-critic reinforcement learning for IoV communication networks,” IEEE Transactions on Vehicular Technology, 2019.
-  H. Ye, Y. G. Li, and B.-H. F. Juang, “Deep reinforcement learning for resource allocation in V2V communications,” IEEE Transactions on Vehicular Technology, 2019.
-  D. A. Plaisted, “Some polynomial and integer divisibility problems are NP-hard,” in 17th Annual Symposium on Foundations of Computer Science (sfcs 1976), pp. 264–267, IEEE, 1976.
-  T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” arXiv preprint arXiv:1509.02971, 2015.
-  R. Lowe, Y. Wu, A. Tamar, J. Harb, O. P. Abbeel, and I. Mordatch, “Multi-agent actor-critic for mixed cooperative-competitive environments,” in Advances in Neural Information Processing Systems, pp. 6379–6390, 2017.
-  D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, “Deterministic policy gradient algorithms,” in ICML, 2014.
-  S. Dominic and L. Jacob, “Distributed resource allocation for D2D communications underlaying cellular networks in time-varying environment,” IEEE Communications Letters, vol. 22, no. 2, pp. 388–391, 2018.