Value-Decomposition Networks based Distributed Interference Control in Multi-platoon Groupcast

03/25/2020
by   Xiongfeng Guo, et al.

Platooning is considered one of the most representative 5G use cases. Because of the small inter-vehicle spacing within a platoon, platoons require highly reliable transmission to guarantee driving safety while improving fuel and driving efficiency. However, efficient resource allocation between platoons remains a challenge, especially since the channel and power selected by each platoon affect the other platoons. Platoons therefore need to coordinate with each other to ensure the groupcast quality of every platoon. To address these challenges, we model the multi-platoon resource selection problem as a Markov game and propose a distributed resource allocation algorithm based on Value-Decomposition Networks (VDN). Our scheme uses the historical data of each platoon for centralized training; during distributed execution, agents need only their local observations to make decisions. We further reduce the training burden by sharing neural network parameters among agents. Simulation results show that the proposed algorithm converges well. Compared with a fingerprint-based multi-agent baseline (MARL) and a random algorithm, the proposed solution dramatically reduces the probability of platoon groupcast failure and improves the quality of the platoon groupcast.


I Introduction

As a new 5G application scenario and an Intelligent Transportation System (ITS) service, platooning has received increasing attention [1]. With very small headways, vehicles in platoons can achieve considerable fuel consumption gains [2]. To reap the benefits of platooning, platoon members must maintain sufficient awareness of the platoon's movement status, which is supported by 5G vehicle-to-everything (V2X) communication technologies. However, in a multi-platoon scenario, the resource selection strategies, transmission ranges, and power levels of the platoons interact with each other, forming a complicated interference environment [3]. Improper radio resource allocation causes interference among platoons using the same radio resources. Therefore, it is necessary to study groupcast resource allocation algorithms for platoons to ensure reliable groupcast in each platoon.

In recent years, many resource allocation schemes have been proposed, which can be divided into centralized and distributed schemes. In centralized algorithms [3], [4], the base station collects real-time information from the platoons and then allocates resources to the different platoons using methods such as branch and bound or graph theory to optimize the system. However, the high dynamics of the vehicular channel environment and the randomness of the V2V payload hinder the application of centralized algorithms in large-scale scenarios, which is reflected in the complexity of the algorithms and the delay of information collection.

In distributed algorithms, no central controller performs global scheduling, which avoids the radio resources wasted on signaling exchange, the heavy computational burden, and the scalability limitations. Reinforcement learning (RL) is considered a useful tool for the distributed allocation of 5G resources [5]. In RL, agents learn strategies that maximize system efficiency by observing changes in the environment. Recently, many RL-based intelligent resource management schemes have been proposed. In [6], the authors proposed a Q-learning based resource allocation algorithm. In [7], a distributed resource allocation algorithm based on deep reinforcement learning was proposed under the assumption that the agents make asynchronous decisions. In [8], a fingerprint-based deep Q-network method was introduced that enables agents to make decisions simultaneously. These previous works adopt specific mechanisms, such as asynchronous updates and fingerprints, to reduce non-stationarity in the multi-agent environment.

Unlike the above works, we propose a distributed resource allocation framework based on Value-Decomposition Networks (VDN). By combining the value functions of the individual agents, a joint action-value function is obtained, whose learning signal is back-propagated through the individual deep neural networks. VDN thus drives the agents to cooperate to improve system efficiency. When deployed, each agent needs only its local observations to make optimal decisions, without any additional signaling exchange. The main contributions of this article are summarized as follows.

Fig. 1: The multi-platoon groupcast scenario.
  • Considering the reliability requirements of platoon groupcast, a hyperparameter $\beta$ is introduced in the reward to encourage each platoon to complete the transmission of its payload as soon as possible.

  • The collaborative interference control problem for multiple platoons in a highway scenario is modeled as a Markov game. We assume that each agent obtains only its local observations, which cannot represent the entire environment; this partial-observability assumption more accurately reflects practical deployments.

  • A spectrum access and power selection scheme based on VDN is designed. Agents coordinate their resource selection strategies through centralized training; during distributed execution, agents need only their own observations to make globally coordinated decisions.

  • A recurrent neural network (RNN) is used to integrate the agent's past actions and observations to overcome the instability caused by partial observability. In addition, the agents share neural network parameters to speed up training and convergence.

The rest of this paper is organized as follows. In Section II, we introduce the multi-platoon system model and formulate the optimization problem. In Section III, we model the resource allocation problem as a partially observable Markov game and propose a VDN-based algorithm to solve it. Simulation results and analysis are presented in Section IV, and conclusions are drawn in Section V.

II System Model and Problem Formulation

II-A System Model

As shown in Fig. 1, we consider a multi-platoon groupcast scenario. The set of platoons is denoted by $\mathcal{M} = \{1, 2, \dots, M\}$. Each platoon $m$ contains a platoon leader (PL) and the corresponding following vehicles, i.e., the platoon members (PMs), indexed by $n \in \{1, \dots, N_m\}$. To ensure the safety of the platoon, the PL must periodically groupcast information to the PMs. The dedicated spectrum for platoon groupcast is divided into subchannels, denoted by $\mathcal{K} = \{1, \dots, K\}$. The binary indicator $\alpha_m^k = 1$ represents that PL $m$ chooses channel $k$ to groupcast its message, and $\alpha_m^k = 0$ otherwise. Assume that the transmit power selected by PL $m$ is $P_m$; the receive power at the corresponding PM $n$ can then be expressed as $P_m h_{m,n}^k$, where $h_{m,n}^k$ represents the channel gain between the sender and the receiver on channel $k$. We use a free-space propagation model with Rayleigh fading to model the channel gain, which can be expressed as

$$h_{m,n}^k = g_{m,n}\,\lvert r_{m,n}^k \rvert^2 \qquad (1)$$

where $g_{m,n}$ is the large-scale channel coefficient, including path loss, shadow fading, and antenna gain (its dB-domain expression is given by the adopted large-scale fading model), and $r_{m,n}^k$ is the small-scale Rayleigh fading coefficient. On each communication link, $\lvert r_{m,n}^k \rvert^2$ follows an independent exponential distribution with unit mean. The signal-to-interference-plus-noise ratio (SINR) of receiver $n$ of platoon $m$ on channel $k$ can be expressed as

$$\gamma_{m,n}^k = \frac{\alpha_m^k P_m h_{m,n}^k}{\sigma^2 + \sum_{m' \in \mathcal{M},\, m' \neq m} \alpha_{m'}^k P_{m'} h_{m',n}^k} \qquad (2)$$

where $P_m$ represents the transmission power of PL $m$, $\sigma^2$ represents the noise power, and the remaining terms in the denominator represent the interference from the other PLs to platoon $m$. We assume that each PL selects at most one subchannel, i.e., $\sum_{k \in \mathcal{K}} \alpha_m^k \le 1$ for each platoon. The PL's groupcast rate is limited by the vehicle with the worst reception channel quality in the platoon. The effective groupcast rate of PL $m$ is

$$C_m = W \min_{n \in \{1,\dots,N_m\}} \sum_{k \in \mathcal{K}} \alpha_m^k \log_2\!\left(1 + \gamma_{m,n}^k\right) \qquad (3)$$

where $\gamma_{m,n}^k$ is the SINR between PL $m$ and its PM $n$, and $W$ is the subchannel bandwidth. Because signal attenuation is dominated by wireless propagation loss, the tail vehicle can be taken to have the worst reception channel quality [4]. The above formula can therefore be abbreviated as $C_m = W \sum_{k \in \mathcal{K}} \alpha_m^k \log_2\!\left(1 + \gamma_{m,N_m}^k\right)$, where $N_m$ denotes the tail vehicle of platoon $m$.
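
To make the link budget concrete, the following Python sketch evaluates the SINR in (2) and the effective groupcast rate in (3) at the tail vehicle of each platoon. The function name, array shapes, and the assumption that the channel-gain tensor is already indexed by the tail PM are illustrative, not taken from the paper.

```python
import numpy as np

def groupcast_rate(alpha, P, h, noise_power, bandwidth_hz=180e3):
    """Effective groupcast rate of each PL, Eqs. (2)-(3) (illustrative sketch).

    alpha: (M, K) binary subchannel selection, P: (M,) transmit power [W],
    h: (M, M, K) channel gain from PL m' to the tail PM of platoon m on channel k.
    """
    M, K = alpha.shape
    rates = np.zeros(M)
    for m in range(M):
        for k in range(K):
            if alpha[m, k] == 0:
                continue
            signal = P[m] * h[m, m, k]
            interference = sum(alpha[mp, k] * P[mp] * h[m, mp, k]
                               for mp in range(M) if mp != m)
            sinr = signal / (noise_power + interference)
            rates[m] += bandwidth_hz * np.log2(1.0 + sinr)
    return rates
```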

II-B Problem Formulation

Due to the critical role of V2V communication in safe platoon driving, platoons have strict requirements on the reliability and delay of their communications. Since platoon safety depends on the PL's messages, the groupcast quality within each platoon must be guaranteed. Our optimization goal is to maximize the reliability of the platoon groupcast, which can be expressed as

$$\max_{\{\alpha_m^k,\,P_m\}} \;\Pr\!\left\{\sum_{t=1}^{T/\Delta t} \Delta t\, C_m(t) \ge B\right\},\;\; \forall m \in \mathcal{M}
\qquad \text{s.t.} \;\; \sum_{k \in \mathcal{K}} \alpha_m^k \le 1,\;\; P_m \in \mathcal{P} \qquad (4)$$

where $B$ is the payload size, $T$ is the payload generation period, and $\Delta t$ represents the unit time slot. The first constraint limits the number of subchannels that can be selected in each groupcast, and the second constraint restricts the transmit power of each PL to a discrete set $\mathcal{P}$. Due to the high computational complexity and the random evolution of channel conditions, it is difficult to obtain a globally optimal solution in practice. Therefore, we propose a VDN-based optimization algorithm to solve this problem.

III Multi-agent Deep Reinforcement Learning based Resource Allocation

We assume that each PL obtains only the channel state information (CSI) of its own platoon before taking an action [9]. Each PL uses a distributed resource selection strategy to select resources independently. In this section, we first model this optimization problem as a Markov game and then propose a distributed VDN-based resource allocation algorithm to solve it.

III-A Multi-agent Environment Modeling

In reinforcement learning (RL), an agent learns strategies through interaction with the environment to maximize its return. Mathematically, the RL problem can be modeled as a Markov Decision Process (MDP), described by the tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R} \rangle$. The reward $r_t$ is obtained according to the current state $s_t$, the taken action $a_t$, and the state $s_{t+1}$ at the next moment. The transition kernel $\mathcal{P}$ of the MDP gives the probability that the next state is $s_{t+1}$ when action $a_t$ is taken in state $s_t$, which can be expressed as $\mathcal{P}(s_{t+1} \mid s_t, a_t)$. Unfortunately, traditional MDP-based reinforcement learning methods such as Q-learning and policy gradient are not directly applicable to distributed resource allocation. The major problem is that, as training progresses, each agent independently updates its strategy, which changes the state transition probabilities experienced by the other PLs. From any single agent's perspective, the environment becomes non-stationary, which violates the Markov assumption required for the convergence of RL [5].

To overcome the convergence problem of the single-agent MDP, we model the environment as a Markov game, the multi-agent extension of the MDP. It can be expressed as the tuple $\langle \mathcal{N}, \mathcal{S}, \{\mathcal{A}_m\}, \mathcal{P}, \mathcal{R} \rangle$, where $\mathcal{N}$ is the set of participants in the Markov game, $\mathcal{S}$ is the state space, $\mathcal{A} = \mathcal{A}_1 \times \cdots \times \mathcal{A}_M$ is the joint action space of all agents, and $\mathcal{R}$ is the reward function. In a collaborative Markov game, the goal of each agent is to maximize the expected discounted system reward $\mathbb{E}\big[\sum_{t=0}^{T} \gamma^t r_t\big]$, where $\gamma \in [0,1]$ is the discount factor; a larger $\gamma$ places more weight on future rewards, and $T$ is the horizon over which rewards are accumulated. At every discrete time $t$, each agent obtains its own partial observation of the environment; note that a single observation cannot represent the entire environment. Agent $m$ chooses action $a_m$ according to its strategy $\pi_m$. The joint action can be expressed as $\mathbf{a} = (a_1, \dots, a_M)$, with the corresponding state transition probability $\mathcal{P}(s_{t+1} \mid s_t, \mathbf{a}_t)$. In a Markov game, each agent must account for the strategies of the other agents while learning through interaction with the environment.

In our solution, the observation space, the action space, and the reward function are defined as follows:

Observation Space: At each time slot $t$, each agent observes local information that describes the current state of the environment from its own perspective. It consists of an internal and an external part. The internal information includes the remaining groupcast payload $B_m(t)$ and the remaining time budget $T_m(t)$. The external information is the interference measured by the PMs on all subchannels in the previous time slot, denoted $I_m^k(t-1)$. Therefore, the observation of agent $m$ at time slot $t$ is

$$o_m(t) = \left\{ B_m(t),\, T_m(t),\, I_m^1(t-1), \dots, I_m^K(t-1) \right\} \qquad (5)$$

Fig. 2: Architecture of VDN based resource selection algorithm.

Action Space: Agent $m$ takes action $a_m(t)$ at time $t$, which indicates the subchannel and transmit power level selected according to its strategy and observation. The dimension of the action space is $K \times |\mathcal{P}|$, since there are $K$ subchannels and $|\mathcal{P}|$ discrete power levels.
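
The following sketch shows one plausible encoding of the local observation in (5) and of a flat action index over the $K \times |\mathcal{P}|$ (channel, power) grid. The helper names and the ordering of the channel/power factorization are assumptions; the power set and subchannel count are taken from Table I.

```python
import numpy as np

POWER_LEVELS_DBM = [23, 15, 10, -114]   # discrete PL power set from Table I
NUM_SUBCHANNELS = 2                     # K in the system model

def build_observation(remaining_payload, remaining_time, prev_interference):
    """Local observation o_m(t): internal state plus measured interference, Eq. (5)."""
    return np.concatenate(([remaining_payload, remaining_time],
                           np.asarray(prev_interference, dtype=float)))

def decode_action(action_index):
    """Map a flat action index in [0, K * |P|) to a (subchannel, power) pair."""
    subchannel = action_index // len(POWER_LEVELS_DBM)
    power_dbm = POWER_LEVELS_DBM[action_index % len(POWER_LEVELS_DBM)]
    return subchannel, power_dbm
```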

Reward: Our optimization goal is to maximize the probability of successful delivery of the V2V payloads. To avoid the learning difficulties caused by sparse rewards, we set the per-slot reward to the effective V2V transmission rate during payload transmission. This uses prior knowledge to shape the reward: a higher transmission rate completes the payload transfer sooner and accelerates the agent's learning [10]. In our tuning experience, however, we found that agents may learn to accumulate the per-step rate reward rather than actually finishing the transfer, much like an agent that keeps collecting points instead of heading for the finish. Therefore, once all payload transmission is completed, we give the system an additional reward that is positively related to the remaining time budget. The reward in each time slot is therefore given by

$$r_t = \begin{cases} \sum_{m \in \mathcal{M}} C_m(t), & \text{if the payload transmission is not yet finished} \\ \beta\, T_{\mathrm{rem}}(t), & \text{if all payloads have been delivered} \end{cases} \qquad (6)$$

where $\beta$ is a hyperparameter that needs to be tuned in experiments and $T_{\mathrm{rem}}(t)$ is the remaining time budget; note that all platoons share the same reward.
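
Below is a minimal sketch of the shared reward in (6), assuming the sum-rate shaping during transmission and a completion bonus proportional to the remaining time budget; the exact scaling of the two terms is not specified in the extracted text and is controlled here only by the hyperparameter $\beta$.

```python
def step_reward(rates, remaining_payloads, remaining_time, beta):
    """Per-slot reward r_t shared by all platoons, Eq. (6) (illustrative sketch).

    rates: effective groupcast rate of each platoon in this slot,
    remaining_payloads: bits still to deliver per platoon,
    remaining_time: slots left in the delivery budget,
    beta: completion-bonus hyperparameter.
    """
    if all(b <= 0 for b in remaining_payloads):
        # All payloads delivered: bonus grows with the time budget left over.
        return beta * remaining_time
    # Otherwise, the shaped reward is the total effective transmission rate.
    return sum(rates)
```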

III-B Multi-agent Interference Control Scheme

To address the distributed resource selection problem, we formulate the maximization of the system reward as a Markov game and propose a VDN-based resource selection algorithm for platoon resource selection. The main assumption in the VDN framework is that the system's joint action-value function can be decomposed into a sum of per-agent value functions [11]

$$Q_{\mathrm{tot}}(\boldsymbol{\tau}, \mathbf{a}) = \sum_{m=1}^{M} Q_m(\tau_m, a_m) \qquad (7)$$

where $Q_m(\tau_m, a_m)$ is the action value based on agent $m$'s local observation history $\tau_m$, and $Q_{\mathrm{tot}}$ represents the joint action value of the system. During learning, a joint network obtained by aggregating the local networks of all agents is trained centrally. The specific network architecture is shown in Fig. 2.

Importantly, to ensure that the global argmax performed on $Q_{\mathrm{tot}}$ yields the same result as the individual argmax performed on each agent's $Q_m$, the monotonicity of the linear mixing network must be guaranteed:

$$\frac{\partial Q_{\mathrm{tot}}}{\partial Q_m} \ge 0, \quad \forall m \in \mathcal{M} \qquad (8)$$

During training, the joint action-value function is back-propagated through the individual deep neural networks, which encourages the agents to cooperate toward the overall optimum [11]. After training is completed, each agent obtains a network that acts on its local observations only and is used for decentralized execution.
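
Because the mixing network is a plain sum, the VDN aggregation in (7) reduces to a few tensor operations. The sketch below shows this additive mixing in PyTorch; the tensor shapes are assumptions for illustration.

```python
import torch

def vdn_mixing(per_agent_q, actions):
    """Sum per-agent Q-values of the chosen actions into Q_tot, Eq. (7).

    per_agent_q: tensor of shape (batch, n_agents, n_actions) from the shared DQN,
    actions:     long tensor of shape (batch, n_agents) with the chosen action indices.
    """
    chosen_q = per_agent_q.gather(2, actions.unsqueeze(-1)).squeeze(-1)  # (batch, n_agents)
    return chosen_q.sum(dim=1)                                           # (batch,)
```

Since the sum is monotone in each $Q_m$, the condition in (8) holds automatically, so each agent can act greedily on its own Q-values at execution time.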

To tackle partial observability in deep Q-learning (DQN), we feed the agent's current local observation and its action from the previous time step into the network. Furthermore, we add a Gated Recurrent Unit (GRU) after the fully connected (FC) layers [12]. The GRU is a special kind of RNN that can retain long-term memory. Hence, agents can integrate information from previous time steps to compensate for the limitations of the current local observation. As shown in Fig. 2, the output of the FC layer and the hidden state of the GRU from the previous step are fed into the GRU, which then produces the output for the current time step. This recurrence makes better use of the local observation history to approximate the true value.
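
The per-agent network of Fig. 2 can be sketched in PyTorch as a fully connected layer followed by a GRU cell and a linear output head. The 64-unit hidden size and the ReLU activation follow the simulation settings; the input composition and other sizes are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentAgentQNet(nn.Module):
    """Shared per-agent Q-network: FC -> GRU cell -> Q-values (sketch of Fig. 2).

    Input = [observation, one-hot previous action, agent-id encoding]; the 64-unit
    GRU hidden size follows the simulation settings, other sizes are assumptions.
    """
    def __init__(self, input_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.fc_in = nn.Linear(input_dim, hidden_dim)
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.fc_out = nn.Linear(hidden_dim, n_actions)

    def forward(self, x, hidden):
        h_in = torch.relu(self.fc_in(x))   # FC layer with ReLU activation
        hidden = self.gru(h_in, hidden)    # integrate information from past steps
        q_values = self.fc_out(hidden)     # one Q-value per (channel, power) action
        return q_values, hidden
```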

As the number of agents increases, the number of learnable parameters also increases, and the computational complexity, memory footprint, and processing time during training become difficult to control. Inspired by [13], we let the agents share network parameters during learning, since they are structurally identical. By appending an agent identifier to the input, a single DQN can serve multiple agents. Besides avoiding lazy agents, this reduces processing delay and accelerates convergence. Note, however, that each agent's learned strategy is still conditioned on its own history. The role information can be provided to the network as a one-hot encoding.

Nevertheless, as the number of agents increases, the one-hot vector grows and its non-zero entries become very sparse. In this case, direct one-hot coding would still increase the training burden. Therefore, when the number of agents exceeds a threshold, we compress the agents' role information into a compact encoding to reduce the input dimension. The encoded id is then concatenated with the agent's previous action and current observation and fed into the VDN.
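
A small sketch of how the shared-network input might be assembled, switching from a one-hot agent identifier to a compact binary code for large teams. The threshold and the binary coding scheme are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def agent_input(observation, prev_action_onehot, agent_id, n_agents, max_onehot_agents=16):
    """Concatenate observation, previous action, and an agent-id code for the shared DQN.

    Uses a one-hot id for small teams and a compact binary code once the team exceeds
    `max_onehot_agents`; the threshold and the binary coding are illustrative assumptions.
    """
    if n_agents <= max_onehot_agents:
        id_code = np.eye(n_agents)[agent_id]                     # one-hot role encoding
    else:
        n_bits = int(np.ceil(np.log2(n_agents)))
        id_code = np.array([(agent_id >> b) & 1 for b in range(n_bits)], dtype=float)
    return np.concatenate([observation, prev_action_onehot, id_code])
```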

III-C Training Implementation

Algorithm 1: The VDN-based Solution

Input:  DQN network structure, mixing network structure.
Output: The weights of the DQN network.
1:  Initialize the multi-platoon simulation environment, the training network parameters $\theta$, the target network parameters $\theta^- = \theta$, and the replay memory $\mathcal{D}$;
2:  for each training episode do
3:     Update the simulation environment and the exploration rate $\varepsilon$;
4:     for each time slot $t$ do
5:        Each agent $m$ receives its observation $o_m(t)$;
6:        for each agent $m$ do
7:           With probability $\varepsilon$ pick a random action $a_m(t)$;
8:           else $a_m(t) = \arg\max_a Q_m(o_m(t), a; \theta)$;
9:        end for
10:       Perform the joint action $\mathbf{a}(t)$ to obtain the reward $r_t$ and the new observations $\mathbf{o}(t+1)$;
11:       Store $(\mathbf{o}(t), \mathbf{a}(t), r_t, \mathbf{o}(t+1))$ in the replay memory $\mathcal{D}$ in sequence, then $t \leftarrow t+1$;
12:    end for
13:    Randomly select full-episode data from $\mathcal{D}$;
14:    Train the VDN network through equation (10);
15:    Periodically update the target network parameters ($\theta^- \leftarrow \theta$);
16: end for

Similar to [10], our scheme uses centralized training and distributed execution. In particular, each PL runs its own DQN independently during distributed execution.

To balance exploration and exploitation, we adopt $\varepsilon$-greedy exploration: with probability $1-\varepsilon$ the PL adopts the optimal action output by the VDN, and with probability $\varepsilon$ the PL chooses an action at random to explore the environment [14]. To make sufficient use of what the VDN has learned, we tie the exploration rate $\varepsilon$ to the training episode index and anneal it toward a minimum value as training proceeds.
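
A minimal sketch of the $\varepsilon$-greedy policy with an episode-linked annealing schedule. Only the 0.03 exploration floor comes from the simulation settings; the linear schedule and the function names are assumptions.

```python
import numpy as np

def epsilon_for_episode(episode, n_anneal_episodes, eps_start=1.0, eps_min=0.03):
    """Linearly anneal the exploration rate over training episodes (the linear schedule
    is an assumption; only the 0.03 floor is taken from the simulation settings)."""
    frac = min(episode / n_anneal_episodes, 1.0)
    return eps_start + frac * (eps_min - eps_start)

def select_action(q_values, epsilon, rng=None):
    """Epsilon-greedy action selection over the agent's local Q-values."""
    rng = rng if rng is not None else np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))   # explore: random (channel, power) action
    return int(np.argmax(q_values))               # exploit: greedy on local Q-values
```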

After the joint action is executed, the observation of each agent changes to $o_m(t+1)$, and every agent receives the same system reward from the environment. At every step of each episode, the transition of each agent is stored in the replay memory $\mathcal{D}$ in sequence. Unlike the conventional DQN, to exploit the temporal structure captured by the GRU, we randomly sample multiple full episodes from $\mathcal{D}$ during strategy learning. As in DQN, we maintain two neural networks to break up correlations and improve convergence. In particular, the target Q-value is computed from the target Q-network and the reward

$$y_t = r_t + \gamma \max_{\mathbf{a}'} Q_{\mathrm{tot}}(\boldsymbol{\tau}_{t+1}, \mathbf{a}'; \theta^-) \qquad (9)$$

where $r_t$ is the corresponding reward and $\gamma$ is the discount factor. The difference between the overall target Q-value and the estimated Q-value is then used to update the training network

$$L(\theta) = \mathbb{E}\left[\big(y_t - Q_{\mathrm{tot}}(\boldsymbol{\tau}_t, \mathbf{a}_t; \theta)\big)^2\right] \qquad (10)$$
Parameter                          Value
Number of V2V subchannels          2
Subchannel bandwidth               180 kHz
Carrier frequency                  5.9 GHz
Number of platoons                 4
V2V payload size                   [1, 2, ...] × 1200 bytes
V2V transmit power                 [23, 15, 10, -114] dBm
V2V payload generation period T    100 ms
Large-scale fading update          WINNER+ B1, every 100 ms
Small-scale fading update          Rayleigh fading, every 1 ms
TABLE I: Simulation Parameters

Every fixed number of training episodes, the parameters of the training network are copied to the target network: $\theta^- \leftarrow \theta$. Algorithm 1 describes in detail the VDN-based multi-agent resource selection algorithm for the multi-platoon cooperative interference control scenario.
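
Putting the pieces together, the sketch below computes the TD target in (9), the loss in (10) on $Q_{\mathrm{tot}}$, and the periodic hard copy into the target network. The discount value, the done-masking, and the MSE form of the loss are assumptions consistent with standard DQN practice rather than details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def vdn_loss(per_agent_q, per_agent_q_target, actions, rewards, dones, gamma=0.99):
    """TD loss on Q_tot, Eqs. (9)-(10); gamma and the done-masking are assumptions.

    per_agent_q / per_agent_q_target: (batch, n_agents, n_actions) for the current
    and next observations, actions: (batch, n_agents), rewards/dones: (batch,).
    """
    q_tot = per_agent_q.gather(2, actions.unsqueeze(-1)).squeeze(-1).sum(dim=1)
    with torch.no_grad():
        # Per-agent greedy max, then sum: the additive decomposition of Eq. (7)
        # makes this equal to the max over joint actions in Eq. (9).
        q_tot_next = per_agent_q_target.max(dim=2).values.sum(dim=1)
        targets = rewards + gamma * (1.0 - dones) * q_tot_next
    return F.mse_loss(q_tot, targets)

def update_target(train_net, target_net):
    """Periodic hard copy of the training network into the target network."""
    target_net.load_state_dict(train_net.state_dict())
```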

IV Simulation

Our multi-platoon cooperative interference control simulation platform is built according to the highway scenario of 3GPP TR 37.885 [15]. The distance between the head and tail vehicles of a platoon is 75 m. Platoons are evenly distributed on two adjacent lanes. Each PL independently selects subchannels and transmission power based on its observations and strategy to complete the groupcast within its platoon. More detailed simulation parameters are listed in Table I.

The agent's minimum exploration rate is 0.03. The GRU hidden layer has 64 neurons, and the other fully connected layers use the rectified linear unit (ReLU) as the activation function. We randomly sample 32 episodes of data for each training update. All simulations are implemented in PyTorch and Python. To verify the effectiveness of the proposed algorithm, we compare it against two baselines: 1) MARL, in which each agent has a fingerprint-based deep Q-network that copes with the non-stationarity of the environment [8]; and 2) Random, in which each agent randomly selects its subchannel and transmit power at each time step.

Fig. 3 shows the convergence of the proposed algorithm. As training progresses, the exploration rate gradually decreases and the agent learns its strategy from the environment. The total reward increases over the training episodes and eventually stabilizes within a narrow range, which demonstrates the effectiveness of the proposed algorithm in solving the Markov game. Further, we compare the effect of the hyperparameter $\beta$. When $\beta$ is too small, the algorithm converges, but its reward remains at a low level: the completion reward is not large enough to motivate the agent to finish the payload transfer, and the agent may settle for the reward accumulated during transmission, resulting in transmission failure. When $\beta$ is too large, the training reward fluctuates severely, because the completion reward dwarfs the per-step shaping reward and destabilizes the training. With a moderate $\beta$, the agent receives a suitable completion reward, which ensures the convergence of the network while motivating the agent to complete the payload transmission as soon as possible. Therefore, when applying the algorithm, $\beta$ must be set appropriately to exploit the algorithm's full performance.

Fig. 3: Total reward of the algorithm during the training process for different values of $\beta$, with a fixed platoon payload size.

Fig. 4: Platoon payload delivery probability for varying payload sizes.

Fig. 5: Total platoon transmission rates of the different schemes within the same episode, with a fixed platoon payload size.

The relationship between the platoon groupcast success rate and the groupcast payload size is shown in Fig. 4. As the payload grows, the transmission success rates of all algorithms decline to varying degrees. The Random algorithm performs worst, with transmission failures already occurring at relatively small payloads, which illustrates the necessity of a resource allocation algorithm in distributed scenarios. The MARL algorithm also suffers a large number of transmission failures at moderate payloads, although it improves on Random. For the proposed algorithm, a small number of failures appear only once the payload becomes considerably larger. Observing the shapes of the curves, the proposed algorithm degrades most gently as the payload increases. This shows that even when the payload exceeds what the algorithm can fully handle, the proposed algorithm still meets the system performance requirements to the greatest possible extent.

To further explore the reasons behind the performance gap in Fig. 4, we randomly select one transmission episode and plot in Fig. 5 the total system transmission rate at each step of that episode. The results show that the proposed algorithm performs best. In the MARL scheme, although the transmission rate is high at the beginning, it cannot be maintained: it fluctuates constantly, and even drops to zero at some points during the transmission. There are two main reasons for this. First, the agents cannot respond in time to the channel changes caused by vehicle movement. Second, MARL's ability to coordinate the resource selection strategies of different agents in a distributed way is limited, leading to channel conflicts or non-optimal transmit power. As a result, the transmission rate fluctuates severely under MARL. In the proposed scheme, although the total transmission rate is low at the beginning, it stabilizes at a high level as time progresses. Note that the proposed solution shows a zero total transmission rate in the later time steps because the PLs have already delivered their payloads, not because of collisions. The results show that, with the proposed algorithm, the resource selections of different agents can be coordinated in a distributed manner through interaction with the environment, even without any signaling exchange between agents. This avoids resource conflicts and improves system efficiency.

V Conclusion

This paper studies the groupcast channel resource selection problem in a multi-platoon scenario. We model the problem as a Markov game and propose a VDN-based multi-agent distributed resource allocation scheme. The proposed solution requires no signaling interaction between agents during the execution phase, yet achieves cooperation between agents toward global optimization. At the same time, the scheme reduces training delay by sharing neural network parameters between agents. Simulation results show that the solution converges well and effectively improves the probability of successful platoon transmission. We believe that the proposed algorithm can also be applied to other, similar multi-agent scenarios.

References

  • [1] A. Ferdowsi, U. Challita and W. Saad, ”Deep Learning for Reliable Mobile Edge Analytics in Intelligent Transportation Systems: An Overview,” in IEEE Vehicular Technology Magazine, vol. 14, no. 1, pp. 62-70, March 2019.

  • [2] G. Guo and S. Wen, ”Communication Scheduling and Control of a Platoon of Vehicles in VANETs,” in IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 6, pp. 1551-1563, June 2016.
  • [3] P. Wang, B. Di, H. Zhang, K. Bian and L. Song, ”Platoon Cooperation in Cellular V2X Networks for 5G and Beyond,” in IEEE Transactions on Wireless Communications, vol. 18, no. 8, pp. 3919-3932, Aug. 2019.
  • [4] R. Wang, J. Wu and J. Yan, ”Resource Allocation for D2D-Enabled Communications in Vehicle Platooning,” in IEEE Access, vol. 6, pp. 50526-50537, 2018.
  • [5] Z. Li, C. Guo and Y. Xuan, ”A Multi-Agent Deep Reinforcement Learning Based Spectrum Allocation Framework for D2D Communications,” 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 2019, pp. 1-6.
  • [6] K. Zia, N. Javed, M. N. Sial, S. Ahmed, A. A. Pirzada and F. Pervez, ”A Distributed Multi-Agent RL-Based Autonomous Spectrum Allocation Scheme in D2D Enabled Multi-Tier HetNets,” in IEEE Access, vol. 7, pp. 6733-6745, 2019.
  • [7] H. Ye, G. Y. Li and B. F. Juang, ”Deep Reinforcement Learning Based Resource Allocation for V2V Communications,” in IEEE Transactions on Vehicular Technology, vol. 68, no. 4, pp. 3163-3173, April 2019.
  • [8] L. Liang, H. Ye and G. Y. Li, ”Spectrum Sharing in Vehicular Networks Based on Multi-Agent Reinforcement Learning,” in IEEE Journal on Selected Areas in Communications, vol. 37, no. 10, pp. 2282-2292, Oct. 2019.
  • [9] Y. S. Nasir and D. Guo, ”Multi-Agent Deep Reinforcement Learning for Dynamic Power Allocation in Wireless Networks,” in IEEE Journal on Selected Areas in Communications, vol. 37, no. 10, pp. 2239-2250, Oct. 2019.
  • [10] L. Liang, H. Ye and G. Y. Li, ”Multi-Agent Reinforcement Learning for Spectrum Sharing in Vehicular Networks,” 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Cannes, France, 2019, pp. 1-5.
  • [11] P. Sunehag et al., ”Value-decomposition networks for cooperative multi-agent learning based on team reward”, Proc. Int. Conf. Auton. Agents MultiAgent Syst. (AAMAS), pp. 2085-2087, Jul. 2018.
  • [12] Y. Teng, M. Yan, D. Liu, Z. Han and M. Song, ”Distributed Learning Solution for Uplink Traffic Control in Energy Harvesting Massive Machine-Type Communications,” in IEEE Wireless Communications Letters, in press.
  • [13] K. K. Nguyen, T. Q. Duong, N. A. Vien, N. Le-Khac and L. D. Nguyen, ”Distributed Deep Deterministic Policy Gradient for Power Allocation Control in D2D-Based V2V Communications,” in IEEE Access, vol. 7, pp. 164533-164543, 2019.
  • [14] J. Cui, Y. Liu and A. Nallanathan, ”Multi-Agent Reinforcement Learning-Based Resource Allocation for UAV Networks,” in IEEE Transactions on Wireless Communications, vol. 19, no. 2, pp. 729-743, Feb. 2020.
  • [15] 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Study on evaluation methodology of new Vehicle-to-Everything (V2X) use cases for LTE and NR; (Release 15), 3GPP TR 37.885 V15.3.0, Jun. 2019.