DDPG-based Resource Management for MEC/UAV-Assisted Vehicular Networks

08/19/2020, by Haixia Peng, et al.

In this paper, we investigate joint vehicle association and multi-dimensional resource management in a vehicular network assisted by multi-access edge computing (MEC) and unmanned aerial vehicle (UAV). To efficiently manage the available spectrum, computing, and caching resources for the MEC-mounted base station and UAVs, a resource optimization problem is formulated and carried out at a central controller. Considering the overlong solving time of the formulated problem and the sensitive delay requirements of vehicular applications, we transform the optimization problem using reinforcement learning and then design a deep deterministic policy gradient (DDPG)-based solution. Through training the DDPG-based resource management model offline, optimal vehicle association and resource allocation decisions can be obtained rapidly. Simulation results demonstrate that the DDPG-based resource management scheme can converge within 200 episodes and achieve higher delay/quality-of-service satisfaction ratios than the random scheme.


I Introduction

Vehicular networks, referring to wireless networks for vehicle-to-everything communications, are attracting increasing attention from academia and industry and are significantly improving intelligent transportation services [gurugopinath2019cache, liang2019spectrum, ye2019deep, liang2019deep]. Via vehicular networks, road safety and traffic efficiency are enhanced, and an increasing number of vehicular applications and data services are enabled [peng2019spectrum]. However, due to limited spectrum resources [zhang2014dynamic] and limited on-board computing/caching resources, enabling vehicular networks to support the emerging applications and services, especially those with delay-sensitive tasks demanding multiple dimensions of resources, still faces a host of challenges [tayyaba20205g, zhang2013cooperative]. To address these challenges while keeping the cost of resource deployment within an acceptable range, multi-access edge computing (MEC) and unmanned aerial vehicle (UAV) technologies have been applied to vehicular networks [Peng2019SDN, zhao2019computation, chen2019uav, zhang2018energy, LEAD2020feng].

By shifting computing and caching resources to edge nodes or base stations, i.e., deploying MEC servers in vehicular networks, vehicles can offload some of their tasks to the MEC servers via different access technologies. As data transmission between the MEC server and the core network is avoided, a shorter response delay is provided than when tasks are offloaded to a cloud computing server, which helps satisfy the sensitive delay requirement of each task offloaded to the MEC server [Peng2019SDN, ning2019mobile]. Moreover, because the resource demand in vehicular networks is time-varying, especially with bursty traffic caused by events or social activities, resource shortages can still occur at an MEC-mounted base station. Physically increasing the amount of resources installed at each base station would leave those resources wasted most of the time. Thus, in addition to MEC-mounted base stations, removable and flexible MEC servers can be realized by mounting MEC servers on UAVs to absorb the bursty resource demand in vehicular networks [chen2019uav, yang2019energy].

Plenty of existing works have studied how vehicular networks can benefit from MEC-mounted base stations and UAVs. For example, an air-ground integrated network has been proposed in [cheng2018air] to deploy and schedule MEC-mounted UAVs to support vehicular applications. To dynamically manage the available spectrum, computing, and/or caching resources of MEC-mounted base stations or UAVs in vehicular networks, existing works have studied computing task offloading and/or resource management schemes, with or without considering the heterogeneous delay requirements of vehicular applications [peng2019spectrum, peng2020deep, ning2019mobile]. However, most existing works target vehicular scenarios with only MEC-mounted base stations or only MEC-mounted UAVs. How to simultaneously manage the available multi-dimensional resources of both MEC-mounted base stations and UAVs to support vehicular applications with heterogeneous and sensitive delay requirements still needs further effort.

In this paper, we investigate joint vehicle association and resource management in a vehicular network with an MEC-mounted macro eNodeB (MeNB) and MEC-mounted UAVs. To centrally manage the available multi-dimensional resources of the MEC-mounted MeNB and UAVs, we assume a controller is installed at the MeNB and is in charge of making vehicle association and resource allocation decisions for the whole network. Specifically, to maximize the number of tasks offloaded to the MEC servers while satisfying their quality-of-service (QoS) requirements, we formulate an optimization problem to be solved at the controller. Since the formulated problem is non-convex and has high computational complexity, we transform it using reinforcement learning (RL), with the controller acting as the agent, and design a deep deterministic policy gradient (DDPG)-based solution. Through offline training, optimal vehicle association and resource allocation decisions can then be made by the controller in a timely manner to satisfy the offloaded tasks' QoS requirements.

The rest of this paper is organized as follows. The system model, including the MEC/UAV-assisted vehicular network model and the multi-dimensional resource management model, is described in Section II, followed by the formulated resource optimization problem. In Section III, we transform the formulated problem using RL and solve it with a DDPG-based algorithm. Extensive simulation results are provided in Section IV to demonstrate the performance of the DDPG-based resource management scheme. Finally, we conclude this work in Section V.

II System Model

In this section, we illustrate an MEC/UAV-assisted vehicular network model and a multi-dimensional resource management model, and then formulate a resource optimization problem.

II-A MEC/UAV-Assisted Vehicular Network

Consider a vehicular network in which an MEC-mounted MeNB and MEC-mounted UAVs cooperatively support delay-sensitive vehicular applications with diverse resource demands. As illustrated in Fig. 1, an MeNB is deployed on one side of a straight two-way road, and two UAVs fly at a fixed speed above the road, one on each side of the MeNB. The MeNB and each UAV have given amounts of available spectrum, computing, and caching resources. Vehicles on the considered road segment randomly generate different computing tasks and send task offloading requests to the MeNB or a UAV as needed. The task offloading request sent by a vehicle at a given time slot specifies the amount of computing resources required by the task, the data size of the task, and the maximum delay the task can tolerate.

Figure 1: An illustrative structure of the MEC/UAV-assisted vehicular network.

To efficiently manage the multi-dimensional resources of the MEC-mounted MeNB and UAVs among the received task offloading requests, a controller is enabled at the MeNB, since the considered road segment is under the coverage area of the MeNB. The resource management procedure can then be summarized as follows:

  1. The controller receives driving information and task offloading requests from vehicles under the coverage area of the MeNB;

  2. According to the received information, a vehicle association and resource allocation decision is made by the controller to decide the association pattern for each vehicle (as task division is not considered here, each vehicle is assumed to associate with, and offload its computing task to, the MeNB or one of the UAVs at each time slot) and to pre-allocate the available resources among the received task offloading requests;

  3. The controller sends the computing and caching resource allocation results to each UAV and sends the spectrum allocation result and association pattern to each vehicle;

  4. Each vehicle offloads its computing task to the MeNB or UAV over the allocated spectrum resources;

  5. The MeNB and UAVs cache and process the received computing tasks and then return the processing results to the corresponding vehicles.

II-B Resource Management Model

As the resource demand from offloaded tasks changes over time due to high vehicle mobility and the vehicles' heterogeneous computing tasks, the controller has to dynamically manage the available resources of the MeNB and UAVs. Consider the set of vehicles on the considered road segment at a given time slot. For each vehicle, a binary association variable indicates whether it associates with the MeNB or with one of the two UAVs. According to the location information of the MeNB, the two UAVs, and each vehicle, the controller can distinguish the vehicles under different MEC servers. As each vehicle can only associate with one of the MEC servers, only vehicles under the coverage area of an MEC-mounted UAV can be associated with that UAV.

Spectrum resources: The controller allocates the available spectrum resources among the vehicles associating with the MeNB and with the two UAVs, respectively. As the trajectory of each UAV is pre-designed, by controlling the distance between the two UAVs, spectrum reuse is adopted between the uplink transmissions to the two UAVs with acceptable interference. Assume the transmit power of each vehicle for offloading computing tasks to the MeNB or a UAV is fixed. Then the uplink transmission rates achieved by the MeNB and by a UAV from a vehicle can be described as,

(1)

and

(2)

respectively, where the achieved rate depends on the gain of the uplink channel from the vehicle to the MeNB (or to the UAV) at the current time slot and on the fraction of spectrum resources allocated to the vehicle by the MeNB (or by the UAV).
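The bodies of (1) and (2) are not reproduced above. As a rough illustration of how such rates are typically computed, the following minimal Python sketch assumes a Shannon-capacity form over the allocated share of spectrum; the function name and parameters (total bandwidth, transmit power, channel gain, noise power, and an interference term for the spectrum reused between the two UAVs) are illustrative and not the paper's exact notation.

import math

def uplink_rate(spectrum_fraction, total_bandwidth_hz, tx_power_w,
                channel_gain, noise_power_w, interference_w=0.0):
    """Achievable uplink rate (bit/s) of one vehicle, assuming a Shannon-capacity
    form over the spectrum fraction allocated by the MeNB or a UAV. The
    interference term models the spectrum reuse between the two UAVs' uplinks."""
    allocated_bw = spectrum_fraction * total_bandwidth_hz
    sinr = tx_power_w * channel_gain / (noise_power_w + interference_w)
    return allocated_bw * math.log2(1.0 + sinr)

For a vehicle-MeNB link the interference term would be zero, while for a vehicle-UAV link it would capture the transmission reusing the same spectrum under the other UAV.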

Caching resources: A vehicle sends its computing task at the corresponding uplink rate to the MeNB (or to a UAV). The data of each task has to be cached before the task is processed. Fractions of the caching resources of the MeNB and of each UAV are allocated to the vehicles' tasks accordingly. Then a vehicle's task can be successfully completed by the MEC server only if the caching resources allocated to it are sufficient to hold the task's data.

Computing resources: Similarly, fractions of the computing resources of the MeNB and of each UAV are allocated to the associated vehicles. As the sizes of the offloading request and the processing result are relatively small [zhou2018computation], we ignore the time spent on collecting requests from, and sending the processing results back to, each vehicle [zhang2018energy]. For a vehicle, the total time from generating the task to receiving the processing result is then,

(3)
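Since the body of (3) is not shown above, the following sketch only illustrates the structure described in the text: the total delay of an offloaded task is its uplink transmission time plus its processing time at the MEC server, with request collection and result feedback ignored. The variable names and the division-based processing model are assumptions.

def total_task_delay(data_size_bits, uplink_rate_bps,
                     required_computing, computing_fraction, server_capability):
    """Total time from generating a task to receiving its processing result,
    assumed to equal transmission delay plus processing delay (equation (3))."""
    transmission_delay = data_size_bits / uplink_rate_bps
    # Processing delay: the task's required computing resources divided by the
    # share of the server's computational capability allocated to it (assumed model).
    processing_delay = required_computing / (computing_fraction * server_capability)
    return transmission_delay + processing_delay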

II-C Problem Formulation

Due to the heterogeneous computing tasks and high vehicle mobility, the demand on the multi-dimensional resources from offloaded tasks is time-varying. To dynamically manage the total available resources of the MeNB and the two UAVs so as to satisfy this time-varying resource demand, an optimization problem is formulated. From both the service provider's and the vehicle users' perspectives, it is critical to efficiently manage the finite resources to satisfy the resource demands of as many offloaded tasks as possible. Thus, we formulate the optimization problem to maximize the number of tasks successfully completed by the MEC servers with the given amounts of available resources. The problem is given as follows,

(4a)
(4b)
(4c)
(4d)
(4e)
(4f)
(4g)
(4h)
(4i)
(4j)

where the optimization variables are the association pattern matrix for the vehicles on the considered road segment and the allocation matrices of the available spectrum, computing, and caching resources of the MeNB and UAVs. The step function in the objective equals 1 when its argument is non-negative and 0 otherwise; hence, a vehicle contributes to the objective only when enough caching resources are allocated to its computing task and the task's delay requirement is satisfied. The constraints on the available resources of the MeNB and the two UAVs are given by (4e)-(4j).
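To make the role of the step function in the objective concrete, the sketch below counts how many offloaded tasks are successfully completed under one decision, i.e., tasks whose delay is within the tolerated maximum and whose allocated caching resources can hold their input data. The list-based interface is illustrative only.

def completed_task_count(delays, max_delays, cache_allocs, cache_demands):
    """Objective value of (4a) under one decision: the number of offloaded tasks
    whose delay requirement is met and whose data fits the allocated cache."""
    completed = 0
    for d, d_max, c, c_req in zip(delays, max_delays, cache_allocs, cache_demands):
        if d <= d_max and c >= c_req:
            completed += 1  # the step function evaluates to 1 for this task
    return completed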

III DDPG-based Resource Management Scheme

From equation (3) and the objective function of the optimization problem, the allocation of spectrum resources and the allocation of computing resources are coupled with each other. Due to the integer association variables and the step function [peng2020deep], the formulated problem is non-convex. Moreover, the computational complexity of the problem increases with the number of vehicles under the coverage area of the MeNB. It is therefore difficult to solve the formulated problem with traditional optimization methods while satisfying the tasks' strict delay requirements. Thus, to obtain a vehicle association and resource allocation decision in a timely manner, we transform the formulated problem using RL and then design a DDPG-based solution in this section.

III-A Problem Transformation

We transform the above optimization problem according to the main idea of RL [liang2019deep, peng2020deep, shen2020ai]. Specifically, the controller plays the role of the agent, and everything beyond the controller is regarded as the environment. For each environment state, the agent chooses an association and resource allocation action from the action space according to the current policy. By executing the action in the environment, a reward is returned to the agent to guide the policy update until an optimal policy is obtained. Given the positions of each vehicle and each UAV, the environment state at a time slot can be expressed as

(5)

Letting the numbers of vehicles associated with the MeNB and with each UAV be counted accordingly, the action selected by the agent at each time slot is given by

(6)

As the agent updates the policy according to the received reward, two reward elements are defined as follows so that the learned optimal policy achieves the objective of the original problem,

(7)
(8)

where the two elements are the rewards achieved by a vehicle from satisfying its task's delay requirement and from being allocated sufficient caching resources by the selected action, respectively. Positive rewards are achieved by a vehicle only if the delay requirement is satisfied and enough caching resources are allocated; otherwise both reward elements are negative. By using the logarithmic function and adding a small value in (7) and (8), we avoid sharp fluctuations in the rewards and therefore improve the convergence performance.
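Because the exact forms of (7) and (8) are not reproduced above, the sketch below only illustrates the properties the text describes: each reward element is positive when its requirement is met and negative otherwise, a logarithm with a small added constant damps fluctuations, and the result is clipped (the clipping bound used here is an assumed placeholder). The immediate reward fed to the agent is the average over all vehicles.

import math

def _log_reward(margin, epsilon=1e-3, clip=1.0):
    """Shared shape of the two reward elements: sign of the margin times a
    damped logarithm, clipped to [-clip, clip] (the clip value is illustrative)."""
    r = math.copysign(math.log(1.0 + abs(margin) + epsilon), margin)
    return max(-clip, min(clip, r))

def vehicle_reward(tolerated_delay, achieved_delay, allocated_cache, required_cache):
    """Per-vehicle reward: delay element (7) plus caching element (8)."""
    delay_element = _log_reward(tolerated_delay - achieved_delay)
    cache_element = _log_reward(allocated_cache - required_cache)
    return delay_element + cache_element

def immediate_reward(per_vehicle_rewards):
    """Immediate reward returned to the agent: the average over all vehicles."""
    return sum(per_vehicle_rewards) / len(per_vehicle_rewards)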

III-B DDPG-based Solution

The constraints of the original problem and equation (6) indicate that the action space is continuous, and discretizing it is infeasible due to the high dimensionality of each action. That is, RL methods designed for discrete state and action spaces, such as deep Q-networks (DQN), are inapplicable here. Thus, DDPG, an RL method that combines the advantages of the policy gradient method and DQN, is adopted to solve the transformed problem. As illustrated in Fig. 2, the agent under DDPG is composed of an actor and a critic, each implemented by two deep neural networks (DNNs), i.e., a target network and an evaluation network. For an input environment state, the actor makes an action decision and the critic uses a Q-function to evaluate each state-action pair. The standard Q-value function is given by

(9)

where the immediate reward returned to the agent at each time slot is defined as the average reward over the vehicles on the considered road segment, and the discount factor weights future rewards.
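As a small numeric illustration of the discount factor's role in (9), the following sketch computes the discounted cumulative reward that the Q-function estimates for a given starting point; the discount value 0.9 is only an example.

def discounted_return(immediate_rewards, gamma=0.9):
    """Discounted sum of future immediate rewards, the quantity the
    Q-function in (9) estimates for a state-action pair."""
    g = 0.0
    for r in reversed(immediate_rewards):
        g = r + gamma * g
    return g

# Example: rewards [1.0, 1.0, 1.0] with gamma = 0.9 give 1 + 0.9 + 0.81 = 2.71.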

Figure 2: The DDPG framework for the MEC/UAV-assisted vehicular network.

The DDPG-based solution has two stages, training and inference, where the training stage is performed offline. As the correlation among the transitions used in the training stage would reduce the convergence rate, the experience replay technique is adopted in DDPG: transitions are first saved in a replay memory buffer, and mini-batches of transitions are then randomly selected from the buffer to train the DDPG model, i.e., to update the actor's and critic's parameters until convergence. The parameters of the evaluation networks of the actor and the critic are updated according to the policy gradient and the loss function [lillicrap2015continuous], respectively. Specifically, the parameter matrix of the actor's evaluation network is updated in the direction of the gradient of the policy objective function, and the critic adjusts its evaluation network's parameters in the direction that minimizes the loss between its Q-value estimate and the target Q-value produced by the critic's target network. With the updated parameters of the actor's and critic's evaluation networks, the agent then softly updates the parameters of the two target networks by

(10)

and

(11)

respectively, where the soft-update coefficients are small positive constants so that the target networks track the evaluation networks slowly.
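Equations (10) and (11) describe these soft target updates. A minimal sketch of the rule is given below; the coefficient value 0.001 is an assumed example, and the parameters are represented as flat lists of floats for simplicity.

def soft_update(target_params, eval_params, tau=0.001):
    """Softly move each target-network parameter toward the corresponding
    evaluation-network parameter: theta_target <- tau*theta_eval + (1-tau)*theta_target."""
    return [tau * e + (1.0 - tau) * t for t, e in zip(target_params, eval_params)]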

Assume each episode contains a fixed number of steps, and we use the total reward achieved per episode to measure how the rewards evolve during the training stage. The DDPG-based solution is then summarized in Algorithm 1.

/* Initialization */
Initialize the size of the replay memory buffer and the parameter matrices of the two DNNs in both the actor and the critic.
/* Parameter updating */
foreach episode do
       Set the initial state and the step index.
       foreach step do
             Choose an action with the actor's evaluation network;
             Execute the action, then obtain the reward and the subsequent state;
             if the replay buffer is not yet full then
                    Save the transition in the replay buffer;
             else
                    Replace the oldest saved transition in the buffer with the new one;
                    Randomly select a mini-batch of transitions from the replay buffer and input them to the actor and critic;
                    Update the evaluation networks' parameters of the actor and the critic according to the policy gradient and the loss function;
                    Update the target networks' parameters with (10) and (11).
             Move to the next step.
Algorithm 1 The DDPG-based solution
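The buffering logic in Algorithm 1 (save transitions until the buffer is full, then overwrite the oldest one, and sample random mini-batches) can be sketched as follows; the class and field names are illustrative.

import random
from collections import namedtuple

Transition = namedtuple("Transition", ["state", "action", "reward", "next_state"])

class ReplayBuffer:
    """Experience replay memory used in the training stage of Algorithm 1."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.storage = []
        self.next_index = 0  # position of the oldest transition once the buffer is full

    def save(self, state, action, reward, next_state):
        transition = Transition(state, action, reward, next_state)
        if len(self.storage) < self.capacity:
            self.storage.append(transition)              # buffer not yet full
        else:
            self.storage[self.next_index] = transition   # overwrite the oldest transition
            self.next_index = (self.next_index + 1) % self.capacity

    def sample(self, batch_size):
        """Randomly draw a mini-batch to break the correlation between transitions."""
        return random.sample(self.storage, batch_size)

Once the buffer is full, each training step draws such a mini-batch to update the actor's and critic's evaluation networks, after which the target networks are softly updated with (10) and (11).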

IV Simulation Results

To demonstrate the performance of the DDPG-based resource management scheme, we first simulate vehicle mobility with PTV (Planung Transport Verkehr) Vissim, and then use the obtained traffic data together with randomly generated computing tasks to train and test the DDPG-based resource management model. The vehicle's transmit power is fixed, and the channel gains of the uplink transmissions from a vehicle to the MeNB and from a vehicle to a UAV are modeled as functions of the vehicle-MeNB and vehicle-UAV distances, respectively [Ye2018Dynamic]. Considering the heterogeneity of vehicular applications, we assume each vehicle periodically generates computing tasks with different required computing resources (in MHz), data sizes (in kbits), and maximum tolerable delays (in ms). Unless otherwise stated, the parameters used in the training and inference stages are set as in Table I.

Parameter Value
Height of the MeNB m
Altitude of the UAV m
Flying speed of the UAV m/s
Available spectrum resources for the MeNB/UAV MHz
Computational capabilities at the MeNB/UAV GHz
Caching resources at the MeNB/UAV kbits
Communication range of the MeNB/UAV m
Background noise power dBm
Replay buffer size
Size of each mini-batch of transitions
Discount factor on reward
Learning rate of the actor/critic
Table I: Parameter settings for the training stage

Fig. 3 demonstrates the convergence performance of the presented DDPG-based resource management scheme. Since the parameters of the DDPG model are initialized at the beginning of the training stage and the corresponding policy is far from optimal, the total rewards achieved in the early episodes are relatively small and fluctuate dramatically. As training proceeds, the policy approaches the optimum; thus, higher total rewards are achieved and the reward fluctuation becomes smaller and smaller in the later episodes. Because each episode contains a fixed number of steps and the two reward elements are clipped, the achievable total reward per episode during the training stage is bounded, as shown in Fig. 3. By clipping the two reward elements, the reward achieved by each vehicle is bounded as well, so excessive rewards from over-allocated resources are avoided, which guides the agent to learn the optimal policy faster and improves the convergence performance of the DDPG-based solution.

Figure 3: Total rewards achieved per episode during the training stage.

For the DDPG-based solution, the vehicle association and resource management decisions made by the optimal policy should maximize the long-term reward. As positive rewards are achieved by a vehicle only when its offloaded task's delay or QoS requirements are satisfied by the allocated resources, high episode rewards indicate that more offloaded tasks have their delay/QoS requirements satisfied. Hence, we use two evaluation criteria, the delay and QoS satisfaction ratios (i.e., the ratios of the tasks completed by the MEC servers with satisfied delay/QoS requirements to the total number of offloaded tasks [peng2020deep]), to measure the performance of the DDPG-based resource management scheme. We also compare the DDPG-based scheme with a random scheme, which rapidly but randomly decides the association patterns and resource allocations for the vehicles under the coverage area of the MeNB.

The three subfigures in Fig. 4 show the average delay/QoS satisfaction ratios over 10,000 tests versus the amounts of available spectrum, computing, and caching resources of the MeNB and UAVs. With more available resources at the MeNB and UAVs, more resources are allocated to each task offloading request, and therefore more tasks are completed by the MEC servers with satisfied delay/QoS requirements. For each offloaded task, a satisfied QoS requirement means that its delay requirement is satisfied and that sufficient caching resources are allocated to it. Thus, under the same test setting, the average delay satisfaction ratio is always higher than the QoS satisfaction ratio, as shown in Fig. 4. Moreover, the gap between the delay and QoS satisfaction ratios of the DDPG-based scheme is far smaller than that of the random scheme, because the DDPG-based scheme jointly allocates the multi-dimensional resources to satisfy the offloaded tasks' QoS requirements.

(a) Vs. spectrum resources
(b) Vs. computation capabilities
(c) Vs. caching resources
Figure 4: Average delay/QoS satisfaction ratios over 10000 tests.

V Conclusion

This paper has investigated the multi-dimensional resource management problem in an MEC/UAV-assisted vehicular network. In particular, to support as many offloaded computing tasks as possible with satisfied delay and QoS requirements, a resource management optimization problem has been formulated for the controller installed at the MeNB. As the formulated problem is non-convex and has high computational complexity, instead of leveraging traditional optimization methods, we have transformed the original problem using RL and designed a DDPG-based solution. Simulation results have demonstrated that the DDPG-based algorithm can converge within 200 episodes during the training stage, and that the DDPG-based scheme achieves higher delay/QoS satisfaction ratios than the random scheme.

References