Next-generation wireless networks must be capable of managing a broad spectrum of wireless technologies with heterogeneous quality-of-service (QoS) and quality-of-experience (QoE) requirements. These technologies include emerging applications such as holographic telepresence, the Internet of everything, drone-based applications, and collaborative robots [6Gapplication, lotfi2022semantic, jebellat2021training, lotfi2021]. To meet the strict QoS and QoE requirements of these applications, the open radio access network (O-RAN) architecture has recently been introduced to address the demand for virtual and programmable components, as well as intelligent, data-driven, and closed-loop control of the RAN.
The key advantages of O-RAN are the possibility for operators to mix and match components, the use of open interfaces, its software-centric and scalable design, and the potential to improve network performance using machine learning (ML) approaches. Furthermore, all of these factors increase the flexibility of the network design [polese2022understanding]. The disaggregation of RAN functions into different units is a key feature of O-RAN that makes the network adaptable. The third generation partnership project (3GPP) splits base stations (BSs) into an open central unit (O-CU), an open distributed unit (O-DU), and an open radio unit (O-RU) [3gpp2017study]. Moreover, these distributed units link, via open interfaces, to the RAN intelligent controllers (RICs) that manage and control the network in near-real-time and non-real-time control loops [oranslice2020, polese2022understanding]. New-generation networks need a control approach with fast convergence, between ms and s for near-real-time controllers, to obtain their required QoS [oranAI]. Furthermore, to maintain the network QoS in the face of dynamic changes and heterogeneous requirements, O-RAN slicing is being explored as a viable solution.
Different network slices need to be managed carefully to satisfy diverse service level agreements (SLAs). While artificial intelligence (AI) and ML approaches are applicable to network slicing, they face challenges, including the requirement for a vast amount of diverse data and sufficient exploration to accelerate convergence and train an ML model that can effectively generalize to different situations without impacting the actual RAN performance [polese2022understanding, brik2022deep]. Given the difficulty of gaining access to this amount and variety of data, making large-scale decisions involving several O-DUs is challenging. The O-RAN slicing challenge of managing and controlling diverse and dynamic service requirements has recently been studied in several works [thaliath2022predictive, niknam2020intelligent, hammamipolicy, polese2021colo, bonati2021intelligence]. The two main ML categories used in the literature are supervised deep learning and deep reinforcement learning (DRL). To avoid SLA violations, the works in [thaliath2022predictive] and [niknam2020intelligent] proposed supervised ML-based resource provisioning approaches for network slicing that use predictions of the traffic and the number of active users in the network. The works in [hammamipolicy, polese2021colo, bonati2021intelligence] proposed DRL-based approaches to achieve online training and dynamic resource allocation in O-RAN. In particular, the authors in [hammamipolicy] provided a performance comparison between an on-policy and an off-policy model for delay-tolerant and delay-sensitive services, and the work in [polese2021colo] utilized autoencoders to reduce the unnecessarily high-dimensional input to the DRL agent and improve DRL controller performance in the presence of unreliable real-time wireless data. Furthermore, the authors in [bonati2021intelligence] demonstrated the feasibility of RAN scheduling control over the real-time RIC by collecting data at the network edge for a DRL method.
Although the prior art in [thaliath2022predictive, niknam2020intelligent, hammamipolicy, polese2021colo, bonati2021intelligence] is effective in a set of use cases, it has a number of limitations, including slow convergence and a lack of generalizability. For example, the DRL methods in [hammamipolicy] require nearly 20,000 training time steps before they converge and can thus lead to inefficient resource allocation in delay-sensitive systems. In addition, while the work in [polese2021colo] aims to overcome the limitations of DRL approaches in real-time wireless network scenarios, these limitations have not been completely resolved, especially in real-time control scenarios. While DRL algorithms are effective in complex tasks, their training procedures are slow in the face of unreliable real-time wireless data, causing delays in the O-RAN control mechanism. Furthermore, DRL methods suffer from insufficient exploration, particularly in dynamic heterogeneous environments such as the one studied in [bonati2021intelligence]. These challenges make DRL approaches insufficient for the O-RAN slicing scenario and mandate new solutions that can cope with the demand for broad exploration of dynamic wireless networks.
The main contribution of this work is to utilize the opportunity provided by O-RAN to create new experiences based on its disaggregated modules. To this end, we formulate a problem that aims to minimize the probability of SLA violation while considering constraints on the total required resources. To solve this problem, we propose a joint DRL and population-based strategy, an evolutionary-based DRL (EDRL) approach, for the O-RAN slicing scenario to accelerate the learning process in the RIC modules. By leveraging population actors, the EDRL provides enough exploration to stabilize the convergence properties and make the learning process more robust. To do so, we model the O-RAN slice management problem as a Markov decision process (MDP). Then, to solve the MDP and find the best allocation policy for O-RAN slicing, we propose an EDRL algorithm. Simulation results show an improvement in network performance compared to the DRL baseline method.
To the best of our knowledge, this is the first work that utilizes a hybrid method of evolutionary algorithms and DRL to achieve efficient and dynamic slice management in future O-RANs.
II. System model
Consider an O-RAN architecture for a wireless network with heterogeneous users served by different network slices. By treating the O-RAN architecture as a dynamic slice optimization, resource management for O-RAN slicing is performed in the RIC modules, as shown in Fig. 1. In this system, there are several types of network slices with specific QoS requirements, defined as follows: enhanced mobile broadband (eMBB), machine-type communications (MTC), and ultra-reliable low-latency communications (URLLC). Each slice serves its assigned user equipment (UE). Furthermore, the slices must share a pool of available resources to meet the QoS requirements of their assigned UEs. To guarantee that the SLA is satisfied, dynamic slice optimization is performed in the RT-RIC module to optimize slice management. In addition, as part of the slicing, the medium access control (MAC) layer should allocate resources following the radio resource management (RRM) strategy provided by the slice management. To address this challenge, we next present the wireless model and, accordingly, formulate the slice management problem cognizant of the wireless resource constraints.
II-A. Wireless communication model for O-RAN network slicing
According to the O-RAN architecture shown in Fig. 1, different network slices serve different UEs with different QoS criteria. The RIC module is in charge of managing these network slices and the resources assigned to them. Due to the stochastic nature of the wireless channel, static resource management would be ineffective. As a result, dynamic resource management is considered in order to dynamically re-assign resources to the network slices in each frame based on the UEs' network and channel changes. However, while this aids in adapting to dynamic changes in the wireless channel, it also complicates the resource assignment.
Besides, the QoS criteria can be defined for the slices in terms of throughput, capacity, and latency. To define slice $n$'s achievable QoS, we consider an orthogonal frequency-division multiple access (OFDMA) scheme. The achievable data rate of slice $n$ can be written as
$$r_n = \sum_{b=1}^{B} \sum_{k} e_{k,b}\, s_{n,b}\, W \log_2\!\left(1 + \frac{P\, d_k^{-\alpha}\, |h_{k,b}|^2}{I_b + \sigma^2}\right),$$
where $e_{k,b}$ is a binary variable that indicates the resource block (RB) allocation of RB $b$ to user $k$, and $s_{n,b}$ is a binary variable that indicates the allocation of RB $b$ to slice $n$. Also, $W$ represents the RB bandwidth and $B$ is the total number of RBs available for downlink communications. In addition, $P$ is the O-RU transmit power per RB, and $d_k$ is the distance of user $k$ from its assigned O-RU. Moreover, $\alpha$ represents the path-loss exponent, and $h_{k,b}$ is the time-varying Rayleigh fading channel gain. Here, $I_b$ denotes the downlink interference from the neighboring O-RUs transmitting over RB $b$, and $\sigma^2$ represents the variance of the additive white Gaussian noise (AWGN).
II-B. Problem formulation
Due to the restricted resources shared between slices with heterogeneous services and dynamic UEs, our goal is to minimize the probability of SLA violation while considering constraints on the total required resources. The decision variables are the vectors of the per-UE and per-slice resource allocation indicators at the MAC and RIC levels, respectively. Therefore, we formulate the following optimization problem to find an optimal allocation policy for the O-DUs and distribute the shared resources among the heterogeneous UEs in each O-RAN slice.
where the corresponding parameters represent, respectively, the desired threshold and margin values of the QoS required by each slice. Constraints (2c) and (2d) represent the feasibility conditions on the resources allocated to slices and UEs. Objective (2a) indicates that the slices achieve their demanded QoS by minimizing the probability of SLA violation. The proposed problem is an NP-hard mixed-integer stochastic optimization problem, which is challenging to solve. A Markov decision process (MDP) provides a mathematical framework for decision-making and optimization problems involving partially random situations. As a result, it is advantageous to model problem (2a) as an MDP and solve it using dynamic methods such as DRL approaches.
II-C. Stochastic game-based optimization problem
By considering the RT-RIC module as an intelligent agent that makes decisions to manage O-RAN slicing, the other components, i.e., the O-RU, O-DU, and O-CU, act as the agent's environment that is influenced by the agent's actions, as shown in Fig. 1. Thus, the decision process of the aforementioned O-RAN slicing control problem is represented as an MDP whose tuple consists of the state space, the action space, and the transition probability from the current state to the next state. The MDP components are described as follows:
The state represents the O-RAN status at each time step, which contains the achievable QoS of each slice, the UE density in each slice, and the resource allocation history in the form of the previous action. Therefore, the observation of the intelligent agent at each time step is as follows:
The action is defined as a vector of the number of required resources for the O-RAN slices and UEs. Thus, at each time step, the agent decides, based on its policy, which action to perform.
The reward characterizes the summation of the complements of the SLA-violation probabilities in (2a), which relies on the incoming traffic of each slice and the radio conditions of the connected UEs. The desired reward value is described as:
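As a minimal numerical illustration of the MDP components above (with hypothetical slice counts and placeholder values, not the paper's actual parameters), the state, action history, and reward can be sketched in Python as:

```python
import numpy as np

N_SLICES = 3  # eMBB, MTC, URLLC (illustrative)

def build_state(qos_per_slice, ue_density, prev_action):
    """Observation: achieved QoS per slice, UE density per slice,
    and the previous resource-allocation action."""
    return np.concatenate([qos_per_slice, ue_density, prev_action])

def reward(sla_violation_prob):
    """Sum of the complements of the per-slice SLA-violation
    probabilities: higher when violations are less likely."""
    return float(np.sum(1.0 - sla_violation_prob))

s = build_state(np.array([0.9, 0.8, 0.95]),   # achieved QoS per slice
                np.array([0.3, 0.5, 0.2]),    # UE density per slice
                np.array([0.4, 0.3, 0.3]))    # previous RB shares
r = reward(np.array([0.1, 0.2, 0.05]))        # per-slice violation prob.
```

With these placeholder values the observation is a flat 9-dimensional vector and the reward rises as the violation probabilities fall, matching the objective in (2a).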
Therefore, the procedure makes the best use of the available bandwidth to meet the QoS demands of all slices. The defined MDP model can then be investigated using a DRL approach. The main task of a DRL agent is to find an optimal policy, a mapping from the state space to the action space, that maximizes its expected average discounted reward. Given a policy, the state-value and action-value functions are defined accordingly. Due to the continuous states and actions, the deep deterministic policy gradient (DDPG) algorithm is used as a model-free, off-policy algorithm. As part of the actor-critic technique, the RT-RIC agent simultaneously learns an optimal policy to assign the resources that maximize the long-term reward. The policy network is updated using the gradient defined as follows over randomly sampled transitions:
Also, the value network is updated by minimizing the following loss:
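A compact PyTorch sketch of these two updates (illustrative layer sizes; target networks and exploration noise omitted for brevity) might look as follows:

```python
import torch
import torch.nn as nn

state_dim, action_dim = 9, 3  # illustrative sizes

actor = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                      nn.Linear(32, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 32), nn.ReLU(),
                       nn.Linear(32, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(batch, gamma=0.99):
    s, a, r, s2 = batch
    # Critic: minimize the TD error against a bootstrapped target (cf. (6)).
    with torch.no_grad():
        target_q = r + gamma * critic(torch.cat([s2, actor(s2)], dim=1))
    critic_loss = nn.functional.mse_loss(
        critic(torch.cat([s, a], dim=1)), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the sampled deterministic policy gradient (cf. (5)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    return critic_loss.item(), actor_loss.item()

# One update on a random mini-batch of transitions (s, a, r, s').
batch = (torch.randn(8, state_dim), torch.rand(8, action_dim),
         torch.randn(8, 1), torch.randn(8, state_dim))
cl, al = ddpg_update(batch)
```

A full DDPG implementation would also maintain slowly updated target copies of both networks, which are left out here to keep the update rules visible.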
Training one agent that interacts with the environment can be very time-consuming. However, with O-RAN, we can leverage experience from all of the disaggregated modules to guide the agent in the training procedure. For instance, different O-DUs may experience similar network instances (i.e., network traffic, QoS requirements, etc.) while being deployed at different locations across the network. Accordingly, sharing their experiences increases the generality of the resource assignment task. The prior works in [thaliath2022predictive, hammamipolicy, polese2021colo, niknam2020intelligent, bonati2021intelligence] utilize supervised deep learning and DRL approaches for network management. These methods have limitations, such as the need for large-scale data, insufficient exploration, and unstable convergence in O-RAN slicing. Hence, inspired by [khadka2018evolution], we employ a hybrid strategy to solve problem (2a). This hybrid strategy combines an evolutionary algorithm (EA) with the DDPG algorithm of the DRL approach to better utilize experience samples and achieve more effective performance in less time than DRL alone. The EA provides a diverse set of samples representing a wide range of services and traffic requirements to improve DRL learning performance. In return, the DRL injects gradient information into the EA population, which augments the EA's ability to select samples that push the DRL policy toward regions of higher reward in the policy space.
III. Evolutionary DRL method
EDRL is a hybrid method combining a population-based EA with a sample-efficient DRL approach. The EDRL uses the diverse EA experiences to train the DRL agent, while the DRL injects gradient information into the EA population. Accordingly, the combination converges faster and is thus suitable for real-time applications [oranAI]. Because EAs employ a fitness metric that aggregates returns over a whole episode, they are beneficial in environments with reward-less states, where the reward is specified and known for only a few states, and they are resistant to long time horizons [khadka2018evolution]. Accordingly, the EDRL addresses the delayed-reward issue, which is prominent in network slicing since the network must experience diverse policies to observe different rewards. In general, the flow of the EDRL algorithm is divided into three interacting phases: the population phase, the DRL phase, and the interaction between them, which are explained in the following subsections.
III-A. Population phase
The population actors are each evaluated in one episode of interaction with the environment during the population phase, as illustrated in Fig. 1. During the evaluation episode, they measure the fitness score as the cumulative sum of the return values of each interaction and save each actor's experience in the replay buffer as a tuple. The measured fitness score is determined by the achieved QoS of each slice, which depends on the wireless aspects of the environment. Then, based on the fitness scores, the population actors are sorted for the selection step. Consequently, the results are used in the mutation and crossover steps to create the next generation from the elite individuals of the population. Here, O-RAN makes it possible to gather new experiences through disaggregated modules in different geographical locations, for example by leveraging different populations with separate environments. As a result, the network experiences a wide variety of wireless traffic patterns, which is crucial to improve the generalization ability of the network's dynamic management system.
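Under a stand-in environment (the rollout below is a placeholder, not the O-RAN simulator), the evaluation-and-ranking logic of the population phase can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(actor_params, episode_len=50):
    """Roll out one episode and return the cumulative reward as the
    fitness score; in the full method, each transition would also be
    pushed to the shared replay buffer. The reward here is a stand-in
    for the achieved-QoS reward of the O-RAN environment."""
    fitness = 0.0
    for _ in range(episode_len):
        reward = float(np.tanh(actor_params.sum()) + rng.normal(0, 0.1))
        fitness += reward
    return fitness

# A small population of flat weight vectors (illustrative).
population = [rng.normal(size=4) for _ in range(5)]
scores = [evaluate(p) for p in population]
# Sort actors by fitness so the elites can seed the next generation.
ranked = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
```

The ranked indices then feed the selection, crossover, and mutation steps described below.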
III-B. DRL phase
On the other hand, in the DRL phase, a critic network is updated using a random batch of replay buffer samples in a gradient descent (GD) manner as in (6). Then, the critic network trains a DRL actor via the sampled policy gradient (5). The EA alone would use samples only to compute the fitness score and then discard the information. By storing these samples in a replay buffer and continually applying powerful gradient-based actor-critic updates, however, the hybrid method extracts more information from the data while maintaining high sample efficiency.
III-C. Interaction between populations and DRL
The most crucial phase, the interaction between the EA and DRL algorithms, is then performed. During this phase, the DRL actor's weights are copied to the worst-performing individuals in the population, which leverages the information learned by the DRL and helps stabilize learning. Thus, the population learns from the episode experience as well as the fitness scores. Also, a synchronization period governs how frequently the RL actor's information is shared with the EA population. Furthermore, following the selection of the elites, updates from the best performers are used to accelerate convergence.
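A minimal sketch of this synchronization step, assuming each actor's weights are a flat NumPy vector, could be:

```python
import numpy as np

def sync_rl_into_population(rl_params, population, scores, n_replace=1):
    """Every synchronization period, overwrite the worst-performing
    population actors with a copy of the RL actor's weights so the
    gradient-learned policy re-enters the evolutionary population."""
    worst = np.argsort(scores)[:n_replace]  # lowest fitness first
    for i in worst:
        population[i] = rl_params.copy()
    return population

pop = [np.zeros(3), np.ones(3), np.full(3, 2.0)]
scores = [0.5, -1.0, 0.9]  # actor 1 performed worst
pop = sync_rl_into_population(np.array([7.0, 7.0, 7.0]), pop, scores)
```

Only the weakest individuals are replaced, so the elites discovered by the EA are never overwritten by the RL actor.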
While the population actors explore in parameter space and the RL actor explores in action space, they complement each other and lead to effective exploration of the policy space. Besides, the critic network is updated with samples from the EA population's policy, which may or may not coincide with the DRL agent's next action. As a result, this hybrid method behaves as an on-policy method in each synchronization period and as an off-policy method the rest of the time, providing benefits from both [gu2017interpolated]. The effectiveness of the EA is determined by the selection, crossover, and mutation steps, so the appropriate choice of these steps is critical. The parents are chosen in the selection step to generate the next generation. Selecting for higher fitness scores improves the overall quality of each generation, whereas selecting for diversity avoids getting stuck in a local extremum. The selection strategy should therefore be chosen with the environment in mind. In our system model, tournament selection [miller1995genetic] offers the best performance for the O-RAN environment, as follows:
where the selected parents generate the next generation. In the O-RAN environment, the population consists of actor networks that are located in the RT-RIC modules and interact with distinct O-DU/O-RU environments in different areas, as shown in Fig. 1. Following parent selection, the crossover and mutation steps inject additional randomness into the system, resulting in more exploration and better generalization. To ensure the transfer of the parents' valuable properties to the next generation, we use the average function in the crossover step as follows:
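Assuming flat weight vectors and an illustrative tournament size, the tournament selection and averaging crossover described above can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

def tournament_select(scores, k=3):
    """Pick the fittest of k randomly drawn candidates (tournament
    selection): greedy enough to favor quality, random enough to
    preserve diversity."""
    candidates = rng.choice(len(scores), size=k, replace=False)
    return candidates[np.argmax(np.asarray(scores)[candidates])]

def crossover(parent_a, parent_b):
    """Averaging crossover: the child inherits the element-wise mean
    of its parents' weights."""
    return 0.5 * (parent_a + parent_b)

population = [rng.normal(size=4) for _ in range(6)]
scores = [float(p.sum()) for p in population]  # stand-in fitness
i, j = tournament_select(scores), tournament_select(scores)
child = crossover(population[i], population[j])
```

The tournament size k trades off selection pressure against diversity: k = 1 is random selection, while k equal to the population size always picks the global best.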
Then, in the mutation step, the population is perturbed to create new features for the next generation as follows [khadka2018evolution]:

where the perturbation is Gaussian noise with zero mean and unit variance scaled by the mutation rate, and a uniform random variable determines which type of mutation is applied. Moreover, the super-mutation probability and the reset probability control how often a larger perturbation or a full weight reset occurs, respectively.
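A sketch of this mutation operator, with hypothetical probability and scale values, could be:

```python
import numpy as np

rng = np.random.default_rng(2)

def mutate(weights, mut_rate=0.1, super_prob=0.05, reset_prob=0.01,
           super_scale=10.0):
    """Perturb each weight with zero-mean, unit-variance Gaussian noise
    scaled by the mutation rate; occasionally apply a 'super' mutation
    (larger noise) or reset the weight entirely. All probabilities and
    scales here are illustrative placeholders."""
    w = weights.copy()
    for idx in range(w.size):
        u = rng.random()  # uniform draw decides the mutation type
        if u < reset_prob:
            w[idx] = rng.normal(0.0, 1.0)                     # reset
        elif u < reset_prob + super_prob:
            w[idx] += super_scale * mut_rate * rng.normal()   # super mutation
        else:
            w[idx] += mut_rate * rng.normal()                 # regular mutation
    return w

child = mutate(np.zeros(8))
```

The rare super-mutation and reset events inject large jumps that keep the population from collapsing onto a single region of parameter space.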
III-D. The proposed EDRL algorithm
In Algorithm 1, we summarize the EDRL method used to solve the optimization problem in (2a)-(2d). The input variables of the algorithm are the number of generations, the population size, the number of elites, and the population actors' network weights, which are initialized randomly. The DRL agent's actor and critic network weights are also initialized randomly. Moreover, a random variable and a synchronization period are given as input variables to Algorithm 1. The algorithm outputs the trained policy of the DRL agent. In each generation loop, the population actors' fitness scores are measured, and each population actor's experience is stored in the replay buffer. Then, in the "Evolution" section of the algorithm, the best population actors, based on the measured fitness scores, are selected as elites to generate the next generation of population actors through parent selection (7), crossover (8), and mutation (9). In this step, the DRL agent also updates its actor and critic networks using (5), (6), and random samples from the replay buffer. The algorithm terminates once the DRL agent's policy network has converged or after the maximum number of generations.
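Putting the phases together, a toy version of the generation loop (with a stand-in fitness function and simplified evolution in place of the O-RAN environment and the full DDPG update) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(w):
    """Stand-in environment rollout: peak fitness at w = (1, 1)."""
    return -float(np.sum((w - 1.0) ** 2))

def evolve(population, scores, n_elites, mut_rate=0.1):
    """Keep the elites unchanged and fill the rest of the population
    with mutated copies of them."""
    order = np.argsort(scores)[::-1]
    elites = [population[i].copy() for i in order[:n_elites]]
    children = [e + mut_rate * rng.normal(size=e.size) for e in elites]
    return elites + children

pop_size, n_elites, generations = 6, 3, 40
population = [rng.normal(size=2) for _ in range(pop_size)]
rl_actor = rng.normal(size=2)
init_best = max(fitness(w) for w in population)

for g in range(generations):
    scores = [fitness(w) for w in population]      # population phase
    population = evolve(population, scores, n_elites)
    # DRL phase stand-in: nudge the RL actor toward the best elite.
    rl_actor += 0.1 * (population[0] - rl_actor)
    # Synchronization: the RL actor replaces the worst individual.
    population[-1] = rl_actor.copy()

best = max(fitness(w) for w in population)
```

Because the elites are carried over unchanged each generation, the best fitness in the population never decreases, mirroring the stability that the elite-preservation step gives Algorithm 1.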
TABLE I: Simulation parameters
|Channel model|10-tap Rayleigh fading channel|
|DDPG batch size|128|
|Replay buffer size| |
IV. Simulation results
In the simulations, we investigate an O-RAN architecture with three slices (i.e., eMBB, MTC, and URLLC). We consider a total bandwidth (in MHz) divided into RBs that can be dynamically assigned to different slices. The slices serve users that are randomly and uniformly distributed in the network and divided among the slices. In the EDRL part, we consider a population of actors with a given elite fraction and given crossover and mutation batch sizes. In the DRL part, we implement the DDPG algorithm in PyTorch with three fully connected layers for the actor and critic networks and a tanh activation function. For all models, the Adam optimizer with a fixed learning rate is used. Table I summarizes the simulation parameters.
Figure 2 compares the performance of the proposed EDRL algorithm with the DRL algorithm as a baseline method. The cumulative rewards in Fig. 2 are displayed per episode for comparison with conventional DRL and are averaged over a sufficient number of runs. The results in Fig. 2 reveal that the EDRL approach achieves a greater final return value than the DRL method, demonstrating the efficacy of the proposed EDRL scheme over the wireless environment. Fig. 2 also illustrates that the proposed approach achieves faster convergence than the baseline, which is due to the employment of a population-based method whose actors supply many valuable experiences from diverse modules for training the RL-based actor. Furthermore, as Fig. 2 indicates, the DRL method, which has better sample efficiency, performs better in the early episodes of training, where limited samples are available. After some episodes, the joint DRL and population-based EDRL method, which by then has experienced more samples, provides more generality in the training process and thus outperforms the DRL.
Figure 3 shows the cumulative distribution functions (CDFs) of the achieved QoS in each slice during the EDRL and DRL training process. The slices are the eMBB, MTC, and URLLC slices, in that order. Each slice meets its service demands by considering a distinct QoS metric for its specific services: the average data rate of the eMBB slice in Fig. 3(a), the capacity of the MTC slice in Fig. 3(b), and the maximum delay of the URLLC slice in Fig. 3(c). Considering an average data rate QoS metric in the eMBB slice guarantees a stable service for the connected UEs. Similarly, capacity as a QoS parameter in the MTC slice ensures that a high connection density is supported. Furthermore, considering an exponentially distributed packet length (with a fixed average size in Kb) for the URLLC slice and selecting the maximum delay as the QoS criterion ensures the lowest possible value for the maximum latency of these delay-sensitive services.
Figure 4 shows the per-user throughput in each slice of the simulated O-RAN environment. The results presented in Fig. 4 were obtained over the generation iterations by distributing the users among the slices. As seen in Fig. 4, the per-user throughput follows the QoS demands in Fig. 3. Furthermore, the results indicate that Algorithm 1 was successful in keeping the users' throughput within the intended range.
V. Conclusion
In this paper, we have developed a novel O-RAN slicing framework based on an evolutionary DRL approach. To this end, we have formulated an optimization problem that efficiently allocates the shared available RBs to each O-RAN slice. To solve this problem, we have modeled it as an MDP and then, by employing the disaggregated modules of the O-RAN architecture, developed a new EDRL algorithm to find an optimal policy for allocating the available resources to the distinct O-RAN slices. Accordingly, by utilizing the population experiences in the DRL training, the trained policy is more general and robust across different traffic situations in wireless networks. The simulation results have shown improvements in the maximum reward compared to the DRL baseline method. Further, the results have highlighted the importance of utilizing diverse experiences and generalization in policy training in dynamic wireless networks, and demonstrated the efficiency of the proposed algorithm in the presence of wireless bandwidth constraints.