It is now clear that advanced softwarization and virtualization paradigms such as network slicing will be the cornerstone of 5G networks (d2020slicing) and the Internet of Things (bizanis2016sdn). Indeed, by sharing a common underlying physical infrastructure, network operators (NOs) can dynamically deploy multiple “slices” tailored to specific services (e.g., video streaming, augmented reality) and requirements (e.g., low latency, high throughput, low jitter) (5gkpi), avoiding the static—and thus inefficient—network deployments that have plagued traditional hardware-based cellular networks. To further decrease latency, increase throughput, and provide improved services to their subscribers, NOs have recently started integrating multi-access edge computing (MEC) technologies (taleb2017multi), which are expected to become essential to meet the sub-1ms latency requirements of 5G. MEC is deemed so essential that the European Telecommunications Standards Institute (ETSI) has identified it as “one of the key pillars for meeting the demanding Key Performance Indicators (KPIs) of 5G” and “[as playing] an essential role in the transformation of the telecommunications business” (mecjoint).
Despite the clear advantages of network slicing and MEC, the truth of the matter is that we cannot have one without the other. Indeed, slicing networking resources only, e.g., spectrum and resource blocks (RBs), cannot suffice to satisfy the stringent timing and performance requirements of 5G networks. Real-time wireless video streaming, for example, simultaneously requires (i) networking resources (e.g., RBs) to broadcast the video, (ii) computational resources to process and transcode the video, and (iii) storage resources to locally cache the video. The key issue that sets MEC slicing apart from traditional slicing problems is that MEC resources are usually coupled, meaning that slicing one resource type usually degrades the performance of another.
We further verify this critical issue in Fig. 1, where we show an experiment (testbed described in Section 8) in which we instantiate 1 slice for video streaming (S1) and 2 slices for video transcoding (S2 and S3). S1 starts at , while S2 and S3 start at . Fig. 1 clearly shows that as soon as S2 and S3 start, the performance of S1 plummets. This is because the computational resources allocated to S2 and S3 cause the video buffer (blue line) to drop from 30 seconds to 10 seconds, which in turn causes a highly jittered bitrate (red line). As soon as S2 and S3 end at , buffer size and video bitrate sharply increase and stabilize. This demonstrates that slices that require both computation and networking resources (S1, video streaming) are inevitably impacted by slices running on the same node that require computation only (S2 and S3, video transcoding). Therefore, accounting for the coupling among slices is a compelling necessity to guarantee adequate performance when designing edge slicing algorithms.
Extensive research efforts have already explored MEC algorithms for cellular networks and network slicing (direct; 8666109; castellano2019distributed; ndikumana2019joint). The key limitation of prior work, however, is the assumption that network slicing and MEC are distinct problems. As we demonstrated in Fig. 1, this is hardly the case in practical scenarios. However, the already rich literature on network slicing—discussed in detail in Section 2—has not yet considered this fundamental aspect. This makes the joint MEC/slicing paradigm a substantially unexplored problem. We also point out that due to the massive scale envisioned for 5G and IoT applications, centralized algorithms become prohibitive. Thus, a core issue is not only to account for resource coupling, but also to devise new slicing algorithms that provide highly efficient and scalable slicing strategies.
Novel contributions. The paper’s key innovation is the design, analysis, and experimental evaluation of a unified MEC slicing framework, called Sl-EDGE, that allows network operators to instantiate heterogeneous slice services (e.g., video streaming, caching, 5G network access) on edge devices. In a nutshell, we make the following novel contributions:
We mathematically model and discuss coupling relationships among networking, storage and computation resources at each edge node (Section 4). We formulate the Edge Slicing Problem
(ESP) as a Mixed Integer Linear Programming (MILP) problem, and we prove that it is NP-hard (Section 5);
We propose three novel slicing algorithms to address (ESP), each offering a different trade-off between optimality and computational complexity. Specifically, we present (i) a centralized optimal algorithm for small network instances (Section 5); (ii) an approximation algorithm that leverages virtualization concepts to reduce complexity with close-to-optimal performance (Section 6.1); and (iii) a low-complexity algorithm where slicing decisions are made at the edge nodes with minimal overhead (Section 6.2);
We extensively evaluate the performance of the three slicing algorithms through simulation, and compare Sl-EDGE with DIRECT (direct), to the best of our knowledge the state-of-the-art slicing framework for MEC 5G applications (Section 7). Our results show that, by taking into account the coupling among heterogeneous resources, Sl-EDGE (i) instantiates slices 6x more efficiently than the algorithm proposed in (direct) while satisfying resource availability constraints; and (ii) can be implemented in a distributed fashion while remaining within 0.25 of the optimal solution;
We prototype and demonstrate Sl-EDGE on a testbed composed of 24 software-defined radios. Experimental results demonstrate that Sl-EDGE instantiates heterogeneous slices providing LTE connectivity to smartphones, video streaming over WiFi, and ffmpeg video transcoding, while achieving an instantaneous throughput of over LTE links and a video streaming bitrate of with an overall CPU utilization of (Section 8).
2. Related Work
Motivated by the ever-increasing interest from both NOs and standardization entities (mecslicing; mecdeploy; mecjoint), network slicing and multi-access edge computing technologies have recently become “all the rage” in the wireless research community (mancuso2019slicing; d2019slice; mandelli2019satisfying; garcia2018posens; rost2017network; zhang2019adaptive; foukas2017orion). Lately, we have seen a deluge of algorithms to efficiently slice portions of the network and instantiate service-specific slices. These solutions leverage optimization (han2019utility; halabian2019distributed; sun2019hierarchical; agarwal2018joint; caballero2017network; jiang2017network; d2018low) and machine learning (8666109) tools.
Concurrently, MEC has demonstrated to be an effective methodology to significantly reduce latency. This paradigm has been successfully used to provide task offloading (misra2019detour; tran2018joint; liu2016delay; wang2019computation), augmented reality (AR) (al2017energy; erol2018caching; tang2018multi), low-latency video streaming (he2019cloud; yang2018multi), and caching (hoang2018dynamic; zhang2018cooperative), among others. We refer the interested reader to the following surveys for a more detailed overview of network slicing and MEC (afolabi2018network; kaloxylos2018survey; taleb2017multi; mao2017survey; mach2017mobile).
These works are extremely effective when the two technologies are considered as independent entities operating on the same infrastructure. However, as shown in Fig. 1, they are prone to resource over-provisioning when both technologies coexist on the same edge nodes and share the same resources. Ndikumana et al. (ndikumana2019joint) consider the allocation of heterogeneous resources for task offloading problems in MEC ecosystems, while in (agarwal2018joint) Agarwal et al. consider the problem of jointly allocating CPUs and virtual network functions for network slicing applications. Similarly, in (8666109) Van Huynh et al.
account for slice networking, computation, and storage resources by designing a deep dueling neural network that determines which slices to admit, maximizes the network provider’s reward, and avoids resource over-provisioning. Although (8666109) accounts for resource heterogeneity, the authors focus only on the slice admission control problem, and do not consider how to partition the resources of each slice among the nodes of the network. A heterogeneous resource orchestration framework for edge computing ecosystems called Senate is presented by Castellano et al. in (castellano2019distributed). Senate leverages Distributed Orchestration Resource Assignment (DORA) and election algorithms to allow the instantiation of multiple virtual network functions on the same infrastructure. However, this work focuses only on the wired portion of the edge network.
The closest work to ours is (direct), where Liu and Han proposed a distributed slicing framework for MEC-enabled wireless networks called DIRECT. Despite being extremely successful in slicing networks with MEC resources residing in dedicated servers close to the base stations, DIRECT does not account for the case where both networking and MEC resources coexist on the same edge node.
This paper separates itself from prior work as it makes a substantial step forward toward the coexistence of network slicing and MEC technologies. Indeed, we consider the challenging case of edge nodes jointly providing wireless network access and MEC functionalities to mobile users. Furthermore, we model the intrinsic coupling among heterogeneous resources residing on edge nodes, and design Sl-EDGE, a slicing framework that leverages such a coupling to instantiate heterogeneous slices on the same physical infrastructure.
3. Sl-EDGE at a glance
Sl-EDGE is a slicing framework for MEC-enabled 5G systems. Its key advantage is that it provides a fast, flexible and efficient deployment of joint networking and MEC slices. The three-tiered architecture of Sl-EDGE is illustrated in Fig. 2.
The physical infrastructure consists of a set of MEC-enabled networking edge nodes (e.g., base stations, access points, IoT gateways)—referred to as MEC hosts (mecjoint)—controlled by one or more Infrastructure Providers (IPs). MEC hosts are located at the network edge and simultaneously provide networking, storage and computational services (e.g., Internet access, video content delivery, caching).
Sl-EDGE Users are both mobile and virtual NOs and Service Providers (SPs)—referred to as the tenants—willing to rent portions of the infrastructure to provide services to their subscribers. Tenants access Sl-EDGE to visualize relevant information such as the position of MEC hosts, the areas they cover, and the list of networking and MEC services that can be instantiated on each host (e.g., 5G/WiFi connectivity, caching, computation). Whenever tenants need to provide these services, they submit slice requests to obtain networking, storage, or computation resources. The received slice requests are collected and processed by Sl-EDGE, which (i) determines the set of requests to be accommodated by using centralized (Section 5) or distributed algorithms (Section 6); (ii) instantiates slices by allocating the available resources to each admitted slice; and (iii) notifies the admitted tenants of the list of resources allocated to their slices.
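The request-handling flow above can be sketched in a few lines of Python. All names (`SliceRequest`, `admit_requests`) and the greedy value-ordered admission policy are illustrative placeholders only, not Sl-EDGE's actual algorithms, which are the subject of Sections 5 and 6:

```python
from dataclasses import dataclass

# Hypothetical sketch of the Sl-EDGE request-handling flow.
# Field names and the greedy policy below are illustrative assumptions.

@dataclass
class SliceRequest:
    tenant: str      # tenant submitting the request
    rtype: str       # resource type: "net", "sto", or "cmp"
    value: float     # (monetary) value assigned by the IP
    demand: dict     # cluster id -> requested amount in that cluster

def admit_requests(requests, capacity):
    """Greedy placeholder for the admission step: accept requests in
    decreasing value order while every cluster can still cover the demand."""
    admitted = []
    for r in sorted(requests, key=lambda r: r.value, reverse=True):
        if all(capacity.get(c, 0) >= d for c, d in r.demand.items()):
            for c, d in r.demand.items():
                capacity[c] -= d
            admitted.append(r)
    return admitted
```

A request is rejected as soon as any cluster in its demand array lacks capacity, mirroring constraint (iii) of the workflow.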
4. System model
Let be the set of deployed MEC-enabled networking devices, or edge nodes. Edge nodes provide both wireless connectivity and MEC services to a limited portion of the network. Therefore, they can be grouped into clusters located in different geographical areas (ndikumana2019joint; bouet2018mobile; gudipati2013softran). Let be the set of these independent clusters, and let be the set of edge nodes in cluster . Each edge node is equipped with a set of networking, storage, and computational capabilities, usually measured in terms of number of RBs (d2019slice), megabytes, and billions of instructions per second (GIPS) (ndikumana2019joint), respectively. Let represent the resource type, i.e., networking (), storage (), and computing (). Moreover, let be the set of resources available at each edge node . An example of the physical infrastructure and its clustered structure is shown in Fig. 3.
Let be the set of slice requests submitted to Sl-EDGE, with being the sets of networking, storage, and computing slice requests, respectively. Each request of type is associated with a value used by the IP to assess the importance—or monetary value—of . Also, we define the -dimensional request demand array , where represents the amount of resources of type requested by in cluster . Without loss of generality, we assume that for all .
4.1. Resource coupling and collateral functions
To successfully slice networking and MEC resources, it is paramount to understand the underlying dynamics between resources of different natures. To this purpose, let us consider two simple yet illustrative examples.
4.1.1. Content Caching
A tenant instantiates a storage slice (Fig. 5) to provide caching services to its subscribers, i.e., , and specifies how many megabytes () should be allocated in each cluster . In this case, the content to be cached must be (i) first transmitted and then (ii) processed by the storing edge nodes. Therefore, storage activities related to caching procedures not only utilize storage resources, but also require networking and computational resources.
4.1.2. 5G networking
In this example (Fig. 5), a tenant wants to provide cellular services to mobile subscribers. Hence, it submits a networking slice request of type and specifies the clusters to be included in the slice and the amount of spectrum resources (
) needed in each cluster. Edge nodes providing connectivity must (i) perform channel estimation and baseband signal processing procedures, and (ii) locally cache or buffer the data to transmit. Therefore, the allocation of resources of type entails resources of type and .
These two examples suggest that heterogeneous resources are tightly intertwined, thus motivating the need for new slicing algorithms that account for these intrinsic relationships. To incorporate coupling within the Sl-EDGE framework, we introduce the concept of collateral functions.
Let us consider the case where, to instantiate a slice of type , resources must be allocated on edge node . For any resource type , we define the collateral function . This function (i) reflects coupling among heterogeneous resources, and (ii) determines how many resources of type should be allocated on edge node when allocating resources of type . Of course, .
In this paper, we model resource coupling as an increasing linear function of . This way, the number of resources of type needed to instantiate resources of type on a given edge node can be evaluated as , with measured in units of type per unit of type , e.g., GIPS per megabyte. We assume that differs among edge nodes, but is uniform across services of type ; when different services of type have different values of (e.g., video encoding and file compression might require a different number of GIPS to process the same data), we can extend by adding service-specific classes with different values. The more general case where is a non-linear function can be easily reduced to the linear case by using well-established and accurate piece-wise linearization techniques (lin2013review). For any and , let be the collateral matrix for edge node . Such a matrix can be written as
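Under the linear model, applying the collateral matrix to an allocation vector is a matrix-vector product: entry (q, p) of the matrix gives the units of type q consumed per unit of type p allocated. A minimal sketch with illustrative coefficients (the values below are assumptions, not taken from the paper):

```python
import numpy as np

# Illustrative collateral matrix gamma for one edge node:
# row q, column p = units of type q consumed per unit of type p allocated.
# Types, in order: networking (RBs), storage (MB), computing (GIPS).
gamma = np.array([
    [1.0, 0.0, 0.0],   # net needed per allocated unit of (net, sto, cmp)
    [0.5, 1.0, 0.0],   # sto needed per allocated unit of (net, sto, cmp)
    [0.3, 0.2, 1.0],   # cmp needed per allocated unit of (net, sto, cmp)
])

def total_footprint(alloc):
    """Total per-type consumption induced by an allocation vector alloc,
    including the collateral demand on the other resource types."""
    return gamma @ np.asarray(alloc, dtype=float)

# Allocating 10 RBs alone also consumes storage and compute collaterally:
print(total_footprint([10, 0, 0]))   # -> [10.  5.  3.]
```

The diagonal is 1 (a resource trivially entails itself), matching the text's observation that allocating one type always entails the others through the off-diagonal terms.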
5. Edge slicing problem and its optimal solution
The key targets of Sl-EDGE are to (i) maximize profits generated by infrastructure slice rentals, and (ii) allow location-aware and dynamic instantiation of slices in multiple clusters, while (iii) avoiding over-provisioning of resources to avoid congestion and poor performance. We formalize the above three targets with the edge slicing optimization problem (ESP) introduced below.
where and respectively are the slice admission and resource slicing policies. Quantity
is a binary variable such that if request is admitted, and otherwise. Similarly, represents the amount of resources of type that are assigned to on edge node .
One can easily verify that Problem (ESP) meets the previously mentioned requirements, since it (i) aims at maximizing the total value of the admitted slice requests; (ii) guarantees that each admitted slice obtains the required amount of resources in each cluster (Constraint (2)), and (iii) prevents resource over-provisioning on each edge node (Constraint (3)).
Given the presence of both continuous and 0-1 variables, (ESP) belongs to the class of MILP problems, which are well-known to be hard to solve. More precisely, in Theorem 1 we prove that (ESP) is NP-hard even in the case of requests having the same value and edge nodes belonging to a single cluster.
Theorem 1 (NP-hardness).
Problem (ESP) is NP-hard.
To prove this theorem, we reduce the Splittable Multiple Knapsack Problem (SMKP), which is NP-hard (10.1007/978-3-319-18173-8_27), to an instance of (ESP). Let us assume that all edge nodes belong to the same cluster and all submitted slice requests are of the same type . Furthermore, we assume that all requests have value for any . Since all requests are of the same type , for any edge node . Let us now consider the SMKP, whose statement is as follows: given a set of knapsacks (the edge nodes) with limited capacities () and a set of items (the requests) with certain values () and sizes (), and assuming that items can be split among the knapsacks while satisfying Constraint (2), is there an allocation policy that maximizes the total number of items added to the knapsacks without overfilling them? Any instance of the SMKP thus maps to an instance of Problem (ESP) whose solution solves it. Since this reduction can be built in polynomial time, it follows that Problem (ESP) is NP-hard. ∎
Problem (ESP) can be solved by means of efficient and well-established exact Branch-and-Cut (B&C) algorithms. Even though the worst-case complexity of such algorithms is exponential, B&C leverages structural properties of the problem to confine the search space, thus reducing the time needed to compute an optimal solution. The B&C procedure—not reported here for the sake of space—can be found in (Elf2001). We now focus on how to overcome some of the limitations of B&C. Specifically, B&C suffers from high computational complexity, and requires a centralized entity with perfect knowledge, both of which are unacceptable in large-scale and dynamic networks.
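To see why exact methods need the pruning that B&C provides, consider that even a toy single-cluster, single-resource instance of (ESP) has 2^|R| candidate admission vectors. The brute force below (an illustrative sketch, not the B&C procedure of (Elf2001)) enumerates them for a splittable-demand instance, where an admission set is feasible iff its total demand fits in the cluster's total capacity:

```python
from itertools import product

def brute_force_esp(values, demands, node_caps):
    """Exhaustive search over admission vectors for a tiny single-cluster,
    single-resource (ESP) instance with splittable demands. Feasibility:
    total admitted demand must not exceed the total node capacity."""
    total_cap = sum(node_caps)
    best_value, best_admit = 0.0, tuple(0 for _ in values)
    for admit in product([0, 1], repeat=len(values)):
        demand = sum(d for a, d in zip(admit, demands) if a)
        if demand <= total_cap:
            value = sum(v for a, v in zip(admit, values) if a)
            if value > best_value:
                best_value, best_admit = value, admit
    return best_value, best_admit

# Two nodes of capacity 5; three requests: the two smaller requests
# together beat the single large one.
print(brute_force_esp([10, 6, 5], [8, 4, 4], [5, 5]))  # -> (11, (0, 1, 1))
```

The 2^|R| enumeration is exactly the search space that B&C confines via bounding and cutting planes.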
6. Approximation Algorithms
We now design two approximation algorithms for (ESP) whose primary objectives are to (i) reduce the computational complexity of the problem, and (ii) minimize the overhead traffic traversing the network. In Section 6.1 and Section 6.2, we present the algorithmic implementation of the two algorithms, and further discuss their optimality, complexity, and overhead.
6.1. Decentralization through virtualization
One of the main sources of complexity in Problem (ESP) is the large number of optimization variables and . However, we notice that , where is the set cardinality operator. In contrast, the number of variables is , with being the total number of edge nodes in the infrastructure. While is generally limited to a few tens of requests, the number of edge nodes deployed in the network might be very large. However, a large portion of these edge nodes are equipped with hardware and software components that are similar, if not identical. Thus, we can leverage similarities among edge nodes to reduce the complexity of Problem (ESP) while achieving close-to-optimal solutions and reduced control overhead.
Edge nodes with similar collateral functions will behave similarly. However, being similar in terms of only does not suffice to determine whether or not two edge nodes are similar. In fact, nodes with similar might have a different amount of available resources. For this reason, we leverage the concept of similarity functions (huang2008similarity).
Let be a function to score the similarity between edge nodes and . Two edge nodes are said to be -similar with respect to if , for any . If , we say that and are identical.
Through -similarity, we can first determine which edge nodes inside the same cluster are similar, and then abstract their physical properties to generate a virtual edge node. For the sake of generality, in this paper we do not make any assumption on (interested readers are referred to (xu2005survey) for an exhaustive survey on the topic). However, the impact of and on the overall system performance will be first discussed in Section 6.1.1, and then evaluated in Section 7.
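Although the framework is agnostic to the choice of similarity function, Section 7 uses cosine similarity. The sketch below shows one plausible instantiation over a node's feature vector (e.g., available resources concatenated with the flattened collateral matrix); the "within epsilon of 1" threshold form is an assumption, since the exact condition is not reproduced here:

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def eps_similar(u, v, eps):
    """Assumed threshold form: two nodes are eps-similar when their
    similarity score is within eps of 1 (eps = 0 means identical
    up to scaling)."""
    return 1.0 - cosine_sim(u, v) <= eps
```

With this choice, proportional feature vectors score 1.0 and are grouped for any epsilon, while orthogonal ones score 0 and are never grouped.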
We now present V-ESP, an approximation algorithm that leverages virtualization concepts to compute a solution to Problem (ESP). The main steps of V-ESP are as follows:
Step 1: (Virtual edge node generation): for each cluster , we build the similarity matrix . For any real , element indicates whether or not and are -similar. That is, if , otherwise. We partition the set into independent subsets that contain similar edge nodes only. Partitions are generated such that and for any .
Each non-singleton partition is converted into a virtual edge node. Specifically, for each non-singleton partition , we define a virtual edge node whose available resources are equal to the sum of the available resources of all edge nodes in the partition, i.e., . The collateral function of the virtual edge node is constructed as , where is a function, discussed in Section 6.1.1, that generates a virtualized collateral function for virtual edge node . We show an example of the virtualization procedure in Fig. 6.
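The partitioning in Step 1 can be sketched as a greedy pass over the boolean similarity matrix; the rule of joining the first group whose every member the node is similar to is an illustrative choice, since the paper does not fix a specific partitioning procedure:

```python
def partition_similar(S):
    """Greedy sketch of Step 1. S[i][j] == 1 iff nodes i and j are
    eps-similar; each node joins the first existing group it is similar
    to in full, otherwise it starts a new (possibly singleton) group."""
    groups = []
    for i in range(len(S)):
        for g in groups:
            if all(S[i][j] for j in g):  # similar to every member
                g.append(i)
                break
        else:
            groups.append([i])  # no compatible group found
    return groups

# Nodes 0 and 1 are mutually similar; node 2 is similar to neither.
print(partition_similar([[1, 1, 0],
                         [1, 1, 0],
                         [0, 0, 1]]))  # -> [[0, 1], [2]]
```

Requiring similarity to every member of a group (rather than any member) keeps each partition mutually eps-similar, which is what the virtual-node abstraction relies on.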
Step 2: (Virtual Edge Nodes Advertisement): each cluster advertises to Sl-EDGE the set of virtualized edge nodes, as well as their virtual collateral functions () and available resources ().
Step 3: (Solve virtualized ESP): Sl-EDGE solves Problem (ESP) with virtualized edge nodes through B&C. Slice admission and resource slicing policies are computed and each cluster receives the -tuple , with being the resource allocation policy over the virtualized edge nodes of cluster .
Step 4: (Virtualized edge node resource allocation): upon receiving , cluster solves Linear Programming (LP) problems in parallel, one for each virtual edge node . These LPs are formulated as follows:
which can be optimally solved by computing any feasible resource allocation policy that satisfies all constraints.
Step 5: (Slicing Policies Construction): let be the optimal solution of the -th instance of (6). The resource slicing policy for cluster is constructed by stacking all individual solutions computed by individual clusters, i.e., . The final slice admission and resource slicing policies are with .
Through V-ESP, each cluster exposes virtual edge nodes only, rather than (Steps 1-2). Thus, virtualization reduces the number of edge nodes and, consequently, the number of variables in (ESP). Moreover, since virtualization leaves the structure of the slicing problem unchanged, we can efficiently solve Step 3 through the same B&C techniques used for (ESP). In addition, while Steps 3-5 are executed whenever a new slicing policy is required (e.g., tenants submit new slice requests or the slice rental period expires), Steps 1-2 are executed only when the structure of the physical infrastructure changes (e.g., edge nodes are turned on/off or undergo hardware modifications). This further reduces the overhead. In short, V-ESP splits the computational burden between the NO (Step 3) and the edge nodes (Steps 1-2 and 4), thus combining the high efficiency typical of centralized approaches with the reduced complexity of decentralized algorithms.
6.1.1. Design Aspects of Virtualization
Step 1 relies on -similarity to aggregate edge nodes and reduce the search space. Intuitively, the higher the value of , the smaller the set of virtual edge nodes generated in Step 1, and the faster Sl-EDGE computes solutions in Step 3. However, large values might group together edge nodes with different available resources and collateral functions. In this case, (i) Step 1 might produce virtual edge nodes that poorly reflect the features of the physical edge nodes, and (ii) solutions computed at Step 3 might not be feasible when applied to Step 4. Thus, there is a trade-off between accuracy and computational speed, which will be the focus of Section 7.4.
Another aspect that influences the efficiency and feasibility of the solutions generated by the V-ESP algorithm is the function , which transforms the collateral functions of similar edge nodes into an aggregated collateral function. Recall that , which can be represented as a collateral matrix (1), must mimic the actual behavior of the physical edge nodes belonging to the same partition . To avoid overestimating the capabilities of virtual edge nodes, and thus producing unfeasible solutions, the generic element of the virtual collateral matrix (1) for virtual edge node is set to . Although this model underestimates the capabilities of physical edge nodes and may admit fewer requests than the optimal algorithm, it always produces feasible solutions in Steps 3 and 4.
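A minimal sketch of the virtual-node construction follows. Capacities are summed over the partition, per Step 1; for the virtual collateral matrix, taking the element-wise maximum of the members' cost coefficients is our assumed reading of the elided formula — it is the conservative choice consistent with the text's statement that the virtual matrix deliberately underestimates node capabilities and never yields infeasible allocations:

```python
import numpy as np

def virtualize(caps, gammas):
    """Sketch of virtual edge node construction for one partition.
    caps: list of per-node capacity vectors; gammas: list of per-node
    collateral matrices. Summing capacities follows Step 1; the
    element-wise max of collateral costs is an assumption (conservative:
    it never understates how much collateral a unit allocation costs)."""
    v_cap = np.sum(np.asarray(caps, dtype=float), axis=0)
    v_gamma = np.max(np.asarray(gammas, dtype=float), axis=0)
    return v_cap, v_gamma
```

Because every entry of the virtual matrix upper-bounds the corresponding entry of each member node, any allocation feasible on the virtual node can be mapped back to a feasible allocation on the physical nodes in Step 4.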
6.2. Distributed Edge Slicing
In this section we design a distributed edge slicing algorithm for Problem (ESP), such that clusters can locally compute slicing strategies. We point out that making (ESP) distributed is significantly challenging. In fact, both the utility function and the constraints are coupled through the optimization variables and . This complicates the decomposition of the problem into multiple independent sub-problems.
In order to decouple the problem into multiple independent sub-problems, we introduce the auxiliary variables such that for any request and cluster . Thus, Problem (ESP) can be rewritten as
where , while Constraint (9) guarantees that different clusters admit the same requests.
Problem (D-ESP) has separable variables with respect to the clusters. That is, Problem (D-ESP) can be split into sub-problems, each involving only variables controlled by a single cluster. To effectively decompose Problem (D-ESP), we leverage the Alternating Direction Method of Multipliers (ADMM) (boyd2011distributed). The ADMM is a well-established optimization tool that enforces constraints through quadratic penalty terms and generates multiple sub-problems that can be solved iteratively in a distributed fashion.
The augmented Lagrangian for Problem (D-ESP) can be written as follows:
where are the so-called dual variables, and is a step-size parameter used to regulate the convergence speed of the distributed algorithm (boyd2011distributed).
Let ; we define , which identifies the slice admission policies taken by all clusters except cluster . Similarly, we define . Problem (D-ESP) can be solved through the following ADMM-based iterative algorithm
where each cluster sequentially updates and , while the dual variables are updated as soon as all clusters have updated their strategy according to (12). To update (12) each cluster solves the following quadratic problem
where is the adjusted value of request defined as
and is used by cluster to obtain the number of clusters that have accepted request .
The advantages of Problem (DC-ESP) are that (i) clusters do not need to advertise the composition of the physical infrastructure to the IP or to other clusters, and (ii) it can be implemented in a distributed fashion. Indeed, at any iteration , the only parameters needed by cluster to solve (12) are the dual variables and the number of clusters that admitted the request at the previous iteration.
It has been shown that ADMM usually enjoys linear convergence (shi2014linear), but improper choices of might generate oscillations. To overcome this issue and achieve convergence, we implement the approach proposed in (boyd2011distributed, Eq. (3.13)), where is updated at each iteration of the ADMM. The optimality and convergence properties of DC-ESP will be extensively evaluated in Figs. 10 and 11.
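The residual-balancing rule of Eq. (3.13) in (boyd2011distributed) keeps the primal and dual residual norms within a factor of each other by rescaling the penalty parameter; the defaults below (mu = 10, tau = 2) are the values suggested there:

```python
def update_rho(rho, r_norm, s_norm, mu=10.0, tau=2.0):
    """Residual-balancing penalty update (Boyd et al., Eq. (3.13)).
    r_norm: primal residual norm at the current ADMM iteration;
    s_norm: dual residual norm. Growing rho pushes harder toward
    primal feasibility; shrinking it lets the dual residual catch up."""
    if r_norm > mu * s_norm:
        return rho * tau       # primal residual dominates: increase rho
    if s_norm > mu * r_norm:
        return rho / tau       # dual residual dominates: decrease rho
    return rho                 # residuals balanced: keep rho unchanged
```

In the DC-ESP loop, this update runs once per iteration after the dual-variable step, damping the oscillations that a fixed, poorly chosen penalty can cause.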
7. Numerical Results
We now assess the performance of the three slicing algorithms described in Section 5 and Section 6 by (i) simulating a MEC-enabled 5G network, and by (ii) comparing the algorithms with the recently-published DIRECT framework (direct).
We consider a scenario where edge nodes provide mobile subscribers with 5G NR connectivity as well as storage and computation MEC services, such as caching and video decoding. We assume that (i) edge nodes share the same NR numerology—more precisely, networking resources are arranged over an OFDM-based resource grid with 50 RBs, and (ii) edge nodes are equipped with hardware components with up to 1 terabyte of storage capacity and a maximum of 200 GIPS. We fix the number of RBs for each edge node, while we randomly generate the amount of computation and storage resources at each simulation run. To simulate a realistic scenario with video transmission, storage, and transcoding applications, the collateral matrices in (1) are generated by randomly perturbing the following matrix at each run. To give an example, processing a data rate of Mbit/s (equivalent to LTE 16-QAM with 50 RBs) requires GIPS (e.g., turbo-decoding) (holma2009lte), which results in GIPS/RB. Similarly, a 1-second long compressed FullHD 30fps video occupies approximately kB and requires GIPS to decode, thus MB/GIPS. We assume that the physical infrastructure consists of MEC-enabled edge clusters, each containing the same number of edge nodes but equipped with different amounts of available resources and collateral functions. We model as the cosine similarity function (huang2008similarity) and, unless otherwise stated, the aggregation threshold is set to . Slice requests and the demanded resources are randomly generated at each run.
In the following, we refer to the optimal B&C algorithm in Section 5 as O-ESP. Similarly, the two approximation algorithms proposed in Section 6.1 and Section 6.2 are referred to as V-ESP and DC-ESP respectively.
7.1. The impact of coupling on MEC-enabled 5G systems
DIRECT (direct) provides an efficient distributed slicing algorithm for networking and computing resources in MEC-enabled 5G networks. Although this approach does not account for the case where edge nodes provide both networking and MEC functionalities, it represents the closest work to ours.
Moreover, DIRECT does not explicitly slice storage resources. Thus, to perform a fair comparison, we consider the case where tenants do not request any storage resource. Let be the total number of edge nodes in the network. We let tenants randomly generate slice requests to obtain networking and computational resources. Results are shown in Fig. 7, where any positive value indicates resource over-provisioning.
Fig. 7 shows that Sl-EDGE never produces over-provisioned slices. Conversely, since DIRECT does not account for the coupling among heterogeneous resources on the same edge node, it always incurs over-provisioning, allocating up to 6x more resources than those available. These results show that existing solutions, which perform well in 5G systems where networking and MEC functionalities are decoupled at different points of the network, cannot be readily applied to scenarios where resources are simultaneously handled by edge nodes—which strongly motivates the need for approaches such as Sl-EDGE.
7.2. Maximizing the number of admitted slices
Let us now focus on the scenario where the IP owning the infrastructure aims at maximizing the number of slice requests admitted by Sl-EDGE—to maximize resource utilization, for instance. Although each slice request comes with an associated (monetary) value , the above can be achieved by resetting the value of each request to in Problem (ESP).
Fig. 8 reports the performance of Sl-EDGE as a function of the total number of generated slice requests for different values of the number of edge nodes . We notice that the number of admitted slices increases as the number of slice requests submitted to Sl-EDGE increases (left-side plot). However, Fig. 8 (center) clearly shows that the percentage of admitted slices rapidly decreases as increases (only requests are admitted by O-ESP when and ). This is due to the scarcity of resources at the edge nodes, which prevents the admission of a large number of slices. Thus, IPs should either provide edge nodes with more resources, or increase the number of deployed edge nodes. Fig. 8 (left), indeed, shows that denser deployments of edge nodes (i.e., ) allow more slices to coexist on the same infrastructure.
Finally, the right-hand plot of Fig. 8 shows the computational complexity of the three algorithms, measured as the number of function evaluations needed to output a solution. As expected, the complexity of all algorithms increases as both the number of requests and the number of edge nodes increase. Moreover, O-ESP, being a fully centralized algorithm, has the highest computational complexity. V-ESP and DC-ESP, reduced-complexity versions of O-ESP, instead exhibit lower complexity. However, V-ESP and DC-ESP admit fewer requests than O-ESP due to their sub-optimality.
7.3. Maximizing the profit of the IP
Let us now consider the case of slice admission and instantiation for profit maximization (Fig. 9). In this case, Sl-EDGE selects the slice requests to be admitted so as to maximize the total (monetary) value of the admitted slices. Similarly to Fig. 8, Fig. 9 (center) shows that increasing the number of requests reduces the percentage of admitted slices.
When compared to the problem described in Section 7.2, this profit maximization problem differs in two respects: (i) even when the number of edge nodes is small, the profit rapidly increases with the number of submitted requests; and (ii) the percentage of admitted requests steeply decreases as the number of requests increases. Indeed, the higher the number of requests, the higher the probability that slices with high value are submitted by tenants. In this case, Sl-EDGE prioritizes the more valuable requests at the expense of the others.
7.4. Impact of the aggregation parameter on the V-ESP algorithm
Finally, we investigate how different choices of the aggregation parameter affect the performance of the V-ESP algorithm. Recall that this parameter regulates the number of edge nodes that are aggregated into virtual edge nodes (Section 6.1). The higher its value, the higher the percentage of edge nodes that are aggregated, and the smaller the number of virtual edge nodes generated by Sl-EDGE.
Fig. 10 shows the computational complexity of V-ESP as a function of the aggregation parameter for different numbers of deployed edge nodes. As expected, the parameter impacts neither O-ESP nor DC-ESP; however, its impact on V-ESP is substantial. Indeed, larger values reduce the number of physical edge nodes in the network, which are substituted by virtual edge nodes (one per aggregated group). This reduction eventually results in decreased computational complexity. Surprisingly, Fig. 10 also shows that, for large parameter values, V-ESP enjoys an even lower computational complexity than the distributed DC-ESP. Recall that V-ESP centrally determines an efficient slicing strategy over virtualized edge nodes, and these strategies are subsequently enforced by each cluster. This means that V-ESP can compute an efficient slicing policy as rapidly as DC-ESP while avoiding any coordination among different clusters. Overall, Fig. 10 shows that V-ESP (green dashed line) computes a solution 7.5x faster than O-ESP (purple dashed line) when the aggregation parameter is high.
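The aggregation step can be sketched as follows. The similarity rule (per-dimension capacity difference within a threshold) and the capacity values are hypothetical stand-ins for the criterion of Section 6.1; the sketch only shows why a larger aggregation parameter yields fewer virtual nodes, and hence a smaller problem for V-ESP to solve:

```python
def aggregate(nodes, threshold):
    """Greedily group edge nodes with similar capacity vectors.

    nodes: list of (rb, cpu, storage) capacities. threshold plays the
    role of the aggregation parameter: higher -> fewer virtual nodes.
    """
    groups = []
    for node in nodes:
        for group in groups:
            rep = group[0]  # compare against the group's first member
            if all(abs(a - b) <= threshold for a, b in zip(node, rep)):
                group.append(node)
                break
        else:
            groups.append([node])
    # Each virtual node exposes the summed capacity of its members.
    return [tuple(sum(dim) for dim in zip(*g)) for g in groups]

nodes = [(50, 4, 10), (52, 4, 11), (20, 2, 5), (21, 2, 5)]
print(len(aggregate(nodes, threshold=0)))  # no aggregation: 4 virtual nodes
print(len(aggregate(nodes, threshold=3)))  # similar nodes merged: 2
```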
However, Fig. 11 shows that reduced computational complexity comes at the expense of efficiency. Indeed, the optimality ratio (i.e., the distance of the output of an approximation algorithm from the optimal solution of the problem) decreases as the aggregation parameter increases, with a loss of up to 25% with respect to the optimum. Although the optimality ratio remains high for small parameter values, a trade-off between computational complexity and efficiency should clearly be considered.
8. Sl-EDGE prototype
We prototyped Sl-EDGE on Arena, a large-scale 64-antenna SDR testbed (bertizzolo2019arena). A server rack composed of 12 Dell PowerEdge R340 servers controls the testbed SDRs and performs baseband signal processing as well as generic computation and storage operations. The servers connect to a radio rack of 24 Ettus Research SDRs (16 USRP N210 and 8 USRP X310) through optical fiber cables, enabling low-latency and reactive communication with the radios. These are connected to 64 omnidirectional antennas through coaxial cables. The antennas hang from the ceiling of an office space and operate in the 2.4-2.5 and 4.9-5.9 GHz frequency bands. The USRPs in the radio rack achieve symbol-level synchronization through four OctoClock clock distributors.
We leveraged 14 USRPs of the above-mentioned testbed (10 USRP N210 and 4 USRP X310) to prototype Sl-EDGE. In our testbed, an edge node consists of one USRP and one server: the former provides networking capabilities, while the latter provides storage and computing resources. The testbed configuration adopted to prototype and evaluate Sl-EDGE is shown in Fig. 12.
Since there are no open-source experimental 5G implementations yet, we used the LTE-compliant srsLTE software (gomez2016srslte) to implement LTE networking slices. Since the LTE and NR resource block grids are similar, we are confident that our findings remain valid for 5G scenarios. Specifically, srsLTE offers a standard-compliant implementation of the LTE protocol stack, including Evolved Packet Core and LTE base station (eNB) applications. We leveraged srsLTE to instantiate 4 eNBs on the USRPs X310, while we employed 9 COTS cellular phones (Samsung Galaxy S5) as users. Each user downloads a data file from one of the rack servers, which are used as caching nodes with storage capabilities.
We consider three tenants demanding an equal number of LTE network slices, submitted at three different times. Each tenant controls a single slice and serves a set of UEs located in different clusters, as shown in Table 1 (right).
To test the ability of Sl-EDGE to handle slices involving both networking and computation capabilities, we also implemented a video streaming slice in which edge nodes stream a video file stored on an Apache instance through the dash.js reference player (dashjs) running on the Chrome web browser. DASH allows real-time adaptation of the video bitrate according to the client requests and the available resources (dash). Each video stream was sent to the receiving server of the rack through USRPs N210 acting as SDR-based WiFi transceivers (WiFi Access Points (APs) and Clients in Fig. 12), using the GNU Radio-based IEEE 802.11a/g/p implementation (bloessl2018performance). Meanwhile, the edge node transcoded video chunks using ffmpeg (ffmpeg). We point out that each SDR can play multiple roles in the cluster (e.g., USRPs X310 can act as WiFi transceivers or LTE eNBs), and the actual role is determined at run-time based on the slice types allocated to each tenant.
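The rate adaptation performed by DASH players can be sketched as follows. The bitrate ladder and the 0.8 safety margin are illustrative assumptions, not dash.js defaults; the sketch only captures the core idea of picking the highest sustainable representation given the measured throughput:

```python
def pick_bitrate(throughput_kbps, ladder=(500, 1200, 2500, 5000)):
    """DASH-style rate adaptation sketch.

    Pick the highest rung of the bitrate ladder (kbps) that fits
    within a safety margin of the measured throughput; fall back to
    the lowest rung when nothing fits.
    """
    margin = 0.8  # keep headroom for throughput fluctuations
    feasible = [b for b in ladder if b <= margin * throughput_kbps]
    return max(feasible) if feasible else min(ladder)

print(pick_bitrate(4000))  # 0.8 * 4000 = 3200 kbps of headroom -> 2500
print(pick_bitrate(400))   # nothing fits -> lowest rung, 500
```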
A demonstration of the operations of Sl-EDGE in the scenario of Fig. 12 is shown in Figs. 13, 14 and 15. Overall, the Sl-EDGE prototype allocates and supports 11 heterogeneous slices simultaneously: 3 for cellular connectivity, 3 for video streaming over WiFi, and 5 for computation with ffmpeg transcoding. Bitrate results for the LTE slices and individual UEs are reported in Fig. 13, which shows the overall instantaneous throughput provided by Sl-EDGE.
It is worth pointing out that the throughput of each LTE slice, and thus of each UE, varies according to the amount of resources allocated to the tenants. An example is shown in Table 1 (left), where we report the output of the Sl-EDGE O-ESP algorithm, i.e., the amount of RBs allocated to LTE Slice 1 in each cluster. Such an allocation impacts the throughput of the UEs attached to that slice. As an example, Fig. 13 shows that LTE Slice 1 is allocated 42 RBs, 24 RBs, and 0 RBs in three of the clusters, and achieves a correspondingly decreasing throughput.
The video streaming application of Fig. 14 involves 5 tenants that share 3 non-overlapping channels. To avoid co-channel interference, Sl-EDGE only admits 3 flows at any given time. As Fig. 14 shows, during the first 70 seconds of the experiment only the slices for tenants 1, 2 and 3 are admitted, while tenant 4 needs to wait for tenant 3 to stop its video streaming before being granted a slice. Similarly, the slice for tenant 5 starts only when the flow of tenant 1 stops. Meanwhile, the tenants submit requests for computation slices to transcode the videos with ffmpeg; these compete with the srsLTE and GNU Radio slices necessary for LTE connectivity and video streaming in the 3 LTE eNBs and 5 WiFi APs of the 4 clusters. Moreover, in each server one of the 6 cores is reserved exclusively for the operating system and is never allocated to tenants. Fig. 15 shows that Sl-EDGE limits the total CPU utilization to 83%, which demonstrates that Sl-EDGE avoids over-provisioning of the available resources.
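The core-reservation behavior described above can be sketched with a toy admission routine. The first-fit policy and the CPU demands below are hypothetical; only the 6-core servers, the one reserved core, and the 83% utilization cap come from the experiment:

```python
def admit(cpu_requests, cores=6, reserved=1, cap=0.83):
    """Toy admission of computation slices on one server.

    One core is reserved for the OS, and total utilization is capped;
    the effective budget is the tighter of the two constraints.
    Requests are considered first-come first-served (illustrative policy).
    """
    budget = min(cores - reserved, cap * cores)
    admitted, used = [], 0.0
    for i, req in enumerate(cpu_requests):
        if used + req <= budget:  # admit only if the server stays within budget
            admitted.append(i)
            used += req
    return admitted, used

# Four tenants request CPU cores; the last one would exceed the budget.
print(admit([2.0, 1.5, 1.0, 1.0]))
```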
9. Conclusions
In this paper, we have presented Sl-EDGE, a unified MEC slicing framework that instantiates heterogeneous slice services (e.g., video streaming, caching, 5G network access) on edge devices. We first showed that the problem of optimally instantiating joint network-MEC slices is NP-hard. We then proposed distributed algorithms that leverage similarities among edge nodes and resource virtualization to instantiate heterogeneous slices faster and within a short distance from the optimum. We assessed the performance of our algorithms through extensive numerical analysis and on a 64-antenna testbed with 24 software-defined radios. Results show that Sl-EDGE instantiates slices 6x more efficiently than state-of-the-art MEC slicing algorithms, and that it simultaneously provides highly efficient slicing of LTE connectivity, video streaming over WiFi, and ffmpeg-based video transcoding.