Collaborative Service Caching for Edge Computing in Dense Small Cell Networks

09/25/2017
by   Lixing Chen, et al.
University of Miami

Mobile Edge Computing (MEC) pushes computing functionalities away from the centralized cloud to the proximity of data sources, thereby reducing service provision latency and saving backhaul network bandwidth. Although computation offloading has been extensively studied in the literature, service caching is an equally, if not more, important design topic of MEC, yet it has received much less attention. Service caching refers to caching application services and their related data (libraries/databases) at the edge server, e.g., a MEC-enabled Base Station (BS), enabling the corresponding computation tasks to be executed. Since only a small number of services can be cached in a resource-limited edge server at the same time, which services to cache has to be judiciously decided to maximize the system performance. In this paper, we investigate collaborative service caching in MEC-enabled dense small cell (SC) networks. We propose an efficient decentralized algorithm, called CSC (Collaborative Service Caching), in which a network of small cell BSs collaboratively optimizes service caching to address a number of key challenges in MEC systems, including service heterogeneity, spatial demand coupling, and decentralized coordination. Our algorithm is based on parallel Gibbs sampling and exploits the special structure of the considered problem via graph coloring. The algorithm significantly improves time efficiency compared to conventional Gibbs sampling, yet guarantees provable convergence and optimality. CSC is further extended to SC networks with selfish BSs, where a coalitional game is formulated to incentivize collaboration. A coalition formation algorithm based on merge-and-split rules is developed and shown to ensure the stability of the SC coalitions.


I Introduction

Modern mobile applications, such as Virtual/Augmented Reality (VR/AR), Immersive Communications, Mobile Gaming, and Connected Vehicles, are becoming increasingly data-hungry, compute-demanding and latency-sensitive. While cloud computing has been leveraged over the last decade to deliver elastic computing power and storage to resource-constrained end-user devices, it is facing a serious bottleneck: moving all the distributed data and computing-intensive applications to remote clouds not only poses an extremely heavy burden on today’s already-congested backbone networks but also results in large transmission latencies that degrade quality of service. As a remedy to these limitations, mobile edge computing (MEC) (a.k.a. fog computing) [1, 2] has recently emerged as a new computing paradigm that pushes the frontier of data and services away from centralized cloud infrastructures to the logical edge of the network, thereby enabling analytics and knowledge generation to occur closer to the data sources.

MEC provides cloud computing capabilities within the radio access network in the vicinity of mobile subscribers. As a major deployment scenario, edge servers are deployed at mobile base stations (BSs) to serve end-users’ computation tasks [3], thereby mitigating congestion within the core network. On the other hand, BSs are also able to access cloud data centers when higher computation power or storage capacity is needed, resulting in a hierarchical computation offloading architecture among end-users, BSs, and the cloud. While computation offloading has been the central theme of most works studying MEC, or more broadly, mobile cloud computing (MCC), what is often ignored is the heterogeneity and diversity of mobile services, and how these services are cached on BSs in the first place. Specifically, service caching (or service placement) refers to caching computation services and their related libraries/databases in the edge server co-located with the BS based on spatial-temporal service popularity information [1], thereby enabling user applications requiring these services to be executed at the network edge in a timely manner. For instance, visitors in a museum can use an AR service for a richer sensory experience, so it is desirable to cache AR services at the BS serving this region to provide the service in real time. Similarly, users tend to play mobile games at home in the evening; such information suggests that operators cache gaming services during this period to handle the heavy computation load.

However, unlike the cloud, which has huge and diverse resources, the limited computing and storage resources of an individual BS allow only a very small set of services to be cached at a time. As a result, which services are cached on the BS determines which application tasks can be offloaded to the edge server, thereby significantly affecting the edge computing performance. Although the BS can adjust its cached services on a per-task basis, the heterogeneous and time-varying user tasks, especially when the BS is serving multiple users, would lead to frequent cache-and-tear operations [1]. This not only imposes an additional traffic burden on the core network but also causes extra computation latency, thereby degrading user quality of experience. As such, service caching is a relatively long-term decision compared to computation offloading, and which services to cache must be judiciously decided, given the limited resources of individual BSs, based on the predicted service popularity in order to optimize the edge computing performance.

Parallel to MEC, the Small Cell (SC) concept has emerged to provide seamless coverage, boost throughput and save energy in cellular networks. The density of BSs in cellular networks has kept increasing from their inception to today’s 4G networks, and is expected to reach about 40-50 BSs/km² in next-generation 5G networks and beyond, namely the ultra-dense networks (UDN) [4]. It is envisioned that the majority of SCs will be purchased, deployed and maintained either by end-users in their homes in a “plug and play” fashion, or by enterprises in commercial buildings with no or minimal assistance from mobile operators. By enabling these SCs to operate in an open subscriber group (OSG) mode [5] and thus provide service in their immediate vicinities, mobile operators can significantly lower their capital and operational expenditures. In such dense networks, a typical mobile device will be within the coverage of multiple SCs. This creates an opportunity for nearby small cell BSs to virtually pool their computation resources, share their cached services and collaboratively serve users’ computation workload. Specifically, small-cell BSs can collaboratively decide which services to cache so that the set of services available to users at the network edge is maximized; see Fig. 1 for an illustration.


Fig. 1: System illustration. A BS can only cache a subset of services; a requested service can be executed by a BS only if it is cached at that BS; requests for services not cached at the registered BS can be satisfied by BS collaboration or cloud offloading; BSs have to jointly optimize service caching decisions.

Although the idea of collaborative service caching in dense small cell networks offers a promising solution to enhance MEC performance, achieving optimal collaborative service caching faces many challenges. Firstly, compared to conventional clouds that manage only computing resources, MEC leads to the co-provisioning of radio access and computing services by the BSs, thus mandating a new model that can holistically capture both aspects of the system. BSs can collaborate not only in shared service caching but also in the physical transmission for service delivery. Secondly, although dense deployment creates substantial coverage overlap, BSs are still geographically distributed. This leads to a complex multi-cell network where demand and resources become highly coupled, and hence an extremely difficult combinatorial optimization problem for collaborative service caching. Thirdly, since small cells are often owned and deployed by self-interested individuals (e.g. home/enterprise owners), small cells will be reluctant to participate in the collaborative system without proper incentives. Therefore, incentives must be devised and incorporated into the collaborative service caching scheme.

In this paper, we study the compelling but much less investigated problem of service caching in MEC-enabled dense small cell networks, and develop novel techniques for performing collaborative service caching in order to optimize the MEC performance. The main contributions of this paper are summarized as follows.

(1) We formulate collaborative service caching in MEC-enabled dense small cell networks as a utility maximization problem. Both the case where small cells are fully cooperative and the case where they are strategic (i.e. self-interested) are considered. To the best of our knowledge, this is the first work that studies collaborative service caching in dense networks.

(2) We develop an efficient decentralized algorithm, called CSC (Collaborative Service Caching), to solve the collaborative service caching problem in cooperative SC networks, which is an extremely difficult combinatorial optimization problem. Our algorithm is developed based on the Gibbs sampler, a popular Markov Chain Monte Carlo (MCMC) algorithm. Innovations are made to enable parallel Gibbs sampling based on graph coloring theory, thereby significantly accelerating the algorithm’s convergence.

(3) We employ a coalitional game to understand the behavior of strategic SCs when performing collaborative service caching. Incentive mechanisms are used to promote SC collaboration within the generated coalitions. A decentralized coalition formation algorithm based on merge-and-split processes is developed, and we prove that the proposed algorithm results in Pareto-optimal stable coalitions among the SCs.

The rest of this paper is organized as follows. Section II reviews related works. Section III presents the system model. Section IV focuses on the collaborative service caching problem with fully cooperative SCs. Section V designs the collaborative service caching for strategic SCs. Section VI presents the simulation results, followed by the conclusion in Section VII.

II Related Work

Computation offloading is the central theme of many prior studies in both mobile cloud computing (MCC) [6, 7] and mobile edge computing (MEC) [1, 2], which concern what/when/how to offload users’ workload from their devices to the cloud or the edge servers. Various works have studied different facets of this problem, considering e.g. stochastic task arrivals [8, 9], energy efficiency [10, 3], and collaborative offloading [11, 12]. However, the implicit assumption is that edge servers can process whatever computation tasks are offloaded from users, without considering the availability of services at the edge servers, which in fact is crucial in MEC due to the limited resources of edge servers.

Similar service caching/placement problems, known as virtual machine (VM) placement, have been investigated in conventional cloud computing systems. VM placement over multiple clouds is studied in [13, 14, 15], where the goal is to reduce the deployment cost, maximize energy saving and improve user experience, given constraints on hardware configuration and load balancing. However, these works cannot be directly applied to design efficient service caching policies for MEC since mobile networks are much more complex and volatile, and optimization decisions are coupled both spatially and temporally.

Service caching/placement is also related to content caching/placement in network edge devices [16]. Various content caching strategies have been proposed: e.g., the authors in [17] aim to find optimal locations to cache data so as to minimize packet transmissions among wireless sensor nodes, and the authors in [18] employ zero-forcing beamforming to null interference and optimize content caching to maximize the success probability. The concept of FemtoCaching is introduced in [16], which studies content placement in small cell networks to minimize content access delay. The idea of using caching to support mobility has been investigated in [19], where the goal is to reduce the latency experienced by users moving between cells. Learning-based content caching policies are developed in [20] for wireless networks with a priori unknown content popularity. While content caching is mainly concerned with storage capacity constraints, service caching has to take into account both computing and storage constraints, and has to be jointly optimized with task offloading to maximize the overall system performance.

Service caching/placement for edge systems had gained little attention until very recently. [21] presents a centralized service placement strategy which allows placement of IoT services on virtualized fog resources while considering QoS constraints such as execution deadlines. The service placement problem in [21] is formulated as an integer linear program and solved by CPLEX [22] in a centralized manner. By contrast, the collaborative service caching problem in our paper is a non-linear combinatorial problem, which is much more difficult to solve with existing non-linear integer programming solvers. More importantly, instead of employing a centralized controller, we are interested in a distributed network setting where BSs are able to make caching decisions in a decentralized manner without collecting the information of the whole network at a controller node. Probably the most related work is [23], which studied the joint optimization of service caching/placement over multiple cloudlets and load dispatching for end users’ requests. Our work differs in several significant ways. First, while the coverage areas of different cloudlets are assumed to be non-overlapping in [23], BSs have overlapping coverage areas in our considered dense cellular network. Second, while [23] only develops heuristic solutions to the service caching problem, we formally prove the optimality of our algorithm. Third, while the algorithm in [23] is centralized, our algorithm enables decentralized coordination among BSs. Moreover, we consider the selfish nature of small cell owners. We also note that the term “service placement” is used in some other literature [24, 25] in a different context, where the concern is to assign task instances to different clouds but there is no limitation on what types of services each cloud can run.

III System Model

III-A Network

We consider a network of densely deployed SCs, indexed by . Each SC has an access point/base station (BS) endowed with computation and storage resources and hence can provide edge computing services to end users in its radio range. Each SC has a set of registered user equipments (UEs) within its coverage, indexed by . Let the set of all users be . For instance, in a typical IoT scenario in a multi-story building, a small cell BS (e.g. femtocell BS) and multiple mobile devices, sensors and things are deployed in each room of the building. UEs are authorized to access the edge computing resources of their home BS (i.e. the BS of the SC that they are registered to). However, due to the dense deployment of SCs in the building, a UE is in the radio coverage of multiple SCs besides its home BS. For each UE , let be the set of reachable BSs. Let be the uplink channel condition between UE and BS . Typically, the home BS of a UE has the best channel condition since it is often in the same room as the UE.

We say that two SCs are neighbors if there exists some UE such that . In other words, SCs and can potentially collaborate with each other to serve at least one common UE. Based on this definition, the network of SCs can be described by a graph where is the edge set and there exists an edge between SCs and if they are neighbors. Let denote the one-hop neighborhood of SC .

III-B Services and Demand

A service is an abstraction of an application that is hosted by the BS and requested by UEs. Example services include video streaming, mobile gaming and augmented reality. Running a particular service requires caching the associated data, such as required libraries and databases, on the BS. We assume that there are in total possible computing services, indexed by . Different services require different processor power, memory, and storage resources. However, due to the limited computing resources of an individual BS, not all services can be cached at the same time. Given the computing resources (along various dimensions) of SC , the feasible sets of services that can be cached on SC simultaneously can be easily derived. Let be the collection of feasible service sets for SC ; each element in is a feasible set of services. As a simple example, consider that each SC can only offer one kind of service at a time, and hence for all in this case.
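Since the resource symbols are abstracted here, a small sketch can illustrate how the collection of feasible service sets might be derived under a single storage constraint. The service names, storage requirements, and budget below are purely illustrative, not quantities from the paper:

```python
from itertools import combinations

def feasible_sets(storage_req, budget):
    """Enumerate all service subsets whose total storage fits the budget.

    storage_req: dict mapping a (hypothetical) service name to its storage need.
    budget: storage capacity of the small-cell BS.
    """
    services = list(storage_req)
    feasible = []
    for r in range(len(services) + 1):
        for subset in combinations(services, r):
            if sum(storage_req[s] for s in subset) <= budget:
                feasible.append(frozenset(subset))
    return feasible

# With a budget of 1 unit and unit-size services, only the empty set and the
# singletons are feasible, matching the one-service-at-a-time special case.
sets_ = feasible_sets({"AR": 1, "gaming": 1, "video": 1}, budget=1)
```

In a multi-dimensional setting (CPU, memory, storage), the same enumeration would simply check one budget per dimension.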

The network of SCs periodically updates its service caching decisions according to the predicted/reported service demand of the UEs. We focus on the decision problem for one such period, which is much longer than the timescale of computation offloading (i.e. packet transmission + task processing). Let be the caching decision of SC , and let collect the caching decisions of all SCs. The service demand of UE in the current period is denoted by a vector , where is a tuple capturing the input data size (in bits) and the computation workload (in processor cycles) of demand requiring service .

III-C Utility Model

In SC networks, UEs derive benefits from completing their computation tasks. Let be the benefit that UE receives from tasks requiring service . Meanwhile, costs are incurred in order to complete these tasks, which come from two major sources: cost due to transmitting the data of these tasks from UEs to BSs and cost due to processing the computation workload of these tasks. We model them in more detail as follows.

III-C1 Transmission Cost

Let denote the transmission power of UE ; then the achievable uplink transmission rate between UE and BS can be calculated using the Shannon capacity

(1)

where is the channel bandwidth and is the noise power. Therefore, the transmission energy consumption of UE for sending bits of input data to BS is . The total energy consumption of UE is thus

(2)

where is the amount of data sent to BS from UE , which is jointly determined by the SC service caching decisions and the UE-SC association, as will be elaborated later in this paper. Notice that we neglect the transmission cost for BSs to send service outcomes back to the UEs, because the size of a service outcome is much smaller than that of the service input data.
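As a sketch of the transmission model, the following computes the Shannon-capacity uplink rate of Eq. (1) and the resulting transmission energy (transmit power times airtime) for sending a given number of input bits. All quantities are in linear units, and the numerical values are illustrative only:

```python
import math

def uplink_rate(p_tx, gain, bandwidth, noise):
    """Achievable uplink rate per Eq. (1): r = W * log2(1 + p * h / sigma^2),
    where p is the UE transmit power, h the uplink channel gain, W the channel
    bandwidth, and sigma^2 the noise power."""
    return bandwidth * math.log2(1 + p_tx * gain / noise)

def tx_energy(bits, p_tx, gain, bandwidth, noise):
    """Transmission energy for `bits` of input data: transmit power times the
    airtime needed to deliver the bits at the achievable rate."""
    return p_tx * bits / uplink_rate(p_tx, gain, bandwidth, noise)
```

Summing `tx_energy` over all BSs a UE sends data to gives the total transmission energy of Eq. (2).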

III-C2 Computation Cost

If BS caches service , then the received workload requiring service can be processed at BS , thus incurring computation cost, e.g. energy consumption; otherwise, the computation is further offloaded to the cloud, thus incurring cloud usage cost, e.g., cloud service fee and Internet round-trip delay. Let be the unit computation consumption of BS and be the unit cloud usage cost. We assume that so that processing at the network edge is preferred compared to offloading to the remote cloud. Let and denote the workload processed for UE by BS and cloud, respectively, which are related to the SCs’ service caching decision and UE-SC association. Therefore, we have the computation cost of SC , , and cloud, , for executing the service demand from UE :

(3)

We consider a pay-to-work service provision framework where SCs are paid by UEs to process their service requests. Without loss of generality, we employ an intuitive payment scheme: if a SC or the cloud offers services to UE , then UE must pay the SC or cloud to cover the incurred computation cost. Let be the total payment of UE ; then we have

(4)
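A minimal sketch of the cost split in (3)-(4): workload for services cached at the serving BS is processed at the edge at a unit cost, the remainder goes to the cloud at a higher unit cost (edge processing is assumed cheaper, as stated above), and the UE's payment covers both. The variable names are illustrative, not the paper's notation:

```python
def computation_costs(workloads, cached, c_edge, c_cloud):
    """Split a UE's per-service workload between edge and cloud, per (3)-(4).

    workloads: dict mapping service -> workload (processor cycles) demanded.
    cached: set of services cached at the serving BS.
    c_edge, c_cloud: unit computation cost at the BS and cloud (c_edge < c_cloud).
    Returns (edge_cost, cloud_cost, total_payment).
    """
    edge = sum(w for s, w in workloads.items() if s in cached)
    cloud = sum(w for s, w in workloads.items() if s not in cached)
    return c_edge * edge, c_cloud * cloud, c_edge * edge + c_cloud * cloud
```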

The utility of SC is obtained by accumulating the differences between benefits and costs (i.e. transmission cost and payments) over its registered UEs. Recall that is the benefit UE receives if the request for service is satisfied; then the utility of SC can be represented as:

(5)

III-D Non-Collaborative Service Caching

Before we study collaborative service caching, we first present the non-collaborative service caching problem as a benchmark. In this case, UEs transmit all service requests to their home BSs. Therefore, if is UE ’s home BS, ; otherwise, . Moreover, BS only processes tasks requesting services cached locally, and the remaining tasks are further offloaded to the cloud server; hence we have and . Depending on what services are cached on BS , the utility of SC can be written as:

Noticing that , the utility function of SC can be rewritten as:

(6)

Each SC aims to maximize its utility by optimizing its service caching decision, i.e. . Note that the UEs’ benefits and transmission costs do not depend on the service caching decision in the non-collaborative case; is the only term in (6) that depends on it. Recalling that , the optimization problem can be further simplified to

(7)

The above problem is not difficult to solve: the optimal solution requires the SC to cache the most popular services (i.e. services with the largest ). However, non-collaborative SCs may rely heavily on the cloud service, which incurs large latency and cloud usage fees, due to the diversity of user demand and the limited caching capacity of BSs. One promising way to reduce the reliance on the cloud is to enable collaboration among SCs. Figure 2 illustrates a simple motivating example for collaborative service caching. Consider two overlapping SCs providing two services to their respective UEs, and assume that each SC can only cache one service. Without collaboration (the left figure), each SC caches the most popular service, which is the red service in this case, in order to maximize its own utility. All computation demand for the green service then has to be offloaded to the cloud. With collaboration (the right figure), SC 1 caches the red service while SC 2 caches the green service. Because the coverage areas of these two SCs largely overlap, computation demand for the green service from UEs registered to SC 1 can be offloaded to SC 2. Likewise, computation demand for the red service from UEs registered to SC 2 can be offloaded to SC 1. In this way, all computation is retained at the network edge with zero cloud usage. In the following sections, we study how to perform collaborative service caching to improve the MEC performance. We first assume that SCs are fully cooperative with the common goal of maximizing the MEC performance of the whole network. Next, we study the problem considering selfish SCs.
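In code, the non-collaborative solution of (7) reduces to a greedy selection. Assuming unit-size services and a cache capacity expressed as a number of services (an illustrative simplification of the feasible-set constraint), a BS in isolation simply ranks services by the cost savings of serving their demand locally instead of at the cloud:

```python
def non_collaborative_cache(edge_savings, capacity):
    """Greedy solution of the per-BS problem (7), under simplifying assumptions.

    edge_savings: dict mapping service -> cost saved by caching it, i.e. the
    (cloud cost - edge cost) margin times the local workload for that service
    (an illustrative stand-in for the demand statistics in the paper).
    capacity: number of services the BS can cache (unit-size services assumed).
    """
    ranked = sorted(edge_savings, key=edge_savings.get, reverse=True)
    return set(ranked[:capacity])
```

This myopic choice is exactly what both SCs do in the left panel of Fig. 2, which is why collaboration can strictly improve on it.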


Fig. 2: Comparison of non-collaborative service caching and collaborative service caching

IV Collaborative Service Caching for Obedient SCs

In this section, we study collaborative service caching assuming that SCs are obedient and fully cooperative, sharing the common goal of maximizing the total utility of the whole network. With collaborative service caching, a UE does not have to send its computation workload to its home BS. Instead, the service caching decisions of the SCs reshape the workload distribution in the network in order to better utilize the limited computing resources of individual BSs.

IV-A Problem Formulation

Given the service caching decisions of all SCs, let be the set of SCs that cache service and are reachable by UE . Since UEs are often battery-powered and have stringent energy constraints, in our considered cooperative system the demand of UE is always offloaded to the SC in with the best uplink channel condition, namely . In this way, the UE incurs the least transmission energy consumption. Notice that our system is also compatible with other UE-SC association strategies. Designing an appropriate association strategy can further improve the performance of the MEC system, but this goes beyond the main purpose of our analysis; interested readers are referred to [26] and references therein. It is still possible that none of the SCs that UE can reach caches service , namely . In this case, UE ’s demand is offloaded to the cloud via UE ’s home BS. To facilitate the exposition, we write as the BS to which UE sends its demand for service , , which can be determined as follows:

(8)

where is the home BS of UE . Therefore, the amount of data sent from UE to BS can be determined as

(9)

and the amount of workload processed by BS , , and cloud, , can be determined as:

(10)
(11)

Substituting into (2) and into (3), we can then obtain the utility of SC which is a function of the service caching configuration of all SCs.
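The routing rule of (8)-(11) can be sketched as follows: among the reachable BSs that cache the requested service, the UE picks the one with the best uplink channel; if none caches it, the demand goes to the home BS, which forwards it to the cloud. The dictionary layout and the "edge"/"cloud" tag are illustrative conventions, not the paper's notation:

```python
def serving_bs(reachable, channel, caching, home_bs, service):
    """Determine the BS receiving a UE's demand for `service`, per Eq. (8),
    together with where the workload is processed ("edge" or "cloud").

    reachable: list of BSs in the UE's radio range.
    channel: dict BS -> uplink channel gain toward that BS.
    caching: dict BS -> set of cached services.
    home_bs: the UE's home BS (fallback route to the cloud).
    """
    candidates = [b for b in reachable if service in caching[b]]
    if not candidates:
        # No reachable BS caches the service: offload to the cloud via home BS.
        return home_bs, "cloud"
    # Pick the caching BS with the best channel, minimizing TX energy.
    return max(candidates, key=lambda b: channel[b]), "edge"
```

Applying this rule per (UE, service) pair yields the data volumes in (9) and the edge/cloud workload split in (10)-(11).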

The objective of the cooperative SC network is to determine the optimal collaborative service caching configuration in order to maximize the total utility of the whole network: , which is equivalent to minimizing the total cost. Define the cost of SC as

(12)

Then the problem of collaborative service caching for obedient SCs becomes:

(13)

The above problem is a difficult combinatorial optimization problem where the service caching decisions of SCs are highly correlated. Centralized solutions are usually computationally prohibitive and require the knowledge of the entire network, which is difficult to obtain. In the next subsection, we develop an efficient decentralized algorithm to optimize the collaborative service caching decisions based on parallel Gibbs sampler and graph coloring.

IV-B Chromatic Parallel Gibbs Sampler and Graph Coloring

Our decentralized collaborative service caching algorithm is developed based on the Gibbs sampling (GS) technique [27]. In our problem, GS is used to generate the probability distribution of the network service caching configuration by repeatedly sweeping over the SCs, with each SC sampling its service caching decision from a conditional distribution while the remaining SCs fix their service caching decisions. The theory of Markov chain Monte Carlo (MCMC) guarantees that the probability distribution of the service caching configuration approximated by GS is proportional to , where the parameter has the interpretation of temperature in the context of statistical physics. Such a probability measure is known as the Gibbs distribution. Furthermore, performing Gibbs sampling while gradually reducing yields service caching configurations resulting in globally minimal cost [28]. However, convergence results are typically available only for the case of sequential sampling (one SC updating at a time). Sequential sampling has two main problems: (1) it takes too long to complete one round of updating in large networks, and (2) it assumes that global communication is available, which may not hold in SC networks. One promising solution is to enable parallelism in Gibbs sampling. It is worth noting that extreme parallelism (i.e. all SCs evolving their decisions at the same time) is often infeasible: as observed in [29], one can easily construct cases where the fully parallel Gibbs sampler is not ergodic, and therefore convergence to the Gibbs distribution is not guaranteed. In the following, we design an algorithm, called the Chromatic Parallel Gibbs Sampler (CPGS), that transforms sequential sampling into an equivalent parallel sampling by exploiting the special structure of the considered SC network using the theory of Markov Random Fields and graph coloring.

The sequential GS works by iteratively sampling each SC’s service caching decisions according to a joint posterior distribution , i.e.

(14)

where refers to the caching decisions of SCs excluding the SC , and temperature controls the trade-off between exploitation and exploration. The adopted distribution will ensure that GS converges to the Gibbs distribution. Intuitively, if the decision update at SC does not influence sampling distribution of SC and vice versa, then SC and can update their caching decisions independently and simultaneously. To formalize this intuition, we resort to the Markov Random Field (MRF). The MRF of a distribution is an undirected graph over the (caching decision variables of) SCs. On this graph, the set of all SCs adjacent to SC , denoted by , is called the Markov Blanket of SC . Given its Markov Blanket, the caching decision of SC must be conditionally independent of the caching decisions of all other SCs outside , namely

(15)

Therefore, any two SCs that are not in the Markov Blanket of each other can evolve their decisions simultaneously. The next proposition establishes the connection between the physical network graph and the MRF.

Proposition 1.

The two-hop neighborhood of SC on the physical network graph is the Markov Blanket of SC on the MRF.

Proof.

Consider any SC that is more than two hops away from SC . We need to prove the following:

(16)

Recall that is the one-hop neighborhood of SC on the physical network graph . The total cost of the network can be divided into two parts

(17)

For any SC , SC is not its one-hop neighbor because SC is at least three hops away from SC . Therefore, fixing the caching decisions of all SCs except SC and SC , only depends on the SC ’s decision, namely . On the other hand, for any SC , SC is not its one-hop neighbor and hence, changing SC ’s decision does not affect , namely . Therefore,

(18)

The right-hand side is independent of , so the proposition is proved. It is also obvious that the one-hop neighborhood of SC on the physical network graph is not its Markov Blanket. ∎

Proposition 1 implies that when SC changes its caching decision, the change in the sum cost of SC ’s one-hop neighborhood is the same regardless of how any SC outside of its Markov Blanket changes its caching decision at the same time. This property is the key to enabling the parallel evolution of the service caching decisions in GS. Based on this result, we divide the set of SCs into groups such that no two SCs within a group are in each other’s Markov Blanket; hence, SCs within the same group can evolve their decisions in parallel. We would like to be as small as possible to achieve the maximal level of parallelization.

Finding the minimum value of is equivalent to a graph coloring problem on the MRF. We color the MRF with a -coloring such that each SC is assigned one of colors and SCs within the same Markov Blanket have different colors. Therefore, the minimum value of equals the chromatic number of the MRF graph. Let denote the set of SCs with color . Then CPGS simultaneously draws new decisions for all SCs in before proceeding to . The colored network ensures that all SCs within a color are conditionally independent of the SCs in the remaining colors and can therefore be sampled in parallel. Constructing the minimal coloring of a general graph is NP-complete. However, for simple models the optimal coloring can be quickly derived using existing graph coloring algorithms [30].
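To make the construction concrete, the following sketch builds each SC's Markov Blanket (its two-hop neighborhood, per Proposition 1) from the physical adjacency and then applies a standard greedy coloring heuristic. Since minimal coloring is NP-complete in general, greedy coloring may use more than the chromatic number of colors; it is one simple stand-in for the coloring algorithms cited in [30]:

```python
def mrf_coloring(adj):
    """Color the MRF so that SCs sharing a color can be sampled in parallel.

    adj: dict mapping each SC to the set of its one-hop physical neighbors.
    Returns a dict SC -> color index, where no two SCs in each other's
    Markov Blanket (two-hop neighborhood) share a color.
    """
    # Build the two-hop neighborhood (Markov Blanket) of every SC.
    blanket = {}
    for v in adj:
        two_hop = set(adj[v])
        for u in adj[v]:
            two_hop |= adj[u]
        two_hop.discard(v)
        blanket[v] = two_hop
    # Greedily assign each SC the smallest color unused within its blanket.
    color = {}
    for v in sorted(adj):
        used = {color[u] for u in blanket[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

For a ring of six SCs, for example, the two-hop MRF is colorable with three colors, so three parallel sampling rounds replace six sequential ones.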

Example: Figure 3 illustrates the relationship between the physical network graph, the MRF, the Markov blanket and the coloring result for two typical network topologies, namely circle and star. For the circle network with , the minimum number of SC groups is . For the star network with , the minimum number of SC groups is . In both cases, is significantly smaller than .


Fig. 3: Illustration of MRF, Markov Blanket, and coloring.

Iv-C Decentralized Algorithm Based On Chromatic Parallel Gibbs Sampling

Now we are ready to present the decentralized algorithm to optimize the collaborative caching decisions (the pseudo-code is provided in Algorithm 1). The algorithm works distributedly in an iterative manner as illustrated in Fig. 4. In each iteration , a colorset is chosen according to a prescribed order. Every SC in this colorset goes through two steps: decision update and communication. To perform the decision update, SC needs two pieces of information: (1) the service demand pattern of its one-hop neighbors, which is exchanged at the beginning of each caching decision period; (2) the current service caching decisions of the SCs within its Markov Blanket , which were collected in previous iterations. With this information, SC is able to compute the total cost of its one-hop neighborhood locally for every possible caching decision while fixing the caching decisions of the SCs in . Then, the BS samples a new service caching decision based on the following probability distribution:

(19)

The above equation implies that an action is selected with a higher probability if it leads to a lower total cost of the one-hop neighborhood. After the service caching decision is updated, the chosen SCs communicate their new service caching decisions to the SCs in , in preparation for the subsequent iterations. Notice that during the iterations, SCs do not need to actually change their cached services; the physical cache update is performed only once, after the algorithm completes.
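The draw in (19) is a Boltzmann (softmax-over-negative-cost) sample. A minimal sketch, with hypothetical neighborhood costs and temperature value, might look as follows; the max-subtraction trick is a standard numerical-stability device, not part of the paper's formulation:

```python
import math
import random

def sample_decision(costs, temperature):
    """Draw a caching decision with probability proportional to
    exp(-cost / temperature): lower neighborhood cost -> higher probability."""
    # Subtract the minimum cost before exponentiating for numerical stability;
    # this does not change the normalized distribution.
    m = min(costs.values())
    weights = {a: math.exp(-(c - m) / temperature) for a, c in costs.items()}
    total = sum(weights.values())
    r, acc = random.uniform(0.0, total), 0.0
    for action, w in weights.items():
        acc += w
        if r <= acc:
            return action
    return action  # fallback for floating-point round-off

# Hypothetical one-hop-neighborhood costs for three candidate services.
costs = {"service_A": 12.0, "service_B": 7.5, "service_C": 30.0}
picks = [sample_decision(costs, temperature=2.0) for _ in range(5000)]
```

At temperature 2.0 the lowest-cost option (`service_B`) dominates the draws, while the much costlier `service_C` is sampled only very rarely.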

Next, we formally prove the convergence and optimality of our algorithm.

Fig. 4: Illustration of decentralized collaborative service caching algorithm. The figure depicts a physical SC network; the star sign denotes the chosen SCs in each iteration to update their service caching decision.
1:  Input: Physical network graph , UE demand .
2:  Construct correlation graph based on ;
3:  Determine groups of uncorrelated SCs using graph coloring algorithm;
4:  for each iteration  do
5:     Pick a group of uncorrelated SCs according to some prescribed order;
6:     for each SC in the picked group do
7:        for each feasible caching decision  do
8:           Determine the workload distribution of UEs in ;
9:           Compute the total cost of the one-hop neighborhood ;
10:        end for
11:        Set with probability (19) and communicate to SCs in ;
12:     end for
13:     for each SC not in the picked group do
14:        Set ;
15:     end for
16:     Stop iteration if the sampling probability converges;
17:  end for
18:  Return: Collaborative service caching configuration .
Algorithm 1 Decentralized Collaborative Service Caching
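To make Algorithm 1's loop concrete, here is a self-contained toy run of the chromatic parallel update. The instance is hypothetical: a 6-SC ring with two services and a simple duplication-penalty cost (a SC pays 1 per one-hop neighbor caching the same service), standing in for the paper's full cost model. Each color group samples in parallel from the Boltzmann distribution over its one-hop-neighborhood cost.

```python
import math
import random

random.seed(0)

# Hypothetical instance: 6 SCs on a ring, two cacheable services.
# Low-cost configurations alternate services around the ring.
N, SERVICES, T = 6, ("A", "B"), 0.2
neighbors = {i: ((i - 1) % N, (i + 1) % N) for i in range(N)}
# A valid 3-coloring of the Markov-blanket (two-hop) graph of this ring:
# SCs in the same group are three hops apart, hence updatable in parallel.
groups = [[0, 3], [1, 4], [2, 5]]

def neighborhood_cost(i, service, config):
    """Toy one-hop-neighborhood cost if SC i caches `service`."""
    return sum(1 for j in neighbors[i] if config[j] == service)

def boltzmann_draw(i, config):
    """Sample a service with probability proportional to exp(-cost/T)."""
    w = {s: math.exp(-neighborhood_cost(i, s, config) / T) for s in SERVICES}
    r, acc = random.random() * sum(w.values()), 0.0
    for s, p in w.items():
        acc += p
        if r <= acc:
            return s
    return s

config = {i: "A" for i in range(N)}         # heavily duplicated start
best = float("inf")
for _ in range(50):                          # iterate over the color sets
    for group in groups:
        updates = {i: boltzmann_draw(i, config) for i in group}  # parallelizable
        config.update(updates)
    total = sum(neighborhood_cost(i, config[i], config) for i in range(N))
    best = min(best, total)
```

With a low temperature the chain quickly reaches configurations with little service duplication among neighbors, mirroring the behavior CSC aims for.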
Proposition 2 (Convergence).

The Chromatic Parallel Gibbs Sampler converges from any starting configuration to Gibbs distribution , where

(20)
Proof.

The convergence of CPGS can be easily derived from the classic convergence result for the sequential Gibbs sampler [28], stated below:

Lemma 1 (Convergence of sequential Gibbs sampler).

Assume that each SC is sampled by the sequential Gibbs sampler with the sequence containing SC infinitely often. Then, for every starting configuration and every , we have where .

Proof.

The proof of Lemma 1 can be found in [28] and is thus omitted here. ∎

For sequential Gibbs sampling, we can construct a sampling sequence as:

(21)

where denotes the -th BS in colorset . Since the convergence of sequential Gibbs sampling is guaranteed for any prescribed order as long as every SC is visited infinitely often, sampling with the constructed order converges to the Gibbs distribution. Recall that in CPGS the caching decision updates are independent for SCs sharing the same color; therefore, SCs with the same color can update their caching decisions at the same time. This also ensures convergence to the Gibbs distribution. ∎

By extending Proposition 2, we can show the algorithm also guarantees that the service caching configuration converges to the optimal solution of CSC-O with an appropriate temperature .

Proposition 3 (Optimality).

As decreases, the algorithm converges with a higher probability to the optimal solution of CSC-O. When , the algorithm converges to the global optimal solution with probability 1.

Proof.

Let , we rewrite as

(22)

where is obtained from by multiplying both its denominator and numerator by . As , approaches 1 if and approaches 0 otherwise. As a result, the optimal caching decision will be sampled with probability 1. ∎
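This annealing effect can be checked numerically. The sketch below uses hypothetical configuration costs: as the temperature shrinks, the Gibbs distribution concentrates its mass on the minimum-cost configuration.

```python
import math

def gibbs_distribution(costs, temperature):
    """Normalized probability of each configuration under exp(-cost/T)."""
    m = min(costs.values())  # shift by the minimum for numerical stability
    w = {a: math.exp(-(c - m) / temperature) for a, c in costs.items()}
    z = sum(w.values())
    return {a: v / z for a, v in w.items()}

# Hypothetical costs; x1 is the optimal configuration.
costs = {"x1": 10.0, "x2": 11.0, "x3": 14.0}
for T in (10.0, 1.0, 0.1):
    p = gibbs_distribution(costs, T)
    print(T, round(p["x1"], 4))
```

At high temperature the three configurations are sampled almost uniformly; at T = 0.1 the optimum already carries essentially all the probability mass.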

Following the classic result on parallel computing in [31], we can analyze the time complexity of the proposed algorithm:

Proposition 4 (Time complexity).

Given a -coloring of the SC network, CPGS generates a new joint service caching decision in runtime .

Proof.

From [31] (Proposition 2.6), we know that the parallel execution of CPGS corresponds exactly to the execution of a sequential scan Gibbs sampler for some permutation over variables. Therefore the running time can be derived as:

(23)

where is the number of processors. In our case, each SC derives its caching decision individually at its local edge server; therefore, equals the number of SCs in , which gives the equality . ∎

Proposition 4 indicates that CPGS provides a linear speed-up for single-chain sampling, advancing the Markov chain for an -coloring SC network in time rather than . Typically, is much smaller than , thereby accelerating the convergence speed of our algorithm.

V Collaborative Service Caching for Strategic SCs

In the previous section, we studied collaborative service caching assuming that SCs are fully cooperative. However, selfish SCs will be reluctant to participate in the cooperative network if doing so reduces their individual utility. To formally understand SCs' selfish behavior and the resulting service caching strategies, we use the theoretical framework of coalitional games to investigate how selfish SCs form coalitions to collaboratively provide edge computing services.

A coalitional game is defined as a tuple where is the set of players and is a function that assigns for every possible coalition a real number representing the total benefit achieved by coalition . By evaluating the values of different coalitions, players decide what coalitions to form. In what follows, we first design the interaction within each coalition and define the value function for any given coalition . Then we describe what coalitions are desired in terms of stability and design a distributed coalition formation algorithm.

V-a Value Function and SC Interactions within a Coalition

In this subsection, we define the value function and describe the interaction among SCs that belong to a given coalition (which, however, may not be stable). The value function of an SC coalition is defined as the maximum total utility that can be achieved when collaboration is restricted to SCs in this coalition, i.e.

(24)

where denotes the collaborative service caching decision of SCs in coalition . When cooperation is restricted in , an SC in does not offload its UE workload to any SC outside . By using the collaborative service caching algorithm developed in Section IV (restricted to coalition ), the optimal total utility and hence the value of the coalition can be computed. However, is achieved assuming that SCs are always cooperative within the coalition, which may not be true. It is entirely possible that some SCs cache popular services for neighbor SCs instead of its favorite services in order to improve utility of the coalition, which makes them unwilling to work in a collaborative manner. In the following, we design an effective algorithm for strategic SCs based on coalitional game such that collaboration is always favorable for SCs to work in the generated coalitions. Before proceeding, we introduce two collaboration patterns:

V-A1 Plain Collaboration

SC collaboration simply relies on the pay-to-work framework described in Section III-C2: each UE pays the BSs and the cloud to cover the computation cost incurred in processing its service requests. This payment scheme provides a relatively weak incentive since UEs only cover the computation cost of the SCs that process their service requests; there is no extra reward for participating in the collaboration.

V-A2 Incentivized Collaboration

Besides the pay-to-work framework, SCs receive extra payments for participating in the collaboration. In this case, the values of the extra payments are decided by Proportional Fairness Division [11]: the payoff (i.e., utility improvement) of the whole coalition due to cooperation is divided proportionally to each SC's individual utility achieved without cooperation. Specifically, for SC , its modified utility will be

(25)

where represents the utility of SC if it does not join any coalition (perform non-collaborative service caching), and is the proportional weight satisfying , i.e.,

(26)

Based on the proportional fairness division, the extra payment paid/received by SC can thus be determined as . Here, can be interpreted as the expected utility of SC from participating in the coalition, while is the actually realized utility. The gap between the two is filled by the extra payment . Clearly, the payments must be cleared within each coalition and hence . If , then SC pays . If , then SC receives .
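A small numeric sketch of this division, following (25) and (26) with hypothetical standalone and realized utilities, shows how the extra payments are set and why they clear to zero within the coalition:

```python
def proportional_payments(standalone, realized):
    """standalone[i]: utility of SC i without any coalition.
    realized[i]: utility of SC i under the coalition's optimal caching.
    Returns the extra payment each SC receives (negative = SC pays)."""
    surplus = sum(realized.values()) - sum(standalone.values())
    total_standalone = sum(standalone.values())
    payments = {}
    for i in standalone:
        weight = standalone[i] / total_standalone      # proportional weight, eq. (26)
        expected = standalone[i] + weight * surplus    # modified utility, eq. (25)
        payments[i] = expected - realized[i]           # gap filled by the payment
    return payments

# Hypothetical utilities: the coalition raises total utility from 20 to 25.
standalone = {"SC1": 4.0, "SC2": 6.0, "SC3": 10.0}
realized   = {"SC1": 9.0, "SC2": 5.0, "SC3": 11.0}
pay = proportional_payments(standalone, realized)
```

In this example SC1 realizes more than its fair share and therefore pays into the coalition, while SC2 and SC3 receive payments; the payments sum to zero, so the transfer is cleared within the coalition.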

Intuitively, there is no extra payment (i.e., ) for SCs under Plain Collaboration. It is expected that Incentivized Collaboration better promotes cooperation among strategic SCs, as will be shown later in the simulations.

There are two implementation issues for the payment scheme. First, payments need to be properly distributed since multiple SCs may be involved in a transaction. Moreover, direct payment from one SC to another faces fraud risks in the monetary transaction. To enable effective and safe transactions, the edge orchestrator, which is a trusted third party, collects payments from all SCs and then distributes them to SCs.

V-B Stability of Coalitions

SCs may form multiple disjoint coalitions and there are many ways that SCs can form coalitions. To characterize what kind of coalitions are preferred by SCs, we introduce the notion of Stable Coalition: no SC(s) have incentives to leave the current coalition to form different coalitions. Clearly, the requirement that all SCs in a coalition must receive higher utilities than working individually is a necessary (but not sufficient) condition for stability.

Consider any subset of SCs. We call a collection of coalitions formed by SCs in , where are disjoint subsets of . If , then we call a partition of . We introduce the notion of a defection function , which associates each possible partition of with a group of collections. The stability of a partition of is defined against a defection function as follows.

Definition 1 (-stability).

A partition of is -stable if no SCs are interested in leaving the current partition to form a new collection of coalitions . That is, for any such deviation, at least one of the deviating SCs does not improve its utility by leaving the current partition.

In other words, a defection function restricts the possible ways that SCs may deviate/defect. Two defection functions are of particular interest. The first function, denoted by , associates each partition with all possible collections in . Therefore, it does not put any restriction on the way SCs may deviate and hence is the most general case. The second function, denoted by , associates each partition with collections that can be formed by merging or splitting coalitions in . Therefore, -stability is a weaker notion than -stability. Next, we design a distributed SC coalition formation algorithm that achieves at least -stability.

V-C Distributed Coalition Formation Algorithm for Small-cell network

Before presenting the SC coalition formation algorithm, we first introduce the notion of Pareto dominance to compare the “quality” of two collections of coalitions.

Definition 2 (Pareto Dominance).

Consider two collections of disjoint coalitions and formed by the same subset of SCs . Pareto-dominates , denoted by , if and only if with at least one strict inequality for some SC.

Pareto dominance implies that a group of SCs prefer to form coalitions rather than . Based on this, we define the following two operations, namely Merge and Split [32], which are central to our coalition formation algorithm:

  • Merge: merge a set of coalitions into a bigger coalition if .

  • Split: split a coalition into a set of smaller coalitions if .

By performing Merge, a group of SCs can combine and form a single, larger coalition if and only if this formation increases the utility of at least one SC without decreasing the utility of any other involved SC. Hence, a Merge decision ensures that all involved SCs agree on its occurrence. Likewise, a coalition can decide to split and divide itself into smaller coalitions if splitting is preferred in the Pareto sense.
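The Pareto-dominance test behind both operations is a simple vector comparison. A sketch with hypothetical per-SC utilities before and after a candidate Merge:

```python
def pareto_dominates(new_utils, old_utils):
    """True iff every SC is at least as well off under `new_utils`
    and at least one SC is strictly better off."""
    assert new_utils.keys() == old_utils.keys()
    no_worse = all(new_utils[i] >= old_utils[i] for i in old_utils)
    strictly = any(new_utils[i] > old_utils[i] for i in old_utils)
    return no_worse and strictly

# Hypothetical utilities: the first candidate Merge hurts nobody and
# helps two SCs; the second helps SC2 and SC3 but hurts SC1.
before    = {"SC1": 3.0, "SC2": 5.0, "SC3": 2.0}
after_ok  = {"SC1": 3.0, "SC2": 6.0, "SC3": 2.5}   # Merge accepted
after_bad = {"SC1": 2.5, "SC2": 7.0, "SC3": 3.0}   # SC1 loses -> rejected
```

Only the first candidate passes the test, so only that Merge would be agreed to by all involved SCs; the same check, with the roles reversed, decides a Split.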

1:  Initial: The SC network is partitioned by with non-collaborative SCs at the beginning of each operational time slot.
2:  Output: A stable partition of ; Service decision of SCs in each coalition : .
3:  Repeat
4:  (a) Merge ():
5:     Choose a possible set of SC coalitions to attempt merging into , where ;
6:     Solve and obtain optimal configuration with CPGS;
7:     Derive , and decide whether to merge by examining Pareto dominance;
8:  (b) Split ()
9:     Choose a coalition to attempt splitting into a set of disjoint coalitions , where ;
10:     Solve and obtain optimal configuration with CPGS;
11:     Derive , and decide whether to split by examining Pareto dominance;
12:  Until Merge and Split converge to a stable partition ;
13:  Return Stable partition and its service caching decisions .
Algorithm 2 Distributed SC coalition formation

Our coalition formation algorithm for collaborative service caching with strategic SCs (CSC-S) is developed based on the Merge and Split operations, which is presented in Algorithm 2. The algorithm iteratively executes the merge-and-split operations. Given the current partition , each coalition negotiates, in a pairwise manner, with neighboring SCs to assess a potential merge. The two coalitions will then decide whether or not to merge. Whenever a Merge decision occurs, a coalition can subsequently investigate the possibility of a Split. Clearly, a Merge or a Split operation is a distributed decision that an SC (or a coalition of SCs) can make. After successive merge-and-split iterations, the network converges to a partition composed of disjoint coalitions and no coalition has any incentive to further merge or split. In other words, the partition is merge-and-split proof. The convergence of any merge-and-split iterations such as the proposed algorithm is guaranteed as shown in [32].

V-D Stability Analysis

The outcome of the above algorithm is a partition of disjoint independent coalitions of SCs. As an immediate result of the definition of stability, every partition resulting from the proposed algorithm is -stable. In particular, no coalitions of SCs in the final partition have the incentive to pursue a different coalition formation through Merge or Split. Next, we investigate whether the proposed algorithm can achieve -stability.

A -stable partition has the following properties according to [32]. (i) No SCs are interested in leaving to form other collections in (through any operation). (ii) A -stable partition is the unique outcome of any arbitrary iteration of merge-and-split if it exists. (iii) A -stable partition is the unique -maximal partition, i.e., for every partition , we have . Therefore, the -stable partition provides a Pareto-optimal utility distribution. However, the existence of a stable partition is not always guaranteed [32]. Nevertheless, we can still establish the following result.

Proposition 5 (Coalition Stability).

The proposed distributed SC coalition formation algorithm converges to the Pareto-optimal -stable partition, if such a partition exists. Otherwise, the final partition is -stable.

Proof.

The proof is immediate due to the fact that, when it exists, the -stable partition is a unique outcome of any merge-and-split iteration [32], such as any partition resulting from our coalition formation algorithm. ∎

The stability of the grand coalition (i.e., all SCs forming a single coalition) is of particular interest in coalitional game theory. It can be easily shown that the considered SC coalitional game is generally not superadditive and its core is generally empty due to the limitation on BS interaction; hence, the grand coalition is not stable. Instead, independent disjoint coalitions will form. Readers interested in more details on the stability of the grand coalition in coalitional games are referred to [33, 34].

V-E Complexity Analysis

The complexity of the proposed coalition formation algorithm lies mainly in the complexity of the Merge and Split operations. For a given network, in one Merge operation, each current coalition attempts to merge with other coalitions in a pairwise manner. In the worst-case scenario, every SC, before finding a suitable merge partner, needs to make a merge attempt with every other SC in . In this case, the first SC requires attempts for Merge, the second requires attempts, and so on. The total number of Merge attempts in the worst case is thus . In practice, the merge process requires significantly fewer attempts since finding a suitable partner does not always require going through all possible merge attempts (once a suitable partner is identified, the merge occurs immediately). The complexity is further reduced because SCs do not need to attempt to Merge with physically unreachable SCs. Moreover, after the first run of the algorithm, the initially non-collaborative SCs will self-organize into larger coalitions. Subsequent runs of the algorithm will deal with a network composed of a number of coalitions that is much smaller than .

For the Split operation, in the worst-case scenario, splitting can involve examining all possible partitions of the set formed by the SCs in a single coalition. For a given coalition , this number is given by the Bell number, which grows exponentially with the number of SCs in the coalition. In practice, the Split operation is restricted to the formed coalitions, and thus it will be applied to small sets. The Split complexity is further reduced because, in most scenarios, a coalition does not need to search over all possible split forms. For instance, once a coalition identifies a suitable split structure, the SCs in this coalition will split, and the search for further split forms is not needed in the current iteration.
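The Bell numbers invoked here grow very quickly; a short computation via the Bell triangle illustrates why exhaustive Split is only practical on small coalitions:

```python
def bell_numbers(n):
    """Bell numbers B_0 .. B_{n-1} via the Bell triangle.
    B_k is the number of ways to partition a coalition of k SCs."""
    bells, row = [], [1]
    for _ in range(n):
        bells.append(row[0])
        # Next triangle row: start with the last entry of the current row,
        # then add each current entry cumulatively.
        nxt = [row[-1]]
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return bells

# Ways to partition coalitions of 0..9 SCs: already over 21,000 at 9 SCs.
print(bell_numbers(10))
```

A coalition of 4 SCs admits only 15 partitions, but one of 9 SCs already admits 21,147, which is why restricting Split to the small coalitions that merge-and-split actually forms matters in practice.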

Vi Simulation

In this section, we conduct systematic simulations in practical scenarios to verify the efficacy of the proposed algorithms.

Vi-a Setup

Our system model adopts the widely used stochastic geometry approach, the homogeneous Poisson Point Process (PPP) [35], for BS and UE deployment. Specifically, we simulate a 500 m × 500 m square area where a set of BSs is deployed whose locations are chosen according to a PPP with density . The deployment of UEs follows another PPP with density . The service demand of each UE is modeled as a Poisson process with arrival rate randomly drawn from 0 to 20. The transmission power of UEs is set to 10 dBm and the channel model follows the free-space pathloss model: , where is the distance between UE and BS . The bandwidth for each UE is MHz. The cache capacity of BSs is set to 1 (service) and BSs are allowed to collaborate with other BSs within a range of 150 m. The processing costs of BSs are random variables and the cost of cloud usage is set to 5. The temperature for the chromatic parallel Gibbs sampler is set to 10.

Fig. 5 shows the deployed BSs and UEs generated by the homogeneous PPP. We have a total of 13 BSs and 72 UEs, and each UE registers to the BS nearest to it. Fig. 6 depicts the service demand received by each BS from its registered UEs, where the length of each color block denotes the demand for a particular service. The simulation results shown in the following are based on this BS/UE deployment and service demand pattern.

Fig. 5: BS/UE deployment and registered association
Fig. 6: Demands of BSs received from registered UEs

Vi-B Utility analysis

First, we compare the system utilities achieved by three strategies: Non-collaborative service caching (NCOL), Collaborative service caching with obedient SCs (CSC-O), and Collaborative service caching with strategic SCs (CSC-S). For CSC-S, we apply both the Plain Collaboration (PC) and Incentivized Collaboration (IC) schemes to see how they influence the coalition formation. Fig. 7 depicts the overall utility of the edge system under these four strategies. It can be observed that the collaborative strategies dramatically increase the system utility compared to the non-collaborative case, achieving utility improvements of 42.8% to 57.1%. Specifically, CSC-O achieves the largest system utility since the service caching decisions of all BSs are made to maximize the utility of the whole edge system. However, system-wide optimality may not be preferred by every BS, since not all BSs are guaranteed a utility gain by following the system-optimal configuration. To see this, we show the utility achieved by individual BSs in Fig. 8. As can be observed, BS 6 and BS 11 suffer a utility loss compared with NCOL by following the optimal service caching decision derived by CSC-O. By contrast, the utilities of individual BSs achieved by CSC-S are strictly larger than those in the NCOL case, which means the collaboration between BSs is favorable even considering the selfishness of SCs. Moreover, the system utility achieved by CSC-S is comparable to that of CSC-O.

Fig. 7: Comparison of system utilities

Vi-C Workload distribution and service caching decision

Next, we proceed to see how the proposed algorithm works to benefit the edge computing system. The key idea of SC collaboration is to accommodate more workload in the edge system, thereby avoiding the high cost of cloud usage. Fig. 9 presents the workload distribution of each BS. We see clearly that the three collaborative strategies, compared to NCOL, allow more workload to be processed within the edge system instead of being offloaded to the cloud. The main mechanism underlying this advantage is increasing the diversity of cached services among neighboring BSs. Fig. 10 depicts the service caching decisions obtained by NCOL, CSC-O, and CSC-S(IC). Notice that, for better illustration, we place the service caching decisions of neighboring BSs adjacent to each other in the figure. Combining this with the service demand pattern in Fig. 6, we can see that NCOL simply caches the services having the highest demand and therefore, the case illustrated in Fig. 2 can easily occur. For example, two neighboring BSs, BS 5 and BS 7, both cache service 8 in the NCOL case. By contrast, the service caching decisions of the collaborative strategies exhibit a higher level of diversity among neighboring BSs.

Fig. 8: Utilities of individual BSs
Fig. 9: Workload distribution
(a) Non-collaborative BS
(b) Collaborative obedient BSs
(c) Collaborative strategic BSs
Fig. 10: Service caching decision

Vi-D Chromatic Parallel Gibbs sampling

Fig. 11 compares the convergence processes of CPGS and the sequential Gibbs sampler when running CSC-O. It can be observed that CPGS converges much faster than the sequential Gibbs sampler yet reaches the same optimal system utility. We further present the sampling probabilities of the service caching decisions. Fig. 12 averages the action sampling probabilities of BSs over the first 10 iterations. It can be observed that in this stage the distribution of sampling probabilities is relatively even among the potential actions for most BSs. Fig. 13 averages the sampling probabilities over the last 10 iterations; we can see that the sampling probabilities become more concentrated and the optimal service caching decisions are sampled with a high probability, approaching 1 for most BSs. This implies that the sampling probability has converged to the Gibbs distribution.

Fig. 11: Convergence comparison
Fig. 12: Unconverged Gibbs distribution
Fig. 13: Converged Gibbs distribution

Vi-E Coalition formation game

Fig. 14 presents the coalitions generated by applying CSC-S(IC) and CSC-S(PC). Several points are worth noticing. First, the members of one coalition are not necessarily neighbors. However, collaboration only occurs between neighbors. For example, in the coalition (Fig. 14(a)), BS 1 and BS 12 are not physically adjacent to each other and therefore no collaboration takes place between them, yet these two BSs belong to the same coalition. Second, a BS may not want to join any coalition. In other words, a BS may want to form an isolated coalition that contains only itself since collaborating with any other BS in the network leads to utility loss. In this particular simulation, we observe that BS 2 and BS 9 separately form isolated coalitions with Incentivized Collaboration. As a result, the utilities of these two BSs stay the same before and after the coalition formation. Third, the BSs in the same coalition tend to cache different types of service in order to accommodate more workload in the edge system. By comparing Fig. 14(a) and Fig. 14(b), we also see that the Incentivized Collaboration scheme encourages BSs to form more and larger coalitions compared with Plain Collaboration.

(a) Coalition formation with IC
(b) Coalition formation with PC
Fig. 14: Coalitions generated by CSC-S

During coalition formation, each iteration executes a Merge/Split operation that aims to find a coalition partition that Pareto-dominates the current one. Therefore, after each iteration, at least one BS improves its utility without decreasing the utilities of the other BSs. Fig. 15 shows the system utility evolution over the successive Merge/Split operations. We see that the system utility improves with every Merge/Split operation and, after only a few iterations, the network converges to a stable partition of coalitions. This indicates that in practice the complexity of running the proposed algorithm is low and hence it can be easily implemented.

Fig. 16 shows the payments of each SC. It can be verified that the payments of the SCs are cleared within each coalition. For instance, in the coalition with Incentivized Collaboration, SCs 1, 7, 11, and 12 pay while SCs 5, 8, and 13 receive payments .

Fig. 15: Utility evolution of SCs (IC)
Fig. 16: Payments of SCs

Vii Conclusion

In this paper, we studied collaborative service caching for dense small cell networks. We proposed an efficient decentralized algorithm, called CSC (Collaborative Service Caching), that tailors service caching decisions to collaborative small cell networks with obedient (CSC-O) or strategic (CSC-S) BSs. CSC-O is built on the chromatic parallel Gibbs sampler, which tremendously accelerates the convergence of the conventional Gibbs sampler yet provides provable optimality. The proposed framework is further extended to SC networks consisting of strategic BSs with the assistance of a coalitional game. The developed coalition formation algorithm ensures that even selfish BSs are well motivated to collaborate with other BSs in their coalition via proper incentive schemes. Although our work makes a valuable first step towards optimizing MEC considering service heterogeneity, there are a few limitations in the current model that demand future research effort. For example, user-cell association (load dispatching) decisions can be incorporated into the optimization framework, and service demand prediction methods need to be developed for making service caching decisions.

References

  • [1] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing: The communication perspective,” IEEE Communications Surveys & Tutorials, 2017.
  • [2] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision and challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, 2016.
  • [3] Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic computation offloading for mobile-edge computing with energy harvesting devices,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 12, pp. 3590–3605, 2016.
  • [4] X. Ge, S. Tu, G. Mao, C.-X. Wang, and T. Han, “5g ultra-dense cellular networks,” IEEE Wireless Communications, vol. 23, no. 1, pp. 72–79, 2016.
  • [5] L. G. Garcia, K. I. Pedersen, and P. E. Mogensen, “On open versus closed lte-advanced femtocells and dynamic interference coordination,” in Wireless Communications and Networking Conference (WCNC), 2010 IEEE.   IEEE, 2010, pp. 1–6.
  • [6] N. Fernando, S. W. Loke, and W. Rahayu, “Mobile cloud computing: A survey,” Future generation computer systems, vol. 29, no. 1, pp. 84–106, 2013.
  • [7] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud computing and emerging it platforms: Vision, hype, and reality for delivering computing as the 5th utility,” Future Generation computer systems, vol. 25, no. 6, pp. 599–616, 2009.
  • [8] D. Huang, P. Wang, and D. Niyato, “A dynamic offloading algorithm for mobile computing,” IEEE Transactions on Wireless Communications, vol. 11, no. 6, pp. 1991–1995, 2012.
  • [9] J. Liu, Y. Mao, J. Zhang, and K. B. Letaief, “Delay-optimal computation task scheduling for mobile-edge computing systems,” in Information Theory (ISIT), 2016 IEEE International Symposium on.   IEEE, 2016, pp. 1451–1455.
  • [10] J. Xu, L. Chen, and S. Ren, “Online learning for offloading and autoscaling in energy harvesting mobile edge computing,” IEEE Transactions on Cognitive Communications and Networking, vol. PP, no. P, pp. 1–15, 2017.
  • [11] L. Chen and J. Xu, “Socially trusted collaborative edge computing in ultra dense networks,” in Edge Computing (SEC), 2017 IEEE/ACM Symposium on, 2017.
  • [12] S. Tanzil, O. Gharehshiran, and V. Krishnamurthy, “A distributed coalition game approach to femto-cloud formation,” IEEE Transactions on Cloud Computing, 2016.
  • [13] J. Tordsson, R. S. Montero, R. Moreno-Vozmediano, and I. M. Llorente, “Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers,” Future Generation Computer Systems, vol. 28, no. 2, pp. 358–367, 2012.
  • [14] B. Li, J. Li, J. Huai, T. Wo, Q. Li, and L. Zhong, “Enacloud: An energy-saving application live placement approach for cloud computing environments,” in Cloud Computing, 2009. CLOUD’09. IEEE International Conference on.   IEEE, 2009, pp. 17–24.
  • [15] Y. Gao, H. Guan, Z. Qi, Y. Hou, and L. Liu, “A multi-objective ant colony system algorithm for virtual machine placement in cloud computing,” Journal of Computer and System Sciences, vol. 79, no. 8, pp. 1230–1242, 2013.
  • [16] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and G. Caire, “Femtocaching: Wireless content delivery through distributed caching helpers,” IEEE Transactions on Information Theory, vol. 59, no. 12, pp. 8402–8413, 2013.
  • [17] K. S. Prabh and T. F. Abdelzaher, “Energy-conserving data cache placement in sensor networks,” ACM Transactions on Sensor Networks (TOSN), vol. 1, no. 2, pp. 178–203, 2005.
  • [18] D. Liu and C. Yang, “Caching policy toward maximal success probability and area spectral efficiency of cache-enabled hetnets,” IEEE Transactions on Communications, 2017.
  • [19] T. Wang, L. Song, and Z. Han, “Dynamic femtocaching for mobile users,” in Wireless Communications and Networking Conference (WCNC), 2015 IEEE.   IEEE, 2015, pp. 861–865.
  • [20] S. Müller, O. Atan, M. van der Schaar, and A. Klein, “Context-aware proactive content caching with service differentiation in wireless networks,” IEEE Transactions on Wireless Communications, vol. 16, no. 2, pp. 1024–1036, 2017.
  • [21] O. Skarlat, M. Nardelli, S. Schulte, and S. Dustdar, “Towards QoS-aware fog service placement,” in Fog and Edge Computing (ICFEC), 2017 IEEE 1st International Conference on. IEEE, 2017, pp. 89–96.
  • [22] IBM CPLEX Optimizer, https://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/.
  • [23] L. Yang, J. Cao, G. Liang, and X. Han, “Cost aware service placement and load dispatching in mobile cloud systems,” IEEE Transactions on Computers, vol. 65, no. 5, pp. 1440–1452, 2016.
  • [24] S. Wang, R. Urgaonkar, T. He, K. Chan, M. Zafer, and K. K. Leung, “Dynamic service placement for mobile micro-clouds with predicted future costs,” IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 4, pp. 1002–1016, 2017.
  • [25] Q. Zhang, Q. Zhu, M. F. Zhani, R. Boutaba, and J. L. Hellerstein, “Dynamic service placement in geographically distributed clouds,” IEEE Journal on Selected Areas in Communications, vol. 31, no. 12, pp. 762–772, 2013.
  • [26] R. Urgaonkar, S. Wang, T. He, M. Zafer, K. Chan, and K. K. Leung, “Dynamic service migration and workload scheduling in edge-clouds,” Performance Evaluation, vol. 91, pp. 205–228, 2015.
  • [27] C. P. Robert, Monte Carlo Methods. Wiley Online Library, 2004.
  • [28] S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 6, pp. 721–741, 1984.
  • [29] D. Newman, P. Smyth, M. Welling, and A. U. Asuncion, “Distributed inference for latent Dirichlet allocation,” in Advances in Neural Information Processing Systems, 2008, pp. 1081–1088.
  • [30] M. Kubale, Graph Colorings. American Mathematical Society, 2004, vol. 352.
  • [31] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Prentice Hall, Englewood Cliffs, NJ, 1989, vol. 23.
  • [32] K. R. Apt and A. Witzel, “A generic approach to coalition formation,” International Game Theory Review, vol. 11, no. 03, pp. 347–367, 2009.
  • [33] W. Saad, Z. Han, M. Debbah, A. Hjorungnes, and T. Basar, “Coalitional game theory for communication networks,” IEEE Signal Processing Magazine, vol. 26, no. 5, pp. 77–97, 2009.
  • [34] A. Bogomolnaia and M. O. Jackson, “The stability of hedonic coalition structures,” Games and Economic Behavior, vol. 38, no. 2, pp. 201–230, 2002.
  • [35] F. Baccelli, B. Blaszczyszyn et al., “Stochastic geometry and wireless networks: Volume II: Applications,” Foundations and Trends® in Networking, vol. 4, no. 1–2, pp. 1–312, 2010.