1 Introduction
Mobile Edge Computing (MEC) [1] has emerged as a viable technology for mobile operators to push computing resources closer to the users so that requests can be served locally without long-haul crossing of the network core, thus improving network efficiency and user experience. In a nutshell, a typical MEC architecture consists of four layers of entities: the mobile users, the basestations, the edge servers, and the cloud datacenter. The edge servers are introduced at the edge of the network connecting the basestations to the network core, each server being an aggregation hub, or a mini datacenter, to offload processing tasks from the remote datacenter. Because the region, or “cloudlet” [2], served by an edge server is much smaller, a commodity virtualization-enabled computer can be used to run compute and network functions that would otherwise be provided by the datacenter.
MEC can benefit many compute-hungry or latency-critical applications involving video optimization [3], content delivery [4], big data analytics [5], and augmented reality [6], to name a few. Originally initiated for cellular networks to realize the 5G vision, MEC has been generalized for broader wireless and mobile networks [7]. It is becoming more of a phenomenon with the Internet of Things; more than 5 billion IoT devices would be connected to MEC by 2020 according to a January 2017 forecast by BI Intelligence [8].
A challenge with MEC is how to align the edge servers with the basestations geographically to maximize the edge computing benefits. How to address this challenge is application-specific. We focus on applications involving pairwise transactions between devices. Cellphone calls made from one user to another, peer-to-peer video streaming, and multiplayer online gaming are examples of this type of communication. If the two devices are served by different edge servers, the datacenter must get involved, thus incurring a backhaul cost. This cost is avoided if the same server serves both devices. However, serving both devices with the same server makes no practical sense if they are in far-apart geographic locations, because a server should be geographically close to where it serves to avoid high installation cost and long latency [9, 10]. On the other hand, devices with many transactions between them should belong to the same server. Although geographic proximity tends to imply high transactional activity, this relationship is not straightforward. As the number of servers is finite and their capacities limited, it is impossible to equally please all the users.
Therefore, we are motivated to solve the following optimization problem: where to place a set of edge servers of limited capacity and assign them to the basestations such that (1) the offloading benefit is maximized and (2) each server is geographically close to its respective users. We require that the server locations be chosen from a set of predetermined geographic sites; this constraint applies to practical cases involving non-technical factors in the deployment of the servers, for example, economics, policies, or management.
The server assignment problem is not new outside MEC. Indeed, it belongs to the body of work on distributed allocation of resources (virtual machines) widely studied in the area of cloud computing [11, 12, 13, 14]. The MEC problem is similar, however with unique constraints. Firstly, the servers in MEC need to be near the user side, not the datacenter side, so the communication cost to be minimized is due to the use of the backhaul network (towards the datacenter), not the fronthaul (towards the cells). Secondly, the geographic spread of the cells served by a MEC server should be a design factor, which is not a typical priority for a distributed cloud solution.
The MEC server assignment problem has been addressed in some forms only recently [15, 9]. Our key contribution is a new practical formulation of the server assignment problem. We prove its NP-hardness and subsequently explore an approximation solution based on local search heuristics. We evaluate its effectiveness and efficiency using both real-world and synthetic datasets, comparing against intuitive approaches.
The remainder of the paper is organized as follows. Related work is discussed in Section 2. The problem is stated and formulated in Section 3. The algorithm is proposed in Section 4. The results of the evaluation study are analyzed in Section 5. The paper concludes in Section 6 with pointers to our future work.
2 Related Work
Every finite computing system serving a large number of resource-hungry requests faces the challenge of how to assign resources to computing units to optimize hardware consumption and best satisfy application QoS requirements. The MEC server assignment problem shares the same challenge, which can arise in various scenarios.
The assignment problem in [16] applies to a MEC network supporting multiple applications of known request load and latency expectation, and the challenge is to determine on which edge servers to run the required virtual machines (VMs), constrained by server capacity and inter-server communication delay. In [17, 18], where only one application is considered, consisting of multiple interrelated components organizable into a graph, the challenge is how to place this component graph on top of the physical graph of edge servers to minimize the cost to run the application. In the case that edge servers must be bound to certain geographic locations, a challenge is to decide at which of these locations we should place the servers and how to interconnect them for optimal routing and installation costs [15].
The above works do not take into account the geographic spread of the region served by a server. The cells served by the same server can be highly scattered geographically, causing long latency and high management cost. This motivates the work in [9], which proposes a spatial partitioning of the geographic area such that the cells of the same server are always contiguous. For this partitioning, a graph-based clustering algorithm is proposed that repeatedly merges adjacent cells to form clusters as long as the merger results in better offloading and no cluster exceeds the server capacity. In similar research [10], where the cells served by each server are also contiguous, the objectives are to minimize the server deployment cost, the fronthaul link cost for each server to reach its assigned basestations, and the cell-to-cell latency via the edge; the proposed algorithm repeatedly selects the next remaining server of the least deployment cost and assigns it to all the nearby basestations of the least fronthaul link cost, as long as the server capacity is not exceeded.
The problem in [9] aims to optimize for workloads involving cell-to-cell communication, whereas the problem in [10] is for individual-cell workloads. The latter also requires that all the processing be fulfilled by the edge servers, hence zero backhaul cost. In this aspect, our work is more similar to [9] because we also optimize for cell-to-cell workloads and cannot avoid backhaul use (thus the objective to minimize its cost). However, there are key differences. First, the number of servers is a constraint in our problem, but not in [9]. Second, we minimize the geographic spread of the cells served by each server, instead of enforcing their geographic contiguity. We argue that these cells should not be too far from their server but do not have to be contiguous; in contrast, the solution in [9] enforces contiguity but has no control over spread. Third, we require that the servers be bound to predetermined locations (as in [10, 15], but not a constraint of [9]).
3 Problem Statement
The geographic area is partitioned into a set of $n$ cells, each cell $i$ served by a basestation at a known location $p_i$ in the 2D plane; we also refer to this basestation as basestation $i$. The meanings of “basestation” and “cell” are general, not necessarily understood in the conventional sense as in cellular networks; for example, they can be a WiFi access router and its coverage area in WiFi networks. The edge layer consists of $k$ edge servers, whose locations are chosen from a set $L$ of candidate locations ($|L| \ge k$). For example, $L$ can be a subset of basestation locations; in this case, an edge server must be colocated with some basestation (as in [10]). In general, we admit arbitrary candidate locations.
The input workload is a symmetric matrix $(\lambda_{ij})_{n \times n}$ of nonnegative real values, with $\lambda_{ij}$ representing the transaction demand between users in cell $i$ and users in cell $j$. Note that $\lambda_{ii}$ is the transaction demand between users of the same cell $i$. Denote by $\lambda_i = \sum_{j} \lambda_{ij}$ the total workload involving cell $i$. Without loss of generality, assume that the total workload involving all the cells equals 1; i.e., $\sum_{i \le j} \lambda_{ij} = 1$. Each server exclusively manages the workload for a group of basestations. If basestation $i$ is assigned to a server at location $l$, we have a fronthaul link, whose usage cost should increase with their distance; denote this cost by $d_{il}$ (assumed given, e.g., equal to the distance between $p_i$ and $l$). Because the backhaul links to reach the remote datacenter are much more expensive, representing the worst-case scenario, we assume their cost to be a fixed cost much higher than all fronthaul link costs. We want to avoid backhaul links as much as possible.
Our goal is twofold: (1) server location assignment (SLA): assign the best location to each server; and (2) cell-server assignment (CSA): assign the best server to each cell. We use 0-1 integer programming to formulate these assignments. Define a binary variable $x_{il}$, for each cell $i$ and candidate location $l \in L$,
such that $x_{il} = 1$ iff there is a server at location $l$ and this server serves cell $i$. Given $x$, we can tell exactly the server locations (SLA) and the cell-to-server assignment (CSA). A location $l$ is a server location iff $\sum_{i} x_{il} \ge 1$, i.e., at least one cell is assigned to location $l$. Another way to express this condition is $\min(1, \sum_{i} x_{il}) = 1$. Because $k$ different locations must be chosen for the servers, we have the constraint below,

$\sum_{l \in L} \min\big(1, \sum_{i} x_{il}\big) = k.$
Also, because each cell must be assigned to exactly one server, another constraint is

$\sum_{l \in L} x_{il} = 1 \quad \text{for every cell } i.$
We assume that there is a capacity $C$ on the compute load a server can process. In the case a server is fully saturated, the residual workload must be serviced by the datacenter. Our objectives are to minimize the backhaul cost and the geospread under this assumption.
3.1 Backhaul Cost
A transaction can be one of the following types: between users of the same cell, between users of two different cells assigned to the same server, and between users of two different cells assigned to different servers. The compute demand for the edge comes from transactions of the first two types. Specifically, if a server is placed at location $l$, we represent its compute (demand) load as

$\mathrm{load}(l) = \sum_{i \le j} \lambda_{ij}\, x_{il}\, x_{jl}. \quad (1)$

Of course, $\mathrm{load}(l) = 0$ for every non-server location $l$.
If the server capacity is infinite, all of this load will be fulfilled by the server. However, limited by the server capacity $C$, if $\mathrm{load}(l) > C$, the remaining amount $\mathrm{load}(l) - C$ of workload must be processed at the datacenter, thus incurring a backhaul cost. Consequently, the total backhaul cost is due to not only the transactions between cells assigned to different servers, but also those transactions assigned to the same server that exceed its capacity. We represent the backhaul cost as

$\mathrm{cost} = \sum_{l \in L} \max\big(0, \mathrm{load}(l) - C\big) + \Big(1 - \sum_{l \in L} \mathrm{load}(l)\Big), \quad (2)$

which can also be expressed as

$\mathrm{cost} = 1 - \sum_{l \in L} \min\big(\mathrm{load}(l), C\big). \quad (3)$
We want to minimize $\mathrm{cost}$.
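To make Eqs. (1)-(3) concrete, here is a small sketch (Python; the variable names are illustrative, not from the paper) that computes the per-server load and the resulting backhaul cost from a given assignment:

```python
# Sketch of Eqs. (1)-(3): per-server load and backhaul cost.
# Assumptions (illustrative names): `demand[i][j]` is the symmetric workload
# matrix, normalized so the upper triangle (i <= j) sums to 1, and
# `assign[i]` is the location of the server serving cell i.

def server_load(demand, assign, loc):
    """Eq. (1): workload whose both endpoint cells are served at `loc`."""
    n = len(demand)
    return sum(demand[i][j]
               for i in range(n) for j in range(i, n)
               if assign[i] == loc and assign[j] == loc)

def backhaul_cost(demand, assign, capacity):
    """Eq. (3): cost = 1 - sum over servers of min(load, capacity)."""
    locs = set(assign)  # non-server locations have zero load anyway
    return 1.0 - sum(min(server_load(demand, assign, l), capacity)
                     for l in locs)

# Tiny example: 3 cells, 2 servers (locations 0 and 1).
demand = [[0.1, 0.2, 0.1],
          [0.2, 0.1, 0.3],
          [0.1, 0.3, 0.2]]  # upper triangle: 0.1+0.2+0.1+0.1+0.3+0.2 = 1
assign = [0, 0, 1]          # cells 0,1 -> server 0; cell 2 -> server 1
```

With capacity $C = 0.3$, server 0 carries load 0.4 (0.1 over capacity) and server 1 carries 0.2, so the cross-server demand plus the overflow goes backhaul.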
3.2 Geographic Spread
For better latency and easier management, we should keep the geographic region served by an edge server from spreading too far, especially for cells with many transactions. We quantify the geospread of a server as the sum of its distances to its assigned basestations, weighted by transaction demand. If a server is placed at location $l$, its geospread is quantified as

$\mathrm{spread}(l) = \sum_{i} \lambda_i\, d_{il}\, x_{il}. \quad (4)$

We want to minimize the total geospread over all the servers,

$\mathrm{spread} = \sum_{l \in L} \mathrm{spread}(l). \quad (5)$
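The geospread of Eqs. (4)-(5) can be sketched in a few lines (Python; names are illustrative; Euclidean distance is assumed for $d$):

```python
import math

# Sketch of Eqs. (4)-(5): total demand-weighted geospread.
# Assumptions (illustrative names): `cells[i]` is basestation i's (x, y)
# location, `weight[i]` is its total workload lambda_i, and `place[i]` is
# the (x, y) location of the server serving cell i.

def geospread(cells, weight, place):
    return sum(w * math.dist(c, p)
               for c, w, p in zip(cells, weight, place))

cells  = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
weight = [0.5, 0.3, 0.2]
place  = [(0.0, 0.0), (0.0, 0.0), (0.0, 1.0)]  # cells 0 and 1 share a server
```

Here only cell 1 is served remotely (at distance 1), so the total geospread is its weight, 0.3.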
3.3 Optimization Problem and NP-Hardness
In summary, our problem is the following two-objective optimization problem.
Problem 3.1 (Min Cost-Spread Assignment (MCSA)).

$\min_{x} \;\big(\mathrm{cost}(x),\ \mathrm{spread}(x)\big)$

s.t.

$\sum_{l \in L} x_{il} = 1 \quad \forall i \quad (6)$

$\sum_{l \in L} \min\big(1, \sum_{i} x_{il}\big) = k \quad (7)$

$x_{il} \in \{0, 1\} \quad \forall i, \forall l \in L \quad (8)$
Theorem 3.2.
MCSA is NP-hard.
Proof.
Consider a simple configuration of our problem: $\lambda_{ij} = 0$ for all $i \ne j$ (i.e., only the intra-cell demands $\lambda_{ii}$ remain), and $d_{il} = 0$ for all $i$ and $l$. Then, it is easy to see that

$\mathrm{spread} = 0 \quad \text{and} \quad \mathrm{cost} = 1 - \sum_{l \in L} \min\Big(\sum_{i} x_{il}\, \lambda_{ii},\ C\Big).$
Because $\mathrm{spread}$ is a constant, we can choose any $k$ locations to place the servers. Given these server locations, what remains is to minimize $\mathrm{cost}$,

$\min_{x} \;\Big(1 - \sum_{l \in L} \min\Big(\sum_{i} x_{il}\, \lambda_{ii},\ C\Big)\Big).$
This minimization is NP-hard because we show below that an algorithm for it can be used to solve the optimization version of the partition problem, which is known to be NP-hard: partition a given set of positive integers, $\{a_1, a_2, \dots, a_n\}$, into two subsets such that the respective subset sums differ the least. To reduce to our problem, consider $k = 2$ servers and $n$ cells with $\lambda_{ii} = a_i / \sum_{j} a_j$, and let $C = 1/2$. Suppose that an optimal solution, $x^*$, assigns a cellset $S_1$ to server 1 and a cellset $S_2$ to server 2. Denote $w_1 = \sum_{i \in S_1} \lambda_{ii}$ and $w_2 = \sum_{i \in S_2} \lambda_{ii}$; without loss of generality, let $w_1 \ge w_2$. The corresponding $\mathrm{cost}^*$ is

$\mathrm{cost}^* = 1 - \min(w_1, 1/2) - \min(w_2, 1/2) = w_1 - 1/2.$
There are two cases. First, if $\mathrm{cost}^* = 0$, then $w_1 = w_2 = 1/2$ and partition $(S_1, S_2)$ is optimal for the partition problem because the subset sums are identical. Second, in the otherwise case, $\mathrm{cost}^* = w_1 - 1/2 > 0$, partition $(S_1, S_2)$ is optimal for the partition problem because no partition can offer a smaller subset-sum difference than

$w_1 - w_2 = 2\,\mathrm{cost}^*.$
Indeed, suppose by contradiction that such a partition $(S_1', S_2')$ exists, with weights $w_1'$ and $w_2'$. Without loss of generality, let $w_1' \ge w_2'$; hence, we must have $1/2 \le w_1' < w_1$. Then, if we assign $S_1'$ to server 1 and $S_2'$ to server 2, $\mathrm{cost}$ will be

$\mathrm{cost}' = w_1' - 1/2 < w_1 - 1/2 = \mathrm{cost}^*.$
This contradicts the assumption that $x^*$ is the optimal solution. ∎
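The reduction can be checked mechanically on a small instance. The sketch below (Python; brute force; purely illustrative) enumerates all two-server assignments of the simplified configuration and confirms that the minimum cost equals half the normalized optimal subset-sum difference, as the proof asserts:

```python
from itertools import product

# Brute-force check of the reduction on a toy instance (illustrative only).
# Integers to partition; set lambda_ii = a_i / sum(a), C = 1/2, k = 2.
a = [3, 1, 1, 2, 2]
total = sum(a)
lam = [x / total for x in a]

best_cost = 1.0     # min backhaul cost over all 2-server assignments
best_diff = total   # min subset-sum difference over all partitions
for split in product([0, 1], repeat=len(a)):
    # Load of server 0 is the sum of its assigned lambda_ii values.
    w0 = sum(l for l, s in zip(lam, split) if s == 0)
    cost = 1.0 - min(w0, 0.5) - min(1.0 - w0, 0.5)  # Eq. (3), this config
    best_cost = min(best_cost, cost)
    diff = abs(2 * sum(x for x, s in zip(a, split) if s == 0) - total)
    best_diff = min(best_diff, diff)
# Proof identity: best_cost == (best_diff / total) / 2
```

For this instance the best split is {3, 2} versus {1, 1, 2, 2} (sums 5 and 4), giving difference 1 and cost $5/9 - 1/2 = 1/18$.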
4 Heuristic Approach
We propose a three-phase approach: focus on spread optimization first, then refine the solution based on the cost (which as a side effect may worsen the spread), and, finally, improve the solution again, this time for a better spread. As local search is widely used for hard combinatorial optimization problems, we present below the local search methods to optimize each individual objective and how to apply them in the three-phase approach.
4.1 Cost-Only Optimization
Because $\mathrm{cost}$ does not involve geography, we need only compute the best cell-server assignment based on the workload demand; any location choice for the servers would work. In a nutshell, our algorithm starts with a random assignment (random cell-server assignment and random server-location assignment) and repeatedly applies a local operation such that the new assignment improves $\mathrm{cost}$. A local operation runs an algorithm to migrate cells between a pair of servers at locations $l$ and $l'$; the servers are referred to as the $l$-server and $l'$-server, respectively. As long as we can find a local operation that improves $\mathrm{cost}$,

$\mathrm{cost}(x_{\text{new}}) < \mathrm{cost}(x), \quad (9)$

we make the new assignment permanent and repeat the same process until no such local operation is found.
Let $A$ and $B$ (resp. $A'$ and $B'$) denote the cellsets of the $l$-server and $l'$-server before (resp. after) the operation. According to Eq. (2), $\mathrm{cost}$ will decrease if the quantity

$\Psi(A, B) = \sum_{i \in A,\, j \in B} \lambda_{ij} + \max\big(0, \mathrm{load}(A) - C\big) + \max\big(0, \mathrm{load}(B) - C\big) \quad (10)$

decreases as a result of replacing $(A, B)$ with $(A', B')$; here $\mathrm{load}(A) = \sum_{i \le j;\ i, j \in A} \lambda_{ij}$. Consequently, we should design the cell-moving algorithm such that its objective is to minimize $\Psi$.
This challenge can be translated into a graph bipartitioning problem. Let $G$ be a weighted graph (self-loops possible) where each cell in $A \cup B$ is a vertex and an edge connects cell $i$ and cell $j$ if they have transactions; the weight of edge $(i, j)$ is $\lambda_{ij}$. A feasible solution is a partition of the vertices into two components. The first additive term of Eq. (10) is the cut weight of this partition, and the second and third additive terms represent a capacity-constrained quality for the partition. We derive an algorithm to compute the best partition based on the Fiduccia-Mattheyses (FM) heuristic [19]. FM is effective for solving the classic graph min-cut bipartitioning problem, whose objective is to minimize the cut weight while balancing the vertex weight. FM is fast (linear time in the number of vertices) and simple (each local operation involves moving only one vertex across the cut). Because our objective is different (minimizing $\Psi$), we need to modify FM.
The cell-moving algorithm works as follows (see Algorithm LABEL:alg:psi). Resembling FM, the algorithm runs in passes; in each pass we compute a sequence of cell migrations, each moving a vertex from $A$ to $B$ or from $B$ to $A$, such that after this series $\Psi$ is maximally improved. The algorithm stops when no improvement can be made. To determine which vertex to move, define for each vertex a quantity called its “gain”: the reduction in $\Psi$ if the vertex were moved from its component to the other component. Consider a vertex $v$ and, without loss of generality, suppose that $v \in A$. If vertex $v$ were moved to $B$, its gain in the cut weight is

$g_{\text{cut}}(v) = \sum_{j \in B} \lambda_{vj} - \sum_{j \in A \setminus \{v\}} \lambda_{vj}.$

The edge weight sum of $A$ and that of $B$ would be changed to

$\mathrm{load}(A \setminus \{v\}) = \mathrm{load}(A) - \sum_{j \in A} \lambda_{vj} \quad \text{and} \quad \mathrm{load}(B \cup \{v\}) = \mathrm{load}(B) + \sum_{j \in B} \lambda_{vj} + \lambda_{vv},$

and so the gain in the capacity-constrained quality is

$g_{\text{cap}}(v) = \max\big(0, \mathrm{load}(A) - C\big) + \max\big(0, \mathrm{load}(B) - C\big) - \max\big(0, \mathrm{load}(A \setminus \{v\}) - C\big) - \max\big(0, \mathrm{load}(B \cup \{v\}) - C\big).$

The gain of vertex $v$ is

$g(v) = g_{\text{cut}}(v) + g_{\text{cap}}(v).$

Intuitively, a positive (negative) gain would result in a smaller (larger) $\Psi$ if the vertex switched its component.
At the beginning of each pass, we construct a priority queue of vertices based on their gain. This queue includes only those vertices whose migration would result in

$\big|\mathrm{load}(A') - \mathrm{load}(B')\big| < \big|\mathrm{load}(A) - \mathrm{load}(B)\big| \quad \text{or} \quad \big(\mathrm{load}(A') \le C \ \text{and}\ \mathrm{load}(B') \le C\big). \quad (11)$

In other words, we consider moving a vertex only if the move improves the load balancing between the two servers or keeps them both under the server capacity.
During the current pass, we repeatedly select the vertex of highest gain from the priority queue, move it, and update the queue. After this vertex is moved, it is “locked” so that it cannot be moved again in the current pass. We repeat this process until the queue is empty. We keep track of the gain accumulation after each step:

$G_t = \sum_{s=1}^{t} g(v_s),$

where $v_s$ is the vertex chosen in step $s$. The best move decision is to move the first $t^*$ vertices, $v_1, \dots, v_{t^*}$, such that $G_{t^*} = \max_t G_t$.

If $G_{t^*} > 0$, these moves would result in a better $\Psi$ because the new $\Psi$ is $\Psi - G_{t^*}$; we make these moves permanent and go on to the next pass, which repeats the same procedure. Else, the algorithm makes no change (i.e., keeps the same partition as before the pass started) and stops.
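As a concrete illustration of one pass, the following simplified sketch (Python; illustrative names; for brevity the gain uses only the cut-weight term, omitting the capacity term) greedily moves the highest-gain unlocked vertex, records the gain prefix sums, and keeps only the best prefix of moves:

```python
# One simplified FM-style pass over a bipartition of cells.
# `w[i][j]` is the symmetric transaction weight; `side_of[i]` in {0, 1}.

def cut_gain(v, side_of, w):
    """Reduction in cut weight if v switches sides (capacity term omitted)."""
    own, other = 0.0, 0.0
    for u in range(len(w)):
        if u == v:
            continue
        if side_of[u] == side_of[v]:
            own += w[v][u]      # would become cut weight after the move
        else:
            other += w[v][u]    # would leave the cut after the move
    return other - own

def fm_pass(side_of, w):
    side_of = list(side_of)
    unlocked = set(range(len(w)))
    moves, prefix, acc = [], [], 0.0
    while unlocked:
        v = max(unlocked, key=lambda u: cut_gain(u, side_of, w))
        acc += cut_gain(v, side_of, w)
        side_of[v] = 1 - side_of[v]   # tentatively move and lock v
        unlocked.remove(v)
        moves.append(v)
        prefix.append(acc)
    best = max(range(len(prefix)), key=lambda t: prefix[t])
    if prefix[best] <= 0:
        return None                    # no improving prefix: pass rejected
    return set(moves[:best + 1])       # keep only the best prefix of moves

w = [[0.0, 0.5, 0.0],
     [0.5, 0.0, 0.1],
     [0.0, 0.1, 0.0]]
side = [0, 1, 0]    # the heavy edge (0, 1) is currently cut
```

Here moving vertex 1 alone uncuts both its edges (gain 0.6), and the later tentative moves only lower the accumulated gain, so the pass keeps exactly that one move.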
When a local operation finishes, we use the final assignment resulting from this operation. The algorithm continues by repeatedly finding the next local operation that can further improve $\mathrm{cost}$ and stops when no such local operation is found.
4.2 Spread-Only Optimization
Minimizing $\mathrm{spread}$ can be reduced to solving a k-median problem [20]. In k-median, given a set of clients and a set of facilities, the goal is to choose $k$ facilities to open and assign an open facility to each client such that the total assignment cost is minimum, assuming that the cost to service client $i$ by facility $l$ is $c_{il}$ (by default, a metric). We can consider each server a facility to open (the facility set is $L$, with $k$ facilities to open) and each cell $i$ a client. The cost to assign client $i$ to facility $l$ (if open) is $c_{il} = \lambda_i d_{il}$ (alternatively, we can think of $d_{il}$ as the assignment cost per unit of service and $\lambda_i$ as the service demand). Then the total service cost of the corresponding k-median problem is

$\sum_{i} \sum_{l \in L} c_{il}\, x_{il} = \sum_{i} \sum_{l \in L} \lambda_i\, d_{il}\, x_{il} = \mathrm{spread}.$

Therefore, any k-median solution ($x_{il} = 1$ iff facility $l$ is open and client $i$ is served by this facility) that minimizes the total service cost is a solution that minimizes $\mathrm{spread}$ for our problem.
K-median is NP-hard [20], and the best approximation factor known to date is $2.675 + \epsilon$, achieved by Byrka et al. [21]. Using the local search approach, one can obtain an approximation factor of $3 + 2/p$ (where $p$ is the number of facilities swapped at a time), for example by Arya et al.'s polynomial-time algorithm [22]. This algorithm starts with a feasible assignment and repeatedly performs a facility swap until no further cost reduction is possible.
Similarly, our algorithm starts with a random assignment and then repeatedly applies a series of local operations. Let $\mathrm{assign}(S)$ denote an algorithm that assigns the cells to the servers located at a given subset of locations, $S \subseteq L$ with $|S| = k$, such that a cell is always assigned to the nearest server; i.e.,

$x_{il} = 1 \iff l = \arg\min_{l' \in S} d_{il'}.$
A local operation involves a pair of a server location $l$ in the current server location set $S$ and a non-server location $l' \notin S$, and does the following:

1. Remove location $l$ from the server set.

2. Add location $l'$ as a new server location.

3. Run $\mathrm{assign}(S')$ to obtain a new cell-server assignment, where $S' = (S \setminus \{l\}) \cup \{l'\}$ is the new server set.
A local operation is chosen to take place permanently if the $\mathrm{spread}$ of the resultant assignment is improved by at least a constant factor $\epsilon$; i.e.,

$\mathrm{spread}(x_{\text{new}}) \le (1 - \epsilon)\, \mathrm{spread}(x). \quad (12)$
Subsequently, the algorithm continues, repeatedly finding another local operation satisfying this inequality, until none is found.
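A minimal sketch of this swap-based local search (Python; Euclidean distance for $d$; names are illustrative and the initial server set is chosen arbitrarily rather than randomly):

```python
import math

# Swap-based local search for the spread objective (k-median style sketch).
# `cells[i]`: basestation coordinates; `weight[i]`: workload lambda_i;
# `candidates`: list of candidate server locations; open k of them.

def spread_of(open_locs, cells, weight, candidates):
    """Each cell is served by its nearest open location (the assign step)."""
    return sum(w * min(math.dist(c, candidates[l]) for l in open_locs)
               for c, w in zip(cells, weight))

def local_search(cells, weight, candidates, k, eps=1e-4):
    open_locs = set(range(k))               # arbitrary initial k locations
    best = spread_of(open_locs, cells, weight, candidates)
    improved = True
    while improved:
        improved = False
        for out in list(open_locs):
            for inn in range(len(candidates)):
                if inn in open_locs:
                    continue
                trial = (open_locs - {out}) | {inn}
                s = spread_of(trial, cells, weight, candidates)
                if s < (1 - eps) * best:    # Eq. (12): improve by factor eps
                    open_locs, best, improved = trial, s, True
                    break
            if improved:
                break
    return open_locs, best

candidates = [(5.0, 5.0), (0.0, 0.0), (1.0, 0.0)]
cells = [(0.0, 0.0), (1.0, 0.0)]
weight = [0.5, 0.5]
```

On this toy input, the search swaps the far-away candidate (5, 5) out for (1, 0), after which every cell is colocated with a server and the spread drops to zero.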
4.3 Three-Phase Algorithm
The above algorithms are each designed for only one objective, cost or spread. We propose the following three-phase algorithm; a summary is given in Algorithm LABEL:alg_threephase.
In Phase 1 (lines 1-4 of Algorithm LABEL:alg_threephase), we run the spread-only algorithm presented above to obtain an assignment with the (approximately) best spread.
In Phase 2 (lines 5-7 of Algorithm LABEL:alg_threephase), we start with this assignment and adjust the cell-server assignment to improve cost. During the process, the server-location assignment remains intact. For the adjustment, we apply the same local search algorithm (same local operation) as in the cost-only algorithm, except for one small modification. Specifically, a local operation is made permanent not only if the resultant cost is less (Eq. (9)), but also if the resultant spread remains below a threshold,

$\mathrm{spread}(x_{\text{new}}) \le \theta \cdot \mathrm{spread}_0. \quad (13)$
Because a local operation, while lessening the cost, may worsen the spread, the threshold is introduced to keep the spread within a reasonable factor $\theta$ of $\mathrm{spread}_0$, the spread at the start of Phase 2. Here, $\theta \ge 1$; we can set $\theta = \infty$ if the goal is to bring down the cost aggressively.
In Phase 3 (lines 8-10 of Algorithm LABEL:alg_threephase), we start with the assignment of Phase 2 and recompute the server locations for a better spread (which has worsened during Phase 2, as a tradeoff, compared to that in Phase 1). During the process, the cell-server assignment remains intact. Denote the server set by $\{1, 2, \dots, k\}$ and the cellset of each server $u$ by $V_u$. The unknown to compute is the binary variable $y_{ul}$, set to 1 iff server $u$ is placed at location $l$. We have

$\mathrm{spread} = \sum_{u=1}^{k} \sum_{l \in L} y_{ul} \sum_{i \in V_u} \lambda_i\, d_{il}. \quad (14)$
Because each server must be assigned to an exclusive location, minimizing $\mathrm{spread}$ is equivalent to finding a min-cost maximal matching in a complete bipartite graph where an edge connects a server vertex $u$ to a location vertex $l$ with cost $\sum_{i \in V_u} \lambda_i d_{il}$ (which is known from the intact cell-server assignment). Therefore, we apply the Hungarian algorithm [23] to compute this matching, which runs in polynomial time (cubic in the number of vertices).
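The matching step can be sketched as follows (Python; the cost matrix values are made up for illustration; for clarity this sketch uses brute-force enumeration over permutations, which is fine for small $k$, whereas a real implementation would use the Hungarian algorithm for its cubic-time guarantee):

```python
from itertools import permutations

# Phase 3 sketch: re-place k servers on k locations, keeping the
# cell-server assignment fixed. `cost[u][l]` is the spread contribution
# if server u is placed at location l (i.e., the sum over its cells of
# weight * distance). Illustrative 3x3 instance.
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]

k = len(cost)
best_perm, best_total = None, float("inf")
for perm in permutations(range(k)):        # perm[u] = location of server u
    total = sum(cost[u][perm[u]] for u in range(k))
    if total < best_total:
        best_perm, best_total = perm, total
```

For this instance the optimal placement puts server 0 at location 1, server 1 at location 0, and server 2 at location 2, for a total spread of 5.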
5 Evaluation
We conducted an evaluation in two scenarios: using a synthetic dataset (Synthetic500) to represent a workload that has no relationship with geography, and a real-world dataset (Milano625) to represent a workload in which demand is higher between cells of closer proximity.

Synthetic500: The service area is a 2D square area in which 500 random locations are chosen for the basestations, and their corresponding cells are the Voronoi cells of these locations. The workload demand $\lambda_{ij}$ between cell $i$ and cell $j$ is generated uniformly at random.

Milano625: We constructed this dataset from the collection of geo-referenced Call Detail Records over the city of Milan during Nov 1st, 2013 - Jan 1st, 2014 (https://dandelion.eu/). Specifically, we partition the area into a grid of $25 \times 25 = 625$ cells and count the calls between these cells made during the Monday of Nov 4th, 2013.
In both studies, the transaction demand values are normalized such that they sum to 1. The number of MEC servers and the number of random candidate locations from which their locations are chosen are set to values reasonable given the number of cells. The server capacity is set to $C \in \{0.03, 0.04, \dots, 0.08\}$, i.e., {3%, 4%, …, 8%} of the total workload. Figure 1 shows the heat map of the workload demand for the Milano625 dataset.
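The Synthetic500 construction and the normalization convention above can be sketched as follows (Python; $n$ is reduced from 500 to keep the example small; the seed and names are illustrative):

```python
import random

# Sketch of a Synthetic500-style workload. Basestation locations are
# uniform in the unit square; pairwise demand is uniform random, then
# normalized so the upper triangle (i <= j) sums to 1, matching the
# normalization convention used in the evaluation.
random.seed(7)
n = 6  # reduced from 500 for illustration
stations = [(random.random(), random.random()) for _ in range(n)]

demand = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        demand[i][j] = demand[j][i] = random.random()  # symmetric

total = sum(demand[i][j] for i in range(n) for j in range(i, n))
for i in range(n):
    for j in range(n):
        demand[i][j] /= total
```

The Voronoi-cell geometry itself is not needed for the workload matrix; only the basestation locations enter the spread objective via the distances $d_{il}$.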
For convenience, we refer to our algorithm as the three-phase algorithm, as it applies the k-median, Fiduccia-Mattheyses (FM), and Hungarian algorithms in its three phases, respectively. Serving as benchmarks for comparison are: the random assignment algorithm, the spread-only algorithm (using k-median), and the cost-only algorithm (using FM, with an additional Hungarian step to improve spread). Note that the only difference between the three-phase algorithm and the cost-only algorithm is that the former starts with a k-median assignment while the latter starts with a random assignment. The metrics for comparison are cost ($\mathrm{cost}$) and spread ($\mathrm{spread}$). The random assignment offers a good upper bound for both cost and spread, while the spread-only algorithm represents a good lower bound (supposedly best) for spread and the cost-only algorithm a good lower bound (supposedly best) for cost.
The parameter $\epsilon$ in Eq. (12) is set to 0.0001 for k-median, and the threshold factor $\theta$ in Eq. (13) is set to $\infty$ (no spread constraint in the second phase). The simulation runs on 10 random sets of candidate server locations and, for each set, 5 random choices for the initial assignment. The results are averaged over these 50 runs and plotted with 100% confidence interval.
5.1 Workload Without Geography Correlation
Figure 2 shows the results for the synthetic case, in which geography is no factor in workload demand. In terms of cost (Figure 2), the spread-only algorithm is almost identical to the random assignment, both incurring a high backhaul cost of almost 0.9 (i.e., 90% of the total workload), even when the server capacity increases. This is not surprising because the spread-only algorithm is cost-blind, and so when the workload has no relationship with geography, minimizing spread results in a cost as bad as that of a random assignment. Perhaps for the same reason, the other two methods, the three-phase algorithm and the cost-only algorithm, also incur almost the same cost as each other. In other words, whether we start with a k-median assignment or a random assignment, an application of FM and Hungarian would result in similar costs. It is important to note that the FM step is effective, especially as the server capacity increases. With a server capacity of 0.08, applying FM reduces the backhaul cost to 0.73, a 20% improvement from the initial assignment.
In terms of spread (Figure 2), the spread-only algorithm is the best (expected) and the random assignment the worst (understandable because it is spread-blind). Between the other two, more interestingly, the cost-only algorithm has a similar spread (only slightly larger) compared to that of the three-phase algorithm. This study suggests that, for a workload input that has no relationship with geography, (1) we can do better than a random assignment, (2) the three-phase algorithm can run without Phase 1 (k-median), which has almost zero benefit, and (3) there is no clear winner between the three-phase and the cost-only algorithms; which one should be chosen depends on whether we prefer minimizing cost or spread.
5.2 Workload With Geography Correlation
Figure 3 shows the results for the real-world dataset (Milano625), in which the workload has a strong correlation with geography; specifically, demand is higher between cells of shorter distance [9]. Similar to the above study, and expectedly, the random assignment is worse than all the other algorithms in both objectives, and the spread-only algorithm has the best spread. There are, however, key differences.
First, the spread-only algorithm has a substantially lower cost than the random assignment's; this implies that, due to the correlation between workload and geography, minimizing spread is, to some extent, beneficial in reducing the cost. Indeed, because k-median tends to cluster cells near each other and workload demand is high between close cells, heavy workloads tend to be served by the edge, hence less workload going backhaul (compared to a random assignment). Second, the three-phase algorithm is clearly better than the cost-only algorithm in both objectives; this substantiates the effectiveness of having Phase 1 (k-median) in our algorithm, leading to not only a better cost but also a better spread. Third, the three-phase algorithm has a spread only slightly worse than the spread-only algorithm's; this shows the effectiveness of Phase 3 (Hungarian) in improving spread. In short, all three phases of the proposed algorithm are important to achieving both objectives.
5.3 Other Observations
Figure 4 gives a visual representation of the assignment maps produced by the spread-only and the three-phase algorithms. Both methods are consistent with the physical map of Milan (Figure 4(f)); that is, because most transactions involve the inner neighborhoods, a server closer to the central area covers fewer cells (which have high activity) than those in the outskirts (which have low activity). While the spread-only algorithm places the servers spatially nicely (Figure 4(a), Figure 4(b)), the three-phase algorithm allows for some discontiguity in the cells that belong to the same server (Figure 4(c), Figure 4(d)); the latter does so to reduce the backhaul cost. For example, for one capacity setting, (cost, spread) is (0.43, 0.21) for the three-phase algorithm and (0.49, 0.18) for the spread-only algorithm. This is a tradeoff between cost and spread. Although we cannot avoid this tradeoff, it is important to point out that the three-phase algorithm offers a better workload balance, as clearly illustrated in Figure 4(e). The ratio of the maximum to the minimum workload demand across servers is at least two times smaller with the three-phase algorithm than with the spread-only algorithm.
6 Conclusions
We have addressed a new server assignment problem for MEC, which requires deciding where geographically to place the servers and how to assign them to the user cells based on transactional workloads. The formulation of the two objectives with respect to the backhaul cost and geographic spread has not appeared in the MEC literature. We have proposed and evaluated a heuristic solution leveraging the k-median, Fiduccia-Mattheyses, and Hungarian methods. The solution is not optimal (due to the NP-hardness of the problem), but it is an effective approximation. For future work, our next step is to consider the case where the workload demand is not static. In practice, the workload demand varies over time, but usually follows a pattern. Knowing this pattern, for example, in the form of a probability distribution, an interesting goal is to compute an assignment offering the best expected optimization.
References
 [1] ETSI, “Mobile-edge computing: Introductory technical white paper,” The European Telecommunications Standards Institute (ETSI), September 2014.
 [2] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, “The case for VM-based cloudlets in mobile computing,” IEEE Pervasive Computing, vol. 8, no. 4, pp. 14–23, Oct 2009.
 [3] X. Xu, J. Liu, and X. Tao, “Mobile edge computing enhanced adaptive bitrate video delivery with joint cache and radio resource allocation,” IEEE Access, vol. 5, pp. 16406–16415, 2017.
 [4] X. Song, Y. Huang, Q. Zhou, F. Ye, Y. Yang, and X. Li, “Content centric peer data sharing in pervasive edge computing environments,” in 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), June 2017, pp. 287–297.
 [5] S. Nastic, T. Rausch, O. Scekic, S. Dustdar, M. Gusev, B. Koteska, M. Kostoska, B. Jakimovski, S. Ristov, and R. Prodan, “A serverless real-time data analytics platform for edge computing,” IEEE Internet Computing, vol. 21, no. 4, pp. 64–71, 2017.
 [6] A. Al-Shuwaili and O. Simeone, “Energy-efficient resource allocation for mobile edge computing-based augmented reality applications,” IEEE Wireless Communications Letters, vol. 6, no. 3, pp. 398–401, June 2017.
 [7] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing: The communication perspective,” IEEE Communications Surveys Tutorials, vol. 19, no. 4, pp. 2322–2358, Fourthquarter 2017.
 [8] P. Newman, “Iot platforms : Examining the wide variety of software that holds iot together,” Source: BI Intelligence Store, January 2017.
 [9] M. Bouet and V. Conan, “Geo-partitioning of MEC resources,” in Proceedings of the Workshop on Mobile Edge Communications, ser. MECOMM ’17. New York, NY, USA: ACM, 2017, pp. 43–48. [Online]. Available: http://doi.acm.org/10.1145/3098208.3098216
 [10] R. Mijumbi, J. Serrat, J. L. Gorricho, J. Rubio-Loyola, and S. Davy, “Server placement and assignment in virtualized radio access networks,” in 2015 11th International Conference on Network and Service Management (CNSM), Nov 2015, pp. 398–401.
 [11] F. Hao, M. Kodialam, T. V. Lakshman, and S. Mukherjee, “Online allocation of virtual machines in a distributed cloud,” in IEEE INFOCOM 2014  IEEE Conference on Computer Communications, April 2014, pp. 10–18.
 [12] M. Alicherry and T. V. Lakshman, “Network aware resource allocation in distributed clouds,” in 2012 Proceedings IEEE INFOCOM, March 2012, pp. 963–971.
 [13] Z. A. Mann, “Allocation of virtual machines in cloud data centers  a survey of problem models and optimization algorithms,” ACM Comput. Surv., vol. 48, no. 1, pp. 11:1–11:34, Aug. 2015. [Online]. Available: http://doi.acm.org/10.1145/2797211
 [14] H. Deng, L. Huang, C. Yang, H. Xu, and B. Leng, “Optimizing virtual machine placement in distributed clouds with m/m/1 servers,” Computer Communications, vol. 102, pp. 107–119, 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0140366417300221
 [15] A. Ceselli, M. Premoli, and S. Secci, “Mobile edge cloud network design optimization,” IEEE/ACM Trans. Netw., vol. 25, no. 3, pp. 1818–1831, Jun. 2017. [Online]. Available: https://doi.org/10.1109/TNET.2017.2652850
 [16] W. Wang, Y. Zhao, M. Tornatore, A. Gupta, J. Zhang, and B. Mukherjee, “Virtual machine placement and workload assignment for mobile edge computing,” in 2017 IEEE 6th International Conference on Cloud Networking (CloudNet), Sept 2017, pp. 1–6.
 [17] T. Bahreini and D. Grosu, “Efficient placement of multi-component applications in edge computing systems,” in Proceedings of the Second ACM/IEEE Symposium on Edge Computing, SEC 2017, San Jose/Silicon Valley, CA, USA, October 12-14, 2017, pp. 5:1–5:11. [Online]. Available: http://doi.acm.org/10.1145/3132211.3134454
 [18] S. Wang, M. Zafer, and K. Leung, “Online placement of multi-component applications in edge computing environments,” IEEE Access, vol. 5, pp. 2514–2533, 2017. [Online]. Available: http://dx.doi.org/10.1109/ACCESS.2017.2665971
 [19] C. M. Fiduccia and R. M. Mattheyses, “A linear-time heuristic for improving network partitions,” in Proceedings of the 19th Design Automation Conference, ser. DAC ’82. Piscataway, NJ, USA: IEEE Press, 1982, pp. 175–181. [Online]. Available: http://dl.acm.org/citation.cfm?id=800263.809204
 [20] O. Kariv and S. L. Hakimi, “An algorithmic approach to network location problems. II: The p-medians,” SIAM Journal on Applied Mathematics, vol. 37, no. 3, pp. 539–560, 1979.
 [21] J. Byrka, T. Pensyl, B. Rybicki, A. Srinivasan, and K. Trinh, “An improved approximation for k-median and positive correlation in budgeted optimization,” ACM Trans. Algorithms, vol. 13, no. 2, pp. 23:1–23:31, Mar. 2017. [Online]. Available: http://doi.acm.org/10.1145/2981561

 [22] V. Arya, N. Garg, R. Khandekar, A. Meyerson, K. Munagala, and V. Pandit, “Local search heuristics for k-median and facility location problems,” in Proceedings of the Thirty-third Annual ACM Symposium on Theory of Computing, ser. STOC ’01. New York, NY, USA: ACM, 2001, pp. 21–29. [Online]. Available: http://doi.acm.org/10.1145/380752.380755
 [23] H. W. Kuhn and B. Yaw, “The Hungarian method for the assignment problem,” Naval Res. Logist. Quart., pp. 83–97, 1955.