1 Introduction
Cloud computing has gained firm traction in the marketplace as major high-tech companies rush to offer cloud services, such as Amazon AWS, Google AppEngine, Microsoft Azure, and Apple iCloud. For cloud providers, in order to get the best return on investment and to provide the best possible service to customers, one critical task is to manage the datacenter resources effectively. Today’s resource management in datacenters involves a core problem, known as virtual machine (VM) placement. Each customer specifies a desired number of VMs, as well as the resource requirements for each VM, including CPU, memory, storage, I/O throughput, and possibly bandwidth between VM pairs XF10 ; WMZ11 ; JPX12 . A cloud provider’s datacenters have a large number of physical machines (PMs) mounted on racks and connected through layers of switches that form the datacenter network Cisco . The VM placement problem is to assign the VMs to the PMs so that a certain cost, profit or performance objective is optimized, subject to the PMs’ resource capacity constraints and possibly network bandwidth constraints.
There is a great variety of VM placement problems, depending on what clouds offer, what customers need, and the performance/cost objectives of both parties. One category of services that customers often request involves anti-colocation requirements, which take the generic form that a set of requested resources should not be co-located, in a sense that depends on the precise specification. For instance, to improve the availability of its service, a customer may require that some of its VMs not be placed on the same physical server or the same server rack AAS14 .
This paper focuses on a special type of anti-colocation requirement – disk anti-colocation. Many VM types offered by public clouds such as Amazon EC2 EC2Inst have multiple virtual disks per VM. When a customer requests such a VM, he may be interested in the following disk anti-colocation requirement: no physical disk of the PM to which the VM is assigned should contain more than one of the VM’s virtual disks. That is, the VM’s virtual disks should be spread out across the physical disks of the PM.
Our earlier paper XTFC17 discussed the use cases and benefits of disk anti-colocation extensively; here, we summarize that discussion. Cloud users often care a great deal about disk IO performance. Since disks local (or directly attached) to the PMs have numerous advantages over network-based storage, such as higher IO throughput, lower latency, more predictable IO performance, lower cost and lower complexity AWS_Storage ; Datadog , they are the preferred storage option for many high-valued, critical applications such as NoSQL databases, Hadoop/MapReduce storage nodes, and log- or data-processing applications Datadog ; ScyllaDB ; DataStax ; Hadoop_hw . For such applications, when a requested VM is assigned to a PM, the VM’s virtual disks will be mapped to the local physical disks of the PM. When disk anti-colocation is satisfied, accesses to different virtual disks do not interfere with each other; the users of the VM can expect improved disk IO performance, especially when RAID is used.
Although our problem adds only one complication – disk anti-colocation – to the classical VM placement problem, it is far more difficult to solve. (In this paper, we do not focus on theoretical computational complexity, but on usable algorithms. The classical VM placement problem is a form of multi-dimensional bin packing; even the basic one-dimensional bin packing problem is NP-hard CK04 .) This greater difficulty can be seen later from the problem formulations, for instance, by counting the number of decision variables. It can also be seen intuitively. There are two levels of assignment to be made: one is to assign VMs to PMs; the other is to assign virtual disks to physical disks. What makes the overall problem especially difficult is that the two levels of assignment are intertwined. To the best of our knowledge, there are no known optimal combinatorial algorithms to solve the problem, other than naive enumeration (see Section 2 for detailed discussion).
We advocate the use of mixed integer programming (MIP) WN99 formulations and algorithms for our problem. The benefits of using MIP were argued in XTFC17 , and MIP has been used in a number of prior studies on similar resource management problems BCFF12 ; Fang2013 ; PLPMC13 ; RG17 . The MIP approach should complement other approaches that are frequently used for datacenter resource management, including specialized combinatorial algorithms and heuristic algorithms.
The main challenge with the MIP approach is that MIP algorithms can take a long time to find an optimal solution. Typical strategies to cope with that challenge include finding better algorithms, using more powerful computers to run the algorithms, or reformulating the problem. In this paper, we explore the third strategy – reformulation Trick05 – to reduce the computation time for the disk anti-colocation problem. The paper presents three MIP formulations. The first one, F1, was the original formulation developed in XTFC17 ; it is shown here for completeness and for comparison. The main contributions of this paper are in developing two additional formulations, F2 and COMB, and in evaluating and comparing the computation time of all three formulations. Formulations F2 and COMB involve non-obvious reformulations: they define the variables very differently from what an obvious formulation does (in our case, formulation F1 is the obvious formulation). As Trick suggests, it is in this type of reformulation that modelers can contribute the most to reducing computation time, because MIP solvers are not sophisticated enough to perform such reformulation themselves Trick05 .
From our evaluation of the formulations, we arrive at the following main observations. Different formulations lead to drastically different computation times. However, which formulation has the least computation time depends on the problem instance, and all three formulations can be useful for the right instances. Formulation COMB is especially flexible and versatile, and it can solve large problem instances. Throughout the paper, we discuss how to decide which formulation to use in different situations.
Due to the inherent difficulty of our problem, when the problem size becomes large enough, no algorithm will be able to solve it optimally. In that case, one has to resort to non-optimal heuristic algorithms. Our earlier work XTFC17 explores how to solve large problem instances with a heuristic decomposition approach. The approach reported in this paper and the approach in XTFC17 are different but complementary.
The rest of the paper is organized as follows. In Section 2, we discuss additional related work. In Section 3, we describe our VM placement problem and present three MIP formulations. In Section 4, we present experimental results to compare the computation time of the three formulations. Conclusions are given in Section 5.
2 Additional Related Work
There is a large body of research on different VM placement problems, such as VM placement with traffic awareness or network constraints MPZ10 ; AL12 ; BCFF12 ; ZYLW15 , with routing JLHCC12 ; PLPMC13 , with resource sharing by co-located VMs SSS11 ; RG15 ; RG17 , with energy awareness Fang2013 ; Li20131222 ; marotta2015simulated , and with random or time-varying resource requirements WMZ11 ; CS14 ; MSY12 . Our earlier work XTFC17 and the follow-up paper by other authors Hbaieb2017 are the only papers that consider disk anti-colocation. In Hbaieb2017 , Hbaieb et al. propose a more scalable algorithm combining a decomposition method with a local-search heuristic. Neither paper deals with optimal algorithms.
Most VM placement problems, like ours, are superclasses of the vector bin packing problem, which is well-known to be NP-hard. Even for the vector bin packing problem, there have been relatively few exact (i.e., optimal) algorithms in the literature. Instead, research has focused on approximation algorithms and online algorithms (see CKPT17 for a recent survey). Among the exact algorithms, nearly all are about 1- or 2-dimensional vector packing with identical bins HDC94 ; DIM16 , whereas practical VM placement problems usually have more than two dimensions (i.e., resource types) and different bin (i.e., PM) types. More importantly, many VM placement problems like ours are more than vector bin packing. In our case, even if we had an exact algorithm for general vector bin packing with multiple bin types, it still would not solve our problem, in which disk anti-colocation is coupled with vector bin packing.

A majority of prior studies on VM placement avoid MIP formulations altogether. In the cases where MIP formulations are used, they usually only describe the problem; the algorithms are usually not based on MIP. Instead, the effort is usually on developing specialized combinatorial algorithms, such as multi-dimensional bin-packing heuristics or approximation algorithms AT07 ; CZS11 ; WMZ11 , graph algorithms MPZ10 ; AL12 ; ZYLW15 , or other sophisticated heuristics Fang2013 . None of these are exact algorithms. For VM placement problems, changes to the problem specification often make the original algorithm inapplicable, unless it is a general MIP algorithm. The above algorithms are tailored to the special problems that the authors study, usually relying on certain structures of the problems. In our assessment, they cannot be adapted easily to our problem, due to the addition of the disk anti-colocation requirement, which poses difficult constraints of a different kind.
A small number of prior studies do use MIP, but they consider problems very different from ours Li20131222 ; BCFF12 ; Fang2013 ; PLPMC13 ; WNLL13 ; marotta2015simulated ; RG17 . For instance, Fang2013 studies VM placement with energy-aware routing; PLPMC13 studies the placement of customer-requested virtual networks into the datacenter’s physical substrate, subject to the capacity constraints of physical nodes and physical links; RG17 formulates and solves an MIP problem for sharing-aware VM placement, where co-located VMs can share memory pages. These earlier studies provide only one MIP formulation each and do not attempt problem reformulation.
Practical cloud systems usually adopt less sophisticated heuristics, such as round-robin, first-fit or first-fit-decreasing, as evidenced by open-source middleware stacks cloudstack ; openstack ; euca . While simple heuristics may find solutions quickly, they can also underachieve in terms of performance. In particular, when a problem is sufficiently complex or has difficult constraints, the intuitions needed to develop sound heuristics may fail. The anti-colocation constraints in our problem are difficult, and it is not easy to design a heuristic algorithm that always performs well.
3 Three Problem Formulations
In this section, we present three MIP formulations of our VM placement problem and discuss their complexity and applicability. In the next section, we will evaluate their computation time when a standard MIP solver is used. In Table 1, we summarize the major notation.
$n$: number of VMs;  $m$: number of PMs
$i$: index of a VM;  $j$: index of a PM
$\mathcal{V}$: the set of VMs;  $\mathcal{P}$: the set of PMs
$k$: index of a virtual disk;  $l$: index of a physical disk
$\mathcal{T}$: the set of VM types;  $\mathcal{U}$: the set of PM types
$t$: index of a VM type in $\mathcal{T}$;  $u$: index of a PM type in $\mathcal{U}$
$c$: a configuration ID;  $\mathcal{P}_u$: the set of all type-$u$ PMs
$A_j$: the number of vCPUs that PM $j$ can support
$B_j$: the amount of memory (in GiB) of PM $j$
$e_j$: a fixed cost associated with running PM $j$
$a_i$: the number of vCPUs required by VM $i$
$b_i$: the memory requirement (in GiB) of VM $i$
$D_i$: the set of virtual disks requested by VM $i$
$s_{ik}$: the requested disk volume size (in GB) of virtual disk $k \in D_i$
$H_j$: the set of available physical disks of PM $j$
$g_{jl}$: the size (in GB) of physical disk $l \in H_j$
$x_{ij}$: the binary assignment variable from VM $i$ to PM $j$
$z_{ijkl}$: the binary variable assigning virtual disk $k$ of VM $i$ to physical disk $l$ of PM $j$
$y_j$: the binary variable indicating whether PM $j$ is used by some VMs
$C_u$: the ID set of all feasible configurations with respect to a type-$u$ PM
$n_t$: the total number of type-$t$ VMs that need to be placed
$d^{(c)}$: the vector representation of configuration $c$
$d_t^{(c)}$: the number of type-$t$ VMs in configuration $c$
$w_{jc}$: a binary PM-to-configuration assignment variable
$\mathcal{P}_2$: the set of PMs with moderate numbers of feasible configurations
$M$: a sufficiently large constant
Consider $n$ VMs and $m$ PMs. Each VM has the following resource requirements: memory, number of vCPUs, number of local disk volumes (virtual ones) and their respective sizes. Each PM has a certain memory capacity, a number of vCPUs that it can support, and a number of local disks with their respective sizes. These local disks may be in the PM or directly attached.
We first give an overview of the constraints for our problem.

- There are the usual capacity constraints for each resource: with respect to the vCPU or memory resource, the total amount of resource required by all the VMs assigned to a PM cannot exceed the resource capacity of the PM.

- The next set of constraints is quite special, and it makes our problem different from the usual VM placement problems. When multiple virtual disks are requested for a VM, there is a disk anti-colocation constraint: no physical disk of the PM (to which the VM is assigned) should contain more than one of the VM’s requested virtual disks. The motivations for such a constraint were given in Section 1.

- A final set of constraints is that the aggregate size of all virtual disks assigned to a physical disk cannot exceed the capacity of the physical disk.
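The three constraint groups above can be made concrete with a small feasibility checker. The following is a minimal sketch (the function name and data layout are our own, purely illustrative): given a candidate VM-to-PM assignment and a virtual-to-physical disk mapping, it verifies the capacity, disk anti-colocation, and disk-capacity constraints in turn.

```python
def placement_is_feasible(vms, pms, vm_to_pm, vdisk_to_pdisk):
    """vms[i] = {"cpu": ..., "mem": ..., "disks": [sizes]};
    pms[j] = {"cpu": ..., "mem": ..., "disks": [sizes]};
    vm_to_pm[i] = j;  vdisk_to_pdisk[(i, k)] = physical disk index on VM i's PM."""
    # 1. Usual capacity constraints: total vCPU/memory demand fits each PM.
    for j, pm in enumerate(pms):
        hosted = [i for i, pj in vm_to_pm.items() if pj == j]
        if sum(vms[i]["cpu"] for i in hosted) > pm["cpu"]:
            return False
        if sum(vms[i]["mem"] for i in hosted) > pm["mem"]:
            return False
    # 2. Disk anti-colocation: each VM's virtual disks land on distinct physical disks.
    for i in vm_to_pm:
        targets = [vdisk_to_pdisk[(i, k)] for k in range(len(vms[i]["disks"]))]
        if len(set(targets)) != len(targets):
            return False
    # 3. Disk capacity: aggregate virtual-disk size fits each physical disk.
    load = {}
    for (i, k), l in vdisk_to_pdisk.items():
        j = vm_to_pm[i]
        load[(j, l)] = load.get((j, l), 0) + vms[i]["disks"][k]
    return all(total <= pms[j]["disks"][l] for (j, l), total in load.items())
```

Such a checker validates a candidate solution but does not construct one; constructing an optimal solution is the subject of the formulations below.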
The optimization objective will ultimately be decided by the cloud provider. For concreteness, we assume that a fixed operation cost is incurred for a PM as long as the PM is used by some VMs (that is, some VMs are assigned to the PM). Specifically, when a PM is turned on to host some VMs, there is a fixed cost associated with running the PM; when the PM is off, there is zero cost. The operation cost may include the average energy cost when a machine is running and typical maintenance cost. The optimization objective is to minimize the total operation cost of all the used PMs.
The model can be enriched in many ways. With respect to the costs and objective, we may include load-dependent costs in the optimization objective; for instance, the energy cost of a PM may be larger when the CPU load is higher. The model can also be extended to include local and network bandwidth constraints, although network constraints pose great difficulty and require additional solution techniques MPZ10 ; Fang2013 . Those additional constraints depend on actual customers’ needs and cloud providers’ policies; they vary across customers/providers and change over time. Given the absence of such details, we do not include those additional constraints in this paper. We regard disk anti-colocation as a distinct class of constraints that is worth singling out for a focused investigation.
3.1 Formulation 1 – Direct Assignment
Let the sets of the VMs and PMs be denoted by $\mathcal{V}$ and $\mathcal{P}$, respectively. For each VM $i \in \mathcal{V}$, let $a_i$ be the number of vCPUs required and let $b_i$ be the memory requirement (in GiB). (1 GiB (gibibyte) is equal to $2^{30}$ bytes, i.e., 1,073,741,824 bytes; 1 GB (gigabyte) is equal to $10^9$ bytes.) For each VM $i$, a set of virtual disks is requested, denoted by $D_i$. For each requested virtual disk $k \in D_i$, let $s_{ik}$ be the requested disk volume size (in GB).
For each PM $j \in \mathcal{P}$, let $A_j$ be the number of vCPUs it can support, $B_j$ the amount of memory (in GiB), and $H_j$ the set of available physical disks. The sizes of the physical disks are denoted by $g_{jl}$ (in GB) for $l \in H_j$.
For each $i \in \mathcal{V}$ and each $j \in \mathcal{P}$, let $x_{ij}$ be the binary assignment variable from VM $i$ to PM $j$, which takes the value $1$ if VM $i$ is assigned to PM $j$ and $0$ otherwise. The binary variables $z_{ijkl}$ are used for disk assignment: $z_{ijkl}$ is set to $1$ if VM $i$ is assigned to PM $j$ and the requested virtual disk $k \in D_i$ of VM $i$ is assigned to the physical disk $l \in H_j$ of PM $j$; it is set to $0$ otherwise. Let $y_j$ be a 0-1 variable indicating whether PM $j$ is used by some VMs. The following is our first formulation (F1) for VM placement.
minimize $\sum_{j \in \mathcal{P}} e_j y_j$   (1)

subject to

$z_{ijkl} \le x_{ij}$, $\forall i \in \mathcal{V},\ j \in \mathcal{P},\ k \in D_i,\ l \in H_j$   (2)
$\sum_{j \in \mathcal{P}} \sum_{l \in H_j} z_{ijkl} = 1$, $\forall i \in \mathcal{V},\ k \in D_i$   (3)
$\sum_{j \in \mathcal{P}} x_{ij} = 1$, $\forall i \in \mathcal{V}$   (4)
$\sum_{k \in D_i} z_{ijkl} \le 1$, $\forall i \in \mathcal{V},\ j \in \mathcal{P},\ l \in H_j$   (5)
$\sum_{i \in \mathcal{V}} \sum_{k \in D_i} s_{ik} z_{ijkl} \le g_{jl}$, $\forall j \in \mathcal{P},\ l \in H_j$   (6)
$\sum_{i \in \mathcal{V}} a_i x_{ij} \le A_j y_j$, $\forall j \in \mathcal{P}$   (7)
$\sum_{i \in \mathcal{V}} b_i x_{ij} \le B_j y_j$, $\forall j \in \mathcal{P}$   (8)
$x_{ij} \in \{0,1\},\ y_j \in \{0,1\}$, $\forall i \in \mathcal{V},\ j \in \mathcal{P}$   (9)
$z_{ijkl} \in \{0,1\}$, $\forall i \in \mathcal{V},\ j \in \mathcal{P},\ k \in D_i,\ l \in H_j$   (10)
The following explains some of the constraints:

- (2) ensures that the requested virtual disks for VM $i$ may be assigned to the physical disks of PM $j$ only if VM $i$ is assigned to PM $j$.

- (3) ensures that every requested virtual disk is assigned to exactly one physical disk.

- (4) ensures that every VM is assigned to exactly one PM.

- (5) ensures that no physical disk receives more than one virtual disk of the same VM, i.e., the disk anti-colocation constraint.

- (6) is the disk capacity constraint.
Remark. The difficulty of our problem is reflected first by the $z_{ijkl}$ variables, which are indexed by four subscripts, implying a large number of such variables. Moreover, there are two levels of assignment, VM assignment and disk assignment, and (2) implies that they cannot be separated. Finally, in formulation F1, we assume that each active PM has a fixed cost. In reality, some cost may be load-dependent. For instance, the energy cost of a PM may depend on the number of VMs assigned to it. If the load-dependent energy cost needs to be incorporated and if the energy cost depends on the load linearly, our model requires only a small modification: we modify the objective function by adding a linear term in the $x_{ij}$ variables. There are no other changes to the constraints.
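To make the intertwining of the two assignment levels concrete, the sketch below solves a toy instance by naive enumeration, the only known optimal combinatorial approach mentioned earlier: a VM-to-PM assignment is accepted only if a disk-level assignment satisfying anti-colocation also exists. This is purely illustrative (all names and data layouts are our own) and feasible only for tiny instances; realistic instances require the MIP formulations.

```python
import itertools

def disks_fit(vms, pms, assign, j):
    """Backtracking check: can the virtual disks of all VMs placed on PM j go
    onto distinct physical disks (anti-colocation) without exceeding sizes?"""
    hosted = [i for i in range(len(vms)) if assign[i] == j]
    pdisks = pms[j][2]

    def place(idx, loads):
        if idx == len(hosted):
            return True
        vdisks = vms[hosted[idx]][2]
        # Distinct physical-disk positions enforce anti-colocation.
        for combo in itertools.permutations(range(len(pdisks)), len(vdisks)):
            if all(loads[l] + s <= pdisks[l] for l, s in zip(combo, vdisks)):
                new = list(loads)
                for l, s in zip(combo, vdisks):
                    new[l] += s
                if place(idx + 1, new):
                    return True
        return False

    return place(0, [0] * len(pdisks))

def solve_toy(vms, pms):
    """vms[i] = (cpu, mem, [virtual disk sizes]);
    pms[j] = (cpu, mem, [physical disk sizes], fixed cost).
    Returns the minimum total cost of the used PMs, or None if infeasible."""
    best = None
    for assign in itertools.product(range(len(pms)), repeat=len(vms)):
        ok = True
        for j in range(len(pms)):
            hosted = [i for i in range(len(vms)) if assign[i] == j]
            if (sum(vms[i][0] for i in hosted) > pms[j][0]
                    or sum(vms[i][1] for i in hosted) > pms[j][1]
                    or not disks_fit(vms, pms, assign, j)):
                ok = False
                break
        if ok:
            cost = sum(pms[j][3] for j in set(assign))
            best = cost if best is None else min(best, cost)
    return best
```

Note how every candidate VM assignment triggers a nested search over disk assignments; this coupling is exactly what constraint (2) captures in the MIP.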
3.2 Formulation 2 – Assign Configurations
The numbers of VM and PM types are often much smaller than the total numbers of VMs and PMs, respectively. For example, Amazon EC2 EC2Inst offers only about 40 VM types. Amazon does not disclose the detailed configurations of its PMs, but from EC2Inst one can deduce that the PMs fall into a small number of types. For example, the five m4-type VMs are all supported by 2.4 GHz Intel Xeon E5-2676 v3 (Haswell) processors, and the five c4-type VMs are all supported by Intel Xeon E5-2666 v3 (Haswell) processors. We will take advantage of the small number of VM types. (We will see in Section 3.4 that, for formulations F2 and COMB to work, having a small number of VM types is more important than having a small number of PM types. With a small number of VM types, the dimension of the configuration vectors is small, and the formulations can still be effective even if the number of PM types is in the thousands. The number of PM types mainly affects the pre-computation time spent on enumerating the feasible configurations supported by each PM type; since this enumeration is a one-time effort done in advance, that time is not counted towards the computation time for solving an instance of the VM placement problem.)
Let $\mathcal{T}$ denote the set of VM types. For each $t \in \mathcal{T}$, let $n_t$ be the total number of type-$t$ VMs that need to be placed. Let $\mathcal{U}$ denote the set of PM types. For each $u \in \mathcal{U}$, let $\mathcal{P}_u$ be the set of all type-$u$ PMs. For different $u$, the sets $\mathcal{P}_u$ are disjoint.
A configuration with an ID $c$ of a PM is a vector of non-negative integers, denoted by $d^{(c)} = (d_1^{(c)}, \ldots, d_{|\mathcal{T}|}^{(c)})$, where each $d_t^{(c)}$ represents the number of type-$t$ VMs assigned to the PM in configuration $c$. We say a configuration is feasible with respect to a PM if the configuration is supportable by the PM’s resources, including allowing the disk anti-colocation constraints to be satisfied. For instance, suppose there are only four VM types and suppose the vector $(2, 0, 1, 3)$ is a feasible configuration for a PM. That means the PM can simultaneously support two type-1 VMs, zero type-2 VMs, one type-3 VM and three type-4 VMs. For simplicity, we exclude the all-zero vector as a valid configuration, although this is not essential.
Since all PMs of the same type have the same amount of resources, feasibility of a configuration is also well-defined with respect to a PM type. Note that a configuration can be feasible for more than one PM type.
Suppose every configuration has a unique ID. For each PM type $u$, let $C_u$ be the ID set of all the feasible configurations with respect to a type-$u$ PM. For this formulation, the configurations in $C_u$ are assumed to be known (by preprocessing), and their number is assumed to be not too large, e.g., no more than hundreds of thousands. There are problem instances for which these assumptions hold (see Section 3.4 for the applicability of F2). Note that the disk anti-colocation requirement must be satisfied in any feasible configuration; in the preprocessing step where we enumerate the feasible configurations for each PM type, we check the disk anti-colocation requirement.
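The preprocessing step can be sketched as follows. The code below enumerates candidate configuration vectors for one PM type, pruning by vCPU and memory capacity; its disk anti-colocation check is a simple greedy placement, a conservative stand-in for the exact check (a configuration the greedy rejects might still be packable). All names and data layouts are our own, purely illustrative.

```python
def disks_ok(vm_types, vec, pdisks):
    """Greedy anti-colocation check: place VMs with the most/largest disks
    first, each VM's virtual disks on the largest distinct remaining
    physical disks."""
    remaining = sorted(pdisks, reverse=True)
    vms = [vm_types[t][2] for t in range(len(vec)) for _ in range(vec[t])]
    for vdisks in sorted(vms, key=lambda d: (len(d), sum(d)), reverse=True):
        remaining.sort(reverse=True)
        if len(vdisks) > len(remaining):
            return False  # anti-colocation needs one physical disk per virtual disk
        chosen = remaining[:len(vdisks)]
        for pos, size in enumerate(sorted(vdisks, reverse=True)):
            if size > chosen[pos]:
                return False
            chosen[pos] -= size
        remaining = chosen + remaining[len(vdisks):]
    return True

def enumerate_configs(vm_types, pm):
    """vm_types[t] = (cpu, mem, [virtual disk sizes]);
    pm = (cpu, mem, [physical disk sizes]).
    Yields every non-zero vector (d_1, ..., d_T) deemed feasible for the PM type."""
    T = len(vm_types)

    def rec(t, vec):
        if t == T:
            if any(vec):
                yield tuple(vec)
            return
        n = 0
        while True:
            vec[t] = n
            cpu = sum(vec[s] * vm_types[s][0] for s in range(T))
            mem = sum(vec[s] * vm_types[s][1] for s in range(T))
            if cpu > pm[0] or mem > pm[1]:
                break  # larger counts of type t cannot fit either
            if disks_ok(vm_types, vec, pm[2]):
                yield from rec(t + 1, list(vec))
            n += 1

    yield from rec(0, [0] * T)
```

Because the enumeration is done once per PM type, in advance, its cost does not count towards solving a placement instance.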
For each PM type $u \in \mathcal{U}$, each PM $j \in \mathcal{P}_u$ and each $c \in C_u$, let $w_{jc}$ be the 0-1 assignment variable with $w_{jc} = 1$ if and only if PM $j$ is assigned to take configuration $c$. The second formulation (F2) is as follows.
minimize $\sum_{u \in \mathcal{U}} \sum_{j \in \mathcal{P}_u} e_j y_j$   (11)

subject to

$\sum_{c \in C_u} w_{jc} = y_j$, $\forall u \in \mathcal{U},\ j \in \mathcal{P}_u$   (12)
$\sum_{u \in \mathcal{U}} \sum_{j \in \mathcal{P}_u} \sum_{c \in C_u} d_t^{(c)} w_{jc} \ge n_t$, $\forall t \in \mathcal{T}$   (13)
$w_{jc} \in \{0,1\}$, $\forall u \in \mathcal{U},\ j \in \mathcal{P}_u,\ c \in C_u$   (14)
$y_j \in \{0,1\}$, $\forall j \in \mathcal{P}$   (15)
The following explains some of the constraints: (12) ensures that each PM takes at most one configuration and links the $w_{jc}$ variables with the indicator $y_j$; (13) ensures that the chosen configurations jointly cover all the requested VMs of each type.

Formulation F2 is visibly very different from formulation F1. It is useful if, for each PM type, the feasible configurations are enumerable and their number is not too large. More detailed analysis of the formulations is deferred to Section 3.4.
3.3 Formulation 3 – Combined Formulation
For some PM types, the number of feasible configurations may be too large for formulation F2 to be useful; i.e., F2 would have too many variables. For example, the l6 PM type in Table 3 has millions of feasible configurations (see Section 4.1). For other PM types, the number of feasible configurations may be small. For instance, if a PM does not have a lot of physical resources (e.g., it supports only a small total number of vCPUs), then the number of feasible configurations is usually small; the s1 PM type in Table 3 has only 10 feasible configurations. We next consider a hybrid approach that combines formulations F1 and F2.
Let $\mathcal{P}_2$ be the set of PMs whose number of feasible configurations is not only enumerable, but also not too large (say, up to hundreds of thousands). Let $\mathcal{P}_1$ denote the set of the remaining PMs, i.e., $\mathcal{P}_1 = \mathcal{P} \setminus \mathcal{P}_2$. The cutoff between the two sets should be based on computational experience in the actual environment where our method is applied (see Section 3.4 for more discussion). The VM assignment to the PMs in $\mathcal{P}_2$ is done by choosing a configuration for each PM, as in formulation F2. The assignment to the PMs in $\mathcal{P}_1$ is done with the direct approach, i.e., by assigning VMs to PMs directly as in formulation F1. This combined approach is expected to work well if the number of PMs in $\mathcal{P}_1$ is not too large, say, up to several hundred.
Let $\mathcal{U}_2$ be the set of PM types of all the PMs in the set $\mathcal{P}_2$. For each $u \in \mathcal{U}_2$, let $\mathcal{P}_u$ be the set of all type-$u$ PMs (which must be in $\mathcal{P}_2$), and let $C_u$ be the ID set of all feasible configurations with respect to a type-$u$ PM. For a VM $i$, let $\tau(i)$ denote its type. Let $w_{jc}$ be the 0-1 assignment variable with $w_{jc} = 1$ if and only if PM $j$ is assigned to take configuration $c$. The variables $x_{ij}$, $z_{ijkl}$ and $y_j$ are as in formulation F1. The combined formulation (COMB) is as follows.
minimize $\sum_{j \in \mathcal{P}_1} e_j y_j + \sum_{u \in \mathcal{U}_2} \sum_{j \in \mathcal{P}_u} \sum_{c \in C_u} e_j w_{jc}$   (16)

subject to

$z_{ijkl} \le x_{ij}$, $\forall i \in \mathcal{V},\ j \in \mathcal{P}_1,\ k \in D_i,\ l \in H_j$   (17)
$\sum_{j \in \mathcal{P}_1} \sum_{l \in H_j} z_{ijkl} = \sum_{j \in \mathcal{P}_1} x_{ij}$, $\forall i \in \mathcal{V},\ k \in D_i$   (18)
$\sum_{j \in \mathcal{P}_1} x_{ij} \le 1$, $\forall i \in \mathcal{V}$   (19)
$\sum_{k \in D_i} z_{ijkl} \le 1$, $\forall i \in \mathcal{V},\ j \in \mathcal{P}_1,\ l \in H_j$   (20)
$\sum_{i \in \mathcal{V}} \sum_{k \in D_i} s_{ik} z_{ijkl} \le g_{jl}$, $\forall j \in \mathcal{P}_1,\ l \in H_j$   (21)
$\sum_{i \in \mathcal{V}} a_i x_{ij} \le A_j$, $\forall j \in \mathcal{P}_1$   (22)
$\sum_{i \in \mathcal{V}} b_i x_{ij} \le B_j$, $\forall j \in \mathcal{P}_1$   (23)
$\sum_{i \in \mathcal{V}} x_{ij} \le M y_j$, $\forall j \in \mathcal{P}_1$   (24)
$y_j \le \sum_{i \in \mathcal{V}} x_{ij}$, $\forall j \in \mathcal{P}_1$   (25)
$\sum_{c \in C_u} w_{jc} \le 1$, $\forall u \in \mathcal{U}_2,\ j \in \mathcal{P}_u$   (26)
$x_{ij},\ y_j,\ z_{ijkl} \in \{0,1\}$   (27)
$w_{jc} \in \{0,1\}$   (28)
$\sum_{i \in \mathcal{V}: \tau(i) = t} \sum_{j \in \mathcal{P}_1} x_{ij} + \sum_{u \in \mathcal{U}_2} \sum_{j \in \mathcal{P}_u} \sum_{c \in C_u} d_t^{(c)} w_{jc} \ge n_t$, $\forall t \in \mathcal{T}$   (29)
The following explains some of the constraints:

- (17)–(25) deal with direct VM assignment to the PMs in the set $\mathcal{P}_1$, which should be compared with (2)–(10) in formulation F1. More specifically, (17) ensures that the requested virtual disks for VM $i$ may be assigned to the physical disks of PM $j \in \mathcal{P}_1$ only if VM $i$ is assigned to PM $j$. (18) ensures that every requested virtual disk of VM $i$ is assigned to exactly one physical disk if VM $i$ is assigned to a PM in $\mathcal{P}_1$. (19) ensures that every VM is assigned to at most one PM in $\mathcal{P}_1$. (20) ensures that VM $i$ cannot have more than one of its virtual disks assigned to the same physical disk; (17) and (20) together enforce the disk anti-colocation constraints. (21) is the disk capacity constraint. (22) and (23) are the resource capacity constraints posed by the number of vCPUs and the total memory size of each PM $j \in \mathcal{P}_1$. (24) and (25) together ensure that $y_j = 1$ if and only if $x_{ij} = 1$ for some $i$, where $M$ is a large enough constant (it is enough to take $M = n$).

- The constraint (29) guarantees that all the VMs are assigned, either directly to the PMs in $\mathcal{P}_1$ or through the configurations chosen for the PMs in $\mathcal{P}_2$.
3.4 Analysis of the Formulations
Formulations F2 and COMB are derived by reformulating the variables. They exploit special structures of the problem and define the variables very differently from what the obvious formulation, F1, does. As a result, the three formulations often have drastically different numbers of variables and constraints for the same problem instance, and our computational experience has shown that the differences in computation time are often enormous. By counting the numbers of variables and constraints, it is often easy to see which formulation may be suitable and which are definitely impractical. (The branch-and-bound algorithm used by MIP solvers involves visiting the nodes of a branch-and-bound tree and solving a linear programming (LP) problem at each visited node. The numbers of variables and constraints are good predictors of the computation time of each LP problem. The number of constraints is a trickier criterion to use, as sophisticated MIP solvers often add constraints in an attempt to tighten the LP relaxations; the objective is to solve the original MIP problem faster by reducing the number of nodes visited on the branch-and-bound tree.) For example, if the number of variables exceeds tens of millions, then the formulation clearly will be difficult to solve. Similarly, if the number of variables in formulation F2 exceeds that in formulation F1 by orders of magnitude, then F2 will most likely be more difficult to solve; if the number of constraints in formulation F1 exceeds that in formulation F2 by orders of magnitude, then F1 will most likely be more difficult to solve.

In order to use formulations F2 and COMB, the feasible configurations supported by each PM type need to be pre-computed, by enumeration. Since this enumeration is a one-time effort done in advance, the time spent on it is not counted towards the computation time for solving an instance of the VM placement problem. For each PM type, we only need to enumerate up to a million feasible configurations. If a PM type has more than a million feasible configurations, formulation F2 will not be solvable; we have to use formulation COMB and apply direct VM assignment to the PMs of that type.
3.4.1 Formulation F1
The total number of variables is dominated by the number of $z_{ijkl}$ variables. That number is equal to the product of the total number of virtual disks in the problem and the total number of physical disks, i.e., $\big(\sum_{i \in \mathcal{V}} |D_i|\big) \times \big(\sum_{j \in \mathcal{P}} |H_j|\big)$. The number of constraints is roughly the same.
Based on our computational experience, when both numbers exceed hundreds of thousands, formulation F1 is impractical. When both numbers are between tens of thousands and hundreds of thousands, F1 is likely solvable but may take a long time. When both numbers are below tens of thousands, the formulation is often solvable fairly quickly.
3.4.2 Formulation F2
The total number of variables is dominated by the number of $w_{jc}$ variables, which equals the total number of feasible configurations summed over all the PMs, i.e., $\sum_{u \in \mathcal{U}} |\mathcal{P}_u| \cdot |C_u|$. If that number is in the millions or more, the formulation will be either slow to solve or impossible to solve. Otherwise, the formulation is generally faster to solve than formulation F1. The number of constraints is roughly proportional to the number of PMs, which is comparably small.
Formulation F2 is useful when the total number of configurations supported by all the PM types is not too large, e.g., under hundreds of thousands. It is generally easy to see when F2 is entirely impractical. For instance, a PM of a large type may have an exceedingly large number of feasible configurations, which will result in an exceedingly large number of variables and make formulation F2 impractical. An example is given in Section 4.1.
A small or moderate number of feasible configurations can happen if some combination of the following conditions is satisfied: (i) the number of VM types is small, e.g., dozens or less; (ii) a PM has a small capacity in at least one resource type, e.g., 4–8 vCPUs; or (iii) there are policy-based restrictions ensuring that certain PM types are used only for a small number of specific VM types.
As an example of (ii), all the PM types with 8 vCPUs cannot accommodate any VM of the type i2.8xlarge, which demands 32 vCPUs (see Tables 2 and 3). In general, for small or medium PM types, the number of feasible configurations is usually small because (1) a subset of the VM types is ruled out, and (2) for each remaining VM type, only a small number of such VMs can be assigned to a PM of the small or medium types due to resource scarcity.
As an example of (iii), a cloud provider may have a policy that the PMs of the large types are reserved for resource-intensive VM types. Such a policy is sensible for both economic and performance reasons, e.g., meeting the performance goals of high-value customers. More concretely, if each l2-type PM is only allowed to host the VM types with large vCPU requirements, then the number of feasible configurations is reduced by orders of magnitude (see Tables 2 and 3).
3.4.3 Formulation COMB
For some large PM types, the number of feasible configurations may be large (say, more than hundreds of thousands). This is where formulation COMB is useful. We regard formulation COMB as one of the key contributions of the paper because it treats large PMs separately, using direct VM assignment rather than configurations, while treating the small or medium PM types with configurations.
The total number of variables is roughly equal to the sum of the number of $z_{ijkl}$ variables and the number of $w_{jc}$ variables. The number of $z_{ijkl}$ variables is equal to $\big(\sum_{i \in \mathcal{V}} |D_i|\big) \times \big(\sum_{j \in \mathcal{P}_1} |H_j|\big)$, which should be compared with the corresponding count in the case of F1. The number of $w_{jc}$ variables is equal to $\sum_{u \in \mathcal{U}_2} |\mathcal{P}_u| \cdot |C_u|$, the number of feasible configurations supported by all the PMs in the set $\mathcal{P}_2$. The number of constraints is roughly of the same order as the number of $z_{ijkl}$ variables.
Thus, for formulation COMB to be effective, it is necessary that $\mathcal{P}_1$ contains a small number of PMs (e.g., no more than hundreds), and that the PMs in $\mathcal{P}_2$ support a small to moderate number of feasible configurations, e.g., no more than hundreds of thousands. There is flexibility in setting the sets $\mathcal{P}_1$ and $\mathcal{P}_2$. Based on the above discussion, $\mathcal{P}_1$ should contain a small number of “large” PMs, i.e., PMs with rich resources. With respect to the PMs in $\mathcal{P}_2$, a small or moderate number of feasible configurations can happen under the conditions (i)–(iii) given in Section 3.4.2. The above discussion provides a guideline for narrowing down the choices of $\mathcal{P}_1$ and $\mathcal{P}_2$. The final decision can be made based on computational experience and by comparing the actual numbers of variables and constraints for different choices, as these numbers can be easily computed.
Formulation COMB offers the most flexibility and applicability because it contains formulations F1 and F2 as special cases. One can design a good COMB formulation to speed up the computation or to solve larger instances.
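The guideline above can be sketched as a small helper that splits the PMs by a configuration-count cutoff and reports the resulting number of configuration variables, so different cutoffs can be compared. The function name, the set names in the comments, and the data layout are our own, purely illustrative.

```python
def split_pms(config_counts, cutoff):
    """config_counts[j] = number of feasible configurations of PM j.
    PMs at or under the cutoff go to the configuration-based set (P2 in the
    text); the rest are handled by direct assignment (P1)."""
    p2 = [j for j, c in config_counts.items() if c <= cutoff]
    p1 = [j for j in config_counts if config_counts[j] > cutoff]
    w_vars = sum(config_counts[j] for j in p2)  # one w variable per (PM, config)
    return p1, p2, w_vars
```

Running this for several cutoff values, together with the variable-count estimates discussed above, narrows down a good choice of the two sets before any MIP is solved.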
3.4.4 Summary of Formulation Analysis

- In F1, the number of variables and the number of constraints are comparable. If F1 is impractical from the computational point of view, it is because both numbers are large.

- F2 usually has a small number of constraints. If the number of variables is also small, which depends on the PM types in the problem instance, then F2 is likely to be faster to solve than F1. When F2 is impractical, it is usually because the number of variables is too large, which in turn is due to the presence of some resource-rich (“large”) PM types.

- If F2 is impractical, one can consider formulation COMB. The key is to decide the sets $\mathcal{P}_1$ and $\mathcal{P}_2$; the former contains the resource-rich PMs. In many problem instances, it is possible to drastically reduce the number of variables compared with F2 while increasing the number of constraints only moderately; COMB will then be effective. When COMB is impractical, it is usually because there is a large number of resource-rich PMs, making the set $\mathcal{P}_1$ large. In practice, however, that is unlikely to happen often, because cloud providers prefer commodity PMs for cost and ease-of-management reasons; large PMs are rare, specialty items for special customers.

- There will be problem instances for which none of the formulations is practical. In those cases, one has to resort to other strategies, most likely heuristic algorithms; but the solutions will not be optimal.
4 Experiments
In this section, we present problem instances and solve the three formulations using the MIP solver Gurobi Gurobi . The main objective is to compare the computation times and show the vast differences among the three formulations. The results will reveal that formulation COMB can be used for large and complex problem instances.
4.1 Setup
We follow the VM and PM setup of Amazon’s EC2 EC2Inst as closely as we can. We take a subset of the allowed VM types (classes) of Amazon’s EC2. Their resource requirements are shown in Table 2. Cloud providers generally don’t disclose the detailed capabilities of all their PMs. As discussed in Section 3.2, the number of PM types is likely small. For the experiments, we assume the PM types are as shown in Table 3. The amount of resources of each PM type is largely our guess based on the information revealed on Amazon’s web site. The operation costs (in the 5th column) are also based on our estimate. (Footnote 5: The large cost increase when the number of disks exceeds 4 reflects the cost of running separate DAS (direct-attached storage) devices.) The costs are normalized, with the lowest operation cost chosen as the base. Since the problem is linear, the choice of the normalization base does not matter: scaling the base cost scales the optimal cost by the same factor.

VM Type  vCPU  Memory (GiB)  Storage (all SSD; GB)

m3.medium  1  3.75  1 4 
m3.large  2  7.5  1 32 
m3.xlarge  4  15  2 40 
m3.2xlarge  8  30  2 80 
c3.large  2  3.75  2 16 
c3.xlarge  4  7.5  2 40 
c3.2xlarge  8  15  2 80 
c3.4xlarge  16  30  2 160 
c3.8xlarge  32  60  2 320 
r3.large  2  15.25  1 32 
r3.xlarge  4  30.5  1 80 
r3.2xlarge  8  61  1 160 
r3.4xlarge  16  122  1 320 
r3.8xlarge  32  244  2 320 
i2.xlarge  4  30.5  1 800 
i2.2xlarge  8  61  2 800 
i2.4xlarge  16  122  4 800 
i2.8xlarge  32  244  8 800 
For Amazon EC2, each vCPU corresponds to a hyperthread of a physical core Field14 . In our experiments, we assume the PMs all support two hyperthreads per physical core. Hence, each physical core counts as 2 vCPUs. As an example, each Xeon E5-2680 processor has 8 cores and supports a total of 16 threads. A PM with one such processor offers 16 vCPUs.
For the PM types s1-s4 and m1-m5, we precomputed all their feasible configurations. As stated earlier, the precomputation step is a one-time effort and the required time is not counted toward the final computation time. In fact, for these PM types, the numbers of feasible configurations are quite small: s1: 10, s2: 36, s3: 174, s4: 174, m1: 315, m2: 2113, m3: 4247, m4: 4247, m5: 3199. For the PM types l1-l6, we did not precompute their feasible configurations because they have far more resources and the numbers of feasible configurations are large. For example, the l6 PM type has millions of feasible configurations. Therefore, when PM types l1-l6 are involved in the experiments, we use formulation COMB instead of formulation F2. Experiments I and II were done with Gurobi 5.6.3 on a ThinkPad 220i laptop with 2 Intel i3 cores and 10G RAM. The other experiments were done with Gurobi 6.5.2 on a ThinkPad 240 laptop with 2 Intel i7 cores and 8G RAM. Gurobi is one of the most highly regarded MIP solvers. Comparison results suggest that Gurobi is at least competitive with two other major commercial MIP solvers, CPLEX and XPRESS Mittelmann16 . All these commercial solvers are much faster than open-source alternatives.
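To illustrate the precomputation step, the following sketch enumerates feasible configurations for a single PM type by bounded enumeration over VM-type counts. The VM resource triples and PM capacities below are illustrative assumptions, not the paper's actual data, and disk-capacity checks are omitted for brevity; only a necessary disk anti-colocation condition (each VM needs no more virtual disks than the PM has physical disks) is checked.

```python
from itertools import product

# Hypothetical per-VM resource data: (vCPUs, memory in GiB, number of
# virtual disks). Names follow EC2 but the triples are illustrative.
VM_TYPES = {
    "m3.medium": (1, 3.75, 1),
    "m3.large": (2, 7.5, 1),
    "c3.large": (2, 3.75, 2),
}

def feasible_configurations(pm_vcpu, pm_mem, pm_disks):
    """Enumerate all VM-count vectors that fit the PM's vCPU and memory
    capacity and satisfy a necessary disk anti-colocation condition
    (disk-capacity checks are omitted in this sketch)."""
    names = list(VM_TYPES)
    # Per-type count upper bound from the tighter of vCPU and memory.
    bounds = [min(pm_vcpu // v, int(pm_mem // m)) for v, m, _ in VM_TYPES.values()]
    configs = []
    for counts in product(*(range(b + 1) for b in bounds)):
        vcpu = sum(c * VM_TYPES[n][0] for c, n in zip(counts, names))
        mem = sum(c * VM_TYPES[n][1] for c, n in zip(counts, names))
        # The largest per-VM virtual disk count among the used VM types
        # must not exceed the PM's physical disk count.
        max_d = max((VM_TYPES[n][2] for c, n in zip(counts, names) if c), default=0)
        if vcpu <= pm_vcpu and mem <= pm_mem and max_d <= pm_disks:
            configs.append(dict(zip(names, counts)))
    return configs
```

For small PM types such enumeration terminates quickly; for resource-rich PMs the count explodes combinatorially, which is consistent with why formulation COMB is used for the l-type PMs.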
PM Type  vCPU  Memory (GiB)  Storage (all SSD; GB)  Operation Costs (normalized)
s1  8  16  1 256  100 
s2  8  32  1 512  120 
s3  8  64  2 512  200 
s4  8  64  4 512  300 
m1  16  32  2 512  600 
m2  16  64  4 512  700 
m3  16  128  4 1000  900 
m4  16  256  8 1000  1500 
m5  16  256  16 512  1800 
l1  32  256  4 1000  2500 
l2  48  512  8 1000  3500 
l3  64  1024  4 1000  5000 
l4  80  2048  16 1600  7000 
l5  120  4096  4 1000  9000 
l6  120  4096  24 1600  12000 
4.2 Comparison with Greedy Randomized Heuristic Algorithm
As a target for performance comparison, we developed our own heuristic algorithm. It is motivated by the general ideas of online heuristic algorithms Tang_ascalable ; AAS14 ; Li20131222 , but should achieve much lower costs than the latter due to two exhaustive search steps, which we describe below.
Imagine that VM requests arrive dynamically. An online randomized algorithm assigns each requested VM to some random PM, one at a time, in the arrival order of the VM requests. Note that, in our experiments, all the VMs to be placed are given together in a batch. Our greedy randomized algorithm first randomly permutes the list of all the requested VMs; this emulates the random arrival order of the VM requests. For each VM in the permuted list, an attempt is made to assign the VM to a PM. The greedy aspect is that, for assignment, the list of used PMs (those already with some assigned VMs) is checked first; if the VM cannot be assigned to any PM in the used list, then the list of unused PMs is checked. The greediness tends to lead to more VM consolidation. In scanning either PM list, the order of scanning is uniformly random to emulate random selection; the first PM in the list that can accommodate the VM is selected (first-fit). (Footnote 6: For a large datacenter, scalable online algorithms cannot afford to search through all the used PMs or unused PMs for each VM request. A typical strategy is to randomly sample a few used PMs and, if that does not work out, pick randomly an unused PM with sufficient resources. Our heuristic algorithm should do better in the achievable objective value. A more sophisticated algorithm is to keep track of an ordered list of all the PMs according to a certain criterion and assign the VM to the first one on the list that fits. In this case, exhaustive search is needed and scalability is limited. Our heuristic algorithm does not maintain an ordered list because there is no obvious criterion for the order due to the difficult disk anti-colocation requirement.)
For each scanned PM, our heuristic algorithm checks whether it is possible to assign the currently considered VM to that PM. For vCPU or memory, all that is needed is to check whether the remaining number of vCPUs or the remaining memory is sufficient for the VM. For disk assignment, the algorithm exhaustively enumerates the different disk assignment possibilities and uses the first one that is feasible. (Footnote 7: Checking the feasibility of disk assignment can be done by a standard assignment algorithm, which may be faster than enumeration but still takes some time. Either way, our heuristic algorithm has limited scalability, since in the worst case there is one disk assignment problem for every PM and for every VM request. But, it should achieve a lower cost than more scalable online randomized algorithms that do not check all the PMs for all possible disk assignment possibilities.) If the disk assignment (for the currently considered VM and PM) cannot be done by the algorithm, it is because the assignment is infeasible.
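The heuristic described above (random VM permutation, used-PMs-first first-fit scan, exhaustive disk-assignment enumeration) can be sketched as follows. The class and function names are hypothetical and the resource bookkeeping is simplified; the disk anti-colocation check tries every assignment of virtual disks to distinct physical disks, as in the algorithm's enumeration step.

```python
import random
from itertools import permutations

class PM:
    """Remaining capacity of one physical machine (hypothetical record)."""
    def __init__(self, vcpu, mem, disks):
        self.vcpu, self.mem = vcpu, mem
        self.disks = list(disks)  # remaining capacity per physical disk

def try_assign(pm, vm_vcpu, vm_mem, vm_disks):
    """Assign the VM to this PM if vCPU, memory, and a disk anti-colocation
    placement (each virtual disk on a distinct physical disk) all fit."""
    if vm_vcpu > pm.vcpu or vm_mem > pm.mem:
        return False
    # Exhaustively try assignments of virtual disks to distinct physical disks.
    for slots in permutations(range(len(pm.disks)), len(vm_disks)):
        if all(vm_disks[i] <= pm.disks[s] for i, s in enumerate(slots)):
            for i, s in enumerate(slots):
                pm.disks[s] -= vm_disks[i]
            pm.vcpu -= vm_vcpu
            pm.mem -= vm_mem
            return True
    return False

def greedy_place(vms, pms, seed=0):
    """Randomly permute the VMs, then first-fit each one, scanning the
    already-used PMs (in random order) before the unused ones."""
    rng = random.Random(seed)
    vms = vms[:]
    rng.shuffle(vms)
    used, unused = [], pms[:]
    placement = []
    for vm in vms:
        rng.shuffle(used)
        rng.shuffle(unused)
        for pool in (used, unused):
            pm = next((p for p in pool if try_assign(p, *vm)), None)
            if pm:
                if pool is unused:
                    unused.remove(pm)
                    used.append(pm)
                placement.append((vm, pm))
                break
        else:
            placement.append((vm, None))  # infeasible on every PM
    return placement
```

Note the factorial cost of the permutation loop, which mirrors the limited scalability acknowledged in the footnotes above.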
4.3 Main Results
We summarize the computation times and achieved costs of all experiments in Table 6 and Table 8. For the randomized heuristic, each test case is repeated for times and the average cost is reported. Note that, regardless of the VM-PM ratios that we have experimented with, typically not all the PMs are used by the VMs in our solutions. The VMs are consolidated into fewer PMs because our optimization objective is to minimize the total operation cost of the active (i.e., used) PMs. Out of the PMs, only the used ones incur costs.
4.3.1 Experiment I – VMs and PMs
We experimented with a problem of assigning VMs to PMs. The detailed setup is in Table 4. In this problem instance, only the small and medium types of PMs are used. Hence, we can compare formulations F1 and F2. Judging by the VM and PM numbers, this is a small instance. However, formulation F1 involves binary variables and constraints, which make it nontrivial for any optimization software. Formulation F2 involves variables and constraints. Formulations F1 and F2 are solved by Gurobi in and seconds, respectively, both yielding the optimal cost with PMs used. The results demonstrate that if all the feasible configurations can be precomputed and if their numbers are not too large, formulation F2 may be solved much faster than formulation F1. The reason is that F2 has far fewer constraints. We also experimented with the randomized heuristic algorithm. The average cost obtained by the heuristic algorithm is , which is about higher than the optimal cost of .
The obtained optimal solutions are useful for other purposes. For instance, they indicate which resources are likely to be critical for different PM types. In Fig. 1 and Fig. 2, we show the resource utilization of the PMs in the optimal solutions for formulations F1 and F2. For each resource and each PM, the utilization of that resource on the PM is defined as the ratio of the total amount requested by all the VMs assigned to that PM to the total amount available from that PM. For instance, suppose two VMs are assigned to a PM, each VM requires 4 vCPUs, and the PM supports 8 vCPUs. Then, the vCPU utilization on that PM is 100%. Both solutions use PMs s1: ; s2: ; s3: ; s4: . Both solutions show very similar patterns of resource utilization. The vCPUs are critical resources for PM types s2, s3 and s4. The number of local disks (labeled ‘#lssd’) is a critical resource for PM types s1, s2 and s3, in the sense that all those disks tend to have some virtual disks assigned to them. The memory utilization is high for PM types s1 and s2. The utilization of the physical disk capacity (labeled ‘lssd size’) is generally low (less than ). However, we should caution that these observations may change if the PMs have different resource configurations from what we are currently assuming.
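The utilization metric defined above amounts to a per-resource ratio and can be computed directly; the resource dictionaries below are illustrative, not the paper's data format.

```python
def utilization(assigned_vms, pm_capacity):
    """Per-resource utilization of one PM: the total amount requested by
    its assigned VMs divided by the PM's capacity, for each resource key.
    (Illustrative helper; resource dicts are hypothetical.)"""
    return {res: sum(vm[res] for vm in assigned_vms) / cap
            for res, cap in pm_capacity.items()}
```

With the worked example above (two 4-vCPU VMs on an 8-vCPU PM), the vCPU utilization evaluates to 1.0, i.e., 100%.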
We also examined a solution produced by the randomized heuristic algorithm with a cost of and active PMs. Fig. 3 shows that each PM type has a resource utilization pattern similar to that of the optimal solutions obtained by F1 and F2. But the heuristic algorithm uses more s1-type PMs and fewer s2-type PMs. The s1 type has a larger vCPU-to-memory ratio than the s2 type. Therefore, the vCPU utilization of the s1 type is generally lower than that of the s2 type. The optimal solutions for F1 and F2 always use up all the available s2-type PMs, whereas the heuristic algorithm assigns more VMs to the s1-type PMs. Hence, the overall performance of the heuristic algorithm is worse.
With the objective of minimizing the total operation cost of the PMs, the optimal solution always seeks to improve the resource utilization of the active PMs. Hence, for almost every active PM, at least one resource is fully utilized. If that is not the case for some active PM, it is because there are no more unassigned VMs that can fit in that PM.
4.3.2 Experiment II – VMs and PMs
In this experiment, VMs will be assigned to PMs. Although the numbers of VMs and PMs are not so different from the previous problem instance, the mixes of the VM and PM types are quite different (see Table 4). Here, we have a fuller mix of almost all types of VMs and PMs. Formulation F2 is impractical for this instance, because the numbers of configurations for some large PM types are too great to be precomputed within a reasonable amount of time. Hence, we compare formulation COMB with formulation F1. Formulation F1 has binary variables and constraints, quite a bit larger than the previous problem instance. Formulation COMB has binary variables and constraints. Formulation F1 takes Gurobi about seconds (about minutes) to solve, which is much longer than for the previous instance. Formulation COMB takes seconds, which is about one third of the time for Formulation F1. We see that, for this problem instance, formulation COMB has a modest computation time advantage over F1, but a great advantage over F2. The optimal assignment has a cost of and the average cost reported by the heuristic is .
With respect to resource utilization, the vCPUs and the number of disks are still critical resources for most PM types. The memory utilization is very high for more than half of the PMs. The disk capacity is often less than utilized for all PM types other than m3, l1 and l5.
4.3.3 Experiments III & IV – around VMs and PMs
We further experimented with a much larger example, where VMs are to be assigned to PMs of different types. The mixes of VMs and PMs are described in the part about Experiment III in Table 4. The results are summarized in Table 6.
For this experiment, formulation F1 fails to finish running due to the large number of variables and constraints. Formulation F2 has binary variables and constraints. It took seconds to solve. We find that the vCPUs and the number of disks are still critical resources for most PM types.
In the setup of Experiment III, there are no large VMs or PMs. For Experiment IV, we added 10 large VMs and 12 large PMs, as shown in Table 4. The experiment emulates a scenario where an enterprise customer deploys around VMs for its workforce. Most of the VMs are ordinary (not very powerful) and are intended for regular office workers. But some large VMs are needed by power users or larger servers, and they must be placed on large PMs. For this experiment, formulation F2 is impractical and formulation F1 fails to finish running. Formulation COMB has binary variables and constraints. It takes seconds to solve.
For the setup of Experiment III, we ran additional experiments on random combinations of the VM and PM types. More specifically, we kept VMs and PMs. We randomly assigned the VMs to the m3 and c3 types, and randomly assigned the PMs to the s and m types. The results of using formulation F2 are reported in Table 7. There are constraints for each combination, and around binary variables, depending on the specific PM types. The computation time is a few dozen seconds.
4.3.4 Experiments V, VI and VII – Policy-Based Examples
As discussed earlier, the number of feasible configurations for the large PM types can be very large, which limits the usefulness of formulations F2 and COMB. In reality, for economic or performance reasons, datacenters may have policies that reserve large PMs for VMs that require a large amount of resources (see Section 3.4 for more discussion). Under such policies, the number of feasible configurations for large PMs can be drastically reduced, making formulations F2 and COMB more widely applicable. This group of experiments demonstrates the above points. The parameters for Experiments V, VI and VII are given in Table 5. Experiment V has 6020 VMs and 2012 PMs, a fairly large deployment. Experiment VII has 7500 VMs and 6000 PMs, an even larger deployment; moreover, all types of VMs and PMs are involved. Experiment VI is a smaller deployment, but has some large VM and PM types.
Suppose a datacenter has the following policy restrictions for the large (l-type) PMs, and suppose there are no restrictions for the small (s-type) or medium (m-type) PM types.

l1 is restricted to: m3.xlarge, m3.2xlarge, c3.xlarge, c3.2xlarge, c3.4xlarge, c3.8xlarge, r3.xlarge, r3.2xlarge, r3.4xlarge, r3.8xlarge, i2.xlarge, i2.2xlarge, i2.4xlarge, i2.8xlarge.

l2 and l3 are restricted to: m3.2xlarge, c3.2xlarge, c3.4xlarge, c3.8xlarge, r3.2xlarge, r3.4xlarge, r3.8xlarge, i2.2xlarge, i2.4xlarge, i2.8xlarge.

l4, l5 and l6 are restricted to: c3.4xlarge, c3.8xlarge, r3.4xlarge, r3.8xlarge, i2.4xlarge, i2.8xlarge.
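A policy table like the one above can be applied as a simple filter on the precomputed configurations, which is how the configuration counts for the l-type PMs shrink. The data structures here are a hypothetical sketch: a configuration is a VM-type-to-count dictionary, and PM types absent from the table are unrestricted.

```python
# Policy table transcribed from the restrictions listed above:
# which VM types each large PM type may host.
POLICY = {
    "l1": {"m3.xlarge", "m3.2xlarge", "c3.xlarge", "c3.2xlarge", "c3.4xlarge",
           "c3.8xlarge", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge",
           "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge"},
    "l2": {"m3.2xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "r3.2xlarge",
           "r3.4xlarge", "r3.8xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge"},
}
POLICY["l3"] = POLICY["l2"]
POLICY["l4"] = POLICY["l5"] = POLICY["l6"] = {
    "c3.4xlarge", "c3.8xlarge", "r3.4xlarge", "r3.8xlarge",
    "i2.4xlarge", "i2.8xlarge"}

def allowed(pm_type, config):
    """Keep a configuration (a VM-type -> count dict) only if every VM
    type it actually uses is permitted on this PM type."""
    vm_set = POLICY.get(pm_type)  # None means unrestricted (s-/m-types)
    return vm_set is None or all(t in vm_set for t, c in config.items() if c)

def filter_configs(pm_type, configs):
    """Drop configurations that violate the policy for this PM type."""
    return [c for c in configs if allowed(pm_type, c)]
```

Because the allowed VM types for the l-type PMs are all large, the surviving configurations pack few VMs each, keeping the configuration counts small enough for formulation F2.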
With the above policy, the number of configurations for each l-type PM is small and we can use formulation F2 to solve the problems in Experiments V, VI and VII. The results are summarized in Table 5. All the problems are solved very quickly under formulation F2, taking from under 1 second to 261 seconds. In contrast, without the policy-based restrictions, the two problems in Experiments V and VII cannot be solved using any of the three formulations. The problem in Experiment VI cannot be solved using formulation F1 or formulation F2.
4.3.5 Experiments with Different VM-PM Ratios
In the cloud environment, the ratio between the VMs and PMs may vary. In this set of experiments, we start with the setup of Experiment III but vary the number of PMs. More specifically, there are , , , and PMs in each experiment, where the proportion of each PM type is the same as that of Experiment III. We keep the VM setup of Experiment III unchanged, which has VMs. The experiment with PMs is not feasible; hence we do not report its result. We summarize the performance results in Table 9, and plot the number of active PMs and the number of assigned VMs of each PM type in Fig. 4 and Fig. 5, respectively. The results show that for the m3-type VMs, the PM types s2, s3 and s4 are more cost-efficient than the s1-type and the m-type PMs. Thus, with more available PMs of each PM type, the optimal solution shifts the VM assignments to the PM types s2, s3 and s4. Though more PMs need to be turned on, the overall cost decreases from to .
VM Type  No. of VMs  PM Type  No. of PMs  
Experiment  I  II  III  IV  Experiment  I  II  III  IV 
m3.medium  36  5  500  500  s1  7  5  150  150 
m3.large  14  5  200  200  s2  7  5  150  150 
m3.xlarge  10  5  150  150  s3  10  5  150  150 
m3.2xlarge  10  5  150  150  s4  7  5  150  150 
c3.large  5  2  m1  5  5  100  100  
c3.xlarge  5  2  m2  5  5  100  100  
c3.2xlarge  5  2  m3  5  5  100  100  
c3.4xlarge  5  2  m4  2  5  50  50  
c3.8xlarge  5  2  m5  2  5  50  50  
r3.large  5  l1  5  2  
r3.xlarge  5  l2  5  2  
r3.2xlarge  5  l3  5  2  
r3.4xlarge  5  l4  5  2  
r3.8xlarge  5  l5  5  2  
l6  2  
i2.xlarge  2  
i2.2xlarge  2  
i2.4xlarge  3 
VM Type  No. of VMs  PM Type  No. of PMs  

Experiment  V  VI  VII  Experiment  V  VI  VII 
m3.medium  0  0  1875  s1  300  0  900 
m3.large  4000  0  750  s2  300  0  900 
m3.xlarge  2000  0  563  s3  300  0  900 
m3.2xlarge  0  15  562  s4  300  0  900 
c3.large  0  0  600  m1  200  10  450 
c3.xlarge  0  0  600  m2  200  10  375 
c3.2xlarge  0  0  150  m3  200  10  375 
c3.4xlarge  3  15  75  m4  100  10  375 
c3.8xlarge  3  15  75  m5  100  10  375 
r3.large  0  0  600  l1  2  5  75 
r3.xlarge  0  0  600  l2  2  5  75 
r3.2xlarge  0  0  150  l3  2  5  75 
r3.4xlarge  3  0  150  l4  2  5  75 
r3.8xlarge  3  15  75  l5  2  5  75 
l6  2  2  75  
i2.xlarge  0  0  300  
i2.2xlarge  3  15  300  
i2.4xlarge  3  15  75  
i2.8xlarge  2  15  75  
Total  6020  105  7500  Total  2012  80  6000 
Experiment  I  II  III  IV 

Run Time of F1  41.46  2278  N/A  N/A 
Run Time of F2  0.31  N/A  6.84  N/A 
Run Time of COMB  N/A  955  N/A  2336 
Cost by Optimization  4540  45300  66040  73340 
Cost by Heuristics  5431  51102  78628  85930 
Combination  Run Time (seconds)  #Constraints  #Binaries 

1  6.84  3018  1099900 
2  20.47  3018  1512195 
3  13.71  3018  1804207 
4  9.45  3018  1865270 
5  14.99  3018  1719534 
6  13.18  3018  2244564 
7  11.48  3018  2090610 
8  17.59  3018  2506636 
9  16.35  3018  2009020 
10  26.50  3018  2260166 
Experiment  V  VI  VII 

Run Time of F2  15s  0s  261s 
Cost by Optimization  657200  170000  1046271 
Cost by Heuristics  666805  184710  1851922 
#Variables of F2  2207686  56  3747950 
#Constraints of F2  6054  633  4018 
#PM  

Cost by Optimization  127120  92700  76100  69040  66040 
Cost by Heuristics  128370  106091  101333  86380  78628 
Active PMs  276  303  337  338  338 
Summary of Tests with Skewed VM-PM Ratios; #VM =
5 Conclusions and Discussions
In this paper, we examine the approach of using MIP formulations and algorithms for a special VM placement problem, which has difficult disk anti-colocation constraints. One of the key challenges is the potentially long computation time of MIP algorithms. We explore how different problem formulations – by redefining variables – can help to reduce the computation time. Our main effort is on developing the non-obvious formulations F2 and COMB. For a given problem instance, the three formulations often have drastically different numbers of variables and constraints, and the differences in computation time are often enormous. For many problem instances, it is easy to see which formulation may be suitable and which are definitely impractical, by counting the numbers of variables and constraints. In the end, all three formulations can be useful, but for different problem instances. They all should be kept in the toolbox for tackling the problem. Of the three, formulation COMB is especially flexible and versatile, and it can solve large problem instances.
The approach used by the paper is extensible to other datacenter resource management problems. For a given problem, different formulations likely exist and they can have very different computation time; which formulation has the least computation time depends on the problem instances. Thus, it is important to explore different formulations and select suitable ones for different instances.
Even with proper formulations, MIP algorithms can only solve what might be considered small to medium problem instances in our application setting, good enough for perhaps 1,000 – 10,000 PMs. To model problems for a large datacenter in its entirety, an MIP formulation may involve trillions of variables and/or constraints, and there is no hope of solving them optimally within acceptable time. In such cases, we show in XTFC17 that a hierarchical decomposition heuristic can be effective. The decomposition method breaks a large, hard problem into many independent subproblems, which can be solved in parallel by separate control servers. Each of the subproblems can be made sufficiently small and solvable quickly using MIP algorithms. The material of this paper is relevant to those subproblems.
Finally, the VM placement problems encountered in practice will likely contain multiple difficult components, expressed by different sets of constraints or requirements. Each of the difficult components may require different techniques to cope with. A complete solution will need to combine those techniques together. This paper examines one such difficult component and provides one class of techniques, which can be used as a building block for solving problems encountered in practice.
6 Acknowledgments
X. Zheng was supported by the Shanghai Committee of Science and Technology, China (Grant No. 14510722300), and the Youth Innovation Promotion Association, CAS.
References
 (1) J. Xu, J. Fortes, Multi-objective virtual machine placement in virtualized data center environments, in: Proc. of IEEE Online Green Communications Conference (GreenCom), 2010.
 (2) M. Wang, X. Meng, L. Zhang, Consolidating virtual machines with dynamic bandwidth demand in data centers, Proc. of IEEE INFOCOM (2011) 71–75.
 (3) H. Jin, D. Pan, J. Xu, N. Pissinou, Efficient VM placement with multiple deterministic and stochastic resources in data centers, in: IEEE Global Communications Conference (GLOBECOM), 2012.
 (4) Cisco Data Center Infrastructure 2.5 Design Guide, [Online] http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_Infra2_5/DCI_SRND.pdf.
 (5) W. C. Arnold, D. J. Arroyo, W. Segmuller, M. Spreitzer, M. Steinder, A. N. Tantawi, Workload orchestration and optimization for software defined environments, IBM Journal of Research and Development 58 (2/3) (2014) 11:1–11:12.
 (6) Amazon, Amazon EC2 Instances, [Online] http://aws.amazon.com/ec2/instancetypes/.
 (7) Y. Xia, M. Tsugawa, J. A. B. Fortes, S. Chen, Large-scale VM placement with disk anti-colocation constraints using hierarchical decomposition and mixed integer programming, IEEE Transactions on Parallel and Distributed Systems 28 (5) (2017) 1361–1374.
 (8) AWS storage services overview – a look at storage services offered by AWS, [Online] https://d0.awsstatic.com/whitepapers/AWS%20Storage%20Services%20Whitepaperv9.pdf (2015 Nov.).
 (9) A. LêQuôc, Top 5 ways to improve your AWS EC2 performance, [Online] https://www.datadoghq.com/blog/top5waystoimproveyourawsec2performance/.
 (10) G. Costa, D. Marti, Choosing EC2 instances for NoSQL, [Online] http://www.scylladb.com/2016/02/26/bestamazonec2instancenosql/ (2016 Feb.).
 (11) L. Poland, What is the story with AWS storage?, [Online] http://www.datastax.com/dev/blog/whatisthestorywithawsstorage (2014 Feb.).
 (12) K. O’Dell, Howto: Select the right hardware for your new Hadoop cluster, [Online] http://blog.cloudera.com/blog/2013/08/howtoselecttherighthardwareforyournewhadoopcluster/ (2013 Aug.).
 (13) C. Chekuri, S. Khanna, On multidimensional packing problems, SIAM Journal on Computing 33 (4) (2004) 837–851.

 (14) L. A. Wolsey, G. L. Nemhauser, Integer and Combinatorial Optimization, Wiley-Interscience, 1999.
 (15) O. Biran, A. Corradi, M. Fanelli, L. Foschini, A. Nus, D. Raz, E. A. Silvera, Stable network-aware VM placement for cloud systems, in: IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2012.
 (16) W. Fang, X. Liang, S. Li, L. Chiaraviglio, N. Xiong, VMPlanner: Optimizing virtual machine placement and traffic flow routing to reduce network power costs in cloud data centers, Computer Networks 57 (1) (2013) 179–196.
 (17) C. Papagianni, A. Leivadeas, S. Papavassiliou, V. Maglaris, C. Cervelló-Pastor, A. Monje, On the optimal allocation of virtual resources in cloud computing networks, IEEE Transactions on Computers 62 (6) (2013) 1060–1071.
 (18) S. Rampersaud, D. Grosu, Sharing-aware online virtual machine packing in heterogeneous resource clouds, IEEE Transactions on Parallel and Distributed Systems 28 (7) (2017) 2046–2059.

 (19) M. Trick, Formulations and reformulations in integer programming, in: International Conference on Integration of Artificial Intelligence (AI) and Operations Research (OR) Techniques in Constraint Programming, 2005.
 (20) X. Meng, V. Pappas, L. Zhang, Improving the scalability of data center networks with traffic-aware virtual machine placement, in: Proceedings of IEEE INFOCOM, 2010.
 (21) M. Alicherry, T. Lakshman, Network aware resource allocation in distributed clouds, in: Proc. of IEEE INFOCOM, IEEE, 2012, pp. 963–971.
 (22) L. Zhang, X. Yin, Z. Li, C. Wu, Hierarchical virtual machine placement in modular data centers, in: IEEE 8th International Conference on Cloud Computing (CLOUD 2015), 2015.
 (23) J. Jiang, T. Lan, S. Ha, M. Chen, M. Chiang, Joint VM placement and routing for data center traffic engineering, in: Proceedings of IEEE INFOCOM, 2012, pp. 2876–2880.
 (24) M. Sindelar, R. K. Sitaraman, P. Shenoy, Sharing-aware algorithms for virtual machine colocation, in: Proceedings of the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA ’11), 2011.
 (25) S. Rampersaud, D. Grosu, Sharing-aware online algorithms for virtual machine packing in cloud environments, in: IEEE 8th International Conference on Cloud Computing (CLOUD 2015), 2015.
 (26) X. Li, Z. Qian, S. Lu, J. Wu, Energy efficient virtual machine placement algorithm with balanced and improved resource utilization in a data center, Mathematical and Computer Modelling 58 (5–6) (2013) 1222–1235.
 (27) A. Marotta, S. Avallone, A simulated annealing based approach for power efficient virtual machines consolidation, in: IEEE 8th International Conference on Cloud Computing (CLOUD), IEEE, 2015, pp. 445–452.
 (28) L. Chen, H. Shen, Consolidating complementary VMs with spatial/temporal-awareness in cloud datacenters, in: IEEE INFOCOM, 2014.
 (29) S. Maguluri, R. Srikant, L. Ying, Stochastic models of load balancing and scheduling in cloud computing clusters, in: Proceedings of IEEE INFOCOM, 2012, pp. 702–710.
 (30) A. Hbaieb, M. Khemakhem, M. B. Jemaa, Using decomposition and local search to solve large-scale virtual machine placement problems with disk anti-colocation constraints, in: 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), 2017, pp. 688–695.
 (31) H. I. Christensen, A. Khan, S. Pokutta, P. Tetali, Approximation and online algorithms for multidimensional bin packing: A survey, Computer Science Review 24 (2017) 63–79.
 (32) B. Han, G. Diehr, J. Cook, Multiple-type, two-dimensional bin packing problems: Applications and algorithms, Annals of Operations Research 50 (1994) 239–261.
 (33) M. Delorme, M. Iori, S. Martello, Bin packing and cutting stock problems: Mathematical models and exact algorithms, European Journal of Operational Research 255 (1) (2016) 1–20.
 (34) Y. Ajiro, A. Tanaka, Improving packing algorithms for server consolidation, in: Proc. of Computer Measurement Group Conference (CMG), 2007.
 (35) M. Chen, H. Zhang, Y. Y. Su, X. Wang, G. Jiang, K. Yoshihira, Effective VM sizing in virtualized data centers, in: Proc. of IFIP/IEEE Integrated Network Management (IM), 2011.
 (36) W. Wang, D. Niu, B. Li, B. Liang, Dynamic cloud resource reservation via cloud brokerage, in: IEEE International Conference on Distributed Computing Systems (ICDCS 2013), 2013.
 (37) Apache CloudStack Project. [Online] http://cloudstack.org/.
 (38) OpenStack Project. [Online] http://www.openstack.org/.
 (39) Eucalyptus Systems. [Online] http://www.eucalyptus.com/.
 (40) Gurobi, Gurobi Web Site, [Online] http://www.gurobi.com/.
 (41) M. Fielding, Virtual CPUs with Amazon web services, [Online] http://www.pythian.com/blog/virtualcpuswithamazonwebservices/ (June 2014).
 (42) H. Mittelmann, Benchmarks for optimization software, [Online] http://plato.la.asu.edu/bench.html.
 (43) C. Tang, M. Steinder, M. Spreitzer, G. Pacifici, A scalable application placement controller for enterprise datacenters, in: Proc. of WWW, 2007.