Multi-factorial Optimization for Large-scale Virtual Machine Placement in Cloud Computing

01/18/2020 ∙ by Zhengping Liang, et al. ∙ Chongqing University ∙ Shenzhen University ∙ NetEase, Inc.

The placement scheme of virtual machines (VMs) onto physical servers (PSs) is crucial to lowering operational cost for cloud providers. Evolutionary algorithms (EAs) have shown promising performance on virtual machine placement (VMP) problems in the past. However, as the demand for cloud services grows, existing EAs become impractical for the large-scale virtual machine placement (LVMP) problem due to their high time complexity and poor scalability. Recently, multi-factorial optimization (MFO) has surfaced as a new search paradigm in evolutionary computation. It offers the ability to evolve multiple optimization tasks simultaneously during the evolutionary process. This paper aims to apply MFO to the LVMP problem in a heterogeneous environment. Firstly, we formulate a deployment-cost-based VMP problem in the form of an MFO problem. Then, a multi-factorial evolutionary algorithm (MFEA) embedded with a greedy-based allocation operator is developed to address the established MFO problem. After that, a re-migration and merge operator is designed to derive an integrated solution of the LVMP problem from the solutions of the MFO problem. To assess the effectiveness of the proposed method, simulation experiments are carried out on large-scale and extra-large-scale VM test data sets. The results show that, compared with various heuristic methods, our method significantly shortens the optimization time and offers a competitive placement solution for the LVMP problem in a heterogeneous environment.


I Introduction

As a new resource provisioning and computing model, cloud computing makes it easier and more convenient for users to obtain configurable resources on demand in an online manner [52][43]. It not only lowers the online deployment threshold and management complexity but also brings the benefits of reliability and low risk [32][45]. With the gradual maturity of cloud computing technology, the demand for cloud services from both small- and large-scale industries is continuously rising, which in turn leads to the expansion of data centers [8]. The choice of virtual machine placement (VMP) strategy is therefore vital to lowering operational cost for cloud service providers.

Virtual machines (VMs) are created according to the hardware computing resource demands of tenants, i.e., CPU, RAM and disk. Each VM independently runs an operating system and some applications [16]. Virtualization technology is the key to the success of cloud computing; it allows multiple VMs to run on the same physical server (PS) simultaneously without interfering with each other [31]. The VMP problem can be stated as seeking an optimal solution for placing VMs onto PSs [6][51]. Since VMP problems are NP-hard, finding an optimal solution poses challenges for researchers [24][36].

Up till now, a series of VMP strategies, including heuristic and mathematical methods, have been investigated from different perspectives. According to their target optimization objectives, existing research can be loosely classified as follows: i) energy-efficiency optimization, e.g., an energy-efficient adaptive resource scheduler for networked fog centers (NetFCs)

[39], a minimum energy VM scheduling algorithm (MinES) [11], a holistic virtual machine scheduling algorithm (GRANITE) [25], an energy-efficient knee point-driven evolutionary algorithm (EEKnEA) [50]; ii) network traffic optimization, e.g., a reliable placement based on the multi-optimization with traffic-aware algorithm (RPMOTA) [30], a traffic-aware VM optimization (TAVO) scheme on nonuniform memory access (NUMA) systems [10], a multi-objective ACS algorithm (ACS-BVMP) [35]; iii) resource allocation optimization, e.g., a correlation-aware VMP scheme [9], a layered progressive resource allocation algorithm [21], an energy-aware resource allocation method (EnReal) [47], a two-tiered on-demand resource allocation mechanism [40]. Although a series of methods have been successfully developed to address VMP problems in cloud computing, most of them are validated only in small-scale VM simulation experiments.

Evolutionary algorithms (EAs) are population-based heuristic methods that simulate the evolutionary process of a population to perform optimization [4]. Owing to their effective global population-based search, EAs have demonstrated their problem-solving ability on VMP problems [46][2][28]. However, their limitation is that they often require a large number of fitness evaluations and a high computational cost to obtain promising solutions [27][22]. As the number of VMs in the data center increases, the VMP problem becomes more complicated and computationally expensive. Conventional EAs, which are typically designed to solve a single optimization task, have little practical value for solving the large-scale virtual machine placement (LVMP) problem in an actual cloud environment.

Note that many real-world optimization tasks tend to be potentially correlated [15][14], and useful knowledge learned from one task can be applied to learn another related task efficiently [5][17]. Motivated by multi-task learning [7][34], multi-factorial optimization (MFO) was introduced in the field of evolutionary computation to evolve multiple tasks simultaneously by sharing good traits among them [19][18]. Principally, MFO offers the ability to fully exploit the implicit parallelism of population-based search during the evolutionary process [53][23].

This paper aims to develop a new EA-based MFO method to complete the placement of VMs onto PSs with the optimization target of resource allocation. The core idea of this paper is depicted in Fig. 1 and described as follows. When the LVMP problem is optimized by population-based methods, each individual in the population has to contain the information of all VMs in the data center. Nevertheless, it is worth noting that the VMs in the data center do not interfere with each other. Based on this observation, a crucial insight is that a single individual's representation does not actually need to preserve the information of all VMs in the data center. Consequently, the LVMP problem can be broken down into multiple small-scale virtual machine placement (SVMP) problems, which can be seen as the form of an MFO problem. Therefore, MFO technology is naturally employed to solve multiple SVMP problems within a single population simultaneously. As shown in Fig. 1, after the iteration process, the solution of the LVMP problem can be obtained from the solutions of the established MFO problem. Based on the description above, the main contributions of this paper are summarized as follows:

i). The deployment-cost-based VMP problem is formulated in the form of an MFO problem for the heterogeneous environment of a large-scale data center.

ii). A multi-factorial evolutionary algorithm (MFEA) coupled with a greedy-based allocation operator is proposed to solve the established MFO problem.

iii). A re-migration and merge operator is designed to derive the integrated solution of the LVMP problem from the solutions of the MFO problem.

The rest of this paper is organized as follows. The related work on EA-based VMP methods and the foundational knowledge of MFEA are reviewed in Section II. Section III describes the formulation of the VMP problem, and the overall framework of the proposed method is provided in Section IV. Section V discusses the simulation experiments and the analysis of our proposed method. Finally, Section VI concludes the paper and outlines future work.

Fig. 1: The core idea of solving the LVMP problem using MFO technology

II Background

II-A EA-based VMP Methods

Different from mathematical optimization methods, EA-based methods find the optimal solution through population-based iterative search without mathematical analysis of the target objective function. In recent years, a series of EA-based methods, such as particle swarm optimization (PSO), genetic algorithms (GA), ant colony systems (ACS) and so on, have been developed to address various VMP problems in cloud computing. They are reviewed as follows:

i) PSO-based VMP methods. A.-P. Xiong et al. [46] addressed the energy-efficient VM allocation problem using PSO. In their study, the fitness function of PSO is defined as the total Euclidean distance to determine the optimal point between resource utilization and energy consumption. A limitation of this work is that the authors considered only a single type of VM. P. Aruna et al. [3] explored a PSO algorithm for VM provisioning to make cloud data centers power-efficient, which involves a discussion of the power model for the servers and the implementation of the proposed power-aware PSO algorithm for VM provisioning. Recently, another modified binary PSO was proposed by A. Tripathi et al. [42] and applied to the VMP problem. It aims to address two important cloud aspects, i.e., efficient energy usage and effective resource utilization.

ii) GA-based VMP methods. David Wilcox et al. [44] formally defined multi-capacity bin packing, a generalization of conventional bin packing, and developed a reordering grouping genetic algorithm (RGGA) to assign VMs to servers. A. S. Alashaikh et al. [2] adopted the notion of ceteris paribus as an interpretation of the decision maker's preferences and incorporated it into a constrained multi-objective problem known as the VMP problem. They proposed a variant of the NSGA-II algorithm that promotes ceteris paribus preferred solutions and evaluated its applicability. X. Xu et al. [47] built a many-objective VMP optimization model targeting minimal energy consumption and maximal load balance, resource utilization, and robustness. They also proposed an energy-efficient knee point-driven evolutionary algorithm (EEKnEA) to address their optimization model.

iii) ACS-based VMP methods. An early study using ACS for VMP problems was introduced by E. Feller et al. [13]. In their study, the VMP problem was formulated as a static multi-dimensional bin packing problem in which the optimization goal was to minimize the number of cloud servers needed to support the current load. To solve the formulated optimization problem, an ACS algorithm coupled with a max-min update method was also developed in [13]. Nevertheless, this method places VMs on PSs based on only a single resource. Another study adopting ACS for VMP problems was introduced by X. Liu et al. [29], which consolidates VMs according to multiple resources. In their approach, a different method was designed to update the pheromones between pairs of VMs to measure their need for a PS. The number of PSs is equal to the number of VMs in the first generation and is decreased as the evolution proceeds. This work on ACS for the VMP problem was later improved by X. Liu et al. [28] with a proposed order exchange and migration (OEM) local search technique that transforms an infeasible solution into a feasible one.

In addition to the above three categories, there are other heuristic algorithms for solving VMP problems. For example, N. K. Sharma et al. [38] introduced a hybrid of GA and PSO, referred to as HGAPSO, for the placement of VMs onto PSs. X. Li et al. [26] proposed a discrete firefly algorithm to solve VMP problems, which takes a firefly's location as the placement result and its brightness as the objective value.

II-B Multi-Factorial Evolutionary Algorithm (MFEA)

The multi-factorial evolutionary algorithm (MFEA) is able to evolve two or more tasks simultaneously and accelerate the convergence of each task [19][33]. A unified representation block is built in MFEA to achieve knowledge transfer among tasks, which includes the encoding and decoding operations. The encoding operation builds a unified express space denoted as Y. Each individual y in Y includes the genetic material of all tasks. The dimension of Y is defined as D = max{D_k}, where D_k is the dimension of the k-th task, k = 1, 2, …, K, and K is the number of tasks. Conversely, the decoding step deciphers y into task-specific solutions, with the k-th solution being y_k = y(1:D_k), where y(1:D_k) retrieves the first D_k dimensions of y. Some foundational definitions for each individual are described as follows:

Factorial Cost: The objective function value of individual p_i on task T_k is defined as the factorial cost of p_i on T_k. The factorial costs of p_i on the other tasks are set to infinity.

Factorial Rank: The factorial rank r_{i,k} of individual p_i is defined as the rank of p_i on the k-th task considering all individuals in the population.

Skill Factor: The skill factor τ_i of individual p_i indicates the task on which p_i performs most effectively, i.e., τ_i = argmin_k{r_{i,k}}, where k = 1, 2, …, K. For the sake of simplicity, we say an individual is specific to a task if that task is its skill factor.

Scalar Fitness: The scalar fitness φ_i of individual p_i is defined as the reciprocal of its best factorial rank over all tasks, i.e., φ_i = 1/min_k{r_{i,k}}. A greater scalar fitness value means that p_i can survive to the next generation with a higher probability.

1:population size, S; number of tasks, K.
2:a series of solution.
3:Initialize the population P
4:Evaluate P
5:while the maximum number of evaluations is not reached do
6:     Reproduce offspring O by assortative mating
7:     Assign the skill factor for O by vertical cultural transmission
8:     Evaluate O
9:     Generate new population NP = P ∪ O
10:     Update the scalar fitness of NP
11:     Select the fittest individuals from NP
12:end while
Algorithm 1 The Basic Framework of MFEA

Further, the pseudo code of MFEA is presented in Algorithm 1 and can be summarized as follows.

In the beginning, initialization operations, e.g., randomly generating N individuals in the unified express space Y and assigning a skill factor to each individual in the initial population, are called to form a population. Then, crossover [12] and polynomial mutation [20] operations are employed to reproduce offspring according to the assortative mating and vertical cultural transmission mechanisms. Notably, assortative mating and vertical cultural transmission are the characteristic and essential components of MFEA, which allow individuals from different tasks to share genetic information with each other. Specifically, the assortative mating mechanism enables individuals from different tasks to mate with each other at a certain probability, namely the random mating probability (rmp). The vertical cultural transmission mechanism randomly assigns skill factors to the offspring. It means that an offspring specific to one task may be switched to another directly, so that a complex optimization task may acquire superior solutions by learning from other tasks. More details on assortative mating and vertical cultural transmission are available in [19]. Afterward, the factorial cost, factorial rank, and scalar fitness of each offspring individual are updated. Finally, an elite-based environmental selection operator is employed to form the next-generation population.
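To make the definitions above concrete, the following minimal Python sketch computes factorial ranks, scalar fitness, and skill factors for a population, assuming minimization and that an individual's factorial cost is infinity on tasks it was not evaluated on (as defined above). Names are illustrative, not from the paper.

```python
def mfea_bookkeeping(factorial_costs):
    """factorial_costs[i][k]: cost of individual i on task k,
    float('inf') where the individual was not evaluated (minimization)."""
    pop_size = len(factorial_costs)
    num_tasks = len(factorial_costs[0])
    # Factorial rank: 1-based rank of each individual on every task.
    ranks = [[0] * num_tasks for _ in range(pop_size)]
    for k in range(num_tasks):
        order = sorted(range(pop_size), key=lambda i: factorial_costs[i][k])
        for rank, i in enumerate(order, start=1):
            ranks[i][k] = rank
    # Skill factor: task with the smallest (best) rank;
    # scalar fitness: reciprocal of that best rank.
    skill = [min(range(num_tasks), key=lambda k: ranks[i][k])
             for i in range(pop_size)]
    fitness = [1.0 / min(ranks[i]) for i in range(pop_size)]
    return ranks, fitness, skill
```

In the elite-based selection step, individuals would then be kept in decreasing order of this scalar fitness.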

In recent years, MFEA and its variants have been successfully applied to various real-world optimization problems. In particular, L. Zhou et al. [54] proposed an MFEA equipped with a permutation-based unified representation and a split-based decoding operator to solve NP-hard capacitated vehicle routing problems. H. ThiThanh Binh et al. [41] proposed a modified MFEA for the cluster shortest path tree problem (CSTP), together with novel genetic operators, e.g., population initialization, crossover, and mutation operators. Furthermore, a novel decoding scheme for deriving factorial solutions from the unified representation in MFEA, which is a critical factor in the performance of any MFEA variant, was also introduced in [41]. C. Yang et al. [48] extended their method TMO-MFEA, proposed in [49], to solve operational indices optimization, which involves a formulated multi-objective multi-factorial operational indices optimization problem. In their optimization model, the most accurate task is considered to be the original function of the solved problem, while the remaining models are helper tasks to accelerate the optimization of the most accurate task.

III The Formulation of the VMP Problem

The VMP problem can be seen as a linear programming (LP) problem, which usually considers hardware resource constraints of PSs such as CPU, RAM and disk. As the industry's demand for VMs continues to grow, single-task EA-based solvers suffer from some limitations when addressing an LVMP problem. Most of them require more optimization time to complete the allocation of VMs onto PSs in a large-scale data center, which results in poor scalability. In this paper, the LVMP problem is reformulated in the form of an MFO problem, which is described as follows. The used variables and their definitions are summarized in Table I.

{x_1, x_2, …, x_H} = argmin{f_1(x_1), f_2(x_2), …, f_H(x_H)}   (1)

where T_h represents an SVMP optimization task decomposed from the LVMP problem (described below), f_h is its objective function, and x_h is the solution of the h-th optimization task T_h, h = 1, 2, …, H. H is the total number of SVMP optimization tasks.

The main differences between the established MFO problem and other MFO problems lie in two aspects. On the one hand, existing MFO problems aim to solve multiple distinct optimization tasks, and the obtained solution set corresponds to all of those tasks. Conversely, the purpose of the established MFO problem is to address a single optimization problem. On the other hand, the number of optimization tasks changes dynamically, i.e., H varies with the number of VMs in the data center.

Assume that there are V VMs and M PSs in the data center. Given N VMs assigned to each VMP optimization task, the number of VMP optimization tasks is defined as follows:

H = ⌈V / N⌉   (2)

Considering that V is not always divisible by N, the number of VMs assigned to the H-th (last) VMP optimization task, denoted N_H, is reallocated as follows:

N_H = V mod N, if V mod N ≠ 0; otherwise N_H = N   (3)
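The task-splitting rule of Eqs. (2) and (3), as reconstructed here, can be sketched in a few lines of Python (an illustration of the reconstruction, not the authors' code; every task receives N VMs except possibly the last):

```python
import math

def split_into_tasks(num_vms, vms_per_task):
    """Number of SVMP tasks (Eq. 2) and VMs per task (Eq. 3)."""
    num_tasks = math.ceil(num_vms / vms_per_task)      # Eq. (2)
    sizes = [vms_per_task] * num_tasks
    if num_vms % vms_per_task != 0:                    # Eq. (3)
        sizes[-1] = num_vms % vms_per_task             # last task gets V mod N
    return num_tasks, sizes
```

For example, 23 VMs with N = 10 yield three tasks of sizes 10, 10 and 3, matching the Fig. 2 scenario later in the paper.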

where mod is the modulo operation. Owing to an organic composition of different types of PSs, the data center can be operated with high utilization of the PS cluster and achieve energy efficiency. However, existing VMP research generally designs the optimization model to lower the number of activated PSs or improve the utilization of the PS cluster, which ignores the heterogeneity among PSs. In an actual cloud environment, activating a PS incurs a deployment cost that depends on its configuration. This means the deployment cost can naturally be used to guide which type of PS to activate so as to improve the utilization of the PS cluster. Accordingly, this paper proposes a deployment-cost-based optimization model to solve VMP problems in a heterogeneous environment.

Assume that the PSs are categorized into L different types according to their specific configurations, where L ≥ 1. The number of PSs of the l-th type is denoted as M_l, where Σ_{l=1}^{L} M_l = M. Note that M_h PSs are also assigned to the h-th task, which can be given as:

M_h = ⌈M / H⌉   (4)

The assignment tag of VMs in the h-th task is denoted as x^h, a matrix of 0/1 variables. If VM v_i in the h-th task is placed on PS s_j, then the element x^h_{ij} = 1; otherwise x^h_{ij} = 0. Similarly, the activated tag of PSs in the h-th task is denoted as p^h, a vector of 0/1 variables. If at least one VM is placed on s_j, then the element p^h_j = 1; otherwise p^h_j = 0. The elements x^h_{ij} and p^h_j are formulated as follows:

x^h_{ij} = 1 if v_i is placed on s_j, 0 otherwise   (5)
p^h_j = 1 if Σ_i x^h_{ij} ≥ 1, 0 otherwise   (6)

In this work, three typical computing resources (CPU, RAM and disk) in cloud computing are considered as constraints. Based on the above description, the deployment-cost-based SVMP optimization task for the established MFO problem can be given as follows:

min f_h = Σ_{j=1}^{M_h} c_j · p^h_j
s.t. Σ_i x^h_{ij} · v_i^cpu ≤ s_j^cpu, Σ_i x^h_{ij} · v_i^ram ≤ s_j^ram, Σ_i x^h_{ij} · v_i^disk ≤ s_j^disk, Σ_j x^h_{ij} = 1   (7)

where v_i^cpu, v_i^ram and v_i^disk represent the CPU, RAM and disk requirements of the i-th VM in the h-th optimization task, respectively.
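The objective and constraints of one SVMP task, under the model as reconstructed above, can be checked with a short Python sketch (illustrative only; resources are ordered (CPU, RAM, disk) and all names are assumptions):

```python
def task_cost(assign, vm_reqs, ps_caps, ps_costs):
    """assign[i][j] = 1 iff VM i is on PS j (Eq. 5).
    Returns the deployment cost (Eq. 7); raises on constraint violation."""
    for i, row in enumerate(assign):               # each VM placed exactly once
        if sum(row) != 1:
            raise ValueError(f"VM {i} not placed exactly once")
    cost = 0.0
    for j, cap in enumerate(ps_caps):
        used = [0, 0, 0]
        for i, row in enumerate(assign):
            if row[j]:
                for r in range(3):
                    used[r] += vm_reqs[i][r]
        if any(used[r] > cap[r] for r in range(3)):  # capacity constraints
            raise ValueError(f"PS {j} over capacity")
        if any(row[j] for row in assign):            # activated tag, Eq. (6)
            cost += ps_costs[j]                      # deployment cost, Eq. (7)
    return cost
```

Packing both VMs onto the cheaper PS, for instance, activates one PS and incurs only its deployment cost.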

Symbol — Definition
V — Number of VMs in the data center
M — Number of PSs in the data center
L — Number of PS types in the data center
H — Number of VMP optimization tasks
T_h — The h-th VMP optimization task
x_h — Solution of the h-th VMP optimization task
N — Number of VMs assigned to each VMP optimization task
M_h — Number of PSs assigned to the h-th VMP optimization task
M_l — Number of PSs of the l-th type in the data center
x^h_{ij} — Binary variable of VM placement
v_i — The i-th VM in the h-th VMP optimization task
v_i^cpu — CPU requirement of VM v_i
v_i^ram — RAM requirement of VM v_i
v_i^disk — Disk requirement of VM v_i
p^h_j — Binary variable of PS status
s_j — The j-th PS in the data center
c_j — Deployment cost of the j-th PS
s_j^cpu — CPU capacity of PS s_j
s_j^ram — RAM capacity of PS s_j
s_j^disk — Disk capacity of PS s_j
TABLE I: DEFINITION OF SYMBOLS

IV Our Proposed Method

IV-A The Overall Framework

The overall framework of our proposed method is described in Algorithm 2.

Lines 1 to 2 represent the preprocessing of the input VM list: the input VMs are randomly assigned to H optimization tasks. Line 3 builds the unified express space for the H VMP optimization tasks, which is presented in detail in subsection IV-B. Lines 4 to 6 describe a series of initialization operations for generating the population in the unified express space. Lines 8 to 9 are the offspring reproduction process, which is similar to the basic MFEA. The crossover and mutation operators used in assortative mating are described in detail in subsection IV-D. After that, the generated offspring population is assigned skill factors according to vertical cultural transmission. Line 10 is the proposed greedy-based allocation operator, which is described in subsection IV-C. Line 11 evaluates the offspring population, and lines 12 to 13 represent the update of the scalar fitness of both the offspring and parent populations. Line 14 forms the population that survives into the next generation based on elite-based environmental selection. Lines 7 to 15 are executed repeatedly until the termination condition is satisfied. Line 16 is the proposed re-migration and merge operator, which is called to obtain the placement solution of the input VM list.

1:VMs List, V; PSs List, M; Number of PSs types, L; Number of VMs assigned to each task, N.
2:The placement solution for V.
3:Obtain the number of tasks H according to Eq. (2)
4:Assign V to H tasks randomly
5:Build unified express space among H tasks
6:Initialize the population P
7:Perform greedy-based allocation operator on P Algorithm 3.
8:Evaluate P
9:while the maximum number of evaluations is not reached do
10:     Reproduce offspring O by assortative mating
11:     Assign the skill factor for O by vertical cultural transmission
12:     Perform greedy-based allocation operator on O Algorithm 3.
13:     Evaluate O
14:     Generate new population NP = P ∪ O
15:     Update the scalar fitness of NP
16:     Select the fittest individuals from NP
17:end while
18:Perform re-migration and merge operator on P Algorithm 4.
Algorithm 2 The Framework of Our proposed Method

IV-B Building Unified Express Space

In this subsection, the procedure of building the unified express space is described as follows.

To achieve implicit knowledge transfer among the H VMP optimization tasks, it is essential to construct a unified genetic express space among them. The individuals generated in the unified express space contain all the VM information needed by each task. Note that the number of VMs of the same type is not equal across the VMP optimization tasks due to the random assignment process. Different VM types refer to different configurations (of CPU, RAM, or disk). For each VM type, the maximum number of VMs of that type over all tasks is taken as its number in the unified express space.

Fig. 2 illustrates an example of establishing the unified express space. Suppose that there are five types of VMs in the data center, with type IDs denoted type-1, type-2, …, type-5, and that all the VMs are randomly assigned to three VMP optimization tasks. Taking type-2 as an example, the number of type-2 VMs assigned to the first VMP optimization task is 1, the number assigned to the second task is 2, and the number assigned to the third task is 2. As a result, the number of type-2 VMs in the unified express space is 2. The same operation is applied to the other VM types. After the transformation above, the individuals generated in the unified express space retain the information of each VMP optimization task and can be decoded into any task.

The representation block of an individual in the unified express space is shown in Fig. 3. In an individual, a PS is assigned 1 if it contains VMs, otherwise 0. The PS list not only consists of binary indicators but also records the information of the VMs loaded on each PS. In Fig. 3, for example, the symbol of the first PS is 1, which indicates that the first PS is activated and hosts the type-1 and type-3 VMs.

Compared with a single-factorial solver, the representation block of an individual in our method is more compact. Taking Fig. 2 as an example, suppose there are 23 VMs in the data center. A possible individual in the unified express space is shown in Fig. 2; each individual generated in the unified express space only needs to contain the information of 10 VMs, whereas the representation block of an individual in a single-factorial solver consists of the information of all 23 VMs. An individual carrying less VM information can obviously complete the placement of VMs onto PSs in less time.

Fig. 2: The process of building unified express space
Fig. 3: The representation block of individual in unified express space
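The max-per-type construction of the unified express space described above can be sketched as follows (a minimal Python illustration of the Fig. 2 rule; the input is one VM-type list per task):

```python
from collections import Counter

def build_unified_space(task_vm_lists):
    """For every VM type, the unified express space holds the maximum
    count of that type over all tasks (the Fig. 2 construction)."""
    unified = Counter()
    for vm_list in task_vm_lists:
        for vm_type, count in Counter(vm_list).items():
            unified[vm_type] = max(unified[vm_type], count)
    return dict(unified)
```

With tasks holding 1, 2 and 2 copies of type-2, for instance, the unified space holds 2 copies of type-2, as in the example above.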

IV-C Greedy-based Allocation Operator

In this subsection, the proposed greedy-based allocation operator is described as follows.

The VM list of an individual in the unified express space needs to be decoded into its corresponding task before the greedy-based allocation operator is performed. The decoding operation takes the first n VMs that meet the type and number requirements of the VMs assigned to a task to generate the VM list. Then the greedy-based allocation operator is performed on the generated VM list. An example of the decoding operation is depicted in Fig. 4. As shown in Fig. 4, a possible VM list of an individual in the unified express space is {1, 2, 3, 5, 4, 5, 3, 2, 5, 4}. After the decoding operation, the generated VM list for one specific task is {1, 2, 3, 5, 4, 3, 4}.

Fig. 4: An example of decoding operation
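The decoding operation above amounts to a single scan that keeps the first occurrences of each VM type until the task's per-type requirement is met. A small Python sketch (the per-type requirement below is an assumption consistent with the Fig. 4 example):

```python
from collections import Counter

def decode(unified_vms, task_requirement):
    """Take the first VMs in the unified list that satisfy the task's
    per-type requirement (the decoding operation of Fig. 4)."""
    need = Counter(task_requirement)
    decoded = []
    for vm_type in unified_vms:
        if need[vm_type] > 0:       # still needed by this task: keep it
            decoded.append(vm_type)
            need[vm_type] -= 1      # surplus copies are skipped
    return decoded
```

Applied to the unified list {1, 2, 3, 5, 4, 5, 3, 2, 5, 4} with a requirement of one type-1, one type-2, two type-3, two type-4 and one type-5 VM, it reproduces the task list {1, 2, 3, 5, 4, 3, 4} from the example.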

The pseudo code of the greedy-based allocation operator is summarized in Algorithm 3. The data center has a variety of PSs with different configurations. For each PS type, we select one PS and load it from the generated VM list until some specific resource (CPU, RAM, or disk) of the selected PS is exhausted or no remaining VM can be loaded onto it. Then we select the PS with the highest comprehensive utilization rate and erase the VMs it loaded from the generated VM list. The comprehensive utilization rate U of a single PS is defined as follows:

U = w_1·U_cpu + w_2·U_ram + w_3·U_disk   (8)

where the parameters w_1, w_2, and w_3 are weight coefficients. To ensure the three selected hardware resources have the same importance, we set w_1 = w_2 = w_3 = 1/3 to guarantee fairness among them. The utilization of CPU, U_cpu, the utilization of RAM, U_ram, and the utilization of disk, U_disk, are given as follows:

U_cpu = Σ_i x^h_{ij} · v_i^cpu / s_j^cpu   (9)
U_ram = Σ_i x^h_{ij} · v_i^ram / s_j^ram   (10)
U_disk = Σ_i x^h_{ij} · v_i^disk / s_j^disk   (11)

where s_j^cpu, s_j^ram and s_j^disk are the CPU, RAM and disk resource capacities of the j-th PS, respectively.
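Eqs. (8)-(11) can be evaluated directly; the following Python sketch computes U for one PS given the demands of the VMs it hosts (resources ordered (CPU, RAM, disk); names are illustrative):

```python
def comprehensive_utilization(loaded_reqs, ps_capacity,
                              weights=(1 / 3, 1 / 3, 1 / 3)):
    """Eq. (8): weighted sum of per-resource utilizations of one PS.
    loaded_reqs: list of (cpu, ram, disk) demands of the VMs on the PS."""
    utilizations = []
    for r in range(3):                                  # Eqs. (9)-(11)
        used = sum(req[r] for req in loaded_reqs)
        utilizations.append(used / ps_capacity[r])
    return sum(w * u for w, u in zip(weights, utilizations))
```

A PS of the General type (56 cores, 128G RAM, 1200G disk) half-filled on every resource, for example, has U = 0.5.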

1:VMs List of an individual, V; PSs List of an individual, M; Number of PS types, L.
2:The placement solution for V.
3:while V is not empty do
4:     psList = []
5:     for i = 1 to L do
6:         if the remaining amount of M[i] > 0 then
7:              for j = 1 to V do
8:                  if V[j] can be placed in M[i] then
9:                       Add V[j] to M[i]
10:                  end if
11:                  if M[i] is running out of resources then
12:                       break;
13:                  end if
14:              end for
15:              Calculate the utilization of M[i]
16:              Add M[i] into psList
17:         end if
18:     end for
19:     Select the ps with the highest utilization in psList
20:     Erase VMs placed in ps from V
21:     M[ps] = M[ps] - 1
22:end while
Algorithm 3 Greedy-based Allocation Operator
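The loop of Algorithm 3 can be sketched in Python as follows. This is an illustrative reconstruction, not the authors' implementation: it fills one trial PS of every available type first-fit, commits the trial PS with the highest comprehensive utilization (Eq. 8), and repeats until the VM list is empty. It assumes every VM fits on at least one available PS type.

```python
def greedy_allocate(vms, ps_types):
    """vms: list of (cpu, ram, disk) demands.  ps_types: list of dicts with
    'cap' = (cpu, ram, disk) capacity and 'count' = available servers.
    Returns a list of (type_index, loaded_vms) pairs."""
    placement = []
    remaining = list(vms)
    counts = [t['count'] for t in ps_types]
    while remaining:
        best = None                       # (utilization, type index, vm indices)
        for t, ps in enumerate(ps_types):
            if counts[t] == 0:
                continue
            free, loaded = list(ps['cap']), []
            for i, vm in enumerate(remaining):       # first-fit trial fill
                if all(vm[r] <= free[r] for r in range(3)):
                    loaded.append(i)
                    free = [free[r] - vm[r] for r in range(3)]
            if not loaded:
                continue
            # comprehensive utilization, Eq. (8) with equal weights
            util = sum((ps['cap'][r] - free[r]) / ps['cap'][r]
                       for r in range(3)) / 3
            if best is None or util > best[0]:
                best = (util, t, loaded)
        if best is None:
            raise ValueError("some VM fits no available PS type")
        _, t, loaded = best               # commit the fullest trial PS
        placement.append((t, [remaining[i] for i in loaded]))
        chosen = set(loaded)
        remaining = [vm for i, vm in enumerate(remaining) if i not in chosen]
        counts[t] -= 1
    return placement
```

With four unit VMs and a small PS type of capacity (2, 2, 2), the sketch commits two fully utilized small PSs rather than one half-empty large one.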

IV-D Crossover and Mutation Operator

The offspring population is reproduced by assortative mating. The crossover and mutation operators used in assortative mating are described in detail as follows.

Crossover operator: The crossover operator aims to reproduce an offspring population that inherits good traits from both parents and hopefully has better fitness [44]. In this paper, the crossover operator is implemented in a modified exon shuffling approach similar to the one described in [37]. This approach combines all the activated PSs from two randomly selected parents and sorts the PSs by comprehensive utilization, calculated based on Eq. (8). The fuller PSs are at the front of the list, while the less full PSs are at the end. The crossover operator systematically picks the fuller PSs and keeps the VMs they have loaded intact. If a picked PS contains any VM that has already been installed in another picked PS, the picked PS is discarded and set to idle status. The remaining VMs that have not been placed on any PS are randomly shuffled and inserted at the end of the generated VM list. This crossover approach preserves as many VM assignments as possible in the fuller PSs. It also ensures that the generated candidate solutions are all feasible, which helps speed up the running time of the algorithm.

Mutation operator: After the crossover operator, each individual is allowed to have its candidate solution modified slightly with a certain probability. The mutation operator aims to help the population escape from local optima. In this paper, it is realized by randomly swapping the positions of two VMs in the VM list of an individual.
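The swap mutation described above is straightforward; a minimal Python sketch (the mutation rate and RNG parameter are illustrative assumptions):

```python
import random

def mutate(vm_list, rate=0.1, rng=random):
    """With probability `rate`, swap two random positions in an
    individual's VM list (the swap mutation of Section IV-D)."""
    result = list(vm_list)
    if rng.random() < rate and len(result) >= 2:
        i, j = rng.sample(range(len(result)), 2)   # two distinct positions
        result[i], result[j] = result[j], result[i]
    return result
```

Because only positions are exchanged, the multiset of VMs (and hence the per-type requirement of the task) is always preserved.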

IV-E Fitness Evaluation

After an individual has finished the placement process, we evaluate its fitness according to the status of the PSs (activated or idle). The fitness evaluation function is defined as the deployment cost of the activated PS list of the individual in the h-th task, which is given as follows:

f_h = Σ_{j=1}^{M_h} c_j · p^h_j   (12)

where c_j is the deployment cost of the j-th PS, determined by its type. After the fitness evaluation process, the scalar fitness of each individual in the population consisting of the parents and offspring is updated. Then, in the elite-based environmental selection operator, the individuals with greater scalar fitness survive to the next generation. In particular, the number of individuals from different tasks surviving to the next generation is equal.
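Eq. (12) reduces to summing the per-type deployment costs of the activated PSs. A small Python sketch, using the Table II costs as example data (PS-to-type mapping is illustrative):

```python
def fitness(activated, type_of_ps, type_cost):
    """Eq. (12): total deployment cost of the activated PSs.
    activated[j] is 1 iff PS j hosts at least one VM (Eq. 6)."""
    return sum(type_cost[type_of_ps[j]]
               for j, a in enumerate(activated) if a)
```

For one activated General PS (cost 3.49) and one activated HighPerformance PS (cost 5.45), the fitness is 8.94.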

IV-F Re-migration and Merge Operator

Although the obtained solution of each task is feasible, these solutions are not a solution for the original input VM list. The designed re-migration and merge operator is called to derive the solution of the original LVMP problem from the obtained solutions of the MFO problem; its pseudo code is presented in Algorithm 4.

For the fittest solution of each VMP optimization task, the re-migration operator is applied to the PSs whose computing resources are not fully utilized, i.e., those with available computing resources left over. Specifically, the VMs loaded on PSs that have remaining computing resources are popped out, and the resulting new VM list is refilled by the greedy-based allocation operator. Finally, the merge operator is executed to combine the activated PSs of all the fittest solutions and form the placement scheme of the LVMP problem.

Input: Best individual of each task, s_1, …, s_K; number of tasks, K.
Output: The placement solution S.
1: S = []
2: V_left = []
3: for i = 1 to K do
4:     for each activated PS p in s_i do
5:         if every resource of p has a surplus then
6:             Remove p from s_i
7:             Insert the VMs hosted on p into V_left
8:         else
9:             Add p to S
10:        end if
11:    end for
12: end for
13: Backfill V_left into S with Algorithm 3.
Algorithm 4 Re-migration and Merge Operator
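Under the assumption that a solution is a list of activated PSs, each carrying its remaining resources and hosted VM list, the operator can be sketched as follows; `refill` stands in for the greedy-based allocation operator (Algorithm 3), and all names are illustrative:

```python
def remigrate_and_merge(best_solutions, refill):
    """Re-migration and merge over the fittest solution of each task.

    Each solution is a list of (remaining_resources, vm_list) pairs, one
    per activated PS. PSs with surplus in every resource dimension are
    emptied and their VMs collected for re-allocation; fully used PSs
    are kept intact. `refill` packs the collected VMs onto fresh PSs and
    returns the resulting activated-PS list, which is merged with the
    kept PSs to form the final placement.
    """
    kept, leftover_vms = [], []
    for solution in best_solutions:
        for remaining, vms in solution:
            if all(r > 0 for r in remaining):   # surplus in every resource
                leftover_vms.extend(vms)        # re-pop these VMs
            else:
                kept.append((remaining, vms))   # keep the full PS as-is
    return kept + refill(leftover_vms)          # merge into one placement
```

A toy `refill` that dumps all leftover VMs onto one server is enough to exercise the control flow; in the paper the backfill is the greedy allocator itself.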

IV-G Time Complexity Analysis

Besides the capability of finding high-quality solutions, time complexity is a significant issue for an optimization algorithm. The time complexity of our proposed method is mainly determined by the underlying GA. Assume the maximum number of iterations is I, the number of PSs is M, the number of PS types is L, the number of VMs is V, and the population size per generation is G. The time complexity of our method depends mainly on offspring reproduction and the greedy-based allocation operator. As seen in Algorithm 3, the time complexity of the proposed greedy-based allocation operator is O(iterations × population size × number of VMs × number of PS types × number of VMs), i.e., O(I × G × V × L × V). Offspring reproduction requires O(I × G × M × V) time. Since the number of VMs is larger than the number of PSs, the overall time complexity of our method is O(I G L V²).

V Experiments

Simulation experiments are carried out in this section to assess the performance of our proposed method. All compared algorithms are implemented in C++ and run on a PC with a dual-core i7 CPU and 8.0 GB RAM.

V-A Test Data Set Introduction

Type            | CPU      | RAM  | Disk  | Cost
General         | 56-core  | 128G | 1200G | 3.49
LargeRAM        | 84-core  | 256G | 2400G | 4.36
HighPerformance | 112-core | 192G | 3600G | 5.45
TABLE II: The configuration of Physical Servers

In this work, the resource characteristics of VMs and PSs are taken from the Huawei cloud website (https://activity.huaweicloud.com/promotion/index.html). Three different types of PSs are adopted to build heterogeneous environments, denoted General, LargeRAM and HighPerformance, respectively. The configurations of the PSs are shown in Table II. These three types of PSs have different bias characteristics: LargeRAM has larger memory and prefers VMs with greater RAM demand; HighPerformance has more CPU cores and suits VMs that require high computing power; the resource configuration of General lies between the two. Besides, we use 100 different types of VMs to create the large-scale test data sets; the configurations of the VMs are shown in Table III. The CPU-to-RAM ratio of each VM comes from the Huawei cloud website. The disk configuration is set between 100 and 500 (G), randomly generated at 100 (G) intervals. The total ratio of the CPU, RAM, and disk requirements is approximately 6:9:10 on each test data set, which confirms that disk serves as the bottleneck resource in this experiment. Taking the data set with 5,000 VMs as an example, it is composed of 5,000 VMs randomly drawn from the 100 VM types with a discrete uniform distribution. Note that the optimal solutions of the test data sets are not known due to the random creation process, so they can serve as black-box testing problems.

Type CPU RAM disk Type CPU RAM disk Type CPU RAM disk
(core) (G) (G) (core) (G) (G) (core) (G) (G)
type-1 1 1 100 type-36 2 16 100 type-71 8 32 100
type-2 1 1 200 type-37 2 16 200 type-72 8 32 200
type-3 1 1 300 type-38 2 16 300 type-73 8 32 300
type-4 1 1 400 type-39 2 16 400 type-74 8 32 400
type-5 1 1 500 type-40 2 16 500 type-75 8 32 500
type-6 1 2 100 type-41 4 4 100 type-76 8 64 100
type-7 1 2 200 type-42 4 4 200 type-77 8 64 200
type-8 1 2 300 type-43 4 4 300 type-78 8 64 300
type-9 1 2 400 type-44 4 4 400 type-79 8 64 400
type-10 1 2 500 type-45 4 4 500 type-80 8 64 500
type-11 1 4 100 type-46 4 8 100 type-81 16 16 100
type-12 1 4 200 type-47 4 8 200 type-82 16 16 200
type-13 1 4 300 type-48 4 8 300 type-83 16 16 300
type-14 1 4 400 type-49 4 8 400 type-84 16 16 400
type-15 1 4 500 type-50 4 8 500 type-85 16 16 500
type-16 1 8 100 type-51 4 16 100 type-86 16 32 100
type-17 1 8 200 type-52 4 16 200 type-87 16 32 200
type-18 1 8 300 type-53 4 16 300 type-88 16 32 300
type-19 1 8 400 type-54 4 16 400 type-89 16 32 400
type-20 1 8 500 type-55 4 16 500 type-90 16 32 500
type-21 2 2 100 type-56 4 32 100 type-91 16 64 100
type-22 2 2 200 type-57 4 32 200 type-92 16 64 200
type-23 2 2 300 type-58 4 32 300 type-93 16 64 300
type-24 2 2 400 type-59 4 32 400 type-94 16 64 400
type-25 2 2 500 type-60 4 32 500 type-95 16 64 500
type-26 2 4 100 type-61 8 8 100 type-96 16 128 100
type-27 2 4 200 type-62 8 8 200 type-97 16 128 200
type-28 2 4 300 type-63 8 8 300 type-98 16 128 300
type-29 2 4 400 type-64 8 8 400 type-99 16 128 400
type-30 2 4 500 type-65 8 8 500 type-100 16 128 500
type-31 2 8 100 type-66 8 16 100
type-32 2 8 200 type-67 8 16 200
type-33 2 8 300 type-68 8 16 300
type-34 2 8 400 type-69 8 16 400
type-35 2 8 500 type-70 8 16 500
TABLE III: CONFIGURATION OF VIRTUAL MACHINES

V-B Compared Algorithms

In this experiment, the effectiveness of the proposed method is validated in comparison with four different state-of-the-art heuristic algorithms, namely HGAPSO [38], OEMACS [28], RGGA [44], and first-fit decreasing (FFD) [1]. OEMACS constructs the assignment of VMs with artificial ants based on a global pheromone updating approach; its order exchange and migration (OEM) technique turns infeasible solutions into feasible ones, which improves solution quality. HGAPSO is a hybrid of a genetic algorithm (GA) and particle swarm optimization (PSO) for allocating VMs to PSs. RGGA uses an exon-shuffling approach to produce offspring: it sorts the PS list by utilization rate in descending order and systematically picks the fuller PSs to keep them intact. Finally, FFD sorts the VM list in descending order by CPU requirement first, then RAM, and lastly disk, and installs each VM on the first PS in the PS list with adequate computing resources.
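The FFD baseline described above can be sketched as follows (a simplified single-server-type variant for illustration; resource tuples are (CPU, RAM, disk) and the names are not from [1]):

```python
def ffd_place(vms, ps_capacity):
    """First-fit decreasing for VM placement.

    VMs are sorted in descending order by CPU, then RAM, then disk
    demand (plain tuple ordering), and each VM is installed on the first
    already-opened PS with enough remaining capacity in all three
    dimensions; a new PS is opened when none fits. Returns the opened
    PSs as [free_capacity, vm_list] entries.
    """
    servers = []  # each entry: [free_capacity_tuple, list_of_vms]
    for vm in sorted(vms, reverse=True):  # (cpu, ram, disk) tuples
        for server in servers:
            free = server[0]
            if all(f >= d for f, d in zip(free, vm)):
                server[0] = tuple(f - d for f, d in zip(free, vm))
                server[1].append(vm)
                break
        else:  # no opened PS fits: activate a new one
            free = tuple(c - d for c, d in zip(ps_capacity, vm))
            servers.append([free, [vm]])
    return servers
```

With the General PS of Table II, `ffd_place(vm_demands, (56, 128, 1200))` would return the activated-server layout for a homogeneous General-only cluster; the heterogeneous case additionally chooses among PS types.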

V-C Experiment Setup

In this experiment, the parameters of the proposed method are the random mating probability (rmp) and the number of VMs assigned to each VMP optimization task, which are set to rmp = 0.5 and 200, respectively. These settings are derived from the sensitivity test experiment presented in subsection V-E. The maximum number of iterations is set to 50, the number of individuals per task is set to 5, and the total population size is the number of tasks multiplied by 5. In addition, the parameters of the compared algorithms HGAPSO, OEMACS and RGGA are set consistently with their original references: their population sizes are 10, 5 and 75, and their maximum iterations are 50, 50 and 100, respectively. Accordingly, the maximum numbers of fitness evaluations of HGAPSO, OEMACS and RGGA are 500, 250 and 7500, respectively, while the number of fitness evaluations of our method depends on the number of tasks. The experimental results of all algorithms are obtained over 30 independent runs on each instance.

Several performance indicators are used to comprehensively validate the effectiveness of the proposed method: the running time of each compared algorithm, the number of activated PSs, the comprehensive utilization rate of the activated PS cluster, and the total deployment cost. Note that the comprehensive utilization rate of the activated PS cluster is calculated by Eq. (8). Besides these, the utilization rates of CPU, RAM, and disk are also adopted as performance indicators, calculated by Eqs. (9), (10), and (11), respectively.
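Eqs. (8)-(11) appear earlier in the paper; the sketch below is only one plausible reading of them (cluster-wide demand over capacity per resource, with the comprehensive rate taken as their mean), with illustrative names:

```python
def utilization_rates(activated, capacities):
    """Per-resource and comprehensive utilization of an activated PS cluster.

    `activated` holds each activated PS's total demanded (cpu, ram, disk)
    and `capacities` its provisioned (cpu, ram, disk). Each per-resource
    rate is total demand over total capacity across the cluster, and the
    comprehensive rate is taken here as their mean; this is an
    illustrative reading of Eqs. (8)-(11), not the paper's exact formulas.
    """
    used = [sum(d[i] for d in activated) for i in range(3)]
    cap = [sum(c[i] for c in capacities) for i in range(3)]
    per_resource = [u / c for u, c in zip(used, cap)]
    return per_resource, sum(per_resource) / 3
```

For a cluster of two half-loaded General servers, all three per-resource rates and the comprehensive rate come out to 50%.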

V-D Results and Analysis

Data Set | Size | Our Method | HGAPSO | OEMACS | RGGA | FFD
(each cell: Time / Util / Num / Cost; FFD: Util / Num / Cost)
DS1 | 5000 | 0.74s / 85.47% / 534.26 / 2316.15 | 3.29s / 75.27% / 676.70 / 3079.51 | 155.02s / 79.58% / 643.19 / 2915.69 | 29.28s / 81.28% / 613.60 / 2593.51 | 58.31% / 1081.13 / 4224.34
DS2 | 10000 | 1.48s / 85.19% / 1056.01 / 4578.87 | 9.15s / 76.25% / 1340.05 / 6060.06 | 658.01s / 78.79% / 1288.26 / 5847.32 | 61.11s / 81.72% / 1195.50 / 5078.05 | 58.09% / 2151.00 / 8407.54
DS3 | 15000 | 2.27s / 84.93% / 1595.60 / 6918.30 | 19.65s / 76.85% / 1980.40 / 8991.78 | 1556.65s / 78.12% / 1949.94 / 8848.67 | 95.70s / 81.52% / 1792.27 / 7624.70 | 58.06% / 3232.07 / 12625.95
DS4 | 20000 | 3.02s / 85.27% / 2114.13 / 9174.47 | 35.63s / 77.04% / 2618.80 / 11938.36 | 3244.99s / 78.48% / 2611.09 / 11767.78 | 127.41s / 82.24% / 2374.07 / 10092.80 | 58.31% / 4291.30 / 16757.45
DS5 | 25000 | 3.81s / 84.99% / 2661.26 / 11540.25 | 46.48s / 77.98% / 3223.50 / 14669.68 | 5448.58s / 78.27% / 3233.81 / 14650.77 | 155.82s / 81.70% / 2965.70 / 12613.45 | 57.99% / 5367.63 / 20963.20
DS6 | 30000 | 4.65s / 85.31% / 3211.40 / 13926.30 | 68.30s / 77.22% / 3951.57 / 17928.70 | 10932.67s / 78.22% / 3930.80 / 17750.57 | 194.67s / 82.09% / 3568.77 / 15175.54 | 58.15% / 6462.23 / 25242.30
DS7 | 35000 | 5.42s / 85.11% / 3693.43 / 16026.43 | 79.76s / 77.68% / 4585.46 / 20809.70 | 19514.45s / 78.20% / 4571.50 / 20676.42 | 228.03s / 82.01% / 4169.03 / 17730.90 | 58.18% / 7544.03 / 29464.20
DS8 | 40000 | 6.25s / 85.63% / 4281.73 / 18578.08 | 83.55s / 78.36% / 5174.06 / 23597.70 | N/A | 261.61s / 82.02% / 4783.83 / 20346.30 | 58.14% / 8658.83 / 33812.69
DS9 | 45000 | 6.94s / 85.42% / 4759.33 / 20803.48 | 107.52s / 77.76% / 5872.37 / 26718.80 | N/A | 303.62s / 81.92% / 5371.10 / 22842.35 | 58.11% / 9720.33 / 37960.60
DS10 | 50000 | 7.66s / 85.01% / 5215.50 / 22620.94 | 191.35s / 76.04% / 6662.77 / 30265.15 | N/A | 332.79s / 81.94% / 5931.03 / 25232.38 | 58.14% / 10747.77 / 41975.60
TABLE IV: EXPERIMENTAL RESULT COMPARISONS WITH DIFFERENT SIZES OF VMs

The experimental results, in terms of all the performance indicators mentioned in subsection V-C, are obtained by all compared algorithms over 30 independent runs and tabulated in Table IV. The superior average results are highlighted in bold. Since OEMACS already requires 19514.45s of running time on test data set DS7, which is of little practical value, its results on test data sets DS8-DS10 are not given in Table IV. The running time of FFD is only a few microseconds on all test data sets, so it is not listed.

Fig. 5: The running time of all compared algorithm on the DS1-DS7

It can be seen from Table IV that, on all the test data sets, the proposed method achieves significantly shorter running time than the other heuristic algorithms. Explicitly, on test data set DS7, although the number of fitness evaluations of our proposed method is 12250, it completes the positioning and optimization of the VM list in only 5.42s, which demonstrates favorable real-time capability and scalability. The reason is that the representation of individuals in our method is more effective than that of the other heuristic algorithms, which dramatically shortens the running time. The other two modified genetic algorithms, HGAPSO and RGGA, run for 79.76s and 228.03s on DS7, respectively. It must be pointed out that HGAPSO achieves a shorter running time than RGGA because of its original parameter setting, in which the number of fitness evaluations is much smaller than that of RGGA. OEMACS needs 19514.45s to complete the assignment process on DS7, the longest running time among all compared algorithms. Further, the trend of running time in Fig. 5 shows that the proposed method adds only about 1.0s for every additional 5,000 VMs, while the other algorithms grow in much larger steps; the increase of OEMACS is the most pronounced. These running-time results prove that the proposed method is more suitable for solving the VMP problem in large-scale data centers.

In terms of the average comprehensive utilization rate of the activated PS cluster, our method achieves the highest rate among all compared algorithms, which shows that it can effectively maximize the utilization of the activated PS cluster. In more detail, Fig. 6 shows the utilization rates of CPU, RAM, and disk on DS7 for all compared algorithms. As shown in Fig. 6, compared to RGGA, the proposed method obtains superior utilization rates on CPU and RAM and a comparable utilization rate on disk, the bottleneck resource in this experiment. The proposed method also shows higher utilization rates on CPU, RAM, and disk than HGAPSO, OEMACS, and FFD. This shows that our method can effectively handle LVMP problems with bottleneck resource characteristics.

In addition, the average number of activated PSs obtained by the proposed method is lower than that of the other heuristic algorithms on all test data sets. For example, the average number of activated PSs of the proposed method on DS7 is 3693.43, while those of HGAPSO, OEMACS, and RGGA are 4585.46, 4571.50, and 4169.03, respectively. These results prove that our method can effectively reduce the number of activated PSs and accordingly reduce energy consumption.

Finally, it can be seen from Table IV that the proposed method obtains lower deployment cost than the other compared algorithms. This is because each VMP optimization task in our method is modeled with a deployment-cost-reduction objective. The advantage of the cost-based model is its ability to lower the deployment cost of the data center and to handle VMP problems in heterogeneous environments. Moreover, since all VMP optimization tasks share the same optimization goal, there is high similarity between them, which leads to effective inter-task communication and better optimization results.

Fig. 6: The average utilization of CPU, RAM and disk of all active servers on the DS7

V-E Parametric Analysis

Fig. 7: Convergence test of our proposed method on the DS1

In this experiment, the sensitivity of two parameters of the proposed method is tested on DS1 independently, i.e., the random mating probability rmp and the number of VMs assigned to each task. The situations on the other test data sets are similar.

To analyze the sensitivity of rmp, we vary its value from 0.1 to 0.9 with a step of 0.1; the experimental results are displayed in Fig. 8. It can be seen from Fig. 8 that, as the value of rmp increases, the running times of our proposed method remain almost the same, which shows that the rmp value has little effect on running time. In addition, the proposed method obtains the best deployment cost when rmp = 0.5. The reason can be summarized as follows. The same deployment cost based objective function is used for all VMP optimization tasks, which indicates that knowledge exchange between tasks is positive with high probability during evolution; a larger rmp increases the probability of such exchange and therefore yields better results. However, as rmp continues to grow, the deployment cost may gradually increase, because too large an rmp can cause the task-specific sub-populations to fall into local optima easily due to the loss of population diversity. In summary, rmp is set to 0.5.

To test the effect of the number of VMs assigned to each task, it is set to 100, 200, 500, 800, 1000, 1300, and 1500, respectively; the results are shown in Fig. 9. It can be seen from Fig. 9 that, as this number increases, both the running time and the deployment cost rise. This is because a larger number of VMs per task expands the search space of the sub-population of each task, degrading solution quality. It is worth noting that smaller is not always better: although a smaller number brings advantages in the representation of individuals, it narrows the search space of the task-specific sub-populations, which then fall into local optima easily. In summary, the number of VMs assigned to each task is set to 200.

Fig. 8: The effect of different value of rmp on the DS1
Fig. 9: The effect of the number of VMs assigned to each task on the DS1

V-F Convergence Analysis

Taking DS1 as an example, the convergence analysis of our proposed method is presented as follows; the situations on the other test data sets are similar. The maximum number of iterations is set to 100 in this experiment. Fig. 7 depicts the convergence curves of the proposed method and the compared algorithms. It can be seen from Fig. 7 that our method obtains better results than the compared algorithms in the first generation, which is crucial for guiding the search process, and its convergence slows down at the tenth iteration, approaching a local optimum with deployment cost 2388.86. In fact, in the early iterations, the proposed method has already converged to a better local optimum than the compared algorithms. The reason can be summarized as follows. On the one hand, the greedy-based allocation operator is competent to place VMs onto PSs and is able to find superior local optima of the VMP optimization tasks in the early iterations. On the other hand, owing to the more effective representation of individuals in the proposed method, the search space of each VMP optimization task is narrower, which is conducive to finding a local optimum. These results show that the proposed method converges to a better local optimum more quickly.

V-G Single-Factorial Solver vs Multi-Factorial Solver

To explore the effectiveness of the multi-factorial optimization technology, the multi-factorial method proposed in this paper is modified into a single-factorial version (SFEA) for comparison. SFEA adopts the same initialization operator, crossover and mutation operators, greedy-based allocation operator, and environment selection operator as the proposed method. The difference is that SFEA lacks the process of establishing a unified expression space; it is a single-factorial solver with no inter-task communication. For a fair comparison, SFEA uses the same parameters as the proposed multi-factorial method. Test data sets DS1 to DS10 are used to assess the performance of SFEA, and the experimental results are summarized in Table V.

As seen in Table V, the running times of SFEA on all test instances are longer than those of its multi-factorial counterpart. This is because the representation block of individuals in the multi-factorial environment not only decodes into its related task but also retains the genetic information of the other tasks, which makes it more efficient. On the other three performance indicators (comprehensive utilization rate, number of activated PSs, and deployment cost), the proposed multi-factorial method obtains superior results to SFEA on all test data sets. The reason can be summarized as follows. The search space of the population in SFEA is large and complex, which slows its convergence. Conversely, the search space of the population in the multi-factorial version is relatively narrow, so it converges quickly. Moreover, the population of the multi-factorial version does not fall into local optima easily thanks to the knowledge transfer among tasks, which is also the key to obtaining superior solutions.

Data Set | Size | Our Method (Time / Util / Num / Cost) | SFEA (Time / Util / Num / Cost)
DS1 | 5000 | 0.74s / 85.47% / 534.26 / 2316.15 | 2.94s / 83.60% / 565.13 / 2430.31
DS2 | 10000 | 1.48s / 85.19% / 1056.01 / 4578.87 | 10.62s / 82.93% / 1130.80 / 4862.40
DS3 | 15000 | 2.27s / 84.93% / 1595.60 / 6918.30 | 27.21s / 82.71% / 1697.20 / 7311.37
DS4 | 20000 | 3.02s / 85.27% / 2114.13 / 9174.47 | 52.71s / 83.24% / 2259.13 / 9722.77
DS5 | 25000 | 3.81s / 84.99% / 2661.26 / 11540.25 | 85.01s / 82.76% / 2812.80 / 12115.50
DS6 | 30000 | 4.65s / 85.31% / 3211.40 / 13926.30 | 117.71s / 82.84% / 3407.10 / 14652.40
DS7 | 35000 | 5.42s / 85.11% / 3693.43 / 16026.43 | 154.72s / 82.72% / 3982.33 / 17132.02
DS8 | 40000 | 6.25s / 85.63% / 4281.73 / 18578.08 | 201.69s / 82.90% / 4564.73 / 19636.10
DS9 | 45000 | 6.94s / 85.42% / 4759.33 / 20803.48 | 255.74s / 82.71% / 5129.00 / 22057.97
DS10 | 50000 | 7.66s / 85.01% / 5215.50 / 22620.94 | 318.21s / 82.60% / 5667.50 / 24376.51
TABLE V: EXPERIMENTAL RESULT COMPARISONS BETWEEN THE PROPOSED METHOD AND SFEA WITH DIFFERENT SIZES OF VMs

V-H Extra Large-scale Test Data Sets

In this experiment, to further assess the performance of our proposed method on extra large-scale VMP problems, extra test data sets with a maximum size of 250,000 VMs are created with the same approach as in subsection V-A. Since OEMACS and FFD were shown to be unsuitable for LVMP problems in the above experiments, the performance of our method is assessed here in comparison with HGAPSO and RGGA. The experimental results are summarized in Table VI. As can be seen from Table VI, on EDS6 the proposed method completes the assignment process in only 37.61s. In terms of the average comprehensive utilization rate, deployment cost, and average number of activated PSs, the proposed method also shows superior performance to HGAPSO and RGGA on all the extra test data sets.

Fig. 10 shows the average running time of the proposed method and the compared algorithms on the extra large-scale test data sets. It can be seen from Fig. 10 that, as the test data set size increases in large steps, the proposed method shows the smallest and most stable growth in running time, about 7s per 50,000 additional VMs, while the time steps of HGAPSO and RGGA are relatively large and tend to grow further. The running time of HGAPSO is even longer than that of RGGA on all the extra test data sets. These experimental results prove that the proposed method can indeed be applied to extra large-scale VMP problems, with good scalability and real-time performance.

Extra Data Set | Size | Our Method | HGAPSO | RGGA
(each cell: Time / Util / Num / Cost)
EDS1 | 80000 | 12.38s / 84.92% / 8516.60 / 36915.98 | 541.18s / 75.82% / 10786.20 / 4875.21 | 516.84s / 81.98% / 9475.32 / 40367.05
EDS2 | 100000 | 15.81s / 85.06% / 11615.23 / 46057.90 | 921.26s / 77.62% / 13057.87 / 59372.83 | 804.94s / 82.01% / 11885.54 / 50558.84
EDS3 | 130000 | 19.64s / 84.42% / 13780.86 / 59762.06 | 1163.92s / 78.99% / 16680.00 / 75950.89 | 1087.48s / 82.10% / 15498.02 / 65898.27
EDS4 | 150000 | 22.75s / 85.76% / 16054.73 / 69619.23 | 1659.30s / 76.78% / 19801.67 / 89993.44 | 1312.54s / 82.15% / 17741.20 / 75623.81
EDS5 | 200000 | 30.28s / 84.88% / 21062.23 / 91355.30 | 2683.92s / 75.80% / 27005.57 / 122151.32 | 1760.24s / 82.22% / 23521.56 / 100441.81
EDS6 | 250000 | 37.61s / 85.65% / 26453.77 / 114808.82 | 4319.83s / 73.72% / 34277.91 / 154519.62 | 2323.15s / 82.20% / 29561.23 / 125991.47
TABLE VI: EXPERIMENTAL RESULTS ON THE EXTRA LARGE-SCALE VM TEST DATA SETS
Fig. 10: The average running time of the proposed method, HGAPSO and RGGA on the EDS1-EDS6

VI Conclusion and Future Work

This paper provides an MFO method to place VMs onto PSs in heterogeneous environments of large-scale data centers. We first reformulate a deployment cost based VMP problem in the form of an MFO problem: multiple deployment cost based VMP optimization tasks are modeled to achieve an organic composition of the different PS configurations in heterogeneous environments, which directly reduces deployment costs for cloud providers. Furthermore, a multi-factorial evolutionary algorithm coupled with the greedy-based allocation operator is proposed to address the established MFO problem. After that, a re-migration and merge operator is designed to obtain the integrated solution of the LVMP problem. Comprehensive experiments on large-scale and extra large-scale test data sets are conducted to assess the effectiveness of our proposed method. The experimental results show that the proposed method can significantly reduce optimization time and provide an effective placement solution for the VMP problem in large-scale data centers.

Although our proposed method shows promising problem-solving ability on LVMP problems in heterogeneous environments, it only considers the optimization target of resource allocation. Real data centers face not only the challenge of resource scheduling optimization but also other complex optimization objectives, such as energy consumption optimization and network traffic optimization. In future work, how to effectively integrate these different optimization objectives will continue to be explored based on MFO technology.

References

  • [1] Y. Ajiro and A. Tanaka (2007-01) Improving packing algorithms for server consolidation.. In 2007 33th International Conference Computer Measurement Group, pp. 399–406. Cited by: §V-B.
  • [2] A. S. Alashaikh and E. A. Alanazi (2019) Incorporating ceteris paribus preferences in multiobjective virtual machine placement. IEEE Access 7 (), pp. 59984–59998. External Links: ISSN 2169-3536 Cited by: §I, §II-A.
  • [3] P. Aruna and S. Vasantha (2015-07) A particle swarm optimization algorithm for power-aware virtual machine allocation. In 2015 6th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Vol. , pp. 1–6. External Links: Document, ISSN Cited by: §II-A.
  • [4] T. Back, U. Hammel, and H.-P. Schwefel (1997-04) Evolutionary computation: comments on the history and current state. IEEE Transactions on Evolutionary Computation 1 (1), pp. 3–17. External Links: Document, ISSN 1089-778X Cited by: §I.
  • [5] K. K. Bali, Y. Ong, A. Gupta, and P. S. Tan (2019) Multifactorial evolutionary algorithm with online transfer parameter estimation: MFEA-II. IEEE Transactions on Evolutionary Computation, pp. 1–1. External Links: ISSN 1089-778X Cited by: §I.
  • [6] R. Bianchini and R. Rajamony (2004-11) Power and energy management for server systems. Computer 37 (11), pp. 68–74. External Links: ISSN 0018-9162 Cited by: §I.
  • [7] R. Caruana (1997-07-01) Multitask learning. Machine Learning 28 (1), pp. 41–75. External Links: ISSN 1573-0565 Cited by: §I.
  • [8] J. Chen, C. Wang, B. B. Zhou, L. Sun, Y. C. Lee, and A. Y. Zomaya (2011) Tradeoffs between profit and customer satisfaction for service provisioning in the cloud. In Proceedings of the 20th International Symposium on High Performance Distributed Computing (HPDC), pp. 229–238. External Links: ISBN 978-1-4503-0552-5 Cited by: §I.
  • [9] T. Chen, Y. Zhu, X. Gao, L. Kong, G. Chen, and Y. Wang (2018-04-01) Improving resource utilization via virtual machine placement in data center networks. Mobile Networks and Applications 23 (2), pp. 227–238. External Links: ISSN 1572-8153 Cited by: §I.
  • [10] Y. Cheng, W. Chen, Z. Wang, and X. Yu (2017-06) Performance-monitoring-based traffic-aware virtual machine deployment on numa systems. IEEE Systems Journal 11 (2), pp. 973–982. External Links: Document, ISSN Cited by: §I.
  • [11] X. Dai, J. M. Wang, and B. Bensaou (2016-04) Energy-efficient virtual machines scheduling in multi-tenant data centers. IEEE Transactions on Cloud Computing 4 (2), pp. 210–221. External Links: ISSN Cited by: §I.
  • [12] K. Deb and R. B. Agrawal (1994-09) Simulated binary crossover for continuous search space. Complex Systems 9 (3), pp. 115–148. Cited by: §II-B.
  • [13] E. Feller, L. Rilling, and C. Morin (2011-Sep.) Energy-aware ant colony based workload placement in clouds. In 2011 12th International Conference on Grid Computing (GridCom), Vol. , pp. 26–33. External Links: Document, ISSN Cited by: §II-A.
  • [14] L. Feng, Y. Ong, S. Jiang, and A. Gupta (2017-10) Autoencoding evolutionary search with learning across heterogeneous problems. IEEE Transactions on Evolutionary Computation 21 (5), pp. 760–772. External Links: ISSN 1089-778X Cited by: §I.
  • [15] L. Feng, Y. Ong, M. Lim, and I. W. Tsang (2015-10) Memetic search with interdomain learning: a realization between cvrp and carp. IEEE Transactions on Evolutionary Computation 19 (5), pp. 644–658. External Links: ISSN 1089-778X Cited by: §I.
  • [16] M. Gaggero and L. Caviglione (2019-01) Model predictive control for energy-efficient, quality-aware, and secure virtual machine placement. IEEE Transactions on Automation Science and Engineering 16 (1), pp. 420–432. External Links: ISSN 1545-5955 Cited by: §I.
  • [17] M. Gong, Z. Tang, H. Li, and J. Zhang (2019) Evolutionary multitasking with dynamic resource allocating strategy. IEEE Transactions on Evolutionary Computation (), pp. 1–1. External Links: ISSN 1089-778X Cited by: §I.
  • [18] A. Gupta, Y. Ong, L. Feng, and K. C. Tan (2017-07) Multiobjective multifactorial optimization in evolutionary multitasking. IEEE Transactions on Cybernetics 47 (7), pp. 1652–1665. External Links: ISSN Cited by: §I.
  • [19] A. Gupta, Y. Ong, and L. Feng (2016-06) Multifactorial evolution: toward evolutionary multitasking. IEEE Transactions on Evolutionary Computation 20 (3), pp. 343–357. External Links: ISSN 1089-778X Cited by: §I, §II-B, §II-B.
  • [20] R. Hinterding (1995-11) Gaussian mutation and self-adaption for numeric genetic algorithms. In Proceedings of 1995 IEEE International Conference on Evolutionary Computation, Vol. 1, pp. 384. External Links: ISSN Cited by: §II-B.
  • [21] J. Li, D. Li, Y. Ye, and X. Lu (2015-02) Efficient multi-tenant virtual machine allocation in cloud data centers. Tsinghua Science and Technology 20 (1), pp. 81–89. External Links: Document, ISSN Cited by: §I.
  • [22] J. Knowles (2006-02) ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Transactions on Evolutionary Computation 10 (1), pp. 50–66. External Links: ISSN 1089-778X Cited by: §I.
  • [23] H. Li, Y. Ong, M. Gong, and Z. Wang (2019-10) Evolutionary multitasking sparse reconstruction: framework and case study. IEEE Transactions on Evolutionary Computation 23 (5), pp. 733–747. External Links: ISSN Cited by: §I.
  • [24] K. Li, H. Zheng, and J. Wu (2013-11) Migration-based virtual machine placement in cloud systems. In 2013 IEEE 2nd International Conference on Cloud Networking (CloudNet), Vol. , pp. 83–90. External Links: ISSN Cited by: §I.
  • [25] X. Li, P. Garraghan, X. Jiang, Z. Wu, and J. Xu (2018-06) Holistic virtual machine scheduling in cloud datacenters towards minimizing total energy. IEEE Transactions on Parallel and Distributed Systems 29 (6), pp. 1317–1331. External Links: ISSN Cited by: §I.
  • [26] X. Li, C. Gu, Z. Yang, and Y. Chang (2015-12) Virtual machine placement strategy based on discrete firefly algorithm in cloud environments. In 2015 12th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Vol. , pp. 61–66. External Links: Document, ISSN Cited by: §II-A.
  • [27] D. Lim, Y. Jin, Y. Ong, and B. Sendhoff (2010-06) Generalizing surrogate-assisted evolutionary computation. IEEE Transactions on Evolutionary Computation 14 (3), pp. 329–355. External Links: ISSN 1089-778X Cited by: §I.
  • [28] X. Liu, Z. Zhan, J. D. Deng, Y. Li, T. Gu, and J. Zhang (2018-02) An energy efficient ant colony system for virtual machine placement in cloud computing. IEEE Transactions on Evolutionary Computation 22 (1), pp. 113–128. External Links: ISSN 1089-778X Cited by: §I, §II-A, §V-B.
  • [29] X. Liu, Z. Zhan, K. Du, and W. Chen (2014-07) Energy aware virtual machine placement scheduling in cloud computing based on ant colony optimization approach. pp. . External Links: ISBN 978-1-4503-2662-9, Document Cited by: §II-A.
  • [30] J. Luo, W. Song, and L. Yin (2018) Reliable virtual machine placement based on multi-objective optimization with traffic-aware algorithm in industrial cloud. IEEE Access 6 (), pp. 23043–23052. Cited by: §I.
  • [31] C. Mastroianni, M. Meo, and G. Papuzzo (2013-07) Probabilistic consolidation of virtual machines in self-organizing cloud data centers. IEEE Transactions on Cloud Computing 1 (2), pp. 215–228. External Links: ISSN 2168-7161 Cited by: §I.
  • [32] K. Mershad, H. Artail, M. A. R. Saghir, H. Hajj, and M. Awad (2017-10) A study of the performance of a cloud datacenter server. IEEE Transactions on Cloud Computing 5 (4), pp. 590–603. External Links: ISSN 2168-7161 Cited by: §I.
  • [33] Y. Ong and A. Gupta (2016) Evolutionary multitasking: a computer science view of cognitive multitasking. Cognitive Computation 8 (2), pp. 125–142. Cited by: §II-B.
  • [34] S. J. Pan and Q. Yang (2010-10) A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22 (10), pp. 1345–1359. External Links: ISSN 1041-4347 Cited by: §I.
  • [35] Y. Qin, H. Wang, F. Zhu, and L. Zhai (2018) A multi-objective ant colony system algorithm for virtual machine placement in traffic intense data centers. IEEE Access 6 (), pp. 58912–58923. External Links: Document, ISSN Cited by: §I.
  • [36] A. Rahimi, L. Khanli, and S. Pashazadeh (2015-06) Energy efficient virtual machine placement algorithm with balanced resource utilization based on priority of resources. Computer Engineering and Applications Journal 4, pp. 107–118. Cited by: §I.
  • [37] P. Rohlfshagen and J. Bullinaria (2007-01) A genetic algorithm with exon shuffling crossover for hard bin packing problems. In 2007 Genetic and Evolutionary Computation Conference (GECCO), pp. 1365–1371. Cited by: §IV-D.
  • [38] N. K. Sharma and G. R. M. Reddy (2019-01) Multi-objective energy efficient virtual machines allocation at the cloud data center. IEEE Transactions on Services Computing 12 (1), pp. 158–171. External Links: Document, ISSN 1939-1374 Cited by: §II-A, §V-B.
  • [39] M. Shojafar, N. Cordeschi, and E. Baccarelli (2019-01) Energy-efficient adaptive resource management for real-time vehicular cloud services. IEEE Transactions on Cloud Computing 7 (1), pp. 196–209. External Links: ISSN 2168-7161 Cited by: §I.
  • [40] Y. Song, Y. Sun, and W. Shi (2013-01) A two-tiered on-demand resource allocation mechanism for vm-based data centers. IEEE Transactions on Services Computing 6 (1), pp. 116–129. External Links: Document, ISSN Cited by: §I.
  • [41] H. ThiThanh Binh, P. Dinh Thanh, T. Ba Trung, and L. Phuong Thao (2018-07) Effective multifactorial evolutionary algorithm for solving the cluster shortest path tree problem. In 2018 IEEE Congress on Evolutionary Computation (CEC), Vol. , pp. 1–8. External Links: ISSN Cited by: §II-B.
  • [42] A. Tripathi, I. Patahk, and D. P. Vidyarthi (2017-06) Energy efficient vm placement for effective resource utilization using modified binary pso. Computer Journal 61 (6), pp. 832–846. Cited by: §II-A.
  • [43] A. Weiss (2007-12) Computing in the clouds. netWorker 11 (4), pp. 16–25. External Links: ISSN 1091-3556 Cited by: §I.
  • [44] D. Wilcox, A. McNabb, and K. Seppi (2011-06) Solving virtual machine packing with a reordering grouping genetic algorithm. In 2011 IEEE Congress of Evolutionary Computation (CEC), Vol. , pp. 362–369. External Links: Document, ISSN 1941-0026 Cited by: §II-A, §IV-D, §V-B.
  • [45] J. Wu (2012-08) Green wireless communications: from concept to reality [industry perspectives]. IEEE Wireless Communications 19 (4), pp. 4–5. External Links: ISSN Cited by: §I.
  • [46] A. Xiong and C. Xu (2014-06) Energy efficient multiresource allocation of virtual machine based on pso in cloud data center. Mathematical Problems in Engineering 2014 (), pp. 1–8. External Links: ISSN Cited by: §I, §II-A.
  • [47] X. Xu, W. Dou, X. Zhang, and J. Chen (2016-04) EnReal: an energy-aware resource allocation method for scientific workflow executions in cloud environment. IEEE Transactions on Cloud Computing 4 (2), pp. 166–179. External Links: ISSN 2168-7161 Cited by: §I, §II-A.
  • [48] C. Yang, J. Ding, Y. Jin, C. Wang, and T. Chai (2019-07) Multitasking multiobjective evolutionary operational indices optimization of beneficiation processes. IEEE Transactions on Automation Science and Engineering 16 (3), pp. 1046–1057. External Links: ISSN 1545-5955 Cited by: §II-B.
  • [49] C. Yang, J. Ding, K. C. Tan, and Y. Jin (2017-12) Two-stage assortative mating for multi-objective multifactorial evolutionary optimization. In 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Vol. , pp. 76–81. External Links: Document, ISSN Cited by: §II-B.
  • [50] X. Ye, Y. Yin, and L. Lan (2017) Energy-efficient many-objective virtual machine placement optimization in a cloud computing environment. IEEE Access 5 (), pp. 16006–16020. External Links: Document, ISSN Cited by: §I.
  • [51] F. Zhang, G. Liu, X. Fu, and R. Yahyapour (2018) A survey on virtual machine migration: challenges, techniques, and open issues. IEEE Communications Surveys Tutorials 20 (2), pp. 1206–1243. External Links: ISSN 1553-877X Cited by: §I.
  • [52] Q. Zhang, L. Cheng, and R. Boutaba (2010-05-01) Cloud computing: state-of-the-art and research challenges. Journal of Internet Services and Applications 1 (1), pp. 7–18. External Links: ISSN 1869-0238 Cited by: §I.
  • [53] X. Zheng, A. K. Qin, M. Gong, and D. Zhou (2019) Self-regulated evolutionary multi-task optimization. IEEE Transactions on Evolutionary Computation (), pp. 1–1. External Links: Document, ISSN 1089-778X Cited by: §I.
  • [54] L. Zhou, L. Feng, J. Zhong, Y. Ong, Z. Zhu, and E. Sha (2016-12) Evolutionary multitasking in combinatorial search spaces: a case study in capacitated vehicle routing problem. In 2016 IEEE Symposium Series on Computational Intelligence, Vol. , pp. 1–8. External Links: ISSN Cited by: §II-B.