I Introduction
Cloud infrastructures are widely deployed to support various emerging applications such as Google App Engine, Microsoft Windows Live Service, IBM Blue Cloud [1], augmented reality (AR), collaborative learning, and multimedia recognition and retrieval [2]. Data center (DC) operators such as Amazon and Microsoft provision cloud computing and storage services via Amazon AWS and Microsoft Azure, respectively [3]. Large-scale DCs, which are the fundamental engines for data processing, are the essential elements in cloud computing [4, 5]. Information and Communications Technology (ICT) is estimated to be responsible for about of the worldwide energy consumption by 2020 [6]. The energy consumption of DCs accounts for nearly of the total ICT energy consumption [6]. Hence, the energy consumption of DCs becomes an imperative issue. Renewable energy, which includes solar and wind, produced domestic electricity of the United States in 2011 [7], and will be widely adopted to reduce the brown energy consumption of ICT [8]. Here, brown energy refers to energy derived from nonrenewable resources (e.g., fossil fuels), which generate carbon emissions; green energy refers to energy derived from renewable resources (e.g., solar, wind, tide, etc. [9]), which do not generate carbon emissions. For example, Parasol is a solar-powered DC [10]. In Parasol, a management system called GreenSwitch is designed to manage the workloads and power supplies [10]. The availability of renewable energy varies across areas and changes over time. The workloads of DCs also vary in different areas and at different times. As a result, the renewable energy availability and the energy demands of DCs usually mismatch each other. This mismatch leads to inefficient renewable energy usage in DCs. To solve this problem, it is desirable to balance the workloads among DCs according to their green energy availability. Although current cloud computing solutions such as cloud bursting [11], VMware, and F5 [12] support inter-datacenter (inter-DC) virtual machine (VM) migration, it is not clear how to migrate VMs among renewable-energy-powered DCs to minimize their brown energy consumption.
Elastic Optical Networks (EONs), by employing orthogonal frequency division multiplexing (OFDM) techniques, not only provide a high network capacity but also enhance the spectrum efficiency because of their fine spectrum granularity [13]. The granularity in EONs can be GHz or even smaller [14]. Therefore, EONs are one of the promising networking technologies for inter-DC networks [15].
Powering DCs with renewable energy can effectively reduce the brown energy consumption, and thus alleviate greenhouse gas emissions. DCs are usually co-located with renewable energy generation facilities such as solar and wind farms [16]. Transferring renewable energy via the power grid may introduce a significant power loss of up to , and it is desirable to use the renewable energy locally (close to the source of the renewable energy generation) rather than transferring the energy back to the power grid [7].
In this paper, we investigate the renewable-energy-aware inter-DC VM migration (REAIM) problem, which maximizes the renewable energy utilization by migrating VMs among DCs. In this paper, is defined as the ratio of the maximum available network resources that can be used for VM migration over the whole network resources, and it is used to control the available network resources for VM migration. Meanwhile, is the ratio of the network resources occupied by other traffic, such as the background traffic, over the total network resources. Fig. 1 shows the architecture of an inter-DC network. The vertices in the graph stand for the optical switches in EONs. DCs are connected to the optical switches via IP routers (in this paper, we focus on the EONs; the design and optimization of the IP networks are beyond the scope of this paper). These DCs are powered by hybrid energy including brown energy, solar energy, and wind energy. For example, assume that DC lacks renewable energy while DC and DC have superfluous renewable energy. Some VMs can be migrated out of DC in order to save brown energy. Because of the background traffic and the limited network resources, migrating VMs along different paths (Path or Path ) has different impacts on the network in terms of the probability of congesting it. It is desirable to select a migration path with minimal impact on the network.
There are different types of communications. Manycast is the communication from one source node to a set of destination nodes, where this destination node set is a subset of the candidate destination node set [17]. In our previous work [18, 19], we presented the REAIM problem, which has a set of source nodes/DCs and a set of destination nodes/DCs. A DC with sufficient computing resources and abundant renewable energy can accommodate VM migrations from multiple DCs that lack renewable energy; a DC that lacks renewable energy can migrate its VMs to multiple DCs to alleviate the mismatch between the workload energy demands and the renewable energy generation. In this work, we need to migrate VMs from a few source DCs to a few destination DCs. Moreover, the migration may happen from one source DC to many DCs or from many DCs to one DC. Then, we have a source node set and a destination node set, thus resulting in many-manycast communications.
To the best of our knowledge, this is the first time the many-manycast communication paradigm has been proposed. Moreover, this is the first work to minimize the cost of the brown energy consumption of DCs in an inter-DC network overlaid on elastic optical infrastructures via VM migration. The rest of the paper is organized as follows. Section II describes the related work. Section III formulates the REAIM problem based on integer linear programming (ILP). Section IV briefly analyzes the properties of the REAIM problem, proposes a few heuristic algorithms to solve it, and discusses how to implement these algorithms in the inter-DC network. Section V demonstrates the viability of the proposed algorithms via extensive simulation results, and compares the performance of the proposed heuristic algorithms with the optimal results derived by using CVX. Section VI concludes the paper.
II Related Work
Owing to the energy demands of DCs, many techniques and algorithms have been proposed to minimize the energy consumption of DCs [20]. Ghamkhari and Mohsenian-Rad [20] developed a mathematical model to capture the tradeoff between the energy consumption of a data center and its revenue from offering Internet services. They proposed an algorithm to maximize the revenue of a DC by adapting the number of active servers according to the traffic profile. Fang et al. [21] presented a novel power management strategy for DCs, and their target was to minimize the energy consumption of switches in a DC. Cavdar and Alagoz [22] surveyed the energy consumption of servers and network devices in intra-DC networks, and showed that both computing resources and network elements should be designed with energy proportionality. In other words, it is better if the computing and networking devices can be designed with multiple sleeping states. A few green metrics are also provided by this survey, such as Power Usage Effectiveness (PUE) and Carbon Usage Effectiveness (CUE).
Deng et al. [23] presented five aspects of applying renewable energy in DCs: the renewable energy generation model, the renewable energy prediction model, the planning of green DCs (i.e., various renewable options, availability of energy sources, and different energy storage devices), intra-DC workload scheduling, and inter-DC load balancing. They also discussed the research challenges of powering DCs with renewable energy. Gattulli et al. [24] proposed algorithms to reduce emissions in DCs by balancing the loads according to the renewable energy generation. These algorithms optimize renewable energy utilization while maintaining a relatively low blocking probability. Lin and Yu [25] proposed a distributed green networking approach to save energy in intra-DC networks by shutting off unused links. Kantarci et al. [26] presented the intra- and inter-data-center VM placement problem, and their objective was to minimize the energy consumption of all DCs and optical network components in IP-over-WDM DC networks. They did not consider the renewable energy generation in this work. Meanwhile, EONs are different from WDM networks: they are more spectrum-efficient but more complicated, and scheduling requests in EONs is more complicated than in WDM networks (because of the new constraints of EONs: the path continuity constraint, the spectrum continuity constraint, and the spectrum non-overlapping constraint). Fang et al. [27] investigated the power consumption problem of the network components in backbone networks by jointly considering the data service placement and traffic flow routing.
Zhang et al. [28] studied the dynamic service placement problem in a geographically distributed data center network; they proposed a framework to solve this problem based on game-theoretic models; they also considered a cloud platform shared by multiple service providers, and tried to minimize the operational cost of each service provider [28]. Wu et al. [29] addressed the DC placement problem by jointly minimizing the brown energy consumption and the number of DCs, but they did not consider the constraints of the core network. Wu et al. [30] investigated the DC placement problem, the brown energy consumption problem, and the DC cost problem in an inter-DC network with VM migration. Here, they assumed all requests are provisioned over the shortest path with unlimited network migration bandwidth. Mandal et al. [7] studied renewable-energy-aware VM migration techniques to reduce the brown energy consumption of DCs in an IP-over-WDM network. The VMs are uniformly placed across DCs, and the end-users are initially provisioned by the VMs through the shortest path; then, the VM migration starts when the traffic load does not match the renewable energy generation: VMs in one DC, which lacks renewable energy, will be migrated to another DC, which has extra renewable energy. They proposed a migration-cost-aware algorithm to enhance the renewable energy utilization, where a VM migration happens when the migration cost (the cost of the brown energy consumption) is smaller than a predefined energy threshold; they also proposed a traffic-aware heuristic algorithm, which relaxes the migration-cost-aware algorithm according to the traffic intensity. They considered path hops and the required bandwidth in the core network in migrating VMs among DCs. However, how to allocate bandwidth for the migration requests and how to assign a path in the optical network for the VM migration were not addressed.
Buyya et al. [31] investigated the architectural elements of the InterCloud for the utility-oriented federation of cloud computing environments. Ardagna et al. [32] studied the computing resource allocation problem in a multi-tier virtualized data center, using a linear utility function that includes revenues and penalties. Ayoub et al. [33] investigated the routing and bandwidth assignment problem for an inter-DC network, and showed that the network performance can be improved by incorporating the path information in assigning bandwidth for VM migration. In our work, we assume the bandwidth required for migrating each VM is known a priori; many VMs are aggregated together to form a data stream and then transmitted from one DC to another DC via a lightpath; the same spectrum is used along the whole lightpath in EONs. VMs are taken to be independent of each other in this work; our work can be extended to multiple VMs belonging to the same application. If one application is provisioned by hosting VMs in multiple servers, we need to add two more variables to mark the set of servers selected for serving this application and the set of VMs employed in the selected servers for this application. If we migrate one application from one DC to another DC, all VMs that support this application should be migrated.
EONs provide a higher connection capacity, a lower spectrum granularity, a more flexible spectrum allocation, and more elasticity against time-varying traffic as compared to WDM networks [34]. The large amount of traffic generated by the VM migration may congest the optical network even though the optical network can provide huge bandwidth, and may increase the blocking rate of the network. EONs also incur more constraints than WDM networks in provisioning requests. Therefore, it is very challenging to investigate the REAIM problem in inter-DC networks over the elastic optical infrastructure.
In our previous work [18], we studied the REAIM problem in an inter-DC network over the elastic optical infrastructure and presented preliminary results; we did not show how to map the VMs into lightpath requests, nor did we show how to allocate network resources for user requests under a controllable backbone in migrating VMs. In this paper, we formulate the REAIM problem based on an ILP model; we use CVX and Gurobi to solve this ILP problem for small network instances and propose a few heuristic algorithms to solve the REAIM problem for large network configurations.
III Network Model and Problem Formulation
In this section, we present the network model, the energy model, and the formulation of the REAIM problem. The key notations are summarized in Table I.
Symbol  Definition 
The capacity of a link in terms of spectrum slots.  
The maximum number of servers in the th DC.  
The capacity of a frequency spectrum slot.  
The maximum number of CPU cores in one server.  
Per unit energy cost for the th DC.  
Per unit migration cost for the th DC.  
The required bandwidth for the th request in the th DC.  
The required CPU cores for the th request in the th DC.  
The traffic set.  
The migration granularity defines the maximum bandwidth (capacity of a transmitter) that can be used in one migration.  
The used spectrum slot ratio of the th path.  
The number of used spectrum slots in the th path, starting from spectrum slot , for the th request from the th DC.  
The maximum available network resources ratio.  
The maximum number of migrations allowed per DC.  
The set of VM migration requests in the th migration.  
The static power consumption of a server.  
The dynamic power consumption of a server.  
The power consumption of a server with no work load.  
The power consumption of a server with full work load.  
The power usage effectiveness (PUE).  
The power consumption of the th server in the th DC.  
The CPU utilization of the th server in the th DC.  
The amount of renewable energy generation in the th DC.  
The amount of brown energy consumption of the th DC.  
The bandwidth of a guard band for each transmission.  
The upper bound of , which equals the total bandwidth requirement of all requests in terms of spectrum slots. 
III-A Network Model
We model the inter-DC network by a graph . Here, , , and are the node set, the link set, and the spectrum slot set, respectively. The set of DC nodes is denoted as . We assume that all DCs are powered by hybrid energy. We denote as the set of DCs that do not have sufficient renewable energy to support their workloads, and as the set of DCs that have surplus renewable energy. Each DC serves its own user requests by provisioning VMs. Owing to the mismatch between the renewable energy and the workload demands, we need to migrate VMs to alleviate this mismatch. During the migration, and refer to two different sets of DCs, acting as the source and destination DCs for VM migration, respectively. We define as the migration granularity, which determines the maximum routing resource that can be used in one migration to each DC.
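As a minimal illustration of this network model (all node names, slot counts, and energy figures below are hypothetical, not values from the paper), the graph with per-link spectrum occupancy and the split of DCs into source (renewable-deficit) and destination (renewable-surplus) sets can be sketched in Python:

```python
NUM_SLOTS = 8  # |S|: spectrum slots per link (toy value)

# Undirected links, each carrying a slot-occupancy list:
# True = slot already occupied by background traffic.
links = {
    ("A", "B"): [False] * NUM_SLOTS,
    ("B", "C"): [False] * NUM_SLOTS,
}

# Per-DC status: renewable energy available vs. energy demanded by its VMs.
dcs = {
    "A": {"renewable": 5.0, "demand": 9.0},
    "B": {"renewable": 7.0, "demand": 3.0},
    "C": {"renewable": 4.0, "demand": 4.0},
}

def partition_dcs(dcs):
    """Split DCs into migration sources (deficit) and destinations (surplus)."""
    src = {d for d, s in dcs.items() if s["demand"] > s["renewable"]}
    dst = {d for d, s in dcs.items() if s["renewable"] > s["demand"]}
    return src, dst

src, dst = partition_dcs(dcs)
print(src, dst)  # {'A'} {'B'}
```

A DC whose demand exactly matches its renewable supply (here, "C") ends up in neither set, so it neither sends nor receives migrations in that cycle.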
III-B DC Model and Energy Model
Each request is provisioned by a VM, and different VMs may require different CPU resources. We assume all DCs have the same size, all servers in each DC have the same configuration, and the power consumption of each server consists of a static part (fixed) and a dynamic part (varying with the load).
Denote , , , and as the static, dynamic, idle, and peak power consumption of a server, respectively. and refer to a server in the no-workload state and the full-workload state, respectively. is the power consumption of the th server in the th DC as shown in Eq. (1), and the CPU utilization of this server is defined as () [20]. The relationship among , , , and is shown in Eq. (1). Here, is the number of used CPU cores of a server; is the maximum number of CPU cores in one server; is the PUE of a DC. PUE is defined as the total power consumption of a DC (including lighting, cooling, etc. [35]) over the power consumption of all servers in this DC. We also assume the idle servers are not turned off; a server is in the idle state when it does not host any VM; otherwise, it is active. is set as , as watts, and as watts according to [20].
$P = P_s + u \cdot P_d = P_{idle} + \frac{n}{N}\,(P_{peak} - P_{idle})$  (1) 
Since the energy consumption of the core network equipment is very small as compared to that of the DC, we only consider the energy consumption of the DCs in this work. Denote as the amount of renewable energy available at the beginning of a migration cycle in the th DC. Here, a migration cycle is the time interval, from the start time when the SDN controller refreshes the energy and workload status of all DCs for VM migration to the start time of the next update. Then, the brown energy consumption of the th DC can be expressed in Eq. (2).
$E_b = \max\!\left(0,\; \mathrm{PUE} \cdot T \cdot \sum P - E_r\right)$  (2) 
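The DC energy model just described can be sketched as follows. The constants are illustrative placeholders, not the values used in the paper; the max(0, ...) clamp encodes the assumption that brown energy consumption cannot be negative when renewables exceed demand:

```python
PUE = 1.3          # power usage effectiveness (toy value)
P_IDLE = 100.0     # W, server power at zero utilization (illustrative)
P_PEAK = 200.0     # W, server power at full utilization (illustrative)
N_CORES = 8        # CPU cores per server

def server_power(used_cores):
    """Eq. (1)-style model: static part plus utilization-scaled dynamic part."""
    u = used_cores / N_CORES           # CPU utilization of the server
    return P_IDLE + u * (P_PEAK - P_IDLE)

def brown_energy(used_cores_per_server, renewable, hours=1.0):
    """Eq. (2)-style model: facility energy not covered by renewables (Wh)."""
    it_power = sum(server_power(c) for c in used_cores_per_server)
    total = PUE * it_power * hours     # PUE scales IT power to facility power
    return max(0.0, total - renewable)

# Two servers, at half and full load, one-hour cycle, 300 Wh of renewables:
print(brown_energy([4, 8], renewable=300.0))  # 155.0 with these toy numbers
```

Note that idle servers still draw P_IDLE, matching the assumption above that idle servers are not turned off.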
III-C Problem Formulation
The following variables are defined to formulate the REAIM problem.
equals 1 if the th server in the th DC is used to provision the th request; otherwise, it is 0.
equals 1 if the th path is used in the th migration from the th DC; otherwise, it is 0.
equals 1 if the th request from the th DC is migrated to the th server of the th DC; otherwise, it is 0.
equals 1 if the th request in the th DC is migrated out; otherwise, it is 0.
: a nonnegative integer variable, which represents the required network bandwidth in the th migration from the th DC to the th DC, and it is 0 if no migration happens.
: a nonnegative integer variable, which represents the required network bandwidth of the th path in the th migration from the th DC, and it is 0 if no migration happens.
: a nonnegative integer variable, which represents the index of the starting spectrum slot in the th path in the th migration from the th DC, and it is a positive integer if the th migration from the th DC is provisioned by using the th path; otherwise, it is 0.
The objective of the REAIM problem is to minimize the total cost of the brown energy consumption of all DCs via VM migration, subject to the DC service constraints, the transition constraints, and the network constraints. Eq. (3) is the objective function, where the first term represents the cost of the total brown energy consumption, the second term is the cost of VM migration in the DCs, and the third term is the cost of VM migration in the network. To emphasize the cost of the brown energy consumption, is relatively small as compared to . Here, Eqs. (4)-(8) are the DC service constraints, Eqs. (9)-(14) are the transition constraints, and the network constraints are shown in Eqs. (15)-(22). The problem is formulated as follows.
Eq. (4) constrains each request to be provisioned exactly once, i.e., a request can be provisioned locally or migrated to another DC. Eq. (5) constrains each request to be migrated no more than once. Eq. (6) is the server CPU capacity constraint, which ensures that the number of used CPU cores does not exceed the available ones. Eqs. (7)-(8) impose the brown energy consumption constraint, which is transformed from Eq. (2).
Eq. (9) is the VM migration bandwidth constraint, which ensures that the bandwidth used from a DC is greater than or equal to the bandwidth of all migrated VMs. Eqs. (10)-(12) are the path selection constraints, which ensure that only one path is selected for each migration. Eqs. (10)-(11) ensure that the required network bandwidth for one migration is not zero when a path is used for migration. Eq. (12) ensures that only one path is used for each migration. Here, is the path set from the th DC to the th DC for all migrations, and is the path set from the th DC for the th migration. Here, is the upper bound of , which equals the total bandwidth requirement in terms of spectrum slots. Eq. (13) is the DC migration bandwidth constraint, which ensures that the required migration bandwidth from a DC equals the total bandwidth of all used lightpaths of this DC. Eq. (14) is the migration bandwidth capacity constraint, which ensures that the used bandwidth of a lightpath is smaller than the maximum bandwidth (capacity of the transceiver) in one transmission. Here, is the bit rate per symbol of a path, which is determined by the modulation format. For example, if BPSK is used for the modulation.
(3)  
(4)  
(5)  
(6)  
(7)  
(8)  
(9)  
(10)  
(11)  
(12)  
(13)  
(14)  
(15)  
(16)  
(17)  
(18) 
(19)  
(20)  
(21)  
(22) 
Eq. (15) constrains the network congestion ratio to be less than , which is the maximum network congestion ratio allowed for routing in the network. In Eq. (15), is the spectrum slot ratio of the th path, which is defined as the ratio of the number of occupied spectrum slots in the th path to the total number of spectrum slots of this path. is defined as the number of used spectrum slots in the th path, starting from spectrum slot , for the th migration from the th DC.
Eq. (16) is the starting spectrum slot constraint, which ensures that the starting spectrum slot index of a path is a positive integer if this path is selected, and zero if this path is not selected. Eq. (17) is the spectrum allocation constraint, which ensures that the bandwidth of the spectrum allocated to a path equals the provisioned bandwidth. Here, is the required bandwidth of the th path for the th request from the th DC, and equals zero if no migration happens. Eq. (18) is the available network bandwidth capacity constraint, which ensures that the bandwidth used in migrating VMs does not exceed the total available network resources.
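An Eq. (15)/(18)-style congestion check can be sketched as follows. This is a simplified, hypothetical reading of the constraint (per-link used-slot counts and a bottleneck-link ratio, rather than the paper's exact path-ratio definition), with the maximum available network resources ratio passed in as `alpha`:

```python
def congestion_ok(path_links, link_usage, num_slots, extra_slots, alpha):
    """After adding `extra_slots` on every link of the path, the most-loaded
    link's slot ratio must stay at or below `alpha` (the maximum available
    network resources ratio)."""
    worst = max(link_usage[link] for link in path_links)
    return (worst + extra_slots) / num_slots <= alpha

usage = {("A", "B"): 4, ("B", "C"): 10}
path = [("A", "B"), ("B", "C")]
print(congestion_ok(path, usage, 16, 2, 0.8))  # 12/16 = 0.75  -> True
print(congestion_ok(path, usage, 16, 4, 0.8))  # 14/16 = 0.875 -> False
```

A migration whose check fails would be denied or rerouted, which is exactly the control knob that `alpha` provides over how much spectrum VM migration may consume.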
(23)  
(24) 
is a Boolean variable as defined in Eq. (23), which equals 1 if the starting spectrum slot index of the th path in the th migration from the th DC is smaller than that of the th path for the th migration from the th DC; otherwise, it is 0. Since this definition is not linear, it is transformed into Eqs. (19)-(20). Eqs. (21)-(22) are the spectrum non-overlapping and continuity constraints. The spectrum non-overlapping constraint ensures that the spectrum used for two different paths does not overlap when the paths have one or more common links. Here, is a Boolean indicator as defined in Eq. (24), which equals 1 if the path used in the th migration from the th DC and that used in the th migration from the th DC have at least one common link; otherwise, it is 0. is a function to get the path information (all nodes in the path). We use an example to illustrate these equations. For example, if and , then Eq. (20) ensures , Eq. (19) is relaxed, and . Eq. (21) becomes Eq. (25), which ensures the spectrum non-overlapping constraint and the continuity constraint. Eq. (22) is automatically satisfied in this case. After that, Eq. (25) is transformed to Eq. (26) when (), which ensures that the starting spectrum slot index of the th path in the th migration from the th DC is greater than the total occupied spectrum slots of the th path in the th migration from the th DC plus a guard band.
(25)  
(26) 
In provisioning spectrum slots for requests in EONs, the path continuity constraint, the spectrum continuity constraint, and the non-overlapping constraint must be considered. For the path continuity constraint, a lightpath must use the same subcarriers along the whole path for a request. For the spectrum continuity constraint, the chosen subcarriers must be contiguous if a request needs more than one subcarrier. For the non-overlapping constraint, two different lightpaths must be assigned different subcarriers if they have one or more common links. Since we use a path-based method to formulate the REAIM problem, the path continuity constraint of the network is already taken into account. Meanwhile, a guard band is required for each transmission.
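The spectrum continuity and non-overlapping constraints can be illustrated with a small check. The helper names are hypothetical, and a one-slot guard band is assumed for the sake of the example:

```python
def slots_of(start, width):
    """Contiguous slot indices [start, start + width): spectrum continuity."""
    return set(range(start, start + width))

def non_overlapping(a_start, a_width, b_start, b_width, share_link, guard=1):
    """Eq. (21)-(26)-style check: two lightpaths that share at least one
    link must occupy disjoint spectrum, separated by `guard` slots."""
    if not share_link:
        return True  # disjoint paths may reuse the same slots
    a = slots_of(a_start, a_width + guard)
    b = slots_of(b_start, b_width + guard)
    return not (a & b)

print(non_overlapping(0, 3, 4, 2, share_link=True))  # {0..3} vs {4..6} -> True
print(non_overlapping(0, 3, 3, 2, share_link=True))  # guard slot 3 collides -> False
```

Because the formulation is path-based, only the pairwise slot ranges of lightpaths sharing a link need checking; path continuity is handled by construction.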
IV Problem Analysis and Heuristic Algorithms
IV-A Problem Analysis
The REAIM problem is tackled in three steps: i) determining the VM migration requests in the DCs, ii) performing routing and spectrum allocation (RSA) in the EONs, and iii) allocating computing resources in the DCs. To solve the REAIM problem, both the energy costs in the DCs and the network resources required for the migration should be considered. For example, when a DC consumes brown energy, it is desirable to migrate some VMs to other DCs; a path should be allocated for routing from the source DC to the destination DC, and spectrum slots should be assigned along this path to carry out the migration. Therefore, it is challenging to solve REAIM, which is proven to be NP-hard by reducing any instance of the multiple knapsack problem to the REAIM problem, as detailed in the Appendix.
In the REAIM problem, the optimal workload distribution is calculated based on the current workload demands, the green energy generation, and the availability of the network resources. The optimal solution is then derived from the optimal workload distribution by using the CVX toolbox. Since it is difficult to obtain the optimal workload distribution, we use a suboptimal workload distribution instead, which is calculated by relaxing the network constraints. Then, the heuristic algorithms execute the migration according to this suboptimal workload distribution. In the REAIM problem, many VMs are migrated from many source DCs to many destination DCs, so it is a many-manycast communications problem. Since the REAIM problem is NP-hard, we propose a few heuristic algorithms to solve it by splitting the many-manycast communications into many anycast communications. These algorithms determine which VM should be migrated to which DC and select a proper routing path in the network to avoid congesting it; they are the Anycast with Shortest Path routing (AnycastSP) algorithm, the Anycast with Maximum bandwidth Path routing (AnycastMP) algorithm, the Anycast with Ergodic Path routing (AnycastEP) algorithm, and the Anycast with Joint Resources Ergodic routing (AnycastJRE) algorithm.
IV-B Heuristic Algorithms
For all heuristic algorithms, the inputs are the traffic set , the available green energy , the DC set , and ; the outputs are the DC set that lacks renewable energy, the DC set that has abundant renewable energy, and the migration request set from to for each migration cycle.
The AnycastSP algorithm, as shown in Alg. 1, finds the shortest routing path that satisfies the VM migration requirement and the network resource constraints. The migration will try to use the shortest path from to ; the request set is carried out if the network congestion constraint is satisfied; otherwise, the migration is denied. Afterward, we update and for the next migration. After many rounds of migration, if or is empty, or Eq. (9) is not satisfied, the migration is terminated. Details of the AnycastSP algorithm are described in Algorithm 1.
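The path-selection core of AnycastSP can be sketched as follows. This is a simplified stand-in, not Algorithm 1 itself: hop-count BFS over an unweighted toy topology, with the destination chosen anycast-style from the surplus-DC set, and without the spectrum-slot bookkeeping and congestion checks of the full algorithm:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """Hop-count shortest path via BFS (stand-in for the paper's routing)."""
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:       # walk predecessors back to src
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None                        # dst unreachable

def anycast_sp(adj, src, dst_set):
    """Pick the surplus DC reachable in the fewest hops (anycast)."""
    best = None
    for d in dst_set:
        p = shortest_path(adj, src, d)
        if p and (best is None or len(p) < len(best)):
            best = p
    return best

adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(anycast_sp(adj, "A", {"C", "D"}))  # ['A', 'B', 'C']
```

In the full algorithm, the returned path would additionally have to pass the congestion-ratio and spectrum-availability checks before the migration is carried out.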
The complexity of AnycastSP is . Here, is the complexity to determine the suboptimal workloads, is the complexity to determine the starting spectrum slot index, and is the complexity of building the VM set for the migration. is the complexity of determining the path for AnycastSP.
When the workload of the network is heavy or too many VMs need to be migrated, it is difficult for the AnycastSP algorithm to find a shortest path with available spectrum slots. Then, AnycastSP may block some migration requests, leading to a high brown energy consumption of the DCs. Here, we propose another benchmark algorithm (AnycastMP), which places more weight on the network resources. AnycastMP checks shortest paths from the source node to the destination node, and picks the idlest one to provision the migration requests. It aims to find a path with more available spectrum slots at the expense of a higher complexity. The main difference between AnycastMP and AnycastSP is the way a path is determined. Details of the AnycastMP algorithm are described in Algorithm 2. The complexity of AnycastMP is . Here, is the complexity to determine the suboptimal workloads, that to determine the starting spectrum slot index, and that to build the VM set for the migration. is the complexity of determining the path for AnycastMP. The most complex part is to determine the set of VMs for the migration.
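AnycastMP's "idlest path" choice can be sketched as follows (hypothetical names; the k candidate shortest paths are assumed to be precomputed, and "idlest" is read as the path whose bottleneck link has the most spare spectrum):

```python
def free_slots(path_links, link_usage, num_slots):
    """Spare slots on the path's bottleneck (most-loaded) link."""
    return num_slots - max(link_usage[link] for link in path_links)

def idlest_path(candidate_paths, link_usage, num_slots):
    """AnycastMP-style choice: among k precomputed shortest paths, pick the
    one whose bottleneck link leaves the most spectrum free."""
    return max(candidate_paths,
               key=lambda p: free_slots(p, link_usage, num_slots))

usage = {("A", "B"): 10, ("B", "C"): 2, ("A", "D"): 3, ("D", "C"): 4}
paths = [[("A", "B"), ("B", "C")],   # bottleneck: 10 used slots
         [("A", "D"), ("D", "C")]]   # bottleneck:  4 used slots
print(idlest_path(paths, usage, 16))  # picks the second path
```

Compared with always taking the single shortest path, scanning k candidates costs more per migration but spreads load away from congested links.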
To better address the REAIM problem, two ergodic routing algorithms, AnycastJRE and AnycastEP, are proposed. AnycastJRE checks shortest paths from the source node to the destination node, also checks the available computing resources of the destination DCs, and picks the path with the maximum weight to provision the migration requests. A DC with a smaller energy migration requirement will be added to if this migration fails, and this DC will be removed from and . The migration will continue until or is empty. Details of the AnycastJRE algorithm are described in Algorithm 3. The complexity of AnycastJRE is . Here, is the complexity to determine the suboptimal workloads, that to determine the starting spectrum slot index, and that to build the VM set for the migration. is the complexity of calculating the weight . is the complexity of determining the path for AnycastJRE. The most complex parts are to determine the set of VMs for the migration and to calculate the weight.
(27) 
(28) 
AnycastJRE with Ergodic Path routing becomes AnycastEP if Eq. (27) is replaced by Eq. (28) to calculate the weight and step is deleted. The complexity of AnycastEP is , which is the same as that of AnycastJRE except that the complexity of calculating the weight is . The most complex part is to determine the set of VMs for the migration.
IV-C Algorithm Implementation
Here, we show how to implement the algorithms in the inter-DC network. The core optical network has a centralized architecture, and optical circuit switching (OCS) is employed for switching data. There is a controller in this network, and our algorithms run inside the controller. In each migration cycle, each DC exchanges its green energy information, workload information, server information, and network utilization information with the controller. It takes a very short time (less than one second or a few seconds) for the heuristic algorithms to compute the migration. Here, one migration cycle is set as one hour in our simulation. When the controller receives the information from all DCs, it runs one of the four proposed heuristic algorithms to calculate the VM migration requests. After that, these VMs are migrated among DCs through the EON. The whole process is repeated until it is terminated by the controller.
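The controller's per-cycle workflow can be sketched as follows. The planning stub is purely illustrative (it is not one of the four proposed heuristics), and the status dictionaries stand in for the energy/workload reports the DCs exchange with the controller:

```python
def migration_cycle(status, plan):
    """One controller cycle (sketch): gather each DC's green-energy and
    workload report, run a planning heuristic, and return the migration
    requests to carry out over the EON in this cycle."""
    return list(plan(status))

def greedy_plan(status):
    """Illustrative stub: point every deficit DC at the DC with the
    largest renewable surplus."""
    surplus = max(status,
                  key=lambda d: status[d]["renewable"] - status[d]["demand"])
    for d, s in status.items():
        if s["demand"] > s["renewable"] and d != surplus:
            yield (d, surplus)   # (source DC, destination DC)

status = {"A": {"renewable": 2, "demand": 5},
          "B": {"renewable": 9, "demand": 3},
          "C": {"renewable": 4, "demand": 4}}
print(migration_cycle(status, greedy_plan))  # [('A', 'B')]
```

In the paper's setting, `plan` would be one of AnycastSP/MP/EP/JRE, and the cycle repeats every hour until the controller terminates the process.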
V Performance Evaluations
The algorithms for solving the REAIM problem are evaluated in this section. MATLAB is used for the simulations, which are run on a Dell desktop with an Intel Core CPU and GB of RAM. In this work, we assume VM migration is carried out in every migration cycle, and a migration cycle is set as one hour. Meanwhile, the user requests are provisioned by VMs and last for one hour or more. The VMs that run for more than one hour may be migrated in every migration cycle, depending on the workload distribution and the availability of the renewable energy. We use CVX [37] with Gurobi [38] to solve the ILP problem.
2 requests per DC  3 requests per DC  
Algorithms  Obj  Obj2  Time (sec)  Obj  Obj2  Time (sec) 
Without Migration  1849.5  1849.5  0  1742.6  1742.6  0 
ILP  1169.9  1168.7  224.29  1160  1158.9  2035 
AnycastSP  1484.8  1484.1  0.02  1501.9  1501.2  0.02 
AnycastMP  1432.8  1432.1  0.04  1444.1  1443.4  0.04 
AnycastEP  1216.3  1215.3  0.07  1311.1  1310.1  0.09 
AnycastJRE  1213  1211.8  0.09  1186.4  1185.2  0.09 
V-A Evaluations of small-scale problems
We evaluate the performance of the heuristic algorithms and compare the results with the optimal results derived by using CVX for small network configurations: 2 and 3 user requests per DC for the six-node topology, and 2 user requests per DC for the NSF topology. Each simulation is repeated five times to obtain the average value. Tables II and III summarize the simulation results for the six-node topology (the same as Fig. 1, with a length of 1200 for each link) and the NSF topology (Fig. 2), respectively. The user requests for each DC are generated according to a Poisson distribution. There are and DCs for the six-node topology and NSF topology, respectively. is set to and for the six-node topology and NSF topology, respectively. is set to server, and the price of electricity is randomly chosen in a range of cents. The other parameters used for our evaluations are the same as in Table IV.
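The Poisson request generation mentioned above can be sketched with Knuth's classic sampling algorithm. This is an illustrative assumption, not the paper's actual generator; `poisson_requests` and the arrival rate λ = 2 are hypothetical choices matching the "2 requests per DC" setting.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method: count uniform draws until their running product
    drops below e^(-lam); the count minus one is Poisson(lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        p *= rng.random()
        k += 1
    return k - 1

def poisson_requests(num_dcs, lam, seed=0):
    """Draw an independent Poisson-distributed request count for each DC."""
    rng = random.Random(seed)
    return [poisson_sample(lam, rng) for _ in range(num_dcs)]

# Per-DC request counts for a six-node topology with mean 2 requests per DC.
print(poisson_requests(num_dcs=6, lam=2.0, seed=0))
```

Knuth's method is exact but its running time grows with λ, which is fine for the small per-DC arrival rates used in these evaluations.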
Table III: NSF topology, 2 requests per node
Algorithms  Obj  Obj2  Time (sec)
Without Migration  2427.9  2427.9  0 
ILP  1275  1273.4  1533.5 
AnycastSP  1844.7  1844  0.02 
AnycastMP  2269.5  2269.3  0.03 
AnycastEP  1342.7  1341.1  0.09 
AnycastJRE  1321.6  1320  0.07 
Table IV: Simulation parameters
Network topology  NSFNET
{1, 2, 3, …, 14}
(remaining entries lost in extraction; surviving units and values: CPU cores, servers, cents, per unit, W, spectrum slots, Gbps, paths, 100 Gbps, requests per node)
Here, “Obj” refers to the objective defined in Eq. (3), and “Obj2” refers to the total cost of brown energy consumption. Table II shows the results of all heuristic algorithms and the optimal results derived by using CVX for the six-node topology. The numbers of migrations are set to and , respectively. The performance of AnycastEP and AnycastJRE is very close to the optimal results. AnycastMP performs slightly better than AnycastSP. Even with only 2 user requests per DC, it takes nearly four minutes to compute the optimal result, and it takes nearly 34 minutes with 3 user requests per DC.
Table III shows the results of all heuristic algorithms and the optimal result derived by using CVX for the NSF topology. The gap between the optimal result and that of AnycastEP is , and the gap between the optimal result and that of AnycastJRE is . AnycastMP performs slightly worse than AnycastSP, because AnycastMP places more weight on the bandwidth of the network, and the destination DC selected by AnycastMP may not have sufficient computing resources.
V-B Evaluations of large-scale problems
The NSFNET topology [39, 40] consists of 14 nodes, with DCs located at a subset of the nodes. The DCs are assumed to be equipped with wind turbines and solar panels, which provide the DCs with renewable energy. The constant is set as the price of electricity [41]. The capacity of a spectrum slot is set to 12.5 Gbps. The capacity of the network is set to spectrum slots, and is the number of available spectrum slots for migration. The maximum number of shortest paths that can be used in AnycastJRE (EP) is . The VM bandwidth requirement is randomly selected from a range of Gbps, and the computing requirement is randomly selected from a range of CPU cores. The parameters used for the evaluation are summarized in Table IV.
We repeat the simulation times. Fig. 3 shows the total cost of brown energy consumption of the different provisioning strategies when equals . All algorithms save substantial brown energy as compared to the strategy without migration. AnycastSP, AnycastMP, AnycastEP, and AnycastJRE save up to , , , and of the cost of brown energy as compared with the strategy without migration, respectively. AnycastJRE and AnycastEP achieve better performance because they check more possible paths and DCs, while AnycastSP and AnycastMP stop immediately when one migration fails. Fig. 4 shows the total cost of brown energy consumption of the different provisioning strategies when equals . AnycastSP, AnycastMP, AnycastEP, and AnycastJRE save up to , , , and of the cost of brown energy as compared with the strategy without migration, respectively. Nearly all algorithms achieve better performance under the second setting as compared to the first. AnycastMP always uses the path with the largest bandwidth to provision the VM migration requests, and thus can accommodate the fewest VM migrations.
Figs. 3–4 show that the total cost of the brown energy consumption increases as the work load increases; migration cannot reduce the cost of the brown energy consumption much under a heavy work load as compared to the strategy without migration, because the renewable energy of all DCs is nearly fully utilized by their own work loads. However, this does not mean that migration is useless, but rather that less migration is needed for heavy work loads.
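The saturation effect described above can be made concrete with a toy two-DC example. The cost model brown = price × max(0, demand − green) and all numbers below are illustrative assumptions, not the paper's parameters.

```python
def brown_cost(demand, green, price=1.0):
    """Brown-energy cost of one DC: pay only for demand not covered by green energy."""
    return price * max(0.0, demand - green)

# Light load: migrating 20 units of work from DC1 to DC2 uses DC2's spare
# green energy, so the total brown cost drops from 20 to 0.
light_before = brown_cost(50, 30) + brown_cost(10, 40)
light_after = brown_cost(30, 30) + brown_cost(30, 40)

# Heavy load: both DCs already exceed their green supply, so the same
# migration merely shifts brown consumption between DCs.
heavy_before = brown_cost(90, 30) + brown_cost(80, 40)
heavy_after = brown_cost(70, 30) + brown_cost(100, 40)

print(light_before, light_after)  # 20.0 0.0
print(heavy_before, heavy_after)  # 100.0 100.0
```

Once every DC's demand exceeds its green supply, the max(0, ·) term never clips, the total cost becomes linear in total demand, and no rebalancing of work load can reduce it.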
In order to obtain a better analysis, we evaluate the algorithms under different values of , as shown in Figs. 5–7. For a fixed value, all algorithms show that the total cost of brown energy consumption increases as the traffic load increases. The results also show that the total cost of brown energy consumption decreases as increases when the traffic load is fixed, because more network bandwidth resource is available for VM migration for a larger value. AnycastJRE incurs the highest computing complexity and hence achieves the lowest cost of the brown energy consumption.
Figs. 8–9 show the number of migrations under the two settings, respectively. AnycastMP incurs the fewest migrations in these two figures; AnycastSP requires a few more migrations than AnycastMP; AnycastJRE incurs the most migrations; and AnycastEP is comparable to AnycastJRE in terms of the required number of migrations. This is why AnycastEP’s performance is close to that of AnycastJRE. Since the migration granularity is set as Gbps, the maximum bandwidth capacity of each migration is Gbps. All these results show that more migrations incur a lower cost of brown energy consumption, because migration reduces the brown energy consumption of dirty DCs and improves the renewable energy utilization efficiency of green DCs.
VI Conclusion
Inter-DC VM migration brings additional traffic to the network, and the VM migration is constrained by the network capacity, rendering inter-DC VM migration a great challenge. This is the first work that addresses the emerging renewable energy-aware inter-DC VM migration problem in the inter-DC network over the elastic optical infrastructure. The REAIM problem is a many-to-many communications problem, and the main contribution of this paper is to minimize the total cost of the brown energy consumption of the DCs via VM migration, with consideration of the available network resources in an inter-DC network over the elastic optical infrastructure. The REAIM problem is formulated as an ILP problem and proven to be NP-hard. The results of the proposed heuristic algorithms are compared with the optimal results for small network configurations. We propose a few heuristic algorithms for large network configurations, and their viability in minimizing the cost of the brown energy consumption in inter-DC migration has been demonstrated via extensive simulations.
Appendix A Proof that the REAIM problem is NP-hard
The REAIM problem includes migrating, routing, and spectrum assignment (MRSA). Here, migrating refers to finding a feasible destination DC for the migration, and RSA is required to find a path and assign one (or a few) spectrum slot(s) to each request. We reduce any instance of the multiple knapsack problem to the REAIM problem.
The multiple knapsack problem can be defined as follows: given a set of items I and a set of knapsacks K, each item i ∈ I is characterized by a weight w_i, a volume v_i, and a value p_i, and each knapsack k ∈ K is limited by a volume capacity V_k and a weight capacity W_k [42]. The objective is to place items in the knapsacks, without exceeding the capacity limits of any knapsack, such that the total value of the items placed in all knapsacks is maximized.
For the REAIM problem, each migration request is mapped into an item i and each DC into a knapsack k. The computing resource and the bandwidth resource requirements of the request are mapped into the volume v_i and the weight w_i, respectively. The cost of the brown energy of the i-th request is mapped into the value p_i. The maximum available computing resource in a DC is mapped into the volume capacity V_k, and the maximum number of available spectrum slots in the EON into the weight capacity W_k. The objective of minimizing the cost of brown energy is equivalent to maximizing the green energy utilization weighted by the cost of brown energy. Hence, without considering the routing and spectrum assignment, any instance of the multiple knapsack problem can be reduced to the REAIM problem. Since the multiple knapsack problem is NP-hard [42], the REAIM problem is also NP-hard. Additionally, the routing and spectrum assignment problem has been proved to be NP-hard [Christodoulopoulos2011]. As a result, the REAIM problem is NP-hard.
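The mapping above can be made concrete with a tiny brute-force two-capacity multiple knapsack solver. This is only a didactic sketch of the reduction, with hypothetical names and toy numbers; it is exponential and not one of the paper's algorithms.

```python
from itertools import product

def best_assignment(items, knapsacks):
    """items: list of (volume, weight, value) triples, i.e., each migration
    request's (CPU demand, spectrum-slot demand, brown-energy cost).
    knapsacks: list of (volume_cap, weight_cap) pairs, i.e., each DC's
    (CPU capacity, spectrum capacity). Returns the maximum total value
    over all feasible placements (each item goes to one DC or stays put)."""
    n_k = len(knapsacks)
    best = 0
    # choice[i] in 0..n_k-1 places item i in that knapsack; n_k means unplaced.
    for choice in product(range(n_k + 1), repeat=len(items)):
        vol = [0.0] * n_k
        wt = [0.0] * n_k
        val = 0
        feasible = True
        for (v, w, p), k in zip(items, choice):
            if k == n_k:
                continue  # item left out of every knapsack
            vol[k] += v
            wt[k] += w
            val += p
            if vol[k] > knapsacks[k][0] or wt[k] > knapsacks[k][1]:
                feasible = False
                break
        if feasible:
            best = max(best, val)
    return best

# Two VMs competing for one green DC with limited CPU and spectrum slots:
items = [(4, 2, 10), (3, 2, 6)]   # (CPU cores, slots, brown cost avoided)
knapsacks = [(5, 3)]              # (CPU capacity, slot capacity)
print(best_assignment(items, knapsacks))  # 10: only the first VM fits
```

The example shows why the problem is combinatorial even before routing and spectrum assignment are considered: the two VMs together exceed the DC's capacities, so the solver must choose the subset maximizing avoided brown-energy cost.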
References
 [1] M. Sadiku, S. Musa, and O. Momoh, “Cloud computing: Opportunities and challenges,” IEEE Potentials, vol. 33, no. 1, pp. 34–36, Jan. 2014.
 [2] L. Yang, J. Cao, S. Tang, D. Han, and N. Suri, “Run time application repartitioning in dynamic mobile cloud environments,” IEEE Trans. on Cloud Computing, vol. 4, no. 3, pp. 336–348, Jul. 2016.
 [3] A. Jonathan, M. Ryden, K. Oh, A. Chandra, and J. Weissman, “Nebula: Distributed edge cloud for data intensive computing,” IEEE Trans. on Parallel and Distributed Systems, vol. PP, no. 99, pp. 1–1, Jun. 2017.
 [4] Y. Zhang and N. Ansari, “HERO: Hierarchical energy optimization for data center networks,” IEEE Systems Journal, vol. 2, no. 9, pp. 406–415, Jun. 2015.
 [5] Y. Zhang and N. Ansari, “On architecture design, congestion notification, TCP incast and power consumption in data centers,” IEEE Communications Surveys Tutorials, vol. 15, no. 1, pp. 39–64, Jan. 2013.
 [6] M. Pickavet, W. Vereecken, S. Demeyer, P. Audenaert, B. Vermeulen, C. Develder, D. Colle, B. Dhoedt, and P. Demeester, “Worldwide energy needs for ICT: The rise of power-aware networking,” in Proc. ANTS, pp. 1–3, Dec. 2008.
 [7] U. Mandal, M. Habib, S. Zhang, B. Mukherjee, and M. Tornatore, “Greening the cloud using renewableenergyaware service migration,” IEEE Network, vol. 27, no. 6, pp. 36–43, Nov. 2013.
 [8] T. Han and N. Ansari, “Powering mobile networks with green energy,” IEEE Wireless Communications, vol. 21, no. 1, pp. 90–96, Feb. 2014.
 [9] D. Zeng, J. Zhang, S. Guo, L. Gu, and K. Wang, “Take renewable energy into CRAN toward green wireless access networks,” IEEE Network, vol. 31, no. 4, pp. 62–68, Jul. 2017.
 [10] I. Goiri, W. Katsak, K. Le, T. Nguyen, and R. Bianchini, “Designing and managing data centers powered by renewable energy,” IEEE Micro, vol. 34, no. 3, pp. 8–16, May 2014.
 [11] T. Wood, K. Ramakrishnan, P. Shenoy, J. van der Merwe, J. Hwang, G. Liu, and L. Chaufournier, “CloudNet: Dynamic pooling of cloud resources by live WAN migration of virtual machines,” IEEE/ACM Trans. on Networking, vol. PP, no. 99, pp. 1–16, Aug. 2014.
 [12] Enabling long distance live migration with f5 and VMware vMotion. [Online]. Available: https://f5.com/resources/whitepapers/enablinglongdistancelivemigrationwithf5andvmwarevmotion
 [13] W. Shieh, X. Yi, and Y. Tang, “Transmission experiment of multigigabit coherent optical OFDM systems over 1000km SSMF fibre,” IET Electronics Letters, vol. 43, no. 3, pp. 183–184, Feb. 2007.
 [14] J. Armstrong, “OFDM for optical communications,” IEEE J. of Lightwave Technology, vol. 27, pp. 189–204, Feb. 2009.
 [15] C. Develder et al., “Optical networks for grid and cloud computing applications,” Proceedings of the IEEE, vol. 100, pp. 1149–1167, May 2012.
 [16] S. Figuerola et al., “Converged optical network infrastructures in support of future internet and grid services using IaaS to reduce GHG emissions,” IEEE J. of Lightwave Tech., vol. 27, no. 12, pp. 1941–1946, Jun. 2009.
 [17] A. Fallahpour, H. Beyranvand, and J. Salehi, “Energyefficient manycast routing and spectrum assignment in elastic optical networks for cloud computing environment,” IEEE J. of Lightwave Tech., vol. 33, no. 19, pp. 4008–4018, Oct. 2015.
 [18] L. Zhang, T. Han, and N. Ansari, “Renewable energyaware interdatacenter virtual machine migration over elastic optical networks,” in Proc. IEEE Cloudcom 2015, Nov. 30 – Dec. 2, 2015.
 [19] L. Zhang, T. Han, and N. Ansari, “Renewable energyAware interdatacenter virtual machine migration over elastic optical networks,” NJIT Advanced Networking Lab., Tech. Rep. TRANL2015005; also archived in Computing Research Repository (CoRR), arXiv:1508.05400, 2015.
 [20] M. Ghamkhari and H. Mohsenian-Rad, “Energy and performance management of green data centers: A profit maximization approach,” IEEE Trans. on Smart Grid, vol. 4, no. 2, pp. 1017–1025, Jun. 2013.
 [21] S. Fang et al., “Energy optimizations for data center network: Formulation and its solution,” in Proc. IEEE GLOBECOM, pp. 3256–3261, Dec. 2012.
 [22] D. Cavdar and F. Alagoz, “A survey of research on greening data centers,” in Proc. GLOBECOM, pp. 3237–3242, Dec. 2012.
 [23] W. Deng, F. Liu, H. Jin, B. Li, and D. Li, “Harnessing renewable energy in cloud datacenters: opportunities and challenges,” IEEE Network, vol. 28, no. 1, pp. 48–55, Jan. 2014.
 [24] M. Gattulli, M. Tornatore, R. Fiandra, and A. Pattavina, “Lowcarbon routing algorithms for cloud computing services in IPoverWDM networks,” in Proc. ICC 2012, pp. 2999–3003, Jun. 2012.
 [25] Q. L. Lin and S. Z. Yu, “A distributed green networking approach for data center networks,” IEEE Communications Letters, vol. 21, no. 4, pp. 797–800, Apr. 2017.
 [26] B. Kantarci, L. Foschini, A. Corradi, and H. T. Mouftah, “Interandintra data center VMplacement for energyefficient largeScale cloud systems,” IEEE Globecom Workshops, pp. 708–713, Dec. 2012.
 [27] W. Fang et al., “Optimising data placement and traffic routing for energy saving in Backbone Networks,” Trans. on Emerging Tele. Tech., vol. 25, no. 9, pp. 875–964, Sept. 2014.
 [28] Q. Zhang et al., “Dynamic service placement in geographically distributed clouds,” IEEE J. on Selected Areas in Comm., vol. 31, no. 12, pp. 762–772, Dec. 2013.
 [29] Y. Wu, M. Tornatore, S. Thota, and B. Mukherjee, “Renewableenergyaware data center placement in optical cloud networks,” in Optical Fiber Communications Conference and Exhibition (OFC), pp. 1–3, Mar. 2015.
 [30] Y. Wu, M. Tornatore, S. Ferdousi, and B. Mukherjee, “Green data center placement in optical cloud networks,” IEEE Trans. on Green Comm. and Networking, vol. 1, no. 3, pp. 347–357, Sept. 2017.
 [31] R. Buyya, R. Ranjan, and R. N. Calheiros, “Intercloud: Utilityoriented federation of cloud computing environments for scaling of application services,” in Intl. Conf. on Alg. and Arch. for Par. Proc., pp. 13–31, May 2010.
 [32] D. Ardagna, B. Panicucci, M. Trubian, and L. Zhang, “Energyaware autonomic resource allocation in multitier virtualized environments,” IEEE Trans. on Services Computing, vol. 5, no. 1, pp. 2–19, Jan. 2012.
 [33] O. Ayoub, F. Musumeci, M. Tornatore, and A. Pattavina, “Efficient routing and bandwidth assignment for interdatacenter live virtualmachine migrations,” IEEE/OSA J. of Optical Communications and Networking, vol. 9, no. 3, pp. B12–B21, Mar. 2017.
 [34] L. Velasco, A. P. Vela, F. Morales, and M. Ruiz, “Designing, operating, and reoptimizing elastic optical networks,” IEEE J. of Lightwave Tech., vol. 35, no. 3, pp. 513–526, Feb. 2017.
 [35] A. Kiani and N. Ansari, “Toward lowcost workload distribution for integrated green data centers,” IEEE Communications Letters, vol. 19, no. 1, pp. 26–29, Jan. 2015.
 [36] L. Zhang, T. Han, and N. Ansari, Energyaware Virtual Machine Management in Interdatacenter Networks over Elastic Optical Infrastructure, NJIT Advanced Networking Lab., Tech. Rep. TRANL2017006; also archived in CoRR, arXiv:1711.00973, 2017.
 [37] CVX Research, Inc., “CVX: Matlab software for disciplined convex programming, version 2.0,” http://cvxr.com/cvx, Aug. 2012.
 [38] Gurobi optimization. [Online]. Available: http://www.gurobi.com/
 [39] L. Zhang, T. Han, and N. Ansari, “Revenue driven virtual machine management in green datacenter networks towards big data,” in IEEE GLOBECOM, pp. 1–6, Dec. 2016.
 [40] L. Zhang and Z. Zhu, “Spectrumefficient anycast in elastic optical interdatacenter networks,” Elsevier Optical Switching and Networking (OSN), vol. 14, no. 3, pp. 250–259, Aug. 2014.
 [41] [Online]. Available: http://www.eia.gov/electricity/annual/html/epa_02_10.html

 [42] M. Dawande, J. Kalagnanam, P. Keskinocak, F. S. Salman, and R. Ravi, “Approximation algorithms for the multiple knapsack problem with assignment restrictions,” Journal of Combinatorial Optimization, vol. 4, issue 2, pp. 171–186, Jun. 2000.