
Design and Provision of Traffic Grooming for Optical Wireless Data Center Networks

by   Abdulkadir Celik, et al.
King Abdullah University of Science and Technology

Traditional wired data center networks (DCNs) suffer from cabling complexity, lack flexibility, and are limited by the speed of digital switches. In this paper, we alternatively develop a top-down traffic grooming (TG) approach to the design and provisioning of mission-critical optical wireless DCNs. While switches are modeled as hybrid optoelectronic cross-connects, links are modeled as wavelength division multiplexing (WDM) capable free-space optic (FSO) channels. Using the standard TG terminology, we formulate the optimal mixed-integer TG problem considering the virtual topology, flow conservation, connection topology, non-bifurcation, and capacity constraints. Thereafter, we develop a fast yet efficient sub-optimal solution which grooms mice flows (MFs) and mission-critical flows (CFs) and forwards them on predetermined rack-to-rack (R2R) lightpaths. On the other hand, elephant flows (EFs) are forwarded over dedicated server-to-server (S2S) express lightpaths whose routes and capacities are dynamically determined based on the availability of wavelength and capacity. To prioritize the CFs, we consider low and high priority queues and analyze delay characteristics such as waiting times, maximum hop counts, and blocking probability. As a result of grooming the sub-wavelength traffic and adjusting the wavelength capacities, numerical results show that the proposed solutions can achieve significant performance enhancement by utilizing the bandwidth more efficiently, completing the flows faster than their delay sensitivity requirements, and avoiding traffic congestion by treating EFs and MFs separately.





I Introduction

Data centers (DCs) are becoming an intrinsic part of the computing infrastructure for emerging technologies which require the storage and processing of massive amounts of data. While some enterprises and governmental institutions build and sustain their own DCs, others meet these demands by renting or purchasing a portion of the large-scale DC networks (DCNs) offered by gigantic technology firms [1]. DCN applications include but are not limited to cloud services, big data, artificial intelligence, content delivery, neural networks, and cellular infrastructure, all of which necessitate sophisticated and interconnected storage and computing resources. For instance, cloud radio access is envisioned to be an enabler of fifth generation (5G) networks through the centralized processing of a massive number of devices [2]. Therefore, it is also necessary to be able to handle highly diversified types of traffic, as a regular phone call and a blue light box (emergency) call cannot be treated with the same priority.

Scalability of DCNs is required to accommodate a large number of servers with adequate speeds and bandwidths. However, today's DCNs interconnect network equipment via unshielded twisted pair cables or fiber-optic wires, which has the following disadvantages: cabling cost and complexity, lack of flexibility, and underutilization of the available bandwidth [3]. Fortunately, wired DCNs can be augmented with wireless technologies such as multi-gigabit mmWave [4] or multi-terabit free-space optics (FSO) [5]. While mmWave technology offers some degree of penetration, it suffers from interference and short ranges due to high attenuation. Thanks to the line-of-sight (LoS) links between transceivers, FSO can alternatively provide interference-free communication and naturally improve physical layer security. However, this necessitates replacing the traditional grid-based rack arrangement of wired DCNs with a physical topology design that facilitates the LoS links. It has been shown that an outdoor WDM-FSO link can achieve 1.28 Tbps (32x40 Gbps) capacity on 32 wavelengths over a 212-meter distance [6].

Unlike outdoor FSO links, which suffer from hostile channel impediments (e.g., scintillation, pointing errors, and atmospheric turbulence), the quality of indoor FSO links within DCNs is primarily determined by the link distance as they operate in an acclimatized environment. Therefore, indoor FSO links can offer higher bandwidth at longer distances, and the bandwidth can be further enhanced with wavelength division multiplexing (WDM) methods [7]. In [8], an FSO-based DCN design is presented based on fixed, non-mechanical FSO links by means of fully connected rows/columns of racks. A more generalized OWCells concept is proposed by Hamza et al. in [9], where fixed LoS optical wireless communication (OWC) links are employed to interconnect racks arranged in regular polygonal topologies. Readers can also refer to [10] for a survey of recent advances in physical topology realizations in wireless DCNs.

Regardless of the potentially achievable multi-terabit link capacities obtained by combining WDM and FSO, the DCN bottleneck is still determined by the data processing capability of power-hungry state-of-the-art switches, which can handle 1-10 Gbps rates at each port. Alternatively, power consumption and processing limitations can be mitigated by optical or hybrid optoelectronic switches. As flow bandwidth requests can be much lower than the capacity offered by WDM channels, traffic grooming (TG) arises as a necessary operation, referring to the aggregation of subwavelength flows onto high-speed lightpaths subject to equipment costs and capacity [11]. Considering the enormous number of flows generated by servers across the DCN, TG is also vital to alleviate network management complexity by combining the same class of flows and provisioning their common QoS demands altogether. However, TG poses daunting challenges as its subproblems have been shown to be NP-hard [12]. Accounting for the dynamically changing traffic conditions of DCNs, this entails developing fast yet efficient suboptimal TG policy designs. Accordingly, our goal in this paper is the design and provisioning of WDM-FSO based WDCNs from a TG approach which is aware of various quality of service (QoS) domains including flow size, delay sensitivity, and priority.

I-A Related Work

Recent efforts on WDCNs can be exemplified as follows. The idea of using 60 GHz technology in DCNs was first shared in [13], where Ramachandran et al. identified the problems of wired DCNs and discussed the potentials and challenges of using the 60 GHz band. In [4], results of 60 GHz link measurements and simulations showed that directional links are essential for link stability, interference mitigation, and higher throughput. In [14, 15, 16], Cui et al. addressed wireless channel allocation and scheduling to overcome the congestion and interference of hybrid 60 GHz DCNs. On the other hand, interference mitigation is obtained by steered-beam mmWave links in wireless packet-switching DCNs [17].

Riza et al. proposed mechanically steerable FSO links in [18], where a transceiver is mounted on a pedestal platform which allows vertical and rotational motion to establish links among the racks. In [5], inter-rack communications are realized via FSO links reflected off a ceiling mirror, similar to Flyways in [4]. Likewise, Bao et al. presented the FlyCast FSO DCN in [19], where they employ the splitting state of switchable mirrors to enable multicast without the need for a switch. In [20], Arnon developed a physical DCN topology to establish intra-rack and inter-rack communications. We refer interested readers to [10], where Hamza et al. surveyed recent efforts on WDCNs under the classification of 60 GHz and FSO communications and compared the virtues and drawbacks of potential wireless technologies for DCNs.

Since power consumption and speed limits are two major drawbacks of packet-switched architectures, power reduction and throughput enhancement have been obtained by optical switches which may operate in the time, space, and wavelength domains. For example, all-optical switches can reach Petabit scales by employing an arrayed waveguide grating router (AWGR) with hundreds of ports and hundreds of wavelengths, each with tens of Gbps data rate [21]. Alternatively, circuit switching based hybrid architectures are proposed to use semiconductor optical amplifiers or micro-electro-mechanical switches in [22] and [23], respectively. Also, a mixed switching architecture is proposed in [24] which combines wavelength routing, circuit switching, and broadcast-and-select mechanisms.

In the past decades, TG was extensively studied for synchronous optical networks over a variety of topologies (please refer to [11] and references therein). However, to the best of our knowledge, its potentials and prospects have not been studied in the realm of DCNs except in [25, 26], where traffic is simply groomed into three classes of wavelengths which are confined to broadcasting within racks and higher layer switches. A lightweight flow detection mechanism is proposed for TG in wireless DCNs in [27], where the detector is designed as a module installed on the Virtual-Switch/Hypervisor. Emulation results show that the detection speed can have a significant impact on system throughput. In our previous work [1], we formulated the optimal TG problem and proposed a 3-step grooming policy for MFs. This paper extends [1] by accounting for mission-critical flows and introducing delay analysis along with extensive emulation results.

I-B Main Contributions

The main contributions of the paper can be summarized as follows:

  • We formulate an optimal TG problem for mission-critical DCNs which mainly consists of three NP-hard subproblems: 1) virtual topology design, i.e., lightpath provisioning and routing over the physical topology; 2) wavelength assignment to the lightpaths; and 3) developing a grooming policy and routing the groomed traffic requests on the virtual topology. Although solving such a problem is highly time-consuming even for small-scale DCNs, the optimal TG formulation is key to understanding the grooming nature of DCNs and to developing effective heuristic methods.

  • We propose a fast yet high-performance sub-optimal TG policy where CFs and MFs are groomed at servers and ESs to obtain larger rack-to-rack (R2R) mission-critical flows (CFs) and mice flows (MFs), respectively. In order to forward R2R flows, we establish dedicated R2R lightpaths and set their capacities according to delay constraints. These lightpaths are predetermined and always active to immediately serve any CF and MF arrival. On the other hand, elephant flows (EFs) are forwarded over dedicated S2S express lightpaths whose routes and capacities are dynamically determined based on the wavelength and capacity remaining after CFs and MFs are served.

  • CFs are prioritized by considering priority queues, where high and low priority queues are modeled under different queuing disciplines. Based on queuing theory, we finally provide a performance analysis of mission-critical DCNs including waiting times, delay, maximum hop count, and blocking probability.

I-C Paper Organization

The remainder of the paper is organized as follows: Section II presents the network model. Section III formulates the optimal TG problem and Section IV develops the proposed sub-optimal solution. Section V analyzes the performance of mission-critical DCNs. Emulation results are presented in Section VI and Section VII concludes the paper with a few remarks.

Fig. 1: Proposed topology for a given spine/leaf ratio.

II Network Model

II-A Network Topology

To employ FSO in DCNs, we adopt a two-layer spine & leaf Clos architecture where every bottom-layer (leaf-layer) switch is connected to each of the top-layer (spine-layer) switches in a full-mesh topology [28]. The leaf layer consists of top-of-rack switches, a.k.a. edge switches (ESs), which are connected to the servers within racks. There also exist core switches (CSs), whose number is set by the spine & leaf ratio, which are responsible for interconnecting all leaf switches such that every ES connects to every CS. Unlike the spanning tree protocol-based routing within the traditional three-layered architecture, the leaf & spine architecture employs layer-3 routing, which permits all connections to be utilized at the same time while avoiding loops and remaining stable. Furthermore, it is easy to expand the DCN with additional hardware and capacity if oversubscription of links occurs. One of the main concerns with the spine & leaf architecture is the extra cabling complexity, which can be solved by the use of WDM-FSO links as follows:

Different from the physical topology designs realized by mechanical devices, reflectors on side walls, and ceiling mirrors, FSO links between the switches can be directly established by an array of optical transceivers as shown in Fig. 1. In this way, performance degradation caused by steering delay and reflection losses can be taken out of consideration. Fig. 1 can be implemented by equipping the OWC transceiver boxes of ESs with a metallic breadboard at their top, where transceivers can be fixed such that laser diodes (transmitters) are mounted on the breadboard via a mount which allows vertical and horizontal alignment toward photo-diodes (receivers), and vice versa. CSs can be located above such that their OWC transceiver boxes and metallic breadboards face downward toward the ESs. Alternatively, ESs and CSs can stay at the same level whereas the metallic breadboards of CSs are fixed above.

A similar physical topology setup is also designed in [20], where racks are arranged in circular cells such that neighboring racks can communicate using LoS links, and edge switches are assumed to communicate with core switches located at higher layers. OWCell is another potential physical topology concept where racks are positioned in regular polygonal topologies to be interconnected with fixed LoS OWC links [9]. It is worth noting for our case that as the DCN size increases, deployment optimization is necessary to avoid laser beam collisions. Since our main contribution is not focused on optimizing the physical topology of optical wireless DCNs, we assume that rack deployment and laser mount directions are set properly to avoid any laser beam collisions.
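The full-mesh spine & leaf connectivity described in this subsection can be sketched as a directed link set; the switch counts and per-link wavelength budget below are illustrative assumptions, not values from the paper.

```python
# Sketch: building the two-layer spine & leaf physical topology as a set of
# directed FSO links. Every edge switch (ES) has an uplink to every core
# switch (CS), and every CS has a downlink to every ES (full mesh).

def build_spine_leaf(num_es=4, num_cs=2, wavelengths=8):
    """Return {(src, dst): wavelength_count} for a full-mesh spine & leaf."""
    links = {}
    for e in range(num_es):
        for c in range(num_cs):
            es, cs = f"ES{e}", f"CS{c}"
            links[(es, cs)] = wavelengths  # uplink direction
            links[(cs, es)] = wavelengths  # downlink direction
    return links

links = build_spine_leaf()
# Each of the 4 ESs connects to each of the 2 CSs in both directions.
assert len(links) == 2 * 4 * 2
```

Expanding the DCN then amounts to adding ES or CS nodes and regenerating the link set, matching the incremental-scaling property noted above.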

II-B Hybrid Crossconnect (HXC) Architecture

As shown in Fig. 2, ESs (CSs) are designed as HXCs and have input and output ports which are connected to receivers and transmitters, respectively. The signals at each of the input ports are first demultiplexed into individual wavelengths and then processed by the optical cross-connect (OXC) or digital cross-connect (DXC) unit. The OXC is a generic optical switch (i.e., it can follow any architectural design in [21, 22, 23, 24]) and is responsible for wavelength routing operations. OXCs may also execute some grooming functions using optical couplers and decouplers [26].

On the other hand, DXCs are traditional packet switches and provide flexibility and bandwidth efficiency by grooming several low-speed flows into a high-capacity lightpath, which is simply a wavelength circuit/path between two nodes. A DXC operates on the O-E-O conversion principle and can handle a limited number of incoming flows at a given time, which determines the grooming ratio of an HXC. We assume that servers and DXCs are tunable to any wavelength. Notice that DXC ports are also limited by processing speed, which is typically in the order of Gbps. All incoming flows to any HXC port are first handled by the OXC based on the flow classification: EFs are directly routed to the corresponding output ports according to the destination address, whereas MFs are first forwarded to the DXC to be groomed with other MFs sharing the same destination; the groomed flow is then fed back to the OXC and routed to the final destination.
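A minimal sketch of this OXC/DXC dispatch rule is given below, assuming a hypothetical 1 MB size threshold to distinguish EFs from MFs (the threshold value is an illustrative assumption, not a parameter from the paper).

```python
from collections import defaultdict

# Sketch: at an HXC port, elephant flows (EFs) bypass the digital switch and
# are routed optically, while mice flows (MFs) go to the DXC and are
# aggregated with other MFs sharing the same destination.

EF_THRESHOLD = 1_000_000  # bytes; hypothetical classification threshold

def dispatch(flows):
    """flows: list of (destination, size_bytes).
    Returns (optically_routed_EFs, {destination: list_of_MFs})."""
    optical, groomed = [], defaultdict(list)
    for dst, size in flows:
        if size >= EF_THRESHOLD:
            optical.append((dst, size))       # OXC: route directly to output port
        else:
            groomed[dst].append((dst, size))  # DXC: aggregate per destination
    return optical, dict(groomed)
```

The per-destination buckets model the grooming step; the groomed bundle would then be handed back to the OXC as a single lightpath payload.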

Fig. 2: Illustration of considered HXC and lightpaths.

II-C Channel Model

Consider WDM-FSO links formed by directed laser-diode transmitters and photo-diode receivers which employ intensity modulation and direct detection (IM-DD). Thanks to WDM, FSO links can be treated as parallel channels, where the received signal at one HXC from another on a given wavelength can be given as


where the terms denote the transmit intensity, the optical channel gain, Gaussian noise with zero mean and unit variance, and the received signal, respectively. Due to hardware and safety concerns, the transmit signal intensity has to satisfy individual and total average intensity constraints. As optical channel variations are very slow in comparison to the symbol duration [29], the channel gain is further assumed to be constant throughout a transmission block and modeled as follows [30]. If the emitted light beam from the source is focused on the receiver such that the diverging lens expands the beam to a planar beam diameter, the channel gain can be modeled as [31, Chapter 1]


where the remaining terms are the transmitter (receiver) gain, the transmitter lens (planar beam) diameter, the carrier wavelength, the transmitter (receiver) efficiency, the pointing loss factor, the narrowband filter transmission factor, the free-space path loss, and the distance between transceivers.

Optical intensity channel capacities are studied in [32], where upper and lower bounds are shown to converge at high SNRs as follows


where the bandwidth term is that of a single wavelength. Albeit the improved capacity and high fan-out advantage thanks to an increased number of wavelengths, employing WDM equipment yields additional expenses such as extra space, monetary costs, and power consumption. These expenses are expected to increase proportionally with the required number of wavelengths, and thus with the type of WDM modules employed. For example, coarse WDM modules can support up to 8/16 wavelengths [33], whereas dense WDM modules can provide up to 40/80/160 wavelengths at the expense of higher costs and power consumption [34]. Since WDM multiplexes the wavelengths onto a single carrier, the number and cost of the required LDs are directly determined by the topology scaling parameters, regardless of the number of wavelengths.
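As a back-of-the-envelope illustration of the link budget and high-SNR capacity discussion above, the following sketch uses the standard Friis-style free-space loss term (lambda/(4*pi*d))^2 and a simplified high-SNR capacity expression; all numeric values and the exact functional forms are illustrative assumptions, not the paper's equations (2)-(3).

```python
from math import pi, log2

# Sketch of an FSO link budget: channel gain as a product of transceiver
# gains, efficiencies, loss factors, and free-space path loss.

def channel_gain(gain_tx, gain_rx, eff_tx, eff_rx, pointing, filt, wl_m, d_m):
    """Multiplicative link budget with free-space loss (wl / (4*pi*d))^2."""
    free_space_loss = (wl_m / (4 * pi * d_m)) ** 2
    return gain_tx * gain_rx * eff_tx * eff_rx * pointing * filt * free_space_loss

def high_snr_capacity(bandwidth_hz, snr):
    """High-SNR approximation C ~ B * log2(SNR); the intensity-channel
    bounds differ by a constant offset, omitted here for simplicity."""
    return bandwidth_hz * log2(snr)

# Indoor links: shorter distance -> higher gain -> higher capacity.
g_short = channel_gain(1e4, 1e4, 0.9, 0.9, 0.99, 0.95, 1550e-9, 10.0)
g_long = channel_gain(1e4, 1e4, 0.9, 0.9, 0.99, 0.95, 1550e-9, 20.0)
assert g_short > g_long
```

This mirrors the observation in the text that indoor FSO link quality is primarily governed by link distance.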

III TG Problem Statement and Formulation

III-A Preliminaries and Problem Statement

In this section, we provide preliminaries to the TG formulation which adopts the following topology definitions:

Definition 1.

Physical Topology refers to a weighted unidirectional graph whose vertex and edge sets represent the network nodes and physical links, respectively. A link is defined as a unidirectional physical connection between nodes, i.e., uplink (UL) or downlink (DL), and is weighted by the channel gain given in (2). Each direction consists of multiple wavelengths and is established by a pair of FSO transmitter and receiver.

Definition 2.

Virtual Topology corresponds to a weighted unidirectional graph whose vertex and edge sets represent the network nodes and lightpaths, respectively. A lightpath between two generic nodes is characterized by a route of physical links, a single wavelength on such links, and the intensities/capacities allocated to each link.

For a visual explanation, let us focus on a simple DCN with two wavelengths as shown in Fig. 2. The physical topology is depicted in black dashed and dot-dashed lines for the first and second wavelengths, respectively. Example routes are drawn in different colors and line styles. Note that we can define 4 distinct lightpaths on each of these routes as FSO links are directional and each direction has two wavelengths. A connection demand between two nodes can be met by a combination of the lightpaths. For example, the figure on the right-hand side shows the 4 possible lightpaths for connection requests between nodes 7 and 9. Moreover, these connection requests can alternatively be satisfied by following a different path. However, a wavelength of a physical link cannot be exploited by two different lightpaths due to the collision constraint. Throughout the paper, intermediate routing is not allowed. Even if such intermediate routing were allowed, it could not be an optimal solution for two reasons: 1) it unnecessarily exhausts the computational power of multiple switches along the path, and 2) it yields more latency due to the processing time of multiple switches along the path.
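The collision constraint above (a wavelength of a physical link cannot be exploited by two lightpaths) can be checked in a few lines; the node names and routes below are illustrative.

```python
# Sketch: verify that no (directed FSO link, wavelength) pair is reused by
# two lightpaths, which is the wavelength-collision constraint.

def check_no_collision(lightpaths):
    """lightpaths: list of (route, wavelength), route being a node list.
    Returns True iff every (link, wavelength) pair is used at most once."""
    used = set()
    for route, wl in lightpaths:
        for src, dst in zip(route, route[1:]):  # consecutive hops = links
            key = (src, dst, wl)
            if key in used:
                return False  # same link and wavelength claimed twice
            used.add(key)
    return True

# Two lightpaths may share a physical link on *different* wavelengths:
assert check_no_collision([(["S7", "ES1", "CS0", "ES3", "S9"], 0),
                           (["S8", "ES1", "CS0", "ES3", "S9"], 1)])
```

The same helper rejects two lightpaths that claim the same link on the same wavelength, which is exactly what constraint (10) of the MILP enforces symbolically.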

Notations and given parameters:
  • Number of DXC I/O ports of a node.
  • Total number of servers.
  • Originating and terminating points of a physical (i.e., FSO) link.
  • Originating and terminating points of a lightpath, which may traverse multiple FSO links.
  • A flow among the total number of traffic requests from all sources. Each corresponds to a flow for a source-destination server pair. Note that a flow may traverse a single lightpath or multiple lightpaths.
  • Number of FSO links between a pair of nodes.
  • Number of wavelengths on a link.
  • Capacity of a link on a given wavelength.
  • Bandwidth demand matrix, where each entry is a vector of flow requests whose length is the total number of flows between the corresponding node pair. Note that the demand matrix is not necessarily symmetric.
  • Digital processing capacity of a node.

Optimization variables:
  • A binary variable for routing on the physical topology; equals 1 iff a lightpath between a node pair is routed on a given FSO link on a given wavelength.
  • Number of lightpaths between a pair of nodes on a given wavelength.
  • Total number of lightpaths between a pair of nodes, summed over wavelengths.
  • A binary variable distinguishing different physical routes of lightpaths between the same pair of nodes on the same wavelength; equals 1 iff the corresponding lightpath is routed on a given FSO link on a given wavelength.
  • A binary variable; equals 1 iff a flow employs a lightpath between a node pair on a given wavelength as an intermediate virtual link.
  • Real-valued capacity exploited by a flow on the lightpath(s) between a node pair.
  • A binary indicator for TG; equals 1 iff two flows are groomed into the same lightpath on a given wavelength between a node pair.

TABLE I: Notations, parameters, and variables

TG is usually split into three joint subproblems: 1) designing the virtual topology, i.e., routing and lightpath provisioning over the physical topology; 2) assigning wavelengths to the lightpaths; and 3) developing a grooming policy and routing the groomed traffic requests on the virtual topology. Noting that each of these subproblems is NP-hard [12], TG is also an NP-hard problem which falls into the mixed-integer linear programming (MILP) class. Considering the resulting high number of binary and integer variables, obtaining an optimal TG solution is highly time-consuming even for small-sized DCNs. However, we believe formulating the optimal problem is necessary to gain insight into the inherent features of TG within DCNs and to develop fast yet high-performance heuristic solutions.

III-B Problem Formulation

Using standardized TG formulations [12, 35], we formulate this MILP problem based on the parameters/variables in Table I. The main design goal can be set to any objective function, e.g., throughput maximization, delay minimization, the minimum number of required wavelengths, minimum power consumption, etc. Unlike traditional TG in passive optical mesh networks, we also need to take the intensity allocation of the links into account as they do not have fixed capacities as fibers do. Accordingly, we formulate the problem based on the following assumptions and constraints:

  1. HXCs are not capable of wavelength conversion. Thus, a lightpath must be routed on the same wavelength.

  2. Bifurcation of flows is not allowed. That is, a connection request cannot be divided and routed separately.

  3. DXCs can groom as many flows into a lightpath as needed, as long as DXC processing capability and wavelength capacity are not exceeded.

  4. I/O ports of servers and DXCs are tunable to any wavelength.

Virtual topology constraints:


where (4) and (5) ensure that the numbers of originating and terminating lightpaths at an HXC do not exceed the number of DXC I/O ports, respectively. That is, (4) and (5) limit the total number of lightpaths which can be processed by the DXC at a given time. On the other hand, (6) is the expression for the total number of lightpaths provisioned between a pair of nodes, which is exploited later by other constraints.

Lightpath routing (flow conservation) constraints:


where (7) assures that there are no incoming (outgoing) flows at the originating (terminating) node of a lightpath on a given wavelength. Supported by the underlying physical topology, the total number of lightpaths on a wavelength between a pair of nodes is given in (8). Constraints (9) and (10) ensure wavelength continuity and protection against lightpath collisions, respectively. Constraints (11) and (12) permit at most one lightpath to be routed on a given link among all lightpaths. Finally, (13) prevents intermediate routing by limiting the total number of physical link hops, as a valid route (S → ES → CS → ES → D) takes exactly 4 hops to reach the destination.

Connection topology constraints:


where (14) defines the total number of connections established on all wavelengths and lightpaths between a pair of nodes. The constraint in (15) guarantees that there is no incoming traffic to the source and no outgoing traffic from the destination of a traffic request, respectively. On the other hand, (16) preserves the continuity of the flows on single or multiple lightpaths. Even though different flows between a source-destination pair are allowed to be split across different lightpaths, wavelengths, or routes, non-bifurcation keeps a certain flow intact so that it exploits only one lightpath, wavelength, and physical route tuple.

Non-bifurcation constraints:


where (17)-(22) satisfy non-bifurcation of traffic among lightpaths between different nodes, among wavelengths of a lightpath between the same pair of nodes, and among different physical routes of a lightpath between the same pair of nodes on the same wavelength. In other words, by imposing this set of equations, flows are forced to stay whole from the source to the destination.

Capacity and delay constraints:


where (23) and (24) constitute the upper bounds on the single wavelength intensity and total intensity allocation of an FSO link, respectively. Capacity is a function of the intensity variable as formulated in (3), and the constraint in (25) ensures that the total traffic request of a set of flows, which are groomed on the same physical route on a certain lightpath and wavelength pair, must comply with the capacity of that route, i.e., the lowest capacity along the physical route. Finally, the total amount of incoming and outgoing traffic to be processed is limited by the processing capability of nodes as in (26). Please note that the bandwidth demand of each flow is determined based on the flow size and the maximum affordable delay of the flow, which satisfies the delay constraints.
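Although the full MILP is intractable at scale, the interplay of the non-bifurcation and capacity constraints can be illustrated on a toy instance by brute force; the demands and capacities below are illustrative, and a real instance would be handed to a MILP solver.

```python
from itertools import product

# Toy illustration: each flow must be assigned, whole, to exactly one
# candidate lightpath (non-bifurcation), and the groomed demand on a
# lightpath may not exceed its capacity (capacity constraint).

def feasible_assignments(demands, capacities):
    """demands: {flow: Gbps}; capacities: {lightpath: Gbps}.
    Yields every assignment {flow: lightpath} respecting capacity."""
    flows, paths = list(demands), list(capacities)
    for choice in product(paths, repeat=len(flows)):  # one path per flow
        load = {p: 0.0 for p in paths}
        for f, p in zip(flows, choice):
            load[p] += demands[f]
        if all(load[p] <= capacities[p] for p in paths):
            yield dict(zip(flows, choice))

sols = list(feasible_assignments({"f1": 6, "f2": 6}, {"lp1": 10, "lp2": 10}))
# Two 6 Gbps flows cannot share one 10 Gbps lightpath, so every feasible
# solution places them on different lightpaths.
assert all(s["f1"] != s["f2"] for s in sols)
```

If bifurcation were allowed, splitting each flow 5+1 across the two lightpaths would also fit, which is exactly what constraints (17)-(22) rule out.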

IV TG Policy Design for DCNs

In this section, we develop a TG policy for delay-constrained DCNs where flows are classified into EFs, MFs, and CFs. While CFs and MFs have high priority, EFs have low priority, and the maximum delays affordable by high and low priority flows are denoted accordingly. The delay constraint is essential especially for mission-critical applications as delayed packets are considered useless. Accordingly, we define the following TG policy rule-set for the proposed model.

  1. We assume that flows can be classified, via packet classification or flow matching, in a timely and efficient manner. While MFs and CFs are groomed into larger traffic, EFs are not groomed and are treated separately.

  2. Lightpaths are first provisioned for groomed CFs and then for groomed MFs. The residual network resources are then used to provision EF lightpaths.

  3. Bifurcation at HXCs is not allowed as splitting and combining EFs can consume a significant portion of DXC processing capability at the origin and terminal points of lightpaths, respectively. On the other hand, splitting CFs and MFs may not necessarily be efficient.

  4. Aside from the data wavelengths, there exists a dedicated wavelength for a central controller to broadcast TG and intensity allocation commands. The current intensity and wavelength availability state of the DCN is captured in two graphs defined over the set of nodes: one recording the available light intensity and the other recording binary edge weights for wavelength availability. These graphs are always kept updated over the control wavelength.

IV-A TG and Lightpath Provisioning for MFs

CF and MF grooming is designed to take place at source servers and edge switches in three TG steps as follows:

  1. S2S Step: Each server grooms all flow arrivals destined to the same destination server. Indeed, web and database servers receive a very large number of concurrent requests where the clients could be from the same subnet or different subnets. Since servers know which packets belong to which flow and their destination subnet, handling flows destined to the same server helps to reduce the workload and complexity of dealing with individual flow entities.

  2. S2R Step: Each server further grooms all S2S flows destined to the same destination rack and transfers them to the ESs. The packets of the same flow are groomed into a jumbo packet (i.e., frame) and propagated to the ES, or to the virtual switch if the source and destination reside on the same machine. Enabling jumbo frames can improve the efficiency of data transmissions, and hence the network performance. The processing overhead is reduced because the CPUs of DXCs need to process a single jumbo packet at a time rather than multiple packets. Therefore, the main motivation behind the S2S and S2R steps is to reduce the workload on digital switches as they have limited processing speed at each port.

  3. R2R Step: Received S2R flows are then groomed according to their destination rack to obtain R2R flows. At this point, R2R flows only need to be routed toward their final destination rack without being processed by core switches, which allows employing very fast and effective routing mechanisms.
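The three grooming steps above can be sketched as successive per-key aggregations; the flow sizes and the server-to-rack mapping are illustrative.

```python
from collections import defaultdict

# Sketch of the 3-step TG: aggregate per destination server (S2S), then per
# destination rack at the source server (S2R), then merge into rack-to-rack
# bundles at the edge switch (R2R).

def groom(flows, rack_of):
    """flows: list of (src_server, dst_server, size); rack_of: server -> rack.
    Returns {(src_rack, dst_rack): total_size} R2R bundles."""
    s2s = defaultdict(float)                      # Step 1: S2S grooming
    for src, dst, size in flows:
        s2s[(src, dst)] += size
    s2r = defaultdict(float)                      # Step 2: S2R grooming
    for (src, dst), size in s2s.items():
        s2r[(src, rack_of[dst])] += size
    r2r = defaultdict(float)                      # Step 3: R2R grooming
    for (src, dst_rack), size in s2r.items():
        r2r[(rack_of[src], dst_rack)] += size
    return dict(r2r)
```

Run separately for CFs and MFs, each resulting R2R bundle maps one-to-one onto a dedicated R2R lightpath, so the core switches never touch individual flows.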

The above steps are applied to CFs and MFs separately as they have different priorities and delay requirements. Thanks to WDM, a large set of route and wavelength possibilities is already available. Hence, groomed CFs (MFs) from one rack to another are provided with a dedicated lightpath defined by a route, wavelength, and intensity allocation tuple. Notice that grooming and de-grooming occur at the edge switches of the source and destination servers, respectively. On the other hand, core switches function as routers without being involved in any data processing task. Thus, packets stay intact and are routed all together along the predetermined lightpaths. Moreover, since we define only one route for each R2R pair, it is impossible to forward portions of the groomed R2R flows over multiple paths, which are never defined. Unlike wired DCNs with uniform link capacities, wireless DCNs experience heterogeneous link capacities due to distance differences. Thus, to determine R2R lightpaths, we first create matrices to record the number of shortest paths and the corresponding total intensity cost between rack pairs, respectively. After that, lightpath provisioning starts from the rack pairs with the lowest number of shortest paths. In this way, the route diversity of a rack pair with fewer shortest paths is not reduced by pairs provisioned later. Among pairs with the same number of shortest paths, the tie is broken by prioritizing the pair with the highest cost. This iterative procedure is repeated until all lightpaths are determined. The required minimum number of wavelengths to set up dedicated R2R lightpaths can be derived as


which follows from the fact that intermediate routing is not allowed and a pair of wavelengths is needed for each rack pair, one for MFs and the other for CFs. Since R2R flows are assigned to dedicated lightpaths with predetermined route-wavelength pairs, the proposed approach has low complexity and incurs almost no decision-making delay. Unlike the generic topology shown in Fig. 1, CSs are not required to be equipped with DXCs, as TG is executed only by servers and ESs in the proposed TG policy. Since the R2R flow size is limited by the DXC port speed, the capacity of MF wavelengths must also be upper bounded by the DXC speed, which naturally opens some room for the extra intensity required by some EFs, as explained next.
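The provisioning order described above (fewest shortest paths first, ties broken by the highest total intensity cost) can be sketched as a single sort over rack pairs; the path-count and cost matrices below are toy stand-ins for the paper's KSP and cost matrices:

```python
def provisioning_order(num_paths, cost):
    """Return rack pairs sorted so that pairs with fewer shortest paths are
    provisioned first; among equals, the costlier pair goes first."""
    pairs = list(num_paths.keys())
    # Ascending path count preserves the route diversity of constrained pairs;
    # negated cost breaks ties in favor of the most expensive pair.
    return sorted(pairs, key=lambda p: (num_paths[p], -cost[p]))

num_paths = {("r1", "r2"): 3, ("r1", "r3"): 1, ("r2", "r3"): 3}
cost = {("r1", "r2"): 5.0, ("r1", "r3"): 2.0, ("r2", "r3"): 9.0}
order = provisioning_order(num_paths, cost)
# ('r1','r3') goes first (only one path), then ('r2','r3') (cost 9 > 5).
```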

Initialize the virtual topology
Initialize the residual intensity graph
Initialize the wavelength availability graph
for each traffic request arrival do
      if the request is a CF then
            Employ 3-step TG, i.e., S2S, S2R, and R2R
            Forward groomed traffic over R2R-CF lightpaths
      else if the request is an MF then
            Employ 3-step TG, i.e., S2S, S2R, and R2R
            Forward groomed traffic over R2R-MF lightpaths
      else
            Invoke the EF Lightpath Provisioning procedure
            Update the virtual topology
            Update the residual intensity
            Update the wavelength availability
      end if
end for
procedure Lightpath Provisioning
      Create the KSP matrix and the corresponding cost matrix
      while there are unprovisioned lightpaths do
            Determine the rack pair to be provisioned next
            Allocate intensity as per (29)
            Record the Rack-to-Rack lightpath
            Update the KSP and cost matrices
            Update the virtual topology
            Update the residual intensity
            Update the wavelength availability
      end while
      return the provisioned lightpaths
end procedure
procedure EF Lightpath Provisioning
      Determine the feasible routes
      Calculate route intensities as per (30)
      Compute route capacities as per (31)
      Decide on the best-fit route as per (32)
      Determine the wavelength by the first-fit scheme
      Record the lightpath from source to destination
      return the established lightpath
end procedure
Algorithm 1 Proposed TG Approach

IV-B Intensity Allocation

WDM-based FSO links provide two significant advantages: increased channel diversity (a.k.a. high fan-out) and capacity flexibility. While assigning wavelengths fixed intensities/capacities (as in WDM fiber links) ignores the flexibility provided by optical wireless technology, allocating unnecessarily high intensities to a certain wavelength destroys the wavelength availability for future flows. For example, consider an extreme scenario where a single wavelength is allocated the full intensity so that the remaining wavelengths cannot be made available at all. Alternatively, one can consider a fixed-uniform intensity allocation scheme, which does not account for the traffic characteristics of flows. In such a case, bandwidth utilization can be significantly reduced since some wavelengths are allocated more/less intensity than they require. For improved bandwidth utilization, it is necessary to have a fast yet efficient intensity allocation method that adjusts link capacities according to the groomed traffic requirements.

We first allocate the intensities of the link-wavelength pairs along the R2R CF (MF) routes as follows. The average groomed CF (MF) size between a pair of racks is determined by the flow sizes and the total CF (MF) arrival rate from the servers of the source rack to the servers of the destination rack. Accordingly, if a groomed R2R traffic is routed over an FSO link between two nodes, the required transmission intensity on this link can be calculated from (3) as follows


where the delay parameter denotes the maximum delay affordable by CFs and MFs. The wavelengths and intensity levels available after the CF/MF allocations are shared among EFs based on a fair-share policy that guarantees an intensity allocation for each wavelength. Thus, the required intensity of a traffic request on a wavelength of a link is given as


where the quantities involved are the intensity available on the link after the CF/MF allocations, the number of wavelengths available for EFs on that link, and the set of these wavelengths. In (30), the first term is the total intensity available after the CF/MF allocations together with the fair-share guarantee, whereas the second term is the intensity demanded to meet the requirements of the flow. Therefore, (30) lets each wavelength exploit exactly the demanded intensity as long as it obeys the fair-share policy. That is, a power demand exceeding the equalized share can be satisfied from the room opened by less demanding flows.
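A minimal sketch of the fair-share allocation behind (30), under the assumption that every EF wavelength is guaranteed an equal share of the residual intensity and that demand beyond the share is served only from room left unclaimed by less demanding wavelengths (function name and units are illustrative):

```python
def ef_intensity(total_avail, demands):
    """Fair-share sketch: each of the len(demands) EF wavelengths is
    guaranteed total_avail/len(demands); excess demand is granted from
    the spare left by wavelengths demanding less than their share."""
    k = len(demands)
    share = total_avail / k
    alloc = [min(d, share) for d in demands]   # guaranteed part
    spare = total_avail - sum(alloc)           # room opened by light demands
    for i, d in enumerate(demands):            # grant extras greedily
        extra = min(max(d - share, 0.0), spare)
        alloc[i] += extra
        spare -= extra
    return alloc

# 3 EF wavelengths share 9 units: shares of 3 each, and the spare from the
# first wavelength (demand 1) serves the heavy demand of the last.
print(ef_intensity(9.0, [1.0, 3.0, 5.0]))  # [1.0, 3.0, 5.0]
```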

IV-C Lightpath Provisioning for EFs

As aforementioned, EFs are treated separately and are not subject to TG as they already constitute significant traffic requests. Since DXCs have limited processing power, which constitutes a network bottleneck, forwarding EFs along with the MFs may cause unnecessary extra processing overhead. Hence, once an EF is detected, an S2S lightpath is established between the source and destination servers, routed over OXCs, and terminated after session completion. That is, EFs are sent express through OXCs on a specific wavelength and route pair. Each server maintains shortest-path lists toward all destinations, the available total light intensity on each hop of the shortest paths, and the indices of the available wavelengths.

Let us consider a traffic request classified as an EF. Taking wavelength continuity into consideration, a route is feasible only if there exists an available wavelength at every one of its hops. Denoting the set of such routes from source to destination accordingly, the achievable capacity of each route is determined as follows


which sets the route capacity to the minimum capacity of the links along the routing path. Based on the calculated route capacities and the available number of wavelengths, the route is determined by a best-fit policy


Apparently, (32) favors routes with more available wavelengths and higher capacity. The next step is assigning a wavelength to the selected route, which can be done by a variety of methods, e.g., random, first-fit, least/most used, etc. Since they are shown to perform very close to each other [36], we employ the first-fit scheme and assign the lowest index among the available wavelengths. Please note that the route selection in (32) is desirable due to its low computational complexity.
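The route-capacity rule of (31), the best-fit selection of (32), and the first-fit wavelength assignment can be sketched as follows. Link names, capacities, and wavelength sets are toy values, and the tie-breaking order (wavelength count first, then capacity) is an assumption about the best-fit metric:

```python
def feasible(route, wl_avail):
    """Wavelength continuity: a route is feasible only if some wavelength
    index is free on every hop; return the set of common free indices."""
    return set.intersection(*(wl_avail[link] for link in route))

def route_capacity(route, cap):
    """(31): a route is only as fast as its slowest hop."""
    return min(cap[link] for link in route)

def pick_route_and_wavelength(routes, cap, wl_avail):
    """Best-fit route (32): favor more free wavelengths, then higher capacity;
    first-fit wavelength: lowest common free index on the chosen route."""
    scored = [(r, feasible(r, wl_avail), route_capacity(r, cap)) for r in routes]
    scored = [s for s in scored if s[1]]                 # drop infeasible routes
    best = max(scored, key=lambda s: (len(s[1]), s[2]))  # best-fit policy
    return best[0], min(best[1])                         # first-fit wavelength

cap = {"a-b": 10, "b-d": 4, "a-c": 6, "c-d": 6}
wl_avail = {"a-b": {0, 1}, "b-d": {1}, "a-c": {0, 2}, "c-d": {0, 2}}
route, wl = pick_route_and_wavelength([["a-b", "b-d"], ["a-c", "c-d"]], cap, wl_avail)
print(route, wl)  # ['a-c', 'c-d'] 0  (two common wavelengths beat one)
```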

If a flow is rejected on a certain route, it competes for the second best-fit route calculated in (32), and so forth. In case multiple servers decide to use the same wavelength for their EFs, the centralized scheduler follows the Shortest Job First (SJF) scheduling policy. Accordingly, the competing flows are ordered by their sizes and the flow with the smallest size is served first. The packets of the other flows are kept waiting in a low-priority queue until the wavelength becomes available to serve the next flow in order. Based on all of the above, the proposed TG policy is summarized in Algorithm 1. In the realm of optical mesh networks, traffic grooming (TG) is a traditional way of effectively utilizing the available bandwidth by combining sub-wavelength capacity flows into larger flows. The proposed Algorithm 1 goes beyond this traditional approach with a simple yet expeditious three-step grooming method developed especially for the DCN architecture. Thanks to the flexibility of optical wireless links, efficient bandwidth utilization can be obtained by adjusting the wavelength capacities via power/intensity allocation based on the traffic load characteristics of DCNs. Therefore, Algorithm 1 applies to DCNs with adjustable and heterogeneous link capacities as well as to DCNs with fixed-uniform link capacities (e.g., fiber links).
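The SJF contention resolution described above amounts to serving competing EFs in increasing order of size; a minimal sketch with a binary heap, using illustrative flow records:

```python
import heapq

def sjf_schedule(competing_flows):
    """Shortest Job First over flows contending for the same wavelength:
    pop flows in increasing size; later flows wait in the low-priority
    queue until the wavelength frees up."""
    heap = [(f["size"], f["id"]) for f in competing_flows]
    heapq.heapify(heap)
    order = []
    while heap:
        _, fid = heapq.heappop(heap)  # smallest remaining flow is served next
        order.append(fid)
    return order

flows = [{"id": "ef1", "size": 300}, {"id": "ef2", "size": 100}, {"id": "ef3", "size": 200}]
print(sjf_schedule(flows))  # ['ef2', 'ef3', 'ef1']
```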

V Delay Analysis of Mission-Critical DCNs

In this section, our objective is to obtain an approximate expression for the blocking probability due to either buffer overflow or violation of a certain end-to-end delay threshold. We consider a typical mission-critical application scenario where servers receive specific information and then transmit it over multiple switches to another application server for processing and decision making.

V-A Traffic Model

Data packet arrivals are modeled with exponential inter-arrival times. The intermediate FSO switches are modeled as single-server facilities with two priority queues and are responsible for forwarding the data to the application server. MF packets are scheduled to the higher-priority queue of each switch, whereas EF packets are scheduled to the lower-priority queue. In some special cases, some non-critical packets could suddenly become critical according to the current context; such packets are scheduled in the high-priority queue with a certain probability. In general, both critical and non-critical packet arrivals are assumed to be Poisson distributed.

Since there are two different types of traffic and, consequently, two priority queues, the total rates of high-priority and low-priority traffic are given by:


where the high-priority rate can be regarded as the arrival rate of CFs. Also, the size and rate of EFs are generally larger than those of the MFs as well as the CFs. However, the primary challenge with the EFs is their rate rather than their size. In practice, the packets of a data flow arrive at a network switch in concatenated sets representing the congestion windows (CWNDs) of their senders. As an elephant flow has a larger data size, its CWND has enough time to grow faster by harnessing the available capacity as well as the space left by completed MFs.

Thereby, the CWND of elephant flows is an order of magnitude larger than the CWND of MFs. In some cases, an MF completes before its CWND explores the maximum available capacity [37, 38]. Network switches are assumed to operate with exponentially distributed service times [39, Chapter 3]. To maintain a guaranteed behavior for the mission-critical traffic, the expected delay experienced by each data packet (from its transmission until it reaches its destination) must not exceed a predefined threshold. Therefore, a data packet whose cumulative delay exceeds the threshold upon arrival at the application server will simply be flagged, and then an appropriate decision will be taken. Thus, if we let a random variable stand for the end-to-end delay, the probability of dropping a data packet at the application server is given by


In this case, our objective is to compute the maximum allowable hop-count such that the delay constraint is respected, and to estimate the packet dropping probability for a given number of hops in the network. Looking into the above objectives, we aim at analyzing how these metrics vary with the service, scheduling, and application policies, considering reconfigurable FSO links for our WDCN model. We also account for the propagation delay between the application servers and the optical switches.

V-B Delay Analysis

As mentioned earlier, our network consists of switches with two priority queues and possibly different service rate distributions. Initially, we are interested in obtaining the average waiting time and its second moment for each priority class, as experienced by data packets at each switch. Since the output of each queue is approximated as a Poisson process, we look into each switch in isolation and temporarily drop the switch index. Thus, the waiting time of a high-priority packet is given as


where the terms correspond to the service times of the packets ahead in the queue and the residual time. From (36), using Little's formula, we obtain the average waiting time of the high-priority queue as


where the utilization factor is the fraction of time a switch serves high-priority traffic. By raising both sides of (36) to the second power and taking expectations, the second moment of the waiting time for high-priority traffic is derived as


where the approximation is based on the assumption that the last term is negligible, as the high-priority utilization is typically small, especially for MFs that have limited traffic. We understand from the above that if high-priority traffic is limited, then high-priority queues contain at most one packet almost all the time, and the waiting time of high-priority packets is chiefly due to the residual time only. For lower-priority traffic, the waiting time of a low-priority packet can be expressed as


which essentially implies that the waiting time of a low-priority packet is the sum of four components: 1) the time to serve existing high-priority packets, 2) the time to serve existing low-priority packets that are ahead in the queue, 3) the time to serve new high-priority packets that arrive while the low-priority packet is waiting, and 4) the residual time due to the packet in service. Note that (40) and (41) are based on the following assumptions: 1) the distribution of inter-arrival times is exponential and 2) the distribution of the network switches' service time is general. Accordingly, the average waiting time of the low-priority queue is given as


and the second moment is derived as


where the coefficient involves the variance of the number of packets in the four cases mentioned earlier, which is assumed to be negligible in steady state; hence the last approximation follows. One immediate check is to note that (41) reduces to (V-B) whenever the high-priority load vanishes, which agrees with expectation because in that case lower-priority traffic essentially becomes higher-priority traffic.

To complete our analysis, we need to find the first and second moments of the residual time, which are essential to calculate the average waiting times of the packets in both the high-priority and low-priority queues. The first moment of the residual time is defined as


which is obtained by employing the first moment of the remaining service time of an M/G/1 queue and Little's law. Based on the law of total expectation, applying Little's law once more yields


Accordingly, (42) and (43) can be used to obtain the mean residual time.
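With the equation bodies lost in extraction, the standard non-preemptive two-priority M/G/1 results that this derivation follows (mean residual time, then the waiting times of (37) and (40)) can be sketched numerically. The exponential-service assumption matches the text, but treat this as the textbook form rather than the paper's exact expressions:

```python
def mg1_priority_waits(lam_h, lam_l, mu):
    """Non-preemptive two-class M/G/1 with exponential service of rate mu:
    R   = mean residual service time = sum_i lam_i * E[X^2] / 2,
    W_H = R / (1 - rho_H),
    W_L = R / ((1 - rho_H) * (1 - rho_H - rho_L))."""
    ex2 = 2.0 / mu**2                 # second moment of an Exp(mu) service time
    r = (lam_h + lam_l) * ex2 / 2.0   # mean residual time
    rho_h, rho_l = lam_h / mu, lam_l / mu
    w_h = r / (1.0 - rho_h)
    w_l = r / ((1.0 - rho_h) * (1.0 - rho_h - rho_l))
    return w_h, w_l

# Example: high-priority rate 2, low-priority rate 3, service rate 10.
w_h, w_l = mg1_priority_waits(lam_h=2.0, lam_l=3.0, mu=10.0)
# Low-priority packets wait longer than high-priority ones, as expected.
```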

V-C Maximum Hop-count

The maximum allowable hop-count that respects the QoS delay constraint is given by




where the waiting times are respectively given by (37) and (40) for each switch, together with the service time random variable at each switch and the propagation delay between consecutive switches. In the special case where all switches are identical, we obtain the simpler expression:


To draw qualitative insights, we note in the latter case that if traffic is limited, the maximum hop-count is approximately given by the first-order Taylor expansion:


Here, the denominator involves the minimum average delay at a node regardless of traffic utilization. Hence, the impact of increasing traffic utilization in IoT applications is largely influenced by the ratio of the residual service time to this minimum average delay.
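With the closed form lost in extraction, a hedged numerical sketch of the identical-switch case: the maximum hop count is the largest integer number of hops whose cumulative per-hop delay (waiting plus service plus propagation) fits the end-to-end budget. The per-hop figures below are illustrative:

```python
import math

def max_hops(tau, wait, service, prop):
    """Largest hop count whose cumulative per-hop delay still respects the
    end-to-end threshold tau, assuming identical switches as in the
    simplified expression."""
    per_hop = wait + service + prop
    return math.floor(tau / per_hop)

# tau = 10 ms budget, 0.6 ms waiting, 0.3 ms service, 0.1 ms propagation.
print(max_hops(10.0, 0.6, 0.3, 0.1))  # 10
```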

V-D Blocking Probability

To obtain the average number of packets that exceed a predefined threshold, we approximate the sum of all service times and waiting times experienced by a packet from source to sink by a normal distribution [40]. We also know the first and second moments of the involved random variables, as derived earlier. We are interested in obtaining the end-to-end delay for both high-priority and low-priority traffic. Thus, we have




where the summations involve the service time and the waiting time at each node for traffic of the corresponding priority.

Therefore, the final desired probability of arriving later than the threshold is a weighted sum according to whether a data packet is scheduled in the high-priority or the low-priority queue:


where erf(·) denotes the error function [41, Eq. (8.250.1)].
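The Gaussian-tail computation behind this weighted sum can be sketched with the standard error function; the moment values below are illustrative, and the two-moment normal approximation follows the text rather than the paper's exact expressions:

```python
import math

def late_probability(mean_d, var_d, tau):
    """Gaussian-tail approximation of Pr(D > tau) for an end-to-end delay D
    with the given mean and variance, expressed via the error function."""
    z = (tau - mean_d) / math.sqrt(2.0 * var_d)
    return 0.5 * (1.0 - math.erf(z))

def blocking_probability(p_high, stats_high, stats_low, tau):
    """Weighted sum over the queue a packet lands in: high priority with
    probability p_high, low priority otherwise."""
    return (p_high * late_probability(*stats_high, tau)
            + (1.0 - p_high) * late_probability(*stats_low, tau))

# A delay exactly at its mean is exceeded half the time under the Gaussian model.
print(late_probability(5.0, 1.0, 5.0))  # 0.5
```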

VI Numerical Results

Emulations are conducted using the Mininet emulator [42], which uses real virtual hosts, POX-eel controllers [43], and real software switches, i.e., OVS switches [44]. The host machine runs Ubuntu 14.04 LTS on 16 x (2.5 GHz Intel Xeon CPU E5-2680 v3) with 128 GiB of memory. Iperf is used to generate MFs and EFs of sizes 100 KB and 100 MB (the common block sizes in Hadoop and MapReduce algorithms are 64 MB and 128 MB [43]), respectively.

The emulated DCN topology has six spine switches and twelve leaf switches, each with 25 hosts, i.e., 300 hosts in total. Even though we would like to use 10 Gbps and 100 Gbps for the coaxial cables and FSO/fiber links, respectively, we scaled all link capacities down by a factor of 10 (i.e., 1 Gbps for coaxial and 10 Gbps for FSO/fiber links) since Mininet is limited by the processing capacity of the host machine on which we implement the emulations. Each FSO link consists of 4 wavelengths, which are realized as virtual links in Mininet. Since the emulator is limited in DCN size, the optical channel gains are not distinguishably different and are thus assumed to be identical.

VI-A Workloads

We use MapReduce to mimic the workloads of real DCNs; its shuffle-phase communication pattern has servers from every rack communicating with servers in a different rack. For instance, the 25 hosts of a rack are divided into five sets of five servers each, say four mice and one elephant. Each set communicates with the servers of a rack different from those chosen by the other sets of the same rack.

VI-B Routing Algorithms

Equal-Cost Multi-Path routing (ECMP) [45] is a widely used DCN routing method which uses packet header information, such as the IP/MAC addresses and TCP port numbers, as the key of a hash function. Throughout this section, ECMP refers to traditional DCNs with coaxial cables. The outgoing path is the hash value modulo the number of outgoing paths. This strategy splits the flows among the available paths. Since the header information of an individual flow is the same during the session, the packets of the same flow are always forwarded via the same path, maintaining packet order and avoiding flow bifurcation. We used the OVS bundle command with the symmetric_l4 hash function in the ECMP algorithm.
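The hash-modulo path selection can be sketched as follows; note that this generic 5-tuple hash is only illustrative and, unlike OVS's symmetric_l4, is not symmetric in source and destination:

```python
import hashlib

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Hash the flow 5-tuple and take it modulo the number of outgoing paths,
    so every packet of a flow follows the same path (no bifurcation)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return digest % num_paths

# Identical 5-tuples always map to the same path index.
a = ecmp_path("10.0.0.1", "10.0.0.2", 40000, 80, "tcp", 4)
b = ecmp_path("10.0.0.1", "10.0.0.2", 40000, 80, "tcp", 4)
print(a == b, 0 <= a < 4)  # True True
```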

ECMP-FSO is an adaptation of ECMP to the WDM-FSO case where the wavelengths equally share the available FSO link capacity. That is, ECMP-FSO does not have any flexibility due to the lack of an intensity allocation mechanism and is thus equivalent to WDM fiber links. In FSO, link speeds are an order of magnitude faster than non-FSO links. In this evaluation, we use a factor of ten, which means the FSO link is 10 times faster than traditional DCN links. In this routing method, the link capacity is equally divided among the wavelengths, which means the capacity of every wavelength is fixed to 2.5 Gbps. Each flow is assigned to a single wavelength. However, when there are more flows than available lightpaths, the packets of the waiting flows are enqueued until a lightpath becomes available for transmission.

TG-FSO is the proposed algorithm where the intensity is first allocated to the R2R mice-flow wavelengths to meet the MF demands. The remaining intensity is used by EFs as explained in Section IV-B.

(a) Throughput.
(b) FCTs
(c) CWNDs
Fig. 3: Comparison of different algorithms in terms of throughput, FCTs, and CWNDs.

VI-C Network Throughput Results

Since the number of mice servers per set is set to four, we have a 400 KB groomed MF in every R2R communication. Hence, the capacity needed by an R2R groomed MF is 3.2 Gbps with a 1 ms service duration. Due to the TCP behavior, i.e., a window of packets per RTT and the waiting time in the link buffer, some of the flows finished after the deadline. On average, 86% of the flows complete before the deadline. However, when the wavelength capacity is increased to 5 Gbps, 100% of the flows satisfy the time constraint. The flow completion time of EFs achieved about the needed time, i.e., one second, for the transmitted flows. The throughput and flow completion time (FCT) results are displayed in Fig. 2(a) and 2(b), respectively.

In all configurations, the proposed TG-FSO algorithm outperforms the ECMP and ECMP-FSO cases. The achievable throughput of the proposed algorithm (3.2 Gbps) is about 3.4 times that of the ECMP algorithm and about 1.65 times that of ECMP-FSO. We should emphasize that the sustained capacity for MFs is 10 Gbps for ECMP-FSO and 3.2/5 Gbps for TG-FSO, respectively. Therefore, the transmission power consumption of 3.2/5 Gbps TG-FSO is lower than that of ECMP-FSO by about 3.2/2 times, respectively. For the EFs, on the other hand, the highest achievable average throughput is 7.47 Gbps, which is about 13 and 3.6 times the ECMP and ECMP-FSO throughputs, respectively. However, when we increased the capacity of the wavelength allocated to MFs from 3.2 Gbps to 5 Gbps to satisfy the flow completion time constraint, i.e., 1 ms, the throughput of EFs decreased from 7.2 Gbps to about 5 Gbps.

Due to the low capacity of the traditional data center network, the evaluated flows complete after the time constraint on average. Accordingly, Fig. 2(b) shows the FCT speed-up of the different cases with respect to the requested maximum FCT. In contrast, the flows transmitted using the other algorithms in the same experiment complete before the time constraint. For instance, EFs in TG-FSO 3.2 Gbps complete nine times faster than the required service duration. We found that our algorithm in all of its versions satisfies the time constraint even if we increase the flow size or decrease the time constraint by orders of magnitude, i.e., 100MB or second.

Even though we had to set the FSO link capacity to 10 Gbps due to the Mininet restrictions, Ciaramella et al. tested an outdoor 32-wavelength WDM-FSO system over several hundred meters and recorded a 40 Gbps per-wavelength capacity [6]. Accordingly, the potential of the proposed design can be appreciated better when the 13-times performance enhancement is scaled up to a higher number of wavelengths and capacities.

VI-D Network Delay Results

In this part of the evaluation, we study the positive and negative impacts of the suggested queuing discipline on the performance of EF as well as mission-critical (CF) traffic. Accordingly, we selected the leaf-spine lightpaths that were dedicated to forwarding elephant flows and configured them with two priority queues. The high-priority queue is dedicated to CFs, while the low-priority queue is used for elephant flows. The system forwards the elephant flows at full capacity as long as the high-priority queue has no waiting packets. Also, we maintained the well-known data center traffic ratio, where 80% of the transmitted flows are MFs and 20% are EFs [46, 47]. Regarding the traffic characteristics of a social DCN, the authors of [48] found that the majority of bytes during a sub-second interval are carried by large flows. They also found that the traffic pattern remains stable for a long period (i.e., days) and that 57.5% of the total DCN traffic is between specific racks, which is aligned with a recent study in a Microsoft DC [49].

We collected the performance figures under different loads, 25%, 50%, 75%, and 100% of the 10 Gbps lightpath capacity. We collected the average response time, the number of packets in the queue, and the CWND sizes of all flows transmitted through that lightpath. We used TCP Probe, a well-known network measurement tool, to obtain the sender and receiver IP addresses, transmission times, packet lengths, round-trip times, and CWNDs of all flows. We also logged the statistics of the low- and high-priority queues of the evaluated lightpath every millisecond. Finally, we compared the mathematical model with the collected Mininet results.

(a) Mininet vs. Analytical.
(b) Different Algorithms
Fig. 4: Response time comparison for (a) Mininet vs. analytical and (b) different algorithms.

Before explaining our results, we need to justify our claim about the growing behavior of the elephant flow CWND. To do this, we measured the mean CWNDs of the evaluated EFs and CFs. We found that the average CWND size of an EF is 3.9 times larger than the CWND of mission-critical as well as MF flows, as shown in Fig. 2(c), which illustrates the minimum, mean, and maximum achieved CWNDs of EFs and CFs. Fig. 3(a) shows the comparison between the response time of the mathematical model and the Mininet results. We can see from the figure how the response time increases with the load. In this test, we use the RTT, technically defined as Short RTT, of every flow transmitted through the evaluated lightpath, i.e., the CFs and EFs. These statistics are collected from the kernels of the communicating hosts using TCP Probe. The response time of MFs during the evaluation with different algorithms is illustrated in Fig. 3(b). TG-FSO outperforms the other algorithms when the load is less than 70%. However, under high load, both TG-FSO settings as well as ECMP-FSO present close results.

(a) EFs
(b) CFs
(c) MFs
Fig. 5: CWND-CDFs for EFs, CFs, and MFs.
(a) FCTs
(b) Number of packets.
(c) Queue loads
Fig. 6: FCTs, number of packets, and queue loads for different algorithms.

In the second part, we study the impact of the proposed queuing discipline on the CWNDs of EFs and CFs. Fig. 4(a) shows the CDF of the CWND of EFs and CFs. For the sake of a complete picture, we also extracted the CWNDs of some of the MF sources; the results are displayed in Fig. 4(c). The CWNDs of mission-critical and MF flows grow at a faster rate compared to EF CWNDs. Also, the EF CWND of TG-FSO, in both the single- and double-queue cases, grew at a faster rate than the other EF CWNDs. This clarifies certain phenomena: the CWND grows faster with high wavelength capacity, and because the MF flows have a small amount of data to transmit, they complete during the slow-start phase. Although the EF has been forwarded with low priority, its CWND grows up to about 130 packets and 470 packets in slow links, while the CWNDs of MFs and CFs remained around their mean of 20 packets. These results manifest that the EF strives to utilize the available capacity by increasing its CWND, whereas the CFs finish before they explore the available lightpath capacity.

The impact of the suggested queuing discipline on the flow completion times of EFs and CFs is illustrated in Fig. 5(a). We can see how the suggested queuing discipline reduced the flow completion time of CFs by almost 2 times. Fortunately, the flow completion times of EFs before and after the implementation of the queuing discipline are almost the same. Although we plot the FCT results on a log scale, we did not see a change in the FCT of the EFs. In terms of waiting time, we measured the number of packets waiting in the queue. Fig. 5(b) shows the number of packets waiting in the queue when the lightpaths have single and two queues. We can see in the figure that the number of packets waiting in the high-priority queue is tiny, close to zero, compared to the low-priority queue as well as the single queue. Since the single-queue lightpath has one queue for all flow classes, its forwarded EF and CF packets are enqueued together. This means that the number of packets presented in the curve includes both the EF and the mission-critical packets. Also, the number of packets in the queue increases almost exponentially with the load. When we measured the queue occupancy in the MF evaluation, we found that all the evaluated algorithms except ECMP have zero packets in the queue of the evaluated lightpath. The results are illustrated in Fig. 5(c).

VII Conclusions

In this paper, we addressed the design and provisioning of mission-critical wireless DCNs from a TG perspective. To mitigate the system limitations of traditional wired DCNs, we considered a wireless approach using hybrid optoelectronic switches and WDM-capable FSO links. Based on the problem formulation, we developed a fast yet high-performance sub-optimal solution that significantly improves the throughput of CFs, MFs, and EFs. Based on priority queues, performance analyses of low- and high-priority flows are provided for important service characteristics, including the waiting time, delay, maximum hop count, and blocking probability. By grooming the sub-wavelength traffic and adjusting the wavelength capacities according to the groomed traffic requests, numerical results clearly showed that the proposed solutions achieve significant performance enhancement by utilizing the bandwidth more efficiently, completing the flows faster than their delay sensitivity requirements, and avoiding traffic congestion by treating EFs and MFs separately.


  • [1] A. Celik, A. Al-Ghadhban, B. Shihada, and M. Alouini, “Design and provisioning of optical wireless data center networks: A traffic grooming approach,” in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Apr. 2018, pp. 1–6.
  • [2] M. Peng, Y. Li, J. Jiang, J. Li, and C. Wang, “Heterogeneous cloud radio access networks: A new perspective for enhancing spectral and energy efficiencies,” IEEE Wireless Commun., vol. 21, no. 6, pp. 126–135, 2014.
  • [3] A. Celik, B. Shihada, and M.-S. Alouini, “Wireless data center networks: Advances, challenges, and opportunities,” arXiv preprint arXiv:1811.11717, 2018. [Online]. Available:
  • [4] D. Halperin et al., “Augmenting data center networks with multi-gigabit wireless links,” in Proc. ACM SIGCOMM, 2011, pp. 38–49.
  • [5] N. Hamedazimi, Z. Qazi, H. Gupta, V. Sekar, S. R. Das, J. P. Longtin, H. Shah, and A. Tanwer, “Firefly: A reconfigurable wireless data center fabric using free-space optics,” in Proc. ACM SIGCOMM, 2014, pp. 319–330.
  • [6] E. Ciaramella, Y. Arimoto, G. Contestabile, M. Presi, A. D’Errico, V. Guarino, and M. Matsumoto, “1.28 terabit/s (32x40 gbit/s) WDM transmission system for free space optical communications,” IEEE J. Sel. Areas Commun., vol. 27, no. 9, pp. 1639–1645, Dec. 2009.
  • [7] A. Celik, B. Shihada, and M.-S. Alouini, “Optical wireless data center networks: potentials, limitations, and prospects,” in Proc. Broadband Access Communication Technologies XIII.   SPIE, 2019, pp. 1–7. [Online]. Available:
  • [8] A. S. Hamza, J. S. Deogun, and D. R. Alexander, “Free space optical data center architecture design with fully connected racks,” in proc. IEEE Global Commun. Conf. (GLOBECOM), Dec. 2014, pp. 2192–2197.
  • [9] A. S. Hamza, S. Yadav, S. Ketan, J. S. Deogun, and D. R. Alexander, “Owcell: Optical wireless cellular data center network architecture,” in IEEE Intl. Conf. Commun. (ICC), May 2017, pp. 1–6.
  • [10] A. S. Hamza, J. S. Deogun, and D. R. Alexander, “Wireless communication in data centers: A survey,” IEEE Commun. Surveys Tuts., vol. 18, no. 3, thirdquarter 2016.
  • [11] S. Huang and R. Dutta, “Dynamic traffic grooming: the changing role of traffic grooming,” IEEE Commun. Surveys Tuts., vol. 9, no. 1, pp. 32–50, First 2007.
  • [12] K. Zhu and B. Mukherjee, “Traffic grooming in an optical wdm mesh network,” IEEE J. Sel. Areas Commun., vol. 20, no. 1, pp. 122–133, Jan. 2002.
  • [13] R. Kokku, R. Mahindra, and S. Rangarajan, “60ghz data-center networking: wireless => worryless,” Tech. Rep., 2008. [Online]. Available:
  • [14] Y. Cui, H. Wang, and X. Cheng, “Channel allocation in wireless data center networks,” in proc. IEEE INFOCOM, Apr. 2011, pp. 1395–1403.
  • [15] Y. Cui, H. Wang, X. Cheng, and B. Chen, “Wireless data center networking,” IEEE Wireless Commun., vol. 18, no. 6, pp. 46–53, Dec. 2011.
  • [16] Y. Cui, H. Wang, X. Cheng, D. Li, and A. Ylä-Jääski, “Dynamic scheduling for wireless data center networks,” IEEE Trans. Parallel Distrib. Syst., vol. 24, no. 12, pp. 2365–2374, Dec 2013.
  • [17] Y. Katayama, K. Takano, Y. Kohda, N. Ohba, and D. Nakano, “Wireless data center networking with steered-beam mmwave links,” in proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Mar. 2011, pp. 2179–2184.
  • [18] N. A. Riza and P. J. Marraccini, “Power smart in-door optical wireless link applications,” in proc. 8th Intl. Wireless Commun. Mobile Comput. Conf. (IWCMC), Aug 2012.
  • [19] J. Bao, D. Dong, B. Zhao, Z. Luo, C. Wu, and Z. Gong, “Flycast: Free-space optics accelerating multicast communications in physical layer,” SIGCOMM Comput. Commun. Rev., vol. 45, no. 4, pp. 97–98, Aug. 2015.
  • [20] S. Arnon, “Next-generation optical wireless communications for data centers,” in Proc. Broadband Access Commun. Technol. IX, vol. 9387. SPIE, 2015, p. 938703.
  • [21] S. J. B. Yoo, Y. Yin, and R. Proietti, “Elastic optical networking and low-latency high-radix optical switches for future cloud computing,” in Proc. IEEE Intl. Conf. Comput. Netw. Commun. (ICNC), Jan. 2013, pp. 1097–1101.
  • [22] N. Farrington et al., “Helios: A hybrid electrical/optical switch architecture for modular data centers,” SIGCOMM Comput. Commun. Rev., vol. 40, no. 4, pp. 339–350, Aug. 2010.
  • [23] G. Wang, D. G. Andersen, M. Kaminsky, K. Papagiannaki, T. E. Ng, M. Kozuch, and M. Ryan, “C-through: Part-time optics in data centers,” SIGCOMM Comput. Commun. Rev., vol. 40, no. 4, pp. 327–338, Aug. 2010. [Online]. Available:
  • [24] M. Fiorani, S. Aleksic, M. Casoni, L. Wosinska, and J. Chen, “Energy-efficient elastic optical interconnect architecture for data centers,” IEEE Commun. Letters, vol. 18, no. 9, pp. 1531–1534, Sept. 2014.
  • [25] G. C. Sankaran and K. M. Sivalingam, “Scheduling in data center networks with optical traffic grooming,” in Proc. IEEE 3rd Intl. Conf. Cloud Netw. (CloudNet), Oct. 2014, pp. 179–184.
  • [26] ——, “Optical traffic grooming-based data center networks: Node architecture and comparison,” IEEE J. Sel. Areas Commun., vol. 34, no. 5, pp. 1618–1630, May 2016.
  • [27] A. Al-Ghadhban, A. Celik, B. Shihada, and M. Alouini, “LightFD: A lightweight flow detection mechanism for traffic grooming in optical wireless DCNs,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Dec. 2018, pp. 1–6.
  • [28] M. Alizadeh, S. Yang, M. Sharif, S. Katti, N. McKeown, B. Prabhakar, and S. Shenker, “pfabric: Minimal near-optimal datacenter transport,” in Proc. ACM SIGCOMM, 2013, pp. 435–446.
  • [29] A. Chaaban, Z. Rezki, and M. S. Alouini, “Fundamental limits of parallel optical wireless channels: Capacity results and outage formulation,” IEEE Trans. Commun., vol. 65, no. 1, pp. 296–311, Jan. 2017.
  • [30] J. M. Kahn and J. R. Barry, “Wireless infrared communications,” Proc. IEEE, vol. 85, no. 2, pp. 265–298, Feb. 1997.
  • [31] H. Kaushal, V. Jain, and S. Kar, “Free-space optical channel models,” in Free Space Optical Communication.   Springer, 2017, pp. 41–89.
  • [32] A. Lapidoth, S. M. Moser, and M. A. Wigger, “On the capacity of free-space optical intensity channels,” IEEE Trans. Inf. Theory, vol. 55, no. 10, pp. 4449–4461, Oct. 2009.
  • [33] ITU-T, “Spectral grids for WDM applications: CWDM wavelength grid,” ITU-T Recommendation G.694.2, 2003.
  • [34] ——, “Spectral grids for WDM applications: DWDM frequency grid,” ITU-T Recommendation G.694.1, 2003.
  • [35] R. Ul-Mustafa and A. E. Kamal, “Design and provisioning of WDM networks with multicast traffic grooming,” IEEE J. Sel. Areas Commun., vol. 24, no. 4, p. 53, 2006.
  • [36] H. Zang, J. P. Jue, B. Mukherjee et al., “A review of routing and wavelength assignment approaches for wavelength-routed optical WDM networks,” Optical Networks Magazine, vol. 1, no. 1, pp. 47–60, 2000.
  • [37] G. Appenzeller, I. Keslassy, and N. McKeown, Sizing router buffers.   ACM, 2004, vol. 34, no. 4.
  • [38] C. Fraleigh, F. Tobagi, and C. Diot, “Provisioning IP backbone networks to support latency sensitive traffic,” in Proc. IEEE Intl. Conf. Comput. Commun. (INFOCOM), vol. 1, 2003, pp. 375–385.
  • [39] D. P. Bertsekas, R. G. Gallager, and P. Humblet, Data networks.   Prentice-Hall International New Jersey, 1992, vol. 2.
  • [40] S. Zahl, “Bounds for the central limit theorem error,” SIAM Journal on Applied Mathematics, vol. 14, no. 6, pp. 1225–1245, 1966.
  • [41] I. S. Gradshteyn and I. Ryzhik, Table of Integrals, Series, and Products, 7th ed.   Amsterdam: Elsevier/Academic Press, 2007.
  • [42] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, “Reproducible network experiments using container-based emulation,” in Proc. 8th Intl. Conf. Emerg. Netw. Exper. Tech. ACM, 2012, pp. 253–264.
  • [43] POX. [Online]. Available:
  • [44] B. Pfaff, J. Pettit, K. Amidon, M. Casado, T. Koponen, and S. Shenker, “Extending networking into the virtualization layer.” in Hotnets, 2009.
  • [45] C. Hopps, “Analysis of an equal-cost multi-path algorithm,” RFC 2992, Internet Engineering Task Force, 2000.
  • [46] W. Bai, K. Chen, H. Wang, L. Chen, D. Han, and C. Tian, “Information-agnostic flow scheduling for commodity data centers.” in NSDI, 2015, pp. 455–468.
  • [47] H. Zhang, J. Zhang, W. Bai, K. Chen, and M. Chowdhury, “Resilient datacenter load balancing in the wild,” in Proc. Conf. ACM Special Interest Group on Data Commun.   ACM, 2017, pp. 253–266.
  • [48] A. Roy, H. Zeng, J. Bagga, G. Porter, and A. C. Snoeren, “Inside the social network’s (datacenter) network,” SIGCOMM Comput. Commun. Rev., vol. 45, no. 4, pp. 123–137, Aug. 2015.
  • [49] M. Ghobadi, R. Mahajan, A. Phanishayee, N. Devanur, J. Kulkarni, G. Ranade, P.-A. Blanche, H. Rastegarfar, M. Glick, and D. Kilper, “Projector: Agile reconfigurable data center interconnect,” in Proc. ACM SIGCOMM Conf.   ACM, 2016, pp. 216–229.