QuicSDN: Transitioning from TCP to QUIC for Southbound Communication in SDNs

07/18/2021 ∙ by Puneet Kumar, et al. ∙ Santa Clara University

Transport and security layer protocols make up the backbone of communication between end-point devices. In Software Defined Networking (SDN), these protocols play a crucial role in both control-plane and data-plane communications. However, the current transport and security layer protocols, TCP and TLS, are unable to keep up with the pace of SDN application development. For these applications, the TCP/TLS protocol suite generates excessive network overhead. After identifying the main origins of this overhead, we demonstrate that using QUIC as the SDN transport layer protocol significantly reduces the overhead and improves the efficiency of the network. In this paper, we introduce quicSDN to enable robust, low-overhead communication between the controller and switches. We ran a variety of experiments to highlight quicSDN's benefits and compared the experimental results with transport-layer overhead prediction models. We evaluated quicSDN's performance in terms of network overhead reduction and also demonstrated quicSDN's connection migration capabilities. First, we compare the differences in controller-switch communication overhead between tcpSDN (SDN over TCP) and quicSDN. Overhead reduction was measured in three scenarios: flow rule installation, queue configuration, and flow rule statistics polling. Second, we compare connection migration in quicSDN and tcpSDN; QUIC's ability to quickly migrate connections allows for reduced total traffic in comparison to TCP. Overall, our results show that quicSDN outperformed tcpSDN in our SDN scenarios, and as such, QUIC is a promising candidate as a future SDN transport layer protocol.

I Introduction

SDN simplifies new application development and facilitates network monitoring and management [20]. Nowadays, SDN architectures are used in various types of deployments, such as data center networks, WANs [33], NFV [74], 5G [36], and edge/fog computing [26, 54].

The two primary components of an SDN architecture are the controller(s) and switch(es). A logically centralized controller implements the control plane, and the switches implement the data plane. The controller (e.g., Ryu [17], ODL [18]) provides functionalities such as network topology discovery as well as flow rule computation and installation. The controller provides a set of north-bound APIs to facilitate application development. Communication between the controller and switches is achieved via southbound interfaces, ranging from traditional protocols such as SNMP [8] to more sophisticated ones such as OpenFlow [43], OVSDB [51], and NETCONF (Network Configuration Protocol) [19]. In this paper, we primarily focus on OpenFlow and OVSDB. These two protocols are the most widely deployed and are supported by off-the-shelf switches [12, 35] and controllers [44, 60]. Also, some vendors have proposed customized versions of these protocols (e.g., Arista’s DirectFlow [61], Cisco-OpenFlow-Plugin [62], HP OpenFlow [63]).

The communication between switches and controllers imposes significant overhead that has severe consequences on processing and bandwidth resources [37, 24]. In this paper, we look at control-data plane communication from a different perspective: the transport-layer protocols. Currently, communication reliability, packet reordering, and security are achieved by using TCP (and TLS) as the southbound transport-layer protocols. However, using these protocols introduces a plethora of challenges in terms of overhead, connection multiplexing, and connection migration, as we detail below. Throughout this paper, we use the term tcpSDN to refer to SDNs that utilize TCP as the transport layer of their southbound protocols.

OpenFlow has become the de facto standard for manipulating switches’ data planes. Controllers use OpenFlow to configure flow rules and flow tables. The control and data planes exchange a variety of OpenFlow messages that impose significant overhead on the network. The three main message types exchanged between a controller and a switch are PACKET_IN, FLOW_MOD, and MULTIPART_REQUEST/REPLY. When a packet arrives at a switch and does not match any existing flow rule, the packet is forwarded to the controller via a PACKET_IN message. In large networks such as data centers, an enormous number of PACKET_IN messages is generated by table misses [2, 48, 14]. As the table miss rate increases, the communication overhead between the controller and switches grows, which in turn increases TCP overhead [73, 38]. FLOW_MOD packets, which are used to install or modify flow rules, can be sent reactively in response to a PACKET_IN message, or proactively in anticipation of expected traffic. For example, the controller may proactively install flow rules based on the switches’ flow statistics to address requirements such as load balancing. FLOW_MOD message sizes depend on flow rule complexity [10], so for large networks with complex rules, these messages can become quite large. Whether FLOW_MOD messages are sent reactively or proactively, they impose additional overhead on the network.
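For illustration, this reactive path can be sketched as a minimal Ryu-style application (the class name, match fields, and flood action below are illustrative choices, not part of quicSDN): a table miss raises a PACKET_IN, and the controller answers with a FLOW_MOD.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class ReactiveFlowInstaller(app_manager.RyuApp):
    # Sketch: install a flow rule in response to a table miss (PACKET_IN).
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                       # the PACKET_IN message
        dp = msg.datapath                  # switch that reported the miss
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match on the ingress port only; real rules are usually richer.
        match = parser.OFPMatch(in_port=msg.match['in_port'])
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        # FLOW_MOD: the reactive rule installation described above.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                      match=match, instructions=inst))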

For a controller to maintain an up-to-date view of network status, it needs to poll the switches at regular intervals for configuration status updates. To do so, the controller sends a MULTIPART_REQUEST message to each switch for each feature on which it wishes to collect statistics, and each switch responds with a corresponding MULTIPART_REPLY message. The sizes of these poll messages are variable and depend on the switch’s configuration [10]; switches with many queues and large flow tables transmit several large messages for each poll event, resulting in large control traffic overhead in the network. Each switch generates tens of kilobytes of MULTIPART_REPLY control messages per second, and for enterprise-grade data centers with many heavily configured switches, this overhead imposes significant penalties on network responsiveness. The studies in [52, 10] describe the network overhead imposed by MULTIPART_REQUEST and MULTIPART_REPLY messages during statistics polling. This overhead is exacerbated when the state of a single flow is monitored by several switches, resulting in duplicate data being sent to the controller from multiple switches. These additional interactions between the controller and the switches generate even more TCP overhead.
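The polling loop can be pictured with a similar Ryu-style sketch (again illustrative rather than quicSDN code): the controller periodically issues a flow statistics request, carried by OpenFlow as a MULTIPART_REQUEST, and handles the corresponding MULTIPART_REPLY messages.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import DEAD_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.lib import hub
from ryu.ofproto import ofproto_v1_3

class StatsPoller(app_manager.RyuApp):
    # Sketch: poll flow statistics (a multipart exchange) once per second.
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(StatsPoller, self).__init__(*args, **kwargs)
        self.datapaths = {}
        self.poll_thread = hub.spawn(self._poll)

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change(self, ev):
        # Track connected switches so the poller knows whom to query.
        dp = ev.datapath
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[dp.id] = dp
        else:
            self.datapaths.pop(dp.id, None)

    def _poll(self):
        while True:
            for dp in list(self.datapaths.values()):
                # MULTIPART_REQUEST for all flow entries; the switch answers
                # with one or more MULTIPART_REPLY messages.
                dp.send_msg(dp.ofproto_parser.OFPFlowStatsRequest(dp))
            hub.sleep(1)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def _flow_stats_reply_handler(self, ev):
        for stat in ev.msg.body:
            self.logger.info('flow packets=%d bytes=%d',
                             stat.packet_count, stat.byte_count)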

OVSDB, which is used to configure QoS functionalities such as queues, performs configurations through RPC "transact" interactions and imposes significant overhead in dynamic networks. Configuring a queue on a switch involves multiple OVSDB messages: (i) the queue is added to the switch, (ii) the queue is added to a specific packet scheduler, and (iii) the switch responds with an RPC "update" to inform the controller of the new configurations. For more complicated switch operations, more messages are required and the overhead is even greater [50, 6, 66, 21, 69, 10].
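To make this message pattern concrete, the sketch below shows what such a JSON-RPC "transact" request might look like; the table and column names follow the Open_vSwitch schema, but the rate, scheduler type, and identifiers are made-up example values.

import json
import uuid

queue_uuid_name = "newqueue"
transact = {
    "method": "transact",
    "id": str(uuid.uuid4()),
    "params": [
        "Open_vSwitch",
        {   # (i) add the queue to the switch
            "op": "insert",
            "table": "Queue",
            "row": {"other_config": ["map", [["max-rate", "10000000"]]]},
            "uuid-name": queue_uuid_name,
        },
        {   # (ii) add the queue to a specific packet scheduler (QoS row)
            "op": "insert",
            "table": "QoS",
            "row": {
                "type": "linux-htb",
                "queues": ["map", [[0, ["named-uuid", queue_uuid_name]]]],
            },
        },
        # (iii) the switch later answers with an RPC "update" notification
        # describing the resulting configuration.
    ],
}
print(json.dumps(transact, indent=2))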

Apart from protocol overhead, the use of TCP introduces challenges in other areas of SDN as well. One major challenge is connection migration, which is an important feature in NFV. When TCP is used, any change to connection parameters disrupts the connection. For instance, to evenly distribute network load among the available controllers, the assignment of switches to controllers may change over time [9, 79, 11]. However, this reassignment disrupts the established TCP connections. Several researchers [70, 25] have tried to mitigate the negative effects of connection migration, but their efforts are limited to the application level.

In this work, we introduce quicSDN as a novel software architecture to address the drawbacks and challenges of using TCP as the transport layer protocol for southbound SDN protocols. Specifically, instead of using TCP, quicSDN uses a new transport layer protocol called QUIC [30]. Although QUIC was designed with web traffic in mind (HTTP3 [42]), its enhancements over TCP are applicable across various domains. These enhancements include the ability to multiplex different streams, reduced connection establishment latency, elimination of the head-of-line blocking problem, and resolution of TCP's retransmission ambiguity. Towards providing a framework for transitioning from tcpSDN to quicSDN, we present a full implementation of quicSDN using the RYU controller and switches running OVS and OVSDB. In particular, the proposed framework details aspects such as understanding and removing the intertwined dependency of RYU, OVS, and OVSDB on TCP and replacing it with QUIC, as well as establishing IPC methods to allow RYU, OVS, and OVSDB to communicate with QUIC. It is worth mentioning that the proposed framework can be used to integrate additional southbound protocols (e.g., NETCONF) and controllers (e.g., OpenDaylight). We then present an empirical evaluation of quicSDN versus tcpSDN on a testbed to evaluate control traffic overhead in different scenarios such as flow rule setup, queue configuration, and statistics collection. A summary of quicSDN's performance benefits is as follows: (i) flow setup overhead is reduced by 25%, 28%, 34%, and 50% for 10, 1000, 10K, and 100K flow rule installations per second, respectively; (ii) queue configuration overhead is reduced by 30%, 34%, and 52% when configuring 100, 10K, and 100K queues per second, respectively; (iii) the overhead of statistics polling is reduced by 30%, 34%, and 51% for 100, 10K, and 100K flows; (iv) QUIC's support for seamless connection migration removes the overhead of connection reestablishment when the transport layer connection is interrupted.

The rest of this paper is organized as follows: Section VII overviews the related work. Section II provides background on the relevant transport and security layer protocols. Section III presents the architecture of quicSDN. Section IV discusses the implementation, algorithms, and pertinent details of quicSDN. In Section V, we analyze the overheads associated with QUIC and TCP and present mathematical models for overhead prediction. Empirical evaluations and discussions are presented in Section VI. We conclude the paper in Section VIII.

II Transport and Security Layer

In this section, we first discuss the most prevalent transport and security layer protocols. We then discuss QUIC and the potential benefits of leveraging it in SDN architectures.

II-A Transport Layer Protocols

TCP is a widely used transport protocol that ensures packet ordering and reliability in end-to-end message delivery. Despite its prevalence, TCP has several major shortcomings. First, it causes considerable communication overhead. This overhead is caused by connection establishment and ACK packets. For every transmitted TCP segment, an ACK is sent from receiver to sender, irrespective of its payload size. Segments with small payloads can therefore generate a large number of TCP ACKs, which significantly increases TCP overhead when only small amounts of data are exchanged between the end points. The second shortcoming of TCP is its inefficient packet reordering. When packets arrive out of order at an end point, TCP rearranges them before pushing them to the application, and the receiver must wait for the sender to retransmit lost segments. This causes the head-of-line (HOL) blocking problem: a lost or delayed segment holds back segments received after it, even when those segments belong to other messages that could already be processed [64, 56]. The third shortcoming of TCP is its lack of connection migration support. A TCP connection is uniquely identified by its source and destination IP addresses and port numbers. Any change in these elements brings down the connection, disrupts the application state machine, and triggers the utilization of processor resources to save the current state before tearing down the connection gracefully.

In contrast to TCP, UDP offers no connection-oriented features. Each UDP datagram is sent in a single IP packet; this preserves message boundaries and eliminates the burden on the application of keeping track of them.

II-B Security Layer Protocols

The two most prominent security layer protocols used with transport protocols are TLS and DTLS. TLS is a stateful cryptographic protocol that generates a unique symmetric key after the handshake. The symmetric key is valid for the lifetime of the connection and is used to encrypt and decrypt the segments delivered in order by TCP. On the other hand, DTLS can encrypt and decrypt out-of-order packets, making it suitable for connectionless protocols such as UDP. Although TLS and DTLS provide similar levels of security, DTLS requires an explicit sequence number in each packet and precludes the use of stream ciphers.

II-C QUIC

Built on top of UDP, QUIC is a connection-oriented, application-layer transport protocol meant to overcome TCP’s limitations. QUIC closes the gap between the transport and security layers by incorporating both into its streams. The protocol reduces the connection establishment overhead of TCP/TLS to 0-RTT by reusing server credentials from past connections. Furthermore, QUIC reduces transport layer overhead by multiplexing connections into a single UDP pipeline. Next, we discuss the potential benefits of utilizing QUIC in SDN.

II-C1 Connection Establishment

TCP with TLS requires three RTTs to establish a secure connection before any data is exchanged between server and client. By leveraging a multi-stage key exchange, QUIC combines the transport and security layer connection establishment procedures to minimize connection establishment overhead. In the first stage, the client sends a hello message (CHLO) to retrieve the server configuration. Since the client is unknown to the server, the server responds with a REJ packet. The REJ packet contains the server configuration, a long-term Diffie-Hellman value, the key agreement, the connection ID (cid), and initial data. The client then authenticates the server by verifying the certificate chain and signature. After authentication, the client sends a complete CHLO packet to the server and finishes the first handshake. At this stage, the client has the initial keys and is ready to exchange application data with the server. In 0-RTT, the client sends application data to the server using the initial keys, even before receiving a reply from the server. Upon a successful handshake, the server sends a complete hello (SHLO) to the client, which concludes the final and repeat handshakes. Apart from the initial handshake packets, QUIC packets are fully authenticated and partially encrypted. The non-encrypted part of the packet is used for routing and also for decrypting the remaining part of the packet.

In tcpSDN, when switches move to new controllers for load balancing purposes, controllers and switches have to establish new TCP connections. If the switch-to-controller connections are short-lived, then there will be a large connection establishment overhead. Unlike tcpSDN, quicSDN can facilitate new connections between new controllers and switches in 1-RTT. For short-lived connections, quicSDN can establish connections in 0-RTT, if the controller and switch have communicated in the past.
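The connection-establishment choice quic-client makes can be sketched as follows; resume and do_handshake stand in for the underlying QUIC library calls, and the session cache path is an arbitrary example.

import os

def start_quic_session(session_cache_path, resume, do_handshake):
    # Sketch: reuse cached server credentials for 0-RTT when a previous
    # session exists; otherwise fall back to a 1-RTT handshake.
    if os.path.exists(session_cache_path):
        return resume()        # 0-RTT: send data immediately with cached keys
    return do_handshake()      # 1-RTT: first contact with this controller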

II-C2 Connection Migration

QUIC uses a unique connection ID (cid) to identify each connection. This allows for connection rebinding even if connection parameters such as IP or ports are modified. Typically, the server requests a cid for the lifetime of the connection. The connection migration process entails two end-point entities: an initiator and a responder; these entities carry out a QUIC connection migration in two stages. The first stage is to probe a path via path validation packets [28]. During this stage, the end-points assume that the peer is willing to accept packets with the new IP address. If the end-point does not support the existing segment exchange rate, it will re-establish congestion control [27]. In the final stage, probe packets from the peer ensure that the connection migration has been successful. Unlike tcpSDN, quicSDN can migrate connections to other end points without disrupting the connection state. This facilitates the moving of switches to new controllers without disrupting the existing connections.

II-C3 QUIC Security

QUIC combines the transport and security protocols. Establishing secure connections between switches and the controller is essential. QUIC uses TLS 1.3 [58], which includes two layers: the handshake layer and the record layer. TLS 1.3 has a faster handshake and offers more secure cipher suites than TLS 1.2. TLS 1.3 can also be used with some flavors of TCP, such as TCP Fast Open (TFO), where the SYN packet itself can carry data. However, TFO still requires two RTTs for connection establishment, whereas QUIC with TLS 1.3 can establish a connection in 1-RTT.

II-C4 Multiplexing

HTTP/1.1 opens one TCP connection for every request/response. HTTP/2 introduced multiplexing, where applications send each request/response over its own stream and streams are carried over one TCP connection, which significantly reduces TCP overhead. Since those improvements were tied to the HTTP/2 application, a general-purpose transport protocol, SPDY, was introduced [71]. Instead of opening several connections between a client and a server, SPDY creates a TCP pipeline, and connections are converted into streams inside that pipeline. QUIC inherits this from SPDY; it multiplexes multiple connections between the two end-points and converts them into streams inside a UDP pipeline. A stream presents a lightweight abstraction of a server-client exchange; streams are carried within a connection that is uniquely identified by its cid. In tcpSDN, if two or more protocols such as OpenFlow and OVSDB operate between a switch and controller, each has to open its own TCP connection. In quicSDN, by contrast, since QUIC supports multiplexing, both protocols can use the same UDP pipeline to communicate.

II-D Congestion and Flow Control

QUIC’s congestion control mechanism provides a richer set of features compared to TCP [31]. For instance, consider the TCP ambiguity problem, where TCP cannot determine whether an ACK is for the original or the retransmitted packet. QUIC solves this problem by assigning a unique packet number to each packet, irrespective of whether it is an original or a retransmitted packet. QUIC also reduces acknowledgment overhead by using a NACK-based scheme: instead of acknowledging every packet, a receiver notifies the sender about specific lost packets. Each NACK conveys two pieces of information: the largest acknowledged packet number, and the unacknowledged packets with packet numbers below it. QUIC’s ACKs support 256 NACK values [32].

In TCP, a sender can be blocked from sending data when the entire receive buffer is consumed. QUIC addresses this problem at two levels: connection-level flow control imposes an upper limit on the aggregate buffer a sender may consume on the receiver across the entire connection, while stream-level flow control imposes an upper limit on the buffer that each individual stream may consume. QUIC uses a window update frame to advertise the per-stream absolute byte offset for received, delivered, and sent packets.
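The interaction between the two limits can be sketched as a simple credit check (a simplified illustration; all values are absolute byte offsets, and the numbers in the example are arbitrary):

def bytes_allowed(stream_sent, stream_limit, conn_sent, conn_limit, want):
    # Sketch of QUIC's two-level flow control: a sender may transmit only as
    # much as both the per-stream limit and the connection-level limit permit.
    stream_credit = max(0, stream_limit - stream_sent)
    conn_credit = max(0, conn_limit - conn_sent)
    return min(want, stream_credit, conn_credit)

# The stream still allows 3000 bytes, but the connection only allows 1000.
print(bytes_allowed(7000, 10000, 63000, 64000, 2048))  # -> 1000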

III Design and Overall Architecture

This section presents a high-level overview of the interactions between components of quicSDN: QUIC, OVS, and RYU. As Figure 1 shows, ovsdb-server, ovs-switchd, and quic-client run on the switch machine, and ryu-ovsdb, ryu-of, and quic-server run on the controller machine.

Fig. 1: Overall Software Architecture implemented by all switches and controllers in the network.

III-A Inter Process Communication (IPC)

Since QUIC is an application-layer protocol, it cannot be used as an operating system’s inbuilt transport protocol in the way that TCP or UDP can. Therefore, an IPC mechanism is required to facilitate communication between the QUIC process and the application processes. This section describes the pros and cons of various IPC methods for quicSDN.

III-A1 Shared Memory

In order to allow these different components to communicate through shared data structures, all of these components can be compiled as one application. Another method is to use shared memory via a memory map. One of the main drawbacks of these methods is the lack of extensibility and abstraction. Specifically, accessing the source code of all the modules is necessary to implement these methods. For example, if there is a plan to extend a switch’s features by adding an additional component, then its code must be fully available to be integrated with the existing ones. Even when the new component’s source code is available, the developer needs to be familiar with the code. For example, to allow the concurrent execution of components, code modification and introduction of new threads is usually required. Furthermore, when employing these methods, the larger code size and the lack of clear interfaces between modules results in harder code debugging and enhancement.

Accessing shared data structures can also cause race conditions. It is essential to acquire mutually-exclusive locks to avoid race conditions among message producers and consumers. These locks can become a performance bottleneck by introducing differences in the rate of packets processed by switch- or controller-side processes. This observation has been made in multiple studies [49, 34].

III-A2 Message Passing

Compared to shared memory, message passing methods are easier to implement and more extensible. The two primary methods of message passing are message queues and Unix domain sockets (UDS). To simplify system extensibility, the latter method is used because it is supported by all major programming languages. There are two primary types of UDS at the transport layer level: stream sockets and datagram sockets. With stream UDS, received data arrives as a byte stream without message boundaries; the receiver must parse the received bytes and reassemble the messages itself, which introduces additional overhead. Datagram sockets, on the other hand, are faster and pass an entire message at a time, obviating the need for message boundary detection, and they can be used to implement various scheduling methods.
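The difference is easy to see in a few lines of Python (a standalone sketch; the socket path is an arbitrary example):

import os
import socket

PATH = "/tmp/quicsdn-demo.sock"
if os.path.exists(PATH):
    os.unlink(PATH)

# A datagram-oriented Unix domain socket preserves message boundaries, so the
# receiver gets exactly one message per recv call.
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(PATH)
client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
client.sendto(b"message-1", PATH)
client.sendto(b"message-2", PATH)

# No boundary parsing is needed, unlike SOCK_STREAM where both sends could
# arrive as one undifferentiated byte stream.
print(server.recvfrom(65536)[0])  # b'message-1'
print(server.recvfrom(65536)[0])  # b'message-2'
client.close()
server.close()
os.unlink(PATH)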

III-B Switch

There are two entities on the switch side: OVS and quic-client. OVS incorporates two daemons, ovs-switchd and ovsdb-server, which handle OpenFlow and OVSDB, respectively. The CLI commands for connection establishment with tcpSDN are:

  • ovs-vsctl set-controller <bridge name> tcp:<controller-IP>:<port>

  • ovs-vsctl set-manager tcp:<controller-IP>:<port>

In our implementation of OVS in quicSDN, ovsdb-server and ovs-switchd use UDP sockets to communicate with quic-client. Figure 2 presents the quicSDN OVS architecture. Two new CLIs were developed to accept UDP as the transport layer argument:

  • ovs-vsctl set-controller <bridge name> udp:<controller-IP>:<port>

  • ovs-vsctl set-manager udp:<controller-IP>:<port>

The udp_vconn_class class and its associated function pointers were developed to search for the "udp" keyword in the CLI and open a UDP connection to the quic-client. The opened connection is mapped to a stream pointer FD, which is defined in new_lds_fd. The aforementioned process is used for both ovsdb-server and ovs-switchd.

Fig. 2: quicSDN switch architecture. The figure highlights the modifications to OVS and how packets are processed by the quic-client.

quic-client spawns two UDP servers listening on ports 6653 and 6640. The messages received on these ports are processed and multiplexed in quic-client and then transmitted to the quic-server. To avoid thread blockage while waiting for packet arrivals, we use asynchronous I/O operations by leveraging the libevent library [55], a concurrent, highly scalable network library that provides APIs to call a callback function when a specific event occurs on an FD. Two newly introduced FDs for the sockets on ports 6653 and 6640 are mapped to stream pointers in quic-client to communicate with ovsdb-server and ovs-switchd. The two callbacks associated with these FDs are used to detect activity on the sockets. The QUIC RFC [29] mandates the use of even and odd stream IDs for client-initiated and server-initiated streams, respectively. In order to distinguish packets received on ports 6653 and 6640, different stream IDs are selected for OpenFlow and OVSDB. Since all stream IDs of client-initiated streams in quic-client are even, we reserve the even stream IDs divisible by 3 for connecting to ryu-of, and the rest are used for ryu-ovsdb. Packets are then multiplexed into the same UDP pipeline for transmission to quic-server.
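The stream-ID convention and the corresponding demultiplexing decision can be sketched as follows (a simplified illustration of the scheme above, not the actual quic-client code):

import itertools

# Client-initiated stream IDs are even; among them, IDs divisible by 3
# (i.e., multiples of 6) are reserved for OpenFlow traffic to ryu-of, and the
# remaining even IDs are used for OVSDB traffic to ryu-ovsdb.
_openflow_ids = itertools.count(start=0, step=6)   # 0, 6, 12, ...
_ovsdb_ids = itertools.count(start=2, step=6)      # 2, 8, 14, ... (even, not divisible by 3)

def next_stream_id(is_openflow):
    return next(_openflow_ids) if is_openflow else next(_ovsdb_ids)

def deliver_to(stream_id):
    # Decision applied by quic-server when handing packets upward.
    return "ryu-of" if stream_id % 3 == 0 else "ryu-ovsdb"

print(next_stream_id(True), next_stream_id(True))    # 0 6
print(next_stream_id(False), next_stream_id(False))  # 2 8
print(deliver_to(6), deliver_to(8))                  # ryu-of ryu-ovsdb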

III-C Controller

The RYU controller’s asynchronous I/O infrastructure is based on the eventlet library [72], a highly scalable, non-blocking I/O library. The eventlet socket implementation differs from the standard Python socket.socket class: underneath, eventlet implements sockets as GreenSockets, which recognize the "set_nonblocking=True" keyword parameter and put the GreenSocket into a non-blocking state to support asynchronous I/O operations. RYU spawns a GreenSocket-based server and registers an event loop (_recv_loop) to receive data on the socket. To make it UDP compatible, _recv_loop is modified by removing all TCP-related code and modifying the callbacks. Figure 3 shows the controller architecture. The two main entities are quic-server and RYU. The RYU entity includes two daemons, ryu-of and ryu-ovsdb, which communicate with quic-server over datagram connections on ports 6653 and 6640.

Fig. 3: quicSDN controller architecture. This figure highlights the modifications to RYU and how packets are processed by quic-server.

After receiving packets from quic-client, quic-server performs demultiplexing by disassembling streams based on their IDs. If the stream ID is divisible by 3, the packet is delivered to ryu-of; otherwise, it is delivered to ryu-ovsdb.

IV Implementation

This section describes the APIs and functions used to support the quicSDN architecture. There are four major entities: OVS, quic-client, quic-server, and RYU. This section mainly focuses on the newly developed and modified APIs in these entities. In order to highlight newly introduced and modified APIs, we use a color coding scheme: blue highlights indicate newly developed APIs, while red highlights indicate modified APIs. Unmodified, supportive APIs and functions (no color highlights) are also mentioned. We used three programming languages in our implementation: QUIC is in C++, RYU is in Python, and OVS is in C. We use these different languages to meet the varying performance and programmability requirements of different SDN software components. For example, we use Python to program the controller because it simplifies application development and extensibility. However, since the software switch demands high performance, we use C in that case.

IV-A OVS

This section describes the modifications applied to OVS to make it compatible with quicSDN. With tcpSDN, the OpenFlow and OVSDB protocols used by OVS are implemented through the Linux kernel’s TCP transport layer infrastructure. The transport layer parameters are defined in the rconn structure. There is one rconn structure per transport connection between the controller and switch. For instance, the OpenFlow and OVSDB connections will have different rconn structures even though the endpoints are the same. The rconn structure is used to maintain socket, port, and other protocol related information. To support quicSDN, the rconn structure needs to be modified to support UDP. The OpenFlow and OVSDB protocols are implemented as service objects in OVS. Each service object is an abstract protocol process. For instance, in order to start the OpenFlow and OVSDB services, OVS will create one service object for each. Each service object is tied to its own rconn structure. Appendix A describes the details about the connection establishment algorithm.

The OVS OpenFlow service is created by issuing the CLI command (discussed in section III-B) and specifying the service parameters. Before an OVS service is created, the vconn_lookup_class function looks up the requested transport protocol against a list of predefined connection classes. The UDP connection class udp_vconn_class inherits the connection related function pointers, including open, close, connect, recv, and send. The open function establishes a connection to the OpenFlow controller and should not block while waiting for connection requests or replies. If the connection establishment cannot be completed immediately, then the socket returns EINPROGRESS and retries in the background. close tears down the connection gracefully, send sends, and recv receives OpenFlow messages. Similar to open, recv should not block while waiting for messages to arrive.

Using the newly modified transport layer infrastructure, the function new_udp_lds opens a UDP socket for each OVS service. This socket is registered to a new FD in new_uds_fd. This FD is attached to the function pointers open, close, recv, and send. At this point, the rconn structure is populated and the OVS service enters the CONNECTING state. The OpenFlow state is dependent on the underlying transport layer protocol. Since there are no state transitions in UDP to show whether the connection is in an established state or not, the OVS service immediately transitions into the ACTIVE state.

1 function main()
2         p_openflow, p_ovsdb, p_quic = {sock, port, addr}
3         client_arg = {p_openflow, p_ovsdb, p_quic}
4         populate client_arg from CLI
5         p_openflow = Connect to ovs-switchd on port 6653
6         p_ovsdb = Connect to ovsdb-server on port 6640
7         if !(start_client(client_arg)) then
8                 return
9                
10         close(p_openflow, p_ovsdb, p_quic)
11         return  
12 function start_client(client_arg)
13         s1 = client_arg->p_openflow->sock
14         File *fp_ofl = fileno(s1)
15         s2 = client_arg->p_ovsdb->sock
16         File *fp_odb = fileno(s2)
17         sock = create UDP socket to connect to quic-server
18         File *fd_ = fileno(sock)
19         // set event callbacks
20         fd_ -> readcb(), writecb()
21         fp_ofl -> ofl_cb()
22         fp_odb -> odb_cb()
23         _quic = init()
24         if !(run(client_arg)) then
25                 return
26                
27         return  
28 function init()
29         _quic->c = Initialize new client
30         set fd_, fp_ofl, fp_odb event callbacks
31         return _quic 
32 function run(client_arg)
33         if (session_file) then
                 // 0-RTT Scenario
34                 if !(resume()) then
35                         return  
36                
37         else
                 // 1-RTT Scenario
38                 do_handshake()
39                 if !(connect()) then
40                         return  
41                
42        schedule_retransmit()
         // Starting event loop
43         ev_run(ev_d, 0)
44         return  
45 function readcb(ev_loop *loop, ev_io *w)
46         auto c = <client *>w->data;
47         on_read()
48        
49 function writecb(ev_loop *loop, ev_io *w)
50         auto c = <client *>w->data
51         on_write()
52        
53 function ofl_cb(ev_loop *loop, ev_io *w)
54         auto c = <ofl *>w->data
55         this->type_flag = openflow
56         on_ofl_odb_read()
57        
58 function odb_cb(ev_loop *loop, ev_io *w)
59         auto c = <ofdb *>w->data
60         this->type_flag = ovsdb
61         on_ofl_odb_read();
62        
Algorithm 1 quic-client
1 function on_read()
2         array<uint8_t, 65536> buf
3         while true do
4                 if !(recvfrom(this->fd_, buf.data, buf.len)) then
5                         return  
6                 if !(feed_data(buf.data, buf.len)) then
7                         return  
8                
9        
10 function feed_data(uint8_t data, int data_len)
11         if handshake_completed then
12                 if !(_con_recv(data, datalen, &stream_id)) then
13                         return  
14                 if stream_id is divisible by 3 then
15                         sendto()
16                 else
17                         sendto()
18                
19         else
20                 if !(do_handshake()) then
21                         return  
22                 else
23                         handshake_completed = true
24                
25        return  
26 function on_write()
27         if (send_queue.size > 0) then
28                 if !(write_streams()) then
29                         return  
30                
31         if !handshake_completed then
32                 if !(do_handshake()) then
33                         return  
34                 else
35                         handshake_completed = true
36                
37         while true do
38                 if !(_conn_write_pkt()) then
39                         return  
40                 write_streams()
41                 return
42        
43 function write_streams()
44         if (this->openflow) then
45                 int stream_id = generate_stream_id_divisible_by_3()
46                
47         else if (this->ovsdb) then
48                 int stream_id = generate_normal_stream_id()
49                
50         on_write_stream(stream_id, send_queue.buf)
51         return  
52 function on_write_stream(stream_id, buf)
53         while true do
54                 auto n = _conn_write_stream(ndatalen)
55                 if n > 0 && ndatalen > 0 then
56                         data.seek(ndatalen)
57                        
58                 send_packet()
59                 if buf.size() = 0 then
60                        
61                
62        return  
63 function on_ofl_odb_read()
64         array<uint8_t, 65536> buf_ofl
65         array<uint8_t, 65536> buf_odb
66         if activity detected on fp_ofl then
67                 if (recvfrom(this->fp_ofl, buf_ofl.data, buf_ofl.len)) then
68                         send_queue.push(buf_ofl)
69                        
70                
71         if activity detected on fp_odb then
72                 if (recvfrom(this->fp_odb, buf_odb.data, buf_odb.len)) then
73                         send_queue.push(buf_odb)
74                         return  
75                
76        
Algorithm 2 quic-client contd.

IV-B QUIC Client and Server

QUIC uses a client-server model. The original development goal of QUIC was to replace TCP as a reliable transport protocol for HTTP3. Our architecture is different from HTTP3 over QUIC: unlike HTTP3, multiple applications interact with QUIC in quicSDN. At the switch, ovsdb-server and ovs-switchd communicate with quic-client; at the controller, ryu-of and ryu-ovsdb communicate with quic-server. We picked ngtcp2 [47] as the basis for our QUIC code and modified it.

Our previous work [41] explains the internal workings of quic-server and quic-client. In [41], QUIC was implemented for MQTT in IoT scenarios, where it was divided into server-client agents and common APIs. The server-agent APIs are responsible for serving requests from clients, invoking common APIs, negotiating versions, and completing QUIC handshakes. The client-agent APIs are implemented to prepare the requests, rearrange the responses, and interact with common APIs. The common APIs are responsible for invoking the 0-RTT scenario, encrypting and decrypting the packets, and storing the cryptographic keys. This section focuses on the modifications that are relevant to quicSDN.

IV-B1 QUIC Client

Algorithm 1 presents the quic-client code. quic-client spawns two UDP servers listening on ports 6653 and 6640 on localhost (A1: L5-6) to intercept all connection requests and data packets from ovsdb-server and ovs-switchd. We define a set structure for storing IP address, port, and socket information. There are three sockets in quic-client: two sockets for the UDP servers, and one socket for connecting to quic-server. For each of these three sockets, we define a set structure to store connection information. The sets are for ovs-switchd (struct p_openflow), ovsdb-server (struct p_ovsdb), and quic-server (struct p_quic).

The ngtcp2 implementation only allows one IP address and port to be specified in the QUIC CLI commands. We developed a new QUIC CLI command to populate the above-mentioned three sets (A1: L3):

  • <quic_client_path> <quic server addr> <quic server port> <openflow port> <ovsdb port>

To support asynchronous I/O operations on each socket (A1: L19-21), a stream pointer is mapped to an FD associated with each socket. In conjunction with the existing FD fd_ for the socket connected to quic-server, two more FDs, named fp_ofl and fp_odb, are introduced for the sockets connected to ovs-switchd and ovsdb-server, respectively. Any activity detected on these FDs invokes a callback function. We developed two callback functions, ofl_cb and odb_cb (A1: L47-54), for ovs-switchd and ovsdb-server, and modified the existing ones, readcb (A1: L41) and writecb (A1: L44).

readcb is invoked when FD fd_ determines that a packet has been received. Inside readcb, on_read receives data from the socket and gives it to feed_data (A2: L1-20), which is responsible for processing QUIC handshake and data packets. If the QUIC handshake has been completed successfully, then it is confirmed that all the necessary security keys have been exchanged between quic-client and quic-server (A2: L26-30). The function _con_recv checks whether the received packet contains the long or the short QUIC header by inspecting the most significant bit of octet 0 (0x80) (A5: L1-6). The long header is used for QUIC version negotiation [15] and 1-RTT key negotiation, and the short header is used for subsequent data communication. crypt_quic_message [41] (A5: L6) parses the packet and performs all necessary QUIC-related operations such as connection establishment and key management.

writecb is invoked to send packets out to quic-server (A1: L44). The QUIC handshake is initiated by the on_write function (A2: L22). Inside on_write, _conn_write_pkt encrypts the packet (A5: L10-12). write_streams is then called to check whether the packet is destined for ovs-switchd or ovsdb-server in order to generate the appropriate stream IDs (A2: L38-40).

Packets that are received on the sockets connected to ovsdb-server and ovs-switchd invoke odb_cb and ofl_cb callbacks (A1: L47-54) respectively. Both callbacks invoke on_ofl_odb_read, where packets are pushed to the send_queue for QUIC processing (A2: L52-61).

1 function main()
2         p_openflow, p_ovsdb, p_quic = {sock, port, addr}
3         server_arg = {key, cert, p_ovsdb, p_openflow, p_quic}
4         Populate server_arg from CLI; p_openflow = Connect to ryu-of on port 6653
5         p_ovsdb = Connect to ryu-ovsdb on port 6640
6         if !(start_server(server_arg)) then
7                 return ;
8                
9         close(p_openflow, p_ovsdb, p_quic)
10         return ;
11        
12 function start_server(server_arg)
13         s1 = server_arg->p_openflow->sock
14         s2 = server_arg->p_ovsdb->sock
15         sock = create UDP server to accept quic-client connections
16         File *fp_ofl = fileno(s1)
17         File *fp_odb = fileno(s2)
18         File *fd_ = fileno(sock)
19         // set event callbacks
20         fd_ -> readcb(), writecb()
21         fp_ofl -> ofl_cb()
22         fp_odb -> odb_cb()
23         ev_run(ev_d, 0)
24        
25 function readcb(ev_loop *loop, ev_io *w)
26         auto c = <client *>w->data
27         on_read()
28        
29 function writecb(ev_loop *loop, ev_io *w)
30         auto c = <client *>w->data
31         on_write()
32        
33 function ofl_cb(ev_loop *loop, ev_io *w)
34         auto c = <ofl *>w->data
35         this->type_flag = openflow
36         on_ofl_odb_read()
37        
38 function odb_cb(ev_loop *loop, ev_io *w)
39         auto c = <ofdb *>w->data
40         this->type_flag = ovsdb
41         on_ofl_odb_read()
42        
Algorithm 3 quic-server
1 function on_read()
2         buffer<uint8_t, int, port> buf
3         while true do
4                 if !(recvfrom(this->fd_, buf.data, buf.len)) then
5                         return
6                 hd = this->hd
7                 if (buf[0] & 0x80) then
8                         _pkt_decode_hd_long(&hd, buf.data())
9                        
10                 else
11                         _pkt_decode_hd_short(&hd, buf.data())
12                        
13                _accept(buf.data(), buf.len)
14                
15        return  
16 function on_write()
17         if !(handshake_completed) then
18                 do_handshake()
19                 handshake_completed = true
20                
21         else
22                 if !(schedule_retransmit()) then
23                         return  
24                
25        buf = send_queue.front()
26         stream_id = _conn_map[buf->port]
27         on_write_stream(stream_id)
28         for ;; do
29                 n = _conn_write_pkt()
30                 if n = 0 then
31                        
32                        
33                 send_packet(buf)
34                
35        return  
36 function on_ofl_odb_read()
37         while true do
38                 buffer<uint8_t, int, port> buf_ofl
39                 if (recvfrom(this->fp_ofl, data, datalen)) then
40                         buf_ofl.data = data
41                         buf_ofl.len = datalen
42                         buf_ofl.port = this->port;
                         // e.g 6653 for openflow
43                         send_queue.push(buf_ofl)
44                        
45                 buffer<uint8_t, int, port> buf_odb;
46                 if (recvfrom(this->fp_odb, data, datalen)) then
47                         buf_odb.data = data
48                         buf_odb.len = datalen
49                         buf_odb.port = this->port
                         // e.g 6640 for ovsdb
50                         send_queue.push(buf_odb)
51                        
52                
53        return  
Algorithm 4 quic-server contd.
1 function _con_recv(data, datalen, &s)
2         if (data[0] & 0x80) then
3                 _pkt_decode_hd_long()
4                
5         else
6                 _pkt_decode_hd_short()
7                
8        crypt_quic_message(decrypt)
9        
10 function _conn_write_stream()
11         find_stream_info(dest)
12         _conn_write_pkt()
13 function _conn_write_pkt()
14         conn_write_probe_pkt()
15         crypt_quic_message(encrypt)
16        
17 function _accept(data, datalen)
18         plain_text = crypt_quic_message(decrypt)
19         if (this->stream_id is divisible by 3) then
20                 _conn_map[openflow_port] = stream_id; sendto(this->fp_ofl)
21         else
22                 _conn_map[odb_port] = stream_id; sendto(this->fp_odb)
23        
Algorithm 5 Common APIs

IV-B2 QUIC Server

Algorithm 3 presents the quic-server implementation. quic-server connects to ryu-of and ryu-ovsdb, which are listening on ports 6653 and 6640, respectively (A3: L5-6). Similar to quic-client, we define three sets, p_quic, p_openflow, and p_ovsdb, for quic-server, ryu-of, and ryu-ovsdb (A3: L2). p_openflow and p_ovsdb store the socket information for the northbound connections to ryu-of and ryu-ovsdb, while p_quic stores the socket information for the southbound connection to quic-client. As previously mentioned, ngtcp2's CLI commands contain only one IP address and port, so a new CLI command

  • <quic_server_path> <quic server addr> <quic server port> <key> <certificate> <openflow port> <ovsdb port>

was developed for quic-server to populate the three sets with the correct information (A3: L3). On the quic-server, sockets are also non-blocking and capable of asynchronous I/O operations (A3: L14). Three FDs are mapped as stream pointers to the sockets. The existing FD fd_ is modified, and two new FDs, fp_ofl and fp_odb (A3: L15-16), are introduced. FD fd_ is for the QUIC connection to quic-client, FD fp_ofl is for the connection to ryu-of, and FD fp_odb is for the connection to ryu-ovsdb. These FDs monitor the sockets via an event loop and invoke callbacks if any activity is detected. Callbacks readcb and writecb are invoked if activity is detected on FD fd_. As Algorithm 4 describes, ofl_cb is invoked if any activity is detected on FD fp_ofl, and odb_cb is invoked if any activity is detected on FD fp_odb.

The on_read function is responsible for reading the FD to obtain the buffer of received packets (A4: L4). This buffer is then evaluated to check whether the corresponding packet carries a long header or a short header (A4: L7). The packet is then passed to the _accept function for decryption (A4: L1).

The on_write function sends packets to quic-client (A4: L13-27). This function first evaluates and performs a QUIC handshake with quic-client to exchange cryptographic keys (A4: L15). The buffer received in on_write contains the OpenFlow or OVSDB port information, which is used to maintain an external map (_conn_map) of port to stream ID for reverse traffic (A4: L21). The function on_write_stream searches for an existing stream; if the stream does not exist yet, it opens a new one and packs the data into the newly opened stream (A4: L22). _conn_write_pkt is responsible for encrypting the packets and placing them into the tx queue (A5: L10-12).

The function on_ofl_odb_read is called by the ofl_cb and odb_cb callbacks (A4: L29-42). This function is responsible for exchanging packets between ryu-of, ryu-ovsdb, and quic-server.

IV-B3 crypt_quic_message

This API consists of the decrypts_message and encrypts_message functions, responsible for decrypting and encrypting packets in phases. Each phase has a different set of keys. Prior to acquiring the symmetric keys, QUIC goes through four phases. The first phase is the Initial Key Agreement, where each party sets and exchanges its initial key and additional information, such as an HMAC; both parties then agree on a common initial key, which is derived from the client initial key and the server initial key. The second phase is the Initial Data Exchange, where data is encrypted and decrypted using an AEAD scheme [59] and the common initial key. The third phase is the Key Agreement, where the session key is derived from the client session key, the server session key, and the initial key. The fourth phase is the Data Exchange. In this phase, data is sent using the associated AEAD scheme and the session keys: the server encrypts with the server-side key and decrypts with the client-side key, while the client does the opposite. In addition, crypt_quic_message also prepares the initialization vector (IV) and salt for the cryptographic keys.

IV-C RYU

In tcpSDN, RYU apps inherit the TCP transport layer infrastructure in the form of base classes. The most important base class is OFPHandler. It declares a controller base class object called OpenFlowController. Inside the OpenFlowController object, a server is spawned to receive and process all packets via an event loop. Any packet sent to the server will be pushed to the RYU app for processing. In quicSDN, to make RYU UDP-based, the first task is to replace the TCP infrastructure and have RYU spawn a UDP server instead. This modification is challenging due to RYU’s current event loop callback mechanism. RYU apps ryu-of and ryu-ovsdb are started with the CLI command:

  • ryu-manager --ofp-listen-port <port num> <app name>

It is important to note that no changes were made to ryu-of's or ryu-ovsdb's state machines. In tcpSDN, packets are processed in the eventlet library, which is implemented using GreenSockets. We modify the RYU event loop so that packets are not pushed into the library but are instead processed in the app itself. For quicSDN, we introduce a new packet processing mechanism at the socket handling layer, which increases processing speed. Appendix B shows the implementation of ryu-of and ryu-ovsdb. The OpenFlow and OVSDB controllers both use the same transport layer infrastructure.
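The shape of the modified, UDP-based receive path can be sketched with eventlet's green sockets as follows; serve and its handler argument are our own placeholders rather than RYU APIs.

import eventlet
from eventlet.green import socket  # GreenSocket-backed, cooperative sockets

def serve(port, handler):
    # Sketch of a UDP receive loop replacing the TCP-based _recv_loop: each
    # datagram carries one complete OpenFlow or OVSDB message and is handed
    # directly to the application handler.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    while True:
        data, addr = sock.recvfrom(65536)  # yields to other green threads
        handler(data, addr)

# Example: run the OpenFlow-side loop as a green thread on port 6653.
# eventlet.spawn(serve, 6653, lambda data, addr: print(len(data), addr))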

V Modeling Transport Protocol Overheads

In this section, we analyze the experimental data and generate models to predict the amount of protocol overhead involved with both TCP and QUIC transactions. Specifically, we present models to highlight the improvements in traffic overhead in quicSDN as compared to tcpSDN.

Each packet exchanged between the controller and switch, assuming that the Ethernet header is of fixed size, contains three things: an IP header, a transport layer header, and a payload. The benefits of quicSDN over tcpSDN come from: (i) smaller transport protocol header sizes and (ii) more efficient utilization of the IP headers. In tcpSDN, the transport protocol header consists of only the TCP header, whose size ranges from 20 to 40 bytes. Since there is no multiplexing in TCP, every message exchanged between client and server has to bear the full header cost. The overhead associated with sending the set of messages M over TCP is:

O_TCP = \sum_{m \in M} \lceil s_m / (MTU - H_IP - H_TCP) \rceil (H_IP + H_TCP)    (1)

Here, H_IP is the size of the IP header, H_TCP is the size of the TCP header, s_m is the size of message m in M, and MTU is the network's MTU.

The headers used by QUIC are significantly different from TCP headers, as QUIC is implemented on top of UDP with an additional set of QUIC-specific metadata. There are two types of QUIC headers: long headers and short headers. As mentioned in Section II-D, the long header is only used at the start of the connection, and subsequent packets use short headers for the remaining lifetime of the connection. The long header has a fixed cost of 20 bytes, which is paid during the 1-RTT connection establishment and does not contribute to the overhead. The size of QUIC's short header ranges from 3 to 11 bytes; when combined with the 8 bytes of a UDP header, QUIC incurs anywhere between 11 and 19 bytes of transport protocol overhead per packet. A quick comparison between TCP and QUIC reveals that even in the worst-case QUIC scenario, its transport overhead will still be less than TCP's best-case transport overhead. Furthermore, QUIC is capable of multiplexing and can combine multiple streams into a single UDP pipeline, reducing the IP and UDP overheads. The number of streams packed into each packet depends on many variables such as message size, data rate, and QUIC implementation. Each stream is uniquely identified by a stream ID, which is a 62-bit integer, constrained by the peer-advertised maximum stream ID.

With this knowledge, we can predict that for a system where H_UDP is the size of the UDP header, H_SH is the length of the QUIC short header, H_STR is the size of the QUIC STREAM frame header, and n̄ is the average number of streams per packet, the overhead and the number of packets associated with sending the set of messages M over QUIC are:

O_QUIC = \sum_{m \in M} \lceil s_m / (MTU - H_IP - H_UDP - H_SH - H_STR) \rceil ( (H_IP + H_UDP + H_SH)/n̄ + H_STR )    (2)

N_QUIC = (1/n̄) \sum_{m \in M} \lceil s_m / (MTU - H_IP - H_UDP - H_SH - H_STR) \rceil    (3)

Since QUIC headers are smaller than TCP headers and QUIC is capable of multiplexing multiple streams into a single UDP packet, it is guaranteed that the number of packets and the total overhead associated with QUIC are always less than those associated with TCP. Unlike Equation (1), Equation (2) takes into consideration QUIC's ability to multiplex multiple streams into a single UDP packet: the transport layer header costs are split among the streams carried in the packet, which reduces each stream's overhead. This analysis, combined with the results presented in Section VI, confirms that QUIC is capable of reducing the transport protocol overheads in SDN scenarios.
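As a quick numerical illustration of these models, the sketch below evaluates Equations (1) and (2) for a batch of equally sized messages; the header sizes, stream count, and message sizes are arbitrary example values rather than measurements from our testbed.

import math

def tcp_overhead(msg_sizes, mtu=1500, h_ip=20, h_tcp=32):
    # Equation (1): each message is segmented on its own, and every segment
    # carries a full IP + TCP header.
    per_pkt = h_ip + h_tcp
    payload = mtu - per_pkt
    return sum(math.ceil(s / payload) * per_pkt for s in msg_sizes)

def quic_overhead(msg_sizes, mtu=1500, h_ip=20, h_udp=8, h_sh=7, h_str=4, n_avg=3):
    # Equation (2): n_avg streams share each UDP packet, so the IP, UDP, and
    # short-header costs are split across them; only the per-stream STREAM
    # frame header is paid in full.
    payload = mtu - h_ip - h_udp - h_sh - h_str
    per_stream_share = (h_ip + h_udp + h_sh) / n_avg + h_str
    return sum(math.ceil(s / payload) * per_stream_share for s in msg_sizes)

msgs = [120] * 1000  # e.g., 1000 small FLOW_MOD-sized messages
print(tcp_overhead(msgs), round(quic_overhead(msgs)))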

To confirm the validity of the mathematical formulations of transport layer overheads, we empirically compared the overhead of TCP and QUIC. We then compare the observed values with the overhead values generated by the mathematical formulations. We maintain a fixed message size by sending a file from a source to a destination using TCP and QUIC. Once the network traffic has been captured, we subtract the file’s size from the total IP traffic sent across the network to get the amount of transport layer overheads. Figure 4 presents the results.

Fig. 4: Comparison of transport layer overheads between quicSDN and tcpSDN. These results confirm that the overhead of QUIC is considerably lower than that of TCP. Also, this assures the validity of the analytical models presented for accurate overhead prediction.

As the figure shows, the observed overheads match our expectations: TCP exhibits greater transport layer overhead than QUIC, and the observed overheads match the predicted transport layer overheads. The predicted overheads for both TCP and QUIC are very close to the observed values; the errors of the analytical models are 2.2% for TCP and 3.1% for QUIC.

VI Results and Evaluation

In this section, we present an empirical evaluation of quicSDN versus tcpSDN. Figure 5 presents the testbed used for these experiments.

Fig. 5: As the figure shows, one machine is a switch (OVS) and the other is a controller (RYU). quic-client and quic-server are installed on the switch and the controller, respectively. The switch and controller are kept one L3 hop apart to simulate a real Internet path.

For the tcpSDN experiments, we install OVS on the switch and RYU on the controller. All of the connections in the tcpSDN experiments use TCP CUBIC [22], which is provided by the Linux kernel (3.13.11). To enable QUIC for the quicSDN experiments, quic-client is hosted on the switch and quic-server is hosted on the controller. Each experiment was repeated thirty times. The error bars represent the upper and lower quartiles. We present two categories of experiments: first, we compare the overhead values of quicSDN and tcpSDN; second, we evaluate how quicSDN and tcpSDN react to and recover from a broken connection between the switch and the controller.

VI-A Overhead Reduction

We measure the differences in transport layer overheads in three scenarios: flow rule installation, queue configuration, and statistics polling. Table I summarizes the results. In the following, we first explain these experiments and then discuss the results.

                     Flow Installation                      Queue Configuration                Statistics Polling
                     10       1000     10K       100K       100      10K       100K            100      10K       100K
tcpSDN (Bytes)       38978    391896   1441000   6631000    50283    1901804   1119843000      20070    2527814   1701342468
quicSDN (Bytes)      29217    282063   949000    3313000    15472    1245871   570093000       13953    1644596   825151097
Traffic Reduction    25.04%   28.02%   34.14%    50.03%     30.77%   34.49%    52.43%          30.48%   34.94%    51.50%
TABLE I: Comparison of the average number of bytes exchanged per second between the Controller and the Switch using quicSDN and tcpSDN.

VI-A1 Flow Rule Installation

In this experiment, while the controller installs flow rules on the switch, we measure the amount of traffic exchanged between the controller and switch. To install flow rules, we use a Python script on the controller to construct and send OpenFlow FLOW_MOD messages to the switch at 10, 1000, 10K, and 100K installations per second. quicSDN exchanges about 25%, 28%, 34%, and 50% less traffic than tcpSDN for 10, 1000, 10K, and 100K flow rule installations per second, respectively.

VI-A2 Queue Configuration

In this experiment, we measure the overhead of sending OVSDB messages from the controller to the switch to configure queues. A queuepusher Python script equipped with a REST API was developed to generate an OVSDB queue configuration based on queue_id, topology ID, and node ID. The queuepusher script pushed the queue configuration to the controller, which then sent the OVSDB queue configuration message to the switch. For a queue installation rate of 100 queues/sec, quicSDN reduces the control traffic overhead between the controller and switch by about 30% in comparison to tcpSDN. As we increase the queue installation rate to 10K and 100K queues per second, the performance advantage of quicSDN over tcpSDN grows.
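The role of the queuepusher script can be sketched as follows; the REST endpoint URL and JSON layout are assumptions for illustration only, and the controller in turn emits the corresponding OVSDB messages to the switch.

import json
import requests

def push_queue(queue_id, topology_id, node_id, max_rate_bps):
    # Sketch: build a queue configuration from queue_id, topology ID, and
    # node ID and push it to the controller's (hypothetical) REST endpoint.
    config = {
        "queue_id": queue_id,
        "topology_id": topology_id,
        "node_id": node_id,
        "other_config": {"max-rate": str(max_rate_bps)},
    }
    resp = requests.post("http://127.0.0.1:8080/qos/queue",   # hypothetical URL
                         data=json.dumps(config),
                         headers={"Content-Type": "application/json"},
                         timeout=5)
    resp.raise_for_status()
    return resp.json()

# Example: one call per queue; the experiment issues 100 to 100K of these per second.
# push_queue(queue_id=1, topology_id="ovsdb:1", node_id="br0", max_rate_bps=10_000_000)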

VI-A3 Statistics Polling

In this experiment, we measure the overhead of quicSDN and tcpSDN when polling statistical data from the switch with 100, 10K, and 100K flow rules programmed into the switch's flow table. To determine the overhead, the controller polled the switch's flow rule statistics every second for 100, 10K, and 100K installed flow rules. When polling the statistics of 100, 10K, and 100K installed flow rules, quicSDN reduced the consumed bandwidth by about 30%, 34%, and 51%, respectively, in comparison to tcpSDN.

VI-A4 Discussion

The results show that in all of the experiments, the overhead of quicSDN is significantly lower than the overhead of tcpSDN. Additionally, it is important to note that as we increase the configuration rate, quicSDN uses increasingly less bandwidth than tcpSDN. The primary reasons for this reduction are quicSDN's ability to support stream multiplexing and QUIC's shorter header sizes. Through stream multiplexing, quicSDN can combine several messages under one QUIC short header, whereas tcpSDN requires each packet to carry a separate TCP header. Most QUIC communication uses the short header (3-11 bytes), which is significantly smaller than the TCP header (20-40 bytes). Since QUIC runs on top of UDP, which has an 8-byte header, QUIC packets carry a total of 11-19 bytes of transport protocol headers; in most scenarios, this is less than the TCP header size alone. Because QUIC multiplexes several streams into a single packet, quicSDN can significantly reduce the bandwidth needed for transport protocol headers in comparison to tcpSDN. This explains the overhead reductions observed in the experiments.

VI-B Connection Migration

In this experiment, we installed a flow rule to enable communication between the client and server shown in Figure 5. The client downloaded a file from the server, and while the download was in progress, we polled the statistics of the flow every second until the file was completely transferred. 1200 seconds into the file transfer, the switch and controller were disconnected by bringing down the interface on the switch, which also disrupted the datapath connection between the client and server. After the switch's interface was brought back up within OVS's defined probe interval, a new source port was assigned to establish a new transport connection between the switch and controller. In tcpSDN, because the source port changed, a new TCP connection had to be established, and the client and server restarted the file transfer from the beginning. In quicSDN, on the other hand, the original QUIC connection between the switch and controller was resumed even though the source port changed. Figure 6 presents the flow statistics polled during the file transfer for tcpSDN and quicSDN. As the results show, tcpSDN ended up transferring more statistics-polling bytes than quicSDN, because in tcpSDN the file transfer restarted from the beginning after the TCP connection between the switch and controller was re-established. Since QUIC connections do not depend on the endpoints' IP addresses or port numbers, the QUIC connection between the controller and switch in quicSDN was resumed within the OVS probe interval; consequently, the datapath connection between the client and server resumed as well.

It is important to note that the improvement of quicSDN depends on the amount of data transferred before the connection between the controller and the switch goes down. For example, if the connection had never gone down, tcpSDN would have taken approximately the same amount of time as quicSDN to transfer the data.

Fig. 6: Polling of flow statistics during the file transfer between the client and server. (a) Behavior of tcpSDN after the connection is dropped, which forces the switch and controller to establish a new TCP connection; hence, the file transfer restarts from the beginning. (b) Behavior of quicSDN when the connection is dropped; since the connection is successfully resumed, the file transfer between the client and server also resumes.

VII Related Work

VII-A SDN Scalability

The communication overhead and delay between a controller and its associated switches have been widely explored in the literature. The primary methods used to mitigate these overheads are: (i) increasing each switch's autonomy to handle flows, (ii) selecting optimal controller placement, and (iii) using multiple controllers to reduce switch-to-controller distances.

To reduce the amount of switch-controller communication, Hedera [1] allows switches to handle mice flows using ECMP; switches only consult the controller when dealing with elephant flows, defined as flows that consume more than 10% of the host NIC's bandwidth. DIFANE [75] distributes OpenFlow wildcard rules across switches to allow the switches to perform local routing. Curtis et al. [14] show that polling statistical data from switches interferes with FLOW_MOD messages and reduces the flow rule setup rate. They also demonstrate that the low bandwidth between a switching appliance's CPU and its ports' ASIC introduces a significant communication delay between switch and controller when installing new flow rules. They propose DevoFlow, which devolves the control of many flows back to the switches so that the controller only targets significant flows; DevoFlow uses wildcard rules to reduce the number of interactions with the controller while also reducing TCAM usage. Mahout [13] uses the sender's TCP buffer size to identify mice flows and decide whether communication with the controller is necessary. Kim et al. [39] propose a flow management scheme that reduces the number of OpenFlow PACKET_IN messages sent to the controller, thereby reducing the network overhead caused by flow table misses. Their scheme keeps inactive flow entries for as long as the flow table has space; once the flow table starts filling up, inactive flow entries are deleted. Qin et al. [57] demonstrate the challenges of controller assignment in edge computing networks. They analyze both controller-switch and inter-controller traffic overheads in networks with varying numbers of nodes and show that the amount of control traffic grows linearly with the number of nodes. They model the controller placement problem and propose a solution that can reduce device management delay by 25%.

Onix [40] provides a wide range of primitives for developing control applications in environments such as WANs and public clouds. To simplify this process while maintaining scalability, APIs are provided for distributed implementation; for example, control applications can use these APIs to access the information maintained by Onix instances. HyperFlow [67] synchronizes the state of distributed controllers and provides control applications with uniform, consistent access to the overall network data. Kandoo [23] assumes local processing is available close to the switches: applications that rely on local information are assigned to these local controllers, while non-local applications run in a root controller. Bera et al. [4] propose a dynamic controller assignment scheme to maximize controller reactivity in heterogeneous networks. They accomplish this by selecting a controller to manage new flows arriving at switches such that controller-switch delay and controller overheads are optimized. Disco [53] targets synchronization among controllers that manage multiple, heterogeneous networks. It uses AMQP, which runs over TCP, for east-west communication among controllers; AMQP allows controllers to publish and subscribe to topics.

To enhance communication reliability with controllers, Zhang et al. [77] propose a min-cut algorithm for controller placement: the network is first partitioned with the minimum inter-partition cut, and inside each partition the node with the minimum distance to the other nodes is selected. Survivor [46] uses path diversity as a metric in its linear programming formulation to determine controller location; simulation results show that the connectivity loss of the proposed method is 2 to 3x lower than that of [77]. Beheshti et al. [3] argue the importance of providing switches with alternative paths to the controller as soon as the primary path fails; their routing algorithm takes into account both distance and resilience to path failures.

Van Bemten et al. [68] use switches from multiple vendors and demonstrate that switch management operations are neither predictable nor reliable. With Pica switches, as the number of FlowMod messages per second increases, two behaviors emerge: the number of ignored rules increases, and some rules are reported as installed even though they are not. Although the number of rules in hardware is always the same with Pica switches, for HP and Dell switches the number of rules depends on the match/action combination.

VII-B QUIC Protocol

QUIC outperforms TCP in several types of networks. Our previous work [41] demonstrated that in IoT networks, QUIC outperforms TCP in terms of memory usage, processor utilization, latency, and network overhead. Carlucci et al. [7] showed that QUIC achieves higher throughput than TCP in under-buffered networks. Zheng et al. [78] compared QUIC with TCP/TLS and HTTP/2, focusing on lossy networks such as WiFi; based on their findings, they also concluded that QUIC performs better when the network response time is low. Biswal et al. [5] affirmed in their experiments that QUIC consistently outperforms TCP in lossy networks such as WiFi. Das et al. [16] compared the page load time of HTTP/1.1, SPDY, and QUIC for objects of different sizes under various bandwidth and delay configurations; their results show that QUIC achieves better page load times than TCP. Megyesi et al. [45] confirmed the claims presented in [16]. Yu et al. [76] presented a comparison of QUIC and TCP for multi-streaming applications, using a customized testbed hosting a multi-streaming application to emulate realistic Internet conditions. Their findings show that if network packet loss is negligible and the buffer size is large, TCP can outperform QUIC; as the buffer size is reduced and packet loss is injected into the network, QUIC starts outperforming TCP.

It is fair to assume that these benefits of QUIC can be seen in SDN applications as well. QUIC reduces network overhead through its use of long and short headers: the long header is used at the beginning of the connection, and the short header is used for the rest of the connection. The short header can be as small as 3 bytes, which is significantly smaller than a TCP header and therefore reduces overall network overhead. Instead of using the standard 4-tuple to identify a connection, QUIC uses a separate connection ID that is independent of IP addresses and port numbers, which enables more robust connection migration. Incorporating QUIC as the transport and security layer of SDN is complementary to the above-mentioned research, as none of those studies has considered using QUIC to reduce transport-layer overhead and increase device mobility. Most importantly, this work draws the attention of application developers to enhancing the SDN control plane.

VIII Conclusion

To meet the growing needs of SDN applications, transport-layer modifications and enhancements are necessary. The current TCP/TLS protocol suite is inadequate for SDN application requirements: it imposes excessive transport-layer overhead and restricts device mobility. By bringing the transport and security layer components into userspace, we provide benefits such as transport-layer overhead reduction and control plane device mobility. Implementing these protocols in userspace also eliminates the need for kernel modifications in middleboxes and endpoints. In today's virtualization-based SDN architectures, these overhead and mobility benefits are indispensable.

In this paper, we demonstrated the benefits of QUIC over TCP/TLS in SDN while preserving connection reliability. We (a) proposed a novel software architecture, (b) developed new APIs, and (c) modified existing APIs to implement quicSDN. We then ran experiments that highlighted the benefits of quicSDN over tcpSDN and demonstrated the validity of our analytical transport-layer overhead models. We concluded that quicSDN outperforms tcpSDN in all of the SDN control plane scenarios we evaluated.

Several areas remain for future work. One is implementing a kernel bypass for QUIC-SDN communication: currently, our architecture uses UDP for inter-process communication, which involves the kernel for every message, so message throughput and latency could be further improved by moving packet processing into userspace. Another is increased MTU flexibility: QUIC handshake packets currently have a fixed size of 1392 bytes [65], which includes the Ethernet, IP, and UDP headers, and because of this fixed handshake MTU, QUIC does not allow packet fragmentation. Finally, we have not examined the potential benefits of QUIC's flow and congestion control; these mechanisms have yet to be fully analyzed and quantified in the SDN context.

Appendix A Implementation of OVS Over UDP

OVS services are connection-oriented and require a reliable connection between end-host devices. The quicSDN architecture hosts the OVS services and the quic-client on the same machine, which eliminates the need for a reliable connection between them. Since TCP is no longer required, UDP becomes a strong contender, as discussed in Section III. Algorithm 6 shows how an OVS service can be created using UDP as the transport layer protocol.

/* Service can be OpenFlow or OVSDB; both take the same connection creation path */
function service_create()
        /* Search for the "udp" keyword */
        if !vconn_verify_name(command_line_input()) then
                return
        rconn ← rconn_create()
        acquire_mutex_lock(rconnlock)
        if !vconn_lookup_class(name) then
                return
        num ← num_vconn_class
        for i ← 0 to num do
                class ← vconn_class[i]
                if class == name then
                        udp_vconn_class()
        release_mutex_lock(rconnlock)
        return

function udp_vconn_class()
        /* Open a UDP socket */
        if !new_udp_lds() then
                return
        /* Create the UDP connection */
        new_lds_fd()
        return
Algorithm 6 OVS's Transport Layer Interface

Appendix B Implementation of RYU over UDP

Algorithm 7 shows the implementation of RYU applications over UDP. A UDP socket is opened to interact with the quic-server; ryu_config captures the socket and port and spawns a UDP server, and process_packets parses each packet and replies based on the message type. A minimal Python sketch of this server loop is given after Algorithm 7.

function main()
        function init_ryu()
                get_appManager_instance()
                param ← {addr, port}
                param ← get_cli_conf()
                create_OpenFlowObject()
                return
        function create_OpenFlowObject()
                ryu_config ← {param, sock}
                ryu_config ← spawn_udp_server()
                spawn_server_loop()
                return
        function spawn_server_loop()
                datapath_handler ← {ofp_events, callback}
                datapath_handler ← get_loop_event()
                process_packets(datapath_handler)
                return
Algorithm 7 RYU's Transport Layer Interface
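As a concrete illustration of the server loop described above, the following is a minimal Python sketch, assuming a loopback UDP channel between the quic-server and RYU. The function names mirror Algorithm 7, but the bodies, the port number, and the handling of only HELLO and ECHO messages are illustrative assumptions rather than the actual implementation.

# Minimal sketch (not the actual implementation): a UDP server loop that
# receives OpenFlow messages forwarded by the local quic-server and replies
# based on the message type. Only the 8-byte OpenFlow header is parsed here.
import socket
import struct

OFP_HEADER = "!BBHI"   # version (1B), type (1B), length (2B), xid (4B)
OFPT_HELLO, OFPT_ECHO_REQUEST, OFPT_ECHO_REPLY = 0, 2, 3

def spawn_udp_server(addr="127.0.0.1", port=6653):
    # Open the UDP socket on which the local quic-server delivers OpenFlow bytes.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((addr, port))
    return sock

def process_packets(sock):
    # Parse the OpenFlow header of each datagram and reply by message type.
    while True:
        buf, peer = sock.recvfrom(65535)
        version, msg_type, msg_len, xid = struct.unpack_from(OFP_HEADER, buf)
        if msg_type == OFPT_HELLO:
            sock.sendto(struct.pack(OFP_HEADER, version, OFPT_HELLO, 8, xid), peer)
        elif msg_type == OFPT_ECHO_REQUEST:
            sock.sendto(struct.pack(OFP_HEADER, version, OFPT_ECHO_REPLY, 8, xid), peer)
        # Other message types would be handed to the controller's event loop.

if __name__ == "__main__":
    process_packets(spawn_udp_server())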

References

  • [1] M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, A. Vahdat, et al. (2010) Hedera: dynamic flow scheduling for data center networks.. In NSDI, Vol. 10, pp. 89–92. Cited by: §VII-A.
  • [2] M. Alsaeedi, M. M. Mohamad, and A. A. Al-Roubaiey (2019) Toward adaptive and scalable openflow-sdn flow control: a survey. IEEE Access 7, pp. 107346–107379. Cited by: §I.
  • [3] N. Beheshti and Y. Zhang (2012) Fast failover for control traffic in software-defined networks. In IEEE Global Communications Conference (GLOBECOM), pp. 2665–2670. Cited by: §VII-A.
  • [4] S. Bera, S. Misra, and N. Saha (2020) Traffic-aware dynamic controller assignment in sdn. IEEE Transactions on Communications. Cited by: §VII-A.
  • [5] P. Biswal and O. Gnawali (2016) Does quic make the web faster?. In 2016 IEEE Global Communications Conference (GLOBECOM), pp. 1–6. Cited by: §VII-B.
  • [6] C. Caba and J. Soler (2015) Apis for qos configuration in software defined networks. In Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft), pp. 1–5. Cited by: §I.
  • [7] G. Carlucci, L. De Cicco, and S. Mascolo (2015) HTTP over udp: an experimental investigation of quic. In Proceedings of the 30th Annual ACM Symposium on Applied Computing, pp. 609–614. Cited by: §VII-B.
  • [8] J. Case, M. Fedor, M. L. Schoffstall, and J. Davin (1990) RFC1157: simple network management protocol (snmp). RFC Editor. Cited by: §I.
  • [9] M. Cello, Y. Xu, A. Walid, G. Wilfong, H. J. Chao, and M. Marchese (2017) BalCon: a distributed elastic sdn control via efficient switch migration. In IEEE International Conference on Cloud Engineering (IC2E), pp. 40–50. Cited by: §I.
  • [10] J. Chen and B. Dezfouli (2021) Modeling control traffic in software-defined networks. In 7th IEEE International Conference on Network Softwarization (NefSoft), Cited by: §I, §I, §I.
  • [11] G. Cheng, H. Chen, Z. Wang, and S. Chen (2015) DHA: distributed decisions on the switch migration toward a scalable sdn control plane. In IFIP Networking Conference (IFIP Networking), pp. 1–9. Cited by: §I.
  • [12] Cisco (2018)(Website) External Links: Link Cited by: §I.
  • [13] A. R. Curtis, W. Kim, and P. Yalagandula (2011) Mahout: low-overhead datacenter traffic management using end-host-based elephant detection. In IEEE INFOCOM, pp. 1629–1637. Cited by: §VII-A.
  • [14] A. R. Curtis, J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee (2011) DevoFlow: scaling flow management for high-performance networks. In ACM SIGCOMM, pp. 254–265. Cited by: §I, §VII-A.
  • [15] M. D. Schinazi (2020)(Website) External Links: Link Cited by: §IV-B1.
  • [16] S. R. Das (2014) Evaluation of quic on web page performance. Ph.D. Thesis, Massachusetts Institute of Technology. Cited by: §VII-B.
  • [17] R. Devel (2017)(Website) External Links: Link Cited by: §I.
  • [18] R. Devel (2021)(Website) External Links: Link Cited by: §I.
  • [19] R. Enns, M. Bjorklund, J. Schoenwaelder, and A. Bierman (2011) Network configuration protocol (netconf). Cited by: §I.
  • [20] S. Fang, Y. Yu, C. H. Foh, and K. M. M. Aung (2013) A loss-free multipathing solution for data center network using software-defined networking approach. IEEE transactions on magnetics 49 (6), pp. 2723–2730. Cited by: §I.
  • [21] J. Flathagen, T. M. Mjelde, and O. I. Bentstuen (2018) A combined network access control and qos scheme for software defined networks. In 2018 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), pp. 1–6. Cited by: §I.
  • [22] S. Ha, I. Rhee, and L. Xu (2008) CUBIC: a new tcp-friendly high-speed tcp variant. ACM SIGOPS operating systems review 42 (5), pp. 64–74. Cited by: §VI.
  • [23] S. Hassas Yeganeh and Y. Ganjali (2012) Kandoo: a framework for efficient and scalable offloading of control applications. In Proceedings of the first workshop on Hot topics in software defined networks, pp. 19–24. Cited by: §VII-A.
  • [24] J. Hu, C. Lin, X. Li, and J. Huang (2014) Scalability of control planes for software defined networks: modeling and evaluation. In IEEE 22nd International Symposium of Quality of Service (IWQoS), pp. 147–152. Cited by: §I.
  • [25] T. Hu, P. Yi, J. Zhang, and J. Lan (2018) A distributed decision mechanism for controller load balancing based on switch migration in sdn. China Communications 15 (10), pp. 129–142. Cited by: §I.
  • [26] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young (2015) Mobile edge computing—a key technology towards 5g. ETSI white paper 11 (11), pp. 1–16. Cited by: §I.
  • [27] J. Iyengar and I. Swett (2020)(Website) External Links: Link Cited by: §II-C2.
  • [28] J. Iyengar and M. Thompson (2011)(Website) External Links: Link Cited by: §II-C2.
  • [29] J. Iyengar and M. Thompson (2020)(Website) External Links: Link Cited by: §III-B.
  • [30] J. Iyengar and M. Thomson (2018) QUIC: a udp-based multiplexed and secure transport. Internet Engineering Task Force, Internet-Draft draftietf-quic-transport-17. Cited by: §I.
  • [31] J. Iyengar (2016)(Website) External Links: Link Cited by: §II-D.
  • [32] J. Iyengar (2020)(Website) External Links: Link Cited by: §II-D.
  • [33] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu, et al. (2013) B4: experience with a globally-deployed software defined wan. ACM SIGCOMM Computer Communication Review 43 (4), pp. 3–14. Cited by: §I.
  • [34] H. Jung, H. Han, A. Fekete, G. Heiser, and H. Y. Yeom (2014) A scalable lock manager for multicores. ACM Transactions on Database Systems (TODS) 39 (4), pp. 1–29. Cited by: §III-A1.
  • [35] Juniper (2018)(Website) External Links: Link Cited by: §I.
  • [36] A. Kaloxylos (2018) A survey and an analysis of network slicing in 5g networks. IEEE Communications Standards Magazine 2 (1), pp. 60–65. Cited by: §I.
  • [37] M. Karakus and A. Durresi (2017) A survey: control plane scalability issues and approaches in software-defined networking (sdn). Computer Networks 112, pp. 279–293. Cited by: §I.
  • [38] E. Kim, Y. Choi, S. Lee, and H. J. Kim (2017) Enhanced flow table management scheme with an lru-based caching algorithm for sdn. IEEE Access 5, pp. 25555–25564. Cited by: §I.
  • [39] E. Kim, S. Lee, Y. Choi, M. Shin, and H. Kim (2014) A flow entry management scheme for reducing controller overhead. 16th International Conference on Advanced Communication Technology. Cited by: §VII-A.
  • [40] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama, et al. (2010) Onix: a distributed control platform for large-scale production networks.. In OSDI, Vol. 10, pp. 1–6. Cited by: §VII-A.
  • [41] P. Kumar and B. Dezfouli (2019) Implementation and analysis of quic for mqtt. Computer Networks 150, pp. 28–45. Cited by: §IV-B1, §IV-B, §VII-B.
  • [42] M. Bishop (Ed.) (2019)(Website) External Links: Link Cited by: §I.
  • [43] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner (2008) OpenFlow: enabling innovation in campus networks. ACM SIGCOMM Computer Communication Review 38 (2), pp. 69–74. Cited by: §I.
  • [44] J. Medved, R. Varga, A. Tkacik, and K. Gray (2014) Opendaylight: towards a model-driven sdn controller architecture. In Proceeding of IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, pp. 1–6. Cited by: §I.
  • [45] P. Megyesi, Z. Krämer, and S. Molnár (2016) How quick is quic?. In 2016 IEEE International Conference on Communications (ICC), pp. 1–6. Cited by: §VII-B.
  • [46] L. F. Müller, R. R. Oliveira, M. C. Luizelli, L. P. Gaspary, and M. P. Barcellos (2014) Survivor: an enhanced controller placement strategy for improving sdn survivability. In IEEE Global Communications Conference, pp. 1909–1915. Cited by: §VII-A.
  • [47] ngtcp2 team (2018)(Website) External Links: Link Cited by: §IV-B.
  • [48] M. Noormohammadpour and C. S. Raghavendra (2017) Datacenter traffic control: understanding techniques and tradeoffs. IEEE Communications Surveys & Tutorials 20 (2), pp. 1492–1525. Cited by: §I.
  • [49] R. Odaira and K. Hiraki (2003) Selective optimization of locks by runtime statistics and just-in-time compilation. In Proceedings International Parallel and Distributed Processing Symposium, pp. 6–pp. Cited by: §III-A1.
  • [50] D. Palma, J. Goncalves, B. Sousa, L. Cordeiro, P. Simoes, S. Sharma, and D. Staessens (2014) The queuepusher: enabling queue management in openflow. In Third European workshop on software defined networks, pp. 125–126. Cited by: §I.
  • [51] B. Pfaff and B. Davie (2013) The open vswitch database management protocol. Internet Requests for Comments, RFC Editor, RFC 7047. Cited by: §I.
  • [52] X. T. Phan and K. Fukuda (2017) Toward a flexible and scalable monitoring framework in software-defined networks. In 31st International Conference on Advanced Information Networking and Applications Workshops (WAINA), pp. 403–408. Cited by: §I.
  • [53] K. Phemius, M. Bouet, and J. Leguay (2014) Disco: distributed multi-domain sdn controllers. In IEEE Network Operations and Management Symposium (NOMS), pp. 1–4. Cited by: §VII-A.
  • [54] C. Powell, C. Desiniotis, and B. Dezfouli (2020) The fog development kit: a platform for the development and management of fog systems. IEEE Internet of Things Journal 7 (4), pp. 3198–3213. Cited by: §I.
  • [55] N. Provos and N. Mathewson (2003) Libevent—an event notification library. Cited by: §III-B.
  • [56] F. Qian, V. Gopalakrishnan, E. Halepovic, S. Sen, and O. Spatscheck (2015) Tm3: flexible transport-layer multi-pipe multiplexing middlebox without head-of-line blocking. In Proceedings of the 11th ACM Conference on Emerging Networking Experiments and Technologies, pp. 1–13. Cited by: §II-A.
  • [57] Q. Qin, K. Poularakis, G. Iosifidis, and L. Tassiulas (2018) SDN controller placement at the edge: optimizing delay and overheads. In IEEE Conference on Computer Communications (INFOCOM), pp. 684–692. Cited by: §VII-A.
  • [58] E. Rescorla and Mozilla (2018)(Website) External Links: Link Cited by: §II-C3.
  • [59] P. Rogaway (2002) Authenticated-encryption with associated-data. In Proceedings of the 9th ACM conference on Computer and communications security, pp. 98–107. Cited by: §IV-B3.
  • [60] RYU (2018)(Website) External Links: Link Cited by: §I.
  • [61] RYU (2019)(Website) External Links: Link Cited by: §I.
  • [62] RYU (2019)(Website) External Links: Link Cited by: §I.
  • [63] RYU (2019)(Website) External Links: Link Cited by: §I.
  • [64] M. Scharf and S. Kiesel (2006) NXG03-5: head-of-line blocking in tcp and sctp: analysis and measurements. In IEEE Globecom, pp. 1–5. Cited by: §II-A.
  • [65] R. Shade (2014) QUIC—next generation multiplexed transport over UDP. streamed live Feb 11, pp. 29. Cited by: §VIII.
  • [66] S. Sharma, D. Staessens, D. Colle, D. Palma, J. Goncalves, R. Figueiredo, D. Morris, M. Pickavet, and P. Demeester (2014) Implementing quality of service for the software defined networking enabled future internet. In Third European workshop on software defined networks, pp. 49–54. Cited by: §I.
  • [67] A. Tootoonchian and Y. Ganjali (2010) Hyperflow: a distributed control plane for openflow. In Proceedings of the internet network management conference on Research on enterprise networking, Vol. 3. Cited by: §VII-A.
  • [68] A. Van Bemten, N. Ðerić, A. Varasteh, A. Blenk, S. Schmid, and W. Kellerer (2019) Empirical predictability study of sdn switches. In ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), pp. 1–13. Cited by: §VII-A.
  • [69] F. Volpato, M. P. Da Silva, A. L. Gonçalves, and M. A. R. Dantas (2017) An autonomic qos management architecture for software-defined networking environments. In IEEE Symposium on Computers and Communications (ISCC), pp. 418–423. Cited by: §I.
  • [70] C. Wang, B. Hu, S. Chen, D. Li, and B. Liu (2017) A switch migration-based decision-making scheme for balancing load in sdn. IEEE Access 5, pp. 4537–4544. Cited by: §I.
  • [71] X. S. Wang, A. Balasubramanian, A. Krishnamurthy, and D. Wetherall (2014) How speedy is spdy?. In 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14), pp. 387–399. Cited by: §II-C4.
  • [72] whichlinden (2019)(Website) External Links: Link Cited by: §III-C.
  • [73] R. Ying, W. Jia, C. Luo, and Y. Wu (2019) Expedited eviction of invalid flow entries for sdn-based epc networks. In IEEE/CIC International Conference on Communications in China (ICCC), pp. 298–303. Cited by: §I.
  • [74] F. Z. Yousaf, M. Bredel, S. Schaller, and F. Schneider (2017) NFV and sdn—key technology enablers for 5g networks. IEEE Journal on Selected Areas in Communications 35 (11), pp. 2468–2478. Cited by: §I.
  • [75] M. Yu, J. Rexford, M. J. Freedman, and J. Wang (2010) Scalable flow-based networking with difane. ACM SIGCOMM 40 (4), pp. 351–362. Cited by: §VII-A.
  • [76] Y. Yu, M. Xu, and Y. Yang (2017) When quic meets tcp: an experimental study. In 2017 IEEE 36th International Performance Computing and Communications Conference (IPCCC), pp. 1–8. Cited by: §VII-B.
  • [77] Y. Zhang, N. Beheshti, and M. Tatipamula (2011) On resilience of split-architecture networks. In IEEE Global Telecommunications Conference-GLOBECOM, pp. 1–6. Cited by: §VII-A.
  • [78] Y. Zheng, Y. Wang, M. Rui, A. Palade, S. Sheehan, and E. O. Nuallain (2018) Performance evaluation of http/2 over tls+ tcp and http/2 over quic in a mobile network. Journal of Information Sciences and Computing Technologies 7 (1). Cited by: §VII-B.
  • [79] Y. Zhou, K. Zheng, W. Ni, and R. P. Liu (2018) Elastic switch migration for control plane load balancing in sdn. IEEE Access 6, pp. 3909–3919. Cited by: §I.