Q-DATA: Enhanced Traffic Flow Monitoring in Software-Defined Networks applying Q-learning

09/04/2019 ∙ by Trung V. Phan et al. ∙ TU Chemnitz

Software-Defined Networking (SDN) introduces a centralized network control and management by separating the data plane from the control plane which facilitates traffic flow monitoring, security analysis and policy formulation. However, it is challenging to choose a proper degree of traffic flow handling granularity while proactively protecting forwarding devices from getting overloaded. In this paper, we propose a novel traffic flow matching control framework called Q-DATA that applies reinforcement learning in order to enhance the traffic flow monitoring performance in SDN based networks and prevent traffic forwarding performance degradation. We first describe and analyse an SDN-based traffic flow matching control system that applies a reinforcement learning approach based on Q-learning algorithm in order to maximize the traffic flow granularity. It also considers the forwarding performance status of the SDN switches derived from a Support Vector Machine based algorithm. Next, we outline the Q-DATA framework that incorporates the optimal traffic flow matching policy derived from the traffic flow matching control system to efficiently provide the most detailed traffic flow information that other mechanisms require. Our novel approach is realized as a REST SDN application and evaluated in an SDN environment. Through comprehensive experiments, the results show that—compared to the default behavior of common SDN controllers and to our previous DATA mechanism—the new Q-DATA framework yields a remarkable improvement in terms of traffic forwarding performance degradation protection of SDN switches while still providing the most detailed traffic flow information on demand.

I Introduction

Software Defined Networking (SDN) is a new networking concept, which provides enormous capabilities for dynamic network traffic control and management [1]. By detaching the control plane from the data plane, it removes some restrictions of legacy networks. A centralized entity called SDN controller has a global network view that allows for a policy-based traffic management and a faster and more dynamic response to network state and traffic variations [1].

Although there are numerous available mechanisms for traffic analysis, traffic flow management and resilience [2] in SDN based networks, some significant challenges still remain to be addressed [3]. In particular, adapting the granularity of traffic forwarding while protecting forwarding devices from an overflow situation is one critical issue. As most current traffic management approaches rely on the default flow matching strategies of the available SDN controllers, it is difficult to perform traffic forwarding with variable granularity. For example, the Open Network Operating System (ONOS) [4] and OpenDaylight (ODL) [5] SDN controllers, by default, apply Reactive Forwarding based on layer 2 information, which uses the MAC address for flow matching only. Therefore, an incoming packet is matched to a flow entry by just using its layer 2 destination address. However, security and traffic monitoring mechanisms require traffic flow handling based on layer 3 and layer 4 information. A flow matching scheme that uses MAC and IP (and maybe also TCP/UDP) header fields requires a large amount of expensive TCAM memory for storing the respective flow rules [6], and in case the incoming traffic flow pattern is highly dynamic, this might lead to a significant degradation of the traffic forwarding performance in the data plane. In addition, the control plane might be affected by a large number of packet_in messages [7].
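To make the difference in granularity concrete, the following sketch contrasts the two extremes for one and the same flow, expressed as Python dictionaries in the style of ONOS REST flow-rule selectors (the criteria types follow the ONOS REST API; the concrete addresses are made up for illustration):

```python
# MAC Matching Only Scheme (MMOS): one coarse rule per destination MAC,
# aggregating all flows towards that host into a single entry.
mmos_selector = {"criteria": [
    {"type": "ETH_DST", "mac": "00:00:00:00:00:01"},
]}

# Full Matching Scheme (FMS): layer 2-4 fields, i.e., one rule per flow,
# which gives detailed monitoring data but consumes far more TCAM space.
fms_selector = {"criteria": [
    {"type": "ETH_DST", "mac": "00:00:00:00:00:01"},
    {"type": "ETH_TYPE", "ethType": "0x800"},
    {"type": "IPV4_DST", "ip": "10.0.0.1/32"},
    {"type": "IP_PROTO", "protocol": 6},
    {"type": "TCP_DST", "tcpPort": 80},
]}
```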

In this paper, we propose a reinforcement learning based traffic flow matching control framework called Q-DATA, to enhance the performance of traffic flow monitoring in SDN based networks and proactively prevent flow-table overflow in SDN switches. We first describe a traffic flow matching control mechanism that applies a reinforcement learning based approach (Q-learning) for optimizing the traffic flow granularity in the data plane. It also considers the forwarding performance status of SDN switches derived by a Support Vector Machine algorithm. Next, we outline the Q-DATA framework that incorporates the optimal traffic flow matching policy derived from a Q-learning based Traffic Flow Matching Policy Creation module to efficiently provide detailed traffic flow information that other mechanisms, e.g., for traffic engineering, traffic monitoring, and intrusion detection, require. In particular, a Support Vector Machine algorithm is utilized to simultaneously analyse the current network traffic and predict the SDN switch performance degradation. Based on the prediction result, the Q-learning based Traffic Flow Matching Policy Creation module issues an optimal action for changing the traffic flow matching scheme. Note that this proposal partially inherits our previous work [8], which is explained later on; we leverage the Support Vector Machine based performance degradation prediction mechanism and the traffic flow matching scheme change based on destination hosts from [8].

The paper is structured as follows. Section II provides related work and our previous study. Section III presents our approach for maximizing the level of traffic flow granularity based on Q-learning. Section IV explains the Q-DATA framework in detail. Our experiments and results are outlined in Sections V and VI, respectively. Section VII provides a summary and outlines some ideas for future studies.

II Related Work

II-A Existing Methods for Flow Rule Control and Management in Software Defined Networks

Many studies have already addressed issues related to flow rule installation and management in SDN switches, a topic that is of high interest in the SDN research community [9, 10, 11, 12, 13, 14, 15, 16]. There exist several approaches for controlling TCAM utilization with the primary target of flow rule compression or aggregation.

The authors in [9] propose an online routing scheme that constrains flow-table resources in SDN switches. Similarly, in [10] the objective is to maximize the number of flow entries in the data plane considering the limited flow-table space in SDN switches. Nonetheless, these methods do not address the problem of protecting the network infrastructure when a sudden traffic increase occurs.

The studies in [11, 12, 13] deal with TCAM resource management. In particular, [11] outlines a solution for flow-table size reduction based on three criteria: Consistency (all flows must be allotted the same actions after the reduction), Absoluteness (all manually added rules must be executed with the highest priority) and Accuracy (the statistics data must be accurate at all times). An incoming packet classification approach is presented in [12] which exploits the temporal locality of network traffic to predict the flow of incoming packets. If the prediction is correct, the forwarding latency and power consumption can be reduced through avoiding the full flow-table lookup process in the TCAM. Rifai et al. introduce a framework called MINNIE [13] for flow-table compression using wildcard rules. Furthermore, the authors in [14] argue that cheap SRAM memory is sufficient for storing simple packet forwarding rules based on MAC addresses or VLAN IDs, while more complex matching rules (with more matching fields) might require the use of fast but expensive TCAM memory. Considering this, the amount of TCAM memory in SDN switches can be significantly reduced. The solution outlined in [15] applies the concept of flow rule aggregation by restructuring the matching fields. By that, the number of flow rules can be significantly reduced. Another approach for dynamic flow matching is proposed in [16] where a flow matching policy considering the DSCP values of different traffic types is applied.

The mentioned solutions only focus on flow-table size reduction and on enhancing the data plane forwarding performance. Contrary to our solution, they do not consider the possibility of adaptively changing the traffic flow matching scheme depending on the current network state and the level of detail of traffic flow information that other mechanisms, e.g., traffic engineering and monitoring, require.

II-B Destination-aware Adaptive Traffic Flow Rule Aggregation

In our previous work, we proposed a destination-aware adaptive traffic flow rule aggregation solution named DATA [8] that adapts the number of flow entries in SDN switches according to the level of detail of traffic flow information that other mechanisms require, while at the same time preventing SDN switch performance degradation.

We analyzed common SDN flow matching strategies of the ONOS [4] and ODL [5] SDN controllers and their implications. We denoted the MAC Matching Only Scheme as the MMOS strategy and the Full Matching Scheme as the FMS strategy. Using MMOS, the ability to track and monitor network traffic for security or forensic analysis is limited, whereas applying the FMS strategy can result in a significant degradation of the forwarding performance or even in an SDN switch outage in case the maximum number of flow entries is reached. To solve this problem, we applied a 2-dimensional Support Vector Machine (SVM) algorithm [17] to anticipate the switch performance degradation well before it occurs and to trigger the flow matching scheme change in time. After analyzing the SDN switch performance, and if a potential forwarding performance degradation is detected, the Analyzer applies Algorithm 3 (see appendix) to find the destination hosts whose associated flows are most critical for the forwarding performance of the SDN switch, i.e., have the most flow entries. Afterwards the Analyzer cooperates with the built-in forwarding application of the SDN controller to conduct the traffic flow matching scheme changes for these destination hosts (to perform such a change for a destination host, the built-in forwarding application first deletes all flow entries related to the destination host in the switch, then it installs a flow entry with a new match field combination). These actions can be either to change to MMOS, if a sudden increase or an overflow of the flow-table space in an SDN switch is expected, or to return to FMS in case there is no overflow risk (see Algorithm 4 in the appendix).

Our DATA approach outperforms legacy flow rule matching schemes in terms of the number of flow entries in the SDN switches, the average packet_in rate in the SDN control plane and the number of errors and exceptions.

Although the DATA method has many advantages in comparison to legacy approaches, some issues still should be addressed for further improvement, e.g., the limited number of only two flow matching schemes (MMOS and FMS) and the lack of feedback about the impact of the respective flow matching scheme on the network performance. Therefore, in this paper, we propose a novel traffic flow matching control mechanism that can flexibly switch between many different flow matching schemes based on the current network state. The novel scheme provides a much higher level of detail of traffic flow information even in case of high traffic load, while effectively preventing flow-table overflow and degradation of the data plane forwarding performance.

Fig. 1: Reinforcement learning-based model of a traffic flow matching control system in SDN based networks

III Maximizing the traffic flow granularity applying a Q-learning algorithm

Fig. 1 shows a traffic flow matching control mechanism based on reinforcement learning. The traffic flow matching control mechanism is realized as an SDN application and the environment is represented by the devices in the data plane, i.e., the SDN switches. In the following, we have a look at a single SDN switch $i$ representing the environment and investigate the traffic flow matching control mechanism. We assume that a state $s_t$ of the SDN switch at time $t$ is represented by a tuple including the total number of current flow entries ($n_i$) and the number of flow entry changes ($\Delta n_i$) between two consecutive observations; the long-term goal of the control system is to maximize the traffic flow granularity in state $s_t$ of SDN switch $i$ while protecting the switch from forwarding performance degradation.

Regarding the system operation, in a given state $s_t$ the control mechanism initiates an action $a_t$ to change the traffic flow matching scheme in SDN switch $i$. Afterwards, a new state $s_{t+1}$ is observed and a reward $r_t$ is calculated as soon as the change of the traffic flow matching scheme is executed. Then the next action $a_{t+1}$ is applied to the environment in order to achieve the long-term goal. The traffic flow matching control mechanism based on reinforcement learning operates via agent-environment interaction and can be modeled as a Markov Decision Process (MDP) [18]. In the following, the MDP model is outlined in detail.

III-1 State Space

The state space of SDN switch $i$ is defined as follows:

$$\mathcal{S}_i = \left\{ (n_i, \Delta n_i) \;\middle|\; n_i \in \{0, 1, \ldots, N_i^{max}\} \right\} \qquad (1)$$

where $n_i$ is the current total number of flow entries in switch $i$, $\Delta n_i$ is the number of flow entry changes between two consecutive observations and $N_i^{max}$ is the maximum number of flow entries in switch $i$. The state of SDN switch $i$ is defined as the tuple $s_i = (n_i, \Delta n_i)$. In our previous study [8] we already discussed the reasons for choosing the tuple $(n_i, \Delta n_i)$ as the representative for the state of an SDN switch. The reasons can be summarized as follows: the effort for flow entry searching and matching in an SDN switch is proportional to the number of matching fields, and an SDN switch has a maximum capacity ($N_i^{max}$) for storing the flow entries. Accordingly, the change of the number of flow entries indicates the control plane load (wrt. of_mod and of_removed messages sent between SDN controller and switch) affecting both the SDN switch and the controller performance.

III-2 Action Space

$\mathcal{M}$ denotes a list of all feasible match field combinations, e.g., <"matchTcpUdpPorts", "matchIpv4Address", "matchVlanId", …> in case of the ONOS controller [4]. The action space for changing the traffic flow matching scheme in the SDN switch $i$ is defined by

$$\mathcal{A}_i = \{ a_i \mid a_i \in \mathcal{M} \} \qquad (2)$$

where $a_i$ represents a traffic flow matching scheme change related to a destination host (as discussed in Section II-B) in SDN switch $i$.

III-3 Immediate Reward Function

On the one hand, whenever, through executing an action, the total number of current flow entries in the SDN switch $i$ reaches the limit $N_i^{max}$ (which then leads to a performance degradation), the traffic flow matching control system should not get any reward for this action. On the other hand, the more matching fields a flow entry contains, the more detailed information is available for that flow. Hence, we determine the immediate reward $r_i$ as the average number of matching fields of all flow entries in the SDN switch $i$:

$$r_i = \frac{1}{n_i} \sum_{k=1}^{n_i} m_k \qquad (3)$$

where $n_i$ is the current total number of flow entries in the switch $i$ and $m_k$ is an integer number representing the number of enabled match fields in flow entry $k$.
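A minimal sketch of this reward computation, assuming the match-field counts $m_k$ have already been extracted from the flow-table statistics (function and variable names are our own):

```python
def immediate_reward(match_field_counts, n_max):
    """Immediate reward of Equation 3: the average number of enabled match
    fields over all flow entries of switch i. No reward is granted once the
    flow-table limit n_max is reached (the overflow case described above)."""
    n_i = len(match_field_counts)
    if n_i == 0 or n_i >= n_max:
        return 0.0
    return sum(match_field_counts) / n_i
```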

III-4 Optimization Formulation

We define an optimization problem to acquire the optimal policy applicable in state $s$, denoted by $\pi^*(s)$, that maximizes the long-term reward, i.e., the traffic flow granularity in the SDN switch $i$ while protecting it from forwarding performance degradation. In particular, in state $s$, the agent issues an optimal action to get close to or reach the long-term reward. The MDP under consideration is finite, since both $n_i$ and $\Delta n_i$ are bounded by the flow-table capacity $N_i^{max}$. The optimization problem is formulated as follows:

$$\max_{\pi} \; \mathcal{R}_i(\pi) = \sum_{t=0}^{\infty} r_i^t(\pi), \quad \text{subject to} \quad \mathrm{SVM}(n_i, \Delta n_i) = \text{"Good"} \qquad (4)$$

where $\mathcal{R}_i(\pi)$ is the cumulative reward for SDN switch $i$ under a policy $\pi$, $r_i^t(\pi)$ is the immediate reward associated with policy $\pi$ for switch $i$ at iteration $t$, and $\mathrm{SVM}(n_i, \Delta n_i)$ is the result of the Support Vector Machine algorithm predicting the forwarding performance of the SDN switch. A "Good" result of the SVM algorithm means that the switch can handle the current number of flow entries without any forwarding performance problems.

In order to solve the optimization problem, we apply the Q-learning algorithm [18] which uses a $Q$-table to represent all possible state-action pairs within the environment as shown in Fig. 1. The Q-learning agent can learn from its own decisions at each iteration, and the algorithm will converge to the optimal policy after a certain number of iterations [18]. The expected return of state $s$ under policy $\pi$ is denoted as $V^{\pi}(s)$. It is expressed as follows:

$$V^{\pi}(s) = \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^t r_t \;\middle|\; s_0 = s, \pi \right] \qquad (5)$$

where $\gamma \in [0, 1)$ is a discount factor that indicates the importance of the long-term reward [18]. The optimal policy $\pi^*(s)$ in state $s$ represents an action that yields the maximum value of the expected return $V^{\pi}(s)$:

$$\pi^*(s) = \arg\max_{\pi} V^{\pi}(s) \qquad (6)$$

Thus, for all state-action $(s, a)$ pairs, the optimal $Q$-functions are

$$Q^*(s, a) = r(s, a) + \gamma \max_{a'} Q^*(s', a') \qquad (7)$$

Hence $V^*(s)$ can be expressed as $V^*(s) = \max_{a} Q^*(s, a)$. By conducting different actions to the environment, the optimal $Q$-function value, i.e., $Q^*(s, a)$, for all state-action $(s, a)$ pairs is figured out. In particular, the $Q$-function is updated at each iteration as follows:

$$Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + \alpha_t \left[ r_t + \gamma \max_{a} Q_t(s_{t+1}, a) - Q_t(s_t, a_t) \right] \qquad (8)$$

where $s_t \in \mathcal{S}_i$ and $a_t \in \mathcal{A}_i$. $Q_t(s_t, a_t)$ is the $Q$-value for a state-action pair $(s_t, a_t)$, $r_t$ is the immediate reward for the SDN switch at iteration $t$, $\gamma \in [0, 1]$ is the discount factor and $\alpha_t \in [0, 1]$ is the learning rate. Moreover, to mitigate the exploration-exploitation dilemma that has a direct impact on the convergence rate of any learning algorithm, the $\epsilon$-greedy algorithm [18] is applied. Instead of always taking the best action according to the network state, the Q-learning agent will take some random actions, where the probability of a random decision is determined by the value of epsilon, $\epsilon$. In its learning phase, the Q-learning agent first of all arbitrarily initializes the $Q$-table for all state-action pairs and afterwards updates it by using Equation 8. Accordingly, the agent acquires a trained or converged $Q$-table.

In summary, the Q-learning agent generates the optimal policy $\pi^*(s)$ for a state $s$, representing the action that needs to be taken to maximize the value of the $Q$-function, i.e., $\pi^*(s) = \arg\max_{a} Q^*(s, a)$. Algorithm 1 provides implementation details of the Q-learning algorithm.

1:  Inputs: $\mathcal{S}_i$, $\mathcal{A}_i$; for each state-action pair $(s, a)$, $s \in \mathcal{S}_i$, $a \in \mathcal{A}_i$, initialize a $Q$-table entry arbitrarily; initialize values of $\gamma$, $\alpha_t$ and $\epsilon$, respectively.
2:  loop
3:     Current state $s_t$.
4:     Execute action $a_t$ according to an exploratory policy ($\epsilon$-greedy).
5:     Obtain a new state $s_{t+1}$ and an immediate reward $r_t$.
6:     Update the $Q$-table entry for $(s_t, a_t)$ using Equation 8.
7:     Update $s_t \leftarrow s_{t+1}$.
8:  end loop
9:  Outputs: converged $Q$-table, i.e., the optimal policy $\pi^*$.
Algorithm 1 Optimal traffic flow matching policy creation with the Q-learning algorithm
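A compact Python sketch of Algorithm 1 under the stated assumptions; the small action list stands in for the match field combinations $\mathcal{M}$, the parameter values are illustrative, and `apply_action`, `observe` and `reward_of` are placeholders for the interaction with the switch:

```python
import random
from collections import defaultdict

ACTIONS = ["MMOS", "L2_L3", "FMS"]       # illustrative stand-in for M
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.8    # assumed gamma, alpha_t, epsilon

Q = defaultdict(float)                   # Q-table: one value per (state, action)

def epsilon_greedy(state):
    """Exploratory policy: random action with probability EPSILON,
    otherwise the action with the highest current Q-value."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_learning_step(state, apply_action, observe, reward_of):
    """One loop iteration of Algorithm 1: act, observe the new state tuple
    (n_i, delta_n_i), collect the reward, update Q via Equation 8."""
    action = epsilon_greedy(state)
    apply_action(action)                          # change the matching scheme
    next_state = observe()                        # new (n_i, delta_n_i)
    r = reward_of(next_state)                     # Equation 3
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    return next_state
```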
Fig. 2: Q-DATA framework architecture

IV Q-DATA Framework

In this section, the design and operation of the Q-DATA framework for enhanced traffic flow monitoring and proactive prevention of forwarding performance degradation in SDN based networks is outlined.

IV-A Q-DATA Framework Architecture

Fig. 2 shows the Q-DATA framework architecture consisting of the Built-in Forwarding application located in the control plane and the REST-based Q-DATA application residing in the SDN application plane.

IV-A1 Built-in forwarding application

Most of the well-known SDN controllers [4, 5] provide basic forwarding functionality by running a built-in forwarding application that creates flow rules which are then downloaded to the SDN switches. We propose to add a REST API interface to the built-in forwarding application to securely communicate with the Q-DATA application. Initially, the Q-DATA App instructs the built-in forwarding application to apply the Full Matching Scheme (FMS) strategy.

IV-A2 Q-DATA App

In Q-DATA, a Statistics Collector periodically gets raw information about all traffic flows traversing the SDN switches from the SDN controller via the REST APIs [4, 5]. The collected statistical data of the SDN switch $i$ is forwarded to a Statistics Extractor and Distributor for extracting and distributing flow statistics information to the other modules, i.e., the SVM based Performance Degradation Prediction module, the MAC Matching Only Scheme Control module, the Overflow Control module and the Q-learning based Traffic Flow Matching Policy Creation module. The SVM based Performance Degradation Prediction module is designed to anticipate the performance degradation of the SDN switch well before it occurs [8] and to provide the prediction result to the Q-learning based Traffic Flow Matching Policy Creation module and the MAC Matching Only Scheme Control module. The Overflow Control module acts as an immediate reaction mechanism against a flow-table overflow situation, e.g., in case the network is under a Denial-of-Service attack. The MAC Matching Only Scheme Control module monitors and checks the conditions for a traffic flow matching scheme change back to FMS in the SDN switch $i$. The Q-learning based Traffic Flow Matching Policy Creation module relies, as discussed above, on a converged $Q$-table to choose the most appropriate traffic flow matching scheme for a given state of the SDN switch $i$. Finally, the Traffic Flow Matching Policy Formulation module formulates the policies received from the Overflow Control, the MAC Matching Only Scheme Control and the Q-learning based Traffic Flow Matching Policy Creation modules and sends them to the Built-in Forwarding application for implementation in the SDN switch $i$.
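As an illustration of the Statistics Collector and the Statistics Extractor and Distributor, here is a minimal polling sketch against the ONOS REST API; the controller address and the ONOS default credentials are assumptions of this sketch, not part of the framework specification:

```python
import requests

ONOS_URL = "http://127.0.0.1:8181/onos/v1"   # assumed controller address
AUTH = ("onos", "rocks")                      # ONOS default REST credentials
_last_n = {}                                  # previous n_i per switch

def observe_state(device_id):
    """Fetch the flow entries of one switch and derive the state tuple
    (n_i, delta_n_i) relative to the previous observation period."""
    resp = requests.get(f"{ONOS_URL}/flows/{device_id}", auth=AUTH)
    resp.raise_for_status()
    n_i = len(resp.json().get("flows", []))
    delta_n_i = n_i - _last_n.get(device_id, n_i)
    _last_n[device_id] = n_i
    return n_i, delta_n_i
```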

Fig. 3: Operational workflow of the Q-DATA framework

IV-B Operational Workflow

Initially, the Statistics Collector sends a request to the SDN controller to ask for network topology information. Then, it launches a monitor thread for each SDN switch (see Fig. 2). In regular time intervals (observation period), the monitor thread gathers raw traffic flow statistics from the SDN switch and passes them to the Statistics Extractor and Distributor, where the tuple $(n_i, \Delta n_i)$, i.e., the total number of current flows and the flow number changes, is determined. Afterwards, this data is forwarded to the Overflow Control module, the SVM based Performance Degradation Prediction module and the MAC Matching Only Scheme Control module.

Fig. 3 shows the detailed operational workflow of the Q-DATA framework. First, the Statistics Extractor and Distributor module compares $n_i$ to $N_i^{max}$; if the current total number of flow entries in a switch reaches its upper limit, this is denoted as an overflow situation. In this case, the Overflow Control module has to find an appropriate traffic flow matching change policy for some destination hosts (derived from Algorithm 3) which have the most flow entries in switch $i$, so that the overflow situation is mitigated. We suppose that a source-destination node pair (having a unique pair of IP addresses) that transfers traffic through switch $i$ puts $\bar{f}$ flow entries on average in the switch (e.g., either request or response flows). $n_{pair}$ denotes the total number of unique IP address pairs in the flow-table of the switch $i$. In case of a non-saturation attack, the number of hosts sending traffic through the SDN switch (represented by their IP addresses; initially the Built-in Forwarding application applies the FMS scheme, hence IP address information is available before an overflow problem appears in the switch $i$) is usually much less than the maximum number of flow entries $N_i^{max}$. Thus, if $n_{pair} \cdot \bar{f} \geq N_i^{max}$, there are some destination hosts serving a lot of incoming requests from other hosts or being under saturation attacks (e.g., Denial-of-Service attacks). Hence, it is reasonable to match incoming traffic flows related to these destination hosts using only MAC address information. This avoids a sudden overflow situation of switch $i$. Otherwise, the Overflow Control module hands over to the Q-learning based Traffic Flow Matching Policy Creation module to issue an optimal traffic flow matching policy for the destination hosts via Algorithm 2.
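The overflow decision described above can be summarized in a few lines; this is a sketch of our reading of the rule, with illustrative names:

```python
def overflow_reaction(n_pairs, f_avg, n_max):
    """Decision of the Overflow Control module once switch i has hit its
    flow-table limit n_max. n_pairs is the number of unique IP address
    pairs, f_avg the average number of flow entries per pair."""
    if n_pairs * f_avg >= n_max:
        # A few destination hosts (or a saturation attack) dominate the
        # table: aggregate their flows with MAC-only matching (Algorithm 3).
        return "apply_mmos_to_top_destination_hosts"
    # Otherwise hand over to the Q-learning based policy creation module.
    return "delegate_to_algorithm_2"
```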

In case the current total number of flow entries is less than the switch's upper limit $N_i^{max}$, the SVM based Performance Degradation Prediction module checks for a potential performance degradation of the SDN switch based on the tuple $(n_i, \Delta n_i)$, and forwards its prediction result to the MAC Matching Only Scheme Control module and the Q-learning based Traffic Flow Matching Policy Creation module. If the switch state is predicted as Good, the MAC Matching Only Scheme Control module checks whether there exists an MMOS flow matching policy for any of the destination hosts. If an MMOS flow matching policy is found, then Algorithm 4 is applied to check the conditions for a change back to the FMS strategy. In case a possible performance degradation is detected for switch $i$, and if the total number of flow entries is increasing ($\Delta n_i > 0$), then the Q-learning based Traffic Flow Matching Policy Creation module executes Algorithm 2 to apply the most appropriate traffic flow matching policy for the destination hosts (derived from Algorithm 3) in the switch $i$.

  Input: A tuple $(n_i, \Delta n_i)$ at iteration $t$; the converged $Q$-table; the set of destination hosts derived from Algorithm 3.
  begin
   Utilize the converged $Q$-table from Algorithm 1.
   $s_t \leftarrow (n_i, \Delta n_i)$.
   Derive the optimal action $a^* = \pi^*(s_t)$ from the $Q$-table.
   Apply $a^*$ for the destination hosts derived from Algorithm 3.
   Update $s_{t+1}$ {see Fig. 3}.
   Get reward $r_t$ using Equation 3.
   Update the $Q$-table using Equation 8.
  end
Algorithm 2 Q-learning based traffic flow matching policy creation for the SDN switch $i$

V Experiments

V-A Example SDN Network Scenario

In order to evaluate the performance of the Q-DATA framework, we leverage the MaxiNet framework [19] to emulate a simple SDN based network consisting of 3 Web servers (S1-S3, using Apache Web server images) and 5 hosts (H1-H5), which are all connected to a single SDN switch (implemented as Open vSwitch). The emulated SDN network runs within one Linux machine and is controlled by a remote ONOS SDN controller running on another physical machine. For ease of deployment, we place both the Q-DATA App and the ONOS SDN controller on the same Linux machine.

V-B Training the Q-learning and SVM Algorithms

Initially, for training the Q-learning agent, we use the Hping3 tool [20] installed on the hosts (H1-H5) to randomly generate traffic between the hosts and the Web servers. The Q-learning agent depends on the collected data for making decisions about changing the traffic flow matching scheme, and for updating its $Q$-table accordingly. In particular, we set the $\epsilon$ value to 0.8 in order to have 80% of random actions over a set of 9 match field combinations, and the state observation time is set to 10.0 seconds. For training the SVM algorithm, we apply the same traffic generation strategy as for the Q-learning agent training phase and initially apply the FMS scheme. Afterwards, we monitor any errors or exceptions indicating that the switch cannot handle new flow requests, and set $y = -1$ as a label for the associated tuple $(n_i, \Delta n_i)$. Otherwise, we set $y = +1$. These labelled samples are then used for training the SVM algorithm.
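A sketch of the corresponding SVM training step, assuming the labelled tuples have been gathered as described (scikit-learn is used for illustration and the sample values are made up):

```python
from sklearn import svm

# Each sample is a state tuple (n_i, delta_n_i) observed during training;
# label -1 marks windows with switch errors/exceptions, +1 healthy windows.
X = [(2950, 420), (2990, 510), (1200, 35), (800, -20)]   # illustrative values
y = [-1, -1, +1, +1]

clf = svm.SVC(kernel="linear")    # 2-dimensional SVM as in [8]
clf.fit(X, y)

# The Performance Degradation Prediction module then queries the model:
print(clf.predict([(2800, 300)]))  # -1 = degradation expected, +1 = "Good"
```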

We observe that the switch starts getting overflowed or cannot handle new flow rules if the current total number of flows is around 3000 ($N_i^{max} = 3000$) [6, 8]. Setting the idle_timeout value (after which idle flow entries are removed) to 10 seconds, the safety threshold for the packet rate the switch can handle is 300 packets per second, assuming that each packet belongs to a different traffic flow rule (worst case assumption). Therefore, for traffic generation, we apply three levels of the packet rate $\lambda$: low load ($\lambda = 100$), medium load ($\lambda = 200$) and high load ($\lambda = 300$).

V-C Experiment Setup

We conduct several experiments with different flow matching strategies: MMOS only, FMS only, the novel Q-DATA framework (with the optimal policy, $\epsilon = 0$, and two exploratory settings $\epsilon_1, \epsilon_2 > 0$) and the DATA scheme [8]. The built-in forwarding application of the ONOS SDN controller applies Reactive Forwarding.

In order to show the performance enhancement in traffic flow monitoring in SDN based networks with the Q-DATA framework, we implement a SOM-based IDS application (Self-Organizing Map algorithm [21]) on top of the ONOS controller to detect abnormal traffic. We consider some common attacks which can cause an overflow of the SDN switch, comprising TCP SYN flood [22], Port scanning [22] and Low and Slow Denial-of-Service [6]. The attack traffic stems from the hosts and is directed to the Web servers in our setup.

For the performance analysis, traffic from the 5 hosts towards the 3 servers is generated randomly with the three different load levels ($\lambda = 100$, $\lambda = 200$, $\lambda = 300$). During the experiments we trace the total number of flow entries in the SDN switch, the average number of packet_in messages per second towards the ONOS controller, errors and exceptions in the ONOS controller, the frequency of traffic flow matching policy changes, the CPU utilization of the controller machine and the attack detection performance of the SOM-based IDS.

VI Results

VI-A Network-related Performance Results

VI-A1 Total number of traffic flow entries in the SDN switch

Fig. 4: Total number of flow entries in the SDN switch for three different traffic loads: (a) Low Load $\lambda = 100$, (b) Medium Load $\lambda = 200$ and (c) High Load $\lambda = 300$

As can be seen in Fig. 4, the MMOS scheme accounts for a very low number of traffic flow entries in all scenarios. For the low and medium load cases, FMS, Q-DATA (all three $\epsilon$ settings) and DATA are supposed to have the same number of traffic flow entries in the switch, since the total number of flow entries always stays below the critical level ($N_i^{max}$). Note that there are some minor variations for Q-DATA with $\epsilon_1$ and $\epsilon_2$, because the Q-learning based Traffic Flow Matching Policy Creation module is allowed to take random actions, which leads to a Good state of the SDN switch with a high immediate reward value (average number of match fields of a flow entry) and to no further flow entry changes in the remaining time.

In the high load scenario, the FMS scheme leads to errors and exceptions after a short period of time, causing a massive reduction in the number of flow entries because the SDN switch and the ONOS controller suspend their operation. In case of DATA, after reaching the switch's flow-table entry upper limit ($N_i^{max}$), the flow matching scheme is changed to MMOS for some destination hosts, leading to a very small number of flow entries in the remaining time. In contrast, the Q-DATA framework maintains a significant number of flow rules by applying appropriate traffic flow matching policies, e.g., a layer 2 & layer 3 matching scheme, which provides a higher traffic flow matching granularity and avoids the performance degradation of the switch. Besides, the Q-learning based Traffic Flow Matching Policy Creation module takes future states, i.e., $s_{t+1}$, into account and tries to maximize the traffic flow matching granularity by changing to other schemes which provide more traffic flow information details. Therefore we observe some changes in the number of flow entries during our experiments.

Fig. 5: (a) Average packet_in rate (pkts/s) to the ONOS controller for different traffic flow matching schemes and traffic loads, (b) Total number of traffic flow matching scheme changes (during the 500 seconds experiment duration) for different loads, (c) Average CPU utilization of the controller machine (during the 500 seconds experiment duration)

VI-A2 Average packet_in message rate to the ONOS controller

Fig. 5 (a) illustrates the average number of packet_in messages per second arriving at the Built-in Forwarding application. Contrary to the FMS and DATA schemes, for all traffic loads, the Q-DATA framework with the optimal traffic flow matching policy ($\epsilon = 0$) allows the ONOS controller to process an acceptable packet_in rate. This significantly reduces the workload of the Built-in Forwarding application because of fewer new flow installation queries. The results for the Q-DATA scheme with $\epsilon_1$ and $\epsilon_2$ are expected to be better for a longer experiment duration.

VI-A3 Errors and exceptions

Another key criterion for the performance evaluation of the Q-DATA solution is the time until an error or exception (observed on the ONOS terminal) occurs due to a degraded SDN switch. Our measurements show that the FMS scheme causes disconnected-channel errors and FlowRuleManager exceptions in the ONOS controller within 7 to 10 seconds after the high traffic load generation starts. For the other traffic load cases, no errors and exceptions are observed.

VI-A4 Frequency of changing the flow matching policy

We record the total number of traffic flow matching scheme changes of the proposed Q-DATA framework and the DATA scheme. As shown in Fig. 5 (b), the DATA scheme tries to keep the SDN switch in a Good state as long as possible; therefore no changes in the traffic flow matching scheme occur in the low and medium load scenarios, but some changes happen in the high load case (i.e., a change from FMS to MMOS). In contrast, Q-DATA performs some changes depending on newly incoming traffic flows in the switch. In particular, in the high load case, Q-DATA with the optimal policy performs several flow matching scheme changes, e.g., between layer 2 & layer 3 matching and FMS, to provide more traffic flow information details while guaranteeing that the SDN switch forwarding performance does not degrade.

VI-A5 Computational overhead

Fig. 5 (c) shows measurements of the CPU utilization of the controller machine. It can be seen that the three Q-DATA scheme variants consume more CPU resources for all traffic loads. This is due to the fact that the Q-DATA App actively monitors and analyzes the network traffic, especially in the case of high traffic load, and tries to maximize the traffic flow matching granularity while avoiding any performance degradation of the switch. Nonetheless, this computational overhead is acceptable considering the benefits of the Q-DATA scheme.

TCP SYN flood attack detection performance (%)
Load    | MMOS | FMS   | Q-DATA (ε=0) | Q-DATA (ε1) | Q-DATA (ε2) | DATA
λ = 100 | 0.0  | 98.02 | 97.05        | 96.04       | 96.53       | 97.52
λ = 200 | 0.0  | 97.64 | 96.53        | 97.37       | 96.06       | 98.05
λ = 300 | 0.0  | 0.0   | 85.20        | 81.20       | 82.00       | 0.0

Port scanning attack detection performance (%)
Load    | MMOS | FMS   | Q-DATA (ε=0) | Q-DATA (ε1) | Q-DATA (ε2) | DATA
λ = 100 | 0.0  | 96.53 | 97.45        | 96.00       | 96.33       | 97.30
λ = 200 | 0.0  | 98.22 | 96.43        | 97.01       | 96.62       | 97.22
λ = 300 | 0.0  | 0.0   | 84.34        | 84.10       | 82.56       | 0.0

Low and Slow DoS attack detection performance (%)
Load    | MMOS | FMS   | Q-DATA (ε=0) | Q-DATA (ε1) | Q-DATA (ε2) | DATA
λ = 100 | 0.0  | 96.32 | 96.25        | 95.70       | 97.12       | 95.78
λ = 200 | 0.0  | 96.34 | 97.36        | 96.54       | 96.21       | 95.92
λ = 300 | 0.0  | 0.0   | 88.24        | 85.32       | 85.45       | 0.0

TABLE I: Anomaly detection performance of the different traffic flow matching solutions for the three traffic loads

VI-B Anomaly Detection Performance Results

In order to show the enhancement of the traffic flow monitoring capability provided by the Q-DATA framework, we evaluate the anomaly detection performance of the SOM-based IDS application for three attack types: TCP SYN flood (attackers send TCP segments with different spoofed source IP addresses and TCP ports to the Web servers as fast as possible, leading to a large number of new flow entries in the SDN switch within a short time period), Port scanning (attackers send as many TCP segments with different destination ports as possible to the Web servers and wait for response packets) and Low and Slow Denial-of-Service (attackers periodically send requests as slowly as possible with little resources and try to keep all installed flow entries in the SDN switch alive as long as possible, which renders the victim inaccessible). For the evaluation we apply the following fitness function:

$$F = w_1 \cdot DR + w_2 \cdot ACC + w_3 \cdot (1 - FAR) \qquad (9)$$

where $DR$, $ACC$ and $FAR$ represent the Detection rate, Accuracy and False alarm rate, and $w_1$, $w_2$ and $w_3$ are weight values which are equally set to 1/3 in our evaluation.
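In code form, our reading of Equation 9 (rates expressed as fractions in [0, 1]; the false alarm rate enters inverted, since a lower FAR is better):

```python
def fitness(dr, acc, far, w1=1/3, w2=1/3, w3=1/3):
    """Fitness value of Equation 9 with equal weights: detection rate,
    accuracy and (inverted) false alarm rate of the SOM-based IDS."""
    return w1 * dr + w2 * acc + w3 * (1.0 - far)
```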

As shown in Table I, no alert is raised in case of the MMOS scheme for all attack types and traffic loads, because the traffic towards the Web servers is aggregated into a few coarse flow entries in the switch, which makes it impossible for the IDS to detect any attacks. In the low and medium load scenarios, for the FMS, Q-DATA (all three $\epsilon$ settings) and DATA schemes, all three attacks are detected by the IDS with similar levels of attack detection performance.

In the high traffic load case, for FMS, the operation of the SDN controller and the switch is suspended. This makes the IDS application unable to gather traffic information from the SDN controller and to detect the attacks. For the DATA scheme the SDN switch stays operational; however, traffic flows targeting the servers are aggregated into some MMOS flows in the SDN switch. Hence there is no chance to recognize malicious traffic flows towards the Web servers for all three attack types (nevertheless, for a larger scale network that comprises several switches, like the enterprise network in our previous study [8], the SOM-based IDS is expected to achieve a good attack detection performance, as some switches carry attack traffic flows and still stay operational, i.e., are in a Good forwarding performance state). In contrast, Q-DATA, by frequently changing between different flow matching schemes, provides more detailed traffic flow information and enables the IDS application to recognize the attack presence. However, because of the variation of statistics information caused by the traffic flow matching scheme changes in the switch, the attack detection performance in case of high traffic load is lower than for low and medium traffic loads.

VII Conclusion

In this paper, we present a traffic flow matching control framework based on reinforcement learning called Q-DATA, which improves traffic flow monitoring in SDN based networks and proactively prevents performance degradation of SDN switches. We conduct a comprehensive performance analysis of the Q-DATA framework. Our results show that—compared to the default behavior of common SDN controllers and to our previous DATA scheme—the new Q-DATA framework, by always applying the optimal traffic flow matching policy, yields remarkable performance benefits. In our future work, we intend to focus on an optimized integration of traffic flow matching control and traffic anomaly detection.

VIII Acknowledgments

This work has been performed in the framework of the Celtic-Plus project SENDATE Secure-DCI, funded by the German BMBF (ID 16KIS0481).

Appendix A Algorithms from our previous work [8]

  Input: $H = \{(h_j, f_j)\}$: set of destination hosts and the respective number of flow entries associated with these hosts in switch $i$; $n_i$: total number of current flow entries in switch $i$; $j = 1$: first index. Output: $D$: set of destination hosts.
  begin
  Sort $H$ in descending order of the number of current flow entries (from highest to lowest).
  loop
     $D$.append($H[j]$)
     $n_i = n_i + 1$ {One MMOS flow entry is installed in switch $i$}.
     $n_i = n_i - f_j$ {Delete $f_j$ flow entries in switch $i$}.
     $result = \mathrm{SVM}(n_i, \Delta n_i)$
     if $result = +1$ then
        break {Switch can handle $n_i$ entries}.
     else
        $j = j + 1$ {Switch cannot handle $n_i$ entries}.
     end if
  end loop
  return $D$.
Algorithm 3 Identification of the destination hosts whose flows are most critical to the performance of SDN switch $i$
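A runnable Python rendering of Algorithm 3 under the notation above (the SVM predictor is passed in as a callable; the names are our own):

```python
def critical_destination_hosts(host_flows, n_i, delta_n_i, svm_predict):
    """Greedy variant of Algorithm 3: aggregate the destination hosts with
    the most flow entries to MMOS until the SVM predicts that switch i can
    handle the remaining entries (+1 = "Good")."""
    selected = []
    # host_flows: list of (host, flow_count) pairs for switch i.
    for host, f_host in sorted(host_flows, key=lambda hf: hf[1], reverse=True):
        selected.append(host)
        n_i = n_i + 1 - f_host   # +1 new MMOS entry, -f_host deleted entries
        if svm_predict((n_i, delta_n_i)) == +1:
            break                # switch can handle n_i entries
    return selected
```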
  Input: $P = \{(h_j, p_j)\}$: set of destination hosts and the respective packet rate of MMOS flows associated with these hosts in switch $i$; $(n_i, N_i^{max})$: total number of current flow entries and maximum number of flow entries in switch $i$; $n_{add}$: number of flow entries that might be added in switch $i$. Output: $D$: set of destination hosts.
  begin
  for $j = 1$; $j \leq |P|$; $j$++ do
     $n_{add} = p_j \cdot T_{idle}$ {Worst case assumption: each packet of the MMOS flow installs a new entry in switch $i$ during the idle_timeout interval $T_{idle}$}.
     if ($n_i + n_{add}$) $< N_i^{max}$ then
        $D$.append($P[j]$).
     else
        continue
     end if
  end for
  return $D$.
Algorithm 4 Identification of the MMOS flows/destination hosts related to SDN switch $i$ for which changing back to FMS is feasible

References

  • [1] B. A. A. Nunes, M. Mendonca, X. N. Nguyen, K. Obraczka, and T. Turletti, “A survey of software-defined networking: Past, present, and future of programmable networks,” IEEE Communications Surveys Tutorials, vol. 16, pp. 1617–1634, Third 2014.
  • [2] Z. Shu, J. Wan, J. Lin, S. Wang, D. Li, S. Rho, and C. Yang, “Traffic engineering in software-defined networking: Measurement and management,” IEEE Access, vol. 4, pp. 3246–3256, 2016.
  • [3] I. F. Akyildiz, A. Lee, P. Wang, M. Luo, and W. Chou, “Research challenges for traffic engineering in software defined networks,” IEEE Network, vol. 30, pp. 52–58, May 2016.
  • [4] ONOS, “Description of the ONOS controller.” www.onosproject.org, May 2019.
  • [5] OpenDaylight, “Description of the OpenDaylight controller.” www.opendaylight.org, May 2019.
  • [6] T. A. Pascoal, Y. G. Dantas, I. E. Fonseca, and V. Nigam, “Slow tcam exhaustion ddos attack,” in IFIP International Conference on ICT Systems Security and Privacy Protection, pp. 17–31, Springer, 2017.
  • [7] H. Wang, L. Xu, and G. Gu, “Floodguard: A dos attack prevention extension in software-defined networks,” in 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, pp. 239–250, June 2015.
  • [8] T. V. Phan, M. Hajizadeh, N. Tuan Khai, and T. Bauschert, “Destination-aware adaptive traffic flow rule aggregation in software-defined networks,” in 2019 International Conference on Networked Systems (NetSys), March 2019.
  • [9] Z. Guo, R. Liu, Y. Xu, A. Gushchin, A. Walid, and H. J. Chao, “Star: Preventing flow-table overflow in software-defined networks,” Computer Networks, vol. 125, pp. 15 – 25, 2017. Softwarization and Caching in NGN.
  • [10] X. Jia, Q. Li, Y. Jiang, Z. Guo, and J. Sun, “A low overhead flow-holding algorithm in software-defined networks,” Computer Networks, vol. 124, pp. 170 – 180, 2017.
  • [11] B. Leng, L. Huang, X. Wang, H. Xu, and Y. Zhang, “A mechanism for reducing flow tables in software defined network,” in 2015 IEEE International Conference on Communications (ICC), pp. 5302–5307, June 2015.
  • [12] P. T. Congdon, P. Mohapatra, M. Farrens, and V. Akella, “Simultaneously reducing latency and power consumption in openflow switches,” IEEE/ACM Transactions on Networking, vol. 22, pp. 1007–1020, June 2014.
  • [13] M. Rifai, N. Huin, C. Caillouet, F. Giroire, J. Moulierac, D. L. Pacheco, and G. Urvoy-Keller, “Minnie: An sdn world with few compressed forwarding rules,” Computer Networks, vol. 121, pp. 185 – 207, 2017.
  • [14] B. Stephens, A. Cox, W. Felter, C. Dixon, and J. Carter, “Past: Scalable ethernet for data centers,” in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies, CoNEXT ’12, (New York, NY, USA), pp. 49–60, ACM, 2012.
  • [15] S. Luo, H. Yu, and L. M. Li, “Fast incremental flow table aggregation in sdn,” in 2014 23rd International Conference on Computer Communication and Networks (ICCCN), pp. 1–8, Aug 2014.
  • [16] A. Mimidis, C. Caba, and J. Soler, “Dynamic aggregation of traffic flows in sdn: Applied to backhaul networks,” in 2016 IEEE NetSoft Conference and Workshops (NetSoft), pp. 136–140, June 2016.
  • [17] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines: And Other Kernel-based Learning Methods. New York, NY, USA: Cambridge University Press, 2000.
  • [18] R. S. Sutton, A. G. Barto, et al., Introduction to reinforcement learning, vol. 135. MIT press Cambridge, 1998.
  • [19] P. Wette, M. Draxler, and A. Schwabe, “Maxinet: Distributed emulation of software-defined networks,” in 2014 IFIP Networking Conference, pp. 1–9, June 2014.
  • [20] hping3, “Description of the hping3 tool.” www.hping.org, May 2019.
  • [21] R. Braga, E. Mota, and A. Passito, “Lightweight ddos flooding attack detection using nox/openflow,” in IEEE Local Computer Network Conference, pp. 408–415, Oct 2010.
  • [22] Q. Yan, F. R. Yu, Q. Gong, and J. Li, “Software-defined networking (sdn) and distributed denial of service (ddos) attacks in cloud computing environments: A survey, some research issues, and challenges,” IEEE Communications Surveys Tutorials, vol. 18, pp. 602–622, Firstquarter 2016.