AccFlow: Defending Against the Low-Rate TCP DoS Attack in Wireless Sensor Networks

Yuan Cao, et al., Shenzhen University · March 15, 2019

Because of the open nature of Wireless Sensor Networks (WSNs), Denial of Service (DoS) attacks have become one of the most serious threats to the stability of resource-constrained sensor nodes. In this paper, we develop AccFlow, an incrementally deployable, Software-Defined Networking (SDN)-based protocol that serves as a countermeasure against the low-rate TCP DoS attack. The main idea of AccFlow is to make the attacking flows accountable for the congestion by dropping their packets according to their loss rates: the larger their loss rates, the more aggressively AccFlow drops their packets. Through extensive simulations, we demonstrate that AccFlow can effectively defend against the low-rate TCP DoS attack even if attackers vary their strategies by attacking at different scales and data rates. Furthermore, although AccFlow is designed to counter the low-rate TCP DoS attack, we demonstrate that it can also effectively defend against general DoS attacks that do not rely on the TCP retransmission timeout mechanism but instead cause denial of service to legitimate users by continuously exhausting network resources. Finally, we consider the scalability of AccFlow and its deployment in real networks.


I Introduction

A Wireless Sensor Network (WSN) is a powerful network of widely distributed sensing, computing, storage, and communication nodes [1, 2]. While WSNs promise to bring immense value to our daily lives, they also open the door to many challenges. Their open nature, low computation capacity, and limited battery power often make WSNs susceptible to many threats [3]. One of the emerging attacks is the “Low-rate TCP DoS Attack”, in which attackers launch a DoS attack by exploiting the TCP retransmission timeout mechanism [4]. To launch such an attack, the attackers set up periodic on-off “square-wave” traffic whose peak transmission rate is large enough to exhaust the network bandwidth. When attacked, the legitimate TCP flows experience severe packet losses and enter retransmission timeouts. If the period of the attacking flow is close to the retransmission timeout, the legitimate TCP flows will face another traffic peak just as they try to recover from the timeouts. As a result, they again suffer severe packet losses and are forced into even longer retransmission timeouts. The cycle repeats and the legitimate TCP flows are throttled to nearly zero throughput. Compared to general DoS attacks, in which malicious users cause denial of service by sending continuous high-rate flows rather than relying on the TCP retransmission timeout mechanism, the time-averaged bandwidth usage of the low-rate TCP DoS attacking flow is small, often much less than the total available bandwidth. This is why we call such an attack the low-rate TCP DoS attack.

Another interesting characteristic of the low-rate TCP DoS attacking flow is that its periodic traffic pattern is similar to that of legitimate periodic TCP traffic, such as video traffic that adopts the DASH [5] standard. Despite the similar traffic pattern, the fundamental difference between a benign periodic TCP flow and the low-rate TCP DoS attacking flow is that the former backs off by entering retransmission timeout when its packets are lost whereas the latter does not.

Although the low-rate TCP DoS attack was proposed more than a decade ago, it has not been fully addressed. Sun et al. [6] use signal processing (autocorrelation of the traffic) to detect the periodic burst attack at the congested router. Whenever an attack is detected, the router traces back to its upstream routers to find the attack source. Such a solution may not work if the congested router has multiple upstream routers, so that the bursty traffic it detects is the aggregate of the traffic from these upstream routers. In that case, the upstream routers may not detect the bursty attacking traffic, which stops the trace-back process. The work by Chang et al. [7] addresses this problem by assigning high priority to packets destined to TCP application ports with high loss rates. However, such a defense mechanism can be breached if the attackers send large volumes of traffic to a specific protected port to cause a high loss rate at that port; consequently, the attackers’ traffic will be marked as high-priority traffic. Furthermore, because both of the aforementioned solutions target only the ideal low-rate TCP DoS attack, one alternative strategy for attackers to crack the defense is to split their traffic into multiple attacking flows and launch distributed DoS attacks. They also do not show how their defense mechanisms affect benign periodic flows such as the aforementioned video traffic. In this paper, we develop the AccFlow (Accountable Flow) protocol, which effectively defends against the low-rate TCP DoS attack without causing any performance degradation to benign periodic flows. Furthermore, AccFlow also provides a strong defense against general DoS or DDoS attacks.

Different from previous work, we incorporate the concept of Software-Defined Networking (SDN) [8] and flow accountability when designing AccFlow. The SDN architecture, in which a centralized controller makes packet routing and forwarding decisions so that it can explicitly apply different policies to different flows, was proposed to provide flexible and novel ways to configure the network. For instance, one benefit of such a centralized architecture is that network operators can coordinate traffic to build low-latency and congestion-free networks, especially data center networks [9] [10]. Although not proposed for solving security problems in computer networking, the concept of SDN provides novel ways to rethink and address such problems [11]. Specifically, with the centralized network architecture, the controller is capable of online traffic monitoring and analysis. Whenever attacking flows are detected, it blocks them and saves the network resources for legitimate flows. The advantages of such SDN-based defense techniques are twofold. First, they respond to attacks in real time. Second, they immediately benefit the deploying routers or Autonomous Systems (ASes) because they do not rely on reconfiguration at other parts of the network. In spite of these advantages, flow-based security protocols need to be scalable to deal with huge numbers of attacking flows, especially when the protocols are deployed in Wide Area Networks (WANs). In this paper, we propose to use flow aggregation (specifically, source-IP-address-based flow aggregation; see subsections IV-B and V-A) and a virtual centralized controller (multiple coordinated processors serving as the centralized controller; see subsection V-B) to solve the scalability problem so that our protocol can be deployed in both Local Area Networks (LANs) and WANs.

The reason why the low-rate TCP DoS attack and other kinds of DoS attacks work so well is that whenever congestion happens, the router drops packets from all flows regardless of which flows caused the congestion. In other words, accountability for the congestion is not considered when dropping packets [12]. Consequently, legitimate TCP flows, which strictly comply with the congestion avoidance protocol and transmit at reasonable rates, are blamed equally for the congestion even though it is caused by attacking flows that send large volumes of traffic and exhaust the network bandwidth. Therefore, in order to effectively tackle DoS attacks, AccFlow takes accountability for the congestion into consideration when dropping packets. Specifically, the more accountable a flow is for the congestion, the more aggressively AccFlow drops its packets. We associate each flow’s accountability for the congestion with its loss rate: the higher its loss rate, the more accountable it is for the congestion. The reason is that attacking flows, which are more accountable for the congestion, always exhibit high loss rates since they have to keep sending excessive numbers of packets so as to overflow the network. In contrast, it is rare for legitimate TCP flows, which are less accountable for the congestion, to suffer from consistently high loss rates, because they reduce their transmission rates and even enter timeouts when their packets are dropped. Therefore, the loss rate of a flow is positively correlated with its accountability for the congestion. AccFlow protects the network by dropping packets from higher-loss-rate flows with higher probabilities and dropping packets from lower-loss-rate flows with lower probabilities.

Through extensive simulations on ns-3 [13], we demonstrate that AccFlow can effectively defend against both the low-rate TCP DoS attack and general DoS attacks. In summary, the contributions of this paper are as follows.

  • AccFlow is the first SDN-based security protocol that considers flow accountability when defending against DoS or DDoS attacks.

  • We demonstrate that AccFlow, which does not cause any performance degradation to benign flows, can effectively defend against both the low-rate TCP DoS attack and general DoS attacks even if attackers are able to vary their strategies.

  • We use flow aggregation and virtual centralized controller to solve the scalability problem of AccFlow and make it deployable in real networks.

The rest of the paper is organized as follows. In section II, we give a brief introduction to the low-rate TCP DoS attack. In section III, we elaborate on the AccFlow protocol. In section IV, we thoroughly study the effectiveness of AccFlow in different simulation settings. In section V, we consider the deployment of AccFlow in real networks and its interaction with other security protocols. Finally, we conclude in section VI.

II Low-Rate TCP DoS Attack

In this section we briefly introduce the low-rate TCP DoS attack and its effectiveness in causing denial of service to legitimate TCP flows. The ideal low-rate TCP DoS attacking flow can be represented by a triple (R, T, L), where R indicates the peak data rate, T the attacking period, and L the burst duration within one period, as illustrated in Figure 1(a). In order to overflow the network, R needs to be larger than the bottleneck link bandwidth. T should be close to the retransmission timeout, and L should be long enough, relative to the RTTs of the legitimate TCP flows, to attack most of the traversing flows. L is negatively correlated with R if the amount of traffic generated by the attackers in one period is fixed. A detailed discussion on the choice of R, T and L can be found in [4].
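To make this traffic pattern concrete, the following sketch (our own illustration, not code from the paper; the parameter values are arbitrary assumptions) generates the on-off transmission schedule of an ideal low-rate attacking flow from a triple (R, T, L). Its time-averaged rate is R·L/T, which stays far below the bottleneck bandwidth as long as L is much smaller than T.

def attack_schedule(peak_rate_bps, period_s, burst_s, duration_s):
    """Return (time, rate) segments of an ideal low-rate TCP DoS flow.

    peak_rate_bps: R, peak rate during a burst (must exceed the bottleneck bandwidth)
    period_s:      T, attacking period (chosen close to the TCP retransmission timeout)
    burst_s:       L, burst duration within one period
    """
    segments, t = [], 0.0
    while t < duration_s:
        segments.append((t, peak_rate_bps))   # burst: transmit at the peak rate R
        segments.append((t + burst_s, 0.0))   # idle for the remainder of the period
        t += period_s
    return segments

# Illustrative values only: 10 Mb/s peak, 1 s period, 150 ms burst.
# Time-averaged rate = 10 * 0.15 / 1 = 1.5 Mb/s, far below even a 5 Mb/s
# bottleneck link that the 10 Mb/s burst overflows.
for start, rate in attack_schedule(10e6, 1.0, 0.15, 3.0):
    print(f"t={start:.2f}s  rate={rate / 1e6:.1f} Mb/s")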

(a) Low-rate TCP DoS attacking flow.
(b) Network topology.
Fig. 1: Attacking traffic and network topology.
(a) Throughput without being attacked.
(b) Throughput under attack.
Fig. 2: Effectiveness of the low-rate TCP DoS attack.
Fig. 3: Aggregate loss rate for normal and attacked scenarios.
Simulation Setting | Legitimate TCP Flows | Attack Scale | Aggregate Attacking Rate
Setting One | 5 flows with the same rate | varying number of synchronized attacking flows | fixed (about three times the bottleneck bandwidth)
Setting Two | 9 flows with different rates | varying number of synchronized attacking flows | fixed (about three times the bottleneck bandwidth)
Setting Three | 9 flows with different rates | 5 attacking flows | varying
TABLE I: Simulation Settings.
(a) Setting One
(b) Setting Two
(c) Setting Three
Fig. 4: Under all three settings, AccFlow can effectively defend the legitimate TCP flows from being attacked. The achieved throughput for each legitimate flow is almost the same as its original transmission rate.
(a) 20 Attacking Flows in Setting One
(b) 30 Attacking Flows in Setting Two
(c) 40 Aggregate Attacking Rate in Setting Three
Fig. 5: Convergence time of AccFlow.

We set up simulations on the ns-3 platform to illustrate the effectiveness of the low-rate TCP DoS attack. We create a “dumbbell” network topology, as illustrated in Figure 1(b). In this setup, several legitimate TCP flows and one attacking flow traverse the bottleneck link. We configure each legitimate TCP flow with the same transmission rate and choose the attacking flow triple (R, T, L) so that the peak rate exceeds the bottleneck link bandwidth. Note that we scale down the bottleneck link bandwidth and flow rates in order to accelerate the simulation. As illustrated in Figure 2(a), without being attacked, all legitimate TCP flows fairly share the bottleneck link bandwidth and achieve their desired data rates. However, they are throttled to nearly zero throughput under attack, as illustrated in Figure 2(b). Note that the time-averaged data rate of the attacking flow is far less than the bottleneck link bandwidth. Therefore, the attackers use far fewer resources to achieve a very effective DoS attack.

III AccFlow Design

In this section, we elaborate on AccFlow. AccFlow is a transport-layer protocol deployed on the SDN centralized controller. The controller monitors all traversing flows, conducts flow analysis, and then instructs the switches or routers to apply different routing and forwarding policies to different flows. We illustrate the architecture of AccFlow in Figure 6. AccFlow includes two major modules, i.e., Aggressive Detection and Early Drop. Aggressive Detection is used to block attacking flows that behave aggressively enough to be detected, whereas Early Drop is used to protect the network when attackers try to evade detection by smartly varying their attacking strategies.

Fig. 6: AccFlow architecture.

III-A Aggressive Detection

In order to detect the attacking flows, we need to find unique features that distinguish them from legitimate ones. Since the attackers consistently generate high volumes of traffic to overflow the network, the loss rates of their flows (the ratio of lost packets to total transmitted packets) are expected to be higher than those of legitimate flows. Therefore, a straightforward way to differentiate legitimate flows from attacking flows is to use the loss rate. To verify the effectiveness of this intuitive solution, we study the loss rate of each flow in the simulation conducted in the previous section. The centralized controller monitors the traffic and periodically conducts statistical analysis for all traversing flows. The controller’s flow analysis period should be two to three times the typical RTT of the traversing flows, so that on one hand the controller can accurately learn the behavior of the traversing flows and on the other hand it can react to attacks quickly. In our simulation, we set this period to about twice the typical RTT of the traversing flows. In the rest of the paper, we use the term detection period to refer to the controller’s analysis period. The simulation results are illustrated in Figure 7(a).

(a) Loss rate of each flow.
(b) Uniform loss rate of each flow.
Fig. 7: Loss rate and uniform loss rate of each flow.

It is clear that the loss rate of the attacking flow is consistently high. However, the loss rates of some legitimate flows in some detection periods are even higher. Therefore, relying purely on the loss rate to detect the attacking flow may result in false detection. After analyzing the traffic sent by each flow, we realize that the loss rate of a legitimate flow in one detection period can be high because it sends only one packet in that detection period and that packet is dropped. Specifically, after packet losses, the legitimate flow backs off and waits for one retransmission timeout before entering the TCP slow start process. At the beginning of slow start, it sends out one packet to probe the available bandwidth. If the network is extremely congested, it is highly likely that this newly generated packet is dropped. If so, the legitimate flow is forced to enter an even longer retransmission timeout. This explains why the loss rate of a legitimate flow is either 100% (its only packet is dropped) or 0 (it is waiting in a retransmission timeout and transmits nothing). The attacking flow, by contrast, consistently sends out huge numbers of packets and never backs off even though many of the preceding packets were dropped. Therefore, the attacking flow has a consistently high loss rate.

In our framework, we propose to use the Uniform Loss Rate (ULR), the product of the loss rate and the usage rate, to differentiate the attacking flow from legitimate flows. The usage rate of a flow is the ratio of the number of packets it transmits in one detection period to the total number of packets arriving from all flows in that detection period. The attacking flow exhibits a high ULR since it has to consistently send large numbers of packets (high usage rate) and never backs off even if its packets are dropped (high loss rate). As illustrated in Figure 7(b), there is a notable gap between the ULR of the attacking flow and those of the legitimate flows. Therefore, ULR is an effective feature for differentiating the attacking flow from legitimate ones. Whenever the centralized controller detects a flow with an excessively high ULR while the other flows’ ULRs are close to zero, it identifies that flow as an attacking flow and completely blocks its traffic.
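As a minimal sketch of this per-period analysis (our own illustration; the counter layout and flow-identification key are assumptions, and a real controller would read these counters from the switches), the ULR of each flow can be computed as follows:

def uniform_loss_rates(per_flow_counts):
    """per_flow_counts maps a flow id to (packets_arrived, packets_dropped)
    observed in one detection period; returns flow id -> ULR."""
    total_arrived = sum(arrived for arrived, _ in per_flow_counts.values())
    ulr = {}
    for flow, (arrived, dropped) in per_flow_counts.items():
        if arrived == 0 or total_arrived == 0:
            ulr[flow] = 0.0
            continue
        loss_rate = dropped / arrived          # lost packets / transmitted packets
        usage_rate = arrived / total_arrived   # this flow's share of all arrivals
        ulr[flow] = loss_rate * usage_rate     # ULR = loss rate x usage rate
    return ulr

# Illustrative numbers: one aggressive flow and two TCP flows that mostly back off.
counts = {"attacker": (900, 700), "tcp1": (40, 1), "tcp2": (1, 1)}
print(uniform_loss_rates(counts))
# attacker ~0.74, tcp1 ~0.001, tcp2 ~0.001: tcp2 has a 100% loss rate but a tiny ULR.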

Although simple and accurate when defending against the ideal low-rate TCP DoS attack, Aggressive Detection requires a large enough ULR gap to differentiate the attacking flow from legitimate flows. Finding a reasonable ULR threshold is difficult, especially when attackers vary their strategies to reduce the ULRs of their flows. For example, instead of launching one attacking flow, attackers can split their traffic into n flows and synchronize them to create the same periodic burst traffic. The usage rate of each individual attacking flow then drops roughly by a factor of n, and so does its ULR, as the short calculation after this paragraph illustrates. As the number of synchronized attacking flows increases, the ULR gap between legitimate flows and attacking flows shrinks, which makes it difficult for the controller to detect the attacking flows. However, since the network still experiences the same amount of attacking traffic, the DoS attack remains effective. We present this shortcoming of Aggressive Detection in Figure 8. Note that when the attackers split their traffic into many synchronized flows, the ULR differences between attacking flows and legitimate flows are close to zero. To tackle such distributed attacks, we design the Early Drop module in the next subsection.
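The following back-of-the-envelope calculation (with made-up counts) shows how splitting dilutes the ULR: the loss rate of the aggregate attack traffic is unchanged, but each subflow's usage rate, and hence its ULR, shrinks by the splitting factor n.

# Aggregate attack traffic: 900 of 1000 arriving packets in a detection period, 700 dropped.
total_arrivals, attack_arrived, attack_dropped, n = 1000, 900, 700, 10
loss_rate = attack_dropped / attack_arrived                          # ~0.78, unchanged by splitting
ulr_single_flow = loss_rate * (attack_arrived / total_arrivals)      # ~0.70 for one big flow
ulr_per_subflow = loss_rate * (attack_arrived / n) / total_arrivals  # ~0.07 for each of n subflows
print(round(ulr_single_flow, 2), round(ulr_per_subflow, 2))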

(a) 10 attacking flows.
(b) 50 attacking flows.
Fig. 8: ULR of each flow under distributed DoS attack.

III-B Early Drop

Early Drop is proposed to deal effectively with the aforementioned distributed attacks. The design of Early Drop is also based on the consistent flow monitoring and periodic flow analysis performed by the centralized controller. However, Early Drop does not rely purely on the ULR to detect attacking flows. In fact, Early Drop is a heuristic algorithm, illustrated in Algorithm 1, that conducts flow-based packet dropping according to each flow’s loss rate without explicitly detecting attacking flows. Next, we elaborate on the Early Drop algorithm.

1   if at the beginning of the k-th detection period then
2       Calculate the aggregate loss rate L_agg of the previous detection period;
3       for each traversing flow f do
4           Calculate its usage u_f and loss rate l_f;
5       end for
6   end if
7   if L_agg > θ then
8       for each arriving packet do
9           Find the flow f it belongs to;
10          if u_f > δ then
11              if l_f > L_agg / 2 then
12                  Drop the packet with probability l_f;
13              else if the router queue length > Q then
14                  Drop the packet with probability l_f;
15              end if
16          end if
17      end for
18  end if
Algorithm 1: Early Drop

The algorithm is executed in every detection period. The first six lines are executed only once at the beginning of each detection period, whereas the remaining lines are executed whenever a packet arrives during the detection period. The aggregate loss rate L_agg (line 2) is the ratio of the number of packets dropped from all flows in the previous detection period to the total number of packets that arrived in the previous detection period. Similarly, the usage u_f of flow f (line 4) is the number of packets sent by f in the previous detection period, and the loss rate l_f of f (line 4) is the ratio of the number of packets dropped from f in the previous detection period to u_f. Note that the values L_agg, u_f, and l_f used in the current detection period are calculated from the statistics of the previous detection period. If the current detection period is the first one, the controller initializes all these values to zero. These values remain the same throughout the current detection period and are updated at the beginning of the next one. The remaining lines, from line 7 to the end, apply the packet-dropping policy according to these values.

We add the condition L_agg > θ in line 7 before any packets are dropped. This is because Early Drop starts dropping packets even before the network bandwidth is exhausted; it is therefore necessary to make sure that the network is actually being attacked before applying such an aggressive dropping policy. In other words, Early Drop is a self-protective mechanism that automatically begins dropping packets when the network shows signs of being attacked. We use the aggregate loss rate to decide whether the network is under attack for the following reasons. Legitimate TCP flows comply with the congestion avoidance protocol and back off when their packets are lost due to severe congestion. Thus, even if they have large original transmission rates, they tailor their actual data rates to the available bandwidth. Therefore, it is rare for the network to witness a consistently high aggregate loss rate under normal conditions. However, when attackers try to cause denial of service by exhausting the network bandwidth, they have to continuously generate traffic even though many of their packets are dropped, i.e., they never back off when they are supposed to. As a result, the network experiences a very high aggregate loss rate under attack. We verify this analysis by studying the aggregate loss rate in both the normal and the attacked scenarios. The experimental results, illustrated in Figure 3, show that even when many legitimate flows with large original transmission rates traverse the bottleneck link, the aggregate loss rate stays well below 10% (real networks are usually bandwidth over-provisioned to tolerate traffic bursts caused by legitimate flows, which makes a large aggregate loss rate rare under normal conditions). However, when the attackers launch their attack, the aggregate loss rate exceeds 65%. Thus, the aggregate loss rate is an effective indicator of whether the network is under DoS attack. Network operators can configure the threshold θ based on their own policies and traffic characteristics; for instance, if they want to protect their network aggressively, they should set a relatively low threshold θ, and vice versa.

The main idea of the Early Drop algorithm is that it drops packets from accountable flows with reasonable probabilities. By “accountable”, we mean that Early Drop only blames the flows that are accountable for the congestion. By “reasonable”, we mean that Early Drop drops the packets of accountable flows according to their loss rates. We achieve accountability through line 10 of the algorithm. In particular, if a flow sends only one or two packets during one detection period, it is not accountable for the congestion and Early Drop will not drop its packets. We add the usage threshold δ to make sure that Early Drop does not falsely blame legitimate TCP flows that have just recovered from retransmission timeouts and send one packet at the beginning of the TCP slow start process to probe the available bandwidth. δ should be small and should increase with the duration of the detection period, since a longer detection period may contain more TCP slow start processes. We set δ to 5 in the simulations in which we test the effectiveness of AccFlow in the next section. Furthermore, we achieve reasonability by taking a flow’s loss rate into account when dropping its packets (lines 11 to 14). Specifically, Early Drop divides all flows into two groups, a high-loss-rate group and a low-loss-rate group. Flows whose loss rates are above half of the aggregate loss rate L_agg are placed in the high-loss-rate group, and Early Drop immediately drops their packets with probabilities equal to their loss rates. Flows whose loss rates are no greater than half of L_agg are assigned to the low-loss-rate group, and Early Drop applies packet dropping to these flows only when the number of packets queued in the router exceeds the threshold Q. The threshold Q is used to indicate that the network is slightly congested, so it is positively related to the router’s buffer size. We set Q to a fixed fraction of the router’s buffer size in our simulations; again, network operators can configure Q according to their own policies. To sum up, Early Drop blames the flows that are accountable for the congestion, and the higher their loss rates, the more aggressively it drops their packets. The fundamental difference between Early Drop and Active Queue Management disciplines such as RED and WRED is that Early Drop selectively drops packets from more accountable flows (usually the attacking flows) early, before the router buffer is exhausted, so that packets from less accountable flows (usually the legitimate flows) can be enqueued. RED and WRED, in contrast, simply drop all arriving packets once the buffer is full, so the legitimate flows still suffer denial of service.
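The following Python sketch mirrors Algorithm 1 under the assumptions stated above (the threshold names theta, delta, and queue_limit, their default values, and the per-flow counter layout are our own illustrative choices; in an SDN deployment the statistics would come from switch counters and the drop decision would be installed as a flow rule):

import random

class EarlyDrop:
    def __init__(self, theta=0.2, delta=5, queue_limit=100):
        self.theta = theta                  # aggregate-loss-rate threshold (attack indicator)
        self.delta = delta                  # minimum usage before a flow can be blamed
        self.queue_limit = queue_limit      # queue length Q indicating slight congestion
        self.agg_loss = 0.0                 # L_agg from the previous detection period
        self.usage, self.loss = {}, {}      # u_f and l_f from the previous detection period
        self._arrived, self._dropped = {}, {}   # counters for the current detection period

    def new_detection_period(self):
        # Lines 1-6: recompute L_agg, u_f and l_f from the previous period's counters.
        total_arr = sum(self._arrived.values())
        total_drop = sum(self._dropped.values())
        self.agg_loss = total_drop / total_arr if total_arr else 0.0
        self.usage = dict(self._arrived)
        self.loss = {f: (self._dropped.get(f, 0) / a if a else 0.0)
                     for f, a in self._arrived.items()}
        self._arrived, self._dropped = {}, {}

    def on_packet(self, flow, queue_len):
        # Lines 7-18: return True if this arriving packet should be dropped.
        self._arrived[flow] = self._arrived.get(flow, 0) + 1
        if self.agg_loss <= self.theta:
            return False                    # no sign of attack: never drop early
        u, l = self.usage.get(flow, 0), self.loss.get(flow, 0.0)
        if u <= self.delta:
            return False                    # flow is not accountable for the congestion
        if l > self.agg_loss / 2 or queue_len > self.queue_limit:
            if random.random() < l:         # drop with probability equal to its loss rate
                self._dropped[flow] = self._dropped.get(flow, 0) + 1
                return True
        return False

The controller would call new_detection_period() once per detection period and on_packet() for every arriving packet, feeding in the current router queue length.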

Note that Aggressive Detection can be used as a complement to Early Drop. In particular, Early Drop is always active to protect the network, whereas Aggressive Detection is applied to completely block attacking flows when they behave aggressively enough to be detected by the controller. In the next section, we thoroughly test the effectiveness of AccFlow through extensive simulations.

IV Effectiveness of AccFlow

In this section, we thoroughly study the effectiveness of AccFlow on the ns-3 platform in four major simulation setups. We use the network topology illustrated in Figure 1(b) in all setups, whereas the traversing flows differ across setups (AccFlow is not limited to this simple dumbbell network topology; it is effective in protecting both inter-domain and intra-domain traffic). The first setup concerns the distributed low-rate TCP DoS attack, in which we consider different types of legitimate traffic and different attacking strategies. The second setup concerns another DoS attack derived from the low-rate TCP DoS attack; we call it the Short Selfish TCP Flow (SSTF) attack because the attackers selfishly consume nearly all of the network resources by generating excessive numbers of short TCP flows. The third setup is designed to verify that AccFlow does not falsely drop packets from legitimate periodic flows. Finally, we consider general DoS attacks in the fourth setup.

IV-A Distributed Low-Rate TCP DoS Attack

In this setup, we design three different simulation settings. In setting one, we have 5 legitimate TCP flows, each with the same transmission rate. The attackers are able to launch different scales of distributed attacks by splitting their traffic into different numbers of synchronized subflows. (Note that we scale down the number of flows in order to accelerate the simulation; as the experimental results show, the performance of AccFlow is not impacted by the scale of the DDoS attacks.) The aggregate attacking rate of all these synchronized attacking flows is about three times the bottleneck link bandwidth, and the attacking period and the burst duration within one period are fixed. In setting two, we use the same attacking traffic as in setting one, but we have 9 legitimate TCP flows, each with a different transmission rate. The reason why we have both setting one and setting two is that TCP uses Max-min fairness [14], in which the network first satisfies the flows with smaller demands (lower transmission rates) and then evenly distributes the remaining bandwidth to flows with larger demands when network resources are limited. As a result, in a congested network, flows with smaller transmission rates can get their fair bandwidth shares more easily than flows with higher rates. Therefore, we need to consider both settings in our simulation. In the third setting, we test the effectiveness of AccFlow when attackers vary their aggregate attacking rate. Without loss of generality, we assume that the attackers split their traffic into 5 attacking flows in this setting and we use the same legitimate TCP traffic as in setting two. We summarize the three simulation settings in Table I.

The simulation results are illustrated in Figure 4. Figure 4(a) shows the results for setting one, where five legitimate TCP flows traverse the network. Since the bottleneck link bandwidth is large enough to carry all the legitimate traffic, the flows should not experience any packet losses (in this paper, we only consider packet losses caused by network congestion) and should achieve their ideal throughput when they are not attacked. As we can see in Figure 4(a), AccFlow can effectively protect the legitimate flows from being attacked, since the average throughput of the legitimate flows is close to their desired transmission rates. Furthermore, the performance of AccFlow does not degrade as the number of attacking flows increases. Figure 4(b) shows the simulation results for setting two. Note that we set up 9 legitimate flows in this setting but plot the results for only a subset of them for concise presentation. With AccFlow, all legitimate flows are able to achieve their desired data rates even under large-scale attacks. In setting three, we vary the aggregate attacking rate. Again, we plot the simulation results for a subset of the legitimate flows. The results show that AccFlow can effectively defend the network even if attackers change their attacking rates.

Here we explain in more detail why AccFlow is effective. Consider one legitimate TCP flow and one attacking flow in our simulations, and assume that the attackers launch the DoS attack in detection period k. Due to severe packet losses in period k, the legitimate TCP flow perceives a heavily congested network and enters a retransmission timeout. Therefore, in the next detection period k+1, either its loss rate is zero because it is still waiting in the timeout, or its usage is very low because it has just recovered from the timeout and sends only a small number of packets to probe the available bandwidth. In both cases, AccFlow will not further blame the legitimate flow. The attackers, however, have to continuously send high volumes of traffic in order to overflow the bottleneck link, so the attacking flow still has a large usage in detection period k+1 in spite of its high loss rate in period k. In this situation, AccFlow drops its packets early according to its loss rate. Furthermore, if the network is still congested after the early drops, the router itself also drops packets since it cannot handle so much traffic. Therefore, the attacking flow experiences an even larger loss rate in detection period k+1. This cycle repeats, so the loss rate of the attacking flow increases in each detection period until the network is no longer congested or its loss rate reaches one. In either case, the attacking flow can no longer harm the network.

In order to be a real-time defense technique, AccFlow needs to react to attacks quickly. Here we study the convergence time of AccFlow, i.e., how long it takes AccFlow to clear up an attack. We define the convergence time as the time at which all legitimate flows’ loss rates become zero. Without loss of generality, we randomly pick one simulation scenario in each of the three settings listed in Table I to study its convergence time. Specifically, we test the scenarios in which attackers set up 20 and 30 attacking flows in setting one and setting two, respectively, and for setting three we use the aggregate attacking rate shown in Figure 5(c). Our simulation results are illustrated in Figure 5. In all three scenarios, AccFlow reacts to the attack quickly and the convergence time is on the order of seconds to tens of seconds.

As a flow-based defending technique, AccFlow needs to be scalable to deal with large numbers of attacking flows. Although the performance of AccFlow does not decline as the number of attacking flows increases (as illustrated in our simulation results), huge numbers of flows will exhaust the CPU and storage of the centralized controller. We discuss the scalability problem and propose our solutions in section V where we consider the deployment of AccFlow in real networks.

IV-B Short Selfish TCP Flow Attack

In this subsection we discuss a very effective DoS attack that is similar to, but still fundamentally different from, the low-rate TCP DoS attack. The attacking technique is that malicious users periodically set up many short TCP flows to gain an unfair share of network resources. Specifically, the early-arriving short TCP flows congest the network and cause all flows (including both the legitimate TCP flows and themselves) to enter retransmission timeouts. Then the attackers selfishly start new short TCP flows to occupy the whole network bandwidth, since no one besides the attackers is transmitting at that point. The interesting point of such a DoS attack is that the attackers never appear to deviate from the TCP protocol, since each short TCP flow backs off when its packets are lost. Nevertheless, the attackers are able to cause very severe denial of service to legitimate users simply by breaking their traffic into many short flows. We name such an attack the Short Selfish TCP Flow (SSTF) attack. Note that the difference between the SSTF attack and the low-rate TCP DoS attack is that the former cannot synchronize all of these short TCP flows to create a regular periodic burst, since the transmitters have to wait for ACKs before sending new packets. We set up simulations to illustrate the effectiveness of the SSTF attack. In our simulation, each attacker periodically sends one short TCP flow, and several legitimate TCP flows with fixed transmission rates share the bottleneck link. The results, illustrated in Figure 9, show that the SSTF attack is able to throttle the legitimate users to almost zero throughput.

Fig. 9: Effectiveness of the SSTF attack.

Now we explain why the SSTF attack can cause such effective denial of service to legitimate users even though each individual short flow behaves exactly like a legitimate TCP flow (in the sense that each short TCP flow complies with the TCP protocol and backs off when congestion happens). First, let us reconstruct the attacking procedure. Assume that the attackers first set up a batch of short TCP flows F1 to cause congestion. All the legitimate flows and the attacking flows in F1 then suffer severe packet losses and enter retransmission timeouts. After a short period, the attackers set up another batch of short TCP flows F2. Since no one else is transmitting, F2 occupies the whole network capacity. When the legitimate flows try to recover from their timeouts, they may face yet another batch of short attacking flows F3, started by the attackers after the flows in F2 finish. Again, congestion happens and the legitimate flows are forced into even longer retransmission timeouts. The cycle repeats and the attackers are able to selfishly utilize nearly all of the network resources. In a word, by sacrificing a small fraction of their traffic, the attackers create a “clear” network environment for most of their traffic and cause severe denial of service to the legitimate users.

The trick played by the attackers is to evade accountability by continuously generating fresh short flows. In particular, it is the flows in F1 that cause the congestion, so there is no reason to blame the flows in F2. However, when the flows in F1 experience severe packet losses, their source should realize that the network is congested and should not start new flows. Thus, the flow set F2 should never have been generated, because both F1 and F2 come from the same source (the attackers). Therefore, although each individual short TCP flow complies with the TCP protocol, the attackers still behave maliciously by periodically setting up new TCP flows even though their previous flows experienced high loss rates.

We propose to use flow aggregation to defend against the SSTF attack. Specifically, all flows with the same source IP address are aggregated into one flow (it is also possible to aggregate flows based on other properties, such as the pair of source IP address and application port; we leave the discussion of different aggregation keys to future work). Consequently, although a flow in F1 and a flow in F2 are different flows, they may have the same source IP address since they are both generated by the attackers. As a result, AccFlow aggregates them into one flow, and the flows in F2 are blamed for the congestion caused by the flows in F1. Similarly, subsequent flows are held accountable for the congestion caused by their preceding flows as long as they share the same source IP address. Thus, the attackers cannot selfishly over-utilize the network resources by creating new flows. A potential problem with such flow aggregation is that attackers can spoof their source IP addresses to keep generating seemingly new flows. However, the network security community has proposed effective mechanisms such as StackPi [15] and packet filters [16] to prevent source IP spoofing, and AccFlow can embrace such security protocols to prevent attackers from faking flows. Another potential problem is that a Network Address Translation (NAT) router translates the IP addresses of the hosts within its LAN to its public IP address, so all flows from different hosts share the same source IP address after they leave the LAN. When they reach remote sites that deploy AccFlow, they are aggregated into one flow, and a single compromised host within the LAN may cause denial of service to all the legitimate hosts in that LAN. To solve this problem, we can deploy AccFlow on the NAT router itself so that it drops the packets of the local attacking flow and saves bandwidth for the legitimate flows; the local attacking flow then never leaves the LAN to attack remote sites.
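A minimal sketch of this source-IP-based aggregation (our own illustration; the five-tuple layout and counter structure are assumptions): before updating any per-flow statistics, AccFlow can map every connection from the same source address to a single aggregate flow, so the short flows F1, F2, ... launched by one attacker share one loss-rate history.

def aggregate_key(five_tuple):
    # Collapse (src_ip, src_port, dst_ip, dst_port, proto) into the AccFlow flow id.
    src_ip, _src_port, _dst_ip, _dst_port, _proto = five_tuple
    return src_ip                    # all flows from the same source share accountability

stats = {}                           # aggregate flow id -> (packets_arrived, packets_dropped)

def account_packet(five_tuple, dropped):
    key = aggregate_key(five_tuple)
    arrived, lost = stats.get(key, (0, 0))
    stats[key] = (arrived + 1, lost + (1 if dropped else 0))

# Two "fresh" short flows from the same attacker end up in one aggregate:
account_packet(("10.0.0.9", 40001, "10.0.0.1", 80, "tcp"), dropped=True)
account_packet(("10.0.0.9", 40002, "10.0.0.1", 80, "tcp"), dropped=False)
print(stats)                         # {'10.0.0.9': (2, 1)}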

(a) Setting One
(b) Setting Two
(c) Setting Three
Fig. 10: AccFlow effectively defends against the SSTF attack.

We test the effectiveness of AccFlow against the SSTF attack under simulation settings similar to those in Table I. As illustrated in Figure 10, AccFlow can effectively defend against the SSTF attack. Note that we set a minimum number of attacking flows, since the SSTF attack needs to be distributed in order to be effective. The convergence time is again on the order of seconds to tens of seconds; we do not present the convergence time results in the paper due to space constraints. Note that the achieved data rates (throughput) of the higher-rate flows are slightly less than their desired data rates. We attribute this slight performance degradation to the fact that TCP Max-min fairness serves the lower-rate flows first in congested networks.

IV-C Benign Periodic Flow

Real-life networks, such as the Internet, also carry periodic or bursty flows whose traffic pattern is similar to that of the low-rate TCP DoS attacking flow. One example is YouTube, which generates periodic traffic by loading chunks of a video with pauses between chunks [17]. In this subsection, we show through the following four experiments that AccFlow does not falsely drop the packets of benign periodic flows. In the first experimental setup, several normal TCP flows and one benign periodic flow share the network resources. The analysis in [17] reveals that the peak rate of a typical YouTube flow ranges from hundreds of kilobytes to several megabytes and that the interval between video chunks ranges from hundreds of milliseconds to seconds; we set the peak rate and period of the periodic flow in our simulation within these ranges so that it represents real video traffic. Since the bottleneck link bandwidth is large enough to carry all the traffic, the first setup is congestion free. In the second setup, we create a fairly congested network by generating more normal TCP flows together with the same periodic flow. In the third setup, the network becomes quite congested as it carries an even larger number of normal TCP flows and the same periodic flow. Finally, in the fourth setup, the network is very congested as a still larger number of normal TCP flows and the same periodic flow traverse the bottleneck link.

The simulation results are illustrated in Figure 11. For clear presentation, we use the characters “N”, “F”, “Q” and “V” to represent the words “Not”, “Fairly”, “Quite” and “Very”, respectively, and “Cg” to represent the word “Congested”. Thus “F Cg” means that the network is fairly congested, which corresponds to the second setup. As illustrated in Figure 11(a), the benign periodic flow achieves its desired throughput in all four setups regardless of whether AccFlow is applied. Furthermore, AccFlow has no negative effect on the normal TCP flows either, as illustrated in Figure 11(b). These results indicate that AccFlow can coexist harmoniously with benign flows without causing any performance degradation. The reason is that, unlike attacking flows, the benign periodic flow is not trying to overflow the congested link, so the network does not suffer from a high aggregate loss rate and AccFlow does not apply its aggressive dropping policy. Even in situations where a legitimate bursty flow has a peak rate large enough to cause a high aggregate loss rate, so that AccFlow drops some of its packets, the negative effect does not propagate, because legitimate flows back off by entering retransmission timeouts after the packet losses. As a result, the network becomes less congested and the aggregate loss rate drops back below the threshold θ.

(a) Benign Periodic Flow.
(b) Normal Flows.
Fig. 11: AccFlow harmoniously coexists with the benign periodic flow and normal flows without causing performance degradation.
(a) Setting One
(b) Setting Two
(c) Setting Three
Fig. 12: AccFlow provides a strong defense against the general DoS attacks.

IV-D General (D)DoS Attack

Although designed to address the low-rate TCP DoS attack, AccFlow can also serve as a defense against general DoS attacks. The difference between the low-rate TCP DoS attack and general DoS attacks is that the former relies on the TCP retransmission timeout mechanism, whereas the latter cause denial of service to legitimate users simply by sending large volumes of traffic. As mentioned above, the core idea of AccFlow is to make attacking flows accountable for the congestion by dropping their packets early, according to their loss rates. Therefore, any attacking flows that are accountable for the congestion are unable to cause denial of service to legitimate flows by over-utilizing network resources. In fact, the reason why AccFlow is effective against general DoS attacks is that we do not rely on the periodic nature of the low-rate TCP DoS attacking flows when designing the algorithm.

To test the effectiveness of AccFlow against general DoS attacks, we use simulation settings similar to those listed in Table I, except that the attackers generate traffic continuously without pause. As illustrated in Figure 12, AccFlow can effectively defend against general DoS attacks. Furthermore, AccFlow also converges quickly under such attacks, on the order of tens of seconds.

V Deployment of AccFlow

In this section, we consider the interaction of AccFlow with other security protocols and its deployment in real networks. Although an SDN-based security protocol has advantages such as flexible control and real-time reaction to attacks, it also has several potential problems. One of the major challenges is the scalability issue caused by the centralized network architecture. Since the centralized controller needs to create an entry in the routing table for each distinct flow, huge numbers of distinct flows will exhaust its CPU and storage resources. In this paper, we propose to use flow aggregation and a virtual centralized controller to solve the scalability problem.

V-A Flow Aggregation

Recall that we aggregate flows according to their source IP addresses to deal with the SSTF attack in subsection IV-B. In fact, combined with the aforementioned defenses against source IP spoofing, flow aggregation also prevents attackers from amplifying their attacking scale by faking huge numbers of flows. Furthermore, with protocols like ingress filtering [18] and Passport [19], ASes can limit the range of acceptable source IP addresses, so the number of distinct attacking flows that can be used to attack the bottleneck link is limited. Moreover, existing security protocols such as MiddlePolice [20, 21], Mirage [22], Phalanx [23], Pushback [24] and the DoS-limiting architecture [25] can be applied to further limit the attacking scale. For instance, Mirage adopts the concept of frequency hopping from wireless networks to “hop” the destination IP address among all available addresses; each time a user wants to send traffic to the protected site, it has to solve a computational puzzle to obtain the new IP address, which limits the volume of traffic that computationally limited attackers can send. MiddlePolice, on the other hand, allows the destination to determine which sources are allowed through self-defined traffic control policies. Since AccFlow does not disrupt the existing network infrastructure, it can interact effectively with these security protocols to defend the network against extremely large-scale DDoS attacks.

V-B Virtual Centralized Controller

In order to deal with large numbers of flows, we can also adopt the concept of a virtual centralized SDN controller. Specifically, multiple processors serve together as the conceptual centralized controller; each processor keeps its own routing table and handles a certain number of flows. To tolerate individual processor failures within this distributed virtual controller, we can adopt the Paxos protocol [26]. In fact, the B4 architecture [9], Google’s globally deployed, large-scale software-defined WAN connecting its data centers, also adopts the concept of a virtual centralized controller by clustering the network to deal with the millions of flows traversing Google’s data centers. By embracing these techniques to solve the scalability problem, AccFlow can be deployed in real networks. We present a straightforward deployment architecture in Figure 13.

Fig. 13: Deployment of AccFlow.

We consider deploying AccFlow on both core routers and border routers. Typically, border routers are responsible for inter-domain flows such as BGP sessions [27], whereas core routers carry traffic across the AS and may execute a particular traffic engineering policy such as MPLS [28]. Consider the situation in Figure 13 where a remote legitimate client and an attacker send their traffic into the AS through a border router that has not deployed AccFlow. Since the attacking flow exhausts the bandwidth of that border router, the client’s flow is throttled to zero throughput and cannot traverse it. The attacking flow continues to propagate in the AS until it encounters a deployed core router, which drops its packets to save the network resources for legitimate flows. When the AS instead deploys AccFlow on its border router, the inter-domain traffic is protected from the attack, so the legitimate flow can safely enter the AS. Apart from launching inter-domain attacks, the attacker can also compromise nodes within the AS to generate local DoS attacking flows; a deployed core router protects the network from such attacks, and the deployed routers also stop local attacking flows from leaving the AS to attack remote sites. To sum up, AccFlow is compatible with existing security protocols and is incrementally deployable without disruption to the existing network infrastructure.

VI Conclusion

In this paper we develop AccFlow, an incrementally deployable SDN-based protocol, to serve as a countermeasure against both the low-rate TCP DoS attack and general DoS attacks in WSNs. The main idea of AccFlow is to make the attacking flows accountable for the congestion by dropping their packets according to their loss rates. We test the effectiveness of AccFlow under four major simulation setups. In the first setup, attackers launch low-rate TCP DoS attacks at different scales and data rates. In the second setup, attackers vary their strategy by maliciously creating excessive numbers of short TCP flows to occupy the network resources. The third setup is designed to study the impact of AccFlow on benign flows. Finally, in the fourth setup, attackers launch general DoS attacks by continuously generating traffic without pause. Through extensive simulations in each setup, we demonstrate that AccFlow, which does not cause any performance degradation to benign flows, can effectively defend against both the low-rate TCP DoS attack and general DoS attacks even if attackers are able to vary their strategies. Finally, we discuss the scalability of AccFlow and its deployment in real networks.

References

  • [1] S. Siddiqui, S. Ghani, and A. A. Khan, “ADP-MAC: An adaptive and dynamic polling-based MAC protocol for wireless sensor networks,” IEEE Sensors Journal, vol. 18, no. 2, pp. 860–874, Jan 2018.
  • [2] S. Zidi, T. Moulahi, and B. Alaya, “Fault detection in wireless sensor networks through SVM classifier,” IEEE Sensors Journal, vol. 18, no. 1, pp. 340–347, Jan 2018.
  • [3] S. Nagar, S. S. Rajput, A. K. Gupta, and M. C. Trivedi, “Secure routing against ddos attack in wireless sensor network,” in 2017 3rd International Conference on Computational Intelligence Communication Technology (CICT), Feb 2017, pp. 1–6.
  • [4] A. Kuzmanovic and E. W. Knightly, “Low-rate TCP-targeted denial of service attacks: the shrew vs. the mice and elephants,” in ACM SIGCOMM, 2003.
  • [5] T. Stockhammer, “Dynamic adaptive streaming over HTTP: standards and design principles,” in Proceedings of the Second Annual ACM Conference on Multimedia Systems.  ACM, 2011, pp. 133–144.
  • [6] H. Sun, J. C. S. Lui, and D. K. Y. Yau, “Defending against low-rate tcp attacks: Dynamic detection and protection,” in IEEE ICNP, 2004.
  • [7] C.-W. Chang, S. Lee, B. Lin, and J. Wang, “The taming of the shrew: Mitigating low-rate tcp-targeted attack,” IEEE TON, 2010.
  • [8] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, “Openflow: Enabling innovation in campus networks,” ACM SIGCOMM, 2008.
  • [9] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu, J. Zolla, U. Hölzle, S. Stuart, and A. Vahdat, “B4: Experience with a globally-deployed software defined wan,” ACM SIGCOMM, 2013.
  • [10] C.-Y. Hong, S. Kandula, R. Mahajan, M. Zhang, V. Gill, M. Nanduri, and R. Wattenhofer, “Achieving high utilization with software-driven wan,” in ACM SIGCOMM, 2013.
  • [11] S. Shin, P. Porras, V. Yegneswaran, M. Fong, G. Gu, and M. Tyson, “Fresco: Modular composable security services for software-defined networks,” in Proceedings of Network and Distributed Security Symposium, 2013.
  • [12] Z. Liu, “FlowPolice: Enforcing Congestion Accountability to Defend against DDoS Attacks,” Ph.D. dissertation, University of Illinois at Urbana-Champaign, 2015.
  • [13] “Ns-3: a discrete-event network simulator.” [Online]. Available: http://www.nsnam.org/
  • [14] E. L. Hahne, “Round-robin scheduling for max-min fairness in data networks,” IEEE Journal on Selected Areas in Communications.
  • [15] A. Yaar, A. Perrig, and D. Song, “Stackpi: New packet marking and filtering mechanisms for ddos and ip spoofing defense,” Selected Areas in Communications, IEEE Journal on, vol. 24, no. 10, pp. 1853–1863, 2006.
  • [16] Z. Duan, X. Yuan, and J. Chandrashekar, “Controlling ip spoofing through interdomain packet filters,” Dependable and Secure Computing, IEEE Transactions on, vol. 5, no. 1, pp. 22–36, 2008.
  • [17] P. Ameigeiras, J. J. Ramos-Munoz, J. Navarro-Ortiz, and J. M. Lopez-Soler, “Analysis and modelling of youtube traffic,” Transactions on Emerging Telecommunications Technologies, vol. 23, no. 4, pp. 360–377, 2012.
  • [18] P. Ferguson and D. Senie, “Network ingress filtering: Defeating denial of service attacks which employ ip source address spoofing,” United States, 2000.
  • [19] X. Liu, A. Li, X. Yang, and D. Wetherall, “Passport: Secure and adoptable source authentication,” in USENIX NSDI, 2008.
  • [20] Z. Liu, H. Jin, Y.-C. Hu, and M. Bailey, “MiddlePolice: Toward Enforcing Destination-Defined Policies in the Middle of the Internet,” in ACM CCS, 2016.
  • [21] Z. Liu, H. Jin, Y.-C. Hu, M. Bailey, “MiddlePolice: Fine-Grained Endpoint-Driven In-Network Traffic Control for Proactive DDoS Attack Mitigation,” 2017.
  • [22] P. Mittal, D. Kim, Y.-C. Hu, and M. Caesar, “Mirage: Towards deployable ddos defense for web applications,” arXiv preprint arXiv:1110.1060, 2011.
  • [23] C. Dixon, T. Anderson, and A. Krishnamurthy, “Phalanx: Withstanding multimillion-node botnets,” in USENIX NSDI, 2008.
  • [24] R. Mahajan, S. M. Bellovin, S. Floyd, J. Ioannidis, V. Paxson, and S. Shenker, “Controlling high bandwidth aggregates in the network,” ACM SIGCOMM, 2002.
  • [25] X. Yang, D. Wetherall, and T. Anderson, “A dos-limiting network architecture,” in ACM SIGCOMM, 2005.
  • [26] T. Chandra, R. Griesemer, and J. Redstone, “Paxos made live-an engineering perspective (2006 invited talk),” in Proceedings of the 26th ACM Symposium on Principles of Distributed Computing-PODC, vol. 7, 2007.
  • [27] S. Kent, C. Lynn, and K. Seo, “Secure border gateway protocol (s-bgp),” IEEE JSAC, 2000.
  • [28] D. O. Awduche and J. Agogbua, “Requirements for traffic engineering over mpls,” 1999.