LearningCC: An online learning approach for congestion control

08/03/2020
by   Songyang Zhang, et al.
Northeastern University
0

Recently, much effort has been devoted by researchers from both academia and industry to develop novel congestion control methods. LearningCC is presented in this letter, in which the congestion control problem is solved by reinforce learning approach. Instead of adjusting the congestion window with fixed policy, there are serval options for an endpoint to choose. To predict the best option is a hard task. Each option is mapped as an arm of a bandit machine. The endpoint can learn to determine the optimal choice through trial and error method. Experiments are performed on ns3 platform to verify the effectiveness of LearningCC by comparing with other benchmark algorithms. Results indicate it can achieve lower transmission delay than loss based algorithms. Especially, we found LearningCC makes significant improvement in link suffering from random loss.

READ FULL TEXT VIEW PDF
POST COMMENT

Comments

There are no comments yet.

Authors

page 1

page 2

page 3

page 4

12/30/2020

TCP D*: A Low Latency First Congestion Control Algorithm

The choice of feedback mechanism between delay and packet loss has long ...
01/07/2019

Beyond socket options: making the Linux TCP stack truly extensible

The Transmission Control Protocol (TCP) is one of the most important pro...
09/09/2019

An Evaluation of BBR and its variants

The congestion control algorithm bring such importance that it avoids th...
12/12/2017

Congestion Control Approach based on Effective Random Early Detection and Fuzzy Logic

Congestion in router buffer increases the delay and packet loss. Active ...
09/02/2018

Congestion Control for RTP Media: a Comparison on Simulated Environment

To develop low latency congestion control algorithm for real time taffic...
03/07/2020

An Online Learning Based Path Selection for Multipath Video Telephony Service in Overlay

Even real time video telephony services have been pervasively applied, p...
03/13/2021

An Empirical Study of Ageing in the Cloud

We quantify, over inter-continental paths, the ageing of TCP packets, th...
This week in AI

Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.

I Introduction

Without congestion control mechanism, it is doubtful whether the Internet can evolve into today’s scale, which severs more than half population of the world. By dynamically adjusting delivery rate, congestion control attempts to make efficient utilization of channel resource effectively while preventing overloading the network. After the first congestion collapse even happens in 1986, Jocobson [1] recommended to apply additive-increase/multiplicative-decrease (AIMD) for rate control rule in TCP, which is later developed into Reno. Since then, congestion control is an active research topic.

Reno takes packet loss event as congestion signal. Reno flow can send one more packet into network to probe the maximum available bandwidth in every round trip time (rtt). On detecting packet loss event, a connection will reduce the congestion window by half to alleviate congestion.

Even the AIMD law is said to save the Internet from congestion collapse, it may still provide degenerated performance over some network scenarios. Firstly, a connection may suffer from long latency. In today’s wired networks, routers are configured with excessively large buffer, in which the packet loss events seldom happen. Reno flow will keep increasing the number of inflight packets and many packets will be buffered in routers. Secondly, non-congested packet loss event has detrimental effect on throughput of these loss based rate control algorithms, which is common in wireless networks due to interference and signal attenuation. Reducing the congestion window frequently after random loss events will leave the bottleneck operating at speeds considerably below its capacity. Thirdly, the approach to increase congestion window by one over each rtt may be too slow to probe the maximum available bandwidth in long fat pipe.

Due to these drawbacks, different variants have been proposed for some specific networks. Cubic [2] is proposed for long fat links. Westwood [3] is for wireless networks. Recently, to design new algorithm which could achieve maximum delivery rate and simultaneously maintain minimal delay becomes a trend. Vivace [4] and BBR [5] are examples of such effort. The performance of BBR has been evaluated on ns3 [6].

There are works optimizing congestion control strategies through machine intelligence. Remy [7] uses offline training to find the optimal mapping from observed network states to control actions. Other works [8] [9]

use deep reinforce leaning approach. A large number of epochs are needed to train the control parameters in various network scenarios. The policy lookup process may takes orders of magnitude longer compared to hand-crafted methods. These factors hinder these algorithms to be applied in real networks.

Vivace searches for better action through online optimization. Time is divided into consecutive monitor intervals (MI). Each MI is devoted to test the implication between an action and the performance, which is measured by utility function.

In this work, we present LearningCC to solve the congestion control problem by multi-armed bandits framework. Instead of adjusting the congestion window with fixed rule, serval options are provided. Each option is taken as an arm of a slot machine. With the help of reward function, tradeoff is made by sender between exploration and exploitation to determine an optimal choice based on live empirical evidence. The effectiveness of the proposed method is evaluated on a small scale network topology. Simulation results indicate that LearningCC can achieve lower transmission delay compared with Reno/Cubic. It can gain higher throughput when coexisting with Reno/Cubic flows compared with Vivace. What’s more, LearningCC achieves the best channel utilization in lossy networks than these benchmark algorithms.

Ii Key design on LearningCC

The congestion window adjustment rule in AIMD is given in Equation (1). In Reno, and . Such hand-crafted rule make assumption that packet loss is an indication to congestion. When such assumption is violated, halving the congestion window will achieve inferior performance. It is hard to find an always optimal hand-crafted policy in a wide range of real networks.

(1)

Hence, the congestion window update rule in LearningCC is updated in a more flexible method. Instead of increasing the congestion window with fixed step in each round, there are several options to update congestion window dynamically in LearningCC. For example, the value on can be chosen from (1, 2, 3,). For a short time, the congestion window can be increased by 2 () in each rtt to gain better channel utilization. And it may be also feasible to set . Since it is hard to know which option can generate best benefit, MAB is used to learn the best action under such uncertain environment.

In MAB, a gambler makes decision on which arm to pull in a K-slot machine. Reward is only observed when an arm is selected and the goal of the gambler is maximize the cumulative reward. Due to lack of oracle perspective, it is hard for gambler to pull the arm always generating the highest reward in each time. The gambler has to pull multiple arms to identify the optimal choice. The gambler will try alternatives to acquire reward distribution information in exploration phase. As time goes by, the gambler gains the information on the reward distribution of each arm. The gambler can exploit the arm that gives the highest reward as much as possible.

During the persistent of a session, the throughput is calculated from acknowledged packets. When a new packet is sent out, the packet state information (bytes, sent_time, delivered) is recorded in sent_packet. Here, counts the total bytes of packets that have been successfully delivered to the peer. When an acknowledged packet arrives, the duration () and the delivered packets () in this round are known. A measurement on throughput can be calculated as Equation (2).

(2)

In recent proposed congestion control solutions, the goal is to maximize throughput while simultaneously minimizing transmission delay. The value of is defined as reward. When an ack arrives, the instant reward of a congestion window update action is defined in Equation (3). The smoothed round trip time () is got by a low pass filter in Equation (4) and is empirically set as 0.125. When the measurement on rtt is first got, .

(3)
(4)

When a new action is chosen, its impact on a netowork system is delayed by one round. The reward can be calculated when theese packets sent after the selection of an action get acknowledged from peer. Each arrival of an ack will generate an instant reward. The exponential filter is applied again to update the reward in Equation (5) and the factor (0.85) gives more importance to recent instant reward. Here, is the index of an action.

(5)

In traditional MAB problem, the gambler can choose an arm at each time step. For congestion control, the reward is not instantly observable but is delayed by at least one round after an action is selected. The decision-making process is not triggered by fixed time step but is triggered by congestion event. The congestion event is inferred from increased delay and lost packet.

(6)

In LearningCC, the values of , and are monitored. is the minimal rtt and is the maximum smoothed round trip delay during the observation time window. is the minimal rtt after an action is chosen. In addition to packet loss event, the link is deemed falling into congestion when a new sample rtt exceeds defined in Equation (6) and to choose a new congestion window update rule is triggered. is empirically set as 0.8. Firstly, endpoint enters the recover state and the congestion window will be reset as as shown in Algorithm 1. is 0.9. The operation to actively reduce congestion window is to alleviate link congestion.

is the maximum estimated throughput in 10 rtts and the throughput is got by Equation (

2). Since the endpoint has already responded link congestion event, will be resampled. The detail on action selection for is given in Algorithm 2. The -greedy method is applied for decision making. When the random generated value is smaller than (0.3), the endpoint enters the exploration procedure (line 7) and an action is randomly chosen from action table (). Otherwise, the endpoint will choose the action that has maximum reward in reward table during exploitation procedure (line 9). MSS denotes maximum segment size.

0:    packet_number, has_loss, rtt
1:  if  then
2:     if  then
3:        return
4:     end if
5:     
6:     
7:     
8:     
9:  end if
Algorithm 1 CongestionWindowBackoff
0:    packet_number,rtt
1:  if  then
2:     return
3:  end if
4:  if  then
5:     
6:     if  then
7:        Exploration()
8:     else
9:        Exploitation()
10:        
11:     end if
12:  end if
13:  if  then
14:     
15:  end if
16:  
17:  
18:  if  then
19:     
20:     
21:  end if
Algorithm 2 OnPacketAcked

Iii Fluid model of LearningCC

Following the method in [3], the theory throughput of LearningCC is analyzed by fluid model.

denotes the probability of congestion event. For every acknowledged packet, the increment of

is . For every congestion event, the decrement on is denoted by . The expected increment on per update step is then: . The delivery rate at time t is . The time between update steps is the inter arrival time of two adjacent acks: . The derivative on is given in Equation (7).

(7)

For Reno flow, , . By substituting , we could further get:

(8)

Let , the throughput gained by a Reno flow at equilibrium is:

(9)

For LearningCC, the increase factor is updated in an online learning fashion. We assum the average increase factor is . . According to Equation (7), the fluid model of a LearningCC flow can be got:

(10)

Let and , the throughput gained by a LearningCC flow at equilibrium is:

(11)

When , . We analyze a simple case. When , the value of is larger than 1.1 and LearningCC can achieve higher throughput than Reno flow when they traverse the same path. Such requirement seems easy to be met. And we will show by experiments that learningCC is more robust in lossy networks.

Iv Evaluation

All experiments are running on ns3.26. The code and all scripts to reproduce the results presented in this work can be found in repository [10].

The dumbbell topology in Figure 1 is built to evaluate LearningCC. These parameters on each link in Table I are bandwidth (B, in unit of Mbps), one way propagation delay (D, in unit of milliseconds) and maximum queue delay (Qdelay, in unit of milliseconds). The maximum queue delay is converted to maximum buffer length () in routers. Four flows are involved. Flow1 and flow2 send packets over pasth1 (n0 to n4). Flow3 and flow4 use path2 (n1 to n5). Each experiment lasts 300 seconds.

Fig. 1: Network topology
Case l1 l2 l3 l4 l5
(BW,OWD,Qdelay)
1 (100,10,60) (5,10,60) (100,10,60) (100,10,60) (100,10,60)
2 (100,10,120) (5,10,120) (100,10,120) (100,20,120) (100,10,120)
3 (100,30,100) (6,10,100) (100,10,100) (100,20,100) (100,20,100)
4 (100,10,150) (6,10,150) (100,10,150) (100,20,150) (100,20,150)
5 (100,5,90) (8,10,90) (100,5,90) (100,15,90) (100,5,90)
6 (100,20,120) (8,20,120) (100,20,120) (100,20,120) (100,20,120)
TABLE I: Configuration on the dumbbell topology
Fig. 2: Average one way delay
Fig. 3: Jain’s fairness index
Fig. 4: Throughput ratio
Fig. 5: Channel Utilization (3.5%)
Fig. 6: Channel Utilization (5%)

Protocol fairness is an important indication to reflect whether a flow can converge to a fair bandwidth line when sharing link with other flows with same protocol. The four flows are configured with the same congestion control algorithm (Reno, Cubic, Vivace, or LeraningCC) . Vivace is the closest relevant congestion control mechanism to our design, which also takes an online learning optimization approach.

Once a packet is injected into network, the sent timestamp is tagged into the packet object in ns3. When it arrives to destination, one way transmission delay and its length are recorded. One way transmission delay reflects the buffer occupation status in routers.

(12)
(13)

According to collected data, the average one way transmission delay of the two flows on path1 is calculated. The result on delay is given in Figure 2. The Jain’s fairness index [11] in Equation (12) is exploited to indicate how fair the bandwidth is shared. The closer Jain’s fairness index is to 1, the better in terms of bandwidth allocation fairness. is the average throughput of a session, which is defined in Equation (13). is the persistence time of a session and is the length of all received packets. The result on fairness in given in Figure 3.

Some conclusions can be summarized from these metrics. Compared with Reno and Cubic, Learning CC achieves lower transmission delay. When the delay signal exceeds the predefined threshold, the switch to a different action according to reward values will be triggered. Vivace flows have the lowest transmission delay. In term of fairness, LearningCC and Reno gain better performance.

Iv-a Bandwidth competence

(14)

A route can be shared by many flows with different congestion control algorithms in real networks. The loss based congestion control algorithms still dominate current Internet. It is important for a flow with newly designed algorithm to achieve better throughput or to avoid being starved by other flows. Such property can motivate the deployment of a new algorithm. In this part, the performance of LearningCC and Vivace is tested when they share route with Reno/Cubic flows. In each test, flow 1 and flow3 take a learning based approach (LearningCC or Vivace) for rate control while Flow2 and flow4 is configured with a loss based method (Reno or Cubic). The throughput ratio is defined in (14) to measure the bandwidth competence ability. is the average throughput of flow1 and is the average throughput of flow2. The configuration on each link remains the same as the previous part.

The result on throughput ratio is given in Figure 4. The throughput of a LearningCC flow is slightly higher than a Reno flow. It means LearningCC can maintain well inter-protocol fairness. LearningCC can gain higher bandwidth when sharing bottleneck with Cubic. But the throughput of a Vivace flow is quite low and Vivace flow is nearly starved by Reno or Cubic flow in each test.

Iv-B Performance in lossy links

The random packet loss event are common in wireless networks due to interference and signal attenuation. The bottleneck l2 is configured with different random loss rate (1%,1.5%, 2%, 2.5%, 3%,3.5%, 4%, 4.5%, 5%). These values are based on the measurement in wireless network from Uber [12]. The four flows will take the same algorithm for rate control. The channel utilization defined in Equation (15) is computed. denotes the capacity of bottleneck l2 and is the running time in each test.

(15)

Due to space limitation, the results on channel utilization of these algorithms when the bottleneck is configured with 3.5% and 5% random loss rate are given in Figure 5 and Figure 6. The random packet loss event is wrongly interpreting as congestion signal. Reno/Cubic flows will frequently reduce the congestion window in random loss environment. As shown in Figure 5, Reno flows achieve 31% channel utilization and Cubic flows achieve 27% channel utilization in test 6. In all tests, LearningCC makes better channel utilization than Vivace. LearningCC flows achieve channel utilization above 90% even when the bottleneck introduces 5% random loss.

V Conclusion

In this letter, a new perspective is provided to solve the congestion control problem with an online learning approach. By mapping each congestion window increment rule to an arm in multi-armed bandit scenario, -greedy is applied to discovery which decision generates the maximum reward through trial and error. LearningCC is further implemented on ns3 simulator and its performance is compared with three benchmark algorithms. When the small scale network is occupied by LearningCC flows, LearningCC achieves lower transmission delay at the cost of reduced channel utilization. Even Vivace has the lowest transmission delay, Vivace flow maintains lower throughput and nearly gets starved. LearningCC can achieve similar throughput when competing bandwidth with Reno and it maintains higher throughput when sharing bottleneck with Cubic. Most importantly, the channel utilization of LearningCC is less affected by random loss, which makes it fit to be applied in wireless networks.

References

  • [1] V. Jacobson, Congestion avoidance and control, SIGCOMM Comput. Commun. Rev. 18 (4) (1988) 314–329.
  • [2] S. Ha, I. Rhee, L. Xu, Cubic: a new tcp-friendly high-speed tcp variant, ACM SIGOPS operating systems review 42 (5) (2008) 64–74.
  • [3] L. A. Grieco, S. Mascolo, Mathematical analysis of westwood+tcp congestion control, IEE Proceedings - Control Theory and Applications 152 (1) (2005) 35–42.
  • [4] M. Dong, T. Meng, D. Zarchy, E. Arslan, Y. Gilad, P. B. Godfrey, M. Schapira, Pcc vivace: Online-learning congestion control, in: Proceedings of the 15th USENIX Conference on Networked Systems Design and Implementation, NSDI’18, USA, 2018, pp. 343–356.
  • [5] N. Cardwell, Y. Cheng, C. S. Gunn, S. H. Yeganeh, V. Jacobson, Bbr: Congestion-based congestion control, Queue 14 (5) (2016) 50:20–50:53.
  • [6] S. Zhang, An evaluation of bbr and its variants (2019). arXiv:arXiv:1909.03673.
  • [7] K. Winstein, H. Balakrishnan, Tcp ex machina: Computer-generated congestion control, SIGCOMM Comput. Commun. Rev. 43 (4) (2013) 123–134.
  • [8] W. Li, F. Zhou, K. R. Chowdhury, W. Meleis, Qtcp: Adaptive congestion control with reinforcement learning, IEEE Transactions on Network Science and Engineering 6 (3) (2019) 445–458.
  • [9] K. Xiao, S. Mao, J. K. Tugnait, Tcp-drinc: Smart congestion control based on deep reinforcement learning, IEEE Access 7 (2019) 11892–11904.
  • [10] Implementation on learningcc, https://github.com/SoonyangZhang/learningcc.
  • [11] R. K. Jain, D.-M. W. Chiu, W. R. Hawe, A quantitative measure of fairness and discrimination for resource allocation in shared computer systems, Eastern Research Laboratory, Digital Equipment Corporation, Hudson, MA.
  • [12] Employing quic protocol to optimize uber’s app performance, https://eng.uber.com/employing-quic-protocol/.