More Than The Sum Of Its Parts: Exploiting Cross-Layer and Joint-Flow Information in MPTCP

11/20/2017 · Tanya Shreedhar, et al.

Multipath TCP (MPTCP) is an extension to TCP that aggregates multiple parallel connections over the available network interfaces. MPTCP bases its scheduling decisions on the individual RTT values observed at the subflows, but does not attempt any kind of joint optimization across them. Using the MPTCP scheduler as an example, we demonstrate in this paper that exploiting cross-layer information and optimizing scheduling decisions jointly over the multiple flows can lead to significant performance gains. While our results represent only a single data point, they illustrate the need to look at MPTCP from a more holistic point of view and not to treat the connections separately, as is currently done. We call for new approaches and research into how the multiple parallel connections offered by MPTCP should be used in an efficient and fair manner.


1 Introduction

The Transmission Control Protocol (TCP) is an essential part of the Internet and is used by a majority of applications to reliably transport their data over the network. Since its inception, TCP has been the subject of extensive research, particularly on improving its performance and throughput [23, 11, 22]. More recent work has led to the development of Multipath TCP (MPTCP), an extension to standard TCP standardized by the IETF. It builds on the observation that end hosts such as smartphones and servers are equipped with multiple access interfaces (Ethernet, WiFi, 3G/4G, Bluetooth) and can therefore form multiple parallel network paths between communicating devices. Unlike a single-path TCP connection, MPTCP utilizes multiple parallel TCP subflows between hosts via (all) available network interfaces for simultaneous data transfer. It achieves robustness and resilience to link failures, provides seamless connection handovers across network interfaces, and aggregates the throughput and bandwidth of the underlying TCP connections [3, 26]. Owing to these benefits, researchers have proposed utilizing MPTCP in datacenters [24], opportunistic networks [25], and elsewhere. The Linux implementation of the protocol is open source and available to network researchers [21], and a commercial version is used by Apple Inc. to support its digital assistant Siri on iOS and macOS systems. Recently, Apple Inc. has also opened its MPTCP API to iOS application developers so that they can fully utilize its capabilities [13].

Figure 1: Queues in MPTCP transmission path

MPTCP's protocol design is shaped by network compatibility and by ensuring fairness to existing TCP connections. Figure 1 shows the network stack of an MPTCP-compliant client. MPTCP adds a scheduling layer over the existing TCP connections and routes application packets to one of the subflows based on a decision parameter. Efficient scheduling decisions can improve the delay performance of MPTCP. Several schedulers have been proposed, but to remain compliant with the modular TCP design, the default scheduler injects packets on the subflow with the lowest smoothed TCP RTT (SRTT) value [19]. TCP RTT conveys delayed information about internal network behavior such as congestion, packet drops, and queueing, and it does not explicitly indicate the reason for a change in network conditions. Unlike traditional TCP, MPTCP has the capability to proactively switch between TCP flows if it senses issues on one of them. However, MPTCP primarily treats the individual streams as separate entities and does not attempt to optimize performance across them.
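For illustration, the default decision rule reduces to picking the minimum-SRTT subflow among those with congestion-window space. The following is a minimal sketch in Python; the names and structure are ours, not those of the Linux MPTCP implementation.

```python
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    srtt: float       # smoothed RTT in milliseconds
    cwnd_room: bool   # whether the congestion window has space

def min_srtt_scheduler(subflows):
    """Sketch of the default scheduler: the lowest-SRTT subflow wins."""
    candidates = [f for f in subflows if f.cwnd_room]
    return min(candidates, key=lambda f: f.srtt, default=None)

# Example: the WiFi subflow (20 ms) is preferred over 4G (60 ms).
print(min_srtt_scheduler([Subflow("wifi", 20.0, True), Subflow("4g", 60.0, True)]))
```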

In this paper, we argue that MPTCP should take a more comprehensive view over the individual subflows and attempt to optimize overall performance, as opposed to treating them as separate parallel entities. MPTCP is currently limited to the TCP-level information provided by the individual TCP subflows, but we claim that a more holistic approach is needed to exploit the possibilities offered by multiple flows to their full extent. This implies the need for a different approach in MPTCP research, and more generally in how parallel resources are exploited in networking, which can also serve other areas of networking [1, 10, 14, 15].

To illustrate one aspect of the problem and to motivate future work, we present several issues in the current implementation and operation of MPTCP that stem from its reliance on TCP-layer information. We show via controlled experiments that the default minSRTT scheduler of MPTCP essentially forces the protocol to use only one of the many available flows, which leads to lower performance. To demonstrate the performance that MPTCP could achieve, we develop the QueueAware scheduler, which takes the network interface device queue occupancy into account when making scheduling decisions. We evaluate and compare the performance of QueueAware against the minSRTT scheduler in several realistic scenarios over WiFi and 4G network interfaces. We also sketch how other parameters and network conditions currently ignored by MPTCP could be used to improve the efficiency of network communication.

The remainder of the paper is organised as follows. In Section 2, we show the gap in performance of the default MPTCP scheduler via controlled experiments. In Section 3, we describe the scheduling policy used by QueueAware. Section 4 describes how we evaluate the efficacy of QueueAware using ns-3 simulations. Section 5 quantifies the performance gains achieved by QueueAware over the default MPTCP scheduler. We discuss related work in Section 6. In Section 7, we discuss various network parameters and network conditions that are currently overlooked by MPTCP when making scheduling decisions; these motivate future avenues for research. We conclude in Section 8.

2 Downsides of Ignoring Local Information

Figure 2: Experimentally obtained goodputs and RTTs of two MPTCP subflows using non-interfering WiFi access paths: (a) goodput; (b) RTT. The paths and network are shown in Figure 3.
Figure 3: Topology used in experiments and simulations

Figure 2(a) shows goodputs obtained from controlled testbed experiments that illustrate how the default MPTCP scheduler operates over two available TCP subflows that use non-interfering end-to-end paths. The network topology used in the experiment is shown in Figure 3. It emulates a situation where the access network is the bottleneck while the core network has high-speed pipes.

Neither flow drops any packets for the duration of the experiment. One would therefore expect MPTCP to keep both flows filled with packets and to achieve stable goodput on both. However, MPTCP more often than not prefers to send packets on one flow over the other and is unable to optimize jointly over the available paths. As a result, MPTCP utilizes only a fraction of the available aggregate bandwidth in the experiment. We will argue that the reasons for this are two-fold: (a) the scheduler's decisions use only the SRTT values of the flows, which come from an end-to-end feedback mechanism and are hence delayed; and (b) this delayed SRTT feedback leads the scheduler to select one flow over the other for undesirably long intervals, instead of suitably allocating packets to both flows. These reasons are in fact a consequence of MPTCP being unaware of cross-layer information that is readily available locally.

Interestingly, the scheduling decisions that lead to high device queue occupancy and an increase in SRTT were made using SRTT values that corresponded to an earlier interval, when the device queue was lightly loaded. So while a device queue (local to the MPTCP sender and used by an MPTCP flow) is full of packets, MPTCP remains oblivious to this fact. Instead, it waits to be informed, via the delayed end-to-end SRTT feedback mechanism, that the flow it has been assigning packets to is in fact loaded. In the process, it loses many opportunities to schedule packets on the other, better flow, the one experiencing lower queueing delay. Further, since the device queue is drop-tail, i.e., it drops incoming packets when full, MPTCP has to retransmit locally dropped packets, which it only learns about after inferring a missing ACK. Finally, a rapid dip in SRTT is observed when a flow that was earlier heavily queued, but has stayed unused for a while, is assigned new packets: these packets experience smaller waiting times and thus have a small RTT.

Next, we describe the QueueAware scheduler for MPTCP, which is motivated by the observation that MPTCP does not have to rely on a delayed increase in SRTT to identify local queueing delay. We show that using the occupancy of the device queues together with SRTT instead enables MPTCP to use all available flows more efficiently.

3 QueueAware Scheduler

Figure 4: Queueing abstraction of an end-to-end MPTCP connection with two flows

The MPTCP scheduler chooses one of the many available TCP subflows for each application packet arrival, such that end-to-end throughput is maximized. We consider a simplified queueing-theoretic abstraction to capture the essentials of this problem. Specifically, we model each subflow by a service facility. Figure 4 illustrates the abstraction for an MPTCP end-to-end connection that uses two TCP flows. The abstraction allows us to apply results from the analysis of multi-queue systems [27].

In our queueing abstraction, packets generated by an application arrive into a queue that models the TCP send buffer (Figure 1). Packets in this queue are assigned to one of the available service facilities in a first-come-first-served (FCFS) manner. Each facility consists of a finite queue and a server. Packets inside a facility are serviced in FCFS order.

The queue in a service facility models the network interface queue used by the corresponding TCP subflow. The server captures the network interface card, the destination host (all layers of the TCP/IP stack), and the intermediate nodes in the core and access networks used by the flow.
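A minimal rendering of this abstraction in code may make it concrete; the sketch below is ours, with illustrative names, and is not part of the paper's implementation.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ServiceFacility:
    """One TCP subflow: a finite drop-tail FCFS device queue plus a 'server'
    abstracting the NIC, the network path, and the receiver's stack."""
    service_rate: float                  # packets/s, i.e., 1 / E[service time]
    capacity: int                        # finite device-queue size
    queue: deque = field(default_factory=deque)

    def enqueue(self, packet) -> bool:
        """Drop-tail behavior: reject the packet if the queue is full."""
        if len(self.queue) >= self.capacity:
            return False
        self.queue.append(packet)
        return True

    def occupancy(self) -> int:
        """n_i(t): number of packets currently waiting for service."""
        return len(self.queue)
```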

Origins of the QueueAware scheduler: Many analytical works on queueing systems have studied the scheduling of customers/packets to parallel servers [27, 29, 30, 31]. Packet arrivals are modeled as a random process (for example, a Poisson process) with an arrival rate $\lambda$. The time a server takes to service a packet (the service time) is modeled as a random variable. Each server has a known service rate, which is the inverse of the expected service time; the two servers in Figure 4 have service rates $\mu_1$ and $\mu_2$. For many general arrival processes and service time distributions, when all servers are stochastically identical, the policy of choosing the service facility with the minimum number of packets is known to be optimal [27, 29, 31]; that is, it maximizes the number of packets serviced in a given amount of time. For the case of non-identical servers, we were unable to find an optimal policy. However, when arrivals are Poisson and service times are non-identical but exponentially distributed, a scheduling policy that assigns a packet to the service facility minimizing its conditional expected waiting time, conditioned on the number of packets waiting for service in the facility, is known to perform well [27]. Our QueueAware scheduler uses this policy in an MPTCP setting.

Consider $N$ service facilities indexed $i = 1, \ldots, N$. Let facility $i$ have a service rate of $\mu_i$, and let $n_i(t)$ be the number of packets waiting for service in facility $i$ at time $t$. The policy assigns an arriving packet to the service facility

$$i^* = \arg\min_{i} \frac{n_i(t)}{\mu_i}. \quad (1)$$

Note that $1/\mu_i$ is the expected service time of a packet in facility $i$. As a result, the conditional expected waiting time of a packet that enters facility $i$ is $n_i(t)/\mu_i$, the sum of the expected service times of the packets currently waiting for service in that facility.
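As a small worked instance of policy (1), with illustrative numbers of our own choosing:

```python
# Facility 1 is faster but more heavily loaded than facility 2.
mu = {1: 2.0, 2: 1.0}     # service rates mu_i, in packets per ms
n = {1: 6, 2: 2}          # queue occupancies n_i(t)

expected_wait = {i: n[i] / mu[i] for i in mu}   # {1: 3.0, 2: 2.0} (ms)
i_star = min(expected_wait, key=expected_wait.get)
print(i_star)  # 2 -- the slower server wins because its queue is shorter
```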

Adapting policy (1) to multiple end-to-end TCP subflows: The number of packets $n_i(t)$ in service facility $i$ (corresponding to TCP subflow $i$) is the number of packets in the corresponding device queue and can be obtained locally. However, we must estimate the service rate $\mu_i$.

Consider the $k$-th packet arrival. Let $t_k$ be the time the packet is assigned to a service facility, and let $a_k$ be the time at which a TCP ACK acknowledges receipt of the packet. The RTT of the packet is then $R_k = a_k - t_k$. Note that the RTT includes both the time the packet waits in the queue of its assigned service facility before it starts service and the time it spends in service. Let the wait time be $w_k$. (For simplicity of exposition, we ignore the time a TCP ACK may have to wait in a queue before being delivered to the TCP layer.) This wait time can be measured locally at the MPTCP sender. The time the packet spends in service begins when it enters the NIC for transmission and ends when a TCP ACK for it is received. Given $R_k$ and $w_k$, the service time is $s_k = R_k - w_k$. The estimate of the service rate is updated on receipt of a TCP ACK. Let $\hat{s}_i$ be the current estimate of the average service time of facility $i$. On receipt of a TCP ACK for the $k$-th packet assigned to facility $i$, we update

$$\hat{s}_i \leftarrow (1 - \alpha)\, \hat{s}_i + \alpha\, s_k, \quad (2)$$

where $\alpha$ applies appropriate weights to the previous estimate of the average and the most recent service time; we use a fixed $\alpha$ in this work. The corresponding estimate of the service rate is $\hat{\mu}_i = 1/\hat{s}_i$. At time $t$, QueueAware schedules a packet to the TCP subflow

$$i^* = \arg\min_{i} \frac{n_i(t)}{\hat{\mu}_i} = \arg\min_{i}\; n_i(t)\, \hat{s}_i. \quad (3)$$
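Putting (2) and (3) together, the scheduler admits a compact sketch. The code below is our illustration, not the paper's ns-3 or kernel implementation; the EWMA weight is an assumed placeholder, since the paper's value is elided here.

```python
from dataclasses import dataclass

@dataclass
class SubflowState:
    s_hat: float      # EWMA estimate of the average service time s_hat_i (s)
    queue_len: int    # n_i(t): device-queue occupancy, read locally

ALPHA = 0.5           # EWMA weight; placeholder value, assumed for illustration

def on_ack(flow: SubflowState, rtt: float, wait: float) -> None:
    """Eq. (2): update the service-time estimate using s_k = R_k - w_k."""
    s_k = rtt - wait
    flow.s_hat = (1 - ALPHA) * flow.s_hat + ALPHA * s_k

def queue_aware_pick(flows):
    """Eq. (3): choose the subflow minimizing n_i(t) * s_hat_i."""
    return min(range(len(flows)), key=lambda i: flows[i].queue_len * flows[i].s_hat)
```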

4 Evaluation Methodology

We simulated network topologies of the kind shown in Figure 3 using the network simulator ns-3. An MPTCP client uses two TCP subflows to the MPTCP server. The simulator ns-3 has an implementation of MPTCP [8] that we modified to include QueueAware. We compare the performance of the default minSRTT scheduler of MPTCP with that of QueueAware.

We simulated scenarios where both the MPTCP flows have reliable paths, and where one of the paths is unreliable and drops TCP packets. We will show that QueueAware, unlike minSRTT, is able to jointly use available flow capacities to achieve larger per flow (and aggregate) goodputs. Also, it does at least as well as the default minSRTT scheduler in scenarios where paths used by TCP subflows are unreliable.

Reliable Paths: We consider two non-interfering paths that see no packet drops, but whose access networks (the last mile in Figure 3) may have different bottleneck rates. Specifically, we simulated the following network scenarios:

  • Identical WiFi Access Points: The MPTCP client uses its WiFi interfaces to connect to identical WiFi access points providing reliable links. This use case may occur in an enterprise WiFi network, where a client may have more than one good access point in its vicinity. We set the WiFi link rate to each access point to the same value.

  • Non-identical WiFi Access Points: The use case remains the same as above; however, one of the two access points has a faster link, for example, because of its greater proximity to the MPTCP client.

  • Heterogeneous networks of WiFi and 4G: The MPTCP client uses its WiFi interface and its 4G interface to connect to access points of the two technologies, with the 4G link rate set higher than the WiFi link rate. Though the 4G link rate is larger, packets experience larger RTTs over 4G [5].

Unreliable Paths: We simulated an MPTCP client that uses its two WiFi interfaces to connect to access points; however, the TCP subflow using one of the access points suffers TCP packet errors at a fixed packet error rate. In a real setting, this could happen because the client is close to losing coverage from the access point or because the access point is heavily loaded.

Lastly, for a selection of paths and network technologies, we simulated short-lived MPTCP connections by performing a small file upload.

For all simulations, the backbone was modeled as an Ethernet link and the core network as a separate high-rate link; our choice of link rates within the access networks makes them the bottleneck. The application at the MPTCP client generated a constant-rate load. All results were averaged over 10 simulation runs. The access networks, backbone, and core see no traffic other than that created by our MPTCP client. We defer performance evaluation of QueueAware under more realistic loads and larger numbers of MPTCP clients to future work.

5 Results

Figure 5: Goodputs of MPTCP flows when using QueueAware and minSRTT, for the scenarios described in Section 4: (a) identical WiFi access points; (b) non-identical WiFi access points; (c) WiFi and 4G; (d) one unreliable path.

5.1 Reliable Paths

We consider first the scenario in which both TCP subflows use reliable paths. Figure 5(a) compares subflow goodputs when there are two identical WiFi access points. Observe that QueueAware and minSRTT split traffic across the flows in qualitatively similar ways; however, QueueAware achieves a larger goodput on each flow and, as a result, an increase in aggregate goodput over minSRTT. This is because minSRTT, as a consequence of relying only on the delayed feedback provided by SRTT, keeps scheduling packets to a subflow with high device queue occupancy for too long. Next, we shed more light on the differences in the behavior of the schedulers using Figures 6(a)-8(b).

Figures 6(a) and 6(b) show the goodputs of the two flows obtained by QueueAware and minSRTT, respectively, as a function of time. While QueueAware maintains a stable and almost equal goodput over both flows, minSRTT at any given time chooses one flow over the other. This behavior is explained by Figures 7(a) and 7(b), which show the SRTT behavior over the same time span. Subflows experience high SRTT for longer stretches of time under the minSRTT scheduler than under the QueueAware scheduler. This corresponds to longer stretches of full queue occupancy under minSRTT (see Figure 8(b)) than under QueueAware (see Figure 8(a)).

Figure 5(b) shows the goodput performance when the subflows use non-identical WiFi access points. QueueAware and minSRTT perform similarly on the subflow that uses the faster link. Importantly, however, QueueAware also makes good use of the other, slower link: under QueueAware, the subflow using the slower link obtains a substantial share of the goodput obtained by the subflow using the faster link, whereas minSRTT gets hardly any goodput on the slower link. The use of locally available cross-layer device queue occupancy information together with SRTT enables QueueAware to make good use of both available WiFi interfaces.

Finally, Figure 5(c) shows the goodput performance when one subflow uses a WiFi interface and the other uses a 4G interface with twice the WiFi link rate. Recall that the 4G network suffers from larger RTTs. Even in this case, QueueAware utilizes the bandwidth offered by both network interfaces considerably better than minSRTT, achieving a higher aggregate goodput and a substantially higher goodput via 4G. The underutilization of 4G by minSRTT has been observed before in real-world deployment tests, where the default scheduler was found to use the WiFi subflow for the bulk of the time [6].

Figure 6: Comparison of goodputs of subflows obtained by (a) QueueAware and (b) minSRTT.
Figure 7: Comparison of SRTTs of subflows obtained by (a) QueueAware and (b) minSRTT.
Figure 8: Comparison of queue lengths of subflows obtained by (a) QueueAware and (b) minSRTT.

5.2 Unreliable Paths

On detecting packet loss, a subflow enters the congestion avoidance phase, which limits the number of packets that can be sent on it. Furthermore, MPTCP employs Penalization and Retransmission (PR): on a packet error, MPTCP reduces the congestion window of the subflow with the higher RTT and reinjects the lost packet on the other available subflow [32]. Though this technique reduces the possibility of Head-of-Line (HoL) blocking, it also limits the sending rate and significantly impacts overall goodput.
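A schematic rendering of the PR behavior described above, as we read it from [32]; this is an illustrative Python sketch with names of our own, not the kernel implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Flow:
    srtt: float                       # smoothed RTT in ms
    cwnd: int                         # congestion window in packets
    sent: list = field(default_factory=list)

    def send(self, pkt):
        self.sent.append(pkt)

def on_packet_loss(lost_pkt, flows):
    # Penalize: shrink the congestion window of the higher-RTT subflow.
    slow = max(flows, key=lambda f: f.srtt)
    slow.cwnd = max(1, slow.cwnd // 2)
    # Retransmit: reinject the lost packet on the other subflow, reducing
    # the chance of head-of-line blocking at the receiver.
    fast = min(flows, key=lambda f: f.srtt)
    fast.send(lost_pkt)
```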

Figure 5(d) shows the goodputs achieved by the two subflows when one of them experiences packet losses. For the reasons stated above, both schedulers achieve rather small goodputs on the subflow experiencing errors. However, QueueAware is better able to exploit the subflow that has a reliable path: on this path, it achieves a clear improvement in goodput over minSRTT.

5.3 Small File Upload

Scenario | Path reliability | minSRTT (s) | QueueAware (s)
Two WiFi access points, equal rates | Reliable paths | 1.456 | 1.327
Two WiFi access points, equal rates | Errors on one path | 2.527 | 2.204
WiFi and 4G | No Tx errors | 2.439 | 1.812
Table 1: Small file upload completion time

Table 1 shows the upload completion time of a small file for QueueAware and minSRTT across different interfaces and path reliabilities. QueueAware reduces the upload time relative to minSRTT in every scenario, from roughly 9% with two identical reliable WiFi paths (1.327 s vs. 1.456 s) up to roughly 26% in the WiFi and 4G scenario (1.812 s vs. 2.439 s).

6 Related Work

Several researchers have proposed improvements to the default minSRTT scheduler of MPTCP. Paasch et al. [20] designed a modular scheduler framework for MPTCP and compared the performance of the default minSRTT scheduler with a round-robin scheduler. Baidya et al. [2] propose an RTT-based path-quality metric to adapt out-of-order transmissions while limiting the use of a slower path. Kuhn et al. [16] aim to reduce overall application delay by estimating the maximum allowed receiver-buffer blocking time to transmit out-of-order packets on multiple paths. Hwang et al. [12], on the other hand, propose freezing the slower path when the difference between the RTTs of the faster and slower paths exceeds a calculated threshold. BLEST [9] and OTIAS [33] balance heterogeneous flows and reduce Head-of-Line blocking by considering several parameters, such as the CWND and in-flight packets, along with SRTT. CMT-RMDS [4] combines receiver-centric path characteristics with sender-driven RTT values to better estimate current path conditions.

Researchers have also proposed utilizing other network parameters to obtain better path estimates. Corbillon et al. [7] leverage application-layer information in transport-layer flow scheduling decisions to provide delay-resilient video streaming over MPTCP. Lim et al. [28] label the WiFi subflow as active/inactive for data transmission based on a minimum desired signal strength. F2P-DPS [17] combines several TCP parameters, such as CWND, SSThresh, and RTTs, to estimate subflow weights for data transmission. Ni et al. [18] utilize reverse-path SACK packets to inform the sender of any out-of-order/lost packets at the receiver buffer and calculate an offset to ensure successful data chunk delivery.

Although the solutions mentioned above tackle several critical issues affecting Multipath TCP in the real world, these techniques depend heavily on accurate estimation of current path characteristics. Solutions that use the SRTT value as the current path performance metric, or that depend on it, suffer from the same issues as minSRTT, as shown in this paper. To efficiently handle varying application data traffic over heterogeneous paths in MPTCP, an ideal scheduler must be able to schedule packets over the flows proactively and must adapt to network conditions swiftly.

7 Discussion and Future Work

Host machine | Network | Receiver
Data rate | Channel utilization | Bandwidth utilization
Retry percentage | Path congestion | Receiver queue delays
Network interface type | Number of nodes on path | Congestion window size
Table 2: Network parameters impacting RTT

The aim of our evaluation was to show that RTT alone is not the best metric for scheduling packets over multiple network interfaces in MPTCP. The sender-side device queue is one parameter that needs to be monitored and factored into the overall scheduling decision. Only by including cross-layer feedback in the MPTCP control loop can we fully exploit the potential offered by multiple connections. As shown in this paper, treating the links as independent, parallel TCP flows restricts MPTCP's performance.
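As a practical aside, sender-side device queue occupancy is already exposed by the operating system. The sketch below, which is our own illustration and assumes the usual Linux `tc -s qdisc` statistics format and an example interface name, shows one way user-space code could read it.

```python
import re
import subprocess

def device_queue_backlog(iface: str = "wlan0") -> int:
    """Return the qdisc backlog in packets for a network interface,
    parsed from `tc -s qdisc show dev <iface>` on Linux. Assumes the
    common 'backlog <bytes>b <packets>p' statistics line is present."""
    out = subprocess.run(
        ["tc", "-s", "qdisc", "show", "dev", iface],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"backlog\s+\S+b\s+(\d+)p", out)
    return int(match.group(1)) if match else 0
```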

Several other conditions can affect MPTCP's efficiency. Table 2 lists a few such variables that impact RTT and the network path, and that could be exploited by a more holistic MPTCP. These include host-specific parameters, such as device queue length; network-specific aspects, such as congestion; and receiver-specific parameters, such as window size. An efficient MPTCP should consider all or many of these parameters in conjunction when deciding how to schedule packets and how to allocate the available flows for data transport.

Several other factors can also significantly impact network behavior; congestion at an access point, over-utilization of channel capacity, and queueing delay at the receiver are some examples. However, unlike locally accessible parameters (such as device queue occupancy), explicitly monitoring and predicting these parameters is an interesting research question in its own right. One possibility is to use explicit notifications, such as ECN, to convey the occurrence of such conditions. Incorporating these mechanisms and evaluating their performance impact on the current MPTCP implementation would be an interesting avenue for future research in the wider community.

In this paper, we argue that the current way of treating the individual subflows, practically separately and relying only on their TCP-level information, is not sufficient to fully exploit the potential offered by multiple connections; a more holistic approach to MPTCP and how it manages its connections is needed. Our results illustrate only one small facet of the bigger problem, and broader research efforts are needed to better understand how such multiple connections should be used in modern networks. Open questions include fairness to other network flows, general performance, flow control, and security, but all of them appear to require a different approach than treating the connections as parallel but largely independent.

8 Conclusion

In this paper, we demonstrated the shortcomings of the MPTCP scheduling algorithm and the inadequacy of its reliance on SRTT. We proposed the QueueAware scheduler, which exploits device driver queue occupancy together with SRTT to obtain significantly better performance than the default MPTCP scheduler. We believe that QueueAware highlights the need for a more holistic approach to multipath scheduling than the one MPTCP currently takes.

References

  • [1] Cisco visual networking index: Global mobile data traffic forecast update 2016 to 2021. In CISCO white paper.
  • [2] S. H. Baidya and R. Prakash. Improving the performance of multipath tcp over heterogeneous paths using slow path adaptation. In 2014 IEEE International Conference on Communications (ICC), pages 3222–3227, June 2014.
  • [3] O. Bonaventure, M. Handley, and C. Raiciu. An overview of multipath tcp. ;login:, 37(5):17, 2012.
  • [4] Y. Cao, Q. Liu, G. Luo, and M. Huang. Receiver-driven multipath data scheduling strategy for in-order arriving in sctp-based heterogeneous wireless networks. In 2015 IEEE 26th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), pages 1835–1839, Aug 2015.
  • [5] Y.-C. Chen, Y.-s. Lim, R. J. Gibbens, E. M. Nahum, R. Khalili, and D. Towsley. A measurement-based study of multipath tcp performance over wireless networks. In Proceedings of the 2013 Conference on Internet Measurement Conference, IMC ’13, pages 455–468, New York, NY, USA, 2013. ACM.
  • [6] Q. D. Coninck, M. Baerts, B. Hesmans, and O. Bonaventure. Observing real smartphone applications over multipath tcp. IEEE Communications Magazine, 54(3):88–93, March 2016.
  • [7] X. Corbillon, R. Aparicio-Pardo, N. Kuhn, G. Texier, and G. Simon. Cross-layer scheduler for video streaming over mptcp. In Proceedings of the 7th International Conference on Multimedia Systems, MMSys ’16, pages 7:1–7:12, New York, NY, USA, 2016. ACM.
  • [8] M. Coudron and S. Secci. An implementation of multipath tcp in ns3. Computer Networks, 116:1 – 11, 2017.
  • [9] S. Ferlin, Ö. Alay, O. Mehani, and R. Boreli. Blest: Blocking estimation-based mptcp scheduler for heterogeneous networks. In 2016 IFIP Networking Conference (IFIP Networking) and Workshops, pages 431–439, May 2016.
  • [10] N. Fernando, S. W. Loke, and W. Rahayu. Mobile cloud computing: A survey. Future Generation Computer Systems, 29(1):84 – 106, 2013. Including Special section: AIRCC-NetCoM 2009 and Special section: Clouds and Service-Oriented Architectures.
  • [11] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking (ToN), 1(4):397–413, 1993.
  • [12] J. Hwang and J. Yoo. Packet scheduling for multipath tcp. In 2015 Seventh International Conference on Ubiquitous and Future Networks, pages 177–179, July 2015.
  • [13] Apple Inc. Use Multipath TCP to create backup connections for iOS. https://support.apple.com/en-us/HT201373, 2017.
  • [14] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard. Networking named content. In Proceedings of the 5th international conference on Emerging networking experiments and technologies, pages 1–12. ACM, 2009.
  • [15] S. Kandula, S. Sengupta, A. Greenberg, P. Patel, and R. Chaiken. The nature of data center traffic: Measurements & analysis. In Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, IMC ’09, pages 202–208, New York, NY, USA, 2009. ACM.
  • [16] N. Kuhn, E. Lochin, A. Mifdaoui, G. Sarwar, O. Mehani, and R. Boreli. Daps: Intelligent delay-aware packet scheduling for multipath transport. In 2014 IEEE International Conference on Communications (ICC), pages 1222–1227, June 2014.
  • [17] D. Ni, K. Xue, P. Hong, and S. Shen. Fine-grained forward prediction based dynamic packet scheduling mechanism for multipath tcp in lossy networks. In 2014 23rd International Conference on Computer Communication and Networks (ICCCN), pages 1–7, Aug 2014.
  • [18] D. Ni, K. Xue, P. Hong, H. Zhang, and H. Lu. Ocps: Offset compensation based packet scheduling mechanism for multipath tcp. In 2015 IEEE International Conference on Communications (ICC), pages 6187–6192, June 2015.
  • [19] C. Paasch, S. Ferlin, O. Alay, and O. Bonaventure. Experimental evaluation of multipath tcp schedulers. In Proceedings of the 2014 ACM SIGCOMM Workshop on Capacity Sharing Workshop, CSWS ’14, pages 27–32, New York, NY, USA, 2014. ACM.
  • [20] C. Paasch, S. Ferlin, O. Alay, and O. Bonaventure. Experimental evaluation of multipath tcp schedulers. In Proceedings of the 2014 ACM SIGCOMM Workshop on Capacity Sharing Workshop, CSWS ’14, pages 27–32, New York, NY, USA, 2014. ACM.
  • [21] C. Paasch and S. Barré. Multipath TCP in the Linux Kernel. http://www.multipath-tcp.org, 2017.
  • [22] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling tcp throughput: A simple model and its empirical validation. SIGCOMM Comput. Commun. Rev., 28(4):303–314, Oct. 1998.
  • [23] J. Padhye, J. Kurose, D. Towsley, and R. Koodli. A model based tcp-friendly rate control protocol. In Proceedings of NOSSDAV, 1999.
  • [24] C. Raiciu, S. Barre, C. Pluntke, A. Greenhalgh, D. Wischik, and M. Handley. Improving datacenter performance and robustness with multipath tcp. In Proceedings of the ACM SIGCOMM 2011 Conference, SIGCOMM ’11, pages 266–277, New York, NY, USA, 2011. ACM.
  • [25] C. Raiciu, D. Niculescu, M. Bagnulo, and M. J. Handley. Opportunistic mobility with multipath tcp. In Proceedings of the Sixth International Workshop on MobiArch, MobiArch ’11, pages 7–12, New York, NY, USA, 2011. ACM.
  • [26] C. Raiciu, C. Paasch, S. Barre, A. Ford, M. Honda, F. Duchene, O. Bonaventure, and M. Handley. How hard can it be? designing and implementing a deployable multipath tcp. In Proceedings of the 9th USENIX conference on Networked Systems Design and Implementation, pages 29–29. USENIX Association, 2012.
  • [27] Z. Rosberg and P. Kermani. Customer routing to different servers with complete information. Advances in Applied Probability, 21(4):861–882, 1989.
  • [28] Y.-S. Lim, Y.-C. Chen, E. M. Nahum, D. Towsley, and K.-W. Lee. Cross-layer path management in multi-path transport protocol for mobile devices. In IEEE INFOCOM 2014 - IEEE Conference on Computer Communications, pages 1815–1823, April 2014.
  • [29] R. R. Weber. On the optimal assignment of customers to parallel servers. Journal of Applied Probability, 15(2), 1978.
  • [30] W. Whitt. Deciding which queue to join: Some counterexamples. Oper. Res., 34(1):55–62, Jan. 1986.
  • [31] W. Winston. Optimality of the shortest line discipline. Journal of Applied Probability, 14(1):181–189, 1977.
  • [32] MPTCP Working Group. Use Cases and Operational Experience with Multipath TCP. https://tools.ietf.org/html/draft-ietf-mptcp-experience-07, 2016.
  • [33] F. Yang, Q. Wang, and P. D. Amer. Out-of-order transmission for in-order arrival scheduling for multipath tcp. In 2014 28th International Conference on Advanced Information Networking and Applications Workshops, pages 749–752, May 2014.