Multi timescale bandwidth profile and its application for burst-aware fairness

03/19/2019 · Szilveszter Nádas et al. · Ericsson; Budapest University of Technology and Economics

We propose a resource sharing scheme that takes the traffic history over several predefined timescales into account and provides fair resource sharing based on that history. Our concept builds on a simplified version of core-stateless resource sharing that uses only a few Drop Precedences (DPs). For packet marking we introduce the Multi Timescale Bandwidth Profile. Additionally, we provide basic dimensioning concepts for the proposed scheme and present a simulation-based performance analysis.




I Introduction

Quality of Service (QoS) is a fundamental area of networking research that has been studied for a long time; despite this, several open issues remain, as collected in [1]. Per-node (e.g., per-subscriber or per-traffic-aggregate) fairness, one of these issues, is usually provided by per-node WFQ, but that does not scale as the number of nodes increases. Core-stateless schedulers [2, 3, 4] solve the scaling problem, but they still provide fairness only on a single timescale, typically that of the round-trip time (RTT). These solutions mark packets per node at the edge of the network and perform simple, session-unaware scheduling in the core of the network based on the marking. The most common way to provide fairness on longer timescales is to introduce caps, e.g., a monthly cap on traffic volume; congestion, however, lasts for much shorter periods. A similar attempt is to limit the congestion volume instead of the traffic volume, as described in [5, 6]. The need for fairness on different timescales is illustrated by the example of short bursty flows and long flows, cast as mice and elephants in [7]. A demand in this area is that a continuously transmitting node shall achieve the same long-term average throughput as nodes that send small or medium bursts now and then.

At the same time, traffic is becoming more and more bursty, including traffic aggregates, due to the highly increased throughput of 5G base stations [8]. When deploying mobile networks, operators often lease transport lines as mobile backhaul from the core network to the base stations. The transport services and related bandwidth profiles defined by the Metro Ethernet Forum (MEF) [9] are most commonly used for this purpose today. With the current service definitions, however, it is not possible to achieve per-node (here, per-transport-service) fairness and good utilization simultaneously; moreover, they take only a single short timescale, i.e., the instantaneous behavior, into account.

In this paper we propose a resource sharing scheme that takes into account the traffic history of nodes over several predefined timescales and provides fair resource sharing based on that. Our concept builds on a simplified version of core-stateless resource sharing, where we only use a few Drop Precedences.

II Packet level behavior

In this section, we extend the Two-Rate, Three-Color Marker (trTCM) to provide fairness on several timescales and show how we apply core-stateless scheduling to the marked packets.

II-A Packet Marking

Fig. 1: trTCM bandwidth profile for a single priority

Fig. 2: A Multi Timescale Bandwidth Profile

MEF currently uses flavors of trTCM for bandwidth profiling [9]. The simplest case, with a single priority and no token sharing or color awareness, is depicted in Fig. 1. It has two rate parameters, the guaranteed Committed Information Rate (CIR) and the non-guaranteed Excess Information Rate (EIR). Both have an associated token bucket, of size Committed Burst Size (CBS) and Excess Burst Size (EBS), respectively, typically set to CIR·RTT and EIR·RTT, where RTT denotes the round-trip time. A packet is marked green (conforming to CIR), yellow (conforming to EIR), or dropped (red) based on the amount of tokens in the associated token buckets: a bucket must contain at least as many tokens as the packet size, denoted Enough Tokens (ET?) in the figure. When a packet is marked a given color, the packet size worth of tokens is removed from the relevant bucket.
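The single-priority trTCM logic above can be sketched as follows (a minimal illustrative sketch, not MEF's normative algorithm; the class and attribute names are ours):

```python
class TrTCM:
    """Single-priority two-rate three-color marker, no color awareness."""

    def __init__(self, cir, eir, cbs, ebs):
        self.cir, self.eir = cir, eir            # token rates (bytes/s)
        self.cbs, self.ebs = cbs, ebs            # bucket sizes (bytes)
        self.c_tokens, self.e_tokens = cbs, ebs  # bucket levels, start full
        self.last = 0.0                          # time of last update (s)

    def mark(self, now, size):
        """Return 'green', 'yellow' or 'red' for a packet of `size` bytes."""
        dt = now - self.last
        self.last = now
        # refill both buckets, capped at their sizes
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * dt)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * dt)
        if self.c_tokens >= size:                # enough committed tokens?
            self.c_tokens -= size
            return "green"
        if self.e_tokens >= size:                # enough excess tokens?
            self.e_tokens -= size
            return "yellow"
        return "red"                             # would be dropped
```

For example, with both buckets sized to one packet, three back-to-back packets are marked green, yellow, and red in turn.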

We extend the trTCM by increasing the number of colors (i.e., drop precedences (DPs)) and by introducing multiple token buckets per drop precedence, representing different timescales (TSs). An example of this Multi Timescale Bandwidth Profile (MTS-BWP) with 4 DPs and 4 TSs is shown in Fig. 2.

The darkest purple color (DP 1) is similar to green in the sense that we intend to guarantee the transmission of packets marked dark purple. The lighter colors are similar to yellow, though we intend to provide a more refined service for them than simple non-guaranteed delivery. The token rate of the bucket associated with drop precedence i and timescale j is denoted R_{i,j}, and its bucket size BS_{i,j} is set as described in Section IV-B. An example timescale vector is TS = (TS_1, TS_2, TS_3, TS_4), where we assume that TS_1 is in the order of magnitude of the typical RTT.

That is, R and BS are matrices of size 4×4 in this example. A packet can be marked with a given DP value i if all buckets BS_{i,1}, …, BS_{i,4} (i.e., the whole row i) contain enough tokens; the packet receives the lowest such DP. Upon successful marking, all respective buckets are decreased by the packet size.

If we want to enable more bursty traffic on lower timescales, we have to offer a smaller bandwidth on higher timescales. Thus the rows of R are monotonically decreasing, i.e.,

R_{i,1} ≥ R_{i,2} ≥ … ≥ R_{i,4} for every DP i. (1)

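Marking against the rate and bucket size matrices can be sketched similarly (illustrative; the rule that the packet gets the lowest DP whose entire bucket row has enough tokens follows the description above, and the function and variable names are ours):

```python
def mts_mark(levels, sizes, rates, pkt_size, dt):
    """Mark one packet against an MTS-BWP.
    levels: current token level per (DP, TS) bucket (mutated in place);
    sizes:  bucket size matrix BS;  rates: token rate matrix R.
    Returns the 0-based DP index the packet is marked with, or None (drop)."""
    n_dp, n_ts = len(levels), len(levels[0])
    for i in range(n_dp):                   # refill all buckets first
        for j in range(n_ts):
            levels[i][j] = min(sizes[i][j], levels[i][j] + rates[i][j] * dt)
    for i in range(n_dp):                   # try the most protected DP first
        if all(levels[i][j] >= pkt_size for j in range(n_ts)):
            for j in range(n_ts):
                levels[i][j] -= pkt_size    # consume the whole row i
            return i
    return None                             # no DP row had enough tokens
```

A packet that does not fit the DP 1 row falls through to the next row, mirroring how trTCM falls from green to yellow.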
II-B Active Queue Management (AQM) algorithm

We assume a FIFO buffer with a drop-largest-DP-from-head AQM. More precisely, when the buffer is full, we determine the largest DP for which a packet is present in the buffer and drop the packet closest to the head that has this DP.

This algorithm can be implemented efficiently by applying the AQM in [3] for 4 DPs.

This behavior can also be approximated on existing hardware by representing drop precedences with DSCPs; configuring these DSCPs to the same queue; and configuring DP specific TailDrop thresholds to drop the different DPs at increasing queue lengths (the largest DP at the smallest queue length) [10].
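The drop-largest-DP-from-head policy can be sketched as follows (a minimal illustration with a linear scan; names are ours, and a production implementation would use per-DP bookkeeping as in [3]). Note that, per the description above, the arriving packet itself is enqueued unconditionally once room is made:

```python
from collections import deque

def enqueue(queue, capacity, packet):
    """FIFO with drop-largest-DP-from-head; `queue` holds (dp, payload) pairs.
    When the buffer is full, the head-most packet of the largest DP present
    is dropped to make room for the arrival."""
    if len(queue) >= capacity:
        worst_dp = max(dp for dp, _ in queue)    # largest DP in the buffer
        for idx, (dp, _) in enumerate(queue):    # first such packet from head
            if dp == worst_dp:
                del queue[idx]
                break
    queue.append(packet)
```

Packets of the most protected DP (1) are thus only dropped when the whole buffer consists of DP 1 packets.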

III Fluid model of MTS-BWP

We analyze the performance of MTS-BWP in a fast and simple fluid simulator, because our focus is not the packet-level behavior or the interaction of congestion control and scheduling; other work focuses on that, e.g., [11]. Rather, we are interested in how the newly introduced bandwidth profile can provide fairness on several timescales when the system is ideal, i.e., when the congestion control in use can utilize its share instantaneously and with no packet loss.

III-A High level model

We model a common bottleneck shared among several nodes with identical BWP configurations. A node may have one or several active flows. When no BWP is applied, nodes share the bottleneck proportionally to the number of flows within the nodes. The applied BWP constrains this allocation: all traffic marked with a DP value below the congestion DP is transmitted, and all traffic marked with a DP value above it is discarded. Bandwidth allocation within the congestion DP is still proportional to the number of flows. DPs signify priority in the sense that whenever congestion occurs, transmission on higher DPs (i.e., traffic marked with a higher DP value) is reduced first.

The token level of each bucket of each node is maintained, and fluid flows out from every bucket of a given DP according to the node's transmission rate on that DP.

Our ideal system model (instantaneous adaptation, no packet loss) assumes no bottleneck buffer and zero RTT. Consequently, we set TS_1 = 0 in the fluid simulator, which results in a maximum fluid rate of R_{i,1} on a given DP i (see Eq. (2)).

III-B Simulator model

III-B1 System parameters

The system is described by the following parameters:

  • C: the total service capacity;

  • K: the number of nodes;

  • the MTS-BWP configuration R, BS (assumed to be the same for all nodes);

  • flow limit: the maximum number of concurrent active flows at each node; further flows are discarded upon arrival.

III-B2 Traffic Model

We use a compound Poisson point process with a discrete file size distribution (given by the possible file sizes and their associated probabilities) as input, and based on that we simulate the arrival time and size of each arriving flow for each node.

Each node has a nominal speed; in all investigated situations, we stick to the natural choice of C/K, i.e., an equal split of the capacity. The nominal load of each node can then be calculated as its mean offered rate (flow arrival rate times mean file size) divided by its nominal speed.

The system load is the average of the nominal loads for all nodes. The system is underloaded if its load is less than 1, and it is overloaded otherwise. A node has low load if its nominal load is less than the system load, and it has high load otherwise.
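As a small numeric sketch of these load definitions (units and example values are our choice, not the paper's):

```python
def nominal_load(arrival_rate, mean_size_gbit, nominal_speed_gbps):
    """Offered traffic (Gbit/s) divided by the node's nominal speed (Gbit/s)."""
    return arrival_rate * mean_size_gbit / nominal_speed_gbps

# e.g. two nodes with nominal speed C/K = 5 Gbps each, sending 1 GB (8 Gbit) files:
loads = [nominal_load(0.5, 8.0, 5.0),   # 0.5 flows/s -> nominal load 0.8
         nominal_load(1.0, 8.0, 5.0)]   # 1.0 flows/s -> nominal load 1.6
system_load = sum(loads) / len(loads)   # 1.2: this system is overloaded
```

Here the first node is a low load node (its nominal load 0.8 is below the system load 1.2) and the second is a high load node.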

III-B3 Discrete event simulator

A discrete event simulator runs a simulation using a given traffic input (arrival time and file size series) for a given set of system parameters. The simulator identifies the following events:

  1. flow arrival,

  2. flow finishing,

  3. a token bucket emptying,

and keeps track of the following values:

  1. simulation time;

  2. list of active flows;

  3. remaining size of each active flow;

  4. token bucket levels.

These variables are sufficient to determine the time and type of the next event and the current bandwidth allocation which applies until the next event. The simulator then proceeds to the next event, updates all information and iterates this loop.

Once finished, the simulator provides the following information as an output:

  1. list of all event times (including flow arrivals and departures);

  2. list of the bandwidth allocation for each node and each DP between events;

  3. flow count for each node at each event time.
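The event identification described above can be sketched as a next-event search (a structural illustration only; the names, the flat data layout, and the drain-rate inputs are our assumptions, and the fluid-model rate computations between events are omitted):

```python
import math

def next_event(now, next_arrival, flows, alloc, bucket_levels, drain_rates):
    """Pick the earliest of: the next flow arrival, the first flow to finish
    at the current allocation, and the first token bucket to empty.
    flows: remaining size per flow id; alloc: current rate per flow id;
    drain_rates: net outflow per bucket id (positive if the bucket drains)."""
    candidates = []
    if next_arrival is not None:
        candidates.append((next_arrival, "arrival"))
    for fid, remaining in flows.items():
        if alloc.get(fid, 0.0) > 0.0:
            candidates.append((now + remaining / alloc[fid], "finish"))
    for bid, level in bucket_levels.items():
        if drain_rates.get(bid, 0.0) > 0.0 and level > 0.0:
            candidates.append((now + level / drain_rates[bid], "bucket_empty"))
    return min(candidates, default=(math.inf, "idle"))
```

The simulator jumps to the returned time, updates the state, recomputes the allocation, and repeats.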

III-C Bandwidth allocation model

At any given point in time, we collect the throughput bounds determined by the current bucket levels for drop precedence i at node n into a matrix c, whose elements are

c_{n,i} = min { R_{i,j} : bucket (i,j) of node n is empty }, (2)

where the minimum of an empty set is taken as infinity, i.e., a DP whose buckets are all non-empty imposes no instantaneous rate limit in the fluid model.
To present the bandwidth allocation we need the following notation:

  • f_n is the number of flows in node n,

  • t_n is the throughput of node n, initialized with 0,

  • e_n shows whether node n is eligible for an increase, initialized with True.

The iterative algorithm to calculate the bandwidth allocation is as follows.

The congestion DP is determined as the lowest DP value at which the total of the throughput bounds c reaches the capacity C. Each t_n is initialized with node n's total bound on the DPs below the congestion DP, i.e., all traffic below the congestion DP is allocated first.

Then the procedure iterates the following 3 steps until the entire capacity C is allocated:

  1. Nodes whose throughput t_n has reached its bound are set to non-eligible (e_n = False).

  2. Mark all eligible nodes for which the ratio t_n / f_n is minimal among all eligible nodes.

  3. Increase t_n for all marked nodes proportionally to f_n, by the maximal possible amount such that the following remain valid:

    • t_n does not exceed its bound for any marked node n,

    • the ratio t_n / f_n of the marked nodes does not increase beyond the second smallest ratio among all eligible nodes, and

    • the total allocated throughput does not exceed C.

From the t_n and the bounds c, calculating the per-DP throughput is straightforward.
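The progressive-filling iteration above can be sketched as follows (illustrative: the per-DP bookkeeping is collapsed into a single upper bound per node, and the function name is ours):

```python
def allocate(capacity, flows, caps):
    """Weighted max-min sharing: split `capacity` among nodes proportionally
    to `flows`, subject to per-node upper bounds `caps`."""
    n = len(flows)
    t = [0.0] * n                  # current throughput per node
    eligible = [True] * n          # still allowed to increase
    remaining = capacity
    eps = 1e-12
    while remaining > eps:
        for i in range(n):         # step 1: nodes at their bound drop out
            if eligible[i] and t[i] >= caps[i] - eps:
                eligible[i] = False
        if not any(eligible):
            break                  # all bounds reached, capacity left unused
        # step 2: mark the nodes with the minimal normalized throughput
        min_ratio = min(t[i] / flows[i] for i in range(n) if eligible[i])
        marked = [i for i in range(n)
                  if eligible[i] and t[i] / flows[i] <= min_ratio + eps]
        higher = [t[i] / flows[i] for i in range(n)
                  if eligible[i] and i not in marked]
        # step 3: the largest step keeping capacity, bounds, and the
        # next ratio level valid
        step = remaining / sum(flows[i] for i in marked)
        for i in marked:
            step = min(step, (caps[i] - t[i]) / flows[i])
        if higher:
            step = min(step, min(higher) - min_ratio)
        for i in marked:
            t[i] += step * flows[i]
            remaining -= step * flows[i]
    return t
```

For example, two nodes with 1 and 3 flows and caps of 6 each split a capacity of 10 as 4 and 6: the second node hits its cap, and the remainder goes to the first.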

IV Dimensioning guidelines

This section focuses on the dimensioning of the token rate matrix R and the token bucket size matrix BS (defined in Section II-A). Proper dimensioning of R and BS is vital to obtain the desired properties of the bandwidth profile. We consider a system of K nodes with identical MTS-BWP configurations over a bottleneck link with capacity C. The required properties are the following:

  1. predefined download speeds D_1 > D_2 > … (decreasing) are provided for files of sizes S_1 < S_2 < … (increasing) arriving at a previously inactive node (D_1 is also the peak rate provided to a node after an inactive period);

  2. the nominal speed C/K is provided to each node in long-term average;

  3. the guaranteed speeds remain feasible, i.e., their sum does not exceed C;

  4. minimum guaranteed speeds (decreasing) are provided in case of 1, 2, … simultaneously active nodes, respectively;

  5. the work conserving property is guaranteed (i.e., when there is traffic, the full link capacity shall be used).

In the following analysis we focus on the case of 4 DPs and 4 TSs, which allows for 2 file sizes with predefined download speeds; it is straightforward to generalize to more timescales. We aim to minimize the number of DPs and settle at 4, providing insight into how the 4 DPs are used as well as what happens with fewer DPs.

IV-A Token rate matrix

In this part we present a simple dimensioning method for the matrix R based on requirements 1–5 above. All rows of R should be decreasing according to (1).

We use the following intuitive guidelines for R:

  • DP 1 is used for the guaranteed speeds and is not intended to be the limiting DP;

  • DPs 1 and 2 together are used to reach the predefined download speeds at a low load node in situations when most other nodes are inactive or have bad history;

  • DP 3 or 4 is the congestion DP for high load nodes while low load nodes are inactive;

  • DP 4 is used to guarantee the work conserving property.

In accordance with these guidelines, we propose a structure for R in which row 1 carries the guaranteed speeds, row 2 adds the remainder of the predefined download speeds, row 3 implements the return rule described below, and row 4 is constantly C.

The first row (DP 1) is straightforward and simply implements the guaranteed speeds. Note that these must sum to at most C to avoid congestion on DP 1.

The second row is calculated so that the rates on DPs 1 and 2 together provide the predefined download speed D_1 at the shortest timescale; the remaining download speeds are ensured similarly at the longer timescales.

Our next remark is that the last column of rows 1–3 is defined so that

R_{1,4} + R_{2,4} + R_{3,4} = C/K (4)

holds; this important property will be called the return rule. First note that any node with nominal load larger than 1 will continue to deplete its token buckets and eventually end up with

all of its buckets on DPs 1–3 and timescales 2–4 empty. (5)

Actually, (5) is exactly the way bad history is described within the system.

The return rule (4) provides two important guarantees: in long-term average, only C/K bandwidth is guaranteed on DP 1–DP 3 for any node, but since K·(C/K) = C, this also means that no node will be "suppressed" in long-term average by the other nodes.

Also, over a time period when all other nodes are either inactive or have bad history as in (5), any node with nominal load less than 1 will eventually refill its buckets, and thus potentially gain access to a rate larger than C/K (the node returns from "bad history" to "good history", hence the name of the rule). The general form of the return rule would be that there exists a DP whose long-timescale rates, together with those of the DPs above it, sum to at least C/K.

Next, the peak rates are defined so that

R_{1,1} + R_{2,1} + (K−1)·(R_{1,4} + R_{2,4}) = C

holds. This will ensure that when a single node becomes active while all other nodes are either inactive or have bad history as in (5), the congestion DP changes to 2, with the single active node having rate R_{1,1} + R_{2,1} = D_1 and the other nodes having rate R_{1,4} + R_{2,4} allocated.

The last row guarantees the work conserving property: as long as at least one node is active, it has access to the entire capacity C.

Finally, the system is relatively insensitive to the exact values of the elements marked with an *, since typically other elements limit the bandwidth: high load nodes have a bad history and are thus limited by the long-timescale column, while the targets for low load nodes are realized on DPs 1 and 2 and are thus limited by the first two rows. Elements marked with an * can be selected arbitrarily as long as row 3 of R is decreasing.

Further remarks: The file sizes for the predefined download speeds only affect the bucket size matrix BS, detailed in the next subsection. For typical choices of the parameters, the rows of R are monotonically decreasing; in case they are not, R needs to be adjusted, which we neglect here. More (or fewer) timescales can be introduced in a straightforward manner to accommodate more (or fewer) predefined file sizes and download rates. With fewer DPs, one of the following properties must be given up:

  • Omitting the first row of R results in no strictly guaranteed speeds.

  • Omitting the second row removes the predefined download speeds, resulting in a system very similar to trTCM, with nearly no memory.

  • Omitting the third row violates the return rule.

  • Omitting the last row results in a non-work-conserving system, where it may occur that nodes' previous history limits them to the point where less than the available capacity is used.

Example 1.

For K = 5 nodes, C = 10 Gbps, a guaranteed speed of 2 Gbps, file sizes of 0.1 GByte and 1 GByte, and download speeds of 6 Gbps and 4 Gbps, a suitable matrix R can be constructed following the structure above (rows for the guaranteed speed, the download speeds, the return rule, and the work conserving property).
IV-B Bucket size matrix

The sizes of the buckets are calculated from the rates in R and the list of timescales TS = (TS_1, TS_2, TS_3, TS_4).

TS_1 = 0 represents the ideal behavior of the fluid model. The middle timescales correspond to the download times of the predefined file sizes. We use the last timescale (TS_4) to define how long a node must send with bandwidth at least C/K to be considered to have bad history. In Example 1, we actually set TS_4 = 30 sec (to allow a 30 second active period before a node is considered to have bad history) and calculate BS accordingly.

We set the bucket sizes according to the formula

BS_{i,j} = Σ_{k=1}^{j−1} (R_{i,k} − R_{i,j}) · (TS_{k+1} − TS_k),

which results in a previously inactive node emptying bucket (i, j) after time TS_j (assuming the rate at DP i is limited only by the node's own history and not by other nodes), taking into account the fact that it uses different bandwidth on different timescales. Buckets with BS_{i,j} = 0 act as rate limiters in the fluid model, due to Eq. (2).
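This dimensioning can be computed directly from a rate row and the timescale list. The sketch below is our illustration of that property (a previously inactive node, sending as fast as its own row-i caps allow, empties bucket (i, j) exactly at TS_j), not necessarily the paper's exact formula:

```python
def bucket_sizes(rates, timescales):
    """rates: rows of R, decreasing left to right (e.g. Gbps);
    timescales: increasing, with timescales[0] = 0 for the fluid model.
    Between TS_k and TS_{k+1} the node sends at rates[i][k], so bucket j
    drains at rates[i][k] - rates[i][j]; summing gives its size (e.g. Gbit)."""
    bs = []
    for row in rates:
        bs_row = [0.0]              # first bucket: BS = 0 in the fluid model
        for j in range(1, len(timescales)):
            bs_row.append(sum((row[k] - row[j]) *
                              (timescales[k + 1] - timescales[k])
                              for k in range(j)))
        bs.append(bs_row)
    return bs
```

For instance, with caps of 4, 2, and 1 Gbps at timescales 0, 10, and 30 s, the second bucket must hold (4−2)·10 = 20 Gbit and the third (4−1)·10 + (2−1)·20 = 50 Gbit.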

Note that when using the above R and BS dimensioning method, the flow throughput of a single flow of size S_2 can reach as high as

S_2 / ( S_1/D_1 + (S_2 − S_1)/D_2 ), (9)

since the beginning of the download proceeds at the higher speed D_1. If one wants to replace the maintainable download speed requirement with a flow throughput requirement for S_2, then D_2 in Section IV-A should be replaced by the (slightly smaller) solution of Eq. (9) for D_2 when its left-hand side is set equal to the flow throughput requirement. For Example 1, and for the anticipated meaningful input values, the difference between the two is very small.

Also note that the above calculations are for the fluid model; in actual packet-based networks, bucket sizes must have a minimum: at least one MTU (maximum transmission unit) to be able to pass packets, and they must also allow bursts on the RTT timescale. In summary, each bucket size should be at least max(MTU, R_{i,j}·RTT) in addition to the fluid-model value above.

V Simulation

V-A Simulation parameters

In all simulations, the MTS-BWP rates R, the bucket sizes BS and the system parameters are set according to Example 1. In the input process, we use the two file sizes from Example 1 with identical probability. The flow limit is the same for each node.

We have two groups of nodes with identical nominal loads within a group. We specify the nominal load for low load nodes (low load) and the system load, and calculate the nominal load for high load nodes using the equations in Section III-B2. The simulation setups are summarized in Table I; the number of nodes of each type and their loads vary across the setups.

Setup  low load nodes  high load nodes  low load                        system load
A      1               4                0.5                             0.6, 0.7, 0.8, 0.9, 0.95, 1.0, 1.1, 1.2, 1.5, 2.0
B      2               3                0.5                             0.6, 0.7, 0.8, 0.9, 0.95, 1.0, 1.1, 1.2, 1.5, 2.0
C      1               4                0.5, 0.6, 0.7, 0.8, 0.9, 0.95  1.1
D      2               3                0.5, 0.6, 0.7, 0.8, 0.9, 0.95  1.1
TABLE I: Simulation setups and parameters

V-B Example simulation

Fig. 3: Example of bandwidth allocation over time

Figure 3 depicts the evolution of the bandwidth allocation over a time interval for one of the setup A simulations. Colors correspond to nodes and shades within a color correspond to DPs. Node 1 (red) is the low load node. Some events are also marked (a)–(g).

Node 1 is inactive in the beginning, and the congestion DP is 3. Then a large flow starts in node 1 (a) and the congestion DP changes to 2. Node 1 starts using 2 Gbps + 4 Gbps of the available capacity on DPs 1 and 2, respectively, while nodes 2–5 each start using 0.25 Gbps + 0.75 Gbps. (The rate matrix was dimensioned for exactly this case; while the congestion DP is 2, all traffic on DP 2 can be transmitted.)

As time progresses, the longer-timescale buckets of node 1 become empty (b), and the bandwidth share of node 1 drops accordingly to 2 Gbps + 2 Gbps. The congestion DP switches back to 3, but DP 3 is dominated by nodes 2–5, because those nodes have high numbers of flows, while node 1 has only a single flow. That single large flow can still achieve the throughput it was dimensioned for.

Once node 1 finishes its flow (c), the available bandwidth is reallocated to nodes 2–5 on DP 3. Then the DP 3 buckets of nodes 2–5, which were filled previously, empty one by one, and their bandwidth shares on DP 3 drop accordingly: first for node 5 (d), then node 4 (e), then node 3 (f). The exact order depends on the bucket levels of each node, which depend on their earlier history, not visible in the example time interval.

In the meantime, new flow arrivals and flow completions at nodes 2–5 may occur and cause minor changes in the bandwidth allocation, e.g., a flow at node 2 finishes at (g).

V-C Statistical results

Based on the simulator output, we calculate the following two statistics: the node bandwidth for active periods (periods when there is no flow at the respective node are excluded); and the flow bandwidth for the different flow sizes, which is the flow size divided by the download time. For both we plot the average and the 10% best and 10% worst cases; the error bars display the worst-to-best interval, with a dot marking the average. Averaging and percentiles are weighted according to time for node throughputs, and according to the number of flows for flow throughputs. All statistics are evaluated for a 1-hour run (with an extra initial warm-up period excluded from the statistics).

We compare the suggested MTS-BWP with the matrix R of Example 1 against a trTCM profile (CIR = 2 Gbps, EIR = 8 Gbps) as baseline for various setups.

Fig. 4: Node bandwidth for low load nodes for trTCM vs. MTS BWP

Fig. 5: Node bandwidth for high load nodes for trTCM vs. MTS BWP

Fig. 6: Flow bandwidth for small flows (100 MB) at low load nodes for trTCM vs. MTS BWP

Fig. 7: Flow bandwidth for large flows (1 GB) at low load nodes for trTCM vs. MTS BWP

Fig. 8: Flow bandwidth for small flows vs. large flows at low load nodes for MTS BWP

Fig. 9: Flow bandwidth for small vs. large flows at high load nodes for trTCM vs. MTS BWP

Figure 4 displays node bandwidth statistics for low load nodes for trTCM vs. MTS. MTS consistently outperforms trTCM in allocating more bandwidth to low load nodes. The average bandwidth for MTS is higher in every scenario, and the best case (10% best values) is also significantly higher in most scenarios. MTS provides the most improvement in scenarios where the system is overloaded; for small system loads trTCM also performs well.

Low load nodes perform better in the 2L/3H setups than in the 1L/4H setups, because the profile protects a low load node better from 3 high load nodes than from 4. Finally, as the load of the low load node approaches 1, the difference between trTCM and MTS gradually disappears.

Figure 5 displays the same statistics for high load nodes. The most important observation here is that the considerable gain for low load nodes in Figure 4 comes at virtually no cost to high load nodes: the difference between the average bandwidth for high load nodes under trTCM vs. MTS-BWP is negligible. The reason is that while traffic from low load nodes is indeed served faster, the total amount of traffic served from low load nodes is the same. This means that for high load nodes, which are active longer, the effect on node bandwidth is negligible, especially for the average. (It matters little whether the same amount of bytes is given up in one big burst or over a longer period with a smaller bandwidth.)

Next we examine the prioritization of small flows (100 MB) vs. large flows (1 GB) provided by MTS-BWP compared to the trTCM BWP. Figure 6 shows flow bandwidth statistics for small flows at low load nodes. MTS outperforms trTCM in allocating more bandwidth in every setup, but particularly in overloaded systems, where the difference is huge, both for the average and for the best values. Also, as the low load approaches 1, the difference between trTCM and MTS diminishes (just as for the node bandwidth in Figure 4), but that is as it should be. Moreover, for MTS-BWP, the best values for small flows reach the dimensioned download speed in all scenarios where the low load is sufficiently below 1.

Figure 7 displays the same statistics for large flows (1 GB) at low load nodes. Again, MTS outperforms trTCM significantly. The best throughput is close to the dimensioned value (see Section IV-A).

Figure 8 compares flow bandwidth statistics for small vs. large flows at low load nodes for MTS-BWP. It can be seen that small flows are successfully prioritized in all cases.

Finally, Figure 9 displays flow bandwidth at high load nodes for both small and large flows, and for both policies. There is a sharp distinction between underloaded and overloaded systems: in underloaded systems, even at high load nodes, there are typically only very few active flows at the same time, resulting in relatively large flow bandwidths. As the system load approaches 1, the per-flow bandwidth drops gradually, and in overloaded systems the number of flows at high load nodes is always close to the flow limit. Thus the flow bandwidth at high load nodes is typically close to the node bandwidth divided by the flow limit, which is very sensitive to the flow limit parameter. For these nodes, we consider the node bandwidth to be the more informative statistic.

Vi Conclusion

We have shown that the proposed Multi Timescale Bandwidth Profile can extend fairness from a single timescale to several timescales. We provided a dimensioning method to deploy Service Level Agreements based on MTS-BWP, which can provide target throughputs for a group of nodes. The presented tool can differentiate between nodes using the same service depending on their characteristics.

Our simulation results showed the differences in network throughput for low load and high load nodes. There were high throughput gains on low load nodes with marginal or no throughput decrease on high load ones.


  • [1] D. Papadimitriou, M. Welzl, M. Scharf, and B. Briscoe, "Open research issues in internet congestion control," RFC 6077, Feb. 2011.
  • [2] S. Nádas, Z. R. Turányi, and S. Rácz, "Per packet value: A practical concept for network resource sharing," in 2016 IEEE Global Communications Conference (GLOBECOM), Dec 2016, pp. 1–7.
  • [3] S. Laki, G. Gombos, P. Hudoba, S. Nádas, Z. Kiss, G. Pongrácz, and C. Keszei, "Scalable Per Subscriber QoS with Core-Stateless Scheduling," in ACM SIGCOMM Industrial Demos, 2018.
  • [4] M. Menth and N. Zeitler, "Fair Resource Sharing for Stateless-Core Packet-Switched Networks With Prioritization," IEEE Access, vol. 6, pp. 42702–42720, 2018.
  • [5] B. Briscoe, R. Woundy, and A. Cooper, "Congestion exposure (ConEx) concepts and use cases," RFC 6789, Dec. 2012.
  • [6] D. Kutscher, H. Lundqvist, and F. G. Mir, "Congestion exposure in mobile wireless communications," in 2010 IEEE Global Telecommunications Conference (GLOBECOM 2010), Dec 2010, pp. 1–6.
  • [7] J. R. Iyengar, O. L. Caro, and P. D. Amer, "Dealing with short TCP flows: A survey of mice in elephant shoes," Univ. of Delaware, 2003.
  • [8] Ericsson AB, "5G Radio Access - Research and Vision," Ericsson White Paper 284 23-3204 Uen, Rev C, April 2016.
  • [9] Metro Ethernet Forum, "EVC Ethernet Services Definitions Phase 3," MEF 6.2, Aug. 2014.
  • [10] F. Baker and R. Pan, "On Queuing, Marking, and Dropping," RFC 7806, Apr. 2016.
  • [11] S. Nádas, G. Gombos, P. Hudoba, and S. Laki, "Towards a Congestion Control-Independent Core-Stateless AQM," in Proceedings of the Applied Networking Research Workshop (ANRW '18), New York, NY, USA: ACM, 2018, pp. 84–90.