I Introduction
In the context of deterministic networking [1] or time-sensitive networking [2], delays at network elements have to be bounded in the worst case, not on average. Computing and formally verifying delay bounds is often done by using network calculus [3, Section 1.4]. For a FIFO network element, this involves two steps. First, an arrival curve, say α, is formulated for the aggregated input traffic. Specifically, α(t) is an upper bound on the number of bits that may be submitted by the traffic of interest to the network element within any window of t time units. The function α depends on the knowledge of the applications that generate the traffic and on the speed at which data can arrive at the network element. Second, the details of the inner workings of the network element are abstracted by using a service curve, say β (also called “minimum” service curve). This service curve is typically a rate-latency function, i.e., of the type β(t) = R[t − T]^+, where R (the rate) and T (the latency) are fixed parameters that are specific to the network element and to the traffic class. An exact definition of service curve can be found in [3, Section 1.3]. Roughly speaking, such a rate-latency service curve means that the input traffic is guaranteed to receive a service rate at least equal to R, except for possible service interruptions that may impact the delay by at most T units of time.
Then, a delay bound given by network calculus is the horizontal deviation between the arrival and service curves [3, Section 3.1.11], which in this case is
D = T + sup_{t ≥ 0} ( α(t)/R − t ).   (1)
(In the above formula, D is finite if the supremum is finite; otherwise, D is infinite.) This methodology has been successfully applied to many network elements involving a variety of schedulers, such as priority schedulers [4, Chapter 7], all schedulers that fall in the class of guaranteed rate scheduling [5], [3, Section 2.1] (including the widespread deficit round robin scheduler [4, Chapter 8]), and more recently Audio-Video Bridging [6] and the Credit Based Shaper [7, 8, 9].
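In the common special case of a token-bucket arrival curve α(t) = b + r t with r ≤ R, the supremum in Eq. (1) is attained as t → 0+ and the bound reduces to D = T + b/R. The following sketch (function name and parameters are illustrative, not from the paper) evaluates the bound for this case:

```python
def nc_delay_bound(b, r, R, T):
    """Network-calculus delay bound D of Eq. (1) for a token-bucket
    arrival curve alpha(t) = b + r*t and a rate-latency service curve
    beta(t) = R*max(t - T, 0).

    For r <= R, the term alpha(t)/R - t is maximized as t -> 0+,
    giving D = T + b/R. For r > R, the backlog grows without bound
    and the supremum (hence D) is infinite.
    """
    if r > R:
        return float("inf")
    return T + b / R

# Example: burst b = 2000 bits, r = 1 Mb/s, guaranteed rate R = 5 Mb/s,
# latency T = 10 us: D = 10e-6 + 2000/5e6 seconds.
```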
The bound in Eq. (1) is tight if the only information available is the arrival curve α and the service curve β. However, all of the examples we just mentioned have in common an additional feature: packet transmission occurs at the physical line rate c, which is often much larger than the rate R guaranteed by the rate-latency service curve. For example, with a Deficit Round Robin (DRR) scheduler that serves n classes of traffic of equal importance, the rate R of every class is equal to c/n.
In this paper, we exploit the information on the transmission rate c to provide a bound on the delay at a FIFO system that improves on the network calculus bound in (1). Specifically, the bound is per-packet and depends on the packet length. We reach this improved bound by combining the min-plus representation of service curves and the max-plus representation of arrival curves [11]. We show that the bound is tight, at least when the arrival curve is concave.
II System Model
We consider a FIFO system with a queue and a transmission subsystem, as in Fig. 1. Upon arrival, packets enter the queue and are stored in FIFO order. A scheduler decides when the packet at the head of the queue is selected for transmission. When this occurs, the packet is transmitted at a constant rate c. Let A_n be the arrival time of the nth packet, where the numbering of packets is by order of arrival, and let D_n be the time at which packet n is selected for transmission. The FIFO assumption means that D_1 ≤ D_2 ≤ …. Let l_n be the length of packet n and l_max the maximum packet length. The packet leaves the system at time D_n + l_n/c. We call T_n = D_n + l_n/c − A_n the “response time” of packet n.
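As an illustration of this notation (a hypothetical helper, not part of the model), the following sketch computes the transmission start times and response times for the special case of a work-conserving server that always transmits at rate c:

```python
def response_times(arrivals, lengths, c):
    """Response times in a work-conserving FIFO queue drained at a
    constant rate c (a special case of the model of Section II).

    For each packet n, D_n is the instant its transmission starts:
    the later of its arrival and the instant the server frees up.
    The packet leaves at D_n + l_n/c, so its response time is
    T_n = D_n + l_n/c - A_n.
    """
    out = []
    free_at = 0.0                # time at which the server becomes idle
    for a, l in zip(arrivals, lengths):
        d = max(a, free_at)      # head-of-line packet waits for the server
        free_at = d + l / c      # departure time of this packet
        out.append(free_at - a)  # response time T_n
    return out
```

For example, two 100-bit packets arriving simultaneously at a 100 bit/s server see response times of 1 s and 2 s, the second packet waiting for the first.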
Furthermore, we assume that the scheduler is such that the complete system offers to the total flow of all incoming packets a rate-latency service curve β(t) = R[t − T]^+ with R ≤ c. In many cases, the rate R is much less than c; this occurs for example when the transmission capacity is shared between this FIFO system and other subsystems dedicated to other classes of traffic, as in [8].
We also assume that the total flow of all incoming packets is packetized, i.e., we consider that all bits of packet n arrive at the same time instant A_n. Furthermore, we assume that the total flow of all incoming packets is constrained by an arrival curve α.
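The arrival-curve constraint on a packetized trace can be checked window by window. The following sketch (token-bucket case only; names hypothetical) tests that every window of consecutive packets carries at most α bits, in the spirit of the max-plus characterization of [11]:

```python
def is_conformant(arrivals, lengths, b, r):
    """Check that a packetized trace is constrained by the token-bucket
    arrival curve alpha(t) = b + r*t.

    Every window of packets j..i must satisfy
        sum(lengths[j..i]) <= alpha(A_i - A_j).
    Here alpha is continuous, so the right-limit subtlety that arises
    for general arrival curves does not matter.
    """
    n = len(arrivals)
    for j in range(n):
        total = 0.0
        for i in range(j, n):
            total += lengths[i]
            # small tolerance against floating-point rounding
            if total > b + r * (arrivals[i] - arrivals[j]) + 1e-9:
                return False
    return True
```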
III Improved Delay Bound
In this section, we derive a delay bound for a general FIFO system as described in Section II.
Theorem 1.
(Upper bound on the response time at a FIFO system) Consider a FIFO system as in Section II, i.e., one that offers a rate-latency service curve with parameters (R, T) and where, as soon as a packet starts to be transmitted, it is transmitted at a constant rate c ≥ R. Assume that the total input is packetized and has an arrival curve α. For a packet of length l, the response time is upper bounded by:

D − l ( 1/R − 1/c ),   (2)

where D is the network calculus bound, given in Eq. (1).
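Numerically, the improved bound is obtained from the network calculus bound D by subtracting a term proportional to the packet length. A minimal sketch (D is assumed already known, e.g., computed from Eq. (1); the function name is illustrative):

```python
def improved_bound(D, l, R, c):
    """Per-packet delay bound of Theorem 1 for a packet of length l:
    the network-calculus bound D of Eq. (1) minus l*(1/R - 1/c).

    Requires c >= R > 0, so that the correction term is non-negative
    and the new bound never exceeds D.
    """
    assert c >= R > 0
    return D - l * (1.0 / R - 1.0 / c)

# Example: D = 10 s, l = 100 bits, R = 50 bit/s, c = 100 bit/s:
# the bound drops by 100*(1/50 - 1/100) = 1 s, to 9 s.
```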
Proof.
We use the notation of Section II and call l_i the size of the ith packet. Now let n be the index of the packet of interest, so that l_n = l. By Lemma 1, there exists a packet index j ≤ n such that:

β(D_n − A_j) ≤ Σ_{i=j}^{n−1} l_i,   (3)

where β = β_{R,T} is the rate-latency service curve. Let β^↑ be the upper pseudo-inverse of β, defined by β^↑(y) = sup{ x : β(x) ≤ y }. According to the properties of the upper pseudo-inverse in [10, Section 10.1], for a non-decreasing function β, β(x) ≤ y implies x ≤ β^↑(y). Then, from Eq. (3), and since β^↑(y) = T + y/R for a rate-latency function, we have:

D_n − A_j ≤ T + (1/R) Σ_{i=j}^{n−1} l_i.   (4)

Therefore, the response time T_n = D_n + l_n/c − A_n satisfies

T_n ≤ T + (1/R) Σ_{i=j}^{n−1} l_i + l_n/c − (A_n − A_j).   (5)

The input traffic has an arrival curve α; thus, by the max-plus representation of arrival curves in [11, Theorem 1]:

Σ_{i=j}^{n} l_i ≤ α_r(A_n − A_j),   (6)

where α_r is the right-limit of α. By excluding the packet of interest from the sum on the left-hand side of Eq. (6), we obtain:

Σ_{i=j}^{n−1} l_i ≤ α_r(A_n − A_j) − l_n.   (7)

By using Eq. (7) in Eq. (5), we have

T_n ≤ T + α_r(A_n − A_j)/R − (A_n − A_j) − l_n ( 1/R − 1/c ).   (8)

By setting t = A_n − A_j, we further obtain

T_n ≤ T + α_r(t)/R − t − l_n ( 1/R − 1/c ).   (9)

Note that an upper bound on α_r(t)/R − t is given by

α_r(t)/R − t ≤ sup_{s ≥ 0} ( α_r(s)/R − s ),   (10)

therefore

T_n ≤ T + sup_{t ≥ 0} ( α_r(t)/R − t ) − l_n ( 1/R − 1/c ).   (11)

By applying Lemma 2 to Eq. (11), we have:

T_n ≤ T + sup_{t ≥ 0} ( α(t)/R − t ) − l_n ( 1/R − 1/c ),   (12)

where the first two terms on the right-hand side equal the network calculus bound D, given in (1). Now observe that l_n = l, which yields the bound in Eq. (2) and concludes the proof. ∎
Lemma 1.
If a FIFO system has β as a service curve and has packetized input, then for every packet index n, there exists a packet index j ≤ n such that β(D_n − A_j) ≤ Σ_{i=j}^{n−1} l_i, where l_i is the size of the ith packet.
Proof.
From the definition of a service curve [3], for all t ≥ 0 we have

R*(t) ≥ inf_{0 ≤ s ≤ t} ( R_in(s) + β(t − s) ),   (13)

where R*(t) is the number of bits that have been served until time t, R_in(t) is the number of bits that have arrived until time t, and R*(0) = R_in(0) = 0. Let t = D_n; then,

R*(D_n) ≥ inf_{0 ≤ s ≤ D_n} ( R_in(s) + β(D_n − s) ).   (14)

Then, for some s ∈ [0, D_n] at which the infimum is attained (R_in is a staircase function and β is continuous, so the infimum is attained), we have

R*(D_n) ≥ R_in(s) + β(D_n − s).   (15)

Let j be the smallest packet index such that s ≤ A_j. Since the input is packetized, all of packets 1, …, j − 1 have arrived by time s, and since β is non-decreasing,

R_in(s) ≥ Σ_{i=1}^{j−1} l_i  and  β(D_n − s) ≥ β(D_n − A_j).   (16)

By replacing Eq. (16) in Eq. (15), we obtain:

R*(D_n) ≥ Σ_{i=1}^{j−1} l_i + β(D_n − A_j).   (17)

Since D_n is the time instant at which we begin to serve packet n, all packets before n have already been served, while packet n has not. Thus,

R*(D_n) = Σ_{i=1}^{n−1} l_i.   (18)

By replacing Eq. (18) in Eq. (17), we obtain:

β(D_n − A_j) ≤ Σ_{i=j}^{n−1} l_i.   (19)

If j > n, Eq. (19) yields β(D_n − A_j) ≤ −Σ_{i=n}^{j−1} l_i < 0, which is a contradiction since β ≥ 0; else, j ≤ n and the statement of the lemma is shown. ∎
Lemma 2.
If f is a wide-sense increasing function and f_r is its right-limit, then for any R > 0:

sup_{t ≥ 0} ( f_r(t)/R − t ) = sup_{t ≥ 0} ( f(t)/R − t ).   (20)
Proof.
Let K = sup_{t ≥ 0} ( f(t)/R − t ) and K_r = sup_{t ≥ 0} ( f_r(t)/R − t ). We want to prove that K = K_r. To do so, first we show that K ≤ K_r, and second that K_r ≤ K.

K ≤ K_r: The function f is wide-sense increasing; therefore, for any t ≥ 0 we have f(t) ≤ f_r(t), hence:

f(t)/R − t ≤ f_r(t)/R − t.   (21)

Using (21), it is trivially shown that K ≤ K_r.

K_r ≤ K: The function f is wide-sense increasing; therefore, for any t ≥ 0 and ε > 0, we have f_r(t) ≤ f(t + ε); therefore:

f_r(t)/R − t ≤ f(t + ε)/R − (t + ε) + ε ≤ K + ε.   (22)

Since K_r is the lowest upper bound of f_r(t)/R − t over t ≥ 0, we have:

K_r ≤ K + ε.   (23)

Eq. (23) is correct for any value of ε > 0, therefore:

K_r ≤ K.   (24)
∎
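Lemma 2 can be sanity-checked numerically. The following sketch (an ad hoc staircase f and a grid search, for illustration only) compares the two suprema for f(t) = 4⌈t⌉ and R = 4: f only approaches the value 1 as t tends to a jump point from the right, while its right-limit f_r attains it.

```python
import math

# Left-continuous staircase f(t) = 4*ceil(t) (jumps just after the
# integers) and its right-limit f_r(t) = 4*(floor(t) + 1); R = 4.
R = 4.0
f = lambda t: 4.0 * math.ceil(t)
f_r = lambda t: 4.0 * (math.floor(t) + 1)

grid = [k / 1000.0 for k in range(0, 5001)]  # t in [0, 5], step 1e-3
sup_f = max(f(t) / R - t for t in grid)      # approaches 1, never reaches it
sup_fr = max(f_r(t) / R - t for t in grid)   # attains 1 at t = 0, 1, 2, ...

print(round(sup_f, 2), round(sup_fr, 2))  # -> 1.0 1.0
```

As Lemma 2 predicts, the two suprema agree (the grid search for f stops at 0.999 only because of the finite step).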
Remark.
If the input to the FIFO system consists of multiple flows, then the arrival curve α is an envelope for the aggregate of the flows. If the flows have different minimum packet lengths, then Theorem 1 provides a distinct delay bound for every flow f, namely D − l_min,f ( 1/R − 1/c ), where l_min,f is the minimum packet length of flow f.
Hereafter we provide two examples that illustrate the improvement of Theorem 1 over the network calculus delay bound.
In the first example, we compute delay bounds for two Audio-Video Bridging (AVB) classes in a TSN scheduler [7, 8]. We consider the traffic specification of [9] and set the idle slopes of the TSN scheduler for classes A and B as in [9]. Using the rate-latency service curves obtained in Theorem 1 of [7], we compute the per-hop improvement of Theorem 1 over the network calculus bound for a packet of class A and a packet of class B, at two link rates (one in the Gbps range and one in the Mbps range). For both link rates, the delay bound of Theorem 1 improves on the network calculus bound for both classes. This improvement is small but non-negligible.
In the second example, we consider n flows sharing a link of rate c under a DRR arbitration policy. We assume that all flows have the same maximum packet length l_max and the same quantum value Q, which is set to Q = l_max. Therefore, the rate-latency service curve parameters (R, T) are the same for all flows and are given in Section 9.2.3 of [4]; in particular, the guaranteed rate is

R = c/n.   (25)

Assume that the maximum burstiness of each flow is limited by its maximum packet length, i.e., σ = l_max. Then, the network calculus bound is D = T + l_max/R, and the improvement in the delay bound for a packet with length l_max is l_max ( 1/R − 1/c ) = (n − 1) l_max / c, which is significant.
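Under the assumptions of this example (equal quanta, hence a guaranteed rate of R = c/n per flow), the improvement can be computed directly. A small sketch (function name hypothetical):

```python
def drr_improvement(n, c, l_max):
    """Improvement of Theorem 1 over the network-calculus bound for a
    maximum-length packet under DRR with n equal-quantum flows sharing
    a link of rate c.

    With a guaranteed rate R = c/n per flow, the gain is
        l_max * (1/R - 1/c) = (n - 1) * l_max / c,
    i.e., it grows linearly with the number of competing flows.
    """
    R = c / n
    return l_max * (1.0 / R - 1.0 / c)

# Example: 8 flows, a 1 Gb/s link, 12 kb packets:
# improvement = 7 * 12e3 / 1e9 seconds = 84 microseconds.
```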
IV Tightness of the Improved Delay Bound
Theorem 2.
(Tightness of the bound of Theorem 1) The bound of Theorem 1 is tight for concave arrival curves, i.e., for any maximum packet length l_max, any packet length l ≤ l_max, any concave arrival curve α such that α(0+) ≥ l, any rate-latency service curve β_{R,T}, and any fixed transmission rate c ≥ R, there is a scenario where one packet of length l experiences a delay equal to the bound in (2).
Remark.
If the condition α(0+) ≥ l does not hold, there is no packetized flow with arrival curve α and with one packet of length l.
Proof.
We perform the proof in two steps. First, we give a simulation trace for an input sequence of packets: a packet i has length l_i, arrives at time A_i, and finishes transmission at time D_i + l_i/c. Second, for the given input and output sequences, we verify the following: (i) the input sequence conforms to the arrival curve α, (ii) the system offers β as a service curve, (iii) the system is FIFO with constant transmission rate c, and (iv) the delay of the packet of interest, packet n, is equal to the bound of Eq. (2).
We begin with the construction. We define the time instant t* as the smallest t ≥ 0 that achieves the maximum of α(t)/R − t, if the maximum exists; otherwise, the construction below applies in the limit of arbitrarily large t*. By Eq. (1):

D = T + α(t*)/R − t*.   (26)
We construct the packet length sequence l_1, l_2, …, chosen such that the packet of interest has length l, where

(27)
(28)
(29)
(30)
In addition, we define the cumulative input function R_in as follows,

(31)

where P^L is an L-packetizer function (Definition 1.7.3, [3]). R_in is the green curve in Fig. 2. Note that the packet of interest, i.e., packet n, arrives at time t*.
In order to construct the output, we first define the function

(32)

where λ_c is an affine function with rate c and δ_T is the impulse function. Packet i starts transmission at time

(33)

It finishes its transmission at time D_i + l_i/c. Therefore, the cumulative output function R* is the red line in Fig. 2, which is determined as follows: for each packet i, we start at the point (D_i, R*(D_i)) and draw a line with slope c up to the time D_i + l_i/c.
Second, we verify (i)-(iv). The difficulty in showing (i) is the packetizer in the definition of R_in. We use Theorem 1.7.2 in [3], which concerns a shaper with shaping curve α followed by an L-packetizer. By Eq. (31), we observe that R_in derives from the input by first L-packetizing it, then shaping it with the shaping curve α, and finally L-packetizing it again. Thus, by Theorem 1.7.2 in [3], R_in is α-smooth.
To show (ii), we observe that

(34)

Since Eq. (34) holds by construction, (ii) follows.
(iii) is true by construction. The system is FIFO since, from Eq. (33), a packet starts transmission no earlier than the departure of its preceding packet (with respect to the arrival time). Also, the transmission time of packet i is l_i/c, i.e., the transmission is done at rate c.
V Conclusion
We considered a network element that offers a rate-latency service curve and has a known transmission rate larger than the rate guaranteed by the service curve. We obtained a delay bound that improves on the existing network calculus bound by an amount that depends on the length of the packet being transmitted.
References
- [1] “Deterministic networking (detnet), https://datatracker.ietf.org/wg/detnet/about/,” accessed: 2019-03-26.
- [2] “IEEE TSN, https://1.ieee802.org/tsn/,” 2018, accessed: 2019-03-27.
- [3] J.-Y. Le Boudec and P. Thiran, Network Calculus: A Theory of Deterministic Queuing Systems for the Internet. Springer Science & Business Media, 2001, vol. 2050.
- [4] A. Bouillard, M. Boyer, and E. Le Corronc, Deterministic Network Calculus: From Theory to Practical Implementation. John Wiley & Sons, 2018.
- [5] P. Goyal, S. S. Lam, and H. Vin, “Determining end-to-end delay bounds in heterogeneous networks,” in 5th Int Workshop on Network and Op. Sys support for Digital Audio and Video, Durham NH, April 1995.
- [6] J. A. R. De Azua and M. Boyer, “Complete modelling of avb in network calculus framework,” in the 22nd ACM Int’l Conf. on Real-Time Networks and Systems (RTNS), NY, USA, 2014, pp. 55–64.
- [7] E. Mohammadpour, E. Stai, M. Mohiuddin, and J.-Y. Le Boudec, “Latency and backlog bounds in time-sensitive networking with credit based shapers and asynchronous traffic shaping,” in 2018 30th International Teletraffic Congress (ITC 30), vol. 02, pp. 1–6.
- [8] H. Daigmorte, M. Boyer, and L. Zhao, “Modelling in network calculus a TSN architecture mixing time-triggered, credit based shaper and best-effort queues,” 2018. [Online]. Available: https://hal.archives-ouvertes.fr/hal-01814211
- [9] L. Zhao, P. Pop, Z. Zheng, and Q. Li, “Timing analysis of avb traffic in tsn networks using network calculus,” in Real-Time and Embedded Technology and App. Symp., ser. RTAS ’18. IEEE, 2018, pp. 25–36.
- [10] J. Liebeherr, “Duality of the max-plus and min-plus network calculus,” Foundations and Trends® in Networking, vol. 11, no. 3, pp. 139–282, 2017.
- [11] J.-Y. Le Boudec, “A theory of traffic regulators for deterministic networks with application to interleaved regulators,” IEEE/ACM Transactions on Networking (TON), vol. 26, no. 6, pp. 2721–2733, 2018. [Online]. Available: https://doi.org/10.1109/TNET.2018.2875191