Time-sensitive networks provide guarantees for applications in the automotive, automation, space, avionics and video industries [ieeeDraftStandardLocal2019b, iecIECIEEE608022019, ecssSpaceWireLinksNodes2008, AFDX, TTE, ieeeAVB]. The IEEE Time-Sensitive Networking (TSN) working group [tsn] and the IETF Deterministic Networking (DetNet) working group [detnet] provide standardization for such networks. The goal of time-sensitive networks is to fulfill flow requirements on worst-case delay and jitter (defined as the difference between worst-case and best-case delays), in-order packet delivery, as well as zero congestion loss and seamless redundancy [ieeeIEEEStandardLocal2017, rfc8655]. The emergence of applications with low jitter requirements in large-scale time-sensitive networks, such as the industrial Internet of Things [itu-y3000] and electricity distribution [tsn-profile-service-provider], questions the performance of existing queuing and shaping mechanisms such as Credit-Based Shaper, IEEE 802.1Qch Cyclic Queuing and Forwarding (CQF) [p8021qch], and Deficit Round Robin [tabatabaee2021deficit]. This issue can be addressed with dampers, which are mechanisms to reduce delay jitter in time-sensitive networks [verma_delay_1991, zhang_rate-controlled_1993, cruz_sced+:_1998].
A damper delays every time-sensitive packet by an amount written in a packet header field, called damper header, which carries an estimate of the earliness of this packet with respect to a known delay upper-bound of upstream systems. This ideally leads to zero jitter; in practice, there is still some small residual jitter, due to errors in acquiring timestamps and in computing and implementing delays. As a positive side effect, dampers create packet timings that are almost the same as at the source, with small errors due to residual jitter, and thus cancel most of the burstiness increase imposed by the network [mohammadpour2020packet, Lemma 1]. The residual burstiness increase that remains when dampers are used is not influenced by the burstiness of cross-traffic. Thus, dampers solve the burstiness cascade issue [charny_delay_2000]: individual flows that share a resource dedicated to a class may see their burstiness increase, which may in turn increase the burstiness of other downstream flows. Furthermore, dampers are stateless, unlike some TSN shaping mechanisms, e.g., Asynchronous Traffic Shaping (ATS) [ieee8021qcr]. Solving the burstiness cascade in a stateless manner makes dampers of interest for large-scale time-sensitive networks.
Several implementations of dampers have been proposed; the older ones are associated with specific schedulers such as earliest-deadline-first [verma_delay_1991, cruz_sced+:_1998] and static priority [zhang_rate-controlled_1993]; the recent implementations can coexist with any scheduling mechanism [grigorjewMetzgerHossfeldetal_2020, rgcq, fopleq]. Some of these implementations enforce dampers to behave in a FIFO manner [cruz_sced+:_1998, fopleq, grigorjewMetzgerHossfeldetal_2020] and some do not [zhang_rate-controlled_1993, rgcq]. The analysis of dampers is crucial to provide guarantees for applications in the context of time-sensitive networks. Among the existing works, [rgcq, fopleq] do not provide any analysis; the others analyze only their own implementation and under limited assumptions on the network settings. Also, existing analyses assume that the network operates with one ideal clock; in practice, this assumption does not hold and may have non-negligible side effects. Recently, the effect of non-ideal clocks on regulators was analyzed and a clock model was proposed in the context of time-sensitive networks [thomas_time_2020], which we use in this paper.
We first present a taxonomy of dampers that classifies the existing implementations into dampers with or without FIFO constraint. Then, under a general network configuration with non-ideal clocks, we provide formulas to compute tight delay and jitter bounds for dampers without FIFO constraint (Theorems 1 and 2); we see that the impact of non-ideal clocks can be non-negligible in cases with low jitter requirements. As a result of this analysis, we derive conditions under which clock synchronization throughout a network does not affect the performance of dampers. Moreover, we capture the propagation of arrival curves at the output of dampers and see how this solves the burstiness cascade issue. Next, we show that existing implementations of dampers without FIFO constraint may cause undesired packet reordering due to clock non-idealities, even in synchronized networks. This problem is avoided with dampers that enforce the FIFO constraint; however, the effect on their timing properties was not analyzed in the literature, and we bridge this gap in this paper. We model two classes of dampers with FIFO constraint: re-sequencing dampers and head-of-line dampers. For the former class, we show that when all network elements are FIFO, the delay and jitter bounds are not affected by the re-sequencing operation (Theorem 3). For the latter class, there is a small penalty due to head-of-line queuing, which we quantify exactly (Theorem 4). In contrast, if some network elements are non-FIFO, the jitter bounds for dampers with FIFO constraint can be considerably larger (Theorems 5 and 6). We finally evaluate our results in an industrial case-study.
The rest of the paper is organized as follows. Section II presents the state of the art. Section III describes the system model, terminology, clock model and all assumptions. Section IV presents a taxonomy of the existing dampers. The analysis of dampers without FIFO constraint is presented in Section V. Packet reordering scenarios due to non-ideal clocks are presented in Section VI. The analysis of dampers with FIFO constraint is given in Section VII. Section VIII provides a numerical evaluation for an industrial case-study, and Section IX concludes the paper.
II. Related Works
The concept of dampers was introduced by Verma et al. [verma_delay_1991], under the name delay-jitter regulator, in combination with earliest-deadline-first (EDF) scheduling. In this scheme, a per-flow regulator is placed at every node to delay a packet by as much as its earliness in the previous node; the earliness is the time difference between the delay that a packet was supposed to experience and the actual delay that is measured by time-stamping. Later, Zhang et al. [zhang_rate-controlled_1993] proposed Rate-Controlled Static Priority (RCSP) scheduling to avoid the coupling of delay and bandwidth allocation in the EDF schedulers of [verma_delay_1991]. We describe RCSP in Section IV.
The term damper was first used by Rene Cruz [cruz_sced+:_1998] for a conceptual network element that slows down the traffic passing through it. In [cruz_sced+:_1998], dampers are used in combination with SCED (Service Curve Earliest Deadline) scheduling to avoid the extra queuing of [zhang_rate-controlled_1993]. With this scheme, called SCED+, a flow traverses a few virtual paths (each a sequence of switches) with guaranteed service curves and damper curves. Then, at the entrance of each virtual path, for every packet of the flow and every switch in the virtual path, initial and terminal eligibility times are computed using the service and damper curves; a packet is released from a switch between its initial and terminal eligibility times.
Recently, a few implementations of dampers have been proposed that can be used in combination with any scheduling mechanism. Grigorjew et al. [grigorjewMetzgerHossfeldetal_2020] implement a damper as a shaper in combination with Asynchronous Traffic Shaping (ATS), IEEE 802.1Qcr [ieee8021qcr]; we refer to their scheme as jitter-control ATS. It is assumed in [grigorjewMetzgerHossfeldetal_2020] that the input flows are constrained by leaky-bucket arrival curves and that all the elements inside the network, including the switching fabrics, output port queues and the ATS, are FIFO for the packets that share the same queues inside ATS. Rotated gate-control-queues (RGCQ) [rgcq] is an implementation of a damper integrated with the queuing system of a scheduler. Flow-order preserving latency-equalizer (FOPLEQ) [fopleq] is an extension of RGCQ that preserves the per-flow order of the packets according to their entrance to FOPLEQ. Section IV describes the details of these implementations. These previous works do not provide delay analysis, or do so only in restricted settings. In particular, clock non-idealities are ignored. In [thomas_time_2020], clock non-idealities are modelled in the context of time-sensitive networks and the impact on timing analyses is explained in detail. In this paper, we apply this clock model to networks with dampers of various kinds.
Dampers can be used to reduce delay jitter and thus to provide end-to-end services with a low jitter guarantee. An alternative method to provide low jitter, Cyclic Queuing and Forwarding (CQF), also known as Peristaltic Shaper, was introduced by IEEE Time-Sensitive Networking (TSN) [p8021qch], [ieee8021Q, Annex T]. According to CQF, for each priority class, there are two cyclic queues; in each cycle, while one queue is being served, the other enqueues the arriving packets. The cycles change periodically and the queues swap their operations with each other. CQF relies on very different mechanisms and assumptions than dampers; its analysis is out of the scope of this paper.
III. System Model
We consider a network that contains a set of switches or routers, hosts and links with fixed capacities. Every flow follows a fixed path, has a finite lifetime and emits a finite, but arbitrary, number of packets. We consider unicast flows with known arrival curves at their sources (i.e., there are known bounds on the number of bits or packets that can be emitted by a flow within any period of time).
We call jitter-compensated system (JCS) any delay element or aggregate of delay elements with known delay and jitter bounds, for which we want to compensate jitter by means of dampers. This is typically the queuing system on the output port of a switch or router used in time-sensitive networks. It can also be a switching fabric or an input port processing unit, or even a larger system. For time-sensitive flows, a JCS should be able to time stamp packet arrivals and departures using the available local times. It should also increment the damper header field in every time-sensitive packet (if one is present) by an amount equal to an estimate of the earliness of this packet with respect to the known delay upper-bound at the JCS for the class of traffic that this packet belongs to. If no damper header is present, it inserts one, with a value equal to the estimated earliness. (We choose this method of carrying earliness in packet headers for ease of presentation. Another method consists in each JCS inserting a separate damper header: a packet then has as many damper headers as JCSs between dampers, and the earliness to be compensated at a damper is the sum of all these values. The discussion of such methods is out of the scope of this paper, as it does not affect the timing analysis presented here.) The operation of the damper header update (DHU) unit is described in Section IV-C. When a time-sensitive flow crosses a JCS, for actual jitter removal to occur, there must be a downstream damper on the path of the flow. For example, if the JCS is a switch output port, the next downstream damper is typically located on the output port of the next downstream switch.
It is generally neither possible nor necessary to remove delay jitter in all network elements, because time stamping and DHU come with a cost. Therefore, our timing analysis must also consider what we call bounded-delay systems (BDSs), defined as any delay element or aggregate of delay elements with known delay and jitter bounds, for which we do not compensate jitter. Constant delay elements (e.g., an output link propagation delay), variable delay elements with very low jitter (e.g., a very high-speed backbone network) and other delay elements without a DHU unit are examples of BDSs.
A damper is a system that delays every time-sensitive packet, using its local clock, for a duration approximately equal to the damper header (if any, else the damper does not delay the packet). Such a damper header was inserted/updated in the upstream JCSs between this damper and the previous upstream damper or the source of the flow. The damper also resets the damper header, so that the next downstream damper will see only the earliness accumulated downstream of this damper. Designing a stand-alone damper is a challenge, because such a damper may need to release a large number of packets instantly or within a very short time, which might not be feasible. This is why damper implementations are often associated with queuing systems; then, the time at which a damper releases a packet is simply the time at which the packet becomes visible to the queuing system. We classify and model existing designs of dampers in Section IV.
Example 1. Fig. 1 shows an example flow path within a local-area time-sensitive network where we want to compensate the jitter imposed by the output queuing systems and switching fabrics by means of dampers. Therefore, for each of the switching fabrics and the queuing systems, a DHU unit is placed to perform the damper header update; finally, a damper is placed before each queuing system to remove the jitter imposed by the upstream switching fabric and queuing system. For example, the damper in the first switch compensates the jitter imposed by the queuing system of the source and the switching fabric of the first switch. Note that the propagation delay is constant and seen as a BDS. Here, the different clocks need not be synchronized.
Example 2. Fig. 2 shows an example flow path within a large-scale deterministic network. Assume that the backbone network has relatively low delay (with high-speed links, worst-case delays tend to shrink [mathis2019deprecating]), so that the main source of jitter is the access network. For a given class of traffic, we want to remove the jitter imposed by the access network, in particular the forwarding plane and output queuing of each access router (each treated as a JCS); therefore, each of these should have a DHU and a damper upstream of the output queuing system. In this example, the backbone network is modelled as a BDS; also, the source is unaware of any downstream damper and does not have a DHU, and is hence treated as a BDS. The jitter imposed by the access network is removed, but not the jitter caused by the backbone. The different clocks need not be synchronized and the backbone nodes are unmodified.
Example 3. We continue in Fig. 3 with the previous example, but assume now that, for some class of traffic with a very low jitter requirement, the jitter induced by the backbone should also be compensated. In such a case, we need to treat the backbone network as a JCS, i.e., we need to time stamp the arrival of each time-sensitive packet to the backbone and modify its damper header at the departure from the backbone. This can be done as in Fig. 3: at the upstream provider edge (PE) router, a time-stamping unit is added that inserts a field in the packet header equal to the departure time of each time-sensitive packet from the PE router in its local time (this operation can be done within the upstream DHU unit to avoid placing a separate time-stamping unit). At the downstream PE router, a DHU is placed that reads the departure time of the packet from the packet header, removes it from the packet header, computes the earliness with its local clock and finally modifies the damper header. In this case, differently from the previous examples, the time stamping and the DHU are performed with different clocks; therefore, the PE routers should be time-synchronized, as otherwise the computation of earliness is impossible (time synchronization is never absolute and, in Sections III-B and IV-C, we analyze how to account for clock non-idealities). The jitter induced by the backbone network is compensated in the damper placed in the downstream PE router and hence removed. The PE routers must be time-synchronized (in provider networks, they typically are); backbone nodes are unmodified, but deterministic packets carry an additional header field for timestamps.
III-B. Assumptions on the Clocks
We call H_TAI the perfect clock, i.e., the international atomic time (TAI, for Temps Atomique International). In practice, the local clock of a system deviates from the perfect clock [thomas_time_2020]. Typically, the JCSs upstream of a damper operate with different clocks than the damper itself, and this can affect the performance of the damper, as we see in Section V. In time-sensitive networks, clocks can be synchronized or non-synchronized. Non-synchronized clocks are independently configured and do not interact with each other; this corresponds to the free-running mode in [g810, Section 4.4.1]. When clocks are synchronized, using methods like the Network Time Protocol (NTP) [ntp], the Precision Time Protocol (PTP) [ptp], WhiteRabbit [whiterabbit], or the Global Positioning System (GPS) [gps], the difference between the times of occurrence of an event, as measured with different clocks, is bounded by the time-error bound (of the order of microseconds or less with PTP, WhiteRabbit, and GPS; of milliseconds with NTP).
We follow the clock model in [thomas_time_2020], which applies to time-sensitive networks. Consider a clock H_i that is either synchronized with time-error bound Δ, or not synchronized (in which case we set Δ = +∞). Let d^{H_i} [resp. d] be a delay measurement done with clock H_i [resp. in TAI]; then [thomas_time_2020]:

  max( (d^{H_i} − η)/ρ , d^{H_i} − 2Δ ) ≤ d ≤ min( ρ·d^{H_i} + η , d^{H_i} + 2Δ ),   (III-B)
where ρ is the stability bound and η the timing-jitter bound of the clock H_i. Note that this set of bounds is symmetric, i.e., we can exchange the roles of d and d^{H_i} in (III-B). We assume that the parameters ρ, η, Δ are valid for all clocks in the network, i.e., we consider network-wide time-error, stability and timing-jitter bounds. When a flow has α as arrival curve with clock H_i, then an arrival curve in TAI is [thomas_time_2020]:

  α^TAI(t) = α( min( ρ·t + η , t + 2Δ ) ).
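The conversion between local and TAI delay measurements can be sketched in code. This is a minimal illustration (function and variable names are ours), assuming the symmetric bounds d ≤ min(ρ·d′ + η, d′ + 2Δ) and d ≥ max((d′ − η)/ρ, d′ − 2Δ) for a delay d′ measured with a local clock and the corresponding TAI value d:

```python
def tai_delay_bounds(d_local, rho, eta, delta=float("inf")):
    """Bounds in TAI for a delay measured with a non-ideal clock.

    rho: stability bound, eta: timing-jitter bound, delta: time-error
    bound (delta = inf models a non-synchronized clock).
    """
    upper = min(rho * d_local + eta, d_local + 2 * delta)
    lower = max((d_local - eta) / rho, d_local - 2 * delta)
    return max(lower, 0.0), upper  # a delay cannot be negative
```

For a perfect clock (rho = 1, eta = 0), the two bounds collapse to the measured value.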
In a TSN network, bounds on ρ and η are provided in [802.1AS_ieee_2011, Annex B.1.1] and [802.1AS_ieee_2011, Annex B.1.3.1], respectively; if the network is synchronized with gPTP (generic PTP), a bound on Δ is given in [802.1AS_ieee_2011, Section B.3], and if it is not synchronized then Δ = +∞.
III-C. Delay Jitter
For a given flow, call d_n the delay of packet n, measured in TAI. The “worst-case delay” of the flow is D = max_n d_n, where the max is over all non-lost packets sent by the flow during its lifetime. Similarly, the “best-case delay” of the flow is d = min_n d_n. The “delay jitter” (also called “jitter”) is the difference between the two, i.e., J = D − d, so that d_n ≥ D − J for any n. Delay jitter is called IP Packet Delay Variation in RFC 3393 [rfc3393].
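As a small illustration (names are ours), these quantities can be computed from a trace of per-packet delays:

```python
def delay_stats(delays):
    """Worst-case delay, best-case delay and jitter of a flow,
    given the TAI delays of its non-lost packets."""
    worst, best = max(delays), min(delays)
    return worst, best, worst - best  # jitter = worst - best
```

Every packet delay then lies within [worst − jitter, worst].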
IV. Taxonomy of Dampers
As mentioned earlier, designing a damper is a challenge and there exist very different implementations. In this section we classify such implementations in a manner that will be useful for our timing analysis. In the rest of this paper we call “eligibility time” the time at which a damper releases a packet, as in most implementations the packet is not actually moved, but simply made visible to the next processing element.
IV-A. Dampers without FIFO constraints
An ideal damper delays a packet by exactly the amount required by the damper header. Consider a packet n with damper header value h_n that arrives at local time a_n at a damper. The theoretical eligibility time for the packet is:

  E_n = a_n + h_n,   (3)

and the ideal damper releases the packet at time E_n. Jitter-control EDF [verma_delay_1991] is an ideal damper, used in combination with an EDF scheduler.
Many other implementations of dampers use some tolerance for the packet release times, due to the difficulty of implementing exact timings. We call damper with tolerances (δ⁻, δ⁺) a damper such that the actual eligibility time Ê_n of packet n, in local time, satisfies:

  E_n − δ⁻ ≤ Ê_n ≤ E_n + δ⁺.   (4)
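As a minimal sketch (notation and names are ours), the theoretical eligibility time is the arrival local time plus the damper-header value, and a damper with tolerances may release a packet anywhere in the corresponding window:

```python
def theoretical_eligibility(arrival_local, header):
    """E = a + h: arrival local time plus damper-header value."""
    return arrival_local + header

def within_tolerance(actual_release, arrival_local, header, d_lo, d_up):
    """Check the tolerance window E - d_lo <= release <= E + d_up."""
    e = theoretical_eligibility(arrival_local, header)
    return e - d_lo <= actual_release <= e + d_up
```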
The tolerances can vary from hundreds of nanoseconds to a few microseconds, depending on the implementation. Hereafter, we study two instances of dampers with tolerances, namely RCSP [zhang_rate-controlled_1993] and RGCQ [rgcq].
RCSP is an implementation of a damper in combination with a static-priority scheduler; each queue of the scheduler is implemented as a linked list, and the damper is implemented as a set of linked lists and a calendar queue [brown_calendar_1988]. A calendar queue contains a clock and a calendar where each calendar entry points to an array of linked lists (one per priority queue). The clock ticks at every fixed interval Γ. On each clock tick, the linked list pointed to by the current clock time of the calendar is appended to the corresponding priority queue of the scheduler. Whenever a packet arrives, its theoretical eligibility time E_n is computed based on (3); then the actual eligibility time of the packet, Ê_n, is computed by rounding down the theoretical eligibility time, Ê_n = ⌊E_n/Γ⌋·Γ. Then, if Ê_n is equal to the current clock time, the packet is appended to the corresponding priority queue of the scheduler; otherwise, it is appended to the linked list pointed to by the calendar entry for Ê_n. The computation of the theoretical eligibility time comes with some errors in acquiring the true local time on packet arrival and in computation due to finite-precision arithmetic, bounded by ε (typically, of the order of a few nanoseconds). We can see that Ê_n satisfies (4) with δ⁻ = Γ + ε and δ⁺ = ε.
RGCQ, inspired by the idea of Carousel [saeed_carousel_2017], is an implementation of a damper combined with the queuing system of a scheduler; in other words, each queue of a scheduler is replaced with an RGCQ. An RGCQ consists of a timer and a set of gate-control queues (GCQs). By default, the GCQs are closed and are assigned unique increasing openTimes with interspacing Γ. A GCQ is opened whenever the timer reaches its openTime and is closed after it is emptied or after being open for a fixed expiration time; when a GCQ is closed, its openTime is set to the largest existing openTime plus Γ. The scheduler selects a packet for transmission from the open GCQ with the smallest openTime. Whenever a packet arrives, its theoretical eligibility time E_n is computed based on (3); then the actual eligibility time of the packet, Ê_n, is computed by rounding up the theoretical eligibility time, i.e., Ê_n = ⌈E_n/Γ⌉·Γ. Then, the packet is enqueued in the GCQ whose openTime is Ê_n. Similarly to RCSP, due to timing-acquisition and arithmetic-rounding errors bounded by ε, we see that Ê_n satisfies (4) with δ⁻ = ε and δ⁺ = Γ + ε.
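The two rounding policies can be contrasted in a short sketch (names are ours; timing-acquisition errors are ignored here): RCSP rounds the theoretical eligibility time down to the calendar granularity, while RGCQ rounds it up to the next GCQ openTime:

```python
import math

def rcsp_eligibility(e_theoretical, gamma):
    """RCSP: round down to granularity gamma -> lies in (E - gamma, E]."""
    return math.floor(e_theoretical / gamma) * gamma

def rgcq_eligibility(e_theoretical, gamma):
    """RGCQ: round up to the next openTime -> lies in [E, E + gamma)."""
    return math.ceil(e_theoretical / gamma) * gamma
```

This is why, ignoring errors, RCSP can release a packet up to Γ early while RGCQ can release it up to Γ late.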
IV-B. Dampers with FIFO constraint
The definition of damper with tolerances given in the previous section does not say whether the damper preserves packet order, and satisfying (4) does not preclude packet misordering. Indeed, we show in Section VI that our two examples of dampers with tolerances, namely RCSP and RGCQ, can cause packet misordering due to clock non-idealities. Such a behavior is not possible with a class of proposed damper designs that enforce the FIFO constraint, which we now cover.
IV-B1. Re-sequencing damper
We call re-sequencing damper with tolerances (δ⁻, δ⁺) a system that behaves as the concatenation of a damper with the same tolerances and a re-sequencing buffer that, if needed, re-orders packets based on the packet order at the entrance of the damper. The packet order is with respect to a flow of interest.
Formally, a system is a re-sequencing damper if there exists a sequence (Ê_n) such that the release times R_n for the flow of interest, in local time, satisfy:

  E_n − δ⁻ ≤ Ê_n ≤ E_n + δ⁺  and  R_n = max(Ê_1, …, Ê_n),   (5)

where E_n is the theoretical eligibility time defined in (3) and packet numbers are in order of arrival at the damper.
It follows that such a damper is FIFO for the flow of interest and that (the converse does not hold, i.e., a system that is FIFO for the flow of interest and satisfies (6) is not necessarily a re-sequencing damper):

  max_{m≤n} E_m − δ⁻ ≤ R_n ≤ max_{m≤n} E_m + δ⁺.   (6)
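The FIFO-enforcing release rule can be sketched as a running maximum (names are ours); a tentative eligibility time smaller than that of an earlier packet is lifted to preserve order:

```python
def resequencing_release_times(tentative_eligibilities):
    """Release times R_n = max(E^_1, ..., E^_n), in arrival order."""
    releases, running_max = [], float("-inf")
    for e in tentative_eligibilities:
        running_max = max(running_max, e)
        releases.append(running_max)
    return releases
```

Note that the release times are non-decreasing, hence FIFO for the flow of interest.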
We say that a re-sequencing damper is ideal if it has zero tolerances. Hereafter, we describe two instances of re-sequencing dampers, namely SCED+ [cruz_sced+:_1998] and FOPLEQ [fopleq].
SCED+ is an implementation of a damper in combination with SCED scheduling. The damper in [cruz_sced+:_1998] is defined as a conceptual element with a tolerance. Accordingly, each packet is assigned an initial eligibility time and a terminal eligibility time, where the difference between the two is the tolerance. In SCED+, the damper ensures that a packet is released after its initial eligibility time and before its terminal eligibility time. Specifically, the damper assigns a tentative eligibility time Ê_n to a packet n, between its initial and terminal eligibility times.
SCED+ assumes that the damper serves packets in a FIFO manner; the actual eligibility time of packet n is then the running maximum max(Ê_1, …, Ê_n), so that SCED+ is a re-sequencing damper, with tolerances that account for ε, a bound on the errors in timing acquisition and arithmetic rounding.
FOPLEQ, similarly to RGCQ, is inspired by the architecture of Carousel [saeed_carousel_2017]. Accordingly, it has a set of time-based queues along with a table, called the eligibility time table (ETT), used to preserve the order of packets inside FOPLEQ. Each row of the ETT belongs to a flow that has a packet in the Carousel and stores the tentative eligibility time of the latest packet of that flow. Consider a packet n of the flow of interest, where the number n is in the order of arrival at the FOPLEQ. First, a theoretical eligibility time E_n is computed using (3); second, a tentative eligibility time is obtained by rounding E_n down to a multiple of Γ, i.e., ⌊E_n/Γ⌋·Γ; then, the actual eligibility time of the packet is the maximum of its tentative eligibility time and the tentative eligibility time stored for the flow of interest in the ETT. The tentative eligibility times correspond to a damper with tolerances δ⁻ = Γ + ε and δ⁺ = ε, where ε is a bound on the errors in timing acquisition and arithmetic rounding; therefore, FOPLEQ is a re-sequencing damper with these tolerances.
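The ETT mechanism can be sketched as follows (a simplified model with names of ours; the real FOPLEQ maintains one row per flow currently present in the Carousel):

```python
import math

def fopleq_eligibility(e_theoretical, gamma, flow_id, ett):
    """Actual eligibility: max of the rounded-down tentative time and
    the tentative time stored in the ETT for the same flow."""
    tentative = math.floor(e_theoretical / gamma) * gamma
    actual = max(tentative, ett.get(flow_id, float("-inf")))
    ett[flow_id] = actual  # remember for the next packet of this flow
    return actual
```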
IV-B2. Head-of-line (HoL) damper
The idea was introduced in [grigorjewMetzgerHossfeldetal_2020]. A HoL damper is implemented as a FIFO queue. When a packet arrives, its arrival time is collected and the packet is stored at the tail of the queue. Only the packet at the head of the queue is examined; if its eligibility time has passed, it is immediately released, otherwise it is delayed and released at its eligibility time. When the head packet is released, it is removed from the damper queue and the next packet (if any) becomes the head of the queue and is examined. When an arriving packet finds an empty queue, it is immediately examined. By construction, packet ordering is preserved.
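The behavior just described can be sketched as follows (names are ours; tolerances and processing times are taken as zero, i.e., an ideal HoL damper):

```python
def hol_release_times(packets):
    """packets: list of (arrival, eligibility) pairs in FIFO order.
    A packet is examined when it reaches the head of the queue and is
    released at max(examination time, its eligibility time)."""
    releases, prev_release = [], float("-inf")
    for arrival, eligible in packets:
        head_time = max(arrival, prev_release)  # when it becomes head
        prev_release = max(head_time, eligible)
        releases.append(prev_release)
    return releases
```

A packet whose eligibility time has already passed when it becomes head is released immediately, as in the description above.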
As before, the model should incorporate some tolerance to account for timing inaccuracy and for processing times. Unlike with the previous damper models, these two aspects cannot be aggregated, because with the head-of-line property, processing times may have an effect on subsequent packets (this is visible in Theorem 4).
Formally, we model a head-of-line damper as follows. It has tolerance parameters (δ⁻, δ⁺) that account for the accuracy of timings, as well as processing bounds (π⁻, π⁺) that account for non-zero processing times. We must have δ⁻, δ⁺ ≥ 0 and 0 ≤ π⁻ ≤ π⁺. Packet numbering is with respect to the order of arrivals at the damper and is global for this damper (not per-flow). We say that a system is a head-of-line damper if the release times R_n, in local time, satisfy:

  max(E_n − δ⁻, R_{n−1} + π⁻) ≤ R_n ≤ max(E_n + δ⁺, R_{n−1} + π⁺),   (9)

where E_n is the theoretical eligibility time as in (3).
The definition in (9) can be explained as follows. First, the eligibility times are obtained with some errors due to timing acquisition and arithmetic rounding. Let Ê_n be the resulting tentative eligibility times, so that

  E_n − δ⁻ ≤ Ê_n ≤ E_n + δ⁺.   (10)

Second, packet n is examined only when packet n−1 is released, and this action takes a processing time π_n ∈ [π⁻, π⁺]. The actual release time is therefore

  R_n = max(Ê_n, R_{n−1} + π_n).   (11)
If the tolerances and processing bounds are all equal to zero, then the HoL damper is called ideal. It follows immediately from (9) and the definition in Section IV-B1 that an ideal HoL damper is the same as an ideal re-sequencing damper.
In [grigorjewMetzgerHossfeldetal_2020], jitter-control ATS is presented as an ideal head-of-line damper in combination with Asynchronous Traffic Shaping [ieee8021qcr] within a switch where each FIFO queue is shared among all time-sensitive flows that come from the same input port, have the same class, and go to the same output port. In [grigorjewMetzgerHossfeldetal_2020], the authors implicitly assume that the tolerances and processing times are zero and therefore ignore them in their analysis. This assumption might not hold in practical cases, specifically when a large number of packets become eligible at the same time in a jitter-control ATS. The effect of non-zero tolerances and processing times appears in Theorem 4 and is illustrated numerically in Section VIII.
IV-C. Damper Header Computation
In this subsection, we first describe the operation of the damper header update (DHU) unit. Then, we discuss the possible sources of error in the computation.
IV-C1. DHU unit operation
The DHU unit of a JCS computes the earliness of a packet and updates the damper header. A classical approach to compute the earliness is to first measure the actual delay of the packet in the JCS with the clock of the DHU unit, and then set the earliness as the difference between the known delay bound of the system for this class of traffic and the actual delay of the packet [verma_delay_1991, zhang_rate-controlled_1993]. More precisely, for a packet n, its arrival time is time-stamped with the local clock and stored locally (Examples 1 and 2 in Section III-A) or delivered by the packet (Example 3 in Section III-A); let A_n denote the stored/delivered value. Then the DHU unit time-stamps the departure time of the packet with its local clock; let D_n denote the departure time. Then, the DHU unit computes the earliness of the packet as

  e_n = B − (D_n − A_n),   (12)

where B is the known delay upper-bound of the JCS for the class of the packet.
The last step for the DHU unit is to compute the updated damper header, equal to the current damper header incremented by the computed earliness, and to write the result in the damper header field. Then the packet leaves the JCS. In the case that the JCS is connected to an output link, the departure time of a packet is the end of the packet transmission; thus, the packet header is accessible to write the damper header only just before packet transmission. Therefore, the start of the transmission of the packet is time-stamped (call it S_n), and the transmission time is inferred as l_n/c, with l_n the packet length and c the transmission rate. Then, we set the departure time to D_n = S_n + l_n/c and compute the earliness using (12). This method of damper header computation is used in most of the existing damper variants, such as RCSP [zhang_rate-controlled_1993], FOPLEQ [fopleq] and jitter-control ATS [grigorjewMetzgerHossfeldetal_2020]. We call this the default method of damper header computation.
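The default method can be sketched as follows (names are ours; delay_bound stands for the known delay bound of the JCS for the class of the packet):

```python
def earliness(delay_bound, arrival_ts, departure_ts):
    """Default earliness: known delay bound minus measured delay."""
    return delay_bound - (departure_ts - arrival_ts)

def departure_from_link(start_tx_ts, length_bits, rate_bps):
    """Infer the departure (end of transmission) when the JCS feeds a link."""
    return start_tx_ts + length_bits / rate_bps

def updated_header(header, delay_bound, arrival_ts, departure_ts):
    """New damper header: old header incremented by the earliness."""
    return header + earliness(delay_bound, arrival_ts, departure_ts)
```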
Recently, [rgcq] proposed a subtle change in the computation of earliness when a JCS comes immediately after a damper with tolerances (δ⁻, δ⁺). In particular, they suggest to time stamp the theoretical eligibility time E_n of the packet at the damper instead of the arrival time at the JCS; as a consequence, the jitter due to the tolerance of the damper is compensated by the next downstream damper. With this proposal, note that the delay upper-bound from the theoretical eligibility time to the arrival time at the JCS (i.e., to the actual eligibility time of the damper) should be added to the earliness; by (4), this upper bound is δ⁺. Hence, the earliness for theoretical-eligibility time stamping is:

  e_n = B + δ⁺ − (D_n − E_n).   (13)
We call this method of damper header computation TE time-stamping.
IV-C2. Errors in damper header computation
The DHU unit computes a damper header equal to the current damper header incremented by the computed earliness, and writes the result in the damper header field. This step is imperfect due to finite-precision arithmetic and the finite resolution of the damper header field; the corresponding error is the difference between the theoretical value of the damper header and the actual value written in the packet.
In the computation of earliness ((12) and (13)), when the arrival time stamp is delivered within a packet header field (Example 3 in Section III-A), some error is induced by the finite resolution of that header field; the corresponding error is the difference between the value time-stamped at the packet arrival at the JCS and the value delivered in the header.
As discussed earlier, when the JCS is connected to a transmission link, the DHU unit can infer the transmission time by dividing the packet length by the nominal transmission rate of the link. Due to the transmission of the preamble and inexact knowledge of the actual transmission rate, the inference of the transmission time comes with some error, namely the difference between the actual and inferred transmission times. This error can go up to tens of nanoseconds [intelMegaCore, 10gMac].
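As an illustration of the order of magnitude, here is a sketch assuming an Ethernet-style 8-byte preamble+SFD (an assumption, not stated in the text):

```python
def tx_time_inference_error(length_bytes, nominal_rate_bps, actual_rate_bps,
                            preamble_bytes=8):
    """Sketch: the actual transmission carries the preamble and runs at the
    true line rate, while the DHU infers only length/nominal_rate."""
    actual = (length_bytes + preamble_bytes) * 8.0 / actual_rate_bps
    inferred = length_bytes * 8.0 / nominal_rate_bps
    return actual - inferred
```

With equal nominal and actual rates at 1 Gb/s, the preamble alone contributes 64 ns, consistent with the tens-of-nanoseconds figure cited above.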
Acquiring the true local time on packet arrival and within the DHU unit usually comes with an error. We define the clock acquisition error with respect to the true local times at packet arrival and departure.
The two clocks are often the same (Examples 1 and 2 in Section III-A), but not always (Example 3 in Section III-A). We select one as the reference clock of a JCS to compute the damper header; we then define the error with respect to the reference clock as the difference from the time that would be displayed at packet arrival if the reference clock were used. If the clocks are the same, this error is zero; if the clocks are synchronized with some error with respect to TAI, the error is bounded accordingly; and if the clocks are not synchronized, the error can become arbitrarily large, which is incompatible with the goal of removing jitter. Therefore, in this paper, we assume that both clocks are either one and the same, or are synchronized.
To summarize, the value of a damper header, as written in a packet, suffers from an error equal to the sum of the above error terms. Each of these sources of error can be bounded, depending on the technology used by the routers and switches; we denote an upper bound on the total error accordingly. When both clocks are one and the same, this bound is typically of the order of tens of nanoseconds; if they are synchronized, it is mainly dominated by the time-error bound (e.g., for gPTP).
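Since the individual error sources are bounded independently, a worst-case bound on the header error is simply their sum; a sketch with hypothetical names for the five bounds discussed above:

```python
def header_error_bound(e_resolution, e_timestamp, e_tx, e_clock, e_ref):
    """Worst-case damper-header error bound (sketch): sum of the bounds on
    quantization, time-stamp delivery, transmission-time inference, clock
    acquisition, and reference-clock errors."""
    return e_resolution + e_timestamp + e_tx + e_clock + e_ref
```

For example, with errors in the nanosecond range and a 50 ns transmission-time inference error, the total stays in the tens of nanoseconds, as stated above for the same-clock case.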
V Delay Analysis of Dampers Without FIFO Constraints
In this section we study the end-to-end delay and jitter of a flow when dampers without FIFO constraint are used. The first step is to decompose a flow path into a set of blocks that can be analyzed separately. Every block is as in Fig. 4; it contains a number of JCSs and BDSs and ends in a damper with tolerances. The second step, given in the rest of this section, is to give delay and jitter bounds for a flow through such a block. The bounds are valid whether the JCSs and the BDSs of the block are FIFO or not. The last step, to obtain end-to-end results, simply consists in summing up the delays and jitters of every block and, possibly, of remaining BDSs. For example, in Fig. 2, the segments from the source to the first router and from the queuing system of the last router to the destination are BDSs, and the rest is decomposed into blocks as in Fig. 4.
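The last step (summing per-block results along the path) can be sketched as follows; the triple representation is an illustrative choice:

```python
def end_to_end_bounds(blocks):
    """Sum per-block (delay lower-bound, delay upper-bound, jitter) triples
    along the path; remaining BDSs can be passed as additional triples."""
    lower = sum(b[0] for b in blocks)
    upper = sum(b[1] for b in blocks)
    jitter = sum(b[2] for b in blocks)
    return lower, upper, jitter
```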
In the following we give delay and jitter bounds for a block as in Fig. 4. Theorem 1 gives the result for dampers without FIFO constraint when the default method of header computation is used (as explained in Section IV-C) and Theorem 2 when TE time-stamping is used. In both cases, we capture the effect of errors and non-ideal clocks. We also illustrate cases where the errors and non-ideal clocks make a major contribution to the jitter bound.
Consider a flow of interest that traverses the block in Fig. 4. The block contains a sequence of JCSs and BDSs and terminates in a damper with tolerances (). Assume that the clock of every system has stability bound , timing-jitter bound and time-error bound with respect to TAI (Section III-B). Assume that JCS has a delay upper bound , which is used for damper header computation, in its local time. Also, assume that the BDS has delay lower and upper bounds and and jitter bound , all in TAI. Then, the delay of a packet from the entrance to the exit of the block, in TAI, is upper-bounded by , lower-bounded by and has jitter bound , with
where and are due to clock non-idealities,
The bounds are tight, i.e., for any tolerances (), every , any , , , there is a system and two individual execution traces such that in one of them a packet experiences a delay of and in the other one a packet experiences a delay of .
The proof is in Appendix A-B.
The second arguments of the functions in (1) capture the impact of clock time error bounds when all the clocks ( JCSs and one damper with tolerance) are different from each other. If some systems share a common clock, so that there are different clocks in total, the second argument of the functions should be replaced by .
Hereafter, we provide an application of Theorem 1 to obtain delay and jitter bounds for the three examples of Section III-A. Then we compare the bounds with the basic bounds obtained when assuming that clocks are perfect, i.e. by summing the tolerances of dampers and the jitters of BDSs.
Example 1. Consider Example 1 in Section III-A for a flow that traverses switches. Assume that the switching fabrics (as JCSs) have a delay upper-bound of s and the delay bound at each queuing system (as a JCS) is s. Suppose that the error ns and all the dampers are RCSP with sns. Assume the propagation delay (as a BDS) is fixed and equal to s for all links. Assume first that the clocks of the switches are not synchronized, i.e. we set the time-error bound to in (1). Then, by applying Theorem 1 from source to the output of the first damper, the delay upper-bound is s; the delay jitter is s of which ns is due to errors and ns is due to non-ideal clocks. The basic jitter bound is s and is due to the tolerance of the first damper. We can see that the error and non-ideal clocks add ns () to the basic jitter bound. The end-to-end delay and jitter bounds are computed by summing up the delay and jitter bounds from the output of one damper to the output of the next downstream damper until the destination; this gives an end-to-end delay upper-bound of ms and end-to-end jitter bound of s. The values remain the same if we assume next that the clocks of the switches are synchronized with a time-error bound of s, as is typical in IEEE TSN systems.
Example 2. Consider Example 2 in Section III-A for a flow that traverses four access routers to reach the backbone and traverses four other access routers to reach the destination. Assume that 1) the queuing delay at the source has a delay upper-bound of s and a jitter bound of s, 2) the delay upper-bounds at the output queuing and packet forwarding of each access router are s and s and the output queuing has jitter of s, 3) the backbone network has a delay upper-bound of ms and jitter bound of ms, 4) the propagation delay is s, 5) the error ns and 6) all dampers are RCSP with sns. Then, by using Theorem 1, the end-to-end delay upper-bound is ; the delay jitter is ms, of which s is due to the errors and ns is due to non-ideal clocks. The basic jitter bound is ms, which is due to the tolerance of the dampers, as well as the BDSs, i.e., the backbone network, the source output queuing and the output queuing of the last access router (before the destination). We can see that here the effect of the errors and non-ideal clocks is negligible compared to the jitter of the BDSs captured by the basic jitter bound.
Example 3. Consider Example 3 in Section III-A for a flow that traverses four access routers to reach the backbone and traverses four other access routers to reach the destination. Also, consider the same numerical assumptions as in the previous example. We want to remove the jitter of the backbone network using time stamping at the upstream PE router. Assume that the PE routers are synchronized with an error bound of s; hence the error in damper header computation at the downstream PE router of the backbone network is bounded by s (the error at the other JCSs is bounded by ns). Then, by using Theorem 1, the end-to-end delay upper-bound is ms; the delay jitter is s, of which s is due to the errors and s is due to non-ideal clocks. Compared to the previous example, the basic jitter bound is reduced to s as the jitter of the backbone network, the queuing at the source and the queuing at the last access router are removed. We can see that the errors and clock non-idealities add s () to the basic jitter bound.
We see from these examples that, when the end-to-end delay-jitter that remains after applying dampers is still large (ms or more, as in Example 2), the timing errors and clock non-idealities do not play a significant role and can be ignored. In contrast, for very small residual delay-jitter (Examples 1 and 3, s or less), ignoring timing errors and clock non-idealities can lead to significant under-estimation.
In Example 1, we see that the delay-jitter bounds are not affected by the time-error bound, i.e., here, time synchronization does not improve the performance of dampers. We can easily analyze when this is the case, by comparing the terms in the functions in (1). We find that time synchronization does not improve the performance of dampers if and only if
It follows that if , the time error bound affects the delay and jitter bounds of Theorem 1; i.e., it is the relation between the delay (not delay-jitter) and the time-error bound that matters (see Table I).
TABLE I. Columns: synchronization method, time-error bound, and the corresponding minimum value of the delay threshold.
TSN networks are typically synchronized with gPTP () [802.1AS_ieee_2011, Section B.3]. Three main delay-sensitive classes are Control-Data Traffic (CDT), class A for audio traffic and class B for video traffic. According to TSN documents [ieeeAVB, cdt_delay, tsn-profile-service-provider], the end-to-end delay requirements for CDT, classes A and B are respectively s in hops, ms and ms in hops. According to Table I, gPTP synchronization does not impact the delay bound obtained using Theorem 2 for CDT and class A. For class B, if we consider that for each block the sum of JCS delay bounds is less than ms, gPTP synchronization similarly does not play a role. This implies that when all switches and the destinations in a TSN network implement dampers with tolerances and the source performs time stamping (as a JCS), the same performance is achieved even without gPTP synchronization.
In order to provide delay and delay-jitter guarantees to time-sensitive flows, it is often required to bound the burstiness of flows inside the network, which is typically larger than at the source. Finding such bounds may be difficult, and worst-case bounds may be large when there are cyclic dependencies [thomas_on-cyclic_2019]. Here, dampers can help a lot, as shown by the following Corollary, which comes by direct application of the jitter bound in Theorem 1 and [mohammadpour2020packet, Lemma 1].
In Example 1 of Section III-A, suppose that the flow has a leaky-bucket arrival curve with rate Mbps, in TAI, and burstiness KBytes at the source. We computed that the jitter bound is s from the source to the output of the first damper. Then the arrival curve at the output of the first damper has the same rate and the burstiness is increased by Bytes. Without a damper, the burstiness increase would be Bytes: we see that the burstiness increase due to multiplexing is almost entirely removed.
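The burstiness propagation in this corollary can be sketched numerically as follows; the rate, burst, and jitter values are illustrative, not the paper's:

```python
def propagate_leaky_bucket(rate_bps, burst_bytes, jitter_s):
    """Leaky-bucket arrival-curve propagation through a jitter-bounded block
    ([mohammadpour2020packet, Lemma 1], as used in the corollary): the rate
    is unchanged and the burstiness grows by rate * jitter, here converted
    from bits to bytes."""
    return rate_bps, burst_bytes + (rate_bps / 8.0) * jitter_s
```

For instance, a 20 Mbps flow with 2000-byte burstiness through a block with 10 us jitter gains only 25 bytes of burstiness.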
When dampers use TE time-stamping for damper header computation rather than the default method, the delay and jitter bounds are slightly different from those in Theorem 1. A JCS is affected only when the upstream damper uses TE time-stamping (otherwise, the bounds are the same). The next theorem gives end-to-end delay and jitter bounds when the damper header update uses TE time-stamping.
Consider Fig. 5 where a sequence of blocks is concatenated and TE time-stamping is used for damper header computation. Assume that clocks follow the description in Section III-B and that the damper with tolerances at block () and JCS operate with the same clock. Assume that there are JCSs in total. Let us denote the sum of the delay bounds of the JCSs as and the sums of the delay lower and upper bounds and jitter bounds of the BDSs as and respectively. Then,
where and are the errors due to non-ideal clocks:
The proof is available in Appendix A-C.
Let us redo the end-to-end delay and jitter bound computation for the three examples of Section III-A using Theorem 2 and compare the results with those obtained with Theorem 1, under the same assumptions. The delay upper-bounds obtained by Theorem 2 are the same as those computed with Theorem 1; however, the end-to-end jitter is reduced. Using Theorem 2, the end-to-end jitter bound for Example 1 is , for Example 2 is , and for Example 3 is . The reason for the jitter-bound reduction with Theorem 2 is that the jitter imposed by the tolerance of each intermediate damper is eliminated by the next downstream damper. In Examples 1 and 3, the jitter bounds are considerably reduced, by and ; this is not the case for Example 2, as the main sources of jitter there are the BDSs. Furthermore, the jitter imposed by the errors and the non-ideal clocks accounts for and of the end-to-end jitter bounds computed for Examples 1 and 3, which are respectively and times the basic jitter bounds.
VI Packet Reordering in Dampers Without FIFO Constraints
In this section we show that dampers without FIFO constraints can cause packet reordering, and we quantify the corresponding reordering metrics.
Obviously, a damper modifies the packet order if the sequence of theoretical eligibility times is not monotonic. Since the theoretical eligibility time is equal to the arrival time at the JCS plus a constant, this may occur only if the packet order at the entrance of the damper differs from that at the entrance of the JCS, i.e., it requires the JCS to be non-FIFO. But, as we show next, reordering may occur even if the JCS is FIFO, due to timing inaccuracies.
RGCQ and RCSP are two instances of dampers with tolerance; by design, they avoid packet reordering due to the tolerances by enforcing FIFO behavior after computation of theoretical eligibility times. However, as we show next, packet reordering may still occur within RGCQ and RCSP due to the errors of damper header computation and non-ideal clocks.
Re-ordering example with RGCQ. Consider Fig. 6 where the damper is RGCQ with tolerances () and clocks are not synchronized. Assume that the JCS represents a FIFO queue connected to a transmission line with a fixed rate and the BDS has zero jitter and represents constant propagation delay (similar to the first hop in Example 1 of Section III-A). Suppose that two packets and enter the JCS at the same time while is prior to . Then packet leaves before packet . The damper headers in the packets are:
where is the inferred transmission time of the packets measured with . Then, the interspacing of the two packets at the output of the JCS is:
where is the actual transmission time of packet and is the departure time of packet from the JCS. Both packets experience the same delay in the BDS. Therefore, the interspacing between the two packets at the entrance of RGCQ when seen with clock is:
The difference between the theoretical eligibility times is the sum of the error between the actual and inferred transmission times and the measurement difference of the packet transmission time seen from clocks and . Therefore, if it happens that clock is faster than during the transmission time of packet from the JCS, then , hence , i.e., packet has a smaller theoretical eligibility time than packet . Then, by the implementation of RGCQ discussed in Section IV, packet leaves the RGCQ before packet .
In this scenario, reordering occurs because of the difference in speed between the two clocks at the microscopic scale and the error in inferring the transmission time. The earliness of a packet written in the header is measured using the local clock of the JCS while the delay imposed on the packet is measured in the RGCQ using its own local clock. Even if both systems are time-synchronized, there still remains a small difference in the time measurements performed by the two clocks. Over the transmission time of a packet, there is an equal chance that either clock ticks slightly faster than the other, i.e., there is a 50% chance that the change of order described in this scenario occurs.
Re-ordering example with RCSP. Consider Fig. 7 where the dampers are RCSP. Assume that the first JCS represents a router, the second JCS is a FIFO queue connected to a transmission line with a fixed rate and the BDS has zero jitter and represents a constant propagation delay; this resembles the first and second access routers in Example 2 of Section III-A. Now, focus on the first JCS and the first RCSP. Suppose two packets and enter the JCS with interspacing , measured in TAI (e.g. transmission time of packet from source, when packets are sent back-to-back from source), i.e.,
Let denote the delay difference of two packets from entrance of the JCS to the theoretical eligibility time of the concrete damper in TAI, i.e.,
which shows the interspacing between the theoretical eligibility times of two packets at the RCSP in TAI. Observing this interspacing with , we obtain
where is the difference in the measurement of the interspacing between and , which is bounded by (III-B). Now, the actual eligibility times of the packets from the RCSP are obtained by taking the floor of the theoretical eligibility times divided by the slot duration. Hence, if
, the two packets may have the same actual eligibility time (become back-to-back from the RCSP). The probability of this phenomenon is
which implies that the smaller the interspacing of the two packets when entering the JCS, the larger the probability that the two packets have the same actual eligibility time and leave the RCSP back-to-back. When two packets are back-to-back from one RCSP, then, similarly to scenario 1, there is a 50% chance of reordering for the two packets at the output of the next downstream RCSP. Due to the independence of the two events (being back-to-back at the output of the RCSP and reordering at the next downstream RCSP), the chance of reordering is the product of the two probabilities.
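This two-stage argument can be sketched as follows, under a uniform-phase assumption (not stated in the text) for how eligibility times fall relative to RCSP slot boundaries; names and values are illustrative:

```python
def rcsp_reorder_probability(interspacing_s, slot_width_s):
    """Sketch: two theoretical eligibility times separated by `interspacing_s`
    land in the same RCSP slot (floor to slot_width_s) with probability
    max(0, 1 - interspacing/slot) under a uniform-phase assumption; given a
    back-to-back departure, the next downstream RCSP reorders them with
    probability 1/2."""
    p_same_slot = max(0.0, 1.0 - interspacing_s / slot_width_s)
    return p_same_slot / 2.0
```

Packets entering half a slot apart are reordered with probability 0.25; packets more than a slot apart are never reordered in this model.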
One approach to tackle the reordering issue of dampers with tolerance is to place re-sequencing buffers after the dampers to correct the reordering that they cause. With this approach, it is crucial to find a proper time-out value and size for the re-sequencing buffers. As shown in [mohammadpour2020packet], two reordering metrics, namely reordering late-time offset (RTO) and reordering byte offset (RBO), respectively give the time-out value and the size of a re-sequencing buffer. We obtain these metrics for dampers as a direct result of [mohammadpour2020packet]:
Consider Fig. 4 and a flow that has arrival curve at the entrance of the block. Then, the RTO for the flow from the entrance of the block to the output of the damper with tolerance, measured in TAI, is and the corresponding RBO is :
where is the jitter bound of the block, computed in Theorem 1, and is the minimum packet length of the flow.
Consider Example 1 of Section III-A with the same assumptions made after Theorem 1. Suppose that a flow has a leaky-bucket arrival curve with rate Mbps, in TAI, burstiness KBytes at the source, and minimum packet length Bytes. We computed that the jitter bound is s from the source to the output of the first damper. Then the RTO (time-out value) is s and the RBO (required buffer size) is Bytes.
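The buffer-dimensioning step can be sketched as follows, assuming (as the shape of the statement suggests, but not confirmed by the elided formulas) that the RTO equals the block jitter bound and that the RBO is the leaky-bucket arrival curve evaluated at the RTO minus the minimum packet length; all values are illustrative:

```python
def resequencing_buffer_dims(rate_Bps, burst_bytes, jitter_s, l_min_bytes):
    """Sketch: RTO (time-out) taken as the block jitter bound; RBO (buffer
    size) as alpha(RTO) - l_min with alpha(t) = rate*t + burst, following the
    shape of the metrics in [mohammadpour2020packet]."""
    rto = jitter_s
    rbo = rate_Bps * rto + burst_bytes - l_min_bytes
    return rto, rbo
```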
Another approach to tackle reordering is to use dampers with FIFO constraints, as discussed in Section IV and analyzed in the next section.
VII Analysis of Dampers with FIFO Constraints
As mentioned earlier, one way to avoid packet reordering within dampers with tolerance is to replace them with dampers with FIFO constraints, namely re-sequencing and HoL dampers. The goal of this section is to provide delay and jitter bounds when dampers with FIFO constraints are used. In this context, “FIFO” and “re-sequencing” are with respect to the aggregate of all packets that use the damper of interest. When all the BDSs and JCSs within a flow path are FIFO, using re-sequencing or HoL dampers, in contrast to dampers with tolerances, provides end-to-end in-order packet delivery. However, this might impact the delay and jitter bounds computed in Theorem 1. To this end, Theorem 3 and Theorem 4 capture the impact, on the delay and jitter bounds, of using re-sequencing or HoL dampers instead of dampers with tolerances when all systems are FIFO. Then, we see in Theorem 5 and Theorem 6 that the presence of a non-FIFO system (BDS or JCS) in the flow path considerably worsens the bounds obtained when all systems are FIFO. This phenomenon does not occur with dampers without FIFO constraint because the results in Section V hold whether the JCSs and BDSs are FIFO or not.
Consider the block of systems in Fig. 8 where all the JCSs and BDSs are FIFO and the damper is an instance of a re-sequencing damper with tolerances . Assume that the clocks follow the description in Section III-B. Then, the delay and jitter bounds of the block, in TAI, are the same as the bounds in Theorem 1.
The proof is in Appendix A-D. It consists of two steps. First, we use an abstraction of a re-sequencing damper with tolerances as a damper with tolerances followed by a re-sequencing buffer that preserves the order of packets at their entrance to the damper with tolerances. Second, by the re-sequencing-for-free property of re-sequencing buffers [mohammadpour2020packet], we obtain the bounds.
We have seen in the previous section that even if all BDSs and JCSs in a flow path are FIFO, dampers with tolerance may cause packet reordering due to the tolerances, non-ideal clocks and errors in damper header computation. Theorem 3 indicates that in such a case, placing a re-sequencing damper avoids packet reordering while keeping the same delay and jitter bounds as if dampers with tolerances were used.
Consider the block of systems in Fig. 8 where all the JCSs and BDSs are FIFO and the damper is an instance of head-of-line dampers with tolerances () and processing-time bounds (). Assume that the clocks follow the description in Section III-B. Then if , the delay and jitter bounds are the same as the bounds in Theorem 1. Otherwise, for a flow with per-packet arrival curve at the entrance of the block,
the delay upper-bound is increased by ,
the delay lower-bound is increased by ,
the jitter bound is increased by ,
where is a delay upper-bound of a single-server FIFO queue with maximum processing time of , computed as
where is the jitter bound computed in Theorem 1 and
The proof is in Appendix A-E and consists of two steps. First, we prove that a HoL damper is equivalent to a re-sequencing damper with tolerances () followed by a single-server FIFO queue with service times within (). Second, using the bounds of Theorem 3 and obtaining delay and jitter bounds for the single-server queue, the theorem is proven.
HoL dampers, in contrast to re-sequencing dampers, impose some queuing delay, captured by the term in (28). The queuing delay is maximized for the last packet of a packet sequence when all packets become eligible at the same time; since the HoL damper examines only the packet at the head of the queue, the last packet of the sequence is delayed by the processing of all the preceding packets.
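The worst case described above can be sketched directly; the function name and values are illustrative:

```python
def hol_worst_case_wait(n_packets, proc_time_max_s):
    """HoL damper worst case (sketch): if n packets become eligible at the
    same instant, the last one waits for the head-of-line processing of all
    its predecessors."""
    return (n_packets - 1) * proc_time_max_s
```

For example, 5 simultaneously eligible packets with a 2 us per-packet processing bound delay the last packet by up to 8 us.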
So far we have provided delay and jitter bounds, as well as the propagation of arrival curves, when all the systems are FIFO; however, the FIFO condition might not always be met, for instance with multi-stage switching fabrics, multi-path routing of packets, or packet duplication [bennett_packet_1999, laor_effect_2002, jaiswal_measurement_2007]. Hence, in the following theorems we capture the impact of non-FIFO behavior on the delay and jitter bounds when re-sequencing and HoL dampers are used.
Consider Fig. 9 where system (a BDS or a JCS) is the last non-FIFO system in the block and the damper is an instance of a re-sequencing damper. Denote by the jitter from JCS 1 to system (included), in TAI. Then, the delay upper-bound and jitter bound of the block, in TAI, are increased by compared to the bounds in Theorem 3.
The bounds are tight, i.e., for any packet that experiences a delay equal to , there is a system and an execution trace in which another packet experiences a delay equal to .
The proof is in Appendix A-F. The proof has two parts. First, we show that delay upper-bound is increased by while the delay lower-bound remains unchanged. Second, we provide a scenario where two packets with interspacing enter the block and leave the element back-to-back while their order is changed. We show that when the second packet experiences a delay , the first packet experiences a delay of and the second packet leaves the re-sequencing damper before the first packet.
Consider Fig. 9 where system (a BDS or a JCS) is the last non-FIFO system in the block and the damper is an instance of a HoL damper. Denote by the jitter from JCS 1 to system (included), in TAI. Then, compared to Theorem 4, the delay upper-bound and jitter bound of the block, in TAI, are increased by if , and by if .
The proof is in Appendix A-G and consists of two steps. First, similarly to the proof of Theorem 3, we abstract a HoL damper as a re-sequencing damper followed by a single-server FIFO queue. Second, by summing the bounds obtained in Theorem 5 and the bounds on the FIFO queue, the statement is proven. In the case , the bounds are increased once by within the re-sequencing damper and once within the FIFO queue, as a result of the propagated arrival curve at the output of the re-sequencing damper.
Similarly to Corollary 1, the propagated arrival curve of a flow, with arrival curve at the entrance of a block, is at the output of the re-sequencing or HoL damper, where is the jitter of the block computed by applying the corresponding theorem.
Theorem 5 and Theorem 6 show that when there is a non-FIFO system in a block, the placement of a damper with FIFO constraint is counterproductive. First, compared to the placement of dampers with tolerances, the jitter is increased; as a result, the burstiness of the propagated arrival curve increases. Second, the damper with FIFO constraint preserves the incorrect packet order introduced by the non-FIFO system.
VIII Numerical Evaluation
We illustrate our theoretical results on the Orion crew exploration vehicle network, as described in [obermaisser_time-triggered_2012] and depicted in Fig. 10, taken from [thomas_on-cyclic_2019]. For the delay and jitter analysis, we used Fixed Point TFA [thomas_on-cyclic_2019, mifdaoui_beyond_2017] as there are cyclic dependencies. The device clocks are not synchronized. The link rates are Gbps. The output ports use a non-preemptive TSN scheduler with Credit-Based Shapers (CBSs) with per-class FIFO queuing [mohammadpour_latency_2018, zhao_timing_2018]; from highest to lowest priority, the classes are Control Data Traffic (CDT), A, B, and Best Effort (BE). The CBSs are used separately for classes A and B. The CBS parameters are set to and of the link rate respectively for classes A and B [zhao_timing_2018]. In each switch, the switching fabric has a delay between s and s [nexus9508]. The CDT traffic has a leaky-bucket arrival curve with rate kilobytes per second and burst bytes. The maximum packet length of classes B and BE is bytes. We focus on class A. Using the results in [mohammadpour_latency_2018], a rate-latency service curve offered to class A is bytes with s.
Class A contains flows with a constant packet size of bytes, which transmit packets every ms. The flows traverse between and hops. We assume that all switching fabrics and output queuing systems implement the DHU unit and therefore are JCSs; the propagation delays are considered as BDSs with zero jitter. We examine the case where no damper is placed and the case where dampers are placed at every switch and at the destinations. For the choice of dampers, we considered individually the full deployment of RCSP (s, ns), RGCQ ( ns, s) with TE time-stamping, FOPLEQ (s, ns) and a head-of-line damper ( ns, ns).
Fig. 11 shows the end-to-end jitter bounds of the flows for full deployment of RCSP and of RGCQ with TE time-stamping. For each case, the basic jitter computation only considers the jitter imposed by the tolerances of the dampers and ignores the impact of non-ideal clocks and errors in the computation of the damper header. Fig. 11 also shows the true jitter bound for the case of RCSP, using Theorem 1, and for the case of RGCQ with TE time-stamping, using Theorem 2. We see that non-ideal clocks and errors can increase jitter by in the case of RCSP and in the case of RGCQ with TE time-stamping. We also see that the TE time-stamping used with RGCQ can significantly reduce the end-to-end jitter compared to the classical time-stamping used with RCSP.
Fig. 12 shows the end-to-end delay and jitter bounds of the flows when no damper is used and when there is a full deployment as above. All switching fabrics are FIFO. We see that without dampers, the delay upper-bound is smaller compared to full damper deployments; this is due to the line-shaping effect when computing the queuing delay bounds in the absence of dampers. However, as expected, the full deployment of dampers significantly reduces the jitter bounds. In this computation, the HoL damper provides almost the same jitter bound as RGCQ with TE time-stamping, and FOPLEQ gives exactly the same jitter bound as RCSP, as stated in Theorem 3.
Fig. 13 shows the end-to-end jitter bounds of the flows for FOPLEQ and the HoL damper when the switching fabrics are FIFO and when they are not. The figure shows that with FOPLEQ the jitter is significantly increased, due to the jitter imposed by the output queuing, as stated in Theorem 5. It also shows that the jitter bounds are worse in the case of the HoL damper, as discussed in Theorem 6.
We have presented a theory to compute delay and jitter bounds in a network that implements dampers with non-ideal clocks. We have shown that dampers without FIFO constraints can cause packet reordering even if all network elements are FIFO. Resequencing dampers and head-of-line dampers avoid the problem; the former come with no jitter or delay penalty, and the latter with a small, quantified penalty. However, when a flow path contains non-FIFO elements, resequencing dampers and head-of-line dampers do not perform well.
Appendix A Proofs
A-A Lemmas for Section IV
Consider a non-decreasing sequence and a sequence . Assume the sequence is defined by
Then a closed-form formula for is:
We prove by induction. Base case .
as required by the lemma.
Induction step. We assume that the lemma holds for all . Then for , by the closed-form formula we have:
Then, using the recursive definition of :
1) Let and the mapping defined by . is continuous and is compact and connected, therefore is compact and connected. The compact and connected subsets of are the closed, bounded intervals, therefore for some . Necessarily, is the minimum of over and is the maximum of over .
Now for every :