ACP: An End-to-End Transport Protocol for Delivering Fresh Updates in the Internet-of-Things

11/08/2018 ∙ by Tanya Shreedhar, et al. ∙ Rutgers University ∙ IIIT Delhi

The next generation of networks must support billions of connected devices in the Internet-of-Things (IoT). To support IoT applications, sources sense and send their measurement updates over the Internet to a monitor (control station) for real-time monitoring and actuation. Ideally, these updates would be delivered at a high rate, only constrained by the sensing rate supported by the sources. However, given network constraints, such a rate may lead to delays in delivery of updates at the monitor that make the freshest update at the monitor unacceptably old for the application. We propose a novel transport layer protocol, namely the Age Control Protocol (ACP), that enables timely delivery of such updates to monitors, in a network-transparent manner. ACP allows the source to adapt its rate of updates to dynamic network conditions such that the average age of the sensed information at the monitor is minimized. We detail the protocol and the proposed control algorithm. We demonstrate its efficacy using extensive simulations and real-world experiments, which have a source send its updates over the Internet to a monitor on another continent.




1 Introduction

The availability of inexpensive embedded devices with the ability to sense and communicate has led to the proliferation of a relatively new class of real-time monitoring systems for applications such as health care, smart homes, transportation, and natural environment monitoring. Devices repeatedly sense various physical attributes of a region of interest, for example, traffic flow at an intersection. This results in a device (the source) generating a sequence of packets (updates) containing measurements of the attributes. A more recently generated update contains a more current measurement. The updates are communicated over the Internet to a monitor that processes them and decides on any actuation that may be required.

For such applications, it is desirable that freshly sensed information is available at monitors. However, as we will see, simply generating and sending updates at a high rate over the Internet is detrimental to this goal. In fact, freshness at a monitor is optimized by the source smartly choosing an update rate, as a function of the end-to-end network conditions. Freshness at the monitor may suffer significantly when a too small or a too large rate of updates is chosen by the source. In this work, we propose the Age Control Protocol (ACP), which in a network-transparent manner regulates the rate at which updates from a source are sent over its end-to-end connection to the monitor. This rate is such that the average age, where the age of an update is the time elapsed since its generation by the source, of sensed information at the monitor is kept to a minimum, given the network conditions. Based on feedback from the monitor, ACP adapts its suggested update rate to the perceived congestion in the Internet. Consequently, ACP also limits congestion that would otherwise be introduced by sources sending to their monitors at unnecessarily fast update rates.

Figure 1: Unlike traditional applications such as voice, video, or file download, real-time monitoring applications are highly loss resilient and care about the freshness of updates. Existing transport protocols like TCP and RTP are therefore not best suited for such applications.

The requirement of freshness is not akin to requirements of other pervasive real-time applications like voice and video. For these applications, the rate at which packets are sent is determined by the codec being used. This also determines the network bandwidth used and the obtained voice/video quality. Often, such applications adapt to network conditions by choosing an appropriate code rate. These applications, while resilient to packet drops to a certain degree, require end-to-end packet delays to lie within known limits and would like small end-to-end jitter. Monitoring applications may achieve a low update packet delay by simply choosing a low rate at which the source sends updates. This, however, may be detrimental to freshness, as a low rate of updates can lead to a large age of sensed information at the monitor, simply because updates from the source are infrequent. More so than voice/video, monitoring applications are exceptionally loss resilient. Specifically, they don’t benefit from the source retransmitting old updates that were not received by the monitor. Instead, the source should continue sending new updates at its configured rate.

Figure 2: Example interplay of the networking metrics of delay (solid line), throughput (normalized by service rate), and age. Shown for an M/M/1 queue [29] with a fixed service rate. The age curve was generated using the M/M/1 analysis in [20].

At the other end of the spectrum are applications like that of file transfer that require reliable transport and high throughputs but are delay tolerant. Such applications use the transmission control protocol (TCP) for end-to-end delivery of application packets. TCP via its congestion control mechanism attempts to keep as many application bytes in transit over the end-to-end connection as is possible without packet drops due to network congestion. For updating applications, TCP is detrimental to optimizing freshness as it tries by design to fill the network pipe. This increases throughput but also increases packet delay because of queueing. Such a strategy would correspond to the source sending updates at a high rate. While the monitor would receive a steady stream of updates at the high rate, each update would have a high age, when received by the monitor, as a result of it having experienced a large network delay.

Figure 2 broadly captures the behavior of the metrics of delay and age as a function of throughput. Under light and moderate loads, when packet dropping is negligible, throughput (average network utilization) increases linearly in the rate of updates. This leads to an increase in the average packet delay. Large packet delays coincide with a large average age. Large age is also seen at small throughputs (and the corresponding small update rates): at a low update rate, the monitor receives updates infrequently, and this increases the average age (staleness) of its freshest update. Finally, observe that there exists a sending rate (and corresponding throughput) at which age is minimized.

Figure 3: The ACP end-to-end connection.

As noted in Section 2, many works have analyzed age as a Quality-of-Service metric for monitoring applications. Often such works have employed queue-theoretic abstractions of networks. More recently, in [28], the authors proposed a deep Q-learning based approach to optimize age over a given but unknown network topology. We believe our work is the first to investigate age control at the transport layer of the networking stack, that is, over an end-to-end connection in an IP network and in a manner that is transparent to the application.

Our specific contributions are the following.

  1. We propose the Age Control Protocol, a novel transport layer protocol for real-time monitoring applications that wish to deliver fresh updates over IP networks.

  2. We argue that such a protocol, unlike other transport protocols like TCP and RTP, must have just the right number of update packets in transit at any given time.

  3. We propose a novel control algorithm for ACP that regulates the rate at which a status updating source sends its updates to a monitor over its end-to-end connection in a manner that is application independent and makes the network transparent to the source.

  4. We provide an extensive evaluation of the protocol using network simulations and real-world experiments in which a source sends packets to a monitor over an inter-continental end-to-end IP connection.

  5. We show that ACP adapts the source update rate to make effective use of a fast end-to-end path with multiple hops from the source to the monitor, achieving a significant reduction in median age over that achieved by a protocol that sends one update every round-trip-time.

The rest of the paper is organized as follows. In the next section, we describe related work. In Section 3, we detail the Age Control Protocol, how it interfaces with a source and a monitor, and the protocol’s timeline. In Section 4, we define the age control problem. In Section 5, we use simple queueing models to intuit a good age control protocol and discuss a few challenges. Section 6 details the control algorithm that is a part of ACP. This is followed by details on the evaluation methodology in Section 7. We discuss simulation results in Section 8 and results from real-world experiments in Section 9. We conclude in Section 10.

2 Related Work

The need for timely updates arises in many fields, including, for example, vehicular updating [19], real time databases [33], data warehousing [18], and web caching [35, 7].

For sources sending updates to monitors, there has been growing interest in the age of information (AoI) metric that was first analyzed for elementary queues in [20]. To evaluate AoI for a single source sending updates through a network cloud [14] or through an M/M/k server [15, 16], out-of-order packet delivery was the key analytical challenge. Packet deadlines are found to improve AoI in [17]. AoI in the presence of errors is evaluated in [6]. Distributional properties of the age process have also been analyzed for the D/G/1 queue under First Come First Served (FCFS) [5], as well as single server FCFS and LCFS queues [10]. There have also been studies of energy-constrained updating [2, 34, 30, 25, 8, 1].

There has also been substantial effort to evaluate and optimize age for multiple sources sharing a communication link [13, 21, 26, 23, 11]. In particular, near-optimal scheduling based on the Whittle index has been explored in [13, 12, 9]. When multiple sources employ wireless networks subject to interference constraints, AoI has been analyzed under a variety of link scheduling methods [22, 31]. AoI analysis for multihop networks has also received attention [32]. Notably, optimality properties of a Last Generated First Served (LGFS) service when updates arrive out of order are found in [3].

While the early work [19] explored practical issues such as contention window sizes, the subsequent AoI literature has primarily been focused on analytically tractable simple models. Moreover, a model for the system is typically assumed to be known. In this work, our objective has been to develop end-to-end updating schemes that perform reasonably well without assuming a particular network configuration or model. This approach attempts to learn (and adapt to time variations in) the condition of the network links from source to monitor. This is similar in spirit to hybrid ARQ based updating schemes [4, 24] that learn the wireless channel. The chief difference is that hybrid ARQ occurs on the short timescale of a single update delivery while ACP learns what the network supports over many delivered updates.

3 The Age Control Protocol

Figure 4: Illustration of the timeline of an ACP connection. The box I marks the beginning of the initialization phase of ACP. The box C denotes the ACP control algorithm (Figure 9) that is executed at the source every control epoch. The box U (Figure 10) is executed when an ACK is received; it updates the source's estimates of age, backlog, and round-trip statistics.

The Age Control Protocol resides in the transport layer of the TCP/IP networking stack and operates only on the end hosts. An end host that runs ACP could, for example, be an Internet-of-Things (IoT) device with one or more sources (or a gateway that has sensors connected to it), or it could be a server that hosts one or more monitoring applications. Figure 3 shows an end-to-end connection between two hosts, an IoT device, and a server, over the Internet. A source opens an ACP connection to its monitor. Multiple sources may connect to the same monitor. Much like the real-time transport protocol (RTP) [27] that supports voice/video applications, ACP also uses the unreliable transport provided by the user datagram protocol (UDP) for sending of updates generated by the sources. This is in line with the requirements of fresh delivery of updates. Retransmissions make an update stale and also compete with fresh updates for network resources.

Figure 5: A sample function of the age process at the monitor. Updates are indexed i = 1, 2, …; update i carries the timestamp u_i of its generation and is received by the monitor at time t_i. An update received out-of-sequence doesn't reset the age process.

The source ACP appends a header to an update from a source. The header contains a timestamp field that stores the time at which the update was generated. The source ACP suggests to the source the rate at which it must generate updates. To be able to calculate this rate, the source ACP must estimate network conditions over the end-to-end path to the monitor ACP. This is achieved by having the monitor ACP acknowledge each update packet received from the source ACP by sending an ACK packet in return. The ACK contains the timestamp of the update being acknowledged. The ACK(s) allow the source ACP to keep an estimate of the age of sensed information at the monitor. We say that an ACK is received out-of-sequence if it is received after an ACK corresponding to a more recent update packet. An out-of-sequence ACK is discarded by the source ACP. Similarly, an update that is received out-of-sequence is discarded by the monitor, since the monitor has already received a more recent measurement from the source.

Figure 4 shows a timeline of a typical ACP connection. For an ACP connection to take place, the monitor ACP must be listening on a previously advertised UDP port. The ACP source first establishes a UDP connection with the monitor. This is followed by an initialization phase during which the source sends an update and waits for an ACK or for a suitable timeout to occur, and repeats this process a few times, with the goal of probing the network to set an initial update rate. Following this phase, the ACP connection may be described by a sequence of control epochs. The end of the initialization phase marks the start of the first control epoch. At the beginning of each control epoch, ACP sets the rate at which updates generated by the source are sent until the beginning of the next epoch. ACP may do this in different ways. For example, ACP may simply communicate the rate to the source. This, however, would require an appropriate interface between ACP and the source. An alternative is a mechanism wherein updates generated by the source get queued in the transport layer send buffer assigned to it. The updates waiting to be serviced in the transport layer send buffer are sent by the ACP in a Last Generated First Served (LGFS) manner. Older updates are discarded, which is in line with the freshness requirement.
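The LGFS send-buffer alternative can be made concrete with a short sketch. The class name and methods below are illustrative, not taken from the ACP implementation: whenever the transport layer is ready to transmit, it takes the freshest queued update and discards everything older.

```python
class LGFSSendBuffer:
    """Transport-layer send buffer sketch: Last Generated First Served.

    The freshest update is always transmitted next; taking an update for
    transmission discards all older waiting updates, in line with the
    freshness requirement.
    """

    def __init__(self):
        self._buf = []  # queued updates, oldest first

    def enqueue(self, update):
        self._buf.append(update)

    def next_to_send(self):
        """Return the freshest queued update (or None) and drop the rest."""
        if not self._buf:
            return None
        freshest = self._buf[-1]
        self._buf.clear()  # older updates are discarded, never sent
        return freshest
```

A source generating faster than the network can carry thus never builds a stale queue: only the most recently generated update survives until the next transmission opportunity.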

An ACP end-to-end connection is closed when the source closes its corresponding UDP socket.

4 The Age Control Problem

Figure 6: A sample function of the backlog process, for the updates of Figure 5. The backlog increases by 1 when an update is sent. When an in-sequence ACK for update i is received, update i and any unacknowledged updates older than i are removed, reducing the backlog; an out-of-sequence ACK causes no change in the backlog.

We will now formally define the age of sensed information at a monitor. To simplify presentation, in this section we will assume that the source and monitor are time synchronized, although the functioning of ACP doesn't require this. Let u(t) be the timestamp of the freshest update received by the monitor up to time t. Recall that this is the time at which the update was generated by the source.

The age at the monitor at time t is Δ(t) = t − u(t), the age of the freshest update available at the monitor. An example sample function of the age process is shown in Figure 5. The figure shows the timestamps of packets generated by the source. Update i, generated at time u_i, is received by the monitor at time t_i, at which point it has age t_i − u_i. The age at the monitor increases linearly in between receptions of updates received in the correct sequence. Specifically, the age is reset to t_i − u_i at time t_i in case update i is the freshest packet (the one with the most recent timestamp) at the monitor at that time. An update delivered out-of-order, that is, after an update with a more recent timestamp has already been received, is discarded by the monitor ACP and leaves the age at the monitor unchanged.
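The reset rule can be sketched in a few lines. Given (generation timestamp, reception time) pairs in order of reception, the age process resets only on in-sequence updates; the function name below is illustrative:

```python
def age_resets(receptions):
    """receptions: list of (u_i, t_i) pairs, in order of reception, where
    u_i is the generation timestamp and t_i the reception time of update i.
    Returns the (t, new_age) reset points of the age process. An update whose
    timestamp is older than the freshest already received is discarded and
    causes no reset."""
    freshest = float("-inf")  # timestamp of the freshest update so far
    resets = []
    for u, t in receptions:
        if u > freshest:               # in-sequence: fresher than all so far
            freshest = u
            resets.append((t, t - u))  # age resets to t - u at time t
        # else: out-of-order, discarded; age keeps growing linearly
    return resets
```

Between consecutive reset points the age simply grows at unit rate, which is what makes the sample function piecewise linear.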

We want to choose the rate λ (updates/second) that minimizes the expected value of age at the monitor, where the expectation is over any randomness introduced by the network. Note that in the absence of a priori knowledge of a network model, as is the case with the end-to-end connection over which ACP runs, this expectation is unknown to both source and monitor and must be estimated using measurements. Lastly, we would like to dynamically adapt the rate λ to non-stationarities in the network.

5 Good Age Control Behavior and Challenges

Figure 7: A snapshot of how updates may be seen transiting through a three-queue network. The top two correspond to a high and a low rate of updating, respectively. The third corresponds to an optimal rate of updating.
Figure 8: (a) Expected value of age as a function of the update rate λ, shown for different queueing networks. (b) Average update packet system time as a function of the inter-arrival time 1/λ. The green dashed line is the y = x line; markers indicate the age-minimizing 1/λ. (c) The backlog in each of two queues in tandem, and their sum, as the service rate μ2 of the second queue increases, with the service rate μ1 of the first queue held fixed.

ACP must suggest a rate λ (updates/second) at which a source must send fresh updates to its monitor, and must adapt this rate to network conditions. To build intuition, let us suppose that the end-to-end connection is well described by an idealized setting that consists of a single FCFS queue that serves each update in constant time. An update generated by the source enters the queue, waits for previously queued updates, and then enters service. The monitor receives an update once it completes service. Note that every update must age at least by the (constant) time it spends in service before it is received by the monitor. It may age more if it ends up waiting for one or more other updates to complete service.

In this idealized setting, one would want a new update to arrive as soon as the last generated update finishes service. To ensure that the age of each update received at the monitor is minimized, one must choose a rate such that new updates are generated in a periodic manner with the period set to the time an update spends in service. Also, update generation must be synchronized with service completion instants so that a new update enters the queue as soon as the last update finishes service. In fact, such a rate is age minimizing even when updates pass through a sequence of N such queues in tandem [29]. The update is received by the monitor when it leaves the last queue in the sequence. This rate ensures that a delivered packet ages exactly N times the time it spends in the server of any given queue. At any given time, there will be exactly N update packets in the network, one in each server. An illustration is shown in Figure 7.
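This intuition is easy to verify numerically. The sketch below (illustrative, not from the paper) pushes periodically generated updates through a tandem of deterministic FCFS servers: with the generation period equal to the per-server service time S, every delivered update has aged exactly N·S, while a faster generation rate only builds queues and inflates the delivered age.

```python
def delivery_ages(n_servers, service_time, period, n_updates):
    """Deterministic tandem of FCFS servers. Update i is generated at time
    i * period; each of the n_servers stages takes service_time seconds.
    Returns the age of each update when it leaves the last server."""
    S = service_time
    ages = []
    depart_prev = [0.0] * (n_servers + 1)  # previous update's departure per stage
    for i in range(n_updates):
        t = i * period       # stage-0 "departure" = generation instant
        depart = [t]
        for j in range(1, n_servers + 1):
            # wait for server j to free up, then serve for S seconds
            t = max(t, depart_prev[j]) + S
            depart.append(t)
        depart_prev = depart
        ages.append(depart[-1] - i * period)
    return ages
```

With period equal to S, every age equals n_servers * S; choosing a smaller period makes delivered ages grow without bound, which is the queueing penalty described above.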

Of course, the assumed network is a gross idealization. We assumed a series of identical constant-service facilities, that the time spent in service and the instants of service completion were known exactly, and the absence of any other traffic. However, as we will see, the resulting intuition is significant. Specifically, a good age control algorithm must strive to have as many update packets in transit as possible while simultaneously ensuring that these updates avoid waiting for other previously queued updates.

Before we detail our proposed control method, we will make a few salient observations using analytical results for simple queueing models that capture stochastic service and generation of updates. These will help build on our intuition and also elucidate the challenges of age control over a priori unknown and likely non-stationary end-to-end network conditions.

We will consider two queueing models. The first is an FCFS queue with an infinite buffer, in which a source sends update packets at a rate λ to a monitor via a single queue that services packets at a rate μ updates per second. The updates are generated as a Poisson process of rate λ, and packet service times are exponentially distributed with mean 1/μ. In the second model, updates travel through two queues in tandem. Specifically, they enter the first queue, which services packets at rate μ1; on finishing service in the first queue, they enter the second queue, which services packets at rate μ2. As before, updates arrive to the first queue as a Poisson process and packet service times are exponentially distributed. The average age for the case of a single queue was analyzed in [20]. We extend that analysis to obtain analytical expressions for the average age as a function of λ, μ1, and μ2 in the two-queue case, using the well-known result that updates also enter the second queue as a Poisson process of rate λ [29].
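For the single M/M/1 FCFS queue, the analysis of [20] gives the closed form Δ(ρ) = (1/μ)(1 + 1/ρ + ρ²/(1 − ρ)) for the average age at utilization ρ = λ/μ. A quick numerical sweep over this formula (sketch below; not the two-queue extension, which we do not reproduce) recovers the well-known age-minimizing utilization of roughly ρ ≈ 0.53:

```python
def mm1_average_age(lam, mu):
    """Average age of information of an M/M/1 FCFS queue (result of [20])."""
    rho = lam / mu
    assert 0.0 < rho < 1.0, "queue must be stable"
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho ** 2 / (1.0 - rho))

# Brute-force search for the age-minimizing utilization at mu = 1.
candidates = [i / 10000.0 for i in range(1, 10000)]
rho_star = min(candidates, key=lambda r: mm1_average_age(r, 1.0))
```

The bowl shape is visible directly from the formula: the 1/ρ term blows up at small utilization (infrequent updates) and the ρ²/(1 − ρ) term blows up near saturation (queueing delay).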

On the impact of non-stationarity and transient network conditions: Figure 8(a) shows the expected value (average) of age as a function of λ when the queueing systems are in steady state. It is shown for three single queues, each with a different service rate, and for two queues in tandem with both servers having the same unit service rate. Observe that all the age curves have a bowl-like shape, capturing the fact that a too small or a too large λ leads to large age. Such behavior is observed in non-preemptive queueing disciplines, in which updates can't preempt other older updates. A reasonable strategy to find the optimal rate thus seems to be one that starts at some initial λ and changes λ in the direction that achieves a smaller expected age.

In practice, the absence of a network model (unknown service distributions and expectations) would require Monte-Carlo estimates of the expected value of age for every choice of λ. Getting these estimates, however, would require averaging over a large number of instantaneous age samples and would slow down adaptation. This could lead to updates experiencing excessive waiting times when λ is too large. Worse, transient network conditions (a run of bad luck) and non-stationarities, for example because of the introduction of other traffic flows, could push these delays to even higher values, leading to an even larger backlog of packets in transit. Figure 8(a) illustrates how changes in network conditions (service rate and number of hops (queues)) can lead to large changes in the expected age.

It is desirable for a good age control algorithm to not allow the end-to-end connection to drift into a high backlog state. As we describe in the next section, ACP tracks changes in the average number of backlogged packets and average age over short intervals, and in case backlog and age increase, ACP acts to rapidly reduce the backlog.

On Optimal Average Backlogs: Figure 8(b) plots the average packet system time, where the system time of a packet is the time that elapses between its arrival and the completion of its service, as a function of the inter-arrival time 1/λ, for three single-queue networks and two networks that have two queues in tandem. As expected, an increase in inter-arrival time reduces the system time: as inter-arrival times become large, packets wait less often for others to complete service, and the system times converge to the average service time of a packet. For each queueing system, we also mark on its plot the inter-arrival time that minimizes age. It is instructive to note that for the three single-queue systems this inter-arrival time is only slightly smaller than the system time. However, for the two queues in tandem with equal service rates, the age-minimizing inter-arrival time is much smaller than the system time. The implication is that, on average, it is optimal to send slightly more than one packet every system time for the single-queue systems, whereas for the two-queue network with identical servers we want to send a larger number of packets every system time. For the two-queue network in which the second queue is served by a faster server, this number is smaller. As we observe next, as one of the servers becomes faster, the two-queue network becomes more akin to a single-queue network with the slower server.

Note that these numbers are in fact the optimal (age minimizing) average number of packets in the system. Figure 8(c) shows how this optimal average backlog varies as a function of μ2 for a given μ1; the observations stay the same on swapping μ1 and μ2. As μ2 increases, that is, as the second server becomes faster than the first, the average backlog increases in queue 1 and reduces in queue 2, while the sum backlog gets closer to the optimal backlog of the single-queue case. Specifically, as queue 1 becomes a larger bottleneck relative to queue 2, the optimal λ must adapt to the bottleneck queue, and the backlog in the faster queue is governed by the resulting choice of λ. When the rates μ1 and μ2 are similar, the two queues see similar backlogs. In that case the backlog per queue is smaller than in a network with only a single such queue, but the sum backlog is larger.

To summarize, one would want a good age control algorithm to have a larger number of packets simultaneously in transit in a network with a larger number of hops (queues).

6 The ACP Control Algorithm

Figure 9: The control algorithm at the source ACP.

Let the control epochs of ACP (Section 3) be indexed k = 1, 2, …; control epoch k starts at time T_k. The first control epoch marks the end of the initialization phase of ACP. At time T_1, the update rate is set to the inverse of the average packet round-trip-time (RTT) obtained at the end of the initialization phase. At time T_k, k > 1, the update rate is set to λ_k. The source transmits updates with a fixed period of 1/λ_k in the interval [T_k, T_{k+1}).

Let A_k be the estimate at the source ACP of the time-average update age at the monitor, calculated over (T_{k−1}, T_k]. To calculate it, the source ACP must construct its estimate of the age sample function (see Figure 5) over the interval at the monitor. It knows the time at which it sent a certain update i. However, it needs the time at which update i was received by the monitor, which it approximates by the time the ACK for update i was received. On receiving the ACK, it resets its estimate of the age to the resulting round-trip-time (RTT) of update i.

Note that this value is an overestimate of the age of the update packet when it was received at the monitor, since it includes the time taken to send the ACK over the network. The time average is obtained simply by calculating the area under the resulting age curve over (T_{k−1}, T_k] and dividing it by the length of the interval.
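The area computation is straightforward because the estimated age curve is piecewise linear: it grows at unit rate and resets to the measured RTT at each in-sequence ACK. A sketch (function and argument names are illustrative):

```python
def time_average_age(age_start, ack_events, t_start, t_end):
    """Time-average of a piecewise-linear age estimate over [t_start, t_end].

    age_start:  age estimate at t_start
    ack_events: sorted list of (t, rtt) pairs for in-sequence ACKs received
                in (t_start, t_end]; at time t the age resets to rtt
    """
    area = 0.0
    t_prev, age = t_start, age_start
    for t, rtt in ack_events:
        dt = t - t_prev
        area += dt * (age + dt / 2.0)  # trapezoid: age grows at unit rate
        age, t_prev = rtt, t           # reset to the RTT of the ACKed update
    dt = t_end - t_prev
    area += dt * (age + dt / 2.0)      # final segment with no reset
    return area / (t_end - t_start)
```

Each trapezoid term dt * (age + dt/2) is exactly the area under a segment that starts at height age and climbs with slope one for dt seconds.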

Let B_k be the time average of the backlog calculated over the interval (T_{k−1}, T_k]. This is the time average of the instantaneous backlog over the interval. The instantaneous backlog increases by 1 when the source sends a new update. When an ACK corresponding to an update i is received, update i and any unacknowledged updates older than i are removed from the instantaneous backlog. Figure 6 shows the instantaneous backlog as a function of time corresponding to the age sample function in Figure 5.
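The backlog bookkeeping can be sketched as follows (class and method names are our own; this is not the ACP source): each send pushes the update's timestamp, and an ACK removes the acknowledged update together with anything older, while an integral of the backlog is accumulated for the time average.

```python
class BacklogTracker:
    """Tracks the instantaneous backlog and its running time average."""

    def __init__(self, t0=0.0):
        self._pending = []   # timestamps of in-flight updates, oldest first
        self._area = 0.0     # integral of backlog over time
        self._t0 = t0
        self._t_last = t0

    def _advance(self, t):
        """Accumulate backlog-seconds up to time t."""
        self._area += (t - self._t_last) * len(self._pending)
        self._t_last = t

    def on_send(self, t, stamp):
        self._advance(t)
        self._pending.append(stamp)          # backlog rises by 1

    def on_ack(self, t, stamp):
        self._advance(t)
        # remove the ACKed update and all unacknowledged older ones
        self._pending = [u for u in self._pending if u > stamp]

    def backlog(self):
        return len(self._pending)

    def time_average(self, t):
        self._advance(t)
        return self._area / (t - self._t0)
```

The list comprehension in on_ack is what implements the "remove i and anything older" rule, so a single ACK can reduce the backlog by more than one.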

In addition to using the RTT(s) of updates for age estimation, we also use them to maintain an exponentially weighted moving average (EWMA) R of the RTT. We update R on reception of every ACK, using the round-trip-time RTT that the ACK yields.

The source ACP also uses an estimate of the average time between consecutive update arrivals at the monitor. Specifically, it estimates the inter-update arrival times at the monitor and the corresponding EWMA Z. The inter-update arrival times are approximated by the corresponding inter-ACK arrival times. The length T of a control epoch is set as an integral multiple of Z. This ensures that the length of a control epoch is never too large and allows for fast enough adaptation. Note that Z is large at a sufficiently low rate of sending updates, while the RTT is large at a sufficiently high update rate. At time T_k we set the epoch length to a fixed multiple of the current Z; the resulting epoch length was observed in all our evaluation to be long enough to see the desired changes in average backlog and age in response to a choice of source update rate at the beginning of an epoch. The source updates A_k, B_k, Z, and R every time an ACK is received. Figure 10 summarizes the updates.

At the beginning of control epoch k, at time T_k, the source ACP calculates δ_k = A_k − A_{k−1}, the difference in average age measured over the intervals (T_{k−1}, T_k] and (T_{k−2}, T_{k−1}], respectively. Similarly, it calculates the difference in average backlog b_k = B_k − B_{k−1}.

ACP at the source chooses an action at the kth epoch that targets a change b*_k in the average backlog over an interval of length T with respect to the (k−1)th interval. The actions may be broadly classified into (a) additive increase (INC), (b) additive decrease (DEC), and (c) multiplicative decrease (MDEC). MDEC corresponds to a set of actions MDEC(γ), γ = 1, 2, …. We have

b*_k = κ (INC),  b*_k = −κ (DEC),  b*_k = −(1 − 2^(−γ)) B_k (MDEC(γ)),

where κ > 0 is a step size parameter. Later we evaluate a selection of step sizes κ.

ACP attempts to achieve b*_k by setting λ_{k+1} appropriately. The estimate Z at the source ACP of the average inter-update arrival time at the monitor gives the rate 1/Z at which updates sent by the source arrive at the monitor. This and the choice of λ_{k+1} allow us to estimate the average change in backlog over the epoch as (λ_{k+1} − 1/Z) T. Therefore, achieving a change of b*_k requires choosing

λ_{k+1} = 1/Z + b*_k / T.
Figure 9 summarizes how ACP chooses its action as a function of and .
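The rate update itself is a one-liner once the targeted backlog change is known; the sketch below uses our reconstructed symbol names (b*_k, Z, T) and adds a floor to keep the rate positive, which is our own safeguard rather than a detail taken from the paper.

```python
def next_update_rate(b_target, z_bar, epoch_len, lam_min=1e-3):
    """Update rate that changes the average backlog by roughly b_target.

    b_target:  targeted backlog change b*_k (positive to grow the backlog)
    z_bar:     EWMA Z of inter-ACK times, approximating the inter-arrival
               time of updates at the monitor
    epoch_len: length T of the next control epoch
    lam_min:   illustrative floor keeping the rate positive
    """
    # sending at rate lam while updates drain at rate 1/Z changes the
    # backlog by about (lam - 1/Z) * T over the epoch; invert for lam
    lam = 1.0 / z_bar + b_target / epoch_len
    return max(lam, lam_min)
```

For example, with Z = 0.5 s the source matches the drain rate of 2 updates/s when b*_k = 0, and nudges above or below it in proportion to the targeted change.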

The source ACP targets a reduction in average backlog over the next control interval in case (a) δ_k > 0 and b_k > 0, or (b) δ_k < 0 and b_k < 0. The first condition indicates that the update rate is such that update packets are experiencing larger-than-optimal delays. ACP attempts to reduce the backlog multiplicatively, to reduce congestion delays and in the process reduce age quickly. Every consecutive occurrence of this case (tracked by increasing γ by 1 each time) attempts to decrease the backlog even more aggressively, that is, by a larger power of 2.

The second condition captures a reduction in both age and backlog. ACP greedily aims at reducing backlog further hoping that age will reduce too. It attempts multiplicative decrease if the previous action did so. Else, it attempts an additive decrease.

The source ACP targets an increase in average backlog over the next control interval in case (a) δ_k > 0 and b_k < 0, or (b) δ_k < 0 and b_k > 0. The first condition hints at too low an update rate causing an increase in age, so ACP additively increases the backlog. On the occurrence of the second condition, ACP greedily increases the backlog.

When the condition δ_k > 0 and b_k < 0 occurs, we also check whether the previous action attempted to reduce the backlog. If it did, and if the actual change in backlog was much smaller than desired, we reduce the backlog multiplicatively instead. This helps counter situations where the increase in age is in fact due to increasing congestion. Specifically, increasing congestion in the network may cause the inter-update arrival rate at the monitor to fall during the epoch; as a result, despite an attempted multiplicative decrease, the backlog may change very little. Clearly, in such a situation, even if the backlog reduced a little, the increase in age was not caused by the backlog being low. The above check ensures that ACP successfully reduces the backlog to the desired level. If ACP instead ignored the much-smaller-than-desired change, it would end up increasing the rate of updates, which would only further increase backlog and age.
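The basic case analysis can be condensed into a sketch. This is our reconstruction of the core of Figure 9's logic, omitting the congestion-check refinement just described; κ is the step size, γ counts consecutive multiplicative decreases, and all names are illustrative.

```python
def choose_backlog_change(d_age, d_backlog, avg_backlog, kappa, state):
    """Return the targeted backlog change b* for the next epoch.

    d_age, d_backlog: changes in average age / backlog vs. the previous epoch
    avg_backlog:      average backlog B over the last epoch
    kappa:            additive step size
    state:            dict with 'gamma' (consecutive MDEC count) and
                      'prev_action' ('INC', 'DEC', or 'MDEC')
    """
    if d_age > 0 and d_backlog > 0:
        # congestion: multiplicative decrease, more aggressive on each
        # consecutive occurrence (larger power of 2)
        state["gamma"] += 1
        state["prev_action"] = "MDEC"
        return -(1.0 - 2.0 ** -state["gamma"]) * avg_backlog
    state["gamma"] = 0
    if d_age < 0 and d_backlog < 0:
        # both age and backlog fell: keep reducing, multiplicatively if the
        # previous action did so, else additively
        if state["prev_action"] == "MDEC":
            return -0.5 * avg_backlog
        state["prev_action"] = "DEC"
        return -kappa
    # age rose with falling backlog (rate too low), or age fell with rising
    # backlog (greedy): grow the backlog additively
    state["prev_action"] = "INC"
    return kappa
```

The returned b* then feeds the rate update λ_{k+1} = 1/Z + b*/T described above.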

Figure 10: Updates of the estimates maintained by ACP, which take place every time an ACK is received.

7 Evaluation Methodology

We evaluated ACP using network topologies simulated in the ns-3 network simulator (Section 8) and by conducting real experiments over an inter-continental end-to-end path on the Internet (Section 9).

The simulated environment, while limited in scale and its vagaries, allows us to evaluate ACP in a controlled and repeatable setting. It also allows us to understand how the behavior of ACP matches up to that of a desired age control algorithm. In simulations, we compare ACP with two baselines, namely Optimal and Basic.

The Optimal baseline is obtained by empirically computing Monte-Carlo estimates of the average age over a sufficiently fine grid of update rates for the network being simulated. For each fixed update rate, we estimate the average age as ACP would, that is, using the ACK packets received in return for the update packets sent. Optimal chooses the update rate that gives the smallest estimate of average age for the network.

The other baseline, Basic, uses the average round-trip time of an update in a very lightly loaded network, that is, at a very small update rate, to set its update rate. Specifically, the average age achieved by Basic is the Monte-Carlo estimate of the expected age for a constant update rate equal to the inverse of the round-trip time calculated above. We choose this baseline because we know from the analysis of single-queue networks that such a rate is close to optimal (marked in Figure (b)). For the networks shown in the figure, this rate is the inverse of the inter-arrival time at the point where the line intersects the plot for the chosen network. In fact, as can be seen in the figure, even when there are two queues in tandem, if one of them is a relative bottleneck, the rate chosen by Basic is close to optimal.
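To make the Optimal baseline concrete, the sketch below Monte-Carlo estimates the time-average age for a grid of update rates and keeps the rate with the smallest estimate. The single FCFS queue with exponential service used here is a stand-in for the simulated networks and is purely an illustrative assumption, as are the parameter values.

```python
import random

def sim_average_age(rate, service_rate=50.0, n=50000, seed=1):
    """Monte-Carlo estimate of time-average age for a source sending
    Poisson updates at `rate` through one FCFS exponential-service
    queue (an illustrative model, not the paper's ns-3 setup)."""
    rng = random.Random(seed)
    gen = 0.0          # generation time of the update being sent
    server_free = 0.0  # when the server next becomes idle
    fresh = 0.0        # generation time of the freshest delivered update
    last_d = 0.0       # time of the last delivery
    area = 0.0         # integral of the age sawtooth over time
    for _ in range(n):
        gen += rng.expovariate(rate)
        depart = max(gen, server_free) + rng.expovariate(service_rate)
        server_free = depart
        # Between deliveries, age rises linearly from (last_d - fresh)
        # to (depart - fresh): accumulate the trapezoid, then the
        # delivery resets freshness (FCFS keeps updates in order).
        area += (depart - last_d) * ((last_d - fresh) + (depart - fresh)) / 2.0
        fresh, last_d = gen, depart
    return area / last_d

def optimal_rate(rates):
    """Optimal's choice: the rate with the smallest age estimate."""
    return min(rates, key=sim_average_age)
```

For a single such queue, the sweep recovers the familiar behavior that both very low and very high update rates inflate age, with the minimum in between.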

To baseline the real-world performance of ACP, we use a modified version of Basic, which we call Lazy. Lazy, like ACP, adapts the update rate to network conditions. However, it is very conservative and keeps the average number of update packets in transit small. Specifically, it updates an EWMA of the RTT every time an ACK is received and sets the current update rate to the inverse of this EWMA. As a result, it aims at maintaining an average backlog of one update, given the network conditions.
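Lazy's rate update can be sketched in a few lines; the smoothing weight and the initial RTT below are illustrative assumptions, not values from the paper.

```python
class Lazy:
    """Conservative baseline: keep roughly one update in flight by
    sending at the inverse of an EWMA of observed RTTs."""

    def __init__(self, initial_rtt=0.2, alpha=0.125):
        self.alpha = alpha          # EWMA weight (assumption)
        self.rtt_ewma = initial_rtt  # seconds (assumption)

    def on_ack(self, rtt_sample):
        # Standard EWMA update on every received ACK...
        self.rtt_ewma = (1 - self.alpha) * self.rtt_ewma \
                        + self.alpha * rtt_sample
        # ...then send at rate 1/EWMA(RTT). By Little's law, a rate of
        # one update per RTT keeps the average backlog near one.
        return 1.0 / self.rtt_ewma
```

With a steady RTT, the returned rate converges to exactly one update per round-trip time.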

8 Simulation Setup and Results

Figure 11: The source is connected to the monitor via multiple routers.
Figure 12: We compare the average (a) age, (b) backlog, and (c) RTT that result from using ACP with the averages that result from Optimal and Basic. Simulated networks had one or two hops. The application generated fixed-size updates and no other traffic shared the network.
Figure 13: We compare the average (a) age, (b) backlog, and (c) RTT that result from using ACP with the averages that result from Optimal and Basic. Simulated networks had three hops. The application generated fixed-size updates and no other traffic shared the network.

We simulated IP networks with multiple hops between the source and the monitor. The physical and the link layer of each hop was simulated by a simple point-to-point full duplex link whose bit rate could be set to a desired value. Each hop was assigned a link rate of either Mbps or Mbps. We simulated all possible combinations of these rates for a given number of hops. This allows us to simulate conditions in which one or more links are rate bottlenecks relative to others in the network.

We denote a network in which there is only one hop by its corresponding link rate. So if the link rate is Mbps, the network is denoted by . Similarly, a two hop network is denoted by in case the link that connects the source and the router has a rate of Mbps and the link that connects the router and the monitor has rate Mbps.

Figure 11 shows an illustration of the simulated network. We simulated two kinds of update applications, one which generated packets of size bytes and the other that generated packets of size chosen uniformly and randomly over bytes.

For every selection of hop count and link rates per hop, and update application type, we evaluated ACP under the following conditions.

  • Lightly loaded network: The only packets being processed by the network are the source packets and the ACK(s).

  • Link errors:

    Many prior works have studied the impact of link errors on TCP's congestion control mechanism. Motivated by this, we simulated packet drops at the links to understand how ACP adapts to them. The hops carry no other traffic; however, each hop drops packets with a positive probability.

  • Heavily loaded network: There is a competing constant bit rate UDP flow at each hop that takes a significant fraction of the rate that is available at the point-to-point links. In all our simulations with ACP, the network transitions from the state of being lightly loaded for seconds to being heavily loaded for seconds and back to being lightly loaded for seconds. This allows us to verify the ability of ACP to (a) quickly counter the rapid increase in backlogged update packets due to sudden changes in network conditions and (b) maintain the backlog at a desired level under the new conditions.

  • Varying step size: We repeat every simulation for several different values of the step size.

8.1 Lightly Loaded Network

We show results for a fixed step size and constant packet sizes; the results are qualitatively similar for uniformly random packet sizes and we do not show them. Figure 12 compares the performance of ACP with Optimal and Basic in terms of average age, backlog, and RTT, for a single hop network and two two-hop networks, one in which both links have the same rate and another in which one link is slower than the other. As is seen from Figure 12(a), the ages obtained by all the mechanisms are within a few milliseconds of each other, which is a small fraction of the packet round-trip times. ACP, given its control algorithm, sees a slightly larger average age for the two-hop networks, as it keeps changing the backlog in the network in the hope of reducing age. Optimal benefits from having good Monte-Carlo estimates of age as a function of the update rate. Basic gets lucky in that, in both two-hop networks, the optimal backlog is governed by a single bottleneck link. As is seen in Figure 12(b), the average backlog in the network when using ACP is larger than when using Basic.

ACP's strategy gains over Basic, albeit in a modest manner, in the two hop network with equal link rates. Also, note that the average backlog due to ACP is larger than that due to Basic. This ability of ACP to better populate the existing hops with updates is, as we will see later, especially beneficial in the Internet, where end hosts are likely to have many hops between them.

Figure 12(c) compares the average RTTs. As one would expect, given its larger backlogs, ACP results in larger RTTs than Basic. That said, larger average backlogs do not necessarily cause larger RTTs. See, for example, the backlogs for ACP and Optimal: while Optimal has larger backlogs, it sees a smaller RTT. This is explained by the fact that the network stays at a steady backlog when using Optimal; given constant packet sizes, the updates then do not experience any waiting. ACP, on the other hand, constantly attempts to change the backlog. An increase in backlog can lead to updates temporarily experiencing large RTTs, which may increase age and make ACP rapidly reduce the backlog. As a result, ACP sees smaller average backlogs but larger RTTs.

Figure 13 compares performance for three hop networks. As is seen, ACP achieves a smaller age than Basic (Figure 13(a)), especially when the network has a larger number of bottlenecks. Also, observe that ACP maintains larger average backlogs in the network than Basic (Figure 13(b)). Finally, for the reasons explained above for two hop networks, ACP ends up with a larger RTT (Figure 13(c)) even when Optimal has larger backlogs.

8.2 Link Errors

Figures 14(a), 14(b), and 14(c) show the impact of link errors on ACP for one hop, two hop, and three hop networks, respectively. We consider two packet error probabilities for each link. As is seen from the figures, ACP's choice of update rate is not affected by the errors introduced in the one hop network. However, for larger numbers of hops (and thus a larger probability of an update being dropped because of an error), unlike Basic and Optimal, ACP reduces its rate by a large amount with respect to the rate it uses when there is no error. This is because, like TCP, it confuses link errors with congestion. Note that packet drops, for a given update rate, lead to an increase in average age and average backlog. ACP tries to recover from this situation by aggressively reducing its rate.

(a) hop, Mbps
(b) hops, Mbps each.
(c) hops, Mbps each.
Figure 14: For a network with (a) one, (b) two, and (c) three hops, we show how ACP adapts to link errors that cause the loss of update packets.
Figure 15: We compare the average (a) age, (b) backlog, and (c) RTT obtained by ACP with those obtained by Optimal and Basic. Simulated networks had three hops. The application generated random-sized packets and each hop was shared by a constant bit rate UDP flow.

8.3 Heavily Loaded Network

We show results for a fixed step size, uniformly random packet sizes, and three hop networks. A constant bit rate UDP flow was introduced at each hop. As is seen from the comparisons in Figure 15, ACP performs very well in the presence of competing UDP traffic. Its performance is especially close to Optimal for networks with a larger number of similar hops. The results for constant packet sizes and for networks with one or two hops are qualitatively similar.

8.4 On Selection of Step Sizes

An appropriate selection of the step size is crucial to the proper functioning of ACP. A very small step size may lead to negligible changes in the update rate, which in turn may not produce the desired changes in backlog and age. Worse, the control algorithm may start oscillating over a small range of rates, which could leave the average age far from optimal. Our simulations and real-world experiments show that the choice of step size must also be cognizant of the round-trip times. In simulations where the RTT was in the range of tens of milliseconds, a small step size was sufficiently large. However, in the real-world experiments over a multi-hop inter-continental end-to-end path, described in the following section, RTT values were much larger; there, the same step size turned out to be too small and a larger step size worked very well.
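One way to see why the RTT matters is through Little's law: backlog ≈ rate × RTT, so a fixed backlog step translates into a rate adjustment that is inversely proportional to the RTT. This framing is our back-of-the-envelope reading of the observation above, not a formula taken from ACP itself.

```python
def rate_step(backlog_step, rtt):
    """Rate change (updates/s) needed to move the average backlog by
    `backlog_step` on a path with round-trip time `rtt` seconds.
    Follows from Little's law: backlog ~= rate * rtt."""
    return backlog_step / rtt

# The same backlog step moves the rate far less on a long path:
lan_change = rate_step(1.0, 0.02)  # ~20 ms RTT: about 50 updates/s
wan_change = rate_step(1.0, 0.2)   # ~200 ms RTT: about 5 updates/s
```

A step that produces meaningful rate changes on a short path may thus barely move the rate on an inter-continental one, consistent with the larger step size needed there.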

Large step sizes ensure that ACP does not get stuck oscillating in a small region around a sub-par update rate. However, they cause large swings in the instantaneous backlog, and hence in the RTT, which may lead to a larger average age. To exemplify, for a single hop network, a small step size gives an average age close to that obtained by Optimal, while a much larger step size results in a noticeably larger average age.

Figure 16: We compare the CDFs of the average (a) age, (b) RTT, and (c) backlog obtained over multiple runs each of Lazy and ACP for two choices of step size.
Figure 17: The evolution of average age and backlog that resulted from ACP running over the Internet.

9 Updates From Another Continent

We had a source and a monitor on different continents communicating over the Internet. The monitor ran on a machine with a global IP address and the source ran on a machine behind a firewall. We performed a number of experiments over a span of a few days. Using the traceroute utility, we observed that the number of hops varied during the course of our experiments. The source and the monitor took turns communicating using ACP and Lazy. During each turn, update packets were sent by the source to the monitor. This ensured that ACP and Lazy experienced similar network conditions.

Figure 16 summarizes the comparison of ACP and Lazy. Figure 16(a) shows the cumulative distribution functions (CDF) of the average age obtained by ACP and Lazy over the experiments. ACP used one step size for half the experiments and another for the rest. As is seen from the figure, ACP outperforms Lazy and obtains a significant median improvement in age over that obtained using Lazy, over an end-to-end connection with a large RTT. Also, observe from Figure 16(b) that the median RTTs for ACP and Lazy are almost the same.

Lastly, consider the comparison of the CDFs of average backlog shown in Figure 16(c). ACP exploits the fast multi-hop end-to-end connection very well and achieves a high median average backlog for both choices of step size. Lazy, on the other hand, maintains a backlog of about one update (not shown in the figure).

We end by showing snippets of ACP in action over the end-to-end path. Figures 17(a) and 17(b) show the time evolution of average backlog and average age, as calculated at the control epochs. ACP increases the backlog in small steps over a large range, followed by a rapid decrease. The increase coincides with a reduction in average age, and the rapid decrease is initiated once age increases. Also observe that age decreases very slowly (the dense regions of points low on the age curve) as the backlog increases, just before age increases rapidly. This region of slow decrease appears to be around where, ideally, the backlog must be set to keep the age at a minimum.

10 Conclusions

We proposed the Age Control Protocol, a novel transport layer protocol for real-time monitoring applications that desire freshness of the information communicated over the Internet. ACP works in an application-independent manner. It provides a source the ability to regulate its update rate in a network-transparent manner. We detailed ACP's control algorithm, which adapts the rate so that the age of the updates at the monitor is minimized. Via an extensive evaluation using network simulations and real-world experiments, we showed that ACP adapts the source update rate well, making effective use of the network resources available to the end-to-end connection between the source and the monitor. For example, over an inter-continental connection, ACP achieved a significant reduction in age over that achieved by a protocol that sends one update every round-trip time.


  • [1] A. Arafa, J. Yang, and S. Ulukus. Age-minimal online policies for energy harvesting sensors with random battery recharges. In 2018 IEEE International Conference on Communications (ICC), pages 1–6, May 2018.
  • [2] B. T. Bacinoglu, E. T. Ceran, and E. Uysal-Biyikoglu. Age of information under energy replenishment constraints. In Proc. Info. Theory and Appl. (ITA) Workshop, Feb. 2015. La Jolla, CA.
  • [3] A. M. Bedewy, Y. Sun, and N. B. Shroff. Age-optimal information updates in multihop networks. CoRR, abs/1701.05711, 2017.
  • [4] E. T. Ceran, D. Gündüz, and A. György. Average age of information with hybrid arq under a resource constraint. In 2018 IEEE Wireless Communications and Networking Conference (WCNC), pages 1–6, April 2018.
  • [5] J. P. Champati, H. Al-Zubaidy, and J. Gross. Statistical guarantee optimization for age of information for the d/g/1 queue. In IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pages 130–135, April 2018.
  • [6] K. Chen and L. Huang. Age-of-information in the presence of error. In Proc. IEEE Int’l. Symp. Info. Theory (ISIT), pages 2579–2584, 2016.
  • [7] J. Cho and H. Garcia-Molina. Effective page refresh policies for web crawlers. ACM Transactions on Database Systems (TODS), 28(4):390–426, 2003.
  • [8] S. Farazi, A. G. Klein, and D. R. Brown. Average age of information for status update systems with an energy harvesting server. In IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pages 112–117, April 2018.
  • [9] Y.-P. Hsu. Age of information: Whittle index for scheduling stochastic arrivals. In Proc. IEEE Int’l. Symp. Info. Theory (ISIT), pages 2634–2638, June 2018.
  • [10] Y. Inoue, H. Masuyama, T. Takine, and T. Tanaka. A general formula for the stationary distribution of the age of information and its application to single-server queues. CoRR, abs/1804.06139, 2018.
  • [11] Z. Jiang, B. Krishnamachari, X. Zheng, S. Zhou, and Z. Niu. Decentralized status update for age-of-information optimization in wireless multiaccess channels. In Proc. IEEE Int’l. Symp. Info. Theory (ISIT), pages 2276–2280, June 2018.
  • [12] Z. Jiang, B. Krishnamachari, S. Zhou, and Z. Niu. Can decentralized status update achieve universally near-optimal age-of-information in wireless multiaccess channels? In International Teletraffic Congress ITC 30, September 2018.
  • [13] I. Kadota, E. Uysal-Biyikoglu, R. Singh, and E. Modiano. Minimizing the age of information in broadcast wireless networks. In 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 844–851, Sept 2016.
  • [14] C. Kam, S. Kompella, and A. Ephremides. Age of information under random updates. In Proc. IEEE Int’l. Symp. Info. Theory (ISIT), pages 66–70, 2013.
  • [15] C. Kam, S. Kompella, and A. Ephremides. Effect of message transmission diversity on status age. In Proc. IEEE Int’l. Symp. Info. Theory (ISIT), pages 2411–2415, June 2014.
  • [16] C. Kam, S. Kompella, G. D. Nguyen, and A. Ephremides. Effect of message transmission path diversity on status age. IEEE Trans. Info. Theory, 62(3):1360–1374, Mar. 2016.
  • [17] C. Kam, S. Kompella, G. D. Nguyen, J. Wieselthier, and A. Ephremides. Age of information with a packet deadline. In Proc. IEEE Int’l. Symp. Info. Theory (ISIT), pages 2564–2568, 2016.
  • [18] A. Karakasidis, P. Vassiliadis, and E. Pitoura. ETL queues for active data warehousing. In Proceedings of the 2nd international workshop on Information quality in information systems, IQIS ’05, pages 28–39, Baltimore, Maryland, 2005. ACM. ACM ID: 1077509.
  • [19] S. Kaul, M. Gruteser, V. Rai, and J. Kenney. Minimizing age of information in vehicular networks. In IEEE Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), Salt Lake City, Utah, USA, 2011.
  • [20] S. Kaul, R. Yates, and M. Gruteser. Real-time status: How often should one update? In Proc. IEEE INFOCOM Mini Conference, 2012.
  • [21] S. K. Kaul and R. Yates. Status updates over unreliable multiaccess channels. In Proc. IEEE Int’l. Symp. Info. Theory (ISIT), pages 331–335, June 2017.
  • [22] N. Lu, B. Ji, and B. Li. Age-based scheduling: Improving data freshness for wireless real-time traffic. In Proceedings of the Eighteenth ACM International Symposium on Mobile Ad Hoc Networking and Computing, Mobihoc ’18, pages 191–200, New York, NY, USA, 2018. ACM.
  • [23] E. Najm and E. Telatar. Status updates in a multi-stream m/g/1/1 preemptive queue. In IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pages 124–129, April 2018.
  • [24] E. Najm, R. Yates, and E. Soljanin. Status updates through M/G/1/1 queues with HARQ. In Proc. IEEE Int’l. Symp. Info. Theory (ISIT), pages 131–135, June 2017.
  • [25] S. Nath, J. Wu, and J. Yang. Optimizing age-of-information and energy efficiency tradeoff for mobile pushing notifications. In 2017 IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pages 1–5, July 2017.
  • [26] Y. Sang, B. Li, and B. Ji. The power of waiting for more than one response in minimizing the age-of-information. In GLOBECOM 2017 - 2017 IEEE Global Communications Conference, pages 1–6, Dec 2017.
  • [27] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson. RTP: A transport protocol for real-time applications. STD 64, RFC 3550, July 2003.
  • [28] E. Sert, C. Sönmez, S. Baghaee, and E. Uysal-Biyikoglu. Optimizing age of information on real-life TCP/IP connections through reinforcement learning. In 2018 26th Signal Processing and Communications Applications Conference (SIU), pages 1–4, May 2018.
  • [29] J. F. Shortle, J. M. Thompson, D. Gross, and C. M. Harris. Fundamentals of queueing theory, volume 399. John Wiley & Sons, 2018.
  • [30] Y. Sun, E. Uysal-Biyikoglu, R. D. Yates, C. E. Koksal, and N. B. Shroff. Update or wait: How to keep your data fresh. IEEE Transactions on Information Theory, 63(11):7492–7508, 2017.
  • [31] R. Talak, I. Kadota, S. Karaman, and E. Modiano. Scheduling policies for age minimization in wireless networks with unknown channel state. CoRR, abs/1805.06752, 2018.
  • [32] R. Talak, S. Karaman, and E. Modiano. Minimizing age-of-information in multi-hop wireless networks. In 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 486–493, Oct 2017.
  • [33] M. Xiong and K. Ramamritham. Deriving deadlines and periods for real-time update transactions. In The 20th IEEE Real-Time Systems Symposium, 1999. Proceedings, pages 32–43. IEEE, 1999.
  • [34] R. Yates. Lazy is timely: Status updates by an energy harvesting source. In Proc. IEEE Int’l. Symp. Info. Theory (ISIT), 2015.
  • [35] H. Yu, L. Breslau, and S. Shenker. A scalable web cache consistency architecture. SIGCOMM Comput. Commun. Rev., 29(4):163–174, Aug. 1999. ACM ID: 316219.