I Introduction
Recently, the study of information freshness has received increasing attention, especially for time-sensitive applications that require real-time information/status updates, such as road congestion information, stock quotes, and weather forecasts. In order to measure the freshness of information, a new metric, called the Age of Information (AoI), has been proposed. The AoI is defined as the time elapsed since the generation of the freshest update among those that have been received by the destination [1]. Prior studies reveal that the AoI depends on both the inter-arrival time and the delay of the updates. Due to the dependency between the inter-arrival time and the delay, this new AoI metric exhibits very different characteristics from the traditional delay metric and is generally much harder to analyze (see, e.g., [1]).
Although it is well-known that scheduling policies play an important role in reducing the delay in single-server queues, it remains largely unknown how exactly scheduling policies impact the AoI performance. To that end, we aim to holistically study the impact of various aspects of scheduling policies on the AoI performance in single-server queues and provide useful guidelines for the design of scheduling policies that can achieve a small AoI.
While much research effort has already been devoted to the design and analysis of scheduling policies aiming to reduce the AoI, almost all of these policies are based only on the arrival time of updates, such as First-Come-First-Served (FCFS) and Last-Come-First-Served (LCFS), assuming that the update-size information is unavailable. Here, the size of an update is the amount of time required to serve the update if there were no other updates around. In some applications, such as smart grid and traffic monitoring, the update-size information can be obtained or fairly well estimated [2]. It has been shown that scheduling policies that leverage the size information can substantially reduce the delay, especially when the system load is high or when the size variability is large [3]. This motivates us to investigate the AoI performance of size-based policies in a G/G/1 queue. Note that the update-size information is "orthogonal" to the arrival-time information, both of which could significantly impact the AoI performance. Therefore, it is quite natural to further consider AoI-based policies that use both the update-size and arrival-time information of updates.

TABLE I: Guidelines for the design of AoI-efficient scheduling policies.
Guideline | Summary | Representative policies
1 | Prioritizing small updates | SJF, SJF_P, SRPT
2 | Prioritizing recent updates | LCFS, LCFS_P
3 | Allowing service preemption | PS, LCFS_P, SJF_P, SRPT
4 | AoI-based designs | ADE, ADS, ADM
5 | Prioritizing informative updates | Informative versions of the above policies
In addition, prior work has revealed that scheduling policies that allow service preemption and that prioritize informative updates (also called effective updates, i.e., those that lead to a reduced AoI once delivered; see Section VI-A for a formal definition) yield good AoI performance [4, 5, 6]. Intuitively, preemption prevents fresh updates from being blocked by a large and/or stale update in service; informative policies discard stale updates, which do not bring new information but may block fresh updates. To that end, we also consider AoI-based scheduling designs that both allow service preemption and prioritize informative updates.
In Fig. 1, we position our work in the literature by summarizing various design aspects of scheduling policies for a G/G/1 queue. Existing work mostly explores designs based on the arrival-time information, along with considering service preemption and informative updates. We point out that the size-based design is an orthogonal dimension of great importance, which somehow has not received sufficient attention yet. Unsurprisingly, designing AoI-efficient policies requires the consideration of all these dimensions. In Table I, we summarize several useful guidelines for the design of AoI-efficient policies, which are also labeled in Fig. 1. To the best of our knowledge, this is the first work that conducts a systematic and comparative study of the design of AoI-efficient scheduling policies for a G/G/1 queue. In the following, we summarize our key contributions along with an explanation of Fig. 1 and Table I.
First, we investigate the AoI performance of size-based scheduling policies (i.e., the green arrow in Fig. 1), which is an approach orthogonal to the arrival-time-based design studied in most existing work. We conduct extensive simulations to show that size-based policies that prioritize small updates significantly improve the AoI performance. We also explain interesting observations from the simulation results and summarize useful guidelines (i.e., Guidelines 1, 2, and 3 in Table I) for the design of AoI-efficient policies.
Second, leveraging both the update-size and arrival-time information, we introduce Guideline 4 and propose AoI-based scheduling policies (i.e., the blue arrow in Fig. 1). These AoI-based policies attempt to optimize the AoI at a specific future time instant from three different perspectives: the AoI-Drop-Earliest (ADE) policy, which makes the AoI drop the earliest; the AoI-Drop-to-Smallest (ADS) policy, which makes the AoI drop to the smallest value; and the AoI-Drop-Most (ADM) policy, which makes the AoI drop the most. The simulation results show that such AoI-based policies indeed achieve good AoI performance.
Third, we observe that informative policies can significantly improve the AoI performance compared to their non-informative counterparts, which leads to Guideline 5. Integrating all the guidelines, we propose preemptive, informative, AoI-based policies (i.e., the red arrow in Fig. 1). The simulation results show that such policies empirically achieve the best AoI performance among all the considered policies.
Finally, we prove sample-path equivalence between some size-based policies and AoI-based policies. These results provide an intuitive explanation for why some size-based policies, such as Shortest-Remaining-Processing-Time (SRPT), achieve very good AoI performance.
To summarize, our study reveals that among the various aspects of scheduling policies we investigated, prioritizing small updates, allowing service preemption, and prioritizing informative updates play the most important roles in the design of AoI-efficient scheduling policies.
The rest of this paper is organized as follows. We first discuss related work in Section II. Then, we describe our system model in Section III. In Section IV, we evaluate the AoI performance of size-based scheduling policies. We further propose AoI-based scheduling policies in Section V. In addition, we evaluate the AoI performance of preemptive, informative, AoI-based policies in Section VI. Finally, we make concluding remarks in Section VII.
II Related Work
The traditional queueing literature on single-server queues is largely focused on delay analysis. In [7], the authors prove that all non-preemptive scheduling policies that do not make use of job-size information have the same distribution of the number of jobs in the system. The works [8, 9] prove that for a work-conserving queue, the SRPT policy minimizes the number of jobs in the system at any point in time and is therefore delay-optimal. The work of [10] derives formulas for the average delay under several common scheduling policies (which will be discussed in Section IV).
On the other hand, although the AoI research is still in a nascent stage, it has already attracted a lot of interest (see [11, 12] for surveys). Here we only discuss the most relevant work, which is focused on AoI-oriented queueing analysis. Much of the existing work considers scheduling policies that are based on the arrival time (such as FCFS and LCFS). The AoI is introduced in [1], where the authors study the average AoI in the M/M/1, M/D/1, and D/M/1 queues under the FCFS policy. In [13], the AoI performance of the FCFS policy in the M/M/1/1 and M/M/1/2 queues is studied, where new arrivals are discarded if the buffer is full. The average AoI of the LCFS policy in the M/M/1 queue is also discussed in [13].
There has been some work that aims to reduce the AoI by making use of service preemption. In [14], the average AoI of LCFS in the M/M/1 queue with and without service preemption is analyzed. The work of [15] is quite similar to [14], but it considers the average AoI in the M/M/2 queue. In [16], the average AoI for the M/G/1/1 preemptive system with a multi-stream update source is derived. The age-optimality of the preemptive LCFS (LCFS_P) policy is proved in [4], where the service times are exponentially distributed.
In addition to taking advantage of service preemption, some of the prior studies also consider the strategy of prioritizing informative updates for reducing the AoI. The work of [5, 6] reveals that the AoI performance can be improved by prioritizing informative updates and discarding non-informative updates when making scheduling decisions. In [17], the authors consider a G/G/1 queue with informative updates and derive the stationary distribution of the AoI, expressed in terms of the stationary distributions of the delay and the Peak AoI (PAoI). With the AoI distribution, one can analyze the mean or higher moments of the AoI in GI/GI/1, M/GI/1, and GI/M/1 queues under several scheduling policies (e.g., FCFS and LCFS).

Recent research effort has also been devoted to understanding the relation between the AoI and the delay. In [18], the authors analyze the tradeoff between the AoI and the delay in a single-server M/G/1 system under a specific scheduling policy that does not use the service time of each individual update. In [19], the violation probabilities of the delay and the PAoI are investigated over an additive white Gaussian noise (AWGN) channel, but the update sizes are assumed to be identical.
III System Model
In this section, we consider a single-server queueing system and give the definitions of the Age of Information (AoI) and the Peak AoI (PAoI).
We model the information-update system as a G/G/1 queue, where a single source generates updates (which contain the current state of a measurement or observation of the source) with rate λ. The updates enter the queueing system immediately after they are generated; hence, the generation time is the same as the arrival time. We use S to denote the size of an update (i.e., the amount of time required for the update to complete service), which has a general distribution with mean E[S]. The system load is defined as ρ = λE[S].
We use t_i and t_i' to denote the time at which the i-th update was generated at the source and the time at which it leaves the server, respectively. The AoI at time t is then defined as Δ(t) = t − U(t), where U(t) is the generation time of the freshest update among those that have been processed by the server. An example of the AoI evolution under the FCFS policy is shown in Fig. 2. Then, the average AoI can be defined as
Δ̄ = lim_{T→∞} (1/T) ∫_0^T Δ(t) dt.   (1)
In general, the analysis of the average AoI is quite difficult since it is determined by two dependent quantities: the inter-arrival time and the delay of updates [1]. We define the inter-arrival time between the (i−1)-st update and the i-th update as X_i = t_i − t_{i−1} and define the delay of the i-th update as T_i = t_i' − t_i. Alternatively, the Peak AoI (PAoI) has also been proposed as an information freshness metric [5], which is defined as the maximum value of the AoI before it drops due to a newly delivered fresh update. Let A_i be the i-th PAoI. From Fig. 2, we can see A_i = t_i' − t_{i−1}. This can be rewritten as the sum of the inter-arrival time between the i-th update and the previous update (i.e., X_i) and the delay of the i-th update (i.e., T_i). Therefore, the PAoI of the i-th update can also be expressed as A_i = X_i + T_i, and its expectation is E[A] = E[X] + E[T].
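As a concrete illustration of these definitions, the sketch below (Python, with illustrative helper names of our own; it assumes an initial update generated and delivered at time 0, so the AoI starts from 0) computes the exact time-average AoI and the PAoI values from a sample path of (generation time, delivery time) pairs by integrating the piecewise-linear sawtooth curve:

```python
def average_aoi(deliveries, horizon):
    """Time-average AoI over [0, horizon], integrating the sawtooth exactly.

    deliveries: list of (generation_time, delivery_time) pairs.
    Assumes an update generated and delivered at time 0, so the AoI starts at 0.
    """
    area, freshest, last = 0.0, 0.0, 0.0
    for gen, dlv in sorted(deliveries, key=lambda p: p[1]):
        lo, hi = last - freshest, dlv - freshest  # AoI rises linearly here
        area += (lo + hi) / 2 * (dlv - last)
        if gen > freshest:  # informative delivery: the AoI drops
            freshest = gen
        last = dlv
    lo, hi = last - freshest, horizon - freshest  # tail segment up to horizon
    area += (lo + hi) / 2 * (horizon - last)
    return area / horizon

def peak_aoi(deliveries):
    """Peak AoI values: the AoI just before each informative delivery."""
    peaks, freshest = [], 0.0
    for gen, dlv in sorted(deliveries, key=lambda p: p[1]):
        if gen > freshest:
            peaks.append(dlv - freshest)  # equals X_i + T_i under FCFS
            freshest = gen
    return peaks
```

For two updates generated at times 1 and 4 and delivered at times 3 and 6, the peaks are 3 and 5, matching A_i = X_i + T_i (e.g., A_2 = X_2 + T_2 = 3 + 2 = 5).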
IV Size-based Policies
In this section, we investigate the AoI performance of several common scheduling policies, including size-based and non-size-based policies, via extensive simulations. Note that these common scheduling policies may serve non-informative updates (i.e., those that do not lead to a reduced AoI). This is because in some applications, such as news feeds and social networks, obsolete updates are still useful and need to be served [4]. In Section VI, we will discuss the case where obsolete updates are discarded.
Following [3], we first give the definitions of several common scheduling policies, which can be divided into four types along two dimensions: (i) whether they are size-based or not, where size-based policies use the update-size information (which is available in some applications, such as smart grid [2]) for making scheduling decisions; and (ii) whether they are preemptive or not. The definition of preemption is given below. In this paper, we do not consider the cost of preemption.
Definition 1.
A policy is preemptive if an update may be stopped partway through its execution and then resumed at a later time without losing the work already done.
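As a minimal illustration of this preempt-resume semantics, the following sketch (a hypothetical Update class, not from the paper) tracks remaining work so that a preempted update resumes later without losing any work:

```python
class Update:
    """An update whose remaining work survives preemption (preempt-resume)."""

    def __init__(self, size):
        self.remaining = size

    def serve(self, duration):
        """Serve for up to `duration`; return the unused server time.

        If the time slice runs out first, the remaining work is kept,
        so a later resume continues from where service stopped.
        """
        worked = min(duration, self.remaining)
        self.remaining -= worked
        return duration - worked
```

For example, an update of size 3 served for 2 time units keeps 1 unit of remaining work; a second 2-unit slice finishes it and returns 1 unit of unused server time.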
The first type consists of policies that are non-preemptive and blind to the update size:

First-Come-First-Served (FCFS): When the server frees up, it chooses to serve the update that arrived first, if any.

Last-Come-First-Served (LCFS): When the server frees up, it chooses to serve the update that arrived last, if any.

Random-Order-Service (RANDOM): When the server frees up, it randomly chooses one update to serve, if any.
The second type consists of policies that are non-preemptive and make scheduling decisions based on the update size:

Shortest-Job-First (SJF): When the server frees up, it chooses to serve the update with the smallest size, if any.
The third type consists of policies that are preemptive and blind to the update size:

Processor-Sharing (PS): All the updates in the system are served simultaneously and equally (i.e., each update receives an equal fraction of the available service capacity).

Preemptive Last-Come-First-Served (LCFS_P): This is the preemptive version of the LCFS policy. Specifically, a preemption happens whenever a new update arrives.
The fourth type consists of policies that are preemptive and make scheduling decisions based on the update size:

Preemptive Shortest-Job-First (SJF_P): This is the preemptive version of the SJF policy. Specifically, a preemption happens when a newly arrived update has the smallest size among all the updates in the system.

Shortest-Remaining-Processing-Time (SRPT): When the server frees up, it chooses to serve the update with the smallest remaining size. In addition, a preemption happens only when a newly arrived update has a size smaller than the remaining size of the update in service.
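The selection rules above translate directly into code. The sketch below (Python; the update records and field names are our own) implements the non-preemptive choices and the SRPT preemption test:

```python
import random

def fcfs(queue):
    """Serve the waiting update that arrived first."""
    return min(queue, key=lambda u: u["arrival"])

def lcfs(queue):
    """Serve the waiting update that arrived last."""
    return max(queue, key=lambda u: u["arrival"])

def random_order(queue, rng=random):
    """Serve a uniformly random waiting update."""
    return rng.choice(queue)

def sjf(queue):
    """Serve the waiting update with the smallest size."""
    return min(queue, key=lambda u: u["size"])

def srpt_should_preempt(remaining_in_service, new_size):
    """SRPT preempts only if the new update's size is smaller than the
    remaining size of the update currently in service."""
    return new_size < remaining_in_service
```

Embedding these selectors in an event-driven simulation of arrivals and departures reproduces the kind of experiments reported in Figs. 3 and 4.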
Previous work (see, e.g., [3, Section VII]) reveals that size-based policies can greatly improve the delay performance. In light of such results, we conjecture that size-based policies also achieve better AoI performance, given that the AoI is dominantly determined by the delay when the system load is high or when the size variability is large [1]. As we mentioned earlier, it is in general very difficult to obtain the exact expression of the average AoI except for some special cases (e.g., FCFS and LCFS) [1, 17]. Therefore, we investigate the AoI performance of size-based policies through extensive simulations.
In Figs. 3 and 4, we present the simulation results of the average AoI and PAoI performance, respectively, under the scheduling policies introduced above. Here we assume that a single source generates updates according to a Poisson process with rate λ, and the update sizes are independent and identically distributed (i.i.d.). In Fig. 3(a), we assume that the update size follows an exponential distribution with mean E[S]. In Figs. 3(b) and 3(c), we assume that the update size follows a Weibull distribution with mean E[S]. We define the squared coefficient of variation of the update size as C^2 = Var(S)/E[S]^2, i.e., the variance normalized by the square of the mean [3]. Hence, a larger C^2 means a larger variability. In Fig. 3(b), we fix C^2 and vary the system load ρ, while in Fig. 3(c), we fix the system load ρ and vary C^2. Note that throughout the paper, these simulation settings are used as default settings unless otherwise specified.

In the following, we discuss key observations from the simulation results and propose useful guidelines for the design of AoI-efficient policies. Note that similar observations can also be made for the G/G/1 queue. An additional interesting observation is that the average PAoI could be much smaller than the average AoI when the inter-arrival time has a large variability. More simulation results can be found in Appendix D.
Observation 1.
Size-based policies achieve better average AoI/PAoI performance than non-size-based policies in both the non-preemptive and preemptive cases.
In Fig. 3, we can see that in the non-preemptive case, SJF achieves better average AoI performance than FCFS, RANDOM, and LCFS in various settings. Similarly, in the preemptive case, SJF_P and SRPT achieve better average AoI performance than PS and LCFS_P. Similar observations can be made for the average PAoI performance in Fig. 4.
Observation 2.
Under preemptive, size-based policies, the average AoI/PAoI decreases as the system load increases.
In Figs. 3(a) and 3(b), we can see that under SJF, SJF_P, and SRPT, the average AoI decreases as the system load increases. There are two reasons. First, when the system load increases, more updates with small sizes arrive at the queue. Therefore, size-based policies that prioritize small updates lead to more frequent AoI drops. Second, preemption prevents fresh updates from being blocked by a large or stale update in service. Similar observations can be made for the average PAoI performance in Figs. 4(a) and 4(b).
Guideline 1.
When the update-size information is available, one should prioritize updates with small sizes.
However, in certain application scenarios, the update-size information may not be available or is difficult to estimate. In such cases, scheduling decisions have to be made without the update-size information. Under these circumstances, we make the following observations from Figs. 3 and 4.
Observation 3.
LCFS and LCFS_P achieve the best average AoI performance among non-preemptive, non-size-based policies and preemptive, non-size-based policies, respectively.
Observation 4.
Under LCFS_P, the average AoI/PAoI decreases as the system load increases.
Observations 3 and 4 have also been made in previous work [13, 20, 4]. It is quite intuitive that when the update-size information is unavailable, one should give higher priority to more recent updates. This is because, while all the updates have the same expected service time, the most recent update arrives last and thus leads to the smallest AoI once delivered. Therefore, Observations 3 and 4 lead to the following guideline:
Guideline 2.
When the update-size information is unavailable, one should prioritize recent updates.
Note that Observations 2 and 4 also suggest that under preemptive policies, the average AoI/PAoI decreases as the system load increases. This is because preemptions prevent fresh updates from being blocked by a large or stale update in service. In addition, we have also observed the following nice properties of preemptive policies.
Observation 5.
Not only do preemptive policies achieve better average AoI/PAoI performance than non-preemptive policies, but they are also less sensitive to changes in the update-size variability, i.e., they are more robust.
In Figs. 3(a) and 3(b), we can see that preemptive policies (e.g., LCFS_P, SJF_P, and SRPT) generally achieve better average AoI performance than non-preemptive ones (e.g., FCFS, RANDOM, LCFS, and SJF), especially when the system load is high. In Fig. 3(c), we can see that the advantage of preemptive policies becomes larger as the update-size variability (i.e., C^2) increases. Moreover, the AoI performance of preemptive policies is only very slightly impacted when the update-size variability changes, while that of non-preemptive policies varies significantly. Therefore, Observations 2, 4, and 5 lead to the following guideline:
Guideline 3.
Service preemption should be employed when it is allowed.
V AoI-based Policies
In Section IV, we demonstrated that size-based policies achieve better average AoI/PAoI performance than non-size-based policies. However, size-based policies do not utilize the arrival-time information, which also plays an important role in reducing the AoI. In this section, we propose three AoI-based scheduling policies, which leverage both the update-size and arrival-time information to reduce the AoI. Our simulation results show that these AoI-based policies outperform non-AoI-based policies.
We begin with the definitions of three AoI-based policies that attempt to optimize the AoI at a specific future time instant from three different perspectives:

AoI-Drop-Earliest (ADE): When the server frees up, it chooses to serve an update such that once it is delivered, the AoI drops as soon as possible.

AoI-Drop-to-Smallest (ADS): When the server frees up, it chooses to serve an update such that once it is delivered, the AoI drops to a value as small as possible.

AoI-Drop-Most (ADM): When the server frees up, it chooses to serve an update such that once it is delivered, the AoI drops as much as possible.
If all updates waiting in the queue are obsolete, then the above policies choose to serve an update with the smallest size.
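When the server frees up at time `now` and all waiting updates are informative, choosing update j with generation time t_j and size S_j means the AoI next drops at time now + S_j. Under this reading (our own derivation, with hypothetical field names), the three rules reduce to simple selections: ADE minimizes the delivery time and hence the size; ADS minimizes the post-drop AoI (now + S_j) − t_j, i.e., S_j − t_j; and ADM maximizes the drop t_j − U, where U is the freshest delivered generation time, and so favors the most recent update:

```python
def ade(queue):
    """AoI-Drop-Earliest: delivery occurs at now + S_j, and `now` is common
    to all candidates, so the earliest drop comes from the smallest size."""
    return min(queue, key=lambda u: u["size"])

def ads(queue):
    """AoI-Drop-to-Smallest: the AoI right after delivery is
    (now + S_j) - t_j, so minimize size minus generation time."""
    return min(queue, key=lambda u: u["size"] - u["gen"])

def adm(queue):
    """AoI-Drop-Most: the drop magnitude is t_j minus the freshest delivered
    generation time, which is maximized by the most recent update."""
    return max(queue, key=lambda u: u["gen"])
```

On three waiting updates with (t_j, S_j) = (1, 1), (4, 2.5), (6, 5), ADE, ADS, and ADM pick the first, second, and third update, respectively, a Fig. 5-like outcome.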
Although all of these AoI-based policies are quite intuitive, they behave very differently. In order to explain the differences among these AoI-based policies, we present an example in Fig. 5 to show how the AoI evolves under them. Suppose that while the (i−1)-st update is being served, three new updates (i.e., the i-th, (i+1)-st, and (i+2)-nd updates) arrive in sequence at times t_i, t_{i+1}, and t_{i+2}, respectively. The sizes of these updates satisfy S_i < S_{i+1} < S_{i+2}. When the server frees up after it finishes serving the (i−1)-st update, ADE, ADS, and ADM choose to serve the i-th, (i+1)-st, and (i+2)-nd updates, respectively. This is because serving the i-th update leads to the earliest AoI drop (following the red curve), serving the (i+1)-st update makes the AoI drop to the smallest value (following the blue curve), and serving the (i+2)-nd update leads to the largest AoI drop (following the green curve). Clearly, ADE, ADS, and ADM aim to optimize the AoI at a specific future time instant (i.e., the future delivery time of the chosen update) with different myopic goals. Note that at first glance, ADS and ADM may look the same. Indeed, they would be equivalent if the AoI drops occurred at the same time instant. However, these two policies are different because the time instants at which the AoI drops are not necessarily the same (see Fig. 5).
Next, we conduct extensive simulations to investigate the AoI performance of these AoI-based policies. In Fig. 6, we present the simulation results for the average AoI performance of the AoI-based policies compared to a representative arrival-time-based policy (i.e., LCFS) and a representative size-based policy (i.e., SJF). All the policies considered here are non-preemptive; the preemptive cases will be discussed in Section VI.
In Fig. 6(a), we observe that most AoI-based policies are slightly better than non-AoI-based policies, although their performances are very close. Among the AoI-based policies, ADE is the best, ADM is the worst, and ADS is in between. It is not surprising that ADM is the worst: although ADM achieves the largest AoI drop, this comes at the cost of possibly having to wait until the AoI becomes large first. ADE being the best suggests that giving higher priority to small updates (so that the AoI drops as soon as possible) is a good strategy. In Figs. 6(b) and 6(c), similar observations can be made for update sizes following Weibull distributions.
The above observations lead to the following guideline:
Guideline 4.
Leveraging both the update-size and arrival-time information can further improve the AoI performance. However, the benefit seems marginal.
VI Preemptive, Informative, AoI-based Policies
In Section IV, we observed that preemptive policies have several advantages and perform better than non-preemptive policies. In this section, we first demonstrate that policies that prioritize informative updates (i.e., those that lead to AoI drops once delivered) perform better than non-informative policies. Then, by integrating the guidelines we have, we consider preemptive, informative, AoI-based policies and evaluate their performance through simulations.
VI-A Informative Policies
As far as the AoI is concerned, there are two types of updates: informative updates and non-informative updates [21]. Informative updates lead to AoI drops once delivered, while non-informative updates do not. In some applications, such as autonomous vehicles and stock quotes, it is reasonable to discard non-informative updates (which do not help reduce the AoI but may block new updates). In this subsection, we introduce the "informative" versions of various policies, which prioritize informative updates and discard non-informative updates. Then, we use simulation results to demonstrate that informative policies generally achieve better average AoI/PAoI performance than the original (non-informative) ones. Furthermore, we rigorously prove that in a G/M/1 queue, the informative version of LCFS is stochastically better than the original LCFS policy.
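Operationally, the informative version of a policy only needs one filtering step before each scheduling decision, as in this sketch (the field name is our own):

```python
def informative(queue, freshest_gen):
    """Keep only informative updates: those generated strictly after the
    freshest update delivered so far. Delivering any other update cannot
    reduce the AoI, so the informative version discards them."""
    return [u for u in queue if u["gen"] > freshest_gen]
```

Applying any of the earlier selection rules to the filtered queue yields the corresponding informative policy.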
We use π_I to denote the informative version of policy π.¹ All the scheduling policies we consider have their informative versions. In some cases, the informative version is simply the same as the original policy (e.g., FCFS and LCFS_P).

¹ For simplicity, we omit the additional "_" in the policy name if policy π is a preemptive policy ending with "_P". For example, we use LCFS_PI to denote the informative version of LCFS_P.
In Fig. 8, we show the simulation results for the average AoI performance of several informative policies compared to their non-informative counterparts. In order to evaluate the benefit of informative policies, we plot the informative AoI gain, which is the ratio of the difference between the average AoI of the non-informative version and that of the informative version to the average AoI of the non-informative version. Hence, a larger informative AoI gain means a larger benefit from the informative version. One important observation from Fig. 8 is as follows.
Observation 6.
Informative policies achieve better average AoI performance than their non-informative counterparts. The informative AoI gain is larger for non-preemptive policies and increases as the system load increases.
Intuitively, informative policies are expected to outperform their non-informative counterparts because serving non-informative updates cannot reduce the AoI but may block new updates. The simulation results verify this intuition, as the informative AoI gain is always nonnegative. Second, we can see that most non-preemptive policies (e.g., RANDOM, LCFS, and SJF) benefit more from prioritizing informative updates. Third, as the system load increases, the informative AoI gain increases under most of the considered policies, especially the non-preemptive ones. This is because as the system load increases, the number of non-informative updates also increases, which has a larger negative impact on the AoI performance of non-preemptive, non-informative policies. Observation 6 leads to the following guideline:
Guideline 5.
The server should prioritize informative updates and discard noninformative updates when it is allowed.
Based on Observation 6, we conjecture that an informative policy is at least as good as its non-informative counterpart. As a preliminary result, we prove that this conjecture is indeed true for LCFS in a G/M/1 queue. In the following, we introduce the notion of stochastic ordering, which is used in the statement of Proposition 1.
Definition 2.
Stochastic Ordering of Stochastic Processes [22, Ch. 6.B.7]: Let {X(t), t ≥ 0} and {Y(t), t ≥ 0} be two stochastic processes. Then, {X(t)} is said to be stochastically less than {Y(t)}, denoted by {X(t)} ≤_st {Y(t)}, if, for all choices of an integer n and time instants t_1 < t_2 < ⋯ < t_n in [0, ∞), the following holds for all upper sets² U ⊆ R^n:

P{(X(t_1), …, X(t_n)) ∈ U} ≤ P{(Y(t_1), …, Y(t_n)) ∈ U},   (2)

where (X(t_1), …, X(t_n)) and (Y(t_1), …, Y(t_n)) are the states of the two processes sampled at these time instants. Stochastic equality can be defined in a similar manner and is denoted by =_st.

² A set U ⊆ R^n is an upper set if y ∈ U whenever y ≥ x and x ∈ U, where x and y are two vectors in R^n and x ≤ y if x_j ≤ y_j for all j.
Roughly speaking, Eq. (2) implies that {X(t)} is less likely than {Y(t)} to take on large values, where "large" means any value in an upper set U. We use Δ_π(t) to denote the AoI process under policy π. Furthermore, we define the set of parameters I = {n, t_1, …, t_n}, where n is the number of updates and t_i is the generation time of update i. Having these definitions and notations, we are now ready to state Proposition 1.
Proposition 1.
In a G/M/1 queue, for all I, the AoI under LCFS_I is stochastically smaller than that under LCFS, i.e.,

{Δ_{LCFS_I}(t), t ≥ 0} ≤_st {Δ_{LCFS}(t), t ≥ 0}.   (3)
Proof.
See Appendix A. ∎
VI-B Preemptive, Informative, AoI-based Policies
So far, we have demonstrated the advantages of preemptive policies, AoI-based policies, and informative policies. In this subsection, we integrate all three of these ideas and propose preemptive, informative, AoI-based policies.
We first consider the preemptive, informative versions of the three AoI-based policies: ADE_PI, ADS_PI, and ADM_PI. Interestingly, we can show equivalence between ADE_PI and SRPT_I (i.e., the informative version of SRPT) and between ADE_I and SJF_I (i.e., the informative versions of ADE and SJF, respectively) in the sample-path sense. These results are stated in Propositions 2 and 3.
Proposition 2.
ADE_PI and SRPT_I are equivalent in every sample path.
Proposition 3.
ADE_I and SJF_I are equivalent in every sample path.
We prove Propositions 2 and 3 using strong induction; the detailed proofs are provided in Appendices B and C, respectively. Propositions 2 and 3 imply that although SRPT_I and SJF_I do not explicitly follow an AoI-based design, they are essentially AoI-based policies. This provides an intuitive explanation for why size-based policies, such as variants of SRPT and SJF, have good empirical AoI performance.
In Fig. 10, we present the simulation results for the average AoI performance of the preemptive, informative, AoI-based policy ADE_PI compared to several other policies. We observe that in the various settings we consider, ADE_PI achieves the best AoI performance.
VII Conclusion
In this paper, we systematically studied the impact of various aspects of scheduling policies on the AoI performance and provided several useful guidelines for the design of AoI-efficient scheduling policies. Our study reveals that among the various aspects we investigated, prioritizing small updates, allowing service preemption, and prioritizing informative updates play the most important roles. It turns out that common scheduling policies like SRPT and SJF_P and their informative variants can achieve very good AoI performance, although they do not explicitly make scheduling decisions based on the AoI. This can be partially explained by the equivalence between such size-based policies and some AoI-based policies.
Our findings also raise several interesting questions that are worth investigating as future work. One important direction is to pursue more theoretical results beyond the simulation results we provided in this paper. For example, it would be interesting to see whether one can rigorously prove that any informative policy always outperforms its noninformative counterpart, which is consistently observed in the simulation results.
References
 [1] S. Kaul, R. Yates, and M. Gruteser, "Real-time status: How often should one update?" in 2012 Proceedings IEEE INFOCOM. IEEE, 2012, pp. 2731–2735.
 [2] S. Wu, X. Ren, S. Dey, and L. Shi, "Optimal scheduling of multiple sensors with packet length constraint," IFAC-PapersOnLine, vol. 50, no. 1, pp. 14 430–14 435, 2017.
 [3] M. Harchol-Balter, Performance Modeling and Design of Computer Systems: Queueing Theory in Action. Cambridge University Press, 2013.
 [4] A. M. Bedewy, Y. Sun, and N. B. Shroff, “Optimizing data freshness, throughput, and delay in multiserver informationupdate systems,” in 2016 IEEE International Symposium on Information Theory (ISIT). IEEE, 2016, pp. 2569–2573.
 [5] M. Costa, M. Codreanu, and A. Ephremides, “Age of information with packet management,” in 2014 IEEE ISIT. IEEE, 2014, pp. 1583–1587.
 [6] N. Pappas, J. Gunnarsson, L. Kratz, M. Kountouris, and V. Angelakis, “Age of information of multiple sources with queue management,” in 2015 IEEE ICC. IEEE, 2015, pp. 5935–5940.
 [7] M. E. Crovella, R. Frangioso, and M. Harchol-Balter, "Connection scheduling in web servers," Boston University Computer Science Department, Tech. Rep., 1999.
 [8] L. Schrage, “A Proof of the Optimality of the Shortest Remaining Processing Time Discipline,” Operations Research, vol. 16, no. 3, pp. 687–690, June 1968.
 [9] D. R. Smith, “A new proof of the optimality of the shortest remaining processing time discipline,” Operations Research, vol. 26, no. 1, pp. 197–199, 1978.
 [10] M. Harchol-Balter, "Queueing disciplines," Wiley Encyclopedia of Operations Research and Management Science, 2010.
 [11] A. Kosta, N. Pappas, V. Angelakis et al., “Age of information: A new concept, metric, and tool,” Foundations and Trends® in Networking, vol. 12, no. 3, pp. 162–259, 2017.
 [12] Y. Sun, I. Kadota, R. Talak, and E. Modiano, “Age of information: A new metric for information freshness,” Synthesis Lectures on Communication Networks, vol. 12, no. 2, pp. 1–224, 2019.
 [13] M. Costa, M. Codreanu, and A. Ephremides, “On the age of information in status update systems with packet management,” IEEE Transactions on Information Theory, vol. 62, no. 4, pp. 1897–1910, 2016.
 [14] S. K. Kaul, R. D. Yates, and M. Gruteser, “Status updates through queues,” in 2012 46th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2012, pp. 1–6.
 [15] C. Kam, S. Kompella, and A. Ephremides, “Effect of message transmission diversity on status age,” in 2014 IEEE ISIT. IEEE, 2014.
 [16] E. Najm and E. Telatar, "Status updates in a multistream M/G/1/1 preemptive queue," in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). IEEE, 2018.
 [17] Y. Inoue, H. Masuyama, T. Takine, and T. Tanaka, “A general formula for the stationary distribution of the age of information and its application to singleserver queues,” arXiv preprint arXiv:1804.06139, 2018.
 [18] R. Talak and E. Modiano, “Agedelay tradeoffs in single server systems,” arXiv preprint arXiv:1901.04167, 2019.
 [19] R. Devassy, G. Durisi, G. C. Ferrante, O. Simeone, and E. Uysal-Biyikoglu, "Delay and peak-age violation probability in short-packet transmissions," in 2018 IEEE ISIT. IEEE, 2018, pp. 2471–2475.
 [20] R. D. Yates and S. K. Kaul, “The age of information: Realtime status updating by multiple sources,” IEEE Transactions on Information Theory, vol. 65, no. 3, pp. 1807–1827, 2018.
 [21] C. Kam, S. Kompella, and A. Ephremides, “Age of information under random updates,” in 2013 IEEE ISIT. IEEE, 2013, pp. 66–70.
 [22] M. Shaked and J. G. Shanthikumar, Stochastic orders. Springer Science & Business Media, 2007.