Future networked systems are expected to provide information updates in real time to support emerging time-critical applications in cyber-physical systems, the increasing demand for live updates by mobile applications, and beyond. Since the freshness of the information updates is crucial to the performance of these applications, one has to account for it in the design of networked systems. Age of Information (AoI) has emerged as a relevant performance metric for quantifying the freshness of updates from the perspective of a destination. It is defined as the time elapsed since the generation of the freshest update available at the destination. Unlike the system delay, AoI accounts for the frequency at which a source generates updates, since it increases linearly in time until an update with a later generation time is received at the destination. Whenever such an update is received, the AoI resets to the system delay of that update, which equals its age at reception.
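The AoI dynamics described above can be made concrete with a short sketch. The helper below is our illustration (not from the paper): it recovers the AoI peak values from the generation and reception instants of the received updates, under the paper's convention that an update is received at time zero and the initial AoI is zero.

```python
def aoi_peaks(gen_times, recv_times):
    """AoI peak values for a sequence of received updates.

    gen_times[i] / recv_times[i]: generation and reception instants of the
    i-th received update, listed in order of reception.  Convention: an
    update is received at time 0 and the AoI starts at 0.
    """
    peaks = []
    age = 0.0        # AoI right after the previous reception (its system delay)
    prev_recv = 0.0  # instant of the previous reception
    for g, r in zip(gen_times, recv_times):
        # AoI grows linearly since the last reception and peaks just
        # before the next update is received ...
        peaks.append(age + (r - prev_recv))
        # ... then resets to the system delay r - g of the new update.
        age, prev_recv = r - g, r
    return peaks
```

For example, updates generated at times 0, 2, 5 and received at times 1, 4, 6 yield peaks 1, 4, 4: each peak is the previous reset value plus the inter-reception time.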
Given the above properties and its relevance to networked systems, the question of how to optimize AoI in a given system has received significant attention in the recent past. The problem of computing an optimal arrival rate to minimize some function of AoI has been studied for given inter-arrival time and service time distributions, e.g., see [2, 3, 4, 5, 6]. While the objective function was the average AoI in [2, 3, 4], subsequent works considered the AoI violation probability and the average peak AoI (PAoI). Given the sequence of arrivals, a preemptive last-generated-first-served policy was proven to result in smaller age processes at all nodes of a network when the service times are exponential.
In contrast to the above works, we consider the generate-at-will source model, studied in [8, 9], in a single-source-single-server system. Under this model, the source can generate an update at any time instant specified by a scheduling policy, and thus the arrival sequence is a function of the policy. Further, under this model no queueing is required, because by the definition of AoI, at any time instant, sending an old update from a queue would be suboptimal to sending a freshly generated update. A counter-intuitive result is that the work-conserving zero-wait policy, which generates a packet immediately when the server becomes idle, is not optimal for minimizing the average AoI [8, 9]. In fact, introducing a waiting time after an update is served was shown to achieve a lower average AoI. Given a service-time distribution with finite mean and assuming no service preemptions, prior work solved for the optimal waiting times that minimize the average AoI, and subsequently for any non-decreasing function of AoI. Motivated by the fact that allowing service preemptions could further reduce AoI in this system, we ask a fundamental question: what is the minimum achievable AoI in a single-source-single-server queuing system for any given service-time distribution?
In this work, we answer this question for the minimum achievable average PAoI (the minimum achievable average AoI was recently studied elsewhere and remains an open problem) by considering service preemptions, where the service of an update is preempted and dropped whenever a new update is generated by the scheduling policy. The service times across updates are independent and identically distributed (i.i.d.) with a general distribution, possibly with infinite mean; in fact, preemptions are more beneficial when the service-time distribution has infinite mean. Average PAoI was first studied for M/M/1/1 and M/M/1/2* systems, and has received considerable attention in recent works [6, 11, 12], which use a non-preemptive service model. The related work on service preemptions is discussed and contrasted with our results in Section VI.
We note that a decision about when to generate a new update that preempts the update under service clearly depends on the service-time distribution and could potentially depend on past decisions. Thus, minimizing the average PAoI under preemptions results in an infinite-horizon average-cost Markov Decision Problem (MDP) where the state space and the action space are continuous. In general, for such a problem it is hard to prove the existence of an optimal stationary deterministic policy among all randomized causal policies that use the entire history of available information. Our key result is that a work-conserving fixed-threshold policy, which uses a fixed duration for preemptions, minimizes the average PAoI among all randomized-threshold causal policies.
We prove the above result in two steps. First, we formulate an MDP with appropriate cost functions and show that the policy for choosing the sequence of thresholds between any two AoI peaks is independent of the initial state and is also stationary. Second, we define costs for each decision within the two AoI peaks and show that the sequence of decisions converges to a stationary policy and that a fixed-threshold policy achieves the minimum cost. Given the optimal policy among randomized-threshold causal policies, we characterize the minimum average PAoI in any single-source-single-server queuing system. We also present a necessary and sufficient condition on the service-time distribution under which preemptions are always beneficial. Finally, using a case study, we provide insight into the design of the threshold.
The rest of the paper is organized as follows. In Section II we formulate the average PAoI minimization problem. In Section III we present preliminary results that are used in Section IV to obtain the optimal fixed-threshold policy. In Section V we discuss the conditions under which preemptions are beneficial. The related work on service preemptions is presented in Section VI. In Section VII we present some numerical results and finally conclude in Section VIII.
II. System Model and Problem Statement
We study an information retrieval system, shown in Figure 1, where a monitor (e.g., a mobile application) strives to obtain the latest information (e.g., newsfeeds) from a source which evolves independently. The source instantaneously generates an information update (or simply update) and sends it to the preemptive server whenever it receives a request from the monitor. We assume zero delay for a request from the monitor to the source. However, an update incurs a random service time at the server before it reaches the monitor. We assume that the service times across updates are i.i.d. Further, we consider that a new update always preempts an update under service. Note that the above model also holds for a system where the monitor merely indicates to the source that an update was received (for instance, by an ACK), and the source then decides itself when to generate the next update. In the sequel we also refer to the mean of the service time and to the minimum value in the support of the service-time distribution.
Each request and its corresponding update are indexed in order of generation. At any time, the monitor aims to have the freshest update, which depends on the time instants at which the monitor requests new information. A scheduling policy for information requests specifies these time instants, i.e., the generation times of the requests (and thus also of the updates). Using the convention that the first request is sent at time zero, the waiting time between two consecutive requests is the difference of their generation times, and a scheduling policy can equivalently be written as a sequence of waiting times. In the following we describe the policies of interest.
Work-conserving policy: for every request, the next request is generated either when the current update is received or when a preemption threshold expires, whichever occurs first; the threshold may take any positive value, including infinity. Under such a policy a request is sent immediately after an update is received, and thus no server idle time is allowed.
Threshold policy: a work-conserving policy in which every preemption threshold is finite.
Fixed-threshold policy: a threshold policy that uses the same finite threshold for every request.
Minimum-threshold policy: the fixed-threshold policy whose threshold equals the minimum value in the support of the service-time distribution.
Zero-wait policy: a request is sent immediately after an update is received and no preemptions are allowed; equivalently, every threshold is infinite. We note that the zero-wait policy is the only non-preemptive work-conserving policy.
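To make the policy classes concrete, the following sketch (our illustration, not the paper's code) simulates the work-conserving fixed-threshold policy: after each reception a request is sent immediately, and an update is preempted and dropped once it has been in service for the threshold duration. The function returns the empirical average PAoI, where each peak is the previous reset value (the system delay of the last received update) plus the time until the next reception.

```python
import random

def average_paoi_fixed_threshold(tau, sample_service, n_peaks=200_000, seed=0):
    """Monte Carlo estimate of the average PAoI under the fixed-threshold
    policy with threshold tau; sample_service(rng) draws one service time."""
    rng = random.Random(seed)
    total, delay, elapsed = 0.0, 0.0, 0.0
    for _ in range(n_peaks):
        while True:
            s = sample_service(rng)
            if s <= tau:                         # update completes: a reception
                total += delay + elapsed + s     # AoI peak just before the drop
                delay, elapsed = s, 0.0          # AoI resets to the system delay
                break
            elapsed += tau                       # preempt at the threshold; request anew
    return total / n_peaks

# Example: exponential service times with unit rate; tau = inf is zero-wait.
est = average_paoi_fixed_threshold(1.0, lambda r: r.expovariate(1.0))
```

With exponential unit-rate service times, the zero-wait limit (infinite threshold) should concentrate near twice the mean service time, since each peak is then the sum of two consecutive service times.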
Let the reception time of each update at the monitor be defined, with the convention that it is infinite if the update is dropped due to preemption. We have
In this system, the AoI at the monitor at any time is given by
The AoI increases linearly in time and drops instantaneously when an update is received. Each AoI peak is the value attained just before a reception, and the corresponding PAoI value is that peak. Between two successively received updates there could be multiple updates that are preempted; a peak therefore equals the AoI just before the update that ends it is received. We illustrate the above-defined quantities in Figure 2, where we present a sample path of the AoI under service preemptions, using the convention that a packet is received at time zero and the initial AoI is zero.
Under a given policy, the average PAoI is defined as
where the expectation above is taken with respect to the probability distribution determined by the policy and the distribution of the service times. We restrict attention to the set of admissible causal policies for which the limit in (2) exists. We are interested in solving the PAoI minimization problem
We refer to a solution of this problem as an optimal policy and to its value as the minimum average PAoI.
III. Threshold Policies and Auxiliary Results
In this section we define different classes of threshold policies and provide some important auxiliary results that will be used in later parts of the paper. In the following, the causal information available at a request consists of all past decisions and observations up to that request.
A randomized-threshold causal policy specifies, at each request, a probability distribution for choosing the threshold based on the causal information available at that request; this distribution may differ from request to request.
Consider the set of all randomized-threshold causal policies whose thresholds are confined to a closed interval. This constraint is an artefact introduced to bound the MDP costs and facilitate the proof of convergence of the optimal policy to a stationary fixed-threshold policy. It excludes the minimum-threshold policy and the zero-wait policy from the set; note, however, that an optimal policy never chooses a threshold strictly below the lower endpoint, so the constraint effectively only excludes the endpoint cases themselves. Nevertheless, for a given problem, choosing the lower endpoint arbitrarily close to the minimum support value and the upper endpoint sufficiently large makes the imposed constraint only a mild restriction. This is illustrated in Figure 3.
A repetitive randomized-threshold policy is a randomized-threshold causal policy under which the joint distributions for choosing the set of thresholds between any two AoI peaks are identical.
From the above definitions, the policy classes are nested: every fixed-threshold policy is a repetitive randomized-threshold policy, and every repetitive randomized-threshold policy is a randomized-threshold causal policy.
From Figure 2, it is easy to infer that under any policy, each PAoI value decomposes into the system delay of the previously received update plus the time elapsed between the two receptions. The system delay of a received update equals its service time; note, however, that under preemptive policies the service time of a received update does not have the same distribution as a generic service time. The inter-reception time includes any idle time of the server after the previous reception. Since introducing idle time only inflates the PAoI, it is always beneficial to send a request immediately after receiving an update. This implies that an optimal policy belongs to the set of work-conserving policies. Hence, we arrive at the following lemma.
The optimal policy belongs to the set of work-conserving policies.
In the following, we present some auxiliary results that will be used extensively in the proofs of Section IV. We first define deterministic-repetitive-threshold policies and compute the average PAoI for this class of policies.
A deterministic-repetitive-threshold policy uses the same sequence of deterministic thresholds between any two AoI peaks.
A deterministic-repetitive-threshold policy repeats a fixed sequence of deterministic thresholds between any two AoI peaks. In the following lemma we characterize the service times of the received updates and the inter-reception times under such a policy.
For a deterministic-repetitive-threshold policy, the service times of the received updates are i.i.d. and the inter-reception times are i.i.d., with means given by
The proof is given in Appendix -A. ∎
Using the result in Lemma 2, we compute the average PAoI under a fixed-threshold policy.
For a fixed-threshold policy, the average PAoI is given by
The proof is given in Appendix -B. ∎
For a given service-time distribution, the average PAoIs achieved by the minimum-threshold policy and the zero-wait policy are given by
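The displayed formulas above were lost in this extraction, so the snippet below states a plausible closed form implied by the model; it is our reconstruction, not necessarily the paper's expression. Writing p for the probability that a service time falls within the threshold tau, each AoI peak is the system delay of the previous received update, plus a geometric number of wasted threshold periods, plus the service time of the received update, giving average PAoI (tau*(1-p) + 2*E[S 1{S<=tau}])/p; letting tau grow recovers twice the mean service time, the zero-wait value.

```python
import math

def paoi_fixed_threshold(tau, cdf, partial_mean):
    """Plausible closed-form average PAoI under threshold tau (our
    reconstruction): cdf(t) = P(S <= t), partial_mean(t) = E[S 1{S<=t}]."""
    p = cdf(tau)
    return (tau * (1.0 - p) + 2.0 * partial_mean(tau)) / p

# Worked example: exponential service times with unit rate.
cdf = lambda t: 1.0 - math.exp(-t)
pmean = lambda t: 1.0 - math.exp(-t) * (1.0 + t)   # E[S 1{S<=t}] for Exp(1)
paoi_fixed_threshold(1.0, cdf, pmean)   # ≈ 1.418, below the zero-wait value 2
```

A simple sanity check on this reconstruction is that a very large threshold reproduces the zero-wait average PAoI of twice the mean service time.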
IV. Minimum Achievable Average PAoI
In this section we first present a fixed-threshold policy that is optimal among all causal randomized policies. Next, in any single-source-single-server queuing system, we present the optimal policy among all work-conserving policies and provide an expression for the minimum average PAoI.
Given the distribution of service times, there exists a fixed-threshold policy that is optimal among all randomized-threshold causal policies, with the optimal fixed threshold given by
The proof of the theorem proceeds in two steps. First, we formulate an infinite-horizon average-cost MDP problem equivalent to the original problem and show that an optimal policy is repetitive. Next, we consider the decision process between two successive updates and show that the optimal policy is independent of past decisions. Further, we prove that a fixed threshold minimizes the average PAoI. The details are provided in Appendix -C. ∎
Consider a single-source-single-server queuing system with a given service-time distribution, having any arrival process and any service policy, e.g., FCFS/LCFS, preemptions/no preemptions, packet drops/no drops, etc. By the definition of AoI, it is easy to argue that the minimum average PAoI in this system is at least the minimum average PAoI in our system with the generate-at-will source model, no queueing, and service preemptions. Now, as illustrated in Figure 3, for a given problem, by choosing the threshold interval's lower endpoint arbitrarily close to the minimum support value and its upper endpoint sufficiently large, the constrained policy set can closely approximate the set of work-conserving policies. Therefore, from Theorem 1 and Lemma 1, it immediately follows that the resulting value is the minimum achievable PAoI. Now, using Corollary 2, we arrive at the following result on minimum achievability.
In any single-source-single-server queuing system with i.i.d. service times drawn from a given distribution, the minimum achievable average PAoI is given by
and thus the optimal policy is the zero-wait policy, the minimum-threshold policy, or the optimal fixed-threshold policy, whichever achieves this minimum.
V. When Are Preemptions Beneficial?
In this section we study the conditions under which preemptions are beneficial, i.e., under which allowing preemptions results in a strictly lower average PAoI. From Theorem 2, a necessary and sufficient condition for preemptions to be beneficial is as follows:
In the following we consider an example distribution and obtain the condition under which preemptions are beneficial.
Case Study: Consider a random service time that takes a smaller value with some probability and a larger value with the complementary probability. Note that here the minimum of the support is the smaller value. The distribution can be written as follows:
where the Dirac delta function and the unit-step function are used. Note that for this distribution, choosing a threshold strictly between the two support values, or strictly above the larger one, does not reduce the average PAoI. Therefore, we only need to compare thresholds equal to the two support values.
From the last step above we conclude that the smaller support value is the optimal threshold. This implies that, under preemptive policies, whenever an update is not received within that duration, it is optimal to send a new request immediately thereafter.
We use (11) to check whether preemptions are beneficial. Preemptions are beneficial if and only if the optimal preemptive average PAoI is strictly smaller than that of the zero-wait policy, which implies
The condition in (12) establishes a lower bound on the larger support value for preemptions to be beneficial: given the smaller value and its probability, preemptions help only if the larger value exceeds this bound.
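Since the paper's specific numbers for this case study are elided in the extraction, the sketch below uses hypothetical values: service time A = 1 with probability Q = 0.8, and B = 10 otherwise. Under the plausible closed form (tau*(1-p) + 2*E[S 1{S<=tau}])/p (our reconstruction, stated here as an assumption), preempting at the smaller value A gives average PAoI A*(1+Q)/Q, while the zero-wait policy gives twice the mean, 2*(Q*A + (1-Q)*B); comparing the two recovers a lower-bound condition on B of the type in (12).

```python
# Hypothetical two-point service time: value A w.p. Q, value B otherwise.
A, B, Q = 1.0, 10.0, 0.8   # illustrative numbers; the paper's values are elided

def paoi_threshold_at_a():
    # Preempt just after the short value A: each failed attempt wastes A,
    # success probability is Q, and the received service time is always A.
    return A * (1.0 + Q) / Q

def paoi_zero_wait():
    # Zero-wait average PAoI is twice the mean service time.
    return 2.0 * (Q * A + (1.0 - Q) * B)

beneficial = paoi_threshold_at_a() < paoi_zero_wait()   # True for these numbers
```

For these illustrative values the preemptive policy yields 2.25 versus 5.6 for zero-wait, so preemptions are beneficial, consistent with the lower-bound interpretation of (12).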
Note that the service-time distribution in the above example is simple enough to compute the optimal threshold analytically and use (11) to infer whether preemptions are beneficial. In general, this is not straightforward for an arbitrary service-time distribution. In the following lemma we provide a sufficient condition that can be used to infer whether preemptions are beneficial for a given class of distributions.
For any single-source-single-server queueing system, a sufficient condition for preemptions to be beneficial for minimizing average PAoI is as follows:
VI. Related Work
Most works in the AoI literature that consider service preemptions focus on analysing the average AoI and average PAoI for different queueing systems, e.g., see [15, 16, 17, 18, 19, 20]. In contrast, a related line of work studied the problem of whether or not to preempt the update currently in service in an M/GI/1/1 system with the objective of minimizing the average AoI, and established conditions under which the two extreme policies, always-preempt and never-preempt, are optimal among stationary randomized policies.
A contemporary work studied the same system model as ours but considered the problem of minimizing the average AoI. In the following we first summarise their results and then contrast our contributions with theirs. Considering a fixed-threshold policy for preemptions, the authors first solve for an optimal waiting time (the idle time of the server after an update is received; idling the server does not reduce the average PAoI but may reduce the average AoI). Stating that it is hard to obtain a closed-form expression for the average AoI in terms of the fixed threshold and its corresponding optimal waiting time, the authors compute the optimal fixed threshold numerically for two service-time distributions, namely exponential and shifted exponential. It was not shown that the proposed method results in a globally optimal solution for general service-time distributions. In our work, we consider the average PAoI minimization problem and derive a fixed-threshold policy that is optimal in the set of randomized-threshold causal policies; this result provides a justification for the choice of fixed-threshold policies in that work. Furthermore, using the optimal fixed-threshold, zero-wait, and minimum-threshold policies, we characterize the minimum achievable average PAoI.
In a seminal work on restarting algorithms, the problem of finding optimal thresholds for restarting the execution of an algorithm with random runtime was studied. For discrete service-time distributions, an optimal fixed-threshold policy minimizing the expected runtime was provided, considering the set of stationary randomized policies. Compared to that problem, minimizing the expected PAoI is harder, as consecutive AoI peaks are not independent even under a stationary policy. Furthermore, we prove a more general result, since we consider the set of randomized causal policies and continuous service-time distributions.
VII. Numerical Analysis
In this section, we compute the optimal fixed threshold for the Erlang and Pareto service-time distributions. We consider the Pareto distribution to illustrate the effectiveness of preemptions for heavy-tailed distributions, while the Erlang distribution is chosen because it models a tandem of exponential (memoryless) servers. We compare the average PAoI achieved by the zero-wait policy, the optimal fixed-threshold policy, and a median-threshold policy that uses the median of the service-time distribution as the fixed threshold. We study the median-threshold policy because it can be useful in cases where the distribution of the service times is not known a priori but the median can be estimated. Further, unlike the mean, the median is always finite and can be estimated from samples.
VII-A Erlang Service-Time Distribution
The Erlang distribution is characterized by a shape parameter and a rate parameter. In Figure 4, we plot the average PAoI, computed using Corollary 1, as the threshold is varied. The minimum values are indicated by the points in magenta. Recall that the Erlang distribution with shape parameter one reduces to an exponential distribution. For this case, from Figure 4 we observe that the average PAoI is concave in the threshold, and the optimal threshold approaches zero; the optimal policy therefore always chooses threshold zero. In contrast, for shape parameters greater than one, the average PAoI is convex in the threshold and an interior optimal threshold is obtained. We have observed a similar change in behaviour with different parameter values in the case of the log-normal distribution, but it is not presented here due to space limitations. In Figure 5, we compare the average PAoI achieved by the different policies. We observe that in general the zero-wait policy has average PAoI close to the optimum. This is because the sufficient condition of Lemma 3 is not satisfied by the Erlang distribution for any parameter values, and thus allowing preemptions does not significantly reduce the average PAoI. The average PAoI under the median-threshold policy is relatively higher and also diverges from both the zero-wait and optimal policies as the shape parameter increases, suggesting that using preemptions with an arbitrary threshold can in fact penalize the average PAoI. Thus, it is important to first verify whether preemptions are beneficial for a given service-time distribution. The conditions provided in (11) and Lemma 3 are potentially useful toward this end.
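Such curves can be reproduced with a short sketch, again under the reconstructed closed form stated here as an assumption: average PAoI = (tau*(1-p) + 2*E[S 1{S<=tau}])/p with p the CDF at tau, and, for Erlang(k, lam), the partial mean computed via the identity E[S 1{S<=tau}] = (k/lam)*F_{k+1}(tau). For shape one (exponential) the curve increases with the threshold, pushing the optimum toward zero, while for shapes of two or more an interior minimum appears.

```python
import math

def erlang_cdf(k, lam, x):
    """CDF of the Erlang(k, lam) distribution at x >= 0."""
    return 1.0 - math.exp(-lam * x) * sum((lam * x) ** n / math.factorial(n)
                                          for n in range(k))

def erlang_paoi(k, lam, tau):
    """Reconstructed average PAoI under fixed threshold tau (assumption:
    PAoI = (tau*(1-p) + 2*E[S 1{S<=tau}]) / p, using
    E[S 1{S<=tau}] = (k/lam) * F_{k+1}(tau))."""
    p = erlang_cdf(k, lam, tau)
    partial_mean = (k / lam) * erlang_cdf(k + 1, lam, tau)
    return (tau * (1.0 - p) + 2.0 * partial_mean) / p
```

For shape 3 and unit rate, a coarse grid search under this reconstruction puts the minimum near threshold 4 with average PAoI about 5.7, a modest gain over the zero-wait value of 6 (twice the mean), consistent with the observation above that preemptions help the Erlang family only slightly.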
VII-B Pareto Service-Time Distribution
The Pareto distribution is characterized by a scale parameter and a tail index; the smaller the tail index, the heavier the tail. In Figure 6, we plot the average PAoI as the threshold is varied. The minimum values are indicated by the points in magenta. Observe that in this case the average PAoI is convex in the threshold for each tail index considered, and an interior optimal threshold is obtained. In Figure 7, we compare the average PAoI achieved by the different policies. Observe that for higher tail-index values the optimal policy coincides with the zero-wait policy because the distribution has a light tail. For tail indices at most one, the distribution has a heavy tail and infinite mean, and the zero-wait policy therefore attains an infinite average PAoI. In contrast, the optimal policy achieves finite average PAoI values in this case, which illustrates the effectiveness of preemptions for heavy-tailed distributions. Furthermore, the median-threshold policy performs consistently well compared with the optimal policy and is thus an attractive choice when the parameters are not known a priori but an estimate of the median is available.
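A similar sketch for Pareto service times, again under the reconstructed closed form stated as an assumption (average PAoI = (tau*(1-p) + 2*E[S 1{S<=tau}])/p), shows that a finite threshold yields a finite average PAoI even when the tail index is one and the mean, hence the zero-wait PAoI, is infinite. Writing xm for the scale and alpha for the tail index (our notation), it also lets us compare the median threshold xm*2**(1/alpha) against a grid-searched optimum.

```python
import math

def pareto_paoi(xm, alpha, tau):
    """Reconstructed average PAoI under fixed threshold tau >= xm for
    Pareto(scale=xm, tail index=alpha) service times (assumption:
    PAoI = (tau*(1-p) + 2*E[S 1{S<=tau}]) / p)."""
    p = 1.0 - (xm / tau) ** alpha                  # CDF at tau
    if alpha == 1.0:
        partial_mean = xm * math.log(tau / xm)     # E[S 1{S<=tau}] for alpha = 1
    else:
        partial_mean = alpha / (alpha - 1.0) * (xm - xm ** alpha * tau ** (1.0 - alpha))
    return (tau * (1.0 - p) + 2.0 * partial_mean) / p

# Tail index 1: infinite mean, so the zero-wait PAoI is infinite,
# yet a finite threshold keeps the average PAoI finite.
grid = [1.0 + 0.01 * i for i in range(5, 500)]
best = min(grid, key=lambda t: pareto_paoi(1.0, 1.0, t))
median = 1.0 * 2.0 ** (1.0 / 1.0)   # Pareto median: xm * 2**(1/alpha)
```

Under this reconstruction the grid-searched optimum for unit scale and tail index one sits near a threshold of 2.4 with average PAoI about 4.72, and the median threshold of 2 comes within a few percent of it, consistent with the observation above that the median-threshold policy performs well for heavy tails.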
VIII. Conclusion
In this work we have studied the problem of finding the minimum achievable average PAoI for a given service-time distribution. To this end, we have considered the generate-at-will source model and service preemptions. Using an MDP formulation, we have shown that a fixed-threshold policy achieves the minimum average PAoI in the set of randomized-threshold causal policies. The minimum achievable average PAoI in any single-source-single-server queuing system is then given by the minimum average PAoI achieved among the zero-wait, minimum-threshold, and optimal fixed-threshold policies. Using the fact that the zero-wait policy is optimal among all non-preemptive policies, we establish necessary and sufficient conditions on the service-time distribution under which preemptions result in a lower average PAoI. In the numerical analysis, we have used the Pareto service-time distribution to illustrate the effectiveness of preemptions for heavy-tailed distributions.
We leave a numerical analysis of the average PAoI for a wide range of service-time distributions for future work. We also plan to study minimum achievability for other functions of the AoI, including the average AoI.
-A Proof of Lemma 2
We first analyse the service time of a received update and the corresponding inter-reception time. Note that at each reception instant the next request is sent and its update is generated by the source and sent to the server; the policy repeats the same sequence of thresholds between any two peaks. If the service time of the new update does not exceed its threshold, the update is received successfully. Otherwise, the update is preempted by the next request, and the same argument is repeated with the subsequent service time and threshold. Using this analysis, we characterize the service time of a received update in terms of the service times of the intermediate updates and the corresponding thresholds.
Note that the above characterization holds at every reception, since the policy is a deterministic-repetitive-threshold policy. Since the service times are i.i.d., we infer that the service times of the received updates are also i.i.d. In the following we write this characterization using indicator functions.
Taking expectations on both sides and noting that the service times are i.i.d., we arrive at (4).
To analyse the inter-reception time, we start with the request that is sent at the reception instant and compare its service time with its threshold. Using a similar analysis as above, we characterize the inter-reception time as follows.
Again, taking expectations on both sides and noting that the service times are i.i.d., we arrive at (5). Further, since the policy is a deterministic-repetitive-threshold policy and the service times are i.i.d., we infer that the inter-reception times are i.i.d.
Since the service times of the received updates are i.i.d., the inter-reception times are i.i.d., and each AoI peak is their sum, we conclude that the AoI peaks are identically distributed with the stated mean. Therefore,
-B Proof of Corollary 1
Substituting the same fixed threshold for every request in (4), we obtain
In the steps above we have used the definition of the service-time distribution and the sum of an infinite geometric series.
Similarly, substituting the same fixed threshold for every request in (5), we obtain
From the steps above we infer that
-C Proof of Theorem 1
In this proof, we use shorthand notation for sequences and for the N-fold Cartesian product of a set. The causal information available to the scheduler at a given request after a given update comprises the sequence of threshold values chosen since that update together with the information state at the update itself. Recall that a randomized-threshold causal policy specifies a sequence of causal sub-policies, one per update, where each sub-policy specifies the conditional distributions of the thresholds at each request between two successive updates; each such conditional distribution is a valid probability distribution function over the allowed threshold interval.
We now solve the problem in two steps. First, we formulate an infinite-horizon average-cost MDP whose decision epochs are the times at which updates are received. In the next step, we consider the decision epochs to be the times at which requests are sent between any two successive updates.
The identified infinite-horizon average-cost MDP, equivalent to the original problem, has the following elements:
State: the service time of the most recently received update;
Action: the sequence of conditional distribution functions for the thresholds between two successive updates;
Cost function: the expected PAoI given the state and the action,
where one term accounts for the time lost due to preemptions.
Here, using the result from Lemma 2, we obtain
where the two quantities above are deterministic functions of the action. Therefore, we can express the cost function as
Now, the problem, in the domain of policies over update epochs, is equivalent to the infinite-horizon average-cost problem given by
where the minimization is over the policies defined above. Note that for a given policy the average cost is well defined, because the limit in (2) exists for all admissible policies. Let the minimum expected cumulative cost over a finite horizon be defined; the optimal finite-horizon solution can then be obtained using the backward recursion of stochastic Bellman dynamic programming, given by
where the value function denotes the optimal expected cumulative cost-to-go. Since there is no cost after the finite horizon, we initialize the recursion with zero terminal cost. Thus, for the final decision epoch, we have
where the additive term is a constant independent of the state. Similarly, for the preceding epoch,
Here, the additive term is again a constant, and the optimal sub-policy is independent of the state. Now, proceeding by induction, we assume that the optimal sub-policy at an intermediate epoch has the same fixed form and that the value function has the same structure as in (19), given by
where the additive terms are constants. Next, for the preceding epoch, we get