In recent years, real-time applications such as ad bidding, stock forecasting, weather monitoring, and social networks have become a focus of attention. These applications place stringent requirements on the freshness of status information for making accurate decisions. The freshness of data can be measured by the age-of-information (AoI) [9, 10], defined as the time elapsed since the most recently delivered update was generated.
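As an illustration of this definition (a hedged sketch of our own, not from the paper), the instantaneous age at time t is t minus the generation time of the most recently delivered update, and the average AoI is the time-average of the resulting sawtooth process:

```python
def average_aoi(gen_times, recv_times, horizon):
    """Average AoI over [0, horizon] for updates delivered in FCFS order.

    gen_times[i]  -- time the i-th update was generated
    recv_times[i] -- time the i-th update was delivered (recv >= gen)
    The age is assumed to start at 0 at t = 0.
    """
    area, t_prev, u_prev = 0.0, 0.0, 0.0
    for t_gen, t_recv in zip(gen_times, recv_times):
        if t_recv > horizon:
            break
        dt = t_recv - t_prev
        # age grows linearly from (t_prev - u_prev); integrate the trapezoid
        area += dt * (t_prev - u_prev) + 0.5 * dt * dt
        t_prev, u_prev = t_recv, t_gen
    dt = horizon - t_prev
    area += dt * (t_prev - u_prev) + 0.5 * dt * dt
    return area / horizon

# Updates generated at t = 0, 1 and delivered at t = 2, 3; over [0, 4]
# the age traces the familiar sawtooth, and the time average is 1.75.
print(average_aoi([0.0, 1.0], [2.0, 3.0], 4.0))  # -> 1.75
```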
AoI has attracted considerable attention in academia. AoI was first proposed in [9, 10] as a metric of information freshness at the target node. In , the authors obtained general results for a wide range of service systems in which update messages are served under the first-come-first-served (FCFS) discipline, and specifically considered , and standard queuing models. In , status updating from multiple sources was analyzed. In  and , minimizing AoI for multi-hop wireless networks with interference-free networks and general interference constraints was considered, respectively. The above references focus on update messages stochastically generated at the source, so a message has to wait in the queue when the server is busy, and the AoI may increase due to the queuing delay. A just-in-time policy was proposed in  to solve this problem by exploiting knowledge of the system state, for example, by generating messages only when the server is idle. This policy is also called the zero-wait policy  or the work-conserving policy . The authors in  also noticed that there can be better policies than the zero-wait policy in many scenarios. In , the peak age was taken as a new measure of information freshness because of its analytical convenience. Recently, some researchers have been devoted to developing new tools for AoI analysis in networks. In particular, ref.  explicitly calculated the average age over a multi-hop network of preemptive servers by using a stochastic hybrid system (SHS). And in , the authors applied the SHS to analyze the temporal convergence of higher-order AoI moments, and enabled the moment generating function to characterize the stationary distribution of an AoI process in multi-hop networks.
The above papers only pay attention to the influence of data transmission and queuing on AoI. However, the impact of data processing on AoI is non-negligible in some real-time applications. Take autonomous driving as an example: when a status update is an image, it must not only be transmitted to the controller but also be processed to extract the embedded status information. Unfortunately, subject to the limited computational capacity of the local processor, data processing can be computationally expensive and time consuming. Mobile edge computing (MEC) is a promising technique for reducing the AoI of such computation-intensive messages, since it can provide abundant cloud-like computing resources via MEC servers deployed at the network edge, such as access points and cellular base stations, and can cut down the response time compared with the centralized cloud [14, 15]. Motivated by this, we consider introducing MEC to process computation-intensive messages.
In MEC systems, computing tasks can be offloaded to an MEC server. As the MEC server usually has ample computing capacity and is close to mobile users, offloading can greatly save the user's energy and reduce the computing time. For computation offloading, it is crucial to determine whether or not to offload, and what and how much should be offloaded . The offloading decision is influenced by a number of parameters, such as the QoS requirements to be met (e.g., AoI minimization or energy efficiency maximization), the capacities of the processing nodes, and the availability of radio resources for wireless packet transmission. There are two widely used computation task models, namely binary computation offloading and partial computation offloading [14, 15]. For binary offloading, the task is inseparable because of its tight integration or relative simplicity, as in speech recognition and natural language translation; the whole computation task is thus performed either locally by the mobile user or offloaded to the MEC server. For partial offloading, the computation task can be divided into more than one part: some parts are computed locally by the mobile user and the rest are transmitted to and computed at the MEC server. Applications suitable for partial offloading consist of multiple fine-grained processes/components, such as augmented reality and face detection. If the composable components of the task are independent, the computing process can be executed locally and remotely in parallel. In the literature, many works have investigated computation offloading in MEC systems. Minimizing the execution delay is studied in [13, 16, 35, 19]. Under execution delay constraints, [21, 28, 33, 27] minimized the energy consumption and  maximized the system scalability. A balance between execution delay and energy consumption for computation offloading was considered in [17, 4, 6]. In practical applications, however, completely parallel execution over the task-input bits may be impractical, since bit-wise correlation hinders arbitrary division into different groups. In this paper, we consider the computation task generated at the information source to be computed in one of three ways: 1) local computing, where the task is computed as a whole at the local processor; 2) remote computing, where the task is computed as a whole at the MEC server; 3) partial computing, where the task is first computed at the local processor and the output of the local processor is then transferred to the MEC server for further computation. Note that in the third method, the size of the output of local processing is smaller than the size of the message generated at the source node.
For computation-intensive messages, apart from the two effects mentioned above, namely the message generation frequency and the delay caused by data transmission and queuing, the data processing delay is also non-negligible for the study of AoI. For example, ref.
considered AoI minimization in two data processing scenarios: complicated initial feature extraction and classification in computer vision, and the optimization of the sampling and updating processes for a physical process sampled by an Internet of Things device. The authors in  considered a general analysis with packet management for the average AoI and average peak AoI in a system with a computation server and a transmission queue. Ref.  put forward new scheduling schemes for the computing and networking phases in vehicular networks by jointly considering computation and information freshness. In , the authors investigated bidirectional timely data exchange between a fog node and a mobile user in a fog computing system. For a resource-constrained edge cloud system, the authors in  considered a greedy traffic scheduling policy to minimize the overall age penalty of multiple users. In , status updating in a cloud computing system was studied under a preemption policy. The authors in  proposed a new performance metric called the age of task (AoT) to evaluate the temporal value of computation tasks, and jointly considered task scheduling, computation offloading and energy consumption. Although the above papers considered data processing, they did not exploit the MEC server's proximity to the source node and its abundant computation resources.
In this paper, we concentrate on the average AoI for computation-intensive messages in an MEC system. We study the AoI performance of three computing strategies: local computing, remote computing and partial computing. To the best of our knowledge, this has not been studied before. The zero-wait policy is applied to all three computing schemes. Specifically, in local computing, a new update message is generated immediately after the computation of the previous one is completed; in remote and partial computing, a new update message is generated once the previous one arrives at the MEC server. In all three schemes, computing follows the FCFS principle. We assume that the transmission time follows an exponential distribution, and consider both exponentially distributed and deterministic computing times with infinite computing queue size. The main contributions of this article are summarized as follows:
We derive the closed-form average AoI for the three computing schemes with exponentially distributed computing time. We find that, by carefully partitioning the computing task, partial computing performs the best among the three schemes, and that it is significantly better than remote computing when the ratio of the transmission rate to the remote computing rate is very small or very large. If the transmission rate is small, local computing performs the same as partial computing.
The average AoI with deterministic computing time is obtained numerically. Simulation results show that, with a large local computing rate, local computing and partial computing perform similarly for both small and large transmission rates, whereas with a small local computing rate, remote computing outperforms local computing.
The influence of the message size, the required number of central processing unit (CPU) cycles, the data rate, and the computing capacity of the MEC server on the average AoI is studied by numerical simulations. It is found that remote computing does not always outperform local computing in terms of average AoI, and we characterize numerically when remote computing should be adopted instead of local computing.
The rest of the paper is organized as follows. In the next section, the system model and the average AoI of the three computing schemes are presented. The analytical results for the average AoI with exponentially distributed and deterministic computing times are discussed in Sections III and IV, respectively. Numerical analysis for exponentially distributed computing time is presented in Section V. The paper is concluded in Section VI.
II System Model and Average AoI
Fig. 1 presents a status monitoring and control system for computation-intensive messages. First, the source generates a status update. Then the local server, the remote server, or both process it. Next, the target node receives the processed result. Throughout this procedure, it is vital to keep the processed status as fresh as possible for accurate control. Details of the three schemes, local computing, remote computing and partial computing, are described in the following parts.
II-A Local computing, remote computing and partial computing model
Since both the MEC server and the user have computing capability, we compare three computing methods in this paper.
II-A1 Local Computing
As depicted in Fig. 1(a), this scheme performs all computation on the computation-intensive data locally before sending the processed information to the target. Specifically, the source first generates a status update message, which then enters the computing queue. After the local server completes the computation, the computing result enters the transmission queue and is transmitted to the destination node through the channel.
II-A2 Remote Computing
In this scheme, the computation-intensive message is transmitted to the MEC server and then computed remotely, as illustrated in Fig. 1(b). Specifically, the status update message generated by the source node is transmitted through the channel and arrives at the computing queue in the MEC server. The MEC server then completes the computation of the status update message under the FCFS principle and sends the result to the destination node.
II-A3 Partial Computing
As shown in Fig. 1(c), the last scheme partially computes the computation-intensive data at the local server, and then sends the intermediate computing result to the MEC server for further computing. Specifically, the local server partially processes the data, and the intermediate result enters the transmission queue. The intermediate result is then transmitted to the remote computing queue to wait for the MEC server to finish the remaining computation, also under the FCFS principle. When the computation is complete, the result is sent to the destination node.
II-B Zero-Wait Message Generation Policy
In this section, we present the three computing schemes with the zero-wait message generation policy , in which a new message is generated immediately after the previous one completes its computing or transmission. Intuitively, the zero-wait policy attains good performance, as waiting in a queue is avoided. Other message generation policies will be considered in future work. The detailed zero-wait policies for the three schemes are given as follows.
II-B1 Local Computing
In local computing, a new status update message is not generated until the previous message is fully computed by the local server. Therefore, the computing queue is always empty and the queuing delay is completely eliminated. Compared with the size of the original message, the size of the computing result to be transmitted is negligible; thus the time to transmit the result to the target can be ignored compared with the computing time. Fig. 2(a) illustrates the evolution of the AoI at the destination node for local computing under FCFS queuing, where  denotes the generation time instant of the -th status update message, and  is the time instant when the -th message finishes local computing. When the computing is finished, the revealed status information is transmitted to the destination node. Therefore, the age drops suddenly at time , and a new message is generated at that instant.
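To make these zero-wait dynamics concrete, the following Monte Carlo sketch (our illustration, assuming an exponential local computing time with rate mu; the names are not the paper's notation) estimates the resulting average age. Each renewal cycle between consecutive completions has length equal to the next service time, so the time-age area per cycle is S_i * S_{i+1} + S_{i+1}^2 / 2:

```python
import random

def zero_wait_local_aoi(mu, n=200_000, seed=1):
    """Estimate the average AoI of zero-wait local computing by summing
    the per-cycle time-age areas and normalizing by the elapsed time."""
    random.seed(seed)
    s = [random.expovariate(mu) for _ in range(n + 1)]
    area = sum(s[i] * s[i + 1] + 0.5 * s[i + 1] ** 2 for i in range(n))
    total_time = sum(s[1:])
    return area / total_time

mu = 2.0
est = zero_wait_local_aoi(mu)
# For an exponential service time this is E[S] + E[S^2] / (2 E[S]) = 2/mu.
print(abs(est - 2.0 / mu) < 0.05)  # -> True
```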
II-B2 Remote Computing
In remote computing, the zero-wait policy means that once the MEC server receives a status update message, it sends an acknowledgement signal to the source node, and a new status update message is immediately generated at the source node and transmitted. Since the acknowledgement signal is much smaller than the status update message, the feedback time is ignored. With the zero-wait policy, the queuing delay for transmission is zero. The delivered status update message waits in a queue at the MEC server and is served under the FCFS principle. Fig. 2(b) shows the evolution of the AoI at the destination. The -th status update message reaches the computing queue at  in Fig. 1(b). In accordance with the zero-wait policy, the -th status update message starts to be transmitted at . Denote  as the time instant at which the MEC server finishes serving the -th status update message.
II-B3 Partial Computing
The zero-wait strategy in partial computing means that the source generates a new message when the intermediate result of the previous one is received by the remote MEC server. Thus, both the local computing queue and the transmission queue are empty. The age evolution of partial computing is shown in Fig. 2(c). The generation time instant of the -th message is denoted by , which is also the time instant when the -th message arrives at the remote computing queue. Denote  as the time instant when the local server finishes computing its part of the -th message, and  as the computing completion time instant of the -th message at the MEC server, at which the age drops sharply.
II-C Average AoI
Notice that local computing and remote computing can be viewed as special cases of partial computing. In particular, local computing corresponds to partial computing with zero transmission time and zero remote computing time (or, equivalently, infinite transmission rate and infinite remote computing rate), and remote computing corresponds to the special case with zero local computing time (infinite local computing rate). Thus, we first calculate the average AoI for partial computing; the result can then be easily specialized to local computing and remote computing.
II-C1 Partial Computing
At time , the time-stamp  denotes the generation time of the most recently processed message, and the following random process defines the AoI of the processed status at the target node.
Fig. 2(c) illustrates the evolution of  under the FCFS principle. At , the queue is empty with . The average age of the processed status message is the area between the curve of  and the -axis in Fig. 2(c), normalized by the length of the observation interval. The average AoI in the interval  is
Let the length of the observation interval be . The average AoI in partial computing can then be represented as
From Fig. 2(c), we know that  is a polygon and  is an isosceles trapezoid, which can be computed from two isosceles triangles, i.e.,
where  represents the inter-generation time between the -th message and the -th one at the source node. It is also the time spent on local computing and transmission of the -th message, i.e., , where  refers to the service time of the -th message at the local server and  denotes its service time in the channel. Denote  as the elapsed time from the instant the -th status update message arrives at the remote computing queue to the instant its service terminates at the MEC server. A new representation of the average AoI in partial computing can then be derived as
where . Note that the contribution of  to the average AoI is negligible, since as , the first term in (5) divided by  tends to zero. From Fig. 2(c), we know that , and then . Thus, the inverse of the fraction to the left of the second term in (5), , can be viewed as the total service time of the local server and the transmission channel. Thus, the following equation is obtained
where is the local service rate and is the transmission rate.
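The age process described above can be checked with a short Monte Carlo sketch (our illustration; the function and rate names are assumptions, not the paper's notation). Message i is generated when message i-1 arrives at the remote computing queue, the first hop takes the local computing time plus the transmission time, and the MEC server serves arrivals FCFS:

```python
import random

def partial_aoi(mu1, lam, mu2, n=200_000, seed=7):
    """Estimate the average AoI of partial computing under zero-wait
    generation. mu1 = local computing rate, lam = transmission rate,
    mu2 = MEC computing rate (all exponential, purely for illustration).
    Stability requires 1/mu2 < 1/mu1 + 1/lam."""
    random.seed(seed)
    area = 0.0
    gen = 0.0                 # generation time of the message in flight
    dep = 0.0                 # MEC departure time of the previous message
    last_dep, last_gen = 0.0, 0.0
    for _ in range(n):
        first_hop = random.expovariate(mu1) + random.expovariate(lam)
        arr = gen + first_hop             # arrival at the remote queue
        dep = max(dep, arr) + random.expovariate(mu2)  # FCFS service
        dt = dep - last_dep
        # between deliveries the age grows linearly from last_dep - last_gen
        area += dt * (last_dep - last_gen) + 0.5 * dt * dt
        last_dep, last_gen = dep, gen
        gen = arr   # zero-wait: next message generated on arrival at MEC
    return area / last_dep

# Degenerate check: with (numerically) infinite local and MEC rates the
# model collapses to zero-wait transmission only, whose average AoI is
# 2/lam for an exponential transmission time.
print(abs(partial_aoi(1e12, 2.0, 1e12) - 1.0) < 0.05)  # -> True
```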
II-C2 Local Computing
Compared with partial computing, both the transmission time and the remote computing time are zero in local computing, i.e., , . This is equivalent to an infinite transmission rate and an infinite remote computing rate in the partial computing model, i.e., , . Therefore, the average AoI in local computing can be obtained as follows
II-C3 Remote Computing
Compared with partial computing, the local computing time in remote computing is zero, i.e., . This is equivalent to an infinite local computing rate in the partial computing model, i.e., . Hence, the average AoI in remote computing can be obtained as follows
Note that for a tandem queue, if either queue is unstable, the whole system is unstable and the AoI grows to infinity. In this paper, since we adopt the zero-wait policy in the first hop, the stability issue of the tandem queue only exists in the second hop. Therefore, as long as the service rate of the first hop is less than that of the second hop, the tandem queue is stable. The stability conditions for the three cases are as follows. For local computing, the queue is stable due to the infinitely large transmission rate. For remote computing, if the remote computing rate is equal to or less than the transmission rate, the tandem queue is unstable. For partial computing, if the remote computing rate is equal to or less than , the whole system is unstable.
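These conditions can be summarized in a small helper (a hedged sketch; the rate names are ours, not the paper's notation): the second-hop service rate must exceed the effective rate of the zero-wait first hop.

```python
def is_stable(scheme, mu_local=None, mu_tx=None, mu_mec=None):
    """Stability check for the three schemes under zero-wait generation."""
    if scheme == "local":
        return True  # the zero-wait computing queue never backs up
    if scheme == "remote":
        # arrivals at the MEC are paced by the channel: need mu_mec > mu_tx
        return mu_mec > mu_tx
    if scheme == "partial":
        # first-hop mean time = 1/mu_local + 1/mu_tx (local, then channel)
        first_hop_rate = 1.0 / (1.0 / mu_local + 1.0 / mu_tx)
        return mu_mec > first_hop_rate
    raise ValueError(scheme)

print(is_stable("remote", mu_tx=2.0, mu_mec=1.0))                 # -> False
print(is_stable("partial", mu_local=3.0, mu_tx=3.0, mu_mec=2.0))  # -> True
```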
The calculation of equation (11) depends on the distributions of the transmission time and the computing time. In this paper, we assume an exponentially distributed transmission time to model a purely random transmission process, and derive results for two computing time distributions: the exponential distribution  and the deterministic distribution. The main results are detailed in the following sections. Note that the main results are obtained under the condition that the whole system is stable.
III Average AoI with Exponentially Distributed Computing Time
In this section, we derive the average age with exponentially distributed computing time. The exponential distribution is widely used to model random events in practice and yields closed-form analytical results in most cases. For example, face-recognition-based security treats the identification of a user as a random event, and users' arrival processes can usually be viewed as Poisson processes. Closed-form average AoIs for the three computing schemes are presented below.
III-A Local Computing
The average AoI in local computing with exponentially distributed computing time is given below.
If the local computing time is exponentially distributed, the average AoI of local computing (9) is expressed as
where is the computing rate of the local server.
See Appendix A. ∎
III-B Remote Computing
We now obtain the closed-form expression of the average AoI for remote computing under the zero-wait policy, where both the transmission time of the channel and the computing time at the MEC server are exponentially distributed.
Assume both the transmission time and the remote computing time are exponentially distributed. The average AoI in remote computing (11) is expressed as
where denotes the transmission rate and is the computing rate of the MEC server.
See Appendix B. ∎
III-C Partial Computing
Now we derive the closed-form average AoI for partial computing under the zero-wait policy, where the computing times of the local server and the MEC server, as well as the transmission time of the channel, are exponentially distributed.
Assume the local computing time, the channel transmission time and the remote computing time are exponentially distributed. The average AoI in partial computing (7) is obtained as
where and . The notation is expressed as
See Appendix C. ∎
Remarkably, the average AoI for both local computing (12) and remote computing (13) can be obtained from (14), since partial computing with infinite transmission rate and infinite remote computing rate reduces to local computing, while partial computing with infinite local computing rate reduces to remote computing. Thus, we have
As shown in Fig. 3, the analytical results of the three computing schemes with exponentially distributed computing time are validated by simulations. Note that both local computing and remote computing share the same settings of , and . For partial computing, we adopt a linear computation partitioning model . In particular, the computing rate of the local server, the transmission rate and the computing rate of the MEC server are denoted by , , , respectively, and  denotes the percentage of the computing task executed by the MEC server, with  in local computing and  in remote computing. In Fig. 3, we set  and denote ; the optimal value of  in partial computing is determined by numerical simulation under the criterion of minimizing the average AoI of partial computing. In this figure, the average AoI of partial computing is smaller than that of local and remote computing, and when  or , partial computing is significantly better than remote computing. This can be explained as follows: when  is small, messages become outdated due to the long transmission time in remote computing, and when  is large, the MEC server computes stale messages since status update messages are backlogged in the computing queue. Partial computing, in contrast, can execute a proper proportion of the computing task locally to mitigate both problems. The performance of local computing is the same as that of partial computing when  is small, because when the transmission rate is sufficiently small, partial computing prefers to compute the messages locally. As  increases, the average AoI of partial computing becomes smaller than that of local computing, especially for a small local computing rate such as .
IV Average AoI with Deterministic Computing Time
In this section, we derive the average AoI with deterministic computing time. In application scenarios where the volume of a computing task is constant and the computing resources allocated to the task are static, the computing time of an update message is deterministic. The analysis of the average AoI for the three computing schemes with deterministic computing time is provided below.
IV-A Local Computing
The average AoI in local computing with deterministic computing time is given in the following theorem.
If the local computing time is deterministic and is equal to , the average AoI for local computing (9) is expressed as
See Appendix D. ∎
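For intuition (our illustrative check, not the paper's derivation), under zero-wait generation with a constant computing time D every age cycle is identical: the age drops to D at each completion and peaks at 2D, so the time average is (D·D + D²/2)/D = 3D/2. A trivial numeric confirmation:

```python
def local_aoi_deterministic(D):
    # One cycle: the age rises from D to 2D over a duration D, so the
    # per-cycle area is D*D + 0.5*D*D and the cycle length is D.
    return (D * D + 0.5 * D * D) / D

print(local_aoi_deterministic(2.0))  # -> 3.0
```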
IV-B Remote Computing
With deterministic computing time, a closed-form expression of the average AoI for remote computing is difficult to obtain, but the result can be calculated numerically based on the following theorem.
Assume the transmission time of the channel is exponentially distributed with mean and the computing time at the MEC server is deterministic and is equal to . The average AoI can be numerically computed as:
See Appendix E. ∎
IV-C Partial Computing
For partial computing, if we view the local computing process and the transmission process as a whole, the remote computing part can be considered as an  queuing model. If , a closed-form expression can be obtained; otherwise, the average AoI can only be calculated numerically. The result is given in the theorem below.
Assume both the local computing time and the remote computing time are deterministic, and the channel transmission time is exponentially distributed with mean . If , the average AoI in partial computing is expressed as
For , the average AoI can be numerically computed as follows:
where  denotes the probability density function, which is the derivative of the CDF
where , and .
See Appendix F. ∎
As shown in Fig. 4, the analytical results of the three computing schemes with deterministic computing time are validated by simulations, with the remote computing rate . When  0.5, local computing and partial computing perform similarly for both small and large . The reason is that when the transmission rate is small, the transmission time is very large, while when the transmission rate is large, the computing queue at the MEC server is long; in both cases, partial computing processes the whole computation-intensive message at the local server. For medium values of , partial computing attains a smaller average AoI than local computing due to the benefit of splitting the computing task between the local and remote servers. The average AoI in remote computing is always larger than that in partial computing, and is even larger than that in local computing when  0.5. This shows that there is no benefit in using remote computing when the local computing capacity is sufficient.
V Numerical Analysis
Different parameters influence the average AoI of computation-intensive messages in MEC systems. In this section, we study, under exponentially distributed computing time, the influence of various parameters, including the message size, the required number of CPU cycles, the average data rate and the computing capacity of the MEC server. Note that we show numerical results only for stable systems. For all status update messages, we adopt an identical parameter pair  to describe the status update message, where  is the input size of the message and  indicates the number of CPU cycles required to compute the original message. The size of the transmitted data and the data rate affect the transmission time. Note that the transmission time refers to the total time for delivering a message over multiple channel uses; due to channel fading, the number of bits that can be transmitted in one channel use is random, so it is reasonable to assume a random transmission time for each message. The required number of CPU cycles and the computing capacity affect the computing time. Let  be the average local computing capacity and  the average available computing capacity of the MEC server, and denote  as the average data rate of the channel. We adopt a linear model to represent their relationships; the service rates , ,  can then be expressed as
Recall that  denotes the percentage of the computing task executed by the MEC server:  corresponds to local computing,  to remote computing, and  represents the partial computing scheme. For partial computing, given the other parameters,  is chosen to minimize the average AoI.
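The linear rate model can be sketched as follows (a hedged illustration: the paper's exact expressions are partly elided here, so the beta split below and all names are our assumptions). The local server handles a (1 - beta) fraction of the c required CPU cycles, the MEC server the remaining beta fraction, and the channel carries a u-bit message at average rate R:

```python
def service_rates(u_bits, c_cycles, R, f_local, f_mec, beta):
    """Return (local, transmission, MEC) service rates under a linear model.

    f_local, f_mec are CPU speeds in cycles/s, R is in bits/s, and beta is
    the fraction of the c_cycles computed at the MEC server.
    """
    mu_local = f_local / ((1.0 - beta) * c_cycles) if beta < 1.0 else float("inf")
    mu_tx = R / u_bits
    mu_mec = f_mec / (beta * c_cycles) if beta > 0.0 else float("inf")
    return mu_local, mu_tx, mu_mec

# beta = 0 recovers local computing and beta = 1 recovers remote computing
# (in the paper, local computing additionally takes the channel and MEC
# rates to infinity).
print(service_rates(1.0, 2.0, 4.0, 6.0, 8.0, 0.5))  # -> (6.0, 4.0, 8.0)
```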
First, set  Mbits/s,  GHz and  GHz. The average AoI versus the message size for different required numbers of CPU cycles is shown in Fig. 5. The AoI of local computing remains stable as the message size increases. For remote computing with c = 3500 Megacycles, the average AoI first decreases sharply and then increases gradually as the message size increases. This is because with a small message size the transmission is fast, so a large number of messages are queued in the computing queue waiting to be computed, while with a large message size the transmission takes longer. When  Megacycles, there is a crossing point between the curves for local computing and remote computing at  Mbits; the crossing point is at  Mbits when  Megacycles. This phenomenon means that whether remote computing is superior to local computing depends on the message size. As the message size decreases, the average AoI of partial computing decreases. By properly partitioning the computing tasks between the local server and the remote server, partial computing always performs better than the other two schemes under the same parameter settings. Moreover, for sufficiently large message sizes, the average AoI of partial computing is the same as that of local computing. This is because a larger message size causes a longer transmission time; to reduce the average AoI, the optimal parameter in partial computing is then , which is equivalent to local computing.
Fig. 6 depicts the average AoI versus the required number of CPU cycles for different message sizes, where  Mbits/s,  GHz and  GHz. The AoI curves of all three schemes rise as the required number of CPU cycles increases, due to the increased computation time. The curves for local computing with different message sizes overlap because the average AoI of local computing is not affected by the message size. In general, remote computing outperforms local computing for a large number of CPU cycles. However, when  Megacycles, the average AoI of remote computing with  Mbits increases sharply and becomes worse than that of local computing. This is because as  increases, the MEC server computes more slowly, which results in a long queuing delay and hence a large AoI. For  Mbits or  Mbits, a similar sharp increase of the average AoI in remote computing occurs at a larger required number of CPU cycles, for the same reason.
In Fig. 7, we set  Megacycles,  GHz and  GHz, and show the AoI performance versus the data rate . First, for remote computing with  Mbits, the average AoI initially decreases with increasing data rate, since a higher data rate leads to a shorter transmission time. However, the average AoI increases when  Mbits/s, because the number of messages queued at the MEC server also grows with the data rate. For remote computing with  Mbits and  Mbits, the average AoI follows the same trend as  Mbits and increases after a certain larger data rate. Second, for partial computing, the average AoI always decreases as the data rate increases, because the transmission time decreases and the congestion of queued messages at the MEC server can be alleviated by completing part of the computing tasks locally. Third, when the data rate is small or large, local computing is superior to remote computing; in these regimes there is no benefit in transmitting to the MEC server to reduce the average AoI of computation-intensive messages. Finally, regardless of the data rate or the message size, partial computing always achieves a smaller average AoI than the other two computing schemes, confirming that partial computing outperforms both local computing and remote computing.
Fig. 8 shows the impact of the computing capacity of the MEC server, with the data rate (in Mbits/s), the local computing capacity (in GHz) and the message size (in Mbits) fixed. When the required number of CPU cycles is small, remote computing performs worse than local computing, because the local server can then complete the computation in a short time. For a large number of required CPU cycles, if the computing capacity of the MEC server is smaller than 2700, the average AoI of remote computing is larger than that of local computing, and vice versa; it eventually converges to the minimum average age. When the required number of CPU cycles is small, the curves for local computing and partial computing overlap, which indicates that local computing performs best. When the required number of CPU cycles is large, the performance of partial computing converges to that of remote computing as the average computing capacity of the MEC server increases; thus, it is better to offload most of the computing task to the MEC server.
We have analyzed the AoI for computation-intensive messages in an MEC system under three computing strategies: local computing, remote computing and partial computing. Two computing time distributions are considered: exponential and deterministic. Closed-form expressions of the average AoI for the three computing schemes with exponentially distributed computing time are derived, and the average AoI with deterministic computing time is obtained numerically. Simulation results show that partial computing achieves a smaller average AoI than the other two computing schemes in most cases, and only in some special cases matches their performance. The influence of various data-processing parameters on the average AoI for exponentially distributed computing time is studied numerically, including the message size, the required number of CPU cycles, the data rate, and the average computing capacity of the MEC server. We find that for computation-intensive data, the combination with MEC significantly helps to obtain the optimal AoI. Future work includes extending the analysis to multiple source-destination pairs and considering other message generation policies.
Appendix A Proof of Theorem 1
Appendix B Proof of Theorem 2
The arrival process of the computing queue is identical to the departure process of the transmission channel, which is a Poisson process due to the zero-wait policy. Thus, the computing queue and the MEC server form an M/M/1 system. Consequently, the inter-arrival times and the service times are i.i.d. exponential. As the inter-arrival time and the service time are independent, we have
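As a sketch of the elided moments (assuming, purely for illustration, that the inter-arrival time $Y$ is exponential with rate $\lambda$ and the service time $S$ is exponential with rate $\mu$; these symbols are not fixed by the surviving text), independence gives

```latex
\mathbb{E}[YS] = \mathbb{E}[Y]\,\mathbb{E}[S] = \frac{1}{\lambda\mu},
\qquad
\mathbb{E}[Y^2] = \frac{2}{\lambda^2}.
```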
We then calculate the remaining expectation in detail. The system time of a status update, in the sense of queuing theory, consists of two parts, the waiting time and the service time, i.e.,
where the first term is the waiting time in the computing queue and the second term is the service time at the MEC server. The waiting time depends on the system time of the previous message and the inter-arrival time. In particular, if the message reaches the computing queue while the previous message is still waiting in the queue or under service, the waiting time equals the residual system time of the previous message; otherwise, it is zero. Thus, we can express the waiting time of a message as
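The waiting-time relation above is the classical Lindley recursion, $W_i = (T_{i-1} - Y_i)^{+}$, and it is easy to check by simulation. The following is a minimal Monte-Carlo sketch (not the paper's code); the rates `lam` and `mu` are illustrative stand-ins for the arrival and service rates of the M/M/1 computing queue:

```python
import random

def avg_aoi_mm1(lam, mu, n=200_000, seed=1):
    """Monte-Carlo estimate of the average AoI of an M/M/1 FCFS queue:
    inter-arrival times ~ Exp(lam), service times ~ Exp(mu), lam < mu."""
    rng = random.Random(seed)
    T_prev = 0.0   # system time of the previous message
    area = 0.0     # accumulated area under the age sawtooth
    elapsed = 0.0  # total inter-arrival time observed
    for _ in range(n):
        Y = rng.expovariate(lam)      # inter-arrival time
        S = rng.expovariate(mu)       # service time
        W = max(0.0, T_prev - Y)      # Lindley recursion for the waiting time
        T = W + S                     # system time of the current message
        area += Y * T + 0.5 * Y * Y   # per-interval trapezoid contribution
        elapsed += Y
        T_prev = T
    return area / elapsed
```

For example, with `lam = 1.0` and `mu = 2.0` the estimate is close to the known M/M/1 FCFS average AoI $(1/\mu)\,(1 + 1/\rho + \rho^2/(1-\rho)) = 1.75$ for $\rho = \lambda/\mu = 0.5$.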
From (34), the term can be written as
Notice that the system time of a message depends on its own service time and waiting time, and hence is independent of the subsequent inter-arrival and service times. Furthermore, the system times become stochastically identical once the system reaches a stable state. The probability density function of the system time for the M/M/1 system is
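For reference, the stationary system time of an M/M/1 queue with arrival rate $\lambda$ and service rate $\mu$ ($\lambda < \mu$; symbols assumed here) is exponentially distributed:

```latex
f_T(t) = (\mu - \lambda)\, e^{-(\mu - \lambda)t}, \qquad t \ge 0 .
```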
The conditional expected waiting time given the inter-arrival time can be derived as
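As a hedged sketch, writing $\lambda$ and $\mu$ for the arrival and service rates of the M/M/1 computing queue (so that the stationary system time $T$ is exponential with rate $\mu - \lambda$), the conditional expectation takes the form

```latex
\mathbb{E}[W \mid Y = y]
= \mathbb{E}\big[(T - y)^{+}\big]
= \int_{y}^{\infty} (t - y)(\mu - \lambda)\, e^{-(\mu - \lambda)t}\, dt
= \frac{e^{-(\mu - \lambda)y}}{\mu - \lambda}.
```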
Appendix C Proof of Theorem 3
Note that the arrival process of the remote computing queue is the same as the departure process of the tandem of the local server and the transmission channel. The computing times at the local and remote servers and the transmission time of the channel are i.i.d. and exponentially distributed. Thus, partial computing can be viewed as a G/M/1 system. To obtain the average AoI, three expectations in equation (7) need to be calculated. Since the local computing time and the transmission time are i.i.d. exponentials, we obtain the following equations
We then calculate the remaining expectation in detail. For a status update, the system time consists of the service time and the waiting time, similar to (34). The waiting time depends on the system time of the previous message and the inter-arrival time. Specifically, if the message arrives at the remote computing queue while the previous message is still waiting in the queue or under service, the waiting time equals the residual system time of the previous message; otherwise, it is zero. Therefore, the waiting time of a message can be written as
From (34), the term can be written as
Note that the system time of a message depends on its own service time and waiting time, and thus is independent of the subsequent inter-arrival and service times. Furthermore, the system times become stochastically identical once the system reaches a stable state. The probability density function of the system time for the G/M/1 system is
where the parameter satisfies the following fixed-point equation
where the transform on the right-hand side is the Laplace-Stieltjes transform of the inter-arrival time.
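For reference, the standard G/M/1 result invoked here can be sketched as follows, writing $\mu$ for the service rate of the MEC server and $\hat{A}(\cdot)$ for the Laplace-Stieltjes transform of the inter-arrival time (symbols assumed): the stationary system time is exponential,

```latex
f_T(t) = \mu(1 - \sigma)\, e^{-\mu(1 - \sigma)t}, \qquad t \ge 0,
\quad\text{where } \sigma \in (0,1) \text{ solves } \sigma = \hat{A}\big(\mu(1 - \sigma)\big).
```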
The probability density function of the inter-arrival time is
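If the inter-arrival time is the sum of two independent exponential stages with distinct rates $\mu_1 \neq \mu_2$ (an assumption consistent with the tandem of the local server and the transmission channel; the symbols $\mu_1, \mu_2$ are introduced here for illustration), its density is the hypoexponential

```latex
f_Y(y) = \frac{\mu_1 \mu_2}{\mu_2 - \mu_1}\left(e^{-\mu_1 y} - e^{-\mu_2 y}\right), \qquad y \ge 0 .
```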
Then, we have
The conditional expected waiting time given the inter-arrival time can be obtained as