Accounting for Information Freshness in Scheduling of Content Caching

10/29/2019 ∙ by Ghafour Ahani, et al.

In this paper, we study the problem of optimal scheduling of content placement along time in a base station with limited cache capacity, taking into account jointly the offloading effect and freshness of information. We model offloading based on popularity in terms of the number of requests and information freshness based on the notion of age of information (AoI). The objective is to reduce the load of backhaul links as well as the AoI of contents in the cache via a joint cost function. For the resulting optimization problem, we prove its hardness via a reduction from the Partition problem. Next, via a mathematical reformulation, we derive a solution approach based on column generation and a tailored rounding mechanism. Finally, we provide performance evaluation results showing that our algorithm provides near-optimal solutions.


I Introduction

Content caching at the network edge is considered an enabler for future wireless networks. The technique strives to mitigate the heavy burden on backhaul links by serving users' contents of interest from the network edge, without the need of going to the core network.

In designing effective caching strategies, previous works have focused on content popularity, whereas another important aspect is information freshness. The popularity of a content is defined as the number of users requesting the content, and it may vary over time [8327582]. Thus, some contents may be added to or removed from the cache as they become popular or unpopular. Freshness of a cached content refers to how recently the content has been obtained from the core network. The longer a content is stored in the cache without an update, the higher the risk that the cached content becomes obsolete. Hence, we would like to refresh the cached contents often, which however leads to higher load on the backhaul. Freshness of contents naturally arises in applications such as news, traffic information, etc., and it may have a great impact on user satisfaction. We model freshness of contents using the notion of age of information (AoI). For content caching, AoI is defined as the amount of time elapsed since the content was last refreshed. In this paper, we use a joint cost function to address the trade-off between the benefit of offloading via caching and AoI.

Works such as [7562037, Cost2018Deng, 6883600, 7414014] took into account only the popularities of contents in designing cache placement strategies. The works in [7562037, Cost2018Deng] considered content caching with known content popularities. The studies in [6883600, 7414014] showed that the popularities of contents can be estimated via learning-based algorithms. However, in the aforementioned works the popularity of a content is time-invariant. In [8357917, Zhang2018Using], caching with time-varying popularity profiles is investigated. In [Zhang2018Using], an algorithm is proposed to estimate the time-varying popularities of contents. The studies in [8000687, Tang2019] considered information freshness but not popularity of contents in their caching problems. Recently, a few works [8006505, 8006506, 8795490] have considered both popularity and freshness of contents. However, these works have the following limitations. In [8006505], the downloading cost of contents from the server is neglected. In [8006506], only one content in the cache can be updated in each time slot. In [8795490], it is assumed that the cache capacity is unlimited.

In this paper, we study optimal scheduling of content caching along time in a base station (BS) with limited storage capacity, taking into account jointly offloading via caching and freshness of contents. The objective is to mitigate the load of backhaul links by minimizing a penalty cost function related to content downloading, content updating, and AoI costs, subject to the cache capacity. The main contributions of this work are summarized as follows:

  • The cache scheduling problem is formulated as an integer linear program (ILP), and the hardness of the problem is proved via a reduction from the Partition problem.

  • Via a problem reformulation, a column generation algorithm (CGA) is developed. We prove that the subproblem of CGA can be converted to a shortest path problem that can be solved in polynomial time. In addition, the CGA provides an effective lower bound (LB) of the global optimum.

  • The solution obtained from CGA may be fractional; thus, a problem-tailored rounding algorithm (RA) is derived to construct integer solutions.

  • Simulations show the effectiveness of our solution approach by comparing the obtained solutions to the LB as well as to conventional algorithms. Our algorithm provides solutions within of the global optimum.

II System Scenario and Problem Formulation

II-A System Scenario

The system scenario consists of a content server, a BS, and a set of users within the coverage of the BS. The server has all the contents, and the BS is equipped with a cache device of capacity . The contents are dynamic, i.e., the information they contain may change over time. Denote by the set of the contents. We assume the server always has the up-to-date version of the contents. Denote by the size of content . Each content is either fully stored or not stored at all at the BS. The system scenario is shown in Figure 1.

Figure 1: System scenario.

We consider a slotted time system of time slots. At the beginning of each time slot, the contents to be stored in the cache need to be determined by an updating/placement action. Namely, some stored contents may be removed from the cache, some contents may be added to the cache, and some contents may be re-downloaded from the server. The freshness of a content may decrease along time. We use AoI to model the freshness of contents. A content that is newly downloaded into the cache has AoI zero, and for each time slot it remains in the cache without re-downloading, its AoI increases by one time slot. Denote by the cost associated with an AoI of time slots for content . A content has AoI time slots when the content has been stored in the cache continuously for time slots without any update.
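For concreteness, the AoI dynamics described above can be captured by the small sketch below; the helper name and its arguments are our own illustration, not notation from the paper.

def next_aoi(aoi, cached, downloaded):
    """Return a content's AoI in the next time slot (illustrative sketch).

    aoi        -- current AoI (None if the content is not in the cache)
    cached     -- whether the content is kept in the cache in the next slot
    downloaded -- whether it is (re-)downloaded from the server for that slot
    """
    if not cached:
        return None     # not in the cache: AoI is undefined
    if downloaded:
        return 0        # a freshly downloaded copy has AoI zero
    return aoi + 1      # one more slot in the cache without an update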

In our model, each user requests at most contents within the time slots, based on its interests. The set of requests for user is denoted by . The downloading process of a content starts as soon as the request is made. The content can be downloaded either from the cache, if the content is in the cache, or otherwise from the server. We assume the time of each request is known or can be predicted using a prediction model, e.g., the one in [Zhang2018Using]. For user and its -th request, the requested content and the time slot of request are denoted by and , respectively.

II-B Cost Model

Denote by a binary optimization variable which equals one if and only if the -th content is stored in time slot . Denote by and the costs for downloading one unit of data from the server and the cache to a user, respectively. We have to encourage downloading from the cache. The downloading cost for user to obtain its -th request, denoted by , is expressed as:

(1)

The downloading cost for completing all requests of all users, denoted by , is .

Denote by a binary variable, , , whether or not content is in the cache and has AoI time slots. The overall AoI cost is expressed as:

(2)

where is the number of users requesting content in time slot . Updating contents in the cache incurs an updating cost. The updating cost, denoted by , is expressed as:

(3)

where means that the content has just been downloaded from the server and incurs cost . Here is the per-unit downloading cost from the server to the cache. Finally, the total cost is denoted by and expressed as:

(4)

Here, is a weighting factor between and . A larger means more frequent updating of the cached contents and consequently smaller AoI for the cached contents.
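To illustrate how the three cost components described in this subsection fit together, the following Python sketch combines them under assumed names (c_server, c_cache, c_update, aoi_cost, w); the exact expressions (1)–(4) and the placement of the weighting factor in the paper may differ.

def total_cost(requests, x, aoi, sizes, c_server, c_cache, c_update,
               aoi_cost, w):
    """Sketch of the total cost in Section II-B (all names are assumptions).

    requests  -- list of (user, content, slot) tuples
    x[i][t]   -- 1 if content i is cached in slot t, else 0
    aoi[i][t] -- AoI of content i in slot t (only meaningful when cached)
    sizes[i]  -- size of content i; costs are per unit of data
    """
    # downloading cost, cf. (1): a request is served from the cache if the
    # content is cached in that slot, otherwise from the server
    download = sum(sizes[i] * (c_cache if x[i][t] else c_server)
                   for (_, i, t) in requests)
    # updating cost, cf. (3): pay the server-to-cache cost whenever a cached
    # content has AoI zero, i.e., it has just been downloaded from the server
    update = sum(c_update * sizes[i]
                 for i in x for t in x[i]
                 if x[i][t] and aoi[i][t] == 0)
    # AoI cost, cf. (2): weighted by the number of requests for the content
    # in that slot
    n_req = {}
    for (_, i, t) in requests:
        n_req[(i, t)] = n_req.get((i, t), 0) + 1
    aoi_pen = sum(n_req.get((i, t), 0) * aoi_cost(i, aoi[i][t])
                  for i in x for t in x[i] if x[i][t])
    # total cost, cf. (4): w trades off the load-related costs against the
    # AoI cost (one plausible placement of the weighting factor)
    return download + update + w * aoi_pen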

II-C Problem Formulation

The update-enabled caching problem (UECP) is formulated as an ILP, shown in (5).

(UECP) (5a)
(5b)
(5c)
(5d)
(5e)

Constraints (5b) indicate that the used storage space is less than or equal to the cache capacity in each time slot. Constraints (5c) state that if a content is in the cache, it has to have exactly one of the AoIs . Constraints (5d) indicate that content in time slot has AoI if and only if the content is in the cache in time slot , does not have AoI in time slot , and has AoI in time slot .

Even though this ILP can be solved by a standard solver, doing so requires significant computational time. Exploiting the structure of the problem, we develop a solution method based on column generation.
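As a concrete illustration of what a formulation like (5) can look like, here is a hedged sketch using the PuLP modeling library; the toy data, the variables x and y, the linearization of the AoI evolution, and the objective are all our assumptions rather than the paper's exact model.

import pulp

# Hypothetical toy instance: 3 contents, 4 time slots, AoI values 0..3.
contents, slots, ages = range(3), range(4), range(4)
size = {0: 2, 1: 1, 2: 3}                     # content sizes (assumed)
capacity = 4                                   # cache capacity (assumed)
w = 0.5                                        # weighting factor (assumed)
requests = {(0, 1): 2, (1, 2): 1, (2, 3): 3}   # (content, slot) -> #requests (assumed)
c_server, c_cache, c_update = 3.0, 1.0, 2.0    # per-unit costs (assumed)

prob = pulp.LpProblem("UECP_sketch", pulp.LpMinimize)
# x[i][t] = 1 iff content i is cached in slot t
x = pulp.LpVariable.dicts("x", (contents, slots), cat="Binary")
# y[i][t][a] = 1 iff content i is cached in slot t with AoI a
y = pulp.LpVariable.dicts("y", (contents, slots, ages), cat="Binary")

# (5b)-style constraints: used storage never exceeds the cache capacity
for t in slots:
    prob += pulp.lpSum(size[i] * x[i][t] for i in contents) <= capacity

# (5c)-style constraints: a cached content has exactly one AoI value
for i in contents:
    for t in slots:
        prob += pulp.lpSum(y[i][t][a] for a in ages) == x[i][t]

# (5d)-style constraints (one possible linearization): AoI a > 0 in slot t
# requires AoI a - 1 in slot t - 1
for i in contents:
    for t in slots:
        for a in ages:
            if a >= 1:
                if t >= 1:
                    prob += y[i][t][a] <= y[i][t - 1][a - 1]
                else:
                    prob += y[i][t][a] == 0

# Placeholder objective in the spirit of (1)-(4): request downloading cost,
# server-to-cache updating cost, and a weighted linear AoI penalty.
prob += (
    pulp.lpSum(n * size[i] * (c_cache * x[i][t] + c_server * (1 - x[i][t]))
               for (i, t), n in requests.items())
    + pulp.lpSum(c_update * size[i] * y[i][t][0]
                 for i in contents for t in slots)
    + w * pulp.lpSum(n * a * y[i][t][a]
                     for (i, t), n in requests.items() for a in ages)
)

prob.solve(pulp.PULP_CBC_CMD(msg=False))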

Ii-D Complexity Analysis

Theorem 1.

UECP is NP-hard.

Proof.

The proof is established by a polynomial reduction from the Partition problem, which is NP-complete [garey1979computers]. Consider a Partition problem with a set of integers, i.e., . The task is to decide whether it is possible to partition into two subsets and with equal sums.

The reduction is constructed as follows. We set the cache capacity to , the set of contents to , the size of content to , and the number of time slots to one, i.e., . As , there are no updating or AoI costs. The time slots of all requests are set to , i.e., . We set for , , and . With this setting, if the cache stores content , a gain of is achieved. As the cache capacity is , a maximum possible gain of can be achieved. Now, the question is whether this maximum gain can be achieved. This question can be answered by solving UECP, which also answers the Partition problem. Hence the conclusion. ∎
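Purely as an illustration of the reduction (the field names below are our own, not the paper's notation), a Partition instance maps to a one-slot caching instance whose capacity is half the total sum; Partition answers "yes" exactly when the optimal schedule achieves the maximum possible offloading gain.

def partition_to_uecp(integers):
    """Map a Partition instance to a single-slot caching instance (sketch)."""
    total = sum(integers)
    assert total % 2 == 0, "Partition is trivially 'no' when the sum is odd"
    return {
        "num_slots": 1,                     # T = 1: no updating or AoI costs
        "content_sizes": list(integers),    # one content per integer
        "cache_capacity": total // 2,       # half of the total sum
        # every content is requested once, in the single time slot
        "requests": [(i, 0) for i in range(len(integers))],
    }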

III Reformulation of UECP

We provide a reformulation of the problem that enables a CGA. We define the caching and updating decisions for content across the time slots as a tuple in which and . In total, such tuples exist, and one of them is used in a solution. Denote by the index set of all possible solutions. We refer to a possible solution as a column. The cost of column for content is denoted by and can be calculated by the formula in (6).

(6)

In (6), and are constants representing the values of and with respect to the -th column, respectively. Now, ILP (5) can be reformulated as (7).

(7a)
(7b)
(7c)
(7d)

Here, is a binary variable that equals one if and only if the -th column of content is selected, and zero otherwise. Constraints (7b) are the cache capacity constraints, and constraints (7d) indicate that only one of the columns is used.

IV Algorithm Design

In this section, we present our solution method, which consists of two algorithms. The first is a column generation algorithm (CGA) applied to the continuous version of (7). The second is a rounding algorithm (RA) applied to the solution obtained from CGA if that solution is fractional. These algorithms are applied alternately until an integer solution is constructed. The overall solution method is shown in Algorithm 1. The term RMP in the algorithm will be discussed later.

1:  STOP ← false
2:  while (STOP = false) do
3:     Apply CGA to RMP and obtain a solution
4:     if (the solution is integer) then
5:         STOP ← true
6:     else
7:         Apply RA to the solution
Algorithm 1 CGA and RA

IV-A Column Generation Algorithm

In column generation, the problem is decomposed into a so-called master problem (MP) and a subproblem (SP). The algorithm starts with a subset of columns and solves MP and SP alternately. Each time SP is solved, a new column that may improve the objective function is generated. The benefit of CGA is that it exploits the fact that, at optimum, only a few columns are used.
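Schematically, the CGA loop described here alternates between the restricted master problem and the pricing subproblem until no column with negative reduced cost exists; in the sketch below, solve_rmp and solve_subproblem are hypothetical helpers standing in for Sections IV-A1 and IV-A2.

def column_generation(contents, initial_columns, solve_rmp, solve_subproblem,
                      tol=1e-9):
    """Generic column generation loop (illustrative sketch, assumed interfaces).

    solve_rmp(columns)         -> (objective value, primal solution, duals)
    solve_subproblem(i, duals) -> (best column for content i, its reduced cost)
    """
    columns = {i: list(initial_columns[i]) for i in contents}
    while True:
        value, primal, duals = solve_rmp(columns)     # restricted master LP
        improved = False
        for i in contents:
            new_col, reduced_cost = solve_subproblem(i, duals)  # pricing step
            if reduced_cost < -tol:      # negative reduced cost: add the column
                columns[i].append(new_col)
                improved = True
        if not improved:                 # all reduced costs nonnegative:
            return value, primal, columns  # the RMP solution is optimal for MP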

IV-A1 MP and RMP

MP is the continuous version of formulation (7). The restricted MP (RMP) is the MP with a small subset of columns for each content . RMP is expressed in (8). Denote by the cardinality of .

(RMP) (8a)
(8b)
(8c)
(8d)

IV-A2 Subproblem

The SP uses the dual information to generate new columns. Denote by the optimal solution of RMP. Denote by and the corresponding optimal dual variables of (8b) and (8c), respectively, i.e., and . After obtaining , we need to check whether it is also an optimal solution of MP. This can be determined by finding a column with the minimum reduced cost for each content . If all these values are nonnegative, the current solution is optimal. Otherwise, we add the columns with negative reduced cost to the corresponding sets.

Given and for content , the reduced cost of column is where can be computed using expression (6) in which constants and are replaced with optimization variables and , respectively. To find the column with minimum reduced cost for content , we need to solve subproblem SP, shown in (9). Denote by the optimal solution of SP. If the reduced cost of is negative, we add it to .

(9a)
(9b)
(9c)
(9d)
(9e)
(9f)

Even though (9) is an ILP, in the following we show that it can be solved as a shortest path problem, using for example Dijkstra's algorithm [Cormen2009introduction], in polynomial time.

Theorem 2.

For content , SP can be solved in polynomial time as a shortest path problem.

Proof.

Consider content . We construct an acyclic directed graph where finding the shortest path from the source to the destination is equivalent to solving SP. The objective function (9a) can be rewritten as (10). Denote by the total cost for downloading content from the server for all users requesting the content over all time slots, i.e., . Denote by the scenario where users request content in time slot and the content has AoI . Denote by the downloading cost from the server to the cache. Denote by the reduction in due to storing content .

The graph is constructed as follows. Nodes and are used to represent the source and destination. Node is used to represent . For time slot , there are vertically aligned nodes. Using node means that the content is not in the cache, and using node , , means that the content is in the cache and has AoI . From node to there is an arc with weight . For each node there are two outgoing arcs: one to , which means that the content is not stored in the next time slot and has weight , and the other to , which has weight and means that the content is downloaded to the cache in the next time slot and has AoI . For each node there are three outgoing arcs, to , , and , respectively. Using the first arc means that the content is deleted for the next time slot; this arc has weight . Using the second arc means that the content is re-downloaded from the server and its AoI is reset; this arc has weight . Using the third arc means that the content is kept and its AoI increases by one; this arc has weight . Finally, there are arcs from and for to , each with weight .

Given any solution of (9), by construction of the graph, the solution directly maps to a path from the source to the destination with the same objective function value. Conversely, given a path, we construct an ILP solution. For time slot , if the flow is in node , then we set . If the flow is in , we set and . The resulting ILP solution has the same objective function value as the length of the given path in terms of the arcs' weights. Hence the conclusion. ∎

(10)
Figure 2: Graph of the shortest path problem for subproblem.
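Since the constructed graph is acyclic and layered by time slot, the shortest path can also be computed by a simple forward pass instead of Dijkstra's algorithm; the sketch below uses assumed arc-weight functions (cost_out, cost_download, cost_keep) that stand in for the weights defined in the proof.

import math

def shortest_path_subproblem(T, A_max, cost_out, cost_download, cost_keep):
    """Layered shortest-path view of SP (illustrative sketch, assumed weights).

    States per slot: 'out' (not cached) or an AoI value in {0, ..., A_max}.
    cost_out(t)      -- weight of the arc into the 'not cached' node of slot t
    cost_download(t) -- weight of the arc for (re-)downloading in slot t (AoI 0)
    cost_keep(t, a)  -- weight of the arc for keeping the content with AoI a
    """
    dist = {"out": 0.0}                   # distances from the source node
    for t in range(T):
        new = {}
        for state, d in dist.items():
            # option 1: the content is not in the cache in slot t
            new["out"] = min(new.get("out", math.inf), d + cost_out(t))
            # option 2: (re-)download the content; its AoI becomes zero
            new[0] = min(new.get(0, math.inf), d + cost_download(t))
            # option 3: keep the cached copy; its AoI grows by one
            if state != "out" and state + 1 <= A_max:
                a = state + 1
                new[a] = min(new.get(a, math.inf), d + cost_keep(t, a))
        dist = new
    # arcs to the destination are taken with weight zero in this sketch
    return min(dist.values())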
0:  Input: problem parameters and initial column sets
1:  Initialize the column set for each content
2:  STOP ← false
3:  while (STOP = false) do
4:     Solve RMP and obtain the primal solution and the dual variables
5:     STOP ← true
6:     for each content do
7:         Solve SP and obtain its optimal column
8:         if the reduced cost of the column is negative then
9:            Add the column to the column set of the content
10:            STOP ← false
Algorithm 2 Column Generation Algorithm (CGA)

IV-B Rounding Algorithm

The solution of CGA may be fractional. Thus, we need a mechanism to construct integer solutions, and we design a rounding algorithm (RA) to achieve this. RA repeatedly fixes the caching decisions of contents over time slots until an integer solution is constructed. The caching decision for content and time slot is determined based on the value , defined as . This value indicates how likely it is optimal to store content in time slot . In the following, we prove a relationship between and , and then give the RA.

Theorem 3.

For any content , is binary for any if and only if every element of is binary.

Proof.

For necessity, for any content , if is binary for any , , it is obvious from the definition that all elements of are binary. Now, we prove the sufficiency. For any content , assume that every element in is binary. Assume that for ; then . For element with to be binary, the elements for must be either all zero or all one. Otherwise, as , one of the becomes fractional. This means that all columns corresponding to for must be the same. Having two columns with the same values violates the condition that the columns of any two are different. Therefore, for any content , , if is binary for any , , then is binary for any , . Hence the proof. ∎

RA consists of three main steps, which are shown in Algorithm 3. First, for content in time slot , the decision is to store the content if . All columns that do not comply with this caching decision are discarded. This is done by Lines –. Second, the element of closest to zero or one is found and rounded. Based on the rounding outcome, the caching decision is determined and non-complying columns are discarded. This is done via Lines –. Finally, the algorithm fixes the decisions of the contents across the time slots to zero if there is no spare space left in the cache to store them in those time slots. This is done by Lines –. The caching decisions made so far remain fixed in all subsequent iterations. Note that with these fixings, SP can still be solved as a shortest path problem. If is set to , nodes for and their connected arcs are deleted from the graph. If is set to , node and its connected arcs are deleted. A sketch of the core rounding step is given after Algorithm 3.

0:  Input: the RMP solution and the column sets
1:  Compute the value for each content and time slot
2:  Fix the caching decision to one in SP for every content and time slot whose value equals one
3:  Fix the non-complying columns to zero in RMP
4:  Find the fractional value closest to zero or one, together with its content and time slot
5:  Round this value to the nearer of zero and one
6:  if the value is rounded to one then
7:     Fix the corresponding caching decision to one in SP
8:     Fix the non-complying columns to zero in RMP
9:  else
10:     Fix the corresponding caching decision to zero in SP
11:     Fix the non-complying columns to zero in RMP
12:  for each time slot do
13:     Compute the spare cache capacity given the decisions fixed so far
14:     for each content whose caching decision is not yet fixed do
15:         if the content does not fit in the spare capacity then
16:            Fix its caching decision to zero in SP
17:            Fix the non-complying columns to zero in RMP
Algorithm 3 Rounding Algorithm (RA)
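The following Python sketch illustrates the first two rounding steps of Algorithm 3; the data layout (lam, columns) and the aggregation into theta are our assumptions based on the description above, not the paper's implementation, and the capacity-based fixing of the third step is omitted.

def rounding_step(lam, columns):
    """One RA iteration (illustrative sketch, assumed data layout).

    lam[i][k]     -- fractional RMP weight of column k of content i
    columns[i][k] -- the column itself: columns[i][k][t] is 1 if that column
                     stores content i in time slot t
    Returns fixed caching decisions as a dict {(content, slot): 0 or 1}.
    """
    # aggregate per (content, slot) how strongly the RMP solution caches it
    theta = {}
    for i in columns:
        for k, col in enumerate(columns[i]):
            for t, stored in enumerate(col):
                theta[(i, t)] = theta.get((i, t), 0.0) + lam[i][k] * stored

    # step 1: fix to one wherever the aggregated value already equals one
    fixed = {key: 1 for key, value in theta.items() if value >= 1.0 - 1e-9}

    # step 2: round the fractional value closest to zero or one
    frac = {key: value for key, value in theta.items()
            if 1e-9 < value < 1.0 - 1e-9}
    if frac:
        key = min(frac, key=lambda k: min(frac[k], 1.0 - frac[k]))
        fixed[key] = 1 if frac[key] >= 0.5 else 0

    # (step 3, omitted: fix to zero the contents that no longer fit in the
    # remaining cache capacity of each slot)
    return fixed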

V Performance Evaluation

We compare CGA to the LB and to two conventional caching algorithms: a random-based algorithm (RBA) [7959865] and a popularity-based algorithm (PBA) [Ahlehagh2014Video]. Both algorithms treat contents one by one. In RBA, the contents are considered in random order, but with respect to their total numbers of requests; a content with a higher number of requests is more likely to be selected for caching. In PBA, popular contents, i.e., contents with higher numbers of requests, are considered first. For the content under consideration, if the content was not in the cache in the previous time slot, it is downloaded with AoI zero. Otherwise, if the AoI cost has reached fifty percent of the downloading cost, the content is re-downloaded. Otherwise, the content is kept and its AoI increases by one.
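The per-content refresh rule shared by PBA and RBA can be sketched as follows; the function and parameter names are our own, and only the rule described above is implemented, not the ordering of contents or the capacity check.

def baseline_refresh(in_cache, aoi, aoi_cost, download_cost):
    """Refresh rule of the PBA/RBA baselines for one content (sketch).

    in_cache      -- was the content cached in the previous time slot?
    aoi           -- its AoI at that point (ignored if not cached)
    aoi_cost      -- function mapping an AoI value to its cost
    download_cost -- cost of (re-)downloading the content from the server
    Returns (store_it, new_aoi).
    """
    if not in_cache:
        return True, 0                       # download with AoI zero
    if aoi_cost(aoi) >= 0.5 * download_cost:
        return True, 0                       # AoI cost reached 50%: re-download
    return True, aoi + 1                     # keep it; AoI grows by one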

The content popularity distribution is modeled by a Zipf distribution [KarthikeyanShanmugam2013], i.e., the probability that a user requests the -th content is . The popularities of contents change randomly across the time slots. We set , , and , with a length of one hour for each time slot [8691020]. The sizes of files are uniformly generated within the interval . The cache capacity is set as . Here, indicates the size of the cache in relation to the total size of all contents. The number of requests for each user is randomly generated in .
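For instance, content requests following a Zipf-like popularity law can be generated as in the sketch below; the exponent value and the instance sizes are illustrative assumptions, not the parameters used in the paper.

import numpy as np

def sample_requests(num_contents, num_requests, gamma=0.8, rng=None):
    """Draw requested content indices from a Zipf-like distribution (sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    ranks = np.arange(1, num_contents + 1)
    probs = ranks ** (-gamma)        # popularity proportional to rank^(-gamma)
    probs /= probs.sum()             # normalize into a probability distribution
    return rng.choice(num_contents, size=num_requests, p=probs)

# example: five requests of one user over a catalogue of 100 contents
requested = sample_requests(num_contents=100, num_requests=5)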

The performance results are reported in Figures 3–5. The deviation from the global optimum is bounded by the deviation from the LB, as the LB is always less than or equal to the global optimum. We refer to the deviation from the LB as the optimality gap. The CGA provides solutions within gap from the LB and outperforms the conventional algorithms. Figure 3 shows the impact of . When increases from to , the cost increases nearly linearly; however, the optimality gaps of the algorithms decrease. The reason is that, with a larger number of users, more contents from the content set are requested. As the cache capacity is limited, the only way for any algorithm to obtain many of the requested contents is from the server, which leads to a lower optimality gap.

Figure 4 shows the impact of . Recall that the cache capacity is set to of the total size of the files. For CGA, when the capacity of the cache is extremely limited and is small, almost all contents will be requested by the users. Together, these imply that many requests need to be satisfied from the server, which leads to a high cost. When increases to , the cost decreases, because as increases the cache capacity increases, and CGA is able to utilize it efficiently. However, when further increases to , the cost increases. The reason is that, even though the capacity increases with , the diversity of the requested contents becomes too large, and consequently some of them need to be satisfied from the server, which leads to a higher cost.

Figure 5 shows the impact of . Recall that a larger means higher backhaul load but smaller AoI. From the figure, it can be seen that when grows, PBA and RBA push the average AoI of the contents down to almost zero but incur a substantial amount of load on the backhaul. In contrast, the solutions of CGA achieve a much better balance between the backhaul load and the AoI of contents with respect to . Note that the backhaul load and average AoI are normalized to the interval .

Figure 3: Impact of on cost when , , , , , , and .
Figure 4: Impact of on cost when , , , , , and .
Figure 5: Impact of on backhaul load and average AoI when , , , , and .

VI Conclusions

This paper has investigated the scheduling of content caching along time, where the offloading effect and the freshness of the contents are jointly accounted for. The problem is formulated as an ILP and its NP-hardness is proved. Next, via a mathematical reformulation, a solution approach based on column generation and a rounding mechanism is developed. Via the joint cost function, it is possible to address the trade-off between the updating and AoI costs, and the numerical results show that our algorithm is able to balance between the two. Simulation results demonstrate that our solution approach provides near-optimal solutions.

References