User Preference Learning-Aided Collaborative Edge Caching for Small Cell Networks

While next-generation wireless communication networks intend to leverage edge caching for enhanced spectral efficiency, quality of service, end-to-end latency, content sharing cost, etc., several aspects of it are yet to be addressed to make it a reality. One of the fundamental challenges in a cache-enabled network is predicting what content to cache and where to cache it so that high caching content availability is accomplished. For simplicity, most legacy systems utilize a static estimation based on the Zipf distribution, which, in reality, may not be adequate to capture the dynamic behavior of content popularity. Forecasting users' preferences makes it possible to proactively allocate caching resources and cache the needed contents, which is especially important in a dynamic environment with real-time service needs. Motivated by this, we propose a long short-term memory (LSTM) based sequential model that is capable of capturing the temporal dynamics of the users' preferences for the available contents in the content library. Besides, for a more efficient edge caching solution, different nodes in proximity can collaborate to help each other. Based on the forecast, a non-convex optimization problem is formulated to minimize the content sharing cost among these nodes, and a greedy algorithm is used to achieve a sub-optimal solution. Using mathematical analysis and simulation results, we validate that the proposed algorithm performs better than the existing schemes.

I Introduction

Wireless user penetration is consistently increasing with the continuous emergence of new and sophisticated user-defined applications. This steers wireless technologies to evolve rapidly from one generation to the next, striving to deliver enhanced spectral efficiency, energy efficiency and quality of experience at a lower operating cost. Even though the existing wireless networks have delivered very promising performance in these respects, new demands on capacity and other performance metrics have never ceased to emerge [3]. Besides, with the advent of the Internet of Everything [1, 8], the shortcomings of these legacy technologies became more apparent. Therefore, researchers are continuously in a tireless search for new technologies that can be adopted on top of the existing ones for future generation networks. Among many other impressive ideas, moving away from the traditional centralized infrastructure towards a user-centric distributed network infrastructure is a promising one [16, 14, 15, 18, 17, 6, 10].

Note that a user-centric network platform significantly reduces energy consumption [6], increases network throughput [10] and enhances the utilization of the much-needed spectrum [16, 14, 15]. Edge caching, in turn, is the concept of storing popular contents close to the end users. Therefore, leveraging edge caching, a user-centric network infrastructure efficiently utilizes the network bandwidth and significantly reduces the congestion on the links to the centralized cloud server [16, 14, 15]. Besides, edge caching is considered a promising scheme to support video applications due to their escalating traffic volume and stringent QoS requirements [4]; one of the prominent benefits of caching is that it alleviates the bandwidth demand in the centralized network segments by storing the popular video contents in local or nearby nodes. Furthermore, in various mission-critical, delay-sensitive applications, edge caching can perform much better than the cloud caching approach due to the significant reduction in delay [2].

In the literature [14, 15, 18, 12, 13, 5], several researchers have studied edge caching in terms of different performance metrics. Tan et al. conducted a static popularity based throughput maximization analysis in [15]. A novel content delivery delay minimization problem was studied in [18]. Shanmugam et al. [12] considered both coded and uncoded cases for caching contents at helper nodes to minimize the content downloading time. Song et al. [13] proposed a dynamic approach for both single-player and multi-player scenarios. Recently, Jiang et al. considered a cache placement strategy to minimize network costs in [5].

In comparison to these works, we develop a new caching platform that allows user caching, device-to-device (D2D) communications and collaboration among cache-enabled end nodes. Furthermore, a long short-term memory (LSTM) based sequential model is proposed for content popularity prediction, in which the dynamic nature of content popularity is discerned. In summary, the contributions of this paper are listed as follows:

  1. To capture the short-term temporal dynamics of the users, an LSTM is used to forecast user preferences ahead of time.

  2. To fully exploit the advantages of edge caching, a collaborative communication framework is proposed, in which different nodes in the same cluster can share contents with each other.

  3. We formulate optimization problems to minimize the content sharing cost under the constraints of limited and dynamic storage capacities at both the users and the base stations (BSs), for both heterogeneous and homogeneous caching placement scenarios.

  4. We further analyze the content sharing cost and develop collaborative edge caching algorithms to configure the parameters of caching placement.

  5. Numerical results are illustrated to validate the theoretical findings and the performance gain of the proposed algorithms.

The outline of this paper is as follows. Section II describes the system model and proposed dynamic user preference prediction. Section III introduces the caching model and optimization problems. The algorithms are given in Section IV. Section V presents the performance results, followed by the concluding remarks in Section VI.

II System Model and Dynamic User Preference Prediction

This section presents the proposed user-centric system model, followed by the proposed content access protocols and dynamic user preference prediction model.

Fig. 1: System Model for Collaborative Caching

II-A User-Centric System Model

In our proposed system model, a set of D2D users, denoted by $\mathcal{U}$ with $|\mathcal{U}| = U$, is distributed in the coverage area of the BSs. Let $C_U$ denote the cache size of each user. Considering a cluster-based system model, we assume that each cluster consists of several BSs (within a cluster, different BSs use orthogonal bandwidth, so there is no intra-cluster interference; we also consider an equal number of BSs with equal caching capacity in each cluster). Let $\mathcal{B}$, with $|\mathcal{B}| = B$, and $C_B$ represent the set of BSs and the cache storage size of each BS, respectively. For simplicity, we assume that each BS serves an equal number of users. We denote a requesting D2D node and its serving BS as the tagged D2D node and the tagged BS, respectively (throughout this paper, the names serving BS and tagged BS are used interchangeably). Furthermore, all the D2D nodes in a single cell, i.e., under the coverage region of one BS, are assumed to be in the communication range of each other, and every D2D user is in the communication range of at least one BS.

Note that, in this paper, the $F$ most popular contents are taken into the content catalog, denoted by $\mathcal{F}$ with $|\mathcal{F}| = F$. This assumption of a fixed catalog is made only for a given period. Considering the age of information and content freshness, similar to [16, 14], we assume that new popular contents are added periodically while the least popular ones are removed. Furthermore, following the widely used notion, we assume that all contents have the same size, denoted by $s$ bits. Note that if the content sizes are different, we can always divide the contents into segments of equal size and store those segments [7].

II-B Proposed Content Access Protocols

We carefully design our proposed model to satisfy the critical latency requirements of real-time communication by delivering the requested contents from local caches as much as possible. When a tagged user needs to access a desired content, it first checks its own cache storage before sending a content request to other nodes. Only if it does not store the requested content does it send the request to its D2D neighbors that reside under the same cell and within its communication range. If the content is available at one of the D2D neighbors, it can be served instantly from that neighbor to the tagged user. If none of the D2D nodes store the requested content, the request is forwarded to the serving BS, which delivers the content to the tagged user if the content is found in its storage. If the requested content does not exist in the serving BS's cache, the serving BS forwards the request to the neighboring BSs residing in the same cluster. If the content is not available in any of the above local caches, it is downloaded from the cloud, which is considered the least favorable choice.
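To make the lookup order concrete, the minimal Python sketch below walks through the protocol for a single request; the tier names and cache contents are illustrative assumptions rather than part of the formal model.

def fetch_content(f, own_cache, d2d_caches, bs_cache, neighbor_bs_caches):
    # Lookup order of the proposed protocol: own cache -> D2D neighbors
    # in the same cell -> serving BS -> neighboring BSs in the cluster
    # -> cloud (the least favorable choice).
    if f in own_cache:
        return "self"
    if any(f in cache for cache in d2d_caches):
        return "d2d_neighbor"
    if f in bs_cache:
        return "serving_bs"
    if any(f in cache for cache in neighbor_bs_caches):
        return "neighbor_bs"
    return "cloud"

# Example: content 7 is cached only at a neighboring BS.
print(fetch_content(7, {1, 2}, [{3}, {4}], {5}, [{6, 7}]))  # -> neighbor_bs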

Besides, we formulate the problem in two steps. In the first step, we model the dynamic content preferences of the users; note that we intend to model the per-user content preference in a dynamic manner. The second step performs the caching placement based on the prediction. The goal is to store the most probable to-be-requested contents for future time slots. Using the actual requests of the users, we finally present the optimization model aiming to minimize the total content sharing cost. In the following subsection, we present the prediction model.

II-C Dynamic User Preference Prediction

Note that even content popularity alone varies dynamically over time and locations, let alone user preferences. Driven by this, we first introduce several terms to facilitate the understanding of the different aspects of our dynamic user preference prediction.

II-C1 Content Popularity

It is the probability distribution over the contents, which expresses how often a content $f$ is accessed or requested by all of the users. If we consider only a small geographic region, such as a small cell, this distribution is usually regarded as the local popularity. In most legacy networks, the Zipf distribution has been widely used to model content popularity [12]. The probability mass function (pmf) of the Zipf distribution is represented by $P(f) = f^{-\gamma} / \sum_{j=1}^{F} j^{-\gamma}$, where $\gamma$ denotes the skewness of the content popularity.
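For illustration, a short Python snippet that evaluates the Zipf pmf above; the library size and skewness are arbitrary example values.

import numpy as np

def zipf_pmf(F, gamma):
    # P(f) = f^(-gamma) / sum_j j^(-gamma), f = 1..F (popularity rank)
    ranks = np.arange(1, F + 1, dtype=float)
    weights = ranks ** (-gamma)
    return weights / weights.sum()

p = zipf_pmf(F=100, gamma=0.8)
print(p[:3], p.sum())  # the lowest ranks dominate; the pmf sums to 1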

II-C2 User Content Preference

The user content preference defines the conditional probability that a user requests a particular content given that the user actually makes a request. It is mathematically expressed as

(1)   $p_{u,f}^t = \Pr\big(\text{user } u \text{ requests content } f \text{ at slot } t \,\big|\, \text{user } u \text{ makes a request at slot } t\big),$

where $p_{u,f}^t$ represents the probability that user $u$ requests content $f$ at time slot $t$ given that it actually makes a request. Note that $\sum_{f=1}^{F} p_{u,f}^t = 1$.

II-C3 Activity Level of User

We now define the probability that a user sends a content request as its activity level, denoted by $a_u^t$, where $0 \le a_u^t \le 1$. Please note that the derivation and further discussion are presented in the online technical report [9].

II-D Predicting Dynamic User Preferences Using LSTM

This subsection presents a special kind of recurrent neural network (RNN), namely the LSTM, which is designed to capture the long-term dependencies that standard RNNs struggle to learn. The structure of an LSTM cell usually includes three gates, namely the forget gate, the input gate and the output gate [11]. Given the historical dataset of $T$ time slots, we focus on an LSTM based prediction model. This prediction model forecasts the probability of making a request and what content a user will demand in that request at time slot $t$, $t > T$. Please note that the steps and pertinent discussions of our proposed prediction model are provided in the online technical report [9]. For training purposes, we feed an entire row to the input of the LSTM block, meaning that the input of the LSTM is an entire row of the user-content matrix obtained from Alg. 1 of [9]. After running Alg. 1, we calculate the activity levels $a_u^t$ and the preference probabilities $p_{u,f}^t$ that are clearly described in the online technical report [9].

Furthermore, notice that the preference probability can be modeled for all future time slots $t > T$. Based on the requirements, the forecast window can be set to any reasonable length. Furthermore, if a per-slot time scale analysis is required, we can easily model that by considering only the per-time-slot user preference probabilities. Therefore, our proposed modeling is flexible for both of these cases. However, as placing the contents for each forecast time slot may not be cost-efficient due to practical hardware limitations, we consider the long-term request probability in this paper. Without loss of generality, assuming a fixed forecast window, the future content preferences of the users are taken as the average of the predicted $\hat{p}_{u,f}^t$, $t = T+1, \ldots, T+T_o$. Now, let $T_o$ denote this fixed time window chosen by the network administrator ($T_o$ spans only the optimization time slots, while $T$ spans all historical time slots). Let $\bar{p}_{u,f}$ denote the resulting preference probability that is used for the performance evaluation. This quantity is calculated as

(2)   $\bar{p}_{u,f} = \frac{1}{T_o} \sum_{t=T+1}^{T+T_o} \hat{p}_{u,f}^t.$
1:for each cell, $b \in \mathcal{B}$ do
2:     for each user, $u$ in cell $b$ do
3:          take the generated historical dataset from Alg. 1 of [9]
4:          process the data so that the entire row for each time slot forms the input elements of the LSTM
5:          divide the dataset into training, validation and test parts
6:          feed the data to the LSTM model
7:          using the model, forecast the values of the entire row for time slot $t+1$
8:          save the trained model and store the forecast values
9:     end for
10:end for
11:return: predicted values for time slot $t+1$
Algorithm 1 Predicting Sequential Data Using LSTM
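A minimal sketch of how the sequential predictor of Alg. 1 might be realized with tf.keras is shown below. The window length, layer sizes and toy data are assumptions standing in for the user-content matrix of [9], not the exact architecture of the paper.

import numpy as np
import tensorflow as tf

F = 100         # content library size (illustrative)
lookback = 10   # historical time slots fed to the LSTM (illustrative)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(lookback, F)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(F, activation="softmax"),  # pmf over contents
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Toy sequences standing in for rows of the historical user-content matrix.
X = np.random.rand(500, lookback, F).astype("float32")
y = X[:, -1, :] / X[:, -1, :].sum(axis=1, keepdims=True)
model.fit(X, y, epochs=2, validation_split=0.2, verbose=0)
p_next = model.predict(X[:1], verbose=0)  # forecast preference row for t+1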

III Caching Model and Content Sharing Cost

This section discusses the caching policy and introduces our objective functions.

III-A Caching Models

We consider a probabilistic caching model for caching at the edge nodes, i.e., the D2D users and the BSs. Let us define the probabilities that BS $b$ ($b \in \mathcal{B}$) and user $u$ ($u \in \mathcal{U}$) cache content $f$ ($f \in \mathcal{F}$) by $q_{b,f}$ and $q_{u,f}$, respectively. Due to physical storage limitations, we have the constraints $\sum_{f=1}^{F} q_{u,f} \le C_U$ and $\sum_{f=1}^{F} q_{b,f} \le C_B$, with $0 \le q_{u,f} \le 1$ and $0 \le q_{b,f} \le 1$. Without loss of generality, the tagged user $u_0$ and its associated (serving) BS $b_0$ are the focus of the study in the following. Let the remaining BSs be denoted by $\mathcal{B} \setminus \{b_0\}$, where $B$ is the total number of BSs in the cluster. Similarly, let the set of users in the coverage of $b_0$ be defined as $\mathcal{U}_0$, where $U'$ is the number of users including the tagged user.

III-A1 Heterogeneous Caching Model

In the heterogeneous caching placement strategy, the caching policy at one node may differ from that of another node. We define the probabilities of getting a content from the tagged user's own cache storage ($P^{sf}_{u_0,f}$), from the D2D neighbors ($P^{d2d}_{u_0,f}$), from the serving BS ($P^{bs}_{u_0,f}$), from the neighboring BSs ($P^{nbs}_{u_0,f}$) and from the cloud ($P^{cl}_{u_0,f}$), respectively, as follows:

(3)   $P^{sf}_{u_0,f} = q_{u_0,f}$
(4)   $P^{d2d}_{u_0,f} = (1 - q_{u_0,f}) \Big[ 1 - \prod_{u \in \mathcal{U}_0 \setminus \{u_0\}} (1 - q_{u,f}) \Big]$
(5)   $P^{bs}_{u_0,f} = \prod_{u \in \mathcal{U}_0} (1 - q_{u,f}) \, q_{b_0,f}$
(6)   $P^{nbs}_{u_0,f} = \prod_{u \in \mathcal{U}_0} (1 - q_{u,f}) (1 - q_{b_0,f}) \Big[ 1 - \prod_{b \in \mathcal{B} \setminus \{b_0\}} (1 - q_{b,f}) \Big]$
(7)   $P^{cl}_{u_0,f} = \prod_{u \in \mathcal{U}_0} (1 - q_{u,f}) \prod_{b \in \mathcal{B}} (1 - q_{b,f})$
(8)   $P^{sf}_{u_0,f} + P^{d2d}_{u_0,f} + P^{bs}_{u_0,f} + P^{nbs}_{u_0,f} + P^{cl}_{u_0,f} = 1$

III-A2 Homogeneous Caching Model

In the homogeneous caching model, the cache-enabled nodes of the same tier store the same set of contents. Thus, the probabilities of storing a content at the cache-enabled nodes are equal for all local nodes in the same tier, i.e., $q_{u,f} = q_f^U$ and $q_{b,f} = q_f^B$, where $u \in \mathcal{U}$ and $b \in \mathcal{B}$. For simplicity, we drop the node indices and denote the storing probabilities for the D2D nodes and the BSs as $q_f^U$ and $q_f^B$, respectively. Furthermore, we denote by $U'$ the number of users in the cell. Then, we rewrite (3)-(8) as

(9)   $P^{sf}_{f} = q_f^U$
(10)   $P^{d2d}_{f} = (1 - q_f^U) \big[ 1 - (1 - q_f^U)^{U'-1} \big]$
(11)   $P^{bs}_{f} = (1 - q_f^U)^{U'} q_f^B$
(12)   $P^{nbs}_{f} = (1 - q_f^U)^{U'} (1 - q_f^B) \big[ 1 - (1 - q_f^B)^{B-1} \big]$
(13)   $P^{cl}_{f} = (1 - q_f^U)^{U'} (1 - q_f^B)^{B}$
(14)   $P^{sf}_{f} + P^{d2d}_{f} + P^{bs}_{f} + P^{nbs}_{f} + P^{cl}_{f} = 1$
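As a sanity check on (9)-(14), the following Python helper evaluates the five serving-tier probabilities for a single content and verifies that they sum to one; the parameter values are arbitrary examples.

def hit_probabilities(q_u, q_b, U_c, B):
    # q_u, q_b: per-content caching probabilities at the user and BS
    # tiers; U_c: users per cell; B: BSs per cluster.
    p_self = q_u
    p_d2d = (1 - q_u) * (1 - (1 - q_u) ** (U_c - 1))
    p_bs = (1 - q_u) ** U_c * q_b
    p_nbs = (1 - q_u) ** U_c * (1 - q_b) * (1 - (1 - q_b) ** (B - 1))
    p_cloud = (1 - q_u) ** U_c * (1 - q_b) ** B
    return p_self, p_d2d, p_bs, p_nbs, p_cloud

probs = hit_probabilities(q_u=0.2, q_b=0.5, U_c=10, B=4)
print(probs, sum(probs))  # the five probabilities sum to 1, cf. (14)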

We now determine the cost of collaborating and sharing the contents among different nodes in the following sub-section.

III-B Content Sharing Cost

We consider two types of costs, namely (a) the storage cost and (b) the communication cost. The communication cost represents the transmission cost per bit per meter. If a content has a size of $s$ bits, the transmission cost between two D2D nodes that are $d$ meters apart is calculated as $s \, d \, \tau_{d2d}$, where $\tau_{d2d}$ is the per-bit, per-meter transmission cost of a D2D link. For simplicity, we consider an equal storage cost, denoted by $c_{st}$, for all nodes. The total cost of obtaining a content from a node is, therefore, the sum of its storage and transmission costs. Here, $C_{cl}$, $C_{nbs}$, $C_{bs}$ and $C_{d2d}$ represent the costs of extracting a content from the cloud, another BS in the same cluster, the serving BS and another D2D node in the same cell, respectively. Furthermore, we assume that the transmission cost is zero if the requested content is in the requester's own storage, although the storage cost still applies in this case. The relationships of the costs are presented in Proposition 1 in the online technical report [9]. In the following, we calculate the average content access cost for the heterogeneous and homogeneous caching models.

III-B1 Heterogeneous Caching Model Case

In the case of heterogeneous caching placement, we calculate the average content access cost as

(15)   $\bar{C}^{het} = \sum_{f=1}^{F} \bar{p}_{u_0,f} \big( c_{st} P^{sf}_{u_0,f} + C_{d2d} P^{d2d}_{u_0,f} + C_{bs} P^{bs}_{u_0,f} + C_{nbs} P^{nbs}_{u_0,f} + C_{cl} P^{cl}_{u_0,f} \big),$

where $u_0$ and $b_0$ represent the tagged user and the serving BS, respectively, and the serving-tier probabilities are given by (3)-(7).
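As a quick illustration of (15), the sketch below weights assumed serving-tier probabilities by assumed per-tier costs that respect the ordering in Proposition 1 of [9]; all numbers are illustrative, not from the paper.

def expected_cost(p_hit, costs):
    # p_hit and costs are aligned over: self, D2D, serving BS,
    # neighboring BS, cloud.
    return sum(p * c for p, c in zip(p_hit, costs))

p_hit = (0.20, 0.45, 0.18, 0.12, 0.05)  # sums to 1 across the five tiers
costs = (1.0, 3.0, 6.0, 10.0, 20.0)     # c_st < C_d2d < C_bs < C_nbs < C_cl
print(expected_cost(p_hit, costs))      # -> 4.83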

We then intend to minimize the content sharing cost by solving the following optimization problem $\mathcal{P}_1$:

(16a)   $\mathcal{P}_1: \quad \min_{\{q_{u,f}\}, \{q_{b,f}\}} \ \bar{C}^{het}$
(16b)   s.t. $\sum_{f=1}^{F} q_{u,f} \le C_U, \quad \forall u \in \mathcal{U}$
(16c)   $\sum_{f=1}^{F} q_{b,f} \le C_B, \quad \forall b \in \mathcal{B}$
(16d)   $0 \le q_{u,f} \le 1, \quad 0 \le q_{b,f} \le 1, \quad \forall u, b, f$

In problem $\mathcal{P}_1$, the constraints in (16b) and (16c) indicate that the total number of contents cached at each node (i.e., each D2D node and each BS) must not exceed its storage capacity. The constraint in (16d) simply states that the caching probabilities must lie in the range $[0, 1]$.

III-B2 Homogeneous Caching Model Case

Similarly, the average cost in the homogeneous caching placement case is derived as

(17)   $\bar{C}^{hom} = \sum_{f=1}^{F} \bar{p}_{u_0,f} \big( c_{st} P^{sf}_{f} + C_{d2d} P^{d2d}_{f} + C_{bs} P^{bs}_{f} + C_{nbs} P^{nbs}_{f} + C_{cl} P^{cl}_{f} \big),$

where $u_0$ represents the tagged user, while $U'$ is the number of users in the cell; the serving-tier probabilities are given by (9)-(13).

Note that the legacy homogeneous caching models assume an equal caching policy for all nodes in the same tier, together with the same user preference model. In contrast, heterogeneous content preferences of the users are used to demonstrate the effectiveness of the dynamic user preference prediction. Following the homogeneous notions, the optimization problem is reformulated as $\mathcal{P}_2$:

(18a)   $\mathcal{P}_2: \quad \min_{\{q_f^U\}, \{q_f^B\}} \ \bar{C}^{hom}$
(18b)   s.t. $\sum_{f=1}^{F} q_f^U \le C_U$
(18c)   $\sum_{f=1}^{F} q_f^B \le C_B$
(18d)   $0 \le q_f^U \le 1, \quad 0 \le q_f^B \le 1, \quad \forall f \in \mathcal{F}$

The constraints (18b)-(18d) are defined similarly to those in $\mathcal{P}_1$.

IV Efficient Problem Solvers

The optimization problems $\mathcal{P}_1$ and $\mathcal{P}_2$ are non-convex due to the non-linear combinatorial decision variables. Furthermore, user preferences vary dynamically over different time slots. Considering these dynamic variations, we intend to capture the long-term caching placement probabilities at the cache-enabled nodes. The significance of doing this is that the system may need to forecast the requested contents over multiple future time slots. If only binary cases are considered (a binary case considers only $x_{u,f}^t = 1$ or $x_{u,f}^t = 0$; for example, if $x_{u,f}^t = 0$, content $f$ is not cached at user node $u$ in slot $t$), the obtained results hold only for a single time slot. Instead, the goal of this work is to optimize the caching placement probabilities over a relatively long term. Let $x_{u,f}^t$ and $y_{b,f}^t$ denote the cache placement indicator functions at the users and the BSs, respectively, for time slot $t$. Here, $x_{u,f}^t = 1$ indicates that content $f$ is placed into the cache storage of user $u$ at time slot $t$, and $x_{u,f}^t = 0$ otherwise. As such, the cache placement probabilities are determined as follows:

(19)   $q_{u,f} = \frac{1}{T_o} \sum_{t=1}^{T_o} x_{u,f}^t$
(20)   $q_{b,f} = \frac{1}{T_o} \sum_{t=1}^{T_o} y_{b,f}^t$

where $T_o$ is the total number of optimization time slots.
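In code, (19) and (20) reduce to a time average of the binary indicators; the numpy sketch below uses random toy indicators in place of the placements produced by the algorithms that follow.

import numpy as np

T_o, U, F = 20, 5, 10                          # illustrative sizes
x = np.random.randint(0, 2, size=(T_o, U, F))  # toy placement indicators
q_u = x.mean(axis=0)                           # (U, F) long-term probabilities, cf. (19)
print(q_u.shape, float(q_u.min()), float(q_u.max()))  # all values lie in [0, 1]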

IV-A Algorithm for Heterogeneous Caching Placement

Due to its mixed-integer and non-convex nature, $\mathcal{P}_1$ is highly challenging to solve. Besides, the heterogeneity in the preference and caching models leads to a large number of system parameters. Therefore, we analyze three scenarios for content placement at the edge nodes in the heterogeneous case: (a) collaborative greedy caching - base station first (non-overlapping), (b) collaborative greedy caching - user first (non-overlapping) and (c) collaborative greedy overlapping caching. Owing to space constraints, we only discuss the last one; interested readers can find the other two algorithms in our online technical report [9].

Collaborative Greedy Overlapping Caching: In this case, we adopt a greedy caching mechanism. As the cost of getting a requested content from other nodes is higher than that of storing it at the requester node, this algorithm aims to place as many to-be-requested contents as possible into the requester's cache storage. Recall that the prediction model can forecast what contents a user will request ahead of time; therefore, it makes sense to adjust the caching policy based on the user's preferences. Using the forecast information, the contents to be requested by the users are placed into their cache storage for each time slot. This gives the indicator functions $x_{u,f}^t$. Finding the indicator functions then gives the long-term cache placement probabilities. For the BSs' cache storage, the remaining contents are placed based on their popularity profiles. Finally, the caching placement probabilities $q_{u,f}$ and $q_{b,f}$ are calculated using (19) and (20), respectively. The detailed operations for this case are summarized in Alg. 2, and a simplified code sketch follows the listing.

1:for each time slot $t$ of the optimization window $T_o$ do
2:     input: predicted user content preference, $\hat{p}_{u,f}^t$
3:     for each cell, $b \in \mathcal{B}$ do
4:          , , ,
5:          for each user, $u$ in cell $b$ do
6:               find and sort the contents based on $\hat{p}_{u,f}^t$
7:               if   then
8:                    
9:                    
10:                    
11:               else
12:                    
13:                    
14:                    
15:                    
16:               end if
17:          end for
18:          find the index of and
19:          sort them in descending order
20:          if  then
21:               for  in which  do
22:                    if the item is already cached, store the next most popular one and delete it from the list
23:                    
24:               end for
25:               set
26:          else
27:               repeat steps (21-24); if any storage is still left, consider storing the most popular contents of that cell
28:          end if
29:          if  then
30:               if   then
31:                    ,
32:               else
33:                    ,
34:                    fill out the BS storage (if any space left after step 33) with the most popular content of the cell
35:               end if
36:          else
37:               repeat step (34)
38:          end if
39:     end for
40:end for
41:calculate and , using equations (19-20)
42:Return
Algorithm 2 Collaborative Greedy Overlapping Caching
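A compact Python sketch of the core placement step of Alg. 2 for one cell and one time slot is given below: each user greedily caches its top predicted contents, and the BS then caches the most popular contents not already guaranteed by any user. The overlap bookkeeping of steps 18-38 is simplified, and all names are illustrative.

import numpy as np

def greedy_overlap_slot(pref, C_U, C_B):
    U, F = pref.shape                      # pref[u, f]: predicted preference
    x_user = np.zeros((U, F), dtype=int)
    for u in range(U):
        top = np.argsort(-pref[u])[:C_U]   # user u's most probable requests
        x_user[u, top] = 1
    popularity = pref.sum(axis=0)          # cell-level popularity profile
    uncached = np.flatnonzero(x_user.max(axis=0) == 0)
    bs_pick = uncached[np.argsort(-popularity[uncached])][:C_B]
    x_bs = np.zeros(F, dtype=int)
    x_bs[bs_pick] = 1
    return x_user, x_bs

x_u, x_b = greedy_overlap_slot(np.random.rand(5, 20), C_U=3, C_B=6)
print(x_u.sum(axis=1), x_b.sum())  # each user fills C_U slots; BS up to C_B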

IV-B Algorithm for Homogeneous Caching Placement

Now, we study the homogeneous caching placement, which is a special case of the heterogeneous caching placement. Recall that in the homogeneous caching policy, all nodes of the same tier store the same copy of contents in their caches. However, since the joint optimization problem $\mathcal{P}_2$ is not convex, it is also difficult to obtain the optimal solution. As such, we derive an efficient heuristic algorithm that solves $\mathcal{P}_2$. The detailed procedures are presented in Alg. 5 in the online technical report [9].

V Results and Discussion

The simulation setting is as follows: the content library contains $F$ contents; there are $U$ users in total and $B$ BSs in a cluster, with $U'$ users under each serving BS; the user cache size $C_U$ and the BS cache size $C_B$ are each varied over a finite range; and the historical and optimization horizons span $T$ and $T_o$ time slots, respectively.

We first generate the initial content request numbers following Alg. 1 of [9]. After that, the correlated request numbers are generated as $r^t = r^0 + A \sin(\omega t) + n^t$, where $r^0$, $t$ and $A$ represent the initially generated number, the index of the remaining time slots for which the correlated data are generated, and the amplitude, respectively. Moreover, $n^t$ is a random variable with mean $0$ and variance $1$. Note that, as the request number is a non-negative integer, we replace any negative number with $0$ and round the result. We stress that the proposed LSTM is a powerful solution and can be readily extended to any other kind of correlated data generation process. Given enough data samples, our proposed method is capable of predicting dynamic user preferences efficiently.
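Under this sinusoidal-plus-noise reading of the generator, a hedged Python sketch is as follows; the period T_p and amplitude A are illustrative stand-ins, not values from the paper.

import numpy as np

def correlated_requests(r0, T, A=5.0, T_p=24, seed=0):
    # r_t = r_0 + A*sin(2*pi*t/T_p) + n_t, with n_t ~ N(0, 1).
    rng = np.random.default_rng(seed)
    t = np.arange(1, T + 1)
    r = r0 + A * np.sin(2 * np.pi * t / T_p) + rng.standard_normal(T)
    return np.maximum(np.rint(r), 0).astype(int)  # non-negative integers

print(correlated_requests(r0=10, T=8))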

Now, using the proposed prediction model in Alg. 1, the contents that will be requested in the next time slot by the users are sequentially predicted. The prediction made by this model for the most popular content of a selected user is illustrated in Fig. 2(a). We also present the temporal dynamics over time for some selected users from all the cells. Thanks to the LSTM solution, the dynamic preference changes of the users over all contents and all time slots are well captured. We illustrate a sample of how the popularity of the contents and the activity of the users change over time in Fig. 2(b). Using these values, the content preference probabilities of the users are measured. We then use these results for the caching policy design in the following experiments.

Fig. 2: (a) Predicted values for a selected user and content; (b) time-varying nature of users' content preferences and activity levels; and (c) comparison between the existing static case [15] and the proposed dynamic case.
Fig. 3: (a) Cost functions for different BS cache sizes; and (b) cost functions for different user cache sizes.

To this end, we compare the performance of the static caching placement [15] and the proposed dynamic prediction-based caching strategy in Fig. 2(c). Particularly, we consider the homogeneous caching model for the static estimation based scheme of [15] when comparing with the proposed scheme. In the static case, there is no information about the temporal dynamics of the user preferences and activity levels across the time slots. Therefore, the caching placement probabilities remain constant in all time slots. On the other hand, our proposed scheme captures these temporal dynamics; hence, it knows well what contents will be requested by the users and when the users will place those requests. Therefore, the proposed algorithm can proactively design the optimal caching placement. It can be seen that the proposed dynamic prediction-based caching strategy outperforms the static caching placement [15]. This observation validates the benefit of the dynamic prediction-based caching strategy. Therefore, we only show comparisons among the proposed caching schemes in the following.

In Figs. 3(a) and 3(b), we illustrate the cost performance of our proposed schemes for different cache sizes. As the heterogeneous caching strategy allows storing diversified contents at the edge nodes, its performance is much better than that of the legacy homogeneous caching placement schemes. Moreover, the relative performance of the three proposed algorithms for the heterogeneous caching strategy may vary depending on the cache sizes of the edge nodes. However, the proposed greedy overlapping caching placement performs significantly better than all other cases when the edge nodes have reasonable cache sizes. Consequently, we conclude that the system administrator has the flexibility of choosing the best algorithm based on an initial sensing of the cache storage of the edge nodes. Therefore, our proposed dynamic caching solution is efficient, flexible, agile and scalable compared to similar legacy schemes. Note that more detailed discussions are presented in our online technical report [9] due to space constraints.

VI Conclusion

In a content delivery network, obtaining an accurate content popularity prediction is an immensely influential yet difficult task. Using the LSTM model, we have successfully captured the temporal dynamics of the user preferences and their activity levels. With the theoretical analysis and experimental simulations in this paper, we demonstrated that the system performance highly depends on the prediction of the content dynamics and popularity. We furthermore made fair comparisons among different cache placement strategies and concluded that the proposed greedy overlapping caching mechanism outperforms the other comparable caching schemes.


References

  • [1] D. Bandyopadhyay and J. Sen (2011) Internet of things: applications and challenges in technology and standardization. Wireless Personal Commun. 58 (1), pp. 49–69.
  • [2] M. Erol-Kantarci and S. Sukhmani (2018) Caching and computing at the edge for mobile augmented reality and virtual reality (AR/VR) in 5G. In Ad Hoc Networks, pp. 169–177.
  • [3] A. Fehske, G. Fettweis, J. Malmodin, and G. Biczok (2011) The global footprint of mobile communications: the ecological and economic perspective. IEEE Commun. Mag. 49 (8), pp. 55–62.
  • [4] N. Golrezaei, P. Mansourifard, A. F. Molisch, and A. G. Dimakis (2014) Base-station assisted device-to-device communications for high-throughput wireless video networks. IEEE Trans. Wireless Commun. 13 (7), pp. 3665–3676.
  • [5] L. Jiang and X. Zhang (2020) Cache replacement strategy with limited service capacity in heterogeneous networks. IEEE Access 8, pp. 25509–25520.
  • [6] M. F. Pervej and S.-C. Lin (2020) Eco-vehicular edge networks for connected transportation: a decentralized multi-agent reinforcement learning approach. arXiv preprint arXiv:2003.01005.
  • [7] S. Müller, O. Atan, M. van der Schaar, and A. Klein (2017) Context-aware proactive content caching with service differentiation in wireless networks. IEEE Trans. Wireless Commun. 16 (2), pp. 1024–1036.
  • [8] S. Nayak and R. Patgiri (2020) 6G: envisioning the key issues and challenges. arXiv preprint arXiv:2004.04024.
  • [9] M. F. Pervej, L. T. Tan, and R. Q. Hu (2020) User preference learning aided collaborative edge caching for small cell networks. Technical report.
  • [10] M. F. Pervej and S. Lin (2020) Dynamic power allocation and virtual cell formation for throughput-optimal vehicular edge networks in highway transportation. arXiv preprint arXiv:2002.10577.
  • [11] H. Sak, A. W. Senior, and F. Beaufays (2014) Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Proc. INTERSPEECH.
  • [12] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and G. Caire (2013) FemtoCaching: wireless content delivery through distributed caching helpers. IEEE Trans. Inf. Theory 59 (12), pp. 8402–8413.
  • [13] J. Song, M. Sheng, T. Q. S. Quek, C. Xu, and X. Wang (2017) Learning-based content caching and sharing for wireless networks. IEEE Trans. Commun. 65 (10), pp. 4309–4324.
  • [14] L. T. Tan, R. Q. Hu, and L. Hanzo (2019) Twin-timescale artificial intelligence aided mobility-aware edge caching and computing in vehicular networks. IEEE Trans. Veh. Tech., early access, pp. 1–1.
  • [15] L. T. Tan, R. Q. Hu, and Y. Qian (2018) D2D communications in heterogeneous networks with full-duplex relays and edge caching. IEEE Trans. Ind. Informat. 14 (10), pp. 4557–4567.
  • [16] L. T. Tan and R. Q. Hu (2018) Mobility-aware edge caching and computing in vehicle networks: a deep reinforcement learning. IEEE Trans. Veh. Tech. 67 (11), pp. 10190–10203.
  • [17] F. Ye, Y. Qian, and R. Q. Hu (2018) Smart grid communication infrastructures: big data, cloud computing, and security. John Wiley & Sons.
  • [18] S. Zhang, P. He, K. Suto, P. Yang, L. Zhao, and X. Shen (2018) Cooperative edge caching in user-centric clustered mobile networks. IEEE Trans. Mobile Comput. 17 (8), pp. 1791–1805.