Artificial Intelligence Assisted Collaborative Edge Caching in Small Cell Networks

Edge caching is a new paradigm that has been exploited over the past several years to reduce the load on the core network and to enhance content delivery performance. Many existing caching solutions only consider homogeneous caching placement due to the immense complexity of heterogeneous caching models. Unlike these legacy modeling paradigms, this paper considers heterogeneous (1) content preferences of the users and (2) caching models at the edge nodes. Besides, collaboration among these spatially distributed edge nodes is exploited to maximize the cache hit ratio (CHR) in a two-tier heterogeneous network platform. However, due to its complex combinatorial decision variables, the formulated problem is hard to solve in polynomial time. Moreover, no ready-to-use tool or software exists to solve it. Thanks to artificial intelligence (AI), building on the methodology of conventional particle swarm optimization (PSO), we propose a modified PSO (M-PSO) that efficiently solves the complex constrained problem in a reasonable time. Using numerical analysis and simulation, we validate that the proposed algorithm significantly enhances the CHR performance compared to the existing baseline caching schemes.


I Introduction

Owing to the ever-growing requirements of high data rates, good quality of service, and low latency, wireless communication has evolved from generation to generation. With the exponential increase in connected devices, existing wireless networks are already experiencing performance bottlenecks. While the general trend is to shift resources toward the edge of the network [13, 10], studies show that mobile video traffic is one of the dominant applications causing this bottleneck [3, 11, 15]. Caching has become a promising technology to address this performance issue by storing popular contents close to the end users [4, 2]. Therefore, during the network's busy hours, the requested contents can be delivered from these local nodes, relieving pressure on the backhaul and the centralized core network and reducing content delivery latency. Thus, the much-needed wireless spectrum and wireline bandwidth can be better utilized in a cache-enabled network platform. In an ultra-dense network, caching at the edge nodes is therefore a powerful mechanism for delivering video traffic.

While caching can significantly benefit next-generation wireless communication, various challenges need to be handled to ensure the required performance of a cache-enabled network [19, 14, 18, 9]. First, the content selection has an enormous impact on the cache-enabled platform [7]. Then, the question of at which node to store the contents needs to be answered. Due to the broad combinatorial decision parameters, this is an immense challenge for any cache-enabled network platform. Furthermore, depending on the system performance metrics of interest, the solution to this combinatorial decision problem may change. Therefore, based on the chosen performance metric, an efficient solution is needed to handle the issue in a reasonable time. As such, a heterogeneous network platform, reflecting practical communication scenarios, needs to be adopted for evaluating the caching performance.

There exist several caching solutions in the literature [8, 7, 17, 16]. A caching policy and cooperation distance were designed by Lee et al. in [8], considering clustered device-to-device (D2D) networks. While the authors presented brilliant concepts for caching policy design aiming to maximize (a) energy efficiency and (b) throughput, they only considered collaboration among the D2D users. Lee et al. also proposed a base station (BS) assisted D2D caching network in [7] that maximizes the time-average service rate. However, the authors only considered a single BS underlaying D2D communication with homogeneous request probability modeling. Tan et al. [17] adopted a collaboration-based caching model in a heterogeneous network. A mobility-aware probabilistic edge caching approach was explored in [16]. Here, the proposed model incorporated the novel idea of collaboration by considering the spatial node distribution and user mobility. While some brilliant concepts of relaying and collaboration were introduced in [17, 16], only homogeneous caching placement strategies were incorporated.

Unlike these existing works, in this paper we investigate a heterogeneous content preference model leveraging a heterogeneous cache placement strategy. Particularly, in a small cell network (SCN), we incorporate collaboration among spatially distributed full-duplex (FD) enabled BSs and half-duplex (HD) operated D2D users to maximize the average cache hit ratio (CHR). However, the formulated problem contains combinatorial decision variables that are hard to determine in polynomial time. Therefore, we develop a modified particle swarm optimization (M-PSO) algorithm that effectively solves the grand probabilistic cache placement problem within a reasonable time. To the best of our knowledge, this is the first work to consider heterogeneous user preference with a heterogeneous caching model in a practical SCN that uses collaborative content sharing among heterogeneous edge nodes to maximize the CHR.

The outline of this paper is as follows. The system model and the proposed content access protocols are presented in Section II, followed by the CHR analysis in Section III. The optimization problem and the proposed M-PSO algorithm are described in Section IV. Section V gives the performance results, followed by the concluding remarks in Section VI.

II System Model and Content Access Protocols

This section presents the node distributions and describes the caching properties, followed by the proposed content access protocols.

II-A Node Distributions

We consider a practical two-tier heterogeneous network, which consists of macro base stations (MBSs) and low-power sBSs (or relays) with underlaid D2D users. The nodes are distributed following independent homogeneous Poisson point process (HPPP) models. Let us denote the densities of the D2D users, sBSs and MBSs by , and , respectively. The sBSs and MBSs operate in the FD mode, whereas the D2D users operate in the HD mode. Let us denote the sets of D2D users, sBSs and MBSs by , and , respectively. Without any loss of generality, a user, an sBS and an MBS are denoted by , , and , respectively. Besides, the communication ranges of these nodes are denoted by , and , respectively.

The requesting user node is named the tagged user node. While a user is always associated with the serving MBS, it can also associate with a low-power sBS if the association rules are satisfied. The main benefits of being connected to an sBS over an MBS are a higher data rate, lower latency, lower power consumption, more effective use of radio resources, etc. We denote the associated sBS as the tagged sBS for that user. Furthermore, if such a tagged sBS exists for the user, the user maintains its communication with the serving MBS via the tagged sBS. In that case, the sBS can also use its FD mode to deliver requested content from the other sBSs or the cloud via the MBS. If such a tagged sBS does not exist for the user, the user has to rely on the neighboring sBS nodes and the serving MBS for extracting the requested contents. As all the users may not place content requests at the same time, we assume that only a portion of the users act as tagged users. Without any loss of generality, the requesting user, the associated sBS, and the serving MBS are denoted as , and , respectively.

II-B Cache Storage, Caching Policy and Content Popularity

The cache storages of the users, sBSs and MBSs are denoted by and , respectively. Considering equal-sized contents with a normalized size [1], it is assumed that the users can make content requests from a content directory of , where . For the caching model, a probabilistic method is considered assuming a heterogeneous caching placement strategy. Let , and be the probabilities of storing a content at the cache stores of the user node , the sBS and the MBS , respectively. Note that probabilistic caching is highly practical and adopted in many existing works [3, 11, 8, 7, 17, 16, 1].

The content popularity is modeled following a distribution with the probability mass function . Note that the skewness governs this distribution. It is assumed that each user has a different content preference. Therefore, a random content preference order and a random skewness are chosen for each user. While the content order is chosen using a random permutation, the parameter, , is chosen following a random distribution within a range of maximum and minimum values. Without any loss of generality, the probability that user requests content is denoted by . This is modeled based on the distribution.
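Assuming the skewness-governed popularity law is Zipf-like (the standard choice in the caching literature; the paper's exact distribution symbols are omitted in the text above), one per-user preference draw can be sketched as follows. The function name and the skewness range are illustrative, not the paper's parameters.

```python
import numpy as np

def user_preferences(num_contents, gamma_min=0.4, gamma_max=1.2, rng=None):
    """Draw one user's heterogeneous content-request distribution.

    A random permutation gives the user's own preference order, and a
    skewness drawn uniformly from [gamma_min, gamma_max] shapes a
    Zipf-like law over that order (illustrative assumption).
    """
    rng = np.random.default_rng() if rng is None else rng
    gamma = rng.uniform(gamma_min, gamma_max)      # per-user skewness
    ranks = rng.permutation(num_contents) + 1      # per-user preference order
    weights = ranks.astype(float) ** (-gamma)      # Zipf-like weights
    return weights / weights.sum()                 # request probabilities

p = user_preferences(100)
```

Each call yields a different preference vector, capturing the heterogeneity across users described above.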

II-C Proposed Content Access Protocol

For accessing the contents, the following practical cases are considered.

Case 1 - Local/self cache hit: If a tagged user requests the content that is previously cached, the user can directly access the content from its own storage.

Case 2 - D2D cache hit: If the required content is not stored in its own storage, the tagged user sends the content request to the neighboring D2D nodes. If any of the neighbors has the content, the user can extract the content from that neighboring user.

Case 3 - sBS cache hit: If the tagged user is under the communication range of any sBS, it maintains its communication via the tagged sBS. In this particular case, we have the following sub-cases:

Case 3.1: If the requested content is in the tagged sBS cache, it can access the content directly from there.

Case 3.2: If the content is not stored in the tagged sBS cache but is available in one of the neighboring sBSs, the tagged sBS extracts the content from the neighboring sBS via its FD capability and delivers it to the tagged user.

Case 3.3: If the requested content is not available in any of the sBSs, the tagged sBS forwards the request to the serving MBS. If the content is in the serving MBS, it is delivered to the tagged sBS and then to the user.

Case 3.4: If all of the above sub-cases fail, then the MBS extracts the content from the cloud using its FD capability. The sBS extracts the content from the MBS using its own FD capability and delivers it to the tagged user.

Case 4 - MBS cache hit: If the tagged user is not in the communication range of any of the sBSs, it has to rely on the serving MBS for its communication. In this case, we consider the following sub-cases:

Case 4.1: If the requested content is available in the MBS cache, the content is directly delivered to the tagged user.

Case 4.2: If the content is not available in the MBS storage and the above case fails, the MBS extracts the content from the cloud using its FD capability. Then, the content is directly delivered to the user.

Without loss of generality, Case 3 and Case 4 are denoted by the indicator functions and , respectively. Note that, in Case 3, if the tagged user is in the communication ranges of multiple sBSs, it connects to the one that provides the best received power.

III Edge Caching: Cache Hit Ratio Analysis

In this section, we analyze and calculate the local cache hit probabilities.

III-A Caching Probabilities

We now analyze the cache hit probability at different nodes for the cases mentioned in Section II-C. Note that a cache hit occurs at a node if a requested content is available at that node.

III-A1 Case 1 - Local/self cache hit

The local cache hit probability is denoted as , i.e. the probability of storing the content at the self cache storage of the tagged user.

III-A2 Case 2 - D2D cache hit

The cache hit probability for the D2D nodes can be calculated as follows:

(1)

where means that none of the active neighbors (D2D nodes) in the tagged user's communication range has the content. Thus, its complement is the probability that at least one of the neighboring users stores the content.
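A minimal numerical sketch of this complement rule, assuming the neighbors' per-content caching probabilities are given (the function name and inputs are illustrative):

```python
import numpy as np

def d2d_hit_probability(neighbor_cache_probs):
    """Probability that at least one active D2D neighbor caches the content.

    neighbor_cache_probs: caching probabilities for the requested content
    at each neighbor in the tagged user's communication range.
    """
    probs = np.asarray(neighbor_cache_probs, dtype=float)
    # Complement of "no neighbor has the content"
    return 1.0 - np.prod(1.0 - probs)

# Two neighbors each caching the content with probability 0.5
# -> hit probability 1 - 0.5 * 0.5 = 0.75
print(d2d_hit_probability([0.5, 0.5]))
```

With no neighbors in range, the product is empty and the hit probability correctly reduces to zero.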

III-A3 Case 3 - sBS cache hit

In this case, cache hit probabilities achieved via the tagged sBS for the respective sub-cases are calculated.

Case 3.1: The probability of getting a requested content from the tagged sBS is calculated as follows:

(2)

Case 3.2: The probability of getting a requested content from one of the neighbor sBSs is considered in this sub-case. Essentially, this case states that a cache miss occurs at the tagged sBS. Mathematically, this probability is expressed as

(3)

where is the set of active neighboring sBSs that are in the communication range of the tagged sBS.

Case 3.3: If sub-cases 3.1 and 3.2 fail, the content request is forwarded to the serving MBS via the tagged sBS. The cache hit probability, for this case, is calculated as

(4)

When , i.e. the tagged user is in the communication range of at least one sBS, from the above cases and sub-cases, we calculate the total cache hit probability as

(5)

Case 3.4: Now, if the content is not even stored in the MBS cache storage, it has to be downloaded from the cloud. This case is termed as a cache miss via both sBSs and MBSs. In this case, the MBS initiates its FD mode and downloads the content from the cloud. Therefore, the cache miss probability is calculated from (5) as

(6)

III-A4 Case 4 - MBS cache hit

Recall that Case 4 is only considered when the tagged user is not under the coverage region of any of the sBSs. Firstly, we consider Case 4.1, i.e. the requested content is available in the MBS cache (i.e. and ). In this sub-case, the cache hit probability is expressed as

(7)

Furthermore, the total local cache hit probability in this case is given as

(8)

Note that the cache miss probability of Case 4.2 is derived as

(9)

III-B Cache Hit Ratio

We define the CHR as the fraction of the requests that are served locally without reaching the cloud. Let us denote the portion of requesting users by the set . In a heterogeneous caching placement, the fraction of requests of that are served from the local nodes is as follows:

(10)

where the first term represents the self cache hit and the second term represents the cache hit successfully achieved from the D2D neighbors. Moreover, represents the successful transmission probability for the respective ‘*’ cases. Note that the transmission success probability between two nodes does not depend on the content index. Therefore, we write the success probability as instead of .

III-C Probability of Successful Transmission

Now, we calculate the transmission success probabilities among different nodes. When a tagged user requests a content, interference comes from the other active D2D users, the active sBSs and the MBS. The wireless channel between two nodes follows a Rayleigh fading distribution with . Let us denote the channel between nodes and by . Let us also denote the threshold SINR for successful communication by dB. The transmission powers of the user, the sBS and the MBS are denoted by , and , respectively. Moreover, the path loss exponent is denoted by . Owing to the space constraint, the detailed derivations of these probabilities are omitted; interested readers can find them in our online technical report [12]. Also, note that we do not consider the case of obtaining the content from the cloud when we calculate , since we are interested only in the percentage of requests served from the local nodes.

(11)

IV Cache Hit Ratio Maximization using Particle Swarm Optimization

We present the objective function, followed by the proposed M-PSO algorithm in this section.

IV-A CHR Maximization Objective Function

To this end, we calculate the average cache hit ratio over the requesting nodes, which is denoted by . The detailed derivation of is shown in (11). Our objective is to maximize subject to the storage constraints. Thus, the objective function in the heterogeneous caching model case is expressed as

(12a)
(12b)
(12c)
(12d)
(12e)

where the constraints in (12b)-(12d) ensure the physical storage size limitations of the user, the sBS and the MBS, respectively, while the constraints in (12e) reflect the valid probability range in . The goal is to find the optimal caching placements. In general, the problem is non-convex [16] by nature and may not be solved efficiently in polynomial time due to the nonlinear and combinatorial content placement variables. In the following, a modified particle swarm optimization (M-PSO) framework is proposed to obtain the best set of parameters.

IV-B Modified Particle Swarm Optimization Algorithm

PSO is a swarm intelligence approach that is guaranteed to converge [6]. In this meta-heuristic algorithm, all possible sets of candidate solutions are named particles, which are denoted by . Each particle has a position denoted by . Furthermore, the algorithm maintains a personal best position for each particle, denoted by , and the global best position of the entire swarm, denoted by . The algorithm evolves in an exploration-and-exploitation manner by adding a velocity term to each particle's previous position, aiming to converge to the global optimum. The following two simple equations thus govern the PSO algorithm.

(13)
(14)

where , , and are parameters that need to be selected properly. Moreover, and are two random variables. Note that and are positive acceleration coefficients, also known as the cognitive and social learning factors [16], respectively. While this is the general framework of the PSO algorithm, it may not be used directly for constrained optimization [5]. In our objective function, each particle must have a position matrix, each dimension of which must not violate the restrictions. Therefore, in the following, we modify the PSO algorithm to solve the optimization problem efficiently.
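The two canonical update rules in (13)-(14) can be sketched as follows; the inertia weight `w` and learning factors `c1`, `c2` below are illustrative values, not the paper's tuned parameters.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO velocity/position update.

    w is the inertia weight; c1 and c2 are the cognitive and social
    learning factors; r1 and r2 are fresh uniform random draws.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
    return x + v, v                                            # position update
```

Each particle is pulled toward its own best position (cognitive term) and the swarm's best position (social term), with the inertia weight balancing exploration against exploitation.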

Let be the number of particles. Let denote the caching probabilities of user for all the contents . Similarly, for all the sBSs and MBSs, let and denote their caching placement probabilities for all the contents. Then, all of these parameters can be stacked into a matrix of dimension , which is the exact shape of each particle. Let the current position of each particle be denoted by , and let denote its velocity. Furthermore, the personal best position of particle is , while the global best for the entire swarm is . Therefore, each particle updates its velocity using social and individual cognition parameters, governed by the following equation.

(15)

where , and are the parameters described in (13). Moreover, and are two matrices of size , whose elements are drawn from a random distribution. Finally, represents the Hadamard product.

The position of each particle is then updated by the velocity, similar to (13). However, as the constraints (12b)-(12e) must hold, we need to modify this equation accordingly. Let denote an intermediate updated position of particle , as shown in the following expression.

(16)

Besides, necessary normalization and scaling need to be performed. Note that this intermediate particle position leads to a normalized particle position, which is then used as the current particle position . Moreover, the ultimate goal of each particle is to converge to an optimal position (i.e., the global best ). We summarize all the steps in Alg. 1. Note that the proposed algorithm can be applied to solve similar hard combinatorial problems.
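The normalization-and-scaling step can be sketched as follows, assuming each row of the position matrix holds one node's caching probabilities over the catalog and must satisfy both the [0, 1] range (12e) and the node's storage budget (12b)-(12d). The projection rule below is one plausible implementation, not necessarily the authors' exact scaling.

```python
import numpy as np

def project_position(x, cache_sizes):
    """Map an intermediate particle position back into the feasible set.

    x: (num_nodes, num_contents) matrix of caching probabilities.
    cache_sizes: per-node storage budgets (sum of each row must not
    exceed its budget). Illustrative projection: clip, then scale down.
    """
    x = np.clip(x, 0.0, 1.0)                                  # enforce (12e)
    sums = x.sum(axis=1, keepdims=True)
    budgets = np.asarray(cache_sizes, dtype=float).reshape(-1, 1)
    # Scale down only the rows that exceed their storage budget
    scale = np.where(sums > budgets, budgets / np.maximum(sums, 1e-12), 1.0)
    return x * scale
```

Applying this after every velocity step keeps each particle inside the feasible region while preserving the relative preferences among contents within a row.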

1:for each particle,  do
2:     ,
3:     for each dimension  do
4:          initialize the particles positions,

with uniform random vector of size

by making sure and , ; then set is the cache storage of the node in dimension
5:          initialize particles velocity, with uniform random vector of size by making sure and , ; then set
6:     end for
7:     set particle best position, as the initial position
8:     if  then
9:          
10:     end if
11:end for
12:while the termination criterion has not been met do
13:     for each particle,  do
14:          for each dimension,  do
15:               draw uniform random vectors, and of size
16:               set
17:               set
18:          end for
19:          update particles intermediate position,
20:          , ,
21:          for each dimension  do
22:               
23:               for i in  do
24:                    
25:               end for
26:               ; Normalized particle position
27:               
28:               
29:          end for
30:          if  then
31:               
32:               do necessary scaling following step 27
33:               if  then
34:                    
35:               end if
36:          end if
37:     end for
38:end while
39:return and do necessary scaling following step 28 and return
Algorithm 1 CHR Maximization using M-PSO

V Results and Discussions

The simulation parameters are listed as follows: (per ), (per ), (per ), , and , , , , , , , dBm, dBm, dBm, dB, , and dBm/Hz. Monte Carlo simulation is used for performance evaluation.

To show the effectiveness of the proposed algorithm, we first validate that the obtained results do not violate any of the constraints. The global best obtained from Alg. 1 is therefore scrutinized as follows. Note that it must not violate any of the caching storage constraints of the edge nodes. Besides, each of the caching probabilities must be in the range of . Furthermore, each node must store different copies of the content. All these constraints are considered in the proposed algorithm. Therefore, the obtained results are expected to meet these requirements. The caching probabilities of and for the D2D users, sBSs and MBSs are illustrated in Fig. 1(a). It is readily observed that each node stores different copies. Moreover, the caching probability and storage constraints are also satisfied. Now, we study the performance of our proposed M-PSO algorithm and make a fair comparison with the following benchmark caching schemes.

(a) Obtained caching probabilities at the local nodes when , and
(b) CHR comparison when , , and
(c) Impact of catalog size: CHR with , and
Fig. 1: Performance observation of the proposed M-PSO algorithm
(a) CHR for different user cache storage sizes
(b) CHR for different sBS cache storage sizes
(c) CHR for different MBS cache storage sizes
Fig. 2: Impact of cache size on CHR

Random Caching Scheme: In the random caching scheme, contents are stored randomly while satisfying the constraints.

Equal Caching Scheme: In the equal caching scheme, each content is placed with the same probability.
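A minimal sketch of the two baselines, assuming probabilities are generated to respect a per-node storage budget; the function names and the scaling rule are our own illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def random_caching(num_nodes, num_contents, cache_size, rng=None):
    """Random baseline: draw probabilities, then scale to the budget."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.random((num_nodes, num_contents))
    x = x * (cache_size / x.sum(axis=1, keepdims=True))  # fill the budget
    return np.minimum(1.0, x)                            # keep within [0, 1]

def equal_caching(num_nodes, num_contents, cache_size):
    """Equal baseline: every content cached with the same probability."""
    p = min(1.0, cache_size / num_contents)
    return np.full((num_nodes, num_contents), p)
```

Both baselines satisfy the same storage constraints as the M-PSO placement, so the comparison isolates the effect of the optimized placement itself.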

The proposed algorithm runs for iterations and converges effectively. Fig. 1(b) demonstrates the CHR comparison of the proposed algorithm with the random and equal caching schemes. It can be seen that the M-PSO achieves a higher performance gain over these benchmark caching schemes. In the following, we use our algorithm to evaluate the system performance under different parameter settings.

V-1 Impact of the Catalog Size

Considering the catalog size = , we aim to store as many to-be-requested contents as possible at the local edge nodes. The total number of iterations is chosen as for the catalog sizes in , respectively. If the catalog size increases, the number of possible combinations also increases. Therefore, whenever the content catalog grows, we slightly increase the total number of iterations. Also, if the total number of contents increases while there are only a limited number of cache-enabled nodes, the chance of storing the contents locally decreases, meaning that more content requests need to be served from the cloud. Therefore, the CHR should decrease as the content catalog increases. Moreover, if the percentage of requester nodes increases, the performance should degrade, as we consider the heterogeneous preferences of the users. Fig. 1(c) also demonstrates that if we increase the catalog size or the number of requests (), then decreases.

V-2 Impact of the Storage Size

Recall that if the cache size increases, more contents can be stored at the cache-enabled nodes. Therefore, increasing the cache size of the users means that the users store more contents in their local storage. As these storage sizes increase, the proposed M-PSO algorithm determines the optimal caching placements. The simulation results, presented in Fig. 2, validate that as the storage size increases, more contents are locally stored, leading to an improvement in the CHR. Note that increasing the MBS cache size provides a lower CHR gain than increasing the cache size of the D2D users (or the sBSs). This is because the total number of MBSs is typically much lower than the number of available D2D (or sBS) nodes.


VI Conclusion

Caching helps to achieve better system performance. However, the hard combinatorial decision-making problem of placing the contents at the local nodes is challenging. This grand problem is effectively solved with good accuracy using the artificial intelligence based technique. Considering heterogeneous content preferences in a real-world network platform, the proposed algorithm converges fast and achieves a much better performance than the existing benchmark caching schemes.

References

  • [1] B. Blaszczyszyn and A. Giovanidis (2015-06) Optimal geographic caching in cellular networks. In Proc. ICC, Cited by: §II-B.
  • [2] J. Du, C. Jiang, E. Gelenbe, H. Zhang, Y. Ren, and T. Q. S. Quek (2019-Feb.) Double auction mechanism design for video caching in heterogeneous ultra-dense networks. IEEE Trans. Wireless Commun. 18 (3), pp. 1669–1683. Cited by: §I.
  • [3] N. Golrezaei, P. Mansourifard, A. F. Molisch, and A. G. Dimakis (2014-07) Base-station assisted device-to-device communications for high-throughput wireless video networks. IEEE Trans. Wireless Commun. 13 (7), pp. 3665–3676. External Links: Document, ISSN 1536-1276 Cited by: §I, §II-B.
  • [4] Y. Hao, L. Hu, Y. Qian, and M. Chen (2019-05) Profit maximization for video caching and processing in edge cloud. IEEE J. Sel. Areas Commun. 37 (7), pp. 1632–1641. Cited by: §I.
  • [5] X. Hu, R. C. Eberhart, and Y. Shi (2003) Engineering optimization with particle swarm. In Proc. IEEE Swarm Intell. Symp., Cited by: §IV-B.
  • [6] J. Kennedy (2010) Particle swarm optimization. Encyclopedia of Machine Learning, pp. 760–766. Cited by: §IV-B.
  • [7] M. Lee, H. Feng, and A. F. Molisch (2020-02) Dynamic caching content replacement in base station assisted wireless d2d caching networks. IEEE Access 8 (), pp. 33909–33925. Cited by: §I, §I, §II-B.
  • [8] M. Lee and A. F. Molisch (2018-11) Caching policy and cooperation distance design for base station-assisted wireless d2d caching networks: throughput and energy efficiency optimization and tradeoff. IEEE Trans. Wireless Commun. 17 (11), pp. 7500–7514. External Links: Document, ISSN 1536-1276 Cited by: §I, §II-B.
  • [9] D. Liu, B. Chen, C. Yang, and A. F. Molisch (2016) Caching at the wireless edge: design aspects, challenges, and future directions. IEEE Commun. Mag. 54 (9), pp. 22–28. Cited by: §I.
  • [10] M. F. Pervej and S.-C. Lin (2020) Eco-vehicular edge networks for connected transportation: a decentralized multi-agent reinforcement learning approach. arXiv preprint arXiv:2003.01005. Cited by: §I.
  • [11] A. F. Molisch, G. Caire, D. Ott, J. R. Foerster, D. Bethanabhotla, and M. Ji (2014) Caching eliminates the wireless bottleneck in video aware wireless networks. Advances in Electrical Engineering 2014. Cited by: §I, §II-B.
  • [12] M. F. Pervej, L. T. Tan, and R. Q. Hu (2020) Artificial intelligence assisted collaborative edge caching in modern small cell networks. Technical Report. External Links: Link Cited by: §III-C.
  • [13] M. F. Pervej and S. Lin (2020) Dynamic power allocation and virtual cell formation for Throughput-Optimal vehicular edge networks in highway transportation. arXiv preprint arXiv:2002.10577. Cited by: §I.
  • [14] M. Sheng, C. Xu, J. Liu, J. Song, X. Ma, and J. Li (2016) Enhancement for content delivery with proximity communications in caching enabled wireless networks: architecture and challenges. IEEE Commun. Mag. 54 (8), pp. 70–76. Cited by: §I.
  • [15] V. A. Siris and D. Dimopoulos (2015) Multi-source mobile video streaming with proactive caching and d2d communication. In in proc. WoWMoM, pp. 1–6. Cited by: §I.
  • [16] L. T. Tan, R. Q. Hu, and L. Hanzo (2019) Heterogeneous networks relying on full-duplex relays and mobility-aware probabilistic caching. IEEE Trans. Commun. (), pp. 1–1. External Links: Document Cited by: §I, §II-B, §IV-A, §IV-B.
  • [17] L. T. Tan, R. Q. Hu, and Y. Qian (2018) D2D communications in heterogeneous networks with full-duplex relays and edge caching. IEEE Trans. Ind. Informat. 14 (10), pp. 4557–4567. Cited by: §I, §II-B.
  • [18] X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. C. Leung (2014) Cache in the air: exploiting content caching and delivery techniques for 5g systems. IEEE Commun. Mag. 52 (2), pp. 131–139. Cited by: §I.
  • [19] J. Xu, J. Liu, B. Li, and X. Jia (2004) Caching and prefetching for web content distribution. Computing in science & engineering 6 (4), pp. 54–59. Cited by: §I.