Proactive Video Chunks Caching and Processing for Latency and Cost Minimization in Edge Networks

12/16/2018 ∙ by Emna Baccour, et al.

Recently, the growing demand for rich multimedia content such as Video on Demand (VoD) has made the data transmission from content delivery networks (CDN) to end-users quite challenging. Edge networks have been proposed as an extension to CDN networks to alleviate this excessive data transfer through caching and to delegate computation tasks to edge servers. To maximize the caching efficiency in edge networks, different Mobile Edge Computing (MEC) servers assist each other to select which content to store and which computation tasks to process. In this paper, we adopt a collaborative caching and transcoding model for VoD in MEC networks. However, unlike other models in the literature, different chunks of the same video are not fetched and cached in the same MEC server. Instead, neighboring servers collaborate to store and transcode different video chunks and consequently optimize the usage of their limited resources. Since we are dealing with chunk caching and processing, we propose to maximize the edge efficiency by studying the viewers' watching pattern and designing a probabilistic model in which chunk popularities are evaluated. Based on this model, popularity-aware policies, namely a Proactive caching Policy (PcP) and a Cache replacement Policy (CrP), are introduced to cache only the chunks most likely to be requested. In addition to PcP and CrP, an online algorithm (PCCP) is proposed to schedule the collaborative caching and processing. The evaluation results show that our model and policies give better performance than approaches using conventional replacement policies. This improvement reaches up to 50%.

I Introduction

In the last decade, video content traffic has witnessed explosive growth, especially with the upgrade to next generation mobile networks and the advancement of smart devices. For example, the authors in [1] stated that, in 2016, audio and video streaming represented the largest traffic category. This traffic accounted for 60% of the overall data traffic and is predicted to reach 78% by 2021 [2]. Notably, a large share of this multimedia traffic load is caused by the redundant delivery of the same popular videos.

In order to solve the problem of repeated video transmission and support the continuously growing content delivery, MEC networks have been introduced to complement the cloud and CDN networks. In MEC networks, the base stations (BSs) are equipped with servers having small caching and computing capacities. These resources allow the network to fetch a content from the CDN only once, then cache and serve it in the proximity of viewers without duplicating the transmission. Since the MEC network presents an opportunity to enhance the Quality of Experience (QoE) of mobile users and alleviate the data load over the transit links, it has attracted significant research interest. The authors in [3] proposed CachePro, an approach that uses the storage and computing capabilities of a single MEC server to store videos at the edge base station. However, since one MEC server has limited storage and computation capacity, collaborative and intelligent caching schemes were introduced. In CoCache [4], the authors implemented video sharing to minimize the network cost, while the authors of the JCCP approach [5] used Adaptive Bitrate (ABR) streaming to jointly transcode and share new bitrate versions of videos. In the existing literature, authors propose to bring the whole requested video from the cloud and to store or transcode it in one of the MEC servers.

However, in real-life scenarios, video content is usually partitioned into small chunks of a few seconds each, which can be transferred independently. Also, viewers often watch only small parts of a video before leaving the session, as stated in [6] and [7]. This makes requesting the whole video from the CDN a waste in terms of cost and delivery latency. According to Ooyala's Q4 2013 report [8], the average watch time per play of VoD on mobiles is only 2.8 minutes. Ending a video unexpectedly without fully watching it incurs: (a) a lower cache hit ratio due to caching a number of unwatched video parts, which leads to rapid cache saturation and lower caching efficiency because a long video is stored in the same base station; (b) lower processing efficiency and higher transcoding cost caused by generating videos that will not be fully watched, in addition to rapid resource consumption because a long video is transcoded in one server. In the same context, the authors in [7] showed that the watching time of videos can be modeled since it is impacted by the length, the popularity and the category of a video. Hence, based on these challenges, we propose to derive a model of chunk popularity based on the users' viewing pattern. Then, using this model, a proactive caching policy to pre-load the cache with popular chunks and a reactive cache replacement policy are introduced. We also propose a framework where servers collaborate to cache or transcode chunks of the same video. This framework is implemented as a greedy heuristic.

The main contributions of this paper are summarized as follows:

  • We present our collaborative chunk caching and processing scheme. By allowing one video to be cached across different cooperative MEC servers, resource utilization can be improved.

  • We propose a probabilistic model in which we study the popularity of different chunks of videos based on users' preferences.

  • We introduce our popularity-aware caching policies (PcP and CrP) that use the probabilistic model for chunk caching and eviction.

  • We design the PCCP greedy heuristic to schedule chunk loading and transcoding.

  • We evaluate our model against previously proposed caching approaches (CachePro, CoCache and JCCP).

Our paper is organized as follows: in Section II, we present our proposed collaborative caching and processing system, where we study the viewing pattern of videos and express the popularity of chunks; the PcP and CrP policies are then presented along with the PCCP greedy heuristic. A detailed experimental evaluation is provided in Section III. Finally, in Section IV, we draw the conclusions.

II Proactive caching and processing of video chunks

In this section, we describe our caching system implemented on distributed MEC servers in RAN networks. The description of the architecture is followed by the study of the viewing pattern and the presentation of the probabilistic model. This model is then used to introduce the popularity-aware caching policies (PcP and CrP). Finally, the greedy heuristic implementing the collaborative caching of different chunks is described.

II-A System model

In our system, the network consists of multiple base stations (BSs), each associated with a MEC server providing computation, storage and networking capacities. The area grouping communicating base stations is called a cluster. In this paper, we use the servers for caching and computation and we assume that these servers can share resources. Shared streams can then be transmitted to mobile users when requested. In addition, the transcoder embedded in each server can transcode a shared video to the required bitrate version if needed. Hence, the requested data can be served either from the cache or from the transcoder. Video transcoding is the lossy conversion of a higher bitrate version into a lower one. The architecture of our system is described in Figure 1.

Fig. 1: Illustration of the proposed collaborative chunks caching and processing system: (1) the chunk can be served from the home server; (2) the chunk can be served from the home server after being transcoded; (3) the chunk can be served from a neighbor server; (4) the chunk can be served and transcoded at a neighbor server; (5) the chunk can be served from a neighbor BS and transcoded at the home server; (6)-(7) the chunk can be served from the cloud, either directly or after being transcoded locally.
Category        Coeff. 1   Coeff. 2   Coeff. 3   P-square
People            2.39       0.56      0.0023     0.999
Gaming            1.98       0.45      0.0146     0.999
Entertainment     2.41       0.56     -0.0064     0.999
News              4.70       0.95     -0.298      0.999
Music             2.45       0.51      0.0178     0.999
Sports            4.34       0.92     -0.267      0.999
Film              2.32       0.62      0.0205     0.999
Howto             2.74       0.52      0.0153     0.999
Comedy            2.89       0.65     -0.0250     0.999
Education         2.40       0.54     -0.0104     0.999
Science           2.53       0.53      0.013      0.999
Autos             2.68       0.58      0.0016     0.999
Activism          2.50       0.59     -0.0228     0.999
Pets              3.089      0.69     -0.066      0.999
TABLE I: Fit coefficients of the PDF of the drop positions for different categories.

In our system, a cluster comprises base stations accommodating caching servers, and a video library is shared within the cluster. We assume that every requested video is available in several bitrate versions and can be partitioned into chunks of similar length, the number of chunks depending on the video duration. All chunks of a given bitrate version have the same size, proportional to that bitrate. A chunk in a given bitrate version can be obtained by transcoding the same chunk from any equal or higher bitrate version. We consider that viewers can only request and receive videos from the closest BS, denoted as the home node. Each server is provisioned with a limited cache capacity (in bytes); video content caching is modeled by a binary variable indicating whether a given chunk is stored at a given BS, and the total size of the chunks cached at a BS cannot exceed its cache capacity. Since transcoding videos is a computationally intensive task, and because of the real-time requirements of converting videos to the requested bitrate, we consider separate transcoding instances, each large enough to transcode the highest considered representation; the processing capacity of a caching server is the number of such transcoding instances. The description of the different chunk fetching scenarios presented in Figure 1 is deferred to later subsections.
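A compact formal statement of these constraints is sketched below; the symbol names ($B$ for the set of BSs in the cluster, $L$ bitrate versions, $s_l$ for the size of a chunk at version $l$, $y_{c,l,b}$ for the caching variable, $S_b$ for the cache capacity of BS $b$) are illustrative assumptions, not the paper's original notation.

```latex
% Assumed notation: y_{c,l,b} = 1 iff chunk c is cached at bitrate version l on BS b.
y_{c,l,b} \in \{0,1\},
\qquad
\sum_{c} \sum_{l=1}^{L} s_{l}\, y_{c,l,b} \;\le\; S_b
\quad \forall b \in B .
```

In addition, a chunk at version $l$ can be obtained by transcoding the same chunk from any cached version $l' \ge l$, subject to the number of free transcoding instances at the BS.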

II-B Users' viewing pattern

To study the viewing pattern, we first identify the parameters that impact the preferences of users (video category, length, popularity, etc.). Then, we define our viewing model and identify the chunks most likely to be requested within the collaborative cluster. The authors in [9] studied the characteristics of several types of videos and showed that video popularity follows a Zipf distribution with a skew parameter. Studies in [10] showed that video popularity depends strongly on the video category, that the popularity of the different categories changes from one viewer to another, and that the popularity of a category also depends on the location of the viewer. Other researchers studied the viewing pattern and the watching time of VoD contents, among them [6]. These studies concluded that viewers only watch small parts of a video before leaving the session and that short videos have a higher probability of being fully watched. Other works (e.g. [7]) showed that, in addition to its length, the popularity and the category of a video impact its watching time. These conclusions motivate us to identify the probability that a user requests a specific chunk based on the popularity and the category of the video and on the watching time pattern.

II-B1 Probability of requesting a video

We define a set of video categories and a set of users connected to a BS during the studied period. For each user, we define the probability that this user requests videos from each category. The probability that a requested video at a BS belongs to a given category can then be calculated by summing, over the connected users, the probability that the request comes from a user weighted by that user's preference for the category:

(1)

where the sum runs over the users connected to the BS and each term is weighted by the probability that a given user issues the request. We assume that all viewers have the same probability of issuing a request. Hence, the probability of requesting a video belonging to a category within the BS can be expressed as follows:

(2)
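One possible form of equations (1) and (2), consistent with the description above, is sketched below; the symbols $U_b$ (users connected to BS $b$), $p_{u,c}$ (preference of user $u$ for category $c$) and $P(u)$ (probability that the request comes from user $u$) are assumed names.

```latex
% Assumed symbols: U_b, p_{u,c}, P(u) as defined in the lead-in above.
P_{c,b} = \sum_{u \in U_b} P(u)\, p_{u,c}
\qquad \text{(Eq. 1)} \\
P_{c,b} = \frac{1}{|U_b|} \sum_{u \in U_b} p_{u,c}
\qquad \text{(Eq. 2, with } P(u) = 1/|U_b| \text{)}
```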

Next, we identify the probability of requesting a particular video belonging to a category. First, consider the probability of choosing one video among the videos of a category. This probability can be written as follows:

(3)

where the normalization runs over the total number of videos in the library, each video is weighted by its popularity, and a binary indicator states whether a video belongs to the considered category or not. The probabilities computed at different BSs differ only through the popularity of videos among the users connected to each BS. The probability of requesting a specific video belonging to a category from the whole library can then be expressed as follows:

(4)
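A possible reading of equations (3) and (4), with assumed symbols $pop(v)$ (library popularity of video $v$), $a_{v,c} \in \{0,1\}$ (membership of video $v$ in category $c$) and $N$ (total number of videos):

```latex
% Assumed symbols: pop(v), a_{v,c}, N as defined in the lead-in above.
q_{v \mid c} = \frac{pop(v)\, a_{v,c}}{\sum_{v'=1}^{N} pop(v')\, a_{v',c}}
\qquad \text{(Eq. 3)} \\
P_{v,b} = P_{c,b}\; q_{v \mid c}
\qquad \text{(Eq. 4)}
```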

The next step is to calculate the probability of requesting each chunk of a video. As stated previously, the watching time and the behavior of viewers depend on the length, category and popularity of the requested videos, and this viewing pattern has different distributions depending on the studied data [7]. Hence, without loss of generality, we first analyze and extract the viewers' behavior from a specific dataset [11]; any other dataset could be used to identify the viewing behavior. We also suppose that the studied population has a homogeneous chunk-viewing behavior.

II-B2 Probability of requesting a chunk of video

To study the viewers' behavior and model the video watching pattern, we use a real-life video dataset. Specifically, we use video logs extracted from one of the most popular video providers, YouTube, collected by the authors in [11] and named TweetedVideos. This dataset contains more than 5 million videos published in July and August 2016 from more than one million channels. The dataset also contains several video metadata, including the view count, which gives an idea about the popularity of videos, the watch time and the video duration.

Fig. 2: Dropping behavior modeling.
Fig. 3: Drop position distribution fit.

In this paper, we suppose that users request chunks of videos from the edge servers and that different chunks have the same duration. Hence, we decompose each video into chunks and attribute to each chunk a position, normalized from 0 to 1. We then model the drop position distribution, i.e., the chunk position at which viewers stop watching the video, and the drop probability of each chunk. Figure 3 presents the PDF of the drop positions of videos from the 14 categories of the dataset. We can see that the drop position PDFs follow the same shape, which is similar to a Weibull model but with different coefficients. This confirms that the category of a video impacts the viewing behavior. Figure 3 also shows the quality of our Weibull fit, expressed as follows for a category:

(5)

where the fitted function gives the drop-position distribution of a video belonging to a given category, parameterized by three Weibull coefficients. The Weibull coefficients corresponding to the different categories of the TweetedVideos dataset are presented in Table I, together with the P-square indicator, which measures the goodness of fit. The P-square is close to 1 for all categories, which confirms that the Weibull fit models the distribution of the drop positions well.
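Assuming a three-parameter Weibull density for the drop position $x \in [0,1]$ of a category-$c$ video, equation (5) would take the form below; mapping the three coefficient columns of Table I to the shape, scale and location parameters $(k_c, \lambda_c, \theta_c)$ is an assumption.

```latex
% Assumed form: three-parameter Weibull PDF of the drop position for category c.
f_c(x) = \frac{k_c}{\lambda_c}
\left(\frac{x-\theta_c}{\lambda_c}\right)^{k_c-1}
\exp\!\left[-\left(\frac{x-\theta_c}{\lambda_c}\right)^{k_c}\right],
\qquad x \ge \theta_c,\; k_c > 0,\; \lambda_c > 0
\qquad \text{(Eq. 5)}
```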

Using the Weibull distribution, we can now derive the instantaneous drop probability and the watching probability of a given chunk position. To simplify our model, we suppose that viewers leaving a video have completely watched the chunk at which they dropped; that is, a viewer who drops a video at a chunk must have watched this chunk. Hence, the probability of watching a chunk can be expressed as follows:

(6)

where the first term is the probability of watching the video from position 0 up to the beginning of the considered chunk (i.e., of not leaving the video before that position), and the probability associated with the very first chunk is equal to 1. We now define the popularity of a chunk of a video belonging to a category as the probability of requesting the video and consuming that chunk. From equations (2) and (3), we can estimate this popularity as:

(7)
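One consistent reading of equations (6) and (7), with $t_j$ the assumed normalized start position of chunk $j$ and $F_c$ the cumulative distribution of the drop position for category $c$:

```latex
% Assumed symbols: t_j, F_c as defined in the lead-in above.
W_c(t_j) = 1 - F_c(t_{j-1}) = \int_{t_{j-1}}^{1} f_c(x)\, dx,
\qquad W_c(t_1) = 1
\qquad \text{(Eq. 6)} \\
Pop_{b}(v,j) = P_{c,b}\; q_{v \mid c}\; W_c(t_j)
\qquad \text{(Eq. 7)}
```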

After deriving the expression of chunk popularity, we will define our popularity-aware policies in the next section.

II-C Popularity-aware caching policies

In this section, we present two caching policies based on the chunk popularity studied in the previous subsection: the Proactive caching Policy (PcP) for cache pre-loading and the Cache replacement Policy (CrP) for reactive cache eviction. First, however, we introduce our distributed and synchronized video catalogue, which contains the metadata of each stored video and helps manage the cached data. A catalogue is associated with each cache and is updated reactively for every new event (e.g., access, caching or removal) occurring at the related base station. The updated catalogue is shared in real time with the other nodes so that they can update their own catalogues accordingly. Such collaborative catalogue management reduces the video selection delay by making the video search local and minimizing communication overhead. A snapshot of a synchronized catalogue is illustrated in Figure 4.

Fig. 4: Snapshot of a synchronized catalogue.
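As an illustration of how such a catalogue could be organized, the Python sketch below stores, for each cached chunk, the metadata mentioned in the text (bitrate version, Replica flag, popularity, last access time) and propagates every update to the neighboring nodes; all field and class names are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
import time

@dataclass
class CatalogueEntry:
    """Metadata for one cached chunk (field names are illustrative)."""
    video_id: int
    chunk_id: int
    bitrate_level: int          # cached representation (higher = better quality)
    hosting_bs: int             # BS whose cache holds the chunk
    replica: bool               # True if an equal/higher bitrate copy exists in the cluster
    popularity: float           # chunk popularity, e.g. from Eq. (7)
    last_access: float = field(default_factory=time.time)

class Catalogue:
    """Per-BS catalogue, kept in sync with the neighbors' catalogues."""
    def __init__(self):
        self.entries = {}       # (video_id, chunk_id, bitrate_level) -> CatalogueEntry

    def update(self, entry: CatalogueEntry, neighbors):
        key = (entry.video_id, entry.chunk_id, entry.bitrate_level)
        self.entries[key] = entry
        # Propagate the change so every node keeps a consistent global view.
        for nb in neighbors:
            nb.entries[key] = entry
```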

II-C1 CrP

CrP is a reactive cache replacement policy that presents two new features. The first feature is the minimum replication strategy. In previous edge caching studies, each fetched video is cached locally whether or not it already exists in the cluster, which means that multiple copies of the same video can be cached; even when the cache is full, other potentially needed videos have to be evicted to provide space for the duplicated video. In our system, a second copy of a video is only stored when cache space is available, and this copy is marked in the catalogue as Replica=1. When storage is unavailable, only a single copy of the video is kept in the collaborative cluster. A video chunk is called a Replica if a similar or higher bitrate representation exists in the cluster, because a lower video quality can always be created from a higher one; thus, when the cache is unavailable and a higher representation exists, there is no need to store lower ones. The second feature is popularity-aware caching and eviction. If cache space is available, the incoming chunk is stored even if it is a replica or an unpopular content. If storage is unavailable, all chunks in the catalogue are ranked and the Less Popular Chunk (LPC) is selected. If the requested chunk is a replica and its popularity is lower than that of the LPC, this chunk is not cached. This approach guarantees that the most popular chunks are cached and that the number of stored videos is maximized, thanks to the minimum replication policy and the LPC rule. The detailed CrP policy is presented in Algorithm 1.

1:Input: requested chunk, its bitrate and its popularity; Output: updated cache and catalogue
2:Check in the catalogue whether the chunk is a Replica (an equal or higher bitrate copy already exists in the cluster)
3:if the local cache does not have enough free space then
4:       if the chunk is not a Replica then
5:              * CrP-Removing part:
6:              while there is not enough free space do
7:                     - Prioritize removing unpopular replicas
8:                      over unique unpopular chunks
9:                     - Remove the selected chunk and update the free space
10:      else
11:             if the popularity of the chunk is higher than that of the Less Popular Chunk (LPC) then
12:                    while there is not enough free space do
13:                          - Remove the less popular chunks
14:                          - Update the free space
15:             else
16:                    - Do not cache the chunk
17:else
18:      - There is enough space to cache the chunk
19:if the chunk is to be cached then
20:      - Store it and update the free space
21:      - Add it to the local catalogue
22:      - Update its access time to the current time in the catalogue
23:      - Update the other catalogues
24:else
25:      - Relay the chunk to the viewer without caching
26:* The popularity ranking is computed over the set of popularities of the chunks stored in the BS
Algorithm 1 CrP

II-C2 PcP

We also propose a proactive caching policy that populates the cache with the most popular chunks, i.e., those most likely to be requested. This pre-loading is based on the preferences of the active users connected to the base station, as studied in Section II-B. More specifically, the most popular chunks are stored at the highest bitrate, one by one, until the cache is full. If cache space is still available, the popular chunks are pre-loaded again at a lower representation. Meanwhile, the catalogue is updated with the Replica and popularity status. This task is performed at the initialization of the network and has an initial cost. The proposed PcP is illustrated in Algorithm 2.

1:Input: chunk popularities, cache capacity
2:Output: initial cache content, catalogue, remaining capacity
3:Cache initialization: empty cache
4:Catalogue initialization: empty catalogue
5:level = highest bitrate representation
6:for each BS of the cluster do
7:       while the cache is not full do
8:             - Select the most popular chunk not yet cached at the current level
9:             - Update the catalogue (Replica and popularity status)
10:            - Cache the chunk and update the remaining capacity
11:            if All chunks are cached at the current level then
12:                  level=level-1
Algorithm 2 PcP
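A minimal Python sketch of the PcP pre-loading described above, i.e., a greedy fill with the most popular chunks at the highest bitrate followed by lower representations; the function and variable names, and the single-BS scope, are illustrative assumptions.

```python
def pcp_preload(chunk_popularity, chunk_size, cache_capacity, num_levels):
    """Greedy proactive pre-loading (illustrative sketch of PcP).

    chunk_popularity: dict mapping (video_id, chunk_id) -> popularity (Eq. 7)
    chunk_size(level): size in bytes of a chunk at a given bitrate level
    cache_capacity:   cache size of the BS in bytes
    num_levels:       number of bitrate representations (highest level first)
    """
    cache, catalogue = [], {}
    free = cache_capacity
    ranked = sorted(chunk_popularity, key=chunk_popularity.get, reverse=True)

    for level in range(num_levels, 0, -1):        # start with the highest bitrate
        for chunk in ranked:
            size = chunk_size(level)
            if size > free:
                return cache, catalogue, free     # cache is full
            key = (chunk, level)
            if key in catalogue:
                continue
            cache.append(key)
            free -= size
            # A lower-level copy of an already cached chunk counts as a Replica.
            catalogue[key] = {"replica": level < num_levels,
                              "popularity": chunk_popularity[chunk]}
    return cache, catalogue, free
```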

II-D Proactive Chunks Caching and Processing (PCCP)

To illustrate the different events that can occur when a user connected to a BS requests a video chunk, we introduce our greedy algorithm, named Proactive Chunks Caching and Processing (PCCP) and detailed in Algorithm 3. At the installation of the system, the caches and catalogues are initialized with the most popular video chunks. For each chunk request arriving at a BS, the catalogue is checked to find the chunk with the requested bitrate. If available, the viewer is served from the home node (see Figure 1, scenario 1) and the CrP policy is called to update the access time of the chunk. In addition to the requested chunk, potentially requested chunks of the same video are fetched, so that when the user requests the next part, the video is streamed without stalls. If the requested bitrate is not available at the home node, a higher representation is searched locally (see Figure 1, scenario 2) and the possibility of serving the chunk from a neighboring BS is also studied; the option with the lowest cost is adopted. If the requested bitrate does not exist in the cluster and local transcoding is not possible, a higher representation is searched in the neighbor nodes. Depending on resource availability, the chunk can be transcoded at the neighboring node or locally (see Figure 1, scenarios 3, 4 and 5). If none of these options can be performed, the request is served from the CDN, either by bringing the same or a higher bitrate (see Figure 1, scenarios 6 and 7). The CrP policy is then applied to update the cache and the catalogue: depending on the popularity of the requested chunk, caching and eviction are carried out.

Fig. 5: Performance comparison for varying cache capacity and a processing capacity of 15 Mbps: (a) cache hit ratio, (b) average access delay per request, (c) CDN cost; performance comparison for varying processing capacity and a cache size of 10% of the library: (d) cache hit ratio, (e) average access delay, (f) CDN cost.
1:Initialize the caches and the catalogues
2:(caches, catalogues, free capacities) = PcP(chunk popularities, cache capacities)
3:for each time slot do
4:       for each video request incoming to a BS do
5:             for each requested chunk do
6:                    if the requested bitrate is cached at the home BS then
7:                          -Stream from home BS
8:                          -CrP()
9:                    else
10:                          if a higher representation is cached at the home BS and a local transcoding instance is free then
11:                                 if the requested bitrate is cached at a neighboring BS then
12:                                       if local transcoding costs less than fetching from the neighbor then
13:                                             -Transcode and Stream from
14:                                             home BS
15:                                             -CrP()
16:                                       else
17:                                             -Fetch from neighboring BS
18:                                             -CrP()
19:                                 else
20:                                       -Transcode and Stream from home BS
21:                                       -CrP()
22:                          else
23:                                 if the requested bitrate is cached at a neighboring BS then
24:                                       -Fetch from neighboring BS
25:                                       -CrP()
26:                                 else
27:                                       if a higher representation is cached at a neighboring BS
28: and a transcoding instance is free at the neighbor or at the home BS then
29:                                             if transcoding at the home BS is the cheaper option then
30:                                                    -Fetch from the neighbor and transcode at the home BS
31:                                                    -CrP() at the neighbor
32:                                                    -CrP() at the home BS
33:                                             else
34:                                                    -Transcode and Fetch from the neighboring node
35:                                                    -CrP()
36:                                       else
37:                                             if a home transcoding instance is free then
38:                                                    -Fetch a higher representation from CDN and
39:                                                    transcode at home BS
40:                                                    -CrP() for the fetched representation
41:                                                    -CrP() for the transcoded one
42:                                             else
43:                                                    -Fetch from CDN
44:                                                    -CrP()
Algorithm 3 PCCP
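At its core, Algorithm 3 compares the cost of the feasible serving options of Figure 1 and picks the cheapest one. A simplified Python sketch of that decision is given below; the helper methods (has, has_higher, transcoder_free) and the cost values are assumptions for illustration.

```python
def serve_chunk(chunk, bitrate, home, neighbors, costs):
    """Pick the lowest-cost feasible way to serve a chunk (illustrative sketch).

    `home` and `neighbors` are assumed server objects exposing has(), has_higher()
    and transcoder_free(); `costs` maps each serving option to its cost, e.g.
    {"home": 0.1, "home_transcode": 0.2, "neighbor": 0.5,
     "neighbor_transcode": 0.6, "cdn": 5.0}.
    """
    options = []
    if home.has(chunk, bitrate):                                    # scenario 1
        options.append(("home", costs["home"]))
    if home.has_higher(chunk, bitrate) and home.transcoder_free():  # scenario 2
        options.append(("home_transcode", costs["home_transcode"]))
    for nb in neighbors:
        if nb.has(chunk, bitrate):                                  # scenario 3
            options.append((f"neighbor:{nb.name}", costs["neighbor"]))
        if nb.has_higher(chunk, bitrate) and (nb.transcoder_free()
                                              or home.transcoder_free()):
            # scenarios 4 and 5: transcode at the neighbor or at the home BS
            options.append((f"neighbor_transcode:{nb.name}",
                            costs["neighbor_transcode"]))
    options.append(("cdn", costs["cdn"]))   # scenarios 6 and 7: fall back to the CDN
    return min(options, key=lambda option: option[1])
```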

III Performance evaluation

III-A Simulation Settings

In this section, we evaluate the performance of our system under different network configurations, including the storage capacity, the processing capacity and the popularity of videos. In our simulation, the network consists of 3 neighboring BSs attached to 3 MEC servers. The video library consists of 1000 different videos chosen randomly from the dataset in [11] and belonging to the 14 categories described in Table I. All videos are divided into chunks of 30 seconds each. We selected only videos with a duration lower than 1500 s (50 chunks) to limit the size of the chunk set. Each video has 4 different representations. As configured in [5], the bitrate variants of the representations are set to 0.45, 0.55, 0.67 and 0.82 of the original video bitrate. In this paper, we consider that all videos have the same original bitrate, which is 2 Mbps. The popularity (number of views) of the chosen videos follows a Zipf distribution with a skew parameter of 0.5. The parameters of the user arrivals and requests, along with the cluster parameters, are summarized in Table II.

Variable                          Distribution / parameter values
Number of MEC servers             const, 3
Total number of video requests    const, 10,000
Total number of videos            const, 1000, randomly chosen from [11]
Video popularity                  Zipf, skew = 0.5
Number of video categories        const, 14 (see Table I)
Category preference               random
Video sizes                       up to 1500 s (50 chunks)
Video bitrate                     Uniform, 4 versions, from 200 kbps to 2 Mbps
Watching time                     Exponential, mean watch time from [11]
Chunk size                        30 s
Number of viewers                 const, 500
Activity session size             Exponential, mean 300 s
Video request arrival             Poisson, rate 5, inter-arrival time = 30 s
Max cache size                    Library size
Popularity threshold              0.001
TABLE II: Simulation parameters.
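For reference, the settings of Table II can be collected into a single simulation configuration; the Python dictionary below simply mirrors the table, with key names chosen for illustration.

```python
SIMULATION_CONFIG = {
    "num_mec_servers": 3,
    "num_requests": 10_000,
    "num_videos": 1000,              # randomly chosen from the TweetedVideos dataset [11]
    "zipf_skew": 0.5,                # video popularity distribution
    "num_categories": 14,            # see Table I
    "max_video_duration_s": 1500,    # i.e. at most 50 chunks
    "chunk_duration_s": 30,
    "num_bitrate_versions": 4,       # 0.45, 0.55, 0.67, 0.82 of the original bitrate
    "original_bitrate_mbps": 2.0,
    "num_viewers": 500,
    "mean_session_s": 300,           # exponential activity session length
    "request_arrival": {"process": "poisson", "inter_arrival_s": 30},
    "popularity_threshold": 0.001,
}

# Bitrate (Mbps) of each representation, as configured in [5].
BITRATES = [round(f * SIMULATION_CONFIG["original_bitrate_mbps"], 2)
            for f in (0.45, 0.55, 0.67, 0.82)]   # [0.9, 1.1, 1.34, 1.64]
```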

The latency of getting a video chunk in the various scenarios follows a uniform distribution in the range of (a) [5-10] ms from the remote cloud servers, (b) [1-2.5] ms when fetching the different versions from the neighboring BSs, and (c) [0.25-0.5] ms from the home server. Several metrics are evaluated to assess the performance of our system: (a) cache hit ratio: the fraction of requests that can be fetched or transcoded within the edge network (home or neighbor servers); (b) average access delay: the average latency to receive videos from the different caches or from the CDN; (c) CDN cost: the cost of fetching the video chunks from the CDN, calculated as $0.03 per GB. Our system is compared to recently proposed systems, namely CachePro [3], CoCache [4] and JCCP [5].
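The three metrics can be computed from a simulated request log as in the following sketch, assuming each logged request records where it was served from, its delay and the bytes pulled from the CDN (field names are illustrative).

```python
def evaluate(requests):
    """Compute the three reported metrics from a simulated request log.

    Each request is assumed to carry: served_from ('home', 'neighbor' or 'cdn'),
    delay_ms, and bytes_from_cdn (0 when served inside the edge network).
    """
    edge_hits = sum(r["served_from"] in ("home", "neighbor") for r in requests)
    hit_ratio = edge_hits / len(requests)
    avg_delay_ms = sum(r["delay_ms"] for r in requests) / len(requests)
    cdn_cost = 0.03 * sum(r["bytes_from_cdn"] for r in requests) / 1e9  # $0.03 per GB
    return hit_ratio, avg_delay_ms, cdn_cost
```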

III-B Simulation Results

III-B1 Impact of caching and processing resources

The cache size and the processing capacity are important parameters when testing the efficiency of a caching system. Figure 5 shows the performance achieved by the different caching systems on the described cluster for different cache and processing sizes, in terms of cache hit ratio, access delay and cost. It can be seen that our PCCP heuristic, with its CrP and PcP policies, performs significantly better than the other caching systems even for low cache sizes or processing capability. For example, when the cache size is equal to 10% of the library size, PCCP achieves a hit ratio of 0.79 without prefetching of the next chunks and 0.97 with prefetching, compared to JCCP, CoCache and CachePro, which achieve cache hit ratios of 0.56, 0.46 and 0.51 respectively. Our system thus performs more than 20% better than the other systems even without proactively fetching the next chunks. This can be explained by: (a) the proactive caching policy (PcP), which stores the most popular chunks and improves the probability of finding the videos inside the cluster; (b) the collaboration between BSs to store different chunks of the same video, which makes caching a video possible even when a cache is full; (c) avoiding the storage of whole videos, since viewers rarely watch the content to the end, which provides more space to cache a larger number of chunks; (d) the reactive caching policy (CrP), which stores only highly popular chunks, removes only less popular chunks and avoids the replication of videos to maximize the number of cached videos. With prefetching, the hit ratio becomes very high because the next potentially requested chunks are fetched and served beforehand; such prediction of the incoming requests contributes to increasing the hit ratio. The system with a prediction window incurs more cost and data fetching than the system without prediction, because of the additional cost of chunks that are served but not watched by users abandoning the video. This waste can be accepted since the loaded content is only a small chunk with a minimal cost.

III-B2 Impact of video popularity

Varying the popularity of videos has an important impact on our system since it is based on the viewers' preferences.

Fig. 6: Performance comparison for different Zipf parameters with a cache size of 10% of the library size: (a) cache hit ratio, (b) CDN cost.

Figure 6 presents the impact of changing the Zipf parameter on the cache hit ratio and the CDN cost. A large Zipf parameter (highly skewed popularity) represents a library where a small number of videos concentrate most of the requests: the library contains a few videos with very high popularity and many videos with very low popularity. In our simulation, we created another video library from the same dataset [11] with a larger Zipf parameter, which increases the difference in popularity between videos; videos with very high popularity are then requested frequently, whereas unpopular videos are rarely or never requested. Figures 6(a) and 6(b) show that a more skewed library gives better results in terms of cache hit ratio. This is explained by the fact that only a subset of videos is frequently requested, and these videos are stored in the cluster thanks to the PcP policy. Also, even if the cache is small (10% of the library size), it is large enough to store the highly popular chunks, which enhances the cache hit ratio.

IV Conclusion

In this paper, we propose that different MEC servers collaborate to cache and transcode different chunks of the same video content. In order to maximize the edge caching efficiency, we studied the video viewing pattern and proposed the PcP and CrP content placement policies, which decide the placement of video chunks in the BS caches based on a probabilistic model. Then, to schedule the sharing of videos between MEC servers, a greedy algorithm (PCCP) is proposed. Extensive simulations show that PCCP outperforms other caching approaches in terms of cost, cache hit ratio and access delay.

Acknowledgment

This publication was made possible by NPRP grant 8-519-1-108 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the author(s).

References

  • [1] (2016) Global internet phenomena report. [Online]. Available: https://www.sandvine.com/trends/global-internet-phenomena/
  • [2] “Cisco visual networking index: Global mobile data traffic forecast update, 2016–2021,” in Cisco white paper, 2017.
  • [3] H. Ahlehagh and S. Dey, “Video-aware scheduling and caching in the radio access network,” IEEE/ACM Transactions on Networking, vol. 22, no. 5, pp. 1444–1462, Oct 2014.
  • [4] J. Dai, F. Liu, B. Li, B. Li, and J. Liu, “Collaborative caching in wireless video streaming through resource auctions,” IEEE Journal on Selected Areas in Communications, vol. 30, no. 2, pp. 458–466, 2012.
  • [5] T. X. Tran, P. Pandey, A. Hajisami, and D. Pompili, “Collaborative multi-bitrate video caching and processing in mobile-edge computing networks,” CoRR, 2016.
  • [6] H. Yu, D. Zheng, B. Y. Zhao, and W. Zheng, “Understanding user behavior in large-scale video-on-demand systems,” SIGOPS Oper. Syst. Rev., vol. 40, no. 4, pp. 333–344, 2006.
  • [7] Y. Chen, B. Zhang, Y. Liu, and W. Zhu, “Measurement and modeling of video watching time in a large-scale internet video-on-demand system,” IEEE Transactions on Multimedia, vol. 15, no. 8, pp. 2087–2098, 2013.
  • [8] (2013) Ooyala’s q4 2013 report. [Online]. Available: http://www.ooyala.com/resources/industry-reports
  • [9] M. Cha, H. Kwak, P. Rodriguez, Y. Y. Ahn, and S. Moon, “Analyzing the video popularity characteristics of large-scale user generated content systems,” IEEE/ACM Transactions on Networking, vol. 17, no. 5, pp. 1357–1370, 2009.
  • [10] Z. Li, J. Lin, M.-I. Akodjenou, G. Xie, M. A. Kaafar, Y. Jin, and G. Peng, “Watching videos from everywhere: A study of the pptv mobile vod system,” in Proceedings of the 2012 Internet Measurement Conference, ser. IMC ’12.   ACM, 2012, pp. 185–198.
  • [11] S. Wu, M. Rizoiu, and L. Xie, “Beyond views: Measuring and predicting engagement on youtube videos,” CoRR, vol. abs/1709.02541, 2017.