Meta-Reinforcement Learning via Buffering Graph Signatures for Live Video Streaming Events

In this study, we present a meta-learning model to adapt the predictions of the network's capacity between viewers who participate in a live video streaming event. We propose the MELANIE model, where an event is formulated as a Markov Decision Process, performing meta-learning on reinforcement learning tasks. By considering a new event as a task, we design an actor-critic learning scheme to compute the optimal policy on estimating the viewers' high-bandwidth connections. To ensure fast adaptation to new connections or changes among viewers during an event, we implement a prioritized replay memory buffer based on the Kullback-Leibler divergence of the reward/throughput of the viewers' connections. Moreover, we adopt a model-agnostic meta-learning framework to generate a global model from past events. As viewers scarcely participate in several events, the challenge resides in how to account for the low structural similarity of different events. To combat this issue, we design a graph signature buffer to calculate the structural similarities of several streaming events and adjust the training of the global model accordingly. We evaluate the proposed model on the link weight prediction task on three real-world datasets of live video streaming events. Our experiments demonstrate the effectiveness of our proposed model, with an average relative gain of 25% over state-of-the-art strategies. For reproduction purposes, our evaluation datasets and implementation are publicly available at https://github.com/stefanosantaris/melanie


I Introduction

Live video streaming services have been widely adopted by large enterprises to enable communication among thousands of employees who are located at different offices across the world. Accounting for the high impact of video on employee engagement and productivity, Fortune-500 companies have converted more than of their physical events to live video streaming events [10]. However, network inefficiency and bandwidth congestion are experienced when several employees attend a high-quality live video streaming event at the same time. This negatively impacts the performance of the event, with more than of the viewers not attending the overall event [5]. To address this problem, large enterprises exploit distributed solutions provided by live video streaming companies, e.g., Hive Streaming AB (https://www.hivestreaming.com/). Such solutions leverage the office’s internal high-bandwidth network to efficiently distribute the live video stream between viewers [3, 23]. For example, as shown in Figure 1(a), Viewers and directly connect to the Content Delivery Network (CDN) through a low-bandwidth connection to download the live video stream of the presenter. Viewers and exploit the internal high-bandwidth network to distribute the live video stream to Viewers and , respectively. Then, Viewers 2 and 4 distribute the video stream to the remaining viewers at the same office, for example to Viewers and , accordingly.

Fig. 1: (a) A distributed live video streaming process in enterprise offices with limited network capacity. (b) Selection process of each viewer’s connections via the tracker.

To immediately select the internal high-bandwidth connections, distributed video streaming solutions require prior knowledge of the company’s network topology, for instance, that Viewers and are in the same Office . Such information is not always feasible to acquire, since large enterprises regularly adapt their network [3]. Therefore, it is essential to predict the network capacity of each connection in real-time during the live video streaming event based on the observed viewers’ interactions. To infer the high-bandwidth connections, each viewer periodically reports various data, such as throughput, interactions, and so on, to a centralized server, namely the tracker. The tracker exploits the reported data to predict the network capacity between any two viewers of the event. Then, the tracker selects the viewers’ connections with the highest predicted throughput and adapts the connections accordingly. For example, as shown in Figure 1(b), the tracker probes Viewer to adjust the low-bandwidth connection ( Mbps) with Viewer and connect to Viewer with a high-bandwidth connection ( Mbps). The problem of selecting the proper connections among viewers becomes even more challenging at the beginning of the event, when only a limited number of connections has been “explored”. To avoid network congestion, it is necessary to predict the best possible connection almost immediately. Nonetheless, this might only be possible by relying on knowledge from similar events that happened in the past on the same or similar networks.

The viewers’ interactions during a video streaming event can be modeled as a temporal interaction network. The edge/interaction weight of the network corresponds to the measured throughput of the connection between two nodes/viewers. Graph representation learning has been proven to be an efficient strategy to learn low-dimensional node embeddings that capture the evolution of the network [19, 15, 20, 22, 24]. Baseline approaches exploit Recurrent Neural Networks (RNNs) [15, 22], self-attention mechanisms [24] and Long Short-Term Memory (LSTM) units [19] to capture the network’s evolution. Although evolving graph representation learning strategies can capture the temporal evolution of the graph, such strategies do not necessarily work on streaming events. At the beginning of a live video streaming event a viewer may have more low-bandwidth interactions than high-bandwidth connections [2]. Existing strategies learn to accurately predict the low-weight edges/interactions and require a large number of interactions to estimate the high weights of the connections among viewers [15, 3]. The challenge here is how to accurately predict the high weights of the edges/interactions, given only a few interactions per viewer. Recently, graph representation learning approaches have been proposed to predict the high weights of edges/interactions in a live video streaming event [3, 1]. However, these approaches exploit the viewers’ interactions in a single event, ignoring the knowledge of past live video streaming events.

Extracting knowledge from past streaming events is a challenging task, as a significantly low number of viewers attends multiple live video streaming events, as demonstrated in Section I of our supplementary [27]. In addition, each event has different network characteristics such as the edge/interaction weight distribution, the viewers’ emerging patterns, and so on. Therefore, each live video streaming event is considered as a new task, to which the prediction model needs to adapt. Meta-learning has achieved remarkable performance on computing generic models, able to adjust to new tasks when only a limited amount of information is available [4, 9, 18]. The goal of meta-learning is to derive global knowledge from previous experiences, so as to rapidly adapt to a new task with only a small amount of training data. Meta-learning approaches have successfully been employed on various machine learning tasks, such as image classification [13, 32, 31] and recommender systems [18, 21, 29]. However, meta-learning on graphs, where each task corresponds to a new graph, has received little attention. Meta-Graph adopts the Model-Agnostic Meta-Learning framework [9] and graph signature functions [6] to perform link prediction on new graphs with limited edges [4]. Nonetheless, Meta-Graph works on static graphs and does not reflect the dynamic case of live video streaming events. Recently, GELS employed gradient boosting to extract information from past streaming events to improve user experience [2]. However, GELS considers an equal contribution of each event when learning the global model. Provided that each event exhibits different characteristics, it is essential to compute the similarity of different events and generate an accurate global model.

In this paper, we propose a Meta-rEinforcement LeArNing model via buffering graph sIgnaturEs, namely MELANIE, making the following contributions: i) We implement a task adaptation component to update the global MELANIE model for a new streaming event. We adopt the Actor-Critic reinforcement learning scheme, where the agent/tracker learns an optimal policy by incorporating the viewers’ interactions. To quickly adapt to a new event, we design a replay memory buffer and prioritize the stored interactions based on the viewers’ throughput divergence. Hence, the agent/tracker is trained on diverse experiences and converges to the optimal policy after a few interactions; ii) We propose a meta-learning component to generate the global MELANIE model by combining the policy learned by the task adaptation component on the current live video streaming event with the policies of previously trained events. We formulate two objective functions to update the actor and critic parameters, respectively, based on the structural similarities of different streaming events; iii) We design a graph signature memory buffer in the meta-learning component to compute the structural similarity of the networks of different events. In doing so, we enforce similar representations for viewers with high structural similarities across different events. The graph signature memory buffer allows the meta-learning component to generate the global MELANIE model by generalizing over several events with a few common viewers.

Our experiments on three real-world datasets, generated by live video streaming events, demonstrate the superiority of MELANIE over several baseline approaches. The remainder of the paper is organized as follows: in Section II we formally define the problem of meta-reinforcement learning in live video streaming events, and we detail the proposed MELANIE model in Section III. Our experimental evaluation is presented in Section IV, and we conclude the study in Section V.

Fig. 2: Overview of the MELANIE model, given a new task, that is a new temporal interaction network . MELANIE consists of (i) the task-adaptation component on the support set and (ii) the meta-learning component on the query set .

II Preliminaries

A distributed live video streaming event is represented as a temporal interaction network, which is defined as follows [15, 19, 20].

Temporal Interaction Network. A temporal interaction network is defined as a graph , with nodes/viewers and an ordered sequence of node/viewer interactions/connections . An interaction/connection occurs between two nodes/viewers and at time , . Each interaction/connection has a weight , which corresponds to the throughput measurement of node/viewer , reported to the tracker. The set contains the network’s weights at time .
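To make this definition concrete, the following minimal Python sketch shows one way such a temporal interaction network could be stored in memory; the class and field names are illustrative choices and are not part of our released implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Interaction:
    source: int        # node/viewer that reports the measurement
    target: int        # node/viewer on the other end of the connection
    timestamp: float   # time t at which the interaction occurs
    throughput: float  # edge weight: measured throughput of the connection

@dataclass
class TemporalInteractionNetwork:
    num_viewers: int
    interactions: List[Interaction] = field(default_factory=list)

    def add(self, source, target, timestamp, throughput):
        # Interactions are kept as an ordered sequence, as in the definition above.
        self.interactions.append(Interaction(source, target, timestamp, throughput))

    def weights_at(self, t):
        # Edge weights observed up to (and including) time t.
        return [e.throughput for e in self.interactions if e.timestamp <= t]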

The nodes/viewers periodically communicate with the tracker to retrieve the interactions/connections established during a live video streaming event (Section I). Therefore, the tracker is responsible for determining the viewers’ edges/connections. We model the selection of the node/viewer connections/interactions during a live video streaming event as a Markov Decision Process (MDP).

Live Video Streaming MDP. An MDP is defined by the state, action, transition probability and reward sets, together with the discount factor. At each time step, the tracker/agent takes an action to connect/interact a node/viewer with another node, based on the state of the node/viewer. The tracker receives the measured throughput of the interaction as a reward, and the node/viewer’s state is updated according to the corresponding transition probability. The objective of the tracker/agent is to find an optimal policy, parameterized by the policy weights, that maximizes the expected cumulative reward from any state-action pair, that is, the expectation under the policy of the discounted sum of the rewards received at each time step, given the action taken at the corresponding state.
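In standard notation (the symbols below are illustrative and not necessarily the paper's own), this objective is the usual discounted-return criterion:

J(\pi_\theta) = \mathbb{E}_{\pi_\theta}\Big[ \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1} \;\Big|\; s_t = s,\; a_t = a \Big],

where \pi_\theta is the policy with parameters \theta, \gamma \in [0, 1) is the discount factor, and r_{t+k+1} is the reward (measured throughput) received k+1 steps after taking action a_t in state s_t.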

Finding the optimal policy of the agent/tracker in a live video streaming event typically requires a large number of node/viewer interactions to achieve efficient performance. Our study is inspired by the gradient-based meta-learning framework in reinforcement learning (meta-RL), which optimizes the global policy of the agent over several tasks, so as to rapidly adapt to a new task [11, 9]. Extracting knowledge from live video streaming events is defined as the following meta-RL process:

Meta-RL in Live Video Streaming Events. In meta-RL for live video streaming events, we consider a distribution of tasks , where each task corresponds to a live video streaming event. We denote each task by , where and are the MDP and the temporal interaction network of the live video streaming event, respectively. During meta-training, for each task we divide the node/viewer interactions into a support set and a query set , with . The agent/tracker adapts the policy to learn the task based on the loss on the support set . Then, the meta-learner exploits the query set to optimize the global policy across all tasks . The goal of meta-RL is to learn a global policy that maximizes the expected cumulative reward over the task distribution.
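The display below sketches this meta-objective in standard MAML-RL notation; the symbols are our own assumptions based on the surrounding description, not the paper's exact formulation:

\max_{\theta}\; \sum_{\mathcal{T}_i \sim \rho(\mathcal{T})} \mathbb{E}_{\pi_{\theta_i'}}\Big[ \sum_{t} \gamma^{t} r_t \Big],
\qquad
\theta_i' = \theta - \alpha\, \nabla_{\theta}\, \mathcal{L}^{\mathrm{support}}_{\mathcal{T}_i}(\pi_\theta),

where \theta are the global policy parameters, \theta_i' the task-adapted parameters computed from the support set of task \mathcal{T}_i, \alpha the inner-loop step size, and the outer expectation is evaluated on the query set of each task.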

III Proposed Model

III-A Overview of MELANIE

As illustrated in Figure 2, the proposed MELANIE model consists of two main components: the task adaptation and the meta-learning component. The goal of the proposed MELANIE model is to learn a generic model that achieves fast adaptation to new MDPs, that is new live video streaming events.

Task Adaptation The role of this component is to adapt the global policy of the tracker to a new task . The input of the task adaptation component is the support set of the sampled task . We adopt an Actor-Critic reinforcement learning scheme to model the interactions between viewers and the tracker. The policy of the tracker is trained based on the state-action transitions stored in a replay memory buffer. Following [25], we prioritize the state-action transitions in the replay memory buffer based on the KL-divergence of the viewers’ reward distribution between consecutive time steps. Provided that most viewer interactions may have low reward at the beginning of the live video streaming event, we train the policy on dissimilar experiences.

Meta-learning The role of the meta-learning component is to evaluate the policy , learned by the task adaptation component, on the query set of the sampled task . To measure the similarity of the sampled task against the previously trained tasks, we introduce a signature buffer during the meta-learning process. The signature buffer contains a signature of each task , which represents the structural information of the task’s network. According to the gradient-based Model-Agnostic Meta-Learning approach [9], we update the global policy to generate the global MELANIE model over all the computed signatures of the tasks.

III-B Task Adaptation on the Support Set

The input of the task adaptation component is the support set of the sampled task . Given the viewers’ interactions of the support set , we aim to learn the optimal policy of the specific task , by adopting a deep RL framework based on the Actor-Critic learning scheme. Therefore, the task adaptation component consists of three main parts: i) the Actor network, ii) the Critic network, and iii) the replay memory buffer.

Actor Network At each time step , the actor network takes as input an interaction and then generates an action for the node/viewer of the task . We represent each node/viewer with a -dimensional node/viewer embedding based on the node/viewer features [17]. If the nodes/viewers do not have any node features, as in the case of live video streaming events, we compute an embedding lookup. Given an interaction , we compute a -dimensional state representation of the viewer , as follows [30, 6]:

(1)

where is the attention function of the node/viewer with its neighborhood , parameterized by . The parameter weight matrix is the shared weight transformation of the node/viewer embedding to a -dimensional vector, and is the exponential linear unit (ELU) activation function. The attention coefficient measures the similarity of the node/viewer to the node/viewer as follows:

(2)

where is the connection weight/throughput between nodes/viewers and at the time step , is the concatenation of the node/viewer representations and , and is the LeakyReLU non-linear activation function. Provided that the temporal interaction network of the sampled task evolves over time, we capture this evolution in the state representation by parameterizing the attention function with a -dimensional graph signature , which is defined as follows:

(3)

where is the aggregation of the node representations to a -dimensional signature vector , parameterized by . Intuitively, the role of the graph signature is to capture the structural and node/viewer features similarities of the temporal interaction network over time, and bias the node/viewer feature aggregation towards nodes/viewers with similar interactions. By computing the graph signature , we determine the structural similarity of viewers, and promote similar viewers in the attention process [6].
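The displays below give a plausible GAT-style form for Equations (1)-(3) [30]; the exact symbols and the form of the signature modulation are our assumptions drawn from the surrounding description, not a verbatim reconstruction:

s_u^t = \mathrm{ELU}\Big( \sum_{v \in \mathcal{N}(u)} \alpha_{uv}^{t}\, W\, x_v \Big),

\alpha_{uv}^{t} = \frac{\exp\big( w_{uv}^{t}\, \mathrm{LeakyReLU}\big( a_{g^t}^{\top} [\, W x_u \,\|\, W x_v \,] \big) \big)}{\sum_{k \in \mathcal{N}(u)} \exp\big( w_{uk}^{t}\, \mathrm{LeakyReLU}\big( a_{g^t}^{\top} [\, W x_u \,\|\, W x_k \,] \big) \big)},

g^{t} = \mathrm{AGG}_{\psi}\big( \{ x_v : v \in \mathcal{V} \} \big),

where x_u is the embedding of node/viewer u, W the shared weight matrix, w_{uv}^{t} the measured throughput of the connection, a_{g^t} the attention vector parameterized by the graph signature g^{t}, and \mathrm{AGG}_{\psi} an aggregation function with parameters \psi.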

The state captures the preferences of the viewer at the -th time step. An interaction with high connection throughput corresponds to a high attention coefficient , which reflects the preferences of the viewer in the state representation . The actor network transforms the state representation of the viewer to an -dimensional action vector. In our implementation, we employ a two-layer perceptron (MLP) on the state representation as follows:

(4)

We normalize the action vector based on the softmax function and select the node/viewer with the highest value. We employ the ε-greedy exploration technique to learn an accurate policy [28].
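As a rough illustration (the function names are hypothetical and not taken from our released implementation), this action-selection step could look as follows:

import numpy as np

def softmax(x):
    # Numerically stable softmax over the raw action scores.
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def select_action(action_scores, epsilon=0.1, rng=None):
    """Pick a peer/viewer to connect to from the actor's raw action vector.
    With probability epsilon a random viewer is explored; otherwise the viewer
    with the highest softmax-normalized score is selected (greedy choice)."""
    rng = rng if rng is not None else np.random.default_rng()
    probs = softmax(np.asarray(action_scores, dtype=float))
    if rng.random() < epsilon:
        return int(rng.integers(len(probs)))  # exploration
    return int(np.argmax(probs))              # exploitation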

Critic Network The input of the Critic network is the node/viewer state and the action generated by the policy . The critic network outputs a scalar, which is an approximation of the true state-action value function , that is the following Q-value function:

(5)

where denotes the concatenation of the state representation and the action representation . Here is the Q-value approximation function parameterized by , which consists of the weights and biases of the MLP. The Q-value function represents the benefit of the action generated by the Actor network, given the node/viewer state and the policy . Following the deterministic policy gradient theorem [26], we update the parameters of the Actor based on the sampled policy gradient as [28]:

(6)

where indicates the merit of the action , given the state , compared with a random action, and is defined as the step size. The term is the number of interactions exploited to train the actor network. The weight parameters of the Critic network are updated accordingly via the temporal-difference learning approach, that is, by minimizing the following mean squared error:

(7)
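For reference, the updates of Equations (6) and (7) follow the standard deterministic-policy-gradient and temporal-difference forms shown below [26, 28]; the notation is our own generic form, and the paper's exact expressions may additionally include the advantage-style term described above:

\nabla_{\theta} J(\pi_\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_{a} Q_{\phi}(s_i, a)\big|_{a = \pi_\theta(s_i)}\; \nabla_{\theta}\, \pi_\theta(s_i),

L(\phi) = \frac{1}{N} \sum_{i=1}^{N} \Big( r_i + \gamma\, Q_{\phi}\big(s_{i+1}, \pi_\theta(s_{i+1})\big) - Q_{\phi}(s_i, a_i) \Big)^{2},

where N is the number of sampled transitions, \theta and \phi the actor and critic parameters, and \gamma the discount factor.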

Replay Memory Buffer To optimize the Actor and the Critic networks, we employ a replay memory buffer , which contains the latest state-action transitions. The agent/tracker adapts the nodes/viewers connections based on their reward until the viewers' weight/throughput distribution converges. Therefore, we store the latest state-action transitions of the nodes/viewers in the replay memory buffer . The replay memory buffer allows the agent/tracker to learn from earlier memories and break undesirable temporal correlations, that is, similar state-action transitions in a short period of time. Instead of exploiting all the state-action transitions to train the agent, we sample a subset of the stored transitions. The most popular sampling strategy is uniform sampling, where each transition in the replay memory buffer is selected with equal probability [8, 25]. However, provided that in our case the connections at the beginning of a live video streaming event may be among viewers with low network capacity, the majority of the stored state-action transitions in the replay memory buffer have low reward values. Hence, uniformly sampling the state-action transitions to train the Actor and Critic networks prevents the agent/tracker from learning from new experiences. To overcome this problem, we prioritize the state-action transitions based on the Kullback-Leibler (KL) divergence values of the weight/throughput distribution of each viewer between consecutive time steps. In particular, to measure the difference between the weight/throughput distributions and of two consecutive time steps, the Kullback-Leibler (KL) divergence [14] is defined as follows:

where is the equally partitioned probability space of the interactions' weight/throughput. A high KL-divergence value corresponds to significant changes in the weight/throughput distributions between consecutive live video streaming minutes, whereas a low value indicates that viewers converge to their optimal connections. We retrieve the top- state-action transitions from the buffer and train the tracker/agent based on Equations 6 and 7.
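A minimal sketch of this prioritization step, assuming throughput values are binned into an equally partitioned probability space as described above (all names, the bin settings, and the transition layout are our own illustrative choices):

import numpy as np

def throughput_distribution(throughputs, bins=10, value_range=(0.0, 100.0)):
    """Histogram of a viewer's reported throughputs over one streaming minute,
    normalized into a probability distribution (smoothed to avoid zero bins)."""
    hist, _ = np.histogram(throughputs, bins=bins, range=value_range)
    hist = hist.astype(float) + 1e-8
    return hist / hist.sum()

def kl_divergence(p, q):
    # KL divergence between two discrete distributions over the same bins.
    return float(np.sum(p * np.log(p / q)))

def prioritize_transitions(buffer, prev_dists, curr_dists, k=64):
    """Rank stored (state, action, reward, next_state, viewer) transitions by the
    KL divergence of that viewer's throughput distribution between two consecutive
    time steps, and return the top-k most divergent transitions for training."""
    scored = []
    for transition in buffer:
        viewer = transition[-1]
        score = kl_divergence(curr_dists[viewer], prev_dists[viewer])
        scored.append((score, transition))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [t for _, t in scored[:k]]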

III-C Meta-Learning on the Query Set

The goal of the meta-learning component is to learn the global weight parameters and , so that the actor and critic networks can quickly adjust to a novel task . The input of the meta-learning component is the query set of the sampled task and the weight parameters and calculated by the task adaptation component. According to the gradient-based model-agnostic meta-learning approach [9], we update the weight parameters and on the query set as follows:

(8)

where is the learning rate. We formulate the loss functions and as follows:

(9)

where the loss functions and measure the accuracy of the actor and the critic networks on the query set based on Equations 6 and 7, respectively. The term is the KL divergence among different graph signatures. The intuition behind this divergence is to provide a model with parameters and that generalizes over all the previously trained graph signatures .

Graph Signature Buffer The graph signature buffer contains the signatures of each task that has been trained in past events. Given the graph signature buffer, we compute the similarity between the newly sampled task and the previously trained tasks. During the update of the parameters and , we minimize the average divergence between the signature of task and all the stored graph signatures . Provided that different live video streaming events have a low number of common viewers, we capture the similarities between the events and promote similar viewers that attend more than one event. In doing so, as we will show in the experimental evaluation, the meta-learning component can generalize over several divergent live video streaming events and quickly adjust to new events within a limited number of viewers’ interactions. Further details about the proposed MELANIE algorithm can be found in [27].
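As an illustration only (the variable names and the softmax normalization are assumptions, not the exact formulation used in the loss above), the signature-similarity term added to the meta-update could be computed roughly as follows:

import numpy as np

def to_distribution(signature):
    # Softmax-normalize a d-dimensional graph signature so KL divergence is defined.
    z = signature - np.max(signature)
    e = np.exp(z)
    return e / e.sum()

def signature_divergence(new_signature, signature_buffer):
    """Average KL divergence between the current task's signature and the signatures
    of previously trained events stored in the buffer; this value acts as a
    regularizer that biases the global model towards structurally similar events."""
    p = to_distribution(np.asarray(new_signature, dtype=float))
    divergences = []
    for stored in signature_buffer:
        q = to_distribution(np.asarray(stored, dtype=float))
        divergences.append(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))
    return float(np.mean(divergences)) if divergences else 0.0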

IV Experimental Evaluation

Datasets                   LiveStream-1   LiveStream-2   LiveStream-3
#Events                    30             30             30
#Offices                   62             12             56
#Enterprises               1              1              15
#Viewers (K)
#Connections (M)
Avg. #Events Per Viewer
TABLE I: Summary statistics of the three datasets.

IV-A Setup

Datasets In our experiments, we evaluate our proposed MELANIE model on three real-world datasets [2]. In Table I, we summarize the statistics of each dataset and we perform an extensive analysis in Section I of our supplementary [27].

Support/Train and Query/Test Sets In our experiments, we evaluate our proposed MELANIE model on the link weight prediction task on the three examined datasets, each with 30 temporal interaction networks (events). For each dataset, we consider live video streaming events as the training set, for validation and for testing. Following [18], we divide the interactions of each live video streaming event into a support set and a query set according to the interaction time, with a ratio of : . The task of link weight prediction is to predict the weight of each interaction that will occur in the query set of the test set.
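A minimal sketch of this time-ordered split, reusing the Interaction records from the earlier sketch (the default ratio below is an assumed placeholder, not the ratio used in our experiments):

def split_support_query(interactions, support_ratio=0.8):
    """Split one event's interactions into support/query sets by interaction time.
    The support_ratio default is an illustrative placeholder."""
    ordered = sorted(interactions, key=lambda e: e.timestamp)
    cut = int(len(ordered) * support_ratio)
    return ordered[:cut], ordered[cut:]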

                LiveStream-1              LiveStream-2              LiveStream-3
Baselines       RMSE         MAE          RMSE         MAE          RMSE         MAE
Jodie
EvolveGCN
DySAT
VStreamDRLS
PolicyGNN
MetaHIN
Meta-Graph
GELS
MELANIE-T
MELANIE-B
MELANIE-M
MELANIE         0.187±0.042  0.175±0.026  0.196±0.013  0.135±0.052  0.223±0.015  0.164±0.011
TABLE II: Methods’ comparison in terms of RMSE and MAE. The reported values are averaged over all the interactions in the query set of the test set. Bold values indicate the best method.

Evaluation Metrics We evaluate the performance of our proposed model in terms of Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), which are defined as follows:

(10)

where is the reward/throughput of the interaction . Note that RMSE emphasizes large errors more than the MAE metric. Therefore, the RMSE metric indicates whether the actions taken by the agent/tracker significantly deviate from the received reward. In addition, to measure how well our proposed model can adapt to new live video streaming events, we report the average reward received by the viewers for the actions taken at each -th minute, based on the interactions in the query set, as follows:

(11)
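In standard form (the indexing symbols below are our own notation), these metrics can be written as:

\mathrm{RMSE} = \sqrt{ \frac{1}{|Q|} \sum_{e \in Q} \big( r_e - \hat{r}_e \big)^{2} },
\qquad
\mathrm{MAE} = \frac{1}{|Q|} \sum_{e \in Q} \big| r_e - \hat{r}_e \big|,

\bar{r}_t = \frac{1}{|Q_t|} \sum_{e \in Q_t} r_e,

where Q is the query set of the test set, r_e and \hat{r}_e are the measured and predicted reward/throughput of interaction e, and Q_t is the subset of query-set interactions that occur in the t-th streaming minute.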

IV-B Compared Methods

We evaluate the performance of the proposed MELANIE model against several graph representation learning strategies: i) Jodie (https://github.com/srijankr/jodie) [15] employs recurrent neural networks (RNNs) to update the node embeddings; ii) EvolveGCN (https://github.com/IBM/EvolveGCN) [22] exploits RNNs between consecutive graph convolutional networks; iii) DySAT (https://github.com/aravindsankar28/DySAT) [24] adopts a self-attention mechanism; and iv) VStreamDRLS (https://github.com/stefanosantaris/vstreamdrls) [3] employs self-attention between consecutive graph convolutional networks. Moreover, we evaluate MELANIE against the following meta-learning strategies: i) PolicyGNN (https://github.com/lhenry15/Policy-GNN) [16] employs deep RL to adapt the aggregation level of convolutional networks; ii) MetaHIN (https://github.com/rootlu/MetaHIN) [18] captures different semantic facets of each node in a global model; iii) Meta-Graph (https://github.com/joeybose/Meta-Graph) [4] employs model-agnostic meta-learning on static networks; and iv) GELS (https://github.com/stefanosantaris/GELS) [2] adopts gradient boosting on live video streaming events.

To examine the different components of our model, we compare the proposed MELANIE model with the following variants: i) MELANIE-T employs the deep RL scheme described in Section III on a single live video streaming event, without considering the meta-learning component. Moreover, MELANIE-T adopts uniform sampling in the replay memory buffer , instead of the prioritized sampling based on the KL-divergence. ii) MELANIE-B incorporates the replay memory buffer prioritized based on the KL-divergence. Similar to the MELANIE-T model, MELANIE-B ignores the meta-learning component. iii) MELANIE-M uses the meta-learning component over multiple events. However, MELANIE-M generates a global model without considering the graph signatures stored in the graph signature buffer . For reproduction purposes, the source code of the proposed MELANIE model and its variants is publicly available (https://github.com/stefanosantaris/melanie).

Environment and Parameter Settings We tuned the hyper-parameters of each model based on the validation set and a grid-selection strategy. We repeated our experiments five times, and the results were averaged over the five trials. In Section II of our supplementary [27], we detail the experimental environment and we discuss the hyper-parameter settings. Moreover, we evaluate the influence of different configurations of the replay memory and graph signature buffers on the performance of the MELANIE model.

IV-C Performance Evaluation

In Table II, we evaluate the performance of the examined models in terms of RMSE and MAE. The proposed MELANIE model constantly outperforms the baseline approaches in all datasets. This suggests that MELANIE can efficiently learn a policy that accurately represents the temporal interaction network. Compared with the second best method MetaHIN, the proposed MELANIE model achieves relative drops of and in terms of RMSE and MAE in LiveStream-1, and in LiveStream-2, and and in LiveStream-3. Note that the MetaHIN model outperforms the other baselines, as it employs a co-adaptation meta-learner component to capture the facets of new nodes that appear in the temporal interaction network and have a small number of interactions, as happens in the case of live video streaming events. Instead, the other baseline approaches do not handle well the case of new nodes with few interactions in the temporal interaction network. Although MetaHIN works efficiently on a single temporal interaction network, it ignores the auxiliary information of other networks, thus having limited prediction accuracy. MELANIE overcomes this issue by performing meta-learning to generate a global model not only from a single event, but also from several past events. Moreover, we observe that MELANIE constantly outperforms the GELS baseline. Given that GELS considers an equal contribution of each event during learning, it cannot derive the similarity between events, which negatively impacts the model. In addition, on inspection of Table II, we observe that MELANIE outperforms all its variants. Based on the performance of the variants, we can measure the impact of the prioritized replay memory and signature buffers, as well as the meta-learning process, on the link prediction accuracy of MELANIE. We find that MELANIE-B outperforms MELANIE-T when the prioritized replay memory buffer based on the KL-divergence is employed in the learning process, demonstrating the importance of training MELANIE on divergent experiences/interactions. Moreover, MELANIE-M achieves superior performance over the variants MELANIE-B and MELANIE-T, by incorporating the information of past events via the meta-learning component. Finally, MELANIE beats the MELANIE-M variant, showing that the computation of the graph signatures plays a crucial role when generalizing over multiple events.

IV-D Comparison of Meta-Learning Strategies

In Figure 3, we present the performance of the examined meta-learning strategies in terms of RMSE during the evolution of the streaming events. We observe that MELANIE achieves a low RMSE value in the first minutes of LiveStream-1 and LiveStream-2. This occurs because each viewer in LiveStream-1 participates in more than events, as described in Section I of our supplementary [27]. Similarly, the LiveStream-2 events occurred at a low number of offices. Therefore, the temporal interaction networks in the LiveStream-1 and LiveStream-2 datasets share structural similarities, which allows the MELANIE model to accurately learn a global policy that exhibits fast adaptation to new events in the first minutes. However, in LiveStream-3 the MELANIE model requires more streaming minutes (interactions) to adapt to the new event than in the other two datasets. This happens because the LiveStream-3 events occurred on different enterprise networks. This means that the temporal interaction networks in LiveStream-3 have limited structural similarities. Nevertheless, MELANIE still outperforms the baselines in the LiveStream-3 dataset, reflecting the ability of our model to adapt to dissimilar new events better than the baseline strategies. Moreover, we observe that Meta-Graph underperforms in all datasets, as Meta-Graph employs meta-learning on static graphs, ignoring the evolution of the temporal interaction networks. This indicates that capturing the evolution of the network over time during the meta-learning process has a significant impact on the link prediction accuracy of the examined models.

Fig. 3: Performance evaluation of the meta-learning approaches in terms of RMSE.

IV-E Average Reward Evaluation

In Figure 4, we demonstrate the ability of MELANIE to achieve fast adaptation to new events based on the average reward (Equation 11). The average reward is received by the agent/tracker for the connections that the viewers have established at each streaming minute of an event. Therefore, we consider only the approaches that employ deep RL. Note that PolicyGNN, MELANIE and its variants are the only examined strategies that adopt deep RL. However, we omit PolicyGNN from this set of experiments, as the actions in PolicyGNN correspond to the number of convolutional layers applied on each node that participates in the network, rather than actions that select the high-bandwidth connections between viewers. We observe that MELANIE constantly achieves a higher reward than its variants within a low number of interactions (streaming minutes). This indicates the effectiveness of the meta-learning process in learning a global policy that can efficiently adjust to a new event in the first streaming minutes. We also observe that the average rewards of MELANIE-T and MELANIE-M converge at a lower pace than those of MELANIE. This means that in MELANIE-T and MELANIE-M the agent/tracker is biased towards the low-bandwidth connections, requiring a significant amount of interactions to optimize the policy. This occurs because MELANIE-T and MELANIE-M employ uniform sampling in the replay memory buffer, instead of prioritizing the state-action transitions based on the KL-divergence, thus they are not necessarily trained on divergent interactions.

Fig. 4: Average rewards (Equation 11) of MELANIE and its variants.

V Conclusions

In this study we presented the MELANIE model, a meta-RL strategy where each task corresponds to a live video streaming event on large enterprise networks. We modeled each event as an MDP and then applied meta-RL to compute a global policy. To exhibit fast adaptation to a new event, we trained our model on divergent interactions by prioritizing the stored state-action transitions in the replay memory buffer based on the KL-divergence of the viewers’ throughputs between consecutive streaming minutes. Moreover, we introduced a graph signature buffer in the meta-learning process to measure the structural similarity among different events, allowing MELANIE to learn an accurate global model that generalizes over different events with few common viewers. Our experiments showed that the proposed MELANIE model achieves high link weight prediction accuracy, with average relative drops of and in terms of RMSE and MAE against the second best strategy.

Distributing a high-quality live video stream in an enterprise network is an intensive operation, as network inefficiencies in several offices negatively impact the performance of the live video streaming event. As a consequence, a low number of viewers might attend the event, resulting in limited viewer engagement [7]. Therefore, distributed live video streaming solutions require a fast and accurate selection of the high-bandwidth connections between viewers. Our model can support large enterprises in exploiting the structural information from different events and efficiently distributing video content of higher quality, avoiding network problems. An interesting future direction is to incorporate the video quality of experience perceived by each viewer into the learned policy of MELANIE, so as to maximize the user engagement [12].

References

  • [1] S. Antaris, D. Rafailidis, and S. Girdzijauskas (2020) EGAD: Evolving graph representation learning with self-attention and knowledge distillation for live video streaming events. In IEEE BigData.
  • [2] S. Antaris, D. Rafailidis, and S. Girdzijauskas (2021) A deep graph reinforcement learning model for improving user experience in live video streaming.
  • [3] S. Antaris and D. Rafailidis (2020) VStreamDRLS: Dynamic graph representation learning with self-attention for enterprise distributed video streaming solutions. In ASONAM.
  • [4] A. J. Bose, A. Jain, P. Molino, and W. L. Hamilton (2020) Meta-Graph: Few shot link prediction via meta learning. arXiv:1912.09867.
  • [5] (2017) Bringing down the walls. https://www.akamai.com/us/en/multimedia/documents/ebooks/broadcast-over-the-internet-book-one-bringing-down-the-walls-ebook.pdf [Online; accessed 29-January-2021].
  • [6] M. Brockschmidt (2020) GNN-FiLM: Graph neural networks with feature-wise linear modulation. In ICML, pp. 1144–1152.
  • [7] F. Dobrian, V. Sekar, A. Awan, I. Stoica, D. Joseph, A. Ganjam, J. Zhan, and H. Zhang (2011) Understanding the impact of video quality on user engagement. In SIGCOMM, pp. 362–373.
  • [8] W. Fedus, P. Ramachandran, R. Agarwal, Y. Bengio, H. Larochelle, M. Rowland, and W. Dabney (2020) Revisiting fundamentals of experience replay. In ICML, pp. 3061–3071.
  • [9] C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, pp. 1126–1135.
  • [10] (2020) Gauging demand for enterprise streaming – 2020 – investment trends in times of global change. https://www.ibm.com/downloads/cas/DEAKXQ5P [Online; accessed 29-January-2021].
  • [11] S. Hochreiter, A. S. Younger, and P. R. Conwell (2001) Learning to learn using gradient descent. In ICANN, pp. 87–94.
  • [12] T. Huang, R. Zhang, C. Zhou, and L. Sun (2018) QARC: Video quality aware rate control for real-time video streaming based on deep reinforcement learning. In MM, pp. 1208–1216.
  • [13] S. Khodadadeh, L. Boloni, and M. Shah (2019) Unsupervised meta-learning for few-shot image classification. In NeurIPS, pp. 10132–10142.
  • [14] S. Kullback and R. A. Leibler (1951) On information and sufficiency. Ann. Math. Statist. 22, pp. 79–86.
  • [15] S. Kumar, X. Zhang, and J. Leskovec (2019) Predicting dynamic embedding trajectory in temporal interaction networks. In KDD, pp. 1269–1278.
  • [16] K. Lai, D. Zha, K. Zhou, and X. Hu (2020) Policy-GNN: Aggregation optimization for graph neural networks. In KDD, pp. 461–471.
  • [17] H. Lee, J. Im, S. Jang, H. Cho, and S. Chung (2019) MeLU: Meta-learned user preference estimator for cold-start recommendation. In SIGKDD, pp. 1073–1082.
  • [18] Y. Lu, Y. Fang, and C. Shi (2020) Meta-learning on heterogeneous information networks for cold-start recommendation. In SIGKDD, pp. 1563–1573.
  • [19] Y. Ma, Z. Guo, Z. Ren, J. Tang, and D. Yin (2020) Streaming graph neural networks. In SIGIR, pp. 719–728.
  • [20] G. H. Nguyen, J. B. Lee, R. A. Rossi, N. K. Ahmed, E. Koh, and S. Kim (2018) Continuous-time dynamic network embeddings. In WWW, pp. 969–976.
  • [21] F. Pan, S. Li, X. Ao, P. Tang, and Q. He (2019) Warm up cold-start advertisements: Improving CTR predictions via learning to learn ID embeddings. In SIGIR, pp. 695–704.
  • [22] A. Pareja, G. Domeniconi, J. Chen, T. Ma, T. Suzumura, H. Kanezashi, T. Kaler, T. B. Schardl, and C. E. Leiserson (2020) EvolveGCN: Evolving graph convolutional networks for dynamic graphs. In AAAI, pp. 5363–5370.
  • [23] R. Roverso, R. Reale, S. El-Ansary, and S. Haridi (2015) SmoothCache 2.0: CDN-quality adaptive HTTP live streaming on peer-to-peer overlays. In MMSys, pp. 61–72.
  • [24] A. Sankar, Y. Wu, L. Gou, W. Zhang, and H. Yang (2020) DySAT: Deep neural representation learning on dynamic graphs via self-attention networks. In WSDM, pp. 519–527.
  • [25] T. Schaul, J. Quan, I. Antonoglou, and D. Silver (2016) Prioritized experience replay. In ICLR.
  • [26] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller (2014) Deterministic policy gradient algorithms. In ICML, pp. 387–395.
  • [27] (2021) Supplementary material. https://github.com/stefanosantaris/melanie/blob/main/supplementary/supplementary.pdf [Online; accessed 14-July-2021].
  • [28] R. S. Sutton and A. G. Barto (2018) Reinforcement learning: An introduction. A Bradford Book. ISBN 0262039249.
  • [29] M. Vartak, A. Thiagarajan, C. Miranda, J. Bratman, and H. Larochelle (2017) A meta-learning perspective on cold-start recommendations for items. In NeurIPS, pp. 6904–6914.
  • [30] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio (2018) Graph attention networks. In ICLR.
  • [31] X.-S. Wei, P. Wang, L. Liu, C. Shen, and J. Wu (2019) Piecewise classifier mappings: Learning fine-grained learners for novel categories with few examples. IEEE Transactions on Image Processing 28 (12), pp. 6116–6125.
  • [32] Y. Zhu, C. Liu, and S. Jiang (2020) Multi-attention meta learning for few-shot fine-grained image recognition. In IJCAI, pp. 1090–1096.