EGAD: Evolving Graph Representation Learning with Self-Attention and Knowledge Distillation for Live Video Streaming Events

11/11/2020
by Stefanos Antaris, et al.

In this study, we present a dynamic graph representation learning model on weighted graphs to accurately predict the network capacity of the connections between viewers in a live video streaming event. We propose EGAD, a neural network architecture that captures the graph evolution by introducing a self-attention mechanism on the weights between consecutive graph convolutional networks. In addition, we account for the fact that neural architectures require a large number of parameters to train, which increases the online inference latency and degrades the user experience in a live video streaming event. To address the high online inference cost incurred by a vast number of parameters, we propose a knowledge distillation strategy. In particular, we design a distillation loss function that first pretrains a teacher model on offline data and then transfers the knowledge from the teacher to a smaller student model with fewer parameters. We evaluate our proposed model on the link prediction task on three real-world datasets generated by live video streaming events. Each event lasted 80 minutes, and each viewer used the distribution solution provided by the company Hive Streaming AB. The experiments demonstrate the effectiveness of the proposed model in terms of link prediction accuracy and number of required parameters when evaluated against state-of-the-art approaches. In addition, we study the distillation performance of the proposed model in terms of compression ratio for different distillation strategies, showing that the proposed model can achieve a compression ratio of up to 15:100 while preserving high link prediction accuracy. For reproduction purposes, our evaluation datasets and implementation are publicly available at https://stefanosantaris.github.io/EGAD.
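To illustrate the teacher-student distillation step described above, the following is a minimal sketch in PyTorch. The function name distillation_loss, the temperature and alpha hyperparameters, and the choice of a softened-target term blended with a binary cross-entropy link loss are illustrative assumptions, not the exact formulation used in EGAD.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend the student's supervised link-prediction loss with a
    soft-target term that matches the pretrained teacher's predictions."""
    # Supervised term: binary cross-entropy on the ground-truth links.
    hard_loss = F.binary_cross_entropy_with_logits(student_logits, labels)

    # Soft-target term: the student mimics the teacher's softened scores.
    # The teacher was pretrained on offline data and is kept frozen here.
    soft_student = torch.sigmoid(student_logits / temperature)
    soft_teacher = torch.sigmoid(teacher_logits / temperature).detach()
    soft_loss = F.mse_loss(soft_student, soft_teacher)

    # alpha balances fitting the data against mimicking the teacher.
    return alpha * hard_loss + (1.0 - alpha) * soft_loss


# Illustrative usage with random logits for a batch of candidate links.
if __name__ == "__main__":
    student_logits = torch.randn(32, requires_grad=True)
    teacher_logits = torch.randn(32)              # from the frozen teacher
    labels = torch.randint(0, 2, (32,)).float()   # link exists / does not
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(loss.item())
```

Only the student, which has substantially fewer parameters than the teacher (down to the reported 15:100 compression ratio), would be served during the live event, keeping online inference latency low.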

Related research

Distill2Vec: Dynamic Graph Representation Learning with Knowledge Distillation (11/11/2020)
Dynamic graph representation learning strategies are based on different ...

VStreamDRLS: Dynamic Graph Representation Learning with Self-Attention for Enterprise Distributed Video Streaming Solutions (11/11/2020)
Live video streaming has become a mainstay as a standard communication s...

Multi-Task Learning for User Engagement and Adoption in Live Video Streaming Events (06/18/2021)
Nowadays, live video streaming events have become a mainstay in viewer's...

A Deep Graph Reinforcement Learning Model for Improving User Experience in Live Video Streaming (07/28/2021)
In this paper we present a deep graph reinforcement learning model to pr...

Meta-Reinforcement Learning via Buffering Graph Signatures for Live Video Streaming Events (10/03/2021)
In this study, we present a meta-learning model to adapt the predictions...

Streaming egocentric action anticipation: An evaluation scheme and approach (06/29/2023)
Egocentric action anticipation aims to predict the future actions the ca...

Technical Report of Team GraphMIRAcles in the WikiKG90M-LSC Track of OGB-LSC @ KDD Cup 2021 (07/12/2021)
Link prediction in large-scale knowledge graphs has gained increasing at...
