Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation

12/07/2021
by   Amirhossein Dadashzadeh, et al.

Despite the outstanding success of self-supervised pretraining methods for video representation learning, they generalise poorly when the unlabelled pretraining dataset is small or when the domain gap between the unlabelled data in the source task (pretraining) and the labelled data in the target task (finetuning) is significant. To mitigate these issues, we propose a novel approach to complement self-supervised pretraining with an auxiliary pretraining phase based on knowledge similarity distillation, auxSKD, for better generalisation with a significantly smaller amount of video data, e.g. Kinetics-100 rather than Kinetics-400. Our method deploys a teacher network that iteratively distils its knowledge to the student model by capturing the similarity information between segments of unlabelled video data. The student model then solves a pretext task by exploiting this prior knowledge. We also introduce a novel pretext task, Video Segment Pace Prediction (VSPP), which requires our model to predict the playback speed of a randomly selected segment of the input video, yielding more reliable self-supervised representations. Our experiments show that, when pretraining on Kinetics-100, our method outperforms the state of the art on both UCF101 and HMDB51. Additionally, we show that our auxiliary pretraining, auxSKD, when added as an extra pretraining phase to recent state-of-the-art self-supervised methods (e.g. VideoPace and RSPNet), improves their results on UCF101 and HMDB51. Our code will be released soon.
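The similarity-distillation idea described above — a teacher network providing pairwise-similarity targets over a batch of unlabelled video segment embeddings, which the student learns to match — can be sketched as follows. This is an illustrative NumPy sketch only, not the paper's implementation; the temperature `tau`, the softmax-over-cosine-similarities formulation, and the KL objective are common choices in similarity-based distillation and are assumptions here.

```python
import numpy as np

def similarity_distillation_loss(teacher_emb, student_emb, tau=0.1):
    """KL divergence between teacher and student pairwise-similarity
    distributions over a batch of segment embeddings.

    teacher_emb, student_emb: (batch, dim) arrays of segment features.
    Illustrative sketch; the paper's exact formulation may differ.
    """
    def sim_dist(emb):
        # L2-normalise, then softmax over scaled pairwise cosine similarities.
        emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = emb @ emb.T / tau
        np.fill_diagonal(sims, -np.inf)  # exclude trivial self-similarity
        exp = np.exp(sims - sims.max(axis=1, keepdims=True))
        return exp / exp.sum(axis=1, keepdims=True)

    p = sim_dist(teacher_emb)  # teacher distribution (distillation target)
    q = sim_dist(student_emb)  # student distribution
    eps = 1e-12
    mask = p > 0  # skip the zeroed diagonal entries
    kl = np.sum(p[mask] * np.log((p[mask] + eps) / (q[mask] + eps)))
    return float(kl / len(p))  # mean KL per batch element
```

Minimising this loss pushes the student's view of which unlabelled segments are similar toward the teacher's, which is the prior knowledge the student then exploits when solving the pretext task.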

