VideoBERT: A Joint Model for Video and Language Representation Learning

04/03/2019
by Chen Sun, et al.

Self-supervised learning has become increasingly important to leverage the abundance of unlabeled data available on platforms like YouTube. Whereas most existing approaches learn low-level representations, we propose a joint visual-linguistic model to learn high-level features without any explicit supervision. In particular, inspired by its recent success in language modeling, we build upon the BERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens, derived from vector quantization of video data and off-the-shelf speech recognition outputs, respectively. We use this model in a number of tasks, including action classification and video captioning. We show that it can be applied directly to open-vocabulary classification, and confirm that large amounts of training data and cross-modal information are critical to performance. Furthermore, we outperform the state-of-the-art on video captioning, and quantitative results verify that the model learns high-level semantic features.
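
The pipeline the abstract describes, two token streams fed to one BERT-style model, can be illustrated with a short sketch. The code below is a rough assumption of that pipeline, not the authors' implementation: plain k-means stands in for the paper's vector quantization of video features, ASR output is represented as plain string tokens, and the feature size, cluster count, and the "vis_*", "[CLS]", and "[SEP]" names are illustrative placeholders.

```python
# Minimal sketch (not the authors' released code) of the two token streams
# VideoBERT's abstract describes: visual tokens from vector-quantizing
# clip-level video features, linguistic tokens from ASR text, concatenated
# into a single sequence for a BERT-style masked-token objective.
# All names and sizes below are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans


def build_visual_vocab(clip_features: np.ndarray, num_clusters: int = 1024) -> KMeans:
    """Quantize clip-level visual features into a discrete 'visual word' vocabulary."""
    return KMeans(n_clusters=num_clusters, n_init=10).fit(clip_features)


def video_to_tokens(clip_features: np.ndarray, vocab: KMeans) -> list:
    """Map each video clip to its nearest visual word, e.g. 'vis_17'."""
    return [f"vis_{c}" for c in vocab.predict(clip_features)]


def joint_sequence(asr_tokens: list, visual_tokens: list) -> list:
    """Concatenate ASR text tokens and visual tokens into one BERT-style
    input sequence that a masked-token model can then be trained on."""
    return ["[CLS]", *asr_tokens, "[SEP]", *visual_tokens, "[SEP]"]


if __name__ == "__main__":
    # Random features stand in for real per-clip video embeddings.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(5000, 128)).astype(np.float32)
    vocab = build_visual_vocab(features, num_clusters=64)
    vis = video_to_tokens(features[:4], vocab)
    print(joint_sequence(["orange", "chicken", "with", "[MASK]", "sauce"], vis))
```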


Related research

11/14/2020  ActBERT: Learning Global-Local Video-Text Representations
In this paper, we introduce ActBERT for self-supervised learning of join...

11/09/2018  Cross and Learn: Cross-Modal Self-Supervision
In this paper we present a self-supervised method for representation lea...

08/04/2021  Enhancing Self-supervised Video Representation Learning via Multi-level Feature Optimization
The crux of self-supervised video representation learning is to build ge...

08/08/2022  Boosting Video-Text Retrieval with Explicit High-Level Semantics
Video-text retrieval (VTR) is an attractive yet challenging task for mul...

07/08/2018  Video Captioning with Boundary-aware Hierarchical Language Decoding and Joint Video Prediction
The explosion of video data on the internet requires effective and effic...

06/13/2019  Contrastive Bidirectional Transformer for Temporal Representation Learning
This paper aims at learning representations for long sequences of contin...

06/28/2020  Video Representations of Goals Emerge from Watching Failure
We introduce a video representation learning framework that models the l...
