Learning Spatiotemporal Features via Video and Text Pair Discrimination

01/16/2020
by Tianhao Li, et al.

Current video representations rely heavily on learning from manually annotated video datasets. However, acquiring a large-scale, well-labeled video dataset is expensive and time-consuming. We observe that videos are naturally accompanied by abundant text information, such as YouTube titles and movie scripts. In this paper, we leverage this visual-textual connection to learn effective spatiotemporal features in an efficient weakly-supervised manner. We present a general cross-modal pair discrimination (CPD) framework to capture the correlation between a clip and its associated text, and adopt the noise-contrastive estimation technique to tackle the computational issue posed by the huge number of pair-instance classes. Specifically, we investigate the CPD framework with two sources of video-text pairs, and design a practical curriculum learning strategy to train the CPD. Without further fine-tuning, the learned models obtain competitive results for action classification on the Kinetics dataset under the common linear classification protocol. Moreover, our visual model provides a very effective initialization for fine-tuning on downstream task datasets. Experimental results demonstrate that our weakly-supervised pre-training yields a remarkable performance gain for action recognition on the UCF101 and HMDB51 datasets, compared with state-of-the-art self-supervised training methods. In addition, our CPD model sets a new state of the art for zero-shot action recognition on UCF101 by directly utilizing the learned visual-textual embedding.
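To make the pair-discrimination objective concrete, below is a minimal sketch of a cross-modal contrastive loss between clip and text embeddings. It uses an in-batch InfoNCE-style softmax as a stand-in for the paper's noise-contrastive estimation over all pair-instance classes; the function name, temperature value, and encoder outputs are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cpd_nce_loss(clip_emb, text_emb, temperature=0.07):
    """Cross-modal pair-discrimination loss (in-batch InfoNCE approximation).

    Each (clip, text) pair from the same video is treated as a positive;
    all other pairings in the batch serve as negatives. This approximates
    the paper's NCE over the full set of pair-instance classes.
    """
    # L2-normalize so dot products become cosine similarities
    clip_emb = F.normalize(clip_emb, dim=-1)   # (B, D)
    text_emb = F.normalize(text_emb, dim=-1)   # (B, D)

    # Similarity between every clip and every text in the batch
    logits = clip_emb @ text_emb.t() / temperature          # (B, B)
    targets = torch.arange(clip_emb.size(0), device=clip_emb.device)

    # Symmetric loss: clip-to-text and text-to-clip discrimination
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    # Random features standing in for a 3D-CNN clip encoder and a text encoder
    video_features = torch.randn(8, 256)
    caption_features = torch.randn(8, 256)
    print(cpd_nce_loss(video_features, caption_features).item())
```

In this sketch, the temperature and in-batch negative sampling are design choices commonly used in contrastive representation learning, not details taken from the paper.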
