Support-set bottlenecks for video-text representation learning

10/06/2020
by Mandela Patrick et al.

The dominant paradigm for learning video-text representations – noise contrastive learning – increases the similarity of the representations of pairs of samples that are known to be related, such as text and video from the same sample, and pushes apart the representations of all other pairs. We posit that this last behaviour is too strict, enforcing dissimilar representations even for samples that are semantically related – for example, visually similar videos or ones that share the same depicted action. In this paper, we propose a novel method that alleviates this by leveraging a generative model to naturally push these related samples together: each sample's caption must be reconstructed as a weighted combination of the visual representations of other support samples. This simple idea ensures that representations are not overly specialized to individual samples, are reusable across the dataset, and explicitly encode semantics shared between samples, unlike noise contrastive learning. Our proposed method outperforms others by a large margin on MSR-VTT, VATEX and ActivityNet, for both video-to-text and text-to-video retrieval.
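The core mechanism the abstract describes – reconstructing a caption as an attention-weighted combination of other samples' visual representations – can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, embedding shapes, and the softmax temperature are assumptions, and the actual model computes these embeddings with learned video and text encoders.

```python
import numpy as np

def support_set_reconstruction(text_emb, support_visual, temperature=1.0):
    """Reconstruct a caption embedding as a softmax-weighted combination
    of a support set of visual embeddings (hypothetical sketch).

    text_emb:       (d,)   caption embedding
    support_visual: (n, d) visual embeddings of n support samples
    """
    # Similarity between the caption and each support video.
    scores = support_visual @ text_emb / temperature
    # Softmax over the support set (numerically stabilised).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted combination of the support visual features: the caption
    # must be expressible in terms of *other* samples' representations,
    # which discourages overly sample-specific embeddings.
    reconstruction = weights @ support_visual
    return reconstruction, weights

# Example on random embeddings (8 support videos, 64-dim features).
rng = np.random.default_rng(0)
videos = rng.standard_normal((8, 64))
caption = rng.standard_normal(64)
recon, weights = support_set_reconstruction(caption, videos)
```

Because the softmax weights sum to one, semantically related videos that score highly for the same caption end up sharing reconstruction mass, which is what nudges their representations together rather than apart.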

Related research

- 03/09/2023 – Improving Video Retrieval by Adaptive Margin
- 09/30/2021 – CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations
- 02/25/2019 – A Theoretical Analysis of Contrastive Unsupervised Representation Learning
- 04/08/2022 – Probabilistic Representations for Video Contrastive Learning
- 11/21/2022 – Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations
- 02/23/2023 – Learning Visual Representations via Language-Guided Sampling
- 08/23/2021 – TACo: Token-aware Cascade Contrastive Learning for Video-Text Alignment
