Beyond Short Clips: End-to-End Video-Level Learning with Collaborative Memories

04/02/2021
by Xitong Yang, et al.

The standard way of training video models entails sampling at each iteration a single clip from a video and optimizing the clip prediction with respect to the video-level label. We argue that a single clip may not have enough temporal coverage to exhibit the label to recognize, since video datasets are often weakly labeled with categorical information but without dense temporal annotations. Furthermore, optimizing the model over brief clips impedes its ability to learn long-term temporal dependencies. To overcome these limitations, we introduce a collaborative memory mechanism that encodes information across multiple sampled clips of a video at each training iteration. This enables the learning of long-range dependencies beyond a single clip. We explore different design choices for the collaborative memory to ease the optimization difficulties. Our proposed framework is end-to-end trainable and significantly improves the accuracy of video classification at a negligible computational overhead. Through extensive experiments, we demonstrate that our framework generalizes to different video architectures and tasks, outperforming the state of the art on both action recognition (e.g., Kinetics-400 & 700, Charades, Something-Something-V1) and action detection (e.g., AVA v2.1 & v2.2).
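To make the idea concrete, here is a minimal sketch, not the authors' actual implementation, of the training scheme the abstract describes: several clips are sampled from one video per iteration, their features exchange information through a shared memory module, and a single video-level loss is optimized end-to-end. All names (CollaborativeMemory, VideoLevelModel, the dummy backbone and shapes) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CollaborativeMemory(nn.Module):
    """Aggregates clip features into a video-level memory and feeds it back.
    This mean-pool-and-project design is a placeholder for the paper's memory."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, clip_feats):                    # clip_feats: (num_clips, B, D)
        memory = self.proj(clip_feats.mean(dim=0))    # video-level context: (B, D)
        return clip_feats + memory.unsqueeze(0)       # enrich every clip with it

class VideoLevelModel(nn.Module):
    def __init__(self, backbone, dim, num_classes):
        super().__init__()
        self.backbone = backbone                      # any clip encoder -> (B, dim)
        self.memory = CollaborativeMemory(dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, clips):                         # clips: (num_clips, B, C, T, H, W)
        feats = torch.stack([self.backbone(c) for c in clips])
        feats = self.memory(feats)                    # share information across clips
        logits = self.classifier(feats)               # per-clip logits
        return logits.mean(dim=0)                     # single video-level prediction

# Toy usage with a dummy backbone; a real setup would use a 3D CNN or video transformer.
backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(256))
model = VideoLevelModel(backbone, dim=256, num_classes=400)
clips = torch.randn(4, 2, 3, 8, 32, 32)              # 4 clips per video, batch of 2
labels = torch.randint(0, 400, (2,))
loss = nn.CrossEntropyLoss()(model(clips), labels)
loss.backward()                                       # end-to-end: gradients reach the backbone
```

The contrast with standard clip-level training is that the loss is computed on a prediction that has seen several clips of the same video, so gradients from the video-level label flow back through all sampled clips and the shared memory in a single iteration.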

