MAR: Masked Autoencoders for Efficient Action Recognition

07/24/2022
by   Zhiwu Qing, et al.

Standard approaches to video recognition usually operate on full input videos, which is inefficient due to the spatio-temporal redundancy widely present in videos. Recent progress in masked video modelling, i.e., VideoMAE, has shown that vanilla Vision Transformers (ViT) can complement spatio-temporal contexts given only limited visible content. Inspired by this, we propose Masked Action Recognition (MAR), which reduces redundant computation by discarding a proportion of patches and operating on only part of the video. MAR contains two indispensable components: cell running masking and a bridging classifier. Specifically, to enable the ViT to easily perceive details beyond the visible patches, cell running masking is presented to preserve the spatio-temporal correlations in videos, ensuring that patches at the same spatial location are observed in turn for easy reconstruction. Additionally, we notice that although the partially observed features can reconstruct semantically explicit invisible patches, they fail to achieve accurate classification. To address this, a bridging classifier is proposed to bridge the semantic gap between the ViT-encoded features used for reconstruction and the features specialized for classification. Our proposed MAR reduces the computational cost of ViT by 53% and consistently outperforms existing ViT models by a notable margin. In particular, we find that a ViT-Large trained with MAR outperforms a ViT-Huge trained with a standard training scheme by convincing margins on both the Kinetics-400 and Something-Something v2 datasets, while the computational overhead of our ViT-Large is only 14.5% of that of ViT-Huge.
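The core idea of cell running masking described above can be illustrated with a minimal sketch: tile the patch grid into small cells and let the single visible position inside each cell advance by one slot per frame, so every spatial location becomes visible in turn across frames. This is an illustrative assumption about the schedule (the function name, cell size, and return convention are made up here), not the authors' exact implementation.

```python
import numpy as np

def cell_running_mask(num_frames: int, h: int, w: int, cell: int = 2) -> np.ndarray:
    """Sketch of a cell-running masking schedule (hypothetical helper).

    The h x w patch grid is tiled into cell x cell cells. In each frame
    exactly one position per cell is visible, and that position cycles
    by one slot per frame, so over cell*cell consecutive frames every
    spatial location is observed once (mask ratio = 1 - 1/cell**2,
    i.e. 75% for 2x2 cells).

    Returns a boolean array of shape (num_frames, h, w); True = masked.
    """
    assert h % cell == 0 and w % cell == 0
    mask = np.ones((num_frames, h, w), dtype=bool)
    for t in range(num_frames):
        k = t % (cell * cell)        # visible slot index cycles over frames
        di, dj = divmod(k, cell)     # offset of the visible patch inside each cell
        mask[t, di::cell, dj::cell] = False  # unmask that slot in every cell
    return mask

# With 2x2 cells, each frame keeps 25% of the patches visible, and over
# any 4 consecutive frames every spatial location is visible exactly once.
m = cell_running_mask(num_frames=4, h=4, w=4, cell=2)
```

The key property this schedule preserves, unlike random per-frame masking, is that no spatial location stays hidden for long, which is what makes reconstructing the invisible patches from temporal neighbours easy.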

