
Learning Temporally Invariant and Localizable Features via Data Augmentation for Video Recognition

by   Taeoh Kim, et al.

Deep-learning-based video recognition has shown promising improvements alongside the development of large-scale datasets and spatiotemporal network architectures. In image recognition, learning spatially invariant features is a key factor in improving recognition performance and robustness. Data augmentation based on visual inductive priors, such as cropping, flipping, rotating, or photometric jittering, is a representative approach for obtaining such features. Recent state-of-the-art recognition solutions rely on modern data augmentation strategies that exploit a mixture of augmentation operations. In this study, we extend these strategies to the temporal dimension for videos, learning temporally invariant or temporally localizable features that cover temporal perturbations and complex actions. With our novel temporal data augmentation algorithms, video recognition performance is improved using only a limited amount of training data compared with spatial-only data augmentation, including in the 1st Visual Inductive Priors (VIPriors) for data-efficient action recognition challenge. Furthermore, the learned features are temporally localizable, which cannot be achieved with spatial augmentation algorithms. Our source code is available at
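To make the idea of extending spatial augmentation to the temporal dimension concrete, here is a minimal sketch (not the paper's actual algorithms) of two temporal analogues of common spatial operations, assuming a clip is stored as a `(T, H, W, C)` NumPy array: a temporal crop (analogous to spatial cropping) and a temporal flip that reverses frame order (analogous to horizontal flipping).

```python
import numpy as np

def temporal_crop(clip, length, rng=None):
    """Randomly crop a contiguous window of `length` frames.

    Temporal analogue of spatial random cropping: instead of cutting a
    spatial patch, we cut a sub-sequence along the time axis.
    """
    rng = rng or np.random.default_rng()
    t = clip.shape[0]
    if t <= length:
        return clip
    start = rng.integers(0, t - length + 1)
    return clip[start:start + length]

def temporal_flip(clip, p=0.5, rng=None):
    """Reverse frame order with probability `p`.

    Temporal analogue of horizontal flipping: the spatial content of each
    frame is untouched; only the playback direction changes.
    """
    rng = rng or np.random.default_rng()
    return clip[::-1] if rng.random() < p else clip

# Example: a dummy 32-frame, 112x112 RGB clip.
clip = np.zeros((32, 112, 112, 3), dtype=np.float32)
augmented = temporal_flip(temporal_crop(clip, 16))
```

Function names and the `(T, H, W, C)` layout are illustrative assumptions; the paper's methods additionally cover mixture-based strategies, which compose many such operations rather than applying them in isolation.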




Extending Temporal Data Augmentation for Video Action Recognition

Pixel space augmentation has grown in popularity in many Deep Learning a...

Learning Representational Invariances for Data-Efficient Action Recognition

Data augmentation is a ubiquitous technique for improving image classifi...

Exploring Temporally Dynamic Data Augmentation for Video Recognition

Data augmentation has recently emerged as an essential component of mode...

SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix Data Augmentation

A novel approach of data augmentation based on irregular superpixel deco...

Illumination-Based Data Augmentation for Robust Background Subtraction

A core challenge in background subtraction (BGS) is handling videos with...

A Unified Multimodal De- and Re-coupling Framework for RGB-D Motion Recognition

Motion recognition is a promising direction in computer vision, but the ...

Workflow Augmentation of Video Data for Event Recognition with Time-Sensitive Neural Networks

Supervised training of neural networks requires large, diverse and well ...