AdaFocusV3: On Unified Spatial-temporal Dynamic Video Recognition

09/27/2022
by   Yulin Wang, et al.

Recent research has revealed that reducing either temporal or spatial redundancy is an effective approach to efficient video recognition, e.g., allocating the majority of computation to a task-relevant subset of frames or to the most valuable image regions of each frame. However, most existing works model one type of redundancy while leaving the other unaddressed. This paper explores a unified formulation of spatial-temporal dynamic computation on top of the recently proposed AdaFocusV2 algorithm, yielding the improved AdaFocusV3 framework. Our method reduces computational cost by activating the expensive high-capacity network only on small but informative 3D video cubes. These cubes are cropped from the space formed by frame height, frame width, and video duration, and their locations are determined adaptively, on a per-sample basis, by a lightweight policy network. At test time, the number of cubes processed for each video is configured dynamically, i.e., video cubes are processed sequentially until a sufficiently reliable prediction is produced. Notably, AdaFocusV3 can be trained effectively by approximating the non-differentiable cropping operation with interpolation of deep features. Extensive empirical results on six benchmark datasets (ActivityNet, FCVID, Mini-Kinetics, Something-Something V1 & V2, and Diving48) demonstrate that our model is considerably more efficient than competitive baselines.
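The test-time behaviour described above — processing video cubes one at a time and stopping once the prediction is confident enough — can be sketched with a minimal early-exit loop. This is an illustrative sketch only: the function name, the logit-averaging fusion rule, and the max-softmax confidence criterion are assumptions for exposition, not the paper's exact formulation.

```python
import math

def early_exit_predict(cube_logits, threshold=0.9):
    """Sequentially fuse per-cube class logits and stop as soon as the
    running prediction is confident enough.

    Illustrative sketch of confidence-based early exiting; the fusion
    rule (running average of logits) and the confidence measure
    (max softmax probability) are assumed here, not taken from the paper.
    Returns (predicted class index, number of cubes consumed)."""
    running = None
    for step, logits in enumerate(cube_logits, start=1):
        # accumulate logits across the cubes seen so far
        running = list(logits) if running is None else [a + b for a, b in zip(running, logits)]
        avg = [v / step for v in running]
        # numerically stable softmax over the averaged logits
        m = max(avg)
        exps = [math.exp(v - m) for v in avg]
        total = sum(exps)
        probs = [e / total for e in exps]
        conf = max(probs)
        if conf >= threshold:
            return probs.index(conf), step  # exit early: prediction is reliable
    return probs.index(conf), step  # budget exhausted: return best guess
```

A video whose first cube is ambiguous but whose second cube is highly discriminative would exit after two cubes, while an easy video exits after one — which is how the per-sample computational cost becomes dynamic.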
