Skimming and Scanning for Untrimmed Video Action Recognition

04/21/2021
by Yunyan Hong, et al.

Video action recognition (VAR) is a primary task of video understanding, and untrimmed videos are more common in real-world scenarios. Untrimmed videos contain redundant and diverse clips with contextual information, so sampling clips densely is essential. Recent works attempt to train a generic model to select the N most representative clips. However, it is difficult to model the complex relations among intra-class clips and inter-class videos within a single model and a fixed number of selected clips, and the entanglement of multiple relations is also hard to interpret. Thus, instead of "only look once", we argue that a "divide and conquer" strategy is more suitable for untrimmed VAR. Inspired by the speed-reading mechanism, we propose a simple yet effective clip-level solution based on skim-scan techniques. Specifically, the proposed Skim-Scan framework first skims the entire video and drops uninformative and misleading clips. For the remaining clips, it gradually scans for clips with diverse features, dropping redundant clips while still covering the essential content. Together, these strategies adaptively select the necessary clips according to the difficulty of each video. To trade off computational complexity against performance, we observe that lightweight and heavy networks produce similar statistical expressions of clip features, which allows us to explore combining the two. Comprehensive experiments on the ActivityNet and mini-FCVID datasets demonstrate that our solution surpasses state-of-the-art methods in terms of both accuracy and efficiency.
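The abstract describes the skim and scan stages only at a high level. The sketch below shows one plausible reading of such a two-stage, adaptive clip selection: it assumes per-clip informativeness scores from a lightweight network and uses cosine similarity as the redundancy measure. The function name, thresholds, and scoring are hypothetical illustrations, not taken from the paper.

import numpy as np

def skim_scan_select(clip_features, clip_scores,
                     skim_threshold=0.3, redundancy_threshold=0.9):
    """Illustrative skim-then-scan clip selection (not the authors' code).

    clip_features: (N, D) array of per-clip feature vectors (e.g. from a
                   lightweight network).
    clip_scores:   (N,) array of per-clip informativeness scores in [0, 1].

    Skim: drop clips whose score falls below skim_threshold.
    Scan: greedily keep clips whose features are sufficiently dissimilar to
          those already kept, so redundant clips are dropped while diverse
          content is covered. The number of selected clips therefore adapts
          to the video instead of being fixed to a preset N.
    """
    # Skim: discard uninformative / misleading clips outright.
    keep = np.where(clip_scores >= skim_threshold)[0]

    # Scan: walk surviving clips in descending score order and keep a clip
    # only if its cosine similarity to every already-kept clip is low enough.
    selected = []
    for idx in keep[np.argsort(-clip_scores[keep])]:
        feat = clip_features[idx]
        feat = feat / (np.linalg.norm(feat) + 1e-8)
        redundant = any(
            float(feat @ clip_features[j]
                  / (np.linalg.norm(clip_features[j]) + 1e-8))
            >= redundancy_threshold
            for j in selected
        )
        if not redundant:
            selected.append(idx)
    return sorted(selected)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(20, 128))   # stand-in clip features
    scores = rng.uniform(size=20)        # stand-in informativeness scores
    print(skim_scan_select(feats, scores))

Under this reading, lowering skim_threshold or redundancy_threshold keeps more clips for harder videos, which is one way the selected clip count could adapt per video as the abstract describes.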


research
05/10/2023

Few-shot Action Recognition via Intra- and Inter-Video Information Maximization

Current few-shot action recognition involves two primary sources of info...
research
04/08/2019

SCSampler: Sampling Salient Clips from Video for Efficient Action Recognition

While many action recognition datasets consist of collections of brief, ...
research
07/03/2020

Egocentric Action Recognition by Video Attention and Temporal Context

We present the submission of Samsung AI Centre Cambridge to the CVPR2020...
research
06/28/2020

Dynamic Sampling Networks for Efficient Action Recognition in Videos

The existing action recognition methods are mainly based on clip-level c...
research
01/12/2022

OCSampler: Compressing Videos to One Clip with Single-step Sampling

In this paper, we propose a framework named OCSampler to explore a compa...
research
04/20/2021

MGSampler: An Explainable Sampling Strategy for Video Action Recognition

Frame sampling is a fundamental problem in video action recognition due ...
