Where and When to Look? Spatio-temporal Attention for Action Recognition in Videos

10/01/2018
by Lili Meng, et al.

Inspired by the observation that humans process videos efficiently by paying attention only when and where it is needed, we propose a novel spatio-temporal attention mechanism for video-based action recognition. For spatial attention, we learn a saliency mask that allows the model to focus on the most salient parts of the feature maps. For temporal attention, we employ a soft attention mechanism to identify the most relevant frames of an input video. Further, we propose a set of regularizers that ensure our attention mechanism attends to coherent regions in space and time. Our model is efficient because its spatio-temporal attention is separable, yet it identifies the important parts of a video both spatially and temporally. We demonstrate the efficacy of our approach on three public video action recognition datasets. The proposed approach achieves state-of-the-art performance on all of them, including the new large-scale Moments in Time dataset. Furthermore, we quantitatively and qualitatively evaluate our model's ability to localize discriminative regions spatially and critical frames temporally, despite being trained with only per-video classification labels.
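The separable design described above can be sketched in a few lines: a spatial saliency mask is computed per frame and used to pool each feature map into a frame vector, then soft temporal weights combine the frame vectors into a clip representation. This is an illustrative NumPy sketch, not the paper's implementation; the projection vectors `w_spatial` and `w_temporal` and the total-variation coherence penalty are assumptions, since the abstract does not give the exact parameterization or regularizers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatio_temporal_attention(features, w_spatial, w_temporal):
    """Separable spatio-temporal attention (illustrative sketch).

    features:   (T, H, W, C) per-frame feature maps.
    w_spatial:  (C,) hypothetical projection giving one saliency score per location.
    w_temporal: (C,) hypothetical projection giving one relevance score per frame.
    """
    T, H, W, C = features.shape
    # Spatial attention: a saliency mask over the H*W locations of each frame.
    scores = features.reshape(T, H * W, C) @ w_spatial        # (T, H*W)
    mask = softmax(scores, axis=1).reshape(T, H, W, 1)        # sums to 1 per frame
    # Attention-weighted spatial pooling yields one vector per frame.
    attended = (features * mask).reshape(T, H * W, C).sum(1)  # (T, C)
    # Temporal attention: soft weights over the T frames.
    t_weights = softmax(attended @ w_temporal)                # (T,), sums to 1
    video_repr = t_weights @ attended                         # (C,) clip representation
    return video_repr, mask.squeeze(-1), t_weights

def spatial_coherence_penalty(mask):
    # A total-variation style penalty encouraging spatially coherent masks
    # (an assumed stand-in for the regularizers mentioned in the abstract).
    dh = np.abs(np.diff(mask, axis=1)).sum()
    dw = np.abs(np.diff(mask, axis=2)).sum()
    return dh + dw
```

Because the two attentions are factored, the cost is one softmax over H*W locations per frame plus one softmax over T frames, rather than a joint softmax over all T*H*W positions.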

