Action Recognition using Visual Attention

11/12/2015
by Shikhar Sharma et al.

We propose a soft attention based model for the task of action recognition in videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units which are deep both spatially and temporally. Our model learns to focus selectively on parts of the video frames and classifies videos after taking a few glimpses. The model essentially learns which parts in the frames are relevant for the task at hand and attaches higher importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51 and Hollywood2 datasets and analyze how the model focuses its attention depending on the scene and the action being performed.
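To make the mechanism in the abstract concrete, below is a minimal sketch of soft spatial attention over per-frame convolutional feature cubes driven by an LSTM. It is written in PyTorch purely for illustration: the feature dimension, hidden size, single LSTM layer, number of classes, and averaging of per-frame predictions are assumptions for the sketch, not the paper's exact configuration (the paper uses a multi-layered LSTM and reports results on UCF-11, HMDB-51, and Hollywood2).

```python
# Illustrative sketch (assumed PyTorch; sizes are placeholders, not the
# paper's exact settings) of soft spatial attention over CNN feature
# cubes, with an LSTM that attends, glimpses, and classifies per frame.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftAttentionLSTM(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=512, num_classes=11):
        super().__init__()
        # Scores one spatial location given the LSTM state and that location's feature.
        self.attn = nn.Linear(hidden_dim + feat_dim, 1)
        self.lstm = nn.LSTMCell(feat_dim, hidden_dim)  # single layer for brevity
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, feats):
        # feats: (batch, time, locations, feat_dim), e.g. 7x7 = 49 locations per frame.
        B, T, L, D = feats.shape
        h = feats.new_zeros(B, self.lstm.hidden_size)
        c = feats.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(T):
            x = feats[:, t]  # (B, L, D)
            # Attention weights over spatial locations, conditioned on the hidden state.
            scores = self.attn(torch.cat([h.unsqueeze(1).expand(B, L, -1), x], dim=-1))
            alpha = F.softmax(scores.squeeze(-1), dim=-1)   # (B, L), sums to 1
            glimpse = (alpha.unsqueeze(-1) * x).sum(dim=1)  # expected feature vector
            h, c = self.lstm(glimpse, (h, c))
            logits.append(self.classifier(h))
        # One simple readout: average per-frame class scores over time.
        return torch.stack(logits, dim=1).mean(dim=1)


# Usage: batch of 2 clips, 30 frames each, 49 spatial locations of 1024-d features.
model = SoftAttentionLSTM()
video_feats = torch.randn(2, 30, 49, 1024)
class_scores = model(video_feats)  # (2, 11)
```

The key point the sketch captures is that the attention weights are a softmax over spatial locations recomputed at every time step from the recurrent state, so the glimpse is a differentiable expectation rather than a hard crop, which is what allows the model to be trained end to end.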

Related research

Where and When to Look? Spatio-temporal Attention for Action Recognition in Videos (10/01/2018)
Inspired by the observation that humans are able to process videos effic...

A Variational Information Bottleneck Based Method to Compress Sequential Networks for Human Action Recognition (10/03/2020)
In the last few years, compression of deep neural networks has become an...

Recurrent Mixture Density Network for Spatiotemporal Visual Attention (03/27/2016)
In many computer vision tasks, the relevant information to solve the pro...

Unsupervised Learning of Video Representations using LSTMs (02/16/2015)
We use multilayer Long Short Term Memory (LSTM) networks to learn repres...

Exploiting the ConvLSTM: Human Action Recognition using Raw Depth Video-Based Recurrent Neural Networks (06/13/2020)
As in many other different fields, deep learning has become the main app...

Human Action Recognition: Pose-based Attention draws focus to Hands (12/20/2017)
We propose a new spatio-temporal attention based mechanism for human act...

CHAM: action recognition using convolutional hierarchical attention model (05/09/2017)
Recently, the soft attention mechanism, which was originally proposed in...
