Attentive Action and Context Factorization

04/10/2019
by Yang Wang et al.

We propose a method for human action recognition, one that can localize the spatiotemporal regions that "define" the actions. This is a challenging task due to the subtlety of human actions in video and the co-occurrence of contextual elements. To address this challenge, we utilize conjugate samples of human actions, which are video clips that are contextually similar to human action samples but do not contain the action. We introduce a novel attentional mechanism that can spatially and temporally separate human actions from the co-occurring contextual factors. The separation of the action and context factors is weakly supervised, eliminating the need for laboriously detailed annotation of these two factors in training samples. Our method can be used to build human action classifiers with higher accuracy and better interpretability. Experiments on several human action recognition datasets demonstrate the quantitative and qualitative benefits of our approach.
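The core idea, as the abstract describes it, is to attend to a clip twice: once to pool an "action" representation and once to pool a "context" representation, with conjugate samples supplying the weak supervision that pulls the two factors apart. The sketch below illustrates one plausible reading of that idea with temporal attention only; the weight vectors `w_action`/`w_context`, the specific losses, and the simple dot-product scoring are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def factorize(features, w_action, w_context):
    """Attention-pool per-frame features into action and context vectors.

    features: (T, D) array of per-frame features.
    w_action, w_context: (D,) scoring weights (hypothetical parameterization).
    Returns the two pooled vectors plus the temporal attention distributions.
    """
    a_att = softmax(features @ w_action)   # (T,) attention over time, sums to 1
    c_att = softmax(features @ w_context)
    action_vec = a_att @ features          # (D,) attention-weighted pooling
    context_vec = c_att @ features
    return action_vec, context_vec, a_att, c_att

def conjugate_loss(action_clip, conjugate_clip, w_a, w_c):
    """Weak supervision from a conjugate sample.

    A conjugate clip shares the context of an action clip but lacks the
    action, so (in this sketch) we ask the context factors of the pair to
    agree while the conjugate clip carries little action evidence.
    """
    act_a, ctx_a, _, _ = factorize(action_clip, w_a, w_c)
    act_c, ctx_c, _, _ = factorize(conjugate_clip, w_a, w_c)
    context_match = np.sum((ctx_a - ctx_c) ** 2)   # contexts should align
    action_suppress = np.sum(act_c ** 2)           # conjugate has no action
    return context_match + action_suppress
```

In a full model these losses would be combined with a standard classification loss on the action vector and minimized by gradient descent over the attention parameters; the point of the sketch is only that no frame-level annotation of "action" versus "context" regions is needed.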


Related research

Improving Human Action Recognition by Non-action Classification (04/21/2016)
In this paper we consider the task of recognizing human actions in reali...

Weakly Supervised Temporal Action Localization Through Learning Explicit Subspaces for Action and Context (03/30/2021)
Weakly-supervised Temporal Action Localization (WS-TAL) methods learn to...

A Hybrid RNN-HMM Approach for Weakly Supervised Temporal Action Segmentation (06/03/2019)
Action recognition has become a rapidly developing research field within...

Annotation-Efficient Untrimmed Video Action Recognition (11/30/2020)
Deep learning has achieved great success in recognizing video actions, b...

Fine-Grain Annotation of Cricket Videos (11/24/2015)
The recognition of human activities is one of the key problems in video ...

Efficient Human Vision Inspired Action Recognition using Adaptive Spatiotemporal Sampling (07/12/2022)
Adaptive sampling that exploits the spatiotemporal redundancy in videos ...

Human Action Performance using Deep Neuro-Fuzzy Recurrent Attention Model (01/29/2020)
A great number of computer vision publications have focused on distingui...
