Attentive Action and Context Factorization
We propose a method for human action recognition, one that can localize the spatiotemporal regions that "define" the actions. This is a challenging task due to the subtlety of human actions in video and the co-occurrence of contextual elements. To address this challenge, we utilize conjugate samples of human actions, which are video clips that are contextually similar to human action samples but do not contain the action. We introduce a novel attentional mechanism that can spatially and temporally separate human actions from the co-occurring contextual factors. The separation of the action and context factors is weakly supervised, eliminating the need for laboriously detailed annotation of these two factors in training samples. Our method can be used to build human action classifiers with higher accuracy and better interpretability. Experiments on several human action recognition datasets demonstrate the quantitative and qualitative benefits of our approach.
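The abstract does not specify the network architecture, so the following is only a minimal NumPy sketch of the general idea: temporal attention weights pool a clip's frame features into an "action" summary, complementary weights pool a "context" summary, and a conjugate sample (a clip sharing the context but lacking the action) supplies the weak supervision signal. All names (`factorize`, `w_att`) and the toy data are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def factorize(features, w_att):
    """Split per-frame features (T, D) into action and context summaries.

    Attention weights `a` emphasize action-relevant frames; the normalized
    complement (1 - a) pools the remaining, context-dominated frames.
    (Hypothetical stand-in for the paper's attentional mechanism.)
    """
    scores = features @ w_att               # (T,) frame relevance scores
    a = softmax(scores)                     # temporal attention over frames
    action_repr = a @ features              # attention-weighted pooling
    c = (1.0 - a) / (1.0 - a).sum()         # complementary context weights
    context_repr = c @ features
    return action_repr, context_repr

# Toy illustration: an action clip and a conjugate sample that shares
# its context frames but has the "action" frames replaced.
T, D = 8, 16
action_clip = rng.normal(size=(T, D))
conjugate_clip = action_clip.copy()
conjugate_clip[2:5] = rng.normal(size=(3, D))  # swap out the action segment

w = rng.normal(size=D)
act_a, ctx_a = factorize(action_clip, w)
act_c, ctx_c = factorize(conjugate_clip, w)

# A weakly supervised loss would (i) classify act_a as the action,
# (ii) push act_c away from action classes, and (iii) pull ctx_a and
# ctx_c together, since the two clips share the same context.
context_gap = float(np.linalg.norm(ctx_a - ctx_c))
```

Because supervision comes only from clip-level labels and the pairing with conjugate samples, no per-frame or per-region annotation of action versus context is required, matching the weak-supervision claim in the abstract.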