Related papers:
- Adversarial Self-Supervised Learning for Semi-Supervised 3D Action Recognition
- Unsupervised Learning of View-invariant Action Representations
- DTG-Net: Differentiated Teachers Guided Self-Supervised Video Action Recognition
- Deep Image-to-Video Adaptation and Fusion Networks for Action Recognition
- Semi-Supervised Action Recognition with Temporal Contrastive Learning
- Temporal Action Detection with Multi-level Supervision
- Action Recognition in Videos: from Motion Capture Labs to the Web
Exploiting Motion Information from Unlabeled Videos for Static Image Action Recognition
Static image action recognition aims to recognize an action from a single image. It typically relies on expensive human labeling, such as an adequate number of labeled action images or a large-scale labeled image dataset. In contrast, abundant unlabeled videos can be obtained cheaply. Several works have therefore explored using unlabeled videos to facilitate image action recognition; these fall into two groups: (a) enhancing the visual representations of action images with a proxy task designed on unlabeled videos, which falls within the scope of self-supervised learning; and (b) generating auxiliary representations for action images with a generator learned from unlabeled videos. In this paper, we integrate the two strategies in a unified framework consisting of a Visual Representation Enhancement (VRE) module and a Motion Representation Augmentation (MRA) module. Specifically, the VRE module includes a proxy task that imposes a pseudo motion label constraint and a temporal coherence constraint on unlabeled videos, while the MRA module predicts the motion information of a static action image by exploiting unlabeled videos. We demonstrate the superiority of our framework on four benchmark human action datasets with limited labeled data.
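To make the two modules concrete, below is a minimal PyTorch sketch of what the abstract describes, not the authors' implementation. The module names (VREHead, MRAHead), feature dimensions, and the choice of pseudo labels (e.g., from clustering optical flow) are all assumptions for illustration: the VRE part combines a pseudo-motion-label classification loss with a temporal coherence term on unlabeled video frames, and the MRA part regresses a motion representation from a single image feature.

```python
# Hedged sketch of the VRE/MRA idea; names and dimensions are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VREHead(nn.Module):
    """Proxy-task head: classifies pseudo motion labels and enforces
    temporal coherence on features of unlabeled video frames."""
    def __init__(self, feat_dim=512, num_pseudo_labels=16):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_pseudo_labels)

    def forward(self, frame_feats, pseudo_labels):
        # frame_feats: (B, T, D) backbone features of T sampled frames.
        B, T, D = frame_feats.shape
        logits = self.classifier(frame_feats.reshape(B * T, D))
        cls_loss = F.cross_entropy(logits, pseudo_labels.reshape(B * T))
        # Temporal coherence: adjacent frames should have similar features.
        coherence_loss = (1 - F.cosine_similarity(
            frame_feats[:, :-1], frame_feats[:, 1:], dim=-1)).mean()
        return cls_loss + coherence_loss

class MRAHead(nn.Module):
    """Regresses a motion representation from a static-image feature,
    supervised by motion features extracted from unlabeled videos."""
    def __init__(self, feat_dim=512, motion_dim=256):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, motion_dim))

    def forward(self, image_feat, target_motion_feat=None):
        pred = self.regressor(image_feat)
        if target_motion_feat is None:   # inference on a static image
            return pred
        return F.mse_loss(pred, target_motion_feat)

# Toy usage with random tensors standing in for backbone features.
vre, mra = VREHead(), MRAHead()
frame_feats = torch.randn(4, 8, 512)            # 4 clips, 8 frames each
pseudo_labels = torch.randint(0, 16, (4, 8))    # e.g. cluster ids of flow
vre_loss = vre(frame_feats, pseudo_labels)
mra_loss = mra(torch.randn(4, 512), torch.randn(4, 256))
total_loss = vre_loss + mra_loss
```

At test time only the MRA regressor is needed: the predicted motion representation can be concatenated with the visual feature of the static image before classification, which is one plausible way to realize the "auxiliary representation" described above.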