PSUMNet: Unified Modality Part Streams are All You Need for Efficient Pose-based Action Recognition

08/11/2022
by Neel Trivedi, et al.

Pose-based action recognition is predominantly tackled by approaches which treat the input skeleton in a monolithic fashion, i.e. joints in the pose tree are processed as a whole. However, such approaches ignore the fact that action categories are often characterized by localized action dynamics involving only small subsets of part joint groups involving hands (e.g. `Thumbs up') or legs (e.g. `Kicking'). Although part-grouping based approaches exist, each part group is not considered within the global pose frame, causing such methods to fall short. Further, conventional approaches employ independent modality streams (e.g. joint, bone, joint velocity, bone velocity) and train their network multiple times on these streams, which massively increases the number of training parameters. To address these issues, we introduce PSUMNet, a novel approach for scalable and efficient pose-based action recognition. At the representation level, we propose a global frame based part stream approach as opposed to conventional modality based streams. Within each part stream, the associated data from multiple modalities is unified and consumed by the processing pipeline. Experimentally, PSUMNet achieves state-of-the-art performance on the widely used NTURGB+D 60/120 dataset and the dense joint skeleton datasets NTU 60-X/120-X. PSUMNet is highly efficient and outperforms competing methods which use 100%-400% more parameters. PSUMNet also generalizes to the SHREC hand gesture dataset with competitive performance. Overall, PSUMNet's scalability, performance and efficiency make it an attractive choice for action recognition and for deployment on compute-restricted embedded and edge devices. Code and pretrained models can be accessed at https://github.com/skelemoa/psumnet
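The unified-modality part-stream idea described above can be sketched in a few lines of NumPy. The sketch below is illustrative, not the paper's implementation: the toy parent list, part joint indices, and function names are assumptions, and the real model uses the NTU skeleton topology and a graph-convolutional backbone. What it shows is the representation-level point: the four modalities (joint, bone, joint velocity, bone velocity) are derived once and concatenated along the channel axis, so a single stream consumes them together instead of one network being trained per modality, and each part group keeps global-frame coordinates.

```python
import numpy as np

# Toy 4-joint kinematic chain; joint 0 is the root. The real NTU
# skeleton has 25 joints with its own parent mapping (assumption here).
PARENTS = [0, 0, 1, 2]

def unified_modalities(joints):
    """Derive bone and velocity modalities from raw joint positions.

    joints: array of shape (T, V, C) - T frames, V joints, C coordinates.
    Returns (T, V, 4*C): all four modalities stacked on the channel axis.
    """
    bones = joints - joints[:, PARENTS, :]      # joint minus its parent joint
    joint_vel = np.zeros_like(joints)
    joint_vel[1:] = joints[1:] - joints[:-1]    # frame-to-frame difference
    bone_vel = np.zeros_like(bones)
    bone_vel[1:] = bones[1:] - bones[:-1]
    # Unify modalities channel-wise so one stream consumes them together.
    return np.concatenate([joints, bones, joint_vel, bone_vel], axis=-1)

def part_stream(joints, part_joint_ids):
    """Select a part group's joints while keeping global-frame coordinates.

    Coordinates are NOT re-centred on the part: per the abstract, each
    part stream sees its joints within the global pose frame.
    """
    return unified_modalities(joints)[:, part_joint_ids, :]

T, V = 8, 4
skel = np.random.randn(T, V, 3)
print(unified_modalities(skel).shape)   # (8, 4, 12): 4 modalities x 3 channels
print(part_stream(skel, [2, 3]).shape)  # (8, 2, 12): a 2-joint part group
```

Each part stream's `(T, V_part, 12)` tensor would then feed its own lightweight recognition backbone, which is where the parameter savings over four independently trained modality networks come from.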


