Masked Motion Predictors are Strong 3D Action Representation Learners

08/14/2023
by Yunyao Mao, et al.

In 3D human action recognition, limited supervised data makes it challenging to fully tap the modeling potential of powerful networks such as transformers. As a result, researchers have been actively investigating effective self-supervised pre-training strategies. In this work, we show that, instead of following the prevalent pretext task of masked self-component reconstruction of human joints, explicit contextual motion modeling is key to learning effective feature representations for 3D action recognition. Formally, we propose the Masked Motion Prediction (MAMP) framework. Specifically, MAMP takes as input a masked spatio-temporal skeleton sequence and predicts the corresponding temporal motion of the masked human joints. Given the high temporal redundancy of skeleton sequences, the motion information in MAMP also serves as an empirical prior on semantic richness that guides the masking process, promoting better attention to semantically rich temporal regions. Extensive experiments on the NTU-60, NTU-120, and PKU-MMD datasets show that the proposed MAMP pre-training substantially improves the performance of the adopted vanilla transformer, achieving state-of-the-art results without bells and whistles. The source code of our MAMP is available at https://github.com/maoyunyao/MAMP.
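Below is a minimal, illustrative PyTorch sketch of the three ideas summarized in the abstract: motion extracted as temporal joint differences, motion-guided masking that favors dynamic regions, and a masked-motion prediction loss on a vanilla transformer. It is not the authors' implementation (see the linked repository for that); all tensor shapes, module sizes, mask ratio, and temperature are assumptions, and details such as positional embeddings and the encoder-decoder split used in practice are omitted for brevity.

```python
# Hedged sketch of masked motion prediction, NOT the official MAMP code.
# Shapes, hyperparameters, and the backbone below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def joint_motion(skeleton):
    """Temporal motion as frame-to-frame joint displacement.
    skeleton: (B, T, J, C) batch of skeleton sequences."""
    motion = skeleton[:, 1:] - skeleton[:, :-1]        # (B, T-1, J, C)
    return F.pad(motion, (0, 0, 0, 0, 0, 1))           # zero-pad last frame -> (B, T, J, C)


def motion_guided_mask(motion, mask_ratio=0.9, tau=0.75):
    """Sample a mask that favors temporally dynamic (semantically rich) joint-frames.
    Returns a boolean mask of shape (B, T, J), True = masked."""
    B, T, J, _ = motion.shape
    score = motion.norm(dim=-1).reshape(B, T * J)       # motion magnitude per joint-frame
    probs = F.softmax(score / tau, dim=-1)              # higher motion -> higher mask probability
    num_mask = int(mask_ratio * T * J)
    idx = torch.multinomial(probs, num_mask, replacement=False)
    mask = torch.zeros(B, T * J, device=motion.device)
    mask.scatter_(1, idx, 1.0)
    return mask.bool().reshape(B, T, J)


class MaskedMotionPredictor(nn.Module):
    """Toy stand-in for the vanilla-transformer backbone: embeds joints, replaces
    masked positions with a learnable token, and regresses per-joint motion.
    Positional/spatio-temporal embeddings are omitted for brevity."""
    def __init__(self, in_dim=3, dim=256, depth=4, heads=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, in_dim)               # predict per-joint motion

    def forward(self, skeleton, mask):
        B, T, J, C = skeleton.shape
        tokens = self.embed(skeleton).reshape(B, T * J, -1)
        flat_mask = mask.reshape(B, T * J, 1)
        tokens = torch.where(flat_mask, self.mask_token.expand_as(tokens), tokens)
        return self.head(self.encoder(tokens)).reshape(B, T, J, C)


# One pre-training step: reconstruct motion only at masked positions.
skeleton = torch.randn(2, 64, 25, 3)                     # (batch, frames, joints, xyz)
motion = joint_motion(skeleton)
mask = motion_guided_mask(motion)
pred = MaskedMotionPredictor()(skeleton, mask)
loss = F.mse_loss(pred[mask], motion[mask])
```

The particular mask ratio and softmax temperature here are arbitrary; the point of the sketch is that masking probability grows with motion magnitude, so static joint-frames tend to stay visible while the model is forced to predict motion in the temporally dynamic regions.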
