Action-Conditioned 3D Human Motion Synthesis with Transformer VAE

04/12/2021
by Mathis Petrovich, et al.

We tackle the problem of action-conditioned generation of realistic and diverse human motion sequences. In contrast to methods that complete, or extend, motion sequences, this task does not require an initial pose or sequence. Here we learn an action-aware latent representation for human motions by training a generative variational autoencoder (VAE). By sampling from this latent space and querying a certain duration through a series of positional encodings, we synthesize variable-length motion sequences conditioned on a categorical action. Specifically, we design a Transformer-based architecture, ACTOR, for encoding and decoding a sequence of parametric SMPL human body models estimated from action recognition datasets. We evaluate our approach on the NTU RGB+D, HumanAct12 and UESTC datasets and show improvements over the state of the art. Furthermore, we present two use cases: improving action recognition through adding our synthesized data to training, and motion denoising. Our code and models will be made available.
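
The abstract above describes sampling a latent vector from the VAE and querying a desired duration through a series of positional encodings, which a Transformer decoder turns into a motion of that length. The sketch below is a minimal PyTorch illustration of that decoding idea only; it is not the authors' ACTOR implementation, and the module names, layer sizes, and the 24-joint, 6D-rotation SMPL pose output are assumptions made for the example.

# Illustrative sketch (not the authors' code): decode a variable-length motion
# from a single latent vector z, conditioned on a categorical action label,
# by querying a Transformer decoder with one positional encoding per frame.
import math
import torch
import torch.nn as nn


def sinusoidal_positions(num_frames: int, dim: int) -> torch.Tensor:
    """Standard sinusoidal positional encodings, one per queried frame."""
    pos = torch.arange(num_frames, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim))
    pe = torch.zeros(num_frames, dim)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe


class ActionConditionedDecoder(nn.Module):
    """Hypothetical decoder: latent z + action embedding -> T frames of SMPL pose parameters."""

    def __init__(self, num_actions: int, latent_dim: int = 256, pose_dim: int = 24 * 6):
        super().__init__()
        self.action_embedding = nn.Embedding(num_actions, latent_dim)
        layer = nn.TransformerDecoderLayer(d_model=latent_dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.to_pose = nn.Linear(latent_dim, pose_dim)  # per-frame SMPL rotations (6D)

    def forward(self, z: torch.Tensor, action: torch.Tensor, num_frames: int) -> torch.Tensor:
        # Shift the latent by an action-specific embedding; use it as a length-1 memory.
        memory = (z + self.action_embedding(action)).unsqueeze(1)            # (B, 1, D)
        # Positional encodings act as time queries: asking for T of them yields T frames.
        queries = sinusoidal_positions(num_frames, z.size(-1)).to(z.device)  # (T, D)
        queries = queries.unsqueeze(0).expand(z.size(0), -1, -1)             # (B, T, D)
        frames = self.decoder(tgt=queries, memory=memory)                    # (B, T, D)
        return self.to_pose(frames)                                          # (B, T, pose_dim)


if __name__ == "__main__":
    decoder = ActionConditionedDecoder(num_actions=12)  # e.g. the 12 HumanAct12 classes
    z = torch.randn(2, 256)                             # latent samples from the prior
    action = torch.tensor([3, 7])                       # categorical action labels
    motion = decoder(z, action, num_frames=60)          # two 60-frame sequences
    print(motion.shape)                                 # torch.Size([2, 60, 144])

Because the duration is only the number of positional-encoding queries, the same latent and action can be decoded to sequences of different lengths, which is one way to read the abstract's "variable-length" generation.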

research · 07/30/2020
Action2Motion: Conditioned Generation of 3D Human Motions
Action recognition is a relatively established task, where given an input...

research · 01/10/2023
Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models
Diffusion-based generative models have recently emerged as powerful solu...

research · 08/12/2021
Conditional Temporal Variational AutoEncoder for Action Video Prediction
To synthesize a realistic action sequence based on a single human image,...

research · 10/19/2022
PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting
We address the problem of action-conditioned generation of human motion ...

research · 04/04/2022
HiT-DVAE: Human Motion Generation via Hierarchical Transformer Dynamical VAE
Studies on the automatic processing of 3D human pose data have flourishe...

research · 03/15/2022
ActFormer: A GAN Transformer Framework towards General Action-Conditioned 3D Human Motion Generation
We present a GAN Transformer framework for general action-conditioned 3D...

research · 06/14/2022
Recurrent Transformer Variational Autoencoders for Multi-Action Motion Synthesis
We consider the problem of synthesizing multi-action human motion sequen...
