NEURAL MARIONETTE: A Transformer-based Multi-action Human Motion Synthesis System

09/27/2022
by Weiqiang Wang, et al.

We present a neural network-based system for long-term, multi-action human motion synthesis. The system, dubbed NEURAL MARIONETTE, produces high-quality, meaningful motions with smooth transitions from simple user input: a sequence of action tags with expected action durations and, optionally, a hand-drawn movement trajectory. The core of our system is a novel Transformer-based motion generation model, MARIONET, which generates diverse motions given action tags. Unlike existing motion generation models, MARIONET exploits contextual information from the past motion clip and the future action tag, and is dedicated to generating actions that blend smoothly with both past and future motion. Specifically, MARIONET first encodes the target action tag and the contextual information into an action-level latent code. The code is unfolded into frame-level control signals by a time-unrolling module, and these signals can then be combined with other frame-level control signals such as the target trajectory. Motion frames are finally generated in an auto-regressive manner. By applying MARIONET sequentially, NEURAL MARIONETTE robustly generates long-term, multi-action motions with the help of two simple schemes, "Shadow Start" and "Action Revision". Along with the system, we also present a new dataset dedicated to the multi-action motion synthesis task, which contains both action tags and their contextual information. Extensive experiments study the action accuracy, naturalness, and transition smoothness of the motions generated by our system.
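The abstract describes a three-step pipeline: encode the target action tag together with context (past motion clip and future action tag) into an action-level latent code, unroll that code into frame-level control signals that can be combined with a target trajectory, then decode motion frames auto-regressively. The sketch below illustrates only that data flow; every module name (MarioNetSketch, time_unroll), tensor shape, and hyper-parameter is an illustrative assumption, not the authors' implementation.

# Minimal sketch of the generation pipeline described in the abstract.
# All names, shapes, and sizes below are assumptions for illustration only.
import torch
import torch.nn as nn


class MarioNetSketch(nn.Module):
    def __init__(self, num_actions=12, pose_dim=69, latent_dim=256, n_heads=4):
        super().__init__()
        self.action_embed = nn.Embedding(num_actions, latent_dim)
        self.pose_proj = nn.Linear(pose_dim, latent_dim)
        # Encodes the past motion clip plus target/future action tags into a
        # single action-level latent code (here: mean-pooled encoder output).
        self.context_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(latent_dim, n_heads, batch_first=True),
            num_layers=2,
        )
        # "Time unrolling": expand the action-level code into per-frame
        # control signals, conditioned on a normalized phase in [0, 1].
        self.time_unroll = nn.Sequential(
            nn.Linear(latent_dim + 1, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Auto-regressive decoder: predicts the next pose from the previous
        # pose and the frame-level control signal (2D trajectory folded in).
        self.decoder = nn.GRUCell(latent_dim + pose_dim + 2, latent_dim)
        self.pose_out = nn.Linear(latent_dim, pose_dim)

    def forward(self, past_clip, action_tag, future_tag, duration, trajectory):
        # past_clip: (B, T_past, pose_dim); trajectory: (B, duration, 2)
        ctx = torch.cat(
            [
                self.pose_proj(past_clip),
                self.action_embed(action_tag).unsqueeze(1),
                self.action_embed(future_tag).unsqueeze(1),
            ],
            dim=1,
        )
        action_code = self.context_encoder(ctx).mean(dim=1)      # (B, latent)

        frames, prev_pose = [], past_clip[:, -1]                  # seed frame
        h = torch.zeros_like(action_code)
        for t in range(duration):
            phase = torch.full((action_code.size(0), 1), t / max(duration - 1, 1))
            ctrl = self.time_unroll(torch.cat([action_code, phase], dim=-1))
            h = self.decoder(torch.cat([ctrl, prev_pose, trajectory[:, t]], dim=-1), h)
            prev_pose = self.pose_out(h)
            frames.append(prev_pose)
        return torch.stack(frames, dim=1)                         # (B, duration, pose_dim)


# Usage: generate 60 frames of the current action, conditioned on a past clip,
# the target and next action tags, and a hand-drawn 2D trajectory.
model = MarioNetSketch()
poses = model(
    past_clip=torch.randn(1, 30, 69),
    action_tag=torch.tensor([3]),
    future_tag=torch.tensor([5]),
    duration=60,
    trajectory=torch.randn(1, 60, 2),
)
print(poses.shape)  # torch.Size([1, 60, 69])

Chaining such calls per action tag, with each call seeded by the tail of the previous clip, mirrors the sequential application of MARIONET that NEURAL MARIONETTE uses for long-term, multi-action synthesis.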

Related research

MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels (12/12/2022)
We tackle the problem of generating long-term 3D human motion from multi...

Diverse Dance Synthesis via Keyframes with Transformer Controllers (07/13/2022)
Existing keyframe-based motion synthesis mainly focuses on the generatio...

Robust Motion In-betweening (02/09/2021)
In this work we present a novel, robust transition generation technique ...

LongDanceDiff: Long-term Dance Generation with Conditional Diffusion Model (08/23/2023)
Dancing with music is always an essential human art form to express emot...

Synthesizing Long-Term Human Motions with Diffusion Models via Coherent Sampling (08/03/2023)
Text-to-motion generation has gained increasing attention, but most exis...

Multi-Scale Control Signal-Aware Transformer for Motion Synthesis without Phase (03/03/2023)
Synthesizing controllable motion for a character using deep learning has...

Near Real-Time Data Labeling Using a Depth Sensor for EMG Based Prosthetic Arms (11/10/2018)
Recognizing sEMG (Surface Electromyography) signals belonging to a parti...
