GANimator: Neural Motion Synthesis from a Single Sequence

05/05/2022
by Peizhuo Li, et al.

We present GANimator, a generative model that learns to synthesize novel motions from a single, short motion sequence. GANimator generates motions that resemble the core elements of the original motion while simultaneously synthesizing novel and diverse movements. Existing data-driven techniques for motion synthesis require a large motion dataset containing the desired and specific skeletal structure. By contrast, GANimator only requires training on a single motion sequence, enabling novel motion synthesis for a variety of skeletal structures, e.g., bipeds, quadrupeds, hexapeds, and more. Our framework contains a series of generative and adversarial neural networks, each responsible for generating motions at a specific frame rate. The framework progressively learns to synthesize motion from random noise, enabling hierarchical control over the generated motion content across varying levels of detail. We show a number of applications, including crowd simulation, key-frame editing, style transfer, and interactive control, which all learn from a single input sequence. Code and data for this paper are at https://peizhuoli.github.io/ganimator.
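To make the coarse-to-fine structure concrete, below is a minimal sketch (not the authors' implementation) of a progressive generator stack: each stage operates at its own temporal resolution, upsamples the previous stage's output, and adds noise-driven residual detail. In the paper's framework each stage would also be trained adversarially against a per-scale discriminator on the single input sequence; that training loop is omitted here. Tensor layout, channel counts, layer choices, and class names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleGenerator(nn.Module):
    """One stage: adds noise-driven detail to a coarse motion at its frame rate."""
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels * 2, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, channels, kernel_size=5, padding=2),
        )

    def forward(self, coarse: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        # Residual refinement: predict detail on top of the upsampled coarse motion.
        return coarse + self.net(torch.cat([coarse, noise], dim=1))

class ProgressiveMotionGenerator(nn.Module):
    """Chain of per-scale generators, coarsest (lowest frame rate) first."""
    def __init__(self, channels: int, num_scales: int, base_frames: int):
        super().__init__()
        self.stages = nn.ModuleList([ScaleGenerator(channels) for _ in range(num_scales)])
        self.channels, self.base_frames = channels, base_frames

    def forward(self, batch: int = 1) -> torch.Tensor:
        # Start from pure noise at the coarsest temporal resolution.
        x = torch.randn(batch, self.channels, self.base_frames)
        for i, stage in enumerate(self.stages):
            if i > 0:  # double the frame rate before each finer stage
                x = F.interpolate(x, scale_factor=2, mode="linear", align_corners=False)
            x = stage(x, torch.randn_like(x))
        return x  # (batch, channels, frames): e.g. per-joint rotation features

if __name__ == "__main__":
    gen = ProgressiveMotionGenerator(channels=96, num_scales=4, base_frames=16)
    motion = gen(batch=2)
    print(motion.shape)  # torch.Size([2, 96, 128])

Because each stage only adds detail at its own frame rate, coarse stages can be resampled or edited (e.g., for key-frame editing or interactive control) while finer stages fill in plausible high-frequency motion.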

Related research

06/01/2023 – Example-based Motion Synthesis via Generative Motion Matching
We present GenMM, a generative model that "mines" as many diverse motion...

06/16/2022 – MoDi: Unconditional Motion Synthesis from Diverse Data
The emergence of neural networks has revolutionized the field of motion ...

02/12/2023 – Single Motion Diffusion
Synthesizing realistic animations of humans, animals, and even imaginary...

09/07/2023 – Broadband Ground Motion Synthesis via Generative Adversarial Neural Operators: Development and Validation
We present a data-driven model for ground-motion synthesis using a Gener...

11/15/2018 – Motion Style Extraction Based on Sparse Coding Decomposition
We present a sparse coding-based framework for motion style decompositio...

09/27/2020 – Recognition and Synthesis of Object Transport Motion
Deep learning typically requires vast numbers of training examples in or...

01/30/2023 – DanceAnyWay: Synthesizing Mixed-Genre 3D Dance Movements Through Beat Disentanglement
We present DanceAnyWay, a hierarchical generative adversarial learning m...
