Neural Marionette: Unsupervised Learning of Motion Skeleton and Latent Dynamics from Volumetric Video

02/17/2022
by Jinseok Bae, et al.

We present Neural Marionette, an unsupervised approach that discovers a skeletal structure from a dynamic sequence and learns to generate diverse motions consistent with the observed motion dynamics. Given a video stream of point cloud observations of an articulated body under arbitrary motion, our approach discovers the unknown low-dimensional skeletal relationship that can effectively represent the movement. The discovered structure is then used to encode the motion priors of dynamic sequences in a latent space, which can be decoded into relative joint rotations that represent the full skeletal motion. Our approach works without any prior knowledge of the underlying motion or skeletal structure, and we demonstrate that the discovered structure is comparable even to the hand-labeled ground-truth skeleton in representing a 4D motion sequence. The skeletal structure embeds the general semantics of the possible motion space and can generate motions for diverse scenarios. We verify that the learned motion prior generalizes to multi-modal sequence generation, interpolation between two poses, and motion retargeting to a different skeletal structure.
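The abstract's decoding step, mapping relative joint rotations on a skeleton to a full pose, is standard forward kinematics. The sketch below is not the authors' implementation; it is a minimal, generic illustration (with hypothetical `parents`, `offsets`, and `local_rots` inputs) of how per-joint rotations relative to each parent are chained along a skeletal tree to recover world-space joint positions:

```python
import numpy as np

def forward_kinematics(parents, offsets, local_rots):
    """Compute world-space joint positions from relative joint rotations.

    parents[i]   : parent index of joint i (-1 for the root)
    offsets[i]   : bone offset of joint i in its parent's frame, shape (3,)
    local_rots[i]: rotation of joint i relative to its parent, shape (3, 3)

    Assumes joints are topologically ordered (each parent precedes its children).
    """
    n = len(parents)
    world_rots = np.zeros((n, 3, 3))
    positions = np.zeros((n, 3))
    for i in range(n):
        p = parents[i]
        if p < 0:
            # Root joint: its local frame is the world frame.
            world_rots[i] = local_rots[i]
            positions[i] = offsets[i]
        else:
            # Accumulate rotation down the tree, then place the bone
            # offset in the parent's world frame.
            world_rots[i] = world_rots[p] @ local_rots[i]
            positions[i] = positions[p] + world_rots[p] @ offsets[i]
    return positions

# Toy 3-joint chain (root -> joint1 -> joint2) with unit bones along x.
parents = [-1, 0, 1]
offsets = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0]])
rz90 = np.array([[0.0, -1.0, 0.0],   # 90-degree rotation about z
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
# Bending joint1 swings its child bone from +x to +y.
pos = forward_kinematics(parents, offsets, [np.eye(3), rz90, np.eye(3)])
# pos[2] is now at (1, 1, 0): one unit along x, then one along y.
```

A generative model over such rotations (as the paper's latent dynamics module produces) can be decoded this way into a pose sequence without ever regressing joint positions directly, which keeps bone lengths fixed by construction.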


Related research

07/13/2022 · Global-local Motion Transformer for Unsupervised Skeleton-based Action Learning
We propose a new transformer model for the task of unsupervised learning...

12/02/2021 · Hierarchical Neural Implicit Pose Network for Animation and Motion Retargeting
We present HIPNet, a neural implicit pose network trained on multiple su...

12/06/2022 · An Unsupervised Machine Learning Approach for Ground Motion Clustering and Selection
Clustering analysis of sequence data continues to address many applicati...

05/12/2020 · Skeleton-Aware Networks for Deep Motion Retargeting
We introduce a novel deep learning framework for data-driven motion reta...

12/19/2021 · MoCaNet: Motion Retargeting in-the-wild via Canonicalization Networks
We present a novel framework that brings the 3D motion retargeting task ...

08/11/2023 · Semantics2Hands: Transferring Hand Motion Semantics between Avatars
Human hands, the primary means of non-verbal communication, convey intri...

08/14/2018 · MT-VAE: Learning Motion Transformations to Generate Multimodal Human Dynamics
Long-term human motion can be represented as a series of motion modes---...
