MotionCLIP: Exposing Human Motion Generation to CLIP Space

03/15/2022
by Guy Tevet, et al.

We introduce MotionCLIP, a 3D human motion auto-encoder featuring a latent embedding that is disentangled, well behaved, and supports highly semantic textual descriptions. MotionCLIP gains its unique power by aligning its latent space with that of the Contrastive Language-Image Pre-training (CLIP) model. Aligning the human motion manifold to CLIP space implicitly infuses the extremely rich semantic knowledge of CLIP into the manifold. In particular, it promotes continuity, by placing semantically similar motions close to one another, and disentanglement, which is inherited from the CLIP-space structure. MotionCLIP comprises a transformer-based motion auto-encoder, trained to reconstruct motion while being aligned to its text label's position in CLIP space. We further leverage CLIP's unique visual understanding and inject an even stronger signal by aligning motion to rendered frames in a self-supervised manner. We show that although CLIP has never seen the motion domain, MotionCLIP offers unprecedented text-to-motion abilities, allowing out-of-domain actions, disentangled editing, and abstract language specification. For example, the text prompt "couch" is decoded into a sitting-down motion, due to lingual similarity, and the prompt "Spiderman" results in a web-swinging-like solution that is far from anything seen during training. In addition, we show how the introduced latent space can be leveraged for motion interpolation, editing, and recognition.
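The abstract describes a training objective that combines motion reconstruction with alignment of the motion latent to CLIP's text and image embeddings. A minimal numpy sketch of such a combined loss is below; the function names, loss weights, and use of cosine distance are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def cosine_align(a, b):
    # Cosine-distance alignment between two batches of embeddings:
    # 0 when directions match, up to 2 when opposed.
    a_n = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a_n * b_n, axis=-1)))

def motionclip_loss(recon, motion, z_motion, z_text, z_image,
                    w_text=1.0, w_image=1.0):
    # Total loss = motion reconstruction (MSE)
    #            + alignment of the motion latent to the CLIP text embedding
    #            + alignment of the motion latent to the CLIP image embedding
    #              (of rendered frames). Weights here are placeholders.
    l_recon = float(np.mean((recon - motion) ** 2))
    l_text = cosine_align(z_motion, z_text)
    l_image = cosine_align(z_motion, z_image)
    return l_recon + w_text * l_text + w_image * l_image
```

In training, `z_motion` would come from the transformer motion encoder, while `z_text` and `z_image` come from frozen CLIP encoders applied to the text label and rendered frames, respectively; gradients flow only into the motion auto-encoder.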

