Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation

05/16/2023
by Samaneh Azadi, et al.

Text-guided human motion generation has drawn significant interest because of its impactful applications spanning animation and robotics. Recently, the application of diffusion models to motion generation has enabled improvements in the quality of generated motions. However, existing approaches are limited by their reliance on relatively small-scale motion-capture data, leading to poor performance on more diverse, in-the-wild prompts. In this paper, we introduce Make-An-Animation, a text-conditioned human motion generation model that learns more diverse poses and prompts from large-scale image-text datasets, enabling a significant improvement in performance over prior works. Make-An-Animation is trained in two stages. First, we train on a curated large-scale dataset of (text, static pseudo-pose) pairs extracted from image-text datasets. Second, we fine-tune on motion-capture data, adding additional layers to model the temporal dimension. Unlike prior diffusion models for motion generation, Make-An-Animation uses a U-Net architecture similar to recent text-to-video generation models. Human evaluation of motion realism and alignment with input text shows that our model reaches state-of-the-art performance on text-to-motion generation.
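The abstract describes the two-stage recipe only in prose: pretrain per-frame blocks on (text, static pseudo-pose) pairs, then insert extra layers along the time axis for fine-tuning on motion-capture data. Below is a minimal, hypothetical PyTorch sketch of that factorized design, in the spirit of recent text-to-video U-Nets. The class names, tensor layout, and zero-initialization trick are all assumptions made for illustration; this is not the authors' released code.

```python
# Hypothetical sketch of the two-stage design described in the abstract.
# SpatialBlock / TemporalBlock, the tensor layout, and the zero
# initialization are assumptions, not the paper's released code.
import torch
import torch.nn as nn

class SpatialBlock(nn.Module):
    """Per-frame block trained in stage 1 on (text, static pseudo-pose) pairs."""
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.GroupNorm(8, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * time, dim, joints); time is 1 during stage-1 pretraining
        return x + self.conv(self.norm(x))

class TemporalBlock(nn.Module):
    """Layer added for stage-2 fine-tuning to model the temporal dimension.

    Zero-initialized so that, at the start of fine-tuning, the network
    reproduces the stage-1 (per-frame) model exactly.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x: torch.Tensor, batch: int, time: int) -> torch.Tensor:
        bt, dim, joints = x.shape
        # regroup so the convolution runs along the time axis
        h = x.view(batch, time, dim, joints).permute(0, 3, 2, 1)  # (B, J, D, T)
        h = h.reshape(batch * joints, dim, time)
        h = self.conv(h)
        h = h.view(batch, joints, dim, time).permute(0, 3, 2, 1).reshape(bt, dim, joints)
        return x + h  # residual: contributes zero at initialization

# Usage: batch of 2 motions, 16 frames, 24 joints, 64 feature channels
spatial, temporal = SpatialBlock(64), TemporalBlock(64)
x = torch.randn(2 * 16, 64, 24)
h = temporal(spatial(x), batch=2, time=16)
assert torch.allclose(h, spatial(x))  # identity at init, by construction
```

Under this reading, the spatial weights carry over the pose diversity learned from image-text data, while the newly inserted temporal layers learn motion dynamics from the much smaller motion-capture corpus.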


Related research:

- DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion (09/04/2023)
- Text-Conditional Contextualized Avatars For Zero-Shot Personalization (04/14/2023)
- GMD: Controllable Human Motion Synthesis via Guided Diffusion Models (05/21/2023)
- StyleGAN-Human: A Data-Centric Odyssey of Human Generation (04/25/2022)
- OhMG: Zero-shot Open-vocabulary Human Motion Generation (10/28/2022)
- Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model (09/12/2023)
- Understanding Text-driven Motion Synthesis with Keyframe Collaboration via Diffusion Models (05/23/2023)
