DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion

09/04/2023
by Yunhong Lou, et al.

We present DiverseMotion, a new approach for synthesizing high-quality human motions conditioned on textual descriptions while preserving motion diversity. Despite recent significant progress in text-based human motion generation, existing methods often prioritize fitting the training motions at the expense of action diversity, so striking a balance between motion quality and diversity remains an unresolved challenge. This problem is compounded by two key factors: 1) the lack of diversity in the motion-caption pairs of existing benchmarks and 2) a unilateral, biased semantic understanding of the text prompt that focuses primarily on the verb while neglecting the nuanced distinctions conveyed by other words. To address the first issue, we construct a large-scale Wild Motion-Caption dataset (WMC) that extends the restricted action boundary of existing well-annotated datasets, enabling the learning of diverse motions from a more extensive range of actions. Specifically, we train a motion BLIP upon a pretrained vision-language model and use it to automatically generate diverse captions for the collected motion sequences, yielding a dataset of 8,888 motions paired with 141k texts. To comprehensively understand the text command, we propose a Hierarchical Semantic Aggregation (HSA) module that captures fine-grained semantics. Finally, we integrate these two designs into an effective Motion Discrete Diffusion (MDD) framework to strike a balance between motion quality and diversity. Extensive experiments on HumanML3D and KIT-ML show that DiverseMotion achieves state-of-the-art motion quality and competitive motion diversity. Our dataset, code, and pretrained models will be released to reproduce all of our results.
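The abstract names the Motion Discrete Diffusion (MDD) framework without detailing it. As a rough illustration only, the sketch below shows how discrete diffusion over quantized motion tokens typically works: an absorbing-state (mask-based) forward corruption and one text-conditioned reverse step. Everything here (VOCAB_SIZE, MASK_ID, the denoiser signature, the re-masking schedule) is an assumption for illustration, not the authors' implementation.

```python
import torch

VOCAB_SIZE = 512      # size of the motion VQ codebook (assumed)
MASK_ID = VOCAB_SIZE  # extra absorbing "mask" id for corrupted positions

def corrupt(tokens: torch.Tensor, t: float) -> torch.Tensor:
    """Forward process: independently replace each motion token with the
    absorbing MASK token with probability t (t in [0, 1])."""
    drop = torch.rand(tokens.shape, device=tokens.device) < t
    return torch.where(drop, torch.full_like(tokens, MASK_ID), tokens)

@torch.no_grad()
def reverse_step(denoiser, tokens, text_emb, t, t_next):
    """One reverse step: a text-conditioned denoiser (hypothetical signature)
    predicts a categorical distribution per position; masked positions are
    resampled, then a smaller fraction is re-masked so the corruption level
    shrinks from t to t_next."""
    logits = denoiser(tokens, text_emb)  # (B, T, VOCAB_SIZE)
    sampled = torch.distributions.Categorical(logits=logits).sample()
    was_masked = tokens == MASK_ID
    filled = torch.where(was_masked, sampled, tokens)
    # Re-mask a fraction t_next / t of the previously masked positions.
    keep = torch.rand(tokens.shape, device=tokens.device) < (t_next / max(t, 1e-8))
    return torch.where(was_masked & keep,
                       torch.full_like(tokens, MASK_ID), filled)
```

Sampling would then start from an all-MASK token sequence and iterate reverse_step over a decreasing schedule t_1 > t_2 > ... > 0; the actual MDD transition design, schedule, and HSA-based text conditioning follow the paper.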

Related research

05/16/2023 · Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation
Text-guided human motion generation has drawn significant interest becau...

08/28/2023 · Priority-Centric Human Motion Generation in Discrete Latent Space
Text-to-motion generation is a formidable task, aiming to produce human ...

09/12/2023 · Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
Text-driven human motion generation in computer vision is both significa...

05/23/2023 · Understanding Text-driven Motion Synthesis with Keyframe Collaboration via Diffusion Models
The emergence of text-driven motion synthesis technique provides animato...

06/06/2023 · Dance Generation by Sound Symbolic Words
This study introduces a novel approach to generate dance motions using o...

11/28/2022 · Action-GPT: Leveraging Large-scale Language Models for Improved and Generalized Action Generation
We introduce Action-GPT, a plug-and-play framework for incorporating Lar...

11/18/2022 · 3D Human Motion Generation from Text via Gesture Action Classification and the Autoregressive Model
In this paper, a deep learning-based model for 3D human motion generatio...
