MCM: Multi-condition Motion Synthesis Framework for Multi-scenario

09/06/2023
by   Zeyu Ling, et al.

The objective of multi-condition human motion synthesis is to incorporate diverse conditional inputs, encompassing forms such as text, music, and speech. This endows the task with the capability to adapt across multiple scenarios, ranging from text-to-motion to music-to-dance. While existing research has primarily focused on single conditions, multi-condition human motion generation remains underexplored. In this paper, we address these challenges by introducing MCM, a novel paradigm for motion synthesis that spans multiple scenarios under diverse conditions. The MCM framework can be integrated with any DDPM-like diffusion model to accommodate multi-conditional input while preserving its generative capabilities. Specifically, MCM employs a two-branch architecture consisting of a main branch and a control branch. The control branch shares the same structure as the main branch and is initialized with the main branch's parameters, which preserves the generation ability of the main branch while supporting multi-condition input. We also introduce MWNet, a Transformer-based, DDPM-like diffusion model that serves as our main branch and captures the spatial complexity and inter-joint correlations of motion sequences through a channel-dimension self-attention module. Quantitative comparisons demonstrate that our approach achieves state-of-the-art results on text-to-motion and results on music-to-dance that are competitive with task-specific methods. Furthermore, qualitative evaluation shows that MCM not only streamlines the adaptation of methods originally designed for text-to-motion to domains such as music-to-dance and speech-to-gesture, eliminating the need for extensive network reconfiguration, but also enables effective multi-condition modal control, realizing "once trained is motion need".
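The two-branch design described above (a control branch that is a structural copy of the main branch, initialized from its parameters, and injects condition-driven signals into the frozen-structure denoiser) can be illustrated with a short PyTorch sketch. The class names, the additive per-layer residual injection, and the zero-initialized fusion projections below are illustrative assumptions (a ControlNet-style choice), not the authors' released implementation.

```python
# Minimal sketch of a main branch + control branch, assuming a Transformer denoiser
# and zero-initialized residual projections so training starts from the unmodified
# main branch. Names and layer sizes are hypothetical.
import copy
import torch
import torch.nn as nn


class MainBranch(nn.Module):
    """Stand-in for the DDPM-like denoiser over motion frames."""

    def __init__(self, dim: int = 256, layers: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
             for _ in range(layers)]
        )

    def forward(self, x, residuals=None):
        # `residuals` are per-layer signals produced by the control branch.
        for i, block in enumerate(self.blocks):
            if residuals is not None:
                x = x + residuals[i]
            x = block(x)
        return x


class ControlBranch(nn.Module):
    """Copy of the main branch that consumes the extra condition (music, speech, ...)."""

    def __init__(self, main: MainBranch, dim: int = 256):
        super().__init__()
        # Initialize with the main branch's parameters to retain its generative prior.
        self.blocks = copy.deepcopy(main.blocks)
        # Zero-initialized projections: at step 0 the control branch has no effect.
        self.out_proj = nn.ModuleList([nn.Linear(dim, dim) for _ in self.blocks])
        for proj in self.out_proj:
            nn.init.zeros_(proj.weight)
            nn.init.zeros_(proj.bias)

    def forward(self, x, cond):
        residuals = []
        h = x + cond  # fuse the extra condition with the noisy motion latent
        for block, proj in zip(self.blocks, self.out_proj):
            h = block(h)
            residuals.append(proj(h))
        return residuals


# Usage: the main branch denoises as usual; the control branch adds per-layer residuals.
main = MainBranch()
control = ControlBranch(main)
x = torch.randn(2, 60, 256)      # noisy motion latents: (batch, frames, dim)
cond = torch.randn(2, 60, 256)   # e.g. music features projected to the same dim
out = main(x, residuals=control(x, cond))
print(out.shape)  # torch.Size([2, 60, 256])
```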

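The abstract also credits MWNet's channel-dimension self-attention with modeling inter-joint correlations. Below is a hedged sketch of attention taken over the channel axis instead of the time axis; the tensor layout, residual connection, and layer sizes are assumptions for illustration, not the paper's exact module.

```python
# Minimal sketch: treat feature channels as attention tokens so channels (joints /
# motion features) attend to each other across the whole sequence. Assumed design.
import torch
import torch.nn as nn


class ChannelSelfAttention(nn.Module):
    def __init__(self, frames: int, heads: int = 4):
        super().__init__()
        # Tokens are channels; the frame axis plays the role of the embedding.
        self.attn = nn.MultiheadAttention(embed_dim=frames, num_heads=heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(frames)

    def forward(self, x):
        # x: (batch, frames, channels) -> (batch, channels, frames)
        h = x.transpose(1, 2)
        out, _ = self.attn(h, h, h)   # channels attend to each other
        h = self.norm(h + out)        # residual + norm over the frame axis
        return h.transpose(1, 2)      # back to (batch, frames, channels)


x = torch.randn(2, 60, 256)           # (batch, frames, motion feature channels)
y = ChannelSelfAttention(frames=60)(x)
print(y.shape)                        # torch.Size([2, 60, 256])
```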