Dance Revolution: Long Sequence Dance Generation with Music via Curriculum Learning

06/11/2020
by   Ruozi Huang, et al.

Dancing to music has been one of humans' innate abilities since ancient times. In artificial intelligence research, however, synthesizing dance movements (complex human motion) from music remains a challenging problem, owing to the high spatial-temporal complexity of human motion dynamics. In addition, the consistency between dance and music in terms of style, rhythm and beat must also be taken into account. Existing works focus on short-term dance generation with music, e.g. less than 30 seconds. In this paper, we propose a novel seq2seq architecture for long sequence dance generation with music, which consists of a transformer-based music encoder and a recurrent dance decoder. By restricting the receptive field of self-attention, our encoder can efficiently process long musical sequences, reducing memory requirements from quadratic to linear in the sequence length. To further alleviate error accumulation in human motion synthesis, we introduce a dynamic auto-condition training strategy as a new curriculum learning method to facilitate long-term dance generation. Extensive experiments demonstrate that our proposed approach significantly outperforms existing methods on both automatic metrics and human evaluation. Additionally, a demo video shows that our approach can generate minute-length dance sequences that are smooth, natural-looking, diverse, style-consistent and beat-matching with the music. The demo video is available at https://www.youtube.com/watch?v=P6yhfv3vpDI.
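The two ideas in the abstract, a restricted self-attention receptive field and a dynamic auto-condition curriculum, can be sketched briefly. The snippet below is a minimal illustration only; the function names, window size, and schedule parameters are assumptions for exposition, not the paper's exact implementation.

```python
import numpy as np

def local_attention_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean band mask: position i may attend only to positions j with
    |i - j| <= window. Each row then holds at most 2*window + 1 active
    entries, so attention memory grows linearly in seq_len instead of
    quadratically as with full self-attention."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

def auto_condition_span(epoch: int, base: int = 10, growth: int = 1) -> int:
    """Hypothetical dynamic auto-condition schedule: the number of
    consecutive decoder steps fed with the model's own predictions
    (rather than ground-truth poses) grows as training progresses,
    easing the decoder into long-horizon generation."""
    return base + growth * epoch
```

For example, `local_attention_mask(6, 2)` lets each of the six query positions attend to at most five keys, regardless of how long the sequence grows.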


Related research:

- Dance Style Transfer with Cross-modal Transformer (08/19/2022). We present CycleDance, a dance style transfer system to transform an exi...
- MAGMA: Music Aligned Generative Motion Autodecoder (09/03/2023). Mapping music to dance is a challenging problem that requires spatial an...
- Robust Dancer: Long-term 3D Dance Synthesis Using Unpaired Data (03/29/2023). How to automatically synthesize natural-looking dance movements based on...
- Dancing to Music (11/05/2019). Dancing to music is an instinctive move by humans. Learning to model the...
- GTN-Bailando: Genre Consistent Long-Term 3D Dance Generation based on Pre-trained Genre Token Network (04/25/2023). Music-driven 3D dance generation has become an intensive research topic ...
- MT-VAE: Learning Motion Transformations to Generate Multimodal Human Dynamics (08/14/2018). Long-term human motion can be represented as a series of motion modes...
- Learning Human Motion Models for Long-term Predictions (04/10/2017). We propose a new architecture for the learning of predictive spatio-temp...
