
Learn to Dance with AIST++: Music Conditioned 3D Dance Generation

01/21/2021
by Ruilong Li, et al.

In this paper, we present a transformer-based learning framework for 3D dance generation conditioned on music. We carefully design our network architecture and empirically study the key factors for obtaining qualitatively pleasing results. The critical components are a deep cross-modal transformer, which effectively learns the correlation between music and dance motion, and a full-attention mechanism with future-N supervision, which is essential for producing long-range, non-freezing motion. In addition, we propose AIST++, a new dataset of paired 3D motion and music that we reconstruct from the AIST multi-view dance videos. The dataset contains 1.1M frames of 3D dance motion in 1,408 sequences, covering 10 genres of dance choreography and accompanied by multi-view camera parameters. To our knowledge, it is the largest dataset of its kind. Extensive experiments on AIST++ demonstrate that our method produces significantly better results than state-of-the-art methods, both qualitatively and quantitatively.
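To make the described architecture concrete, below is a minimal PyTorch sketch of a cross-modal transformer that fuses music and motion tokens with full (non-causal) attention and is trained with future-N supervision, i.e., predicting the next N motion frames from a motion seed and the music. This is not the authors' code; the class name, feature dimensions, layer counts, and N are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation) of a
# cross-modal transformer with full attention and future-N supervision.
import torch
import torch.nn as nn


class CrossModalDanceTransformer(nn.Module):
    def __init__(self, motion_dim=219, music_dim=35, d_model=512,
                 n_heads=8, n_layers=6, future_n=20):
        super().__init__()
        self.future_n = future_n
        self.motion_dim = motion_dim
        self.motion_embed = nn.Linear(motion_dim, d_model)
        self.music_embed = nn.Linear(music_dim, d_model)
        # Learned positional embeddings for a fixed maximum token length.
        self.pos_embed = nn.Parameter(torch.zeros(1, 1024, d_model))
        # Full-attention encoder over the concatenated music + motion tokens
        # (no causal mask), following the abstract's "full-attention" idea.
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=2048,
                                           batch_first=True)
        self.cross_modal = nn.TransformerEncoder(layer, n_layers)
        # Predict N future motion frames in one shot from the last token.
        self.head = nn.Linear(d_model, future_n * motion_dim)

    def forward(self, music, motion_seed):
        # music:       (B, T_music, music_dim)
        # motion_seed: (B, T_motion, motion_dim)
        tokens = torch.cat([self.music_embed(music),
                            self.motion_embed(motion_seed)], dim=1)
        tokens = tokens + self.pos_embed[:, :tokens.size(1)]
        fused = self.cross_modal(tokens)
        # Future-N supervision: one forward pass predicts the next N frames.
        out = self.head(fused[:, -1])
        return out.view(-1, self.future_n, self.motion_dim)


# Toy usage: predict 20 future frames from 240 music frames and 120 seed frames.
model = CrossModalDanceTransformer()
music = torch.randn(2, 240, 35)
seed = torch.randn(2, 120, 219)
pred = model(music, seed)             # (2, 20, 219)
target = torch.randn(2, 20, 219)      # ground-truth future frames
loss = nn.functional.mse_loss(pred, target)
```

Supervising N future frames at once, rather than only the next frame, is the kind of mechanism the abstract credits with avoiding frozen or drifting long-range motion; at inference time the model would be applied autoregressively on its own predictions.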

