Dual Learning Music Composition and Dance Choreography

01/28/2022
by Shuang Wu, et al.

Music and dance have always co-existed as pillars of human activity, contributing immensely to the cultural, social, and entertainment functions of virtually all societies. Notwithstanding the gradual systematization of music and dance into two independent disciplines, their intimate connection is undeniable, and one art form often appears incomplete without the other. Recent research has studied generative models for dance sequences conditioned on music. The dual task of composing music for given dances, however, has been largely overlooked. In this paper, we propose a novel extension in which we jointly model both tasks in a dual learning approach. To leverage the duality of the two modalities, we introduce an optimal transport objective to align feature embeddings, as well as a cycle consistency loss to foster overall consistency. Experimental results demonstrate that our dual learning framework improves individual task performance, delivering generated music compositions and dance choreographies that are realistic and faithful to the conditioned inputs.
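The abstract mentions two training signals: an optimal transport objective that aligns music and dance feature embeddings, and a cycle consistency loss across the dual music-to-dance and dance-to-music models. Below is a minimal PyTorch sketch of what such losses could look like; it is not the authors' implementation, and the function and model names (`sinkhorn_ot`, `music_to_dance`, `dance_to_music`) are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of an entropic OT alignment loss
# plus a cycle-consistency loss for dual music<->dance models.
import math
import torch
import torch.nn.functional as F

def sinkhorn_ot(cost, n_iters=50, eps=0.1):
    """Entropic OT cost for a pairwise cost matrix, via log-domain Sinkhorn."""
    n, m = cost.shape
    log_mu = torch.full((n,), -math.log(n))   # uniform source marginal
    log_nu = torch.full((m,), -math.log(m))   # uniform target marginal
    K = -cost / eps                           # log-kernel
    u = torch.zeros(n)
    v = torch.zeros(m)
    for _ in range(n_iters):                  # Sinkhorn scaling iterations
        u = log_mu - torch.logsumexp(K + v[None, :], dim=1)
        v = log_nu - torch.logsumexp(K + u[:, None], dim=0)
    plan = torch.exp(K + u[:, None] + v[None, :])   # transport plan
    return (plan * cost).sum()

def ot_alignment_loss(music_emb, dance_emb):
    """Align music and dance embeddings by minimizing their OT distance."""
    cost = torch.cdist(music_emb, dance_emb, p=2)   # pairwise L2 costs
    return sinkhorn_ot(cost)

def cycle_consistency_loss(music_feats, music_to_dance, dance_to_music):
    """music -> dance -> music should reconstruct the original music features."""
    reconstructed = dance_to_music(music_to_dance(music_feats))
    return F.l1_loss(reconstructed, music_feats)
```

In a dual learning setup, the two task losses (music-to-dance and dance-to-music generation) would typically be summed with weighted versions of these alignment and cycle terms; the exact weighting and feature extractors are design choices left to the full paper.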
