Learning to Fuse Music Genres with Generative Adversarial Dual Learning

by Zhiqian Chen et al.

FusionGAN is a novel genre-fusion framework for music generation that combines the strengths of generative adversarial networks and dual learning. In particular, the proposed method offers a dual-learning extension that can effectively integrate the styles of the given domains. To efficiently quantify the difference among diverse domains and avoid the vanishing-gradient issue, FusionGAN uses a Wasserstein-based metric to approximate the distance between the target domain and the existing domains. Guided by this Wasserstein distance, a new domain is created by combining the patterns of the existing domains through adversarial learning. Experimental results on public music datasets demonstrate that the approach can effectively merge two genres.
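The Wasserstein-based critic objective mentioned in the abstract can be sketched as follows. This is a hedged illustration of the standard WGAN critic loss (not the paper's actual implementation): the critic estimates the Wasserstein-1 distance as the gap between its mean scores on real and generated samples, with weight clipping as one simple way to keep the critic approximately Lipschitz. The linear critic and the function names here are hypothetical stand-ins for a neural network.

```python
import numpy as np

def critic_scores(x, w, b):
    """Linear critic f(x) = x @ w + b (a toy stand-in for a neural critic)."""
    return x @ w + b

def wasserstein_critic_loss(real, fake, w, b):
    """Negative Wasserstein estimate: -(E[f(real)] - E[f(fake)]).

    Minimizing this trains the critic to maximize the score gap between
    real and generated samples, which approximates W(P_real, P_fake).
    """
    return -(critic_scores(real, w, b).mean()
             - critic_scores(fake, w, b).mean())

def clip_weights(w, c=0.01):
    """Weight clipping bounds the critic's weights, a crude Lipschitz constraint."""
    return np.clip(w, -c, c)
```

Because this objective stays informative even when the two distributions barely overlap, its gradients do not vanish the way a saturated Jensen-Shannon-based discriminator's do, which is the motivation the abstract gives for adopting it.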




C-RNN-GAN: Continuous recurrent neural networks with adversarial training

Generative adversarial networks have been proposed as a way of efficient...

Adversarial Learning for Zero-shot Domain Adaptation

Zero-shot domain adaptation (ZSDA) is a category of domain adaptation pr...

Generative Deep Learning for Virtuosic Classical Music: Generative Adversarial Networks as Renowned Composers

Current AI-generated music lacks fundamental principles of good composit...

Learning to Repair Software Vulnerabilities with Generative Adversarial Networks

Motivated by the problem of automated repair of software vulnerabilities...

Dual Learning Music Composition and Dance Choreography

Music and dance have always co-existed as pillars of human activities, c...

Resembled Generative Adversarial Networks: Two Domains with Similar Attributes

We propose a novel algorithm, namely Resembled Generative Adversarial Ne...

Code Repositories


Code for the paper "Learning to Fuse Music Genres with Generative Adversarial Dual Learning" (ICDM 2017)

