MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment

09/19/2017
by Hao-Wen Dong, et al.

Generating music differs from generating images and videos in a few notable ways. First, music is an art of time, necessitating a temporal model. Second, music is usually composed of multiple instruments/tracks, each with its own temporal dynamics, yet collectively they unfold over time interdependently. Lastly, in polyphonic music, notes are often grouped into chords, arpeggios or melodies, so imposing a chronological ordering of notes is not naturally suitable. In this paper, we propose three models for symbolic multi-track music generation under the framework of generative adversarial networks (GANs). The three models, which differ in their underlying assumptions and accordingly in their network architectures, are referred to as the jamming model, the composer model and the hybrid model. We trained the proposed models on a dataset of over one hundred thousand bars of rock music and applied them to generate piano-rolls of five tracks: bass, drums, guitar, piano and strings. A few intra-track and inter-track objective metrics are also proposed to evaluate the generative results, in addition to a subjective user study. We show that our models can generate coherent music of four bars from scratch (i.e., without human input). We also extend our models to human-AI cooperative music generation: given a track composed by a human, we can generate four additional tracks to accompany it. All code, the dataset and the rendered audio samples are available at https://salu133445.github.io/musegan/ .
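To make the distinction between the generator configurations more concrete, the following is a minimal, hypothetical sketch (not the authors' implementation, which is available at the link above) of how the jamming and composer models differ in wiring: the jamming model uses one generator and one latent vector per track, whereas the composer model uses a single generator driven by a shared latent vector for all tracks. The tensor dimensions (4 bars, 96 time steps per bar, 84 pitches, 5 tracks) follow the paper's setting; the tiny fully connected network is an assumption made purely for brevity.

# Hypothetical sketch, not the paper's code: contrasts the "jamming" and
# "composer" generator setups on a toy multi-track piano-roll tensor of
# shape (batch, n_bars, n_timesteps, n_pitches, n_tracks).
import torch
import torch.nn as nn

N_BARS, N_STEPS, N_PITCHES, N_TRACKS = 4, 96, 84, 5
Z_DIM = 128

class ToyGenerator(nn.Module):
    """A deliberately small stand-in for the paper's convolutional generator."""
    def __init__(self, z_dim, n_tracks_out):
        super().__init__()
        self.n_tracks_out = n_tracks_out
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256),
            nn.ReLU(),
            nn.Linear(256, N_BARS * N_STEPS * N_PITCHES * n_tracks_out),
            nn.Sigmoid(),  # per-cell note-on probabilities
        )

    def forward(self, z):
        out = self.net(z)
        return out.view(-1, N_BARS, N_STEPS, N_PITCHES, self.n_tracks_out)

# Jamming model: one generator per track, each driven by its own latent
# vector, so the tracks are generated independently of one another.
jamming_generators = [ToyGenerator(Z_DIM, 1) for _ in range(N_TRACKS)]
z_per_track = [torch.randn(1, Z_DIM) for _ in range(N_TRACKS)]
jam_rolls = torch.cat([g(z) for g, z in zip(jamming_generators, z_per_track)], dim=-1)

# Composer model: a single generator with one shared latent vector produces
# all tracks jointly, so inter-track dependencies are modeled in one network.
composer_generator = ToyGenerator(Z_DIM, N_TRACKS)
z_shared = torch.randn(1, Z_DIM)
composer_rolls = composer_generator(z_shared)

print(jam_rolls.shape, composer_rolls.shape)  # both: (1, 4, 96, 84, 5)

In the full paper the generators are convolutional and generate music bar by bar, and the hybrid model combines the two ideas above by feeding each per-track generator both a shared (inter-track) and a track-specific (intra-track) latent vector.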
