MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation

03/31/2017
by Li-Chia Yang, et al.

Most existing neural network models for music generation use recurrent neural networks. However, the recent WaveNet model proposed by DeepMind shows that convolutional neural networks (CNNs) can also generate realistic musical waveforms in the audio domain. Following this line of work, we investigate using CNNs to generate melodies (series of MIDI notes) one bar after another in the symbolic domain. In addition to the generator, we use a discriminator to learn the distribution of melodies, making the model a generative adversarial network (GAN). Moreover, we propose a novel conditional mechanism to exploit available prior knowledge, so that the model can generate melodies either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars (e.g., a priming melody), among other possibilities. The resulting model, named MidiNet, can be expanded to generate music with multiple MIDI channels (i.e., tracks). We conduct a user study comparing eight-bar melodies generated by MidiNet and by Google's MelodyRNN models, each time using the same priming melody. Results show that MidiNet performs comparably to the MelodyRNN models in being realistic and pleasant to listen to, yet MidiNet's melodies are reported to be much more interesting.
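
To make the bar-by-bar conditional generation concrete, below is a minimal PyTorch sketch of a generator that maps random noise plus a chord condition to one bar of piano-roll melody, with a small conditioner CNN injecting the previous bar as a 2-D condition. The piano-roll dimensions, layer sizes, and the 13-dimensional chord encoding are illustrative assumptions, not the exact architecture reported in the paper.

```python
# Illustrative sketch of a bar-wise conditional GAN generator in the spirit
# of MidiNet. Tensor shapes and layer sizes are assumptions for clarity,
# not the paper's exact architecture.
import torch
import torch.nn as nn

PITCHES, STEPS = 128, 16          # assumed piano-roll size for one bar
NOISE_DIM, CHORD_DIM = 100, 13    # assumed latent and chord-condition sizes


class Generator(nn.Module):
    """Generates one bar of melody (a PITCHES x STEPS piano roll) from
    random noise, a chord condition, and the previous bar (2-D condition)."""

    def __init__(self):
        super().__init__()
        # Conditioner CNN: encodes the previous bar into feature maps that
        # are concatenated with the generator's intermediate activations.
        self.cond = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(PITCHES, 1)),   # -> (B, 16, 1, STEPS)
            nn.LeakyReLU(0.2),
        )
        self.fc = nn.Linear(NOISE_DIM + CHORD_DIM, 128 * STEPS)
        # Transposed convolutions upsample along the pitch axis.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128 + 16, 64, kernel_size=(PITCHES // 2, 1)),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=(PITCHES // 2 + 1, 1)),
            nn.Sigmoid(),                                  # note activations in [0, 1]
        )

    def forward(self, z, chord, prev_bar):
        # z: (B, NOISE_DIM), chord: (B, CHORD_DIM), prev_bar: (B, 1, PITCHES, STEPS)
        cond_feat = self.cond(prev_bar)                    # (B, 16, 1, STEPS)
        h = self.fc(torch.cat([z, chord], dim=1))
        h = h.view(-1, 128, 1, STEPS)                      # (B, 128, 1, STEPS)
        h = torch.cat([h, cond_feat], dim=1)               # inject 2-D condition
        return self.deconv(h)                              # (B, 1, PITCHES, STEPS)


if __name__ == "__main__":
    g = Generator()
    z = torch.randn(2, NOISE_DIM)
    chord = torch.zeros(2, CHORD_DIM)
    prev = torch.zeros(2, 1, PITCHES, STEPS)
    print(g(z, chord, prev).shape)   # torch.Size([2, 1, 128, 16])
```

At generation time, the sketch would be applied autoregressively at the bar level: the bar produced for step t is fed back as prev_bar for step t+1, which is how a priming melody or chord sequence can steer the output. A discriminator (not shown) would receive real and generated bars, optionally with the same conditions, to complete the GAN training loop.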
