Song From PI: A Musically Plausible Network for Pop Music Generation

by Hang Chu, et al.

We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show a strong preference for our generated music over that produced by the recent method by Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.
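The abstract describes a hierarchy in which a bottom layer generates melody and higher layers produce drums and chords conditioned on it. The following is a minimal, untrained sketch of that layering idea, not the paper's actual architecture: `rnn_layer` and its conditioning scheme are hypothetical stand-ins, using plain NumPy vanilla-RNN cells with random weights to show how the melody layer's hidden states could feed the drum and chord layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_layer(inputs, hidden_size, cond=None):
    """A vanilla RNN layer, optionally conditioned on another layer's
    hidden states (hypothetical stand-in for the paper's hierarchy)."""
    T, in_dim = inputs.shape
    Wx = rng.normal(0, 0.1, (in_dim, hidden_size))
    Wh = rng.normal(0, 0.1, (hidden_size, hidden_size))
    Wc = None if cond is None else rng.normal(0, 0.1, (cond.shape[1], hidden_size))
    h, states = np.zeros(hidden_size), []
    for t in range(T):
        pre = inputs[t] @ Wx + h @ Wh
        if cond is not None:
            pre = pre + cond[t] @ Wc   # condition on the lower layer's state
        h = np.tanh(pre)
        states.append(h)
    return np.stack(states)

T, note_dim, H = 16, 8, 32
melody_in = rng.normal(size=(T, note_dim))

melody_h = rnn_layer(melody_in, H)                  # bottom layer: melody
drum_h = rnn_layer(melody_in, H, cond=melody_h)     # higher layer: drums
chord_h = rnn_layer(melody_in, H, cond=melody_h)    # higher layer: chords
```

Each higher layer receives the melody layer's hidden state at every timestep, which is one simple way to realize the "melody first, then drums and chords" ordering the abstract describes.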




Dance2Music: Automatic Dance-driven Music Generation

Dance and music typically go hand in hand. The complexities in dance, mu...

Generating Nontrivial Melodies for Music as a Service

We present a hybrid neural network and rule-based system that generates ...

Controllable deep melody generation via hierarchical music structure representation

Recent advances in deep learning have expanded possibilities to generate...

Feel The Music: Automatically Generating A Dance For An Input Song

We present a general computational approach that enables a machine to ge...

Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

Convolutional neural networks (CNNs) have been successfully applied on b...

Lyrics-Based Music Genre Classification Using a Hierarchical Attention Network

Music genre classification, especially using lyrics alone, remains a cha...

Generating Tips from Song Reviews: A New Dataset and Framework

Reviews of songs play an important role in online music service platform...