Song From PI: A Musically Plausible Network for Pop Music Generation

11/10/2016
by Hang Chu, et al.

We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show a strong preference for our generated music over that produced by a recent method from Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.
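To make the layered architecture concrete, below is a minimal PyTorch sketch of the hierarchy idea described in the abstract: a bottom recurrent layer models the melody, and higher recurrent layers produce chord and drum tracks conditioned on the melody representation. The class name, vocabulary sizes, hidden dimensions, and the exact conditioning scheme are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class HierarchicalSongRNN(nn.Module):
    """Rough sketch of a two-level hierarchical RNN for pop music.

    The bottom LSTM models the melody; its hidden states are passed
    upward so that higher LSTMs generate chords and drums conditioned
    on the melody, loosely mirroring the composition prior described
    in the abstract. All sizes and vocabularies are placeholders.
    """

    def __init__(self, note_vocab=128, chord_vocab=24, drum_vocab=32, hidden=256):
        super().__init__()
        self.note_emb = nn.Embedding(note_vocab, hidden)
        self.melody_rnn = nn.LSTM(hidden, hidden, batch_first=True)  # bottom layer: melody
        self.chord_rnn = nn.LSTM(hidden, hidden, batch_first=True)   # higher layer: chords
        self.drum_rnn = nn.LSTM(hidden, hidden, batch_first=True)    # higher layer: drums
        self.note_head = nn.Linear(hidden, note_vocab)
        self.chord_head = nn.Linear(hidden, chord_vocab)
        self.drum_head = nn.Linear(hidden, drum_vocab)

    def forward(self, notes):
        # notes: (batch, time) integer note tokens
        x = self.note_emb(notes)
        melody_feats, _ = self.melody_rnn(x)
        # Higher levels consume the melody representation rather than raw input,
        # encoding the prior that drums and chords follow the melody.
        chord_feats, _ = self.chord_rnn(melody_feats)
        drum_feats, _ = self.drum_rnn(melody_feats)
        return (self.note_head(melody_feats),
                self.chord_head(chord_feats),
                self.drum_head(drum_feats))

if __name__ == "__main__":
    model = HierarchicalSongRNN()
    dummy_notes = torch.randint(0, 128, (2, 16))  # batch of 2 sequences, 16 time steps
    note_logits, chord_logits, drum_logits = model(dummy_notes)
    print(note_logits.shape, chord_logits.shape, drum_logits.shape)

In the paper's framing, each output stream would be sampled per time step to produce the melody, chord, and drum tracks of a song; the sketch above only shows how the hierarchy could be wired, not how sampling or training is done.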


Related research

07/13/2021  Dance2Music: Automatic Dance-driven Music Generation
10/06/2017  Generating Nontrivial Melodies for Music as a Service
09/02/2021  Controllable deep melody generation via hierarchical music structure representation
09/01/2022  What is missing in deep music generation? A study of repetition and structure in popular music
11/22/2022  On Narrative Information and the Distillation of Stories
07/15/2017  Lyrics-Based Music Genre Classification Using a Hierarchical Attention Network
06/21/2020  Feel The Music: Automatically Generating A Dance For An Input Song
