
A-Muze-Net: Music Generation by Composing the Harmony based on the Generated Melody

by Or Goren, et al.

We present a method for generating MIDI files of piano music. The method models the right and left hands with two networks, where the left hand is conditioned on the right hand; this way, the melody is generated before the harmony. The MIDI is represented in a way that is invariant to the musical scale, and the melody is represented, for the purpose of conditioning the harmony, by the content of each bar, viewed as a chord. Finally, notes are added randomly, based on this chord representation, in order to enrich the generated audio. Our experiments show a significant improvement over the state of the art for training on such datasets, and demonstrate the contribution of each of the novel components.
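The bar-as-chord conditioning and random note enrichment described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the function names, the choice of the top three pitch classes as the "chord", and the fixed octave for added notes are all assumptions.

```python
import random

def bar_to_chord(bar_pitches, top_k=3):
    """Summarize a bar of melody as a chord: the top_k most frequent
    pitch classes (assumed scale-invariant after transposing to a
    common key, as in the paper's MIDI representation)."""
    counts = {}
    for p in bar_pitches:
        pc = p % 12  # collapse MIDI pitch to pitch class
        counts[pc] = counts.get(pc, 0) + 1
    # sort pitch classes by descending frequency (ties broken by pitch class)
    return sorted(sorted(counts), key=lambda pc: -counts[pc])[:top_k]

def enrich_with_chord_notes(bar_pitches, chord, n_extra=2, octave=48, seed=0):
    """Randomly add n_extra notes drawn from the bar's chord
    representation, to enrich the generated accompaniment."""
    rng = random.Random(seed)
    extras = [octave + rng.choice(chord) for _ in range(n_extra)]
    return list(bar_pitches) + extras
```

For example, a bar containing the MIDI pitches `[60, 64, 67, 60]` (C, E, G, C) yields the chord `[0, 4, 7]` (a C-major triad as pitch classes), and the enrichment step appends notes whose pitch classes come from that triad.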


Dual-track Music Generation using Deep Learning

Music generation is always interesting in a sense that there is no forma...

Multi-Instrumentalist Net: Unsupervised Generation of Music from Body Movements

We propose a novel system that takes as an input body movements of a mus...

Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset

Generating musical audio directly with neural networks is notoriously di...

SingSong: Generating musical accompaniments from singing

We present SingSong, a system that generates instrumental music to accom...

WuYun: Exploring hierarchical skeleton-guided melody generation using knowledge-enhanced deep learning

Although deep learning has revolutionized music generation, existing met...

LoopNet: Musical Loop Synthesis Conditioned On Intuitive Musical Parameters

Loops, seamlessly repeatable musical segments, are a cornerstone of mode...

EDGE: Editable Dance Generation From Music

Dance is an important human art form, but creating new dances can be dif...