
PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network

by   Bryan Wang, et al.
Academia Sinica

Music creation typically involves two parts: composing the musical score, and then performing the score with instruments to make sounds. While recent work has made much progress in automatic music generation in the symbolic domain, few attempts have been made to build an AI model that can render realistic music audio from musical scores. Directly synthesizing audio with sound sample libraries often leads to mechanical and deadpan results, since musical scores do not contain performance-level information, such as subtle changes in timing and dynamics. Moreover, while the task may sound like a text-to-speech synthesis problem, there are fundamental differences, since music audio has rich polyphonic sounds. To build such an AI performer, we propose in this paper a deep convolutional model that learns, in an end-to-end manner, the score-to-audio mapping between a symbolic representation of music called the piano roll and an audio representation of music called the spectrogram. The model consists of two subnets: the ContourNet, which uses a U-Net structure to learn the correspondence between piano rolls and spectrograms and to give an initial result; and the TextureNet, which further uses a multi-band residual network to refine the result by adding the spectral texture of overtones and timbre. We train the model to generate music clips of the violin, cello, and flute, with a dataset of moderate size. We also present the result of a user study that shows our model achieves a higher mean opinion score (MOS) in naturalness and emotional expressivity than a WaveNet-based model and two commercial sound libraries. We open our source code at
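To make the input representation concrete, the following minimal sketch (not from the paper; frame rate and note format are our own assumptions for illustration) renders a list of notes into the kind of binary piano-roll matrix that a score-to-audio model such as the one described here would take as input:

```python
import numpy as np

def notes_to_pianoroll(notes, fps=100, n_pitches=128):
    """Render (midi_pitch, onset_sec, offset_sec) notes into a binary piano roll.

    Rows are MIDI pitches, columns are time frames; a cell is 1.0 while
    the note sounds. A score-to-audio model then learns to map this
    symbolic matrix to a spectrogram of the same number of time frames.
    """
    end = max(off for _, _, off in notes)
    n_frames = int(np.ceil(end * fps))
    roll = np.zeros((n_pitches, n_frames), dtype=np.float32)
    for pitch, on, off in notes:
        roll[pitch, int(on * fps):int(off * fps)] = 1.0
    return roll

# Two overlapping violin notes: A4 (MIDI 69) and E5 (MIDI 76).
roll = notes_to_pianoroll([(69, 0.0, 1.0), (76, 0.5, 1.5)])
print(roll.shape)  # (128, 150)
```

Note that such a roll encodes only pitch and timing, not dynamics or articulation, which is exactly the performance-level information the model must learn to supply.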


