
Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation

by   Yu-Hua Chen, et al.
Academia Sinica

In this paper we present PerformanceNet, a neural network model we recently proposed for score-to-audio music generation. The model learns to convert a music piece from the symbolic domain to the audio domain: it automatically assigns performance-level attributes, such as changes in velocity, to the music, and then synthesizes the audio. The model is therefore not just a neural audio synthesizer, but an AI performer that learns to interpret a musical score in its own way. The code and sample outputs of the model can be found online at
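The two-stage pipeline described above can be illustrated with a toy end-to-end example: a binary piano roll is given per-note velocities (the "performance" step) and then rendered to a waveform (the "synthesis" step). This is only an illustrative stand-in, not the actual PerformanceNet architecture, which learns both steps with convolutional networks; the sample rate, frame duration, and all function names below are assumptions for the sketch.

```python
import numpy as np

SR = 16000      # sample rate in Hz (assumed for this sketch)
FRAME = 0.125   # duration of one piano-roll time step in seconds (assumed)

def midi_to_hz(note):
    """Convert a MIDI note number to frequency in Hz."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def assign_velocities(roll, rng):
    """'Performance' step: turn a binary piano roll (time x 128 pitches)
    into per-note velocities in (0, 1]. PerformanceNet learns this mapping
    from data; here we add random expressive variation as a placeholder."""
    return roll * (0.6 + 0.4 * rng.random(roll.shape))

def synthesize(vel_roll):
    """'Synthesis' step: render the velocity roll to audio with plain sine
    oscillators, a trivial stand-in for the model's neural synthesis stage."""
    n_samples = int(vel_roll.shape[0] * FRAME * SR)
    audio = np.zeros(n_samples)
    t = np.arange(int(FRAME * SR)) / SR
    for step in range(vel_roll.shape[0]):
        start = int(step * FRAME * SR)
        for pitch in np.nonzero(vel_roll[step])[0]:
            tone = vel_roll[step, pitch] * np.sin(2 * np.pi * midi_to_hz(pitch) * t)
            audio[start:start + tone.size] += tone
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio

rng = np.random.default_rng(0)
roll = np.zeros((8, 128))   # 8 time steps, 128 MIDI pitches
roll[0:4, 60] = 1           # middle C held for four steps
roll[4:8, 64] = 1           # followed by E4
audio = synthesize(assign_velocities(roll, rng))
print(audio.shape)          # one second of audio at 16 kHz
```

The point of the split is that the interesting, learned behavior lives in the first step: replacing `assign_velocities` with a model that conditions on musical context is what turns a synthesizer into a "performer."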



