Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation

05/28/2019
by Yu-Hua Chen, et al.

We present in this paper PerformanceNet, a neural network model we recently proposed for score-to-audio music generation. The model learns to convert a piece of music from the symbolic domain to the audio domain, automatically assigning performance-level attributes such as changes in velocity and then synthesizing the audio. The model is therefore not just a neural audio synthesizer, but an AI performer that learns to interpret a musical score in its own way. The code and sample outputs of the model are available online at https://github.com/bwang514/PerformanceNet.
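To make the score-to-audio task concrete, the sketch below renders a symbolic piano roll (pitch-by-frame matrix) directly to a waveform with naive additive sine synthesis. This is only an illustrative stand-in for the mapping that PerformanceNet *learns*; the function name, frame length, and sample rate are assumptions for the example, not part of the model.

```python
import numpy as np

def pianoroll_to_audio(roll, frame_sec=0.25, sr=8000):
    """Render a binary piano roll (128 pitches x frames) to mono audio
    with simple additive sine synthesis. Toy illustration of the
    symbolic-to-audio mapping; NOT the PerformanceNet model itself."""
    n_pitches, n_frames = roll.shape
    samples_per_frame = int(frame_sec * sr)
    audio = np.zeros(n_frames * samples_per_frame)
    t = np.arange(samples_per_frame) / sr
    for f in range(n_frames):
        seg = slice(f * samples_per_frame, (f + 1) * samples_per_frame)
        for p in np.flatnonzero(roll[:, f]):
            freq = 440.0 * 2 ** ((p - 69) / 12)  # MIDI pitch -> Hz
            audio[seg] += 0.2 * np.sin(2 * np.pi * freq * t)
    return audio

# Two frames: A4 (MIDI 69) alone, then an A4 + C#5 dyad.
roll = np.zeros((128, 2))
roll[69, 0] = 1
roll[[69, 73], 1] = 1
audio = pianoroll_to_audio(roll)
```

A learned model replaces the fixed synthesis rule with a network that also decides *how* to play each note (e.g. its velocity), which is what distinguishes an AI performer from a plain synthesizer.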


Related research:

- 11/11/2018 — PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network — "Music creation is typically composed of two parts: composing the musical..."
- 08/01/2020 — Score-informed Networks for Music Performance Assessment — "The assessment of music performances in most cases takes into account th..."
- 11/26/2020 — Towards Movement Generation with Audio Features — "Sound and movement are closely coupled, particularly in dance. Certain a..."
- 08/16/2018 — Genre-Agnostic Key Classification With Convolutional Neural Networks — "We propose modifications to the model structure and training procedure t..."
- 09/30/2020 — Generation of lyrics lines conditioned on music audio clips — "We present a system for generating novel lyrics lines conditioned on mus..."
- 02/17/2023 — jazznet: A Dataset of Fundamental Piano Patterns for Music Audio Machine Learning Research — "This paper introduces the jazznet Dataset, a dataset of fundamental jazz..."
- 12/30/2021 — Audio-to-symbolic Arrangement via Cross-modal Music Representation Learning — "Could we automatically derive the score of a piano accompaniment based o..."
