High-resolution Piano Transcription with Pedals by Regressing Onset and Offset Times

10/05/2020
by   Qiuqiang Kong, et al.

Automatic music transcription (AMT) is the task of transcribing audio recordings into symbolic representations such as Musical Instrument Digital Interface (MIDI) files. Recently, neural network-based methods have been applied to AMT and have achieved state-of-the-art results. However, most previous AMT systems predict the presence or absence of notes in the frames of audio recordings, so their transcription resolution is limited to the hop time between adjacent frames. In addition, previous AMT systems are sensitive to misaligned onset and offset labels in audio recordings. For high-resolution evaluation, previous works have not investigated AMT systems evaluated with different onset and offset tolerances. For piano transcription, there is a lack of research on building AMT systems that transcribe both notes and pedals. In this article, we propose a high-resolution AMT system trained by regressing the precise times of onsets and offsets. At inference, we propose an algorithm to analytically calculate the precise onset and offset times of note and pedal events. We build both note and pedal transcription systems with our high-resolution AMT system. We show that our AMT system is robust to misaligned onset and offset labels compared with previous systems. Our proposed system achieves an onset F1 of 96.72% on the MAESTRO dataset, outperforming the onsets and frames system from Google, which achieves 94.80%. Our system achieves a pedal onset F1 score of 91.86%, the first benchmark result on the MAESTRO dataset. We release the source code of our work at https://github.com/bytedance/piano_transcription.
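The regress-then-refine idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `triangle_target` and `refine_onset` are hypothetical helper names, and the triangular regression target plus line-intersection apex refinement are one plausible reading of how regressed frame values can be turned into sub-frame onset times (see the released source code for the actual method).

```python
import numpy as np

def triangle_target(frame_times, onset_time, width):
    """Hypothetical regression target: decays linearly from 1 at the
    precise onset time to 0 at +/- width seconds, instead of a hard
    0/1 frame label."""
    return np.maximum(0.0, 1.0 - np.abs(frame_times - onset_time) / width)

def refine_onset(values, peak, hop_time):
    """Given regressed frame values and a local-maximum frame index
    `peak`, estimate the sub-frame onset time analytically: the three
    samples around the peak lie on the two sides of a triangle, so
    intersecting those two lines recovers the apex position."""
    a, b, c = values[peak - 1], values[peak], values[peak + 1]
    if c > a:      # apex lies to the right of the peak frame
        shift = (c - a) / (2.0 * (b - a)) if b != a else 0.0
    else:          # apex lies to the left (or exactly on) the peak frame
        shift = (c - a) / (2.0 * (b - c)) if b != c else 0.0
    return (peak + shift) * hop_time
```

For example, with a 10 ms hop and regressed values `[0.61, 0.91, 0.79]` around frame 1, the refined onset lands at 13 ms rather than snapping to the 10 ms frame grid, which is what allows evaluation at tolerances finer than the hop size.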


